Forallx (Adelaide) 2022
forall𝓍
ADELAIDE
Antony Eagle
University of Adelaide
Tim Button
University College London
P.D. Magnus
University at Albany, State University of New York
© 2005–2022 by Antony Eagle, Tim Button and P.D. Magnus. Some rights reserved.
This version of forall𝓍 ADELAIDE is current as of 20 January 2022.
This book is a derivative work created by Antony Eagle, based upon Tim Button’s 2016 Cambridge version of P.D. Magnus’s forall𝓍 (version 1.29). There are pervasive substantive changes in content, theoretical approach, coverage, and appearance. (For one thing, it’s more than twice as long.)
You can find the most up to date version of forall𝓍 ADELAIDE at github.com/antonyeagle/forallx-adl.
This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
Typesetting was carried out entirely in XeLaTeX using the Memoir class. The body text is set in Constantia; math is set in Cambria Math; sans serif in Calibri; and monospaced text in Consolas. The style for typesetting proofs is based on fitch.sty (v0.4) by Peter Selinger, University of Ottawa.
Contents

1 Key Notions
    1 Arguments
    2 Valid Arguments
    3 Other Logical Notions

3 Truth Tables
    8 Truth‐Functional Connectives
    9 Complete Truth Tables
    10 Semantic Concepts
    11 Entailment and Validity
    12 Truth Table Shortcuts
    13 Partial Truth Tables
    14 Expressiveness of Sentential

5 Interpretations
    21 Extensionality
    22 Truth in Quantifier
    23 Semantic Concepts
    24 Demonstrating Consistency and Invalidity
    25 Reasoning about All Interpretations

Appendices
This book has been designed for use in conjunction with the University of Adelaide
courses PHIL 1110 Introduction to Logic and PHIL 1111OL Introductory Logic. But it is suit‐
able for self‐study as well. I have included a number of features to assist your learning.
› forall𝓍 is divided into seven chapters, each further divided into sections and
subsections. The sections are continuously numbered.
› Figure 1 shows how the sections depend on one another. For example, the arrows
coming from §21 in the diagram show that understanding that section requires
familiarity with §11 and §20, and also any sections on which they depend.
› Each section of the book concludes with a box labelled ‘Key Ideas in §𝑛’. These
are not a summary of the section, but contain some indication of what I regard as
the main ideas that you should be taking away from your reading of the section.
› Logical ideas and notation are pretty ubiquitous in philosophy, and there are a
lot of different systems. We cannot cover all the alternatives, but some indication
of other terminology and notation is contained in Appendix A.
› A quick reference to many of the aspects of the logical systems I introduce can
be found in Appendix B.
[Figure 1: diagram of dependencies among sections, including §1, §2, §3, §15, §16, §20, §23, §25, §28, §31, §32 and §33.]
› The book is the product of a number of authors. ‘I’ doesn’t always mean me, but
‘you’ mostly means you, the reader, and ‘we’ mostly means you and me.
Key Notions

1 Arguments
Logic is the business of evaluating arguments – identifying some of the good ones and
explaining why they are good. So what is an argument?
In everyday language, we sometimes use the word ‘argument’ to talk about belligerent
shouting matches. Logic is not concerned with such teeth‐gnashing and hair‐pulling.
They are not arguments, in our sense; they are disputes. A dispute like this is often
more about expressing feelings than it is about persuasion.
Offering an ARGUMENT, in the sense relevant to logic (and other disciplines, like law
and philosophy), is something more like making a case. It involves presenting reasons
that are intended to favour, or support, a specific claim. Consider this example of an
argument that someone might give:
It is raining heavily.
If you do not take an umbrella, you will get soaked.
So: You should take an umbrella.
We here have a series of sentences. The word ‘So’ on the third line indicates that
the final sentence expresses the CONCLUSION of the argument. The two sentences be‐
fore that express PREMISES of the argument. If the argument is well‐constructed, the
premises provide reasons in favour of the conclusion. In this example, the premises
do seem to support the conclusion. At least they do, given the tacit assumption that
you do not wish to get soaked.
So this is the sort of thing that logicians are interested in when they look at arguments.
We shall say that an argument is any collection of premises, together with a conclusion.1
In the example just given, we used individual sentences to express both of the argu‐
ment’s premises, and we used a third sentence to express the argument’s conclusion.
1 Because arguments are made of sentences, logicians are very concerned with the details of particu‐
lar words and phrases appearing in sentences. Logic thus also has close connections with linguistics,
particularly that sub‐discipline of linguistics known as SEMANTICS, the theory of meaning.
Many arguments are expressed in this way. But a single sentence can contain a com‐
plete argument. Consider:
You should take an umbrella. After all, it is raining heavily. And if you do
not take an umbrella, you will get soaked.
Equally, it might have been presented with the conclusion in the middle:

It is raining heavily. Accordingly, you should take an umbrella, since you
will get soaked if you do not take one.
When approaching an argument, we want to know whether or not the conclusion fol‐
lows from the premises. So the first thing to do is to separate out the conclusion from
the premises. As a guideline, words such as ‘so’, ‘therefore’, ‘thus’, ‘hence’ and ‘accordingly’ are often used to indicate an argument’s conclusion. And expressions such as ‘since’, ‘because’ and ‘given that’ often indicate that we are dealing with a premise, rather than a conclusion.
Key Ideas in §1
› An argument is a collection of sentences, divided into one or
more premises and a single conclusion.
› The conclusion may be indicated by ‘so’, ‘therefore’ or other ex‐
pressions; the premises indicated by ‘since’ or ‘because’.
› The premises are intended to support the conclusion – though
whether they do so is another matter.
Practice exercises
At the end of every section, there are practice exercises that review and explore the
material covered in the chapter. There is no substitute for actually working through
some problems, because logic is more about cultivating a way of thinking than it is
about memorising facts.
A. What is the difference between argument in the everyday sense, and in the logicians’
sense? What is the point of logical arguments?
B. Highlight the phrase which expresses the conclusion of each of these arguments:
2 Valid Arguments

In §1, we gave a very permissive account of what an argument is. To see just how
permissive it is, consider the following:
1. One or more of the premises might be false.
2. The conclusion might not follow from, or be a consequence of, the premises –
even if the premises were true, they would not support the conclusion.
To determine whether or not the premises of an argument are true is often a very
important matter. But that is normally a task best left to experts in the field: as it
might be, historians, scientists, or whomever. In our role as logicians, we are more
concerned with arguments in general. So we are (usually) more concerned with the
second way in which arguments can go wrong.
An argument is CONCLUSIVE if, and only if, the truth of the premises
guarantees the truth of the conclusion.
In other words: an argument is conclusive if, and only if: it is not pos‐
sible for the premises of the argument to be true while the conclusion
is false.
This is not a terrible argument. Both of the premises are true. And most people who
read this book are logic students. Yet, it is possible for someone besides a logic student
to read this book. If your housemate picked up the book and thumbed through it, they
would not immediately become a logic student. So the premises of this argument,
even though they are true, do not guarantee the truth of the conclusion. This is not a
conclusive argument.
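The definition of conclusiveness can be pictured mechanically. Suppose we represent each possible situation as a record of which sentences are true in it; an argument is then conclusive just in case no situation makes every premise true while making the conclusion false. The following Python sketch illustrates the shape of the definition with a small, invented space of situations (the sentence labels and the situations themselves are assumptions made purely for illustration, not part of the text):

```python
# Model each 'possible situation' as a dict settling which sentences are
# true in it. These situations are invented purely for illustration.
situations = [
    {"raining": True,  "take_umbrella": True,  "soaked": False},
    {"raining": True,  "take_umbrella": False, "soaked": True},
    {"raining": False, "take_umbrella": True,  "soaked": False},
    {"raining": False, "take_umbrella": False, "soaked": False},
]

def conclusive(premises, conclusion, situations):
    """An argument is conclusive iff NO possible situation makes every
    premise true while the conclusion is false."""
    return not any(
        all(situation[p] for p in premises) and not situation[conclusion]
        for situation in situations
    )

# Relative to this toy space, 'I am soaked; so it is raining' comes out
# conclusive, while 'It is raining; so I take an umbrella' does not:
print(conclusive(["soaked"], "raining", situations))        # True
print(conclusive(["raining"], "take_umbrella", situations)) # False
```

Of course, whether an argument is really conclusive depends on every genuinely possible situation, not a hand-picked list; the sketch only makes the quantification over possibilities in the definition vivid.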
Or consider this pair of arguments:
The premise makes the falsity of the conclusion unlikely, but not impossible. Strictly speaking, if Mary had a sister the speaker did not mention, it would be possible for the premise to be true while the conclusion is false. So the left argument is not conclusive, interpreted strictly.
The argument on the right shows that the premise does make a compelling case for
a related but more hedged conclusion. You might think ‘that second argument is a
bit nit‐picking’. But that is exactly what makes it so watertight. It doesn’t depend on
what would make for a ‘normal’ conversation, or whether you are making the same
assumptions as the speaker, etc. No matter what, the truth of the premises secures the
truth of the conclusion: it is conclusive.
The crucial thing about a conclusive argument is that it is impossible, in a very strict
sense, for the premises to be true whilst the conclusion is false. Consider this example:

The earth has twenty‐eight moons.
So: The earth does not have an odd number of moons.
The argument is conclusive, as there is no possible way in which the earth could have
twenty‐eight moons while having an odd number of moons. But the premise is so ob‐
viously false that this argument could never be used to persuade anyone. The premise
supports the conclusion, but that support is moot given the evident falsity of the
premise.
A conclusive argument whose premises you accept seems to give you a reason to believe its
conclusion. But a conclusive argument need not provide you with a reason to believe
the conclusion. One way this can happen is when you don’t accept any of the premises
in the first place. When the premises support the conclusion, that might just mean
that they would be excellent reasons to accept the conclusion – if only they were true!
So: we are interested in whether or not a conclusion follows from some premises. Don’t,
though, say that the premises infer the conclusion. Entailment is a relation between
premises and conclusions; inference is something we do. So if you want to mention
inference when the conclusion follows from the premises, you could say that one may
infer the conclusion from the premises.
But even this may be doubted. Often, when you believe the premises, a conclusive
argument provides you with a reason to believe the conclusion. In that case, it might
be appropriate for you to infer the conclusion from the premises.
But sometimes a conclusive argument shows that some premises support a conclusion
you cannot accept. Suppose, for example, that you know the conclusion to be false.
The fact that the argument is conclusive and has a false conclusion tells you that the
premises cannot all be true. (Consider the argument from the previous section with
the false conclusion ‘Oranges are musical instruments’: the second premise is as absurd
as the conclusion.) In general, when an argument is conclusive
› the truth of all the premises guarantees the truth of the conclusion; and equally
› the falsity of the conclusion guarantees the falsity of at least one of the premises.
In this sort of situation, you might find that the argument gives you a better reason
to abandon any belief in one of the premises than to accept the conclusion. A con‐
clusive argument shows there is some reason to believe its conclusion, if you accept its
premises; it doesn’t mean there aren’t better reasons to reject its premises, if you reject
its conclusion. Consider this sort of example:1

If I look in the cupboard, I will find muesli.
I will look in the cupboard.
So: I will find muesli.
Someone might believe both premises, and not accept the conclusion, because on look‐
ing in the cupboard, they do not see the muesli. (Someone else finished it off earlier.)
It would be silly for this person to ‘follow the argument where it leads’. Rather, they
should use the fact that these premises entail a conclusion they now know to be false,
having just looked, to reject one of the premises. The obvious candidate is the first
premise. So this person should probably stop believing that they’ll find muesli if they
look in the cupboard, and start believing instead that they are out of muesli or suchlike.
Some cases are less straightforward. Consider this argument:
1 This sort of example is discussed by Gilbert Harman, Change in View, MIT Press, esp. ch. 2.
Many people will find the premises plausible, and the conclusion therefore compelling.
This is part of the case for vegetarianism. But not everyone finds the conclusion acceptable; such people conclude that one or both of the premises should be rejected. Interestingly, people may still find the premises attractive even when they recognise that a conclusion they cannot accept follows from them. Here they may find that
there is some reason to accept the premises (perhaps they seem true at first glance),
and some reason to reject them (they have, at second glance, consequences the person
cannot accept). I don’t want to adjudicate the merits of this argument here. I only want
to emphasise that even offering a conclusive argument to someone, with premises that
they currently accept, is not enough to make them come to believe the conclusion.
Sometimes people think that logic will provide a powerful tool to persuade and con‐
vince others of their point of view. They sometimes want to study logic as if it is some
dark art enabling them to subdue beliefs that contradict their own. This is not really
a very nice thing to want to do – to force a belief on someone, whether they want to
believe it or not – and so it is not particularly regrettable that logic doesn’t help you
to do it. Logic can show which claims follow from which others, and which contradict
one another. It can help us elaborate the content of some claim, or delineate the com‐
mitments a certain belief would incur. But logic does not tell you what to believe, even
when you have a conclusive argument.
The question, what ought I to believe? is one of the deepest in the area of philosophy
known as EPISTEMOLOGY, the theory of knowledge. Logic is not able to answer that
question all by itself. Even if logic tells you that there is a conclusive argument from
premise 𝒜 to conclusion ℬ, logic can’t tell you whether you ought to believe both, or
reject both. However, logic will tell you something important, even if it is only a limited
part of the answer to the question of rational belief. It will tell you that, when you
know an argument to be conclusive, you cannot both accept its premises while rejecting
its conclusion – at least, not while being ideally rational. Thus conceived, logic is not
even a science of reasoning, because it does not tell you what to think. Logic can’t tell
you, or anyone else, which packages of premises‐and‐conclusions to accept.
Juanita is a vixen.
So: Juanita is a fox.
It is impossible for the premise to be true and the conclusion false. So the argument is
conclusive. But this is not due to the structure of the argument. Here is an inconclusive
argument with seemingly the same structure or form. The new argument is the result
of replacing the word ‘vixen’ in the first argument with the word ‘cathedral’, but keeping
the overall grammatical structure the same:
Juanita is a cathedral.
So: Juanita is a fox.
This might suggest that the conclusiveness of the first argument is keyed to the mean‐
ing of the words ‘vixen’ and ‘fox’. But, whether or not that is right, it is not simply
the FORM of the argument that makes it conclusive. It is instructive to compare the
first argument with this modification, where we replace ‘vixen’ with the near‐synonym
‘female fox’:

Juanita is a female fox.
So: Juanita is a fox.
This also seems to be conclusive. But now we might suspect the occurrence of the word
‘fox’ in both premise and conclusion is not mere coincidence, but an essential part of
the explanation as to why this is conclusive.
Equally, consider the argument:

The sculpture is green all over.
So: The sculpture is not red all over.
Again, because nothing can be both green all over and red all over, the truth of the
premise would guarantee the truth of the conclusion. So the argument is conclusive.
But here is an inconclusive argument with the same form:

The sculpture is green all over.
So: The sculpture is not shiny all over.
The argument is inconclusive, since it is possible to be green all over and shiny all over.
(I might paint my nails with an elegant shiny green varnish.) Plausibly, the conclusive‐
ness of this argument is keyed to the way that colours (or colour‐words) interact. But,
whether or not that is right, it is not simply the form of the argument that makes it
conclusive.
An argument can be conclusive due to its structure, and also be conclusive for other
reasons. Arguably, this might be going on in the argument discussed at the end of §2.2,
with the premise ‘Oranges are not fruits’. Some people might think this premise has to
be false, because of what oranges are. (Many will say that being a fruit is an essential
part of what it is to be an orange.) But if the premise ‘Oranges are not fruits’ has to be
false, it is not possible for the premises to be true. So it is not possible for premises to
be true while the conclusion is false. Hence the argument is conclusive – both because
it has a good structure, but also because it has a premise that cannot be true.2
2 When an argument has an impossible premise, any argument with that premise will be conclusive no
matter what the conclusion is! So this is a weird kind of case of conclusiveness. But nothing much
really turns on it, and it is simpler to simply count it as conclusive than to try and separate out such
‘degenerate’ cases of conclusive arguments. See also §3.3.
2.5 Validity
Logicians try to steer clear of controversial matters like whether there is a definition
of an orange that requires it to be a fruit, or whether there is a ‘connection in mean‐
ing’ between being green and not being red. It is often difficult to figure such things
out from the armchair (a logician’s preferred habitat), and there may be widespread
disagreement even among subject matter experts.
So logicians do not study conclusive arguments in general, but rather concentrate on
those conclusive arguments which have a good structure or form.3 This is why the
logic we are studying is sometimes called FORMAL LOGIC. We introduce a special term
for the class of arguments logicians are especially interested in:
An argument is VALID if, and only if, it is conclusive due to its structure;
otherwise it is INVALID.
Either oranges are fruits or oranges are musical instruments.
It is not the case that oranges are fruits.
So: Oranges are musical instruments.

Either ogres are fearsome or ogres are mythical.
It is not the case that ogres are fearsome.
So: Ogres are mythical.
The shared structure of these two arguments is something like this:
Either 𝒜 or ℬ.
It is not the case that 𝒜 .
So: ℬ.
Any argument with this structure will be conclusive in virtue of structure, and hence
valid. It does not matter, really, what sentences we put in place of ‘𝒜 ’ and ‘ℬ’. (Within
limits: you can’t put a question or an exclamation and get a valid argument – see §3.1.)
This highlights that valid arguments do not need to have true premises or even true
conclusions. For instance, we can put false sentences in place of both 𝒜 and ℬ; then
the first premise and the conclusion are false, yet the argument remains valid.
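For truth‐functional forms like this one, the claim of validity can even be checked by brute force: try every assignment of truth values to 𝒜 and ℬ, and confirm that no assignment makes both premises true while the conclusion is false. Here is a Python sketch of that check (an illustration only, anticipating the truth table method of later chapters):

```python
from itertools import product

def valid_form(premises, conclusion, letters=2):
    """A truth-functional form is valid iff no assignment of truth values
    to its schematic letters makes every premise true and the conclusion
    false."""
    for values in product([True, False], repeat=letters):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample assignment
    return True

# Either A or B; it is not the case that A; so: B.
premises = [lambda a, b: a or b, lambda a, b: not a]
conclusion = lambda a, b: b
print(valid_form(premises, conclusion))  # True

# Compare: Either A or B; A; so: B -- invalid (take A true, B false).
print(valid_form([lambda a, b: a or b, lambda a, b: a], lambda a, b: b))  # False
```

The second call shows the same method rejecting a superficially similar form whose second premise affirms 𝒜 rather than denying it.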
3 It can be very hard to tell whether an invalid argument is conclusive or inconclusive. Consider the argument ‘The sea is full of water; so the sea is full of H2O’. This is conclusive, since water just is the same stuff as H2O. The sea cannot be full of that water stuff without being full of that exact same stuff, namely, H2O stuff. But it took a lot of chemistry and ingenious experiments to figure out that water is H2O. So it was not at all obvious that this argument was conclusive. On the other hand, it is generally very clear when an argument is conclusive due to its structure – you can just see the structure when the argument is presented to you.
Conversely, having true premises and a true conclusion is not enough to make an ar‐
gument valid. Consider this example:
London is in England.
Beijing is in China.
So: Paris is in France.
The premises and conclusion of this argument are, as a matter of fact, all true. But the
argument is invalid. If Paris were to declare independence from the rest of France, then
the conclusion would be false, even though both of the premises would remain true.
Thus, it is possible for the premises of this argument to be true and the conclusion false.
The argument is therefore inconclusive, and hence invalid.
Return briefly to another example we discussed earlier:
𝑎 is an ℱ 𝒢.
So: 𝑎 is a 𝒢.
For most adjectives ℱ , this structure yields a conclusive argument when you replace
the schematic letters by English words. E.g., ‘Juanita is a female fox; so, Juanita is a fox’.
But not all: some adjectives like ‘fake’ or ‘alleged’ do not yield conclusive arguments
when substituted for ℱ : ‘This is a fake gun; so this is a gun’ is not a conclusive argument.
We will return in §15 to the logical structure of examples like these, and to expressions
like ‘fake gun’ in §16.6.
2.6 Soundness
The important thing to remember is that validity is not about the actual truth or falsity
of the sentences in the argument. It is about whether the structure of the argument
ensures that the premises support the conclusion. Nonetheless, we shall say that an
argument is SOUND if, and only if, it is both valid and all of its premises are true. So
every sound argument is valid and conclusive. But not every valid argument is sound,
and not every conclusive argument is sound.
It is often possible to see that an argument is valid even when one has no idea whether
it is sound. Consider this extreme example (after Lewis Carroll’s Jabberwocky):
’Twas brillig, and the slithy toves did gyre and gimble in the wabe.
So: The slithy toves did gyre and gimble in the wabe.
This argument is valid, simply because of its structure (it has a premise conjoining two
claims by ‘and’, and a conclusion which is one of those claims). But is it sound? That
would depend on figuring out what all those nonsense words mean!
This argument generalises from observations about several cases to a conclusion about
all cases. Though it is invalid, that doesn’t mean it is a bad argument. The premises
appear to provide some support for the conclusion, though it falls short of being con‐
clusive.
INDUCTION describes a form of reasoning from evidence to hypotheses about that evid‐
ence. For example, when we reason from a sample of eligible voters to a hypothesis
about how the whole population will vote, we are reasoning inductively. INDUCTIVE
LOGIC is the attempt to generalise deductive logic to evaluate arguments in line with
the canons of good inductive reasoning. The proponents of inductive logic think there
might be a generalisation of deductive logic which enables us to evaluate arguments in
a more fine‐grained way than the options we’ve canvassed so far (i.e., just ‘valid’,
‘invalid but conclusive’, and ‘invalid and inconclusive’). A significant part of the project
of inductive logic is the attempt to classify inconclusive arguments in terms of whether
their premises provide good inductive evidence in favour of their conclusions.
In the example, the premises are of the form ‘In January 𝑛, it rains’, and the conclusion is
of the form ‘Every January, it rains’. This argument thus has a general conclusion drawn
from instances of that generalisation. (That is, ‘it rains in January 1997’ is an instance
of the generalisation ‘it rains in January of every year’.) If we regard the foregoing
premises as providing decent support for the conclusion, we might think that adding
additional premises of the same sort before drawing the conclusion would make it even
stronger: In January 2001, it rained in London; In January 2002…. This principle, that a
generalisation is increasingly supported the more instances we adduce, might be
part of our toolkit when evaluating arguments inductively. But, no matter how many
premises of this form we add, the argument will remain inconclusive. Even if it has
rained in London in every January thus far, it remains possible that London will stay
dry next January. (Even if we include every instance – past, present, and future – the
argument will be inconclusive, because one will need the additional premise that there
are no other instances than those we’ve included.)
The point of all this is that most arguments which are inductively very strong are not
(deductively) valid. Arguments which represent very good examples of inductive reas‐
oning generally are not watertight. Unlikely though it might be, it is possible for their
conclusion to be false, even when all of their premises are true. In this book, our in‐
terest is simply in sorting the (deductively) valid arguments from the invalid ones. The
project of inductive logic, of sorting the invalid arguments further into the inductively
good and poor ones, we shall set aside entirely from here on.
Recall the conclusive argument ‘The sculpture is green all over; so, the sculpture is not
red all over’, and suppose we add an extra premise:

The sculpture is green all over.
Nothing that is green all over is red all over.
So: The sculpture is not red all over.

This new argument has a premise which makes explicit a fact about green and red that
was merely implicit in the original argument. Since the original argument was con‐
clusive – since the fact about green and red is true just in virtue of the meaning of the
words ‘green’ and ‘red’ (and ‘not’) – the new argument remains conclusive. (We can’t
undermine conclusiveness by adding further premises.) But the new argument is valid,
because the additional premise we have added yields an argument with a structure that
guarantees the truth of the conclusion, given the truth of the premises.
The original argument is sometimes thought to be merely an abbreviation of the ex‐
panded valid argument. An argument with an unstated premise, such that it can be
seen to be valid when the premise is made explicit, is called an ENTHYMEME.4 Many
inconclusive arguments can be treated as enthymematic, if the unstated premise is
obvious enough:
The Nasty party platform includes imprisoning people for chewing gum;
So: The Nasty party will not form the next government.
The unstated premise is something like ‘If a party platform includes imprisoning
people for chewing gum, then that party will win too few votes to form the next govern‐
ment’. The unstated premise may or may not be true. But if it is added, the argument
is made valid.
Any conclusive argument you are likely to come across will either already be valid, or
can be transformed into a valid argument by making some assumption on which it
implicitly relies into an explicit premise.
Not every inconclusive argument should be treated as an enthymeme. In particular,
many strong inductive arguments can be made weaker when they are treated as en‐
thymematic. Consider:
4 The term is from ancient Greek; the concept was given its first philosophical treatment by Aristotle in
his Rhetoric. He gives this example, among others: ‘He is ill, since he has fever’.
This argument is inconclusive. It can be made valid by adding the unstated premise
‘Every January, it is hot in Adelaide’. But that unstated premise is extremely strong – we
do not have sufficient evidence to conclude that it will be hot in Adelaide in January for
eternity. So while the premises we have been given explicitly are good reason to think
Adelaide will continue to have a hot January for the foreseeable future, we do not have
good enough reason to think that every January will be hot. Treating the argument
as an enthymeme makes it valid, but also makes it less persuasive, since the unstated
premise on which it relies is not one many people will share.5
Key Ideas in §2
› An argument is conclusive if, and only if, the truth of the
premises guarantees the truth of the conclusion.
› An argument is valid if, and only if, the form of the premises
and conclusion alone ensures that it is conclusive. Not every
conclusive argument is valid (though a conclusive but invalid
argument can generally be made valid by the addition of appropriate premises).
› An argument can be good and persuade us of its conclusion even
if it is not conclusive; and we can fail to be persuaded of the
conclusion of a conclusive argument, since one might come to
reject its premises.
Practice exercises
A. What is a conclusive argument? What, in addition to being conclusive, is required
for an argument to be valid? What, in addition to being valid, is required for an argu‐
ment to be sound?
B. Which of the following arguments are valid? Which are invalid but conclusive?
Which are inconclusive? Comment on any difficulties or points of interest.
1. Socrates is a man.
2. All men are carrots.
So: Socrates is a carrot.

1. If the world ends today, then I will not need to get up tomorrow morning.
2. I will need to get up tomorrow morning.
So: The world will not end today.

1. We must do something.
2. Quacking like a duck is something.
So: We must quack like a duck.
C. For each of the following, give an example if you can; if you cannot, briefly explain why not:

1. A valid argument that has one false premise and one true premise?
2. A valid argument that has only false premises?
3. A valid argument with only false premises and a false conclusion?
4. A sound argument with a false conclusion?
5. An invalid argument that can be made valid by the addition of a new premise?
6. A valid argument that can be made invalid by the addition of a new premise?
3 Other Logical Notions

In §2, we introduced the idea of a valid argument. We will want to introduce some
more ideas that are important in logic.
The common feature of these three kinds of sentence (questions, commands, and exclamations) is that they cannot be used to
make assertions: they cannot be true or false. It does not even make sense to ask
whether a question is true (it only makes sense to ask whether the answer to a question
is true).
The general point is that the premises and conclusion of an argument must be sen‐
tences that are capable of having a TRUTH VALUE. The notions of validity and conclus‐
iveness are defined in terms of truth preservation, so these properties depend on the
constituents of arguments being the kinds of things which have a truth value.
The two truth values that concern us are just TRUE and FALSE. We may not know
which truth value a declarative sentence has, but we generally know what kinds of
conditions would need to obtain in order for it to be true or false. (We thus have no
need of a supposed intermediate truth value like ‘unknown’ – that would reflect our
attitudes or beliefs about a claim, whereas the truth value reflects how things really
are independently of our attitudes or beliefs.)
To form part of an argument, a sentence must have the kind of grammatical structure
that permits it to have a truth value. In terms of the notion of structure introduced
in §2.2, we are going to focus on those structures which yield a declarative sentence
when supplied with declarative sentences. For example ‘it is not the case that 𝒜 ’ yields
a declarative sentence whenever we put a declarative sentence in for 𝒜 , and may not
yield anything grammatical at all otherwise:
3.2 Consistency
Consider these two sentences:
Logic alone cannot tell us which, if either, of these sentences is true. Yet we can say
that if the first sentence 5 is true, then the second sentence 6 must be false. And if
6 is true, then 5 must be false. It is impossible that both sentences are true together.
These sentences are inconsistent with each other. And this motivates the following
definition:
Sentences are JOINTLY CONSISTENT if, and only if, it is possible for
them all to be true together.
For example, the sentences ‘Albert correctly measured the spaceship to be 50m’ and ‘Brunhilde correctly measured the spaceship to be 45m’ are jointly consistent, so long as Albert and Brunhilde are in relative motion.
We can ask about the consistency of any number of sentences. For example, consider
the following four sentences:
7 and 10 together entail that there are at least four Martian giraffes at the park. This
conflicts with 9, which implies that there are no more than two Martian giraffes there.
So the sentences 7–10 are jointly inconsistent. They cannot all be true together. (Note
that the sentences 7, 9 and 10 are jointly inconsistent. But if some sentences are already
jointly inconsistent, adding an extra sentence to the mix will not make them consist‐
ent!)
There is an interesting connection between consistency and conclusive arguments. A
conclusive argument is one where the premises guarantee the truth of the conclusion.
So it is an argument where if the premises are true, the conclusion must be true. So
the premises cannot be jointly consistent with the claim that the conclusion is false.
Since the argument ‘Dogs and cats are animals, so dogs are animals’ is conclusive, that
shows that the sentences ‘Dogs and cats are animals’ and ‘Dogs are not animals’ are
jointly inconsistent. If an argument is conclusive, the premises of the argument taken
together with the denial of the conclusion will be jointly inconsistent.
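Looking ahead a little: for arguments whose conclusiveness rests only on truth‐functional structure, this connection can be checked mechanically, since each placeholder clause takes just one of two truth values. Here is a minimal Python sketch of the idea (the function names and the encoding of sentences as functions are mine, purely for illustration; they are not part of the book’s formal development):

```python
from itertools import product

def jointly_consistent(sentences, letters):
    """True iff some assignment of truth values to the letters makes
    every sentence in the list true together."""
    return any(all(s(dict(zip(letters, vals))) for s in sentences)
               for vals in product([True, False], repeat=len(letters)))

def conclusive_by_structure(premises, conclusion, letters):
    """An argument is conclusive (at the structural level) iff its
    premises plus the denial of its conclusion are jointly inconsistent."""
    return not jointly_consistent(premises + [lambda v: not conclusion(v)], letters)

# 'Dogs and cats are animals; so dogs are animals', encoding
# A as 'dogs are animals' and B as 'cats are animals'.
premises = [lambda v: v['A'] and v['B']]
conclusion = lambda v: v['A']
print(conclusive_by_structure(premises, conclusion, ['A', 'B']))  # True
```

The check mirrors the definition exactly: the argument counts as conclusive just when no assignment makes the premises and the denial of the conclusion true together.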
Sentences are JOINTLY FORMALLY CONSISTENT if, and only if, consider‐
ing only their structure, they can all be true together.
Another way to put it: some sentences are formally consistent iff, looking just at their
structure (and not looking at what they are actually about), they might all be true
together.
Just as validity is more stringent than conclusiveness (it is conclusiveness plus some‐
thing more), consistency is more stringent than formal consistency (it is formal con‐
sistency plus substantive consistency). If some sentences are jointly consistent, they
are also jointly formally consistent. But some formally consistent sentences are jointly
inconsistent. Any conclusive but invalid argument will give us an example.
20 KEY NOTIONS
For example, since ‘The sculpture is green all over, so the sculpture is not red all over’
is conclusive, these sentences are jointly formally consistent but not consistent:
These sentences have no interesting internal structure for our purposes, so they are
easily seen to be formally consistent. But, holding fixed their actual meaning
(particularly the actual meaning of ‘red’ and ‘green’), we see that the truth of 11 excludes
the truth of 12. (Again, more on the notion of structure invoked here in §4.1.)
13. It is raining.
14. If it is raining, water is precipitating from the sky.
15. If something is green, it is colourless.
16. Either it is raining here, or it is not.
17. It is both raining here and not raining here.
In order to know if sentence 13 is true, you would need to look outside or check the
weather channel. It might be true; it might be false.
Sentence 14 is different. You do not need to look outside to know that it says something
true. Regardless of what the weather is like, if it is raining, water is precipitating – that
is just what rain is, meteorologically speaking. That is a NECESSARY TRUTH. Here, a
necessary connection in meaning between ‘rain’ and ‘precipitation’ makes what the
sentence says true in every circumstance.
Sentence 15 is a NECESSARY FALSEHOOD or IMPOSSIBILITY. Nothing is, or even could be,
both green and colourless. We don’t need to do any scientific or other investigation to
know that 15 is not and cannot be true.
Sentence 16 is also a necessary truth. Unlike sentence 14, however, it is the structure of
the sentence which makes it necessary. No matter what ‘raining here’ means, ‘Either it
is raining here or it is not raining here’ will be true. The structure ‘Either it is … or it is
not …’, where both gaps (‘…’) are filled by the same phrase, must yield a true sentence.
Equally, you do not need to check the weather, or even the meaning of words, to de‐
termine whether or not sentence 17 is true. It must be false, simply as a matter of
structure. It might be raining here and not raining across town; it might be raining
now but stop raining even as you finish this sentence; but it is impossible for it to be
both raining and not raining in the same place and at the same time. So, whatever
the world is like, it is not both raining here and not raining here. It is a NECESSARY
FALSEHOOD.
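Because the status of sentences 16 and 17 is purely structural, it can be verified by brute force: just try every truth value for the repeated clause. A small illustrative Python sketch (the names are mine, not the book’s notation):

```python
from itertools import product

def true_in_every_assignment(sentence, letters):
    """True iff the structure comes out true no matter which truth
    values the placeholder letters receive."""
    return all(sentence(dict(zip(letters, vals)))
               for vals in product([True, False], repeat=len(letters)))

either_or_not = lambda v: v['A'] or not v['A']  # the structure of sentence 16
both_and_not = lambda v: v['A'] and not v['A']  # the structure of sentence 17

print(true_in_every_assignment(either_or_not, ['A']))      # True: necessary by structure
print(any(both_and_not({'A': v}) for v in (True, False)))  # False: true in no assignment
```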
§3. OTHER LOGICAL NOTIONS 21
These last two examples, of necessary truths and impossibilities in virtue of structure,
are of particular interest to logicians. We will come back to them in §10.
A sentence which is capable of being true or false, but which says something which is
neither necessarily true nor necessarily false, is CONTINGENT.
If a sentence says something which is sometimes true and sometimes false, it will def‐
initely be contingent. But something might always be true and still be contingent. For
instance, it seems plausible that whenever there have been people, some of them ha‐
bitually arrive late. ‘Some people are habitually late’ is always true. But it is contingent,
it seems: human nature could have been more punctual. If so, the sentence would have
been false. But if something is really necessary, it will always be true, and couldn’t even
possibly be false.1
If some sentences contain amongst themselves a necessary falsehood, those sentences
are jointly inconsistent. At least one of them cannot be true, so they cannot all be true
together. Accordingly, if an argument has a premise that is a necessary falsehood, or its
conclusion is a necessary truth, or both, then the argument is conclusive – its premises
and the denial of its conclusion will be jointly inconsistent.
These observations might be surprising. But note that if a premise is impossible, there
is no way to make it true, and hence no way to make all the premises true while making
the conclusion false; and if the conclusion is necessary, there is no way to make it false
at all. These are both degenerate cases of conclusiveness, where there need be
no real connection between the premises and conclusion to ground the conclusiveness
of an argument.
Key Ideas in §3
› Arguments are made up of declarative sentences, all of which are
either true or false.
› Some declarative sentences are formally consistent if, and only
if, their structures don’t rule out the possibility that they are all
true together.
› Some declarative sentences can only have one truth value – they
are either necessary or impossible. Others are contingent, hav‐
ing one truth value in some circumstances and the other truth
value in other circumstances.
1 Here’s an interesting example to consider. It seems that, whenever anyone says the sentence ‘I am here
now’, they say something true. That sentence is, whenever it is uttered, truly uttered. But does it say
something necessary or contingent?
Practice exercises
A. Which of the following sentences are capable of being true or false?
1. Answer me!
2. ‘Answer me!’, she demanded.
3. You are required to answer me.
4. Saying nothing is not an answer.
5. If you want answers, ask Alfred.
6. Why won’t you answer me?
7. All that is left to ask is: ‘Who has the answers?’
D. Look back at the sentences 7–10 in this section (about giraffes, gorillas and Martians
in the wild animal park), and consider each of the following:
1. 8, 9, and 10
2. 7, 9, and 10
3. 7, 8, and 10
4. 7, 8, and 9
1. There are three people leaving the party: Atheer, Brigitte, and James.
2. Brigitte is wearing Atheer’s hat.
3. Each of the people is wearing a hat.
4. No person is wearing their own hat.
5. Atheer is wearing Brigitte’s hat.
It is raining outside.
If it is raining outside, then Jenny is miserable.
So: Jenny is miserable.
Jenny is an anarcho‐syndicalist.
If Jenny is an anarcho‐syndicalist, then Dipan is an avid reader of Tolstoy.
So: Dipan is an avid reader of Tolstoy.
Both arguments are valid, and there is a straightforward sense in which we can say
that they share a common structure. We might express the structure thus, when we
let letters stand for phrases in the original argument:
A
If A, then C
So: C
This is an excellent argument STRUCTURE. Surely any argument with this structure will
be valid. And this is not the only good argument structure. Consider an argument like:
26 THE LANGUAGE OF SENTENTIAL LOGIC
A or B
not: A
So: B
A superb structure! You will recall that this was the structure we saw in the original
arguments which introduced the idea of validity in §2.5. And here is a final example:
It’s not the case that Jim both studied hard and acted in lots of plays.
Jim studied hard.
So: Jim did not act in lots of plays.
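Like the earlier structures, this last one – ‘Not (A and B); A; so not B’ – can be checked by surveying every combination of truth values for the placeholder letters, the method chapter 3 develops as truth tables. A brief Python sketch of that survey (illustrative only; the function name and encoding are mine):

```python
from itertools import product

def valid_by_truth_values(premises, conclusion, letters):
    """True iff no assignment of truth values makes every premise
    true while making the conclusion false."""
    for vals in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# Not (A and B); A; so: not B
premises = [lambda v: not (v['A'] and v['B']), lambda v: v['A']]
conclusion = lambda v: not v['B']
print(valid_by_truth_values(premises, conclusion, ['A', 'B']))  # True
```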
The examples illustrate the idea of validity – conclusiveness in virtue of structure. The
validity of the arguments just considered has nothing very much to do with the mean‐
ings of English expressions like ‘Jenny is miserable’, ‘Dipan is an avid reader of Tolstoy’,
or ‘Jim acted in lots of plays’. If it has to do with meanings at all, it is with the meanings
of phrases like ‘and’, ‘or’, ‘not,’ and ‘if …, then …’.
This sentence (phrase of type S) can be divided into two main parts: the NOUN PHRASE
‘Tariq’ (type NP), and the VERB PHRASE ‘is a man’ (type VP). In this example, the verb
phrase itself divides into the verb ‘is’ and the DETERMINER PHRASE ‘a man’ (type DP).
We will return to the internal structure of sentences in §15. This phrase structure is
depicted in the tree in Figure 4.1.
Let’s look at a more complicated example. Consider
First we note that this sentence is elliptical, the verb phrase ‘ace the test’ being omitted
after ‘Bob will’ as it is understood to be supplied by the first clause of the sentence.1
The full tree, with the elided phrase supplied (marked by having the VP label in a box
at the lower right), is depicted in Figure 4.2. In this example, the whole sentence is
a compound of two clauses which are sentences in their own right, or SUBSENTENCES,
connected by ‘or’.
When analysing the structure of compound sentences, a useful notion is that of a CA‐
NONICAL CLAUSE.2 These are the simplest units of English sentences that can constitute
a sentence by themselves. Here are some examples:
1 A closely related example is discussed by Paul Elbourne (2011) Meaning, Oxford University Press, pp.
74–6. He argues that this kind of elision is crucial evidence for phrase structure grammars, for only
phrases can be omitted in this way: witness the ungrammatical ‘Alice will ace the test and Bob will
the’, where arbitrary words are omitted that do not form a grammatical phrase. This is evidence that
English syntax is sensitive to the phrasal structure of sentences, and does not treat them as merely a
string of words.
2 A fuller treatment of canonical clauses can be found in Rodney Huddleston and Geoffrey Pullum (2005)
A Student’s Introduction to English Grammar, Cambridge University Press, pp. 24–5.
A canonical clause has internal structure too, but its parts are not themselves sentences.
It is composed of a GRAMMATICAL SUBJECT, typically but not always a noun phrase
(‘Jenny’, ‘She’), and a PREDICATE, always a verb phrase (‘knew the victim’, ‘is happy’).
The following, in contrast with the previous examples, are noncanonical clauses:
› They are positive, as in 20, rather than negative, as indicated by the underlined
‘not’ in 23.
› Canonical clauses are simple, and not coordinated with any other clause, as in
21. In 24, the COORDINATOR ‘and’ links two clauses that would be canonical on
their own into a longer COMPOUND sentence.
Adjunct heads including ‘if … then …’, ‘only if’ (also ‘unless’, ‘because’, ‘since’, ‘hence’ –
though we treat these last as signalling premises and conclusions of arguments)
Negatives including ‘not’, ‘‐n’t’, ‘it is not the case that’ (‘never’).
And there are others that we won’t deal with (‘always’, ‘might’).
How can we identify the logical structure of an argument? The logical structure of
an argument is the form one arrives at by ‘abstracting away’ the details of the words
in an argument, except for a special set of words, the STRUCTURAL WORDS. This con‐
nects with what we have just been saying, because in many of the examples from §4.1,
§4. FIRST STEPS TO SYMBOLISATION 29
the arguments can be analysed as involving canonical clauses which are linked by the
structural words ‘and’, ‘or’, ‘not’, ‘if … then …’ and ‘… if and only if …’. So our efforts
to identify the logical structure of arguments often coincide with linguists’ efforts to
identify the canonical clauses that constitute the sentences in those arguments – or,
at least, constitute some plausible paraphrase or reformulation of those sentences.
We can see this if we return to this earlier argument:
We paraphrase the compound sentence ‘Jenny is either happy or sad’ into a compound
of two canonical clauses, joined by the coordinator ‘or’:
And we paraphrase the noncanonical clause ‘Jenny is not happy’ as the negative clause
Identifying the coordinator ‘or’ and the negative clause ‘it is not the case that’ as struc‐
tural words, and replacing the canonical clause ‘Jenny is happy’ with the placeholder
letter ‘A’, and the canonical clause ‘Jenny is sad’ with the placeholder letter ‘B’, we arrive
again at the argument structure we previously identified:
A or B
not: A
So: B
In this and the other examples above, we removed all details of the arguments except
for the special words ‘and’, ‘or’, ‘not’ and ‘if …, then …’, and replaced the other clauses
in the argument (or its near paraphrase) by placeholder letters ‘A’, ‘B’, etc. (We were
careful to replace the same clause by the same placeholder every time it appeared.)
Another example:
We begin by paraphrasing. We note that the pronoun ‘it’ is actually referring to the
previously mentioned subject ‘butter’, and we move the negative ‘isn’t’ into a position
that reveals the canonical clause ‘butter is healthy’. We also note that the effect of ‘but’
is roughly the same as our structural word ‘and’ (though ‘but’ suggests a contrast that
‘and’ does not, it expresses the roughly equivalent idea that each of the claims it
connects is true). With these changes, we obtain this more stilted paraphrase:
27. It is not the case that butter is healthy and butter is delicious.
Figure 4.4: Different sentential structures for ‘Not A and B’ shown in schematic syn‐
tactic trees.
This paraphrase has the syntactic tree in Figure 4.3. Note that we have not further
broken down the canonical clauses into subject‐predicate form, because our structural
words, the sentence connectives, do not occur within any canonical clause and hence
are not ‘visible’ to our analysis at the level of sentences. Finally, we replace the canon‐
ical clauses with placeholder sentences, and we reach the structure ‘Not A and B’.
The schematic paraphrase ‘Not A and B’ is potentially ambiguous, because it is not
obvious whether the ‘not’ applies just to the A‐clause, or to the whole ‘A and B’ clause.
We could introduce parentheses to eliminate this ambiguity, distinguishing ‘Not (A
and B)’ from ‘Not A and B’. This is the approach we will take in our formal language
Sentential (see page 40). Alternatively, we can use the hierarchical nature of syntactic
trees to see that there are two different structures possible for ‘Not A and B’, as
depicted in the schematic syntactic trees in Figure 4.4.
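The ambiguity matters: the two parses disagree about truth values. A quick Python check of the two readings (the labels ‘wide’ and ‘narrow’ are mine, marking the scope of the negation):

```python
# Two readings of the schematic 'Not A and B', distinguished by the
# scope of the negation.
wide = lambda A, B: not (A and B)    # negation covers the whole conjunction
narrow = lambda A, B: (not A) and B  # negation covers A alone

disagreements = [(A, B) for A in (True, False) for B in (True, False)
                 if wide(A, B) != narrow(A, B)]
print(disagreements)  # [(True, False), (False, False)]
```

So on two of the four possible assignments the readings come apart, which is why the parenthesised notation of Sentential is needed.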
Sharp‐eyed readers will have noticed that our list of special words doesn’t precisely line
up with the canonical clauses we introduced in §4.2. Consider the noncanonical clause
‘She said that Jenny is sad’, in which the canonical clause ‘Jenny is sad’ is subordinate
within an indirect speech report. Because ‘She said that’ is not on our special list of
words, we cannot analyse this sentence as composed from a canonical clause and some
structural expression. So for the purposes of sentential logic, we will treat ‘She said
that Jenny is sad’ as if it were canonical, even though it is not from the point of view of
English grammar. From the point of view of our structural words, this sentence doesn’t
have any further structure that we can identify. Such QUASI‐CANONICAL CLAUSES –
clauses that do not feature any of our list of structural words – will also be replaced by
placeholder letters in our analysis.
This raises another question. What makes the words on our list special? Logicians
tend to take a pragmatic attitude to this question. They say: nothing! If you had chosen
different words, you would have come up with a different structure. In terms of the
syntactic trees we drew above, there is a hierarchy of levels when breaking down the
top‐level sentence into constituent phrases, and different choices of structural words
correspond to different choices concerning the level at which to stop our analysis.
Linguists take a very expansive view of structural words, so that the class of canonical
clauses is rather small, and there is a lot of structure to be identified in natural language.
For example, the presence of modal auxiliaries (like ‘will’ or ‘must’, as in ‘James must
eat’), verb inflections other than the present tense (e.g., ‘they took snuff’), and subor‐
dination (as in ‘Etta knew that peaches were abundant’), in addition to items on our
list of structural words, suffices to make a clause noncanonical. And there are logics
which do take these to be structural words: modal logics, tense logics, and epistemic
logics treat these as structural words and provide recipes for the structural analysis of
arguments that abstract away features other than these words. But there is no funda‐
mental principle that divides words once and for all into structural words and other
words. So logicians are more interested in quasi‐canonical clauses, given some fixed
list of structural words, when using logic to model or represent natural languages.
We will start simply, and focus on the list of truth‐functional sentence connectives,
principally ‘and’, ‘or’, ‘not’, ‘if … then …’ and ‘… if and only if …’. There are practical
reasons why these words are useful ones to focus on initially, which I will now discuss.
In this chapter we will begin developing a formal language which will allow us to rep‐
resent many sentences of English, and arguments involving those sentences. The lan‐
guage will have a very small basic vocabulary, since we are designing it to represent the
sentential structure of the examples with which we began. So it will have expressions
corresponding to the English sentence connectives ‘and’, ‘or’, ‘not’ and ‘if …, then …’
and ‘if and only if’. The language represents these English connectives by its own class
of dedicated sentence connectives which allow simple sentences of the language to be
combined into more complex sentences. These words are a good class to focus on in
developing a formal language, because languages with expressions analogous to these
feature a good balance between being useful and being well behaved and easy to study.
The language we will develop is called Sentential, and the study of that language and
its features is called sentential logic. (It also has other names: see Appendix A.)
Once we have our formal language Sentential, we will be able to go on (in chapter 3)
to show that it has a very nice feature: it is able to represent the structure of a large
class of valid natural language arguments. So while logic can’t help us with every good
argument, and it can’t even help us with every conclusive argument, it can help us to
understand arguments that are valid, or conclusive due to their structure, when that
structure involves ‘and’, ‘or’, ‘not’, ‘if’ and various other expressions.
We will see later in this book that one could take additional expressions in natural
language to be involved in determining the structure of a sentence or argument. The
language we develop in chapter 4 is one that is suited to represent what we get when
we take names like ‘Juanita’, predicates like ‘is a vixen’ or ‘is a fox’, and quantifier expres‐
sions like ‘every’ and ‘some’ to be structural – see also §16. And as we have mentioned,
there are still other formal logical languages, unfortunately beyond the scope of this
book, which take still other words or grammatical categories as structural: logics which
take modal words like ‘necessarily’ or ‘possibly’ as structural, and logics which take tem‐
poral adverbs like ‘always’ and ‘now’ as structural.3 What we notice here is that using
logic to model natural language arguments inevitably involves a compromise. If we
have lots of structural words, we can show many conclusive arguments to be valid, but
our logic is complex and involves many different sentence connectives. If we take rel‐
atively few words as structural, we can represent fewer conclusive arguments as valid,
but our formal language is a lot easier to work with.
We will start now by putting those tantalising observations about richer logics out of
your mind! We’ll focus to begin with on the language Sentential, and on how we can
use it to model arguments in English featuring some particular sentence connectives
as structural words, including those we’ve just highlighted above. This collection of
structural words has struck many logicians over the years as providing a good balance
between simplicity and strength, ideal for an introduction to logic.
3 Some arguments will be valid in richer logical frameworks, but not according to the more austere frame‐
work for logical structure provided by Sentential. For example ‘Sylvester is always active; so Sylvester
is active now’ is conclusive. From the point of view of temporal logic, the argument has this form ‘Al‐
ways A; so now A’, which is valid in normal temporal logics. But it is not valid according to Sentential,
because the premise ‘Sylvester is always active’ has no internal structural words that occur on the list
of sentence connectives that Sentential represents.
𝐴, 𝑃, 𝑃1, 𝑃2, 𝐴234.
Atomic sentences are the basic building blocks of Sentential. We introduce them in
order to represent, or symbolise, certain English sentences. To do this, we provide a
SYMBOLISATION KEY, such as the following, which assigns a temporary linkage between
some atomic sentences of Sentential, and some quasi‐canonical sentences of the natural
language we are representing using Sentential, such as English:
𝐴: It is raining outside.
𝐶 : Jenny is miserable.
In doing this, we are not fixing this symbolisation once and for all. We are just saying
that, for the time being, we shall use the atomic sentence of Sentential, ‘𝐴’, to symbolise
the English sentence ‘It is raining outside’, and the atomic sentence of Sentential, ‘𝐶 ’, to
symbolise the English sentence ‘Jenny is miserable’. Later, when we are dealing with
different sentences or different arguments, we can provide a new symbolisation key;
as it might be:
𝐴: Jenny is an anarcho‐syndicalist.
𝐶 : Dipan is an avid reader of Tolstoy.
Given this flexibility, it isn’t true that a sentence of Sentential means the same thing
as any particular natural language sentence – see also §8.4. The question of what any
given atomic sentence of Sentential means is actually a bit hard to make sense of; we
will return to it in §9.
Key Ideas in §4
› Arguments are made of sentences, and to understand the struc‐
ture of an argument we must understand the syntactic struc‐
ture of those sentences, including analysing those sentences into
their simplest constituents, roughly, how a compound sentence
can be constructed by combining canonical clauses.
› The formal language Sentential is designed to model English ar‐
guments involving compound sentences structured by the sen‐
tence connectives ‘and’, ‘or’, ‘not’, and ‘if’, and related expressions.
› We symbolise these arguments by abstracting away any
other aspects of English sentences, using structureless atomic
sentences to represent clauses that do not include these special
expressions.
› Many English words and phrases can be treated as structural,
and different formal languages can be motivated by other
choices of structural expressions. Sentential represents a particu‐
larly well behaved aspect of English sentence structure.
Practice exercises
A. True or false: if you are going to represent an English sentence by an atomic sen‐
tence in Sentential, the English sentence cannot have a sentence connective (like ‘and’)
occurring within it.
B. Which one or more of the following are not atomic sentences of Sentential?
1. 𝐴′;
2. 𝑊0;
3. 𝑄5902222;
4. 77𝑃;
5. 𝑉9.
C. This argument is invalid in English: ‘There is water in the glass and it is cold; there‐
fore it is cold’. Comment on why it is invalid, and what potential pitfalls arise when
considering how to symbolise this argument into Sentential.
5 Connectives
As the table suggests, we will introduce these connectives by the English connectives
they parallel. It is important to bear in mind that they are perfectly legitimate stan‐
dalone expressions of Sentential, with a meaning independent of the meaning of their
English analogues. The nature of that meaning we will see in §9. Sentential is not
a strange way of writing English, but a free‐standing formal language, albeit one de‐
signed to represent some aspects of natural language.
To the grammarian, these differences are of great significance. To the logician, they
are not. When the logician considers these examples, what matters is that these are
all negated sentences, not the particular way that the idea of negation happens to be
implemented syntactically in English. All these sentences seem to be expressing an
idea that might roughly be expressed as ‘It is not the case that: Vassiliki likes ballet’.
These sentences are all acceptable PARAPHRASES of each other, because they all express
more or less the same content. They need not be perfectly synonymous to be accept‐
able paraphrases, because sometimes the small divergences in meaning do not matter
for our project.
Logic aims to represent relations within and between sentences that are significant for
arguments. Since there are important grammatical differences that are argumentat‐
ively insignificant, logic will sometimes overlook linguistic accuracy in order to capture
the ‘spirit’ of a sentence. That spirit is what is present in all acceptable paraphrases of
that sentence. If some group of sentences can all play more or less the same role in an
argument, the logician will aim to capture just what is essential to the argumentative
role. In the following argument, it doesn’t matter which of the previous examples we
use for the second premise: the argument remains good whichever we include.
Partly this tolerant attitude arises because logicians are interested principally in how
to represent natural language arguments in a formal language. The formal language
is already artificial and limited compared to the expressive power of natural language.
Sentential only has five different sentence connectives, compared to the rich variety
seen in English. From a logical point of view we are already sacrificing nuances of
meaning when we represent an argument in a formal language. So it really doesn’t
matter which way of paraphrasing the original argument we choose, as long as it pre‐
serves the gist of the argument, because we’ll already be modelling that argument in
a way that cannot be faithful to its exact meaning. So when symbolising natural lan‐
guage arguments into Sentential, it is generally appropriate to find a paraphrase that is
good enough, but one that most explicitly displays the sentence connectives that you
take to be involved in the logical structure of the argument. So while the sentence ‘It
is not the case that Vassiliki likes ballet’ is much more stilted sounding than ‘Vassiliki
doesn’t like ballet’, it has the virtue of displaying the logical structure of the sentence
more clearly. It is obvious, in the paraphrase, that we have a negation operating on the
canonical clause ‘Vassiliki likes ballet’, and that makes symbolisation straightforward.
As will be evident throughout this section, symbolising an argument in Sentential is
not like translating it into another natural language. Translation aims to preserve the
meaning of your original argument in all its nuance. Symbolisation is more like mod‐
elling, where you choose to include certain important features and leave out other
§5. CONNECTIVES 37
features that are not important for you. A physicist, for example, might model a sys‐
tem of moving bodies by treating all of them as point particles. Of course the bodies
are in fact extended in space, but for the purposes for which the model is designed
it may be irrelevant to include that detail, if all that the physicist is trying to do is to
predict the overall trajectories of those bodies. Likewise, in logic if our project is to
analyse whether an argument is conclusive or not, we may not need to include every
detail of meaning in order to complete that project. A good model needn’t be perfectly
accurate, and in fact, highly accurate models can be very poor because their additional
complexity makes them too unwieldy to work with. We will happily settle for models
which are good enough to represent the important details. In the present context, that
means we want to paraphrase arguments in a way that makes their logical structure
explicit, and then to symbolise them using the closest available Sentential connectives,
even if they are not perfectly synonymous with the English sentence connectives they
contain. We’ll return to this issue further in §8.4.
Because a symbolisation doesn’t exactly match the original sentence in meaning, there
is some room for the exercise of judgment. You will need sometimes to make choices
between different paraphrases that seem to capture the gist of the sentence about
equally well. You may have to make a judgment about what the sentence is ‘really’
trying to say. Such choices cannot be reduced to a mechanical algorithm. Though
you can ask yourself some leading questions: ‘which way was this ambiguous sentence
intended?’ or ‘are these two options plausibly intended to be understood as mutually
exclusive?’.
5.2 Negation
Consider how we might symbolise these sentences:
In order to symbolise sentence 28, we will need an atomic sentence. We might offer
this symbolisation key:
𝐵: Mary is in Barcelona.
Since sentence 29 is obviously related to the sentence 28, we shall not want to symbolise
it with a completely different sentence. Roughly, sentence 29 means something like ‘It
is not the case that B’. In order to symbolise this, we need a symbol for negation. We
will use ‘¬’. Now we can symbolise sentence 29 with ‘¬𝐵’.
Sentence 30 also contains the word ‘not’. And it is obviously equivalent to sentence 29.
As such, we can also symbolise it with ‘¬𝐵’.
It is much more common in English to see negation appear as it does in 30, with a
‘not’ found somewhere within the sentence, than the more formal form in 29. The
form in 29 has the benefit that the sentence ‘Mary is in Barcelona’ itself appears as
Sentence 31 can now be symbolised by ‘𝑅’. Moving on to sentence 32: saying the widget
is irreplaceable means that it is not the case that the widget is replaceable. So even
though sentence 32 does not contain the word ‘not’, we shall symbolise it as follows:
‘¬𝑅’. This is close enough for our purposes.
Sentence 33 can be paraphrased as ‘It is not the case that the widget is irreplaceable.’
Which can again be paraphrased as ‘It is not the case that it is not the case that the
widget is replaceable’. So we might symbolise this English sentence with the Sentential
sentence ‘¬¬𝑅’. Any sentence of Sentential can be negated, not only atomic sentences,
so ‘¬¬𝑅’ is perfectly acceptable as a sentence of Sentential. You might have the sense
that these two negations should ‘cancel out’, and in Sentential that sense will turn out
to be vindicated.
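In the two‐valued setting that Sentential adopts, this ‘cancelling out’ can be confirmed directly, by checking both possible truth values – a tiny illustrative check in Python:

```python
# Check that two negations cancel, for both possible truth values.
for R in (True, False):
    assert (not (not R)) == R
print('double negation cancels in a two-valued setting')
```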
But some care is needed when handling negations. Consider:
If we let the Sentential‐sentence ‘𝐻’ symbolise ‘Jane is happy’, then we can symbolise
sentence 34 as ‘𝐻’. However, it would be a mistake in general to symbolise sentence 35
with ‘¬𝐻’. If Jane is unhappy, then she is not happy; but sentence 35 does not mean the
same thing as ‘It is not the case that Jane is happy’. Jane might be neither happy nor
unhappy; she might be in a state of blank indifference. In order to symbolise sentence
35, then, we will typically want to introduce a new atomic sentence of Sentential. Nev‐
ertheless, there may be limited circumstances where it doesn’t matter which of these
nonsynonymous ways to deny ‘Jane is happy’ I adopt.
Sometimes a sentence will include a negative for literary effect, even though what is
expressed isn’t negative. Consider
36. I don’t like cricket, I love it.
The first clause seems negative, but the speaker confounds the hearer’s expectation
by going on to assert something even more positive than mere liking. Here the first
clause really means something like ‘I don’t merely like cricket’ – what is denied is that
the speaker’s affection for cricket is limited to mere liking.2
5.3 Conjunction
Consider these sentences:
37. Adam is athletic.
38. Barbara is athletic.
39. Adam is athletic, and Barbara is also athletic.
We will need separate atomic sentences of Sentential to symbolise sentences 37 and 38;
perhaps
𝐴: Adam is athletic.
𝐵: Barbara is athletic.
Sentence 37 can now be symbolised as ‘𝐴’, and sentence 38 can be symbolised as ‘𝐵’.
Sentence 39 roughly says ‘A and B’. We need another symbol, to deal with ‘and’. We will
use ‘∧’. Thus we will symbolise it as ‘(𝐴 ∧ 𝐵)’. This connective is called CONJUNCTION.
We also say that ‘𝐴’ and ‘𝐵’ are the two CONJUNCTS of the conjunction ‘(𝐴 ∧ 𝐵)’.
Notice that we make no attempt to symbolise the word ‘also’ in sentence 39. Words
like ‘both’ and ‘also’ function to draw our attention to the fact that two things are being
conjoined. Maybe they affect the emphasis of a sentence. But we will not (and cannot)
symbolise such things in Sentential.
Some more examples will bring out this point:
40. Barbara is athletic and energetic.
41. Barbara and Adam are both athletic.
42. Although Barbara is energetic, she is not athletic.
43. Adam is athletic, but Barbara is more athletic than him.
Sentence 40 is obviously a conjunction. The sentence says two things (about Barbara).
In English, it is permissible to refer to Barbara only once. It might be tempting to
2 Technically, the negation here targets not the content of the clause, but the typical expectation that a
cooperative speaker who says ‘I like X’ is communicating that liking is the highest point on the scale of
affection their attitude to X reaches. These ‘scalar implicatures’ are discussed at length in Larry Horn
(1989) A Natural History of Negation, University of Chicago Press, p. 382. We return to this ‘metalin‐
guistic’ negation in §19.4 below.
think that we need to symbolise sentence 40 with something along the lines of ‘𝐵 and
energetic’. This would be a mistake. Once we symbolise part of a sentence as ‘𝐵’, any
further structure is lost. ‘𝐵’ is an atomic sentence of Sentential. Conversely, ‘energetic’
is not an English sentence at all. What we are aiming for is something like ‘𝐵 and
Barbara is energetic’. So we need to add another sentence letter to the symbolisation
key. Let ‘𝐸 ’ symbolise ‘Barbara is energetic’. Now the entire sentence can be symbolised
as ‘(𝐵 ∧ 𝐸)’.
Sentence 41 says one thing about two different subjects. It says of both Barbara and
Adam that they are athletic, and in English we use the word ‘athletic’ only once. The
sentence can be paraphrased as ‘Barbara is athletic, and Adam is athletic’. We can
symbolise this in Sentential as ‘(𝐵 ∧ 𝐴)’, using the same symbolisation key that we have
been using.
Sentence 42 is slightly more complicated. The word ‘although’ sets up a contrast
between the first part of the sentence and the second part. Nevertheless, the sentence
tells us both that Barbara is energetic and that she is not athletic. In order to make
each of the conjuncts an atomic sentence, we need to replace ‘she’ with ‘Barbara’. So we
can paraphrase sentence 42 as, ‘Both Barbara is energetic, and Barbara is not athletic’.
The second conjunct contains a negation, so we paraphrase further: ‘Both Barbara is
energetic and it is not the case that Barbara is athletic’. And now we can symbolise this
with the Sentential sentence ‘(𝐸 ∧ ¬𝐵)’. Note that we have lost all sorts of nuance in this
symbolisation. There is a distinct difference in tone between sentence 42 and ‘Both
Barbara is energetic and it is not the case that Barbara is athletic’. Sentential does not
(and cannot) preserve these nuances.
Sentence 43 raises similar issues. There is a contrastive structure. The speaker who
asserts it means something by that ‘but’ — something to the effect of there being a
contrast between those two features. But their brief utterance doesn’t tell exactly which
contrast is intended. These contrasts are not something that Sentential is designed to
deal with. So we can paraphrase the sentence as ‘Both Adam is athletic, and Barbara is
more athletic than Adam’. (Notice that we once again replace the pronoun ‘him’ with
‘Adam’.) How should we deal with the second conjunct? We already have the sentence
letter ‘𝐴’, which is being used to symbolise ‘Adam is athletic’, and the sentence ‘𝐵’ which
is being used to symbolise ‘Barbara is athletic’; but neither of these concerns their
relative ‘athleticity’. So, to symbolise the entire sentence, we need a new sentence
letter. Let the Sentential sentence ‘𝑅’ symbolise the English sentence ‘Barbara is more
athletic than Adam’. Now we can symbolise sentence 43 by ‘(𝐴 ∧ 𝑅)’.
You might be wondering why I am putting parentheses around the conjunctions. The
reason for this is to avoid potential ambiguity. This can be brought out by considering
how negation might interact with conjunction. Consider:
44. It’s not the case that you will get both soup and salad.
45. You will not get soup but you will get salad.
Sentence 44 can be paraphrased as ‘It is not the case that: both you will get soup and
you will get salad’. Using this symbolisation key:
𝑆1 : You will get soup.
𝑆2 : You will get salad.
We would symbolise ‘both you will get soup and you will get salad’ as ‘(𝑆1 ∧ 𝑆2 )’. To
symbolise sentence 44, then, we simply negate the whole sentence, thus: ‘¬(𝑆1 ∧ 𝑆2 )’.
Sentence 45 is a conjunction: you will not get soup, and you will get salad. ‘You will not
get soup’ is symbolised by ‘¬𝑆1 ’. So to symbolise sentence 45 itself, we offer ‘(¬𝑆1 ∧ 𝑆2 )’.
These English sentences are very different, and their symbolisations differ accordingly.
In one of them, the entire conjunction is negated. In the other, just one conjunct is
negated. Parentheses help us to avoid ambiguity by clearly distinguishing these two cases.
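The difference between the two symbolisations can be made vivid by tabulating them. The following sketch (a hypothetical illustration in Python, not part of Sentential itself) finds the assignments on which ‘¬(𝑆1 ∧ 𝑆2 )’ and ‘(¬𝑆1 ∧ 𝑆2 )’ come apart:

```python
from itertools import product

def negated_conjunction(s1, s2):
    """¬(S1 ∧ S2): it is not the case that you get both."""
    return not (s1 and s2)

def conjunction_with_negation(s1, s2):
    """(¬S1 ∧ S2): you don't get soup, but you do get salad."""
    return (not s1) and s2

# Collect the truth-value assignments on which the two readings disagree.
disagreements = [
    (s1, s2)
    for s1, s2 in product((True, False), repeat=2)
    if negated_conjunction(s1, s2) != conjunction_with_negation(s1, s2)
]
print(disagreements)  # → [(True, False), (False, False)]
```

The two readings disagree on half of the assignments, which is exactly why the parentheses must settle which one we mean.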
Once again, however, English does feature this sort of ambiguity. Suppose instead of
44, we’d just said
46. You will not get soup and salad.
Sentence 46 is arguably ambiguous: one of its readings says the same thing as 44,
the other says the same thing as 45. Parentheses enable Sentential to avoid precisely
this ambiguity.
The introduction of parentheses prompts us to define some other useful concepts. If
a sentence of Sentential has the overall form (𝒜 ∧ ℬ), we say that its main connective is
conjunction – even if other connectives occur within 𝒜 or ℬ. Likewise, if the sentence
has the form ¬𝒜 , its main connective is negation. We define the scope of an occurrence
of a connective in a sentence as the subsentence which has that connective as its main
connective. So the scope of ‘¬’ in ‘¬(𝑆1 ∧ 𝑆2 )’ is the whole sentence (because negation
is the main connective), while the scope of ‘¬’ in ‘(¬𝑆1 ∧ 𝑆2 )’ is just the subsentence
‘¬𝑆1 ’ – the main connective of the whole sentence is conjunction. Parentheses help us keep track of things like the scope of our connectives, and of which connective in a sentence is the main connective. I say more about the notions of a main connective and the
scope of a connective in §6.3.
5.4 Disjunction
Consider these sentences:
47. Either Denison will play golf with me, or he will watch movies.
48. Either Denison or Ellery will play golf with me.
To symbolise sentence 47 we need a symbol for ‘or’: we will use ‘∨’. With a suitable key – say, ‘𝐷’ for ‘Denison will play golf with me’ and ‘𝑀’ for ‘Denison will watch movies’ – sentence 47 can be symbolised as ‘(𝐷 ∨ 𝑀)’. This connective is called DISJUNCTION, and ‘𝐷’ and ‘𝑀’ are the two DISJUNCTS of the disjunction ‘(𝐷 ∨ 𝑀)’.
Sometimes in English, the word ‘or’ excludes the possibility that both disjuncts are
true. This is called an EXCLUSIVE OR. An exclusive ‘or’ is clearly intended when a restaurant menu says, ‘Entrees come with either soup or salad’: you may have soup;
you may have salad; but, if you want both soup and salad, then you have to pay extra.
At other times, the word ‘or’ allows for the possibility that both disjuncts might be true.
This is probably the case with sentence 48, above. I might play golf with Denison, with
Ellery, or with both Denison and Ellery. Sentence 48 merely says that I will play with
at least one of them. This is called an INCLUSIVE OR. The Sentential symbol ‘∨’ always
symbolises an inclusive ‘or’.
It might help to see negation interact with disjunction. Consider:
49. Either you will not have soup, or you will not have salad.
50. You will have neither soup nor salad.
51. You get either soup or salad, but not both.
Using the same symbolisation key as before, sentence 49 can be paraphrased in this
way: ‘Either it is not the case that you get soup, or it is not the case that you get salad’.
To symbolise this in Sentential, we need both disjunction and negation. ‘It is not the
case that you get soup’ is symbolised by ‘¬𝑆1 ’. ‘It is not the case that you get salad’ is
symbolised by ‘¬𝑆2 ’. So sentence 49 itself is symbolised by ‘(¬𝑆1 ∨ ¬𝑆2 )’.
Sentence 50 also requires negation. It can be paraphrased as, ‘It is not the case that
either you get soup or you get salad’. Since this negates the entire disjunction, we
symbolise sentence 50 with ‘¬(𝑆1 ∨ 𝑆2 )’.
Sentence 51 is an exclusive ‘or’. We can break the sentence into two parts. The first
part says that you get one or the other. We symbolise this as ‘(𝑆1 ∨ 𝑆2 )’. The second
part says that you do not get both. We can paraphrase this as: ‘It is not the case both
that you get soup and that you get salad’. Using both negation and conjunction, we
symbolise this with ‘¬(𝑆1 ∧ 𝑆2 )’. Now we just need to put the two parts together. As we
saw above, ‘but’ can usually be symbolised with ‘∧’. Sentence 51 can thus be symbolised
as ‘((𝑆1 ∨ 𝑆2 ) ∧ ¬(𝑆1 ∧ 𝑆2 ))’. This last example shows something important. Although
the Sentential symbol ‘∨’ always symbolises inclusive ‘or’, we can symbolise an exclusive
‘or’ in Sentential. We just have to use a few of our other symbols as well.
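We can check that the compound symbolisation really does behave as an exclusive ‘or’. The sketch below (Python again, purely illustrative) compares ‘((𝑆1 ∨ 𝑆2 ) ∧ ¬(𝑆1 ∧ 𝑆2 ))’ with a direct test that the disjuncts differ in truth value:

```python
from itertools import product

def exclusive_or(s1, s2):
    """((S1 ∨ S2) ∧ ¬(S1 ∧ S2)): one or the other, but not both."""
    return (s1 or s2) and not (s1 and s2)

# An exclusive 'or' is true exactly when the disjuncts differ in truth value.
for s1, s2 in product((True, False), repeat=2):
    assert exclusive_or(s1, s2) == (s1 != s2)

print("the symbolisation behaves as an exclusive 'or'")
```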
5.5 Conditional
Consider these sentences:
52. If Jean is in Paris, then Jean is in France.
53. Jean is in Paris if Jean is in France.

We shall use this symbolisation key:
𝑃: Jean is in Paris.
𝐹 : Jean is in France.
Sentence 52 is roughly of this form: ‘if P, then F’. We will use the symbol ‘→’ to symbolise
this ‘if …, then …’ structure. So we symbolise sentence 52 by ‘(𝑃 → 𝐹)’. The connective
is called THE CONDITIONAL. Here, ‘𝑃’ is called the ANTECEDENT of the conditional
‘(𝑃 → 𝐹)’, and ‘𝐹 ’ is called the CONSEQUENT.
Sentence 53 is also a conditional. Since the word ‘if’ appears in the second half of the
sentence, it might be tempting to symbolise this in the same way as sentence 52. That
would be a mistake. My knowledge of geography tells me that sentence 52 is unprob‐
lematically true: there is no way for Jean to be in Paris that doesn’t involve Jean being
in France. But sentence 53 is not so straightforward: were Jean in Dijon, Marseilles, or
Toulouse, Jean would be in France without being in Paris, thereby rendering sentence
53 false. Since geography alone dictates the truth of sentence 52, whereas travel plans
(say) are needed to know the truth of sentence 53, they must mean different things.
In fact, sentence 53 can be paraphrased as ‘If Jean is in France, then Jean is in Paris’. So
we can symbolise it by ‘(𝐹 → 𝑃)’.
In fact, many English expressions can be represented using the conditional. Consider:
With a little thought, we can see that all four of these sentences mean the same as ‘If Jean is in Paris, then Jean is in France’. So they can all be symbolised by ‘(𝑃 → 𝐹)’.
It is important to bear in mind that the connective ‘→’ tells us only that, if the ante‐
cedent is true, then the consequent is true. It says nothing about a causal connection
between two events (for example). In fact, we seem to lose a huge amount when we
use ‘→’ to symbolise English conditionals. We shall return to this in §§8.6 and 11.5.
5.6 Biconditional
Consider these sentences:
𝐻: Shergar is a horse.
𝑀: Shergar is a mammal.
To symbolise ‘if and only if’ – standardly abbreviated to ‘iff’ – we use the symbol ‘↔’, called THE BICONDITIONAL. So ‘Shergar is a horse if and only if Shergar is a mammal’ is symbolised as ‘(𝐻 ↔ 𝑀)’.
Other expressions in English which can be used to mean ‘iff’ include ‘exactly if’ and
‘exactly when’, or even ‘just in case’. So if we say ‘You run out of time exactly when the
buzzer sounds’, we mean: ‘if the buzzer sounds, then you are out of time; and also if
you are out of time, then the buzzer sounds’.
A word of caution. Ordinary speakers of English often use ‘if …, then …’ when they
really mean to use something more like ‘… if and only if …’. Perhaps your parents
told you, when you were a child: ‘if you don’t eat your vegetables, you won’t get any
dessert’. Suppose you ate your vegetables, but that your parents refused to give you
any dessert, on the grounds that they were only committed to the conditional (roughly
‘if you get dessert, then you will have eaten your vegetables’), rather than the bicondi‐
tional (roughly, ‘you get dessert iff you eat your vegetables’). Well, a tantrum would
rightly ensue. So, be aware of this when interpreting people; but in your own writing,
make sure you use the biconditional iff you mean to.
5.7 Unless
We have now introduced all of the connectives of Sentential. We can use them together
to symbolise many kinds of sentences, but not every kind. It is a matter of judgment
whether a given English connective can be symbolised in Sentential. One rather tricky
case is the English‐language connective ‘unless’:
61. Unless you wear a jacket, you will catch a cold.
62. You will catch a cold unless you wear a jacket.
These two sentences are clearly equivalent. To symbolise them, we shall use the sym‐
bolisation key:
𝐽: You will wear a jacket.
𝐷: You will catch a cold.
How should we try to symbolise these in Sentential? Note that 62 seems to say: ‘either you will catch cold, or, if you don’t catch cold, it will be because you wore a jacket’. That would have this symbolisation in Sentential: ‘(𝐷 ∨ (¬𝐷 → 𝐽))’. This turns out to be just
a long‐winded way of saying ‘(¬𝐷 → 𝐽)’, i.e., if you don’t catch cold, then you will have
worn a jacket.
Equally, however, both sentences mean that if you do not wear a jacket, then you will
catch cold. With this in mind, we might symbolise them as ‘¬𝐽 → 𝐷’.
Equally, both sentences mean that either you will wear a jacket or you will catch a cold.
With this in mind, we might symbolise them as ‘𝐽 ∨ 𝐷’.
All three are correct symbolisations. Indeed, in chapter 3 we shall see that all three
symbolisations are equivalent in Sentential.
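The equivalence is established officially in chapter 3, but we can anticipate the truth‑table check with a sketch. The Python below (purely illustrative; ‘implies’ is a hypothetical helper modelling ‘→’) confirms that the three symbolisations agree on every assignment to 𝐽 and 𝐷:

```python
from itertools import product

def implies(p, q):
    """The truth-functional conditional '→'."""
    return (not p) or q

# J: you will wear a jacket; D: you will catch a cold.
for J, D in product((True, False), repeat=2):
    first = D or implies(not D, J)    # (D ∨ (¬D → J))
    second = implies(not J, D)        # (¬J → D)
    third = J or D                    # (J ∨ D)
    assert first == second == third

print("all three symbolisations of 'unless' are equivalent")
```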
However, we must not symbolise ‘A unless B’ as ‘if B, then not A’. Consider:

63. We’ll capture the castle, unless the Duke tries to stop us.
This certainly says that if we fail to capture the castle, it will have been because of that
pesky Duke. But what if the Duke tries but fails to stop us? We might in that case
capture the castle even though he tried to stop us. While 63 is still true, it is not the
case that ‘if the Duke tries to stop us, we won’t capture the castle’ is true.
Key Ideas in §5
› Sentential features five connectives: ‘∧’ (‘and’), ‘∨’ (‘or’), ‘¬’ (‘not’),
‘→’ (‘if …, then …’) and ‘↔’ (‘if and only if’ or ‘iff’).
› These connectives, alone and in combination, can be used to
symbolise many English constructions, even some which do not
feature the English counterparts of the connectives – as when we
approached the symbolisation of sentences involving ‘unless’.
› Figuring out how to symbolise a given natural language sentence
might not be straightforward, however, as in the cases of ‘A if B’
and ‘A only if B’, which have quite different symbolisations into
Sentential. ‘A unless B’ is perhaps even trickier: it is sometimes used ambiguously by English speakers, and it can be symbolised in several equally good ways.
› What matters in symbolisation is that the ‘spirit’ of the argument
is preserved and modelled appropriately, not that every nuance
of meaning is preserved.
Practice exercises
A. Using the symbolisation key given, symbolise each English sentence in Sentential.
B. Using the symbolisation key given, symbolise each English sentence in Sentential.
C. Using the symbolisation key given, symbolise each English sentence in Sentential.
𝐸1 : Ava is an electrician.
𝐸2 : Harrison is an electrician.
𝐹1 : Ava is a firefighter.
𝐹2 : Harrison is a firefighter.
𝑆1 : Ava is satisfied with her career.
𝑆2 : Harrison is satisfied with his career.
D. Give a symbolisation key and symbolise the following English sentences in Senten‐
tial.
E. Give a symbolisation key and symbolise the following English sentences in Sentential.
1. If there is food to be found in the pridelands, then Rafiki will talk about squashed
bananas.
2. Rafiki will talk about squashed bananas unless Simba is alive.
3. Rafiki will either talk about squashed bananas or he won’t, but there is food to
be found in the pridelands regardless.
4. Scar will remain as king if and only if there is food to be found in the pridelands.
5. If Simba is alive, then Scar will not remain as king.
F. For each argument, write a symbolisation key and symbolise all of the sentences of
the argument in Sentential.
1. If Dorothy plays the piano in the morning, then Roger wakes up cranky. Dorothy
plays piano in the morning unless she is distracted. So if Roger does not wake
up cranky, then Dorothy must be distracted.
2. It will either rain or snow on Tuesday. If it rains, Neville will be sad. If it snows,
Neville will be cold. Therefore, Neville will either be sad or cold on Tuesday.
3. If Zoog remembered to do his chores, then things are clean but not neat. If he
forgot, then things are neat but not clean. Therefore, things are either neat or
clean; but not both.
G. We symbolised an exclusive ‘or’ using ‘∨’, ‘∧’, and ‘¬’. How could you symbolise an
exclusive ‘or’ using only two connectives? Is there any way to symbolise an exclusive
‘or’ using only one connective?
6
Sentences of Sentential
The sentence ‘either apples are red, or berries are blue’ is a sentence of English, and
the sentence ‘(𝐴 ∨ 𝐵)’ is a sentence of Sentential. Although we can identify sentences
of English when we encounter them, we do not have a formal definition of ‘sentence
of English’. But in this chapter, we shall offer a complete definition of what counts as a
sentence of Sentential. This is one respect in which a formal language like Sentential is
more precise than a natural language like English. Of course, Sentential was designed
to be much simpler than English.
6.1 Expressions
We have seen that there are three kinds of symbols in Sentential:
Atomic sentences 𝐴, 𝐵, 𝐶, …, 𝑍
with subscripts, as needed 𝐴1 , 𝐵1 , 𝑍1 , 𝐴2 , 𝐴25 , 𝐽375 , …
Connectives ¬, ∧, ∨, →, ↔
Parentheses (,)
6.2 Sentences
We want to know when an expression of Sentential amounts to a sentence. Many expres‐
sions of Sentential will be uninterpretable nonsense. ‘)𝐴17 𝐽𝑄𝐹¬𝐾))∧)()’ is a perfectly
good expression – a mere string of Sentential symbols – but it is plainly not a correctly formed sentence of our
language. Accordingly, we need to clarify the grammatical rules of Sentential. We’ve
already seen some of those rules when we introduced the atomic sentences and con‐
nectives. But I will now make them all explicit.
Obviously, individual atomic sentences like ‘𝐴’ and ‘𝐺13 ’ should count as sentences.
We can form further sentences out of these by using the various connectives. Using
negation, we can get ‘¬𝐴’ and ‘¬𝐺13 ’. Using conjunction, we can get ‘(𝐴∧𝐺13 )’, ‘(𝐺13 ∧𝐴)’,
‘(𝐴 ∧ 𝐴)’, and ‘(𝐺13 ∧ 𝐺13 )’. We could also apply negation repeatedly to get sentences
like ‘¬¬𝐴’ or apply negation along with conjunction to get sentences like ‘¬(𝐴 ∧ 𝐺13 )’
and ‘¬(𝐺13 ∧ ¬𝐺13 )’. The possible combinations are endless, even starting with just
these two sentence letters, and there are infinitely many sentence letters. So there is
no point in trying to list all the sentences one by one.
Instead, we will describe the process by which sentences can be constructed. Consider
negation: Given any sentence 𝒜 of Sentential, ¬𝒜 is a sentence of Sentential. (Why the
funny fonts? I return to this in §7.)
We can say similar things for each of the other connectives. For instance, if 𝒜 and ℬ
are sentences of Sentential, then (𝒜 ∧ ℬ) is a sentence of Sentential. Providing clauses like this for all of the connectives, we arrive at the following formal definition for a SENTENCE of Sentential:

1. Every atomic sentence is a sentence.
2. If 𝒜 is a sentence, then ¬𝒜 is a sentence.
3. If 𝒜 and ℬ are sentences, then (𝒜 ∧ ℬ) is a sentence.
4. If 𝒜 and ℬ are sentences, then (𝒜 ∨ ℬ) is a sentence.
5. If 𝒜 and ℬ are sentences, then (𝒜 → ℬ) is a sentence.
6. If 𝒜 and ℬ are sentences, then (𝒜 ↔ ℬ) is a sentence.
7. Nothing else is a sentence.
Definitions like this are called RECURSIVE. Recursive definitions begin with some spe‐
cifiable base elements, and then present ways to generate indefinitely many more ele‐
ments by compounding together previously established ones. To give you a better idea
of what a recursive definition is, we can give a recursive definition of the idea of an an‐
cestor of mine. We specify a base clause.
› My parents are ancestors of mine.

And then a recursive clause:

› Any parent of an ancestor of mine is an ancestor of mine.

Finally, nothing else is an ancestor of mine.
Using this definition, we can easily check to see whether someone is my ancestor: just
check whether she is the parent of the parent of … one of my parents. And the same is
true for our recursive definition of sentences of Sentential. Just as the recursive definition allows complex sentences to be built up from simpler parts, it allows us to decompose sentences into their simpler parts. And if the decomposition bottoms out in atomic sentences, the expression we began with is indeed a sentence.
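The ancestor definition can be mirrored in a few lines of code. Here is a hypothetical sketch with an entirely made‑up parent relation; ‘is_ancestor’ succeeds exactly when repeatedly taking parents reaches the person asked about:

```python
# A toy parent relation, invented purely for illustration.
parents = {
    'me': ['mum', 'dad'],
    'mum': ['grandma'],
    'grandma': [],
}

def is_ancestor(x, person='me'):
    """Base clause: my parents are ancestors of mine.
    Recursive clause: any parent of an ancestor of mine is an ancestor of mine."""
    ps = parents.get(person, [])
    return x in ps or any(is_ancestor(x, p) for p in ps)

print(is_ancestor('grandma'))   # → True: a parent of a parent of me
print(is_ancestor('stranger'))  # → False
```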
Let’s consider some examples.
1. First, consider the expression ‘¬¬¬𝐷’. This is a sentence if ‘¬¬𝐷’ is a sentence; ‘¬¬𝐷’ is a sentence if ‘¬𝐷’ is; and ‘¬𝐷’ is a sentence if ‘𝐷’ is. Since ‘𝐷’ is an atomic sentence, ‘¬¬¬𝐷’ is indeed a sentence.
2. Next, consider the expression ‘¬(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))’. Looking at the second clause
of the definition, this is a sentence if ‘(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))’ is. And this is a sentence
if both ‘𝑃’ and ‘¬(¬𝑄 ∨ 𝑅)’ are sentences. The former is an atomic sentence, and
the latter is a sentence if ‘(¬𝑄 ∨ 𝑅)’ is a sentence. It is. Looking at the fourth
clause of the definition, this is a sentence if both ‘¬𝑄’ and ‘𝑅’ are sentences. And
both are!
3. Now consider ‘((¬𝐸 ∨ 𝐹) → ¬¬𝐺)’. This is a sentence if both ‘(¬𝐸 ∨ 𝐹)’ and ‘¬¬𝐺’ are sentences. ‘(¬𝐸 ∨ 𝐹)’ is a sentence because ‘¬𝐸 ’ and ‘𝐹 ’ are sentences; and ‘¬¬𝐺’ is a sentence because ‘𝐺’ is. So the whole expression is a sentence.
4. A final example. Consider the expression ‘(𝑃 → ¬(𝑄 → 𝑃)’. If this is a sentence, then its main connective is ‘→’, and it was formed from the sentences ‘𝑃’ and
‘¬(𝑄 → 𝑃’. ‘𝑃’ is a sentence because it is a sentence letter. Is ‘¬(𝑄 → 𝑃’ a sen‐
tence? Only if ‘(𝑄 → 𝑃’ is a sentence. But this isn’t a sentence; it lacks a closing
parenthesis which would need to be there if this was correctly formed using the
clause in the definition covering conditionals. It follows that the expression with
which we started isn’t a sentence either.
In each case, some connective was introduced last, when constructing the sentence. We call that the MAIN CONNECTIVE of the sentence. In the case of ‘¬¬¬𝐷’, the main connective is the very first ‘¬’ sign. In the case
of ‘(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))’, the main connective is ‘∧’. In the case of ‘((¬𝐸 ∨ 𝐹) → ¬¬𝐺)’, the
main connective is ‘→’.
The recursive structure of sentences in Sentential will be important when we consider
the circumstances under which a particular sentence would be true or false. The sen‐
tence ‘¬¬¬𝐷’ is true if and only if the sentence ‘¬¬𝐷’ is false, and so on through the
structure of the sentence, until we arrive at the atomic components. We will return to
this point in chapter 3.
The recursive structure of sentences in Sentential also allows us to give a formal defini‐
tion of the scope of a negation (mentioned in §5.3). The scope of ‘¬’ is the subsentence
for which ‘¬’ is the main connective. So a sentence like
(𝑃 ∧ (¬(𝑅 ∧ 𝐵) ↔ 𝑄))
was constructed by conjoining ‘𝑃’ with ‘(¬(𝑅 ∧ 𝐵) ↔ 𝑄)’. This last sentence was con‐
structed by placing a biconditional between ‘¬(𝑅 ∧ 𝐵)’ and ‘𝑄’. And the former of these
sentences – a subsentence of our original sentence – is a sentence for which ‘¬’ is the
main connective. So the scope of the negation is just ‘¬(𝑅 ∧ 𝐵)’. More generally:
The SCOPE of an instance of a connective in a sentence is the subsentence for which that instance is the main connective.
I talk of ‘instances’ of a connective because, in an example like ‘¬¬𝐴’, there are two
occurrences of the negation connective, with different scopes – one is the main con‐
nective of the whole sentence, the other has just ‘¬𝐴’ as its scope.
The recursive definition of a sentence of Sentential can also be depicted using a FORMA‐
TION TREE, similar to the syntactic trees for English we saw in §4, but much simpler. At
each leaf node of the tree is either an atomic sentence or a sentence connective. Each
nonleaf node contains a Sentential sentence, and branching from it are (i) its main
connective, and (ii) the immediate subsentences in the scope of the main connective.
Sentential sentences are just those expressions of the language that have a formation
tree respecting these rules. Consider again the sentence ‘¬(𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))’. This has
the formation tree depicted in Figure 6.1.
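Formation trees can themselves be generated recursively. The following sketch (hypothetical, and again restricted to single‑letter atomic sentences and strict parenthesisation) prints an indented formation tree in the spirit of Figure 6.1:

```python
def split_at_main_connective(expr):
    """Return the main connective and the immediate subsentence(s)."""
    if expr.startswith('¬'):
        return '¬', [expr[1:]]
    depth = 0
    for i, ch in enumerate(expr):
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        elif ch in '∧∨→↔' and depth == 1:
            return ch, [expr[1:i], expr[i + 1:-1]]
    raise ValueError(f'{expr!r} has no main connective')

def print_formation_tree(expr, indent=0):
    print(' ' * indent + expr)
    if len(expr) > 1:                        # not atomic: recurse
        _, parts = split_at_main_connective(expr)
        for part in parts:
            print_formation_tree(part, indent + 2)

print_formation_tree('¬(P∧¬(¬Q∨R))')
```

Each level of indentation corresponds to one application of a clause of the recursive definition, bottoming out in the atomic sentences at the leaves.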
At the beginning of this chapter, we introduced the sentences of Sentential in a way that
was parasitic upon identifying the structure of certain English sentences. Now we can
see that Sentential has its own syntax, which can be understood and used independently
of English. Our understanding of the meaning of the sentence connectives is still tied
to their English counterparts, but in §8.3 we will see that we can also understand their
meaning independently from the natural language we used to motivate them. However,
it remains true that what makes Sentential useful is that its syntax and connectives
are designed to capture, more or less, elements of the structure of natural language
sentences.
¬ (𝑃 ∧ ¬(¬𝑄 ∨ 𝑅))
𝑃 ∧ ¬(¬𝑄 ∨ 𝑅)
¬ (¬𝑄 ∨ 𝑅)
¬𝑄 ∨ 𝑅
¬ 𝑄
Leaving the temporal adverbial phrase ‘at all times’ aside, here are two symbolisations
of our target sentence:
This ambiguity matters. If a child is carried, but is not quiet, the parent is violating
the regulation with the first structure, but in compliance with the regulation with the
second structure. As we will soon see, Sentential has resources to ensure that no ambi‐
guity is present in any symbolisation of a given target sentence, which helps us make
clear what we really might have meant by that sentence. And as this case makes clear,
that might be important in the framing of legal statutes or contracts, in the formula‐
tion of government policy, and in other written documents where clarity of meaning
is crucial.
(𝐻 → 𝐼) ∨ (𝐼 → 𝐻) ∧ (𝐽 ∨ 𝐾)
Key Ideas in §6
› The class of sentences of Sentential has a perfectly precise recurs‐
ive definition that allows us to determine in a step‐by‐step fash‐
ion, for any expression, whether it is a sentence or not.
› The main connective of a sentence is the final rule applied in the
construction of a sentence; the scope of a connective is that sub‐
sentence in the construction of which it is the main connective.
› Each Sentential sentence, unlike English sentences, has an unam‐
biguous structure.
› We can sometimes permit ourselves some liberality in the use of
parentheses in Sentential, when we can be sure it gives rise to no
difficulty.
Practice exercises
A. For each of the following: (a) Is it a sentence of Sentential, strictly speaking? (b) Is
it a sentence of Sentential, allowing for our relaxed parenthetical conventions?
1. (𝐴)
2. 𝐽374 ∨ ¬𝐽374
3. ¬¬¬¬𝐹
4. ¬∧𝑆
5. (𝐺 ∧ ¬𝐺)
6. (𝐴 → (𝐴 ∧ ¬𝐹)) ∨ (𝐷 ↔ 𝐸)
7. (𝑍 ↔ 𝑆) → 𝑊 ∧ 𝐽 ∨ 𝑋
8. (𝐹 ↔ ¬𝐷 → 𝐽) ∨ (𝐶 ∧ 𝐷)
B. Construct a formation tree in the style of Figure 6.1 for the following sentences:
C. Are there any sentences of Sentential that contain no atomic sentences? Explain
your answer.
7
Use and Mention
In this chapter, I have talked a lot about sentences. So I need to pause to explain an
important, and very general, point.
Consider these two claims:

› Malcolm Turnbull was Prime Minister of Australia.
› The expression ‘Malcolm Turnbull’ is composed of two upper case letters and
thirteen lower case letters.
When we want to talk about this ex‐Prime Minister, we USE his name. When we want
to talk about his name, we MENTION that name. And in English, we normally do so by
putting it in quotation marks.
There is a general point here. When we want to talk about things in the world, we
just use words. When we want to talk about words, we typically have to mention those
words.1 We need to indicate that we are mentioning them, rather than using them.
To do this, some convention is needed. We can surround the expression in matched left and right quotation marks, or display it centrally on the page (say). So this
sentence:
› ‘Malcolm Turnbull’ is the Prime Minister.
says that some expression is the Prime Minister. And that’s false. The man is the Prime
Minister; his name isn’t. Conversely, this sentence:
1 More generally, when we want to talk about something we use its name. So when we want to talk about
an expression, we use the name of the expression – which is just the expression enclosed in quotation
marks. Mentioning an expression is using a name of that expression.
› Malcolm Turnbull is composed of two upper case letters and thirteen lower case
letters.
also says something false: Malcolm Turnbull is a man, made of meat rather than letters.
One final example:
› ‘ ‘Malcolm Turnbull’ ’ is the name of ‘Malcolm Turnbull’.
On the left‐hand‐side, here, we have the name of a name (it consists of an expression in
quotation marks, and that embedded expression itself contains quotation marks). On
the right hand side, we have a name (of an expression). Perhaps this kind of sentence
only occurs in logic textbooks, but it is true.
Those are just general rules for quotation, and you should observe them carefully in
all your work! To be clear, the quotation‐marks here do not indicate indirect speech.
They indicate that you are moving from talking about an object, to talking about the
name of that object.
These conventions matter here, because we are studying the language Sentential, whose atomic sentences are written:
𝐴, 𝐵, 𝐶, 𝑍, 𝐴1 , 𝐵4 , 𝐴25 , 𝐽375 , …
These are sentences of the object language (Sentential). They are not sentences of Eng‐
lish. So I must not say, for example:
› 𝐷 is an atomic sentence of Sentential.
Obviously, I am trying to come out with an English sentence that says something about
the object language (Sentential). But ‘𝐷’ is a sentence of Sentential, and no part of Eng‐
lish. So the preceding is gibberish, just like:
The general point is that, whenever we want to talk in English about some specific
expression of Sentential, we need to indicate that we are mentioning the expression,
rather than using it. We can either deploy quotation marks, or we can adopt some
similar convention, such as placing it centrally on the page.
English is, generally, its own metalanguage. An expression of English enclosed in
matching quotation marks is another expression of English, as the quotation marks
are parts of English too. This causes a potential problem of ambiguity if the expression
quoted itself contains quotation marks. English allows us to talk about operations on
English expressions, as in this example:
64. An English word results from adding ‘ing’ or ‘ion’ to the expression ‘confus’.
But this example is ambiguous. On one reading, it is discussing the expressions ‘con‐
fusing’ and ‘confusion’, and saying truly that they are both English words. But on an‐
other reading, the matching quotation marks are the one before ‘ing’ and the one after
‘ion’, and Example 64 is stating falsely that this unusual string of English letters and
punctuation can be added to ‘confus’ to form an English word:
ing’ or ‘ion
To avoid this, we might introduce some mechanism for indicating which quotation
marks are matched with each other.
𝒜, ℬ, 𝒞, 𝒟, …
These symbols do not belong to Sentential. Rather, they are part of our (augmented)
metalanguage that we use to talk about any expression of Sentential. To repeat the
second clause of the recursive definition of a sentence of Sentential, we said:
2. If 𝒜 is a sentence, then ¬𝒜 is a sentence.

Had we instead offered clauses mentioning only specific sentences – for instance, ‘If ‘𝐴’ is a sentence, then ‘¬𝐴’ is a sentence’ –
this would not have allowed us to determine whether ‘¬𝐵’ is a sentence. To emphasise,
then:
You might think we could restore generality by writing the clause with quotation marks:

2′. If 𝒜 is a sentence, then ‘¬𝒜 ’ is a sentence.
But this is no good: ‘¬𝒜 ’ is not a Sentential sentence, since ‘𝒜 ’ is a symbol of (augmen‐
ted) English rather than a symbol of Sentential. What we really want to say is something
like this:
2″ . If 𝒜 is any Sentential sentence, then the expression that consists of the symbol
‘¬’, followed immediately by the sentence 𝒜 , is also a sentence.
This is rather long‐winded. To abbreviate it, we shall adopt the convention that ‘¬𝒜 ’ simply stands for:
the expression that consists of the symbol ‘¬’ followed by the sentence 𝒜
and similarly, for expressions like ‘(𝒜 ∧ ℬ)’, ‘(𝒜 ∨ ℬ)’, etc. The latter is the expression
which consists of an opening parenthesis, followed by the sentence 𝒜 , followed by the
symbol ‘∨’, followed by the sentence ℬ, followed by a closing parenthesis.
If you like, you can think of our recursive definition of a sentence as a schema stand‐
ing for infinitely many instances of each clause, one for each Sentential sentence. In
the schematic clause for negation (‘If 𝒜 is a sentence, ¬𝒜 is also a sentence’), we can
consider each instance in which ‘𝒜 ’ is replaced by some Sentential sentence, surrounded
by quotation marks in accordance with our conventions. So ‘¬𝒜 ’ is to be understood as
abbreviating the expression consisting of a left quotation mark, a negation sign, the
same Sentential sentence as 𝒜 , and a right quotation mark. Hence if 𝒜 is ‘𝑃’, ¬𝒜 just
is ‘¬𝑃’.
To avoid unnecessary clutter, we shall not regard this as requiring quotation marks
around it. This is a name of an argument, not an argument itself. (Note, then, that ‘∴’
is a symbol of our augmented metalanguage, and not a new symbol of Sentential.)
I could have said my say fair and square, bending no rules. It would have
been tiresome, but it could have been done…. I could have taken great
care to distinguish between (1) the language I use when I talk about know‐
ledge, or whatever, and (2) the second language that I use to talk about
the semantic and pragmatic workings of the first language. If you want to
hear my story told that way, you probably know enough to do the job for
yourself.
2 David Lewis (1996) ‘Elusive Knowledge’, Australasian Journal of Philosophy 74, pp. 549–67, at pp. 566–7.
§7. USE AND MENTION 61
Wise words. In the end, the distinction between use and mention is intended to re‐
move potential confusion. But sometimes over‐eager application of it can prove just
as big an obstacle to communication.
Key Ideas in §7
› It is crucial to distinguish between use and mention – between
talking about the world, and talking about expressions.
› We use Sentential to represent sentences and arguments. But we
use English – augmented with some additional vocabulary – to
talk about Sentential.
› We introduced a slightly unusual convention for understanding
quoted expressions involving script font letters: ‘(𝒜 → ℬ)’, to
take a representative example, is to be interpreted as the Sen‐
tential expression consisting of a left parenthesis, followed by
whatever Sentential expression 𝒜 represents, followed by ‘→’, fol‐
lowed by whatever Sentential expression ℬ represents, followed
by a closing right parenthesis.
Practice exercises
A. For each of the following: Are the quotation marks correctly used, strictly speaking?
If not, propose a corrected version.
B. Example 64 was ambiguous because it was unclear which pairs of quotation marks
were matched with each other. Can you come up with a proposal for how we might
indicate matching quotation marks to avoid this potential ambiguity?
Chapter 3
Truth Tables
8
Truth‐Functional Connectives
8.1 Functions
So much for the grammar or syntax of Sentential. We turn now to the meaning of
Sentential sentences. For technical reasons, it is best to start with the intended inter‐
pretation of the connectives.
As a preliminary, we need to have the concept of a (mathematical) function. Frequently,
we refer to things not by name, but by the relations they have to other things. You
can refer to Barack Obama by name, but one could equally refer to him in relation
to his role, as ‘the 44th President of the United States’, or in relation to his family,
as ‘the husband of Michelle and father of Malia and Sasha’. These kinds of referring
expressions are known as descriptions, and we will look at them in more detail in §19.
Our interest in them now is in the relations they involve. Barack Obama is denoted
by ‘the biological father of Malia Obama’ or ‘the biological father of Sasha Obama’, in
relation to his children. But we can consider ‘the biological father of …’ in relation
to other individuals: so ‘the biological father of Ivanka Trump’ denotes Donald, ‘the
biological father of Daisy Turnbull’ denotes Malcolm, and so on. We can summarise
this information in a table like this:
Malia Obama       Barack Obama
Sasha Obama       Barack Obama
Ivanka Trump      Donald Trump
Daisy Turnbull    Malcolm Turnbull
In this table we have an input, on the left, which is related to the output on the right.
The relation which maps the things in the left column to their corresponding outputs
on the right is known as a function – in this case, the ‘biological father of’ function.
More precisely, a FUNCTION is a relation between the members of some collection 𝐴 and
some collection 𝐵 (which may be the same as 𝐴), such that each input to the function
is a member of 𝐴, each output of the function is a member of 𝐵, and, crucially, each
member of 𝐴 is associated with at most one member of 𝐵. So ‘the biological father of …’
is a function from the set of people (living or dead) to itself, and associates each person
with their biological father. We implicitly assume that ‘biological father’ is understood
so that each person is associated with a unique father. If we consider other notions of fatherhood, such
as paternal figure, those would not yield a function, because many people have two
or more paternal figures in their lives. Note that while everyone is associated with a
unique biological father by this function (no input is associated with more than one
output), the converse does not hold: some outputs are linked to more than one input,
in fact, every father with more than one child.
Common examples of functions occur in mathematics: we can consider the function
‘the sum of 𝑥 and 𝑦’, which takes two numbers as input, and spits out their unique
sum, 𝑥 + 𝑦. This is again a function from a set to itself, this time the set of integers.
There are functions which are from one set to another: consider, ‘the number of …’s
children’, which is a function from people to numbers, mapping each person to the
number of children they have. (Even if they have more than one child, there is still a
unique number characterising how many they have, and that is what this function spits
out.) There are functions which do not associate an output with every input: consider
‘the eldest child of’, which associates each parent with their first‐born child, but doesn’t
associate nonparents with anything. There are many relations which are not functions.
While Barack Obama can be characterised as ‘the father of Malia’, Malia Obama cannot
be characterised as ‘the child of Barack’, since that attempted description would apply
equally to her sister.
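To make the at‐most‐one‐output condition vivid, here is a small sketch in Python. A dictionary is a natural model of a (partial) function; the entries are just the illustrative examples from the text, not a serious genealogy:

```python
# The 'biological father of' function as a dictionary: each input (key)
# gets at most one output (value). All entries are the text's examples.
biological_father = {
    "Malia Obama": "Barack Obama",
    "Sasha Obama": "Barack Obama",
    "Ivanka Trump": "Donald Trump",
    "Daisy Turnbull": "Malcolm Turnbull",
}

# Every input has exactly one output: that is what makes this a function.
# The converse fails: two inputs share the output "Barack Obama".
assert list(biological_father.values()).count("Barack Obama") == 2

# Inverting the relation shows why 'the child of ...' is NOT a function:
# one input ("Barack Obama") would need two distinct outputs.
children_of = {}
for child, father in biological_father.items():
    children_of.setdefault(father, []).append(child)
assert len(children_of["Barack Obama"]) > 1
```

The inverted dictionary has to collect a *list* of children against each father, which is precisely the failure of the at‐most‐one‐output condition.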
It turns out that we don’t need to know anything more about the atomic sentences
of Sentential than their truth values in order to assign a truth value to nonatomic, or
COMPOUND, sentences. More generally, the truth value of any compound sentence
depends only on the truth value of the subsentences that comprise it. In order to
know the truth value of ‘(𝐷 ∧ 𝐸)’, for instance, you only need to know the truth value of
‘𝐷’ and the truth value of ‘𝐸 ’. In order to know the truth value of ‘(𝐷 ∧ 𝐸) ∨ 𝐹 ’, you need
only know the truth value of ‘(𝐷 ∧ 𝐸)’ and ‘𝐹 ’. And so on. This is in fact a good part of the
reason why we chose these connectives, and chose their English near‐equivalents as
our structural words. To determine the truth value of some Sentential sentence, we only
need to know the truth value of its components. This is why the study of Sentential is
termed truth‐functional logic.
Negation For any sentence 𝒜 : If 𝒜 is true, then ‘¬𝒜 ’ is false. If ‘¬𝒜 ’ is true, then
𝒜 is false. We can summarize this dependence in the SCHEMATIC TRUTH TABLE for
negation, which shows how any sentence with negation as its main connective has a
truth value depending on the truth value of its immediate subsentence:
𝒜 ¬𝒜
T F
F T
This is a schematic table, because the input truth values are not associated with any
specific Sentential sentence, but with arbitrarily chosen sentences which have the truth
value in question. Whatever Sentential sentence we choose in place of 𝒜 , whether
atomic or not, we know that if 𝒜 has the truth value True, then ¬𝒜 will have the truth
value False.
Conjunction For any sentences 𝒜 and ℬ, 𝒜∧ℬ is true if and only if both 𝒜 and ℬ
are true. We can summarize this in the schematic truth table for conjunction:
𝒜 ℬ 𝒜∧ℬ
T T T
T F F
F T F
F F F
Note that conjunction is COMMUTATIVE. The truth value for 𝒜 ∧ ℬ is always the same
as the truth value for ℬ ∧ 𝒜 .
Disjunction Recall that ‘∨’ always represents inclusive or. So, for any sentences 𝒜
and ℬ, 𝒜 ∨ ℬ is true if and only if either 𝒜 or ℬ is true. We can summarize this in the
schematic truth table for disjunction:
𝒜 ℬ 𝒜∨ℬ
T T T
T F T
F T T
F F F
Conditional I’m just going to come clean and admit it. Conditionals are a problem
in Sentential. This is not because there is any problem finding a truth table for the
connective ‘→’, but rather because the truth table we put forward seems to make ‘→’
behave in ways that are different to the way that the English counterpart ‘if … then
…’ behaves. Exactly how much of a problem this poses is a matter of philosophical
contention. I shall discuss a few of the subtleties in §8.6 and §11.5. (It is no problem
for Sentential itself, of course – the only potential difficulty arises when we try to use
the Sentential conditional to represent ‘if …, then …’.)
We know at least this much from a parallel with the English conditional: if 𝒜 is true
and ℬ is false, then 𝒜 → ℬ should be false. The conditional claim ‘if I study hard, then
I’ll pass’ is clearly false if I study hard and still fail. For now, I am going to stipulate
that this is the only type of case in which 𝒜 → ℬ is false. We can summarize this with
a schematic truth table for the Sentential conditional.
𝒜 ℬ 𝒜→ℬ
T T T
T F F
F T T
F F T
The conditional is not commutative. You cannot swap the antecedent and consequent
in general without changing the truth value, because 𝒜 → ℬ has a different truth table
from ℬ → 𝒜 . Compare:
1. If a coin lands heads 1000 times, something surprising has happened. (True –
that is very surprising.)
2. If something surprising has happened, then a coin lands heads 1000 times. (False
– there are other surprising things than that.)
Biconditional Finally, for any sentences 𝒜 and ℬ, 𝒜 ↔ ℬ is true if and only if 𝒜
and ℬ have the same truth value. We can summarize this in the schematic truth table
for the biconditional:
𝒜 ℬ 𝒜↔ℬ
T T T
T F F
F T F
F F T
You can think of the biconditional as saying that the two immediate constituents have
the same truth value – it’s true if they do, false otherwise. Unsurprisingly, the bicondi‐
tional is commutative.
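All five schematic truth tables can be read as Boolean functions. Here is a minimal sketch in Python, with True/False playing the roles of T/F; the function names are my own labels, not official notation:

```python
# The five connectives of Sentential as truth functions.
def NOT(a):    return not a
def AND(a, b): return a and b
def OR(a, b):  return a or b           # inclusive 'or'
def IF(a, b):  return (not a) or b     # false only when a is True, b is False
def IFF(a, b): return a == b           # true when both sides match

# Conjunction (and the biconditional) are commutative; the conditional is not.
for a in (True, False):
    for b in (True, False):
        assert AND(a, b) == AND(b, a)
        assert IFF(a, b) == IFF(b, a)
assert IF(True, False) != IF(False, True)
```

Running the nested loop checks all four rows of the relevant schematic tables at once, which is exactly what a commutativity claim amounts to.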
All of the above sentences will be symbolised with the same Sentential sentence, per‐
haps ‘𝐹 ∧ 𝑄’.
I keep saying that we use Sentential sentences to symbolise English sentences. Many
other textbooks talk about translating English sentences into Sentential. But a good
translation should preserve certain facets of meaning, and – as I have just pointed
out – Sentential just cannot do that. This is why I shall speak of symbolising English
sentences, rather than of translating them.
This affects how we should understand our symbolisation keys. Consider a key like:
𝐹 : Jon is elegant.
𝑄: Jon is quick.
Other textbooks will understand this as a stipulation that the Sentential sentence ‘𝐹 ’
should mean that Jon is elegant, and that the Sentential sentence ‘𝑄’ should mean that
Jon is quick. But Sentential just is totally unequipped to deal with meaning. The pre‐
ceding symbolisation key is doing no more nor less than stipulating that the Sentential
sentence ‘𝐹 ’ should take the same truth value as the English sentence ‘Jon is elegant’
(whatever that might be), and that the Sentential sentence ‘𝑄’ should take the same
truth value as the English sentence ‘Jon is quick’ (whatever that might be).
1. 2 + 2 = 4.
2. Shostakovich wrote fifteen string quartets.
Whereas it is necessarily the case that 2 + 2 = 4,1 it is not necessarily the case that
Shostakovich wrote fifteen string quartets. If Shostakovich had died earlier, he would
have failed to finish Quartet no. 15; if he had lived longer, he might have written a few
more. So ‘It is necessarily the case that …’ is a connective of English, but it is not truth‐
functional. Another example: ‘one hundred years ago’. Both ‘many people have cars’
and ‘many people have children’ are true. But while ‘One hundred years ago, many
people had children’ is true, ‘One hundred years ago, many people had cars’ is false.
In these cases, we had the same input truth values, but different output. We can turn
this into a test for truth‐functionality: if for some one‐place connective ‘#’, you can
find sentences 𝒜 and ℬ such that (i) 𝒜 has the same truth value as ℬ, while (ii) #𝒜
has a different truth value from #ℬ, then ‘#’ is not a truth‐functional connective. Its
truth value is obviously not fixed by the truth value of its immediate subsentence. The
test can be generalised in the obvious way to binary connectives, etc: can you find a
connective such that when you feed it the same truth values as input, you get different
results? If so, that result is not a function of the input truth values.
1 Given that the English numeral ‘2’ names the number two, and the numeral ‘4’ names the number four,
and ‘+’ names addition, then it must be that the result of adding two to itself is four. This is not to say
that ‘2’ had to be used in the way we actually use it – if ‘2’ had named the number three, the sentence
would have been false. But in its actual use, it is a necessary truth.
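The test for truth‐functionality described above can be sketched in code. The ‘observations’ below are a toy model of the ‘one hundred years ago’ example: we record only input and output truth values, which is all the test needs. Everything here is illustrative:

```python
# A toy version of the test for truth-functionality of a one-place
# connective '#'. For each sample sentence we record the truth value of
# the sentence itself and of the compound '# sentence'. The data model
# the 'one hundred years ago' example from the text.
observations = [
    # (sentence,                  its value, value of '# sentence')
    ("many people have children", True,      True),
    ("many people have cars",     True,      False),
]

def fails_truth_functionality(obs):
    """True if some two sentences get the same input truth value but
    different output truth values -- the test from the text."""
    return any(in1 == in2 and out1 != out2
               for _, in1, out1 in obs
               for _, in2, out2 in obs)

# Same input truth values, different outputs: '#' is not truth-functional.
assert fails_truth_functionality(observations)
```

Note that passing this check on some sample sentences does not *prove* a connective truth‐functional; the test, like the one in the text, can only demonstrate failure.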
a perfectly legitimate rule for associating input truth values with outputs. What needs
justification is our idea that we should use this truth function when we symbolise English
sentences whose main connective is ‘if … then …’. The following two arguments attempt
to show that our usage of the English conditional ‘if’ is in line with the way ‘→’ behaves.
Edgington’s Argument The first follows a line of argument due to Dorothy Edging‐
ton.2 Suppose that Lara has drawn some shapes on a piece of paper, and coloured
some of them in. I have not seen them, but I claim:
[Figure: Lara’s shapes, labelled A, C, and D; A is a grey circle, while C and D are not grey]
In this case, my claim is surely true. Shapes C and D are not grey, and so can hardly
present counterexamples to my claim. Shape A is grey, but fortunately it is also circular.
So my claim has no counterexamples. It must be true. And that means that each of
the following instances of my claim must be true too:
[Figure: shapes labelled A, B, C, and D]
then my claim would have been false. So it must be that this claim is false:
Now, recall that every connective of Sentential has to be truth‐functional. This means
that the mere truth value of the antecedent and consequent must uniquely determine
the truth value of the conditional as a whole. Thus, from the truth values of our four
claims – which provide us with all possible combinations of truth and falsity in ante‐
cedent and consequent – we can read off the truth table for the material conditional.
2 Dorothy Edgington (2020) ‘Indicative Conditionals’, in Edward N. Zalta, ed., The Stanford Encyclopedia
of Philosophy, http://plato.stanford.edu/entries/conditionals/.
The or‐to‐if argument A second justification for symbolising ‘if’ as ‘→’ is this. We
know already that if 𝒜 is true and ℬ is false, then ‘if 𝒜 then ℬ’ will be false. What should
we say about the other rows of the truth table, the rows on which either 𝒜 is false, or
ℬ is true (or both)? On these lines either 𝒜 is false or ℬ is true. So the disjunction
‘Not‐𝒜 or ℬ’ is true on these lines. If a disjunction is true, and its first disjunct isn’t,
then the second disjunct has to be true. In this case, the first disjunct (‘Not‐𝒜 ’) isn’t
true just in case 𝒜 is true; so it turns out that if 𝒜 obtains, then so does ℬ. So the
truth of the disjunction on those three lines leads us to conclude that the conditional
‘if 𝒜 , then ℬ’ should also be true on those three lines. Thus we obtain the truth table
we have associated with ‘→’ as the best truth table to use for symbolising ‘if’.
What these two arguments show is that ‘→’ is the only candidate for a truth‐functional
conditional. Otherwise put, it is the best conditional that Sentential can provide. But is
it any good, as a surrogate for the conditionals we use in everyday language? Should
we think that ‘if’ is a truth functional connective? Consider two sentences:
65. If Mitt Romney had won the 2012 election, then he would have been the 45th
President of the USA.
66. If Mitt Romney had won the 2012 election, then he would have turned into a
helium‐filled balloon and floated away into the night sky.
Sentence 65 is true; sentence 66 is false. But both have false antecedents and false
consequents. So the truth value of the whole sentence is not uniquely determined by
the truth value of the parts. This use of ‘if’ fails our test for truth‐functionality. Do not
just blithely assume that you can adequately symbolise an English ‘if …, then …’ with
Sentential’s ‘→’.
The crucial point is that sentences 65 and 66 employ SUBJUNCTIVE conditionals, rather
than INDICATIVE conditionals. Subjunctive conditionals are also sometimes known as
COUNTERFACTUALS. They ask us to imagine something contrary to what we are assum‐
ing as fact – that Mitt Romney lost the 2012 election – and then ask us to evaluate what
would have happened in that case. The classic illustration of the difference between
the indicative and subjunctive conditional comes from pairs like these:
The indicative conditional in 67 is true, given the actual historical fact that she was
taken, and given that we are not assuming at this point anything about how she was
taken. But is the subjunctive in 68 also true? It seems not. She was not destined to
be taken by something or other, and if the dingo hadn’t intervened, she wouldn’t have
disappeared at all.3
3 If we are assuming as fact that a dingo took her, then when we consider what would have happened
had the dingo not been involved, we imagine a situation in which all the actual consequences of the
dingo’s action are removed.
The point to take away from this is that subjunctive conditionals cannot be tackled
using ‘→’. This is not to say that they cannot be tackled by any formal logical language,
only that Sentential is not up to the job.4
So the ‘→’ connective of Sentential is at best able to model the indicative conditional
of English, as in 67. In fact there remain difficulties even with indicatives in Senten‐
tial. One family of difficulties arises from consideration of the or‐to‐if argument. The
argument seems compelling in cases like this:
But what if our confidence in the premise 69 derives from our confidence in just one
disjunct? Suppose that we are certain it was the butler, and certain that the gardener
has an airtight alibi and wasn’t anywhere near the manor at the time of the murder.
Because we are certain it was the butler, we might be equally certain of 69, that it was
either the butler or the gardener (this inference from a disjunct to a disjunction seems
odd, but it is surely valid). But we might also be sure that if it wasn’t the butler, it was
the valet – he was the only other person with motive. In this sort of case, we might have
confidence in the premise 69 of this or‐to‐if argument and reject its conclusion. Yet
the Sentential analogue of the or‐to‐if argument is valid: as we’ll see after we introduce
the concept of validity for Sentential in §11, ‘𝐴 ∨ 𝐵 ∴ ¬𝐴 → 𝐵’ turns out to be valid.
This mismatch suggests that ‘if’ and ‘→’ aren’t a perfect match. I shall say a little more
about other difficulties for the material conditional analysis of indicatives in §11.5 and
in §30.1.
For now, I shall content myself with the observation that ‘→’ is the only plausible can‐
didate for a truth‐functional conditional. Our working hypothesis is that many uses
of ‘if’ can be adequately approximated by ‘→’. Many other English conditionals, though,
cannot be represented adequately using ‘→’. Sentential is an intrinsically limited language. But this
is only a problem if you try to use it to do things it wasn’t designed to do.
Key Ideas in §8
› The connectives of Sentential are all truth‐functional, and have
their meanings specified by the truth‐tables laid out in §8.3.
› When we treat a sentence of Sentential as symbolising an English
sentence, we need only say that as far as truth value is concerned
and truth‐functional structure is concerned, they are alike.
› English has many nontruth‐functional connectives. Some uses
of the conditional ‘if’ are nontruth‐functional. But as long as
we remain aware of the limitations of Sentential, it can be a very
powerful tool for modelling a significant class of arguments.
4 There are in fact logical treatments of counterfactuals, the most influential of which is David Lewis
(1973) Counterfactuals, Blackwell.
Practice exercises
A. Which of the following arguably may not characterise a function, where 𝑥 is the
input and 𝑦 is the output:
B. True or false: if the main connective of some sentence is truth‐functional, then the
truth value of the sentence uniquely determines the truth values of any constituents.
C. Suppose † is some English one‐place connective, so that ‘†𝒜 ’ is a grammatical sen‐
tence. How can we test if it is not truth‐functional?
9
Complete Truth Tables
9.1 Valuations
So far, we have considered assigning truth values to Sentential sentences indirectly. We
have said, for example, that a Sentential sentence such as ‘𝐵’ is to take the same truth
value as the English sentence ‘Big Ben is in London’ (whatever that truth value may
be). But we can also assign truth values directly. We can simply stipulate that ‘𝐵’ is to
be true, or stipulate that it is to be false – at least for present purposes.
A valuation is thus a function from atomic sentences to truth values. So this is a valu‐
ation:
𝐴, 𝐺, 𝑃, 𝐺7 ↦ T
𝐹, 𝑅, 𝑍 ↦ F.
There is no requirement that a valuation be a TOTAL FUNCTION, that is, that it assign
a truth value to every atomic sentence. To fix the truth value of a sentence 𝒜 of
Sentential, a valuation must assign a truth value to every atomic sentence 𝒜 contains.
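A valuation can be pictured as a Python dictionary from atomic sentence letters to truth values; a partial valuation simply leaves some letters out. This is just a sketch of the idea, not anything belonging to Sentential itself:

```python
# The valuation from the text as a dictionary (True for T, False for F).
# Atomic sentences not listed are simply left without a truth value.
valuation = {
    "A": True, "G": True, "P": True, "G7": True,
    "F": False, "R": False, "Z": False,
}

def fixes(valuation, atoms):
    """A valuation fixes the truth value of a sentence only if it
    assigns a value to every atomic sentence the sentence contains."""
    return all(atom in valuation for atom in atoms)

assert fixes(valuation, {"A", "F"})      # a sentence built from 'A' and 'F'
assert not fixes(valuation, {"A", "B"})  # 'B' is unassigned: no truth value
```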
A valuation is a temporary assignment of ‘meanings’ to Sentential sentences, in much
the same way as a symbolisation key might be. (It only assigns truth values, the only di‐
mension of meaning that Sentential is sensitive to.) What is distinctive about Sentential
is that almost all of its basic vocabulary – the atomic sentences – only get their mean‐
ings in this temporary fashion. The only parts of Sentential that get their meanings
permanently are the connectives, which always have a fixed interpretation.
This is rather unlike English, where most words have their meanings on a permanent
basis. But there are some words in English – like pronouns (‘he’, ‘she’, ‘it’) and demon‐
stratives (‘this’, ‘that’) – that get their meaning assigned temporarily, and then can
be reused with a different meaning in another context. Such expressions are called
CONTEXT SENSITIVE. In this sense, all the atomic sentences of Sentential are context
sensitive expressions. Of course we don’t have anything so explicit and deliberate as a
valuation or a symbolisation key in English to assign a meaning to a particular use of
‘this’ or ‘that’ – the circumstances of a conversation automatically assign an appropriate
object (usually). In Sentential, however, we need to explicitly set out the interpretations
of the atomic sentences we are concerned with.
Consider the sentence ‘(𝐻 ∧ 𝐼) → 𝐻’. A complete truth table for it begins by listing
every combination of truth values for its atomic sentences:
𝐻 𝐼 (𝐻 ∧ 𝐼) → 𝐻
T T
T F
F T
F F
To calculate the truth value of the entire sentence ‘(𝐻 ∧ 𝐼) → 𝐻’, we first copy the truth
values for the atomic sentences and write them underneath the letters in the sentence:
𝐻 𝐼 (𝐻 ∧ 𝐼) → 𝐻
T T T T T
T F T F T
F T F T F
F F F F F
Now consider the subsentence ‘(𝐻 ∧ 𝐼)’. This is a conjunction, (𝒜 ∧ ℬ), with ‘𝐻’ as 𝒜
and with ‘𝐼 ’ as ℬ. The schematic truth table for conjunction gives the truth conditions
for any sentence of the form (𝒜 ∧ ℬ), whatever 𝒜 and ℬ might be. It summarises the
point that a conjunction is true iff both conjuncts are true. In this case, our conjuncts
are just ‘𝐻’ and ‘𝐼 ’. They are both true on (and only on) the first row of the truth table.
Accordingly, we can calculate the truth value of the conjunction on all four rows.
𝒜 ∧ℬ
𝐻 𝐼 (𝐻 ∧ 𝐼) → 𝐻
T T   T T T    T
T F   T F F    T
F T   F F T    F
F F   F F F    F
Now, the entire sentence that we are dealing with is a conditional, 𝒞 → 𝒟, with ‘(𝐻 ∧ 𝐼)’
as 𝒞 and with ‘𝐻’ as 𝒟. On the second row, for example, ‘(𝐻 ∧ 𝐼)’ is false and ‘𝐻’ is true.
Since a conditional is true when the antecedent is false, we write a ‘T’ in the second
row underneath the conditional symbol. We continue for the other three rows and get
this:
𝒞 →𝒟
𝐻 𝐼 (𝐻 ∧ 𝐼) → 𝐻
T T T T T
T F F T T
F T F T F
F F F T F
The conditional is the main logical connective of the sentence. And the column of ‘T’s
underneath the conditional tells us that the sentence ‘(𝐻 ∧ 𝐼) → 𝐻’ is true regardless
of the truth values of ‘𝐻’ and ‘𝐼 ’. They can be true or false in any combination, and the
compound sentence still comes out true. Since we have considered all four possible
assignments of truth and falsity to ‘𝐻’ and ‘𝐼 ’ – since, that is, we have considered all the
different valuations – we can say that ‘(𝐻 ∧ 𝐼) → 𝐻’ is true on every valuation.
In this example, I have not repeated all of the entries in every column in every suc‐
cessive table. When actually writing truth tables on paper, however, it is impractical
to erase whole columns or rewrite the whole table for every step. Although it is more
crowded, the truth table can be written in this way:
𝐻 𝐼 (𝐻 ∧ 𝐼) → 𝐻
T T   T T T   T   T
T F   T F F   T   T
F T   F F T   T   F
F F   F F F   T   F
Most of the columns underneath the sentence are only there for bookkeeping purposes.
The column that matters most is the column underneath the main connective for the
sentence, since this tells you the truth value of the entire sentence. I have emphasised
this, by putting this column in bold. When you work through truth tables yourself, you
should similarly emphasise it (perhaps by drawing a box around the relevant column).
𝐶   (𝐶 ↔ 𝐶) → 𝐶   ∧   ¬ (𝐶 → 𝐶)
T    T T T   T T   F   F  T T T
F    F T F   F F   F   F  F T F
Looking at the column underneath the main connective, we see that the sentence is
false on both rows of the table; i.e., the sentence is false regardless of whether ‘𝐶 ’ is
true or false. It is false on every valuation.
A sentence that contains two atomic sentences requires four rows for a complete truth
table, as in the schematic truth tables, and as in the complete truth table for ‘(𝐻 ∧ 𝐼) →
𝐻’.
A sentence that contains three atomic sentences requires eight rows:
𝑀 𝑁 𝑃 𝑀 ∧ (𝑁 ∨ 𝑃)
T T T T T T T T
T T F T T T T F
T F T T T F T T
T F F T F F F F
F T T F F T T T
F T F F F T T F
F F T F F F T T
F F F F F F F F
From this table, we know that the sentence ‘𝑀 ∧ (𝑁 ∨ 𝑃)’ can be true or false, depending
on the truth values of ‘𝑀’, ‘𝑁’, and ‘𝑃’.
A complete truth table for a sentence that contains four different atomic sentences
requires 16 rows. Five letters, 32 rows. Six letters, 64 rows. And so on. To be perfectly
general: If a complete truth table has 𝑛 different atomic sentences, then it must have
2ⁿ rows.1
In order to fill in the columns of a complete truth table, begin with the right‐most
atomic sentence and alternate between ‘T’ and ‘F’. In the next column to the left, write
two ‘T’s, write two ‘F’s, and repeat. For the third atomic sentence, write four ‘T’s fol‐
lowed by four ‘F’s. This yields an eight row truth table like the one above. For a 16
row truth table, the next column of atomic sentences should have eight ‘T’s followed
by eight ‘F’s. For a 32 row table, the next column would have 16 ‘T’s followed by 16 ‘F’s.
And so on.
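The row‐generating recipe above is easy to mechanise. In the sketch below, itertools.product yields the 2ⁿ rows in exactly the textbook order (all ‘T’s first), and a sentence is represented as a Python function from valuations to truth values; that encoding is my own choice for illustration:

```python
from itertools import product

def complete_truth_table(atoms, sentence):
    """Pair every valuation of the atoms (2**n rows, 'T's first, as in
    the textbook recipe) with the sentence's truth value on that row."""
    return [
        (values, sentence(dict(zip(atoms, values))))
        for values in product([True, False], repeat=len(atoms))
    ]

# '(H ∧ I) → H', encoded as a function from a valuation to a truth value.
sentence = lambda v: (not (v["H"] and v["I"])) or v["H"]

table = complete_truth_table(["H", "I"], sentence)
assert len(table) == 4                     # 2**2 rows
assert table[0][0] == (True, True)         # first row: both atoms true
assert all(value for _, value in table)    # true on every valuation
```

With three atoms the same function produces the eight rows of the ‘𝑀 ∧ (𝑁 ∨ 𝑃)’ table, in the same order as above.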
Key Ideas in §9
› A valuation of some atomic sentences associates each of them
with exactly one of our truth values; it is like an extremely
stripped down version of a symbolisation key. If there are 𝑛
atomic sentences, there are 2ⁿ valuations of them.
› A truth table lays out the truth values of a particular Sentential
sentence in each of the distinct possible valuations of its con‐
stituent atomic sentences.
Practice exercises
A. How does a schematic truth table differ from a regular truth table? What is a com‐
plete truth table?
B. Offer complete truth tables for each of the following:
1. 𝐴→𝐴
2. 𝐶 → ¬𝐶
3. (𝐴 ↔ 𝐵) ↔ ¬(𝐴 ↔ ¬𝐵)
4. (𝐴 → 𝐵) ∨ (𝐵 → 𝐴)
5. (𝐴 ∧ 𝐵) → (𝐵 ∨ 𝐴)
6. ¬(𝐴 ∨ 𝐵) ↔ (¬𝐴 ∧ ¬𝐵)
7. (𝐴 ∧ 𝐵) ∧ ¬(𝐴 ∧ 𝐵) ∧ 𝐶
8. (𝐴 ∧ 𝐵) ∧ 𝐶 → 𝐵
9. ¬ (𝐶 ∨ 𝐴) ∨ 𝐵
If you want additional practice, you can construct truth tables for any of the sentences
and arguments in the exercises for Chapter 2.
1 Since the values of atomic sentences are independent of each other, each new atomic sentence 𝒜ₙ₊₁
we consider is capable of being true or false on every existing valuation on 𝒜₁, …, 𝒜ₙ, and so there
must be twice as many valuations on 𝒜₁, …, 𝒜ₙ, 𝒜ₙ₊₁ as on 𝒜₁, …, 𝒜ₙ.
10
Semantic Concepts
In §9.1, we introduced the idea of a valuation, and in the remainder of §9 we showed
how to use a truth table to determine the truth value of any Sentential sentence on any
valuation. In this section, we shall introduce some related ideas, and show how to
use truth tables to test whether or not they apply.
We need the parenthetical clause because of the way we have defined valuations. A
given valuation 𝑣 might only assign truth values to some atomic sentences and not
all. For any sentence 𝒜 which contains an atomic sentence to which 𝑣 doesn’t assign a
truth value, 𝒜 will not have any truth value according to 𝑣. Logical truths in Sentential
are sometimes called TAUTOLOGIES.
We can determine whether a sentence is a logical truth just by using truth tables. If
the sentence is true on every row of a complete truth table, then it is true on every
valuation for its constituent atomic sentences, so it is a logical truth. In the example
of §9, ‘(𝐻 ∧ 𝐼) → 𝐻’ is a logical truth.
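This truth‐table test for logical truth can be sketched as follows; as in §9, I represent a sentence by a Python function from valuations to truth values (an illustrative encoding, not anything official):

```python
from itertools import product

def is_logical_truth(atoms, sentence):
    """True iff the sentence is true on every row of its complete truth
    table, i.e. on every valuation of its constituent atoms."""
    return all(
        sentence(dict(zip(atoms, values)))
        for values in product([True, False], repeat=len(atoms))
    )

# '(H ∧ I) → H' is a logical truth ...
assert is_logical_truth(["H", "I"],
                        lambda v: (not (v["H"] and v["I"])) or v["H"])
# ... but no atomic sentence is: 'A' is false on some valuation.
assert not is_logical_truth(["A"], lambda v: v["A"])
```

Swapping `all` for `not any` in the same sketch would test for logical falsehood instead.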
This is only, though, a surrogate for necessary truth. There are some necessary truths
that we cannot adequately symbolise in Sentential. An example is ‘2 + 2 = 4’. This
must be true, but if we try to symbolise it in Sentential, the best we can offer is an
atomic sentence, and no atomic sentence is a logical truth.1 Still, if we can adequately
symbolise some English sentence using a Sentential sentence which is a logical truth,
then that English sentence expresses a necessary truth.
We have a similar surrogate for necessary falsity:
We can determine whether a sentence is a logical falsehood just by using truth tables.
If the sentence is false on every row of a complete truth table, then it is false on every
valuation, so it is a logical falsehood. In the example of §9, ‘ (𝐶 ↔ 𝐶) → 𝐶 ∧ ¬(𝐶 → 𝐶)’
is a logical falsehood. A logical falsehood is often called a CONTRADICTION.
𝒜 and ℬ are LOGICALLY EQUIVALENT iff they have the same truth value
on every valuation among those which assign both of them a truth
value.
It is easy to test for logical equivalence using truth tables. Consider the sentences
‘¬(𝑃 ∨ 𝑄)’ and ‘¬𝑃 ∧ ¬𝑄’. Are they logically equivalent? To find out, we may construct
a truth table.
𝑃 𝑄 ¬ (𝑃 ∨ 𝑄) ¬𝑃 ∧ ¬𝑄
T T F T T T FT FFT
T F F T T F FT FTF
F T F F T T TF FFT
F F T F F F TF TT F
Look at the columns for the main connectives; negation for the first sentence, conjunc‐
tion for the second. On the first three rows, both are false. On the final row, both are
true. Since they match on every row, the two sentences are logically equivalent.
1 At the risk of repeating myself: 2 + 2 = 4 is necessarily true, but it is not necessarily true in virtue of
its structure. A necessary truth is true, with its actual meaning, in every possible situation. A Sentential‐
logical truth is true in the actual situation on every possible way of interpreting its atomic sentences.
These are interestingly different notions.
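The equivalence test is equally mechanical. A sketch, again representing sentences as Python functions of a valuation (my own illustrative encoding):

```python
from itertools import product

def logically_equivalent(atoms, s1, s2):
    """True iff s1 and s2 have the same truth value on every valuation
    assigning a value to all the listed atomic sentences."""
    return all(
        s1(dict(zip(atoms, values))) == s2(dict(zip(atoms, values)))
        for values in product([True, False], repeat=len(atoms))
    )

# The pair from the text: '¬(P ∨ Q)' and '¬P ∧ ¬Q'.
not_either = lambda v: not (v["P"] or v["Q"])
both_not   = lambda v: (not v["P"]) and (not v["Q"])
assert logically_equivalent(["P", "Q"], not_either, both_not)
```

Comparing the two main‐connective columns row by row, as in the table above, is exactly what the `all(...)` expression does.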
Consider these two sentences:
((𝐴 ∧ 𝐵) ∧ 𝐶)
(𝐴 ∧ (𝐵 ∧ 𝐶))
These have the same truth table, and are logically equivalent. Consequently, it will
never make any difference from the perspective of truth value – which is all that Sen‐
tential cares about (see §8) – which of the two sentences we assert (or deny). And since
the order of the parentheses does not matter, I shall allow us to drop them. In short,
we can save some ink and some eyestrain by writing:
𝐴∧𝐵∧𝐶
The general point is that, if we just have a long list of conjunctions, we can drop the
inner parentheses. (I already allowed us to drop outermost parentheses in §6.) The
same observation holds for disjunctions. Since the following sentences are logically
equivalent:
((𝐴 ∨ 𝐵) ∨ 𝐶)
(𝐴 ∨ (𝐵 ∨ 𝐶))
we can equally drop the inner parentheses and write:
𝐴∨𝐵∨𝐶
And generally, if we just have a long list of disjunctions, we can drop the inner par‐
entheses. But be careful. These two sentences have different truth tables, so are not
logically equivalent:
((𝐴 → 𝐵) → 𝐶)
(𝐴 → (𝐵 → 𝐶))
So if we were to write:
𝐴→𝐵→𝐶
((𝐴 ∨ 𝐵) ∧ 𝐶)
(𝐴 ∨ (𝐵 ∧ 𝐶))
So if we were to write:
𝐴∨𝐵∧𝐶
it would be dangerously ambiguous. Never write this. The moral is: you can drop
parentheses when dealing with a long list of conjunctions, or when dealing with a
long list of disjunctions. But that’s it.
82 TRUTH TABLES
10.4 Consistency
In §3, I said that sentences are jointly consistent iff it is possible for all of them to be
true at once. We can offer a surrogate for this notion too: sentences of Sentential are
JOINTLY CONSISTENT iff there is some valuation which makes all of them true. Consider,
for example, the sentences ‘¬𝑃’, ‘𝑃 → 𝑄’ and ‘𝑄’:
𝑃 𝑄   ¬𝑃   𝑃 → 𝑄   𝑄
T T   FT   T T T   T
T F   FT   T F F   F
F T   TF   F T T   T
F F   TF   F T F   F
We can see that on the third row, the valuation which assigns F to ‘𝑃’ and T to ‘𝑄’, each of
the sentences is true. So these sentences are jointly consistent.
Practice exercises
A. Check all the claims made in introducing the new notational conventions in §10.3,
i.e., show that:
1. ‘((𝐴 ∧ 𝐵) ∧ 𝐶)’ and ‘(𝐴 ∧ (𝐵 ∧ 𝐶))’ have the same truth table
2. ‘((𝐴 ∨ 𝐵) ∨ 𝐶)’ and ‘(𝐴 ∨ (𝐵 ∨ 𝐶))’ have the same truth table
3. ‘((𝐴 ∨ 𝐵) ∧ 𝐶)’ and ‘(𝐴 ∨ (𝐵 ∧ 𝐶))’ do not have the same truth table
4. ‘((𝐴 → 𝐵) → 𝐶)’ and ‘(𝐴 → (𝐵 → 𝐶))’ do not have the same truth table
5. ‘((𝐴 ↔ 𝐵) ↔ 𝐶)’ and ‘(𝐴 ↔ (𝐵 ↔ 𝐶))’ have the same truth table.
B. What is the difference between a logical truth and a logical falsehood? Are there
any other kinds of sentence in Sentential?
Revisit your answers to exercise §9B (page 78). Determine which sentences were logical
truths, which were logical falsehoods, and which, if any, were neither logical truths nor
logical falsehoods.
C. What does it mean to say that two sentences of Sentential are logically equivalent?
Use truth tables to decide if the following pairs of sentences are logically equivalent:
D. What does it mean to say that some sentences of Sentential are jointly inconsistent?
Use truth tables to determine whether these sentences are jointly consistent, or jointly
inconsistent:
1. 𝐴 → 𝐴, ¬𝐴 → ¬𝐴, 𝐴 ∧ 𝐴, 𝐴 ∨ 𝐴
2. 𝐴 ∨ 𝐵, 𝐴 → 𝐶 , 𝐵 → 𝐶
3. 𝐵 ∧ (𝐶 ∨ 𝐴), 𝐴 → 𝐵, ¬(𝐵 ∨ 𝐶)
4. 𝐴 ↔ (𝐵 ∨ 𝐶), 𝐶 → ¬𝐴, 𝐴 → ¬𝐵
11
Entailment and Validity
11.1 Entailment
The following idea is related to joint consistency, but is of great interest in its own right:
The sentences 𝒜1 , 𝒜2 , …, 𝒜𝑛 ENTAIL the sentence 𝒞 if every valuation
which makes all of 𝒜1 , 𝒜2 , …, 𝒜𝑛 true also makes 𝒞 true.
(Why is this not a biconditional? The full answer will have to wait until §23.)
Again, it is easy to test this with a truth table. To check whether ‘¬𝐿 → (𝐽 ∨ 𝐿)’ and
‘¬𝐿’ entail ‘𝐽’, we simply need to check whether there is any valuation which makes
both ‘¬𝐿 → (𝐽 ∨ 𝐿)’ and ‘¬𝐿’ true whilst making ‘𝐽’ false. So we use a truth table:
𝐽 𝐿   ¬𝐿 → (𝐽 ∨ 𝐿)   ¬𝐿   𝐽
T T   FT T  T T T    FT   T
T F   TF T  T T F    TF   T
F T   FT T  F T T    FT   F
F F   TF F  F F F    TF   F
The only row on which both ‘¬𝐿 → (𝐽 ∨ 𝐿)’ and ‘¬𝐿’ are true is the second row, and that
is a row on which ‘𝐽’ is also true. So ‘¬𝐿 → (𝐽 ∨ 𝐿)’ and ‘¬𝐿’ entail ‘𝐽’.
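This check, too, is mechanical: we are hunting for a valuation which makes every premise true and the conclusion false. Here is a sketch in Python; the function name `entails` and the encoding of sentences as functions are my own illustration, not part of Sentential.

```python
from itertools import product

def entails(premises, conclusion, n):
    """True iff no valuation makes every premise true and the conclusion false."""
    for row in product([True, False], repeat=n):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # this row is a counterexample valuation
    return True

# ¬L → (J ∨ L), ¬L ⊨ J, with valuations given in the order (J, L):
p1 = lambda j, l: (not (not l)) or (j or l)  # ¬L → (J ∨ L), read materially
p2 = lambda j, l: not l                      # ¬L
c  = lambda j, l: j                          # J

print(entails([p1, p2], c, 2))  # → True
```

Note that the function returns as soon as it finds a counterexample row, anticipating the shortcuts of §12.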
We abbreviate the claim that 𝒜1 , 𝒜2 , …, 𝒜𝑛 entail 𝒞 by writing:
𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ 𝒞.
§11. ENTAILMENT AND VALIDITY 85
The symbol ‘⊨’ is known as the DOUBLE TURNSTILE, since it looks like a turnstile with
two horizontal beams.
But let me be clear. ‘⊨’ is not a symbol of Sentential. Rather, it is a symbol of our
metalanguage, augmented English (recall the difference between object language and
metalanguage from §7). So the metalanguage sentence:
70. 𝑃, 𝑃 → 𝑄 ⊨ 𝑄
says that ‘𝑃’ and ‘𝑃 → 𝑄’ entail ‘𝑄’.
Note that there are no constraints on the number of Sentential sentences that can be
mentioned before the symbol ‘⊨’. Indeed, one limiting case is of special interest:
72. ⊨ 𝒞 .
72 is false if there is a valuation which makes all the sentences appearing on the left
hand side of ‘⊨’ true and makes 𝒞 false. Since no sentences appear on the left side of
‘⊨’ in 72, it is trivial to make ‘them’ all true. So it is false if there is a valuation which
makes 𝒞 false – and so 72 is true iff every valuation makes 𝒞 true. Otherwise put, 72
says that 𝒞 is a logical truth. Equally:
73. 𝒜1 , …, 𝒜𝑛 ⊨
is true iff there is no valuation which makes all of 𝒜1 , …, 𝒜𝑛 true – that is, iff they are
jointly inconsistent. In the special case:
74. 𝒜 ⊨
this says that 𝒜 is a logical falsehood. These observations connect entailment with
joint inconsistency:
𝒜1 , …, 𝒜𝑛 ⊨ 𝒞 iff 𝒜1 , …, 𝒜𝑛 , ¬𝒞 ⊨.
If every valuation which makes each of 𝒜1 , …, 𝒜𝑛 true also makes 𝒞 true, then all of
those valuations also make ‘¬𝒞 ’ false. So there can be no valuation which makes each
of 𝒜1 , …, 𝒜𝑛 , ¬𝒞 true. So those sentences are jointly inconsistent.
1 If you find it difficult to see why ‘⊨ 𝒜 ’ should say that 𝒜 is a logical truth, you should just take 72 as
an abbreviation for that claim. Likewise you should take ‘𝒜 ⊨’ as abbreviating the claim that 𝒜 is a
logical falsehood.
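The connection between entailment and joint inconsistency can be spot‐checked mechanically. In the sketch below, Python stands in for our metalanguage; the names `consistent` and `entails` are my own illustration.

```python
from itertools import product

def consistent(sentences, n):
    """True iff some valuation makes every sentence in the list true."""
    return any(all(s(*row) for s in sentences)
               for row in product([True, False], repeat=n))

def entails(premises, conclusion, n):
    """True iff every valuation making all premises true makes the conclusion true."""
    return all(conclusion(*row)
               for row in product([True, False], repeat=n)
               if all(p(*row) for p in premises))

# Spot-check: A1, …, An ⊨ C iff A1, …, An, ¬C are jointly inconsistent.
prems = [lambda j, l: l or j,   # ¬L → (J ∨ L), which simplifies to L ∨ J
         lambda j, l: not l]    # ¬L
concl = lambda j, l: j          # J
neg_c = lambda j, l: not j      # ¬J

assert entails(prems, concl, 2) == (not consistent(prems + [neg_c], 2))
```

A single example does not prove the general biconditional, of course; the proof is the reasoning given in the text.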
When ‘→’ is flanked with two Sentential sentences, the result is a longer Sentential sen‐
tence. By contrast, when we use ‘⊨’, we form a metalinguistic sentence that mentions
the surrounding Sentential sentences.
If 𝒜 → 𝒞 is a logical truth, then 𝒜 ⊨ 𝒞 . But 𝒜 → 𝒞 can be true on a valuation without
being a logical truth, and so can be true on a valuation even when 𝒜 doesn’t entail
𝒞 . Sometimes people are inclined to confuse entailment and conditionals, perhaps
because they are tempted by the thought that we can only establish the truth of a
conditional by logically deriving the consequent from the antecedent. But while this is
the way to establish the truth of a logically true conditional, most conditionals posit a
weaker relation between antecedent and consequent than that – for example, a causal
or statistical relationship might be enough to justify the truth of the conditional ‘If you
smoke, then you’ll lower your life expectancy’.
If 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ 𝒞 , then 𝒜1 , 𝒜2 , …, 𝒜𝑛 ∴ 𝒞 is valid.
2 This result sometimes goes under a fancy title: the Deduction Theorem for Sentential. It is easy to see that
this more general result follows from the Deduction Theorem: ℬ1 , …, ℬ𝑛 , 𝒜 ⊨ 𝒞 iff ℬ1 , …, ℬ𝑛 ⊨ 𝒜 → 𝒞 .
The only conclusive arguments in Sentential are valid ones. Because we consider every
valuation, the conclusiveness of an argument cannot turn on the particular truth val‐
ues assigned to the atomic sentences. For any collection of atomic sentences, there
is a valuation corresponding to any way of assigning them truth values. This means
that we treat the atomic sentences as all independent of one another. So there can be no
connection in meaning between sentences of Sentential except in virtue of those sentences
having shared constituents and the right structure.
In short, we have a way to test for the validity of some English arguments. First, we sym‐
bolise them in Sentential, as having premises 𝒜1 , 𝒜2 , …, 𝒜𝑛 , and conclusion 𝒞 . Then we
test for entailment using truth tables. If there is an entailment, then we can conclude
that the argument we symbolised has the right kind of structure to count as valid.
For example, suppose we consider this argument:
Jim studied hard, so he didn’t act in a lot of plays. For he can’t study hard
while acting in lots of plays.
It’s not the case that Jim both studied hard and acted in lots of plays.
Jim studied hard
So: Jim did not act in lots of plays.
A S ¬(𝑆 ∧ 𝐴) 𝑆 ¬𝐴
T T F T F
T F T F F
F T T T T
F F T F T
The only valuation on which the premises are both true is represented on the third
row, assigning F to 𝐴 and T to 𝑆. And on this valuation, the conclusion is true. So this
argument is valid, and this statement of entailment is correct: ¬(𝑆 ∧ 𝐴), 𝑆 ⊨ ¬𝐴.
Our test uses the precisely defined relation of entailment in Sentential as a test for
validity in natural language. If we have symbolised an argument successfully, then
we have captured its form (or rather, one of its forms). If the symbolised Sentential
argument turns out to be valid, we can conclude that the natural language argument
is valid too, because it can be modelled as having a valid form.
3. There are some putative examples of sentences that cannot be symbolised because
of some assumptions that Sentential makes.
First, consider this argument:
75. Daisy is a small cow. So: Daisy is a cow.
To symbolise this argument in Sentential, we would have to use two different atomic
sentences – perhaps ‘𝑆’ and ‘𝐶 ’ – for the premise and the conclusion respectively. Now,
it is obvious that ‘𝑆’ does not entail ‘𝐶 ’. But the English argument surely seems valid –
the structure of ‘Daisy is a small cow’ guarantees that ‘Daisy is a cow’ is true. Note that
a small cow might still be rather large, so we cannot fudge things by symbolising ‘Daisy
is a small cow’ as a conjunction of ‘Daisy is small’ and ‘Daisy is a cow’. (We’ll return
to this sort of case in §16, where we will see how to symbolise 75 as a valid argument
in Quantifier.) But our Sentential‐based test for validity in English will have some false
negatives: it will classify some valid English arguments as invalid. This is because some
valid arguments are valid in virtue of structure which is not truth‐functional. We’ll see
more examples of this in §15.
Second, consider the following arguments:
76. It’s not the case that the Crows will win by a lot, if they win. So the Crows will
win.
77. It’s not the case that, if God exists, She answers malevolent prayers. So God
exists.
Both of these arguments have the same structure. Let’s focus on the second, example
77. Symbolising it in Sentential, we would offer something like ‘¬(𝐺 → 𝑀) ∴ 𝐺 ’. Now,
as can easily be checked with a truth table, this is a correct entailment in Sentential.
So if we symbolise the argument 77 in Sentential in this way, the conditional premise
entails that God exists. But that’s strange: surely even the atheist can accept sentence
77, without contradicting herself! Some say that 77 would be better symbolised by
‘𝐺 → ¬𝑀’, even though that doesn’t reflect the apparent form of the English sentence.
‘𝐺 → ¬𝑀’ does not entail 𝐺 . This symbolisation does a better job of reflecting the
intuitive consequences of the English sentence 77, but at the cost of abandoning a
symbolisation that reflects the apparent form of the English.
Third, consider this sentence:
78. Jan is neither bald nor not bald.
To symbolise this sentence in Sentential, we would offer something like ‘¬(𝐽 ∨ ¬𝐽)’. This
is a logical falsehood (check this with a truth‐table). But sentence 78 does not itself
seem like a logical falsehood; for we might happily go on to add ‘Jan is on the
borderline of baldness’! To make this point another way: as is easily seen by truth
tables, ‘¬(𝐽 ∨ ¬𝐽)’ is logically equivalent to ‘¬𝐽 ∧ 𝐽’. This latter sentence symbolises an
obvious logical falsehood in English:
79. Jan is both not bald and bald.
Is it so obvious, though, that 78 is synonymous with 79? It seems it may not be,
even though our test will classify any English argument from one to the other as valid
(since both are symbolised as logical falsehoods, which degenerately entail anything).
Because of the way we have defined valuations, every sentence of Sentential is assigned
either True or False in any valuation which makes it meaningful by assigning truth
values to its atomic sentences. This property of Sentential is known as BIVALENCE: that
every sentence has exactly one of the two possible truth values. The case of Jan’s bald‐
ness (or otherwise) raises the general question of what logic we should use when deal‐
ing with vague discourse, properties like ‘bald’ or ‘tall’ which seem to have borderline
cases. Many think it plausible that a borderline case of F is neither a case of F, nor does
3 Edgington discusses Stalnaker’s version of such an account in ‘Nearest Possible Worlds’, §4.1 of her
‘Indicative Conditionals’, cited above (http://plato.stanford.edu/entries/conditionals/#Sta).
it fail to be a case of F. Hence they have been tempted to deny bivalence for English:
‘Jan is bald’, they say, is neither True nor False! If 𝑝 is neither true nor false, then it is
hardly surprising that ‘𝑝 or not‐𝑝’ turns out to be untrue. If these thinkers are right that
vagueness in English leads to the denial of bivalence, while Sentential is bivalent, this
will give rise to mismatches between English and Sentential. These mismatches will
not involve inadequate symbolisation, but a more fundamental disagreement about
the background framework – here, a disagreement about the nature of truth.4
In different ways, these three examples highlight some of the limits of working with a
language like Sentential that can only handle truth‐functional connectives. Moreover,
these limits give rise to some interesting questions in philosophical logic. Part of the
purpose of this course is to equip you with the tools to explore these questions of philo‐
sophical logic. But we have to walk before we can run; we have to become proficient
in using Sentential, before we can adequately discuss its limits, and consider alternat‐
ives. It is important to recognise that these are limits to Sentential only in its role as
a framework to model validity in English and other natural languages. They are not
problems for Sentential as a formal language. Moreover, as I have emphasised already,
these limitations are merely manifestations of the fact that Sentential is being used as
a model of natural language. Models are typically not designed or intended to capture
every aspect of what they model. Their utility derives often from being simpler than
the complex things they are representing. The limitations we have noted indicate that
Sentential may not model English perfectly in these cases. But Sentential remains an
adequate model of English in many other cases.
4 For more on the logic of vagueness, see Roy Sorensen’s 2018 entry ‘Vagueness’ in The Stanford Encyc‐
lopedia of Philosophy (https://plato.stanford.edu/entries/vagueness/). He discusses views that
deny bivalence in §5.
Practice exercises
A. What does it mean to say that sentences 𝒜1 , 𝒜2 , … , 𝒜𝑛 of Sentential entail a further
sentence 𝒞 ?
B. If 𝒜1 , 𝒜2 , … , 𝒜𝑛 ⊨ 𝒞 , what can you say about the argument with premises
𝒜1 , 𝒜2 , … , 𝒜𝑛 and conclusion 𝒞 ?
C. Use truth tables to determine whether each argument is valid or invalid.
1. 𝐴→𝐴∴𝐴
2. 𝐴 → (𝐴 ∧ ¬𝐴) ∴ ¬𝐴
3. 𝐴 ∨ (𝐵 → 𝐴) ∴ ¬𝐴 → ¬𝐵
4. 𝐴 ∨ 𝐵, 𝐵 ∨ 𝐶, ¬𝐴 ∴ 𝐵 ∧ 𝐶
5. (𝐵 ∧ 𝐴) → 𝐶, (𝐶 ∧ 𝐴) → 𝐵 ∴ (𝐶 ∧ 𝐵) → 𝐴
D. Answer each of the following questions:
1. Suppose that 𝒜 and ℬ are logically equivalent. What can you say about 𝒜 ↔ ℬ?
2. Suppose that (𝒜 ∧ ℬ) → 𝒞 is neither a logical truth nor a logical falsehood. What
can you say about whether 𝒜, ℬ ∴ 𝒞 is valid?
3. Suppose that 𝒜 , ℬ and 𝒞 are jointly inconsistent. What can you say about (𝒜 ∧
ℬ ∧ 𝒞)?
4. Suppose that 𝒜 is a logical falsehood. What can you say about whether 𝒜, ℬ ⊨
𝒞?
5. Suppose that 𝒞 is a logical truth. What can you say about whether 𝒜, ℬ ⊨ 𝒞 ?
6. Suppose that 𝒜 and ℬ are logically equivalent. What can you say about (𝒜 ∨ ℬ)?
7. Suppose that 𝒜 and ℬ are not logically equivalent. What can you say about
(𝒜 ∨ ℬ)?
E. If two sentences of Sentential, 𝒜 and 𝒟, are logically equivalent, what can you say
about (𝒜 → 𝒟)? What about the argument 𝒜 ∴ 𝒟?
F. Consider the following principle:
12
Truth Table Shortcuts
With practice, you will quickly become adept at filling out truth tables. In this section,
I want to give you some permissible shortcuts to help you along the way.
First, you do not need to copy the truth values of the atomic sentences into columns of
their own: you can simply refer back to the reference columns on the left. Using this
shortcut, you might offer:
𝑃 𝑄 (𝑃 ∨ 𝑄) ↔ ¬ 𝑃
T T T FF
T F T FF
F T T TT
F F F FT
You also know for sure that a disjunction is true whenever one of the disjuncts is true.
So if you find a true disjunct, there is no need to work out the truth values of the other
disjuncts. Thus you might offer:
𝑃 𝑄 (¬ 𝑃 ∨ ¬ 𝑄) ∨ ¬ 𝑃
T T F FF FF
T F F TT TF
F T TT
F F TT
Equally, you know for sure that a conjunction is false whenever one of the conjuncts
is false. So if you find a false conjunct, there is no need to work out the truth value of
the other conjunct. Thus you might offer:
𝑃 𝑄 ¬ (𝑃 ∧ ¬ 𝑄) ∧ ¬ 𝑃
T T FF
T F FF
F T T F TT
F F T F TT
A similar shortcut is available for conditionals. You immediately know that a condi‐
tional is true if either its consequent is true, or its antecedent is false. Thus you might
present:
𝑃 𝑄 ((𝑃 → 𝑄 ) → 𝑃) → 𝑃
T T T
T F T
F T T F T
F F T F T
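These shortcuts have a direct analogue in programming: most languages evaluate ‘or’ and ‘and’ lazily, never examining the second argument once the first has settled the matter. A small Python illustration (the `noisy` helper is mine, purely for demonstration):

```python
def noisy(value, label):
    """Return `value`, reporting that it was actually evaluated."""
    print("evaluating", label)
    return value

# `or` stops at the first true disjunct, just as we stopped filling in the row:
result = noisy(True, "first disjunct") or noisy(True, "second disjunct")
# only "evaluating first disjunct" is printed

# `and` stops at the first false conjunct:
result2 = noisy(False, "first conjunct") and noisy(True, "second conjunct")
# only "evaluating first conjunct" is printed
```

This behaviour is called short‐circuit evaluation, and it is justified by exactly the truth‐table facts appealed to above.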
In testing for entailment, we are hunting for BAD ROWS: rows on which all the premises
are true and the conclusion is false. Since all we are doing is looking for bad rows, we
should bear this in mind. So: if we find a row where the conclusion is true, we do not
need to evaluate anything else on that row: that row definitely isn’t bad. Likewise, if we
find a row where some premise is false, we do not need to evaluate anything else on
that row.
With this in mind, consider how we might test the following claimed entailment:
¬𝐿 → (𝐽 ∨ 𝐿), ¬𝐿 ⊨ 𝐽.
The first thing we should do is evaluate the conclusion on the right of the turnstile. If
we find that the conclusion is true on some row, then that is not a bad row. So we can
simply ignore the rest of the row. So at our first stage, we are left with something like:
𝐽 𝐿 ¬ 𝐿 → (𝐽 ∨ 𝐿) ¬𝐿 𝐽
T T T
T F T
F T ? ? F
F F ? ? F
where the blanks indicate that we are not going to bother doing any more investiga‐
tion (since the row is not bad) and the question‐marks indicate that we need to keep
investigating.
The easiest premise on the left of the turnstile to evaluate is the second, so we next do
that:
𝐽 𝐿 ¬ 𝐿 → (𝐽 ∨ 𝐿) ¬𝐿 𝐽
T T T
T F T
F T F F
F F ? T F
Note that we no longer need to consider the third row on the table: it will not be a bad
row, because (at least) one of premises is false on that row. And finally, we complete
the truth table:
𝐽 𝐿 ¬ 𝐿 → (𝐽 ∨ 𝐿) ¬𝐿 𝐽
T T T
T F T
F T F F
F F T F F T F
The truth table has no bad rows, so this claimed entailment is genuine. (Any valuation
on which all the premises are true is a valuation on which the conclusion is true.)
It might be worth illustrating the tactic again, this time for validity. Let us check
whether the following argument is valid:
𝐴 ∨ 𝐵, ¬(𝐴 ∧ 𝐶), ¬(𝐵 ∧ ¬𝐷) ∴ ¬𝐶 ∨ 𝐷
𝐴 𝐵 𝐶 𝐷 𝐴∨𝐵 ¬ (𝐴 ∧ 𝐶) ¬ (𝐵 ∧ ¬ 𝐷) (¬ 𝐶 ∨ 𝐷)
T T T T T
T T T F T F T F F
T T F T T
T T F F T T
T F T T T
T F T F T F T F F
T F F T T
T F F F T T
F T T T T
F T T F T T F F TT F F
F T F T T
F T F F T T
F F T T T
F F T F F F F
F F F T T
F F F F T T
If we had used no shortcuts, we would have had to write 256 ‘T’s or ‘F’s on this table.
Using shortcuts, we only had to write 37. We have saved ourselves a lot of work.
By attending to the notion of a bad row – a potential counterexample to a purported
entailment – you can save yourself a huge amount of work in testing for validity. There
is still lots of
work involved in symbolising any natural language argument into Sentential, but once
Practice exercises
A. Using shortcuts, determine whether each sentence is a logical truth, a logical false‐
hood, or neither.
1. ¬𝐵 ∧ 𝐵
2. ¬𝐷 ∨ 𝐷
3. (𝐴 ∧ 𝐵) ∨ (𝐵 ∧ 𝐴)
4. ¬ 𝐴 → (𝐵 → 𝐴)
5. 𝐴 ↔ 𝐴 → (𝐵 ∧ ¬𝐵)
6. ¬(𝐴 ∧ 𝐵) ↔ 𝐴
7. 𝐴 → (𝐵 ∨ 𝐶)
8. (𝐴 ∧ ¬𝐴) → (𝐵 ∨ 𝐶)
9. (𝐵 ∧ 𝐷) ↔ 𝐴 ↔ (𝐴 ∨ 𝐶)
13
Partial Truth Tables
Sometimes, we do not need to know what happens on every row of a truth table. Sometimes,
just a single row or two will do. For example, suppose we want to show that
‘(𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)’ is not a logical truth. For this, we need only find one valuation on
which the sentence is false, so a one‐row PARTIAL TRUTH TABLE will do:
𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
F
We have only left space for one row, rather than 16, since we are only looking for one
valuation on which the sentence is false. For just that reason, we have filled in ‘F’ for the
entire sentence. A partial truth table is a device for ‘reverse engineering’ a valuation,
given a truth value assigned to a complex sentence. We work backward from that truth
value to what the valuation must or could be.
The main connective of the sentence is a conditional. In order for the conditional to be
false, the antecedent must be true and the consequent must be false. So we fill these
in on the table:
𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
T F F
In order for ‘(𝑈 ∧ 𝑇)’ to be true, both ‘𝑈’ and ‘𝑇’ must be true.
𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
T T TTT F F
Now we just need to make ‘(𝑆 ∧ 𝑊)’ false. To do this, we need to make at least one of
‘𝑆’ and ‘𝑊 ’ false. We can make both ‘𝑆’ and ‘𝑊 ’ false if we want. All that matters is that
the whole sentence turns out false on this row. Making an arbitrary decision, we finish
the table in this way:
𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
F T T F TTT FFF F
So we now have a partial truth table, which shows that ‘(𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)’ is not a
logical truth. Put otherwise, we have shown that there is a valuation which makes
‘(𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)’ false, namely, the valuation which makes ‘𝑆’ false, ‘𝑇’ true, ‘𝑈’ true
and ‘𝑊 ’ false.
We can use a partial truth table to show that this sentence is not a logical falsehood
either; this time, we look for a valuation which makes it true:
𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
T
To make the sentence true, it will suffice to ensure that the antecedent is false. Since
the antecedent is a conjunction, we can just make one of the conjuncts false. For no
particular reason, we choose to make ‘𝑈’ false; and then we can assign whatever truth
value we like to the other atomic sentences.
𝑆 𝑇 𝑈 𝑊 (𝑈 ∧ 𝑇) → (𝑆 ∧ 𝑊)
F T F F F FT TFF F
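A partial truth table is, in effect, a search for a single valuation with a given property. A brute‐force sketch of that search is below; the helper `find_valuation` is my own illustration, and simply tries valuations in order rather than reasoning backwards as we did above.

```python
from itertools import product

def find_valuation(sentence, n, target):
    """Return the first valuation giving `sentence` the value `target`, else None."""
    for row in product([True, False], repeat=n):
        if sentence(*row) == target:
            return row
    return None

# (U ∧ T) → (S ∧ W), with the valuation given in the order (S, T, U, W):
sent = lambda s, t, u, w: (not (u and t)) or (s and w)

print(find_valuation(sent, 4, False))  # → (True, True, True, False)
```

That a falsifying valuation is found confirms, as before, that the sentence is not a logical truth.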
Equivalence To show that two sentences are logically equivalent, we must show that
the sentences have the same truth value on every valuation. So this requires us to
consider each row of a complete truth table.
To show that two sentences are not logically equivalent, we only need to show that
there is a valuation on which they have different truth values. So this requires only a
partial truth table: construct the table given the assumption that one sentence is true
and the other false. So, for example, to show that ‘(¬𝐴 ∧ 𝐵)’ and ‘(𝐴 ∨ ¬𝐵)’ are not
logically equivalent, constructing this partial truth‐table would suffice:
𝐴 𝐵 (¬ 𝐴 ∧ 𝐵) (𝐴 ∨ ¬ 𝐵)
F T T FTT F FF T
Sometimes there is no unique way to fill in a partial truth table. Suppose, for example,
we want to show that ‘¬(𝑃 ∨ 𝑄)’ is not a logical truth, and so begin by assigning the
whole sentence F:
𝑃 𝑄 ¬ (𝑃 ∨ 𝑄)
F T
We see that the falsity of the whole negated sentence requires the embedded disjunction
to be true. But what do we do now? There is no unique way to proceed. What we
do is add a new row, in effect ‘branching’ the possible ways of constructing a valuation
which makes this embedded disjunction true. On the first row we assume the left
disjunct is true, and on the second that the right disjunct is true:
𝑃 𝑄 ¬ (𝑃 ∨ 𝑄)
T FTT
T F TT
But we can then see that no matter how we fill in the blank cells, in either row, we will
be able to make the whole sentence come out false. For example:
𝑃 𝑄 ¬ (𝑃 ∨ 𝑄)
T T FTTT
F T FFTT
Consistency To show that some sentences are jointly consistent, we must show that
there is a valuation which makes all of the sentences true. So this requires only a partial
truth table with a single row.
To show that some sentences are jointly inconsistent, we must show that there is no
valuation which makes all of the sentences true. So this requires a complete truth table:
you must show that on every row of the table at least one of the sentences is false.
Validity To show that an argument is valid, we must show that there is no valuation
which makes all of the premises true and the conclusion false. So this requires us to
consider all valuations in a complete truth table.
To show that an argument is invalid, we must show that there is a valuation which makes
all of the premises true and the conclusion false. So this requires only a one‐line partial
truth table on which all of the premises are true and the conclusion is false.
This table summarises what we need to consider in order to demonstrate the presence
or absence of various semantic features of sentences and arguments. So, checking a
sentence for contradictoriness involves considering all valuations, and we can directly
do that by constructing a complete truth table.
In all these uses of partial truth tables, we must begin constructing them with a par‐
ticular semantic property in mind. We will begin the construction with a different
hypothesis about the target sentence, depending on what property we are testing for.
If we are using partial truth tables to test consistency, we will begin by assigning each
sentence ‘true’. If we are testing for validity, we will assign the premises ‘true’, but the
conclusion ‘false’.
But it turns out we can use the method of partial truth tables in an indirect way to
also evaluate arguments for validity or sentences for inconsistency. The idea is this:
we attempt to construct a partial truth table showing that the argument is invalid, and
if we fail, we can conclude that the argument is in fact valid.
Consider first showing that an argument is invalid, which we just saw requires only a
one‐line partial truth table on which all of the premises are true and the conclusion is
false. Take this argument: (𝑃 ∧ 𝑅), (𝑄 ↔ 𝑃) ∴ 𝑄. We construct a partial truth table,
attempting to find a valuation which makes all
the premises true and the conclusion false:
𝑃 𝑄 𝑅 (𝑃 ∧ 𝑅) (𝑄 ↔ 𝑃) 𝑄
F T T F
Looking at the second premise, if we are to construct this valuation we need to make 𝑃
false: the premise is true, so both constituents have to have the same truth value, and
𝑄 is false by assumption in this valuation:
𝑃 𝑄 𝑅 (𝑃 ∧ 𝑅) (𝑄 ↔ 𝑃) 𝑄
F T F TF F
But looking at the first premise, we see that both 𝑃 and 𝑅 have to be true to make this
conjunction true:
𝑃 𝑄 𝑅 (𝑃 ∧ 𝑅) (𝑄 ↔ 𝑃) 𝑄
?? F T T TT F TF F
The truth of the first premise (given the other assumptions) requires 𝑃 to be true, while
the truth of the second (given the other assumptions) requires 𝑃 to be false. So: there
is no coherent way of assigning a truth value to 𝑃 so as to make this argument invalid.
(This is marked by the ‘??’ in the partial truth table.) Hence, the argument is valid.
I call this an INDIRECT use of partial truth tables. We do not construct the valuations
which actually demonstrate the presence or absence of a semantic property of an argu‐
ment or set of sentences. Rather, we show that the assumption that there is a valuation
that meets a certain condition is not coherent. So in the above case, we conclude that
nowhere among the 8 rows of the complete truth table for that argument is one that
makes the premises true and the conclusion false.
This procedure works because our partial truth table test is guaranteed to succeed in
demonstrating the absence of validity, for an invalid argument. Accordingly, if our test
fails to demonstrate the absence of validity, that must be because the argument is in
fact valid. (This is in keeping with the table at the end of the previous section. Our
failure to construct a valuation showing the argument invalid implicitly considers all
valuations.)
This indirect method of partial truth tables can also be used if we need to add addi‐
tional branches to our partial truth table. Suppose we are testing if ‘𝑃 ↔ ¬𝑃’ is a logical
falsehood. We attempt to construct a valuation making it true:
𝑃 (𝑃 ↔ ¬ 𝑃)
T
Again we need to branch our partial truth table to deal with the two possible ways this
biconditional might be true: either both sides are true, or both sides are false.
𝑃 (𝑃 ↔ ¬ 𝑃)
T TT
F TF
We can see, as we complete the table, that there is no coherent valuation making this
sentence true. So the indirect method allows us to deduce that it is a logical falsehood:
𝑃 (𝑃 ↔ ¬ 𝑃)
?? T TT F
?? F TF T
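The indirect method corresponds to an exhaustive search that comes back empty‐handed. Here is a sketch for the argument (𝑃 ∧ 𝑅), (𝑄 ↔ 𝑃) ∴ 𝑄 discussed above; the function name is my own, and the search simply tries all eight valuations rather than reasoning about 𝑃 directly.

```python
from itertools import product

def counterexample():
    """Search for a valuation making both premises true and the conclusion false."""
    for p, q, r in product([True, False], repeat=3):
        if (p and r) and (q == p) and (not q):
            return (p, q, r)
    return None  # the search fails, so the argument is valid

print(counterexample())  # → None
```

The failure of the search plays the role of the ‘??’ in the partial truth table: there is no coherent valuation of the required kind.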
Practice exercises
A. Use complete or partial truth tables (as appropriate) to determine whether these
pairs of sentences are logically equivalent:
1. 𝐴, ¬𝐴
2. 𝐴, 𝐴 ∨ 𝐴
3. 𝐴 → 𝐴, 𝐴 ↔ 𝐴
4. 𝐴 ∨ ¬𝐵, 𝐴 → 𝐵
5. 𝐴 ∧ ¬𝐴, ¬𝐵 ↔ 𝐵
6. ¬(𝐴 ∧ 𝐵), ¬𝐴 ∨ ¬𝐵
7. ¬(𝐴 → 𝐵), ¬𝐴 → ¬𝐵
8. (𝐴 → 𝐵), (¬𝐵 → ¬𝐴)
B. Use complete or partial truth tables (as appropriate) to determine whether these
sentences are jointly consistent, or jointly inconsistent:
1. 𝐴 ∧ 𝐵, 𝐶 → ¬𝐵, 𝐶
2. 𝐴 → 𝐵, 𝐵 → 𝐶 , 𝐴, ¬𝐶
3. 𝐴 ∨ 𝐵, 𝐵 ∨ 𝐶 , 𝐶 → ¬𝐴
4. 𝐴, 𝐵, 𝐶 , ¬𝐷, ¬𝐸 , 𝐹
C. Use complete or partial truth tables (as appropriate) to determine whether each
argument is valid or invalid:
1. 𝐴 ∨ 𝐴 → (𝐴 ↔ 𝐴) ∴ 𝐴
2. 𝐴 ↔ ¬(𝐵 ↔ 𝐴) ∴ 𝐴
3. 𝐴 → 𝐵, 𝐵 ∴ 𝐴
4. 𝐴 ∨ 𝐵, 𝐵 ∨ 𝐶, ¬𝐵 ∴ 𝐴 ∧ 𝐶
5. 𝐴 ↔ 𝐵, 𝐵 ↔ 𝐶 ∴ 𝐴 ↔ 𝐶
14
Expressiveness of Sentential
The Sheffer stroke For any sentences 𝒜 and ℬ, 𝒜 ↓ ℬ is true if and only if both 𝒜
and ℬ are false. We can summarise this in the schematic truth table for the Sheffer
stroke:
𝒜 ℬ 𝒜↓ℬ
T T F
T F F
F T F
F F T
Inspection of the schematic truth tables for ∧, ∨, etc., shows that their truth tables are
different from this one, and hence the Sheffer stroke is not one of the connectives of
Sentential. It is a connective of English, however: it is the ‘neither … nor …’ connective
that features in ‘Siya is neither an archer nor a jockey’, which is false iff she is either.
‘Whether or not’ The connective ‘… whether or not …’, as in the sentence ‘Sam is
happy whether or not she’s rich’ seems to have this schematic truth table:
𝒜 ℬ 𝒜 whether or not ℬ
T T T
T F T
F T F
F F F
1 There are in fact sixteen truth‐functional connectives that join two simpler sentences into a complex
sentence, but Sentential only includes four. (Why sixteen? Because there are four rows of the schem‐
atic truth table for such a connective, and each row can have either a T or an F recorded against it,
independent of the other rows, so there are 2 × 2 × 2 × 2 = 16 ways of constructing such a truth‐table.)
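The counting argument in this footnote is easy to verify mechanically: a two‐place truth‐functional connective just is a way of filling the four rows of its schematic truth table with T or F. A quick check, purely illustrative:

```python
from itertools import product

# Each two-place truth-functional connective is a choice of T or F for each of
# the four rows (TT, TF, FT, FF) of its schematic truth table.
tables = list(product([True, False], repeat=4))
print(len(tables))  # → 16
```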
2 I should flag a potential limitation here: this result needs our assumption that True and False are the
only truth values. Some MANY‐VALUED LOGICS include further ‘truth values’; for example a third truth
value Indeterminate. (It is dubious whether that is a truth value, or whether a sentence having it just
reflects our ignorance of its truth or falsity.) Such logics can have truth‐functional connectives that
don’t behave like any connective of Sentential. One motivating example for a third truth
value, neither true nor false, is provided by the cases of presupposition failure we will
look at in §19.4.
‘Exactly one’ Consider next the connective ‘exactly one of … and …’, which has this
schematic truth table:
𝒜 ℬ   exactly one of 𝒜 and ℬ
T T   F
T F   T
F T   T
F F   F
We want now to find a schematic Sentential sentence that has this same truth table. So
we shall want the sentence to be true on the second row, and true on the third row, and
false on the other rows. In other words, we want a sentence which is true on either the
second row or the third row.
Let’s begin by focusing on that second row, or rather the family of valuations corresponding
to it. Those valuations include only those that make 𝒜 true and ℬ false.
These are the only valuations among those we are considering which make 𝒜 true and
ℬ false. So they are the only valuations which make both 𝒜 and ¬ℬ true. So we can
construct a sentence which is true on valuations in that family, and those valuations
alone: the conjunction of 𝒜 and ¬ℬ, (𝒜 ∧ ¬ℬ).
Now look at the third row and its associated family of valuations. Those valuations
make 𝒜 false and ℬ true. They are the only valuations among those we are considering
which make 𝒜 false and ℬ true. So they are the only valuations among those we are
considering which make both of ¬𝒜 and ℬ true. So we can construct a schematic
sentence which is true on valuations in that family, and only those valuations: the
conjunction of ¬𝒜 and ℬ, (¬𝒜 ∧ ℬ).
Our target sentence, the one with the same truth table as ‘Exactly one of 𝒜 and ℬ’, is
true on either the second or third valuations. So it is true if either (𝒜 ∧ ¬ℬ) is true or
if (¬𝒜 ∧ ℬ) is true. And there is of course a schematic Sentential sentence with just this
profile: (𝒜 ∧ ¬ℬ) ∨ (¬𝒜 ∧ ℬ).
Let us summarise this construction by adding to our truth table:
𝒜 ℬ   (𝒜 ∧ ¬ℬ)  ∨  (¬𝒜 ∧ ℬ)
T T      F      F      F
T F      T      T      F
F T      F      T      T
F F      F      F      F
As we can see, we have come up with a schematic Sentential sentence with the intended
truth table.
1. First, write out the complete schematic truth table for the target sentence or
connective.
2. Then, identify which families of valuations (schematic truth table rows) the target
sentence is true on, and for each such row, construct a conjunctive schematic
Sentential sentence true on that row alone. (It will be a conjunction of those schematic
letters which the valuation makes true, together with the negations of those
schematic letters which the valuation makes false.)
› What if the target connective is true on no valuations? Then let the schem‐
atic Sentential sentence (𝒜∧¬𝒜) represent it – it too is true on no valuation.
› What if there is only one such conjunction, because the target sentence
is true in only one valuation? Then just take that conjunction to be the
Sentential rendering of the target sentence.
3. Finally, disjoin the conjunctions constructed at the second step; the resulting
schematic sentence is true on exactly the rows on which the target is true.
Logicians say that the schematic sentences that this procedure spits out are in DIS‐
JUNCTIVE NORMAL FORM.
This procedure doesn’t always give the simplest schematic Sentential sentence with a
given truth table, but for any truth table you like this procedure gives us a schematic
Sentential sentence with that truth table. Indeed, we can see that the Sentential sentence
¬(𝒜 ↔ ℬ) has the same truth table as our target sentence too.
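The procedure is fully mechanical, and a sketch of it fits in a dozen lines of Python. The function `dnf` and its string output format are my own illustration; the text’s procedure is stated for schematic sentences, not strings, but the steps are the same.

```python
from itertools import product

def dnf(table_fn, names):
    """Build a disjunctive-normal-form formula string from a truth function.

    One conjunction per valuation on which the target is true;
    (A ∧ ¬A) if it is true on none.
    """
    disjuncts = []
    for row in product([True, False], repeat=len(names)):
        if table_fn(*row):
            lits = [n if v else "¬" + n for n, v in zip(names, row)]
            disjuncts.append("(" + " ∧ ".join(lits) + ")")
    if not disjuncts:
        return "(" + names[0] + " ∧ ¬" + names[0] + ")"
    return " ∨ ".join(disjuncts)

# 'Exactly one of A and B' is true on the rows TF and FT:
xor = lambda a, b: a != b
print(dnf(xor, ["A", "B"]))  # → (A ∧ ¬B) ∨ (¬A ∧ B)
```

Run on the ‘exactly one’ table, it recovers precisely the disjunction constructed in the text.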
The procedure can be used to show that there is some redundancy in Sentential itself.
Take the connective ↔. Our procedure, applied to the schematic truth table for 𝒜 ↔ ℬ,
yields the following schematic sentence:
(𝒜 ∧ ℬ) ∨ (¬𝒜 ∧ ¬ℬ).
This schematic sentence says the same thing as the original schematic sentence with
the biconditional as its main connective, without using the biconditional. This could
be used as the basis of a program to remove the biconditional from the language. But
that would make Sentential more difficult to use, and we will not pursue this idea fur‐
ther.
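The claimed equivalence can be checked mechanically. In this sketch, Python’s Booleans stand in for truth values, `a == b` stands in for the biconditional, and the loop simply visits all four valuations:

```python
from itertools import product

# Check that (A ∧ B) ∨ (¬A ∧ ¬B) agrees with A ↔ B everywhere.
for a, b in product([True, False], repeat=2):
    biconditional = (a == b)                      # A ↔ B
    dnf_version = (a and b) or (not a and not b)  # (A ∧ B) ∨ (¬A ∧ ¬B)
    assert biconditional == dnf_version
print("equivalent on all four valuations")
```

Since the two expressions agree on every valuation, they have the same truth table, which is all that sameness of truth table amounts to.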
Practice exercises
A. For each of columns (i), (ii) and (iii) below, use the procedure just outlined to find
a Sentential sentence that has the truth table depicted:
108 TRUTH TABLES
B. Can you find Sentential schematic sentences which have the same truth table as these
English connectives?
𝐿: Willard is a logician.
𝐴: All logicians wear funny hats.
𝐹 : Willard wears a funny hat.
Again, the best Sentential symbolisation is the invalid 𝑃 ∴ 𝑄. The validity of this argu‐
ment depends on the internal structure of the sentences, and specifically the connec‐
tion between the name ‘James’ and the phrase ‘someone’.
§15. BUILDING BLOCKS OF Quantifier 111
The basic units of Sentential are atomic sentences, and Sentential cannot decompose
these. None of the sentences in the arguments above have any truth‐functional
connectives, so they must be symbolised as atomic sentences of Sentential. To symbolise
arguments like the preceding, we will have to develop a new logical language which will
allow us to split the atom. We will call this language Quantifier, and the study of this
language and its features is quantified logic.
The details of Quantifier will be explained throughout this chapter, but here is the ba‐
sic idea about how to split the atom(ic sentence). The key insight is that many natural
language sentences have subject‐predicate structure, and some arguments are valid in
virtue of this structure. Quantifier adds to Sentential some resources for modelling this
structure – or perhaps more accurately, it allows us to model the predicate‐name struc‐
ture of many sentences, along with any truth‐functional structure.
Names First, we have names. In Quantifier, we indicate these with lower case italic
letters. For instance, we might let ‘𝑏’ stand for Bertie, or let ‘𝑖 ’ stand for Willard. The
names of Quantifier correspond to proper names in English, like ‘Willard’ or ‘Elyse’,
which also stand for the things they name.
Predicates Second, we have predicates. For instance, we might let the Quantifier
predicate ‘𝐷’ symbolise the English predicate ‘        is a dog’. Then the expression ‘𝐷𝑏’
will be a sentence in Quantifier, which symbolises the English sentence ‘Bertie is a dog’.
Equally, we might let the Quantifier predicate ‘𝐿’ symbolise the English predicate ‘
is a logician’. Then the expression ‘𝐿𝑖 ’ will symbolise the English sentence ‘Willard is a
logician’.
Quantifiers Third, we have quantifier phrases. These tell us how much. In Eng‐
lish there are lots of quantifier phrases. But in Quantifier we will focus on just two:
‘all’/‘every’ and ‘there is at least one’/‘some’. So we might symbolise the English sen‐
tence ‘there is a dog’ with the Quantifier sentence ‘∃𝑥𝐷𝑥 ’, which we would naturally
read aloud as ‘there is at least one thing, 𝑥, such that 𝑥 is a dog’.
That is the general idea. But Quantifier is significantly more subtle than Sentential. So
we will come at it slowly.
The clearest cases of singular terms are proper names, and these occupy a distinct
syntactic category in Quantifier. Most of the other types of English singular terms are
modelled in more or less indirect ways in Quantifier.
Definites and possessives are handled principally by paraphrase. We treat possessives
as disguised definite descriptions, paraphrasing ‘Antony’s eldest child’ as something
like ‘the eldest child of Antony’. In turn, definites are handled as complex constructions
in Quantifier, as we’ll see when we take them up in §19.
Even trickier in some ways are singular term uses of pronouns and demonstratives.
Both of these constructions rely on the CONVERSATIONAL CONTEXT to fix a determinate
reference. I might need to gesture, or rely on our previous utterances, to understand
who ‘she’ refers to in ‘she plays violin’. Likewise, ‘that loud dog’ might vary in which
dog it refers to from conversation to conversation. There are approaches that attempt
to model the role of context in determining the meaning of an expression, but we
will not attempt to do this here. Quantifier is not designed to model every aspect of
English. If we are forced to try to represent some English sentences involving context‐
sensitive singular terms in Quantifier, we will need to resort to paraphrases that are
not fully adequate (e.g., treating demonstratives as definite descriptions, despite their
differences).
Moreover, pronouns are not singular terms in every use. Compare the uses of the
pronoun ‘she’ in ‘She plays violin’ and ‘Every girl thinks she deserves icecream’. The
first refers to a specific individual, perhaps with some contextual cues like gestures to
help identify which. The second, however, doesn’t refer to a specific girl – rather, it
ranges over all girls. Perhaps surprisingly, Quantifier takes this second kind of use of
pronouns to be primary. We will discuss how Quantifier represents pronouns when we
introduce variables in §15.5.
15.3 Names
PROPER NAMES are a particularly important kind of singular term. These are expres‐
sions that label individuals without describing them. The name ‘Emerson’ is a proper
name, and the name alone does not tell you anything about Emerson. Of course, some
names are traditionally given to boys and others are traditionally given to girls. If
‘Hilary’ is used as a singular term, you might guess that it refers to a woman. You might,
though, be guessing wrongly. Indeed, the name does not necessarily mean that what
is referred to is even a person: Hilary might be a giraffe, for all you could tell just from
the name ‘Hilary’. In English, the use of certain names triggers our knowledge of these
conventions, so that (for example) the use of the name ‘Fido’ might well trigger an
expectation that the thing named is a dog. However, while it would violate convention
to name your child ‘Fido’, once you manage to assign the name it can perfectly well
refer to a human child.
In Quantifier, no such conventions attach to its category of NAMES. These are pure
names, whose only role is to designate some specific individual. Names in Quantifier
are represented by lower‐case letters ‘𝑎’ through to ‘𝑟’. We can add subscripts if we
want to use some letter more than once, if we have a complicated discourse with many
different names. So here are some names in Quantifier:
These should be thought of along the lines of proper names in English. But with some
differences. First, ‘Antony Eagle’ is a proper name, but there are a number of people
with this name. (Equally, there are at least two people with the name ‘P.D. Magnus’
and several people named ‘Tim Button’.) We live with this kind of ambiguity in Eng‐
lish, allowing context to determine that some particular utterance of the name ‘Antony
Eagle’ refers to one of the contributors to this book, and not some other guy. In Quan‐
tifier, we do not tolerate any such ambiguity. Each name must pick out exactly one
thing. (However, two different names may pick out the same thing.) Second, names
(and predicates) in Quantifier are assigned their meaning, or interpreted, only tempor‐
arily. (This is just like the way that atomic sentences are only assigned a truth value in
Sentential temporarily, relative to a valuation.)
As with Sentential, we provide symbolisation keys. These indicate, temporarily, what a
name shall pick out. So we might offer:
𝑒: Elsa
𝑔: Gregor
𝑚: Marybeth
Again, in using a natural language name in the symbolisation key, what we are saying
is that the thing referred to by the Quantifier name ‘𝑒’ is stipulated – for now – to be
the same thing referred to by the English proper noun ‘Elsa’. We are not saying that ‘𝑒’
and ‘Elsa’ are synonyms.
For all we know, perhaps there is additional nuance to the meaning of the name ‘Elsa’
other than what it refers to. If there is, it is not preserved in the Quantifier name ‘𝑒’,
which has no nuance in its meaning other than the thing it denotes. The Quantifier
name ‘𝑒’ might be stipulated to denote Elsa on one occasion, and Eddie on another.
You may have been taught in school that a noun is a ‘naming word’. It is safe to use
names in Quantifier to symbolise proper nouns. But dealing with common nouns is a
more subtle matter. Even though they are often described as ‘general names’, common
nouns actually function as names relatively rarely. Some common nouns which do
often function as names are NATURAL KIND TERMS like ‘gold’ and ‘tiger’. These are
common nouns which really do name a kind of thing. Consider this example:
114 THE LANGUAGE OF QUANTIFIED LOGIC
Gold is scarce;
Nothing scarce is cheap;
So: Gold isn’t cheap.
This argument is valid in virtue of its form. In the first premise, the phrase
‘        is scarce’ is predicated of ‘gold’, so ‘gold’ must be functioning as
a name in this argument. But notice that ‘        is gold’ is a perfectly good predicate.
So we cannot simply treat all natural kind terms as proper names of general kinds.
15.4 Predicates
The simplest predicates denote properties of individuals. They are things you can say
about the features or behaviour of an object. Here are some examples of English pre‐
dicates:
› runs
› is a dog
› A piano fell on
In general, you can think about predicates as things which combine with singular terms
to make sentences. In these cases, they are interpreted with the aid of natural language
VERB PHRASES. The most elementary phrases that correspond to predicates are simple
intransitive verb phrases like ‘runs’. But predicates can symbolise more complex verb
phrases. In a subject‐predicate sentence, we can treat any syntactic sub‐unit of a sen‐
tence including everything but the subject as a predicate. So ‘Bertie is a dog’ can be
seen as involving the name ‘Bertie’ and the predicate ‘is a dog’. Note that proper names
can occur within a predicate, as in the verb phrase ‘was a student of David Lewis’. (We
will see how to model sentences where multiple names interact with a complex pre‐
dicate below in §17.) You can begin with sentences and make predicates out of them
by removing singular terms and leaving ‘slots’ in which singular terms can be placed.
Consider the sentence, ‘Vinnie borrowed the family car from Nunzio’. By removing one
singular term, we can obtain any of three different predicates:
›         borrowed the family car from Nunzio
› Vinnie borrowed         from Nunzio
› Vinnie borrowed the family car from         
(What if we wanted to remove two or more singular terms and leave more than one
gap? We shall return to this in §17.) Quantifier predicates are capital letters 𝐴 through
𝑍, with or without subscripts. We might write a symbolisation key for predicates thus:
𝐴: is angry
𝐻: is happy
If we combine our two symbolisation keys, we can start to symbolise some English
sentences that use these names and predicates in combination. For example, consider
the English sentences:
83. It is snowing;
84. It seems that George is hungry;
85. She’ll be right.
The pronouns in these cases are known as DUMMY PRONOUNS. These pronouns have
no obvious referent, but that poses no problem for the interpretation of the sentences.
Example 83 means something like the bare verb ‘snowfalling’, if only that were gram‐
matical. In these cases, the predicate following the pronoun does not denote a property
of individuals, because there is no way to attach it to a particular individual. Contrast
these examples:
In 86, ‘it’ refers to Coober Pedy. Not so in 87: ‘It’ in that example is a DUMMY PRONOUN.
Yet ‘there’ does refer to Coober Pedy. Drop the latter pronoun, and you obtain ‘it seldom
rains’. Having already argued that ‘it’ is a dummy pronoun in the longer sentence, it
appears to remain a dummy pronoun in the shortened sentence. Try substituting some
proper name for ‘it’ in ‘it seldom rains’ and see what nonsense you get.
Such predicates, where the dummy pronoun should not be thought of as a genuine
gap in the sentence, and with no other singular terms evident, have a meaning which
is purely determined by the predicate. English requires a grammatical subject, but
Quantifier is not subject to a similar limitation. Rather than reproduce this feature of
English, we will allow Quantifier predicates occurring by themselves to count as
grammatical sentences of Quantifier. These zero‐place predicates, semantically requiring
no subject, are to be symbolised by capital letters ‘𝐴’ through ‘𝑍’ (perhaps subscrip‐
ted), but without needing any adjacent name to be grammatical.
Note we have thereby included all atomic sentences of Sentential in Quantifier among
these special predicates of Quantifier. In Sentential we used these sentences to symbol‐
ise any sentence of English which did not include sentence connectives. In Quantifier
their use will be more limited. But nevertheless, syntactically, since Quantifier has all
the connectives of Sentential and all the atomic sentences, we see that every sentence
of Sentential is also a sentence of Quantifier.
15.5 Quantifiers
We are now ready to introduce quantifiers. In general, a quantifier tells us how many.
Consider these sentences:
We will focus initially on the coarse‐grained quantifiers ‘every’/‘all’ and ‘some’. We will
look at numerical quantifiers, as in examples 92 and 93, in §18.
It might be tempting to symbolise sentence 88 as ‘(𝐻𝑒 ∧ (𝐻𝑔 ∧ 𝐻𝑚))’. Yet this would
only say that Elsa, Gregor, and Marybeth are happy. We want to say that everyone is
happy, even those we have not named, even those who are nameless.
Note that 88 and 89 and 90 can be roughly paraphrased like this:
In each of these, we have a pronoun – singular ‘they’ in 94 and 95, ‘she’ in 96 – which
is governed by the preceding phrase. That phrase gives us information about what
this pronoun is pointing to – is it pointing severally to everyone, as in 94? Or to just
someone, though it is generally unknown which particular person it is, as in 95? In
either case, something general is being said, rather than something specific, even in
example 95 which is true just in case there is at least one angry person – it doesn’t
matter which person it is.
In this sort of construction, the sentences ‘they are happy’ and ‘she thinks she herself
deserves icecream’, which are headed by a bare pronoun, are called OPEN SENTENCES.
An open sentence in English can be used to say something meaningful, if the circum‐
stances permit a unique interpretation of the pronoun – consider ‘she plays violin’ from
§15.3. But in many cases no such unique interpretation is possible. If I gesture at a large
crowd and say simply ‘he is angry’, I may not manage to say anything meaningful if there
is no way to establish which person this use of ‘he’ is pointing to. The other part of the
sentence, the ‘every person is such that …’ part, is called a QUANTIFIER PHRASE. The
quantifier phrase gives us guidance about how to interpret the otherwise bare pronoun.
The treatment of quantifier phrases in Quantifier actually follows the structure of these
paraphrases rather well. The Quantifier analogue of these embedded pronouns is the
category of VARIABLE. In Quantifier, variables are italic lower case letters ‘𝑠’ through ‘𝑧’,
with or without subscripts. They combine with predicates to form open sentences of
the form ‘𝒜𝓍’. Grammatically variables are thus like singular terms. However, as their
name suggests, variables do not denote any fixed individual. They will not be assigned
a meaning by a symbolisation key, even temporarily. Rather, their role is to be governed
by an accompanying quantifier phrase to say something general about a situation. In
Quantifier, an open sentence combines with a quantifier to form a sentence. (Notice
that I have here returned to the practice of using ‘𝒜 ’ as a metavariable, from §7.)
Universal Quantifier The first quantifier from Quantifier we meet is the UNIVERSAL
QUANTIFIER, symbolised ‘∀’, and which corresponds to ‘every’. Unlike English, we al‐
ways follow a quantifier in Quantifier by the variable it governs, to avoid the possibility
of confusion. Putting this all together, we might symbolise sentence 88 as ‘∀𝑥𝐻𝑥 ’. The
variable ‘𝑥’ is serving as a kind of placeholder, playing the role that is allotted to the
pronoun in the English paraphrase 94. The expression ‘∀𝑥’ intuitively means that you
can pick anyone to be temporarily denoted by ‘𝑥’. The subsequent ‘𝐻𝑥’ indicates, of
that thing you picked out, that it is happy. (Note that pronoun again.)
I should say that there is no special reason to use ‘𝑥’ rather than some other variable.
The sentences ‘∀𝑥𝐻𝑥 ’, ‘∀𝑦𝐻𝑦’, ‘∀𝑧𝐻𝑧’, and ‘∀𝑥5 𝐻𝑥5 ’ use different variables, but they will
all be logically equivalent.
Sentence 97 can be paraphrased as, ‘It is not the case that someone is angry’. We can
then symbolise it using negation and an existential quantifier: ‘¬∃𝑥𝐴𝑥 ’. Yet sentence
97 could also be paraphrased as, ‘Everyone is not angry’. With this in mind, it can
be symbolised using negation and a universal quantifier: ‘∀𝑥¬𝐴𝑥 ’. Both of these are
acceptable symbolisations. Indeed, it will transpire that, in general, ∀𝑥¬𝒜 is logically
equivalent to ¬∃𝑥𝒜 . Symbolising a sentence one way, rather than the other, might
seem more ‘natural’ in some contexts, but it is not much more than a matter of taste.
Sentence 98 is most naturally paraphrased as, ‘There is some x, such that x is not happy’.
This then becomes ‘∃𝑥¬𝐻𝑥 ’. Of course, we could equally have written ‘¬∀𝑥𝐻𝑥 ’, which
we would naturally read as ‘it is not the case that everyone is happy’. And that would
be a perfectly adequate symbolisation of sentence 99.
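Over a finite domain, the behaviour of ‘∀’ and ‘∃’ is mirrored by Python’s built-in `all` and `any`, which makes the two equivalences just mentioned easy to spot-check. In this sketch the domain and the interpretations of ‘is angry’ and ‘is happy’ are entirely made up:

```python
domain = ["Elsa", "Gregor", "Marybeth"]
angry = {"Gregor"}          # made-up interpretation of 'is angry'
happy = {"Elsa", "Gregor"}  # made-up interpretation of 'is happy'

def A(x): return x in angry
def H(x): return x in happy

# 'No one is angry': ¬∃xAx and ∀x¬Ax agree in truth value.
assert (not any(A(x) for x in domain)) == all(not A(x) for x in domain)

# 'Someone is not happy': ∃x¬Hx and ¬∀xHx agree in truth value.
assert any(not H(x) for x in domain) == (not all(H(x) for x in domain))
```

Of course, a pair of assertions on one toy domain is not a proof of logical equivalence – that requires reasoning about all interpretations – but it shows how the two symbolisations track one another.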
Quantifiers get their name because they tell us how many things have a certain feature.
Quantifier allows only very crude distinctions: we have seen that we can symbolise ‘no
one’, ‘someone’, and ‘everyone’. English has many other quantifier phrases: ‘most’, ‘a
few’, ‘more than half’, ‘at least three’, etc. Some can be handled in a roundabout way in
Quantifier, as we will see: the numerical quantifier ‘at least three’, for example, we will
meet again in §18. But others, like ‘most’, simply cannot be reliably symbolised
in Quantifier.
15.6 Domains
Given the symbolisation key we have been using, ‘∀𝑥𝐻𝑥 ’ symbolises ‘Everyone is happy’.
Who is included in this everyone? When we use sentences like this in English, we
usually do not mean everyone now alive on the Earth. We almost certainly do not
mean everyone who was ever alive or who will ever live. We usually mean something
more modest: everyone now in the building, everyone enrolled in the ballet class, or
whatever.
In order to eliminate this ambiguity, we will need to specify a DOMAIN. The domain is
just the things that we are talking about. So if we want to talk about people in Chicago,
we define the domain to be people in Chicago. We write this at the beginning of the
symbolisation key, like this:
domain: people in Chicago
The quantifiers range over the domain. Given this domain, ‘∀𝑥’ is to be read roughly as
‘Every person in Chicago is such that …’ and ‘∃𝑥’ is to be read roughly as ‘Some person
in Chicago is such that …’.
In Quantifier, the domain must always include at least one thing. Moreover, in English
we can conclude ‘something is angry’ when given ‘Gregor is angry’. In Quantifier, then,
we shall want to be able to infer ‘∃𝑥𝐴𝑥 ’ from ‘𝐴𝑔’. So we shall insist that each name
must pick out exactly one thing in the domain. If we want to name people in places
beside Chicago, then we need to include those people in the domain.
In permitting multiple domains, Quantifier follows the lead of natural languages like
English. Consider an argument like this:
100. All the beer has been drunk; so we’re going to the bottle‐o.
The premise says that all the beer is gone. But the conclusion only makes sense if
there is more beer at the bottle shop. So whatever domain of things we are talking
about when we state the premise, it cannot include absolutely everything. In Quantifier,
we sidestep the interesting issues involved in deciding just what domain is involved
in evaluating sentences like ‘all the beer has been drunk’, and explicitly include the
current domain of quantification in our symbolisation key.
Note further that to make sense of the sentence ‘all the beer has been drunk’, the do‐
main will have to contain both past and present things, so we can understand what we
are saying about the now‐absent beer. A domain contains what we are talking about.
It might be difficult to understand how we do it, but we do talk about past things,
fictional things, abstract things, merely possible things, and other unusual entities.
So our domains must be flexible enough to include any of these things we might be
talking about. It is a question in philosophical logic as to how we can explain how we
manage to include nonexistent things in our domain of discourse, but for the purposes
of Quantifier all we need to know is that this is something we somehow manage to do.
A domain must have at least one member. A name must pick out
exactly one member of the domain. But a member of the domain may
be picked out by one name, many names, or none at all. The domain
can consist of anything we might be discussing; it is not restricted to
things that presently exist.
Practice exercises
A. In each of the following sentences, identify the names and predicates. Comment on
any difficulties.
B. Identify the possible predicates that can be found by replacing singular terms with
gaps in these sentences:
1. He dislikes Joel;
C. Make use of this symbolisation key to symbolise the following sentences into Quan‐
tifer, commenting on any difficulties:
1. She is tall;
2. Bob likes her;
So: Bob likes someone tall.
16
Sentences with One Quantifier
We now have the basic pieces of Quantifier. Symbolising many sentences of English
will only be a matter of knowing the right way to combine predicates, names, quanti‐
fiers, and the truth‐functional connectives. There is a knack to this, and there is no
substitute for practice.
Sentence 101 is most naturally symbolised using a universal quantifier. The universal
quantifier says something about everything in the domain, not just about the coins in
§16. SENTENCES WITH ONE QUANTIFIER 123
my pocket. So if we want to talk just about coins in my pocket, we will need to restrict
the quantifier, by imposing a condition on the things we are saying are 20¢ pieces. That
is: something in the domain is claimed to be a 20¢ piece only if it meets the restricting
condition. That leads us to this conditional paraphrase:
105. For any (coin): if that coin is in my pocket, then it is a 20¢ piece.
Example 102 uses the quantifier phrase ‘some’. The same thought could be expressed
using different quantifier phrases:
These phrases all indicate an existential quantifier. In these examples, the class of coins
on the table is being related to the class of dollar coins, and it is claimed that at least
one member of the former class is also in the latter class – that there is overlap. This
is represented in Quantifier following the example of this paraphrase:
108. There is something (a coin): it is in both the class of things on the table, and in
the class of dollar coins.
A conditional will usually be the natural connective to use with a universal quantifier,
but a conditional within the scope of an existential quantifier tends to say something
very weak indeed. As a general rule of thumb, do not put conditionals in the scope of
existential quantifiers unless you are sure that you need one.
Sentence 103 can be paraphrased as, ‘It is not the case that every coin on the table
is a dollar’. So we can symbolise it by ‘¬∀𝑥(𝑇𝑥 → 𝐷𝑥)’. You might look at sentence
103 and paraphrase it instead as, ‘Some coin on the table is not a dollar’. You would
then symbolise it by ‘∃𝑥(𝑇𝑥 ∧ ¬𝐷𝑥)’. Although it is probably not immediately obvious
yet, these two sentences are logically equivalent. (This is due to the logical equival‐
ence between ¬∀𝑥𝒜 and ∃𝑥¬𝒜 , mentioned in §15, along with the logical equivalence
between ¬(𝒜 → ℬ) and 𝒜 ∧ ¬ℬ.)
Sentence 104 can be paraphrased as, ‘It is not the case that there is some dollar
in my pocket’. This can be symbolised by ‘¬∃𝑥(𝑃𝑥 ∧ 𝐷𝑥)’. It might also be para‐
phrased as, ‘Everything in my pocket is a nondollar’, and then could be symbolised
by ‘∀𝑥(𝑃𝑥 → ¬𝐷𝑥)’. Again the two symbolisations are logically equivalent. Both are
correct symbolisations of sentence 104.
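These paired symbolisations can be compared concretely in a toy situation. The sketch below invents a tiny domain of coins (the `Coin` type and the particular coins are my own, purely for illustration); the restricted universal ‘∀x(Tx → Dx)’ is rendered with a filtered `all`, and the restricted existential ‘∃x(Tx ∧ Dx)’ with `any` over a conjunction:

```python
from collections import namedtuple

Coin = namedtuple("Coin", ["on_table", "in_pocket", "is_dollar"])
domain = [Coin(True, False, True),    # a dollar on the table
          Coin(True, False, False),   # a non-dollar on the table
          Coin(False, True, False)]   # a non-dollar in my pocket

# 'Not every coin on the table is a dollar': ¬∀x(Tx → Dx)
not_all = not all(c.is_dollar for c in domain if c.on_table)
# 'Some coin on the table is not a dollar': ∃x(Tx ∧ ¬Dx)
some_not = any(c.on_table and not c.is_dollar for c in domain)
assert not_all == some_not

# 'There is no dollar in my pocket': ¬∃x(Px ∧ Dx)
no_dollar = not any(c.in_pocket and c.is_dollar for c in domain)
# 'Everything in my pocket is a nondollar': ∀x(Px → ¬Dx)
all_non = all(not c.is_dollar for c in domain if c.in_pocket)
assert no_dollar == all_non
```

In this situation both pairs come out with matching truth values, as the logical equivalences predict.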
It is possible to write the symbolisation key for these sentences in this way:
domain: animals
𝑀: is a monkey.
𝑆: knows sign language.
Sentence 109 can now be symbolised by ‘∀𝑥(𝑀𝑥 → 𝑆𝑥)’. Sentence 110 can be symbolised
as ‘∃𝑥(𝑀𝑥 ∧ 𝑆𝑥)’.
It is tempting to say that sentence 109 entails sentence 110. That is, we might think that
it is impossible for it to be the case that every monkey knows sign language, without
it’s also being the case that some monkey knows sign language. But this would be
a mistake. It is possible for the sentence ‘∀𝑥(𝑀𝑥 → 𝑆𝑥)’ to be true even though the
sentence ‘∃𝑥(𝑀𝑥 ∧ 𝑆𝑥)’ is false.
How can this be? The answer comes from considering whether these sentences would
be true or false if there are no monkeys. If there are no monkeys at all (in some domain),
then ‘∀𝑥(𝑀𝑥 → 𝑆𝑥)’ would be vacuously true. Take the domain of reptiles. Look at the
domain, and pick any monkey you like – it knows sign language!1 There is certainly
no counterexample to the claim available in this domain. And because of the role of
the conditional in our symbolisation, it turns out that a universally quantified claim
with an unsatisfied restricting condition will also be true. In Quantifier, a universally
quantified sentence of the form ∀𝓍(𝒜𝓍 → ℬ𝓍) is false only if we can find something
which is 𝒜 without being ℬ. If we can’t find such a thing, perhaps because we can’t
find anything which is 𝒜 in the first place, then the sentence will be true (since truth is
just lack of falsity, and this sentence isn’t false because we can’t find a case that falsifies
it). This derives ultimately from the feature of Sentential we have already acknowledged
to be questionably analogous to English, namely, the fact that a conditional is false only
if there is a counterexample, a case where the antecedent is true and the consequent
false.
Another example will help to bring this home. Suppose we extend the above symbol‐
isation key, by adding:
𝑅: is a refrigerator
Now consider the sentence ‘∀𝑥(𝑅𝑥 → 𝑀𝑥)’. This symbolises ‘every refrigerator is a mon‐
key’. And this sentence is true, given our symbolisation key. This is counterintuitive,
since we do not want to say that there are a whole bunch of refrigerator monkeys. It
is important to remember, though, that ‘∀𝑥(𝑅𝑥 → 𝑀𝑥)’ is true iff any member of the
domain that is a refrigerator is a monkey. Since the domain is animals, there are no
refrigerators in the domain. Again, then, the sentence is vacuously true.
If you were actually dealing with the sentence ‘All refrigerators are monkeys’, then you
would most likely want to include kitchen appliances in the domain. Then the predic‐
ate ‘𝑅’ would not be empty and the sentence ‘∀𝑥(𝑅𝑥 → 𝑀𝑥)’ would be false. Remember,
though, that a predicate is empty only relative to a particular domain.
1 Remember this is not a counterfactual claim (8.6); even if ‘All monkeys know sign language’ is vacuously
true in the domain of reptiles, that wouldn’t mean that ‘Any monkey would know sign language’ is true.
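The vacuous-truth point can be made vivid computationally. In this made-up sketch, the domain is the reptiles and the predicate for ‘is a monkey’ is empty, so the universally quantified sentence comes out true while its existential counterpart comes out false:

```python
reptiles = ["gecko", "tuatara", "skink"]   # a domain with no monkeys

def is_monkey(x): return False             # 'M' is empty in this domain
def knows_sign_language(x): return False   # 'S': no reptile signs

# ∀x(Mx → Sx), 'every monkey knows sign language': vacuously true,
# since nothing in the domain is a counterexample.
assert all((not is_monkey(x)) or knows_sign_language(x) for x in reptiles)

# ∃x(Mx ∧ Sx), 'some monkey knows sign language': false here.
assert not any(is_monkey(x) and knows_sign_language(x) for x in reptiles)
```

Here the conditional ‘Mx → Sx’ is rendered, as usual, by ‘¬Mx ∨ Sx’; since ‘Mx’ fails for everything in the domain, every instance of the conditional is true, and the universal claim with it.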
𝑅: is a rose
𝑇: has a thorn
It is tempting to say that sentence 111 should be symbolised as ‘∀𝑥(𝑅𝑥 → 𝑇𝑥)’. But
we have not yet chosen a domain. If the domain contains all roses, this would be
a good symbolisation. Yet if the domain is merely things on my kitchen table, then
‘∀𝑥(𝑅𝑥 → 𝑇𝑥)’ would only capture the fact that every rose on my kitchen
table has a thorn. If there are no roses on my kitchen table, the sentence would be
trivially true. This is not what we want. To symbolise sentence 111 adequately, we need
to include all the roses in the domain. But now we have two options.
First, we can restrict the domain to include all roses but only roses. Then sentence
111 can, if we like, be symbolised with ‘∀𝑥𝑇𝑥 ’. This is true iff everything in the domain
has a thorn; since the domain is just the roses, this is true iff every rose has a thorn.
By restricting the domain, we have been able to symbolise our English sentence with a
very short sentence of Quantifier. So this approach can save us trouble, if every sentence
that we want to deal with is about roses.
Second, we can let the domain contain things besides roses: rhododendrons; rats;
rifles; whatevers. And we will certainly need to include a more expansive domain if
we simultaneously want to symbolise sentences like:
112. Every cowboy sings a sad, sad song.
Our domain must now include both all the roses (so that we can symbolise sentence
111) and all the cowboys (so that we can symbolise sentence 112). So we might offer the
following symbolisation key:
Now we will have to symbolise sentence 111 with ‘∀𝑥(𝑅𝑥 → 𝑇𝑥)’, since ‘∀𝑥𝑇𝑥 ’ would
symbolise the sentence ‘every person or plant has a thorn’. Similarly, we will have to
symbolise sentence 112 with ‘∀𝑥(𝐶𝑥 → 𝑆𝑥)’.
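The trade-off between a narrow domain and an explicit restriction can be sketched as follows; the particular domains and the interpretations of ‘𝑅’ and ‘𝑇’ are invented for illustration:

```python
roses = ["rose1", "rose2"]                 # narrow domain: roses only
mixed = ["rose1", "rose2", "cowboy1"]      # expansive domain
has_thorn = {"rose1", "rose2"}

def R(x): return x.startswith("rose")      # 'is a rose'
def T(x): return x in has_thorn            # 'has a thorn'

# With a domain of roses only, the bare ∀xTx suffices:
assert all(T(x) for x in roses)

# With a wider domain, ∀xTx says too much (cowboys lack thorns) …
assert not all(T(x) for x in mixed)
# … so we must restrict with a conditional: ∀x(Rx → Tx).
assert all((not R(x)) or T(x) for x in mixed)
```

The same English sentence gets a shorter symbolisation relative to the narrow domain, at the cost of flexibility if the conversation moves beyond roses.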
In general, the universal quantifier can be used to symbolise the English expression
‘everyone’ if the domain only contains people. If there are people and other things in
the domain, then ‘everyone’ must be treated as ‘every person’.
If you choose a narrow domain, you can make the task of symbolisation easier. If we
are attempting to symbolise example 96, ‘every girl thinks that she deserves icecream’,
we can pick the domain to be girls and then we only need to introduce a predicate
‘ thinks that they themselves deserve icecream’, symbolised ‘𝐷’. The symbolisa‐
tion is then the simple ‘∀𝑥𝐷𝑥 ’. But our options are limited if the conversation goes
on to talk about things other than girls. On the other hand, if you pick an expansive
domain (such as everything whatsoever), you can always just impose an appropriate
restriction. In this case, we could introduce the predicate ‘𝐺 ’ to stand for ‘        is a
girl’, and symbolise the sentence as ‘∀𝑥(𝐺𝑥 → 𝐷𝑥)’.
domain: women
𝐵: is a bassist.
𝑅: is a rock star.
𝑘: Kim Deal
The same words appear as the consequent in sentences 113 and 114 (‘… she is a rock
star’), but they mean very different things (recall §15.5). To make this clear, it often
helps to paraphrase the original sentences into a more unusual but clearer form.
Sentence 113 can be paraphrased as, ‘Consider Kim Deal: if she is a bassist, then she
is a rockstar’. The bare pronoun ‘she’ gets to denote Kim Deal because of our initial
‘Consider Kim Deal’ remark. This then says something about one particular person,
and can obviously be symbolised as ‘𝐵𝑘 → 𝑅𝑘’.
Sentence 114 gets a very similar paraphrase, with the same embedded conditional:
‘Consider any woman: if she is a bassist, then she is a rockstar’. The difference in the
‘Consider …’ phrase, however, forces a very different interpretation for the sentence as a whole.
Replacing the English pronouns by variables, the Quantifier equivalent of a pronoun, we
get this awkward quasi‐English paraphrase: ‘For any woman x, if x is a bassist, then x
is a rockstar’. Now this can be symbolised as ‘∀𝑥(𝐵𝑥 → 𝑅𝑥)’. This is the same sentence
we would have used to symbolise ‘Every woman who is a bassist is a rock star’. And on
reflection, that is surely true iff sentence 114 is true, as we would hope.
Consider these further sentences, using the same interpretation as above, though now
with a domain of all people.
The same words appear as the antecedent in sentences 115 and 116 (‘If anyone is a
bassist…’). But it can be tricky to work out how to symbolise these two uses. Again,
paraphrase will come to our aid.
Sentence 115 can be paraphrased, ‘If there is at least one bassist, then Kim Deal is a
rock star’. It is now clear that this is a conditional whose antecedent is a quantified
expression; so we can symbolise the entire sentence with a conditional as the main
connective: ‘∃𝑥𝐵𝑥 → 𝑅𝑘’.
Sentence 116 can be paraphrased, ‘For all people x, if x is a bassist, then x is a rock star’.
Or, in more natural English, it can be paraphrased by ‘All bassists are rock stars’. It is
best symbolised as ‘∀𝑥(𝐵𝑥 → 𝑅𝑥)’, just like sentence 114.
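Since both symbolisations have precise truth conditions, we can check mechanically that they make different claims. The following Python sketch evaluates each in a small invented interpretation (the domain, the extensions of ‘𝐵’ and ‘𝑅’, and the extra people are illustrative assumptions, not part of the text's examples):

```python
# A toy interpretation, invented for illustration: three people, with
# extensions for 'is a bassist' (B) and 'is a rock star' (R).
domain = {"kim", "alex", "sam"}
B = {"alex"}   # Alex is the only bassist
R = {"kim"}    # Kim Deal is the only rock star
k = "kim"      # the name 'k' denotes Kim Deal

# Sentence 115, '∃xBx → Rk': a conditional with a quantified antecedent.
s115 = (not any(x in B for x in domain)) or (k in R)

# Sentence 116, '∀x(Bx → Rx)': a universally quantified conditional.
s116 = all((x not in B) or (x in R) for x in domain)

print(s115, s116)  # True False
```

Here 115 comes out true (someone is a bassist, and Kim Deal is indeed a rock star), while 116 comes out false (Alex is a bassist but not a rock star): the two symbolisations really can come apart.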
The word ‘any’ is particularly tricky, because it can sometimes mean ‘every’ and some‐
times ‘at least one’! Think about the two occurrences of ‘any’ in this sentence:
This can be symbolised ‘∀𝑥(𝑆𝑥 → (∃𝑦(𝑀𝑦 ∧ 𝑃𝑥𝑦) → 𝐻𝑥))’, where the first occurrence of
‘any’ is represented by a universal quantifier, and the second by an existential quantifier.
The moral is that the English words ‘any’ and ‘anyone’ should typically be symbolised
using quantifiers. And if you are having a hard time determining whether to use an
existential or a universal quantifier, try paraphrasing the sentence with an English sen‐
tence that uses words besides ‘any’ or ‘anyone’.2
2 The story about ‘any’ and ‘anyone’ is actually rather interesting. It is well‐known to linguists that ‘any’
has at least two readings: so‐called FREE CHOICE ‘ANY’, which is more or less like a universal quantifier
(‘Any friend of Jessica’s is a friend of mine!’), and NEGATIVE POLARITY (NPI) ‘ANY’, which only occurs
in ‘negative’ contexts, like negation (‘I don’t want any peas!’), where it functions more or less like an
existential quantifier (‘It is not the case that: there exist peas that I want’).
Interestingly, the antecedent of a conditional is a negative environment (being equivalent to ¬𝒜 ∨ 𝒞 ),
and so we expect that ‘any’ in the antecedent of a conditional will have an existential interpretation.
To symbolise these sentences, I shall have to add a new name to the symbolisation key,
namely:
𝑏: Tim
And it does: ‘If anyone is home, they will answer the door’ means something like: ‘If someone is home,
then that person will answer the door’. It does not mean ‘If everyone is home, then they will answer
the door’. This is what we see in 115.
But 116 is not a conditional – its main connective is a quantifier. So here free choice ‘any’ is the natural
interpretation, so we use the universal quantifier.
We see the same thing with the quantifier ‘someone’. In ‘if someone is a bassist, Kim Deal is’, someone
gets symbolised by an existential quantifier in the scope of a conditional. But in ‘If someone is a bassist,
they are a musician’ it should be symbolised by a universal taking scope over a conditional.
120. (∀𝑥𝐵𝑥 → 𝐵𝑏)
Scope of ‘∀𝑥’: the antecedent only. If everything is 𝐵, then 𝑏 is too. Trivially true.
121. ∀𝑥(𝐵𝑥 → 𝐵𝑏)
Scope of ‘∀𝑥’: the whole sentence. All 𝐵s are such that 𝑏 is 𝐵. False if 𝑏 isn’t 𝐵 but there are some 𝐵s.
The moral of the story is simple. When you are using quantifiers and conditionals, be
very careful to make sure that you have sorted out the scope correctly.
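The two scope readings can likewise be checked by brute force over a small domain. In this Python sketch, the domain and the extension of ‘𝐵’ are invented for illustration:

```python
# Invented interpretation: a two-person domain and an extension for 'B'.
domain = {"beth", "carl"}
b = "beth"     # the name 'b'
B = {"beth"}   # Beth is B, Carl is not

# 120, '(∀xBx → Bb)': the quantifier's scope is just the antecedent.
s120 = (not all(x in B for x in domain)) or (b in B)
# 121, '∀x(Bx → Bb)': the quantifier's scope is the whole conditional.
s121 = all((x not in B) or (b in B) for x in domain)
print(s120, s121)  # True True

# Now let b fall outside the extension of 'B' while something else is B:
B2 = {"carl"}
s120b = (not all(x in B2 for x in domain)) or (b in B2)
s121b = all((x not in B2) or (b in B2) for x in domain)
print(s120b, s121b)  # True False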
we can paraphrase this as ‘Herbie is white and Herbie is a car’. We can then use a
symbolisation key like:
𝑊: is white
𝐶: is a car
ℎ: Herbie
This allows us to symbolise sentence 122 as ‘𝑊ℎ ∧ 𝐶ℎ’. But now consider:
Following the case of Herbie, we might try to use a symbolisation key like:
𝐹: is former
𝑃: is Prime Minister
𝑗: Julia Gillard.
Then we would symbolise 123 by ‘𝐹𝑗∧𝑃𝑗’, and symbolise 124 by ‘𝑃𝑗’. That would however
be a mistake, since that symbolisation suggests that the argument from 123 to 124 is
valid, because the symbolisation of the premise does logically entail the symbolisation
of the conclusion.
‘White’ is an INTERSECTIVE adjective, which is a fancy way of saying that the white 𝐹s are
among the 𝐹 s and among the white things: any white car is a car and white, just like
any successful lawyer is a lawyer and successful, and a one tonne rhinoceros is both a
rhino and a one tonne thing. But ‘former’ is a PRIVATIVE adjective, which means that
any former 𝐹 is not now among the 𝐹 s. Other privative adjectives occur in phrases such
as ‘fake diamond’, ‘Deputy Lord Mayor’, and ‘mock trial’. When symbolising these
sentences, you cannot treat them as a conjunction. So you will need to symbolise ‘ is a former Prime Minister’ with a dedicated predicate, rather than as ‘ is former and is Prime Minister’.
We note that a small cow is definitely a cow, and so it seems we might treat ‘small’ as an
intersective adjective. We might formalise this sentence like this: ‘𝑆𝑑 ∧ 𝐶𝑑 ’, assuming
this symbolisation key:
𝑆: is small
𝐶: is a cow
𝑑 : Daisy
But note that our symbolisation would suggest that this argument is valid:
for a cow, while ‘ is small’ denotes the property of being a small thing. (In or‐
dinary speech we tend to keep the ‘for an 𝐹 ’ part of these phrases silent, and let our
conversational circumstances supply it automatically.) But neither should we treat
‘small’ as a non‐intersective adjective. If we do, we will be unable to account for the
valid argument ‘Daisy is a small cow, so Daisy is a cow’.
The correct symbolisation key will thus be this, keeping the other symbols as they
were:
𝑆: is small‐for‐a‐cow
3 This caution also applies to adjectives which are neither intersective nor privative, like ‘alleged’ in ‘al‐
leged murderer’. These ought not be symbolised by conjunction either.
Likewise, this rather unusual English argument turns out to be valid too:
(Note that it can be rather difficult to hear the English sentence ‘Daisy is small’ as
saying the same thing as the conclusion of this argument, ‘Daisy is small for a cow’,
which explains why ‘Daisy is a small cow, so Daisy is small’ strikes us as invalid.)
If we take these observations to heart, many apparently intersective adjectives turn out
to change their meaning depending on the predicate they are paired with. Small‐for‐an‐oil‐tanker is a rather different size property from small‐for‐a‐mouse, but in ordinary
English we use the phrases ‘small oil tanker’ and ‘small mouse’ without bothering to
make these different senses of ‘small’ explicit.
The way that ‘small’ behaves makes it a member of the class of SUBSECTIVE adjectives,
like ‘poor’ in ‘poor dancer’. These are like intersective adjectives in that every poor dancer is
a dancer (and every small cow is a cow). But the way that ‘poor’ behaves in this ex‐
pression is such that we cannot conclude that a poor dancer is poor – they are bad at
dancing, not necessarily financially disadvantaged. In these cases, the meaning of the
modifying adjective is itself modified by the noun: in ‘poor dancer’, we get a distinct‐
ively dancing‐related sense of ‘poor’.
When symbolising, it is best to make these modified adjectives very explicit, generally
introducing a new predicate to the symbolisation key to represent them. Doing so blocks
the fallacious argument from ‘Daisy is a small cow’ to ‘Daisy is small’, where the natural
sense of the conclusion is the generic size claim ‘Daisy is small‐for‐a‐thing’. (Likewise,
symbolising ‘ is a poor dancer’ as ‘ is poor‐for‐a‐dancer and is a dancer’
blocks the fallacious argument from ‘Rupert Murdoch is a poor dancer’ to ‘Rupert
Murdoch is poor’.)
The upshot is this: though you can symbolise ‘Daisy is a small cow’ as a conjunction,
you will need to symbolise ‘ is a small cow’ and ‘ is a small animal’ using
two different ‘small’ predicates: ‘small‐for‐a‐cow’ in the one case, and ‘small‐for‐an‐animal’ in the other.
16.7 Generics
One final complication presents itself. In English, there seems to be a difference
between these sentences:
The sentence in 127 is false: drakes and ducklings do not, for example. But nevertheless
126 seems to be true, for all that. That sentence lacks an explicit quantifier – it doesn’t
say ‘all ducks lay eggs’. It is what is known as a GENERIC claim: it shares a structure
with examples like ‘cows eat grass’ or ‘rocks are hard’. Generic claims concern what is
typical or normal: the typical duck lays eggs, the typical rock is hard, the typical cow
eats grass. Unlike universally quantified claims, generics are exception‐tolerant: even
if drakes don’t lay eggs, still, ducks lay eggs.
We cannot represent this exception‐tolerance very easily in Quantifier. The initial idea
is to use the universal quantifier, but this will give the wrong results in some cases. For
it will make this argument come out valid, when it should be ruled invalid:
Ducks lay eggs. Donald Duck is a duck. So Donald Duck lays eggs.
One alternative idea that we can implement is that the word ‘ducks’ in 126 is referring to
a natural kind, the species of ducks. So rather than being a quantified sentence,
it is just a subject‐predicate sentence, saying something more or less like ‘The
duck is an oviparous species’. This certainly works for some cases, such as ‘Rabbits are
abundant’, where we have to be understood as saying something about the kind. (How
could an individual be abundant?)
But this cannot handle every aspect of ‘ducks lay eggs’. People do treat those generics
as quantified, because they are often willing to conclude things about individuals given
the generic claim. Given the information that ducks lay eggs, and that Wilhelmina is a
duck, most people conclude that Wilhelmina lays eggs – thus apparently treating the
generic as having the logical role of a universal quantifier.
The proper treatment of generics in English remains a wide‐open question.4 We will
not delve into it further, but you should be careful when symbolising not to be drawn
into the trap of unwarily treating every generic as a universal.
4 See, for example, Sarah‐Jane Leslie and Adam Lerner (2016) ‘Generic Generalizations’, in Edward N Za‐
lta, ed., The Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/win2016/
entries/generics/.
Practice exercises
A. Here are the syllogistic figures identified by Aristotle and his successors, along with
their medieval names:
domain: people
𝐾: knows the combination to the safe
𝑆: is a spy
𝑉: is a vegetarian
ℎ: Hofthor
𝑖 : Ingmar
D. For each argument, write a symbolisation key and symbolise the argument in Quan‐
tifier. In each case, try to decide whether the argument you have symbolised is valid.
1. Willard is a logician. All logicians wear funny hats. So Willard wears a funny
hat.
2. Nothing on my desk escapes my attention. There is a computer on my desk. As
such, there is a computer that does not escape my attention.
3. All my dreams are black and white. Old TV shows are in black and white. There‐
fore, some of my dreams are old TV shows.
4. Neither Holmes nor Watson has been to Australia. A person could see a
kangaroo only if they had been to Australia or to a zoo. Although Watson has
not seen a kangaroo, Holmes has. Therefore, Holmes has been to a zoo.
5. No one expects the Spanish Inquisition. No one knows the troubles I’ve seen.
Therefore, anyone who expects the Spanish Inquisition knows the troubles I’ve
seen.
6. All babies are illogical. Nobody who is illogical can manage a crocodile. Berthold
is a baby. Therefore, Berthold is unable to manage a crocodile.
17
Multiple Generality
So far, we have only considered sentences that require simple predicates with just one
‘gap’, and at most one quantifier. Much of the fragment of Quantifier concerned with
such sentences was already discovered and codified in syllogistic logic by Aristotle
more than 2000 years ago. The full power of Quantifier really comes out when we start
to use predicates with many ‘gaps’ and multiple quantifiers. Despite first appearances,
the discovery of how to handle such sentences was a very significant one. For this
insight, we largely have the German mathematician and philosopher Gottlob Frege
(1879) to thank.1
loves
is to the left of
1 Frege (1879) Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens,
Halle a. S.: Louis Nebert. Translated as ‘Concept Script, a formal language of pure thought modelled
upon that of arithmetic’, by S. Bauer‐Mengelberg in J. van Heijenoort, ed. (1967) From Frege to Gödel:
A Source Book in Mathematical Logic, 1879–1931, Cambridge, MA: Harvard University Press. The same
logic was independently discovered by the American philosopher and mathematician Charles Sanders
Peirce: see Peirce (1885) ‘On the Algebra of Logic. A Contribution to the Philosophy of Notation’ Amer‐
ican Journal of Mathematics 7, pp. 197–202. Warning: don’t consult either of these original works,
which will likely just confuse you. The present text is the result of more than a century of refinement
in how to present the basic systems Frege and Peirce introduced, and is a lot more user friendly.
is in debt to
is supervised by
These are TWO‐PLACE predicates. They need to be filled in with two terms (names or
pronouns, most commonly) in order to make a sentence. Conversely, if we start with an
English sentence containing many singular terms, we can remove two singular terms
to obtain different two‐place predicates. Consider the sentence ‘Vinnie borrowed the
family car from Nunzio’. By deleting two singular terms, we can obtain any of three
different two‐place predicates:
borrowed from .
Indeed, there is no in principle upper limit on the number of gaps or places that our
predicates may contain.
Now there is a little problem with the above. I have used the same symbol, ‘ ’, to
indicate a gap formed by deleting a term from a sentence. However (as Frege emphas‐
ised), these are different gaps. To obtain a sentence, we can fill them in with the same
term, but we can equally fill them in with different terms, and in various different or‐
ders. The following are all perfectly good sentences, obtained by filling in the gaps in
‘ loves ’, but they mean quite different things:
The point is that we need some way of keeping track of the gaps in predicates, so that
we can keep track of how we are filling them in.
Another way to put the point: when it comes to two‐(or more)‐place predicates, some‐
times the order matters. ‘Shaq is taller than Jordan’ doesn’t mean the same thing as
‘Jordan is taller than Shaq’. It matters whose name fills the first gap in the predicate,
and whose name fills the second.
To keep track of the gaps, we shall label them. The labelling conventions I adopt are
best explained by example. Suppose I want to symbolise the following sentences:
domain: people
𝑖 : Imre
𝑘: Karl
𝐿: ₁ loves ₂
This last example highlights something important. Suppose we add to our symbolisa‐
tion key the following:
𝑀: ₂ loves ₁
Here, we have used the same English word (‘loves’) as we used in our symbolisation
key for ‘𝐿’. However, we have swapped the order of the gaps around (just look closely
at those little subscripts!) So ‘𝑀𝑘𝑖 ’ and ‘𝐿𝑖𝑘’ now both symbolise ‘Imre loves Karl’. ‘𝑀𝑖𝑘’
and ‘𝐿𝑘𝑖 ’ now both symbolise ‘Karl loves Imre’. Since love can be unrequited, these are
very different claims. The moral is simple. When we are dealing with predicates with
more than one place, we need to pay careful attention to the order of the places.
With these examples in hand, I can now give the official account of how we understand
Quantifier symbolisations. Suppose we have an Quantifier expression 𝒜𝑡1 , …, 𝑡𝑘 , where
each 𝑡𝑖 is a name or a variable, symbolising a singular term, and where 𝒜 symbolises
a 𝑘‐place predicate. The 𝑖 ‐th term is to be interpreted as filling the gap labelled ‘𝑖 ’. So
consider the following symbolisation key:
domain: places
𝑎: Adelaide
𝑏: Alice Springs
𝑐: Coober Pedy
𝐵: ₂ is between ₁ and ₃
𝐾: ₁ is between ₂ and ₃
Then if we want to symbolise ‘Coober Pedy is between Adelaide and Alice Springs’,
we can do so using either ‘𝐵𝑎𝑐𝑏’ or ‘𝐾𝑐𝑎𝑏’. The difference is in how the symbolisation
key instructs us to fill the gaps we have established in the predicate as we take steps
to represent it symbolically. There is no ‘right’ answer here: either can be good. The
representation using ‘𝐵’ graphically represents which item is between the other two
in the syntax itself, while the representation using ‘𝐾’ is more faithful to the original
English.
Suppose we add to our symbolisation key the following:
𝑆: ₁ thinks only of ₁
𝑇: ₁ thinks only of ₂
𝑎: Alice
As in the case of ‘𝐿’ and ‘𝑀’ above, the difference between these examples is only in how
the gaps in the construction ‘… thinks only of …’ are labelled. In ‘𝑇’, we have labelled the
two gaps differently. They do not need to be filled with different names or variables,
but there is always the potential to put different names in those different gaps. In the
case of ‘𝑆’, the gaps have the same label. In some sense, there is only one gap in this
sentence, which is why the symbolisation key associates it with a one‐place predicate
– it means something like ‘𝑥 thinks only of themself’. The second predicate is more
flexible. Take something we can say with the predicate ‘𝑆’, such as ‘𝑆𝑎’, ‘Alice thinks
only of herself’. We can express pretty much the same thought using the two‐place
predicate ‘𝑇’: ‘𝑇𝑎𝑎’.
We have introduced a potential ambiguity in our treatment of predicates. (See also
§20.2.) There is nothing overt in our language that distinguishes the one‐place pre‐
dicate ‘𝐴’ (such that ‘𝐴𝑏’ is grammatical) from the two‐place predicate ‘𝐴’ (such that
‘𝐴𝑏’ is ungrammatical, but ‘𝐴𝑏𝑘’ is grammatical). We are, in effect, just letting context
disambiguate how many argument places there are in a given predicate, by assuming
that in any expression of Quantifier we write down, the number of names or variables
following a predicate indicates how many places it has. We could introduce a system
to disambiguate: perhaps adding a superscripted ‘1’ to all one‐place predicates, a su‐
perscripted ‘2’ to all two‐place predicates, etc. Then ‘𝐴1 𝑏’ is grammatical while ‘𝐴1 𝑏𝑘’
is not; conversely, ‘𝐴2 𝑏’ is ungrammatical and ‘𝐴2 𝑏𝑘’ is grammatical. This system of
superscripts would be effective but cumbersome. We will thus keep to our existing
practice, letting context disambiguate. What you should not do, however, is make use
of the same capital letter to symbolise two different predicates in the same symbolisa‐
tion key. If you do that, context will not disambiguate, and you will have failed to give
an interpretation of the language at all.
the predicate ‘𝑇’, ‘₁ thinks only of ₂’, we can put a variable in the first gap, and
a name in the second, if we wish: ‘𝑇𝑥𝑎’. This isn’t a sentence, because no quantifier tells
us how to understand that variable. (The sentence might be representing ‘they think
only of Alice’, but without context there is no determinate referent for the pronoun
‘they’.) Introduce a quantifier, and we have an interpretable sentence:
132. ∀𝑥𝑇𝑥𝑎;
Everyone thinks only of Alice.
The fact that we can fill the two gaps of two‐place predicates with different things, or
even with the same thing, gives us a reason to favour the two‐place predicate symbol‐
isation of ‘Alice thinks only of herself’ as ‘𝑇𝑎𝑎’. That allows us to symbolise certain
arguments that cannot be adequately symbolised using a one‐place predicate. For ex‐
ample: ‘Alice thinks only of herself; so there is someone who is the only person Alice
thinks of’. The symbolisation of this argument might be: ‘𝑇𝑎𝑎 ∴ ∃𝑥𝑇𝑎𝑥 ’. This might
have some prospect of being valid, whereas ‘𝑆𝑎 ∴ ∃𝑥𝑇𝑎𝑥 ’ will not be valid.
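We can verify this contrast in a toy interpretation. The Python sketch below (domain and extensions invented for illustration) exhibits an interpretation where ‘𝑆𝑎’ is true but ‘∃𝑥𝑇𝑎𝑥’ is false, while any interpretation making ‘𝑇𝑎𝑎’ true automatically makes ‘∃𝑥𝑇𝑎𝑥’ true:

```python
# Invented interpretation for 'S' (one-place) and 'T' (two-place).
domain = {"alice", "bob"}
a = "alice"
S = {"alice"}   # extension of 'thinks only of themself'
T = set()       # extension of 'thinks only of', as ordered pairs

Sa = a in S                                    # 'Sa' is true here
exists_Tax = any((a, x) in T for x in domain)  # '∃xTax' is false here
print(Sa, exists_Tax)  # True False

# By contrast, any interpretation making 'Taa' true makes '∃xTax' true,
# because a itself is a witness:
T2 = {("alice", "alice")}
Taa = (a, a) in T2
exists_Tax2 = any((a, x) in T2 for x in domain)
print(Taa, exists_Tax2)  # True True
```

Since ‘𝑆’ and ‘𝑇’ are logically independent predicates, nothing stops an interpretation from making ‘𝑆𝑎’ true while leaving ‘𝑇’ empty; that is exactly why ‘𝑆𝑎 ∴ ∃𝑥𝑇𝑎𝑥’ is not valid.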
The real power of many‐place predicates comes when we consider examples in which
both gaps in the predicate are filled by variables governed by different quantifiers. In
cases where the quantifier expressions interact, we can express things we cannot say
even when we allow logically complex combinations of one‐quantifier sentences. With
this power comes potential confusion too. So let’s proceed carefully.
Consider the sentence ‘everyone loves someone’. This illustrates our goal, as two quan‐
tifier expressions occur in this sentence: ‘everyone’ and ‘someone’. But it also illustrates
the potential pitfalls, as there is a possible ambiguity in this sentence. It might mean
either of the following:
133. For every person, there is some person that they love
134. There is some particular person whom every person loves
It is fairly straightforward to see that these don’t mean the same thing. The first would
be true as long as everybody has somebody they love. One sort of case in which 133 is
true is the cyclic central love triangle in Twelfth Night, where Viola loves Duke Orsino,
the Duke loves Olivia, and Olivia loves Viola (who, disguised as a young man, is the
Duke’s go‐between with Olivia).
In the Twelfth Night situation, 134 is not true. It could only be true if everybody loves
the same person, e.g., if the Duke, Viola, and Olivia herself all love Olivia.
How can we symbolise these two different disambiguations of our original sentence?
(Remember: one of the strengths of symbolic logic is that it is supposed to be able to
clearly represent that which would be ambiguous in natural language.)
Let’s paraphrase a little more formally as we step towards a fully symbolic representa‐
tion. As our sentence has two quantifiers, I will use numbers to link pronouns in our
paraphrase with the quantifier expressions which govern them. Using this device, our
sentences can be paraphrased as follows:
135. Everyone1 is such that there is someone2 such that: they1 love them2 .
136. There is someone2 such that everyone1 is such that: they1 love them2 .
The quantifier order in the paraphrases governs how they interact. As we saw in §16.5,
the scope of a quantifier is roughly the Quantifier expression in which that quantifier
is the main connective. (Later on we will be a little more precise about the way that
quantifier scope functions in Quantifier: see §20.) So in ‘∀𝑥∃𝑦𝐿𝑥𝑦’, the scope of ‘∀𝑥’ is
the whole sentence, while the scope of ‘∃𝑦’ is just ‘∃𝑦𝐿𝑥𝑦’. The following guides us in
interpreting these ‘nested’ quantifiers, in which one falls in the scope of another:
Let’s apply this to our example. In 135 ‘everyone’ comes first, and ‘someone’ comes next.
The intended interpretation is that this is true iff for any person 𝑥 that you pick, with
respect to that choice you can then find someone 𝑦 who 𝑥 loves. If you had chosen
someone else as the value of 𝑥, then parasitic on that different choice you may end up
needing to find a different value for 𝑦. Compare the reversed quantifier scope in 136.
That is true iff there is someone 𝑦 such that, with respect to that particular choice for
𝑦, any person 𝑥 you pick, 𝑥 loves 𝑦. With respect to a different initial choice for 𝑦, there
may be values for 𝑥 that do not satisfy 𝑥 loves 𝑦, but as the initial choice is governed
by an existential quantifier, that won’t undermine the truth of the sentence. This gives
us our two different symbolisations:
› Sentence 133 can be symbolised by ‘∀𝑥∃𝑦𝐿𝑥𝑦’. Return to our example love tri‐
angle between Duke Orsino, Viola, and Olivia. For any of the three people you
might choose, you can find another person in the domain who they love. So
sentence 133 is true.
› Sentence 134 is symbolised by ‘∃𝑦∀𝑥𝐿𝑥𝑦’. Sentence 134 is not true in the Twelfth
Night situation. For each of the people in the domain, you can find someone
who doesn’t love them, and hence no one is universally beloved. If, instead, each
person loved Olivia, then we could find someone (Olivia), such that everyone
else we examined turns out to love them. In that case, 134 would be true.
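The Twelfth Night situation is small enough to check mechanically. This Python sketch encodes the love triangle as a set of ordered pairs and evaluates both symbolisations (the encoding is an illustrative device, not part of Quantifier itself):

```python
# The cyclic love triangle, as a set of (lover, beloved) pairs.
domain = {"viola", "orsino", "olivia"}
L = {("viola", "orsino"), ("orsino", "olivia"), ("olivia", "viola")}

# 133, '∀x∃yLxy': everyone loves someone.
s133 = all(any((x, y) in L for y in domain) for x in domain)
# 134, '∃y∀xLxy': someone is loved by everyone.
s134 = any(all((x, y) in L for x in domain) for y in domain)
print(s133, s134)  # True False

# If instead everyone (Olivia included) loves Olivia, both come out true:
L2 = {(x, "olivia") for x in domain}
s133b = all(any((x, y) in L2 for y in domain) for x in domain)
s134b = any(all((x, y) in L2 for x in domain) for y in domain)
print(s133b, s134b)  # True True
```

Note how the order of `all` and `any` in the code mirrors the order of ‘∀𝑥’ and ‘∃𝑦’ in the formulas: swapping them changes the truth value in the triangle model.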
This example, besides giving some indication of how to read sentences with multiple
quantifiers, illustrates that quantifier scope matters a great deal. Indeed, the mistake
that arises when one illegitimately switches them around even has a special name: a
quantifier shift fallacy. Here is a real life example from Aristotle:2
Suppose, then, that [A] the things achievable by action have some end that
we wish for because of itself, and because of which we wish for the other
things, and that we do not choose everything because of something else
– for if we do, it will go on without limit, so that desire will prove to be
empty and futile[; c]learly, [B] this end will be the good, that is to say, the
best good. (Aristotle, Nichomachean Ethics 1094a 18‐22)
Setting aside Aristotle’s subsidiary argument about desire, this argument seems to in‐
volve the following pattern of inference:
Every action aims at some end which is desired because of itself. (∀∃)
So: There is an end desired because of itself which is the aim of every action, the best
good. (∃∀)
The moral is: take great care with the scope of quantification.
2 Note that it is hotly contested whether Aristotle actually commits a fallacy here, given the compressed
nature of his prose. See, inter alia, J L Ackrill (1999), ‘Aristotle on eudaimonia’, pp. 57–77 in N Sherman,
ed., Aristotle’s Ethics: Critical Essays, Rowman & Littlefield.
3 Thanks to Rob Trueman for the example.
Sentence 137 can be paraphrased as, ‘There is a dog that Geraldo owns’. This can be
symbolised by ‘∃𝑥(𝐷𝑥 ∧ 𝑂𝑔𝑥)’.
Sentence 138 can be paraphrased as, ‘There is some y such that y is a dog owner’. Deal‐
ing with part of this, we might write ‘∃𝑦(𝑦 is a dog owner)’. Now the fragment we have
left as ‘𝑦 is a dog owner’ is much like sentence 137, except that it is not specifically about
Geraldo. So we can symbolise sentence 138 by:
∃𝑦∃𝑥(𝐷𝑥 ∧ 𝑂𝑦𝑥)
I need to pause to clarify something here. In working out how to symbolise the last
sentence, we wrote down ‘∃𝑦(𝑦 is a dog owner)’. To be very clear: this is neither a
Quantifier sentence nor an English sentence: it uses bits of Quantifier (‘∃’, ‘𝑦’) and bits
of English (‘dog owner’). It is really just a stepping‐stone on the way to symbolising
the entire English sentence with a Quantifier sentence, a bit of rough‐working‐out.
Sentence 139 can be paraphrased as, ‘Everyone who is a friend of Geraldo is a dog owner’.
Using our stepping‐stone tactic, we might write
Now the fragment that we have left to deal with, ‘𝑥 is a dog owner’, is structurally just
like sentence 137. But it would be a mistake for us simply to put ‘𝑥’ in place of ‘𝑔’ from
our symbolisation of 137, yielding
Here we have a CLASH OF VARIABLES. The scope of the universal quantifier, ‘∀𝑥 ’, is
the entire conditional. But ‘𝐷𝑥 ’ also falls within the scope of the existential quantifier
‘∃𝑥’. Which quantifier has priority and governs the interpretation of the variable? In
Quantifier, if a variable 𝓍 occurs in a Quantifier sentence, it is always governed by the
quantifier with the narrowest scope that includes that occurrence of 𝓍. So in the
sentence above, the quantifier ‘∃𝑥’ governs every occurrence of ‘𝑥’ in ‘(𝐷𝑥∧𝑂𝑥𝑥)’. Given
this, the symbolisation does not mean what we intended. It says, roughly, ‘everyone
who is a friend of Geraldo is such that there is a self‐owning dog’. This is not at all the
meaning of the English sentence we are aiming to symbolise.
To provide an adequate symbolisation, then, we must avoid clashing variables. We
can do this easily enough. There was no requirement to use ‘𝑥’ as the variable in our
symbolisation of 137, so we can easily choose some different variable for our existential
quantifier. That will give us something like this, which adequately symbolises sentence
139:
∀𝑥(𝐹𝑥𝑔 → ∃𝑧(𝐷𝑧 ∧ 𝑂𝑥𝑧)).
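Nested quantifiers of this kind can also be evaluated mechanically, which makes the effect of a clash of variables vivid. In the Python sketch below (with a small invented interpretation for ‘𝐷’, ‘𝐹’, and ‘𝑂’), the clashing formula really does express the unwanted ‘self‐owning dog’ reading:

```python
# Invented interpretation: Geraldo has one friend, Ana, who owns the dog Rex.
domain = {"geraldo", "ana", "rex"}
g = "geraldo"
D = {"rex"}                # 'is a dog'
F = {("ana", "geraldo")}   # 'is a friend of', as ordered pairs
O = {("ana", "rex")}       # 'owns', as ordered pairs

# 139 without a clash, '∀x(Fxg → ∃z(Dz ∧ Oxz))':
s139 = all(
    ((x, g) not in F) or any(z in D and (x, z) in O for z in domain)
    for x in domain
)
print(s139)  # True: Geraldo's only friend owns a dog

# The clashing formula '∀x(Fxg → ∃x(Dx ∧ Oxx))' instead says that each
# friend of Geraldo is such that some dog owns itself. The inner variable
# is written x2 here for clarity; reusing x inside the inner any(...)
# would shadow the outer x, just as the inner '∃x' captures the variable.
clash = all(
    ((x, g) not in F) or any(x2 in D and (x2, x2) in O for x2 in domain)
    for x in domain
)
print(clash)  # False: no dog owns itself in this interpretation
```

The two formulas disagree in this interpretation, confirming that the clash of variables changes the meaning rather than merely the notation.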
Sentence 140 can be paraphrased as ‘For any x that is a dog owner, there is a dog owner
who is a friend of x’. Using our stepping‐stone tactic, this becomes
Note that we have used the same variable, ‘𝑧’, in both the antecedent and the con‐
sequent of the conditional, but that these are governed by two different quantifiers.
This is fine: there is no potential confusion here, because it is obvious which quanti‐
fier governs each variable. We might graphically represent the scope of the quantifiers
thus:
∀𝑥(∃𝑧(𝐷𝑧 ∧ 𝑂𝑥𝑧) → ∃𝑦(∃𝑧(𝐷𝑧 ∧ 𝑂𝑦𝑧) ∧ 𝐹𝑦𝑥))
Here the scope of ‘∀𝑥’ is the whole sentence; the scope of ‘∃𝑦’ is the consequent; the
scope of the first ‘∃𝑧’ is ‘∃𝑧(𝐷𝑧 ∧ 𝑂𝑥𝑧)’; and the scope of the second ‘∃𝑧’ is
‘∃𝑧(𝐷𝑧 ∧ 𝑂𝑦𝑧)’.
Even in this case, however, you might want to choose different variables for every quan‐
tifier just as a practical matter, preventing any possibility of confusion for your readers.
Sentence 141 is the trickiest yet. First we paraphrase it as ‘For any x that is a friend of a
dog owner, x owns a dog which is also owned by a friend of x’. Using our stepping‐stone
tactic, this becomes:
Note that any structure identified is preserved: the name ‘𝑎’ continues to appear in
more fine‐grained symbolisations once it has appeared. A one‐place predicate, having
appeared in the second symbolisation, still appears indirectly in the third symbolisa‐
tion. For the open sentence ‘∃𝑦(𝐶𝑦∧𝑂𝓍𝑦)’ can be understood as representing a complex
one‐place predicate: ‘𝓍’ is not associated with any quantifier, and can be replaced by a
name to form a grammatical sentence.
How to symbolise the structure of a sentence very much depends on what purpose
you have in symbolising. What matters is that you manage to represent enough struc‐
ture to determine whether the target argument you are symbolising is valid. A valid
argument can end up symbolised as an invalid one, if you don’t attend to the relevant
structure, or don’t have resources in your language to represent that structure. But we
just observed that any more fine‐grained analysis of the structure of a sentence retains
the coarser structure (as it just adds more detailed substructure). So if you can show
an argument is valid (conclusive in virtue of its structure) at some level of analysis, it
will remain valid according to any more fine‐grained understanding of the structure of
that sentence. So you should aim to symbolise just enough structure in an argument
to be able to demonstrate its validity – if indeed it is valid.
Practice exercises
A. Using this symbolisation key:
1. (𝐿𝑒𝑏 ∧ 𝐿𝑓𝑒);
2. ∃𝑥(𝐷𝑥 ∧ 𝐿𝑥𝑒);
3. (∃𝑥(𝐷𝑥 ∧ 𝐿𝑥𝑓) → ∃𝑦(𝐷𝑦 ∧ 𝐹𝑦𝑒));
4. ∀𝑥(𝐷𝑥 → ¬𝐿𝑥𝑓);
5. ∃𝑦((𝐿𝑦𝑏 ∧ 𝐿𝑒𝑦) ∨ (𝐿𝑦𝑒 ∧ 𝐿𝑏𝑦));
6. ∀𝑥(𝐷𝑥 → ∃𝑦(𝐷𝑦 ∧ 𝐹𝑥𝑦));
7. ∀𝑥∀𝑦(𝐿𝑥𝑦 → ∃𝑧(𝐷𝑧 ∧ (𝐹𝑦𝑧 ∧ 𝐹𝑧𝑥))).
1. ∀𝑥(𝐹𝑥 → 𝑇𝑥);
2. (∃𝑥𝐺𝑥 → ∀𝑥(𝐺𝑥 → 𝑇𝑥));
3. ∀𝑥∀𝑦((𝑃𝑥 ∧ 𝐺𝑦) → 𝐿𝑥𝑦);
4. ∃𝑥∃𝑦(((𝐺𝑥 ∧ 𝑃𝑦) ∧ 𝐿𝑦𝑥) → 𝐿𝑒𝑥);
5. ∀𝑥(𝐿𝑓𝑥 → 𝑅𝑥);
6. ∀𝑥(𝑃𝑥 → ¬(𝐿𝑓𝑥 ∨ 𝐿𝑥𝑓));
7. ∀𝑥(∃𝑦(𝐺𝑦 ∧ 𝐿𝑥𝑦) → 𝐿𝑒𝑥);
8. ∀𝑥∀𝑦((𝑃𝑥 ∧ 𝑃𝑦) → ((𝐿𝑒𝑥 ∧ 𝐿𝑦𝑥) → 𝐿𝑒𝑦));
9. (∃𝑥(𝑃𝑥 ∧ 𝑇𝑥) → ∀𝑦(𝐹𝑦 → 𝑅𝑦)).
domain: people
𝐷: ₁ dances ballet.
𝐹: ₁ is female.
𝑀: ₁ is male.
𝐶: ₁ is a child of ₂.
𝑆: ₁ is a sibling of ₂.
𝑒: Elmer
𝑗: Jane
𝑝: Patrick
If we hold fixed this assignment of meanings to the predicates, why is it possible that
‘∃𝑥∀𝑦𝑃𝑥𝑦’ is true, but not possible that ‘∃𝑥∀𝑦𝑇𝑥𝑦’ is true?
18
Identity
Let the domain be people; this will allow us to translate ‘everyone’ as a universal
quantifier. With the symbolisation key:
𝑂: ₁ owes money to ₂
𝑝: Pavel
we can symbolise sentence 142 by ‘∀𝑥𝑂𝑝𝑥 ’. But this has a (perhaps) odd consequence. It
requires that Pavel owes money to every member of the domain (whatever the domain
may be). The domain certainly includes Pavel. So this entails that Pavel owes money
to himself.
Perhaps we meant to say:
We want to add something to the symbolisation of 142 to handle these italicised words.
Some interesting issues arise as we do so.
146. Everyone who isn’t Pavel is such that: Pavel owes money to them.
This is a sentence of the form ‘every F [person who is not Pavel] is G [owed money by
Pavel]’. Accordingly it can be symbolised by something with this structure: ∀𝑥(𝒜𝑥 →
𝒢𝑥). Here is an attempt to fill in the schematic letters:
𝑂: ₁ owes money to ₂
𝐼: ₁ is ₂
𝑝: Pavel
This argument is valid in English. But its symbolisation is not valid. If we pick Hikaru
to be the value of ‘𝑦’, we get from 147 the conditional ¬𝐼𝑝ℎ → 𝑂𝑝ℎ. But 148 doesn’t give
us the antecedent of this conditional: ¬𝐼𝑝ℎ is potentially quite different from ¬𝐼ℎ𝑝. So
the argument isn’t formally valid.
The argument isn’t formally valid, because the sentence ‘¬𝐼ℎ𝑝’ doesn’t formally entail
‘¬𝐼𝑝ℎ’ as a matter of logical structure alone. If the original argument is valid, then we
need a symbolisation that as a matter of logic allows the distinctness (non‐identity) of
Hikaru and Pavel to entail the distinctness of Pavel and Hikaru.
1 We don’t absolutely have to do this: there are logical languages in which identity is not a logical pre‐
dicate, and is symbolised by just choosing a two‐place predicate like 𝐼𝑥𝑦. But in our logical language
Quantifier, we are choosing to treat identity as a structural word.
152 THE LANGUAGE OF QUANTIFIED LOGIC
That one thing is logically identical to another does not mean merely that the objects
in question are indistinguishable, or that all of the same things are true of them. When
two things are alike in every respect, we may say they are QUALITATIVELY IDENTICAL.
This is the sense of identity involved in ‘identical twins’, who are two distinct individu‐
als who share their properties. In Quantifier, the identity predicate represents not this
relation of similarity, but a relation of absolute or NUMERICAL IDENTITY: there is only
one, rather than two. This is the sense in which Lewis Carroll (the author of Alice in
Wonderland) is identical to Charles Lutwidge Dodgson (the Oxford mathematician):
‘they’ are the very same person, with two different names.
This might seem odd. Identity is a relation, but it doesn’t relate different things to each
other: it relates everything to itself, and to nothing else. We need a predicate for that
relation because the names and (especially) variables of Quantifier aren’t guaranteed to
have different referents, and sometimes we want to explicitly require that two terms
don’t denote the same thing. For example, suppose we want to symbolise ‘Barry is the
tallest person’. You might try ‘Barry is taller than everyone’. However, that would lead
to the absurdity that Barry is taller than himself, since he is surely among ‘everyone’.
So what we really need is ‘Barry is taller than everyone else, i.e., everyone who’s not
(identical to) Barry’, which is most naturally formulated using the identity predicate.
𝑐 : Mister Checkov
Now sentence 149 can be symbolised as ‘𝑝 = 𝑐 ’. This means that 𝑝 is 𝑐 , and it follows
that the thing named by ‘𝑝’ is the thing named by ‘𝑐 ’.2
Let’s return to our example ‘Barry is taller than everyone else’. We want to start with a
paraphrase, like this: choose anyone from the domain; if they are not Barry, then Barry
is taller than them. Where 𝑏 symbolises ‘Barry’ and 𝑇 symbolises ‘_1 is taller than _2’,
we might symbolise this as: ‘∀𝑥(¬𝑥 = 𝑏 → 𝑇𝑏𝑥)’ (on the domain of people).
Using that same kind of structure, we can also now deal with sentences 143–145. All
of these sentences can be paraphrased as ‘Everyone who isn’t Pavel is such that: Pavel
owes money to them’. Paraphrasing some more, we get: ‘For all x, if x is not Pavel, then
x is owed money by Pavel’. Now that we are armed with our new identity symbol, we
can symbolise this as ‘∀𝑥(¬𝑥 = 𝑝 → 𝑂𝑝𝑥)’.
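To see concretely that the ‘everyone else’ symbolisation no longer commits Pavel to owing money to himself, we can evaluate both symbolisations in a small invented interpretation. The following Python sketch (the domain and the extension of ‘𝑂’ are made up purely for illustration) checks ‘∀𝑥𝑂𝑝𝑥’ and ‘∀𝑥(¬𝑥 = 𝑝 → 𝑂𝑝𝑥)’ by brute force:

```python
# A toy interpretation (invented for illustration): two people,
# and Pavel owes money to Hikaru only.
domain = {"pavel", "hikaru"}
p = "pavel"
owes = {("pavel", "hikaru")}   # extension of 'O'

def O(x, y):
    return (x, y) in owes

# '∀xOpx' (Pavel owes money to everyone) is false here, since it
# would require Pavel to owe money to himself.
everyone = all(O(p, x) for x in domain)

# '∀x(¬x = p → Opx)' (Pavel owes money to everyone else) is true:
# the conditional holds vacuously when x is Pavel himself.
everyone_else = all(x == p or O(p, x) for x in domain)

print(everyone, everyone_else)  # False True
```

This is the method of counter‑interpretation in miniature: one interpretation on which the two sentences differ in truth value shows that they are not equivalent.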
2 One must be careful: the sentence ‘𝑝 = 𝑐 ’ is, on this symbolisation, about Pavel and Mister Checkov;
it is not about ‘Pavel’ and ‘Mister Checkov’, which are obviously distinct expressions of English.
This last sentence contains the formula ‘¬𝑥 = 𝑝’. And that might look a bit strange,
because the symbol that comes immediately after the ‘¬’ is a variable, rather than a
predicate. But this is no problem. We are simply negating the entire formula, ‘𝑥 =
𝑝’. But if this is confusing, you may use the NON‐IDENTITY PREDICATE ‘≠’. This is an
abbreviation, characterised as follows:
I will use both expressions in what follows, but strictly speaking ‘¬𝑎 = 𝑏’ is the official
version, and we allow a conventional abbreviation ‘𝑎 ≠ 𝑏’.
In addition to sentences that use the words ‘else’, ‘other than’ and ‘except’, identity will
be helpful when symbolising some sentences that contain the words ‘besides’ and ‘only’.
Consider these examples:
Sentence 150 can be paraphrased as, ‘No one who is not Pavel owes money to Hikaru’.
This can be symbolised by ‘¬∃𝑥(¬𝑥 = 𝑝 ∧ 𝑂𝑥ℎ)’. Equally, sentence 150 can be para‐
phrased as ‘for all x, if x owes money to Hikaru, then x is Pavel’. Then it can be symbol‐
ised as ‘∀𝑥(𝑂𝑥ℎ → 𝑥 = 𝑝)’. Sentence 151 can be treated similarly.3
3 But there is one subtlety here. Does either sentence 150 or 151 entail that Pavel himself owes money to
Hikaru?
These principles can apply to other two‐place predicates too. For example, the two‐
place English predicate ‘_1 is taller than _2’ is also transitive, since if Albert is
taller than Barbara, and Barbara is taller than Chloe, then Albert must be taller than
Chloe too. But it is not reflexive or symmetric: Albert is not taller than himself, and
if Albert is taller than Barbara, it cannot also be that Barbara is taller than Albert. We
will return to this topic in §21.10.
A final principle about identity is LEIBNIZ’ LAW, named after the philosopher and math‐
ematician Gottfried Leibniz:
If 𝓍 = 𝓎 then for any property at all, 𝓍 has it iff 𝓎 has it too. That is:
every instance of this schematic sentence of Quantifier, for any predicate
ℱ whatsoever, is true:

∀𝑥∀𝑦(𝑥 = 𝑦 → (ℱ𝑥 ↔ ℱ𝑦))
Leibniz’ Law certainly entails that identical things are indistinguishable, sharing every
property in common. But as we have already noted, identity isn’t merely indistin‐
guishability. Two things might be indistinguishable, but if there are two, they are not
strictly identical in the logical sense we are concerned with. Yet in many cases, even
very similar things do turn out to have some distinguishing property. There is a sig‐
nificant philosophical controversy over whether there can be cases of mere numerical
difference, i.e., of nonidentity without any qualitative dissimilarity.
𝐴: _1 is an apple
Sentence 152 does not require identity. It can be adequately symbolised by ‘∃𝑥𝐴𝑥 ’:
There is some apple; perhaps many, but at least one.
It might be tempting to also translate sentence 153 without identity. Yet consider the
sentence ‘∃𝑥∃𝑦(𝐴𝑥 ∧ 𝐴𝑦)’. Roughly, this says that there is some apple 𝑥 in the domain
and some apple 𝑦 in the domain. Since nothing precludes these from being one and
the same apple, this would be true even if there were only one apple.4 In order to make
sure that we are dealing with different apples, we need an identity predicate. Sentence
153 needs to say that the two apples that exist are not identical, so it can be symbolised
by ‘∃𝑥∃𝑦(𝐴𝑥 ∧ 𝐴𝑦 ∧ ¬𝑥 = 𝑦)’.
Sentence 154 requires talking about three different apples. Now we need three existen‐
tial quantifiers, and we need to make sure that each will pick out something different:
‘∃𝑥∃𝑦∃𝑧(𝐴𝑥 ∧ 𝐴𝑦 ∧ 𝐴𝑧 ∧ 𝑥 ≠ 𝑦 ∧ 𝑦 ≠ 𝑧 ∧ 𝑥 ≠ 𝑧)’.
Sentence 155 can be paraphrased as, ‘It is not the case that there are at least two apples’.
This is just the negation of sentence 153:
¬∃𝑥∃𝑦(𝐴𝑥 ∧ 𝐴𝑦 ∧ ¬𝑥 = 𝑦).
But sentence 155 can also be approached in another way. It means that if you pick
out an object and it’s an apple, and then you pick out an object and it’s also an apple,
you must have picked out the same object both times. With this in mind, it can be
symbolised by
∀𝑥∀𝑦((𝐴𝑥 ∧ 𝐴𝑦) → 𝑥 = 𝑦).
The two sentences will turn out to be logically equivalent.
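The claimed equivalence can be checked mechanically on a small finite domain by running through every possible extension of ‘𝐴’. Here is a sketch in Python (the three‑element domain is an arbitrary choice for illustration):

```python
from itertools import product

domain = [0, 1, 2]

def at_most_one_neg(A):
    # ¬∃x∃y(Ax ∧ Ay ∧ ¬x = y)
    return not any(A[x] and A[y] and x != y
                   for x in domain for y in domain)

def at_most_one_univ(A):
    # ∀x∀y((Ax ∧ Ay) → x = y)
    return all((not (A[x] and A[y])) or x == y
               for x in domain for y in domain)

# Check every possible extension of 'A' over the domain.
for ext in product([False, True], repeat=len(domain)):
    A = dict(zip(domain, ext))
    assert at_most_one_neg(A) == at_most_one_univ(A)
```

Of course, a check over one three‑element domain is not a proof of equivalence for every interpretation; reasoning about all interpretations is the topic of §25. But a brute‑force check like this quickly catches symbolisation mistakes.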
In a similar way, sentence 156 can be approached in two equivalent ways. It can be
paraphrased as, ‘It is not the case that there are three or more distinct apples’, so we
can offer:
¬∃𝑥∃𝑦∃𝑧(𝐴𝑥 ∧ 𝐴𝑦 ∧ 𝐴𝑧 ∧ 𝑥 ≠ 𝑦 ∧ 𝑦 ≠ 𝑧 ∧ 𝑥 ≠ 𝑧).
Or, we can read it as saying that if you pick out an apple, and an apple, and an apple,
then you will have picked out (at least) one of these objects more than once. Thus:
4 Note that both ∃𝑥𝐴𝑥 and ∃𝑦𝐴𝑦 are true in a domain with only one apple: the use of different variables
doesn’t require that different apples are the values of those variables.
Sentence 157 can be paraphrased as, ‘There is at least one apple and there is at most
one apple’. This is just the conjunction of sentence 152 and sentence 155. So we can
offer:
∃𝑥𝐴𝑥 ∧ ∀𝑥∀𝑦((𝐴𝑥 ∧ 𝐴𝑦) → 𝑥 = 𝑦).
But it is perhaps more straightforward to paraphrase sentence 157 as, ‘There is a thing
x which is an apple, and everything which is an apple is just x itself’. Thought of in this
way, we offer:
∃𝑥(𝐴𝑥 ∧ ∀𝑦(𝐴𝑦 → 𝑥 = 𝑦)).
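The same brute‑force method confirms that these two symbolisations of ‘there is exactly one apple’ agree on every extension of ‘𝐴’ over a small domain (again, the three‑element domain is illustrative):

```python
from itertools import product

domain = [0, 1, 2]

def exactly_one_conj(A):
    # ∃xAx ∧ ∀x∀y((Ax ∧ Ay) → x = y)
    at_least = any(A[x] for x in domain)
    at_most = all((not (A[x] and A[y])) or x == y
                  for x in domain for y in domain)
    return at_least and at_most

def exactly_one_nested(A):
    # ∃x(Ax ∧ ∀y(Ay → x = y))
    return any(A[x] and all((not A[y]) or x == y for y in domain)
               for x in domain)

for ext in product([False, True], repeat=len(domain)):
    A = dict(zip(domain, ext))
    assert exactly_one_conj(A) == exactly_one_nested(A)
```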
Similarly, sentence 158 may be paraphrased as, ‘There are at least two apples, and there
are at most two apples’. Thus we could offer
More efficiently, though, we can paraphrase it as ‘There are at least two different apples,
and every apple is one of those two apples’. Then we offer:
∃𝑥∃𝑦(𝐴𝑥 ∧ 𝐴𝑦 ∧ ¬𝑥 = 𝑦 ∧ ∀𝑧(𝐴𝑧 → (𝑥 = 𝑧 ∨ 𝑦 = 𝑧))).
It might be tempting to add a predicate to our symbolisation key, to symbolise the Eng‐
lish predicate ‘ is a thing’ or ‘ is an object’. But this is unnecessary. Words
like ‘thing’ and ‘object’ do not sort wheat from chaff: they apply trivially to everything,
which is to say, they apply trivially to every thing. So we can symbolise either sentence
with either of the following:
Practice exercises
A. Explain why:
E. Identity is a reflexive, symmetric, and transitive predicate. Can you give examples
of English predicates which are
In Quantifier, names function rather like names in English. They are simply labels for
the things they name, and may be attached arbitrarily, without any indication of the
characteristics of what they name.1
But complex noun phrases can also be used to denote particular things in English (re‐
call §15.2), and they do so not merely by acting as arbitrary labels, but often by describ‐
ing the thing they refer to. Consider sentences like:
These underlined noun phrases headed by ‘the’ – ‘the traitor’, ‘the deputy’, ‘the shortest
person who went to Cambridge’ – are known as DEFINITE DESCRIPTIONS. They are
meant to pick out a unique object, by using a description which applies to that object
and to no other (at least, to no other salient object). The class of possessive singular
terms, such as ‘Antony’s eldest child’ or ‘Facebook’s founder’, might be subsumed into
the class of definite descriptions. They can be paraphrased using definite descriptions:
‘the eldest child of Antony’ or ‘the founder of Facebook’.
Definite descriptions must be contrasted with INDEFINITE DESCRIPTIONS, such as
‘A traitor went to Cambridge’, where no unique traitor is implied. Definite descrip‐
tions must also be contrasted with what we might call descriptive names, such as ‘the
Pacific Ocean’. While the Pacific Ocean is an ocean, it isn’t reliably peaceful, and even
when it is, it surely isn’t the unique ocean that merits that description. These descript‐
ive name uses might also be involved in cases of GENERIC ‘the’, such as in ‘The whale is
1 This is not strictly true: consider the name ‘Fido’ which is conventionally the name of a dog. But even
here the name doesn’t carry any information in itself about what it names – the fact that we use that
as a name only for dogs allows someone who knows that to reasonably infer that Fido is a dog. But
‘Fido is a dog’ isn’t a trivial truth, as it would be if somehow ‘Fido’ carried with it the information that
it applies only to dogs.
a mammal’. Here there is no implication that some specific whale is under discussion,
but rather that the species is mammalian. (So maybe ‘the whale’ is a complex name for
the species.) In the generic use, ‘the whale is a mammal’ can be paraphrased ‘whales
are mammals’. But a genuine definite description, such as ‘the Prime Minister is a Lib‐
eral’ cannot be paraphrased as ‘Prime Ministers are Liberals’. The question we face is:
can we adequately symbolise definite descriptions in Quantifier?2
domain: people
𝑇: _1 is a traitor
𝐷: _1 is a deputy
𝐶: _1 went to Cambridge
𝑆: _1 is shorter than _2
𝑛: Nick
We could symbolise sentence 162 with ‘℩𝑥𝑇𝑥 = 𝑛’ (‘the thing which is a traitor is
identical to Nick’), sentence 163 with ‘𝐶℩𝑥𝑇𝑥’, sentence 164 with ‘℩𝑥𝑇𝑥 = ℩𝑥𝐷𝑥 ’, and
sentence 165 with ‘℩𝑥𝑇𝑥 = ℩𝑥(𝐶𝑥 ∧ ∀𝑦((𝐶𝑦 ∧ 𝑥 ≠ 𝑦) → 𝑆𝑥𝑦))’. This last example may be
a bit tricky to parse. In semi‐formal English, it says (supposing a domain of persons):
‘the unique person such that they are a traitor is identical with the unique person such
that they went to Cambridge and they are shorter than anyone else who went to Cam‐
bridge’.
However, even adding this new symbol to our language doesn’t quite help with our
initial complaint, since it is not self‐evident that the symbolisation of ‘The traitor is
2 There is another question that I don’t address: can we come up with a good theory of the meaning of
‘the’ in English that unifies how it behaves in ‘the whale is a mammal’, ‘the Pacific Ocean is stormy’, and
‘Ortcutt is the shortest spy’? That question is very hard. Our task is to offer a symbolisation, and as
we’ve seen, a symbolisation needn’t be a translation, but only needs to capture the relevant implications
to be successful.
§19. DEFINITE DESCRIPTIONS 161
a traitor’ as ‘𝑇℩𝑥𝑇𝑥 ’ yields a logical truth. More seriously, the idea that all definite de‐
scriptions are to be treated as terms makes it more difficult to give a unified treatment
of descriptions in predicate position. It would be desirable to give a unified treatment
of ‘Ortcutt is a short spy’ and ‘Ortcutt is the short spy’; but while the former might be
symbolised as ‘(𝑆𝑜 ∧ 𝑃𝑜)’, using the predicative ‘is’, the latter would need to be treated as
‘𝑜 = ℩𝑥(𝑆𝑥 ∧ 𝑃𝑥)’, using the ‘is’ of identity.
More practically, it would be nice if we didn’t have to add a new symbol to Quantifier.
And indeed, we might be able to handle descriptions using what we already have.
Note a very important feature of this paraphrase: ‘the’ does not appear on the right‐side
of the equivalence. This approach would allow us to paraphrase every sentence of the
same form as the left hand side into a sentence of the same form as the right hand side,
and thus ‘paraphrase away’ the definite description.
It is crucial to notice that we can handle each of the conjuncts on the right hand side
of the equivalence in Quantifier, using our techniques for dealing with numerical quan‐
tification. We can deal with the three conjuncts on the right‐hand side of Russell’s
paraphrase as follows:
In fact, we could express the same point rather more crisply, by recognising that the
first two conjuncts just amount to the claim that there is exactly one ℱ , and that the last
conjunct tells us that that object is 𝒢. So, equivalently, we could offer this symbolisation
of ‘The ℱ is 𝒢’:
∃𝑥(ℱ𝑥 ∧ ∀𝑦(ℱ𝑦 → 𝑥 = 𝑦) ∧ 𝒢𝑥)
Using these sorts of techniques, we can now symbolise sentences 162–164 without using
any new‐fangled fancy operator, such as ‘℩’.
3 Bertrand Russell (1905) ‘On Denoting’, Mind 14, pp. 479–93; see also Russell (1919) Introduction to Math‐
ematical Philosophy, London: Allen and Unwin, ch. 16.
› Sentence 162 is exactly like the examples we have just considered. So we would
symbolise it by ‘∃𝑥(𝑇𝑥 ∧ ∀𝑦(𝑇𝑦 → 𝑥 = 𝑦) ∧ 𝑥 = 𝑛)’.
› Sentence 164 is a little trickier, because it links two definite descriptions. But,
deploying Russell’s paraphrase, it can be paraphrased by ‘something is such that:
there is exactly one traitor and there is exactly one deputy and it is each of them’.
So we can symbolise it by:
∃𝑥(𝑇𝑥 ∧ ∀𝑦(𝑇𝑦 → 𝑥 = 𝑦) ∧ 𝐷𝑥 ∧ ∀𝑧(𝐷𝑧 → 𝑥 = 𝑧)).
Note that I have made sure that both uniqueness conditions are in the scope of
the initial existential quantifier.
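Russellian symbolisations can likewise be checked against small invented interpretations. This Python sketch (the helper name, the two‑member domain, and the extensions are all made up for illustration) evaluates the symbolisation of sentence 164, ‘the traitor is the deputy’, under three different assignments of extensions to ‘𝑇’ and ‘𝐷’:

```python
def the_T_is_the_D(domain, T, D):
    # ∃x(Tx ∧ ∀y(Ty → x = y) ∧ Dx ∧ ∀z(Dz → x = z))
    return any(x in T
               and all((y not in T) or x == y for y in domain)
               and x in D
               and all((z not in D) or x == z for z in domain)
               for x in domain)

people = {"nick", "vera"}
# A unique traitor who is also the unique deputy: true.
assert the_T_is_the_D(people, T={"nick"}, D={"nick"})
# A unique traitor and a unique deputy, but different people: false.
assert not the_T_is_the_D(people, T={"nick"}, D={"vera"})
# No traitor at all (an empty description): false, as Russell predicts.
assert not the_T_is_the_D(people, T=set(), D={"vera"})
```

The last case illustrates a distinctive feature of Russell’s paraphrase: a sentence containing an empty definite description comes out false rather than truth‑valueless.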
Yet Russell’s account has some nice features that predict and explain some otherwise
puzzling features of English ‘the’, and many logicians have followed Russell in thinking
that the Russellian account might provide an adequate semantics for English definite
descriptions. So in this section and in §19.4, I cannot resist discussing some of the
evidence for Russell’s account of the English ‘the’, and some of the major puzzles for
that account. These two sections should be regarded as optional.
Scope and Descriptions Indeed, Russell’s paraphrase helpfully highlights two ways
one can go wrong with definite descriptions. To adapt an example from Stephen
Neale,4 suppose I, Antony Eagle, claim:
𝑎: Antony
𝐾: _1 is a present king of France
𝐺: _1 is a grandfather of _2
But your denial is ambiguous. There are two available readings of 167, corresponding
to these two different sentences:
168. There is no one who is both the present King of France and such that Antony is
his grandfather.
169. There is a unique present King of France, but Antony is not his grandfather.
Sentence 168 might be paraphrased by ‘It is not the case that: Antony is a grandfather
of the present King of France’. It will then be symbolised by ‘¬∃𝑥(𝐾𝑥 ∧ ∀𝑦(𝐾𝑦 → 𝑥 =
𝑦) ∧ 𝐺𝑎𝑥)’. We might call this WIDE SCOPE negation, since the negation takes scope
over the entire sentence. Note that this sentence is predicted to be true, because the
embedded sentence contains an empty definite description.
Sentence 169 can be symbolised by ‘∃𝑥(𝐾𝑥 ∧ ∀𝑦(𝐾𝑦 → 𝑥 = 𝑦) ∧ ¬𝐺𝑎𝑥)’. We might call
this NARROW SCOPE negation, since the negation occurs within the scope of the definite
description. Note that its truth would require that there be a present King of France,
albeit one who is not a grandchild of Antony; so this sentence, unlike 168, is predicted
to be false.
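We can confirm these two predictions by evaluating both symbolisations in an interpretation where nothing falls under ‘𝐾’. The Python sketch below (the helper name and the extensions are invented for illustration) shows that the wide‑scope reading comes out true and the narrow‑scope reading false:

```python
domain = ["antony", "hikaru"]  # nobody here is a present King of France
K = set()                      # extension of 'K' is empty (illustrative)
G = set()                      # extension of 'G': pairs (x, y), x a grandfather of y
a = "antony"

def unique_king_such_that(cond):
    # ∃x(Kx ∧ ∀y(Ky → x = y) ∧ cond(x))
    return any(x in K
               and all((y not in K) or x == y for y in domain)
               and cond(x)
               for x in domain)

# 168, wide scope:  ¬∃x(Kx ∧ ∀y(Ky → x = y) ∧ Gax)
wide = not unique_king_such_that(lambda x: (a, x) in G)
# 169, narrow scope: ∃x(Kx ∧ ∀y(Ky → x = y) ∧ ¬Gax)
narrow = unique_king_such_that(lambda x: (a, x) not in G)

print(wide, narrow)  # True False
```

Since the two readings differ in truth value on this interpretation, they are genuinely distinct, which is just the scope ambiguity the Russellian analysis predicts.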
These two disambiguations of your rejection 167 have different truth values, so don’t
mean the same thing. So there are two different reasons you could have for your rejec‐
tion of my claim. Are you accepting that the definite description refers and denying
what I said of the present king of France? Or are you denying that the definite descrip‐
tion refers, rejecting a more basic assumption of what I said?
The basic point is that the Russellian paraphrase provides two places in the symbolisa‐
tion of 167 for the negation of ‘isn’t’ to fit: either taking scope over the whole sentence,
or taking scope just over the predicate ‘𝐺𝑎𝑥’. We see evidence that there are these two
options in the ambiguity of 167, and so we should opt for a semantics, like Russell’s,
which has the resources to handle these ambiguities – in this case, by positing a quan‐
tifier scope ambiguity like those we discussed in §17.2.
The term‐forming operator approach to definite descriptions cannot handle this con‐
trast. There is just one symbolisation of the negation of 166 available in this frame‐
work: ‘¬𝐺𝑎℩𝑥𝐾𝑥 ’. The original sentence is false, so this negation must be true. Since
sentence 169 is false, this sentence does not express the inner negation of sentence 166.
But there is no way to put the negation elsewhere that will express the same claim as
169. (‘𝐺𝑎℩𝑥¬𝐾𝑥 ’ clearly doesn’t do the job – it says there is a unique thing which
isn’t the present king of France, and Antony is grandfather to it!) So the sentence ‘Ant‐
ony isn’t grandfather to the present king of France’ has only one correct symbolisation,
and hence only one reading, if the operator approach to definite descriptions is correct.
Since it is ambiguous, with multiple readings, the ‘℩’‐operator approach cannot be cor‐
rect – it doesn’t provide a complex enough grammar for definite description sentences
to allow for these kinds of scope ambiguities.
sentences should not be regarded as false, exactly.5 Rather, they both seem to assume
that ‘the ℱ ’ refers, and since this assumption is incorrect, the sentences misfire in a way
that should, Strawson thinks, make us regard them as neither true nor false, but nevertheless
still meaningful.
A SEMANTIC PRESUPPOSITION of a declarative sentence is something that must be taken
for granted by anyone asserting the sentence, triggered or forced by the words in‐
volved.6 A pretty reliable test for whether 𝒫 is a semantic presupposition of a sentence
𝒜 is whether 𝒫 is a consequence of both 𝒜 and ¬𝒜 . Strawson elevates this test to
a definition of semantic presupposition: presuppositions are entailments that persist
when a sentence is embedded under negation. So ‘John has stopped drinking’ and its
negation ‘John hasn’t stopped drinking’ both entail in English that John used to drink,
and hence ‘John used to drink’ is a semantic presupposition of ‘John has stopped drink‐
ing’. Here the presupposition is triggered by the aspectual verb ‘stopped’.
Strawson says that PRESUPPOSITION FAILURE occurs when the presupposition of a sen‐
tence is false. If John never used to drink, both ‘John has stopped drinking’ and ‘John
hasn’t stopped drinking’ misfire. Strawson, following Frege, suggests that in cases of
presupposition failure, a sentence is neither true nor false.
In the case of definite descriptions, the Frege‐Strawson view would say that ‘the present
King of France is bald’ presupposes that there is a present King of France. Since that
presupposition fails, the sentence is neither true nor false. This is contrary to Russell’s
position that the sentence is false.
With the notion of presupposition failure in hand, the Frege‐Strawson theory seems
to be able to address the scope evidence for Russell’s account. For there is now a dis‐
tinction between the denial involved in 168 and that involved in 169, even though the
logical form of the denied sentence is ‘𝐺𝑎℩𝑥𝐾𝑥 ’. The logical negation of that sentence is
‘¬𝐺𝑎℩𝑥𝐾𝑥 ’, which shares the presupposition that there is a present King of France. But
there is also another way of rejecting a sentence, one which targets not what was said
by the sentence, but its presuppositions. This is sometimes called METALINGUISTIC
NEGATION.7 One way of identifying its presence is the use of focal stress, emphasising
the word to be targeted, and accompanied by a gloss explaining the presupposition
to be rejected, as in:
The effects which Russell sees as the result of scope ambiguity are re‐analysed as in‐
volving the contrast between ordinary negation in 169, and metalinguistic negation
in 168. Crucially, on the Frege‐Strawson view, the successful deployment of metalin‐
guistic negation in cases like 172 renders the sentence ‘Sarah is the source of the leak’
neither true nor false.
5 P F Strawson (1950) ‘On Referring’, Mind 59 pp. 320–34.
6 See David I Beaver, Bart Geurts, and Kristie Denlinger (2021) ‘Presupposition’, in Edward N Zalta, ed.,
The Stanford Encyclopedia of Philosophy https://plato.stanford.edu/archives/spr2021/entries/
presupposition/, esp. §2 and §6.
7 Larry Horn (1989) A Natural History of Negation, University of Chicago Press.
The phenomenon of metalinguistic negation is real. But if we agree with Frege and
Strawson here on how to model it semantically, we shall need to revise our logic. For,
in our logic, there are only two truth values (True and False), and every meaningful
sentence is assigned exactly one of these truth values. Why? Suppose there is presup‐
position failure of ‘John has stopped drinking’, because John never drank. Then ‘John
has stopped drinking’ can’t be true since it entails something false, its presupposition
‘John drank’. And ‘John hasn’t stopped drinking’ can’t be true, since it also entails that
same falsehood. So neither can be true. Since one is the negation of the other, if either
is false, the other is true. So neither can be false, either. So if there are nontrivial se‐
mantic presuppositions – semantic presuppositions that might be false – then we shall
have to admit that some meaningful sentences with a false presupposition are neither
true nor false. It remains an open question, admittedly, whether presuppositions in
the ordinary and intuitive sense are really semantic presuppositions in this sense.
But there is room to disagree with Strawson. Strawson is appealing to some linguistic
intuitions, but it is not clear that they are very robust. For example: isn’t it just false,
not ‘gappy’, that Antony is grandfather to the present King of France? (This is Neale’s
line.)
Misdescription Keith Donnellan raised a second sort of worry, which (very roughly)
can be brought out by thinking about a case of mistaken identity.8 Two men stand in
the corner: a very tall man drinking what looks like a gin martini; and a very short man
drinking what looks like a pint of water. Seeing them, Malika says:
173′. There is exactly one gin‐drinker [in the corner], and whoever is a gin‐drinker
[in the corner] is very tall.
But now suppose that the very tall man is actually drinking water from a martini glass;
whereas the very short man is drinking a pint of (neat) gin. By Russell’s paraphrase, Ma‐
lika has said something false. But don’t we want to say that Malika has said something
true?
Again, one might wonder how clear our intuitions are on this case. We can all agree
that Malika intended to pick out a particular man, and say something true of him (that
he was tall). On Russell’s paraphrase, she actually picked out a different man (the
short one), and consequently said something false of him. But maybe advocates of
Russell’s paraphrase only need to explain why Malika’s intentions were frustrated, and
so why she said something false. This is easy enough to do: Malika said something
false because she had false beliefs about the men’s drinks; if Malika’s beliefs about the
drinks had been true, then she would have said something true.9
8 Keith Donnellan (1966) ‘Reference and Definite Descriptions’, Philosophical Review 77, pp. 281–304.
9 Interested parties should read Saul Kripke (1977) ‘Speaker Reference and Semantic Reference’, 1977 in
French et al., eds., Contemporary Perspectives in the Philosophy of Language, Minneapolis: University
of Minnesota Press, pp. 6‐27.
To say much more here would lead us into deep philosophical waters. That would be
no bad thing, but for now it would distract us from the immediate purpose of learning
formal logic. So, for now, we shall stick with Russell’s paraphrase of definite descrip‐
tions, when it comes to putting things into Quantifier. It is certainly the best that we
can offer, without significantly revising our logic. And it is quite defensible as a
paraphrase.
Practice exercises
A. Using the following symbolisation key:
domain: people
𝐾: _1 knows the combination to the safe.
𝑆: _1 is a spy.
𝑉: _1 is a vegetarian.
𝑇: _1 trusts _2.
ℎ: Hofthor
𝑖: Ingmar
domain: animals
𝐶: _1 is a cat.
𝐺: _1 is grumpier than _2.
𝑓: Felix
𝑔: Sylvester
› 𝑇𝑛 ∧ ∀𝑦(𝑇𝑦 → 𝑛 = 𝑦)
› ∀𝑦(𝑇𝑦 ↔ 𝑦 = 𝑛)
H. Russell’s paraphrase of an indefinite description sentence like ‘José met a man’ is:
there is at least one thing x such that x is male and human and José met x.
Note that the word ‘is’ has apparently two readings: sometimes, as in ‘Fido is heavy’,
it indicates predication; sometimes, as in ‘Fido is Rover’, it indicates identity (in this
case, the same dog is known by two names). Something interesting arises if Russell’s
account of indefinite descriptions is right, since in an example like ‘Fido is a dog of
unusual size’, we might interpret the ‘is’ in either way, roughly:
Suppose that ‘𝑈’ symbolises the property of being a dog of unusual size, then our two
readings can be symbolised ‘∃𝑥(𝑈𝑥 ∧ 𝑓 = 𝑥)’ and ‘𝑈𝑓’.
Is there any significant difference in meaning between these two symbolisations?
20
Sentences of Quantifier
We know how to represent English sentences in Quantifier. The time has finally come
to properly define the notion of a sentence of Quantifier.
20.1 Expressions
There are six kinds of symbols in Quantifier:
Predicate symbols 𝐴, 𝐵, 𝐶, …, 𝑍
with subscripts, as needed 𝐴1 , 𝐵1 , 𝑍1 , 𝐴2 , 𝐴25 , 𝐽375 , …
and the identity symbol =.
Names 𝑎, 𝑏, 𝑐, …, 𝑟
with subscripts, as needed 𝑎1 , 𝑏224 , ℎ7 , 𝑚32 , …
Variables 𝑠, 𝑡, 𝑢, 𝑣, 𝑤, 𝑥, 𝑦, 𝑧
with subscripts, as needed 𝑥1 , 𝑦1 , 𝑧1 , 𝑥2 , …
Connectives ¬, ∧, ∨, →, ↔
Quantifiers ∀, ∃
Parentheses (, )
§20. SENTENCES OF Quantifier 171
stage: via the notion of a formula. The intuitive idea is that a formula is any sentence,
or anything which can be turned into a sentence by adding quantifiers out front. But
this will take some unpacking.
We start by defining the notion of a term.
The use of script fonts here follows the conventions laid down in §7. So, ‘ℛ ’ is not
itself a predicate of Quantifier. Rather, it is a symbol of our metalanguage (augmented
English) that we use to talk about any predicate of Quantifier. Similarly, ‘𝓉1 ’ is not a
term of Quantifier, but a symbol of the metalanguage that we can use to talk about any
term of Quantifier. So here are some atomic formulae:
𝑥=𝑎
𝑎=𝑏
𝐹𝑥
𝐹𝑎
𝐺𝑥𝑎𝑦
𝐺𝑎𝑎𝑎
𝑆𝑥1 𝑥2 𝑎𝑏𝑦𝑥1
𝑆𝑏𝑦254 𝑧𝑎𝑎𝑧
Remember that we allow zero‐place predicates too, to ensure that sentence letters of
Sentential are grammatical expressions of Quantifier too. According to the definition,
any predicate symbol followed by no terms at all is also an atomic formula of Quantifier.
So ‘𝑄’ by itself is an acceptable atomic formula.
Earlier, we distinguished many‐place from one‐place predicates. We made no distinc‐
tion however in our list of acceptable symbols between predicate symbols with dif‐
ferent numbers of places. This means that ‘𝐹 ’, ‘𝐹𝑎’, ‘𝐹𝑎𝑥 ’, and ‘𝐹𝑎𝑥𝑏’ are all atomic
formulae. We will not introduce any device for explicitly indicating what number of
places a predicate has. Rather, we will assume that in every atomic formula of the
form 𝒜𝓉1 …𝓉𝑛 , 𝒜 denotes an 𝑛‐place predicate. This means there is a potential for
confusion in practice, if someone chooses to symbolise an argument using both the
one‐place predicate ‘𝐴’ and the two‐place predicate ‘𝐴’. Rather than forbid this
entirely, we recommend choosing distinct symbols for predicates with different numbers
of places in any symbolisation you construct.
Once we know what atomic formulae are, we can offer recursion clauses to define
arbitrary formulae. The first few clauses are exactly the same as for Sentential.
𝐹𝑥
𝐺𝑎𝑦𝑧
𝑆𝑦𝑧𝑦𝑎𝑦𝑥
(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥)
∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥)
𝐹𝑥 ↔ ∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥)
∃𝑦(𝐹𝑥 ↔ ∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥))
∀𝑥∃𝑦(𝐹𝑥 ↔ ∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥))
∀𝑥(𝐹𝑥 → ∃𝑥𝐺𝑥)
∀𝑦(𝐹𝑎 → 𝐹𝑎)
We can now give a formal definition of scope, which incorporates the definition of
the scope of a quantifier. Here we follow the case of Sentential, though we now allow
that a ‘connective’ can be either a sentential connective or a quantifier:
So we can illustrate the scope of the quantifiers in the last two examples as follows. In
‘∀𝑥∃𝑦(𝐹𝑥 ↔ ∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥))’, the scope of ‘∀𝑥’ is the whole formula; the scope of
‘∃𝑦’ is ‘∃𝑦(𝐹𝑥 ↔ ∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥))’; and the scope of ‘∀𝑧’ is ‘∀𝑧(𝐺𝑎𝑦𝑧 → 𝑆𝑦𝑧𝑦𝑎𝑦𝑥)’.
In ‘∀𝑥(𝐹𝑥 → ∃𝑥𝐺𝑥)’, the scope of ‘∀𝑥’ is the whole formula, while the scope of ‘∃𝑥’ is
‘∃𝑥𝐺𝑥’.
Note that it follows from our recursive definition that ‘∀𝑥𝐹𝑎’ is a formula. While puzz‐
ling on its face, with that quantifier governing no variable in its scope, it nevertheless
is a formula of the language. It will turn out that, because the quantifier binds no
variable in its scope, it is redundant; the formula ‘∀𝑥𝐹𝑎’ is logically equivalent to ‘𝐹𝑎’.
Eliminating such formulae from the language involves greatly increased complication
in the definition of a formula for no important gain.
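The equivalence of the vacuously quantified ‘∀𝑥𝐹𝑎’ with plain ‘𝐹𝑎’ can itself be checked by brute force on a small domain. The sketch below runs through every extension of ‘𝐹’ and every referent for ‘𝑎’ (the three‑element domain is illustrative; the equivalence holds in general precisely because the matrix ‘𝐹𝑎’ never mentions ‘𝑥’):

```python
from itertools import product

domain = [0, 1, 2]

# Check that '∀xFa' and 'Fa' agree on every one-place extension of 'F'
# and every referent for 'a' in a small domain.
for ext in product([False, True], repeat=len(domain)):
    F = dict(zip(domain, ext))
    for a in domain:
        vacuous = all(F[a] for x in domain)   # value of '∀xFa'
        assert vacuous == F[a]                # same as plain 'Fa'
```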
20.3 Sentences
Recall that we are largely concerned in logic with assertoric sentences: sentences that
can be either true or false. Many formulae are not sentences. Consider the following
symbolisation key:
domain: people
𝐿: _1 loves _2
𝑏: Boris
Consider the atomic formula ‘𝐿𝑧𝑧’. All atomic formulae are formulae, so ‘𝐿𝑧𝑧’ is a for‐
mula. But can it be true or false? You might think that it will be true just in case the
person named by ‘𝑧’ loves herself, in the same way that ‘𝐿𝑏𝑏’ is true just in case Boris
(the person named by ‘𝑏’) loves himself. But ‘𝑧’ is a variable, and does not name anyone
or any thing. It is true that we can sometimes manage to make a claim by saying ‘it
loves it’, the best English rendering of ‘𝐿𝑧𝑧’. But we can only do so by making use of
contextual cues to supply referents for the pronouns ‘it’ – contextual cues that artificial
languages like Quantifier lack. (If you and I are both looking at a bee sucking nectar on
a flower, we might say ‘it loves it’ to express the claim that the bee [it] loves the nectar
[it]. But we don’t have such a rich environment to appeal to when trying to interpret
formulae of Quantifier.)
Of course, if we put an existential quantifier out front, obtaining ‘∃𝑧𝐿𝑧𝑧’, then this
would be true iff someone loves themself (i.e., someone [𝑧] is such that they [𝑧] love
themself [𝑧]). Equally, if we wrote ‘∀𝑧𝐿𝑧𝑧’, this would be true iff everyone loves them‐
selves. The point is that, in the absence of an explicit introduction or contextual cues,
we need a quantifier to tell us how to deal with a variable.
Let’s make this idea precise.
The scope of the universal quantifier ‘∀𝑥 ’ is ‘∀𝑥(𝐸𝑥 ∨ 𝐷𝑦)’, so the first ‘𝑥’ is bound by the
universal quantifier. However, the second and third occurrences of ‘𝑥’ are free. Equally,
the ‘𝑦’ is free. The scope of the existential quantifier ‘∃𝑧’ is ‘∃𝑧(𝐸𝑥 → 𝐿𝑧𝑥)’, so ‘𝑧’ is
bound.
In our last example from the previous section, ‘∀𝑥(𝐹𝑥 → ∃𝑥𝐺𝑥)’, the variable ‘𝑥’ in ‘𝐺𝑥 ’
is bound by the quantifier ‘∃𝑥’, and so 𝑥 is free in ‘𝐹𝑥 → ∃𝑥𝐺𝑥’ only when it appears in
‘𝐹𝑥’. So while the scope of ‘∀𝑥’ is the whole sentence, it nevertheless doesn’t bind every
variable in its scope – only those occurrences which would be free were it absent. (So
we might say an occurrence of a variable 𝓍 is BOUND BY an occurrence of a quantifier
∀𝓍/∃𝓍 just in case it would have been free had that quantifier been omitted.)
Finally we can say the following: a SENTENCE of Quantifier is any formula of Quantifier
which contains no free variables.
Since an atomic formula formed by a zero‐place predicate contains no terms at all, and
hence cannot contain a variable, every such expression is a sentence – they are just the
atomic sentences of Sentential. Any other formula which contains no variables, but
only names, is also a sentence, as well as all those formulae which contain only bound
variables.
Our definition of a formula allows for examples like ‘∃𝑥∀𝑥𝐹𝑥 ’. This is a sentence, since
the variable in ‘𝐹𝑥’ is in the scope of a quantifier attached to ‘𝑥’. But which one? It
could make a difference whether the sentence is to be understood as saying everything
is 𝐹, or something is. To resolve this issue, let us stipulate that a variable is bound by
the quantifier which is the main connective of the smallest subformula in which the
variable is bound. So in ‘∃𝑥∀𝑥𝐹𝑥 ’, the variable is bound by the universal quantifier,
because it was already bound in the subformula ‘∀𝑥𝐹𝑥 ’.
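If you like to tinker, the free/bound distinction can be sketched computationally. The following Python fragment is my own illustration, not part of the official grammar of Quantifier: formulae are encoded as nested tuples (an encoding invented for the occasion), and a sentence is a formula with no free variables.

```python
# Formulae as nested tuples, e.g. ('atom', 'L', 'z', 'z') for 'Lzz' and
# ('all', 'x', body) for a universally quantified formula.
def free_vars(f):
    op = f[0]
    if op == 'atom':                                # terms follow the predicate
        return {t for t in f[2:] if t in 'xyz'}     # treat x, y, z as variables
    if op == 'not':
        return free_vars(f[1])
    if op in ('and', 'or', 'if', 'iff'):
        return free_vars(f[1]) | free_vars(f[2])
    if op in ('all', 'some'):                       # a quantifier binds its variable
        return free_vars(f[2]) - {f[1]}
    raise ValueError(op)

def is_sentence(f):
    return not free_vars(f)

Lzz = ('atom', 'L', 'z', 'z')
print(free_vars(Lzz))                   # {'z'} – a formula, but not a sentence
print(is_sentence(('some', 'z', Lzz)))  # True: '∃zLzz' binds the variable
```

Note that on this encoding ‘∀𝑥𝐹𝑎’ comes out as a sentence too, just as the recursive definition in the text predicts.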
Practice exercises
A. Identify which variables are bound and which are free. Are any of these expressions
formulas of Quantifier? Are any of them sentences? Explain your answers.
B. Identify which of the following are (a) expressions of Quantifier; (b) formulae of
Quantifier; and (c) sentences of Quantifier.
Interpretations
21
Extensionality
Recall that Sentential is a truth‐functional language. Its connectives are all truth‐
functional, and all that we can do with Sentential is key sentences to particular truth
values. We can do this directly. For example, we might stipulate that the Sentential sen‐
tence ‘𝑃’ is to be true. Alternatively, we can do this indirectly, offering a symbolisation
key, e.g.:
› The Sentential sentence ‘𝑃’ is to take the same truth value as the English sentence
‘Big Ben is in London’ (whatever that truth value may be)
The point that I emphasised is that Sentential cannot handle differences in meaning
that go beyond mere differences in truth value.
we do not carry the meaning of the English predicate across into our Quantifier predic‐
ate. We are simply stipulating something like the following:
So, in particular:
› ‘𝐶 ’ is to be true of all and only those things which lecture on logic in Adelaide in
Semester 2, 2018 (whatever those things might be).
and ‘ lectures on logic in Adelaide in Semester 2, 2018’ have very different mean‐
ings!
The point is that Quantifier does not give us any resources for dealing with nuances
of meaning. When we interpret Quantifier, all we are considering is what the predic‐
ates are actually true of. For this reason, I say only that Quantifier sentences symbolise
English sentences. It is doubtful that we are translating English into Quantifier, for
translations should preserve meanings.
The EXTENSION of an English expression is just the things to which it actually applies.
So the extension of a name is the thing actually named; and the extension of a predic‐
ate is just those things it actually covers. Our symbolisation keys can be understood
as stipulating that the extension of a Quantifier expression is to be the same as the
extension of some English expression.
English is not an extensional language. In English, there is a distinction between the
extension of a term – or what it denotes – and what it means.
In Quantifier, by contrast, the substitution of one name for another with the same exten‐
sion will always yield a sentence with the same truth‐value; likewise with the substitu‐
tion of one predicate for another with the same extension. This is normally summed
up by saying that Quantifier is an EXTENSIONAL LANGUAGE. An extensional language is
one where non‐logical expressions with the same extension can always be swapped for
one another in a sentence without changing the truth value.1
We noted above that a name and a definite description might have very different mean‐
ings, despite having the same extension. Our treatment of definite descriptions (§19)
allows us to preserve the extensionality of Quantifier while also preserving the key lo‐
gical features of descriptions. The definite description in English is analysed as a quan‐
tifier expression in Quantifier, so there is no such singular term as a definite description
in Quantifier to be assigned an extension at all. What are assigned extensions are pre‐
dicates and names, and our approach to definite descriptions allows the extensions
of those predicates and names to fix the truth value of any symbolisation of an Eng‐
lish sentence involving definite descriptions. The logical properties of the symbolised
sentence in Quantifier, however, are also determined by the quantifier structure, which
isn’t fixed by assigning an extension.
David Cameron
the number 𝜋
every top‐F key on every piano ever made
1 What this shows, in passing, is that Quantifier lacks the resources to express things like ‘___ has always
been ___’ or ‘___ is necessarily ___’, which would allow us to separate expressions with the same
present extension.
Another construction with apparently similar effects is when a true identity, such as ‘Lewis Carroll is
Charles Lutwidge Dodgson’, is embedded in a belief report such as ‘AE believes that Lewis Carroll is
Charles Lutwidge Dodgson’. The belief report appears to be false, if AE doesn’t know that ‘Lewis Carroll’
is a pen name for the Oxford mathematician. But still, this seems hard to deny: ‘AE believes that Lewis
Carroll is Lewis Carroll’. Many have used this to argue that the meaning of a name in English is not just
its extension. But this is actually a rather controversial case, unlike the example in the main text. Many
philosophers think that the meaning of a name in English just is its extension. But let us be clear: no
one generalises this to predicates. Everyone agrees that the meaning of an English predicate is not just
its extension.
Now, the objects that we have listed have nothing particularly in common. But this
doesn’t matter. Logic doesn’t care about what strikes us mere humans as ‘natural’ or
‘similar’ (see below, §21.7). As long as the extension assigned consists of elements of
the domain, we’ve managed to come up with an acceptable interpretation, at least from
a purely logical point of view. Armed with this interpretation of ‘𝐻’, suppose I now add
to my symbolisation key:
𝑑 : David Cameron
𝑛: Julia Gillard
𝑝: the number 𝜋
Then ‘𝐻𝑑 ’ and ‘𝐻𝑝’ will both be true, on this interpretation, but ‘𝐻𝑛’ will be false, since
Julia Gillard was not among the stipulated objects.
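Explicit stipulation is easy to model. In this Python sketch (illustrative only; the strings standing in for objects are my own shorthand), the extension is just a set, and an atomic sentence is true iff the name’s referent is in it:

```python
# A stipulated extension for 'H' – an arbitrary collection – plus a few names.
H = {'David Cameron', 'the number pi', 'every top-F piano key'}
names = {'d': 'David Cameron', 'n': 'Julia Gillard', 'p': 'the number pi'}

def true_of(pred_ext, name):
    """'Hd' is true iff the object named by 'd' is in the extension of 'H'."""
    return names[name] in pred_ext

print(true_of(H, 'd'), true_of(H, 'p'), true_of(H, 'n'))  # True True False
```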
This process of explicit stipulation is just giving the extension of a predicate by a list
of items falling under it. A more common way of identifying the extension of a pre‐
dicate is to derive it from a classification rule that sorts everything into those items
that fall under it and those that do not. Such a classification rule is often what people
think of as the meaning of a predicate. But such a classification rule goes beyond the
extension. In effect, our earlier symbolisation keys assign an extension by relying on
our knowledge of the classification rule associated with English predicates. While the
rule might determine the extension in a domain, it is not the extension: two different
rules could come up with the same extension, as in the case of monotremes earlier.
The extension of a one‐place predicate is just some things. We can identify them by
simply listing them. A list like that isn’t anything more than the items in the list, so
that list will be the same in every domain which contains those things in the extension.
But we can see a further difference between extensions and classification rules when
we can consider applying the same rule in different domains. So the classification rule
associated with the English predicate ‘___1 is a student’ applies to many, many people
in the domain of all people. In the domain ‘people in this class’, it applies only to a
select few. In fact, you can even use the classification rule ‘___1 is a student’ to fix
an extension where the domain doesn’t include any students at all. In that case it will
just yield an EMPTY EXTENSION – one containing nothing. Here are some other cases
where the empty extension should be assigned to a predicate:
The empty collection too is a legitimate extension, because the empty set is still some‐
thing: it is still a sub‐collection of the domain. However, our names must always be
assigned some element of the domain. An ‘empty’ name would have no extension at all
– and in an extensional language it is hard to see how we can admit such expressions.
So keep clear the distinction between assigning an empty collection as an extension,
and assigning nothing at all.
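The distinction can be sketched in Python (an illustration of mine, not part of the formal machinery): an empty extension is a fine value for a predicate, but every name must still be mapped to an element of the domain.

```python
# The empty set is a legitimate extension; a name, by contrast, must always
# be assigned exactly one member of the domain.
domain = {1, 2, 3}
S = set()            # extension for '___ is a student', in a student-free domain
names = {'a': 1}     # every name picks out some element of the domain

print(any(x in S for x in domain))   # False: 'S' applies to nothing
print(names['a'] in domain)          # True: 'a' still denotes something
```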
There are also trivial cases of the opposite sort, where we assign the entire domain as
the extension of a predicate:
𝐿: ___1 loves ___2
Given what I said above, this symbolisation key should be read as saying:
› ‘𝐿’ and ‘___1 loves ___2’ are to be true of exactly the same things in the domain
So, in particular:
It is important that we insist upon the order here, since love – famously – is not always
reciprocated. (Note that ‘a’ and ‘b’ here are symbols of English, and that they are being
used to talk about particular things in the domain.)
That is an indirect stipulation. What about a direct stipulation? This is slightly harder.
If we simply list objects that fall under ‘𝐿’, we will not know whether they are the lover
or the beloved (or both). We have to find a way to include the order in our explicit
stipulation.
To do this, we can specify that two‐place predicates are true of ORDERED PAIRS of ob‐
jects, which differ from two‐membered collections in that the order of a pair is import‐
ant. Thus we might stipulate that ‘𝐵’ is to be true of, and only of, the following pairs
of objects:
⟨Lenin, Marx⟩
⟨Heidegger, Sartre⟩
⟨Sartre, Heidegger⟩
Here the angle brackets keep us informed concerning order – ⟨a, b⟩ is a different pair
from ⟨b, a⟩ even though they correspond to the same collection of two things, a and b.
Suppose I now add the following stipulations:
𝑙: Lenin
𝑚: Marx
ℎ: Heidegger
𝑠: Sartre
Then ‘𝐵𝑙𝑚’ will be true, since ⟨Lenin, Marx⟩ was in my explicit list. But ‘𝐵𝑚𝑙 ’ will be
false, since ⟨Marx, Lenin⟩ was not in my list. However, both ‘𝐵ℎ𝑠’ and ‘𝐵𝑠ℎ’ will be true,
since both ⟨Heidegger, Sartre⟩ and ⟨Sartre, Heidegger⟩ are in my explicit list.
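The role of order here can be made vivid with a small Python sketch (an illustration only; the tuple encoding is my own, not anything official). Python tuples, like our angle-bracket pairs, are ordered:

```python
# The extension of the two-place predicate 'B' as a set of ordered pairs.
B = {('Lenin', 'Marx'), ('Heidegger', 'Sartre'), ('Sartre', 'Heidegger')}
names = {'l': 'Lenin', 'm': 'Marx', 'h': 'Heidegger', 's': 'Sartre'}

def true_atomic(pred_ext, n1, n2):
    """'Bn1n2' is true iff the pair of the names' referents is in the extension."""
    return (names[n1], names[n2]) in pred_ext

print(true_atomic(B, 'l', 'm'))  # True: ⟨Lenin, Marx⟩ is listed
print(true_atomic(B, 'm', 'l'))  # False: ⟨Marx, Lenin⟩ is not
```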
It is perhaps worth being explicit that our sequences are ordered groups of things,
not lists of words or names. (Unless our domain is the set of words or names!) The
extension of a many‐place predicate is no less a worldly collection than the extension
of a name or a one‐place predicate. The sequences that are members of that extension
have things as their constituents, not names.
To make these ideas more precise, we would need to develop some set theory, the
mathematical theory of collections. This would give you some tools for modelling
extensions and ordered pairs (and ordered triples, etc.), as well as some ideas about the
metaphysical status of collections and sets. Indeed, set theoretic tools lie at the heart
of contemporary linguistic approaches to meaning in natural languages. However, we
have neither the time nor the need to cover set theory in this book. I shall leave these
notions at an imprecise level; the general idea will be clear enough, I hope.
If two names 𝒶 and 𝒷 are assigned to the same object in a given symbolisation key, and
thus have the same extension, 𝒶 = 𝒷 will also be true on that symbolisation key. And
since the two names have the same extension, and Quantifier is extensional, substitut‐
ing one name for another will not change the truth value of any Quantifier sentence.
So, in particular, if ‘𝑎’ and ‘𝑏’ name the same object, then all of the following will be
true:
𝐴𝑎 ↔ 𝐴𝑏
𝐵𝑎 ↔ 𝐵𝑏
𝑅𝑎𝑎 ↔ 𝑅𝑏𝑏
𝑅𝑎𝑎 ↔ 𝑅𝑎𝑏
𝑅𝑐𝑎 ↔ 𝑅𝑐𝑏
∀𝑥𝑅𝑥𝑎 ↔ ∀𝑥𝑅𝑥𝑏
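A quick sketch (mine, purely illustrative, with invented extensions) confirms the pattern: once ‘a’ and ‘b’ are assigned the same object, atomic sentences cannot tell them apart.

```python
# Two names with the same extension; substituting one for the other never
# changes the truth value of an atomic sentence.
extension_of_A = {'obj1'}
extension_of_R = {('obj1', 'obj1'), ('obj3', 'obj1')}
names = {'a': 'obj1', 'b': 'obj1', 'c': 'obj3'}

def A(n): return names[n] in extension_of_A
def R(n1, n2): return (names[n1], names[n2]) in extension_of_R

assert A('a') == A('b')             # 𝐴𝑎 ↔ 𝐴𝑏
assert R('a', 'a') == R('b', 'b')   # 𝑅𝑎𝑎 ↔ 𝑅𝑏𝑏
assert R('c', 'a') == R('c', 'b')   # 𝑅𝑐𝑎 ↔ 𝑅𝑐𝑏
print('substitution preserved truth values')
```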
An open sentence which doesn’t always yield the same truth value when we substitute names
with the same referent is called OPAQUE. The word which is responsible for the opacity
here is the attitude verb ‘believes’. Intuitively, what matters for a belief ascription is not
which things are identical, but which things the subject represents as identical. Other
attitude verbs also lead to opacity: ‘knows’, ‘wants’, ‘remembers’. A language which
contains opaque constructions cannot be completely represented in an extensional
language like Quantifier.
21.6 Interpretations
I defined a valuation in Sentential as any assignment of truth and falsity to atomic sen‐
tences. In Quantifier, I am going to define an INTERPRETATION as consisting of three
things:
› the specification of a domain;
› for each name that we care to consider, an assignment of exactly one object
within the domain as its extension;
› for each nonlogical 𝑛‐place predicate 𝒜 (i.e., any predicate other than ‘=’) that
we care to consider, a specification of its extension: what things (or pairs of
things, or triples of things, etc.) the predicate is to be true of – where all those
things must be in the domain.
So far, the domains we have used are drawn from the actual world, and English pre‐
dicates have been used to assign their actual extensions to Quantifier predicates. But a
perfectly good interpretation might assign merely possible entities to the domain, and
merely possible extensions to the predicates of Quantifier, or a mixture of the two.
Let’s therefore make the assumption that interpretations are not restricted to actuality.
A challenging question, which we will not address, is what enables us to talk about and
apparently make use of such merely possible things in our interpretations.
say that extension is both ‘___ is a bird’ and ‘___ is flightless’ – but since those two are distinct,
the extension cannot be both, and in fact it is clear that it cannot be either.2
A final reason for caution is that many legitimate extensions don’t seem to correspond
to genuine properties. It is hard to characterise the distinction between genuine prop‐
erties and the others, but one idea is that genuine properties contribute to resemblance.
If two things are both red, then they resemble one another in appearance. But any
things from a domain can form an extension, and we can imagine just selecting things,
which can then be assigned to a predicate. In that case there is unlikely to be anything
those things have in common just because they belong to an extension. Suppose I toss
a coin ten times, and determine an extension by selecting 𝑛 from the domain of natural
numbers if the 𝑛‐th toss was heads. The extension I get is: 1, 3, 4, 5, 7. But clearly noth‐
ing in this recipe means this extension corresponds to a genuine property. Nothing
in Quantifier requires that predicates must have genuine (i.e., resemblance‐grounding)
properties determining their extensions.
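The coin-toss recipe is easy to simulate. In this illustrative Python sketch (the seed is arbitrary, chosen only to make runs repeatable), whatever set results is a perfectly legitimate extension, whether or not it corresponds to any genuine property:

```python
import random

# Include n (for n from 1 to 10) in the extension iff the n-th toss is heads.
random.seed(0)  # illustrative seed; any run yields *some* extension
extension = {n for n in range(1, 11)
             if random.choice(['heads', 'tails']) == 'heads'}
print(extension)  # some arbitrary subset of {1, ..., 10}
```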
Properties determine extensions for one‐place predicates. The corresponding entity
determining the extension of a many‐place predicate is a RELATION. So the relation of
loving on a domain determines an extension, a set of pairs from the domain standing
in that relation. The same caveats apply as in the case of properties: we shouldn’t
take Quantifier predicates to have relations as their meaning, we shouldn’t identify a
relation with its extension, and we shouldn’t take any extension to correspond to a
genuine relation.
2 These properties are coextensive on this domain, but not in general. But some properties seem to be
necessarily coextensive: take the properties being a plane figure with three sides and being a plane figure
with three vertices. At first glance these are different properties – one involves counting lines, the other
counting angles.
[Figure 21.1: An Euler diagram with regions labelled ‘Mammals’, ‘Egg‐layers’ and ‘Birds’ within the domain ‘Animals’.]
domain but outside the given region – represents those members of the domain
that are not in the extension of 𝒫 ;
› If two predicates have extensions that overlap, the regions that represent them
in the diagram should overlap.
› If one predicate has an extension that is entirely within the extension of another
predicate, the region associated with the first should be entirely within the re‐
gion associated with the second;
We then label and shade each region to enable us to tell which predicates they are
associated with. We may also add labelled dots to denote individuals who may fall
into various predicate extensions.
These rules are quite abstract, but they are easy to apply in practice. Suppose we are
given the following interpretation:
We may represent this interpretation as in Figure 21.1. In this diagram, the egg‐layers
are a subset of the animals – not all animals reproduce by laying eggs, so they do not
coincide with the whole domain. The birds are a subset of animals too, and in fact
wholly within the egg‐layers: all birds reproduce oviparously. The mammals are a
subset of the animals, and one that overlaps with the egg‐layers (the monotremes!),
though neither is contained in the other, so the regions overlap without either being
wholly within the other. The birds and mammals do not have any common members,
so the associated regions do not overlap at all. Note that the sizes of the regions may,
but need not, represent the number of things within them. In this diagram they prob‐
ably don’t.
Another example. Consider this interpretation, discussed earlier on page 126:
The wisdom of Bret Michaels then tells us that the Euler diagram should look like the
representation in Figure 21.2.
We can start to see how interpretations can help us evaluate arguments quite vividly
in this graphical environment. Recall this example from §15: ‘Willard (‘𝑤’) is a logician
(‘𝐿’); Every logician wears a funny hat (‘𝐹 ’) ∴ Willard wears a funny hat’. This can
be represented using an Euler diagram making use of a labelled dot to represent the
individual Willard, as in Figure 21.3. We can see from the diagram that this argument
is valid: Willard falls in the region corresponding to ‘𝐹 ’ because he falls in the region
corresponding to ‘𝐿’, which is wholly within the ‘𝐹 ’‐region.
If you would like a real challenge, you might try to figure out the interpretation corres‐
ponding to the Euler diagram in Figure 21.4.
[Figure 21.2 labels: ‘roses’, ‘cowboys’. Figure 21.3 labels: ‘𝐹’, ‘𝑤’, ‘𝐿’; domain: People.]
the predicate. Such a diagram is known as a DIRECTED GRAPH. A directed graph com‐
prises a collection of NODES, and a collection of arrows (ordered links) between those
nodes; in our case, the nodes are the elements of the domain, and the arrows corres‐
pond to the extension of a two‐place predicate on that domain, given the convention
that when there is an arrow running from 𝓍 to 𝓎 in a graph, that means ⟨𝓍, 𝓎⟩ is in the
extension of the predicate.
Let’s consider some examples.
› First, consider the following interpretation, written in the more standard man‐
ner:
domain: 1, 2, 3, 4
𝑅: ⟨1, 2⟩, ⟨2, 3⟩, ⟨3, 4⟩, ⟨4, 1⟩, ⟨1, 3⟩.
That is, an interpretation whose domain is the first four positive whole num‐
bers, and which interprets ‘𝑅’ as being true of and only of the specified pairs of
numbers. This might be represented by the simple graph depicted in Figure 21.5.
domain: 1, 2, 3, 4
𝑅: ⟨1, 3⟩, ⟨3, 1⟩, ⟨3, 4⟩, ⟨1, 1⟩, ⟨3, 3⟩, ⟨4, 4⟩.
We might offer the graph in Figure 21.6 to represent it. The existence of pairs
like ⟨1, 1⟩ in the extension of 𝑅 is borne out in the presence of ‘loops’ from nodes
to themselves in the graph. (Because of these loops, our graph is not what graph
theorists call a ‘simple’ graph.) Notice that 2 is in the domain, and hence in‐
cluded as a node in the graph, but it has no arrows attached to it, because it is
not included in the extension assigned to 𝑅.
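For the computationally inclined, a directed graph of this sort is nothing more than a domain plus a set of ordered pairs. The following Python sketch (my illustration, not the book’s official apparatus) recovers the loops and the isolated node directly from the interpretation:

```python
# An arrow from x to y in the graph means ⟨x, y⟩ is in the extension of R.
domain = {1, 2, 3, 4}
R = {(1, 3), (3, 1), (3, 4), (1, 1), (3, 3), (4, 4)}

loops = {x for (x, y) in R if x == y}             # pairs like ⟨1, 1⟩ draw loops
isolated = {n for n in domain
            if all(n not in pair for pair in R)}  # nodes with no arrows at all
print(loops)     # {1, 3, 4}
print(isolated)  # {2}
```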
If we wanted, we could extend our graphical conventions, making our diagrams more
complex in order to depict more complex interpretations. For example, we could in‐
troduce another kind of arrow (maybe with a dashed shaft) to represent a further two‐
place predicate.3 We could add names as labels attached to particular objects. To
symbolise the extension of a one‐place predicate, we might simply adopt the approach
from §21.8, and mark a shaded region around some particular objects and stipulate
that the thus encircled objects (and only them) are to fall in the extension of some
predicate ‘𝐻’, say.4
All of these graphical innovations are used in Figure 21.8, which graphically represents
the following interpretation:
domain: 2, 3, 4, 5, 6
𝐺: ___1 ⩾ 4;
𝐷: ___1 is distinct from and exactly divisible by ___2;
𝑇: ___1 + 3 = ___2;
𝑓: 4.
Here the label ‘𝑓’ represents the name attached to 4, the blue dotted arrow represents
the relation assigned to 𝑇, the black solid arrow represents the relation assigned to 𝐷,
and the grey ellipse represents the collection of things in the domain which fall in the
extension assigned to 𝐺 .
how directed graphs can help grasp an interpretation, because the properties of binary
relations often correspond to easily grasped conditions on the associated graphs.
› This section should be regarded as optional, and only for the dedicated student of
logic.
Figure 21.10: A graph of the transitive relation ‘older than’ on some University of Ad‐
elaide buildings.
there is an edge from 1 (𝑥) to 3 (𝑦), and from 3 (𝑦) to 3 (𝑧), and there is the ‘shortcut’
from 1 (𝑥) to 3 (𝑧). This may not be how you were thinking of transitivity, but look back
at the definition, which is phrased in terms of picking any pairs from the domain. This
even includes those pairs consisting of a node and itself.
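The ‘shortcut’ condition can be checked by brute force over the pairs. Here is an illustrative Python sketch (not an official definition): for every ⟨𝑥, 𝑦⟩ and ⟨𝑦, 𝑧⟩ in the relation, the shortcut ⟨𝑥, 𝑧⟩ must be there too.

```python
# Transitivity: whenever ⟨x, y⟩ and ⟨y, z⟩ are in R, so is ⟨x, z⟩.
def is_transitive(R):
    return all((x, z) in R
               for (x, y) in R
               for (y2, z) in R if y == y2)

print(is_transitive({(1, 3), (3, 3), (1, 1)}))  # True: every shortcut is present
print(is_transitive({(1, 3), (3, 4)}))          # False: ⟨1, 4⟩ is missing
```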
A symmetric relation is ‘___1 lives next door to ___2’ on the domain of people. If Alf lives
next to Beth, then Beth also lives next door to Alf. An asymmetric relation is ‘___1 is
greater than ___2’ on the natural numbers: if 𝑥 > 𝑦, then it cannot also be that 𝑦 > 𝑥.
If we consider not ‘greater than’, but the weaker relation ‘greater than or equal to’, we
see an example of an antisymmetric relation: if 𝑥 ≥ 𝑦 then it can be the case that 𝑦 ≥ 𝑥
only in the special case where 𝑥 = 𝑦. An example of a nonsymmetric relation might be
the relation ‘___1 loves ___2’ on the domain of people: sometimes love is requited, so
that both members of a given pair love each other; and sometimes it is unrequited.
Pictorially, whenever we have an arrow from one node to another, if there is also a ‘re‐
verse’ arrow back from the second to the first, then the relation depicted is symmetric.
See the depiction of the ‘next to’ relation in Figure 21.12. A relation is asymmetric if
there are never such reverse arrows. A relation is antisymmetric if the only time there
are arrows from 𝑥 to 𝑦 and back is when 𝑥 = 𝑦 and there is a loop. I depict both rela‐
tions > and ≥ in Figure 21.13; the difference is that the antisymmetric relation ≥
has an arrow from each node to itself.
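These three conditions translate directly into checks on the set of pairs. The following Python sketch is illustrative only; the toy relations > and ≥ on {0, 1, 2} mirror Figure 21.13:

```python
# Symmetry family, checked directly on the extension.
def symmetric(R):     return all((y, x) in R for (x, y) in R)
def asymmetric(R):    return all((y, x) not in R for (x, y) in R)
def antisymmetric(R): return all(x == y for (x, y) in R if (y, x) in R)

gt  = {(2, 1), (2, 0), (1, 0)}          # > on {0, 1, 2}
geq = gt | {(0, 0), (1, 1), (2, 2)}     # ≥ adds a loop at each node

print(asymmetric(gt))      # True: no reverse arrows at all
print(antisymmetric(geq))  # True: the only reverses are the loops
print(symmetric(geq))      # False
```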
The observant reader will have noticed that, among these definitions of properties of
binary relations, the only definition which actually mentions the domain is the defini‐
tion of reflexivity. All the other definitions are conditional in form: they say, if certain
pairs are in the extension, then certain other pairs will (or won’t) be too. We needn’t
specify a domain to check whether these conditionals hold of any relation. But to check
whether a relation is reflexive, we need not only the ordered pairs of the relation, but
also what the domain is, so we can see if any members of the domain are missing from
the relation.5
These properties of binary relations are not completely independent of each other. For
example, a reflexive relation cannot be asymmetric: a relation ℜ is asymmetric only if
it is never the case that both ⟨𝑥, 𝑦⟩ and ⟨𝑦, 𝑥⟩ are in ℜ; but if the relation is reflexive, then
every pair consisting of something and itself is in ℜ, and nothing prohibits us from
picking the same value for 𝑥 and 𝑦. A reflexive relation can at best be antisymmetric.
A relation which is reflexive, symmetric and transitive is known as an EQUIVALENCE RE‐
LATION. We’ve already established that identity is an equivalence relation in §18.5. But
5 Reflexivity is an extrinsic property of a relation – you can’t tell just from the extension whether a relation
is reflexive, because it is relative to the domain from which the relata of the relation may be drawn. The
others are intrinsic properties of a relation; whether the relation has them is determined just by which
pairs are in the relation. The very same set of pairs might be reflexive on one domain and nonreflexive
on another, but it will be symmetric on every domain if it is symmetric on any.
0 1 2
Figure 21.13: > (black arrows) and ≥ (orange dotted arrows) on the domain {0, 1, 2}.
there are other equivalence relations too: consider ‘___1 is the same height as ___2’.
This relation will structure the domain into clusters of people with the same height as
each other. Such a division into groups is known as a PARTITION, and the individual
groups are known as cells. When you partition a domain, you sort the domain into
cells which are uniform with respect to a given feature – in this case, height. There will
be no connections between cells of the partition, but within each cell, each person will
be related to every other person in the cell. These cells are also known as EQUIVALENCE
CLASSES. Identity is the extreme case of an equivalence relation, because it partitions
the domain into cells containing entities that are equivalent in every respect, i.e., are
identical, and hence each cell contains just one individual member.
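The way an equivalence relation partitions a domain can be made concrete. In this illustrative Python fragment (the people and heights are invented), grouping by height yields the cells of the partition:

```python
from collections import defaultdict

# 'Same height as' partitions a toy domain into equivalence classes.
heights = {'Ann': 170, 'Bob': 182, 'Cal': 170, 'Dee': 182, 'Eve': 158}

cells = defaultdict(set)
for person, h in heights.items():
    cells[h].add(person)   # within a cell, everyone relates to everyone

print(sorted(map(sorted, cells.values())))
# [['Ann', 'Cal'], ['Bob', 'Dee'], ['Eve']]
```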
A relation on a domain 𝐷 is TOTAL iff for any 𝑥 and 𝑦 in 𝐷, either ⟨𝑥, 𝑦⟩ or ⟨𝑦, 𝑥⟩ (or both) is
in the extension.6 Any two things are related, in some way, by a total relation. We can
use the notion of totality to define a kind of relation that is of particular mathematical
importance:
6 If the predicate ‘𝑅’ is assigned ℜ as its extension in some interpretation, then ℜ is total iff ‘∀𝑥∀𝑦(𝑅𝑥𝑦 ∨
𝑅𝑦𝑥)’ holds in this interpretation.
{𝑎, 𝑏}
{𝑏} {𝑎}
Figure 21.14: Black arrows indicate both ⊂ and ⊆, dotted arrows indicate ⊆ only, on the
domain ℘{𝑎, 𝑏} = {{𝑎, 𝑏}, {𝑎}, {𝑏}, ∅}.
strict: a total order with ties is ≥ on the natural numbers, or the relation ‘___1 is no
shorter than ___2’, which allows that whatever fills the first slot might be either taller
than, or the same height as, whatever fills the second.
If not everything in the domain is comparable, or ordered with respect to each other,
the relation is partial. Consider a railway network, in which all lines radiate from a
central station, like the Adelaide Metro network. The relation ‘___1 is no further
along the line than ___2’ is a partial order.7 But there are many pairs of incomparable
stations: Woodville is not on the same line as Oaklands, so neither is further along the
line than the other. (We could create a total order, perhaps by measuring distance or
counting stations along the line, and say that two stations are equally far away iff they
are the same number of stops from Adelaide. But we are focussed here on the partial
order induced by the actual railway network.) A strict partial order would be ‘___1 is
further along the line than ___2’, which excludes ties.
A mathematical example of a partial order is the subset relation ⊆. Consider some set
𝑋 containing just 𝑎 and 𝑏 as members. Recall from §21.8 that the subsets of 𝑋 are those
sets such that everything in them is also in 𝑋. So 𝑋 is a subset of 𝑋, as is the set just
containing 𝑎, and the set just containing 𝑏. Finally, the empty set (with no members)
obviously is a subset of any set (trivially – nothing in it is absent from any other set).
That gives the structure of subsets depicted in Figure 21.14. The corresponding strict
partial order ⊂ results from removing any loops from a set to itself in that diagram. This
is obviously a partial order, since {𝑎} is neither a subset of {𝑏} nor vice versa, though each
is a subset of the original set {𝑎, 𝑏}, and the empty set ∅ is a subset of both.
7 Every station is no further along than itself; pick any two distinct stations, such as Woodville and
Kilkenny: if Kilkenny is no further along than Woodville, then Woodville is further along than Kilkenny;
and if we pick three stations, such as Oaklands, Brighton and Seacliff, then since Oaklands is no further
than Brighton, and Brighton is no further than Seacliff, it follows that Oaklands is no further than Seacliff.
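The subset order of Figure 21.14 is easy to reconstruct computationally. In this Python sketch (illustrative; frozensets stand in for 𝑋, {𝑎}, {𝑏} and ∅), incomparability is exactly what makes the order merely partial:

```python
from itertools import combinations

# ⊆ on the powerset of {a, b}, as a set of ordered pairs.
X = frozenset({'a', 'b'})
powerset = [frozenset(c) for r in range(3) for c in combinations(X, r)]

leq = {(s, t) for s in powerset for t in powerset if s <= t}
a, b = frozenset({'a'}), frozenset({'b'})
print((a, b) in leq or (b, a) in leq)   # False: {a} and {b} are incomparable
print((a, X) in leq and (b, X) in leq)  # True: both are subsets of {a, b}
```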
Practice exercises
A. For each of the following collections of individuals, properties and relations, con‐
struct an interpretation that includes any appropriate extensions they determine, and
an appropriate domain. Use your personal judgment if needed, and comment on any
difficulties.
1. The relation ‘___1 is just as rich as ___2’, Bill Gates, Elon Musk, and Warren
Buffett;
4. The relation ‘___1 is part of ___2’, your left leg, your lower body, you.
B. Since Quantifier is an extensional language, if we let the Quantifier name ‘𝑠’ denote Su‐
perman, and the name ‘𝑐’ denote Clark Kent, then ‘𝑠 = 𝑐’ will be true. But it will be
true for the same reason that ‘𝑠 = 𝑠’ is true: because ⟨Clark Kent, Clark Kent⟩ is in
the extension of ‘=’. Can you use this observation as the basis for any argument that
English is not an extensional language?
C. Using some of the methods introduced in this section (Euler diagrams and directed
graphs), give a graphical representation of the following interpretations. You will need
to rely on your general knowledge to prepare these diagrams.
1. domain: people
𝑀: ___1 is a musician;
𝑊: ___1 is a woman;
𝐸: ___1 is an economist.
D. These questions concern the material from the optional section on binary relations
(§21.10).
22
Truth in Quantifier
We now know what interpretations are. Since, among other things, they tell us which
predicates are true of which objects – and pairs, etc., of objects –, they provide us with
an account of the truth of atomic sentences. But we must show how to extend that to an
account of what it is for any Quantifier sentence to be true or false in an interpretation.
We know from §20 that there are three kinds of sentence in Quantifier:
› atomic sentences (i.e., atomic formulae of Quantifier which have no free vari‐
ables);
interpretation, this is true iff ‘is wise’ is true of whatever is named by ‘Aristotle’, i.e., iff
Aristotle is wise. In fact (in the actual world) Aristotle is wise. So the sentence is true.
Equally, ‘𝑊𝑏’ is false on our go‐to interpretation, because George W Bush is not wise.
Likewise, on this interpretation, ‘𝑅𝑎𝑏’ is true iff the object named by ‘𝑎’ was born before
the object named by ‘𝑏’. Well, Aristotle was born before Bush. So ‘𝑅𝑎𝑏’ is true. Equally,
‘𝑅𝑎𝑎’ is false: Aristotle was not born before Aristotle. We can summarise these intuitive
ideas more generally:
Two other kinds of atomic sentences exist: zero‐place predicates, and identity sen‐
tences.
› Identity sentences (two names flanking the identity predicate) are also easy to
handle. Where 𝒶 and 𝒷 are any names, 𝒶 = 𝒷 is true in an interpretation iff:
𝒶 and 𝒷 have the same extension (are assigned the very same object) in that
interpretation.1
So in our go‐to interpretation, ‘𝑎 = 𝑏’ is false, since Aristotle is distinct from
Bush; but ‘𝑎 = 𝑎’ is true.
1 Of course this is just the result of applying the conditions above for atomic sentences with two‐place
predicates to the special constant extension assigned to ‘=’: 𝒶 = 𝒷 is true iff the pair of the extensions
of 𝒶 and 𝒷 is in the extension of identity, which can only happen if the extensions are the same.
202 INTERPRETATIONS
This presents the very same information as the schematic truth tables for the connect‐
ives; it just does it in a slightly different way. Some examples will probably help to
illustrate the idea. On our go‐to interpretation:
› ‘¬𝑎 = 𝑏’ is true;
comes to sentences whose main connective is a quantifier. We cannot say that the
truth value of ‘∃𝑥𝐹𝑥’ depends on its syntax and the truth value of ‘𝐹𝑥’, because we have
no guidance in assigning a truth value to a formula with a free variable (though see
§22.6).
Here is a first naïve thought. We want to say that ‘∀𝑥ℱ𝑥 ’ is true iff ℱ is true of everything
in the domain. This should not be too problematic: our interpretation will specify
directly what ℱ is true of.
Unfortunately, this naïve first thought is not general enough. For example, we want to
be able to say that ‘∀𝑥∃𝑦ℒ𝑥𝑦’ is true just in case ‘∃𝑦ℒ𝑥𝑦’ is true of everything in the do‐
main. And this is problematic, since our interpretation does not directly specify what
‘∃𝑦ℒ𝑥𝑦’ is to be true of. Instead, whether or not this is true of something should follow
just from the interpretation of ℒ , the domain, and the meanings of the quantifiers.
So here is a second naïve thought. We might try to say that ‘∀𝑥∃𝑦ℒ𝑥𝑦’ is to be true
in an interpretation iff ∃𝑦ℒ𝒶𝑦 is true for every name 𝒶 that we have included in our
interpretation. And similarly, we might try to say that ∃𝑦ℒ𝒶𝑦 is true just in case ℒ𝒶𝒷
is true for some name 𝒷 that we have included in our interpretation. (This kind of
approach is known as SUBSTITUTIONAL QUANTIFICATION – our own approach below in
§22.4 will make use of substitution, but in a more sophisticated way.)
Unfortunately, this is not right either. To see this, observe that in our go‐to interpret‐
ation, we have only given interpretations for two names, ‘𝑎’ and ‘𝑏’. But the domain
– all people born before the year 2000 CE – contains many more than two people. I
have no intention of trying to name all of them! In most interpretations, things in the
domain go unnamed; we can’t understand quantifiers as ranging over only actually
named things without missing out some things in such interpretations.2
So here is a third thought. (And this thought is not naïve, but correct.) Although
it is not the case that we have named everyone, each person could have been given a
name. So we should focus on this possibility of extending an interpretation, by adding
a previously uninterpreted name to our interpretation. I shall offer a few examples of
how this might work, centring on our go‐to interpretation, and I shall then present the
formal definition.
› In our go‐to interpretation, ‘∃𝑥𝑅𝑏𝑥 ’ should be true. After all, in the domain,
there is certainly someone who was born after Bush. Lady Gaga is one of those
2 Can we solve this issue by the brute force proposal to simply assign everything in the domain a name?
That is not possible, because there are not enough names. Remember (§20) that names in Quantifier
consist of the English letters 𝑎, …, 𝑟 with numerical subscripts if needed. So every name in Quantifier
consists of a letter and a finite numeral. It turns out you can arrange all these names in a single, infinitely
long list (basically, represent each name by a code number, and order the names by the size of the code
number). The items in a list like that can be enumerated. But Cantor famously showed that some
things are too many to be enumerated; any single list which attempts to include all of them would
inevitably miss some. (One example: while you can enumerate all the finite sequences of items from
a finite alphabet, you cannot enumerate all the infinite sequences of such items.) So some collections
are too many for them all to have names, because we’d run out of names before labelling them all. So
substitutional quantification can’t handle all examples of quantification.
Cantor’s result, and the ‘diagonal argument’ he used to establish it, is discussed in ch. 5 of Tim Button
(2021) Set Theory: An Open Introduction, st.openlogicproject.org/settheory‐screen.pdf.
› In our go‐to interpretation, ‘∃𝑥(𝑊𝑥 ∧ 𝑅𝑥𝑎)’ should also be true. After all, in the
domain, there is certainly someone who was both wise and born before Aristotle.
Socrates is one such person. Indeed, if we were to extend our go‐to interpretation
by letting a previously uninterpreted name, ‘𝑐 ’, denote Socrates, then ‘𝑊𝑐 ∧ 𝑅𝑐𝑎’
would be true on this extended interpretation. Again, this should surely suffice
to make ‘∃𝑥(𝑊𝑥 ∧ 𝑅𝑥𝑎)’ true on the original go‐to interpretation.
› In our go‐to interpretation, ‘∀𝑥∃𝑦𝑅𝑥𝑦’ should be false. After all, consider the
last person born in the year 1999. I don’t know who that was, but if we were to
extend our go‐to interpretation by letting a previously uninterpreted name, ‘𝑑 ’,
denote that person, then we would not be able to find anyone else in the domain
to denote with some further previously uninterpreted name, perhaps ‘𝑒’, in such
a way that ‘𝑅𝑑𝑒’ would be true. Indeed, no matter whom we named with ‘𝑒’, ‘𝑅𝑑𝑒’
would be false. And this observation is surely sufficient to make ‘∃𝑦𝑅𝑑𝑦’ false in
our extended interpretation. And this is sufficient to make ‘∀𝑥∃𝑦𝑅𝑥𝑦’ false on
the original interpretation.
Notice how the extensions multiply: the quantifiers involved are mirrored in the inter‐
pretations we are asked to consider. ‘∃𝑥𝐹𝑥’ is true if there is some extended interpreta‐
tion which makes the newly named thing 𝐹 , while ‘∀𝑥𝐹𝑥 ’ is true if every extended interpretation
makes the newly named thing 𝐹 . In effect, we handle quantification over possibly un‐
named objects in a domain by quantifying over potential interpretations that assign
names to those objects while keeping everything else the same.
Some readers might prefer a more visual aid. If the domain is large, we need to consider
lots of extended interpretations, because we need to consider assigning each of the
items in the domain a new name. So I will consider a very simple interpretation J, with
two objects and some binary relation between them, pictured here:
[Diagram: two unlabelled dots, with arrows between them giving the extension of ‘𝑄’.]
Note there are no labels on these nodes. No names are assigned to objects in this
interpretation. The arrows give the extension of ‘𝑄’ in this interpretation J. Suppose we
want to consider whether ‘∃𝑥∀𝑦𝑄𝑦𝑥 ’ is true in J. Then we want to know whether there
is some way of assigning a new name ‘𝑑 ’ to entities in this domain to make ‘∀𝑦𝑄𝑦𝑑 ’
come out true. So there are two interpretations to consider, L and R, because there are
two things in the domain to which ‘𝑑 ’ might be attached:
[Diagram: interpretations L and R, each with the same two dots; in L, ‘𝑑 ’ labels the left dot, and in R, ‘𝑑 ’ labels the right dot.]
The original sentence is true in J iff ‘∀𝑦𝑄𝑦𝑑 ’ is true in one of these extended interpret‐
ations. But in turn that is the case iff ‘𝑄𝑓𝑑 ’ is true in each interpretation extending the
already extended interpretation by adding yet another new name ‘𝑓’. So now there are
four interpretations to consider: two extensions of L, and two extensions of R:
[Diagram: interpretations LL, LR, RL and RR, the four ways of adding a further name ‘𝑓 ’ to L and to R.]
We now have all the interpretations we’ll need. Going through it step by step, checking
‘𝑄𝑓𝑑 ’ in each of LL–RR, and then ‘∀𝑦𝑄𝑦𝑑 ’ in L and R, we reach the verdict:
‘∃𝑥∀𝑦𝑄𝑦𝑥 ’ is true in J.
All of these interpretations that spawn from our original interpretation share the same
domain, and the same extension for ‘𝑄’. They differ only in that the extended interpret‐
ations add new labels to the items in the domain, interpreting the previously unused
names.
Let’s try another visual example. Consider this interpretation I, with a domain of three
things, an interpreted name ‘𝑎’ and two interpreted predicates ‘𝐹 ’ and ‘𝐺 ’.
[Diagram: interpretation I, with three dots; ‘𝑎 ’ labels one of them, the ‘𝐹 ’‐region encloses all three, and the ‘𝐺 ’‐region encloses just one.]
There are three extended interpretations, II–IV, corresponding to the three ways of
adding some unused name ‘𝑑 ’:
[Diagrams: interpretations II, III and IV, which attach the new label ‘𝑑 ’ to each of the three dots in turn, keeping ‘𝑎 ’, ‘𝐹 ’ and ‘𝐺 ’ fixed.]
We can see that ‘∀𝑥𝐹𝑥 ’ is true in I, because in each of II–IV, ‘𝑑 ’ labels something in
the ‘𝐹 ’‐region. We can see that ‘∀𝑥𝐺𝑥 ’ is false in I, because in II and III, ‘𝑑 ’ labels
something not in the ‘𝐺 ’‐region. ‘∃𝑥𝐺𝑥’ is true in I, because interpretation IV attaches
‘𝑑 ’ to something in the ‘𝐺 ’‐region.
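This little check can also be carried out mechanically. Below is a computational sketch, not from the text: it assumes the three dots are the numbers 1, 2 and 3, with the ‘𝐹 ’‐region containing all of them and the ‘𝐺 ’‐region containing only 3, matching the verdicts just given.

```python
# A toy model of interpretation I: three objects, with 'F' true of
# everything and 'G' true of one object only. (The choice of numbers
# as objects is an illustrative assumption.)
domain = {1, 2, 3}
F = {1, 2, 3}   # the F-region encloses every dot
G = {3}         # the G-region encloses just one dot

# 'AxFx' is true iff every way of attaching a new name 'd' to an
# object makes 'Fd' true in the resulting extended interpretation.
forall_F = all(d in F for d in domain)

# 'AxGx' is false: some extensions attach 'd' to a non-G object.
forall_G = all(d in G for d in domain)

# 'ExGx' is true iff some extension puts 'd' in the G-region.
exists_G = any(d in G for d in domain)

print(forall_F, forall_G, exists_G)  # True False True
```

Each pass of the loop plays the role of one extended interpretation: the loop variable `d` is the referent of the new name.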
∃𝑥(𝑅𝑒𝑥 ↔ 𝐹𝑥)
is a substitution instance of
∀𝑦∃𝑥(𝑅𝑦𝑥 ↔ 𝐹𝑥)
with the instantiating name ‘𝑒’, because ‘∃𝑥(𝑅𝑦𝑥 ↔ 𝐹𝑥)’|𝑒↷𝑦 turns out to be ‘∃𝑥(𝑅𝑒𝑥 ↔
𝐹𝑥)’.
Armed with this notation, the rough idea is as follows. The sentence ∀𝓍𝒜 will be true
iff 𝒜|𝒸↷𝓍 is true no matter what object (in the domain) we name with 𝒸. Similarly, the
sentence ∃𝓍𝒜 will be true iff there is some way to assign the name 𝒸 to an object that
makes 𝒜|𝒸↷𝓍 true. More precisely, we stipulate:
That is: we pick a previously uninterpreted name that doesn’t appear in 𝒜 .3 We uni‐
formly replace any free occurrences of the variable 𝓍 in 𝒜 by our previously uninter‐
preted name, which creates a substitution instance of ‘∀𝓍𝒜 ’ and ‘∃𝓍𝒜 ’. Then if this
substitution instance is true on every (respectively, some) way of adding an interpret‐
ation of the previously uninterpreted name to our existing interpretation, then ‘∀𝓍𝒜 ’
(respectively, ‘∃𝓍𝒜 ’) is true on that existing interpretation.
To be clear: all this is doing is formalising (very pedantically) the intuitive idea ex‐
pressed above. The result is a bit ugly, and the final definition might look a bit opaque.
Hopefully, though, the spirit of the idea is clear.
The trickiest part of all of this is keeping things straight when you have nested quan‐
tifiers, particularly quantifiers of different types. As above, when we considered
‘∃𝑥∀𝑦𝑄𝑦𝑥 ’, we needed first to consider interpretations that assigned some new name
‘𝑑 ’, and then, for each of those interpretations, we generated a further family of new
interpretations assigning some other new name ‘𝑓 ’. If we apply our truth conditions,
we can summarise the basic cases of two nested quantifiers as follows:
3 There will always be such a previously uninterpreted name: any given sentence of Quantifier only con‐
tains finitely many names, but Quantifier has a potentially infinite stock of names to draw from.
Note that these rules are listed here for your convenience; they can be derived directly
from the stipulation above, so you don’t need to learn them separately.
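These nested truth conditions can be prototyped directly, which may help with keeping the bookkeeping straight. The sketch below is not from the text: formulas are nested tuples, and since J's arrows are not reproduced here, I simply assume an extension for ‘𝑄’ on which ‘∃𝑥∀𝑦𝑄𝑦𝑥 ’ comes out true, as in the worked example.

```python
import itertools

# A miniature substitutional evaluator. Formulas are tuples:
#   ('Q', t1, t2)              -- atomic; terms are names or variables
#   ('all', v, body) / ('exists', v, body)
# 'names' maps each interpreted name to its referent.

def substitute(formula, var, name):
    """Replace free occurrences of var in formula with name."""
    if formula[0] == 'Q':
        return ('Q',) + tuple(name if t == var else t for t in formula[1:])
    quant, v, body = formula
    if v == var:                      # var is rebound below: leave alone
        return formula
    return (quant, v, substitute(body, var, name))

def true_in(formula, domain, names, Q, _fresh=itertools.count()):
    """Truth in an interpretation, via fresh-name extensions."""
    if formula[0] == 'Q':
        return (names[formula[1]], names[formula[2]]) in Q
    quant, var, body = formula
    new_name = f'c{next(_fresh)}'     # a previously uninterpreted name
    verdicts = (true_in(substitute(body, var, new_name),
                        domain, {**names, new_name: obj}, Q)
                for obj in domain)    # one extension per object
    return all(verdicts) if quant == 'all' else any(verdicts)

# Assumed arrows: both dots point at dot 2, which also loops to itself.
domain, Q = {1, 2}, {(1, 2), (2, 2)}
print(true_in(('exists', 'x', ('all', 'y', ('Q', 'y', 'x'))),
              domain, {}, Q))        # True
```

The recursion mirrors the quantifier rules exactly: a universal demands truth in every extension by a fresh name, an existential in at least one.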
Why Restrict to New Names? Here is the first proposed simplification: do away
with the requirement for the name in the extended interpretation to be new. That
would give us this proposal:
This only differs from the correct proposal in cases where the name ‘𝑐 ’ is already used.
Suppose we consider ‘∀𝑥𝑐 = 𝑥 ’. This is false in any interpretation with two or more
things – they can’t both be 𝑐 . But ‘𝑐 = 𝑥’|𝑐↷𝑥 is just ‘𝑐 = 𝑐 ’. And this is true in every
interpretation which assigns anything to be 𝑐 at all. Since ‘𝑐 = 𝑐 ’ is true in every ex‐
tended interpretation, this alternative rule wrongly predicts that ‘∀𝑥𝑐 = 𝑥 ’ is true in the
original interpretation. The problem arises, of course, because the name we are substi‐
tuting for the universally quantified variable interacts with the existing occurrences of
the name. The moral: always use a new, previously uninterpreted name.
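The failure can be made vivid computationally. A minimal sketch, assuming a two‐object domain in which ‘𝑐 ’ already names one of the objects (the numbering is mine):

```python
# 'Ax c=x' in a two-object interpretation where 'c' names object 1.
domain = {1, 2}
c_ref = 1                       # the existing referent of 'c'

# Correct rule: substitute a NEW name 'd' for 'x', and check 'c=d'
# on every way of interpreting 'd'.
correct = all(c_ref == d_ref for d_ref in domain)

# Faulty rule: reuse 'c' itself. 'c=x' with 'x' replaced by 'c' is
# 'c=c', which holds however the interpretation is extended.
faulty = all(c_ref == c_ref for _ in domain)

print(correct, faulty)  # False True: the faulty rule gets it wrong
```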
Why not substitute 𝑥 for 𝑐 ? The truth conditions above start with a quantified sen‐
tence, drop the quantifier, and substitute a name for the associated variable. Why do
things that way? Couldn’t we start by considering the truth values of some sentence
with a name across many interpretations, and then swap the name for a variable and
add a quantifier? Here is a second proposed alternative:
The problem here is that 𝒜 is actually not well‐defined. Consider this case, a sen‐
tence with a degenerate initial quantifier: ‘∀𝑥∃𝑥𝑅𝑥𝑥 ’. The official truth conditions say:
this sentence is true iff ‘∃𝑥𝑅𝑥𝑥 ’ is true in every extended interpretation that assigns
something to a new name. Since ‘∃𝑥𝑅𝑥𝑥 ’ has no names, it will be true on all of these
extended interpretations iff it is true in the original interpretation. So the initial ∀𝑥
quantifier is redundant.
What happens on the alternative approach? It turns out there are several candidates
for 𝒜 :
The problem is, these different candidates for 𝒜 don’t all give the same results in a given
interpretation. (‘∃𝑥𝑅𝑥𝑥 ’ certainly doesn’t always have the same truth value as ‘∃𝑥𝑅𝑥𝑐 ’.)
This means that the alternative approach doesn’t actually provide a way of assigning a truth value to a
quantified sentence. We need our definition of truth to determine an unambiguous
answer, and this proposal fails to meet that requirement.
22.6 Satisfaction
The discussion of truth in §§22.1–22.4 only ever involved assigning truth values to sen‐
tences. When confronted with formulae involving free variables, we altered them by
substituting a previously uninterpreted name for the variable, converting it into a sen‐
tence, and temporarily extending the interpretation to cover that previously uninter‐
preted name. This is a significant departure from the definition of truth for Senten‐
tial sentences in §8.3. There, we showed how the truth value of a complex Sentential
sentence in a valuation depended on the truth value of its constituents in that same
valuation. By contrast, on the approach just outlined, the truth value of ‘∃𝑥𝐹𝑥’ in an
interpretation depends on the truth value of some other sentence ‘𝐹𝑐 ’, which is not a
constituent of ‘∃𝑥𝐹𝑥’, in some different (although related) interpretation!
There is another way to proceed, which allows arbitrary formulae of Quantifier, even
those with free variables, to be assigned (temporarily) a truth value. This approach can
let the truth value of ‘∃𝑥𝐹𝑥’ in an interpretation depend on the temporary truth value
of ‘𝐹𝑥’ in that same interpretation. This is conceptually neater (with no multiplying
interpretations) than the approach just introduced, and I present it briefly here as an
alternative to the substitutional approach of the preceding sections.
› This section should be regarded as optional, and only for the dedicated student of
logic.
The inspiration for the approach comes, once again, from thinking of variables in Quan‐
tifier as behaving like pronouns in English. In §15.5 we gave a gloss of ‘Someone is angry’
as ‘Some person is such that: they are angry’. Concentrate on ‘they are angry’. This sen‐
tence featuring a bare pronoun doesn’t express any specific claim intrinsically. But we
can, temporarily, fix a referent for the pronoun ‘they’ – temporarily elevating it to the
status of a name. If we do so, the sentence can be evaluated. We can introduce a ref‐
erent by pointing: ‘They [points to someone] are angry’. Or we can fix a referent by
simply assigning one: ‘Consider that person over there. They are angry’. If we can find
someone or other to temporarily be the referent of the pronoun ‘they’, then it will be
true that there is someone such that they are angry. If no matter who we fix as the
referent, they are angry, then it will be true that everyone is such that they are angry.
This is, in a nutshell, the idea we will use to handle quantification in Quantifier. Let us
introduce some terminology:
If we have an interpretation, and a variable assignment, then we can evaluate every for‐
mula of Quantifier – not only sentences. Of course, the evaluation of the open formulae
will be very fragile, since even given a single background interpretation, a formula like
‘𝐹𝑥’ might be true relative to one variable assignment and false relative to another.
Let us start, as before, by giving rules for evaluating atomic formulae of Quantifier,
given an interpretation and a variable assignment.
These are very similar clauses to those we saw for atomic sentences in §22.4. Indeed,
when we are considering atomic sentences of Quantifier, the mention of a variable as‐
signment is redundant, since no atomic sentence contains a variable. (If an atomic
formula contains a variable, the variable would be free and the formula thus not a
sentence.)
The recursion clauses that extend truth for atomic formulae to truth for arbitrary for‐
mulae are these (I omit the clauses for ∨, →, and ↔, which you can easily fill in yourself,
following the model in §22.2):
The last two clauses are where this approach is strongest. Rather than consider‐
ing some substitution instance of ∀𝓍𝒜 , we simply consider the direct constituent 𝒜 .
Rather than considering all variations on the original interpretation which include
some previously uninterpreted name, we simply consider all ways of varying what is as‐
signed to 𝓍 by the original variable assignment, but keeping everything else unchanged.
If you see the rationale for the clauses offered in §22.4, you can see why the clauses just
offered are appropriate.
We now have the idea of truth on a variable assignment over an interpretation. But what
we want – if this alternative approach is to yield the same end result – is truth in an
interpretation. Notice that, given an interpretation, varying the variable assignment
can change the truth value of a formula with free variables. But it cannot change the
truth value of a formula which is a sentence, so if a sentence is true on one variable
assignment over an interpretation, it is true on every variable assignment over that
interpretation. So we can reintroduce the notion of truth in an interpretation, like so:
Here’s how this works in practice. Suppose we want to figure out whether the sentence
‘∀𝑥∃𝑦𝐿𝑥𝑦’ is true on an interpretation which associates the two‐place predicate ‘𝐿’ with
the relation ‘___1 is no heavier than ___2’, and has as its domain the planets in our
solar system. We might reason as follows:
For each way of picking planets to be the values of ‘𝑥’ and ‘𝑦’, either we pick
two different planets, and one is lighter than the other (no two planets have
the same mass); or we pick the same planet, and ‘they’ are identical in mass.
In either case, we can always find something no heavier than anything we
pick. So for any variable assignment to ‘𝑥 ’ over this interpretation, we can
then assign something to ‘𝑦’ so as to make ‘𝐿𝑥𝑦’ true on that joint assign‐
ment. Hence no matter what we assign to ‘𝑥’, ‘∃𝑦𝐿𝑥𝑦’ is true on that assign‐
ment. But since that is true no matter what we assign to ‘𝑥’, ‘∀𝑥∃𝑦𝐿𝑥𝑦’ is
true on every variable assignment over this interpretation. But the latter is
a sentence, and so it is true in this interpretation.
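The same reasoning can be run mechanically over variable assignments. A sketch under stated assumptions: the planets and masses below are made up, and all that matters is that no planet is heavier than itself.

```python
# Toy masses (illustrative values; any would do).
masses = {'Mercury': 0.33, 'Earth': 5.97, 'Jupiter': 1898.0}

def L(x, y):
    """'Lxy': the value of 'x' is no heavier than the value of 'y'."""
    return masses[x] <= masses[y]

# 'AxEy Lxy': for every assignment to 'x', some reassignment of 'y'
# alone makes 'Lxy' true on the resulting joint assignment.
result = all(any(L(x, y) for y in masses) for x in masses)
print(result)  # True
```

The inner loop varies only what ‘𝑦’ is assigned, holding the assignment to ‘𝑥’ fixed, just as the clause for ∃ requires.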
Let us say that a sequence of objects ⟨𝑎1 , …, 𝑎𝑛 ⟩ SATISFIES a formula 𝒜 in which the
variables 𝓍1 , …, 𝓍𝑛 occur freely iff there is a variable assignment over an interpretation
whose domain includes each 𝑎𝑖 , and which assigns each 𝑎𝑖 to the variable 𝓍𝑖 , and on
which 𝒜 is true over that interpretation. What we have expressed in terms of variable
assignments could have been expressed, a little more awkwardly, using the notion of
some objects satisfying a formula. Indeed, this is how Alfred Tarski, the inventor of
this approach to truth in Quantifier, first introduced the idea.4
4 A translation of his original 1933 paper is Alfred Tarski (1983) ‘The Concept of Truth in Formalized
Languages’ in his Logic, Semantics, Metamathematics, Indianapolis: Hackett, pp. 152–278. It is quite
technical in places.
Practice exercises
A. Consider the following interpretation:
Determine whether each of the following sentences is true or false in that interpreta‐
tion:
1. 𝐵𝑐
2. 𝐴𝑐 ↔ ¬𝑁𝑐
3. 𝑁𝑐 → (𝐴𝑐 ∨ 𝐵𝑐)
4. ∀𝑥𝐴𝑥
5. ∀𝑥¬𝐵𝑥
6. ∃𝑥(𝐴𝑥 ∧ 𝐵𝑥)
7. ∃𝑥(𝐴𝑥 → 𝑁𝑥)
8. ∀𝑥(𝑁𝑥 ∨ ¬𝑁𝑥)
9. ∃𝑥𝐵𝑥 → ∀𝑥𝐴𝑥
B. Consider the following interpretation:
Determine whether each of the following sentences is true or false in that interpreta‐
tion:
1. 𝐻𝑐
2. 𝐻𝑒
3. 𝑀𝑐 ∨ 𝑀𝑒
4. 𝐺𝑐 ∨ ¬𝐺𝑐
5. 𝑀𝑐 → 𝐺𝑐
6. ∃𝑥𝐻𝑥
7. ∀𝑥𝐻𝑥
8. ∃𝑥¬𝑀𝑥
9. ∃𝑥(𝐻𝑥 ∧ 𝐺𝑥)
10. ∃𝑥(𝑀𝑥 ∧ 𝐺𝑥)
11. ∀𝑥(𝐻𝑥 ∨ 𝑀𝑥)
12. ∃𝑥𝐻𝑥 ∧ ∃𝑥𝑀𝑥
13. ∀𝑥(𝐻𝑥 ↔ ¬𝑀𝑥)
14. ∃𝑥𝐺𝑥 ∧ ∃𝑥¬𝐺𝑥
15. ∀𝑥∃𝑦(𝐺𝑥 ∧ 𝐻𝑦)
C. Following the diagram conventions introduced at the end of §21, consider the fol‐
lowing interpretation:
[Diagram: a directed graph on five nodes labelled 1–5; the arrows give the extension of ‘𝑅’.]
Determine whether each of the following sentences is true or false in that interpreta‐
tion:
1. ∃𝑥𝑅𝑥𝑥
2. ∀𝑥𝑅𝑥𝑥
3. ∃𝑥∀𝑦𝑅𝑥𝑦
4. ∃𝑥∀𝑦𝑅𝑦𝑥
5. ∀𝑥∀𝑦∀𝑧((𝑅𝑥𝑦 ∧ 𝑅𝑦𝑧) → 𝑅𝑥𝑧)
6. ∀𝑥∀𝑦∀𝑧((𝑅𝑥𝑦 ∧ 𝑅𝑥𝑧) → 𝑅𝑦𝑧)
7. ∃𝑥∀𝑦¬𝑅𝑥𝑦
8. ∀𝑥(∃𝑦𝑅𝑥𝑦 → ∃𝑦𝑅𝑦𝑥)
9. ∃𝑥∃𝑦(¬𝑥 = 𝑦 ∧ 𝑅𝑥𝑦 ∧ 𝑅𝑦𝑥)
10. ∃𝑥∀𝑦(𝑅𝑥𝑦 ↔ 𝑥 = 𝑦)
11. ∃𝑥∀𝑦(𝑅𝑦𝑥 ↔ 𝑥 = 𝑦)
12. ∃𝑥∃𝑦(¬𝑥 = 𝑦 ∧ 𝑅𝑥𝑦 ∧ ∀𝑧(𝑅𝑧𝑥 ↔ 𝑦 = 𝑧))
D. Why, when we are trying to figure out whether ‘∀𝑥𝑅𝑥𝑎’ is true in an interpreta‐
tion, do we need to consider whether ‘𝑅𝑐𝑎’ is true in some extended interpretation
with a new name ‘𝑐 ’? Why can’t we make do with substituting a name we’ve already
interpreted?
E. Explain why on page 207 we did not give the truth conditions for the existential
quantifier like this:
23 Semantic Concepts

Offering a precise definition of truth in Quantifier was more than a little fiddly. But
now that we are done, we can define various central logical notions. These will look
very similar to the definitions we offered for Sentential. However, remember that they
concern interpretations, rather than valuations.
So:
§23. SEMANTIC CONCEPTS 217
‘𝐹𝑎’ is true in this interpretation, while ‘𝐺𝑎’ is false. So the conditional ‘(𝐹𝑎 → 𝐺𝑎)’ is
false, and so ‘¬(𝐹𝑎 → 𝐺𝑎)’ is true. Suppose we add a previously unused name 𝑏 to this
interpretation: it will either denote a musician or a singer (I am including producers
as playing a musical instrument). So ‘(𝐹𝑏 ∨ 𝐺𝑏)’ will be true in each such extended
interpretation. ‘(𝐹𝑏 ∨ 𝐺𝑏)’ is ‘(𝐹𝑥 ∨ 𝐺𝑥)’|𝑏↷𝑥 , so by our semantic clauses, ‘∀𝑥(𝐹𝑥 ∨ 𝐺𝑥)’
is true in the original interpretation. Since there is an interpretation making each of
these sentences true, they are consistent.
Now look at ‘∃𝑦(𝑃𝑦∧¬𝑃𝑦)’. If this is true in an interpretation, assigning some extension
to ‘𝑃’, then some interpretation with the same extension assigned to ‘𝑃’ and some new
name ‘𝑏’ makes ‘(𝑃𝑏 ∧ ¬𝑃𝑏)’ true. But that can be the case only if the extension of ‘𝑏’ is
included in the extension of ‘𝑃’, and also isn’t included in that extension – impossible.
So whatever extension we assign to ‘𝑃’, ‘∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)’ is false. So that sentence is
false on every interpretation, and is a logical falsehood. Its negation is therefore a
logical truth: ‘¬∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)’. This is entirely general: the negation of a logical truth
(falsehood) is a logical falsehood (truth).
Look now at ‘∀𝑥¬𝐺𝑥 ’ and ‘¬∃𝑥𝐺𝑥’. Any interpretation which makes the first true must
be such that any extended interpretation with a new name ‘𝑏’ makes ‘¬𝐺𝑏’ true, i.e.,
any extended interpretation will make ‘𝐺𝑏’ false. Hence, there is no extended interpret‐
ation making ‘𝐺𝑏’ true; so ‘∃𝑥𝐺𝑥’ is false in the original interpretation, hence ‘¬∃𝑥𝐺𝑥’
is true. Each step in this line of argument can also be run in reverse; so ‘∀𝑥¬𝐺𝑥 ’ and
‘¬∃𝑥𝐺𝑥’ are true in exactly the same interpretations. They are logically equivalent.
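Because the argument quantifies over all interpretations, we cannot verify it by exhaustion in general, but we can spot‐check it over a small domain. The sketch below (three objects, my choice) tries every possible extension for ‘𝐺 ’:

```python
from itertools import chain, combinations

domain = {1, 2, 3}

# Every subset of the domain is a candidate extension for 'G'.
extensions = chain.from_iterable(combinations(domain, r)
                                 for r in range(len(domain) + 1))
for G in map(set, extensions):
    forall_not = all(d not in G for d in domain)    # 'Ax ~Gx'
    not_exists = not any(d in G for d in domain)    # '~Ex Gx'
    assert forall_not == not_exists
print("the two sentences agree on all 8 extensions of 'G'")
```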
Consider finally ‘∀𝑥𝐺𝑥 ’ and ‘𝐺𝑎 → ∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)’. These are inconsistent. Any inter‐
pretation making ‘𝐺𝑎 → ∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦))’ true must make ‘𝐺𝑎’ false – if it didn’t, it would
have to make its consequent true, but that’s a logical falsehood. But could ‘∀𝑥𝐺𝑥 ’ be
true in such an interpretation? No – that would require every extended interpretation
to make ‘𝐺𝑏’ true, even on the extended interpretation where the new name ‘𝑏’ is as‐
signed the same referent as the existing name ‘𝑎’, which is not in the extension of ‘𝐺 ’
in this interpretation. So no interpretation can make both of these sentences true.
Given this, any interpretation which makes ‘𝐺𝑎 → ∃𝑦(𝑃𝑦∧¬𝑃𝑦)’ true must make ‘∀𝑥𝐺𝑥 ’
false, and must therefore make ‘¬∀𝑥𝐺𝑥 ’ true. So there is no interpretation on which
‘𝐺𝑎 → ∃𝑦(𝑃𝑦 ∧ ¬𝑃𝑦)’ is true and in which ‘¬∀𝑥𝐺𝑥 ’ is false. This is a familiar notion
from Sentential: entailment. We will use the symbol ‘⊨’ for entailment in Quantifier,
much as we did for Sentential. This introduces an ambiguity, but a harmless one, since
every valid argument in Sentential remains a valid argument in Quantifier.
It is raining
So: It will be that it was raining.
This is conclusive: there is no possible situation in which it is raining now, but in the
future it turns out that it never was raining. But the best we can apparently do in
Quantifier is to symbolise ‘it is raining’ by ‘𝑃’, and ‘it will be that it was raining’ by ‘𝑄’,
and 𝑃 ∴ 𝑄 is not valid in Quantifier. If this argument is valid, it will depend on the
logic of expressions that are treated as non‐logical expressions in Quantifier – here, the
tenses ‘is’, ‘was’ and ‘will be’. We will look at this example again briefly in §39.
expressions just are the atomic sentences). And we characterised a formally valid ar‐
gument as one in which each possible reinterpretation (i.e., valuation) of the premises
which makes them actually true, is also one which makes the conclusion actually true.
But this account of formal validity needs refining when it comes to Quantifier. Consider
the Quantifier sentence which says that there are at least three things:
∃𝑥∃𝑦∃𝑧(𝑥 ≠ 𝑦 ∧ 𝑥 ≠ 𝑧 ∧ 𝑦 ≠ 𝑧).
This sentence contains no nonlogical expressions. Hence the sentence has a constant
meaning, and is true on each reinterpretation just in case it is actually true. It is actually
true – there are actually at least three things. Hence our Sentential‐inspired account
of logical truth would suggest that this sentence is a logical truth. But it seems very
strange to think that it is a logical truth that there are at least three things. Isn’t it
possible that there had been fewer? And shouldn’t logic allow for that possibility?
The keen‐eyed among you will have noticed that this sentence is not, in fact, a logical
truth of Quantifier. The reason is that in defining truth for sentences of Quantifier, we
considered not just reinterpretations of the nonlogical expressions, but also allowed
the domain of our interpretation to depart freely from actuality (§21.6). So we are, in
effect, allowing our interpretations to vary the meanings of the nonlogical vocabulary
and to vary the possible situations at which our sentences are to be evaluated. So
we can consider a possible situation in which there are just two things, and if that
situation provides the domain of an interpretation, there is no way of extending that
interpretation to three names ‘𝑎’, ‘𝑏’, and ‘𝑐 ’ such that ‘𝑎 ≠ 𝑏 ∧ 𝑎 ≠ 𝑐 ∧ 𝑏 ≠ 𝑐 ’ is true;
hence ‘∃𝑥∃𝑦∃𝑧(𝑥 ≠ 𝑦 ∧ 𝑥 ≠ 𝑧 ∧ 𝑦 ≠ 𝑧)’ isn’t true on the original interpretation.
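The dependence on domain size is easy to exhibit. A sketch (numbers stand in for whatever objects a situation contains):

```python
def at_least_three(domain):
    """'ExEyEz(~x=y & ~x=z & ~y=z)': try every way of assigning
    three new names to (not necessarily distinct) objects."""
    return any(x != y and x != z and y != z
               for x in domain for y in domain for z in domain)

print(at_least_three({1, 2}))     # False: fails in a two-object domain
print(at_least_three({1, 2, 3}))  # True: the actual-world verdict
```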
One sentence of this class is nevertheless a logical truth: the claim that something
exists, ‘∃𝑥𝑥 = 𝑥 ’. Since it is a constraint on Quantifier domains that they cannot be
empty (§15.6), for any interpretation there is a way of adding a previously uninterpreted
name ‘𝑐 ’ which denotes an object in the domain. And of course ‘𝑐 = 𝑐 ’ is true in that
extended interpretation, since each name must have a referent in any interpretation,
and the identity predicate is always interpreted so that its extension pairs each object
in the domain with itself.
This appeal to possible situations is very suggestive. Every possible situation seems
to have a domain of things which exist in that possibility, and the properties and re‐
lations that are instantiated in that possibility determine extensions drawn from that
domain. So you might wonder: should we think of interpretations as just being ‘pos‐
sible worlds’? Should we, that is, think that a sentence 𝒜 is consistent (i.e., there is an
interpretation on which it is true) iff 𝒜 is true in some possible situation?
We should not. In Quantifier it is not even clear how to do this, because a sentence
like ‘∃𝑥(𝐹𝑥 ∧ 𝐺𝑥)’ doesn’t even seem to mean anything without a prior interpretation.
If we assign a meaning to ‘𝐹 ’ and ‘𝐺 ’, we can then see if there are possible situations
making the sentence true. The problem then will be that there are logically consistent
sentences that are true in no possible world. For example, suppose we interpret ‘𝐹 ’ to
be ‘___1 is a cat’ and ‘𝐺 ’ to be ‘___1 is a mineral’. Obviously, given these assignments
of meaning, the sentence can be rendered in English as ‘there is a cat which is a mineral’,
and that is impossible: I would assume that it is of the essence of a cat that it be a living
animal, not an inanimate mineral. But the sentence is perfectly consistent despite
being impossible, precisely because consistency involves reinterpreting the predicates so that
they can possibly hold together. There is a significant difference between possibility
and consistency:
This distinction is not always noted. Philosophers are prone to talking of ‘logical pos‐
sibility’ where they generally mean to talk about consistency. Possibility, as I under‐
stand it, is a property of interpreted sentences, or propositions. Logical consistency
is a property of interpretable sentences, relative to some language in which they are
assigned a meaning. In this sense, the logical ‘languages’ Sentential and Quantifier are
more like proto‐languages, because only a select few of the expressions of the language
are antecedently meaningful.1
Practice exercises
A. The following entailments are correct in Quantifier. In each case, explain why.
1 I owe my understanding of this point to the distinction between ‘representational’ and ‘interpretational’
semantics drawn by John Etchemendy (1999) The Concept of Logical Consequence, CSLI Publications,
chs. 2–5.
B. Show that, for any formula 𝒜 with at most 𝑥 free, the following two sentences are
logically equivalent: ∃𝑥𝒜 and ¬∀𝑥¬𝒜 .
C. Show that
domain: Paris
The name ‘𝑑 ’ must name something in the domain, so we have no option but:
𝑑 : Paris
Recall that we want ‘∃𝑥𝐴𝑥𝑥 ’ to be true, so we want all members of the domain to be
paired with themselves in the extension of ‘𝐴’. We can offer:
Now ‘𝐴𝑑𝑑 ’ is true, so it is surely true that ‘∃𝑥𝐴𝑥𝑥 ’. Next, we want ‘𝐵𝑑 ’ to be false, so the
referent of ‘𝑑 ’ must not be in the extension of ‘𝐵’. We might simply offer:
𝐵: ___1 is in Germany
§24. DEMONSTRATING CONSISTENCY AND INVALIDITY 223
Now we have an interpretation where ‘∃𝑥𝐴𝑥𝑥 ’ is true, but where ‘𝐵𝑑 ’ is false. So there
is an interpretation where ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ is false. So ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ is not a logical truth.
We can just as easily show that ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ is not a logical falsehood. We need only
specify an interpretation in which ‘∃𝑥𝐴𝑥𝑥 → 𝐵𝑑 ’ is true; i.e., an interpretation in which
either ‘∃𝑥𝐴𝑥𝑥 ’ is false or ‘𝐵𝑑 ’ is true. Here is one:
domain: Paris
𝑑 : Paris
𝐴: ___1 is in the same place as ___2
𝐵: ___1 is in France
We can make ‘∃𝑥𝑆𝑥 ’ true by including something in the extension of ‘𝑆’, and we can
make ‘∀𝑥𝑆𝑥 ’ false by leaving something out of the extension of ‘𝑆’. For concreteness we
shall offer:
𝑆: ___1 plays saxophone
Now ‘∃𝑥𝑆𝑥 ’ is true, because ‘𝑆’ is true of Ornette Coleman. Slightly more precisely,
extend our interpretation by allowing ‘𝑐 ’ to name Ornette Coleman. ‘𝑆𝑐 ’ is true in this
extended interpretation, so ‘∃𝑥𝑆𝑥 ’ was true in the original interpretation. Similarly,
‘∀𝑥𝑆𝑥 ’ is false, because ‘𝑆’ is false of Sarah Vaughan. Slightly more precisely, extend our
interpretation by allowing ‘𝑑 ’ to name Sarah Vaughan, and ‘𝑆𝑑 ’ is false in this extended
interpretation, so ‘∀𝑥𝑆𝑥 ’ was false in the original interpretation. We have provided a
counter‐interpretation to the claim that ‘∀𝑥𝑆𝑥 ’ and ‘∃𝑥𝑆𝑥 ’ are logically equivalent.
To show that this is invalid, we must make the premise true and the conclusion false.
The conclusion is a conditional, so to make it false, the antecedent must be true and
the consequent must be false. Clearly, our domain must contain two objects. Let’s try:
Given that Marx wrote The Communist Manifesto, ‘𝐺𝑎’ is plainly false in this interpret‐
ation. But von Mises famously hated communism. So ‘∃𝑥𝐺𝑥’ is true in this interpreta‐
tion. Hence ‘∃𝑥𝐺𝑥 → 𝐺𝑎’ is false, as required.
But does this interpretation make the premise true? Yes it does! For ‘∃𝑥(𝐺𝑥 → 𝐺𝑎)’
to be true, ‘𝐺𝑐 → 𝐺𝑎’ must be true in some extended interpretation that is almost
exactly like the interpretation just given, except that it also interprets some previously
uninterpreted name ‘𝑐 ’. Let’s extend our original interpretation by letting ‘𝑐 ’ denote
Karl Marx – the same thing as ‘𝑎’ denotes in the original interpretation. Since ‘𝑎’ and
‘𝑐 ’ denote the same thing in the extended interpretation, obviously ‘𝐺𝑐 → 𝐺𝑎’ will be
true. So ‘∃𝑥(𝐺𝑥 → 𝐺𝑎)’ is true in the original interpretation. So the premise is true,
and the conclusion is false, in this original interpretation. The argument is therefore
invalid.
In passing, note that we have also shown that ‘∃𝑥(𝐺𝑥 → 𝐺𝑎)’ does not entail ‘∃𝑥𝐺𝑥 →
𝐺𝑎’. And equally, we have shown that the sentences ‘∃𝑥(𝐺𝑥 → 𝐺𝑎)’ and ‘¬(∃𝑥𝐺𝑥 → 𝐺𝑎)’
are jointly consistent.
Let’s consider a second example. Consider:
∀𝑥∃𝑦𝐿𝑥𝑦 ∴ ∃𝑦∀𝑥𝐿𝑥𝑦
Again, I want to show that this is invalid. To do this, we must make the premises true
and the conclusion false. Here is a suggestion:
The premise is clearly true on this interpretation. Anyone in the domain has a living
sibling. That sibling will also, then, be in the domain, because one cannot be someone’s
sibling without also having them as a sibling. So for everyone in the domain, there will
be at least one other person in the domain who is their sibling, and thus has a parent
in common with them. Hence ‘∀𝑥∃𝑦𝐿𝑥𝑦’ is true. But the conclusion is clearly false, for
that would require that there is some single person who shares a parent with everyone
in the domain, and there is no such person. So the argument is invalid. We observe
immediately that the sentences ‘∀𝑥∃𝑦𝐿𝑥𝑦’ and ‘¬∃𝑦∀𝑥𝐿𝑥𝑦’ are jointly consistent and
that ‘∀𝑥∃𝑦𝐿𝑥𝑦’ does not entail ‘∃𝑦∀𝑥𝐿𝑥𝑦’.
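Because this argument involves only one two‐place predicate, a machine can even search the small finite interpretations for a countermodel. The following Python sketch (illustrative only; encoding the extension of ‘𝐿’ as a set of ordered pairs is our own convention) enumerates every binary relation on a two‐element domain and collects those making the premise true and the conclusion false:

```python
from itertools import product

domain = [1, 2]
pairs = [(x, y) for x in domain for y in domain]

def premise(L):      # '∀x∃yLxy': everything bears L to something
    return all(any((x, y) in L for y in domain) for x in domain)

def conclusion(L):   # '∃y∀xLxy': something has L borne to it by everything
    return any(all((x, y) in L for x in domain) for y in domain)

# Each candidate relation is a subset of the four ordered pairs.
countermodels = []
for bits in product([False, True], repeat=len(pairs)):
    L = {p for p, keep in zip(pairs, bits) if keep}
    if premise(L) and not conclusion(L):
        countermodels.append(L)

print(countermodels)  # includes {(1, 2), (2, 1)}: each object bears L only to the other
```

Two of the sixteen relations do the trick; either one suffices as a countermodel, just as the sibling interpretation above does.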
For my third example, I will mix things up a bit. In §21, I described how we can present
some interpretations using diagrams. For example:
[Diagram, in the style of §21: the numbers 1, 2 and 3, with arrows between them indicating the extension of ‘𝑅’.]
Using the conventions employed in §21, the domain of this interpretation is the first
three positive whole numbers, and ‘𝑅’ is true of 𝓍 and 𝓎 just in case there is an arrow
from 𝓍 to 𝓎 in our diagram. Here are some sentences that the interpretation makes
true:
› ‘∀𝑥∃𝑦𝑅𝑦𝑥 ’
› ‘∃𝑥∀𝑦𝑅𝑥𝑦’ witness 1
› ‘∃𝑥∀𝑦¬𝑅𝑥𝑦’ witness 3
This immediately shows that all of the preceding six sentences are jointly consistent.
We can use this observation to generate invalid arguments, e.g.:
When you provide an interpretation to refute a claim – that some sentence is a logical
truth, say – this is sometimes called providing a COUNTERMODEL.
Practice exercises
A. Show that each of the following is neither a logical truth nor a logical falsehood:
1. 𝐷𝑎 ∧ 𝐷𝑏
2. ∃𝑥𝑇𝑥ℎ
3. 𝑃𝑚 ∧ ¬∀𝑥𝑃𝑥
4. ∀𝑧𝐽𝑧 ↔ ∃𝑦𝐽𝑦
5. ∀𝑥(𝑊𝑥𝑚𝑛 ∨ ∃𝑦𝐿𝑥𝑦)
6. ∃𝑥(𝐺𝑥 → ∀𝑦𝑀𝑦)
7. ∃𝑥(𝑥 = ℎ ∧ 𝑥 = 𝑖)
B. For each of the following, say whether the sentence is a logical truth, a logical false‐
hood, or neither:
1. ∃𝑥∃𝑦𝑥 = 𝑦;
2. (∀𝑥(𝑃𝑥 ∧ 𝑄𝑥) ↔ ∀𝑥𝑃𝑥 ∧ ∀𝑥𝑄𝑥);
3. ∃𝑥(𝑃𝑥 ∧ 𝑄𝑥) → ¬(∃𝑥𝑃𝑥 ∧ ∃𝑥𝑄𝑥);
4. ∀𝑥∀𝑦(𝑥 ≠ 𝑦 → (𝐹𝑥 ↔ ¬𝐹𝑦)).
C. Show that the following pairs of sentences are not logically equivalent.
1. 𝐽𝑎, 𝐾𝑎
2. ∃𝑥𝐽𝑥 , 𝐽𝑚
3. ∀𝑥𝑅𝑥𝑥 , ∃𝑥𝑅𝑥𝑥
4. ∃𝑥𝑃𝑥 → 𝑄𝑐 , ∃𝑥(𝑃𝑥 → 𝑄𝑐)
5. ∀𝑥(𝑃𝑥 → ¬𝑄𝑥), ∃𝑥(𝑃𝑥 ∧ ¬𝑄𝑥)
6. ∃𝑥(𝑃𝑥 ∧ 𝑄𝑥), ∃𝑥(𝑃𝑥 → 𝑄𝑥)
7. ∀𝑥(𝑃𝑥 → 𝑄𝑥), ∀𝑥(𝑃𝑥 ∧ 𝑄𝑥)
8. ∀𝑥∃𝑦𝑅𝑥𝑦, ∃𝑥∀𝑦𝑅𝑥𝑦
9. ∀𝑥∃𝑦𝑅𝑥𝑦, ∀𝑥∃𝑦𝑅𝑦𝑥
D. Show that each of the following non‐entailment claims is true:
1. 𝑃𝑎 ⊭ ∃𝑥(𝑃𝑥 ∧ 𝑄𝑥);
2. ∀𝑦(𝑃𝑦 → ∃𝑥𝑅𝑦𝑥) ⊭ ∀𝑥(𝑃𝑥 → ∃𝑦𝑅𝑦𝑦);
3. ∀𝑥𝑅𝑥𝑥 ⊭ ∀𝑥𝑅𝑎𝑥 .
Any relevant interpretation will give ‘𝑅𝑎𝑎’ a truth value. If ‘𝑅𝑎𝑎’ is true
in an interpretation, then ‘𝑅𝑎𝑎 ↔ 𝑅𝑎𝑎’ is true in that interpretation. If
‘𝑅𝑎𝑎’ is false in an interpretation, then ‘𝑅𝑎𝑎 ↔ 𝑅𝑎𝑎’ is true in that inter‐
pretation. These are the only alternatives. So ‘𝑅𝑎𝑎 ↔ 𝑅𝑎𝑎’ is true in every
interpretation. Therefore, it is a logical truth.
This argument is valid, of course, and its conclusion is true. However, it is not an
argument in Quantifier. Rather, it is an argument in English about Quantifier: it is an
argument in the metalanguage.
Note another feature of the argument. Since the sentence in question contained no
quantifiers, we did not need to think about how to interpret ‘𝑎’ and ‘𝑅’; the point was
just that, however we interpreted them, ‘𝑅𝑎𝑎’ would have some truth value or other.
(We could ultimately have given the same argument concerning Sentential sentences.)
§25. REASONING ABOUT ALL INTERPRETATIONS 229
Here is another bit of reasoning. Consider the sentence ‘∀𝑥(𝑅𝑥𝑥 ↔ 𝑅𝑥𝑥)’. Again, it
should obviously be a logical truth. But to say precisely why is quite a challenge. We
cannot say that ‘𝑅𝑥𝑥 ↔ 𝑅𝑥𝑥 ’ is true in every interpretation, since ‘𝑅𝑥𝑥 ↔ 𝑅𝑥𝑥 ’ is not
even a sentence of Quantifier (remember that ‘𝑥’ is a variable, not a name). So we have
to be a bit cleverer.
This is quite longwinded, but, as things stand, there is no alternative. In order to show
that a sentence is a logical truth, we must reason about all interpretations.
But sometimes we can draw a conclusion about some way all interpretations must be,
by considering a hypothetical interpretation which isn’t that way, and showing that
hypothesis leads to trouble. Here is an example.
› that a sentence is a logical falsehood; for this requires that it is false in every
interpretation.
› that two sentences are logically equivalent; for this requires that they have the
same truth value in every interpretation.
› that some sentences are jointly inconsistent; for this requires that there is no
interpretation in which all of those sentences are true together; i.e., that, in every
interpretation, at least one of those sentences is false.
› that an argument is valid; for this requires that the conclusion is true in every
interpretation where the premises are true.
The problem is that, with the tools available to you so far, reasoning about all interpret‐
ations is a serious challenge! Let’s take just one more example. Here is an argument
which is obviously valid:
∀𝑥(𝐻𝑥 ∧ 𝐽𝑥) ∴ ∀𝑥𝐻𝑥
After all, if everything is both H and J, then everything is H. But we can only show
that the argument is valid by considering what must be true in every interpretation in
which the premise is true. And to show this, we would have to reason as follows:
Even for a simple argument like this one, the reasoning is somewhat complicated. For
longer arguments, the reasoning can be extremely tortuous.
Reductio reasoning is particularly useful in demonstrating the validity of valid argu‐
ments. In that case we will typically make the hypothetical supposition that there is
some interpretation which makes the premises true and the conclusion false, and then
derive an absurdity from that supposition. From that we can conclude there is no such
interpretation, and hence that any interpretation which makes the premises true must
also be one which makes the conclusion true.
The following table summarises whether a single (counter‐)interpretation suffices, or
whether we must reason about all interpretations (whether directly considering them
all, or indirectly making use of reductio).
                        Yes                    No
logical truth?          all interpretations    one counter‐interpretation
logical falsehood?      all interpretations    one counter‐interpretation
logically equivalent?   all interpretations    one counter‐interpretation
consistent?             one interpretation     consider all interpretations
valid?                  all interpretations    one counter‐interpretation
entailment?             all interpretations    one counter‐interpretation
This might usefully be compared with the table at the end of §13.1. The key difference
resides in the fact that Sentential concerns truth tables, whereas Quantifier concerns
interpretations. This difference is deeply important, since each truth‐table only ever
has finitely many lines, so that a complete truth table is a relatively tractable object.
By contrast, there are infinitely many interpretations for any given sentence(s), so that
reasoning about all interpretations can be a deeply tricky business.
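This difference can be made vivid by mechanising the truth table test. The following Python sketch (illustrative only; representing sentences as functions from valuations to truth values is just one convenient encoding, not anything official) checks the validity of 𝑃 ∨ 𝑄, ¬𝑃 ∴ 𝑄 by examining every line of its truth table:

```python
from itertools import product

# Each sentence is encoded as a function from a valuation (a dict) to a truth value.
premises = [lambda v: v["P"] or v["Q"],   # 'P ∨ Q'
            lambda v: not v["P"]]         # '¬P'
conclusion = lambda v: v["Q"]             # 'Q'

def valid(premises, conclusion, letters):
    """True iff no valuation makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counterexample line of the truth table
    return True

print(valid(premises, conclusion, ["P", "Q"]))  # → True
```

Because there are only finitely many valuations of finitely many sentence letters, the loop is guaranteed to terminate; nothing analogous is available for the infinitely many interpretations of Quantifier.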
Practice exercises
A. Show that each of the following is either a logical truth or a logical falsehood:
1. 𝐷𝑎 ∧ ¬𝐷𝑎;
2. 𝑃𝑚 ∧ ¬∃𝑥𝑃𝑥 ;
3. ∀𝑥∀𝑦(𝑥 ≠ 𝑦 ↔ 𝑦 ≠ 𝑥);
4. ∀𝑥(𝐹𝑥 ∨ ∃𝑦¬𝐹𝑦);
5. ∃𝑥((𝑥 = ℎ ∧ 𝑥 = 𝑖) ∧ (𝐹ℎ ↔ ¬𝐹𝑖)).
B. Show that each of the following arguments is valid:
1. 𝑀𝑎 ∴ (𝑄𝑎 ∨ ¬𝑄𝑎);
2. ¬(𝑀𝑏 ∧ ∃𝑥𝐴𝑥), (𝑀𝑏 ∧ 𝐹𝑏) ∴ ¬∀𝑥(𝐹𝑥 → 𝐴𝑥);
3. ∀𝑦𝐺𝑦, ∃𝑦¬𝐻𝑦 ∴ ¬∀𝑥(𝐺𝑥 → 𝐻𝑥);
4. ∃𝑥𝐾𝑥, ∀𝑥(𝐾𝑥 ↔ ¬𝐿𝑥) ∴ ∃𝑥¬𝐿𝑥 ;
5. ∀𝑥𝑄𝑥, ∀𝑥(𝑄𝑥 → 𝑅𝑥) ∴ ∃𝑥𝑅𝑥 ;
6. ∃𝑧(𝑁𝑧 ∧ 𝑂𝑧𝑧) ∴ ¬∀𝑥∀𝑦(𝑂𝑥𝑦 → ∃𝑧(¬𝑁𝑧 ∧ (𝑧 = 𝑥 ∨ 𝑧 = 𝑦)));
7. ∃𝑥∃𝑦(𝑍𝑥 ∧ 𝑍𝑦 ∧ 𝑥 ≠ 𝑦), ¬𝑍𝑑 ∴ ∃𝑦(𝑍𝑦 ∧ 𝑦 ≠ 𝑑);
8. ∀𝑥∀𝑦 (𝑅𝑥𝑦 → ∀𝑧(𝑅𝑦𝑧 → 𝑥 ≠ 𝑧)) ∴ 𝑅𝑎𝑎 → ¬𝑅𝑎𝑎.
Chapter 6
𝑃 ∨ 𝑄, ¬𝑃 ∴ 𝑄
𝑃 ↔ 𝑄, ¬𝑃 ∴ ¬𝑄.
Clearly, these are valid arguments. You can confirm that they are valid by constructing
four‐row truth tables. With truth tables, we only really care about the truth values
assigned to whole sentences, since that is what ultimately determines whether there
is an entailment. But we might say that these two arguments, proceeding from differ‐
ent premises with different logical connectives involved, must make use of different
principles of IMPLICATION – different principles about what follows from what. What
follows from a disjunction is not at all the same as what follows from a biconditional,
and it might be nice to keep track of these differences. While a truth table can show
that an argument is valid, it doesn’t really explain why the argument is valid. To ex‐
plain why 𝑃 ∨ 𝑄, ¬𝑃 ∴ 𝑄 is valid, we have to say something about how disjunction and
negation work and interact.
Certainly human reasoning treats disjunctions very differently from biconditionals.
While logic is not really about human reasoning, which is more properly a subject mat‐
ter for psychology, nevertheless we can formally study the different forms of reasoning
involved in arguing from sentences with different structures, by asking: what would it
§26. PROOF AND REASONING 235
be acceptable to conclude from premises with a certain formal structure, supposing one
cannot give up one’s commitment to those premises?
The role of reasoning should not be overstated. Many principles of good reasoning
have no place in logic, because they concern how to make judgments on the basis of
inconclusive evidence. Our focus here is on implication, whether or not it would be
good reasoning to form beliefs in accordance with those implications. The idea here
is that ‘𝑃 ∨ 𝑄’ and ‘¬𝑃’ imply ‘𝑄’, whether or not it would be a good idea to believe ‘𝑄’
if you believed those premises. To emphasise a point we made earlier (§2.3): maybe
once you notice that these premises imply ‘𝑄’, what good reasoning demands is that
you give up one of those premises.
› The rules apply at some stage of a proof because of the syntactic structure of
the sentences involved – no consideration of valuations or interpretations is in‐
volved.
These two features make it easy to design a computer program that produces formal
proofs, as long as it can parse and analyse the syntax of expressions. No consideration
236 NATURAL DEDUCTION FOR SENTENTIAL
of meanings need be involved, which makes it quite unlike the earlier ways we had
of analysing arguments in Sentential, such as truth‐tables, which essentially involved
understanding the truth‐functions that characterise the meanings of the logical con‐
nectives of the language.
The proof system we adopt is called NATURAL DEDUCTION. It is an attractive system
in many ways, in part because the rules it uses are designed to be very simple and
to emulate certain obvious and natural patterns of reasoning. This makes it useful
for some purposes – it is often easy to understand that the rules are correct, and it
has the very nice feature that one can reuse one formal proof within the course of
constructing another. Don’t be misled, however: the formal proofs constructed by
using natural deduction are stylised and abstracted from ordinary reasoning. Natural
deduction may be slightly less artificial than using a truth table, but it is not in any
sense a psychologically realistic model of reasoning. (For one thing, many ‘natural’
instincts in human reasoning correspond to invalid patterns of argument.)
One specific way in which natural deduction is supposed to improve over truth table
techniques is in the insight into how an argument works that it can provide. Rather
than just discovering that one sentence cannot be true without another being true,
we see almost literally how to break down premises into their consequences, and then
build up from those intermediate consequences to the conclusion, where all the steps
of deconstruction and reconstruction involve the use of simple and obviously correct
implications. Though this doesn’t mimic human reasoning perfectly, it resembles it
sufficiently well that we often seem to better understand how an argument works once
we’ve constructed a natural deduction proof of it, even if we already knew it to be valid.
Consider this pair of valid arguments:
¬𝑃 ∨ 𝑄, 𝑃 ∴ 𝑄
𝑃 → 𝑄, 𝑃 ∴ 𝑄.
𝐴1 → 𝐶1 ∴ (𝐴1 ∧ 𝐴2 ) → (𝐶1 ∨ 𝐶2 ).
To test this argument for validity, you might use a 16‐row truth table. If you do it
correctly, then you will see that there is no row on which all the premises are true and
on which the conclusion is false. So you will know that the argument is valid. (But, as
just mentioned, there is a sense in which you will not know why the argument is valid.)
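Constructing that table is exactly the sort of mechanical grind a short program can do for us. This Python sketch (the encoding of the sentence letters as Boolean variables is our own, purely for illustration) enumerates all 2⁴ = 16 valuations and confirms that none makes the premise true and the conclusion false:

```python
from itertools import product

rows_checked = 0
counterexamples = 0
# One iteration per line of the 16-row truth table.
for a1, a2, c1, c2 in product([True, False], repeat=4):
    premise = (not a1) or c1                       # 'A1 → C1'
    conclusion = (not (a1 and a2)) or (c1 or c2)   # '(A1 ∧ A2) → (C1 ∨ C2)'
    rows_checked += 1
    if premise and not conclusion:
        counterexamples += 1

print(rows_checked, counterexamples)  # → 16 0
```

The count of zero counterexample rows is precisely what the truth table test for validity demands, though, as noted, it gives no insight into why the argument is valid.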
But now consider:
𝐴1 → 𝐶1 ∴ (𝐴1 ∧ 𝐴2 ∧ 𝐴3 ∧ 𝐴4 ∧ 𝐴5 ∧ 𝐴6 ∧ 𝐴7 ∧ 𝐴8 ∧ 𝐴9 ∧ 𝐴10 ) →
(𝐶1 ∨ 𝐶2 ∨ 𝐶3 ∨ 𝐶4 ∨ 𝐶5 ∨ 𝐶6 ∨ 𝐶7 ∨ 𝐶8 ∨ 𝐶9 ∨ 𝐶10 ).
This argument is also valid – as you can probably tell – but to test it requires a truth
table with 2²⁰ = 1048576 rows. In principle, we can set a machine to grind through
truth tables and report back when it is finished. In practice, complicated arguments
in Sentential can become intractable if we use truth tables. But there is a very short
natural deduction proof of this argument – just 6 rows. You can see it on page 264,
though it won’t make much sense if you skip the intervening pages.
When we get to Quantifier, though, the problem gets dramatically worse. There is noth‐
ing like the truth table test for Quantifier. To assess whether or not an argument is valid,
we have to reason about all interpretations. But there are infinitely many possible in‐
terpretations. We cannot even in principle set a machine to grind through infinitely
many possible interpretations and report back when it is finished: it will never finish.
We either need to come up with some more efficient way of reasoning about all in‐
terpretations, or we need to look for something different. Since we already have some
motivation for considering the role in arguments of particular premises, rather than all
the premises collectively as in the truth‐table approach, we will opt for the ‘something
different’ path – natural deduction.
1 Gerhard Gentzen (1935) ‘Untersuchungen über das logische Schließen’, translated as ‘Investigations into
Logical Deduction’ in M. E. Szabo, ed. (1969) The Collected Works of Gerhard Gentzen, North‐Holland.
Stanisław Jaśkowski (1934) ‘On the rules of suppositions in formal logic’, reprinted in Storrs McCall, ed.
(1967) Polish logic 1920–39, Oxford University Press.
2 Frederic Fitch (1952) Symbolic Logic: An introduction, Ronald Press Company.
In the design of the present proof system, I drew on earlier versions of forall𝓍, but also on the natural
deduction systems of Jon Barwise and John Etchemendy (1992) The Language of First‐Order Logic, CSLI;
and of Volker Halbach (2010) The Logic Manual, Oxford University Press.
Natural deduction was so‐called because the rules of implication it codifies were seen
as reflecting ‘natural’ forms of human reasoning. It must be admitted that no one
spontaneously and instinctively reasons in a way that conforms to the rules of natural
deduction. But there is one place where these forms of inference are widespread –
mathematical reasoning. And it will not surprise you to learn that these systems of in‐
ference were introduced initially to codify good practice in mathematical proofs. Don’t
worry, though: we won’t expect that you are already a fluent mathematician. Though
some of the rules might be a bit stilted and formal for everyday use, the rationale for
each of them is transparent and can be easily understood even by those without ex‐
tensive mathematical training.
One further thing about the rules we shall give is that they are extremely simple. At
every stage in a proof, it is trivial to see which rules apply, how to apply them, and
what the result of applying them is. While constructing proofs as a whole might take
some thought, the individual steps are the kind of thing that can be undertaken by a
completely automated process. The development of formal proofs in the early years of
the twentieth century emphasised this feature, as a part of a general quest to remove
the need for ‘insight’ or ‘intuition’ in mathematical reasoning. As the philosopher and
mathematician Alfred North Whitehead expressed his conception of the field, ‘the ul‐
timate goal of mathematics is to eliminate any need for intelligent thought’. You will
see, I hope, that natural deduction does require some intelligent thought. But you will
also see that, because the steps in a proof are trivial and demonstrably correct, finding
a formal proof for a claim is a royal road to mathematical knowledge.
If we have a correct natural deduction proof of an argument, we can often do more
than simply report that the argument is valid. Often the natural deduction proof mir‐
rors the intuitive line of reasoning that justifies the valid argument, or even leads to its
discovery. Because of the prevalence of natural deduction in introductory logic texts
like this one, many philosophers have internalised the rules of natural deduction in
their own thought. So a good knowledge of natural deduction can be helpful in inter‐
preting contemporary philosophy: oftentimes the prose presentation of an argument
more or less exactly corresponds to some natural deduction proof in an appropriate
symbolisation.
Practice exercises
A. Is the purely syntactic nature of formal proof a virtue or a vice? Can we be sure that
any class of ‘good’ arguments that is identified on purely syntactic grounds corresponds
to an interesting category?
B. Are formal proofs always more efficient than truth table arguments? Does reasoning
about Sentential sentences using valuations never give understanding?
27
The Idea of Natural Deduction
Some of these rules also have a further effect of removing a previous assumption, or
discharging it. A natural deduction proof is just a sequence of sentences constructed
by making assumptions or using these introduction and elimination rules:
Henceforth, I shall simply call these ‘proofs’, but you should be aware that there are
informal proofs too.1
1 Many of the arguments we offer in our metalanguage, quasi‐mathematical arguments about our formal
languages, are proofs. Sometimes people call formal proofs ‘deductions’ or ‘derivations’, to ensure that
no one will confuse the metalanguage activity of proving things about our logical languages with the
activity of constructing arguments within those languages. But it seems unlikely that anyone in this
course will be confused on this point, since we are not offering very many proofs in the metalanguage
in the first place!
Below (§27.3), I will introduce a system for representing natural deduction proofs that
will make clear which sentences are assumptions, when those assumptions are made
and discharged, and also provide a commentary explaining how the proof was con‐
structed, i.e., which rules and sentences are used to justify others. The commentary
isn’t strictly necessary for a correct formal proof, but it is essential in learning how
those proofs work.
We could make more assumptions. If we had the sequence of sentences ‘𝑃’ followed
by ‘𝑄’, they would both be undischarged assumptions, and the conclusion would be ‘𝑄’.
So this would be a proof of the argument 𝑃, 𝑄 ∴ 𝑄.
Admittedly there isn’t much we can do if the only rule we have is the one that allows
us to make an assumption whenever we want. We shall want some other rules. But
first I’ll introduce a way of depicting natural deduction proofs that makes the role of
assumptions very clear.
1 𝑃
In this proof, the horizontal line marks that the sentence above it is an assumption,
and not justified by earlier sentences in the proof. Everything written below the line
will either be something which follows (directly or indirectly) from the assumptions
we have already made, or it will be some new assumption. We don’t need a special
indication for the conclusion: it’s just the last line.
There is also a vertical line at the left, the ASSUMPTION LINE. This indicates the RANGE
of the assumption. This vertical line should be continued downwards whenever we
extend the proof by applying the natural deduction rules, unless and until a rule that
discharges the assumption is applied. Then we discontinue the vertical line. We’ll have
to wait until §29 to see real examples of rules that discharge assumptions.2
When a sentence is in the range of an assumption, that generally means the assump‐
tion will be playing some role in justifying that sentence. This isn’t always the case,
however, and we will see that not every sentence in the range of an assumption intu‐
itively depends on the assumption. Not every undischarged assumption is essential to
the derivation of a given sentence. This again mirrors a feature of valid arguments:
the truth of the conclusion of a valid argument doesn’t always require the truth of
every premise (e.g., 𝐴, 𝐵 ⊨ 𝐴, but ‘𝐵’ isn’t really playing an essential role here).
Whenever we make a new assumption, we underline it and introduce a new assump‐
tion line. So this is a proof of the argument 𝑃, 𝑄 ∴ 𝑄:
2 We’ll also see there that sometimes we can discharge all the assumptions in a proof, and we will have an
assumption line with no horizontal assumption marker, and so no undischarged assumptions attached
to it. So an assumption line really marks the range of zero or more assumptions.
1 𝑃
2 𝑄
Here you see we’ve extended the assumption line adjacent to ‘𝑃’, and introduced a new
assumption line for ‘𝑄’.
You see that we also number the sentences in the proof. These numbers are not strictly
part of the proof, but are part of the commentary, and help us remember which sen‐
tences we are referring to when we explain how subsequent sentences added to the
proof are justified.
We now have enough to describe our first natural deduction proof rule. It is the rule
that says we can extend any proof by making a new assumption. In abstract generality,
here is our NEW ASSUMPTION rule: if we have any natural deduction proof, we can
extend it by taking any sentence as a new assumption. Graphically, we add the new
sentence at the bottom of the proof, indenting it in the range of a new assumption line,
and extending the range of all existing undischarged assumptions. Abstractly, the rule
looks like this:
𝑛 𝒜
𝑛+1 ℬ
That is, whenever we have a proof, whatever its contents, we may make an arbitrary
assumption to extend that proof. We will discuss new assumptions, and the special
family of rules that handles discharging assumptions, in §29.1.
Introducing a new assumption line for each new assumption is best practice. That
allows each assumption potentially to be discharged independently of all the others.
But sometimes you know that you won’t be discharging some assumptions, and that
they will all remain active throughout the proof. (Every sentence in the proof will
be in the range of those assumptions.) In that case, you can use a single assumption
line for a number of assumptions. For example, if we know we’re going to make and
retain the assumptions ‘𝑃’ and ‘𝑄’, we can write them on successive lines, draw a single
assumption line, and a single horizontal line marking that any sentences above it are
the assumptions attached to that assumption line:
1 𝑃
2 𝑄
⋮
For any given sentence in a proof, you can easily see the undischarged assumptions
on which that sentence depends: just look at which assumptions are attached to the
assumption lines to its left. If some line in a proof is in the range of an assumption,
we will say that it is an ACTIVE ASSUMPTION at that point in the proof. Likewise, for a
given assumption you can use the assumption line to easily see which sentences are in
the range of that assumption.
Let’s consider a couple more examples of how to set up a proof.
¬(𝐴 ∨ 𝐵) ∴ ¬𝐴 ∧ ¬𝐵.
1 ¬(𝐴 ∨ 𝐵)
We are hoping to conclude that ‘¬𝐴 ∧ ¬𝐵’; so we are hoping ultimately to conclude our
proof with
1 ¬(𝐴 ∨ 𝐵)
𝑛 ¬𝐴 ∧ ¬𝐵
for some number 𝑛. It doesn’t matter which line we end on, but we would obviously
prefer a short proof to a long one! We don’t have any rules yet, so we cannot fill in the
middle of this proof.
Suppose we had an argument with more than one premise:
𝐴 ∨ 𝐵, ¬(𝐴 ∧ 𝐶) ∴ ¬𝐶 ∨ 𝐷
If our argument has more than one premise, we can use either single or multiple as‐
sumption lines:
1 𝐴∨𝐵
2 ¬(𝐴 ∧ 𝐶)
⋮
𝑛 ¬𝐶 ∨ 𝐷
Whether we draw a separate assumption line for each premise (the official form) or a
single assumption line covering both (the conventional shorthand), these represent the
same proof.
What remains to do is to explain each of the rules that we can use along the way from
premises to conclusion. The rules are divided into two families: those rules that involve
making or getting rid of further assumptions that are made ‘for the sake of argument’,
and those that do not. The latter class of rules are simpler, so we will begin with those
in §28, and turning to the others in §29. After introducing the rules, I will return in
§29.10 to the two incomplete proofs above, to see how they may be completed.
Practice exercises
A. Is the following a correctly formed proof in our natural deduction system?
1 𝐴
2 ((𝐵 ↔) ∨ 𝐴)
3 ¬𝐴
4 𝐷 → ¬𝐷
B. Which of the following could, given the right rules, be turned into a proof corres‐
ponding to the argument
¬(𝑃 ∧ 𝑄), 𝑃 ∴ ¬𝑄?
1.
1 ¬(𝑃 ∧ 𝑄)
2 𝑃
⋮
𝑛 ¬𝑄

2.
1 ¬(𝑃 ∧ 𝑄)
2 𝑃
⋮
𝑛 ¬𝑄

3.
1 ¬𝑄
⋮
𝑛 𝑃
𝑛 + 1 ¬(𝑃 ∧ 𝑄)

4.
1 𝑃
2 ¬(𝑃 ∧ 𝑄)
⋮
𝑛 ¬𝑄
28
Basic Rules for Sentential: Rules
without Subproofs
𝑅: Ludwig is reactionary
𝐿: Ludwig is libertarian
Perhaps I am working through a proof, and I have obtained ‘𝑅’ on line 8 and ‘𝐿’ on line
15. Then on any subsequent line I can obtain ‘(𝑅 ∧ 𝐿)’ thus:
8 𝑅
15 𝐿
16 (𝑅 ∧ 𝐿) ∧I 8, 15
Note that every line of our proof must either be an assumption, or must be justified by
some rule. We add the commentary ‘∧I 8, 15’ here to indicate that the line is obtained
by the rule of conjunction introduction (∧I) applied to lines 8 and 15. Note the derived
conjunction depends on the collective assumptions of the two conjuncts.
246
Since the order of conjuncts does not matter in a conjunction, I could equally well have
obtained ‘(𝐿 ∧ 𝑅)’ instead of ‘(𝑅 ∧ 𝐿)’. I can use the same rule with the commentary
reversed, to reflect the reversed order of the conjuncts:
8 𝑅
15 𝐿
16 (𝐿 ∧ 𝑅) ∧I 15, 8
𝑚 𝒜
𝑛 ℬ
(𝒜 ∧ ℬ) ∧I 𝑚, 𝑛
To be clear, the statement of the rule is schematic. It is not itself a proof. ‘𝒜 ’ and ‘ℬ’ are
not sentences of Sentential. Rather, they are symbols in the metalanguage, which we
use when we want to talk about any sentence of Sentential (see §7). Similarly, ‘𝑚’ and
‘𝑛’ are not numerals that will appear in any actual proof. Rather, they are symbols
in the metalanguage, which we use when we want to talk about any line number of
any proof. In an actual proof, the lines are numbered ‘1’, ‘2’, ‘3’, and so forth. But when
we define the rule, we use variables to emphasise that the rule may be applied at any
point. The rule requires only that we have both conjuncts available to us somewhere
in the proof, earlier than the line that results from the application of the rule. They
can be separated from one another, and they can appear in any order. So 𝑚 might be
less than 𝑛, or greater than 𝑛. Indeed, 𝑚 might even equal 𝑛, as in this proof:
1 𝑃
2 𝑃∧𝑃 ∧I 1, 1
Note that the rule involves extending the vertical line to cover the newly introduced
sentence. This is because what has been derived depends on the same assumptions as
what it was derived from, and so it must also be in the range of those assumptions.
All of the rules in this section justify a new claim which inherits all the
assumptions of anything from which it has been derived by a natural
deduction rule.
The two starting conjuncts needn’t have the same assumptions, but the derived con‐
junction inherits their joint assumptions:
1 𝑃
2 𝑄
3 (𝑃 ∧ 𝑄) ∧I 1, 2
𝑚 (𝒜 ∧ ℬ)
⋮
𝒜 ∧E 𝑚

𝑚 (𝒜 ∧ ℬ)
⋮
ℬ ∧E 𝑚
The point is simply that, when you have a conjunction on some line of a proof, you
can obtain either of the conjuncts by ∧E later on. There are two rules, because each
conjunction justifies us in deriving either of its conjuncts. We could have called them
∧E‐LEFT and ∧E‐RIGHT, to distinguish them, but in the following we will mostly not
distinguish them.1
One point might be worth emphasising: you can only apply this rule when conjunction
is the main connective. So you cannot derive ‘𝐷’ just from ‘𝐶 ∨ (𝐷 ∧ 𝐸)’! Nor can you
1 Why do we have two rules at all, rather than one rule that allows us to derive either conjunct? The
answer is that we want our rules to have an unambiguous result when applied to some prior lines of
the proof. This is important if, for example, we are implementing a computer system to produce formal
proofs.
derive ‘𝐷’ directly from ‘𝐶 ∧ (𝐷 ∧ 𝐸)’, because it is not one of the conjuncts of the main
connective of this sentence. You would have to first obtain ‘(𝐷 ∧ 𝐸)’ by ∧E, and then
obtain ‘𝐷’ by a second application of that rule, as in this proof:
1 𝐶 ∧ (𝐷 ∧ 𝐸)
2 𝐷∧𝐸 ∧E 1
3 𝐷 ∧E 2
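Checks like these consult only the shapes of sentences, never their truth values, which is what makes mechanical proof‐checking possible. Here is a minimal Python sketch (purely illustrative; the encoding of sentences as nested tuples is our own toy convention, not part of any official system) of how a program might verify applications of ∧I and ∧E:

```python
# Sentences as nested tuples: ("and", A, B) for a conjunction; atoms are strings.
def check_and_intro(line, cited1, cited2):
    """Is `line` obtainable from the two cited lines by ∧I, in that order?"""
    return line == ("and", cited1, cited2)

def check_and_elim(line, cited):
    """Is `line` obtainable from the cited line by ∧E? Requires '∧' to be
    the main connective of the cited sentence."""
    return isinstance(cited, tuple) and cited[0] == "and" and line in cited[1:]

conj = ("and", "R", "L")
print(check_and_intro(conj, "R", "L"))  # → True
print(check_and_elim("L", conj))        # → True
# '∧' is not the main connective of 'C ∨ (D ∧ E)', so 'D' may not be extracted:
print(check_and_elim("D", ("or", "C", ("and", "D", "E"))))  # → False
```

The last check fails for exactly the reason given above: you cannot apply ∧E unless conjunction is the main connective.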
Even with just these two rules, we can start to see some of the power of our formal
proof system. Consider this tricky‐looking argument:
[(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧ [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)]
∴ [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)] ∧ [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)]
Dealing with this argument using truth‐tables would be a very tedious exercise, given
that there are 8 sentence letters in the premise and we would thus require a 2⁸ = 256
line truth table! But we can deal with it swiftly using our natural deduction rules.
The main connective in both the premise and conclusion of this argument is ‘∧’. In or‐
der to provide a proof, we begin by writing down the premise, which is our assumption.
We draw a line below this: everything after this line must follow from our assumptions
by (successive applications of) our rules of implication. So the beginning of the proof
looks like this:
1 [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧ [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)]
From the premise, we can get each of its conjuncts by ∧E. The proof now looks like
this:
1 [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧ [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)]
2 [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧E 1
3 [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)] ∧E 1
So by applying the ∧I rule to lines 3 and 2 (in that order), we arrive at the desired
conclusion. The finished proof looks like this:
1 [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧ [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)]
2 [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧E 1
3 [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)] ∧E 1
4 [(𝐸 ∨ 𝐹) → (𝐺 ∨ 𝐻)] ∧ [(𝐴 ∨ 𝐵) → (𝐶 ∨ 𝐷)] ∧I 3, 2
This is a very simple proof, but it shows how we can chain rules of proof together into
longer proofs. Our formal proof requires just four lines, a far cry from the 256 lines
that would have been required had we approached the argument using the techniques
from chapter 3.
It is worth giving another example. Way back in §10.3, we noted that this argument is
valid:
𝐴 ∧ (𝐵 ∧ 𝐶) ∴ (𝐴 ∧ 𝐵) ∧ 𝐶.
To provide a proof corresponding to this argument, we start by writing:
1 𝐴 ∧ (𝐵 ∧ 𝐶)
From the premise, we can get each of the conjuncts by applying ∧E twice. And we can
then apply ∧E twice more, so our proof looks like:
1 𝐴 ∧ (𝐵 ∧ 𝐶)
2 𝐴 ∧E 1
3 𝐵∧𝐶 ∧E 1
4 𝐵 ∧E 3
5 𝐶 ∧E 3
But now we can merrily reintroduce conjunctions in the order we want them, so that
our final proof is:
1 𝐴 ∧ (𝐵 ∧ 𝐶)
2 𝐴 ∧E 1
3 𝐵∧𝐶 ∧E 1
4 𝐵 ∧E 3
5 𝐶 ∧E 3
6 𝐴∧𝐵 ∧I 2, 4
7 (𝐴 ∧ 𝐵) ∧ 𝐶 ∧I 6, 5
Recall that our official definition of sentences in Sentential only allowed conjunctions
with two conjuncts. When we discussed semantics, we became a bit more relaxed, and
allowed ourselves to drop inner parentheses in long conjunctions, since the order of
the parentheses did not affect the truth table. The proof just given suggests that we
could also drop inner parentheses in all of our proofs. However, this is not standard,
and we shall not do this. Instead, we shall return to the more austere parenthetical
§28. BASIC RULES FOR Sentential: RULES WITHOUT SUBPROOFS 251
› 𝒜, ℬ ⊨ 𝒜 ∧ ℬ;
› 𝒜 ∧ ℬ ⊨ 𝒜;
› 𝒜 ∧ ℬ ⊨ ℬ.
For example, the first of these says that 𝒜 and ℬ together suffice to entail the truth
of their conjunction. This justifies the proof rule of conjunction introduction, since
at a stage in the proof where we are assuming both 𝒜 and ℬ to be true, we are then
permitted to conclude that 𝒜 ∧ ℬ is true – just as conjunction introduction says we
can.
We can see, then, that our proof rules correspond to valid arguments in Sentential, and so our conjunction rules will never permit us to derive a false sentence from
true sentences. There is no guarantee of course that the assumptions we make in our
formal proofs are in fact true – only that if they were true, what we derive from them
would also be true. So although our proof rules form a syntactic procedure, relying only on recognising the main connective of a sentence and applying an appropriate rule to introduce or eliminate it, each of our rules corresponds to an acceptable
entailment.
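For readers who like to check such claims by brute force, here is a Python sketch (purely illustrative; the `entails` helper is our own ad hoc definition) verifying the three conjunction entailments:

```python
from itertools import product

def entails(premises, conclusion):
    """True iff every valuation of A, B making all premises true makes the conclusion true."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

e1 = entails([lambda a, b: a, lambda a, b: b], lambda a, b: a and b)  # A, B ⊨ A ∧ B
e2 = entails([lambda a, b: a and b], lambda a, b: a)                  # A ∧ B ⊨ A
e3 = entails([lambda a, b: a and b], lambda a, b: b)                  # A ∧ B ⊨ B
print(e1, e2, e3)  # → True True True
```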
This argument is certainly valid. If you have a conditional claim, that commits you to
the consequent given the antecedent, and you also have the antecedent, then you have
sufficient material to derive the consequent.
This suggests a straightforward CONDITIONAL ELIMINATION rule (→E):
252 NATURAL DEDUCTION FOR SENTENTIAL
𝑚 (𝒜 → ℬ)
𝑛 𝒜
ℬ →E 𝑚, 𝑛
This rule is also sometimes called MODUS PONENS. Again, this is an elimination rule,
because it allows us to obtain a sentence that may not contain ‘→’, having started with
a sentence that did contain ‘→’. Note that the conditional, and the antecedent, can
be separated from one another, and they can appear in any order. However, in the
commentary for →E, we always cite the conditional first, followed by the antecedent.
Here is an illustration of the rules we have so far in action, applied to this intuitively
correct argument:
𝑃, (𝑃 → 𝑄) ∧ (𝑃 → 𝑅) ∴ (𝑅 ∧ 𝑄).
1 (𝑃 → 𝑄) ∧ (𝑃 → 𝑅)
2 𝑃
3 (𝑃 → 𝑄) ∧E 1
4 (𝑃 → 𝑅) ∧E 1
5 𝑄 →E 3, 2
6 𝑅 →E 4, 2
7 (𝑅 ∧ 𝑄) ∧I 6, 5
The correctness of our proof rule of conditional elimination is supported by the easily
demonstrated validity of the corresponding entailment:
› 𝒜, 𝒜 → ℬ ⊨ ℬ.
So applying this rule can never produce false conclusions if we began with true assump‐
tions.
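Again, this entailment can be confirmed by surveying the four valuations of the two sentences involved; the following Python sketch (illustrative only) does exactly that:

```python
from itertools import product

def implies(p, q):
    # truth table for '→'
    return (not p) or q

# A, A → B ⊨ B: whenever both premises are true, so is B
mp = all(b
         for a, b in product([True, False], repeat=2)
         if a and implies(a, b))
print(mp)  # → True
```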
direction. So, thought of informally, one of our biconditional rules corresponds to a left‐to‐right application of conditional elimination, while the other corresponds to a right‐to‐left application.
If we know that Alice is coming to the party iff Bob is, then if we knew that either of
them was coming, we’d know that the other was coming. If you have the left‐hand
subsentence of the biconditional, you can obtain the right‐hand subsentence. If you
have the right‐hand subsentence, you can obtain the left‐hand subsentence. So we
have these two instances of the rule:
𝑚 (𝒜 ↔ ℬ) 𝑚 (𝒜 ↔ ℬ)
⋮ ⋮
𝑛 𝒜 𝑛 ℬ
⋮ ⋮
ℬ ↔E 𝑚, 𝑛 𝒜 ↔E 𝑚, 𝑛
Note that the biconditional, and the right or left half, can be distant from one another
in the proof, and they can appear in any order. However, in the commentary for ↔E,
we always cite the biconditional first.
Here is an example of the biconditional rules in action, demonstrating the following
argument:
𝑃, (𝑃 ↔ 𝑄), (𝑄 → 𝑅) ∴ 𝑅.
1 (𝑃 ↔ 𝑄)
2 𝑃
3 (𝑄 → 𝑅)
4 𝑄 ↔E 1, 2
5 𝑅 →E 3, 4
Note the way that our conjunction and conditional elimination rules can be used to
parallel the biconditional elimination rules:
1 ((𝑃 → 𝑄) ∧ (𝑄 → 𝑃)) 1 (𝑃 ↔ 𝑄)
2 𝑄 2 𝑄
3 (𝑄 → 𝑃) ∧E 1 3 𝑃 ↔E 1, 2
4 𝑃 →E 3, 2
› 𝒜 ↔ ℬ, 𝒜 ⊨ ℬ;
› 𝒜 ↔ ℬ, ℬ ⊨ 𝒜 .
𝑚 𝒜 𝑚 𝒜
⋮ ⋮
(𝒜 ∨ ℬ) ∨I 𝑚 (ℬ ∨ 𝒜) ∨I 𝑚
Notice that ℬ can be any sentence of Sentential whatsoever. So the following is a per‐
fectly good proof:
1 𝑀
2 𝑀 ∨ (((𝐴 ↔ 𝐵) → (𝐶 ∧ 𝐷)) ↔ (𝐸 ∧ 𝐹)) ∨I 1
Using a truth table to show this would have taken 128 lines.
Here is an example, to show our rules in action:
1 (𝐴 ∧ 𝐵)
2 𝐴 ∧E 1
3 𝐵 ∧E 1
4 (𝐴 ∨ 𝐶) ∨I 2
5 (𝐵 ∨ 𝐶) ∨I 3
6 ((𝐴 ∨ 𝐶) ∧ (𝐵 ∨ 𝐶)) ∧I 4, 5
This disjunction rule is supported by the following valid Sentential argument forms:
› 𝒜 ⊨ 𝒜 ∨ ℬ; and
› ℬ ⊨ 𝒜 ∨ ℬ.
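As before, these entailments can be confirmed by brute force; this illustrative Python sketch surveys the four valuations:

```python
from itertools import product

# A ⊨ A ∨ B and B ⊨ A ∨ B: a disjunction is true whenever either disjunct is
left = all(a or b for a, b in product([True, False], repeat=2) if a)
right = all(a or b for a, b in product([True, False], repeat=2) if b)
print(left, right)  # → True True
```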
The rule of disjunction introduction is one place where ‘natural’ deduction doesn’t
seem to live up to its name. Is this implication really one we would naturally make?
Despite appearances, maybe we do sometimes reason like this. Consider this argument:
‘I won’t ever eat meat again. Well, either I won’t, or it will be an accident!’ (But perhaps
this is better thought of as a retraction of my initial over‐bold claim, rather than an
inference from it.)
Nevertheless, even if the rule is artificial, that doesn’t make it incorrect. We can see,
clearly, that it corresponds to a valid argument. So perhaps the problem with it as a
piece of reasoning is due to something other than invalidity.
› Sometimes disjunction introduction looks like you are ‘throwing away’ inform‐
ation that you already have – you know enough to treat ‘𝑃’ as a premise, but
you end up assenting only to the weaker claim ‘𝑃 ∨ 𝑄’. But can this be the full
story? Conjunction elimination seems to involve the same sort of inference from
a logically stronger to a logically weaker claim, and that doesn’t arouse nearly as
much animosity as disjunction introduction.
These considerations of relevance or information value lie beyond logic proper. They
concern what we, as thinkers and hearers, can conclude about the speaker’s state of
mind, given that they have said something with a particular content. This is the do‐
main of that part of linguistics known as PRAGMATICS, the study of meaning in context.
Most theories of pragmatics predict that disjunction introduction is valid but often
conversationally inappropriate. So Paul Grice, for example, says that if you are in a
28.6 Reiteration
The last natural deduction rule in this category is REITERATION (R). This just allows
us to repeat an assumption or claim 𝒜 we have already established, so long as the
repeated sentence remains in the range of any assumption which the original was in
the range of.
𝑚 𝒜
𝒜 R𝑚
Such a rule is obviously legitimate; but one might well wonder how such a rule could
ever be useful. Here is an example of it in action:
1 𝑃
2 ((𝑃 ∧ 𝑃) → 𝑄)
3 𝑃 R1
4 (𝑃 ∧ 𝑃) ∧I 1, 3
5 𝑄 →E 2, 4
This rule is unnecessary at this point in the proof (we could have applied conjunction
introduction and cited line 1 twice in our commentary), but it can be easier in practice
to have two distinct lines to which to apply conjunction introduction. The real benefits
of reiteration come when we have multiple subproofs, as we will see in the following
section (§29.1) – particularly when it comes to the negation rules. But we will also
see later in §33.1 that, strictly speaking, we don’t need the reiteration rule – though for
convenience we will keep it. And once we are able to discharge assumptions, reiteration
carries some risks (§29.3).
2 Grice’s discussion of disjunction is at pp. 44–6 in H P Grice (1989) Studies in the Way of Words, Harvard
University Press; see also §4 of Maria Aloni (2016) ‘Disjunction’ in Edward N Zalta, ed., The Stanford
Encyclopedia of Philosophy https://plato.stanford.edu/entries/disjunction/#DisjConv.
Practice exercises
A. The following ‘proof’ is incorrect. Explain the mistakes it makes.
1 𝐴 ∧ (𝐵 ∧ 𝐶)
2 (𝐵 ∨ 𝐶) → 𝐷
3 𝐵 ∧E 1
4 𝐵∨𝐶 ∨I 3
5 𝐷 →E 4, 2
1 𝐴
2 𝐵
3 𝐴 R1
C. The following proof is missing its commentary. Please supply the correct annota‐
tions on each line that needs one:
1 𝑃∧𝑆
2 𝑃
3 𝑆
4 𝑆→𝑅
5 𝑅
6 𝑅∨𝐸
1. 𝑃 ∴ ((𝑃 ∨ 𝑄) ∧ 𝑃);
2. ((𝑃 ∧ 𝑄) ∧ (𝑅 ∧ 𝑃)) ∴ ((𝑅 ∧ 𝑄) ∧ 𝑃);
3. (𝐴 → (𝐴 → 𝐵)), 𝐴 ∴ 𝐵;
4. (𝐵 ↔ (𝐴 ↔ 𝐵)), 𝐵 ∴ 𝐴.
29
Basic Rules for Sentential: Rules with
Subproofs
We’ve already seen in §27 how to start a proof by making assumptions. But the true
power of natural deduction relies on its rules governing when you can make additional
assumptions during the course of the proof, and how you can discharge those assump‐
tions when you no longer need them.
original assumptions. (After all, those original assumptions are still in effect.) But at
some point, we shall want to stop working with the additional assumption: we shall
want to return from the subproof to the main proof. To indicate that we have returned
to the main proof, the vertical line for the subproof comes to an end. At this point,
we say that the subproof is CLOSED. Having closed a subproof, we have set aside the
additional assumption, so it will be illegitimate to draw upon anything that depends
upon that additional assumption. This has been implicit in our discussion all along,
but it is good to make it very clear:
1 (𝑃 ∧ 𝑄)
2 𝑄 ∧E 1
3 𝑅
4 𝑄 R2
In this proof, even though the occurrence of ‘𝑄’ on line 4 occurs within the range of
the assumption ‘𝑅’, it does not intuitively depend on it. We used reiteration to show
that ‘𝑄’ is still derivable from the active assumptions at line 4, but it does not follow that ‘𝑄’ depends in any robust way on every assumption that is active at line 4.
If someone doubted that this was valid, we might try to convince them otherwise by
explaining ourselves as follows:
This kind of reasoning is vital for understanding conditional claims. As the Cambridge
philosopher Frank Ramsey pointed out:
If two people are arguing ‘If 𝒫 , will 𝒬 ?’ and are both in doubt as to 𝒫 , they
are adding 𝒫 hypothetically to their stock of knowledge and arguing on
that basis about 𝒬….1
Ramsey’s idea is that if we can reach the conclusion that 𝒞 on the basis of the hypo‐
thetical supposition that 𝒜 (generally together with some other assumptions) then we
would be entitled to judge, given the other assumptions alone, that if 𝒜 turns out to
be true, then 𝒞 will also turn out to be true – for short, that if 𝒜 then 𝒞 . This obser‐
vation of Ramsey’s – that conditionals embody the categorical content of hypothetical
1 F P Ramsey (1929), ‘General Propositions and Causality’, at p. 155 in F P Ramsey (1990) Philosophical
Papers, D H Mellor, ed., Cambridge University Press.
reasoning – has been important for many accounts of the English conditional, not all
of them wholly congenial to the idea that ‘if’ is to be understood as ‘→’. Yet the essence
of his idea motivates the conditional introduction rule of natural deduction.
Transferred into natural deduction format, here is the pattern of reasoning that we just
used. We started with one premise, ‘Ludwig is reactionary’, symbolised ‘𝑅’. Thus:
1 𝑅
1 𝑅
2 𝐿
We are not claiming, on line 2, to have proved ‘𝐿’ from line 1. We are just making an‐
other assumption. So we do not need to write in any justification for the additional
assumption on line 2. We do, however, need to mark that it is an additional assump‐
tion. We do this in the usual way, by drawing a line under it (to indicate that it is an
assumption) and by indenting it with a further assumption line (to indicate that it is
additional).
With this extra assumption in place, we are in a position to use ∧I. So we could continue
our proof:
1 𝑅
2 𝐿
3 𝑅∧𝐿 ∧I 1, 2
The two vertical lines to the left of line 3 show that ‘𝑅 ∧ 𝐿’ is in the range of both
assumptions, and indeed depends on them collectively.
So we have now shown that, on the additional assumption, ‘𝐿’, we can obtain ‘𝑅 ∧ 𝐿’.
We can therefore conclude that, if ‘𝐿’ obtains, then so does ‘𝑅 ∧ 𝐿’. Or, to put it more
briefly, we can conclude ‘𝐿 → (𝑅 ∧ 𝐿)’:
1 𝑅
2 𝐿
3 𝑅∧𝐿 ∧I 1, 2
4 𝐿 → (𝑅 ∧ 𝐿) →I 2–3
Observe that we have dropped back to using one vertical line. We are no longer re‐
lying on the additional assumption, ‘𝐿’, since the conditional itself follows just from
our original assumption, ‘𝑅’. The use of conditional introduction has discharged the
temporary assumption, so that the final line of this proof relies only on the initial as‐
sumption ‘𝑅’ – we made use of the assumption ‘𝐿’ only in the nested subproof, and the
range of that assumption is restricted to sentences in that subproof. Note that the con‐
ditional sentence ‘𝐿 → (𝑅 ∧ 𝐿)’ is a summary of what went on in the subproof, given the
undischarged assumption ‘𝑅’: if you made the additional assumption 𝐿, then you could
derive ‘(𝑅 ∧ 𝐿)’.
The general pattern at work here is the following. We first make an additional assumption, 𝒜; and from that additional assumption, we prove ℬ. In that case, we have established the following: if it does in fact turn out that 𝒜, then it also turns out that ℬ. This is wrapped up in the rule for CONDITIONAL INTRODUCTION:
𝑖 𝒜
𝑗 ℬ
(𝒜 → ℬ) →I 𝑖 –𝑗
There can be as many or as few lines as you like between lines 𝑖 and 𝑗. Notice that in our
presentation of the rule, discharging the assumption 𝒜 takes us out of the subproof in
which ℬ is derived from 𝒜 . If 𝒜 is the initial assumption of a proof, then discharging
it may well leave us with a conditional claim that depends on no undischarged assump‐
tions at all. We see an example in this proof, where the main proof, marked by the
leftmost vertical line, features no horizontal line marking an assumption:
1 𝑃∧𝑃
2 𝑃 ∧E 1
3 (𝑃 ∧ 𝑃) → 𝑃 →I 1–2
It might come as no surprise that the conclusion of this proof – being provable from
no undischarged assumptions at all – turns out to be a logical truth.
It will help to offer a further illustration of →I in action. Suppose we want to consider
the following:
𝑃 → 𝑄, 𝑄 → 𝑅 ∴ 𝑃 → 𝑅.
We start by listing both of our premises. Then, since we want to arrive at a conditional
(namely, ‘𝑃 → 𝑅’), we additionally assume the antecedent to that conditional. Thus
our main proof starts:
1 𝑃→𝑄
2 𝑄→𝑅
3 𝑃
Note that we have made ‘𝑃’ available, by treating it as an additional assumption. But
now, we can use →E on the first premise. This will yield ‘𝑄’. And we can then use →E
on the second premise. So, by assuming ‘𝑃’ we were able to prove ‘𝑅’, so we apply the
→I rule – discharging ‘𝑃’ – and finish the proof. Putting all this together, we have:
1 𝑃→𝑄
2 𝑄→𝑅
3 𝑃
4 𝑄 →E 1, 3
5 𝑅 →E 2, 4
6 𝑃→𝑅 →I 3–5
Let’s consider another example, this one demonstrating why reiteration can be so use‐
ful in subproofs. We know that 𝑃 ∴ 𝑄 → 𝑃 is a valid argument, from truth‐tables. This
is a proof:
1 𝑃
2 𝑄
3 𝑃 R1
4 (𝑄 → 𝑃) →I 2–3
Note that strictly speaking we needn’t have used reiteration here: the assumption of
‘𝑃’ remains active at line 2, so technically we could apply →I to close the subproof and
introduce the conditional immediately after line 2. But the use of reiteration makes it
much clearer what is going on in the proof – even though all it does is repeat the earlier
assumption and remind us that it is still an active assumption.
We now have all the rules we need to show that the argument on page 237 is valid. Here
is the six line proof, some 175,000 times shorter than the corresponding truth table:
1 𝐴1 → 𝐶1
2 (𝐴1 ∧ 𝐴2 ∧ 𝐴3 ∧ 𝐴4 ∧ 𝐴5 ∧ 𝐴6 ∧ 𝐴7 ∧ 𝐴8 ∧ 𝐴9 ∧ 𝐴10 )
3 𝐴1 ∧E 2
4 𝐶1 →E 1, 3
5 (𝐶1 ∨ 𝐶2 ∨ 𝐶3 ∨ 𝐶4 ∨ 𝐶5 ∨ 𝐶6 ∨ 𝐶7 ∨ 𝐶8 ∨ 𝐶9 ∨ 𝐶10 ) ∨I 4
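The arithmetic behind that comparison is easy to check (taking the proof to be six lines long, as the text counts it):

```python
# 20 sentence letters (A1–A10 and C1–C10) give a 2**20 row truth table;
# dividing by the six lines of the proof gives the factor quoted in the text
rows = 2 ** 20
factor = rows / 6
print(rows)           # → 1048576
print(round(factor))  # → 174763, i.e. roughly 175,000
```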
Import‐Export
Our rules so far can be used to demonstrate two important and rather controversial
principles governing the conditional. The principle of IMPORTATION is the claim that
from ‘if 𝑃 then, if also 𝑄, then 𝑅’ it follows that ‘if 𝑃 and also 𝑄, then 𝑅’. The principle
of EXPORTATION is the converse, that from ‘if 𝑃 and also 𝑄, then 𝑅’ it follows that ‘if 𝑃,
then if also 𝑄, then 𝑅’. First, we prove importation holds for our conditional:
1 (𝑃 → (𝑄 → 𝑅))
2 (𝑃 ∧ 𝑄)
3 𝑃 ∧E 2
4 (𝑄 → 𝑅) →E 1, 3
5 𝑄 ∧E 2
6 𝑅 →E 4, 5
7 ((𝑃 ∧ 𝑄) → 𝑅) →I 2–6
Second, we show exportation holds. Here, we need to open two nested subproofs:
1 ((𝑃 ∧ 𝑄) → 𝑅)
2 𝑃
3 𝑄
4 (𝑃 ∧ 𝑄) ∧I 2, 3
5 𝑅 →E 1, 4
6 (𝑄 → 𝑅) →I 3–5
7 (𝑃 → (𝑄 → 𝑅)) →I 2–6
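Both principles also hold semantically for ‘→’, as this illustrative Python sketch (not part of our proof system) confirms by surveying all eight valuations of ‘𝑃’, ‘𝑄’ and ‘𝑅’:

```python
from itertools import product

def implies(p, q):
    # truth table for '→'
    return (not p) or q

def entails(premise, conclusion):
    # single-premise entailment over valuations of P, Q, R
    return all(conclusion(p, q, r)
               for p, q, r in product([True, False], repeat=3)
               if premise(p, q, r))

# importation: P → (Q → R) ⊨ (P ∧ Q) → R
importation = entails(lambda p, q, r: implies(p, implies(q, r)),
                      lambda p, q, r: implies(p and q, r))
# exportation: (P ∧ Q) → R ⊨ P → (Q → R)
exportation = entails(lambda p, q, r: implies(p and q, r),
                      lambda p, q, r: implies(p, implies(q, r)))
print(importation, exportation)  # → True True
```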
1 𝐴
2 𝐵
3 𝐵∧𝐵 ∧I 2, 2
4 𝐵 ∧E 3
5 𝐵→𝐵 →I 2–4
This is perfectly in keeping with the rules we have laid down already. And it should
not seem particularly strange. Since ‘𝐵 → 𝐵’ is a logical truth, no particular premises
should be required to prove it – note that ‘𝐴’ plays no particular role in the proof apart
from beginning it.
But suppose we now tried to continue the proof as follows:
1 𝐴
2 𝐵
3 𝐵∧𝐵 ∧I 2, 2
4 𝐵 ∧E 3
5 𝐵→𝐵 →I 2–4
1 𝑃
2 𝑄
3 𝑃∧𝑄 ∧I 1, 2
4 𝑄 → (𝑃 ∧ 𝑄) →I 2–3
5 𝑃∧𝑄 R3
6 𝑃 → (𝑃 ∧ 𝑄) →I 1–5
This is certainly not a logical truth. What’s gone wrong is that we reiterated ‘𝑃 ∧ 𝑄’
without retaining the assumptions on which it was dependent. Naturally enough, it
was dependent on both ‘𝑃’ and ‘𝑄’, but it was reiterated into a context where the as‐
sumption ‘𝑄’ had been discharged. This is illegitimate.
1 𝐴
2 𝐵
3 𝐶
4 𝐴∧𝐵 ∧I 1, 2
5 𝐶 → (𝐴 ∧ 𝐵) →I 3–4
6 𝐵 → (𝐶 → (𝐴 ∧ 𝐵)) →I 2–5
Notice that the commentary on line 4 refers back to the initial assumption (on line 1)
and an assumption of a subproof (on line 2). This is perfectly in order, since neither
assumption has been discharged at the time (i.e., by line 4).
Again, though, we need to keep careful track of what we are assuming at any given
moment. For suppose we tried to continue the proof as follows:
1 𝐴
2 𝐵
3 𝐶
4 𝐴∧𝐵 ∧I 1, 2
5 𝐶 → (𝐴 ∧ 𝐵) →I 3–4
6 𝐵 → (𝐶 → (𝐴 ∧ 𝐵)) →I 2–5
7 𝐶 → (𝐴 ∧ 𝐵) →I 3–4
This would be awful. If I tell you that Anne is smart, you should not be able to derive
that, if Cath is smart (symbolised by ‘𝐶 ’) then both Anne is smart and Queen Boudica
stood 20 feet tall! But this is just what such a proof would suggest, if it were permissible.
The essential problem is that the subproof that began with the assumption ‘𝐶 ’ de‐
pended crucially on the fact that we had assumed ‘𝐵’ on line 2. By line 6, we have
discharged the assumption ‘𝐵’: we have stopped asking ourselves what we could show,
if we also assumed ‘𝐵’. So it is simply cheating, to try to help ourselves (on line 7) to the
subproof that began with the assumption ‘𝐶 ’. The attempted disastrous proof violates,
as before, the rule in the box on page 260. The subproof of lines 3–4 occurs within
a subproof that ends on line 5. Its assumptions are discharged before line 7, so they
cannot be invoked in any rule which applies to produce line 7.
It is always permissible to open a subproof with any assumption. However, there is
some strategy involved in picking a useful assumption. Starting a subproof with an
arbitrary, wacky assumption would just waste lines of the proof. In order to obtain a
conditional by →I, for instance, you must assume the antecedent of the conditional in
a subproof.
Equally, it is always permissible to close a subproof and discharge its assumptions.
However, it will not be helpful to do so, until you have reached something useful.
Recall the proof of the argument
𝑃 → 𝑄, 𝑄 → 𝑅 ∴ 𝑃 → 𝑅
from page 264. One thing to note about the proof there is that because there are two
assumptions with the same range in the main proof, it is not easily possible to discharge
just one of them using the →I rule. For that rule only applies to a one‐assumption
subproof. If we wanted to discharge another of our assumptions, we would have to put
the proof into the right form, with each assumption made individually as the head of
its own subproof:
1 𝑃→𝑄
2 𝑄→𝑅
3 𝑃
4 𝑄 →E 1, 3
5 𝑅 →E 2, 4
6 𝑃→𝑅 →I 3–5
The conclusion is now in the range of both assumptions, as in the earlier proof – but
now it is also possible to discharge these assumptions if we wish:
1 𝑃→𝑄
2 𝑄→𝑅
3 𝑃
4 𝑄 →E 1, 3
5 𝑅 →E 2, 4
6 𝑃→𝑅 →I 3–5
7 (𝑄 → 𝑅) → (𝑃 → 𝑅) →I 2–6
While it is permissible, and often convenient, to have several assumptions with the
same range and without nesting, I recommend always trying to construct your proofs
so that each assumption begins its own subproof. That way, if you later wish to apply
rules which discharge a single assumption, you may always do so.
(𝑃 → 𝑄) ∧ (𝑃 → 𝑅) ∴ (𝑃 → (𝑅 ∧ 𝑄)).
We begin our proof by assuming the premise. We also note that the conclusion is a conditional, and so we’ll assume that it is obtained by an instance of conditional introduction. That will give us this ‘skeleton’ of a proof before we begin filling
in the details:
1 (𝑃 → 𝑄) ∧ (𝑃 → 𝑅)
2 𝑃
𝑚 (𝑅 ∧ 𝑄) ∧I 6, 5
Then we recall – perhaps! – that we already have a proof that looks very much like
this. On page 252 we have a proof that uses the same premise as ours, but also uses
the premise ‘𝑃’ to derive ‘(𝑅 ∧ 𝑄)’ – which is what we need. So we can simply copy that
whole proof over to fill in the missing section of our proof:
1 (𝑃 → 𝑄) ∧ (𝑃 → 𝑅)
2 𝑃
3 (𝑃 → 𝑄) ∧E 1
4 (𝑃 → 𝑅) ∧E 1
5 𝑄 →E 3, 2
6 𝑅 →E 4, 2
7 (𝑅 ∧ 𝑄) ∧I 6, 5
8 (𝑃 → (𝑅 ∧ 𝑄)) →I 2–7
This is a very useful feature: for if you have proved something once, you can re‐use
that proof whenever you need to, as a subproof in some other proof.
The converse isn’t always true: sometimes a subproof makes use of an assumption from outside that subproof, and if the same assumption isn’t available in the proof you transplant it into, the transplanted subproof may no longer be licensed by the rules it invokes.
𝑖 𝒜
𝑗 ℬ
𝑘 ℬ
𝑙 𝒜
(𝒜 ↔ ℬ) ↔I 𝑖 –𝑗, 𝑘–𝑙
There can be as many lines as you like between 𝑖 and 𝑗, and as many lines as you like
between 𝑘 and 𝑙 . Moreover, the subproofs can come in any order, and the second
subproof does not need to come immediately after the first. Again, this rule permits
us to discharge assumptions, and the same restrictions on making use of claims derived
in a closed subproof outside of that subproof apply.
We can now prove that a biconditional is like two conjoined conditionals. Using the
conditional and biconditional rules, we can prove that a biconditional entails a con‐
junction of conditionals, and vice versa:
1 (𝐴 ↔ 𝐵) 1 (𝐴 → 𝐵) ∧ (𝐵 → 𝐴)
2 𝐴 2 (𝐴 → 𝐵) ∧E 1
3 𝐵 ↔E 1, 2 3 𝐴
4 (𝐴 → 𝐵) →I 2–3 4 𝐵 →E 2, 3
5 𝐵 5 (𝐵 → 𝐴) ∧E 1
6 𝐴 ↔E 1, 5 6 𝐵
7 (𝐵 → 𝐴) →I 5–6 7 𝐴 →E 5, 6
8 (𝐴 → 𝐵) ∧ (𝐵 → 𝐴) ∧I 4, 7 8 (𝐴 ↔ 𝐵) ↔I 3–4, 6–7
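The semantic counterpart of these two proofs is that ‘(𝐴 ↔ 𝐵)’ and ‘(𝐴 → 𝐵) ∧ (𝐵 → 𝐴)’ have the same truth table, which can be checked mechanically (an illustrative Python sketch, not part of our proof system):

```python
from itertools import product

def implies(p, q):
    # truth table for '→'
    return (not p) or q

# (A ↔ B) and (A → B) ∧ (B → A) agree on every valuation;
# the biconditional is truth-functionally just sameness of truth value
agree = all((a == b) == (implies(a, b) and implies(b, a))
            for a, b in product([True, False], repeat=2))
print(agree)  # → True
```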
1 (𝑃 → (𝑄 → 𝑅))
2 (𝑃 ∧ 𝑄)
3 𝑃 ∧E 2
4 (𝑄 → 𝑅) →E 1, 3
5 𝑄 ∧E 2
6 𝑅 →E 4, 5
7 ((𝑃 ∧ 𝑄) → 𝑅) →I 2–6
8 ((𝑃 ∧ 𝑄) → 𝑅)
9 𝑃
10 𝑄
11 (𝑃 ∧ 𝑄) ∧I 9, 10
12 𝑅 →E 8, 11
13 (𝑄 → 𝑅) →I 10–12
14 (𝑃 → (𝑄 → 𝑅)) →I 9–13
Note the small gap between the nested vertical lines between lines 7 and 8 – that shows
we have two subproofs here, not one. (That would also be indicated by the fact that
the sentence on line 8 has a horizontal line under it – no vertical assumption line has
two markers of where the assumptions cease.)
The acceptability of our proof rules is grounded in the fact that they will never lead
us from truth to falsehood. The acceptability of the biconditional introduction rule is
demonstrated by the following correct entailment:
› If 𝒞1 , …, 𝒞𝑛 , 𝒜 ⊨ ℬ and 𝒞1 , …, 𝒞𝑛 , ℬ ⊨ 𝒜 , then 𝒞1 , …, 𝒞𝑛 ⊨ 𝒜 ↔ ℬ.
𝑚 (𝒜 ∨ ℬ)
𝑖 𝒜
𝑗 𝒞
𝑘 ℬ
𝑙 𝒞
𝒞 ∨E 𝑚, 𝑖 –𝑗, 𝑘–𝑙
This is obviously a bit clunkier to write down than our previous rules, but the point
is fairly simple. Suppose we have some disjunction, 𝒜 ∨ ℬ. Suppose we have two
subproofs, showing us that 𝒞 follows from the assumption that 𝒜 , and that 𝒞 follows
from the assumption that ℬ. Then we can derive 𝒞 itself. As usual, there can be as
many lines as you like between 𝑖 and 𝑗, and as many lines as you like between 𝑘 and 𝑙 .
Moreover, the subproofs and the disjunction can come in any order, and do not have
to be adjacent.
Some examples might help illustrate the rule in action. Consider this argument:
(𝑃 ∧ 𝑄) ∨ (𝑃 ∧ 𝑅) ∴ 𝑃.
1 (𝑃 ∧ 𝑄) ∨ (𝑃 ∧ 𝑅)
2 𝑃∧𝑄
3 𝑃 ∧E 2
4 𝑃∧𝑅
5 𝑃 ∧E 4
6 𝑃 ∨E 1, 2–3, 4–5
An adaptation of the previous proof can be used to establish a proof for this argument:
(𝑃 ∧ 𝑄) ∨ (𝑃 ∧ 𝑅) ∴ (𝑃 ∧ (𝑄 ∨ 𝑅)).
We begin the cases in the same way as above, but as we continue please note the use
of the disjunction introduction rule to get the last line of each subproof in the right
format to use disjunction elimination.
1 (𝑃 ∧ 𝑄) ∨ (𝑃 ∧ 𝑅)
2 𝑃∧𝑄
3 𝑃 ∧E 2
4 𝑄 ∧E 2
5 (𝑄 ∨ 𝑅) ∨I 4
6 𝑃 ∧ (𝑄 ∨ 𝑅) ∧I 3, 5
7 𝑃∧𝑅
8 𝑃 ∧E 7
9 𝑅 ∧E 7
10 (𝑄 ∨ 𝑅) ∨I 9
11 𝑃 ∧ (𝑄 ∨ 𝑅) ∧I 8, 10
12 𝑃 ∧ (𝑄 ∨ 𝑅) ∨E 1, 2–6, 7–11
Don’t be alarmed if you think that you wouldn’t have been able to come up with this
proof yourself. The ability to come up with novel proofs will come with practice. The
key question at this stage is whether, looking at the proof, you can see that it conforms
with the rules that we have laid down. And that just involves checking every line, and making sure that it is justified in accordance with those rules.
Another slightly tricky example. Consider:
𝐴 ∧ (𝐵 ∨ 𝐶) ∴ (𝐴 ∧ 𝐵) ∨ (𝐴 ∧ 𝐶).
1 𝐴 ∧ (𝐵 ∨ 𝐶)
2 𝐴 ∧E 1
3 𝐵∨𝐶 ∧E 1
4 𝐵
5 𝐴∧𝐵 ∧I 2, 4
6 (𝐴 ∧ 𝐵) ∨ (𝐴 ∧ 𝐶) ∨I 5
7 𝐶
8 𝐴∧𝐶 ∧I 2, 7
9 (𝐴 ∧ 𝐵) ∨ (𝐴 ∧ 𝐶) ∨I 8
10 (𝐴 ∧ 𝐵) ∨ (𝐴 ∧ 𝐶) ∨E 3, 4–6, 7–9
This disjunction rule is supported by the following valid Sentential argument form:
› If 𝒟1 , …, 𝒟𝑛 , 𝒜 ∨ ℬ, 𝒜 ⊨ 𝒞 and 𝒟1 , …, 𝒟𝑛 , 𝒜 ∨ ℬ, ℬ ⊨ 𝒞, then 𝒟1 , …, 𝒟𝑛 , 𝒜 ∨ ℬ ⊨ 𝒞.
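The distribution argument just proved – 𝐴 ∧ (𝐵 ∨ 𝐶) ∴ (𝐴 ∧ 𝐵) ∨ (𝐴 ∧ 𝐶) – can likewise be checked semantically by brute force (an illustrative Python sketch):

```python
from itertools import product

# A ∧ (B ∨ C) ⊨ (A ∧ B) ∨ (A ∧ C): every valuation making the premise true
# makes the conclusion true
dist = all((a and b) or (a and c)
           for a, b, c in product([True, False], repeat=3)
           if a and (b or c))
print(dist)  # → True
```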
𝑖 𝒜
𝑗 ℬ
𝑘 ¬ℬ
¬𝒜 ¬I 𝑖 –𝑗, 𝑖 –𝑘
Here, we can prove a sentence and its negation both within the range of the assumption
that 𝒜 . So if 𝒜 were assumed, something contradictory would be derivable under
that assumption. (We could apply conjunction introduction to lines 𝑗 and 𝑘 to make
the logical falsehood explicit, but that wouldn’t be strictly necessary.) Since logical
falsehoods are fundamentally unacceptable as the termination of a chain of argument,
we must have begun with an inappropriate starting point when we assumed 𝒜 . So,
in fact, we conclude ¬𝒜, discharging our erroneous assumption that 𝒜. There is no
need for the line with ℬ on it to occur before the line with ¬ℬ on it.
Almost always the logical falsehood arises because of a clash between the claim we
assume and some bit of prior knowledge – typically, some claim we have established
earlier in the proof. We will thus make frequent use of the rule of reiteration in ap‐
plications of negation introduction, to get the contradictory claims in the right place
to make the rule easy to apply. Here is an example of the rule in action, showing that
this argument is provable:
𝐴, ¬𝐵 ∴ ¬(𝐴 → 𝐵).
1 𝐴
2 ¬𝐵
3 𝐴→𝐵
4 𝐵 →E 3, 1
5 ¬𝐵 R2
6 ¬(𝐴 → 𝐵) ¬I 3–4, 3–5
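We can also confirm semantically that 𝐴, ¬𝐵 ⊨ ¬(𝐴 → 𝐵): the only valuation making both premises true makes ‘𝐴 → 𝐵’ false. An illustrative Python sketch:

```python
from itertools import product

def implies(p, q):
    # truth table for '→'
    return (not p) or q

# A, ¬B ⊨ ¬(A → B): check every valuation where both premises are true
ok = all(not implies(a, b)
         for a, b in product([True, False], repeat=2)
         if a and not b)
print(ok)  # → True
```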
(𝐶 → ¬𝐴)∴(𝐴 → ¬𝐶).
1 (𝐶 → ¬𝐴)
2 𝐴
3 𝐶
4 ¬𝐴 →E 1, 3
5 𝐴 R2
6 ¬𝐶 ¬I 3–5, 3–4
7 (𝐴 → ¬𝐶) →I 2–6
The correctness of the negation introduction rule is demonstrated by this valid Sentential argument form:
› If 𝒞1 , …, 𝒞𝑛 , 𝒜 ⊨ ℬ and 𝒞1 , …, 𝒞𝑛 , 𝒜 ⊨ ¬ℬ, then 𝒞1 , …, 𝒞𝑛 ⊨ ¬𝒜.
𝑖 ¬𝒜
𝑗 ℬ
𝑘 ¬ℬ
𝒜 ¬E 𝑖 –𝑗, 𝑖 –𝑘
This is also reductio reasoning, though in this case from a negated assumption. But
again, if the assumption of ¬𝒜 goes awry and allows us to derive contradictory claims
(perhaps given what we’ve already shown), that licenses us to conclude 𝒜 .
With the rule of negation elimination, we can prove some claims that are hard to prove
directly. For example, suppose we wanted to prove an instance of the LAW OF EXCLUDED
MIDDLE, that (𝒜 ∨ ¬𝒜) is true for any sentence 𝒜 . Suppose we aim at proving the
specific instance ‘(𝑃 ∨ ¬𝑃)’. (It’s easy to see that the proof we give can be adapted to
any other instance of the law.) You might initially have thought: it is a disjunction, so
should be proved by disjunction introduction. But we cannot prove either the sentence
letter ‘𝑃’ or its negation from no assumptions – so we could not prove excluded middle
from no assumptions if it were by disjunction introduction from one of its disjuncts.
So we proceed indirectly: we show that supposing the negation of the law of excluded
middle leads to logical falsehood, and conclude it by negation elimination:
1 ¬(𝑃 ∨ ¬𝑃)
2 𝑃
3 (𝑃 ∨ ¬𝑃) ∨I 2
4 ¬(𝑃 ∨ ¬𝑃) R1
5 ¬𝑃 ¬I 2–3, 2–4
6 (𝑃 ∨ ¬𝑃) ∨I 5
7 ¬(𝑃 ∨ ¬𝑃) R1
8 (𝑃 ∨ ¬𝑃) ¬E 1–6, 1–7
One interesting feature of this proof is that one of the contradictory sentences is the
assumption itself. When the assumption that ¬𝒜 goes wrong, it might be because we
have the resources to prove 𝒜 ! Some interesting philosophical controversy surrounds
proofs like this: see §30.3.
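Semantically, of course, the law of excluded middle is easy to verify: ‘(𝑃 ∨ ¬𝑃)’ is true on both valuations of ‘𝑃’. In an illustrative Python sketch:

```python
# P ∨ ¬P comes out true on both valuations of 'P': it is a logical truth
lem = all(p or not p for p in [True, False])
print(lem)  # → True
```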
𝑃 ∴ (𝑃 ∧ 𝐷) ∨ (𝑃 ∧ ¬𝐷).
1 𝑃
2 ¬((𝑃 ∧ 𝐷) ∨ (𝑃 ∧ ¬𝐷))
3 𝐷
4 (𝑃 ∧ 𝐷) ∧I 1, 3
5 (𝑃 ∧ 𝐷) ∨ (𝑃 ∧ ¬𝐷) ∨I 4
6 ¬𝐷 ¬I 3–5, 2
7 (𝑃 ∧ ¬𝐷) ∧I 1, 6
8 (𝑃 ∧ 𝐷) ∨ (𝑃 ∧ ¬𝐷) ∨I 7
9 (𝑃 ∧ 𝐷) ∨ (𝑃 ∧ ¬𝐷) ¬E 2–8
I make two comments. In line 6, the justification cites line 2 which lies outside the
subproof. That is okay, since the application of the rule lies within the range of the
assumption of line 2. In line 9, the justification only cites the subproof from 2 to 8,
rather than two ranges of line numbers. This is because in this application of our rule,
we have the special case where the sentence such that both it and its negation can be
derived from the assumption is that assumption. It would be trivial to derive it from
itself.
The negation elimination rule is supported by this valid Sentential argument form:
› If 𝒞1 , …, 𝒞𝑛 , ¬𝒜 ⊨ ℬ and 𝒞1 , …, 𝒞𝑛 , ¬𝒜 ⊨ ¬ℬ, then 𝒞1 , …, 𝒞𝑛 ⊨ 𝒜.
We can now see that the proof we began to construct can be completed, as in Figure 29.1.
1 ¬(𝐴 ∨ 𝐵)
2 𝐴
3 (𝐴 ∨ 𝐵) ∨I 2
4 ¬(𝐴 ∨ 𝐵) R1
5 ¬𝐴 ¬I 2–3, 2–4
6 𝐵
7 (𝐴 ∨ 𝐵) ∨I 6
8 ¬(𝐴 ∨ 𝐵) R1
9 ¬𝐵 ¬I 6–7, 6–8
10 (¬𝐴 ∧ ¬𝐵) ∧I 5, 9
3. Finally, Figure 29.3 shows a long proof involving most of our rules in action (page
282).
These three proofs are more complex than the others we’ve considered, because they
involve multiple rules in tandem. You should make sure you understand why each
rule applies where it does, and that the proofs are correct, before you move on. You
probably won’t feel that you are able to construct a proof yourself as yet, and that is
okay. It is important now to see that these are in fact proofs. Some ideas about how
to go about constructing them yourself will be presented in §32. But you will also get
a sense about how to construct complex proofs as you practice constructing simpler
proofs and start to see how they can be slotted together to form larger proofs. There is
no substitute for practice.
1 𝐴∨𝐵
2 ¬(𝐴 ∧ 𝐶)
3 ¬(𝐵 ∧ ¬𝐷)
4 𝐴
5 𝐶
6 𝐴∧𝐶 ∧I 4, 5
7 ¬(𝐴 ∧ 𝐶) R2
8 ¬𝐶 ¬I 5–6, 5–7
9 (¬𝐶 ∨ 𝐷) ∨I 8
10 𝐵
11 ¬𝐷
12 (𝐵 ∧ ¬𝐷) ∧I 10, 11
13 ¬(𝐵 ∧ ¬𝐷) R3
14 𝐷 ¬E 11–12, 11–13
15 (¬𝐶 ∨ 𝐷) ∨I 14
16 (¬𝐶 ∨ 𝐷) ∨E 1, 4–9, 10–15
Figure 29.2: Proof that (𝐴 ∨ 𝐵), ¬(𝐴 ∧ 𝐶), ¬(𝐵 ∧ ¬𝐷) ∴ (¬𝐶 ∨ 𝐷).
1 ¬𝑃 ∨ 𝑄
2 ¬𝑃
3 𝑃
4 ¬𝑄
5 𝑃 ∧ ¬𝑃 ∧I 3, 2
6 𝑃 ∧E 5
7 ¬𝑃 ∧E 5
8 𝑄 ¬E 4–6, 4–7
9 𝑃→𝑄 →I 3–8
10 𝑄
11 𝑃
12 𝑄∧𝑄 ∧I 10, 10
13 𝑄 ∧E 12
14 𝑃→𝑄 →I 11–13
15 𝑃→𝑄 ∨E 1, 2–9, 10–14
16 𝑃→𝑄
17 ¬(¬𝑃 ∨ 𝑄)
18 𝑃
19 𝑄 →E 16, 18
20 ¬𝑃 ∨ 𝑄 ∨I 19
21 ¬𝑃 ¬I 18–20, 17
22 ¬𝑃 ∨ 𝑄 ∨I 21
23 ¬𝑃 ∨ 𝑄 ¬E 17–22
Practice exercises
A. The following ‘proof’ is incorrect. Explain the mistakes it makes.
1 ¬𝐿 → (𝐴 ∧ 𝐿)
2 ¬𝐿
3 𝐴 →E 1, 2
4 𝐿
5 𝐿 ∧ ¬𝐿 ∧I 4, 2
6 ¬𝐴
7 𝐿 →I 5
8 ¬𝐿 →E 5
9 𝐴 ¬I 6–7, 6–8
10 𝐴 ∨E 2–3, 4–9
B. The following proofs are missing their commentaries (rule and line numbers). Add
them, to turn them into bona fide proofs. Additionally, write down the argument that
corresponds to each proof.
1 𝐴→𝐷 1 ¬𝐿 → (𝐽 ∨ 𝐿)
2 𝐴∧𝐵 2 ¬𝐿
3 𝐴 3 𝐽∨𝐿
4 𝐷 4 𝐽
5 𝐷∨𝐸 5 𝐽∧𝐽
6 (𝐴 ∧ 𝐵) → (𝐷 ∨ 𝐸) 6 𝐽
7 𝐿
8 ¬𝐽
9 𝐽
10 𝐽
1. 𝐽 → ¬𝐽 ∴ ¬𝐽
2. 𝑄 → (𝑄 ∧ ¬𝑄) ∴ ¬𝑄
3. 𝐴 → (𝐵 → 𝐶) ∴ (𝐴 ∧ 𝐵) → 𝐶
4. 𝐾∧𝐿 ∴𝐾 ↔𝐿
5. (𝐶 ∧ 𝐷) ∨ 𝐸 ∴ 𝐸 ∨ 𝐷
6. 𝐴 ↔ 𝐵, 𝐵 ↔ 𝐶 ∴ 𝐴 ↔ 𝐶
7. ¬𝐹 → 𝐺, 𝐹 → 𝐻 ∴ 𝐺 ∨ 𝐻
8. (𝑍 ∧ 𝐾) ∨ (𝐾 ∧ 𝑀), 𝐾 → 𝐷 ∴ 𝐷
9. 𝑃 ∧ (𝑄 ∨ 𝑅), 𝑃 → ¬𝑅 ∴ 𝑄 ∨ 𝐸
10. 𝑆 ↔ 𝑇 ∴ 𝑆 ↔ (𝑇 ∨ 𝑆)
11. ¬(𝑃 → 𝑄) ∴ ¬𝑄
12. ¬(𝑃 → 𝑄) ∴ 𝑃
D. For each of the following sentences, construct a natural deduction proof which has
the sentence as its last line, and contains no undischarged assumptions:
1. 𝐽 ↔ (𝐽 ∨ (𝐿 ∧ ¬𝐿))
2. ((𝑃 ∧ 𝑄) ↔ (𝑄 ∧ 𝑃))
3. ((𝑃 → 𝑃) → 𝑄) → 𝑄.
30
Some Philosophical Issues about
Conditionals, Meaning, and Negation
If you can establish 𝒞 , given the assumption that 𝒜 and perhaps some
supplementary assumptions ℬ1 , …, ℬ𝑛 , then you can establish ‘if 𝒜 , 𝒞 ’
solely on the basis of the assumptions ℬ1 , …, ℬ𝑛 .
Conditional proof captures a significant aspect of the English ‘if’: the way that the conditional helps us neatly summarise reasoning from assumptions, and then store that
reasoning for later use in a conditional form.
But if conditional proof is a good rule for English ‘if’, then we can argue that ‘if’ is
actually synonymous with ‘→’:
Our discussion in §11.5 seemed to indicate that the English conditional was not syn‐
onymous with ‘→’. That is hard to reconcile with the above argument. Most philo‐
sophers have concluded that, contrary to appearances, conditional proof is not always
a good way of reasoning for English ‘if’. Here is an example which seems to show this,
though it requires some background.
It is clearly valid in English, though rather pointless, to argue from a claim to itself. So
when 𝒞 is some English sentence, 𝒞 ∴ 𝒞 is a valid argument in English. And we cannot
make a valid argument invalid by adding more premises: if premises we already have
are conclusive grounds for the conclusion, adding more premises while keeping the
conclusive grounds cannot make the argument less conclusive. So 𝒞, 𝒜 ∴ 𝒞 is a valid
argument in English.
If conditional proof were a good way of arguing in English, we could convert this valid
argument into another valid argument with this form: 𝒞 ∴ if 𝒜, 𝒞. But then conditional proof would allow us to convert this valid argument:
This is invalid, because even if the premise is true, the conclusion seems actually
false. If conditional proof enables us to convert a valid English argument into an invalid
argument, so much the worse for conditional proof.
There is much more to be said about this example. What is beyond doubt is that →I is
a good rule for Sentential, regardless of the fortunes of conditional proof in English. It
does seem that instances of conditional proof in mathematical reasoning are all accept‐
able, which again shows the roots of natural deduction as a formalisation of existing
mathematical practice. This would suggest that → might be a good representation of
mathematical uses of ‘if’.
30.2 Inferentialism
You will have noticed that our rules come in pairs: an introduction rule that tells you
how to introduce a connective into a proof from what you have already, and an elim‐
ination rule that tells you how to remove it in favour of its consequences. These rules
can be justified by consideration of the meanings we assigned to the connectives of
Sentential in the schematic truth tables of §8.3.
But perhaps we should invert this order of justification. After all, the proof rules
already seem to capture (more or less) how ‘and’, ‘or’, ‘not’, ‘if’, and ‘iff’ work in con‐
versation. We make some claims and assumptions. The introduction and elimination
rules summarise how we might proceed in our conversation against the background
of those claims and assumptions. Many philosophers have thought that the meaning
of an expression is entirely governed by how it is or might be used in a conversation by
competent speakers – in slogan form, meaning is fixed by use. If these proof rules de‐
scribe how we might bring an expression into a conversation, and what we may do with
it once it is there, then these proof rules describe the totality of facts on which meaning
depends. The meaning of a connective, according to this INFERENTIALIST picture, is
represented by its introduction and elimination rules – and not by the truth‐function
that a schematic truth table represents. On this view, it is the correctness of the schem‐
atic proof of 𝒜 ∨ ℬ from 𝒜 which explains why the schematic truth table for ‘∨’ has a
T on every row on which at least one of its constituents gets a T.
There is a significant debate on just this issue in the philosophy of language, about the
nature of meaning. Is the meaning of a word what it represents, the view sometimes
called REPRESENTATIONALISM? Or is the meaning of a word, rather, given by some
rules for how to use it, as inferentialism says? We cannot go deeply into this issue
here, but I will say a little. The representationalist view seems to accommodate some
expressions very well: the meaning of a name, for example, seems very plausibly to be
identified with what it names; the meaning of a predicate might be thought of as the
corresponding property. But inferentialism seems more natural as an approach to the
logical connectives:
It seems rather unnatural, by contrast, to think that the meaning of ‘and’ is some ab‐
stract mathematical ‘thing’ represented by a truth table.
Can the inferentialist distinguish good systems of rules, such as those governing ‘and’,
from bad systems? The problem is that without appealing to truth tables or the like,
we seem to be committed to the legitimacy of rather problematic connectives. The
most famous example is Prior’s ‘tonk’ governed by these rules:
𝑚 𝒜 𝑚 𝒜 tonk ℬ
𝒜 tonk ℬ tonk‐I 𝑚 ℬ tonk‐E 𝑚
You will notice that ‘tonk’ has an introduction rule like ‘∨’, and an elimination rule
like ‘∧’. Of course ‘tonk’ is a connective we would not like in a language, since pairing
the introduction and elimination rules would allow us to prove any arbitrary sentence
from any assumption whatsoever:
1 𝑃
2 𝑃 tonk 𝑄 tonk‐I 1
3 𝑄 tonk‐E 2
If we are to rule out such deviant connectives as ‘tonk’, Prior argues, we have to ac‐
cept that ‘an expression must have some independently determined meaning before
we can discover whether inferences involving it are valid or invalid’ (Prior, op. cit., p.
38). We cannot, that is, accept the inferentialist position that the rules of implication
come first and the meaning comes second. Inferentialists have replied, but we must
unfortunately leave this interesting debate here for now.2
30.3 Constructivism
The proof of excluded middle we saw on p. 278 is an example of an INDIRECT PROOF:
even though the main connective of our conclusion is a disjunction, we don’t establish
it by disjunction introduction. This is in fact typical of reductio reasoning in general:
we show that something is true by showing that an absurdity would follow if it were false.
An influential group of mathematicians are worried by indirect proofs. There is a view
known as CONSTRUCTIVISM, which regards mathematical objects as constructed, not discovered. The closely related view known as INTUITIONISM agrees, while offering a particular account of the construction as fundamentally deriving from human perception
of the passage of time. The Dutch mathematician L E J Brouwer is most famously as‐
sociated with this view. He says this about the origins of our understanding of the
natural numbers:
Regardless of your view of intuitionism, the idea that mathematical objects are not
pre‐existing inhabitants of some Platonic realm has a lot of appeal.
2 The interested reader might wish to start with this reply to Prior: Nuel D Belnap, Jr (1962) ‘Tonk, Plonk
and Plink’, Analysis 22, pp. 130–34.
3 L E J Brouwer (1981) Brouwer’s Cambridge lectures on intuitionism, D van Dalen, ed., Cambridge Univer‐
sity Press, at pp. 4–5.
Constructivists of all stripes think we shouldn’t accept the law of excluded middle,
because to think that any claim of the form 𝒜 ∨ ¬𝒜 must be true is to think that
there is a pre‐existing fact of the matter as to whether or not 𝒜 – and there may not
be until we have constructed the mathematical objects in question. A mathematical
existence proof, for example showing that a number with a certain property exists,
must – according to the constructivist – consist in a construction of the specific number
in question. We cannot show that some number has a property just by showing that
the supposition that no number has that property leads to contradiction.
When you show that ¬𝒜 leads to absurdity, you have constructed a proof of ¬¬𝒜 , but
that is not the same as a proof of 𝒜 itself. Constructivists thus typically accept the
¬I rule. If you prove ¬𝒜 by showing that the assumption 𝒜 leads to a contradiction,
you have positively constructed an absurdity on the basis of that assumption, and that
proof is acceptable. However, constructivists reject the ¬E rule. A proof showing that
the assumption ¬𝒜 leads to a contradiction amounts only to a positive construction
of ¬¬𝒜 – not a construction of 𝒜 .
Constructive logic has some interesting features that our logical system does not. For
example, constructive logic is typically understood to have the DISJUNCTION PROPERTY:
If a disjunction 𝒜 ∨ ℬ is provable, then either 𝒜 is provable or ℬ is provable.
Our logic clearly lacks the disjunction property, as the proof of excluded middle
demonstrates: ‘𝑃 ∨ ¬𝑃’ can be proved but neither of its disjuncts is provable.
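On the semantic side, this situation can be verified mechanically: a truth table shows that ‘𝑃 ∨ ¬𝑃’ is a logical truth while neither disjunct is. Here is a brief sketch in Python; the encoding of sentences as Boolean functions of a valuation is my own illustration, not part of the book’s formal apparatus.

```python
from itertools import product

def tautology(sentence, atoms):
    """True iff the sentence is true on every valuation of its atoms."""
    return all(sentence(dict(zip(atoms, vals)))
               for vals in product([True, False], repeat=len(atoms)))

# 'P ∨ ¬P' is true on every valuation ...
assert tautology(lambda v: v['P'] or not v['P'], ['P'])
# ... but neither disjunct is true on every valuation.
assert not tautology(lambda v: v['P'], ['P'])
assert not tautology(lambda v: not v['P'], ['P'])
```

Since provability in our system is meant to line up with being a tautology, this is the semantic face of the failure of the disjunction property.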
Constructive and intuitionistic logic offers an alternative conception of the nature of logic to the one we have advocated. It is typically understood to involve recentering logic
around provability rather than truth. So rather than saying ‘𝑃 ∨ ¬𝑃’ is false, construct‐
ivists deny that it is provable, and then add that mathematical language should be
restricted to what is provable, rather than relying on a Platonistic notion of abstract
unworldly truth. The philosophical potential of this alternative way of thinking about
logic unfortunately takes us beyond the scope of the present course.4
4 A brief account of intuitionistic logic and the central role it gives to provability can be found in §§3.1–3.2
of Rosalie Iemhoff (2020) ‘Intuitionism in the Philosophy of Mathematics’, in Edward N. Zalta, ed., The
Stanford Encyclopedia of Philosophy https://plato.stanford.edu/entries/intuitionism/#BHKInt.
31
Proof‐Theoretic Concepts
When there is a formal proof which terminates with 𝒞 and has among its undischarged assumptions only sentences among 𝒜1 , 𝒜2 , …, 𝒜𝑛 , we will record this fact by writing:
𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ 𝒞.
For example, consider the following proof:
1 𝐴
2 ¬𝐵 → ¬𝐴
3 ¬𝐵
4 ¬𝐴 →E 2, 3
5 𝐴 R1
6 𝐵 ¬E 3–4, 3–5
The undischarged assumptions are ‘𝐴’ and ‘¬𝐵 → ¬𝐴’ – the assumption ‘¬𝐵’ on line 3
is discharged by the application of negation elimination that leads to the last line, ‘𝐵’.
So this proof shows that 𝐴, ¬𝐵 → ¬𝐴 ⊢ 𝐵.
The symbol ‘⊢’ is known as the single turnstile. I want to emphasise that this is different
from the double turnstile symbol (‘⊨’) that represents entailment (§23).
› The single turnstile, ‘⊢’, concerns the existence of a certain kind of formal proof
– namely, 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ 𝒞 claims that there is a formal proof which termin‐
ates with 𝒞 and has among its undischarged assumptions only sentences among
𝒜1 , 𝒜2 , …, 𝒜𝑛 .
› The double turnstile, ‘⊨’, concerns the non‐existence of a certain kind of inter‐
pretation (or valuation, in the special case of Sentential) – namely, that there is
no interpretation making each of 𝒜1 , 𝒜2 , …, 𝒜𝑛 true while making 𝒞 false.
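For Sentential, the double‐turnstile clause can be checked mechanically, since a sentence involves only finitely many atoms and hence only finitely many valuations. A minimal brute‐force sketch in Python (sentences are encoded as Boolean functions of a valuation purely for illustration):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """A1, ..., An entail C iff no valuation makes every premise
    true while making the conclusion false."""
    for vals in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# The entailment matching the proof above: A, ¬B → ¬A entail B.
A = lambda v: v['A']
not_B_to_not_A = lambda v: v['B'] or not v['A']  # material reading of ¬B → ¬A
assert entails([A, not_B_to_not_A], lambda v: v['B'], ['A', 'B'])
# A alone does not entail B: the valuation A true, B false is a counterexample.
assert not entails([A], lambda v: v['B'], ['A', 'B'])
```

Nothing of this sort is available for ‘⊢’ directly: there is no finite space of proofs to search through, which is the asymmetry discussed below.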
𝒜1 , …, 𝒜𝑛 , ℬ ⊢ 𝒞 iff 𝒜1 , …, 𝒜𝑛 ⊢ ℬ → 𝒞 .
1 𝒜1
⋮
𝑛 𝒜𝑛
𝑛+1 ℬ
⋮
𝑘 𝒞

1 𝒜1
⋮
𝑛 𝒜𝑛
𝑛+1 ℬ
⋮
𝑘 𝒞
𝑘+1 ℬ → 𝒞 →I 𝑛 + 1–𝑘
The other direction is just as easy. Suppose we have the proof on the left, terminating in ℬ → 𝒞. We can then make a new assumption of ℬ and use conditional elimination to generate a proof terminating in 𝒞, with that new assumption remaining undischarged.
1 𝒜1 1 𝒜1
⋮ ⋮
𝑛 𝒜𝑛 𝑛 𝒜𝑛
⋮ ⋮
𝑘 ℬ→𝒞 𝑘 ℬ→𝒞
𝑘+1 ℬ
𝑘+2 𝒞 →E 𝑘 , 𝑘 + 1
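The deduction theorem has an exact semantic analogue, and particular instances of it can be spot‐checked by brute force: 𝒜1 , …, 𝒜𝑛 , ℬ ⊨ 𝒞 iff 𝒜1 , …, 𝒜𝑛 ⊨ ℬ → 𝒞. A hedged sketch, again encoding sentences as Boolean functions (this checks one instance, not the general theorem):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """No valuation makes all premises true and the conclusion false."""
    for vals in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

implies = lambda p, q: (not p) or q  # the material conditional

# Instance: P, P -> Q entail Q   iff   P entails (P -> Q) -> Q.
P = lambda v: v['P']
PtoQ = lambda v: implies(v['P'], v['Q'])
Q = lambda v: v['Q']
assert entails([P, PtoQ], Q, ['P', 'Q'])
assert entails([P], lambda v: implies(implies(v['P'], v['Q']), v['Q']), ['P', 'Q'])
```

The second assertion is the semantic counterpart of the claim 𝑃 ⊢ (𝑃 → 𝑄) → 𝑄 that reappears in the exercises below.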
⊢𝒜
𝒜 is a THEOREM iff ⊢ 𝒜 .
Just as provability is analogous to entailment (and is, we hope, coextensive with it), so
theoremhood corresponds to logical truth.
To illustrate the idea, suppose I want to prove that ‘¬(𝐺 ∧ ¬𝐺)’ is a theorem. So I must
start my proof without any assumptions. However, since I want to prove a sentence
whose main connective is a negation, I shall want to immediately begin a subproof by
making the additional assumption ‘𝐺 ∧ ¬𝐺’ for the sake of argument, and show that
this leads to contradictory consequences. All told, then, the proof looks like this:
1 𝐺 ∧ ¬𝐺
2 𝐺 ∧E 1
3 ¬𝐺 ∧E 1
4 ¬(𝐺 ∧ ¬𝐺) ¬I 1–2, 1–3
This proof shows that ⊢ ¬(𝐺 ∧ ¬𝐺). It can moreover be converted into a proof
of any instance of the law of non‐contradiction. Simply substitute any sentence 𝒜 for
every occurrence of ‘𝐺 ’ in the above proof, and the transformed proof will remain cor‐
rect (any internal sentence connectives featuring in 𝒜 aren’t addressed by the proof
rules in that proof).1
Because every proof begins with an assumption, we can only obtain a proof
of a theorem if we discharge that opening assumption with a rule which allows one to
close a subproof: conditional or biconditional introduction, or either of the negation
rules (introduction or elimination):
1 𝑄
2 𝑃
3 ¬𝑄
4 𝑄 R1
5 𝑄 ¬E 3–4, 3–3
6 𝑃→𝑄 →I 2–5
7 𝑄 → (𝑃 → 𝑄) →I 1–6
There is a connection to the deduction theorem here too. Any correct proof of 𝒞 with
one undischarged assumption 𝒜 will demonstrate 𝒜 ⊢ 𝒞 . The deduction theorem
then assures us that ⊢ 𝒜 → 𝒞 . We see just this in the last line of the above proof,
where a proof that 𝑄 ⊢ (𝑃 → 𝑄) is converted to a proof showing that ⊢ 𝑄 → (𝑃 → 𝑄).
But we cannot say that every theorem has a negation, a conditional or a biconditional as
its main connective. For one thing, we could have started with a negated disjunction or
conjunction. For another, once we have a proof of a theorem, we can apply disjunction
or conjunction introduction to its last line: e.g., we could extend the above proof by
conjunction introduction to show that ⊢ ((𝑄 → (𝑃 → 𝑄)) ∧ (𝑄 → (𝑃 → 𝑄))).
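Since theoremhood is meant to correspond to logical truth, particular theoremhood claims can be sanity‐checked semantically: a candidate theorem should come out true on every valuation. A sketch for the two examples above (the Boolean encodings are my own illustration):

```python
from itertools import product

def tautology(sentence, atoms):
    """True iff the sentence holds on every valuation of its atoms."""
    return all(sentence(dict(zip(atoms, vals)))
               for vals in product([True, False], repeat=len(atoms)))

implies = lambda p, q: (not p) or q  # the material conditional

# ¬(G ∧ ¬G) is true on every valuation.
assert tautology(lambda v: not (v['G'] and not v['G']), ['G'])
# Q → (P → Q) likewise.
assert tautology(lambda v: implies(v['Q'], implies(v['P'], v['Q'])), ['P', 'Q'])
```

Of course, a passed check like this is evidence of theoremhood only on the assumption that provability and tautologousness coincide, which is exactly what a soundness and completeness result would establish.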
To show that something is a theorem, you just have to find a suitable proof. It is typ‐
ically much harder to show that something is not a theorem. To do this, you would
have to demonstrate, not just that certain proof strategies fail, but that no proof is
possible. Even if you fail in trying to prove a sentence in a thousand different ways,
perhaps the proof is just too long and complex for you to make out. Perhaps you just
didn’t try hard enough. Even if you come up with a systematic search strategy to show
that some sentence ℬ isn’t a theorem, there is no guarantee your strategy will yield a
result. Suppose you tried to construct all well‐formed proofs terminating with ℬ from
shortest to longest, aiming to show there is no proof in which all assumptions have
been discharged. As there is no longest proof, there is no guarantee at any stage in this
process that your failure to find such a proof shows there is no such proof. It might
just be that the shortest such proof is longer than any you’ve yet considered. On the
1 We have already seen a proof showing an instance of the law of excluded middle is a theorem in §29.9,
page 278.
other hand, if one of the proofs you construct is a proof of ℬ with no undischarged
assumptions, then you have shown conclusively that it is a theorem, and you can stop
your search. Showing that something isn’t a theorem can be harder than showing that
it is, in terms of how many proofs you have to consider. (On the other hand, showing
that something is a logical truth can be harder than showing that it is not, in terms of
how many interpretations you need to consider.)
Here is another new bit of terminology:
Some sentences 𝒜1 , 𝒜2 , …, 𝒜𝑛 are JOINTLY CONTRARY iff there is some sentence ℬ such that both 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ and 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ¬ℬ.
Equivalently, some sentences are jointly contrary if you can prove a contradiction from
them: 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ ∧ ¬ℬ.
It is straightforward to show that some sentences 𝒜1 , 𝒜2 , …, 𝒜𝑛 are jointly contrary (if
they are): you just need to provide two proofs, one terminating in ℬ and the other in
¬ℬ , such that all of the undischarged assumptions in those proofs are among the 𝒜𝑖 s.
Showing that some sentences are not jointly contrary is much harder. It would require
more than just providing a proof or two; it would require showing that no proof of a
certain kind is possible.
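Semantically, jointly contrary sentences correspond to jointly unsatisfiable ones: no valuation makes them all true. Unlike the proof search, this semantic check always terminates, since only finitely many valuations need inspecting. A sketch:

```python
from itertools import product

def jointly_satisfiable(sentences, atoms):
    """True iff some valuation makes every sentence true."""
    return any(all(s(dict(zip(atoms, vals))) for s in sentences)
               for vals in product([True, False], repeat=len(atoms)))

# P, P → Q, ¬Q are jointly contrary: no valuation satisfies all three.
trio = [lambda v: v['P'],
        lambda v: (not v['P']) or v['Q'],
        lambda v: not v['Q']]
assert not jointly_satisfiable(trio, ['P', 'Q'])
# Dropping the last sentence leaves a satisfiable pair.
assert jointly_satisfiable(trio[:2], ['P', 'Q'])
```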
Some sentences are jointly contrary iff the negation of their conjunction is a theorem.
Suppose we have these proofs showing the 𝒜𝑖 s to be jointly contrary:
1 𝒜1 1 𝒜1
⋮ ⋮
𝑛 𝒜𝑛 𝑛 𝒜𝑛
⋮ ⋮
𝑘 ℬ 𝑘′ ¬ℬ
1 𝒜1 ∧ … ∧ 𝒜𝑛
2 𝒜1 ∧E 1
⋮
𝑛 + 1 𝒜𝑛 ∧E 1
⋮
𝑘 + 𝑖 ℬ from 2–𝑛 + 1
⋮
𝑘′ + 𝑖 ¬ℬ from 2–𝑛 + 1
𝑘′ + 𝑖 + 1 ¬(𝒜1 ∧ … ∧ 𝒜𝑛) ¬I 1–𝑘 + 𝑖, 1–𝑘′ + 𝑖
                     Yes          No
 theorem?            one proof    all possible proofs
 equivalent?         two proofs   all possible proofs
 jointly contrary?   two proofs   all possible proofs
 provable?           one proof    all possible proofs
2 For more on structural rules, and the various logics that don’t have all the structural features of our
natural deduction system, see Greg Restall (2018) ‘Substructural Logics’ in Edward N Zalta, ed., The
Stanford Encyclopedia of Philosophy https://plato.stanford.edu/entries/logic‐substructural/.
For example: in our proof system, it does not matter in what order we make as‐
sumptions. These two proofs, distinct in their structure, nevertheless both show that
𝐴, 𝐵 ⊢ (𝐴 ∧ 𝐵).
1 𝐴 1 𝐵
2 𝐵 2 𝐴
3 (𝐴 ∧ 𝐵) ∧I 1, 2 3 (𝐴 ∧ 𝐵) ∧I 2, 1
Recall that our definition of provability says that 𝒜1 , …, 𝒜𝑛 ⊢ ℬ just in case there is a
proof whose undischarged assumptions are all among the 𝒜𝑖 s. No mention is made
of the order of those undischarged assumptions. So while both the proofs above show
that 𝐴, 𝐵 ⊢ (𝐴 ∧ 𝐵), they also both show that 𝐵, 𝐴 ⊢ (𝐴 ∧ 𝐵).
This feature, that you can permute the order of assumptions arbitrarily, is wholly gen‐
eral, and is known as the COMMUTATIVITY of assumptions. That is to say:
𝒜1 , … , 𝒜𝑖 , … , 𝒜𝑗 , … , 𝒜𝑛 ⊢ 𝒞 iff 𝒜1 , … , 𝒜𝑗 , … , 𝒜𝑖 , … , 𝒜𝑛 ⊢ 𝒞.
Commutativity and the deduction theorem seem trivial. But they can be surprisingly
powerful. Consider, for example, this trivial proof by reiteration that 𝑃 → 𝑄 ⊢ 𝑃 → 𝑄:
1 𝑃→𝑄
2 𝑃→𝑄 R1
1. 𝑃 → 𝑄 ⊢ 𝑃 → 𝑄;
2. 𝑃 → 𝑄, 𝑃 ⊢ 𝑄 (by the deduction theorem);
3. 𝑃, 𝑃 → 𝑄 ⊢ 𝑄 (by commutativity);
4. 𝑃 ⊢ (𝑃 → 𝑄) → 𝑄 (by the deduction theorem).
This argument doesn’t construct a formal proof; it just assures you that there will be
one. (One of the exercises asks you to construct the formal proof.)
Here’s another example of a structural feature of our proof system. We allow a given
line of a proof to be reused multiple times, as long as the assumptions on which that
line relies remain undischarged. See this proof that (𝑃 → (𝑃 → 𝑄)) ⊢ (𝑃 → 𝑄):
1 𝑃 → (𝑃 → 𝑄)
2 𝑃
3 𝑃→𝑄 →E 1, 2
4 𝑄 →E 3, 2
5 𝑃→𝑄 →I 2–4
Here we appeal to line 2 multiple times: in eliminating the conditional on line 1 and
the conditional on line 3. No rule governing any of our connectives is associated with
this behaviour: rather, it is built in to the way we allow all of our rules to appeal to any
previous line (as long as the line doesn’t appear in a closed subproof), even if that line
has been appealed to already by some other rule. It is fairly easy to see that the above
proof cannot succeed without multiple appeals to line 2.
This feature of our proof system is known as CONTRACTION: if there is a proof in which
any 𝒜 occurs as an undischarged assumption on two or more distinct lines, there is
also a proof in which one of those assumptions of 𝒜 is removed. More concisely:
𝒜1 , … , ℬ, … , ℬ, … , 𝒜𝑛 ⊢ 𝒞 iff 𝒜1 , … , ℬ, … , 𝒜𝑛 ⊢ 𝒞.
Contraction, or the principle that you can appeal to the same prior line multiple times,
is essentially the same as our proof rule of reiteration. In fact the rule of reiteration is
strictly dispensable (§33.1), in part because we can always appeal instead to the original
sentence multiple times.
The final structural feature I want to point to is that adding additional assumptions
doesn’t undermine provability. This property is called WEAKENING:
If 𝒜1 , … , 𝒜𝑛 ⊢ 𝒞, then 𝒜1 , … , 𝒜𝑛 , ℬ ⊢ 𝒞.
This is a very general feature, because any correct natural deduction proof can be em‐
bedded within an arbitrary additional assumption and still remain correct. So if a
proof with the structure illustrated schematically on the left is correct, showing that
𝒜1 , … , 𝒜𝑛 ⊢ 𝒞 , then so is the proof scheme on the right, which has all the same as‐
sumptions plus the additional assumption ℬ, and so shows 𝒜1 , … , 𝒜𝑛 , ℬ ⊢ 𝒞 :
1 𝒜1
⋮
𝑛 𝒜𝑛
⋮
𝑘 𝒞
1 ℬ
2 𝒜1
⋮
𝑛 + 1 𝒜𝑛
⋮
𝑘 + 1 𝒞
These three structural principles involved in the construction of our natural deduction proofs
support our decision to define provability as we did. Our definition was: 𝒜1 …𝒜𝑛 ⊢ 𝒞
when there is a proof with undischarged assumptions among the 𝒜𝑖 s. We do not re‐
quire that the undischarged assumptions be exactly the 𝒜𝑖 s, nor that the undischarged
assumptions don’t contain any redundancy, nor that the order in which assumptions
are made in the proof is the same as the order of the sentences on the left side of
the turnstile. We will briefly mention some alternative logics in which some of these
structural rules are abandoned in §39.
Practice exercises
A. Give a proof showing that each of the following sentences is a theorem:
1. 𝑂 → 𝑂;
2. 𝐽 ↔ (𝐽 ∨ (𝐿 ∧ ¬𝐿));
3. ((𝐴 → 𝐵) → 𝐴) → 𝐴;
4. ((𝑃 → 𝑃) → 𝑄) → 𝑄;
5. (𝐶 ∧ 𝐷) ↔ (𝐷 ∧ 𝐶) .
B. Provide proofs to show each of the following:
1. 𝑃 ⊢ (𝑃 → 𝑄) → 𝑄
2. 𝐶 → (𝐸 ∧ 𝐺), ¬𝐶 → 𝐺 ⊢ 𝐺
3. 𝑀 ∧ (¬𝑁 → ¬𝑀) ⊢ (𝑁 ∧ 𝑀) ∨ ¬𝑀
4. (𝑍 ∧ 𝐾) ↔ (𝑌 ∧ 𝑀), 𝐷 ∧ (𝐷 → 𝑀) ⊢ 𝑌 → 𝑍
5. (𝑊 ∨ 𝑋) ∨ (𝑌 ∨ 𝑍), 𝑋 → 𝑌, ¬𝑍 ⊢ 𝑊 ∨ 𝑌
C. Show that each of the following pairs of sentences are provably equivalent:
1. 𝑅 ↔ 𝐸, 𝐸 ↔ 𝑅
2. 𝐺 , ¬¬¬¬𝐺
3. 𝑇 → 𝑆, ¬𝑆 → ¬𝑇
4. 𝑈 → 𝐼 , ¬(𝑈 ∧ ¬𝐼)
5. ¬(𝐶 → 𝐷), 𝐶 ∧ ¬𝐷
6. ¬𝐺 ↔ 𝐻, ¬(𝐺 ↔ 𝐻)
D. If you know that 𝒜 ⊢ ℬ, what can you say about (𝒜 ∧𝒞) ⊢ ℬ? What about (𝒜 ∨𝒞) ⊢
ℬ ? Explain your answers.
E. In this section, I claimed that it is just as hard to show that two sentences are not
provably equivalent as it is to show that a sentence is not a theorem. Why did I claim
this? (Hint: think of a sentence that would be a theorem iff 𝒜 and ℬ were provably
equivalent.)
F. Show that 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ ∧ ¬ℬ iff both 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ and 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢
¬ℬ .
32
Proof Strategies
There is no simple recipe for proofs, and there is no substitute for practice. Here,
though, are some rules of thumb and strategies to keep in mind.
Work backwards from what you want The ultimate goal is to obtain the conclusion.
Look at the conclusion and ask what the introduction rule is for its main connective.
This gives you an idea of what should happen just before the last line of the proof. Then
you can treat this line as if it were your goal. Ask what you could do to get to this new
goal.
For example: If your conclusion is a conditional 𝒜 → ℬ, plan to use the →I rule. This
requires starting a subproof in which you assume 𝒜 . The subproof ought to end with
ℬ . So, what can you do to get ℬ ?
Work forwards from what you have When you are starting a proof, look at the
premises; later, look at the sentences that you have obtained so far. Think about the
elimination rules for the main operators of these sentences. These will tell you what
your options are.
For a short proof, you might be able to eliminate the premises and introduce the con‐
clusion. A long proof is formally just a number of short proofs linked together, so you
can fill the gap by alternately working back from the conclusion and forward from the
premises.
Try proceeding indirectly If you cannot find a way to show 𝒜 directly, try starting
by assuming ¬𝒜 . If a contradiction follows, then you will be able to obtain 𝒜 by ¬E.
This will often be a good way of proceeding when the conclusion you are aiming at has
a disjunction as its main connective.
Persist These are guidelines, not laws. Try different things. If one approach fails,
then try something else. Remember that if there is one proof, there are many – different
proofs that make use of different ideas.
For example: suppose you tried to follow the idea ‘work backwards from what you want’
in establishing ‘𝑃, 𝑃 → (𝑃 → 𝑄) ⊢ 𝑃 → 𝑄’. You would be tempted to start a subproof
from the assumption ‘𝑃’, and while that proof strategy would eventually succeed, you
would have done better to simply apply →E and terminate after one proof step.
By contrast, suppose you tried to follow the idea ‘work forwards from what you have’
in trying to establish ‘(𝑃 ∨ (𝑄 ∨ (𝑅 ∨ 𝑆))) ⊢ 𝑃 → 𝑃’. You might begin an awkward nested
series of subproofs to apply ∨E to the disjunctive premise. But beginning with the
conclusion might prompt you instead to simply open a subproof from the assumption
𝑃, and the subsequent proof will make no use of the premise at all, as the conclusion
is a theorem.
Neither of these heuristics is sacrosanct. You will get a sense of how to construct proofs
efficiently and fluently with practice. Unfortunately there is no quick substitute for
practice.
33
Derived Rules for Sentential
In §§28–29, we introduced the basic rules of our proof system for Sentential. In this
section, we shall consider some alternative or additional rules for our system.
None of these rules adds anything fundamentally to our system. They are all DERIVED
rules, which means that anything we can prove by using them, we could have proved
using just the rules in our original official system of natural deduction proofs. Any of
these rules is a conservative addition to our proof system, because none of them would
enable us to prove anything we could not already prove. (Adding the rules for ‘tonk’
from §30.2, by contrast, would allow us to prove many new things – any system which
includes those rules is not a conservative extension of our original system of proof
rules.)
But sometimes adding new rules can shorten proofs, or make them more readable
and user‐friendly. And some of them are of interest in their own right, as arguably
independently plausible rules of implication as they stand, or as alternative rules we
could have taken as basic instead.
33.1 Reiteration
The first derived rule is actually one of our main proof rules: reiteration. It turns out
that we need not have assumed a rule of reiteration. We can replace each application of
the reiteration rule on some line 𝑘+1 (reiterating some prior line 𝑚) with the following
combination of moves deploying just the other basic rules of §§28–29:
𝑚 𝒜
𝑘 𝒜∧𝒜 ∧I 𝑚, 𝑚
𝑘+1 𝒜 ∧E 𝑘
To be clear: this is not a proof. Rather, it is a proof scheme. After all, it uses a variable,
𝒜 , rather than a sentence of Sentential. But the point is simple. Whatever sentences
of Sentential we plugged in for 𝒜 , and whatever lines we were working on, we could
produce a legitimate proof. So you can think of this as a recipe for producing proofs.
Indeed, it is a recipe which shows us that, anything we can prove using the rule R,
we can prove (with one more line) using just the basic rules of §§28–29. So we can
describe the rule R as a derived rule, since its justification is derived from our basic
rules.
You might note that in lines 5–7 in the complicated proof in Figure 29.3, we in effect
made use of this proof scheme, introducing a conjunction from prior lines only to immediately eliminate it again, just to ensure that the relevant sentences appeared directly
in the range of the assumption ‘𝑄’.
We even have an explanation here about why you can’t reiterate a line from a closed
subproof. If all applications of reiteration are in fact abbreviations of the above schema,
then that restriction on reiteration derives from the more general restriction that we
cannot appeal to a proof line that relies on an assumption that has been discharged.
This inference pattern is called DISJUNCTIVE SYLLOGISM. We could add it to our proof
system:
𝑚 (𝒜 ∨ ℬ) 𝑚 (𝒜 ∨ ℬ)
𝑛 ¬𝒜 𝑛 ¬ℬ
ℬ DS 𝑚, 𝑛 𝒜 DS 𝑚, 𝑛
This is, if you like, a new rule of disjunction elimination. But there is nothing funda‐
mentally new here. We can emulate the rule of disjunctive syllogism using our basic
proof rules, as the schematic proof in Figure 33.1 indicates.
We have used the rule of reiteration in this schematic proof, but we already know that
any uses of that rule can themselves be replaced by more roundabout proofs using
conjunction introduction and elimination, if required. So adding disjunctive syllogism
would not make any new proofs possible that were not already obtainable in our ori‐
ginal system.
𝑚 𝒜∨ℬ
𝑛 ¬𝒜
𝑘 𝒜
𝑘+1 ¬ℬ
𝑘+2 𝒜 R 𝑘
𝑘+3 ¬𝒜 R 𝑛
𝑘+4 ℬ ¬E 𝑘 + 1–𝑘 + 2, 𝑘 + 1–𝑘 + 3
𝑘+5 ℬ
𝑘+6 ℬ R 𝑘 + 5
𝑘+7 ℬ ∨E 𝑚, 𝑘–𝑘 + 4, 𝑘 + 5–𝑘 + 6
Figure 33.1: Disjunctive syllogism is derivable using the basic rules.
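Before (or instead of) constructing the schematic derivation, one can confirm that disjunctive syllogism is semantically valid, i.e., that 𝒜 ∨ ℬ, ¬𝒜 ⊨ ℬ. A brute‐force sketch, with sentences encoded as Boolean functions for illustration:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """No valuation makes all premises true while the conclusion is false."""
    valuations = (dict(zip(atoms, vals))
                  for vals in product([True, False], repeat=len(atoms)))
    return not any(all(p(v) for p in premises) and not conclusion(v)
                   for v in valuations)

# A ∨ B, ¬A entail B.
assert valid([lambda v: v['A'] or v['B'], lambda v: not v['A']],
             lambda v: v['B'], ['A', 'B'])
# The other form: A ∨ B, ¬B entail A.
assert valid([lambda v: v['A'] or v['B'], lambda v: not v['B']],
             lambda v: v['A'], ['A', 'B'])
```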
If Hillary won the election, then she is in the White House. She is not in
the White House. So she did not win the election.
This inference pattern is called MODUS TOLLENS. The corresponding rule is:
𝑚 (𝒜 → ℬ)
𝑛 ¬ℬ
¬𝒜 MT 𝑚, 𝑛
𝑚 𝒜→ℬ
𝑛 ¬ℬ
𝑘 𝒜
𝑘+1 ℬ →E 𝑚, 𝑘
𝑘+2 ¬ℬ R 𝑛
𝑘+3 ¬𝒜 ¬I 𝑘–𝑘 + 1, 𝑘–𝑘 + 2
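As with disjunctive syllogism, modus tollens can be confirmed semantically before it is derived: 𝒜 → ℬ, ¬ℬ ⊨ ¬𝒜. A sketch, which also checks that the fallacious converse pattern of ‘affirming the consequent’ is invalid:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """No valuation makes all premises true while the conclusion is false."""
    valuations = (dict(zip(atoms, vals))
                  for vals in product([True, False], repeat=len(atoms)))
    return not any(all(p(v) for p in premises) and not conclusion(v)
                   for v in valuations)

# A → B, ¬B entail ¬A.
assert valid([lambda v: (not v['A']) or v['B'], lambda v: not v['B']],
             lambda v: not v['A'], ['A', 'B'])
# But A → B, B do not entail A (counterexample: A false, B true).
assert not valid([lambda v: (not v['A']) or v['B'], lambda v: v['B']],
                 lambda v: v['A'], ['A', 'B'])
```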
emphasis can prevent them from doing so. Consider: ‘Jane is not not happy’. Arguably,
one cannot derive ‘Jane is happy’, since the first sentence should be understood as
meaning the same as ‘Jane is not unhappy’. This is compatible with ‘Jane is in a state
of profound indifference’. As usual, moving to Sentential forces us to sacrifice certain
nuances of English expressions – we have, in Sentential, just one resource for translating
negative expressions like ‘not’ and the suffix ‘un‐’, even if they are not synonyms in
English.
Obviously we can show that 𝒜 ⊢ ¬¬𝒜 by means of the following proof:
1 𝒜
2 ¬𝒜
3 𝒜 R 1
4 ¬¬𝒜 ¬I 2–3, 2–2
There is a proof rule that corresponds to the other direction of this equivalence, the
rule of DOUBLE NEGATION ELIMINATION:
𝑖 ¬¬𝒜
𝒜 ¬¬E 𝑖
1 ¬¬𝒜
2 ¬𝒜
3 ¬¬𝒜 R1
4 ¬𝒜 R2
5 𝒜 ¬E 2–4, 2–3
Anything we can prove using the ¬¬E rule can be proved almost as briefly using just
¬E.
𝑖 𝒜
𝑗 ℬ
𝑘 ¬𝒜
𝑙 ℬ
ℬ TND 𝑖 –𝑗, 𝑘–𝑙
The rule is sometimes called TERTIUM NON DATUR, which means roughly ‘no third way’.
There can be as many lines as you like between 𝑖 and 𝑗, and as many lines as you like
between 𝑘 and 𝑙 . Moreover, the subproofs can come in any order, and the second
subproof does not need to come immediately after the first.
Tertium non datur can be emulated using just our original proof rules. Figure 33.3
contains a schematic proof which demonstrates this. Once again, a dispensable use of
reiteration occurs in this proof just to make it more readable.
𝑖 𝒜
𝑗 ℬ
𝑘 ¬𝒜
𝑙 ℬ
𝑚 𝒜→ℬ →I 𝑖 –𝑗
𝑚+1 ¬𝒜 → ℬ →I 𝑘–𝑙
𝑚+2 ¬ℬ
𝑚+3 𝒜
𝑚+4 ℬ →E 𝑚, 𝑚 + 3
𝑚+5 ¬ℬ R𝑚+2
𝑚+6 ¬𝒜 ¬I 𝑚 + 3–𝑚 + 4, 𝑚 + 3–𝑚 + 5
𝑚+7 ℬ →E 𝑚 + 1, 𝑚 + 6
𝑚+8 ℬ ¬E 𝑚 + 2–𝑚 + 2, 𝑚 + 2–𝑚 + 7
Figure 33.3: Tertium non datur is derivable in the standard proof system.
The second pair of De Morgan rules are dual to the first pair: they show the provable
equivalence of a negated disjunction and a conjunction of negations.
The De Morgan rules are no genuine addition to the power of our original natural
deduction system. Here is a demonstration of how we could derive the first De Morgan
rule:
𝑘 ¬(𝒜 ∧ ℬ)
𝑚 ¬(¬𝒜 ∨ ¬ℬ)
𝑚+1 ¬𝒜
𝑚+2 ¬𝒜 ∨ ¬ℬ ∨I 𝑚 + 1
𝑚+4 ¬ℬ
𝑚+5 ¬𝒜 ∨ ¬ℬ ∨I 𝑚 + 4
𝑚+7 𝒜∧ℬ ∧I 𝑚 + 3, 𝑚 + 6
𝑘 ¬𝒜 ∨ ¬ℬ
𝑚 ¬𝒜
𝑚+1 𝒜∧ℬ
𝑚+2 𝒜 ∧E 𝑚 + 1
𝑚+4 ¬ℬ
𝑚+5 𝒜∧ℬ
𝑚+6 ℬ ∧E 𝑚 + 5
Similar demonstrations can be offered explaining how we could derive the third and
fourth De Morgan rules. These are left as exercises.
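All four De Morgan equivalences can be confirmed by truth table, which is worth doing before attempting those derivations. A sketch checking that each pair of sentences agrees on every valuation (again using Boolean encodings for illustration):

```python
from itertools import product

def equivalent(s1, s2, atoms):
    """True iff the two sentences agree on every valuation."""
    valuations = (dict(zip(atoms, vals))
                  for vals in product([True, False], repeat=len(atoms)))
    return all(s1(v) == s2(v) for v in valuations)

pairs = [
    # ¬(A ∧ B) and ¬A ∨ ¬B, in both directions
    (lambda v: not (v['A'] and v['B']), lambda v: (not v['A']) or (not v['B'])),
    (lambda v: (not v['A']) or (not v['B']), lambda v: not (v['A'] and v['B'])),
    # ¬(A ∨ B) and ¬A ∧ ¬B, in both directions
    (lambda v: not (v['A'] or v['B']), lambda v: (not v['A']) and (not v['B'])),
    (lambda v: (not v['A']) and (not v['B']), lambda v: not (v['A'] or v['B'])),
]
assert all(equivalent(s1, s2, ['A', 'B']) for s1, s2 in pairs)
```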
Those mentioned above are all of the additional rules of our proof system for Sentential.
Practice exercises
A. The following proofs are missing their commentaries (rule and line numbers). Add
them wherever they are required: you may use any of the original or derived rules, as
appropriate.
1 𝑍 → (𝐶 ∧ ¬𝑁)
2 ¬𝑍 → (𝑁 ∧ ¬𝐶)
3 ¬(𝑁 ∨ 𝐶)
4 ¬𝑁 ∧ ¬𝐶
5 ¬𝑁
6 ¬𝐶
7 𝑍
8 𝐶 ∧ ¬𝑁
9 𝐶
10 ¬𝐶
11 ¬𝑍
12 𝑁 ∧ ¬𝐶
13 𝑁
14 ¬¬(𝑁 ∨ 𝐶)
15 𝑁 ∨ 𝐶

1 𝑊 → ¬𝐵
2 𝐴 ∧ 𝑊
3 𝐵 ∨ (𝐽 ∧ 𝐾)
4 𝑊
5 ¬𝐵
6 𝐽 ∧ 𝐾
7 𝐾

1 𝐿 ↔ ¬𝑂
2 𝐿 ∨ ¬𝑂
3 ¬𝐿
4 ¬𝑂
5 𝐿
6 ¬𝐿
7 ¬¬𝐿
8 𝐿
B. Give a proof representing each of these arguments; you may use any of the original
or derived rules, as appropriate:
1. 𝐸 ∨ 𝐹 , 𝐹 ∨ 𝐺 , ¬𝐹 ∴ 𝐸 ∧ 𝐺
2. 𝑀 ∨ (𝑁 → 𝑀) ∴ ¬𝑀 → ¬𝑁
3. (𝑀 ∨ 𝑁) ∧ (𝑂 ∨ 𝑃), 𝑁 → 𝑃, ¬𝑃 ∴ 𝑀 ∧ 𝑂
4. (𝑋 ∧ 𝑌) ∨ (𝑋 ∧ 𝑍), ¬(𝑋 ∧ 𝐷), 𝐷 ∨ 𝑀 ∴𝑀
C. Provide proof schemes that justify the addition of the third and fourth De Morgan
rules as derived rules.
D. The proofs you offered in response to question A above used derived rules. Replace
the use of derived rules, in such proofs, with only basic rules. You will find some ‘repe‐
tition’ in the resulting proofs; in such cases, offer a streamlined proof using only basic
rules. (This will give you a sense, both of the power of derived rules, and of how all the
rules interact.)
34
Alternative Proof Systems for Sentential
We’ve now developed a system of proof rules, all of which we have supported by show‐
ing that they correspond to correct entailments of Sentential. We’ve also seen that
these rules allow us to introduce some derived rules, which make proofs shorter and
more convenient but do not allow us to prove anything that we could not have proved
already.
This choice of proof system is not forced on us. There are alternative proof systems
which are nevertheless equivalent to the system we have introduced, in that everything
which is provable in our system is provable in the alternative system, and vice versa.
Indeed, there are lots of alternative systems. In this section, I will discuss just a couple.
The alternative systems I will discuss here result from taking one of our derived rules
as basic, and showing that doing so allows us to derive a formerly basic rule.
𝑖 ¬𝒜
⋮
𝑗 ℬ
⋮
𝑘 ¬ℬ
𝑘+1 ¬¬𝒜 ¬I 𝑖 –𝑗, 𝑖 –𝑘
𝑘+2 𝒜 ¬¬E 𝑘 + 1
§34. ALTERNATIVE PROOF SYSTEMS FOR Sentential 313
This proof shows that, in a system with ¬I and ¬¬E, we do not need any separate
elimination rule for a single negation – the effect of any such rule could be perfectly
simulated by the above schematic proof. The addition of a single negation elimination
rule would not allow us to prove any more than we already can. So in a sense, the rules
of double negation elimination and negation elimination are equivalent, at least given
the other rules in our system.
Since we’ve shown ¬¬E to be a derived rule in our original system, this alternative
system proves exactly the same things as our original system. The proofs will look
different, but if there is a correct proof of 𝒞 from 𝒜1 , …, 𝒜𝑛 in one system, there will be a
corresponding proof in the other system. Any proof in the one system can be converted
into a proof in the other by replacement of the appropriate instances of the rules.
Pro Using DS as an elimination rule for ∨ has the nice feature that we eliminate
a disjunction in favour of one of its disjuncts, rather than the unprecedented 𝒞 that
appears as if from nowhere in the original ∨E rule. We also dispense with the use of
subproofs in the statement of the disjunction rules.
Con Adopting DS as a basic rule destroys the nice feature of our standard rules that
only one connective is used in any rule. DS needs both disjunction and negation. We
cannot, therefore, consider a system which lacks negation rules but retains disjunction
rules – the rules are no longer modular. This is not especially important for Sentential, but
With the original ∨E rule:

1 𝒜∨ℬ
𝑖 𝒜
⋮
𝑗 𝒞
𝑘 ℬ
⋮
𝑙 𝒞
𝒞 ∨E 1, 𝑖 –𝑗, 𝑘–𝑙

With DS as a basic rule:

1 𝒜∨ℬ
2 ¬𝒞
𝑖 𝒜
⋮
𝑗 𝒞
𝑗+1 ¬𝒜 ¬I 𝑖 –𝑗, 𝑖 –2
𝑘 ℬ DS 1, 𝑗 + 1
⋮
𝑙 𝒞
𝒞 ¬E 2–𝑙 , 2–2
could be important if you go on to consider other logical systems which may vary the
rules for one connective independently of all the others. DS shackles negation and
disjunction together – even arguments which have, intuitively, nothing to do with
negation end up having to use negation in their proof. This can be illustrated by
considering proofs that (𝐴 ∨ 𝐵) ⊢ (𝐵 ∨ 𝐴). The proofs themselves are left for an exercise, but you
can see that the proof in a system which has DS as a basic rule makes unavoidable use
of negation rules, while the proof in our standard system uses only disjunction rules.
𝑖 𝒜
⋮
𝑗 ℬ
𝑗+1 ¬ℬ
𝑗+2 ℬ ∧ ¬ℬ ∧I 𝑗, 𝑗 + 1
𝑗+3 𝒜 → (ℬ ∧ ¬ℬ) →I 𝑖 –𝑗 + 2
𝑗+4 ¬¬𝒜
𝑗+5 ¬𝒜
𝑗+6 ¬¬𝒜 R 𝑗 + 4
𝑗+7 𝒜 ¬E 𝑗 + 5–𝑗 + 5, 𝑗 + 5–𝑗 + 6
𝑗+8 ℬ ∧ ¬ℬ →E 𝑗 + 3, 𝑗 + 7
𝑗+9 ℬ ∧E 𝑗 + 8
𝑗 + 10 ¬ℬ ∧E 𝑗 + 8
𝑗 + 11 ¬𝒜 ¬E 𝑗 + 4–𝑗 + 9, 𝑗 + 4–𝑗 + 10
One thing to note about this schematic proof is that it is much longer and more com‐
plicated than our negation introduction rule. It also relies essentially on rules for the
conditional and conjunction, violating our desire that each of our rules be ‘pure’ in the
sense that the introduction and elimination of a connective should ideally only involve
sentences with it as the main connective. So we will not be availing ourselves of the
possible economy of getting rid of the negation introduction rule.
Another interesting thing here is that the negation elimination rule seems to be
twinned with the conditional introduction rule. This hints at quite a deep fact, namely,
that negation itself can be understood as a disguised conditional. Some alternative for‐
mulations of sentential logic include a sentential constant, ⊥. This is like a sentence
letter, but it has a constant truth value in every valuation: it is always F. Given this con‐
stant value, we can see that ¬𝒜 and 𝒜 → ⊥ are logically equivalent in such systems:
𝒜 ¬𝒜 𝒜 → ⊥
T F T F F
F T F T F
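The table can be confirmed mechanically. Here is a minimal sketch in Python (again, a metatheoretic tool rather than part of the proof system), treating '⊥' as a sentence with the constant value F:

```python
# '⊥' is treated as a sentence letter with a constant value: always False.
BOT = False

def impl(p, q):
    return (not p) or q  # the truth-functional conditional

# ¬A and A → ⊥ receive the same value on both valuations of A.
for a in (True, False):
    assert (not a) == impl(a, BOT)
print("¬A and A → ⊥ agree on every valuation")
```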
In this sort of system, we can understand the negation introduction rule as literally a
special case of conditional introduction: if we can show ℬ and ℬ → ⊥ in the scope of
the assumption 𝒜 , then conditional elimination leads to ⊥, and conditional introduc‐
tion gives us 𝒜 → ⊥ while discharging the assumption that 𝒜 . We still need some
rules governing '⊥' itself, of course.
Practice exercises
A. Consider an alternative proof system which drops our negation introduction rule,
but adopts tertium non datur (§33.5) in its place. Is this alternative system equivalent
to our standard system – in particular, can you show how to emulate negation intro‐
duction using just negation elimination, tertium non datur (and structural rules like
reiteration)?
B. Construct two proofs showing that (𝐴 ∨ 𝐵) ⊢ (𝐵 ∨ 𝐴), the first using our stand‐
ard natural deduction system, the second using the system which has DS in place of
disjunction elimination. Comment on any points of interest.
Chapter 7
1 Though there is a category ‘formulae which are not sentences’ in Quantifier, no member of this class
will ever appear in any correctly formed proof.
§35. BASIC RULES FOR Quantifier 319
1 ¬(∀𝑥𝑃𝑥 ∨ ∃𝑦𝑃𝑦)
2 ∀𝑥𝑃𝑥
3 (∀𝑥𝑃𝑥 ∨ ∃𝑦𝑃𝑦) ∨I 2
4 ¬(∀𝑥𝑃𝑥 ∨ ∃𝑦𝑃𝑦) R1
The sentences on each line are Quantifier sentences that are not sentences of Senten‐
tial, but the main connectives involved are just those governed by the rules we already
introduced to handle Sentential proofs.
However, not every Quantifier sentence has a Sentential connective as its main connect‐
ive. So we will also need some new basic rules to govern the quantifiers, and to govern
the identity sign, to deal with those sentences where the main connective is a quantifier
and where the sentence is an identity predication.
1 ∀𝑥𝑅𝑥𝑥𝑑
2 𝑅𝑎𝑎𝑑 ∀E 1
We obtained line 2 by dropping the universal quantifier and replacing every instance
of ‘𝑥’ with ‘𝑎’. Equally, the following should be allowed:
1 ∀𝑥𝑅𝑥𝑥𝑑
2 𝑅𝑑𝑑𝑑 ∀E 1
We obtained line 2 here by dropping the universal quantifier and replacing every in‐
stance of ‘𝑥’ with ‘𝑑 ’. We could have done the same with any other name we wanted.
This motivates the UNIVERSAL ELIMINATION rule (∀E), using the notation for uniform
substitution we introduced in §22.4:
320 NATURAL DEDUCTION FOR QUANTIFIER
𝑚 ∀𝓍𝒜
𝒜|𝒸↷𝓍 ∀E 𝑚
The intent of the rule is that you can obtain any substitution instance of a universally
quantified formula: replace every occurrence of the free variable 𝓍 in 𝒜 with any
chosen name. (If there are any – the rule is also good when 𝒜 has no free variable,
because then the quantifier ∀𝓍 is redundant.) Remember here that the expression ‘𝒸’
is a metalanguage variable over names: you are not required to replace the variable 𝓍
by the Quantifier name ‘𝑐 ’, but you can select any name you like!
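The substitution notation itself can be modelled computationally. The sketch below uses a toy encoding of our own devising – formulas as nested tuples, with quantifiers as 'forall' and 'exists' tags – and replaces only free occurrences of a variable, which is exactly what 𝒜|𝒸↷𝓍 requires:

```python
# Replace every FREE occurrence of a variable with a name. Occurrences
# bound by a quantifier over that same variable are left untouched.
def subst(formula, var, name):
    if isinstance(formula, tuple):
        if formula[0] in ('forall', 'exists') and formula[1] == var:
            return formula  # var is rebound here, so nothing below is free
        return tuple(subst(part, var, name) for part in formula)
    return name if formula == var else formula

# The body of '∀x Rxxd', instantiated with 'a' and with 'd':
body = ('R', 'x', 'x', 'd')
print(subst(body, 'x', 'a'))  # ('R', 'a', 'a', 'd')
print(subst(body, 'x', 'd'))  # ('R', 'd', 'd', 'd')
# Bound occurrences are immune: substituting into '∃x Rxx' changes nothing.
print(subst(('exists', 'x', ('R', 'x', 'x')), 'x', 'a'))
```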
I should emphasise that (as with every elimination rule) you can only apply the ∀E rule
when the universal quantifier is the main connective. Thus the following is outright
banned:
1 (∀𝑥𝐵𝑥 → 𝐵𝑘)
2 𝐵𝑏 → 𝐵𝑘 ∀E 1 (banned!)
This is illegitimate, since ‘∀𝑥 ’ is not the main connective in line 1. (If you need a re‐
minder as to why this sort of inference should be banned, reread §16.)
Here is an example of the rule in action. Suppose we wanted to show that ∀𝑥∀𝑦(𝑅𝑥𝑥 →
𝑅𝑥𝑦), 𝑅𝑎𝑎 ∴ 𝑅𝑎𝑏 is provable. The proof might go like this:
1 ∀𝑥∀𝑦(𝑅𝑥𝑥 → 𝑅𝑥𝑦)
2 𝑅𝑎𝑎
3 ∀𝑦(𝑅𝑎𝑎 → 𝑅𝑎𝑦) ∀E 1
4 𝑅𝑎𝑎 → 𝑅𝑎𝑏 ∀E 3
5 𝑅𝑎𝑏 →E 4, 2
Here on line 3 we substitute the previously used name ‘𝑎’ for the variable ‘𝑥’ in
‘∀𝑦(𝑅𝑥𝑥 → 𝑅𝑥𝑦)’; and then on line 4 we substitute the new name ‘𝑏’ for the variable ‘𝑦’
in ‘𝑅𝑎𝑎 → 𝑅𝑎𝑦’. The rule of universal elimination doesn’t discriminate between new
and old names.
1 𝑅𝑎𝑎𝑑
2 ∃𝑥𝑅𝑎𝑎𝑥 ∃I 1
Here, we have replaced the name ‘𝑑 ’ with a variable ‘𝑥’, and then existentially quantified
over it. Equally, we would have allowed:
1 𝑅𝑎𝑎𝑑
2 ∃𝑥𝑅𝑥𝑥𝑑 ∃I 1
Here we have replaced both instances of the name ‘𝑎’ with a variable, and then exist‐
entially generalised.
There are some pitfalls with this description of what we have done. The following
argument is invalid: 'Someone loves Alice; so someone is such that someone loves
themselves'. So we ought not to be able to conclude '∃𝑥∃𝑥𝑅𝑥𝑥 ' from '∃𝑥𝑅𝑥𝑎'. Accordingly,
our rule cannot simply be: replace a name by a variable and stick a corresponding
quantifier out the front – since that would permit the proof of the invalid argument.
We take our cue from the ∀E rule. This rule says: take a sentence ∀𝓍𝒜 , then we can
remove the quantifier and substitute an arbitrary name for some free variable in the
formula 𝒜 (assuming there is one). The ∃I rule is in some sense a mirror image of
this rule: it allows us to move from a sentence with an arbitrary name – that might be
thought of as the result of substituting a name for a free variable in some formula 𝒜
– to a quantified sentence ∃𝓍𝒜 . So here is how we formulate our rule of EXISTENTIAL
INTRODUCTION:
𝑚 𝒜|𝒸↷𝓍
∃𝓍𝒜 ∃I 𝑚
So really the proof just above should be thought of as concluding '∃𝑥𝑅𝑥𝑥𝑑 ' from
'𝑅𝑥𝑥𝑑 '|𝑎↷𝑥 (i.e., '𝑅𝑎𝑎𝑑 ').
If we have this rule, we cannot provide a proof of the invalid argument. For '∃𝑥𝑅𝑥𝑎' is
not a substitution instance of '∃𝑥∃𝑥𝑅𝑥𝑥 ' – both instances of '𝑥' in '𝑅𝑥𝑥 ' are already
bound, and so not available for free substitution. By contrast, the following is allowed:
1 𝑅𝑎𝑎
2 ∃𝑥𝑅𝑎𝑥 ∃I 1
Why? Because the assumption '𝑅𝑎𝑎' is in fact not only a substitution instance of '∃𝑥𝑅𝑥𝑥 ',
but also a substitution instance of '∃𝑥𝑅𝑎𝑥 ', since '𝑅𝑎𝑥 '|𝑎↷𝑥 is just '𝑅𝑎𝑎' too. So we can
vindicate the intuitively correct argument ‘Narcissus loves himself, so there is someone
who loves Narcissus’.
As we just saw, applying this rule requires some skill in being able to recognise substi‐
tution instances. Thus the following is allowed:
1 𝑅𝑎𝑎𝑑
2 ∃𝑥𝑅𝑥𝑎𝑑 ∃I 1
3 ∃𝑦∃𝑥𝑅𝑥𝑦𝑑 ∃I 2
This is okay, because '𝑅𝑎𝑎𝑑 ' can arise from substitution of '𝑎' for '𝑥' in '𝑅𝑥𝑎𝑑 ', and
'∃𝑥𝑅𝑥𝑎𝑑 ' can arise from substitution of '𝑎' for '𝑦' in '∃𝑥𝑅𝑥𝑦𝑑 '. But this is banned:
1 𝑅𝑎𝑎𝑑
2 ∃𝑥𝑅𝑥𝑎𝑑 ∃I 1
3 ∃𝑥∃𝑥𝑅𝑥𝑥𝑑 ∃I 2
This is because ‘∃𝑥𝑅𝑥𝑎𝑑 ’ is not a substitution instance of ‘∃𝑥∃𝑥𝑅𝑥𝑥𝑑 ’, since (again) both
occurrences of ‘𝑥’ in ‘𝑅𝑥𝑥𝑑 ’ are already bound and so not available for free substitution.
Here is an example which shows our two proof rules in action, a proof showing that
∀𝑥∀𝑦(𝑅𝑥𝑦 ∧ 𝑅𝑦𝑥) ⊢ ∃𝑥𝑅𝑥𝑥 :
1 ∀𝑥∀𝑦(𝑅𝑥𝑦 ∧ 𝑅𝑦𝑥)
2 ∀𝑦(𝑅𝑎𝑦 ∧ 𝑅𝑦𝑎) ∀E 1
3 (𝑅𝑎𝑎 ∧ 𝑅𝑎𝑎) ∀E 2
4 𝑅𝑎𝑎 ∧E 3
5 ∃𝑥𝑅𝑥𝑥 ∃I 4
For another example, consider this proof of ‘∃𝑥(𝑃𝑥 ∨ ¬𝑃𝑥)’ from no assumptions:
1 ¬(𝑃𝑑 ∨ ¬𝑃𝑑)
2 ¬𝑃𝑑
3 (𝑃𝑑 ∨ ¬𝑃𝑑) ∨I 2
4 ¬(𝑃𝑑 ∨ ¬𝑃𝑑) R1
5 𝑃𝑑 ¬E 2–3, 2–4
6 (𝑃𝑑 ∨ ¬𝑃𝑑) ∨I 5
7 ¬(𝑃𝑑 ∨ ¬𝑃𝑑) R1
8 (𝑃𝑑 ∨ ¬𝑃𝑑) ¬E 1–6, 1–7
9 ∃𝑥(𝑃𝑥 ∨ ¬𝑃𝑥) ∃I 8
1 ∀𝑥∀𝑦(𝑅𝑥𝑦 → 𝑅𝑦𝑥)
2 ∀𝑦(𝑅𝑎𝑦 → 𝑅𝑦𝑎) ∀E 1
3 (𝑅𝑎𝑏 → 𝑅𝑏𝑎) ∀E 2
4 ∀𝑦𝑅𝑎𝑦
5 𝑅𝑎𝑏 ∀E 4
6 𝑅𝑏𝑎 →E 3, 5
7 ∃𝑦𝑅𝑦𝑎 ∃I 6
8 (∀𝑦𝑅𝑎𝑦 → ∃𝑦𝑅𝑦𝑎) →I 4–7
9 ∀𝑥(∀𝑦𝑅𝑥𝑦 → ∃𝑦𝑅𝑦𝑥) ∀I 8
1 ∀𝑥𝐹𝑥
2 𝐹𝑎 ∀E 1
3 ∃𝑥𝐹𝑥 ∃I 2
Could this be a bad proof? If anything exists at all, then certainly we can infer that
something is F, from the fact that everything is F. But what if nothing exists at all?
Then it is surely vacuously true that everything is F; however, it ought not follow that
something is F, for there is nothing to be F. So if we claim that, as a matter of logic
alone, ‘∃𝑥𝐹𝑥’ follows from ‘∀𝑥𝐹𝑥 ’, then we are claiming that, as a matter of logic alone,
there is something rather than nothing. This might strike us as a bit odd.
Actually, we are already committed to this oddity. In §15, we stipulated that domains in
Quantifier must have at least one member. We then defined a logical truth (of Quantifier)
as a sentence which is true in every interpretation. Since ‘∃𝑥 𝑥 = 𝑥 ’ will be true in every
interpretation, this also had the effect of stipulating that it is a matter of logic that there
is something rather than nothing.
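The situation can be checked by brute force on finite interpretations. In the Python sketch below (the numerals are merely our stand‐ins for domain members), no interpretation with a nonempty domain makes '∀𝑥𝐹𝑥 ' true and '∃𝑥𝐹𝑥' false, while the empty domain – which we have stipulated away – would do exactly that:

```python
from itertools import product

# Search for a countermodel to ∀xFx ∴ ∃xFx: an interpretation where F holds
# of every member of the domain but of no member of the domain.
def counterexample(sizes):
    for n in sizes:
        for ext in product([True, False], repeat=n):
            if all(ext) and not any(ext):
                return n  # domain size of the countermodel
    return None

print(counterexample([1, 2, 3, 4]))  # None: no nonempty countermodel
print(counterexample([0]))           # 0: the empty domain is a countermodel
```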
Since it is far from clear that logic should tell us that there must be something rather
than nothing, we might well be cheating a bit here.
If we refuse to cheat, though, then we pay a high cost. Here are three things that we
want to hold on to:
› the ability to copy‐and‐paste proofs together: after all, reasoning works by put‐
ting lots of little steps together into rather big chains.
If we get what we want on all three counts, then we have to countenance that ∀𝑥𝐹𝑥 ⊢
∃𝑥𝐹𝑥 . So, if we get what we want on all three counts, the proof system alone tells us
that there is something rather than nothing. And if we refuse to accept that, then we
have to surrender one of the three things that we want to hold on to!
In fact the choice is even starker. Consider this proof:
1 𝐹𝑎
2 𝐹𝑎 R1
3 (𝐹𝑎 → 𝐹𝑎) →I 1–2
4 ∃𝑥(𝐹𝑥 → 𝐹𝑥) ∃I 3
This proof uses only the obvious rule of conditional introduction, and our existential
introduction rule. It terminates, with no undischarged assumptions, in a claim that a
certain thing exists: a thing that is 𝐹 if it is 𝐹 . Again the existence of something is
a theorem of our logic. The real source of the existential commitment here seems to
be the use of the name ‘𝑎’, because our rules implicitly assume that every name has a
referent, and hence as soon as you use a name you assume that there is something in
the domain for the name to latch on to.
Before we start thinking about which to surrender,2 we might want to ask how much
of a cheat this is. Granted, it may make it harder to engage in theological debates
about why there is something rather than nothing. But the rest of the time, we will get
along just fine. So maybe we should just regard our proof system (and Quantifier, more
generally) as having a very slightly limited purview. If we ever want to allow for the
possibility of nothing, then we shall have to cast around for a more complicated proof
system. But for as long as we are content to ignore that possibility, our proof system is
perfectly in order. (As, similarly, is the stipulation that every domain must contain at
least one object.)
∀𝑥𝐹𝑥 ∴ ∀𝑦𝐹𝑦
This argument should obviously be valid. After all, alphabetical variation in choice of
variables ought to be a matter of taste, and of no logical consequence. But how might
our proof system reflect this? Suppose we begin a proof thus:
1 ∀𝑥𝐹𝑥
2 𝐹𝑎 ∀E 1
We have proved ‘𝐹𝑎’. And, of course, nothing stops us from using the same justification
to prove ‘𝐹𝑏’, ‘𝐹𝑐 ’, …, ‘𝐹𝑗2 ’, …, ‘𝐹𝑟79002 , …, and so on until we run out of space, time, or
patience. But reflecting on this, we see that this is a way to prove 𝐹𝒸, for any name
𝒸. And if we can do it for any thing, we should surely be able to say that ‘𝐹 ’ is true of
everything. This therefore justifies us in inferring ‘∀𝑦𝐹𝑦’, thus:
2 In light of the second proof, many will opt for restricting ∃I. If we permit an empty domain, we will
also need ‘empty names’ – names without a referent. When the name 𝒸 is empty, it seems problematic
to conclude from ‘𝒸 is F’ that there is something which is F. (Does ‘Santa Claus drives a flying sleigh’
entail ‘Someone drives a flying sleigh’?) But empty names are not cost‐free; understanding how a name
that doesn’t name anything can have any meaning at all has vexed many philosophers and linguists.
1 ∀𝑥𝐹𝑥
2 𝐹𝑎 ∀E 1
3 ∀𝑦𝐹𝑦 ∀I 2
The crucial thought here is that ‘𝑎’ was just some arbitrary name. There was nothing
special about it – we might have chosen any other name – and still the proof would be
fine. And this crucial thought motivates the universal introduction rule (∀I):
𝑚 𝒜|𝒸↷𝓍
∀𝓍𝒜 ∀I 𝑚
where the name 𝒸 occurs neither in any undischarged assumption nor in ∀𝓍𝒜 .
› If J Doe took the train, then they had to go via Paris, and that leg of
the journey alone takes 3 hours.
› If J Doe flew, then they would have spent at least an hour in airport
transfers at each end, even setting aside the flight time itself.
3 The details about how this sort of arbitrary reference works are interesting. A controversial but never‐
theless attractive view of how it might work is Wylie Breckenridge and Ofra Magidor (2012) ‘Arbitrary
Reference’, Philosophical Studies 158, pp. 377–400.
› The other options – driving, walking, etc., – are all even slower.
So J Doe’s journey took over two hours in every possible case. Therefore –
since J Doe is an arbitrary person – every traveller’s journey from London
to Munich in 2016 took over two hours.
We don’t have stipulations like the above to introduce a name as an arbitrary name in
Quantifier. But we do have a way of ensuring that the name has no prior associations
other than those linked to a prior universal generalisation, if we insist that, when the
name is about to be eliminated from the proof, no assumption about what that name
denotes is being relied on. That way, we can know that however it was introduced to
the proof, it was not done in a way that involved making specific assumptions about
whatever the name arbitrarily picks out.
If you can conclude something about a named object that doesn't involve
making any assumptions about it other than assumptions which we are
making more generally, then you can conclude that same something about
everything.
The simplest way to ensure that a name is not subject to any specific assumptions is if
the name was introduced by an application of ∀E, as an arbitrary name in the standard
sense. But there are other ways too. In general, what we need is that the line to which
∀I is applied does not fall within the range of any assumption which uses the name. If
the name has been introduced without making any assumptions about what it denotes,
then we are not relying on any special features of what the name happens to denote
when we conclude that if this arbitrary thing is F, then everything is F.
Consider the following proof to see how this works in action.
1 ∀𝑥(𝐴𝑥 ∧ 𝐵𝑥)
2 𝐴𝑎 ∧ 𝐵𝑎 ∀E 1
3 𝐴𝑎 ∧E 2
4 ∀𝑥𝐴𝑥 ∀I 3
The crucial step is applying the ∀I rule to the name '𝑎' on the last line. While the name
'𝑎' does appear on lines 2 and 3, it doesn't occur in the assumption – it was introduced
on line 2 as an arbitrary instance of the universal assumption.
This constraint ensures that we are always reasoning at a sufficiently general level. To
see the importance of the constraint in action, consider this terrible argument:
∀𝑥𝐿𝑥𝑘 ∴ ∀𝑥𝐿𝑥𝑥
1 ∀𝑥𝐿𝑥𝑘
2 𝐿𝑘𝑘 ∀E 1
3 ∀𝑥𝐿𝑥𝑥 ∀I 2 (banned: '𝑘' occurs in the undischarged assumption on line 1)
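The argument really is invalid, and an interpretation demonstrating this can be checked mechanically. In the Python sketch below – a metatheoretic check, not a proof in our system – the numerals 0 and 1 are our stand‐ins for the two members of the domain:

```python
# Countermodel for ∀xLxk ∴ ∀xLxx: 'k' names 0, and L holds of a pair just
# in case its second member is 0.
domain = [0, 1]
k = 0
L = {(x, y) for x in domain for y in domain if y == 0}

premise = all((x, k) in L for x in domain)     # ∀xLxk is true here
conclusion = all((x, x) in L for x in domain)  # ∀xLxx is false: (1, 1) not in L
print(premise, conclusion)  # True False
```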
1 𝐺𝑑
2 𝐺𝑑 R1
3 𝐺𝑑 → 𝐺𝑑 →I 1–2
4 ∀𝑧(𝐺𝑧 → 𝐺𝑧) ∀I 3
This tells us that ‘∀𝑧(𝐺𝑧 → 𝐺𝑧)’ is a theorem. And that is as it should be.
Here is another proof featuring an application of ∀I after discharging an assumption
about some name ‘𝑎’:
1 𝐹𝑎 ∧ ¬𝐹𝑎
2 𝐹𝑎 ∧E 1
3 ¬𝐹𝑎 ∧E 1
4 ¬(𝐹𝑎 ∧ ¬𝐹𝑎) ¬I 1–2, 1–3
5 ∀𝑥¬(𝐹𝑥 ∧ ¬𝐹𝑥) ∀I 4
Here we were able to derive that something could not be true of 𝑎, no matter what 𝑎
is. We cannot make a coherent assumption that 𝑎 is both 𝐹 and isn't 𝐹 , so it doesn't
really matter what '𝑎' denotes. So the open sentence '¬(𝐹𝑥 ∧ ¬𝐹𝑥)' could not be true
of anything at all. That is why we are entitled to discharge that assumption; any
subsequent use of '𝑎' in the proof depends not on particular facts about this 𝑎, but on
facts that hold of anything at all, including whatever it is that '𝑎' happens to pick out.
You might wish to recall the proof of ‘∃𝑥(𝑃𝑥 ∨ ¬𝑃𝑥)’ from page 323. Note that, by
the second‐last line, we had already discharged any assumption which relied on the
specific name chosen (in that case, ‘𝑑 ’). The existential introduction rule has no con‐
straints on it, so that it was not necessary to discharge any assumptions using the name
before applying that rule. But we see now that, since those assumptions were in fact
discharged, we could have applied universal introduction at that second last line, to
yield a proof of ‘∀𝑥(𝑃𝑥 ∨ ¬𝑃𝑥)’.
We can also use our universal rules together to show some things about how quantifier
order doesn’t matter, when the strings of quantifiers are of the same type. For example
∀𝑥∀𝑦∀𝑧𝑆𝑦𝑥𝑧 ∴ ∀𝑧∀𝑦∀𝑥𝑆𝑦𝑥𝑧
1 ∀𝑥∀𝑦∀𝑧𝑆𝑦𝑥𝑧
2 ∀𝑦∀𝑧𝑆𝑦𝑎𝑧 ∀E 1
3 ∀𝑧𝑆𝑏𝑎𝑧 ∀E 2
4 𝑆𝑏𝑎𝑐 ∀E 3
5 ∀𝑥𝑆𝑏𝑥𝑐 ∀I 4
6 ∀𝑦∀𝑥𝑆𝑦𝑥𝑐 ∀I 5
7 ∀𝑧∀𝑦∀𝑥𝑆𝑦𝑥𝑧 ∀I 6
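That reordering a block of universal quantifiers makes no semantic difference can also be spot‐checked, by sweeping every three‐place relation over a two‐member domain (a Python sketch outside the official system; numerals stand in for domain members):

```python
from itertools import product

# For every 3-place relation S on a 2-element domain, ∀x∀y∀z Syxz and
# ∀z∀y∀x Syxz have the same truth value: both say S holds of every
# triple taken in the order (y, x, z).
D = [0, 1]
triples = [(a, b, c) for a in D for b in D for c in D]
ok = True
for bits in product([False, True], repeat=len(triples)):
    S = {t for t, b in zip(triples, bits) if b}
    xyz_order = all((y, x, z) in S for x in D for y in D for z in D)
    zyx_order = all((y, x, z) in S for z in D for y in D for x in D)
    ok = ok and (xyz_order == zyx_order)
print(ok)  # True
```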
1 ∃𝑥𝐹𝑥
2 ∀𝑥(𝐹𝑥 → 𝐺𝑥)
3 𝐹𝑜
4 𝐹𝑜 → 𝐺𝑜 ∀E 2
5 𝐺𝑜 →E 4, 3
6 ∃𝑥𝐺𝑥 ∃I 5
7 ∃𝑥𝐺𝑥 ∃E 1, 3–6
Breaking this down: we started by writing down our assumptions. At line 3, we made
an additional assumption: ‘𝐹𝑜’. This was just a substitution instance of ‘∃𝑥𝐹𝑥’. On this
assumption, we established ‘∃𝑥𝐺𝑥’. But note that we had made no special assumptions
about the object named by ‘𝑜’; we had only assumed that it satisfies ‘𝐹𝑥’. So nothing
depends upon which object it is. And line 1 told us that something satisfies ‘𝐹𝑥’. So our
reasoning pattern was perfectly general. We can discharge the specific assumption ‘𝐹𝑜’,
and simply infer ‘∃𝑥𝐺𝑥’ on its own.
Putting this together, we obtain the existential elimination rule (∃E):
𝑚 ∃𝓍𝒜
𝑖 𝒜|𝒸↷𝓍
𝑗 ℬ
ℬ ∃E 𝑚, 𝑖 –𝑗
where the name 𝒸 occurs neither in ∃𝓍𝒜 nor in ℬ, nor in any assumption
undischarged at line 𝑗 .
As with universal introduction, the constraints are extremely important. To see why,
consider the following terrible argument:
1 𝐿𝑏
2 ∃𝑥¬𝐿𝑥
3 ¬𝐿𝑏
4 𝐿𝑏 ∧ ¬𝐿𝑏 ∧I 1, 3
5 𝐿𝑏 ∧ ¬𝐿𝑏 ∃E 2, 3–4
The last line of the proof is not allowed. The name that we used in our substitution
instance for '∃𝑥¬𝐿𝑥 ' on line 3, namely '𝑏', occurs in line 4. And the following proof
would be no better:
1 𝐿𝑏
2 ∃𝑥¬𝐿𝑥
3 ¬𝐿𝑏
4 𝐿𝑏 ∧ ¬𝐿𝑏 ∧I 1, 3
5 ∃𝑥(𝐿𝑥 ∧ ¬𝐿𝑥) ∃I 4
6 ∃𝑥(𝐿𝑥 ∧ ¬𝐿𝑥) ∃E 2, 3–5
The last line of the proof would still not be allowed. For the name that we used in our
substitution instance for '∃𝑥¬𝐿𝑥 ', namely '𝑏', occurs in an undischarged assumption,
namely line 1.
The moral of the story is this: when applying ∃E, choose a name for the substitution
instance that is completely new to the proof.
That way, you can guarantee that you meet all the constraints on the rule for ∃E. A new
name functions like an arbitrary name – it carries no prior baggage with it, apart from
what we stipulate or assume to hold of it.
Here’s an example using this newly introduced rule: ∃𝑥(𝐹𝑥 ∧ 𝐺𝑥) ⊢ (∃𝑥𝐹𝑥 ∧ ∃𝑥𝐺𝑥):
1 ∃𝑥(𝐹𝑥 ∧ 𝐺𝑥)
2 𝐹𝑎 ∧ 𝐺𝑎
3 𝐹𝑎 ∧E 2
4 ∃𝑥𝐹𝑥 ∃I 3
5 𝐺𝑎 ∧E 2
6 ∃𝑥𝐺𝑥 ∃I 5
7 ∃𝑥𝐹𝑥 ∧ ∃𝑥𝐺𝑥 ∧I 4, 6
8 ∃𝑥𝐹𝑥 ∧ ∃𝑥𝐺𝑥 ∃E 1, 2–7
The use of ∃E on the last line relies on the fact that we've gotten rid of any occurrence
of the new arbitrary name by the second‐last line. We have derived something generic
from the generic existential assumption, so it is safe to conclude that the generic claim
holds regardless of the identity of the individual that makes the existential claim true.
An argument that makes use of both patterns of arbitrary reasoning is this example
due to Breckenridge and Magidor: ‘from the premise that there is someone who loves
everyone to the conclusion that everyone is such that someone loves them’. Here is a
proof in our system, letting '𝐿𝑥𝑦' symbolise '1 loves 2 ', letting 'ℎ' symbolise the
arbitrarily chosen name 'Hiccup', and letting '𝑎' symbolise the arbitrarily chosen name
'Astrid':
1 ∃𝑥∀𝑦𝐿𝑥𝑦
2 ∀𝑦𝐿ℎ𝑦
3 𝐿ℎ𝑎 ∀E 2
4 ∃𝑥𝐿𝑥𝑎 ∃I 3
5 ∀𝑦∃𝑥𝐿𝑥𝑦 ∀I 4
6 ∀𝑦∃𝑥𝐿𝑥𝑦 ∃E 1, 2–5
At line 3, both our arbitrary names are in play – ℎ was newly introduced to the proof
in line 2 as the arbitrary person Hiccup who witnesses the truth of ‘∃𝑥∀𝑦𝐿𝑥𝑦’, and ‘𝑎’
at line 3 as an arbitrary person Astrid beloved by Hiccup. We can apply ∃I without
restriction at line 4, which takes the name ‘ℎ’ out of the picture – we no longer rely
on the specific instance chosen, since we are back at generalities about someone who
loves everyone, being such that they also love the arbitrarily chosen someone Astrid.
So we can safely apply ∀I at line 5, since the name ‘𝑎’ appears in no assumption nor in
‘∀𝑦∃𝑥𝐿𝑥𝑦’. But now we have at line 5 a claim that doesn’t involve the arbitrary name
‘ℎ’ either, which was newly chosen to not be in any undischarged assumption or in
∃𝑥∀𝑦𝐿𝑥𝑦. So we can safely say that the name Hiccup was just arbitrary, and nothing
in the proof of ‘∀𝑦∃𝑥𝐿𝑥𝑦’ depended on it, so we can discharge the specific assumption
about ℎ that was used in the course of that proof and nevertheless retain our
entitlement to '∀𝑦∃𝑥𝐿𝑥𝑦'.
are, we realize that the story cannot be true: there cannot be such a barber,
or such a village. The story is unacceptable.4
This uses some of our tricky quantifier rules, disjunction elimination (proof by cases)
and negation introduction (reductio), so it is really a showcase of many things we’ve
learned so far.
Let’s first try to symbolise the argument.
The argument then revolves around the claim that there is a barber who shaves every‐
one who doesn’t shave themselves. Semi‐formally paraphrased: someone x exists such
that x is a barber and for all people y: y does not shave themselves iff x shaves y. That
is:
∃𝑥(𝐵𝑥 ∧ ∀𝑦(¬𝑆𝑦𝑦 ↔ 𝑆𝑥𝑦)).
The argument takes the form of a reductio, so we will begin the proof by assuming this
claim for the sake of argument and see what happens:
1 ∃𝑥(𝐵𝑥 ∧ ∀𝑦(¬𝑆𝑦𝑦 ↔ 𝑆𝑥𝑦))
2 𝐵𝑎 ∧ ∀𝑦(¬𝑆𝑦𝑦 ↔ 𝑆𝑎𝑦)
3 ∀𝑦(¬𝑆𝑦𝑦 ↔ 𝑆𝑎𝑦) ∧E 2
4 (¬𝑆𝑎𝑎 ↔ 𝑆𝑎𝑎) ∀E 3
5 ¬(𝑃 ∧ ¬𝑃)
6 ¬𝑆𝑎𝑎
7 𝑆𝑎𝑎 ↔E 4, 6
8 ¬𝑆𝑎𝑎 R6
9 𝑆𝑎𝑎 ¬E 6–7, 6–8
10 ¬𝑆𝑎𝑎 ↔E 4, 9
11 (𝑃 ∧ ¬𝑃) ¬E 5–9, 5–10
12 (𝑃 ∧ ¬𝑃) ∃E 1, 2–11
13 𝑃 ∧E 12
14 ¬𝑃 ∧E 12
15 ¬∃𝑥(𝐵𝑥 ∧ ∀𝑦(¬𝑆𝑦𝑦 ↔ 𝑆𝑥𝑦)) ¬I 1–13, 1–14
4 R M Sainsbury (2009) Paradoxes 3rd ed., Cambridge University Press, pages 1–2.
One trick to this proof is to be sure to instantiate the universally quantified claim at line
3 by using the same name '𝑎' as was already used in line 2. This is because, intuitively,
the problem case for this supposed barber arises when you think about whether they
shave themselves or not. But the trickiest part of this proof occurs at lines
5–11. By line 4, we've already derived a contradictory biconditional. But if we just use it
to derive '𝑆𝑎𝑎' and '¬𝑆𝑎𝑎', the contradictory claims we obtain would end up involving
the name '𝑎'. That would mean we couldn't apply the ∃E rule, since the final line of the
subproof would contain the chosen name, so we couldn't get our logical falsehood out
of the subproof beginning on line 2, and hence couldn't perform the desired reductio on
line 1 via ¬I. So our trick is to suppose the negation of an unrelated logical falsehood
on line 5, derive the logical falsehood from line 4 in the range of that assumption, and
hence use ¬E to derive the logical falsehood '𝑃 ∧ ¬𝑃' on line 11. This doesn't contain
the name '𝑎', and hence can be extracted from the subproof to show that line 1 by itself
suffices to derive a logical falsehood; and that shows that the supposition that there is
such a barber is a logical falsehood.
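Since the conclusion is that no interpretation makes the barber sentence true, we can corroborate the proof by exhaustive search over small interpretations. The Python sketch below – again a metatheoretic check, not part of our system – examines every shaving relation on domains of up to three members:

```python
from itertools import product

# Search every relation S on a domain of size n for a 'barber' x
# satisfying ∀y(¬Syy ↔ Sxy). In particular, taking y = x would require
# ((x,x) not in S) == ((x,x) in S), which is impossible.
def barber_exists(n):
    pairs = [(x, y) for x in range(n) for y in range(n)]
    for bits in product([False, True], repeat=len(pairs)):
        S = {p for p, b in zip(pairs, bits) if b}
        for x in range(n):
            if all(((y, y) not in S) == ((x, y) in S) for y in range(n)):
                return True
    return False

print([barber_exists(n) for n in range(1, 4)])  # [False, False, False]
```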
where the name 𝒸 occurs nowhere among 𝒞𝑖 or elsewhere in 𝒜 , then this entail‐
ment also holds:
𝒞1 , …, 𝒞𝑛 ⊨ ∀𝓍𝒜.
For we could have substituted any other name for 𝒸 and the original entailment
would still have succeeded, since it could not have depended on the specific
name chosen. So it doesn’t matter what the interpretation of 𝒸 happens to be,
and if that doesn’t matter, it must be because everything is 𝒜 .
So we are again comforted: our proof rules can never lead us from true assumptions
to false claims, if correctly applied.
Practice exercises
A. The following three ‘proofs’ are incorrect. Explain why they are incorrect. If the argu‐
ment ‘proved’ is invalid, provide an interpretation which shows that the assumptions
involved do not entail the conclusion:
1.
1 ∀𝑥𝑅𝑥𝑥
2 𝑅𝑎𝑎 ∀E 1
3 ∀𝑦𝑅𝑎𝑦 ∀I 2
4 ∀𝑥∀𝑦𝑅𝑥𝑦 ∀I 3
2.
1 ∀𝑥∃𝑦𝑅𝑥𝑦
2 ∃𝑦𝑅𝑎𝑦 ∀E 1
3 𝑅𝑎𝑎
4 ∃𝑥𝑅𝑥𝑥 ∃I 3
5 ∃𝑥𝑅𝑥𝑥 ∃E 2, 3–4
3.
1 ∃𝑦¬(𝑇𝑦 ∨ ¬𝑇𝑦)
2 ¬(𝑇𝑑 ∨ ¬𝑇𝑑)
3 𝑇𝑑
4 𝑇𝑑 ∨ ¬𝑇𝑑 ∨I 3
5 ¬(𝑇𝑑 ∨ ¬𝑇𝑑) R2
6 ¬𝑇𝑑 ¬E 3–4, 3–5
7 (𝑇𝑑 ∨ ¬𝑇𝑑) ∨I 6
8 ¬(𝑇𝑑 ∨ ¬𝑇𝑑) R2
B. The following three proofs are missing their commentaries (rule and line numbers).
Add them, to turn them into bona fide proofs.
1 ∀𝑥∃𝑦(𝑅𝑥𝑦 ∨ 𝑅𝑦𝑥)
2 ∀𝑥¬𝑅𝑚𝑥
3 ∃𝑦(𝑅𝑚𝑦 ∨ 𝑅𝑦𝑚)
4 𝑅𝑚𝑎 ∨ 𝑅𝑎𝑚
5 𝑅𝑎𝑚
6 𝑅𝑚𝑎
7 ¬𝑅𝑎𝑚
8 ¬𝑅𝑚𝑎
9 𝑅𝑎𝑚
10 𝑅𝑎𝑚
11 ∃𝑥𝑅𝑥𝑚
12 ∃𝑥𝑅𝑥𝑚
2 𝐿𝑎𝑏
4 ∃𝑦𝐿𝑎𝑦
5 ∀𝑧𝐿𝑧𝑎
6 𝐿𝑐𝑎
7 ∃𝑦𝐿𝑐𝑦 → ∀𝑧𝐿𝑧𝑐
8 ∃𝑦𝐿𝑐𝑦
9 ∀𝑧𝐿𝑧𝑐

2 ∃𝑥∀𝑦𝐿𝑥𝑦
4 ∀𝑦𝐿𝑎𝑦
5 𝐿𝑎𝑎
6 𝐽𝑎
7 𝐽𝑎 → 𝐾𝑎
8 𝐾𝑎
9 𝐾𝑎 ∧ 𝐿𝑎𝑎
1. ⊢ ∀𝑥𝐹𝑥 ∨ ¬∀𝑥𝐹𝑥
2. ⊢ ∀𝑧(𝑃𝑧 ∨ ¬𝑃𝑧)
3. ∀𝑥(𝐴𝑥 → 𝐵𝑥), ∃𝑥𝐴𝑥 ⊢ ∃𝑥𝐵𝑥
4. ∀𝑥(𝑀𝑥 ↔ 𝑁𝑥), 𝑀𝑎 ∧ ∃𝑥𝑅𝑥𝑎 ⊢ ∃𝑥𝑁𝑥
5. ∀𝑥∀𝑦𝐺𝑥𝑦 ⊢ ∃𝑥𝐺𝑥𝑥
6. ⊢ ∀𝑥𝑅𝑥𝑥 → ∃𝑥∃𝑦𝑅𝑥𝑦
7. ⊢ ∀𝑦∃𝑥(𝑄𝑦 → 𝑄𝑥)
8. 𝑁𝑎 → ∀𝑥(𝑀𝑥 ↔ 𝑀𝑎), 𝑀𝑎, ¬𝑀𝑏 ⊢ ¬𝑁𝑎
9. ∀𝑥∀𝑦(𝐺𝑥𝑦 → 𝐺𝑦𝑥) ⊢ ∀𝑥∀𝑦(𝐺𝑥𝑦 ↔ 𝐺𝑦𝑥)
10. ∀𝑥(¬𝑀𝑥 ∨ 𝐿𝑗𝑥), ∀𝑥(𝐵𝑥 → 𝐿𝑗𝑥), ∀𝑥(𝑀𝑥 ∨ 𝐵𝑥) ⊢ ∀𝑥𝐿𝑗𝑥
F. Write a symbolisation key for the following argument, symbolise it, and prove it:
There is someone who likes everyone who likes everyone that she likes.
Therefore, there is someone who likes herself.
G. For each of the following arguments: if it is valid, give a proof to show this. If it is
not, construct an interpretation to show that the premises do not entail the conclusion.
1. ∃𝑦∀𝑥𝑅𝑥𝑦 ∴ ∀𝑥∃𝑦𝑅𝑥𝑦
2. ∃𝑥(𝑃𝑥 ∧ ¬𝑄𝑥) ∴ ∀𝑥(𝑃𝑥 → ¬𝑄𝑥)
3. ∀𝑥(𝑆𝑥 → 𝑇𝑎), 𝑆𝑑 ∴ 𝑇𝑎
4. ∀𝑥(𝐴𝑥 → 𝐵𝑥), ∀𝑥(𝐵𝑥 → 𝐶𝑥) ∴ ∀𝑥(𝐴𝑥 → 𝐶𝑥)
5. ∃𝑥(𝐷𝑥 ∨ 𝐸𝑥), ∀𝑥(𝐷𝑥 → 𝐹𝑥) ∴ ∃𝑥(𝐷𝑥 ∧ 𝐹𝑥)
6. ∀𝑥∀𝑦(𝑅𝑥𝑦 ∨ 𝑅𝑦𝑥) ∴ 𝑅𝑗𝑗
7. ∃𝑥∃𝑦(𝑅𝑥𝑦 ∨ 𝑅𝑦𝑥) ∴ 𝑅𝑗𝑗
8. ∀𝑥𝑃𝑥 → ∀𝑥𝑄𝑥, ∃𝑥¬𝑃𝑥 ∴ ∃𝑥¬𝑄𝑥
36
Derived Rules for Quantifier
In this section, we shall add some additional rules to the basic rules of the previous
section. These govern the interaction of quantifiers and negation. But they are no sub‐
stantive addition to our basic rules: for each of the proposed additions, it can be shown
that their role in any proof can be wholly emulated by some suitable applications of
our basic rules from §35. (The point here is as in §33.)
𝑚 ∀𝓍¬𝒜
¬∃𝓍𝒜 CQ∀/¬∃ 𝑚

𝑚 ¬∃𝓍𝒜
∀𝓍¬𝒜 CQ¬∃/∀ 𝑚
§36. DERIVED RULES FOR Quantifier 341
1 ∀𝓍¬𝒜
2 ∃𝓍𝒜
3 𝒜|𝒸↷𝓍
4 ¬(ℬ ∧ ¬ℬ)
5 ¬𝒜|𝒸↷𝓍 ∀E 1
6 ℬ ∧ ¬ℬ ¬E 4–5, 4–3
7 ℬ ∧ ¬ℬ ∃E 2, 3–6
8 ¬∃𝓍𝒜 ¬I 2–7
1. I was hasty at line 8 – officially I ought to have applied ∧E to line 7, obtaining the
contradictory conjuncts in the subproof, and then applied ¬I to the assumption
opening that subproof. (But then the proof would have gone over the page.)
2. Note that we had to introduce the new name 𝒸 at line 3. Once we did so, there
was no obstacle to applying ∀E on that newly introduced name in line 5. But if
we had done things the other way around, applying ∀E first to some new name
𝒸, we would have had to open the subproof with yet another new name 𝒹 .
A similar schematic proof could be offered for the second conversion rule, CQ¬∃/∀ .
Equally, we might add rules corresponding to the equivalence of ∃𝓍¬𝒜 and ¬∀𝓍𝒜 :
𝑚 ∃𝓍¬𝒜
¬∀𝓍𝒜 CQ∃/¬∀ 𝑚

𝑚 ¬∀𝓍𝒜
∃𝓍¬𝒜 CQ¬∀/∃ 𝑚
Here is a schematic basic proof showing that the third conversion of quantifiers rule
just introduced, CQ∃/¬∀ , can be emulated just using the standard quantifier rules in
combination with the other rules of our system, in which some of the same issues arise
as in the earlier schematic proof:
1 ∃𝓍¬𝒜
2 ∀𝓍𝒜
3 ¬𝒜|𝒸↷𝓍
4 ¬(ℬ ∧ ¬ℬ)
5 𝒜|𝒸↷𝓍 ∀E 2
6 ¬𝒜|𝒸↷𝓍 R3
7 ℬ ∧ ¬ℬ ¬E 4–5, 4–6
8 ℬ ∧ ¬ℬ ∃E 1, 3–7
9 ¬∀𝓍𝒜 ¬I 2–8
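The conversion‐of‐quantifier rules correspond to semantic equivalences, and these can be spot‐checked on finite interpretations. The Python sketch below (a metatheoretic check, not part of the proof system) sweeps every extension of '𝐹' over domains of size 1 to 4:

```python
from itertools import product

# For every extension of 'F': ∃x¬Fx matches ¬∀xFx, and ∀x¬Fx matches ¬∃xFx.
ok = True
for n in range(1, 5):
    for ext in product([True, False], repeat=n):
        some_not = any(not v for v in ext)   # ∃x¬Fx
        not_all = not all(ext)               # ¬∀xFx
        all_not = all(not v for v in ext)    # ∀x¬Fx
        not_some = not any(ext)              # ¬∃xFx
        ok = ok and some_not == not_all and all_not == not_some
print(ok)  # True
```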
1 ∀𝓍𝒜
2 ¬𝒜|𝒸↷𝓍
3 ∃𝓍¬𝒜 ∃I 2
4 ¬∀𝓍𝒜 CQ∃/¬∀ 3
5 ∀𝓍𝒜 R1
6 𝒜|𝒸↷𝓍 ¬E 2–4, 2–5
A schematic proof emulating ∀I using our other basic rules is trickier. Here it is:
1 𝛤
⋮
𝑚 ¬∀𝓍𝒜
𝑚+1 ∃𝓍¬𝒜 CQ¬∀/∃ 𝑚
𝑚+2 ¬𝒜|𝒸↷𝓍
𝑚+3 ¬(ℬ ∧ ¬ℬ)
⋮
𝑛 𝒜|𝒸↷𝓍
𝑛+1 ¬𝒜|𝒸↷𝓍 R 𝑚 + 2
𝑛+2 ℬ ∧ ¬ℬ ¬E 𝑚 + 3–𝑛, 𝑚 + 3–𝑛 + 1
𝑛+3 ℬ ∧ ¬ℬ ∃E 𝑚 + 1, 𝑚 + 2–𝑛 + 2
𝑛+4 ∀𝓍𝒜 ¬E 𝑚–𝑛 + 3
To understand this schematic proof, what we need to remember is that, in order for
the original ∀I rule to apply, we must already have a proof of 𝒜|𝒸↷𝓍 which relies on
assumptions 𝛤 that do not mention 𝒸 at all. The trick is to make use of that proof
inside an assumption about an existential witness. We don’t try to perform that proof
to derive 𝒜|𝒸↷𝓍 and then attempt to manipulate ¬∀𝓍𝒜 to generate a logical falsehood.
Rather, we first assume ¬∀𝓍𝒜 , apply quantifier conversion to obtain ∃𝓍¬𝒜 , assume
that 𝒸 witnesses that existential claim so that ¬𝒜|𝒸↷𝓍 , and then use our original proof
to derive 𝒜|𝒸↷𝓍 at line 𝑛. To avoid problems with the name appearing at the bottom of
the existential witness subproof, we perform the same trick of assuming the falsehood
of an arbitrary logical falsehood (so long as ℬ doesn’t include 𝒸), and then we manage
to derive from 𝛤 what we had hoped to: that ∀𝓍𝒜 .
Practice exercises
A. Show that the following are jointly contrary:
C. In §16, I considered what happens when we move quantifiers ‘across’ various con‐
nectives. Show that each pair of sentences is provably equivalent:
𝒸=𝒸 =I
Notice that this rule does not require referring to any prior lines of the proof, nor does
it rely on any assumptions. For any name 𝒸, you can write 𝒸 = 𝒸 at any point, with only
the =I rule as justification.
Recall that a relation is reflexive iff it holds between anything in the domain and itself
(§§18.5 and 21.10). Let's see this rule in action, in a proof that identity is reflexive:
1 𝑎=𝑎 =I
2 ∀𝑥𝑥 = 𝑥 ∀I 1
This seems like magic! But note that the first line is not an assumption (there is no horizontal line), and hence not an undischarged assumption. So the constant ‘𝑎’ appears in no undischarged assumption, nor anywhere in the proof other than in ‘𝑥 = 𝑥’|𝑎↷𝑥, so the conclusion ‘∀𝑥𝑥 = 𝑥’ follows by a legitimate application of the ∀I rule. So we’ve
established the reflexivity of identity: ⊢ ∀𝑥𝑥 = 𝑥 .
This proof can seem equally magical:
1 𝑎=𝑎 =I
2 ∃𝑥𝑥 = 𝑥 ∃I 1
Again we’ve shown that there is something rather than nothing on the basis of no
assumptions. Again, of course, it is the implicit assumption that every name in the
proof refers that does the heavy lifting here.
So for any sentence with ‘𝑎’ in it, given the prior claim that ‘𝑎 = 𝑏’, you can replace
some or all of the occurrences of ‘𝑎’ with ‘𝑏’ and produce an equivalent sentence. For
example, from ‘𝑅𝑎𝑎’ and ‘𝑎 = 𝑏’, you are justified in inferring ‘𝑅𝑎𝑏’, ‘𝑅𝑏𝑎’ or ‘𝑅𝑏𝑏’. More
generally:
𝑚 𝒶=𝒷
𝑛 𝒜|𝒶↷𝓍
𝒜|𝒷↷𝓍 =E 𝑚 , 𝑛
§37. RULES FOR IDENTITY 347
This uses our standard notion of substitution – it basically says that if you have some
sentence which arises from substituting 𝒶 for some variable in a formula, then you are
entitled to another substitution instance of the same formula using 𝒷 instead. Lines
𝑚 and 𝑛 can occur in either order, and do not need to be adjacent, but we always cite
the statement of identity first.
Note that nothing in the rule forbids the constant 𝒷 from occurring in 𝒜 . So this is a
perfectly good instance of the rule:
1 𝑎=𝑏
2 𝑅𝑎𝑏
3 𝑅𝑏𝑏 =E 1, 2
Here, ‘𝑅𝑎𝑏’ is ‘𝑅𝑥𝑏’|𝑎↷𝑥 , and the conclusion ‘𝑅𝑏𝑏’ is ‘𝑅𝑥𝑏’|𝑏↷𝑥 , which conforms to the
rule. This formulation allows us, in effect, to replace some‐but‐not‐all occurrences of a
name in a sentence by a co‐referring name. This rule is sometimes called Leibniz’s Law
– though recall §18.5, where we used that name for a claim about the interpretation of
‘=’. Here’s a slightly more complex example of the rule in action:
1 ∀𝑥𝑥 = 𝑑
2 𝐹𝑑
3 𝑗=𝑑 ∀E 1
4 𝑗=𝑗 =I
5 𝑑=𝑗 =E 3, 4
6 𝐹𝑗 =E 5, 2
7 ∀𝑥𝐹𝑥 ∀I 6
This proof has two features worth commenting on. First, the name ‘𝑗’ occurs on the
second last line, but no undischarged assumption uses it, so it is correct to apply ∀I on
the last line.
The second thing to note is the curious sequence of steps at lines 3–5. We need to do that because the =E rule takes an identity statement of the form ‘𝒶 = 𝒷’, and a sentence containing ‘𝒶’ – the name on the left of the identity – and generates a sentence containing ‘𝒷’, the name on the right of the identity. But in our proof we ended up with ‘𝑗 = 𝑑’ and ‘𝐹𝑑’ – strictly speaking, the identity rule doesn’t apply to these sentences, because that would be to substitute the name on the right of the identity into ‘𝐹𝑑’. The sequence of steps at lines 3–5 allows us to ‘flip’ an identity. We start with ‘𝑗 = 𝑑’ and we want to replace one occurrence of ‘𝑗’ in ‘𝑗 = 𝑗’ with ‘𝑑’. That is allowed, because ‘𝑗 = 𝑗’ is the same as ‘𝑥 = 𝑗’|𝑗↷𝑥. That yields ‘𝑥 = 𝑗’|𝑑↷𝑥, i.e., ‘𝑑 = 𝑗’ on line 5, which is just what we need to yield line 6.
To see the rules in action, we shall prove some quick results. Recall that a relation is
symmetric iff whenever it holds between x and y in one direction, it holds also between
y and x in the other direction (§21.9). This condition can be expressed as a sentence of Quantifier: ∀𝑥∀𝑦(𝑅𝑥𝑦 → 𝑅𝑦𝑥).
So first, we shall prove that identity is symmetric, a result we already noted on semantic
grounds in §§18.5 and 21.10. That is, ⊢ ∀𝑥∀𝑦(𝑥 = 𝑦 → 𝑦 = 𝑥):
1 𝑎=𝑏
2 𝑎=𝑎 =I
3 𝑏=𝑎 =E 1, 2
4 𝑎=𝑏→𝑏=𝑎 →I 1–3
5 ∀𝑦(𝑎 = 𝑦 → 𝑦 = 𝑎) ∀I 4
6 ∀𝑥∀𝑦(𝑥 = 𝑦 → 𝑦 = 𝑥) ∀I 5
Line 2 is just ‘𝑥 = 𝑎’|𝑎↷𝑥, as well as being of the right form for =I, and line 3 is just ‘𝑥 = 𝑎’|𝑏↷𝑥, so the move from 2 to 3 is in conformity with the =E rule given the opening assumption. This is the same sequence of moves we saw in the proof above, now in a more general setting.
Having noted the symmetry of identity, we can use it to establish the following schematic proof, which allows us to use 𝒶 = 𝒷 to move from a claim about 𝒷 to a claim about 𝒶, not just vice versa as in our =E rule:
𝑚 𝒶=𝒷
𝑚+1 𝒶=𝒶 =I
𝑚+2 𝒷=𝒶 =E 𝑚, 𝑚 + 1
𝑛 𝒜|𝒷↷𝓍
𝒜|𝒶↷𝓍 =E 𝑚 + 2 , 𝑛
This schematic proof can be packaged as a derived rule, =ES:
𝑚 𝒶=𝒷
𝑛 𝒜|𝒷↷𝓍
𝒜|𝒶↷𝓍 =ES 𝑚, 𝑛
In your proofs, you can use the original identity elimination rule on its own, or in combination with this derived rule.
A relation is transitive (§21.10) iff whenever it holds between x and y and between y and
z, it also holds between x and z. (In the directed graph representation of the relation
introduced in §21.9, if there is a path along arrows going from node a to node b via a
third node, there is also a direct arrow from a to b.) Second, we shall prove that identity
is transitive, that ⊢ ∀𝑥∀𝑦∀𝑧((𝑥 = 𝑦 ∧ 𝑦 = 𝑧) → 𝑥 = 𝑧).
1 𝑎 =𝑏∧𝑏 =𝑐
2 𝑏=𝑐 ∧E 1
3 𝑎=𝑏 ∧E 1
4 𝑎=𝑐 =E 2, 3
5 (𝑎 = 𝑏 ∧ 𝑏 = 𝑐) → 𝑎 = 𝑐 →I 1–4
6 ∀𝑧((𝑎 = 𝑏 ∧ 𝑏 = 𝑧) → 𝑎 = 𝑧) ∀I 5
7 ∀𝑦∀𝑧((𝑎 = 𝑦 ∧ 𝑦 = 𝑧) → 𝑎 = 𝑧) ∀I 6
8 ∀𝑥∀𝑦∀𝑧((𝑥 = 𝑦 ∧ 𝑦 = 𝑧) → 𝑥 = 𝑧) ∀I 7
We obtain line 4 by replacing ‘𝑏’ in line 3 with ‘𝑐 ’; this is justified given line 2, ‘𝑏 = 𝑐 ’.
We could alternatively have used the derived rule =ES to replace ‘𝑏’ in line 2 with ‘𝑎’,
justified by line 3, ‘𝑎 = 𝑏’.
Recall from §21.10 that a relation that is reflexive, symmetric, and transitive is an equi‐
valence relation. So we’ve formally proved that identity is an equivalence relation. We
can also give formal proofs of other features of identity, such as a proof that identity is
serial.
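These little formal results have exact counterparts in modern proof assistants. Here is a sketch in Lean 4 (the theorem names are ours): reflexivity, symmetry, and transitivity of ‘=’ are precisely the patterns just proved:

```lean
-- The =I pattern: 𝒸 = 𝒸 on no assumptions.
theorem id_refl {α : Type} : ∀ x : α, x = x :=
  fun _ => rfl

-- The 'flip an identity' move from the symmetry proof.
theorem id_symm {α : Type} : ∀ x y : α, x = y → y = x :=
  fun _ _ h => h.symm

-- The transitivity proof: unpack the conjunction, chain the identities.
theorem id_trans {α : Type} : ∀ x y z : α, x = y ∧ y = z → x = z :=
  fun _ _ _ h => h.left.trans h.right
```

Here `h.symm` and `h.left.trans h.right` play the roles of the =ES manoeuvre and the =E step at line 4 of the transitivity proof.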
5 𝐻𝑎
6 ∀𝑦(𝐻𝑦 ↔ 𝑏 = 𝑦) ∧E 4
7 (𝐻𝑎 ↔ 𝑏 = 𝑎) ∀E 6
8 𝑏=𝑎 ↔E 7, 5
9 𝐺𝑎 ∧E 3
10 ¬𝐺𝑏 ∧E 4
11 ¬𝐺𝑎 =E 8, 10
13 ¬𝐻𝑎 ∃E 2, 4–12
Practice exercises
A. Provide a proof of each of the following.
1. 𝑃𝑎 ∨ 𝑄𝑏, 𝑄𝑏 → 𝑏 = 𝑐, ¬𝑃𝑎 ⊢ 𝑄𝑐
2. 𝑚 = 𝑛 ∨ 𝑛 = 𝑜, 𝐴𝑛 ⊢ 𝐴𝑚 ∨ 𝐴𝑜
3. ∀𝑥 𝑥 = 𝑚, 𝑅𝑚𝑎 ⊢ ∃𝑥𝑅𝑥𝑥
4. ∀𝑥∀𝑦(𝑅𝑥𝑦 → 𝑥 = 𝑦) ⊢ 𝑅𝑎𝑏 → 𝑅𝑏𝑎
5. ¬∃𝑥¬𝑥 = 𝑚 ⊢ ∀𝑥∀𝑦(𝑃𝑥 → 𝑃𝑦)
6. ∃𝑥𝐽𝑥, ∃𝑥¬𝐽𝑥 ⊢ ∃𝑥∃𝑦 ¬𝑥 = 𝑦
7. ∀𝑥(𝑥 = 𝑛 ↔ 𝑀𝑥), ∀𝑥(𝑂𝑥 ∨ ¬𝑀𝑥) ⊢ 𝑂𝑛
8. ∃𝑥𝐷𝑥, ∀𝑥(𝑥 = 𝑝 ↔ 𝐷𝑥) ⊢ 𝐷𝑝
9. ∃𝑥((𝐾𝑥 ∧ ∀𝑦(𝐾𝑦 → 𝑥 = 𝑦)) ∧ 𝐵𝑥), 𝐾𝑑 ⊢ 𝐵𝑑
10. ⊢ 𝑃𝑎 → ∀𝑥(𝑃𝑥 ∨ ¬𝑥 = 𝑎)
B. Show that the following are provably equivalent:
› ∃𝑥((𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦)) ∧ 𝑥 = 𝑛)
› 𝐹𝑛 ∧ ∀𝑦(𝐹𝑦 → 𝑛 = 𝑦)
And hence that both have a decent claim to symbolise the English sentence ‘Nick is
the F’.
C. In §18, I claimed that the following are logically equivalent symbolisations of the
English sentence ‘there is exactly one F’:
› ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦))
› ∃𝑥∀𝑦(𝐹𝑦 ↔ 𝑥 = 𝑦)
› ∃𝑥𝐹𝑥 ∧ ∀𝑥∀𝑦((𝐹𝑥 ∧ 𝐹𝑦) → 𝑥 = 𝑦)
Show that they are all provably equivalent. (Hint: to show that three claims are prov‐
ably equivalent, it suffices to show that the first proves the second, the second proves
the third and the third proves the first; think about why.)
D. Symbolise the following argument:
There is exactly one F. There is exactly one G. Nothing is both F and G. So:
there are exactly two things that are either F or G.
𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ 𝒞
means that there is some proof which starts with assumptions 𝒜1 , 𝒜2 , …, 𝒜𝑛 and ends
with 𝒞 (and no undischarged assumptions other than 𝒜1 , 𝒜2 , …, 𝒜𝑛 ). This is a proof‐
theoretic notion.
By contrast, this:
𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ 𝒞
means that there is no valuation (or interpretation) which makes all of 𝒜1 , 𝒜2 , …, 𝒜𝑛
true and makes 𝒞 false. This concerns assignments of truth and falsity to sentences. It
is a semantic notion.
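For Sentential, the semantic notion is mechanically checkable: run through every valuation of the relevant atomic sentences and look for one that makes all the premises true and the conclusion false. A minimal sketch in Python (the function names and encoding are our own choices for illustration):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """A1, ..., An ⊨ C: no valuation makes every premise true and C false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counter-valuation
    return True

A = lambda v: v["A"]
B = lambda v: v["B"]

# 'A ∧ ¬A ⊨ B': no valuation makes the premise true, so none is a counterexample.
assert entails([lambda v: A(v) and not A(v)], B, ["A", "B"])
# 'A ⊭ B': the valuation making A true and B false is a counterexample.
assert not entails([A], B, ["A", "B"])
```

Note that nothing of the sort is available for ‘⊢’: there is no analogous finite search through all possible proofs, which is one reason the two notions must be kept apart.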
I cannot emphasise enough that these are different notions. But I can emphasise it a
bit more: They are different notions.
Once you have internalised this point, continue reading.
§38. PROOF‐THEORETIC CONCEPTS AND SEMANTIC CONCEPTS 353
Given a choice between showing that a sentence is a theorem and showing that it is a logical truth, it
would be easier to show that it is a theorem.
Contrariwise, to show that a sentence is not a theorem is hard. We would need to reason
about all (possible) proofs. That is very difficult. But to show that a sentence is not a
logical truth, you need only construct an interpretation in which the sentence is false.
Granted, it may be hard to come up with the interpretation; but once you have done so,
it is relatively straightforward to check what truth value it assigns to a sentence. Given
a choice between showing that a sentence is not a theorem and showing that it is not
a logical truth, it would be easier to show that it is not a logical truth.
Fortunately, a sentence is a theorem if and only if it is a logical truth. As a result, if
we provide a proof of 𝒜 on no assumptions, and thus show that 𝒜 is a theorem, we
can legitimately infer that 𝒜 is a logical truth; i.e., ⊨ 𝒜 . Similarly, if we construct an
interpretation in which 𝒜 is false and thus show that it is not a logical truth, it follows
that 𝒜 is not a theorem.
More generally, we have the following powerful result:
𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ iff 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ ℬ.
The left‐to‐right direction of this result, that a provable argument really is a valid entailment, is known as SOUNDNESS (different from the soundness of an argument from §2.6).
The right‐to‐left direction, that every entailment has a proof, is known as COMPLETE‐
NESS.
Soundness and completeness together show that, whilst provability and entailment
are different notions, they are extensionally equivalent, holding between just the same
sentences in our languages. As such:
› An argument is valid iff the conclusion can be proved from the premises.
› Two sentences are logically equivalent iff they are provably equivalent.
› Sentences are jointly consistent iff they are not jointly contrary.
For this reason, you can pick and choose when to think in terms of proofs and when to
think in terms of valuations/interpretations, doing whichever is easier for a given task.
Table 38.1 summarises which is (usually) easier.
It is intuitive that provability and semantic entailment should agree. But – let me re‐
peat this – do not be fooled by the similarity of the symbols ‘⊨’ and ‘⊢’. These two
symbols have very different meanings. And the fact that provability and semantic en‐
tailment agree is not an easy result to come by.
We showed part of this result along the way, actually. All those little observations I
made about how our proof rules were good each took the form of an argument that
whenever 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊢ ℬ, then also 𝒜1 , 𝒜2 , …, 𝒜𝑛 ⊨ ℬ. So in effect we’ve already
established the soundness of our natural deduction proof system, when we justified
those rules in terms of the existing understanding of the semantics we possess.
That brings you to the end of the material in this course. But we’ve barely scratched the
surface of the subject, and there are a number of directions in which you could pursue the study of logic further. I will briefly indicate some of the things you can do in logic, as well as some of the applications that logic has found in other fields.
I’ll also mention some further reading.
› Tim Button’s book is Metatheory; it covers just Sentential, and some of the results there were actually covered in this book already, notably results around expressiveness of Sentential in §14. Metatheory is available at www.homepages.ucl.ac.uk/~uctytbu/Metatheory.pdf. It should be accessible for self‐study by students who have successfully completed this course.
› Antony Eagle’s book Elements of Deductive Logic goes rather further than Meta‐
theory, including direct proofs of Compactness for Sentential, consideration of
alternative derivation systems, discussion of computation and decidability, and
metatheory for Quantifier. It is available at github.com/antonyeagle/edl.
› The most comprehensive open access logic texts are those belonging to the Open
Logic Project (https://openlogicproject.org). There are open texts on intermediate logic, set theory, modal logic, and non‐classical logics. For most of the topics
I touch on below, the Open Logic Project texts are reliable and accessible sources.
There are many texts, remixing a common core of resources which together make
up the whole open logic text.
The metatheory of classical logic, the logic we’ve discussed, is well‐understood. The
subject of logic itself goes well beyond classical logic, in at least three ways.
Using Logic
The original spur to Frege and colleagues in creating modern logic was to provide a
framework in which mathematics could be formally regimented and mathematical
proof could be systematically represented and checked. So a very standard next step in logic is to formalise various mathematical theories. This is typically done by fixing
a symbolisation key to give a mathematical interpretation to predicates and names of
Quantifier, and adding some axioms that carry substantive information about the in‐
terpretation. Often some new expressive resources are added too, such as FUNCTION
SYMBOLS and other term‐forming operators. Some familiar binary function symbols
include ‘+’ and ‘⋅’ (multiplication): these take two terms and yield a complex term.
They operate recursively, so complex terms can themselves be given as arguments. So
‘7 + 5’, ‘7 + 𝑥’, and ‘(3 ⋅ 𝑥) + 9’ are all complex terms.
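The recursive structure of complex terms can be sketched in a few lines of Python. The representation here is a toy of our own devising (numerals as ints, variables as strings, complex terms as nested tuples), not anything from a standard library:

```python
def evaluate(term, env):
    """Evaluate a term relative to a variable assignment 'env'.
    A term is an int (numeral), a str (variable), or a tuple
    ('+', t1, t2) / ('*', t1, t2) built recursively from terms."""
    if isinstance(term, int):
        return term
    if isinstance(term, str):
        return env[term]
    op, t1, t2 = term          # a complex term: operator plus two subterms
    left, right = evaluate(t1, env), evaluate(t2, env)
    return left + right if op == '+' else left * right

# The complex term '(3 ⋅ x) + 9', with 'x' assigned 5, denotes 24:
assert evaluate(('+', ('*', 3, 'x'), 9), {'x': 5}) == 24
```

The recursion in `evaluate` mirrors the recursion in the syntax: because complex terms can themselves be arguments to function symbols, the evaluator calls itself on each subterm.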
One standard formalised mathematical theory is ROBINSON ARITHMETIC 𝑸.1 This is a
theory in the language of Quantifier plus function symbols ‘+’, ‘⋅’ and the symbol ‘′ ’ for
successor (the number immediately after a given number). There is one interpreted
name, ‘𝟎’, which names zero in the intended interpretation. The axioms of the theory
– sentences that are assumed to be true, and so serve to delimit the possible inter‐
pretations under consideration – are these, listed here together with their intended
interpretation:

› ∀𝑥∀𝑦(𝑥′ = 𝑦′ → 𝑥 = 𝑦) – no two distinct numbers have the same successor;
› ∀𝑥 ¬𝟎 = 𝑥′ – zero is not the successor of any number;
› ∀𝑥(¬𝑥 = 𝟎 → ∃𝑦 𝑥 = 𝑦′) – every number other than zero is a successor;
› ∀𝑥 𝑥 + 𝟎 = 𝑥 – adding zero to a number leaves it unchanged;
› ∀𝑥∀𝑦 𝑥 + 𝑦′ = (𝑥 + 𝑦)′ – adding the successor of a number yields the successor of the sum;
› ∀𝑥 𝑥 ⋅ 𝟎 = 𝟎 – multiplying any number by zero yields zero;
› ∀𝑥∀𝑦 𝑥 ⋅ 𝑦′ = (𝑥 ⋅ 𝑦) + 𝑥 – multiplying by the successor of a number adds one more copy of the first number.

These axioms serve to fix the interpretation of addition, multiplication, and successor.
These axioms are all true in interpretations of Quantifier in which the domain is the
natural numbers, and the function symbols have their intended interpretation. The
consequences of these axioms, those sentences that are true in every interpretation
1 Robinson arithmetic is weaker than full ordinary arithmetic, but occupies a special place in formal
mathematics because of its role in the Gödel incompleteness theorems. Robinson arithmetic in more or
less this form is discussed by George Boolos, John P Burgess and Richard C Jeffrey (2007) Computability
and Logic, 5th ed., Cambridge University Press, chapter 16.
§39. NEXT STEPS 357
in which all the axioms are true, are the arithmetical truths that hold in Robinson
arithmetic.2
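Although only a proof establishes them in full generality, the axioms of 𝑸 (in the formulation of Boolos, Burgess and Jeffrey, cited above) can be spot‐checked against the intended interpretation on a finite initial segment of the natural numbers. A sketch:

```python
def succ(n):
    return n + 1

N = range(0, 25)  # a finite initial segment of the intended domain

# Successor axioms: successor is injective, and zero is no successor.
assert all(x == y for x in N for y in N if succ(x) == succ(y))
assert all(succ(x) != 0 for x in N)
# Every non-zero element of the segment is a successor.
assert all(any(x == succ(y) for y in N) for x in N if x != 0)
# Recursion clauses for addition and multiplication.
assert all(x + 0 == x for x in N)
assert all(x + succ(y) == succ(x + y) for x in N for y in N)
assert all(x * 0 == 0 for x in N)
assert all(x * succ(y) == (x * y) + x for x in N for y in N)
```

Passing such checks shows only that the sampled instances hold; that all the quantified axioms are true in the intended interpretation is exactly the kind of claim the formal theory and its metatheory address.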
Once you have a theory like this, a collection of axioms with an intended interpretation
symbolised in Quantifier, you can ask questions about soundness and completeness of
these theories with respect to that model. Robinson arithmetic is sound with respect
to its intended interpretation in the natural numbers. But it is incomplete: there are
arithmetical truths it does not include.
The most striking aspect of this result is that any arithmetical theory that includes
Robinson arithmetic will be incomplete, in that there are truths that are not con‐
sequences of the axioms. This is the upshot of the famous limitative results that are
the central target of most intermediate logic courses: Gödel’s incompleteness theorem
and Tarski’s theorem on the indefinability of truth. These results rely essentially on
the fact that even simple arithmetical theories include a device of SELF‐REFERENCE,
enabling them to define arithmetical formulae that ‘talk about’ sentences of the lan‐
guage of arithmetic. By a diagonal argument reminiscent of the Liar paradox (‘this
sentence is not true’), Gödel and Tarski show that, as long as arithmetic is consistent,
there will be true sentences that are not provable.3
Extending Logic
The logic we have was designed to model mathematical arguments, so the uses men‐
tioned above are unsurprising. But there are arguments on many topics outside of
mathematics, and it is not obvious that the logics we have are suitable for these argu‐
ments.4
We’ve already seen in §23 an example which we might hope is valid, but which isn’t
symbolised as a valid argument in Quantifier:
It is raining
So: It will be that it was raining.
Given our liberal attitude to structural words (§4.3), the natural idea is to take the tense expressions – in this case, the future ‘will’ and the past ‘was’ – to be structural
words. The standard approach of TENSE LOGIC is to treat those tenses as monadic sen‐
tential connectives. Then if ‘𝑃’ means ‘it is raining’, the argument could be symbolised
like so: 𝑃 ∴ will was 𝑃.
This symbolisation isn’t especially helpful without some semantic understanding of
what these tense operators mean. In the classical logic of this text, sentences don’t
2 Another project of the same sort is the symbolisation of set theory in Quantifier(see Patrick Suppes
(1972) Axiomatic Set Theory, Dover), or the symbolisations of mereology, the formal theory of part
and whole (see Achille C Varzi (2016) ‘Mereology’ in Edward N Zalta, ed., Stanford Encyclopedia of
Philosophy, https://plato.stanford.edu/entries/mereology/).
3 See Boolos, Burgess and Jeffrey, op. cit., and Raymond M Smullyan (2001) ‘Gödel’s Incompleteness
Theorems’, pp. 72–89 in Lou Goble, ed., The Blackwell Guide to Philosophical Logic, Blackwell.
4 Good sources for the logics discussed in this section and the next – especially tense, modal, and non‐
classical logics – are John P Burgess (2008) Philosophical logic, Princeton University Press; JC Beall and
Bas C van Fraassen (2003) Possibilities and Paradox, Oxford University Press; Graham Priest (2008) An
Introduction to Non‐Classical Logic, Cambridge University Press.
change their truth values within a valuation. This is not the way we introduced them,
but we can think of a valuation as a snapshot, capturing a momentary assignment of
truth values at a time. A HISTORY will be an ordered sequence of valuations, represent‐
ing the way that the truth values of the atomic sentences change over time. (We may
also need to mark a special valuation, the present one, which represents how things
actually are.) Then ‘will 𝒜 ’ is true at a valuation in a history iff 𝒜 is true at some later
valuation in that history; ‘was 𝒜 ’ is true at a valuation in a history iff 𝒜 is true at some
earlier valuation in that history. An argument is valid in a history iff every valuation at which the premises are true is also a valuation at which the conclusion is true; and an
argument is valid iff it is valid on every history. The argument above will turn out to
be valid. Suppose ‘𝑃’ is true at a valuation 𝑣 in a history. Then at any valuation later
than 𝑣, ‘was 𝑃’ is true. But then at 𝑣, there is a later valuation at which ‘was 𝑃’ is true,
so that ‘will was 𝑃’ is true at 𝑣. The history was chosen arbitrarily (it does need time to
extend arbitrarily in both directions though), so the argument is valid.
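This semantics can be modelled directly, representing a history as a list of valuations indexed by time; the encoding below is our own, chosen for illustration:

```python
def atom(name):
    """An atomic sentence: true at time t iff the valuation at t says so."""
    return lambda h, t: h[t][name]

def will(phi):
    """'will φ': true at t iff φ is true at some later time in the history."""
    return lambda h, t: any(phi(h, u) for u in range(t + 1, len(h)))

def was(phi):
    """'was φ': true at t iff φ is true at some earlier time in the history."""
    return lambda h, t: any(phi(h, u) for u in range(t))

P = atom("P")
history = [{"P": False}, {"P": True}, {"P": False}]  # raining only at time 1

assert P(history, 1)             # premise: 'P' at time 1
assert will(was(P))(history, 1)  # conclusion: 'will was P' at time 1
```

A finite list cuts time off at both ends: at the final moment ‘will 𝒜’ is always false, which illustrates why the validity of the argument requires that time extend beyond the present.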
This kind of structured collection of valuations we’ve called a history is also used in
other extensions of classical logic. This is a non‐exhaustive list of examples:
› The logic that results from taking the sentence operators ‘necessarily’ and ‘pos‐
sibly’ to be structural words, known as MODAL LOGIC, uses a collection of valu‐
ations, which modal logicians tend picturesquely to call the POSSIBLE WORLDS;
› The logic that results from taking ‘obligatorily’ and ‘permissibly’ to be structural
words, or DEONTIC LOGIC, also uses the same framework. There a collection of
valuations corresponds to ideal possible worlds; 𝒜 is obligatory iff it is true at all
ideal worlds, permissible iff it is true at some. This is an interesting logic because
the actual world is not typically thought to be among the ideal worlds.
These constructions using valuations are extensions of Sentential. There are also quan‐
tified temporal/modal/deontic logics, where each moment of history (or each possible
world) is an interpretation, not a valuation. There are some rich issues about whether the domain itself, or just the extensions of predicates, should be permitted to vary over time, as well as disputes over whether the temporal/modal operators should be permitted to occur within the scope of quantifiers. For example, should ‘∃𝑥 was 𝐹𝑥’ be acceptable – saying that there is something which has the temporal property was 𝐹? Or should we confine the temporal properties to tensed truth, i.e., hold that it is only sentences which have the temporal properties of previous and subsequent truth?
A huge literature has grown up on the topic of CONDITIONAL LOGICS, logics which
add new connectives, distinct from ‘→’, hoping to better represent the behaviour of
conditionals in English. Perhaps the most celebrated are the logics associated with
Lewis‐Stalnaker conditionals. In Lewis’ version, the counterfactual conditional (§8.6)
has a certain modal force: it tells you about how things would have been under altern‐
ative possible circumstances. So the logic of conditionals he develops draws on the
framework of modal logic, with some innovations of his own. The Stalnaker condi‐
tional is similar, but Stalnaker wants to claim that the indicative conditional too has
modal force.5
5 See David Lewis (1973) Counterfactuals, Blackwell, and Robert C Stalnaker (1975) ‘Indicative condition‐
als’, Philosophia 5: 269–286, https://doi.org/10.1007/bf02379021.
1. There is a property Bob has and Alice does not. ∃𝑋(𝑋𝑏 ∧ ¬𝑋𝑎)
So: Bob isn’t Alice. 𝑏 ≠ 𝑎
This sort of example can seem trivial, but in fact second‐order logic has many powerful
features that allow it to vastly outstrip the expressive capacity of Quantifier. It has so
much power that the philosopher W V O Quine once said that second‐order logic is
‘set theory in sheep’s clothing’ – it has substantial mathematical content, disguised as if it were nothing but pure logic.6
Modifying Logic
Another direction we could take is not adding extra expressive resources to logic, but
modifying the logic we already have in some way. We’ve already seen some attempts to
do this, when we briefly considered intuitionistic logic and its restrictions on negation
elimination in §30.3.
More recently, influential alternative logics have been explored which result from re‐
stricting the structural rules permitted in proofs (§31.3). These logics, even though they
are purely sentential and lack quantifiers, turn out to be very complex and puzzling.
One prominent alternative is LINEAR LOGIC, which results from restrictions to the rule
of contraction. Linear logic is sometimes said to be resource conscious; it matters, in
linear logic, how many times you need to appeal to an assumption in order to construct
a proof. Some claims which might be provable by appealing twice to an assumption,
may fail to be provable if you were permitted only to appeal once to that assumption.
So in the assumptions of a linear logic proof it is important to note how many times
an assumption appears – contraction, which says that the same things can be proved
even if you throw away duplicate assumptions, is not compatible with keeping track
of resources in this way. Linear logic has found uses in computer science, in model‐
ling the behaviour of algorithms – in an actual computation, it can be very important
to be efficient in the calls you make on resources. If appealing to an assumption in‐
volves reading that assumption from disk, for example, then the fewer appeals to the
assumption you can make, the faster your algorithm will run.
Amongst philosophers, however, the most prominent substructural logic is RELEVANCE
LOGIC (US), aka RELEVANT LOGIC (UK/Australasia).7 Relevant logicians argue that the
so‐called ‘paradoxes of material implication’ (§11.5) are symptoms of a broader failure
of classical logic to require that the premises of a valid argument must be relevant to
its conclusion. Relevant logicians are particularly incensed by the fact that 𝐴 ∧ ¬𝐴 ⊢ 𝐵;
accordingly, they need to block the standard proof:
1 𝐴 ∧ ¬𝐴
2 ¬𝐵
3 𝐴 ∧E 1
4 ¬𝐴 ∧E 1
5 𝐵 ¬E 2–3, 2–4
The relevant logician doesn’t wish to abandon any of these rules, or at least, not the
intuition behind them. But they do generally want to resist our being able to introduce
a new irrelevant assumption like ‘¬𝐵’ whenever we want. So relevant logics are a class of substructural logics which don’t satisfy weakening.
A final class of modifications to classical logic I will consider are MANY‐VALUED LOGICS:
logics that go beyond two truth values.8 Some many‐valued logics introduce just a
third truth value, indeterminate, to represent the truth values of unsettled matters
such as contingent future outcomes. Such logics also gained some purchase as con‐
ditional logics – not extending classical logic this time, but changing the behaviour of
the existing conditional. For example, many resist the idea that a conditional should
be true just because its antecedent is false. Perhaps we should say rather that the con‐
ditional is unsettled when the antecedent is false, because we just can’t tell whether
the consequent follows.
Another use of many valued logics appeals to not just a third truth value, but infinitely
many, sometimes called DEGREES OF TRUTH. Such logics have had some appeal to
philosophers working on vagueness, a topic we will turn to shortly.
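One concrete three‐valued scheme, strong Kleene, can be tabulated in a few lines. The encoding below (true as 1, indeterminate as 0.5, false as 0) is our own choice for illustration:

```python
T, N, F = 1.0, 0.5, 0.0  # true, indeterminate, false

def neg(a):      return 1 - a
def conj(a, b):  return min(a, b)
def disj(a, b):  return max(a, b)
def cond(a, b):  return disj(neg(a), b)  # material conditional, Kleene-style

assert neg(N) == N        # negating the unsettled stays unsettled
assert conj(T, N) == N    # a conjunction is only as true as its weakest conjunct
assert cond(F, F) == T    # Kleene keeps the classical false-antecedent clause
assert cond(N, N) == N    # but the conditional is unsettled on unsettled inputs
```

Note that strong Kleene still makes a conditional with a false antecedent true; the proposal sketched above would instead modify that clause so the conditional comes out unsettled there. The min/max clauses also generalise smoothly to infinitely many values, which is how degree‐theoretic logics typically proceed.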
8 See Priest, op. cit., and Beall and van Fraassen, op. cit..
9 Henry Prakken and Giovanni Sartor (2015) ‘Law and Logic: a Review from an Argumentation Perspect‐
ive’, Artificial Intelligence 227: 214–45, https://doi.org/10.1016/j.artint.2015.06.005.
10 See Paul Portner (2005) What is Meaning?, Blackwell.
FORM. The properties and nature of this postulated logical form are heavily indebted to the kinds of logical resources available to the theorist. We saw a
prominent example of different logical frameworks being brought to bear on a
hypothesis about the ‘real’ meaning of a natural language expression in our brief
discussion of the analysis of definite descriptions in §19.
› In management consulting and related fields, the topic‐neutrality of logic and the
habits of rigorous thought it encourages are very useful for people who have to
offer recommendations about complex areas without necessarily having much
subject‐specific knowledge.
In the case of ’red’, adjacent slivers of colour in the spectrum are very similar – they are
visually indistinguishable, i.e., they look the same. And surely if two things look the
same, they cannot be different colours. How could there be two distinct colours that are indistinguishable in appearance? Colours are appearance properties, most say.
If ‘red’ is tolerant, then any two adjacent slivers will be either both red, or both not red. So where ‘𝑅’ is interpreted to mean ‘___1 is red’, and ‘𝐴’ means ‘___1 is adjacent to ___2’, in the domain of slivers of colour in this spectrum, this Quantifier sentence appears to be true:

∀𝑥∀𝑦((𝑅𝑥 ∧ 𝐴𝑥𝑦) → 𝑅𝑦).
This is an instance of the principle of Tolerance, because adjacent slivers are extremely
similar. Let the slivers of colour be denoted ‘𝑎1 , …, 𝑎1000 ’, with ‘𝑎1 ’ the leftmost sliver.
Then the above tolerance principle, together with, ‘𝑅𝑎1 ’, and many premises of the
form ‘𝐴𝑎𝑖 𝑎𝑖+1 ’ will entail ‘∀𝑥𝑅𝑥 ’, given that there is a case of red, and we have enough
adjacent cases: as we do in the spectrum. The formal proof is long, but basically: if
there is a case of red, and a case of non‐red, then Tolerance tells us they cannot be
linked by a sequence of adjacent cases. But in the spectrum any two slivers can be
linked by a sequence of adjacent cases.
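The long formal proof has a compact computational analogue: repeatedly applying Tolerance along the chain of adjacent slivers. A sketch, following the 1000‐sliver setup in the text (the encoding is ours):

```python
red = {1}  # premise 'Ra1': the leftmost sliver is red

# Tolerance: if sliver i is red and adjacent to sliver i+1, sliver i+1 is red.
for i in range(1, 1000):
    if i in red:
        red.add(i + 1)

# After 999 applications, every sliver counts as red: the analogue of '∀xRx'.
assert red == set(range(1, 1001))
```

Each pass through the loop is one application of the tolerance principle together with one adjacency premise ‘𝐴𝑎ᵢ𝑎ᵢ₊₁’, which is why the conclusion spreads all the way to the plainly non‐red end of the spectrum.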
There are a number of responses, some of which we’ve mentioned already. The degree‐theory solution says that each adjacent sliver to the right is red to a slightly lesser degree than the sliver to its left: it is true to a lower degree that it is red. Such
views abandon tolerance for vague predicates in favour of a principle of ‘closeness’:
that extremely similar things are both 𝐹 to an extremely similar degree.11 There are
also solutions that appeal to truth‐value ‘gaps’, resembling many‐valued logics in some
ways – most prominent among these is the approach known as SUPERVALUATIONISM.12
Finally, there is a purely classical logic solution, Williamson’s theory of EPISTEMICISM,
which says that there are sharp cutoffs (so that the tolerance premise is false), but
explains the appearance that there are no sharp cutoffs as a byproduct of our essential
inability to know where the sharp boundaries lie. Logic won’t decide between these approaches. But it does help us to formulate them precisely, and to classify their strengths and weaknesses. This is an advantage in most areas of controversy and debate.
11 See N J J Smith (2008) Vagueness and Degrees of Truth, Oxford University Press.
12 See Kit Fine (1975) ‘Vagueness, truth and logic’, Synthese 54: 235–59, https://doi.org/10.1007/
bf00485047.
Appendices
Appendix A
Alternative Terminology and
Notation
Alternative terminology
Sentential logic The study of Sentential goes by other names. The name we have
given it – sentential logic – derives from the fact that it deals with whole sentences as
its most basic building blocks. Other features motivate different names. Sometimes
it is called truth‐functional logic, because it deals only with assignments of truth and
falsity to sentences, and its connectives are all truth‐functional. Sometimes it is called
propositional logic, which strikes me as a misleading choice. This may sometimes be
innocent, as some people use ‘proposition’ to mean ‘sentence’. However, noting that
different sentences can mean the same thing, many people use the term ‘proposition’
as a useful way of referring to the meaning of a sentence, what that sentence expresses.
In this sense of ‘proposition’, ‘propositional logic’ is not a good name for the study of
Sentential, since synonymous but not logically equivalent sentences like ‘Vixens are
bold’ and ‘Female foxes are bold’ will be logically distinguished even though they ex‐
press the same proposition.
Quantifier logic The study of Quantifier goes by other names. Sometimes it is called
predicate logic, because it allows us to apply predicates to objects. Sometimes it is
called first‐order logic, because it makes use only of quantifiers over objects, and vari‐
ables that can be substituted for constants. This is to be distinguished from higher‐
order logic, which introduces quantification over properties, and variables that can be
substituted for predicates. (This device would allow us to formalise such sentences as
‘Jane and Kane are alike in every respect’, treating the italicised phrase as a quantifier
over ‘respects’, i.e., properties. This results in something like ∀𝑃(𝑃𝑗 ↔ 𝑃𝑘), which is
not a sentence of Quantifier.)
Atomic sentences Some texts call atomic sentences sentence letters. Many texts use
lower‐case roman letters, and subscripts, to symbolise atomic sentences.
Formulas Some texts call formulas well‐formed formulas. Since ‘well‐formed for‐
mula’ is such a long and cumbersome phrase, they then abbreviate this as wff. This
is both barbarous and unnecessary (such texts do not make any important use of the
contrasting class of ‘ill‐formed formulas’). I have stuck with ‘formula’.
In §6, I defined sentences of Sentential. These are also sometimes called ‘formulas’
(or ‘well‐formed formulas’) since in Sentential, unlike Quantifier, there is no distinction
between a formula and a sentence.
Valuations Some texts call valuations truth‐assignments; others call them struc‐
tures.
Names In Quantifier, I have used ‘𝑎’, ‘𝑏’, ‘𝑐 ’, for names. Some texts call these ‘constants’,
because they have a constant referent in a given interpretation, as opposed to variables
which have variable referents. Other texts do not mark any difference between names
and variables in the syntax. Those texts focus simply on whether the symbol occurs
bound or unbound.
Interpretations Some texts call interpretations models; others call them structures.
Alternative notation
In the history of formal logic, different symbols have been used at different times and
by different authors. Often, authors were forced to use notation that their printers
could typeset.
This appendix presents some common symbols, so that you can recognise them if you
encounter them in an article or in another book. Unless you are reading a research
article in philosophical or mathematical logic, these symbols are merely different nota‐
tions for the very same underlying things. So the truth‐functional connective we refer
to with ‘∧’ is the very same one that another textbook might refer to with ‘&’. Com‐
pare: the number six can be referred to by the numeral ‘6’, the Roman numeral ‘VI’, the
English word ‘six’, the German word ‘sechs’, the kanji character ‘六’, etc.
366 APPENDICES
Negation Two commonly used symbols are the not sign, ‘¬’, and the tilde operator,
‘∼’. In some more advanced formal systems it is necessary to distinguish between two
kinds of negation; the distinction is sometimes represented by using both ‘¬’ and ‘∼’.
Some texts use an overline to indicate negation, writing a line over a sentence where we
would prefix ‘¬’, so that 𝒜 with an overline expresses the same thing as ‘¬𝒜 ’. This is clear
enough if 𝒜 is an atomic sentence, but quickly becomes cumbersome if we attempt to nest
negations: ‘¬(𝐴 ∧ ¬(¬¬𝐵 ∧ 𝐶))’ becomes the unwieldy
$\overline{A \wedge \overline{\overline{\overline{B}} \wedge C}}$.
Disjunction The symbol ‘∨’ is typically used to symbolise inclusive disjunction. One
etymology is from the Latin word ‘vel’, meaning ‘or’.
Conjunction Conjunction is often symbolised with the ampersand, ‘&’. The ampersand
is a decorative form of the Latin word ‘et’, which means ‘and’. (Its etymology
still lingers in certain fonts, particularly in italic fonts; thus an italic ampersand might
appear as ‘& ’.) Using this symbol is not recommended, since it is commonly used in
natural English writing (e.g., ‘Smith & Sons’). As a symbol in a formal system, the am‐
persand is not the English word ‘&’, so it is much neater to use a completely different
symbol. The most common choice now is ‘∧’, which is a counterpart to the symbol used
for disjunction. Sometimes a single dot, ‘•’, is used (you may have seen this in Argument
and Critical Thinking). In some older texts, there is no symbol for conjunction at all;
‘𝐴 and 𝐵’ is simply written ‘𝐴𝐵’. These are often texts that use the overlining notation
for negation. Such texts often involve languages in which conjunction and negation
are the only connectives, and they typically also dispense with parentheses which are
unnecessary in such austere languages, because negation scope is indicated directly:
‘¬(𝐴 ∧ 𝐵)’ can be distinguished from ‘(¬𝐴 ∧ 𝐵)’ easily, as $\overline{AB}$ vs. $\overline{A}B$.
Material conditional There are two common symbols for the material conditional:
the arrow, ‘→’, and the hook, ‘⊃’. Rarely you might see ‘⇒’.
Material biconditional The double‐headed arrow, ‘↔’, is used in systems that use
the arrow to represent the material conditional. Systems that use the hook for the
conditional typically use the triple bar, ‘≡’, for the biconditional.
Quantifiers The universal quantifier is typically symbolised ‘∀’ (a rotated ‘A’), and
the existential quantifier as ‘∃’ (a rotated ‘E’). In some texts, there is no separate symbol
for the universal quantifier. Instead, the variable is just written in parentheses in front
of the formula that it binds. For example, they might write ‘(𝑥)𝑃𝑥 ’ where we would
write ‘∀𝑥𝑃𝑥 ’.
The common alternative notations are summarised below:
negation ¬, ∼, $\overline{\mathcal{A}}$
conjunction ∧, &, •
disjunction ∨
conditional →, ⊃, ⇒
biconditional ↔, ≡
universal quantifier ∀𝑥 , (𝑥)
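Translating between these notations is purely mechanical, symbol for symbol. As an illustration (a Python sketch based on the table above; the ‘(𝑥)’-style universal quantifier is left out, since recognising it needs more than single-character replacement):

```python
# Symbol-for-symbol map from common alternative notations to this book's.
ALTERNATIVES = {
    "∼": "¬",  # tilde negation
    "&": "∧",  # ampersand conjunction
    "•": "∧",  # dot conjunction
    "⊃": "→",  # hook conditional
    "⇒": "→",  # double-arrow conditional
    "≡": "↔",  # triple-bar biconditional
}

def standardise(formula: str) -> str:
    """Rewrite a formula using the connective symbols adopted in this book."""
    return "".join(ALTERNATIVES.get(ch, ch) for ch in formula)

print(standardise("∼(P & Q) ⊃ (P ≡ R)"))  # ¬(P ∧ Q) → (P ↔ R)
```

Because each alternative symbol names the very same connective, nothing about the formula's meaning changes under this rewriting.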
1 In what follows I draw on Peter Simons (2017) ‘Łukasiewicz’s Parenthesis‐Free or Polish Notation’, in
Edward N. Zalta, ed., The Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/archives/spr2017/entries/lukasiewicz/polish-notation.html.
2. If 𝒜 and ℬ are sentences, then so are ‘𝑁𝒜 ’, ‘𝐾𝒜ℬ’, ‘𝐴𝒜ℬ’, ‘𝐶𝒜ℬ’ and ‘𝐸𝒜ℬ’;
It is easy to see that one can type sentences in Polish notation without the use of any
special symbols on a standard typewriter keyboard.
The notation doesn’t require parentheses. The ambiguous string ‘𝑃 → 𝑄 → 𝑅’ may
correspond to either of these two Sentential sentences: (i) ‘(𝑃 → (𝑄 → 𝑅))’ or (ii) ‘((𝑃 →
𝑄) → 𝑅)’. These are symbolised in Polish notation as, respectively, (i′) ‘𝐶𝑝𝐶𝑞𝑟’ and (ii′)
‘𝐶𝐶𝑝𝑞𝑟’. Why?
› You know, from the recursive definition, that any logically complex sentence is
formed by taking a connective and placing one or two sentences after it. So the
main connective is always the leftmost character.
› You then need only to identify a sentence that follows it. So in (i′), the main
connective is ‘𝐶 ’, which is followed by the atomic sentence ‘𝑝’ and the complex
sentence ‘𝐶𝑞𝑟’; in (ii′), with the same main connective, the sentences which fol‐
low are ‘𝐶𝑝𝑞’ and ‘𝑟’.
Compare, for example, this Sentential sentence and its Polish symbolisation:

(((𝑃 ∧ 𝑄) → 𝑅) → (𝑃 ∧ (¬𝑄 ∨ 𝑅)));
𝐶𝐶𝐾𝑝𝑞𝑟𝐾𝑝𝐴𝑁𝑞𝑟.

The Sentential version has 22 characters, 10 of them parentheses; the Polish version just 12.
The notation never really caught on, partly because (as in the example above) it is not
always immediately apparent to the naked eye where one constituent sentence begins
and another ends. But the main obstacle to its wider use was the lack of any easy way to
indicate the scope of a quantifier. Thus the notation has become something of a historical curiosity.
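The two-step parsing procedure just described can be made fully explicit. The following sketch (illustrative Python; the connective letters and their arities are Łukasiewicz's, as introduced above) recovers the parse tree of a Polish-notation sentence by always reading the main connective first:

```python
# Arity of each Polish-notation connective: N is unary; K, A, C, E are binary.
ARITY = {"N": 1, "K": 2, "A": 2, "C": 2, "E": 2}

def parse(s, i=0):
    """Parse one sentence starting at position i; return (tree, next position).

    Atomic sentences are single lower-case letters; a complex sentence is a
    connective followed by the right number of sentences, exactly as in the
    recursive definition above.
    """
    ch = s[i]
    if ch in ARITY:
        subtrees, j = [], i + 1
        for _ in range(ARITY[ch]):
            tree, j = parse(s, j)
            subtrees.append(tree)
        return (ch, *subtrees), j
    if ch.islower():
        return ch, i + 1
    raise ValueError(f"unexpected character {ch!r} at position {i}")

# 'CpCqr' and 'CCpqr' disambiguate what 'P → Q → R' leaves ambiguous:
print(parse("CpCqr"))  # (('C', 'p', ('C', 'q', 'r')), 5)
print(parse("CCpqr"))  # (('C', ('C', 'p', 'q'), 'r'), 5)
```

Notice that the parser never backtracks: since the main connective is always the leftmost character, each sentence has exactly one reading.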
Appendix B
Quick Reference
Symbolisation
Sentential connectives
It is not the case that P ¬𝑃
Either P or Q (𝑃 ∨ 𝑄)
Neither P nor Q ¬(𝑃 ∨ 𝑄) or (¬𝑃 ∧ ¬𝑄)
Both P and Q (𝑃 ∧ 𝑄)
If P then Q (𝑃 → 𝑄)
P only if Q (𝑃 → 𝑄)
P if and only if Q (𝑃 ↔ 𝑄)
P unless Q (𝑃 ∨ 𝑄)
Predicates
All Fs are Gs ∀𝑥(𝐹𝑥 → 𝐺𝑥)
Some Fs are Gs ∃𝑥(𝐹𝑥 ∧ 𝐺𝑥)
Not all Fs are Gs ¬∀𝑥(𝐹𝑥 → 𝐺𝑥) or ∃𝑥(𝐹𝑥 ∧ ¬𝐺𝑥)
No Fs are Gs ∀𝑥(𝐹𝑥 → ¬𝐺𝑥) or ¬∃𝑥(𝐹𝑥 ∧ 𝐺𝑥)
Identity
Only c is G ∀𝑥(𝐺𝑥 ↔ 𝑥 = 𝑐)
Everything besides c is G ∀𝑥(¬𝑥 = 𝑐 → 𝐺𝑥)
The F is G ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦) ∧ 𝐺𝑥)
It is not the case that the F is G ¬∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦) ∧ 𝐺𝑥)
The F is nonG ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦) ∧ ¬𝐺𝑥)
There are at least 𝑛 Fs:
one: ∃𝑥𝐹𝑥
two: ∃𝑥1 ∃𝑥2 (𝐹𝑥1 ∧ 𝐹𝑥2 ∧ ¬𝑥1 = 𝑥2 )
three: ∃𝑥1 ∃𝑥2 ∃𝑥3 (𝐹𝑥1 ∧ 𝐹𝑥2 ∧ 𝐹𝑥3 ∧ ¬𝑥1 = 𝑥2 ∧ ¬𝑥1 = 𝑥3 ∧ ¬𝑥2 = 𝑥3 )
𝑛: ∃𝑥1 …∃𝑥𝑛 (𝐹𝑥1 ∧ … ∧ 𝐹𝑥𝑛 ∧ ¬𝑥1 = 𝑥2 ∧ … ∧ ¬𝑥𝑛−1 = 𝑥𝑛 )
One way to say ‘there are at most 𝑛 Fs’ is to put a negation sign in front of the symbolisation
for ‘there are at least 𝑛 + 1 Fs’. Equivalently, we can offer:
one: ∀𝑥1 ∀𝑥2 ((𝐹𝑥1 ∧ 𝐹𝑥2 ) → 𝑥1 = 𝑥2 )
two: ∀𝑥1 ∀𝑥2 ∀𝑥3 ((𝐹𝑥1 ∧ 𝐹𝑥2 ∧ 𝐹𝑥3 ) → (𝑥1 = 𝑥2 ∨ 𝑥1 = 𝑥3 ∨ 𝑥2 = 𝑥3 ))
𝑛: ∀𝑥1 …∀𝑥𝑛+1 ((𝐹𝑥1 ∧ … ∧ 𝐹𝑥𝑛+1 ) → (𝑥1 = 𝑥2 ∨ … ∨ 𝑥𝑛 = 𝑥𝑛+1 ))
One way to say ‘there are exactly 𝑛 Fs’ is to conjoin two of the symbolisations above
and say ‘there are at least 𝑛 Fs and there are at most 𝑛 Fs.’ The following equivalent
formulae are shorter:
zero: ∀𝑥¬𝐹𝑥
one: ∃𝑥(𝐹𝑥 ∧ ∀𝑦(𝐹𝑦 → 𝑥 = 𝑦))
two: ∃𝑥1 ∃𝑥2 (𝐹𝑥1 ∧ 𝐹𝑥2 ∧ ¬𝑥1 = 𝑥2 ∧ ∀𝑦(𝐹𝑦 → (𝑦 = 𝑥1 ∨ 𝑦 = 𝑥2 )))
three: ∃𝑥1 ∃𝑥2 ∃𝑥3 (𝐹𝑥1 ∧ 𝐹𝑥2 ∧ 𝐹𝑥3 ∧ ¬𝑥1 = 𝑥2 ∧ ¬𝑥1 = 𝑥3 ∧ ¬𝑥2 = 𝑥3 ∧
∀𝑦(𝐹𝑦 → (𝑦 = 𝑥1 ∨ 𝑦 = 𝑥2 ∨ 𝑦 = 𝑥3 )))
𝑛: ∃𝑥1 …∃𝑥𝑛 (𝐹𝑥1 ∧ … ∧ 𝐹𝑥𝑛 ∧ ¬𝑥1 = 𝑥2 ∧ … ∧ ¬𝑥𝑛−1 = 𝑥𝑛 ∧
∀𝑦(𝐹𝑦 → (𝑦 = 𝑥1 ∨ … ∨ 𝑦 = 𝑥𝑛 )))
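Formulas of this shape can be generated mechanically. The sketch below (illustrative Python, with subscripts written inline as x1, x2, …; the function name at_least is my own) builds ‘there are at least 𝑛 Fs’, spelling out every pairwise distinctness conjunct that the ‘…’ abbreviates:

```python
from itertools import combinations

def at_least(n: int) -> str:
    """Build a sentence saying 'there are at least n Fs' (n >= 1)."""
    variables = [f"x{i}" for i in range(1, n + 1)]
    # One conjunct saying each thing is F ...
    conjuncts = [f"F{v}" for v in variables]
    # ... plus one distinctness conjunct for every pair of variables.
    conjuncts += [f"¬{u} = {v}" for u, v in combinations(variables, 2)]
    prefix = "".join(f"∃{v}" for v in variables)
    return f"{prefix}({' ∧ '.join(conjuncts)})"

print(at_least(2))  # ∃x1∃x2(Fx1 ∧ Fx2 ∧ ¬x1 = x2)
print(at_least(3))  # ∃x1∃x2∃x3(Fx1 ∧ Fx2 ∧ Fx3 ∧ ¬x1 = x2 ∧ ¬x1 = x3 ∧ ¬x2 = x3)
```

The number of distinctness conjuncts grows quadratically in 𝑛, which is why the displayed formulas abbreviate with ‘…’ past the three-F case.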
Natural deduction rules
Sentential rules
Conjunction Introduction (∧I): given 𝒜 (line 𝑚) and ℬ (line 𝑛), conclude 𝒜 ∧ ℬ, citing ∧I 𝑚, 𝑛.
Conjunction Elimination (∧E), p. 248: given 𝒜 ∧ ℬ (line 𝑚), conclude 𝒜, citing ∧E 𝑚; likewise, conclude ℬ, citing ∧E 𝑚.
Disjunction Introduction (∨I): given 𝒜 (line 𝑚), conclude 𝒜 ∨ ℬ, or equally ℬ ∨ 𝒜, citing ∨I 𝑚.
Disjunction Elimination (∨E), p. 272: given 𝒜 ∨ ℬ (line 𝑚), a subproof 𝑖–𝑗 in which 𝒜 is assumed and 𝒞 derived, and a subproof 𝑘–𝑙 in which ℬ is assumed and 𝒞 derived, conclude 𝒞, citing ∨E 𝑚, 𝑖–𝑗, 𝑘–𝑙.
Conditional Introduction (→I), p. 261: given a subproof 𝑖–𝑗 in which 𝒜 is assumed and ℬ derived, conclude 𝒜 → ℬ, citing →I 𝑖–𝑗.
Conditional Elimination (→E): given 𝒜 → ℬ (line 𝑚) and 𝒜 (line 𝑛), conclude ℬ, citing →E 𝑚, 𝑛.
Biconditional Introduction (↔I), p. 270: given a subproof 𝑖–𝑗 in which 𝒜 is assumed and ℬ derived, and a subproof 𝑘–𝑙 in which ℬ is assumed and 𝒜 derived, conclude 𝒜 ↔ ℬ, citing ↔I 𝑖–𝑗, 𝑘–𝑙.
Biconditional Elimination (↔E): given 𝒜 ↔ ℬ (line 𝑚) and 𝒜 (line 𝑛), conclude ℬ; given 𝒜 ↔ ℬ (line 𝑚) and ℬ (line 𝑛), conclude 𝒜; in each case citing ↔E 𝑚, 𝑛.
Negation Introduction (¬I), p. 275: given a subproof in which 𝒜 is assumed (line 𝑖) and both ℬ (line 𝑗) and ¬ℬ (line 𝑘) are derived, conclude ¬𝒜, citing ¬I 𝑖–𝑗, 𝑖–𝑘.
Negation Elimination (¬E): given a subproof in which ¬𝒜 is assumed (line 𝑖) and both ℬ (line 𝑗) and ¬ℬ (line 𝑘) are derived, conclude 𝒜, citing ¬E 𝑖–𝑗, 𝑖–𝑘.
Reiteration (R), p. 256: given 𝒜 (line 𝑚), conclude 𝒜 again, citing R 𝑚.
Tertium Non Datur (TND): given a subproof 𝑖–𝑗 in which 𝒜 is assumed and ℬ derived, and a subproof 𝑘–𝑙 in which ¬𝒜 is assumed and ℬ derived, conclude ℬ, citing TND 𝑖–𝑗, 𝑘–𝑙.
Disjunctive Syllogism (DS): given 𝒜 ∨ ℬ (line 𝑚) and ¬𝒜 (line 𝑛), conclude ℬ; given 𝒜 ∨ ℬ (line 𝑚) and ¬ℬ (line 𝑛), conclude 𝒜; in each case citing DS 𝑚, 𝑛.
Modus Tollens (MT): given 𝒜 → ℬ (line 𝑚) and ¬ℬ (line 𝑛), conclude ¬𝒜, citing MT 𝑚, 𝑛.
De Morgan Rules (DeM): given ¬(𝒜 ∨ ℬ) (line 𝑚), conclude ¬𝒜 ∧ ¬ℬ; given ¬𝒜 ∧ ¬ℬ (line 𝑚), conclude ¬(𝒜 ∨ ℬ); given ¬(𝒜 ∧ ℬ) (line 𝑚), conclude ¬𝒜 ∨ ¬ℬ; given ¬𝒜 ∨ ¬ℬ (line 𝑚), conclude ¬(𝒜 ∧ ℬ); in each case citing DeM 𝑚.
Double Negation Elimination (DNE): given ¬¬𝒜 (line 𝑚), conclude 𝒜, citing DNE 𝑚.
Quantifier rules
Universal Elimination (∀E): given ∀𝓍𝒜 (line 𝑚), conclude 𝒜|𝒸↷𝓍, citing ∀E 𝑚; 𝒸 can be any name.
Universal Introduction (∀I): given 𝒜|𝒸↷𝓍 (line 𝑚), conclude ∀𝓍𝒜, citing ∀I 𝑚; 𝒸 must not occur in any undischarged assumption, or in 𝒜.
Existential Introduction (∃I): given 𝒜|𝒸↷𝓍 (line 𝑚), conclude ∃𝓍𝒜, citing ∃I 𝑚; 𝒸 can be any name.
Existential Elimination (∃E), p. 329: given ∃𝓍𝒜 (line 𝑚) and a subproof 𝑖–𝑗 in which 𝒜|𝒸↷𝓍 is assumed and ℬ derived, conclude ℬ, citing ∃E 𝑚, 𝑖–𝑗; 𝒸 must not occur in any undischarged assumption, in ∃𝓍𝒜, or in ℬ.
Identity rules
Identity Introduction (=I): conclude 𝒸 = 𝒸, citing =I, with no premises.
Identity Elimination (=E): given 𝒶 = 𝒷 (line 𝑚) and 𝒜|𝒶↷𝓍 (line 𝑛), conclude 𝒜|𝒷↷𝓍; given 𝒶 = 𝒷 (line 𝑚) and 𝒜|𝒷↷𝓍 (line 𝑛), conclude 𝒜|𝒶↷𝓍; in each case citing =E 𝑚, 𝑛.
Conversion of Quantifiers (CQ): given ¬∃𝓍𝒜 (line 𝑚), conclude ∀𝓍¬𝒜; given ∃𝓍¬𝒜 (line 𝑚), conclude ¬∀𝓍𝒜; given ¬∀𝓍𝒜 (line 𝑚), conclude ∃𝓍¬𝒜; in each case citing CQ 𝑚.
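The De Morgan and double negation rules can be double-checked semantically: each is safe only if premise and conclusion agree on every line of the truth table. A brute-force check (an illustrative Python sketch, not part of the book's proof system; the lambdas encode the sentences truth-functionally):

```python
from itertools import product

# Each lambda encodes a sentence truth-functionally; a rule pattern is safe
# only if premise and conclusion agree on every valuation of A and B.
rules = [
    ("DeM: ¬(A∨B) / ¬A∧¬B", lambda a, b: not (a or b),  lambda a, b: (not a) and (not b)),
    ("DeM: ¬(A∧B) / ¬A∨¬B", lambda a, b: not (a and b), lambda a, b: (not a) or (not b)),
    ("DNE: ¬¬A / A",        lambda a, b: not (not a),   lambda a, b: a),
]

for name, premise, conclusion in rules:
    valuations = product([True, False], repeat=2)
    equivalent = all(premise(a, b) == conclusion(a, b) for a, b in valuations)
    print(name, equivalent)  # each line prints ... True
```

Exhaustively checking all valuations is exactly the method of complete truth tables from earlier chapters, mechanised.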
Appendix C
Index of defined terms
total function, 74
total order, 196
transitive, 153, 193
True, 18
truth table, 75
truth value, 17
truth‐function, 64
truth‐functional, 65
truth‐functionally complete, 105
two‐place, 137
undischarged, 241
universal elimination, 319
universal quantifier, 117
use, 56
weakening, 298
wide scope, 164
Acknowledgements
Antony Eagle would like to thank P.D. Magnus and Tim Button for their work from
which this text derives, and those acknowledged below who helped them. Thanks also
to Atheer Al‐Khalfa, Caitlin Bettess, Andrew Carter, Keith Dear, Jack Garland, Bowen
Jiang, Millie Lewis, Yaoying Li, Jon Opie, Matt Nestor, Jaime von Schwarzburg, and
Mike Walmer for comments on successive versions of the Adelaide text.
Tim Button would like to thank P.D. Magnus for his extraordinary act of generosity, in
making forall𝓍 available to everyone. Thanks also to Alfredo Manfredini Böhm, Sam
Brain, Felicity Davies, Emily Dyson, Phoebe Hill, Richard Jennings, Justin Niven, and
Igor Stojanovic for noticing errata in earlier versions.
P.D. Magnus would like to thank the people who made this project possible. Notable
among these are Cristyn Magnus, who read many early drafts; Aaron Schiller, who was
an early adopter and provided considerable, helpful feedback; and Bin Kang, Craig
Erb, Nathan Carter, Wes McMichael, Selva Samuel, Dave Krueger, Brandon Lee, and
the students of Introduction to Logic, who detected various errors in previous versions
of the book.
Tim Button is a Lecturer in Philosophy at UCL. His first book, The Limits of Realism,
was published by Oxford University Press in 2013.
www.homepages.ucl.ac.uk/~uctytbu/index.html
P.D. Magnus is a professor at the University at Albany, State University of New York.
His primary research is in the philosophy of science. www.fecundity.com/job/
When you come to any passage you don’t understand, read it again: if you
still don’t understand it, read it again: if you fail, even after three readings,
very likely your brain is getting a little tired. In that case, put the book
away, and take to other occupations, and next day, when you come to it
fresh, you will very likely find that it is quite easy.
The same might be said for this volume, although readers are forgiven if they take a
break for snacks after two readings.