
Inleiding Logica KI1V13001

Lecture Notes

Johannes Korbmacher
Utrecht University

[email protected]

Version 0.10

September 7, 2022
License

This work is open access under the Creative Commons Attribution 4.0 International license (CC BY 4.0). This means that you’re free to:

• Share — copy and distribute the material in any medium or format.

• Adapt — remix, transform, and build upon the material for any purpose, even commercially.

Under the following terms:

• Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

• No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.

You can find out more about the license under:

https://creativecommons.org/licenses/by/4.0/

Under this link a human-readable summary of the license is provided, as well as a link to the official, legally binding version.
Learn more about the Creative Commons project under:

https://creativecommons.org

CC BY 4.0 is approved for free cultural works. Learn more under:

https://freedomdefined.org/Definition
Contents

I Preliminaries

1 Introduction
  1.1 Valid Inferences
  1.2 Propositional and First-Order Logic
  1.3 Classical Logic
  1.4 Decidability
  1.5 Core Ideas
  1.6 Self-Study Questions
  1.7 Exercises
  1.8 Further Readings

2 A Mathematics Primer for Aspiring Logicians
  2.1 Logic and Mathematics
  2.2 Mathemateze
  2.3 Mathodology
  2.4 Friendly Advice
  2.5 Core Ideas
  2.6 Self-Study Questions
  2.7 Exercises
  2.8 Further Readings

3 Elementary Set Theory
  3.1 Sets and Set Notation
  3.2 The Subset Relation
  3.3 The Axiom of Extensionality
  3.4 Operations on Sets: Union, Intersection, Difference
  3.5 Ordered Pairs and Cartesian Products
  3.6 Properties, Relations, and Functions
  3.7 Inductive Definitions and Proof by Induction
  3.8 Core Ideas
  3.9 Self-Study Questions
  3.10 Exercises
  3.11 Further Readings

II Propositional Logic

4 Syntax of Propositional Logic
  4.1 Propositional Languages
  4.2 Proof by Induction on Formulas
  4.3 Unique Readability and Parsing Trees
  4.4 Function Recursion on Propositional Languages
  4.5 Some Useful Notational Conventions
  4.6 Core Ideas
  4.7 Self-Study Questions
  4.8 Exercises
  4.9 Further Readings

5 Semantics for Propositional Logic
  5.1 Valuations and Truth
  5.2 Consequence and the Deduction Theorem
  5.3 Truth Tables and Decidability
  5.4 Core Ideas
  5.5 Self-Study Questions
  5.6 Exercises
  5.7 Further Readings

6 Tableaux for Propositional Logic
  6.1 Proof Systems
  6.2 Satisfiability and Consequence
  6.3 Analytic Tableaux
  6.4 Core Ideas
  6.5 Self-Study Questions
  6.6 Exercises
  6.7 Further Readings

7 Soundness and Completeness
  7.1 Soundness and Completeness
  7.2 The Soundness Theorem
  7.3 The Completeness Theorem
  7.4 Infinite Premiss Sets and Compactness
  7.5 Core Ideas
  7.6 Self-Study Questions
  7.7 Exercises

III First-Order Logic

8 Syntax of First-Order Logic
  8.1 First-Order Languages
  8.2 Terms and Formulas
  8.3 Parsing Trees and Occurrences
  8.4 Free and Bound Variables
  8.5 Substitution
  8.6 Formalization
  8.7 Core Ideas
  8.8 Self-Study Questions
  8.9 Exercises

9 Semantics for First-Order Logic
  9.1 Truth, Models, and Assignments
  9.2 Models and Assignments
  9.3 Truth in a Model
  9.4 Consequence and Validity
  9.5 Core Ideas
  9.6 Self-Study Questions
  9.7 Exercises

10 Tableaux for First-Order Logic
  10.1 Overview
  10.2 Simple Tableaux
  10.3 Tableaux With Identity
  10.4 Tableaux With Functions
  10.5 Infinite Tableaux and Decidability
  10.6 Core Ideas
  10.7 Self-Study Questions
  10.8 Exercises

11 Soundness and Completeness
  11.1 Overview
  11.2 The Denotation and Locality Lemma
  11.3 The Soundness Theorem
  11.4 The Completeness Theorem
  11.5 Core Ideas
  11.6 Self-Study Questions
  11.7 Exercises
    11.7.1 Part A — Doing
    11.7.2 Part B — Proving

IV Conclusion

V Solutions to Selected Exercises

A Chapter 1. Introduction
B Chapter 2. A Math Primer for Aspiring Logicians
C Chapter 3. Elementary Set Theory
D Chapter 4. Syntax of Propositional Logic
E Chapter 5. Semantics for Propositional Logic
F Chapter 6. Tableaux for Propositional Logic
G Chapter 7. Soundness and Completeness
H Chapter 8. Syntax for First-Order Logic
I Chapter 9. Semantics for First-Order Logic
J Chapter 10. Tableaux for First-Order Logic
K Chapter 11. Soundness and Completeness
M List of Symbols


Part I

Preliminaries

Chapter 1

Introduction

The entire manuscript is typeset with numbered items, consecutively numbered per section and subsection. The main reason for this is to facilitate reference in the slides, homework, and discussion.

1.1 Valid Inferences


1.1.1 Logic is concerned with reasoning, more specifically valid reasoning.
Here are some pieces of reasoning or, as logicians call them, inferences:

(1) The letter is either in the left drawer or in the right drawer, and
it’s not in the left drawer. So, the letter is in the right drawer.
(2) If the ball is scarlet, then it’s red, and the ball is red. So, the ball
is scarlet.

In an inference, the statements that come before the “so” are called the
premises, and the statement that comes after is called the conclusion.
Inference (1) is clearly a pretty solid piece of reasoning. If you know
for sure that the letter is either in the left or in the right drawer, but
you can exclude that it’s in the left drawer, then the letter must be
in the right drawer. Inference (2), in contrast, is pretty bad. Sure, if
the ball is scarlet, then it’s red. That’s a conceptual truth. And let’s
grant for the sake of argument that the ball is red. But that doesn’t
mean that the ball has to be scarlet. There are many other shades of
red: crimson, burgundy, maroon, . . . . In logician’s terminology, (1) is
a valid inference, while (2) is invalid. The aim of logic is to develop a
theory of valid inference.1
¹ The notion of validity at play here, the one that mathematical logic courses typically focus on, is a very strong notion of validity: it requires that the truth of the premises necessitates the truth of the conclusion. The notion is also known as deductive validity in the literature. In this course, we’ll be exclusively concerned with deductive validity and I shall, correspondingly, omit the qualifying adjective. Other, weaker notions of validity include, for example, what’s known in the literature as inductive validity, where it’s only required that the premises make the conclusion more likely. Inductive validity is of special importance, for example, in scientific reasoning and it’s mainly dealt with in courses on probability theory and statistics.

1.1.2 In modern logic, accounts of validity are typically formulated in a formal, mathematical setting. In a first step towards the formalization of logic, we abstract away from the logically irrelevant aspects of ordinary language to obtain the notion of a formal language: an artificial language whose grammar is given by precise mathematical rules. The main benefit of the mathematization of ordinary language is that it makes the notion of validity amenable to mathematical analysis. A very welcome side-effect is that it allows us to implement logical reasoning in computer programs. The sub-discipline of logic that deals with the definition of formal languages is called syntax. Whenever we develop a logic, we first have to deal with syntax.

1.1.3 When we’re developing a formal language, we have to pay particularly close attention to the relationship between the symbols of our formal language and the ordinary language expressions they are supposed to formalize. The process of translating from natural language (English, Dutch, . . . ) into a formal language is called formalization. Being able to adequately formalize natural language expressions is an important skill you will learn in this course. As you will see, formalization is not an automatic process: it requires finesse and attention to linguistic subtlety. But most of all, it requires a solid understanding of how formal languages work.

1.1.4 The notion of validity is standardly defined for inferences formulated in a formal language. This language is then called the object language, the language that we’re reasoning about. The definition of validity itself is, of course, also formulated in a language, in our case, mathematical English or, as it’s sometimes called, mathemateze. This language is our meta-language, the language that we’re reasoning in.

1.1.5 In modern logic, validity is typically understood in terms of truth-preservation from premises to conclusion. The idea is that an inference is valid iff (i.e. ‘if and only if’²) in every possible situation where the premises are true, the conclusion is true as well. To illustrate, consider inference (1) from 1.1.1. Suppose that in some situation, the premises are true, i.e. the letter is in the left drawer or in the right drawer and it’s not in the left drawer. Can it be the case that in such a situation, the conclusion is not true, i.e. the letter is also not in the right drawer? Clearly not, for this would lead to a contradiction: we’d have that the letter is either in the left or the right drawer and, at the same time, it’s also neither in the left nor in the right drawer. And surely, we can’t have a contradiction in a possible situation, so the letter must be in the right drawer, i.e. the conclusion must be true. Since we were reasoning about an arbitrary possible situation, we can conclude that in every possible situation in which the premises are true, the conclusion is true as well. So, the argument is valid.

² The expression ‘if and only if’, or ‘iff’ for short, is used in mathemateze to express that two things are equivalent or, for all practical purposes, the same. We’ll discuss this and other standard expressions of mathemateze in more detail in the following chapter.

1.1.6 On the standard account, we correspondingly get that an argument is invalid iff there exists a possible situation in which the premises are true and the conclusion is false. We can use this criterion to see why inference (2) from 1.1.1 is invalid. The premise that if the ball is scarlet, then it is red will be true in any reasonable possible situation. But the ball could surely be maroon, so take a possible situation in which it is. In such a situation, the ball will be red but not scarlet, so the premises are true and the conclusion is false. The argument is invalid.
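The truth-preservation criterion from 1.1.5 and 1.1.6 can be checked mechanically for simple inferences. The following Python sketch is my own illustration, not part of the formal development of the course: the function name `valid`, the encoding of a "situation" as a dictionary of truth values, and the lambda encoding of statements are all assumptions made for the example. It brute-forces every possible situation and looks for a counterexample:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """An inference is valid iff in every situation (assignment of True/False
    to the atomic statements) where all premises are true, the conclusion is
    true as well."""
    for values in product([True, False], repeat=len(atoms)):
        situation = dict(zip(atoms, values))
        if all(p(situation) for p in premises) and not conclusion(situation):
            return False  # counterexample: premises true, conclusion false
    return True

# Inference (1): "left or right", "not left"; so, "right".
print(valid([lambda s: s["left"] or s["right"], lambda s: not s["left"]],
            lambda s: s["right"], ["left", "right"]))        # True: valid

# Inference (2): "if scarlet then red", "red"; so, "scarlet".
print(valid([lambda s: (not s["scarlet"]) or s["red"], lambda s: s["red"]],
            lambda s: s["scarlet"], ["scarlet", "red"]))     # False: invalid
```

The counterexample the check finds for inference (2) is exactly the maroon ball of 1.1.6: a situation with `red = True` and `scarlet = False`.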

1.1.7 Note that an inference can be valid even if the conclusion is actually
false! This can happen (only) if at least one of its premises is false,
too. Take the following inference as an example: if Bremen is part
of the Netherlands, then Johannes is Dutch, and Bremen is part of
the Netherlands; so, Johannes is Dutch. It’s easy to convince yourself
that the inference is valid but the conclusion is false (I’m German,
from around Bremen). But that’s OK, since all we need is that in every
possible situation in which the premises are true, the conclusion is
true—and that’s the case. It’s just that in the real world, which is a
very possible situation, one of the premises is false: Bremen is not part
of the Netherlands. The account of validity as truth-preservation only
talks about the truth of the conclusion in situations where the premises
are true, it remains silent about what happens if the premises are false.
Validity is, in a sense, a hypothetical concept. There is a stronger
concept of correct reasoning which demands that the premises be true
as well: an inference is said to be sound iff it is valid and the premises
are actually true. With a sound inference it’s certainly impossible that
the conclusion is false. The reason why we primarily study validity
and not soundness is that it allows us to focus on the logical aspects
of reasoning, leaving facts out of the picture—those are for scientists
to figure out.

1.1.8 The informal idea of validity as truth preservation is made mathematically precise in the logical discipline of semantics. The fundamental
concept of semantics is that of a model for a formal language. A model
is essentially the formal counterpart to a possible situation: it’s a well-
defined mathematical object which decides for each statement in the
language whether the statement is true or false. Once we’ve syntac-
tically defined our formal language, we need to say what a model for
that language is and what it means for a formal statement to be true
in a model. For this purpose, we’ll essentially make use of the famous
definition of truth due to the Polish logician Alfred Tarski. Once we’ve
got this settled, we get our desired account of validity as truth preser-
vation across all models, now understood in a precise mathematical
sense. Carrying out the details will be the second step in developing a
logic.

1.1.9 There is a third and final step, which consists in determining a proof
system for the logic. The aim is to formulate inference rules that al-
low us to derive the conclusion from the premises in all (and only)
the valid inferences. These inference rules, however, are supposed to
be purely syntactic in the sense that they only make reference to the
formal symbols and not to the concepts of semantics, like truth in a
model. As you will see, showing that an inference is valid using the
official definition of validity as truth preservation across all models
can be difficult: it typically requires creative thinking and it’s not at
all obvious how to proceed. The point of a proof system is to make
establishing validity more tractable, especially for an artificial intelli-
gence. From a more philosophical vantage point, the idea of a proof
system is to formally model the kind of step-by-step reasoning that we
typically do in logical situations. The subfield of logic that deals with
proof systems is called proof theory. In this course, we’ll work with the
proof system of semantic tableaux, which was invented by the Dutch
logician Evert Willem Beth.

1.1.10 Once we’ve laid down a set of inference rules, it’s an important math-
ematical fact to establish that using these rules, we can derive the
conclusion from the premises in all and only the valid inferences. So,
we have to show two things: first that we can derive the conclusion
from the premises only if the inference is valid, and second that we
can derive the conclusion from the premises whenever the inference is
valid. The first part is called the soundness theorem. For most logics it’s quite easy to establish soundness. What’s more difficult but
possible to show is the second part, that every valid inference can be
shown to be valid by our purely formal means. This is (Gödel’s) com-
pleteness theorem, named after Austrian logician Kurt Gödel. In this
course, we’ll prove both soundness and completeness for the tableaux
method. This will be our most important mathematical result.

1.2 Propositional and First-Order Logic


1.2.1 In this course, we’ll cover standard propositional and first-order logic.
These are two logical systems with different scopes, though first-order
logic, in a sense, contains propositional logic as a part.

1.2.2 Propositional logic deals with inferences that are valid because of the
meaning of the so-called sentential connectives: “not,” “and,” “or,” “if
. . . , then . . . ,” and so on. The two inferences we discussed in 1.1.1 are
valid/invalid in precisely this sense: (1) is valid because of the meaning
of “not” and ”or” and (2) is invalid because of the meaning of “if . . . ,
then . . . .” In the second part of this course, after having dealt with
some mathematical prolegomena, we’ll develop standard propositional
logic “from scratch:” we’ll cover its syntax, semantics, and proof theory. In a sense, we could save ourselves some work and move immediately to first-order logic, since the syntax, semantics, and proof theory for first-order logic are extensions of those for propositional logic. But for propaedeutic reasons, we’ll cover propositional logic separately. In propositional logic, the definitions of a formal language, a model, truth in a model, and derivability are all straightforward enough so that we
can focus on how they exemplify the underlying ideas sketched in §1.1.

1.2.3 First-order logic deals with all of the inferences of propositional logic
plus inferences involving generality. There are inferences that are easily
seen to be valid but we can’t account for their validity purely in terms
of the behavior of the sentential connectives. Consider, e.g.:

(3) This ball is scarlet and everything that’s scarlet is red. So, this
ball is red.
(4) The letter is in the left drawer. So there is something in the left
drawer.

It’s easily checked that, intuitively, in every situation where the premises
of these arguments are true, the conclusion is too. But the (formal)
language of propositional logic lacks the expressive resources to cap-
ture the meaning of the crucial premise in (3) or the conclusion in
(4). Claims like “everything that’s scarlet is red” or “there is some-
thing in the left drawer” cannot be analyzed purely in terms of “not,”
“and,” “or,” . . . . To deal with such claims, we need quantifiers, which
are linguistic devices, like “for all” and “there exists,” that allow us
to express general claims, i.e. claims that are, in a sense, about all
objects.
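To get a first feel for what quantifiers add, here is a small Python sketch of my own (not the course's official semantics, which comes in Part III): a "situation" is a finite domain of objects with explicitly listed properties, and the quantifiers "for all" and "there exists" behave like Python's `all` and `any` over that domain. The names `domain`, `scarlet`, `red`, and `left_drawer` are illustrative assumptions, and finite domains only approximate the general notion of a model.

```python
# A toy situation: a finite domain of objects and extensions of properties.
domain = ["ball1", "ball2", "letter"]
scarlet = {"ball1"}                  # the objects that are scarlet
red = {"ball1", "ball2"}             # the objects that are red
left_drawer = {"letter"}             # the objects in the left drawer

# Crucial premise of (3), "everything that's scarlet is red":
# a universal quantifier over the whole domain.
premise = all(x in red for x in domain if x in scarlet)

# Conclusion of (4), "there is something in the left drawer":
# an existential quantifier over the whole domain.
conclusion = any(x in left_drawer for x in domain)

print(premise, conclusion)  # True True
```

Note that both claims quantify over every object in the domain; this is exactly the kind of generality that the sentential connectives of propositional logic cannot express.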

1.2.4 Developing a formal language with quantifiers and, in particular, developing an adequate notion of a model for such a language is mathematically significantly more involved than in the simpler case of propositional logic (though it is, as I said, really just a generalization). We’ll
tackle this task in the third part of the course. There are a variety of
issues that we’ll have to deal with here, involving, for example, iden-
tity and infinity. But it’ll be worth it. First-order logic provides the
paradigm of logical reasoning in mathematics, computer science, lin-
guistics, and many other disciplines. And at the end of the course,
you’ll have mastered its cornerstones.

1.3 Classical Logic


1.3.1 In this course, we’ll only cover what’s known as classical logic. It’s
actually a bit tricky to say what classical logic precisely is. Perhaps
the most accurate description is that classical logic is standard logic,
it’s the logic characterized by the assumptions, techniques, and results
that are usually made, used, and referenced without further justifica-
tion in the many applications of logic in philosophy, mathematics, lin-
guistics, computer science, and so on. We won’t go through all of them
one-by-one, but throughout the course, I’ll highlight which of our as-
sumptions are assumptions of classical logic. Here, let me just mention
one assumption that will be of fundamental importance throughout
the course.

1.3.2 Note that when we argued that inference (1) from 1.1.1 is valid in 1.1.5,
we assumed that we can’t have contradictions in possible situations.
This is a kind of consistency assumption and it’s characteristic of
classical logic. Note that without the consistency assumption, inference
(1) wouldn’t be valid, since we could find a situation in which the
premises are true but the conclusion is not. This would be a situation
in which the letter is not in the right drawer but it’s both in the left
drawer and not in the left drawer. Since the letter is in the left drawer,
it’s in the left or the right drawer, meaning the first premise is true.
And since the letter is also not in the left drawer, the second premise
is true. But the letter is not in the right drawer, so the conclusion
is false. Hence, if we were to allow for such an impossible situation
(which we’re not!), the inference would be invalid.

1.3.3 An important logical consequence of the consistency assumption is that every inference with inconsistent premises is valid. For suppose
that we have some inconsistent premises, say “the ball is red” and “the
ball is not red,” and any arbitrary conclusion, say “the moon is made
of green cheese.” The inference is valid iff in every possible situation
where the premises are true, the conclusion is true as well. But by
the consistency assumption, there are no possible situations where
the premises are true. Hence, trivially, in every situation where the
premises are true, so is the conclusion.3 To drive the point home, think
of it in another way. Ask yourself: Can the inference be invalid? Well,
there would have to be a possible situation where the premises are true
and the conclusion is not. But that would need to be a situation in
which the ball is red and not red, which is excluded by the consistency
assumption. This means that the argument cannot be invalid, which
is just another way to say that it’s valid. Clearly, the point generalizes
to arbitrary inferences with inconsistent premises. Bottom-line: every
inference with inconsistent premises is valid. This is known as the
principle of ex falso quodlibet and it’s an example of a law of classical
logic.
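The vacuous character of ex falso quodlibet can be seen by running the truth-preservation check on inconsistent premises. Below is a minimal Python sketch of my own (the helper `valid` and the encoding of statements are illustrative assumptions, not the course's formal apparatus):

```python
from itertools import product

def valid(premises, conclusion, atoms):
    # Valid iff no situation makes all premises true and the conclusion false.
    for values in product([True, False], repeat=len(atoms)):
        s = dict(zip(atoms, values))
        if all(p(s) for p in premises) and not conclusion(s):
            return False
    return True

# Inconsistent premises: "the ball is red" and "the ball is not red".
# No situation satisfies both, so the search for a counterexample comes up
# empty and the check passes vacuously, whatever the conclusion
# ("the moon is made of green cheese") says.
print(valid([lambda s: s["red"], lambda s: not s["red"]],
            lambda s: s["cheese"], ["red", "cheese"]))  # True
```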

1.3.4 There are logics which don’t share the consistency assumption, so-called paraconsistent logics. We won’t deal with them in the course,
but as AI students, you should know that they exist. There are a
variety of reasons for why one might be interested in paraconsistent
logics, but a simple example, which is close to home for AI purposes,
involves reasoning with information provided by (possibly) inconsis-
tent databases. We often have to reason with information provided by
databases where we don’t have control over how information is fed
into them, and which therefore might turn out to be inconsistent. Just
think of the internet! This means that if we use classical logic to reason
with the information provided by such an inconsistent database, every
inference which uses all the information in the database as premises
would turn out to be valid, which is clearly undesirable. To obtain
useful information from inconsistent databases using logic, we need
a paraconsistent logic. Classical logic, instead, is more suited to rea-
soning about the real world (rather than about information), where
inconsistencies arguably can’t occur.
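To give a taste of how such a logic works, here is a minimal sketch of one well-known paraconsistent semantics, Priest's Logic of Paradox (LP), which is not covered in this course; the encoding below (value names, the `DESIGNATED` set, the helper functions) is my own illustrative assumption. Statements can take a third value "b" ("both true and false"), modeling exactly the impossible situation from 1.3.2, and an inference is valid iff it preserves being at least true. Under these assumptions, inference (1) from 1.1.1 (disjunctive syllogism) comes out invalid:

```python
from itertools import product

# Truth values: "t" (true only), "b" (both true and false), "f" (false only).
# A statement "holds" in a situation when its value is designated: "t" or "b".
DESIGNATED = {"t", "b"}

def neg(v):
    # Negation swaps "t" and "f"; "b" stays "b" (it is both true and false).
    return {"t": "f", "b": "b", "f": "t"}[v]

def disj(v, w):
    # Disjunction takes the "truer" of the two values: f < b < t.
    order = {"f": 0, "b": 1, "t": 2}
    return v if order[v] >= order[w] else w

def lp_valid(premises, conclusion, atoms):
    # Valid iff every assignment that makes all premises designated
    # also makes the conclusion designated.
    for values in product("tbf", repeat=len(atoms)):
        s = dict(zip(atoms, values))
        if all(p(s) in DESIGNATED for p in premises) and conclusion(s) not in DESIGNATED:
            return False
    return True

# Disjunctive syllogism, the shape of inference (1): P or Q, not P; so, Q.
# Counter-assignment: P = "b" (the letter both is and isn't in the left
# drawer), Q = "f". Both premises are designated but the conclusion is not.
print(lp_valid([lambda s: disj(s["P"], s["Q"]), lambda s: neg(s["P"])],
               lambda s: s["Q"], ["P", "Q"]))  # False
```

Because inconsistent premises can all take the value "b" without the conclusion holding, ex falso quodlibet fails in LP, which is what makes such logics usable with inconsistent databases.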

1.3.5 The consistency assumption has a flip-side, which is equally characteristic of classical logic. It states that in every situation, each statement
is either true or not: the ball is either red or not, the letter is either in
the left drawer or not, . . . . This is a completeness assumption. Contra-
vening it would be a situation in which some fact remains unsettled:
whether the ball is red, where the letter is, or the like. The complete-
ness assumption, too, has logical consequences. In particular, it means
that every inference whose conclusion says that something is the case or not is valid. Take, for example, an inference with the conclusion “the ball is red or not.” For that inference, no matter what the premises are, to be invalid, we’d need a situation in which the premises are true but the conclusion is not. But that would require it to be the case that in that situation the ball is neither red nor not red; the question of its redness would need to be unsettled. Given the completeness assumption, that’s impossible. Hence, the argument can’t be invalid. So it’s valid. This is known as the classical law of verum ex quodlibet.

³ Note that this also means that, trivially, in every situation where the premises are true, the conclusion is false. To see this, ask yourself: can there be a situation where the premises are true and the conclusion is not false? The answer needs to be: no! Why? Because in such a situation, the premises would need to be true, which we’ve already seen is impossible. Hence, there cannot be a counterexample to the claim that in every situation where the premises are true, the conclusion is false—there is no situation in which the premises are true and the conclusion is not false.
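The completeness assumption can likewise be checked mechanically for a conclusion like “the ball is red or not”: the statement is true in every situation, so no counterexample to any inference with it as conclusion can exist. A small Python sketch of my own (the helper name `true_everywhere` is an illustrative assumption):

```python
from itertools import product

def true_everywhere(statement, atoms):
    # A logical truth is true under every assignment of truth values.
    return all(statement(dict(zip(atoms, values)))
               for values in product([True, False], repeat=len(atoms)))

# "The ball is red or not" holds in every situation, so any inference
# with it as conclusion is valid, whatever the premises are.
print(true_everywhere(lambda s: s["red"] or not s["red"], ["red"]))  # True
```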

1.3.6 Statements like “the ball is red or not,” which in classical logic are true
in all situations, are also called logical truths. Conversely, contradic-
tions, like “the ball is both red and not,” are called logical falsehoods.
Logical truths and falsehoods are not very informative considered as
statements about the real world, but they play a central role in clas-
sical logic. We’ll see later that the logical truth of certain statements
coincides with the validity of certain inferences. So, in a sense, we can
focus on the logical truth of statements rather than the validity of
inferences.

1.3.7 Logics without the completeness assumption also exist. They are called
paracomplete logics, and again, we won’t cover them in this course.
They are equally useful in the case of reasoning with databases, albeit
for slightly different reasons. Just like databases can turn out to be in-
consistent, they might be incomplete by failing to provide information
about a certain subject matter, e.g. the redness of the ball. In such
cases, we arguably don’t want to be able to conclude from the database
that the ball is either red or not, since that’s not what the database
says—it remains quiet about this. Refraining from that conclusion is what a paracomplete logic allows us to do.

1.3.8 On our mathematical approach to logic, the completeness and consistency assumptions are implemented via a semantic principle, which governs our definition of truth in a model:

Bivalence. For every model of a formal language, every statement of the language is either true or false in the model and never both.

The principle of bivalence corresponds to the conjunction of the consistency and the completeness assumption, spelled out in our formal
terms. We will always assume bivalence in our semantics, which is one
way in which we’re doing classical logic. This principle will be imple-
mented in different ways for propositional and first-order logic, but
the underlying idea always remains the same.

1.3.9 There are many other characteristics of what's called classical logic. Here are just a few:

• Focusing only on propositional and first-order logic, leaving other connectives like the modal operators "necessarily" or "possibly" out of the picture.

• Focusing on truth preservation as the definition of validity, instead of more proof-focused accounts.

• Dealing with "if . . . , then . . . " statements by means of the so-called material conditional (which we'll discuss extensively in the context of propositional logic).

1.3.10 In this course, you’ll get familiar with classical logic and you’ll get an
“I know it when I see it” kind of acquaintance with the subject.

1.4 Decidability
1.4.1 As a computer science-minded person, you might ask: Is it perhaps
possible to write a computer program, such that if I give it an arbitrary
inference, the program will determine (in a finite amount of time)
whether the argument is valid? Meaning, the program will spit out
“yes” if the inference is valid, and it will spit out “no” if the argument is
invalid. This is (roughly) what logicians call the question of decidability
of validity. And indeed, in propositional logic, it’s possible to write
such a computer program: classical propositional logic is decidable. We
will, indeed, prove this result by describing two decision procedures for
propositional logic, one using models and one using proof systems.
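To get a feel for what such a decision procedure looks like, here is a minimal sketch in Python. It is not part of the course materials: the representation of formulas as Python functions from truth-value assignments to truth values is an assumption of this illustration, chosen to keep the sketch short.

```python
from itertools import product

def is_valid(premises, conclusion, atoms):
    """Brute-force decision procedure for propositional validity:
    check every assignment of truth values to the atomic sentences.
    This always terminates, since there are only 2**len(atoms) cases."""
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        # A counterexample is a case where all premises are true
        # but the conclusion is false.
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False
    return True

# "p and q" entails "p" ...
print(is_valid([lambda v: v["p"] and v["q"]], lambda v: v["p"], ["p", "q"]))  # True
# ... but "p or q" does not entail "p".
print(is_valid([lambda v: v["p"] or v["q"]], lambda v: v["p"], ["p", "q"]))   # False
```

The exponential number of cases is why this method, while a genuine decision procedure, quickly becomes impractical as formulas grow.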

1.4.2 It might be surprising to learn that first-order logic is not decidable. It can be proven with mathematical certainty that there exists no computer program as described above. This was proven by Alonzo Church and Alan Turing in the 1930s. We shall not prove the result in the course, but you'll be able to get an idea of why it holds.

1.4.3 Decidability shouldn't be confused with completeness (see 1.1.9). The point of completeness is that we can derive, by purely formal means, all the valid inferences. The point of decidability is whether this can be automated in such a way that we'll always get a yes-or-no answer after finitely many steps. In a sense, the undecidability of first-order logic means that finding purely formal demonstrations of valid inferences in first-order logic is a hard problem: it cannot be fully automated and so requires intelligence and creativity.

1.4.4 Even though first-order logic is undecidable, we can still write a com-
puter program that attempts to find a derivation of a given conclusion

from some premises. We just have to keep in mind that even if the in-
ference in question is valid, it’s possible that the program doesn’t find
a proof in a reasonable amount of time. A program like this is called a
theorem prover. The method of semantic tableaux we use in this course
is often used in theorem provers (in fact, for propositional logic, it es-
sentially is a theorem prover). We’ll look a bit at the problems we face
when trying to write efficient theorem provers.

1.5 Core Ideas

At the end of every chapter, you will find a summary of the most important concepts and ideas of the chapter. NB: This is not a comprehensive summary of the chapter; it only covers the most important points.

• An inference is valid iff in every situation in which the premises are true, so is the conclusion; an inference is invalid iff there is a situation in which the premises are true but the conclusion is not.

• We model ordinary language by means of formal languages, artificial languages whose grammar is given by precise mathematical rules.

• We model a possible situation by means of a model for a formal language. The statements of the formal language are assigned truth-values relative to a model.

• We model step-by-step reasoning by means of proof systems, formal systems of inference rules. In this course, we use the tableaux method.

• A proof system is sound and complete iff the valid inferences are precisely the ones whose conclusion can be derived from the premises.

• Propositional logic deals with inferences involving the sentential connectives "not," "and," "or," . . . ; first-order logic also allows for the quantifiers "for all" and "there exists."

• We only focus on classical logic, which importantly means that we assume bivalence: in every model, every statement of our formal language is either true or false and never both.

• A logic is decidable iff there is a step-by-step algorithm which after finitely many steps tells us whether a given inference is valid or not. Propositional logic is decidable, but first-order logic is not.

1.6 Self-Study Questions

After the summary of the core ideas, there will always be a set of multiple-choice questions, which test your understanding of the core concepts. You can find the answers on the last page of the chapter. Explanations for why these are the correct answers can be found in the appendix.

1.6.1 Which of the following entails that a given inference is valid:

(a) In every situation, the premises are false.
(b) In some situation, the premises are true and the conclusion is not.
(c) In no situation are the premises true and the conclusion not.
(d) In no situation is the conclusion false.
(e) In every situation, the conclusion is false.
(f) In every situation where the conclusion is false, at least one of the premises is false.

1.6.2 Which of the following entails that the inference is invalid:

(a) There is a situation in which premises and conclusion are all false.
(b) In every situation where the conclusion is false, the premises are
true.
(c) There is a situation in which the premises are true and there is no
situation in which the conclusion is true.
(d) There is a situation in which the premises are true and the con-
clusion is false.
(e) In every situation where the premises are true, the conclusion is
false.
(f) There is no situation in which the premises are true and the conclusion is true as well.

1.7 Exercises
Each chapter contains a set of exercises. Solutions to selected exercises can
be found in the appendix. The exercises marked [h] are homework assign-
ments, which are always due in the workgroup meeting preceding the one in
which they are going to be discussed.

1.7.1 [h] Are the following inferences valid or invalid? Provide an explana-
tion!

(a) Every blue fish is a whale. This is a whale. So, this is a blue fish.

(b) Well, I didn’t not miss the train. So, I missed the train.
(c) If you had checked your email, then you’d have seen my message,
and you didn’t see my message. So you didn’t check your email.
(d) Every rose is red. So there’s at least one red rose.
(e) If you did that, then pigs can fly. So, you didn’t do it.

1.7.2 [h] Give an argument that if an inference is valid, then adding addi-
tional premises doesn’t cancel the validity of the inference.

1.7.3 [h] For every invalid inference, it's possible to add one or more premises to make it valid. Why? (There's actually more than one way of achieving this; find at least two.)

1.8 Further Readings

At the very end of some chapters, I provide a list of further literature. These are not mandatory readings, but they can help you understand the material of the chapter better.

A (very) short, informal introduction to the core ideas of logic that I can
warmly recommend is:

• Priest, Graham. 2017. Logic. A Very Short Introduction. 2nd Edition. Oxford, UK: Oxford University Press.

The first four chapters of that book give you a slightly more detailed, informal overview of the material that we're going to cover. The author, Graham Priest, is a famous non-classical logician; he believes that there are true contradictions. Because of this, his booklet is particularly good at taking non-classical views into account. The later chapters of that book can also be recommended, by way of outlook on the broader field of modern logic.

Self-Study Solutions

1.6.1 (a), (c), (d), (f)
1.6.2 (c), (d)

Explanations in the appendix.
Chapter 2

A Mathematics Primer for Aspiring Logicians

In block 2 of this year, you will take "Wiskunde voor KI," which is a proper mathematics course. That course covers the material of the following two chapters (and much more) in far more detail. The purpose of the present chapters (and the corresponding section of the course) is to bring you up to speed so that we can study logic.

2.1 Logic and Mathematics

2.1.1 As we said in the introduction, modern logic is a highly mathematical discipline. So, in order to develop modern logical theory, we require a certain amount of mathematics. This chapter covers the basics of mathematical language and methodology. The next chapter covers the mathematical theory we need.

2.1.2 As you've probably noticed already, university-level mathematics looks and feels very different from what you did in high school and before. Academic mathematics has its very own language and methodology. We call the former "mathemateze" and the latter "mathodology."

2.1.3 A word on the relationship between logic and mathematics. As you'll see in this chapter, there's a lot of logic in mathodology. In fact, the foundation of modern mathematics is typically taken to be first-order logic, more specifically set theory formulated in first-order logic. Yet, we make use of mathematics to study first-order logic. There is an obvious kind of circularity to this: we use logic to study logic. But we typically think this circularity is harmless. To see why, it's important to appreciate the distinction between object and meta-language. What we're doing is to use mathematics as our meta-language to talk about logic as our object language. This is not much weirder than studying English grammar in English, Dutch grammar in Dutch, and so on. Surely, we can do that. Just think of your elementary school grammar lessons.

2.2 Mathemateze

2.2.1 As you know from high school, mathematical language is full of special symbols, which you will have to be able to read in order to understand what's being said in the first place. For this reason, we'll first cover some notation.

2.2.2 Mathematicians frequently use Greek letters, and you should be familiar with their names/pronunciations. Here are the most commonly used letters and their names; capitals are included in some cases but not in others:

Letter Name
α alpha
β beta
Γ, γ gamma
∆, δ delta
ε epsilon
ζ zeta
η eta
Θ, θ theta
ι iota
κ kappa
Λ, λ lambda
µ mu
ν nu
Ξ, ξ xi
Π, π pi
ρ rho
Σ, σ sigma
τ tau
Φ, φ, ϕ phi
χ chi
Ψ, ψ psi
Ω, ω omega

2.2.3 Think of a typical mathematical claim like (a + b)² = a² + 2ab + b². The symbols a and b here stand for arbitrary numbers; they are variables for numbers. Generally speaking, we use variables to refer to arbitrary but fixed objects of some mathematical category, like numbers. The variables are said to range over the objects of the category. In our example, a and b range over numbers. A variable can assume any value from among the objects it ranges over. For example, we can have that a = 0, a = 1, a = π, and so on. Absent further information, we don't know what the value of a given variable is. So, all we know, for example, if n ranges over the natural numbers is what follows from n being a natural number: that 0 ≤ n, that n ≤ n + 1, and so on.¹ But we don't know, for example, whether n is even, odd, prime, or the like. That n has such properties would need to be inferred from extra information. For example, if it's given that n is a prime number bigger than two, then we can infer that n is odd.

2.2.4 Strictly speaking, you always need to declare your variables: you need to say which kind of object they range over. This is typically done using the word "let." You would, for example, say: let n be a natural number, let f be a function, or the like. But there are many different phrases that can be used to the same effect, for example:

• Let n be a natural number.
• For n a natural number, . . . .
• Consider a natural number n.
• Suppose that n is a natural number.

There is not much more than a notational difference between these.

2.2.5 Always having to declare one's variables quickly gets tedious. This is why we have conventions concerning standard variables for important categories of objects. Some standard variables used in mathematics and their associated categories are:

Object Variable
unspecified x, y, z, . . . (sometimes: a, b, c, . . . )
natural numbers n, m, l, . . .
indices i, j, . . .
sets of indices I, J, . . .
functions f, g, h, . . . (also: λ, σ, τ, . . . )
sets X, Y, Z, . . .
conditions Φ, Ψ, . . .
formulas φ, ψ, θ, . . . (also: A, B, C, . . . )
propositions p, q, r, . . . (also: P, Q, R, . . . )

Note the pattern here. The first variable for a category is typically chosen mnemonically—number, function, index, φormula, . . . —and the following continue in alphabetical (or inverse alphabetical) order. Also, "higher-order" objects, like sets or conditions, typically get capital variables.

¹ There is quite some dispute about whether zero counts as a natural number or not. In the context of this course, unless stated otherwise, we will always assume that it is.

2.2.6 Variables allow us to make general claims about objects of a category, while still making concrete statements.² For example, if we let n and m be natural numbers, the statement n + m = m + n says that for every possible value of n and m, i.e. all natural numbers, adding the one to the other is the same as adding the other to the one. We can make this perfectly explicit by saying: for all natural numbers n and m, we have that n + m = m + n. Without variables, it's impossible to make such a claim in a finite expression; we'd need to repeat our claim for each pair of numbers n and m:

• 0 + 0 = 0 + 0
• 0 + 1 = 1 + 0
• 1 + 0 = 0 + 1
• ...

Clearly, this is not feasible.

2.2.7 Variables also allow us to talk about numbers where we don't know precisely what they are. Take the first prime number bigger than 436 · 10⁹⁹. By Euclid's theorem, we know that this number exists: there are infinitely many prime numbers and there are only finitely many numbers smaller than 436 · 10⁹⁹, so there needs to be a first prime number after 436 · 10⁹⁹. We can refer to this number using a variable by saying: let n be the first prime number bigger than 436 · 10⁹⁹. It's difficult to refer to this number explicitly, or to even determine which number it is: the number is very, very large. All we know is that the number exists. Another way of saying this is: there exists a natural number n such that n is the first prime number after 436 · 10⁹⁹.
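The reasoning in this paragraph can be turned into a small program. This is a sketch, with function names of my own choosing; trial division is of course hopeless for numbers near 436 · 10⁹⁹, even though the prime in question provably exists.

```python
def is_prime(n):
    """A number n > 1 is prime iff no d with 1 < d < n divides it.
    Checking divisors d with d * d <= n suffices: in any factor pair
    k * l == n, one factor is at most the square root of n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def first_prime_after(bound):
    """Euclid's theorem guarantees termination: there is always
    a prime beyond any given bound."""
    n = bound + 1
    while not is_prime(n):
        n += 1
    return n

print(first_prime_after(100))  # → 101
```

For small bounds this runs instantly; the point of the paragraph is that the variable-based way of referring to the number works even when no computation of this kind is feasible.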

2.2.8 Speaking of natural numbers. One important role of natural numbers is in counting. Say you have three apples. Then you can count them: my first apple, my second apple, my third apple. This way of counting is indicated mathematically by using the numbers as subscripts or indices: a₁ is the first apple, a₂ the second apple, and a₃ the third apple. What a mathematician would typically say in such a situation is something like this: suppose we have three apples, a₁, a₂, and a₃.

² It's important not to confuse variables with collections, sets, or the like. A variable always stands for one, and only one, object.
2.2.9 Sometimes, we only know that we have finitely many objects, but not how many precisely. The standard way of expressing this mathematically is to say something of the sort: consider n apples, a₁, . . . , aₙ. Here n is used as a variable for some arbitrary but fixed natural number, just like we discussed above. When we use numbers as indices for objects in this way, we typically use i, j, . . . as variables ranging over these numbers. For example, we would say something like: consider n apples, a₁, . . . , aₙ, and let aᵢ be one of these apples, for 1 ≤ i ≤ n. Here i ranges over the numbers from 1 to n used as indices; it is an index variable. Note that for each i between 1 and n, aᵢ is a variable that ranges over apples.

2.2.10 As we said, a variable always stands for an arbitrary but fixed object of some category. But note that the information which kind of object a variable stands for is only valid in a given context with a preceding variable declaration. For example, if we let n stand for the first prime after 436 · 10⁹⁹, then n refers to this number until the context of our assumption (current proof, sub-argument, etc.) is closed and we move to the next context. Only when we move to a different context—a new proof, for example—can we re-use n as a variable. If we use n again in the same context, it still refers to the first prime after 436 · 10⁹⁹. So, for example, if you were to talk about finitely many apples a₁, . . . , aₙ in the same context where you've earlier assumed that n is the first prime after 436 · 10⁹⁹, then you would in fact be talking about that many apples (rather than some arbitrary finite number as per 2.2.9). So: watch out that you're always clear on your variable declarations and what you can and cannot assume about the values of your variables.

2.2.11 When we want to talk about a distinguished mathematical object, we typically use a constant to refer to it. The numerals 0, 1, 2, . . . , for example, stand for the first, second, and third natural number (and so forth). Note that there are also constants for functions, such as + for addition, · for multiplication, etc. There are also constants for properties and relations, like ≤ for the smaller-than (or equal to) relation. What distinguishes constants from variables is that they always denote the same object in every context. The numeral 0 always denotes the first natural number, + is always addition, and so on. In contrast, n can assume any value from the natural numbers, f can be any function, etc.

2.2.12 Sometimes, we introduce temporary constants for notational convenience. Suppose that we've just established that there exists a natural number n such that n is the first prime number after 436 · 10⁹⁹. This number is not important enough to justify introducing a new constant for it, but it might be useful to free up the variable n again for later use—n is such a convenient variable for natural numbers. So we might call the number n, whose existence we've just established, a and continue to use n as we please. Conceptually, what's happening here is nothing but a new variable declaration, but it's fruitful to think about it as introducing a "temporary name" for an object.

2.2.13 Mathematical writing is often very concise, especially hand-written mathematics. One standard logico-mathematical abbreviation you've already encountered is the phrase "iff," which stands for if and only if. Think, for example, of our definition of validity as truth preservation in the introduction. There we said that an inference is valid iff in every possible situation where the premises are true, the conclusion is true as well. To say that one thing is the case if and only if another thing is the case is to say that the two are equivalent: if the one thing is the case, so is the other, and vice versa.

2.2.14 If two things are equivalent—the one is the case iff the other is—
then the two things can be exchanged for each other in practically
all mathematical contexts. For example, according to our account of
validity given above, we can freely go back and forth between saying
that an inference is valid and saying that in every possible situation
where the premises are true, the conclusion is true as well. The two
phrases (practically) mean the same thing. This is why “iff” is often
used in definitions (see below).

2.2.15 Here are some other abbreviations often found in mathematical writing
together with their associated meaning:

Abbreviation Meaning
i.e. id est, that is
e.g. exempli gratia, for example
viz. videlicet, namely
s.t. such that
w.r.t. with respect to
w.t.s. want to show
q.e.d. quod erat demonstrandum
fr for (especially hand-written)
df. or dfn. definition (especially hand-written)
thm. theorem (especially hand-written)

2.2.16 We now turn from notation to meaning. You might have noticed that mathematical language is very precise, to the extent that it can seem pedantic. When mathematicians use a word, especially a technical concept, they usually mean something very specific by it—one and only one thing. Mathematical language is not as vague and flexible as ordinary language is. In order to properly understand mathemateze, you have to be perfectly clear on the meanings of the terms involved. These meanings are typically given by definitions. The most basic forms of mathematical definitions are definitions of objects and definitions of properties and relations.
2.2.17 A mathematical object is defined by giving a list of properties such that we can show that there is one and only one object that satisfies these properties. For example, we can define the principal square root of 2, typically denoted √2, as the positive real number x such that x · x = 2. Note that in order for such a definition to give us a unique object, we need to show: (i) that there exists an object that satisfies the properties and (ii) that only one object satisfies the properties. For example, we can't define √2 as the natural number n such that n · n = 2—such a natural number doesn't exist. And we can't define √2 as the real number x such that x · x = 2—there is more than one such number, viz. √2 and −√2.
2.2.18 Note that there can be more than one valid definition of a given mathematical object. For example, the number π can be defined using the following integral definition:

π = ∫₋₁¹ 1/√(1 − x²) dx

But it can also be defined as the smallest positive real x which satisfies the equation sin(x) = 0. It's a mathematical fact that these two definitions characterize the same object. In mathematical practice, it's often useful to know alternative definitions of an object.
2.2.19 A mathematical property is defined by giving the precise conditions
under which an object has the property. For example, a natural num-
ber n is said to be prime iff (i) 1 < n and (ii) there are no natural
numbers k, l < n such that n = k · l. Note that the property being de-
fined is typically italicized. This is considered good form in typed-out
mathematics. In hand-written mathematics, you typically underline
the concept being defined.
2.2.20 Related to definitions are the central concepts of necessary and sufficient conditions:

2.2.20.a A condition is said to be necessary for something to obtain just in case if the condition didn't obtain, then the thing wouldn't be the case. For example, being non-negative³ is a necessary condition for being a natural number—if a number is negative, it can't be a natural number. But being non-negative is not a necessary condition for being an integer: the negative whole numbers are all integers but, well, negative.

2.2.20.b A condition is said to be sufficient for something iff the thing is the case whenever the condition obtains. For example, being even is a sufficient condition for being an integer: if something's even, it's an integer. But being even is not a necessary condition for being an integer: of course, there are non-even integers—the odd ones.

To say that a condition is necessary, we use the locution 'only if:' something's a natural number only if it is non-negative. And to say that a condition is sufficient, we use the locution 'if:' if a number is even, then it's an integer. Hence the origin of the phrase 'if and only if.'

2.2.21 The definition of a property always gives us a list of necessary and jointly sufficient conditions for something to have the property. Think of the conditions (i) and (ii) from our definition of being prime. They are both necessary in the sense that an object that lacks one of these two properties is not prime: 1, for example, is not prime since it violates condition (i); 4 isn't prime because it violates condition (ii)—clearly 2 < 4 and 2 · 2 = 4, so just set k = l = 2. At the same time, (i) and (ii) together are sufficient for a number to be prime. For example, to see that 3 is prime, first note that 1 < 3, so condition (i) is satisfied. Second, there are just three numbers smaller than 3, viz. 0, 1, and 2. And 0 · 1 = 0 · 2 = 0, 1 · 1 = 1, 1 · 2 = 2, 2 · 2 = 4. So there are no k, l < 3 such that k · l = 3, meaning condition (ii) is satisfied. So, 3 is prime.

2.2.22 But note that not just any list of necessary and sufficient conditions constitutes a proper definition. For a definition to be successful, we demand that the defined concept doesn't occur among the conditions being used to define it. Why? Well, a definition that violates this constraint wouldn't be very useful. Suppose we were to define an even number as one that is the product of an even number with some other number. It's true that a number is even iff the number is the product of an even number with some other number. So the conditions are necessary and sufficient for a number to be even. But this is not a particularly useful definition. In order to establish that a number is even, we'd first have to establish that some other numbers are even. And in order to do that, we'd need to establish that yet other numbers are even. And so on, ad infinitum. A definition in which this constraint is violated is called circular.

³ You might wonder: why didn't he say positive natural number? The reason is that, in mathematics, it's standard to reserve positive for numbers (strictly) bigger than zero. So, zero isn't positive. But then being positive can't be a necessary condition for being a natural number: zero is not positive but a natural number. Zero is, however, not negative, for a negative number is one that is smaller than zero and zero isn't smaller than itself.
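Programmers may recognize circularity as recursion without a base case. The sketch below (with hypothetical function names, purely for illustration) transcribes the circular "definition" of evenness next to a non-circular one:

```python
def is_even_circular(n):
    """The circular definition: n is even iff n is the product of an
    even number and some other number. To decide evenness of n we must
    first decide evenness of other numbers, and so on, ad infinitum.
    Calling this function never terminates by itself; Python cuts the
    regress short by raising RecursionError."""
    return any(is_even_circular(k) and k * m == n
               for k in range(n + 1) for m in range(n + 1))

def is_even(n):
    """A non-circular definition: n is even iff n = 2 * m for some
    natural number m, i.e. n leaves remainder 0 when divided by 2."""
    return n % 2 == 0

print(is_even(4), is_even(7))  # True False
```

The direct definition settles each case in one step; the circular one never bottoms out, which is exactly why such definitions are rejected.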

2.2.23 In contrast to the definition of an object, however, the definition of a property or relation can be empty, i.e. no object has the property or stands in the relation to anything. For example, we can define the property of being the biggest natural number as follows: we say that a number n is the biggest natural number iff for all natural numbers m, m ≤ n. It's clear that there is no biggest natural number, so no object has the property. But the property exists—it can be defined like we just did.

2.2.24 The way of defining a property generalizes to relations, like the relation
≤ on the natural numbers. For two natural numbers n, m, we say that
n ≤ m iff there exists a natural number k such that n + k = m. The
relation ≤ is called binary because it relates two objects. There are
also ternary, quaternary, quinary relations, and so on. More generally,
we call a relation n-ary iff it relates n objects, where n is a natural
number. So a binary relation is a 2-ary relation, a ternary relation is a
3-ary relation, and so on. Here’s an example of a definition of a ternary
relation: a point (on the plane) x lies in between two points y and z
if and only if there is a straight line that connects y and z which goes
through x.

2.2.25 Just like with definitions of objects, there is sometimes more than one definition of a given property or relation. For example, n ≤ m for natural numbers n and m can equivalently be defined by the condition that n/m ≤ 1. It's good to know alternative definitions of important properties and relations.
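As a sanity check of the kind a programmer might do, here is a sketch (with function names of my own choosing) that transcribes both definitions of ≤ and confirms they agree on small cases; the ratio version assumes m > 0 so that the division is defined:

```python
def leq_by_addition(n, m):
    """Definition 1: n <= m iff there is a natural number k with
    n + k == m. If such a k exists, it is at most m, so searching
    the range 0..m suffices."""
    return any(n + k == m for k in range(m + 1))

def leq_by_ratio(n, m):
    """Definition 2 (for m > 0): n <= m iff n / m <= 1."""
    return n / m <= 1

# The two definitions agree wherever both apply:
for n in range(20):
    for m in range(1, 20):
        assert leq_by_addition(n, m) == leq_by_ratio(n, m)
print("the two definitions agree on these cases")
```

Of course, a finite check like this is no proof of equivalence; it merely illustrates what having two definitions of the same relation amounts to.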

2.2.26 Understanding mathematical definitions is often not easy; it requires patience and effort. Learning mathemateze is like learning a foreign language. Here are two important steps that you can (in fact, should) take in order to properly understand a definition:

Examples. Check some examples, like we did above (this applies to definitions of properties and relations). Is 1 a prime? (No, condition (i) is violated.) Is 2 prime? (Yes.) Is 3 prime? (Yes.) Is 4 prime? (No, condition (ii) is violated.) . . . Is √2 a prime? (No, the definition only applies to natural numbers.) If you know how to program, try to write a script that checks examples. Try to come up with your own examples and counter-examples. For each definition you learn, you should know a list of standard examples and counter-examples.

Understand the Conditions. Try to understand why the conditions are formulated the way they are. For example, why did we demand that √2 is the positive real x such that x · x = 2? Because there is more than one real with this property. Or, why did we demand, for n to be prime, that there are no numbers k, l < n such that k · l = n, rather than the condition that there are no k, l ≤ n such that k · l = n? Well, this definition wouldn't work: no number would be prime! To see this, note that for each number n > 1, 1 · n = n, and hence there are m, k ≤ n with n = m · k, viz. m = 1 and k = n. Another thing you can do in order to understand the conditions better is to try to give an equivalent formulation, like in the case of π and ≤.

But this is just the beginning. To properly appreciate a definition, you will have to work with it; you will have to prove things with it. This is just like how, to properly acquire command of a new word, you have to use it in a sentence.
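The suggestion above to write a script that checks examples might, for the primality definition, look like the following sketch (the function name is mine; it transcribes conditions (i) and (ii) literally, with no attempt at efficiency):

```python
def is_prime_by_definition(n):
    """n is prime iff (i) 1 < n, and (ii) there are no natural
    numbers k, l < n such that n == k * l."""
    condition_i = 1 < n
    condition_ii = not any(k * l == n for k in range(n) for l in range(n))
    return condition_i and condition_ii

# The standard examples and counter-examples from the text:
print(is_prime_by_definition(1))  # False: condition (i) is violated
print(is_prime_by_definition(2))  # True
print(is_prime_by_definition(3))  # True
print(is_prime_by_definition(4))  # False: 2 * 2 == 4 violates condition (ii)
```

The script agrees with the hand-checks above, and, just as the text says, it simply cannot be asked about √2: the definition, and hence the function, only applies to natural numbers.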

2.2.27 There's also a way in which learning mathemateze is not like learning a language, at least not like learning a language in (high) school. When you learn definitions in mathematics, you should not just memorize them, like your vocabulary lists in school English. It is much, much more important that you understand a mathematical definition than that you memorize it. This is what the previous steps are supposed to help you with. And once you've properly understood a definition, it will actually be easy to remember it, or at least to be able to reconstruct it from memory.

2.2.28 As you will see once we get more advanced, certain kinds of mathematical objects have a special way of being defined: a set, for example, is defined by specifying its members, a function is defined by saying which output it gives for which input, and so on. These special kinds of definitions can always be traced back to the general kind of definition we characterized above in 2.2.17, but they tell us something about the (mathematical) nature of the objects under consideration. For example, the fact that a set can be defined by specifying its members tells us that there is nothing more to being a set than being a collection of objects (more on sets in the next chapter). Keep an eye out for the way in which an individual object of a certain kind—a set, a function, a language, a model, . . . —is defined; you will better understand what these objects are (mathematically speaking).
CHAPTER 2. A MATHEMATICS PRIMER FOR ASPIRING LOGICIANS24

2.2.29 Having covered all of this notation, it’s important to get clear on its
benefits and drawbacks. The primary purpose of most of the features
of mathemateze that we’ve just discussed is precision, they allow us to
phrase our claims in such a precise way that we can establish them be-
yond a reasonable doubt—that we can prove them. We’ll cover proving
things in the next section, the section on mathodology. But the pre-
cision I just mentioned comes at a price: as you can probably agree,
a properly formulated mathematical claim can be (very) difficult to
properly understand. And so there is also a role for natural language
in mathematics: it can make the very precise claims of mathemateze
intuitively perspicuous. Just compare the two claims:

• Let n be a natural number. Then, if n > 2 and there are no
  natural numbers k, l < n such that n = k · l, then there is no
  natural number m such that n = 2m.

• Every prime bigger than two is odd.

The two claims say exactly the same thing. While the first is very
precise and, once understood, easily seen to be true, the second is far
more intelligible.
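As a quick sanity check (not a proof!), the first, formal phrasing can be tested mechanically for small values of n. The following Python sketch is my own illustration; the helper names are hypothetical:

```python
def divisor_free(n):
    """True iff there are no natural numbers k, l < n with n = k * l."""
    return not any(n == k * l for k in range(n) for l in range(n))

def satisfies_claim(n):
    """The formal claim: if n > 2 and n has no factorization into
    smaller numbers, then there is no m with n = 2m."""
    if n > 2 and divisor_free(n):
        return all(n != 2 * m for m in range(n + 1))
    return True  # if-part false, so the conditional holds vacuously

# No counterexample among the first two hundred numbers (evidence, not proof):
assert all(satisfies_claim(n) for n in range(200))
```

Note how the check treats the conditional: whenever the if-part fails, the claim holds vacuously for that n.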

2.2.30 The previous observation motivates my last recommendation about
learning mathemateze: once you’ve properly understood the formal
side of things, try to phrase what you’re thinking about in natural
language without using mathematical symbols. Once you can do this
well, you have truly understood a mathematical concept. In fact, I
think this is so important, that I will ask you to do this in exercises.
When an exercise is marked [6 x], this means that you may not use any
mathematical symbols in the answer to this question.

2.3 Mathodology
2.3.1 One of the most important mathematical activities is proving things.
A mathematical proof is a rigorous, step-by-step argument which es-
tablishes the truth of a mathematical statement. Importantly, in a
mathematical proof, every step needs to be justified, nothing should
remain vague or unclear.

2.3.2 Mathematicians typically classify true mathematical statements into
different categories, roughly according to their role in mathematical
inquiry:

Lemma. An auxiliary claim, established in order to prove a more
important proposition or theorem. Whether something counts as
a lemma is thus context-dependent: one mathematician’s important
result may be another mathematician’s lemma.
Proposition. A run-of-the-mill, ordinary mathematical fact.
Theorem. An important mathematical fact, e.g. because it provides
significant insight, has long been an open question, or the like.
Sometimes, the term “theorem” is also used generically to refer
to any kind of mathematical fact.
Corollary. A simple consequence of a previously established lemma,
proposition, or theorem. Sometimes a theorem is a mere corollary
of a central lemma that has been proven along the way.
Conjecture. This one stands out a bit, since it’s a claim that has not
(yet) been shown to be true but for which there is strong evidence
that it is.

2.3.3 The ideal of a mathematical proof is that of a purely axiomatic proof.
An axiom is a basic principle of mathematics, which is assumed to
be true. Each category of mathematical objects has its own axioms
governing it. Examples of axioms for the natural numbers are:

• 0 is a natural number
• for no natural number n, n + 1 = 0
• for all natural numbers n, m, if n + 1 = m + 1, then n = m
• for all natural numbers n, n + 0 = n
• for all natural numbers n, m, n + (m + 1) = (n + m) + 1
• ...

An axiomatic proof is one whose only assumptions are axioms and
definitions and where each step corresponds to a valid inference.
Axiomatic proofs are therefore very detailed and proceed in very, very
small steps. This makes axiomatic proofs often difficult to read. Just
imagine proving from the above axioms that if n is a prime number
with n > 2, then n + 1 is even. This can be done, but it takes many,
many steps and definitions.

2.3.4 Axiomatic proofs are an epistemic ideal; you almost never find a full
axiomatic proof in the literature. The point of most mathematical
writing is to convince the reader that a purely axiomatic proof exists.
It’s left to the interested reader to figure out the details. Of course,
whether a given piece of writing convinces you depends on your
background. Mathematical writing for beginners is much more detailed
than writing on an advanced level. In this course, you’ll get more and
more advanced and we’ll get, correspondingly, less and less detailed.

Our aim in mathematical writing is to achieve what’s called informal
rigor, that is, to ensure that there is an axiomatic argument
corresponding to what we say while retaining readability. For this purpose,
mathematicians have developed conventional ways of writing proofs
that are supposed to ensure that an underlying axiomatic proof exists
if we abide by the conventions.

2.3.5 An interesting side-remark. The origin of the proof systems mentioned
in §1 is, in fact, the mathematical study of axiomatic arguments: a
derivation in a proof system is a model for a correct axiomatic ar-
gument. Here, it’s once more important that we heed the distinction
between object and meta-language. When we’re reasoning mathemat-
ically about logic (in the meta-language), it’s enough to convince the
reader that an axiomatic proof of the fact in question exists. But when
we’re working in the object language, and we’re trying to derive a
conclusion from some premises, we need to be perfectly detailed and
axiomatic. Watch out for which level of detail is required in a given
context and err on the side of caution!

2.3.6 Note that there is a difference between a finished mathematical proof
according to the standards of informal rigor and the notes you make
along the way, which help you to discover the proof. This is especially
important to keep in mind when you’re writing for exams, term papers,
or a thesis. You might have encountered this already in high-school
in situations when a simple calculation was not deemed an entirely
satisfying answer to a problem, but some explanation was required.
This is precisely the point here: your calculations are your notes; the
actual proof is an argumentative piece of writing.

2.3.7 Having to rigorously prove a mathematical fact may seem like a daunt-
ing task at first. To make things easier for you, I recommend following
these steps: figure out what you want to prove, state your claim as
clearly as possible, unfold the relevant definitions, remind yourself of
relevant facts, devise a proof strategy, write up your proof, proof-read.
Let’s go through these steps in turn:

2.3.7.1 Figure out what you want to prove.
In a course like this, you will often be told explicitly what
to prove (with the assumption that the claim in question is
true). But sometimes, especially in more advanced contexts,
you might need to determine whether a claim is true. In such
a case, you will try to formulate a conjecture.

Running Example. Think of some standard prime numbers.
You might think of three, five, seven, maybe 11. So you
might form the initial conjecture that every prime number is
odd. But wait a moment, we forgot two: two is a prime number
and two is even! So we modify our conjecture to say that every
prime number bigger than two is odd. And, in fact, now there’s
no obvious counterexample anymore. But not being able to find
a counterexample doesn’t constitute a proof: there might be an
even prime number somewhere which is so big that we can’t find
it by a search. So, we set out to prove the following conjecture:
Conjecture. Every prime number bigger than two is odd.
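The process of forming and revising a conjecture can be mimicked with a brute-force counterexample search. A Python sketch of my own; a finite search only produces evidence, never a proof:

```python
def is_prime(n):
    """Trial division: n is prime iff n > 1 and no d with 1 < d < n divides n."""
    return n > 1 and all(n % d != 0 for d in range(2, n))

def is_odd(n):
    return n % 2 == 1

# First attempt: "every prime number is odd." The search finds our
# forgotten counterexample:
counterexamples = [n for n in range(100) if is_prime(n) and not is_odd(n)]
print(counterexamples)  # → [2]

# Modified conjecture: "every prime number bigger than two is odd."
# No counterexample turns up in a finite search, but that proves nothing:
assert not [n for n in range(5000) if n > 2 and is_prime(n) and not is_odd(n)]
```

This is exactly the epistemic situation described above: the empty search result supports the conjecture, and the proof still has to be found.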
2.3.7.2 State your claim as clearly as possible.
As we’ve mentioned above, stating a mathematical claim in
proper mathemateze is what makes it precise enough to prove.
So, let’s write our conjecture in proper mathemateze. Basically,
what we want to say is that for any natural number, if that
number is a prime and bigger than two, then the number is
odd. So, we declare n as a variable for natural numbers and
write our conjecture as:
• Let n be a natural number. If n > 2 and n is prime, then
  n is odd.
Note that if we had declared n as a variable for prime numbers,
we could have written:
• Let n be a prime number. If n > 2, then n is odd.
By declaring n to range over the primes bigger than two, we
could even write:
• Let n be a prime number with n > 2. Then, n is odd.
Each of these claims would be proven in a slightly different
way, but they say essentially the same thing; they are, in fact,
equivalent.
The underlying phenomenon here is that there’s a trade-off
between assumptions about our variables and the if-part of our
theorem (if there’s one). The last rephrasing of our conjecture
has no if-part but instead three assumptions: n is a natural
number, n is prime, and n > 2. The first phrasing, instead,
only assumes that n is a natural number and, in turn, has two
claims in the if-part.
Strictly speaking, to prove the first claim, we have to prove the
if-then claim “if n > 2 and n is prime, then n is odd” using
only the assumption that n ranges over the naturals and to
prove the third claim, we have to prove that n is odd using
the assumption that n is natural, prime, and bigger than two.
As we’ll see in a few moments, however, the two things are
essentially the same, so there is little but notational difference
between the phrasings in question.
2.3.7.3 Unfold the relevant definitions.
Look for all the central concepts in your conjecture and as-
sumptions and remind yourself of their definitions. In the case
of our conjecture, the central concepts are those of a number
being even/odd and those of a number being prime.
Here are the relevant definitions:
Definition. A natural number n is even iff there exists a nat-
ural number k such that n = 2k. A natural number n is odd
iff n is not even.
Definition. A natural number n is said to be prime iff 1 < n
and there are no natural numbers k, l < n such that n = k · l.
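These definitions translate almost word for word into executable predicates. A sketch of my own, merely to illustrate how directly the definitions can be read:

```python
def is_even(n):
    """A natural number n is even iff there exists a natural number k
    such that n = 2k."""
    return any(n == 2 * k for k in range(n + 1))

def is_odd(n):
    """A natural number n is odd iff n is not even."""
    return not is_even(n)

def is_prime(n):
    """n is prime iff 1 < n and there are no natural numbers k, l < n
    such that n = k * l."""
    return 1 < n and not any(n == k * l for k in range(n) for l in range(n))

assert is_even(6) and is_odd(7)
assert is_prime(7) and not is_prime(9)
```

Unfolding a definition in a proof corresponds to replacing a call like `is_prime(n)` by its body: the quantifiers become the loops you see here.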
2.3.7.4 Remind yourself of relevant facts (lemmas, propositions,
theorems, . . . ) you already know.
In mathematical practice, you almost never start “from scratch.”
You typically make use of lemmas, propositions, and theorems
that either you or somebody else proved before. In mathemat-
ics, we truly “stand on the shoulders of giants” like Euclid,
Bernoulli, Euler, and many, many others. At this stage of your
proof search, try to figure out if you already know some fact
that might help you in proving your conjecture.
It turns out that for our present conjecture, we don’t really
need any additional lemmas; we can prove it directly from the
definitions. But we don’t know that yet, and we remember the
following facts, which we record just in case:
Proposition. If n is a natural number, then n is even or n is
odd (but never both).
Proposition. Let n be a natural number. If n is an even
number, then n + 1 is odd. And if n is odd, then n + 1 is
even.
We assume that you’ve proven these elsewhere (slides, book,
homework, etc.), and we write them down here just to be sure;
maybe we’ll need them later. It can always happen that while we’re
searching for a proof strategy, it becomes clear that we can
use some other relevant facts, but it’s always a good idea to
look at the clearly relevant facts first, since they might help
you devise a proof strategy in the first place.
Nota bene: When we ask you to prove a result as homework,
in an exam, or elsewhere for credit, of course, you can’t just
refer to somebody else’s proof of the result somewhere in the
literature. However, you can make use of results that are clearly
established in the lecture, the notes, or the like. Please make
sure that you reference where to find the proof clearly, using
the slide number or chapter.section.number.
2.3.7.5 Devise a proof strategy.
Now things get serious: you actually need to start reasoning.
What we will do at this point is to try to see why the result
holds and to derive a proof strategy from that. In our exam-
ple conjecture, finding a proof strategy is relatively easy. We
simply note that if n > 2 is a prime number, then n can’t be
even. Because if n were even, by definition, there would be a k
such that 2k = n, which contradicts the assumption that n is
prime. And if n isn’t even, then, by definition, n is odd. This
isn’t our final proof yet; we still have some cleaning up to do.
But we have a pretty good idea how to proceed.
Note that often it will not be so easy to find a proof strategy.
What you will typically do, then, is to look through all the
proof strategies/argument forms you know and see if they are
useful. Below, we list some standard proof strategies/argument
forms together with the kind of situations in which they are
typically useful. While doing more and more mathematics,
you will slowly build a mental library of proof strategies that
worked in certain situations. This will be an invaluable re-
source in trying to prove things: most often, what you’ll do
is to adapt a proof strategy you already know to the case at
hand. Bottom line: even if this looks hard now, you will get
better at this with experience.
2.3.7.6 Write up your proof.
Now it’s time to record the results of your work: you write
up the finished proof. If you indeed succeeded in proving your
result, it is now a result (a lemma, proposition, theorem), so
you can write:
Proposition. Let n be a natural number such that n is prime.
If n > 2, then n is odd.
When you claim a result, you will have to follow up with a
proof. The proof typically comes afterwards in a separate proof
environment. You begin the proof by declaring your variables
and listing your assumptions (possibly naming them for ease
of reference). Then you reason carefully, step-by-step to the
desired result:

Proof. Let n be a natural number and assume n is prime. By
definition, this means that (i) 1 < n and (ii) there are no
natural numbers k, l < n such that n = k · l. We want to
show that if n > 2, then n is odd. So, suppose that n > 2.
By definition, for n to be odd would mean that n is not even.
We claim that given our assumptions, n cannot be even and
hence must be odd. For suppose that n is even. By definition,
this would mean that there exists an m such that n = 2m. But
this would contradict condition (ii) for n being prime: just let
k = 2 and l = m. Since 1 < n and n = 2m, we have 1 ≤ m, so
l = m < 2m = n; and k = 2 < n by assumption. So, n cannot be
even, which means that n must be odd. □

Note the □ at the end of the proof. It marks the end of the
proof and is read Q.E.D., i.e. quod erat demonstrandum (what
was to be shown).
We’ve completed our proof.
2.3.7.7 Proof-read.
As with any piece of writing, it’s important to double check
what you’ve written. At this stage of proving things, go through
what you’ve written once more. Ask yourself: Are my defini-
tions correct(ly phrased)? Is every reasoning step explained?
Are all my variables declared? Is my wording understandable?
—Keep in mind that your proof will be read by somebody
else, you’re not writing it for yourself but to convince some-
body else. Write for a reader, not for yourself. We grade your
mathematical writing not only in terms of correctness but also
in terms of intelligibility.
Now you’re (finally) done. Typically, at this point, we’ll discard
our notes and rest content with our finished, polished proof.
Especially when handing in homework, what you will report is
your proof and not your notes (unless asked specifically).

2.3.8 Nota bene: A good mathematical proof is written in a clear language,


using complete, grammatical sentences. We won’t cover mathematical
writing in more detail, but I will try to lead by example. The examples
of proofs given below are written in the style that we expect you to
adopt.

2.3.9 We conclude our tutorial on mathematical proofs with a beginner’s
library of standard argument forms to be used in mathematical
proofs/proof strategies. Note that the list is not exhaustive; already in the next
chapter, you will learn a new argument form that will be of central
importance throughout the course.

Conditional Proof. Also known as direct proof.
• Form: We prove an if-then claim by assuming the if-part and
  deriving the then-part.
• Justification: Intuitively, an if-then claim is true just in case
  the then-part is true, whenever the if-part is. For example,
  “if n is even, then n + 1 is odd” is true iff n + 1 is odd for
  every even number n. If we can derive from the assumption
  that the if-part is true that the then-part must be true, too,
  we’ve shown just that.
• Use: Whenever you have an if-then claim, you should first
  try to prove it using conditional proof.
• Example:
Proposition. Let n, m be natural numbers. If n is even, n·m
is even.
Proof. Let n and m be natural numbers. We want to prove
that if n is even, n · m is even. So assume for conditional
proof that n is even. Then, by definition, we have that there
exists a natural number k such that n = 2k. Now consider the
number n · m. Since n = 2k, we have that n · m = (2 · k) · m =
2 · (k · m). By definition, this means that n · m is even, which
is what we wanted to show.
Note how we write a conditional proof:
1. State the conditional you wish to prove.
2. Assume the if-part.
3. Use mathematical reasoning to get to the then-part.
4. Conclude the proof by saying that you’ve shown what
   needed to be shown.
• Common mistakes: assuming what needs to be proved
  (either the whole if-then statement or the then-part).
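The example proposition can be checked on a grid of pairs; the check also encodes exactly when a conditional holds for a given instance. A sketch of my own, not part of the proof:

```python
def is_even(n):
    return n % 2 == 0

# "If n is even, then n * m is even" holds for a pair (n, m) unless the
# if-part is true while the then-part is false:
assert all(not is_even(n) or is_even(n * m)
           for n in range(50) for m in range(50))
```

When the if-part fails (n odd), the instance holds vacuously; conditional proof works because we only ever need to argue from the case where the if-part is assumed true.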
Distinction by Cases. Also known as proof by cases, proof by
exhaustion, or the brute-force method.
• Form. We prove a claim by showing that it holds in each of
  a list of exhaustive cases, which is a list of cases such that at
  least one of the cases must obtain.
• Justification: If a list of cases is exhaustive, this means that
  at least one of the cases will obtain (even though we don’t
  necessarily know which one). But if we can show that in
  each of the cases our claim would be true, we don’t need to
  know which case will actually occur; we can conclude that
  our claim will be true regardless.
• Use: When your assumptions (e.g. from a conditional proof)
  allow for a natural distinction into several cases. We typically
  try to avoid distinction by cases with long lists of cases
  (think more than 3) for reasons of mathematical elegance,
  though a proof with 1936 cases exists (the computer-assisted
  proof of the four-color theorem).
• Example:
Proposition. For n a natural number, n² + n is even.
Proof. Let n be a natural number. First, note that n² + n =
n(n + 1). So it suffices to show that n(n + 1) is even. Since
every number is either even or odd, we can distinguish two
exhaustive cases: (i) n is even or (ii) n is odd.
– Case i. If n is even, then n(n + 1) is the product of
  an even number, n, and an odd number, n + 1. By the
  above proposition (the example proposition for conditional
  proof), this means that n(n + 1) is even, too.
– Case ii. If n is odd, then, by one of our previously
  established propositions, we know that n + 1 is even. But
  then, again, n(n + 1) is the product of an even number,
  n + 1, and an odd number, n, which we already observed
  means that n(n + 1) is even.
So, either way, n(n + 1) is even, which is what we wanted to
show.

Note how we write a proof by cases:
1. Give a justification for your case distinction (How many
   cases are there? Why are they exhaustive?).
2. Go through each case one by one and show, by mathematical
   reasoning, that the result holds in the case.
3. Conclude that, since the list was exhaustive, the result
   holds in general.
• Common mistakes: List of cases is not exhaustive/cases are
  missing.
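The two exhaustive cases of the example proof can be mirrored directly in code. A sketch of my own; again only a finite check, not a proof:

```python
def is_even(n):
    return n % 2 == 0

def product_form_is_even(n):
    """Mirror the two exhaustive cases of the proof: whichever of n and
    n + 1 is even, n * (n + 1) is a product with an even factor."""
    if is_even(n):                      # case (i): n is even
        return is_even(n * (n + 1))
    else:                               # case (ii): n is odd, so n + 1 is even
        return is_even(n + 1) and is_even(n * (n + 1))

# The two cases are exhaustive, since every n satisfies the if or the else:
assert all(product_form_is_even(n) and is_even(n * n + n) for n in range(1000))
```

The if/else structure is the code-level analogue of an exhaustive case distinction: control flow must pass through exactly one of the branches.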
Proof by Contradiction. Also known as indirect proof.
• Form. We prove a claim by showing that its negation leads
  to a contradiction.
• Justification. In classical mathematics, we assume that for
  every claim, either the claim or its negation is true (remember
  bivalence; this is a way in which classical mathematics relies
  on classical logic). But if the negation of a claim leads to a
  contradiction, it can’t be true (again, classical logic). Hence,
  the original claim must be true.
• Use. This is really an all-rounder; it’s used in many diverse
  situations. You will get a “feel” for when proof by contradiction
  works well. You should always try indirect proof if all
  direct methods (like conditional proof or proof by cases) have
  failed you. The method is especially powerful when you’re
  trying to establish that one of two conditions must obtain
  (like n is either even or n is odd).
• Example.
Proposition. Let n be a prime number. If n > 2, then n is
odd.
Proof. See above.
Proposition. There is no smallest positive real number, i.e.
there exists no real number x > 0 such that for all y > 0 we
have x ≤ y.
Proof. Suppose (for proof by contradiction) that our claim is
false, i.e. there exists a real number x such that (i) x > 0
and (ii) for all y > 0 we have x ≤ y. Call that number ε (as
a temporary constant, cf. 2.2.8). Now consider the number
ε/2. Since by assumption (i) 0 < ε, we have that (a) 0 < ε/2
and that (b) ε/2 < ε. From (a) together with (ii), it follows
that ε ≤ ε/2. But from this and (b), we get that ε < ε, which is
impossible. Hence, ε cannot exist and our claim is proven.
Note how we write an indirect proof:
1. State that you’re assuming that the claim is false (you
can note that you do this for proof by contradiction, but
typically that’s clear).
2. Spell out what it means for the claim to be false.
3. Derive a contradiction from the assumption that the claim
is false.
4. Conclude that the claim must be true because its nega-
tion leads to a contradiction.
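The key move of the example proof, halving a supposed smallest positive number, can be replayed with exact rational arithmetic. A sketch of my own; `Fraction` keeps the comparisons exact, unlike floating point:

```python
from fractions import Fraction

def smaller_positive(eps):
    """Given any positive rational eps, eps/2 is positive and strictly
    smaller -- the contradiction the indirect proof exploits."""
    half = eps / 2
    assert Fraction(0) < half < eps
    return half

# No matter how small a positive candidate we start with, we can keep going:
eps = Fraction(1, 10**9)
for _ in range(5):
    eps = smaller_positive(eps)
assert eps > 0
```

Of course, code can only halve finitely often; the indirect proof is what turns this observation into a statement about all positive reals at once.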
Contrapositive Proof.
• Form. We prove an if-then statement by deriving that the
  if-part is false from the assumption that the then-part is false.
• Justification. This is closely related to indirect proof. In order
  for an if-then claim to be false, we would need that the if-part
  is true but the then-part is false. But if we can derive that
  the if-part is false whenever the then-part is, we cannot have
  that the if-then claim is false, for we’d get a contradiction: the
  if-part would need to be both true and false. So, given that
  we can derive the negation of the if-part from the negation
  of the then-part, the if-then claim cannot be false, so it must
  be true.
• Use. Contraposition is very useful if the then-part contains
  a disjunctive claim (as in the example below).
• Examples.
Proposition. Let n, m be natural numbers. If n · m is even,
then either n is even or m is even.
Proof. Suppose that n and m are natural numbers. We want
to show that if n · m is even, then either n is even or m is
even. We prove the contrapositive, i.e. if neither n nor m is
even, then n · m is odd. Note that if neither n nor m is even,
then both n and m are odd. This means that n = 2k + 1
and m = 2l + 1 for natural numbers k, l. Now consider the
number n · m. Since n = 2k + 1 and m = 2l + 1, we have that
n · m = (2k + 1)(2l + 1) = 4kl + 2k + 2l + 1 = 2(2kl + k + l) + 1.
But now note that 2(2kl + k + l) + 1 is of the form 2x + 1
for x a natural number: just let x = 2kl + k + l. But this just
means that 2(2kl + k + l) + 1 = n · m is odd, which is what
we needed to show.
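Since the contrapositive is equivalent to the original claim, checking one checks the other. A sketch of my own, over a finite grid:

```python
def is_odd(n):
    return n % 2 == 1

# Contrapositive: the product of two odd numbers, (2k+1)(2l+1), is odd.
assert all(is_odd((2 * k + 1) * (2 * l + 1))
           for k in range(100) for l in range(100))

# Equivalent original claim: if n * m is odd... no -- if n * m is even,
# then n is even or m is even, i.e. n * m is odd, or one factor is even:
assert all(is_odd(n * m) or not is_odd(n) or not is_odd(m)
           for n in range(50) for m in range(50))
```

Notice that the first check ranges only over odd numbers, exactly as the contrapositive proof only has to consider the case where both factors are odd.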
Biconditional Proof.
• Form. We prove that two statements are equivalent (the one
  is true iff the other is) by showing that (i) if the one is true,
  so is the other (the left-to-right or ⇒ direction) and that
  (ii) if the other is true, so is the one (the right-to-left or
  ⇐ direction). Note that we can prove (i) and (ii) using any
  kind of proof principle we like, but often conditional proof is
  useful.
• Justification. Essentially, an equivalence claim (iff-statement)
  is just a combination of two if-then statements. To say that
  n is odd iff n is not even is to say that (i) if n is odd, then
  n is not even, and (ii) if n is not even, then n is odd. So,
  essentially, we need to prove two if-then claims, which is what
  biconditional proof amounts to.
• Use. I cannot stress this enough: you always need to prove
  both the left-to-right and the right-to-left direction if you try
  to establish an equivalence claim.
• Example.
Proposition. Let n be a natural number. Then n² is even
iff n is even.

Proof. Let n be a natural number.
– (Left-to-right direction): We need to show that if n² is
  even, then n is even. We prove the contrapositive. Suppose
  that n is not even, i.e. odd. Then, by a previous
  observation, there exists a natural number k such that
  n = 2k + 1. Now consider n². By our observation, we
  have n² = (2k + 1)². So, we get:

  n² = (2k + 1)² = 4k² + 4k + 1

  But note that 4k² + 4k = 2(2k² + 2k), and hence 4k² + 4k
  is even by definition. So n² = l + 1 where l is an even
  number (just let l = 4k² + 4k), which means that n² is
  odd by a previous observation.
– (Right-to-left direction): We want to prove that if n is
  even, then n² is even. So suppose that n is even (for
  conditional proof). We’ve previously observed that the
  product of an even number with any other number is
  even. But n² = n · n, so it follows as a simple corollary
  that n² is even.
We conclude that n² is even iff n is even.
Note how we write a biconditional proof:
1. State that you want to prove an iff-claim.
2. Prove the left-to-right direction.
3. Prove the right-to-left direction.
4. Conclude that the equivalence holds.
• Common mistakes: One of the directions is missing.
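Both directions of the iff can, and should, be checked separately, just as they are proven separately. A sketch of my own over a finite range:

```python
def is_even(n):
    return n % 2 == 0

# Left-to-right direction: whenever n squared is even, n is even.
left_to_right = all(is_even(n) for n in range(1000) if is_even(n * n))

# Right-to-left direction: whenever n is even, n squared is even.
right_to_left = all(is_even(n * n) for n in range(1000) if is_even(n))

assert left_to_right and right_to_left
```

Writing the two directions as two separate checks makes the common mistake visible: forgetting one of them would leave half the equivalence untested.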
Universal Generalization.
• Form. We prove that all objects (of a category) have a
  property by showing that any arbitrary object (of the category)
  has the desired property.
• Justification. An arbitrary object is one about which we’ve
  not assumed anything. But that means that if we can show
  that an arbitrary object has the property, then any object
  we might pick will be just like the arbitrary object. So, if we
  have a proof that an arbitrary object has the property, we
  can just repeat the proof for any object we might pick. So,
  any object will have to have the property.
• Use. Basically whenever we want to prove a universal claim.

• Example.
Note that basically every proof we’ve discussed so far is just
one application of universal generalization away from proving
a universal, for-all claim. Take the following proposition
we’ve proven above:
Proposition. For n a natural number, n² + n is even.
We could easily transform the proof of this proposition into a
proof of the following proposition:
Proposition. For all natural numbers n, n² + n is even.
Or, informally put, squaring a natural number and then adding
the number itself will always result in an even number. The
proof of this will be almost like the proof of the previous
proposition:
Proof. For universal generalization, let n be an arbitrary nat-
ural number. [insert proof of previous proposition here]. Since
n was arbitrary, we can conclude that all numbers have the
desired property.
It’s a simple exercise to do the same with the other claims
made in the chapter.
So, you can see that there is not much of a difference be-
tween using universal statements and using declared vari-
ables. Strictly speaking, however, to prove a universal claim
you need to reason by universal generalization (or something
to the effect).

2.3.10 This concludes our tutorial on mathemateze and mathodology. We’ve
barely scratched the surface; you still have much to learn. But it’s a
start. Over the coming years, you will become more and more proficient
in the ways of mathematics. For now, let me just note that there is way
more to mathematics than what we’ve just discussed above; this was
just a starting point. In mathematical practice, we come up with new
definitions, concepts, and theories all the time. There is a creative side
of mathematics which is sometimes hidden from view when you’re
learning the basics of a field as a finished product: its definitions and
theorems. In this course, I will try to also give you a feel for how we
came up with the definitions that we’re going to cover.

2.4 Friendly Advice


I’d like to conclude this chapter with somebody else’s advice. Kevin Houston,
in the preface to his fantastic tutorial How to Think Like a Mathematician
(see §2.8) gives some “friendly advice” for learning mathematics, which I’m
going to repeat here since all of this applies to this course (this is a direct
quote from p. x of that book):

• It’s up to you — Your actions are likely to be the greatest determiner
  of the outcome of your studies. Consider the ancient proverb: The
  teacher can open the door, but you must enter by yourself.

• Be active — Read the book. Do the exercises set.

• Think for yourself — Always good advice.

• Question everything — Be sceptical of all results presented to you.
  Don’t accept them until you are sure you believe them.

• Observe — The power of Sherlock Holmes came not from his deductions
  but his observations.

• Prepare to be wrong — You will often be told you are wrong when doing
  mathematics. Don’t despair; mathematics is hard, but the rewards are
  great. Use it to spur yourself on.

• Don’t memorize — seek to understand — It is easy to remember what
  you truly understand.

• Develop your intuition — But don’t trust it completely.

• Collaborate — Work with others, if you can, to understand the
  mathematics. This isn’t a competition. Don’t merely copy from them though!

• Reflect — Look back and see what you have learned. Ask yourself how
  you could have done better.

2.5 Core Ideas


• Variables stand for arbitrary but fixed objects; constants stand for
  known objects.

• Mathemateze is a very precise language; you have to learn it like a
  foreign language. You will not only have to remember definitions, but
  to understand them. Keep in mind: understanding facilitates
  remembering.

• An object is defined by giving a list of properties that one and only
  one object—the object to be defined—satisfies. Keep in mind that for
  a definition of an object to be successful we need to show that there
  exists an object that satisfies the properties and that there is at most
  one object that satisfies the properties.

• A property or relation is defined by giving the precise conditions under
  which an object has the property or some objects stand in the relation.
  Keep in mind that in order for a definition of a property or relation
  to be successful, the property or relation cannot be used to formulate
  the conditions that define it.

• A mathematical proof is a rigorous, step-by-step argument which
  establishes the truth of a mathematical statement.

• An axiomatic proof has only axioms and definitions as premises and
  uses only valid inferences. The point of mathematical writing is to
  convince the reader that an axiomatic proof exists using informal rigor.

• Follow these steps to construct a proof: figure out what you want to
  prove, state your claim as clearly as possible, unfold the relevant
  definitions, remind yourself of relevant facts, devise a proof strategy,
  write up your proof, proof-read.

• Over time, you will slowly build a mental library of proof strategies
  that worked in certain situations. Study existing proofs and try to
  understand them, why they work, how they approach the problem.
  This is the best way to build that mental library.

2.6 Self-Study Questions


2.6.1 Which of the following is not a successful definition?

(a) A number n is an even square iff n is the square of an even number,
i.e. iff there exists a natural number m such that m is even and
m² = n.
(b) Let’s say that a natural number n is independent iff there are no
two dependent numbers k, l < n such that n = k + l. Further, we
say that a number is dependent iff it is not independent.
(c) We define ε to be the smallest positive real, i.e. ε is the number x
such that 0 < x and for all reals y, if 0 < y, then ε ≤ y.
(d) We say that a real number x is a positive infinitesimal iff x is a
smallest positive real, i.e. iff (i) 0 < x, and (ii) for all reals y, if
0 < y, then x ≤ y.
(e) We define 11 to be the first natural number bigger than 10.
(f) We define the imaginary number i as the complex number x such
that x2 = −1.

2.6.2 Suppose you’re asked to prove the following conjecture:


CHAPTER 2. A MATHEMATICS PRIMER FOR ASPIRING LOGICIANS

• For all natural numbers n and m, if n + m is odd, then either n is odd or m is odd.

What do you think is a good proof strategy to tackle this? (This is, of
course, somewhat subjective, but think about it!)

(a) Prove it directly using conditional proof followed by universal gen-


eralization.
(b) Try indirect proof to the whole statement.
(c) Use biconditional proof followed by indirect proof and universal
generalization.
(d) Try contrapositive proof followed by a universal generalization.
(e) Prove the conditional by conditional proof combined with indirect
proof, followed by universal generalization.
(f) Make a distinction by cases followed by universal generalization.

2.7 Exercises
2.7.1 For each of the arguments you gave in exercise 1.7.1, determine which
argument forms you’ve used.

2.7.2 Prove the following simple, number-theoretic facts. Make use of the
step-by-step procedure laid out in 2.3.7.

(a) [h] The sum of two even numbers is even.


(b) [h] If the product of two natural numbers is odd, then at least one
of the two numbers is odd.
(c) [h] Every natural number is either even or odd.
(d) If you add one to an even number, you get an odd number.
(e) The product of two prime numbers is not a prime number.
(f) No prime number bigger than two is the product of an even and
an odd number.

2.7.3 Let n be a natural number.

(a) Formulate a necessary but not sufficient condition for n being even.
(b) Formulate a sufficient but not necessary condition for n being even.
(c) Formulate a necessary and sufficient condition for n being even.

2.7.4 [h, 6 x] For each of the following mathematical statements, express the
statement in ordinary language, without the use of mathematical sym-
bols.

(a) Let n and m be two natural numbers. Then, if there is a number k


such that 2k = n and there is a natural number l such that 2l = m,
then there exists a natural number j such that 2j = n + m.
(b) For every natural number, n, either there exists a natural number
k such that 2k = n or there exists a natural number k such that
2k + 1 = n.
(c) If n is a natural number, then there exists a natural number k
such that 2k = n2 + n.
(d) There is no real number x such that x < 0 and whenever y < 0
for some real number y, then y ≤ x.

2.7.5 [6 x] Consider our running example from 2.3.7 and its final proof:
Proposition. Let n be a natural number such that n is prime. If
n > 2, then n is odd.

Proof. Let n be a natural number and assume n is prime. By definition,


this means that (i) 1 < n and (ii) there are no natural numbers k, l < n
such that n = k · l. We want to show that if n > 2, then n is odd. So,
suppose that n > 2. By definition, for n to be odd would mean that n
is not even. We claim that given our assumptions, n cannot be even
and hence must be odd. For suppose that n is even. By definition, this
would mean that there exists an m such that n = 2m. But this would
contradict condition (ii) for n being prime: just let k = 2 and l = m.
Note that since 1 < n and n = 2m, it follows that m < n, and we have 2 < n by assumption. So, n cannot be even, which means that n must be
odd.

Describe the theorem and its proof in natural language without the
use of mathematical symbols.

2.8 Further Readings


It will take a while for you to become perfectly comfortable with mathemat-
ical writing and reasoning. Here are some references to books which you can
use to learn more about how mathematicians write and think:
• Houston, Kevin. 2009. How to Think Like a Mathematician. A Companion to Undergraduate Mathematics. Oxford, UK: Oxford University Press.
The book has a homepage http://www.kevinhouston.net/httlam.html, which includes sample chapters, corrections, etc.
I particularly recommend reading chapters 5, 14–25, and 32–35 (don’t
worry they’re short).

• Vivaldi, Franco. 2014. Mathematical Writing. London, UK: Springer.

I can warmly recommend both of these books; they will make your life much
easier when it comes to studying any field that uses modern mathematics,
such as logic, (parts of) philosophy, linguistics, (theoretical) computer sci-
ence, . . . .
Most of the things we covered above are covered in those books at greater
length and in more detail. It might be that, here and there, the books contra-
dict what I said above by way of advice—but those are primarily questions
of style and not of substance.

Self-Study Solutions

2.6.1 (b): the definition is circular; (c): there is no such number; (f): there's more than one: i, −i.

2.6.2 (a): might work; (d,e): most promising; (b,c,f): not very promising.
Chapter 3

Elementary Set Theory

3.1 Sets and Set Notation


3.1.1 A set is a collection of objects, called its elements or members. The
elements of a set are also said to belong to the set or to be contained in
the set. A set may contain any kind of objects whatsoever: numbers,
symbols, people, or even other sets. For X a set and x an object, we
write x ∈ X to say that x is an element of X and we write x ∉ X to say that x is not an element of X. If we have many objects x1, . . . , xn, then we also write x1, . . . , xn ∈ X to say that x1 ∈ X, and . . . , and xn ∈ X.

3.1.2 If the elements of a set are precisely a1 , . . . , an , then we can denote


the set by {a1 , . . . , an }. This is called an extensional definition of the
set. So, the set {1, a, {Robbie, 0}}, for example, contains precisely the
number 1, the symbol a, and the set {Robbie, 0}, which in turn con-
tains Robbie and the number 0 as elements.

3.1.3 A set may contain any number of elements. The set {0}, for example,
has just one member—the number 0. A set with exactly one member
is also called a singleton set.1 The set {2, 14} has two members. And so
on. Sets can also have infinitely many members. An important infinite
set we’ll encounter frequently is N, the set of all natural numbers. You
might be tempted to write N = {0, 1, 2, . . .} but it’s important to resist
this temptation. In order to define a set, for each object it needs to
be clear whether it's an element of the set or not. And who's to say that the list 0, 1, 2, . . . continues 0, 1, 2, 3, 4, . . . and not 0, 1, 2, 4, 6, . . .? This means that if we write N = {0, 1, 2, . . .}, this leaves open whether 3 ∈ N or 3 ∉ N.

¹It's important to distinguish the set {0} from the number 0.

3.1.4 Other infinite sets we'll encounter are: Z, the set of the integers (positive and negative whole numbers); Q, the set of rational numbers (fractions of integers); and R, the set of real numbers (you know, the one with √2, π, e, . . . in it).

3.1.5 There also exists a set with no elements at all, the so-called empty set.
This set is of fundamental importance in logic and mathematics. We
denote this set by {} or ∅. Note especially that for each object x, we have that x ∉ ∅.

3.1.6 If the elements of a set are precisely the objects satisfying condition
Φ, then we can denote the set by {x : Φ(x)}. This is called a definition
by set abstraction. For example, {x : x is a prime number} is the set
that contains all and only the prime numbers. So we have that 3 ∈ {x : x is a prime number} but 4 ∉ {x : x is a prime number}. Note that
by Euclid’s theorem, the set {x : x is a prime number} has infinitely
many elements. So, using set abstraction, we can denote infinite sets
by a finitary expression.
To be perfectly clear: an object a is a member of the set {x : Φ(x)} iff
a satisfies the condition Φ, i.e. Φ(a)!

3.1.7 Set abstraction is typically carried out over the elements of an already
known set X, i.e. we consider the set of all members of X that sat-
isfy condition Φ. This set is denoted {x ∈ X : Φ(x)}, which is just
shorthand for {x : x ∈ X and Φ(x)}. The background set of a set
abstraction can make a sigificant difference to the sets
√ denoted.
√ E.g.
{x ∈ N : x × x = 2} = ∅ but {x ∈ R : x × x = 2} = { 2, − 2}.
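These notes don't use a programming language, but as an illustrative aside (Python is our choice here, not something the text assumes), set abstraction over a background set corresponds directly to a set comprehension:

```python
# Set abstraction over a background set, {x in X : Phi(x)}, written as a
# Python set comprehension. The background set matters: the same
# condition over different background sets can pick out different sets.
N100 = set(range(100))  # a finite stand-in for an initial segment of N

evens = {n for n in N100 if n % 2 == 0}
primes = {n for n in N100 if n > 1 and all(n % k != 0 for k in range(2, n))}

print(3 in primes)  # True
print(4 in primes)  # False
```

The condition Φ becomes the `if`-clause; membership in the resulting set is tested with `in`, just as with ∈.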

3.1.8 When we're doing set abstraction, we also often implicitly assume that the members of the new set have a specific form. E.g. we would write Q = {n/m : n, m ∈ Z, m ≠ 0} to say that Q is the set of fractions of integers. In this case, {n/m : n, m ∈ Z, m ≠ 0} is shorthand for:

{x : there exist n, m ∈ Z such that x = n/m, where m ≠ 0}.

We extend this notation later to the more general case.

3.2 The Subset Relation


3.2.1 One set is called a subset of another iff every element of the one set is
also an element of the other. A bit more precisely, for all sets X and
Y , X is a subset of Y iff for each object x, if x ∈ X, then x ∈ Y .
We write X ⊆ Y to say that X is a subset of Y and we write X ⊈ Y to say that X is not a subset of Y . Note that X ⊈ Y iff there is at least one x such that x ∈ X but x ∉ Y . So, for example, {0, 1} ⊆ {1, a, 0, {b, 1}} but {b, 1} ⊈ {1, a, 0, {b, 1}}. Note that b ∈ {b, 1} but b ∉ {1, a, 0, {b, 1}}. The moral here is that it's important to distinguish between subsets and elements: even though {b, 1} ⊈ {1, a, 0, {b, 1}}, we have that {b, 1} ∈ {1, a, 0, {b, 1}}.

3.2.2 It’s easily checked that every set is a subset of itself. This will be the
first proposition that we prove:

Proposition. For all sets X, we have that X ⊆ X.

Proof. This might seem "obvious," but it's important to prove even obvious-seeming facts. After all, you might think that something's obvious when it turns out to be false! So, consider any arbitrary set X
and arbitrary object x. Suppose that x ∈ X. It follows, trivially, that
x ∈ X. Since x was arbitrary, this means that for each x, if x ∈ X,
then x ∈ X, which is another way of saying that X ⊆ X. Since X was
also arbitrary, it follows that for each set X, we have that X ⊆ X.

3.2.3 A set X is called a proper subset of another set Y iff X ⊆ Y but Y ⊈ X. We write X ⊂ Y to say that X is a proper subset of Y and X ⊄ Y to say that X is not a proper subset of Y .

Proposition. For all sets X, we have that X ⊄ X.

Proof. We prove this fact "indirectly," that is, we show that the assumption that some set is a proper subset of itself leads to a contradiction. Hence it cannot be that any set is a proper subset of itself. Suppose that some set X is such that X ⊂ X. Call this set A. By the definition of ⊂, we have that A ⊆ A and A ⊈ A, which is a contradiction. So there exists no set X such that X ⊂ X.

3.2.4 It’s also instructive to show that the empty set, ∅, is a subset of every
set whatsoever:

Proposition. For each set X, we have that ∅ ⊆ X.

Proof. We show this fact again indirectly. Suppose that there exists a set X such that ∅ ⊈ X. Call this set A. We get that ∅ ⊈ A, which means that there exists at least one object x such that x ∈ ∅ but x ∉ A. Call this object a. We get that a ∈ ∅. But we know that for each x, x ∉ ∅, and hence a ∉ ∅. We've arrived at a contradiction, a ∈ ∅ and a ∉ ∅. Hence the assumption that there exists a set X such that ∅ ⊈ X is false, which means that for each set X, we have that ∅ ⊆ X.

3.2.5 For X a set, we define the power set to be

℘(X) = {Y : Y ⊆ X}.

That is, the power set of X is the set of all the subsets of X. So, for
example, we have that ℘({1, 2}) = {∅, {1}, {2}, {1, 2}}. Note that by
the propositions proved in 3.2.2 and 3.2.4, for every set X, we have
that ∅, X ∈ ℘(X).
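As a Python aside (illustration only; the helper name power_set is ours), the power set of a small finite set can be computed with itertools:

```python
from itertools import combinations

def power_set(X):
    # Build ℘(X) as a set of frozensets: Python sets can't contain
    # ordinary (mutable) sets, so each subset is a hashable frozenset.
    elems = list(X)
    return {frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)}

P = power_set({1, 2})
print(len(P))                  # 4 subsets: {}, {1}, {2}, {1, 2}
print(frozenset() in P)        # True: the empty set is always a subset
print(frozenset({1, 2}) in P)  # True: so is X itself
```

Note that a set with n elements has 2ⁿ subsets, which is why ℘({1, 2}) has 4 members.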

3.3 The Axiom of Extensionality


3.3.1 Sets are individuated by their members. This is captured in the so-
called axiom of extensionality, which states that two sets are identical
iff they have exactly the same elements. More formally:

Axiom of Extensionality. For all sets X and Y , X = Y iff X ⊆ Y


and Y ⊆ X.

For example, it follows from the axiom of extensionality that the sets {1, 2}, {2, 1}, {1, 1, 2}, {2, 1, 1, 2}, . . . are all one and the same set, for they have precisely the same members. In other words, in set theory, the order and multiplicity of the elements of a set don't matter.
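Python's built-in sets (an illustrative aside, not part of the notes) behave extensionally as well: equality depends only on membership, never on the order or multiplicity with which the elements are written down.

```python
# Extensionality in Python: equality of sets is membership equality.
A = {1, 2}
B = {2, 1, 1, 2}
print(A == B)    # True: exactly the same members
print(A == {1})  # False: 2 belongs to A but not to {1}
```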

3.3.2 Note that in order to show that two sets X and Y are identical, we
have to show two things: (i) we have to show that X ⊆ Y and (ii)
we have to show that Y ⊆ X. No proof of a purported set-identity is
complete without having established both of these facts. To show that
two sets X and Y are distinct, in contrast, it’s enough to establish one
of X ⊈ Y or Y ⊈ X. In other words, it suffices to show either that there exists an x with x ∈ X and x ∉ Y or that there exists an x with x ∈ Y and x ∉ X.

3.3.3 It is an interesting consequence of the axiom of extensionality that


there exists precisely one empty set:

Proposition. Let's say that a set X is empty iff for all objects x, we have that x ∉ X. Then we have that for all sets X, if X is empty, then X = ∅.

Proof. We prove this indirectly. So, suppose that there exists a set X such that X is empty but X ≠ ∅. Call this set A. It follows that A ≠ ∅, which means that either A ⊈ ∅ or ∅ ⊈ A. We've already established that ∅ ⊆ X for every set X (cf. 3.2.4), so we can focus on the case that A ⊈ ∅. If A ⊈ ∅, this means that there exists an object x such that x ∈ A but x ∉ ∅. Call this object a. We get that a ∈ A. But we've assumed that A is an empty set, meaning that for all x, x ∉ A. So, certainly, a ∉ A. We've arrived at a contradiction, a ∈ A and a ∉ A, meaning that the assumption that there exists an empty set X with X ≠ ∅ is false. Hence, for all sets X, if X is empty, then X = ∅.

3.4 Operations on Sets: Union, Intersection, Difference
3.4.1 The union of two sets contains all the objects that are in at least one
of the two sets. We denote the union of X and Y by X ∪ Y . More
formally, for two sets X and Y , we define

X ∪ Y = {x : x ∈ X or x ∈ Y }.

Note that if an element is in both sets, then it is also an element of their


union. So, for example, {1, 2} ∪ {2, 3} = {1, 2, 3}. This is because, in
logic and mathematics, we typically read “or” inclusively: to say that
one thing or another is the case is to say that the one thing is the case,
the other is the case, or both are the case.

3.4.2 The intersection of two sets contains all things that are in both sets.
We denote the intersection of X and Y by X ∩ Y . More formally,

X ∩ Y = {x : x ∈ X and x ∈ Y }.

In words, X ∩ Y contains all the things that are both elements of X


and of Y . So, for example, {1, 2} ∩ {2, 3} = {2}. Note that if X and Y
don’t have any members in common, then X ∩ Y = ∅. So, for example,
{1, 2} ∩ {3, 4} = ∅.
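For finite sets, union and intersection match Python's `|` and `&` operators (an aside; Python is not assumed anywhere in these notes):

```python
# Union and intersection on Python's built-in sets.
A, B = {1, 2}, {2, 3}
print(A | B)            # the union {1, 2, 3}: "or" read inclusively
print(A & B)            # the intersection {2}
print({1, 2} & {3, 4})  # disjoint sets: the intersection is set(), i.e. ∅
```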

3.4.3 We can prove a characterization of the subset-relation in terms of


intersection:

Proposition. For all sets X and Y , if X ∩ Y = X, then X ⊆ Y .

Proof. Let A and B be two arbitrary sets and suppose that A∩B = A.
We need to show that for each x, if x ∈ A, then x ∈ B. So take an
arbitrary object a and suppose that a ∈ A. Suppose for indirect proof that a ∉ B. Since A ∩ B = A, it follows that a ∈ A ∩ B. But A ∩ B is defined as {x : x ∈ A and x ∈ B}, so it follows that a ∈ B. We have arrived at a contradiction, a ∈ B and a ∉ B, which means that our assumption a ∉ B is false. Hence a ∈ B. So, if a ∈ A, then a ∈ B.
Since a was arbitrary, this means that for each x, if x ∈ A, then x ∈ B.
This just means that A ⊆ B. So, if A ∩ B = A, then A ⊆ B. And
since A and B were both arbitrary, we have that for all X and Y , if
X ∩ Y = X, then X ⊆ Y .

There is an analogous characterization of the subset-relation in terms


of union, which you show as an exercise (see the end of the chapter).

3.4.4 The operations of union and intersection can also be applied to more than two sets. A technically convenient way of doing this is to define union (intersection) for sets of sets. Suppose that 𝒳 is a set of sets. Then we define:

⋃𝒳 = {x : there exists an X ∈ 𝒳 such that x ∈ X}

⋂𝒳 = {x : for all X ∈ 𝒳 , we have that x ∈ X}

For example, consider the sets {1, a, b}, {1, a, c}, {1, b, d}. We get:

⋃{{1, a, b}, {1, a, c}, {1, b, d}} = {1, a, b, c, d}

⋂{{1, a, b}, {1, a, c}, {1, b, d}} = {1}

It's easily checked that for any two sets X, Y , we have that X ∪ Y = ⋃{X, Y } and X ∩ Y = ⋂{X, Y } (exercise). The real advantage of the new operations ⋃ and ⋂ is that they can also be applied to infinite sets of sets, but for now, we don't need to worry about that.
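Generalized union and intersection over a finite collection also have direct Python counterparts (illustration only; the variable names are ours):

```python
# Generalized union and intersection over a collection of sets.
# frozensets are used so the member sets could themselves sit inside a set.
X = [frozenset({1, 'a', 'b'}), frozenset({1, 'a', 'c'}), frozenset({1, 'b', 'd'})]

big_union = set().union(*X)                        # in at least one member
big_intersection = set(X[0]).intersection(*X[1:])  # in every member

print(big_union == {1, 'a', 'b', 'c', 'd'})  # True
print(big_intersection)                      # {1}
```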

3.4.5 The difference between one set and another are all the elements that
are in the one but not the other. The difference between X and Y is
denoted X \ Y and defined by

X \ Y = {x ∈ X : x ∉ Y }.

If X and Y overlap, i.e. if X ∩ Y ≠ ∅, then X \ Y is just the result of taking the elements of Y out of X. So, for example, {1, 2, 3, 4} \ {2, 4, 6} = {1, 3}. If X and Y don't overlap, set-difference doesn't do anything:

Proposition. For all sets X and Y , if X ∩ Y = ∅, then X \ Y = X.

Proof. Let A and B be two arbitrary sets such that A ∩ B = ∅. We need


to establish that A \ B = A. Since this is a set-identity, we need to
make use of the axiom of extensionality, i.e. we need to show that (i)
A \ B ⊆ A and that (ii) A ⊆ A \ B.
Claim (i) is almost immediate from the definition of A \ B as {x ∈ A : x ∉ B}. For take an arbitrary object a and suppose that a ∈ A \ B. Since A \ B = {x ∈ A : x ∉ B}, this means that a ∈ A and a ∉ B. So, surely, a ∈ A. But since a was arbitrary, this means that for every object x, if x ∈ A \ B, then x ∈ A, which just means that A \ B ⊆ A.

To establish claim (ii), we make use of our assumption that A ∩ B = ∅.


Take an arbitrary object a ∈ A. We need to show that a ∈ A \ B = {x ∈ A : x ∉ B}. We already have one part of the required condition, i.e. a ∈ A, so let's check the other, i.e. a ∉ B. Suppose, for indirect proof, that a ∈ B. This would mean that a ∈ A and a ∈ B. But then a ∈ A ∩ B (since A ∩ B = {x : x ∈ A and x ∈ B}), and we've assumed that A ∩ B = ∅. So, we have a contradiction, which means that a ∉ B. So, a ∈ A and a ∉ B, so a ∈ A \ B. Since a was arbitrary, we conclude that for all x, if x ∈ A, then x ∈ A \ B, i.e. A ⊆ A \ B.
Since we’ve established that A\B ⊆ A and A ⊆ A\B, we can conclude
that A \ B = A, as desired, using the axiom of extensionality.

3.5 Ordered-Pairs and Cartesian Products


3.5.1 An ordered pair is a set-like collection of two objects, except that
order matters. The ordered pair with a as its first component and b
as its second component is denoted (a, b). Note that (a, b) ≠ (b, a). In fact, two ordered pairs are identical iff they have exactly the same components in the same place, that is: (a1, a2) = (b1, b2) iff a1 = b1 and
a2 = b2 . Note also that the number 1 is distinct from the ordered pair
(1, 1), which has the number 1 both as its first and second component.

3.5.2 The notion of an ordered pair can be generalized. For n ≥ 2 a natural


number, an (ordered) n-tuple is a set-like collection of n-objects in that
order. We write (a1, . . . , an) for the ordered n-tuple which has a1 as
its first component, a2 as its second component, . . . , up until an as its
n-th component. So, for example, the 3-tuple (1, a, ∅) has the number
1 as its first component, the symbol a as its second component, and
the empty set as its third component. Note that in contrast to sets,
order and multiplicity are very important with n-tuples. For example,
(2, 1, 1) is distinct from (2, 1) and (2, 1) is distinct from (1, 2).

3.5.3 The Cartesian product of one set with another contains all the ordered
pairs that can be formed from taking an element of the first set as
the first component and an element of the second set as the second
component. For two sets X and Y , we write X × Y for their Cartesian
product. Formally, we can define this by saying that

X × Y = {(x, y) : x ∈ X and y ∈ Y }.

3.5.4 If we have n sets X1 , . . . , Xn , then their Cartesian product is the set of


all n-tuples with the first component from X1 , the second component
from X2 , and so on up to the n-th component from Xn . We denote

the Cartesian product of X1 , . . . , Xn by X1 × . . . × Xn . More formally,

X1 × . . . × Xn = {(x1 , . . . , xn ) : x1 ∈ X1 , . . . , xn ∈ Xn }.

To illustrate, consider the sets {1, 2} and {a, b}. We get that {1, 2} × {a, b} = {(1, a), (1, b), (2, a), (2, b)}. Note that {a, b} × {1, 2} ≠ {1, 2} × {a, b}, since {a, b} × {1, 2} = {(a, 1), (a, 2), (b, 1), (b, 2)}. The special case where X1 = . . . = Xn = X will be important, where we also denote

X × . . . × X (n times)

by Xⁿ. We have, for example, that

{1, 2}² = {1, 2} × {1, 2} = {(1, 1), (1, 2), (2, 1), (2, 2)}.
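Cartesian products of finite sets can be enumerated with `itertools.product` (a Python aside; tuples play the role of ordered pairs):

```python
from itertools import product

X, Y = {1, 2}, {'a', 'b'}
XY = set(product(X, Y))          # X × Y as a set of ordered pairs (tuples)
print(len(XY))                   # 4

# Order matters: X × Y and Y × X contain flipped pairs.
print(set(product(Y, X)) == XY)  # False

# X² = X × X:
print(sorted(product(X, repeat=2)))  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```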

3.6 Properties, Relations, and Functions


3.6.1 A property, P , over a set of objects, X is a set of elements of X, i.e.
P is a property over X iff P ⊆ X. The idea is that a ∈ P iff a has the
property P . For example, the property of being even over the natural
numbers is the set

{n ∈ N : there exists a k ∈ N, such that k ≤ n and n = 2k}.

3.6.2 A binary relation, R, over a set of objects, X, is a set of ordered pairs of elements from X, i.e. R is a binary relation over X iff R ⊆ X². The idea is that (a, b) ∈ R iff the object a stands in the relation R to b. For example, consider the set A = {1, 2, 3, 4}. The set R = {(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)} is a relation over A. Intuitively, it's the relation being strictly smaller than. In fact, we can check that for all n, m ∈ A, (n, m) ∈ R iff n < m, i.e. R = {(n, m) ∈ A² : n < m}. In the case of binary relations, we sometimes also use the notation aRb instead of (a, b) ∈ R to say that a stands in the relation R to b. E.g. if ≤ is the relation {(a, b) ∈ N² : a ≤ b} of being smaller than or equal over the natural numbers N, then instead of (a, b) ∈ ≤, we also just write a ≤ b.

3.6.3 More generally, an n-ary relation over X, R, is simply a set of n-tuples of elements from X, i.e. R ⊆ Xⁿ. We say that a1, . . . , an stand in relation R iff (a1, . . . , an) ∈ R.

3.6.4 A function, f , from one set, X, to another, Y , assigns to each element


a ∈ X a unique element f (a) ∈ Y . That is, for all a, b ∈ X, if f (a) ≠ f (b), then a ≠ b. In this case, the set X is called the domain of f . The members of the domain are the possible inputs for f , for which f is defined. We denote the domain of a function f by dom(f ). The set Y , instead, is called the range of f . The range contains the possible values of f . We denote the range of f by rg(f ). We also write f : X → Y to say that f is a function from X to Y , i.e. dom(f ) = X and rg(f ) = Y . To say that the function f : X → Y assigns b ∈ Y as the value to a ∈ X, we write f (a) = b or a ↦ b.

3.6.5 Here are some assignments from {a, b, c, d} to {1, 2, 3, 4} that aren't functions: one that assigns more than one value to a, and one that assigns no value to d.

The point is every element of the domain needs to be assigned exactly one value from the range. As long as these requirements are met, we have a function. Two such assignments are the functions f1 and f2 specified below.

3.6.6 Functions are everywhere in mathematics. Take, for example, the suc-
cessor function S : N → N, which is defined by S(n) = n + 1 for all
numbers n ∈ N. Note that the domain and the range of this func-
tion are the same, which is allowed. But functions can also operate on
other kinds of objects. Take the two sets {a, b, c, d} and {1, 2, 3, 4} from above. We can specify the two functions f1 and f2 from 3.6.5 as follows:

f1 f2
a 2 a 1
b 4 b 4
c 2 c 4
d 3 d 2

This is called a function table. It tells us for every possible input from
{a, b, c, d} what the output in {1, 2, 3, 4} is. For f1 , we have, for ex-
ample, f1 (a) = 2, f1 (b) = 4, f1 (c) = 2, and f1 (d) = 3, while for f2 ,
we have f2 (a) = 1, f2 (b) = 4, f2 (c) = 4, and f2 (d) = 2. Note that not
every element in the range is assigned as a value to some input. This
is allowed, since the range only contains the possible values for f . The
actual values of f : X → Y are the members of the set {f (x) : x ∈ X}.
This set is called the image of f , and it’s denoted im(f ). We have, for
example, im(f1 ) = {2, 3, 4} and im(f2 ) = {1, 2, 4}.
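In Python (an aside), a function table is naturally a dict, and the image is the set of its values:

```python
# A function with a finite domain is just its function table; a dict
# from domain elements to values plays exactly that role.
f1 = {'a': 2, 'b': 4, 'c': 2, 'd': 3}
f2 = {'a': 1, 'b': 4, 'c': 4, 'd': 2}

print(f1['a'])           # 2, i.e. f1(a) = 2

# The image im(f) = {f(x) : x in dom(f)} collects the actual values:
print(set(f1.values()))  # {2, 3, 4}
print(set(f2.values()))  # {1, 2, 4}
```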

3.6.7 What’s important when specifying a function f : X → Y is to say


for each x ∈ X what the value f (x) ∈ Y is. This can be done in
many different ways. Above, we’ve already seen a function table, which
will be a useful method of specifying a function. But this, of course,
only works when the domain of f is finite. In cases where the domain
is infinite, we can specify a function like we did in the case of the
successor function, by means of a function rule. The function rule of
S : N → N was given by S(n) = n + 1. This can also be written as n ↦ n + 1. It's important to note that we can't always figure
out the function rule in such a clear way. There are functions where
we don’t know the function rule, but that doesn’t stop them from
being functions. We can, and in fact will, often talk about functions
abstractly, without knowing what their function rule is (or, in fact, if
there even is an intelligible one). All we know in such a case is that
every member of the domain gets precisely one value from the range.
That’s it.

3.6.8 A common way of specifying the values a function gives is by distin-


guishing the possible inputs, the domain, into a finite list of exclusive
and exhaustive cases and say which output the function gives for each
of these cases.2 The idea is best illustrated by means of an example.
Consider the function f : N → {0, 1}, which gives the result 1 when
applied to an even number and the result 0 when applied to an odd
number. A concise way of writing this is as follows:
f (n) = 1 if n is even
        0 if n is odd

There can, of course, be more than two cases. Consider the function
g : N → {1, 2, 3}, which assigns 1 to every even number, 2 to every
prime bigger than 2, and 3 to every other number. This function can be determined as follows:

g(n) = 1 if n is even
       2 if n is prime and n > 2
       3 otherwise

²The list needs to be exclusive in order to avoid that an input gets more than one value, and it needs to be exhaustive to ensure that every input gets a value.

Note that the “otherwise” here is a good catch-all to make an otherwise


non-exhaustive list exhaustive.
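Definition by cases corresponds to an if/elif/else chain; here is a Python sketch of the function g above (the primality test is our own spelling-out of "n is prime"):

```python
def g(n):
    # g : N -> {1, 2, 3}, defined by exclusive and exhaustive cases.
    if n % 2 == 0:
        return 1                                        # n is even
    if n > 2 and all(n % k != 0 for k in range(2, n)):
        return 2                                        # odd prime above 2
    return 3                                            # the "otherwise" case

print([g(n) for n in (4, 7, 9)])  # [1, 2, 3]
```

The final `return` is the catch-all "otherwise": it guarantees every input gets exactly one value.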

3.6.9 An n-ary function from X to Y is a function f : Xⁿ → Y . That is, f assigns a value from Y to every n-tuple of members from X. For (a1, . . . , an) ∈ Xⁿ, we also write f (a1, . . . , an) for the more correct f ((a1, . . . , an)). The case of a binary function f : X² → Y will be
particularly important in the following. In cases where X is finite, we can also give a function table for f : X² → Y . Consider, for example, the function f : {a, b, c}² → {0, 1} given by the following assignment:

(a, a) ↦ 0   (b, a) ↦ 0   (c, a) ↦ 1
(a, b) ↦ 1   (b, b) ↦ 0   (c, b) ↦ 1
(a, c) ↦ 1   (b, c) ↦ 1   (c, c) ↦ 0

This assignment can be given in table form as follows:

f a b c
a 0 1 1
b 0 0 1
c 1 1 0

The convention hereby is that the first input is in the left-most column and the second input in the top-most row. Notice that f (x, y) ≠ f (y, x) is possible, e.g. in our case f (a, b) = 1 ≠ 0 = f (b, a).
So, generally, if X = {a1 , . . . , an } is a finite set, the function table for
a function f : X 2 → Y is given as follows:

f    a1           ···   an
a1   f (a1, a1)   ···   f (a1, an)
⋮    ⋮                  ⋮
an   f (an, a1)   ···   f (an, an)
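In Python (aside), a binary function with a finite domain can be stored as a dict keyed by ordered pairs; here is the table for f from above:

```python
# The binary function f : {a, b, c}^2 -> {0, 1} from the table above,
# stored as a dict keyed by ordered pairs.
f = {('a', 'a'): 0, ('a', 'b'): 1, ('a', 'c'): 1,
     ('b', 'a'): 0, ('b', 'b'): 0, ('b', 'c'): 1,
     ('c', 'a'): 1, ('c', 'b'): 1, ('c', 'c'): 0}

print(f[('a', 'b')], f[('b', 'a')])  # 1 0: f(x, y) and f(y, x) can differ
```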

3.6.10 So far, we spoke about functions using the informal notion of an as-
signment. Formally speaking, however, a function is typically under-
stood as a special kind of set. A function f is understood as a triple

(dom(f ), rg(f ), Rf ). Here, dom(f ) and rg(f ) are arbitrary sets, which
constitute the domain and range of the function respectively. The spe-
cial component is Rf , which, intuitively, is the assignment relation of
the function. More formally, Rf ⊆ dom(f ) × rg(f ) is a set of pairs
(x, y) where x ∈ dom(f ) and y ∈ rg(f ) subject to the two conditions:

Left-totality. For each x ∈ dom(f ), there exists a y ∈ rg(f ) such


that (x, y) ∈ Rf .
Right-uniqueness. If (x, y) ∈ Rf and (x, z) ∈ Rf , then y = z.

We will not rely on the formal definition of a function much in this


course, but we will need it for the semantics of first-order logic. We
conclude our discussion of functions with the fully formal definition of
the function f1 from 3.6.6, as an example:

f1 = ({a, b, c, d}, {1, 2, 3, 4}, {(a, 2), (b, 4), (c, 2), (d, 3)})
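The two conditions can be checked mechanically; a Python sketch (the helper name is_function is ours) that tests a relation for left-totality and right-uniqueness:

```python
def is_function(dom, rng, R):
    # (dom, rng, R) with R a subset of dom x rng is a function iff R is
    # left-total (every x in dom gets some value) and right-unique
    # (no x in dom gets two different values).
    left_total = all(any(a == x for (a, b) in R) for x in dom)
    right_unique = all(b == d for (a, b) in R for (c, d) in R if a == c)
    return left_total and right_unique

R1 = {('a', 2), ('b', 4), ('c', 2), ('d', 3)}
print(is_function({'a', 'b', 'c', 'd'}, {1, 2, 3, 4}, R1))  # True
print(is_function({'a'}, {1, 2}, {('a', 1), ('a', 2)}))     # False: not right-unique
print(is_function({'a', 'b'}, {1, 2}, {('a', 1)}))          # False: not left-total
```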
3.6.11 Finally, we can generalize our notation {n/m : n, m ∈ Z, m ≠ 0} from 3.1.8 to the general case. For f : X → Y and X′ ⊆ X, we define:

{f (x) : x ∈ X′} = {y : there exists an x ∈ X′ such that y = f (x)}

This is really just a useful abbreviation, which we’ll use here and there.

3.7 Inductive Definitions and Proof by Induction


The ideas of this section are among the hardest of the course. But these
ideas lie at the heart of logical theory, that’s why it’s important that we
discuss them from different angles. Here we begin with some examples and a
description of the general idea. In the following chapter, we will apply these
ideas to define a logical language. If not everything is perfectly clear after
this chapter, don’t despair—keep working on it and (hopefully) it will all
make sense soon.

3.7.1 In 3.1.3, we mentioned that N shouldn’t be written {0, 1, 2, . . .}. We’ll


now discuss a powerful method for defining infinite sets like N, so-
called inductive definitions. Let’s begin with the natural numbers as
an example. Note that the natural numbers are essentially just zero
and all its successors, where a number m is said to be the successor of
a number n iff m = n + 1. So, clearly, zero is a natural number and
if we take a natural number, then its successor (the result of adding
one) is also a natural number. More precisely:

(i) 0 ∈ N
(ii) For all n, if n ∈ N, then n + 1 ∈ N
CHAPTER 3. ELEMENTARY SET THEORY 54

Using these two facts, we easily can show that 1, 2, 3, . . . are all natural
numbers. Take the number three. Here’s how we show that 3 ∈ N: We
know that 0 ∈ N by (i). By (ii), it follows that 0 + 1 = 1 ∈ N. Again by
(ii) it follows that 1 + 1 = 2 ∈ N. Finally, again by (ii), it follows that
2 + 1 = 3 ∈ N. This clearly generalizes to every natural number n. By
n applications of (ii), we can show that n ∈ N. The bottom-line is that
(i) and (ii) together allow us to derive for every natural number that
it is a member of the set N. But we want more. We also want to be
able to show that only natural numbers are members of N, i.e. there is
no number which is not a natural number but a member of N. To do
that, we use a simple trick: we simply stipulate that the objects which
can be shown to be members of N by (i) and (ii) are all the natural
numbers. This is typically written as follows:

(iii) Nothing else is a member of N.

Together (i), (ii), and (iii) constitute an inductive definition of the


natural numbers. Many sets can be defined inductively (and some
cannot). The reason why we’re discussing inductive definitions now
is that the set of formulas of a formal language is usually given an
inductive definition. What’s particularly appealing about inductive
definitions is that they allow us to define an infinite set, like N, in a
finitary way—using just three conditions viz. (i), (ii), and (iii). This is
very important for the computer implementability of arithmetic (the
theory of the natural numbers): without a finitary way of encoding the
numbers, how should a computer be able to handle them?

3.7.2 How do you show that a number n is not a member of N? Well,


essentially, what you have to show is that there is no way of reaching
n by repeatedly adding one to zero. It seems clear, for example, that
we can’t reach 1/2 in this way. But to actually prove it, we will need a
more precise version of inductive definitions, which we’ll discuss below.
However, most of the time, it will be enough to “see” that a number
can’t be constructed by repeatedly adding one to zero to justify the
claim that it’s not in N.

3.7.3 If a set is defined inductively, then there’s a powerful way of defining


functions on the set, where the function is defined “following the in-
ductive definition.” This method is known as function recursion. To
illustrate the idea, let’s use function recursion to define a function
f : N² → N. In order to define f , we need to say for every pair of numbers
n, m ∈ N what the result of f (n, m) is. This is done by recursion
as follows:

(i) For every number n ∈ N, f (n, 0) = 0.



(ii) For all numbers n, m ∈ N, f (n, m + 1) = f (n, m) + n.

Note the pattern here: we first say what the result of f (n, 0) is, and
then we say what the result of f (n, m + 1) is, but in terms of what the
result of f (n, m) is. In this way, since zero and its successors are all
the natural numbers, we’ve said for every number what the result of
f (n, m) is. To see that that’s the case, let’s calculate f (3, 2) using the
recursive definition (i) and (ii):

ˆ f (3, 0) = 0, by clause (i)


ˆ So, f (3, 1) = f (3, 0 + 1) = f (3, 0) + 3 = 3 by clause (ii).
ˆ So, f (3, 2) = f (3, 1 + 1) = f (3, 1) + 3 = 3 + 3 = 6 by clause (ii).

Here, we calculated “bottom up:” we started from f (3, 0) and figured


out what f (3, 2) needed to be. We might as well have gone “back-
wards,” as follows:

ˆ We want to know the result of f (3, 2). But f (3, 2) = f (3, 1 + 1)


and, by (ii), f (3, 1 + 1) = f (3, 1) + 3.
ˆ So, in order to calculate f (3, 2), we need to calculate f (3, 1). Now,
f (3, 1) = f (3, 0 + 1) and by (ii) f (3, 0 + 1) = f (3, 0) + 3.
ˆ So, we need to calculate f (3, 0), but we know what that is by (i),
viz. f (3, 0) = 0.
ˆ Putting it all together, we get that f (3, 2) = f (3, 1+1) = f (3, 1)+
3 = f (3, 0 + 1) + 3 = (f (3, 0) + 3) + 3 = 3 + 3 = 6.

In this way of calculating the result, in each step, we need to figure out
the result for a lower number, until eventually, we need to figure out
the result for zero. This “calling upon” of results for lower numbers
is where “function recursion” gets its name from. Note that because
every number is the result of adding one to zero a bunch of times, this
procedure works.
Do you recognize the function f ? What does it do? Think about it
before you move on, we’ll answer the question in a moment.
A side-remark: function recursion is of fundamental importance in
computer implementations of calculation. You can easily see why: it
allows us to specify a function with an infinite domain in a finitary
way. Otherwise, how should a computer, with finite memory, be able
to deal with functions on the natural numbers?
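Because the recursion tells us how to compute f , it translates directly into a program. The sketch below is my own code (not from the text); it follows the “backwards” calculation: each call for m + 1 calls upon the result for m, until the base case f (n, 0) is reached.

```python
def f(n, m):
    """The function from 3.7.3, computed by recursion on m."""
    if m == 0:
        return 0               # clause (i): f(n, 0) = 0
    return f(n, m - 1) + n     # clause (ii): f(n, m + 1) = f(n, m) + n
```

For example, f(3, 2) calls f(3, 1), which calls f(3, 0), and the results are then assembled on the way back up, giving 6.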

3.7.4 And if a set is defined inductively, then there’s a powerful proof method
for proving things about (all) its members: proof by induction. The
idea is, once more, to follow the inductive definition of the set when

proving things about it. To make this idea clear, let’s use N again as
an example. Suppose that Φ(n) is a condition on natural numbers,
something like “if n is even, then n is not odd” or the like. Suppose
further that we can show the following two facts:

(i) Zero satisfies the condition, i.e. Φ(0).


(ii) If a number satisfies the condition, then also its successor does,
i.e. for all n ∈ N, if Φ(n), then Φ(n + 1).

In such a situation, we can conclude that every number satisfies the


condition. Why? Well, pick a number, any number. We know that
this number can be reached by successively adding one to zero—after
all, “nothing else is a natural number.” We know that zero satisfies
the condition by (i). And we know that if zero satisfies the condition,
then one satisfies the condition by (ii). So we know that one satisfies
the condition. And we know that if one satisfies the condition, then
two satisfies the condition by (ii). So, two satisfies the condition. And
we know that if two satisfies the condition, then . . . . And so on. We
will eventually reach every number like this—again, “nothing else is a
natural number.” So, if we can establish (i) and (ii), we can conclude
that every natural number has the property.

3.7.5 In an inductive proof, the condition (i) is called the base case and (ii)
is called the induction step. So, to be precise, the form of an induc-
tive proof over the natural numbers is always that we establish that
all natural numbers have a property by showing that (i) zero has the
property and (ii) if a number has the property, then also its successor
does. Note that for step (ii), we need to establish the truth of a condi-
tional: if a number has the property, then the successor of the number
has the property. We do this by conditional proof, i.e. we assume that
a number has the property, and we derive that its successor does, too.
In this very special case, the assumption is known as the induction
hypothesis and it is referred to as such in inductive proofs. Here is an
example of a proof by induction for over the natural numbers:

Proposition. Let f : N² → N be defined as in 3.7.3. Then, for all


n, m ∈ N, we have that f (n, m) = n · m.

Proof. We prove this using mathematical induction. In order to be


able to conclude our result, we have to prove two things:

(i) For all n ∈ N, f (n, 0) = n · 0. (‘base case’)


(ii) For all n, m ∈ N, if f (n, m) = n·m, then f (n, m+1) = n·(m+1).
(‘induction step’)

We prove these in turn.


For the base case, (i), note that by (i) of 3.7.3, f (n, 0) = 0 for every
n ∈ N. Since n · 0 = 0 for all n ∈ N, the claim holds.
For the induction step, (ii), let n, m ∈ N be arbitrary numbers and
assume that f (n, m) = n · m as the induction hypothesis. Consider
f (n, m + 1). By clause (ii) of 3.7.3, we know that f (n, m + 1) =
f (n, m) + n. But by the induction hypothesis, we know that f (n, m) =
n · m, so we get f (n, m + 1) = f (n, m) + n = (n · m) + n = n · (m + 1),
which is what we needed to show.
Hence, by mathematical induction, we conclude that for all n, m ∈ N,
we have that f (n, m) = n · m.
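Since the proposition identifies f with multiplication, we can also test it mechanically for small inputs; a finite check like this is of course no substitute for the inductive proof. A sketch (the code and the chosen range are mine):

```python
def f(n, m):
    """The recursively defined function from 3.7.3."""
    return 0 if m == 0 else f(n, m - 1) + n

# The proposition, checked on a finite portion of N x N:
assert all(f(n, m) == n * m for n in range(20) for m in range(20))
```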

Proof by induction over the natural numbers is also called mathemat-


ical induction. Essentially, each inductively defined set has its own
principle for proof by induction. In the following chapter, we will dis-
cuss how to prove things about all formulas of a formal language using
proof by induction.

3.7.6 You will practice some more simple cases of mathematical induction
to get the idea of how inductive proofs work. You will not have to
master the technique during the course, however—this is not a course
in number theory. In the next chapter, we’ll discuss another version
of inductive proof that you will have to master, a version of inductive
proof for formal languages.

3.7.7 Before we describe how inductive definitions work in general, let’s give
the inductive definition of N a slightly more precise shape. The problem we’re
tackling is to make the claim that “nothing else is a member of N”
mathematically precise. The standard idea for doing this is to define
N as the smallest set that contains zero and all its successors. Here we
think of a set X as smaller than another set Y iff X ⊆ Y . The sense
in which N is the smallest set containing zero and all its successors is
that for any set X such that X contains zero and all its successors,
we have that N ⊆ X. In other words, N is smaller than any other set
containing zero and all its successors. Just think of any other set that
also contains zero and all its successors, say Z, Q, and R. Clearly, we
have that N ⊆ Z, N ⊆ Q, and N ⊆ R. Can we find a set that contains
zero and all its successors but not all the members of N? If “nothing
else is a member of N” is correct, the answer would need to be: no!
So, the idea would now be to define N as the smallest set X such that
the following two conditions hold:

(i) 0 ∈ X
(ii) For all numbers x, if x ∈ X, then x + 1 ∈ X.

How do we know that such a set exists (and that it’s unique)? Well,
that needs to be postulated as an axiom of mathematics: in axiomatic
set theory, the claim that N, so defined, exists is known as the axiom
of infinity.

3.7.8 Having given a precise definition of N, we can now prove that certain
numbers aren’t natural, i.e. we can prove claims of the form x ∉ N.
To see how this works, let’s prove that 1/2 is not a natural number:

Proposition. We have that 1/2 ∉ N.

Proof. Let X be some set that satisfies the conditions (i) and (ii) of
the definition of N. Suppose further that 1/2 ∈ X. We claim that under
this assumption, also the set

Y = X \ {k/2 : k ∈ Z and k is odd}

satisfies conditions (i) and (ii).³ We prove these in turn:

(i) To see that 0 ∈ Y , first note that 0 ∈ X. So, to show that
0 ∈ Y all we need to show is that 0 ∉ {k/2 : k ∈ Z and k is odd}.
Why? Because Y = X \ {k/2 : k ∈ Z and k is odd} = {x ∈ X :
x ∉ {k/2 : k ∈ Z and k is odd}}. We prove that 0 ∉ {k/2 : k ∈
Z and k is odd} by contradiction. So suppose that 0 ∈ {k/2 : k ∈
Z and k is odd}, which would mean that there exists a k ∈ Z
which is odd and k/2 = 0. But then it follows easily that k = 0.
And 0 is even (to see this, note that 2 · 0 = 0). Hence k would
need to be both even and odd, which is impossible. Hence 0 ∉
{k/2 : k ∈ Z and k is odd}.

(ii) We need to show that for all numbers x, if x ∈ Y , then x + 1 ∈ Y .
So, let n be an arbitrary number and suppose that n ∈ Y . By
definition of Y , this means that n ∈ X. And since X satisfies
condition (ii), we get that n + 1 ∈ X. Now to show that n + 1 ∈ Y ,
we need to show that n + 1 ∉ {k/2 : k ∈ Z and k is odd}. (Why?
The answer’s essentially the same as the one to the why-question
in (i)!) So, for proof by contradiction, suppose that n + 1 ∈ {k/2 :
k ∈ Z and k is odd}. We get that there exists a k ∈ Z, such
that k is odd and n + 1 = k/2. It follows that n = k/2 − 1 = (k − 2)/2.
But now note that if k is odd, then k − 2 is odd, too. Hence,
n ∈ {k/2 : k ∈ Z and k is odd}. But since n ∈ Y , we have that
n ∉ {k/2 : k ∈ Z and k is odd}. Contradiction. So, n + 1 ∉ {k/2 : k ∈
Z and k is odd}. But now we have that n + 1 ∈ X and n + 1 ∉
{k/2 : k ∈ Z and k is odd}, which just means that n + 1 ∈ Y , as
desired.

So, if X satisfies conditions (i) and (ii) and 1/2 ∈ X, then there is a
smaller set, viz. X \ {k/2 : k ∈ Z and k is odd}, which satisfies conditions
(i) and (ii), too. But then X cannot be the smallest set satisfying
conditions (i) and (ii), i.e. X cannot be N. Now suppose, for a final
proof by contradiction, that 1/2 ∈ N. Since N satisfies conditions (i)
and (ii) from its definition, we’ve just seen that this would entail that
N ≠ N, which is impossible. Hence, by indirect proof, 1/2 ∉ N, as desired.

³Note that the definitions of even and odd can easily be generalized to Z: a number
n ∈ Z is even iff there exists a k ∈ Z such that n = 2k. And a number n ∈ Z is odd iff n
is not even.

It’s not terribly important that you get all the details of this argument,
but I want you to see the general form of how you might go about
proving that something’s not a member of an inductively defined set
(and that that’s surprisingly difficult). If you really want to understand
the proof (and, again, you don’t have to), try to prove the same result
using mathematical induction.
3.7.9 What’s particularly pleasing about our precise definition of N is that
it allows us to prove the principle of mathematical induction:
Theorem (Mathematical Induction). Suppose that Φ is a condition
on numbers such that:
(i) Φ(0)
(ii) for all natural numbers n ∈ N, if Φ(n), then Φ(n + 1).
Then it follows that all natural numbers satisfy the condition Φ, i.e.
we have that Φ(n), for all n ∈ N.

Proof. Let Φ be an arbitrary condition on numbers satisfying conditions (i) and (ii). Consider the set {x : Φ(x)}. By the conditions (i)
and (ii) of our theorem, {x : Φ(x)} satisfies conditions (i) and (ii) from
the definition of N. Since N is the smallest set satisfying those condi-
tions, we have that N ⊆ {x : Φ(x)}. But now, it easily follows that for
all n ∈ N, we have that Φ(n). For let n ∈ N be an arbitrary number.
Since N ⊆ {x : Φ(x)}, it follows that n ∈ {x : Φ(x)}. But that just
means that Φ(n), as desired.

3.7.10 Now that you’ve seen how to recursively define the natural numbers
and how we can derive the proof principle of mathematical induction
from the definition, let’s focus on the idea of inductive definitions in
general. In order to inductively define a set, we always use the following
pattern:

1. We give a set of initial elements.


2. We provide a list of constructions that allow us to form new ele-
ments from old elements.
3. We define our set as the smallest set that contains all the initial
elements and that is closed under the constructions, meaning that
if we apply the constructions to elements of the set, the results are elements of the set, too.
In the recursive definition of N, the only initial element is the number
zero and the only construction is the simple operation of adding one.
But more generally, we can have any number of initial elements and
any number of constructions that allow us to construct new elements.
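For the special case where every construction takes a single element, this three-step pattern can itself be run as a program: start from the initial elements and keep applying the constructions. Since the defined set is usually infinite, the sketch below (my own formulation, not from the text) cuts the process off after a fixed number of rounds:

```python
def closure(initial, constructions, rounds):
    """Apply each unary construction to every element so far,
    for a fixed number of rounds, starting from `initial`."""
    elements = set(initial)
    for _ in range(rounds):
        elements |= {c(x) for c in constructions for x in elements}
    return elements

# The inductive definition of N, truncated after five rounds:
segment = closure({0}, [lambda x: x + 1], 5)
```

Here `segment` comes out as {0, 1, 2, 3, 4, 5}: the initial element 0 plus one new element per round of applying the successor construction.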
3.7.11 Running Example (Gargles). We give an inductive definition of the set
Gargle of gargles. Here we go:

1. The set of initial elements is {♣, ♠}.


2. Our constructions are all constructions of writing symbols next to
each other. We consider the following constructions:
i. Take any gargle x and write ♦ before and after it, giving ♦x♦
as the result.
ii. Take any two gargles x, y and write ♥ in between, giving x♥y
as the result.
3. Our set Gargle is now defined as the smallest set X such that:
(i) {♣, ♠} ⊆ X.
(ii) (a) For all x, if x ∈ X, then ♦x♦ ∈ X.
(b) For all x and y, if x, y ∈ X, then x♥y ∈ X.

The set Gargle has infinitely many elements:

ˆ ♣, ♠ ∈ Gargle
ˆ ♦♣♦, ♦♠♦ ∈ Gargle
ˆ ♣♥♣, ♣♥♠, ♠♥♠ ∈ Gargle
ˆ ♣♥♦♣♦, ♦♠♦♥♠, . . . ∈ Gargle
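Treating gargles as strings, the inductive definition can be run directly: begin with ♣ and ♠ and repeatedly apply the two constructions. The following sketch is my own code (the function name and the depth cutoff are assumptions, not from the text):

```python
def gargles(depth):
    """All gargles buildable in at most `depth` applications of
    the constructions (i) and (ii)."""
    gs = {"♣", "♠"}                                    # initial elements
    for _ in range(depth):
        diamonds = {f"♦{x}♦" for x in gs}              # construction (i)
        hearts = {f"{x}♥{y}" for x in gs for y in gs}  # construction (ii)
        gs = gs | diamonds | hearts
    return gs
```

For instance, gargles(2) already contains ♣♥♦♣♦ and ♦♠♦♥♠ from the list above.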

3.7.12 Next, let’s discuss how function recursion works generally. As we hinted
at above, whenever we have an inductively defined set, we can use func-
tion recursion to define a function on that set. The way this works is
as follows:

1. We say what the value of our function is on the initial elements.


2. We say how to calculate the value of the function for an element
built by a construction, where we can refer to the values of the
function for the elements the element is constructed from.

When we give these two pieces of information, we’ve defined a function


on the inductively defined set: since the set is the smallest set which
contains the initial elements and is closed under the constructions, for
each element in the set we can determine the value of our function.
3.7.13 So, how do we recursively define a function f with dom(f ) = Gargle?
Well, we have to answer the following questions:
(i) What are the values of f (♣) and f (♠)?
(ii) (a) What is the value of f (♦x♦) in terms of the value of f (x)?
(b) What is the value of f (x♥y) in terms of the values of f (x)
and f (y)?
Consider, for example, the function #♠ : Gargle → N, which is defined
by the following recursion:
(i) #♠ (♠) = 1 and #♠ (♣) = 0
(ii) (a) #♠ (♦x♦) = #♠ (x)
(b) #♠ (x♥y) = #♠ (x) + #♠ (y)

This function calculates the number of ♠’s in a given gargle. We have,


for example, #♠ (♦♣♦) = 0, #♠ (♠) = 1, #♠ (♠♥♦♠♦) = 2, and so
on. Let’s check this, for example, in the case of ♠♥♦♠♦:

ˆ We know by clause (ii.b), #♠ (♠♥♦♠♦) = #♠ (♠) + #♠ (♦♠♦).


ˆ By clause (ii.a), we know that #♠ (♦♠♦) = #♠ (♠).
ˆ So, #♠ (♠♥♦♠♦) = #♠ (♠) + #♠ (♦♠♦) = #♠ (♠) + #♠ (♠).
ˆ But #♠ (♠) = 1, so #♠ (♠♥♦♠♦) = #♠ (♠) + #♠ (♦♠♦) =
1 + 1 = 2.
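To run this recursion as a program, it helps to represent a gargle by its construction tree rather than by its string, so that exactly one clause applies at each step (this tree encoding is my own choice, not the text’s): the leaves are "♣" and "♠", ("♦", x) stands for ♦x♦, and ("♥", x, y) for x♥y.

```python
def count_spades(g):
    """#♠: the number of ♠'s in a gargle, by recursion on its
    construction tree."""
    if g == "♠":
        return 1                                    # clause (i)
    if g == "♣":
        return 0                                    # clause (i)
    if g[0] == "♦":                                 # clause (ii.a): ♦x♦
        return count_spades(g[1])
    return count_spades(g[1]) + count_spades(g[2])  # clause (ii.b): x♥y

# ♠♥♦♠♦, built from ♠ and ♦♠♦:
example = ("♥", "♠", ("♦", "♠"))
```

Calling count_spades(example) runs exactly the calculation above and returns 2.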

3.7.14 But wait, there’s a problem. By what we said so-far about general
recursion, it’s only guaranteed that every element gets a value. But
remember from the definition of a function, that every element needs
to get a unique value. In the case of the natural numbers, this is
guaranteed since for each natural number, there is exactly one way of
“constructing it” from zero via the successor function:

n = 0 + 1 + · · · + 1    (n times)

Thus, there is only one way to calculate the value of a recursive function
following the construction of the number.
But note that this is not the case for the gargles. Take the gargle
♠♥♣♥♠, for example. This gargle can be constructed in two ways:

ˆ Since ♠, ♣ ∈ Gargle, we know that ♣♥♠ ∈ Gargle by (ii.b). And


since ♠ ∈ Gargle and ♣♥♠ ∈ Gargle, we know that ♠♥♣♥♠ ∈
Gargle
ˆ Since ♣, ♠ ∈ Gargle, we know that ♠♥♣ ∈ Gargle. And since
♠ ∈ Gargle and ♠♥♣ ∈ Gargle, we know that ♠♥♣♥♠ ∈
Gargle

What does this mean? Well, every number can be “read” in exactly
one way: the number n is the n-th successor of zero. For gargles, that’s
not the case: ♠♥♣♥♠ can be constructed via ♣♥♠ and it can be
constructed via ♠♥♣. Why does this matter? Well, if we want to use
function recursion to define a function on the gargles, if we’re not
careful, it might give different results depending on how we “read” a
gargle. Take, for example, the “function” f : Gargle → {0, 1} which
is defined by recursion over the gargles as follows:

(i) f (♠) = 1 and f (♣) = 0


(ii) (a) f (♦x♦) = f (x)
(b) f (x♥y) = 1 if f (x) = 1 and f (y) = 0, and f (x♥y) = 0 otherwise

You might think that this recursion defines a proper function on the
gargles, but it does not! Why? Because it gives different values for
♠♥♣♥♠, depending on how we “read” the expression:

ˆ Calculation 1. We know that f (♠) = 1 and f (♣) = 0. So we know


that f (♠♥♣) = 1 by (ii.b). And since f (♠) = 1, this means that
f (♠♥♣♥♠) = 0, again by (ii.b).
ˆ Calculation 2. Since f (♠) = 1 and f (♣) = 0, we know that
f (♣♥♠) = 0 by (ii.b). Since f (♠) = 1, this gives f (♠♥♣♥♠) = 1
by (ii.b).

Looking at it the other way around, if we try to calculate “backwards,”


already in the first step, we have to make a decision how to “parse”
the gargle—and different ways of parsing it give different values for the
function. So, strictly speaking, not every function recursion over the
gargles is guaranteed to yield an actual function. Sure, some of them
do: for example, it can be shown that the definition of #♠ works: it
assigns a unique value to every gargle. But some of them don’t: for
example, our “function” f defined above.
All of this points to an important fact: when we’re dealing with an
inductively defined set, we want its members to have a unique
construction for recursion to work properly. In the case of the gargles,

we don’t have this “unique readability” and therefore we have to be


careful when we’re trying our hand at function recursion over them. It
will be an important fact about formal languages that their formulas
are uniquely readable—we’ll make sure that they are by design.
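The failure can be made concrete in code. Using construction trees (my own encoding, not the text’s: leaves "♠" and "♣", ("♦", x) for ♦x♦, ("♥", x, y) for x♥y), the two readings of ♠♥♣♥♠ are two different trees, and the problematic recursion for f returns a different value on each:

```python
def f(g):
    """The 'function' from the recursion above, on construction trees."""
    if g == "♠":
        return 1
    if g == "♣":
        return 0
    if g[0] == "♦":
        return f(g[1])
    # the x♥y clause: 1 if f(x) = 1 and f(y) = 0, else 0
    return 1 if f(g[1]) == 1 and f(g[2]) == 0 else 0

reading1 = ("♥", ("♥", "♠", "♣"), "♠")   # (♠♥♣)♥♠, as in Calculation 1
reading2 = ("♥", "♠", ("♥", "♣", "♠"))   # ♠♥(♣♥♠), as in Calculation 2
```

Here f(reading1) gives 0 while f(reading2) gives 1, matching the two calculations. On trees, each reading is a distinct input, which is precisely what the flat strings lack.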

3.7.15 Finally, we mentioned that for every inductively defined set, we have
its own form of proof by induction. The idea that we described for
the natural numbers generalizes to a general procedure as follows. In
order to show that every element of an inductively defined set satisfies a
condition, we show:

1. All the initial elements satisfy the condition. (‘base case’)


2. A newly constructed element satisfies the condition, whenever the
elements that it’s constructed from do. (‘induction steps’)

The reasoning behind this is essentially the same as in the case of


mathematical induction. Since the elements of an inductively defined
set are precisely the ones that can be constructed from the initial
elements using the constructions, if 1. and 2. are established, then for
each element, we can infer that it satisfies the condition step by step,
tracing the construction steps.
Note that, in contrast to mathematical induction over N, there can be
more than one base case and several induction steps (with their own
induction hypotheses) to show.

3.7.16 So, how do we prove things about gargles using induction? Well, sup-
pose we want to show that all gargles satisfy the condition Φ. What
we need to establish are the following things:

(i) We need to show that ♣, ♠ all satisfy the condition, i.e. Φ(♣)
and Φ(♠). This is the base case.
(ii) And we need to show that:
(a) For all x, if x satisfies the condition, then ♦x♦ satisfies the
condition, i.e. for all x, if Φ(x), then Φ(♦x♦).
(b) For all x, y, if x and y satisfy the condition, then x♥y satisfies
the condition, i.e. for all x, y, if Φ(x) and Φ(y), then Φ(x♥y).

As an example, we’re going to prove that the number of ♥’s in a


gargle is equal to the number of ♠’s and ♣’s added together minus 1.
To make this claim more precise, let’s define two more functions on
the gargles. First, define the function #♥ : Gargle → N by recursion
as follows:

(i) #♥ (♠) = #♥ (♣) = 0



(ii) (a) #♥ (♦x♦) = #♥ (x)


(b) #♥ (x♥y) = #♥ (x) + #♥ (y) + 1

This function counts the number of ♥’s in a given gargle. Second,


define the function #♣ : Gargle → N by recursion as follows:

(i) #♣ (♠) = 0 and #♣ (♣) = 1


(ii) (a) #♣ (♦x♦) = #♣ (x)
(b) #♣ (x♥y) = #♣ (x) + #♣ (y)

This function counts the number of ♣’s in a gargle. (Both of these


recursions actually work, don’t worry about the details too much).

Theorem. For all x ∈ Gargle, #♥ (x) = #♠ (x) + #♣ (x) − 1.

Proof. We use induction over the gargles to prove the claim.


For the base case, we need to show two things:

#♥ (♠) = #♠ (♠) + #♣ (♠) − 1

and
#♥ (♣) = #♠ (♣) + #♣ (♣) − 1.
We only show the latter, since the former is completely analogous.
Simply note that #♥ (♣) = 0, #♠ (♣) = 0, and #♣ (♣) = 1. So we get

#♠ (♣) + #♣ (♣) − 1 = 0 + 1 − 1 = 0 = #♥ (♣)

For the induction step, we need to show two things:

1. For all x ∈ Gargle, if

#♥ (x) = #♠ (x) + #♣ (x) − 1,

then
#♥ (♦x♦) = #♠ (♦x♦) + #♣ (♦x♦) − 1.

2. For all x, y ∈ Gargle, if

#♥ (x) = #♠ (x) + #♣ (x) − 1,

and
#♥ (y) = #♠ (y) + #♣ (y) − 1,
then
#♥ (x♥y) = #♠ (x♥y) + #♣ (x♥y) − 1

We prove these in turn. First 1. Let x be an arbitrary gargle and


suppose the induction hypothesis that #♥ (x) = #♠ (x) + #♣ (x) − 1.
Now consider #♥ (♦x♦). By definition, #♥ (♦x♦) = #♥ (x), and likewise
#♠ (♦x♦) = #♠ (x) and #♣ (♦x♦) = #♣ (x), so the claim follows
immediately from the induction hypothesis.
Now for 2. Let x and y be arbitrary gargles and suppose the induction
hypotheses that #♥ (x) = #♠ (x) + #♣ (x) − 1 and #♥ (y) = #♠ (y) +
#♣ (y) − 1. Now consider #♥ (x♥y). By definition,

#♥ (x♥y) = #♥ (x) + #♥ (y) + 1.

By substituting the equations from our induction hypotheses, we get:

#♥ (x♥y) = (#♠ (x) + #♣ (x) − 1) + (#♠ (y) + #♣ (y) − 1) + 1
         = #♠ (x) + #♣ (x) + #♠ (y) + #♣ (y) − 1
         = (#♠ (x) + #♠ (y)) + (#♣ (x) + #♣ (y)) − 1
         = #♠ (x♥y) + #♣ (x♥y) − 1

This is what we needed to show.


So, we conclude our theorem by induction over the gargles.
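As a sanity check (not a replacement for the induction), the identity can be tested on all gargles up to a fixed construction depth. In the sketch below (my own code), gargles are generated as strings, and the three counting functions reduce to counting symbol occurrences:

```python
def gargles(depth):
    """All gargles buildable in at most `depth` construction steps."""
    gs = {"♣", "♠"}
    for _ in range(depth):
        gs = gs | {f"♦{x}♦" for x in gs} | {f"{x}♥{y}" for x in gs for y in gs}
    return gs

# The theorem: #hearts(x) = #spades(x) + #clubs(x) - 1 for every gargle x.
assert all(
    g.count("♥") == g.count("♠") + g.count("♣") - 1
    for g in gargles(2)
)
```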

3.7.17 We conclude the section with a guideline for writing a proof by induc-
tion:

1. State clearly that you’re using induction to prove the claim.


2. Prove the base case.
3. State clearly that you’re now considering the induction steps. In
each sub-case, begin by stating your induction hypothesis and then
use it to derive the claim about the constructed element.
4. State clearly that you’re using induction to infer that the claim in
question holds for all elements of the set.

3.8 Core Ideas


ˆ A set is a collection of objects, its elements.

ˆ One set is a subset of another just in case all the elements of the one
set are elements of the other. A set is a proper subset of another just
in case the one set is a subset of the other but not vice versa.

ˆ Two sets are identical iff they have precisely the same elements.

ˆ The union of two sets contains any element of either set, their inter-
section contains only the objects that are in both sets. The difference
of one set and another contains all the elements of the one but not the
other.

ˆ An ordered tuple is a set-like collection of objects with a specific order.


In tuples, order and multiplicity count.

ˆ The Cartesian product of two sets is the set of all ordered pairs formed
by taking an element of the first set as the first component and an
element of the second set as the second component.

ˆ A property is a set of objects—the set of objects that have the property.


More generally, an n-ary relation is a set of n-tuples—the set of objects
standing in the relation in that order.

ˆ A function from one set to another is an assignment of elements in the


one set to elements in the other such that each element in the one set
is assigned a unique element in the other.

ˆ In order to inductively define a set, we give a set of initial elements and


a set of constructions for new elements. We define the set inductively as
the smallest set that contains the initial elements and is closed under
the constructions.

ˆ In order to recursively define a function over an inductively defined


set, we give the value of the function for the initial elements and say
how to calculate the value of a newly constructed element based on
the values of the elements it’s constructed from.

ˆ To prove a claim by induction over an inductively defined set, we


show that all initial elements satisfy the claim and that every newly
constructed element satisfies the claim whenever the elements that it’s
constructed from do.

3.9 Self Study Questions


3.9.1 Let X and Y be sets. Which of the following entails that X ⊆ Y ?

(a) For every x ∈ X, we also have x ∈ Y .


(b) For every x ∈ Y , we also have x ∈ X.
(c) There exists no x ∈ Y such that x ∉ X.
(d) There exists no x ∈ X such that x ∉ Y.
(e) Some x ∈ X is such that x ∉ Y.
(f) Some x ∈ Y is such that x ∉ X.
(g) Every x ∉ X is also such that x ∉ Y.
(h) Every x ∉ Y is also such that x ∉ X.

3.9.2 Let X and Y be sets. Which of the following entails that X ⊈ Y ?

(a) For every x ∈ X, we also have x ∉ Y.
(b) For every x ∉ Y, we also have x ∉ X.
(c) There exists no x ∈ X such that x ∉ Y.
(d) There exists no x ∈ Y such that x ∉ X.
(e) Some x ∈ X is such that x ∉ Y.
(f) Some x ∈ Y is such that x ∉ X.
(g) Every x ∉ X is also such that x ∉ Y.
(h) Every x ∉ Y is also such that x ∉ X.

3.9.3 Let X and Y be sets. Which of the following entails that X = Y ?

(a) X ⊆ Y and Y ⊆ X.
(b) For some object x, we have that x ∈ X iff x ∈ Y .
(c) X ⊆ Y and Y ⊈ X.
(d) There is no element x ∈ X such that x ∉ Y and there is no element
y ∈ Y such that y ∉ X.
(e) When we pick any element x ∈ X, we can find a corresponding
element y ∈ Y and vice versa.
(f) If we find an element x ∈ X such that x ∈ Y , then we can find an
y ∈ Y such that y ∈ X.

3.9.4 Let X and Y be sets. Which of the following entails that X ≠ Y ?

(a) There is an object x ∈ X such that x ∉ Y.
(b) For all objects x, we have that x ∉ X iff x ∉ Y.
(c) X ⊈ Y or Y ⊈ X.
(d) There is an object y ∈ Y such that y ∉ X.
(e) There exists an object x ∈ X such that x ∈ Y.
(f) There exists an object x ∈ X such that x ∉ Y.

3.9.5 Let X and Y be sets. Which of the following entails that x ∈ X ∪ Y
for an object x?

(a) x ∈ X
(b) x ∈ Y
(c) x ∉ X
(d) x ∉ Y
(e) x ∈ X and x ∈ Y
(f) x ∈ X and x ∉ Y
(g) x ∉ X and x ∈ Y
(h) x ∉ X and x ∉ Y
3.9.6 Let X and Y be sets. Which of the following entails that x ∉ X ∪ Y
for an object x?

(a) x ∈ X
(b) x ∈ Y
(c) x ∉ X
(d) x ∉ Y
(e) x ∈ X and x ∈ Y
(f) x ∈ X and x ∉ Y
(g) x ∉ X and x ∈ Y
(h) x ∉ X and x ∉ Y
3.9.7 Let X and Y be sets. Which of the following entails that x ∈ X ∩ Y
for an object x?

(a) x ∈ X
(b) x ∈ Y
(c) x ∉ X
(d) x ∉ Y
(e) x ∈ X and x ∈ Y
(f) x ∈ X and x ∉ Y
(g) x ∉ X and x ∈ Y
(h) x ∉ X and x ∉ Y
3.9.8 Let X and Y be sets. Which of the following entails that x ∉ X ∩ Y
for an object x?

(a) x ∈ X
(b) x ∈ Y
(c) x ∉ X
(d) x ∉ Y
(e) x ∈ X and x ∈ Y
(f) x ∈ X and x ∉ Y
(g) x ∉ X and x ∈ Y
(h) x ∉ X and x ∉ Y
3.9.9 Which of the following excludes that assignment f is a function from
X to Y ?

(a) For some element y ∈ Y , there is no element x ∈ X to which f


assigns y.
(b) For some element x ∈ X, there is no element y ∈ Y which f
assigns to x.
(c) There is some element x ∈ X, such that there are two elements
y, y′ ∈ Y such that f assigns both y and y′ to x.

(d) There is some element y ∈ Y , such that there are two elements
x, x′ ∈ X such that f assigns y to both x and x′.

3.9.10 Consider the set {n² : n ∈ N and 0 ≤ n ≤ 10} of all the squares of
natural numbers between zero and ten. Which of the following entails
that a natural number m ∉ {n² : n ∈ N and 0 ≤ n ≤ 10}?

(a) There exists an n ∈ N with 0 ≤ n ≤ 10 such that m ≠ n².
(b) For all n ∈ N with 0 ≤ n ≤ 10 it holds that m ≠ n².
(c) Either 0 ≰ m² or m² ≰ 10.
(d) Either 0 ≰ m or m ≰ 10.

3.10 Exercises
3.10.1 [h] Let X = {1, 2, 3} and Y = {1, 3, 5}. Calculate:

(a) X ∩ Y
(b) X ∪ Y
(c) X \ Y and Y \ X
(d) ℘(X) and ℘(Y )
(e) X × Y and Y × X

3.10.2 Let X and Y be sets. Prove the following facts!

(a) [h] X ⊆ Y iff X ∪ Y = Y .


(b) X ⊆ Y iff X ∩ Y = X.
(c) X = Y iff ℘(X) = ℘(Y )
(d) X = Y iff X \ Y = ∅ and Y \ X = ∅

3.10.3 [h] Consider the function f : {1, 2, 3}² → {1, 2, 3} which assigns to a
pair of numbers the smaller of the two. Write the function using all
our different function notations.

3.10.4 Let f : X → Y and g : Y → Z be two functions. Prove that g ◦ f is


a function from X to Z, where (g ◦ f )(x) = g(f (x)) for all x ∈ X.4

3.10.5 Consider the function f : N² → N, which is defined by recursion over


the natural numbers as follows:
4
Remember that a function from one set to another must do the following two things:
every element in the one set gets assigned an element in the other and no element in the
one set gets assigned more than one element in the other. By proving those two things
you’ve proven that we’ve got a function.

(i) for all n ∈ N, f (n, 0) = n


(ii) for all n, m ∈ N, f (n, m + 1) = f (n, m) + 1

Prove, using mathematical induction, that f (n, m) = n + m for all


n, m ∈ N.

3.10.6 Use the formal definition of a function (3.6.10) to prove that if two
functions assign the same output to the same input, then they are
identical.

3.10.7 [h] Remember the gargles (3.7.11). Give recursive definitions of the
following functions:

(a) a function l : Gargle → N that measures the length of a gargle


(counted in number of symbols)
(b) a function 1♥ : Gargle → {0, 1} which assigns one to a gargle iff
the gargle contains the symbol ♥

3.10.8 [h] Use induction over the gargles to prove that every gargle contains
an even number of ♦’s (note that 0 is even).

3.10.9 [h] Prove that ♠♦♥♠ ∉ Gargle. (Hint: Use the previous result.)

3.10.10 Prove the induction principle over the gargles:

Theorem. Suppose that Φ is a condition on gargles. If we can show:

(a) Φ(♣) and Φ(♠)


(b) i. For all gargles x ∈ Gargle, if Φ(x), then Φ(♦x♦).
ii. For all gargles x, y ∈ Gargle, if Φ(x) and Φ(y), then Φ(x♥y)

Then for all gargles x ∈ Gargle, Φ(x).

3.11 Further Readings


There is a host of accessible literature on elementary set theory (it’s, after
all, the basis for modern math).
If you’re already looking into Houston’s book How to Think Like a Mathematician
(see §2.8), I can recommend having a look at chapter 1 of that
book, too.
A more comprehensive introduction to set theory can be found in Timothy
Button’s Set Theory: An Open Introduction, which is freely available
under:
http://builds.openlogicproject.org/courses/set-theory/
In this book, part 2 in particular is what you want to have a look at.
Self Study Solutions
Some explanations in the appendix.
3.9.1 (a), (d), (h)
3.9.2 (e)
3.9.3 (a), (d)
3.9.4 (a), (c), (d), (f)
3.9.5 (a), (b), (e), (f), (g)
3.9.6 (h)
3.9.7 (e)
3.9.8 (c), (d), (f), (g), (h)
3.9.9 (b), (c)
3.9.10 (b)
Part II

Propositional Logic

Chapter 4

Syntax of Propositional Logic

4.1 Propositional Languages


4.1.1 As we said in the introduction, formal languages are the result of ab-
stracting away from the logically irrelevant aspects of ordinary lan-
guage to arrive at an abstract, mathematical language with well-
defined symbols and grammatical rules. We also mentioned in the in-
troduction that propositional logic is the logic of ‘not,’ ‘and,’ ‘or,’ ‘if
. . . , then . . . ,’ and ‘iff’—the so-called sentential operators. Correspond-
ingly, to define formal languages for propositional logic, we abstract
away from everything but the structure required for the sentential
operators.

4.1.2 What is this structure? —First, note that the sentential operators
connect sentences to form new sentences. The operator 'not,' for example, takes a sentence and makes a new one out of it: it takes us from “two plus two equals four” to “two plus two doesn't equal four.” The operator 'and,' instead, takes two sentences to form a new one;
from “two plus two equals four” and “four is even,” we get to “two
plus two equals four and four is even.” Next, note that for validity in
propositional logic, it actually doesn’t matter what the sentences are
that an operator connects. Take, for example, inference (1) from the
introduction:

(1) The letter is either in the left drawer or in the right drawer, and
it’s not in the left drawer. So, the letter is in the right drawer.

This inference remains valid if we replace “the letter is in the left


drawer” and “the letter is in the right drawer” with any statements
we please. The following inference, for example, is equally valid:

(1’) The cat is either on the mat or the dog dances tango, and the cat
isn’t on the mat. So, the dog dances tango.

CHAPTER 4. SYNTAX OF PROPOSITIONAL LOGIC 74

To see this, just go through the reasoning we used to see that (1) is
valid and replace “the letter is in the left drawer” everywhere with
“the cat is on the mat” and “the letter is in the right drawer” with
“the dog dances tango.”

4.1.3 So, we can abstract away from the concrete sentences in an inference,
and replace them with sentence letters, typically written p, q, r, . . . .
The sentential operators, then, are represented by the following formal
symbols:

Symbol Name Reading


¬ Negation not
∧ Conjunction and
∨ Disjunction or
→ Conditional if . . . , then . . .
↔ Biconditional iff

So, the sentence “the letter is either in the left drawer or in the right
drawer” becomes the formula (p∨q) and the sentence “the letter is not
in the left drawer” becomes ¬p. If we use the symbol ∴ to stand for
the natural language expression “so,” we can therefore fully formally
represent the inference (1) as:

(p ∨ q), ¬p ∴ q

The abstraction process just described is the translation from natural


language into the language of propositional logic, which is known as
formalization. We will now first define the notion of a propositional
language, a formal language of propositional logic, and then discuss
how to formalize natural language claims in a propositional language.

4.1.4 To define a formal language (in propositional logic and beyond) we need
to do two things: we need to specify a vocabulary—the symbols we
can use to form expressions of the language—and we need to define
a grammar, which tells us which combinations of symbols from the
vocabulary are well-formed.

4.1.5 For a propositional language, typically denoted L, the vocabulary con-


sists of the following:

(i) a (non-empty) set P of sentence letters,1


(ii) the sentential operators ¬, ∧, ∨, →, and ↔, and
(iii) the parentheses ( and ).
1 In the following, we'll always assume that P only contains lower case letters p, q, r, . . . and, in particular, that ¬, ∧, ∨, →, ↔, (, ) ∈/ P.

Every expression in L is constructed from the symbols in its vocabu-


lary.
4.1.6 The grammar of L, then, is given by an inductive definition: we induc-
tively define the set of well-formed formulas, typically denoted (some-
what ambiguously) simply L. Remember that in order to recursively
define a set, we need to give a set of initial elements and a set of con-
structions to form new elements from old ones. Well, in the case of
a propositional language, the initial elements are simply the sentence
letters and the constructions are given by the sentential operators. So,
the set of well-formed formulas of L is defined as the smallest set X,
such that:
(i) P ⊆ X
(ii) (a) if φ ∈ X, then ¬φ ∈ X
(b) if φ, ψ ∈ X, then (φ ∧ ψ), (φ ∨ ψ), (φ → ψ), (φ ↔ ψ) ∈ X.
Note the presence of the parentheses in (ii.b)—we’ll have to talk about
them in some more detail later. It’s a bit tedious to write out (ii.b)
in full, so we sometimes use the following notation, in which ◦ is a
variable ranging over the sentential connectives:
(ii) (b) if φ, ψ ∈ X, then (φ ◦ ψ) ∈ X, for ◦ = ∧, ∨, →, ↔.
So, for example, if P is the set {p, q, r}, we have p, ¬¬q, (p ∧ (r ∨
q)), (p → (r ∨ (p ∧ (q ↔ r)))), . . . ∈ L.
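The inductive definition transfers directly to code. Here is a minimal sketch in Python of one way such formulas could be represented (the class and function names are our own, not part of the official notation): clause (i) becomes a `Letter` class, clause (ii.a) a `Neg` class, and clause (ii.b) a single `Bin` class covering all four binary operators.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Letter:        # clause (i): every sentence letter is a formula
    name: str

@dataclass(frozen=True)
class Neg:           # clause (ii.a): if φ is a formula, so is ¬φ
    sub: "Formula"

@dataclass(frozen=True)
class Bin:           # clause (ii.b): (φ ◦ ψ) for ◦ = ∧, ∨, →, ↔
    op: str          # one of "∧", "∨", "→", "↔"
    left: "Formula"
    right: "Formula"

Formula = Union[Letter, Neg, Bin]

def show(phi: Formula) -> str:
    """Render a formula with the official parentheses convention."""
    if isinstance(phi, Letter):
        return phi.name
    if isinstance(phi, Neg):
        return "¬" + show(phi.sub)
    return "(" + show(phi.left) + " " + phi.op + " " + show(phi.right) + ")"

# the formula (p ∧ (r ∨ q)) from the example above
example = Bin("∧", Letter("p"), Bin("∨", Letter("r"), Letter("q")))
print(show(example))  # (p ∧ (r ∨ q))
```

Note that a value built this way records exactly one construction of the formula—a point we'll come back to with the unique readability theorem.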
4.1.7 Note that there are many different propositional languages, one for
each set of propositional letters. For example, the language defined
over P = {p} is different from the language defined over P = {p, q}.
The latter, for example, contains (p ∧ q) as a formula, while the former
does not. In the following, we’ll always assume that we’re dealing with
a fixed propositional language L, which has p, q, r, . . . ∈ P.
4.1.8 There is another way of writing the grammar of a propositional lan-
guage, which you should have seen. This is the so-called Backus-Naur-
Form or BNF, for short, which is especially important in programming
and computer science. The BNF of a propositional language is given
as follows:
φ ::= p | ¬φ | (φ ∧ φ) | (φ ∨ φ) | (φ → φ) | (φ ↔ φ)
This BNF means precisely the same thing as our official recursive
definition, it is just written differently. The idea is that an object of
type φ (formula) is either a sentence letter p ∈ P, the result of taking
an object of type φ and writing a ¬ in front of it, or the result of taking
an object of type φ and another object of type φ and writing a sentential
operator between the two and packaging the result in parentheses.

4.1.9 With the official definition of L in place, we can straightforwardly
show that an expression is a formula by constructing it from the sentence
letters using the sentential operators. For example, we can show
that (p ∧ (q ∨ r)) ∈ L (assuming that p, q, r ∈ P) as follows:

• We have q, r ∈ P, therefore q, r ∈ L.
• Since q, r ∈ L, we have that (q ∨ r) ∈ L.
• We have p ∈ P and therefore p ∈ L.
• Since (q ∨ r), p ∈ L, we get that (p ∧ (q ∨ r)) ∈ L.

And we can also show that an expression is not a formula, roughly
like we showed that something is not a natural number. Here, as an
example, we'll show that (p ∧ ¬) ∈/ L (even if p ∈ P):

Proposition. Let L be a propositional language. Then (p ∧ ¬) ∈/ L.

Proof. Let X be a set such that X satisfies conditions (i) and (ii) from
the definition of L and (p ∧ ¬) ∈ X. We claim that then the set Y
defined by:

Y = X \ {(p ∧ ¬), ¬}

also satisfies conditions (i) and (ii) and Y ⊂ X. We need to show two
things. First, we show that (i) P ⊆ Y. To see this, note that P ⊆ X
and (p ∧ ¬), ¬ ∈/ P. So it follows that for all p ∈ P, p ∈ X and
p ∈/ {(p ∧ ¬), ¬}. Hence, p ∈ Y, meaning P ⊆ Y.
To see that Y satisfies condition (ii.a), let's suppose that φ ∈ Y. Since
Y = X \ {(p ∧ ¬), ¬}, it follows that φ ∈ X. And since X is closed under
negation, we can conclude that ¬φ ∈ X. But clearly ¬φ ∈/ {(p ∧ ¬), ¬}.
Hence, ¬φ ∈ X \ {(p ∧ ¬), ¬}, meaning ¬φ ∈ Y.
To see that Y satisfies condition (ii.b), assume that φ, ψ ∈ Y. We now
need to show four different cases: (a) (φ ∧ ψ) ∈ Y, (b) (φ ∨ ψ) ∈ Y,
(c) (φ → ψ) ∈ Y, and (d) (φ ↔ ψ) ∈ Y. But for cases (b–d), this is
easy. Since φ, ψ ∈ Y, we get that φ, ψ ∈ X. Since X satisfies (ii.b), we
get that (φ ◦ ψ) ∈ X for ◦ = ∨, →, ↔. And trivially, for ◦ = ∨, →, ↔,
(φ ◦ ψ) ∈/ {(p ∧ ¬), ¬}. Hence (φ ◦ ψ) ∈ Y, as desired. (Why trivially?
Just look at the form of the members of {(p ∧ ¬), ¬}: neither is of the
form (φ ◦ ψ) for ◦ = ∨, →, ↔.)
Only in case (a), we need to reason a bit more. We again easily get
from φ, ψ ∈ Y to φ, ψ ∈ X and from there via (ii.b) to (φ ∧ ψ) ∈ X.
But can (φ ∧ ψ) be in {(p ∧ ¬), ¬}? Well, only if φ = p and ψ = ¬.
But remember that ψ ∈ Y = X \ {(p ∧ ¬), ¬}. Hence if ψ = ¬, then
ψ ∈/ Y, which contradicts our assumption that ψ ∈ Y. Hence, using
proof by contradiction, we can conclude that (φ ∧ ψ) ∈/ {(p ∧ ¬), ¬}.
Hence (φ ∧ ψ) ∈ Y, as desired.

So, if X satisfies (i) and (ii) and (p∧¬) ∈ X, then X is not the smallest
set that satisfies conditions (i) and (ii). Hence X 6= L. For a final proof
by contradiction, suppose that (p ∧ ¬) ∈ L. By our observation, it
would follow that L = 6 L, which is a contradiction. Hence (p ∧ ¬) ∈ / L.

As you can see, it’s quite tedious to show that something isn’t a for-
mula. In the following, we’ll develop some techniques for proving that
expressions aren’t formulas that are a bit more “user friendly.”

4.1.10 But first, let’s talk about formalization. As we said above, formaliza-
tion is the process of abstracting a natural language expression into
an expression of a formal language. We don’t do this just for fun (even
though it is fun), but with a particular aim in mind: we want to check
inferences couched in natural language for validity. How to actually
check for validity will be covered in the following chapter. For now
you can note that in order for us to be able to draw any conclusion
from our formal language expeditions back to ordinary language, we
need to make sure that, whatever we do in our formalization, we can
always reverse it—we want to be able to “translate backwards.” In
order to guarantee this, every formalization begins with a translation
key, which is basically like a cipher in cryptography. A translation key
tells us for each sentence letter which natural language sentence it
stands for.
Here’s an example of a translation key:

p : the letter is in the left drawer


q : the letter is in the right drawer

This is just one possible key; the following one is also perfectly fine:

p : the letter is in the right drawer


q : the letter is in the left drawer

The only thing you’re not allowed to do in your translation key is to


use a sentence letter to stand for an expression that is formed from
simpler sentences using sentential connectives. So the following key is
not valid:

p : the letter is in the left drawer


q : the letter is in the right drawer
r : the letter is not in the left drawer

Note that if a sentence is not directly formed from simpler sentences


using the connectives, it is also translated by a sentence letter. For
example, the sentence “necessarily, it's not the case that humans fly”
gets translated as a sentence letter. Otherwise, there is no such thing
as a correct key, only a useful one.

4.1.11 The rest of the process of formalization in propositional logic basically
consists in finding the sentential operator that corresponds best to the
operators used to form the sentence. There are no clear rules for doing
so, but I can give you some guidelines. In the following examples, I
use the translation key:

p : the letter is in the left drawer


q : the letter is in the right drawer

Here are some simple sentences and their corresponding translations:

The letter isn't in the left drawer ⇝ ¬p
It's not the case that the letter is in the left drawer ⇝ ¬p
The letter is in the left and in the right drawer ⇝ (p ∧ q)
The letter is not in the left drawer, but it's also not in the right one ⇝ (¬p ∧ ¬q)
The letter is in the left or in the right drawer ⇝ (p ∨ q)
The letter is neither in the left nor in the right drawer ⇝ (¬p ∧ ¬q)
If the letter is in the left drawer, then it's not in the right drawer ⇝ (p → ¬q)
The letter is in the left drawer, if it's not in the right one ⇝ (¬q → p)
The letter is only in the left drawer, if it's not in the right one ⇝ (p → ¬q)
The letter is in the left drawer iff it's not in the right one ⇝ (p ↔ ¬q)

As you can see, ¬, ∧, ∨, →, ↔ are basically used like their natural language counterparts. There are some special cases, however. Note, for
example, that “The letter is not in the left drawer, but it's also not
in the right one” becomes (¬p ∧ ¬q). This is because, from the perspective of propositional logic, the “but” only carries the information
that both sentences are true—the implicit sense of surprise implied
by the use of “but” is something we don't care about from a logical
perspective.

4.1.12 A very special case is the one of “either . . . or . . . .” This expression


is ambiguous in English between an inclusive reading (one of the two
or both) and an exclusive reading (one of the two but not both). The
inclusive reading is translated as ∨ (by logical convention). So, “the
letter is either in the left drawer or in the right drawer” would be
translated as (p ∨ q) if read inclusively. But in this case, an exclusive
reading seems more appropriate. This reading would be translated as
((p ∨ q) ∧ ¬(p ∧ q)). It’s usually possible from the context to determine
which reading is intended, but sometimes there is simply a remaining
ambiguity in natural language.2
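Truth tables only come in the next chapter, but a quick sketch in Python (using Python's own Boolean operators as stand-ins for ∨, ∧, and ¬) already shows where the two readings come apart:

```python
from itertools import product

# Compare the inclusive reading (p ∨ q) with the exclusive reading
# ((p ∨ q) ∧ ¬(p ∧ q)) on every assignment of truth values to p and q.
for p, q in product([True, False], repeat=2):
    inclusive = p or q
    exclusive = (p or q) and not (p and q)
    print(p, q, inclusive, exclusive)

# The two readings differ in exactly one case: when p and q are both true.
```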

4.1.13 To conclude our brief treatment of formalization, let me say that being
able to provide a good formalization is much like translating well from
one language to another. It requires a good understanding of both
and careful attention to linguistic subtlety. As such, formalization
is something you'll have to learn, slowly. You'll be much better at it
once you've properly understood how formal languages work, what the
meaning of formal expressions is, given by their semantics, and so on.
For now, my advice is just: exercise, exercise, exercise.

4.2 Proof by Induction on Formulas


The following section contains the first treatment of one of the most impor-
tant principles of this course: proof by induction on formulas. Our entire
treatment of mathematical induction was in preparation for this. So, before
you embark on this section, make sure that you got as much as possible from
our treatment of mathematical induction in the previous chapter.

4.2.1 We will now develop the technique of proof by induction on formulas,


or as we’ll often simply call it from now on “proof by induction.” The
principle is essentially the same as for N or the gargles, just applied to
the formulas of a propositional language. So, remember that induction
is a method for proving something for all members of an inductively
defined set. We do so by showing first that the claim holds for all initial
elements of the set and that it’s preserved under its constructions.
Spelled out for formulas, this leads to the following induction principle:

Theorem. Let Φ be a condition on formulas. If we can show:

(i) For all p ∈ P, Φ(p).


(ii) (a) For all φ ∈ L, if Φ(φ), then Φ(¬φ).
(b) For all φ, ψ ∈ L, if Φ(φ) and Φ(ψ), then Φ((φ ◦ ψ)), for
◦ = ∧, ∨, →, ↔.
2 Just check [dictionary].

Then we can conclude that for all φ ∈ L, Φ(φ).

Proof. Let Φ be an arbitrary condition on formulas satisfying conditions (i) and (ii). Consider the set {x : Φ(x)}. By conditions (i)
and (ii) of our theorem, {x : Φ(x)} satisfies conditions (i) and (ii) from
the definition of L. Since L is the smallest set satisfying those conditions, we have that L ⊆ {x : Φ(x)}. But now, it easily follows that for
all φ ∈ L, we have that Φ(φ). For let φ ∈ L be an arbitrary formula.
Since L ⊆ {x : Φ(x)}, it follows that φ ∈ {x : Φ(x)}. But that just
means that Φ(φ), as desired.

4.2.2 We’re going to use induction on formulas a lot in this course. Almost
always when I ask you “how are we going to prove this?,” the answer
is going to be “by induction, of course!” So, let’s be clear on how you
write a proof by induction:

1. State clearly that you're using induction to prove the claim.
2. Prove the base case, i.e. show that all sentence letters have the
property.
3. State clearly that you're now considering the induction steps. In
each sub-case, begin by stating your induction hypothesis and then
use it to derive the claim about the constructed element, i.e.
   • derive that ¬φ has the property from the induction hypothesis
   that φ has the property, and
   • derive that (φ ◦ ψ) has the property (for ◦ = ∧, ∨, →, ↔) from
   the induction hypotheses that φ and ψ have the property.
4. State clearly that you're using induction to infer that the claim in
question holds for all elements of the set.

4.2.3 We’re now going through some applications of induction on formulas—


but not just for the sake of it, there’s going to be a purpose. We want
to establish some criteria for determining that an expression isn’t a
formula.

4.2.4 First, we're going to note a fact about parentheses, which essentially
depends on the fact that for every opening parenthesis, there has to
be a corresponding closing one in a proper formula. The fact is the
following:

Proposition. Let φ ∈ L be a formula. Then φ contains an even


number of parentheses.

Proof. We prove this by induction on formulas.



• Base case. No sentence letter contains any parentheses. So for
each p ∈ P, the number of parentheses in p is zero. But zero is
an even number. So, the base case holds.
• Induction steps.
  – Suppose the induction hypothesis that φ ∈ L contains an
  even number of parentheses. Consider the formula ¬φ. Note
  that the number of parentheses in ¬φ is exactly the same as in
  φ, since no new ones have been added. So, also ¬φ contains
  an even number of parentheses, as desired.
  – Suppose the induction hypothesis that φ, ψ ∈ L contain an
  even number of parentheses. Consider the formulas (φ ◦ ψ).
  The number of parentheses in (φ ◦ ψ) is precisely the number
  of parentheses in φ, plus the number of parentheses in ψ, plus
  two. But that's the sum of three even numbers, and as such
  (as you proved as an exercise in the previous chapter) even,
  as desired.

We conclude by induction on formulas that for every φ ∈ L, the number of parentheses in φ is even.

This proposition already gives us a useful criterion for formula-hood:
if an expression contains an odd number of parentheses, it cannot be
a formula.

Corollary. Let φ be an expression. If φ contains an odd number of
parentheses, then φ ∈/ L.

Proof. Immediate from the previous proposition via contrapositive


proof.

Using this result, we can show, for example, that ((¬(p ∨ q) ∧ (p ↔ ¬¬s)) ∈/ L. Proving this with our definitional technique (the technique
from 4.1.9) would be tedious. But is it the best we can do using parentheses? Why, no! You will prove an even sharper result as an exercise
at the end of this chapter.
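The corollary gives us a purely mechanical test. Here is a sketch in Python (the function names are ours):

```python
def paren_count(expr: str) -> int:
    """Count the total number of parentheses in an expression."""
    return sum(1 for ch in expr if ch in "()")

def fails_paren_test(expr: str) -> bool:
    """True iff the corollary already rules expr out as a formula."""
    return paren_count(expr) % 2 == 1

# the example from the text: seven parentheses in total, so not a formula
print(fails_paren_test("((¬(p ∨ q) ∧ (p ↔ ¬¬s))"))  # True
```

Of course, the test only works in one direction: the expression )(, for example, has an even number of parentheses and is certainly not a formula—which is one reason the exercise asks for a sharper result.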

4.2.5 Before we move to the next section, we will prove one more useful
result for proving that something isn't a formula. Note that we said
that every formula of L is made up of symbols from its vocabulary.
It's high time we proved this:

Proposition. Let φ ∈ L be a formula. Then φ only contains symbols


from the vocabulary of L.

Proof. We prove this, again, by induction.

• Base case. All the members of P are part of the vocabulary of L.
So, the base case holds.
• Induction steps.
  – Suppose the induction hypothesis that φ ∈ L only contains
  symbols from L's vocabulary. Consider the formula ¬φ. Note
  that the symbols in ¬φ are exactly the same as in φ with the sole
  addition of ¬. But ¬ is a symbol of L's vocabulary. So also
  ¬φ only contains symbols from L's vocabulary.
  – Suppose the induction hypotheses that φ, ψ ∈ L only contain
  symbols from L's vocabulary. Consider the formula (φ ◦ ψ),
  for ◦ = ∧, ∨, →, ↔. Note that the symbols in (φ ◦ ψ) are exactly
  the same as in φ and ψ with the addition of (, ◦, and ). But
  (, ◦, and ) are all symbols of L's vocabulary. So also (φ ◦ ψ)
  only contains symbols from L's vocabulary.

We conclude the theorem by the principle of induction on formulas.

Also this result leads to a necessary condition for formula-hood:

Corollary. Let φ be an expression. If φ contains a symbol not from
L's vocabulary, then φ ∈/ L.

Proof. Immediate from the previous proposition via contrapositive


reasoning.

This criterion allows us to exclude a whole range of expressions from
L, such as, for example, (p ↔ (q → ¬¬♣)). Again, this could be done
using our definition method (4.1.9) but it would be tedious. In the next
section, as an offshoot from an important theoretical concept to do
with readability, we'll get our strongest criterion for formula-hood—an
actual algorithm we can use to figure out whether an expression is a
formula.
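This criterion, too, is mechanical. Here is a sketch in Python, under the assumption from footnote 1 that P consists of lower case letters (the names are ours):

```python
import string

SENTENCE_LETTERS = set(string.ascii_lowercase)   # our assumption about P
VOCABULARY = SENTENCE_LETTERS | set("¬∧∨→↔()")

def uses_only_vocabulary(expr: str) -> bool:
    """True iff every symbol of expr belongs to L's vocabulary."""
    return all(ch in VOCABULARY for ch in expr)

print(uses_only_vocabulary("(p↔(q→¬¬♣))"))  # False: ♣ is not in the vocabulary
print(uses_only_vocabulary("(p∧¬)"))        # True — yet (p ∧ ¬) is still not a formula!
```

As the second example shows, the criterion is only necessary, not sufficient: passing it proves nothing.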

4.3 Unique Readability and Parsing Trees


4.3.1 Remember the issues of gargle-readability (3.7.14). The problem was
that some gargles could be constructed in more than one way, which
led to a problem with function recursion over the gargles. We won't
run into the same problem with formulas, and that's a fundamental fact
about them—the so-called unique readability theorem. Essentially, the
theorem states that for each formula, there is a unique way to construct
that formula from the sentence letters using the sentential connectives.
Importantly, this makes formal languages computer-readable: it means
that a computer, which lacks human intuition and context-sensitivity,
can always precisely figure out what a given formula is supposed to
say.

4.3.2 The unique readability theorem is, in effect, a consequence of our tidy
use of parentheses. To illustrate how, consider the “formula” p ∧ q ∨ r,
which is of course not a proper formula, but suppose for a second that
we would allow expressions like this. Well, there would be two ways of
“reading” or parsing that formula:

      p ∧ q ∨ r              p ∧ q ∨ r
       /     \                /     \
      p     q ∨ r    vs.   p ∧ q     r
             / \            / \
            q   r          p   q

This is not only bad for function recursion (for issues we discussed in
the context of the gargles) but also messes up our intended informal
reading of the formula. Suppose we use a translation key where p
stands for “I drink a coffee,” q for “I have toast,” and r for “I have
eggs.” Then, our two ways of parsing the sentence correspond to two
very different informal meanings:

• I have a coffee and either toast or eggs.
• Either I have a coffee and toast or I have eggs.

The two are really very different in meaning: in the former case you
have a beverage and food and in the second either you have a beverage
and food or just some food.
But this is just a cautionary tale: the problem actually doesn’t arise in
propositional logic, as long as we use our parentheses properly—which
is what we’re going to prove in this section.

4.3.3 To state the unique readability theorem, we will make use of the notion
of a parsing tree. And before that, the notion of a tree.3 Roughly
speaking, a tree is a structure of the following form:
3 Mathematicians call the structure that we're studying here (more correctly) a directed rooted tree, but for simplicity we'll just drop the modifiers.

              •
            /   \
           •     •
          /|\    |\
         • • •   • •
         ⋮ ⋮ ⋮   ⋮ ⋮

Some terminology about trees:

• The dots are called the nodes of the tree and the lines connecting
them the edges.
• The upper-most node is called the root of the tree and the lower-most
nodes its leaves.
• If x is a node in the tree and y is directly above x, i.e. there is
an edge pointing from y to x, then x is called a child of y and y
the parent of x.
• A path in the tree is a sequence of nodes which are connected by
edges.

Note that in a tree, there’s never a “loop,” i.e. a (non-trivial) path


that both begins and ends at the same node. We’re not going to bother
giving a mathematically precise definition of a tree, but instead we’re
going to put the concept immediately to (good) use.

4.3.4 To define the parsing tree of a formula, we will, for the first time, make
use of function recursion over L. So let’s briefly remind ourselves how
this works in general (remember 3.7.12–13). In order to recursively
define a function over an inductively defined set:

1. We say what the value of our function is on the initial elements.


2. We say how to calculate the value of the function for an element
built by a construction, where we can reference to the values of the
function for the elements the element is constructed from.

In the case of L, this means we have to give the values of the function
for all sentence letters and we have to say how the function behaves
under the sentential operators. Note that we will have to prove that
this actually defines a function on L—remember the unique readability
issue. We'll do so in a moment. For now, we will pretend that it does and
justify that assumption ex post, i.e. afterwards.
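As a warm-up for the definition that follows, here is the two-step recipe applied to a simple example function. The sketch (the representation and names are ours) writes formulas as nested tuples—a sentence letter as a string, ¬φ as ("¬", φ), and (φ ◦ ψ) as (◦, φ, ψ)—and counts sentential operators by recursion on formulas:

```python
def op_count(phi) -> int:
    """Number of sentential operators in phi, by function recursion on L."""
    if isinstance(phi, str):          # step 1: value on the initial elements
        return 0
    if phi[0] == "¬":                 # step 2: value under the construction ¬
        return 1 + op_count(phi[1])
    op, left, right = phi             # step 2: value under ∧, ∨, →, ↔
    return 1 + op_count(left) + op_count(right)

# (p ∧ (q ∨ ¬q)) contains three sentential operators
print(op_count(("∧", "p", ("∨", "q", ("¬", "q")))))  # 3
```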

4.3.5 We will now use function recursion to define a function T , which maps
any formula φ ∈ L to its parsing tree:

(i) T (p) = p, for p ∈ P

(ii.a) T (¬φ) =    ¬φ
                    |
                  T (φ)

(ii.b) T ((φ ◦ ψ)) =    (φ ◦ ψ)
                         /    \
                      T (φ)  T (ψ)      for ◦ = ∧, ∨, →, ↔

What a parsing tree does is, essentially, tell us how the formula in
question was constructed. To see how, it's best to look at some examples.

4.3.6 One useful piece of terminology: the first sentential operator whose rule
is applied when we construct the parsing tree for a formula is called the
main operator of that formula. This operator will become particularly
important when we do semantics in the next chapter. For now, it will
allow us to refer to statements based on their main operator:

Main operator Kind of statement


¬ a negation
∧ a conjunction
∨ a disjunction
→ a conditional
↔ a biconditional
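For formulas written as strings, the main operator can be found without building the whole parsing tree. Here is a sketch in Python (the name is ours) which assumes its input really is a well-formed formula with no spaces: a negation's main operator is the leading ¬, and a binary formula's main operator is the unique binary operator sitting at parenthesis depth 1.

```python
def main_operator(formula: str):
    """Main operator of a well-formed formula, or None for a sentence letter."""
    if len(formula) == 1:             # a sentence letter has no main operator
        return None
    if formula[0] == "¬":             # a negation
        return "¬"
    depth = 0                         # a binary formula: scan for the operator
    for ch in formula:                # directly inside the outer parentheses
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch in "∧∨→↔" and depth == 1:
            return ch

print(main_operator("((p∧(p→q))→¬q)"))  # →
print(main_operator("¬¬¬¬q"))           # ¬
```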

4.3.7 Examples. Here are some examples of parsing trees:

(i) T (((p ∧ (p → q)) → ¬q)) =

        ((p ∧ (p → q)) → ¬q)
            /           \
     (p ∧ (p → q))       ¬q
        /     \           |
       p    (p → q)       q
              / \
             p   q

    Main operator: →

(ii) T ((p ∧ (q ∨ ¬q))) =

        (p ∧ (q ∨ ¬q))
          /       \
         p      (q ∨ ¬q)
                  /   \
                 q    ¬q
                       |
                       q

    Main operator: ∧

(iii) T (((p → q) ∨ (q → r))) =

        ((p → q) ∨ (q → r))
            /         \
        (p → q)     (q → r)
         /   \       /   \
        p     q     q     r

    Main operator: ∨

(iv) T (¬¬¬¬q) =

        ¬¬¬¬q
          |
        ¬¬¬q
          |
         ¬¬q
          |
         ¬q
          |
          q

    Main operator: ¬

In each of these cases, you can read the tree from bottom to top
to get a construction of the formula in question from the sentence
letters. This is the precise sense in which a parsing tree tracks
the construction of a formula.

The following point is the most difficult in this chapter. If you don’t
get it immediately, don’t despair! Follow the advice on reading math:
try to think this through, consider examples, draw pictures, etc. And
then move on. The details of this proof are not the most important
thing to take out of this chapter—the content of the main theorem is!

4.3.8 We will now prove that our recursive definition of T indeed assigns
to each formula φ a unique parsing tree, i.e. we prove that T is a
function. We do this in two steps, by proving two central lemmas.
In order to properly state these lemmas, we need to read (i) and (ii)
from (4.3.5) as conditions on something being a tree for a formula—
otherwise, we'd already assume the truth of these lemmas. The idea
is that, for example, (4.3.5.i) says that a tree T is a parsing tree of p
iff T = p. And (4.3.5.ii.a) says that T is a parsing tree for ¬φ iff T is
of the form

        ¬φ
         |
        T′

where T′ is a parsing tree for φ. Similarly for (4.3.5.ii.b). With this
understanding in mind, we can prove the following two lemmas:

Lemma (Existence Lemma). For each φ ∈ L, T (φ) exists, i.e. there


exists a tree that satisfies the conditions (i–ii) from (4.3.5).

Proof. We prove the claim using induction.

(i) Base case. Let p ∈ P be a sentence letter. By (4.3.5.i), T (p) = p.
And, indeed, p is a tree with p as its only node. Hence, the base
case holds.
(ii) Induction steps.
    (a) Suppose the induction hypothesis that T (φ) exists. We need
    to show that T (¬φ) exists. By (4.3.5.ii.a), we have that:

            T (¬φ) =    ¬φ
                         |
                       T (φ)

    But by the induction hypothesis, T (φ) exists. And if T (φ) is
    a tree, so is T (¬φ).
    (b) Suppose the induction hypotheses that T (φ) and T (ψ) exist.
    We need to show that T ((φ ◦ ψ)) exists for ◦ = ∧, ∨, →, ↔.
    By (4.3.5.ii.b), we have that:

            T ((φ ◦ ψ)) =    (φ ◦ ψ)
                              /    \
                           T (φ)  T (ψ)      for ◦ = ∧, ∨, →, ↔

    But by the induction hypotheses, T (φ) and T (ψ) both exist.
    And if T (φ) is a tree and T (ψ) is a tree, so is T ((φ ◦ ψ)).

Using induction on formulas, we conclude that for each φ ∈ L, T (φ)
exists.

Lemma (Uniqueness Lemma). Let φ ∈ L. Then T (φ) is unique, i.e.


if T1 (φ) and T2 (φ) are two trees that satisfy the conditions (i–ii) from
(4.3.5), then T1 (φ) = T2 (φ).

Proof. We prove this, once more, using induction.

(i) Base case. Let p ∈ P be a sentence letter, and T1 (p) and T2 (p)
two trees satisfying the conditions (i–ii) from (4.3.5). But condition (4.3.5.i) says that T (p) = p, so T1 (p) = p = T2 (p), which is
what we needed to show.
(ii) Induction steps.
    (a) Suppose the induction hypothesis that if T1 (φ) and T2 (φ) are
    two trees for φ which satisfy conditions (i–ii) from (4.3.5),
    then T1 (φ) = T2 (φ). Now consider two trees for ¬φ, i.e.
    T1 (¬φ) and T2 (¬φ). By (4.3.5.ii.a), we have that:

            T1 (¬φ) =    ¬φ        and    T2 (¬φ) =    ¬φ
                          |                             |
                       T1 (φ)                        T2 (φ)

    where T1 (φ) and T2 (φ) are trees for φ which satisfy the conditions (i–ii) from (4.3.5). But then, by the induction hypothesis, T1 (φ) = T2 (φ). And so, T1 (¬φ) = T2 (¬φ), as desired.
    (b) Suppose the first induction hypothesis that if T1 (φ) and
    T2 (φ) are two trees for φ which satisfy conditions (i–ii) from
    (4.3.5), then T1 (φ) = T2 (φ). And suppose further that if
    T1 (ψ) and T2 (ψ) are two trees for ψ which satisfy conditions
    (i–ii) from (4.3.5), then T1 (ψ) = T2 (ψ).
    Now consider two trees for (φ ◦ ψ), for ◦ = ∧, ∨, →, ↔, T1 ((φ ◦
    ψ)) and T2 ((φ ◦ ψ)). By (4.3.5.ii.b), we have that:

      T1 ((φ ◦ ψ)) =    (φ ◦ ψ)       and    T2 ((φ ◦ ψ)) =    (φ ◦ ψ)
                         /    \                                 /    \
                     T1 (φ)  T1 (ψ)                         T2 (φ)  T2 (ψ)

    where T1 (φ) and T2 (φ) are trees for φ which satisfy the conditions (i–ii) from (4.3.5), and T1 (ψ) and T2 (ψ) are trees for ψ
    which satisfy the conditions (i–ii) from (4.3.5). But then, by
    the induction hypotheses, T1 (φ) = T2 (φ) and T1 (ψ) = T2 (ψ).
    And so, T1 ((φ ◦ ψ)) = T2 ((φ ◦ ψ)), as desired.

We conclude by using (i) and (ii) to infer by induction that the parsing
tree for every formula is unique.

Put together, the two lemmas yield our unique readability theorem:

Theorem. For each formula φ ∈ L there exists a unique parsing tree,


T (φ) as defined by (4.3.5).

This theorem is of fundamental importance for what we're doing in
propositional logic. Essentially, it guarantees that we can use function
recursion on L. Why? Because the theorem tells us that we can calculate a recursively defined function on L by following the formula's parsing
tree—and this is the only way in which the function can be calculated.
In fact, this theorem guarantees that propositional logic can be implemented on a computer—a computer can construct the parsing tree for a
formula in order to “understand” it.

4.3.9 We'll conclude the section by discussing an algorithm for determining
whether a given expression σ is a formula. First, let's clarify what we
mean by the term “algorithm.” In mathematics, an algorithm
is typically understood as a finite set of precise instructions which, if
followed, are supposed to complete a specific task. Hence an algorithm
itself is not a computer program but the procedure that underlies the
computer program. We can thus describe an algorithm in natural language without focusing on a specific implementation of the algorithm
in a programming language.

4.3.10 The task that our algorithm is supposed to fulfill is to determine


whether a given expression is a formula. We hereby assume that the
expression is formed entirely out of symbols from the alphabet, since
by 4.2.5, if it’s not, we can already exclude that it’s a formula. Here is
our algorithm:

1. Write down the expression σ and look at it. Proceed to step 2.


2. Does the expression you’re currently looking at contain the same
number of (’s and )’s?
(a) If not, terminate: σ is not a formula!
(b) If yes, proceed to step 3.
3. Is the expression you’re currently looking at of the form p?
(a) If not, proceed to step 4.

(b) If yes, write a X next to it and proceed to step 6.


4. Is the expression you’re currently looking at of the form ¬τ ?
(a) If not, proceed to step 5.
(b) If yes, apply the following rule:
¬τ X

τ
Then proceed to step 6.
5. Is the expression you're currently looking at of the form (τ ◦ π),
for ◦ = ∧, ∨, →, ↔, where there is no other connective ◦′ = ∧, ∨, →, ↔
such that the expression is also of the form (τ′ ◦′ π′) and ◦′ is enclosed
in fewer parentheses than ◦?
(a) If not, terminate: σ is not a formula!
(b) If yes, apply the following rule:
(τ ◦ π)X

τ π
Then proceed to step 6.
6. In the tree you’ve constructed so far, is there an expression at a
leaf without a X next to it?
(a) If no, terminate: σ is a formula!
(b) If yes, then pick one and look at it. Go back to step 2.

4.3.11 We will not prove that this algorithm actually completes its task, i.e.
we won't formally verify the algorithm. Doing so is actually not that hard
given all the facts we've already observed. But it's tedious and so we'll
leave the task to the interested reader. Note that what you would need
to show are three things: (i) the algorithm always terminates, (ii) if the
algorithm terminates and says that the expression is a formula, then
it is indeed a formula, and (iii) if the algorithm terminates and says
that the expression is not a formula, then it is indeed not a formula.
Here we simply observe (again without proof) that the algorithm, if
applied to a formula, actually yields the parsing tree of that formula.
To see this, try it out! Apply the algorithm to some formulas and see
what you get. Here we give just one example:

(p ∨ (q ∨ (r ↔ (¬s ∧ t))))X5.b

pX3.b (q ∨ (r ↔ (¬s ∧ t)))X5.b

qX3.b (r ↔ (¬s ∧ t))X5.b

rX3.b (¬s ∧ t)X5.b

¬sX4.b tX3.b

sX3.b
Note that in the first step (5.b), we parse according to the first ∨
because it’s in fewer parentheses than the second ∨, the ↔ and the ∧.

So, effectively, what the algorithm does is what a computer would do


(described on a very abstract level) if given an expression and asked
whether it makes sense in propositional logic.

4.3.12 We conclude the section with two example applications of the algo-
rithm to illustrate how it can be used to show that something isn’t a
formula:

(i) Claim. ((p ∧ q) → ¬(r)) ∉ L
Algorithmic proof.

((p ∧ q) → ¬(r))X5.b

(p ∧ q)X5.b ¬(r)X4.b

pX3.b qX3.b (r)/5.a

(ii) Claim. ¬¬(p(↔)q) ∉ L.
Algorithmic proof.

¬¬(p(↔)q)X4.b

¬(p(↔)q)X4.b

(p(↔)q)X5.b

(p(/2.a )q)/2.a

4.4 Function Recursion on Propositional Languages


4.4.1 In the previous section, we've given a justification for using function
recursion on L. Now, we'll define two important syntactic functions, i.e.
functions with domain L. The importance of these functions derives
from their widespread use in the logical literature (way beyond propositional
logic). They will also play an important role at various points
in this course.

4.4.2 As the first of the two functions, we'll define the function that maps each
formula to the set of formulas it was constructed from. These formulas
are called the sub-formulas of the formula in question. We define the
function as follows, using function recursion:

(i) sub(p) = {p} for p ∈ P


(ii) (a) sub(¬φ) = sub(φ) ∪ {¬φ}
(b) sub((φ ◦ ψ)) = sub(φ) ∪ sub(ψ) ∪ {(φ ◦ ψ)}

Let’s consider two examples. We have:

• sub(¬¬p) = {p, ¬p, ¬¬p}

• sub(((p ∧ q) ∨ r)) = {p, q, r, (p ∧ q), ((p ∧ q) ∨ r)}

In order to understand this definition, follow the advice for understanding a definition from lecture 2!
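To see the recursion at work, here is a sketch of sub in Python. The encoding of formulas as nested tuples (a sentence letter is a string, ¬φ is ('¬', φ), and (φ ◦ ψ) is (◦, φ, ψ)) is an assumption of this sketch, not part of the official syntax.

```python
def sub(phi):
    """Collect the set of sub-formulas of phi, clause by clause,
    following the recursive definition in 4.4.2."""
    if isinstance(phi, str):             # clause (i): sub(p) = {p}
        return {phi}
    if phi[0] == '¬':                    # clause (ii.a)
        return sub(phi[1]) | {phi}
    op, left, right = phi                # clause (ii.b)
    return sub(left) | sub(right) | {phi}
```

For instance, sub(('¬', ('¬', 'p'))) returns the three-element set corresponding to {p, ¬p, ¬¬p} from the first example above.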

4.4.3 One thing that might help us understand sub-formulas better is to
compare them to notions we've previously introduced. For this purpose,
we'll prove the following proposition:

Proposition. Let φ ∈ L be a formula. Then the elements of sub(φ)


are precisely the formulas that occur in T (φ).

Proof. We prove this fact using—surprise—induction.

(i) Base case. Let p ∈ P be a sentence letter. Then sub(p) = {p}


and T (p) = p. Hence sub(p) is precisely the set of formulas that
occur in T (p).
(ii) Induction steps.
(a) Assume the induction hypothesis that sub(φ) are precisely
the formulas that occur in T (φ). Now consider ¬φ. We know
by definition of sub, that sub(¬φ) = sub(φ) ∪ {¬φ}. By the
(ii.a) definition of parsing trees, we know furthermore that:

¬φ
T (¬φ) =
T (φ)

Hence the formulas that occur in T (¬φ) are precisely the


formulas that occur in T (φ) plus the formula ¬φ. But this
is just another way of saying that the formulas in T (¬φ) are
sub(φ) ∪ {¬φ} = sub(¬φ), as desired.
(b) Assume the induction hypotheses that sub(φ) are precisely
the formulas that occur in T (φ) and sub(ψ) are precisely
the formulas occurring in T (ψ). Now consider (φ ◦ ψ), for ◦ =
∧, ∨, →, ↔. We know by definition of sub, that sub((φ◦ψ)) =
sub(φ) ∪ sub(ψ) ∪ {(φ ◦ ψ)}. By the (ii.a) definition of parsing
trees, we know furthermore that:
(φ ◦ ψ)
T ((φ ◦ ψ)) =
T (φ) T (ψ)

Hence the formulas that occur in T ((φ ◦ ψ)) are precisely


the formulas that occur in T (φ), plus the formulas in T (ψ),
plus (φ ◦ ψ). But this is just another way of saying that the
formulas in T ((φ ◦ ψ)) are sub(φ) ∪ sub(ψ) ∪ {(φ ◦ ψ)} =
sub((φ ◦ ψ)), as desired.

We can thus use induction to infer that the claim holds for all φ.

4.4.4 We move to the second important syntactic function, the so-called
measure of complexity. In order to define this function, we make use
of the maximum function max : N² → N, which is defined as follows:

    max(n, m) = n   if n > m
                m   if m > n
                n   if n = m

So, we get, for example, max(2, 3) = 3, max(3, 2) = 3, max(2, 2) = 2,


and so on. Using max, the (recursive) definition of the complexity
function c : L → N is as follows:

(i) c(p) = 0 for p ∈ P


(ii) (a) c(¬φ) = c(φ) + 1
(b) c((φ ◦ ψ)) = max(c(φ), c(ψ)) + 1, for ◦ = ∧, ∨, →, ↔.

Let’s consider some examples. We have:



• c(¬p) = 1, c((p ∧ q)) = 1, c((p ∨ q)) = 1, . . .

• c(¬¬p) = 2, c(¬(p ∧ q)) = 2, c((¬p ∧ ¬q)) = 2, . . .
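As a sketch, the recursion for c is just a few lines of Python using the built-in max. Formulas are encoded here as nested tuples (a sentence letter is a string, ¬φ is ('¬', φ), and (φ ◦ ψ) is (◦, φ, ψ)); this encoding is an assumption of the sketch.

```python
def c(phi):
    """Complexity of phi, following the recursion in 4.4.4."""
    if isinstance(phi, str):                  # clause (i): c(p) = 0
        return 0
    if phi[0] == '¬':                         # clause (ii.a)
        return c(phi[1]) + 1
    return max(c(phi[1]), c(phi[2])) + 1      # clause (ii.b)
```

For example, c(('∧', ('¬', 'p'), ('¬', 'q'))) computes max(1, 1) + 1 = 2, matching c((¬p ∧ ¬q)) = 2 above.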

Can we somehow narrow down what c precisely measures? Think


about it before you read the next point.

4.4.5 We will now prove a proposition that gives us an intuitive reading of


the function c:

Proposition. Let φ ∈ L be a formula. Then c(φ) is the length of the


longest path in T (φ) which starts from the root (counted in number of
edges travelled). In other words, c(φ) gives us the height of the parsing
tree T (φ).

Proof. How are we going to prove this? You know it!

(i) Base case. Let p ∈ P be a sentence letter. Then T (p) = p. But


the only path you can travel from the root in p is the trivial path
of length zero, which just stops immediately at p. Since c(p) = 0,
this just means that the base case holds.
(ii) Induction steps.
(a) Assume the induction hypothesis that c(φ) is the length of
the longest path from the root in T (φ). Now consider ¬φ. By
the (ii.a) definition of parsing trees, we know that:
¬φ
T (¬φ) =
T (φ)

Now think about the longest path you can travel from the
root in T (¬φ). Well, it’s going to be the longest path you
can travel in T (φ) plus one edge to the new root, i.e. ¬φ.
Since c(¬φ) = c(φ) + 1, this means that, by the induction
hypothesis, the claim holds.
(b) Assume the induction hypotheses that c(φ) is the length of
the longest path from the root in T (φ) and c(ψ) is the length
of the longest path from the root in T (ψ). By the (ii.a) defi-
nition of parsing trees, we know that:
(φ ◦ ψ)
T ((φ ◦ ψ)) =
T (φ) T (ψ)

Now think about the longest path you can travel in T ((φ◦ψ)).
Well, take the longest path you can find in either T (φ) or
T (ψ). Let’s suppose that the path is in T (φ) (if it is in T (ψ),
the argument is completely analogous). The longest path you
can travel from the root in T ((φ ◦ ψ)) is going to be precisely
this path plus the one new edge connecting T (φ) to the new
root (φ ◦ ψ). Since c((φ ◦ ψ)) = max(c(φ), c(ψ)) + 1, this just
means that the claim also holds here.

We can thus use induction to infer that the claim holds for all φ.

4.4.6 This completes our discussion of function recursion on L by means
of examples. In order to get a really good grip on how the method works,
it's best for you to try your hand at it, i.e. try to define your own
syntactic functions using function recursion. Among the exercises, you
can find some functions you can try to define.

4.5 Some Useful Notational Conventions


4.5.1 So far, we’ve been very precise about our use of parentheses. And for
good reason: as we discussed in §4.3, our tidy use of parentheses is
what guaranteed unique readability. But, at the same time, writing all
of these parentheses can be exhausting. And, as I said in the lecture,
logicians are lazy: they don’t like to do unnecessary things. So, over
the years, some generally agreed upon conventions have emerged for
leaving out parentheses, which we’ll briefly discuss in this section.

4.5.2 Before we begin, note that conventional notation is only ever allowed
outside the context of syntax theory, i.e. in semantics and proof theory.
And even there, it’s often better to be safe than sorry. As we said a
couple of times by now: the parentheses are there for a reason. It’s
much easier to make mistakes using conventional notation than it is
in official notation. That being said, conventional notation can be a
real boon for your wrist.

4.5.3 The first convention is that you can always omit any outermost paren-
theses. So, instead of (p ∧ q), you can simply write p ∧ q. The reasoning
behind this is that these are easy to fill in: if a logician writes p ∨ q,
it’s pretty clear what they mean: (p ∨ q).

4.5.4 The second convention is that in a series of ∨'s or ∧'s, you can leave
out the repeatedly nested parentheses. So, for example, instead of the
official (p ∧ (q ∧ (r ∧ (s ∧ t)))), we can simply write p ∧ q ∧ r ∧ s ∧ t (also
applying the convention about outermost parentheses). Note that this

convention really introduces some ambiguity into our language. The


conventional formula p ∨ q ∨ r has indeed two ways of being parsed:

p∨q∨r p∨q∨r

p q∨r vs. p∨q r

q r p q

This makes the convention a bit harder to justify. But note that the two
different “readings” of the formula don’t really say anything different.
Take the following two sentences:

• I have toast, or I have eggs or pancakes

• I have toast or eggs, or I have pancakes

They really seem to say the same thing. We’ll only be able to properly
justify the convention in the next chapter, when we talk about logical
equivalence, but for now, I hope these examples illustrate why we can
allow for this little bit of ambiguity. Note that in mixed series of ∧
and ∨, we cannot omit parentheses. I.e. p ∧ (q ∨ r) needs to stay just
that.

4.5.5 Finally, the most complicated convention concerns the interaction of


the connectives. The idea is that if we agree upon which connective we
should read first, we can drop a bunch of parentheses. For example, if
we say that we always read → before ∧ if there’s an ambiguity, then
the previously ambiguous expression p ∧ q → r, with the two readings

p∧q →r p∧q →r

p q→r vs. p∧q r

q r p q

now unambiguously gets the second of the two readings. That is, p ∧
q → r is then simply read as (p ∧ q) → r. This is the idea of binding
strength: we say that ∧ binds stronger than →. This idea allows us to
leave out a bunch of parentheses, which we can easily fill in by reading
the right operator first.

4.5.6 The relative binding strength of the operators is given in the following
diagram:
¬ > ∧ = ∨ > → > ↔.

Explicitly, this means that, in a case of conflict, you always read the ↔
first, then →, then ∨ and ∧, and only finally ¬. Note that ∧ and ∨ have
precisely the same binding strength, so in expressions like p∧(q∨r), we
really can’t leave out any parentheses. But if we consider an expression
like ((p ∧ q) ↔ (p ↔ q)), we can easily leave out some:

• According to the first convention, we can leave out the outermost
parentheses, yielding (p ∧ q) ↔ (p ↔ q).

• Now, since we know that we'll always parse according to ↔ before
∧, this means that we can leave out the parentheses around the
∧, giving us p ∧ q ↔ (p ↔ q).
• Can we also leave out the last pair of parentheses? No! For in
p ∧ q ↔ p ↔ q, there would be a conflict between two readings:
p∧q ↔p↔q

p∧q ↔p↔q p∧q ↔p q

p∧q p↔q vs. p∧q p

p q p q p q

• So, the best we can do is p ∧ q ↔ (p ↔ q).
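The binding strengths can be encoded as numbers, which is also a natural first step for exercise 4.8.11. A minimal sketch in Python (the numeric values are an arbitrary choice of this sketch; only their ordering matters): in a conflict, the connective with the lowest strength is the one we read, i.e. parse, first.

```python
# Binding strengths from 4.5.6; a higher number binds stronger.
STRENGTH = {'↔': 0, '→': 1, '∨': 2, '∧': 2, '¬': 3}

def read_first(c1, c2):
    """Of two connectives competing at the same parenthesis depth,
    return the one to read first, or None if neither takes priority
    (as with ∧ vs ∨, where parentheses cannot be omitted, or a
    repeated ∨, where the two readings are equivalent)."""
    if STRENGTH[c1] < STRENGTH[c2]:
        return c1
    if STRENGTH[c2] < STRENGTH[c1]:
        return c2
    return None
```

For example, read_first('∧', '↔') returns '↔', reflecting that in p ∧ q ↔ (p ↔ q) we parse at the first ↔.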

4.5.7 These ideas need some getting used to. For that reason, there are
plenty of exercises included at the end of this section (4.8.9 and 4.8.10),
solutions for which can be found in the appendix.

4.5.8 We conclude with a last remark about conventional notation. You


might wonder: Well, these are pretty clear conventions. Surely, a com-
puter can learn them. Why can’t we devise an official definition that
has unique readability and respects these conventions? —And you
would be right! This is possible. It’s just not very practical for most
purposes. The official definition of a formula would thereby become
much more complicated. This would have the consequence that proofs
by induction and function recursion would become horribly compli-
cated to carry out, too. Additionally, if we care about efficiency (as we
often do with computers), it turns out to be way more efficient to use
our official definition and to translate between official and conventional
notation than to carry out everything in conventional notation. And
you can essentially see why: because every recursive definition would
become more complicated—and we need a lot of them. So, we’ll stick
to our official notation for official purposes and use conventional no-
tation for fun.

4.6 Core Ideas


• A formal language is defined by a vocabulary (symbols) and a grammar
(rules for well-formed expressions).

• The vocabulary of a propositional language consists of a set of sentence
letters P, the sentential connectives ¬, ∧, ∨, →, ↔, and the parentheses
( and ).

• The set L of formulas is inductively defined as the smallest set that
contains the sentence letters and is closed under the sentential
connectives.

• Formalization is the process of abstracting ordinary language expressions
into a propositional language. Use the guidelines.

• Syntactic recursion works by specifying the value for the sentence letters
and how to calculate the value for a complex formula from the
values for its subformulas.

• The parsing tree of a formula gives you its internal structure: it shows
how the formula was constructed.

• The unique readability theorem guarantees that syntactic recursion
works and that formulas are computer readable. It states that every
formula has a unique parsing tree.

• There is a useful algorithm for checking whether an expression is a
formula.

• Outside of syntax theory, the notational conventions are useful. But
use them with caution.

4.7 Self Study Questions


4.7.1 Suppose that an expression contains absolutely no parentheses. What
is the best we can say about the expression?

(a) The expression is not a formula.


(b) If the expression is a formula, then it is a sentence letter.
(c) If the expression is a formula, then it cannot contain ∧, ∨, →, ↔.
(d) There is an even number of negations in the expression.

4.7.2 You’re asked to determine whether an expression is a formula. In which


order would you apply the following techniques?

(a) Try to prove it definitionally (as in 4.1.9)


(b) Check whether the expression contains symbols not from the vo-
cabulary (using Proposition 4.2.5).
(c) Apply the algorithm (4.3.10).
(d) Check whether the expression contains as many (’s as )’s (using
the solution to exercise (4.8.8)).

4.7.3 Can you always see whether a formula is written in conventional no-
tation?

(a) Yes, I simply construct the parsing tree for the expression.
(b) No, because a formula written in notational convention does not
need to contain an even number of parentheses.
(c) Yes, you can apply the algorithm (4.3.10) for that.
(d) No, it needs to be clear from the context.

4.7.4 A formula has complexity four. Which of the following can you infer
from that?

(a) The formula contains at least four connectives.


(b) The formula contains at most four connectives.
(c) The formula contains at most four negations.
(d) The parsing tree of the formula has at most four nodes.
(e) The parsing tree of the formula has exactly four nodes.
(f) The parsing tree of the formula has at least five nodes.

4.8 Exercises
4.8.1 [h] Translate the following statements into a suitable propositional
language! Don’t forget the translation key.

(a) Alan Turing built the first computer but Ada Lovelace invented
the first computer algorithm.
(b) Only if Alan Turing built the first computer, it’s Monday today.
(c) Either Alan Turing or Ada Lovelace is your favorite computer
scientist.
(d) Today is Monday if and only if both yesterday was Tuesday and
tomorrow is Saturday.

4.8.2 [h] Translate the following statements into English/Dutch. Use the
following translation key:

p : I’m happy/Ik ben blij


q : I clap my hands/Ik klap in mijn handen
r : You’re happy/Jij bent blij
s : You clap your hands/Jij klapt in je handen
t : We both clap our hands/Wij klappen in onze handen

(a) ¬¬(p ∧ q)
(b) (¬p → ¬q)
(c) (p ↔ (¬r ∧ q))
(d) ((q ∧ s) → t)
(e) ((q ∧ s) → (p ∨ r))
(f) (((p ∧ q) ∨ (r ∧ s)) ∧ ¬((p ∧ q ∧ r ∧ s)))

4.8.3 Give an inductive definition of the set of all formulas of L that only
contain p, q, ¬, ∧, (, and ).

4.8.4 Use the algorithm from 4.3.10 to decide whether the following expres-
sions are formulas:

(a) (q ↔ (p ∧ (q ∨ (r ∧ ¬s)))
(b) ((p ∧ q) ∨ (p ∧ (q → ¬q)))
(c) (p → (p → ((p ∧ p) ↔ p ∨ p)))
(d) ¬¬(¬¬p ∧ (q ∨ q))

4.8.5 Use function recursion to define the following syntactic functions:

(a) [h] the function #conn : L → N, which counts the number of


sentential connectives in a formula φ.
(b) the function #( : L → N, which counts the number of left brackets
in a formula φ.
(c) the function #P : L → N, which counts the number of sentence
letters in a formula φ.
(d) the function 1p : L → {0, 1}, which assigns one to a formula φ if
p ∈ sub(φ) and zero otherwise.

4.8.6 [6 x] Consider the following recursively defined functions on L. What


do they (in ordinary words) measure?

(a) f : L → N defined by:


(i) f (p) = 1, for p ∈ P

(ii) (a) f (¬φ) = f (φ) + 1,


(b) f ((φ ◦ ψ)) = f (φ) + f (ψ) + 3, for ◦ = ∧, ∨, →, ↔ .
(b) g : L → N defined by:
(i) g(p) = 0, for p ∈ P
(ii) (a) g(¬φ) = g(φ) + 1,
(b) g((φ ◦ ψ)) = g(φ) + g(ψ), for ◦ = ∧, ∨, →, ↔ .
(c) (this is a tricky one) h : L → {0, 1} defined by:
(i) h(p) = 1, for p ∈ P
(ii) (a) h(¬φ) = 0 if h(φ) = 1, and 1 if h(φ) = 0
(b) h((φ ◦ ψ)) = 1 if h(φ) = 1 and h(ψ) = 1;
                 1 if h(φ) = 0 and h(ψ) = 0;
                 0 otherwise,
for ◦ = ∧, ∨, →, ↔ .

4.8.7 Use proof by induction to prove that the number of elements in sub(φ)
is at most 2 · #conn (φ) + 1.

4.8.8 [h] Prove (using induction on formulas) that for each formula φ ∈ L,
the number of (’s and the number of )’s is equal. Derive as a corollary
a necessary condition for formula-hood and discuss why the condition
is better than the one given in (4.2.4).

4.8.9 Translate the following formulas written using the notational conven-
tions into official notation:

(a) ¬p ∧ q
(b) ¬(p ∧ q → ¬p ∨ ¬q)
(c) p ∨ p ↔ ¬p
(d) (p ∨ q) ∧ r
(e) p → p ↔ p → p
(f) ¬p ∧ (q ∨ r → p ↔ q)
(g) p ∧ (p ∨ q)
(h) p → q ∨ q ↔ r
(i) p → q ↔ ¬q → ¬p
(j) ¬¬¬p
(k) p → p ↔ p ∨ ¬p
(l) p ∨ q → ¬r ∧ (s ↔ p)

4.8.10 Take the following formulas and write them according to our nota-
tional conventions:

(a) (p ∧ q)
(b) ¬¬q
(c) (p ∧ (r ∨ q))
(d) (p → (r ∨ (p ∧ (q ↔ r))))
(e) (p ∨ ¬(p ∨ q))
(f) ((p ∧ q) → r)
(g) (((p ∨ q) → ¬q) ↔ r)
(h) ((p ∧ q) ∧ r)
(i) (p ∧ (q ∧ r))
(j) (p ∨ (q ∨ r))
(k) (p ∧ (q ∨ r))
(l) (p ∧ (q → r))

4.8.11 (This is a real challenge, only try this is you have enough time and en-
ergy): Write an algorithm that translates a formula from conventional
into official notation.

4.9 Further Readings


We’re starting to get into logic proper. Recommendations for further read-
ings that I can give you at this point will be chapters from other logic books,
which cover the same material but in slightly different way. Note, however,
that if you look into another logic textbook, you will (most likely) encounter
different ways of doing things. Here are just some examples of what I mean:

• Some authors use different terminology. For example, they may call
sentence letters "propositional variables" or sentential connectives "propositional
connectives."

• Some authors may have different definitions of formulas. It's possible,
for example, to demand that negations are also enclosed by parentheses,
so that ¬p is not a formula and only (¬p) is.

• Some authors use different symbols for the connectives, such as ⊃
instead of our → or ≡ instead of our ↔.

• Some authors use A, B, C, . . . for formulas, rather than our φ, ψ, θ, . . . .

• Some authors might use proof by induction in a different (but equivalent)
way. For example, they might use mathematical induction on
the complexity of formulas instead of our method, which is known in
the literature as structural induction.

• Some authors may use terminology in a slightly different way. For
example, complexity is sometimes defined so that it doesn't correspond
to the height of the parsing tree but rather to the number of logical
connectives in the formula.

• They might prove different theorems, or the same theorems in different
ways.

These are really just some of the possible differences you may encounter.
If you continue with my literature recommendations, you have to brace
yourself for that. But there are also some reasons for why it’s a good idea
to look at other texts despite these potential obstacles:

• Regardless of these differences, the books that I'm recommending
deal with the same subject matter, just in a slightly different way.
And this different perspective might help you understand better what's
going on.

• When you continue your studies, you will see that there are many,
many different notations or, more generally, ways of doing things around.
This applies equally in logic, mathematics, computer science, and other
related disciplines. The earlier you get used to this plurality, the better.

• Other textbooks are a great source of additional exercises (though
typically without solutions, as I mentioned in the lecture).

So, here are my recommendations for book-chapters that deal with the same
material in comparable ways.

• Section 2.1 of Dalen, Dirk van. 2013. Logic and Structure. 5th edition.
London, UK: Springer.

• Sections 1.0, 1.1 and 1.3 of Enderton, Herbert. 2001. A Mathematical
Introduction to Logic. 2nd edition. San Diego, CA: Harcourt/Academic
Press.

• The reader Parvulae Logicales INDUCTIE by Albert Visser, Piet Lemmens,
and Vincent van Oostrom from the 2015 installment of the
course. You can find this on blackboard.

One last thing: don’t buy these books until you really feel like the book will
help you a lot. Look at them in the library first (or online, if possible)!
Self Study Solutions
Some explanations in the appendix.
4.7.1 (c)
4.7.2 first (b), then (d), then (c), then (a)
4.7.3 (d)
4.7.4 (a), (f)
Chapter 5

Semantics for Propositional


Logic

5.1 Valuations and Truth


5.1.1 In this chapter, we turn our attention to semantics. Remember from
the introduction that in semantics, we make the idea of validity as
truth-preservation formally precise. We will do this by introducing
the notion of a model for a propositional language and defining what
it means for a formula to be true in a model. We then obtain our
account of validity for inferences couched in a propositional language
by saying that such an inference is valid iff in every model where the
premises are true, the conclusion is true as well. So, intuitively, models
play the role of situations. But how can we make this precise?
5.1.2 What do we want our formal situation-counterparts to do? Well, note
that all that matters for our account of validity is which statements
are true in a given situation. So, at the very least, we want our formal
situation-counterpart to tell us which sentence letters are true in the
situation we’re modeling. As it turns out, this is already enough. Once
we know the truth-values of the sentence letters, we can calculate the
values of every formula by means of function recursion. But we’re
getting ahead of ourselves. Let’s figure out how a model can tell us
which sentence letters are true in a given model. Remember that in
classical logic, we assume that every sentence is either true or false and
never both—this is the principle of bivalence. So, in classical logic,
there are only two truth-values, true and false. If we now assign to
each sentence letter its truth value in a given situation, the result will
be a function. Why so? Well, by bivalence, every sentence is true or
false in the situation, so every sentence gets assigned a value. And,
also by bivalence, no sentence is both true and false in the situation—
so every sentence gets assigned a unique value. We call a function


that assigns a truth-value to every sentence letter a valuation. We


will use valuations as models for propositional languages. Intuitively,
the idea is that a valuation models the situation in which the true
sentences are precisely the ones which the valuation assigns true. From
a mathematical perspective, it’s convenient to model the truth-values
as numbers, 0 for false and 1 for true. We can now give the formal
definition of a model or valuation for a propositional language:

• Let L be a propositional language. A valuation for L is a function
v : P → {0, 1}.

5.1.3 Examples. Let P = {p, q, r}. The following are all the valuations v :
{p, q, r} → {0, 1}:

(a) v(p) = 0, v(q) = 0, and v(r) = 0


(b) v(p) = 0, v(q) = 0, and v(r) = 1
(c) v(p) = 0, v(q) = 1, and v(r) = 0
(d) v(p) = 0, v(q) = 1, and v(r) = 1
(e) v(p) = 1, v(q) = 0, and v(r) = 0
(f) v(p) = 1, v(q) = 0, and v(r) = 1
(g) v(p) = 1, v(q) = 1, and v(r) = 0
(h) v(p) = 1, v(q) = 1, and v(r) = 1

More generally, if there are n elements in P, there are 2ⁿ different
valuations for L.
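Since a valuation is just a function from P to {0, 1}, it can be represented as a dictionary, and the eight valuations above can be generated mechanically. A sketch in Python (the names P and valuations are choices of this sketch):

```python
from itertools import product

P = ['p', 'q', 'r']   # the sentence letters of this example

# One dictionary per valuation: every way of assigning 0 or 1
# to each sentence letter.
valuations = [dict(zip(P, bits)) for bits in product([0, 1], repeat=len(P))]

assert len(valuations) == 2 ** len(P)   # 2^n valuations for n letters
```

The list contains, for instance, {'p': 0, 'q': 1, 'r': 0}, which is valuation (c) above.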

5.1.4 More Examples. But even if P is infinite, for example {pi : i ∈ N}, we
can reasonably define valuations (but note that there’ll be infinitely
many, so we can’t write them all down). Here are some examples for
definitions of valuations v : {pi : i ∈ N} → {0, 1}:

(a) v(pi ) = 0 for all i ∈ N
(b) v(pi ) = 0 if i is odd, and 1 if i is even
(c) v(pi ) = 0 if i is even, and 1 if i is odd
(d) v(pi ) = 1 if i is prime, and 0 otherwise
(e) v(pi ) = 1 for all i ∈ N
(f) For X ⊆ N a set of numbers, we set v(pi ) = 1 iff i ∈ X.

(g) For a formula φ ∈ L, we set v(pi ) = 0 iff pi ∈ sub(φ), for all
i ∈ N.

5.1.5 Above we said that once we know the truth-values of the sentence
letters under a given valuation, we can calculate the truth-values of all
formulas using function recursion. In order to do so, we need to know
how the truth-value of a complex formula depends on the truth-values
of its immediate sub-formulas. Let’s begin by guiding our intuitions
first. The following principles seem plausible for all φ, ψ ∈ L, given
that ¬ means not, ∧ means and, and ∨ means or :

(i) ¬φ is true iff φ is false


(ii) (φ ∧ ψ) is true iff φ is true and ψ is true
(iii) (φ ∨ ψ) is true iff φ is true or ψ is true

How about a formula of the form (φ → ψ)? When is a formula of


this form true? The standard answer to this question is actually not
so easy to see. We will try to motivate it “indirectly” by thinking
about when (φ → ψ) should be false. Surely, if φ is true and ψ is
false, then (φ → ψ) should be false. To say that if the ball is red, then
it’s scarlet should exclude that the ball is red and not scarlet. The
standard account for the truth of (φ → ψ) says that this is the only
way in which (φ → ψ) can be false: the only way in which it can turn
out to be false that if the ball is red, then it’s scarlet is if the ball is
red but not scarlet. So, (φ → ψ) is false if and only if φ is true and
ψ is false. Note that, by bivalence, this gives us an answer to when
(φ → ψ) is true: by bivalence, (φ → ψ) is false iff (φ → ψ) is not true;
so (φ → ψ) is true iff it's not the case that φ is true and
there are two ways in which it can not be the case that φ is true and
ψ is false: one is that φ is not true, and so false, and the other is that
ψ is not false, and so true. We arrive at:

(iv) (φ → ψ) is true iff φ is false or ψ is true

The last case, (φ ↔ ψ), is somewhat easier, given that ↔ means if and


only if. We want to say that (φ ↔ ψ) is true iff (φ → ψ) is true (if)
and (ψ → φ) is true (only if). With a bit of fiddling using (iv), we get:

(v) (φ ↔ ψ) is true iff either φ and ψ are both true or φ and ψ are
both false.
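Clauses (i–v) are exactly the clauses one needs for a recursive evaluator. A sketch in Python, under the assumptions of this sketch that formulas are encoded as nested tuples (a sentence letter is a string, ¬φ is ('¬', φ), (φ ◦ ψ) is (◦, φ, ψ)) and a valuation v is a dictionary mapping sentence letters to 0 or 1:

```python
def truth_value(phi, v):
    """Truth-value of phi under valuation v, following clauses (i)-(v)."""
    if isinstance(phi, str):                 # sentence letter: look it up
        return v[phi]
    if phi[0] == '¬':                        # clause (i)
        return 1 if truth_value(phi[1], v) == 0 else 0
    a = truth_value(phi[1], v)
    b = truth_value(phi[2], v)
    if phi[0] == '∧':                        # clause (ii)
        return 1 if a == 1 and b == 1 else 0
    if phi[0] == '∨':                        # clause (iii)
        return 1 if a == 1 or b == 1 else 0
    if phi[0] == '→':                        # clause (iv)
        return 1 if a == 0 or b == 1 else 0
    if phi[0] == '↔':                        # clause (v)
        return 1 if a == b else 0
```

In particular, the evaluator makes (φ → ψ) come out true whenever φ is false, exemplifying the material conditional discussed in the next point.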

5.1.6 The reading of → given by (iv) is something worth dwelling on for a


moment. Remember that in the introduction we said that the treat-
ment of the conditional is something that characterizes classical logic.
Condition (iv) is this treatment. The operator → with the reading
CHAPTER 5. SEMANTICS FOR PROPOSITIONAL LOGIC 108

given by (iv) is what’s called the material conditional. A peculiar fea-


ture of the material conditional is that (φ → ψ) is true, if φ is false
(regardless of whether ψ is true). So, given we’re thinking about the
actual world, the sentence “if Utrecht is in the US, then I’m the king
of Germany” is true, if understood in terms of →. This might sound
weird, so let’s think about it. Why should “if Utrecht is in the US, then
I’m the king of Germany” be true (about the actual world)? Well, be-
cause it’s not false: for it to be false, it would need to be the case that
Utrecht is in the US but I’m not the king of Germany—and it’s simply
not true that Utrecht is in the US. Note that this is, essentially, the
same reasoning we used to show in the introduction that in classical
logic, an inference with inconsistent premises is always valid (1.3.3).
Here classical logic and the reading of → go hand-in-hand. There are
many non-classical logics in which the conditional is not material. For
example, in relevant logic, (φ → ψ) can only be true if φ and ψ have
something in common. This would render “if Utrecht is in the US,
then I’m the king of Germany” false since the country Utrecht is in
and my royal pedigree have absolutely nothing to do with each other.
But, as we said in the introduction, we will only deal with classical
logic in this course.

5.1.7 Let’s see how we can use (i–v) to determine the truth value of for-
mulas under an assignment. In order to do so, we first try to capture
the meaning of the operators by means of so-called truth-functions.
An n-ary truth-function (for n ∈ N) is a function from {0, 1}ⁿ to
{0, 1}. To each of the clauses (i–v), there corresponds a truth-function
that “mirrors” the influence of the operator on the truth-value of the
sentence. These truth-functions, in a sense, give the meaning of their
corresponding operator. So, we have functions f¬ : {0, 1} → {0, 1}
and f◦ : {0, 1}² → {0, 1} for ◦ = ∧, ∨, →, ↔, given by the following
definitions:

(i) Negation: f¬(x) = 1 − x

         x | f¬(x)
        ---+------
         0 |  1
         1 |  0

(ii) Conjunction: f∧(x, y) = min(x, y)¹

In the function-tables for the binary truth-functions below, the rows
give the value of x and the columns the value of y:

         f∧ | 0  1
        ----+-----
          0 | 0  0
          1 | 0  1

(iii) Disjunction: f∨(x, y) = max(x, y)

         f∨ | 0  1
        ----+-----
          0 | 0  1
          1 | 1  1

(iv) Conditional: f→(x, y) = max(1 − x, y)

         f→ | 0  1
        ----+-----
          0 | 1  1
          1 | 0  1

(v) Biconditional: f↔(x, y) = min(max(1 − x, y), max(1 − y, x))

         f↔ | 0  1
        ----+-----
          0 | 1  0
          1 | 0  1

¹The function min is defined analogously to max (remember 4.4.4) by:
min(x, y) = x if x < y; y if y < x; and x if x = y.
Let’s think this through in the case of f∧ . Studying the function-table
for f∧ , we can see that f∧ (x, y) = 1 iff x = 1 and y = 1—the only case
in which f∧ assigns the output one is if both inputs are one. Since one
means true and zero means false, this just means that f∧ assigns the
output true iff both inputs are true—which is precisely what (5.1.5.ii)
says. The truth-functions given by (i–v) are also known as the Boolean
functions or simply Booleans.
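If you like to experiment, the Booleans are easy to type up and check against the tables above. The following Python sketch is not part of the official definitions; the function names f_not, f_and, and so on are ad hoc choices of mine:

```python
# The five Boolean truth-functions, written exactly as in clauses (i)-(v):
# negation as 1 - x, conjunction as min, disjunction as max, and so on.
def f_not(x):
    return 1 - x

def f_and(x, y):
    return min(x, y)

def f_or(x, y):
    return max(x, y)

def f_imp(x, y):
    return max(1 - x, y)

def f_iff(x, y):
    return min(max(1 - x, y), max(1 - y, x))

# Check the function-tables row by row (rows: x, columns: y).
assert [f_not(x) for x in (0, 1)] == [1, 0]
assert [f_and(x, y) for x in (0, 1) for y in (0, 1)] == [0, 0, 0, 1]
assert [f_or(x, y) for x in (0, 1) for y in (0, 1)] == [0, 1, 1, 1]
assert [f_imp(x, y) for x in (0, 1) for y in (0, 1)] == [1, 1, 0, 1]
assert [f_iff(x, y) for x in (0, 1) for y in (0, 1)] == [1, 0, 0, 1]
```

The asserts reproduce the four (or two) rows of each table, so running the file silently confirms that the formulas and the tables agree.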
5.1.8 The Booleans f→ and f↔ are a bit hard to wrap your head around, so
let’s think about them for a second. First, f→ . There is another way
of writing down the same function, which can be found by looking at
the table. Note that there are four possible inputs: (0, 0), (0, 1), (1, 0),
and (1, 1). But in only one of these cases, does f→ assign zero, viz.
(1, 0). Remember that (φ → ψ) is false iff φ is true and ψ is false. So,
we can use definition by cases to write down the definition of f→ :
        f→(x, y) = 0   if x = 1 and y = 0
                   1   otherwise

Similarly, f↔ looks almost threatening. But look at the table! Only
for the inputs (0, 0) and (1, 1) does f↔ assign the output 1. So, we get
the following useful definition by cases of f↔:

        f↔(x, y) = 1   if x = y
                   0   otherwise
This, I hope, is already much more transparent.

5.1.9 We will now use the Booleans to define the truth-value JφKv of a for-
mula φ ∈ L under a valuation v. More concretely, we will define the
function J·Kv : L → {0, 1} by the following recursion:

(i) JpKv = v(p) for all p ∈ P


(ii) (a) J¬φKv = f¬ (JφKv )
(b) J(φ ◦ ψ)Kv = f◦ (JφKv , JψKv ) for ◦ = ∧, ∨, →, ↔

This is very compact, so let’s unfold it a bit by applying the definitions


of the truth-functions:

(i) JpKv = v(p) for all p ∈ P


(ii) (a) J¬φKv = 1 − JφKv
     (b) J(φ ∧ ψ)Kv = min(JφKv, JψKv)
         J(φ ∨ ψ)Kv = max(JφKv, JψKv)
         J(φ → ψ)Kv = max(1 − JφKv, JψKv)
         J(φ ↔ ψ)Kv = 1 if JφKv = JψKv, and 0 otherwise

By virtue of function recursion, we get a truth-value for every formula


from (i–ii). In fact, we can calculate this value step-by-step. Let’s con-
sider a couple of concrete examples. Let v be the valuation given in
5.1.3.f, i.e. v(p) = 1, v(q) = 0, and v(r) = 1. For ¬(p ∧ (r ∨ q)), we get:

J¬(p ∧ (r ∨ q))Kv = 1 − J(p ∧ (r ∨ q))Kv (ii.a)


= 1 − min(JpKv , J(r ∨ q)Kv ) (ii.b)
= 1 − min(JpKv , max(JrKv , JqKv )) (ii.b)
= 1 − min(v(p), max(v(r), v(q))) (i)
= 1 − min(1, max(1, 0)) (5.1.3.f)
= 1 − min(1, 1)
=1−1
=0

Next, consider the formula ((p → q) ∨ (q → r)). We get:

J((p → q) ∨ (q → r))Kv = max(J(p → q)Kv , J(q → r)Kv )


= max(max(1 − JpKv , JqKv ), max(1 − JqKv , JrKv ))
= max(max(1 − v(p), v(q)), max(1 − v(q), v(r)))
= max(max(1 − 1, 0), max(1 − 0, 1))
= max(max(0, 0), max(1, 1))
= max(0, 1)
=1
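The recursion of 5.1.9 translates directly into a short program, and the two calculations above can then be replayed mechanically. Here is a sketch in Python; the nested-tuple encoding of formulas and the name tv are my own choices, not part of the notes:

```python
def tv(phi, v):
    """Compute JphiKv by the recursion of 5.1.9. Atoms are strings;
    complex formulas are tuples headed by an operator name."""
    if isinstance(phi, str):                 # clause (i): JpKv = v(p)
        return v[phi]
    op = phi[0]
    if op == 'not':                          # J¬φKv = 1 - JφKv
        return 1 - tv(phi[1], v)
    x, y = tv(phi[1], v), tv(phi[2], v)
    if op == 'and':                          # min(JφKv, JψKv)
        return min(x, y)
    if op == 'or':                           # max(JφKv, JψKv)
        return max(x, y)
    if op == 'imp':                          # max(1 - JφKv, JψKv)
        return max(1 - x, y)
    if op == 'iff':                          # 1 iff both sides agree
        return 1 if x == y else 0
    raise ValueError(f"unknown operator: {op}")

# The valuation from 5.1.3.f and the two worked examples:
v = {'p': 1, 'q': 0, 'r': 1}
neg_conj = ('not', ('and', 'p', ('or', 'r', 'q')))       # ¬(p ∧ (r ∨ q))
disj_imp = ('or', ('imp', 'p', 'q'), ('imp', 'q', 'r'))  # (p → q) ∨ (q → r)
assert tv(neg_conj, v) == 0
assert tv(disj_imp, v) == 1
```

The two asserts check exactly the results of the step-by-step calculations above: 0 for ¬(p ∧ (r ∨ q)) and 1 for ((p → q) ∨ (q → r)).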

5.1.10 Note that since for every valuation v, J·Kv is a function from L to
{0, 1}, it follows immediately that for each φ ∈ L, we have that either
JφKv = 1 or JφKv = 0 (and never both). In other words, the law of
bivalence holds for our semantics.

5.1.11 It’s useful to look at the definition of truth under a valuation from
another angle, to look at it as a property of formulas. The idea is
that, instead of defining the truth-value of a formula using function
recursion as we did in 5.1.9, we could also have defined a property of
formulas using an inductive definition—just like we inductively define
sets.² For v a valuation and φ ∈ L a formula, we write v ⊨ φ to say
that φ is true under the valuation v, and we write v ⊭ φ to say that φ
is not true under v. To obtain an inductive definition of truth under
a valuation, we would now simply postulate the following inductive
clauses, which are derived from clauses (i–v) of 5.1.5:

(i) v ⊨ p iff v(p) = 1, for p ∈ P
(ii) v ⊨ ¬φ iff v ⊭ φ
(iii) v ⊨ (φ ∧ ψ) iff v ⊨ φ and v ⊨ ψ
(iv) v ⊨ (φ ∨ ψ) iff v ⊨ φ or v ⊨ ψ
(v) v ⊨ (φ → ψ) iff v ⊭ φ or v ⊨ ψ
(vi) v ⊨ (φ ↔ ψ) iff either v ⊨ φ and v ⊨ ψ, or v ⊭ φ and v ⊭ ψ.

This definition would have worked equally well—in fact, we shall prove
in a moment that the two definitions coincide. Which of the two
definitions (5.1.9 or this one) to use is mainly a question of taste.
Some logicians prefer 5.1.9 and some logicians prefer the above defi-
nition. In the following, we will mainly work with definition 5.1.9 (so
guess which kind of logician I am).
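This second, inductive definition can also be typed up directly, this time as a true/false predicate rather than a 0/1 function. Again a sketch in Python; the tuple encoding and the name sat are mine:

```python
def sat(v, phi):
    """v ⊨ phi, following the inductive clauses (i)-(vi) above."""
    if isinstance(phi, str):                      # (i) v ⊨ p iff v(p) = 1
        return v[phi] == 1
    op = phi[0]
    if op == 'not':                               # (ii)
        return not sat(v, phi[1])
    if op == 'and':                               # (iii)
        return sat(v, phi[1]) and sat(v, phi[2])
    if op == 'or':                                # (iv)
        return sat(v, phi[1]) or sat(v, phi[2])
    if op == 'imp':                               # (v)
        return (not sat(v, phi[1])) or sat(v, phi[2])
    if op == 'iff':                               # (vi)
        return sat(v, phi[1]) == sat(v, phi[2])
    raise ValueError(f"unknown operator: {op}")

# Same verdicts as the calculation via 5.1.9 for the examples there:
v = {'p': 1, 'q': 0, 'r': 1}
assert sat(v, ('not', ('and', 'p', ('or', 'r', 'q')))) is False
assert sat(v, ('or', ('imp', 'p', 'q'), ('imp', 'q', 'r'))) is True
```

That the two implementations always agree is just a programmed version of the proposition proved in 5.1.13.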

5.1.12 Let’s look at our two examples from before. Let v again be the valua-
tion given in 5.1.3.f, i.e. v(p) = 1, v(q) = 0, and v(r) = 1. For
¬(p ∧ (r ∨ q)), we can argue:

• v ⊨ ¬(p ∧ (r ∨ q)) iff v ⊭ (p ∧ (r ∨ q)) (by ii)
• v ⊭ (p ∧ (r ∨ q)) iff v ⊭ p or v ⊭ (r ∨ q) (by (iii) and contrapositive reasoning)
• v ⊭ (r ∨ q) iff v ⊭ r and v ⊭ q (by (iv) and contrapositive reasoning)
• But we know that v(p) = 1, v(q) = 0, and v(r) = 1 (by 5.1.3.f).
• So, v ⊨ p, v ⊭ q, and v ⊨ r (by (i) and, in the case of q, contrapositive reasoning).
• So, since we have v ⊨ p, we don’t have v ⊭ p (on pain of contradiction). And since we have v ⊨ r, we don’t have v ⊭ r and v ⊭ q (again, on pain of contradiction).
• Hence, we neither have v ⊭ p, nor do we have v ⊭ r and v ⊭ q.
• But that just means that we don’t have v ⊭ (p ∧ (r ∨ q)), and so we don’t have v ⊨ ¬(p ∧ (r ∨ q)). In other words, v ⊭ ¬(p ∧ (r ∨ q)).

For ((p → q) ∨ (q → r)), instead, we argue as follows:

• v ⊨ ((p → q) ∨ (q → r)) iff v ⊨ (p → q) or v ⊨ (q → r) (by iv)
• v ⊨ (p → q) iff v ⊭ p or v ⊨ q (by v)
• v ⊨ (q → r) iff v ⊭ q or v ⊨ r (by v)
• Since we have v(r) = 1, we have v ⊨ r (by i).
• So we have v ⊭ q or v ⊨ r.
• So we have v ⊨ (q → r).
• So we have v ⊨ ((p → q) ∨ (q → r)).

We get precisely the same results as we did using definition 5.1.9:
¬(p ∧ (r ∨ q)) is false under v and ((p → q) ∨ (q → r)) is true under v.
This is not by chance, as we’ll prove in the following theorem.

²Remember that, mathematically speaking, a property actually is just the set of objects
satisfying the property.

5.1.13 We show:

Proposition. Let v be a valuation and φ ∈ L. Then JφKv = 1 iff
v ⊨ φ.

Proof. We prove this claim by induction on formulas.

(i) Base case. We need to show that JpKv = 1 iff v ⊨ p for p ∈ P.
But JpKv = 1 and v ⊨ p are both defined as v(p) = 1, hence the
claim trivially holds.

(ii) Induction steps.

(a) Assume the induction hypothesis that JφKv = 1 iff v ⊨ φ. We
need to show that J¬φKv = 1 iff v ⊨ ¬φ. This is a biconditional,
so we have to show both directions:

• Left-to-right. Suppose that J¬φKv = 1. We need to derive
that v ⊨ ¬φ. Since J¬φKv = 1 − JφKv, we can infer that
JφKv = 0. But by the induction hypothesis, we have that
JφKv = 1 iff v ⊨ φ. Hence JφKv = 0 iff v ⊭ φ. So, we can
conclude from JφKv = 0 that v ⊭ φ. But v ⊨ ¬φ iff v ⊭ φ,
and so v ⊨ ¬φ, as desired.

• Right-to-left. Suppose that v ⊨ ¬φ. Since v ⊨ ¬φ iff v ⊭ φ,
it follows that v ⊭ φ. The induction hypothesis states
that JφKv = 1 iff v ⊨ φ, and so JφKv = 0 iff v ⊭ φ. Hence,
we get JφKv = 0. But since J¬φKv = 1 − JφKv, it follows
that J¬φKv = 1, as desired.

(b) Assume the induction hypotheses that JφKv = 1 iff v ⊨ φ and
JψKv = 1 iff v ⊨ ψ. We need to show four different cases,
J(φ ◦ ψ)Kv = 1 iff v ⊨ (φ ◦ ψ) for ◦ = ∧, ∨, →, ↔. We will only
prove the case for ◦ = ∧ and leave the remaining cases as
exercises. So, we need to derive that J(φ ∧ ψ)Kv = 1 iff
v ⊨ (φ ∧ ψ). We have to show both directions:

• Left-to-right. Suppose that J(φ ∧ ψ)Kv = 1. Since J(φ ∧
ψ)Kv = min(JφKv, JψKv), it follows that JφKv = 1 and
JψKv = 1. By the induction hypothesis, it follows that
v ⊨ φ and v ⊨ ψ. But v ⊨ (φ ∧ ψ) iff v ⊨ φ and v ⊨ ψ by
definition, and so v ⊨ (φ ∧ ψ), as desired.

• Right-to-left. Suppose that v ⊨ (φ ∧ ψ). Since v ⊨ (φ ∧ ψ)
iff v ⊨ φ and v ⊨ ψ, it follows that v ⊨ φ and v ⊨ ψ.
By the induction hypothesis, we have that JφKv = 1 and
JψKv = 1. But since J(φ ∧ ψ)Kv = min(JφKv, JψKv) = min(1, 1),
it follows that J(φ ∧ ψ)Kv = 1.

5.2 Consequence and the Deduction Theorem


5.2.1 We’ve now arrived at the point where we can formulate the main
result of our theoretical inquiry into valid reasoning: a formal account
of validity. Remember that the standard account of validity states
that an inference is valid iff in every situation where the premises are
true, the conclusion is true as well. We can now understand this idea
precisely as truth-preservation across all models. Assuming that an
inference is formulated in terms of a propositional language, we can
say that an inference is valid iff under every valuation, if the premises
are all true under the valuation, then the conclusion is, too. We’ll
spend this section working this idea out in a bit more detail.

5.2.2 Note that what underlies our (formal) notion of validity is a relation
among formulas: the relation that holds between a set of formulas and
another formula just in case every valuation that makes the set of
formulas true also makes the other formula true. This notion is
known in the literature as logical consequence. If Γ is a set of formulas
(and we typically use Γ, ∆, Σ, . . . as variables for sets of formulas) and
φ a single formula, we write Γ ⊨ φ to say that φ is a consequence of
the Γ’s; and we write Γ ⊭ φ to say that φ is not a consequence of the
Γ’s. Formally, we define the relation ⊨ on the formulas as follows:

• Γ ⊨ φ iff for all valuations v, if JψKv = 1 for all ψ ∈ Γ, then
JφKv = 1.

Alternatively, using the property of truth in a model, we can define
the relation as follows:

• Γ ⊨ φ iff for all valuations v, if v ⊨ ψ for all ψ ∈ Γ, then v ⊨ φ.

It’s an almost immediate consequence of Proposition 5.1.13 that these
two definitions are equivalent. So, we’re not going to bother proving
it here.

When Γ ⊨ φ, we also say that the Γ’s entail φ or that the Γ’s imply
φ. Ultimately, the notion of logical consequence is going to be an aux-
iliary notion in our account of valid inference. The idea is that an
inference Γ ∴ φ, couched in a formal language, is valid iff Γ ⊨ φ. Note
that Γ ∴ φ is not a formula of L (ask yourself: Is ∴ a symbol of L?
No! So simply apply Proposition 4.2.5 to conclude that Γ ∴ φ ∉ L).
The expression Γ ∴ φ is an inference couched in a formal language:
it’s a model for a natural language inference, a piece of reasoning, and
not itself a sentence. What we do is give an account of when Γ ∴ φ
is valid (iff Γ ⊨ φ), which we use as a model for the validity of natural
language inferences.
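Since an inference only involves finitely many atoms, the definition of consequence can be checked by brute force: enumerate all valuations of the relevant atoms and look for one that makes the premises true and the conclusion false. A sketch in Python (the tuple encoding of formulas and the function names are my own):

```python
from itertools import product

def tv(phi, v):
    """JphiKv as defined in 5.1.9 (atoms are strings, formulas tuples)."""
    if isinstance(phi, str):
        return v[phi]
    op, args = phi[0], [tv(sub, v) for sub in phi[1:]]
    return {'not': lambda x: 1 - x,
            'and': min, 'or': max,
            'imp': lambda x, y: max(1 - x, y),
            'iff': lambda x, y: 1 if x == y else 0}[op](*args)

def entails(premises, conclusion, atoms):
    """Gamma ⊨ phi: every valuation making all premises true also
    makes the conclusion true."""
    for bits in product([0, 1], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(tv(g, v) == 1 for g in premises) and tv(conclusion, v) == 0:
            return False
    return True

# Inference (1) from the introduction: p ∨ q, ¬p ⊨ q
assert entails([('or', 'p', 'q'), ('not', 'p')], 'q', ['p', 'q'])
```

For n atoms this checks 2ⁿ valuations, which is fine for the small examples in this chapter.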

5.2.3 Let’s consider some examples. When we’re writing out claims of con-
sequence, it’s common to leave out the set-brackets before the ⊨ for
ease of exposition. So, instead of the more proper {p, q} ⊨ p ∧ q, we’ll
typically write p, q ⊨ p ∧ q.

(i) Claim. p, q ⊨ p ∧ q

Proof. We need to prove that for each valuation v, if JpKv = 1 and
JqKv = 1, then Jp ∧ qKv = 1. So, let v be an arbitrary valuation
such that JpKv = 1 and JqKv = 1. Consider Jp ∧ qKv. We know
that Jp ∧ qKv = min(JpKv, JqKv). But since JpKv = 1 and JqKv = 1,
we have that min(JpKv, JqKv) = min(1, 1) = 1, which is what we
needed to show.

(ii) Claim. p ∧ q ⊨ p and p ∧ q ⊨ q

Proof. We only show p ∧ q ⊨ p, since the proof for p ∧ q ⊨ q is
strictly analogous. We need to show that for each valuation v,
if Jp ∧ qKv = 1, then JpKv = 1. So, assume that v is a valuation
and Jp ∧ qKv = 1. We know that Jp ∧ qKv = min(JpKv, JqKv). But
the only way that min(JpKv, JqKv) = 1 is that both JpKv = 1 and
JqKv = 1. Hence JpKv = 1, as desired.

(iii) Claim. p ⊨ p ∨ q and q ⊨ p ∨ q.

Proof. We only show p ⊨ p ∨ q, since the proof for q ⊨ p ∨
q is strictly analogous. We need to show that for each valu-
ation v, if JpKv = 1, then Jp ∨ qKv = 1. So let v be a val-
uation such that JpKv = 1. Consider Jp ∨ qKv. We know that
Jp ∨ qKv = max(JpKv, JqKv). But since JpKv = 1, max(JpKv, JqKv) =
max(1, JqKv). Now note that whatever the value of JqKv, we have
that max(1, JqKv) = 1, which is what we needed to show. If we
want to be super precise, we can argue using distinction by cases.
There are two exhaustive cases: (a) JqKv = 1 and (b) JqKv = 0. In
case (a), we have max(1, JqKv) = max(1, 1) = 1. And in case
(b), we have max(1, JqKv) = max(1, 0) = 1. So, either way,
max(1, JqKv) = 1.

(iv) Claim. p ∨ q, ¬p ⊨ q (remember inference (1) from §1)

Proof. We need to prove that for each valuation v, if Jp ∨ qKv = 1
and J¬pKv = 1, then also JqKv = 1. So let v be a valuation
and suppose that Jp ∨ qKv = 1 and J¬pKv = 1. Since J¬pKv =
1 − JpKv, it follows that JpKv = 0. We furthermore know that
Jp ∨ qKv = max(JpKv, JqKv). Since JpKv = 0, it follows that Jp ∨
qKv = max(0, JqKv). But Jp ∨ qKv = 1, by assumption, and we can
only have max(0, JqKv) = 1 if JqKv = 1. So we can conclude that
JqKv = 1, as desired.

(v) Claim. p, p → q ⊨ q

Proof. We need to prove that for each valuation v, if JpKv = 1 and
Jp → qKv = 1, then also JqKv = 1. We could prove this directly,
using similar reasoning as in (iii), but we’ll use another argument
form for illustrative purposes—we’re going to prove the claim
indirectly. So assume that it’s not the case that for each valuation
v, if JpKv = 1 and Jp → qKv = 1, then also JqKv = 1. This would
mean that there exists a v such that JpKv = 1 and Jp → qKv = 1,
but JqKv = 0. Remember that Jp → qKv = max(1 − JpKv, JqKv).
By assumption JpKv = 1 and JqKv = 0. So we can infer that
max(1 − JpKv, JqKv) = max(1 − 1, 0) = max(0, 0) = 0. But also,
by assumption, Jp → qKv = 1. So, we have
arrived at a contradiction: Jp → qKv = 1 and Jp → qKv = 0. So, we
can conclude that our assumption for indirect proof—that there
exists a v such that JpKv = 1 and Jp → qKv = 1, but JqKv = 0—is
false. But this just means that for each valuation v, if JpKv = 1
and Jp → qKv = 1, then also JqKv = 1.

(vi) Claim. ¬p, p ↔ q ⊨ ¬q.

Proof. We need to prove that for each valuation v, if J¬pKv = 1
and Jp ↔ qKv = 1, then also J¬qKv = 1. So let v be a valuation and
suppose that J¬pKv = 1 and Jp ↔ qKv = 1. Since J¬pKv = 1 − JpKv,
it follows that JpKv = 0. We furthermore know that

        J(p ↔ q)Kv = 1   if JpKv = JqKv
                     0   otherwise

So, since Jp ↔ qKv = 1, it follows that JqKv = JpKv = 0. But since
J¬qKv = 1 − JqKv, we get J¬qKv = 1, as desired.
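All six claims are decidable by checking the four (or eight) relevant valuations, so they can also be verified mechanically. A brute-force sketch in Python, with formulas as nested tuples (an ad hoc encoding of mine, not the notes’ notation):

```python
from itertools import product

def tv(phi, v):
    # JphiKv per 5.1.9; atoms are strings, complex formulas tuples
    if isinstance(phi, str):
        return v[phi]
    op, args = phi[0], [tv(sub, v) for sub in phi[1:]]
    return {'not': lambda x: 1 - x, 'and': min, 'or': max,
            'imp': lambda x, y: max(1 - x, y),
            'iff': lambda x, y: 1 if x == y else 0}[op](*args)

def entails(premises, conclusion, atoms=('p', 'q')):
    # Gamma ⊨ phi, checked over all valuations of the listed atoms
    return all(tv(conclusion, dict(zip(atoms, bits))) == 1
               for bits in product([0, 1], repeat=len(atoms))
               if all(tv(g, dict(zip(atoms, bits))) == 1 for g in premises))

p, q = 'p', 'q'
assert entails([p, q], ('and', p, q))                    # claim (i)
assert entails([('and', p, q)], p)                       # claim (ii)
assert entails([p], ('or', p, q))                        # claim (iii)
assert entails([('or', p, q), ('not', p)], q)            # claim (iv)
assert entails([p, ('imp', p, q)], q)                    # claim (v)
assert entails([('not', p), ('iff', p, q)], ('not', q))  # claim (vi)
```

Of course, a passed check is no substitute for the proofs above, but it is a quick sanity test.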

5.2.4 Note that each of the examples in 5.2.3 is a claim about concrete
formulas, i.e. specific formulas of a fixed language. But an important
aspect of logic is to figure out logical laws: patterns of valid inferences.
An example of such a pattern is that φ, ψ ∴ (φ ∧ ψ) is valid for all
φ, ψ ∈ L. Remember from 5.2.2 that the idea is that φ, ψ ∴ (φ ∧ ψ) is
valid iff φ, ψ ⊨ (φ ∧ ψ). So, what we have to prove in order to prove the
logical law is that for all φ, ψ ∈ L, φ, ψ ⊨ (φ ∧ ψ). How would we do
something like this? Well, even though this is a claim about formulas,
we don’t need to use induction to establish this. We can simply use
the reasoning from 5.2.3.(i) and replace p with φ and q with ψ. To be
perfectly explicit, here is the argument:

Proof. Let φ, ψ ∈ L be two formulas. We need to prove that for each
valuation v, if JφKv = 1 and JψKv = 1, then Jφ ∧ ψKv = 1. So, let v
be an arbitrary valuation such that JφKv = 1 and JψKv = 1. Consider
Jφ ∧ ψKv. We know that Jφ ∧ ψKv = min(JφKv, JψKv). But since JφKv = 1
and JψKv = 1, we have that min(JφKv, JψKv) = min(1, 1) = 1, which is
what we needed to show.

In a similar way, we can also transform the other examples from 5.2.3
into logical laws.

There is, however, another kind of logical law, which requires a slightly
more general approach. Suppose that you know that Γ entails φ and
that φ together with ∆ entails ψ, for some formulas φ, ψ ∈ L and sets
of formulas Γ, ∆ ⊆ L. Can it be that Γ and ∆ together don’t entail ψ?
The answer seems to be: obviously not! If φ is true whenever the Γ’s
are true, and ψ is true whenever φ and the ∆’s are true, then it seems
to follow that ψ is true whenever the Γ’s and the ∆’s are true. This is
the law of the transitivity of logical consequence. It’s an example of a
more general kind of logical law, which we need to prove in a slightly
more complicated way. Let’s do this as an example:

Proposition. For all Γ, ∆ ⊆ L and φ, ψ ∈ L, if Γ ⊨ φ and {φ} ∪ ∆ ⊨ ψ,
then Γ ∪ ∆ ⊨ ψ.

Proof. Let Γ, ∆ ⊆ L and φ, ψ ∈ L be arbitrary. We need to prove that
if Γ ⊨ φ and {φ} ∪ ∆ ⊨ ψ, then Γ ∪ ∆ ⊨ ψ. So, suppose for conditional
proof that Γ ⊨ φ and {φ} ∪ ∆ ⊨ ψ. This means that (a) for each
valuation v, if JθKv = 1 for all θ ∈ Γ, then JφKv = 1, and (b) for each
valuation v, if JθKv = 1 for all θ ∈ ∆ ∪ {φ}, then JψKv = 1. We want to
show that Γ ∪ ∆ ⊨ ψ, i.e. for all v, if JθKv = 1 for all θ ∈ Γ ∪ ∆, then
JψKv = 1. So, suppose that (c) JθKv = 1 for all θ ∈ Γ ∪ ∆. Since
Γ ⊆ Γ ∪ ∆, we can infer that JθKv = 1 for all θ ∈ Γ. By (a), this means
that JφKv = 1. And since also ∆ ⊆ Γ ∪ ∆, we can infer from (c) that
JθKv = 1 for all θ ∈ ∆. Putting the last two observations together, we
get that JθKv = 1 for all θ ∈ ∆ ∪ {φ}. Finally, by (b), we can infer from
this that JψKv = 1, as desired.

What is this more general law good for? Well, it allows you to infer
consequence claims from other consequence claims that you’ve already
proven. We know, for example, that p, p → q ⊨ q (5.2.3.v) and that
p, q ⊨ p ∧ q (5.2.3.i). So, using our proposition, we can infer that
p, p → q ⊨ p ∧ q. Note that we implicitly made use of our set notation
here: strictly speaking, p, p → q ⊨ q should be written {p, p → q} ⊨ q
and p, q ⊨ p ∧ q should be written {p, q} ⊨ p ∧ q. So, what we can
infer using our proposition is that {p, p → q} ∪ {p} ⊨ p ∧ q. But
{p, p → q} ∪ {p} = {p, p → q} (remember, with sets, repetition doesn’t
matter). So, {p, p → q} ∪ {p} ⊨ p ∧ q can be written p, p → q ⊨ p ∧ q.

5.2.5 If two formulas φ and ψ are consequences of each other, i.e. if φ ⊨ ψ
and ψ ⊨ φ, then we say that they are logically equivalent. We write
φ ⫤⊨ ψ to say that φ and ψ are equivalent. For example, we have
that p ⫤⊨ ¬¬p. To see this, note that for any valuation v, we have
that J¬¬pKv = 1 − J¬pKv = 1 − (1 − JpKv) = JpKv. So, p and ¬¬p
have the same truth-value under every valuation. As a consequence,
we have that both p ⊨ ¬¬p and ¬¬p ⊨ p, i.e. p ⫤⊨ ¬¬p. This point
actually generalizes: two formulas are logically equivalent iff they have
the same truth-value under each valuation:

Proposition. Let φ, ψ ∈ L be formulas. Then φ ⫤⊨ ψ iff for all
valuations v, JφKv = JψKv.

Proof. We prove the two directions in turn:

• (Left-to-right): Suppose that φ ⫤⊨ ψ, i.e. both φ ⊨ ψ and ψ ⊨ φ.
We need to show that for all valuations v, JφKv = JψKv. So, let v
be an arbitrary valuation. There are two exhaustive possibilities:
(a) JφKv = 1 and (b) JφKv = 0. In case (a), we can infer that
JψKv = 1 from the fact that φ ⊨ ψ (i.e. for all v, if JφKv = 1,
then JψKv = 1). Hence JφKv = 1 = JψKv, as desired. Can it be
in case (b) that JψKv = 1? Well, if this were the case, then by
ψ ⊨ φ, we’d have that JφKv = 1, contrary to our assumption that
JφKv = 0. Hence, by indirect proof, we have to have JψKv = 0. So
JφKv = 0 = JψKv, as desired.

• (Right-to-left): Suppose that for all valuations v, JφKv = JψKv.
We need to show that φ ⫤⊨ ψ, i.e. both φ ⊨ ψ and ψ ⊨ φ. We
only show φ ⊨ ψ, since the proof for ψ ⊨ φ is strictly analogous.
To prove that φ ⊨ ψ, we need to show that for all valuations v,
if JφKv = 1, then JψKv = 1. So, let v be an arbitrary valuation
with JφKv = 1. But since, by assumption, JφKv = JψKv, it follows
immediately that JψKv = 1, as desired.

The concept of equivalence is of fundamental importance for logical
theory. To see this, note that all that matters for validity is the truth
or falsity of statements: an inference is valid iff in every model where
the premises are true, so is the conclusion. But then, if two statements
are logically equivalent, they can be replaced for each other in all
reasoning contexts without destroying validity: if an inference is valid,
then so is any inference where some formulas have been replaced with
logically equivalent ones. For example, since p, q ∴ p ∧ q is valid, so is
¬¬p, q ∴ p ∧ q.
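Logical equivalence, and the fact that equivalents can be swapped without destroying validity, can likewise be tested by enumeration. A sketch in Python (encoding and names mine):

```python
from itertools import product

def tv(phi, v):
    # JphiKv per 5.1.9; atoms are strings, complex formulas tuples
    if isinstance(phi, str):
        return v[phi]
    op, args = phi[0], [tv(sub, v) for sub in phi[1:]]
    return {'not': lambda x: 1 - x, 'and': min, 'or': max,
            'imp': lambda x, y: max(1 - x, y),
            'iff': lambda x, y: 1 if x == y else 0}[op](*args)

def equivalent(a, b, atoms=('p', 'q')):
    # By the proposition in 5.2.5: same truth-value under every valuation
    return all(tv(a, dict(zip(atoms, bits))) == tv(b, dict(zip(atoms, bits)))
               for bits in product([0, 1], repeat=len(atoms)))

def entails(premises, conclusion, atoms=('p', 'q')):
    return all(tv(conclusion, dict(zip(atoms, bits))) == 1
               for bits in product([0, 1], repeat=len(atoms))
               if all(tv(g, dict(zip(atoms, bits))) == 1 for g in premises))

# p is equivalent to ¬¬p ...
assert equivalent('p', ('not', ('not', 'p')))
# ... so the valid inference p, q ∴ p ∧ q stays valid after replacement:
assert entails(['p', 'q'], ('and', 'p', 'q'))
assert entails([('not', ('not', 'p')), 'q'], ('and', 'p', 'q'))
```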
5.2.6 We shall now state a series of logical laws that we’re not all going to
prove—well, you will as an exercise. It’s important that you’ve seen
these laws and understand them, since they are often alluded to in
formal reasoning, inside and outside the context of logic.

Lemma (Propositional Laws). The following laws hold for all φ, ψ, θ ∈
L and all Γ, ∆ ⊆ L:

(i) φ ⊨ φ (Reflexivity)
(ii) If Γ ⊨ φ and {φ} ∪ ∆ ⊨ ψ, then Γ ∪ ∆ ⊨ ψ. (Transitivity)
(iii) If Γ ⊨ φ, then Γ ∪ ∆ ⊨ φ. (Monotonicity)
(iv) φ, ψ ⊨ φ ∧ ψ (Conjunction Introduction)
(v) φ ∧ ψ ⊨ φ and φ ∧ ψ ⊨ ψ (Conjunction Elimination)
(vi) φ ⊨ φ ∨ ψ and ψ ⊨ φ ∨ ψ (Disjunction Introduction)
(vii) If φ ⊨ θ and ψ ⊨ θ, then φ ∨ ψ ⊨ θ. (Disjunction Elimination)
(viii) φ ∧ φ ⫤⊨ φ (Idempotence)
(ix) φ ∨ φ ⫤⊨ φ (Idempotence)
(x) φ ∧ ψ ⫤⊨ ψ ∧ φ (Commutativity)
(xi) φ ∨ ψ ⫤⊨ ψ ∨ φ (Commutativity)
(xii) (φ ∧ ψ) ∧ θ ⫤⊨ φ ∧ (ψ ∧ θ) (Associativity)
(xiii) (φ ∨ ψ) ∨ θ ⫤⊨ φ ∨ (ψ ∨ θ) (Associativity)
(xiv) φ ∧ (ψ ∨ θ) ⫤⊨ (φ ∧ ψ) ∨ (φ ∧ θ) (Distributivity)
(xv) φ ∨ (ψ ∧ θ) ⫤⊨ (φ ∨ ψ) ∧ (φ ∨ θ) (Distributivity)
(xvi) ¬¬φ ⫤⊨ φ (Double Negation)
(xvii) ¬(φ ∧ ψ) ⫤⊨ ¬φ ∨ ¬ψ (De Morgan’s Law)
(xviii) ¬(φ ∨ ψ) ⫤⊨ ¬φ ∧ ¬ψ (De Morgan’s Law)
(xix) ¬φ, φ ∨ ψ ⊨ ψ (Disjunctive Syllogism)
(xx) φ → ψ ⫤⊨ ¬φ ∨ ψ (Conditional Definition)
(xxi) φ → ψ ⫤⊨ ¬ψ → ¬φ (Contraposition)
(xxii) φ → ψ, φ ⊨ ψ (Modus Ponens)
(xxiii) φ → ψ, ¬ψ ⊨ ¬φ (Modus Tollens)
(xxiv) φ ↔ ψ ⫤⊨ (φ → ψ) ∧ (ψ → φ) (Biconditional Introduction)
(xxv) φ ↔ ψ ⫤⊨ ¬φ ↔ ¬ψ (Biconditional Contraposition)

Proof. We leave all but (vii) as an exercise. We prove (vii) here because
it allows us to understand the idea of proof by cases better.

We want to show that if φ ⊨ θ and ψ ⊨ θ, then φ ∨ ψ ⊨ θ. So,
suppose that φ ⊨ θ and ψ ⊨ θ. This means that (a) for all valuations
v, if JφKv = 1, then JθKv = 1; and (b) for all valuations v, if JψKv = 1,
then JθKv = 1. In order to derive φ ∨ ψ ⊨ θ, we need to show that for all
valuations v, if Jφ ∨ ψKv = 1, then JθKv = 1. So, let v be a valuation such
that Jφ ∨ ψKv = 1. Since Jφ ∨ ψKv = 1 and Jφ ∨ ψKv = max(JφKv, JψKv),
we can distinguish two exhaustive cases: (c) JφKv = 1 and (d) JψKv = 1.
We show that in each case JθKv = 1.

(c) Let JφKv = 1. By (a), this directly gives us JθKv = 1.
(d) Let JψKv = 1. By (b), we can directly infer that JθKv = 1.

Hence, either way, assuming that Jφ ∨ ψKv = 1, we get that JθKv = 1,
which is what we needed to show.

Note that the law of Associativity justifies our notational convention
of leaving out parentheses in sequences of ∧’s and sequences of ∨’s
(4.5.4).
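Since each law is a schema, a brute-force check of an instance (with φ, ψ, θ read as distinct atoms) verifies that instance over all valuations of those atoms. Here is a sketch in Python spot-checking a few of the laws this way (encoding mine; no substitute for the exercise, of course):

```python
from itertools import product

def tv(phi, v):
    # JphiKv per 5.1.9; atoms are strings, complex formulas tuples
    if isinstance(phi, str):
        return v[phi]
    op, args = phi[0], [tv(sub, v) for sub in phi[1:]]
    return {'not': lambda x: 1 - x, 'and': min, 'or': max,
            'imp': lambda x, y: max(1 - x, y),
            'iff': lambda x, y: 1 if x == y else 0}[op](*args)

ATOMS = ('p', 'q', 'r')

def entails(premises, conclusion):
    return all(tv(conclusion, dict(zip(ATOMS, bits))) == 1
               for bits in product([0, 1], repeat=3)
               if all(tv(g, dict(zip(ATOMS, bits))) == 1 for g in premises))

def equivalent(a, b):
    return entails([a], b) and entails([b], a)

p, q, r = 'p', 'q', 'r'
# (xvii) De Morgan: ¬(p ∧ q) is equivalent to ¬p ∨ ¬q
assert equivalent(('not', ('and', p, q)), ('or', ('not', p), ('not', q)))
# (xiv) Distributivity: p ∧ (q ∨ r) is equivalent to (p ∧ q) ∨ (p ∧ r)
assert equivalent(('and', p, ('or', q, r)),
                  ('or', ('and', p, q), ('and', p, r)))
# (xxi) Contraposition: p → q is equivalent to ¬q → ¬p
assert equivalent(('imp', p, q), ('imp', ('not', q), ('not', p)))
# (xxiii) Modus Tollens: p → q, ¬q entail ¬p
assert entails([('imp', p, q), ('not', q)], ('not', p))
```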
5.2.7 A nice thing about the previously stated laws is that you can use
them to derive further laws. For example, you might suspect that
φ ∧ ψ ⫤⊨ ¬(¬φ ∨ ¬ψ). You can actually derive this law from the
laws of Double Negation and De Morgan’s Law. Here’s how (for the
derivation, note that the laws hold for all φ):

• By Double Negation and the fact that we can replace logical
equivalents, φ ∧ ψ ⫤⊨ ¬¬φ ∧ ¬¬ψ.
• By De Morgan’s Law (and replacing logical equivalents), we get
that ¬¬φ ∧ ¬¬ψ ⫤⊨ ¬(¬φ ∨ ¬ψ).
• Using Transitivity a bunch of times, we get φ ∧ ψ ⫤⊨ ¬(¬φ ∨ ¬ψ).

These kinds of derivations will be the topic of the next chapter, on
proof theory. Here the point simply is that the laws given above are,
essentially, laws of reasoning. And we can prove them to be correct.
5.2.8 Note that it’s a simple consequence of the definition of ⊨ that Γ ⊭ φ iff
there exists a valuation v such that JψKv = 1 for all ψ ∈ Γ, but JφKv =
0. This gives us the standard method for showing that some formulas
Γ don’t entail a given conclusion φ: we produce a countermodel, i.e. a
valuation v such that JψKv = 1 for all ψ ∈ Γ, but JφKv = 0. Here are
some examples. For simplicity, we assume that P = {p, q, r}.

(i) Claim. p ∨ q, p ⊭ ¬q
Countermodel. Any v such that v(p) = 1 and v(q) = 1. If v(p) = 1
and v(q) = 1, then both JpKv = 1 and JqKv = 1. And Jp ∨ qKv =
max(JpKv, JqKv) = max(1, 1) = 1. But JqKv = 1 and J¬qKv =
1 − JqKv, so J¬qKv = 0.

(ii) Claim. p → q, q ⊭ p (remember inference (2) from the introduc-
tion)
Countermodel. Any v such that v(p) = 0 and v(q) = 1. If v(p) = 0
and v(q) = 1, then JpKv = 0 and JqKv = 1. Since Jp → qKv =
max(1 − JpKv, JqKv), we have that Jp → qKv = max(1 − 0, 1) =
max(1, 1) = 1. But JpKv = 0.

(iii) Claim. p → q, ¬p ⊭ ¬q
Countermodel. Any v such that v(p) = 0 and v(q) = 1. If v(p) =
0, then JpKv = 0. So J¬pKv = 1 − JpKv = 1 and Jp → qKv =
max(1 − JpKv, JqKv) = max(1 − 0, 1) = max(1, 1) = 1. But since
v(q) = 1, JqKv = 1, and so J¬qKv = 1 − JqKv = 0.
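The countermodel method is itself an algorithm: search the finitely many valuations for one that makes the premises true and the conclusion false. A sketch in Python (encoding and names mine):

```python
from itertools import product

def tv(phi, v):
    # JphiKv per 5.1.9; atoms are strings, complex formulas tuples
    if isinstance(phi, str):
        return v[phi]
    op, args = phi[0], [tv(sub, v) for sub in phi[1:]]
    return {'not': lambda x: 1 - x, 'and': min, 'or': max,
            'imp': lambda x, y: max(1 - x, y),
            'iff': lambda x, y: 1 if x == y else 0}[op](*args)

def countermodel(premises, conclusion, atoms):
    """Return a valuation witnessing that the premises don't entail the
    conclusion, or None if the entailment holds."""
    for bits in product([0, 1], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(tv(g, v) == 1 for g in premises) and tv(conclusion, v) == 0:
            return v
    return None

# Claim (ii): p → q, q don't entail p (affirming the consequent)
cm = countermodel([('imp', 'p', 'q'), 'q'], 'p', ['p', 'q'])
assert cm == {'p': 0, 'q': 1}   # exactly the countermodel given in the text
```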

5.2.9 Corresponding to the positive laws of logic we discussed above—laws
about what follows from what—there are also negative laws of logic—
what doesn’t follow from what. These are also known as logical falla-
cies: mistakes of reasoning. The examples (i–iii) of the previous point
are instances of the most commonly known propositional fallacies. The
examples we gave show that:

• There are formulas φ, ψ ∈ L such that φ ∨ ψ, φ ⊭ ¬ψ. So you
can’t necessarily infer from φ ∨ ψ and φ that ¬ψ. If you do so
anyway in a case where φ ∨ ψ, φ ⊭ ¬ψ, you’ve committed the
fallacy of affirming the disjunct.
• There are formulas φ, ψ ∈ L such that φ → ψ, ψ ⊭ φ. So you
can’t necessarily infer from φ → ψ and ψ that φ. If you do so
anyway in a case where φ → ψ, ψ ⊭ φ, you’ve committed the
fallacy of affirming the consequent.
• There are formulas φ, ψ ∈ L such that φ → ψ, ¬φ ⊭ ¬ψ. So you
can’t necessarily infer from φ → ψ and ¬φ that ¬ψ. If you do so
anyway in a case where φ → ψ, ¬φ ⊭ ¬ψ, you’ve committed
the fallacy of denying the antecedent.

Note that the fallacies are not formulated as neatly as the positive
laws. The reason is that it’s not the case, for example, that for all
φ, ψ ∈ L, we have that φ ∨ ψ, φ ⊭ ¬ψ. To give a kind of stupid example,
which however makes the point very clear, let φ = p and ψ = ¬p. Then
φ ∨ ψ, φ ⊭ ¬ψ becomes p ∨ ¬p, p ⊭ ¬¬p. But it’s easy to see, using the
laws of logic, that this is false. By Double Negation, p ⊨ ¬¬p. And by
Monotonicity, from this we get p ∨ ¬p, p ⊨ ¬¬p. So, in this specific
case, you can reason by affirming the disjunct. The point is that, in
contrast to the laws of logic, you can’t always reason like this.
5.2.10 Remember from the introduction that, as a consequence of bivalence,
in classical logic there are logical truths—statements that are true in
every possible situation—and logical falsehoods—statements that are
false in every possible situation. We’ll now make these notions precise.
Note that we defined the expression Γ ⊨ φ for any set Γ and formula
φ. But what if Γ = ∅? Well, let’s see what happens when we apply the
definition:

• ∅ ⊨ φ iff for all valuations v, if JψKv = 1 for all ψ ∈ ∅, then JφKv = 1.

But wait a moment: ∅ has no members. So there is no ψ such that
ψ ∈ ∅. What does this mean for us? Well, that every valuation v is
such that JψKv = 1 for all ψ ∈ ∅. To see this, let’s think about what
it would mean for it to be false under some valuation v that JψKv = 1
for all ψ ∈ ∅. Well, it would mean that there is a ψ ∈ ∅ such that
JψKv = 0. But there is no ψ ∈ ∅. So, it can’t be false that JψKv = 1 for
all ψ ∈ ∅. But that means that all we require for ∅ ⊨ φ is that JφKv = 1
for all valuations v. This is the notion of logical truth made precise:
a formula is a logical truth iff it is a consequence of the empty set.
In formal logic, we also call logical truths validities. So, a formula that
is true under each valuation can also be called a valid formula. As a
matter of notation, note that ∅ can also be written {}. So, ∅ ⊨ φ can
also be written {} ⊨ φ. But we’ve said that we typically leave out the
set-braces in consequence claims, so ∅ ⊨ φ can simply be written as
⊨ φ.

5.2.11 Let’s consider some examples:

(i) Claim: ⊨ p ∨ ¬p

Proof. We need to prove that for all valuations v, we have that
Jp ∨ ¬pKv = 1. So let v be an arbitrary valuation. Since Jp ∨ ¬pKv =
max(JpKv, J¬pKv) and J¬pKv = 1 − JpKv, we get that Jp ∨ ¬pKv =
max(JpKv, 1 − JpKv). Now we can distinguish two exhaustive cases:
(a) JpKv = 1 and (b) JpKv = 0. In case (a), we have Jp ∨ ¬pKv =
max(JpKv, 1 − JpKv) = max(1, 1 − 1) = max(1, 0) = 1. And in case
(b), we have Jp ∨ ¬pKv = max(JpKv, 1 − JpKv) = max(0, 1 − 0) =
max(0, 1) = 1. So, either way, Jp ∨ ¬pKv = 1, which is what we
needed to show.

(ii) Claim: ⊨ (p → q) ∨ (q → p)

Proof. We need to prove that for all valuations v, we have that
J(p → q) ∨ (q → p)Kv = 1. So, let v be an arbitrary valuation and
consider J(p → q) ∨ (q → p)Kv. We know that J(p → q) ∨ (q →
p)Kv = max(J(p → q)Kv, J(q → p)Kv). And since J(p → q)Kv =
max(1 − JpKv, JqKv) and J(q → p)Kv = max(1 − JqKv, JpKv), we can
conclude that:

J(p → q) ∨ (q → p)Kv = max(max(1 − JpKv, JqKv), max(1 − JqKv, JpKv))

We can again distinguish two exhaustive cases: (a) JpKv = 1 and
(b) JpKv = 0. In case (a), we get:

J(p → q) ∨ (q → p)Kv = max(max(1 − JpKv, JqKv), max(1 − JqKv, JpKv))
                     = max(max(1 − 1, JqKv), max(1 − JqKv, 1))
                     = max(max(0, JqKv), max(1 − JqKv, 1))
                     = max(max(0, JqKv), 1)               (∗)
                     = 1

To see the critical identity (∗), simply note that max(x, 1) for
x ∈ {0, 1} is always going to be 1. In case (b), the reasoning is
analogous:

J(p → q) ∨ (q → p)Kv = max(max(1 − JpKv, JqKv), max(1 − JqKv, JpKv))
                     = max(max(1 − 0, JqKv), max(1 − JqKv, 0))
                     = max(max(1, JqKv), max(1 − JqKv, 0))
                     = max(1, max(1 − JqKv, 0))
                     = 1

So, in either case, J(p → q) ∨ (q → p)Kv = 1, which is what we
needed to show.

Note that in each of the two proofs we used a distinction by cases
on JpKv = 1 or JpKv = 0—that is, we’ve used bivalence. This is
not by accident. There is a precise sense (that we’re not going to
go into here) in which all validities of classical logic ultimately
depend on bivalence. The takeaway is that if you want to prove
that a formula is valid, it’s always a good idea to try to use
bivalence in the proof. The usual way to use bivalence is to make
a distinction by cases as we did in the proof, but sometimes you
can also use bivalence for proof by contradiction—deriving that
both JφKv = 1 and JφKv = 0 for some φ is a good aim when trying
to prove something indirectly.
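Validity, i.e. consequence from the empty set, amounts to truth under every valuation, which is again checkable by enumeration. A sketch in Python (encoding mine):

```python
from itertools import product

def tv(phi, v):
    # JphiKv per 5.1.9; atoms are strings, complex formulas tuples
    if isinstance(phi, str):
        return v[phi]
    op, args = phi[0], [tv(sub, v) for sub in phi[1:]]
    return {'not': lambda x: 1 - x, 'and': min, 'or': max,
            'imp': lambda x, y: max(1 - x, y),
            'iff': lambda x, y: 1 if x == y else 0}[op](*args)

def is_valid(phi, atoms):
    """Validity: phi is true under every valuation of the given atoms."""
    return all(tv(phi, dict(zip(atoms, bits))) == 1
               for bits in product([0, 1], repeat=len(atoms)))

p, q = 'p', 'q'
assert is_valid(('or', p, ('not', p)), ['p'])                      # claim (i)
assert is_valid(('or', ('imp', p, q), ('imp', q, p)), ['p', 'q'])  # claim (ii)
assert not is_valid(p, ['p'])   # a plain atom is not valid
```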

5.2.12 So much for logical truth. How about logical falsehood? Well, remember that a sentence is a logical falsehood iff it's false in all possible situations. Formally, this means that a formula gets value zero under all valuations. But wait, we can already express this using our terminology. Note that for all valuations v, JφKv = 0 iff J¬φKv = 1—this follows directly from J¬φKv = 1 − JφKv . But that just means that JφKv = 0 for all valuations v iff J¬φKv = 1 for all valuations v. But
that just means that ¬φ is valid! So, we can formally understand logical falsehood in terms of logical truth: φ being logically false simply means ⊨ ¬φ.

5.2.13 Let's consider an example. Since proving that something is a logical falsehood amounts to proving that something (namely its negation) is a logical truth, we don't expect any new methods to be necessary. We'll simply show that (p ∧ ¬p) is a logical falsehood. In order to show this, we establish

• Claim. ⊨ ¬(p ∧ ¬p)

Proof. We will actually not prove this from definitions, but rather using the laws of logic from 5.2.6 and the result ⊨ p ∨ ¬p from 5.2.11. For note that by de Morgan's law, we have that ¬(p ∧ ¬p) ≡ ¬p ∨ ¬¬p. By Double Negation, ¬¬p ≡ p and so ¬p ∨ ¬¬p ≡ ¬p ∨ p. By Commutativity, we have that ¬p ∨ p ≡ p ∨ ¬p. So, putting all of this together and using Transitivity a bunch of times, we get that ¬(p ∧ ¬p) ≡ p ∨ ¬p. And we know that ⊨ p ∨ ¬p from 5.2.11. By definition, this means that Jp ∨ ¬pKv = 1 for all valuations v. Since ¬(p ∧ ¬p) ≡ p ∨ ¬p, by Proposition 5.2.5, we have that for every valuation v, Jp ∨ ¬pKv = J¬(p ∧ ¬p)Kv . But that just means that J¬(p ∧ ¬p)Kv = 1, for all valuations v.

5.2.14 We conclude our discussion of validity by stating and proving a theorem about the connection between consequence and the conditional: the so-called deduction theorem. We will first state and prove the theorem and then discuss its content:

Theorem (Deduction Theorem). Let φ, ψ ∈ L be formulas and Γ ⊆ L


a set of formulas. Then the following two are equivalent:

1. Γ ∪ {φ} ⊨ ψ
2. Γ ⊨ φ → ψ

Proof. We need to show that if 1., then 2. and if 2., then 1. We do so


in turn:

• Suppose that Γ ∪ {φ} ⊨ ψ. That means, by definition, that for all valuations v such that JθKv = 1, for all θ ∈ Γ ∪ {φ}, we have JψKv = 1. We need to derive that Γ ⊨ φ → ψ, i.e. that for all valuations v such that JθKv = 1, for all θ ∈ Γ, we have Jφ → ψKv = 1. So let v be an arbitrary valuation such that JθKv = 1, for all θ ∈ Γ. We can distinguish two cases: (a) JφKv = 1 and (b) JφKv = 0. In each case, consider Jφ → ψKv . So, remember that Jφ → ψKv = max(1 − JφKv , JψKv ). Let's look at the two cases:
(a) If JφKv = 1, we can infer that JθKv = 1, for all θ ∈ Γ ∪ {φ}.


Why? Well, because we’ve assumed that JθKv = 1, for all
θ ∈ Γ and we’re considering the case where JφKv = 1. If
JθKv = 1, for all θ ∈ Γ ∪ {φ}, then since Γ ∪ {φ} ⊨ ψ, we
get that JψKv = 1. Ok, now let’s calculate Jφ → ψKv . We
have Jφ → ψKv = max(1 − JφKv , JψKv ) = max(1 − 1, 1) =
max(0, 1) = 1.
(b) If JφKv = 0, the situation is resolved even quicker. Let’s cal-
culate Jφ → ψKv on the assumption that JφKv = 0. We get
that Jφ → ψKv = max(1 − JφKv , JψKv ) = max(1 − 0, JψKv ) =
max(1, JψKv ) = 1, as desired.
So, we have that if Γ ∪ {φ} ⊨ ψ, then Γ ⊨ φ → ψ.
• Suppose conversely that Γ ⊨ φ → ψ. That is, for each valuation v, if JθKv = 1 for all θ ∈ Γ, then Jφ → ψKv = 1. We want to show that Γ ∪ {φ} ⊨ ψ. That is, we need to show that for all valuations v such that JθKv = 1, for all θ ∈ Γ ∪ {φ}, we have JψKv = 1. So let v be an arbitrary valuation such that JθKv = 1, for all θ ∈ Γ ∪ {φ}. First, note that this means that JθKv = 1, for all θ ∈ Γ, and so, since Γ ⊨ φ → ψ, we get that Jφ → ψKv = 1. Second, note that we also get that JφKv = 1—simply since φ ∈ Γ ∪ {φ}. Now consider Jφ → ψKv = max(1 − JφKv , JψKv ). Since Jφ → ψKv = 1, we get that max(1 − JφKv , JψKv ) = 1. And since JφKv = 1, we get that max(1 − 1, JψKv ) = max(0, JψKv ) = 1. But for x ∈ {0, 1}, we can only have max(0, x) = 1 if x = 1. So JψKv = 1, as desired. So, we have that if Γ ⊨ φ → ψ, then Γ ∪ {φ} ⊨ ψ.

This completes the proof of the deduction theorem.
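Since valuations over finitely many sentence letters can be enumerated, individual instances of the deduction theorem can be checked mechanically. The following Python sketch is our own illustration, not a proof: it confirms, for sample choices of Γ, φ, ψ, that Γ ∪ {φ} ⊨ ψ and Γ ⊨ φ → ψ agree. Formulas are encoded as functions from a valuation dict to {0, 1}, which is our own device:

```python
from itertools import product

def entails(gamma, psi, letters="pq"):
    """Gamma |= psi: every valuation making all of Gamma true makes psi true."""
    for bits in product([1, 0], repeat=len(letters)):
        v = dict(zip(letters, bits))
        if all(g(v) == 1 for g in gamma) and psi(v) != 1:
            return False
    return True

p = lambda v: v["p"]
q = lambda v: v["q"]
p_to_q = lambda v: max(1 - v["p"], v["q"])  # [[p -> q]]_v = max(1 - [[p]]_v, [[q]]_v)

# Gamma = {p -> q}, phi = p, psi = q: both sides of the theorem come out True.
assert entails([p_to_q, p], q) and entails([p_to_q], p_to_q)
# Gamma = {}, phi = p, psi = q: both sides come out False.
assert not entails([p], q) and not entails([], p_to_q)
print("both sample instances agree")
```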

5.2.15 The deduction theorem is of fundamental importance since it connects


logical consequence with the (material) conditional. One way of read-
ing the theorem is that you can infer that a conditional is true iff you
can infer the consequent (the then-part) from the antecedent (the if-
part). This is, in a sense, the essence of conditional proof. Using the
deduction theorem, we can make a connection between the validity
of inferences and the logical truth of formulas. This will be our last
observation. Let’s first think about a simple case of an inference with
one premise and one conclusion: φ ∴ ψ. We’ve said that this inference
is valid iff φ  ψ. But by the deduction theorem, this is the case iff
 φ → ψ. Why? Well, just take the statement of the deduction the-
orem and let Γ = ∅. So, φ ∴ ψ is valid iff  φ → ψ. In words, an
inference with just one premise is valid iff the statement “if [premise],
then [conclusion]” is a logical truth. We’ll now turn this into a more
general theorem, which we’ll use to prove decidability in the next sec-
tion.
5.2.16 The essence of the theorem we’re about to prove is that we can gener-
alize the idea that we’ve just described. But to see how, we first need
to prove a lemma:

Lemma. For all φ, ψ, θ ∈ L, we have φ → (ψ → θ) ≡ (φ ∧ ψ) → θ.

Proof. We derive the equivalence using the logical laws. First, note that φ → (ψ → θ) ≡ ¬φ ∨ ¬ψ ∨ θ using Conditional Definition twice. Now consider (φ ∧ ψ) → θ. By Conditional Definition, we get (φ ∧ ψ) → θ ≡ ¬(φ ∧ ψ) ∨ θ. But by De Morgan's law ¬(φ ∧ ψ) ≡ ¬φ ∨ ¬ψ. Hence ¬(φ ∧ ψ) ∨ θ ≡ ¬φ ∨ ¬ψ ∨ θ. But now we have that φ → (ψ → θ) ≡ ¬φ ∨ ¬ψ ∨ θ and (φ ∧ ψ) → θ ≡ ¬φ ∨ ¬ψ ∨ θ, from which we can infer our claim by Transitivity.

With this lemma in place, we prove our theorem as a corollary from


the deduction theorem:

Theorem. Let φ1 , . . . , φn , ψ ∈ L be formulas. Then:

φ1 , . . . , φn ⊨ ψ iff ⊨ (φ1 ∧ . . . ∧ φn ) → ψ

Proof. The proposition follows from the deduction theorem by n ap-


plications of Lemma 5.2.16.

This theorem will play an essential role in the following section.

5.2.17 But first, we note in the following corollary that we can also reduce
logical equivalence to the validity of a formula—unsurprisingly, a bi-
conditional:

Corollary. For φ, ψ ∈ L, we have that φ ≡ ψ iff ⊨ φ ↔ ψ.

Proof. We prove both directions in turn:

• (Left-to-right): Suppose that φ ≡ ψ, i.e. both φ ⊨ ψ and ψ ⊨ φ. By the deduction theorem, we have ⊨ φ → ψ and ⊨ ψ → φ. By the law of Conjunction Introduction, we have ⊨ (φ → ψ) ∧ (ψ → φ). And by the law of Biconditional Introduction, we have ⊨ φ ↔ ψ.
• (Right-to-left): Suppose that ⊨ φ ↔ ψ. By the law of Biconditional Introduction, we have ⊨ (φ → ψ) ∧ (ψ → φ). From this, it easily follows that ⊨ (φ → ψ) and ⊨ (ψ → φ) using the law of Conjunction Elimination. But that gives us φ ⊨ ψ and ψ ⊨ φ by the deduction theorem, and so φ ≡ ψ, as desired.
5.2.18 So, to sum up, by the theorem in 5.2.16, we have reduced the question of whether a (finite) set of formulas entails another formula to the question whether a specific conditional is valid: the conditional formed
by taking the conjunction of all the members in the set as the if-
part and the potential conclusion as the then-part. In the following
section, we shall turn this observation into a decision procedure for
propositional logic.

5.3 Truth Tables and Decidability


5.3.1 Remember our discussion of decidability from the introduction. There
we posed the question of whether it is possible to write a computer
program, such that if I give it an arbitrary inference, the program will
determine whether the argument is valid. In general, we remarked,
the answer is: no! We’ll discuss that at the end of our treatment of
first-order logic. But with respect to propositional logic, the answer
is: yes! The purpose of this section is to develop a semantic decision
procedure for propositional logic, i.e. one using semantic concepts. In
the following chapter, we’ll cover a proof-theoretic method, i.e. one
that makes use of proof-theoretic concepts.

5.3.2 The decision method that we’ll discuss in this chapter is the method
of truth-tables. It was first discovered by the Austrian philosopher
Ludwig Wittgenstein in the 1920s. But to this day,
it’s the most widely used decision procedure for propositional logic.
You will soon be able to appreciate its elegance. The idea underlying
the method is that in order to determine the truth-value of a formula,
we actually only need to know the truth-values of the sentence letters
that occur in the formula. But there are only finitely many of those and
so, there are only finitely many possible combinations of truth-values
for the sentence letters in the formula. Hence, we can write down all
the possible truth-values that a formula can take in one, finite table.
This table, the so-called truth-table for the formula is at the heart of
the decision procedure we’ll discuss in this section.

5.3.3 First, let’s discuss how to construct a truth-table. We’ll do this by


means of an example. Let’s construct the truth-table for ((p ∨ q) ∧
¬(p ∧ q)) → r step-by-step:

(a) The first thing you should do when you’re constructing a truth-
table for a formula is to find all the propositional letters. In our
case, we get p, q, and r.
(b) Write down all the propositional letters, followed by some space
(we’ll need that space). Draw separators as in the following ex-


ample:
p q r

This is the beginning of our truth-table.


(c) Next, count the propositional letters. If there are n propositional letters in the formula, then its truth-table will have 2ⁿ rows (we'll see in a second why). In our case, this means that there are 2³ = 2 · 2 · 2 = 8 rows in our truth-table.
(d) Correspondingly, draw 2ⁿ lines into the truth-table (if your paper has lines, you don't need to do this). In our example, we get:

p q r

We now have the skeleton of our truth-table.


(e) Next, we need to fill in all the possible combinations of truth-
values for p, q, r. Since there are n letters (in our case 3) and 2
truth-values (1 and 0), combinatorics tells us that there are 2ⁿ different combinations. Here is a method for determining them all:
(i) Start by dividing the rows of your truth-table skeleton into two equal parts (this is always possible, since 2ⁿ is even for every n ≥ 1). Then, in the first column of the table (in our case,
the p column), fill in 1s in the upper half of the table and
0s in the lower half, like this:
p q r
1
1
1
1
0
0
0
0
(ii) Next, consider the upper half and the lower half of the second column, and divide each of them again into two parts. In this column (in our case, the q column), fill in 1s in the upper part and 0s in the lower part of each half, like this:
p q r
1 1
1 1
1 0
1 0
0 1
0 1
0 0
0 0
(iii) Proceed to the next column (in our case, the r column) and
repeat the procedure:
p q r
1 1 1
1 1 0
1 0 1
1 0 0
0 1 1
0 1 0
0 0 1
0 0 0
If you have more than 3 propositional letters, you can continue dividing the parts in two. This always works, since the parts have sizes that are powers of two, so they can be halved until parts of size 1 are reached.³
(f) Now that we’ve filled in all the possible combinations of truth-
values for the propositional letters, we recursively calculate the
truth-value of the whole formula following the parsing tree. In
our case, the parsing tree is this:
³ If you know how to count in binary, then you can see that I'm basically counting down from 2ⁿ in binary.
(((p ∨ q) ∧ ¬(p ∧ q)) → r)
├── ((p ∨ q) ∧ ¬(p ∧ q))
│   ├── (p ∨ q)
│   │   ├── p
│   │   └── q
│   └── ¬(p ∧ q)
│       └── (p ∧ q)
│           ├── p
│           └── q
└── r

(g) We calculate the truth-values of the formulas in the parsing tree


one after another from the bottom to the top. We use the truth-
function corresponding to the rule that was applied. Once we’ve
calculated the truth-value for a formula in the parsing tree, we
write it underneath the formula in the truth-table. In our case,
this means that we’ve got to complete 5 steps:
(i) We start with the first two leaves and calculate the value
of (p ∨ q) based on the values of p and q using the truth-
function f∨ :
p q r (p ∨ q)
1 1 1 1
1 1 0 1
1 0 1 1
1 0 0 1
0 1 1 1
0 1 0 1
0 0 1 0
0 0 0 0
(ii) Next, we move to the third and fourth leaf and calculate
the truth-value of (p ∧ q) based on the truth values of p and
q using the truth-function f∧ :
p q r (p ∨ q) (p ∧ q)
1 1 1 1 1
1 1 0 1 1
1 0 1 1 0
1 0 0 1 0
0 1 1 1 0
0 1 0 1 0
0 0 1 0 0
0 0 0 0 0
(iii) Now we go one step up and calculate the truth-value of
¬(p ∧ q) based on the truth-value of (p ∧ q) using the truth-
function f¬ :
p q r (p ∨ q) (p ∧ q) ¬(p ∧ q)
1 1 1 1 1 0
1 1 0 1 1 0
1 0 1 1 0 1
1 0 0 1 0 1
0 1 1 1 0 1
0 1 0 1 0 1
0 0 1 0 0 1
0 0 0 0 0 1
(iv) Now we proceed to calculate the truth-value of ((p ∨ q) ∧
¬(p ∧ q)) based on the truth-values of (p ∨ q) and ¬(p ∧ q))
we’ve just calculated now using the truth-function f∧ :

p q r (p ∨ q) (p ∧ q) ¬(p ∧ q) ((p ∨ q) ∧ ¬(p ∧ q))


1 1 1 1 1 0 0
1 1 0 1 1 0 0
1 0 1 1 0 1 1
1 0 0 1 0 1 1
0 1 1 1 0 1 1
0 1 0 1 0 1 1
0 0 1 0 0 1 0
0 0 0 0 0 1 0
(v) Finally, we calculate the truth-value of the whole formula
((p ∨ q) ∧ ¬(p ∧ q)) → r based on the truth-values of ((p ∨
q) ∧ ¬(p ∧ q)) and r using f→ :
p q r (p ∨ q) (p ∧ q) ¬(p ∧ q) ((p ∨ q) ∧ ¬(p ∧ q)) ((p ∨ q) ∧ ¬(p ∧ q)) → r
1 1 1 1 1 0 0 1
1 1 0 1 1 0 0 1
1 0 1 1 0 1 1 1
1 0 0 1 0 1 1 0
0 1 1 1 0 1 1 1
0 1 0 1 0 1 1 0
0 0 1 0 0 1 0 1
0 0 0 0 0 1 0 1
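The filling-in pattern of step (e) can also be reproduced programmatically: iterating over the values [1, 0] for each letter, with the leftmost letter varying slowest, yields exactly the rows of the tables above. A small Python sketch of ours:

```python
from itertools import product

# product([1, 0], repeat=3) lists the 2^3 combinations in the same order
# as the halving method: 111, 110, 101, ..., 000.
rows = list(product([1, 0], repeat=3))
for p, q, r in rows:
    print(p, q, r)
assert len(rows) == 2 ** 3 == 8
assert rows[0] == (1, 1, 1) and rows[-1] == (0, 0, 0)
```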
5.3.4 So what’s the general pattern here? In order to generate the truth-
table for a formula, we do the following:
1. Determine all the sentence letters in the formula.
2. Determine how many different ways there are for distributing the
truth-values 0, 1 over these sentence letters and write the different
combinations in a table, one row at a time.
Fact. If there are n sentence letters, then there are 2ⁿ different combinations.
3. Calculate the parsing tree of the formula.


4. Recursively calculate the truth-values of the sub-formulas for each
of the different combinations of truth-values, and write the result
for a sub-formula under the formula, in the row that corresponds
to the combination you used to calculate the result.

In order to have a unique order for step 4., we start with the bottom-
left leaf of the tree and then try to move up and calculate what we
need to know along the way. This is precisely what we did in 5.3.3.
Here is another example (this time, just the result):
p ∧ (q ∨ r) ↔ (p ∧ q) ∨ (p ∧ r)

Parsing Tree:

p ∧ (q ∨ r) ↔ (p ∧ q) ∨ (p ∧ r)
├── p ∧ (q ∨ r)
│   ├── p
│   └── q ∨ r
│       ├── q
│       └── r
└── (p ∧ q) ∨ (p ∧ r)
    ├── p ∧ q
    │   ├── p
    │   └── q
    └── p ∧ r
        ├── p
        └── r

Truth-Table:
p q r q∨r p ∧ (q ∨ r) p∧q p∧r (p ∧ q) ∨ (p ∧ r) p ∧ (q ∨ r) ↔ (p ∧ q) ∨ (p ∧ r)
1 1 1 1 1 1 1 1 1
1 1 0 1 1 1 0 1 1
1 0 1 1 1 0 1 1 1
1 0 0 0 0 0 0 0 1
0 1 1 1 0 0 0 0 1
0 1 0 1 0 0 0 0 1
0 0 1 1 0 0 0 0 1
0 0 0 0 0 0 0 0 1

5.3.5 Remember that, mathematically speaking, an algorithm is a set of pre-


cise instructions for a specific task. In our case, the task is to determine
whether a given formula is valid (since we’ve reduced the question of
the validity of inference to the question of validities of formulas). With
5.3.4, we’re almost there, we just need to add one more step. So far
we’ve described an algorithm that allows us to calculate all the dif-
ferent possible truth-values a formula can take. But wait! A formula
is valid just in case it gets value one under every valuation. So, if our
truth-table yields one as the only possible value for our formula, then
the formula should be valid! So, the example truth-tables we just did
show that ⊭ ((p ∨ q) ∧ ¬(p ∧ q)) → r and ⊨ p ∧ (q ∨ r) ↔ (p ∧ q) ∨ (p ∧ r).
So, after 1.–4., we add the following final step:

5. Check the column under the formula:


• If there are only 1's, the formula is valid.
• If there are one or more 0's, the formula is not valid.

We've arrived at an algorithm for determining whether a given formula is valid.
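Steps 1.–5. translate directly into a short program. In the following Python sketch (our own illustration; the tuple encoding of formulas is an assumption, not notation from the notes), a formula is a nested tuple, and validity is checked by enumerating all 2ⁿ rows:

```python
from itertools import product

# Formulas as nested tuples: ("p",) is a sentence letter; ("not", A),
# ("and", A, B), ("or", A, B), ("to", A, B) build complex formulas.

def evaluate(phi, v):
    """Recursively compute [[phi]]_v using the Boolean truth-functions."""
    op = phi[0]
    if op == "not":
        return 1 - evaluate(phi[1], v)
    if op == "and":
        return min(evaluate(phi[1], v), evaluate(phi[2], v))
    if op == "or":
        return max(evaluate(phi[1], v), evaluate(phi[2], v))
    if op == "to":
        return max(1 - evaluate(phi[1], v), evaluate(phi[2], v))
    return v[op]  # op is a sentence letter

def letters(phi):
    """Step 1: collect the sentence letters occurring in phi."""
    if phi[0] == "not":
        return letters(phi[1])
    if phi[0] in ("and", "or", "to"):
        return letters(phi[1]) | letters(phi[2])
    return {phi[0]}

def is_valid(phi):
    """Steps 2-5: phi is valid iff its column contains only 1's."""
    ps = sorted(letters(phi))
    return all(evaluate(phi, dict(zip(ps, bits))) == 1
               for bits in product([1, 0], repeat=len(ps)))

# ((p v q) ^ ~(p ^ q)) -> r has a 0 in its column; p v ~p does not.
antecedent = ("and", ("or", ("p",), ("q",)), ("not", ("and", ("p",), ("q",))))
print(is_valid(("to", antecedent, ("r",))))       # False
print(is_valid(("or", ("p",), ("not", ("p",)))))  # True
```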

5.3.6 How can we use the algorithm described above to determine whether a
given inference is valid? To answer this question, consider an inference
φ1 , . . . , φn ∴ ψ with finitely many premises. By definition, the inference
is valid iff φ1 , . . . , φn ⊨ ψ. And we know by Theorem 5.2.16 that φ1 , . . . , φn ⊨ ψ is mathematically equivalent to ⊨ (φ1 ∧ . . . ∧ φn ) → ψ. So, we use our algorithm to determine whether (φ1 ∧ . . . ∧ φn ) → ψ is a logical truth. If it is, then the inference is valid; and if it isn't, the inference is invalid.
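The reduction just described can be sketched in Python as well (our own illustration; formulas are again encoded, by our own convention, as functions from a valuation dict to {0, 1}): to test an inference, build the conditional with the conjunction of the premises as its if-part and check that single formula for validity.

```python
from itertools import product

def is_valid(phi, letters):
    """phi is valid iff it gets value 1 under every valuation of its letters."""
    return all(phi(dict(zip(letters, bits))) == 1
               for bits in product([1, 0], repeat=len(letters)))

def inference_is_valid(premises, conclusion, letters):
    """phi_1, ..., phi_n .'. psi is valid iff (phi_1 ^ ... ^ phi_n) -> psi is valid."""
    conj = lambda v: min([f(v) for f in premises], default=1)
    conditional = lambda v: max(1 - conj(v), conclusion(v))
    return is_valid(conditional, letters)

p = lambda v: v["p"]
q = lambda v: v["q"]
p_to_q = lambda v: max(1 - v["p"], v["q"])

print(inference_is_valid([p, p_to_q], q, "pq"))  # True: modus ponens is valid
print(inference_is_valid([p], q, "pq"))          # False: p does not entail q
```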

5.3.7 We will complete this chapter by proving that the algorithm works,
i.e. we will show that if the algorithm tells us that a formula is valid,
then it is valid; and we will show that if the algorithm tells us that
a formula is invalid, then it is, in fact, invalid. This proof, together
with the observation that carrying out the algorithm only takes finitely
many steps, establishes that classical propositional logic is decidable.
This is the main theorem of this chapter:

Theorem (Decidability of Propositional Logic). Propositional logic


is decidable, i.e. there exists an algorithm which after finitely many steps correctly determines whether a given inference (with finitely many premises) is valid.

What we will need to prove in order to show that our algorithm is


correct is that if the algorithm tells us a formula is valid, then it is
valid; and that if the algorithm says that the formula is invalid, then
it’s invalid.

5.3.8 But first, we make the following simple observation:

Lemma. Let φ be a formula and let p1 , . . . , pn be the sentence letters in φ. Further, let v be a valuation. Consider a row in the truth-table for φ, let x1 , . . . , xn be the values for p1 , . . . , pn in that row, and let xφ be the value for φ in that row. If v(pi ) = xi for 1 ≤ i ≤ n, then JφKv = xφ .
Proof. By inspection of the way the values of the truth-table are calculated. You can prove this by an induction on formulas; the details are left as an exercise for interested students.

5.3.9 Using this lemma it’s easy to show that our algorithm is correct:
Theorem (Verification of the Method of Truth-Tables). Let φ be a formula.

(i) If in the truth-table for φ there exists a row with a 0 under φ, then ⊭ φ.

(ii) If in the truth-table for φ, the value under φ is 1 in all rows, then ⊨ φ.

Proof. We prove the two in turn:

(i) Suppose that p1 , . . . , pn are the sentence letters in φ and that x1 , . . . , xn are the values for p1 , . . . , pn (respectively) in a row with a 0 under φ. Define a valuation v : P → {0, 1} by setting v(p1 ) = x1 , . . . , v(pn ) = xn , and v(p) = 0 if p ≠ p1 , . . . , pn . Then by Lemma 5.3.8, we have that JφKv = 0, which means that ⊭ φ.
(ii) Suppose that p1 , . . . , pn are the sentence letters in φ. Let v be an
arbitrary valuation. Consider the values v(p1 ), . . . , v(pn ). Since in
our truth-table, we have considered all the possible truth-values
for p1 , . . . , pn , there will be a line in our table that corresponds
to v(p1 ), . . . , v(pn ). The value of φ in that line will be 1 since, by
assumption, the value of φ is 1 in every line. Hence, by Lemma
5.3.8, JφKv = 1, which is what we needed to show.

This completes our investigation into the method of truth-tables: we’ve


established that it is indeed a decision procedure for propositional
logic.

5.4 Core Ideas


• Models in propositional logic are valuations: functions from sentence
letters to truth-values.

• We can calculate the value of a formula under a valuation recursively


using the Boolean truth-functions.

• The validity of inferences over formal languages can be understood in


terms of the concept of logical consequence.
• Logical consequence is defined by saying that a set of formulas entails


a formula iff in every valuation where all the members of the set have
value 1, the formula has value 1.

• Logical truth is a special case of logical consequence: a formula is a


logical truth if it’s a consequence of the empty set.

• The question whether a given set of premises entails a conclusion can


be reduced to the question whether the conditional with the conjunc-
tion of the premises as the if-part and the conclusion as the then-part
is logically valid.

• The method of truth-tables allows us to decide in finitely many steps


whether a given formula is valid. This gives us a decision procedure
for propositional logic.

5.5 Self-Study Questions


5.5.1 Suppose that a formula contains 3 connectives. Which of the following
is the best you can say about the formula’s truth-table?

(a) It has exactly 3² = 9 rows.


(b) It has exactly 2³ = 8 rows.
(c) We can’t predict the number of rows.
(d) We can’t predict the number of rows, but there are at least 2 · 3
rows.

5.5.2 Is it possible for a formula of the form φ ∧ ψ to be a logical truth?

(a) Yes! For example, if φ = p and ψ = ¬p.


(b) Yes! For example, if φ and ψ are logical truths themselves.
(c) Yes! For example, if the formula is also of the form p ∨ ¬p
(d) No! That would entail that two sentence letters are logical truths,
which is impossible.

5.5.3 Consider a formula of the form φ → ψ. Which of the following entails


that the formula is a logical truth?

(a) φ ⊨ ψ
(b) ⊨ ψ
(c) ⊨ ¬ψ → ¬φ
(d) ⊨ ¬φ
5.6 Exercises
5.6.1 Prove the remaining cases of Proposition 5.1.13.

5.6.2 Prove that the two definitions of ⊨ in 5.2.2 are equivalent (using Proposition 5.1.13). (This is a good exercise for proof strategies!)

5.6.3 Prove the laws of Lemma 5.2.6. [h] (iii), (xiii), and (xv).

5.6.4 Suppose that φ is a formula and v : L → {0, 1} a valuation such that


for all ψ ∈ sub(φ), JψKv = 0. Prove that φ does not contain any ¬’s.

5.6.5 [h] Prove that there is no valuation v such that for all φ ∈ L, we have
JφKv = 1.

5.6.6 Prove that Γ ⊨ φ iff there is no valuation v such that JψKv = 1, for all ψ ∈ Γ, but also JφKv = 0.

5.6.7 Do the truth-tables for the following formulas:

(a) [h] p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r)
(b) ¬p ∨ q → q ∧ (p ↔ q)
(c) p ∧ (q → r) ↔ (¬p ∨ q → p ∧ r)
(d) ¬(p → (q ∨ ¬r) ∧ (¬q → r))
(e) (p ↔ q ∧ r) ∨ (q ↔ r)
(f) ¬p ∨ ¬q → ¬(p ∧ q)
(g) (¬p ∨ q) → (q ∧ (p ↔ q))
(h) ((p ↔ q) → ((q ↔ r) → (p ↔ r)))
(i) (p → q) ∨ (¬q → p)
(j) (q → r) → p ∧ (q ∨ ¬r)
(k) ((p ∨ q) ∨ (¬p ∨ r)) ∨ (¬q ∨ ¬r)
(l) (p → (q → r)) → ((p → q) → (p → r))
(m) (p ∧ q) ↔ (r ∨ (¬p ∧ q))
(n) ((p → r) → ((q → r) → (p ∨ q → r)))
(o) ¬q ↔ (p → (¬r → q))
(p) (p → q) ∧ ((q → r) ∧ (r → ¬p))
(q) p → (q → (r → (¬p → (¬q → ¬r))))
(r) (p → q ∧ r) ↔ ((p → q) ∧ (p → r))
(s) p ∧ (¬p ∨ q) → (r → ¬q) ∧ (p → r)

5.6.8 Use the method of truth-tables to determine whether the following


inferences are valid:
(a) [h] p ∴ p ∨ (p ∧ q)
(b) p → ¬p ∴ ¬p
(c) p ∧ ¬p ∴ q
(d) p ∴ p ∨ ¬p
(e) q ∴ p → q
(f) p → q, q → r ∴ p → r
(g) p ↔ ¬p ∴ p ↔ (q ∧ ¬q)

5.7 Further Readings


The following chapters cover roughly the same material:

• Section 2.2 of Dalen, Dirk van. 2013. Logic and Structure. 5th edition.
London, UK: Springer.

• Section 1.2 of Enderton, Herbert. 2001. A Mathematical Introduction


to Logic. 2nd edition. San Diego, CA: Harcourt/Academic Press.

Self-Study Solutions

5.5.1 (c)
5.5.2 (b)
5.5.3 (a–d)
Chapter 6

Tableaux for Propositional


Logic

6.1 Proof Systems


6.1.1 Remember from the introduction (1.1.9) that the point of a proof sys-
tem is to formulate syntactic inference rules that allow us to derive
the conclusion from the premises in all (and only) the valid inferences.
There are, in fact, several different kinds of proof systems in the litera-
ture and we begin this chapter with an overview of the most important
ones. What all of these proof systems have in common is that they
avoid reference to semantic concepts, like valuations or consequence.
I don’t expect you to become fluent in all of the different proof systems
covered below. The point is that you should see what they look like and
(roughly) how they work.

6.1.2 Hilbert systems are, essentially, a model of step-by-step axiomatic rea-


soning in mathematics. A Hilbert system is defined by giving a set
of axioms (i.e. valid formulas) and a set of inference rules (i.e. rules
that allow you to infer valid formulas from valid formulas). As an ex-
ample, here are axioms and rules for a Hilbert system for classical
propositional logic:

Hilbert1 φ → (ψ → φ)
Hilbert2 (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
Hilbert3 (¬φ → ¬ψ) → (ψ → φ)
Modus Ponens. From φ and (φ → ψ) infer ψ.
Definitions. From (φ ∧ ψ) infer ¬(φ → ¬ψ) and vice versa; from
(φ ∨ ψ) infer (¬φ → ψ) and vice versa; and from (φ ↔ ψ) infer
((φ → ψ) ∧ (ψ → φ)) and vice versa.

A proof in the Hilbert system is a sequence of formulas such that each formula is either an axiom (in our case, an instance of Hilbert1–3) or inferred from some formulas earlier in the proof via an inference rule (in our case, Modus Ponens or Definitions). We write ⊢H φ to say that there is a proof in the Hilbert system that ends with φ. It can be shown (though we won't do that here) that the Hilbert calculus is sound and complete:

⊢H φ iff ⊨ φ.

That is, a formula is derivable in our Hilbert system iff it is valid. Using the idea of Theorem 5.2.16, we can use the Hilbert system to show that an inference is valid: we know that φ1 , . . . , φn ⊨ ψ iff ⊨ φ1 ∧ . . . ∧ φn → ψ, which by soundness and completeness of our Hilbert system is equivalent to ⊢H φ1 ∧ . . . ∧ φn → ψ.

6.1.3 But proving things in Hilbert systems is hard. Hilbert systems are
very economical, they only have a few axioms and rules—that’s it.
Our system, for example, has just 3 axioms and 2 rules. This makes
reasoning about our system very efficient. But it makes reasoning with
the system hard. To see how hard, here I give a derivation of p → p in
our Hilbert system:

1. ((p → ((p → p) → p)) → ((p → (p → p)) → (p → p)))


(Axiom 2. with φ = p, ψ = (p → p), and χ = p)
2. (p → ((p → p) → p)) (Axiom 1. with φ = p and ψ = (p → p))
3. ((p → (p → p)) → (p → p)) (From 1. and 2. by MP.)
4. (p → (p → p)) (Axiom 1. with φ = p and ψ = p.)
5. (p → p) (From 3. and 4. by MP.)

This is how you would show that p ⊨ p using a Hilbert system. Would
you have managed to find the proof yourself?
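Finding Hilbert proofs is hard, but checking them is mechanical: every line must be an instance of an axiom schema or follow from earlier lines by Modus Ponens. The following Python sketch is entirely our own (the tuple encoding of formulas and the metavariable names are assumptions); it checks the 5-line proof of p → p above:

```python
# Formulas are tuples: ("p",) is a letter; ("not", A), ("to", A, B) build
# complex formulas. Axiom schemas use the metavariables "PHI", "PSI", "CHI".

def match(schema, formula, subst):
    """Try to instantiate the schema's metavariables so it equals formula."""
    if schema in ("PHI", "PSI", "CHI"):
        if schema in subst:
            return subst[schema] == formula
        subst[schema] = formula
        return True
    if formula[0] != schema[0] or len(formula) != len(schema):
        return False
    return all(match(s, f, subst) for s, f in zip(schema[1:], formula[1:]))

AXIOMS = [
    ("to", "PHI", ("to", "PSI", "PHI")),                         # Hilbert1
    ("to", ("to", "PHI", ("to", "PSI", "CHI")),
           ("to", ("to", "PHI", "PSI"), ("to", "PHI", "CHI"))),  # Hilbert2
]

def line_ok(phi, earlier):
    """A line is OK if it is an axiom instance or follows by Modus Ponens."""
    if any(match(ax, phi, {}) for ax in AXIOMS):
        return True
    return any(("to", chi, phi) in earlier for chi in earlier)

p = ("p",)
proof = [
    ("to", ("to", p, ("to", ("to", p, p), p)),
           ("to", ("to", p, ("to", p, p)), ("to", p, p))),  # 1. Axiom 2
    ("to", p, ("to", ("to", p, p), p)),                     # 2. Axiom 1
    ("to", ("to", p, ("to", p, p)), ("to", p, p)),          # 3. MP from 1, 2
    ("to", p, ("to", p, p)),                                # 4. Axiom 1
    ("to", p, p),                                           # 5. MP from 3, 4
]
earlier = []
for phi in proof:
    assert line_ok(phi, earlier)
    earlier.append(phi)
print("the 5-line proof checks out")
```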

6.1.4 The next kind of proof system, we’ll discuss are sequent calculi or
Gentzen systems. A sequent is an expression of the form Γ ⇒ ∆,
where Γ and ∆ are sets of formulas. Intuitively, we read a sequent
φ1 , . . . , φn ⇒ ψ1 , . . . , ψm as the claim that φ1 ∧ . . . ∧ φn ⊨ ψ1 ∨ . . . ∨ ψm ;
that is, sequents are claims about consequence. The point is that we
can derive consequence claims from other consequence claims (as we
did in the previous chapter). In the Gentzen calculus for propositional
logic, there is only one axiom (i.e. a consequence claim held to be true no matter what):
φ⇒φ (Identity)
The remaining ingredients are several rules, which allow us to infer
consequence claims from each other. These rules fall into two classes:
structural rules, which don’t involve the connectives, and logical rules,
one pair for each connective.
Here are the structural rules:

• WeakL: from Γ ⇒ ∆, infer Γ ∪ {φ} ⇒ ∆.
• WeakR: from Γ ⇒ ∆, infer Γ ⇒ ∆ ∪ {φ}.
• Cut: from Γ ⇒ {φ} ∪ ∆ and {φ} ∪ Γ′ ⇒ ∆′, infer Γ ∪ Γ′ ⇒ ∆ ∪ ∆′.

And here are the rules for the connectives:

• ¬L: from Γ ⇒ ∆ ∪ {φ}, infer Γ ∪ {¬φ} ⇒ ∆.
• ¬R: from Γ ∪ {φ} ⇒ ∆, infer Γ ⇒ ∆ ∪ {¬φ}.
• ∧L: from Γ ∪ {φ, ψ} ⇒ ∆, infer Γ ∪ {φ ∧ ψ} ⇒ ∆.
• ∧R: from Γ ⇒ {φ} ∪ ∆ and Γ′ ⇒ {ψ} ∪ ∆′, infer Γ ∪ Γ′ ⇒ {φ ∧ ψ} ∪ ∆ ∪ ∆′.
• ∨L: from Γ ∪ {φ} ⇒ ∆ and Γ′ ∪ {ψ} ⇒ ∆′, infer Γ ∪ Γ′ ∪ {φ ∨ ψ} ⇒ ∆ ∪ ∆′.
• ∨R: from Γ ⇒ ∆ ∪ {φ, ψ}, infer Γ ⇒ ∆ ∪ {φ ∨ ψ}.
• →L: from Γ ⇒ {φ} ∪ ∆ and Γ′ ∪ {ψ} ⇒ ∆′, infer Γ ∪ Γ′ ∪ {φ → ψ} ⇒ ∆ ∪ ∆′.
• →R: from Γ ∪ {φ} ⇒ {ψ} ∪ ∆, infer Γ ⇒ {φ → ψ} ∪ ∆.

It is possible to give rules ↔L and ↔R as well, but they are complicated. Typically, in sequent calculus, φ ↔ ψ is considered defined as (φ → ψ) ∧ (ψ → φ). A proof in a Gentzen system is an upside-down tree whose leaves are all axioms and whose branches are constructed according to the rules. We write Γ ⊢G ∆ to say that there is a proof with Γ ⇒ ∆ as its root. It is possible to show (in fact, it's not that difficult) that

Γ ⊢G φ iff Γ ⊨ φ

6.1.5 Gentzen calculi have some very nice properties from a theoretical per-
spective. This is why you will likely encounter them in courses that fo-
cus on proof theory. But they are a bit hard to wrap your head around
since they are very “meta:” you infer claims about consequence from
claims about consequence. Here is an example of a Gentzen proof that
¬(p ∨ q) ⊢G ¬p ∧ ¬q:
p ⇒ p                            q ⇒ q
───────── WeakR                  ───────── WeakR
p ⇒ p, q                         q ⇒ p, q
───────── ∨R                     ───────── ∨R
p ⇒ p ∨ q                        q ⇒ p ∨ q
──────────────── ¬L              ──────────────── ¬L
¬(p ∨ q), p ⇒ ∅                  ¬(p ∨ q), q ⇒ ∅
──────────────── ¬R              ──────────────── ¬R
¬(p ∨ q) ⇒ ¬p                    ¬(p ∨ q) ⇒ ¬q
──────────────────────────────────────────────── ∧R
¬(p ∨ q) ⇒ ¬p ∧ ¬q
It is actually quite easy to find sequent proofs, even though they are
difficult to understand properly. Here, however, we shall not go more
into the depth of sequent calculi.
6.1.6 The third kind of proof system you should have seen is what’s called
a natural deduction system. Natural deduction systems are character-
ized by having no axioms, only rules that allow you to infer formulas
from each other. The idea of natural deduction is to model the kind of
informal reasoning we naturally do in mathematical proofs. The main
aspect is the idea of assumptions. In a natural deduction proof, you
may assume any formula at any point during the proof. But you may
only proceed via the inference rules. Some of these rules cancel previ-
ous assumptions, which is done by writing [ ] around the assumption.
Here are the natural deduction rules for propositional logic:

• EFQ: from φ and ¬φ, infer ψ.
• Biv: from a derivation of ψ from the assumption [φ] and a derivation of ψ from the assumption [¬φ], infer ψ (cancelling both assumptions).
• ∧I: from φ and ψ, infer φ ∧ ψ.
• ∧E1, ∧E2: from φ ∧ ψ, infer φ (resp. ψ).
• ∨I1, ∨I2: from φ (resp. ψ), infer φ ∨ ψ.
• ∨E: from φ ∨ ψ, a derivation of θ from the assumption [φ], and a derivation of θ from the assumption [ψ], infer θ (cancelling both assumptions).
• →I: from a derivation of ψ from the assumption [φ], infer φ → ψ (cancelling the assumption).
• →E: from φ → ψ and φ, infer ψ.

Similar to sequent calculi, there are two kinds of rules: introduction


and elimination rules, i.e. rules that allow you to infer a statement
with a connective and rules that allow you to infer something from a
statement with a connective.
A natural deduction proof is an upside-down tree (like a sequent calculus proof) of formulas whose branches are constructed according to the rules. We write Γ ⊢N φ to say that there exists a natural deduction proof whose root is φ and whose leaf formulas without [ ] written around them are all in Γ. It's a bit more tricky, but possible to show that

Γ ⊢N φ iff Γ ⊨ φ
6.1.7 Here’s an example of a natural deduction proof:

              ¬q   [q]
              ──────── EFQ
p ∨ q    [p]      p
─────────────────────── ∨E, 1
           p

This proof shows that p ∨ q, ¬q ⊢N p.

6.1.8 In this course, we will not cover Hilbert calculi, Gentzen calculi, or
natural deduction in detail. If you take a liking to one of these systems,
you can check out the references at the end of this chapter. In this
course, we’ll make use of analytic tableaux, which double as a proof
system and decision procedure for propositional logic. In the following
sections, we will motivate and develop this proof system in some more
detail.

6.2 Satisfiability and Consequence


6.2.1 Just like the method of truth-tables we discussed in the previous chap-
ter, the method of analytic tableaux has a theoretical foundation in
an important theorem. In this section, we shall state and prove this
theorem.

6.2.2 But first, we need to introduce a new theoretical concept, the concept
of satisfiability. A set of formulas Γ ⊆ L is said to be satisfiable iff
there exists a valuation v such that JφKv = 1 for all φ ∈ Γ. In words, a
set of formulas is satisfiable iff there exists a valuation that makes all
the members of the set true.
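Like validity, satisfiability over finitely many sentence letters can be decided by brute force: just search for a valuation that makes every member of the set true. A Python sketch of ours (formulas are encoded, by our own convention, as functions from a valuation dict to {0, 1}):

```python
from itertools import product

def satisfiable(gamma, letters="pqr"):
    """Gamma is satisfiable iff some valuation makes every member true."""
    return any(all(phi(dict(zip(letters, bits))) == 1 for phi in gamma)
               for bits in product([1, 0], repeat=len(letters)))

p = lambda v: v["p"]
not_p = lambda v: 1 - v["p"]
not_q = lambda v: 1 - v["q"]
p_to_q = lambda v: max(1 - v["p"], v["q"])

print(satisfiable([p_to_q, not_q]))  # True: v(p) = 0, v(q) = 0 works
print(satisfiable([p, not_p]))       # False: {phi, ~phi} is never satisfiable
```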

6.2.3 Let’s consider some examples of satisfiable sets (where we assume,


again, that P = {p, q, r}):

(a) The whole set P = {p, q, r} is satisfiable since v(p) = 1, v(q) =


1, v(r) = 1 is a valuation that makes all the members of P true.
(b) Any subset X ⊆ P is satisfiable since v(p) = 1 iff p ∈ X defines a
valuation v that makes all the members of X true.
(c) The empty set ∅ is satisfiable since every valuation makes all the
members of ∅ true (again, ask yourself: can there be a valuation
that doesn’t make some member of ∅ true?).
(d) We can even more generally note that any subset of a satisfiable
set is satisfiable:
Proposition. Let Γ, ∆ ⊆ L be sets of formulas. If Γ is satisfiable
and ∆ ⊆ Γ, then ∆ is satisfiable.

Proof. Let Γ, ∆ ⊆ L be sets of formulas such that Γ is satisfiable


and ∆ ⊆ Γ. That Γ is satisfiable means, by definition, that there
exists a valuation v such that JφKv = 1 for all φ ∈ Γ. We need to
show that there exists a valuation v′ such that JψKv′ = 1 for
all ψ ∈ ∆. But we can simply let v′ be v. For let ψ be an arbitrary
element of ∆. Since ∆ ⊆ Γ, we have that ψ ∈ Γ. And we have that
JφKv = 1 for all φ ∈ Γ, and so JψKv = 1. Hence JψKv = 1 for all
ψ ∈ ∆, as desired.

(e) The set {p ∨ ¬p} is satisfiable since (as we proved in 5.2.11) p ∨ ¬p


is true under every valuation.
(f) The set {p → q, ¬q} is satisfiable since v(p) = 0, v(q) = 0, and
v(r) arbitrary defines a valuation that makes both p → q and ¬q
true.

6.2.4 So, what does it mean for a set of formulas to be unsatisfiable? Well,
it follows immediately from the definition that a set Γ of formulas is
unsatisfiable iff there exists no valuation v such that JφKv = 1 for all
φ ∈ Γ; in words, a set of formulas is unsatisfiable iff there is no valu-
ation that makes all the members of the set true or, equivalently, iff
every valuation makes some member false. So, intuitively, unsatisfia-
bility is a kind of inconsistency: a set of formulas is unsatisfiable iff its
members can’t all be made true by a valuation.

6.2.5 Let’s consider some examples of unsatisfiable sets (assuming, again,


that P = {p, q, r}):

(a) Any set {φ, ¬φ} for φ ∈ L is unsatisfiable. This immediately fol-
lows from the fact noted in 5.1.10 that for each valuation v and
formula φ ∈ L, we have that either JφKv = 1 or JφKv = 0 (and
never both); that is J·Kv is a function from L to {0, 1}. But if
both JφKv = 1 and J¬φKv = 1, it would follow that JφKv = 1 and
JφKv = 0, since J¬φKv = 1 − JφKv . It follows, for example, more
concretely that {p, ¬p} is unsatisfiable.
(b) A more general consequence of the previous observation is that
the set L of all formulas is unsatisfiable. To see this, simply ob-
serve that φ, ¬φ ∈ L and so if L would be satisfiable (i.e. all its
members would be made true by some valuation), then JφKv = 1
and J¬φKv = 1, which we’ve just seen is impossible.
(c) The point generalizes even more:
Proposition. Let Γ, ∆ ⊆ L be sets of formulas. If Γ is unsatisfi-
able and Γ ⊆ ∆, then ∆ is unsatisfiable.

Proof. We prove this by contradiction. So, let Γ, ∆ ⊆ L be sets of


formulas, Γ unsatisfiable, Γ ⊆ ∆, and suppose, for contradiction,
that ∆ is satisfiable. This would mean that there exists a valuation
v such that JφKv = 1 for all φ ∈ ∆. But then, since Γ ⊆ ∆, it would
follow that for all ψ ∈ Γ, JψKv = 1, which means that Γ would be
satisfiable. Contradiction! Hence ∆ is unsatisfiable, as desired.

(d) Finally, let’s consider a less abstract/more concrete example: the


set {p ∨ q, ¬p, ¬q} is unsatisfiable. To see this, suppose that v is
a valuation with Jp ∨ qKv = 1, J¬pKv = 1, and J¬qKv = 1. Since
J¬φKv = 1 − JφKv , we get immediately that JpKv = 0 and JqKv = 0.
But since Jp ∨ qKv = max(JpKv , JqKv ) and Jp ∨ qKv = 1, it follows that either JpKv = 1 or JqKv = 1 (otherwise, how could max(JpKv , JqKv ) = 1?)—but either case leads to a contradiction. Hence, we can't have that Jp ∨ qKv = 1, J¬pKv = 1, and J¬qKv = 1 for any v; that is, {p ∨ q, ¬p, ¬q} is unsatisfiable.
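The finite check behind example (d) can be spelled out directly: every one of the four valuations of p and q falsifies at least one member of the set. A small sketch (our own encoding, one lambda per formula):

```python
from itertools import product

# Each member of {p ∨ q, ¬p, ¬q} as a truth function of v(p), v(q).
members = [
    lambda p, q: max(p, q),  # p ∨ q
    lambda p, q: 1 - p,      # ¬p
    lambda p, q: 1 - q,      # ¬q
]

# Unsatisfiable: every valuation makes some member false.
unsat = all(
    any(f(p, q) == 0 for f in members)
    for p, q in product([0, 1], repeat=2)
)
print(unsat)  # True
```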

6.2.6 The reason why we talk about satisfiability is that the method of ana-
lytic tableaux is a method for satisfiability checking: it’s an algorithm
that allows us to determine, purely syntactically, whether a set of for-
mulas in propositional logic is satisfiable. “But what does this have
to do with proof theory?” you may ask. And rightly so—we haven’t
connected the questions of satisfiability and validity yet. This is what
we’re doing in the following theorem:

Theorem (I Can’t Get No Satisfaction). Let Γ ⊆ L be a set of for-


mulas and φ ∈ L a formula. Then, the following are equivalent:

1. Γ ⊨ φ
2. Γ ∪ {¬φ} is unsatisfiable

Proof. We need to show two things: 1. ⇒ 2. and 2. ⇒ 1. We do so in


turn:

• (1. ⇒ 2.) We proceed by conditional proof. So suppose that (∗) Γ ⊨ φ, i.e. for all v, if JψKv = 1 for all ψ ∈ Γ, then JφKv = 1. We proceed by indirect proof to show that Γ ∪ {¬φ} is unsatisfiable. So suppose that Γ ∪ {¬φ} is satisfiable, i.e. there is a valuation v such that JψKv = 1 for all ψ ∈ Γ ∪ {¬φ}. Then JψKv = 1 for all ψ ∈ Γ, since Γ ⊆ Γ ∪ {¬φ}. And so by (∗), we know that JφKv = 1. But also {¬φ} ⊆ Γ ∪ {¬φ}, so J¬φKv = 1 − JφKv = 1, which means that JφKv = 0. Contradiction. So we can conclude that Γ ∪ {¬φ} is unsatisfiable, given our assumption that Γ ⊨ φ. So by conditional proof, we get that if Γ ⊨ φ, then Γ ∪ {¬φ} is unsatisfiable.

• (2. ⇒ 1.) Suppose (for conditional proof) that Γ ∪ {¬φ} is unsatisfiable, i.e. there exists no valuation v such that JψKv = 1 for all ψ ∈ Γ ∪ {¬φ}. We want to show that Γ ⊨ φ and do so indirectly. So, suppose that Γ ⊭ φ; that is, suppose that there exists a valuation v such that JψKv = 1 for all ψ ∈ Γ and JφKv = 0. But then, since J¬φKv = 1 − JφKv , it follows that J¬φKv = 1. And this just means that JψKv = 1 for all ψ ∈ Γ ∪ {¬φ}—in contradiction to Γ ∪ {¬φ} being unsatisfiable. Hence Γ ⊨ φ, as desired.

6.2.7 A good way of understanding this theorem is by looking at an example.


Remember from 6.2.5.d that {p ∨ q, ¬p, ¬q} is unsatisfiable. Note that
the proof of this can equally be read as a proof of p ∨ q, ¬p ⊨ q. Just
compare it to 5.2.3.iv!

6.2.8 The point of the previous theorem is that we can reduce the question
of the validity of arguments to the satisfiability of a set of formulas:
by the previous theorem, an inference is valid iff the set of premises
together with the negation of the conclusion is unsatisfiable. In the
following section, we will make use of this idea to develop the method
of analytic tableaux as a proof theory for propositional logic.
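On a small concrete case, the two sides of the theorem can be computed side by side: checking p ∨ q, ¬p ⊨ q directly and checking that {p ∨ q, ¬p, ¬q} is unsatisfiable are the same finite search. A sketch (helper names are our own):

```python
from itertools import product

val_pq = list(product([0, 1], repeat=2))   # all valuations of p, q

def disj(p, q):
    return max(p, q)                       # p ∨ q

# Validity check: every valuation making both premises true makes q true.
valid = all(
    q == 1
    for p, q in val_pq
    if disj(p, q) == 1 and 1 - p == 1
)

# Unsatisfiability check on the premises plus the negated conclusion.
unsat = all(
    not (disj(p, q) == 1 and 1 - p == 1 and 1 - q == 1)
    for p, q in val_pq
)

print(valid, unsat)  # True True: the two checks agree, as the theorem says
```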

6.3 Analytic Tableaux


6.3.1 The method of analytic tableaux is an algorithm for determining
whether a (finite) set of formulas is satisfiable. In this way, via The-
orem 6.2.6, analytic tableaux allow us to determine whether a given
inference is valid—we get another decision procedure for propositional
logic. What makes the method of tableaux proof theoretic is that it
proceeds step-by-step and purely syntactically: no mention of semantic
concepts (like truth) is made in the formulation of the procedure. This
is in stark contrast to the method of truth-tables, which makes explicit
reference to truth. We will now describe how the method works and
in the next chapter prove that it does.

6.3.2 The aim of our algorithm is to determine whether a given, finite set of
formulas is satisfiable. So, as input, we get a set Γ of formulas. We will
check the satisfiability of Γ by constructing a tree (yet another use of
trees) according to the following recipe:

1. We begin by writing down the members of Γ as the initial list. This


list forms the root of our tableau.
Examples.

• Γ = {p ∨ q, ¬p, ¬q}
Initial List:
p∨q
¬p
¬q
• Γ = {p ∧ q, ¬p ∨ q, ¬(q ∧ ¬¬r)}
Initial List:
p∧q
¬p ∨ q
¬(q ∧ ¬¬r)
2. Next, we repeatedly apply the following rules:

¬¬φ: add φ
φ ∧ ψ: add φ and ψ
¬(φ ∧ ψ): branch into ¬φ | ¬ψ
φ ∨ ψ: branch into φ | ψ
¬(φ ∨ ψ): add ¬φ and ¬ψ
φ → ψ: branch into ¬φ | ψ
¬(φ → ψ): add φ and ¬ψ
φ ↔ ψ: branch into φ, ψ | ¬φ, ¬ψ
¬(φ ↔ ψ): branch into φ, ¬ψ | ¬φ, ψ

We read these rules as follows:


ˆ If there’s a node with a formula to which no rule has been
applied yet, then we apply the rule by extending every branch
that goes through the node as shown by the rule.1
If all the rules that can be applied have been applied, then we say
that the tableau is complete.
Examples (Cont’d). The initial lists that we gave as examples above
can be extended to complete tableaux as follows:

p∨q
¬p
¬q

p q
¹Order doesn't matter.

p∧q
¬p ∨ q
¬(q ∧ ¬¬r)

¬p q

¬q ¬¬¬r ¬q ¬¬¬r

¬r ¬r
3. Once we’ve completed our tableau, we check on every branch B
whether there is a p ∈ P such that p ∈ B and ¬p ∈ B.
• if yes, then we say that B is closed, and mark it by writing a ✗ under it;
• if no, then we say that B is open.
Examples (Cont’d). In our examples, we get the following results:

p∨q
¬p
¬q

p q
✗ ✗

p∧q
¬p ∨ q
¬(q ∧ ¬¬r)

¬p q

¬q ¬¬¬r ¬q ¬¬¬r
✗ ✗
¬r ¬r
✗

4. We now check whether there is an open branch in the tree (i.e. a branch without a ✗ underneath):
• If yes, the tableau is called open and the set is satisfiable.
• If no, the tableau is called closed and the set is unsatisfiable.
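The four-step recipe above can be sketched as a small recursive program. The encoding (atoms as Python strings, compounds as tagged tuples) and all names are our own illustrative choices; the sketch completes the tableau for a finite set and collects the open branches:

```python
# Formulas encoded as nested tuples: atoms are strings; compounds are
# ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B), ('iff', A, B).

def expand(phi):
    """Return the tableau rule for phi as a list of branches (each branch a
    list of new formulas), or None if phi is a literal and no rule applies."""
    if isinstance(phi, str):
        return None                                    # atom
    op, args = phi[0], phi[1:]
    if op == 'and':
        return [[args[0], args[1]]]                    # one branch, two formulas
    if op == 'or':
        return [[args[0]], [args[1]]]                  # two branches
    if op == 'imp':
        return [[('not', args[0])], [args[1]]]
    if op == 'iff':
        return [[args[0], args[1]], [('not', args[0]), ('not', args[1])]]
    psi = args[0]                                      # op == 'not'
    if isinstance(psi, str):
        return None                                    # negated atom
    inner, sub = psi[0], psi[1:]
    if inner == 'not':
        return [[sub[0]]]                              # double negation
    if inner == 'and':
        return [[('not', sub[0])], [('not', sub[1])]]
    if inner == 'or':
        return [[('not', sub[0]), ('not', sub[1])]]
    if inner == 'imp':
        return [[sub[0], ('not', sub[1])]]
    if inner == 'iff':
        return [[sub[0], ('not', sub[1])], [('not', sub[0]), sub[1]]]

def open_branches(branch):
    """Complete the tableau below `branch` and return its open branches,
    each as a set of literals (step 3: closed means some p together with ¬p)."""
    for i, phi in enumerate(branch):
        rule = expand(phi)
        if rule is not None:
            rest = branch[:i] + branch[i + 1:]
            result = []
            for extension in rule:                     # one recursion per new branch
                result += open_branches(rest + extension)
            return result
    literals = set(branch)                             # only literals remain
    if any(('not', p) in literals for p in literals if isinstance(p, str)):
        return []                                      # closed branch
    return [literals]

def tableau_satisfiable(gamma):
    """Step 4: the set is satisfiable iff some branch stays open."""
    return len(open_branches(list(gamma))) > 0
```

On the two running examples, the tableau for {p ∨ q, ¬p, ¬q} closes entirely, while the one for {p ∧ q, ¬p ∨ q, ¬(q ∧ ¬¬r)} keeps one branch open, matching the trees above.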

6.3.3 Let's talk about the idea behind the algorithm for a moment. The idea
is that the rules allow us to test, step-by-step, what would need to be
the case for the formulas in the tree to be true. A rule creates new
branches if there’s more than one possibility for the formula to be true.
The idea can be given in the following two principles:

Down Preservation. If the formula φ at the parent node of a rule is


true under a valuation v, i.e. JφKv = 1, then at least one formula
ψ on a newly generated child node is true under v, i.e. JψKv = 1.
Up Preservation. If a formula ψ at a newly generated child node is
true under v, JψKv = 1, then the formula φ at the parent node is
true, i.e. JφKv = 1.

Following this idea, we ultimately create a tree in which each branch


corresponds (intuitively) to a possible valuation making all its mem-
bers true. More formally, the idea is that each complete branch B
corresponds to a valuation vB , such that JφKvB = 1 whenever φ ∈ B.
Note, however, that in contrast to the method of truth-tables, we don’t
use the recursive definition of truth in the formulation of our method.
The method is purely syntactic.
6.3.4 And what’s the deal with the 7’s? Well, a branch B can only corre-
spond to a real valuation if there is no p ∈ P such that p, ¬p ∈ B.

This is so, because v needs to be a function and if it would make both


p and ¬p true, i.e. if JpKv = 1 and J¬pKv = 1 − JpKv = 1, we’d need to
have v(p) = 1 and v(p) = 0, which is impossible. Hence a branch B
with some p, ¬p ∈ B doesn’t correspond to a real possibility and can
thus be eliminated.
If in this way, we eliminate all the possible evaluations, we have shown
that there is no valuation that makes all the formulas in the initial list true. Note that the initial list is the only node that
is on every branch of the tree—it is the root. Well, strictly speaking
we will need to prove this; and we will, in the next chapter.

6.3.5 But for now, let’s focus on the pragmatics. We will now first discuss
how to get a valuation from an open branch that makes the formulas
on the branch—and thus the initial list—true. If B is an open branch
of a complete tableau, then we define its associated interpretation
vB : P → {0, 1} by setting:
(
1 if p ∈ B
vB (p) :=
0 if p ∈
/B

Note that since we assume that B is open, vB is indeed a function!


(Why?) In fact, if B is open and ¬p ∈ B, then p ∉ B, and hence
vB (p) = 0—and so J¬pKvB = 1 − JpKvB = 1. In fact, as we will show
in the next chapter, we will get as a theorem that every formula of an
open branch is true under the associated interpretation:

Theorem (To be proven later). Let B be an open branch of a com-


plete tableau and vB its associated valuation. Then for all φ ∈ B, we
have that JφKvB = 1.

6.3.6 Example. Let’s consider our example of an open tableau from the de-
scription of the tableau method:

p∧q
¬p ∨ q
¬(q ∧ ¬¬r)

¬p q

¬q ¬¬¬r ¬q ¬¬¬r
✗ ✗
¬r ¬r
✗

In this case, the associated interpretation of the only open branch B


(the right-most one) is given by vB (p) = 1, vB (q) = 1, vB (r) = 0.
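Reading off vB is mechanical once the open branch is given as its set of literals. A sketch, assuming atoms are represented as strings and negated atoms as ('not', atom) pairs (our own encoding):

```python
def associated_valuation(branch, letters):
    """v_B(p) = 1 iff p itself occurs on the branch; all other letters get 0."""
    return {p: 1 if p in branch else 0 for p in letters}

# The open (right-most) branch of the example ends in the literals p, q, ¬r:
v_B = associated_valuation({'p', 'q', ('not', 'r')}, ['p', 'q', 'r'])
print(v_B)  # {'p': 1, 'q': 1, 'r': 0}
```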

6.3.7 Note that since the initial list, the members of our set Γ, are on every branch of the tableau (they're on the root, after all), it follows that if
there’s an open branch, then the initial list is on it. So, by the Theo-
rem stated (but not proven!) in 6.3.5, we have that vB makes all the
members of Γ true. We will use this now to define a proof method
using analytic tableaux.

6.3.8 Using the idea that Γ ⊨ φ iff Γ ∪ {¬φ} is unsatisfiable (by Theorem 6.2.6), we define Γ ⊢T φ as meaning that the complete tableau for Γ ∪ {¬φ} is closed (i.e. not open). As a notational convention, we usually leave out the T and just write Γ ⊢ φ. So, to be perfectly explicit, the idea is that if the tableau for Γ ∪ {¬φ} is closed, then there is no valuation that makes all its members true, i.e. the set is unsatisfiable; but that just means that Γ ⊨ φ. If, instead, the tableau for Γ ∪ {¬φ} is open, then there is such a valuation, which shows that Γ ⊭ φ. So, in the tableau method, our step-by-step syntactic procedure, our proof, is the construction of the tableau. And, as it turns out, we can not only use this method to derive the conclusion from the premises in all (and only) the valid inferences; we can also show that all invalid inferences are in fact invalid—and we get a countermodel showing this for free, on top.

6.3.9 Note that in order to prove that a formula is a logical truth, we need
to show that it follows from the empty set. Remember: ⊨ φ means that ∅ ⊨ φ. Using the method of tableaux, this means that we need to
check if the set {¬φ} is satisfiable. If it is, then there is a valuation in

which ¬φ is true, so φ false, and so φ is not a logical truth; if {¬φ} is


not satisfiable, then ¬φ is always false, so φ is always true, and so φ is a logical truth.
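For the law of excluded middle, for instance, the check that {¬(p ∨ ¬p)} is unsatisfiable only involves the two valuations of p. A one-line sketch:

```python
# The value of ¬(p ∨ ¬p) under v(p) = p, using the clauses 1 - x and max.
negated = lambda p: 1 - max(p, 1 - p)

# Unsatisfiable: false under both valuations, so p ∨ ¬p is a logical truth.
print(all(negated(p) == 0 for p in [0, 1]))  # True
```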

6.3.10 Let’s consider a bunch of examples:

(a) De Morgan 1

¬p ∨ ¬q ⊢ ¬(p ∧ q)        ¬(p ∧ q) ⊢ ¬p ∨ ¬q
¬p ∨ ¬q ¬(p ∧ q)
¬¬(p ∧ q) ¬(¬p ∨ ¬q)

p∧q ¬¬p

¬p ¬q ¬¬q

p p ¬p ¬q

q q p p
✗ ✗
q q
✗ ✗

(b) De Morgan 2

¬p ∧ ¬q ⊢ ¬(p ∨ q)        ¬(p ∨ q) ⊢ ¬p ∧ ¬q
¬p ∧ ¬q ¬(p ∨ q)
¬¬(p ∨ q) ¬(¬p ∧ ¬q)

p∨q ¬p

p q ¬q

¬p ¬p ¬¬p ¬¬q

¬q ¬q p q
✗ ✗ ✗ ✗

(c) Law of Excluded Middle



⊢ p ∨ ¬p
¬(p ∨ ¬p)

¬p

¬¬p

p
✗

(d) Definition of the Conditional

⊢ (¬p ∨ q) ↔ (p → q)
¬((¬p ∨ q) ↔ (p → q))

(¬p ∨ q) ¬(¬p ∨ q)

¬(p → q) (p → q)

p ¬¬p

¬q ¬q

¬p q ¬p q
✗ ✗
p p
✗ ✗

(e) Transitivity

(p → q), (q → r) ⊢ (p → r)
p→q
q→r
¬(p → r)

¬r

¬p q

¬q r ¬q r
✗ ✗ ✗ ✗

(f) Distributivity

(p ∨ q) ∧ r ⊢ (p ∧ r) ∨ (q ∧ r)
(p ∨ q) ∧ r
¬((p ∧ r) ∨ (q ∧ r))

p∨q

¬(p ∧ r)

¬(q ∧ r)

p q

¬p ¬r ¬p ¬r

¬q ¬r ¬q ¬r ¬q ¬r ¬q ¬r
✗ ✗ ✗ ✗ ✗ ✗ ✗ ✗

6.3.11 Note that by Definition 6.3.8, we have that Γ ⊬ φ iff the tableau for Γ ∪ {¬φ} is open. In that case, we get a countermodel showing that Γ ⊭ φ for free. Here are a couple of examples:

(a) Fallacy Affirming the Consequent



p → q, q ⊬ p
p→q
q
¬p

¬p q

Countermodel : vB (q) = 1, vB (p) = 0.

(b) Fallacy of Affirming the Disjunct

p ∨ q, p ⊬ ¬q
p∨q
p
¬¬q

p q

Countermodel : vB (p) = 1, vB (q) = 1.

(c) Messed Up Distributivity



(p ∨ r) ∧ (q ∨ r) ⊬ (p ∨ q) ∧ r
(p ∨ r) ∧ (q ∨ r)
¬((p ∨ q) ∧ r)

p∨r

q∨r

¬(p ∨ q) ¬r

p r

q r q r
✗ ✗ ✗
¬p

¬q

p r

q r q r
✗ ✗ ✗

Countermodel (left most branch): vB (p) = 0, vB (q) = 0, vB (r) = 1

6.3.12 Let’s conclude with one remark. Note that, officially, we’re only allowed
to close branches once we’ve completed the entire tree. In practice,
however, it’s often possible to stop early—as soon as we find a formula
φ and its negation ¬φ on a branch, we know that we’ll also eventually
find a p and ¬p on the branch. So we can “close early.” In practice,
this will be fine but for now, I’d like you to stick to the official rules.
It’s a bit like with official notation and conventional notation. The
official rules (don’t close early) are there to ensure that no mistakes
are made. Once we're more comfortable doing tableaux—when we do them for first-order logic—you'll be allowed to “close early.”

6.4 Core Ideas

• There are several different kinds of proof systems: Hilbert calculi, sequent calculi, natural deduction, and analytic tableaux. In this course, we use analytic tableaux.

• A set of formulas is satisfiable iff there is a valuation that makes all of its members true.

• An inference is valid iff the set of the premises and the negation of the conclusion is unsatisfiable.

• The method of analytic tableaux is an algorithm for checking whether a set of formulas is satisfiable: if the tableau for a set is open, then the set is satisfiable.

• We define a proof system using analytic tableaux by defining derivability as closure of the tableau for the set of premises plus the negation of the conclusion.

• We can read off a countermodel from an open branch of an open tableau.

6.5 Self Study Questions


6.5.1 Consider a set Γ. Which of the following implies that Γ is satisfiable?

(a) For all valuations v, there is a formula φ ∈ Γ, such that JφKv = 1.


(b) For all valuations v and all formulas φ ∈ Γ, we have JφKv = 1.
(c) For some valuation v there is a formula φ ∈ Γ such that JφKv = 1.
(d) For some valuation v and all formulas φ ∈ Γ, we have JφKv = 1.
(e) For all φ ∈ Γ there exists a valuation v with JφKv = 1.
(f) For all φ ∈ Γ and valuations v, we have JφKv = 1.
(g) For some φ ∈ Γ there exists a valuation v with JφKv = 1.
(h) For some φ ∈ Γ we have for all valuations v that JφKv = 1.

6.5.2 Consider a set Γ. Which of the following implies that Γ is unsatisfiable?

(a) For each formula φ ∈ Γ, there is a valuation v with JφKv = 0.


(b) For each formula φ ∈ Γ and valuation v, we have JφKv = 0.
(c) There is a formula φ ∈ Γ such that for all valuations v, we have
JφKv = 0.
(d) There is a formula φ ∈ Γ and valuation v, such that JφKv = 0.
(e) For each valuation v, there is a formula φ ∈ Γ with JφKv = 0.
(f) For all valuations v and formulas φ ∈ Γ, we have JφKv = 1.

(g) There is a valuation v such that for all formulas φ ∈ Γ, we have


JφKv = 0.
(h) There is no valuation v such that for all formulas φ ∈ Γ, we have
JφKv = 1.

6.5.3 Consider a complete tableau. Which of the following entails that the
tableau is open.

(a) For no sentence letter p ∈ P is it the case that for all branches B
we have p, ¬p ∈ B.
(b) For no sentence letter p ∈ P do we have a branch B with p, ¬p ∈ B.
(c) For some sentence letter p ∈ P we have a branch B with either p ∉ B or ¬p ∉ B.
(d) For some sentence letter p ∈ P we have that for all branches B, either p ∉ B or ¬p ∉ B.
(e) For all branches B there is a sentence letter p ∈ P such that either p ∉ B or ¬p ∉ B.
(f) For all branches B and all sentence letters p ∈ P we have that either p ∉ B or ¬p ∉ B.

6.5.4 Consider a complete tableau. Which of the following entails that the
tableau is closed.

(a) There is a sentence letter p ∈ P and branch B, such that p, ¬p ∈ B.


(b) There is a sentence letter p ∈ P, such that for all branches B, we
have p, ¬p ∈ B.
(c) There is a sentence letter p ∈ P, such that for all branches B,
either p ∈ B or ¬p ∈ B.
(d) For each branch B there is a p ∈ P such that either p ∈ B or
¬p ∈ B.
(e) For each branch B there is a p ∈ P such that p ∈ B and ¬p ∈ B.
(f) For each branch B and all p ∈ P we have that p ∈ B and ¬p ∈ B.

6.6 Exercises
6.6.1 [6 x] Describe the content of Theorem 6.2.6 in your own words (without symbols).

6.6.2 Prove that the following sets are unsatisfiable without using analytic tableaux!

(a) [h] {¬(p → q), ¬(q → p)}



(b) {¬(p ∨ ¬p)}


(c) [h] {¬p, ¬p → p}
(d) {¬p, (p → q) → p}

6.6.3 Let Γ = {φ1 , . . . , φn } be a finite set of formulas. Prove that Γ is unsatisfiable iff ¬(φ1 ∧ . . . ∧ φn ) is a logical truth.

6.6.4 Check the following claims using analytic tableaux:

(a) [h] p → q, r → q ⊢ (p ∨ r) → q
(b) [h] p → (q ∧ r), ¬r ⊢ ¬p
(c) [h] ⊢ ((p → q) → q) → q
(d) [h] ⊢ ((p → q) ∧ (¬p → q)) → ¬p
(e) p ↔ (q ↔ r) ⊢ (p ↔ q) ↔ r
(f) ¬(p → q) ∧ ¬(p → r) ⊢ ¬q ∨ ¬r
(g) p ∧ (¬r ∨ s), ¬(q → s) ⊢ r
(h) ⊢ (p → (q → r)) → (q → (p → r))
(i) ¬(p ∧ ¬q) ∨ r, p → (r ↔ s) ⊢ p ↔ q
(j) p ↔ ¬¬q, ¬q → (r ∧ ¬s), s → (p ∨ q) ⊢ (s ∧ q) → p

6.6.5 Let φ be a formula. Determine how long the tableau for {¬φ} can at most be (measured in terms of its longest branch), based on φ's complexity c(φ).

6.6.6 Highly optional : Prove in the Hilbert calculus that:

(a) ⊢ (¬p → p) → p
(b) ⊢ (((p → q) → p) → p)

6.7 Further Readings


The system of natural deduction finds many applications in logic. You can
read more about it in:

• Natural Deduction: Section 2.4 of Dalen, Dirk van. 2013. Logic and
Structure. 5th edition. London, UK: Springer.
Self Study Solutions
6.5.1 (b), (d), (f)
6.5.2 (c), (e), (h) NB: (b) is not correct! Hint: Think of the case Γ = ∅.
6.5.3 (b), (f)
6.5.4 (b), (e), (f)
Chapter 7

Soundness and Completeness

This chapter is rather short, but it packs a punch: it contains two relatively
complicated proofs. We’re going to spend one entire lecture going through
the details.

7.1 Soundness and Completeness


7.1.1 Recall from the introduction that we want our proof system to be such
that we can derive the conclusion from the premises in all and only the
valid inferences. In this chapter, we set out to prove that our tableau
system for propositional logic enjoys this property: we set out to prove
that our tableau system is sound and complete. Remember from the
introduction that soundness means that only in valid inferences, we
can derive the conclusion from the premises, while completeness means
that in all the valid inferences, we can derive the conclusion from
the premises. Using our official notation, we can now state the two
theorems we wish to prove as follows:

Soundness Theorem. If Γ ⊢ φ, then Γ ⊨ φ.


Completeness Theorem. If Γ ⊨ φ, then Γ ⊢ φ.

We’re going to prove these two theorems in turn, beginning with


soundness. But before, let us make a couple of remarks about sound-
ness and completeness results in general.

7.1.2 The reason why having a sound and complete proof system is desir-
able is that it allows us to approach validity in a purely syntactic
fashion. Remember that proof systems are purely syntactic, they only
manipulate formulas without reference to the semantic clauses. If we
have a sound and complete proof system, this means that even though
we don’t explicitly talk about semantics in our system, we still effec-
tively capture the semantically defined notion of validity—no small


feat! From an AI perspective what’s neat about this is that having


reduced validity to syntactic derivability makes establishing validity
much more tractable for computer systems.
7.1.3 Of the two kinds of theorems, soundness and completeness, the former
is typically easier to show than the latter. Intuitively, the soundness
theorem is a kind of “sanity check” for our proof system. As we stated
it above, the theorem states that only in valid inferences, we can derive
the conclusion from the premises. Why is that a sanity check? Well,
because it means that if we can derive something, then it follows—it
can’t happen that we derive something and it doesn’t follow.
7.1.4 The completeness theorem is typically (much) harder to prove. And
without knowing how the proof goes, it’s already possible to see why.
The theorem states that every conclusion that can validly be drawn
from a set of premises can be derived from them. But surely we usually
don’t know all conclusions that can validly be drawn from a set of
premises: there are many (actually infinitely many) of them, and it takes some time to figure that out. But the completeness theorem states that, even if we don't know which conclusions we can validly draw, we can be sure that we can derive them. That's surprising! (I
hope . . . )
7.1.5 The tableau system we use in these notes has the nice feature that its
soundness proof and its completeness proof are relatively perspicuous.
This is why we can give them in an introductory course like this. The
completeness proof for the Hilbert or natural deduction calculi for
propositional logic, for example, is much harder (although ultimately
based on the same ideas). The aim of this chapter is two-fold: first,
I want to introduce you to the idea of soundness and completeness
proofs, the kinds of things you have to do to establish a result like this,
etc.; and second, I want to set you up for the beginning of the second
part of the course, in which we’re going to focus more on proving things
in logic—so why not begin with one of the most exciting theorems you
can prove? ☺
7.1.6 Before we get started, let’s briefly say something about the proof idea.
Remember that (in 6.3.3) we said that the idea behind the tableau rules is given by the following two properties:
Down Preservation. If the formula at the parent node of a rule is
true under a valuation, then at least one formula on a newly
generated child node is true under the valuation.
Up Preservation. If a formula at a newly generated child node is
true under a valuation, then the formula at the parent node is
true.

It turns out that these two properties, when thought through carefully, lead to the desired results: down preservation leads to soundness and up preservation leads to completeness. Effectively, what down and up preservation together guarantee is that for each rule, we're thinking through precisely the ways in which a formula can be true: down preservation means that we're considering all the possibilities of the formula being true, and up preservation means that we're considering only possibilities of the formula being true. We'll have to put in some
work, but that’s the essence of it. To prepare yourself for the proof,
remind yourself of the rules and check that they do indeed have the
two properties just described:

¬¬φ: add φ
φ ∧ ψ: add φ and ψ
¬(φ ∧ ψ): branch into ¬φ | ¬ψ
φ ∨ ψ: branch into φ | ψ
¬(φ ∨ ψ): add ¬φ and ¬ψ
φ → ψ: branch into ¬φ | ψ
¬(φ → ψ): add φ and ¬ψ
φ ↔ ψ: branch into φ, ψ | ¬φ, ¬ψ
¬(φ ↔ ψ): branch into φ, ¬ψ | ¬φ, ψ

7.2 The Soundness Theorem


7.2.1 In this section, we're aiming to prove the soundness theorem, i.e. the fact that if Γ ⊢ φ, then Γ ⊨ φ. Actually, what we're going to prove is the contrapositive (remember contrapositive proof from §2): if Γ ⊭ φ, then Γ ⊬ φ. Let's set out our strategy. First, remember that Γ ⊬ φ means that the tableau for Γ ∪ {¬φ} is open, i.e. at least one branch in the tableau doesn't contain both some p and ¬p (6.3.8). So, in order to obtain our result—that if Γ ⊭ φ, then Γ ⊬ φ—we need to show that we can derive from Γ ⊭ φ that at least one branch in the tableau for Γ ∪ {¬φ} doesn't contain both some p and ¬p. How can we achieve this? Well, remember that Γ ⊭ φ means that there's a valuation v such that JψKv = 1 for all ψ ∈ Γ and JφKv = 0 (cf. 5.2.8). So, we can use the information that this countermodel exists. Now this is where the down preservation property comes into play. Note that the initial list—in our case, Γ ∪ {¬φ}—is at the root of our tableau. Our countermodel showing Γ ⊭ φ makes all the members of Γ ∪ {¬φ} true. And by the down preservation property, whenever we apply a rule to our initial

list, at least one branch contains a true formula in our countermodel.


So, our final tableau must contain at least one branch such that all
the formulas on that branch are true in our countermodel. But then
that branch can’t contain both p and ¬p, since the two cannot both
be true under any valuation. Hence the branch must be open. This is
how we’re going to prove soundness, so let’s get to work.

7.2.2 In the following, we will talk about tableaux as the kinds of trees
constructed according to the rules laid out in 6.3.2. Remember that a
tableau is complete if every rule that can be applied has been applied;
otherwise we say that the tableau is incomplete. Now suppose that v
is a valuation and B a branch of a (possibly incomplete) tableau. Then
we say that v is faithful to B iff JφKv = 1, for all φ ∈ B, i.e. v is faithful
to B iff under v all the formulas on B are true. Note that the associated
interpretation vb of an open branch B in a complete tableau (6.3.5) is
a paradigm example of a faithful valuation (though we haven’t proven
this yet in generality): vB is faithful to B. So every countermodel
produced by the tableau method in 6.3.11 is a (paradigm) example of
a faithful interpretation for the open branch it was derived from. It’s
worth convincing yourselves of this fact in order to understand what’s
about to happen next. So go ahead and check that in each case in
6.3.11, vB makes all the members of B true. The concept of faithfulness
will be the central concept in our soundness and completeness proof.
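For the branches that matter at the end of a construction only literals remain, and then faithfulness is a one-line check. A sketch under our own representation (atoms as strings, negated atoms as ('not', atom) pairs):

```python
def faithful(v, branch):
    """True iff the valuation v (a dict from atoms to 0/1) makes every
    literal on the branch true."""
    return all(
        (1 - v[lit[1]] if isinstance(lit, tuple) else v[lit]) == 1
        for lit in branch
    )

# A valuation is faithful to a branch with the literals p and ¬q ...
print(faithful({'p': 1, 'q': 0}, {'p', ('not', 'q')}))   # True
# ... but no valuation is faithful to a closed branch containing p and ¬p.
print(faithful({'p': 1, 'q': 0}, {'p', ('not', 'p')}))   # False
```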

7.2.3 Note that if we have an incomplete tableau and apply a rule to some
formula in it, each branch of our incomplete tableau will be extended
with new formulas (this is what it means to properly apply a rule to
an incomplete tableau, cf. 6.3.2.2). Our central lemma, which will lead
to the soundness theorem, states that by extending branches in this
way, we preserve faithfulness of valuations:

Lemma (Soundness Lemma). Let v be a valuation that is faithful to a


branch B of an incomplete tableau. If a rule is applied to a formula in
the tableau, then v is faithful to at least one branch B′ which extends B in the new tableau.

This lemma is, in a sense, a more precise version of the down preserva-
tion property. Before we set out to prove it, let’s consider an example
to see what the lemma says. Let’s do the tableau for {p ∨ q, ¬p ∨ ¬q}
step-by-step and consider the faithful valuation along the way (assum-
ing P = {p, q}). We begin with the initial list:

p∨q
¬p ∨ ¬q

The initial list is a limit-case of a tableau, one with only one node and
one branch. It’s easily checked that there are precisely two valuations
that are faithful to this branch (which consists solely of the initial list):
v1 with v1 (p) = 1 and v1 (q) = 0 and v2 with v2 (p) = 0 and v2 (q) = 1.
Now let’s begin constructing our tableau by applying the rule for p∨q:

p∨q
¬p ∨ ¬q

p q

We now have two branches in our tableau: B1 which contains p, ¬p∨¬q,


and p ∨ q and B2 which contains q, ¬p ∨ ¬q, and p ∨ q. Now it’s easily
checked that v1 remains faithful to at least one of the new branches
created, namely B1 : v1 makes p true. The valuation v1 , of course, is not
faithful to all the new branches, B2 contains q and v1 makes q false.
But our lemma states that v1 needs to make all the formulas on one
of the new branches true. (The case for v2 is completely analogous).
Continuing constructing our tableau hopefully drives the point home:

p∨q
¬p ∨ ¬q

p q

¬p ¬q ¬p ¬q
✗ ✗

We now have four branches in our tableau:

• B11 with ¬p, p, ¬p ∨ ¬q, p ∨ q
• B12 with ¬q, p, ¬p ∨ ¬q, p ∨ q
• B21 with ¬p, q, ¬p ∨ ¬q, p ∨ q
• B22 with ¬q, q, ¬p ∨ ¬q, p ∨ q

Of the two branches extending B1 , namely B11 and B12 , v1 is faithful


again to one of them: B12 . Note that there is no valuation faithful to
B11 , since the branch is closed: we have p, ¬p ∈ B11 and there can’t be a
valuation that makes both p and ¬p true. So, starting with a valuation
(v1 ) that made the members of our initial list true, by keeping track
of that valuation throughout the construction of our tableau, we ulti-
mately found a branch in the final, complete tableau (B12 ) such that

the valuation makes all the formulas on that branch true—all thanks
to the fact that we could always find at least one new branch with a
true formula on it.
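To make the bookkeeping concrete, here is a small Python sketch that replays this example (my own illustration, not part of the official course material): formulas are encoded as functions from valuations to truth values, following the clauses J¬φKv = 1 − JφKv and Jφ ∨ ψKv = max(JφKv , JψKv ).

```python
# Encode formulas as functions from valuations (dicts) to truth values,
# following the semantic clauses for negation (1 - x) and disjunction (max).
p = lambda v: v["p"]
q = lambda v: v["q"]

def neg(phi):
    return lambda v: 1 - phi(v)

def lor(phi, psi):
    return lambda v: max(phi(v), psi(v))

def faithful(v, branch):
    """A valuation is faithful to a branch iff it makes every formula on it true."""
    return all(phi(v) == 1 for phi in branch)

v1 = {"p": 1, "q": 0}

# The two branches after applying the rule for p ∨ q:
B1 = [lor(p, q), lor(neg(p), neg(q)), p]
B2 = [lor(p, q), lor(neg(p), neg(q)), q]
print(faithful(v1, B1), faithful(v1, B2))  # True False

# After applying the rule for ¬p ∨ ¬q to B1:
B11 = B1 + [neg(p)]   # closed: contains p and ¬p
B12 = B1 + [neg(q)]
print(faithful(v1, B12))  # True: v1 survives into the complete tableau
# No valuation at all is faithful to the closed branch B11:
print(any(faithful({"p": a, "q": b}, B11) for a in (0, 1) for b in (0, 1)))  # False
```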

7.2.4 We’re now going to prove that this example generalizes; that is, we’re
going to prove our lemma:

Proof. The proof consists in a one-by-one inspection of the rules. There
are 9 rules, so 9 cases. Here I’m not going to work through all of the
cases for you; I’ll leave some work for you (exercise 7.7.1). I will do the
cases for (a) the rule for φ → ψ and (b) the rule for ¬(φ → ψ).

(a) Suppose that v is a valuation faithful to branch B of some in-
complete tableau. Suppose further that φ → ψ ∈ B and now the
rule

             φ → ψ
             /    \
           ¬φ      ψ

is applied, extending the branch accordingly. This means that we
have two new branches extending B, B1 and B2 . And we have
B1 = B ∪ {¬φ} and B2 = B ∪ {ψ}. We already know that v is
faithful to B, and so Jφ → ψKv = 1, in particular. Since Jφ →
ψKv = max(1 − JφKv , JψKv ), it follows that either JφKv = 0 or
JψKv = 1. So, we can distinguish two cases:

• In the first case, J¬φKv = 1 − JφKv = 1. But since v is already
faithful to B, this means that v is faithful to B1 = B ∪ {¬φ}.
• In the second case, since v is already faithful to B, we imme-
diately get that v is faithful to B2 = B ∪ {ψ}.

So, either way, v is faithful to at least one new branch created by
the rule for φ → ψ, which is what we needed to show.
(b) For the second case, suppose that v is a valuation faithful to branch
B of some incomplete tableau and that ¬(φ → ψ) ∈ B. Now the
rule

           ¬(φ → ψ)
               φ
              ¬ψ

is applied, which gives us one new branch B′ = B ∪ {φ, ¬ψ}. Since
v is faithful to B and ¬(φ → ψ) ∈ B, we get that J¬(φ → ψ)Kv = 1.
From this, since J¬(φ → ψ)Kv = 1 − max(1 − JφKv , JψKv ), we get
that max(1 − JφKv , JψKv ) = 0. So JφKv = 1 and JψKv = 0. Since
J¬ψKv = 1 − JψKv , we can conclude J¬ψKv = 1. But now, since
v was faithful to B, B′ = B ∪ {φ, ¬ψ}, and JφKv = 1 as well as
J¬ψKv = 1, we get that v is faithful to B′ .

The remaining cases work similarly and you should work them out
yourself.

7.2.5 We will now use the soundness lemma to conclude the soundness the-
orem:

Theorem (Propositional Soundness). If Γ ⊢ φ, then Γ ⊨ φ.

Proof. We follow the proof strategy laid out in 7.2.1. We prove the
contrapositive: if Γ ⊭ φ, then Γ ⊬ φ. So, suppose that Γ ⊭ φ. Then
there’s a valuation, v, such that JψKv = 1 for all ψ ∈ Γ and JφKv = 0.
We now successively construct the tableau for Γ ∪ {¬φ} and prove that
it must be open. First, we write down the initial list Γ ∪ {¬φ}. Note
that since v is such that JψKv = 1 for all ψ ∈ Γ and JφKv = 0, it follows
immediately that v is faithful to the only branch in the (incomplete)
tableau consisting only of the initial list. We now successively apply
the rules to turn our initial list into a complete tableau. Every time
we apply a rule, by our soundness lemma 7.2.3, we get at least one
branch that v is faithful to. Hence v must be faithful to at least one
branch in the complete tableau. Call this branch B.
We now conclude that B cannot be closed. We show this indirectly.
Suppose that B were closed. Then there would exist a p ∈ P such that
both p ∈ B and ¬p ∈ B. Since v is faithful to B, this would mean
that JpKv = 1 and J¬pKv = 1. But this just means that v(p) = 1 and
v(p) = 0, which is impossible. Hence B cannot be closed.
But if B cannot be closed, then B must be open. Since a tableau is
open iff at least one branch in the tableau is open (6.3.2.4), we conclude
that our complete tableau must be open. Hence Γ ⊬ φ, by definition,
which is what we needed to show.

7.2.6 Before we conclude the section and move to completeness, we remark
that what we’ve just proven can also be interpreted differently: ef-
fectively, what we’ve proven is that the tableau method, construed as
the algorithm laid out in 6.3.2, gives the correct result for unsatisfiable
sets:

Theorem (Tableau Verification Part 1). If the algorithm laid out
in 6.3.2 gives the answer that a set is unsatisfiable, then the set is
unsatisfiable.

Proof. Suppose that the tableau method gives the result that a set Γ is
unsatisfiable. By 6.3.2.4, this means that the tableau must be closed.
Now suppose, for proof by contradiction, that the set Γ is satisfiable.
This means, by definition, that there’s a valuation v that makes all the
members of Γ true. By the same reasoning as in the proof of Soundness,
we can conclude that there’s at least one open branch in the complete
tableau for Γ. Hence the tableau must be open, in contradiction to
our assumption that the tableau method gave the result that Γ is
unsatisfiable. Hence Γ must indeed be unsatisfiable, which is what we
needed to show.

7.3 The Completeness Theorem

7.3.1 We now move to completeness. There is a sense in which the com-
pleteness proof is easier than the soundness proof, namely that the
proof idea is easier: we simply prove that the associated valuation for
an open branch in a complete tableau (cf. 6.3.5) is faithful to that
branch. From this, we can quickly conclude completeness, again by
contrapositive reasoning. For suppose that Γ ⊬ φ. This means, by def-
inition, that the tableau for Γ ∪ {¬φ} is open. Hence we get an open
branch and associated interpretation, which then makes all the
members of Γ ∪ {¬φ} true and thus shows that Γ ⊭ φ. So, this gives
us that if Γ ⊬ φ, then Γ ⊭ φ, which is just the contrapositive of com-
pleteness. The devil, you rightly suspect, lies of course in the details:
proving that the associated interpretation is indeed faithful. That’s
what we will now do, giving us the completeness lemma. In prepara-
tion for the proof, remind yourself of the definition of the associated
interpretation:

    vB (p) = 1 if p ∈ B, and vB (p) = 0 if p ∉ B.

And make sure that you actually did check the examples in 6.3.11, as
I asked you to in 7.2.2.

7.3.2 We prove:

Lemma (Completeness Lemma). Let B be an open branch of a com-
plete tableau. Then vB is faithful to B.

The main proof is a (rather complicated) induction, so make sure that
you’re familiar with the proof principle (§4.2). Ready? Here we go:

Proof. Let B be an open branch of a complete tableau and vB its
associated valuation. First, we split up our claim into two parts. We
note that vB is faithful to B, i.e. JφKvB = 1 for all φ ∈ B, iff for all
φ ∈ L:

1. if φ ∈ B, then JφKvB = 1, and
2. if ¬φ ∈ B, then JφKvB = 0.

So, what we’re going to prove is the conjunction of 1. and 2. We’re
going to do this by induction.

(i) Base case. We need to show that for all p ∈ P: 1. if p ∈ B,
then JpKvB = 1, and 2. if ¬p ∈ B, then JpKvB = 0. Claim 1. is
immediate by Definition 6.3.5. To see that 2. holds, note that if
¬p ∈ B, then it cannot be that p ∈ B. Because then p, ¬p ∈ B
and so B would be closed, contrary to our assumption that it’s
open. But by definition, if p ∉ B, we have vB (p) = 0, and since
JpKvB = vB (p), we get our desired claim.
(ii) Induction steps. Now, things get serious:

(a) We need to prove that if φ enjoys the property, then ¬φ
enjoys the property. In our case, this means that assuming

1. if φ ∈ B, then JφKvB = 1, and
2. if ¬φ ∈ B, then JφKvB = 0.

as our induction hypothesis, we need to derive that

1’. if ¬φ ∈ B, then J¬φKvB = 1, and
2’. if ¬¬φ ∈ B, then J¬φKvB = 0.¹

We do so in turn:

• First, 1’. Suppose that ¬φ ∈ B. By induction hypothesis
2., this means that JφKvB = 0. But J¬φKvB = 1 − JφKvB
and so J¬φKvB = 1, as desired.
• For 2’. Assume that ¬¬φ ∈ B. Since B is an open branch
of a complete tableau, every rule that can be applied has
been applied. And so, the rule for ¬¬φ has been applied:

           ¬¬φ
            φ

So, it must be the case that φ ∈ B. But then, by in-
duction hypothesis 1., we get that JφKvB = 1. And since
J¬φKvB = 1 − JφKvB , we get J¬φKvB = 0, as desired.

¹ Note the double negation here. This is not a typo!
(b) We have a different sub-case for each ◦ = ∧, ∨, →, ↔. I will go
through the case for ◦ = ∧ to illustrate the idea and leave
the remaining cases as an exercise (7.7.2).
We have two pairs of induction hypotheses:

1φ . if φ ∈ B, then JφKvB = 1, and
2φ . if ¬φ ∈ B, then JφKvB = 0.

And

1ψ . if ψ ∈ B, then JψKvB = 1, and
2ψ . if ¬ψ ∈ B, then JψKvB = 0.

What we need to prove are:

1φ∧ψ . if φ ∧ ψ ∈ B, then Jφ ∧ ψKvB = 1, and
2φ∧ψ . if ¬(φ ∧ ψ) ∈ B, then Jφ ∧ ψKvB = 0.

We do so in turn:

• Suppose that φ ∧ ψ ∈ B. Since B is an open branch of
a complete tableau, every rule that can be applied has
been applied. And so, the rule for φ ∧ ψ has been applied:

           φ ∧ ψ
             φ
             ψ

So, we can conclude that both φ, ψ ∈ B. But by 1φ . and
1ψ ., this means that JφKvB = 1 and JψKvB = 1. Since
Jφ ∧ ψKvB = min(JφKvB , JψKvB ), we get Jφ ∧ ψKvB = 1, as
desired.
• Next, suppose that ¬(φ ∧ ψ) ∈ B. Again, since B is an
open branch of a complete tableau, the rule for ¬(φ ∧ ψ)
has been applied:

           ¬(φ ∧ ψ)
            /     \
          ¬φ       ¬ψ

So, we can conclude that either ¬φ ∈ B or ¬ψ ∈ B. We
can therefore distinguish two cases. In the first case, if ¬φ ∈
B, we can infer from 2φ . that JφKvB = 0. Since Jφ ∧ ψKvB =
min(JφKvB , JψKvB ), this means we get Jφ ∧ ψKvB = 0. In the
second case, if ¬ψ ∈ B, we can infer from 2ψ . that JψKvB = 0.
So, we get Jφ ∧ ψKvB = 0. So either way, Jφ ∧ ψKvB = 0, as
desired.

As we said above, the remaining cases are left as exercises. Once they
are completed, we can infer the completeness lemma via induction.

7.3.3 From the completeness lemma, actual completeness follows quickly
(via the proof strategy laid out in 7.3.1):

Theorem (Propositional Completeness). If Γ ⊨ φ, then Γ ⊢ φ.

Proof. We use contrapositive proof. So assume that Γ ⊬ φ. This means,
by definition, that in the complete tableau for Γ ∪ {¬φ} there is at least
one open branch. Call it B. The branch has an associated interpre-
tation vB which is faithful to B by Lemma 7.3.2. Since Γ ∪ {¬φ} is
the root of the tableau, and therefore included in any branch, we have
Γ ∪ {¬φ} ⊆ B. But that means, since vB is faithful to B, that vB
makes all the members of Γ ∪ {¬φ} true and so Γ ⊭ φ (by 6.2.6), as
desired.

7.3.4 Note that, just like in the case of soundness, our completeness theorem
can be interpreted as a theorem about the tableau method with respect
to satisfiability search: we’ve effectively shown that the method works
with respect to satisfiability.

Theorem (Tableau Verification Part 2). If the algorithm laid out in
6.3.2 gives the answer that a set is satisfiable, then the set is satisfiable.

Proof. Let Γ be a set the algorithm says is satisfiable. This means
that the complete tableau for Γ is open. So, there is an open branch,
B, in the tableau with an associated valuation vB . By Lemma 7.3.2,
vB is faithful to B and since Γ ⊆ B, it follows that vB makes all the
members of Γ true. In other words, Γ is satisfiable.

7.3.5 Together with the observation that the tableau method applied to a
finite set terminates after finitely many steps, Theorems 7.2.6 and 7.3.4
give an alternative proof of Theorem 5.3.7, the decidability of classical
propositional logic:

Theorem (Decidability of Propositional Logic). Propositional logic
is decidable, i.e. there exists an algorithm which after finitely many
steps correctly determines whether a given inference (with finitely
many premises) is valid.

But we already know that propositional logic is decidable, via truth-
tables, so why do we need another proof? The answer reveals some-
thing important about mathematical practice. Let’s assume that we’re
not worried about any of our proofs being incorrect, and so aren’t
looking for confirmation of the result by repeated proofs. This would
not be good mathematical practice anyway: remember, we work rig-
orously! Rather, having an alternative proof of an already known re-
sult can provide new insight into the result: why it is true, what it
says, how it can be applied, etc. In the case of decidability, we have
an alternative proof via tableaux. This is desirable since we already
know that many other logics (like first-order logic but also many non-
classical logics) don’t have (something corresponding to) a truth-table
method. So, if we want to prove decidability for these other logics, we
can’t use truth-tables. But we can try tableaux! It is, in fact, possible
to develop tableau methods for a wide range of logics (you will see
tableaux for other logics throughout your degree). If we can develop a
sound and complete tableau method for another logic, we can at least
hope that we get decidability in this way. Unfortunately, in the case of
first-order logic, our hopes will be disappointed. But even in the failure
to obtain decidability for first-order logic lies some insight: we’ll be
able to see why first-order logic is undecidable.
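To see how such an algorithm might look, here is a small Python sketch of a tableau-style satisfiability test (my own illustrative encoding, not the official algorithm of 6.3.2: it explores branches recursively rather than drawing the tree, and it omits the rules for ↔ for brevity). Atoms are strings and compound formulas are tuples.

```python
# Formulas: an atom is a string; compound formulas are tuples such as
# ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g).

def is_literal(f):
    return isinstance(f, str) or (f[0] == "not" and isinstance(f[1], str))

def expand(f):
    """The branch extensions the tableau rule for f creates:
    one list of new formulas per new branch."""
    if f[0] == "and":
        return [[f[1], f[2]]]                      # one branch: φ, ψ
    if f[0] == "or":
        return [[f[1]], [f[2]]]                    # two branches: φ | ψ
    if f[0] == "imp":
        return [[("not", f[1])], [f[2]]]           # two branches: ¬φ | ψ
    g = f[1]                                       # here f = ("not", g)
    if g[0] == "not":
        return [[g[1]]]                            # ¬¬φ: one branch with φ
    if g[0] == "and":
        return [[("not", g[1])], [("not", g[2])]]  # ¬(φ∧ψ): ¬φ | ¬ψ
    if g[0] == "or":
        return [[("not", g[1]), ("not", g[2])]]    # ¬(φ∨ψ): ¬φ, ¬ψ
    return [[g[1], ("not", g[2])]]                 # ¬(φ→ψ): φ, ¬ψ

def open_branch(branch):
    """Complete the tableau below `branch`; True iff some branch stays open."""
    for f in branch:
        if not is_literal(f):
            rest = [g for g in branch if g != f]
            return any(open_branch(rest + new) for new in expand(f))
    # Only literals left: the branch is closed iff it contains some p and ¬p.
    return not any(("not", a) in branch for a in branch if isinstance(a, str))

def entails(premises, conclusion):
    """Γ ⊢ φ iff the tableau for Γ ∪ {¬φ} closes."""
    return not open_branch(list(premises) + [("not", conclusion)])

print(entails(["p", ("imp", "p", "q")], "q"))  # True (modus ponens)
print(entails(["p"], "q"))                     # False
# The satisfiable set from 7.2.2 keeps an open branch:
print(open_branch([("or", "p", "q"), ("or", ("not", "p"), ("not", "q"))]))  # True
```

Since every rule application replaces a formula by strictly smaller ones, the recursion terminates after finitely many steps, which is exactly the finiteness observation the decidability theorem rests on.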

7.4 Infinite Premiss Sets and Compactness


In this section, we remain a bit less mathematically precise than we usually
are in this course. This is because we haven’t introduced the proper methods
for dealing with infinity, and the topic of this section is precisely infinity in
logic. However, we will introduce some ideas that will become important in
first-order logic, so stay tuned.

7.4.1 We conclude our treatment of tableaux and of propositional logic in
general by looking at inferences with infinite premise sets. So far, es-
pecially for tableaux and truth-tables, we’ve restricted ourselves to
inferences with finite premise sets. And for good reason: humans usu-
ally don’t make arguments with infinitely many statements in them;
how could they? But, especially from a mathematical perspective, it’s
desirable to be able to deal with inferences potentially involving in-
finitely many premises. It turns out that many mathematical theories
need to have infinitely many axioms. The standard theory of the nat-
ural numbers, called Peano arithmetic, typically abbreviated PA, is
already an example. It’s in fact provable that there is no finite set of
axioms that describes the same theory as PA (although we will not
prove this here). So, already for something as simple as a proper math-
ematical treatment of 0, 1, 2, . . . we need inferences with infinite premise
sets. We’ll get familiar with PA in the following section; PA is a first-
order theory. For now, we’ll simply discuss how we can make tableaux
work for inferences with infinitely many premises in the “safe harbor”
of propositional logic.

7.4.2 We’ll, in fact, still make a limiting assumption, namely that the infi-
nite premise sets we’ll be considering can be indexed or enumerated
by the (positive) natural numbers. That is, if Γ is a premise set, then
there is a way of writing Γ as the set {φi : i ∈ I}, where I ⊆ N+;²
in other words, there is a first member of Γ, a second member of Γ,
and so on.³ It might be surprising to learn that there are actually
infinite sets of formulas bigger than that. In fact, they only occur in
the more technical realms of logic; even in most infinitary applications,
our premise sets can still be indexed. Coming from a computer science
perspective, the assumption that our premise sets are enumerable is
quite reasonable from a technical perspective: computers can handle
infinity, like the natural numbers, using inductive definitions and re-
cursion; but with respect to the higher realms of infinity, computers
are rather limited in their capabilities (computers already approximate
reals like π with floats).

7.4.3 There are many, many infinite premise sets you can imagine; here we
give just a few examples to show you what we’re dealing with:

(a) The set {p, ¬p, ¬¬p, . . .}, or more precisely the smallest set X such
that p ∈ X and if φ ∈ X then ¬φ ∈ X. To have this infinite set,
we don’t even need infinitely many sentence letters.
(b) But if we do, the set P = {pi : i ∈ N} can function as an infinite
premise set.
(c) So, we can also have a set like {¬p2i , p2i+1 : i ∈ N}, which
contains ¬pi for each even i and pi for each odd i ∈ N.
(d) Let v be any valuation. Then the set Tv = {φ : JφKv = 1} is always
infinite! This idea we’ll discuss in more detail in the context of first-
order logic. But to see this, suppose that v(p) = 1 (if v(p) = 0,
the argument is completely analogous). Then JpKv = 1 and so
p ∈ Tv . But also J¬¬pKv = 1 and so ¬¬p ∈ Tv . And note that
p ≠ ¬¬p—after all, p contains no negations and ¬¬p contains 2.
Hence, Tv has at least two members. But then, there’s also ¬¬¬¬p:
J¬¬¬¬pKv = 1, so ¬¬¬¬p ∈ Tv , and p ≠ ¬¬¬¬p, ¬¬p ≠ ¬¬¬¬p—
so Tv has at least 3 members. This clearly goes on, so for every
n, Tv has at least n members, which is just another way of saying
that Tv is an infinite set.
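As a toy illustration of examples (a) and (d) (my own encoding, not official notation: a formula is just a string of ¬-signs followed by p), we can enumerate the set from (a) and collect the fragment of Tv it contributes:

```python
from itertools import islice

def negations_of_p():
    """Enumerate the set from example (a): p, ¬p, ¬¬p, ¬¬¬p, ..."""
    f = "p"
    while True:
        yield f
        f = "¬" + f

def truth_value(f, vp):
    """The truth value of f under v(p) = vp: each ¬ flips the value."""
    return vp if f.count("¬") % 2 == 0 else 1 - vp

print(list(islice(negations_of_p(), 4)))  # ['p', '¬p', '¬¬p', '¬¬¬p']

# With v(p) = 1, the members of Tv among the first ten formulas are
# exactly those with an even number of negations:
Tv_fragment = [f for f in islice(negations_of_p(), 10) if truth_value(f, 1) == 1]
print(Tv_fragment)  # ['p', '¬¬p', '¬¬¬¬p', '¬¬¬¬¬¬p', '¬¬¬¬¬¬¬¬p']
```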

7.4.4 Note that the definition of logical consequence/validity given in 5.2.2
can easily be applied to cases with infinitely many premises:

• Γ ⊨ φ iff for all valuations v, if JψKv = 1 for all ψ ∈ Γ, then
JφKv = 1.
² N+ = N \ {0}.
³ This doesn’t mean that there’s an algorithm that does that; this is a different, much
more complicated question.

The definition requires that under all valuations where all the premises
are true, also the conclusion is true. There’s nothing finitary going on
here: even if there are infinitely many premises, they can all be true
under a valuation.

7.4.5 Where it gets tricky is if we want to use truth-tables to determine
validity. Unfortunately, the method no longer works. Remember from
5.3.6 that in order to check whether φ1 , . . . , φn ⊨ ψ we did the truth-
table for (φ1 ∧ . . . ∧ φn ) → ψ (using result 5.2.16, that φ1 , . . . , φn ⊨ ψ
iff ⊨ (φ1 ∧ . . . ∧ φn ) → ψ, in the background). But if our premises are
infinite, this idea breaks down. It is not possible to form a conditional
with an infinite conjunction of premises as the if-part. And even if we
could (which we can in infinitary logic), we’d still have the problem
that we can’t always write down the truth-table for such a conditional:
if we have infinitely many premises, we can have infinitely many sen-
tence letters involved, and as we remarked in 5.1.4, we simply can’t
list all the possible distributions of truth-values on an infinite set of
sentence letters. So, once we “go infinitary,” truth-tables are out.

7.4.6 Fortunately, tableaux are not. We’ll now describe how the tableau
method works for checking whether Γ ⊨ φ when Γ is infinite. For
our algorithm in 6.3.2, we began by writing down Γ ∪ {¬φ} as the
initial list. This no longer works, since Γ is infinite. So, instead, in our
first step, what we’re going to do is to write down ¬φ as our initial
list. Then, without considering the premises for now, we simply repeat-
edly apply the tableau rules to make the complete tableau for ¬φ. If
this tableau closes, we can already declare that Γ ⊢ φ, since if the
tableau for ¬φ closes, this means that φ is valid, i.e. ∅ ⊨ φ, and by
Monotonicity (cf. 5.2.6) it follows that Γ ⊨ φ. So, let’s continue assum-
ing that the tableau for ¬φ doesn’t close. Now, it will be important
that we can write Γ as {ψi : i ∈ I}, where I ⊆ N+, or, more trans-
parently, as {ψ1 , ψ2 , . . .}. What we’re going to do next is to write ψ1
into the initial list, which so far only contained ¬φ. Then we repeat-
edly apply the rules again to the new tableau. If the tableau closes,
we declare that Γ ⊢ φ, since we’ve just shown that ψ1 ⊢ φ (we did the
tableau for {ψ1 , ¬φ}), and by Monotonicity, ψ1 , ψ2 , . . . ⊢ φ follows.
If the tableau still doesn’t close, we repeat this procedure with ψ2 ,
then with ψ3 , and so on. If at any step the tableau closes, we declare
Γ ⊢ φ. If we continue going through the ψi ’s and the tableau never
closes, we declare Γ ⊬ φ.
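The procedure just described can be sketched in Python. One caveat: as a stand-in for “the tableau for {ψ1 , . . . , ψn , ¬φ} closes,” the sketch uses a brute-force check over valuations (the two tests agree by soundness and completeness); the premise stream, the letter indices, and the max_steps cutoff are my own illustrative choices.

```python
from itertools import product

def entails(sigma, phi, letters):
    """Finite check, standing in for a closed tableau: Σ ⊨ φ iff no valuation
    over `letters` makes every member of Σ true and φ false."""
    for bits in product((0, 1), repeat=len(letters)):
        v = dict(zip(letters, bits))
        if all(psi(v) == 1 for psi in sigma) and phi(v) == 0:
            return False
    return True

def infinitary_check(premise_stream, phi, letters, max_steps):
    """Add ψ1, ψ2, ... one by one; report the step at which the finite
    prefix suffices. Without a cutoff this loop could run forever."""
    sigma = []
    for step, psi in enumerate(premise_stream, start=1):
        if step > max_steps:
            return None            # gave up; NOT a verdict of invalidity
        sigma.append(psi)
        if entails(sigma, phi, letters):
            return step
    return None

# Premise stream p1, p1 → p2, p2 → p3, ... (→ encoded as max(1 - x, y)):
def stream():
    yield lambda v: v[1]
    i = 1
    while True:
        yield (lambda i: lambda v: max(1 - v[i], v[i + 1]))(i)
        i += 1

# p3 follows once the first three premises are on the list:
print(infinitary_check(stream(), lambda v: v[3], [1, 2, 3, 4], 10))  # 3
```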

7.4.7 Note that if Γ ⊬ φ, we might never actually get the job done—we might
continue checking for an infinite amount of time (well, not actually, but
we could continue indefinitely). But this doesn’t mean that we can’t
show that Γ ⊬ φ. It could happen, for example, that at some point
we can prove that the tableau will never close. This would require an
insight concerning the structure of the premise set. For example, sup-
pose Γ is the set {¬ · · · ¬ p (n times ¬) : n ∈ N}.⁴ If the tableau for
Γ ∪ {¬φ} doesn’t close after the first step, i.e. the tableau for {p, ¬φ},
we can already see that the tableau will never ever close: after the ini-
tial step, all that our procedure will do is add a new version of ¬¬p to
the initial list, apply the rule for ¬¬p to give us a new node with p,
then add ¬¬¬¬p to the list, which gives us a new node with ¬¬p and
then a new node with p, and so on. Basically, the only new nodes with
sentence letters we can ever get from the premise set Γ contain p; and
so if the tableau for {p, ¬φ} doesn’t close, it will never close.

7.4.8 A consequence of the possibility that a tableau will never close is
that we will not get decidability for inferences with infinite premise
sets. Remember from the introduction (§1.4) that for decidability, we
require an algorithm with finite run time. And it might just happen
that the algorithm just described keeps on running forever. This is
bad, since we’ll never be able to know whether the algorithm keeps
running because the tableau hasn’t closed yet or because it never will.
So, we can’t use our algorithm to effectively determine whether a given
infinitary inference is valid.

7.4.9 It is, however, still possible to prove soundness and completeness for
the infinitary tableau method just described, though we won’t do this
here.⁵ This might be surprising, but it’s important to see that de-
cidability and soundness and completeness are not the same thing.
That we still get soundness is not so surprising, in fact. Note that
for soundness, what matters is that if our algorithm says that a set is
unsatisfiable, then it is unsatisfiable. But the algorithm saying that a
set is unsatisfiable just means that the tableau closes. And if a tableau
closes, it closes after finitely many steps (there will have to be a point
at which all branches have been closed). What might be a bit more
surprising is that we still get completeness: what we needed to show
for completeness is that the associated valuation for a branch is faith-
ful to that branch. But what if the branch is infinite? Well, note that
the definition of vB actually doesn’t require that B is finite (6.3.5):

    vB (p) := 1 if p ∈ B, and vB (p) := 0 if p ∉ B.

This definition works perfectly fine even if B is infinite. The rest is
just details (albeit infinitary details).
⁴ Or more precisely, the smallest set X such that p ∈ X and if φ ∈ X, then ¬¬φ ∈ X.
⁵ Basically, the argument is the same as in the finitary case we discussed above; we just
need to be a bit careful with the infinities that might occur.

7.4.10 We will conclude the chapter by proving an interesting theorem, which
relies on the soundness and completeness of the infinitary method but,
in a sense, makes the method obsolete:

Theorem (Compactness). Let Γ be an infinite set of formulas. If
Γ ⊢ φ, then there exists a finite set Σ ⊆ Γ such that Σ ⊢ φ.

Proof. As always, we assume that Γ can be written {ψi : i ∈ I} for
some I ⊆ N+. Now suppose that Γ ⊢ φ. This means that the tableau
for the infinitary inference closes at some point. Suppose that the
last formula we added to the initial list is ψn . This means that the
tableau for {ψ1 , . . . , ψn , ¬φ} closes (so far, all we did is construct
the tableau for this set). But that just means that ψ1 , . . . , ψn ⊢ φ.
Since {ψ1 , . . . , ψn } ⊆ Γ, our claim holds.

The compactness theorem states that if there is a proof from an infinite
premise set, then there’s always a proof from a finite subset. This is
encouraging: we don’t really need the infinitary methods just described
to prove the things we want to prove.⁶ Well, the theorem will become
really interesting in first-order logic. For now, we rest content with
having achieved its proof for propositional logic.

7.5 Core Ideas

• By proving soundness and completeness we’re reducing validity to syn-
tax.

• The soundness theorem is a kind of sanity check for a proof system
and typically easier to prove.

• The completeness theorem is a surprising mathematical fact and typ-
ically harder to prove.

• In the case of tableaux, the soundness theorem relies on the fact that
in every rule, if the upper formula is true, then at least one of the
lower formulas is true (‘down preservation’).

• In the case of tableaux, the completeness theorem relies on the fact
that if one of the lower formulas in a rule is true, so is the upper
formula (‘up preservation’).
⁶ In light of this result, my claim from 7.4.1 that PA can’t be finitely axiomatized might
be confusing. But note that all we’ve shown is that for every number-theoretic fact there’s
a finite set of premises that suffices to show it. But for two different facts, these sets of
premises might be different. In fact, if we want to have a set of premises that allows us to
derive all number-theoretic facts, this set needs to be infinite.

ˆ Truth tables don’t work for infinitary premise sets but tableaux do.

ˆ Compactness tells us that if there’s a proof, we can always find a


finitary one.

7.6 Self Study Questions


7.6.1 Which of the following entails that your proof system is unsound?
(Assume that you’re trying to develop a proof system for classical
logic).

(a) There is a valid inference in which you can’t derive the conclusion
from the premises.
(b) There is an invalid inference in which you can derive the conclusion
from the premises.
(c) There is a set of premises from which you can both derive a formula
and its negation.
(d) There is a satisfiable set of premises from which you can derive
every formula whatsoever.

7.6.2 Which of the following entails that your proof system is incomplete?
(Assume that you’re trying to develop a proof system for classical
logic).

(a) There is a valid inference in which you can’t derive the conclusion
from the premises.
(b) There is an invalid inference in which you can derive the conclusion
from the premises.
(c) There is a set of premises from which you can’t derive any formula
whatsoever.
(d) There is a set of premises from which you can both derive a formula
and its negation.

7.7 Exercises
7.7.1 Check the remaining cases of Proof 7.2.4.

7.7.2 Check the remaining cases of Proof 7.3.2.

7.7.3 Let’s say that a set of formulas Γ is proof-theoretically inconsistent
iff there exists a formula φ such that Γ ⊢ φ and Γ ⊢ ¬φ. Correspond-
ingly, we say that Γ is proof-theoretically consistent iff there is no
formula φ such that both Γ ⊢ φ and Γ ⊢ ¬φ.

(a) Use the soundness theorem to derive that every proof-theoretically
inconsistent set is unsatisfiable.
(b) Use the completeness theorem to derive that every proof-theoretically
consistent set is satisfiable.

7.7.4 Let Γ be a set of formulas such that there exists a formula φ with
Γ ⊬ φ. Use the completeness theorem to conclude that Γ is satisfiable.

7.7.5 Suppose that {φ} is satisfiable and φ ⊢ ψ. Use the soundness theorem
to conclude that ψ ⊬ ¬φ.

Self Study Solutions

7.6.1 (b), (d)

7.6.2 (a), (c)
Part III

First-Order Logic

Chapter 8

Syntax of First-Order Logic

We now begin with the study of first-order logic. As you’ll see, things will
be moving a bit faster. One important thing: I assume that the ideas and
methods of Part II are clear at this stage. So, we will not cover the same
issues in the same level of detail as in propositional logic just for first-order
logic. For example, I now assume that you know how an inductive definition
works. This allows us to focus on the more interesting aspects of first-order
logic.

8.1 First-Order Languages


8.1.1 Remember from the introduction that in first-order logic, we deal with
inferences involving the quantifiers “for all” and “there exists.” We
looked at the following two examples:

(3) This ball is scarlet and everything that’s scarlet is red. So, this
ball is red.
(4) The letter is in the left drawer. So there is something in the left
drawer.

Note that the kind of abstraction we did in propositional logic—
abstracting away from concrete sentences via sentence letters—will
not work for these kinds of inferences. Take (4), for example. If we
abstract from the concrete sentences, we get the following argument
form:

    p ∴ q,

where p stands for “the letter is in the left drawer” and q stands for
“there is something in the left drawer.” Clearly, though, p ⊭ q (just let
v(p) = 1 and v(q) = 0); so, according to propositional logic, inference
(4) is invalid, which is clearly absurd.

CHAPTER 8. SYNTAX OF FIRST-ORDER LOGIC 180

The moral of the story is that in predicate logic, the internal struc-
ture of the sentences is important. When all we care about are the
sentential connectives, abstracting sentences to sentence letters
is fine; but when we care about claims like “there is something in the
left drawer,” this abstraction is too coarse-grained. We need to
abstract less.

8.1.2 The first step towards first-order logic is to take into account the
grammatical structure of simple sentences. A traditional way of look-
ing at this structure (which is in essence still alive in linguistic syntax
theory) is to consider the term-predicate structure of sentences. A
sentence like “the ball is scarlet” says of a thing, the ball, that it
has a property, being scarlet. The sentence talks about the ball via
the singular term “the ball” and it talks about being scarlet using
the predicate “. . . is scarlet.” Now, just like we abstracted away from
sentences via sentence letters in propositional logic, in first-order logic,
we abstract away from terms and predicates via term symbols and
predicate symbols. The rationale is that just like in propositional logic
the concrete sentences didn’t matter for validity, in first-order logic,
the concrete terms and predicates don’t matter. To see this, consider
the following inferences:

(3’) This train is slow and everything that’s slow is yellow. So, this
train is yellow.
(4’) The dog is in the car. So there is something in the car.

Both of these inferences are valid, just like their counterparts (3) and
(4). Clearly whatever terms and predicates you fill in here, the infer-
ences remain valid. So, we can abstract away from them.

8.1.3 In first-order logic, we represent terms using term symbols, typically
denoted t, u, v, . . . . There are actually several different kinds of terms.
In first-order logic, we distinguish three:

(i) proper names, like “Johannes” or “Angela”
(ii) pronouns, like “he,” “she,” and “it”
(iii) functional terms, like “the birthplace of Ada Lovelace”

These classifications, just like the examples, stem from natural lan-
guage. But it’s fruitful to compare these categories to expressions
from mathematese that we’ve discussed in §2. In mathematics, con-
stants are proper names, variables are pronouns, and functions are
. . . well . . . functions. In fact, this terminology is the very same we
use for the corresponding syntactic categories in our formal language
for first-order logic. The reason for this is the origin of first-order logic
as the study of mathematical reasoning. We typically use constants
a, b, c, . . . to abstract from proper names, we use variables x, y, z, . . .
to abstract from pronouns, and we use function symbols f, g, h, . . . to
abstract from functional expressions.

8.1.4 It’s worth thinking about functional expressions in natural language


for a moment. Take the above example “the birthplace of Ada Lovelace.”
This is a singular terms that refers to a place, namely London. The
structure of the expression is that a functional expression “the birth-
place of . . . ” is applied to another term, the proper name “Ada Lovelace.”
Correspondingly, in first-order logic, we take the structure of “the
birthplace of Ada Lovelace” to be:

f (a),

where f stands for “the birthplace of . . . ” and a stands for Ada


Lovelace. Note that there are conditions for us to understand a nat-
ural language expression as a function: they are the conditions for
function-hood we discussed in 3.6.5. The expression “the birthplace
of . . . ” expresses a function since everybody has a birthplace and no
person has two birthplaces (here we’re assuming that only names for
people can be put in for the “. . . ”). Note that function expressions
in natural language, just like in mathematics, can come with different
arities. For example, the expression “the last common ancestor (LCA)
of . . . and ” is a function expression in natural language that refers
to the last person that both . . . and directly descend from. It is
generally believed that for any pair of humans there is an LCA and
no two people can have more than one LCA. So, the expression “the
last common ancestor of Alan Turing and Ada Lovelace” refers to one
and only one individual, we just don’t know which one. Formally, we’d
write it like this:
f (a, b),
where f stands for “the last common ancestor (LCA) of . . . and ,”
a for Alan Turing and b for Ada Lovelace.
Note that function expressions can be combined to make more compli-
cated function symbols. We can talk, for example, about the “the LCA
of Ada Lovelace and the LCA of Alan Turing and Angela Merkel,”
which we’d formally write as

f (a, f (b, c)),

where c stands for Angela Merkel. This is not really different from what
happens in mathematics, when we combine mathematical operations,
as in:
(n + m) · 2

exp(n + m)
(a · b) + (c · d)

8.1.5 In first-order logic, we represent predicates using predicate symbols,


typically P, Q, R, . . . . So, the logical structure of “the ball is scarlet”
according to first-order logic is:

P (t),

where t stands for the ball and P for the predicate “. . . is scarlet.” The
predicate “is scarlet” is what’s called a unary predicate: it expresses a
property of one object. There are also predicates with higher arities:
“. . . is in ” is a binary predicate or relation symbol ; “. . . lies between
and —” is a ternary predicate; and so on for n-ary predicates. An n-
ary predicate expresses that a certain relation holds between n objects,
each denoted by a term. So, the general form of a simple sentence in
first-order logic is:
R(t1 , . . . , tn ),
where R is an n-ary predicate symbol and t1 , . . . , tn are terms. The
sentence “the letter is in the left drawer,” for example, is formalized
as
R(t, u),
where t stands for the letter, u for the left drawer, and R for the binary
predicate “. . . is in .”

8.1.6 Now, the really interesting part of first-order logic is the quantifiers:
they are what gives the logic its expressive strength. In first-order
logic, as we already hinted at, we consider two quantifier expressions:
“for all” and “there exists” (and synonymous expressions like “every,”
“some,” . . . ). Our inferences (3) and (4) provide examples of how these
expressions are used in natural language. Let’s briefly talk about how
we treat them in the language of first-order logic. In first-order logic,
we use the quantifiers with the help of variables to formalize general
claims, like “everything that’s scarlet is red” or “there is something
in the left drawer.” Remember that an important role of variables in
mathematics is to allow us to make general claims about mathematical
objects (cf. 2.2.6 and 2.2.7). In first-order logic, the variables play the
same role. Here’s how. Take the claim:

• Everything that’s scarlet is red.

In first-order logic, we take the underlying logical structure of the


expression to be:

• Every object is such that if it is scarlet, then it is red.



In any case, clearly, the two sentences are equivalent: if the one is
true, then so is the other and vice versa. So, logically, they should be
treated as the same. Now note that “object,” and “it” are indefinite
terms, they refer to one arbitrary but fixed object. They are, essen-
tially, variables! So, in half-formal terms, the structure of our sentence
is:

• Every x is such that if x is scarlet, then x is red.

We’ve already talked about how we abstract from predicates in first-


order logic, and the “if . . . , then ” is treated just like in proposi-
tional logic, so the only interesting thing is the expression “every.” This
is where the quantifiers come in. We use the universal quantifier ∀ as
our abstract representation of phrases like “every,” “for all,” “each,”
and so on. So, ultimately, the form of our example in first-order logic
is
∀x(S(x) → R(x)),
where S stands for the predicate “. . . is scarlet” and R stands for “. . . is
red.” So, putting it all together, inference (3) gets formalized as

S(a), ∀x(S(x) → R(x)) ∴ R(a),

where a is a constant for the ball.

8.1.7 In order to formalize our inference (4), we need to talk about “there
exists,” “there is,” “some,” and so on. In first-order logic, we abstractly
represent these expressions by the existential quantifier ∃. Otherwise,
the idea is the same as in “for all.” So, from “there is something in
the left drawer” we get via “there is some object such that it is in the
left drawer” to “there is some x such that x is in the left drawer,” and
finally reach
∃xR(x, b),
where R stands for the predicate “. . . is in ” and b is a constant for
the left drawer. The whole inference (4) therefore becomes

R(a, b) ∴ ∃xR(x, b),

where additionally a is a constant for the letter.

8.1.8 Those are the basic ideas of first-order languages. To sum up, let’s
put our new vocabulary in a table together with names and intended
reading:

Symbol Name Reading



a, b, c, . . . Constants Proper names, math. constants


x, y, z, . . . Variables Pronouns, math. variables
f, g, h, . . . Function symbols Functional expressions
P, Q, R, . . . Predicate symbols Predicates, properties, relations
∀ Universal quantifier every, for all, each, . . .
∃ Existential quantifier there exists, there is, for some, . . .

We’ll now make the syntax of first-order logic formally precise. At the
end of the chapter, we’ll talk about formalization in first-order logic.

8.2 Terms and Formulas


8.2.1 Remember that in order to define a formal language, L,1 we need to
specify its vocabulary and its grammar (4.1.4). In first-order logic,
it’s standard to package the non-logical vocabulary together in what’s
called a “signature.” A signature is a structure S = (C, F, R, ar) such
that:

(i) C is a set of constant symbols


(ii) F is a set of function symbols
(iii) R is a set of predicate symbols
(iv) ar : F ∪ R → N is a function that assigns to each function and
predicate symbol a fixed natural number, its arity

So, a signature gives us the constants, functions, and predicates that


we have available in our language L. Additionally, via the function
ar, the signature also tells us the arity of our function and predicate
symbols, which we can’t just “read off” the symbols. Note that just
like every set of sentence letters determined a different propositional
language, in first-order logic, every signature determines a different
first-order language.

8.2.2 Strictly speaking, the arity function ar is always part of the signature.
However, it’s a bit annoying to always have it around since its function
is purely auxiliary. We’ll therefore introduce the notational convention
that for R ∈ R, Rn means that R is such that ar(R) = n, and similarly
for f ∈ F, f n means that f is such that ar(f ) = n. This allows us
to drop the arity function from the specification of a signature and
still record all the information it provides. Note well, however, that
the expression Rn is not a symbol of our language, only R is. The
notation Rn is purely suggestive.
1
In this chapter, L will stand for a fixed but arbitrary first-order language.

8.2.3 Examples:

(i) The signature of LP A , the language of arithmetic, is

SP A = ({0}, {S 1 , +2 , ·2 }, ∅).

Note that the language of arithmetic doesn’t have any predicate


symbols in its signature, which is perfectly fine.
(ii) A real extreme example (!) is the empty signature S∅ = (∅, ∅, ∅).
This signature determines the language L∅ of pure first-order
logic.
(iii) Set theory, too, can be looked at from the perspective of first-
order logic. The signature of the language of set-theory L∈ is
defined as
S∈ = ({∅}, ∅, {∈2 }).
It has only the binary predicate ∈.
(iv) A less concrete example: S = ({a, b, c}, {f 1 , g 2 }, {P 1 , R2 }) has
three constants: a, b, c; two function symbols: the unary func-
tion symbol f and binary function symbol g; and two predicate
symbols: the unary P and the binary R.

In the following, we’ll always assume that we’re dealing with some
arbitrary but fixed signature S = (C, F, R).

8.2.4 The logical vocabulary of every first-order language, however, is the


same. It consists of:

(i) the set of variables: V = {x, y, z, . . .}2


(ii) the sentential operators: ¬, ∧, ∨, →, ↔
(iii) the identity predicate: =
(iv) the quantifiers: ∀, ∃
(v) the parentheses: (, ).

The only symbol in the logical vocabulary we haven’t discussed so far
is the identity predicate =. The purpose of this symbol is relatively
clear: it expresses the predicate “. . . is identical to .” “But why is it
among the logical vocabulary and not in the signature?” you ask.
Good question! This has to do with the fact that we treat identity
as a distinguished, logical concept. What this precisely means will
become clear only in the next lecture. But for now you can already see
2
In the following, we shall always assume that we have a fixed set of infinitely many
variables at our disposal. We don’t care so much about how they are written: u, v, w, . . .
is equally fine as x1 , x2 , x3 , . . ..

one aspect of it. Remember that when we abstract natural language


expressions into first-order formulas, we abstract away from the con-
crete predicates in question (8.1.2): “the letter is in the left drawer”
becomes R(a, b). With identity, we won’t play this game: identity al-
ways gets formalized as identity. So, “the birthplace of Ada Lovelace
is identical to the birthplace of Alan Turing” gets formalized as

f (a) = f (b),

where a stands for Ada Lovelace, b for Alan Turing, and f for “the
birthplace of . . . .” Note that we don’t need to say what = stands for,
this is clear! We’ll talk about identity some more when we talk about
formalization.
8.2.5 Remember that above (8.1.4) we mentioned that function expressions
can be nested, i.e. used within one another. This necessitates that we
give a recursive definition of the terms of our language, which is what
we’ll do next. The set T of terms is recursively defined as the smallest
set X such that:
(i) (a) V ⊆ X
(b) C ⊆ X
(ii) If t1 , . . . , tn ∈ X and f n ∈ F, then f (t1 , . . . , tn ) ∈ X
In words, the terms are all the variables and constants plus the func-
tional combinations of those.
8.2.6 Examples:

(i) In SP A , we have, for example,

S(0), S(S(0)), . . . ∈ T

S(x), S(y), +(0, 0), ·(0, 0), ·(S(x), +(0, 0)), . . . ∈ T


We’ll introduce some notational conventions in SP A :
(a) We write n instead of S(. . . S(0) . . .), with n applications of S.
So, for example, 1 is short for S(0), 2 is short for S(S(0)), and
so on. This allows us to write more natural expressions, like
·(2, 2), which officially is ·(S(S(0)), S(S(0))).
(b) To make things even more natural, we allow ourselves to
use “infix notation” with + and ·, i.e. instead of the official
·(2, 4), we allow ourselves to write (2 · 4). The parentheses,
however, are necessary, since we want to be able to disam-
biguate between ((2 · 4) + 1), which intuitively denotes 9, and
(2 · (4 + 1)), which intuitively denotes 10.

(ii) The only terms in the empty signature S∅ are the variables
x, y, z, . . . .
(iii) The only terms in the signature S∈ are ∅ and the variables.
(iv) Assuming that f 2 ∈ F, we have

f (x, x), f (x, y), f (y, x), f (y, y), f (f (x, y), f (y, x)), . . . ∈ T
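The numeral convention for SP A from example (i), writing n for the term S(. . . S(0) . . .) with n occurrences of S, can be mirrored by a small helper. This is a sketch in an assumed tuple encoding of terms, with ("S", t) standing for S(t); none of it comes from the text itself.

```python
def numeral(n):
    """Build the official term for the numeral n: S applied n times to 0."""
    t = "0"                     # the constant 0
    for _ in range(n):
        t = ("S", t)            # one more application of the successor symbol
    return t

print(numeral(2))   # ('S', ('S', '0')) — i.e., the official S(S(0))
```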

8.2.7 Note that in our function expressions, we don’t pay attention to the
idea of domains and ranges we painstakingly introduced in 3.6.4: for
any term t a unary function symbol f generates a new term f (t). It is
possible, and in some cases necessary, to develop a syntax theory for
terms which takes into account the domains and ranges of functions
by means of typing. This is, for example, important when we study the
logical foundations of programming languages, which often make use
of typing. In this course, however, we won’t pay that much attention to
functions, and therefore typing. Instead, we focus on the grammar of
formulas. Correspondingly, we allow ourselves the leisurely assumption
that every function is defined for every object.

8.2.8 Now, after everything we’ve said, it should be relatively clear what the
recursive syntax of L looks like. The set L of formulas is recursively
defined as the smallest set X such that:

(i) (a) If Rn ∈ R and t1 , . . . , tn ∈ T , then R(t1 , . . . , tn ) ∈ X.


(b) If t1 , t2 ∈ T , then t1 = t2 ∈ X
(ii) (a) if φ ∈ X, then ¬φ ∈ X
(b) if φ, ψ ∈ X, then (φ ◦ ψ) ∈ X for ◦ = ∧, ∨, →, ↔
(c) if φ ∈ X and x ∈ V, then Qxφ ∈ X for Q = ∃, ∀

The initial elements of X, the formulas given by (i.a–b), are also called
atomic formulas.
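The grammar can likewise be mirrored by a recursive checker, a sketch under the same assumed tuple encoding (English operator names like "forall" are choices of this sketch, not notation from the text). For brevity it leaves function symbols out of the terms.

```python
VARIABLES = {"x", "y", "z"}
CONSTANTS = {"a", "b"}
PREDICATES = {"P": 1, "R": 2}     # hypothetical predicate symbols with arities

def is_term(t):
    # Simplified: only variables and constants, no function symbols.
    return isinstance(t, str) and (t in VARIABLES or t in CONSTANTS)

def is_formula(phi):
    if not isinstance(phi, tuple) or not phi:
        return False
    op = phi[0]
    if op in PREDICATES:                                  # (i.a) R(t1, ..., tn)
        return len(phi) == PREDICATES[op] + 1 and all(map(is_term, phi[1:]))
    if op == "=":                                         # (i.b) t1 = t2
        return len(phi) == 3 and is_term(phi[1]) and is_term(phi[2])
    if op == "not":                                       # (ii.a) ¬φ
        return len(phi) == 2 and is_formula(phi[1])
    if op in ("and", "or", "->", "<->"):                  # (ii.b) (φ ∘ ψ)
        return len(phi) == 3 and is_formula(phi[1]) and is_formula(phi[2])
    if op in ("forall", "exists"):                        # (ii.c) Qxφ
        return len(phi) == 3 and phi[1] in VARIABLES and is_formula(phi[2])
    return False

# ∀x(P(x) → ∃y R(x, y))
phi = ("forall", "x", ("->", ("P", "x"), ("exists", "y", ("R", "x", "y"))))
print(is_formula(phi))               # True
print(is_formula(("P", "x", "y")))   # wrong arity — False
```

Each branch of the function corresponds to exactly one clause of the recursive definition, which is why the arity checks from the signature appear in the atomic cases.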

8.2.9 One quick notational convention. Instead of ¬t1 = t2 , we allow our-


selves to write t1 ≠ t2 .

8.2.10 Examples:

(i) Here are some formulas in LP A :

x = 10

S(x) = 44
2+2=4
1·1=0

∀xS(x) ≠ 0
((2 · 2) = 5 ∧ S(44) = 7)
∀x∀y(x ≠ y → S(x) ≠ S(y))
∀x∀y(S(x) = y + 1 → S(x) = S(y))
∀x∃yS(x) = y

Note that the only atomic formulas in LP A are equations, i.e.


formulas of the form t1 = t2 .
(ii) Here are some formulas in L∅ :

x=y
(x = y ∧ y ≠ z)
∀x∃y(x = y ∧ ∀zy ≠ z)
∀x∃yx ≠ y
∃x∃y(x ≠ y ∧ ∀z(z = x ∨ z = y))

(iii) Here are some formulas in L∈ :

∈(x, x)
∀x∈(∅, x)
¬∃x∈(x, ∅)
∀x(∈(x, y) → ∈(x, z))
∀x∀y(x = y ↔ ∀z(∈(z, x) ↔ ∈(z, y)))

In L∈ , to make things more natural, we allow ourselves the no-


tation (t ∈ u) instead of the official ∈(t, u). When no confusion
can arise, we shall sometimes omit the outermost parentheses.
Our examples become:
x∈x
∀x(∅ ∈ x)
¬∃x(x ∈ ∅)
∀x(x ∈ y → x ∈ z)
∀x∀y(x = y ↔ ∀z(z ∈ x ↔ z ∈ y))

This almost looks like the elementary set theory we studied in


§3. And, in fact, this is the language you use to develop formal
set-theory as a first-order theory.

(iv) Finally, I shall use the abstract signature S = ({a, b, c}, {f 1 , g 2 }, {P 1 , R2 })


to give some extreme examples:

R(f (g(a, b)), g(f (a), f (b)))


∀xP (f (a))
∀x(P (a) → ∃yR(x, y))
(∀xP (x) ∧ ∀xR(x, x))
∀x(P (x) ∨ ∀xR(x, x))
(P (x) ↔ ∀y∀xR(x, y))
All of these are perfectly fine formulas of first-order logic.

8.2.11 How do we prove that an expression σ is not a formula? Well, just like
in propositional logic (cf. 4.1.9). Once we’ve introduced parsing trees
in the following section, it will, in fact, be relatively straightforward
to adapt our algorithm from 4.3.10 to first-order logic. But we shall
not do this explicitly; rather, it will be left as an exercise (8.9.4). More
generally, in first-order logic, we shall not focus so much on the more
nit-picky details of syntax, like proving that an expression is a formula
or not. We have bigger fish to fry.

8.2.12 Since we’re not so much concerned with very detailed syntax, we can
already introduce the notational conventions here. Actually, they are
precisely the same as in propositional logic (cf. §4.5). To see that we
can’t leave out parentheses with the quantifiers, let’s quickly look at
one example. In the expression ∀x(P (x) → S(a)), we really cannot
leave out any parentheses. If we did, we’d get ∀xP (x) → S(a),
which is very different from ∀x(P (x) → S(a)). To see the difference,
let’s interpret P as the predicate “. . . passes,” S as “. . . is surprised,”
and a as denoting me. Under this reading, ∀xP (x) → S(a) says that
if everybody passes, then I’m surprised. This is true—though, I’d be
positively surprised. The formula ∀x(P (x) → S(a)), instead, says that
for every individual student, if that student passes, I’m surprised. This
is certainly false: I believe in every single one of you.3

8.2.13 Also the topic of proof by induction we can handle rather quickly. Since
the set of formulas is defined recursively, proof by induction works just
like in propositional logic: we show that all formulas have a property
by showing that the atomic formulas have the property and that it’s
preserved under the constructions:

Theorem. Let Φ be a condition on formulas. If we can show:


3
Though it would be enough already for this to be false that I believe in one of you. . .

(i) (a) Φ(R(t1 , . . . , tn )) for all Rn ∈ R and t1 , . . . , tn ∈ T


(b) Φ(t1 = t2 ) for t1 , t2 ∈ T
(ii) (a) For all φ ∈ L, if Φ(φ), then Φ(¬φ).
(b) For all φ, ψ ∈ L, if Φ(φ) and Φ(ψ), then Φ((φ ◦ ψ)), for
◦ = ∧, ∨, →, ↔.
(c) For all φ ∈ L, if Φ(φ), then Φ(Qxφ) for Q = ∀, ∃.

Then we can conclude that for all φ ∈ L, Φ(φ).

Proof. Completely analogous to the proof of Theorem 4.2.1 (induction


in propositional logic).

8.2.14 Note that also the set T is inductively defined, so we can also prove
things about terms by means of induction:

Theorem. Let Φ be a condition on terms. If we can show:

(i) (a) Φ(a) for all a ∈ C


(b) Φ(x) for all x ∈ V
(ii) For all f n ∈ F and t1 , . . . , tn ∈ T : if Φ(t1 ), . . . , Φ(tn ), then Φ(f (t1 , . . . , tn )).

Then we can conclude that for all t ∈ T , Φ(t).

Proof. Just like induction over formulas.

8.2.15 Similarly, function recursion works standardly: the sets of terms and
formulas are inductively defined, hence we can use function recursion
on them. The pattern is the same as always: give the value for the
initial elements and say how to calculate the value for more complex
elements based on the values of their components. As an example, let’s
generalize the notion of complexity from propositional logic (4.4.4) to
first-order logic. First, we define a complexity c : T → N for terms by
saying:

(i) (a) c(a) = 0 for all a ∈ C


(b) c(x) = 0 for all x ∈ V
(ii) c(f (t1 , . . . , tn )) = max(c(t1 ), . . . , c(tn )) + 1 for all f n ∈ F and
t1 , . . . , tn ∈ T

Using the function on terms, we define a complexity function c : L → N


on formulas by saying:

(i) (a) c(R(t1 , . . . , tn )) = max(c(t1 ), . . . , c(tn )) for all Rn ∈ R and


t1 , . . . , tn ∈ T
(b) c(t1 = t2 ) = max(c(t1 ), c(t2 )) for t1 , t2 ∈ T

(ii) (a) c(¬φ) = c(φ) + 1


(b) c(φ ◦ ψ) = max(c(φ), c(ψ)) + 1, for ◦ = ∧, ∨, →, ↔.
(c) c(Qxφ) = c(φ) + 1 for Q = ∀, ∃.

As you can see, things work really analogously to the way they work
in propositional logic.
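As an illustration of function recursion, the two complexity functions (on terms and on formulas) can be written as a single recursion in the assumed tuple encoding used in the earlier sketches; the symbol sets below are hypothetical.

```python
FUNCTIONS = {"f", "g", "S"}
PREDICATES = {"P", "R"}
CONNECTIVES = {"and", "or", "->", "<->"}
QUANTIFIERS = {"forall", "exists"}

def c(e):
    if isinstance(e, str):                    # c(a) = c(x) = 0
        return 0
    op = e[0]
    if op in FUNCTIONS:                       # c(f(t1,...,tn)) = max(...) + 1
        return max(c(t) for t in e[1:]) + 1
    if op in PREDICATES or op == "=":         # atomic formulas: max, no +1
        return max(c(t) for t in e[1:])
    if op == "not":                           # c(¬φ) = c(φ) + 1
        return c(e[1]) + 1
    if op in CONNECTIVES:                     # c(φ∘ψ) = max(c(φ), c(ψ)) + 1
        return max(c(e[1]), c(e[2])) + 1
    if op in QUANTIFIERS:                     # c(Qxφ) = c(φ) + 1; e = (Q, x, φ)
        return c(e[2]) + 1
    raise ValueError(f"not a term or formula: {e!r}")

# ∀x(P(x) → ¬P(f(x))): f(x) has complexity 1, so the whole formula has 4.
phi = ("forall", "x", ("->", ("P", "x"), ("not", ("P", ("f", "x")))))
print(c(phi))   # 4
```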

8.3 Parsing Trees and Occurrences


8.3.1 In this section, we develop the notion of a parsing tree for formulas
in first-order logic. Since all the constructions of propositional logic
are also constructions of first-order logic, we actually just have to add
new clauses to Definition 4.3.5. Well, that’s not exactly true. Note
that, in contrast to sentence letters in propositional logic, also the
atomic formulas of first-order logic, like R(a, g(x, f (a))), have an in-
ternal structure that we need to parse. We therefore begin by defining
the notion of a parsing tree for terms.

8.3.2 We recursively define the function T that assigns to each term t ∈ T


its parsing tree as follows:

(i) T (α) = α for all α ∈ V ∪ C

f (t1 , . . . , tn )
(ii) T (f (t1 , . . . , tn )) =
T (t1 ) . . . T (tn )

8.3.3 Examples:

(i) Here’s the parsing tree for the term 4 = S(S(S(S(0)))) in LP A :

S(S(S(S(0))))

S(S(S(0)))

S(S(0))

S(0)

0

(ii) Here’s the parsing tree for (2 · (2 + 1)) = (S(S(0)) · (S(S(0)) +


S(0)))

(S(S(0)) · (S(S(0)) + S(0)))

S(S(0)) (S(S(0)) + S(0))

S(0) S(S(0)) S(0)

0 S(0) 0

0

(iii) One last, more abstract example:

f (g(a, b))

g(a, b)

a b

8.3.4 We can now give the general, recursive definition of a parsing tree for
a formula φ ∈ L:

R(t1 , . . . , tn )
(i.a) T (R(t1 , . . . , tn )) =
T (t1 ) . . . T (tn )

t1 = t2
(i.b) T (t1 = t2 ) =
T (t1 ) T (t2 )

¬φ
(ii.a) T (¬φ) =
T (φ)

(φ ◦ ψ)
(ii.b) T ((φ ◦ ψ)) = for ◦ = ∧, ∨, →, ↔
T (φ) T (ψ)

Qxφ
(ii.c) T (Qxφ) = for Q = ∀, ∃
T (φ)

8.3.5 Example. Since the idea of how to do parsing trees should be clear by
now, we do just one, more involved example:

∀x(R(x, g(f (a), f (b))) → ∃yR(y, y))

R(x, g(f (a), f (b))) → ∃yR(y, y)

R(x, g(f (a), f (b))) ∃yR(y, y)

x g(f (a), f (b)) R(y, y)

f (a) f (b) y y

a b

8.3.6 At this point, we simply remark that it is now a straight-forward ex-


ercise to prove a Unique Readability Theorem for terms and formulas
in first-order logic along the lines of the proof we gave in 4.3.8. Stating
and proving this theorem is a useful exercise.

8.3.7 In the following, we shall need, for technical reasons, the notion of a
stripped parsing tree. The stripped parsing tree of a formula is the re-
sult of leaving only the main operator on each node when we’re doing
the ordinary parsing tree. Here is an explicit, recursive definition of
stripped parsing trees for terms and formulas:

Terms:

(i) TS (α) = α for all α ∈ V ∪ C

f
(ii) TS (f (t1 , . . . , tn )) =
TS (t1 ) . . . TS (tn )

Formulas:

R
(i.a) TS (R(t1 , . . . , tn )) =
TS (t1 ) . . . TS (tn )

=
(i.b) TS (t1 = t2 ) =
TS (t1 ) TS (t2 )

¬
(ii.a) TS (¬φ) =
TS (φ)


◦
(ii.b) TS ((φ ◦ ψ)) = for ◦ = ∧, ∨, →, ↔
TS (φ) TS (ψ)

Qx
(ii.c) TS (Qxφ) = for Q = ∀, ∃
TS (φ)

In a sense, the information provided by the stripped parsing tree is


the same as the information provided by the ordinary parsing tree:
they both tell us how the formula was constructed. In fact, it’s easy
to write an algorithm that translates an ordinary parsing tree into a
stripped one and vice versa (exercise?). The difference between the
two concepts is a difference in focus: while the ordinary parsing tree
focuses on the information from which sub-formulas a formula was
constructed, the stripped parsing tree focuses on the operations used
along the way. Having easy access to this information will be useful
later on.

8.3.8 Example. Here’s the stripped parsing tree for Example 8.3.5:

∀x

→

R ∃y

x g R

f f y y

a b

8.3.9 In the following, we shall often need to access the information provided
by parsing trees. In order to be able to do so, we need to be able to
refer to the nodes in a given tree in a clear fashion. For this purpose,
we introduce the following (standard!) naming conventions for nodes
in trees. Let me introduce the idea by means of an example. Take the
tree:

•

•

• •

• • •

We then name nodes in this tree as follows:

• The root is always called r.



• The first child of the root is called (r, 1).
• The first child of (r, 1) is called (r, 1, 1).
• The second child of (r, 1) is called (r, 1, 2).
• ...

So, the nodes in our tree are:

r, (r, 1), (r, 1, 1), (r, 1, 2), (r, 1, 1, 1), (r, 1, 2, 1), (r, 1, 2, 2).

Note that there can be “empty” names, i.e. names that don’t denote any node. For example, the name
(r, 2, 1, 1) doesn’t denote a node in our tree.
Essentially, we name a node by giving the “directions” for someone
traveling along the edges through the nodes.4 Here is a more general,
recursive (!) definition of the name ⌜n⌝ of a node n in a tree:

• The name of the root is r.
• The name of the i-th child of node n is called (⌜n⌝, i).5
This definition allows you to step-by-step calculate the name of
a node in a tree.
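These “directions” can be read as a lookup procedure. In the sketch below (an assumed representation, not from the text), a tree is given just by its nested lists of children, and a name is the tuple of 1-indexed child positions after r.

```python
# The example tree from above: r has one child (r, 1), which has children
# (r, 1, 1) and (r, 1, 2); (r, 1, 1) has one child, (r, 1, 2) has two.
tree = [
    [
        [[]],        # (r, 1, 1) with its single child (r, 1, 1, 1)
        [[], []],    # (r, 1, 2) with children (r, 1, 2, 1) and (r, 1, 2, 2)
    ]
]

def node_at(children, path):
    """Follow the directions in path (the part of the name after r);
    return the subtree there, or None for an "empty" name that
    doesn't denote any node."""
    for i in path:
        if not 1 <= i <= len(children):
            return None
        children = children[i - 1]    # the names are 1-indexed
    return children

print(node_at(tree, (1, 2, 2)) is not None)   # (r, 1, 2, 2) exists: True
print(node_at(tree, (2, 1, 1)))               # no such node: None
```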

8.3.10 We shall now define the central notion of an occurrence of an expres-


sion in a formula. Intuitively, it’s pretty clear what it means for an
expression to occur in a formula: an expression occurs in a formula
if you need to write it down in order to write the formula. So, for
example, P (x) occurs in ∀x(P (x) ∨ R(x, x)). But for our intents and
purposes, this intuitive notion doesn’t provide enough information.
We need better access to the internal structure of the formula. We get
it, via the parsing tree. First, we shall give a precise definition of the
notion of an occurrence:

• An expression σ occurs in a term or formula τ iff σ is the label


of some node in the (ordinary or stripped) parsing tree of τ .

8.3.11 Let’s begin with some examples:

(i) The term S(S(0)) occurs in the term S(S(S(S(0)))), since it


labels the node (r, 1, 1) in the parsing tree:
4
For the interested: the naming of the nodes depends on how we draw the tree, but we’ll
ignore this complication.
5
For this to look nice, we have to “forget” a bunch of parentheses in ⌜n⌝ according to
the rule ((x, y), z) = (x, y, z).

S(S(S(S(0))))

S(S(S(0)))

S(S(0))

S(0)

0
(ii) The quantifier ∃y occurs in the formula ∀x(R(x, g(f (a), f (b))) →
∃yR(y, y)) since it labels the node (r, 1, 2) in the stripped tree:
∀x

→

R ∃y

x g R

f f y y

a b
(iii) The formula R(x, y) occurs twice in the formula ∀x(R(x, y) →
∃x∀y(R(y, x)∧¬R(x, y))), once at (r, 1, 1) and once at (r, 1, 2, 1, 1, 2, 1):
∀x(R(x, y) → ∃x∀y(R(y, x) ∧ ¬R(x, y)))

R(x, y) → ∃x∀y(R(y, x) ∧ ¬R(x, y))

R(x, y) ∃x∀y(R(y, x) ∧ ¬R(x, y))

x y ∀y(R(y, x) ∧ ¬R(x, y))

R(y, x) ∧ ¬R(x, y)

R(y, x) ¬R(x, y)

y x R(x, y)

x y

8.3.12 The fact that an expression can occur more than once requires us to
talk about occurrences of expressions. For many purposes we want to
be able to distinguish different occurrences of expressions, we want, for
example, to be able to distinguish the occurrence of R(x, y) at (r, 1, 1)
from the occurrence at (r, 1, 2, 1, 1, 2, 1). We do this by defining an
occurrence of an expression in a formula as the pair (n, σ) where n is
a node in the parsing tree labelled with σ.
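Using the assumed tuple encoding from the earlier sketches, occurrences in the sense of this definition can be enumerated by walking the expression while carrying the node name along. Matching the parsing-tree clauses, a quantifier node (Q, x, φ) counts as having the single child φ.

```python
QUANTIFIERS = ("forall", "exists")

def occurrences(e, name=("r",)):
    """Yield pairs (node name, labelling subexpression)."""
    yield name, e
    if isinstance(e, str):          # variables and constants label leaves
        return
    # Qx phi has the single child phi; every other compound expression
    # has all of its components as children.
    children = (e[2],) if e[0] in QUANTIFIERS else e[1:]
    for i, child in enumerate(children, start=1):
        yield from occurrences(child, name + (i,))

# Two occurrences of R(x, y) in forall x (R(x, y) -> not R(x, y)):
phi = ("forall", "x", ("->", ("R", "x", "y"), ("not", ("R", "x", "y"))))
print([n for n, e in occurrences(phi) if e == ("R", "x", "y")])
# [('r', 1, 1), ('r', 1, 2, 1)]
```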

8.3.13 Examples. Consider the formula ∀x(R(x, y) → ∃x∀y(R(y, x)∧¬R(x, y))).


We gave the ordinary parsing tree above. Here is the stripped one:

∀x

→

R ∃x

x y ∀y

∧

R ¬

y x R

x y

• The following are all the occurrences of logical operators in the


formula:
(r, ∀x)
((r, 1), →)
((r, 1, 2), ∃x)
((r, 1, 2, 1), ∀y)
((r, 1, 2, 1, 1), ∧)
((r, 1, 2, 1, 1, 2), ¬)

• The following are all the occurrences of predicate symbols in the


formula:
((r, 1, 1), R)
((r, 1, 2, 1, 1, 1), R)
((r, 1, 2, 1, 1, 2, 1), R)

• The following are all the occurrences of variables in the formula:

((r, 1, 1, 1), x)
((r, 1, 1, 2), y)
((r, 1, 2, 1, 1, 1, 1), y)
((r, 1, 2, 1, 1, 1, 2), x)
((r, 1, 2, 1, 1, 2, 1, 1), x)
((r, 1, 2, 1, 1, 2, 1, 2), y)

• The following are all the occurrences of formulas in the formula:

(r, ∀x(R(x, y) → ∃x∀y(R(y, x) ∧ ¬R(x, y))))


((r, 1), (R(x, y) → ∃x∀y(R(y, x) ∧ ¬R(x, y))))
((r, 1, 1), R(x, y))
((r, 1, 2), ∃x∀y(R(y, x) ∧ ¬R(x, y)))
((r, 1, 2, 1), ∀y(R(y, x) ∧ ¬R(x, y)))
((r, 1, 2, 1, 1), (R(y, x) ∧ ¬R(x, y)))
((r, 1, 2, 1, 1, 1), R(y, x))
((r, 1, 2, 1, 1, 2), ¬R(x, y))
((r, 1, 2, 1, 1, 2, 1), R(x, y))

8.4 Free and Bound Variables


8.4.1 Variables, just like pronouns, are tricky. They allow us to talk about
things in general, to say things like “everything that’s scarlet is red”
in a logically precise fashion; but, at the same time, it’s not always
easy to figure out what your variables refer to, especially when you
have more than one variable around. Take the formula ∀x∃yR(x, y),
for example. If we read R as “. . . is smaller than ” and assume that
we’re only talking about numbers, then this statement says that for
every number there is a number bigger than it. Now, how can you
see that the “it” refers to the first number that we talked about and
not the second? This information is encoded in the quantificational
structure of the formula: which quantifier talks about which variable.
In this section, we set out to understand this structure.
8.4.2 The concept that we’re trying to understand is that of a quantifier
capturing a variable. The idea is as follows. Take the formula:

∀x(P (a) ∧ ¬R(x, a)).

In this formula, the universal quantifier ∀x stands at the very begin-


ning, but clearly it “captures” the variable x later on in the formula.
So, if, for example, a stands for Socrates, P for “. . . is a philosopher,”
and R stands for “. . . is a friend of ,” then the formula should read:

everybody is such that Socrates is a philosopher and they are not a


friend of Socrates; or, more perspicuously, Socrates is a philosopher
and nobody is his friend. The point is this. For a quantifier to
capture a variable, x, clearly the quantifier needs to talk about the
variable, i.e. it needs to be of the form ∀x or ∃x. But it does not need
to stand immediately in front of the predicate the variable is applied
to, as in ∀xP (x); no, it can be farther away as in our example.
8.4.3 But is it enough for a quantifier to capture a variable that the quanti-
fier talks about the variable and occurs before it in the formula? The
answer is no. Take the formula ∀x(∃y¬R(x, y) ∨ (P (x) → R(c, y))).
Here are its ordinary and stripped parsing trees:

∀x(∃y¬R(x, y) ∨ (P (x) → R(c, y))) ∀x

(∃y¬R(x, y) ∨ (P (x) → R(c, y))) ∨

∃y¬R(x, y) (P (x) → R(c, y)) ∃y →

¬R(x, y) P (x) R(c, y) ¬ P R

R(x, y) x c y R x c y

x y x y

In this formula, there is an occurrence of the variable y at (r, 1, 2, 2, 2),


which is written in the formula after the quantifier ∃y at (r, 1, 1). But
intuitively, the quantifier does not bear the right relation to
the variable: when we look at the way the formula was constructed,
the variable y at (r, 1, 2, 2, 2) wasn’t even around when ∃y at (r, 1, 1)
was introduced. So how could it “capture” the variable if it wasn’t
even there yet?
A natural answer would be that a quantifier needs to be “connected”
to the variable in the right way, i.e. by means of a path in the parsing
tree. Take the occurrence of y at (r, 1, 1, 1, 1, 2). This variable was
around when ∃y at (r, 1, 1) was introduced. In fact, we can see this by
tracing a path from y at (r, 1, 1, 1, 1, 2) to ∃y at (r, 1, 1). This gives us
a second condition for a quantifier to capture a variable: in addition
to the quantifier talking about the variable, there needs to be a path
from the variable to the quantifier in the parsing tree. Is this enough?
8.4.4 It turns out, the answer is no! This might be surprising but there
is one phenomenon that we haven’t properly taken into account yet.
Take the formula ∀x(N (x) → ∃xR(x, x)). Here are the parsing trees:

∀x(N (x) → ∃xR(x, x)) ∀x

N (x) → ∃xR(x, x) →

N (x) ∃xR(x, x) N ∃x

x R(x, x) x R

x x x x

If we read N as “. . . is a natural number” and R as “. . . is smaller than


or equal to ,” then what this formula says is that for all numbers,
there is a number that is smaller than or equal to itself. This statement
is a bit artificial, but I hope the point is clear: the first quantifier ∀x
only talks about the first occurrence of x at (r, 1, 1, 1). The second
and third occurrence of x at (r, 1, 2, 1, 1) and (r, 1, 2, 1, 2) respectively,
are captured by ∃x at (r, 1, 2). You can see this in the way we read
the consequent of the conditional: ∃xR(x, x) says that there exists a
number that is smaller than or equal to itself. There is no remaining,
unclear pronoun which is there for ∀x to capture. This consideration
gives us the final condition for a quantifier to successfully capture a
variable: there cannot be another quantifier that captures the variable
first.

8.4.5 Putting this all together, we get to the following official definition of a
quantifier occurrence capturing or, as we typically say in logic, binding
a variable occurrence:

• Let (n, x) be an occurrence of a variable x in a formula φ and
  (m, Qy) an occurrence of a quantifier Q = ∀, ∃ in φ. Then, (n, x)
  is bound by (m, Qy) iff

  (i) x = y,
  (ii) there is a downwards path from m to n,
  (iii) this path from m to n does not go through a node k such
        that (k, Q′x) is an occurrence of a quantifier Q′ = ∀, ∃ in φ.

  When a variable occurrence in a formula is not bound by some
  quantifier occurrence in the same formula, we also call the variable
  occurrence free.
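To make the definition concrete, here is a minimal Python sketch that classifies each variable occurrence as bound or free. It uses a hypothetical nested-tuple encoding of formulas (my assumption, not the text's official syntax) and, instead of computing paths in the parsing tree explicitly, walks the formula while tracking which variables are currently in the scope of a binder; this realizes conditions (i)-(iii) of Definition 8.4.5, since the innermost enclosing binder for a name is the one that does the binding.

```python
def occurrences(e, scope=()):
    """Yield (variable, 'bound'/'free') for each variable occurrence in e.

    Formulas/terms are nested tuples (a hypothetical encoding):
      ('var', x), ('const', a), ('func', f, args), ('pred', R, args),
      ('not', phi), ('and'/'or'/'imp'/'iff', phi, psi),
      ('forall'/'exists', x, phi).
    """
    tag = e[0]
    if tag == 'var':
        # conditions (i)+(ii): some binder for this name sits on the path above
        yield (e[1], 'bound' if e[1] in scope else 'free')
    elif tag == 'const':
        return
    elif tag in ('func', 'pred'):
        for arg in e[2]:
            yield from occurrences(arg, scope)
    elif tag == 'not':
        yield from occurrences(e[1], scope)
    elif tag in ('and', 'or', 'imp', 'iff'):
        yield from occurrences(e[1], scope)
        yield from occurrences(e[2], scope)
    elif tag in ('forall', 'exists'):
        # condition (iii) is automatic: a nested binder for the same name
        # re-enters the scope, so the innermost one wins
        yield from occurrences(e[2], scope + (e[1],))

# ∀x(N(x) → ∃xR(x, x)): all three occurrences of x come out bound
x = ('var', 'x')
phi = ('forall', 'x', ('imp', ('pred', 'N', [x]),
                       ('exists', 'x', ('pred', 'R', [x, x]))))
print(list(occurrences(phi)))
```

Running it on a formula with an unbound variable, e.g. `('pred', 'P', [('var', 'y')])`, reports `('y', 'free')`.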

8.4.6 Examples. Let’s consider some examples.

(i) For the variable occurrence ((r, 1, 2, 2, 2), y) in ∀x(∃y¬R(x, y) ∨
(P (x) → R(c, y))) and the quantifier occurrence ((r, 1, 1), ∃y) in
the same formula, condition (ii) is violated. For the occurrence
((r, 1, 1, 1, 1, 2), y), instead, we’re good: ((r, 1, 1), ∃y) binds ((r, 1, 1, 1, 1, 2), y).
(ii) We’ve just seen that for the variable occurrences ((r, 1, 2, 1, 1), x)
and ((r, 1, 2, 1, 2), x) in ∀x(N (x) → ∃xR(x, x)), only the quanti-
fier occurrence ((r, 1, 2), ∃x) binds them. The occurrence (r, ∀x)
binds neither of them, because condition (iii) of Definition 8.4.5 is
violated.
(iii) The following table records which variable occurrence is bound
by which quantifier occurrence in the formula
∀x(R(x, y) → ∃x∀y(R(y, x) ∧ ¬R(x, y))) from Example 8.2.13:

      Variable occurrence            Quantifier occurrence
      ((r, 1, 1, 1), x)              (r, ∀x)
      ((r, 1, 2, 1, 1, 1, 1), y)     ((r, 1, 2, 1), ∀y)
      ((r, 1, 2, 1, 1, 1, 2), x)     ((r, 1, 2), ∃x)
      ((r, 1, 2, 1, 1, 2, 1, 1), x)  ((r, 1, 2), ∃x)
      ((r, 1, 2, 1, 1, 2, 1, 2), y)  ((r, 1, 2, 1), ∀y)

8.4.7 Now, with the notion of variable binding in place, we can define the
central concept of open and closed formulas: we say that a formula is
open iff there exists a variable occurrence in the formula that is not
bound by some quantifier occurrence in the same formula. A formula
is called closed iff it’s not open, i.e. iff all variable occurrences in the
formula are bound by some quantifier occurrence.
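The function FV asked for in exercise 8.9.3 computes exactly this notion. Here is a small Python sketch, again on a hypothetical nested-tuple encoding of formulas (an assumption for illustration, not the book's notation):

```python
def free_vars(e):
    """The set FV(e) of variables occurring free in a term or formula e.

    Encoding (hypothetical): ('var', x), ('const', a), ('func', f, args),
    ('pred', R, args), ('not', phi), ('and'/'or'/'imp'/'iff', phi, psi),
    ('forall'/'exists', x, phi).
    """
    tag = e[0]
    if tag == 'var':
        return {e[1]}
    if tag == 'const':
        return set()
    if tag in ('func', 'pred'):
        return set().union(*(free_vars(arg) for arg in e[2])) if e[2] else set()
    if tag == 'not':
        return free_vars(e[1])
    if tag in ('and', 'or', 'imp', 'iff'):
        return free_vars(e[1]) | free_vars(e[2])
    if tag in ('forall', 'exists'):
        # a quantifier binds its variable: FV(Qx phi) = FV(phi) \ {x}
        return free_vars(e[2]) - {e[1]}
    raise ValueError(f'unknown expression: {e!r}')

def is_closed(phi):
    """A formula is closed (a sentence) iff no variable occurs free in it."""
    return not free_vars(phi)

x, y = ('var', 'x'), ('var', 'y')
print(free_vars(('forall', 'x', ('pred', 'R', [x, y]))))  # {'y'}
print(is_closed(('forall', 'x', ('pred', 'N', [x]))))     # True
```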

8.4.8 Examples. Let’s first look at our examples:

• ∀x(R(x, y) → ∃x∀y(R(y, x) ∧ ¬R(x, y))) is open, since ((r, 1, 1, 2), y)
  is free.
• ∀x(∃y¬R(x, y) ∨ (P (x) → R(c, y))) is open, since ((r, 1, 2, 2, 2), y)
  is free.
• ∀x(N (x) → ∃xR(x, x)) is closed.

8.4.9 In the following, we shall study only inferences involving closed for-
mulas or sentences as they are typically called. The reason for this is
that open formulas are not straightforwardly apt to be true or false in
a possible situation: an open formula, like P (x), contains an unbound
variable, which intuitively is something like a pronoun whose reference
you don’t know. This spells trouble for our account of valid inference.
Suppose that somebody comes into the room and declares “She’s com-
ing! So, we should all be happy.” How can you possibly determine any
relation between the truth of “she’s coming” and “we should all be
happy” without knowing who “she” refers to? It is possible to use
free variables and (disambiguated) pronouns in a reasonable, logical

way. For example, in mathematics, we often write an open formula
x + y = y + x to express a general claim about all numbers, rather
than using the universally quantified sentence ∀x∀y(x + y = y + x) for
the same purpose. Similarly, we reasonably reason from “he’s coming”
to “he’s not not coming;” we don’t need to know who “he” is for that
purpose. But it turns out that if we consider valid inferences between
open formulas, all sorts of technical difficulties pop up (they’re not
insurmountable, but they’re there). To avoid these problems, we’ll focus
on inferences between sentences alone. This is actually not that much
of a restriction, since we’ll have to think about the truth-conditions
for open formulas anyways when we try to give truth-conditions for
sentences. You can already see why from a purely syntactic standpoint:
sentences, like ∀xP (x), are constructed from open formulas, in
this case P (x); so, if we want to trace the construction of a formula in
our recursive definition of truth, we need to give the truth-conditions
for the open formula P (x).
8.4.10 It is worthwhile, as an exercise, to prove some basic facts about free
and bound variables. Here we state some propositions, leaving most
of the proofs as exercises:
Proposition. Let x be a variable and φ, ψ formulas. Then:
1. If x occurs free in φ, then x occurs free in ¬φ.
2. If x occurs free in either φ or ψ, then x occurs free in (φ ◦ ψ) for
   ◦ = ∧, ∨, →, ↔.
3. If x occurs free in φ and y ≠ x, then x occurs free in Qyφ for
   Q = ∀, ∃.

Proof. We only prove 3. Suppose that x occurs free in φ, at (n, x),
and y ≠ x. Suppose further, for contradiction, that (n, x) is bound by
some occurrence (k, Q′x) of a quantifier Q′ = ∀, ∃ in Qyφ. Clearly, k
can’t be r, i.e. the root of the stripped parsing tree for Qyφ. Why?
Well, at the root we find Qy, and by assumption x ≠ y. So, for (r, Qy),
condition (i) of Definition 8.4.5 is violated, meaning (r, Qy) can’t bind
any occurrence of x. But the (stripped) parsing tree for Qyφ looks as
follows:

    Qy
     |
  TS (φ)

This means that k must be a node in TS (φ). But that would mean
that (k, Q′x) binds (n, x) in φ. Contradiction. Hence (n, x) is also free
in Qyφ, as desired.

8.5 Substitution
8.5.1 Before we talk about formalization, we will introduce the concept of
substitution of terms for (free) variables in formulas. The point of this
operation is to be able, in a purely syntactic way, to specify what
certain pronouns stand for. Remember that in first-order logic, we
treat pronouns as variables: “it is red,” for example, is formalized as
R(x). Now, without knowing what “it” stands for, we cannot deter-
mine whether this sentence is true or false. In semantics, which we
treat in the next chapter, we will discuss a semantic way of achieving
this. But for many purposes, especially proof theory, it will be useful
to be able to do this in a purely syntactic fashion, i.e. just by manipu-
lating symbols. This is what the operation of term-substitution allows
us to do. To illustrate the idea, think of the sentence “it is red” for-
malized as R(x) again. Suppose further that in our language we refer
to the ball using the constant a. A simple way of making it explicit
that “it” stands for the ball is to replace the x in R(x) with a to get
R(a), a formula that says that the ball is red. We will now define this
operation as a general operation on formulas: the operation of
substituting a term t for the free occurrences of a variable x in a formula
φ. For this, we write (φ)[x := t].

8.5.2 In order to be able to define the operation, we first need to define


what it means to substitute a term for a variable in another term. This
is because variables can be nested somewhere inside functional expressions,
such as f (a, g(h(x), c)). To handle this, we define the operation
(s)[x := t] of substituting a term t for a variable x in another term s
recursively as follows:

(i) For s ∈ C ∪ V:
      (s)[x := t] = s   if s ≠ x
      (s)[x := t] = t   if s = x

(ii) (f (t1 , . . . , tn ))[x := t] = f ((t1 )[x := t], . . . , (tn )[x := t])

Note the parentheses, which indicate the scope of the substitution
operation, i.e. where to apply it. Let’s go through one example step
by step and calculate (f (a, g(h(x), c)))[x := c]:

(f (a, g(h(x), c)))[x := c] = f ((a)[x := c], (g(h(x), c))[x := c])
                            = f (a, g((h(x))[x := c], (c)[x := c]))
                            = f (a, g(h((x)[x := c]), c))
                            = f (a, g(h(c), c))

Well, we could have done this without the recursive definition, sure.
But the point here is that we want all of our definitions to be, at least
in principle, computer implementable—and the way we can achieve
this is by defining them properly, recursively.
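Indeed, the recursive clauses translate almost line by line into a program. Here is a sketch in Python, assuming a hypothetical tuple encoding of terms (my own choice of representation, not the book's official notation):

```python
def subst_term(s, x, t):
    """(s)[x := t]: substitute term t for the variable x in term s.

    Terms are encoded (hypothetically) as ('var', name), ('const', name),
    or ('func', name, [arguments]).
    """
    tag = s[0]
    if tag == 'var':
        return t if s[1] == x else s          # clause (i), s in V
    if tag == 'const':
        return s                              # clause (i), s in C, so s != x
    if tag == 'func':                         # clause (ii): recurse into arguments
        return ('func', s[1], [subst_term(arg, x, t) for arg in s[2]])
    raise ValueError(f'not a term: {s!r}')

# (f(a, g(h(x), c)))[x := c] = f(a, g(h(c), c))
a, c, x = ('const', 'a'), ('const', 'c'), ('var', 'x')
s = ('func', 'f', [a, ('func', 'g', [('func', 'h', [x]), c])])
print(subst_term(s, 'x', c))
```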

8.5.3 Now we can define the notion (φ)[x := t] of substituting t for all free
occurrences of x in φ. We do this recursively:

(i) (a) (R(t1 , . . . , tn ))[x := t] = R((t1 )[x := t], . . . , (tn )[x := t])
    (b) (t1 = t2 )[x := t] = (t1 [x := t] = t2 [x := t])6

(ii) (a) (¬φ)[x := t] = ¬((φ)[x := t])
     (b) ((φ ◦ ψ))[x := t] = ((φ)[x := t] ◦ (ψ)[x := t]) for ◦ = ∧, ∨, →, ↔
     (c) For Q = ∀, ∃:
           (Qyφ)[x := t] = Qy((φ)[x := t])   if y ≠ x
           (Qyφ)[x := t] = Qyφ               if y = x

The crucial clause is (ii.c), which blocks us from substituting the variable
as soon as we hit upon a quantifier that binds it. This is because
we only want to replace free occurrences of variables. Why? Well, first
of all, because it is only for free variables that it’s unclear what they
stand for. So only for them do we need to use substitution to say what
they stand for. Moreover, if we replaced bound variables, we could change
the meaning of statements. To see this, note that ∃xR(x) says that
there exists a red thing (assuming that R stands for “. . . is red”). Now,
suppose that (∃xR(x))[x := a] would yield ∃xR(a). This would turn
the statement “there exists a red thing” into the statement “there exists
an object such that the ball is red” (assuming that a stands for
the ball). This is not only a weird statement, it could actually be false,
even if “there exists a red thing” is true: suppose that the ball is blue
and the cup is red. Then “there exists a red thing” is true but “there
exists an object such that the ball is red” is false.
6 Note that the symbol “=” is used in 3 different ways here: 1. within the substitution
operation [x := t], 2. as the metalinguistic identity stating that the expressions on the
left-hand and right-hand side of the clause are the same, and 3. as the identity symbol of
our language, occurring inside t1 = t2 .

8.5.4 Example. Here is a step-by-step calculation of the result of a substitution:

(∀x(∃y¬R(x, y) ∨ (P (x) → R(c, y))))[y := c]
  = ∀x((∃y¬R(x, y) ∨ (P (x) → R(c, y)))[y := c])
  = ∀x((∃y¬R(x, y))[y := c] ∨ ((P (x) → R(c, y)))[y := c])
  = ∀x((∃y¬R(x, y)) ∨ ((P (x))[y := c] → (R(c, y))[y := c]))
  = ∀x((∃y¬R(x, y)) ∨ (P ((x)[y := c]) → R((c)[y := c], (y)[y := c])))
  = ∀x(∃y¬R(x, y) ∨ (P (x) → R(c, c)))

8.6 Formalization
8.6.1 We conclude this chapter with a brief discussion of formalization in
first-order logic. All the points from propositional logic still apply. The
translation key in first-order logic contains the following information:

(i) A so-called domain of discourse, D, which is the set of things
    we’re talking about.
(ii) For each constant in the signature, a natural language term it
    formalizes.
(iii) For each predicate in the signature, a natural language predicate
    it formalizes.
(iv) For each function symbol in the language, a natural language
    expression it formalizes.

So, bottom line, the translation key gives us the reading of the vocabulary
in the signature.

8.6.2 Examples.

(i) Here is an example of the standard translation key for LPA :

    • D = N
    • 0: the number 0
    • S: the successor function, which maps every number n to the
      next biggest number
    • +: addition
    • · : multiplication

(ii) Here’s a(n arbitrary) translation key for our abstract signature
    S = ({a, b, c}, {f¹, g²}, {P¹, R²}):

    • D = {x : x is a human}
    • f : the function that maps a person to their mother
    • g: the function that maps two people to their LCA
    • P : being a philosopher
    • R : being taller than

8.6.3 Here are some standard translation patterns for existential quantifiers
in natural language (“some,” “there exists,” . . . ) using any translation
key where S stands for “. . . is smart” and H stands for “. . . is
handsome.”

Somebody who’s smart exists               ⇝ ∃xS(x)
There’s somebody who’s not smart          ⇝ ∃x¬S(x)
Somebody’s smart and somebody’s handsome  ⇝ ∃xS(x) ∧ ∃xH(x)
Somebody’s smart and handsome             ⇝ ∃x(S(x) ∧ H(x))
Nobody’s both smart and handsome          ⇝ ¬∃x(S(x) ∧ H(x))
Somebody, who’s smart, is handsome        ⇝ ∃x(S(x) ∧ H(x))

One very important piece of advice: what you want to say is almost
never (!):
∃x(S(x) → H(x)),
i.e. someone is such that if they’re smart, then they’re handsome.
Remember that → is the material conditional, so this statement would
already be true if there’s someone who’s not smart (clear? if not, read
5.1.6 again); and it’s only false if there is someone who’s both smart
and not handsome.

8.6.4 Here are some standard translation patterns for universal quantifiers
in natural language (“for all,” “every,” . . . ) using the same translation
key as in the previous examples:

Not everybody handsome is smart    ⇝ ¬∀x(H(x) → S(x))
Everybody who’s smart is handsome  ⇝ ∀x(S(x) → H(x))
A person who’s smart is handsome   ⇝ ∀x(S(x) → H(x))
Someone who’s smart is handsome    ⇝ ∀x(S(x) → H(x))
Everybody’s smart and handsome     ⇝ ∀x(S(x) ∧ H(x))

Note well, however, that ∀x(S(x) ∧ H(x)) and ∀x(S(x) → H(x)) say
very different things: the former says everybody is both smart and
handsome, while the latter says that everybody who’s smart is also
handsome.

8.6.5 Indeterminate terms, like pronouns, indexicals, etc., are formalized
using free variables. Use the same variable only when clearly the same
thing is meant; if different things could be meant, use different
variables:

He’s handsome                       ⇝ H(x)
She’s handsome and smart            ⇝ H(x) ∧ S(x)
He’s handsome and he’s smart        ⇝ H(x) ∧ S(y)
He’s handsome and she’s smart       ⇝ H(x) ∧ S(y)
That’s a smart and handsome person  ⇝ H(x) ∧ S(x)

8.6.5½ One last piece of advice on formalization with quantifiers. If you’re
unsure about the quantificational structure of a sentence, it’s always a
good idea to follow the method we used in 8.1.6, when we introduced
the quantifiers. The idea is to first rephrase a quantified statement, i.e.
a statement with a quantifier expression in it, using pronouns; to replace
those pronouns with variables (following the guideline that possibly
different objects get different variables); and finally to abstract the
whole statement. Here’s another example. Take the statement “Every
number has a successor.” To arrive at an adequate formalization, we
proceed as follows:

• Every number has a successor.
• Every number is such that it has a successor.
• Every object is such that if it is a number, then it has a successor.
• Every object is such that if it is a number, then there exists an
  object such that it’s the successor of the former number.
• Every x is such that if x is a number, then there exists an object
  y, such that y is the successor of x.
• ∀x(N (x) → ∃y(S(x) = y)), assuming that N stands for “. . . is a
  number,” and S expresses the successor function.

8.6.6 Last, we briefly discuss how we can express that there are a fixed
number of objects with a certain property. Let’s suppose that we want
to say that there is at least one thing that has a certain property. If
we use P as a predicate for the property, a simple formula that will do
the trick is

∃xP (x)   (Exists1 )

Now, suppose that we want to say that there is at most one object
(possibly none). How can we do that? Well, a neat little mathematical
idea (that we’ve already used in §2 and in the context of functions, cf.
3.6.10) is this: to say that at most one object is so-and-so is to
say that any candidate so-and-so object is unique in being so-and-so,
meaning any other object that is also so-and-so is already our object:

∀x∀y(P (x) ∧ P (y) → x = y) (At Most1 )

If we want to say that there is precisely one object that’s so-and-so,
we can easily achieve this by forming the conjunction of (Exists1 ) and
(At Most1 ):

∃xP (x) ∧ ∀x∀y(P (x) ∧ P (y) → x = y)

This formula can be simplified a bit as follows:

∃x(P (x) ∧ ∀y(P (y) → x = y)) (Exactly1 )

This is, essentially, the formulation that we standardly use in mathematics
to say that there is precisely one P .

8.6.7 Ok, so far so good. Now what if we want to say that there are at least
two things that are so-and-so. Well clearly there have to be two things,
x and y that are both so-and-so. But is this enough? Well, no! The
objects x and y could be identical, in which case there would be only
one object after all. So, we need to exclude this possibility, which gives
us the following natural formula for saying that there are at least two
things:
∃x∃y(P (x) ∧ P (y) ∧ x ≠ y)   (Exists2 )
And what about at most two things? Well, it shouldn’t be possible
that there are three things: any potential third thing would have to
already be one of our initial two. So, generalizing the idea from (At
Most1 ), we get:

∀x∀y∀z(P (x) ∧ P (y) ∧ P (z) → x = y ∨ x = z ∨ y = z) (At Most2 )

So, to say that we have exactly two P ’s, we can just conjoin (Exists2 )
and (At Most2 ):

∃x∃y(P (x) ∧ P (y) ∧ x ≠ y) ∧ ∀x∀y∀z(P (x) ∧ P (y) ∧ P (z) → x = y ∨ x = z ∨ y = z)

A somewhat more palatable formula to the same effect is:

∃x∃y(P (x) ∧ P (y) ∧ x ≠ y ∧ ∀z(P (z) → x = z ∨ y = z))   (Exactly2 )

Now, you hopefully see the pattern. Can you write a formula that says
that there are at least/at most/exactly 3 P ’s?
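The pattern generalizes mechanically, which is also the idea behind exercise 8.9.17. Here is a small Python sketch that generates the "at least n" and "at most n" formulas as strings, following the schemes above; the variable names x1, x2, . . . are my own choice, not the text's:

```python
def at_least(n):
    """A formula string saying there are at least n P's, as in (Exists_n)."""
    vs = [f'x{i}' for i in range(1, n + 1)]
    quants = ''.join(f'∃{v}' for v in vs)
    conjuncts = [f'P({v})' for v in vs]
    # pairwise distinctness: without it, all witnesses could be one object
    conjuncts += [f'{u} ≠ {v}' for i, u in enumerate(vs) for v in vs[i + 1:]]
    return f"{quants}({' ∧ '.join(conjuncts)})"

def at_most(n):
    """A formula string saying there are at most n P's, as in (At Most_n)."""
    vs = [f'x{i}' for i in range(1, n + 2)]   # n+1 candidates force a clash
    quants = ''.join(f'∀{v}' for v in vs)
    antecedent = ' ∧ '.join(f'P({v})' for v in vs)
    consequent = ' ∨ '.join(f'{u} = {v}' for i, u in enumerate(vs) for v in vs[i + 1:])
    return f'{quants}({antecedent} → {consequent})'

print(at_least(2))   # ∃x1∃x2(P(x1) ∧ P(x2) ∧ x1 ≠ x2)
print(at_most(1))    # ∀x1∀x2(P(x1) ∧ P(x2) → x1 = x2)
```

Conjoining at_least(n) and at_most(n) then says that there are exactly n P's, just as with (Exactly2 ) above.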

8.7 Core Ideas

• In first-order logic, we take the grammatical structure of simple
  sentences into account.
• Singular terms stand for objects and predicates for properties/relations.
• Quantifiers allow us to talk about things in generality.
• The signature of a language consists of a set of constants, a set of
  function symbols, and a set of predicate symbols (the latter two with
  a fixed arity assignment).
• Both terms and formulas are defined inductively in first-order logic.
• We have induction and recursion for both formulas and terms.
• Parsing trees are defined for terms and formulas.
• Nodes in trees have a canonical way of being named: by the directions
  for how to find them starting at the root.
• Variables are bound by quantifiers.
• A formula without free variables is closed, a sentence.
• Substitution is a useful syntactic operation that allows us to say what
  a variable stands for.
• Just like in propositional logic, there are guidelines for formalization.

8.8 Self Study Questions


8.8.1 Which of the following entails that an expression is not a formula of
a first-order language (of appropriate signature)?

(a) The expression contains a sentence letter.


(b) The expression contains an even number of parentheses.
(c) The expression contains a symbol not from the vocabulary.
(d) The expression contains a variable occurrence that is not bound
by any quantifier occurrence.
(e) The expression contains an n-ary function symbol followed by m
terms for m ≠ n.
(f) The expression does not have a parsing tree.
(g) The expression contains quantifier occurrences that don’t bind any
variable occurrences.

(h) The expression contains an odd number of parentheses.


(i) The expression contains no sentential operators.

8.8.2 Let φ be a formula, (n, x) be a variable occurrence in φ and (m, Qy) a


quantifier occurrence in φ such that (n, x) is bound by (m, Qy). Which
of the following are not possible?

(a) y ≠ x
(b) n = r
(c) m = r
(d) There is a path from m to n such that Qy is on that path (for
Q = ∃, ∀)
(e) There is a path from the root to n which does not go through m.
(f) There is a path from the root to m which goes through n.
(g) There is a path from the root to n which goes through m.
(h) There is a path from the root to m which does not go through n.

8.8.3 Let φ be a formula with precisely one free variable, x. Consider the
result of a substitution (φ)[x := t] of term t without free variables for
all free occurrences of x in φ. Which of the following cannot happen?

(a) (φ)[x := t] is open.


(b) (φ)[x := t] is closed.
(c) (φ)[x := t] contains no variables.
(d) (φ)[x := t] does not contain t.
(e) (φ)[x := t] contains free occurrences of x
(f) (φ)[x := t] does not contain x
(g) (φ)[x := t] is an atomic formula

8.9 Exercises
8.9.1 [h] Define the notion of a subformula in first-order logic by generalizing
Definition 4.4.2.

8.9.2 [h] Prove that for each formula φ, if x is the only variable that occurs
free in φ, then Qxφ is closed for Q = ∀, ∃.

8.9.3 (a) Use syntactic recursion to define a function F V : L → ℘(V), which


maps a formula φ to the set F V (φ) of all the variables that occur
free in φ.

(b) Prove that F V (Qxφ) = F V (φ) \ {x} using induction on formulas.

8.9.4 Adapt the algorithm from 4.3.10 to first-order logic.

8.9.5 Prove 1. and 2. of 8.4.10.

8.9.6 [h] Is it the case that every sub-formula of a sentence is itself a sen-
tence? If so, prove it. If not, give a counterexample.

8.9.7 Let x be a variable that occurs free in φ and y any other variable. Is it
always the case that y occurs free in φ[x := y]? If so, prove it. If not,
provide a counter-example.

8.9.8 Draw the parsing trees for

∀x((R(x, x) ∧ ∃yR(z, y)) → ∃x∀zB(x, x, y)).

Determine all the occurrences of quantifiers and variables in the for-


mula. For each occurrence of a variable determine if it’s free or bound.
If it’s bound, determine which occurrence of a quantifier binds it.

8.9.9 Step-wise calculate the results of the following substitutions.

(i) [h] (∀x(R(x, y) → ∃yR(y, y)))[y := x]


(ii) ∀x(P (x) ∨ (∃y(R(y, x) → ¬Q(x)))[x := c])
(iii) (∀z(R(z, z) → (¬R(z, z) ↔ R(z, z))))[z := c]
(iv) ∀xR(x[x := a], y[y := b], z[x := a])
(v) (∀x(∃yR(x, y) ∧ ∃x∀zR(x, z)))[x := z]
(vi) (∀x(B(x, y, y) ∧ ∀z(B(z, x, z) ∨ ∃yB(y, y, z))))[y := x]
(vii) (∀x(R(x, x) → (R(y, y) ∨ ∃y(R(y, x) ∧ R(y, y)))))[y := x]
(viii) ((∀x∃y(B(x, y, z))[y := x] ∨ ∀zB(z, y, x)))[y := x]

8.9.10 Write down formulas in L∈ that say that a set x has:

(i) at least 3 members


(ii) at most 3 members
(iii) exactly 3 members

8.9.11 Translate the sentences below as accurately as possible into the language
of predicate logic. Don’t forget the translation key (domain of
discourse = the set of integers).

a. 2 is an even number.
b. 2 is greater than 3.
c. [h] The sum of 2 and 3 is greater than 4.
d. If this number is greater than 4, then it is also greater than 3.
e. If this number is not greater than 2, then it is also not greater than 3.
f. This number is smaller than 2 or greater than 4.

8.9.12 The same for:

a. There is a number greater than 4 and there is a number smaller than 4.
b. There is an even number greater than 3.
c. [h] Every number greater than 4 is also greater than 3.
d. No number is greater than 3 and smaller than 4.
e. If this number is greater than 4, then every number I have written
   down here is greater than 4.
f. [h] A number that is smaller than 3 is smaller than 4.
g. A number, which is smaller than 3, is smaller than 4.
h. There is no number greater than 4 and smaller than 3.

8.9.13 The same for:

a. A number that is greater than every even number is odd.
b. Every number is greater than at least one number.
c. [h] There is an even number that is smaller than an odd number
   that is greater than an odd number.
d. There is no number that is greater than every number.
e. No number is greater than itself.
f. Every odd number is greater than 0.
g. Every odd number is greater than an even number.

8.9.14 The same, but now with the set of all humans as the domain of discourse,
for:

a. Whoever loves someone loves themselves.
b. Whoever loves no one is not wise.
c. Whoever is wise is loved by someone.
d. Everyone loves someone.
e. Whoever loves me is loved by me.
f. Whoever is against me is not for me.
g. Whoever is not for me is against me.
h. Everyone is either for me or against me.

8.9.15 The same (domain of discourse = the set of all philosophers):

a. All philosophers publish in an internationally renowned journal.
b. Whoever publishes in an internationally renowned journal and writes
   academic books does high-quality research.
c. Every philosopher who publishes in an internationally renowned journal
   is preferable to all philosophers who don’t.
d. There is a philosopher who writes academic books and publishes more
   than all those who do high-quality research.
e. For every philosopher who writes academic books it holds that it is
   not the case that she/he is preferable to all philosophers who publish
   in an internationally renowned journal.

8.9.16 [h] Given the following translation key:

D = {Rosja, Zebedeus, Peter},
p: Peter
G(x, y): x is bigger than y
S(x, y): x is stronger than y
B(x): x is happy

Translate the following formulas into ordinary English:

a. ∃x S(x, p)
b. ∀x ∀y (S(x, y) → G(y, x))
c. ∀x (∃y G(x, y) → ∃y S(x, y))
d. ¬∃x G(p, x) ∧ ¬∃y S(y, p)
e. ∀x (B(x) → ∃y G(y, x))

8.9.17 A nice challenge at the end: give an inductive definition of a sequence


of formulas {φi ∈ L: i ∈ N} such that φi says that there are precisely i
objects.

Self Study Solutions

8.8.1 (a), (c), (e), (f), (h)
8.8.2 (a), (b), (d), (e), (f)
8.8.3 (a), (d), (e)
Chapter 9

Semantics for First-Order Logic

9.1 Truth, Models, and Assignments


9.1.1 In this chapter, we’re going to develop the standard semantics for
the first-order languages we’ve described and studied in the previous
chapter. That is, we want to define the concept of a model for these
languages. The aim is, as always in logical semantics, to give a formal
account of valid inference using truth-preservation across all models
(cf. 1.1.5 and 5.2.1). We will do so, making use of several ideas from
the elementary set-theory chapter, in particular §3.6. So, make sure
you’re up to speed on properties, relations, and functions.

9.1.2 Note that a model for propositional logic, i.e. an assignment, inter-
prets the non-logical vocabulary of the language in question (as truth-
values). The meaning of logical vocabulary, i.e. the sentential opera-
tors, is given by the truth-functions. In first-order logic, the situation
is similar. A model interprets the non-logical vocabulary of the first-
order language in question, i.e. the signature. The meaning of the
sentential operators is still given by the truth-functions, and we know
how those work. The meaning of the quantifiers, instead, we’ll have to
study in a bit more detail.

9.1.3 Let’s begin by informally describing the idea of how we use the con-
cepts from set-theory to provide the notion of a model for a first-order
language. In §8.1, we discussed the idea that in first-order logic, we
need to take the subject-predicate structure of sentences into account
but we abstract away from the concrete terms and predicates involved.
For now, let’s focus on simple, term-property sentences, like “the ball
is red.” Logically speaking, i.e. abstracting away from the concrete


terms and predicates involved, the structure of this sentence is

P (a),

where a is a constant and P is a unary predicate. Now, constants


denote objects and unary predicates express properties. But remember
that a property, at least for us, is just a set of objects, the set of objects
with the property (cf. 3.6.1). The property of being red, in this picture,
is the set of all red things. Now ask yourself, when is the sentence “the
ball is red” true? Well, the natural answer is, just in case the ball is
actually red, i.e. iff the ball is a member of the set of red things.

9.1.4 How can we formally model this? Well, by assigning the ball as the
denotation to a and the set of all red things, {x : x is red}, as the
interpretation to P . To distinguish the actual ball from its name, we
call the ball ⟦a⟧ and use a as its name. Our observation, so far, is
that P (a) should be true under the intended interpretation of a and
P iff ⟦a⟧ ∈ {x : x is red}, i.e. iff the ball is a member of the set
of red things. But now remember that for our logical purposes, we
abstract away from the concrete terms and predicates involved, so we
should forget about their meaning, too: P (a) is just a formula. But we
can use the idea we just described to obtain natural truth-conditions
for P (a). All we need to know is which object a denotes and which
property, conceived as a set, P expresses. Then, we can say that P (a)
is true iff the object denoted by a is in the set expressed by P . And
that’s precisely what a model for first-order logic does: it tells us which
objects the terms denote and it tells us which properties (i.e. sets) the
predicates express. From there, the definition of truth-in-a-model flows
rather naturally.

9.1.5 Now let’s generalize the idea from the previous point. Remember, from
8.1.5, that the general form of a simple sentence in first-order logic is

R(t1 , . . . , tn ),

where R is an n-ary predicate and t1 , . . . , tn are terms. For example,
the structure of “the letter is in the left drawer” is R(t, u), where t
stands for “the letter,” u stands for “the left drawer,” and R stands for
“. . . is in . . . .” Now, on the intended interpretation, the terms denote
the objects in question, that’s clear: ⟦t⟧ is the letter and ⟦u⟧ is the
left drawer. The predicate R instead denotes the relation of one thing
being inside another. Remember, from 3.6.2, that a binary relation is
a set of ordered pairs: the set of pairs where the first thing stands in
the relation to the second. So the relation of one thing being inside another
is the set {(x, y) : x is inside y}. On the intended interpretation,
therefore, R(t, u) is true iff (⟦t⟧, ⟦u⟧) ∈ {(x, y) : x is inside y}. Now,
generally speaking, a model M gives us for each term t an object ⟦t⟧^M
denoted by t in M, and the model gives us for each n-ary predicate
R a set R^M of n-tuples. A formula R(t1 , . . . , tn ), then, will be true in
a model M iff (⟦t1⟧^M , . . . , ⟦tn⟧^M ) ∈ R^M . This is the general idea for
truth of simple sentences in a model: a simple sentence is true iff the
objects denoted by the terms stand in the relation expressed by the
predicate.
9.1.6 Note that in the case of the distinguished identity predicate, =, the
situation is even easier. In 8.2.4, we discussed the idea that we want
= to express actual identity. The simple identity claim “Darth Sidious
is Palpatine” is true, for example, iff “Darth Sidious” and “Palpatine”
denote the same person (and for those of you who don’t like Star Wars,
they do!). This straightforwardly generalizes to abstract models: the
(one and only) way to achieve what we want is by saying that t1 = t2
is true in a model M iff t1 and t2 denote the same object in the model,
i.e. iff ⟦t1⟧^M = ⟦t2⟧^M .
9.1.7 Let’s continue thinking about the denotations of terms a bit more. In
8.1.3–4, we discussed the different kinds of terms recognized in clas-
sical, first order logic: constants, variables, and function expressions.
Now, constants denote fixed objects, so it’s quite clear how to inter-
pret them in a model M: just assign an object aM to each constant a.
Also with function expressions, it’s quite clear what to do. If we have
a function expression like “the LCA of . . . and ,” formalized as a
binary function symbol f , the intended interpretation is the function
that maps any two individuals to their LCA. And abstractly speaking,
the interpretation of f in a model M should just be a binary function,
f M . In fact, when we consider iterations of functions, it’s quite clear
how to calculate their semantic values using recursion. Take “the LCA
of Ada Lovelace and the LCA of Alan Turing and Angela Merkel,” for
example. If we formalize this term as

f (a, f (b, c)),

where a stands for “Ada Lovelace,” b for “Alan Turing,” c for “An-
gela Merkel,” and f for ‘the LCA of . . . and ,” then the value
M
Jf (a, f (b, c))K in a model M can be calculated as follows:

Jf (a, f (b, c))KM = f M (aM , Jf (b, c)KM ) = f M (aM , f M (bM , cM )),

where aM is the object denoted by a in M, bM is the object denoted
by b in M, cM is the object denoted by c in M, and f M is the function
expressed by f in M. What’s not so clear is how to handle variables,
which is what we’re going to discuss next.
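The recursive computation just sketched is easy to mirror in code. Here is a minimal Python sketch (the tuple encoding of terms and the stand-in LCA function are our own, not part of the notes): a constant is a string, a function term is a tuple, and the value of a function term is computed from the values of its arguments, exactly as in the calculation above.

```python
# A minimal sketch (encoding ours) of computing term denotations
# recursively. A constant is a string; a function term is a tuple
# (fname, arg1, ..., argn).

def denote(term, interp):
    """Compute the denotation of a variable-free term in a model whose
    interpretation function is given by the dictionary `interp`."""
    if isinstance(term, str):        # a constant: look up its object
        return interp[term]
    fname, *args = term              # a function term: recurse on args
    return interp[fname](*(denote(arg, interp) for arg in args))

interp = {
    "a": "Ada Lovelace",
    "b": "Alan Turing",
    "c": "Angela Merkel",
    # placeholder standing in for a real least-common-ancestor function:
    "f": lambda x, y: f"LCA({x}, {y})",
}

# J f(a, f(b, c)) K^M, computed inside-out as in the text:
print(denote(("f", "a", ("f", "b", "c")), interp))
# LCA(Ada Lovelace, LCA(Alan Turing, Angela Merkel))
```

The inner call for f (b, c) is evaluated first, and its result is fed into the outer application of f, mirroring the two lines of the calculation.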
CHAPTER 9. SEMANTICS FOR FIRST-ORDER LOGIC 218

9.1.8 Remember that the variables x, y, z, . . . are, essentially, logical pronouns, i.e. expressions like “he,” “she,” “it.” How do you figure out
what a pronoun like “it” refers to? Take the sentence “it is red,” for
example. Formally, we abstract this to

P (x),

where x stands for “it,” and P stands for “. . . is red.” If we want
to know whether P (x) is true, we need to know what x stands for.
And even on the intended reading, the natural language sentence it-
self doesn’t give us any clue towards the denotation of x. We need
more information. If, for example, you’re talking to a friend and she
says “Yesterday, I bought a ball. It is red.” Then you know that x is
supposed to refer to the ball. Or if another friend says “My favorite
pen is lying there on the table. It’s red.” Then x is supposed to refer
to the pen. There are two important points here: (i) you need some
context to determine what pronouns stand for, and (ii) the denotation
of the same pronoun in the same sentence can change from context to
context. This is all in line with the way we think about variables in
mathematics (cf. 2.2.3–10). How can we formally model this behavior?
There is nothing in the formula P (x) itself that lets us determine the
relevant context, so we shall model the context as additional semantic
information: we will introduce the notion of an assignment α, which
is essentially just a function that assigns to every variable x ∈ V an
object α(x). In the same spirit as before, using assignments, we will
be able to say that P (x) is true in a model M under an assignment α
iff α(x) ∈ P M . That is, in order to be able to think about variables,
our definition of truth will be relative to a model and an assignment.
In order to take into account that the same variable, even within the
same sentence, can have different meanings in different contexts, we’ll
consider different assignments in the same model.

9.1.9 The information provided by a model and assignment together is
enough to recursively calculate the truth-value JφKM α (in a model M
under an assignment α) for formulas φ, which involve only the senten-
tial operators ¬, ∧, ∨, →, ↔. We do this just as in propositional logic,
i.e. using the truth-functions. The only really interesting question is
how to handle the quantifiers. The questions are:

J∃xφKM α = ???
J∀xφKM α = ???

Let’s consider the latter, i.e. universal statements like “everything
that’s scarlet is red,” in some more detail (the case for existential

claims is analogous). The form of this statement, as discussed in the
previous chapter, is
∀x(S(x) → R(x)),
where S stands for “. . . is scarlet” and R stands for “. . . is red.” Now, in
light of this, we intuitively want “everything that’s scarlet is red” to be
true iff every object that is scarlet is also red. Modeling this in a model
under an assignment is not straightforward, however. Suppose we’re
dealing with a model M which interprets S and R as the sets S M and
RM respectively and an assignment α which tells us that α(x) is the
ball. This allows us to determine the value of JS(x) → R(x)KM α . We
get the value 0 if the ball is a member of S M (the set of scarlet things
in the model) but not of RM (the set of red things in the model);
otherwise we get the value 1. But that is just the truth-value of the
formula for one possible value of x, namely the value under α—
the ball. To be able to talk about truly all objects, all possible values
x can take, we simply change the values of x under α. Let’s write
α[x 7→ d] for the assignment that is defined just like α for all variables
other than x but assigns the value d to x. With this notation, we can
simply go through all the possible values x can take and check whether
S(x) → R(x) is true. If this is the case, we declare ∀x(S(x) → R(x))
true; if we can find a value for x such that S(x) → R(x) is false, we
declare ∀x(S(x) → R(x)) false. A bit more precisely, we get:

J∀x(S(x) → R(x))KM α = 1 iff for all objects d, JS(x) → R(x)KM α[x7→d] = 1

More generally, we get the following truth-condition for ∀xφ in a model
under an assignment α:

J∀xφKM α = 1 iff for all objects d, JφKM α[x7→d] = 1

In words, ∀xφ is true in a model under an assignment iff for every
possible value of x, if we change the value of x to that value (while
keeping all the other values the same), φ comes out true in the model.
Analogously, we can motivate the following clause for ∃:

J∃xφKM α = 1 iff for some d, we have JφKM α[x7→d] = 1

That is, ∃xφ is true in a model under an assignment iff there is a
possible value for x, such that if we change the value of x to that value
(keeping the rest fixed), we get φ to be true. This is the fundamental
idea of how the semantics for ∀ and ∃ works.
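Over a finite domain, these clauses can be checked by brute force. Here is a small Python sketch (the toy sets of scarlet and red things are our own invention): we literally loop through every possible value for x, which is exactly what the truth-conditions above demand.

```python
# A small sketch (toy sets ours) of the quantifier clauses: over a
# finite domain, ∀x(S(x) → R(x)) is checked by going through every
# possible value d for x, as the truth-condition demands.

domain = {"ball", "firetruck", "rose", "cup"}
S = {"ball", "firetruck"}            # the scarlet things in the model
R = {"ball", "firetruck", "rose"}    # the red things in the model

def implies(p, q):
    # the truth-function of the material conditional
    return (not p) or q

# J ∀x(S(x) → R(x)) K = 1 iff S(x) → R(x) holds for every value of x:
universally_true = all(implies(d in S, d in R) for d in domain)

# J ∃x S(x) K = 1 iff S(x) holds for some value of x:
exists_scarlet = any(d in S for d in domain)

print(universally_true, exists_scarlet)  # True True
```

Python's `all` and `any` play precisely the roles of the ∀ and ∃ clauses here.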

9.1.10 Should we really consider all possible values x can take in the recursive
clauses for ∀ and ∃? There are good, intuitive reasons to say that the
answer is: no! Huh? Well, consider the statement “everybody passes

the course.” The logical form of this statement is ∀xP (x), where P
stands for “. . . passes.” If we consider all things as possible values for
x, this sentence clearly is false: if x denotes the ball, for example,
then P (x) is clearly false—a ball can’t pass the course. But that’s also
not what I meant. I mean that every student passes the course. Now,
there are two ways of going about modeling this: either we revise
the grammatical structure of our sentence to ∀x(S(x) → P (x)) or
we restrict the possible values for x. In the former case, we get the
right result, since trivial counterexamples like the ball no longer make
problems: if x denotes the ball, then S(x) is false and so S(x) → P (x)
is true. The only real counterexample will now be a student, i.e. a
member of S M , that is not a member of P M . So, the solution works.
But it has a certain flair of “cheating:” we revised the grammatical
structure of the sentence we wanted to model to something else than
what we actually said. In general, this sort of move is not liked by
logicians. The other solution, restricting the possible values, is more
generally liked. It’s the one we shall adopt: in a model M, we shall
restrict the values for all our syntactic expressions to values from a
fixed set DM —the domain of discourse in the model. The domain of
discourse fixes the kinds of things we’re talking about. In our example,
“everybody passes,” the intended domain of discourse should include
all and only the students in the course. In an arbitrary model, however,
the domain can, of course, be arbitrary.1
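The two modeling strategies can be compared directly in code. A small Python sketch (with invented toy data, not from the notes):

```python
# A sketch (toy data ours) of the two strategies for "everybody
# passes": revise the form to ∀x(S(x) → P(x)), or restrict the domain
# of discourse to the students.

everything = {"alice", "bob", "ball"}   # an unrestricted domain
students = {"alice", "bob"}             # S^M, the students
passes = {"alice", "bob"}               # P^M, those who pass

# Unrestricted ∀x P(x) fails: the ball doesn't pass the course.
naive = all(x in passes for x in everything)

# Strategy 1: ∀x(S(x) → P(x)) over the unrestricted domain.
revised = all((x not in students) or (x in passes) for x in everything)

# Strategy 2: ∀x P(x), with the domain itself restricted to students.
restricted = all(x in passes for x in students)

print(naive, revised, restricted)  # False True True
```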

9.1.11 Now everything is in place. We will spend the rest of the chapter
pouring the previous ideas into fully formal, precise definitions. But
before we do so, it’s worth going through two ideas that one might
have for how the quantifiers should work that actually don’t work.
This will shed light on the idea that we’re actually using in this course.
The first idea is to use substitution as follows: why don’t we say that
∀xφ is true iff (φ)[x := t] is true for every term t, and, analogously,
∃xφ is true iff for some term t, (φ)[x := t] is true? This is known as the
substitutional account of the quantifiers. Considering an example of a
concrete language and intuitive model quickly shows why this account
can’t be correct. Take the statement “there is a red thing,” which we
formalize as ∃xR(x) in a language with the predicate R for “. . . is red”
and additionally only the constant a for the ball. Suppose that we’re
in a model M where there are only two things, the cup and the ball.
The constant a duly denotes the ball and R expresses the property of
1 There is also a deeper, more technical reason: there is no universal set U of absolutely
everything. To see this, note that a set is a thing, so we’d get U ∈ U . But this kind of
thing usually spells trouble: we can quickly derive paradoxes when we allow for sets to
contain themselves. This is why in standard set-theory, universal sets, and more generally,
sets containing themselves are banned.

being red, which in our model is such that the cup is red but the ball
is not. Additionally, we’re working with an assignment α such that
α(x) is the ball for all variables x ∈ V. Intuitively, it’s correct that
there is a red thing, namely the cup. So, ∃xR(x) should be true in the
model. And, in fact, if we use our idea from above, we get this result:
simply change the value of x to the cup and R(x) becomes true. But
there is no term that, given our model and assignment, denotes the
cup. So, we can’t find a t such that (R(x))[x := t] comes out true. Our
language is just not expressive enough to talk about all the objects in
our model. There is no natural way of fixing this. It’s simply infeasible
to postulate that we have names for all objects in all models—there
are simply too many. There is a way of rescuing the idea, but we
won’t explore it in the course. We’ll return to the idea of substitution
in the proof theory chapter.
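The failure can be made concrete in a few lines of Python (the two-object model of the text, in an encoding of our own):

```python
# A sketch (encoding ours) of the model described here: two objects,
# only the ball has a name, and only the cup is red.

domain = {"ball", "cup"}
red = {"cup"}                  # R^M: the red things
constants = {"a": "ball"}      # every closed term denotes the ball

# Official account: ∃x R(x) is true iff some value for x makes R(x) true.
official = any(d in red for d in domain)

# Substitutional account: ∃x R(x) is true iff R(t) is true for some
# term t. The only terms available denote the ball, never the cup, so
# this account wrongly declares the sentence false.
substitutional = any(obj in red for obj in constants.values())

print(official, substitutional)  # True False
```

The mismatch is exactly the expressiveness gap the paragraph describes: the language simply has no term for the cup.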

9.1.12 Another approach we might be tempted to use is to say that:

J∀x(S(x) → R(x))KM α = 1 iff for every assignment β, JS(x) → R(x)KM β = 1

That is, we might be tempted to say that ∀x(S(x) → R(x)) is true in a
model under an assignment iff S(x) → R(x) is true under every possible
assignment in the model. This actually works in the case of
relatively “simple” statements like ∀x(S(x) → R(x))—if S(x) → R(x)
is true under every assignment, then it must be true for every object,
since for each object we will find at least one assignment β such that
β(x) is the object.
The approach, however, doesn’t work in general. To be precise, we
can’t say the following:

J∀xφKM α = 1 iff for every assignment β, JφKM β = 1

J∃xφKM α = 1 iff for some assignment β, JφKM β = 1

Once the quantificational structure of formulas gets more involved, the
approach yields the intuitively wrong outcomes. Actually, to illustrate
the problem, we can look at a statement with just one quantifier but
one additional free variable. Suppose that we’re in a context where “it”
clearly refers to the smallest natural number, i.e. 0. Then consider the
statement “every natural number is bigger than it.” Formally, we’d
represent this statement as

∀yR(x, y),

where R stands for “. . . is smaller than (or equal to) .” The model
the intended reading suggests is to let DM = N, i.e. we talk about the
natural numbers, RM = {(n, m) : n ≤ m} (i.e. the relation of being

smaller than: the set of numbers such that the first is smaller than
(or equal) to the second), and α(x) is the number zero. Intuitively
speaking, on this reading, ∀yR(x, y) should come out true in such a
modeling situation: zero is indeed such that it is smaller than every
other number (and identical to itself). But, alas, the present proposal
gives another verdict: it’s not the case that for every assignment β,
JR(x, y)KM β = 1. Just take any assignment β which assigns 2 to x and
1 to y. Since (2, 1) ∉ {(n, m) : n ≤ m} = RM , we get JR(x, y)KM β = 0.
And so, according to the proposal, J∀yR(x, y)KM α = 0. —The problem
is that, intuitively, the value of x needs to remain fixed. Our official ac-
count from 9.1.9 guarantees this, but the account under consideration
does not. The problem has very much to do with what happens when
we have more than one quantifier in a statement. In fact, our argu-
ment can be used to show that in our intended model, J∃x∀yR(x, y)KM α
turns out to be 0, though intuitively it should be 1. To see this, note
that our proposal would say that J∃x∀yR(x, y)KM α = 1 iff there exists
an assignment β, such that J∀yR(x, y)KM β = 1, which in turn would
be the case iff for every assignment γ, we have JR(x, y)KM γ = 1. But
we just figured out that, no matter what we start with, while there is
an assignment β such that J∀yR(x, y)KM β = 1, it’s not the case that,
then, for all assignments γ, JR(x, y)KM γ = 1. Nested quantifiers are, in
fact, the ultimate reason why we need the official definition that we
endorse.
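Here is a small Python sketch of the failure, using a finite stand-in for the intended model (domain {0, 1, 2} instead of all of N; the encoding is ours):

```python
# A sketch (finite stand-in for the model in the text) contrasting the
# official clause for ∀y with the flawed "true under every assignment"
# proposal. R plays the role of "... is smaller than (or equal to) ...".

domain = {0, 1, 2}

def R(n, m):
    return n <= m

alpha = {"x": 0}   # the context: "it" refers to the smallest number, 0

# Official clause for ∀y R(x, y): vary only y, keep alpha's x fixed.
official = all(R(alpha["x"], m) for m in domain)

# Flawed proposal: true iff R(x, y) holds under *every* assignment beta.
# But beta may also change x, e.g. beta(x) = 2, beta(y) = 1.
flawed = all(R(bx, by) for bx in domain for by in domain)

print(official, flawed)  # True False
```

The official account keeps x pinned at 0 while y varies; the flawed one lets x drift, and so misses the intuitively correct verdict.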

9.1.13 That’s it, these are the ideas that we’re going to develop in this chap-
ter. Let’s briefly sum up:

ˆ A model interprets the signature by assigning a denotation to
every constant, a function to every function symbol, and an n-ary
relation to every n-ary relation symbol.
ˆ An assignment in a model tells us what the variables denote. It
plays the role of the context in natural language.
ˆ We can recursively calculate the denotation of arbitrary terms in
a model under an assignment.
ˆ We can recursively calculate the truth-value of a formula relative
to a model under an assignment.

The rest is, more or less, standard. Validity will be defined as truth-
preservation across all models, just like in propositional logic. We’ll
now make these concepts fully precise.

9.2 Models and Assignments


9.2.1 As we just said, a model interprets the signature. So, let S = (C, F, R)
be a signature. A model for S is a structure M = (DM , ·M ), such that:

(i) DM is a non-empty (!) set, the domain of M


(ii) ·M is an interpretation function, which assigns to:
(a) every constant c ∈ C an element cM ∈ DM of the domain
(b) every function symbol f n ∈ F, a function f M : (DM )n →
DM
(c) every predicate Rn ∈ R, a set RM ⊆ (DM )n .

9.2.2 Examples. The following are all examples of models for their respective
signatures (I took the signatures from Example 7.2.3). It’s important
to note that for signatures with an intended reading (as in the cases
of arithmetic and set-theory), there are both “intended” and “unin-
tended” models:

(i) Signature SP A = ({0}, {S 1 , +2 , ·2 }, ∅)


(a) The standard, intended model:
ˆ DM = N
ˆ 0M = 0
ˆ S M (n) = n + 1
ˆ +M (n, m) = n + m
ˆ ·M (n, m) = n · m
(b) A natural, but non-intended model on the even numbers:
ˆ DM = {n ∈ N : n is even}
ˆ 0M = 0
ˆ S M (n) = n + 2
ˆ +M (n, m) = n + m
ˆ ·M (n, m) = n · m
(c) A natural, but non-intended model on the odd numbers:
ˆ DM = {n ∈ N : n is odd}
ˆ 0M = 1
ˆ S M (n) = n + 2
ˆ +M (n, m) = n + m if n + m is odd, and n + m + 1 otherwise
ˆ ·M (n, m) = n · m
Why the weird clause for +M ? Because +M needs to be a
function from {n ∈ N : n is odd} to {n ∈ N : n is odd} and
n + m can be even if n, m are both odd: just take 1 + 1.

(d) A weird, non-intended model:


ˆ DM = N
ˆ 0M = 42
ˆ S M (n) = n
ˆ +M (n, m) = n · m
ˆ ·M (n, m) = nm
(e) A really weird, non-intended model:
ˆ DM = {∗}
ˆ 0M = ∗
ˆ S M (∗) = ∗
ˆ +M (∗, ∗) = ∗
ˆ ·M (∗, ∗) = ∗
(ii) S∅ = (∅, ∅, ∅)
ˆ Literally, every non-empty set is a model!
(iii) S∈ = ({∅}, ∅, {∈2 })
(a) The intended model cannot be described using our methods,
since there is no set of all sets (leads to paradox). But here
is a natural model for the language:
ˆ DM = N ∪ ℘(N)
ˆ ∅M = ∅
ˆ ∈M = {(x, X) ∈ N × ℘(N) : x ∈ X}
(b) The following model is not really intended, but works:
ˆ DM = ℘(N)
ˆ ∅M = ∅
ˆ ∈M = {(X, Y ) : X ⊆ Y }
(c) The following model is weird:
ˆ DM = {a, b, c}
ˆ ∅M = c
ˆ ∈M = {(a, c), (b, c), (c, c), (a, b)}
(iv) S = ({a, b, c}, {f 1 , g 2 }, {P 1 , R2 })
ˆ There is no real intended model, so let’s just describe some
arbitrary one.
– DM = {1, 2, 3, 4}
– aM = 1, bM = 3, cM = 2
– f M (x) = x for each x ∈ DM
– g M (x, y) = min(x, y) for all x, y ∈ DM
– P M = {1, 3}

– RM = {(1, 1), (1, 2), (2, 2), (2, 3), (3, 3)}
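As an aside, a finite model like the one in example (iv) is easy to write down as concrete data. A Python sketch (the dictionary encoding is our choice, not part of the definition):

```python
# A sketch of the abstract model from example (iv) as concrete Python
# data (encoding ours): domain as a set, the interpretation function
# split into dictionaries for constants, functions, and predicates.

D = {1, 2, 3, 4}
constants = {"a": 1, "b": 3, "c": 2}
functions = {
    "f": lambda x: x,              # f^M is the identity
    "g": lambda x, y: min(x, y),   # g^M is the minimum
}
predicates = {
    "P": {1, 3},
    "R": {(1, 1), (1, 2), (2, 2), (2, 3), (3, 3)},
}

# Spot checks against the example:
print(functions["g"](constants["b"], constants["c"]))  # 2
print(constants["a"] in predicates["P"])               # True
```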

9.2.3 Note that the domain in every model is postulated to be non-empty.
The reason for this is that otherwise we’d get weird results, which we’ll
be able to see in a moment. But for now, you can already note that if we
had an empty domain, we couldn’t assign denotations to the constants.
But in natural language reasoning, it’s a standard presupposition that
if you’re talking about something and give it a name, then it exists—
at least for the purposes of your reasoning process.

9.2.4 Next, we give a precise formulation to the concept of an assignment
in a model. This is easy: an assignment in a model M = (DM , ·M ) of
signature S is a function α : V → DM .

9.2.5 Examples: Note that for the assignment, the only component of the
model that matters is the domain, since the variables assume their
values from it.

(i) Models with domain DM = N (9.2.2.i.a–d):


(a) α(x1 ) = 0, α(y) = 1, α(z) = 2
(b) α(x) = 1, α(y) = 0, α(z) = 3
(c) α(x) = 0, α(y) = 0, α(z) = 0
(d) For V = {xi : i ∈ N}, α(xi ) = i.
(e) α(x) = 1 for all x ∈ V.
(ii) Models with domain DM = N ∪ ℘(N) (9.2.2.iii.a):
(a) α(x) = 2, α(y) = {2}, α(z) = N
(b) α(x) = {n ∈ N : n is even}, α(y) = {n ∈ N : n is odd}, α(z) =
{n ∈ N : n is prime}
(c) α(x) = 0 for all x ∈ V.
(iii) For DM = {∗} (9.2.2.i.e) there is only one assignment, which is
the constant assignment α(x) = ∗ for all x ∈ V

Note that any function α : V → DM is an assignment: we can have
that multiple variables assume the same value or that some values are
not assumed by any variable.

9.2.6 With the concept of a model and an assignment in place, we can define
the denotation JtKM α of a term t in a model M under assignment α by
the following recursion:

(a) (i) JxKM α = α(x)
(ii) JcKM α = cM

(b) Jf (t1 , . . . , tn )KM α = f M (Jt1 KM α , . . . , Jtn KM α )

9.2.7 Examples:

(i) In the standard model for SP A (9.2.2.i.a) with α(x) = 0, α(y) =
1, α(z) = 2 (9.2.5.i.a):

J0KM α = 0
JxKM α = 0
JS(0)KM α = 1
Jy · S(0)KM α = 1
JS(((x · y) + z))KM α = 3

(ii) In the non-intended model for SP A (9.2.2.i.b), which has DM =
{x : x is even}, with α(x) = 0, α(y) = 0, α(z) = 0 (9.2.5.i.c):

J0KM α = 0
JxKM α = 0
JS(0)KM α = 2
Jy · S(0)KM α = 0
JS(((x · y) + z))KM α = 2

(iii) In the non-intended model for SP A (9.2.2.i.c), which has DM =
{x : x is odd}, with α(x) = 1 for all x ∈ V (9.2.5.i.e):

J0KM α = 1
JxKM α = 1
JS(0)KM α = 3
Jy · S(0)KM α = 3
JS(((x · y) + z))KM α = 5

(iv) In the weird, non-intended model for SP A (9.2.2.i.d), which has
DM = N, with α(x) = 0, α(y) = 1, α(z) = 2 (9.2.5.i.a):

J0KM α = 42
JxKM α = 0
JS(0)KM α = 42
Jy · S(0)KM α = 1^42 = 1
JS(((x · y) + z))KM α = (0^1) · 2 = 0

(v) Consider the abstract model (9.2.2.iv). In that model, under the
assignment α(x) = 1, α(y) = 4, α(z) = 2, we have:

Jf (f (x))KM α = f M (f M (α(x))) = 1
Jg(b, c)KM α = min(bM , cM ) = min(3, 2) = 2
Jg(b, y)KM α = min(bM , α(y)) = min(3, 4) = 3
Jg(f (f (x)), g(b, c))KM α = min(α(x), min(bM , cM )) = min(1, min(3, 2)) = 1
Jf (g(g(a, b), g(b, c)))KM α = min(min(aM , bM ), min(bM , cM )) = min(min(1, 3), min(3, 2)) = 1

Note that both the model and assignment crucially affect the values
of terms. The results in weird models can be weird. Try some more
examples by yourself.
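The recursion of 9.2.6 can also be run mechanically. Here is a Python sketch (the tuple encoding of terms is ours), checked against the values of example (v):

```python
# A sketch of the recursion from 9.2.6 (encoding ours): variables and
# constants are strings, function terms are tuples, and the assignment
# alpha is a dictionary.

def denote(term, consts, funcs, alpha):
    if isinstance(term, str):
        if term in consts:       # clause (a)(ii): JcK = c^M
            return consts[term]
        return alpha[term]       # clause (a)(i):  JxK = alpha(x)
    fname, *args = term          # clause (b): apply f^M to the values
    return funcs[fname](*(denote(t, consts, funcs, alpha) for t in args))

# The abstract model (9.2.2.iv) and the assignment of example (v):
consts = {"a": 1, "b": 3, "c": 2}
funcs = {"f": lambda x: x, "g": min}
alpha = {"x": 1, "y": 4, "z": 2}

print(denote(("f", ("f", "x")), consts, funcs, alpha))  # 1
print(denote(("g", "b", "c"), consts, funcs, alpha))    # 2
print(denote(("g", "b", "y"), consts, funcs, alpha))    # 3
print(denote(("g", ("f", ("f", "x")), ("g", "b", "c")),
             consts, funcs, alpha))                     # 1
```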

9.2.8 Finally, we shall define the crucial operation of changing the value of
a variable under an assignment, which we need for the clauses for the
quantifiers. Let α be an assignment in a model M = (DM , ·M ). We
define the function α[x 7→ d], which is the result of setting the value
of variable x ∈ V to d ∈ DM , by the following condition:
ˆ α[x 7→ d](y) = α(y) if y 6= x, and α[x 7→ d](y) = d if y = x

It follows immediately from the definition that for all d ∈ DM ,
JxKM α[x7→d] = d.

We introduce the following useful notation for iterated changes: instead
of α[x 7→ d][y 7→ e] we write α[x 7→ d, y 7→ e].
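The update operation is a one-liner in code. A Python sketch (dictionaries as assignments, our encoding): copy α and change only the value of x, so the original assignment stays untouched.

```python
# A sketch of the update operation alpha[x ↦ d] with assignments as
# dictionaries (encoding ours).

def update(alpha, x, d):
    beta = dict(alpha)  # all other variables keep their values
    beta[x] = d
    return beta

alpha = {"x": 0, "y": 1}
beta = update(alpha, "x", 7)
print(beta["x"], beta["y"], alpha["x"])  # 7 1 0

# Iterated changes alpha[x ↦ 7][y ↦ 9], i.e. alpha[x ↦ 7, y ↦ 9]:
gamma = update(update(alpha, "x", 7), "y", 9)
print(gamma)  # {'x': 7, 'y': 9}
```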

9.2.9 To tighten our understanding of terms and their denotations, we shall
prove the following lemma:

Lemma (Term Locality Lemma). Let M be a model and t a term
with precisely the variables in the set V in it. Then for all assignments α
and β in M, if α(x) = β(x) for all x ∈ V , then JtKM α = JtKM β .

Proof. We also prove this fact by induction on terms.

ˆ For the induction base, we distinguish two cases: (a) t is a
constant or (b) t is a variable. If (a), t is a constant a ∈ C, we can
reason that JaKM α = aM = JaKM β for all assignments α, β. If (b),
t is a variable x, then V = {x} and so JxKM α = α(x) = β(x) = JxKM β .

ˆ For the induction step, consider a term f (t1 , . . . , tn ) and suppose
the induction hypothesis for all ti , i.e. if ti contains precisely
the variables Vi and α(x) = β(x) for all x ∈ Vi , then Jti KM α = Jti KM β .
Now suppose that f (t1 , . . . , tn ) contains the variables V and α(x) =
β(x) for all x ∈ V . For each ti , the variables Vi in ti are also in t,
i.e. Vi ⊆ V . Since α(x) = β(x) for all x ∈ V , it follows that
α(x) = β(x) for all x ∈ Vi . But then, by the induction hypothesis,
we get for each ti that Jti KM α = Jti KM β . We can conclude that:

Jf (t1 , . . . , tn )KM α = f M (Jt1 KM α , . . . , Jtn KM α ) = f M (Jt1 KM β , . . . , Jtn KM β ) = Jf (t1 , . . . , tn )KM β

This concludes our induction.

The Locality Lemma essentially states that the value of a term under
an assignment only depends on the values the assignment gives to the
variables in the term. In fact, we can infer the following corollary about
ground terms, i.e. terms without variables in them
Corollary (Ground Terms Lemma). Let M be a model and t ∈ T a
ground term. Then for all assignments α, β, we have JtKM M
α = JtKβ .

Proof. Exercise 9.7.3.

9.2.10 This passage has been added for clarification purposes.


There was some confusion about how to solve exercise 9.7.2, also
among the TAs. So, I suppose I should have been clearer on the dif-
ferent variants of inductive proof, and so I’m trying to remedy the
situation here. Remember that for every inductively defined set, we
have a corresponding principle for proof by induction. The idea is that
if we can establish that all the basic or initial elements of the set have
a property and the property is preserved under the constructions, then
all elements of the set have the property. We’re mainly relying on two
versions of this proof principle: induction on terms and induction on
formulas. But sometimes, we want to prove things about inductively
defined subsets of the terms or formulas. This exercise is such a case.
In this exercise, we want to prove that all terms of a certain form,
namely numerals of the form n = S(. . . S(0) . . .), with n occurrences
of S, have a certain semantic property: a given denotation. In order to use proof by induc-
tion for this purpose, we need to recognize that the terms in question

have an inductive structure: their set can be defined by induction. The


definition, in this case, is simple: the initial element is the term (!) 0
and the construction is writing the function symbol S in front of a
term. Let’s call the terms constructed in this way natural numerals.
The corresponding induction principle for the construction of natu-
ral numerals states that if the natural numeral 0 has a property, and
if a natural numeral n has the property then the numeral S(n) has
the property, then all natural numerals have the property in question.
Here is the solution for exercise i) of 9.7.2:
Lemma (9.7.2.i). Let M be the standard model of PA (as defined
in 9.2.2) and α an arbitrary assignment. Then for all terms of the form
n ∈ T , we have JnKM α = n.

Proof. We prove this fact using induction on the natural numerals,
i.e. terms of the form n = S(. . . S(0) . . .). For the base case, consider
the term 0. By definition, we have that J0KM α = 0M = 0. Now, for
the induction step, assume the induction hypothesis: that for the term
n = S(. . . S(0) . . .) we have JnKM α = n. We need to show that for
the term S(n) = n + 1, we have JS(n)KM α = n + 1. Now note that
JS(n)KM α = S M (JnKM α ) = JnKM α + 1. But by the induction hypothesis,
we have JnKM α = n, so we have JS(n)KM α = n + 1, as desired.

Note that we can use similar methods for other inductively definable
subsets of terms or formulas. We can, for example, prove facts about
all terms without variables (i.e. ground terms) by showing that all
constants have the property and the property is preserved under ap-
plying function symbols. Or, we can show that all formulas with an
even number of negations have a property by showing that all atomic
formulas have the property and that the property is preserved under
writing two negations in front of a formula. In the following, we shall
often (sometimes implicitly) make use of such “restricted” forms of
induction on terms or formulas.
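The lemma can also be checked computationally for small n. A Python sketch (the tuple encoding of numeral terms is ours): build the numeral with n occurrences of S and evaluate it in the standard model.

```python
# A computational sketch (encoding ours) of Lemma 9.7.2.i: in the
# standard model, the numeral with n occurrences of S denotes n.

def numeral(n):
    """Build the term n: "0", ("S", "0"), ("S", ("S", "0")), ..."""
    term = "0"
    for _ in range(n):
        term = ("S", term)
    return term

def denote(term):
    if term == "0":
        return 0                  # 0^M = 0
    _, inner = term
    return denote(inner) + 1      # S^M(k) = k + 1

print([denote(numeral(n)) for n in range(5)])  # [0, 1, 2, 3, 4]
```

The two functions mirror the two halves of the induction: `numeral` is the inductive construction, `denote` the property being proved about it.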

9.3 Truth in a Model


9.3.1 Using the ideas from §9.1, we can now define the truth-value JφKM α of
a formula φ under an assignment α in a model M by the following
recursion:
(i) (a) JR(t1 , . . . , tn )KM α = 1 if (Jt1 KM α , . . . , Jtn KM α ) ∈ RM , and 0 otherwise
(b) Jt1 = t2 KM α = 1 if Jt1 KM α = Jt2 KM α , and 0 otherwise

(ii) (a) J¬φKM α = f¬ (JφKM α )
(b) J(φ ◦ ψ)KM α = f◦ (JφKM α , JψKM α ) for ◦ = ∧, ∨, →, ↔
(c) J∃xφKM α = max({JφKM α[x7→d] : d ∈ DM })
J∀xφKM α = min({JφKM α[x7→d] : d ∈ DM })

Note that we’re using the min and max functions here as functions
defined on (non-empty) sets of truth-values X ⊆ {0, 1}, i.e. min(X)
is the smallest element of X and max(X) is the biggest element of X.
More explicitly, we have max({0}) = 0, max({1}) = 1, max({1, 0}) =
1, and min({0}) = 0, min({1}) = 1, min({1, 0}) = 0. It might look
like {JφKM α[x7→d] : d ∈ DM } is a (possibly) quite big set, depending
on the size of DM . But note that each of the individual values
JφKM α[x7→d] is either 0 or 1. Since multiplicity doesn’t matter in sets, the
set {JφKM α[x7→d] : d ∈ DM } is either {0}, {1}, or {0, 1}.
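For finite models, this definition is directly executable. Here is a Python sketch for a fragment of the language (atomic predicates, ¬, ∧, ∃, ∀; formulas as nested tuples, encoding ours), with max and min used exactly as in clause (ii.c):

```python
# A sketch of Definition 9.3.1 for a fragment of the language over a
# finite model (encoding ours). Predicates are stored as sets of
# tuples, so the unary P^M contains 1-tuples.

def holds(phi, model, alpha):
    D, preds = model["D"], model["preds"]
    op = phi[0]
    if op == "not":
        return 1 - holds(phi[1], model, alpha)
    if op == "and":
        return min(holds(phi[1], model, alpha), holds(phi[2], model, alpha))
    if op == "exists":   # clause (ii.c): max over all changes of x
        _, x, body = phi
        return max(holds(body, model, {**alpha, x: d}) for d in D)
    if op == "forall":   # clause (ii.c): min over all changes of x
        _, x, body = phi
        return min(holds(body, model, {**alpha, x: d}) for d in D)
    # atomic formula R(x1, ..., xn) with variables only, for brevity
    pred, *variables = phi
    return 1 if tuple(alpha[v] for v in variables) in preds[pred] else 0

# The predicate part of the abstract model (9.2.2.iv):
M = {"D": {1, 2, 3, 4},
     "preds": {"P": {(1,), (3,)},
               "R": {(1, 1), (1, 2), (2, 2), (2, 3), (3, 3)}}}
alpha = {"x": 1, "y": 3, "z": 4}

print(holds(("P", "x"), M, alpha))                        # 1
print(holds(("P", "z"), M, alpha))                        # 0
print(holds(("exists", "y", ("R", "y", "y")), M, alpha))  # 1
print(holds(("forall", "x", ("P", "x")), M, alpha))       # 0
```

The quantifier branches build exactly the set of truth-values {JφK under α[x ↦ d] : d ∈ D} described above and take its max or min.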

9.3.2 Just like in (5.1.11), we can define truth in a model under an assignment
as a property of formulas. Here, we don’t do this as an alternative
definition, as we did in §5.1, but rather, we take the property M, α  φ
of a formula being true in a model under an assignment to be defined
as follows:

ˆ M, α  φ iff JφKM α = 1

9.3.3 Using this definition, we can provide the following lemma, which has
the potential to make Definition 9.3.1 more transparent:

Lemma. For every model M and assignment α, we have:

(i) M, α  R(t1 , . . . , tn ) iff (Jt1 KM α , . . . , Jtn KM α ) ∈ RM
(ii) M, α  t1 = t2 iff Jt1 KM α = Jt2 KM α
(iii) M, α  ¬φ iff M, α 2 φ
(iv) M, α  (φ ∧ ψ) iff M, α  φ and M, α  ψ
(v) M, α  (φ ∨ ψ) iff M, α  φ or M, α  ψ
(vi) M, α  (φ → ψ) iff M, α 2 φ or M, α  ψ
(vii) M, α  (φ ↔ ψ) iff either M, α  φ and M, α  ψ, or M, α 2 φ
and M, α 2 ψ.
(viii) M, α  ∃xφ iff there exists a d ∈ DM , such that M, α[x 7→ d]  φ
(ix) M, α  ∀xφ iff for all d ∈ DM , we have M, α[x 7→ d]  φ

Proof. By a straightforward induction on complexity, which is left as
an exercise.

Note that clauses (viii) and (ix) are, more or less, explicitly the clauses
we gave as our motivation in §9.1.

9.3.4 Examples: Here are some examples of truth in a model.

(i) First, let’s take the standard model (9.2.2.i.a) of SP A . Let’s take
an assignment α with α(x) = 0, α(y) = 1, α(z) = 2. We get
(a) M, α  S(0) = y
To see this, simply note that JS(0)KM α = S M (J0KM α ) = S M (0) =
1 and JyKM α = α(y) = 1.
(b) M, α 2 S(0) = x
This follows from the previous observation that JS(0)KM α = 1
and JxKM α = α(x) = 0.
(c) M, α  S(0) 6= x
This follows from the previous by propositional reasoning.
(d) M, α  (S(0) 6= x ∨ 4 6= 4)
This follows from the previous by propositional reasoning.
(e) M, α  S(0) = x → 1 6= 1
Follows from (b) and propositional reasoning.
(f) M, α  ∀x(S(x) 6= 0)
We need to show that for each n ∈ DM = N, we have
M, α[x 7→ n]  S(x) 6= 0. So let n ∈ N be an arbitrary num-
ber. We know that M, α[x 7→ n]  S(x) 6= 0 iff M, α[x 7→
n] 2 S(x) = 0. So, we can use indirect proof to establish
that M, α[x 7→ n]  S(x) 6= 0 by leading M, α[x 7→ n] 
S(x) = 0 to a contradiction. So, assume M, α[x 7→ n] 
S(x) = 0. It follows that JS(x)KM α[x7→n] = J0KM α[x7→n] . We have
JS(x)KM α[x7→n] = S M (n) = n + 1. And we have 0M = 0. So
we get that n + 1 = 0. But we know that there is no natural


number n ∈ N such that n + 1 = 0. So, we can conclude that
M, α[x 7→ n]  S(x) 6= 0, as desired.
(g) M, α  ∃xS(x) = S(S(0))
To show this, we need to establish that there exists an n ∈ N
such that M, α[x 7→ n]  S(x) = S(S(0)). We can easily
check that JS(S(0))KM α = 2. So, we let n = 1. For n = 1,
we have JS(x)KM α[x7→1] = S M (α[x 7→ 1](x)) = S M (1) = 2. By
Lemma 9.2.9, we have JS(S(0))KM α[x7→1] = 2 since JS(S(0))KM α =
2 and S(S(0)) is a ground term. Hence:

JS(x)KM α[x7→1] = JS(S(0))KM α[x7→1] ,

as desired.

(h) M, α  ∀x∃yx · y = x.
This formula involves a nested quantifier. Let’s first unfold
what we have to show: M, α  ∀x∃yx · y = x iff for all
n ∈ N, we have M, α[x 7→ n]  ∃yx · y = x. And we have
M, α[x 7→ n]  ∃yx · y = x iff there exists an m ∈ N such
that M, α[x 7→ n, y 7→ m]  x · y = x. So, what we need to
show in order to prove that M, α  ∀x∃yx · y = x is that
for each n ∈ N, there exists an m ∈ N such that M, α[x 7→
n, y 7→ m]  x · y = x. So let n ∈ N be arbitrary. Now if we
set m = 1, then we get

Jx · yKM α[x7→n,y7→1] = α[x 7→ n, y 7→ 1](x) · α[x 7→ n, y 7→ 1](y) = n · 1 = n

But surely, also JxKM α[x7→n,y7→1] = n. So, we have

Jx · yKM α[x7→n,y7→1] = JxKM α[x7→n,y7→1] ,

meaning M, α[x 7→ n, y 7→ 1]  x · y = x. So, we have
M, α[x 7→ n]  ∃yx · y = x and, since n was arbitrary, M, α 
∀x∃yx · y = x, as desired.
(ii) Let’s consider some examples in the abstract structure (9.2.2.iv)
under the assignment α(x) = 1, α(y) = 3, and α(z) = 4.
(a) M, α  P (a)
Simply note that JaKM α = aM = 1 ∈ {1, 3} = P M
(b) M, α  P (x)
Simply note that JxKM α = α(x) = 1 ∈ {1, 3} = P M
(c) M, α 2 P (z)
Simply note that JzKM α = α(z) = 4 ∉ {1, 3} = P M
(d) M, α  R(x, x)
First, remember that JxKM α = 1. It follows that (JxKM α , JxKM α ) =
(1, 1) ∈ {(1, 1), (1, 2), (2, 2), (2, 3), (3, 3)} = RM .
(e) M, α  ∃y(y 6= x ∧ R(y, y))
We need to show that there exists a d ∈ DM such that
M, α[y 7→ d]  y 6= x∧R(y, y). Let d = 3. We get JyKM α[y7→3] =
α[y 7→ 3](y) = 3 and 3 6= 1 = α[y 7→ 3](x). Hence M, α[y 7→
3]  y 6= x. Further, since JyKM α[y7→3] = 3, we have that
(JyKM α[y7→3] , JyKM α[y7→3] ) = (3, 3) ∈ RM . So, we have M, α[y 7→
3]  R(y, y). At this point, we have M, α[y 7→ 3]  y 6=
x ∧ R(y, y). We get M, α  ∃y(y 6= x ∧ R(y, y)), as desired.
(f) M, α  ∀xP (g(a, x))

We need to show for each d ∈ DM that M, α[x 7→ d] 
P (g(a, x)). We will do this by showing that for each d ∈ DM ,
we have Jg(a, x)KM α[x7→d] = 1. Since 1 ∈ P M , the claim follows.
Why should it be that Jg(a, x)KM α[x7→d] = 1? Well, we
know that Jg(a, x)KM α[x7→d] = g M (JaKM α[x7→d] , JxKM α[x7→d] );
since g M (x, y) = min(x, y), JaKM α[x7→d] = aM = 1, and JxKM α[x7→d] =
α[x 7→ d](x) = d, we get Jg(a, x)KM α[x7→d] = min(1, d). Now,
DM = {1, 2, 3, 4}, so for each d ∈ DM , we have Jg(a, x)KM α[x7→d] =
min(1, d) = 1. But that’s all we needed to show.
(g) M, α  ∀xP (a).
We need to show for each d ∈ DM that M, α[x 7→
d]  P (a). But the value of P (a) is the same under each
assignment: the proof of M, α  P (a) doesn’t depend on α.
So, clearly, for each d ∈ DM , M, α[x 7→ d]  P (a).

9.3.5 Note that in order to show that quantified claims are true in a model
under an assignment we actually need to do some work. A stark con-
trast between first-order logic and propositional logic is that in the
latter, we can simply calculate the truth-value of a formula under an
assignment without too much effort. In first-order logic, in contrast,
the definition of truth in a model under an assignment is recursive
and can thus be calculated, but it is not always easy to do so; we often
need to prove non-trivial claims to establish that a quantified claim is
true in a model.

9.3.6 Nested quantifiers, as in example 9.3.4.i.h, are somewhat tricky to wrap
your head around. Here are some reading guidelines that might help
you understand what’s going on:

(a) M, α  ∃x∀yφ iff there is a change of value of x such that for all
subsequent changes of y which keep x fixed, φ is true.
(b) M, α 2 ∃x∀yφ iff for all changes of x there is a subsequent change
of y (which keeps x the same) such that φ becomes false.
(c) M, α  ∀x∀yφ iff for all changes of x and subsequent changes of
y, φ is true.
(d) M, α 2 ∀x∀yφ iff for some change of x’s value there is a change of
y’s value which keeps x’s value fixed and makes φ false.
(e) M, α  ∃x∃yφ iff for some change of x’s value there is a subsequent
change of y’s value, which keeps the value of x fixed and makes φ
true
(f) M, α 2 ∃x∃yφ iff for all changes of the values of x and subsequent
changes of y, φ is false.

(g) M, α  ∀x∃yφ iff for all changes of x’s value there is a change of
y’s value that leaves x fixed and makes φ true
(h) M, α 2 ∀x∃yφ iff there exists a value for x such that for all sub-
sequent changes in the value of y (keeping x fixed), φ becomes
false

These clauses can be used to help you think about what you need to
show in order to establish whether a complex quantified claim is true.

9.3.7 We conclude our discussion of truth in a model by proving that
sentences, that is, formulas with no free variables, have determinate
truth-values, i.e. their truth-values don’t depend on assignments:

Proposition (Sentence Lemma). Let M be a model and φ ∈ L a sentence
(i.e. a formula with no free variables). Then for all assignments
α, β, we have JφKM α = JφKM β .

We’re actually going to prove something slightly stronger:

Lemma (Formula Locality Lemma). Let M be a model and φ ∈ L a
formula whose free variables form the set V . Then for all assignments
α and β, if α(x) = β(x) for all x ∈ V , then JφKM α = JφKM β .

Proof. We prove the claim using induction.

(i) Base cases:


(a) Note that if the free variables in R(t1 , . . . , tn ) form the set V ,
then for each term ti , the free variables in ti are all in V . So, if
α(x) = β(x) for all x ∈ V , we can infer using the Term Locality
Lemma that Jti KM α = Jti KM β for each i. From this the claim
quickly follows. For note that we get that (Jt1 KM α , . . . , Jtn KM α ) ∈
RM iff (Jt1 KM β , . . . , Jtn KM β ) ∈ RM . Since JR(t1 , . . . , tn )KM α = 1
if (Jt1 KM α , . . . , Jtn KM α ) ∈ RM and 0 otherwise, and likewise
JR(t1 , . . . , tn )KM β = 1 if (Jt1 KM β , . . . , Jtn KM β ) ∈ RM and 0
otherwise, the claim follows immediately.


(b) The case for t1 = t2 is completely analogous to (a) except
that there are just two terms involved.

(ii) Induction Steps:


(a) Suppose the induction hypothesis that if the free variables in
φ are in V and α(x) = β(x) for all x ∈ V , then JφKM α = JφKM β .
Now consider ¬φ. Clearly, the free variables in ¬φ are the
same as in φ. So, we can conclude that JφKM α = JφKM β by the
induction hypothesis. Since further J¬φKM α = 1 − JφKM α and
J¬φKM β = 1 − JφKM β , the claim follows as desired.
(b) The case for (φ ◦ ψ) works similarly to the case for ¬φ and is
left as an exercise.
(c) Suppose the induction hypothesis that if the free variables in
φ are in V and α(x) = β(x) for all x ∈ V , then JφKM α = JφKM β .
We only consider ∀yφ and leave ∃yφ as an exercise. Suppose
that the free variables in ∀yφ form the set V and α(x) = β(x)
for all x ∈ V . Then, the free variables in φ form the set V ∪ {y}
(or V if y does not occur free in φ, but then the proof is even
easier). Now we distinguish two cases: (i) J∀yφKM α = 1 or (ii)
J∀yφKM α = 0.
ˆ From J∀yφKM α = 1, it follows that for all d ∈ DM we have
M, α[y 7→ d]  φ. But for each d, consider β[y 7→ d].
Since α(x) = β(x) for all x ∈ V , we have that α[y 7→
d](x) = β[y 7→ d](x) for all x ∈ V . Moreover, α[y 7→
d](y) = d = β[y 7→ d](y). Hence α[y 7→ d](x) = β[y 7→
d](x) for all x ∈ V ∪ {y}. But by the induction hypothesis,
this means that JφKM α[y7→d] = JφKM β[y7→d] = 1 for each d.
So, J∀yφKM β = 1, as desired.
ˆ From J∀yφKM α = 0, it follows that for some d ∈ DM we
have M, α[y 7→ d] 2 φ. For this d, consider β[y 7→ d].
Since α(x) = β(x) for all x ∈ V , we have that α[y 7→
d](x) = β[y 7→ d](x) for all x ∈ V . Moreover, α[y 7→
d](y) = d = β[y 7→ d](y). Hence α[y 7→ d](x) = β[y 7→
d](x) for all x ∈ V ∪ {y}. But by the induction hypothesis,
this means that JφKM α[y7→d] = JφKM β[y7→d] = 0. So, J∀yφKM β =
0, as desired.

This concludes our induction.

The Sentence Lemma is a simple corollary of the Formula Locality


Lemma.

9.3.8 We can use the Sentence Lemma to justify the following definition of
truth in a model:

ˆ For φ a sentence, we define M  φ as M, α  φ for an arbitrary
assignment α.



By the Sentence Lemma, if φ is true under one assignment, then φ is


true under all of them; and similarly, if φ is false under some assign-
ment, then φ is false under all of them. So, whenever we’re reasoning
about M  φ, we can supply an arbitrary assignment α as needed for
the above definitions to work.
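For finite models we can even test the Sentence Lemma empirically, by enumerating all assignments to a fixed stock of variables. The sketch below (model, encoding, and variable stock are my own assumptions) contrasts a sentence, whose truth-value is constant, with an open formula, whose truth-value varies:

```python
from itertools import product

def holds(D, P, alpha, phi):
    """model, alpha |= phi for a signature with one unary predicate P."""
    op = phi[0]
    if op == "P":
        return alpha[phi[1]] in P
    if op == "not":
        return not holds(D, P, alpha, phi[1])
    if op == "or":
        return holds(D, P, alpha, phi[1]) or holds(D, P, alpha, phi[2])
    if op == "exists":
        return any(holds(D, P, {**alpha, phi[1]: d}, phi[2]) for d in D)
    raise ValueError(op)

D, P = {1, 2, 3}, {2}
sentence = ("exists", "x", ("P", "x"))          # no free variables
open_formula = ("or", ("P", "x"), ("P", "y"))   # x and y free

def values(phi):
    """All truth-values phi takes as the assignment to x, y varies."""
    return {holds(D, P, dict(zip(("x", "y"), vals)), phi)
            for vals in product(D, repeat=2)}

print(values(sentence))      # {True}: one value under every assignment
print(values(open_formula))  # {False, True}: depends on the assignment
```

Of course this only spot-checks one model; the lemma itself is what licenses dropping α from the notation M  φ.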

9.4 Consequence and Validity


9.4.1 In this section, we discuss the notion of valid inference in first-order
logic. This makes the section one of the core sections of the lecture. At
the same time, we can be relatively brief, since all the work we’ve been
doing so far was aimed at making this part easy. So, we begin by giving
the official definition of validity in first-order logic. As we indicated
before, we define the notion for sentences:

ˆ For Γ a set of sentences and φ a sentence, we say that Γ  φ iff


for all models M, if M  ψ for all ψ ∈ Γ, then M  φ.
ˆ This gives us: Γ 2 φ iff there exists a (counter)model M, such
that M  ψ for all ψ ∈ Γ, but M 2 φ.

The notion of logical equivalence is defined just like in propositional
logic: φ  ψ means both φ  ψ and ψ  φ. Similarly, logical truth is
defined as being a consequence of the empty set, i.e.  φ means ∅  φ.
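Over finite models, the definition of Γ 2 φ suggests a brute-force countermodel search. The sketch below is my own (it only handles one unary predicate and tiny domains): it can refute an inference by exhibiting a countermodel, but finding no countermodel up to a size bound proves nothing, in line with the undecidability of first-order logic discussed in the next chapter.

```python
from itertools import combinations

def holds(D, P, alpha, phi):
    """model, alpha |= phi for one unary predicate P."""
    op = phi[0]
    if op == "P":
        return alpha[phi[1]] in P
    if op == "not":
        return not holds(D, P, alpha, phi[1])
    if op == "forall":
        return all(holds(D, P, {**alpha, phi[1]: d}, phi[2]) for d in D)
    if op == "exists":
        return any(holds(D, P, {**alpha, phi[1]: d}, phi[2]) for d in D)
    raise ValueError(op)

def countermodel(premises, conclusion, max_size=3):
    """Search models with domains of size 1..max_size for a countermodel."""
    for n in range(1, max_size + 1):              # domains are non-empty!
        D = set(range(n))
        for k in range(n + 1):
            for P in map(set, combinations(sorted(D), k)):
                if all(holds(D, P, {}, p) for p in premises) \
                        and not holds(D, P, {}, conclusion):
                    return D, P                   # premises true, conclusion false
    return None                                   # none found up to the bound

fa, ex = ("forall", "x", ("P", "x")), ("exists", "x", ("P", "x"))
print(countermodel([fa], ex))  # None: consistent with ∀xP(x) |= ∃xP(x)
print(countermodel([ex], fa))  # a countermodel, e.g. D = {0, 1} with P = {0}
```

The asymmetry is the point: a returned model settles invalidity, whereas `None` merely says "no small countermodel".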

9.4.2 Examples

(i) ∀xP (x)  P (a)


To see this, suppose that M, α  ∀xP (x) for some arbitrary
model M and assignment α. It follows that for all d ∈ DM ,
we have M, α[x 7→ d]  P (x). But aM ∈ DM . So, if we set
d = aM , we get M, α[x 7→ aM ]  P (x). But that just means
that α[x 7→ aM ](x) = aM ∈ P M , from which it immediately
follows that M, α  P (a).
(ii) P (a)  ∃xP (x)
To see this, suppose that M, α  P (a) for some arbitrary model
M and assignment α. This means that aM ∈ P M . But then, we
can simply set d = aM , and get that M, α[x 7→ aM ]  P (x) and
so M, α  ∃xP (x), as desired.
(iii) ∃x(P (x) ∧ Q(x))  ∃xP (x) ∧ ∃xQ(x)
Suppose that J∃x(P (x) ∧ Q(x))KM α = 1. That means that we can
change the value of x to some d such that JP (x) ∧ Q(x)KM α[x7→d] = 1.
Hence JP (x)KM α[x7→d] = 1 and JQ(x)KM α[x7→d] = 1. So we can
change the value of x to d such that JP (x)KM α[x7→d] = 1, meaning
J∃xP (x)KM α = 1; and we can change the value of x to d such
that JQ(x)KM α[x7→d] = 1, meaning J∃xQ(x)KM α = 1. Hence
J∃xP (x) ∧ ∃xQ(x)KM α = 1, as desired.
(iv) ∀x(P (x) → ∃yR(x, y))  ¬∃x(P (x) ∧ ∀y¬R(x, y))
Suppose that J∀x(P (x) → ∃yR(x, y))KM α = 1. This means that
for every change of x to d, JP (x) → ∃yR(x, y)KM α[x7→d] = 1. Now
suppose for proof by contradiction that J¬∃x(P (x) ∧ ∀y¬R(x, y))KM α = 0,
meaning
J∃x(P (x) ∧ ∀y¬R(x, y))KM α = 1.
Then there is a change of x to d such that JP (x) ∧ ∀y¬R(x, y)KM α[x7→d] =
1. Hence JP (x)KM α[x7→d] = 1 and J∀y¬R(x, y)KM α[x7→d] = 1. But
if JP (x)KM α[x7→d] = 1, then we must have J∃yR(x, y)KM α[x7→d] =
1, since JP (x) → ∃yR(x, y)KM α[x7→d] = 1 (by the above reasoning).
So, to take stock, we have J∀y¬R(x, y)KM α[x7→d] = 1 and
J∃yR(x, y)KM α[x7→d] = 1, which quickly leads to contradiction. For
example, as we observed in class, ∃yR(x, y) is equivalent
to ¬∀y¬R(x, y), so we get J¬∀y¬R(x, y)KM α[x7→d] = 1, which
contradicts J∀y¬R(x, y)KM α[x7→d] = 1. Hence, using proof by
contradiction, we get J¬∃x(P (x) ∧ ∀y¬R(x, y))KM α = 1, as desired.
(v)  ∀x(P (x) ∨ ¬P (x))
To see this, suppose that for some model M and assignment α,
we have M, α 2 ∀x(P (x) ∨ ¬P (x)). This means that there exists
a d ∈ DM such that M, α[x 7→ d] 2 P (x) ∨ ¬P (x). But this
means that both M, α[x 7→ d] 2 P (x) and M, α[x 7→ d] 2 ¬P (x)
and so both M, α[x 7→ d] 2 P (x) and M, α[x 7→ d]  P (x),
which is a contradiction. Hence M, α  ∀x(P (x) ∨ ¬P (x)) for all
models M and assignments α.
(vi)  ∃x x = Batman (it’s logically true that Batman exists)
Let M be an arbitrary model and α an arbitrary assignment
therein. We have BatmanM ∈ DM ; that is, the denotation of
Batman is a member of the domain. So, set d = BatmanM and
consider JxKM α[x7→BatmanM ] = BatmanM = JBatmanKM α[x7→BatmanM ] .
So Jx = BatmanKM α[x7→BatmanM ] = 1. So J∃x x = BatmanKM α = 1,
which is what we needed to show.
(vii) ∀x∃yR(x, y) 2 ∃y∀xR(x, y)
To show this, we need to provide a countermodel:
ˆ DM = N

ˆ RM = {(n, m) : n ≤ m}
It’s relatively easily checked that ∀x∃yR(x, y) is true under every
assignment in this model, since for every number there’s a bigger
number: for every n ∈ N we can pick n + 1 ∈ N and get M, α[x 7→
n, y 7→ n + 1]  R(x, y). At the same time, it’s not the case that
some number is bigger than all others: for every m ∈ N there is an
n ∈ N, namely n = m + 1, such that M, α[x 7→ n, y 7→ m] 2 R(x, y).
So ∃y∀xR(x, y) is false in the model.
(viii) ∃xP (x) ∧ ∃xQ(x) 2 ∃x(P (x) ∧ Q(x))
Here’s a countermodel:
ˆ DM = {a, b}
ˆ P M = {a}
ˆ QM = {b}
In this model, we can find the element a such that M, α[x 7→ a] 
P (x), and so M, α  ∃xP (x); and we can find the element b such
that M, α[x 7→ b]  Q(x), and so M, α  ∃xQ(x). But neither
a nor b is such that M, α[x 7→ a]  P (x) ∧ Q(x) or M, α[x 7→
b]  P (x) ∧ Q(x)—nothing is both P and Q.
(ix) ∀x(P (x) ∨ Q(x)) 2 ∀xP (x) ∨ ∀xQ(x)
The same countermodel works:
ˆ DM = {a, b}
ˆ P M = {a}
ˆ QM = {b}
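Countermodels over finite domains are easy to check mechanically. Here is a small sketch of my own verifying that this two-element model witnesses both (viii) and (ix), with the quantifiers computed by `any`/`all`:

```python
D = {"a", "b"}
P = {"a"}
Q = {"b"}

# (viii): ∃xP(x) ∧ ∃xQ(x) true, ∃x(P(x) ∧ Q(x)) false
premise = any(d in P for d in D) and any(d in Q for d in D)
conclusion = any(d in P and d in Q for d in D)
print(premise, conclusion)  # True False

# (ix): ∀x(P(x) ∨ Q(x)) true, ∀xP(x) ∨ ∀xQ(x) false
premise9 = all(d in P or d in Q for d in D)
conclusion9 = all(d in P for d in D) or all(d in Q for d in D)
print(premise9, conclusion9)  # True False
```

In both cases the premise quantifies "distributively" over the domain while the conclusion demands a single witness (or a single predicate) to do all the work, and the split model {a} / {b} exposes the difference.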

9.4.3 We note that all the laws of classical propositional logic are valid in
first-order logic (cf. 5.2.6). Additionally, we can prove the following
logical laws concerning the quantifiers:

Proposition (Quantifier Laws). For all formulas φ and ψ, we have:

(i) ∀xφ  (φ)[x := t] where t is a ground term


(ii) (φ)[x := t]  ∃xφ where t is a ground term
(iii) ∀xφ  ∃xφ
(iv) ∀xφ  ¬∃x¬φ
(v) ∃xφ  ¬∀x¬φ
(vi) ∀x∀yφ  ∀y∀xφ
(vii) ∃x∃yφ  ∃y∃xφ
(viii) ∃x∀yφ  ∀y∃xφ
(ix) (∀xφ ∧ ∀xψ)  ∀x(φ ∧ ψ)
(x) (∃xφ ∨ ∃xψ)  ∃x(φ ∨ ψ)

(xi) ∀xφ ∨ ∀xψ  ∀x(φ ∨ ψ)


(xii) ∃x(φ ∧ ψ)  ∃xφ ∧ ∃xψ
(xiii) (φ → ∀xψ)  ∀x(φ → ψ) if x is not free in φ
(xiv) (φ → ∃xψ)  ∃x(φ → ψ) if x is not free in φ
(xv) (∀xφ → ψ)  ∃x(φ → ψ) if x is not free in ψ
(xvi) (∃xφ → ψ)  ∀x(φ → ψ) if x is not free in ψ

Proof. We only prove (iii) and leave the rest as very useful exercises.

(iii) This law holds because we stipulated that DM 6= ∅ in Definition
9.2.1. Since DM 6= ∅, if J∀xφKM α = 1, i.e. for all d ∈ DM we
have JφKM α[x7→d] = 1, we can always pick a d ∈ DM such that
JφKM α[x7→d] = 1. But that just means J∃xφKM α = 1.

9.4.3 The law ∀xφ  ∃xφ might seem strange, but the underlying assumption
that leads to it, DM 6= ∅, is necessary to get some important
logical laws to work. For example, we clearly want that ∀xP (x) 
P (a) (9.4.2.i): if everybody passes, then you pass. But if we’d allow
for DM = ∅, this law could fail. For simply consider a model with
DM = ∅. In that model ∀xP (x) would be trivially true: for every
d ∈ DM , we’d have that M, α[x 7→ d]  P (x)—vacuously, since
there is no such d. But since DM = ∅, we can’t have aM ∈ DM and
so also not aM ∈ P M , which means that M 2 P (a).
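Python's `all` and `any` mirror the quantifier clauses exactly, including the vacuous-truth behaviour on an empty domain that the stipulation DM 6= ∅ rules out. A small illustration (the encoding is mine):

```python
# The universal clause: phi holds for *every* d in D; vacuously true if D = ∅.
def forall_P(D, P):
    return all(d in P for d in D)

# The existential clause: phi holds for *some* d in D; false if D = ∅.
def exists_P(D, P):
    return any(d in P for d in D)

# On the (forbidden) empty domain, forall and exists come apart:
print(forall_P(set(), set()), exists_P(set(), set()))  # True False
# On a non-empty domain with P empty, both are false:
print(forall_P({1}, set()), exists_P({1}, set()))      # False False
```

So with an empty domain, ∀xP (x) would be true while ∃xP (x) (and P (a)) would be false, exactly the failure the non-emptiness stipulation blocks.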

9.4.4 Next, we observe that the Deduction Theorem and the I Can’t Get
No Satisfaction Theorem both hold for first-order logic as well:

Theorem (Deduction Theorem). Let φ, ψ ∈ L be formulas and Γ ⊆ L


a set of formulas. Then the following two are equivalent:

1. Γ ∪ {φ}  ψ
2. Γ  φ → ψ

Proof. Exactly as in 5.2.14.

Theorem (I Can’t Get No Satisfaction). Let Γ ⊆ L be a set of for-


mulas and φ ∈ L a formula. Then, the following are equivalent:

1. Γ  φ
2. Γ ∪ {¬φ} is unsatisfiable

Proof. Exactly as in 6.2.6.



However, as we’ll see in the next chapter, the deduction theorem


doesn’t give us decidability anymore. We can use it, however, to derive
interesting logical truths, such as

 ∀xφ → (φ)[x := t]

for t a ground term, which we can infer directly from 9.4.3.i. The I
Can’t Get No Satisfaction Theorem, instead, will play the same role
in first-order logic as in propositional logic: it’s the foundation of the
tableau method, which we’ll discuss in the next chapter.

9.4.5 We conclude this chapter with a long example in which we’re going
to prove the correct answer for the Albert, Betty, Charles puzzle from
the first lecture. Here we go:

ˆ Consider the signature S = ({a, b, c}, ∅, {M 1 , L2 }).


ˆ Our intended reading is that M stands for “. . . is married”, L
stands for “. . . looks at . . . ”, a stands for “Albert,” b stands for
“Betty,” and c stands for “Charles.”
ˆ Claim:

¬M (a), M (c), L(c, b), L(b, a)  ∃x∃y(M (x) ∧ ¬M (y) ∧ L(x, y)).

ˆ Proof :
Let M be a model and α an arbitrary assignment such that
JM (c)KM α = 1, J¬M (a)KM α = 1, JL(c, b)KM α = 1, and
JL(b, a)KM α = 1. So aM ∈/ M M , cM ∈ M M , and
(cM , bM ), (bM , aM ) ∈ LM . We have that
J∃x∃y(M (x) ∧ ¬M (y) ∧ L(x, y))KM α = 1 holds iff there are
changes for x to d and y to d0 such that

J(M (x) ∧ ¬M (y) ∧ L(x, y))KM α[x7→d,y7→d0 ] = 1.

Now, we know that either (i) bM ∈ M M or (ii) bM ∈/ M M .
– If (i) bM ∈ M M , then we can set d = bM and d0 = aM .
We’d get d ∈ M M and so JM (x)KM α[x7→d,y7→d0 ] = 1; d0 ∈/ M M
and so J¬M (y)KM α[x7→d,y7→d0 ] = 1; and (d, d0 ) ∈ LM and so
JL(x, y)KM α[x7→d,y7→d0 ] = 1; giving us J(M (x) ∧ ¬M (y) ∧
L(x, y))KM α[x7→d,y7→d0 ] = 1.
– If (ii) bM ∈/ M M , we can set d = cM and d0 = bM . In a
similar way, we get J(M (x) ∧ ¬M (y) ∧ L(x, y))KM α[x7→d,y7→d0 ] = 1.
Either way, there are changes for x and y making the matrix true,
so we get J∃x∃y(M (x) ∧ ¬M (y) ∧ L(x, y))KM α = 1, which is
what we wanted to show.
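The case distinction in this proof can also be checked by brute force: whichever way Betty's marital status goes, some married person looks at some unmarried person. The encoding below is my own:

```python
looks_at = {("c", "b"), ("b", "a")}          # Charles looks at Betty; Betty at Albert
married_facts = {"a": False, "c": True}      # Albert unmarried, Charles married

results = []
for betty_married in (True, False):          # Betty's status is the unknown
    married = dict(married_facts, b=betty_married)
    # Is there a pair (x, y) with x married, y unmarried, and x looking at y?
    witness = any(married[x] and not married[y] for (x, y) in looks_at)
    results.append(witness)
print(results)  # [True, True]: the conclusion holds either way
```

If Betty is married, the witness pair is (b, a); if she isn't, it is (c, b) — exactly cases (i) and (ii) above.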


9.5 Core Ideas


ˆ A model interprets the signature by assigning a denotation to every
constant, a function to every function symbol, and an n-ary relation to
every n-ary relation symbol.

ˆ An assignment in a model tells us what the variables denote. It plays


the role of the context in natural language.

ˆ We can recursively calculate the denotation of arbitrary terms in a


model under an assignment.

ˆ We can recursively calculate the truth-value of a formula relative to a
model under an assignment:

– a universally quantified claim is true in a model under an
assignment iff the formula after the quantifier remains true for
every possible change of the value of the variable in the assignment
– an existentially quantified claim is true in a model under an
assignment iff the formula after the quantifier becomes true for at
least one possible change of the value of the variable in the assignment

ˆ Validity is defined, as in every logic, as truth-preservation across models.

ˆ The Deduction Theorem holds for first-order logic but doesn’t lead to
decidability.

9.6 Self Study Questions


9.6.1 Which of the following entails that M, α  ∀x(P (x) → Q(x))?

(a) There exists no d ∈ DM such that d ∈ P M .
(b) There exists no d ∈ DM such that d ∈ QM .
(c) There exists no d ∈ DM such that d ∈ P M and d ∈ QM .
(d) There exists no d ∈ DM such that d ∈ P M and d ∈/ QM .
(e) For all d ∈ DM , it holds that d ∈ P M .
(f) For all d ∈ DM , it holds that d ∈ QM .
(g) For all d ∈ DM , it holds that if d ∈ P M , then d ∈ QM .
(h) For all d ∈ DM , it holds that if d ∈ QM , then d ∈ P M .
(i) For all d ∈ DM , it holds that if d ∈/ P M , then d ∈/ QM .
(j) For all d ∈ DM , it holds that if d ∈/ QM , then d ∈/ P M .

9.6.2 Which of the following entails that M, α 2 ∀x(P (x) → Q(x))?



(a) There exists a d ∈ DM such that d ∈/ P M .
(b) There exists a d ∈ DM such that d ∈/ QM .
(c) There exists a d ∈ DM such that d ∈ P M and d ∈ QM .
(d) There exists a d ∈ DM such that d ∈ P M and d ∈/ QM .
(e) For all d ∈ DM , it holds that d ∈ P M .
(f) For all d ∈ DM , it holds that d ∈ QM .
(g) For all d ∈ DM , it holds that if d ∈ P M , then d ∈ QM .
(h) For all d ∈ DM , it holds that if d ∈ QM , then d ∈ P M .
(i) For all d ∈ DM , it holds that if d ∈/ P M , then d ∈/ QM .
(j) For all d ∈ DM , it holds that if d ∈/ QM , then d ∈/ P M .

9.6.3 Which of the following entails that M, α  ∃x(P (x) → Q(x))?

(a) There exists a d ∈ DM such that d ∈ P M .
(b) There exists a d ∈ DM such that d ∈ QM .
(c) There exists a d ∈ DM such that d ∈ P M and d ∈ QM .
(d) There exists a d ∈ DM such that d ∈/ P M .
(e) There exists a d ∈ DM such that d ∈/ QM .
(f) For all d ∈ DM , if d ∈ P M , then d ∈ QM .
(g) For all d ∈ DM , if d ∈ QM , then d ∈ P M .
(h) For all d ∈ DM , d ∈ P M .
(i) For all d ∈ DM , d ∈/ P M .
(j) For all d ∈ DM , d ∈ QM .
(k) For all d ∈ DM , d ∈/ QM .

9.7 Exercises
9.7.1 Determine the denotation of the following terms in the model M from
(9.2.2.i.d) under the assignment α(xi ) = 2i + 1 for i ∈ N:

(a) x2
(b) S(x2 )
(c) (x1 + x3 )
(d) S(S(S(x0 )))
(e) S(0 · x1 )
(f) 2 + 2
(g) [h] (x1 · x2 ) + x3

(h) 0 + 0
(i) (0 · 0) + 1 (you can write down a shorthand version)
(j) 42

9.7.2 (a) [h] Prove, using induction on terms, that in model (9.2.2.i.a) of
SP A , we have JnKM α = n, for all assignments α.
(b) Prove, using induction on terms, that in model (9.2.2.i.b) of SP A ,
we have JnKM α = 2 · n, for all assignments α.
(c) Prove, using induction on terms, that in model (9.2.2.i.d) of SP A ,
we have JnKM α = 42, for all assignments α.

9.7.3 Prove the Ground Terms Lemma as a corollary of the Term Locality
Lemma.

9.7.4 [6 x] Explain why and how the law of bivalence holds on the first-order
semantics.

9.7.5 Determine whether the following claims hold in the standard model
(9.2.2.i.a) of SP A under the assignment α(x) = 1, α(y) = 2, α(z) = 3:

(a) M, α  x = 1
(b) M, α  S(x) = S(S(x))
(c) M, α  2 + 2 = 4
(d) M, α  1 · 1 = 0
(e) M, α  ∀xS(x) 6= 0
(f) M, α  ((2 · 2) = 5 ∧ S(44) = 7)
(g) [h] M, α  ∀x∀y(S(x) = S(y) → x = y)
(h) M, α  ∀x∀y(S(x) = (y + 1) → S(x) = S(y))
(i) M, α  ∀x∃yS(x) = y
(j) M, α  ∃x∀yS(x) = y

9.7.6 Take the model (9.2.2.iii.a) for S∈ . Consider the assignment α with
α(x) = {x : x is even} and α(y) = {x : x is odd}. Prove the following
facts:

(a) M, α  ∃y(y ∈ x)
(b) [h] M, α  ∀x¬(x ∈ ∅)
(c) M, α  ¬∃z(z ∈ x ∧ z ∈ y)
(d) M, α  ∃z∀u(u ∈ z ↔ u ∈ x ∨ u ∈ y)
(e) M, α 2 ∀x∀y(x = y ↔ ∀z(z ∈ x ↔ z ∈ y)) (Hint: Note that
counterexamples can’t be sets!)

Hint: You will need to rely on basic number theoretic and set-theoretic
facts.

9.7.7 Prove Lemma 9.3.3.

9.7.8 Find a model that shows that {(φ)[x := t] : t ∈ T } 2 ∀xφ (cf. 9.1.11).

9.7.9 Is it the case that {(φ)[x := t] : t ∈ T }  ∃xφ? Prove it or provide a


countermodel.

9.7.10 Remember the numeric quantifiers from 8.6.7. Prove the following
facts:

(i) M  ∃x∃y(P (x)∧P (y)∧x 6= y) iff P M has at least two elements.


(ii) M  ∀x∀y∀z(P (x) ∧ P (y) ∧ P (z) → x = y ∨ x = z ∨ y = z) iff
P M has at most two elements.
(iii) M  ∃x∃y(P (x) ∧ P (y) ∧ x 6= y ∧ ∀z(P (z) → x = z ∨ y = z)) iff
P M has precisely two elements.
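These counting claims can be sanity-checked by brute force over all subsets of a small domain. In the sketch below (my own encoding) note the `x != y` conjunct: without a distinctness requirement, the "precisely two" formula would also be satisfied when P has exactly one element.

```python
from itertools import combinations, product

def precisely_two(D, P):
    """M |= ∃x∃y(P(x) ∧ P(y) ∧ x ≠ y ∧ ∀z(P(z) → x = z ∨ y = z))."""
    return any(
        x in P and y in P and x != y
        # the inner ∀z as a Python all(): every P-element is x or y
        and all(z not in P or z == x or z == y for z in D)
        for x, y in product(D, repeat=2)
    )

D = {1, 2, 3, 4}
checked = all(
    precisely_two(D, set(P)) == (len(P) == 2)
    for k in range(len(D) + 1)
    for P in combinations(sorted(D), k)
)
print(checked)  # True: the formula captures "exactly two" on this domain
```

This checks all 16 choices of P M over a four-element domain; it illustrates the claim rather than proving it for arbitrary models.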

9.7.11 This one’s a real challenge. Suppose that M is a model for a language
with a function symbol f 1 ∈ F such that:

ˆ M  ∀x∀y(f (x) = f (y) → x = y)


ˆ M  ∀xf (x) 6= x
ˆ M  ∃x¬∃yf (y) = x

Show that the domain DM cannot be finite, i.e. there is no number n


such that there are exactly n elements in DM .

9.7.12 Prove the remaining quantifier laws 9.4.3.

9.7.13 Prove the following:

(a) ∀xP (x)  ∀yP (y)


(b) ∃x∃yS(x, y)  ∃y∃xS(x, y)
(c) [h] ¬∃xP (x)  ∀x(P (x) → Q(x))
(d) ∀xP (x)  ∀x(Q(x) → P (x) ∨ R(x))

9.7.14 Check the following:

(a) [h] ∀x(P (x) → Q(x)), ∃x¬P (x)  ∀x¬Q(x)


(b) ∀x(P (x) → ∃yS(x, y))  ∀x∃y(P (x) → S(x, y))
(c) ∀xP (x) → ∀yQ(y)  ∀x(P (x) → ∀yQ(y))
(d) ∃x(P (x) → ∀yQ(y))  ∃xP (x) → ∀yQ(y)

(e)  ∀x∃yS(x, y) → ∃xS(x, x)


(f) ∃x¬∃yS(x, y)  ∃x∀yS(x, y)

9.7.15 For each of the following formulas, provide a model M+ and an as-
signment α+ such that the formula is true in the model under the
assignment, as well as a model M− and an assignment α− such that
the formula is false.

(i) R(x, y) → ∀x∀yR(x, y)


(ii) ∀x∀y(R(x, y) ∧ R(y, x) → R(x, x))
(iii) ∀x∃yR(x, y) → ∃y∀xR(x, y)

Self Study Solutions

9.6.1 (a), (d), (f), (g), (j)

9.6.2 (d)

9.6.3 (b), (c), (d), (f), (i), (j)
Chapter 10

Tableaux for First-Order Logic

10.1 Overview
10.1.1 We shall now, once more, turn to proof theory; this time for first-order
logic. As in the case of propositional logic, there are many different
kinds of proof theories for first-order logic: Hilbert calculi, Gentzen
calculi, natural deduction calculi, . . . . In this chapter, we’ll develop
a tableau calculus for first-order logic and in the next chapter we’re
going to prove soundness and completeness. As we observed at the
end of the previous chapter, the one on semantics, the I Can’t Get No
Satisfaction Theorem, which served as the theoretical foundation for
propositional tableaux, also holds for first-order logic: we have for all
formulas φ and sets of formulas Γ that:

Γ  φ iff Γ ∪ {¬φ} is unsatisfiable

The idea that we’re going to once more exploit is that we can de-
velop a syntactic method for determining whether a set of formulas
is satisfiable (viz. tableaux). By determining in a purely syntactic way
whether Γ ∪ {¬φ} is satisfiable, we get a proof theory for first-order
logic.

10.1.2 There is, however, an important limitation that we’ll have to take
into account. In propositional logic, the tableau method was purely
algorithmic: we could blindly apply the rules and would always, after
a finite amount of time, get an answer about the validity of an
inference. This gave us a route to the decidability of propositional
logic. In first-order logic, very unfortunately, things aren’t as neat:
while we can construct tableaux sort of algorithmically, it will no
longer be the case that we can blindly apply the rules and after finitely

CHAPTER 10. TABLEAUX FOR FIRST-ORDER LOGIC 247

many steps get an answer: valid or invalid. The “local” reason for this,
the reason why this happens with tableaux, is that (as you’ll see) first-
order tableaux can turn out to be infinite. An infinite tableau is one
with infinitely many nodes. But, as we’ll see, an infinite tableau can
nevertheless be complete, in the sense that every rule that can be
applied has been applied. This might seem surprising coming from
propositional logic, where you really could construct every tableau
by hand. This will no longer be the case in first-order logic: there are
tableaux that no human could ever write down.

10.1.3 A more general and “deeper” reason why we can’t construct tableaux
as in propositional logic is that first-order logic is provably undecid-
able:

Theorem (Church and Turing 1935/36). There exists no effective


algorithm that for every inference after finitely many steps correctly
determines whether the premises entail the conclusion in first-order
logic.

Proving this theorem is out of reach for the methods of this course.
In order to prove it we need to think about what an “effective
algorithm” is in the first place. You can study the methods in “Logische
Complexiteit” (KI3V12013). Here we will content ourselves with
some observations about why our tableau algorithm doesn’t lead to
decidability, which will let us glimpse at why first-order logic really
is undecidable.

10.1.4 Throughout this chapter, we will restrict ourselves to finite inferences,


i.e. inferences with only finitely many premises.1 And furthermore,
we shall limit ourselves (as in the previous chapter) to inferences in-
volving sentences only. In order to avoid an irreparable overdose of
new techniques, we’ll introduce first-order tableaux step-by-step: in
§10.2, we’ll discuss tableaux for inferences involving only the quan-
tifiers but no identity predicate and no function symbols; in §10.3,
we’ll add identity; and in §10.4, we’ll add function symbols. Without
further ado: here we go!

10.2 Simple Tableaux


10.2.1 In essence, the tableau method for first-order logic works just like
the tableau method for propositional logic: we write down the initial
list, expand it via the rules to a complete tableau, and check whether
it’s closed or open. If it’s open, we declare the inference invalid;
1 This is not where the infinitary issues mentioned above come from.

if the tableau is closed, we declare the inference valid. If you don’t


remember exactly how this works, refresh your memory using §6. For
reference, here are the rules of propositional logic again:

¬¬φ ⇒ φ
φ ∧ ψ ⇒ φ, ψ          ¬(φ ∧ ψ) ⇒ ¬φ | ¬ψ
φ ∨ ψ ⇒ φ | ψ          ¬(φ ∨ ψ) ⇒ ¬φ, ¬ψ
φ → ψ ⇒ ¬φ | ψ         ¬(φ → ψ) ⇒ φ, ¬ψ
φ ↔ ψ ⇒ φ, ψ | ¬φ, ¬ψ  ¬(φ ↔ ψ) ⇒ φ, ¬ψ | ¬φ, ψ

(Here “⇒” abbreviates the original rule diagrams: a comma lists formulas
added to the same branch, and “|” indicates that the branch splits.)

To obtain a calculus for first-order logic (without identity and func-


tions), we add to these rules for the quantifiers. But first, we need to
introduce an auxiliary concept.
10.2.2 As we said above, we restrict ourselves to inferences with only sen-
tences, i.e. formulas without free variables. It’s possible to develop
tableaux for formulas with free variables but for the purposes of this
course it’s more trouble than it’s worth. We will, however, be required
to think about “arbitrary” objects, which we’d typically do using free
variables. For this purpose, we extend our language with an infinite
number of special constants known as parameters. The set of param-
eters is denoted Par. We’ll use p, q, r, . . . as parameters or, if we run
out of (or are likely to run out of) letters, p1 , p2 , . . .. Don’t confuse
the parameters with the propositional letters, however! Parameters
work really just like constants, they can take the place of constants
in formulas and they get interpreted like constants in models. So, for
example,
∀x(P (x) → ∃y(R(x, y) ∧ R(y, p))),
will be a “formula” for the purpose of our proof-theory. Similarly, if
M is a model for our language, then for each p ∈ Par, we assign an
interpretation pM ∈ DM . The definitions of truth in a model are then
extended in the obvious way to treat parameters just like constants.
So, for example, P (p) will be true in a model M under assignment
α iff pM ∈ P M .
10.2.3 A small technical digression. Note that officially, the parameters are
not part of our language, even though we treat them as such. So P (p)

is not a formula of L, rather it’s a formula of the extended language


L+ which is defined just like L except that C + = C ∪ Par. Clearly,
every formula of L is then also a formula of L+ , i.e. L ⊆ L+ , but not
vice versa. As a result, every model M for L+ is also a model for L:
if for all formulas φ ∈ L+ we can calculate a truth-value JφKM α , then
we thereby also calculate a truth-value JφKM α for all formulas φ ∈ L.
More pedantically, we can simply treat any model M for L+ as
a model for L by “forgetting” the interpretations of the parameters.
This was basically just a sketch of the more technical background
behind the idea of parameters, which you’ll not need to worry too
much about here. You’ll see in a moment how this will all work.

10.2.4 The new rules for the quantifiers are as follows:

¬∀xϕ ¬∃xϕ

∃x¬ϕ ∀x¬ϕ

∃xϕ ∀xϕ

(ϕ)[x := p]† (ϕ)[x := a]‡

†: where p ∈ Par is any parameter not on the branch already.


‡: where a ∈ C ∪ Par is any constant or parameter already on the
branch, or an arbitrary “fresh” parameter if there are none.
The rules are read like this:

ˆ If there’s a node with a formula to which a rule can be applied


but hasn’t been applied yet, then we apply the rule by extending
every branch that goes through the node as shown by the rule.

Everything else about the tableau construction just works like in


propositional logic. In particular, a tableau is called complete iff every
rule that can be applied has been applied.
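The two quantifier rules can be sketched in code. The following is a hedged illustration of my own (tuple-encoded formulas and a naive substitution with no bound-variable bookkeeping, not an implementation from the notes), showing in particular how each application of the ∃-rule introduces a fresh parameter:

```python
from itertools import count

fresh = (f"p{i}" for i in count(1))   # the parameters p1, p2, ...

def subst(phi, x, t):
    """Naively replace variable x by term t in a tuple-encoded formula."""
    if isinstance(phi, tuple):
        return tuple(subst(part, x, t) for part in phi)
    return t if phi == x else phi

def apply_exists(branch, phi):
    """The ∃-rule: extend the branch with (φ)[x := p] for a *fresh* p."""
    _, x, body = phi
    return branch + [subst(body, x, next(fresh))]

def apply_forall(branch, phi, terms):
    """The ∀-rule: extend with (φ)[x := a] for every given constant or
    parameter (in a full tableau: all those already on the branch)."""
    _, x, body = phi
    return branch + [subst(body, x, t) for t in terms]

b = [("exists", "x", ("exists", "y", ("R", "x", "y")))]
b = apply_exists(b, b[0])   # adds ∃y R(p1, y)
b = apply_exists(b, b[1])   # adds R(p1, p2): a *different* fresh parameter
print(b[-1])                # ('R', 'p1', 'p2')
```

Note how the generator `fresh` guarantees that the second ∃-step cannot reuse p1, which is exactly the restriction † above.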

10.2.5 We will need to make a few remarks about the new rules. First, look
at the rule for ∃xφ. What this rule says is that if you have ∃xφ on
a node, then you extend every branch with (φ)[x := p], where p is
a new parameter, i.e. a parameter that hasn’t been used yet on the
branch. How can it happen that a parameter has already been used
on a branch? Easy: if there’s another existential quantifier around,
as in the following example:

∃x∃yR(x, y)

∃yR(p, y)

R(p, q)

In the first application of the rule, the application to ∃x∃yR(x, y), we
were free to choose any parameter whatsoever, since no parameter was
on the branch yet. We chose p, because obviously. But then, in the
second application of the rule, this time to ∃yR(p, y), we were limited
in our choice; we could choose anything but p. So we chose q. Note
that though there are two existential quantifiers in ∃x∃yR(x, y), we
have to apply the rules step-by-step, as in the example: first, we use
the rule to get from ∃x∃yR(x, y) to ∃yR(p, y), and then we use the
rule again to get from ∃yR(p, y) to R(p, q). In the first application
of the rule, we’ve discharged our duties with respect to ∃x∃yR(x, y),
and in the second rule, we’ve discharged our duties for ∃yR(p, y). So,
the tableau in our example is, indeed, complete. To be perfectly clear,
this:

∃x∃yR(x, y)

R(p, q)

is not allowed. Much less are the following moves allowed:

∃x∃yR(x, y) ∃x∃yR(x, y)

R(p, p) ∃yR(p, y)

R(p, p)

Intuitively, the parameter introduced by the ∃xφ rule stands for an
arbitrary object that satisfies φ, the idea being that if ∃xφ is true, then there
has to be some object that satisfies φ; but we don’t know anything
about that object other than that it satisfies φ, so we pick a fresh
parameter for it to guarantee that we don’t make any illicit assump-
tions.

10.2.6 Next, let’s talk about the rule for the universal quantifier. What’s im-
portant about this rule is that it has to be applied for every constant
or parameter on the branch. So, if we have, for example, ∀xP (x),

Q(a), and R(q, b) all together on one branch, we have to apply the
rule for a, q, and b, as illustrated below:

∀xP (x)
Q(a)
R(q, b)

P (a)

P (q)

P (b)

This tableau is complete because every rule that can be applied has
been applied. But, it might happen, that you first apply rules for
the universal quantifier and then continue applying rules which then
introduce new parameters. In such a case, also for those parameters
you have to apply the rule. Here’s an example:

∀xP (x)
∃yR(a, y)

P (a)

R(a, p)

P (p)

The point is that first, we applied the rule to ∀xP (x) for the constant
a which was the only constant or parameter at this point (when we
only had the initial list). But then, we applied the rule for ∃yR(a, y),
which introduced a new parameter, p, to the branch. At this point,
it became possible to apply the rule for ∀xP (x) again, and so we did.
So, even though the following looks like a good tree, it’s not:

∀xP (x)
∃yR(a, y)

P (a)

R(a, p)

So, my advice for doing tableaux with ∀ in them is as follows: every


time you introduce a new parameter, check if there’s a universal
quantifier rule that needs to be (re-)applied to that new parameter.
Just an ominous remark at this point: it’s this feature of the rule
for the universal quantifier that ultimately leads to infinite tableaux.
But before we discuss this (much later), let’s briefly talk about the
idea behind the rule. As in propositional logic, every branch in the
complete tableau intuitively corresponds to an interpretation that
makes all the formulas on it true. In that interpretation, there will
be an interpretation for each constant or parameter on that branch
living happily in the domain. But if ∀xφ is on the branch, then for
each of these φ needs to be true if the value of x is set to that thing.
That’s just what the rule says: for everything we talk about on a
branch with ∀xφ on it, ensure that that thing satisfies φ.
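To make the bookkeeping concrete, here is a minimal Python sketch (not part of the official calculus; the function name and string encoding are just illustrative) of how the ∀-rule instantiates a formula for every constant or parameter on a branch:

```python
def apply_universal(body, var, branch_terms):
    """Instantiate the body φ of a formula ∀xφ with every constant or
    parameter currently on the branch, yielding φ[x := t] for each t.
    Formulas are plain strings and substitution is naive string
    replacement -- fine for this toy sketch, not for real syntax."""
    return [body.replace(var, t) for t in sorted(branch_terms)]

# ∀xP(x) on a branch whose constants/parameters are a, q, and b,
# as in the example above:
print(apply_universal('P(x)', 'x', {'a', 'q', 'b'}))
# ['P(a)', 'P(b)', 'P(q)']
```

Whenever a rule introduces a fresh parameter, the function would simply be called again with the enlarged set of branch terms, which is exactly the re-application advised above.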

10.2.7 There’s one more feature of the universal quantifier rule that we
haven’t talked about: it says under ‡ that we should pick an arbi-
trary “fresh” parameter if there are no constants or parameters on
the branch. What’s meant here is that if there are no constants or
parameters on the branch but a universally quantified claim, then do
the same as in the case of the existential rule: instantiate the formula in
question with a fresh parameter that’s not yet on the branch. So, for
example, the following is such a situation:

∀xP (x)
∀x(P (x) → Q(x))

P (p)

P (p) → Q(p)

¬P (p) Q(p)

What's the idea here? Well, for one thing, in a situation like our
example, we need to get the tableau method started: if there are just a
bunch of universally quantified formulas around, nothing would
otherwise happen. But that's just the superficial reason. The "deeper" reason is
our assumption that the domain of every model is non-empty (cf.
9.2.3 and 9.4.4): in any model there needs to be at least one object
in the domain, and we introduce a fresh parameter to talk about this
object. In 9.4.3–4, we showed and discussed that the law ∀xφ ⊨ ∃xφ
depended on exactly the assumption of non-empty domains. Below,
we will see that we can derive ∀xφ ⊢ ∃xφ precisely because of the

requirement to pick an arbitrary "fresh" parameter if there are no
constants or parameters on the branch.

10.2.8 It’s already been implicit, but explicit is better than implicit, so let
me just mention that the motivation behind the new rules is the
same as in propositional logic, which we can re-formulate for first-
order logic as follows:

Down Preservation. If the formula at the parent node of a rule is
true in a model M under an assignment α, then for at least one
newly generated branch, all the formulas written according to
the rule on that branch are also true in M under α.

Up Preservation. If all the formulas on a branch that's been generated
by a rule are true in a model M under an assignment α,
then the formula at the parent node of the rule is also true in M under α.

These two principles will again together guarantee soundness and
completeness. We will verify them in the next chapter.

10.2.9 Provability is now defined as expected: A branch B in a complete
tableau is called closed iff there is some atomic formula R(t1 , . . . , tn )
such that both R(t1 , . . . , tn ), ¬R(t1 , . . . , tn ) ∈ B (note that we're not
yet dealing with identity); otherwise the branch is called open. A
complete tableau is closed iff every branch is closed, and otherwise
it's called open. Now, just like in propositional logic, we define Γ ⊢ φ
to mean that the tableau for Γ ∪ {¬φ} is closed. But enough talk,
let’s look at some examples.
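Before the examples, here is a tiny Python sketch of the closure test (an illustration only: a branch is a set of formula strings, with negation written as a '¬' prefix, and the members of interest are atoms):

```python
def is_closed(branch):
    """A branch is closed iff it contains some formula together with
    its negation.  Officially the clash must be between an atomic
    formula R(t1,...,tn) and its negation; here we simply look for
    any ¬-prefixed counterpart, which agrees with the definition
    when the positive members of the branch are atomic."""
    return any('¬' + phi in branch for phi in branch)

print(is_closed({'P(a)', '¬P(a)'}))   # True: the branch is closed
print(is_closed({'P(a)', 'Q(b)'}))    # False: the branch is open
```

A complete tableau is then closed iff this test succeeds on every branch, and Γ ⊢ φ iff the tableau for Γ ∪ {¬φ} is closed.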

10.2.10 Examples:

(i) ∀xP (x) ⊢ ∃xP (x)


∀xP (x)
¬∃xP (x)

∀x¬P (x)

¬P (p)

P (p)
✗

Note how in this example the requirement to introduce a fresh


parameter for a universal quantifier if there’s no constant or
parameter on the branch yet played a crucial role.

(ii) ∃x(P (x) ∧ Q(x)) ⊢ ∃xP (x) ∧ ∃xQ(x)


∃x(P (x) ∧ Q(x))
¬(∃xP (x) ∧ ∃xQ(x))

P (p) ∧ Q(p)

P (p)

Q(p)

¬∃xP (x) ¬∃xQ(x)

∀x¬P (x) ∀x¬Q(x)

¬P (p) ¬Q(p)
✗ ✗

(iii) ∀x∀y(R(x, y) ∨ R(y, x)) ⊢ ∀xR(x, x)


∀x∀y(R(x, y) ∨ R(y, x))
¬∀xR(x, x)

∃x¬R(x, x)

¬R(p, p)

∀y(R(p, y) ∨ R(y, p))

R(p, p) ∨ R(p, p)

R(p, p) R(p, p)
✗ ✗

(iv) ∀xP (x) ∨ ∀xQ(x) ⊢ ∀x(P (x) ∨ Q(x))


∀xP (x) ∨ ∀xQ(x)
¬∀x(P (x) ∨ Q(x))

∃x¬(P (x) ∨ Q(x))

¬(P (p) ∨ Q(p))

¬P (p)

¬Q(p)

∀xP (x) ∀xQ(x)

P (p) Q(p)
✗ ✗

(v) ∃x∀yR(x, y) ⊢ ∀y∃xR(x, y)


∃x∀yR(x, y)
¬∀y∃xR(x, y)

∀yR(p1 , y)

∃y¬∃xR(x, y)

¬∃xR(x, p2 )

∀x¬R(x, p2 )

R(p1 , p2 )

¬R(p1 , p2 )
✗

(vi) ∀x(P (x) → ∃yS(x, y)) ⊢ ∀x∃y(P (x) → S(x, y))


∀x(P (x) → ∃yS(x, y))
¬∀x∃y(P (x) → S(x, y))

∃x¬∃y(P (x) → S(x, y))

¬∃y(P (p) → S(p, y))

∀y¬(P (p) → S(p, y))

¬(P (p) → S(p, p))

P (p)

¬S(p, p)

P (p) → ∃yS(p, y)

¬P (p) ∃yS(p, y)
✗
S(p, q)

¬(P (p) → S(p, q))

P (p)

¬S(p, q)
✗

(vii) ⊢ ¬∃x(K(x) ∧ ∀y(¬S(y, y) ↔ S(x, y)))


¬¬∃x(K(x) ∧ ∀y(¬S(y, y) ↔ S(x, y)))

∃x(K(x) ∧ ∀y(¬S(y, y) ↔ S(x, y)))

(K(p) ∧ ∀y(¬S(y, y) ↔ S(p, y)))

K(p)

∀y(¬S(y, y) ↔ S(p, y))

¬S(p, p) ↔ S(p, p)

¬S(p, p) S(p, p)

S(p, p) ¬S(p, p)
✗ ✗

10.2.11 Ok, so what about underivability, i.e. Γ ⊬ φ? The definition of ⊢
automatically gives us Γ ⊬ φ iff the tableau for Γ ∪ {¬φ} is open, i.e.
there is at least one open branch. Here's an example:

∀x(P (x) ∨ Q(x))


¬(∀xP (x) ∨ ∀xQ(x))

¬∀xP (x)

¬∀xQ(x)

∃x¬P (x)

∃x¬Q(x)

¬P (p)

¬Q(q)

P (p) ∨ Q(p)

P (q) ∨ Q(q)

P (p) Q(p)

✗ P (q) Q(q)
✗

The tableau gives us that ∀x(P (x) ∨ Q(x)) ⊬ ∀xP (x) ∨ ∀xQ(x). Now,
remember that for an open branch in a complete tableau we want to have
a model that makes all its members true, the associated model. We
also get associated models in first-order logic, but it's a bit more
complicated to define them.
10.2.12 So let B be an open branch in a complete tableau. We then define
the associated model MB as follows:
(i) DMB = {a ∈ C ∪ P ar : a occurs on B}
(ii) aMB = a for all a ∈ C ∪ P ar
(iii) RMB = {(a1 , . . . , an ) : R(a1 , . . . , an ) ∈ B}
There’s something to talk about here. There’s no typo here: the ob-
jects in the domain of MB are really the constants and parameters on
B themselves. The model is what’s known in the literature as a term-
model : it’s a model built from the expressions of our language. In this

weird model, every constant and parameter is just a name for itself.
And, given that, predicates are interpreted just like the branch says
they should be. Now, if you think back of the associated valuation of
a branch in propositional logic (6.3.5), you will note that also there,
we basically let the branch tell us what to do. In first-order logic, this
is not much different. The only somewhat weird thing is that we let
constants (and parameters) denote themselves. But what’s really so
weird about this: constants and parameters are themselves objects
(they are things you can write down, etc.), so why shouldn’t they be
allowed to “live” in a model?
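As a sketch of this construction (purely illustrative: formulas are strings, negation is a '¬' prefix, and the name associated_model is my own), the associated term model can be read off a branch like this:

```python
import re

ATOM = re.compile(r'^([A-Z]\w*)\(([^)]*)\)$')

def associated_model(branch):
    """Read off the associated term model of an open branch: every
    constant/parameter occurring on the branch goes into the domain
    and denotes itself, and a predicate holds of a tuple of terms
    iff the corresponding positive atom is on the branch."""
    domain, interp = set(), {}
    for phi in branch:
        m = ATOM.match(phi.lstrip('¬'))
        if m:
            pred, args = m.group(1), tuple(m.group(2).split(','))
            domain.update(args)
            if not phi.startswith('¬'):
                interp.setdefault(pred, set()).add(args)
    return domain, interp

# The open branch of the example above contains ¬P(p), ¬Q(q), P(q), Q(p):
print(associated_model({'¬P(p)', '¬Q(q)', 'P(q)', 'Q(p)'}))
# domain {p, q}; P interpreted as {(q,)}, Q as {(p,)}
```

Just as in the text: the branch tells us what to do, and the constants and parameters live in the domain as themselves.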

10.2.13 Examples:

(i) Let's first look at the example ∀x(P (x) ∨ Q(x)) ⊬ ∀xP (x) ∨
∀xQ(x) from before:

∀x(P (x) ∨ Q(x))


¬(∀xP (x) ∨ ∀xQ(x))

¬∀xP (x)

¬∀xQ(x)

∃x¬P (x)

∃x¬Q(x)

¬P (p)

¬Q(q)

P (p) ∨ Q(p)

P (q) ∨ Q(q)

P (p) Q(p)

✗ P (q) Q(q)
✗

The constants and parameters occurring in the open branch



are p and q. So, we have DMB = {p, q}. The parameters p and
q denote themselves, that is we have pMB = p and q MB = q.
The only atomic formulas occurring on the branch are P (q) and
Q(p). Correspondingly, we get P MB = {q} and QMB = {p}.
So, to sum up, our model looks like this:
• DMB = {p, q}
• pMB = p
• qMB = q
• P MB = {q}
• QMB = {p}
Now it’s easily checked that in this model, ∀x(P (x) ∨ Q(x)) is
true: there are two things (p and q) and each of them is either
P or Q (p is Q and q is P ). But ∀xP (x) ∨ ∀xQ(x) is certainly
false: neither is everything P (p is not) nor is everything Q (q
is not).
(ii) The next cases I’ll handle a bit more quickly:
∀x∀y(R(x, y) → R(y, x)) ⊬ ∀xR(x, x)
∀x∀y(R(x, y) → R(y, x))
¬∀xR(x, x)

∃x¬R(x, x)

¬R(p, p)

∀y(R(p, y) → R(y, p))

R(p, p) → R(p, p)

¬R(p, p) R(p, p)
✗

Associated model MB :
• DMB = {p}
• pMB = p
• RMB = ∅
Note that this is perfectly fine: in a model, RM can be empty
(i.e. nothing stands in the relation R to anything).

(iii) ∃x(P (c) → P (x)) ⊬ ¬(P (c) → ∀xP (x))


∃x(P (c) → P (x))
¬¬(P (c) → ∀xP (x))

P (c) → ∀xP (x)

P (c) → P (p)

¬P (c) ∀xP (x)

¬P (c) P (p) ¬P (c) P (p)

P (c) P (c)
✗

Associated model of the right-most branch MB :


• DMB = {p, c}
• pMB = p
• cMB = c
• P MB = {p, c}
(iv) ⊬ ∀x∀y(R(x, y) → R(y, x))
¬∀x∀y(R(x, y) → R(y, x))

∃x¬∀y(R(x, y) → R(y, x))

¬∀y(R(p, y) → R(y, p))

∃y¬(R(p, y) → R(y, p))

¬(R(p, q) → R(q, p))

¬R(p, q)

R(q, p)
Associated model of the branch MB :
• DMB = {p, q}
• pMB = p
• qMB = q

• RMB = {(q, p)}

10.2.14 Just like in propositional logic, we’ll be able to show that the associ-
ated model makes all the formulas on the branch true. This will be
the completeness lemma for first-order logic: if B is an open branch
of a complete tableau, then all formulas in B are true in MB under
all assignments. For now, it’s a good idea to convince yourself that
in the above examples, this is indeed the case.

10.3 Tableaux With Identity


10.3.1 In the following two sections, we will just refine the tableau calculus
from the previous section to make it work for our full language, with
identity and function symbols. First, we add rules for identity. They
are as follows:

The a = a rule:

⋮
a = a†

The substitution rule:

a = b
σ ‡ [x := a]

σ[x := b]

†: a any parameter or constant already on the branch, or an arbitrary
fresh one.
‡: σ here is any atomic formula, i.e. a formula of the form R(t1 , . . . , tn )
or t1 = t2 .
These rules, though deceptively simple, require some commentary.

10.3.2 First, the a = a rule. What this rule says is that for each constant or
parameter a on the branch, you need to add a node with a = a. The
purpose of this rule is to allow us to get ⊢ ∀x(x = x), since clearly
⊨ ∀x(x = x)—the denotation of any term is identical to itself in every
model. The proof goes as follows:

¬∀x(x = x)

∃x¬x = x

¬p = p

p=p
✗

Note that we’ve closed the branch here because there was an atomic
formula, p = p, such that both p = p ∈ B and ¬p = p ∈ B. Now, in
principle, the rule works quite like the universal quantifier rule in that
you need to make sure you do this for every constant or parameter on
the branch, also ones that have been created later, when you thought
you were already done with this rule. But that’s clearly nuts: this
would make our proof trees quickly explode in size. This is why we
make the following convention: we only apply the rule for a = a
if (a) thereby we can close a branch (i.e. if also ¬a = a is on the
branch) or (b) we’re dealing with an open branch of an otherwise
complete tableau. The reason behind convention (a) is clear (I hope),
the reason behind (b) will become clear when we discuss associated
models.
10.3.3 Turning to the second rule, things get a bit more complicated. It
might be useful to talk about the idea first. What the rule relies on
is the fact that if two objects a and b are identical and one of them
satisfies φ, then also the other object needs to satisfy φ—the two
objects are, after all, identical. Here is a simple application of the
rule, used to show that ⊢ ∀x∀y(P (x) ∧ x = y → P (y)):

¬∀x∀y(P (x) ∧ x = y → P (y))

∃x¬∀y(P (x) ∧ x = y → P (y))

¬∀y(P (p) ∧ p = y → P (y))

∃y¬(P (p) ∧ p = y → P (y))

¬(P (p) ∧ p = q → P (q))

P (p) ∧ p = q

¬P (q)

P (p)

p=q

P (q)
✗

The crucial application of the rule is in the last step here. For this
step, it was simply noted that

P (p) = (P (x))[x := p]

P (q) = (P (x))[x := q]
Since p = q was on the branch at that point, this allowed us to get
from P (p) to P (q), and thus to close the branch. Now, you may think
that this is a bit confusing. After all (P (x))[x := p] and (P (x))[x := q]
is not what’s written on the branch, P (p) and P (q) are. How could
you have seen that the rule should have been applied like this? Well,
the answer is quite simple: what the rule really tells you is that if
you have a = b on a branch and some atomic formula with either a
in it, say, R(t1 , . . . , a, . . . , tn ), then you need to replace that a with b,
you’ll get R(t1 , . . . , b, . . . , tn ). This, of course, holds equally well the
other way around (replace b with a) and with identity claims as in
the following example, which shows that a = b, b = c ⊢ a = c:

a=b
b=c
a ≠ c

a=c
✗

Note that you really only need to replace terms for each other in
atomic formulas, not in more complex formulas and, in particular,
not in negated atomic formulas.

10.3.4 Examples:



(i) a = b ⊢ ∃xS(x, a) → (a = a ∧ ∃xS(x, b))


a=b
¬(∃xS(x, a) → (a = a ∧ ∃xS(x, b)))

∃xS(x, a)

¬(a = a ∧ ∃xS(x, b))

S(p, a)

¬(a = a) ¬∃xS(x, b)

a=a ∀x¬S(x, b)
✗
¬S(p, b)
✗

(ii) a = b ⊬ ∀x∃y(P (y) ∧ x = y)


a=b
¬∀x∃y(P (y) ∧ x = y)

∃x¬∃y(P (y) ∧ x = y)

¬∃y(P (y) ∧ p = y)

∀y¬(P (y) ∧ p = y)

¬(P (p) ∧ p = p)

¬(P (a) ∧ p = a)

¬(P (b) ∧ p = b)

¬P (p) p ≠ p

¬P (a) p ≠ a

¬P (b) p ≠ b ¬P (b) p ≠ b

p=p
✗

10.3.5 What remains to be discussed is how we can define an associated


model when identity claims are involved. There is an obvious com-
plication here. In the associated term model we defined above, every
term denotes itself. So, in that model, there are only trivial identities.
Clearly, a is the same symbol as a, so aMB = aMB , meaning
that MB ⊨ a = a (the trivial identity). But if a and b are different
constants, then they are different symbols, meaning aMB ≠ bMB , giving us
MB ⊭ a = b for all distinct a and b. But it can, of course, happen
that a = b is on a branch, as in our example (ii) above. What to do?

10.3.6 The solution is that we move slightly away from the notion of a term
model and consider models where the elements are sets of terms.
A term will now denote a set of terms, the set of terms that are,
according to the branch, identical to the initial term. Here’s how this
goes. Let B be a branch of a complete open tableau. We first define
the relation ∼B on terms by saying that:

• a ∼B b iff a = b ∈ B.

In words, a ∼B b means that a is identical to b according to B. For


each term (constant or parameter) a, we can now consider the set:

[a]∼B = {b : a ∼B b},

which contains all the terms b that are identical to a according to B.


This set, called the equivalence class of a, will be the denotation of
a in MB .
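A quick Python sketch of this definition (illustrative only: the branch is a set of positive identity strings, with the reflexive and symmetric instances that a complete tableau's identity rules would supply written out by hand):

```python
def equivalence_class(a, branch):
    """[a]~B: the set of terms b such that 'a = b' is on the branch B.
    Assumes only positive identity claims are passed in."""
    identities = {tuple(phi.replace(' ', '').split('='))
                  for phi in branch if '=' in phi}
    return {b for (x, b) in identities if x == a}

# Identity claims on a branch with a = b, plus reflexive and
# symmetric instances:
branch = {'a = a', 'b = b', 'p = p', 'a = b', 'b = a'}
print(equivalence_class('a', branch))  # the set {a, b}
print(equivalence_class('p', branch))  # the set {p}
```

On a complete open branch, the identity rules guarantee that these sets really partition the terms into equivalence classes.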

10.3.7 So, here’s the definition associated model for an open branch B in a
complete tableau with identity:

• DMB = {[a]∼B : a occurs in B}
• aMB = [a]∼B , for all a ∈ C ∪ P ar
• RMB = {([a1 ]∼B , . . . , [an ]∼B ) : R(a1 , . . . , an ) ∈ B}

This will do the job.

10.3.8 Let’s consider an example. Take the left-most branch in example


(10.3.4.ii). There are three constants or parameters that occur in the
branch: a, b, and p. We have a = b ∈ B as the only identity claim
written on the branch, but we should assume that also a = a, b = b,
and p = p are there on the branch as per convention (b) of 10.3.2.
So, we can calculate the sets [·]∼B as follows:

• [a]∼B = {a, b}
• [b]∼B = {a, b}
• [p]∼B = {p}

We now set these identity classes as the interpretations of the terms:

• aMB = [a]∼B = {a, b}
• bMB = [b]∼B = {a, b}
• pMB = [p]∼B = {p}

Note that [a]∼B = [b]∼B and so aMB = bMB , which is precisely what
we want since this means that MB ⊨ a = b and a = b ∈ B. The
only thing that remains to be determined is the interpretation of P .
But in this case, that’s easy since there is no formula of the form
P (t) ∈ B. So, we get P MB = ∅. So, all in all, we get:

• DMB = {[a]∼B , [b]∼B , [p]∼B } = {{a, b}, {p}}
• aMB = [a]∼B = {a, b}
• bMB = [b]∼B = {a, b}
• pMB = [p]∼B = {p}
• P MB = ∅

10.3.9 We’re going to conclude our discussion of tableaux with identity by


observing a lemma in preparation for our completeness proof in the
next chapter. We’re proving that lemma here, since it will help us
understand how the associated model of a branch with identity claims
works:

Lemma. Let B be an open branch of a complete tableau and MB
the associated model. Then, if a = b ∈ B, then MB ⊨ a = b.

Proof. Suppose that a = b ∈ B. What we need to show is that the
sets [a]∼B and [b]∼B are identical. This suffices because aMB = [a]∼B and
bMB = [b]∼B , and MB ⊨ a = b iff aMB = bMB . So, let's briefly remind
ourselves of the definitions of the two sets:

[a]∼B = {c : a ∼B c}

[b]∼B = {c : b ∼B c}
Now, by extensionality, what we need to show is that every element
of [a]∼B is an element of [b]∼B and vice versa. We’re only going to
prove one of the directions, since the other is completely analogous.
So assume that c ∈ [a]∼B , i.e. a = c ∈ B. We need to derive that
c ∈ [b]∼B , i.e. c = b ∈ B. We already know that a = b ∈ B and we’ve
assumed that a = c ∈ B. But the tableau we’re looking at is complete,
so every rule that can be applied has been applied, in particular every

instance of the second, substitution identity rule. Now note that a = b


is the same as (x = b)[x := a] and we have a = c ∈ B. So by an
application of the rule we get (x = b)[x := c] ∈ B. But now note that
(x = b)[x := c] is nothing else than c = b, which is precisely what we
needed to show being on the branch.

10.4 Tableaux With Functions


10.4.1 Finally, we’ll discuss how to allow for function symbols in tableaux.
Luckily, we don’t need to add any new rules. All we need to do is to
replace the talk of “constants or parameters” with the more general
talk of “ground terms.” Remember that a ground term is a term that
does not contain any variables, i.e. any constant (or parameter) a is
a ground term and so are functional combination of those, such as
f (a, g(c, p)) and the like. The only rules affected by this are the rule
for the universal quantifier and the identity rules, so let’s just restate
them in the new, official form:

The universal quantifier rule:

∀xϕ

(ϕ)[x := t]†

The identity rules:

⋮
t = t†

s = t
σ ‡ [x := s]

σ[x := t]

†: where t is any ground term already on the branch, or an arbitrary
"fresh" ground term if there are none.
‡: σ here is any atomic formula, i.e. a formula of the form R(t1 , . . . , tn )
or t1 = t2 .
10.4.2 So, there is nothing really different about the proof theory itself,
i.e. the rules for constructing tableaux. The only topic on which we
really have to say anything is how to interpret function symbols in
the associated model of a branch. Remember that in a model M,
the interpretation of an n-ary function symbol f n ∈ F is an n-ary
function f M : (DM )n → DM . But what on earth could this function
be in our model? The solution is as simple as it is revealing about the
whole game of associated models. First, let’s adjust the definition of
the associated model for an open branch B in complete tableau to a
setting with function terms. We define [t]∼B to be the set {u : t =
u ∈ B} when t occurs in B and {t} otherwise:²

[t]∼B = {u : t = u ∈ B}   if t occurs in B
[t]∼B = {t}               otherwise

² Why this complicated definition, you ask? Well, you'll see in a moment.

Then, we say:

• DMB = {[t]∼B : t ∈ T }
• aMB = [a]∼B , for a ∈ C ∪ P ar
• RMB = {([t1 ]∼B , . . . , [tn ]∼B ) : R(t1 , . . . , tn ) ∈ B}

Now, for f n ∈ F an n-ary function symbol, we need to find a function


f MB that maps n-tuples of equivalence classes of terms ([t1 ]∼B , . . . , [tn ]∼B )
to a single such equivalence class. But which one should it be:

f MB ([t1 ]∼B , . . . , [tn ]∼B ) = [?]∼B

The “obvious” answer is that f MB ([t1 ]∼B , . . . , [tn ]∼B ) = [f (t1 , . . . , tn )]∼B .
Why is this obvious? First, note that this really defines a function on
the terms. In particular, for every input we assign one and only one
output. This crucially hinges on the fact that we've "enlarged" DMB to
include also the equivalence classes of all the terms not in B: we know
that the value for f ([t]∼B ) = [f (t)]∼B , namely [f (t)]∼B ∈ DMB , but
we’re now also guaranteed that we get a value for f MB ([f (t)]∼B ),
viz. [f (f (t))]∼B ∈ DMB , and so on. The definition guarantees exactly
that the formulas on the branch involving the term f (t1 , . . . , tn ) will
be true. How so, is best illustrated by means of an example.

10.4.3 Example. Let's consider the following derivation for ⊬ ∀x∀y(f (x) =
f (y) → x = y):

¬∀x∀y(f (x) = f (y) → x = y)

∃x¬∀y(f (x) = f (y) → x = y)

¬∀y(f (p) = f (y) → p = y)

∃y¬(f (p) = f (y) → p = y)

¬(f (p) = f (q) → p = q)

f (p) = f (q)

p ≠ q

Now let’s consider the model MB . We have:

• DMB = {[t]∼B : t ∈ T }
• pMB = [p]∼B = {p}
• qMB = [q]∼B = {q}
• f MB (pMB ) = [f (p)]∼B = {f (p), f (q)}
• f MB (qMB ) = [f (q)]∼B = {f (p), f (q)}

Now this model does exactly what we want it to do. Since f MB (pMB ) =
f MB (qMB ), we have that MB ⊨ f (p) = f (q). And since pMB ≠ qMB ,
we have MB ⊨ p ≠ q.
10.4.4 It is now a nice exercise to show using induction on terms that, in
the associated model, the denotation of any ground term is its
equivalence class. In fact, you will do this as an exercise.
Lemma. Let B be an open branch in a complete tableau and MB
the associated model. Then we have for all ground terms t ∈ T that
⟦t⟧MB α = [t]∼B .

Proof. Exercise 10.8.7. Hint: You can basically ignore variables in


the base case and disregard the assignment by Corollary 9.2.9.

10.5 Infinite Tableaux and Decidability


10.5.1 So far, all the tableaux we’ve been discussing have been very well
behaved. In fact, the infinitary issues I’ve mentioned in §10.1, at
least so far, haven’t surfaced. But now they will. In fact, we need not
look very hard for them. Just try to do the tableau for the simple
formula ¬∃x∀yR(x, y). You get:

¬∃x∀yR(x, y)

∀x¬∀yR(x, y)

¬∀yR(p1 , y)

∃y¬R(p1 , y)

¬R(p1 , p2 )

¬∀yR(p2 , y)

∃y¬R(p2 , y)

¬R(p2 , p3 )

..
.

What's going on here? Well, what happened is that we got a universally
quantified formula ∀x¬∀yR(x, y), which we instantiated with a
fresh parameter p1 , which then led to an existentially quantified formula
∃y¬R(p1 , y), for which we introduced a fresh parameter, p2 , giving
us ¬R(p1 , p2 ), which then forced us to re-instantiate ∀x¬∀yR(x, y),
giving us ∃y¬R(p2 , y), forcing us to introduce p3 , and so on. We got
caught in what we might call a quantifier feedback loop. The possibility
of such loops has profound implications for the proof theory of
first-order logic using tableaux.

10.5.2 But before we discuss these implications, let’s talk about the fact
that we just wrote down an infinite tableau. Well, of course we didn’t.
What we did was to realize that if we were to continue applying the
tableau rules trying to construct the tableau for ¬∃x∀yR(x, y), we
will never get to an end. So, we can’t write down the tableau for
that formula. But does that mean that this tableau doesn’t exist? To
appreciate that the mathematical answer is No!, we have to re-think
what tableaux actually are. So far, we’ve been talking about tableaux
as a concrete tree that we write down. But, mathematically speaking,
the actual ink-and-paper (or screen-and-pixel) tableau that you write
down is not what the tableau is. A tableau, for a mathematician, is a
special kind of (graph-theoretic) tree, which in turn is a mathematical
structure that consists of a set of nodes and a set of edges connecting
them. Nothing in this requires the tree to be written down or even
be finitely writable in the first place—it's just a pair of sets. But
what about the rules? Well, for a mathematician, the rules we use to
construct our tableaux are just an inductive definition of the set of
tableaux! And in first-order logic, that’s how we will need to think
of tableaux as well.

10.5.3 Note that the tableau above, our example, shows that ⊬ ∃x∀yR(x, y),
i.e. the formula ∃x∀yR(x, y) is not derivable. In fact, infinite tableaux
can only occur in cases where something isn’t derivable. Why? Well,
because if something is derivable, then the tableau needs to close
(by definition). But if a tableau closes, it does so after finitely many
steps: there will be two nodes, one containing an atomic formula, the
other its negation, which appear at some point in the tree—everything
that comes after doesn’t matter (in first-order logic, we can “close
early”). But how do we know that the tableau we sketched above
doesn’t close at some point? Well, that we need to prove. But the
argument is actually not that hard:

• Note that all the formulas after the initial list are of the form:
¬∀yR(pi , y), ∃y¬R(pi , y), and ¬R(pi , pi+1 ). But these statements
CHAPTER 10. TABLEAUX FOR FIRST-ORDER LOGIC 272

either begin with an existential quantifier or a negation. So,


there can't be an atom and its negation on the branch.

Further, we need to prove that the tableau is complete, i.e. if we


were to continue indefinitely to apply rules according to the pattern
described, then for each node to which a rule could be applied, at
some point it will be applied. Here’s the argument:

• Note that whenever we introduce a new parameter pi , we apply


the rule for ∀x¬∀yR(x, y) to it, and whenever we encounter
¬∀yR(pi , y) we apply ¬∀-rule to get to ∃y¬R(pi , y) and then
the ∃-rule to get to ¬R(pi , pi+1 ). We’ve just observed that these
are all the (kinds) of formulas on the tree, so we know that for
each node to which a rule could be applied, at some point it will
be applied.

This is how we make sense of infinite tableaux.
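The inductive picture can even be illustrated mechanically. The following Python sketch (names and string encodings are mine, purely for illustration) generates the first n rounds of the quantifier feedback loop in the tableau for ¬∃x∀yR(x, y):

```python
def feedback_loop(n):
    """First n rounds of the loop: each round instantiates
    ∀x¬∀yR(x,y) with p_i, which yields ∃y¬R(p_i, y), which in turn
    introduces the fresh parameter p_{i+1} -- forcing the next round."""
    nodes = ['¬∃x∀yR(x,y)', '∀x¬∀yR(x,y)']
    for i in range(1, n + 1):
        nodes += [f'¬∀yR(p{i},y)', f'∃y¬R(p{i},y)', f'¬R(p{i},p{i+1})']
    return nodes

for node in feedback_loop(2):
    print(node)
```

No round ever produces an atom together with its negation (the only atoms generated are the negated ¬R(pi , pi+1 ), whose positive counterparts never appear), which is exactly the openness argument from above.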

10.5.4 Example. The tableau for the inference from ∀x∃yR(x, y) to ∃y∀xR(x, y):

∀x∃yR(x, y) ⊬ ∃y∀xR(x, y)
∀x∃yR(x, y)
¬∃y∀xR(x, y)

∀y¬∀xR(x, y)

∃yR(p1 , y)

¬∀xR(x, p1 )

∃x¬R(x, p1 )

R(p1 , p2 )

¬R(p3 , p1 )

∃yR(p2 , y)

¬∀xR(x, p3 )

∃yR(p3 , y)

¬∀xR(x, p2 )

..
.

How do we see that the tableau is open? Well, we note that all
the atomic formulas on the (one and only) branch are of the form
R(pi , p2i ) and ¬R(p2i+1 , pi ). Since there is no natural number i such
that i = 2i + 1, we can conclude that we’ll never get to R(s, t) and
¬R(s, t)—the tableau never closes.

10.5.5 Do we also have associated models for infinite open branches? Why,
yes we do! Nothing in the definition of an associated model prevents
us from applying the definition to an infinite branch. Here are the
models we’d get for our two examples:

• ⊬ ∃x∀yR(x, y):
  – DMB = {pi : i ∈ N}
  – piMB = pi
  – RMB = ∅
• ∀x∃yR(x, y) to ∃y∀xR(x, y):
  – DMB = {pi : i ∈ N}
  – piMB = pi
  – RMB = {(pi , p2i ) : i ∈ N}

In fact, we will be able to prove soundness and completeness for
first-order tableaux, which, of course, crucially depends on the existence
of associated models.

10.5.6 This brings me to the last point of this chapter, decidability. Sound-
ness and completeness should not be confused with decidability. What
we will be able to show, the soundness and completeness theorem, is
that Γ ⊨ φ iff the tableau for Γ ∪ {¬φ} is closed. But that doesn't
mean that there's an effective algorithm, which in a finite amount
of time determines whether Γ ⊨ φ. In fact, we've just seen that our
algorithm can “loop” and never spit out an answer. In such cases, we
can still reason to the right answer, but this isn’t algorithmic any-
more. A computer can’t figure out that reasoning. Now, this shows
that the tableau algorithm doesn’t decide first-order logic. But the
fact that one algorithm doesn’t work, of course, doesn’t mean that
no algorithm works. What we can see, however, is that a certain
kind of algorithm will not work: one that tries to construct counter-
models for invalid inferences. The idea to construct countermodels is
something that tableaux and truth-tables have in common: in truth-
tables, we check whether there’s a line in the table that gives us a
countermodel and in tableaux we look for a branch to do the job.
And in propositional logic, this method works: since we only need to
consider the interpretations (read: truth-values) of the finitely many
sentence letters in the formula, we can do this checking in a finite
amount of time. The problem in first-order logic is that countermod-
els sometimes need to be infinite. To see this, note that some sets of
formulas have only infinite models: an example can be found in Ex-
ercise 9.7.11. If we want to show that something doesn’t follow from
such a set, we need an infinite countermodel. And we can’t possibly
search through all of those.
Now that’s a whole class of algorithms that won’t work. But in order
to show that really absolutely no algorithm can possibly work ever,
we need to dig deeper. The reasoning that establishes that is closely
related to the paradoxes, like Russell's paradox, but we won't be able to
go into that in this course (that's more for "Logische Complexiteit").

10.6 Core Ideas


• Tableaux for first-order logic work in a way similar to propositional
tableaux. Underlying is always the idea that an inference is valid iff
the premises and the negation of the conclusion are jointly unsatisfiable.
• We have three tiers of tableau systems: simple tableaux, tableaux with
identity, and tableaux with identity and functions. We successively add
rules for those.
• There is also a concept of an associated model, which in first-order
logic is a term model, i.e. a model based on the syntactic strings as the
objects in the domain.
• First-order tableaux can turn out to be infinite, in which case we need
to prove that they are open and complete.
• The existence of infinite tableaux destroys our hopes for an easy
decidability proof.
• Alas, first-order logic is actually provably undecidable.

10.7 Self Study Questions


10.7.1 Which of the following statements are correct?
(a) The tableau for an inference can only be infinite if the premises
don’t entail the conclusion.
(b) It’s possible to have a tableau for an inference where each branch
contains both a statement and its negation, but the premises
don’t entail the conclusion.
(c) You will always be able to make the tableau for a given inference
in a finite amount of time.
(d) You can have a tableau without any parameters on any branch.
(e) In FOL, if you have both a formula and its negation on a branch,
then the branch is closed and no rules need to be applied anymore
(even if further rules could be applied).
(f) Parameters are treated like variables, i.e. we assign them a value
only under a variable assignment and not as part of the model.
(g) If you make the complete tableau for an inference and find that
it’s open, then you get a model that makes the premises true and
the conclusion false.
(h) When you make the tableau for an inference, you start by writ-
ing down the initial list, which consists of the premises and the
conclusion and then you start applying the rules.

(i) It’s possible that the tableau for an inference whose conclusion
is a valid formula of FOL is not closed.
(j) If you find that all the branches of a tableau for an inference
contain both a formula and its negation, then the premises entail
the conclusion.

10.8 Exercises
10.8.1 By constructing appropriate tableaux, show the following:

(a) ∀xP (x) ⊢ ∀yP (y)
(b) ∃x∃yS(x, y) ⊢ ∃y∃xS(x, y)
(c) ¬∃xP (x) ⊢ ∀x(P (x) → Q(x))
(d) [h] ∀xP (x) ⊢ ∀x(Q(x) → P (x) ∨ R(x))

10.8.2 Construct tableaux to check the following. If the tableau does not
close, construct a counter-model from the open branch and check
that it works. If the tableau is infinite, see if you can find a simple
finite counter-model by trial and error.

(a) ∀x(P (x) → Q(x)), ∃x¬P (x) ⊢ ∀x¬Q(x)
(b) ∀xP (x) → ∀yQ(y) ⊢ ∀x(P (x) → ∀yQ(y))
(c) [h] ∃x(P (x) → ∀yQ(y)) ⊢ ∃xP (x) → ∀yQ(y)
(d) [h] ⊢ ∀x∃yS(x, y) → ∃xS(x, x)
(e) ∃x¬∃yS(x, y) ⊢ ∃x∀yS(x, y)

10.8.3 The tableau for the following set of formulas is infinite:

{∀x∀y(f (x) = f (y) → x = y), ∀x f (x) ≠ x, ∃x¬∃y f (y) = x}.

Begin the tableau construction, prove that the tableau is open, and
determine the associated model.

10.8.4 Use the tableau method to find models in which the following
formulas are false:

(a) ∃x∃y(P(x) ∧ P(y) ∧ x ≠ y)
(b) ∀x∀y∀z(P(x) ∧ P(y) ∧ P(z) → x = y ∨ x = z ∨ y = z)
(c) ∃x∃y(P(x) ∧ P(y) ∧ ∀z(P(z) → x = z ∨ y = z))
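When the tableau for one of these stays open, you can sanity-check the counter-model you read off mechanically. Here is a minimal Python sketch for (a), assuming a candidate counter-model with domain {1} and P true of everything (the encoding is an ad hoc illustration, not part of the tableau method):

```python
# Candidate counter-model for (a): a one-element domain where P holds everywhere.
D = {1}
P = {1}

# Brute-force evaluation of ∃x∃y(P(x) ∧ P(y) ∧ x ≠ y).
value = any(x in P and y in P and x != y for x in D for y in D)
print(value)  # False: with a single element, x ≠ y can never be satisfied
```

The same loop pattern works for (b) and (c) once you've fixed a finite candidate model.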

10.8.5 Show the following facts using tableaux with identity:

(a) ⊢ ∀x∀y(x = y → y = x)
(b) ∃x(x = a ∧ P(x)) ⊢ P(a)
(c) P(a) ⊢ ∀x(x = a → P(x))
(d) [h] ∃x(P(x) ∧ ∀y(P(y) → x = y)) ⊢ ∀x∀y(P(x) ∧ P(y) → x = y)
(e) ⊢ ∃x(∃yP(y) → P(x))

10.8.6 [h, 6 x] It's not possible to write an algorithm that determines whether
a formula is invalid, that is, false in some model, after a finite amount
of time. Why?
Hardcore version (not homework): It's not even possible to find such
an algorithm for contingency, that is, to determine whether a formula
is true in some models and false in others. Why?

10.8.7 Prove Lemma 10.4.4.

Self Study Solutions: 10.7.1 (a, d, e, g, j)
Chapter 11

Soundness and Completeness

This chapter is even shorter than the soundness and completeness chapter
in propositional logic. But it's a bit like with strawberries: the smaller,
the sweeter. This chapter marks the end of our investigation into logical
theory and it contains one of the most important results of first-order
logic—a milestone achievement of the mathematical study of valid reasoning:
the completeness proof.

11.1 Overview
11.1.1 In this chapter, we’re going to prove soundness and completeness for
first-order tableaux. A daunting task, for sure, but we can do it! The
soundness and completeness theorem will round off our treatment of
first-order logic. In fact, it will mark the end of our concrete discussion
of logical systems. In the following chapter, we’ll conclude the course
with discussion and outlook. There we will look at theorems we didn’t
prove, systems we didn’t study, and so on. So, prepare yourself for
one last-ditch effort.

11.1.2 The proofs of the soundness and completeness theorems for first-order
logic run very much along the same lines as the proofs for propositional
logic. We'll essentially spell out an up preservation lemma and
a down preservation lemma, which together guarantee soundness and
completeness.

11.1.3 As you'll see, nothing in the proofs depends in an interesting way on
the fact that infinite tableaux exist. In a sense, this shouldn't be very
surprising. Infinite tableaux can occur, but as we discussed before,
from a mathematical perspective, there is nothing really weird about
them. They are tableaux like all the others. We've already observed
that infinite tableaux are always open (a closing tableau closes after
finitely many steps), but we can define associated models for infinite
branches just like in the finitary case. In fact, the existence of
associated models is what ultimately gives us soundness and completeness.

11.1.4 But before we prove our soundness and completeness lemmas, we'll
have to supply some auxiliary lemmas. These two lemmas, the denotation
lemma and the locality lemma, are semantic in nature and will figure in
various places in our proof that the associated model works. So, let's
get cracking.

11.2 The Denotation and Locality Lemmas


11.2.1 The first lemma that we're going to prove is the denotation lemma.
Let's first pause for a second and think about the content of the
lemma (before having stated it precisely). The core of the lemma
is a kind of equivalence between the syntactic method of fixing the
reference of variables via substitution and the semantic method of
changing assignments. Taking the simple (open) formula P(x) as our
example, the idea is as follows. Suppose that there's an object d in
the domain of a model M, which we can refer to, under assignment
α, by means of a term t, i.e. ⟦t⟧^M_α = d. The point of the lemma is
that the following two ways of fixing the reference of x in P(x) to be
t are equivalent:

1. We can substitute the term t in P(x), i.e. consider (P(x))[x := t],
which is just P(t).

2. We can change the value of x under the assignment α to be ⟦t⟧^M_α
and consider P(x) under the changed assignment α[x ↦ ⟦t⟧^M_α].

The way in which the two ways are equivalent is that they either
both lead to a true statement or they both lead to a false statement:

• M, α ⊨ (P(x))[x := t]  iff  M, α[x ↦ ⟦t⟧^M_α] ⊨ P(x)

The denotation lemma generalizes this idea to arbitrary formulas.
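To make the two ways concrete, here is a small Python sketch in a toy finite model (the model, the names, and the evaluation functions are ad hoc illustrations, not part of the official definitions):

```python
# Toy model: domain {1, 2}, P interpreted as {2}, constant c denoting 2.
D = {1, 2}
P = {2}
const = {"c": 2}

def denot(term, alpha):
    """Denotation of a term: a constant from `const` or a variable from alpha."""
    return const[term] if term in const else alpha[term]

alpha = {"x": 1}

# Way 1: substitute, i.e. evaluate P(c), the result of (P(x))[x := c].
way1 = denot("c", alpha) in P

# Way 2: shift the assignment, i.e. evaluate P(x) under alpha[x := [[c]]].
alpha_shifted = {**alpha, "x": denot("c", alpha)}
way2 = denot("x", alpha_shifted) in P

print(way1, way2)  # True True: the two ways agree, as the lemma predicts
```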

11.2.2 Here is the precise statement of the denotation lemma and its proof:

Lemma (The Denotation Lemma). Consider a formula φ with precisely
one free variable, x, and an arbitrary ground term t. Further,
let M be a model and α an assignment. Then we have:

M, α ⊨ (φ)[x := t]  iff  M, α[x ↦ ⟦t⟧^M_α] ⊨ φ

Proof. We prove the claim by induction on formulas.

(i) For the base case, consider a formula

R(t₁, …, t, …, tₙ),

which is of the form

(R(t₁, …, x, …, tₙ))[x := t].

Note that by assumption (that x is the only free variable in
φ) none of t₁, …, tₙ contain a variable, i.e. they are all ground
terms. Now consider the claim M, α ⊨ R(t₁, …, t, …, tₙ). By
definition, this means that (⟦t₁⟧^M_α, …, ⟦t⟧^M_α, …, ⟦tₙ⟧^M_α) ∈ R^M.
Now consider the formula R(t₁, …, x, …, tₙ) under the shifted
assignment α′ := α[x ↦ ⟦t⟧^M_α]. By definition, we have that
M, α′ ⊨ R(t₁, …, x, …, tₙ) iff

(⟦t₁⟧^M_α′, …, ⟦x⟧^M_α′, …, ⟦tₙ⟧^M_α′) ∈ R^M,

where ⟦x⟧^M_α′ = ⟦t⟧^M_α by the definition of α′. Now, since the
terms t₁, …, tₙ are all ground terms, by the ground term lemma,
they all have the same denotation under every assignment (cf.
9.2.9). In particular, we get:

⟦tᵢ⟧^M_α′ = ⟦tᵢ⟧^M_α for each i.

But that just means that the condition for M, α′ ⊨ R(t₁, …, x, …, tₙ)
reduces to (⟦t₁⟧^M_α, …, ⟦t⟧^M_α, …, ⟦tₙ⟧^M_α) ∈ R^M, which, as
we've observed above, is precisely the condition for
M, α ⊨ R(t₁, …, t, …, tₙ). The base case for t₁ = t₂ is
completely analogous (in fact, simpler) and is left as an exercise.
(ii) We only consider the induction step for the universal quantifier;
the rest is left as an exercise. So, assume the induction hypothesis
that if φ is a formula that contains x as the only free variable,
then M, α ⊨ (φ)[x := t] iff M, α[x ↦ ⟦t⟧^M_α] ⊨ φ. Now let ∀yφ
be a formula with x as its only free variable.¹ Note that since x
is free in ∀yφ, the variable must also be free in φ. We want to
show that M, α ⊨ (∀yφ)[x := t] iff M, α[x ↦ ⟦t⟧^M_α] ⊨ ∀yφ.
So, consider the claim that M, α ⊨ (∀yφ)[x := t]. By the recursive
definition of substitution, we know that (∀yφ)[x := t] = ∀y(φ[x := t]).
Hence, we have that M, α ⊨ (∀yφ)[x := t] iff M, α ⊨ ∀y(φ)[x := t],
which is the case iff for all d ∈ D^M, we have that
M, α[y ↦ d] ⊨ (φ)[x := t]. But by the induction hypothesis, we
have that M, α[y ↦ d] ⊨ (φ)[x := t] iff M, α[y ↦ d, x ↦ ⟦t⟧^M_α] ⊨ φ.
But that just means that we have for all d ∈ D^M that
M, α[y ↦ d, x ↦ ⟦t⟧^M_α] ⊨ φ, which by definition just means that
M, α[x ↦ ⟦t⟧^M_α] ⊨ ∀yφ, as desired.

¹Why don't we write ∀xφ? Because in that formula x is not free!

11.2.3 One of the main uses of the denotation lemma is that it allows us to
derive the quantifier law ∀xφ ⊨ (φ)[x := t] in a natural way. To see
this, suppose that M, α ⊨ ∀xφ. This means, by definition, that for all
d ∈ D^M, we have that M, α[x ↦ d] ⊨ φ. So let d = ⟦t⟧^M_α. We get
that M, α[x ↦ ⟦t⟧^M_α] ⊨ φ, which by the denotation lemma is
equivalent to M, α ⊨ (φ)[x := t]. This law is one of the most natural
quantifier laws: it states that if everything is such that φ, then t is
such that φ. But, in order to prove it in full generality, we need the
denotation lemma. In essence, this will also be the role of the
denotation lemma in our soundness and completeness proof: the lemma
allows us to infer the truth of instances of a universal generalization
from the generalization itself.

11.2.4 The second lemma that we're going to prove is (yet) another version
of the locality lemma (we proved locality lemmas for terms and
formulas with respect to free variables in §9). The locality lemma
that we need in this chapter is slightly different:

Lemma. Let M and N be models with D^M = D^N. Further, let φ
be a sentence such that for all constants c that occur in φ, c^M = c^N,
for all function symbols f that occur in φ, f^M = f^N, and for all
predicates R that occur in φ, R^M = R^N. Then for all assignments
α, we have that M, α ⊨ φ iff N, α ⊨ φ.

This lemma states, in words, that if two models have the same domain
and interpret the non-logical symbols in a sentence in exactly the
same way, then they interpret the whole sentence in the same way.
Note that since the domains of the two models in the lemma are the
same, an assignment in the one model is also an assignment in the
other (remember, assignments only depend on domains).
We shall now prove this lemma:

Proof. We prove this fact by … surprise … induction. I will cover
the base case for R(t₁, …, tₙ) and the case for ∀xφ and leave the
remaining cases as exercises.

(a) Consider an atomic sentence R(t₁, …, tₙ). We know, by definition,
that M, α ⊨ R(t₁, …, tₙ) iff (⟦t₁⟧^M_α, …, ⟦tₙ⟧^M_α) ∈ R^M.
First, note that since c^M = c^N for all c ∈ C and f^M = f^N for all
f ∈ F, a simple induction on terms establishes that ⟦t⟧^M_α = ⟦t⟧^N_α
for all terms t ∈ T (exercise!). Since additionally R^M = R^N, we
can conclude that (⟦t₁⟧^N_α, …, ⟦tₙ⟧^N_α) ∈ R^N, which is precisely
the condition for N, α ⊨ R(t₁, …, tₙ).

(b) Next, consider a sentence of the form ∀xφ. Now suppose the
induction hypothesis that if M interprets the non-logical symbols
in a sentence φ exactly like N, then for all assignments α, we
have that M, α ⊨ φ iff N, α ⊨ φ. We want to derive from this
assumption that M, α ⊨ ∀xφ iff N, α ⊨ ∀xφ. Now, we note that
M, α ⊨ ∀xφ iff for all d ∈ D^M (= D^N), M, α[x ↦ d] ⊨ φ. Now,
note that if M interprets the non-logical symbols in ∀xφ the
same as N, then M also interprets the non-logical symbols in
φ the same as N. So, we can infer by the induction hypothesis that
for all d ∈ D^N, N, α[x ↦ d] ⊨ φ. But that is just the condition for
N, α ⊨ ∀xφ, as desired.

Now the role of the locality lemma is less intuitive than the role of
the denotation lemma. But as we'll see in a second, it plays a central
role in our proof of the soundness lemma.

11.3 The Soundness Theorem


11.3.1 We will now go ahead and prove soundness for first-order tableaux.
Remember that the core concept of our soundness (and completeness)
theorem for propositional tableaux was the idea of a faithful valuation,
i.e. a valuation that makes all the formulas on a branch true. To
obtain soundness, we simply observed that we can use the counter-model
for an invalid inference to infer that there must be an open
branch in the tableau. Hence the conclusion can't be derived from
the premises, meaning contrapositively that if we can derive the
conclusion from the premises, they do in fact follow—i.e. soundness.
The way we could use the counter-model to obtain our open branch
was by means of the central soundness lemma, which states that if we
have a faithful valuation and apply a rule, we get a new branch to which
our valuation remains faithful—which is just an iterated version of
the down preservation principle. Now in first-order tableaux, things
aren't quite as simple, although the general idea remains the same.

11.3.2 The concept of a faithful valuation is generalized to first-order logic
easily enough: we say that a model M is faithful to a branch B iff for
all assignments α we have that M, α ⊨ φ for all φ ∈ B. The major
complication that we're facing is that when we apply the rule for the
existential quantifier, we're introducing a new parameter. And even if
the model M was faithful to the branch to begin with, it might now
turn out to be unfaithful after the introduction of the new parameter:
the model might interpret the parameter in the wrong way. What will
save us is a relaxation of the strict condition that our initial model
needs to remain faithful. It turns out that it's enough that we get a
faithful model every time we apply a rule. From this, we can infer in
the same way that we get an open branch in the complete tableau.

11.3.3 So, here is the official version of our soundness lemma:

Lemma. Let B be a branch of a (possibly incomplete) tableau and
suppose further that M is a model that is faithful to B. Then, if a
rule is applied to B, extending the branch to B′, there exists a
model N with D^M = D^N and c^M = c^N for all c ∈ C which occur
on B, such that N is faithful to B′.

Proof. The proof proceeds, analogously to the proof of the soundness
lemma in propositional logic, by going through the rules one by one.
The propositional rules we can deal with exactly as in propositional
logic (replacing the talk of valuations with talk of models) and
letting M = N.

The interesting cases are the quantifier rules.

The rules for the negated quantifiers are handled quickly:

    ¬∀xφ          ¬∃xφ
    ∃x¬φ          ∀x¬φ

We've already observed that ¬∀xφ ⊨ ∃x¬φ and that ¬∃xφ ⊨ ∀x¬φ
(this follows quickly from 9.4.3 iv and v). But that means that in
either case, we can just let M = N.

Now for the ∃ rule. Suppose that the last rule that's been applied
was:

    ∃xφ
    φ[x := p]†          †: p a fresh parameter

Now, we know that M is faithful to B. Since ∃xφ ∈ B, we can
conclude that M, α ⊨ ∃xφ. So, this means that there exists a d ∈ D^M
such that M, α[x ↦ d] ⊨ φ. Now, we define our model N just like M,
except that we set p^N = d. Note that by the denotation lemma,
it easily follows that N, α ⊨ (φ)[x := p]. Furthermore, since p has to
be new to the branch, there cannot be any formula in B containing
p. But that means that for each formula in B, the condition for the
locality lemma is satisfied: the only difference between M and N
is how we interpret p. Hence N, α ⊨ ψ for all ψ ∈ B. Since B′ =
B ∪ {(φ)[x := p]}, it follows that N is faithful to B′.
Next, consider the case where the last rule applied has been an
instance of

    ∀xφ
    φ[x := a]†          †: a any ground term on the branch

Now, since M is faithful to B and ∀xφ ∈ B, we can infer that
M, α ⊨ ∀xφ. But then it follows immediately via the law
∀xφ ⊨ (φ)[x := a] that M, α ⊨ (φ)[x := a]. So, we can simply let
M = N.
Finally, we need to consider the case where the last rule was an
identity rule. The case where the last rule was the t = t rule, which
introduces t = t for every ground term on every branch, is easily
taken care of, since it's easily checked that for each model M and
every assignment α, M, α ⊨ t = t.

Slightly more interesting (but not much) is the case of the
substitution rule:

    s = t
    σ[x := s]‡
    σ[x := t]

‡: σ here is any atomic formula, i.e. a formula of the form R(t₁, …, tₙ)
or t₁ = t₂.

Now suppose that M, α ⊨ (σ)[x := s]. By the denotation lemma,
we get that M, α[x ↦ ⟦s⟧^M_α] ⊨ σ. Now since s = t ∈ B and M is
faithful to B, it follows that M, α ⊨ s = t and so ⟦s⟧^M_α = ⟦t⟧^M_α.
So, we get M, α[x ↦ ⟦t⟧^M_α] ⊨ σ, which by the denotation lemma
gives us M, α ⊨ (σ)[x := t], as desired.

This completes the proof of our soundness lemma.

11.3.4 We can infer soundness from this pretty straightforwardly:

Theorem (Soundness for First-Order Tableaux). If Γ ⊢ φ, then Γ ⊨ φ.

Proof. We prove this by contraposition. So, suppose that Γ ⊭ φ. This
means that there exists a model M such that M ⊨ ψ for all ψ ∈ Γ
but M ⊭ φ. Now consider the tableau for Γ ∪ {¬φ}. We immediately
have that M is faithful to the initial list. Now, every time that we
apply a rule to construct the complete tableau, we obtain a model
(possibly different from M) which is faithful to at least one branch of
the tree. This means that there exists a model N which is faithful to
some branch in the complete tableau for Γ ∪ {¬φ}. Now reasoning by
contradiction quickly shows that this branch cannot be closed. For
suppose that it were. Then there would be an atomic formula and
its negation on the branch, which would then both need to be true
in N, which is impossible. So, the branch is open, hence the tableau
is open, and hence Γ ⊬ φ, as desired.

11.4 The Completeness Theorem

11.4.1 We move to completeness. The proof strategy remains the same as in
propositional logic: we want to show that the associated model for an
open branch in a complete tableau is indeed faithful to that branch.
This is the content of the following completeness lemma:

Lemma (Completeness Lemma). Let B be an open branch in a
complete tableau. Then the associated model M_B defined as in 10.3.7
is faithful to B.

Proof. We prove this fact in a similar way as in propositional logic,
viz. we prove that for all φ ∈ L and all assignments α:

1. if φ ∈ B, then M_B, α ⊨ φ
2. if ¬φ ∈ B, then M_B, α ⊭ φ

I will do the base case for R(t₁, …, tₙ) and the cases for ∀xφ and
¬∀xφ. The cases for t₁ = t₂ and for ∃xφ and ¬∃xφ are left as useful
(!) exercises. The propositional cases work just like in propositional
logic.

(i) For the base case of 1., suppose that R(t₁, …, tₙ) ∈ B. We know
that M_B, α ⊨ R(t₁, …, tₙ) iff (⟦t₁⟧^{M_B}_α, …, ⟦tₙ⟧^{M_B}_α) ∈ R^{M_B}.
By the definition of M_B, we know that
R^{M_B} = {([t₁]_{∼B}, …, [tₙ]_{∼B}) : R(t₁, …, tₙ) ∈ B}. And you
proved as an exercise in 10.4.4 that ⟦t⟧^{M_B}_α = [t]_{∼B}. Putting
these two things together, the claim follows immediately.

For the base case of 2., suppose that ¬R(t₁, …, tₙ) ∈ B. Since B
is open, we can conclude that R(t₁, …, tₙ) ∉ B. Since
R^{M_B} = {([t₁]_{∼B}, …, [tₙ]_{∼B}) : R(t₁, …, tₙ) ∈ B}, it follows
that

([t₁]_{∼B}, …, [tₙ]_{∼B}) ∉ R^{M_B}.

Since ⟦t⟧^{M_B}_α = [t]_{∼B}, we conclude that
(⟦t₁⟧^{M_B}_α, …, ⟦tₙ⟧^{M_B}_α) ∉ R^{M_B}, which gives us
M_B, α ⊭ R(t₁, …, tₙ), as desired.
(ii) Now, for the induction step, assume the induction hypothesis
that for all assignments α:

1. if φ ∈ B, then M_B, α ⊨ φ
2. if ¬φ ∈ B, then M_B, α ⊭ φ

Now suppose that ∀xφ ∈ B. In order to show that M_B, α ⊨ ∀xφ,
as desired, we need to show that for each d ∈ D^{M_B}, we have
that M_B, α[x ↦ d] ⊨ φ. Remember that D^{M_B} = {[t]_{∼B} : t ∈ T}.
So, we have to show that for each term t, we have
M_B, α[x ↦ [t]_{∼B}] ⊨ φ. Now, since ∀xφ ∈ B and B is complete,
every rule that can be applied has been applied. Since t = t ∈ B
for all t ∈ T, every ground term t occurs on the branch, so we
can conclude that the ∀ rule has been applied for each t, i.e. for
each t ∈ T, we have (φ)[x := t] ∈ B. Hence, by the induction
hypothesis, M_B, α ⊨ (φ)[x := t]. Note that since we're in
tableaux, we can assume that ∀xφ is a sentence and so φ
contains at most one free variable, x. So, by the denotation
lemma, we have M_B, α[x ↦ ⟦t⟧^{M_B}_α] ⊨ φ. But we have already
observed that ⟦t⟧^{M_B}_α = [t]_{∼B}, so for each t ∈ T, we get that
M_B, α[x ↦ [t]_{∼B}] ⊨ φ, as desired.

The remaining case is ¬∀xφ ∈ B. Since the tableau is complete,
we can conclude that the ¬∀ rule has been applied, giving us
that ∃x¬φ ∈ B. Again, since the tableau is complete, we know
that the ∃ rule has been applied. This means that for some
(at some point fresh) parameter p, we have (¬φ)[x := p] ∈ B.
We know that (¬φ)[x := p] = ¬(φ)[x := p]. By the induction
hypothesis, we know that M_B, α ⊭ (φ)[x := p]. By the
denotation lemma, we get M_B, α[x ↦ ⟦p⟧^{M_B}_α] ⊭ φ. Since
⟦p⟧^{M_B}_α = [p]_{∼B}, this gives us, for the element
[p]_{∼B} ∈ D^{M_B}, that M_B, α[x ↦ [p]_{∼B}] ⊭ φ. From this, it
follows immediately that M_B, α ⊭ ∀xφ, as desired.

11.4.2 From the completeness lemma, the actual completeness proof follows
quickly:

Theorem (Completeness for First-Order Tableaux). If Γ ⊨ φ, then
Γ ⊢ φ.

Proof. We prove the contrapositive. So, suppose that Γ ⊬ φ. By
definition, that gives us an open branch B in the complete tableau
for Γ ∪ {¬φ}. By the completeness lemma, the associated model M_B
is faithful to this branch, which, of course, contains Γ ∪ {¬φ}. Hence,
we get that M_B ⊨ ψ for all ψ ∈ Γ and M_B ⊭ φ. So, Γ ⊭ φ, as
desired.

This completes our completeness proof and our investigation into
logical theory.

11.5 Core Ideas

• The soundness and completeness proofs for first-order tableaux are
really just a generalization of the propositional case.

• The devil is in the details.

11.6 Self Study Questions


Revisit the self study questions of ch. 7.6.

11.7 Exercises

• The exercises for this chapter are different. We're getting close to the
endterm exam and it's high time that we think a bit more concretely
about exam preparation. The exercises for this and the next lecture
will just be that: exam preparation.

• In a moment, I will describe the "recipe" for the exam to you so that,
in principle, you can make as many mock exams from the questions
as you'd like. The main purpose of all of this is that you get a more
specific idea of the kind of questions I'll ask in the exam, their difficulty
level, and so on. Fully worked out answers will be provided at the end
of the week. They will come along with a marking scheme to allow you
to gauge my expectations.

• Here's the exam recipe. The exam will consist of two parts, Part A
and Part B. The questions in Part A will be questions where I ask
you to do certain things with concrete terms, formulas, models, etc.
The questions in Part B instead are questions where I ask you to
prove things. Throughout the course, you've been doing exercises that
prepare you for these two kinds of questions. Some more details:

– Below is an example of what Part A might look like. The questions
in 11.7.1 are roughly of the same length and difficulty as the
questions in the exam. The questions test basic understanding
of the core concepts and whether you can write up answers in
a mathematically appropriate fashion. Note that it's not enough
to simply write down the result; you need to explain how you
got there, what you did, etc. Check Lecture 3 (and possibly the
recommended readings there) for reference. Note that for each
part of logical theory—syntax, semantics, proof theory—there is
at least one question.

– Part B will contain three questions with proofs. There will be one
question of each difficulty level from 1 to 3. Questions can come
from any part of logical theory. In answering these questions, pay
attention to writing proper proofs. Check Lecture 3!

• The two parts of the exam will be weighed in such a way that if you
answer all the questions in Part A correctly, you'll pass the exam with
a 6.5. The questions in Part B will then differentiate your grade. The
idea is that to get a 7.5, you should get the difficulty 1 proof correct;
to get an 8.5, you should also get the difficulty 2 proof correct; and to
get a 9.5, you should get all three proofs essentially correct. A 10 you
can get if all answers are flawless, including the difficulty 3 proof. I will
not go more into the details of points etc., since this encourages the
wrong kind of learning: please don't calculate "how much do I need to
answer to get this 'n that grade"—work hard and try your best!

11.7.1 Part A — Doing


11.7.1.1 Consider the following formula:

∀x∃y(R(x, y) → ∀x(P (x) ∧ ∃yR(x, y)))

Determine which variable occurrence is bound by which quantifier


occurrence in the formula.

11.7.1.2 Formalize the following sentences as accurately as possible in the
language of first-order logic. Use the following translation key:

L(x) : x is a logician
D(x) : x is an animal lover
V(x) : x is friendly
I(x, y) : x is smarter than y (or as smart as y)
k : Kurt
a : Ada
domain of discourse = the set of all people

(i) Only logicians are friendly.
(ii) Logicians aren't smarter than other people.
(iii) He's a logician who's not friendly, and he's also not an animal
lover.
(iv) Somebody who's not a logician is not an animal lover.

11.7.1.3 Take the signature S_PA = ({0}, {S, +, ·}, ∅) and the following model
M for it:

• D^M = {n ∈ ℕ : n is odd}
• 0^M = 1
• S^M(n) = n + 2
• +^M(n, m) = n + m if n + m is odd, and n + m + 1 otherwise
• ·^M(n, m) = n · m

Further, let α : V → ℕ be an assignment in M such that α(x) = 3.
In this model and under this assignment, determine the denotation
of the term (0 + x) · S(0).
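One way to sanity-check a computation like this is to transcribe the model into a few lines of Python (an illustrative sketch; the function names are ad hoc):

```python
# Transcription of the model M over the odd naturals (names are ad hoc).
def zero_M():           # 0^M = 1
    return 1

def S_M(n):             # S^M(n) = n + 2
    return n + 2

def plus_M(n, m):       # +^M(n, m): n + m if that is odd, else n + m + 1
    return n + m if (n + m) % 2 == 1 else n + m + 1

def times_M(n, m):      # ·^M(n, m) = n · m
    return n * m

alpha = {"x": 3}

# Denotation of (0 + x) · S(0), computed bottom-up along the parsing tree.
value = times_M(plus_M(zero_M(), alpha["x"]), S_M(zero_M()))
print(value)  # 15: 0 + x denotes 5 (since 1 + 3 is even), and S(0) denotes 3
```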

11.7.1.4 Consider the signature S = (∅, {f¹}, {R²}) and its model M given
by:

• D^M = {1, 2}
• f^M(1) = 2 and f^M(2) = 1
• R^M = {(1, 2)}

Let α be an arbitrary assignment. Determine the truth-value of the
following formula in this model under that assignment:

∀x∀y(R(x, y) → R(f(y), f(x)))
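For a finite model like this, the truth-value can also be checked by exhaustive search (a Python sketch with an ad hoc encoding of the model):

```python
# Ad hoc encoding of the model: domain, function f, and relation R.
D = {1, 2}
f = {1: 2, 2: 1}
R = {(1, 2)}

# Brute-force evaluation of ∀x∀y(R(x, y) → R(f(y), f(x))).
# An implication is false only when its antecedent is true and its consequent false.
truth_value = all(
    (x, y) not in R or (f[y], f[x]) in R
    for x in D
    for y in D
)
print(truth_value)  # True: the only R-pair is (1, 2), and (f(2), f(1)) = (1, 2) ∈ R
```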

11.7.1.5 Show the following two derivability facts:

(a) ∃x(P(x) ∧ x = c), ∀x(P(x) → Q(x)) ⊢ Q(c)
(b) P(c) ∨ (P(c) ∧ Q(c)), ∀x(Q(x) → ¬P(x)) ⊬ ¬P(c)

In (b), also determine the associated model of at least one open
branch.

11.7.1.6 Consider the following inference:

• The ball is round and everything round comes from Mars. So,
the ball comes from Mars.

Use the formal methods we developed in this course to determine
whether the inference is valid.

11.7.2 Part B — Proving

11.7.2.1 (Difficulty 1) Suppose that φ is a formula with one free variable, y.
Prove that ∀xφ is not a sentence.

11.7.2.2 (Difficulty 1) Provide an argument that for all sets of formulas Γ,
we have Γ ⊢ P(c) ∨ ¬P(c).

11.7.2.3 (Difficulty 1) Show that ∀x(φ → ψ) ⊨ ¬∃x(φ ∧ ¬ψ).

11.7.2.4 (Difficulty 1) Consider the formula ∀x∃yR(x, y) of an appropriate
signature. Find a model M⁺ (of that signature) such that the
formula is true in the model (under an arbitrary assignment), and
find a model M⁻ such that the formula is false in the model.

11.7.2.5 (Difficulty 1) Suppose that a set of formulas Γ is such that Γ ⊨ c ≠ c
for some constant c ∈ C. Prove that Γ is unsatisfiable.

11.7.2.6 (Difficulty 2) Let S = ({a}, {f¹}, ∅) be a signature, M a model with
f^M(d) = d for all d ∈ D^M, and α an assignment with α(x) = a^M
for all x ∈ V. Prove by induction on terms that for all terms s, t,
we have ⟦s⟧^M_α = ⟦t⟧^M_α.

11.7.2.7 (Difficulty 2) Suppose that φ is a formula whose only free variable is
x (which may occur more than once in φ). Show, using induction on
formulas, that (φ)[x := c], where c ∈ C is a constant, is a sentence.

11.7.2.8 (Difficulty 2) Remember that a binary relation R over a set X is
a set R ⊆ X². If R is a binary relation over X, we say that R is
reflexive iff for all objects x ∈ X, we have that (x, x) ∈ R. Now
consider a signature S = (C, F, R) with R² ∈ R. Let further M =
(D^M, ·^M) be a model of our signature and consider R^M. Clearly,
R^M is a binary relation over D^M. Prove that R^M is reflexive iff
M, α ⊨ ∀xR(x, x) for all assignments α.

11.7.2.9 (Difficulty 2) Let S = ({a, b, c}, ∅, {P¹}) be a signature and M =
(D^M, ·^M) a model of that signature with D^M = {1, 2, 3} such that
a^M = 1, b^M = 2, and c^M = 3. Prove that M, α ⊨ ∃xP(x) →
P(a) ∨ P(b) ∨ P(c).
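The claim quantifies over all interpretations of P; since the domain is finite, you can sanity-check it by enumerating all eight interpretations before writing the proof (an illustrative Python sketch):

```python
from itertools import chain, combinations

D = {1, 2, 3}
names = {"a": 1, "b": 2, "c": 3}  # every element of the domain is named

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Check ∃xP(x) → P(a) ∨ P(b) ∨ P(c) for every interpretation P^M ⊆ D^M.
ok = all(
    not any(d in P for d in D) or any(v in P for v in names.values())
    for P in map(set, subsets(D))
)
print(ok)  # True: since a, b, c name all of D, any witness for ∃x is named
```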

11.7.2.10 (Difficulty 2) Prove that if φ ⊢ ψ and ψ ⊢ θ, then φ ⊢ θ.

11.7.2.11 (Difficulty 3) We say that a set of formulas Γ is semantically
complete iff for all formulas φ ∈ L, either Γ ⊨ φ or Γ ⊨ ¬φ. Suppose
that Γ ⊨ c ≠ c. Show that Γ is semantically complete.

11.7.2.12 (Difficulty 3) Recursively define a function c : L → N such that


c(φ) measures the number of nodes in the parsing tree for φ. Prove
that your answer works.

11.7.2.13 (Difficulty 3) Somebody says to Bertrand: “There’s a barber that


shaves all and only those people who don’t shave themselves.” “No,”
Bertrand responds, “that’s impossible!” Show that such a barber
can indeed not exist (make use of the formal methods of the course:
formalization and either semantics or proof theory).
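Bertrand's point can also be checked by exhaustive search on a small domain (an illustrative Python sketch; of course, this is no substitute for the formal proof asked for):

```python
from itertools import product

# Search every "shaves" relation on a two-element domain for a barber b with:
# shaves(b, x) iff not shaves(x, x), for every x. (Domain and encoding are ad hoc.)
D = [0, 1]
pairs = [(x, y) for x in D for y in D]

found = False
for bits in product([False, True], repeat=len(pairs)):
    shaves = dict(zip(pairs, bits))
    for b in D:
        if all(shaves[(b, x)] == (not shaves[(x, x)]) for x in D):
            found = True

print(found)  # False: taking x = b would force shaves(b, b) iff not shaves(b, b)
```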
Part IV

Conclusion

Part V

Solutions to Selected
Exercises

Appendix A

Chapter 1. Introduction

1.6 Self-Study Questions


Explanations:

1.6.1 (a) If in every situation, the premises are false, there can’t be a situa-
tion in which the premises are true and the conclusion is false, i.e.
the inference can’t be invalid. Hence the inference is valid.
(b) If in some situation the premises are true and the conclusion not,
the conclusion is false. By definition, this means that the inference
is invalid.
(c) If in no situation, the premises are true and the conclusion not,
then the inference can’t be invalid. So it must be valid.
(d) If in no situation the conclusion is false, there can’t be a situation
in which the premises are true and the conclusion is false, i.e. the
inference can’t be invalid. Hence the inference is valid.
(e) If in every situation the conclusion is false, then it's still possible
that there is no situation in which the premises are true. But if
there's no situation where the premises are true, then the inference
would be valid.
(f) If in every situation where the conclusion is false, at least one
of the premises is false, then there can’t be a situation in which
the premises are true and the conclusion is false. For then, all the
premises would be true, but also at least one of them would be
false, which can’t be.
1.6.2 (a) Even if there is a situation in which both premises and conclusion
are false, there could still be no situation in which the premises
are true and the conclusion is false.
(b) There could be no situation in which the conclusion is false. Then,
trivially, in every situation where the conclusion is false, the premises
are true. But, at the same time, the inference would be valid, since
there couldn’t be a situation in which the premises are true and
the conclusion is false, i.e. the inference can’t be invalid.
(c) Suppose the conclusion is true in no situation. This means the
conclusion is false in every situation. But if then, there’s a situation
in which the premises are true, in that situation the conclusion
must be false. Hence, there’s a situation in which the premises are
true and the conclusion is false, so the inference is invalid.
(d) This is just the definition of what it means for an inference to be
invalid.
(e) It could still be that there is no situation in which the premises are
true. Then, trivially, in every situation in which the premises are
true, the conclusion is false. But, at the same time, there couldn't
be a situation in which the premises are true and the conclusion
is false, so the argument would be valid.
(f) There could still be a situation in which the premises are true and
the conclusion is false, all we’re given is that there is no situation
in which the premises and the conclusion are both true. This just
means that in every situation in which the premises are true, the
conclusion is false (see previous option).

1.7 Exercises
1.7.1 (a) The inference is not valid. To see this, note that we can have a
situation in which the premises are true and the conclusion is false.
Think of a situation in which there are two whales, Moby and Dick,
and one more fish, the clownfish Nemo. Moby is a blue whale and
Dick is a grey whale, Nemo is orange and white. For argument’s
sake, suppose that all whales are fish. Surely then, all blue fish
are whales, since there’s only one blue fish, the whale Moby. But
there’s a whale, Dick, which is a fish but grey. So, Dick is not a
blue fish. All in all, in the situation, the premises are true and the
conclusion is false, the inference is invalid.
(b) The inference is valid. To see this, suppose that we’re in a situa-
tion in which you didn’t not miss your train. Can it be, in such a
situation, that you still didn’t miss your train? Well, that would
mean that some statement, viz. you didn’t miss your train, is both
true and not true. But this is impossible. So we can’t have a situ-
ation in which you didn’t not miss your train but still didn’t miss
the train. This means the inference can’t be invalid, so it has to
be valid.

(c) The inference is valid. The case is very similar to the case with
the letters in the drawer. Suppose that it’s true in some situation
that if you’d checked your mail, then you’d have seen my message,
and you didn’t see it. Can it be that you checked your mail in that
situation? Well, then you’d have seen my mail and you didn’t. So
it can’t be that, in the situation, you checked your mail. So, in
every situation where the premises are true, the conclusion needs
to be true as well, i.e. the inference is valid.
(d) The inference is invalid. Think of a possible situation in which
there are no roses at all, e.g. because a rose disease wiped them
out. In such a situation, trivially, every rose would be red (can you
show me a non-red rose in the situation?). But there would be no
rose and certainly not a red one. So there’s a possible situation
in which the premises are true and the conclusion is false, so the
inference is invalid.
(e) The inference is invalid. For concreteness sake, let’s suppose that
“that” is that you jumped over the Eiffel tower. Now think of a
possible situation in which you have superhuman strength and, at
the same time, pigs have wings and can fly (what a beautiful world
it would be). Well, in such a situation, we know that no matter
whether you jumped over the Eiffel tower, pigs can fly. So certainly,
if you did it, pigs can fly (how can the statement be false?). Now
suppose that in our magic land, you indeed jumped over the Eiffel
tower. Then the conclusion is false—you did actually do it. So, we
have a situation in which if you did it, then pigs can fly and you
did it—the premises are true and the conclusion is false.
If you were to add the premise that pigs don't fly, the argument
would become valid. Check that for yourself. This shows that the
usual figure of speech involved here is elliptical: it's assumed, in the
background, that pigs can't fly.

1.7.2 Suppose that an inference is valid. By definition, this means that in


every situation where the premises are true, the conclusion is true
as well. Suppose you add some premises to the inference. Can the new
inference be invalid? Well, there would need to be a situation in which
all the premises of the new inference are true but the conclusion is
false. But all the premises of the old argument are also premises of the
new inference, so in such a situation also all the premises of the old
inference would need to be true. But since the old inference is valid,
this means that the conclusion would be true in the situation. So for
the new inference to be invalid, the conclusion would need to be both
false and true in some situation, which is impossible. Hence the new
inference is valid.

1.7.3 Take some invalid inference. Here are two ways of making it valid:

(1) Add the conclusion of the inference as a premise. It’s easy to see
that the new inference can’t be invalid, for there would need to
be a situation in which the new premises, which now include the
conclusion, are all true but the conclusion is false. So the conclu-
sion would need to be both true and false in some situation which
can’t be. So the inference is valid.
(2) Add any contradiction, such as “the rose is red and not red” to the
premises. We’ve already seen that any inference with inconsistent
premises is valid (1.3.3), so the inference will be valid, too.
Appendix B

Chapter 2. A Math Primer


for Aspiring Logicians

2.7 Exercises
2.7.1 This depends on the answers you’ve given. The following correspond
to my answers:

(a) We did not need to use any of the proof principles we discussed,
we simply produced a counterexample.
(b) We used indirect proof. The argument form was as follows:
• Suppose that there is a possible situation in which the premises
are true and the conclusion is false.
• We get a contradiction.
• Therefore there is no situation in which the premises are true
and the conclusion false.
• So, in every situation where the premises are true, so must be
the conclusion, meaning the inference is valid.
(c) Same as (b)
(d) Same as (a)
(e) Same as (a)
(f) Same as (a)

2.7.2 Here are proofs for the facts. Note that these are not the only possible
proofs, but they can function as examples. It’s a bit tricky to illustrate
the procedure that leads to these proofs, so I will present you with the
finished end-product. For advice on how to find the proofs, see the
slides.


(a) The sum of two even numbers is even.


Precise statement. For all integers n and m, if n and m are both
even, then n + m is even.

Proof. Let n, m be two arbitrary integers (for universal generaliza-


tion). Assume (for conditional proof) that n and m are both even.
This means, by definition, that there exists an integer k such that
n = 2k and there exists an integer l such that m = 2l. Now con-
sider n+m. Since n = 2k and m = 2l, we have that n+m = 2k+2l.
But 2k + 2l = 2(k + l). So, there exists an integer, namely
k + l, such that twice that number is n + m. But by definition
this just means that n + m is even, which is what we needed to
show. By conditional proof and universal generalization, the claim
is proven.
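Facts like this can be spot-checked by brute force before (or instead of reading) the proof. Here is a small Python sanity check over a sample range; it is evidence, not a substitute for the universal claim:

```python
# Sanity check (not a proof): the sum of two even integers is even.
def is_even(n):
    return n % 2 == 0

# All even integers in a small sample range.
evens = [n for n in range(-20, 21) if is_even(n)]

# Every pairwise sum of sampled evens is again even.
assert all(is_even(n + m) for n in evens for m in evens)
```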

(b) If the product of two numbers is odd, then at least one of the two
numbers is odd.
Precise statement. Let n and m be integers. If n · m is odd, then
either n is odd or m is odd.

Proof. We prove this claim by contrapositive proof. So, suppose


that neither n nor m is odd. Note that since a number is odd iff
it’s not even, this just means that both n and m are even. For
our contrapositive proof, we need to derive that n · m is also not
odd. Since a number is odd iff it’s not even, this means we have
to show that n · m is even. But we’ve already proven that if a
number is even, then its product with any other number is even
(2.3.9, example for conditional proof). So surely, if both n and m
are even, then n · m is even, which is what we needed to show.

How could you have seen that contrapositive proof is a good strat-
egy here? Well, whenever you have a conditional with a disjunction
in the then-part, it’s a good idea to try contrapositive proof.
(c) Every number is either even or odd.
Precise statement. Let n be a natural number. Then n is even or
n is odd (and, in fact, not both).

Proof. We prove this fact indirectly. So, let n be an arbitrary num-


ber and suppose that n is neither even nor odd, meaning n is both
not even and not odd. Since a number is odd iff it’s not even, this
means that n is both even and not even. But that’s a contradic-
tion. So, by indirect proof, it’s not the case that n is neither even
nor odd, meaning that n is either even or odd, as desired.

How could you see that you should use indirect proof to show this?
Well, whenever you try to prove that one of two cases must ob-
tain and you can’t prove them from something you already know,
indirect proof is a good idea.
(d) If you add one to an even number, you get an odd number.
Precise statement. Let n be an integer. If n is even, then n + 1 is
odd.

Proof. Let n be an arbitrary integer and suppose that n is even (for


conditional proof). By definition, this means that there exists an
integer k such that n = 2k. We now need to derive that n + 1 must
be odd. We do this via indirect proof. So, suppose that n+1 is also
even, meaning that there exists an integer l such that n + 1 = 2l.
Since n = 2k and n + 1 = 2l, we can infer that 2k + 1 = 2l. From
this we can infer that 1 = 2l − 2k. So, 1 = 2(l − k). But since l − k
is an integer, this would mean that 1 is an even number. And we
know that 1 isn’t even, so we arrived at a contradiction. We can
therefore conclude that n + 1 is not even, i.e. odd, which is what
we needed to show.

Why can we assume that 1 isn’t even? Well, that’s something that
can itself be proven using the methods of the next chapter. But
for the present purpose it’s fine to assume it. Remember that our
aim is to convince the reader that a purely axiomatic proof exists
and not to provide one ourselves.
(e) The product of two prime numbers is not a prime number.
Precise statement: Let n and m be natural numbers. Then, if n
and m are prime, then n · m is not prime.

Proof. Let n and m be two natural numbers and assume that


both n and m are prime (for conditional proof). This means, by
definition, that n > 1 and m > 1 and there exist no numbers
k, l < n such that n = k · l and there exist no numbers i, j < m
such that m = i · j. We need to show that n · m is not a prime number.
We do this indirectly. For suppose that n · m would be prime. This
would mean 1 < n · m and there exist no numbers a, b < n · m
such that n · m = a · b. But since n > 1 and m > 1, n · m > n
and n · m > m. So, there would be two numbers a and b with
a, b < n · m and n · m = a · b, after all—just let a = n and b = m.
Contradiction. Hence n · m is not prime, as desired.

(f) No prime number bigger than two is the product of an even and
an odd number.

Precise statement. Let n be a natural number. Then if n > 2 and


n is prime, there do not exist numbers k and l such that k is even,
l is odd and n = k · l.

Proof. Suppose that n is a natural number, that n > 2 and that


n is prime. We need to derive that there are no numbers k and
l such that k is even, l is odd and n = k · l. We again do this
indirectly. So suppose that k is even, l is odd, and n = k · l. Now,
since k is even, we know by our earlier observation that n = k · l
is even, too. But we’ve also proved that if n is prime and n > 2,
then n is odd. And since n is prime and n > 2 by assumption, we
get that n is odd. So n has to be both even and odd, which is a
contradiction. Hence there are no numbers k and l such that k is
even, l is odd and n = k · l, which is what we needed to show.

2.7.3 (a) A necessary but not sufficient condition for n to be even is that n
is divisible by at least two numbers k, l (not necessarily different).
If n is even, then there are two such numbers, in fact one of them
is two. But there are numbers which are divisible by two distinct
numbers and not even. E.g. 15 = 3 · 5. Hence the condition is not
sufficient.
(b) A sufficient condition but not necessary for n to be even is that
it’s divisible by four. If n is divisible by four, then it’s divisible
by two and thus even. But there are even numbers which are not
divisible by four, for example, 14.
(c) A necessary and sufficient condition for n to be even is that n is
divisible by two—that’s the definition of being even. A perhaps
more interesting example is the condition that n be divisible by
an even number. If n is divisible by an even number, then it’s even
since an even number is divisible by two and a divisor of a divisor
is a divisor. And if n is even, then by definition n’s divisible by two,
which is an even number. Hence the condition is both necessary
and sufficient for n to be even.

2.7.4 (a) The sum of two even natural numbers is even.


(b) Every natural number is either even or odd.
(c) The sum of the square of a number and that number itself is
always even.
(d) There is no biggest negative real number.

2.7.5

Proposition. Every prime number bigger than two is odd.



Proof (Informal sketch). We prove this fact indirectly. Suppose that


there exists a prime number which is bigger than two and even. Since
the number is even, it must be divisible by two. But by definition,
a prime number cannot be divisible by any number smaller than it
(other than one). Since our prime was supposed to be bigger than two
but at the same time needs to be divisible by two, we arrive at a
contradiction: our prime is both divisible by a number smaller than it
and not. Hence a prime like this, which is bigger than two and even,
cannot exist. So, we can conclude that every prime bigger than two is
odd.
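The proposition can also be checked empirically on an initial segment of the primes; a brute-force Python sketch (the trial-division `is_prime` is deliberately naive):

```python
# Sanity check (not a proof): every prime greater than two is odd.
def is_prime(n):
    # Naive trial division; fine for small n.
    return n > 1 and all(n % d != 0 for d in range(2, n))

primes = [n for n in range(2, 200) if is_prime(n)]

# Every prime in the sample that is bigger than two is odd.
assert all(p % 2 == 1 for p in primes if p > 2)
```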
Appendix C

Chapter 3. Elementary Set


Theory

3.9 Self-Study Questions


3.9.1 (a) is the definition of X ⊆ Y . Note that (d) is equivalent to (a): if
there exists no x ∈ X such that x ∉ Y , then every x ∈ X must also be
in Y . Why? Well, can you find a counterexample? (h) is perhaps the
most difficult to see to be correct. Also here it helps to think whether
you can find a counterexample if (h) holds. Suppose that X ⊈ Y and
that (h) holds. That X ⊈ Y means that there exists an x ∈ X with
x ∉ Y . And (h) says that every x ∉ Y is also such that x ∉ X. This
would mean that some x would need to be both such that x ∈ X and
x ∉ X, which is impossible. Hence X ⊈ Y cannot be and we have to
have X ⊆ Y instead.

3.9.2 (e) is the only correct answer.

3.9.3 (a) is correct by the axiom of extensionality. (b) is not correct, since
we might have, for example, one element which is in both sets (and
hence in the one iff in the other), but another which is only in one
and not the other (and hence the sets are different). (c) is more or less
obviously not enough. (d) can be seen to be correct by the reasoning
from 3.9.1.(d). (e) is not correct since it’s not enough that we can
“pair” the elements, they need to be the same. And (f) is too weak, it
only implies X ⊆ Y .

3.9.4 Remember that two sets are distinct as soon as they have different
members.

3.9.5–3.9.8 The correct answers follow immediately from the definitions of ∩ and
∪. Note that in logic and mathematics, we read “or” inclusively and
therefore (e) is also correct in 3.9.5. Note further that to say that it’s


not the case that one thing or the other is the case is to say that both
are not the case. Similarly, to say that it’s not the case that two things
are the case is to say that at least one of them is not the case. This,
hopefully, helps with 3.9.6 and 3.9.8.

3.9.9 The correct answers follow directly from the conditions on what a
function needs to do.

3.9.10 Remember from 3.6.11 that {f (x) : x ∈ X ′ } for f : X → Y and X ′ ⊆
X is defined as the set {y : there exists an x ∈ X ′ such that y = f (x)}.
This means that an element m is not in the set {n² : n ∈ N and 0 ≤
n ≤ 10} just in case it's not the case that there exists an n ∈ N such
that 0 ≤ n ≤ 10 and m = n². But that's just the same as (b). Note
that (a) is not correct: 4 ∈ {n² : n ∈ N and 0 ≤ n ≤ 10}, since
0 ≤ 2 ≤ 10 and 2² = 4, even though there exist nine numbers
0 ≤ n ≤ 10 with n² ≠ 4: 0² = 0, 1² = 1, 3² = 9, . . . , 10² = 100.

3.10 Exercises
3.10.1 (a) {1, 3}
(b) {1, 2, 3, 5}
(c) X \ Y = {2}, Y \ X = {5}
(d) ℘(X) = {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}
℘(Y ) = {∅, {1}, {3}, {5}, {1, 3}, {1, 5}, {3, 5}, {1, 3, 5}}
(e) X×Y = {(1, 1), (1, 3), (1, 5), (2, 1), (2, 3), (2, 5), (3, 1), (3, 3), (3, 5)}
Y ×X = {(1, 1), (1, 2), (1, 3), (3, 1), (3, 2), (3, 3), (5, 1), (5, 2), (5, 3)}
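The answers to 3.10.1 can be reproduced with Python's built-in set operations; the `powerset` helper and variable names are illustrative, not part of the notes:

```python
from itertools import product

X, Y = {1, 2, 3}, {1, 3, 5}

def powerset(s):
    """All subsets of s, built by extending subsets one element at a time."""
    subsets = [set()]
    for x in s:
        subsets += [sub | {x} for sub in subsets]
    return subsets

assert X & Y == {1, 3}                 # (a) intersection
assert X | Y == {1, 2, 3, 5}           # (b) union
assert X - Y == {2} and Y - X == {5}   # (c) differences
assert len(powerset(X)) == 8           # (d) |℘(X)| = 2³ = 8
assert len(list(product(X, Y))) == 9   # (e) |X × Y| = 3 · 3 = 9
```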

3.10.2 (a) To prove X ⊆ Y iff X ∪ Y = Y , we need to prove both directions.



to prove :
If X ⊆ Y, then X ∪ Y = Y
Suppose X ⊆ Y . We’ll show that X ∪ Y = Y . To show this, we
must prove two things:
(i) Y ⊆ X ∪ Y : from the definition of union it follows that X ∪ Y
consists of all elements in X and all elements in Y , hence
all elements in Y are in X ∪ Y , which is what we needed to
show.
(ii) X ∪ Y ⊆ Y : Let x be an arbitrary element of X ∪ Y . By the
definition of union, we know that x ∈ Y or x ∈ X. We now
make a distinction by cases (see §2.3.9). When x ∈ Y , we are
fine, as we want to show that x ∈ Y . When x ∈ X, we know
by our assumption that X ⊆ Y , that x ∈ Y . Therefore all
x ∈ X ∪ Y are in Y , therefore X ∪ Y ⊆ Y .

Using the axiom of extensionality, we can now conclude that
X ∪ Y = Y .


to prove :
If X ∪ Y = Y , then X ⊆ Y . We’ll prove the contrapositive. We
assume that X ⊈ Y . Hence there must be some element x ∈ X
such that x ∉ Y . As X ∪ Y contains all elements of X, it must
also contain x. As there's one element that is in X ∪ Y and not
in Y , X ∪ Y ≠ Y , which is what we needed to prove.

As we’ve now proved both directions, we’ve proved X ⊆ Y iff


X ∪Y =Y
(b) We aim to show that X ⊆ Y iff X ∩ Y = X. To prove this we
need to prove both sides of the biconditional:
– ⇒. We need to show that if X ⊆ Y then X ∩ Y = X. So,
assume X ⊆ Y . We want to show that X ∩ Y = X. By
extensionality, we must show two things:
* First, we need to show that X ∩Y ⊆ X. So, let an element
x ∈ X ∩ Y . That means, by definition of ∩, that x ∈ X
and x ∈ Y . But then certainly, x ∈ X, as desired.
* Second, we need to show that X ⊆ X ∩ Y . So, let an
element x ∈ X. We have assumed that X ⊆ Y , from
which it follows that x ∈ Y . As such we have x ∈ X and
x ∈ Y , which means, by definition of ∩, that x ∈ X ∩ Y .
By the axiom of extensionality we can conclude X ∩ Y = X.
– ⇐. We need to show that if X ∩ Y = X, then X ⊆ Y . We do
a proof by contraposition. So, assume X ⊈ Y . That means
there is an element x ∈ X for which x ∉ Y . If x ∉ Y , then
x ∉ X ∩ Y . So X ⊈ X ∩ Y (because there is an x ∈ X for
which x ∉ X ∩ Y ), so X ≠ X ∩ Y .
We have now proved both directions, so X ⊆ Y iff X ∩ Y = X.

3.10.3
1. As a list of mappings:

   (1, 1) ↦ 1   (2, 1) ↦ 1   (3, 1) ↦ 1
   (1, 2) ↦ 1   (2, 2) ↦ 2   (3, 2) ↦ 2
   (1, 3) ↦ 1   (2, 3) ↦ 2   (3, 3) ↦ 3

2. As a table (rows give the first argument, columns the second):

   f | 1 2 3
   --+------
   1 | 1 1 1
   2 | 1 2 2
   3 | 1 2 3

3. As a definition by cases: f((x, y)) = x if x < y, and y otherwise.
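All three presentations describe the same function, the minimum of the pair; a quick Python check (the names `f` and `table` are illustrative) that the case definition and the table agree:

```python
# The function from 3.10.3: f((x, y)) = x if x < y, and y otherwise.
def f(pair):
    x, y = pair
    return x if x < y else y

# The table form of the same function: entry at row x, column y is min(x, y).
table = {(x, y): min(x, y) for x in (1, 2, 3) for y in (1, 2, 3)}

# The case definition reproduces every table entry.
assert all(f(p) == v for p, v in table.items())
```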

3.10.5 We want to prove that f (n, m) = n + m for all n, m ∈ N. We use
the principle of mathematical induction. Base case: We need to show
that f (n, 0) = n + 0. By the definition, f (n, 0) = n. It is trivial that
n = n + 0. Induction step: We need to show that for all n, m ∈ N, if
f (n, m) = n + m, then f (n, m + 1) = (n + m) + 1. So, let n, m ∈ N.
Assume the induction hypothesis that f (n, m) = n + m (IH). Consider
f (n, m + 1). By the definition, f (n, m + 1) = f (n, m) + 1. By the IH,
we get f (n, m + 1) = n + m + 1, which is what we had to show. By the
principle of mathematical induction, we conclude that for all n, m ∈ N
it is true that f (n, m) = n + m.
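The recursion in the definition mirrors the induction in the proof; writing it out in Python (a sketch, with the defining equations taken from the solution above) makes that parallel concrete:

```python
def f(n, m):
    """f(n, 0) = n and f(n, m + 1) = f(n, m) + 1, as in 3.10.5."""
    if m == 0:
        return n               # base case of the recursion
    return f(n, m - 1) + 1     # recursion step

# Spot-check the proven identity f(n, m) = n + m on a small grid.
assert all(f(n, m) == n + m for n in range(10) for m in range(10))
```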

3.10.7 (a) l : Gargle → N:
(i) (a) l(♣) = 1
    (b) l(♠) = 1
(ii) (a) l(♦x♦) = l(x) + 2
    (b) l(x♥y) = l(x) + l(y) + 1
(b) 1♥ : Gargle → {0, 1}:
(i) 1♥ (♣) = 1♥ (♠) = 0
(ii) (a) 1♥ (♦x♦) = 1♥ (x)
    (b) 1♥ (x♥y) = 1

3.10.8 We will prove by induction that the number of ♦'s in a gargle is always
even.

(i) Base case 1: ♠ has an even number of ♦'s, as 0 is even. Base case
2: ♣ has an even number of ♦'s, as 0 is even.
(ii) (a) Assume x has an even number of ♦'s. Then ♦x♦ must also
have an even number of ♦'s, as the number of ♦'s of ♦x♦
is the number of ♦'s of x plus 2. We know that an even
number plus 2 is again an even number.
(b) Assume x, y have an even number of ♦'s. Then the number
of ♦'s of x♥y is the number of ♦'s of x plus the number of
♦'s of y, which is an even number, as an even number
added to an even number results in an even number, which
is what we needed to show.

By means of induction we've now proved that the number of ♦'s in a
gargle is always even.
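The inductive argument can be mirrored computationally by generating gargles up to a bounded number of rule applications and counting the fourth symbol (written ♦ here; some renderings of the notes garble it). Strings stand in for gargles; this is a sketch, not part of the notes:

```python
# Gargles as strings over {♣, ♠, ♥, ♦}, per the inductive definition:
# ♣ and ♠ are gargles; if x is a gargle, so is ♦x♦; if x, y are, so is x♥y.
def gargles(depth):
    """All gargles built with at most `depth` rounds of rule applications."""
    gs = {"♣", "♠"}
    for _ in range(depth):
        gs |= {f"♦{x}♦" for x in gs} | {f"{x}♥{y}" for x in gs for y in gs}
    return gs

# Every generated gargle has an even number of ♦'s.
assert all(g.count("♦") % 2 == 0 for g in gargles(2))
```

In the same spirit, the string from 3.10.9 never shows up: `"♠♦♥♠" not in gargles(2)`, since its single ♦ makes an odd count.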

3.10.9 We will prove that ♠♦♥♠ ∉ Gargle. We have just proven that every
gargle has an even number of ♦'s, but this string has exactly one, and
1 = 0 · 2 + 1. Therefore 1 is odd, and not even. So we must conclude
it's not a gargle.
Appendix D

Chapter 4. Syntax of
Propositional Logic

4.7 Self-Study Questions


4.7.1 (a) Well, the formula could be a sentence letter, which is a formula
but contains no parentheses.
(b) This is not true: also a formula that is formed from a single sen-
tence letter by means of some negations, such as ¬p or ¬¬p etc.,
would not contain any parentheses.
(c) This is the only option which is guaranteed to hold: if a formula
contains ∧, ∨, →, ↔, then it needs to contain parentheses.
(d) This is simply not true: ¬p contains an odd number of negations,
but no parentheses.
4.7.2 Strategy (a) is the hardest to apply, which we’ve seen by means of our
example. Strategy (c) will always give the right result but may take a
long time. Strategy (d) is actually included in strategy (c)—just look
at the second step of the algorithm. So, before you employ (c) fully, you
just do (d). Having tried strategy (b) is a prerequisite for applying the
algorithm (as we said before describing the algorithm). But note that
(b) can fail to tell you that something's not a formula even if it isn't:
(p∧¬()) is not a formula, but you actually need to apply the algorithm
to see this; not even parentheses checking will help immediately.
4.7.3 The only correct answer is (d): conventional notation is just that,
conventional, and so we need to make clear that we’re using it.
4.7.4 To see why (b) is not correct, consider the formula φ = (¬¬¬p ∨ ¬p).
It’s easily checked that c(φ) = 4, but there are 5 connectives in φ. To
see that (f) is correct, we can use the Proposition 4.4.5: the complexity
of a formula corresponds to the longest path from the root in its parsing


tree. Since each step in the path goes from one node to another, we have
the starting node, the root, plus at least four other nodes, meaning
five nodes. Note that there can be more than five nodes in the tree,
as you can check by doing the parsing tree for φ, which we used to
explain why (b) is incorrect.

4.8 Exercises
4.8.1 Translation key:

p : Alan Turing built the first computer


q : Ada Lovelace invented the first computer algorithm
r : Today is Monday
s : Alan Turing is your favorite computer scientist
t : Ada Lovelace is your favorite computer scientist
u : Yesterday was Tuesday
v : Tomorrow is Saturday

(a) (p ∧ q)
(b) (r → p)
(c) Inclusive reading: (s ∨ t); Exclusive reading: ((s ∨ t) ∧ ¬(s ∧ t))
(d) (r ↔ (u ∧ v)).

4.8.2 (For now, English only)

(a) It’s not the case that I’m not both happy and clapping my hands.
(b) If I’m not happy, then I don’t clap my hands.
(c) I’m happy if and only if you’re not happy and I clap my hands.
(d) If I clap my hands and you clap your hands, then we both clap
our hands.
(e) If I clap my hands and you clap your hands, then either I’m happy
or you’re happy.
(f) Either I’m happy and clap my hands or you’re happy and clap
your hands.

4.8.3 The set of all formulas of L that only contain (symbols from) p, q, ¬, ∧, (,
and ) is the smallest set X such that:

• p, q ∈ X,
• if φ ∈ X, then ¬φ ∈ X,
• if φ, ψ ∈ X, then (φ ∧ ψ) ∈ X.

4.8.4 (a) (q ↔ (p ∧ (q ∨ (r ∧ ¬s)))

(q ↔ (p ∧ (q ∨ (r ∧ ¬s)))
├── q ✓
└── (p ∧ (q ∨ (r ∧ ¬s))
    ├── p ✓
    └── (q ∨ (r ∧ ¬s)
        ├── q ✓
        └── (r ∧ ¬s ✗

Answer : Not a formula!

(b) ((p ∧ q) ∨ (p ∧ (q → ¬q)))

((p ∧ q) ∨ (p ∧ (q → ¬q)))
├── (p ∧ q)
│   ├── p ✓
│   └── q ✓
└── (p ∧ (q → ¬q))
    ├── p ✓
    └── (q → ¬q)
        ├── q ✓
        └── ¬q
            └── q ✓

Answer : Formula!

(c) (p → (p → ((p ∧ p) ↔ p ∨ p)))

(p → (p → ((p ∧ p) ↔ p ∨ p)))
├── p ✓
└── (p → ((p ∧ p) ↔ p ∨ p))
    ├── p ✓
    └── ((p ∧ p) ↔ p ∨ p)
        ├── (p ∧ p)
        │   ├── p ✓
        │   └── p ✓
        └── p ∨ p ✗

Answer : Not a formula!

(d) ¬¬(¬¬p ∧ (q ∨ q))

¬¬(¬¬p ∧ (q ∨ q))
└── ¬(¬¬p ∧ (q ∨ q))
    └── (¬¬p ∧ (q ∨ q))
        ├── ¬¬p
        │   └── ¬p
        │       └── p ✓
        └── (q ∨ q)
            ├── q ✓
            └── q ✓

Answer : Formula!

4.8.5

(a) #conn : L → N can be defined by:


(i) #conn (p) = 0 for p ∈ P
(ii) (a) #conn (¬φ) = #conn (φ) + 1
(b) #conn ((φ◦ψ)) = #conn (φ) + #conn (ψ) + 1, for ◦ = ∧, ∨, →, ↔.
(b) #( : L → N can be defined by:
(i) #( (p) = 0 for p ∈ P
(ii) (a) #( (¬φ) = #( (φ)
(b) #( ((φ ◦ ψ)) = #( (φ) + #( (ψ) + 1, for ◦ = ∧, ∨, →, ↔.
(c) #P : L → N can be defined by:
(i) #P (p) = 1 for p ∈ P
(ii) (a) #P (¬φ) = #P (φ)
(b) #P ((φ ◦ ψ)) = #P (φ) + #P (ψ), for ◦ = ∧, ∨, →, ↔.
(d) 1p : L → {0, 1} can be defined by
(i) 1p (p) = 1 and 1p (q) = 0 for all q 6= p ∈ P
(ii) (a) 1p (¬φ) = 1p (φ)
(b) 1p ((φ ◦ ψ)) = max(1p (φ), 1p (ψ)), for ◦ = ∧, ∨, →, ↔.
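The recursive definitions in 4.8.5 translate directly into code. Here is #conn as a sketch, over a tuple-based representation of formulas chosen for illustration (the notes define formulas as strings):

```python
# Formulas as nested tuples: a sentence letter is a plain string,
# ('¬', φ) is a negation, and (op, φ, ψ) a binary formula.
def n_conn(phi):
    if isinstance(phi, str):                   # clause (i): sentence letters
        return 0
    if phi[0] == '¬':                          # clause (ii)(a): negation
        return n_conn(phi[1]) + 1
    _, psi, chi = phi                          # clause (ii)(b): binary connective
    return n_conn(psi) + n_conn(chi) + 1

# (¬¬¬p ∨ ¬p), the formula from 4.7.4, has five connectives.
phi = ('∨', ('¬', ('¬', ('¬', 'p'))), ('¬', 'p'))
assert n_conn(phi) == 5
```

The other counting functions (#(, #P, 1p) follow the same pattern: one clause per construction rule.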

4.8.6 (a) This function counts the number of symbols in a formula.


(b) This function counts the number of negations in a formula.
(c) This function assigns one as a value iff the number of negations in
the formula is even.

4.8.7 We want to show that for all formulas φ ∈ L, the number of subformulas
is at most twice the number of connectives plus 1, i.e. |sub(φ)| ≤
2 · #conn (φ) + 1.
We prove this by induction on formulas.

(i) The base case p has exactly one subformula, p itself, and contains
no connectives. So the number of subformulas is no greater than
2 · 0 + 1 = 1.
(ii) a. Take a formula φ ∈ L. By the induction hypothesis, we assume
that the number of subformulas of φ is no greater than
2 · #conn (φ) + 1. We consider the case ¬φ. By the defini-
tion of subformulas (see 4.4.2) we know that sub(¬φ) =
sub(φ) ∪ {¬φ}, so |sub(¬φ)| = |sub(φ)| + 1. Since we add one
negation, #conn (¬φ) = #conn (φ) + 1. Now take the induction
hypothesis:

|sub(φ)| ≤ 2 · #conn (φ) + 1

Substitution gives:

|sub(¬φ)| − 1 ≤ 2 · (#conn (¬φ) − 1) + 1

|sub(¬φ)| ≤ 2 · #conn (¬φ) ≤ 2 · #conn (¬φ) + 1

With this, the claim holds for ¬φ.

b. Consider φ, ψ ∈ L. By the induction hypothesis, we assume
that |sub(φ)| ≤ 2 · #conn (φ) + 1 and |sub(ψ)| ≤ 2 · #conn (ψ) +
1. We consider the case (φ ◦ ψ) with ◦ ∈ {∨, ∧, →, ↔}.
By the definition of subformulas (see 4.4.2) we know that
sub((φ◦ψ)) = sub(φ) ∪ sub(ψ) ∪ {(φ◦ψ)}. The new formula
(φ ◦ ψ) adds at most one element, and there may be overlap
between the subformulas of φ and ψ, so |sub((φ ◦ ψ))| ≤
|sub(φ)| + |sub(ψ)| + 1. Further, #conn ((φ ◦ ψ)) = #conn (φ) +
#conn (ψ) + 1. Start as follows:

|sub((φ ◦ ψ))| ≤ |sub(φ)| + |sub(ψ)| + 1

Substitution gives:

|sub((φ ◦ ψ))| ≤ (2 · #conn (φ) + 1) + (2 · #conn (ψ) + 1) + 1
= 2 · (#conn (φ) + #conn (ψ) + 1) + 1
= 2 · #conn ((φ ◦ ψ)) + 1

With this, the property also holds for (φ ◦ ψ). We have
now proven by induction that the property holds for all
formulas.

4.8.8 We give an informal outline of the argument. This can be made precise
using the function #( from 4.8.5 and an analogously defined function #) .
Claim. The number of ('s and )'s in a formula φ is always the same.

Proof. We prove this by induction.

(i) For the base case, note that the number of ('s and the number of
)'s in any sentence letter p are both zero.
(ii) (a) Assume the induction hypothesis that the numbers of ( and )
in φ are the same. Consider ¬φ. Note that the number of ('s in
¬φ is the same as in φ, and the number of )'s in ¬φ is the
same as in φ (no new parentheses have been added). Hence,
the numbers of ( and ) in ¬φ are also the same.
(b) Assume the induction hypotheses that the numbers of ( and ) in
φ are the same and the numbers of ( and ) in ψ are the same.
Denote the number of (’s in φ by n, the number of )’s in φ by
m, the number of (’s in ψ by k, and the number of )’s in ψ by
l. We have n = m and k = l. Consider (φ ◦ ψ). The number
of (’s in (φ ◦ ψ) is n + k + 1. The number of )’s in (φ ◦ ψ) is
m + l + 1. Since n = m and k = l, n + k + 1 = m + l + 1, as
desired.
We conclude our claim by induction on formulas.
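For concrete formula strings, the claim amounts to a simple character count, which can be checked directly (a sanity check on a few of the formulas from 4.8.9; not a proof):

```python
def balanced(formula):
    """True iff the formula string has equally many '(' and ')'."""
    return formula.count('(') == formula.count(')')

assert all(balanced(s) for s in [
    '(¬p ∧ q)',
    '¬((p ∧ q) → (¬p ∨ ¬q))',
    '((p ∨ q) → (¬r ∧ (s ↔ p)))',
])
```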

4.8.9 (a) (¬p ∧ q)


(b) ¬((p ∧ q) → (¬p ∨ ¬q))
(c) ((p ∨ p) ↔ ¬p)
(d) ((p ∨ q) ∧ r)
(e) ((p → p) ↔ (p → p))
(f) (¬p ∧ (((q ∨ r) → p) ↔ q))
(g) (p ∧ (p ∨ q))
(h) ((p → (q ∨ q)) ↔ r)
(i) ((p → q) ↔ (¬q → ¬p))
(j) ¬¬¬p
(k) ((p → p) ↔ (p ∨ ¬p))
(l) ((p ∨ q) → (¬r ∧ (s ↔ p)))

4.8.10

(a) p ∧ q

(b) ¬¬q
(c) p ∧ (r ∨ q)
(d) p → (r ∨ (p ∧ (q ↔ r)))
(e) p ∨ ¬(p ∨ q)
(f) p ∧ q → r
(g) p ∨ q → ¬q ↔ r
(h) p ∧ q ∧ r
(i) p ∧ q ∧ r
(j) p ∨ q ∨ r
(k) p ∧ (q ∨ r)
(l) p ∧ (q → r)
Appendix E

Chapter 5. Semantics for


Propositional Logic

5.5 Self-Study Questions


5.5.1 We can't predict the number of rows because we don't know how many
sentence letters there are in the formula. (Well, to be super precise,
there will be at least 2¹ = 2 rows and at most 2⁴ = 16 rows. The former
in case there is only one letter, the latter in case there are 4, which is
the most you can have with 3 connectives.)

5.5.2 (a) would mean that φ ∧ ψ = p ∧ ¬p and we've checked that p ∧ ¬p is
a contradiction. (b) is the only sensible option and it's easily checked
that if ⊨ φ and ⊨ ψ, then ⊨ φ ∧ ψ. (c) would mean that φ ∧ ψ is also
of the form φ ∨ ψ, which is impossible. (d) doesn't make sense.

5.5.3 (a) entails that ⊨ φ → ψ by the deduction theorem. (b) entails that
⊨ φ → ψ by the observation that if JψKv = 1 for all valuations, then also
Jφ → ψKv = max(1 − JφKv , JψKv ) = 1 for all valuations v. (c) follows
from the fact that ¬ψ → ¬φ ⊨ φ → ψ by the law of contraposition.
And (d) follows from the observation that if J¬φKv = 1 for all
valuations v, then JφKv = 0 and so Jφ → ψKv = max(1 − JφKv , JψKv ) = 1
for all valuations v.

5.6 Exercises
5.6.2 (xv) We want to show that for all φ, ψ, and θ, we have

φ ∨ (ψ ∧ θ) ⫤⊨ (φ ∨ ψ) ∧ (φ ∨ θ)

We'll prove this directly. We know that φ ⫤⊨ ψ iff for all valua-
tions v, JφKv = JψKv (see Proposition 5.2.5).


Let’s apply the truth functions to our formulas with an arbitrary


valuation v:

Jφ ∨ (ψ ∧ θ)Kv = max(JφKv , Jψ ∧ θKv )


= max(JφKv , min(JψKv , JθKv ))

J(φ ∨ ψ) ∧ (φ ∨ θ)Kv = min(Jφ ∨ ψKv , Jφ ∨ θKv )


= min(max(JφKv , JψKv ) , max(JφKv , JθKv ))

We have to prove that for all x, y, z ∈ {0, 1}, we have:

max(x, min(y, z)) = min(max(x, y), max(x, z))

We distinguish two cases: (i) x = 0 and (ii) x = 1.


(i) Assume x = 0. Then we have max(x, y) = y and max(x, z) =
z. Substituting this into min(max(x, y), max(x, z)) gives us
min(max(x, y), max(x, z)) = min(y, z). And if x = 0 we also
know that max(x, min(y, z)) = min(y, z). It follows that
max(x, min(y, z)) = min(max(x, y), max(x, z)).
So, in case (i) the claimed equation holds.
(ii) Assume that x = 1. It follows that max(x, min(y, z)) =
x = 1. We also get that max(x, y) = max(x, z) = x = 1.
So, min(max(x, y), max(x, z)) = 1. It follows immediately
that max(x, min(y, z)) = min(max(x, y), max(x, z)), which
is what we wanted to show.
So, what we’ve shown is that for all values x, y, z ∈ {0, 1}, our
equation holds. This gives us quickly that for all valuations v :
P → {0, 1}, we have that Jφ ∨ (ψ ∧ θ)Kv = J(φ ∨ ψ) ∧ (φ ∨ θ)Kv ,
which entails our claim.
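Since the truth values range over {0, 1}, the combinatorial identity at the heart of this proof can also be verified exhaustively over all eight triples; a one-line Python check:

```python
from itertools import product

# Exhaustive check: max(x, min(y, z)) = min(max(x, y), max(x, z)) on {0, 1}.
assert all(
    max(x, min(y, z)) == min(max(x, y), max(x, z))
    for x, y, z in product((0, 1), repeat=3)
)
```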

5.6.3 (iii) Claim. For all Γ, ∆ ⊆ L and φ ∈ L, we have: if Γ ⊨ φ, then
Γ ∪ ∆ ⊨ φ.

Proof. We prove this by conditional proof. So let Γ, ∆ ⊆ L and


φ ∈ L be arbitrary and assume that Γ ⊨ φ. This means that for
all valuations v, if v makes all the members of Γ true, then v
makes φ true. We want to show that Γ ∪ ∆ ⊨ φ, i.e. if a valuation
v makes all the members of Γ ∪ ∆ true, then v makes φ true.
So let v be an arbitrary valuation that makes
all the members of Γ ∪ ∆ true. Since Γ ⊆ Γ ∪ ∆, it follows that
v makes all the members of Γ true. But by assumption (Γ ⊨ φ),
this means that v makes φ true, which is what we needed to
show.

(xiii) Claim. (φ ∨ ψ) ∨ θ ⫤⊨ φ ∨ (ψ ∨ θ)

Proof. There are different ways of proving this. The easiest is


via Proposition 5.2.5. Remember that this Proposition states
that φ ⫤⊨ ψ iff for all valuations v, JφKv = JψKv . Now con-
sider J(φ ∨ ψ) ∨ θKv and Jφ ∨ (ψ ∨ θ)Kv for an arbitrary val-
uation v. We have J(φ ∨ ψ) ∨ θKv = max(J(φ ∨ ψ)Kv , JθKv ) =
max(max(JφKv , JψKv ), JθKv ) and Jφ ∨ (ψ ∨ θ)Kv = max(JφKv , Jψ ∨
θKv ) = max(JφKv , max(JψKv , JθKv )). The claim follows immedi-
ately from the (combinatoric) observation that for all x, y, z ∈
{0, 1}, max(max(x, y), z) = max(x, max(y, z)). This observation
can be proven in many ways, here’s one. We distinguish two cases:
(a) x = y = z = 0 and (b) at least one of x, y, z is 1. In case (a),

max(max(x, y), z) = max(max(0, 0), 0) = max(0, 0) = 0

max(x, max(y, z)) = max(0, max(0, 0)) = max(0, 0) = 0


If at least one of x, y, z is 1, then either max(x, y) = 1 or z = 1,
hence max(max(x, y), z) = 1. Similarly, if at least one of x, y, z is
1, then either x = 1 or max(y, z) = 1, so max(x, max(y, z)) = 1.
Hence, either way, max(max(x, y), z) = max(x, max(y, z)), as
desired.
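As with distributivity, the associativity of max used in this proof is a finite combinatorial fact and can be verified exhaustively; a quick Python check:

```python
from itertools import product

# Exhaustive check: max(max(x, y), z) = max(x, max(y, z)) on {0, 1}.
assert all(
    max(max(x, y), z) == max(x, max(y, z))
    for x, y, z in product((0, 1), repeat=3)
)
```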

(xv) Claim. φ ∨ (ψ ∧ θ) ⫤⊨ (φ ∨ ψ) ∧ (φ ∨ θ)

Proof. Again, there are several ways of doing this. We show this
directly, i.e. we show that (1) φ ∨ (ψ ∧ θ)  (φ ∨ ψ) ∧ (φ ∨ θ) and
(2) (φ ∨ ψ) ∧ (φ ∨ θ)  φ ∨ (ψ ∧ θ) from definitions. We proceed
in turn:
1. We want to show that φ ∨ (ψ ∧ θ)  (φ ∨ ψ) ∧ (φ ∨ θ), i.e. if
φ ∨ (ψ ∧ θ) is true under a valuation v, then (φ ∨ ψ) ∧ (φ ∨ θ)
is also true under v. So, suppose that Jφ ∨ (ψ ∧ θ)Kv = 1,
for an arbitrary valuation v. We know that Jφ ∨ (ψ ∧ θ)Kv =
max(JφKv , J(ψ ∧ θ)Kv ) = max(JφKv , min(JψKv , JθKv )). Since Jφ ∨
(ψ ∧ θ)Kv = 1, we can distinguish two cases: (a) JφKv = 1 or (b)
min(JψKv , JθKv ) = 1. In case (a), we can infer that Jφ ∨ ψKv =
max(JφKv , JψKv ) = 1 and Jφ ∨ θKv = max(JφKv , JθKv ) = 1.
Hence J(φ ∨ ψ) ∧ (φ ∨ θ)Kv = min(Jφ ∨ ψKv , Jφ ∨ θKv ) = 1,
as desired. In case (b), we can infer that both JψKv = 1 and
JθKv = 1. Hence both Jφ ∨ ψKv = max(JφKv , JψKv ) = 1 and
Jφ ∨ θKv = max(JφKv , JθKv ) = 1; so we get J(φ ∨ ψ) ∧ (φ ∨
θ)Kv = min(Jφ ∨ ψKv , Jφ ∨ θKv ) = 1, again as desired. So, in
either case, J(φ ∨ ψ) ∧ (φ ∨ θ)Kv = 1, proving our claim that
φ ∨ (ψ ∧ θ)  (φ ∨ ψ) ∧ (φ ∨ θ).

2. We want to show that (φ∨ψ)∧(φ∨θ)  φ∨(ψ∧θ). So, suppose


that J(φ ∨ ψ) ∧ (φ ∨ θ)Kv = 1 for some arbitrary valuation
v. Since J(φ ∨ ψ) ∧ (φ ∨ θ)Kv = min(Jφ ∨ ψKv , Jφ ∨ θKv ), it
follows that both Jφ ∨ ψKv = 1 and Jφ ∨ θKv = 1. We can
easily see that there are two cases to consider, (a) JφKv = 1
and (b) both JψKv = 1 and JθKv = 1. In case (a), we easily get
Jφ ∨ (ψ ∧ θ)Kv = max(JφKv , J(ψ ∧ θ)Kv ) = 1. In case (b), we first
note that we get Jψ ∧ θKv = min(JψKv , JθKv ) = 1. From this,
by Jφ ∨ (ψ ∧ θ)Kv = max(JφKv , J(ψ ∧ θ)Kv ), it quickly follows
that Jφ ∨ (ψ ∧ θ)Kv = 1. So, either way, Jφ ∨ (ψ ∧ θ)Kv = 1,
which is what we needed to show.
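Since there are only finitely many joint truth values of φ, ψ and θ, both directions of the proof can be double-checked by brute force. A small Python sketch (my own encoding, using max for ∨ and min for ∧ as in the text):

```python
from itertools import product

# Truth clauses from the text: disjunction is max, conjunction is min.
def lhs(p, q, r):
    return max(p, min(q, r))          # ⟦φ ∨ (ψ ∧ θ)⟧

def rhs(p, q, r):
    return min(max(p, q), max(p, r))  # ⟦(φ ∨ ψ) ∧ (φ ∨ θ)⟧

# The two sides agree under every assignment of {0, 1}-values.
equivalent = all(lhs(*v) == rhs(*v) for v in product((0, 1), repeat=3))
print(equivalent)  # True
```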

5.6.5 We prove this claim by contradiction. Suppose that there is a valuation


v, such that JφKv = 1 for all φ ∈ L. It follows that JpKv = 1 and
J¬pKv = 1 (since both p, ¬p ∈ L). But then, also J¬pKv = 1 − JpKv = 0.
Contradiction! Hence there is no valuation v, such that JφKv = 1 for
all φ ∈ L.

5.6.7 (a) p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r)

Parsing Tree:
p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r)

p ∨ (q ∧ r) (p ∨ q) ∧ (p ∨ r)

p q∧r p∨q p∨r

q r p q p r

Truth Table:
p q r q∧r p ∨ (q ∧ r) p∨q p∨r (p ∨ q) ∧ (p ∨ r) p ∨ (q ∧ r) ↔ (p ∨ q) ∧ (p ∨ r)
1 1 1 1 1 1 1 1 1
1 1 0 0 1 1 1 1 1
1 0 1 0 1 1 1 1 1
1 0 0 0 1 1 1 1 1
0 1 1 1 1 1 1 1 1
0 1 0 0 0 1 0 0 1
0 0 1 0 0 0 1 0 1
0 0 0 0 0 0 0 0 1
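The final column of the table can also be recomputed mechanically. A Python sketch (my own encoding of the truth functions) that rebuilds the rows in the same order as the table:

```python
from itertools import product

def iff(a, b):                # ⟦φ ↔ ψ⟧: 1 exactly when both sides agree
    return 1 if a == b else 0

# Rows in the same order as in the table (rows with 1s first).
rows = []
for p, q, r in product((1, 0), repeat=3):
    left = max(p, min(q, r))              # p ∨ (q ∧ r)
    right = min(max(p, q), max(p, r))     # (p ∨ q) ∧ (p ∨ r)
    rows.append((p, q, r, iff(left, right)))

print(all(row[3] == 1 for row in rows))  # True: the final column is all 1s
```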

(g) (¬p ∨ q) → (q ∧ (p ↔ q))

(¬p ∨ q) q ∧ (p ↔ q)

¬p q q p↔q

p p q
p q p↔q ¬p ¬p ∨ q q ∧ (p ↔ q) (¬p ∨ q) → (q ∧ (p ↔ q))
1 1 1 0 1 1 1
1 0 0 0 0 0 1
0 1 0 1 1 0 0
0 0 1 1 1 0 0
(q) p → (q → (r → (¬p → (¬q → ¬r))))

p (q → (r → (¬p → (¬q → ¬r))))

q (r → (¬p → (¬q → ¬r)))

r ¬p → (¬q → ¬r)

¬p ¬q → ¬r

p ¬q ¬r

q r
Truth table on next page:
p q r ¬q ¬r ¬q → ¬r ¬p ¬p → (¬q → ¬r) r → (¬p → (¬q → ¬r)) q → (r → (¬p → (¬q → ¬r))) p → (q → (r → (¬p → (¬q → ¬r))))
1 1 1 0 0 1 0 1 1 1 1
1 1 0 0 1 1 0 1 1 1 1
1 0 1 1 0 0 0 1 1 1 1
1 0 0 1 1 1 0 1 1 1 1
0 1 1 0 0 1 1 1 1 1 1
0 1 0 0 1 1 1 1 1 1 1
0 0 1 1 0 0 1 0 0 1 1
0 0 0 1 1 1 1 1 1 1 1

(s) p ∧ (¬p ∨ q) → (r → ¬q) ∧ (p → r)

p ∧ (¬p ∨ q) (r → ¬q) ∧ (p → r)

p ¬p ∨ q r → ¬q p → r

¬p q r ¬q p r

p q
Truth-table on next page.
p q r ¬p ¬q ¬p ∨ q r → ¬q p→r p ∧ (¬p ∨ q) (r → ¬q) ∧ (p → r) p ∧ (¬p ∨ q) → (r → ¬q) ∧ (p → r)
1 1 1 0 0 1 0 1 1 0 0
1 1 0 0 0 1 1 0 1 0 0
1 0 1 0 1 0 1 1 0 1 1
1 0 0 0 1 0 1 0 0 0 1
0 1 1 1 0 1 0 1 0 0 1
0 1 0 1 0 1 1 1 0 1 1
0 0 1 1 1 1 1 1 0 1 1
0 0 0 1 1 1 1 1 0 1 1

5.6.8 (a) We know that p ∴ p ∨ (p ∧ q) is valid iff p  p ∨ (p ∧ q). By the


deduction theorem, the latter is equivalent to  p → (p ∨ (p ∧ q)).
This we can check via truth-tables. Here we go:

Parsing Tree:
p → (p ∨ (p ∧ q))

p p ∨ (p ∧ q)

p p∧q

p q

Truth Table:
p q p∧q p ∨ (p ∧ q) p → (p ∨ (p ∧ q))
1 1 1 1 1
1 0 0 1 1
0 1 0 0 1
0 0 0 0 1

Since there are only 1’s in the final column, the formula is a logical
truth and the argument therefore valid.
Appendix F

Chapter 6. Tableaux
Propositional Logic

6.6 Exercises
6.6.1 One way of putting it is as saying that an inference is valid iff the
premises together with the negation of the conclusion are unsatisfiable
or inconsistent.

6.6.2 (a) Suppose for contradiction that there is a valuation v such that
J¬(p → q)Kv = 1 and J¬(q → p)Kv = 1. J¬(p → q)Kv = 1 would
mean that Jp → qKv = 0, from which would follow that JpKv = 1
and JqKv = 0. (An implication is not true iff the first argument
is true and the second argument is false). For J¬(q → p)Kv = 1
to hold, J(q → p)Kv = 0 must be true, from which follows that
JqKv = 1 and JpKv = 0. As we’d already established that JpKv = 1
and JqKv = 0, this is a contradiction. Hence such a v cannot exist,
so the set is unsatisfiable.
(b) This follows immediately from our observation 5.2.11.(i) that 
p ∨ ¬p. This means that for all v, Jp ∨ ¬pKv = 1. Now suppose
that there is a valuation v such that J¬(p ∨ ¬p)Kv = 1. Since
J¬(p ∨ ¬p)Kv = 1 − Jp ∨ ¬pKv , it would follow that Jp ∨ ¬pKv = 0
in contradiction to 5.2.11.(i). Hence, there is no such valuation v
and {¬(p ∨ ¬p)} is unsatisfiable.
(c) Suppose for contradiction that there is a valuation v such that
J¬pKv = 1 and J¬p → pKv = 1. From J¬pKv = 1 follows that JpKv =
0 (∗). From J¬p → pKv = 1 follows that max(1 − J¬pKv , JpKv ) = 1.
So either 1−J¬pKv must be 1 or JpKv must be 1. By (∗) follows that
the latter is not the case, so 1 − J¬pKv = 1 must hold. This would
mean that J¬pKv = 0. But we’ve already assumed that J¬pKv = 1.

APPENDIX F. CHAPTER 6. TABLEAUX PROPOSITIONAL LOGIC325

Contradiction! Hence such a v cannot exist, so the set is unsatis-


fiable.
(d) Suppose for contradiction that there is a v such that J¬pKv = 1
and J(p → q) → pKv = 1. Since J¬pKv = 1 and J¬pKv = 1 −
JpKv , it follows that JpKv = 0. Now consider J(p → q) → pKv .
We know that J(p → q) → pKv = max(1 − Jp → qKv , JpK) =
max(1−max(1−JpKv , JqKv ), JpK). Since JpKv = 0, we can infer that
max(1−max(1−JpKv , JqKv ), JpK) = max(1−max(1−0, JqKv ), 0) =
max(1 − max(1, JqKv ), 0) = max(1 − 1, 0) = max(0, 0) = 0, in
contradiction to J(p → q) → pKv = 1. Hence there can be no such
v and our set is unsatisfiable.
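Arguments like the one for (d) can be double-checked by enumerating all valuations. A Python sketch (my own encoding of the truth clauses) for the set {¬p, (p → q) → p}:

```python
from itertools import product

def impl(a, b):          # ⟦φ → ψ⟧ = max(1 − ⟦φ⟧, ⟦ψ⟧)
    return max(1 - a, b)

def neg(a):              # ⟦¬φ⟧ = 1 − ⟦φ⟧
    return 1 - a

# The set {¬p, (p → q) → p} is unsatisfiable iff no valuation makes both 1.
satisfiable = any(
    neg(p) == 1 and impl(impl(p, q), p) == 1
    for p, q in product((0, 1), repeat=2)
)
print(satisfiable)  # False
```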

6.6.3 We need to show two things: (a) if  ¬(φ1 ∧. . .∧φn ), then {φ1 , . . . , φn }
is unsatisfiable, and (b) if {φ1 , . . . , φn } is unsatisfiable, then  ¬(φ1 ∧
. . . ∧ φn ). We do so in turn:

(a) Suppose that  ¬(φ1 ∧ . . . ∧ φn ). This means that for each v,


J¬(φ1 ∧ . . . ∧ φn )Kv = 1. Since J¬(φ1 ∧ . . . ∧ φn )Kv = 1 − J(φ1 ∧
. . . ∧ φn )Kv , we can infer that J(φ1 ∧ . . . ∧ φn )Kv = 0. Now consider
J(φ1 ∧ . . . ∧ φn )Kv . We can easily show that J(φ1 ∧ . . . ∧ φn )Kv =
min(Jφ1 Kv , . . . , Jφn Kv ) (exercise: prove this by induction on natural
numbers). Since min(Jφ1 Kv , . . . , Jφn Kv ) = 0, it follows that some
φi for 1 ≤ i ≤ n must be such that Jφi Kv = 0. But that just means
that for each valuation v, some φi must be false. Hence there can
be no valuation that makes all the members of {φ1 , . . . , φn } true,
which is what we needed to show.
(b) Suppose that {φ1 , . . . , φn } is unsatisfiable, i.e. there is no valuation
that makes all the members of {φ1 , . . . , φn } true. We wish to
derive that  ¬(φ1 ∧. . .∧φn ), i.e. for all valuations v, ¬(φ1 ∧. . .∧φn )
is true. So let v be arbitrary. Since {φ1 , . . . , φn } is unsatisfiable,
it follows that that some φi for 1 ≤ i ≤ n must be such that
Jφi Kv = 0. Now consider J(φ1 ∧ . . . ∧ φn )Kv . We’ve already observed
that J(φ1 ∧ . . . ∧ φn )Kv = min(Jφ1 Kv , . . . , Jφn Kv ). Since there is a
φi such that Jφi Kv = 0, it follows that min(Jφ1 Kv , . . . , Jφn Kv ) = 0.
Since J¬(φ1 ∧ . . . ∧ φn )Kv = 1 − J(φ1 ∧ . . . ∧ φn )Kv , it immediately
follows that J¬(φ1 ∧ . . . ∧ φn )Kv = 1, as desired.
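The equivalence can be illustrated on a concrete instance, e.g. the set from 6.6.2(a). A Python sketch (my own encoding) checking that unsatisfiability of {¬(p → q), ¬(q → p)} and tautologyhood of ¬(¬(p → q) ∧ ¬(q → p)) coincide:

```python
from itertools import product

def impl(a, b):
    return max(1 - a, b)

def neg(a):
    return 1 - a

rows = list(product((0, 1), repeat=2))
# No valuation makes both members of the set true:
unsat = not any(neg(impl(p, q)) == 1 and neg(impl(q, p)) == 1 for p, q in rows)
# The negated conjunction is true under every valuation:
taut = all(neg(min(neg(impl(p, q)), neg(impl(q, p)))) == 1 for p, q in rows)
print(unsat, taut)  # True True: the two notions coincide here
```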

6.6.4 (a) p → q, r → q ` (p ∨ r) → q

p→q
r→q
¬((p ∨ r) → q)

p∨r

¬q

¬p q

p r 7

7 ¬r q
7 7

(b) p → (q ∧ r), ¬r ` ¬p

p → (q ∧ r)
¬r
¬¬p

¬p q∧r
7
q

r
7

(c) ((p → q) → q) → q

¬(((p → q) → q) → q)

((p → q) → q)

¬q

¬(p → q) q
7
p

¬q

Counter-model: v(p) = 1, v(q) = 0.



(d) ` ((p → q) ∧ (¬p → q)) → ¬p


¬(((p → q) ∧ (¬p → q)) → ¬p)

((p → q) ∧ (¬p → q))

¬¬p

p→q

¬p → q

¬p q

7 ¬¬p q

Counter-model: v(p) = 1, v(q) = 1


(e) p ↔ (q ↔ r) ` (p ↔ q) ↔ r

p ↔ (q ↔ r)
¬((p ↔ q) ↔ r)

p ¬p

(q ↔ r) ¬(q ↔ r)

(p ↔ q) ¬(p ↔ q) (p ↔ q) ¬(p ↔ q)

¬r r ¬r r

q ¬q q ¬q q ¬q q ¬q

r ¬r r ¬r ¬r r ¬r r

7 p ¬p p ¬p 7 p ¬p 7 7 p ¬p

q ¬q ¬q q q ¬q ¬q q
7 7 7 7 7 7 7 7

(f) ¬(p → q) ∧ ¬(p → r) ` ¬q ∨ ¬r

¬(p → q) ∧ ¬(p → r)
¬(¬q ∨ ¬r)

¬(p → q)

¬(p → r)

¬¬q

¬¬r

¬q
7

(g) p ∧ (¬r ∨ s), ¬(q → s) ` r

p ∧ (¬r ∨ s)
¬(q → s)
¬r

(¬r ∨ s)

¬s

¬r s
7

Counter-model: v(p) = 1, v(q) = 1, v(r) = 0, v(s) = 0.
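A counter-model read off an open branch can be verified mechanically. A Python sketch (my own encoding) for the valuation above:

```python
# Verify the counter-model for (g): the premises p ∧ (¬r ∨ s) and
# ¬(q → s) come out true, while the conclusion r comes out false.
def neg(a):
    return 1 - a

def impl(a, b):
    return max(1 - a, b)

v = {"p": 1, "q": 1, "r": 0, "s": 0}
premise1 = min(v["p"], max(neg(v["r"]), v["s"]))  # p ∧ (¬r ∨ s)
premise2 = neg(impl(v["q"], v["s"]))              # ¬(q → s)
print(premise1, premise2, v["r"])  # 1 1 0
```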



(h) ` (p → (q → r)) → (q → (p → r))

¬((p → (q → r)) → (q → (p → r)))

(p → (q → r))

¬(q → (p → r))

¬(p → r)

¬r

¬p (q → r)

7 ¬q r
7 7

(i) ¬(p ∧ ¬q) ∨ r, p → (r ↔ s) ` p ↔ q


(See the last page of this appendix. This was a tough one!)

(j) p ↔ ¬¬q, ¬q → (r ∧ ¬s), s → (p ∨ q) ` (s ∧ q) → p


(Note: Here I made a short tableau by strategically applying the
rules in a certain order. You might get another tableau if you use
the rules in a different order.)

p ↔ ¬¬q
¬q → (r ∧ ¬s)
s → (p ∨ q)
¬((s ∧ q) → p)

s∧q

¬p

¬s p∨q

7 p q

7 p ¬p

¬¬q ¬¬¬q
7
¬q
7
(Tableau for (i):)
¬(p ∧ ¬q) ∨ r
p → (r ↔ s)
¬(p ↔ q)

¬(p ∧ ¬q) r

¬p (r ↔ s) ¬p (r ↔ s)

p ¬p p ¬p p ¬p p ¬p

¬q q ¬q q ¬q q ¬q q

7 ¬p ¬¬q ¬p ¬¬q ¬p ¬¬q 7 r ¬r r ¬r

q 7 7 r ¬r

s ¬s s ¬s
7 7
¬s

s q

r ¬r

s ¬s

Counter-model for (i): v(p) = 0, v(q) = 1, v(r) = 0.


Appendix G

Chapter 7. Soundness and


Completeness

7.6 Self-Study Questions


7.6.1 (a) contradicts completeness, not soundness; (b) is in direct contradic-
tion to soundness; (c) this just means that the set is not consistent,
in classical propositional logic, we have, for example, {p, ¬p} ` p and
{p, ¬p} ` ¬p, (d) this would quickly lead to a contradiction if the cal-
culus were sound: if you can derive any formula whatsoever from Γ,
you get Γ ` p ∧ ¬p; by soundness Γ  p ∧ ¬p; by Γ being satisfiable,
say via v, we get that Jp ∧ ¬pKv = 1; contradiction.

7.6.2 (a) directly contradicts completeness; (b) doesn’t contradict complete-


ness but soundness (note that a calculus can be complete but not
sound: take the trivial calculus in which everything can be derived
from everything; this calculus is certainly complete but not sound);
(c) every logical truth follows from every set (you can see this from
the definition of logical truth ∅  φ and monotonicity (5.2.6.(iii)); (d)
this is perfectly fine since there are sets from which both a formula
and its negation do follow (see above).

7.7 Exercises
7.7.3 (a) Claim. Every proof-theoretically inconsistent set is unsatisfiable.

Proof. Suppose that Γ is proof-theoretically inconsistent, i.e. there


exists a φ such that Γ ` φ and Γ ` ¬φ. By the soundness theorem,
we can infer that Γ  φ and Γ  ¬φ. We now show that Γ is
unsatisfiable using indirect proof. For suppose that Γ is satisfiable,
i.e. there exists a v such that all the formulas in Γ are true under

APPENDIX G. CHAPTER 7. SOUNDNESS AND COMPLETENESS334

v. Since Γ  φ, it follows that JφKv = 1. And since Γ  ¬φ, it


follows that J¬φKv = 1. But since J¬φKv = 1 − JφKv = 1, we can
infer that JφKv = 0. Contradiction. Hence Γ must be unsatisfiable,
which is what we needed to show.

(b) Claim. Every proof-theoretically consistent set is satisfiable.

Proof. We’re going to prove the contrapositive, i.e. that if Γ is


unsatisfiable, then Γ is proof-theoretically inconsistent. Suppose
that Γ is unsatisfiable. We’re going to show that it follows that
Γ  p for some arbitrary p ∈ P. To see this, note that if Γ is
unsatisfiable, since Γ ⊆ Γ∪{¬p}, it follows by Proposition 6.2.5.(c)
that Γ ∪ {¬p} is also unsatisfiable. But this, by Theorem 6.2.6, is
equivalent to Γ  p. Similarly, we can see that Γ  ¬p. For since
Γ is unsatisfiable and Γ ⊆ Γ ∪ {¬¬p}, we have that Γ ∪ {¬¬p}
is unsatisfiable, which gives us Γ  ¬p. So, we have Γ  p and
Γ  ¬p. By the completeness theorem, this means that Γ ` p and
Γ ` ¬p, which means that Γ is proof-theoretically inconsistent and
is what we wanted to show.
Appendix H

Chapter 8. Syntax for


First-Order Logic

8.9 Exercises
8.9.1 (i) (a) sub(R(t1 , . . . , tn )) = {R(t1 , . . . , tn )} for all Rn ∈ R and
t1 , . . . , tn ∈ T .
(b) sub(t1 = t2 ) = {t1 = t2 } for all t1 , t2 ∈ T .
(ii) (a) sub(¬φ) = {¬φ} ∪ sub(φ) for all φ ∈ L.
(b) sub((φ ◦ ψ)) = {(φ ◦ ψ)} ∪ sub(φ) ∪ sub(ψ) for all φ, ψ ∈ L.
(c) sub(Qxφ) = {Qxφ} ∪ sub(φ) for all φ ∈ L and Q = ∀, ∃.
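The recursive clauses above can be implemented directly. A Python sketch, using a hypothetical tuple encoding of formulas (the constructor names are my own, not the notes'):

```python
# Formulas as nested tuples: ("atom", R, terms), ("eq", t1, t2),
# ("not", φ), ("bin", ◦, φ, ψ), ("forall"/"exists", x, φ).
def sub(phi):
    kind = phi[0]
    if kind in ("atom", "eq"):            # base cases (i)
        return {phi}
    if kind == "not":                     # clause (ii)(a)
        return {phi} | sub(phi[1])
    if kind == "bin":                     # clause (ii)(b)
        return {phi} | sub(phi[2]) | sub(phi[3])
    if kind in ("forall", "exists"):      # clause (ii)(c)
        return {phi} | sub(phi[2])
    raise ValueError(kind)

# sub(∃x P(x)) = {∃x P(x), P(x)}, matching exercise 8.9.6 below.
formula = ("exists", "x", ("atom", "P", ("x",)))
print(sub(formula))
```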

8.9.2 Suppose φ is a formula and x is the only free variable in φ, i.e. no


occurrence of x in φ is bound by a quantifier. Now consider Qxφ. By
definition, the root of the corresponding parsing tree is the occurrence
hr, Qxi. Since it is the root, there is a path from it to every occurrence
of x, so every occurrence of x is now bound by hr, Qxi. Since x was the
only free variable, Qxφ is now closed.

8.9.6 No, this is not the case. Consider ∃xP (x). This is a sentence since
all the variables it contains are bound (the x is bound by the ∃x).
The set of sub-formulas is sub(∃xP (x)) = {∃xP (x), P (x)}. P (x) is a
sub-formula; however, it is not a sentence since it contains an unbound
variable, namely x.

8.9.9 (i)
(∀x(R(x, y) → ∃yR(y, y)))[y := x]
∀x((R(x, y) → ∃yR(y, y)))[y := x]
∀x((R(x, y))[y := x] → (∃yR(y, y))[y := x])

APPENDIX H. CHAPTER 8. SYNTAX FOR FIRST-ORDER LOGIC336

∀x(R((x)[y := x], (y)[y := x]) → ∃yR(y, y))


∀x(R(x, x) → ∃yR(y, y))

Translation key for exercises 8.9.11 to 8.9.13:


Denotation Meaning
0 the number 0
2 the number two
3 the number three
4 the number four
E 1 (x) x is even
O1 (x) x is odd
N 1 (x) I have written the number x down here
G2 (x, y) x is greater than y
K 2 (x, y) x is less than y
s2 (x, y) the sum of x and y
8.9.11 (a) 2 is an even number.
E(2)
(b) 2 is greater than 3.
G(2, 3)
(c) The sum of 2 and 3 is greater than 4.
G(s(2, 3), 4)
(d) If this number is greater than 4, then it is also greater than 3.
G(x, 4) → G(x, 3)
(e) If this number is not greater than 2, then it is also not greater than 3.
¬G(x, 2) → ¬G(x, 3)
(f) This number is less than 2 or greater than 4.
K(x, 2) ∨ G(x, 4)

8.9.12 (a) There is a number greater than 4 and there is a number less than 4.
(∃xG(x, 4) ∧ ∃xK(x, 4))
(b) There is an even number greater than 3.
∃x(E(x) ∧ G(x, 3))
(c) Every number greater than 4 is also greater than 3.
∀x(G(x, 4) → G(x, 3))
(d) No number is greater than 3 and less than 4.
¬∃x(G(x, 3) ∧ K(x, 4))
(e) If this number is greater than 4, then every number that I have written
down here is greater than 4.
G(x, 4) → ∀x(N (x) → G(x, 4))
(f) A number that is less than 3 is less than 4.
∀x(K(x, 3) → K(x, 4))

(g) A number, which is less than 3, is less than 4.

∃x(K(x, 3) ∧ K(x, 4))
(h) There is no number greater than 4 and less than 3.
¬∃x(G(x, 4) ∧ K(x, 3))

8.9.13 (a) A number that is greater than every even number is odd.
∀x(∀y(E(y) → G(x, y)) → O(x))
(b) Every number is greater than at least one number.
∀x(∃yG(x, y))
(c) There is an even number that is less than an odd number that is greater
than an odd number.
∃x(E(x) ∧ ∃y(O(y) ∧ K(x, y) ∧ ∃z(O(z) ∧ G(y, z))))
(d) There is no number that is greater than every number.
¬∃x(∀yG(x, y))
(e) No number is greater than itself.
¬∃xG(x, x)
(f) Every odd number is greater than 0.
∀x(O(x) → G(x, 0))
(g) Every odd number is greater than some even number.
∀x(O(x) → ∃y(E(y) ∧ G(x, y)))

8.9.14 Translation key for exercises (a) to (e):


Denotation Meaning
m me
V 1 (x) x is wise
B 2 (x, y) x loves y

(a) Whoever loves someone loves themselves.

∀x(∃yB(x, y) → B(x, x))
(b) Whoever loves no one is not wise.
∀x(¬∃yB(x, y) → ¬V (x))
(c) Whoever is wise is loved by someone.
∀x(V (x) → ∃yB(y, x))
(d) Everyone loves someone.
∀x∃yB(x, y)
(e) Whoever loves me is loved by me.
∀x(B(x, m) → B(m, x))

Translation key for exercises (f) to (h):


Denotation Meaning
V 1 (x) x is for me
T 1 (x) x is against me

(f) Whoever is against me is not for me.

∀x(T (x) → ¬V (x))
(g) Whoever is not for me is against me.
∀x(¬V (x) → T (x))
(h) Everyone is either for me or against me.
∀x((V (x) ∨ T (x)) ∧ ¬(V (x) ∧ T (x)))

8.9.16 (a) There is someone who is stronger than Peter.

(b) If one person is stronger than another, then the other is taller
than this person.
(c) Someone who is taller than another is also stronger than another.
(d) Peter is taller than no one, but no one is stronger than Peter.
(e) If a person is happy, then there is someone who is taller than this
person.
Appendix I

Chapter 9. Semantics for


First-Order Logic

9.7 Exercises
9.7.1
(a) Jx2 KM α = α(x2 ) = 2 · 2 + 1 = 5

(b) JS(x2 )KM α = S M (Jx2 KM α ) = Jx2 KM α = 2 · 2 + 1 = 5

(c) J(x1 + x3 )KM α = +M (Jx1 KM α , Jx3 KM α ) = +M (α(x1 ), α(x3 )) = +M (2, 7) =
2 · 7 = 14.

(d) JS(S(S(x0 )))KM α = S M (JS(S(x0 ))KM α ) = JS(S(x0 ))KM α = . . . = Jx0 KM α =
α(x0 ) = 1.

(e) JS(0 · x1 )KM α = S M (J0 · x1 KM α ) = J0 · x1 KM α = ·M (J0KM α , Jx1 KM α ) =
·M (0M , α(x1 )) = 42^3 = 74088

(f) J2 + 2KM α = +M (J2KM , J2KM ) = +M (JS(S(0))KM α , JS(S(0))KM α ) =
+M (S M (JS(0)KM α ), S M (JS(0)KM α )) = +M (S M (J0KM α ), S M (J0KM α )) =
+M (42, 42) = 42 · 42 = 1764

(g) J(x1 · x2 ) + x3 KM α = J(3 · 5)KM α +M J7KM α = (J3KM α ·M J5KM α ) +M 7 =
3^5 · 7 = 1701

(h) J0 + 0KM α = +M (J0KM α , J0KM α ) = +M (0M , 0M ) = 42 · 42 = 1764

(i) J(0 · 0) + 1KM α = +M (J0 · 0KM , JS(0)KM α ) = +M (·M (0M , 0M ), S M (0M )) =
42^42 · 42 = 42^43

(j) J42KM α = JS(. . . (S(0)) . . .)KM (42 times) = S M (JS(. . . (S(0)) . . .)KM ) (41
times) = JS(. . . (S(0)) . . .)KM (41 times) = . . . = J0KM α = 0M = 42.

APPENDIX I. CHAPTER 9. SEMANTICS FOR FIRST-ORDER LOGIC340

9.7.2
See the added bullet 9.2.10 for (a). (b) and (c) work completely analogously.

9.7.5
(a) We will first determine JxKM α . We get: JxKM α = α(x) = 1.
Next, we determine J1KM α . Since 1 abbreviates S(0), we get: J1KM α =
JS(0)KM α = S M (J0KM α ) = S M (0) = 0 + 1 = 1. Because 1 = 1, it
follows that JxKM α = J1KM α and so M, α  x = 1.
(b) We want to determine whether M, α  S(x) = S(S(x)). Since
JxKM α = α(x) = 1, we first determine JS(x)KM α as follows:

JS(x)KM α = S M (JxKM α ) = S M (1) = 1 + 1 = 2

We will then determine JS(S(x))KM α as follows:

JS(S(x))KM α = S M (JS(x)KM α ) = S M (S M (JxKM α ))
= S M (S M (1)) = 1 + 1 + 1 = 3.

Because 2 6= 3, it follows that JS(x)KM α 6= JS(S(x))KM α and so
M, α 2 S(x) = S(S(x)).
(c) M, α  2 + 2 = 4
We will first determine J2 + 2KM α as follows:

J2 + 2KM α = +M (J2KM α , J2KM α ) = +M (JS(S(0))KM α , JS(S(0))KM α )
= +M (S M (JS(0)KM α ), S M (JS(0)KM α ))
= +M (S M (S M (J0KM α )), S M (S M (J0KM α )))
= +M (S M (S M (0)), S M (S M (0)))
= +M (0 + 1 + 1, 0 + 1 + 1) = 2 + 2 = 4.

We will then determine J4KM α as follows:

J4KM α = JS(S(S(S(0))))KM α = S M (JS(S(S(0)))KM α )
= S M (S M (JS(S(0))KM α ))
= S M (S M (S M (JS(0)KM α )))
= S M (S M (S M (S M (J0KM α ))))
= S M (S M (S M (S M (0)))) = 0 + 1 + 1 + 1 + 1 = 4.

Because 4 = 4, it follows that J2 + 2KM α = J4KM α and so M, α  2 + 2 = 4.

(g) We will show that for any x, y ∈ DM , S(x) = S(y) → x = y holds. We


will use proof by contradiction. Assume there is some pair a, b ∈ DM
for which the claim doesn’t hold. For an implication to be false, the
antecedent has to be true while the consequent is false. So S(a) = S(b)
and a 6= b. But S(a) = S(b) means just that a + 1 = b + 1, and hence
a = b. Since also a 6= b, we have arrived at a contradiction.
Hence such a pair a, b cannot exist and therefore the claim is true for
all x, y.

9.7.6
(a) In order to show that M, α  ∃y y ∈ x, we need to establish (by
definition) that there exists a d ∈ DM such that M, α[y 7→ d]  y ∈ x.
This, in turn, is the case iff (JyKM α[y7→d] , JxKM α[y7→d] ) = (d, α(x)) = (d, {x :
x is even}) ∈ ∈M . Since we have ∈M = {(x, X) : x ∈ N, X ∈ ℘(N), x ∈
X}, we get that M, α[y 7→ d]  y ∈ x iff d ∈ {x : x is even}. So, let
d = 2. Clearly, 2 ∈ {x : x is even} and so M, α[y 7→ 2]  y ∈ x. So,
M, α  ∃y y ∈ x.

(b) We will show M, α  ∀x¬x ∈ ∅ holds by using proof by contradiction.


Assume there is some x such that x ∈ ∅ is true in our model. Jx ∈
∅KM = JxKβ ∈M J∅KM . As ∈M denotes just the set theoretical relation
∈ and ∅M denotes the empty set, we can infer that x ∈ ∅ is true
iff x (whatever x is), is in the empty set. We know that the empty
set doesn’t contain any elements, so this cannot be the case. But we
assumed that x ∈ ∅ is true, so we’ve arrived at a contradiction. Hence
such an x cannot exist. Therefore, ∀x¬x ∈ ∅ is valid in our model.

9.7.10
(i) We will prove M  ∃x∃y(P (x) ∧ P (y) ∧ x 6= y) iff P M has at least two
elements.
Left-to-right: We will prove this using conditional proof, followed by
proof by contradiction. We begin with the conditional proof. Let M
and α be arbitrary such that M, α  ∃x∃y(P (x) ∧ P (y) ∧ x 6= y).
Now, we assume the negation of our conclusion for the proof by con-
tradiction. Thus, we assume P M does not have at least two elements.
This means P M has strictly fewer than 2 elements, meaning we can
distinguish two cases: P M has 0 elements (1) and P M has 1 element
(2).

(1) Note that we assumed M, α  ∃x∃y(P (x) ∧ P (y) ∧ x 6= y). This


means there has to be a d ∈ DM such that M, α[x7→d]  P (x).
Now, for case (1), we assume P M has 0 elements, so P M = ∅. Then

there is no d ∈ DM such that d ∈ P M . This is a contradiction


to our original assumption. It follows that if P M = ∅, M, α 2
∃x∃y(P (x) ∧ P (y) ∧ x 6= y).
(2) We assume P M has 1 element. Because we assumed
M, α  ∃x∃y(P (x) ∧ P (y) ∧ x 6= y),
there must exist d ∈ DM and d0 ∈ M such that M, α[x7→d,y7→d0 ] 
P (x) ∧ P (y) ∧ x 6= y. It follows that M, α[x7→d,y7→d0 ]  P (x) and
M, α[x7→d,y7→d0 ]  P (y) and M, α[x7→d,y7→d0 ]  x 6= y. The latter
tells us that d and d0 are two distinct elements of DM . But, this is
in contradiction to our assumption that P M has 1 element. Thus,
it follows that if P M has 1 element, M, α 2 ∃x∃y(P (x)∧P (y)∧x 6=
y).
Right-to-left: We will (again) prove this using conditional proof, fol-
lowed by proof by contradiction. First, for the conditional proof, we
assume P M has at least two elements. Now, for our proof by contra-
diction, we assume M, α 2 ∃x∃y(P (x) ∧ P (y) ∧ x 6= y). The latter can
be rewritten:
M, α 2 ∃x∃y(P (x) ∧ P (y) ∧ x 6= y)
M, α  ¬∃x∃y(P (x) ∧ P (y) ∧ x 6= y)
M, α  ∀x¬∃y(P (x) ∧ P (y) ∧ x 6= y)
M, α  ∀x∀y¬(P (x) ∧ P (y) ∧ x 6= y)
M, α  ∀x∀y(¬P (x) ∨ ¬P (y) ∨ x = y)

So, we have M, α  ∀x∀y(¬P (x) ∨ ¬P (y) ∨ x = y) and we know P M


has at least two elements. We will give a counterexample. Assume
p, q ∈ DM and p, q ∈ P M , where p 6= q. It follows that, in order for our
claim to be valid, M, α[x7→p,y7→q]  (¬P (x) ∨ ¬P (y) ∨ x = y) must hold.
This means that either M, α[x7→p,y7→q]  ¬P (x), or M, α[x7→p,y7→q] 
¬P (y) or M, α[x7→p,y7→q]  x = y must be true. Let’s look at them
individually: since we have p ∈ P M , we know M, α[x7→p,y7→q]  ¬P (x)
cannot be true. The same goes for q ∈ P M and M, α[x7→p,y7→q] 
¬P (y). Lastly, we assumed p 6= q, so we cannot have M, α[x7→p,y7→q] 
x = y either. Thus, we have found a counterexample. It follows that if
P M has at least two elements then M  ∃x∃y(P (x) ∧ P (y) ∧ x 6= y).
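Over a finite model, the truth condition just established can be evaluated by brute force. A Python sketch (my own encoding):

```python
from itertools import product

# ⟦∃x∃y(P(x) ∧ P(y) ∧ x ≠ y)⟧ over a finite model: true iff some pair
# of distinct domain elements both lie in P^M, i.e. |P^M| ≥ 2.
def two_distinct_Ps(domain, P):
    return any(d in P and e in P and d != e
               for d, e in product(domain, repeat=2))

D = {"a", "b", "c"}
print(two_distinct_Ps(D, {"a"}))       # False: |P^M| = 1
print(two_distinct_Ps(D, {"a", "b"}))  # True:  |P^M| ≥ 2
```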

9.7.13
(a) We want to show that ∀xP (x)  ∀yP (y). So, let M and α be arbitrary
such that M  ∀xP (x). This means that for all d ∈ DM , M, α[x 7→ d] 
P (x), i.e. for all d ∈ DM , JxKM α[x7→d] ∈ P M . But that just means
that for all d ∈ DM , d ∈ P M . Hence, we also have that for all d ∈ DM ,
d = JyKM α[y7→d] ∈ P M , which gives us that M, α  ∀yP (y).

(c) We will prove the contrapositive. Assume for arbitrary M, α that


M, α  ¬∀x(P (x) → Q(x)). We will show that M, α  ¬¬∃xP (x).
As we know ¬¬φ is equivalent to φ, showing J∃xP (x)KM α = 1 is suffi-
cient.
Proof: ¬∀x(P (x) → Q(x)) means just that there is some element
d ∈ DM such that P (d) → Q(d) is false. This can only be the case
if P (d) is true and Q(d) is false. From JP (d)KM α = 1, it follows that
J∃xP (x)KM α = 1, which is what we needed to show. 

9.7.14
(a) This claim is not true. This can be illustrated with the following
counterexample:
DM = {a, b}
P M = {a}
QM = {a, b}
∀x(P (x) → Q(x)) is true: if we replace x by a, then P (a) and Q(a) are
true, so P (a) → Q(a) is also true. If we replace x by b, we get ¬P (b), so
the implication P (b) → Q(b) is true. ∃x¬P (x) is also true, since ¬P (b)
is true. However, ∀x¬Q(x) is not true, since there is an element for
which Q(x) holds, namely a.
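The counter-model above can be verified mechanically. A Python sketch (my own encoding; I read the second premise as ∃x¬P (x)):

```python
# Counter-model: D = {a, b}, P^M = {a}, Q^M = {a, b}.
D = {"a", "b"}
P = {"a"}
Q = {"a", "b"}

all_P_implies_Q = all((d not in P) or (d in Q) for d in D)  # ∀x(P(x) → Q(x))
some_not_P = any(d not in P for d in D)                     # ∃x¬P(x)
all_not_Q = all(d not in Q for d in D)                      # ∀x¬Q(x)

# Premises true, conclusion false: the claimed entailment fails.
print(all_P_implies_Q, some_not_P, all_not_Q)  # True True False
```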
Appendix J

Chapter 10. Tableaux for


First-Order Logic

10.8 Exercises
10.8.1 (a) ∀xP (x)
¬∀yP (y)

∃y¬P (y)

¬P (p)

P (p)
7

APPENDIX J. CHAPTER 10. TABLEAUX FOR FIRST-ORDER LOGIC345

(b) ∃x∃yS(x, y)
¬∃y∃xS(x, y)

∃yS(p1 , y)

S(p1 , p2 )

∀y¬∃xS(x, y)

¬∃xS(x, p2 )

∀x¬S(x, p2 )

¬S(p1 , p2 )
7

(c) ¬∃xP (x)


¬∀x(P (x) → Q(x))

∃x¬(P (x) → Q(x))

¬(P (p) → Q(p))

P (p)

¬Q(p)

∀x¬P (x)

¬P (p)
7

(d) ∀xP (x)


¬∀x(Q(x) → P (x) ∨ R(x))

∃x¬(Q(x) → P (x) ∨ R(x))

¬(Q(p) → P (p) ∨ R(p))

Q(p)

¬(P (p) ∨ R(p))

¬P (p)

¬R(p)

P (p)
7

10.8.2 (a) ∀x(P (x) → Q(x))


∃x¬P (x)
¬∀x¬Q(x)

¬P (p1 )

P (p1 ) → Q(p1 )

∃x¬¬Q(x)

¬¬Q(p2 )

Q(p2 )

P (p2 ) → Q(p2 )

¬P (p1 ) Q(p1 )

¬P (p2 ) Q(p2 ) ¬P (p2 ) Q(p2 )


Call the leftmost branch B. Then we get MB with DMB =

{p1 , p2 } as well as P MB = ∅ and QMB = {p2 }.


(b) ∀xP (x) → ∀yQ(y)
¬∀x(P (x) → ∀yQ(y))

∃x¬(P (x) → ∀yQ(y))

¬(P (p) → ∀yQ(y))

P (p)

¬∀yQ(y)

∃y¬Q(y)

¬Q(q)

¬∀xP (x) ∀yQ(y)

∃x¬P (x) Q(q)


7
¬P (r)
Let B be the open branch. We get MB with DMB = {p, q, r} as
well as P MB = {p} and QMB = ∅.

(c) ∃x(P (x) → ∀yQ(y))


¬(∃xP (x) → ∀yQ(y))

∃xP (x)

¬∀yQ(y)

∃y¬Q(y)

P (p)

¬Q(q)

P (r) → ∀yQ(y)

¬P (r) ∀yQ(y)

Q(p)

Q(q)
7

Call the open branch B. We get the following countermodel:


– DMB = {p, q, r}
– P MB = {p}
– QMB = ∅

(d) ¬(∀x∃yS(x, y) → ∃xS(x, x))

∀x∃yS(x, y)

¬∃xS(x, x)

∀x¬S(x, x)

∃yS(p1 , y)

¬S(p1 , p1 )

S(p1 , p2 )

∃yS(p2 , y)

¬S(p2 , p2 )

S(p2 , p3 )

..
.
It’s relatively straightforward to see that the tableau will be in-
finite: the universal quantifier in ∀x∃yS(x, y) needs to be instan-
tiated for each new parameter but itself generates an existential
quantifier ∃yS(pi , y), which forces us to introduce a new param-
eter, and so on.
At the same time, the tableau will not be closed, since the only
negated atoms are going to be of the form ¬S(pi , pi ) coming
from the universal quantifier ∀x¬S(x, x); and the only un-negated
atoms come from our existential quantifiers ∃yS(pi , y), which can
never give us a formula of the form S(pi , pi ).
As our countermodel, we get MB with DMB = {p1 , p2 , . . .} and
S MB = {hp1 , p2 i, hp2 , p3 i, . . .}.
A finite countermodel for the same inference is DMB = {p1 , p2 }
and S MB = {hp1 , p2 i, hp2 , p1 i}.
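The finite counter-model can be checked directly. A Python sketch (my own encoding):

```python
# Check the finite counter-model for ∀x∃yS(x,y) ⊨ ∃xS(x,x):
# D = {p1, p2}, S^M = {(p1, p2), (p2, p1)}.
D = {"p1", "p2"}
S = {("p1", "p2"), ("p2", "p1")}

premise = all(any((d, e) in S for e in D) for d in D)  # ∀x∃y S(x, y)
conclusion = any((d, d) in S for d in D)               # ∃x S(x, x)
print(premise, conclusion)  # True False: premise true, conclusion false
```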

(e) ∃x¬∃yS(x, y)
¬∃x∀yS(x, y)

¬∃yS(p1 , y)

∀y¬S(p1 , y)

¬S(p1 , p1 )

∀x¬∀yS(x, y)

¬∀yS(p1 , y)

∃y¬S(p1 , y)

¬S(p1 , p2 )

¬S(p1 , p2 )

¬∀yS(p2 , y)

∃y¬S(p2 , y)

¬S(p2 , p3 )

¬S(p1 , p3 )

..
.
It’s relatively straightforward to see that the branch is infinite:
we have to continue instantiating ∀x¬∀yS(x, y) with the new pa-
rameters we introduce, then we get a new existential quantifier,
which forces us to introduce a new parameter, . . . —we have a
quantifier feedback loop. At the same time, there is never a non-
negated atomic formula on the branch (try to find the pattern).
Our countermodel thus looks like this:
– DMB = {p1 , p2 , . . .}
– S MB = ∅
Here is a finite model that works as countermodel for the same

inference:
– DMB = {p1 }
– S MB = ∅

10.8.5 (d)

∃x(P (x) ∧ ∀y(P (y) → x = y))


¬∀x∀y(P (x) ∧ P (y) → x = y)

P (p) ∧ ∀y(P (y) → p = y)

P (p)

∀y(P (y) → p = y)

∃x¬∀y(P (x) ∧ P (y) → x = y)

¬∀y(P (q) ∧ P (y) → q = y)

∃y¬(P (q) ∧ P (y) → q = y)

¬(P (q) ∧ P (r) → q = r)

P (q) ∧ P (r)

q 6= r

P (q)

P (r)

P (q) → p = q

¬P (q) p=q
7
P (r) → p = r

¬P (r) p=r
7
q=r
7

10.8.6 The reason why it’s not possible to write such an algorithm is that
it would lead to a decision procedure for first-order logic, which we

know can’t exist. Suppose you could in finitely many steps determine
whether a formula is invalid, that is: false in some model. Now suppose
you’re wondering if a given formula is valid. You run the algorithm. If
the algorithm tells you the formula is invalid, you know the answer to
your question: no. If the algorithm tells you the formula is not invalid,
well, then it must be valid; so you know the answer to your question:
yes. So, an algorithm that determines invalidity gives an algorithm
for validity. We know the latter doesn’t exist, so the former can’t exist
either.
Hardcore version. Such an algorithm would also lead to a decision pro-
cedure, though in a slightly more complicated fashion. Suppose you’re
interested in whether a formula is valid but you can only determine
whether it’s contingent. First, find out whether the formula is contin-
gent. If it is, you know it can’t be valid, because a contingent formula
is false in some model. If you find out the formula is not contingent,
then there are two options: either the formula is true in every model
or it is false in every model. We need to figure out in which of the two
cases we are. But we can do this using the algorithm again. The way
this works is that you pick any contingent formula, say a non-trivial
identity claim of the form a = b. Then you consider the disjunction of
your initial form and that contingent formula. Run the algorithm on
that statement. If it turns out to be contingent, then the initial for-
mula must be false in every model. If the disjunction turns out to be
non-contingent, then the initial formula must be valid. Why so? Well,
a disjunction is true iff at least one of the disjuncts is true. Now let’s
go through the two possible situations. If the initial formula is false in
every model, then the disjunction will be true precisely in the models
where the contingent formula is true—which means the disjunction
will be itself contingent. But if the initial formula was valid, it is true
in every model, and so its disjunction with any other statement will
also be true in every model. So, the disjunction of our non-contingent
formula and a contingent formula will be non-contingent iff the non-
contingent formula is valid.
Appendix K

Chapter 11. Soundness and


Completeness

11.7.1 Part A — Doing


In the following, I give both full, detailed answers to the questions and an
indication of the expectations I have for a good answer. Note that my way of
answering is very detailed and there might be many different, equally valid
ways of writing up the same result. Note that the correctness of the answer
is always a factor but by far not the only one. If, as in 11.7.3, the correctness
of the answer is one out of 4 elements of a correct answer, just writing down
the correct answer can give at most 1/4th of the points.
Keep in mind that since this is your first formal course, I give you a
lot of leeway when it comes to precise, mathematical formulations, but the
elements of a good answer should always be there to get decent points.

11.7.1.1 Long answer : In order to determine all variable and quantifier occur-
rences, we first construct the stripped parsing tree for the formula:


∀x
└─ ∃y
   └─ ∧
      ├─ R
      │  ├─ x
      │  └─ y
      └─ ∀x
         └─ ∧
            ├─ P
            │  └─ x
            └─ ∃y
               └─ R
                  ├─ x
                  └─ y

The following table records which variable occurrence is bound by
which quantifier occurrence:

Variable occurrence Quantifier occurrence that binds it


((r, 1, 1, 1, 1), x) (r, ∀x)
((r, 1, 1, 1, 2), y) ((r, 1), ∃y)
((r, 1, 1, 2, 1, 1, 1), x) ((r, 1, 1, 2), ∀x)
((r, 1, 1, 2, 1, 2, 1, 1), x) ((r, 1, 1, 2), ∀x)
((r, 1, 1, 2, 1, 2, 1, 2), y) ((r, 1, 1, 2, 1, 2), ∃y)

Elements of a good answer :

– Proper naming of the occurrences.


– Clear statement which variable occurrence is bound by which
quantifier occurrence.
– Correct answer.
– Fully formulated sentences.
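For finite examples like this one, the binding table above can also be computed mechanically: walk the parse tree while tracking, for each variable, the innermost quantifier occurrence in whose scope we currently are. The tuple encoding of the tree and the function name below are my own, not notation from the notes.

```python
def bindings(node, pos=("r",), scope=None):
    """Map each bound variable occurrence (by position) to the
    quantifier occurrence that binds it."""
    scope = dict(scope or {})            # innermost quantifier per variable
    label, children = node
    if label[:1] in ("∀", "∃"):
        scope[label[1:]] = pos           # this quantifier binds its variable
    if not children and label in scope:  # a leaf that is a bound variable
        return {pos: scope[label]}
    out = {}
    for i, child in enumerate(children, start=1):
        out.update(bindings(child, pos + (i,), scope))
    return out

var = lambda v: (v, [])
# ∀x ∃y (R(x, y) ∧ ∀x (P(x) ∧ ∃y R(x, y)))
tree = ("∀x", [("∃y", [("∧", [
    ("R", [var("x"), var("y")]),
    ("∀x", [("∧", [
        ("P", [var("x")]),
        ("∃y", [("R", [var("x"), var("y")])]),
    ])]),
])])])

b = bindings(tree)
assert b[("r", 1, 1, 1, 1)] == ("r",)       # first x: bound by root ∀x
assert b[("r", 1, 1, 2, 1, 2, 1, 2)] == ("r", 1, 1, 2, 1, 2)
```

The returned dictionary reproduces exactly the five rows of the binding table.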

11.7.1.2 Full answer:

1. ∀x(V(x) → L(x))
2. ¬∀x(L(x) → ∀y(¬L(y) → I(x, y)))
3. L(x) ∧ ¬V(x) ∧ ¬D(x)
4. ∃x(¬L(x) ∧ ¬D(x))

Elements of a good answer :



– Formalizations are indeed formulas.


– Adequate formalizations.
– Recognizes V (x) → L(x) as the best formalization of “only if”
– Recognizes “he” as an indefinite pronoun, i.e. free variable.
– Recognizes the subordinate clause indicated by the commas in
(iv) to indicate an existential quantifier.

11.7.1.3 Long answer: We are asked to determine the value of J(0 + x) · S(0)K^M_α
for the given model and assignment. Applying the recursive definition
of the denotation of a term in a model under an assignment, we get
the following calculation:

J(0 + x) · S(0)K^M_α = J0 + xK^M_α ·^M JS(0)K^M_α
                     = (J0K^M_α +^M JxK^M_α) ·^M S^M(J0K^M_α)
                     = (0^M +^M α(x)) ·^M S^M(0^M)
                     = (1 +^M 3) ·^M S^M(1)
                     = 5 ·^M 3
                     = 15

Elements of a good answer :

– Explains what’s being done.


– Applies the recursive clauses in sufficient detail.
– Applies the functions from the model correctly.
– Gets the correct result.
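The recursive clauses applied in this calculation can be implemented directly, with terms encoded as nested tuples. The interpretations of 0, S, +, and · below are hypothetical: they are chosen only so that they reproduce the values in the calculation above (0^M = 1, α(x) = 3, 1 +^M 3 = 5, S^M(1) = 3, 5 ·^M 3 = 15), not taken from the exercise's model.

```python
M = {
    "0": 1,                       # 0^M = 1
    "S": lambda n: n + 2,         # chosen so that S^M(1) = 3
    "+": lambda a, b: a + b + 1,  # chosen so that 1 +^M 3 = 5
    "*": lambda a, b: a * b,      # ordinary product
}
alpha = {"x": 3}                  # α(x) = 3

def denote(t):
    """The denotation of term t in M under alpha, by recursion."""
    if isinstance(t, str):
        return M[t] if t in M else alpha[t]   # constant or variable
    f, *args = t                              # function application
    return M[f](*(denote(s) for s in args))

term = ("*", ("+", "0", "x"), ("S", "0"))     # (0 + x) · S(0)
assert denote(term) == 15
```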

11.7.1.4 (Very) Long answer: We claim that M, α ⊨ ∀x∀y(R(x, y) → R(f(y), f(x))).
In order to determine what we need to show, we observe the following:

M, α ⊨ ∀x∀y(R(x, y) → R(f(y), f(x)))
iff for all d ∈ D^M, M, α[x ↦ d] ⊨ ∀y(R(x, y) → R(f(y), f(x)))
iff for all d, d′ ∈ D^M, M, α[x ↦ d, y ↦ d′] ⊨ R(x, y) → R(f(y), f(x))

Since there are only 2 elements in D^M = {1, 2}, there are 4 possible
choices for d and d′ to consider:

– d = 1, d′ = 1
– d = 1, d′ = 2
– d = 2, d′ = 1
– d = 2, d′ = 2

For all choices of d and d′ other than d = 1, d′ = 2, we have that
M, α[x ↦ d, y ↦ d′] ⊭ R(x, y), so we don't have to check anything
else to see that M, α[x ↦ d, y ↦ d′] ⊨ R(x, y) → R(f(y), f(x)).

For d = 1, d′ = 2, we observe that:

Jf(x)K^M_{α[x↦d,y↦d′]} = f^M(1) = 2
Jf(y)K^M_{α[x↦d,y↦d′]} = f^M(2) = 1

But then we have M, α[x ↦ d, y ↦ d′] ⊨ R(f(y), f(x)), and so
M, α[x ↦ d, y ↦ d′] ⊨ R(x, y) → R(f(y), f(x)).

So, for each choice of d, d′ ∈ D^M, we have

M, α[x ↦ d, y ↦ d′] ⊨ R(x, y) → R(f(y), f(x))

and so

M, α ⊨ ∀x∀y(R(x, y) → R(f(y), f(x))),

as desired.
Elements of a good answer :

– Applies the clause for the universal quantifier.


– Considers all the values for the variables.
– Notices that the only interesting case is when the value of x is 1
and the value of y is 2.
– Gives the correct answer.
– Explains reasoning.
– Is written in full, comprehensible sentences.
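The case-by-case check just performed can be carried out mechanically: enumerate all pairs (d, d′) from the finite domain and test the conditional. The model below follows the answer: D = {1, 2}, f(1) = 2, f(2) = 1, and (reading off the answer) R contains only the pair (1, 2).

```python
from itertools import product

D = {1, 2}
R = {(1, 2)}        # only (1, 2) satisfies R, as in the answer
f = {1: 2, 2: 1}

# Test R(x, y) → R(f(y), f(x)) for every assignment of x and y.
holds = all((x, y) not in R or (f[y], f[x]) in R
            for x, y in product(D, repeat=2))
assert holds
```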

11.7.1.5 1. By definition, ∃x(P(x) ∧ x = c), ∀x(P(x) → Q(x)) ⊢ Q(c) iff
the tableau for {∃x(P(x) ∧ x = c), ∀x(P(x) → Q(x)), ¬Q(c)} is
closed. Here is the tableau:
∃x(P(x) ∧ x = c)
∀x(P(x) → Q(x))
¬Q(c)
P(p) ∧ p = c
P(p)
p = c
P(p) → Q(p)
├─ ¬P(p)
│   ✗
└─ Q(p)
    ¬Q(p)
    ✗

Since the tableau is closed, we can infer the conclusion from the
premises as claimed.
Elements of a good answer :
– Explains why the tableau is done.
– Makes a tableau for Γ ∪ {¬φ} not for Γ ∪ {φ}.
– Applies all rules correctly.
– Recognizes the correct application of the identity rule to close
the second branch.
– Gets the correct answer.
2. By definition, P(c) ∨ (P(c) ∧ Q(c)), ∀x(Q(x) → ¬P(x)) ⊢ ¬P(c) iff
the tableau for {P(c) ∨ (P(c) ∧ Q(c)), ∀x(Q(x) → ¬P(x)), ¬¬P(c)}
is closed. Here is the tableau:
P(c) ∨ (P(c) ∧ Q(c))
∀x(Q(x) → ¬P(x))
¬¬P(c)
P(c)
├─ P(c)
│   Q(c) → ¬P(c)
│   ├─ ¬Q(c)
│   └─ ¬P(c)
│       ✗
└─ P(c) ∧ Q(c)
    P(c)
    Q(c)
    Q(c) → ¬P(c)
    ├─ ¬Q(c)
    │   ✗
    └─ ¬P(c)
        ✗

Since the tableau is open, we cannot infer the conclusion from
the premises, as claimed. The associated model of the only open
branch is given by the following specification:
– D^M = {c}
– c^M = c
– P^M = {c}
– Q^M = ∅
Elements of a good answer :
– Explains why the tableau is done.
– Makes a tableau for Γ ∪ {¬φ} not for Γ ∪ {φ}.
– Applies all rules correctly.
– Makes a complete tableau (i.e. applies all possible rules).
– Gets the correct tableau.
– Specifies the associated model completely (including the in-
terpretation of c and Q).
– Gets the correct model.
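As a sanity check (not part of the tableau method itself), one can verify that the model read off the open branch really does satisfy both premises and falsify the conclusion ¬P(c):

```python
D = {"c"}
P = {"c"}
Q = set()
c = "c"

premise1 = (c in P) or (c in P and c in Q)           # P(c) ∨ (P(c) ∧ Q(c))
premise2 = all(d not in Q or d not in P for d in D)  # ∀x(Q(x) → ¬P(x))
conclusion = c not in P                              # ¬P(c)

assert premise1 and premise2 and not conclusion
```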

11.7.1.6 Long answer : We’re asked to determine whether the following inference
is valid:

– The ball is round, and everything round comes from Mars. So,
the ball comes from Mars.
In order to determine the validity of the argument, we first formalize
it. We make use of the following translation key:

b : the ball
R¹ : . . . is round
M¹ : . . . comes from Mars

We obtain

R(b), ∀x(R(x) → M(x)) ∴ M(b)

I claim that this inference is valid, i.e.

R(b), ∀x(R(x) → M(x)) ⊨ M(b)

In order to show this, I'm making use of the tableau method. We know
that R(b), ∀x(R(x) → M(x)) ⊢ M(b) iff the tableau for {R(b), ∀x(R(x) →
M(x)), ¬M(b)} is closed. Here is that tableau:

R(b)
∀x(R(x) → M(x))
¬M(b)
R(b) → M(b)
├─ ¬R(b)
│   ✗
└─ M(b)
    ✗

Since the tableau is closed, we can infer that

R(b), ∀x(R(x) → M(x)) ⊢ M(b).

By the soundness theorem, our claim that

R(b), ∀x(R(x) → M(x)) ⊨ M(b)

follows from this.

We have now shown that the formal inference

R(b), ∀x(R(x) → M(x)) ∴ M(b)

is valid. Since this formal inference is a formalization of the natural
language inference we started with, we can infer that the natural lan-
guage inference is valid, too.
Elements of a good answer :

– Explains what is done.


– Formalizes the inference.
– Gets a decent formalization.
– Checks whether the inferences entail the conclusion with a suit-
able method (tableau or semantics).
– Applies that method correctly.
– Transfers the results back to the natural language inference.
– Gets the correct result.

11.7.2 Part B — Proving

Below, I provide a proof for each of the claimed theorems. Please keep in
mind that, as I said in the last lecture, if a mathematical claim is provable,
then there is always more than one proof of it. This means that the proofs I
provide are not the only possible proofs. The merit of the answers I formulate
below is that they might give you a better idea of what I expect a good
answer to look like. Also keep in mind that my answers are always as explicit
as possible, and I don't necessarily expect the same level of attention to
detail from you.
The elements of a good answer are the same in every case:

– Recognizes correctly what needs to be shown.
– Explains each reasoning step, doesn't have gaps in the argumentation.
– Uses correct reasoning, doesn't commit fallacies.
– Applies the definitions correctly.
– Is written in full, grammatical English/Dutch sentences.
– Obtains the correct result.

Here is a (non-exhaustive) list of marking categories that we use:

Abbreviation   Mistake
✗              Error/mistake (generic)
Df.            Incorrect or imprecise definition
Q?             Question not read correctly
⇏              Non-sequitur, reasoning mistake
≠              Calculation mistake
?              QED missing, reasoning incomplete
x?             Undeclared variables
⇒?             Right-to-left direction missing
⇐?             Left-to-right direction missing
∨              Distinction by cases not exhaustive
abc            Write complete sentences.
[squiggles]    No (unexplained) paintings!

11.7.2.1 We’re asked to show that if φ is an open formula with y as its only
free variable, then ∀xφ is also an open formula (i.e. not a sentence).
We show this claim by showing that for every free occurrence of y in φ
there will be a corresponding free occurrence of y in ∀xφ. Since there
are free occurrences of y in φ, from this the claim follows.
So, consider a free occurrence of y in φ. Clearly, there is a correspond-
ing occurrence of y in ∀xφ. What remains to be shown is that the oc-
currence is free. Suppose, for indirect proof, that it’s not. This would
mean that there’s a quantifier occurrence of the form Qy in ∀xφ, which
binds the occurrence of y. That quantifier occurrence cannot have a
corresponding occurrence in φ, because then the occurrence of y in
φ would be bound, contrary to our assumption. But the only quanti-
fier occurrence that's in ∀xφ without a corresponding occurrence in φ
is (r, ∀x). And this occurrence cannot bind any occurrence of y, since
x ≠ y. Hence, the occurrence of y is free in ∀xφ, as desired.
Alternative strategy (way more complicated): Using induction on for-
mulas.
11.7.2.2 We're asked to show that for all Γ, we have Γ ⊢ P(c) ∨ ¬P(c). We know,
by definition, that this is the case iff the tableau for Γ ∪ {¬(P(c) ∨
¬P(c))} is closed. But we can infer that this tableau is closed even
without knowing what the members of Γ are. To see this, note that
the initial list consists of Γ ∪ {¬(P(c) ∨ ¬P(c))}, so we can always close
the tableau as follows:

Γ
¬(P(c) ∨ ¬P(c))
¬P(c)
¬¬P(c)
P(c)
✗

Hence the tableau is always closed, which is what we needed to show.

Alternative strategy: Semantically show that Γ ⊨ P(c) ∨ ¬P(c) and
then use completeness to infer the result.

11.7.2.3 We aim to show that ∀x(φ → ψ) ⊨ ¬∃x(φ ∧ ¬ψ). By definition, we
know that ∀x(φ → ψ) ⊨ ¬∃x(φ ∧ ¬ψ) iff for all models M, we have that
if M, α ⊨ ∀x(φ → ψ), then M, α ⊨ ¬∃x(φ ∧ ¬ψ) (for some arbitrary
assignment α). So, let M be an arbitrary model and suppose that
M, α ⊨ ∀x(φ → ψ). This means, by definition, that for all d ∈ D^M we
have M, α[x ↦ d] ⊨ φ → ψ. From this, we need to derive that M, α ⊨
¬∃x(φ ∧ ¬ψ). We do this indirectly. Suppose that M, α ⊭ ¬∃x(φ ∧ ¬ψ).
It follows that M, α ⊨ ∃x(φ ∧ ¬ψ). This means, by definition, that
there must be a d ∈ D^M such that M, α[x ↦ d] ⊨ φ ∧ ¬ψ. From this
it would follow that M, α[x ↦ d] ⊨ φ and M, α[x ↦ d] ⊭ ψ, and so
M, α[x ↦ d] ⊭ φ → ψ. But this would contradict our assumption that
M, α[x ↦ d] ⊨ φ → ψ for all d ∈ D^M. So, by indirect proof, we can
infer that M, α ⊨ ¬∃x(φ ∧ ¬ψ), as desired.
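The entailment behind this proof can also be spot-checked by brute force on small models, here with φ := P(x) and ψ := Q(x) over a two-element domain, ranging over all possible extensions of P and Q (this is an illustration, of course, not a replacement for the general proof):

```python
from itertools import chain, combinations

D = {0, 1}
subsets = [set(s) for s in chain.from_iterable(
    combinations(sorted(D), r) for r in range(len(D) + 1))]

ok = all(
    all(x not in P or x in Q for x in D)            # M ⊨ ∀x(P(x) → Q(x))
    == (not any(x in P and x not in Q for x in D))  # M ⊨ ¬∃x(P(x) ∧ ¬Q(x))
    for P in subsets for Q in subsets)
assert ok
```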
11.7.2.4 We want to find a model M+ that makes ∀x∃yR(x, y) true and a
model M− that makes the formula false. There are different ways in
which we can achieve this, for example via tableaux. You know how
that works, so I provide an answer that I found by thinking about
what needs to be the case for the formula to be true/false.

First, consider the model M+ given by:

– D^{M+} = {∗}
– R^{M+} = {(∗, ∗)}

I claim that we have M+, α ⊨ ∀x∃yR(x, y). To see this, remember
that M+, α ⊨ ∀x∃yR(x, y) iff for all d ∈ D^{M+}, we have M+, α[x ↦
d] ⊨ ∃yR(x, y). And we have M+, α[x ↦ d] ⊨ ∃yR(x, y) iff for some
d′ ∈ D^{M+}, we have M+, α[x ↦ d, y ↦ d′] ⊨ R(x, y). So, we have that
M+, α ⊨ ∀x∃yR(x, y) iff for all d ∈ D^{M+} there exists a d′ ∈ D^{M+}
such that M+, α[x ↦ d, y ↦ d′] ⊨ R(x, y). But there is only one
element in D^{M+}, namely ∗. And for d = ∗, we can easily find a d′
such that M+, α[x ↦ d, y ↦ d′] ⊨ R(x, y), namely d′ = ∗. To see this,
just note that M+, α[x ↦ ∗, y ↦ ∗] ⊨ R(x, y) since (∗, ∗) ∈ R^{M+}. So,
M+, α ⊨ ∀x∃yR(x, y), as desired.

Now, consider the model M− given by:

– D^{M−} = {∗}
– R^{M−} = ∅

As in the case of M+, we can infer that M−, α ⊨ ∀x∃yR(x, y) iff for
all d ∈ D^{M−} there exists a d′ ∈ D^{M−} such that M−, α[x ↦ d, y ↦
d′] ⊨ R(x, y). Again, there is only one element in D^{M−}, namely ∗. But
for d = ∗, there exists no d′ such that M−, α[x ↦ d, y ↦ d′] ⊨ R(x, y).
For the only possible d′ would be ∗ itself, and we have M−, α[x ↦ ∗, y ↦
∗] ⊭ R(x, y) since R^{M−} = ∅. So M−, α ⊭ ∀x∃yR(x, y), as desired.
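Both verdicts can be checked mechanically: over a finite domain, ∀x∃yR(x, y) holds exactly when every element has an R-successor. A small sketch:

```python
def satisfies(D, R):
    """Does the model (D, R) make ∀x∃yR(x, y) true?"""
    return all(any((d, e) in R for e in D) for d in D)

D = {"*"}
assert satisfies(D, {("*", "*")})   # M+ makes the formula true
assert not satisfies(D, set())      # M- makes it false
```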

11.7.2.5 We need to show that if Γ ⊨ c ≠ c, then Γ is unsatisfiable. By definition,
Γ is satisfiable iff there is a model M and assignment α such that
M, α ⊨ φ for all φ ∈ Γ. So, assume that Γ ⊨ c ≠ c. We derive that Γ is
unsatisfiable using indirect proof. So suppose that Γ is satisfiable, that
is, there is a model M and assignment α such that M, α ⊨ φ for all φ ∈
Γ. Since M, α ⊨ φ for all φ ∈ Γ and Γ ⊨ c ≠ c, it follows that M, α ⊨
c ≠ c. But that would entail that c^M ≠ c^M, which is impossible. So,
there is no such M and α, and Γ is therefore unsatisfiable.

11.7.2.6 We're asked to prove that for all terms s and t we have, in the given
model M and under the given assignment α, that JsK^M_α = JtK^M_α. In
order to prove this fact, we show that for all t we have JtK^M_α = a^M.
From this, the claim follows immediately since then we have:

JsK^M_α = a^M = JtK^M_α

So, we want to prove by induction that for all t we have JtK^M_α = a^M.
We have two base cases: (i) t is a constant or (ii) t is a variable. If
(i) t is a constant, then t = a, since a is the only constant of S. And
trivially, if t = a, we have JtK^M_α = JaK^M_α = a^M. If (ii) t is a variable
x ∈ V, then JtK^M_α = JxK^M_α = α(x) = a^M, as desired.

For the induction step, assume the induction hypothesis that JtK^M_α =
a^M. We need to derive from this that Jf(t)K^M_α = a^M, as well. To see
this, we can reason as follows:

Jf(t)K^M_α = f^M(JtK^M_α) = f^M(a^M) = a^M

So, using the principle of induction on terms, we have seen that JtK^M_α =
a^M for all terms t, from which our main claim follows as explained
before.
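The situation in this proof can be sketched in code: every variable denotes a^M under α, and f^M maps a^M to a^M, so every term denotes a^M. The concrete value of a^M below is a hypothetical stand-in, and fM is defined as a constant function, which is consistent with (though stronger than) the one fact the proof uses, namely f^M(a^M) = a^M.

```python
aM = 0                          # some fixed object a^M of the domain

def fM(d):
    return aM                   # consistent with f^M(a^M) = a^M

def denote(t):
    """Denotation of a term over the signature with constant a and f."""
    if t == "a":                # the only constant
        return aM
    if isinstance(t, str):      # a variable: α(x) = a^M
        return aM
    _f, arg = t                 # f(t')
    return fM(denote(arg))

assert denote("a") == denote("x") == denote(("f", ("f", "x"))) == aM
```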

11.7.2.7 We're asked to prove by induction that if φ is an open formula with
x as its only free variable, then (φ)[x := c], where c is a constant, is a
closed formula, i.e. a sentence.
For the base case, we need to consider two situations: (i) φ is of the
form R(t1 , . . . , x, . . . , tn ) where the ti are ground terms or (ii) φ is of
the form t = x or x = t where t is a ground term. In case (i), we
simply observe that since each ti is a ground term and thus doesn’t
contain x, we have (R(t1 , . . . , x, . . . , tn ))[x := c] = R(t1 , . . . , c, . . . , tn ).
Since all of the ti ’s are ground-terms and c is a constant, there is no
free variable in this formula and the claim holds. In case (ii), we only
consider the situation where φ is of the form t = x since x = t is
completely analogous. We simply note that since t is a ground term
and thus doesn’t contain x, we have (t = x)[x := c] = t = c. And since,
again, t is a ground term and c a constant, t = c contains no variables


at all and is thus closed.
We go through the induction steps one by one:

– Assume the induction hypothesis that if φ has only x free, then


(φ)[x := c] is a sentence. We need to derive that then also if
¬φ contains only x free, then (¬φ)[x := c] is a sentence. But
suppose that ¬φ contains only x free. We have by definition that
(¬φ)[x := c] = ¬(φ)[x := c]. And if ¬φ contains only x
free, then also φ can contain only x free. Hence (φ)[x := c] is a
sentence by the induction hypothesis, and so ¬(φ)[x := c] is also
a sentence.
– Assume the induction hypotheses that (a) if φ has only x free,
then (φ)[x := c] is a sentence and that (b) if ψ has only x free,
then (ψ)[x := c] is a sentence. Now consider φ ◦ ψ with only x free
for ◦ = ∧, ∨, →, ↔. We know that (φ ◦ ψ)[x := c] = (φ)[x := c] ◦
(ψ)[x := c] by definition. Now if φ◦ψ has only x free, each of φ and
ψ can only have x free. So, by the induction hypotheses (a) and
(b), we have that (φ)[x := c] and (ψ)[x := c] are both sentences.
But if (φ)[x := c] and (ψ)[x := c] are sentences, then (φ)[x := c] ◦
(ψ)[x := c] is a sentence, too, as desired.
– Assume the induction hypothesis that if φ has only x free, then
(φ)[x := c] is a sentence. Consider Qyφ with only x free for Q =
∀, ∃. Note that if Qyφ contains x free, then we need to have
that y ≠ x: in Qxφ, every occurrence of x would be bound
by (r, Qx). But if y ≠ x, then we know that (Qyφ)[x := c] =
Qy(φ)[x := c]. And since (φ)[x := c] is a sentence by the induction
hypothesis, so is Qy(φ)[x := c].

This completes our proof: we can now infer, by the principle of induc-
tion over formulas, that for all φ with only x free, (φ)[x := c] is a
sentence.

11.7.2.8 We're essentially asked to show that for all models M and assignments
α, we have M, α ⊨ ∀xR(x, x) iff (d, d) ∈ R^M for all d ∈ D^M. That is,
we need to show two things:

(a) If M, α ⊨ ∀xR(x, x), then (d, d) ∈ R^M for all d ∈ D^M.
(b) If (d, d) ∈ R^M for all d ∈ D^M, then M, α ⊨ ∀xR(x, x).

To see that (a) holds, assume that M, α ⊨ ∀xR(x, x). This means, by
definition, that for all d ∈ D^M, we have M, α[x ↦ d] ⊨ R(x, x). We
derive that (d, d) ∈ R^M for all d ∈ D^M by contradiction. Suppose that
there exists a d ∈ D^M such that (d, d) ∉ R^M. But then, we'd have
that M, α[x ↦ d] ⊭ R(x, x), contrary to our assumption that for all
d ∈ D^M, we have M, α[x ↦ d] ⊨ R(x, x). Hence (d, d) ∈ R^M for all
d ∈ D^M, as desired.

To see that (b) holds, assume that (d, d) ∈ R^M for all d ∈ D^M. We
need to derive that M, α ⊨ ∀xR(x, x), i.e. for all d ∈ D^M we have
M, α[x ↦ d] ⊨ R(x, x). But this follows immediately, since M, α[x ↦
d] ⊨ R(x, x) iff (JxK^M_{α[x↦d]}, JxK^M_{α[x↦d]}) = (d, d) ∈ R^M.

11.7.2.9 We're asked to show that in the given model M we have

M, α ⊨ ∃xP(x) → P(a) ∨ P(b) ∨ P(c)

for each α. We know that, by definition, M, α ⊨ ∃xP(x) → P(a) ∨
P(b) ∨ P(c) iff either (i) M, α ⊭ ∃xP(x) or (ii) M, α ⊨ P(a) ∨
P(b) ∨ P(c). Now clearly, we either have (a) M, α ⊨ ∃xP(x) or (b)
M, α ⊭ ∃xP(x) by bivalence. In case (b), we can immediately in-
fer that M, α ⊨ ∃xP(x) → P(a) ∨ P(b) ∨ P(c), so we focus on case
(a). Suppose that M, α ⊨ ∃xP(x). This means, by definition, that
there exists a d ∈ D^M such that M, α[x ↦ d] ⊨ P(x). Now, since
D^M = {1, 2, 3}, we can only have d = 1, 2, 3. If d = 1, then, since
a^M = 1, we have that M, α[x ↦ JaK^M_α] ⊨ P(x). By the denotation
lemma, this gives us M, α ⊨ (P(x))[x := a], i.e. M, α ⊨ P(a). But
if M, α ⊨ P(a), then M, α ⊨ P(a) ∨ P(b) ∨ P(c), and so M, α ⊨
∃xP(x) → P(a) ∨ P(b) ∨ P(c) by (ii), as desired. If d = 2, 3, we can
give the same, analogous argument using the denotation lemma. So,
either way, M, α ⊨ ∃xP(x) → P(a) ∨ P(b) ∨ P(c), as desired.
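The case distinction over d = 1, 2, 3 can be confirmed by brute force: for every possible extension of P over D = {1, 2, 3}, with a, b, c denoting 1, 2, 3 respectively, the conditional comes out true.

```python
from itertools import combinations

D = [1, 2, 3]
a, b, c = 1, 2, 3

ok = True
for r in range(len(D) + 1):
    for P in map(set, combinations(D, r)):      # every extension of P
        antecedent = any(d in P for d in D)     # ∃xP(x)
        consequent = a in P or b in P or c in P # P(a) ∨ P(b) ∨ P(c)
        ok = ok and ((not antecedent) or consequent)
assert ok
```

This works because a, b, c jointly name every element of the domain, which is exactly what the proof exploits.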
11.7.2.10 We're asked to prove that if φ ⊢ ψ and ψ ⊢ θ, then φ ⊢ θ. There
are many different ways we could do this, but I'll use soundness and
completeness. First, I'll show as my (Lemma) that if φ ⊨ ψ and ψ ⊨ θ,
then φ ⊨ θ.¹ For suppose that φ ⊨ ψ and ψ ⊨ θ. By definition, this
means that in every model M, (a) if M ⊨ φ, then M ⊨ ψ and (b)
if M ⊨ ψ, then M ⊨ θ. I want to derive φ ⊨ θ, i.e. for every model
M such that M ⊨ φ, we also have M ⊨ θ. But if M ⊨ φ, then by
(a) M ⊨ ψ, and then by (b) M ⊨ θ, as desired. Now, to prove our
initial claim, suppose that φ ⊢ ψ and ψ ⊢ θ. By soundness, I have that
φ ⊨ ψ and ψ ⊨ θ. So, by my (Lemma), we have φ ⊨ θ. But then, by
completeness, we have φ ⊢ θ, as desired.
11.7.2.11 We want to show that if Γ ⊨ c ≠ c, then for all formulas φ, either Γ ⊨ φ
or Γ ⊨ ¬φ (the either . . . or is meant inclusively; otherwise the claim
doesn't hold). We actually show the stronger claim that if Γ ⊨ c ≠ c,
then Γ ⊨ φ for all φ, so certainly for both φ and ¬φ.

Above, we proved that if Γ ⊨ c ≠ c, then Γ is unsatisfiable. We're
going to use this result. Since Γ is unsatisfiable, so is Γ ∪ {¬φ} for
¹ We actually proved this in class, so you could, in principle, just use this without proof.
every φ (we proved this in 6.2.5.(c) for propositional logic, but the
proof clearly goes through for first-order logic, too). But we know, by
the "I Can't Get No Satisfaction" Theorem, that Γ ∪ {¬φ} is unsatisfiable
iff Γ ⊨ φ. So, we can conclude that if Γ ⊨ c ≠ c, then Γ ⊨ φ, as desired.

11.7.2.12 In order to define the desired function, we first define the auxiliary
function c : T → ℕ given by the following recursion:

– c(x) = 1 and c(c) = 1
– c(f(t1, . . . , tn)) = c(t1) + . . . + c(tn) + 1

We then define the desired function as follows:

– c(R(t1, . . . , tn)) = c(t1) + . . . + c(tn) + 1
– c(s = t) = c(s) + c(t) + 1
– c(¬φ) = c(φ) + 1
– c(φ ◦ ψ) = c(φ) + c(ψ) + 1
– c(Qxφ) = c(φ) + 1, for Q = ∀, ∃

We now need to prove that the number of nodes in T(φ), written
#T(φ), is c(φ) for all φ. We do this by induction. First, we prove the
lemma that #T(t) = c(t) for all terms t.

For the induction base, note that the parsing tree for both x ∈ V and
c ∈ C contains precisely one node. So the claim holds: #T(x) =
#T(c) = c(x) = c(c) = 1.

So, assume the induction hypothesis that #T(ti) = c(ti) for 1 ≤
i ≤ n and consider the number of nodes in T(f(t1, . . . , tn)). Since
T(f(t1, . . . , tn)) is the tree

f(t1, . . . , tn)
├─ T(t1)
⋮
└─ T(tn)

we have that #T(f(t1, . . . , tn)) = #T(t1) + . . . + #T(tn) + 1, which,
by the induction hypothesis, is identical to c(t1) + . . . + c(tn) + 1 =
c(f(t1, . . . , tn)), as desired.
Now, for our main claim, we prove by induction that #T(φ) = c(φ).
For the first base case, we note that T(R(t1, . . . , tn)) is the tree

R(t1, . . . , tn)
├─ T(t1)
⋮
└─ T(tn)

and so, using the lemma, #T(R(t1, . . . , tn)) = #T(t1) + . . . + #T(tn) + 1 =
c(t1) + . . . + c(tn) + 1 = c(R(t1, . . . , tn)), as desired. For the second
base case, we note that T(s = t) is the tree

s = t
├─ T(s)
└─ T(t)

and so #T(s = t) = #T(s) + #T(t) + 1 = c(s) + c(t) + 1 = c(s = t),
as desired.
For the induction steps:

– Assume that #T(φ) = c(φ) and consider T(¬φ), which is the tree

¬φ
└─ T(φ)

so #T(¬φ) = #T(φ) + 1 = c(φ) + 1 = c(¬φ) by the induction hypoth-
esis, as desired.

– Assume that #T(φ) = c(φ) and that #T(ψ) = c(ψ), and consider
T(φ ◦ ψ), which is the tree

φ ◦ ψ
├─ T(φ)
└─ T(ψ)

So, #T(φ ◦ ψ) = #T(φ) + #T(ψ) + 1 = c(φ) + c(ψ) + 1 = c(φ ◦ ψ) by
the induction hypothesis, as desired.

– Assume that #T(φ) = c(φ) and consider T(Qxφ), which is the tree

Qxφ
└─ T(φ)

so #T(Qxφ) = #T(φ) + 1 = c(φ) + 1 = c(Qxφ) by the induction
hypothesis, as desired.

So, by induction, we indeed have #T(φ) = c(φ), as desired.
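The recursion for c and the generic node count #T can be implemented and compared on examples. The tuple encoding of terms and formulas below is my own, and the example formula is hypothetical.

```python
def c(node):
    """The complexity function, clause by clause as in the answer."""
    kind, *args = node
    if kind in ("var", "const"):
        return 1
    if kind in ("fun", "pred"):                # f(t1,...,tn), R(t1,...,tn)
        return sum(c(t) for t in args[1]) + 1
    if kind == "eq":                           # s = t
        return c(args[0]) + c(args[1]) + 1
    if kind == "not":                          # ¬φ
        return c(args[0]) + 1
    if kind in ("and", "or", "imp", "iff"):    # φ ◦ ψ
        return c(args[0]) + c(args[1]) + 1
    if kind in ("forall", "exists"):           # Qxφ
        return c(args[1]) + 1

def count_nodes(node):
    """#T: one node per subexpression of the parse tree."""
    kind, *args = node
    if kind in ("var", "const"):
        return 1
    if kind in ("fun", "pred"):
        return 1 + sum(count_nodes(t) for t in args[1])
    if kind in ("forall", "exists"):
        return 1 + count_nodes(args[1])
    return 1 + sum(count_nodes(a) for a in args)

# ∀x(P(f(x)) → x = c), a hypothetical example formula with 8 nodes
phi = ("forall", "x",
       ("imp",
        ("pred", "P", [("fun", "f", [("var", "x")])]),
        ("eq", ("var", "x"), ("const", "c"))))
assert c(phi) == count_nodes(phi) == 8
```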

11.7.2.13 We show that Bertrand is correct using logic. First, we formalize the
stranger's claim using the following translation key:

K¹ : . . . is a barber
S² : . . . shaves . . .
The stranger's claim becomes:

∃x(K(x) ∧ ∀y(¬S(y, y) ↔ S(x, y)))

We now prove that Bertrand is correct, since the stranger's claim is a
logical falsehood. We show this using tableaux. More specifically, we
show that ⊢ ¬∃x(K(x) ∧ ∀y(¬S(y, y) ↔ S(x, y))). By definition, what
we need to show is that the tableau for {¬¬∃x(K(x) ∧ ∀y(¬S(y, y) ↔
S(x, y)))} closes. This can be seen as follows:

¬¬∃x(K(x) ∧ ∀y(¬S(y, y) ↔ S(x, y)))
∃x(K(x) ∧ ∀y(¬S(y, y) ↔ S(x, y)))
K(p) ∧ ∀y(¬S(y, y) ↔ S(p, y))
K(p)
∀y(¬S(y, y) ↔ S(p, y))
¬S(p, p) ↔ S(p, p)
├─ ¬S(p, p)
│   S(p, p)
│   ✗
└─ S(p, p)
    ¬S(p, p)
    ✗

Now, since ⊢ ¬∃x(K(x) ∧ ∀y(¬S(y, y) ↔ S(x, y))), by soundness we
have ⊨ ¬∃x(K(x) ∧ ∀y(¬S(y, y) ↔ S(x, y))). That is, the stranger's
claim cannot be true in any model, so also not in the actual world.
Bertrand is right.
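Bertrand's point can also be illustrated by brute force: over a small domain, no choice of "barber" b and shaving relation S satisfies ∀y(¬S(y, y) ↔ S(b, y)), because instantiating y := b already yields a contradiction. A sketch over a two-element domain:

```python
from itertools import product

D = range(2)
pairs = list(product(D, repeat=2))
# Every possible shaving relation S over D:
relations = [{p for p, on in zip(pairs, bits) if on}
             for bits in product([False, True], repeat=len(pairs))]

no_such_barber = all(
    not all(((y, y) not in S) == ((b, y) in S) for y in D)
    for b in D for S in relations)
assert no_such_barber
```

Of course, the tableau proof above shows this for every model, not just the finitely many checked here.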
Appendix M

List of Symbols

Will be updated.

Symbol(s) Reading Definition
∈ Member, element 3.1.1
∅ Empty set 3.1.5
{x : Φ(x)} Class abstract 3.1.6
⊆ Subset 3.2.1
⊂ Proper subset 3.2.3
∪ Union 3.4.1
∩ Intersection 3.4.2
\ Difference 3.4.5
X ×Y Cartesian product 3.5.3–4
X^n Cartesian n-cube 3.5.4
f, g, h, . . . Functions 3.6.4
dom(f ) Domain of f
rg(f ) Range of f
N Natural numbers 3.7.1
¬ Negation 4.1.3
∧ Conjunction
∨ Disjunction
→ Conditional
↔ Biconditional
P The set of sentence letters 4.1.5
L The set of formulas 4.1.6
φ, ψ, θ, . . . Formulas
T (φ) The parsing tree of φ 4.3.5
c(φ) The complexity of φ
v, u, w, . . . Valuations 5.1.2
JφK_v The truth-value of φ under v 5.1.9
Γ ⊨ φ Γ entails φ 5.2.2
