
Symbolic Logic

An Accessible Introduction to Serious Mathematical Logic


Tony Roy
version 7.0
November 11, 2014

Preface
There is, I think, a gap between what many students learn in their first course in
formal logic, and what they are expected to know for their second. Thus courses
in mathematical logic with metalogical components often cast the barest glance at
mathematical induction, and even the very idea of reasoning from definitions. But
a first course also may leave these untreated, and fail as well explicitly to lay down
the definitions upon which the second course is based. The aim of this text is to
integrate material from these courses and, in particular, to make serious mathematical
logic accessible to students I teach. The first parts introduce classical symbolic logic
as appropriate for beginning students; the material builds to Gödel's adequacy and
incompleteness results in the last parts. A distinctive feature of the last part is a
complete development of Gödel's second incompleteness theorem.
Accessibility, in this case, includes components which serve to locate this text
among others: First, assumptions about background knowledge are minimal. I do
not assume particular content about computer science, or about mathematics much
beyond high school algebra. Officially, everything is introduced from the ground up.
No doubt, the material requires a certain sophistication which one might acquire
from other courses in critical reasoning, mathematics or computer science. But the
requirement does not extend to particular contents from any of these areas.
Second, I aim to build skills, and to keep the conceptual distance between different applications of an idea relatively short. Authors of books that are entirely correct and precise
may assume skills and require readers to recognize connections and arguments that
are not fully explicit. Perhaps this accounts for some of the reputed difficulty of the
material. In contrast, I strive to make arguments almost mechanical and mundane
(some would say pedantic). In many cases, I attempt this by introducing relatively
concrete methods for reasoning. The methods are, no doubt, tedious or unnecessary
for the experienced logician. However, I have found that they are valued by students,
insofar as students are presented with an occasion for success. These methods are not
meant to wash over or substitute for understanding details, but rather to expose and
clarify them. Clarity, beauty and power come, I think, by getting at details, rather
than burying or ignoring them.
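The sense in which such reasoning can be made mechanical is illustrated by the truth-table test for sentential validity developed in chapter 4: an argument is sententially valid just in case no assignment of truth values makes all the premises true and the conclusion false, and that condition can be checked by brute force. A minimal sketch of the idea (an illustration added here, not material from the text):

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Truth-table test: an argument is sententially valid iff no assignment
    of truth values to its atoms makes every premise true and the conclusion false."""
    for vals in product([True, False], repeat=len(atoms)):
        a = dict(zip(atoms, vals))          # one row of the truth table
        if all(p(a) for p in premises) and not conclusion(a):
            return False                    # counterexample row found
    return True

# Modus ponens: P, P -> Q, therefore Q -- valid
print(valid([lambda a: a['P'], lambda a: (not a['P']) or a['Q']],
            lambda a: a['Q'], ['P', 'Q']))   # True

# Affirming the consequent: Q, P -> Q, therefore P -- invalid
print(valid([lambda a: a['Q'], lambda a: (not a['P']) or a['Q']],
            lambda a: a['P'], ['P', 'Q']))   # False
```

The point is only that the test terminates after finitely many rows; the text's own methods make informal reasoning similarly stepwise.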
Third, the discussion is ruthlessly directed at core results. Results may be rendered inaccessible to students, who have many constraints on their time and schedules, simply because the results would come up in, say, a second course rather than
a first. My idea is to exclude side topics and problems, and to go directly after (what
I see as) the core. One manifestation is the way definitions and results from earlier
sections feed into ones that follow. Thus simple integration is a benefit. Another is
the way predicate logic with identity is introduced as a whole in Part I. Though it
is possible to isolate sentential logic from the first parts of chapter 2 through chapter 7, and so to use the text for separate treatments of sentential and predicate logic,
the guiding idea is to avoid repetition that would be associated with independent
treatments for sentential logic, or perhaps monadic predicate logic, full predicate
logic, and predicate logic with identity.
Also (though it may suggest I am not so ruthless about extraneous material as
I would like to think), I try to offer some perspective about what is accomplished
along the way. In addition, this text may be of particular interest to those who have,
or desire, an exposure to natural deduction in formal logic. In this case, accessibility
arises from the nature of the system, and association with what has come before.
In the first part, I introduce both axiomatic and natural derivation systems; and in
Part III, show how they are related.
Answers to selected exercises, indicated by star, are provided in the back of the
book. Answers function as additional examples, complete demonstrations, and supply a check to see that work is on the right track. It is essential to success that you
work a significant body of exercises successfully and independently. So do not neglect exercises!
There are different ways to organize a course around this text. For students who
are likely to complete the whole, the ideal is to proceed sequentially through the text
from beginning to end (but postponing chapter 3 until after chapter 6). Taken as
wholes, Part II depends on Part I; parts III and IV on parts I and II. Part IV is mostly
independent of Part III. I am currently working within a sequence that isolates sentential logic from quantificational logic, treating them in separate quarters, together
covering all of chapters 1-7 (except 3). A third course picks up leftover chapters
from the first two parts (3 and 8) with Part III; and a fourth the leftover chapters
from the first parts with Part IV. Perhaps not the most efficient arrangement, but the
best I have been able to do with shifting student populations. Other organizations are
possible!
A remark about chapter 7 especially for the instructor: By a formal system for

PREFACE

iii

reasoning with semantic definitions, chapter 7 aims to leverage derivation skills from
earlier chapters to informal reasoning with definitions. I have had a difficult time
convincing instructors to try this material, and have even been told flatly that these
skills cannot be taught. In my experience, this is false (and when I have been able
to convince others to try the chapter, they have quickly seen its value). Perhaps the
difficulty is that it is weird: none of us had (or needed) anything like this when
we learned logic. Of course, if one is presented with students whose mathematical
sophistication is sufficient for advanced work, the material is not necessary. But if, as
is often the case, especially for students in philosophy, one obtains one's mathematical
sophistication from courses in logic, this chapter is an important part of the bridge
from earlier material to later. Additionally, the chapter is an important take-away
even for students who will not continue to later material. The chapter closes an
open question left from chapter 4: how it is possible to demonstrate quantificational
validity. But further, the ability to reason closely with derivations is a skill from
which students in (sentential or) predicate logic, even though they never go on to
formalize another sentence or do another derivation, will benefit both for philosophy
and more generally.
Naturally, results in this book are not innovative. If there is anything original,
it is in presentation. Even here, I am greatly indebted to others, especially perhaps
Bergmann, Moor and Nelson, The Logic Book; Mendelson, Introduction to Mathematical Logic; and Smith, An Introduction to Gödel's Theorems. I thank my first
logic teacher, G.J. Mattey, who communicated to me his love for the material. And
I thank especially my colleagues John Mumma and Darcy Otto for many helpful
comments. In addition I have received helpful feedback from Hannah Roy and Steve
Johnson, along with students in different logic classes at CSUSB. I welcome comments, and expect that your sufferings will make it better still.
This text evolved over a number of years, starting modestly from notes originally
provided as a supplement to other texts. It is now long (!) and perhaps best conceived
in separate volumes for parts I and II and for parts III and IV. With the addition of
Part IV it is complete for the first time in this version. (But chapter 11, which I never
get to in teaching, remains a stub that could be developed in different directions.)
Most of the text is reasonably stable, though I shall be surprised if I have not introduced errors in the last part, both substantive and otherwise. I apologize for these in
advance, and anticipate that you will let me hear about them in short order!
I think this is fascinating material, and consider it great reward when students
respond "cool!" as they sometimes do. I hope you will have that response more than
once along the way.

T.R.
Fall 2014

Contents

Preface                                                   i
Contents                                                  v
Named Definitions                                         ix
Quick Reference Guides                                    xvii

I  The Elements: Four Notions of Validity

1  Logical Validity and Soundness                         4
   1.1  Consistent Stories                                5
   1.2  The Definitions                                   10
   1.3  Some Consequences                                 22

2  Formal Languages                                       29
   2.1  Sentential Languages                              30
   2.2  Quantificational Languages                        44

3  Axiomatic Deduction                                    65
   3.1  General                                           66
   3.2  Sentential                                        70
   3.3  Quantificational                                  78

4  Semantics                                              94
   4.1  Sentential                                        94
   4.2  Quantificational                                  111

5  Translation                                            135
   5.1  General                                           135
   5.2  Sentential                                        137
   5.3  Quantificational                                  168

6  Natural Deduction                                      205
   6.1  General                                           205
   6.2  Sentential                                        214
   6.3  Quantificational                                  266
   6.4  The system ND+                                    313

II  Transition: Reasoning About Logic                     325

7  Direct Semantic Reasoning                              327
   7.1  General                                           328
   7.2  Sentential                                        331
   7.3  Quantificational                                  346

8  Mathematical Induction                                 373
   8.1  General Characterization                          373
   8.2  Preliminary Examples                              379
   8.3  Further Examples (for Part III)                   392
   8.4  Additional Examples (for Part IV)                 402

III  Classical Metalogic: Soundness and Adequacy          418

9  Preliminary Results                                    421
   9.1  Semantic Validity Implies Logical Validity        421
   9.2  Validity in AD Implies Validity in ND             426
   9.3  Validity in ND Implies Validity in AD             432
   9.4  Extending to ND+                                  453

10  Main Results                                          457
    10.1  Soundness                                       458
    10.2  Sentential Adequacy                             466
    10.3  Quantificational Adequacy: Basic Version        477
    10.4  Quantificational Adequacy: Full Version         492

11  More Main Results                                     507
    11.1  Expressive Completeness                         507
    11.2  Independence                                    512
    11.3  Isomorphic Models                               515
    11.4  Compactness and Isomorphism                     525
    11.5  Submodels and Löwenheim-Skolem                  527

IV  Logic and Arithmetic: Incompleteness and Computability   532

12  Recursive Functions and Q                             536
    12.1  Recursive Functions                             537
    12.2  Expressing Recursive Functions                  544
    12.3  Capturing Recursive Functions                   555
    12.4  More Recursive Functions                        572
    12.5  Essential Results                               590

13  Gödel's Theorems                                      602
    13.1  Gödel's First Theorem                           602
    13.2  Gödel's Second Theorem: Overview                609
    13.3  The Derivability Conditions: Background         614
    13.4  The second and third conditions                 669
    13.5  Reflections on the theorem                      685

14  Logic and Computability                               691
    14.1  Turing Computable Functions                     691
    14.2  Essential Results                               705
    14.3  Church's Thesis                                 711

Concluding Remarks                                        731

Answers to Selected Exercises                             734
    Chapter One                                           735
    Chapter Two                                           740
    Chapter Three                                         749
    Chapter Four                                          756
    Chapter Five                                          768
    Chapter Six                                           782
    Chapter Seven                                         805
    Chapter Eight                                         813
    Chapter Nine                                          820
    Chapter Ten                                           834
    Chapter Eleven                                        839
    Chapter Twelve                                        840
    Chapter Thirteen                                      847
    Chapter Fourteen                                      907

Bibliography                                              910

Index                                                     914

Named Definitions
AR
LV
LS
IT
VT

chapter 1
Argument . . . . . . . .
Logical Validity . . . . .
Logical Soundness . . . .
Invalidity Test . . . . . .
Validity Test . . . . . . .

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

4
10
11
11
15

VC
TR
FR
AB
FR0

chapter 2
Vocabulary (sentential) . . . . . .
Formulas (sentential) . . . . . . .
Subformulas . . . . . . . . . . . .
Immediate Subformula . . . . . .
Atomic Subformula . . . . . . . .
Main Operator (formal) . . . . . .
Abbreviation (sentential) . . . . .
Abbreviated Formulas (sentential) .
Vocabulary . . . . . . . . . . . . .
Terms . . . . . . . . . . . . . . .
Formulas . . . . . . . . . . . . . .
Abbreviation . . . . . . . . . . . .
Abbreviated Formulas . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.

31
34
37
37
37
37
39
39
45
49
51
56
57

MP
AV
AS
AQ
AE
PA

chapter 3
Modus Ponens . . . . . .
Axiomatic Consequence .
AD Sentential . . . . . .
AD Quantificational . . .
AD Equality . . . . . . .
Peano Axioms . . . . . .

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

66
67
70
79
82
84

VC
FR
SB
IS
AS
MO
AB
FR 0

ix

.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

Named Definitions
AD

x
AD Axioms (summary) . . . . . . . . . . . . . . . . . .

85

QI
TA
SF
B(s)
B(r)
B()
B(!)
B(8)
TI
QV
B(^)
B(_)
B($)
B(9)
SF0

chapter 4
Satisfaction as Truth (Sentential) . .
Characteristic Table () . . . . . . .
Characteristic Table (!) . . . . . .
Sentential Validity . . . . . . . . . .
Characteristic Table (_) . . . . . . .
Characteristic Table (^) . . . . . . .
Characteristic Table ($) . . . . . .
Truth for Abbreviations (Sentential) .
Quantificational Interpretations . . .
Term Assignment . . . . . . . . . .
Satisfaction . . . . . . . . . . . . .
Branch Condition (s) . . . . . . . . .
Branch Condition (r) . . . . . . . . .
Branch Condition () . . . . . . . .
Branch Condition (!) . . . . . . . .
Branch Condition (8) . . . . . . . .
Truth on an Interpretation . . . . . .
Quantificational Validity . . . . . . .
Branch Condition (^) . . . . . . . .
Branch Condition (_) . . . . . . . .
Branch Condition ($) . . . . . . . .
Branch Condition (9) . . . . . . . .
Satisfaction for Abbreviations . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

95
95
95
102
107
107
107
109
111
116
118
120
120
120
120
120
124
125
130
130
130
130
133

CG
DC
SO
CS
MO
TF
TP

chapter 5
Criterion of Goodness for Translation
Declarative Sentences . . . . . . . .
Sentential Operator . . . . . . . . .
Compound and Simple . . . . . . .
Main Operator (informal) . . . . . .
Truth Functional Operator . . . . . .
Translation Procedure . . . . . . . .

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

136
142
142
142
142
142
144

N1
SD
FA

chapter 6
Natural Derivation One . . . . . . . . . . . . . . . . . . 206
Subderivation . . . . . . . . . . . . . . . . . . . . . . . 213
Accessible Formula . . . . . . . . . . . . . . . . . . . . 213

ST

T()
T(!)
SV
T(_)
T(^)
T($)
ST 0

Named Definitions
SA
R
!E
!I
^E
^I
I
E
?I
I
E
_I
_E
$E
$I
SG
SC

8E
9I
8I
9E
SG
SC
=I
=E
(8E)
(9I)
(8I)
(9E)
Q
IN
PA7
MT
NB
DS
HS
DN
Com

xi
Accessible Subderivation . . . . . . . . .
ND Reiteration . . . . . . . . . . . . . . .
ND ! Exploitation . . . . . . . . . . . .
ND ! Introduction . . . . . . . . . . . .
ND ^ Exploitation . . . . . . . . . . . . .
ND ^ Introduction . . . . . . . . . . . . .
ND  Introduction . . . . . . . . . . . . .
ND  Exploitation . . . . . . . . . . . . .
ND ? Introduction . . . . . . . . . . . . .
ND  Introduction . . . . . . . . . . . . .
ND  Exploitation . . . . . . . . . . . . .
ND _ Introduction . . . . . . . . . . . . .
ND _ Exploitation . . . . . . . . . . . . .
ND $ Exploitation . . . . . . . . . . . .
ND $ Introduction . . . . . . . . . . . .
Strategies for a Goal (Sentential) . . . . .
Strategies for a Contradiction (Sentential)
ND 8 Exploitation . . . . . . . . . . . . .
ND 9 Introduction . . . . . . . . . . . . .
ND 8 Introduction . . . . . . . . . . . . .
ND 9 Exploitation . . . . . . . . . . . . .
Strategies for a Goal . . . . . . . . . . . .
Strategies for a Contradiction . . . . . . .
ND = Introduction . . . . . . . . . . . . .
ND = Exploitation . . . . . . . . . . . . .
(8) Exploitation . . . . . . . . . . . . . .
(9) Introduction . . . . . . . . . . . . . .
(8) Introduction . . . . . . . . . . . . . .
(9) Exploitation . . . . . . . . . . . . . .
Robinson Arithmetic Axioms . . . . . . .
Mathematical Induction . . . . . . . . . .
Peano Induction Axiom . . . . . . . . . .
ND+ Modus Tollens . . . . . . . . . . . .
ND+ Negated Biconditional . . . . . . . .
ND+ Disjunctive Syllogism . . . . . . . .
ND+ Hypothetical Syllogism . . . . . . .
ND+ Double Negation . . . . . . . . . . .
ND+ Commutation . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

213
213
215
216
219
220
224
224
226
226
226
227
228
233
234
241
253
268
269
273
276
283
283
294
295
299
299
299
299
299
305
311
313
314
314
314
315
316

Named Definitions
Assoc
Idem
Impl
Trans
DeM
Exp
Equiv
Dist
QN
BQN
ST ()

T()
ST

com
idm
dem
cnj
dsj
neg
ret
SV
exs
ins
cnd
bcnd
abv
ST 0

dst
SF
SF0
TI
QV
unv
qn
TA
eq

xii
ND+ Association . . . . . .
ND+ Idempotence . . . . . .
ND+ Implication . . . . . . .
ND+ Transposition . . . . .
ND+ DeMorgan . . . . . . .
ND+ Exportation . . . . . .
ND+ Equivalence . . . . . .
ND+ Distribution . . . . . .
ND+ Quantifier Negation . .
Bounded Quantifier Negation

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

317
317
317
317
317
317
317
317
318
321

chapter 7
Satisfaction for Stroke . . . . . . . . . . . . . .
Characteristic Table () . . . . . . . . . . . . . .
Sentential Truth (formalized) . . . . . . . . . .
Commutation (metalinguistic) . . . . . . . . . .
Idempotence (metalinguistic) . . . . . . . . . .
DeMorgan (metalinguistic) . . . . . . . . . . .
Conjunctive rules (metalinguistic) . . . . . . . .
Disjunctive rules (metalinguistic) . . . . . . . .
Negation Rules (metalinguistic) . . . . . . . . .
Reiteration (metalinguistic) . . . . . . . . . . .
Sentential Validity (formalized) . . . . . . . . .
Existential rules (metalinguistic) . . . . . . . .
Inspection . . . . . . . . . . . . . . . . . . . .
Conditional rules (metalinguistic) . . . . . . . .
Biconditional rules (metalinguistic) . . . . . . .
Abbreviation (metalinguistic) . . . . . . . . . .
Abbreviations for Sentential Truth (formalized)
Distribution (metalinguistic) . . . . . . . . . . .
Satisfaction (formalized) . . . . . . . . . . . .
Abbreviations for Satisfaction (formalized) . . .
Truth on an Interpretation (formalized) . . . . .
Quantificational Validity (formalized) . . . . . .
Universal rules (metalinguistic) . . . . . . . . .
Quantifier negation (metalinguistic) . . . . . . .
Term Assignment (formalized) . . . . . . . . .
Equality rules (metalinguistic) . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

330
331
332
332
332
332
332
332
332
332
334
335
335
339
339
340
340
341
346
346
348
348
348
349
351
351

Named Definitions

xiii

SF( R )
SF(8)
SF0 .9/
def

Satisfaction for relation symbols (formalized)


Satisfaction for 8 (formalized) . . . . . . . .
Satisfaction for 9 (formalized) . . . . . . . . .
Definition (metalinguistic) . . . . . . . . . . .

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

.
.
.
.

352
356
356
362

AI

chapter 8
Term Assignment on an Interpretation . . . . . . . . . . 405
chapter 9

Con
./
Max
.?/
Scgt
.??/

chapter 10
Consistency . . . . . . . . .
Core Thesis (sentential) . . .
Maximality . . . . . . . . . .
Core Thesis (preliminary) . .
Scapegoat Set . . . . . . . .
Core Thesis (quantificational)

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

.
.
.
.
.
.

467
468
472
480
484
492

A1()
A1(!)
A2()
A2(!)
IS
EE
ST
SM
ES

chapter 11
Table for Independence ()
Table for Independence (!)
Table for Independence ()
Table for Independence (!)
Isomorphism . . . . . . . .
Elementary Equivalence . .
Satisfiability . . . . . . . .
Submodel . . . . . . . . .
Elementary Submodel . . .

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.

512
512
515
515
516
518
525
527
527

chapter 12
successor . . . . . . . . . .
zero . . . . . . . . . . . .
identity . . . . . . . . . . .
Composition . . . . . . . .
Recursion . . . . . . . . .
plus . . . . . . . . . . . . .
times . . . . . . . . . . . .
factorial . . . . . . . . . .
Recursion Theorem . . . .
regular minimization . . . .
Recursive Functions . . . .

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

539
539
539
539
539
540
540
540
541
543
543

suc.x/
zero.x/
j
idntk

CM
RC
plus.x; y/
times.x; y/
fact.y/

RT
RM
RF

Named Definitions
PR

primitive recursive . . . .
power . . . . . . . . . .
EXr
Expressionr . . . . . . .
EXf
Expressionf . . . . . . .
CP
Capture . . . . . . . . . .
0
Delta Formulas . . . . . .
pred.y/
predecessor . . . . . . .
subc.x; y/
subtraction with cutoff . .
absval.x - y/
absolute value . . . . . .
sg.y/
sign . . . . . . . . . . . .
csg.y/
converse sign . . . . . . .
CF
characteristic function . .
EQ.s.E
x/; t.Ey//
equality . . . . . . . . .
LEQ.s.E
x/; t.Ey//
less than or equal . . . .
LESS.s.E
x/; t.Ey//
less than . . . . . . . . .
NEG.P.E
x//
negation . . . . . . . . .
DSJ.P.E
x/; Q.Ey//
disjunction . . . . . . . .
IMP.P.E
x/; Q.Ey//
implication . . . . . . . .
.9y  z/P.Ex; z; y/ exists less than or equal to
.9y < z/P.Ex; z; y/ exists less than . . . . . .
.8z  y/P.Ex; z/
all less than or equal to .
.8z < y/P.Ex; z/
all less than . . . . . . .
.y  z/P.Ex; z; y/ bounded minimization . .
f.Ex/=C0 : : : Ck
definition by cases . . . .
FCTR.m; n/
factor . . . . . . . . . . .
PRIME.n/
prime . . . . . . . . . . .
pi.n/
prime sequence . . . . .
exp.n; i/
prime exponent . . . . .
len.n/
prime length . . . . . . .
cncat.m; n/
concatenation . . . . . .
VAR.n/
variable . . . . . . . . . .
TERMSEQ.m; n/
term sequence . . . . . .
TERM.n/
term . . . . . . . . . . .
ATOM.n/
atomic formula . . . . . .
WFF.n/
well-formed formula . . .
FORMSEQ.m; n/
formula sequence . . . .
SENTPRF.m; n/
sentential proof . . . . .
SENTAXIOM.n/
sentential axiom . . . . .
power.x; y/

xiv
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

543
544
545
545
555
556
572
572
573
573
573
573
574
575
575
575
575
575
575
576
576
576
576
576
577
577
578
578
578
582
583
583
583
584
584
584
585
585

Named Definitions
cnd.n; o/ D m

conditional . . . . . . . .
recursive modus ponens .
TERMSUB.t; v; s; u/ substitution in terms . . .
ATOMSUB.p; v; s; u/ substitution in atomics . .
FORMSUB.p; v; s; u/ substitution into formulas
formusb.p; v; s/
formsub (function) . . . .
FREE.p; v/
free variable . . . . . . .
SENT.n/
sentence . . . . . . . . .
FREEFOR.s; v; u/
free for . . . . . . . . . .
FFSEQ.m; s; v; u/
free for sequence . . . . .
AXIOM4.n/
axiom 4 . . . . . . . . .
GEN.m; n/
gen rule . . . . . . . . .
AXIOM6.n/
axiom 6 . . . . . . . . .
ICON.m; n; o/
immediate consequence .
AXIOM.n/
axiom of Q . . . . . . . .
PRFQ.m; n/
proof in Q . . . . . . . .

xv
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

585
585
586
587
587
587
588
588
588
588
589
589
589
590
590
590

chapter 13

f(x⃗, y⃗, z⃗): composition
μvQ(x⃗, v): minimization
(μy ≤ z)Q(x⃗, z, y): bounded minimization
rm: remainder
qt: quotient
β: beta function
suc(x): defined successor
zero(x): defined zero
idnt_k^j(x1 . . . xj): defined identity function
∸: dot minus
|: Factor
Pr: Prime
Rp: Rprime
G: Good
d: least good
lcm: lcm
plm: plm
maxs: maxs
maxp: maxp
h(i): h(i)


Named Definitions

pred: pred
sg: sg
csg: csg
ex: ex
exc(m, n, i): exc
val(m, n, i): val
Σ*: Sigma Star Formulas


chapter 14

CT: Church's thesis
PRSEQ(m, n): primitive recursive sequence
AC: Algorithmic computability
KU: K-U Machine


Quick Reference Guides

Negation and Quantity
Countability
Parts of a Formula
More on Countability
Grammar Quick Reference
AD Quick Reference
Peano Arithmetic (AD)
Semantics Quick Reference (Sentential)
Basic Notions of Set Theory
Semantics Quick Reference (quantificational)
Definitions for Translation
Cause and Conditional
Definitions for Auxiliary Assumptions
ND Quick Reference (Sentential)
ND Quick Reference (Quantificational)
LNT reference
Robinson and Peano Arithmetic (ND)
ND+ Quick Reference
Metalinguistic Quick Reference (sentential)
Metalinguistic Quick Reference (quantificational)
Theorems of Chapter 7
Induction Schemes
First Theorems of Chapter 8
Final Theorems of Chapter 8
Some Arithmetic Relevant to Gödel Numbering
More Arithmetic Relevant to Gödel Numbering
The Recursion Theorem
Arithmetic for the Beta Function



First Results of Chapter 12
Final Results of Chapter 12
Additional Theorems of PA
First theorems of chapter 13
Theorems to carry forward from 13.3.3
Final theorems of chapter 13
Simple Time Dilation


Part I

The Elements: Four Notions of Validity

Introductory
Symbolic logic is a tool for argument evaluation. In this part of the text we introduce
the basic elements of that tool. Those parts are represented in the following diagram.


[Diagram: Ordinary Arguments lead to a Formal Language; the Formal Language
feeds both Semantic Methods and Derivation Methods; and these in turn feed
Metalogical Consideration.]

The starting point is ordinary arguments. Such arguments come in various forms and
contexts, from politics and ordinary living to mathematics and philosophy. Here
is a classic, simple case.
(A)  All men are mortal.
     Socrates is a man.
     -------------------
     Socrates is mortal.

This argument has premises listed above a line, with a conclusion listed below. Here
is another case which may seem less simple.

(B)  If the maid did it, then it was done with a revolver only if it was done in the
parlor. But if the butler is innocent, then the maid did it unless it was done in
the parlor. The maid did it only if it was done with a revolver, while the butler
is guilty if it did happen in the parlor. So the butler is guilty.

(It is fun to think about this; from the given evidence, it follows that the butler did it!)
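Later chapters develop the official methods for cases like this; as an informal preview, the butler argument can be checked by brute force over truth values. The sentence letters and the encoding of the premises below are assumptions introduced here for illustration, not part of the text's machinery.

```python
from itertools import product

# Sentence letters (an illustrative assumption):
# M: the maid did it; R: it was done with a revolver;
# P: it was done in the parlor; B: the butler is guilty.

def implies(a, b):
    """Material conditional: 'if a then b'."""
    return (not a) or b

def premises(M, R, P, B):
    return (implies(M, implies(R, P))   # maid -> (revolver only if parlor)
            and implies(not B, M or P)  # butler innocent -> (maid unless parlor)
            and implies(M, R)           # maid only if revolver
            and implies(P, B))          # parlor -> butler guilty

# Valid iff no assignment makes all premises true and the conclusion false.
valid = all(B for M, R, P, B in product([False, True], repeat=4)
            if premises(M, R, P, B))
print(valid)  # True: given the premises, the butler did it
```

Running the search confirms that every assignment satisfying the premises makes 'the butler is guilty' true.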
At any rate, we begin in chapter 1 with an account of success for ordinary arguments


(the leftmost box). This introduces us to the fundamental notions of logical validity
and logical soundness.
But just as it is one thing to know what a cookie is, and another to know whether
there is one in the jar, so it is one thing to know what logical validity and soundness
are, and another to know whether arguments have them. In some cases, it may be obvious. But others are not so clear. Consider, say, the butler case (B) above, along with
complex or controversial arguments in philosophy or mathematics. Thus symbolic
logic is introduced as a sort of machine or tool to identify validity and soundness.
This machine begins with certain formal representations of ordinary reasonings. We
introduce these representations in chapter 2 and translate from ordinary arguments
to the formal representations in chapter 5 (the box second from the left). Once arguments have this formal representation, there are different modes of operation upon
them. A semantic notion of validity is developed in chapter 4 and chapter 7 (the
upper box). And a pair of derivation systems, with corresponding notions of validity,
are introduced in chapter 3 and chapter 6 (the lower box). Evaluation of the butler
case is entirely routine given the methods of just the first parts from, say, chapter 4
and chapter 5, or chapter 5 and chapter 6.
These, then, are the elements of our logical machine: we start with the fundamental notion of logical validity; then there are formal representations of ordinary
reasonings, along with semantic validity, and validity for our two derivation systems.
These elements are developed in this part. In Part II and Part III we turn to thinking
about how these parts are interrelated. In particular, we demonstrate that the machine
does what it is supposed to do (the right-hand box). But first we have to say what the
parts are. And that is the task we set ourselves in this part.

Chapter 1

Logical Validity and Soundness


Symbolic logic is a tool or machine for the identification of argument goodness. It
makes sense to begin, however, not with the machine, but by saying something about
this argument goodness that the machinery is supposed to identify. That is the task
of this chapter.
But first, we need to say what an argument is.
AR An argument is some sentences, one of which (the conclusion) is taken to be
supported by the remaining sentences (the premises).
So some sentences are an argument depending on whether premises are taken to
support a conclusion. Such support is often indicated by words or phrases of the
sort, 'so', 'it follows', 'therefore', or the like. We will typically indicate the
division by a simple line between premises and conclusion. Roughly, an argument is
good if premises do what they are taken to do, if they actually support the
conclusion. An argument is bad if they do not accomplish what they are taken to do,
if they do not actually support the conclusion.

[Aside: Important definitions are often offset and given a short name as above.
Then there may be appeal to the definition by its name, in this case, AR.]

Logical validity and soundness correspond to different ways an argument can go
wrong. Consider the following two arguments:

(A)  Only citizens can vote
     Hannah is a citizen
     ----------------------
     Hannah can vote

(B)  All citizens can vote
     Hannah is a citizen
     ----------------------
     Hannah can vote


The line divides premises from conclusion, indicating that the premises are supposed
to support the conclusion. Thus these are arguments. But these arguments go wrong
in different ways. The premises of argument (A) are true; as a matter of fact, only
citizens can vote, and Hannah (my daughter) is a citizen. But she cannot vote; she
is not old enough. So the conclusion is false. Thus, in argument (A), the relation
between the premises and the conclusion is defective. Even though the premises
are true, there is no guarantee that the conclusion is true as well. We will say that
this argument is logically invalid. In contrast, argument (B) is logically valid. If its
premises were true, the conclusion would be true as well. So the relation between
the premises and conclusion is not defective. The problem with this argument is that
the premises are not true: not all citizens can vote. So argument (B) is defective,
but in a different way. We will say that it is logically unsound.
The task of this chapter is to define and explain these notions of logical validity
and soundness. I begin with some preliminary notions, then turn to official definitions
of logical validity and soundness, and finally to some consequences of the definitions.

1.1 Consistent Stories

Given a certain notion of a possible or consistent story, it is easy to state definitions
for logical validity and soundness. So I begin by identifying the kind of stories that
matter. Then we will be in a position to state the definitions, and apply them in some
simple cases.
Let us begin with the observation that there are different sorts of possibility. Consider, say, 'Hannah could make it in the WNBA'. This seems true. She is reasonably
athletic, and if she were to devote herself to basketball over the next few years, she
might very well make it in the WNBA. But wait! Hannah is only a kid; she rarely
gets the ball even to the rim from the top of the key; so there is no way she could
make it in the WNBA. So she both could and could not make it. But this cannot be
right! What is going on? Here is a plausible explanation: different sorts of possibility are involved. When we hold fixed current abilities, we are inclined to say there is
no way she could make it. When we hold fixed only general physical characteristics,
and allow for development, it is natural to say that she might. The scope of what is
possible varies with whatever constraints are in play. The weaker the constraints, the
broader the range of what is possible.
The sort of possibility we are interested in is very broad, and constraints are
correspondingly weak. We will allow that a story is possible or consistent so long
as it involves no internal contradiction. A story is impossible when it collapses from

within. For this it may help to think about the way you respond to ordinary fiction.
Consider, say, Bill and Ted's Excellent Adventure (set and partly filmed locally for
me in San Dimas, CA). Bill and Ted travel through time in a modified phone booth
collecting historical figures for a history project. Taken seriously, this is bizarre, and
it is particularly outlandish to think that a phone booth should travel through time.
But the movie does not so far contradict itself. So you go along. So far, then, so good
(excellent).
But, late in the movie, Bill and Ted have a problem breaking the historical figures
out of jail. So they decide today to go back in time tomorrow to set up a diversion
that will go off in the present. The diversion goes off as planned, and the day is saved.
Somehow, then, as often happens in these films, the past depends on the future, at the
same time as the future depends on the past. This, rather than the time travel itself,
generates an internal conflict. The movie makes it the case that you cannot have
today apart from tomorrow, and cannot have tomorrow apart from today. Perhaps
today and tomorrow have always been repeating in an eternal loop. But, according
to the movie, there were times before today and after tomorrow. So the movie faces
internal collapse. Notice: the objection does not have anything to do with the way
things actually are, with the nature of actual phone booths and the like; it has rather
to do with the way the movie hangs together internally: it makes it impossible
for today to happen without tomorrow, and for tomorrow to happen without today.[1]
Similarly, we want to ask whether stories hold together internally. If a story holds
together internally, it counts for our purposes, as consistent and possible. If a story
does not hold together, it is not consistent or possible.
In some cases, then, stories may be consistent with things we know are true in the
real world. Thus perhaps I come home, notice that Hannah is not in her room, and
imagine that she is out back shooting baskets. There is nothing inconsistent about
this. But stories may remain consistent though they do not fit with what we know to
be true in the real world. Here are cases of phone booths traveling through time and
the like. Stories become inconsistent when they collapse internally, as when today
both can and cannot happen apart from tomorrow.
As with a movie or novel, we can say that different things are true or false in our
stories. In Bill and Ted's Excellent Adventure it is true that Bill and Ted travel through
time in a phone booth, but false that they go through time in a DeLorean (as in the
Back to the Future films). In the real world, of course, it is false that phone booths go
through time, and false that DeLoreans go through time. Officially, a complete story
is always maximal in the sense that any sentence is either true or false in it. A story
is inconsistent when it makes some sentence both true and false. Since, ordinarily,
we do not describe every detail of what is true and what is false when we tell a story,
what we tell is only part of a maximal story. In practice, however, it will be sufficient
for us merely to give or fill in whatever details are relevant in a particular context.

[1] In more consistent cases of time travel (in the movies) time seems to move in a sort of 'Z' so that
after yesterday and today, there is another yesterday and another today. So time does not return to the
very point at which it first turns back. In the trouble cases, however, time seems to move in a sort of
loop so that a point on the path to today (this very day) goes through tomorrow. With this in mind,
it is interesting to think about, say, the Terminator and Back to the Future movies and, maybe more
consistent, Groundhog Day. Even if I am wrong, and Bill and Ted is internally consistent, the overall
point should be clear. And it should be clear that I am not saying anything serious about time travel.
But there are a couple of cases where we cannot say when sentences are true
or false in a story. The first is when stories we tell do not fill in relevant details.
In The Wizard of Oz, it is true that Dorothy wears red shoes. But neither the movie
nor the book has anything to say about whether her shoes include Odor-Eaters. By
themselves, then, neither the book nor the movie gives us enough information to tell
whether 'The red shoes include Odor-Eaters' is true or false in the story. Similarly,
there is a problem when stories are inconsistent. Suppose according to some story,
(a) All dogs can fly
(b) Fido is a dog
(c) Fido cannot fly
Given (a), all dogs fly; but from (b) and (c), it seems that not all dogs fly. Given (b),
Fido is a dog; but from (a) and (c) it seems that Fido is not a dog. Given (c), Fido
cannot fly; but from (a) and (b) it seems that Fido can fly. The problem is not that
inconsistent stories say too little, but rather that they say too much. When a story is
inconsistent, we will simply refuse to say that it makes any sentence (simply) true or
false.[2]
Consider some examples: (a) The true story: 'Everything is as it actually is.'
Since no contradiction is actually true, this story involves no contradiction; so it is
internally consistent and possible.
(b) 'All dogs can fly: over the years, dogs have developed extraordinarily
large and muscular ears; with these ears, dogs can fly.' It is bizarre, but not obviously
inconsistent. If we allow the consistency of stories according to which monkeys fly,
as in The Wizard of Oz, or elephants fly, as in Dumbo, then we should allow that this
story is consistent as well.
(c) 'All dogs can fly, but my dog Fido cannot; Fido's ear was injured while he
was chasing a helicopter, and he cannot fly.' This is not internally consistent. If all
dogs can fly and Fido is a dog, then Fido can fly. You might think that Fido remains
a flying sort of thing. In evaluating internal consistency, however, we require that
meanings remain the same: if 'can fly' means just 'is a flying sort of thing', then
the story falls apart insofar as it says both that Fido is and is not that sort of thing;
if 'can fly' means 'is himself able to fly', then the story falls apart insofar as it says
that Fido himself both is and is not able to fly. So long as 'can fly' means the same
in each use, the story is sure to fall apart insofar as it says both that Fido is and is not
that sort of thing.

[2] The intuitive picture developed above should be sufficient for our purposes. However, we are
on the verge of vexed issues. For further discussion, you may want to check out the vast literature
on possible worlds. Contributions of my own include the introductory article, 'Modality', in The
Continuum Companion to Metaphysics.
(d) 'Germany won WWII; the United States never entered the war; after a long
and gallant struggle, England and the rest of Europe surrendered.' It did not happen;
but the story does not contradict itself. For our purposes, then, it counts as possible.

(e) '1 + 1 = 3; the numerals 2 and 3 are switched (1, 3, 2, 4, 5, 6,
7 . . . ); so that taking one thing and one thing results in three things.' This story
does not hang together. Of course numerals can be switched; but switching numerals
does not make one thing and one thing three things! We tell stories in our own
language (imagine that you are describing a foreign-language film in English). According to the story, people can say correctly '1 + 1 = 3', but this does not make
it the case that 1 + 1 = 3. Compare a language like English except that 'fly' means
'bark'; and consider a movie where dogs are ordinary, but people correctly assert, in
this language, 'dogs fly': it would be wrong to say, in English, that this is a movie in
which dogs fly. And, similarly, we have not told a story where 1 + 1 = 3.

[Aside: Some authors prefer talk of possible worlds, possible situations, or the like
to that of consistent stories. It is conceptually simpler to stick with stories, as I
have, than to have situations and distinct descriptions of them. However, it is worth
recognizing that our consistent stories are or describe possible situations, so that the
one notion matches up directly with the others.]
E1.1. Say whether each of the following stories is internally consistent or inconsistent. In either case, explain why.
*a. Smoking cigarettes greatly increases the risk of lung cancer, although most
people who smoke cigarettes do not get lung cancer.
b. Joe is taller than Mary, but Mary is taller than Joe.
*c. Abortion is always morally wrong, though abortion is morally right in order
to save a woman's life.


d. Mildred is Dr. Saunders's daughter, although Dr. Saunders is not Mildred's
father.
*e. No rabbits are nearsighted, though some rabbits wear glasses.
f. Ray got an A on the final exam in both Phil 200 and Phil 192. But he got a
C on the final exam in Phil 192.
*g. Bill Clinton was never president of the United States, although Hillary is
president right now.
h. Egypt, with about 100 million people, is the most populous country in Africa,
and Africa contains the most populous country in the world. But the United
States has over 200 million people.
*i. The Death Star is a weapon more powerful than that in any galaxy, though
there is, in a galaxy far far away, a weapon more powerful than it.
j. Luke and the rebellion valiantly battled the evil empire, only to be defeated.
The story ends there.
E1.2. For each of the following sentences, (i) say whether it is true or false in the
real world and then (ii) say if you can whether it is true or false according
to the accompanying story. In each case, explain your answers. The first is
worked as an example.
a. Sentence: Aaron Burr was never a president of the United States.
Story: Aaron Burr was the first president of the United States; however, he
turned traitor and was impeached and then executed.
(i) It is true in the real world that Aaron Burr was never a president of the
United States. (ii) But the story makes the sentence false, since the story says
Burr was the first president.
b. Sentence: In 2006, there were still buffalo.
Story: A thundering herd of buffalo overran Phoenix, Arizona, in early 2006.
The city no longer exists.
*c. Sentence: After overrunning Phoenix in early 2006, a herd of buffalo overran
Newark, New Jersey.
Story: A thundering herd of buffalo overran Phoenix, Arizona, in early 2006.
The city no longer exists.


d. Sentence: There has been an all-out nuclear war.


Story: After the all-out nuclear war, John Connor organized resistance against
the machines who had taken over the world for themselves.
*e. Sentence: Jack Nicholson has swum the Atlantic.
Story: No human being has swum the Atlantic. Jack Nicholson and Bill
Clinton and you are all human beings, and at least one of you swam all the
way across!
f. Sentence: Some people have died as a result of nuclear explosions.
Story: As a result of a nuclear blast that wiped out most of this continent, you
have been dead for over a year.
*g. Sentence: Your instructor is not a human being.
Story: No beings from other planets have ever made it to this country. However, your instructor made it to this country from another planet.
h. Sentence: Lassie is both a television and movie star.
Story: Dogs have super-big ears and have learned to fly. Indeed, all dogs can
fly. Among the many dogs are Lassie and Rin Tin Tin.
*i. Sentence: The Yugo is the most expensive car in the world.
Story: Jaguar and Rolls Royce are expensive cars. But the Yugo is more
expensive than either of them.
j. Sentence: Lassie is a bird who has learned to fly.
Story: Dogs have super-big ears and have learned to fly. Indeed, all dogs can
fly. Among the many dogs are Lassie and Rin Tin Tin.

1.2 The Definitions

The definition of logical validity depends on what is true and false in consistent
stories. The definition of soundness builds directly on the definition of validity. Note:
in offering these definitions, I stipulate the way the terms are to be used; there is no
attempt to say how they are used in ordinary conversation; rather, we say what they
will mean for us in this context.
LV An argument is logically valid if and only if (iff) there is no consistent story in
which all the premises are true and the conclusion is false.


LS An argument is logically sound iff it is logically valid and all of its premises
are true in the real world.
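Though the official definitions quantify over consistent stories rather than anything mathematical, their shape can be modeled in a few lines of code. In this sketch the names, the three-sentence setup, and the consistency test are assumptions introduced for illustration only: a 'story' is an assignment of truth values to three sentences, and a story counts as consistent so long as it does not make 'All citizens can vote' and 'Hannah is a citizen' true while making 'Hannah can vote' false.

```python
from itertools import product

# Illustrative sentences: A = 'All citizens can vote',
# C = 'Hannah is a citizen', V = 'Hannah can vote'.

def consistent(A, C, V):
    # A story making A and C true but V false contradicts itself.
    return not (A and C and not V)

def valid(premises, conclusion):
    """LV: valid iff no consistent story makes the premises true
    and the conclusion false."""
    return not any(consistent(*story)
                   and all(p(*story) for p in premises)
                   and not conclusion(*story)
                   for story in product([False, True], repeat=3))

# Argument (B): All citizens can vote; Hannah is a citizen; so Hannah can vote.
B_valid = valid([lambda A, C, V: A, lambda A, C, V: C],
                lambda A, C, V: V)

# LS: sound iff valid and the premises are true in the real world.
# Not all citizens can actually vote, so (B) is valid but unsound.
B_sound = B_valid and False
print(B_valid, B_sound)  # True False
```

The search finds no consistent counterexample story, so (B) comes out valid; since one premise fails in the real world, it is valid but unsound, exactly as the text says.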
Logical (deductive) validity and soundness are to be distinguished from inductive
validity and soundness or success. For the inductive case, it is natural to focus on
the plausibility or the probability of stories, where an argument is relatively strong
when stories that make the premises true and conclusion false are relatively implausible. Logical (deductive) validity and soundness are thus a sort of limiting case, where
stories that make premises true and conclusion false are not merely implausible, but
impossible. In a deductive argument, conclusions are supposed to be guaranteed;
in an inductive argument, conclusions are merely supposed to be made probable or
plausible. For mathematical logic, we set the inductive case to the side, and focus on
the deductive.

1.2.1 Invalidity

If an argument is logically valid, there is no consistent story that makes the premises
true and conclusion false. So, to show that an argument is invalid, it is enough to
produce even one consistent story that makes premises true and conclusion false.
Perhaps there are stories that result in other combinations of true and false for the
premises and conclusion; this does not matter for the definition. However, if there
is even one story that makes premises true and conclusion false then, by definition,
the argument is not logically valid; and if it is not valid, by definition, it is not
logically sound. We can work through this reasoning by means of a simple invalidity
test. Given an argument, this test has the following four stages.
IT  a. List the premises and negation of the conclusion.
    b. Produce a consistent story in which the statements from (a) are all true.
    c. Apply the definition of validity.
    d. Apply the definition of soundness.

We begin by considering what needs to be done to show invalidity. Then we do it.
Finally we apply the definitions to get the results. For a simple example, consider the
following argument,
(C)  Eating brussels sprouts results in good health
     Ophilia has good health
     ----------------------------------------------
     Ophilia has been eating brussels sprouts


The definition of validity has to do with whether there are consistent stories in which
the premises are true and the conclusion false. Thus, in the first stage, we simply
write down what would be the case in a story of this sort.
a. List premises and negation of conclusion.

   In any story with premises true and conclusion false,
   (1) Eating brussels sprouts results in good health
   (2) Ophilia has good health
   (3) Ophilia has not been eating brussels sprouts

Observe that the conclusion is reversed! At this stage we are not giving an argument.
We rather merely list what is the case when the premises are true and conclusion
false. Thus there is no line between premises and the last sentence, insofar as there is
no suggestion of support. It is easy enough to repeat the premises. Then we say what
is required for the conclusion to be false. Thus, 'Ophilia has been eating brussels
sprouts' is false if Ophilia has not been eating brussels sprouts. I return to this point
below, but that is enough for now.
An argument is invalid if there is even one consistent story that makes the premises
true and the conclusion false. Thus, to show invalidity, it is enough to produce a consistent story that makes the premises true and conclusion false.
b. Produce a consistent story in which the statements from (a) are all true.

   Story: Eating brussels sprouts results in good health, but
   eating spinach does so as well; Ophilia is in good health
   but has been eating spinach, not brussels sprouts.

For each of the statements listed in (a), we satisfy (1) insofar as eating brussels
sprouts results in good health, (2) since Ophilia is in good health, and (3) since
Ophilia has not been eating brussels sprouts. The story explains how she manages to
maintain her health without eating brussels sprouts, and so the consistency of (1) - (3)
together. The story does not have to be true and, of course, many different stories
will do. All that matters is that there is a consistent story in which the premises of
the original argument are true, and the conclusion is false.
Producing a story that makes the premises true and conclusion false is the creative
part. What remains is to apply the definitions of validity and soundness. By LV an
argument is logically valid only if there is no consistent story in which the premises
are true and the conclusion is false. So if, as we have demonstrated, there is such a
story, the argument cannot be logically valid.

c. Apply the definition of validity.

   This is a consistent story that makes the premises true
   and the conclusion false; thus, by definition, the argument is not logically valid.

By LS, for an argument to be sound, it must have its premises true in the real world
and be logically valid. Thus if an argument fails to be logically valid, it automatically
fails to be logically sound.
d. Apply the definition of soundness.

   Since the argument is not logically valid, by definition,
   it is not logically sound.

Given an argument, the definition of validity depends on stories that make the
premises true and the conclusion false. Thus, in step (a) we simply list claims required of any such story. To show invalidity, in step (b), we produce a consistent
story that satisfies each of those claims. Then in steps (c) and (d) we apply the definitions to get the final results; for invalidity, these last steps are the same in every
case.
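Where premises and conclusion can be treated as independent sentences, as in (C), stage (b) amounts to searching for a story that satisfies the claims from stage (a). The following sketch makes that search explicit; the sentence letters are assumptions introduced for illustration, not the text's official notation.

```python
from itertools import product

# Illustrative sentence letters for argument (C):
# S: eating brussels sprouts results in good health
# H: Ophilia has good health
# E: Ophilia has been eating brussels sprouts

# Stage (a): in any story with premises true and conclusion false,
# S and H hold and E does not.
claims = lambda S, H, E: S and H and not E

# Stage (b): search for a story satisfying those claims. Nothing here
# ties E to S and H, so any assignment counts as a consistent story.
counterexamples = [story for story in product([False, True], repeat=3)
                   if claims(*story)]
print(counterexamples)  # [(True, True, False)]

# Stages (c)-(d): a story with premises true and conclusion false
# exists, so (C) is not logically valid, and hence not sound.
```

The single satisfying assignment plays the role of the spinach story: it is a consistent way to make the premises true while the conclusion is false.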
It may be helpful to think of stories as a sort of wedge to pry the premises of
an argument off its conclusion. We pry the premises off the conclusion if there is a
consistent way to make the premises true and the conclusion not. If it is possible to
insert such a wedge between the premises and conclusion, then a defect is exposed
in the way premises are connected to the conclusion. Observe that the flexibility
allowed in consistent stories (with flying dogs and the like) corresponds directly to
the strength of connections required. If connections are sufficient to resist all such
attempts to wedge the premises off the conclusion, they are significant indeed.
Here is another example of our method. Though the argument may seem on its
face not to be a very good one, we can expose its failure by our methods; in fact,
our method may formalize or make rigorous a way you very naturally think about
cases of this sort. Here is the argument,
(D)  I shall run for president
     -----------------------------------------------
     I will be one of the most powerful men on earth
To show that the argument is invalid, we turn to our standard procedure.
a. In any story with the premise true and conclusion false,
1. I shall run for president
2. I will not be one of the most powerful men on earth

CHAPTER 1. LOGICAL VALIDITY AND SOUNDNESS


b. Story: I do run for president, but get no financing and gain no votes; I lose the
election. In the process, I lose my job as a professor and end up begging for
scraps outside a Domino's Pizza restaurant. I fail to become one of the most
powerful men on earth.
c. This is a consistent story that makes the premise true and the conclusion false;
thus, by definition, the argument is not logically valid.
d. Since the argument is not logically valid, by definition, it is not logically sound.
This story forces a wedge between the premise and the conclusion. Thus we use the
definition of validity to explain why the conclusion does not properly follow from
the premises. It is, perhaps, obvious that running for president is not enough to make
me one of the most powerful men on earth. Our method forces us to be very explicit
about why: running for president leaves open the option of losing, so that the premise
does not force the conclusion. Once you get used to it, then, our method may come
to seem a natural approach to arguments.
If you follow this method for showing invalidity, the place where you are most
likely to go wrong is stage (b), telling stories where the premises are true and the
conclusion false. Be sure that your story is consistent, and that it verifies each of the
claims from stage (a). If you do this, you will be fine.
E1.3. Use our invalidity test to show that each of the following arguments is not
logically valid, and so not logically sound. Understand terms in their most
natural sense.
*a. If Joe works hard, then he will get an A
Joe will get an A
Joe works hard
b. Harry had his heart ripped out by a government agent
Harry is dead
c. Everyone who loves logic is happy
Jane does not love logic
Jane is not happy
d. Our car will not run unless it has gasoline
Our car has gasoline
Our car will run


e. Only citizens can vote
Hannah is a citizen
Hannah can vote

1.2.2 Validity

For a given argument, if you cannot find a story that makes the premises true and
conclusion false, you may begin to suspect that it is valid. However, mere failure
to demonstrate invalidity does not demonstrate validity: for all we know, there
might be some tricky story we have not thought of yet. So, to show validity, we need
another approach. If we could show that every story which makes the premises true
and conclusion false is inconsistent, then we could be sure that no consistent story
makes the premises true and conclusion false and so we could conclude that the
argument is valid. Again, we can work through this by means of a procedure, this
time a validity test.
VT a. List the premises and negation of the conclusion.
   b. Expose the inconsistency of such a story.
   c. Apply the definition of validity.
   d. Apply the definition of soundness.

In this case, we begin in just the same way. The key difference arises at stage (b).
For an example, consider this sample argument.
(E)
No car is a person
My mother is a person
My mother is not a car

Since LV has to do with stories where the premises are true and the conclusion false,
as before we begin by listing the premises together with the negation of the conclusion.
a. List premises and negation of conclusion. In any story with premises true and conclusion false,
(1) No car is a person
(2) My mother is a person
(3) My mother is a car


Any story where 'My mother is not a car' is false is one where my mother is a car
(perhaps along the lines of the much reviled 1965 TV series My Mother the Car).
For invalidity, we would produce a consistent story in which (1)-(3) are all true.
In this case, to show that the argument is valid, we show that this cannot be done.
That is, we show that no story that makes each of (1)-(3) true is consistent.
b. Expose the inconsistency of such a story. In any such story,
Given (1) and (3),
(4) My mother is not a person
Given (2) and (4),
(5) My mother is and is not a person

The reasoning should be clear if you focus just on the specified lines. Given (1) and
(3), if no car is a person and my mother is a car, then my mother is not a person. But
then my mother is a person from (2) and not a person from (4). So we have our goal:
any story with (1)-(3) as members contradicts itself and therefore is not consistent.
Observe that we could have reached this result in other ways. For example, we might
have reasoned from (1) and (2) that (4′), my mother is not a car; and then from (3) and
(4′) to the result that (5′), my mother is and is not a car. Either way, an inconsistency
is exposed. Thus, as before, there are different options for this creative part.
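The same result can also be reached by brute force. As a quick illustration (my own sketch, not the text's method), a short script can search every small situation for one where the premises of argument (E) are true and the conclusion false; it finds none, which matches the outcome of the validity test:

```python
from itertools import product

# Brute-force search for a countermodel to argument (E): a situation where
# "No car is a person" and "My mother is a person" are both true while
# "My mother is not a car" is false (i.e., my mother IS a car).
domain = range(3)  # three objects are enough for the illustration

found = False
for cars in product([True, False], repeat=len(domain)):         # which objects are cars
    for persons in product([True, False], repeat=len(domain)):  # which are persons
        for mother in domain:                                   # which one is my mother
            no_car_is_person = all(not (cars[x] and persons[x]) for x in domain)
            if no_car_is_person and persons[mother] and cars[mother]:
                found = True

print(found)  # False: no situation makes the premises true and conclusion false
```

If my mother were both a car and a person, the first premise would already be false; so the search comes up empty, just as the inconsistency at (5) predicts.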
Now we are ready to apply the definitions of logical validity and soundness. First,
c. Apply the definition of validity. So no story with the premises true and conclusion false is a consistent story; so by definition, the argument is logically valid.

For the invalidity test, we produce a consistent story that hits the target from stage
(a), to show that the argument is invalid. For the validity test, we show that any
attempt to hit the target from stage (a) must collapse into inconsistency: no consistent
story includes each of the elements from stage (a) so that there is no consistent story
in which the premises are true and the conclusion false. So by application of LV the
argument is logically valid.
Given that the argument is logically valid, LS makes logical soundness depend
on whether the premises are true in the real world. Suppose we think the premises of
our argument are in fact true. Then,
d. Apply the definition of soundness. Since in the real world no car is a person and my mother is a person, all the premises are true; so by definition, it is logically sound.


Observe that LS requires for logical soundness that an argument is logically valid
and that its premises are true in the real world. Thus we are no longer thinking about
merely possible stories! And we do not say anything at this stage about claims other
than the premises of the original argument! Thus we do not make any claim about the
truth or falsity of the conclusion, my mother is not a car. Rather, the observations
have entirely to do with the two premises, no car is a person and my mother is a
person. When an argument is valid and the premises are true in the real world, by
LS, it is logically sound. But it will not always be the case that a valid argument has
true premises. Say My Mother the Car is in fact a documentary, and therefore
a true account of some car that is a person. Then some cars are persons and the first
premise is false; so you would have to respond as follows,
d. Since in the real world some cars are persons, not all the premises are true. So,
though the argument is logically valid, by definition it is not logically sound.
Another option is that you are in doubt about reincarnation into cars, and in particular
about whether some cars are persons. In this case you might respond as follows,
d. Although in the real world my mother is a person, I cannot say whether no car
is a person; so I cannot say whether all the premises are true. So although the
argument is logically valid, I cannot say whether it is logically sound.
So given validity there are three options: (i) You are in a position to identify all of
the premises as true in the real world. In this case, you should do so, and apply the
definition for the conclusion that the argument is logically sound. (ii) You are in a
position to say that at least one of the premises is false in the real world. In this case,
you should do so, and apply the definition for the conclusion that the argument is not
logically sound. (iii) You cannot identify any premise as false, but neither can you
identify them all as true. In this case, you should explain the situation and apply the
definition for the result that you are not in a position to say whether the argument is
logically sound.
Again, given an argument we say in step (a) what would be the case in any story
that makes the premises true and the conclusion false. Then, at step (b), instead of
finding a consistent story in which the premises are true and conclusion false, we
show that there is no such thing. Steps (c) and (d) apply the definitions for the final
results. Observe that only one method can be correctly applied in a given case! If
we can produce a consistent story according to which the premises are true and the
conclusion is false, then it is not the case that no consistent story makes the premises
true and the conclusion false. Similarly, if no consistent story makes the premises


true and the conclusion false, then we will not be able to produce a consistent story
that makes the premises true and the conclusion false.
In this case, the most difficult steps are (a) and (b), where we say what is the case
in every story that makes the premises true and the conclusion false. For an example,
consider the following argument.
(F)
Some collies can fly
All collies are dogs
All dogs can fly

It is invalid. We can easily tell a story that makes the premises true and the conclusion
false: say, one where Lassie is a collie who can fly, but otherwise things are as usual.
Suppose, however, that we proceed with the validity test as follows,
a. In any story with premises true and conclusion false,
(1) Some collies can fly
(2) All collies are dogs
(3) No dogs can fly
b. In any such story,
Given (1) and (2),
(4) Some dogs can fly
Given (3) and (4),
(5) Some dogs can and cannot fly
c. So no story with the premises true and conclusion false is a consistent story; so
by definition, the argument is logically valid.
d. Since in the real world no collies can fly, not all the premises are true. So,
though the argument is logically valid, by definition it is not logically sound.
The reasoning at (b), (c), and (d) is correct. Any story with (1)-(3) is inconsistent.
But something is wrong. (Can you see what?) There is a mistake at (a): It is not
the case that every story that makes the premises true and conclusion false makes (3)
true. The negation of 'All dogs can fly' is not 'No dogs can fly', but rather 'Not
all dogs can fly' ('Some dogs cannot fly'). All it takes to falsify the claim that all
dogs fly is one dog that does not. Thus, for example, all it takes to falsify the claim


Negation and Quantity


In general you want to be careful about negations. To negate any claim P it is
always correct to write simply, 'it is not the case that P'. You may choose to do
this for conclusions in the first step of our procedures. At some stage, however,
you will need to understand what the negation comes to. We have chosen to offer
interpreted versions in the text. It is easy enough to see that,
My mother is a car    and    My mother is not a car

negate one another. However, there are cases where caution is required. This is
particularly the case where quantity terms are involved.
In the first step of our procedures, we say what is the case in any story where the
premises are true and the conclusion is false. The negation of a claim states what
is required for falsity, and so meets this condition. If I say there are at least ten
apples in the basket, my claim is of course false if there are only three. But not
every story where my claim is false is one in which there are three apples. Rather,
my claim is false just in case there are less than ten. Any story in which there are
less than ten makes my claim false.
A related problem arises with other quantity terms. To bring this out, consider
grade examples: First, if a professor says, 'everyone will not get an A', she
says something disastrous. To deny it, all you need is one person to get an A. In
contrast, if she says, 'someone will not get an A' ('not everyone will get an A'),
she says only what you expect from the start. To deny it, you need that everyone
will get an A. Thus the following pairs negate one another.
Everybody will get an A    and    Somebody will not get an A

Somebody will get an A    and    Everybody will not get an A

A sort of rule is that pushing or pulling 'not' past 'all' or 'some' flips one to the
other. But it is difficult to make rules for arbitrary quantity terms. So it is best just
to think about what you are saying, perhaps with reference to examples like these.
Thus the following also are negations of one another.
Somebody will get an A    and    Nobody will get an A

Only jocks will get an A    and    Some non-jock will get an A

The first works because 'nobody will get an A' is just like 'everybody will not
get an A', so the first pair reduces to the parallel one above. In the second case,
everything turns on whether a non-jock gets an A: if none does, then only jocks
will get an A; if one or more do, then some non-jock does get an A.
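Pairs like these can also be checked mechanically. Here is a small finite-domain illustration (my own sketch, not part of the text): over every possible assignment of grades to three students, 'Everybody will get an A' and 'Somebody will not get an A' always take opposite truth values, which is exactly what it means for them to negate one another.

```python
from itertools import product

students = ["Ann", "Bob", "Cal"]

# In every possible assignment of grades, check that the two claims take
# opposite truth values -- i.e., that each is true exactly when the other is false.
for grades in product(["A", "B"], repeat=len(students)):
    everybody_a = all(g == "A" for g in grades)
    somebody_not_a = any(g != "A" for g in grades)
    assert everybody_a != somebody_not_a  # exactly one of the pair holds

print("the pair negate one another in all", 2 ** len(students), "cases")
```

The same loop with `any(g == "A" ...)` against `all(g != "A" ...)` confirms the second pair.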


that everyone will get an A is one person who does not (on this, see the extended
discussion in the 'Negation and Quantity' box above). We have indeed shown that every story of a certain sort is
inconsistent, but have not shown that every story which makes the premises true and
conclusion false is inconsistent. In fact, as we have seen, there are consistent stories
that make the premises true and conclusion false. Similarly, in step (b) it is easy to
get confused if you consider too much information at once. Ordinarily, if you focus
on sentences singly or in pairs, it will be clear what must be the case in every story
including those sentences. It does not matter which sentences you consider in what
order, so long as you reach a contradiction in the end.
So far, we have seen our procedures applied in contexts where it is given ahead of
time whether an argument is valid or invalid. And some exercises have been this way
too. But not all situations are so simple. In the ordinary case, it is not given whether
an argument is valid or invalid. In this case, there is no magic way to say ahead of
time which of our two tests, IT or VT, applies. The only thing to do is to try one way:
if it works, fine. If it does not, try the other. It is perhaps most natural to begin by
looking for stories to pry the premises off the conclusion. If you can find a consistent
story to make the premises true and conclusion false, the argument is invalid. If you
cannot find any such story, you may begin to suspect that the argument is valid. This
suspicion does not itself amount to a demonstration of validity! But you might try
to turn your suspicion into such a demonstration by attempting the validity method.
Again, if one procedure works, the other better not!
E1.4. Use our validity procedure to show that each of the following is logically
valid, and to decide (if you can) whether it is logically sound.
*a. If Bill is president, then Hillary is first lady
Hillary is not first lady
Bill is not president
b. Only fools find love
Elvis was no fool
Elvis did not find love
c. If there is a good and omnipotent god, then there is no evil
There is evil
There is no good and omnipotent god


d. All sparrows are birds
All birds fly
All sparrows fly
e. All citizens can vote
Hannah is a citizen
Hannah can vote
E1.5. Use our procedures to say whether the following are logically valid or invalid,
and sound or unsound. Hint: You may have to do some experimenting to
decide whether the arguments are logically valid or invalid and so to decide
which procedure applies.
a. If Bill is president, then Hillary is first lady
Bill is president
Hillary is first lady
b. Most professors are insane
TR is a professor
TR is insane
*c. Some dogs have red hair
Some dogs have long hair
Some dogs have long red hair
d. If you do not strike the match, then it does not light
The match lights
You strike the match
e. Shaq is taller than Kobe
Kobe is at least as tall as TR
Kobe is taller than TR

1.3 Some Consequences

We now know what logical validity and soundness are and should be able to identify
them in simple cases. Still, it is one thing to know what validity and soundness
are, and another to know how we can use them. So in this section I turn to some
consequences of the definitions.

1.3.1 Soundness and Truth

First, a consequence we want: The conclusion of every sound argument is true in the
real world. Observe that this is not part of what we require to show that an argument
is sound. LS requires just that an argument is valid and that its premises are true.
However, it is a consequence of these requirements that the conclusion is true as
well. To see this, suppose we have a sound two-premise argument, and think about
the nature of the true story. The premises and conclusion must fall into one of the
following combinations of true and false in the real world:
              1   2   3   4   5   6   7   8
premise 1     T   T   T   F   T   F   F   F
premise 2     T   T   F   T   F   T   F   F
conclusion    T   F   T   T   F   F   T   F

If the argument is logically sound, it is logically valid; so no consistent story makes
the premises true and the conclusion false. But the true story is a consistent story.
So we can be sure that the true story does not result in combination (2). So far,
the true story might fall into any of the other combinations. Thus the conclusion of
a valid argument may or may not be true in the real world. But if an argument is
sound, its premises are true in the real world. So, for a sound argument, we can be
sure that the premises do not fall into any of the combinations (3)-(8). (1) is the
only combination left: in the true story, the conclusion is true. And, in general, if an
argument is sound, its conclusion is true in the real world: If there is no consistent
story where the premises are true and the conclusion is false, and the premises are
in fact true, then the conclusion must be true as well. If the conclusion were false
in the real world then the real world would correspond to a story with premises true
and conclusion false, and the argument would not be valid after all. Note again:
we do not need that the conclusion is true in the real world in order to say that an
argument is sound, and saying that the conclusion is true is no part of our procedure
for validity or soundness! Rather, by discovering that an argument is logically valid
and that its premises are true, we establish that it is sound; this gives us the result that
its conclusion therefore is true. And that is just what we want.
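The elimination argument can also be run mechanically. The sketch below (mine, not the text's) enumerates the eight combinations for a two-premise argument and filters out those excluded first by validity and then by the requirement of true premises:

```python
from itertools import product

# Each row is (premise 1, premise 2, conclusion) as true/false in the real world.
rows = list(product([True, False], repeat=3))  # the eight combinations

# Validity excludes any row with both premises true and the conclusion false;
# soundness additionally requires that both premises are true.
survivors = [r for r in rows
             if not (r[0] and r[1] and not r[2])  # excluded by validity
             and (r[0] and r[1])]                 # required by true premises

print(survivors)  # [(True, True, True)]: the conclusion must be true
```

Only the combination with a true conclusion survives, which is the point of the section: soundness guarantees the truth of the conclusion.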

1.3.2 Validity and Form

Some of the arguments we have seen so far are of the same general form. Thus both
of the arguments on the left have the form on the right.

(G)  If Joe works hard, then      If Hannah is a citizen     If P then Q
     he will get an A             then she can vote          P
     Joe works hard               Hannah is a citizen        Q
     Joe will get an A            Hannah can vote

As it turns out, all arguments of this form are valid. In contrast, the following
arguments with the indicated form are not.

(H)  If Hannah can vote,          If Joe works hard then     If P then Q
     then she is a citizen        he will get an A           Q
     Hannah is a citizen          Joe will get an A          P
     Hannah can vote              Joe works hard
There are stories where, say, Joe cheats for the A, or Hannah is a citizen but not old
enough to vote. In these cases, there is some other way to obtain condition Q than by
having P; this is what the stories bring out. And, generally, it is often possible to
characterize arguments by their forms, where a form is valid iff every instance of it is
logically valid. Thus the first form listed above is valid, and the second not. In fact,
the logical machine to be developed in chapters to come takes advantage of certain
very general formal or structural features of arguments to demonstrate the validity of
arguments with those features.
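For sentential forms like these, validity can already be tested by brute force over truth values. The following sketch (an anticipation of later chapters, not the book's official machinery) treats 'If P then Q' as the material conditional and confirms that the first form is valid and the second is not:

```python
from itertools import product

def form_is_valid(premises, conclusion):
    """A form is valid iff no assignment of truth values makes every
    premise true and the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False
    return True

# "If P then Q" read as the material conditional: false only when P true, Q false.
if_p_then_q = lambda p, q: (not p) or q

# Form (G): If P then Q; P; therefore Q
print(form_is_valid([if_p_then_q, lambda p, q: p], lambda p, q: q))  # True
# Form (H): If P then Q; Q; therefore P
print(form_is_valid([if_p_then_q, lambda p, q: q], lambda p, q: p))  # False
```

For (H), the assignment P false, Q true is the counterexample row: both premises come out true and the conclusion false, matching the cheating-Joe stories above.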
For now, it is worth noting that some presentations of critical reasoning (which
you may or may not have encountered), take advantage of such patterns, listing typical ones that are valid, and typical ones that are not (for example, Cederblom and
Paulsen, Critical Reasoning). A student may then identify valid and invalid arguments insofar as they match the listed forms. This approach has the advantage of
simplicity, and one may go quickly to applications of the logical notions to concrete cases. But the approach is limited to application of listed forms, and so to a very
limited range, whereas our definition has application to arbitrary arguments. Further,
a mere listing of valid forms does not explain their relation to truth, whereas the
definition is directly connected. Similarly, our logical machine develops an account
of validity for arbitrary forms (within certain ranges). So we are pursuing a general
account or theory of validity that goes well beyond the mere lists of these other more


traditional approaches.³

1.3.3 Relevance

Another consequence seems less welcome. Consider the following argument.

(I)
Snow is white
Snow is not white
All dogs can fly

It is natural to think that the premises are not connected to the conclusion in the
right way (for the premises have nothing to do with the conclusion) and that this
argument therefore should not be logically valid. But if it is not valid, by definition,
there is a consistent story that makes the premises true and the conclusion false.
And, in this case, there is no such story, for no consistent story makes the premises
true. Thus, by definition, this argument is logically valid. The procedure applies in a
straightforward way. Thus,
a. In any story with premises true and conclusion false,
(1) Snow is white
(2) Snow is not white
(3) Some dogs cannot fly
b. In any such story,
Given (1) and (2),
(4) Snow is and is not white
c. So no story with the premises true and conclusion false is a consistent story; so
by definition, the argument is logically valid.
d. Since in the real world snow is white, not all the premises are true (the second
premise is false). So, though the argument is logically valid, by definition it is
not logically sound.
³Some authors introduce a notion of formal validity (maybe in the place of logical validity as
above) such that an argument is formally valid iff it has some valid form. As above, formal validity
is parasitic on logical validity, together with a to-be-specified notion of form. But if an argument is
formally valid, it is logically valid. So if our logical machine is adequate to identify formal validity, it
identifies logical validity as well.


This seems bad! Intuitively, there is something wrong with the argument. But,
on our official definition, it is logically valid. One might rest content with the observation that, even though the argument is logically valid, it is not logically sound. But
this does not remove the general worry. For this argument,
(J)
There are fish in the sea
1 + 1 = 2
has all the problems of the other and is logically sound as well. (Why?) One might,
on the basis of examples of this sort, decide to reject the (classical) account of validity
with which we have been working. Some do just this.⁴ But, for now, let us see what
can be said in defense of the classical approach. (And the classical approach is,
no doubt, the approach you have seen or will see in any standard course on critical
thinking or logic.)
As a first line of defense, one might observe that the conclusion of every sound argument is true and ask, 'What more do you want?' We use arguments to demonstrate
the truth of conclusions. And nothing we have said suggests that sound arguments
do not have true conclusions: An argument whose premises are inconsistent is sure
to be unsound. And an argument whose conclusion cannot be false is sure to have a
true conclusion. So soundness may seem sufficient for our purposes. Even though we
accept that there remains something about argument goodness that soundness leaves
behind, we can insist that soundness is useful as an intellectual tool. Whenever it is
the truth or falsity of a conclusion that matters, we can profitably employ the classical
notions.
But one might go further, and dispute even the suggestion that there is something
about argument goodness that soundness leaves behind. Consider the following two
argument forms.
(ds)   P or Q, not-P          (add)   P
       Q                              P or Q

According to ds (disjunctive syllogism), if you are given that P or Q and that
not-P, you can conclude that Q. If you have cake or ice cream, and you do not have
cake, you have ice cream; if you are in California or New York, and you are not in
California, you are in New York; and so forth. Thus ds seems hard to deny. And
⁴Especially the so-called relevance logicians. For an introduction, see Graham Priest, Non-Classical Logics. But his text presumes mastery of material corresponding to Part I and Part II (or at
least Part I with chapter 7) of this one. So the non-classical approaches develop or build on the classical
one developed here.


similarly for add (addition). Where 'or' means one or the other or both, when you
are given that P, you can be sure that P or anything. Say you have cake, then you
have cake or ice cream, cake or brussels sprouts, and so forth; if grass is green, then
grass is green or pigs have wings, grass is green or dogs fly, and so forth.
Return now to our problematic argument. As we have seen, it is valid according
to the classical definition LV. We get a similar result when we apply the ds and add
principles.
1. Snow is white                        premise
2. Snow is not white                    premise
3. Snow is white or all dogs can fly    from 1 and add
4. All dogs can fly                     from 2 and 3 and ds

If snow is white, then snow is white or anything. So snow is white or dogs fly. So
we use line 1 with add to get line 3. But if snow is white or dogs fly, and snow is
not white, then dogs fly. So we use lines 2 and 3 with ds to reach the final result. So
our principles ds and add go hand-in-hand with the classical definition of validity.
The argument is valid on the classical account; and with these principles, we can
move from the premises to the conclusion. If we want to reject the validity of this
argument, we will have to reject not only the classical notion of validity, but also one
of our principles ds or add. And it is not obvious that one of the principles should
go. If we decide to retain both ds and add then, seemingly, the classical definition
of validity should stay as well. If we have intuitions according to which ds and add
should stay, and also that the definition of validity should go, we have conflicting
intuitions. Thus our intuitions might, at least, be sensibly resolved in the classical
direction.
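These claims can be checked by brute force over truth values. In the sketch below (my own illustration, not the text's procedure), an argument form counts as valid when no assignment makes all its premises true and its conclusion false; ds and add both pass, and so does the explosive form behind argument (I), vacuously, since no assignment makes its premises true together:

```python
from itertools import product

def valid(premises, conclusion):
    # Valid iff no truth-value assignment makes every premise true
    # and the conclusion false.
    return not any(all(f(p, q) for f in premises) and not conclusion(p, q)
                   for p, q in product([True, False], repeat=2))

# ds: P or Q, not-P; therefore Q
print(valid([lambda p, q: p or q, lambda p, q: not p], lambda p, q: q))  # True
# add: P; therefore P or Q
print(valid([lambda p, q: p], lambda p, q: p or q))                      # True
# (I)-style explosion: P, not-P; therefore Q -- valid, vacuously
print(valid([lambda p, q: p, lambda p, q: not p], lambda p, q: q))       # True
```

The third check makes the tension vivid: accepting the first two checks while rejecting the third requires rejecting either the definition of validity or one of ds and add.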
These issues are complex, and a subject for further discussion. For now, it is
enough for us to treat the classical approach as a useful tool: It is useful in contexts
where what we care about is whether conclusions are true. And alternate approaches
to validity typically develop or modify the classical approach. So it is natural to begin
where we are, with the classical account. At any rate, this discussion constitutes a
sort of acid test: If you understand the validity of the 'snow is white' and 'fish in the
sea' arguments (I) and (J), you are doing well: you understand how the definition
of validity works, with its results that may or may not now seem controversial. If you
do not see what is going on in those cases, then you have not yet understood how the
definitions work and should return to section 1.2 with these cases in mind.
E1.6. Use our procedures to say whether the following are logically valid or invalid,
and sound or unsound. Hint: You may have to do some experimenting to


decide whether the arguments are logically valid or invalid and so to decide
which procedure applies.
a. Bob is over six feet tall
Bob is under six feet tall
Bob is disfigured
b. Marilyn is not over six feet tall
Marilyn is not under six feet tall
Marilyn is beautiful
*c. The earth is (approximately) round
There is no round square
d. There are fish in the sea
There are birds in the sky
There are bats in the belfry
Two dogs are more than one
e. All dogs can fly
Fido is a dog
Fido cannot fly
I am blessed
E1.7. Respond to each of the following.
a. Create another argument of the same form as the first set of examples (G)
from section 1.3.2, and then use our regular procedures to decide whether it
is logically valid and sound. Is the result what you expect? Explain.
b. Create another argument of the same form as the second set of examples (H)
from section 1.3.2, and then use our regular procedures to decide whether it
is logically valid and sound. Is the result what you expect? Explain.
E1.8. Which of the following are true, and which are false? In each case, explain
your answers, with reference to the relevant definitions. The first is worked
as an example.


a. A logically valid argument is always logically sound.


False. An argument is sound iff it is logically valid and all of its premises are
true in the real world. Thus an argument might be valid but fail to be sound
if one or more of its premises is false in the real world.
b. A logically sound argument is always logically valid.
*c. If the conclusion of an argument is true in the real world, then the argument
must be logically valid.
d. If the premises and conclusion of an argument are true in the real world, then
the argument must be logically sound.
*e. If a premise of an argument is false in the real world, then the argument cannot
be logically valid.
f. If an argument is logically valid, then its conclusion is true in the real world.
*g. If an argument is logically sound, then its conclusion is true in the real world.
h. If an argument has contradictory premises (its premises are true in no consistent story), then it cannot be logically valid.
*i. If the conclusion of an argument cannot be false (is false in no consistent
story), then the argument is logically valid.
j. The premises of every logically valid argument are relevant to its conclusion.
E1.9. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples of your own construction (iii) where the concept
applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.
a. Logical validity
b. Logical soundness
E1.10. Do you think we should accept the classical account of validity? In an essay
of about two pages, explain your position, with special reference to difficulties raised in section 1.3.3.

Chapter 2

Formal Languages
In the picture of symbolic logic from p. 2, we suggested that symbolic logic is
introduced as a machine or tool to identify validity and soundness. This machine
begins with formal representations of ordinary reasonings.
There are different ways to introduce a formal language. It is natural to introduce
expressions of a new language in relation to expressions of one that is already familiar. Thus, a standard course in a foreign language is likely to present vocabulary lists
of the sort,
chou:   cabbage
petit:  small
  ⋮

But such lists do not define the terms of one language relative to another. It is not a
legitimate criticism of a Frenchman who refers to his sweetheart as mon petit chou to
observe that she is no cabbage. Rather, French has conventions such that sometimes
chou corresponds to cabbage and sometimes it does not. It is possible to use such
correlations to introduce conventions of a new language. But it is also possible to
introduce a language as itself, the way a native speaker learns it. In this case,
one avoids the danger of importing conventions and patterns from one language onto
the other. Similarly, the expressions of a formal language might be introduced in
correlation with expressions of, say, English. But this runs the risk of obscuring just
what the official definitions accomplish. Since we will be concerned extensively with
what follows from the definitions, it is best to introduce our languages in their pure
forms.
In this chapter, we develop the grammar of our formal languages. As a computer
can check the spelling and grammar of English without reference to meaning, so we
can introduce the vocabulary and grammar of our formal languages without reference
to what their expressions mean or what makes them true. We will give some hints for
the way formal expressions match up with ordinary language. But do not take these
as defining the formal language. The formal language has definitions of its own. And
the grammar, taken alone, is completely straightforward. Taken this way, we work
directly from the definitions, without pollution from associations with English or
whatever.

2.1 Sentential Languages

Let us begin with some of those hints, at least to suggest the way things will work. Consider some simple sentences of an ordinary language, say, Bill is happy and Hillary is happy. It will be convenient to use capital letters to abbreviate these, say, B and H. Such sentences may combine to form ones that are more complex as, It is not the case that Bill is happy or If Bill is happy, then Hillary is happy. We shall find it convenient to express these, ∼Bill is happy and Bill is happy → Hillary is happy, with operators ∼ and →. Putting these together we get, ∼B and B → H. Operators may be combined in obvious ways so that B → ∼H says that if Bill is happy, then Hillary is not. And so forth. We shall see that incredibly complex expressions of this sort are possible!

In the above case, simple sentences, Bill is happy and Hillary is happy, are atoms and complex sentences are built out of them. This is characteristic of the sentential languages to be considered in this section. For the quantificational languages of section 2.2, certain sentence parts are taken as atoms. So quantificational languages expose structure beyond that considered here. However, this should be enough to give you a glimpse of the overall strategy and aims for the sentential languages of which we are about to introduce the grammar.
Specification of the grammar for a formal language breaks into specification of
the vocabulary or symbols of the language, and specification of those expressions
which count as grammatical sentences. After introducing the vocabulary, and then
the grammar for our languages, we conclude with some discussion of abbreviations
for official expressions.

2.1.1 Vocabulary

The specification of a formal language begins with specification of its vocabulary. In the sentential case, this includes,

VC  (p) Punctuation symbols: ( )

    (o) Operator symbols: ∼ →

    (s) A non-empty countable collection of sentence letters

And that is all. ∼ is tilde and → is arrow. Sometimes sentential languages include operators in addition to ∼ and → (for example, ∨, ∧, ↔).¹ Such symbols will be introduced in due time, but as abbreviations for complex official expressions. A stripped-down vocabulary is sufficient to accomplish what can be done with expanded ones. And when we turn to reasoning about the language and logic, it will be convenient to have simple specifications, with a stripped-down vocabulary.
Some definitions have both a sentential and then an extended quantificational version. In this case, I adopt the convention of naming the initial sentential version in small caps. Thus the definition above is the sentential VC (in small caps), and the parallel definition of the next section is the quantificational VC.
In order to fully specify the vocabulary of any particular sentential language, we need to specify its sentence letters; so far as definition VC goes, different languages may differ in their collections of sentence letters. The only constraint on such specifications is that the collections of sentence letters be non-empty and countable. A collection is non-empty iff it has at least one member. So any sentential language has at least one sentence letter. A collection is countable iff its members can be correlated one-to-one with some or all of the integers. Thus, for some language, we might let the sentence letters be A, B … Z, where these correlate with the integers 1 … 26. Or we might let there be infinitely many sentence letters, S0, S1, S2 …
Let us introduce a standard language Ls whose sentence letters are Roman italics A … Z with or without integer subscripts. Thus,

    A    C    L2    R3    Z25

are all sentence letters of Ls. We will not use the subscripts very often. But they guarantee that we never run out of sentence letters! Official sentences of Ls are built out of this vocabulary.
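Mechanically, recognizing the sentence letters of Ls is simple pattern matching. The following sketch is my own illustration, not part of the official definition; it uses plain ASCII capitals for the Roman italics and a trailing string of digits for subscripts:

```python
import re

# A sentence letter of Ls: a capital A-Z, with or without an integer subscript.
# So "A", "R3", and "Z25" qualify; "AB" and "x1" do not.
SENTENCE_LETTER = re.compile(r"[A-Z][0-9]*")

def is_sentence_letter(s: str) -> bool:
    """True just in case the string s renders a sentence letter of Ls."""
    return SENTENCE_LETTER.fullmatch(s) is not None
```

Since every such string is finite and drawn from a fixed alphabet, a collection specified this way automatically satisfies the countability constraint.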
To proceed, we need some conventions for talking about expressions of a language like Ls. For any formal object language L, an expression is a sequence of one or more elements of its vocabulary. The sentences of any language L are a subset of its expressions. Thus, already, it is clear that (A ? B) is not an official sentence of

¹And sometimes sentential languages are introduced with different symbols, for example, ⊃ for →, ¬ for ∼, or & for ∧. It should be easy to convert between presentations of the different sorts.


Countability

To see the full range of languages which are allowed under VC, observe how multiple infinite series of sentence letters may satisfy the countability constraint. Thus, for example, suppose we have two series of sentence letters, A0, A1 … and B0, B1 … These can be correlated with the integers as follows,

    A0  B0  A1  B1  A2  B2
    |   |   |   |   |   |   ...
    0   1   2   3   4   5

For any integer n, An is matched with 2n, and Bn with 2n + 1. So each sentence letter is matched with some integer; so the sentence letters are countable. If there are three series, they may be correlated,

    A0  B0  C0  A1  B1  C1
    |   |   |   |   |   |   ...
    0   1   2   3   4   5

so that every sentence letter is matched to some integer. And similarly for any finite number of series. And there might be 26 such series, as for our language Ls.

In fact even this is not the most general case. If there are infinitely many series of sentence letters, we can still line them up and correlate them with the integers. Here is one way to proceed. Order the letters in a grid, and traverse it along the diagonals as the arrows indicate,

    A0 → A1    A2 → A3  ...
        ↙     ↗     ↙
    B0    B1    B2    B3  ...
    ↓   ↗     ↙
    C0    C1    C2    C3  ...
        ↙
    D0    D1    D2    D3  ...
     :

And following the arrows, match them accordingly with the integers,

    A0  A1  B0  C0  B1  A2
    |   |   |   |   |   |   ...
    0   1   2   3   4   5

so that, again, any sentence letter is matched with some integer. It may seem odd that we can line symbols up like this, but it is hard to dispute that we have done so. Thus we may say that VC is compatible with a wide variety of specifications, but also that all legitimate specifications have something in common: If a collection is countable, it is possible to sort its members into a series with a first member, a second member, and so forth.
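The correlations pictured above can be written as explicit position functions. This is an illustration of mine rather than anything VC requires; `series` is numbered 0 for the A's, 1 for the B's, and so on, and the zigzag function follows the diagonals exactly as drawn:

```python
def position_two(series: int, n: int) -> int:
    """Position of the n-th member when there are just two series:
    An goes to 2n, Bn goes to 2n + 1."""
    return 2 * n + series

def position_zigzag(series: int, n: int) -> int:
    """Position when there are infinitely many series, following the
    diagonals: the entries with series + n = d come after the d*(d+1)/2
    entries on earlier diagonals, with direction alternating by diagonal."""
    d = series + n
    offset = series if d % 2 == 1 else n
    return d * (d + 1) // 2 + offset
```

Because distinct letters receive distinct positions, either function exhibits the one-to-one correlation with the integers that countability demands.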


Ls. (Why?) We shall use script characters A … Z to represent expressions. Insofar as these script characters are symbols for symbols, they are metasymbols and so part of a metalanguage. ∼, →, (, and ) represent themselves. Concatenated or joined symbols in the metalanguage represent the concatenation of the symbols they represent. Thus, where S represents an arbitrary sentence letter, ∼S may represent any of, ∼A, ∼B, or ∼Z24. But ∼(A → B) is not of that form, for it does not consist of a tilde followed by a sentence letter. However, where P is allowed to represent any arbitrary expression, ∼(A → B) is of the form ∼P, for it consists of a tilde followed by an expression of some sort.

It is convenient to think of metalinguistic expressions as mapping onto object-language ones. Thus, with S restricted to sentence letters, there is a straightforward map from ∼S onto ∼A, ∼B, or ∼Z24, but not from ∼S onto ∼(A → B).
(A)
    ∼ S      ∼ S      ∼ S        ∼  S
    | |      | |      | |        |  ?
    ∼ A      ∼ B      ∼ Z24      ∼ (A → B)

In the first three cases, ∼ maps to itself, and S to a sentence letter. In the last case there is no map. We might try mapping S to A or B; but this would leave the rest of the expression unmatched. An object-language expression has some metalinguistic form just when there is a complete map from the metalinguistic form to it.
Say P may represent any arbitrary expression. Then by similar reasoning, ∼(A → B) → ∼(A → B) is of the form ∼P → ∼P.

(B)
        ∼P     →     ∼P
        |      |      |
    ∼(A → B) → ∼(A → B)

In this case, ∼P maps to all of ∼(A → B) and → to itself. A constraint on our maps is that the use of the metavariables A … Z must be consistent within a given map. Thus ∼(A → B) → ∼(B → B) is not of the form ∼P → ∼P.

(C)
        ∼P     →     ∼P                 ∼P     →     ∼P
        |      |                               |      |
    ∼(A → B) → ∼(B → B)      or     ∼(A → B) → ∼(B → B)

We are free to associate ∼P with whatever we want. However, within a given map, once ∼P is associated with some expression, we have to use it consistently within that map.
Observe again that ∼S and ∼P → ∼P are not expressions of Ls. Rather, we use them to talk about expressions of Ls. And it is important to see how we can use the metalanguage to make claims about a range of expressions all at once. Given that ∼A, ∼B and ∼Z24 are all of the form ∼S, when we make some claim about expressions of the form ∼S, we say something about each of them, but not about ∼(A → B). Similarly, if we make some claim about expressions of the form ∼P → ∼P, we say something with application to ranges of expressions. In the next section, for the specification of formulas, we use the metalanguage in just this way.
E2.1. Assuming that S may represent any sentence letter, and P any arbitrary expression of Ls, use maps to determine whether each of the following expressions is (i) of the form (∼S → ∼P) and then (ii) whether it is of the form (∼P → ∼P). In each case, explain your answers.

a. (∼A → ∼A)

b. (∼A → ∼(R → Z))

c. (∼∼A → ∼(R → Z))

d. (∼(R → Z) → ∼(R → Z))

*e. ((∼ → ∼) → (∼ → ∼))
E2.2. On the pattern of examples from the countability guide on p. 32, show that the sentence letters of Ls are countable, that is, that they can be correlated with the integers. On the scheme you produce, what integers correlate with A, B1 and C10? Hint: Supposing that A without subscript is like A0, for any integer n, you should be able to produce a formula for the position of any An, and similarly for Bn, Cn and the like. Then it will be easy to find the position of any letter, even if the question is about, say, L125.

2.1.2 Formulas

We are now in a position to say which expressions of a sentential language are its grammatical formulas and sentences. The specification itself is easy. We will spend a bit more time explaining how it works. For a given sentential language L,

FR  (s) If S is a sentence letter, then S is a formula.

    (∼) If P is a formula, then ∼P is a formula.

    (→) If P and Q are formulas, then (P → Q) is a formula.

    (CL) Any formula may be formed by repeated application of these rules.


In the quantificational case, we will distinguish a class of expressions that are formulas from those that are sentences. But, here, we simply identify the two: an expression is a sentence iff it is a formula.

FR is a first example of a recursive definition. Such definitions always build from the parts to the whole. Frequently we can use tree diagrams to see how they work. Thus, for example, by repeated applications of the definition, ∼(A → (∼B → A)) is a formula and sentence of Ls.
(D)
    A            B            A         These are formulas by FR(s)
                 |
                ∼B                      Since B is a formula, this is a formula by FR(∼)
                   \         /
                 (∼B → A)               Since ∼B and A are formulas, this is a formula by FR(→)
       \          /
    (A → (∼B → A))                      Since A and (∼B → A) are formulas, this is a formula by FR(→)
          |
    ∼(A → (∼B → A))                     Since (A → (∼B → A)) is a formula, this is a formula by FR(∼)
By FR(s), the sentence letters, A, B and A are formulas; given this, clauses FR(∼) and FR(→) let us conclude that other, more complex, expressions are formulas as well. Notice that, in the definition, P and Q may be any expressions that are formulas: By FR(∼), if B is a formula, then tilde followed by it is a formula; but similarly, if ∼B and A are formulas, then an opening parenthesis followed by ∼B, followed by → followed by A and then a closing parenthesis is a formula; and so forth as on the tree above. You should follow through each step very carefully. In contrast, (A∼B) for example, is not a formula. A is a formula and ∼B is a formula; but there is no way to put them together, by the definition, without → in between.

A recursive definition always involves some basic starting elements, in this case, sentence letters. These occur across the top row of our tree. Other elements are constructed, by the definition, out of ones that come before. The last, closure, clause tells us that any formula is built this way. To demonstrate that an expression is a formula and a sentence, it is sufficient to construct it, according to the definition, on a tree. If an expression is not a formula, there will be no way to construct it according to the rules.
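The closure clause also yields a mechanical test: an expression is a formula just in case it decomposes, in reverse, by the clauses of FR. Here is a sketch of such a test of my own construction, not part of the official definition; it writes ASCII '~' for the tilde and the single character '→' for the arrow, on unspaced strings:

```python
import re

def is_formula(s: str) -> bool:
    """True iff s is a formula of Ls by FR, written with '~' and '→'."""
    if re.fullmatch(r"[A-Z][0-9]*", s):        # FR(s): a sentence letter
        return True
    if s.startswith("~"):                       # FR(~): ~P for some formula P
        return is_formula(s[1:])
    if s.startswith("(") and s.endswith(")"):   # FR(→): (P → Q)
        depth = 0
        for k, c in enumerate(s):
            if c == "(":
                depth += 1
            elif c == ")":
                depth -= 1
            elif c == "→" and depth == 1:       # the main arrow
                return is_formula(s[1:k]) and is_formula(s[k + 1:-1])
    return False                                # closure: no clause applies
```

The recursion retraces a tree from the bottom up: each call strips off the last operator that FR would have added.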
Here are a couple of last examples which emphasize the point that you must
maintain and respect parentheses in the way you construct a formula. Thus consider,

(E)
    A            B           These are formulas by FR(s)
      \         /
      (A → B)                Since A and B are formulas, this is a formula by FR(→)
         |
     ∼(A → B)                Since (A → B) is a formula, this is a formula by FR(∼)

And compare it with,

(F)
    A            B           These are formulas by FR(s)
    |
    ∼A                       Since A is a formula, this is a formula by FR(∼)
      \         /
    (∼A → B)                 Since ∼A and B are formulas, this is a formula by FR(→)

Once you have (A → B) as in the first case, the only way to apply FR(∼) puts the tilde on the outside. To get the tilde inside the parentheses, by the rules, it has to go on first, as in the second case. The significance of this point emerges immediately below.
It will be helpful to have some additional definitions, each of which may be introduced in relation to the trees. First, for any formula P, each formula which appears in the tree for P, including P itself, is a subformula of P. Thus ∼(A → B) has subformulas,

    A    B    (A → B)    ∼(A → B)

In contrast, (∼A → B) has subformulas,

    A    B    ∼A    (∼A → B)

So it matters for the subformulas how the tree is built. The immediate subformulas of a formula P are the subformulas to which P is directly connected by lines. Thus ∼(A → B) has one immediate subformula, (A → B); (∼A → B) has two, ∼A and B. The atomic subformulas of a formula P are the sentence letters that appear across the top row of its tree. Thus both ∼(A → B) and (∼A → B) have A and B as their atomic subformulas. Finally, the main operator of a formula P is the last operator added in its tree. Thus ∼ is the main operator of ∼(A → B), and → is the main operator of (∼A → B). So, again, it matters how the tree is built. We sometimes speak of a formula by means of its main operator: A formula of the form ∼P is a

negation; a formula of the form (P → Q) is a (material) conditional, where P is the antecedent of the conditional and Q is the consequent.

Parts of a Formula

The parts of a formula are here defined in relation to its tree.

SB  Each formula which appears in the tree for formula P, including P itself, is a subformula of P.

IS  The immediate subformulas of a formula P are the subformulas to which P is directly connected by lines.

AS  The atomic subformulas of a formula P are the sentence letters that appear across the top row of its tree.

MO  The main operator of a formula P is the last operator added in its tree.

E2.3. For each of the following expressions, demonstrate that it is a formula and a sentence of Ls with a tree. Then on the tree (i) bracket all the subformulas, (ii) box the immediate subformula(s), (iii) star the atomic subformulas, and (iv) circle the main operator. A first case for ((∼A → B) → ∼A) is worked as an example.

    A*           B*           A*        These are formulas by FR(s)
    |                         |
    ∼A                        ∼A        From A, formula by FR(∼)
      \         /
    (∼A → B)                            From ∼A and B, formula by FR(→)
            \                /
    ((∼A → B) → ∼A)                     From (∼A → B) and ∼A, formula by FR(→)

Every formula in the tree is a subformula of ((∼A → B) → ∼A); the starred sentence letters are its atomic subformulas; its immediate subformulas, to be boxed, are (∼A → B) and ∼A; and its main operator, to be circled, is the final →.

*a. A

b. ∼∼∼A

c. ∼(A → ∼B)

d. (∼C → ∼(A → ∼B))

e. (∼(A → B) → (∼C → ∼A))
E2.4. Explain why the following expressions are not formulas or sentences of Ls. Hint: you may find that an attempted tree will help you see what is wrong.

a. (A ∼ B)

*b. (P → Q)

c. (∼B)

d. (A → B → C)

e. ((A → B) → (A → C) → D)
E2.5. For each of the following expressions, determine whether it is a formula and sentence of Ls. If it is, show it on a tree, and exhibit its parts as in E2.3. If it is not, explain why as in E2.4.

*a. ((A → B) → ((A → B) → A))

b. (A → B → ((A → B) → A))

*c. (A → B) → ((A → B) → A)

d. (A → A)

e. (((A → B) → (C → D)) → ((E → F) → G))

2.1.3 Abbreviations

We have completed the official grammar for our sentential languages. So far, the languages are relatively simple. For the purposes of later parts, when we turn to reasoning about logic, it will be good to have languages of this sort. However, for applications of logic, it will be advantageous to have additional expressions which, though redundant with expressions of the language already introduced, simplify the work. I begin by introducing these additional expressions, and then turn to the question how to understand the redundancy.


Abbreviating. As may already be obvious, formulas of a sentential language like Ls can get complicated quickly. Abbreviated forms give us ways to manipulate official expressions without undue pain. First, for any formulas P and Q,

AB  (∨) (P ∨ Q) abbreviates (∼P → Q)

    (∧) (P ∧ Q) abbreviates ∼(P → ∼Q)

    (↔) (P ↔ Q) abbreviates ∼((P → Q) → ∼(Q → P))

The last of these is easier than it looks. Observe that it can be thought of as based on a simple abbreviation of the sort we expect. That is, ((P → Q) ∧ (Q → P)) is of the sort (A ∧ B); so by AB(∧), it abbreviates ∼(A → ∼B); but with (P → Q) for A and (Q → P) for B, this is just, ∼((P → Q) → ∼(Q → P)) as in AB(↔). So you may think of (P ↔ Q) as an abbreviation of ((P → Q) ∧ (Q → P)), which in turn abbreviates the more complex ∼((P → Q) → ∼(Q → P)).
∨ is wedge, ∧ is caret, and ↔ is double arrow. An expression of the form (P ∨ Q) is a disjunction with P and Q as disjuncts; it has the standard reading, (P or Q). An expression of the form (P ∧ Q) is a conjunction with P and Q as conjuncts; it has the standard reading, (P and Q). An expression of the form (P ↔ Q) is a (material) biconditional; it has the standard reading, (P iff Q).² Again, we do not use ordinary English to define our symbols. All the same, this should suggest how the extra operators extend the range of what we are able to say in a natural way.
With the abbreviations, we are in a position to introduce derived clauses for FR. Suppose P and Q are formulas; then by FR(∼), ∼P is a formula; so by FR(→), (∼P → Q) is a formula; but this is just to say that (P ∨ Q) is a formula. And similarly in the other cases. (If you are confused by such reasoning, work it out on a tree.) Thus we arrive at the following conditions.

FR′ (∨) If P and Q are formulas, then (P ∨ Q) is a formula.

    (∧) If P and Q are formulas, then (P ∧ Q) is a formula.

    (↔) If P and Q are formulas, then (P ↔ Q) is a formula.

Once FR is extended in this way, the additional conditions may be applied directly in trees. Thus, for example, if P is a formula and Q is a formula, we can safely move in a tree to the conclusion that (P ∨ Q) is a formula by FR′(∨). Similarly, for a more complex case, ((A ↔ B) ∧ (∼A ∨ B)) is a formula.
²Common alternatives are & for ∧, and ≡ for ↔.

(G)
    A         B         A         B        These are formulas by FR(s)
     \       /          |
    (A ↔ B)            ∼A                  These are formulas by FR′(↔) and FR(∼)
        \                 \       /
         \             (∼A ∨ B)            This is a formula by FR′(∨)
          \            /
    ((A ↔ B) ∧ (∼A ∨ B))                   This is a formula by FR′(∧)

In a derived sense, expressions with the new symbols have subformulas, atomic subformulas, immediate subformulas, and main operator all as before. Thus, with notation from exercises, with star for atomic formulas, box for immediate subformulas and circle for main operator, on the diagram immediately above,

(H)
    A*        B*        A*        B*       These are formulas by FR(s)
     \       /          |
    (A ↔ B)            ∼A                  These are formulas by FR′(↔) and FR(∼)
        \                 \       /
         \             (∼A ∨ B)            This is a formula by FR′(∨)
          \            /
    ((A ↔ B) ∧ (∼A ∨ B))                   This is a formula by FR′(∧)

In the derived sense, ((A ↔ B) ∧ (∼A ∨ B)) has immediate subformulas (A ↔ B) and (∼A ∨ B), and main operator ∧.
A couple of additional abbreviations concern parentheses. First, it is sometimes convenient to use a pair of square brackets [ ] in place of parentheses ( ). This is purely for visual convenience; for example ((()())) may be more difficult to absorb than ([()()]). Second, if the very last step of a tree for some formula P is justified by FR(→), FR′(∧), FR′(∨), or FR′(↔), we feel free to abbreviate P with the outermost set of parentheses or brackets dropped. Again, this is purely for visual convenience. Thus, for example, we might write, A → (B → C) in place of (A → (B → C)). As it turns out, where A, B, and C are formulas, there is a difference between ((A → B) → C) and (A → (B → C)), insofar as the main operator shifts from one case to the other. In (A → B → C), however, it is not clear which arrow should be the main operator. That is why we do not count the latter as a grammatical formula or sentence. Similarly there is a difference between ∼(A → B) and (∼A → B); again, the main operator shifts. However, there is no room for ambiguity when we drop just an outermost pair of parentheses and write (A → B) → C for ((A → B) → C); and similarly when we write A → (B → C) for (A → (B → C)). And similarly for abbreviations with ∧, ∨, or ↔. So dropping outermost parentheses counts as a legitimate abbreviation.

An expression which uses the extra operators, square brackets, or drops outermost parentheses is a formula just insofar as it is a sort of shorthand for an official formula which does not. But we will not usually distinguish between the shorthand expressions and official formulas. Thus, again, the new conditions may be applied directly in trees and, for example, the following is a legitimate tree to demonstrate that A ∨ ([A → B] ∧ B) is a formula.
(I)
    A         A         B         B        Formulas by FR(s)
               \       /          |
              [A → B]             |        Formula by FR(→), with [ ]
                    \            /
              ([A → B] ∧ B)                Formula by FR′(∧)
      \            /
    A ∨ ([A → B] ∧ B)                      Formula by FR′(∨), with outer ( ) dropped

So we use our extra conditions for FR′, introduce square brackets instead of parentheses, and drop parentheses in the very last step. Remember that the only case where you can omit parentheses is if they would have been added in the very last step of the tree. So long as we do not distinguish between shorthand expressions and official formulas, we regard a tree of this sort as sufficient to demonstrate that an expression is a formula and a sentence.
Unabbreviating. As we have suggested, there is a certain tension between the advantages of a simple language, and one that is more complex. When a language is simple, it is easier to reason about; when it has additional resources, it is easier to use. Expressions with ∧, ∨ and ↔ are redundant with expressions that do not have them, though it is easier to work with a language that has ∧, ∨ and ↔ than with one that does not (something like reciting the Pledge of Allegiance in English, and then in Morse code; you can do it in either, but it is easier in the former). If all we wanted was a simple language to reason about, we would forget about the extra operators. If all we wanted was a language easy to use, we would forget about keeping the language simple. To have the advantages of both, we have adopted the position that expressions with the extra operators abbreviate, or are a shorthand for, expressions of the original language. It will be convenient to work with abbreviations in many contexts. But, when it comes to reasoning about the language, we set the abbreviations to the side, and focus on the official language itself.

For this to work, we have to be able to undo abbreviations when required. It is, of course, easy enough to substitute parentheses back for square brackets, or to replace outermost dropped parentheses. For formulas with the extra operators, it is always possible to work through trees, using AB to replace formulas with unabbreviated forms, one operator at a time. Consider an example.
(J)
    A         B         A         B
     \       /          |
    (A ↔ B)            ∼A
        \                 \       /
         \             (∼A ∨ B)
          \            /
    ((A ↔ B) ∧ (∼A ∨ B))

and, with each expression unpacked,

    A         B         A         B
     \       /          |
    ∼((A → B) → ∼(B → A))          ∼A
        \                            \       /
         \                       (∼∼A → B)
          \                      /
    ∼(∼((A → B) → ∼(B → A)) → ∼(∼∼A → B))
The first tree is (G) from above. The second simply includes unpacked versions of the expressions in the first. Atomics remain as before. Then, at each stage, given an unabbreviated version of the parts, we give an unabbreviated version of the whole. First, (A ↔ B) abbreviates ∼((A → B) → ∼(B → A)); this is a simple application of AB(↔). ∼A is not an abbreviation and so remains as before. From AB(∨), (P ∨ Q) abbreviates (∼P → Q); so (∼A ∨ B) abbreviates tilde the left disjunct, arrow the right (so that we get two tildes). For the final result, we combine the input formulas according to the unabbreviated form for ∧. It is more a bookkeeping problem than anything: There is one formula P that is (A ↔ B), another Q that is (∼A ∨ B); these are combined into (P ∧ Q) and so, by AB(∧), into ∼(P → ∼Q). You should be able to see that this is just what we have done. There is a tilde and a parenthesis; then the P; then an arrow and a tilde; then the Q, and a closing parenthesis. Not only is the abbreviation more compact but, as we shall see, there is a corresponding advantage when it comes to grasping what an expression says.

Here is another example, this time from (I). In this case, we replace also square brackets and restore dropped outer parentheses.

(K)
    A         A         B         B
               \       /          |
              [A → B]             |
                    \            /
              ([A → B] ∧ B)
      \            /
    A ∨ ([A → B] ∧ B)

and, unabbreviated,

    A         A         B         B
               \       /          |
              (A → B)             |
                    \            /
              ∼((A → B) → ∼B)
      \            /
    (∼A → ∼((A → B) → ∼B))
In the second tree, we reintroduce parentheses for the square brackets. Similarly, we apply AB(∧) and AB(∨) to unpack shorthand symbols. And outer parentheses are reintroduced at the very last step. Thus A ∨ ([A → B] ∧ B) is a shorthand for the unabbreviated expression, (∼A → ∼((A → B) → ∼B)).

Observe that unabbreviating trees are not ones of the sort you would use directly to show that an expression is a formula by FR! FR does not let you move directly from that (A → B) is a formula and B is a formula, to the result that ∼((A → B) → ∼B) is a formula as just above. Of course, if (A → B) and B are formulas, then ∼((A → B) → ∼B) is a formula, and nothing stops a tree to show it. This is the point of our derived clauses for FR′. In fact, this is a good check on your unabbreviations: If the result is not a formula, you have made a mistake! But you should not think of such trees as involving application of FR. Rather they are unabbreviating trees, with application of AB to shorthand expressions. A fully unabbreviated expression always meets all the requirements from section 2.1.2.
E2.6. For each of the following expressions, demonstrate that it is a formula and a sentence of Ls with a tree. Then on the tree (i) bracket all the subformulas, (ii) box the immediate subformula(s), (iii) star the atomic subformulas, and (iv) circle the main operator.

*a. (A ∧ B) → C

b. (A → [K14 ∨ C3])

c. B → (A ↔ B)

d. (B → A) ∧ (C ∨ A)

e. (A ∨ B) ↔ (C ∧ A)

*E2.7. For each of the formulas in E2.6(a)-(e), produce an unabbreviating tree to find the unabbreviated expression it represents.

*E2.8. For each of the unabbreviated expressions from E2.7(a)-(e), produce a complete tree to show by direct application of FR that it is an official formula.

E2.9. In the text, we introduced derived clauses to FR by reasoning as follows, "Suppose P and Q are formulas; then by FR(∼), ∼P is a formula; so by FR(→), (∼P → Q) is a formula; but this is just to say that (P ∨ Q) is a formula. And similarly in the other cases" (p. 39). Supposing that P and Q are formulas, produce the similar reasoning to show that (P ∧ Q) and (P ↔ Q) are formulas. Hint: Again, it may help to think about trees.
E2.10. For each of the following concepts, explain in an essay of about two pages, so that Hannah could understand. In your essay, you should (i) identify the objects to which the concept applies, (ii) give and explain the definition, and (iii) give and explicate examples of your own construction where the concept applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.

a. The vocabulary for a sentential language, and use of the metalanguage.

b. A formula of a sentential language.

c. The parts of a formula.

d. The abbreviation and unabbreviation for an official formula of a sentential language.

2.2 Quantificational Languages

The methods by which we define the grammar of a quantificational language are very much the same as for a sentential language. Of course, in the quantificational case, additional expressive power is associated with additional complications. We will introduce a class of terms before we get to the formulas, and there will be a distinction between formulas and sentences: not all formulas are sentences. As before, however, we begin with the vocabulary; we then turn to the terms, formulas, and sentences. Again we conclude with some discussion of abbreviations.


Here is a brief intuitive picture. At the start of section 2.1 we introduced Bill is happy and Hillary is happy as atoms for sentential languages, and the rest of the section went on to fill out that picture. In this case, our atoms are certain sentence parts. Thus we introduce a class of individual terms which work to pick out objects. In the simplest case, these are like ordinary names such as Bill and Hillary; we will find it convenient to indicate these, b and h. Similarly, we introduce a class of predicate expressions as (x is happy) and (x loves y), indicating them by capitals as H¹ or L² (with the superscript to indicate the number of object places). Then H¹b says that Bill is happy, and L²bh that Bill loves Hillary. We shall read ∀xH¹x to say for any thing x it is happy, that is, that everything is happy. (The upside-down 'A' for all is the universal quantifier.) As indicated by this reading, the variable x works very much like a pronoun in ordinary language. And, of course, our notions may be combined. Thus, ∀xH¹x ∧ L²hb says that everything is happy and Hillary loves Bill. Thus we expose structure buried in sentence letters from before. Of course we have so far done nothing to define quantificational languages. But this should give you a picture of the direction in which we aim to go.

2.2.1 Vocabulary

We begin by specifying the vocabulary or symbols of our quantificational languages. The vocabulary consists of infinitely many distinct symbols including,

VC  (p) Punctuation symbols: ( )

    (o) Operator symbols: ∼ → ∀

    (v) Variable symbols: i j … z with or without integer subscripts

    (s) A possibly-empty countable collection of sentence letters

    (c) A possibly-empty countable collection of constant symbols

    (f) For any integer n ≥ 1, a possibly-empty countable collection of n-place function symbols

    (r) For any integer n ≥ 1, a possibly-empty countable collection of n-place relation symbols

Unless otherwise noted, = is always included among the 2-place relation symbols. Notice that all the punctuation symbols, operator symbols and sentence letters remain from before (except that the collection of sentence letters may be empty). There is one new operator symbol, ∀, along with the new variable symbols, constant symbols, function symbols, and relation symbols.

CHAPTER 2. FORMAL LANGUAGES


More on Countability
Given what was said on p. 32, one might think that every collection is countable.
However, this is not so. This amazing and simple result was proved by G. Cantor
in 1873. Consider the collection which includes every countably infinite series of
digits 0 through 9 (or, if you like, the collection of all real numbers between 0 and
1). Suppose that the members of this collection can be correlated one-to-one with
the integers. Then there is some list,
  0 | a₀ a₁ a₂ a₃ a₄ …
  1 | b₀ b₁ b₂ b₃ b₄ …
  2 | c₀ c₁ c₂ c₃ c₄ …
  3 | d₀ d₁ d₂ d₃ d₄ …
  4 | e₀ e₁ e₂ e₃ e₄ …

and so forth, which matches each series of digits with an integer. For any digit
x, say x′ is the digit after it in the standard ordering (where 0 follows 9). Now
consider the digits along the diagonal, a₀, b₁, c₂, d₃, e₄ …, and ask: does the series
a₀′, b₁′, c₂′, d₃′, e₄′ … appear anywhere in the list? It cannot be the first member,
because a₀ ≠ a₀′; it cannot be the second, because b₁ ≠ b₁′; and similarly for
every member! So a₀′, b₁′, c₂′, d₃′, e₄′ … does not appear in the list. So we have
failed to match all the infinite series of digits with integers, and similarly for
any attempt! So the collection which contains every countably infinite series of
digits is not countable.
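The diagonal construction is mechanical enough to sketch in code. The following Python fragment is my own illustration, not from the text: a series of digits is modeled as a function from positions to digits, and the diagonal procedure produces a series that differs from the n-th series at position n, so it cannot be anywhere on the list.

```python
# A sketch (not from the text) of Cantor's diagonal construction.
# An infinite series of digits is modeled as a function from positions
# to digits; a "list" of series is just a Python list of such functions.

def successor(d):
    """The digit after d in the standard ordering, where 0 follows 9."""
    return (d + 1) % 10

def diagonal(series_list):
    """Digits of a series that differs from the n-th series at position n."""
    return [successor(series_list[n](n)) for n in range(len(series_list))]

# First ten rows of the attempted list described below: integer n is matched
# with its digits repeated (no 0-prefixing is needed for single digits).
rows = [(lambda n: (lambda pos: n))(n) for n in range(10)]
missing = diagonal(rows)   # differs from every row, so it is not on the list
```

Since the result disagrees with row n at position n, no correlation of this kind can cover every series of digits.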
As an example, consider the following attempt to line up the integers with the
series of digits:
  0 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 …
  1 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 …
  2 | 2 2 2 2 2 2 2 2 2 2 2 2 2 2 …
  3 | 3 3 3 3 3 3 3 3 3 3 3 3 3 3 …
  4 | 4 4 4 4 4 4 4 4 4 4 4 4 4 4 …
  5 | 5 5 5 5 5 5 5 5 5 5 5 5 5 5 …
  6 | 6 6 6 6 6 6 6 6 6 6 6 6 6 6 …
  7 | 7 7 7 7 7 7 7 7 7 7 7 7 7 7 …
  8 | 8 8 8 8 8 8 8 8 8 8 8 8 8 8 …
  9 | 9 9 9 9 9 9 9 9 9 9 9 9 9 9 …
 10 | 1 0 1 0 1 0 1 0 1 0 1 0 1 0 …
 11 | 0 1 1 1 1 1 1 1 1 1 1 1 1 1 …
 12 | 1 2 1 2 1 2 1 2 1 2 1 2 1 2 …
 13 | 1 3 1 3 1 3 1 3 1 3 1 3 1 3 …

and so forth. For each integer, repeat its digits, except that for duplicate cases
(1 and 11, 2 and 22, 12 and 1212) prefix enough 0s so that no later series
duplicates an earlier one. Then, by the above method, the series from the diagonal,
1 2 3 4 5 6 7 8 9 0 2 2 2 4 …,
cannot appear anywhere on the list. And similarly, any list has some missing series.


To fully specify the vocabulary of any particular language, we need to specify its
sentence letters, constant symbols, function symbols, and relation symbols. Our general definition VC leaves room for languages with different collections of these symbols. As before, the requirement that the collections be countable is compatible with
multiple series; for example, there may be sentence letters A, A₁, A₂, …, B, B₁, B₂,
… (where we may think of the unsubscripted letter as with an implicit subscript zero).
So, again, VC is compatible with a wide variety of specifications, but legitimate specifications always require that sentence letters, constant symbols, function symbols,
and relation symbols can be sorted into series with a first member, a second member,
and so forth. Notice that the variable symbols may be sorted into such a series as
well.
  i   j   k   …   z    i₁   j₁   …
  |   |   |        |    |    |
  0   1   2   …  17   18   19   …
So every variable is matched with an integer, and the variables are countable.
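As a concrete illustration of the correspondence (my own sketch, not the text's), the matching can be written as a function from integers to variable symbols: the eighteen letters i through z take 0 through 17, and the subscripted variables continue the series.

```python
# A sketch (not from the text) of the matching between variable symbols and
# integers: i, j, ..., z take 0-17, then i1, j1, ... continue the series.

LETTERS = "ijklmnopqrstuvwxyz"   # the eighteen variable letters

def variable(n):
    """The variable symbol matched with integer n."""
    letter = LETTERS[n % len(LETTERS)]
    subscript = n // len(LETTERS)
    return letter if subscript == 0 else letter + str(subscript)
```

So variable(0) is i, variable(17) is z, and variable(18) is i1, exactly as in the correspondence displayed above; since every integer yields a unique variable and vice versa, the variables are countable.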
As a sample for the other symbols, we shall adopt a generic quantificational language Lq which includes the equality symbol = along with,

  Sentence letters: uppercase Roman italics A … Z with or without integer subscripts
  Constant symbols: lowercase Roman italics a … h with or without integer subscripts
  Function symbols: for any integer n ≥ 1, superscripted lowercase Roman italics aⁿ … zⁿ with or without integer subscripts
  Relation symbols: for any integer n ≥ 1, superscripted uppercase Roman italics Aⁿ … Zⁿ with or without integer subscripts.
Observe that constant symbols and variable symbols partition the lowercase alphabet:
a … h for constants, and i … z for variables. Sentence letters are distinguished from
relation symbols by superscripts; similarly, constant and variable symbols are distinguished from function symbols by superscripts. Function symbols with a superscript
1 (a¹ … z¹) are one-place function symbols; function symbols with a superscript 2
(a² … z²) are two-place function symbols; and so forth. Similarly, relation symbols
with a superscript 1 (A¹ … Z¹) are one-place relation symbols; relation symbols
with a superscript 2 (A² … Z²) are two-place relation symbols; and so forth. Subscripts merely guarantee that we never run out of symbols of the different types.


Notice that superscripts and subscripts suffice to distinguish all the different symbols
from one another. Thus, for example, A and A¹ are different symbols: one a sentence letter, and the other a one-place relation symbol. A¹, A¹₁, and A² are distinct as
well: the first two are one-place relation symbols, distinguished by the subscript;
the latter is a completely distinct two-place relation symbol. In practice, again, we
will not see subscripts very often. (And we shall even find ways to abbreviate away
some superscripts.)
The metalanguage works very much as before. We use script letters A … Z and
a … z to represent expressions of an object language like Lq. Again, ∼, →,
∀, =, (, and ) represent themselves. And concatenated or joined symbols of
the metalanguage represent the concatenation of the symbols they represent. As
before, the metalanguage lets us make general claims about ranges of expressions
all at once. Thus, where x is a variable, ∀x is a universal x-quantifier. Here, ∀x
is not an expression of an object language like Lq. (Why?) Rather, we have said of
object language expressions that ∀x is a universal x-quantifier, ∀y₂ is a universal
y₂-quantifier, and so forth. In the metalinguistic expression, ∀ stands for itself, and
x for the arbitrary variable. Again, as in section 2.1.1, it may help to use maps to
see whether an expression is of a given form. Thus, given that x maps to any variable,
∀x and ∀y are of the form ∀x, but ∀c and ∀f¹z are not.
(L)
      ∀x        ∀x        ∀x        ∀x
      ||        ||         ?         ?
      ∀x        ∀y        ∀c       ∀f¹z

In the leftmost two cases, ∀ maps to itself, and x to a variable. In the next, c is a
constant, so there is no variable to which x can map. In the rightmost case, there is a
variable z in the object expression, but if x is mapped to it, the function symbol f¹
is left unmatched. So the rightmost two expressions are not of the form ∀x.
E2.11. Assuming that R¹ may represent any one-place relation symbol, h² any two-place function symbol, x any variable, and c any constant of Lq, use maps
to determine whether each of the following expressions is (i) of the form
∀x(R¹x → R¹c), and then (ii) of the form ∀x(R¹x → R¹h²xc).

a. ∀k(A¹k → A¹d)
b. ∀h(J¹h → J¹b)
c. ∀w(S¹w → S¹g²wb)
d. ∀w(S¹w → S¹c²xc)
e. ∀vL¹v → L¹yh²

2.2.2 Terms

With the vocabulary of a language in place, we can turn to specification of its grammatical expressions. For this, in the quantificational case, we begin with terms.
TR (v) If t is a variable x, then t is a term.
   (c) If t is a constant c, then t is a term.
   (f) If hⁿ is an n-place function symbol and t₁ … tₙ are n terms, then hⁿt₁ … tₙ is a term.
   (CL) Any term may be formed by repeated application of these rules.

TR is another example of a recursive definition. As before, we can use tree diagrams
to see how it works. This time, basic elements are constants and variables. Complex
elements are put together by clause (f). Thus, for example, f¹g²h¹xc is a term of
Lq.
(M)
      x                    x is a term by TR(v), and c is a term by TR(c)
      |
     h¹x       c           since x is a term, this is a term by TR(f)
       \      /
      g²h¹xc               since h¹x and c are terms, this is a term by TR(f)
         |
      f¹g²h¹xc             since g²h¹xc is a term, this is a term by TR(f)

Notice how the superscripts of a function symbol indicate the number of places that
take terms. Thus x is a term, and h¹ followed by x to form h¹x is another term. But
then, given that h¹x and c are terms, g² followed by h¹x and then c is another term.
And so forth. Observe that neither g²h¹x nor g²c is a term: the function symbol
g² must be followed by a pair of terms to form a new term. And neither is h¹xc a
term: the function symbol h¹ can only be followed by a single term to compose a
term. You will find that there is always only one way to build a term on a tree. Here
is another example.
(N)
      x     c     z     x     these are terms by TR(v), TR(c), TR(v), and TR(v)
            |
           h¹c                since c is a term, this is a term by TR(f)
       \    |    /   /
       f⁴xh¹czx               given the four input terms, this is a term by TR(f)


Again, there is always just one way to build a term by the definition. If you are
confused about the makeup of a term, build it on a tree, and all will be revealed. To
demonstrate that an expression is a term, it is sufficient to construct it, according to
the definition, on such a tree. If an expression is not a term, there will be no way to
construct it according to the rules.
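Because TR is recursive, a machine can check terms the same way a tree does. The following Python sketch is my own illustration, not from the text; its tokenization convention is an assumption: single letters a through h are constants, i through z are variables, and a letter followed by digits (like "f1" or "g2") is a function symbol with that arity.

```python
# A sketch (not from the text) of checking definition TR mechanically.
# Tokens: "a".."h" are constants, "i".."z" variables, and "f1", "g2", ...
# are function symbols whose digits give the number of places.

def parse_term(tokens, i=0):
    """Read one term starting at position i; return the position just
    after it, or None if no term can be formed there."""
    if i >= len(tokens):
        return None
    tok = tokens[i]
    if len(tok) == 1 and tok in "ijklmnopqrstuvwxyz":   # TR(v): a variable
        return i + 1
    if len(tok) == 1 and tok in "abcdefgh":             # TR(c): a constant
        return i + 1
    if tok[0].islower() and tok[1:].isdigit():          # TR(f): n-place symbol
        pos = i + 1
        for _ in range(int(tok[1:])):                   # exactly n terms follow
            pos = parse_term(tokens, pos)
            if pos is None:
                return None
        return pos
    return None

def is_term(tokens):
    """An expression is a term iff it parses with nothing left over."""
    return parse_term(tokens) == len(tokens)
```

For example, is_term(["f1", "g2", "h1", "x", "c"]) succeeds, mirroring tree (M), while ["g2", "h1", "x"] fails because g² lacks its second term and ["h1", "x", "c"] fails because c is left over; as in the text, there is only one way to read off a term.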
E2.12. For each of the following expressions, demonstrate that it is a term of Lq
with a tree.

a. f¹c
b. g²yf¹c
*c. h³cf¹yx
d. g²h³xyf¹cx
e. h³f¹f¹xcg²f¹za

E2.13. Explain why the following expressions are not terms of Lq. Hint: you may
find that an attempted tree will help you see what is wrong.

a. X
b. g²
c. zc
*d. g²yf¹xc
e. h³f¹f¹cg²f¹za

E2.14. For each of the following expressions, determine whether it is a term of Lq;
if it is, demonstrate with a tree; if not, explain why.

*a. g²g²xyf¹x
*b. h³cf²yx
c. f¹g²xh³yf²yc
d. f¹g²xh³yf¹yc
e. h³g²f¹xcg²f¹zaf¹b

2.2.3 Formulas

With the terms in place, we are ready for the central notion of a formula. Again, the
definition is recursive.

FR (s) If S is a sentence letter, then S is a formula.
   (r) If Rⁿ is an n-place relation symbol and t₁ … tₙ are n terms, then Rⁿt₁ … tₙ is a formula.
   (∼) If P is a formula, then ∼P is a formula.
   (→) If P and Q are formulas, then (P → Q) is a formula.
   (∀) If P is a formula and x is a variable, then ∀xP is a formula.
   (CL) Any formula can be formed by repeated application of these rules.

Again, we can use trees to see how it works. In this case, FR(r) depends on which expressions are terms. So it is natural to split the diagram into two, with applications of
TR above a division, and FR below. Then, for example, ∀x(A¹f¹x → ∼∀yB²cy)
is a formula.
(O)
      x         c    y        Terms by TR(v), TR(c), and TR(v)
      |
     f¹x                      Term by TR(f)
     . . . . . . . . . . . . . . . . . .
     A¹f¹x      B²cy          Formulas by FR(r)
       |          |
       |       ∀yB²cy         Formula by FR(∀)
       |          |
       |      ∼∀yB²cy         Formula by FR(∼)
        \        /
     (A¹f¹x → ∼∀yB²cy)        Formula by FR(→)
            |
     ∀x(A¹f¹x → ∼∀yB²cy)      Formula by FR(∀)

By now, the basic strategy should be clear. We construct terms by TR just as before.
Given that f¹x is a term, FR(r) gives us that A¹f¹x is a formula, for it consists of
a one-place relation symbol followed by a single term; and given that c and y are
terms, FR(r) gives us that B²cy is a formula, for it consists of a two-place relation
symbol followed by two terms. From the latter, by FR(∀), ∀yB²cy is a formula.
Then FR(∼) and FR(→) work just as before. The final step is another application of
FR(∀).
Here is another example. By the following tree, ∀x∼(L → ∀yB³f¹ycx) is a
formula of Lq.
(P)
      y         c   x         Terms by TR(v), TR(c), and TR(v)
      |
     f¹y                      Term by TR(f)
     . . . . . . . . . . . . . . . . . .
     L        B³f¹ycx         Formulas by FR(s) and FR(r)
     |           |
     |       ∀yB³f¹ycx        Formula by FR(∀)
      \         /
     (L → ∀yB³f¹ycx)          Formula by FR(→)
            |
     ∼(L → ∀yB³f¹ycx)         Formula by FR(∼)
            |
     ∀x∼(L → ∀yB³f¹ycx)       Formula by FR(∀)

The basic formulas appear in the top row of the formula part of the diagram. L is a
sentence letter. So it does not require any terms to be a formula. B³ is a three-place
relation symbol, so by FR(r) it takes three terms to make a formula. After that, other
formulas are constructed out of ones that come before.
If an expression is not a formula, then there is no way to construct it by the rules.
Thus, for example, (A¹x) is not a formula of Lq. A¹x is a formula; but the only way
parentheses are introduced is in association with →; the parentheses in (A¹x) are
not introduced that way; so there is no way to construct it by the rules, and it is not a
formula. Similarly, A²x and A²f²xy are not formulas; in each case, the problem is
that the two-place relation symbol is followed by just one term. You should be clear
about these in your own mind, particularly for the second case.
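FR, like TR, is effectively a parsing procedure. The following Python sketch is my own illustration, not from the text; the token conventions are assumptions: "~", "->", and "@" stand in for ∼, →, and ∀, single uppercase letters are sentence letters, and "A1", "B2", and so on are relation symbols with the indicated number of places.

```python
# A sketch (not from the text) of checking FR mechanically, reusing the
# term-checking idea: lowercase single letters are variables/constants,
# "f1" etc. are function symbols, "A1" etc. are relation symbols, and
# "~", "->", "@" encode the operators.

def parse_term(toks, i):
    if i >= len(toks):
        return None
    t = toks[i]
    if len(t) == 1 and t.islower():             # TR(v) or TR(c)
        return i + 1
    if t[0].islower() and t[1:].isdigit():      # TR(f)
        j = i + 1
        for _ in range(int(t[1:])):
            j = parse_term(toks, j)
            if j is None:
                return None
        return j
    return None

def parse_formula(toks, i=0):
    if i >= len(toks):
        return None
    t = toks[i]
    if len(t) == 1 and t.isupper():             # FR(s): sentence letter
        return i + 1
    if t[0].isupper() and t[1:].isdigit():      # FR(r): n terms must follow
        j = i + 1
        for _ in range(int(t[1:])):
            j = parse_term(toks, j)
            if j is None:
                return None
        return j
    if t == "~":                                # FR(~)
        return parse_formula(toks, i + 1)
    if t == "@":                                # FR(universal): "@", variable, formula
        if i + 1 < len(toks) and toks[i + 1] in "ijklmnopqrstuvwxyz":
            return parse_formula(toks, i + 2)
        return None
    if t == "(":                                # FR(->): "(", P, "->", Q, ")"
        j = parse_formula(toks, i + 1)
        if j is None or j >= len(toks) or toks[j] != "->":
            return None
        k = parse_formula(toks, j + 1)
        if k is None or k >= len(toks) or toks[k] != ")":
            return None
        return k + 1
    return None

def is_formula(toks):
    return parse_formula(toks) == len(toks)
```

The failure cases from the text come out as expected: "( A1 x )" is rejected because its parentheses are not introduced with →, and "A2 x" is rejected because the two-place relation symbol is followed by just one term.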
Before turning to the official notion of a sentence, we introduce some additional
definitions, each directly related to the trees and to notions you have seen before.
First, where →, ∼, and any quantifier count as operators, a formula's main operator is the last operator added in its tree. Second, every formula in the formula portion
of a diagram for P, including P itself, is a subformula of P. Notice that terms
are not formulas, and so are not subformulas. An immediate subformula of P is a
subformula to which P is immediately connected by lines. A subformula is atomic
iff it contains no operators and so appears in the top line of the formula part of the
tree. Thus, with notation from exercises before, with star for atomic formulas, box
for immediate subformulas, and circle for main operator, on the diagram immediately
above we have,
(Q)
      y         c   x
      |
     f¹y
     . . . . . . . . . . . . . . . . . .
     L*       B³f¹ycx*
     |           |
     |       ∀yB³f¹ycx
      \         /
     (L → ∀yB³f¹ycx)
            |
     [∼(L → ∀yB³f¹ycx)]
            |
     ⟨∀x⟩∼(L → ∀yB³f¹ycx)

(Here * marks the starred atomic subformulas, the square brackets the boxed immediate subformula, and ⟨∀x⟩ the circled main operator; every formula below the dotted line is a subformula.)

The main operator is ∀x, and the immediate subformula is ∼(L → ∀yB³f¹ycx).
The atomic subformulas are L and B³f¹ycx. The atomic subformulas are the most
basic formulas. Given this, everything is as one would expect from before. In general, if P and Q are formulas and x is a variable, the main operator of ∀xP is the
quantifier, and the immediate subformula is P; the main operator of ∼P is the tilde,
and the immediate subformula is P; the main operator of (P → Q) is the arrow, and
the immediate subformulas are P and Q. For you would build these by getting P,
or P and Q, and then adding the quantifier, tilde, or arrow as the last operator.
Now if a formula includes a quantifier, that quantifier's scope is just the subformula in which the quantifier first appears. Using underlines to indicate scope,

(R)
      z
     . . . . . . . . . . . . . . . . . .
     A¹z       B²xy
      |          |
      |      ∀xB²xy           The scope of the x-quantifier is ∀xB²xy
      |          |
      |     ∀y∀xB²xy          The scope of the y-quantifier is ∀y∀xB²xy
       \        /
     (A¹z → ∀y∀xB²xy)
            |
     ∀z(A¹z → ∀y∀xB²xy)       The scope of the z-quantifier is the entire formula

A variable x is bound iff it appears in the scope of an x-quantifier, and a variable
is free iff it is not bound. In the above diagram, each variable is bound. The x-quantifier binds both instances of x; the y-quantifier binds both instances of y; and
the z-quantifier binds both instances of z. In ∀xR²xy, however, both instances of x
are bound, but the y is free. Finally, an expression is a sentence iff it is a formula
and it has no free variables. To determine whether an expression is a sentence, use
a tree to see if it is a formula. If it is a formula, use underlines to check whether
any variable x has an instance that falls outside the scope of an x-quantifier. If it
is a formula, and there is no such instance, then the expression is a sentence. From
the above diagram, ∀z(A¹z → ∀y∀xB²xy) is a formula and a sentence. But as
follows, ∀y(∼Q¹x → ∀xDxy) is not.

(S)
      x         x   y
     . . . . . . . . . . . . . . . . . .
     Q¹x       Dxy
      |          |
     ∼Q¹x    ∀xDxy            The scope of the x-quantifier is ∀xDxy
       \        /
     (∼Q¹x → ∀xDxy)
            |
     ∀y(∼Q¹x → ∀xDxy)         The scope of the y-quantifier is the entire formula

Recall that D is a two-place relation symbol. The expression has a tree, so it is a
formula. The x-quantifier binds the last two instances of x, and the y-quantifier binds
both instances of y. But the first instance of x is free. Since it has a free variable,
although it is a formula, ∀y(∼Q¹x → ∀xDxy) is not a sentence. Notice that
∀xR²ax, for example, is a sentence, as the only variable is x (a being a constant)
and all the instances of x are bound.
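The free/bound distinction can also be computed directly from a formula's structure. The following Python sketch is my own illustration, not from the text; the tuple encoding of formulas is an assumption made for the example.

```python
# A sketch (not from the text) computing free variables and sentencehood.
# Formulas are nested tuples:
#   ("S", "L")                          a sentence letter
#   ("R", "Q1", term, ...)              a relation symbol applied to terms
#   ("~", P), ("->", P, Q), ("A", x, P) for the tilde, arrow, and universal.
# A term is a variable/constant name, or a tuple ("f1", t1, ..., tn).

VARS = set("ijklmnopqrstuvwxyz")

def term_vars(t):
    if isinstance(t, tuple):                       # function symbol with args
        return set().union(*(term_vars(s) for s in t[1:]))
    return {t} if t in VARS else set()             # variable vs constant

def free_vars(p):
    op = p[0]
    if op == "S":
        return set()
    if op == "R":
        return set().union(*(term_vars(t) for t in p[2:]))
    if op == "~":
        return free_vars(p[1])
    if op == "->":
        return free_vars(p[1]) | free_vars(p[2])
    if op == "A":                                  # the quantifier binds its variable
        return free_vars(p[2]) - {p[1]}
    raise ValueError("unknown operator: " + str(op))

def is_sentence(p):
    return not free_vars(p)
```

Run on the two examples from the text, the formula with every variable bound comes out a sentence, while the formula with a free x does not.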
E2.15. For each of the following expressions, (i) demonstrate that it is a formula of
Lq with a tree. (ii) On the tree bracket all the subformulas, box the immediate subformulas, star the atomic subformulas, circle the main operator, and
indicate quantifier scope with underlines. Then (iii) say whether the formula
is a sentence, and if it is not, explain why.

a. H¹x
*b. B²ac
c. ∀x(Dxc → A¹g²ay)
d. ∀x(B²xc → ∀yA¹g²ay)
e. (S → (∀wB²f¹wh¹a → ∀z(H¹w → B²za)))

E2.16. Explain why the following expressions are not formulas or sentences of Lq.
Hint: You may find that an attempted tree will help you see what is wrong.

a. H¹
b. g²ax
*c. ∀xB²xg²ax
d. (∀aA¹a → (S → B²zg²xa))
e. ∀x(Dax → ∀zK²zg²xa)

E2.17. For each of the following expressions, determine whether it is a formula and
a sentence of Lq. If it is a formula, show it on a tree, and exhibit its parts as
in E2.15. If it fails one or both, explain why.

a. (L → V)
b. ∀x(L → K¹h³xb)
c. ∀z∀w(∀xR²wx → K²zw) → M²zz)
*d. ∀z(L¹z → (∀wR²wf³axw → ∀wR²f³azww))
e. ((∀w)B²f¹wh¹a → (∀z)(H¹w → B²za))

2.2.4 Abbreviations

That is all there is to the official grammar. Having introduced the official grammar,
though, it is nice to have in hand some abbreviated versions for official expressions.
Abbreviated forms give us ways to manipulate official expressions without undue
pain. First, for any variable x and formulas P and Q,

AB (∨) (P ∨ Q) abbreviates (∼P → Q)
   (∧) (P ∧ Q) abbreviates ∼(P → ∼Q)
   (↔) (P ↔ Q) abbreviates ∼((P → Q) → ∼(Q → P))
   (∃) ∃xP abbreviates ∼∀x∼P

The first three are as before. The last is new. For any variable x, an expression of
the form ∃x is an existential quantifier; it is read to say, there exists an x such that
P.
As before, these abbreviations make possible derived clauses to FR. Suppose P
is a formula; then by FR(∼), ∼P is a formula; so by FR(∀), ∀x∼P is a formula; so
by FR(∼) again, ∼∀x∼P is a formula; but this is just to say that ∃xP is a formula.
With results from before, we are thus given,


FR′ (∧) If P and Q are formulas, then (P ∧ Q) is a formula.
    (∨) If P and Q are formulas, then (P ∨ Q) is a formula.
    (↔) If P and Q are formulas, then (P ↔ Q) is a formula.
    (∃) If P is a formula and x is a variable, then ∃xP is a formula.
The first three are from before. The last is new. And, as before, we can incorporate
these conditions directly into trees for formulas. Thus ∀x(∼A¹x ∧ ∃yA²yx) is a
formula.
(T)
      x         y   x         These are terms by TR(v)
     . . . . . . . . . . . . . . . . . .
     A¹x      A²yx            These are formulas by FR(r)
      |          |
     ∼A¹x    ∃yA²yx           These are formulas by FR(∼) and FR′(∃)
       \        /
     (∼A¹x ∧ ∃yA²yx)          This is a formula by FR′(∧)
            |
     ∀x(∼A¹x ∧ ∃yA²yx)        This is a formula by FR(∀)

In a derived sense, we carry over additional definitions from before. Thus, the main
operator is the last operator added in its tree, subformulas are all the formulas in
the formula part of a tree, atomic subformulas are the ones in the upper row of the
formula part, and immediate subformulas are the one(s) to which a formula is immediately connected by lines. Thus the main operator of ∀x(∼A¹x ∧ ∃yA²yx) is the
universal quantifier and the immediate subformula is (∼A¹x ∧ ∃yA²yx). In addition,
a variable is in the scope of an existential quantifier iff it would be in the scope of
the unabbreviated universal one. So it is possible to discover whether an expression
is a sentence directly from diagrams of this sort. Thus, as indicated by underlines,
∀x(∼A¹x ∧ ∃yA²yx) is a sentence.
To see what it is an abbreviation for, we can reconstruct the formula on an unabbreviating tree, one operator at a time.


(U)
      x        y   x                      x        y   x
     . . . . . . . . . . . .             . . . . . . . . . . . .
     A¹x      A²yx                       A¹x      A²yx
      |          |                        |          |
     ∼A¹x    ∃yA²yx                      ∼A¹x    ∼∀y∼A²yx            By AB(∃)
       \        /                          \        /
     (∼A¹x ∧ ∃yA²yx)                     ∼(∼A¹x → ∼∼∀y∼A²yx)         By AB(∧)
            |                                    |
     ∀x(∼A¹x ∧ ∃yA²yx)                   ∀x∼(∼A¹x → ∼∼∀y∼A²yx)

First the existential quantifier is replaced by the unabbreviated form. Then, where
P and Q are joined by FR′(∧) to form (P ∧ Q), the corresponding unabbreviated
expressions are combined into the unabbreviated form, ∼(P → ∼Q). At the last
step, FR(∀) applies as before. So ∀x(∼A¹x ∧ ∃yA²yx) abbreviates ∀x∼(∼A¹x →
∼∼∀y∼A²yx). Again, abbreviations are nice! Notice that the resultant expression
is a formula and a sentence, as it should be.
As before, it is sometimes convenient to use a pair of square brackets [ ] in place
of parentheses ( ). And if the very last step of a tree for some formula is justified by
FR(→), FR′(∨), FR′(∧), or FR′(↔), we may abbreviate that formula with the outermost set of parentheses or brackets dropped. In addition, for terms t₁ and t₂ we will
frequently represent the formula =t₁t₂ as (t₁ = t₂). Notice the extra parentheses.
This lets us see the equality symbol in its more usual infix form. When there is
no danger of confusion, we will sometimes omit the parentheses and write t₁ = t₂.
Also, where there is no potential for confusion, we sometimes omit superscripts.
Thus in Lq we might omit superscripts on relation symbols, simply assuming
that the terms following a relation symbol give its correct number of places. Thus
Ax abbreviates A¹x; Axy abbreviates A²xy; Axf¹y abbreviates A²xf¹y; and so
forth. Notice that Ax and Axy, for example, involve different relation symbols. In
formulas of Lq, sentence letters are distinguished from relation symbols insofar as
relation symbols are followed immediately by terms, where sentence letters are not.
Notice, however, that we cannot drop superscripts on function symbols in Lq:
even given that f and g are function symbols rather than constants, apart from
superscripts, there is no way to distinguish the terms in, say, Afgxyzw.
As a final example, ∃y∼(c = y) ∨ ∀x∼Rxf²xd is a formula and a sentence.

(V)
      y       x  x                        y       x  x
              |  |                                |  |
             f²xd                                f²xd
     . . . . . . . . . . . .             . . . . . . . . . . . .
     (c = y)    Rxf²xd                   Dcy       R²xf²xd
        |          |                      |           |
     ∼(c = y)  ∼Rxf²xd                   ∼Dcy      ∼R²xf²xd
        |          |                      |           |
     ∃y∼(c = y)  ∀x∼Rxf²xd               ∼∀y∼∼Dcy  ∀x∼R²xf²xd
          \        /                         \        /
     ∃y∼(c = y) ∨ ∀x∼Rxf²xd              (∼∼∀y∼∼Dcy → ∀x∼R²xf²xd)

The abbreviation drops a superscript, uses the infix notation for equality, uses the existential quantifier and wedge, and drops outermost parentheses. As before, the right-hand diagram is not a direct demonstration that (∼∼∀y∼∼Dcy → ∀x∼R²xf²xd)
is a sentence. However, it unpacks the abbreviation, and we know that the result is
an official sentence, insofar as the left-hand tree, with its application of derived rules,
tells us that ∃y∼(c = y) ∨ ∀x∼Rxf²xd is an abbreviation of a formula and a sentence,
and the right-hand diagram tells us what that expression is.
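Unabbreviating is itself a mechanical rewrite, applying the AB clauses from the bottom of a tree upward. The following Python sketch is my own illustration, not from the text; the tuple encoding and operator tokens are assumptions made for the example.

```python
# A sketch (not from the text) of unabbreviating: rewriting the wedge, caret,
# double arrow, and existential quantifier into official tilde, arrow, and
# universal quantifier by the AB clauses.  Formulas are nested tuples; atoms
# are plain strings; "v", "&", "<->", "E" encode the abbreviations.

def unabbreviate(p):
    if isinstance(p, str):                       # atomic: nothing to unpack
        return p
    op = p[0]
    if op == "~":
        return ("~", unabbreviate(p[1]))
    if op == "->":
        return ("->", unabbreviate(p[1]), unabbreviate(p[2]))
    if op == "A":                                # official universal quantifier
        return ("A", p[1], unabbreviate(p[2]))
    if op == "v":                                # AB: (P v Q) is (~P -> Q)
        return ("->", ("~", unabbreviate(p[1])), unabbreviate(p[2]))
    if op == "&":                                # AB: (P & Q) is ~(P -> ~Q)
        return ("~", ("->", unabbreviate(p[1]), ("~", unabbreviate(p[2]))))
    if op == "<->":                              # AB: ~((P -> Q) -> ~(Q -> P))
        P, Q = unabbreviate(p[1]), unabbreviate(p[2])
        return ("~", ("->", ("->", P, Q), ("~", ("->", Q, P))))
    if op == "E":                                # AB: ExP is ~Ax~P
        return ("~", ("A", p[1], ("~", unabbreviate(p[2]))))
    raise ValueError("unknown operator: " + str(op))
```

Applied to the encoding of ∀x(∼A¹x ∧ ∃yA²yx), the function returns the encoding of ∀x∼(∼A¹x → ∼∼∀y∼A²yx), matching the unabbreviating tree above done by hand.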
E2.18. For each of the following expressions, (i) demonstrate that it is a formula of
Lq with a tree. (ii) On the tree bracket all the subformulas, box the immediate subformulas, star the atomic subformulas, circle the main operator, and
indicate quantifier scope with underlines. Then (iii) say whether the formula
is a sentence, and if it is not, explain why.

a. (A → B) ↔ (A ∧ C)
b. ∃xFx ∧ ∀yGxy
*c. ∃xAf¹g²ah³zwf¹x ∨ S
d. ∀x∀y∀z((x = y) ∧ (y = z) → (x = z))
e. ∃yc = y ∧ ∀xRxf¹xy


Grammar Quick Reference

VC (p) Punctuation symbols: (, )
   (o) Operator symbols: ∼, →, ∀
   (v) Variable symbols: i … z with or without integer subscripts
   (s) A possibly-empty countable collection of sentence letters
   (c) A possibly-empty countable collection of constant symbols
   (f) For any integer n ≥ 1, a possibly-empty countable collection of n-place function symbols
   (r) For any integer n ≥ 1, a possibly-empty countable collection of n-place relation symbols

TR (v) If t is a variable x, then t is a term.
   (c) If t is a constant c, then t is a term.
   (f) If hⁿ is an n-place function symbol and t₁ … tₙ are n terms, then hⁿt₁ … tₙ is a term.
   (CL) Any term may be formed by repeated application of these rules.

FR (s) If S is a sentence letter, then S is a formula.
   (r) If Rⁿ is an n-place relation symbol and t₁ … tₙ are n terms, Rⁿt₁ … tₙ is a formula.
   (∼) If P is a formula, then ∼P is a formula.
   (→) If P and Q are formulas, then (P → Q) is a formula.
   (∀) If P is a formula and x is a variable, then ∀xP is a formula.
   (CL) Any formula can be formed by repeated application of these rules.

A quantifier's scope includes just the formula on which it is introduced; a variable x is free iff it
is not in the scope of an x-quantifier; an expression is a sentence iff it is a formula with no free
variables. A formula's main operator is the last operator added; its immediate subformulas are the
ones to which it is directly connected by lines.

AB (∨) (P ∨ Q) abbreviates (∼P → Q)
   (∧) (P ∧ Q) abbreviates ∼(P → ∼Q)
   (↔) (P ↔ Q) abbreviates ∼((P → Q) → ∼(Q → P))
   (∃) ∃xP abbreviates ∼∀x∼P

FR′ (∧) If P and Q are formulas, then (P ∧ Q) is a formula.
    (∨) If P and Q are formulas, then (P ∨ Q) is a formula.
    (↔) If P and Q are formulas, then (P ↔ Q) is a formula.
    (∃) If P is a formula and x is a variable, then ∃xP is a formula.

The generic language Lq includes the equality symbol = along with,
  Sentence letters: A … Z with or without integer subscripts
  Constant symbols: a … h with or without integer subscripts
  Function symbols: for any n ≥ 1, aⁿ … zⁿ with or without integer subscripts
  Relation symbols: for any n ≥ 1, Aⁿ … Zⁿ with or without integer subscripts.


*E2.19. For each of the formulas in E2.18, produce an unabbreviating tree to find
the unabbreviated expression it represents.

*E2.20. For each of the unabbreviated expressions from E2.19, produce a complete
tree to show by direct application of FR that it is an official formula. In
each case, using underlines to indicate quantifier scope, is the expression a
sentence? Does this match with the result of E2.18?

2.2.5 Another Language

To emphasize the generality of our definitions VC, TR, and FR, let us introduce an
enhanced version of a language with which we will be much concerned later in the
text. L<NT is like a minimal language we shall introduce later for number theory.
Recall that VC leaves open what are the sentence letters, constant symbols, function
symbols, and relation symbols of a quantificational language. So far, our generic
language Lq fills these in by certain conventions. L<NT replaces these with,

  Constant symbol: ∅
  Two-place relation symbols: =, <
  One-place function symbol: S
  Two-place function symbols: +, ×

and that is all. Later we shall introduce a language like L<NT except without the <
symbol; for now, we leave it in. Notice that Lq uses capitals for sentence letters and
lowercase for function symbols. But there is nothing sacred about this. Similarly,
Lq indicates the number of places for function and relation symbols by superscripts,
where in L<NT the number of places is simply built into the definition of the symbol. In
fact, L<NT is an extremely simple language! Given the vocabulary, TR and FR apply
in the usual way. Thus ∅, S∅, and SS∅ are terms, as is easy to see on a tree. And
<∅SS∅ is an atomic formula.
As with our treatment for equality, for terms m and n, we often abbreviate official
terms of the sort +mn and ×mn as (m + n) and (m × n); similarly, it is often
convenient to abbreviate an atomic formula <mn as (m < n). And we will drop
these parentheses when there is no danger of confusion. Officially, we have not said
a word about what these expressions mean. It is natural, however, to think of them
with their usual meanings, with S the successor function, so that the successor of


zero, S∅, is one, the successor of the successor of zero, SS∅, is two, and so forth. But
we do not need to think about that for now.
As an example, we show that ∀x∀y(x = y → (x + y) < (x + Sy)) is (an
abbreviation of) a formula and a sentence.
(W)
      x   y     x   y     x    y          Terms by TR(v)
       \ /                     |
     (x + y)                  Sy          Terms by TR(f) for 2- and 1-place symbols
                        \     /
                      (x + Sy)            Term by TR(f) for 2-place function symbol
     . . . . . . . . . . . . . . . . . . . . .
     x = y    (x + y) < (x + Sy)          Formulas by FR(r) for 2-place symbols
        \          /
     (x = y → (x + y) < (x + Sy))         Formula by FR(→)
               |
     ∀y(x = y → (x + y) < (x + Sy))       Formula by FR(∀)
               |
     ∀x∀y(x = y → (x + y) < (x + Sy))     Formula by FR(∀)

And we can show what it abbreviates by unpacking the abbreviation in the usual way.
This time, we need to pay attention to abbreviations in the terms as well as formulas.

(X)
      x   y     x    y                    x   y     x    y
       \ /           |                     \ /           |
     (x + y)        Sy                    +xy           Sy
               \    /                               \   /
             (x + Sy)                               +xSy
     . . . . . . . . . . . . .            . . . . . . . . . . . .
     x = y   (x + y) < (x + Sy)           =xy      <+xy+xSy
        \         /                         \         /
     (x = y → (x + y) < (x + Sy))         (=xy → <+xy+xSy)
              |                                   |
     ∀y(x = y → (x + y) < (x + Sy))       ∀y(=xy → <+xy+xSy)
              |                                   |
     ∀x∀y(x = y → (x + y) < (x + Sy))     ∀x∀y(=xy → <+xy+xSy)

The official (Polish) notation on the right may seem strange. But it follows the official
definitions TR and FR. And it conveniently reduces the number of parentheses from
the more typical infix presentation. (You may also be familiar with Polish notation
for math from certain electronic calculators.) If you are comfortable with grammar
and abbreviations for this language L<NT, you are doing well with the grammar for our
formal languages.
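Part of the appeal of Polish notation is that it is easy to process mechanically: each symbol announces how many terms follow it. As a sketch of my own (using "O" for ∅ and "*" for ×, which are encoding assumptions, not the text's symbols), here is an evaluator for official terms of L<NT over the natural numbers:

```python
# A sketch (not from the text) evaluating Polish-notation terms of L<NT on
# the natural numbers, with "O" standing in for the constant, "S" for the
# successor function symbol, and "+" / "*" for the two-place symbols.

def eval_term(s, i=0):
    """Evaluate the term starting at position i; return (value, next position)."""
    c = s[i]
    if c == "O":                       # the constant names zero
        return 0, i + 1
    if c == "S":                       # one-place: one term follows
        v, j = eval_term(s, i + 1)
        return v + 1, j
    if c in "+*":                      # two-place: two terms follow, prefix style
        a, j = eval_term(s, i + 1)
        b, k = eval_term(s, j)
        return (a + b if c == "+" else a * b), k

def value(s):
    v, j = eval_term(s)
    assert j == len(s), "leftover symbols: not a single term"
    return v
```

So "SSO" (officially SS∅) names two, and "+SOSSO" (officially +S∅SS∅, abbreviated (S∅ + SS∅)) names three; no parentheses are ever needed to evaluate unambiguously.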
E2.21. For each of the following expressions, (i) demonstrate that it is a formula of
L<NT with a tree. (ii) On the tree bracket all the subformulas, box the immediate subformulas, star the atomic subformulas, circle the main operator, and
indicate quantifier scope with underlines. Then (iii) say whether the formula
is a sentence, and if it is not, explain why.

a. S∅ = (S∅ × SS∅)
*b. ∃x∀y(x × y = x)
c. ∀x(x = ∅) → ∃y(y < x)
d. ∀y(x < y ∨ x = y) ∨ y < x
e. ∀x∀y∀z(x × (y + z)) = ((x × y) + (x × z))


*E2.22. For each of the formulas in E2.21, produce an unabbreviating tree to find
the unabbreviated expression it represents.

E2.23. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples of your own construction (iii) where the concept
applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.

a. The vocabulary for a quantificational language, and then for Lq and L<NT.

b. A formula and a sentence of a quantificational language.

c. An abbreviation for an official formula and sentence of a quantificational language.

Chapter 3

Axiomatic Deduction
Having developed the grammar of our formal languages in the previous chapter, it
would be natural to go on to say what their expressions mean. This is just what we do in
the next chapter. However, just as it is possible to do grammar without reference to
meaning, so it is possible to do derivations without reference to meaning. Derivations
are defined purely in relation to formula and sentence form. That is why it is crucial
to show that derivations stand in important relations to validity and truth. And that
is why it is possible to do derivations without knowing what the expressions mean.
To emphasize this point about form, in this chapter we develop a first axiomatic
derivation system without any reference to meaning and truth. Apart from relations
to meaning and truth, derivations are perfectly well-defined, counting at least as
a sort of puzzle or game, with, perhaps, a related thrill of victory and agony of
defeat. And as with a game, it is possible to build derivation skills, to become a
better player. Later, we will show how derivation games matter.
Derivation systems are constructed for different purposes. Introductions to mathematical logic typically employ an axiomatic approach. We will see a natural deduction system in chapter 6. The advantage of axiomatic systems is their extreme
simplicity. From a theoretical point of view, an axiomatic system lets us see what is
at the basis or foundation of the logic. From a practical point of view, when we want
to think about logic, it is convenient to have a relatively simple object to think about.
The axiomatic approach makes it natural to build toward increasingly complex and
powerful results. As we will see, however, in the beginning axiomatic derivations
can be relatively challenging! We will introduce our system in stages: after some
general remarks about what an axiom system is supposed to be, we will introduce the
sentential component of our system, the part with application to forms involving
just ∼ and → (and so ∨, ∧, and ↔). After that, we will turn to the full system for

CHAPTER 3. AXIOMATIC DEDUCTION

66

forms with quantifiers and equality, including a mathematical application.

3.1 General

Before turning to the derivations themselves, it will be helpful to make some points about the metalanguage and form. First, we are familiar with the idea that different formulas may be of the same form. Thus, for example, where P and Q are formulas, A → B and A → (B ∨ C) are both of the form P → Q: in the one case Q maps to B, and in the other to (B ∨ C). And, more generally, for formulas A, B, C, any formula of the form A → (B ∨ C) is also of the form P → Q. For if (B ∨ C) maps onto some formula, Q maps onto that formula as well. Of course, this does not go the other way around: it is not the case that every expression of the form P → Q is of the form A → (B ∨ C); for it is not the case that B ∨ C maps to any expression onto which Q maps. Be sure you are clear about this! Using the metalanguage this way, we can speak generally about formulas in arbitrary sentential or quantificational languages. This is just what we will do: on the assumption that our script letters A...Z range over formulas of some arbitrary formal language L, we frequently depend on the fact that every formula of one form is also of another.

Given a formal language L, an axiomatic logic AL consists of two parts. There is a set of axioms and a set of rules. Different axiomatic logics result from different axioms and rules. For now, the set of axioms is just some privileged collection of formulas. A rule tells us that one formula follows from some others. One way to specify axioms and rules is by form. Thus, for example, modus ponens may be included among the rules.

MP  P → Q, P ⊢ Q

The ⊢ symbol is the single turnstile (to contrast with the double turnstile ⊨ from chapter 4). According to this rule, for any formulas P and Q, the formula Q follows from P → Q together with P. Thus, as applied to Ls, B follows by MP from A → B and A; but also (B ↔ D) follows from (A → B) → (B ↔ D) and (A → B). And for a case put in the metalanguage, quite generally, a formula of the form (A ∧ B) follows from A → (A ∧ B) and A: for any formulas of the form A → (A ∧ B) and A are of the forms P → Q and P as well. Axioms also may be specified by form. Thus, for some language with formulas P and Q, a logic might include all formulas of the forms,

∧1. (P ∧ Q) → P
∧2. (P ∧ Q) → Q
∧3. P → (Q → (P ∧ Q))

among its axioms. Then in Ls,

(A ∧ B) → A,    (∼A ∧ A) → ∼A,    ((A → B) ∧ C) → (A → B)

are all axioms of form ∧1. So far, for a given axiomatic logic AL, there are no constraints on just which forms will be the axioms, and just which rules are included. The point is only that we specify an axiomatic logic when we specify some collection of axioms and rules.
Suppose we have specified some axioms and rules for an axiomatic logic AL. Then where Γ (Gamma) is a set of formulas taken as the formal premises of an argument,

AV (p) If P is a premise (a member of Γ), then P is a consequence in AL of Γ.

   (a) If P is an axiom of AL, then P is a consequence in AL of Γ.

   (r) If Q1...Qn are consequences in AL of Γ, and there is a rule of AL such that P follows from Q1...Qn by the rule, then P is a consequence in AL of Γ.

   (CL) Any consequence in AL of Γ may be obtained by repeated application of these rules.

The first two clauses make premises and axioms consequences in AL of Γ. And if, say, MP is a rule of an AL and P → Q and P are consequences in AL of Γ, then by AV(r), Q is a consequence in AL of Γ as well. If P is a consequence in AL of some premises Γ, then the premises prove P in AL and we write Γ ⊢AL P; in this case the argument is valid in AL. If Q1...Qn are the members of Γ, we sometimes write Q1...Qn ⊢AL P in place of Γ ⊢AL P. If Γ has no members at all and Γ ⊢AL P, then P is a theorem of AL. In this case, listing all the premises individually, we simply write ⊢AL P.
Before turning to our official axiomatic system AD, it will be helpful to consider a simple example. Suppose an axiomatic derivation system A1 has MP as its only rule, and just formulas of the forms ∧1, ∧2, and ∧3 as axioms. AV is a recursive definition like ones we have seen before. Thus nothing stops us from working out its consequences on trees. Thus we can show that A ∧ (B ∧ C) ⊢A1 C ∧ B as follows,

(A) [tree diagram: across the top row stand C → (B → (C ∧ B)), (B ∧ C) → C, the premise A ∧ (B ∧ C), (A ∧ (B ∧ C)) → (B ∧ C), and (B ∧ C) → B; successive applications of MP below yield B ∧ C, from it B and C, then B → (C ∧ B), and finally C ∧ B]
For definition AV, the basic elements are the premises and axioms. These occur across the top row. Thus, reading from the left, the first form is an instance of ∧3. The second is of type ∧2. These are thus consequences of Γ by AV(a). The third is the premise. Thus it is a consequence by AV(p). Any formula of the form (A ∧ (B ∧ C)) → (B ∧ C) is of the form (P ∧ Q) → Q; so the fourth is of the type ∧2. And the last is of the type ∧1. So the final two are consequences by AV(a). After that, all the results are by MP, and so consequences by AV(r). Thus, for example, in the first case, (A ∧ (B ∧ C)) → (B ∧ C) and A ∧ (B ∧ C) are of the sort P → Q and P, with A ∧ (B ∧ C) for P and (B ∧ C) for Q; thus B ∧ C follows from them by MP. So B ∧ C is a consequence in A1 of Γ by AV(r). And similarly for the other consequences. Notice that applications of MP and of the axiom forms are independent from one use to the next. The expressions that count as P or Q must be consistent within a given application of the axiom or rule, but may vary from one application of the axiom or rule to the next. If you are familiar with another derivation system, perhaps the one from chapter 6, you may think of an axiom as a rule without inputs. Then the axiom applies to expressions of its form in the usual way.

These diagrams can get messy, and it is traditional to represent the same information as follows, using annotations to indicate relations among formulas.

(B)
 1. A ∧ (B ∧ C)                prem(ise)
 2. (A ∧ (B ∧ C)) → (B ∧ C)    ∧2
 3. B ∧ C                      2,1 MP
 4. (B ∧ C) → B                ∧1
 5. B                          4,3 MP
 6. (B ∧ C) → C                ∧2
 7. C                          6,3 MP
 8. C → (B → (C ∧ B))          ∧3
 9. B → (C ∧ B)                8,7 MP
10. C ∧ B                      9,5 MP


Each of the forms (1)-(10) is a consequence of A ∧ (B ∧ C) in A1. As indicated on the right, the first is a premise, and so a consequence by AV(p). The second is an axiom of the form ∧2, and so a consequence by AV(a). The third follows by MP from the forms on lines (2) and (1), and so is a consequence by AV(r). And so forth. Such a demonstration is an axiomatic derivation. This derivation contains the very same information as the tree diagram (A), only with geometric arrangement replaced by line numbers to indicate relations between forms. Observe that we might have accomplished the same end with a different arrangement of lines. For example, we might have listed all the axioms first, with applications of MP after. The important point is that in an axiomatic derivation, each line is either an axiom, a premise, or follows from previous lines by a rule. Just as a tree is sufficient to demonstrate that Γ ⊢AL P, that P is a consequence of Γ in AL, so an axiomatic derivation is sufficient to show the same. In fact, we shall typically use derivations, rather than trees, to show that Γ ⊢AL P.
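Since each line of an axiomatic derivation must be a premise, an axiom, or follow from earlier lines by a rule, checking a derivation is a purely mechanical matter. Here is a Python sketch of a checker for the toy system A1; the tuple encoding, with ('&', P, Q) and ('->', P, Q), and the function names are my own, not the text's.

```python
# Checking an A1 derivation mechanically: each line must be a premise, an
# instance of ^1-^3, or follow from two earlier lines by MP. Formulas are
# atoms (strings) or tuples ('&', P, Q) and ('->', P, Q).

def is_axiom(f):
    """True iff f instantiates ^1, ^2, or ^3."""
    if not (isinstance(f, tuple) and f[0] == '->'):
        return False
    ant, con = f[1], f[2]
    # ^1: (P & Q) -> P    and    ^2: (P & Q) -> Q
    if isinstance(ant, tuple) and ant[0] == '&' and con in (ant[1], ant[2]):
        return True
    # ^3: P -> (Q -> (P & Q))
    return (isinstance(con, tuple) and con[0] == '->'
            and con[2] == ('&', ant, con[1]))

def check(premises, lines):
    """True iff every line is a premise, an axiom, or follows by MP."""
    so_far = []
    for f in lines:
        by_mp = any(('->', p, f) in so_far for p in so_far)
        if not (f in premises or is_axiom(f) or by_mp):
            return False
        so_far.append(f)
    return True
```

Encoding derivation (B) with premise A ∧ (B ∧ C), the checker accepts all ten lines; a "derivation" that simply asserts C ∧ B with no support is rejected.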
Notice that we have been reasoning with sentence forms, and so have shown that a formula of the form C ∧ B follows in A1 from one of the form A ∧ (B ∧ C). Given this, we freely appeal to results of one derivation in the process of doing another. Thus, if we were to encounter a formula of the form A ∧ (B ∧ C) in an A1 derivation, we might simply cite the derivation (B) completed above, and move directly to the conclusion that C ∧ B. The resultant derivation would be an abbreviation of an official one which includes each of the above steps to reach C ∧ B. In this way, derivations remain manageable, and we are able to build toward results of increasing complexity. (Compare your high school experience of Euclidean geometry.) All of this should become more clear as we turn to the official and complete axiomatic system, AD.

Unless you have a special reason for studying axiomatic systems, or are just looking for some really challenging puzzles, you should move on to the next chapter after these exercises and return only after chapter 6. This chapter makes sense here for conceptual reasons, but is completely out of order from a learning point of view. After chapter 6 you can return to this chapter, but recognize its place in the conceptual order.
E3.1. Where A1 is as above with rule MP and axioms ∧1, ∧2, and ∧3, construct derivations to show each of the following.

*a. A ∧ (B ∧ C) ⊢A1 B

b. A, B, C ⊢A1 A ∧ (B ∧ C)

c. A ∧ (B ∧ C) ⊢A1 (A ∧ B) ∧ C

d. (A ∧ B) ∧ (C ∧ D) ⊢A1 B ∧ C

e. ⊢A1 ((A ∧ B) → A) ∧ ((A ∧ B) → B)

3.2 Sentential

We begin by focusing on sentential forms, forms involving just ∼ and → (and so ∧, ∨, and ↔). The sentential component of our official axiomatic logic AD tells us how to manipulate such forms, whether they be forms for expressions in a sentential language like Ls, or in a quantificational language like Lq. The sentential fragment of AD includes three forms for logical axioms, and one rule.

AS A1. P → (Q → P)
   A2. (O → (P → Q)) → ((O → P) → (O → Q))
   A3. (∼Q → ∼P) → ((∼Q → P) → Q)

   MP P → Q, P ⊢ Q

We have already encountered MP. To take some cases to appear immediately below, the following are both of the sort A1.

A → (A → A)        (B → C) → (A → (B → C))

Observe that P and Q need not be different! You should be clear about these cases. Although MP is the only rule, we allow free movement between an expression and its abbreviated forms, with justification, abv. That is it! As above, Γ ⊢AD P just in case P is a consequence of Γ in AD; that is, Γ ⊢AD P just in case there is a derivation of P from premises in Γ.
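Though meaning is officially postponed to chapter 4, it is easy to check ahead that the sentential axioms are semantically safe: every instance of A1-A3 comes out true on every row of a truth table, and MP preserves truth, so sentential theorems of AD are tautologies. A quick Python sketch (the encoding and function names are my own):

```python
from itertools import product

# Formulas are atoms (strings) or tuples ('~', P) and ('->', P, Q);
# a row assigns True/False to each atom.

def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(part) for part in f[1:]))

def value(f, row):
    if isinstance(f, str):
        return row[f]
    if f[0] == '~':
        return not value(f[1], row)
    return (not value(f[1], row)) or value(f[2], row)  # material conditional

def tautology(f):
    names = sorted(atoms(f))
    return all(value(f, dict(zip(names, vals)))
               for vals in product([True, False], repeat=len(names)))
```

For instance, the A3 form (∼Q → ∼P) → ((∼Q → P) → Q) tests as a tautology, while a bare conditional P → Q does not.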
The following is a series of derivations where, as we shall see, each may depend on ones from before. At first, do not worry so much about strategy as about the mechanics of the system.

T3.1. ⊢AD A → A

1. A → ((A → A) → A)                                   A1
2. (A → ((A → A) → A)) → ((A → (A → A)) → (A → A))     A2
3. (A → (A → A)) → (A → A)                             2,1 MP
4. A → (A → A)                                         A1
5. A → A                                               3,4 MP


Line (1) is an axiom of the form A1 with A → A for Q. Line (2) is an axiom of the form A2 with A for O, A → A for P, and A for Q. Notice again that O and Q may be any formulas, so nothing prevents them from being the same. Similarly, line (4) is an axiom of form A1 with A in place of both P and Q. The applications of MP should be straightforward.

T3.2. A → B, B → C ⊢AD A → C

1. B → C                                   prem
2. (B → C) → (A → (B → C))                 A1
3. A → (B → C)                             2,1 MP
4. (A → (B → C)) → ((A → B) → (A → C))     A2
5. (A → B) → (A → C)                       4,3 MP
6. A → B                                   prem
7. A → C                                   5,6 MP

Line (4) is an instance of A2 which gives us our goal with two applications of MP: that is, from (4), A → C follows by MP if we have A → (B → C) and A → B. But the second of these is a premise, so the only real challenge is getting A → (B → C). But since B → C is a premise, we can use A1 to get anything arrow it, and that is just what we do by the first three lines.
T3.3. A → (B → C) ⊢AD B → (A → C)

1. B → (A → B)                             A1
2. A → (B → C)                             prem
3. (A → (B → C)) → ((A → B) → (A → C))     A2
4. (A → B) → (A → C)                       3,2 MP
5. B → (A → C)                             1,4 T3.2

In this case, the first four steps are very much like ones you have seen before. But the last is not. We have B → (A → B) on line (1), and (A → B) → (A → C) on line (4). These are of the form to be inputs to T3.2, with B for A, A → B for B, and A → C for C. T3.2 is a sort of transitivity or chain principle which lets us move from a first form to a last through some middle term. In this case, A → B is the middle term. So at line (5), we simply observe that lines (1) and (4), together with the reasoning from T3.2, give us the desired result.

What we have not produced is an official derivation, where each step is a premise, an axiom, or follows from previous lines by a rule. But we have produced an abbreviation of one. And nothing prevents us from unabbreviating by including the routine from T3.2 to produce a derivation in the official form. To see this, first, observe


that the derivation for T3.2 has its premises at lines (1) and (6), where lines with the
corresponding forms in the derivation for T3.3 appear at (4) and (1). However, it is
a simple matter to reorder the derivation for T3.2 so that it takes its premises from
those same lines. Thus here is another demonstration for T3.2.

(C)
1. A → B                                   prem
⋮
4. B → C                                   prem
5. (B → C) → (A → (B → C))                 A1
6. A → (B → C)                             5,4 MP
7. (A → (B → C)) → ((A → B) → (A → C))     A2
8. (A → B) → (A → C)                       7,6 MP
9. A → C                                   8,1 MP

Compared to the original derivation for T3.2, all that is different is the order of a few lines, and corresponding line numbers. The reason for reordering the lines is for a merge of this derivation with the one for T3.3.

But now, although we are after expressions of the form A → B and B → C, the actual expressions we want for T3.3 are B → (A → B) and (A → B) → (A → C). But we can convert derivation (C) to one with those very forms by uniform substitution of B for every A, (A → B) for every B, and (A → C) for every C; that is, we apply our original map to the entire derivation (C). The result is as follows.

(D)
1. B → (A → B)                                                     prem
⋮
4. (A → B) → (A → C)                                               prem
5. ((A → B) → (A → C)) → (B → ((A → B) → (A → C)))                 A1
6. B → ((A → B) → (A → C))                                         5,4 MP
7. (B → ((A → B) → (A → C))) → ((B → (A → B)) → (B → (A → C)))     A2
8. (B → (A → B)) → (B → (A → C))                                   7,6 MP
9. B → (A → C)                                                     8,1 MP

You should trace the parallel between derivations (C) and (D) all the way through. And you should verify that (D) is a derivation on its own. This is an application of the point that our derivation for T3.2 applies to any premises and conclusions of that form. The result is a direct demonstration that B → (A → B), (A → B) → (A → C) ⊢AD B → (A → C).

And now it is a simple matter to merge the lines from (D) into the derivation for T3.3 to produce a complete demonstration that A → (B → C) ⊢AD B → (A → C).

(E)
1. B → (A → B)                                                     A1
2. A → (B → C)                                                     prem
3. (A → (B → C)) → ((A → B) → (A → C))                             A2
4. (A → B) → (A → C)                                               3,2 MP
5. ((A → B) → (A → C)) → (B → ((A → B) → (A → C)))                 A1
6. B → ((A → B) → (A → C))                                         5,4 MP
7. (B → ((A → B) → (A → C))) → ((B → (A → B)) → (B → (A → C)))     A2
8. (B → (A → B)) → (B → (A → C))                                   7,6 MP
9. B → (A → C)                                                     8,1 MP

Lines (1)-(4) are the same as from the derivation for T3.3, and include what are the premises to (D). Lines (5)-(9) are the same as from (D). The result is a demonstration for T3.3 in which every line is a premise, an axiom, or follows from previous lines by MP. Again, you should follow each step. It is hard to believe that we could think up this last derivation, particularly at this early stage of our career. However, if we can produce the simpler derivation, we can be sure that this more complex one exists. Thus we can be sure that the final result is a consequence of the premise in AD. That is the point of our direct appeal to T3.2 in the original derivation of T3.3. And similarly in cases that follow. In general, we are always free to appeal to prior results in any derivation, so that our toolbox gets bigger at every stage.
T3.4. ⊢AD (B → C) → ((A → B) → (A → C))

1. (B → C) → (A → (B → C))                 A1
2. (A → (B → C)) → ((A → B) → (A → C))     A2
3. (B → C) → ((A → B) → (A → C))           1,2 T3.2

Again, we have an application of T3.2. In this case, the middle term (the B) from T3.2 maps to A → (B → C). Once we see that the consequent of what we want is like the consequent of A2, we should be inspired by T3.2 to go for (1) as a link between the antecedent of what we want, and the antecedent of A2. As it turns out, this is easy to get as an instance of A1. It is helpful to say to yourself in words what the various axioms and theorems do. Thus, given some P, A1 yields anything arrow it. And T3.2 is a simple transitivity principle.
T3.5. ⊢AD (A → B) → ((B → C) → (A → C))

1. (B → C) → ((A → B) → (A → C))           T3.4
2. (A → B) → ((B → C) → (A → C))           1 T3.3

T3.5 is like T3.4 except that A → B and B → C switch places. But T3.3 precisely switches terms in those places, with B → C for A, A → B for B, and A → C for C. Again, often what is difficult about these derivations is seeing what you can do. Thus it is good to say to yourself in words what the different principles give you. Once you realize what T3.3 does, it is obvious that you have T3.5 immediately from T3.4.
T3.6. B, A → (B → C) ⊢AD A → C

Hint: You can get this in the basic system using just A1 and A2. But you can get it in just four lines if you use T3.3.

T3.7. ⊢AD (∼A → A) → A

Hint: This follows in just three lines from A3, with an instance of T3.1.
T3.8. ⊢AD (∼B → ∼A) → (A → B)

1. (∼B → ∼A) → ((∼B → A) → B)                          A3
2. ((∼B → A) → B) → ((A → (∼B → A)) → (A → B))         T3.4
3. A → (∼B → A)                                        A1
4. ((∼B → A) → B) → (A → B)                            2,3 T3.6
5. (∼B → ∼A) → (A → B)                                 1,4 T3.2

The idea behind this derivation is that the antecedent of A3 is the antecedent of our goal. So we can get the goal by T3.2 with the instance of A3 on (1) and (4). That is, given (∼B → ∼A) → X, what we need to get the goal by an application of T3.2 is X → (A → B). But that is just what (4) is. The challenge is to get (4). Our strategy uses T3.4, and then T3.6 with A1 to delete the middle term. This derivation is not particularly easy to see. Here is another approach, which is not all that easy either.

(F)
1. (∼B → ∼A) → ((∼B → A) → B)              A3
2. (∼B → A) → ((∼B → ∼A) → B)              1 T3.3
3. A → (∼B → A)                            A1
4. A → ((∼B → ∼A) → B)                     3,2 T3.2
5. (∼B → ∼A) → (A → B)                     4 T3.3

This derivation also begins with A3. The idea this time is to use T3.3 to swing ∼B → A out, replace it by A with T3.2 and A1, and then use T3.3 to swing A back in.
T3.9. ⊢AD ∼A → (A → B)

Hint: You can do this in three lines with T3.8 and an instance of A1.

T3.10. ⊢AD ∼∼A → A

Hint: You can do this in three lines with instances of T3.7 and T3.9.

T3.11. ⊢AD A → ∼∼A

Hint: You can do this in three lines with instances of T3.8 and T3.10.

*T3.12. ⊢AD (A → B) → (∼∼A → ∼∼B)

Hint: Use T3.5 and T3.10 to get (A → B) → (∼∼A → B); then use T3.4 and T3.11 to get (∼∼A → B) → (∼∼A → ∼∼B); the result follows easily by T3.2.

T3.13. ⊢AD (A → B) → (∼B → ∼A)

Hint: You can do this in three lines with instances of T3.8 and T3.12.

T3.14. ⊢AD (∼A → B) → (∼B → A)

Hint: Use T3.4 and T3.10 to get (∼B → ∼∼A) → (∼B → A); the result follows easily with an instance of T3.13.

T3.15. ⊢AD (A → B) → ((∼A → B) → B)

Hint: Use T3.13 and A3 to get (A → B) → ((∼B → A) → B); then use T3.5 and T3.14 to get ((∼B → A) → B) → ((∼A → B) → B); the result follows easily by T3.2.

*T3.16. ⊢AD A → (∼B → ∼(A → B))

Hint: Use instances of T3.1 and T3.3 to get A → ((A → B) → B); then use T3.13 to turn around the consequent. This idea of deriving conditionals in reversed form, and then using T3.13 or T3.14 to turn them around, is frequently useful for getting a tilde outside of a complex expression.
T3.17. ⊢AD A → (A ∨ B)

1. ∼A → (A → B)        T3.9
2. A → (∼A → B)        1 T3.3
3. A → (A ∨ B)         2 abv

We set as our goal the unabbreviated form. We have this at (2). Then, in the last line, simply observe that the goal abbreviates what has already been shown.
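Derivations like T3.17 lean on the chapter 2 abbreviations: A ∨ B abbreviates ∼A → B, and A ∧ B abbreviates ∼(A → ∼B). That these abbreviations behave as expected can be confirmed by truth table; here is a quick Python check of my own (the function names arrow and tilde are hypothetical):

```python
# Check that the abbreviations match the usual truth tables for 'or' and
# 'and' on every valuation of A and B.

def arrow(a, b):
    return (not a) or b     # material conditional

def tilde(a):
    return not a

for a in (True, False):
    for b in (True, False):
        assert arrow(tilde(a), b) == (a or b)           # A v B := ~A -> B
        assert tilde(arrow(a, tilde(b))) == (a and b)   # A & B := ~(A -> ~B)
```

So, for example, line (2) of T3.17 really is the ∨-free way of saying A → (A ∨ B).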
T3.18. ⊢AD A → (B ∨ A)

Hint: Go for A → (∼B → A). Then, as above, you can get the desired result in one step by abv.

T3.19. ⊢AD (A ∧ B) → B

T3.20. ⊢AD (A ∧ B) → A

*T3.21. A → (B → C) ⊢AD (A ∧ B) → C

T3.22. (A ∧ B) → C ⊢AD A → (B → C)

T3.23. A, A ↔ B ⊢AD B

Hint: A ↔ B abbreviates the same thing as (A → B) ∧ (B → A); you may thus move to this expression from A ↔ B by abv.

T3.24. B, A ↔ B ⊢AD A

T3.25. ∼A, A ↔ B ⊢AD ∼B

T3.26. ∼B, A ↔ B ⊢AD ∼A
*E3.2. Provide derivations for T3.6, T3.7, T3.9, T3.10, T3.11, T3.12, T3.13, T3.14,
T3.15, T3.16, T3.18, T3.19, T3.20, T3.21, T3.22, T3.23, T3.24, T3.25, and
T3.26. As you are working these problems, you may find it helpful to refer to
the AD summary on p. 85.


E3.3. For each of the following, expand derivations to include all the steps from
theorems. The result should be a derivation in which each step is either a
premise, an axiom, or follows from previous lines by a rule. Hint: it may be
helpful to proceed in stages as for (C), (D) and then (E) above.
a. Expand your derivation for T3.7.
*b. Expand the above derivation for T3.4.
E3.4. Consider an axiomatic system A2 which takes ∧ and ∼ as primitive operators, and treats P → Q as an abbreviation for ∼(P ∧ ∼Q). The axiom schemes are,

A2 A1. P → (P ∧ P)
   A2. (P ∧ Q) → P
   A3. (O → P) → (∼(P ∧ Q) → ∼(Q ∧ O))

MP is the only rule. Provide derivations for each of the following, where derivations may appeal to any prior result (no matter what you have done).

*a. A → B, B → C ⊢A2 ∼(∼C ∧ A)

b. ⊢A2 ∼(∼A ∧ A)

c. ⊢A2 ∼∼A → A

*d. ⊢A2 ∼(A ∧ B) → (B → ∼A)

e. ⊢A2 A → ∼∼A

f. ⊢A2 (A → B) → (∼B → ∼A)

*g. A → B ⊢A2 ∼B → ∼A

h. A → B ⊢A2 (C ∧ A) → (B ∧ C)

*i. A → B, B → C, C → D ⊢A2 A → D

j. ⊢A2 A → A

k. ⊢A2 (A ∧ B) → (B ∧ A)

l. A → B, B → C ⊢A2 A → C

m. ∼B → B ⊢A2 B

n. B → ∼B ⊢A2 ∼B

o. ⊢A2 (A ∧ B) → B

p. A → B, C → D ⊢A2 (A ∧ C) → (B ∧ D)

q. B → C ⊢A2 (A ∧ B) → (A ∧ C)

r. A → B, A → C ⊢A2 A → (B ∧ C)

s. ⊢A2 ((A ∧ B) ∧ C) → (A ∧ (B ∧ C))

t. ⊢A2 (A ∧ (B ∧ C)) → ((A ∧ B) ∧ C)

*u. ⊢A2 (A → (B → C)) → ((A ∧ B) → C)

v. ⊢A2 ((A ∧ B) → C) → (A → (B → C))

*w. A → B, A → (B → C) ⊢A2 A → C

x. ⊢A2 A → (B → (A ∧ B))

y. ⊢A2 A → (B → A)

Hints: (i): Apply (a) to the first two premises and (f) to the third; then recognize that you have the makings for an application of A3. (j): Apply A1, two instances of (h), and an instance of (i) to get A → ((A ∧ A) ∧ (A ∧ A)); the result follows easily with A2 and (i). (m): ∼B → B abbreviates ∼(∼B ∧ ∼B); ∼B → (∼B ∧ ∼B) is immediate from A1; you can turn this around by (f) to get ∼(∼B ∧ ∼B) → ∼∼B; then it is easy. (u): Use abv so that you are going for ∼(A ∧ ∼∼(B ∧ ∼C)) → ∼((A ∧ B) ∧ ∼C); plan on getting to this by (f); the proof then reduces to working from ((A ∧ B) ∧ ∼C). (v): Structure your proof very much as with (u). (w): Use (u) to set up a chain to which you can apply transitivity.
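As with AD, one can check ahead that system A2 is at least semantically safe: reading P → Q as the abbreviation ∼(P ∧ ∼Q), each of the three axiom schemes comes out true on every row of a truth table, and MP preserves truth. A short Python verification of my own (not part of the exercise):

```python
from itertools import product

# With -> read as the abbreviation ~(P & ~Q), arrow(p, q) computes the
# truth value of P -> Q directly from 'and' and 'not'.

def arrow(p, q):
    return not (p and not q)

# Every row of the truth table verifies each A2 axiom scheme.
for o, p, q in product([True, False], repeat=3):
    assert arrow(p, p and p)                                  # A1
    assert arrow(p and q, p)                                  # A2
    assert arrow(arrow(o, p),
                 arrow(not (p and q), not (q and o)))         # A3
```

Note that arrow so defined agrees with the material conditional, so the check lines up with the truth tables of chapter 4.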

3.3 Quantificational

We begin this section by introducing one new rule, and some axioms for quantifier forms. There will be one axiom and one rule for manipulating quantifiers, and some axioms for features of equality. After introducing the axioms and rule, we use them with application to some theorems of Peano Arithmetic.

3.3.1 Quantifiers

Excluding equality, to work with quantifier forms, we add just one axiom form and one rule. To state the axiom, we need a couple of definitions. First, for any formula A, variable x, and term t, say A^x_t is A with all the free instances of x replaced by t. And say t is free for x in A iff all the variables in the replacing instances of t remain free after substitution in A^x_t. Thus, for example, where A is (∀xRxy ∨ Px),

(G) (∀xRxy ∨ Px)^x_y is ∀xRxy ∨ Py

There are three instances of x in ∀xRxy ∨ Px, but only the last is free; so y is substituted only for that instance. Since the substituted y is free in the resultant expression, y is free for x in ∀xRxy ∨ Px. Similarly,

(H) (∀x(x = y) ∨ Ryx)^y_{f^1x} is ∀x(x = f^1x) ∨ Rf^1xx

Both instances of y in ∀x(x = y) ∨ Ryx are free; so our substitution replaces both. But the x in the first instance of f^1x is bound upon substitution; so f^1x is not free for y in ∀x(x = y) ∨ Ryx. Notice that if x is not free in A, then replacing every free instance of x in A with some term results in no change. So if x is not free in A, then A^x_t is A. Similarly, A^x_x is just A itself. Further, any variable x is sure to be free for itself in a formula A: if every free instance of variable x is replaced with x, then the replacing instances are sure to be free! And constants are sure to be free for a variable x in a formula A. Since a constant c is a term without variables, no variable in the replacing term is bound upon substitution for free instances of x.
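The two definitions, substitution A^x_t and the "free for" constraint, are simple enough to compute. In the following Python sketch (the representation and names are my own), terms are variables (strings) or applications like ('f', t), and formulas are atomic tuples such as ('R', x, y) or ('=', x, y), binary ('v', A, B) or ('->', A, B), and quantifications ('all', x, A):

```python
BINARY = ('->', 'v')   # conditional and disjunction

def term_vars(t):
    return {t} if isinstance(t, str) else term_vars(t[1])

def free_vars(A):
    if A[0] == 'all':
        return free_vars(A[2]) - {A[1]}
    if A[0] in BINARY:
        return free_vars(A[1]) | free_vars(A[2])
    return set().union(*(term_vars(u) for u in A[1:]))

def _subst_term(u, x, t):
    if isinstance(u, str):
        return t if u == x else u
    return (u[0], _subst_term(u[1], x, t))

def subst(A, x, t):
    """A with exactly the FREE instances of x replaced by term t."""
    if A[0] == 'all':
        return A if A[1] == x else ('all', A[1], subst(A[2], x, t))
    if A[0] in BINARY:
        return (A[0], subst(A[1], x, t), subst(A[2], x, t))
    return (A[0],) + tuple(_subst_term(u, x, t) for u in A[1:])

def free_for(t, x, A):
    """t is free for x in A: no variable of t is captured by a quantifier."""
    if A[0] == 'all':
        y, B = A[1], A[2]
        if y == x:
            return True      # no free x below, so nothing is replaced
        if x in free_vars(B) and y in term_vars(t):
            return False     # a replacing instance of t would be bound
        return free_for(t, x, B)
    if A[0] in BINARY:
        return free_for(t, x, A[1]) and free_for(t, x, A[2])
    return True
```

On example (G), substituting y for x in ∀xRxy ∨ Px touches only the free x, and y is free for x; on example (H), f^1x is not free for y in ∀x(x = y) ∨ Ryx.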


For the quantificational version of axiomatic derivation system AD, in addition to A1, A2, A3, and MP from AS, we add an axiom A4 and a rule Gen (Generalization).

AQ A4. ∀xP → P^x_t         where t is free for x in P

   Gen A → B ⊢ A → ∀xB     where x is not free in A

A1, A2, A3, and MP remain from before. The axiom A4 and rule Gen are new. A4 is a conditional in which the antecedent is a quantified expression; the consequent drops the quantifier, and substitutes term t for each free instance of the quantified variable in the resulting P, subject to the constraint that the term t is free for the quantified variable in P. Thus, for example, in Lq,

(I) ∀xRx → Rx        ∀xRx → Ry        ∀xRx → Ra        ∀xRx → Rf^1z

are all instances of A4. In these cases, P is Rx; x is free in it, and since Rx includes no quantifier, it is easy to see that the substituted terms are all free for x in it. So each of these satisfies the condition on A4. The following are also instances of A4.
(J) ∀x∀yRxy → ∀yRzy        ∀x∀yRxy → ∀yRf^1xy

In each case, we drop the main quantifier, and substitute a term for the quantified variable, where the substituted term remains free in the resultant expression. However, these cases contrast with the ones that follow.

(K) ∀x∀yRxy → ∀yRyy        ∀x∀yRxy → ∀yRf^1yy

In these cases, we drop the quantifier and make a substitution as before. But the substituted terms are not free. So the constraint on A4 is violated, and these formulas do not qualify as instances of the axiom.
The new rule also comes with a constraint. Given P → Q, one may move to P → ∀xQ so long as x is not free in P. Thus the leftmost three cases below are legitimate applications of Gen, where the right-hand case is not.

(L)  Ry → Sx          Ra → Sx          ∀xRx → Sx          Rx → Sx
     Ry → ∀xSx        Ra → ∀xSx        ∀xRx → ∀xSx        Rx → ∀xSx   No!

In the leftmost three cases, for one reason or another, the variable x is not free in the antecedent of the premise. Only in the last case is the variable for which the quantifier is introduced free in the antecedent of the premise. So the rightmost case violates the constraint, and is not a legitimate application of Gen. Continue to move freely between an expression and its abbreviated forms, with justification, abv. That is it!
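The Gen constraint is likewise mechanical: from P → Q one may pass to P → ∀xQ just in case x is not free in P. Here is a small Python check over the four cases in (L); the encoding and names are my own, and as a simplifying assumption only lower-case letters from the end of the alphabet count as variables, so that a counts as a constant.

```python
VARS = {'w', 'x', 'y', 'z'}

def free_vars(A):
    """Free variables of A, with formulas as atomic tuples like ('S', 'x'),
    conditionals ('->', A, B), and quantifications ('all', x, A)."""
    if A[0] == 'all':
        return free_vars(A[2]) - {A[1]}
    if A[0] == '->':
        return free_vars(A[1]) | free_vars(A[2])
    return {t for t in A[1:] if t in VARS}

def gen_ok(conditional, x):
    """May Gen move from P -> Q to P -> (all x)Q?"""
    return x not in free_vars(conditional[1])
```

The first three cases of (L) pass, since x is not free in Ry, in Ra, or in ∀xRx; the fourth fails, since x is free in Rx.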


Because the axioms and rule from before remain available, nothing blocks reasoning with sentential forms as before. Thus, for example, ∀xRx → ∀xRx and, more generally, ∀xA → ∀xA are of the form A → A, and we might derive them by exactly the five steps for T3.1 above. Or, we might just write them down with justification, T3.1. Similarly any theorem from the sentential fragment of AD is a theorem of the larger quantificational part. Here is a way to get ∀xRx → ∀xRx without either A1 or A2.

(M) ⊢AD ∀xRx → ∀xRx

1. ∀xRx → Rx        A4
2. ∀xRx → ∀xRx      1 Gen

The x is sure to be free for x in Rx. So (1) is an instance of A4. And the only instances of x are bound in ∀xRx. So the application of Gen satisfies its constraint. The reasoning is similar in the more general case.

(N) ⊢AD ∀xA → ∀xA

1. ∀xA → A          A4
2. ∀xA → ∀xA        1 Gen

Again, A^x_x is A, and since only free instances of x are replaced, none of the replacing instances of x is bound in the result. So x is free for x in A, and (1) is therefore a legitimate instance of A4. Because its main operator is an x-quantifier, no instance of x can be free in ∀xA. So we move directly to (2) by Gen.
Here are a few more examples.

T3.27. ⊢AD ∀xA → ∀yA^x_y    where y is not free in ∀xA but free for x in A

1. ∀xA → A^x_y        A4
2. ∀xA → ∀yA^x_y      1 Gen

The results of derivations (M) and (N) are instances of this more general principle. The difference is that T3.27 makes room for variable exchange. Given the constraints, this derivation works for exactly the same reasons as the ones before. If y is free for x in A, then (1) is a straightforward instance of A4. And if y is not free in ∀xA, the constraint on Gen is sure to be met. A simple instance of T3.27 in Lq is ⊢AD ∀xRx → ∀yRy. If you are confused about restrictions on the axiom and rule, think about the derivation as applied to this case. While our quantified instances of T3.1 could have been derived by sentential rules, T3.27 cannot; ∀xA → ∀xA has sentential form A → A; but when x is not the same as y, ∀xA → ∀yA^x_y has sentential form A → B.

T3.28. A ⊢AD ∀xA    (a derived Gen*)

1. A                                               prem
2. A → ((∀y(y = y) → ∀y(y = y)) → A)               A1
3. (∀y(y = y) → ∀y(y = y)) → A                     2,1 MP
4. (∀y(y = y) → ∀y(y = y)) → ∀xA                   3 Gen
5. ∀y(y = y) → ∀y(y = y)                           T3.1
6. ∀xA                                             4,5 MP

In this derivation, we use ∀y(y = y) → ∀y(y = y) as a mere dummy to bring Gen into play. For step (4), it is important that this expression has no free variables, and so no free occurrences of x. For step (6), it is important that it is a theorem, and so can be asserted on line (5). This theorem is so frequently used that we think of it as a derived form of Gen (Gen*).

*T3.29. ⊢AD A^x_t → ∃xA    for any term t free for x in A

Hint: As in sentential cases, show the unabbreviated form A^x_t → ∼∀x∼A and get the final result by abv. You should find ∀x∼A → ∼A^x_t to be a useful instance of A4. Notice that (∼A)^x_t is the same expression as ∼(A^x_t), as all the replacements must go on inside the A.

T3.30. A → B ⊢AD ∃xA → B    where x is not free in B.

Hint: Similarly, go for an unabbreviated form, and then get the goal by abv.

T3.31. ⊢AD ∀x(A → B) → (A → ∀xB)    where x is not free in A

Hint: Consider uses of T3.21 and T3.22.

This completes the fragment of AD for sentential operators and quantifiers. It remains to add axioms for equality.

*E3.5. Provide derivations for T3.29, T3.30, and T3.31, explaining in words for every step that has a restriction, how you know that that restriction is met.

E3.6. Provide derivations to show each of the following.

*a. ∀x(Hx → Rx), ∀yHy ⊢AD ∀zRz

b. ∀y(Fy → Gy) ⊢AD ∃zFz → ∃xGx

*c. ⊢AD ∃x∀yRxy → ∀y∃xRxy

d. ∀y∀x(Fx → By) ⊢AD ∀y(∃xFx → By)

e. ⊢AD ∃x(Fx → ∀yGy) → ∃x∀y(Fx → Gy)

3.3.2 Equality

We complete our axiomatic derivation system AD with three axioms governing equality. In this case, the axioms assert particularly simple, or basic, facts. For any variables x₁ … xₙ and y, n-place function symbol hⁿ and n-place relation symbol Rⁿ,
the following forms are axioms.

AE A5. (y = y)
   A6. (xᵢ = y) → (hⁿx₁…xᵢ…xₙ = hⁿx₁…y…xₙ)
   A7. (xᵢ = y) → (Rⁿx₁…xᵢ…xₙ → Rⁿx₁…y…xₙ)

From A5, (x = x) and (z = z) are axioms. Of course, these are abbreviations for
=xx and =zz. This should be straightforward. The others are complicated only
by abstract presentation. For A6, hⁿx₁…xᵢ…xₙ differs from hⁿx₁…y…xₙ
just in that variable xᵢ is replaced by variable y. xᵢ may be any of the variables in
x₁…xₙ. Thus, for example,

(O)  (x = y) → (f¹x = f¹y)
     (x = y) → (f³wxy = f³wyy)

are simple examples of A6. In the one case, we have a string of one variable
and replace its only member based on the equality. In the other case, the string is
of three variables, and we replace the second. Similarly, Rⁿx₁…xᵢ…xₙ differs
from Rⁿx₁…y…xₙ just in that variable xᵢ is replaced by y. xᵢ may be any of
the variables in x₁…xₙ. Thus, for example,

(P)  (x = z) → (A¹x → A¹z)
     (x = y) → (A²xz → A²yz)

are simple examples of A7.


This completes the axioms and rules of our full derivation system AD. As examples, let us begin with some fundamental principles of equality. Suppose that r, s,
and t are arbitrary terms.

T3.32. ⊢AD (t = t)    reflexivity of equality

 1. y = y                          A5
 2. ∀y(y = y)                      1 Gen*
 3. ∀y(y = y) → (t = t)            A4
 4. t = t                          3,2 MP

Since y = y has no quantifiers, any term t is sure to be free for y in it. So (3) is sure
to be an instance of A4. This theorem strengthens A5 insofar as the axiom applies
only to variables, but the theorem has application to arbitrary terms. Thus (z = z) is
an instance of the axiom, but (f²xy = f²xy) is an instance of the theorem as well.
We convert variables to terms by Gen* with A4 and MP. This pattern repeats in the
following.
T3.33. ⊢AD (t = s) → (s = t)    symmetry of equality

 1. (x = y) → ((x = x) → (y = x))                    A7
 2. (x = x)                                          A5
 3. (x = y) → (y = x)                                1,2 T3.6
 4. ∀x((x = y) → (y = x))                            3 Gen*
 5. ∀x((x = y) → (y = x)) → ((t = y) → (y = t))      A4
 6. (t = y) → (y = t)                                5,4 MP
 7. ∀y((t = y) → (y = t))                            6 Gen*
 8. ∀y((t = y) → (y = t)) → ((t = s) → (s = t))      A4
 9. (t = s) → (s = t)                                8,7 MP

In (1), x = x is (an abbreviation of an expression) of the form R²xx, and y = x is
of that same form with the first instance of x replaced by y. Thus (1) is an instance of
A7. At line (3) we have symmetry expressed at the level of variables. Then the task
is just to convert from variables to terms as before. Notice that, again, (5) and (8) are
legitimate applications of A4 insofar as there are no quantifiers in the consequents.
T3.34. ⊢AD (r = s) → ((s = t) → (r = t))    transitivity of equality

Hint: Start with (y = x) → ((y = z) → (x = z)) as an instance of A7,
being sure that you see how it is an instance of A7. Then you can use T3.33
to get (x = y) → ((y = z) → (x = z)), and all you have to do is convert
from variables to terms as above.

T3.35. r = s, s = t ⊢AD r = t

Hint: This is a mere recasting of T3.34 and follows directly from it.

T3.36. ⊢AD (tᵢ = s) → (hⁿt₁…tᵢ…tₙ = hⁿt₁…s…tₙ)

Hint: For any given instance of this theorem, you can start with (x = y) →
(hⁿx₁…x…xₙ = hⁿx₁…y…xₙ) as an instance of A6. Then it is easy
to convert x₁…xₙ to t₁…tₙ, and y to s.


T3.37. ⊢AD (tᵢ = s) → (Rⁿt₁…tᵢ…tₙ → Rⁿt₁…s…tₙ)

Hint: As for T3.36, for any given instance of this theorem, you can start with
(x = y) → (Rⁿx₁…x…xₙ → Rⁿx₁…y…xₙ) as an instance of A7.
Then it is easy to convert x₁…xₙ to t₁…tₙ, and y to s.

We will see further examples in the context of the extended application to come in
the next section.

E3.7. Provide demonstrations for T3.34 and T3.35.

E3.8. Provide demonstrations for the following instances of T3.36 and T3.37. Then,
in each case, say in words how you would go about showing the results for
an arbitrary number of places.

a. (f¹x = g²xy) → (h³zf¹xf¹z = h³zg²xyf¹z)

*b. (s = t) → (A²rs → A²rt)

3.3.3 Peano Arithmetic

L_NT is a language like L<_NT introduced from section 2.2.5 on p. 61 but without the <
symbol: There is the constant symbol ∅, the function symbols S, +, and ×, and the
relation symbol =. It is possible to treat x ≤ y as an abbreviation for ∃v((v + x) = y)
and x < y as an abbreviation for ∃v((v + Sx) = y) (these definitions are summarized
in the language of arithmetic reference, p. 300). Officially, formulas of this language
are so far uninterpreted. It is natural, however, to think of them with their usual
meanings, with ∅ for zero, S the successor function, + the addition function, × the
multiplication function, and = the equality relation. But, again, we do not need to
think about that for now.

We will say that a formula P is an AD consequence of the Peano Axioms¹ just in
case P follows in AD from a collection of premises which includes all formulas of
the following forms for variables x and y.

PA 1. ¬(Sx = ∅)
   2. (Sx = Sy) → (x = y)

¹After the work of R. Dedekind and G. Peano. For historical discussion, see Wang, "The Axiomatization of Arithmetic." These axioms are presented as schemas for formulas with free variables. But
with Gen* and A4, they are equivalent to universally quantified forms as derived in, say, (2) of T3.39
below, and we might as well have stated the axioms as universally quantified sentences.


AD Quick Reference

AD A1. P → (Q → P)
   A2. (O → (P → Q)) → ((O → P) → (O → Q))
   A3. (¬Q → ¬P) → ((¬Q → P) → Q)
   A4. ∀xP → P^x_t    where t is free for x in P
   A5. (x = x)
   A6. (xᵢ = y) → (hⁿx₁…xᵢ…xₙ = hⁿx₁…y…xₙ)
   A7. (xᵢ = y) → (Rⁿx₁…xᵢ…xₙ → Rⁿx₁…y…xₙ)

MP  P → Q, P ⊢ Q
Gen A → B ⊢ A → ∀xB    where x is not free in A

T3.1  ⊢AD A → A
T3.2  A → B, B → C ⊢AD A → C
T3.3  A → (B → C) ⊢AD B → (A → C)
T3.4  ⊢AD (B → C) → ((A → B) → (A → C))
T3.5  ⊢AD (A → B) → ((B → C) → (A → C))
T3.6  B, A → (B → C) ⊢AD A → C
T3.7  ⊢AD (¬A → A) → A
T3.8  ⊢AD (¬B → ¬A) → (A → B)
T3.9  ⊢AD ¬A → (A → B)
T3.10 ⊢AD ¬¬A → A
T3.11 ⊢AD A → ¬¬A
T3.12 ⊢AD (A → B) → (¬¬A → ¬¬B)
T3.13 ⊢AD (A → B) → (¬B → ¬A)
T3.14 ⊢AD (¬A → B) → (¬B → A)
T3.15 ⊢AD (A → B) → ((¬A → B) → B)
T3.16 ⊢AD A → (¬B → ¬(A → B))
T3.17 ⊢AD A → (A ∨ B)
T3.18 ⊢AD A → (B ∨ A)
T3.19 ⊢AD (A ∧ B) → B
T3.20 ⊢AD (A ∧ B) → A
T3.21 A → (B → C) ⊢AD (A ∧ B) → C
T3.22 (A ∧ B) → C ⊢AD A → (B → C)
T3.23 A, A ↔ B ⊢AD B
T3.24 B, A ↔ B ⊢AD A
T3.25 ¬A, A ↔ B ⊢AD ¬B
T3.26 ¬B, A ↔ B ⊢AD ¬A
T3.27 ⊢AD ∀xA → ∀yA^x_y    where y is not free in ∀xA but is free for x in A
T3.28 A ⊢AD ∀xA    (Gen*)
T3.29 ⊢AD A^x_t → ∃xA    where t is free for x in A
T3.30 A → B ⊢AD ∃xA → B    where x is not free in B
T3.31 ⊢AD ∀x(A → B) → (A → ∀xB)    where x is not free in A
T3.32 ⊢AD (t = t)
T3.33 ⊢AD (t = s) → (s = t)
T3.34 ⊢AD (r = s) → ((s = t) → (r = t))
T3.35 r = s, s = t ⊢AD r = t
T3.36 ⊢AD (tᵢ = s) → (hⁿt₁…tᵢ…tₙ = hⁿt₁…s…tₙ)
T3.37 ⊢AD (tᵢ = s) → (Rⁿt₁…tᵢ…tₙ → Rⁿt₁…s…tₙ)


   3. (x + ∅) = x
   4. (x + Sy) = S(x + y)
   5. (x × ∅) = ∅
   6. (x × Sy) = ((x × y) + x)
   7. (P^x_∅ ∧ ∀x(P → P^x_Sx)) → ∀xP

In the ordinary case we suppress mention of PA1-PA7 as premises, and simply write
PA ⊢AD P to indicate that P is an AD consequence of the Peano axioms: that there
is an AD derivation of P which may include appeal to any of PA1-PA7.

The axioms set up basic arithmetic on the non-negative integers. Intuitively, ∅ is
not the successor of any non-negative integer (PA1); if the successor of x is the same
as the successor of y, then x is y (PA2); x plus ∅ is equal to x (PA3); x plus one
more than y is equal to one more than x plus y (PA4); x times ∅ is equal to ∅ (PA5);
x times one more than y is equal to x times y plus x (PA6); and if P applies to ∅,
and for any x, if P applies to x then it also applies to Sx, then P applies to every x
(PA7). This last form represents the principle of mathematical induction.
Sometimes it is convenient to have this principle in rule form.

T3.38. P^x_∅, ∀x(P → P^x_Sx), PA ⊢AD ∀xP    (a derived Ind*)

 1. P^x_∅                                      prem
 2. ∀x(P → P^x_Sx)                             prem
 3. (P^x_∅ ∧ ∀x(P → P^x_Sx)) → ∀xP             PA7
 4. P^x_∅ → (∀x(P → P^x_Sx) → ∀xP)             3 T3.22
 5. ∀x(P → P^x_Sx) → ∀xP                       4,1 MP
 6. ∀xP                                        5,2 MP

Observe the way we simply appeal to PA7 as a premise at (3). Again, that we can
do this in a derivation is a consequence of our taking all the axioms available as
premises. So if we were to encounter P^x_∅ and ∀x(P → P^x_Sx) in a derivation with the
axioms of PA, we could safely move to the conclusion ∀xP by this derived rule
Ind*. We will have much more to say about the principle of mathematical induction
in Part II. For now, it is enough to recognize its instances. Thus, for example, if P is
¬(x = Sx), the corresponding instance of PA7 would be,

(Q)  (¬(∅ = S∅) ∧ ∀x(¬(x = Sx) → ¬(Sx = SSx))) → ∀x¬(x = Sx)

There is the formula with ∅ substituted for x, the formula itself, and the formula
with Sx substituted for x. If the entire antecedent is satisfied, then the formula holds
for every x. For the corresponding application of T3.38 you would need ¬(∅ = S∅)
and ∀x(¬(x = Sx) → ¬(Sx = SSx)) in order to move to the conclusion
that ∀x¬(x = Sx). You should track these examples through. The principle of
mathematical induction turns out to be essential for deriving many general results.

As before, if a theorem is derived from some premises, we use the theorem in
derivations that follow. Thus we build toward increasingly complex results. Let us
start with some simple generalizations of the premises for application to arbitrary
terms. The derivations all follow the Gen* / A4 / MP pattern we have seen before.
As usual, let s and t be terms and w, x, and y variables.
T3.39. PA ⊢AD ¬(St = ∅)

 1. ¬(Sx = ∅)                          PA1
 2. ∀x¬(Sx = ∅)                        1 Gen*
 3. ∀x¬(Sx = ∅) → ¬(St = ∅)            A4
 4. ¬(St = ∅)                          3,2 MP

As usual, because there is no quantifier in the consequent, (3) is sure to satisfy the
constraint on A4, no matter what t may be.
*T3.40. PA ⊢AD (St = Ss) → (t = s)

T3.41. PA ⊢AD (t + ∅) = t

T3.42. PA ⊢AD (t + Ss) = S(t + s)

T3.43. PA ⊢AD (t × ∅) = ∅

T3.44. PA ⊢AD (t × Ss) = ((t × s) + t)

If a theorem T3.n is an equality (t = s), let T3.n* be (s = t). Thus T3.41* is
PA ⊢AD t = (t + ∅); T3.42* is PA ⊢AD S(t + s) = (t + Ss). In each case, the
result is immediate from the theorem with T3.33 and MP. Notice that t and s in these
theorems may be any terms. Thus,

(R)  (x + ∅) = x    ((x × y) + ∅) = (x × y)    ((∅ + x) + ∅) = (∅ + x)

are all straightforward instances of T3.41.


Given this much, we are ready for a series of results which are much more interesting for example, some general principles of commutativity and associativity.
For a first application of Ind*, let P be .; C x/ D x; then P;x is .; C ;/ D ; and
x
PSx
is .; C S x/ D S x.

CHAPTER 3. AXIOMATIC DEDUCTION

88

T3.45. PA ⊢AD (∅ + t) = t

 1. (∅ + ∅) = ∅                                              T3.41
 2. (∅ + x) = x → S(∅ + x) = Sx                              T3.36
 3. S(∅ + x) = (∅ + Sx)                                      T3.42*
 4. S(∅ + x) = (∅ + Sx) → (S(∅ + x) = Sx → (∅ + Sx) = Sx)    T3.37
 5. S(∅ + x) = Sx → (∅ + Sx) = Sx                            4,3 MP
 6. (∅ + x) = x → (∅ + Sx) = Sx                              2,5 T3.2
 7. ∀x((∅ + x) = x → (∅ + Sx) = Sx)                          6 Gen*
 8. ∀x((∅ + x) = x)                                          1,7 Ind*
 9. ∀x((∅ + x) = x) → (∅ + t) = t                            A4
10. (∅ + t) = t                                              9,8 MP

The key to this derivation, and others like it, is bringing Ind* into play. The basic
strategy for the beginning and end of these arguments is always the same. In this
case,

(S)
 1. (∅ + ∅) = ∅                                  T3.41
    ⋮
 6. (∅ + x) = x → (∅ + Sx) = Sx
 7. ∀x((∅ + x) = x → (∅ + Sx) = Sx)              6 Gen*
 8. ∀x((∅ + x) = x)                              1,7 Ind*
 9. ∀x((∅ + x) = x) → (∅ + t) = t                A4
10. (∅ + t) = t                                  9,8 MP

The goal is automatic by A4 and MP once you have ∀x((∅ + x) = x) by Ind* at
(8). For this, you need P^x_∅ and ∀x(P → P^x_Sx). We have P^x_∅ at (1) as an instance of
T3.41, and P^x_∅ is almost always easy to get. ∀x(P → P^x_Sx) is automatic by Gen*
from (6). So the real work is getting (6). Thus, once you see what is going on, the
entire derivation for T3.45 boils down to lines (2)-(6). For this, begin by noticing
that the antecedent of what we want is like the antecedent of (2), and the consequent
like what we want but for the equivalence in (3). Given this, it is a simple matter to
apply T3.37 to switch the one term for the equivalent one we want.
T3.46. PA ⊢AD (St + ∅) = S(t + ∅)

 1. (St + ∅) = St                     T3.41
 2. t = (t + ∅)                       T3.41*
 3. t = (t + ∅) → St = S(t + ∅)       T3.36
 4. St = S(t + ∅)                     3,2 MP
 5. (St + ∅) = S(t + ∅)               1,4 T3.35

This derivation has T3.41 at (1) with St for t. Line (2) is a straightforward version
of T3.41*. Then the key to the derivation is that the left side of (1) is like what
we want, and the right side of (1) is like what we want but for the equality on (2).
The goal then is to use T3.36 to switch the one term for the equivalent one. You
should get used to this pattern of using T3.36 and T3.37 to substitute terms. This
result forms the zero-case for the one that follows.
T3.47. PA ⊢AD (St + s) = S(t + s)

 1. (St + ∅) = S(t + ∅)                                      T3.46
 2. (St + x) = S(t + x) → S(St + x) = SS(t + x)              T3.36
 3. S(St + x) = (St + Sx)                                    T3.42*
 4. S(St + x) = (St + Sx) →
    (S(St + x) = SS(t + x) → (St + Sx) = SS(t + x))          T3.37
 5. S(St + x) = SS(t + x) → (St + Sx) = SS(t + x)            4,3 MP
 6. (St + x) = S(t + x) → (St + Sx) = SS(t + x)              2,5 T3.2
 7. S(t + x) = (t + Sx)                                      T3.42*
 8. S(t + x) = (t + Sx) → SS(t + x) = S(t + Sx)              T3.36
 9. SS(t + x) = S(t + Sx)                                    8,7 MP
10. SS(t + x) = S(t + Sx) →
    ((St + Sx) = SS(t + x) → (St + Sx) = S(t + Sx))          T3.37
11. (St + Sx) = SS(t + x) → (St + Sx) = S(t + Sx)            10,9 MP
12. (St + x) = S(t + x) → (St + Sx) = S(t + Sx)              6,11 T3.2
13. ∀x((St + x) = S(t + x) → (St + Sx) = S(t + Sx))          12 Gen*
14. ∀x((St + x) = S(t + x))                                  1,13 Ind*
15. ∀x((St + x) = S(t + x)) → (St + s) = S(t + s)            A4
16. (St + s) = S(t + s)                                      15,14 MP

The idea behind this longish derivation is to bring Ind* into play, where formula P
is (St + x) = S(t + x). Do not worry about how we got this for now. Given this
much, the following setup is automatic,

(T)
 1. (St + ∅) = S(t + ∅)                                      T3.46
    ⋮
12. (St + x) = S(t + x) → (St + Sx) = S(t + Sx)
13. ∀x((St + x) = S(t + x) → (St + Sx) = S(t + Sx))          12 Gen*
14. ∀x((St + x) = S(t + x))                                  1,13 Ind*
15. ∀x((St + x) = S(t + x)) → (St + s) = S(t + s)            A4
16. (St + s) = S(t + s)                                      15,14 MP

We have the zero-case from T3.46 on (1); the goal is automatic once we have the
result on (12). For (12), the antecedent at (2) is what we want, and the consequent is
right but for the equivalences on (3) and (9). We use T3.37 to substitute terms into
the consequent. The equivalence on (3) is a straightforward instance of T3.42*. We
had to work (just a bit), starting again with T3.42*, to get the equivalence on (9).
T3.48. PA ⊢AD (t + s) = (s + t)    commutativity of addition

 1. (t + ∅) = t                                              T3.41
 2. t = (∅ + t)                                              T3.45*
 3. (t + ∅) = (∅ + t)                                        1,2 T3.35
 4. (t + x) = (x + t) → S(t + x) = S(x + t)                  T3.36
 5. S(t + x) = (t + Sx)                                      T3.42*
 6. S(t + x) = (t + Sx) →
    (S(t + x) = S(x + t) → (t + Sx) = S(x + t))              T3.37
 7. S(t + x) = S(x + t) → (t + Sx) = S(x + t)                6,5 MP
 8. (t + x) = (x + t) → (t + Sx) = S(x + t)                  4,7 T3.2
 9. S(x + t) = (Sx + t)                                      T3.47*
10. S(x + t) = (Sx + t) →
    ((t + Sx) = S(x + t) → (t + Sx) = (Sx + t))              T3.37
11. (t + Sx) = S(x + t) → (t + Sx) = (Sx + t)                10,9 MP
12. (t + x) = (x + t) → (t + Sx) = (Sx + t)                  8,11 T3.2
13. ∀x((t + x) = (x + t) → (t + Sx) = (Sx + t))              12 Gen*
14. ∀x((t + x) = (x + t))                                    3,13 Ind*
15. ∀x((t + x) = (x + t)) → (t + s) = (s + t)                A4
16. (t + s) = (s + t)                                        15,14 MP

The pattern of this derivation is very much like ones we have seen before. Where P
is (t + x) = (x + t), we have the zero-case at (3), and the derivation effectively
reduces to getting (12). We get this by substituting into the consequent of (4) by
means of the equivalences on (5) and (9).

T3.49. PA ⊢AD ((r + s) + ∅) = (r + (s + ∅))

Hint: Begin with ((r + s) + ∅) = (r + s) as an instance of T3.41. The
derivation is then a matter of using T3.41* to replace s in the right-hand side
with (s + ∅).

*T3.50. PA ⊢AD ((r + s) + t) = (r + (s + t))    associativity of addition

Hint: For an application of Ind*, let P be ((r + s) + x) = (r + (s + x)).
Start with ((r + s) + x) = (r + (s + x)) → S((r + s) + x) = S(r + (s + x))
as an instance of T3.36, and substitute into the consequent as necessary by
T3.42* to reach ((r + s) + x) = (r + (s + x)) → ((r + s) + Sx) =
(r + (s + Sx)). The derivation is longish, but straightforward.


T3.51. PA ⊢AD (∅ × t) = ∅

Hint: For an application of Ind*, let P be (∅ × x) = ∅; then the derivation
reduces to showing (∅ × x) = ∅ → (∅ × Sx) = ∅. This is easy enough if
you use T3.41* and T3.44* to show that (∅ × x) = (∅ × Sx).

T3.52. PA ⊢AD (St × ∅) = ((t × ∅) + ∅)

Hint: This does not require application of Ind*.

*T3.53. PA ⊢AD (St × s) = ((t × s) + s)

Hint: For an application of Ind*, let P be (St × x) = ((t × x) + x). The
derivation reduces to getting (St × x) = ((t × x) + x) → (St × Sx) =
((t × Sx) + Sx). For this, you can start with (St × x) = ((t × x) + x) →
((St × x) + St) = (((t × x) + x) + St) as an instance of T3.36, and substitute
into the consequent. You may find it helpful to obtain (x + St) = (t + Sx)
and then ((t × x) + (x + St)) = ((t × Sx) + Sx) as a preliminary result.

T3.54. PA ⊢AD (t × s) = (s × t)    commutativity of multiplication

Hint: For an application of Ind*, let P be (t × x) = (x × t). You can start
with (t × x) = (x × t) → ((t × x) + t) = ((x × t) + t) as an instance
of T3.36, and substitute into the consequent.
We will stop here. With the derivation system ND of chapter 6, we obtain all these
results and more. But that system is easier to manipulate than what we have so far in
AD. Still, we have obtained some significant results! Perhaps you have heard from
your mother's knee that a + b = b + a. But this is a sweeping general claim of
the sort that cannot ever have all its instances checked. We have derived it from the
Peano axioms. Of course, one might want to know about justifications for the Peano
axioms. But that is another story.
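Though nothing in the derivations depends on it, the recursion equations PA3-PA6 can be read as a program over numerals built from ∅ and S. The following Python sketch (the representation and the names num, add, and mult are ours, not the text's) spot-checks instances of T3.48, T3.50, and T3.54 on small numerals. Of course, such finite checks are no substitute for the inductive derivations above, which cover every instance at once.

```python
def num(n):
    """Build the numeral S...S(O) for the non-negative integer n."""
    return "O" if n == 0 else ("S", num(n - 1))

def add(a, b):
    # PA3: x + O = x;  PA4: x + Sy = S(x + y)
    return a if b == "O" else ("S", add(a, b[1]))

def mult(a, b):
    # PA5: x * O = O;  PA6: x * Sy = (x * y) + x
    return "O" if b == "O" else add(mult(a, b[1]), a)

# Spot-check commutativity of addition (T3.48), commutativity of
# multiplication (T3.54), and associativity of addition (T3.50).
for x in range(5):
    for y in range(5):
        assert add(num(x), num(y)) == add(num(y), num(x))
        assert mult(num(x), num(y)) == mult(num(y), num(x))
        for z in range(5):
            assert add(add(num(x), num(y)), num(z)) == add(num(x), add(num(y), num(z)))
print("all instances check")
```

The point of the nested-tuple representation is that every computation step corresponds to an application of one of the PA recursion equations, just as in the derivations.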
*E3.9. Provide derivations to show each of T3.40, T3.41, T3.42, T3.43, T3.44,
T3.49, T3.50, T3.51, T3.52, T3.53, and T3.54. Hint: you may find the AD
Peano reference on p. 92 helpful.


Peano Arithmetic (AD)

PA 1. ¬(Sx = ∅)
   2. (Sx = Sy) → (x = y)
   3. (x + ∅) = x
   4. (x + Sy) = S(x + y)
   5. (x × ∅) = ∅
   6. (x × Sy) = ((x × y) + x)
   7. (P^x_∅ ∧ ∀x(P → P^x_Sx)) → ∀xP

T3.38 P^x_∅, ∀x(P → P^x_Sx), PA ⊢AD ∀xP    Ind*
T3.39 PA ⊢AD ¬(St = ∅)
T3.40 PA ⊢AD (St = Ss) → (t = s)
T3.41 PA ⊢AD (t + ∅) = t
T3.42 PA ⊢AD (t + Ss) = S(t + s)
T3.43 PA ⊢AD (t × ∅) = ∅
T3.44 PA ⊢AD (t × Ss) = ((t × s) + t)
T3.45 PA ⊢AD (∅ + t) = t
T3.46 PA ⊢AD (St + ∅) = S(t + ∅)
T3.47 PA ⊢AD (St + s) = S(t + s)
T3.48 PA ⊢AD (t + s) = (s + t)    commutativity of addition
T3.49 PA ⊢AD ((r + s) + ∅) = (r + (s + ∅))
T3.50 PA ⊢AD ((r + s) + t) = (r + (s + t))    associativity of addition
T3.51 PA ⊢AD (∅ × t) = ∅
T3.52 PA ⊢AD (St × ∅) = ((t × ∅) + ∅)
T3.53 PA ⊢AD (St × s) = ((t × s) + s)
T3.54 PA ⊢AD (t × s) = (s × t)    commutativity of multiplication

If T3.n is of the sort (t = s), then T3.n* is (s = t).


E3.10. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples of your own construction (iii) where the concept
applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.

a. A consequence in some axiomatic logic of Γ, and then a consequence in
AD of Γ.

b. An AD consequence of the Peano Axioms.

c. Term t being free for variable x in formula A, along with the restrictions on
A4 and Gen.

Chapter 4

Semantics

Having introduced the grammar for our formal languages and even (if you did not
skip the last chapter) done derivations in them, we need to say something about
semantics: about the conditions under which their expressions are true and false.
In addition to logical validity from chapter 1 and validity in AD from chapter 3,
this will lead to a third, semantic notion of validity. Again, the discussion divides
into the relatively simple sentential case, and then the full quantificational version.
Recall that we are introducing formal languages in their pure form, apart from
associations with ordinary language. Having discussed, in this chapter, conditions
under which formal expressions are true and not, in the next chapter we will finally
turn to translation, and so to ways formal expressions are associated with ordinary
ones.

4.1 Sentential

Let us say that any sentence in a sentential or quantificational language with no subformula (other than itself) that is a sentence is basic. For a sentential language, basic
sentences are the sentence letters, for a sentence letter is precisely a sentence with
no subformula other than itself that is a sentence. In the quantificational case, basic
sentences may be more complex.¹ In this part, we treat basic sentences as atomic.
Our initial focus is on forms with just operators ¬ and →. We begin with an account of the conditions under which sentences are true and not true, learn to apply
that account in arbitrary conditions, and turn to validity. The section concludes with
applications to our abbreviations, ∧, ∨, and ↔.

¹Thus the basic sentences of A ∧ B are just the atomic subformulas A and B. But Fa ∧ ∃xGx,
say, has atomic subformulas Fa and Gx, but basic parts Fa and ∃xGx.

4.1.1 Interpretations and Truth

Sentences are true and false relative to an interpretation of basic sentences. In the
sentential case, the notion of an interpretation is particularly simple. For any formal
language L, a sentential interpretation assigns a truth value true or false, T or F, to
each of its basic sentences. Thus, for Ls we might have interpretations I and J,

(A)  I: I[A] = T, I[B] = F, I[C] = F, …
     J: J[A] = T, J[B] = T, J[C] = F, …

When a sentence A is T on an interpretation I, we write I[A] = T, and when it is F,
we write I[A] = F. Thus, in the above case, J[B] = T and J[C] = F.
Truth for complex sentences depends on truth and falsity for their parts. In particular, for any interpretation I,

ST (¬) For any sentence P, I[¬P] = T iff I[P] = F; otherwise I[¬P] = F.

   (→) For any sentences P and Q, I[(P → Q)] = T iff I[P] = F or I[Q] = T (or
       both); otherwise I[(P → Q)] = F.

Thus a basic sentence is true or false depending on the interpretation. For complex
sentences, ¬P is true iff P is not true; and (P → Q) is true iff P is not true or Q
is. (In the quantificational case, we will introduce a notion of satisfaction distinct
from truth. However, in the sentential case, satisfaction and truth are the same: An
arbitrary sentence A is satisfied on a sentential interpretation I iff it is true on I. So
definition ST is all we need.)
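Definition ST can be read as a recursive procedure. The following Python sketch (ours, not the text's; the nested-tuple representation of sentences is an assumption of the example) evaluates sentences built from ¬ and → on an interpretation given as a dictionary of truth values.

```python
def value(I, s):
    """Truth value of sentence s on interpretation I (a dict of bools)."""
    if isinstance(s, str):                # basic sentence: read off I
        return I[s]
    if s[0] == "~":                       # ST(neg): ~P is T iff P is F
        return not value(I, s[1])
    if s[0] == "->":                      # ST(arrow): F only when T -> F
        return (not value(I, s[1])) or value(I, s[2])
    raise ValueError("unknown operator")

# An interpretation like J of the text: A and B are T, C is F.
J = {"A": True, "B": True, "C": False}
print(value(J, ("->", "B", ("~", "C"))))   # B -> ~C; prints True
```

Because the evaluator recurses on immediate subformulas, its call structure mirrors the trees used below: values are computed from the parts to the whole.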
It is traditional to represent the information from ST(¬) and ST(→) in the following truth tables.

T(¬)  P  ¬P        T(→)  P  Q  P → Q
      T   F              T  T    T
      F   T              T  F    F
                         F  T    T
                         F  F    T

From ST(¬), we have that if P is F then ¬P is T; and if P is T then ¬P is F. This is
just the way to read table T(¬) from left to right in the bottom row, and then the top
row. Similarly, from ST(→), we have that P → Q is T in conditions represented by
the first, third, and fourth rows of T(→). The only way for P → Q to be F is when
P is T and Q is F, as in the second row.


ST works recursively. Whether a basic sentence is true comes directly from the
interpretation; truth for other sentences depends on truth for their immediate subformulas, and can be read directly off the tables. As usual, we can use trees to see
how it works. Suppose I[A] = T, I[B] = F, and I[C] = F. Then I[¬(A → ¬B) → C]
= T.

(B)  A(T); B(F); C(F)            From I
     ¬B(T)                       By T(¬), row 2
     (A → ¬B)(T)                 By T(→), row 1
     ¬(A → ¬B)(F)                By T(¬), row 1
     ¬(A → ¬B) → C(T)            By T(→), row 4

The basic tree is the same as the one to show that ¬(A → ¬B) → C is a formula. From the
interpretation, A is T, B is F, and C is F. These are across the top. Since B is F, from
the bottom row of table T(¬), ¬B is T. Since A is T and ¬B is T, reading across
the top row of the table T(→), A → ¬B is T. And similarly, according to the tree,
for the rest. You should carefully follow each step. As we built the formula from its
parts to the whole, so now we calculate its truth from the parts to the whole.
Here is the same formula considered on another interpretation. Where interpretation J is as on p. 95, J[¬(A → ¬B) → C] = F.

(C)  A(T); B(T); C(F)            From J
     ¬B(F)                       By T(¬), row 1
     (A → ¬B)(F)                 By T(→), row 2
     ¬(A → ¬B)(T)                By T(¬), row 2
     ¬(A → ¬B) → C(F)            By T(→), row 2

This time, for both applications of ST(→), the antecedent is T and the consequent
is F; thus we are working on the second row of table T(→), and the conditionals
evaluate to F. Again, you should follow each step in the tree.
E4.1. Where the interpretation is as J from p. 95, with J[A] = T, J[B] = T, and
J[C] = F, use trees to decide whether the following sentences of Ls are T or
F.

*a. A

b. C

c. A → C

d. C → A

*e. (A → A)

*f. (A → A)

g. (A → C) → C

h. (A → C) → C

*i. (A → B) → (B → A)

j. (B → A) → (A → B)

4.1.2 Arbitrary Interpretations

Sentences are true and false relative to an interpretation. But whether an argument
is semantically valid depends on truth and falsity relative to every interpretation. As
a first step toward determining semantic validity, in this section we generalize the
method of the last section to calculate truth values relative to arbitrary interpretations.

First, any complex sentence has a finite number of basic sentences as components. It is thus possible simply to list all the possible interpretations of those basic
sentences. If an expression has just one basic sentence A, then on any interpretation
whatsoever, that basic sentence must be T or F.

(D)  A
     T
     F

If an expression has basic sentences A and B, then the possible interpretations of its
basic sentences are,

(E)  A B
     T T
     T F
     F T
     F F

B can take its possible values, T and F, when A is true; and B can take its possible
values, T and F, when A is false. And similarly, every time we add a basic sentence,
we double the number of possible interpretations, so that n basic sentences always
have 2ⁿ possible interpretations. Thus the possible interpretations for three and four
basic sentences are,
(F)  A B C          (G)  A B C D
     T T T               T T T T
     T T F               T T T F
     T F T               T T F T
     T F F               T T F F
     F T T               T F T T
     F T F               T F T F
     F F T               T F F T
     F F F               T F F F
                         F T T T
                         F T T F
                         F T F T
                         F T F F
                         F F T T
                         F F T F
                         F F F T
                         F F F F

Extra horizontal lines are added purely for visual convenience. There are 8 = 2³
combinations with three basic sentences and 16 = 2⁴ combinations with four. In
general, to write down all the possible combinations for n basic sentences, begin by
finding the total number r = 2ⁿ of combinations or rows. Then write down a column
with half that many (r/2) Ts and half that many (r/2) Fs; then a column alternating
half again as many (r/4) Ts and Fs; and a column alternating half again as many
(r/8) Ts and Fs, continuing to the nth column alternating groups of just one T and
one F. Thus, for example, with four basic sentences, r = 2⁴ = 16; so we begin with
a column consisting of r/2 = 8 Ts and r/2 = 8 Fs; this is followed by a column
alternating groups of 4 Ts and 4 Fs, a column alternating groups of 2 Ts and 2 Fs,
and a column alternating groups of 1 T and 1 F. And similarly in other cases.
Given an expression involving, say, four basic sentences, we could imagine doing
trees for each of the 16 possible interpretations. But, to exhibit truth values for each
of the possible interpretations, we can reduce the amount of work a bit, or at least
represent it in a relatively compact form. Suppose I[A] = T, I[B] = F, and I[C] = F,
and consider a tree as in (B) from above, along with a compressed version of the
same information.

(H)  A(T); B(F); C(F)            From I
     ¬B(T)                       By T(¬), row 2
     (A → ¬B)(T)                 By T(→), row 1
     ¬(A → ¬B)(F)                By T(¬), row 1
     ¬(A → ¬B) → C(T)            By T(→), row 4

     A B C | ¬ (A → ¬ B) → C
     T F F | F  T T T F  T F

In the table, we begin by simply listing the interpretation we will consider
in the left-hand part: A is T, B is F, and C is F. Then, under each basic sentence, we put
its truth value, and for each formula, we list its truth value under its main operator.
Notice that the calculation must proceed precisely as it does in the tree. It is because
B is F that we put T under the second ¬. It is because A is T and ¬B is T that we
put a T under the first →. It is because (A → ¬B) is T that we put F under the
first ¬. And it is because ¬(A → ¬B) is F and C is F that we put a T under the
second →. In effect, then, we work down through the tree, only in this compressed
form. We might think of truth values from the tree as squished up into the one row.
Because there is a T under its main operator, we conclude that the whole formula,
¬(A → ¬B) → C, is T when I[A] = T, I[B] = F, and I[C] = F. In this way, we might
conveniently calculate and represent the truth value of ¬(A → ¬B) → C for all
eight of the possible interpretations of its basic sentences.
(I)  A B C | ¬ (A → ¬ B) → C
     T T T | T  T F F T  T T
     T T F | T  T F F T  F F
     T F T | F  T T T F  T T
     T F F | F  T T T F  T F
     F T T | F  F T F T  T T
     F T F | F  F T F T  T F
     F F T | F  F T T F  T T
     F F F | F  F T T F  T F

The emphasized column under the second → indicates the truth value of ¬(A →
¬B) → C for each of the interpretations on the left; which is to say, for every
possible interpretation of the three basic sentences. So the only way for ¬(A →
¬B) → C to be F is for C to be F, and A and B to be T. Our above tree (H)
represents just the fourth row of this table.
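As a check on table (I), the following Python sketch (ours, not the text's) computes the value under the main operator for all eight interpretations and confirms that the formula is F on exactly one of them: where A and B are T and C is F.

```python
from itertools import product

def main_value(A, B, C):
    """Value of ~(A -> ~B) -> C under its main operator."""
    imp = lambda p, q: (not p) or q          # table T(->)
    return imp(not imp(A, not B), C)         # ~(A -> ~B) -> C

table = [(A, B, C, main_value(A, B, C))
         for A, B, C in product([True, False], repeat=3)]
false_rows = [(A, B, C) for A, B, C, v in table if not v]
print(false_rows)   # [(True, True, False)]
```

Listing True before False in product reproduces the standard row order, so the rows of table come out in the same sequence as the rows of (I).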


In practice, it is easiest to work these truth tables vertically. For this, begin
with the basic sentences in some standard order along with all their possible interpretations in the left-hand column. For Ls let the standard order be alphanumeric
(A, A₁, A₂, …, B, B₁, B₂, …, C, …). And repeat truth values for basic sentences
under their occurrences in the formula (this is not crucial, since truth values for basic
sentences are already listed on the left; it will be up to you whether to repeat values
for basic sentences). This is done in table (J) below.

(J)  A B C | ¬ (A → ¬ B) → C
     T T T |    T    T     T
     T T F |    T    T     F
     T F T |    T    F     T
     T F F |    T    F     F
     F T T |    F    T     T
     F T F |    F    T     F
     F F T |    F    F     T
     F F F |    F    F     F

(K)  A B C | ¬ (A → ¬ B) → C
     T T T |    T  F T     T
     T T F |    T  F T     F
     T F T |    T  T F     T
     T F F |    T  T F     F
     F T T |    F  F T     T
     F T F |    F  F T     F
     F F T |    F  T F     T
     F F F |    F  T F     F

Now, given the values for B as in (J), we are in a position to calculate the values for
¬B; so get the T(¬) table in your mind, and put your eye on the column under B in the
formula (or on the left, if you have decided not to repeat the values for B under its
occurrence in the formula). Then fill in the column under the second ¬, reversing the
values from under B. This is accomplished in (K). Given the values for A and ¬B,
we are now in a position to calculate values for A → ¬B; so get the T(→) table in
your head, and put your eye on the columns under A and ¬B. Then fill in the column
It is worth asking what happens if basic sentences are listed in some order other
than alphanumeric.

A B        B A
T T        T T
T F        T F
F T        F T
F F        F F

All the combinations are still listed, but their locations in a table change.
Each of the above tables lists all of the combinations for the basic sentences. But
the first table has the interpretation I with I[A] = T and I[B] = F in the second
row, where the second table has this combination in the third. Similarly, the
tables exchange rows for the interpretation J with J[A] = F and J[B] = T. As it
turns out, the only real consequence of switching rows is that it becomes difficult
to compare tables as, for example, with the back of the book. And it may matter
as part of the standard of correctness for exercises!


under the first →, going with F only when A is T and ∼B is F. This is accomplished
in (L).

(L)
A B C   ∼ (A → ∼ B) → C
T T T       T F  F T    T
T T F       T F  F T    F
T F T       T T  T F    T
T F F       T T  T F    F
F T T       F T  F T    T
F T F       F T  F T    F
F F T       F T  T F    T
F F F       F T  T F    F

(M)
A B C   ∼ (A → ∼ B) → C
T T T   T   T F  F T    T
T T F   T   T F  F T    F
T F T   F   T T  T F    T
T F F   F   T T  T F    F
F T T   F   F T  F T    T
F T F   F   F T  F T    F
F F T   F   F T  T F    T
F F F   F   F T  T F    F

Now we are ready to fill in the column under the first ∼. So get the T(∼) table in your
head, and put your eye on the column under the first →. The column is completed in
table (M). And the table is finished, as in (I), by completing the column under the last
→, based on the columns under the first ∼ and under the C. Notice again that the
order in which you work the columns exactly parallels the order from the tree.
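The vertical procedure is mechanical enough to automate. The sketch below (a Python illustration, not part of the text) builds the columns for ∼(A → ∼B) → C in the same order as tables (J) through (M): first the second ∼, then the inner arrow, then the first ∼, then the main arrow.

```python
from itertools import product

# The eight interpretations of A, B, C in the standard order (T before F).
rows = [dict(zip('ABC', vals)) for vals in product([True, False], repeat=3)]

def arrow(p, q):
    # T(->): an arrow is F only when its antecedent is T and its consequent is F.
    return (not p) or q

# Work the columns in the same order as on the tree.
col_notB  = [not i['B'] for i in rows]                             # second ~
col_inner = [arrow(i['A'], nb) for i, nb in zip(rows, col_notB)]   # A -> ~B
col_neg   = [not v for v in col_inner]                             # first ~
col_main  = [arrow(n, i['C']) for i, n in zip(rows, col_neg)]      # main ->

for i, v in zip(rows, col_main):
    print(''.join('T' if i[s] else 'F' for s in 'ABC'),
          ':', 'T' if v else 'F')
```

Each list is one column of the finished table; only `col_main` gives the value of the whole sentence on each interpretation.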
As another example, consider these tables for ∼(B → A), the first with truth
values repeated under basic sentences, the second without.

(N)
A B   ∼ (B → A)
T T   F  T T T
T F   F  F T T
F T   T  T F F
F F   F  F T F

(O)
A B   ∼ (B → A)
T T   F    T
T F   F    T
F T   T    F
F F   F    T

We complete the table as before. First, with our eye on the columns under B and
A, we fill in the column under →. Then, with our eye on that column, we complete
the one under ∼. For this, first, notice that ∼ is the main operator. You would not
calculate ∼B and then the arrow! Rather, your calculations move from the smaller
parts to the larger; so the arrow comes first and then the tilde. Again, the order is the
same as on a tree. Second, if you do not repeat values for basic formulas, be careful
about B → A; the leftmost column of table (O), under A, is the column for the
consequent, and the column immediately to its right, under B, is for the antecedent;
in this case, then, the second row under the arrow is T and the third is F. Though it is fine
to omit columns under basic sentences, as they are already filled in on the left side,
you should not skip other columns, as they are essential building blocks for the final
result.
E4.2. For each of the following sentences of Ls construct a truth table to determine
its truth value for each of the possible interpretations of its basic sentences.
*a. ∼∼A
b. ∼(A → A)
c. (∼A → ∼A)
*d. (∼B → A) → B
e. ∼(B → A) → B
f. (A → B) → (B → A)
*g. C → (A → B)
h. (A → (C → B)) → ((A → C) → (A → B))
*i. (A → B) → (C → D)
j. ∼((A → B) → (C → D))

4.1.3 Validity

As we have seen, sentences are true and false relative to an interpretation. For any
interpretation, a complex sentence has some definite value. But whether an argument is sententially valid depends on truth and falsity relative to every interpretation.
Suppose a formal argument has premises P1 … Pn and conclusion Q. Then,

P1 … Pn sententially entail Q (P1 … Pn ⊨s Q) iff there is no sentential interpretation I such that I[P1] = T and … and I[Pn] = T but I[Q] = F.

We can put this more generally as follows. Suppose Γ (Gamma) is a set of formulas,
and say I[Γ] = T iff I[P] = T for each P in Γ. Then,

SV  Γ sententially entails Q (Γ ⊨s Q) iff there is no sentential interpretation I such
that I[Γ] = T but I[Q] = F.

Where the members of Γ are P1 … Pn, this says the same thing as before. Γ sententially entails Q when there is no sentential interpretation that makes each member of
Γ true and Q false. If Γ sententially entails Q we say the argument whose premises
are the members of Γ and conclusion is Q is sententially valid. Γ does not sententially entail Q (Γ ⊭s Q) when there is some sentential interpretation on which all the
members of Γ are true, but Q is false. We can think of the premises as constraining the interpretations that matter: for validity it is just the interpretations where the
members of Γ are all true, on which the conclusion Q cannot be false. If Γ has no
members then there are no constraints on relevant interpretations, and the conclusion
must be true on every interpretation in order for the argument to be valid. In this case, listing all
the members of Γ individually, we simply write ⊨s Q, and if ⊨s Q, Q is logically true (a tautology). Notice the new double turnstile ⊨ for this semantic notion,
in contrast to the single turnstile ⊢ for derivations from chapter 3.
Given that we are already in a position to exhibit truth values for arbitrary interpretations, it is a simple matter to determine whether an argument is sententially
valid. Where the premises and conclusion of an argument include basic sentences
B1 … Bn, begin by calculating the truth values of the premises and conclusion for
each of the possible interpretations for B1 … Bn. Then look to see if any interpretation makes all the premises true but the conclusion false. If no interpretation makes
the premises true and the conclusion not, then by SV the argument is sententially
valid. If some interpretation does make the premises true and the conclusion false,
then it is not valid.
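The test just described — scan every interpretation, looking for one that makes all the premises T and the conclusion F — can be sketched directly in code. This is an illustration only, not part of the text; premises and conclusion are represented as functions from interpretations to truth values.

```python
from itertools import product

def arrow(p, q):
    return (not p) or q          # T(->)

def entails(premises, conclusion, letters):
    """SV: True iff no interpretation makes every premise T and the conclusion F."""
    for vals in product([True, False], repeat=len(letters)):
        i = dict(zip(letters, vals))
        if all(p(i) for p in premises) and not conclusion(i):
            return False         # a counterexample interpretation exists
    return True

# A -> B together with A entails B; A -> B alone does not entail A.
assert entails([lambda i: arrow(i['A'], i['B']), lambda i: i['A']],
               lambda i: i['B'], 'AB')
assert not entails([lambda i: arrow(i['A'], i['B'])],
                   lambda i: i['A'], 'AB')
```

The function simply restates SV: it returns True exactly when no row of the full table makes the premises true and the conclusion false.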
Thus, for example, suppose we want to know whether the following argument is
sententially valid.

(P)   (∼A → B) → C
      B
      ---------
      C

By SV, the question is whether there is an interpretation that makes the premises
true and the conclusion not. So we begin by calculating the values of the premises
and conclusion for each of the possible interpretations of the basic sentences in the
premises and conclusion.
A B C   (∼ A → B) → C   B / C
T T T    F T T T  T T   T   T
T T F    F T T T  F F   T   F
T F T    F T T F  T T   F   T
T F F    F T T F  F F   F   F
F T T    T F T T  T T   T   T
F T F    T F T T  F F   T   F
F F T    T F F F  T T   F   T
F F F    T F F F  T F   F   F

Now we simply look to see whether any interpretation makes all the premises true
but the conclusion not. Interpretations represented by the top row, ones that make A,
B, and C all T, do not make the premises true and the conclusion not, because both
the premises and the conclusion come out true. In the second row, the conclusion is
false, but the first premise is false as well; so not all the premises are true and the
conclusion is false. In the third row, we do not have either all the premises true or the
conclusion false. In the fourth row, though the conclusion is false, the premises are
not all true. In the fifth row, the premises are true, but the conclusion is not false. In the
sixth row, the first premise is not true, and in the seventh and eighth rows, the second
premise is not true. So no interpretation makes the premises true and the conclusion
false. So by SV, (∼A → B) → C, B ⊨s C. Notice that the only column that matters
for a complex formula is the one under its main operator, the one that gives the
value of the sentence for each of the interpretations; the other columns exist only to
support the calculation of the value of the whole.
In contrast, ∼((B → A) → B) ⊭s ∼(A → B). That is, an argument with
premise ∼((B → A) → B) and conclusion ∼(A → B) is not sententially valid.

(Q)
A B   ∼ ((B → A) → B)  /  ∼ (A → B)
T T   F   T T T  T T       F   T T T
T F   T   F T T  F F       T   T F F
F T   F   T F F  T T       F   F T T
F F   T   F T F  F F       F   F T F   ⇐

In the first row, the premise is F. In the second, the conclusion is T. In the third,
the premise is F. However, in the last, the premise is T, and the conclusion is F. So
there are interpretations (any interpretation that makes A and B both F) that make the
premise T and the conclusion not true. So by SV, ∼((B → A) → B) ⊭s ∼(A → B),
and the argument is not sententially valid. All it takes is one interpretation that makes
all the premises T and the conclusion F to render an argument not sententially valid.
Of course, there might be more than one, but one is enough!

As a final example, consider table (I) for ∼(A → ∼B) → C on p. 99 above.
From the table, there is an interpretation where the sentence is not true. Thus, by
SV, ⊭s ∼(A → ∼B) → C. A sentence is valid only when it is true on every
interpretation. Since there is an interpretation on which it is not true, the sentence is
not valid (not a logical truth).
Since all it takes to demonstrate invalidity is one interpretation on which all the
premises are true and the conclusion is false, we do not actually need an entire table to
demonstrate invalidity. You may decide to produce a whole truth table in order to find
an interpretation to demonstrate invalidity. But we can sometimes work backward
from what we are trying to show to an interpretation that does the job. Thus, for
example, to find the result from table (Q), we need an interpretation on which the
premise is T and the conclusion is F. That is, we need a row like this,

(R)
A B   ∼ ((B → A) → B)  /  ∼ (A → B)
      T                    F


In order for the premise to be T, the conditional in the brackets must be F. And in
order for the conclusion to be F, the conditional must be T. So we can fill in this
much.

(S)
A B   ∼ ((B → A) → B)  /  ∼ (A → B)
      T          F         F   T

Since there are three ways for an arrow to be T, there is not much to be done with the
conclusion. But since the conditional in the premise is F, we know that its antecedent
is T and its consequent is F. So we have,

(T)
A B   ∼ ((B → A) → B)  /  ∼ (A → B)
      T      T   F F       F   T

That is, if the conditional in the brackets is F, then (B → A) is T and B is F. But now
we can fill in the information about B wherever it occurs. The result is as follows.

(U)
A B   ∼ ((B → A) → B)  /  ∼ (A → B)
  F   T  F       F F       F   T F

Since the first B in the premise is F, the first conditional in the premise is T irrespective of the assignment to A. But, with B false, the only way for the conditional in the
argument's conclusion to be T is for A to be false as well. The result is our completed
row.

(V)
A B   ∼ ((B → A) → B)  /  ∼ (A → B)
F F   T  F T F   F F       F   F T F

And we have recovered the row that demonstrates invalidity without doing the
entire table. In this case, the full table had only four rows, and we might just as
well have done the whole thing. However, when there are many rows, this shortcut
approach can be attractive. A disadvantage is that sometimes it is not obvious just
how to proceed. In this example each stage led to the next. At stage (S), there were
three ways to make the conclusion true. We were able to proceed insofar as the
premise forced the next step. But it might have been that neither the premise nor the
conclusion forced a definite next stage. In this sort of case, you might decide to do
the whole table, just so that you can grapple with all the different combinations in an
orderly way.

Notice what happens when we try this approach with an argument that is not
invalid. Returning to argument (P) above, suppose we try to find a row where the
premises are T and the conclusion is F. That is, we set out to find a row like this,
(W)
A B C   (∼ A → B) → C   B / C
               T         T   F

Immediately, we are in a position to fill in values for B and C.

(X)
A B C   (∼ A → B) → C   B / C
  T F          T    F    T   F

Since the first premise is a true arrow with a false consequent, its antecedent (∼A →
B) must be F. But this requires that ∼A be T and that B be F.

(Y)
A B C   (∼ A → B) → C   B / C
  T F   T    F  F/T T F  T   F
And there is no way to set B to F, as we have already seen that it has to be T in order
to keep the second premise true; no interpretation makes B both T and F. At
this stage, we know, in our hearts, that there is no way to make both of the premises
true and the conclusion false. In Part II we will turn this knowledge into an official
mode of reasoning for validity. However, for now, let us consider a single row of a
truth table (or a marked row of a full table) sufficient to demonstrate invalidity, but
require a full table, exhibiting all the options, to show that an argument is sententially
valid.

You may encounter odd situations where premises are never T, where conclusions
are never F, or whatever. But if you stick to the definition, always asking whether
there is any interpretation of the basic sentences that makes all the premises T and
the conclusion F, all will be well.
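The shortcut hunt amounts to a search that stops at the first interpretation making the premises T and the conclusion F. A sketch (an illustration only; here the premise and conclusion are those of table (Q), coded by hand):

```python
from itertools import product

def arrow(p, q):
    return (not p) or q          # T(->)

def counterexample(premises, conclusion, letters):
    """Return the first interpretation with all premises T and conclusion F,
    or None when there is none (so the argument is sententially valid)."""
    for vals in product([True, False], repeat=len(letters)):
        i = dict(zip(letters, vals))
        if all(p(i) for p in premises) and not conclusion(i):
            return i
    return None

prem = lambda i: not arrow(arrow(i['B'], i['A']), i['B'])   # ~((B -> A) -> B)
conc = lambda i: not arrow(i['A'], i['B'])                  # ~(A -> B)
print(counterexample([prem], conc, 'AB'))   # {'A': False, 'B': False}
```

One returned row suffices to demonstrate invalidity; a return of None corresponds to having checked the full table.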
E4.3. For each of the following, use truth tables to decide whether the entailment
claims hold. Notice that a couple of the tables are already done from E4.2.
*a. A → ∼A ⊨s ∼A
b. ∼A → A ⊨s A
*c. A → B, A ⊨s B
d. A → B, ∼B ⊨s ∼A
e. ∼(A → B) ⊨s ∼B
f. ⊨s C → (A → B)
*g. ⊨s (A → (C → B)) → ((A → C) → (A → B))
h. (A → B) → (B → A), A, ∼B ⊨s ∼(C → C)
i. A → (B → C), B → (C → D) ⊨s A → (B → D)
j. (A → (B → C)) → D, ∼D → A ⊨s ∼C

4.1.4 Abbreviations

We turn, finally, to applications for our abbreviations. Consider, first, a truth table for
P ∨ Q, that is, for ∼P → Q.

T(∨)
P Q   ∼ P → Q
T T   F T T T
T F   F T T F
F T   T F T T
F F   T F F F

When P is T and Q is T, P ∨ Q is T; when P is T and Q is F, P ∨ Q is T; and
so forth. Thus, when P is T and Q is T, we know that P ∨ Q is T, without going
through all the steps to get there in the unabbreviated form. Just as when P is a
formula and Q is a formula, we move directly to the conclusion that P ∨ Q is a
formula without explicitly working all the intervening steps, so if we know the truth
value of P and the truth value of Q, we can move in a tree by the above table to the
truth value of P ∨ Q without all the intervening steps. And similarly for the other
abbreviating sentential operators.
T(∧)
P Q   ∼ (P → ∼ Q)
T T   T  T F F T
T F   F  T T T F
F T   F  F T F T
F F   F  F T T F

T(↔)
P Q   ∼ ((P → Q) → ∼ (Q → P))
T T   T   T T T  F F  T T T
T F   F   T F F  T F  F T T
F T   F   F T T  T T  T F F
F F   T   F T F  F F  F T F

As a help toward remembering these tables, notice that P ∨ Q is F only when P is F
and Q is F; P ∧ Q is T only when P is T and Q is T; and P ↔ Q is T only when P
and Q are the same, and F when P and Q are different. We can think of these clauses
as representing derived clauses T(∨), T(∧), and T(↔) to the definition for truth.
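One can check by brute force that the derived tables agree with the unabbreviated forms. The following sketch (a Python illustration, not part of the text) computes each abbreviation from ∼ and → alone and compares it with the intended table.

```python
from itertools import product

def arrow(p, q): return (not p) or q                               # T(->)

def vee(p, q):   return arrow(not p, q)                            # P v Q   = ~P -> Q
def wedge(p, q): return not arrow(p, not q)                        # P ^ Q   = ~(P -> ~Q)
def dbl(p, q):   return not arrow(arrow(p, q), not arrow(q, p))    # P <-> Q

for p, q in product([True, False], repeat=2):
    assert vee(p, q)   == (p or q)      # F only when P is F and Q is F
    assert wedge(p, q) == (p and q)     # T only when P is T and Q is T
    assert dbl(p, q)   == (p == q)      # T just when P and Q are the same
print("derived tables T(v), T(^), T(<->) confirmed")
```

The assertions pass on all four rows, which is just what the derived tables claim.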

There are a couple of different ways tables for our operators can be understood:
First, as we shall see in Part III, it is possible to take tables for operators other
than ∼ and → as basic, say, just T(∼) and T(∨), or just T(∼) and T(∧), and
then abbreviate → in terms of them. Challenge: What expression involving just
∼ and ∨ has the same table as →? What expression involving just ∼ and ∧?
Another option is to introduce all five as basic. Then the task is not showing that
the table for ∨ is TTTF; that is given. Rather we simply notice that P ∨ Q, say,
is redundant with ∼P → Q. Again, our above approach with ∼ and → basic
has the advantage of preserving relative simplicity in the basic language (though
other minimal approaches would do so as well).


And nothing prevents direct application of the derived tables in trees. Suppose,
for example, I[A] = T, I[B] = F, and I[C] = T. Then I[(B → A) ↔ ((A ∧ B) ∨ ∼C)]
= F.

(Z)
B(F)   A(T)        A(T)   B(F)        C(T)      From I

(B → A)(T)         (A ∧ B)(F)         ∼C(F)     T(→); T(∧), row 2; T(∼)

                   ((A ∧ B) ∨ ∼C)(F)            T(∨), row 4

(B → A) ↔ ((A ∧ B) ∨ ∼C)  (F)                   T(↔), row 2

We might get the same result by working through the full tree for the unabbreviated
form. But there is no need. When A is T and B is F, we know that (A ∧ B) is F;
when (A ∧ B) is F and ∼C is F, we know that (A ∧ B) ∨ ∼C is F; and so forth. Thus
we move through the tree directly by the derived tables.
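Moving through a tree "directly by the derived tables" corresponds to evaluating each compound in one step from the values of its parts. A sketch of the computation in tree (Z) (an illustration only):

```python
I = {'A': True, 'B': False, 'C': True}    # the interpretation of tree (Z)

left  = (not I['B']) or I['A']            # (B -> A), by T(->)
conj  = I['A'] and I['B']                 # (A ^ B), by T(^), row 2
negC  = not I['C']                        # ~C, by T(~)
disj  = conj or negC                      # (A ^ B) v ~C, by T(v), row 4
whole = (left == disj)                    # <->, by T(<->), row 2
print(whole)                              # False
```

Each line is one node of the tree, worked bottom-up; no unabbreviated form is ever computed.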
Similarly, we can work directly with abbreviated forms in truth tables.

(AA)
A B C   (B → A) ↔ ((A ∧ B) ∨ ∼ C)
T T T    T T T  T   T T T  T  F T
T T F    T T T  T   T T T  T  T F
T F T    F T T  F   T F F  F  F T
T F F    F T T  T   T F F  T  T F
F T T    T F F  T   F F T  F  F T
F T F    T F F  F   F F T  T  T F
F F T    F T F  F   F F F  F  F T
F F F    F T F  T   F F F  T  T F

Tree (Z) represents just the third row of this table. As before, we construct the table
vertically, with tables for abbreviating operators in mind as appropriate.

Finally, given that we have tables for abbreviated forms, we can use them for
evaluation of arguments with abbreviated forms. Thus, for example, A ↔ B, A ⊨s
A ∧ B.

(AB)
A B   (A ↔ B)   A  /  (A ∧ B)
T T    T T T    T      T T T
T F    T F F    T      T F F
F T    F F T    F      F F T
F F    F T F    F      F F F

There is no row where each of the premises is true and the conclusion is false. So the
argument is sententially valid. And, from either of the following rows,


Semantics Quick Reference (Sentential)

For any formal language L, a sentential interpretation assigns a truth value true or
false, T or F, to each of its basic sentences. Then for any interpretation I,

ST  (∼) For any sentence P, I[∼P] = T iff I[P] = F; otherwise I[∼P] = F.

    (→) For any sentences P and Q, I[(P → Q)] = T iff I[P] = F or I[Q] = T
    (or both); otherwise I[(P → Q)] = F.

And for abbreviated expressions,

ST′ (∧) For any sentences P and Q, I[(P ∧ Q)] = T iff I[P] = T and I[Q] = T;
    otherwise I[(P ∧ Q)] = F.

    (∨) For any sentences P and Q, I[(P ∨ Q)] = T iff I[P] = T or I[Q] = T
    (or both); otherwise I[(P ∨ Q)] = F.

    (↔) For any sentences P and Q, I[(P ↔ Q)] = T iff I[P] = I[Q]; otherwise
    I[(P ↔ Q)] = F.

If Γ is a set of formulas, I[Γ] = T iff I[P] = T for each P in Γ. Then,
where the members of Γ are the formal premises of an argument, and sentence P
is its conclusion,

SV  Γ sententially entails P iff there is no sentential interpretation I such that
I[Γ] = T but I[P] = F.

We treat a single row of a truth table (or a marked row of a full table) as sufficient
to demonstrate invalidity, but require a full table, exhibiting all the options, to show
that an argument is sententially valid.

(AC)
A B C D   (B → ∼A) ∧ (∼C ∨ D)   (A ↔ ∼D) ∧ (∼D → B)  /  B
F F T T    F T TF  T  FT T T     F T FT   T  FT T F      F
F F F T    F T TF  T  TF T T     F T FT   T  FT T F      F

we may conclude that (B → ∼A) ∧ (∼C ∨ D), (A ↔ ∼D) ∧ (∼D → B) ⊭s B. In
this case, the shortcut table is attractive relative to the full version with sixteen rows!
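For comparison, the full sixteen-row check can be run mechanically. The sketch below (an illustration; the formulas of table (AC) are coded by hand) counts the rows with all premises T and conclusion F.

```python
from itertools import product

def arrow(p, q): return (not p) or q                                 # T(->)

prem1 = lambda i: arrow(i['B'], not i['A']) and ((not i['C']) or i['D'])  # (B -> ~A) ^ (~C v D)
prem2 = lambda i: (i['A'] == (not i['D'])) and arrow(not i['D'], i['B'])  # (A <-> ~D) ^ (~D -> B)
conc  = lambda i: i['B']

found = []
for vals in product([True, False], repeat=4):
    i = dict(zip('ABCD', vals))
    if prem1(i) and prem2(i) and not conc(i):
        found.append(i)
print(len(found))   # 2 counterexample rows
```

Either one of the two rows found is enough to demonstrate invalidity; the shortcut method simply arrives at such a row without visiting the other fourteen.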
E4.4. For each of the following, use truth tables to decide whether the entailment
claims hold.
a. ⊨s A ∨ ∼A
b. A ↔ (A ↔ (A ∧ A)), A → (A ↔ A) ⊨s A → A
*c. ∼B ∨ C ⊨s B → C
*d. A ∨ B, C → ∼A, ∼(B ∧ C) ⊨s ∼C
e. A → (B ∨ C), C ↔ B, ∼C ⊨s ∼A
f. ∼(A ∧ B) ⊨s ∼A ∨ ∼B
g. A ∧ (B → C) ⊨s (A ∧ ∼B) ∨ (A ∧ C)
*h. ⊨s (A ↔ B) ↔ (A ∧ B)
i. A ∨ (B ∧ C), (B ∨ C) → A ⊨s A ↔ (C ∨ B)
j. A ∨ B, D → (C ∨ A) ⊨s B ↔ C
E4.5. For each of the following, use truth tables to decide whether the entailment
claims hold. Hint: the trick here is to identify the basic sentences. After that,
everything proceeds in the usual way, with truth values assigned to the basic
sentences.
*a. ∃xAx → ∃xBx, ∃xAx ⊨s ∃xBx
b. ∀xAx → ∃x(Ax ∧ ∀yBy), ∼∃x(Ax ∧ ∀yBy) ⊨s ∼∀xAx
E4.6. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples of your own construction (iii) where the concept
applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.
a. Sentential interpretations and truth for complex sentences.
b. Sentential validity.

4.2 Quantificational

Semantics for the quantificational case work along the same lines as the sentential
one. Sentences are true or false relative to an interpretation; arguments are semantically valid when there is no interpretation on which the premises are true and the
conclusion is not. But, corresponding to differences between sentential and quantificational languages, the notion of an interpretation differs. And we introduce a
preliminary notion of a term assignment, along with a preliminary notion of satisfaction distinct from truth, before we get to truth and validity. Certain issues are put
off for chapter 7 at the start of Part II. However, we should be able to do enough to
see how the definitions work. This time, we will say a bit more about connections to
English, though it remains important to see the definitions for what they are, and we
leave official discussion of translation to the next chapter.

4.2.1 Interpretations

Given a quantificational language L, formulas are true relative to a quantificational
interpretation. As in the sentential case, languages do not come associated with
any interpretation. Rather, a language consists of symbols which may be interpreted
in different ways. In the sentential case, interpretations assigned T or F to basic
sentences, and the assignments were made in arbitrary ways. Now assignments
are more complex, but remain arbitrary. In general,

QI  A quantificational interpretation I of language L consists of a nonempty set
U, the universe of the interpretation, along with,
(s) An assignment of a truth value I[S] to each sentence letter S of L.
(c) An assignment of a member I[c] of U to each constant symbol c of L.
(r) An assignment of an n-place relation I[R^n] on U to each n-place relation
symbol R^n of L, where I[=] is always assigned {⟨o, o⟩ | o ∈ U}.
(f) An assignment of a total n-place function I[h^n] from U^n to U, to each
n-place function symbol h^n of L.

The notions of a function and a relation come from set theory, for which you might
want to check out the set theory summary on p. 112. Conceived literally and mathematically, these assignments are themselves functions from symbols in the language
L to objects. Each sentence letter is associated with a truth value, T or F; this
is no different than before. Each constant symbol is associated with some element


Basic Notions of Set Theory

I. A set is a thing that may have other things as elements or members. If m is
a member of set s we write m ∈ s. One set is identical to another iff their
members are the same, so order is irrelevant. The members of a set may
be specified by list: {Sally, Bob, Jim}, or by membership condition: {o | o is
a student at CSUSB}; read, "the set of all objects o such that o is a student
at CSUSB." Since sets are things, nothing prevents a set with other sets as
members.

II. Like a set, an n-tuple is a thing with other things as elements or members.
For any integer n, an n-tuple has n elements, where order matters. 2-tuples
are frequently referred to as pairs. An n-tuple may be specified by list:
⟨Sally, Bob, Jim⟩, or by membership condition: "the first 5 people (taken in
order) in line at the Bursar's window." Nothing prevents sets of n-tuples, as
{⟨m, n⟩ | m loves n}; read, "the set of all m/n pairs such that the first member
loves the second." 1-tuples are frequently equated with their members. So,
depending on context, {Sally, Bob, Jim} may be {⟨Sally⟩, ⟨Bob⟩, ⟨Jim⟩}.

III. Set r is a subset of set s iff any member of r is also a member of s. If r is
a subset of s we write r ⊆ s. r is a proper subset of s (r ⊂ s) iff r ⊆ s
but r ≠ s. Thus, for example, the subsets of {m, n, o} are {}, {m}, {n}, {o},
{m, n}, {m, o}, {n, o}, and {m, n, o}. All but {m, n, o} are proper subsets of
{m, n, o}. Notice that the empty set is a subset of any set s, for it is sure to be
the case that any member of it is also a member of s.

IV. The union of sets r and s is the set of all objects that are members of r or
s. Thus, if r = {m, n} and s = {n, o}, then the union of r and s, (r ∪ s) =
{m, n, o}. Given a larger collection of sets, s1, s2 …, the union of them all,
⋃{s1, s2 …}, is the set of all objects that are members of s1, or s2, or ….
Similarly, the intersection of sets r and s is the set of all objects that are
members of r and s. Thus the intersection of r and s, (r ∩ s) = {n}, and
⋂{s1, s2 …} is the set of all objects that are members of s1, and s2, and ….

V. Let s^n be the set of all n-tuples formed from members of s. Then an n-place
relation on set s is any subset of s^n. Thus, for example, {⟨m, n⟩ | m is married
to n} is a subset of the pairs of people, and so is a 2-place relation on the set of
people. An n-place function from r^n to s is a set of pairs whose first member
is an element of r^n and whose second member is an element of s, where
no member of r^n is paired with more than one member of s. Thus ⟨⟨1, 1⟩, 2⟩
and ⟨⟨1, 2⟩, 3⟩ might be members of an addition function. ⟨⟨1, 1⟩, 2⟩ and
⟨⟨1, 1⟩, 3⟩ could not be members of the same function. A total function from
r^n to s is one that pairs each member of r^n with some member of s. We think
of the first element of these pairs as an input, and the second as the function's
output for that input. Thus if ⟨⟨m, n⟩, o⟩ ∈ f we say f(m, n) = o.
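These set-theoretic notions translate directly into code, with n-tuples as tuples, relations as sets of tuples, and functions as mappings from input tuples to outputs. A small illustration (not part of the text):

```python
from itertools import product

U = {'m', 'n', 'o'}

# A 2-place relation on U is any subset of U^2 (any set of pairs from U).
married = {('m', 'n'), ('n', 'm')}
assert married <= set(product(U, repeat=2))

# A total 2-place function from U^2 to U pairs EACH input with exactly ONE output.
f = {pair: 'm' for pair in product(U, repeat=2)}    # a constant function
assert set(f) == set(product(U, repeat=2))          # totality: every input covered
assert len(f) == 3 ** 2                             # 3^2 = 9 members
print(f[('n', 'o')])                                # f(n, o) = m
```

A dict enforces the "no input paired with more than one output" condition automatically; totality is the extra check that every member of U^2 appears as a key.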


of U. Each n-place relation symbol is associated with a subset of U^n, a set whose
members are of the sort ⟨a1 … an⟩ where a1 … an are elements of U. And
each n-place function symbol is associated with a set whose members are of the sort
⟨⟨a1 … an⟩, b⟩ where a1 … an and b are elements of U. And where U = {a, b, c …},
I[=] is {⟨a, a⟩, ⟨b, b⟩, ⟨c, c⟩, …}. Notice that U may be any non-empty set, and so
need not be countable. Any such assignments count as a quantificational interpretation.

Intuitively, the universe contains whatever objects are under consideration in a
given context. Thus one may ask whether "everyone understands the notion of
an interpretation," and have in mind some limited collection of individuals, not
literally everyone. Constant symbols work like proper names: Constant symbol a
names the object I[a] with which it is associated. So, for example, in Lq we might
set I[b] to Bill, and I[h] to Hillary. Relation symbols are interpreted like predicates:
Relation symbol R^n applies to the n-tuples with which it is associated. Thus, in Lq
where U is the set of all people, we might set I[H1] to {o | o is happy},2 and I[L2]
to {⟨m, n⟩ | m loves n}. Then if Bill is happy, H applies to Bill, and if Bill loves
Hillary, L applies to ⟨Bill, Hillary⟩, though if she is mad enough, L might not apply
to ⟨Hillary, Bill⟩. Function symbols are used to pick out one object by means of
other(s). Thus, when we say that Bill's father is happy, we pick out an object (the
father) by means of another (Bill). Similarly, function symbols are like "oblique"
names which pick out objects in response to inputs. Such behavior is commonplace
in mathematics when we say, for example, that 3 + 3 is even, and we are talking
about 6. Thus we might assign {⟨m, n⟩ | n is the father of m} to one-place function
symbol f, and {⟨⟨m, n⟩, o⟩ | m plus n = o} to two-place function symbol p.

For some examples of interpretations, let us return to the language L<NT from
section 2.2.5 on p. 61. Recall that L<NT includes just constant symbol ∅; two-place relation symbols <, =; one-place function symbol S; and two-place function symbols
× and +. Given these symbols, terms and formulas are generated in the usual way.
Where N is the set {0, 1, 2 …} of natural numbers3 and the successor of any integer
is the integer after it, the standard interpretation N1 for L<NT has universe N with,

N1  N1[∅] = 0
    N1[<] = {⟨m, n⟩ | m, n ∈ N, and m is less than n}
    N1[S] = {⟨m, n⟩ | m, n ∈ N, and n is the successor of m}
    N1[+] = {⟨⟨m, n⟩, o⟩ | m, n, o ∈ N, and m plus n equals o}
    N1[×] = {⟨⟨m, n⟩, o⟩ | m, n, o ∈ N, and m times n equals o}

where it is automatic from QI that N1[=] is {⟨0, 0⟩, ⟨1, 1⟩, ⟨2, 2⟩, …}. The standard
interpretation N of the minimal language LNT, which omits the < symbol, is like N1
but without the interpretation of <. These definitions work just as we expect. Thus,
for example,

(AD)  N1[S] = {⟨0, 1⟩, ⟨1, 2⟩, ⟨2, 3⟩ …}
      N1[<] = {⟨0, 1⟩, ⟨0, 2⟩, ⟨0, 3⟩, … ⟨1, 2⟩, ⟨1, 3⟩ …}
      N1[+] = {⟨⟨0, 0⟩, 0⟩, ⟨⟨0, 1⟩, 1⟩, ⟨⟨0, 2⟩, 2⟩, … ⟨⟨1, 0⟩, 1⟩, ⟨⟨1, 1⟩, 2⟩, …}

The standard interpretation represents the way you have understood these symbols
since grade school.

2 Or {⟨o⟩ | o is happy}. As mentioned in the set theory guide, one-tuples are collapsed into their
members.
3 There is a problem of terminology: Strangely, many texts for elementary and high school mathematics exclude zero from the natural numbers, where most higher-level texts do not. We take the latter
course.
But there is nothing sacred about this interpretation. Thus, for example, we might
introduce an I with U = {Bill, Hill} and,

I   I[∅] = Bill
    I[<] = {⟨Hill, Hill⟩, ⟨Hill, Bill⟩}
    I[S] = {⟨Bill, Bill⟩, ⟨Hill, Hill⟩}
    I[+] = {⟨⟨Bill, Bill⟩, Bill⟩, ⟨⟨Bill, Hill⟩, Bill⟩, ⟨⟨Hill, Bill⟩, Hill⟩, ⟨⟨Hill, Hill⟩, Hill⟩}
    I[×] = {⟨⟨Bill, Bill⟩, Hill⟩, ⟨⟨Bill, Hill⟩, Bill⟩, ⟨⟨Hill, Bill⟩, Bill⟩, ⟨⟨Hill, Hill⟩, Bill⟩}

This assigns a member of the universe to the constant symbol; a set of pairs to the
two-place relation symbol (where the interpretation of = is automatic); a total one-place
function to S, and total two-place functions to × and +. So it counts as an interpretation
of L<NT.

It is frequently convenient to link assignments with bits of (relatively) ordinary
language. This is a key to translation, as explored in the next chapter! But there is
no requirement that we link up with ordinary language. All that is required is that we
assign a member of U to the constant symbol, a subset of U^2 to the two-place relation
symbol, and a total function from U^n to U to each n-place function symbol. That is
all that is required, and nothing beyond that is required in order to say what the
function and predicate symbols mean. So I counts as a legitimate (though nonstandard) interpretation of L<NT. With a language like Lq it is not always possible to
specify assignments for all the symbols in the language. Even so, we can specify
a partial interpretation, an interpretation for the symbols that matter in a given
context.
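Interpretations of this finite sort are easy to write down concretely. The sketch below (a Python illustration only) records the nonstandard interpretation I above and checks the conditions QI imposes: the relation is a set of pairs on U, and the function symbols get total functions.

```python
from itertools import product

U = {'Bill', 'Hill'}

I_zero  = 'Bill'                                     # interpretation of the constant
I_less  = {('Hill', 'Hill'), ('Hill', 'Bill')}       # I[<], a 2-place relation
I_S     = {'Bill': 'Bill', 'Hill': 'Hill'}           # I[S], a total 1-place function
I_plus  = {('Bill', 'Bill'): 'Bill', ('Bill', 'Hill'): 'Bill',
           ('Hill', 'Bill'): 'Hill', ('Hill', 'Hill'): 'Hill'}
I_times = {('Bill', 'Bill'): 'Hill', ('Bill', 'Hill'): 'Bill',
           ('Hill', 'Bill'): 'Bill', ('Hill', 'Hill'): 'Bill'}
I_eq    = {(o, o) for o in U}                        # automatic from QI

assert I_zero in U
assert I_less <= set(product(U, repeat=2))           # a 2-place relation on U
assert set(I_S) == U                                 # totality for S
assert set(I_plus) == set(I_times) == set(product(U, repeat=2))
print("I is a legitimate (though nonstandard) interpretation")
```

Nothing here depends on the assignments "meaning" anything; the checks confirm only the structural conditions of QI.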
E4.7. Suppose Bill and Hill have another child and (for reasons known only to
them) name him Dill. Where U = {Bill, Hill, Dill}, give another interpretation
J for L<NT. Arrange your interpretation so that: (i) J[∅] ≠ Bill; (ii) there are
exactly five pairs in J[<]; and (iii) for any m, ⟨⟨m, Bill⟩, Dill⟩ and ⟨⟨Bill,
m⟩, Dill⟩ are in J[+]. Include J[=] in your account. Hint: a two-place total
function on a three-member universe should have 3^2 = 9 members.

4.2.2 Term Assignments

For some language L, say U = {o | o is a person}, one-place predicate H is assigned
the set of happy people, and constant b is assigned Bill. Perhaps H applies to Bill. In
this case, Hb comes out true. Intuitively, however, we cannot say that Hx is either
true or false on this interpretation, precisely because there is no particular individual
that x picks out; we do not know who is supposed to be happy. However, we will be
able to say that Hx is satisfied or not when the interpretation is supplemented with a
variable (designation) assignment d associating each variable with some individual
in U.

Given a language L and interpretation I, a variable assignment d is a total function from the variables of L to objects in the universe U. Conceived pictorially, where
U = {o1, o2 …}, d and h are variable assignments.

d:  i → o1, j → o2, k → o3, l → o4, m → o5, n → o6, o → o7, p → o8, …

h:  i → o1, j → o2, k → o2, l → o4, m → o5, n → o6, o → o7, p → o8, …

If d assigns o to x we write d[x] = o. So d[k] = o3 and h[k] = o2. Observe that a
variable assignment is a total function: it assigns some element of U to every variable
of L. But this leaves room for one thing assigned to different variables, and things
assigned to no variable at all. For any assignment d, d(x|o) is the assignment that is
just like d except that x is assigned to o. Thus, d(k|o2) = h. Similarly,

k:  i → o1, j → o2, k → o2, l → o5, m → o5, n → o6, o → o7, p → o8, …

d(k|o2, l|o5) = h(l|o5) = k. Of course, if some d already has x assigned to o, then
d(x|o) is just d. Thus, for example, k(i|o1) is just k itself. We will be willing to say


that Hx is satisfied or not satisfied relative to an interpretation supplemented by a variable assignment. But before we get to satisfaction, we need the general notion of a term assignment.
In general, a term contributes to a formula by picking out some member of the universe U: terms act something like names. We have seen that an interpretation I assigns a member I[c] of U to each constant symbol c. And a variable assignment d assigns a member d[x] to each variable x. But these are assignments just to basic terms. An interpretation assigns to function symbols, not members of U, but certain complex sets. Still, an interpretation I supplemented by a variable assignment d is sufficient to associate a member Id[t] of U with any term t of L. Where ⟨⟨a1 … an⟩, b⟩ ∈ I[hⁿ], let I[hⁿ]⟨a1 … an⟩ = b; that is, I[hⁿ]⟨a1 … an⟩ is the thing the function I[hⁿ] associates with input ⟨a1 … an⟩. Thus, for example, N1[+]⟨1, 1⟩ = 2 and I[+]⟨Bill, Hill⟩ = Bill. Then for any interpretation I, variable assignment d, and term t,
TA (c) If c is a constant, then Id[c] = I[c].
   (v) If x is a variable, then Id[x] = d[x].
   (f) If hⁿ is a function symbol and t1 … tn are terms, then Id[hⁿt1 … tn] = I[hⁿ]⟨Id[t1] … Id[tn]⟩.

The first two clauses take over assignments to constants and variables from I and d. The last clause is parallel to the one by which terms are formed. The assignment to a complex term depends on assignments to the terms that are its parts, with the interpretation of the relevant function symbol. Again, the definition is recursive, and we can see how it works on a tree, in this case one with the very same shape as the one by which we see that an expression is in fact a term. Say the interpretation of L<NT is I as above, and d[x] = Hill; then Id[S(Sx + ∅)] = Hill.
[Diagram (AE): a term tree for S(Sx + ∅). In the top row, x is assigned Hill by TA(v) and ∅ is assigned Bill by TA(c). With the input, since ⟨Hill, Hill⟩ ∈ I[S], by TA(f), Sx is assigned Hill. With the inputs, since ⟨⟨Hill, Bill⟩, Hill⟩ ∈ I[+], by TA(f), (Sx + ∅) is assigned Hill. With the input, since ⟨Hill, Hill⟩ ∈ I[S], by TA(f), S(Sx + ∅) is assigned Hill.]

As usual, basic elements occur in the top row. Other elements are fixed by ones that
come before. The hard part about definition TA is just reading clause (f). It is perhaps


easier to apply in practice than to read. For a complex term, assignments to the terms that are its parts, together with the assignment to the function symbol, determine the assignment to the whole. And this is just what clause (f) says. For practice, convince yourself that Id(x|Bill)[S(Sx + ∅)] = Bill, and where N is as above and d[x] = 1, Nd[S(Sx + ∅)] = 3.
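The recursion in TA is easy to mechanize when the universe is finite. The following Python sketch is not from the text: it represents terms as nested tuples and an interpretation as dictionaries, with a small fragment of the numeric interpretation N standing in for a full model. The names eval_term, I_const, and I_fun are illustrative.

```python
# A minimal sketch of definition TA (term assignment). Terms are nested
# tuples: a constant or variable is a string, and a complex term is
# (function_symbol, arg1, ..., argn).

def eval_term(t, I_const, I_fun, d):
    """Id[t] per TA: constants via I, variables via d, complex terms via TA(f)."""
    if isinstance(t, str):
        if t in I_const:           # TA(c): Id[c] = I[c]
            return I_const[t]
        return d[t]                # TA(v): Id[x] = d[x]
    f, *args = t                   # TA(f): Id[f t1..tn] = I[f]<Id[t1]..Id[tn]>
    vals = tuple(eval_term(a, I_const, I_fun, d) for a in args)
    return I_fun[f][vals]

# A finite fragment of the numeric interpretation N: 'O' plays the role of
# the zero symbol, S the successor function, '+' addition.
I_const = {'O': 0}
I_fun = {
    'S': {(n,): n + 1 for n in range(10)},
    '+': {(m, n): m + n for m in range(10) for n in range(10)},
}
d = {'x': 1}

# Nd[S(Sx + O)] with d[x] = 1: Sx = 2, Sx + O = 2, S(...) = 3, as in the text.
print(eval_term(('S', ('+', ('S', 'x'), 'O')), I_const, I_fun, d))  # 3
```

The tree from diagram (AE) and this recursion do the same work: each call to eval_term corresponds to one node of the tree, with the tips handled by TA(c) and TA(v).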
E4.8. For L<NT and interpretation N1 as above on p. 113, let d include,

w → 1    x → 2    y → 3    z → 4

and use trees to determine each of the following.

*a. N1d[+xS∅]
b. N1d[x + (SS∅ × x)]
c. N1d[w × S(∅ + (y × SSSz))]
*d. N1d(x|4)[x + (SS∅ × x)]
e. N1d(x|1, w|2)[S(x × (S∅ + Sw))]
E4.9. For L<NT and interpretation I as above on p. 114, let d include,

w → Bill    x → Hill    y → Hill    z → Hill

and use trees to determine each of the following.

*a. Id[+xS∅]
b. Id[x + (SS∅ × x)]
c. Id[w × S(∅ + (y × SSSz))]
*d. Id(x|Bill)[x + (SS∅ × x)]
e. Id(x|Bill, w|Hill)[S(x × (S∅ + Sw))]
E4.10. Consider your interpretation J for L<NT from E4.7. Supposing that d[w] = Bill, d[y] = Hill, and d[z] = Dill, determine Jd[w × S(∅ + (y × SSSz))]. Explain how your interpretation has this result.


E4.11. For Lq and an interpretation K with universe U = {Amy, Bob, Chris} with,

K[a] = Amy
K[c] = Chris
K[f¹] = {⟨Amy, Bob⟩, ⟨Bob, Chris⟩, ⟨Chris, Amy⟩}
K[g²] = {⟨⟨Amy, Amy⟩, Amy⟩, ⟨⟨Amy, Bob⟩, Chris⟩, ⟨⟨Amy, Chris⟩, Bob⟩, ⟨⟨Bob, Bob⟩, Bob⟩, ⟨⟨Bob, Chris⟩, Amy⟩, ⟨⟨Bob, Amy⟩, Chris⟩, ⟨⟨Chris, Chris⟩, Chris⟩, ⟨⟨Chris, Amy⟩, Bob⟩, ⟨⟨Chris, Bob⟩, Amy⟩}

where d[x] = Bob, d[y] = Amy, and d[z] = Bob, use trees to determine each of the following.

a. Kd[f¹c]
*b. Kd[g²yf¹c]
c. Kd[g²g²axf¹c]
d. Kd(x|Chris)[g²g²axf¹c]
e. Kd(x|Amy)[g²g²g²xyzg²f¹af¹c]

4.2.3 Satisfaction

A term assignment depends on an interpretation supplemented by an assignment for variables, that is, on some Id. Similarly, a formula's satisfaction depends on both the interpretation and the variable assignment. As we shall see, however, truth is fixed by the interpretation I alone, just as in the sentential case. If a formula P is satisfied on I supplemented with d, we write Id[P] = S; if P is not satisfied on I with d, Id[P] = N. For any interpretation I with variable assignment d,
SF (s) If S is a sentence letter, then Id[S] = S iff I[S] = T; otherwise Id[S] = N.
   (r) If Rⁿ is an n-place relation symbol and t1 … tn are terms, then Id[Rⁿt1 … tn] = S iff ⟨Id[t1] … Id[tn]⟩ ∈ I[Rⁿ]; otherwise Id[Rⁿt1 … tn] = N.
   (∼) If P is a formula, then Id[∼P] = S iff Id[P] = N; otherwise Id[∼P] = N.
   (→) If P and Q are formulas, then Id[(P → Q)] = S iff Id[P] = N or Id[Q] = S (or both); otherwise Id[(P → Q)] = N.
   (∀) If P is a formula and x is a variable, then Id[∀xP] = S iff for any o ∈ U, Id(x|o)[P] = S; otherwise Id[∀xP] = N.
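For a finite universe, the SF recursion can likewise be run mechanically. Here is a minimal Python sketch, an illustration rather than the text's official tree method, limited to formulas whose terms are basic. It implements SF(r), SF(∼), SF(→), and SF(∀) for the interpretation L used below, with the modified assignment d(x|o) realized by copying the assignment dictionary.

```python
# A sketch of definition SF over a finite universe. Formulas are nested
# tuples: ('R', name, term, ...) for atomics, ('~', P), ('->', P, Q), and
# ('A', x, P) for the universal. Satisfaction returns True for S, False for N.

U = ['Bob', 'Sue', 'Jim']                      # interpretation L from the text
I_rel = {'B': {('Sue',)}, 'C': {('Bob', 'Sue'), ('Sue', 'Jim')}}

def sat(P, d):
    """Id[P] = S? following SF(r), SF(~), SF(->), SF(forall)."""
    op = P[0]
    if op == 'R':                              # SF(r): atomic formula
        _, name, *terms = P
        vals = tuple(d[t] for t in terms)      # basic terms only, via d
        return vals in I_rel[name]
    if op == '~':                              # SF(~)
        return not sat(P[1], d)
    if op == '->':                             # SF(->)
        return (not sat(P[1], d)) or sat(P[2], d)
    if op == 'A':                              # SF(forall): every x-variant
        _, x, body = P
        return all(sat(body, {**d, x: o}) for o in U)
    raise ValueError(op)

# Ld[forall y ~Cxy] with d[x] = Bob, as in diagram (AJ) below: N, since
# <Bob, Sue> is in L[C].
print(sat(('A', 'y', ('~', ('R', 'C', 'x', 'y'))), {'x': 'Bob'}))  # False
```

The dictionary update {**d, x: o} is exactly the d(x|o) of the definitions: the same assignment, except that x goes to o.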


SF(s), SF(∼), and SF(→) are closely related to ST from before, though satisfaction applies now to any formulas and not only to sentences. Other clauses are new. SF(s) and SF(r) determine satisfaction for atomic formulas. Satisfaction for other formulas depends on satisfaction of their immediate subformulas. First, the satisfaction of a sentence letter works just like truth before: if a sentence letter is true on an interpretation, then it is satisfied. Thus satisfaction for sentence letters depends only on the interpretation, and not at all on the variable assignment.
In contrast, to see if Rⁿt1 … tn is satisfied, we find out which things are assigned to the terms. It is natural to think about this on a tree like the one by which we show that the expression is a formula. Thus given interpretation I for L<NT from p. 114, consider (x × S∅) < x; and compare cases with d[x] = Bill and h[x] = Hill. It will be convenient to think about the expression in its unabbreviated form, <×xS∅x.
[Diagrams (AF) and (AG): term trees for <×xS∅x, with term assignments calculated above a dotted line and satisfaction below it. In (AF), worked out on assignment d with d[x] = Bill, the terms ×xS∅ and x are assigned Hill and Bill, and the formula is marked S. In (AG), on assignment h with h[x] = Hill, the terms are assigned Bill and Hill, and the formula is marked N.]

Above the dotted line, we calculate term assignments in the usual way. Assignment d is worked out on the left, and h on the right. But <×xS∅x is a formula of the sort <t1t2. From diagram (AF), Id[×xS∅] = Hill, and Id[x] = Bill. So the assignments to t1 and t2 are Hill and Bill. Since ⟨Hill, Bill⟩ ∈ I[<], by SF(r), Id[<×xS∅x] = S. But from (AG), Ih[×xS∅] = Bill, and Ih[x] = Hill. And ⟨Bill, Hill⟩ ∉ I[<], so by SF(r), Ih[<×xS∅x] = N. Rⁿt1 … tn is satisfied just in case the n-tuple of the thing assigned to t1, and … and the thing assigned to tn, is in the set assigned to the relation symbol. To decide if Rⁿt1 … tn is satisfied, we find out what things are assigned to the term or terms, and then look to see whether the relevant ordered sequence is in the assignment. The simplest sort of case is when there is just one term: Id[R¹t] = S just in case Id[t] ∈ I[R¹]. When there is more than one term, we look for the objects taken in order.
SF(∼) and SF(→) work just as before. And we could work out their consequences on trees or tables for satisfaction as before. In this case, though, to accommodate quantifiers it will be convenient to turn the trees on their sides. For this, we begin by constructing the tree in the forward direction, from left to right, and then determine satisfaction the other way, from the branch tips back to the trunk. Where the members of U are {m, n, …}, the branch conditions are as follows:

B(s) Id[S]. Forward: does not branch. Backward: the tip is S iff I[S] = T.
B(r) Id[Rⁿt1 … tn]. Forward: branches only for terms. Backward: the tip is S iff ⟨Id[t1] … Id[tn]⟩ ∈ I[Rⁿ].
B(∼) Id[∼P]. Forward: one branch, to Id[P]. Backward: the trunk is S iff the branch is N.
B(→) Id[(P → Q)]. Forward: two branches, to Id[P] (top) and Id[Q] (bottom). Backward: the trunk is S iff the top branch is N or the bottom branch is S (or both).
B(∀) Id[∀xP]. Forward: one branch for each member of U, to Id(x|m)[P], Id(x|n)[P], and so forth. Backward: the trunk is S iff every branch is S.

A formula branches according to its main operator. If it is atomic, it does not branch (or branches only for its terms); (AF) and (AG) are examples of branching for terms, only oriented vertically. If the main operator is ∼, a formula has just one branch; if its main operator is →, it has two branches; and if its main operator is ∀ it has as many branches as there are members of U. This last condition makes it impractical to construct these trees in all but the most simple cases, and impossible when U is infinite. Still, we can use them to see how the definitions work.
When there are no quantifiers, we should be able to recognize these trees as a mere sideways variant of ones we have seen before. Thus, suppose an interpretation L with U = {Bob, Sue, Jim}, L[A] = T, L[B¹] = {Sue}, and L[C²] = {⟨Bob, Sue⟩, ⟨Sue, Jim⟩}, where variable assignment d[x] = Bob. Then,
[Diagram (AH): a sideways tree for ∼A → Bx. At stage (1) the conditional branches to ∼A (top) and Bx (bottom) at stage (2); ∼A branches to A at stage (3), and Bx branches only for the term x, which is assigned Bob. Working backward, A is S, so ∼A is N; Bx is N; and the conditional at (1) is S.]


The main operator at stage (1) is →; so there are two branches. Bx on the bottom is atomic, so the formula branches no further, though we use TA to calculate the term assignment. On the top at (2), ∼A has main operator ∼. So there is one branch. And we are done with the forward part of the tree. Given this, we can calculate satisfaction from the tips, back toward the trunk. Since L[A] = T, by B(s), the tip at (3) is S. And since this is S, by B(∼), the top formula at (2) is N. But since Ld[x] = Bob, and Bob ∉ L[B], by B(r), the bottom at (2) is N. And with both the top and bottom at (2) N, by B(→), the formula at (1) is S. So Ld[∼A → Bx] = S. You should be able to recognize that the diagram (AH) rotated counterclockwise by 90 degrees would be a mere variant of diagrams we have seen before. And the branch conditions merely implement the corresponding conditions from SF.
Things are more interesting when there are quantifiers. For a quantifier, there are as many branches as there are members of U. Thus working with the same interpretation, consider Ld[∀y∼Cxy]. If there were just one thing in the universe, say U = {Bob}, the tree would branch as follows,
[Diagram (AI): with U = {Bob}, Ld[∀y∼Cxy] has a single branch at stage (2) for the quantifier, Ld(y|Bob)[∼Cxy], which branches in turn at stage (3) to Ld(y|Bob)[Cxy]; at stage (4) the terms x and y are both assigned Bob. The formula at (3) is N, the branch at (2) is S, and the universal at (1) is S.]

The main operator at (1) is the universal quantifier. Supposing one thing in U, there is the one branch. Notice that the variable assignment d becomes d(y|Bob). The main operator at (2) is ∼. So there is the one branch, carrying forward the assignment d(y|Bob). The formula at (3) is atomic, so the only branching is for the term assignment. Then, in the backward direction, Ld(y|Bob) still assigns Bob to x; and Ld(y|Bob) assigns Bob to y. Since ⟨Bob, Bob⟩ ∉ L[C²], the branch at (3) is N; so the branch at (2) is S. And since all the branches for the universal quantifier are S, by B(∀), the formula at (1) is S.
But L was originally defined with U = {Bob, Sue, Jim}. Thus the quantifier requires not one but three branches, and the proper tree is as follows.
[Diagram (AJ): with U = {Bob, Sue, Jim}, Ld[∀y∼Cxy] has three branches at stage (2): Ld(y|Bob)[∼Cxy] (S), Ld(y|Sue)[∼Cxy] (N), and Ld(y|Jim)[∼Cxy] (S). These branch at stage (3) to Ld(y|Bob)[Cxy] (N), Ld(y|Sue)[Cxy] (S), and Ld(y|Jim)[Cxy] (N); at stage (4), x is assigned Bob on each branch, while y is assigned Bob, Sue, and Jim respectively. The universal at (1) is N.]


Now there are three branches for the quantifier. Note the modification of d on each branch, and the way the modified assignments carry forward and are used for evaluation at the tips. d(y|Sue), say, has the same assignment to x as d, but assigns Sue to y. And similarly for the rest. This time, not all the branches for the universal quantifier are S. So the formula at (1) is N. You should convince yourself that it is S on Lh where h[x] = Jim. It would be S also with the assignment d as above, but formula ∀y∼Cyx.
(AK) on p. 123 is an example for ∀x((Sx < x) → ∀y((Sy + ∅) = x)) using interpretation I from p. 114 and L<NT. This case should help you to see how all the parts fit together in a reasonably complex example. It turns out to be helpful to think about the formula in its unabbreviated form, ∀x(<Sxx → ∀y=+Sy∅x). For this case notice especially how, when multiple quantifiers come off, a variable assignment once modified is simply modified again for the new variable. If you follow through the details of this case by the definitions, you are doing well.
A word of advice: once you have the idea, constructing these trees to determine satisfaction is a mechanical (and tedious) process. About the only way to go wrong or become confused is by skipping steps or modifying the form of trees. But, very often, skipping steps or modifying form does correlate with confusion! So it is best to stick with the official pattern, and so to follow the way it forces you through definitions SF and TA.
E4.12. Supplement interpretation K for E4.11 so that

K[S] = T
K[H¹] = {Amy, Bob}
K[L²] = {⟨Amy, Amy⟩, ⟨Amy, Bob⟩, ⟨Bob, Bob⟩, ⟨Bob, Chris⟩, ⟨Amy, Chris⟩}

Where d[x] = Amy, d[y] = Bob, use trees to determine whether the following formulas are satisfied on K with d.

*a. Hx
b. Lxa
c. Hf¹y
d. ∀xLyx
e. ∀xLxg²cx
*f. ∀x(Hx → S)
*g. ∀y∀xLxy
*h. ∀y∀xLyx
i. ∀x(Hf¹x → Lxx)
j. ∀x(Hx → ∀yLyx)

[Diagram (AK): a full sideways tree for ∀x(<Sxx → ∀y=+Sy∅x) on interpretation I. Forward: since there are two objects in U, there are two branches for each quantifier. At stage (2), for the x-quantifier, d is modified for assignments to x, and at stage (4) for the y-quantifier those assignments are modified again. <Sxx and =+Sy∅x are atomic; branching for terms continues at stages (a) and (b) in the usual way. Backward: at the tips for terms, apply the variable assignment from the corresponding atomic formula. Thus in the top at (b) with d(x|Bill, y|Bill), both x and y are assigned Bill. The assignment to ∅ comes from I. For (4), recall that I[=] is automatically {⟨Bill, Bill⟩, ⟨Hill, Hill⟩}. After that, the calculation at each stage is straightforward, and the whole formula comes out satisfied.]


E4.13. What, if anything, changes with the variable assignment h where h[x] = Chris and h[y] = Amy? Challenge: explain why differences in the initial variable assignment cannot matter for the evaluation of (e)-(j).

4.2.4 Truth and Validity

It is a short step from satisfaction to definitions for truth and validity. Formulas are satisfied or not on an interpretation I together with a variable assignment d. But whether a formula is true or false on an interpretation depends on satisfaction relative to every variable assignment.

TI A formula P is true on an interpretation I iff with any d for I, Id[P] = S. P is false on I iff with any d for I, Id[P] = N.
A formula is true on I just in case it is satisfied with every variable assignment for I. From (AJ), then, we are already in a position to see that ∀y∼Cxy is not true on L. For there is a variable assignment d on which it is N. Neither is ∀y∼Cxy false on L, insofar as it is satisfied when the assignment is h. Since there is an assignment on which it is N, it is not satisfied on every assignment, and so is not true. Since there is an assignment on which it is S, it is not N on every assignment, and so is not false. In contrast, from (AK), ∀x((Sx < x) → ∀y((Sy + ∅) = x)) is true on I. For some variable assignment d, the tree shows directly that Id[∀x((Sx < x) → ∀y((Sy + ∅) = x))] = S. But the reasoning for the tree makes no assumptions whatsoever about d. That is, with any variable assignment, we might have reasoned in just the same way, to reach the conclusion that the formula is satisfied. Since it comes out satisfied no matter what the variable assignment may be, by TI, it is true.
In general, if a sentence is satisfied on some d for I, then it is satisfied on every d for I. In a sentence, every variable is bound; so by the time you get to formulas without quantifiers at the tips of a tree, assignments are of the sort d(x|m, y|n, …) for every variable in the formula; so satisfaction depends just on assignments that are set on the branch itself, and the initial d is irrelevant to satisfaction at the tips and thus to evaluation of the formula as a whole. Satisfaction depends on adjustments to the assignment that occur within the tree, rather than on the initial assignment itself. So every starting d has the same result. So if a sentence is satisfied on some d for I, it is satisfied on every d for I, and therefore true on I. Similarly, if a sentence is N on some d for I, it is N on every d for I, and therefore false on I.
In contrast, a formula with free variables may be sensitive to the initial variable assignment. Thus, in the ordinary case, Hx is not true and not false on an interpretation, depending as it does on the assignment to x. We have seen this pattern so far in examples and exercises: for formulas with free variables, there may be variable assignments where they are satisfied, and variable assignments where they are not. Therefore the formulas fail to be either true or false by TI. Sentences, on the other hand, are satisfied on every variable assignment if they are satisfied on any, and not satisfied on every assignment if they are not satisfied on any. Therefore the sentences from our examples and exercises come out either true or false.
But a word of caution is in order: sentences are always true or false on an interpretation. And, in the ordinary case, formulas with free variables are neither true nor false. But this is not always so. (x = x) is true on any I. (Why?) Similarly, I[Hx] = T if I[H] = U, and I[Hx] = F if I[H] = {}. And ∼∀x(x = y) is true on any I with a U that has more than one member. To see this, suppose for some I, U = {m, n, …}; then for an arbitrary d the tree is as follows,
[Diagram (AL): a tree for ∼∀x(x = y). Stage (1), Id[∼∀x(x = y)], branches to stage (2), Id[∀x(x = y)], which branches at stage (3) to Id(x|m)[x = y], Id(x|n)[x = y], and so forth, one branch for each member of U; at stage (4), on each branch x is assigned the corresponding member of U and y is assigned d[y].]

No matter what d is like, at most one branch at (3) is S. If d[y] = m then the top branch at (3) is S and the rest are N. If d[y] = n then the second branch at (3) is S and the others are N. And so forth. So in this case where U has more than one member, at least one branch is N for any d. So the universally quantified expression is N for any d, and the negation at (1) is S for any d. So by TI it is true. So satisfaction for a formula may but need not be sensitive to the particular variable assignment under consideration. Again, though, a sentence is always true or false depending only on the interpretation. To show that a sentence is true, it is enough to show that it is satisfied on some d, from which it follows that it is satisfied on any. For a formula with free variables the matter is more complex, though you can show that such a formula is not true by finding an assignment that makes it N, and not false by finding an assignment that makes it S.
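On a finite universe, TI itself can be checked directly: a formula is true just in case it is satisfied on every assignment to its variables. The sketch below is illustrative (names invented, atomics limited to equality); it confirms the claim about ∼∀x(x = y) for a two-member universe.

```python
# A sketch of TI over a finite universe: check satisfaction on every
# assignment of the listed variables.

from itertools import product

U = ['m', 'n']

def sat(P, d):
    """SF restricted to '=', '~', and the universal for this example."""
    op = P[0]
    if op == '=':                              # SF(r) for the equality symbol
        return d[P[1]] == d[P[2]]
    if op == '~':                              # SF(~)
        return not sat(P[1], d)
    if op == 'A':                              # SF(forall): every x-variant
        _, x, body = P
        return all(sat(body, {**d, x: o}) for o in U)
    raise ValueError(op)

def true_on_I(P, variables):
    """TI: true iff satisfied with every assignment to the listed variables."""
    return all(sat(P, dict(zip(variables, vals)))
               for vals in product(U, repeat=len(variables)))

# ~forall x (x = y): true on any I whose universe has more than one member,
# as the text argues via diagram (AL).
P = ('~', ('A', 'x', ('=', 'x', 'y')))
print(true_on_I(P, ['x', 'y']))  # True
```

By contrast, the plain formula ∀x(x = y) comes out not true on this check, since no single value of y equals both members of U.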
Given the notion of truth, quantificational validity works very much as before. Where Γ is a set of formulas, say I[Γ] = T iff I[P] = T for each formula P ∈ Γ. Then for any formula P,

QV Γ quantificationally entails P iff there is no quantificational interpretation I such that I[Γ] = T but I[P] ≠ T.


quantificationally entails P when there is no quantificational interpretation that


makes the premises true and the conclusion not. If quantificationally entails P
we write,  P , and say an argument whose premises are the members of and
conclusion is P is quantificationally valid. does not quantificationally entail P
( P ) when there is some quantificational interpretation on which all the premises
are true, but the conclusion is not true (notice that there is a difference between
being not true, and being false). As before, if Q1 : : : Qn are the members of , we
sometimes write Q1 : : : Qn  P in place of  P . If there are no premises, listing
all the members of individually, we simply write  P . If  P , then P is logically
true. Notice again the double turnstile , in contrast to the single turnstile ` for
derivations.
In the quantificational case, demonstrating semantic validity is problematic. In the sentential case, we could simply list all the ways a sentential interpretation could make basic sentences T or F. In the quantificational case, it is not possible to list all interpretations. Consider just interpretations with universe N: the interpretation of a one-place relation symbol R might be {1} or {2} or {3} …; it might be {1, 2} or {1, 3}, or {1, 3, 5, …}, or whatever. There are infinitely many options for this one relation symbol, and so at least as many for quantificational interpretations in general. Similarly, when the universe is so large, by our methods we cannot calculate even satisfaction and truth in arbitrary cases, for quantifiers would have an infinite number of branches. One might begin to suspect that there is no way to demonstrate semantic validity in the quantificational case. There is a way. And we respond to this concern in chapter 7.
For now, though, we rest content with demonstrating invalidity. To show that an argument is invalid, we do not need to consider all possible interpretations; it is enough to find one interpretation on which the premises are true, and the conclusion is not. (Compare the invalidity format from chapter 1 and shortcut truth tables in this chapter.) An argument is quantificationally valid just in case there is no I on which its premises are true and its conclusion is not true. So to show that an argument is not quantificationally valid, it is sufficient to produce an interpretation that violates this condition: an interpretation on which its premises are true and conclusion is not. This should be enough at least to let us see how the definitions work, and we postpone the larger question about showing quantificational validity to later.
For now, then, our idea is to produce an interpretation, and then to use trees to show that the interpretation makes the premises true, but the conclusion not. Thus, for example, for Lq we can show that ∼∀xPx ⊭ ∼Pa: that an argument with premise ∼∀xPx and conclusion ∼Pa is not quantificationally valid. To see


this, consider an I with U = {1, 2}, I[P] = {1}, and I[a] = 1. Then ∼∀xPx is T on I.

[Diagram (AM): a tree for ∼∀xPx. Id[∼∀xPx] (S) branches to Id[∀xPx] (N), which branches to Id(x|1)[Px] (S), with x assigned 1, and Id(x|2)[Px] (N), with x assigned 2.]

∼∀xPx is satisfied with this d for I; since it is a sentence it is satisfied with any d for I. So by TI it is true on I. But ∼Pa is not true on this I.

[Diagram: a tree for ∼Pa. Id[∼Pa] (N) at (1) branches to Id[Pa] (S) at (2), with a assigned 1 at (3).]

By TA(c), Id[a] = I[a]. So the assignment to a is 1 and the formula at (2) is satisfied, so that the formula at (1) is not. So by TI, I[∼Pa] ≠ T. So there is an interpretation on which the premise is true and the conclusion is not; so ∼∀xPx ⊭ ∼Pa, and the argument is not quantificationally valid. Notice that it is sufficient to show that the conclusion is not true, which is not always the same as showing that the conclusion is false.
Here is another example. We show that ∼∀x∼Px, ∼∀x∼Qx ⊭ ∀y(Py → Qy). One way to do this is with an I that has U = {1, 2} where I[P] = {1} and I[Q] = {2}. Then the premises are true.
[Diagram (AN): trees for the premises. For the first, Id[∼∀x∼Px] (S) branches to Id[∀x∼Px] (N), with branches Id(x|1)[∼Px] (N) and Id(x|2)[∼Px] (S), and tips Id(x|1)[Px] (S), with x assigned 1, and Id(x|2)[Px] (N), with x assigned 2. For the second, Id[∼∀x∼Qx] (S) branches to Id[∀x∼Qx] (N), with branches Id(x|1)[∼Qx] (S) and Id(x|2)[∼Qx] (N), and tips Id(x|1)[Qx] (N) and Id(x|2)[Qx] (S).]

To make ∼∀x∼Px true, we require that there is at least one thing in I[P]. We accomplish this by putting 1 in its interpretation. This makes the top branch at stage (4) S; this makes the top branch at (3) N; so the quantifier at (2) is N and the formula at (1) comes out S. Since it is a sentence and satisfied on the arbitrary assignment, it is true. ∼∀x∼Qx is true for related reasons. For it to be true, we require at least one thing in I[Q]. This is accomplished by putting 2 in its interpretation. But this interpretation does not make the conclusion true.
[Diagram: a tree for the conclusion. Id[∀y(Py → Qy)] (N) branches to Id(y|1)[Py → Qy] (N) and Id(y|2)[Py → Qy] (S), with tips Id(y|1)[Py] (S), Id(y|1)[Qy] (N), Id(y|2)[Py] (N), and Id(y|2)[Qy] (S), where y is assigned 1 on the top pair and 2 on the bottom pair.]
The conclusion is not satisfied so long as something is in I[P] but not in I[Q]. We accomplish this by making the thing in the interpretation of P different from the thing in the interpretation of Q. Since 1 is in I[P] but not in I[Q], there is an S/N pair at (4), so that the top branch at (2) is N and the formula at (1) is N. Since the formula is not satisfied, by TI it is not true. And since there is an interpretation on which the premises are true and the conclusion is not, by QV, the argument is not quantificationally valid.
In general, to show that an argument is not quantificationally valid, you want to think backward to see what kind of interpretation you need to make the premises true but the conclusion not true. It is to your advantage to think of simple interpretations. Remember that U need only be non-empty. So it will often do to work with universes that have just one or two members. And the interpretation of a relation symbol might even be empty! It is often convenient to let the universe be some set of integers. And, if there is any interpretation that demonstrates invalidity, there is sure to be one whose universe is some set of integers, but we will get to this in Part III.
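A countermodel like the one above can also be verified mechanically. The following sketch (illustrative, reusing the recursive-satisfaction idea rather than the text's tree format) checks that on the interpretation just given the premises are satisfied and the conclusion is not; since all three are sentences, a single assignment settles truth.

```python
# Verifying the countermodel from the text: U = {1, 2}, I[P] = {1},
# I[Q] = {2}; premises ~forall x ~Px and ~forall x ~Qx, conclusion
# forall y (Py -> Qy).

U = [1, 2]
I_rel = {'P': {(1,)}, 'Q': {(2,)}}

def sat(P, d):
    op = P[0]
    if op == 'R':                              # SF(r)
        _, name, *terms = P
        return tuple(d[t] for t in terms) in I_rel[name]
    if op == '~':                              # SF(~)
        return not sat(P[1], d)
    if op == '->':                             # SF(->)
        return (not sat(P[1], d)) or sat(P[2], d)
    if op == 'A':                              # SF(forall)
        _, x, body = P
        return all(sat(body, {**d, x: o}) for o in U)
    raise ValueError(op)

premise1 = ('~', ('A', 'x', ('~', ('R', 'P', 'x'))))
premise2 = ('~', ('A', 'x', ('~', ('R', 'Q', 'x'))))
conclusion = ('A', 'y', ('->', ('R', 'P', 'y'), ('R', 'Q', 'y')))

# Sentences: one (empty) assignment settles truth by TI.
print(sat(premise1, {}), sat(premise2, {}), sat(conclusion, {}))
# True True False: premises true, conclusion not, so by QV the argument
# is not quantificationally valid.
```

Checks like this do not replace the trees, which force you through SF and TA step by step; they only confirm a candidate interpretation once you have found it.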
E4.14. For language Lq consider an interpretation I such that U = {1, 2}, and

I[a] = 1
I[A] = T
I[P¹] = {1}
I[f¹] = {⟨1, 2⟩, ⟨2, 1⟩}

Use interpretation I and trees to show that (a) below is not quantificationally valid. Then each of the others can be shown to be invalid on an interpretation


I* that modifies just one of the main parts of I. Produce the modified interpretations, and use them to show that the other arguments also are invalid. Hint: if you are having trouble finding the appropriate modified interpretation, try working out the trees on I, and think about what changes to the interpretation would have the results you want.

a. Pa ⊭ ∀xPx
b. Pa ⊭ ∼∀xPx
*c. ∀xPf¹x ⊭ ∀xPx
d. ∀x(Px → Pf¹x) ⊭ ∀x(Pf¹x → Px)
e. ∀xPx → A ⊭ ∀x(Px → A)
E4.15. Find interpretations and use trees to demonstrate each of the following. Be sure to explain why your interpretations and trees have the desired result.

*a. ∀x(Qx → Px) ⊭ ∀x(Px → Qx)
b. ∀x(Px → Qx), ∼∀x(Rx → Px) ⊭ ∀y(Ry → Qy)
*c. ∼∀xPx ⊭ Pa
d. ∼∀xPx ⊭ ∀x∼Px
e. ∀xPx → ∀xQx, Qb ⊭ Pa → ∀xQx
f. ∼(A → ∀xPx) ⊭ ∀x∼(A → Px)
g. ∀x(Px → Qx), Qa ⊭ ∼∀xPx
*h. ∼∀y∀xRxy ⊭ ∀x∼∀yRxy
i. ∀x∀y(Rxy → Ryx), ∼∀x∼∀yRxy ⊭ ∀xRxx
j. ∀x∀y((y = f¹x) → (x = f¹y)) ⊭ ∀x(Px → Pf¹x)

4.2.5 Abbreviations

Finally, we turn to applications for abbreviations. Consider first a tree for (P ∧ Q), that is, for ∼(P → ∼Q).

[Diagram (AO): stage (1), Id[∼(P → ∼Q)], branches to stage (2), Id[P → ∼Q], which branches at stage (3) to Id[P] (top) and Id[∼Q] (bottom); Id[∼Q] branches at stage (4) to Id[Q].]

The formula at (1) is satisfied iff the formula at (2) is not. But the formula at (2) is not satisfied iff the top at (3) is satisfied and the bottom is not satisfied. And the bottom at (3) is not satisfied iff the formula at (4) is satisfied. So the formula at (1) is satisfied iff P is satisfied and Q is satisfied. The only way for (P ∧ Q) to be satisfied on some I and d is for P and Q both to be satisfied on that I and d. If either P or Q is not satisfied, then (P ∧ Q) is not satisfied. Reasoning similarly for ∨, ↔, and ∃, we get the following derived branch conditions.
B(∧) Id[(P ∧ Q)]. Forward: two branches, to Id[P] and Id[Q]. Backward: the trunk is S iff both branches are S.
B(∨) Id[(P ∨ Q)]. Forward: two branches, to Id[P] and Id[Q]. Backward: the trunk is S iff at least one branch is S.
B(↔) Id[(P ↔ Q)]. Forward: two branches, to Id[P] and Id[Q]. Backward: the trunk is S iff both branches are S or both are N.
B(∃) Id[∃xP]. Forward: one branch for each member of U, to Id(x|m)[P], Id(x|n)[P], and so forth. Backward: the trunk is S iff at least one branch is S.

The cases for ∧, ∨, and ↔ work just as in the sentential case. For the last, consider a tree for ∼∀x∼P, that is, for ∃xP.


[Diagram (AP): stage (1), Id[∼∀x∼P], branches to stage (2), Id[∀x∼P], which branches at stage (3) to Id(x|m)[∼P], Id(x|n)[∼P], and so forth, one branch for each member of U; each of these branches at stage (4) to Id(x|m)[P], Id(x|n)[P], and so forth.]

The formula at (1) is satisfied iff the formula at (2) is not. But the formula at (2) is not satisfied iff at least one of the branches at (3) is not satisfied. And for a branch at (3) to be not satisfied, the corresponding branch at (4) has to be satisfied. So ∃xP is satisfied on I with assignment d iff for some o ∈ U, P is satisfied on I with d(x|o); if there is no such o ∈ U, then ∃xP is N on I with d.
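The equivalence behind B(∃) is easy to confirm computationally on a finite universe: evaluating ∃xP directly as "some branch S" always agrees with evaluating the unabbreviated ∼∀x∼P. A small Python sketch with illustrative names:

```python
# B(exists) as a derived condition: exists x P abbreviates ~forall x ~P,
# so the two ways of computing satisfaction must agree.

U = [1, 2]
I_rel = {'P': {(1,)}}

def sat_atomic(x_val):
    """Satisfaction of the atomic Px with x assigned x_val."""
    return (x_val,) in I_rel['P']

def exists_direct():
    """SF'(exists): S iff some x-variant satisfies P."""
    return any(sat_atomic(o) for o in U)

def exists_unabbreviated():
    """~forall x ~P: S iff not every x-variant fails to satisfy P."""
    return not all(not sat_atomic(o) for o in U)

print(exists_direct(), exists_unabbreviated())  # True True
assert exists_direct() == exists_unabbreviated()
```

The same pattern of pushing a negation through a quantifier is what the reasoning about diagram (AP) establishes in general.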
Given derived branch conditions, we can work directly with abbreviations in trees for determining satisfaction and truth. And the definition of validity applies in the usual way. Thus, for example, ∃xPx ∧ ∃xQx ⊭ ∃x(Px ∧ Qx). To see this, consider an I with U = {1, 2}, I[P] = {1}, and I[Q] = {2}. The premise, ∃xPx ∧ ∃xQx, is true on I. To see this, we construct a tree, making use of derived clauses as necessary.
[Diagram (AQ): a tree for ∃xPx ∧ ∃xQx. The conjunction at (1) (S) branches to Id[∃xPx] (S) and Id[∃xQx] (S) at (2); the existentials branch at (3) to Id(x|1)[Px] (S), Id(x|2)[Px] (N), Id(x|1)[Qx] (N), and Id(x|2)[Qx] (S), with x assigned 1 or 2 as indicated.]
The existentials are satisfied because at least one branch is satisfied, and the conjunction because both branches are satisfied, according to derived conditions B(∃) and B(∧). So the formula is satisfied, and because it is a sentence, is true. But the conclusion, ∃x(Px ∧ Qx), is not true.


[Diagram: a tree for ∃x(Px ∧ Qx). The existential at (1) (N) branches at (2) to Id(x|1)[Px ∧ Qx] (N) and Id(x|2)[Px ∧ Qx] (N); the conjunctions branch at (3) to Id(x|1)[Px] (S), Id(x|1)[Qx] (N), Id(x|2)[Px] (N), and Id(x|2)[Qx] (S).]

The conjunctions at (2) are not satisfied, in each case because not both branches at (3) are satisfied. And the existential at (1) requires that at least one branch at (2) be satisfied; since none is satisfied, the main formula ∃x(Px ∧ Qx) is not satisfied, and so by TI not true. Since there is an interpretation on which the premise is true and the conclusion is not, by QV, ∃xPx ∧ ∃xQx ⊭ ∃x(Px ∧ Qx). As we will see in the next chapter, the intuitive point is simple: just because something is P and something is Q, it does not follow that something is both P and Q. And this is just what our interpretation I illustrates.
E4.16. On p. 130 we say that reasoning similar to that for ∧ results in other branch conditions. Give the reasoning, similar to that for ∧ and ∃, to demonstrate from trees the conditions B(∨) and B(↔).
E4.17. Produce interpretations to demonstrate each of the following. Use trees, with derived clauses as necessary, to demonstrate your results. Be sure to explain why your interpretations and trees have the results they do.

*a. ∃xPx ⊭ ∀yPy
b. ∃xPx ⊭ ∃y(Py ∧ Qy)
c. ∃xPx ⊭ ∃yPf¹y
d. Pa → ∀xQx ⊭ ∃xPx → ∀xQx
e. ∀x∃yRxy ⊭ ∃y∀xRxy
f. ∀xPx ↔ ∀xQx, ∃x∃y(Px ∧ Qy) ⊭ ∃y(Py ↔ Qy)
*g. ∀x(∃yRxy ↔ A) ⊭ ∃xRxx ∨ A


Semantics Quick Reference (quantificational)

For a quantificational language L, a quantificational interpretation I consists of a nonempty set U,
the universe of the interpretation, along with,

QI (s) An assignment of a truth value I[S] to each sentence letter S of L.

(c) An assignment of a member I[c] of U to each constant symbol c of L.

(r) An assignment of an n-place relation I[Rⁿ] on U to each n-place relation symbol Rⁿ of
L, where I[=] is always assigned {⟨o, o⟩ | o ∈ U}.

(f) An assignment of a total n-place function I[hⁿ] from Uⁿ to U, to each n-place function
symbol hⁿ of L.

Given a language L and interpretation I, a variable assignment d is a total function from the variables
of L to objects in the universe U. Then for any interpretation I, variable assignment d, and term t,

TA (c) If c is a constant, then Id[c] = I[c].

(v) If x is a variable, then Id[x] = d[x].

(f) If hⁿ is a function symbol and t1 ... tn are terms, then Id[hⁿt1 ... tn] =
I[hⁿ]⟨Id[t1] ... Id[tn]⟩.

For any interpretation I with variable assignment d,

SF (s) If S is a sentence letter, then Id[S] = S iff I[S] = T; otherwise Id[S] = N.

(r) If Rⁿ is an n-place relation symbol and t1 ... tn are terms, then Id[Rⁿt1 ... tn] = S iff
⟨Id[t1] ... Id[tn]⟩ ∈ I[Rⁿ]; otherwise Id[Rⁿt1 ... tn] = N.

(∼) If P is a formula, then Id[∼P] = S iff Id[P] = N; otherwise Id[∼P] = N.

(→) If P and Q are formulas, then Id[(P → Q)] = S iff Id[P] = N or Id[Q] = S (or both);
otherwise Id[(P → Q)] = N.

(∀) If P is a formula and x is a variable, then Id[∀xP] = S iff for any o ∈ U, Id(x|o)[P] = S;
otherwise Id[∀xP] = N.

SF′ (∧) If P and Q are formulas, then Id[(P ∧ Q)] = S iff Id[P] = S and Id[Q] = S; otherwise
Id[(P ∧ Q)] = N.

(∨) If P and Q are formulas, then Id[(P ∨ Q)] = S iff Id[P] = S or Id[Q] = S (or both);
otherwise Id[(P ∨ Q)] = N.

(↔) If P and Q are formulas, then Id[(P ↔ Q)] = S iff Id[P] = Id[Q]; otherwise
Id[(P ↔ Q)] = N.

(∃) If P is a formula and x is a variable, then Id[∃xP] = S iff for some o ∈ U,
Id(x|o)[P] = S; otherwise Id[∃xP] = N.

TI A formula P is true on an interpretation I iff with any d for I, Id[P] = S. P is false on I iff
with any d for I, Id[P] = N.

QV Γ quantificationally entails P (Γ ⊨ P) iff there is no quantificational interpretation I such that
I[Γ] = T but I[P] ≠ T.

If Γ ⊨ P, an argument whose premises are the members of Γ and conclusion is P is quantificationally valid.
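For a finite universe, the SF and SF′ clauses can be executed directly, since they are recursive. The following is an illustrative sketch (the tuple encoding and names are our own, not the text's): formulas are nested tuples, and `satisfies` returns True exactly when Id[P] = S.

```python
# Hypothetical encoding (not from the text): ('R', name, vars...) is atomic,
# ('~', f) negation, ('&', f, g) conjunction, ('A', x, f) universal,
# ('E', x, f) existential.

def satisfies(I, d, f):
    """Return True iff Id[f] = S on interpretation I with assignment d."""
    op = f[0]
    if op == 'R':                      # atomic: relation symbol + variables
        _, R, *terms = f
        return tuple(d[t] for t in terms) in I['rel'][R]
    if op == '~':
        return not satisfies(I, d, f[1])
    if op == '&':
        return satisfies(I, d, f[1]) and satisfies(I, d, f[2])
    if op == 'A':                      # universal: every x-variant satisfies
        _, x, g = f
        return all(satisfies(I, {**d, x: o}, g) for o in I['U'])
    if op == 'E':                      # existential: some x-variant satisfies
        _, x, g = f
        return any(satisfies(I, {**d, x: o}, g) for o in I['U'])
    raise ValueError(op)

# The interpretation from the discussion above: U = {1, 2}, I[P] = {1}, I[Q] = {2}.
I = {'U': {1, 2}, 'rel': {'P': {(1,)}, 'Q': {(2,)}}}
print(satisfies(I, {}, ('&', ('E', 'x', ('R', 'P', 'x')),
                             ('E', 'x', ('R', 'Q', 'x')))))   # True
print(satisfies(I, {}, ('E', 'x', ('&', ('R', 'P', 'x'),
                                        ('R', 'Q', 'x')))))   # False
```

Note how the (∀) and (∃) clauses are implemented exactly as stated: by ranging over x-variants d(x|o) of the assignment, one for each o in U.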


h. ∃x(Px ∧ ∃yQy) ⊭ ∃x∀y(Px ∧ Qy)
i. ∀x∀y(Px ∨ Qxy), ∃xPx ⊭ ∃x∃yQxy
j. ∃x∃y(x = y) ⊭ ∀x∀y∃z((x = z) ∧ (y = z))
E4.18. Produce an interpretation to demonstrate each of the following (now in L<NT).
Use trees to demonstrate your results. Be sure to explain why your interpretations and trees have the results they do. Hint: When there are no premises,
all you need is an interpretation where the expression is not true. You need
not use the standard interpretation! In some cases, it may be convenient to
produce only that part of the tree which is necessary for the result.

a. ⊭ ∀x(x < Sx)
b. ⊭ (S∅ + S∅) = SS∅
c. ⊭ ∃x((x × x) = x)
*d. ⊭ ∀x∀y((x = y) → (x < y ∨ y < x))
e. ⊭ ∀x∀y∀z((x < y ∧ y < z) → x < z)
E4.19. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples of your own construction (iii) where the concept
applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.
a. Quantificational interpretations.
b. Term assignments, satisfaction and truth.
c. Quantificational validity.

Chapter 5

Translation
We have introduced logical validity in chapter 1, along with notions of semantic
validity in chapter 4, and validity in an axiomatic derivation system in chapter 3. But logical validity applies to arguments expressed in ordinary language, where
the other notions apply to arguments expressed in a formal language. Our guiding
idea has been to use the formal notions with application to ordinary arguments via
translation from ordinary language to the formal ones. It is to the translation task
that we now turn. After some general discussion, we will take up issues specific to
the sentential, and then the quantificational, cases.

5.1 General

As speakers of ordinary languages (at least English for those reading this book) we
presumably have some understanding of the conditions under which ordinary language sentences are true and false. Similarly, we now have an understanding of the
conditions under which sentences of our formal languages are true and false. This
puts us in a position to recognize when the conditions under which ordinary sentences are true are the same as the conditions under which formal sentences are true.
And that is what we want: Our goal is to translate the premises and conclusion of
ordinary arguments into formal expressions that are true when the ordinary sentences
are true, and false when the ordinary sentences are false. Insofar as validity has to
do with conditions under which sentences are true and false, our translations should
thus be an adequate basis for evaluations of validity.
We can put this point with greater precision. Formal sentences are true and false
relative to interpretations. As we have seen, many different interpretations of a formal
language are possible. In the sentential case, any sentence letter can be true or false

so that there are 2ⁿ ways to interpret any n sentence letters. When we specify an
interpretation, we select just one of the many available options. Thus, for example,
we might set I[B] = T and I[H] = F. But we might also specify an interpretation as
follows,

(A)   B: Bill is happy
      H: Hillary is happy

intending B to take the same truth value as 'Bill is happy' and H the same as 'Hillary
is happy'. In this case, the single specification might result in different interpretations, depending on how the world is: Depending on how Bill and Hillary are, the
interpretation of B might be true or false, and similarly for H. That is, specification
(A) is really a function from ways the world could be (from complete and consistent
stories) to interpretations of the sentence letters. It results in a specific or intended
interpretation relative to any way the world could be. Thus, where ω (omega) ranges
over ways the world could be, (A) is a function II which results in an intended interpretation IIω corresponding to any such way; thus IIω[B] is T if Bill is happy at ω
and F if he is not.
When we set out to translate some ordinary sentences into a formal language,
we always begin by specifying an intended interpretation of the formal language for
arbitrary ways the world can be. In the sentential case, this typically takes the form
of a specification like (A). Then for any way the world can be ω there is an intended
interpretation IIω of the formal language. Given this, for an ordinary sentence A,
the aim is to produce a formal counterpart A′ such that IIω[A′] = T iff the ordinary
A is true in world ω. This is the content of saying we want to produce formal
expressions that are true when the ordinary sentences are true, and false when the
ordinary sentences are false. In fact, we can turn this into a criterion of goodness
for translation.

CG Given some ordinary sentence A, a translation consisting of an interpretation function II and formal sentence A′ is good iff it captures available sentential/quantificational structure and, where ω is any way the world can be,
IIω[A′] = T iff A is true at ω.

If there is a collection of sentences, a translation is good given an II where each
member A of the collection of sentences has an A′ such that IIω[A′] = T iff A is true
at ω. Set aside the question of what it is to capture available sentential/quantificational structure; this will emerge as we proceed. For now, the point is simply that we
want formal sentences to be true on intended interpretations when originals are true at


corresponding worlds, and false on intended interpretations when originals are false.
CG says that this correspondence is necessary for goodness. And, supposing that
sufficient structure is reflected, according to CG such correspondence is sufficient as
well.
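The role of II in CG can be made concrete. In this hypothetical sketch (the dict encoding and all names are our own, not the text's), a way the world could be is a record of ordinary facts, II builds an interpretation of the sentence letters from such a record, and the CG correspondence is checked at every world.

```python
# Illustrative sketch of CG (names our own): specification (A) as a function
# II from worlds to interpretations of the sentence letters B and H.

def II(world):
    return {'B': world['bill_happy'], 'H': world['hillary_happy']}

# Every way the two relevant facts could fall out.
worlds = [{'bill_happy': b, 'hillary_happy': h}
          for b in (True, False) for h in (True, False)]

def A_true_at(world):
    """Truth of the ordinary sentence 'Bill is happy' at a world."""
    return world['bill_happy']

def A_prime_value(world):
    """II_w[A'] where the formal counterpart A' is just the letter B."""
    return II(world)['B']

# The translation is good (structure aside) iff the values match at every world.
good = all(A_prime_value(w) == A_true_at(w) for w in worlds)
print(good)  # True
```

A bad translation, say rendering 'Bill is happy' by the letter H, would fail this check at any world where Bill and Hillary differ.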
The situation might be pictured as follows. There is a specification II which
results in an intended interpretation corresponding to any way the world can be. And
corresponding to ordinary sentences P and Q there are formal sentences P′ and Q′.
Then,

ω1: P true, Q true    ⇒  IIω1[P′] = T, IIω1[Q′] = T
ω2: P true, Q false   ⇒  IIω2[P′] = T, IIω2[Q′] = F
ω3: P false, Q true   ⇒  IIω3[P′] = F, IIω3[Q′] = T
ω4: P false, Q false  ⇒  IIω4[P′] = F, IIω4[Q′] = F
The interpretation function results in an intended interpretation corresponding to each
world. The translation is good only if no matter how the world is, the values of P′ and
Q′ on the intended interpretations match the values of P and Q at the corresponding
worlds or stories.
The premises and conclusion of an argument are some sentences. So the translation of an argument is good iff the translation of the sentences that are its premises
and conclusion is good. And good translations of arguments put us in a position to
use our machinery to evaluate questions of validity. Of course, so far, this is an abstract description of what we are about to do. But it should give some orientation,
and help you understand what is accomplished as we proceed.

5.2 Sentential

We begin with the sentential case. Again, the general idea is to recognize when the
conditions under which ordinary sentences are true are the same as the conditions
under which formal ones are true. Surprisingly perhaps, the hardest part is on the
side of recognizing truth conditions in ordinary language. With this in mind, let us
begin with some definitions whose application is to expressions of ordinary language;
after that, we will turn to a procedure for translation, and to discussion of particular
operators.

5.2.1 Some Definitions

In this section, we introduce a series of definitions whose application is to ordinary


language. These definitions are not meant to compete with anything you have learned
in English class. They are rather specific to our purposes. With the definitions under
our belt, we will be able to say with some precision what we want to do.
First, a declarative sentence is a sentence which has a truth value. 'Snow is
white' and 'Snow is green' are declarative sentences, the first true and the second
false. 'Study harder!' and 'Why study?' are sentences, but not declarative sentences.
Given this, a sentential operator is an expression containing blanks such that when
the blanks are filled with declarative sentences, the result is a declarative sentence.
In ordinary speech and writing, such blanks do not typically appear; however,
punctuation and expression typically fill the same role. Examples are,
John believes that ____
John heard that ____
It is not the case that ____
____ and ____
'John believes that snow is white', 'John believes that snow is green', and 'John
believes that dogs fly' are all sentences, some more plausibly true than others. Still,
'Snow is white', 'Snow is green', and 'Dogs fly' are all declarative sentences, and
when we put them in the blank of 'John believes that ____' the result is a declarative
sentence, where the same would be so for any declarative sentence in the blank; so
'John believes that ____' is a sentential operator. Similarly, 'Snow is white and dogs
fly' is a declarative sentence, a false one, since dogs do not fly. And, so long as
we put declarative sentences in the blanks of '____ and ____' the result is always a
declarative sentence. So '____ and ____'
is a sentential operator. In contrast,

When ____
____ is white ____

are not sentential operators. Though 'Snow is white' is a declarative sentence, 'When
snow is white' is an adverbial clause, not a declarative sentence. And, though 'Dogs
fly' and 'Snow is green' are declarative sentences, 'Dogs fly is white snow is green' is
ungrammatical nonsense. If you can think of even one case where putting declarative


sentences in the blanks of an expression does not result in a declarative sentence, then
the expression is not a sentential operator. So these are not sentential operators.
Now, as in these examples, we can think of some declarative sentences as generated by the combination of sentential operators with other declarative sentences.
Declarative sentences generated from other sentences by means of sentential operators are compound; all others are simple. Thus, for example, 'Bob likes Mary' and
'Socrates is wise' are simple sentences; they do not have a declarative sentence in
the blank of any operator. In contrast, 'John believes that Bob likes Mary' and 'Jim
heard that John believes that Bob likes Mary' are compound. The first has a simple
sentence in the blank of 'John believes that ____'. The second puts a compound in
the blank of 'Jim heard that ____'.
For cases like these, the main operator of a compound sentence is that operator
not in the blank of any other operator. The main operator of 'John believes that Bob
likes Mary' is 'John believes that ____'. And the main operator of 'Jim heard that
John believes that Bob likes Mary' is 'Jim heard that ____'. The main operator of
'It is not the case that Bob likes Sue and it is not the case that Sue likes Bob' is
'____ and ____', for that is the operator not in the blank of any other. Notice that the
main operator of a sentence need not be the first operator in the sentence. Observe
also that operator structure may not be obvious. Thus, for example, 'Jim heard that
Bob likes Sue and Sue likes Jim' is capable of different interpretations. It might
be read with main operator 'Jim heard that ____' and the compound 'Bob likes Sue
and Sue likes Jim' in its blank. But it might be read with main operator '____ and
____'. The question is what Jim heard, and what the 'and' joins. As suggested
above, punctuation and expression often serve in ordinary language to disambiguate
confusing cases. These questions of interpretation are not peculiar to our purposes!
Rather they are the ordinary questions that might be asked about what one is saying.
The underline structure serves to disambiguate claims, to make it very clear how the
operators apply.
When faced with a compound sentence, the best approach is to start with the whole,
rather than the parts. So begin with blank(s) for the main operator. Thus, as we have
seen, the main operator of 'It is not the case that Bob likes Sue, and it is not the case
that Sue likes Bob' is '____ and ____'. So begin with lines for that operator (leaving
space for lines above). Now focus on the sentence in one of the blanks, say the left; that
sentence, 'It is not the case that Bob likes Sue', is a compound with main operator
'it is not the case that ____'. So add the underline for that operator. The sentence in the blank


of 'it is not the case that ____' is simple. So turn to the sentence in the right blank
of the main operator. That sentence has main operator 'it is not the case that ____'.
So add an underline. In this way we end up with 'It is not the case that Bob likes
Sue and it is not the case that Sue likes Bob' where, again, the sentence in the last
blank is simple. Thus, a complex problem is reduced to ones that are progressively
more simple. Perhaps this problem was obvious from the start. But this approach
will serve you well as problems get more complex!
We come finally to the key notion of a truth functional operator. A sentential operator is truth functional iff any compound generated by it has its truth value wholly
determined by the truth values of the sentences in its blanks. We will say that the
truth value of a compound is determined by the truth values of sentences in blanks
just in case there is no way to switch the truth value of the whole while keeping truth
values of sentences in the blanks constant. This leads to a test for truth functionality:
We show that an operator is not truth functional if we come up with some situation(s) where truth values of sentences in the blanks are the same, but the truth values
of the resulting compounds are not. To take a simple case, consider 'John believes
that ____'. If things are pretty much as in the actual world, 'Dogs fly' and 'There is
a Santa' are both false. But if John is a small child it may be that,
(B)
      John believes that Dogs fly              F
      John believes that There is a Santa      T

the compound is false with one in the blank, and true with the other. Thus the truth
value of the compound is not wholly determined by the truth value of the sentence in
the blank. We have found a situation where sentences with the same truth value in the
blank result in a different truth value for the whole. Thus 'John believes that ____'
is not truth functional. We might make the same point with a pair of sentences that
are true, say 'Dogs bark' and 'There are infinitely many prime numbers' (be clear in
your mind about how this works).
As a second example, consider '____ because ____'. Suppose 'You are happy',
'You got a good grade', 'There are fish in the sea', and 'You woke up this morning'
are all true.

(C)
      You are happy because You got a good grade                     T
      There are fish in the sea because You woke up this morning     F

Still, it is natural to think that the truth value of the compound 'You are happy
because you got a good grade' is true, but 'There are fish in the sea because you woke
up this morning' is false. For perhaps getting a good grade makes you happy, but the
fish in the sea have nothing to do with your waking up. Thus there are consistent


situations or stories where sentences in the blanks have the same truth values, but
the compounds do not. Thus, by the definition, '____ because ____' is not a truth
functional operator. To show that an operator is not truth functional it is sufficient
to produce some situation of this sort: where truth values for sentences in the blanks
match, but truth values for the compounds do not. Observe that sentences in the
blanks are fixed but the value of the compound is not. Thus, it would be enough to
find, say, a case where sentences in the first blank are T, sentences in the second are
F but the value of the whole flips from T to F. To show that an operator is not truth
functional, any matching combination that makes the whole switch value will do.
To show that an operator is truth functional, we need to show that no such cases
are possible. For this, we show how the truth value of what is in the blank determines
the truth value of the whole. As an example, consider first,
(D)
      It is not the case that ____
                 F             T
                 T             F

In this table, we represent the truth value of whatever is in the blank by the column
under the blank, and the truth value for the whole by the column under the operator.
If we put something true according to a consistent story into the blank, the resultant
compound is sure to be false according to that story. Thus, for example, in the true
story, Snow is white, 2 + 2 = 4 and Dogs bark are all true; correspondingly, It
is not the case that snow is white, It is not the case that 2 + 2 = 4 and It is not the
case that dogs bark are all false. Similarly, if we put something false according to a
story into the blank, the resultant compound is sure to be true according to the story.
Thus, for example, in the true story, Snow is green and 2 + 2 = 3 are both false.
Correspondingly, It is not the case that snow is green and It is not the case that 2
+ 2 = 3 are both true. It is no coincidence that the above table for 'It is not the case
that ____' looks like the table for ∼. We will return to this point shortly.
For a second example of a truth functional operator, consider '____ and ____'.
This seems to have table,

(E)
      ____  and  ____
       T     T    T
       T     F    F
       F     F    T
       F     F    F

Consider a situation where Bob and Sue each love themselves, but hate each other.
Then Bob loves Bob and Sue loves Sue is true. But if at least one blank has a sentence
that is false, the compound is false. Thus, for example, in that situation, Bob loves
Bob and Sue loves Bob is false; Bob loves Sue and Sue loves Sue is false; and Bob


loves Sue and Sue loves Bob is false. For a compound '____ and ____' to be true,
the sentences in both blanks have to be true. And if they are both true, the compound
is itself true. So the operator is truth functional. Again, it is no coincidence that the
table looks so much like the table for ∧. To show that an operator is truth functional,
it is sufficient to produce the table that shows how the truth values of the compound
are fixed by the truth values of the sentences in the blanks.
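The test just described can be mechanized. In this sketch (the encoding is our own, not the text's), an operator's observed behavior is recorded as pairs of blank values and compound value; two records with matching blank values but different compound values show the operator is not truth functional.

```python
# Sketch (our own encoding): record situations as
# (values-in-blanks, value-of-compound) pairs.

def counterexample(situations):
    """Return blank values witnessing non-truth-functionality, else None."""
    seen = {}
    for blanks, compound in situations:
        if blanks in seen and seen[blanks] != compound:
            return blanks          # same blank values, different compound
        seen[blanks] = compound
    return None

# 'John believes that ____': 'Dogs fly' and 'There is a Santa' are both
# false, yet for a small child the compounds differ (example (B) above).
believes = [((False,), False),     # John believes that dogs fly
            ((False,), True)]      # John believes that there is a Santa
print(counterexample(believes))    # (False,) -- not truth functional

# '____ and ____' never yields such a pair: table (E) fixes the compound.
conj = [((p, q), p and q) for p in (True, False) for q in (True, False)]
print(counterexample(conj))        # None -- truth functional
```

This mirrors the text's point exactly: a completed table is possible just in case no pair of situations with matching blanks disagrees about the compound.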

Definitions for Translation


DC A declarative sentence is a sentence which has a truth value.
SO A sentential operator is an expression containing blanks such that when the blanks
are filled with declarative sentences, the result is a declarative sentence.
CS Declarative sentences generated from other sentences by means of sentential operators are compound; all others are simple.
MO The main operator of a compound sentence is that operator not in the blank of any
other operator.
TF A sentential operator is truth functional iff any compound generated by it has its
truth value wholly determined by the truth values of the sentences in its blanks.
To show that an operator is not truth functional it is sufficient to produce some situation
where truth values for sentences in the blanks match, but truth values for the compounds
do not.
To show that an operator is truth functional, it is sufficient to produce the table that shows
how the truth values of the compound are fixed by truth values of the sentences in the
blanks.

For an interesting sort of case, consider the operator 'According to every consistent story ____', and the following attempted table,

(F)
      According to every consistent story ____
                     ?                     T
                     F                     F

(On some accounts, this operator works like 'Necessarily ____'.) Say we put some
sentence P that is false according to a consistent story into the blank. Then since
P is false according to that very story, it is not the case that P according to every
consistent story, and the compound is sure to be false. So we fill in the bottom row
under the operator as above. So far, so good. But consider 'Dogs bark' and '2 + 2 =
4'. Both are true according to the true story. But only the second is true according to
every consistent story. So the compound is false with the first in the blank, true with


the second. So 'According to every consistent story ____' is not a truth
functional operator. The truth value of the compound is not wholly determined by
the truth value of the sentence in the blank. Similarly, it is natural to think that
'____ because ____' is false whenever one of the sentences in its blanks is false. It cannot
be true that P because Q if not-P, and it cannot be true that P because Q if not-Q.
If you are not happy, then it cannot be that you are happy because you understand the
material; and if you do not understand the material, it cannot be that you are happy
because you understand the material. So far, then, the table for '____ because ____'
is like the table for '____ and ____'.

(G)
      ____  because  ____
       T       ?      T
       T       F      F
       F       F      T
       F       F      F

However, as we saw just above, in contrast to '____ and ____', compounds generated
by '____ because ____' may or may not be true when sentences in the blanks are both
true. So, although '____ and ____' is truth functional, '____ because ____' is not.
Thus the question is whether we can complete a table of the above sort: If there
is a way to complete the table, the operator is truth functional. The test to show
an operator is not truth functional simply finds some case to show that such a table
cannot be completed.
E5.1. For each of the following, identify the simple sentences that are parts. If the
sentence is compound, use underlines to exhibit its operator structure, and
say what is its main operator.
a. Bob likes Mary.
b. Jim believes that Bob likes Mary.
c. It is not the case that Bob likes Mary.
d. Jane heard that it is not the case that Bob likes Mary.
e. Jane heard that Jim believes that it is not the case that Bob likes Mary.
f. Voldemort is very powerful, but it is not the case that Voldemort kills Harry
at birth.
g. Harry likes his godfather and Harry likes Dumbledore, but it is not the case
that Harry likes his uncle.


*h. Hermione believes that studying is good, and Hermione studies hard, but Ron
believes studying is good, and it is not the case that Ron studies hard.
i. Malfoy believes mudbloods are scum, but it is not the case that mudbloods
are scum; and Malfoy is a dork.
j. Harry believes that Voldemort is evil and Hermione believes that Voldemort
is evil, but it is not the case that Bellatrix believes that Voldemort is evil.
E5.2. Which of the following operators are truth functional and which are not? If
the operator is truth functional, display the relevant table; if it is not, give a
case to show that it is not.

*a. It is a fact that ____
b. Elmore believes that ____
*c. ____ but ____
d. According to some consistent story ____
e. Although ____, ____
*f. It is always the case that ____
g. Sometimes it is the case that ____
h. ____ therefore ____
i. ____ however ____
j. Either ____ or ____ (or both)

5.2.2 Parse Trees

We are now ready to outline a procedure for translation into our formal sentential
language. In the end, you will often be able to see how translations should go and to
write them down without going through all the official steps. However, the procedure
should get you thinking in the right direction, and remain useful for complex cases.
To translate some ordinary sentences P1 ... Pn the basic translation procedure is,

TP (1) Convert the ordinary P1 ... Pn into corresponding ordinary equivalents
exposing truth functional and operator structure.

(2) Generate a parse tree for each of P1 ... Pn and specify the interpretation function II by assigning sentence letters to sentences at the bottom
nodes.

(3) Construct a parallel tree that translates each node from the parse tree, to
generate a formal Pi′ for each Pi.
For now at least, the idea behind step (1) is simple: Sometimes all you need to do is
expose operator structure by introducing underlines. In complex cases, this can be
difficult! But we know how to do this. Sometimes, however, truth functional structure
does not lie on the surface. Ordinary sentences are equivalent when they are true
and false in exactly the same consistent stories. And we want ordinary equivalents
exposing truth functional structure. Suppose P is a sentence of the sort,
(H)   Bob is not happy

Is this a truth functional compound? Not officially. There is no declarative sentence
in the blank of a sentential operator; so it is not compound; so it is not a truth functional compound. But one might think that (H) is short for,

(I)   It is not the case that Bob is happy

which is a truth functional compound. At least, (H) and (I) are equivalent in the sense
that they are true and false in the same consistent stories. Similarly, 'Bob and Carol
are happy' is not a compound of the sort we have described, because 'Bob' is not a
declarative sentence. However, it is a short step from this sentence to the equivalent,
'Bob is happy and Carol is happy', which is an official truth functional compound.
As we shall see, in some cases this step can be more complex. But let us leave it at
that for now.
Moving to step (2), in a parse tree we begin with sentences constructed as in step
(1). If a sentence has a truth functional main operator, then it branches downward
for the sentence(s) in its blanks. If these have truth functional main operators, they
branch for the sentences in their blanks; and so forth, until sentences are simple
or have non-truth functional main operators. Then we construct the interpretation
function II by assigning a distinct sentence letter to each distinct sentence at a bottom
node from a tree for the original P1 : : : Pn .
Some simple examples should make this clear. Say we want to translate a collection of four sentences.
1. Bob is happy
2. Carol is not happy


3. Bob is healthy and Carol is not


4. Bob is happy and John believes that Carol is not healthy
The first is a simple sentence. Thus there is nothing to be done at step (1). And since
there is no main operator, the sentence itself is a completed parse tree. The tree is
just,
(J)   Bob is happy

Insofar as the simple sentence is a complete branch of the tree, it counts as a bottom
node of its tree. It is not yet assigned a sentence letter, so we assign it one. B1 : Bob
is happy. We select this letter to remind us of the assignment.
The second sentence is not a truth functional compound. Thus in the first stage,
Carol is not happy is expanded to the equivalent, It is not the case that Carol is
happy. In this case, there is a main operator; since it is truth functional, the tree has
some structure.
(K)   It is not the case that Carol is happy
                      |
              Carol is happy

The bottom node is simple, so the tree ends. Carol is happy is not assigned a letter;
so we assign it one. C1 : Carol is happy.
The third sentence is equivalent to, Bob is healthy and it is not the case that Carol
is healthy. Again, the operators are truth functional, and the result is a structured
tree.

(L)   Bob is healthy and it is not the case that Carol is healthy
            /                              \
      Bob is healthy       it is not the case that Carol is healthy
                                           |
                                   Carol is healthy

The main operator is truth functional. So there is a branch for each of the sentences in
its blanks. On the left, Bob is healthy has no main operator, so it does not branch.
On the right, it is not the case that Carol is healthy has a truth functional main
operator, and so branches. At bottom, we end up with Bob is healthy and Carol is
healthy. Neither has a letter, so we assign them ones. B2 : Bob is healthy; C2 : Carol
is healthy.


The final sentence is equivalent to, Bob is happy and John believes it is not the
case that Carol is healthy. It has a truth functional main operator. So there is a
structured tree.

(M)   Bob is happy and John believes it is not the case that Carol is healthy
            /                              \
      Bob is happy      John believes it is not the case that Carol is healthy

On the left, Bob is happy is simple. On the right, John believes it is not the case
that Carol is healthy is complex. But its main operator is not truth functional. So
it does not branch. We only branch for sentences in the blanks of truth functional
main operators. Given this, we proceed in the usual way. Bob is happy already has
a letter. The other does not; so we give it one. J : John believes it is not the case that
Carol is healthy.
And that is all. We have now compiled an interpretation function,
II B1 : Bob is happy

C1 : Carol is happy
B2 : Bob is healthy
C2 : Carol is healthy
J : John believes it is not the case that Carol is healthy
Of course, we might have chosen different letters. All that matters is that we have
a distinct letter for each distinct sentence. Our intended interpretations are ones that
capture available sentential structure, and make the sentence letters true in situations
where these sentences are true and false when they are not. In the last case, there is
a compulsion to think that we can somehow get down to the simple sentence 'Carol
is healthy'. But resist temptation! A non-truth-functional operator seals off that
upon which it operates, and forces us to treat the compound as a unit. We do not
automatically assign sentence letters to simple sentences, but rather to parts that are
not truth functional compounds. Simple sentences fit this description. But so do
compounds with non-truth-functional main operators.
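Steps (2) and (3) of TP can be sketched in code. This is an illustrative toy (the tuple encoding and letter names are our own, not the text's): compounds with truth functional operators branch, while simple sentences and sealed-off compounds become bottom nodes and receive sentence letters.

```python
# Hypothetical sketch: sentences are nested tuples whose first element is a
# truth functional operator ('not', 'and'); anything else is a bottom node.

def assign_letters(sentence, II, counter=[0]):
    """Walk the parse tree; give each distinct bottom node a sentence letter."""
    if isinstance(sentence, tuple) and sentence[0] in ('not', 'and'):
        for part in sentence[1:]:          # branch for sentences in the blanks
            assign_letters(part, II)
    elif sentence not in II:               # simple, or sealed-off compound
        counter[0] += 1
        II[sentence] = f'S{counter[0]}'
    return II

# Sentence 4 from above: the belief compound is sealed off as a unit,
# so it becomes a bottom node rather than branching further.
s4 = ('and', 'Bob is happy',
      'John believes it is not the case that Carol is healthy')
II = assign_letters(s4, {})
print(II)  # S1 for 'Bob is happy'; S2 for the whole belief compound
```

Note that the belief compound gets one letter: the code never looks inside a bottom node, just as the parse tree never branches into the blank of a non-truth-functional operator.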
E5.3. Use our method to expose truth functional structure and produce parse trees
for each of the following. Use your trees to produce an interpretation function
for the sentences. Hint: pay attention to punctuation as a guide to structure.
a. Bingo is spotted, and Spot can play bingo.


b. Bingo is not spotted, and Spot cannot play bingo.


c. Bingo is spotted, and believes that Spot cannot play bingo.
*d. It is not the case that: Bingo is spotted and Spot can play bingo.
e. It is not the case that: Bingo is not spotted and Spot cannot play bingo.
E5.4. Use our method to expose truth functional structure and produce parse trees
for each of the following. Use your trees to produce an interpretation function
for the sentences.
*a. People have rights and dogs have rights, but rocks do not.
b. It is not the case that: rocks have rights, but people do not.
c. Aliens believe that rocks have rights, but it is not the case that people believe
it.
d. Aliens landed in Roswell NM in 1947, and live underground but not in my
backyard.
e. Rocks do not have rights and aliens do not have rights, but people and dogs
do.

5.2.3 Formal Sentences

Now we are ready for step (3) of the translation procedure TP. Our aim is to generate
translations by constructing a parallel tree where the force of ordinary truth functional
operators is captured by equivalent formal operators. Let us say an ordinary truth
functional operator is equivalent to some formal expression containing blanks just in
case their tables are the same. Thus ∼___ is equivalent to 'it is not the case that ___'. They are equivalent insofar as in each case, the whole has the opposite truth value of what is in the blank. Similarly, (___ ∧ ___) is equivalent to '___ and ___'. In either case, when sentences in the blanks are both T the whole is T, and in other cases, the whole is F. Of course, the complex ∼(___ → ∼___) takes the same values as the (___ ∧ ___) that abbreviates it. So different formal expressions may be equivalent to a given ordinary one.
To see how this works, let us return to the sample sentences from above. Again,
the idea is to generate a parallel tree. We begin by using the sentence letters from our
interpretation function for the bottom nodes. The case is particularly simple when the


tree has no structure. 'Bob is happy' had a simple unstructured tree, and we assigned it a sentence letter directly. Thus our original and parallel trees are,

(N)  Bob is happy        B1

So for a simple sentence, we simply read off the final translation from the interpretation function. So much for the first sentence.
As we have seen, the second sentence is equivalent to 'It is not the case that Carol is happy', with a parse tree as below. We begin the parallel tree on the other side.

(O)  It is not the case that Carol is happy
        └─ Carol is happy

     Parallel tree so far: C1 at the bottom node.

We know how to translate the bottom node. But now we want to capture the force of the truth functional operator with some equivalent formal operator(s). For this, we need a formal expression containing blanks whose table mirrors the table for the sentential operator in question. In this case, ∼___ works fine. That is, we have,

    ___   it is not the case that ___   ∼___
     T                F                   F
     F                T                   T

In each case, when the expression in the blank is T, the whole is F, and when the expression in the blank is F, the whole is T. So ∼___ is sufficient as a translation of 'It is not the case that ___'. Other formal expressions might do just as well. Thus, for example, we might go with ∼∼∼___. The table for this is the same as the table for ∼___. But it is hard to see why we would do this, with ∼___ so close at hand. Now the idea is to apply the equivalent operator to the already translated expression from the blank. But this is easy to do. Thus we complete the parallel tree as follows.

It is not the case that Carol is happy
        └─ Carol is happy

Parallel tree:

∼C1
  └─ C1

The result is the completed translation, ∼C1.


The third sentence has a parse tree as on the left, and resultant parallel tree as on
the right. As usual, we begin with sentence letters from the interpretation function
for the bottom nodes.

(P)  Bob is healthy and it is not the case that Carol is healthy
        ├─ Bob is healthy
        └─ it is not the case that Carol is healthy
             └─ Carol is healthy

     Parallel tree:

     (B2 ∧ ∼C2)
        ├─ B2
        └─ ∼C2
             └─ C2

Given translations for the bottom nodes, we work our way through the tree, applying equivalent operators to translations already obtained. As we have seen, a natural translation of 'it is not the case that ___' is ∼___. Thus, working up from 'Carol is healthy', our parallel to 'it is not the case that Carol is healthy' is ∼C2. But now we have translations for both of the blanks of '___ and ___'. As we have seen, this has the same table as (___ ∧ ___). So that is our translation. Again, other expressions might do. In particular, ∧ is an abbreviation with the same table as ∼(___ → ∼___). In each case, the whole is true when the sentences in both blanks are true, and otherwise false. Since this is the same as for '___ and ___', either would do as a translation. But again, the simplest thing is to go with (___ ∧ ___). Thus the final result is (B2 ∧ ∼C2). With the alternate translation for the main operator, the result would have been ∼(B2 → ∼∼C2). Observe that the parallel tree is an upside-down version of the (by now quite familiar) tree by which we would show that the expression is a sentence.
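Since ∧ is officially an abbreviation, the claim that (B2 ∧ ∼C2) and its unabbreviated form always take the same value can be checked mechanically by running through the four relevant situations. Here is a minimal sketch in Python; the helper names are ours, not the text's.

```python
from itertools import product

def neg(p):
    """Table for the tilde: the whole takes the opposite value of the blank."""
    return not p

def cond(p, q):
    """Table for the arrow: false only when the antecedent is T and consequent F."""
    return (not p) or q

def conj(p, q):
    """Table for the caret: true just when both blanks are T."""
    return p and q

# Compare (B2 ∧ ∼C2) with the unabbreviated ∼(B2 → ∼∼C2)
for b2, c2 in product([True, False], repeat=2):
    assert conj(b2, neg(c2)) == neg(cond(b2, neg(neg(c2))))
print("the two expressions agree in all four situations")
```

The loop plays the role of writing out a four-row truth table: any disagreement on any row would trip the assertion.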
Our last sentence is equivalent to, 'Bob is happy and John believes it is not the case that Carol is healthy'. Given what we have done, the parallel tree should be easy to construct.

(Q)  Bob is happy and John believes it is not the case that Carol is healthy
        ├─ Bob is happy
        └─ John believes it is not the case that Carol is healthy

     Parallel tree:

     (B1 ∧ J)
        ├─ B1
        └─ J

Given that the tree bottoms out on both 'Bob is happy' and 'John believes it is not the case that Carol is healthy', the only operator to translate is the main operator '___ and ___'. And we have just seen how to deal with that. The result is the completed translation, (B1 ∧ J).
Again, once you become familiar with this procedure, the full method, with the
trees, may become tedious and we will often want to set it to the side. But notice:
the method breeds good habits! And the method puts us in a position to translate
complex expressions, even ones that are so complex that we can barely grasp what
they are saying. Beginning with the main operator, we break expressions down from


complex parts to ones that are simpler. Then we construct translations, one operator
at a time, where each step is manageable. Also, we should be able to see why the
method results in good translations: For any situation and corresponding intended
interpretation, truth values for basic parts are the same by the specification of the
interpretation function. And given that operators are equivalent, truth values for parts
built out of them must be the same as well, all the way up to the truth value of the
whole. We satisfy the first part of our criterion CG insofar as the way we break down
sentences in parse trees forces us to capture all the truth functional structure there is
to be captured.
For a last example, consider, 'Bob is happy and Bob is healthy and Carol is happy and Carol is healthy'. This is true only if 'Bob is happy', 'Bob is healthy', 'Carol is happy', and 'Carol is healthy' are all true. But the method may apply in different ways. We might at step one treat the sentence as a complex expression involving multiple uses of '___ and ___'; perhaps something like,

(R)  [Bob is happy and Bob is healthy] and [Carol is happy and Carol is healthy]

In this case, there is a straightforward move from the ordinary operators to formal ones in the final step. That is, the situation is as follows.
Bob is happy and Bob is healthy and Carol is happy and Carol is healthy
   ├─ Bob is happy and Bob is healthy
   │    ├─ Bob is happy
   │    └─ Bob is healthy
   └─ Carol is happy and Carol is healthy
        ├─ Carol is happy
        └─ Carol is healthy

Parallel tree:

((B1 ∧ B2) ∧ (C1 ∧ C2))
   ├─ (B1 ∧ B2)
   │    ├─ B1
   │    └─ B2
   └─ (C1 ∧ C2)
        ├─ C1
        └─ C2

So we use multiple applications of our standard caret operator. But we might have treated the sentence as something like,

(S)  [Bob is happy] and [Bob is healthy] and [Carol is happy] and [Carol is healthy]

involving a single four-blank operator, '___ and ___ and ___ and ___', which yields true only when sentences in all its blanks are true. We have not seen anything like this before, but nothing stops a tree with four branches all at once. In this case, we would begin,

Bob is happy and Bob is healthy and Carol is happy and Carol is healthy
   ├─ Bob is happy
   ├─ Bob is healthy
   ├─ Carol is happy
   └─ Carol is healthy

with B1, B2, C1, and C2 at the bottom nodes of the parallel tree. But now, for an equivalent operator we need a formal expression with four blanks that is true when sentences in all the blanks are true and otherwise false. Here is something that would do: ((___ ∧ ___) ∧ (___ ∧ ___)). On either of these approaches, then, the result is ((B1 ∧ B2) ∧ (C1 ∧ C2)). Other options might result in something like (((B1 ∧ B2) ∧ C1) ∧ C2). In this way, there is room for shifting burden between steps one and three. Such shifting explains how step (1) can be more complex than it was initially represented to be. Choices about expanding truth functional structure in the initial stage may matter for what are the equivalent operators at the end. And the case exhibits how there are options for different, equally good, translations of the same ordinary expressions. What matters for CG is that resultant expressions capture available structure and be true when the originals are true and false when the originals are false. In most cases, one translation will be more natural than others, and it is good form to strive for natural translations. If there had been a comma, so that the original sentence was, 'Bob is happy and Bob is healthy, and Carol is happy and Carol is healthy', it would have been most natural to go for an account along the lines of (R). And it is crazy to use, say, ∼∼∼___ when ∼___ will do as well.
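That the different groupings really are equally good can be confirmed by brute force over all sixteen situations; a quick sketch, with Python's `and` standing in for ∧:

```python
from itertools import product

# Compare ((B1 ∧ B2) ∧ (C1 ∧ C2)) with (((B1 ∧ B2) ∧ C1) ∧ C2)
for b1, b2, c1, c2 in product([True, False], repeat=4):
    paired = (b1 and b2) and (c1 and c2)
    leftward = ((b1 and b2) and c1) and c2
    assert paired == leftward
print("both groupings agree in all 16 situations")
```

Since both expressions are T exactly when all four letters are T, either serves as a translation.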
*E5.5. Construct parallel trees to complete the translation of the sentences from E5.3 and E5.4. Hint: you will not need any operators other than ∼ and ∧.
E5.6. Use our method to translate each of the following. That is, generate parse
trees with an interpretation function for all the sentences, and then parallel
trees to produce formal equivalents.
a. Plato and Aristotle were great philosophers, but Ayn Rand was not.
b. Plato was a great philosopher, and everything Plato said was true, but Ayn
Rand was not a great philosopher, and not everything she said was true.
*c. It is not the case that: everything Plato, and Aristotle, and Ayn Rand said was
true.
d. Plato was a great philosopher but not everything he said was true, and Aristotle was a great philosopher but not everything he said was true.
e. Not everyone agrees that Ayn Rand was not a great philosopher, and not everyone thinks that not everything she said was true.


E5.7. Use our method to translate each of the following. That is, generate parse
trees with an interpretation function for all the sentences, and then parallel
trees to produce formal equivalents.
a. Bob and Sue and Jim will pass the class.
b. Sue will pass the class, but it is not the case that: Bob will pass and Jim will
pass.
c. It is not the case that: Bob will pass the class and Sue will not.
d. Jim will not pass the class, but it is not the case that: Bob will not pass and
Sue will not pass.
e. It is not the case that: Jim will pass and not pass, and it is not the case that:
Sue will pass and not pass.

5.2.4 And, Or, Not

Our idea has been to recognize when truth conditions for ordinary and formal sentences are the same. As we have seen, this turns out to require recognizing when operators have the same tables. We have had a lot to say about 'it is not the case that ___' and '___ and ___'. We now turn to a more general treatment. We will not be able to provide a complete menu of ordinary operators. Rather, we will see that some uses of some ordinary operators can be appropriately translated by our symbols. We should be able to discuss enough cases for you to see how to approach others on a case-by-case basis. The discussion is organized around our operators, ∼, ∧, ∨, →, and ↔, taken in that order.
First, as we have seen, 'It is not the case that ___' has the same table as ∼___. And various ordinary expressions may be equivalent to expressions involving this operator. Thus, 'Bob is not married' and 'Bob is unmarried' might be understood as equivalent to 'It is not the case that Bob is married'. Given this, we might assign a sentence letter, say, M to 'Bob is married' and translate ∼M. But the second case calls for comment. By comparison, consider, 'Bob is unlucky'. Given what we have done, it is natural to treat 'Bob is unlucky' as equivalent to 'It is not the case that Bob is lucky'; assign L to 'Bob is lucky'; and translate ∼L. But this is not obviously right. Consider three situations: (i) Bob goes to Las Vegas with $1,000, and comes away with $1,000,000. (ii) Bob goes to Las Vegas with $1,000, and comes away with $100, having seen a show and had a good time. (iii) Bob goes to Las Vegas with $1,000, falls into a manhole on his way into the casino, and has his money stolen by a light-fingered thief on the way down. In the first case he is lucky; in the third, unlucky. But, in the second, one might want to say that he was neither lucky nor unlucky. If this is right, 'Bob is unlucky' is not equivalent to 'It is not the case that Bob is lucky', for it is not the case that Bob is lucky in both situations (ii) and (iii). Thus we might have to assign 'Bob is lucky' one letter, and 'Bob is unlucky' another.1 Decisions about this sort of thing may depend heavily on context, and assumptions which are in the background of conversation. We will ordinarily assume contexts where there is no neutral state, so that being unlucky just is not being lucky, and similarly in other cases.
Second, as we have seen, '___ and ___' has the same table as ∧. As you may recall from E5.2, another common operator that works this way is '___ but ___'. Consider, for example, 'Bob likes Mary but Mary likes Jim'. Suppose Bob does like Mary and Mary likes Jim; then the compound sentence is true. Suppose one of the simples is false, Bob does not like Mary or Mary does not like Jim; then the compound is false. Thus '___ but ___' has the table,

(T)   ___  but  ___
       T    T    T
       T    F    F
       F    F    T
       F    F    F

and so has the same table as ∧. So, in this case, we might assign B to 'Bob likes Mary', M to 'Mary likes Jim', and translate, (B ∧ M). Of course, the ordinary expression 'but' carries a sense of opposition that 'and' does not. Our point is not that 'and' and 'but' somehow mean the same, but rather that compounds formed by means of them are true and false under the same truth functional conditions. Another common operator with this table is 'Although ___, ___'. You should convince yourself that this is so, and be able to find other ordinary terms that work just the same way.
Once again, however, there is room for caution in some cases. Consider, for example, 'Bob took a shower and got dressed'. Given what we have done, it is natural to treat this as equivalent to 'Bob took a shower and Bob got dressed'; assign letters S and D; and translate (S ∧ D). But this is not obviously right. Suppose Bob gets dressed, but then realizes that he is late for a date and forgot to shower, so he jumps in the shower fully clothed, and air-dries on the way. Then it is true that Bob took a shower, and true that Bob got dressed. But is it true that Bob took a shower and got dressed? If not (because the order is wrong) our translation (S ∧ D) might be true when the original sentence is not. Again, decisions about this sort of thing depend heavily upon context and background assumptions. And there may be a distinction between what is said and what is conversationally implied in a given context. Perhaps what was said corresponds to the table, so that our translation is right, though there are certain assumptions typically made in conversation that go beyond. But we need not get into this. Our point is not that the ordinary 'and' always works like our operator ∧; rather the point is that some (indeed, many) ordinary uses are rightly regarded as having the same table.2 Again, we will ordinarily assume a context where 'and', 'but', and the like have tables that correspond to ∧.

1 Or so we have to do in the context of our logic where T and F are the only truth values. Another option is to allow three values so that the one letter might be T, F, or neither. It is possible to proceed on this basis, though the two-valued (classical) approach has the virtue of relative simplicity! With the classical approach as background, some such alternatives are developed in Priest, Non-Classical Logics.
Now consider 'Neither Bob likes Sue nor Sue likes Bob'. This seems to involve an operator, 'Neither ___ nor ___', with the following table.

(U)   Neither  ___  nor  ___
         F      T         T
         F      T         F
         F      F         T
         T      F         F

'Neither Bob likes Sue nor Sue likes Bob' is true just when 'Bob likes Sue' and 'Sue likes Bob' are both false, and otherwise false. But no operator of our formal language has a table which is T just when components are both F. Still, we may form complex expressions which work this way. Thus, for example, (∼___ ∧ ∼___) is T just when sentences in the blanks are both F.

P   Q   (∼P ∧ ∼Q)
(V)   P   Q   (∼P ∧ ∼Q)
      T   T        F
      T   F        F
      F   T        F
      F   F        T
2 The ability to make this point is an important byproduct of our having introduced the formal operators as themselves. Where ∧ and the like are introduced as being direct translations of ordinary operators, a natural reaction to cases of this sort (a reaction had even by some professional logicians and philosophers) is that the table is wrong. But this is mistaken! ∧ has its own significance, which may or may not agree with the shifting meaning of ordinary terms. The situation is no different than for translation across ordinary languages, where terms may or may not have uniform equivalents. But now, one may feel a certain tension with our account of what it is for an operator to be truth functional, for there seem to be contexts where the truth value of sentences in the blanks does not determine the truth value of the whole, even for a purportedly truth functional operator like '___ and ___'. However, we want to distinguish different senses in which an operator may be used (or an ambiguity, as between a bank of a river and a bank where you deposit money), so that when an operator is used with just one sense it has some definite truth function.


So (∼___ ∧ ∼___) is a good translation of 'Neither ___ nor ___'. Another expression with the same table is ∼(P ∨ Q). As it turns out, for any table a truth functional operator may have, there is some way to generate that table by means of our formal operators, and in fact, by means of just the operators ∼ and ∧, or just the operators ∼ and ∨, or just the operators ∼ and →. We will prove this in Part III. For now, let us return to our survey of expressions which do correspond to operators.
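Both candidate translations of 'neither ___ nor ___' can be checked against the target table, which is T just when the components are both F. A sketch; the table is written out row by row so the check is not circular:

```python
from itertools import product

def neither_nor(p, q):
    """Target table (U), written out row by row."""
    table = {(True, True): False, (True, False): False,
             (False, True): False, (False, False): True}
    return table[(p, q)]

for p, q in product([True, False], repeat=2):
    assert neither_nor(p, q) == ((not p) and (not q))  # (∼P ∧ ∼Q)
    assert neither_nor(p, q) == (not (p or q))         # ∼(P ∨ Q)
print("both expressions generate table (U)")
```

This is the pattern behind the general claim in the text: a target table is just a finite list of rows, so whether a formal expression generates it can always be settled by finite checking.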
The operator most naturally associated with ∨ is '___ or ___'. In this case, there is room for caution from the start. Consider first a restaurant menu which says that you will get soup, or you will get salad, with your dinner. This is naturally understood as 'you will get soup or you will get salad' where the sentential operator is '___ or ___'. In this case, the table would seem to be,

(W)   ___  or  ___
       T    F   T
       T    T   F
       F    T   T
       F    F   F

The compound is true if you get soup, true if you get salad, but not if you get neither or both. None of our operators has this table.
But contrast this case with one where a professor promises either to give you an A on a paper, or to give you very good comments so that you will know what went wrong. Suppose the professor gets excited about your paper, giving you both an A and comments. Presumably, she did not break her promise! That is, in this case, we seem to have, 'I will give you an A or I will give you comments', with the table,

(X)   ___  or  ___
       T    T   T
       T    T   F
       F    T   T
       F    F   F

The professor breaks her word just in case she gives you a low grade without comments. This table is identical to the table for ∨. For another case, suppose you set out to buy a power saw, and say to your friend 'I will go to Home Depot or I will go to Lowes'. You go to Home Depot, do not find what you want, so go to Lowes and make your purchase. When your friend later asks where you went, and you say you went to both, he or she will not say you lied (!) when you said where you were going, for your statement required only that you would try at least one of those places.
The grading and shopping cases represent the so-called inclusive use of 'or', including the case when both components are T; the menu uses the exclusive sense of 'or', excluding the case when both are T. Ordinarily, we will assume that 'or' is used in its inclusive sense, and so is translated directly by ∨.3 Another operator that works this way is '___ unless ___'. Again, there are exclusive and inclusive senses which you should be able to see by considering restaurant and grade examples as above. And again, we will ordinarily assume that the inclusive sense is intended. For the exclusive cases, we can generate the table by means of complex expressions. Thus, for example, both ∼(P ↔ Q) and ((P ∨ Q) ∧ ∼(P ∧ Q)) do the job. You should convince yourself that this is so.
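That both expressions capture the exclusive sense can be checked row by row; a brief sketch, using Python's `!=` on truth values to state the target "exactly one is T" condition:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    exclusive = (p != q)                       # table (W): T when exactly one is T
    via_iff = not (p == q)                     # ∼(P ↔ Q)
    via_wedge = (p or q) and not (p and q)     # ((P ∨ Q) ∧ ∼(P ∧ Q))
    assert exclusive == via_iff == via_wedge
print("both expressions capture exclusive 'or'")
```

Any row on which the three columns disagreed would trip the assertion; since none does, either complex expression translates the menu's 'or'.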
Observe that 'either ___ or ___' says the same as '___ or ___'. So one might think that 'either' has no real role; it does, however, serve a sort of bracketing function. So, for example, a different (perhaps more natural) way to think about 'neither ___ nor ___' is as a negation of 'either ___ or ___' (the 'n' to indicate negation). Then observe that 'neither Bob nor Sue is happy', which becomes, 'it is not the case that either Bob is happy or Sue is happy', is not legitimately parsed into [it is not the case that either Bob is happy] or [Sue is happy] with main operator '___ or ___', insofar as 'either Bob is happy' in the blank of 'it is not the case that ___' is not a complete sentence; the required result is it is not the case that [either Bob is happy or Sue is happy], with complete sentences in each blank and translation ∼(B ∨ S); and this has the same table as ∼B ∧ ∼S, the translation suggested above. A similar bracketing results from 'both ___ and ___'. Thus the proper understanding of 'not both Bob and Sue are happy' is it is not the case that [Bob is happy and Sue is happy], with translation, ∼(B ∧ S). So 'either' and 'both' bracket what comes after.
And we continue to work with complex forms on trees. Thus, for example, consider Neither Bob likes Sue nor Sue likes Bob, but Sue likes Jim unless Jim does not
like her. This is a mouthful, but we can deal with it in the usual way. The hard part,
perhaps, is just exposing the operator structure.
3 Again, there may be a distinction between what is said and what is conversationally implied in a given context. Perhaps what was said generally corresponds to the inclusive table, though many uses are against background assumptions which automatically exclude the case when both are T. But we need not get into this. It is enough that some uses are according to the inclusive table.


(Y)  Neither Bob likes Sue nor Sue likes Bob, but Sue likes Jim unless it is not the case that Jim likes Sue
        ├─ Neither Bob likes Sue nor Sue likes Bob
        │    ├─ Bob likes Sue
        │    └─ Sue likes Bob
        └─ Sue likes Jim unless it is not the case that Jim likes Sue
             ├─ Sue likes Jim
             └─ it is not the case that Jim likes Sue
                  └─ Jim likes Sue
Given this, with what we have said above, generate the interpretation function and then the parallel tree as follows.

B: Bob likes Sue
S: Sue likes Bob
J: Sue likes Jim
L: Jim likes Sue

((∼B ∧ ∼S) ∧ (J ∨ ∼L))
   ├─ (∼B ∧ ∼S)
   │    ├─ B
   │    └─ S
   └─ (J ∨ ∼L)
        ├─ J
        └─ ∼L
             └─ L
We have seen that (___ ∨ ___) is equivalent to '___ unless ___'; that (∼___ ∧ ∼___) is equivalent to 'neither ___ nor ___'; and that (___ ∧ ___) is equivalent to '___ but ___'. Given these, everything works as before. Again, the complex problem is rendered simple if we attack it one operator at a time. Another natural option would be (∼(B ∨ S) ∧ (J ∨ ∼L)), with the alternate version of 'neither ___ nor ___'.
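That the two translations of the long sentence, with the two versions of 'neither ___ nor ___', agree in every situation can be confirmed over all sixteen assignments; a sketch using Python's boolean operators for ∼, ∧, and ∨:

```python
from itertools import product

for b, s, j, l in product([True, False], repeat=4):
    first = ((not b) and (not s)) and (j or (not l))   # ((∼B ∧ ∼S) ∧ (J ∨ ∼L))
    second = (not (b or s)) and (j or (not l))         # (∼(B ∨ S) ∧ (J ∨ ∼L))
    assert first == second
print("both translations agree in all 16 situations")
```

The two differ only in how the 'neither ... nor ...' part is rendered, and those renderings have the same table, so agreement of the wholes is expected.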
E5.8. Using the interpretation function below, produce parse trees and then parallel ones to complete the translation for each of the following.

B: Bob likes Sue
S: Sue likes Bob
B1: Bob is cool
S1: Sue is cool

a. Bob likes Sue.
b. Sue does not like Bob.
c. Bob likes Sue and Sue likes Bob.
d. Bob likes Sue or Sue likes Bob.
e. Bob likes Sue unless she is not cool.
f. Either Bob does not like Sue or Sue does not like Bob.
g. Neither Bob likes Sue, nor Sue likes Bob.
*h. Not both Bob and Sue are cool.
i. Bob and Sue are cool, and Bob likes Sue, but Sue does not like Bob.
j. Although neither Bob nor Sue are cool, either Bob likes Sue, or Sue likes Bob.
E5.9. Use our method to translate each of the following. That is, generate parse trees with an interpretation function for all the sentences, and then parallel trees to produce formal equivalents.4

a. Harry is not a Muggle.
b. Neither Harry nor Hermione are Muggles.
c. Either Harry's or Hermione's parents are Muggles.
*d. Neither Harry, nor Ron, nor Hermione are Muggles.
e. Not both Harry and Hermione have Muggle parents.
f. The game of Quidditch continues unless the Snitch is caught.
*g. Although blatching and blagging are illegal in Quidditch, the woolongong shimmy is not.
h. Either the beater hits the bludger or you are not protected from it, and the bludger is a very heavy ball.
i. The Chudley Cannons are not the best Quidditch team ever, however they hope for the best.
j. Harry won the Quidditch cup in his 3rd year at Hogwarts, but not in his 1st, 2nd, 4th, or 5th.

4 My source for the information on Quidditch is Kennilworthy Whisp (aka J.K. Rowling), Quidditch Through the Ages, along with a daughter who is a rabid fan of all things Potter.

5.2.5 If, Iff

The operator most naturally associated with → is 'if ___ then ___'. Consider some fellow, perhaps of less than sterling character, of whom we assert, 'If he loves her, then she is rich'. In this case, the table begins,

(Z)   If  ___  then  ___
           T    T     T
           T    F     F
           F    ?     T
           F    T     F
If 'He loves her' and 'She is rich' are both true, then what we said about him is true. If he loves her, but she is not rich, what we said was wrong. If he does not love her, and she is poor, then we are also fine, for all we said was that if he loves her, then she is rich. But what about the other case? Suppose he does not love her, but she is rich. There is a temptation to say that our conditional assertion is false. But do not give in! Notice: we did not say that he loves all the rich girls. All we said was that if he loves this particular girl, then she is rich. So the existence of rich girls he does not love does not undercut our claim. For another case, say you are trying to find the car he is driving and say 'If he is in his own car, then it is a Corvette'. That is, if 'he is in his own car' then 'it is a Corvette'. You would be mistaken if he has traded his Corvette for a Yugo. But say the Corvette is in the shop and he is driving a loaner that also happens to be a Corvette. Then 'He is in his own car' is F and 'He is driving a Corvette' is T. Still, there is nothing wrong with your claim, if he is in his own car, then it is a Corvette. Given this, we are left with the completed table,

(AA)   If  ___  then  ___
            T    T     T
            T    F     F
            F    T     T
            F    T     F

which is identical to the table for →. With L for 'He loves her' and R for 'She is rich', for 'If he loves her then she is rich' the natural translation is (L → R). Another case which works this way is 'He loves her only if she is rich'. You should think through this as above. So far, perhaps, so good.


But the conditional calls for special comment. First, notice that the table shifts with the position of 'if'. Suppose 'he loves her if she is rich'. Intuitively, this says the same as, 'If she is rich then he loves her'. This time, we are mistaken if she is rich and he does not love her. Thus, with the above table and assignments, we end up with translation (R → L). Notice that the order is switched around the arrow. We can make this point directly from the original claim.

(AB)   he loves her  if  she is rich
            T         T       T
            T         T       F
            F         F       T
            F         T       F

The claim is false just in the case where she is rich but he does not love her. The result is not the same as the table for →. What we need is an expression that is F in the case when L is F and R is T, and otherwise T. We get just this with (R → L). Of course, this is just the same result as by intuitively reversing the operator into the regular 'If ___ then ___' form.
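Table (AB) can be checked directly: the claim 'he loves her if she is rich' is false exactly when R is T and L is F, and that is the table of (R → L), not (L → R). A sketch:

```python
from itertools import product

def cond(p, q):
    """Material conditional P → Q: false only at antecedent T, consequent F."""
    return (not p) or q

for l, r in product([True, False], repeat=2):
    target = not (r and not l)    # table (AB): F just when R is T and L is F
    assert cond(r, l) == target   # (R → L) matches every row

# (L → R) does not match: it disagrees at L = T, R = F
assert any(cond(l, r) != (not (r and not l))
           for l, r in product([True, False], repeat=2))
print("'P if Q' goes to (Q → P)")
```

The final check makes the textbook's warning concrete: swapping the components around the arrow changes the table, so the order matters even though the ordinary sentence wears its antecedent on the right.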
In the formal language, the order of the components is crucial. In a true material conditional, the truth of the antecedent guarantees the truth of the consequent. In ordinary language, this role is played, not by the order of the components, but by operator placement. In general, 'if' by itself is an antecedent indicator; and 'only if' is a consequent indicator. That is, we get,

(AC)   If P then Q        (P → Q)
       P if Q             (Q → P)
       P only if Q        (P → Q)
       only if P, Q       (Q → P)

'If', taken alone, identifies what does the guaranteeing, and so the antecedent of our material conditional; 'only if' identifies what is guaranteed, and so the consequent.5 As we have just seen, the natural translation of 'P if Q' is Q → P, and the translation of 'P only if Q' is P → Q. Thus it should come as no surprise that the translation of 'P if and only if Q' is (P → Q) ∧ (Q → P), where this is precisely what is abbreviated by (P ↔ Q). We can also make this point directly. Consider, 'he loves her if and only if she is rich'. The operator is truth functional, with the table,
5 It may feel natural to convert 'P unless Q' to 'P if not Q' and translate (∼Q → P). This is fine and, as is clear from the abbreviated form, equivalent to (Q ∨ P). However, with the extra negation and concern about direction of the arrow, it is easy to get confused on this approach, so the simple wedge is less likely to go wrong.
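The two claims just made, that 'P if and only if Q' unpacks as (P → Q) ∧ (Q → P) with the table of ↔, and (from the note) that (∼Q → P) is equivalent to (Q ∨ P), can both be checked over the four rows. A sketch:

```python
from itertools import product

def cond(p, q):
    """Material conditional P → Q."""
    return (not p) or q

for p, q in product([True, False], repeat=2):
    # 'P if and only if Q' as (P → Q) ∧ (Q → P): same table as P ↔ Q
    assert (cond(p, q) and cond(q, p)) == (p == q)
    # Footnote 5: 'P unless Q' as 'P if not Q', i.e. (∼Q → P), equals (Q ∨ P)
    assert cond(not q, p) == (q or p)
print("both equivalences hold")
```

Here `p == q` on truth values plays the role of the biconditional's table, true just when the two sides match.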


Cause and Conditional

It is important that the material conditional does not directly indicate causal connection. Suppose we have sentences S: 'You strike the match', and L: 'The match will light'. And consider,

(i)  If you strike the match then it will light        S → L
(ii) The match will light only if you strike it        L → S

with natural translations by our method on the right. Good. But, clearly, the cause of the lighting is the striking. So the first arrow runs from cause to effect, and the second from effect to cause. Why? In (i) we represent the cause as sufficient for the effect: striking the match guarantees that it will light. In (ii) we represent the cause as necessary for the effect (the only way to get the match to light is to strike it) so that the match's lighting guarantees that it was struck.

There may be a certain tendency to associate the ordinary 'if' and 'only if' with cause, so that we say 'if P then Q' when we think of P as a (sufficient) cause of Q, and say 'P only if Q' when we think of Q as a (necessary) cause of P. But causal direction is not reflected by the arrow, which comes out (P → Q) either way. The material conditional indicates guarantee.

This point is important insofar as certain ordinary conditionals seem inextricably tied to causation. This is particularly the case with subjunctive conditionals (conditionals about what would have been). Suppose I was playing basketball and said, 'If I had played Kobe, I would have won', where this is, 'If it were the case that I played Kobe then it would have been the case that I won the game'. Intuitively, this is false; Kobe would wipe the floor with me. But contrast, 'If it were the case that I played Lassie then it would have been the case that I won the game'. Now, intuitively, this is true; Lassie has many talents but, presumably, basketball is not among them, and I could take her. But I have never played Kobe or Lassie, so both 'I played Kobe' and 'I played Lassie' are false. Thus the truth value of the whole conditional changes from false to true though the values of sentences in the blanks remain the same; and 'If it were the case that ___ then it would have been the case that ___' is not even truth functional. Subjunctive conditionals do offer a sort of guarantee, but the guarantee is for situations alternate to the way things actually are. So actual truth values do not determine the truth of the conditional. Conditionals other than the material conditional are a central theme of Priest, Non-Classical Logics. As usual, we simply assume that 'if' and 'only if' are used in their truth functional sense, and so are given a good translation by →.

CHAPTER 5. TRANSLATION

163

(AD)
he loves her   she is rich   he loves her if and only if she is rich
     T             T                           T
     T             F                           F
     F             T                           F
     F             F                           T
It cannot be that he loves her and she is not rich, because he loves her only if she is
rich; so the second row is F. And it cannot be that she is rich and he does not love
her, because he loves her if she is rich; so the third row is F. The biconditional is true
just when both she is rich and he loves her, or neither. Another operator that works
this way is '___ just in case ___'. You should convince yourself that this is so.
Notice that 'if', 'only if', and 'if and only if' play very different roles for translation;
you almost want to think of them as completely different words: 'if', 'onlyif', and
'ifandonlyif', each with its own distinctive logical role. Do not get the different roles
confused!
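The three roles can be checked mechanically. Here is a small Python sketch (the code and names are illustrative, not part of the text) that builds the truth tables for 'P if Q', 'P only if Q', and 'P if and only if Q', and confirms that the first two together have the same table as the third:

```python
from itertools import product

def arrow(a, b):
    """Material conditional: (a -> b) is false only when a is true and b is false."""
    return (not a) or b

# P: he loves her; Q: she is rich
for p, q in product([True, False], repeat=2):
    p_if_q = arrow(q, p)         # 'P if Q' translates (Q -> P)
    p_only_if_q = arrow(p, q)    # 'P only if Q' translates (P -> Q)
    p_iff_q = (p == q)           # 'P if and only if Q' translates (P <-> Q)
    # 'if' and 'only if' together come to the same thing as 'if and only if'
    assert (p_if_q and p_only_if_q) == p_iff_q
    print(p, q, p_if_q, p_only_if_q, p_iff_q)
```

If the conjunction of the two conditionals ever differed from the biconditional, the assertion would fail; it does not, on any of the four rows.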
For an example that puts some of this together, consider, 'She is rich if he loves
her, if and only if he is a cad or very generous'. This comes to the following.

(AE)
She is rich if he loves her, if and only if he is a cad or he is very generous
├─ She is rich if he loves her
│  ├─ She is rich
│  └─ he loves her
└─ he is a cad or he is very generous
   ├─ he is a cad
   └─ he is very generous

We begin by assigning sentence letters to the simple sentences at the bottom. Then
the parallel tree is constructed as follows.

R: She is rich
L: He loves her
C: He is a cad
G: He is very generous

((L → R) ↔ (C ∨ G))
├─ (L → R)
│  ├─ L
│  └─ R
└─ (C ∨ G)
   ├─ C
   └─ G
Observe that 'she is rich if he loves her' is equivalent to (L → R), not the other way
around. Then the wedge translates 'or', and the main operator ↔ has the same table
as 'if and only if'.
Notice again that our procedure for translating, one operator or part at a time,
lets us translate even where the original is so complex that it is difficult to comprehend.
The method forces us to capture all available truth functional structure, and
the translation is thus good insofar as, given the specified interpretation function, the
method makes the formal sentence true at just the consistent stories where the original is true. It does this because the formal and informal sentences work the same
way. Eventually, you want to be able to work translations without the trees! (And
maybe you have already begun to do so.) In fact, it will be helpful to generate them
from the top down, rather than from the bottom up, building the translation operator-by-operator as you take the sentence apart from the main operator. But, of course,
the result should be the same no matter how you do it.
From definition AR on p. 4, an argument is some sentences, one of which (the
conclusion) is taken to be supported by the remaining sentences (the premises).
In some courses on logic or critical reasoning, one might spend a great deal of
time learning to identify premises and conclusions in ordinary discourse. However,
we have taken this much as given, representing arguments in standard form, with
premises listed as complete sentences above a line, and the conclusion under the line.
Thus, for example,
(AF)
    If you strike the match, then it will light
    The match will not light
    ───────────────────────────────────────────
    You did not strike the match

is a simple argument of the sort we might have encountered in chapter 1. To translate
the argument, we produce a translation for the premises and conclusion, retaining
the standard-form structure. Thus, as in the discussion of causation on p. 162, we
might end up with an interpretation function and translation as below,

S: You strike the match
L: The match will light

    S → L
    ~L
    ─────
    ~S
The result is an object to which we can apply our semantic and derivation methods
in a straightforward way.
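Since there are only two sentence letters here, the semantic check can be done by brute force over the four rows of the truth table. The Python sketch below (illustrative only; the function names are mine) does exactly that for S → L, ~L / ~S:

```python
from itertools import product

def arrow(a, b):
    """Material conditional: (a -> b)."""
    return (not a) or b

def valid(premises, conclusion, letters=2):
    """Sententially valid iff no row makes all premises true and the conclusion false."""
    for row in product([True, False], repeat=letters):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False
    return True

# S -> L, ~L / ~S
premises = [lambda s, l: arrow(s, l), lambda s, l: not l]
conclusion = lambda s, l: not s
print(valid(premises, conclusion))  # True: the argument is sententially valid
```

Swapping the conclusion for, say, S (affirming the consequent from S → L and L) makes `valid` return False, since the row with S false and L true is a counterexample.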
And this is what we have been after: If a formal argument is sententially valid,
then the corresponding ordinary argument must be logically valid. Suppose that, for
some good formal translation of its premises and conclusion, an argument is sententially
valid; then by SV there is no interpretation on which the premises are true and the
conclusion is false; so there is no intended interpretation on which the premises are
true and the conclusion is false; but given a good translation, by CG, the ordinary-language
premises and conclusion have the same truth values at any consistent story
as formal expressions on the corresponding intended interpretation; so no consistent
story has the premises true and the conclusion false; so by LV the original argument
is logically valid. We will make this point again, in some detail, in Part III. For now,
notice that our formal methods, derivations and truth tables, apply to arguments of
arbitrary complexity. So we are in a position to demonstrate validity for arguments
that would have set us on our heels in chapter 1. With this in mind, consider again the
butler case (B) that we began with from p. 2. The demonstration that the argument is
logically valid is entirely straightforward, by a good translation and then truth tables
to demonstrate semantic validity. (It remains for Part III to show how derivations
matter for semantic validity.)
E5.10. Using the interpretation function below, produce parse trees and then parallel
ones to complete the translation for each of the following.
L: Lassie barks
T : Timmy is in trouble
P : Pa will help
H : Lassie is healthy
a. If Timmy is in trouble, then Lassie barks.
b. Timmy is in trouble if Lassie barks.
c. Lassie barks only if Timmy is in trouble.
d. If Timmy is in trouble and Lassie barks, then Pa will help.
*e. If Timmy is in trouble, then if Lassie barks Pa will help.
f. If Pa will help only if Lassie barks, then Pa will help if and only if Timmy is
in trouble.
g. Pa will help if Lassie barks, just in case Lassie barks only if Timmy is in
trouble.
h. If Timmy is in trouble and Pa does not help, then Lassie is not healthy or does
not bark.
*i. If Timmy is in trouble, then either Lassie is not healthy or if Lassie barks then
Pa will help.


j. If Lassie neither barks nor is healthy, then Timmy is in trouble if Pa will not
help.

E5.11. Use our method, with or without parse trees, to produce a translation, including interpretation function for the following.
a. If animals feel pain, then animals have intrinsic value.
b. Animals have intrinsic value only if they feel pain.
c. Although animals feel pain, vegetarianism is not right.
d. Animals do not have intrinsic value unless vegetarianism is not right.
e. Vegetarianism is not right only if animals do not feel pain or do not have
intrinsic value.
f. If you think animals feel pain, then vegetarianism is right.
*g. If you think animals do not feel pain, then vegetarianism is not right.
h. If animals feel pain, then if animals have intrinsic value if they feel pain, then
animals have intrinsic value.
*i. Vegetarianism is right only if both animals feel pain, and animals have intrinsic value just in case they feel pain; but it is not the case that animals have
intrinsic value just in case they feel pain.
j. If animals do not feel pain if and only if you think animals do not feel pain,
but you do think animals feel pain, then you do not think that animals feel
pain.

E5.12. For each of the following arguments: (i) Produce a good translation, including interpretation function and translations for the premises and conclusion.
Then (ii) use truth tables to determine whether the argument is sententially
valid.
*a. Our car will not run unless it has gasoline
    Our car has gasoline
    ─────────────────────────────────────────
    Our car will run

b. If Bill is president, then Hillary is first lady
   Hillary is not first lady
   ─────────────────────────────────────────
   Bill is not president

c. Snow is white and snow is not white
   ─────────────────────────────────────────
   Dogs can fly

d. If Mustard murdered Boddy, then it happened in the library
   The weapon was the pipe if and only if it did not happen in the library, and
   the weapon was not the pipe only if Mustard murdered him
   ─────────────────────────────────────────
   Mustard murdered Boddy

e. There is evil
   If god is good, there is no evil unless he has an excuse for allowing it
   If god is omnipotent, then he does not have an excuse for allowing evil
   ─────────────────────────────────────────
   God is not both good and omnipotent
E5.13. For each of the arguments in E5.12 that is sententially valid, produce a derivation to show that it is valid in AD.

E5.14. Use translation and truth tables to show that the butler argument (B) from p.
2 is semantically valid.

E5.15. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples of your own construction (iii) where the concept
applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.
a. Good translations.
b. Truth functional operators.
c. Parse trees, interpretation functions, and parallel trees.

5.3 Quantificational

It is not surprising that our goals for the quantificational case remain very much as
in the sentential one. We still want to produce translations, consisting of interpretation functions and formal sentences, which capture available structure, making
a formal P′ true at intended interpretation IIω just when the corresponding ordinary
P is true at story ω. We do this as before, by ensuring that the various parts of the
ordinary and formal languages work the same way. Of course, now we are interested
in capturing quantificational structure, and the interpretation and formal sentences
are for quantificational languages.

In the last section, we developed a recipe for translating from ordinary language
into sentential expressions, associating particular bits of ordinary language with various formal symbols. We might proceed in very much the same way here, moving
from our notion of truth-functional operators to that of extensional terms, relation
symbols, and operators. Roughly, an ordinary term is extensional when the truth
value of a sentence in which it appears depends just on the object to which it refers;
an ordinary relation symbol is extensional when the truth value of a sentence in which
it appears depends just on the objects to which it applies; and an ordinary operator
is extensional when the truth value of a sentence in which it appears depends just
on the satisfaction of expressions which appear in its blanks. Clearly the notion of
an extensional operator, at least, is closely related to that of a truth functional operator. Extensional terms, relation symbols, and operators in ordinary language work
very much like corresponding ones in a formal quantificational language, where,
again, the idea would be to identify bits of ordinary language which contribute to
truth values in the same way as corresponding parts of the formal language.

However, in the quantificational case, an official recipe for translation is relatively
complicated. It is better to work directly with the fundamental goal of producing
formal translations that are true in the same situations as ordinary expressions. To be
sure, certain patterns and strategies will emerge but, again, we should think of what
we are doing less as applying a recipe than as directly using our understanding of
what makes ordinary and formal sentences true to produce good translations. With
this in mind, let us move directly to sample cases, beginning with those that are
relatively simple, and advancing to ones that are more complex.

5.3.1 Simple Quantifications

First, sentences without quantifiers work very much as in the sentential case. Consider a simple example. Say we are confronted with 'Bob is happy'. We might begin,
as in the sentential case, with the interpretation function,

B: Bob is happy

and use B for 'Bob is happy', ~B for 'Bob is not happy', and so forth. But this is to
ignore structure we are now capable of capturing. Thus, in our standard quantificational language Lq, we might let U be the set of all people, and set,

b: Bob
H¹: {o | o is a happy person}

Then we can use Hb for 'Bob is happy', ~Hb for 'Bob is not happy', and so forth.
If IIω assigns Bob to b, and the set of happy things to H, then Hb is satisfied and
true on IIω just in case Bob is happy at ω, which is just what we want. Similarly,
suppose we are confronted with 'Bob's father is happy'. In the sentential case, we
might have tried, F: Bob's father is happy. But this is to miss structure available to
us now. So we might consider assigning a constant d to Bob's father and going with
Hd as above. But this also misses available structure. In this case, we can expand
the interpretation function to include,

f¹: {⟨m, n⟩ | m, n ∈ U and n is the father of m}

Then for any variable assignment d, Id[b] = Bob and Id[f¹b] is Bob's father. So
Hf¹b is satisfied and true just in case Bob's father is happy, ~Hf¹b is satisfied just
in case Bob's father is not happy, and so forth, which is just what we want. In these
cases without quantifiers, once we have translated simple sentences, everything else
proceeds as in the sentential case. Thus, for example, for 'Neither Bob nor his father
is happy' we might offer, ~Hb ∧ ~Hf¹b.
The situation gets more interesting when we add quantifiers. We will begin with
cases where a quantifier's scope includes neither binary operators nor other quantifiers, and gradually increase complexity. Consider the following interpretation function.

II  U: {o | o is a dog}
    f¹: {⟨m, n⟩ | m, n ∈ U and n is the father of m}
    W¹: {o | o ∈ U and o will have its day}

We assume that there is some definite content to a dog's having its day, and that every
dog has a father; if a dog Adam has no father at all, we will not have specified a
legitimate function. (Why?) Say we want to translate the following sentences.


(1) Every dog will have its day


(2) Some dog will have its day
(3) Some dog will not have its day
(4) No dog will have its day
Assume 'some' means 'at least one'. The first sentence is straightforward. ∀xWx is
read, 'for any x, Wx'; it is true just in case every dog will have its day. Suppose IIω
is an interpretation I where the elements of U are m, n, and so forth. Then the tree is
as below.

(AG)
(1) Id[∀xWx]   by ∀x:
      (2) Id(x|m)[Wx]
      (2) Id(x|n)[Wx]
      ...   one branch for each member of U

The formula at (1) is satisfied just in case each of the branches at (2) is satisfied. But
this can be the case only if each member of U is in the interpretation of W, which,
given our interpretation function, can only be the case if each dog will have its day.
If even one dog does not have its day, then ∀xWx is not satisfied, and is not true.
The second case is also straightforward. ∃xWx is read, 'there is an x such that
Wx'; it is true just in case some dog will have its day.

(AH)
(1) Id[∃xWx]   by ∃x:
      (2) Id(x|m)[Wx]
      (2) Id(x|n)[Wx]
      ...   one branch for each member of U

The formula at (1) is satisfied just in case at least one of the branches at (2) is satisfied.
But this can be the case only if some member of U is in the interpretation of W,
which, given the interpretation function, is to say that some dog will have its day.
The next two cases are only slightly more difficult. ∃x~Wx is read, 'there is an
x such that not Wx'; it is true just in case some dog will not have its day.

(AI)
(1) Id[∃x~Wx]   by ∃x:
      (2) Id(x|m)[~Wx]   by ~:  (3) Id(x|m)[Wx]
      (2) Id(x|n)[~Wx]   by ~:  (3) Id(x|n)[Wx]
      ...   one branch for each member of U

The formula at (1) is satisfied just in case at least one of the branches at (2) is satisfied.
And a branch at (2) is satisfied just in case the corresponding branch at (3) is not
satisfied. So ∃x~Wx is satisfied and true just in case some member of U is not in the
interpretation of W, just in case some dog does not have its day.
The last case is similar. ∀x~Wx is read, 'for any x, not Wx'; it is true just in
case every dog does not have its day.

(AJ)
(1) Id[∀x~Wx]   by ∀x:
      (2) Id(x|m)[~Wx]   by ~:  (3) Id(x|m)[Wx]
      (2) Id(x|n)[~Wx]   by ~:  (3) Id(x|n)[Wx]
      ...   one branch for each member of U

The formula at (1) is satisfied just in case all of the branches at (2) are satisfied. And
this is the case just in case none of the branches at (3) are satisfied. So ∀x~Wx is
satisfied and true just in case none of the members of U are in the interpretation of
W, just in case no dog has its day.
Perhaps it has already occurred to you that there are other ways to translate these
sentences. The following lists what we have done, with quantifier-switching alternatives on the right.

(AK)
Every dog will have its day        ∀xWx      ~∃x~Wx
Some dog will have its day         ∃xWx      ~∀x~Wx
Some dog will not have its day     ∃x~Wx     ~∀xWx
No dog will have its day           ∀x~Wx     ~∃xWx

There are different ways to think about these alternatives. First, in ordinary language,
beginning from the bottom, no dog will have its day just in case not even one dog
does. Similarly, moving up the list, some dog will not have its day just in case not
every dog does. And some dog will have its day just in case not every dog does not.
And every dog will have its day iff not even one dog does not. These equivalences
may be difficult to absorb at first but, if you think about them, each should make
sense.

Next, we might think about the alternatives purely in terms of abbreviations.
Notice that, in a tree, Id[~~P] is always the same as Id[P]; the tildes cancel
each other out. But then, in the first case, ~∃x~Wx abbreviates ~~∀x~~Wx,
which is satisfied just in case ∀xWx is satisfied. In the second case, ∃xWx directly
abbreviates ~∀x~Wx. In the third, ∃x~Wx abbreviates ~∀x~~Wx, which is
satisfied just in case ~∀xWx is satisfied. And, in the last case, ~∃xWx abbreviates
~~∀x~Wx, which is satisfied just in case ∀x~Wx is satisfied. So, again, the
alternatives are true under just the same conditions.
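On any interpretation with a finite universe, the equivalences of table (AK) can be confirmed by direct computation. The Python sketch below (the universe and names are invented for illustration) checks all four equivalences, for every possible extension of W over a three-member universe:

```python
from itertools import combinations

# U is a finite set of dogs; W is the set of those that will have their day.
U = {"rex", "fido", "lassie"}

def check(W):
    every   = all(x in W for x in U)        # forall x Wx
    some    = any(x in W for x in U)        # exists x Wx
    somenot = any(x not in W for x in U)    # exists x ~Wx
    none    = all(x not in W for x in U)    # forall x ~Wx
    # the quantifier-switching alternatives of table (AK)
    assert every   == (not somenot)   # forall x Wx   iff  ~exists x ~Wx
    assert some    == (not none)      # exists x Wx   iff  ~forall x ~Wx
    assert somenot == (not every)     # exists x ~Wx  iff  ~forall x Wx
    assert none    == (not some)      # forall x ~Wx  iff  ~exists x Wx

# run the check for every choice of the extension of W
for r in range(len(U) + 1):
    for W in combinations(U, r):
        check(set(W))
print("all (AK) equivalences hold on this model")
```

Of course this only inspects one small model; the tree arguments in the text show the equivalences hold on every interpretation.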
Finally, we might think about the alternatives directly, based on their branch conditions. Taking just the last case,

(AL)
(1) Id[~∃xWx]   by ~:
      (2) Id[∃xWx]   by ∃x:
            (3) Id(x|m)[Wx]
            (3) Id(x|n)[Wx]
            ...   one branch for each member of U

The formula at (1) is satisfied just in case the formula at (2) is not. But the formula
at (2) is not satisfied just in case none of the branches at (3) is satisfied, and this
can only happen if no dog is in the interpretation of W, where this is as it should be
for 'no dog will have its day'. In practice, there is no reason to prefer ~∃xP over
∀x~P, or to prefer ~∀xP over ∃x~P; the choice is purely a matter of taste. It
would be less natural to use ~∃x~P in place of ∀xP, or ~∀x~P in place of ∃xP.
And it is a matter of good form to pursue translations that are natural. At any rate,
all of the options satisfy CG. (But notice that we leave further room for alternatives
among good answers, thus complicating comparisons with, for example, the back of
the book!)
Observe that variables are mere placeholders for these expressions, so that the choice
of variables also does not matter. Thus, in tree (AL) immediately above, the formula
is true just in case no dog is in the interpretation of W. But we get the exact same
result if the variable is y.

(AM)
(1) Id[~∃yWy]   by ~:
      (2) Id[∃yWy]   by ∃y:
            (3) Id(y|m)[Wy]
            (3) Id(y|n)[Wy]
            ...   one branch for each member of U

In either case, what matters in the end is whether the objects are in the interpretation
of the relation symbol: whether m ∈ I[W], and so forth. If none are, then the formulas
are satisfied. Thus the formulas are satisfied under exactly the same conditions. And
since one is satisfied iff the other is satisfied, one is a good translation iff the other is.
So the choice of variables is up to you.
Given all this, we continue to treat truth functional operators as before, and we
can continue to use underlines to expose truth functional structure. The difference is
that what we would have seen as simple sentences have structure we were not able
to expose before. So, for example, 'Either every dog will have his day or no dog will
have his day' gets translation, ∀xWx ∨ ∀x~Wx; 'Some dog will have its day and
some dog will not have its day' gets, ∃xWx ∧ ∃x~Wx; and so forth. If we want to
say that some dog is such that its father will have his day, we might try ∃xWf¹x:
there is an x such that the father of it will have its day.
E5.16. On p. 172 we say that we may show directly, based on branch conditions,
that the alternatives of table (AK) have the same truth conditions, but show it
only for the last case. Use trees to demonstrate that the other alternatives are
true under the same conditions. Be sure to explain how your trees have the
desired results.

E5.17. Given the following partial interpretation function for Lq , complete the translation for each of the following. Assume Phil 300 is a logic class with Ninfa
and Harold as members in which each student is associated with a unique
homework partner.
U: {o | o is a student in Phil 300}
a: Ninfa
d: Harold
p¹: {⟨m, n⟩ | m, n ∈ U and n is the homework partner of m}
G¹: {o | o ∈ U and o gets a good grade}
H²: {⟨m, n⟩ | m, n ∈ U and m gets a higher grade than n}
a. Ninfa and Harold both get a good grade.
b. Ninfa gets a good grade, but her homework partner does not.
c. Ninfa gets a good grade only if both her homework partner and Harold do.
d. Harold gets a higher grade than Ninfa.
*e. If Harold gets a higher grade than Ninfa, then he gets a higher grade than her
homework partner.
f. Nobody gets a good grade.
*g. If someone gets a good grade, then Ninfa's homework partner does.
h. If Ninfa does not get a good grade, then nobody does.
*i. Nobody gets a grade higher than their own grade.
j. If no one gets a higher grade than Harold, then no one gets a good grade.

E5.18. Produce a good quantificational translation for each of the following. In this
case you should provide an interpretation function for the sentences. Let
U be the set of famous philosophers, and, assuming that each has a unique
successor, implement a successor function.
a. Plato is a good philosopher.
b. Plato is better than Aristotle.
c. Neither Plato is better than Aristotle, nor Aristotle is better than Plato.
*d. If Plato is good, then his successor and successor's successor are good.
e. No philosopher is better than his successor.
f. Not every philosopher is better than Plato.
g. If all philosophers are good, then Plato and Aristotle are good.


h. If neither Plato nor his successor are good, then no philosopher is good.
*i. If some philosopher is better than Plato, then Aristotle is.
j. If every philosopher is better than his successor, then no philosopher is better
than Plato.

5.3.2 Complex Quantifications

With a small change to our interpretation function, we introduce a new sort of complexity into our translations. Suppose U includes not just all dogs, but all physical
objects, so that our interpretation function II has,
II

U: fo j o is a physical objectg

W 1 : fo j o 2 U and o will have its dayg


D 1 : fo j o 2 U and o is a dogg
Thus the universe includes more than dogs, and D is a relation symbol with application to dogs. We set out to translate the same sentences as before.6
(1) Every dog will have its day
(2) Some dog will have its day
(3) Some dog will not have its day
(4) No dog will have its day
This time, ∀xWx does not say that every dog will have its day. ∀xWx is true just in
case everything in U, dogs along with everything else, will have its day. So it might
be that every dog will have its day even though something else, for example my left
sock, does not. So ∀xWx is not a good translation of 'every dog will have its day'.

We do better with ∀x(Dx → Wx). ∀x(Dx → Wx) is read, 'for any x, if x
is a dog, then x will have its day'; it is true just in case every dog will have its day.
Again, suppose IIω is an interpretation I such that the elements of U are m, n . . . .

6. Sentences of the sort 'all P are Q', 'no P are Q', 'some P are Q', and 'some P are not Q' are,
in a tradition reaching back to Aristotle, often associated with a square of opposition and called A,
E, I, and O sentences. In a context with the full flexibility of quantifier languages, there is little point
to the special treatment, insofar as our methods apply to these as well as to ones that are more complex.
For discussion, see Pietroski, Logical Form.

(AN)
(1) Id[∀x(Dx → Wx)]   by ∀x:
      (2) Id(x|m)[Dx → Wx]   by →:  (3) Id(x|m)[Dx]
                                    (3) Id(x|m)[Wx]
      (2) Id(x|n)[Dx → Wx]   by →:  (3) Id(x|n)[Dx]
                                    (3) Id(x|n)[Wx]
      ...   one branch for each member of U

The formula at (1) is satisfied just in case each of the branches at (2) is satisfied. And
all the branches at (2) are satisfied just in case there is no S/N pair at (3). This is so
just in case nothing in U is a dog that does not have its day; that is, just in case every
dog has its day. It is important to see how this works: There is a branch at (2) for
each thing in U. The key is that branches for things that are not dogs are vacuously
satisfied, just because the things are not dogs. If ∀x(Dx → Wx) is true, however,
then whenever a branch is for a thing that is a dog, so that a top branch of a pair at (3) is
satisfied, that thing must be one that will have its day. If anything is a dog that does
not have its day, there is an S/N pair at (3), and ∀x(Dx → Wx) is not satisfied and
not true.
It is worth noting some expressions that do not result in a good translation.
∀xDx ∧ ∀xWx is true just in case everything is a dog and everything will have
its day. To make it false, all it takes is one thing that is not a dog, or one thing that
will not have its day; but this is not what we want. If this is not clear, work it out
on a tree. Similarly, ∀xDx → ∀xWx is true just in case if everything is a dog,
then everything will have its day. To make it true, all it takes is one thing that is not
a dog; then the antecedent is false, and the conditional is true; but again, this is
not what we want. In the good translation, ∀x(Dx → Wx), the quantifier picks out
each thing in U, the antecedent of the conditional identifies the ones we want to talk
about, and the consequent says what we want to say about them.
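The difference between the good and the bad translations is easy to see on a small model. In the Python sketch below (the universe and names are invented for illustration), every dog has its day, yet the conjunction comes out false because of the sock; and when fido misses his day, the conditional form still comes out true vacuously:

```python
# A universe with more than dogs: my left sock is not a dog.
U = ["rex", "fido", "left_sock"]
D = {"rex", "fido"}    # dogs
W = {"rex", "fido"}    # things that will have their day

def arrow(a, b):
    """Material conditional: (a -> b)."""
    return (not a) or b

good = all(arrow(x in D, x in W) for x in U)                   # forall x (Dx -> Wx)
bad1 = all(x in D for x in U) and all(x in W for x in U)       # forall x Dx & forall x Wx
bad2 = arrow(all(x in D for x in U), all(x in W for x in U))   # forall x Dx -> forall x Wx
print(good, bad1, bad2)  # True False True

W2 = {"rex"}           # now fido does not have his day
good2 = all(arrow(x in D, x in W2) for x in U)
bad2b = arrow(all(x in D for x in U), all(x in W2 for x in U))
print(good2, bad2b)  # False True: the big conditional stays true vacuously
```

So the conjunction is too strong, and the conditional over whole quantifications too weak; only the quantified conditional tracks 'every dog will have its day'.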
Moving on to the second sentence, ∃x(Dx ∧ Wx) is read, 'there is an x such that
x is a dog, and x will have its day'; it is true just in case some dog will have its day.

(AO)
(1) Id[∃x(Dx ∧ Wx)]   by ∃x:
      (2) Id(x|m)[Dx ∧ Wx]   by ∧:  (3) Id(x|m)[Dx]
                                    (3) Id(x|m)[Wx]
      (2) Id(x|n)[Dx ∧ Wx]   by ∧:  (3) Id(x|n)[Dx]
                                    (3) Id(x|n)[Wx]
      ...   one branch for each member of U

The formula at (1) is satisfied just in case one of the branches at (2) is satisfied. A
branch at (2) is satisfied just in case both branches in the corresponding pair at (3)
are satisfied. And this is so just in case something is a dog that will have its day.
Again, it is worth noting expressions that do not result in good translation. ∃xDx ∧
∃xWx is true just in case something is a dog, and something will have its day,
where these need not be the same; so ∃xDx ∧ ∃xWx might be true even though no
dog has its day. ∃x(Dx → Wx) is true just in case something is such that if it is a
dog, then it will have its day.

(AP)
(1) Id[∃x(Dx → Wx)]   by ∃x:
      (2) Id(x|m)[Dx → Wx]   by →:  (3) Id(x|m)[Dx]
                                    (3) Id(x|m)[Wx]
      (2) Id(x|n)[Dx → Wx]   by →:  (3) Id(x|n)[Dx]
                                    (3) Id(x|n)[Wx]
      ...   one branch for each member of U

The formula at (1) is satisfied just in case one of the branches at (2) is satisfied; and
a branch at (2) is satisfied just in case there is a pair at (3) in which the top is N or the
bottom is S. So all we need for ∃x(Dx → Wx) to be true is for there to be even one
thing that is not a dog, for example, my sock, or one thing that will have its day.
So ∃x(Dx → Wx) can be true though no dog has its day.
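Again a small model makes the point. In the sketch below (illustrative names, not from the text), nothing at all has its day, yet the conditional form comes out true just because the sock is not a dog, while the conjunctive translation comes out false, as it should:

```python
U = ["rex", "left_sock"]
D = {"rex"}     # dogs
W = set()       # nothing has its day

def arrow(a, b):
    """Material conditional: (a -> b)."""
    return (not a) or b

good = any((x in D) and (x in W) for x in U)   # exists x (Dx & Wx)
bad  = any(arrow(x in D, x in W) for x in U)   # exists x (Dx -> Wx)
print(good, bad)  # False True
```

The sock satisfies Dx → Wx vacuously, so the existential over the conditional is true; this is why the existential quantifier ordinarily goes with ∧, not →.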
The cases we have just seen are typical. Ordinarily, the existential quantifier
operates on expressions with main operator ∧. If it operates on an expression with
main operator →, the resultant expression is satisfied just by virtue of something
that does not satisfy the antecedent. And, ordinarily, the universal quantifier operates
on expressions with main operator →. If it operates on an expression with main
operator ∧, the expression is satisfied only if everything in U has features from both
parts of the conjunction, and it is uncommon to say something about everything in
U, as opposed to all the objects of a certain sort. Again, when the universal quantifier
operates on an expression with main operator →, the antecedent of the conditional
identifies the objects we want to talk about, and the consequent says what we want
to say about them.
Once we understand these two cases, the next two are relatively straightforward.
∃x(Dx ∧ ~Wx) is read, 'there is an x such that x is a dog and x will not have its
day'; it is true just in case some dog will not have its day. Here is the tree without
branches for the (by now obvious) term assignments.

(AQ)
(1) Id[∃x(Dx ∧ ~Wx)]   by ∃x:
      (2) Id(x|m)[Dx ∧ ~Wx]   by ∧:  (3) Id(x|m)[Dx]
                                     (3) Id(x|m)[~Wx]   by ~:  (4) Id(x|m)[Wx]
      (2) Id(x|n)[Dx ∧ ~Wx]   by ∧:  (3) Id(x|n)[Dx]
                                     (3) Id(x|n)[~Wx]   by ~:  (4) Id(x|n)[Wx]
      ...   one branch for each member of U

The formula at (1) is satisfied just in case some branch at (2) is satisfied. A branch
at (2) is satisfied just in case the corresponding pair of branches at (3) is satisfied.
And for a lower branch at (3) to be satisfied, the corresponding branch at (4) has to
be unsatisfied. So for ∃x(Dx ∧ ~Wx) to be satisfied, there has to be something that
is a dog and does not have its day. In principle, this is just like 'some dog will have
its day'. We set out to say that some object of sort P has feature Q. For this, we say
that there is an x that is of type P and has feature Q. In 'some dog will have its day',
Q is the simple W. In this case, Q is the slightly more complex ~W.
Finally, ∀x(Dx → ~Wx) is read, 'for any x, if x is a dog, then x will not have
its day'; it is true just in case every dog will not have its day, that is, just in case
no dog will have its day.

(AR)
(1) Id[∀x(Dx → ~Wx)]   by ∀x:
      (2) Id(x|m)[Dx → ~Wx]   by →:  (3) Id(x|m)[Dx]
                                     (3) Id(x|m)[~Wx]   by ~:  (4) Id(x|m)[Wx]
      (2) Id(x|n)[Dx → ~Wx]   by →:  (3) Id(x|n)[Dx]
                                     (3) Id(x|n)[~Wx]   by ~:  (4) Id(x|n)[Wx]
      ...   one branch for each member of U

The formula at (1) is satisfied just in case every branch at (2) is satisfied. Every
branch at (2) is satisfied just in case there is no S/N pair at (3); and for this to be so,
there cannot be a case where a top at (3) is satisfied and the corresponding bottom at
(4) is satisfied as well. So ∀x(Dx → ~Wx) is satisfied and true just in case nothing
is a dog that will have its day. Again, in principle, this is like 'every dog will have its
day'. Using the universal quantifier, we pick out the class of things we want to talk
about in the antecedent, and say what we want to say about the members of the class
in the consequent. In this case, what we want to say is that things in the class will not
have their day.
As before, quantifier-switching alternatives are possible. In the table below, alternatives to what we have done are listed on the right.

(AS)
Every dog will have its day        ∀x(Dx → Wx)     ~∃x(Dx ∧ ~Wx)
Some dog will have its day         ∃x(Dx ∧ Wx)     ~∀x(Dx → ~Wx)
Some dog will not have its day     ∃x(Dx ∧ ~Wx)    ~∀x(Dx → Wx)
No dog will have its day           ∀x(Dx → ~Wx)    ~∃x(Dx ∧ Wx)

Beginning from the bottom, if not even one thing is a dog that will have its day, then
no dog will have its day. Moving up, if it is not the case that everything that is a
dog will have its day, then some dog will not. Similarly, if it is not the case that
everything that is a dog will not have its day, then some dog does. And if not even
one thing is a dog that does not have its day, then every dog will have its day. Again,
choices among the alternatives are a matter of taste, though the latter ones may be
more natural than the former. If you have any questions about how the alternatives
work, work them through on trees.
Before turning to some exercises, let us generalize what we have done a bit.
Include in our interpretation function,

H¹: {o | o is happy}
C¹: {o | o is a cat}
Suppose we want to say, not that every dog will have its day, but that every happy
dog will have its day. Again, in principle this is like what we have done. With
the universal quantifier, we pick out the class of things we want to talk about in the
antecedent, in this case, happy dogs, and say what we want about them in the
consequent. Thus ∀x((Dx ∧ Hx) → Wx) is true just in case everything that is both
happy and a dog will have its day, which is to say, every happy dog will have its
day. Similarly, if we want to say, every dog will or will not have its day, we might
try ∀x(Dx → (Wx ∨ ¬Wx)). Or putting these together, for every happy dog will
or will not have its day, ∀x((Dx ∧ Hx) → (Wx ∨ ¬Wx)). We consistently pick
out the things we want to talk about in the antecedent, and say what we want about
them with the consequent. Similar points apply to the existential quantifier. Thus
'Some happy dog will have its day' has natural translation ∃x((Dx ∧ Hx) ∧ Wx):
something is a happy dog and will have its day. 'Some happy dog will or will not
have its day' gets ∃x((Dx ∧ Hx) ∧ (Wx ∨ ¬Wx)). And so forth.
It is tempting to treat 'All dogs and cats will have their day' similarly with translation ∀x((Dx ∧ Cx) → Wx). But this would be a mistake! We do not want to
say that everything which is a dog and a cat will have its day, for nothing is both
a dog and a cat! Rather, good translations are ∀x(Dx → Wx) ∧ ∀x(Cx → Wx),
all dogs will have their day and all cats will have their day, or the more elegant
∀x((Dx ∨ Cx) → Wx), each thing that is either a dog or a cat will have its day.
In the happy dog case, we needed to restrict the class under consideration to include
just happy dogs; in this dog and cat case, we are not restricting the class, but rather
expanding it to include both dogs and cats. The disjunction (Dx ∨ Cx) applies to
things in the broader class which includes both dogs and cats.
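The difference between the conjunctive and disjunctive antecedents is easy to exhibit on a small model (hypothetical extensions): since nothing is both a dog and a cat, the mistaken translation comes out vacuously true even when no dog or cat has its day.

```python
U = {"rex", "felix", "sock"}
D = {"rex"}     # dogs
C = {"felix"}   # cats
W = set()       # nothing has its day

# Mistaken: Ax((Dx & Cx) -> Wx); correct: Ax((Dx v Cx) -> Wx)
wrong = all(not (x in D and x in C) or x in W for x in U)
right = all(not (x in D or x in C) or x in W for x in U)

print(wrong)  # True, vacuously: nothing is both a dog and a cat
print(right)  # False: rex and felix do not have their day
```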
This dog and cat case brings out the point that we do not merely cookbook from
ordinary language to formal translations, but rather want truth conditions to match.
And we can make the conditions match for expressions where standard language
does not lie directly on the surface. Thus, consider, 'Only dogs will have their day'.
This does not say that all dogs will have their day. Rather it tells us that if something
has its day, then it is a dog, ∀x(Wx → Dx). Similarly, 'No dogs, except the happy
ones, will have their day' tells us that dogs that are not happy will not have their day,
∀x((Dx ∧ ¬Hx) → ¬Wx). It is tempting to add that the happy dogs will have their
day, but it is not clear that this is part of what we have actually said; 'except' seems
precisely to except members of the specified class from what is said.7
7 It may be that we conventionally use 'except' in contexts where the consequent is reversed for the
excepted class, for example, 'I like all foods except brussels sprouts', where I say it this way because


Further, as in the dog and cat case, sometimes surface language is positively misleading compared to standard readings. Consider, for example, 'if some dog is happy,
it will have its day', and 'if any dog is happy, then they all are'. It is tempting to translate the first ∃x((Dx ∧ Hx) → Wx), but this is not right. All it takes to make
this expression true is something that is not a happy dog (for example, my sock); if
something is not a happy dog, then a branch for the conditional is satisfied, so that the
existentially quantified expression is satisfied. But we want rather to say something
about all dogs: if some (arbitrary) dog is happy it will have its day, so that no
matter what dog you pick, if it is happy, then it will have its day; thus the correct
translation is ∀x((Dx ∧ Hx) → Wx). Similarly, it may be tempting to translate
the 'any' of 'if any dog is happy, then they all are' by the universal quantifier. But
the correct translation is rather ∃x(Dx ∧ Hx) → ∀x(Dx → Hx): if some dog
is happy, then every dog is happy. The best way to approach these cases is to think
directly about the conditions under which the ordinary expressions are true and false,
and to produce formal translations that are true and false under the same conditions.
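The sock counterexample can be checked directly on a hypothetical model: the existential conditional is satisfied by anything that is not a happy dog, while the universal behaves as the English does.

```python
U = {"rex", "sock"}
D = {"rex"}   # dogs
H = {"rex"}   # happy things; rex is a happy dog
W = set()     # nothing has its day

def happy_dog_implies_day(x):
    """(Dx & Hx) -> Wx for a single assignment to x."""
    return not (x in D and x in H) or x in W

wrong = any(happy_dog_implies_day(x) for x in U)  # Ex((Dx & Hx) -> Wx)
right = all(happy_dog_implies_day(x) for x in U)  # Ax((Dx & Hx) -> Wx)

print(wrong)  # True: the sock alone satisfies the conditional
print(right)  # False: rex is a happy dog without his day
```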
For these last cases, however, it is worth noting that when there is pronominal cross-reference, as in 'if some/any P is Q then it has such-and-such features', the statement
translates most naturally with the universal quantifier. But when such cross-reference
is absent, as in 'if some/any P is Q then so-and-so is such-and-such', the statement
translates naturally as a conditional with an existential antecedent. The point is not
that there are no grammatical cues! But cues are not so simple that we can always
simply read from 'some' to the existential quantifier, and from 'any' to the universal.
Perhaps this is sufficient for us to move to the following exercises.
E5.19. Use trees to show that the quantifier-switching alternatives from (AS) are true
and false under the same conditions as their counterparts. Be sure to explain
how your trees have the desired results.

E5.20. Given the following partial interpretation function for Lq , complete the translation for each of the following. (Perhaps these sentences reflect residual
frustration over a Mustang the author owned in graduate school).
U: fo j o is a carg

T 1 : fo j o 2 U and o is a Toyotag
F 1 : fo j o 2 U and o is a Fordg
I do not like brussels sprouts. But, again, it is not clear that I have actually said whether I like them or
not.

CHAPTER 5. TRANSLATION

182

E 1 : fo j o 2 U and o was built in the eightiesg


J 1 : fo j o 2 U and o is a piece of junkg
R1 : fo j o 2 U and o is reliableg
a. Some Ford is a piece of junk.
*b. Some Ford is an unreliable piece of junk.
c. Some Ford built in the eighties is a piece of junk.
d. Some Ford built in the eighties is an unreliable piece of junk.
e. Any Ford is a piece of junk.
f. Any Ford is an unreliable piece of junk.
*g. Any Ford built in the eighties is a piece of junk.
h. Any Ford built in the eighties is an unreliable piece of junk.
i. No reliable car is a piece of junk.
j. No Toyota is an unreliable piece of junk.
*k. If a car is unreliable, then it is a piece of junk.
l. If some Toyota is unreliable, then every Ford is.
m. Only Toyotas are reliable.
n. Not all Toyotas and Fords are reliable.
o. Any car, except for a Ford, is reliable.

E5.21. Given the following partial interpretation function for Lq, complete the translation for each of the following. Assume that Bob is married, and that each
married person has a unique primary spouse in case of more than one.

U: {o | o is a person who is married}
b: Bob
s¹: {⟨m, n⟩ | n is the (primary) spouse of m}
A¹: {o | o ∈ U and o is having an affair}
E¹: {o | o ∈ U and o is employed}
H¹: {o | o ∈ U and o is happy}
L²: {⟨m, n⟩ | m, n ∈ U and m loves n}
M²: {⟨m, n⟩ | m is married to n}
a. Bob's spouse is happy.
*b. Someone is married to Bob.
c. Anyone who loves their spouse is happy.
d. Nobody who is happy and loves their spouse is having an affair.
e. Someone is happy just in case they are employed.
f. Someone is happy just in case someone is employed.
g. Some happy people have affairs, and some do not.
*h. Anyone who loves and is loved by their spouse is happy, though some are not
employed.
i. Only someone who loves their spouse and is employed is happy.
j. Anyone who is unemployed and whose spouse is having an affair is unhappy.

k. People who are unemployed and people whose spouse is having an affair are
unhappy.
*l. Anyone married to Bob is happy if Bob is not having an affair.
m. Anyone married to Bob is happy only if Bob is employed and is not having
an affair.
n. If Bob is having an affair, then everyone married to him is unhappy, and
nobody married to him loves him.
o. Only unemployed people and unhappy people have affairs, but if someone
loves and is loved by their spouse, then they are happy unless they are unemployed.


E5.22. Produce a good quantificational translation for each of the following. You
should produce a single interpretation function with application to all of the
sentences. Let U be the set of all animals.
a. Not all animals make good pets.
b. Dogs and cats make good pets.
c. Some dogs are ferocious and make good pets, but no cat is both.
d. No ferocious animal makes a good pet, unless it is a dog.
e. No ferocious animal makes a good pet, unless Lassie is both.
f. Some, but not all good pets are dogs.
g. Only dogs and cats make good pets.
h. Not all dogs and cats make good pets, but some do.
i. If Lassie does not make a good pet, then the only good pet is a cat that is
ferocious, or a dog that is not.
j. A dog or cat makes a good pet if and only if it is not ferocious.

5.3.3 Overlapping Quantifiers

The full power of our quantificational languages emerges only when we allow one
quantifier to appear in the scope of another.8 So let us turn to some cases of this sort.
First, let U be the set of all people, and suppose the intended interpretation of L² is
{⟨m, n⟩ | m, n ∈ U, and m loves n}. Say we want to translate,
(1) Everyone loves everyone.
(2) Someone loves someone.
(3) Everyone loves someone.
(4) Everyone is loved by someone.
(5) Someone loves everyone.
8 Aristotle's categorical logic is capable of handling simple A, E, I, and O sentences; consider any
experience you may have had with Venn diagrams. But you will not be able to make his logic, or
such diagrams, apply to the full range of cases that follow (see note 6)!


(6) Someone is loved by everyone.


First, you should be clear how each of these differs from the others. In particular,
it is enough for (4), 'everyone is loved by someone', that for each person there is a
lover of them, perhaps their mother (or themselves); but for (6), 'someone is loved
by everyone', we need some one person, say Elvis, whom everyone loves. Similarly, it
is enough for (3), 'everyone loves someone', that each person loves some person,
perhaps their mother (or themselves); but for (5), 'someone loves everyone', we need
some particularly loving individual, say Mother Theresa, who loves everyone.
The first two are straightforward. ∀x∀yLxy is read 'for any x and any y, x
loves y'; it is true just in case everyone loves everyone.

(AT) [tree: Id[∀x∀yLxy] at (1) branches by ∀x into Id(x|m)[∀yLxy], Id(x|n)[∀yLxy], … at (2); each of these branches by ∀y into tips Id(x|m, y|m)[Lxy], Id(x|m, y|n)[Lxy], … at (3), one tip for each pair of members of U]
The branch at (1) is satisfied just in case all of the branches at (2) are satisfied.
And all of the branches at (2) are satisfied just in case all of the branches at (3) are
satisfied. But every combination of objects appears at the branch tips. So ∀x∀yLxy
is satisfied and true just in case for any pair ⟨m, n⟩ ∈ U², ⟨m, n⟩ is in the interpretation
of L. Notice that the order of the quantifiers and variables makes no difference: for
a given interpretation I, ∀x∀yLyx, ∀y∀xLxy, and ∀y∀xLyx are all satisfied and
true under the same condition, just when every ⟨m, n⟩ ∈ U² is a member of I[L].
The case for the second sentence is similar. ∃x∃yLxy is read 'there is an x and
there is a y such that x loves y'; it is true just in case some ⟨m, n⟩ ∈ U² is a member
of I[L], just in case someone loves someone. The tree is like (AT) above, but with
∃ uniformly substituted for ∀. Then the formula at (1) is satisfied iff a branch at (2) is
satisfied; iff a branch at (3) is satisfied; iff someone loves someone. Again the order
of the quantifiers does not matter.
The next cases are more interesting. ∀x∃yLxy is read 'for any x there is a y
such that x loves y'; it is true just in case everyone loves someone.

(AU) [tree: Id[∀x∃yLxy] at (1) branches by ∀x into Id(x|m)[∃yLxy], Id(x|n)[∃yLxy], … at (2); each of these branches by ∃y into tips Id(x|m, y|m)[Lxy], Id(x|m, y|n)[Lxy], … at (3)]
The branch at (1) is satisfied just in case each of the branches at (2) is satisfied. And
a branch at (2) is satisfied just in case at least one of the corresponding branches at
(3) is satisfied. So ∀x∃yLxy is satisfied just in case, no matter which o you pick,
there is some p such that o loves p, so that everyone loves someone. This
time, the order of the variables makes a difference: thus, ∀x∃yLyx translates
sentence (4). The picture is like the one above, with Lyx uniformly replacing Lxy.
This expression is satisfied just in case no matter which o you pick, there is some p
such that p loves o, so that everyone is loved by someone.
Finally, ∃x∀yLxy is read 'there is an x such that for any y, x loves y'; it is
satisfied and true just in case someone loves everyone.
(AV) [tree: Id[∃x∀yLxy] at (1) branches by ∃x into Id(x|m)[∀yLxy], Id(x|n)[∀yLxy], … at (2); each of these branches by ∀y into tips Id(x|m, y|m)[Lxy], Id(x|m, y|n)[Lxy], … at (3)]

The branch at (1) is satisfied just in case some branch at (2) is satisfied. And a branch
at (2) is satisfied just in case each of the corresponding branches at (3) is satisfied.
So ∃x∀yLxy is satisfied and true just in case there is some o ∈ U such that, no
matter what p ∈ U you pick, ⟨o, p⟩ ∈ I[L], just when there is someone who loves
everyone. If we switch Lyx for Lxy, we get a tree for ∃x∀yLyx; this formula is true
just when someone is loved by everyone. Switching the order of the quantifiers and
variables makes no difference when quantifiers are the same. But it matters crucially
when quantifiers are different!
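That order matters with mixed quantifiers can be verified on a two-element model of the loving relation (a hypothetical extension): each person loves someone, yet no one person is loved by everyone.

```python
U = ["a", "b"]
L = {("a", "b"), ("b", "a")}   # each loves the other; neither loves themselves

every_loves_some = all(any((x, y) in L for y in U) for x in U)   # AxEyLxy
some_loved_by_all = any(all((x, y) in L for x in U) for y in U)  # EyAxLxy

print(every_loves_some)   # True: a loves b, and b loves a
print(some_loved_by_all)  # False: no single y is loved by both a and b
```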
Let us see what happens when, as before, we broaden the interpretation function
so that U includes all physical objects.

II  U: {o | o is a physical object}
    P¹: {o | o ∈ U and o is a person}
    L²: {⟨m, n⟩ | m, n ∈ U, and m loves n}

Let us set out to translate the same sentences as before.
For 'everyone loves everyone', where we are talking about people, ∀x∀yLxy
will not do. ∀x∀yLxy requires that each member of U love all the other members
of U, but then we are requiring that my left sock love my computer, and so forth.
What we need is rather ∀x∀y((Px ∧ Py) → Lxy). With the last branch tips
omitted, the tree is as follows.
(AW) [tree: Id[∀x∀y((Px ∧ Py) → Lxy)] at (1) branches by ∀x into Id(x|m)[∀y((Px ∧ Py) → Lxy)], Id(x|n)[∀y((Px ∧ Py) → Lxy)], … at (2); each branches by ∀y into conditionals Id(x|…, y|…)[(Px ∧ Py) → Lxy] at (3); each conditional splits into its antecedent Id(x|…, y|…)[Px ∧ Py] and its consequent Id(x|…, y|…)[Lxy] at (4)]

The formula at (1) is satisfied iff all the branches at (2) are satisfied; all the branches
at (2) are satisfied just in case all the branches at (3) are satisfied. And, for this to
be the case, there can be no pair at (4) where the top is satisfied and the bottom is
not. That is, there can be no o and p such that o and p are people, o, p ∈ I[P], but o
does not love p, ⟨o, p⟩ ∉ I[L]. The idea is very much as before: with the universal
quantifiers, we select the things we want to talk about in the antecedent, we make
sure that x and y pick out people, and then say what we want to say about the things
in the consequent.
The case for 'someone loves someone' also works on close analogy with what
has gone before. In this case, we do not use the conditional. If the quantifiers in the
above tree were existential, all we would need is one branch at (2) to be satisfied,
and one branch at (3) satisfied. And, for this, all we would need is one thing that is
not a person, so that the top branch for the conditional is N, and the conditional
is therefore S. On the analogy with what we have seen before, what we want is
something like ∃x∃y((Px ∧ Py) ∧ Lxy): there are some people x and y such that
x loves y.
(AX) [tree: Id[∃x∃y((Px ∧ Py) ∧ Lxy)] at (1) branches by ∃x into Id(x|m)[∃y((Px ∧ Py) ∧ Lxy)], Id(x|n)[∃y((Px ∧ Py) ∧ Lxy)], … at (2); each branches by ∃y into conjunctions Id(x|…, y|…)[(Px ∧ Py) ∧ Lxy] at (3); each conjunction splits into Id(x|…, y|…)[Px ∧ Py] and Id(x|…, y|…)[Lxy] at (4)]

The formula at (1) is satisfied iff at least one branch at (2) is satisfied. At least one
branch at (2) is satisfied just in case at least one branch at (3) is satisfied. And for this
to be the case, we need some branch pair at (4) where both the top and the bottom
are satisfied: some o and p such that o and p are people, o, p ∈ I[P], and o loves p,
⟨o, p⟩ ∈ I[L].
In these cases, the order of the quantifiers and variables does not matter. But order
matters when quantifiers are mixed. Thus, for 'everyone loves someone', ∀x(Px →
∃y(Py ∧ Lxy)) is good: if any thing x is a person, then there is some y such that
y is a person and x loves y.

(AY) [tree: Id[∀x(Px → ∃y(Py ∧ Lxy))] at (1) branches by ∀x into conditionals Id(x|m)[Px → ∃y(Py ∧ Lxy)], Id(x|n)[Px → ∃y(Py ∧ Lxy)], … at (2); each conditional splits into its antecedent Id(x|…)[Px] and its consequent Id(x|…)[∃y(Py ∧ Lxy)] at (3); each consequent branches by ∃y into tips Id(x|…, y|…)[Py ∧ Lxy] at (4)]

The formula at (1) is satisfied just in case all the branches at (2) are satisfied. All the
branches at (2) are satisfied just in case no pair at (3) has the top satisfied and the
bottom not. If x is assigned to something that is not a person, the branch at (2) is
satisfied trivially. But where the assignment to x is some o that is a person, a bottom
branch at (3) is satisfied just in case at least one of the corresponding branches at
(4) is satisfied, just in case there is some p such that p is a person and o loves p.
Notice, again, that the universal quantifier is associated with a conditional, and the
existential with a conjunction. Similarly, we translate 'everyone is loved by someone',
∀x(Px → ∃y(Py ∧ Lyx)). The tree is as above, with Lxy uniformly replaced by
Lyx.
For 'someone loves everyone', ∃x(Px ∧ ∀y(Py → Lxy)) is good: there is an
x such that x is a person, and for any y, if y is a person, then x loves y.
(AZ) [tree: Id[∃x(Px ∧ ∀y(Py → Lxy))] at (1) branches by ∃x into conjunctions Id(x|m)[Px ∧ ∀y(Py → Lxy)], Id(x|n)[Px ∧ ∀y(Py → Lxy)], … at (2); each conjunction splits into Id(x|…)[Px] and Id(x|…)[∀y(Py → Lxy)] at (3); the latter branches by ∀y into tips Id(x|…, y|…)[Py → Lxy] at (4)]

The formula at (1) is satisfied just in case some branch at (2) is satisfied. A branch at
(2) is satisfied just in case the corresponding pair at (3) is satisfied. The top of such
a pair is satisfied when the assignment to x is some o ∈ I[P]; the bottom is satisfied
just in case all of the corresponding branches at (4) are satisfied, just in case any p
is such that if it is a person, then o loves it. So there has to be an o that loves every
p. Similarly, you should be able to see that ∃x(Px ∧ ∀y(Py → Lyx)) is good for
'someone is loved by everyone'.
Again, it may have occurred to you already that there are other options for
these sentences. This time natural alternatives are not for quantifier switching, but
for quantifier placement. For 'someone loves everyone' we have given ∃x(Px ∧
∀y(Py → Lxy)), with the universal quantifier on the inside. However, ∃x∀y(Px ∧
(Py → Lxy)) would do as well. As a matter of strategy, it may be best to keep
quantifiers as close as possible to that which they modify. However, we can show
that, in this case, pushing the quantifier across that which it does not bind leaves the
truth condition unchanged. Let us make the point generally. Say Q(v) is a formula
with variable v free, but P is one in which v is not free. We are interested in the
relation between (P ∧ ∀vQ(v)) and ∀v(P ∧ Q(v)). Here are the trees.
(BA) [tree: Id[∀v(P ∧ Q(v))] at (1) branches by ∀v into conjunctions Id(v|m)[P ∧ Q(v)], Id(v|n)[P ∧ Q(v)], … at (2); each splits into Id(v|…)[P] and Id(v|…)[Q(v)] at (3)]

and,

(BB) [tree: Id[P ∧ ∀vQ(v)] at (4) splits into Id[P] and Id[∀vQ(v)] at (5); the latter branches by ∀v into tips Id(v|m)[Q(v)], Id(v|n)[Q(v)], … at (6)]
The key is this: since P has no free instances of v, for any o ∈ U, Id[P] is satisfied
just in case Id(v|o)[P] is satisfied; for if v is not free in P, the assignment to v makes
no difference to the evaluation of P. In (BA), the formula at (1) is satisfied iff each of
the branches at (2) is satisfied; and each of the branches at (2) is satisfied iff each of
the branches at (3) is satisfied. In (BB) the formula at (4) is satisfied iff both branches
at (5) are satisfied. The bottom requires that all the branches at (6) are satisfied. But
the branches at (6) are just like the bottom branches from (3) in (BA). And given the
equivalence between Id[P] and Id(v|o)[P], the top at (5) is satisfied iff each of the
tops at (3) is satisfied. So the one formula is satisfied iff the other is as well. Notice
that this only works because v is not free in P. So you can move the quantifier past
the P only if it does not bind a variable free in P!
Parallel reasoning would work for any combination of ∀ and ∃, with ∧, ∨, and →.
That is, supposing that v is not free in P, each of the following pairs is equivalent.

(BC)
  ∀v(P ∧ Q(v))    P ∧ ∀vQ(v)
  ∃v(P ∧ Q(v))    P ∧ ∃vQ(v)
  ∀v(P ∨ Q(v))    P ∨ ∀vQ(v)
  ∃v(P ∨ Q(v))    P ∨ ∃vQ(v)
  ∀v(P → Q(v))    P → ∀vQ(v)
  ∃v(P → Q(v))    P → ∃vQ(v)

The comparison between ∀y(Px ∧ (Py → Lxy)) and Px ∧ ∀y(Py → Lxy)
is an instance of the first pair. In effect, then, we can push the quantifier into the
parentheses, across a formula to which the quantifier does not apply, and pull it
out, across a formula to which the quantifier does not apply, without changing the
conditions under which the formula is satisfied.
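The (BC) equivalences can be spot-checked by brute force: treat P as a fixed truth value (since v is not free in it) and Q as an arbitrary extension over a small universe. A sketch with hypothetical names:

```python
from itertools import chain, combinations

U = ["a", "b"]
subsets = [set(c) for c in chain.from_iterable(
    combinations(U, r) for r in range(len(U) + 1))]

for P in (True, False):
    for Q in subsets:
        # Av(P & Q(v))  vs  P & AvQ(v)
        assert all(P and v in Q for v in U) == (P and all(v in Q for v in U))
        # Ev(P & Q(v))  vs  P & EvQ(v)
        assert any(P and v in Q for v in U) == (P and any(v in Q for v in U))
        # Av(P -> Q(v)) vs  P -> AvQ(v)
        assert all(not P or v in Q for v in U) == (not P or all(v in Q for v in U))
print("the checked (BC) pairs agree for every P and Q over U")
```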
But we need to be more careful when the order of P and Q(v) is reversed. Some
cases work the way we expect. Consider ∀v(Q(v) ∧ P) and (∀vQ(v) ∧ P).
(BD) [tree: Id[∀v(Q(v) ∧ P)] at (1) branches by ∀v into conjunctions Id(v|m)[Q(v) ∧ P], Id(v|n)[Q(v) ∧ P], … at (2); each splits into Id(v|…)[Q(v)] and Id(v|…)[P] at (3)]

and,

(BE) [tree: Id[∀vQ(v) ∧ P] at (4) splits into Id[∀vQ(v)] and Id[P] at (5); the former branches by ∀v into tips Id(v|m)[Q(v)], Id(v|n)[Q(v)], … at (6)]

In this case, the reasoning is as before. In (BD), the formula at (1) is satisfied iff all the
branches at (2) are satisfied; and all the branches at (2) are satisfied iff all the branches
at (3) are satisfied. And in (BE), the formula at (4) is satisfied iff both branches at
(5) are satisfied. And the top at (5) is satisfied iff all the branches at (6) are satisfied.
But the branches at (6) are like the tops at (3). And given the equivalence between
Id[P] and Id(v|o)[P], the bottom at (5) is satisfied iff the bottoms at (3) are satisfied.
So, again, the formulas are satisfied under the same conditions. And similarly for
different combinations of the quantifiers ∀ or ∃ and the operators ∧ or ∨. Thus our
table extends as follows.

(BF)
  ∀v(Q(v) ∧ P)    (∀vQ(v) ∧ P)
  ∃v(Q(v) ∧ P)    (∃vQ(v) ∧ P)
  ∀v(Q(v) ∨ P)    (∀vQ(v) ∨ P)
  ∃v(Q(v) ∨ P)    (∃vQ(v) ∨ P)

We can push a quantifier into the front part of a parenthesis, or pull it out, as above.
But the case is different when the main operator is →. Consider trees for ∀v(Q(v)
→ P) and, noting the quantifier shift, for (∃vQ(v) → P).
(BG) [tree: Id[∀v(Q(v) → P)] at (1) branches by ∀v into conditionals Id(v|m)[Q(v) → P], Id(v|n)[Q(v) → P], … at (2); each splits into Id(v|…)[Q(v)] and Id(v|…)[P] at (3)]

and

(BH) [tree: Id[∃vQ(v) → P] at (4) splits into Id[∃vQ(v)] and Id[P] at (5); the former branches by ∃v into tips Id(v|m)[Q(v)], Id(v|n)[Q(v)], … at (6)]

The formula at (4) is satisfied so long as at (5) the upper branch is N or the bottom is
S; and the top is N iff no branch at (6) is S. The formula at (1) is satisfied iff all the
branches at (2) are satisfied. And all the branches at (2) are satisfied iff there is no
S/N pair at (3). But, as before, the tops at (3) are the same as the branches at (6). And
given the match between Id[P] and Id(v|o)[P], the bottoms at (3) are the same as the
bottom at (5). So an S/N pair at (3) requires that a branch at (6) is S and the bottom
at (5) is N. And there is no S/N pair at (3) just in case no branch at (6) is S or the
bottom at (5) is S. So ∀v(Q(v) → P) and (∃vQ(v) → P) are satisfied under the
same conditions. By similar reasoning, we are left with the following equivalences
to complete our table.

(BI)
  ∀v(Q(v) → P)    (∃vQ(v) → P)
  ∃v(Q(v) → P)    (∀vQ(v) → P)

When a universal goes into the antecedent of a conditional, it flips to an existential.
And when an existential quantifier goes into the antecedent of a conditional, it flips
to a universal. And similarly in the other direction.
Here is an explanation for what is happening: a universal quantifier outside
parentheses requires that each inner conditional branch is satisfied; with tips for the
consequent P the same, this requires that the consequent is S or every tip for the
antecedent is N. But with a quantifier pushed in, the resultant conditional A → P is
satisfied when the antecedent is N or the consequent is S; and the original requirement
that all the antecedent tips be N corresponds to the requirement that an existential A is N. Similarly, an existential quantifier outside parentheses requires that some
inner conditional branch is satisfied; with tips for the consequent P the same, this
requires that the consequent is S or some tip for the antecedent is N. But with a quantifier pushed in, the resultant conditional A → P is satisfied when the antecedent
is N or the consequent is S; and the original requirement that some antecedent tip is
N corresponds to the condition that a universal A is N. This case differs from others
insofar as the conditional branch is S when its antecedent tip is N. In other cases, the
condition for satisfaction does not flip, so that the branch is S when the tip is S. So
quantifier movement is mostly as one would expect. The place for caution is when a
quantifier comes from or goes into the antecedent of a conditional.9
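The flip recorded in (BI) can likewise be confirmed mechanically over a small universe (a hypothetical setup): pushing a universal into an antecedent matches an existential outside, and conversely.

```python
from itertools import chain, combinations

U = ["a", "b"]
subsets = [set(c) for c in chain.from_iterable(
    combinations(U, r) for r in range(len(U) + 1))]

for P in (True, False):
    for Q in subsets:
        # Av(Q(v) -> P)  vs  (EvQ(v) -> P)
        assert all(v not in Q or P for v in U) == \
               (not any(v in Q for v in U) or P)
        # Ev(Q(v) -> P)  vs  (AvQ(v) -> P)
        assert any(v not in Q or P for v in U) == \
               (not all(v in Q for v in U) or P)
print("the (BI) flips hold on this model")
```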
Return to 'everybody loves somebody'. We gave as a translation ∀x(Px →
∃y(Py ∧ Lxy)). But ∀x∃y(Px → (Py ∧ Lxy)) does as well. To see this, notice that
the immediate subformula Px → ∃y(Py ∧ Lxy) is of the form P → ∃vQ(v),
where P has no free instance of the quantified variable y. The quantifier is in the
consequent of the conditional, so Px → ∃y(Py ∧ Lxy) is equivalent to ∃y(Px →
(Py ∧ Lxy)). So the larger formula ∀x(Px → ∃y(Py ∧ Lxy)) is equivalent to
∀x∃y(Px → (Py ∧ Lxy)). And similarly in other cases. Officially, there is no
reason to prefer one option over the other. Informally, however, there is perhaps less
room for confusion when we keep quantifiers relatively close to the expressions they
modify. On this basis, ∀x(Px → ∃y(Py ∧ Lxy)) is to be preferred. If you have
followed this discussion, you are doing well and should be in a good position to
think about the following exercises.
E5.23. Use trees to explain one of the equivalences in table (BC), and one of the
equivalences in (BF), for an operator other than ∧. Then use trees to explain
the second equivalence in (BI). Be sure to explain how your trees justify the
results.

E5.24. Explain why we have not listed quantifier placement equivalences matching
∀v(P ↔ Q(v)) with (P ↔ ∀vQ(v)). Hint: consider ∀v(P ↔ Q(v)) as
an abbreviation of ∀v((P → Q(v)) ∧ (Q(v) → P)); from trees, you can
see that this is equivalent to ∀v(P → Q(v)) ∧ ∀v(Q(v) → P). Now,
what is the consequence of quantifier placement difficulties for →? Would it
work if the quantifier did not flip?

E5.25. Given the following partial interpretation function for Lq, complete the translation for each of the following. (The last generates a famous paradox: can
a barber shave himself?)

U: {o | o is a person}
b: Bob
B¹: {o | o ∈ U and o is a barber}
M¹: {o | o ∈ U and o is a man}
S²: {⟨m, n⟩ | m, n ∈ U and m shaves n}

9 Thus, for example, we should expect quantifier flipping when pushing into expressions ∀v(P ↓
Q(v)) or ∀v(Q(v) ↓ P), with ↓ a neither-nor operator, true only when both sides are false. And this is
just so: the universal expression is satisfied only when all the inner branches are satisfied; and the inner
branches are satisfied just when all the tips are not. And this is like the condition from the existential
quantifier in ∃vQ(v) ↓ P or P ↓ ∃vQ(v). And similarly for existentially quantified expressions with this
operator.
a. Bob shaves himself.
b. Everyone shaves everyone.
c. Someone shaves everyone.
d. Everyone is shaved by someone.
e. Someone is shaved by everyone.
f. Not everyone shaves themselves.
*g. Any man is shaved by someone.
h. Some man shaves everyone.
i. No man is shaved by all barbers.
*j. Any man who shaves everyone is a barber.
k. If someone shaves all men, then they are a barber.
l. If someone shaves everyone, then they shave themselves.
m. A barber shaves anyone who does not shave themselves.
*n. A barber shaves only people who do not shave themselves.
o. A barber shaves all and only people who do not shave themselves.
E5.26. Given an extended version of L<NT and the standard interpretation N1 as below,
complete the translation for each of the following. Recall that < and = are
relation symbols, where S, × and + are function symbols. As we shall see
shortly, it is possible to define E and P in the primitive vocabulary. Also the
last sentence states the famous Goldbach conjecture, so far unproved!

U: N
∅: zero
S: {⟨m, n⟩ | m, n ∈ N, and n is the successor of m}
+: {⟨⟨m, n⟩, o⟩ | m, n, o ∈ N, and m plus n equals o}
×: {⟨⟨m, n⟩, o⟩ | m, n, o ∈ N, and m times n equals o}
<: {⟨m, n⟩ | m, n ∈ N, and m is less than n}
E¹: {o | o ∈ N and o is even}
P¹: {o | o ∈ N and o is prime}

*a. One plus one equals two.


b. Three is greater than two.
c. There is an even prime number.
d. Zero is less than or equal to every number.
e. There is a number less than or equal to every other.
f. For any prime, there is one greater than it.
*g. Any odd (non-even) number is equal to the successor of some even number.
h. Some even number is not equal to the successor of any odd number.
i. A number x is even iff it is equal to two times some y.
j. A number x is odd if it is equal to two times some y plus one.
k. Any odd number is equal to the sum of an odd and an even.
l. Any even number not equal to zero is the sum of one odd with another.
*m. The sum of one odd with another odd is even.
n. No odd number is greater than every prime.
o. Any even number greater than two is equal to the sum of two primes.
E5.27. Produce a good quantificational translation for each of the following. In this
case you should provide an interpretation function for the sentences. Let U be
the set of people, and, assuming that each has a unique best friend, implement
a 'best friend of' function.

a. Bob's best friend likes all New Yorkers.


b. Some New Yorker likes all Californians.
c. No Californian likes all New Yorkers.
d. Any Californian likes some New Yorker.
e. Californians who like themselves, like at least some people who do not.
f. New Yorkers who do not like themselves, do not like anybody.
g. Nobody likes someone who does not like them.
h. There is someone who dislikes every New Yorker, and is liked by every Californian.
i. Anyone who likes themselves and dislikes every New Yorker, is liked by
every Californian.
j. Everybody who likes Bob's best friend likes some New Yorker who does not
like Bob.

5.3.4 Equality

We complete our discussion of translation by turning to some important applications
for equality. Adopt an interpretation function with U the set of people and,

b: Bob
c: Bob
f¹: {⟨m, n⟩ | m, n ∈ U, where n is the father of m}
H¹: {o | o ∈ U and o is a happy person}

(Maybe Bob's friends call him Cronk.) The simplest applications for = assert the
identity of individuals. Thus, for example, b = c is satisfied insofar as ⟨Id[b], Id[c]⟩ ∈
I[=]. Similarly, ∃x(b = f¹x) is satisfied just in case Bob is someone's father. And,
on the standard interpretation of L<NT, ∃x((x + x) = (x × x)) is satisfied insofar as,
say, ⟨Id(x|2)[x + x], Id(x|2)[x × x]⟩ ∈ I[=], that is, ⟨4, 4⟩ ∈ I[=]. If this last case is
not clear, think about it on a tree.
We get to an interesting class of cases when we turn to quantity expressions.
Thus, for example, we can easily say at least one person is happy, ∃xHx. But
notice that neither ∃xHx ∧ ∃yHy nor ∃x∃y(Hx ∧ Hy) work for at least two people
are happy. For the first, it should be clear that each conjunct is satisfied, so that the
conjunction is satisfied, so long as there is at least one happy person. And similarly
for the second. To see this in a simple case, suppose Bob, Sue, and Jim are the only
people in U. Then the existentials for ∃x∃y(Hx ∧ Hy) result in nine branches of the
following sort,

(BJ) [branch diagram: Id(x|m, y|n)[Hx ∧ Hy] divides by ∧ into Id(x|m, y|n)[Hx] and Id(x|m, y|n)[Hy]]

for some individuals m and n. Just one of these branches has to be satisfied in order
for the main sentence to be satisfied and true. Clearly none of the tips are satisfied
if none of Bob, Sue, or Jim is happy; then the branches are N and ∃x∃y(Hx ∧ Hy)
is N as well. But suppose just one of them, say Sue, is happy. Then on the branch
for d(x|Sue, y|Sue) both Hx and Hy are satisfied! Thus the conjunction is satisfied, and
the existential is satisfied as well. So ∃x∃y(Hx ∧ Hy) does not require that at least
two people are happy. The problem, again, is that the same person might satisfy both
conjuncts at once.
But this case points the way to a good translation for at least two people are
happy. We get the right result with ∃x∃y((Hx ∧ Hy) ∧ ∼x = y). Now, in our
simple example, the existentials result in nine branches as follows,

(BK) [branch diagram: Id(x|m, y|n)[(Hx ∧ Hy) ∧ ∼x = y] divides by ∧ into Id(x|m, y|n)[Hx ∧ Hy], which divides in turn into Id(x|m, y|n)[Hx] and Id(x|m, y|n)[Hy], and into Id(x|m, y|n)[∼x = y], which reduces to Id(x|m, y|n)[x = y]]

The sentence is satisfied and true if at least one branch is satisfied. Now in the
case where just Sue is happy, on the branch with d(x|Sue, y|Sue) both Hx and Hy are
satisfied as before. But this branch has x = y satisfied; so ∼x = y is not satisfied,
and the branch as a whole fails. But suppose both Bob and Sue are happy. Then on
the branch with d(x|Bob, y|Sue) both Hx and Hy are satisfied; but this time, x = y is
not satisfied; so ∼x = y is satisfied, and the branch is satisfied, so that the whole
sentence, ∃x∃y((Hx ∧ Hy) ∧ ∼x = y), is satisfied and true. That is, the sentence
is satisfied and true just when the happy people assigned to x and y are distinct,
just when there are at least two happy people. On this pattern, you should be able to
see how to say there are at least three happy people, and so forth.
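The tree evaluates the existentials by running through all assignments to x and y. Over a finite universe a program can check the same thing by brute force; here is a small sketch of that idea (my own illustration, not part of the text), using the Bob/Sue/Jim universe:

```python
from itertools import product

U = ["Bob", "Sue", "Jim"]  # the universe of the running example

def at_least_two_happy(H):
    # corresponds to ExEy((Hx & Hy) & ~x=y): some pair of distinct
    # individuals are both happy
    return any(H(x) and H(y) and x != y for x, y in product(U, repeat=2))

# With only Sue happy the sentence is false; with Bob and Sue happy, true.
print(at_least_two_happy(lambda o: o == "Sue"))            # False
print(at_least_two_happy(lambda o: o in ("Bob", "Sue")))   # True
```

In the first call, the pair (Sue, Sue) fails the x ≠ y conjunct, just as the corresponding branch fails on the tree.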
Now suppose we want to say, at most one person is happy. We have, of course,
learned a couple of ways to say nobody is happy, ∀x∼Hx and ∼∃xHx. But for
at most one we need something like, ∀x(Hx → ∀y(Hy → x = y)). For
this, in our simplified case, the universal quantifier yields three branches of the sort,
Id(x|m)[Hx → ∀y(Hy → x = y)]. The beginning of the branch is as follows,

(BL) [branch diagram: at (1), Id(x|m)[Hx → ∀y(Hy → x = y)] divides by → into Id(x|m)[Hx] and Id(x|m)[∀y(Hy → x = y)] at (2); the universal then divides by ∀y into Id(x|m, y|Bob)[Hy → x = y], Id(x|m, y|Sue)[Hy → x = y], and Id(x|m, y|Jim)[Hy → x = y] at (3)]

The universal ∀x(Hx → ∀y(Hy → x = y)) is satisfied and true if and only if all
the conditional branches at (1) are satisfied. And the branches at (1) are satisfied so
long as there is no S/N pair at (2). This is of course so if nobody is happy, so that the
top at (2) is never satisfied. But suppose m is a happy person, say, Sue, and the top
at (2) is satisfied. The bottom comes out S so long as Sue is the only happy person,
so that any happy y is identical to her. In this case, again, we do not get an S/N
pair. But suppose Jim, say, is also happy; then the very bottom branch at (3) fails;
so the universal at (2) is N; so the conditional at (1) is N; and the entire sentence is
N. Suppose x is assigned to a happy person; in effect, ∀y(Hy → x = y) limits
the range of happy things, telling us that anything happy is it. We get at most two
people are happy with ∀x∀y((Hx ∧ Hy) → ∀z(Hz → (x = z ∨ y = z))), if
some things are happy, then anything that is happy is one of them. And similarly in
other cases.
To say exactly one person is happy, it is enough to say at least one person is
happy, and at most one person is happy. Thus, using what we have already done,
∃xHx ∧ ∀x(Hx → ∀y(Hy → x = y)) does the job. But we can use the
limiting strategy with the universal quantifier more efficiently. Thus, for example,
if we want to say, Bob is the only happy person, we might try Hb ∧ ∀y(Hy →
b = y), Bob is happy, and every happy person is Bob. Similarly, for exactly
one person is happy, ∃x(Hx ∧ ∀y(Hy → x = y)) is good. We say that there is a
happy person, and that all the happy people are identical to it. For exactly two people
are happy, ∃x∃y(((Hx ∧ Hy) ∧ ∼x = y) ∧ ∀z(Hz → ((x = z) ∨ (y = z)))) does


the job: there are at least two happy people, and anything that is a happy person is
identical to one of them.
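As with 'at least two,' the limiting clause can be checked by brute-force evaluation over the finite universe. This sketch (again my own, not the text's) mirrors the quantifier structure directly:

```python
U = ["Bob", "Sue", "Jim"]  # the universe of the running example

def at_most_one_happy(H):
    # Ax(Hx -> Ay(Hy -> x = y)): any happy x is the only happy thing
    return all(not H(x) or all(not H(y) or x == y for y in U) for x in U)

def exactly_one_happy(H):
    # Ex(Hx & Ay(Hy -> x = y)): there is a happy x, and all happy
    # things are identical to it
    return any(H(x) and all(not H(y) or x == y for y in U) for x in U)

print(at_most_one_happy(lambda o: False))              # True (vacuously)
print(exactly_one_happy(lambda o: o == "Sue"))         # True
print(exactly_one_happy(lambda o: o in ("Bob", "Sue")))  # False
```

Note that 'at most one' comes out true when nobody is happy, while 'exactly one' does not; this is just the difference the conjoined existential makes.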
Phrases of the sort 'the such-and-such' are definite descriptions. Perhaps it
is natural to think 'the such-and-such is so-and-so' fails when there is more than
one such-and-such. Similarly, phrases of the sort 'the such-and-such is so-and-so'
seem to fail when nothing is such-and-such. Thus, for example, neither 'The desk
at CSUSB has graffiti on it' nor 'the present king of France is bald' seem to be true.
The first because the description fails to pick out just one object, and the second
because the description does not pick out any object. Of course, if a description does
pick out just one object, then for the whole to be true the predicate must apply to it. So,
for example, 'The president of the USA is a woman' is not true. There is exactly one
object which is the president of the USA, but it is not a woman. And 'the president
of the USA is a man' is true. In this case, exactly one object is picked out by the
description, and the predicate does apply. Thus, in 'On Denoting,' Bertrand Russell
famously proposes that a statement of the sort 'the P is Q' is true just in case there is
exactly one P and it is Q. On Russell's account, then, where P(x) and Q(x) have variable
x free, and P(v) is like P(x) but with free instances of x replaced by a new variable
v, ∃x((P(x) ∧ ∀v(P(v) → x = v)) ∧ Q(x)) is good: there is a P, it is
the only P, and it is Q. Thus, for example, with the natural interpretation function,
∃x((Px ∧ ∀y(Py → x = y)) ∧ Wx) translates 'the president is a woman.' In a
course on philosophy of language, one might spend a great deal of time discussing
definite descriptions. But in ordinary cases we will simply assume Russell's account
for translating expressions of the sort, 'the P is Q.'
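Russell's truth conditions can likewise be tested by enumeration. In this sketch (my own illustration; the predicates and the numeric universe are invented for the example), the description is true only when existence and uniqueness both hold and the predicate applies:

```python
def the_P_is_Q(P, Q, U):
    # Russell: Ex((Px & Ay(Py -> x = y)) & Qx) -- there is a P, it is
    # the only P, and it is Q
    return any(P(x) and all(not P(y) or x == y for y in U) and Q(x)
               for x in U)

U = [1, 2, 3]
# "the even number in U is greater than 1": a unique P, and Q holds of it
print(the_P_is_Q(lambda n: n % 2 == 0, lambda n: n > 1, U))   # True
# uniqueness fails (1 and 3 are both odd), so the description is not true
print(the_P_is_Q(lambda n: n % 2 == 1, lambda n: n > 0, U))   # False
# existence fails (nothing is 5), so the description is not true either
print(the_P_is_Q(lambda n: n == 5, lambda n: True, U))        # False
```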
Finally, notice that equality can play a role in exception clauses. This is particularly
important when making general comparisons. Thus, for example, if we want
to say that zero is smaller than every other integer, with the standard interpretation
N of L<NT, ∀x(∅ < x) is a mistake. This formula is satisfied only if zero is less than
zero! What we want is rather, ∀x(∼x = ∅ → ∅ < x). Similarly, if we want to
say that there is a person taller than every other, we would not use ∃x∀yTxy where
Txy when x is taller than y. This would require that the tallest person be taller than
herself! What we want is rather, ∃x∀y(∼x = y → Txy).
Observe that relations of this sort may play a role in definite descriptions. Thus it
seems natural to talk about the smallest integer, or the tallest person. We might therefore
additionally assert uniqueness with something like, ∃x(x is taller than every other
∧ ∀z(z is taller than every other → x = z)).10 However, we will not usually
add the second clause, insofar as uniqueness follows automatically in these cases

10 That is, ∃x(∀y(∼x = y → Txy) ∧ ∀z(∀y(∼z = y → Tzy) → x = z)).


from the initial claim, ∃x∀y(∼x = y → Txy), together with the premise that
taller than (less than) is asymmetric, that ∀x∀y(Txy → ∼Tyx).11 By itself,
∃x∀y(∼x = y → Txy) does not require uniqueness; it says only that there
is a tallest object. When a relation is asymmetric, however, there cannot be multiple
things with the relation to everything else. Thus, in these cases, for The tallest person
is happy it will be sufficient to conjoin a tallest person is happy with asymmetry,
∃x(∀y(∼x = y → Txy) ∧ Hx) ∧ ∀x∀y(Txy → ∼Tyx). Taken together, these
imply all the elements of Russell's account.
E5.28. Given the following partial interpretation function for Lq, complete the translation for each of the following.

U: {o | o is a snake in my yard}
a: Aaalph
G¹: {o | o ∈ U and o is in the grass}
D¹: {o | o ∈ U and o is deadly}
B²: {⟨m,n⟩ | m,n ∈ U and m is bigger than n}
a. There is at least one snake in the grass.
b. There are at least two snakes in the grass.
*c. There are at least three snakes in the grass.
d. There are no snakes in the grass.
e. There is at most one snake in the grass.
f. There are at most two snakes in the grass.
g. There are at most three snakes in the grass.
h. There is exactly one snake in the grass.
i. There are exactly two snakes in the grass.
j. There are exactly three snakes in the grass.
11 If m is taller than everything other than itself, and n is taller than everything other than itself, but
m ≠ n, then m is taller than n and n is taller than m. But this is impossible if the relation is asymmetric.
So only one object can be taller than all the others.


*k. The snake in the grass is deadly.


l. Aaalph is the biggest snake.
*m. Aaalph is bigger than any other snake in the grass.
n. The biggest snake in the grass is deadly.
o. The smallest snake in the grass is deadly.
E5.29. Given L<NT and a function for the standard interpretation as below, complete
the translation for each of the following. Hint: Once you know how to say
a number is odd or even, answers to some exercises will mirror ones from
E5.26.

U: N
∅: zero
S: {⟨m,n⟩ | m,n ∈ N, and n is the successor of m}
+: {⟨⟨m,n⟩,o⟩ | m,n,o ∈ N, and m plus n equals o}
×: {⟨⟨m,n⟩,o⟩ | m,n,o ∈ N, and m times n equals o}
<: {⟨m,n⟩ | m,n ∈ N, and m is less than n}

a. Any number is equal to itself (identity is reflexive).


b. If a number a is equal to a number b, then b is equal to a (identity is symmetric).
c. If a number a is equal to a number b and b is equal to c, then a is equal to c
(identity is transitive).
d. No number is less than itself (less than is irreflexive).
*e. If a number a is less than a number b, then b is not less than a (less than is
asymmetric).
f. If a number a is less than a number b and b is less than c, then a is less than
c (less than is transitive).
g. There is no largest number.
*h. Four is even (a number such that two times something is equal to it).


i. Three is odd (such that two times something plus one is equal to it).
*j. Any odd number is the sum of an odd and an even.
k. Any even number other than zero is the sum of one odd with another.
l. The sum of one odd with another odd is even.
m. There is no largest even number.
*n. Three is prime (a number divided by no number other than one and itself,
though you will have to put this in terms of multipliers).
o. Every prime except two is odd.

E5.30. For each of the following arguments: (i) Produce a good translation, including
interpretation function and translations for the premises and conclusion.
Then (ii) for each argument that is not quantificationally valid, produce an
interpretation (trees optional) to show that the argument is not quantificationally valid.
a. Only citizens can vote
Hannah is a citizen
Hannah can vote
b. All citizens can vote
If someone is a citizen, then their father is a citizen
Hannah is a citizen
Hannahs father can vote
*c. Bob is taller than every other man
Only Bob is taller than every other man
d. Bob is taller than every other man
The taller than relation is asymmetric
Only Bob is taller than every other man


e. Some happy animals are dogs


At most one happy dog is chasing a cat
Some happy dog is chasing a cat
E5.31. For each of the arguments in E5.30 that you have not shown is invalid, produce
a derivation to show that it is valid in AD.

E5.32. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples of your own construction (iii) where the concept
applies, and (iv) where it does not. Your essay should exhibit an understanding
of methods from the text.
a. Quantifier switching
b. Quantifier placement
c. Quantity expressions and definite descriptions

Chapter 6

Natural Deduction
Natural deduction systems are so-called because their rules formalize patterns of
reasoning that occur in relatively ordinary natural contexts. Thus, initially at least,
the rules of natural deduction systems are easier to motivate than the axioms and rules
of axiomatic systems. By itself, this is sufficient to give natural deduction a special
interest. As we shall see, natural deduction is also susceptible to proof strategies in
a way that (primitive) axiomatic systems are not. If you have had another course
in formal logic, you have probably been exposed to natural deduction. So, again, it
may seem important to bring what we have done into contact with what you have
encountered in other contexts. After some general remarks about natural deduction,
we turn to the sentential and quantificational components of our system ND, and
finally to an expanded system, ND+.

6.1 General

I begin this section with a few general remarks about derivation systems and derivation
rules. We will then turn to some background notions for the particular rules of
our official natural derivation systems.1

6.1.1 Derivations as Games

In their essential nature, derivations are defined in terms of form. Both axiomatic and
natural derivations can be seen as a kind of game with the aim of getting from a
starting point to a goal by rules. Taken as games, there is no immediate or obvious
1 Parts of this section are reminiscent of 3.1 and, especially if you skipped over that section, you
may want to look over it now as additional background.


CHAPTER 6. NATURAL DEDUCTION

206

connection between derivations and semantic validity or truth. This point may have
been particularly vivid with respect to axiomatic systems. In the case of natural
derivations, the systems are driven by rules rather than axioms, and the rules may
make sense in a way that axioms do not. Still, we can introduce natural derivations
purely in their nature as games. Thus, for example, consider a system N1 with the
following rules.
N1
R1  from P → Q and P, move to Q
R2  from P ∨ Q, move to Q
R3  from P ∧ Q, move to P
R4  from P, move to P ∨ Q
In this system, R1: given formulas of the form P → Q and P, one may move to Q;
R2: given a formula of the form P ∨ Q, one may move to Q; R3: given a formula
of the form P ∧ Q, one may move to P; and R4: given a formula P one may move
to P ∨ Q for any Q. For now, at least, the game is played as follows: One begins
with some starting formulas and a goal. The starting formulas are like cards in
your hand. One then applies the rules to obtain more formulas, to which the rules
may be applied again and again. You win if you eventually obtain the goal formula.
Each application of a rule is independent of the ones before, so all that matters for
a given move is whether formulas are of the requisite forms; it does not matter what
was P or what was Q in a previous application of the rules.
Let us consider some examples. At this stage, do not worry about strategy, about
why we do what we do, as much as about how the rules work and the way the game
is played. A game always begins with starting premises at the top, and goal on the
bottom.
(A)
1. A → (B ∧ C)     P(remise)
2. A               P(remise)

   B ∨ D           (goal)

The formulas on lines (1) and (2) are of the form P → Q and P, where P maps to
A and Q to (B ∧ C); so we are in a position to apply rule R1 to get the Q.
1. A → (B ∧ C)     P(remise)
2. A               P(remise)
3. B ∧ C           1,2 R1

   B ∨ D           (goal)

The justification for our move, the way the rules apply, is listed on the right; in
this case, we use the formulas on lines (1) and (2) according to rule R1 to get B ∧ C;


so that is indicated by the notation. Now, B ∧ C is of the form P ∧ Q. So we can
apply R3 to it in order to obtain the P, namely B.

1. A → (B ∧ C)     P(remise)
2. A               P(remise)
3. B ∧ C           1,2 R1
4. B               3 R3

   B ∨ D           (goal)

Notice that one application of a rule is independent of another. It does not matter
what formula was P or Q in a previous move, for evaluation of this one. Finally,
where P is B, B ∨ D is of the form P ∨ Q. So we can apply R4 to get the final
result.

1. A → (B ∧ C)     P(remise)
2. A               P(remise)
3. B ∧ C           1,2 R1
4. B               3 R3
5. B ∨ D           4 R4  Win!

Notice that R4 leaves the Q unrestricted: Given some P, we can move to P ∨ Q for
any Q. Since we reached the goal from the starting sentences, we win! In this simple
derivation system, any line of a successful derivation is a premise, or justified from
lines before by the rules.
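The game can be made quite literal. In this sketch (my own, with an invented tuple encoding of formulas), each rule is a function from formulas in hand to a new formula, and derivation (A) is replayed move by move:

```python
# Formulas as nested tuples: ('A', '->', ('B', '&', 'C')) and so on.
def r1(f, g):
    # R1: from P -> Q together with P, move to Q
    if isinstance(f, tuple) and f[1] == '->' and f[0] == g:
        return f[2]
    raise ValueError("R1 does not apply")

def r3(f):
    # R3: from P & Q, move to P (the left conjunct only)
    if isinstance(f, tuple) and f[1] == '&':
        return f[0]
    raise ValueError("R3 does not apply")

def r4(f, q):
    # R4: from P, move to P v Q for any Q whatsoever
    return (f, 'v', q)

# Derivation (A): premises A -> (B & C) and A, goal B v D
hand = [('A', '->', ('B', '&', 'C')), 'A']
hand.append(r1(hand[0], hand[1]))   # 3. B & C    1,2 R1
hand.append(r3(hand[2]))            # 4. B        3 R3
hand.append(r4(hand[3], 'D'))       # 5. B v D    4 R4  Win!
print(hand[4])                      # ('B', 'v', 'D')
```

Each move checks only the forms of the formulas it is given, independently of how they were obtained, just as in the game described above.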
Here are a couple more examples, this time of completed derivations.

(B)
1. A ∧ C            P
2. (A ∨ B) → D      P
3. A                1 R3
4. A ∨ B            3 R4
5. D                2,4 R1
6. D ∨ (R → S)      5 R4  Win!

A ∧ C is of the form P ∧ Q. So we can apply R3 to obtain the P, in this case A.
Then where P is A, we use R4 to add on a B to get A ∨ B. (A ∨ B) → D and A ∨ B
are of the form P → Q and P; so we apply R1 to get the Q, that is D. Finally,
where D is P, D ∨ (R → S) is of the form P ∨ Q; so we apply R4 to get the final
result. Notice again that the Q may be any formula whatsoever.
Here is another example.

(C)
1. (A ∧ B) ∧ D              P
2. (A ∧ B) → C              P
3. A → (C → (B ∧ D))        P
4. A ∧ B                    1 R3
5. C                        2,4 R1
6. A                        4 R3
7. C → (B ∧ D)              3,6 R1
8. B ∧ D                    7,5 R1
9. B                        8 R3  Win!

You should be able to follow the steps. In this case, we use A ∧ B on line (4) twice;
once as part of an application of R1 to get C, and again in an application of R3 to
get the A. Once you have a formula in your hand, you can use it as many times and
whatever way the rules will allow. Also, the order in which we worked might have
been different. Thus, for example, we might have obtained A on line (5) and then
C after. You win if you get to the goal by the rules; how you get there is up to you.
Finally, it is tempting to think we could get B from, say, A ∧ B on line (4). We will
be able to do this in our official system. But the rules we have so far do not let us do so.
R3 lets us move just to the left conjunct of a formula of the form P ∧ Q.
When there is a way to get from the premises of some argument to its conclusion
by the rules of derivation system N, the premises prove the conclusion in system
N. In this case, where Γ (Gamma) is the set of premises, and P the conclusion, we
write Γ ⊢N P. If Γ ⊢N P the argument is valid in derivation system N. Notice
the distinction between this single turnstile ⊢ and the double turnstile ⊨ associated
with semantic validity. As usual, if Q1 ... Qn are the members of Γ, we sometimes
write Q1 ... Qn ⊢N P in place of Γ ⊢N P. If Γ has no members then, listing all the
members of Γ individually, we simply write ⊢N P. In this case, P is a theorem of
derivation system N.
One can imagine setting up many different rule sets, and so many different games
of this kind. In the end, we want our game to serve a specific purpose. That is, we
want to use the game in the identification of valid arguments. In order for our games
to be an indicator of validity, we would like it to be the case that Γ ⊢N P iff Γ ⊨ P,
that Γ proves P iff Γ entails P. In Part III we will show that our official derivation
games have this property.
For now, we can at least see how this might be: Roughly, we impose the following
condition on rules: we require of our rules that the inputs always semantically entail
the outputs. Then if some premises are true, and we make a move to a formula, the
formula we move to must be true; and if the formulas in our hand are all true, and
we add some formula by another move, the formula we add must be true; and so


forth for each formula we add until we get to the goal, which will have to be true as
well. So if the premises are true, the goal must be true as well. We will have much
more to say about this later!
For now, notice that our rules R1, R3, and R4 each meet the proposed requirement
on rules, but R2 does not.

(D)
R1   P Q | P → Q   P  /  Q
     T T |   T     T     T
     T F |   F     T     F
     F T |   T     F     T
     F F |   T     F     F

R2   P Q | P ∨ Q  /  Q
     T T |   T       T
     T F |   T       F
     F T |   T       T
     F F |   F       F

R3   P Q | P ∧ Q  /  P
     T T |   T       T
     T F |   F       T
     F T |   F       F
     F F |   F       F

R4   P Q | P  /  P ∨ Q
     T T | T       T
     T F | T       T
     F T | F       T
     F F | F       F

R1, R3, and R4 have no row where the input(s) are T and the output is F. But for
R2, the second row has input T and output F. So R2 does not meet our condition.
This does not mean that one cannot construct a game with R2 as a part. Rather, the
point is that R2 will not help us accomplish what we want to accomplish with our
games. As we demonstrate in Part III, so long as rules meet the condition, a win in
the game always corresponds to an argument that is semantically valid. Thus, for
example, derivation (C), in which R2 does not appear, corresponds to the result that
(A ∧ B) ∧ D, (A ∧ B) → C, A → (C → (B ∧ D)) ⊨s B.
(E) [a 16-row truth table on the sentence letters A, B, C, D, with columns for the premises (A ∧ B) ∧ D, (A ∧ B) → C, and A → (C → (B ∧ D)), and for the conclusion B]
There is no row where the premises are T and the conclusion is F. As the number of
rows goes up, we may decide that the games are dramatically easier to complete than
the tables. And derivations are particularly important in the quantificational case,
where we have not yet been able to demonstrate semantic validity at all.
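The requirement on rules can itself be checked mechanically: enumerate all valuations of P and Q, and confirm that no valuation makes every input true and the output false. This sketch (my own encoding of the four rules as Boolean functions) recovers the verdicts of table (D):

```python
from itertools import product

def meets_condition(inputs, output):
    # a rule meets the requirement iff on no row are all inputs T
    # while the output is F
    return all(output(p, q)
               for p, q in product([True, False], repeat=2)
               if all(i(p, q) for i in inputs))

checks = {
    "R1": meets_condition([lambda p, q: (not p) or q, lambda p, q: p],
                          lambda p, q: q),
    "R2": meets_condition([lambda p, q: p or q], lambda p, q: q),
    "R3": meets_condition([lambda p, q: p and q], lambda p, q: p),
    "R4": meets_condition([lambda p, q: p], lambda p, q: p or q),
}
print(checks)   # {'R1': True, 'R2': False, 'R3': True, 'R4': True}
```

R2 fails on the valuation p = T, q = F: the input P ∨ Q is true there while the output Q is false, exactly the second row of table (D).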


E6.1. Show that each of the following is valid in N1. Complete (a)-(d) using just
rules R1, R3, and R4. You will need an application of R2 for (e).
*a. (A ∧ B) ∧ C ⊢N1 A
b. (A ∧ B) ∧ C, A → (B ∧ C) ⊢N1 B
c. (A ∧ B) → (B ∧ A), A ∧ B ⊢N1 B ∨ A
d. R, R ∨ (S ∨ T) → S ⊢N1 S ∨ T
e. A ⊢N1 A → C

*E6.2. (i) For each of the arguments in E6.1, use a truth table to decide if the argument
is sententially valid. (ii) To what do you attribute the fact that a win in
N1 is not a sure indicator of semantic validity?

6.1.2 Auxiliary Assumptions

So far, our derivations have had the following form,

(F)
a. A        P(remise)
   ⋮
b. B        P(remise)
   ⋮
c. G        (goal)

We have some premise(s) at the top, and a conclusion at the bottom. The premises
are against a line which indicates the range or scope over which the premises apply.
In each case, the line extends from the premises to the conclusion, indicating that the
conclusion is derived from them. It is always our aim to derive the conclusion under
the scope of the premises alone. But our official derivation system will allow appeal
to certain auxiliary assumptions in addition to premises. Any such assumption comes
with a scope line of its own indicating the range over which it applies. Thus, for
example, derivations might be structured as follows.

(G)
a. A           P(remise)
b. B           P(remise)
c. |           A(ssumption)
d. |
e. G           (goal)

(H)
a. A           P(remise)
b. B           P(remise)
c. |           A(ssumption)
d. | |         A(ssumption)
e. | |
f. |
g. G           (goal)

In each, there are premises A through B at the top and goal G at the bottom. As
indicated by the main leftmost scope line, the premises apply throughout the derivations,
and the goal is derived under them. In case (G), there is an additional assumption at
(c). As indicated by its scope line, that assumption applies from (c)-(d). In (H), there
are a pair of additional assumptions. As indicated by the associated scope lines, the
first applies over (c)-(f), and the second over (d)-(e). We will say that an auxiliary
assumption, together with the formulas that fall under its scope, is a subderivation.
Thus (G) has a subderivation from (c)-(d). (H) has a pair of subderivations, one
from (c)-(f), and another from (d)-(e). A derivation or subderivation may include
various other subderivations. Any subderivation begins with an auxiliary assumption. In
general we cite a subderivation by listing the line number on which it begins, then a
dash, and the line number on which its scope line ends.
In contexts without auxiliary assumptions, we have been able freely to appeal to
any formula already in our hand. Where there are auxiliary assumptions, however,
we may appeal only to accessible subderivations and formulas. A formula is accessible at a given stage when it is obtained under assumptions all of which continue to
apply. In practice, what this means is that for justification of a formula at line number i we can appeal only to formulas which appear immediately against scope lines
extending as far as i . Thus, for example, with the scope structure as in (I) below, in
the justification of line (6),

(I)/(J) [the two structures share the following scope lines; an arrow marks the line being justified, at (6) for (I) and at (11) for (J)]

 1. |
 2. |
 3. | |        A(ssumption)
 4. | | |      A(ssumption)
 5. | | |
 6. | |        ← (I)
 7. | | |      A(ssumption)
 8. | | |
 9. | |
10. | |        A(ssumption)
11. | |        ← (J)
12. |

we could appeal only to formulas at (1), (2) and (3), for these are the only ones
immediately against scope lines extending as far as (6). To see this, notice that scope
lines extending as far as (6), are ones cut by the arrow at (6). Formulas at (4) and (5)
are not against a line extending that far. Similarly, as indicated by the arrow in (J),
for the justification of (11), we could appeal only to formulas at (1), (2), and (10).
Formulas at other line numbers are not immediately against scope lines extending
as far as (11). The accessible formulas are ones derived under assumptions all of
which continue to apply. Similarly in (J) for the justification of (8), say, we could
appeal only to formulas on (1), (2), (3), (6) and (7). Again (4) and (5) fall under an
assumption whose scope line does not extend as far as (8). The justification of (12)
could appeal just to (1) or (2). You should be sure you understand these cases.
Our aim is always to obtain the goal against the leftmost scope line under the
scope of the premises alone and if the only formulas accessible for its justification are also against the leftmost scope line, it may appear mysterious why we would
ever introduce auxiliary assumptions and subderivations at all. What is the point of
auxiliary assumptions, if formulas under their scope are inaccessible for justification
for the formula we want? The answer is that certain of our rules will appeal to entire
subderivations, rather than to the formulas in them. A subderivation is accessible at a
given stage when it is obtained under assumptions all of which continue to apply. In
practice, what this means is that for a formula at line i , we can appeal to a subderivation only if its whole scope line is itself against a scope line which extends down
to i . At line (5) in (I) there are no subderivations to which one could appeal, just
because none have yet been completed. At (6) we could appeal to the subderivation
at (4) - (5), because its scope line is immediately against one of the lines cut by the

CHAPTER 6. NATURAL DEDUCTION

213

arrow. Similarly, at (11) we could appeal to the subderivation at (3) - (9) because
its scope line is immediately against one of the lines cut by the arrow. But, at (11),
the subderivations at (4) - (5) and (7) - (8) are not accessible for their scope lines
are not immediately against a line extending as far as (11). At (12) we can appeal
to either of the subderivations at (3) - (9) and (10) - (11); the ones at (4) - (5) and
(7) - (8) remain inaccessible. The justification for line (12) might therefore appeal
either to the formulas on lines (1) and (2) or to the subderivations on lines (3) - (9)
and (10) - (11). Notice again that the justification for line (12) does not have access to the formulas inside the subderivations from lines (3) - (9) and (10) - (11). So
those subderivations remain accessible even where the formulas inside them are not,
and there may be a point to the subderivations even where the formulas inside the
subderivation are inaccessible.

Definitions for Auxiliary Assumptions

SD An auxiliary assumption, together with the formulas that fall under its scope, is a
subderivation.

FA A formula is accessible at a given stage when it is obtained under assumptions all of
which continue to apply.

SA A subderivation is accessible at a given stage when it (as a whole) is obtained under
assumptions all of which continue to apply.

In practice, what this means is that for justification of a formula at line i we can appeal to
another formula only if it is immediately against a scope line extending as far as i.
And in practice, for justification of a formula at line i, we can appeal to a subderivation
only if its whole scope line is itself immediately against a scope line extending as far as i.

All this will become more concrete as we turn now to the rules of our official
system ND. We can reinforce the point about accessibility of formulas by introducing
the first, and simplest, rule of our official system. If a formula P appears on
an accessible line a of a derivation, we may repeat it by the rule reiteration, with
justification a R.

R    a. P
        ⋮
        P        a R

It should be obvious why reiteration satisfies our basic condition on rules. If P is
true, of course P is true. So this rule could never lead from a formula that is true
to one that is not. Observe, though, that the line a must be accessible. If in (I) the


assumption at line (3) were a formula P , then we could conclude P with justification
3 R at lines (5), (6), (8) or (9). We could not obtain P with the same justification at
(11) or (12) without violating the rule, because (3) is not accessible for justification
of (11) or (12). You should be clear about why this is so.
*E6.3. Consider a derivation with the following structure.
[scope-structure diagram: lines (1)-(8) with nested scope lines]

For each of the lines (3), (6), (7), and (8), which lines are accessible? Which
subderivations (if any) are accessible? That is, complete the following table.

         accessible lines    accessible subderivations
line 3
line 6
line 7
line 8
*E6.4. Suppose in a derivation with structure as in E6.3 we have obtained a formula
A on line (3). (i) On what lines would we be allowed to conclude A by 3 R?
Suppose there is a formula B on line (4). (ii) On what lines would we be
allowed to conclude B by 4 R?

6.2 Sentential

Our system N1 set up the basic idea of derivations as games. We begin presentation
of our official natural deduction system ND with rules whose application is just to
sentential forms, to forms involving ∼ and → (and so to ∧, ∨, and ↔). Though
the only operators in the forms are sentential, the forms may apply to expressions
in either a sentential language like Ls, or a quantificational one like Lq. For the
most part, though, we simply focus on Ls. In a derivation, each formula is either a


premise, an auxiliary assumption, or is justified by the rules. As we will see, auxiliary


assumptions are always introduced in conjunction with an exit strategy. In addition to
reiteration, the sentential part of ND includes two rules for each of the five sentential
operators, for a total of eleven rules. For each of the operators, there is an I
or introduction rule, and an E or exploitation rule.2 As we will see, this division
helps structure the way we approach derivations: To generate a formula with main
operator ★, you will typically use the corresponding introduction rule. To make use
of a formula with main operator ★, you will typically employ the exploitation rule
for that operator.

6.2.1 ! and ^

Let us start with the I- and E-rules for ! and ^. We have already seen the exploitation rule for !: it is R1 of system N1. If formulas P ! Q and P appear on accessible lines a and b of a derivation, we may conclude Q with justification a,b !E.
!E
  a. P ! Q
  b. P
     Q              a,b !E

Intuitively, if it is true that if P then Q, and it is true that P, then Q must be true as well. And on table (D) we saw that if both P ! Q and P are true, then Q is true. Notice that we do not somehow get the P "from" P ! Q. Rather, we exploit P ! Q when, given that P also is true, we use P together with P ! Q to conclude Q. So this rule requires two input "cards." The P ! Q card sits idle without a P to activate it. The order in which P ! Q and P appear does not matter so long as they are both accessible. However, you should cite them in the standard order — line for the conditional first, then the antecedent. As in the axiomatic system from chapter 3, this rule is sometimes called modus ponens.
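As an illustration only (not part of the text), the shape of !E can be mimicked in a few lines of Python. The nested-tuple encoding of formulas and the name arrow_E are invented here for the sketch.

```python
# A sketch (not from the text): formulas as nested tuples, with ("->", P, Q)
# for a conditional and ("&", P, Q) for a conjunction.

def arrow_E(conditional, antecedent):
    """!E (modus ponens): from P -> Q and P, return Q."""
    if not (isinstance(conditional, tuple) and conditional[0] == "->"):
        raise ValueError("!E applies only to formulas with main operator ->")
    _, p, q = conditional
    if antecedent != p:
        raise ValueError("the second input must match the antecedent")
    return q

# From L and L -> (A ^ K), conclude A ^ K:
print(arrow_E(("->", "L", ("&", "A", "K")), "L"))  # ('&', 'A', 'K')
```

Note that, just as the rule demands, the function refuses a formula whose main operator is not the arrow, and refuses a second input that is not the antecedent.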
Here is an example. We show L, L ! (A ^ K), (A ^ K) ! (L ! P) `ND P.
2 I- and E-rules are often called introduction and elimination rules. This can lead to confusion, as E-rules do not necessarily eliminate anything. The above usage, which is becoming more common, is clearer.


(K)
  1. L                        P
  2. L ! (A ^ K)              P
  3. (A ^ K) ! (L ! P)        P
  4. A ^ K                    2,1 !E
  5. L ! P                    3,4 !E
  6. P                        5,1 !E
L ! (A ^ K) and L are of the form P ! Q and P, where L is the P and A ^ K is the Q. So we use them to conclude A ^ K by !E on (4). But then (A ^ K) ! (L ! P) and A ^ K are of the form P ! Q and P, so we use them to conclude the Q — in this case L ! P — on line (5). Finally L ! P and L are of the form P ! Q and P, and we use them to conclude P on (6). Notice that,
(L)
  1. (A ! B) ^ C           P
  2. A                     P
  3. B                     1,2 !E      Mistake!

misapplies the rule. (A ! B) ^ C is not of the form P ! Q — the main operator being ^, the formula is of the form P ^ Q. The rule !E applies just to formulas with main operator !. If we want to use (A ! B) ^ C with A to conclude B, we would first have to isolate A ! B on a line of its own. We might have done this in N1. But there is no rule for this (yet) in ND!
!I is our first rule that requires a subderivation. Once we understand this rule, the rest are mere variations on a theme. !I takes as its input an entire subderivation. Given an accessible subderivation which begins with assumption P on line a and ends with Q against the assumption's scope line at b, one may conclude P ! Q with justification a-b !I.

!I
  a. | P            A (Q, !I)            a. | P            A (g, !I)
  b. | Q                          or     b. | Q
     P ! Q          a-b !I                  P ! Q          a-b !I

Note that the auxiliary assumption comes with a stated exit strategy: In this case the
exit strategy includes the formula Q with which the subderivation is to end, and an
indication of the rule (!I) by which exit is to be made. We might write out the entire
formula inside the parentheses as on the left. In practice, however, this is tedious,
and it is easier just to write the formula at the bottom of the scope line where we
will need it in the end. Thus in the parentheses on the right g is a simple pointer to
the goal formula at the end of the scope line. Note that the pointer is empty unless
there is a formula to which it points, and the exit strategy therefore is not complete
unless the goal formula is stated. In this case, the strategy includes the pointer to


the goal formula, along with the indication of the rule (!I) by which exit is to be
made. Again, at the time we make the assumption, we write the Q down as part of
the strategy for exiting the subderivation. But this does not mean the Q is justified!
The Q is rather introduced as a new goal. Notice also that the justification a-b !I
does not refer to the formulas on lines a and b. These are inaccessible. Rather, the
justification appeals to the subderivation which begins on line a and ends on line b
where this subderivation is accessible even though the formulas in it are not. So there
is a difference between the comma and the hyphen, as they appear in justifications.
For this rule, we assume the antecedent, reach the consequent, and conclude to the conditional by !I. Intuitively, if an assumption P leads to Q, then we know that if P then Q. On truth tables, if there is a sententially valid argument from some other premises together with assumption P to conclusion Q, then there is no row where those other premises are true and the assumption P is true but Q is false — but this is just to say that there is no row where the other premises are true and P ! Q is false. We will have much more to say about this in Part III.
For an example, suppose we are confronted with the following.

(M)
  1. A ! B          P
  2. B ! C          P
     A ! C

In general, we use an introduction rule to produce some formula — typically one already given as a goal. !I generates P ! Q given a subderivation that starts with the P and ends with the Q. Thus to reach A ! C, we need a subderivation that starts with A and ends with C. So we set up to reach A ! C with the assumption A and an exit strategy to produce A ! C by !I. For this we set the consequent C as a subgoal.
  1. A ! B          P
  2. B ! C          P
  3. | A            A (g, !I)
     | C
     A ! C

Again, we have not yet reached C or A ! C . Rather, we have assumed A and set C
as a subgoal, with the strategy of terminating our subderivation by an application of
!I. This much is stated in the exit strategy. As it happens, C is easy to get.

CHAPTER 6. NATURAL DEDUCTION


  1. A ! B          P
  2. B ! C          P
  3. | A            A (g, !I)
  4. | B            1,3 !E
  5. | C            2,4 !E
     A ! C
Having reached C, and so completed the subderivation, we are in a position to execute our exit strategy and conclude A ! C by !I.
  1. A ! B          P
  2. B ! C          P
  3. | A            A (g, !I)
  4. | B            1,3 !E
  5. | C            2,4 !E
  6. A ! C          3-5 !I

We appeal to the subderivation that starts with the assumption of the antecedent, and reaches the consequent. Notice that the !I setup is driven, not by available premises and assumptions, but by where we want to get. We will say something more systematic about strategy once we have introduced all the rules. But here is the fundamental idea: think goal-directedly. We begin with A ! C as a goal. Our idea for producing it leads to C as a new goal. And the new goal is relatively easy to obtain.
Here is another example, one that should illustrate the above point about strategy, as well as the rule. Say we want to show A `ND B ! (C ! A).
(N)
  1. A              P
     B ! (C ! A)

Forget about the premise! Since the goal is of the form P ! Q, we set up to get it
by !I.
  1. A              P
  2. | B            A (g, !I)
     | C ! A
     B ! (C ! A)

We need a subderivation that starts with the antecedent, and ends with the consequent.
So we assume the antecedent, and set the consequent as a new goal. In this case, the
new goal C ! A has main operator !, so we set up again to reach it by !I.



  1. A              P
  2. | B            A (g, !I)
  3. | | C          A (g, !I)
     | | A
     | C ! A
     B ! (C ! A)

The pointer g in an exit strategy points to the goal formula at the bottom of its scope
line. Thus g for assumption B at (2) points to C ! A at the bottom of its line,
and g for assumption C at (3) points to A at the bottom of its line. Again, for the
conditional, we assume the antecedent, and set the consequent as a new goal. And
this last goal is particularly easy to reach. It follows immediately by reiteration from
(1). Then it is a simple matter of executing the exit strategies with which our auxiliary
assumptions were introduced.
  1. A              P
  2. | B            A (g, !I)
  3. | | C          A (g, !I)
  4. | | A          1 R
  5. | C ! A        3-4 !I
  6. B ! (C ! A)    2-5 !I

The subderivation which begins on (3) and ends on (4) begins with the antecedent
and ends with the consequent of C ! A. So we conclude C ! A on (5) by 3-4 !I.
The subderivation which begins on (2) and ends at (5) begins with the antecedent and
ends with the consequent of B ! .C ! A/. So we reach B ! .C ! A/ on (6)
by 2-5 !I. Notice again how our overall reasoning is driven by the goals, rather than
the premises and assumptions. It is sometimes difficult to motivate strategy when
derivations are short and relatively easy. But this sort of thinking will stand you in
good stead as problems get more difficult!
Given what we have done, the E- and I-rules for ^ are completely straightforward. If P ^ Q appears on some accessible line a of a derivation, then you may move to the P, or to the Q, with justification a ^E.

^E
  a. P ^ Q                 a. P ^ Q
     P          a ^E          Q          a ^E
Either qualifies as an instance of the rule. The left-hand case was R3 from N1.
Intuitively, ^E should be clear. If P and Q is true, then P is true. And if P and Q is


true, then Q is true. We saw a table for the left-hand case in (D). The other is similar.
The ^ introduction rule is equally straightforward. If P and Q appear on accessible
lines a and b of a derivation, then you may move to P ^ Q with justification a,b ^I.
^I
  a. P
  b. Q
     P ^ Q        a,b ^I

The order in which P and Q appear is irrelevant, though you should cite them in the
specified order, line for the left conjunct first, and then for the right. If P is true and
Q is true, then P and Q is true. Similarly, on a table, any line with both P and Q
true has P ^ Q true.
Here is a simple example, demonstrating the associativity of conjunction.
(O)
  1. A ^ (B ^ C)        P
  2. A                  1 ^E
  3. B ^ C              1 ^E
  4. B                  3 ^E
  5. C                  3 ^E
  6. A ^ B              2,4 ^I
  7. (A ^ B) ^ C        6,5 ^I

Notice that we could not get the B alone or the C alone without first isolating B ^ C
on (3). As before, our rules apply just to the main operator. In effect, we take apart the
premise with the E-rule, and put the conclusion together with the I-rule. Of course,
as with !I and !E, rules for other operators do not always let us get to the parts
and put them together in this simple and symmetric way.
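As an aside (not part of the text), the take-apart/put-together pattern of the associativity derivation can be mimicked on the same toy tuple encoding of formulas; the function names and_I and and_E are invented for this sketch.

```python
# A sketch (not from the text): ("&", P, Q) encodes a conjunction P ^ Q.

def and_I(p, q):
    """^I: from P and Q, conclude P ^ Q."""
    return ("&", p, q)

def and_E(conjunction, side):
    """^E: from P ^ Q, conclude P (side="left") or Q (side="right")."""
    if not (isinstance(conjunction, tuple) and conjunction[0] == "&"):
        raise ValueError("^E applies only to formulas with main operator ^")
    return conjunction[1] if side == "left" else conjunction[2]

premise = ("&", "A", ("&", "B", "C"))          # A ^ (B ^ C)
a = and_E(premise, "left")                     # A
bc = and_E(premise, "right")                   # B ^ C
b, c = and_E(bc, "left"), and_E(bc, "right")   # B and C
result = and_I(and_I(a, b), c)                 # (A ^ B) ^ C
print(result)  # ('&', ('&', 'A', 'B'), 'C')
```

As in the derivation itself, B and C become available only after B ^ C has been isolated from the premise.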


Words to the wise:

- A common mistake made by beginning students is to assimilate other rules to ^E and ^I — moving, say, from P ! Q alone to P or Q, or from P and Q to P ! Q. Do not forget what you have learned! Do not make this mistake! The ^ rules are particularly easy. But each operator has its own special character. Thus !E requires two "cards" to play. And !I takes a subderivation as input.

- Another common mistake is to assume a formula P merely because it would be nice to have access to P. Do not make this mistake! An assumption always comes with an exit strategy, and is useful only for application of the exit rule. At this stage, then, the only reason to assume P is to produce a formula of the sort P ! Q by !I.
A final example brings together all of the rules so far (except R).
(P)
  1. A ! C                  P
  2. | A ^ B                A (g, !I)
  3. | A                    2 ^E
  4. | C                    1,3 !E
  5. | B                    2 ^E
  6. | B ^ C                5,4 ^I
  7. (A ^ B) ! (B ^ C)      2-6 !I

We set up to obtain the overall goal by !I. This generates B ^ C as a subgoal. We get B ^ C by getting the B and the C. Here is our guiding idea for strategy (which may now seem obvious): As you focus on a goal, to generate a formula with main operator ★, consider producing it by ★I. Thus, if the main operator of a goal or subgoal is !, consider producing the formula by !I; if the main operator of a goal is ^, consider producing it by ^I. This much should be sufficient for you to approach the following exercises. As you do the derivations, it is good simply to leave plenty of space on the page for your derivation as you state goal formulas, and let there be blank lines if room remains.3
3 Typing on a computer, it is easy to push lines down if you need more room. It is not so easy with pencil and paper, and worse with pen! If you decide to type, most word processors have a symbol font, with the capability of assigning symbols to particular keys. Assigning keys is far more efficient than finding characters over and over in menus.


E6.5. Complete the following derivations by filling in justifications for each line.
Hint: it may be convenient to xerox the problems, and fill in your answers
directly on the copy.
a.  1. (A ^ B) ! C
    2. B ^ A
    3. B
    4. A
    5. A ^ B
    6. C

b.  1. (R ! L) ^ ((S _ R) ! (T $ K))
    2. (R ! L) ! (S _ R)
    3. R ! L
    4. S _ R
    5. (S _ R) ! (T $ K)
    6. T $ K

c.  1. B
    2. (A ! B) ! (B ! (L ^ S))
    3. | A
    4. | B
    5. A ! B
    6. B ! (L ^ S)
    7. L ^ S
    8. S
    9. L
   10. S ^ L

d.  1. A ^ B
    2. | C
    3. | A
    4. | A ^ C
    5. C ! (A ^ C)
    6. | C
    7. | B
    8. | B ^ C
    9. C ! (B ^ C)
   10. (C ! (A ^ C)) ^ (C ! (B ^ C))


e.  1. (A ^ S) ! C
    2. | A
    3. | | S
    4. | | A ^ S
    5. | | C
    6. | S ! C
    7. A ! (S ! C)

E6.6. The following are not legitimate ND derivations. In each case, explain why.

*a.  1. (A ^ B) ^ (C ! B)      P
     2. A                      1 ^E

b.   1. (A ^ B) ^ (C ! A)      P
     2. C                      P
     3. A                      1,2 !E

c.   1. (A ^ B) ^ (C ! A)      P
     2. C ! A                  1 ^E
     3. A                      2 !E

d.   1. A ! B                  P
     2. | A ^ C                A (g, !I)
     3. | A                    2 ^E
     4. B                      1,3 !E

e.   1. A ! B                  P
     2. | A ^ C                A (g, !I)
     3. | A                    2 ^E
     4. | B                    1,3 !E
     5. | C                    2 ^E
     6. | A ^ C                3,5 ^I

Hint: For this problem, think carefully about the exit strategy and the scope lines. Do we have the conclusion where we want it?

E6.7. Provide derivations to show each of the following.


a. A ^ B `ND B ^ A
*b. A ^ B, B ! C `ND C
c. A ^ (A ! (A ^ B)) `ND B
d. A ^ B, B ! (C ^ D) `ND A ^ D
*e. A ! (A ! B) `ND A ! B
f. A, (A ^ B) ! (C ^ D) `ND B ! C
g. C ! A, C ! (A ! B) `ND C ! (A ^ B)
*h. A ! B, B ! C `ND (A ^ K) ! C
i. A ! B `ND (A ^ C) ! (B ^ C)
j. D ^ E, (D ! F) ^ (E ! G) `ND F ^ G
k. O ! B, B ! S, S ! L `ND O ! L
*l. A ! B `ND (C ! A) ! (C ! B)
m. A ! (B ! C) `ND B ! (A ! C)
n. A ! (B ! C), D ! B `ND A ! (D ! C)
o. A ! B `ND A ! (C ! B)

6.2.2 ~ and _

Now let us consider the I- and E-rules for ~ and _. The two rules for ~ are quite similar to one another. Each appeals to a single subderivation. For ~I, given an accessible subderivation which begins with assumption P on line a, and ends with a formula of the form Q ^ ~Q against its scope line on line b, one may conclude ~P by a-b ~I. For ~E, given an accessible subderivation which begins with assumption ~P on line a, and ends with a formula of the form Q ^ ~Q against its scope line on line b, one may conclude P by a-b ~E.

~I
  a. | P              A (c, ~I)
  b. | Q ^ ~Q
     ~P               a-b ~I

~E
  a. | ~P             A (c, ~E)
  b. | Q ^ ~Q
     P                a-b ~E


~I introduces an expression with main operator tilde, adding tilde to the assumption P. ~E exploits the assumption ~P, with a result that takes the tilde off. For these rules, the formula Q may be any formula, so long as ~Q is it with a tilde in front. Because Q may be any formula, when we declare our exit strategy for the assumption, we might have no particular goal formula in mind. So, where g always points to a formula written at the bottom of a scope line, c is not a pointer to any particular formula. Rather, when we declare our exit strategy, we merely indicate our intent to obtain some contradiction, and then to exit by ~I or ~E.
Intuitively, if an assumption leads to a result that is false, the assumption is wrong. So if the assumption P leads to Q ^ ~Q, then ~P; and if the assumption ~P leads to Q ^ ~Q, then P. On tables, there can be no row where Q ^ ~Q is true; so if every row where some premises together with assumption P are true would have to make Q ^ ~Q true, then there can be no row where those other premises are true and P is true — so any row where the other premises are true is one where P is false, and ~P is therefore true. Similarly when the assumption is ~P, any row where the other premises are true has to be one where ~P is false, so that P is true. Again, we will have much more to say about this reasoning in Part III.
Here are some examples of these rules. Notice that, again, we introduce subderivations with the overall goal in mind.
(Q)
  1. A ! B            P
  2. A ! ~B           P
  3. | A              A (c, ~I)
  4. | B              1,3 !E
  5. | ~B             2,3 !E
  6. | B ^ ~B         4,5 ^I
  7. ~A               3-6 ~I

We begin with the goal of obtaining ~A. The natural way to obtain this is by ~I. So we set up a subderivation with that in mind. Since the goal is ~A, we begin with A, and go for a contradiction. In this case, the contradiction is easy to obtain, by a couple applications of !E and then ^I.
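For readers who like to check such table claims mechanically, here is a small sketch (not part of the text) that brute-forces the pattern behind this derivation: on every row of the truth table where A ! B and A ! ~B are both true, ~A is true.

```python
# A sketch (not from the text): truth-table check that
# A ! B and A ! ~B together entail ~A.
from itertools import product

def rows_where_premises_true():
    for A, B in product([True, False], repeat=2):
        if ((not A) or B) and ((not A) or (not B)):  # A ! B and A ! ~B
            yield A, B

# In every such row the assumption A fails, so ~A holds.
valid = all(not A for A, B in rows_where_premises_true())
print(valid)  # True
```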
Here is another case that may be more interesting.



(R)
  1. ~A               P
  2. B ! A            P
  3. | L ^ B          A (c, ~I)
  4. | B              3 ^E
  5. | A              2,4 !E
  6. | A ^ ~A         5,1 ^I
  7. ~(L ^ B)         3-6 ~I

This time, the original goal is ~(L ^ B). It is of the form ~P, so we set up to obtain it with a subderivation that begins with the P, that is, L ^ B. In this case, the contradiction is A ^ ~A. Once we have the contradiction, we simply apply our exit strategy.
A simplification. Let ? (bottom) abbreviate an arbitrary contradiction — say Z ^ ~Z. Adopt a rule ?I as on the left below,

?I
  a. Q
  b. ~Q
     ?               a,b ?I

(S)
  1. Q                   P
  2. ~Q                  P
  3. | ~(Z ^ ~Z)         A (c, ~E)
  4. | Q ^ ~Q            1,2 ^I
  5. Z ^ ~Z              3-4 ~E

Given Q and ~Q on accessible lines, we move directly to ? by ?I. This is an example of a derived rule. For, given Q and ~Q, we can always derive Z ^ ~Z (that is, ?) as in (S) on the right. Given this, the ~I and ~E rules appear in the forms,

~I
  a. | P              A (c, ~I)
  b. | ?
     ~P               a-b ~I

~E
  a. | ~P             A (c, ~E)
  b. | ?
     P                a-b ~E

Since ? is (abbreviates) a sentence of the form Q ^ ~Q, the subderivations for ~I and ~E are appropriately concluded with ?. Observe that with ? at the bottom, the ~I and ~E rules have a particular goal sentence, very much like !I. However, the Q and ~Q required to obtain ? by ?I are the same as would be required for Q ^ ~Q on the original form of the rules. For this reason, we declare our exit strategy with a c rather than g any time the goal is ?. At one level, this simplification is a mere notational convenience: having obtained Q and ~Q, we move to ?, instead of writing out the complex conjunction Q ^ ~Q. However, there are contexts where it will be convenient to have a particular contradiction as goal. Thus this is the standard form in which we use these rules.


Here is an example of the rules in this form, this time for ~E.

(T)
  1. ~~A              P
  2. | ~A             A (c, ~E)
  3. | ?              2,1 ?I
  4. A                2-3 ~E

It is no surprise that we can derive A from ~~A! This is how to do it in ND. Again, do not begin by thinking about the premise. The goal is A, and we can get it with a subderivation that starts with ~A, by a ~E exit strategy. In this case the Q and ~Q for ?I are ~A and ~~A — that is, ~A with a tilde in front of it. Though very often (at least in the beginning) an atomic and its negation will do for your contradiction, Q and ~Q need not be simple. Observe that ~E is a strange and powerful rule: Though an E-rule, effectively it can be used in pursuit of any goal whatsoever — to obtain formula P by ~E, all one has to do is obtain a contradiction from the assumption of P with a tilde in front. As in this last example (T), ~E is particularly useful when the goal is an atomic formula, and thus without a main operator, so that there is no straightforward way for regular introduction rules to apply. In this way, it plays the role of a sort of back-door introduction rule.
The _I and _E rules apply methods we have already seen. For _I, given an accessible formula P on line a, one may move to either P _ Q or to Q _ P for any formula Q, with justification a _I.

_I
  a. P                     a. P
     P _ Q      a _I          Q _ P      a _I

The left-hand case was R4 from N1. Also, we saw an intuitive version of this rule as
addition on p. 25. Table (D) exhibits the left-hand case. And the other side should
be clear as well: Any row of a table where P is true has both P _ Q and Q _ P true.
Here is a simple example.

(U)
  1. P                 P
  2. (P _ Q) ! R       P
  3. P _ Q             1 _I
  4. R                 2,3 !E

It is easy to get R once we have P _ Q. And we build P _ Q directly from the P. Note that we could have done the derivation as well if (2) had been, say, (P _ (K ^ (L $ T))) ! R and we used _I to add K ^ (L $ T) to the P all at once.
The inputs to _E are a formula of the form P _ Q and two subderivations. Given
an accessible formula of the form P _ Q on line a, with an accessible subderivation


beginning with assumption P on line b and ending with conclusion C against its
scope line at c, and an accessible subderivation beginning with assumption Q on line
d and ending with conclusion C against its scope line at e, one may conclude C with
justification a,b-c,d-e _E.
_E
  a. P _ Q
  b. | P              A (g, a _E)
  c. | C
  d. | Q              A (g, a _E)
  e. | C
     C                a,b-c,d-e _E

Given a disjunction P _ Q, one subderivation begins with P, and the other with Q — both concluding with C. This time our exit strategy includes markers for the new subgoals, along with a notation that we exit by appeal to the disjunction on line a and _E. Intuitively, if we know it is one or the other, and either leads to some conclusion, then the conclusion must be true. Here is an example a student gave me near graduation time: She and her mother were shopping for a graduation dress. They narrowed it down to dress A or dress B. Dress A was expensive, and if they bought it, her mother would be mad. But dress B was ugly and if they bought it the student would complain and her mother would be mad. Conclusion: her mother would be mad — and this without knowing which dress they were going to buy! On a truth table, if rows where P is true have C true, and rows where Q is true have C true, then any row with P _ Q true must have C true as well.
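The truth-table claim can be checked by brute force. This sketch (not part of the text; the entails helper is invented here) tests a concrete instance of the _E pattern: A _ B, A ! C, and B ! C entail C.

```python
# A sketch (not from the text): brute-force truth-table check of a
# concrete _E-style inference.
from itertools import product

def entails(premises, conclusion, n):
    """True if every valuation making all premises true makes the conclusion true."""
    return all(conclusion(*row)
               for row in product([True, False], repeat=n)
               if all(p(*row) for p in premises))

valid = entails(
    premises=[lambda A, B, C: A or B,          # A _ B
              lambda A, B, C: (not A) or C,    # A ! C
              lambda A, B, C: (not B) or C],   # B ! C
    conclusion=lambda A, B, C: C,
    n=3)
print(valid)  # True
```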
Here are a couple of examples. The first is straightforward, and illustrates both the _I and _E rules.

(V)
  1. A _ B             P
  2. A ! C             P
  3. | A               A (g, 1 _E)
  4. | C               2,3 !E
  5. | B _ C           4 _I
  6. | B               A (g, 1 _E)
  7. | B _ C           6 _I
  8. B _ C             1,3-5,6-7 _E

We have the disjunction A _ B as premise, and original goal B _ C. And we set up to obtain the goal by _E. For this, one subderivation starts with A and ends with B _ C,


and the other starts with B and ends with B _ C . As it happens, these subderivations
are easy to complete.
Very often, beginning students resist using _E — no doubt because it is relatively messy. But this is a mistake — _E is your friend! In fact, with this rule, we have a case where it pays to look at the premises for general strategy. Again, we will have more to say later. But if you have a premise or accessible line of the form P _ Q, you should go for your goal, whatever it is, by _E. Here is why: As you go for the goal in the first subderivation, you have whatever premises were accessible before, plus P; and as you go for the goal in the second subderivation, you have whatever premises were accessible before plus Q. So you can only be better off in your quest to reach the goal. In many cases where a premise has main operator _, there is no way to complete the derivation except by _E. The above example (V) is a case in point.
Here is a relatively messy example, which should help you be sure you understand the _ rules. It illustrates the associativity of disjunction.

(W)
  1. A _ (B _ C)              P
  2. | A                      A (g, 1 _E)
  3. | A _ B                  2 _I
  4. | (A _ B) _ C            3 _I
  5. | B _ C                  A (g, 1 _E)
  6. | | B                    A (g, 5 _E)
  7. | | A _ B                6 _I
  8. | | (A _ B) _ C          7 _I
  9. | | C                    A (g, 5 _E)
 10. | | (A _ B) _ C          9 _I
 11. | (A _ B) _ C            5,6-8,9-10 _E
 12. (A _ B) _ C              1,2-4,5-11 _E

The premise has main operator _. So we set up to obtain the goal by _E. This gives us subderivations starting with A and B _ C, each with (A _ B) _ C as goal. The first is easy to complete by a couple instances of _I. But the assumption of the second, B _ C, has main operator _. So we set up to obtain its goal by _E. This gives us subderivations starting with B and C, each again having (A _ B) _ C as goal. Again, these are easy to complete by application of _I. The final result follows by the planned applications of _E. If you have been able to follow this case, you are doing well!


E6.8. Complete the following derivations by filling in justifications for each line.

a.  1. ~B
    2. (A _ C) ! (B ^ C)
    3. | A
    4. | A _ C
    5. | B ^ C
    6. | B
    7. | ?
    8. ~A

b.  1. R
    2. ~(S _ T)
    3. | R ! S
    4. | S
    5. | S _ T
    6. | ?
    7. ~(R ! S)

c.  1. (R ^ S) _ (K ^ L)
    2. | R ^ S
    3. | R
    4. | S
    5. | S ^ R
    6. | (S ^ R) _ (L ^ K)
    7. | K ^ L
    8. | K
    9. | L
   10. | L ^ K
   11. | (S ^ R) _ (L ^ K)
   12. (S ^ R) _ (L ^ K)


d.  1. A _ B
    2. | A
    3. | | A ! B
    4. | | B
    5. | (A ! B) ! B
    6. | B
    7. | | A ! B
    8. | | B
    9. | (A ! B) ! B
   10. (A ! B) ! B

e.  1. ~B
    2. ~A ! (A _ B)
    3. | ~A
    4. | A _ B
    5. | | A
    6. | | A
    7. | | B
    8. | | | ~A
    9. | | | ?
   10. | | A
   11. | A
   12. | ?
   13. A
E6.9. The following are not legitimate ND derivations. In each case, explain why.

a.   1. A _ B             P
     2. B                 1 _E

b.   1. A                 P
     2. B ! ~A            P
     3. | B               A (c, ~I)
     4. | ~A              2,3 !E
     5. ~B                1,3-4 ~I

*c.  1. W                 P
     2. | ~R              A (c, ~I)
     3. | ~W              A (c, ~I)
     4. | ?               1,3 ?I
     5. R                 2-4 ~I

d.   1. A _ B             P
     2. | A               A (g, 1 _E)
     3. | A               2 R
     4. | B               A (g, 1 _E)
     5. | A               2 R
     6. A                 1,2-3,4-5 _E

e.   1. A _ B             P
     2. | A               A (g, 1 _E)
     3. | A               2 R
     4. | ~A              A (c, ~I)
     5. | B               A (g, 1 _E)
     6. | A               4 R
     7. A                 1,2-3,5-6 _E

E6.10. Produce derivations to show each of the following.

a. ~A `ND ~(A ^ B)
b. A `ND ~~A
*c. A ! B, ~B `ND ~A
d. A ! B `ND ~(A ^ ~B)
e. A ! B, B ! ~A `ND ~A
f. A ^ B `ND (R $ S) _ B
*g. A _ (A ^ B) `ND A
h. ~S, (B _ C) ! S `ND ~B
i. A _ B, A ! B, B ! A `ND A ^ B
j. A ! B, (B _ C) ! D, D ! ~A `ND ~A
k. A _ B `ND B _ A
*l. A ! B `ND ~B ! ~A
m. (A ^ ~B) ! ~A `ND A ! B
n. ~~A _ B `ND A _ B
o. A _ B, ~B `ND A

6.2.3 $

We complete our presentation of rules for the sentential part of ND with the rules $E and $I. Given that P $ Q abbreviates the same as (P ! Q) ^ (Q ! P), it is not surprising that rules for $ work like ones for arrow, but going two ways. For $E, if formulas P $ Q and P appear on accessible lines a and b of a derivation, we may conclude Q with justification a,b $E; and similarly but in the other direction, if formulas P $ Q and Q appear on accessible lines a and b of a derivation, we may conclude P with justification a,b $E.

$E
  a. P $ Q                 a. P $ Q
  b. P                     b. Q
     Q       a,b $E           P       a,b $E

P $ Q thus works like either P ! Q or Q ! P. Intuitively, given P if and only if Q, then if P is true, Q is true. And given P if and only if Q, then if Q is true, P is true. On tables, if P $ Q is true, then P and Q have the same truth value. So if P $ Q is true and P is true, Q is true as well; and if P $ Q is true and Q is true, P is true as well.
Given that P $ Q can be exploited like P ! Q or Q ! P , it is not surprising
that introducing P $ Q is like introducing both P ! Q and Q ! P . The
input to $I is two subderivations. Given an accessible subderivation beginning with
assumption P on line a and ending with conclusion Q against its scope line on b,
and an accessible subderivation beginning with assumption Q on line c and ending
with conclusion P against its scope line on d , one may conclude P $ Q with
justification, a-b,c-d $I.


$I
  a. | P              A (g, $I)
  b. | Q
  c. | Q              A (g, $I)
  d. | P
     P $ Q            a-b,c-d $I

Intuitively, if an assumption P leads to Q and the assumption Q leads to P, then we know that if P then Q, and if Q then P — which is to say that P if and only if Q. On truth tables, if there is a sententially valid argument from some other premises together with assumption P to conclusion Q, then there is no row where those other premises are true and assumption P is true and Q is false; and if there is a sententially valid argument from those other premises together with assumption Q to conclusion P, then there is no row where those other premises are true and the assumption Q is true and P is false; so on rows where the other premises are true, P and Q do not have different values, and the biconditional P $ Q is true.
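Since P $ Q abbreviates (P ! Q) ^ (Q ! P), the table claim can be checked mechanically. A sketch (not part of the text; the implies helper is invented here):

```python
# A sketch (not from the text): check that P $ Q has the same truth table
# as (P ! Q) ^ (Q ! P), the abbreviation the rules rely on.
from itertools import product

def implies(p, q):
    return (not p) or q

agree = all((P == Q) == (implies(P, Q) and implies(Q, P))
            for P, Q in product([True, False], repeat=2))
print(agree)  # True
```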
Here are a couple of examples. The first is straightforward, and exercises both the $I and $E rules. We show, A $ B, B $ C `ND A $ C.

(X)
  1. A $ B             P
  2. B $ C             P
  3. | A               A (g, $I)
  4. | B               1,3 $E
  5. | C               2,4 $E
  6. | C               A (g, $I)
  7. | B               2,6 $E
  8. | A               1,7 $E
  9. A $ C             3-5,6-8 $I

Our original goal is A $ C. So it is natural to set up subderivations to get it by $I. Once we have done this, the subderivations are easily completed by applications of $E.
Here is an interesting case that again exercises both rules. We show, A $ (B $ C), C `ND A $ B.


ND Quick Reference (Sentential)

R (reiteration)
  a. P
     P                a R

~I (negation intro)
  a. | P              A (c, ~I)
  b. | Q ^ ~Q (?)
     ~P               a-b ~I

~E (negation exploit)
  a. | ~P             A (c, ~E)
  b. | Q ^ ~Q (?)
     P                a-b ~E

^I (conjunction intro)
  a. P
  b. Q
     P ^ Q            a,b ^I

^E (conjunction exploit)
  a. P ^ Q
     P                a ^E

^E (conjunction exploit)
  a. P ^ Q
     Q                a ^E

_I (disjunction intro)
  a. P
     P _ Q            a _I

_I (disjunction intro)
  a. P
     Q _ P            a _I

_E (disjunction exploit)
  a. P _ Q
  b. | P              A (g, a _E)
  c. | C
  d. | Q              A (g, a _E)
  e. | C
     C                a,b-c,d-e _E

!I (conditional intro)
  a. | P              A (g, !I)
  b. | Q
     P ! Q            a-b !I

!E (conditional exploit)
  a. P ! Q
  b. P
     Q                a,b !E

$I (biconditional intro)
  a. | P              A (g, $I)
  b. | Q
  c. | Q              A (g, $I)
  d. | P
     P $ Q            a-b,c-d $I

$E (biconditional exploit)
  a. P $ Q
  b. P
     Q                a,b $E

$E (biconditional exploit)
  a. P $ Q
  b. Q
     P                a,b $E

?I (bottom intro)
  a. Q
  b. ~Q
     ?                a,b ?I



(Y)
  1. A $ (B $ C)       P
  2. C                 P
  3. | A               A (g, $I)
  4. | B $ C           1,3 $E
  5. | B               4,2 $E
  6. | B               A (g, $I)
  7. | | B             A (g, $I)
  8. | | C             2 R
  9. | | C             A (g, $I)
 10. | | B             6 R
 11. | B $ C           7-8,9-10 $I
 12. | A               1,11 $E
 13. A $ B             3-5,6-12 $I

We begin by setting up the subderivations to get A $ B by $I. The first is easily completed with a couple applications of $E. To reach the goal for the second by means of the premise (1), we need B $ C as our second "card." So we set up to reach that. As it happens, the extra subderivations at (7)-(8) and (9)-(10) are easy to complete. Again, if you have followed so far, you are doing well. We will be in a better position to create such derivations after our discussion of strategy.
So much for the rules for this sentential part of ND. Before we turn in the next sections to strategy, let us note a couple of features of the rules that may so far have gone without notice. First, premises are not always necessary for ND derivations. Thus, for example, `ND A ! A.
(Z)
  1. | A              A (g, !I)
  2. | A              1 R
  3. A ! A            1-2 !I

If there are no premises, do not panic! Begin in the usual way. In this case, the original goal is A ! A. So we set up to obtain it by !I. And the subderivation is particularly simple. Notice that our derivation of A ! A corresponds to the fact from truth tables that ⊨s A ! A. And we need to be able to derive A ! A from no premises if there is to be the right sort of correspondence between derivations in ND and semantic validity — if we are to have ⊨ P iff `ND P.
Second, observe again that every subderivation comes with an exit strategy. The
exit strategy says whether you intend to complete the subderivation with a particular
goal, or by obtaining a contradiction, and then how the subderivation is to be used


once complete. There are just five rules which appeal to a subderivation: !I, ~I, ~E, _E, and $I. You will complete the subderivation, and then use it by one of these
rules. So these are the only rules which may appear in an exit strategy. If you do not
understand this, then you need to go back and think about the rules until you do.
Finally, it is worth noting a strange sort of case, with application to rules that can
take more than one input of the same type. Consider a simple demonstration that
A `ND A ^ A. We might proceed as in (AA) on the left,
(AA)
  1. A               P
  2. A               1 R
  3. A ^ A           1,2 ^I

(AB)
  1. A               P
  2. A ^ A           1,1 ^I

We begin with A, reiterate so that A appears on different lines, and apply ^I. But we might have proceeded as in (AB) on the right. The rule requires an accessible line on which the left conjunct appears — which we have at (1), and an accessible line on which the right conjunct appears — which we also have on (1). So the rule takes an input for the left conjunct and an input for the right — they just happen to be the same thing. A similar point applies to rules _E and $I which take more than one subderivation as input. Suppose we want to show A _ A `ND A.4
(AC)
  1. A _ A            P
  2. | A              A (g, 1 _E)
  3. | A              2 R
  4. | A              A (g, 1 _E)
  5. | A              4 R
  6. A                1,2-3,4-5 _E

(AD)
  1. A _ A            P
  2. | A              A (g, 1 _E)
  3. | A              2 R
  4. A                1,2-3,2-3 _E

In (AC), we begin in the usual way to get the main goal by ∨E. This leads to the subderivations (2)-(3) and (4)-(5), the first moving from the left disjunct to the goal, and the second from the right disjunct to the goal. But the left and right disjuncts are the same! So we might have simplified as in (AD). ∨E still requires three inputs: first, an accessible disjunction, which we find on (1); second, an accessible subderivation which moves from the left disjunct to the goal, which we find on (2)-(3); third, a subderivation which moves from the right disjunct to the goal, but we have this on (2)-(3). So the justification at (4) of (AD) appeals to the three relevant facts, by appeal to the same subderivation twice. Similarly one could imagine a quick-and-dirty demonstration that ⊢ND A ↔ A.
4. I am reminded of an irritating character in Groundhog Day who repeatedly asks, "Am I right or am I right?" If he implies that the disjunction is true, it follows that he is right.


E6.11. Complete the following derivations by filling in justifications for each line.

a.  1. A ↔ B
    2. | A
    3. | B
    4. A → B

b.  1. A ↔ B
    2. ∼B
    3. | A
    4. | B
    5. | ⊥
    6. ∼A

c.  1. | A ↔ ∼A
    2. | | A
    3. | | ∼A
    4. | | ⊥
    5. | ∼A
    6. | A
    7. | ⊥
    8. ∼(A ↔ ∼A)

d.  1. | ∼A
    2. | | A
    3. | | ∼A
    4. | A → ∼A
    5. | A → ∼A
    6. | | A
    7. | | ∼A
    8. | | ⊥
    9. | ∼A
    10. ∼A ↔ (A → ∼A)

e.  1. ∼A
    2. ∼B
    3. | A
    4. | | ∼B
    5. | | ⊥
    6. | B
    7. | B
    8. | | ∼A
    9. | | ⊥
    10. | A
    11. A ↔ B

E6.12. Each of the following is not a legitimate ND derivation. In each case, explain why.

a.  1. A          P
    2. B          P
    3. A ↔ B      1,2 ↔I

b.  1. A → B      P
    2. B          P
    3. A          1,2 →E

*c. 1. A ↔ B      P
    2. A          1 ↔E

d.  1. B          P
    2. | A        A (g, ↔I)
    3. | B        1 R
    4. | B        A (g, ↔I)
    5. | A        2 R
    6. A ↔ B      2-3,4-5 ↔I

e.  1. A          P
    2. | B        A (g, →I)
    3. | | A      A (g, ↔I)
    4. | | B      2 R
    5. | B        2 R
    6. B → B      2-5 →I
    7. | B        A (g, ↔I)
    8. | A        1 R
    9. A ↔ B      3-4,7-8 ↔I

E6.13. Produce derivations to show each of the following.

*a. (A ∧ B) ↔ A ⊢ND A → B
b. A ↔ (A ∨ B) ⊢ND B → A
c. A ↔ B, B ↔ C, C ↔ D, A ⊢ND D
d. A ↔ B ⊢ND (A → B) ∧ (B → A)
*e. A ↔ (B ∧ C), B ⊢ND A ↔ C
f. (A → B) ∧ (B → A) ⊢ND (A ↔ B)
g. A → (B ↔ C) ⊢ND (A ∧ B) ↔ (A ∧ C)
h. A ↔ B, C ↔ D ⊢ND (A ∧ C) ↔ (B ∧ D)
i. ⊢ND A ↔ A
j. ⊢ND (A ∧ B) ↔ (B ∧ A)
*k. ⊢ND ∼∼A ↔ A
l. ⊢ND (A ↔ B) → (B ↔ A)
m. (A ∧ B) ↔ (A ∧ C) ⊢ND A → (B ↔ C)
n. A → B, ∼A → ∼B ⊢ND A ↔ B
o. ∼A, ∼B ⊢ND A ↔ B

6.2.4 Strategies for a Goal

It is natural to introduce derivation rules, as we have, with relatively simple cases. And you may or may not have been able to see from the start in some cases how derivations would go. But derivations are not always so simple, and (short of genius) nobody can always see how they go. Perhaps this has already been an issue! So we want to think about derivation strategies. As we shall see later, for the quantificational case at least, it is not possible to produce a mechanical algorithm adequate to complete every completable derivation. However, as with chess or other games of strategy, it is possible to say a good deal about how to approach problems effectively. We have said quite a bit already. In this section, we pull together some of the themes, and present the material more systematically.

For natural derivation systems, the overriding strategy is to work goal directedly. What you do at any stage is directed primarily, not by what you have, but by where you want to be. Suppose you are trying to show that Γ ⊢ND P. You are given Γ and the goal P. Perhaps it is tempting to begin by using E-rules to see what you can get from the members of Γ. There is nothing wrong with a bit of this in order to simplify your premises (like arranging the cards in your hand into some manageable order), but the main work of doing a derivation does not begin until you focus on the goal. This is not to say that your premises play no role in strategic thinking. Rather, it is to rule out doing things with them which are not purposefully directed at the end. In the ordinary case, applying the strategies for your goal dictates some new goal; applying strategies for this new goal dictates another; and so forth, until you come to a goal that is easily achieved.
The following strategies for a goal are arranged in rough priority order:

SG 1. If accessible lines contain explicit contradiction, use ∼E to reach goal.
   2. Given an accessible formula with main operator ∨, use ∨E to reach goal.
   3. If goal is in accessible lines (set goals and) attempt to exploit it out.
   4. To reach goal with main operator ⋆, use ⋆I (careful with ∨).
   5. For any goal, if all else fails, try ∼E (especially for atomics and sentences with ∨ as main operator).
If a high priority strategy applies, use it. If one does not apply, simply fall through to the next. The priority order is not necessarily a frequency order. The frequency will likely be something like SG4, SG3, SG5, SG2, SG1. But high priority strategies are such that you should adopt them if they are available, even though most often you will fall through to ones that are more frequently used. I take up the strategies in the priority order.
SG1. If accessible lines contain explicit contradiction, use ∼E to reach goal. For goal B, with an explicit contradiction accessible, you can simply assume ∼B, use your contradiction, and conclude B.

given:
a. A
b. ∼A
   B        (goal)

use:
a. A
b. ∼A
c. | ∼B     A (c, ∼E)
d. | ⊥      a,b ⊥I
   B        c-d ∼E

That is it! No matter what your goal is, given an accessible contradiction, you can reach that goal by ∼E. Since this strategy always delivers, you should jump on it whenever it is available. As an example, try to show A, ∼A ⊢ND (R ∨ S) → T. Your derivation need not involve →I. Hint: I mean it! This section will be far more valuable if you work these examples, and so think through the steps. Here it is in two stages.

(AE)  first stage:                       completed:
      1. A                  P            1. A                  P
      2. ∼A                 P            2. ∼A                 P
      3. | ∼((R ∨ S) → T)   A (c, ∼E)    3. | ∼((R ∨ S) → T)   A (c, ∼E)
         | ⊥                             4. | ⊥                1,2 ⊥I
         (R ∨ S) → T        3- ∼E        5. (R ∨ S) → T        3-4 ∼E

As soon as we see the accessible contradiction, we assume the negation of our goal, with a plan to exit by ∼E. This is accomplished in the first stage. Then it is a simple matter of applying the contradiction, and going to the conclusion by ∼E.

For this strategy, it is not required that the contradiction on accessible lines be directly available. However, the intent is that it should be no real work to obtain it: perhaps an application of ∧E or the like does the job, and it should be possible to obtain the contradiction immediately by some E-rule(s). If you can do this, then your derivation is over: assuming the opposite, applying the rules, and then ∼E reaches the goal. If there is no simple way to obtain a contradiction, fall through to the next strategy.
SG2. Given an accessible formula with main operator ∨, use ∨E to reach goal. As suggested above, you may prefer to avoid ∨E. But this is a mistake: ∨E is your friend! Suppose you have some accessible lines including a disjunction A ∨ B, with goal C. If you go for that very goal by ∨E, the result is a pair of subderivations with goal C, where, in the one case, all those very same accessible lines and A are accessible, and in the other case, all those very same lines and B are accessible. So, in each subderivation, you can only be better off in your attempt to reach C.

given:
a. A ∨ B
   C        (goal)

use:
a. A ∨ B
b. | A      A (g, a∨E)
c. | C      (goal)
d. | B      A (g, a∨E)
e. | C      (goal)
   C        a,b-c,d-e ∨E

As an example, try to show A → B, A ∨ (A ∧ B) ⊢ND A ∧ B. Try showing it without ∨E! Here is the completed derivation.

(AF)  1. A → B            P
      2. A ∨ (A ∧ B)      P
      3. | A              A (g, 2∨E)
      4. | B              1,3 →E
      5. | A ∧ B          3,4 ∧I
      6. | A ∧ B          A (g, 2∨E)
      7. | A ∧ B          6 R
      8. A ∧ B            2,3-5,6-7 ∨E

When we start, there is no accessible contradiction. So we fall through to SG2. Since a premise has main operator ∨, we set up to get the goal by ∨E. This leads to a pair of simple subderivations. Notice that there is almost nothing one could do except set up this way, and that once you do, it is easy!
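The (AF) sequent can also be sanity-checked by truth table; the sketch below (ours, not the book's) searches all four valuations for a counterexample and finds none, mirroring the case split the ∨E derivation performs.

```python
from itertools import product

# A -> B, A v (A ^ B) |= A ^ B: look for a row where both premises
# are true while the conclusion is false.
counterexample = any(
    ((not A) or B)            # premise A -> B
    and (A or (A and B))      # premise A v (A ^ B)
    and not (A and B)         # conclusion fails
    for A, B in product([True, False], repeat=2)
)
print(counterexample)  # False
```

In the A case the first premise delivers B; in the A ∧ B case the conclusion is immediate, which is exactly the two-subderivation structure of (AF).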
SG3. If goal is in accessible lines (set goals and) attempt to exploit it out. In most derivations, you will work toward goals which are successively closer to what can be obtained directly from accessible lines. And you finally come to a goal which can be obtained directly. If it can be obtained directly, do so! In some cases, however, you will come to a stage where your goal exists in accessible lines, but can be obtained only by means of some other result. In this case, you can set that other result as a new goal. A typical case is as follows.

given:
a. A → B
   B        (goal)

use:
a. A → B
b. A        (goal)
   B        a,b →E

The B exists in the premises. You cannot get it without the A. So you set A as a new goal and use it to get the B. It is impossible to represent all the cases where this strategy applies. The idea is that the complete goal exists in accessible lines, and can either be obtained directly by an E-rule, or by an E-rule with some new goal. Observe that the strategy would not apply in case you have A → B and are going for A. Then the goal exists as part of a premise all right. But there is no obvious result such that obtaining it would give you a way to exploit A → B to get the A.

As an example, let us try to show (A → B) ∧ (B → C), (L ↔ S) → A, (L ↔ S) ∧ H ⊢ND C. Here is the completed derivation.

(AG)  1. (A → B) ∧ (B → C)    P
      2. (L ↔ S) → A           P
      3. (L ↔ S) ∧ H           P
      4. B → C                 1 ∧E
      5. A → B                 1 ∧E
      6. L ↔ S                 3 ∧E
      7. A                     2,6 →E
      8. B                     5,7 →E
      9. C                     4,8 →E

The original goal C exists in the premises, as the consequent of the right conjunct of (1). It is easy to isolate the B → C, but this leaves us with the B as a new goal to get the C. B also exists in the premises, as the consequent of the left conjunct of (1). Again, it is easy to isolate A → B, but this leaves us with A as a new goal. We are not in a position to fill in the entire justification for our new goals, but there is no harm filling in what we can, to remind us where we are going. So far, so good. But A also exists in the premises, as the consequent of (2); to get it, we set L ↔ S as a goal. But L ↔ S exists in the premises, and is easy to get by ∧E. So we complete the derivation with the steps that motivated the subgoals in the first place. Observe the way we move from one goal to the next, until finally there is a stage where SG3 applies in its simplest form, so that L ↔ S is obtained directly.
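The chain of subgoals in (AG) has a direct semantic counterpart: each conditional passes truth down the chain. The brute-force check below (ours, not the book's) confirms the sequent over all six sentence letters.

```python
from itertools import product

def imp(x, y):
    """Material conditional: x -> y."""
    return (not x) or y

# (A -> B) ^ (B -> C), (L <-> S) -> A, (L <-> S) ^ H |= C
counterexample = any(
    imp(A, B) and imp(B, C)       # premise 1
    and imp(L == S, A)            # premise 2 (L <-> S as L == S)
    and (L == S) and H            # premise 3
    and not C                     # conclusion fails
    for A, B, C, L, S, H in product([True, False], repeat=6)
)
print(counterexample)  # False
```

On any candidate row, premise 3 forces L ↔ S, premise 2 forces A, and premise 1 forces B and then C, exactly the order in which the derivation discharges its subgoals.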
SG4. To reach goal with main operator ⋆, use ⋆I (careful with ∨). This is the most frequently used strategy, the one most likely to structure your derivation as a whole. ∼E to the side, the basic structure of I-rules and E-rules in ND gives you just one way to generate a formula with main operator ⋆, whatever that may be. In the ordinary case, then, you can expect to obtain a formula with main operator ⋆ by the corresponding I-rule. Thus, for a typical example,

given:
   A → B    (goal)

use:
a. | A      A (g, →I)
b. | B      (goal)
   A → B    a-b →I

Again, it is difficult to represent all the cases where this strategy might apply. It makes sense to consider it for formulas with any main operator. Be cautious, however, for formulas with main operator ∨. There are cases where it is possible to prove a disjunction, but not to prove it by ∨I. One might have conclusive reason to believe the butler or the maid did it, without conclusive reason to believe the butler did it, or conclusive reason to believe the maid did it (perhaps the butler and maid were the only ones with means and motive). You should consider the strategy for ∨. But it does not always work.

As an example, let us show D ⊢ND A → (B → (C → D)). Here is the completed derivation.

(AH)  1. D                       P
      2. | A                     A (g, →I)
      3. | | B                   A (g, →I)
      4. | | | C                 A (g, →I)
      5. | | | D                 1 R
      6. | | C → D               4-5 →I
      7. | B → (C → D)           3-6 →I
      8. A → (B → (C → D))       2-7 →I

Initially, there is no contradiction or disjunction in the premises, and neither do we see the goal. So we fall through to strategy SG4 and, since the main operator of the goal is →, set up to get it by →I. This gives us B → (C → D) as a new goal. Since this has main operator →, and it remains that other strategies do not apply, we fall through to SG4, and set up to get it by →I. This gives us C → D as a new goal. As before, with C → D as the goal, there is no contradiction on accessible lines, no accessible formula has main operator ∨, and the goal does not itself appear on accessible lines. Since the main operator is →, we set up again to get it by →I. This gives us D as a new subgoal. But D does exist on an accessible line. Thus we are faced with a particularly simple instance of strategy SG3. To complete the derivation, we simply reiterate D from (1), and follow our exit strategies as planned.
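The nested →I structure of (AH) corresponds semantically to the fact that a true consequent makes every surrounding conditional true. A quick check (ours, not the book's):

```python
from itertools import product

# D |= A -> (B -> (C -> D)): whenever D is true, the innermost
# conditional C -> D is true, and so is each conditional wrapped
# around it.
counterexample = any(
    D and not ((not A) or ((not B) or ((not C) or D)))
    for A, B, C, D in product([True, False], repeat=4)
)
print(counterexample)  # False
```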
SG5. For any goal, if all else fails, try ∼E (especially for atomics and sentences with ∨ as main operator). The previous strategy has no application to atomics, because they have no main operator, and we have suggested that it is problematic for disjunctions. This last strategy applies particularly in those cases. So it is applicable in cases where other strategies seem not to apply.

given:
   A        (goal)

use:
a. | ∼A     A (c, ∼E)
b. | ⊥      (goal)
   A        a-b ∼E

It is possible to obtain any formula by ∼E, by assuming the negation of it and going for a contradiction. So this strategy is generally applicable. And it cannot hurt: if you could have reached the goal anyway, you can obtain the goal A under the assumption, and then use it for a contradiction with the assumed ∼A, which lets you exit the assumption with the A you would have had anyway. And the assumption may help: for, as with ∨E, in going for the contradiction you have whatever accessible lines you had before, plus the new assumption. And, in many cases, the assumption puts you in a position to make progress you would not have been able to make before.

As a simple example of the strategy, try showing ∼A → B, ∼B ⊢ND A. Here is the completed derivation.

(AI)  1. ∼A → B    P
      2. ∼B        P
      3. | ∼A      A (c, ∼E)
      4. | B       1,3 →E
      5. | ⊥       4,2 ⊥I
      6. A         3-5 ∼E

Initially, there is no contradiction in the premises. Sometimes the distinction between this strategy and the first can seem obscure (and, in the end, it may not be all that important to separate them). However, for the first, accessible lines by themselves are sufficient for a contradiction. In this case, from the premises we have ∼B, but cannot get the B, and so do not have a contradiction. So the first strategy does not apply. There is no formula with main operator ∨. Similarly, though A occurs in the antecedent of (1), there is no obvious way to exploit the premise to isolate the A; so we do not see the goal in the relevant form in the premises. The goal A has no operators, so it has no main operator and strategy SG4 does not apply. So we fall through to strategy SG5, and set up to get the goal by ∼E. In this case, the subderivation is particularly easy to complete. Perhaps the case is too easy and may seem to be a case of SG1. In this case, though, the contradiction does not become available until after you make the assumption. In the case of SG1, it is the prior availability of the contradiction that drives your assumption.
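Reading (AI) as ∼A → B, ∼B ⊢ A, the sequent is the modus tollens pattern, and a truth-table search (ours, not the book's) confirms it has no counterexample.

```python
from itertools import product

# ~A -> B, ~B |= A: the premise ~A -> B is (A or B); with ~B,
# the only way to satisfy it is for A to be true.
counterexample = any(
    (A or B)          # premise ~A -> B
    and (not B)       # premise ~B
    and (not A)       # conclusion A fails
    for A, B in product([True, False], repeat=2)
)
print(counterexample)  # False
```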
Here is an extended example which combines a number of the strategies considered so far. We show that B ∨ ∼A ⊢ND A → B. You want especially to absorb the mode of thinking about this case as a way to approach exercises.

(AJ)  1. B ∨ ∼A    P
         A → B     (goal)

There is no contradiction in accessible premises; so strategy SG1 is inapplicable. Strategy SG2 tells us to go for the goal by ∨E. Another option is to fall through to SG4 and go for A → B by →I and then apply ∨E to get the B, but →I has lower priority, so let us follow the official procedure. Given an accessible line with main operator ∨, we use ∨E to reach the goal: one subderivation moves from the left disjunct B to A → B, and another from the right disjunct ∼A to A → B. Having set up for ∨E on line (1), we treat B ∨ ∼A as effectively used up and so out of the picture.

Concentrating, for the moment, on the first subderivation: there is no contradiction on accessible lines; neither is there another accessible disjunction; and the goal is not in the premises. So we fall through to SG4; to reach a goal with main operator →, use →I. In this case, the subderivation is easy to complete. The new goal B exists as such in the premises, so we are faced with a simple instance of SG3: the first subderivation is completed by reiterating B from line (2), and following the exit strategy.

For the second main subderivation, tick off in your head: there is no accessible contradiction; neither is there another accessible formula with main operator ∨; and the goal is not in the premises. So we fall through to strategy SG4 again, and set up to get A → B by →I. This time, there is an accessible contradiction at (6) and (7). So SG1 applies; if accessible lines contain explicit contradiction, use ∼E to reach the goal. We are in a position to complete the derivation as follows.

1. B ∨ ∼A          P
2. | B             A (g, 1∨E)
3. | | A           A (g, →I)
4. | | B           2 R
5. | A → B         3-4 →I
6. | ∼A            A (g, 1∨E)
7. | | A           A (g, →I)
8. | | | ∼B        A (c, ∼E)
9. | | | ⊥         6,7 ⊥I
10.| | B           8-9 ∼E
11.| A → B         7-10 →I
12. A → B          1,2-5,6-11 ∨E
This derivation is fairly complicated! But we did not need to see how the whole thing
would go from the start. Indeed, it is hard to see how one could do so. Rather it was
enough to see, at each stage, what to do next. That is the beauty of our goal-oriented
approach.
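Reading the example as B ∨ ∼A ⊢ A → B, the semantic check below (ours, not the book's) makes the result unsurprising: premise and conclusion agree on every row of the truth table, so no counterexample exists.

```python
from itertools import product

# B v ~A |= A -> B: on every valuation, B or (not A) has the same
# truth value as (not A) or B, so the premise can never be true
# while the conclusion is false.
counterexample = any(
    (B or (not A)) and not ((not A) or B)
    for A, B in product([True, False], repeat=2)
)
print(counterexample)  # False
```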
A couple of final remarks before we turn to exercises: First, as we have said from the start, assumptions are only introduced in conjunction with exit strategies. This almost requires goal-directed thinking. And it is important to see how pointless assumptions are without an exit strategy! Results inside subderivations cannot be used for a final conclusion except insofar as there is a way to exit the subderivation and use it whole. So the point of the strategy is to ensure that the subderivation has a use for getting where you want to go.

Second, in going for a contradiction, as with SG4 or SG5, the new goal is not a definite formula; any contradiction is sufficient for the rule and for a derivation of ⊥. So the strategies for a goal do not directly apply. This motivates the strategies for a contradiction of the next section. For now, I will say just this: If there is a contradiction to be had, and you can reduce formulas on accessible lines to atomics and negated atomics, the contradiction will appear at that level. So one way to go for a contradiction is simply by applying E-rules to accessible lines, to generate what atomics and negated atomics you can.

Proofs for the following theorems are left as exercises. You should not start them now, but wait for the assignment in E6.16. The first three may remind you of axioms from chapter 3. The others foreshadow rules from the system ND+, which we will see shortly.
T6.1. ⊢ND P → (Q → P)

T6.2. ⊢ND (O → (P → Q)) → ((O → P) → (O → Q))

*T6.3. ⊢ND (∼Q → ∼P) → ((∼Q → P) → Q)

T6.4. A → B, ∼B ⊢ND ∼A

T6.5. A → B, B → C ⊢ND A → C

T6.6. A ∨ B, ∼A ⊢ND B

T6.7. A ∨ B, ∼B ⊢ND A

T6.8. A ↔ B, A ⊢ND B

T6.9. A ↔ B, B ⊢ND A

T6.10. ⊢ND (A ∧ B) ↔ (B ∧ A)

*T6.11. ⊢ND (A ∨ B) ↔ (B ∨ A)

T6.12. ⊢ND (A → B) ↔ (∼B → ∼A)

T6.13. ⊢ND (A → (B → C)) ↔ ((A ∧ B) → C)

T6.14. ⊢ND (A ∧ (B ∧ C)) ↔ ((A ∧ B) ∧ C)

T6.15. ⊢ND (A ∨ (B ∨ C)) ↔ ((A ∨ B) ∨ C)

T6.16. ⊢ND A ↔ ∼∼A

T6.17. ⊢ND A ↔ (A ∧ A)

T6.18. ⊢ND A ↔ (A ∨ A)
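Since the first three theorems are theorems from no premises, each should come out true on every row of its truth table. The sketch below (ours, not the book's) verifies T6.1 through T6.3 semantically, reading the metavariables as sentence letters.

```python
from itertools import product

def imp(x, y):
    """Material conditional: x -> y."""
    return (not x) or y

rows2 = list(product([True, False], repeat=2))
rows3 = list(product([True, False], repeat=3))

# T6.1: P -> (Q -> P)
t61 = all(imp(P, imp(Q, P)) for P, Q in rows2)
# T6.2: (O -> (P -> Q)) -> ((O -> P) -> (O -> Q))
t62 = all(imp(imp(O, imp(P, Q)), imp(imp(O, P), imp(O, Q)))
          for O, P, Q in rows3)
# T6.3: (~Q -> ~P) -> ((~Q -> P) -> Q)
t63 = all(imp(imp(not Q, not P), imp(imp(not Q, P), Q)) for P, Q in rows2)

print(t61, t62, t63)  # True True True
```

A passing semantic check is of course no substitute for the requested derivations, but it is a useful way to catch a mis-stated exercise before attempting one.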
E6.14. For each of the following, (i) which goal strategy applies? and (ii) what is the next step? If the strategy calls for a new subgoal, show the subgoal; if it calls for a subderivation, set up the subderivation. In each case, explain your response. Hint: Each goal strategy applies once.

a.  1. A ∨ B    P
    2. ∼A       P

b.  1. J ∧ S    P
    2. S → K    P
    K    (goal)

*c. 1. A ↔ B    P
    B ↔ A    (goal)

d.  1. A ↔ B    P
    2. ∼A       P
    ∼B    (goal)

e.  1. A ∧ B    P
    2. ∼A       P
    K ∨ J    (goal)
E6.15. Produce derivations to show each of the following. No worked out answers are provided. However, if you get stuck, you will find strategy hints in the back.

*a. A ↔ (A → B) ⊢ND A → B
*b. (A ∨ B) → (B ↔ D), B ⊢ND B ∧ D
*c. (A ∧ C), (A ∧ C) ↔ B ⊢ND A ∨ B
*d. A ∧ (C ∧ B), (A ∨ D) → E ⊢ND E
*e. A → B, B → C ⊢ND A → C
*f. (A ∧ B) → (C ∧ D) ⊢ND ((A ∧ B) → C) ∧ ((A ∧ B) → D)
*g. A → (B → C), (A ∧ D) → E, C → D ⊢ND (A ∧ B) → E
*h. (A → B) ∧ (B → C), ((D ∨ E) ∨ H) → A, (D ∨ E) ∧ H ⊢ND C
*i. A → (B ∧ C), ∼C ⊢ND ∼(A ∧ D)
*j. A → (B → C), D → B ⊢ND A → (D → C)
*k. A → (B → C) ⊢ND ∼C → ∼(A ∧ B)
*l. ∼(A ∧ B) → ∼A ⊢ND A → B
*m. B ↔ ∼A, C → B, A ∧ C ⊢ND K
*n. ∼A ⊢ND A → B
*o. ∼A ↔ ∼B ⊢ND A ↔ B
*p. (A ∨ B) ∨ C, B ↔ C ⊢ND C ∨ A
*q. ⊢ND A → (A ∨ B)
*r. ⊢ND A → (B → A)
*s. ⊢ND (A ↔ B) → (A → B)
*t. ⊢ND (A ∧ ∼A) → (B ∧ ∼B)
*u. ⊢ND (A → B) → ((C → A) → (C → B))
*v. ⊢ND ((A → B) ∧ ∼B) → ∼A
*w. ⊢ND A → (∼B → ∼(A → B))
*x. ⊢ND ∼A → ((B ∧ A) → C)
*y. ⊢ND (A → B) → (∼B → ∼(A ∧ D))
*E6.16. Produce derivations to demonstrate each of T6.1 - T6.18. This is a mix: some repetitious, some challenging! But, when we need the results later, we will be glad to have done them now. Hint: do not worry if one or two get a bit longer than you are used to; they should!

6.2.5 Strategies for a Contradiction

In going for a contradiction, the Q and ∼Q can be any sentence. So the strategies for reaching a definite goal do not apply. This motivates strategies for a contradiction. Again, the strategies are in rough priority order.
SC 1. Break accessible formulas down into atomics and negated atomics.
   2. Given a disjunction in a subderivation for ∼E or ∼I, go for ⊥ by ∨E.
   3. Set as goal the opposite of some negation (something that cannot itself be broken down). Then apply strategies for a goal to reach it.
   4. For some P such that both P and ∼P lead to contradiction: Assume P (∼P), obtain the first contradiction, and conclude ∼P (P); then obtain the second contradiction; this is the one you want.
Again, the priority order is not the frequency order. The frequency is likely to be something like SC1, SC3, SC4, SC2. Also sometimes, but not always, SC3 and SC4 coincide: in deriving the opposite of some negation, you end up assuming a P such that P and ∼P lead to contradiction.
SC1. Break accessible formulas down into atomics and negated atomics. As we have already said, if there is a contradiction to be had, and you can break premises into atomics and negated atomics, the contradiction will appear at that level. Thus, for example, showing A ∧ B, ∼B ⊢ND ∼C:

(AK)  1. A ∧ B    P
      2. ∼B       P
      3. | C      A (c, ∼I)
      4. | A      1 ∧E
      5. | B      1 ∧E
      6. | ⊥      5,2 ⊥I
      7. ∼C       3-6 ∼I

Our strategy for the goal is SG4, with an application of ∼I. Then the goal is to obtain a contradiction. And our first thought is to break accessible lines down to atomics and negated atomics. Perhaps this example is too simple. And you may wonder about the point of getting A at (4); there is no need for A at (4). But this merely illustrates the point: if you can get to atomics and negated atomics (randomly as it were) the contradiction will appear in the end.

As another example, try showing A ∧ (B ∧ ∼C), ∼F → D, (A ∧ D) → C ⊢ND F. Here is the completed derivation.

(AL)  1. A ∧ (B ∧ ∼C)    P
      2. ∼F → D           P
      3. (A ∧ D) → C      P
      4. | ∼F             A (c, ∼E)
      5. | D              2,4 →E
      6. | A              1 ∧E
      7. | A ∧ D          6,5 ∧I
      8. | C              3,7 →E
      9. | B ∧ ∼C         1 ∧E
      10.| ∼C             9 ∧E
      11.| ⊥              8,10 ⊥I
      12. F               4-11 ∼E

This time, our strategy for the goal falls through to SG5. After that, again, our goal is to obtain a contradiction, and our first thought is to break premises down to atomics and negated atomics. The assumption ∼F gets us D with (2). We can get A from (1), and then C with the A and D together. Then ∼C follows from (1) by a couple applications of ∧E. You might proceed to get the atomics in a different order, but the basic idea of any such derivation is likely to be the same.
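Reading (AL) as A ∧ (B ∧ ∼C), ∼F → D, (A ∧ D) → C ⊢ F, the semantic check below (ours, not the book's) retraces the derivation's reasoning: any row falsifying F is forced through D and C into a clash with ∼C.

```python
from itertools import product

# A ^ (B ^ ~C), ~F -> D, (A ^ D) -> C |= F
counterexample = any(
    (A and B and (not C))            # premise 1
    and (F or D)                     # premise 2, ~F -> D
    and ((not (A and D)) or C)       # premise 3
    and (not F)                      # conclusion F fails
    for A, B, C, D, F in product([True, False], repeat=5)
)
print(counterexample)  # False
```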
SC2. Given a disjunction in a subderivation for ∼E or ∼I, go for ⊥ by ∨E. This strategy applies only occasionally, though it is related to one that is common for the quantificational case. In most cases, you will have applied ∨E by SG2 prior to setting up for ∼E or ∼I. In some cases, however, a disjunction is uncovered only inside a subderivation for a tilde rule. In any such case, SC2 has high priority for the same reasons as SG2: you can only be better off in your attempt to reach a contradiction inside the subderivations for ∨E than before. So the strategy says to set ⊥ as the goal you need for ∼E or ∼I, and go for it by ∨E.

given:
a. | P        A (c, ∼I)
b. | A ∨ B
   | ⊥        (goal)
   ∼P         a- ∼I

use:
a. | P        A (c, ∼I)
b. | A ∨ B
c. | | A      A (c, b∨E)
d. | | ⊥
e. | | B      A (c, b∨E)
f. | | ⊥
g. | ⊥        b,c-d,e-f ∨E
   ∼P         a-g ∼I

Observe that, since the subderivations for ∨E have goal ⊥, they have exit strategy c rather than g. Here is another advantage of our standard use of ⊥. Because ⊥ is a particular sentence, it works as a goal sentence for this rule. We might obtain ⊥ by one contradiction in the first subderivation, and by another in the second. But, once we have obtained ⊥ in each, we are in a position to exit by ∨E in the usual way, and so to apply ∼I.

Here is an example. We show ∼A ∧ ∼B ⊢ND ∼(A ∨ B). Here is the completed derivation.

(AM)  1. ∼A ∧ ∼B     P
      2. | A ∨ B     A (c, ∼I)
      3. | | A       A (c, 2∨E)
      4. | | ∼A      1 ∧E
      5. | | ⊥       3,4 ⊥I
      6. | | B       A (c, 2∨E)
      7. | | ∼B      1 ∧E
      8. | | ⊥       6,7 ⊥I
      9. | ⊥         2,3-5,6-8 ∨E
      10. ∼(A ∨ B)   2-9 ∼I

In this case, our strategy for the goal is SG4. The disjunction appears only inside the subderivation, as the assumption for ∼I. We might obtain ∼A and ∼B from (1), but after that, there are no more atomics or negated atomics to be had. So we fall through to SC2, with ⊥ as the goal for ∨E. The first subderivation is easily completed from atomics and negated atomics. And the second is completed the same way. Observe that it is only because of our assumptions for ∨E that we are able to get the contradictions at all.
SC3. Set as goal the opposite of some negation (something that cannot itself be broken down). Then apply standard strategies for the goal. You will find yourself using this strategy often, after SC1. In the ordinary case, if accessible formulas cannot be broken into atomics and negated atomics, it is because complex forms are sealed off by main operator ∼. The tilde blocks SC1 or SC2. But you can turn this lemon to lemonade: taking the complex ∼Q as one half of a contradiction, set Q as goal. For some complex Q,

given:
a. ∼Q
b. | A      A (c, ∼I)
   | ⊥      (goal)
   ∼A

use:
a. ∼Q
b. | A      A (c, ∼I)
c. | Q      (goal)
d. | ⊥      c,a ⊥I
   ∼A

We are after a contradiction. Supposing that we cannot break ∼Q into its parts, our efforts to apply other strategies for a contradiction are frustrated. But SC3 offers an alternative: set Q itself as a new goal and use this with ∼Q to reach ⊥. Then strategies for the new goal take over. If we reach the new goal, we have the contradiction we need.

As an example, try showing B, ∼(A → B) ⊢ND ∼A. Here is the completed derivation.

(AN)  1. B            P
      2. ∼(A → B)     P
      3. | A          A (c, ∼I)
      4. | | A        A (g, →I)
      5. | | B        1 R
      6. | A → B      4-5 →I
      7. | ⊥          6,2 ⊥I
      8. ∼A           3-7 ∼I

Our strategy for the goal is SG4; for main operator ∼ we set up to get the goal by ∼I. So we need a contradiction. In this case, there is nothing to be done by way of obtaining atomics and negated atomics, and there is no disjunction in the scope of the assumption for ∼I. So we fall through to strategy SC3. ∼(A → B) on (2) has main operator ∼, so we set A → B as a new subgoal with the idea to use it for contradiction. Since A → B is a definite subgoal, we proceed with strategies for the goal in the usual way. The main operator is →, so we set up to get it by →I. The subderivation is particularly easy to complete. And we finish by executing the exit strategies as planned.
SC4. For some P such that both P and ∼P lead to contradiction: Assume P (∼P), obtain the first contradiction, and conclude ∼P (P); then obtain the second contradiction; this is the one you want.

given:
a. | A        A (c, ∼I)
   | ⊥        (goal)
   ∼A

use:
a. | A        A (c, ∼I)
b. | | P      A (c, ∼I)
c. | | ⊥      (first contradiction)
d. | ∼P       b-c ∼I
e. | ⊥        (second contradiction)
   ∼A         a-e ∼I

The essential point is that both P and ∼P somehow lead to contradiction. Thus the assumption of one leads by ∼I or ∼E to the other; and since both lead to contradiction, you end up with the contradiction you need. This is often a powerful way of making progress when none seems possible by other means.

Let us try to show A ↔ B, B ↔ C, C ↔ ∼A ⊢ND K. Here is the completed derivation.

(AO)  1. A ↔ B       P
      2. B ↔ C       P
      3. C ↔ ∼A      P
      4. | ∼K        A (c, ∼E)
      5. | | A       A (c, ∼I)
      6. | | B       1,5 ↔E
      7. | | C       2,6 ↔E
      8. | | ∼A      3,7 ↔E
      9. | | ⊥       5,8 ⊥I
      10.| ∼A        5-9 ∼I
      11.| C         3,10 ↔E
      12.| B         2,11 ↔E
      13.| A         1,12 ↔E
      14.| ⊥         13,10 ⊥I
      15. K          4-14 ∼E

Our strategy for the goal falls all the way through to SG5. So we assume the negation of the goal, and go for a contradiction. In this case, there are no atomics or negated atomics to be had. There is no disjunction under the scope of the negation, and no formula is itself a negation such that we could reiterate and build up to the opposite. But given formula A we can use ↔E to reach ∼A and so contradiction. And, similarly, given ∼A we can use ↔E to reach A and so contradiction. So, following SC4, we assume one of them to get the other. The first contradiction appears easily at the level of atomics and negated atomics. This gives us ∼A. And with ∼A, the second contradiction also comes easily, at the level of atomics and negated atomics.

Though it can be useful, this strategy is often difficult to see. And there is no obvious way to give a strategy for using the strategy! The best thing to say is that you should look for it when the other strategies seem to fail.
Let us consider an extended example which combines some of the strategies. We
show that A ! B `ND B _ A.
1. A ! B

(AP)
B _A

In this case, we do not see a contradiction in the premises; there is no formula with
main operator ∨ in the premises; and the goal does not appear in the premises. So we
might try going for the goal by ∨I in application of SG4. This would require getting
a B or an A. It is reasonable to go this way, but it turns out to be a dead end. (You
should convince yourself that this is so.) Thus we fall through to SG5.

 1. ¬A → B          P
 2. | ¬(B ∨ A)      A (c, ¬E)
    |
    | ⊥
    B ∨ A           2-_ ¬E

When all else fails (and especially considering that our goal has main operator ∨), set up to get the goal by ¬E.

To get a contradiction, our first thought is to go for atomics and negated atomics. But
there is nothing to be done. Similarly, there is no formula with main operator ∨. So
we fall through to SC3 and continue as follows.
 1. ¬A → B          P
 2. | ¬(B ∨ A)      A (c, ¬E)
    |
    | B ∨ A
    | ⊥             _,2 ⊥I
    B ∨ A           2-_ ¬E

Given a negation that cannot be broken down, set up to get the contradiction by building up to the opposite.

It might seem that we have made no progress, since our new goal is no different than
the original! But there is progress insofar as we have a premise not available before
(more on this in a moment). At this stage, we can get the goal by ∨I. Either side will
work, but it is easier to start with the A. So we set up for that.
 1. ¬A → B          P
 2. | ¬(B ∨ A)      A (c, ¬E)
    |
    | A
    | B ∨ A         _ ∨I
    | ⊥             _,2 ⊥I
    B ∨ A           2-_ ¬E

For a goal with main operator ∨, go for the goal by ∨I.

Now the goal is atomic. Again, there is no contradiction or formula with main operator ∨ in the premises. The goal is not in the premises in any form we can hope to
exploit. And the goal has no main operator. So, again, we fall through to SG5.

 1. ¬A → B          P
 2. | ¬(B ∨ A)      A (c, ¬E)
 3. | | ¬A          A (c, ¬E)
    | |
    | | ⊥
    | A             3-_ ¬E
    | B ∨ A         _ ∨I
    | ⊥             _,2 ⊥I
    B ∨ A           2-_ ¬E

When all else fails, and especially for atomics, go for the goal by ¬E.

Again, our first thought is to get atomics and negated atomics. We can get B from
lines (1) and (3) by →E. But that is all. So we will not get a contradiction from atomics and negated atomics alone. There is no formula with main operator ∨. However,
the possibility of getting a B suggests that we can build up to the opposite of line
(2). That is, we complete the subderivation as follows, and follow our exit strategies
to complete the whole.
 1. ¬A → B          P
 2. | ¬(B ∨ A)      A (c, ¬E)
 3. | | ¬A          A (c, ¬E)
 4. | | B           1,3 →E
 5. | | B ∨ A       4 ∨I
 6. | | ⊥           5,2 ⊥I
 7. | A             3-6 ¬E
 8. | B ∨ A         7 ∨I
 9. | ⊥             8,2 ⊥I
10. B ∨ A           2-9 ¬E

Get the contradiction by building up to the opposite of an existing negation.

A couple of comments: First, observe that we build up to the opposite of ¬(B ∨ A)
twice, coming at it from different directions. First we obtain the left side B and
use ∨I to obtain the whole, then the right side A and use ∨I to obtain the whole.
This is typical with negated disjunctions. Second, note that this derivation might be
reconceived as an instance of SC4. ¬A gets us B, and so B ∨ A, which contradicts
¬(B ∨ A). But A gets us B ∨ A which, again, contradicts ¬(B ∨ A). So both A and
¬A lead to contradiction; so we assume one (¬A), and get the first contradiction;
this gets us A, from which the second contradiction follows.
The general pattern of this derivation is typical for formulas with main operator ∨
in ND. For P ∨ Q we may not be able to prove either P or Q from scratch, so that
the formula is not directly provable by ∨I. However, it may be indirectly provable.
If it is provable at all, it must be that the negation of one side forces the other. So it
must be possible to get the P or the Q under the additional assumption that the other
is false. This makes possible an argument of the following form.
(AQ)

 a. | ¬(P ∨ Q)      A (c, ¬E)
 b. | | ¬P          A (c, ¬E)
    | | ⋮
 c. | | Q
 d. | | P ∨ Q       c ∨I
 e. | | ⊥           d,a ⊥I
 f. | P             b-e ¬E
 g. | P ∨ Q         f ∨I
 h. | ⊥             g,a ⊥I
 i. P ∨ Q           a-h ¬E

The work in this routine is getting from the negation of one side of the disjunction
to the other. Thus if from the assumption ¬P it is possible to derive Q, all the rest
is automatic! We have just seen an extended example (AP) of this pattern. It may be
seen as an application of SC3 or SC4 (or both). Where a disjunction may be provable
but not provable directly by ∨I, it will work by this method! So in difficult cases when the
goal is a disjunction, it is wise to think about whether you can get one side from the
negation of the other. If you can, set up as above. (And reconsider this method when
we get to a simplified version in the extended system ND+.)
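Derivations establish provability; by the soundness and completeness results discussed later in the book, a sentential sequent is derivable in ND just in case it is semantically valid. So before hunting for a derivation, it can pay to sanity-check a sequent by truth table. The following sketch (the helper `entails` and the lambda encodings are our own, not from the text) confirms the example sequent ¬A → B ⊢ B ∨ A:

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Brute-force semantic entailment check over all valuations.
    premises/conclusion are functions from a valuation dict to bool."""
    for vals in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # countermodel found
    return True

# (AP): the premise not-A -> B, and the goal B or A
premise = lambda v: (not v['A']) <= v['B']   # bool <= bool encodes the material conditional
goal    = lambda v: v['B'] or v['A']

print(entails([premise], goal, ['A', 'B']))  # True: the sequent is valid
print(entails([], goal, ['A', 'B']))         # False: B or A is not a theorem on its own
```

Of course a truth table only tells you *that* a derivation exists, not how to find one; the strategies remain the tool for that.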
This example was fairly difficult! You may see some longer, but you will not
see many harder. The strategies are not a cookbook for performing all derivations;
doing derivations remains an art. But the strategies will give you a good start, and
take you a long way through the exercises that follow. The theorems immediately
below again foreshadow rules of ND+.
*T6.19. ⊢ND ¬(A ∧ B) ↔ (¬A ∨ ¬B)

T6.20. ⊢ND ¬(A ∨ B) ↔ (¬A ∧ ¬B)

T6.21. ⊢ND (A → B) ↔ (¬A ∨ B)

T6.22. ⊢ND (¬A → B) ↔ (A ∨ B)

T6.23. ⊢ND A ∧ (B ∨ C) ↔ (A ∧ B) ∨ (A ∧ C)

T6.24. ⊢ND A ∨ (B ∧ C) ↔ (A ∨ B) ∧ (A ∨ C)

T6.25. ⊢ND (A ↔ B) ↔ (A → B) ∧ (B → A)

T6.26. ⊢ND (A ↔ B) ↔ (A ∧ B) ∨ (¬A ∧ ¬B)
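Each of T6.19 through T6.26 pairs with a tautologous biconditional, so (again leaning on soundness and completeness) a quick truth-table loop can confirm that each is true on every valuation before you set out to derive it. The encoding below is a sketch in our own notation, not the text's:

```python
from itertools import product

def iff(p, q): return p == q          # biconditional
def imp(p, q): return (not p) or q    # material conditional

# each entry encodes one of T6.19-T6.26 as a function of a valuation of A, B, C
theorems = [
    lambda a, b, c: iff(not (a and b), (not a) or (not b)),          # T6.19
    lambda a, b, c: iff(not (a or b), (not a) and (not b)),          # T6.20
    lambda a, b, c: iff(imp(a, b), (not a) or b),                    # T6.21
    lambda a, b, c: iff(imp(not a, b), a or b),                      # T6.22
    lambda a, b, c: iff(a and (b or c), (a and b) or (a and c)),     # T6.23
    lambda a, b, c: iff(a or (b and c), (a or b) and (a or c)),      # T6.24
    lambda a, b, c: iff(iff(a, b), imp(a, b) and imp(b, a)),         # T6.25
    lambda a, b, c: iff(iff(a, b), (a and b) or (not a and not b)),  # T6.26
]

for i, t in enumerate(theorems, start=19):
    assert all(t(*vals) for vals in product([True, False], repeat=3)), f"T6.{i}"
print("T6.19-T6.26 are all tautologous")
```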
E6.17. Each of the following begins with a simple application of ¬I or ¬E for SG4
or SG5. Complete the derivations, and explain your use of strategies for a
contradiction. Hint: Each of the strategies for a contradiction is used at least
once.

*a.  1. A ∧ B            P
     2. ¬(A ∧ C)         P
     3. | C              A (c, ¬I)
        |
        | ⊥
        ¬C

 b.  1. (¬B ∨ A) → D     P
     2. C ∧ ¬D           P
     3. | ¬B             A (c, ¬E)
        |
        | ⊥
        B

 c.  1. ¬A ∧ ¬B          P
     2. | A ∨ B          A (c, ¬I)
        |
        | ⊥
        ¬(A ∨ B)

 d.  1. A ↔ ¬A           P
     2. | ¬B             A (c, ¬E)
        |
        | ⊥
        B

 e.  1. ¬(A → B)         P
     2. | ¬A             A (c, ¬E)
        |
        | ⊥
        A

E6.18. Produce derivations to show each of the following. No worked out answers
are provided. However, if you get stuck, you will find strategy hints in the
back.

*a. A → ¬(B ∧ C), B → C ⊢ND A → ¬B
*b. ⊢ND (¬A → A) → A
*c. ¬A ∨ ¬B ⊢ND ¬(A ∧ B)
*d. ¬(A ∧ B), ¬(A ∧ ¬B) ⊢ND ¬A
*e. ⊢ND A ∨ ¬A
*f. ⊢ND A ∨ (A → B)
*g. A ∨ B, ¬A ∨ B ⊢ND B
*h. A ↔ ¬(B ∨ C), ¬B → C ⊢ND ¬A
*i. A ↔ ¬B ⊢ND (C ↔ A) ↔ ¬(C ↔ B)
*j. A ↔ (B ↔ C), ¬(A ∨ B) ⊢ND C
*k. ¬C ∨ ((A ∨ B) ∧ (C → E)), A → D, D → ¬A ⊢ND ¬C ∨ B
*l. ¬(A → B), ¬(B → C) ⊢ND D
*m. ¬C → ¬A, ¬(B ∧ ¬C) ⊢ND (A ∨ B) → C
*n. ¬(A ↔ B) ⊢ND ¬A ↔ B
*o. A ↔ ¬B, B ↔ C ⊢ND ¬(A ↔ C)
*p. A ∨ B, ¬B ∨ C, ¬C ⊢ND A
*q. (¬A ∨ C) ∨ D, D → B ⊢ND (A ∧ ¬B) → C
*r. A ∨ ¬D, D ↔ (E ∨ C), (C ∧ B) ∨ (C ∧ (F → C)) ⊢ND A
*s. (A ∨ B) ∨ (C ∧ D), (A ↔ E) ∧ (B → F), G ↔ (E ∨ F), C → B ⊢ND G
*t. (A ∨ B) ∧ C, C → (D ∧ ¬A), B → (A ∨ E) ⊢ND E ∨ F

*E6.19. Produce derivations to demonstrate each of T6.19 - T6.26.

E6.20. Produce derivations to show each of the following. These are particularly
challenging. If you can get them, you are doing very well! (In keeping with
the spirit of the challenge, no help is provided in the back of the book.)

 a. A ↔ (B ↔ C) ⊢ND (A ↔ B) ↔ C
 b. (A ∨ B) → (A ∨ C) ⊢ND A ∨ (B → C)
 c. A → (B ∨ C) ⊢ND (A → B) ∨ (A → C)
 d. (A ↔ B) ↔ (C ↔ D) ⊢ND (A ↔ C) → (B → D)
 e. ¬(A ↔ B), ¬(B ↔ C), ¬(C ↔ A) ⊢ND K
E6.21. For each of the following, produce a good translation including interpretation
function. Then use a derivation to show that the argument is valid in ND. The
first two are suggested from the history of philosophy; the last is our familiar
case from p. 2.
a. We have knowledge about numbers.
If Platonism is true, then numbers are not in spacetime.
Either numbers are in spacetime, or we do not interact with them.
We have knowledge about numbers only if we interact with them.
Platonism is not true.
b. There is evil.
If god is good, there is no evil unless he has an excuse for allowing it.
If god is omnipotent, then he does not have an excuse for allowing evil.
God is not both good and omnipotent.


c. If Bob goes to the fair, then so do Daniel and Edward. Albert goes to the fair
only if Bob or Carol go. If Daniel goes, then Edward goes only if Fred goes.
But not both Fred and Albert go. So Albert goes to the fair only if Carol goes
too.
d. If I think dogs fly, then I am insane or they have really big ears. But if dogs
do not have really big ears, then I am not insane. So either I do not think dogs
fly, or they have really big ears.
e. If the maid did it, then it was done with a revolver only if it was done in the
parlor. But if the butler is innocent, then the maid did it unless it was done
in the parlor. The maid did it only if it was done with a revolver, while the
butler is guilty if it did happen in the parlor. So the butler is guilty.

E6.22. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples of your own construction (iii) where the concept
applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.

 a. Derivations as games, and the condition on rules.
 b. Accessibility, and auxiliary assumptions.
 c. The rules ∨I and ∨E.
 d. The strategies for a goal.
 e. The strategies for a contradiction.

6.3  Quantificational

Our full system ND includes all the rules for the sentential part of ND, along with I- and E-rules for ∀ and ∃ and for equality. After some quick introductory remarks, we
will take up the new rules, and say a bit about strategy.
First, we do not sacrifice any of the rules we have so far. Recall that our rules
apply to formulas of quantificational languages as well as to formulas of sentential
ones. Thus, for example, Fx → ∀xFx and Fx are of the form P → Q and P. So
we might move from them to ∀xFx by →E as before. And similarly for other rules.
Here is a short example.
(AR)

 1. ∀xFx ∧ ∃x∀y(Hx ∨ Zy)     P
 2. | Kx                     A (g, →I)
 3. | ∀xFx                   1 ∧E
 4. Kx → ∀xFx                2-3 →I

The goal is of the form P → Q; so we set up to get it in the usual way. And the
subderivation is particularly simple. Notice that formulas of the sort ∀x(Kx → Fx)
and Kx are not of the form P → Q and P. The main operator of ∀x(Kx → Fx)
is ∀x, not →. So →E does not apply. That is why we need new rules for the
quantificational operators.
For our quantificational rules, we need a couple of notions already introduced in
chapter 3. Again, for any formula A, variable x, and term t, say A[t/x] is A with all
the free instances of x replaced by t. And t is free for x in A iff all the variables in
the replacing instances of t remain free after substitution in A[t/x]. Thus, for example,

(AS)  (∀xRxy ∨ Px)[y/x] is ∀xRxy ∨ Py

There are three instances of x in ∀xRxy ∨ Px, but only the last is free; so y is
substituted only for that instance. Since the substituted y is free in the resultant
expression, y is free for x in ∀xRxy ∨ Px. Similarly,

(AT)  (∀x(x = y) ∨ Ryx)[f¹x/y] is ∀x(x = f¹x) ∨ Rf¹xx

Both instances of y in ∀x(x = y) ∨ Ryx are free; so our substitution replaces both.
But the x in the first instance of f¹x is bound upon substitution; so f¹x is not free
for y in ∀x(x = y) ∨ Ryx. Notice that if x is not free in A, then replacing every
free instance of x in A with some term results in no change. So if x is not free in
A, then A[t/x] is A. Similarly, A[x/x] is just A itself. Further, any variable x is sure to
be free for itself in a formula A: if every free instance of variable x is replaced
with x, then the replacing instances are sure to be free! And constants are sure to be
free for a variable x in a formula A. Since a constant c is a term without variables,
no variable in the replacing term is bound upon substitution for free instances of x.
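Substitution and the "free for" condition are mechanical enough to program, which can help make them concrete. Here is a sketch over a toy formula syntax of our own devising (tuples, not the text's official grammar), supporting atomic formulas, negation, and the universal quantifier:

```python
# Formulas: ('atom', P, [terms]), ('not', f), ('forall', x, f)
# Terms:    ('var', x), ('const', a), ('fun', f, [terms])

def term_vars(t):
    if t[0] == 'fun':
        return {v for s in t[2] for v in term_vars(s)}
    return {t[1]} if t[0] == 'var' else set()

def has_free(f, x):
    """True iff variable x has a free instance in formula f."""
    if f[0] == 'atom':
        return any(x in term_vars(s) for s in f[2])
    if f[0] == 'not':
        return has_free(f[1], x)
    return f[1] != x and has_free(f[2], x)          # 'forall' case

def subst(f, x, t):
    """f with all free instances of variable x replaced by term t."""
    if f[0] == 'atom':
        def rep(s):
            if s == ('var', x):
                return t
            if s[0] == 'fun':
                return ('fun', s[1], [rep(u) for u in s[2]])
            return s
        return ('atom', f[1], [rep(s) for s in f[2]])
    if f[0] == 'not':
        return ('not', subst(f[1], x, t))
    if f[1] == x:                                    # quantifier binds x: nothing free below
        return f
    return ('forall', f[1], subst(f[2], x, t))

def free_for(t, x, f):
    """t is free for x in f iff no variable of t is captured at a free x."""
    if f[0] == 'atom':
        return True
    if f[0] == 'not':
        return free_for(t, x, f[1])
    if f[1] == x:                                    # no free x inside this quantifier
        return True
    if f[1] in term_vars(t) and has_free(f[2], x):
        return False                                 # a variable of t would be captured
    return free_for(t, x, f[2])

# In forall y Lxy: the variable y is not free for x, but the constant a is
Lxy = ('forall', 'y', ('atom', 'L', [('var', 'x'), ('var', 'y')]))
print(free_for(('var', 'y'), 'x', Lxy))    # False
print(free_for(('const', 'a'), 'x', Lxy))  # True
```

Running `subst(Lxy, 'x', ('const', 'a'))` returns the representation of ∀yLay, just as the official definition prescribes.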
With this said, we are ready to turn to our rules. We begin with the easier ones,
and work from there.

6.3.1  ∀E and ∃I

∀E and ∃I are straightforward. For the former, for any variable x, given an accessible
formula ∀xP on line a, if term t is free for x in P, one may move to P[t/x] with
justification, a ∀E.

 a. ∀xP

    P[t/x]           a ∀E

provided t is free for x in P

∀E removes a quantifier, and substitutes a term t for resulting free instances of x,
so long as t is free for x in P. Observe that t is always free if it is a
constant, or a variable that does not appear at all in P. We sometimes say that variable x is instantiated by term t. Thus, for example, ∀x∃yLxy is of the form ∀xP,
where P is ∃yLxy. So by ∀E we can move from ∀x∃yLxy to ∃yLay, removing the
quantifier and substituting a for x. And similarly, since the complex terms f¹a and
g²zb are free for x in ∃yLxy, ∀E legitimates moving from ∀x∃yLxy to ∃yLf¹ay
or ∃yLg²zby. What we cannot do is move from ∀x∃yLxy to ∃yLyy or ∃yLf¹yy.
These violate the constraint insofar as a variable of the substituted term is bound by
a quantifier in the resulting formula.
Intuitively, the motivation for this rule is clear: If P is satisfied for every assignment to variable x, then it is sure to be satisfied for the thing assigned to t, whatever that thing may be. Thus, for example, if everyone loves someone, ∀x∃yLxy,
it is sure to be the case that Al, and Al's father, love someone: that ∃yLay and
∃yLf¹ay. But from everyone loves someone, it does not follow that anyone loves
themselves, that ∃yLyy, or that anyone is loved by their father, ∃yLf¹yy. Though
we know Al and Al's father love someone, we do not know who that someone might
be. We therefore require that the replacing term be independent of quantifiers in the
rest of the formula.
Here are some examples. Notice that we continue to apply bottom-up goal-oriented thinking.
(AU)

 1. ∀x∀yHxy              P
 2. Hcf²ab → ∀zKz        P
 3. ∀yHcy                1 ∀E
 4. Hcf²ab               3 ∀E
 5. ∀zKz                 2,4 →E
 6. Kb                   5 ∀E

Our original goal is Kb. We could get this by ∀E if we had ∀zKz. So we set that
as a subgoal. This leads to Hcf²ab as another subgoal. And we get this from (1)
by two applications of ∀E. The constant c is free for x in ∀yHxy so we move from
∀x∀yHxy to ∀yHcy by ∀E. And the complex term f²ab is free for y in Hcy, so
we move from ∀yHcy to Hcf²ab by ∀E. And similarly, we get Kb from ∀zKz by
∀E.
Here is another example, also illustrating strategic thinking.


(AV)

 1. ∀x¬Bx                P
 2. ∀x(Cx → Bx)          P
 3. | Ca                 A (c, ¬I)
 4. | Ca → Ba            2 ∀E
 5. | Ba                 4,3 →E
 6. | ¬Ba                1 ∀E
 7. | ⊥                  6,5 ⊥I
 8. ¬Ca                  3-7 ¬I

Our original goal is ¬Ca; so we set up to get it by ¬I. And our contradiction appears
at the level of atomics and negated atomics. The constant a is free for x in Cx → Bx. So we move from ∀x(Cx → Bx) to Ca → Ba by ∀E. And similarly,
we move from ∀x¬Bx to ¬Ba by ∀E. Notice that we could use ∀E to instantiate the
universal quantifiers to any terms. We pick the constant a because it does us some
good in the context of our assumption Ca, itself driven by the goal, ¬Ca. And
it is typical to "swoop in" with universal quantifiers to instantiate them to terms that
matter in a given context.
∃I is equally straightforward. For variable x, given an accessible formula P[t/x]
on line a, where term t is free for x in formula P, one may move to ∃xP, with
justification, a ∃I.

 a. P[t/x]

    ∃xP              a ∃I

provided t is free for x in P

The statement of this rule is somewhat in reverse from the way one expects it to
be: Supposing that t is free for x in P, when one removes the quantifier from the
result and replaces every free instance of x with t one ends up with the start. A
consequence is that one starting formula might legitimately lead to different results
by ∃I. Thus if P is any of Fxx, Fxa, or Fax, then P[a/x] is Faa. So ∃I allows a
move from Faa to any of ∃xFxx, ∃xFax, or ∃xFxa. In doing a derivation, there is
a sense in which we replace one or more instances of a in Faa with x, and add the
quantifier to get the result. But then notice that not every instance of the term need be
replaced. Officially the rule is stated the other way: Removing the quantifier from the
result, and replacing free instances of the variable, yields the initial formula. Be clear
about this in your mind. The requirement that t be free for x in P prevents moving
from ∀yLyy or ∀yLf¹yy to ∃x∀yLxy. The term from which we generalize must
be free in the sense that none of its variables are bound in the result!
Again, the motivation for this rule is clear. If P is satisfied for the individual
assigned to t, it is sure to be satisfied for some individual. Thus, for example, if Al or
Al's father loves everyone, ∀yLay or ∀yLf¹ay, it is sure to be the case that someone
loves everyone, ∃x∀yLxy. But from the premise that everyone loves themselves,
∀yLyy, or that everyone is loved by their father, ∀yLf¹yy, it does not follow that
someone loves everyone. Again, the constraint on the rule requires that the term on
which we generalize be independent of quantifiers in the rest of the formula.
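The "we do not know who that someone might be" point can be made concrete with a small model. Over a two-element domain, everyone can love someone without anyone loving themselves, so ∀x∃yLxy does not entail ∃yLyy. A brute-force check (the variable names and setup are our own illustration, not from the text):

```python
# A countermodel: domain of two individuals, each loving only the other
U = [0, 1]
L = {(0, 1), (1, 0)}   # 0 loves 1 and 1 loves 0; nobody loves themselves

# forall x exists y Lxy: every individual loves some individual
everyone_loves_someone = all(any((x, y) in L for y in U) for x in U)

# exists y Lyy: some individual loves itself
someone_loves_self = any((y, y) in L for y in U)

print(everyone_loves_someone)  # True
print(someone_loves_self)      # False: the inference has a countermodel
```

The same model also shows why ∀E and ∃I carry their "free for" constraints: instantiating or generalizing on a captured variable would collapse exactly this distinction.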
Here are a couple of examples. The first is relatively simple. The second illustrates the duality between ∀E and ∃I.
(AW)

 1. Ha                   P
 2. ∃yHy → ∀xJx          P
 3. ∃yHy                 1 ∃I
 4. ∀xJx                 2,3 →E
 5. Ja                   4 ∀E
 6. Ha ∧ Ja              1,5 ∧I
 7. ∃x(Hx ∧ Jx)          6 ∃I

Ha ∧ Ja is (Hx ∧ Jx)[a/x], so we can get ∃x(Hx ∧ Jx) from Ha ∧ Ja by ∃I. Ha
is already a premise, so we set Ja as a subgoal. Ja comes by ∀E from ∀xJx, and
to get this we set ∃yHy as another subgoal. And ∃yHy follows directly by ∃I from
Ha. Observe that, for now, the natural way to produce a formula with main operator
∃ is by ∃I. You should fold this into your strategic thinking.
For the second example recall, from translations, that ∀xP is equivalent to
¬∃x¬P, and ∃xP is equivalent to ¬∀x¬P. Given this, it turns out that we can use the
universal rule with an effect something like ∃I, and the existential rule with an effect
like ∀E. The following pair of derivations illustrate this point.
(AX)

 1. Pa                   P
 2. | ∀x¬Px              A (c, ¬I)
 3. | ¬Pa                2 ∀E
 4. | ⊥                  1,3 ⊥I
 5. ¬∀x¬Px               2-4 ¬I

(AY)

 1. ¬∃x¬Px               P
 2. | ¬Pa                A (c, ¬E)
 3. | ∃x¬Px              2 ∃I
 4. | ⊥                  3,1 ⊥I
 5. Pa                   2-4 ¬E

By ∃I we could move from Pa to ∃xPx in one step. In (AX) we use the universal
rule to move from the same premise to the equivalent ¬∀x¬Px; indeed, ∃xPx
abbreviates this very expression. Similarly, by ∀E we could move from ∀xPx to
Pa in one step. In (AY), we move to the same result by the existential rule from
the equivalent ¬∃x¬Px. Thus there is a sense in which, in the presence of rules
for negation, the work done by one of these quantifier rules is very similar to, or can
substitute for, the work done by the other.
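On any finite domain, the quantifier/negation dualities behind (AX) and (AY) can be checked exhaustively: however a predicate P is interpreted over the domain, ∃xPx comes out true exactly when ¬∀x¬Px does. A sketch (our own check, not a procedure from the text):

```python
from itertools import product

U = [0, 1, 2]  # a small three-element domain

# try every interpretation of P over the domain
for values in product([True, False], repeat=len(U)):
    P = dict(zip(U, values))
    exists_P    = any(P[x] for x in U)               # exists x Px
    not_all_not = not all(not P[x] for x in U)       # not forall x not-Px
    assert exists_P == not_all_not

print("exists x Px and not-forall x not-Px agree on all interpretations")
```

This is a semantic observation; the derivations (AX) and (AY) show the syntactic side of the same coin.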

E6.23. Complete the following derivations by filling in justifications for each line.
Then for each application of ∀E or ∃I, show that the "free for" constraint
is met. Hint: it may be convenient to xerox the problems, and fill in your
answers directly on the copy.
 a.  1. ∀x(Ax → Bxf¹x)
     2. ∀xAx
     3. Af¹c
     4. Af¹c → Bf¹cf¹f¹c
     5. Bf¹cf¹f¹c

*b.  1. Gaa
     2. ∃yGay
     3. ∃x∃yGxy

 c.  1. ∀x(Rx ∧ Jx)
     2. Rk ∧ Jk
     3. Rk
     4. Jk
     5. Jk ∧ Rk
     6. ∃y(Jy ∧ Ry)

 d.  1. ∃x(Rx ∧ Gx) → ∀yFy
     2. ∀zGz
     3. Ra
     4. Ga
     5. Ra ∧ Ga
     6. ∃x(Rx ∧ Gx)
     7. ∀yFy
     8. Fg²ax

 e.  1. ¬∃zFg¹z
     2. | ∀xFx            A (c, ¬I)
     3. | Fg¹k
     4. | ∃zFg¹z
     5. | ⊥
     6. ¬∀xFx

E6.24. The following are not legitimate ND derivations. In each case, explain why.

 a.  1. ∀xFx ↔ Gx           P
     2. Fj ↔ Gj             1 ∀E

*b.  1. ∀x∃yGxy             P
     2. ∃yGyy               1 ∀E

 c.  1. ∀y(Fay → Gy)        P
     2. Fay → Gf¹b          1 ∀E

 d.  1. ∀yGf²xyy            P
     2. ∃x∀yGxy             1 ∃I

 e.  1. Gj                  P
     2. ∃xGf¹x              1 ∃I

E6.25. Provide derivations to show each of the following.

 a. ∀xFx ⊢ND Fa ∧ Fb
*b. ∀x∀yFxy ⊢ND Fab ∧ Fba
 c. ∀x(Gf¹x → ∀yAyx), Gf¹b ⊢ND Af¹cb
 d. ∀x∀y(Hxy → Dyx), ¬Dab ⊢ND ¬Hba
 e. ⊢ND ∀x∀yFxy ∧ ∀x(Fxx → A) → A
 f. Fa, Ga ⊢ND ∃x(Fx ∧ Gx)
*g. Gaf¹z ⊢ND ∃x∃yGxy
 h. ⊢ND (Fa ∨ Fb) → ∃xFx
 i. Gaa ⊢ND ∃x∃y(Kxx → Gxy)
 j. ∀xFx, Ga ⊢ND ∃y(Fy ∧ Gy)
*k. ∀x(Fx → Gx), ∃yGy → Ka ⊢ND Fa → ∃xKx
 l. ∀x∀yHxy ⊢ND ∃y∃xHyx
 m. ∀x(Bx → Kx), ¬Kf¹x ⊢ND ¬Bf¹x
 n. ∀x∀y(Fxy → ¬Fyx) ⊢ND ∃z¬Fzz
 o. ∀x(Fx → ¬Gx), Fa ⊢ND ∃x(Gx → Hx)

6.3.2  ∀I and ∃E

In parallel with ∀E and ∃I, rules for ∀I and ∃E are a linked pair. ∀I is as follows: For
variables v and x, given an accessible formula P[v/x] at line a, where v is free for x in
P, v is not free in any undischarged assumption, and v is not free in ∀xP, one may
move to ∀xP with justification a ∀I.

 a. P[v/x]

    ∀xP              a ∀I

provided (i) v is free for x in P, (ii) v is not free in any undischarged auxiliary assumption, and (iii) v is not free in ∀xP

The form of this rule is like a constrained ∃I when t is a variable: from P[v/x] we
move to the quantified expression ∀xP. The underlying difference is in the special
constraints. First, the combination of (i) and (iii) requires that v and x appear free in
just the same places. If v is free for x in P, then v is free in P[v/x] everywhere x is
free in P; if v is not free in ∀xP, then v is free in P[v/x] only where x is free in P.
So you get back and forth between P and P[v/x] by replacing every free x with v or
every free v with x. This two-way requirement is not present for ∃I.
In addition, v cannot be free in an auxiliary assumption still in effect when ∀I is
applied. Recall that a formula is true when it is satisfied on any variable assignment.
As it turns out (and we shall see in detail in Part II), the truth of a formula with a free
variable therefore implies the truth of its universal quantification. But this is not so
under the scope of an assumption in which the variable is free. Under the scope of
an assumption with a free variable, we effectively constrain the range of assignments
under consideration to ones where the assumption is satisfied. Thus under any such
assumption, the move to a universal quantification is not justified. For the universal
quantification to be justified, the formula must be satisfied for any assignment to v,
and when v is free in an undischarged assumption we do not have that guarantee.
Only when assignments to v are arbitrary, when reasoning with respect to v might
apply to any individual, is the move from P[v/x] to ∀xP justified. Again, observe that
no such constraint is required for ∃I, which depends on satisfaction for just a single
individual, so that any assignment and term will do.
Once you get your mind around them, these constraints are not difficult. Somehow, though, managing them is a common source of frustration for beginning students. However, there is a simple way to be sure that the constraints are met. Suppose you have been following the strategies, along the lines from before, and come
to a goal of the sort ∀xP. It is natural to expect to get this by ∀I from P[v/x]. You
will be sure to satisfy the constraints if you set P[v/x] as a subgoal, where v does not
appear elsewhere in the derivation. If v does not otherwise appear in the derivation,
(i) there cannot be any v quantifier in P, so v is sure to be free for x in P. If v does
not otherwise appear in the derivation, (ii) v cannot appear in any assumption, and
so be free in an undischarged assumption. And if v does not otherwise appear in the
derivation, (iii) it cannot appear at all in ∀xP, and so cannot be free in ∀xP. It is not
always necessary to use a new variable in order to satisfy the constraints, and sometimes it is possible to simplify derivations by clever variable selection. However, we
shall make it our standard procedure to do so.
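The "use a variable new to the derivation" expedient is exactly the bookkeeping that automated proof tools do when they generate fresh names. A sketch of that bookkeeping (the function and its conventions are our own illustration, not part of the text's system):

```python
def fresh_variable(used, candidates='jklmnopqrstuvwxyz'):
    """Return a variable appearing nowhere in `used`, so the three
    constraints on forall-I (and on exists-E) are met automatically."""
    for v in candidates:
        if v not in used:
            return v
    i = 1
    while True:  # every plain letter taken: fall back to j1, k1, ...
        for v in candidates:
            if f"{v}{i}" not in used:
                return f"{v}{i}"
        i += 1

# variables occurring anywhere in the derivation so far
used = {'x', 'y', 'j'}
print(fresh_variable(used))  # 'k': new to the derivation
```

Since the returned variable occurs nowhere in the derivation, it cannot be bound in P, cannot be free in an undischarged auxiliary assumption, and cannot be free in ∀xP; the constraints come for free.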
Here are some examples. The first is very simple, but illustrates the basic idea
underlying the rule.

(AZ)

Setup:

 1. ∀x(Hx ∧ Mx)          P

    Hj
    ∀yHy                 _ ∀I

Completed:

 1. ∀x(Hx ∧ Mx)          P
 2. Hj ∧ Mj              1 ∀E
 3. Hj                   2 ∧E
 4. ∀yHy                 3 ∀I

The goal is ∀yHy. So, picking a variable new to the derivation, we set up to get this
by ∀I from Hj. This goal is easy to obtain from the premise by ∀E and ∧E. If every
x is such that both Hx and Mx, it is not surprising that every y is such that Hy.
The general content from the quantifier is converted to the form with free variables,
manipulated by ordinary rules, and converted back to quantified form. This is typical.
Another example has free variables in an auxiliary assumption.
(BA)

Setup:

 1. ∀x(Ex → Sx)          P
 2. ∀z(Sz → Kz)          P

    Ej → Kj
    ∀x(Ex → Kx)          _ ∀I

Completed:

 1. ∀x(Ex → Sx)          P
 2. ∀z(Sz → Kz)          P
 3. | Ej                 A (g, →I)
 4. | Ej → Sj            1 ∀E
 5. | Sj                 4,3 →E
 6. | Sj → Kj            2 ∀E
 7. | Kj                 6,5 →E
 8. Ej → Kj              3-7 →I
 9. ∀x(Ex → Kx)          8 ∀I

Given the goal ∀x(Ex → Kx), we immediately set up to get it by ∀I from Ej → Kj. At that stage, j does not appear elsewhere in the derivation, and we can therefore be sure that the constraints will be met when it comes time to apply ∀I. The
derivation is completed by the usual strategies. Observe that j appears in an auxiliary assumption at (3). This is no problem insofar as the assumption is discharged
by the time ∀I is applied. We would not, however, be able to conclude, say, ∀xSx
or ∀xKx inside the subderivation, since at that stage the variable j is free in the
undischarged assumption. But, of course, given the strategies, there should be no
temptation whatsoever to do so! For when we set up for ∀I, we set up to do it in a
way that is sure to satisfy the constraints.

A last example introduces multiple quantifiers and, again, emphasizes the importance of following the strategies. Insofar as the conclusion merely exchanges
variables with the premise, it is no surprise that there is a way for it to be done.
1. 8x.Gx ! 8yF yx/

1. 8x.Gx ! 8yF yx/

2.

(BB)

Gj

F kj
8xF xj
Gj ! 8xF xj
8y.Gy ! 8xF xy/

8I

Gj ! 8xF xj
8y.Gy ! 8xF xy/

P
A (g, !I)

8I
2- !I
8I

First, we set up to get ∀y(Gy → ∀xFxy) from Gj → ∀xFxj. The variable j
does not appear in the derivation, so we expect that the constraints on ∀I will be
satisfied. But our new goal is a conditional, so we set up to go for it by →I in the
usual way. This leads to ∀xFxj as a goal, and we set up to get it from Fkj, where
k does not otherwise appear in the derivation. Observe that we have at this stage
an undischarged assumption in which j appears. However, our plan is to generalize
on k. Since k is new at this stage, we are fine. Of course, this assumes that we are
following the strategies so that our new variable automatically avoids variables free
in assumptions under which this instance of ∀I falls. This goal is easily obtained and
the derivation completed as follows.
 1. ∀x(Gx → ∀yFyx)       P
 2. | Gj                 A (g, →I)
 3. | Gj → ∀yFyj         1 ∀E
 4. | ∀yFyj              3,2 →E
 5. | Fkj                4 ∀E
 6. | ∀xFxj              5 ∀I
 7. Gj → ∀xFxj           2-6 →I
 8. ∀y(Gy → ∀xFxy)       7 ∀I

When we apply ∀I the first time, we replace each instance of k with x and add the x
quantifier. When we apply ∀I the second time, we replace each instance of j with y
and add the y quantifier. This is just how we planned for the rules to work.
∃E appeals to both a formula and a subderivation. For variables v and x, given
an accessible formula ∃xP at a, and an accessible subderivation beginning with P[v/x]
at b and ending with Q against its scope line at c, where v is free for x in P, v is
free in no undischarged assumption, and v is not free in ∃xP or in Q, one may move to
Q, with justification a,b-c ∃E.

 a. ∃xP
 b. | P[v/x]             A (g, a∃E)
    |
 c. | Q

    Q                    a,b-c ∃E

provided (i) v is free for x in P, (ii) v is not free in any undischarged auxiliary assumption, and (iii) v is not free in ∃xP or in Q

Notice that the assumption comes with an exit strategy as usual. We can think of this
rule on analogy with ∨E. A universally quantified expression is something like a big
conjunction: if ∀xP, then this element of U is P and that element of U is P and
so on. And an existentially quantified expression is something like a big disjunction: if
∃xP, then this element of U is P or that element of U is P or so on. What we need to
show is that no matter which thing happens to be the one that is P, we get the result
that Q. Given this, we are in a position to conclude that Q. As for the case of ∀I,
then, the constraints guarantee that our reasoning applies to any individual.
Again, if you are following the strategies, a simple way to guarantee that the constraints are met is to use a variable new to the derivation for the assumption. Suppose
you are going for goal Q. In parallel with ∨E, when presented with an accessible
formula with main operator ∃, it is wise to go for the entire goal by ∃E.
(BC)

Setup:

 a. ∃xP

    Q                    (goal)

Becomes:

 a. ∃xP
 b. | P[v/x]             A (g, a∃E)
    |
 c. | Q                  (goal)
    Q                    a,b-c ∃E

If v does not otherwise appear in the derivation, then (i) there is no v quantifier in P
and v is sure to be free for x in P. If v does not otherwise appear in the derivation,
(ii) v does not appear in any other assumption and so is not free in any undischarged
auxiliary assumption. And if v does not otherwise appear in the derivation, (iii) v
does not appear in either ∃xP or in Q and so is not free in ∃xP or in Q. Thus we
adopt the same simple expedient to guarantee that the constraints are met. Of course,
this presupposes we are following the strategies enough so that other assumptions are
in place when we make the assumption for ∃E, and that we are clear about the exit
strategy, so that we know what Q will be! The variable is new relative to this much
setup.
Here are some examples. The first is particularly simple, and should seem intuitively right. Notice again that, given an accessible formula with main operator ∃, we
go directly for the goal by ∃E.

(BD)

Setup:

 1. ∃x(Fx ∧ Gx)          P
 2. | Fj ∧ Gj            A (g, 1∃E)
    |
    | ∃xFx
    ∃xFx                 1,2-_ ∃E

Completed:

 1. ∃x(Fx ∧ Gx)          P
 2. | Fj ∧ Gj            A (g, 1∃E)
 3. | Fj                 2 ∧E
 4. | ∃xFx               3 ∃I
 5. ∃xFx                 1,2-4 ∃E

Given an accessible formula with main operator ∃, we go for the goal by ∃E. This
gives us a subderivation with the same goal, and our assumption with the new variable. As it turns out, this goal is easy to obtain, with instances of ∧E and ∃I. We
could not do ∀I to introduce ∀xFx under the scope of the assumption with j free.
But ∃I is not so constrained. So we complete the derivation as above. If some x is
such that both Fx and Gx, then of course some x is such that Fx. Again, we are able
to take the quantifier off, manipulate the expressions with free variables, and put the
quantifier back on.
Observe that the following is a mistake. It violates the third constraint: v,
the variable to which we instantiate the existential, may not be free in Q, the formula that
results from ∃E.
(BE)

 1. ∃x(Fx ∧ Gx)          P
 2. | Fj ∧ Gj            A (g, 1∃E)
 3. | Fj                 2 ∧E
 4. Fj                   1,2-3 ∃E    Mistake!
 5. ∃xFx                 4 ∃I

If you are following the strategies, there should be no temptation to do this. In the
above example (BD), we go for the goal ∃xFx by ∃E. At that stage, the variable of
the assumption j is new to the derivation and so does not appear in the goal. So all
is well. This case (BE) does not introduce a variable that is new relative to the goal
of the subderivation, and so runs into trouble.
Very often, a goal from ∃E is existentially quantified, for introducing an existential quantifier is one way of eliminating the variable from the assumption so that
it is not free in the goal. In fact, we do not have to think much about this, insofar as
we explicitly introduce the assumption by a variable not in the goal. However, it is
not always the case that the goal for ∃E is existentially quantified. Here is a simple
case of that sort.

(BF)

Setup:

 1. ∃xFx                 P
 2. ∀z(∃yFy → Gz)        P
 3. | Fj                 A (g, 1∃E)
    |
    | ∀xGx
    ∀xGx                 1,3-_ ∃E

Completed:

 1. ∃xFx                 P
 2. ∀z(∃yFy → Gz)        P
 3. | Fj                 A (g, 1∃E)
 4. | ∃yFy → Gk          2 ∀E
 5. | ∃yFy               3 ∃I
 6. | Gk                 4,5 →E
 7. | ∀xGx               6 ∀I
 8. ∀xGx                 1,3-7 ∃E

Again, given an existential premise, we set up to reach the goal by ∃E, where the
variable in the assumption is new. In this case, the goal is universally quantified, and
illustrates the point that any formula may be the goal for ∃E. In this case, we reach
the goal in the usual way. To reach ∀xGx set Gk as goal; at this stage, k is new to the
derivation, and so not free in any undischarged assumption. So there is no problem
about ∀I. Then it is a simple matter of exploiting accessible lines for the result.
Here is an example with multiple quantifiers. It is another case which makes
sense insofar as the premise and conclusion merely exchange variables.
(BG)

 1. ∃x(Fx ∧ ∃yGxy)       P
 2. | Fj ∧ ∃yGjy         A (g, 1∃E)
 3. | ∃yGjy              2 ∧E
 4. | | Gjk              A (g, 3∃E)
    | |
    | | ∃y(Fy ∧ ∃xGyx)
    | ∃y(Fy ∧ ∃xGyx)     3,4-_ ∃E
    ∃y(Fy ∧ ∃xGyx)       1,2-_ ∃E

The premise is an existential, so we go for the goal by 9E. This gives us the first
subderivation, with the same goal, and new variable j substituted for x. But just
a bit of simplification gives us another existential on line (3). Thus, following the
standard strategies, we set up to go for the goal again by 9E. At this stage, j is no
longer new, so we set up another subderivation with new variable k substituted for y.
Now the derivation is reasonably straightforward.


      1. ∃x(Fx ∧ ∃yGxy)          P

      2. | Fj ∧ ∃yGjy            A (g, 1∃E)
      3. | ∃yGjy                 2 ∧E

      4. | | Gjk                 A (g, 3∃E)
      5. | | ∃xGjx               4 ∃I
      6. | | Fj                  2 ∧E
      7. | | Fj ∧ ∃xGjx          6,5 ∧I
      8. | | ∃y(Fy ∧ ∃xGyx)      7 ∃I

      9. | ∃y(Fy ∧ ∃xGyx)        3,4-8 ∃E
      10. ∃y(Fy ∧ ∃xGyx)         1,2-9 ∃E

∃I applies in the scope of the subderivations. And we put Fj and ∃xGjx together so that the outer quantifier goes on properly, with y in the right slots.
Finally, observe that ∀I and ∃E also constitute a dual to one another. The derivations to show this are relatively difficult, but do not worry about that; it is enough to understand the steps. For the parallel to ∀I, suppose the constraints are met for a derivation of ∀xP from P^x_j. And for the parallel to ∃E, suppose it is possible to derive Q by ∃E from ∃xP; so from application of that rule, in a subderivation, we can get Q from P^x_j.
(BH)  1. P^x_j             P

      2. | ¬∃x¬P           A (c, ¬I)

      3. | | ¬P^x_j        A (c, 2∃E)
      4. | | ⊥             1,3 ⊥I

      5. | ⊥               2,3-4 ∃E
      6. ¬∃x¬P             2-5 ¬I

(BI)  1. ¬∀x¬P             P

      2. | ¬Q              A (c, ¬E)

      3. | | P^x_j         A (c, ¬I)
         | | ⋮
      4. | | Q             (somehow)
      5. | | ⊥             4,2 ⊥I

      6. | ¬P^x_j          3-5 ¬I
      7. | ∀x¬P            6 ∀I
      8. | ⊥               7,1 ⊥I

      9. Q                 2-8 ¬E

Where P^x_j is a premise, it would be possible to derive ∀xP in one step by ∀I. But in (BH), from the same start, we derive the equivalent ¬∃x¬P by the existential rule. Since conditions for the universal rule apply, j is not free in an undischarged assumption, is free for x in ¬P, and is not free in ¬∃x¬P. In this case, it matters that ⊥ abbreviates Z ∧ ¬Z and so includes no instance of j. So the constraints are satisfied. Similarly, if it is possible to derive Q by ∃E from ∃xP, we would set up a subderivation starting with P^x_j, derive Q, and use ∃E to exit with the Q. In (BI) we begin with the equivalent ¬∀x¬P and, supposing it is possible in a subderivation to derive Q from P^x_j, use the universal rule to derive Q. Since conditions for the existential rule apply, j is free for x in ¬P and not free in ¬∀x¬P. Observe also that the assumption P^x_j is discharged by the time ∀I is applied, and that the constraint on ∃E requires that j is not free in Q or other undischarged assumptions. Thus, again, there is a sense in which, in the presence of rules for negation, the work done by one of these quantifier rules is very similar to, or can substitute for, the work done by the other.
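Though ND itself is purely syntactic, this duality has a semantic counterpart that can be confirmed by brute force over a finite domain. The following sketch (an informal illustration with a hypothetical three-member universe; none of this is part of the official system) runs through every extension a one-place predicate P could have, verifying that ∀xPx agrees with ¬∃x¬Px, and ∃xPx with ¬∀x¬Px:

```python
from itertools import product

# Check the quantifier dualities semantically on a small finite domain:
# "forall x Px" should agree with "not(exists x not-Px)", and
# "exists x Px" with "not(forall x not-Px)", on every extension of P.
domain = [0, 1, 2]

for bits in product([False, True], repeat=len(domain)):
    P = dict(zip(domain, bits))                      # one extension of P
    forall_P = all(P[m] for m in domain)
    exists_P = any(P[m] for m in domain)
    assert forall_P == (not any(not P[m] for m in domain))
    assert exists_P == (not all(not P[m] for m in domain))

print("dualities confirmed on all", 2 ** len(domain), "extensions of P")
```

Of course a finite check is no substitute for the derivations (BH) and (BI); it only illustrates why the two rules can do one another's work.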
E6.26. Complete the following derivations by filling in justifications for each line. Then for each application of ∀I or ∃E show that the constraints are met by running through each of the three requirements. Hint: it may be convenient to xerox the problems, and fill in your answers directly on the copy.
a.  1. ∀x(Hx → Rx)
    2. ∀yHy
    3. Hj → Rj
    4. Hj
    5. Rj
    6. ∀zRz

*b. 1. ∀y(Fy → Gy)
    2. ∃zFz

    3. | Fj
    4. | Fj → Gj
    5. | Gj
    6. | ∃xGx

    7. ∃xGx

c.  1. ∃x∀y∀zHxyz

    2. | ∀y∀zHjyz
    3. | ∀zHjf¹kz
    4. | Hjf¹kf¹k
    5. | ∃xHxf¹kf¹k
    6. | ∀y∃xHxf¹yf¹y

    7. ∀y∃xHxf¹yf¹y

d.  1. ∀y∀x(Fx → By)

    2. | ∃xFx

    3. | | Fj
    4. | | ∀x(Fx → Bk)
    5. | | Fj → Bk
    6. | | Bk

    7. | Bk

    8. ∃xFx → Bk
    9. ∀y(∃xFx → By)

e.  1. ∃x(Fx → ∀yGy)

    2. | Fj → ∀yGy

    3. | | Fj
    4. | | ∀yGy
    5. | | Gk

    6. | Fj → Gk
    7. | ∀y(Fj → Gy)
    8. | ∃x∀y(Fx → Gy)

    9. ∃x∀y(Fx → Gy)

E6.27. The following are not legitimate ND derivations. In each case, explain why.
*a. 1. Gjy → Fjy            P
    2. ∀z(Gzy → Fjy)        1 ∀I

b.  1. ∃x∀yByx              P

    2. | ∀yByy              A (g, 1∃E)
    3. | Baa                2 ∀E

    4. Baa                  1,2-3 ∃E

c.  1. ∃xByx                P

    2. | Byy                A (g, 1∃E)
    3. | ∃yByy              2 ∃I

    4. ∃yByy                1,2-3 ∃E

d.  1. ∀x∃yLxy              P
    2. ∃yLjy                1 ∀E

    3. | Ljk                A (g, 2∃E)
    4. | ∀xLxk              3 ∀I
    5. | ∃y∀xLxy            4 ∃I

    6. ∃y∀xLxy              2,3-5 ∃E

e.  1. ∀x(Hx → Gx)          P
    2. ∃xHx                 P

    3. | Hj                 A (g, 2∃E)
    4. | Hj → Gj            1 ∀E
    5. | Gj                 4,3 →E

    6. Gj                   2,3-5 ∃E
    7. ∀xGx                 6 ∀I

E6.28. Provide derivations to show each of the following.

a. ∀xKxx ⊢ND ∀zKzz
b. ∃xKxx ⊢ND ∃zKzz
*c. ∀xKx, ∀x(Kx → Sx) ⊢ND ∀x(Hx ∨ Sx)
d. ⊢ND ∀xHf¹x → ∀xHf¹g¹x
e. ∀x∀y(Gy → Fx) ⊢ND ∀x(∀yGy → Fx)
*f. ∃yByyy ⊢ND ∃x∃y∃zBxyz
g. ∀x((Hx ∧ Kx) → Ix), ∃y(Hy ∧ Gy), ∀x(Gx ∧ Kx) ⊢ND ∃y(Iy ∧ Gy)
h. ∀x(Ax → Bx) ⊢ND ∃zAz → ∃zBz
i. ∃x¬(Cx ∨ Rx) ⊢ND ∃x¬Cx
j. ∃x(¬Nx ∨ Lxx), ∀xNx ⊢ND ∃yLyy
k. ∀x∀y(Fx → Gy) ⊢ND ∀x(Fx → ∀yGy)
l. ∀x(Fx → ∀yGy) ⊢ND ∀x∀y(Fx → Gy)
m. ∃x(Mx ∧ Kx), ∃y(Oy ∧ Wy) ⊢ND ∃x∃y(Kx ∧ Oy)
n. ∀x(Fx → ∃yGxy) ⊢ND ∀x(Fx → ∃y(Gxy ∨ Hxy))
o. ∀x∃yRxy, ∀x∀y(Rxy → Ryx) ⊢ND ∀x∃y(Rxy ∧ Ryx)

6.3.3 Strategy

Our strategies remain very much as before. They are modified only to accommodate the parallels between ∧ and ∀, and between ∨ and ∃. I restate the strategies in their expanded form, and give some examples of each. As before, we begin with strategies for reaching a determinate goal.
SG  1. If accessible lines contain explicit contradiction, use ¬E to reach goal.

    2. Given an accessible formula with main operator ∃ or ∨, use ∃E or ∨E to reach goal (watch screened variables).

    3. If goal is in accessible lines (set goals and) attempt to exploit it out.

    4. To reach goal with main operator ∗, use ∗I (careful with ∨ and ∃).

    5. For any goal, if all else fails, try ¬E (especially for atomics and formulas with ∨ or ∃ as main operator).

And we have strategies for reaching a contradiction.

SC  1. Break accessible formulas down into atomics and negated atomics.

    2. Given an existential or disjunction in a subderivation for ¬E or ¬I, go for ⊥ by ∃E or ∨E (watch screened variables).

    3. Set as goal the opposite of some negation (something that cannot itself be broken down). Then apply strategies for a goal to reach it.

    4. For some P such that both P and ¬P lead to contradiction: Assume P (¬P), obtain the first contradiction, and conclude ¬P (P); then obtain the second contradiction; this is the one you want.

As before, these are listed in priority order, though the frequency order may be different. If a high-priority strategy does not apply, simply fall through to one that does. In each case, you may want to refer back to the corresponding discussion of the sentential case for further examples.
SG1. If accessible lines contain explicit contradiction, use ¬E to reach goal. The strategy is unchanged from before. If premises contain an explicit contradiction, we can assume the negation of our goal, bring the contradiction under the assumption, and conclude to the original goal. Since this always works, we want to jump on it whenever it is available. The only thing to add for the quantificational case is that accessible lines might contain a contradiction that is just a short step away, buried in quantified expressions. Thus, for example,
(BJ), setting up:
      1. ∀xFx          P
      2. ∀y¬Fy         P

         Gz

(BJ), completed:
      1. ∀xFx          P
      2. ∀y¬Fy         P

      3. | ¬Gz         A (c, ¬E)
      4. | Fx          1 ∀E
      5. | ¬Fx         2 ∀E
      6. | ⊥           4,5 ⊥I

      7. Gz            3-6 ¬E

Though ∀xFx and ∀y¬Fy are not themselves an explicit contradiction, they lead by ∀E directly to expressions that are. Given the analogy between ∧ and ∀, it is as if we had F ∧ G and ¬F ∧ ¬G in the premises. In the sentential case, we would not hesitate to go for the goal by ¬E. And similarly here.
SG2. Given an accessible formula with main operator ∃ or ∨, use ∃E or ∨E to reach goal (watch screened variables). What is new for this strategy is the existential quantifier. Motivation is the same as before: With goal Q, and an accessible line with main operator ∃, go for the goal by ∃E. Then you have all the same accessible formulas as before, with the addition of the assumption. So you will (typically) be better off in your attempt to reach Q. We have already emphasized this strategy in introducing the rules. Here is an example.
(BK), setting up:
      1. ∃xFx                    P
      2. ∃yGy                    P
      3. ∃zFz → ∀yFy             P

      4. | Fj                    A (g, 1∃E)

      5. | | Gk                  A (g, 2∃E)
         | | ...
         | | ∃x(Fx ∧ Gx)

         | ∃x(Fx ∧ Gx)           2,5-__ ∃E
         ∃x(Fx ∧ Gx)             1,4-__ ∃E

(BK), completed:
      1. ∃xFx                    P
      2. ∃yGy                    P
      3. ∃zFz → ∀yFy             P

      4. | Fj                    A (g, 1∃E)

      5. | | Gk                  A (g, 2∃E)
      6. | | ∃zFz                4 ∃I
      7. | | ∀yFy                3,6 →E
      8. | | Fk                  7 ∀E
      9. | | Fk ∧ Gk             8,5 ∧I
      10.| | ∃x(Fx ∧ Gx)         9 ∃I

      11.| ∃x(Fx ∧ Gx)           2,5-10 ∃E
      12. ∃x(Fx ∧ Gx)            1,4-11 ∃E

The premise at (3) has main operator → and so is not existentially quantified. But the first two premises have main operator ∃. So we set up to reach the goal with two applications of ∃E. It does not matter which we do first, as either way we end up with the same accessible formulas to reach the goal at the innermost subderivation. Once we have the subderivations set up, the rest is straightforward.
Given what we have said, it might appear mysterious how one could be anything but better off going directly for a goal by ∃E or ∨E. But consider the derivations below.

(BL)  1. ∀x∃yFxy                 P
      2. ∀x∀y(Fxy → Gxy)         P
      3. ∃yFjy                   1 ∀E

      4. | Fjk                   A (g, 3∃E)
      5. | ∀y(Fjy → Gjy)         2 ∀E
      6. | Fjk → Gjk             5 ∀E
      7. | Gjk                   6,4 →E
      8. | ∃yGjy                 7 ∃I
      9. | ∀x∃yGxy               Mistake!

      10. ∀x∃yGxy                3,4-9 ∃E

(BM)  1. ∀x∃yFxy                 P
      2. ∀x∀y(Fxy → Gxy)         P
      3. ∃yFjy                   1 ∀E

      4. | Fjk                   A (g, 3∃E)
      5. | ∀y(Fjy → Gjy)         2 ∀E
      6. | Fjk → Gjk             5 ∀E
      7. | Gjk                   6,4 →E
      8. | ∃yGjy                 7 ∃I

      9. ∃yGjy                   3,4-8 ∃E
      10. ∀x∃yGxy                9 ∀I

In derivation (BL), we isolate the existential on line (3) and go for the goal, ∀x∃yGxy, by ∃E. But something is in fact lost when we set up for the subderivation: the variable j, which was not in any undischarged assumption and therefore available for ∀I, gets screened off by the assumption and so lost for universal generalization. So at step (9), we are blocked from using (8) and ∀I to reach the goal. The problem is solved in (BM) by letting variable j pass into the subderivation and back out, where it is available again for ∀I. This requires passing over our second strategy for a goal for at least a step, to set up a new goal ∃yGjy, to which we apply the second strategy in the usual way. Observe that the restriction on ∃E blocks a goal in which k is free, but there is no problem about j. This simple case illustrates the sort of context where caution is required in application of SG2.
SG3. If goal is in accessible lines (set goals and) attempt to exploit it out. This is the same strategy as before. The only thing to add is that we should consider the instances of a universally quantified expression as already in the expression (as if it were a big conjunction). Thus, for example,
(BN), setting up:
      1. Ga → ∀xFx        P
      2. ∀xGx             P

         ∀xFx
         Fa               __ ∀E

(BN), completed:
      1. Ga → ∀xFx        P
      2. ∀xGx             P

      3. Ga               2 ∀E
      4. ∀xFx             1,3 →E
      5. Fa               4 ∀E

The original goal Fa is in the consequent of (1), ∀xFx. So we set ∀xFx as a subgoal. This leads to Ga as another subgoal, and we find this in the premise at (2). Very often, the difficult part of a derivation is deciding how to exploit quantifiers to reach a goal. In this case, the choice was trivial. But it is not always so easy.
SG4. To reach goal with main operator ∗, use ∗I (careful with ∨ and ∃). As before, this is your bread-and-butter strategy. You will come to it over and over. Of new applications, the most automatic is for ∀. For a simple case,
(BO), setting up:
      1. ∀xGx              P
      2. ∀yFy              P

         Fj ∧ Gj
         ∀z(Fz ∧ Gz)       __ ∀I

(BO), completed:
      1. ∀xGx              P
      2. ∀yFy              P

      3. Gj                1 ∀E
      4. Fj                2 ∀E
      5. Fj ∧ Gj           4,3 ∧I
      6. ∀z(Fz ∧ Gz)       5 ∀I

Given a goal with main operator ∀, we immediately set up to get it by ∀I. This leads to Fj ∧ Gj, with the new variable j, as a subgoal. After that, completing the derivation is easy. Observe that this strategy does not always work for formulas with main operator ∨ and ∃.
SG5. For any goal, if all else fails, try ¬E (especially for atomics and formulas with ∨ or ∃ as main operator). Recall that atomics now include more than just sentence letters. Thus, for example, this rule might have special application for goals of the sort Fab or Gz. And, just as one might have good reason to accept that P ∨ Q without having good reason to accept that P, or that Q, so one might have reason to accept that ∃xP without having to accept that any particular individual is P: one might be quite confident that someone did it, without evidence sufficient to convict any particular individual. Thus there are contexts where it is possible to derive ∃xP but not possible to reach it directly by ∃I. SG5 has special application in those contexts. Thus, consider the following example.

(BP), setting up:
      1. ¬∀x¬Ax            P

      2. | ¬∃xAx           A (c, ¬E)
         | ⊥

         ∃xAx              2-__ ¬E

(BP), completed:
      1. ¬∀x¬Ax            P

      2. | ¬∃xAx           A (c, ¬E)

      3. | | Aj            A (c, ¬I)
      4. | | ∃xAx          3 ∃I
      5. | | ⊥             4,2 ⊥I

      6. | ¬Aj             3-5 ¬I
      7. | ∀x¬Ax           6 ∀I
      8. | ⊥               7,1 ⊥I

      9. ∃xAx              2-8 ¬E

Our initial goal is ∃xAx. There is no contradiction in the premises; there is no disjunction or existential in the premises; we do not see the goal in the premises; and attempts to reach the goal by ∃I are doomed to fail. So we fall through to SG5, and set up to reach the goal by ¬E. As it happens, the contradiction is not easy to get! We can think of the derivation as involving applications of either SC3 or SC4. We take up this sort of case below. For now, the important point is just the setup on the left.
Where strategies for a goal apply in the context of some determinate goal, strategies for a contradiction apply when the goal is just some contradiction, and any contradiction will do. Again, there is nothing fundamentally changed from the sentential case, though we can illustrate some special quantificational applications.
SC1. Break accessible formulas down into atomics and negated atomics. This works just as before. The only point to emphasize for the quantificational case is one we made for SG1 above: relevant atomics may be contained in quantified expressions. So going for atomics and negated atomics may include shaking quantified expressions to see what falls out. Here is a simple example.
(BQ), setting up:
      1. ¬Fa                    P

      2. | ∀x(Fx ∧ Gx)          A (c, ¬I)
         | ⊥

         ¬∀x(Fx ∧ Gx)           2-__ ¬I

(BQ), completed:
      1. ¬Fa                    P

      2. | ∀x(Fx ∧ Gx)          A (c, ¬I)
      3. | Fa ∧ Ga              2 ∀E
      4. | Fa                   3 ∧E
      5. | ⊥                    4,1 ⊥I

      6. ¬∀x(Fx ∧ Gx)           2-5 ¬I

Our strategy for the goal is SG4. For an expression with main operator ¬, we go for the goal by ¬I. We already have ¬Fa toward a contradiction at the level of atomics and negated atomics. And Fa comes from the universally quantified expression by ∀E.
SC2. Given an existential or disjunction in a subderivation for ¬E or ¬I, go for ⊥ by ∃E or ∨E (watch screened variables). Where applications of this strategy were infrequent in the sentential case, they will be much more common now. Motivation is unchanged from SG2: In your attempt to reach a contradiction, you have all the same accessible formulas as before, with the addition of the assumption. So you will (typically) be better off in your attempt to reach a contradiction. Here is an example.
(BR), setting up:
      1. ∀x¬Ax             P

      2. | ∃xAx            A (c, ¬I)

      3. | | Aj            A (c, 2∃E)
         | | ⊥

         | ⊥               2,3-__ ∃E
         ¬∃xAx             2-__ ¬I

We set up to reach the main goal by ¬I. This gives us an existentially quantified expression at (2), where the goal is a contradiction. SC2 tells us to go for ⊥ by ∃E. Observe that, because the goal is ⊥, the exit strategy is c rather than g. By application of SC1, this subderivation is easy.
      1. ∀x¬Ax             P

      2. | ∃xAx            A (c, ¬I)

      3. | | Aj            A (c, 2∃E)
      4. | | ¬Aj           1 ∀E
      5. | | ⊥             3,4 ⊥I

      6. | ⊥               2,3-5 ∃E
      7. ¬∃xAx             2-6 ¬I

With Aj on line (3) and ¬Aj contained in line (1), the derivation is easy. But as occurs with the parallel goal-directed strategy, the contradiction would not even have been possible without the assumption Aj for ∃E.
As can occur with applications of SG2, it is wise to be careful about applications of this strategy when assumptions for ∃E or ∨E screen off variables that would otherwise be available for ∀I. Here is a version of the example from before to illustrate the point.


(BS)  1. ¬∀x∃yGxy                P
      2. ∀x∀y(Fxy → Gxy)         P

      3. | ∀x∃yFxy               A (c, ¬I)
      4. | ∃yFjy                 3 ∀E

      5. | | Fjk                 A (c, 4∃E)
      6. | | ∀y(Fjy → Gjy)       2 ∀E
      7. | | Fjk → Gjk           6 ∀E
      8. | | Gjk                 7,5 →E
      9. | | ∃yGjy               8 ∃I
      10.| | ∀x∃yGxy             Mistake!
      11.| | ⊥                   10,1 ⊥I

      12.| ⊥                     4,5-11 ∃E
      13. ¬∀x∃yFxy               3-12 ¬I

(BT)  1. ¬∀x∃yGxy                P
      2. ∀x∀y(Fxy → Gxy)         P

      3. | ∀x∃yFxy               A (c, ¬I)
      4. | ∃yFjy                 3 ∀E

      5. | | Fjk                 A (g, 4∃E)
      6. | | ∀y(Fjy → Gjy)       2 ∀E
      7. | | Fjk → Gjk           6 ∀E
      8. | | Gjk                 7,5 →E
      9. | | ∃yGjy               8 ∃I

      10.| ∃yGjy                 4,5-9 ∃E
      11.| ∀x∃yGxy               10 ∀I
      12.| ⊥                     11,1 ⊥I

      13. ¬∀x∃yFxy               3-12 ¬I

In derivation (BS), we isolate the existential on line (4) and set up to go for contradiction by ∃E. But something is in fact lost when we set up for the subderivation: the variable j, which was not in any undischarged assumption and therefore available for ∀I, gets screened off by the assumption and so lost for universal generalization. So at step (10), we are blocked from using (9) and ∀I to reach the goal. Again, the problem is solved in (BT) by letting variable j pass into the subderivation and back out, where it is available for ∀I. We do this by letting the goal for ∃E be not ⊥ but rather the formula which results in ⊥, and obtaining ⊥ once we get that formula out. This simple case illustrates the sort of context where caution is required in application of SC2.
SC3. Set as goal the opposite of some negation (something that cannot itself be broken down); then apply strategies for a goal to reach it. In principle, this strategy is unchanged from before, though of course there are new applications for quantified expressions. (BT) above includes a case of this. Here is another quick example.
(BU), setting up:
      1. ¬∃xAx             P

      2. | Aj              A (c, ¬I)
         | ⊥

         ¬Aj               2-__ ¬I
         ∀x¬Ax             __ ∀I

(BU), completed:
      1. ¬∃xAx             P

      2. | Aj              A (c, ¬I)
      3. | ∃xAx            2 ∃I
      4. | ⊥               3,1 ⊥I

      5. ¬Aj               2-4 ¬I
      6. ∀x¬Ax             5 ∀I

Our strategy for the goal is SG4. We plan on reaching ∀x¬Ax by ∀I. So we set ¬Aj as a subgoal. Again the strategy for the goal is SG4, and we set up to get ¬Aj by ¬I. Other than the assumption itself, there are no atomics and negated atomics to be had. There is no existential or disjunction in the scope of the subderivation. But the premise is a negated expression. So we set ∃xAx as a goal. And this is easy, as it comes in one step by ∃I.
SC4. For some P such that both P and ¬P lead to contradiction: Assume P (¬P), obtain the first contradiction, and conclude ¬P (P); then obtain the second contradiction; this is the one you want. As in the sentential case, this strategy often coincides with SC3: in building up to the opposite of something that cannot be broken down, one assumes a P such that both P and ¬P result in contradiction. Corresponding to the pattern with ∨, this often happens when some accessible expression is a negated existential. Here is a challenging example.
(BV), setting up:
      1. ∀x(¬Ax → Kx)           P
      2. ¬∀yKy                  P

      3. | ¬∃wAw                A (c, ¬E)
         | ⊥

         ∃wAw                   3-__ ¬E

(BV), completed:
      1. ∀x(¬Ax → Kx)           P
      2. ¬∀yKy                  P

      3. | ¬∃wAw                A (c, ¬E)

      4. | | Aj                 A (c, ¬I)
      5. | | ∃wAw               4 ∃I
      6. | | ⊥                  5,3 ⊥I

      7. | ¬Aj                  4-6 ¬I
      8. | ¬Aj → Kj             1 ∀E
      9. | Kj                   8,7 →E
      10.| ∀yKy                 9 ∀I
      11.| ⊥                    10,2 ⊥I

      12. ∃wAw                  3-11 ¬E

Once we decide that we cannot get the goal directly by ∃I, the strategy for a goal falls through to SG5. And, as it turns out, both Aj and ¬Aj lead to contradiction. So we assume one and get the contradiction; this gives us the other, which leads to contradiction as well. The decision to assume Aj may seem obscure! But it is a common pattern: Given ¬∃xP, assume an instance P^x_v for some variable v, or at least something that will yield P^x_v. Then ∃I gives you ∃xP, and so the first contradiction. So you conclude ¬P^x_v, and this outside the scope of the assumption, where ∀I and the like might apply for v. In effect, you come with an instance underneath the negated existential, where the result is a negation of the instance, which has some chance to give you what you want. For another example of this pattern, see (BP) above.
Notice that such cases can also be understood as driven by applications of SC3. In (BV), we set the opposite of the formula on (2) as goal. This leads to Kj and then ¬Aj as subgoals. To reach ¬Aj, we assume Aj, and get this by building to the opposite of ¬∃wAw. And similarly in (BP).
Again, these strategies are not a cookbook for performing all derivations; doing derivations remains an art. But the strategies will give you a good start, and take you a long way through the exercises that follow, including derivation of the theorems immediately below.
T6.27. ⊢ND ∀xP → P^x_t        where term t is free for variable x in formula P

*T6.28. P → Q ⊢ND P → ∀xQ     where variable x is not free in formula P

T6.29. ⊢ND ∀xP ↔ ¬∃x¬P        for any variable x and formula P

T6.30. ⊢ND ∃xP ↔ ¬∀x¬P        for any variable x and formula P
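T6.28 in particular has a simple semantic counterpart that can be confirmed by brute force. The sketch below (an illustration only, over a hypothetical two-member domain; the theorem itself is about derivability, not truth) checks that whenever P → Qx holds for every x, with P a fixed truth value in which x is not free, P → ∀xQx holds as well:

```python
from itertools import product

# Brute-force semantic check of the T6.28 pattern on a small domain:
# if x is not free in P (so P is a fixed truth value), then from
# "P -> Qx for every x" it follows that "P -> forall x Qx".
domain = [0, 1]

for P in (False, True):
    for bits in product([False, True], repeat=len(domain)):
        Q = dict(zip(domain, bits))
        premise = all((not P) or Q[x] for x in domain)      # each P -> Qx
        conclusion = (not P) or all(Q[x] for x in domain)   # P -> forall x Qx
        assert (not premise) or conclusion

print("T6.28 pattern verified on all assignments")
```

The restriction that x not be free in P is what lets P be treated as a single truth value here; without it, the pattern fails.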

E6.29. For each of the following, (i) which strategies for a goal apply? and (ii) show
the next two steps. If the strategies call for a new subgoal, show the subgoal;
if they call for a subderivation, set up the subderivation. In each case, explain
your response. Hint: each of the strategies for a goal is used at least once.
*a. 1. ∃x∃y(Fxy ∧ Gyx)            P

       ∃x∃yFyx

b.  1. ∀y((Hy ∧ Fy) → Gy)         P
    2. ∀zFz ∧ ∀xKxb               P

       ∀x(Hx → Gx)

c.  1. ∀x(Fx → ∀y(Gy → Rxy))      P
    2. ∀x(Hx → Gx)                P
    3. Fa ∧ Hb                    P

       Rab

d.  1. ∀x∀y(Rxy → ¬Ryx)           P
    2. Raa                        P

       ∃z∃ySyz

e.  1. ∀x(Fx ∨ A)                 P

       ∃xFx ∨ A

E6.30. Each of the following sets up an application of ¬I or ¬E for SG4 or SG5. Complete the derivations, and explain your use of strategies for a contradiction. Hint: Each of the strategies for a contradiction is used at least once.
*a. 1. ¬∃x(Fx ∧ Gx)          P

    2. | Fj                  A (g, →I)

    3. | | Gj                A (c, ¬I)
       | | ⊥

       | ¬Gj                 3-__ ¬I
       Fj → ¬Gj              2-__ →I
       ∀x(Fx → ¬Gx)          __ ∀I

b.  1. ¬∀x(Fx → ∀yFy)        P

    2. | ¬∃xFx               A (c, ¬E)
       | ⊥

       ∃xFx                  2-__ ¬E

c.  1. ∀x(Fx → ∀y¬Rxy)       P
    2. Rab                   P

    3. | Fa                  A (c, ¬I)
       | ⊥

       ¬Fa                   3-__ ¬I

d.  1. ∀xFx                  P

    2. | ¬∃x(Fx ∨ A)         A (c, ¬E)
       | ⊥

       ∃x(Fx ∨ A)            2-__ ¬E

e.  1. | ∃x(Ax ↔ ¬Ax)        A (c, ¬I)
       | ⊥

       ¬∃x(Ax ↔ ¬Ax)         1-__ ¬I

E6.31. Produce derivations to show each of the following. Though no full answers are provided, strategy hints are available for the first problems. If you get the last few on your own, you are doing very well!

*a. ∀x(Bx → ¬Wx), ∃xWx ⊢ND ∃x¬Bx
*b. ∀x∀y∀zGxyz ⊢ND ∀x∀y∀z(Hxyz → Gzyx)
*c. ∀x(Ax → ∀y(Dxy ↔ Bf¹f¹y)), ∀x(Ax ∧ Bx) ⊢ND ∀xDf¹xf¹x
*d. ∀x(Hx → ∀yRxyb), ∀x∀z(Razx → Sxzz) ⊢ND Ha → ∃xSxcc
*e. ∃x(Fx ∧ Abx) ↔ ∀xKx, ∀y(∃x(Fx ∧ Abx) ∧ Ryy) ⊢ND ∀xKx
*f. ∃yByyy ⊢ND ∃x∃y∃zBxyz
*g. ∀x((Hx ∧ Kx) → Ix), ∃y(Hy ∧ Gy), ∀x(Gx ∧ Kx) ⊢ND ∃y(Iy ∧ Gy)
*h. ∀x∀y((Ry ∨ Dx) → Ky), ∀x∃y(Ax → Ky), ∃x(Ax ∨ Rx) ⊢ND ∃xKx
*i. ∀y(My → Ay), ∃x∃y((Bx ∧ Mx) ∧ (Ry ∧ Syx)), ∃xAx → ∀y∀z(Syz → Ay) ⊢ND ∃x(Rx ∧ Ax)
*j. ∀x∀y((Hby ∧ Hxb) → Hxy), ∀z(Bz → Hbz), ∃x(Bx ∧ Hxb) ⊢ND ∃z(Bz ∧ ∀y(By → Hzy))
*k. ∀x((Fx ∧ ¬Kx) → ∃y((Fy ∧ Hyx) ∧ ¬Ky)), (∀x((Fx ∧ ∀y((Fy ∧ Hyx) → Ky)) → Kx)) → Ma ⊢ND Ma
*l. ∀x∀y((Gx ∧ Gy) → (Hxy → Hyx)), ∀x∀y∀z(((Gx ∧ Gy) ∧ Gz) → ((Hxy ∧ Hyz) → Hxz)) ⊢ND ∀w((Gw ∧ ∃z(Gz ∧ Hwz)) → Hww)
*m. ∀x∀y((Ax ∧ By) → Cxy), ∃y(Ey ∧ ∀w(Hw → Cyw)), ∀x∀y∀z((Cxy ∧ Cyz) → Cxz), ∀w(Ew → Bw) ⊢ND ∀z∀w((Az ∧ Hw) → Czw)
*n. ∀x∃y∀z(Axyz ∨ Bzyx), ¬∃x∃y∃zBzyx ⊢ND ∀x∃y∀zAxyz
*o. A → ∃xFx ⊢ND ∃x(A → Fx)
*p. ∀xFx → A ⊢ND ∃x(Fx → A)
q. ∀x(Fx → Gx), ∀x∀y(Rxy → Syx), ∀x∀y(Sxy → Syx) ⊢ND ∀x(∃y(Fx ∧ Rxy) → ∃y(Gx ∧ Sxy))
r. ∃y∀xRxy, ∀x(Fx → ∃ySyx), ∀x∀y(Rxy → ¬Sxy) ⊢ND ∃x¬Fx
s. ∃x∀y((Fx ∨ Gy) → ∀z(Hxy → Hyz)), ∃z∀xHxz ⊢ND ∃y∀x(Fy → Hyx)
t. ∀x∀y(∃zHyz → Hxy) ⊢ND ∃x∃yHxy → ∀x∀yHxy
u. ∃x(Fx ∧ ∀y((Gy ∧ Hy) → Sxy)), ∀x∀y(((Fx ∧ Gy) ∧ Jy) → Sxy), ∀x∀y(((Fx ∧ Gy) ∧ ¬Rxy) → ¬Sxy), ∃x(Gx ∧ (Jx ∨ Hx)) ⊢ND ∃x∃y((Fx ∧ Gy) ∧ Rxy)
v. ⊢ND ∃x∀y(Fx → Fy)
w. ⊢ND ∃x(∃yFy → Fx)
x. ∃x∀y(∃z(Fzy → ∃wFyw) → Fxy) ⊢ND ∃xFxx
y. ⊢ND ∀x∃y∀z(∃wTxyw → ∃wTxzw)
z. ⊢ND ∀x∃y(Fx ∨ Gy) → ∃y∀x(Fx ∨ Gy)

*E6.32. Produce derivations to demonstrate each of T6.27 - T6.30, explaining for


each application how quantifier restrictions are met. Hint: You might try
working test versions where P and Q are atomics P x and Qx; then you can
think about the general case.

6.3.4 =I and =E

We complete the system ND with I- and E-rules for equality. Strictly, = is not an operator at all; it is a two-place relation symbol. However, because its interpretation is standardized across all interpretations, it is possible to introduce rules for its behavior. The =I rule is particularly simple. At any stage in a derivation, for any term t, one may write down t = t with justification =I.

=I       t = t      =I


Strictly, without any inputs, this is an axiom of the sort we encountered in chapter 3. It is a formula which may be asserted at any stage in a derivation. Its motivation should be clear. Since for any m in the universe U, ⟨m, m⟩ is in the interpretation of =, t = t is sure to be satisfied, no matter what the assignment to t might be. Thus, in Lq, a = a, x = x, and f²az = f²az are formulas that might be justified by =I.
=E is more interesting and, in practice, more useful. Say an arbitrary term is free in a formula iff every variable in it is free. Automatically, then, any term without variables is free in any formula. And say P^t/s is P where some, but not necessarily all, free instances of term t may be replaced by term s. Then, given an accessible formula P on line a and the atomic formula t = s or s = t on accessible line b, one may move to P^t/s, where s is free for all the replaced instances of t in P, with justification a,b =E.

=E    a. P                       a. P
      b. t = s                   b. s = t

         P^t/s    a,b =E            P^t/s    a,b =E

      provided that term s is free for all the replaced instances of term t in formula P

If the assignment to some terms is the same, this rule lets us replace free instances of the one term by the other in any formula. Again, the motivation should be clear. On trees, the only thing that matters about a term is the thing to which it refers. So if P with term t is satisfied, and the assignment to t is the same as the assignment to s, then P with s in place of t should be satisfied as well. When a term is not free, it is not the assignment to the term that is doing the work, but rather the way it is bound. So we restrict ourselves to contexts where it is just the assignment that matters!
Because we need not replace all free instances of one term with the other, this
rule has some special applications that are worth noticing. Consider the formulas
Raba and a D b. The following lists all the formulas that could be derived from
them in one step by =E.
1. Raba
2. a D b

(BW)

3.
4.
5.
6.
7.
8.

Rbba
Rabb
Rbbb
Raaa
aDa
bDb

P
P
1,2 =E
1,2 =E
1,2 =E
1,2 =E
2,2 =E
2,2 =E

(3) and (4) replace one instance of a with b. (5) replaces both instances of a with b. (6) replaces the instance of b with a. We could reach, say, Raab, but this would require another step, which we could take from any of (4), (5), or (6). You should be clear about why this is so. (7) and (8) are different. We have a formula a = b, and an equality a = b. In (7) we use the equality to replace one instance of b in the formula with a. In (8) we use the equality to replace one instance of a in the formula with b. Of course, (7) and (8) might equally have been derived by =I.

ND Quick Reference (Quantificational)

∀E (universal exploit)
  a. ∀xP
     P^x_t          a ∀E
  provided t is free for x in P

∃I (existential intro)
  a. P^x_t
     ∃xP            a ∃I
  provided t is free for x in P

∀I (universal intro)
  a. P^x_v
     ∀xP            a ∀I

∃E (existential exploit)
  a. ∃xP
  b. | P^x_v        A (g, a∃E)
  c. | Q
     Q              a,b-c ∃E

  provided (i) v is free for x in P, (ii) v is not free in any undischarged
  auxiliary assumption, and (iii) v is not free in ∀xP / in ∃xP or in Q

=I (equality intro)
     t = t          =I

=E (equality exploit)
  a. P              a. P
  b. t = s          b. s = t
     P^t/s  a,b =E     P^t/s  a,b =E
  provided that term s is free for all the replaced instances of term t in formula P

Notice also that =E is not restricted to atomic formulas, or to simple terms. Thus, for example,

(BX)

1. 8y.Rax ^ Kxy/
2. x D f 3 azx

P
P

3. 8y.Raf 3 azx ^ Kxy/


4. 8y.Rax ^ Kf 3 azxy/
5. 8y.Raf 3 azx ^ Kf 3 azxy/

1,2 =E
1,2 =E
1,2 =E

lists the steps that are legitimate applications of =E to (1) and (2). What we could not do is use x = f³azy with (1) to reach, say, ∀y(Raf³azy ∧ Kxy), since f³azy is not free for any instance of x in ∀y(Rax ∧ Kxy). And of course, we could not replace any instances of y in ∀y(Rax ∧ Kxy), since none of them are free.
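The "some, but not necessarily all" feature of =E can be made vivid with a rough sketch that enumerates replacement results. The toy model below treats formulas as plain strings of one-character terms and every occurrence as free (so it ignores the freeness proviso); it is only an illustration, not the official definition:

```python
from itertools import product

def eqE_results(formula: str, old: str, new: str) -> set[str]:
    # All formulas reachable by one application of =E that replaces at
    # least one occurrence of term `old` by term `new`. Formulas are
    # modeled as strings of one-character terms, all treated as free.
    spots = [i for i, ch in enumerate(formula) if ch == old]
    out = set()
    for keep in product([False, True], repeat=len(spots)):
        if not any(keep):
            continue  # skip the trivial case with no replacements
        chars = list(formula)
        for spot, replace in zip(spots, keep):
            if replace:
                chars[spot] = new
        out.add("".join(chars))
    return out

# With formula Raba and equality a = b, as in (BW):
print(sorted(eqE_results("Raba", "a", "b")))  # ['Rabb', 'Rbba', 'Rbbb']
print(sorted(eqE_results("Raba", "b", "a")))  # ['Raaa']
```

The output matches lines (3) through (6) of (BW); reaching Raab indeed requires a second application, since it replaces one a and the b at once.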
There is not much new to say about strategy, except that you should include =E among the stock of rules you use to identify what is contained in the premises. It may be that a goal is contained in the premises, when terms only need to be switched by some equality. Thus, for goal Fa, with Fb explicitly in the premises, it might be worth setting a = b as a subgoal, with the intent of using the equality to switch the terms.
Preliminary Results. Rather than dwell on strategy as such, let us consider a few substantive applications. First, you should find derivation of the following theorems straightforward. Thus, for example, T6.31 and T6.34 take just one step. The first three may remind you of axioms from chapter 3. The others represent important features of equality.

T6.31. ⊢ND x = x

*T6.32. ⊢ND (xi = y) → (hⁿx1...xi...xn = hⁿx1...y...xn)

T6.33. ⊢ND (xi = y) → (Rⁿx1...xi...xn → Rⁿx1...y...xn)

T6.34. ⊢ND t = t                                  reflexivity of equality

T6.35. ⊢ND (t = s) → (s = t)                      symmetry of equality

T6.36. ⊢ND (r = s) → ((s = t) → (r = t))          transitivity of equality

For a more substantive case, suppose we want to show that the following argument is valid in ND.

       The dog is barking                        ∃x((Dx ∧ ∀y(Dy → x = y)) ∧ Bx)
(BY)   Some dog is chasing a cat                 ∃x(Dx ∧ Cx)

       Some dog is barking and chasing a cat     ∃x(Dx ∧ (Bx ∧ Cx))

Using the methods of chapter 5, this might translate something like the argument on the right. We set out to do the derivation in the usual way.

      1. ∃x((Dx ∧ ∀y(Dy → x = y)) ∧ Bx)      P
      2. ∃x(Dx ∧ Cx)                          P

      3. | (Dj ∧ ∀y(Dy → j = y)) ∧ Bj        A (g, 1∃E)

      4. | | Dk ∧ Ck                          A (g, 2∃E)
         | | ...
         | | Dj ∧ (Bj ∧ Cj)
         | | ∃x(Dx ∧ (Bx ∧ Cx))              __ ∃I

         | ∃x(Dx ∧ (Bx ∧ Cx))                2,4-__ ∃E
         ∃x(Dx ∧ (Bx ∧ Cx))                  1,3-__ ∃E

Given two existentials in the premises, we set up to get the goal by two applications of ∃E. And we can get the conclusion from Dj ∧ (Bj ∧ Cj) by ∃I. Dj and Bj are easy to get from (3). But we do not have Cj. What we have is rather Ck. The existentials in the assumptions are instantiated to different (new) variables, and they must be so instantiated if we are to meet the constraints on ∃E. From ∃xP and ∃xQ it does not follow that any one thing is both P and Q. In this case, however, we are given that there is just one dog. And we can use this to force an equivalence between j and k. Then we get the result by =E.
      1. ∃x((Dx ∧ ∀y(Dy → x = y)) ∧ Bx)      P
      2. ∃x(Dx ∧ Cx)                          P

      3. | (Dj ∧ ∀y(Dy → j = y)) ∧ Bj        A (g, 1∃E)

      4. | | Dk ∧ Ck                          A (g, 2∃E)
      5. | | Bj                               3 ∧E
      6. | | Dj ∧ ∀y(Dy → j = y)             3 ∧E
      7. | | Dj                               6 ∧E
      8. | | ∀y(Dy → j = y)                  6 ∧E
      9. | | Dk → j = k                      8 ∀E
      10.| | Dk                               4 ∧E
      11.| | j = k                            9,10 →E
      12.| | Ck                               4 ∧E
      13.| | Cj                               12,11 =E
      14.| | Bj ∧ Cj                          5,13 ∧I
      15.| | Dj ∧ (Bj ∧ Cj)                  7,14 ∧I
      16.| | ∃x(Dx ∧ (Bx ∧ Cx))              15 ∃I

      17.| ∃x(Dx ∧ (Bx ∧ Cx))                2,4-16 ∃E
      18. ∃x(Dx ∧ (Bx ∧ Cx))                 1,3-17 ∃E

Though there are a few steps, the work to get it done is simple. This is a very common pattern: Arbitrary individuals are introduced as if they were distinct. But uniqueness clauses let us establish an identity between them. Given this, facts about the one transfer to the other by =E.
At this stage, it would be appropriate to take on E6.33 and E6.34.
At this stage, it would be appropriate to take on E6.33 and E6.34.
Robinson Arithmetic, Q. A very important application, already encountered in chapter 3, is to mathematics. For this, LNT is like L<NT from section 2.2.5 (p. 61) but without <. There is the constant symbol ∅, the function symbols S, + and ×, and the relation symbol =. Let s ≤ t abbreviate ∃v(v + s = t), and s < t abbreviate ∃v(Sv + s = t), where v is a variable that does not appear in s or t. We shall also require a species of bounded quantifiers. So (∀x ≤ t)P abbreviates ∀x(x ≤ t → P) and (∃x ≤ t)P abbreviates ∃x(x ≤ t ∧ P), and similarly for (∀x < t)P and (∃x < t)P, where x does not occur in t.
Observe that simple derived introduction and exploitation rules are possible for
the bounded quantifiers. So, for example,

(∀E)   a. (∀x < t)P
       b. s < t
       ───────────
       P^x_s                 provided s is free for x in P

(∃I)   a. P^x_s
       b. s < t
       ───────────
       (∃x < t)P             provided s is free for x in P

(∀I)   a. | v < t
          | ...
          | P^x_v
       ───────────
       (∀x < t)P

(∃E)   a. (∃x < t)P
       b. | P^x_v
       c. | v < t
          | ...
          | Q
       ───────────
       Q

       For (∀I) and (∃E): provided v is free for x in P, not free in any
       undischarged assumption, and not free in the quantified expression or Q

So, for example, for (∀E), unabbreviation and then ∀E with →E give the desired
result. The other cases are just as easy, and left as an exercise.
Officially, formulas of LNT may be treated as uninterpreted. It is natural, however,
to think of them with their usual meanings, with ∅ for zero, S the successor function,
+ the addition function, × the multiplication function, and = the equality relation.
But, again, we do not need to think about that for now.
We will say that a formula P is an ND theorem of Robinson Arithmetic just
in case P follows in ND from a collection of premises which includes any of the
following formulas.5

Q    1. ¬(Sx = ∅)

5 After R. Robinson, "An Essentially Undecidable Axiom System." These axioms are presented
as formulas with free variables. But given ∀I and ∀E, they are equivalent to universally quantified
forms, as derived at T6.37(2) and T6.38(3) below, and we might as well have stated the axioms as
universally quantified sentences.


LNT reference

Vocabulary:
  constant: ∅
  one-place function symbol: S
  two-place function symbols: +, ×
  relation symbol: =

Abbreviations:
  s ≤ t abbreviates ∃v(v + s = t)
  s < t abbreviates ∃v(Sv + s = t)
    where v does not appear in s or t
  (∀x ≤ t)P abbreviates ∀x(x ≤ t → P)
  (∀x < t)P abbreviates ∀x(x < t → P)
  (∃x ≤ t)P abbreviates ∃x(x ≤ t ∧ P)
  (∃x < t)P abbreviates ∃x(x < t ∧ P)
    where x does not appear in t

In ND, the bounded quantifiers have natural derived introduction and exploitation rules
(∀E), (∀I), (∃E), (∃I), along with a bounded quantifier negation rule BQN. In addition, on
the standard interpretation for number theory there are derived semantic conditions for the
inequalities (T12.5) and for the bounded quantifiers (T12.6 and T12.7).

     2. (Sx = Sy) → (x = y)
     3. (x + ∅) = x
     4. (x + Sy) = S(x + y)
     5. (x × ∅) = ∅
     6. (x × Sy) = (x × y) + x
     7. ¬(x = ∅) → ∃y(x = Sy)
In the ordinary case we suppress mention of Q1 - Q7 as premises, and simply write
Q ⊢ND P to indicate that P is an ND consequence of the Robinson axioms: that
there is an ND derivation of P which may include appeal to any of Q1 - Q7.
The axioms set up a basic version of arithmetic on the non-negative integers.
Intuitively, ∅ is not the successor of any non-negative integer (Q1); if the successor
of x is the same as the successor of y, then x is y (Q2); x plus ∅ is equal to x (Q3);
x plus one more than y is equal to one more than x plus y (Q4); x times ∅ is equal to
∅ (Q5); x times one more than y is equal to x times y plus x (Q6); and any number
other than ∅ is a successor (Q7).
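Read computationally, Q3 - Q6 are just the recursion equations for addition and
multiplication on successor numerals. The following sketch (illustrative only, not
from the text; the helper names `numeral` and `value` are my own) computes with
numerals exactly by those clauses:

```python
# Peano-style numerals: n is represented by n nested applications of the
# successor constructor S over ZERO. add and mul recurse on their second
# argument, exactly mirroring the clauses of Q3-Q6.

ZERO = ()

def S(n):
    return ("S", n)

def add(x, y):
    if y == ZERO:                    # Q3: x + 0 = x
        return x
    return S(add(x, y[1]))           # Q4: x + Sy = S(x + y)

def mul(x, y):
    if y == ZERO:                    # Q5: x * 0 = 0
        return ZERO
    return add(mul(x, y[1]), x)      # Q6: x * Sy = (x * y) + x

def numeral(n):
    """Build the numeral SS...S(0) with n successors."""
    return ZERO if n == 0 else S(numeral(n - 1))

def value(t):
    """Count the successors in a numeral."""
    k = 0
    while t != ZERO:
        k, t = k + 1, t[1]
    return k

print(value(add(numeral(1), numeral(1))))   # 2  (compare T6.44: S0 + S0 = SS0)
print(value(mul(numeral(2), numeral(3))))   # 6
```

This is of course a model of the axioms, not a derivation in Q; but it is useful for
seeing why the axioms suffice to settle arbitrary facts about particular numbers.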
If some P is derived directly from some of Q1 - Q7 then it is trivially an ND
theorem of Robinson Arithmetic. But if the members of a set Γ are ND theorems of
Robinson Arithmetic, and Γ ⊢ND P, then P is an ND theorem of Robinson Arithmetic
as well, for any derivation of P from some theorems might be extended into one
which derives the theorems, and then goes on from there to obtain P. In the
ordinary case, then, we build to increasingly complex results: having once
demonstrated a theorem by a derivation, we feel free simply to cite it as a premise
in the next derivation. So the collection of formulas we count as premises increases
from one derivation to the next.
Though the application to arithmetic is interesting, there is in principle nothing
different about derivations for Q from ones we have done before: We are moving
from premises to a goal. As we make progress, however, there will be an increasing
number of premises available, and it may be relatively challenging to recognize which
premises are relevant to a given goal.
Let us start with some simple generalizations of Q1 - Q7. As they are stated,
Q1 - Q7 are forms involving variables. But they permit derivation of corresponding
principles for arbitrary terms s and t. The derivations all follow the same ∀I, ∀E
pattern.
T6.37. Q ⊢ND ¬(St = ∅)

 1. ¬(Sx = ∅)        Q1
 2. ∀x¬(Sx = ∅)      1 ∀I
 3. ¬(St = ∅)        2 ∀E

Observe that since ¬(Sx = ∅) has no quantifiers, term t is sure to be free for x in
¬(Sx = ∅). So there is no problem about the restriction on ∀E. And since t is any
term, substituting ∅ and (S∅ + y) and the like for t, we have that ¬(S∅ = ∅),
¬(S(S∅ + y) = ∅), and the like are all instances of T6.37. The next theorems are
similar.
T6.38. Q ⊢ND (St = Ss) → (t = s)

 1. (Sx = Sy) → (x = y)              Q2
 2. ∀u((Su = Sy) → (u = y))          1 ∀I
 3. ∀v∀u((Su = Sv) → (u = v))        2 ∀I
 4. ∀u((Su = Ss) → (u = s))          3 ∀E
 5. (St = Ss) → (t = s)              4 ∀E

Observe that for (4) it is important that term s not include any variable u. Thus for
this derivation we simply choose u so that it is not a variable in s.
*T6.39. Q ⊢ND (t + ∅) = t

T6.40. Q ⊢ND (t + Ss) = S(t + s)

T6.41. Q ⊢ND (t × ∅) = ∅

T6.42. Q ⊢ND (t × Ss) = ((t × s) + t)

T6.43. Q ⊢ND ¬(t = ∅) → ∃y(t = Sy)
    where variable y does not appear in t

Given these results, we are ready for some that are more interesting. Let us show
that 1 + 1 = 2. That is, that S∅ + S∅ = SS∅.
T6.44. Q ⊢ND S∅ + S∅ = SS∅

 1. (S∅ + S∅) = S(S∅ + ∅)        T6.40
 2. (S∅ + ∅) = S∅                T6.39
 3. (S∅ + S∅) = SS∅              1,2 =E

Given the premises, this derivation is simple. Given that (S∅ + ∅) = S∅ from (2),
we can replace S∅ + ∅ with S∅ by =E. This is just what we do, substituting into the
first premise. The first premise is an instance of T6.40 that has S∅ for t and ∅ for s.
(2) is an instance of T6.39 with S∅ for t. Be sure you understand each step.
Observe the way Q3 and Q4 work together: Q3 (T6.39) gives the sum of any
term with zero; and given the sum of a term with any number, Q4 (T6.40) gives the
sum of that term and one more than it. So we can calculate the sum of a term and
zero from T6.39, and then with T6.40 get the sum of it and one, then it and two, and
so forth. So, for example,

(BZ)
 1. (SS∅ + SSS∅) = S(SS∅ + SS∅)      T6.40
 2. (SS∅ + SS∅) = S(SS∅ + S∅)        T6.40
 3. (SS∅ + S∅) = S(SS∅ + ∅)          T6.40
 4. (SS∅ + ∅) = SS∅                  T6.39
 5. (SS∅ + S∅) = SSS∅                3,4 =E
 6. (SS∅ + SS∅) = SSSS∅              2,5 =E
 7. (SS∅ + SSS∅) = SSSSS∅            1,6 =E

From T6.40, 2 + 3 depends on 2 + 2; but then 2 + 2 depends on 2 + 1; 2 + 1 on 2
+ 0; and we get the latter directly. So starting with T6.39, we work our way up to
the result we want. And similarly for multiplication: Q5 (T6.41) gives the product
of any term with zero; and given the product of a term with any number, Q6 (T6.42)
gives the product of that term and one more than it. So we can calculate the product
of a term and zero from T6.41, and then with T6.42 get the product of it and one, it
and two, and so forth. Here is a general result of the same type.
T6.45. Q ⊢ND t + S∅ = St

    Hint: You can do this in three lines.

Of course, we may manipulate other operators in the usual way.
(CA)
 1. (j + Sk) = S(j + k)                            T6.40
 2. | ∃y(j + y = S∅)                               A (g, →I)
 3. | | j + k = S∅                                 A (g, 2∃E)
 4. | | j + Sk = SS∅                               1,3 =E
 5. | | ∃y(j + y = SS∅)                            4 ∃I
 6. | ∃y(j + y = SS∅)                              2,3-5 ∃E
 7. ∃y(j + y = S∅) → ∃y(j + y = SS∅)               2-6 →I
 8. ∀x(∃y(x + y = S∅) → ∃y(x + y = SS∅))           7 ∀I

The basic setup for ∀I, →I, and ∃E is by now routine. The real work is where we use
(1) and (3) to obtain j + Sk = SS∅. Before, we have used T6.40 with application
to closed terms without free variables, built up from ∅. But nothing stops this
application of the theorem in its generic (original) form. Here are a couple of
theorems that will be of interest later.
T6.46. Q ⊢ND ∀x(x ≤ ∅ → x = ∅)

    Hints: Be sure you are clear about what is being asked for; at some stage,
    you will need to unpack the abbreviation. Do not forget that you can appeal
    to T6.37 and T6.43.


T6.47. Q ⊢ND ∀x¬(x < ∅)

    Hint: This reduces to a difficult application of SC4. From ∃v(Sv + j = ∅),
    and using T6.43, assume ¬(j = ∅) to obtain a first contradiction; and you will
    be able to obtain contradiction from j = ∅ as well. You will need a couple of
    applications of SC2 to extract contradictions from applications of ∃E.

With this much, you should be able to work E6.35 now.
Robinson Arithmetic is interesting. Its axioms are sufficient to prove arbitrary
facts about particular numbers. Its language and derivation system are just strong
enough to support Gödel's incompleteness result, on which it is not possible for a
nicely specified theory including a sufficient amount of arithmetic to have as
consequences P or ¬P for every P (Part IV). But we do not need Gödel's result to
see that Robinson Arithmetic is incomplete: It turns out that many true
generalizations are not provable in Robinson Arithmetic. So, for example, neither
∀x∀y((x × y) = (y × x)) nor its negation is provable.6 So Robinson Arithmetic is a
particularly weak theory.
Peano Arithmetic. Though Robinson Arithmetic leaves even standard results like
commutation for multiplication unproven, it is possible to strengthen the derivation
system to obtain such results. Thus such standard generalizations are provable in
Peano Arithmetic.7 For this, let PA1 - PA6 be the same as Q1 - Q6. Replace Q7 as
follows. For any formula P,

PA7    (P^x_∅ ∧ ∀x(P → P^x_Sx)) → ∀xP

is an axiom. If a formula P applies to ∅, and for any x, if P applies to x then it
also applies to Sx, then P applies to every x. This schema represents the principle
of mathematical induction. We will have much more to say about the principle of
mathematical induction in Part II. For now, it is enough merely to recognize its
instances. Thus, for example, if P is ¬(x = Sx), then P^x_∅ is ¬(∅ = S∅), and
P^x_Sx is ¬(Sx = SSx). So,

(¬(∅ = S∅) ∧ ∀x(¬(x = Sx) → ¬(Sx = SSx))) → ∀x¬(x = Sx)
6 A semantic demonstration of this negative result is left as an exercise for chapter 7. But we
already understand the basic idea from chapter 4: To show that a conclusion does not follow, produce
an interpretation on which the axioms are true, but the conclusion is not. The connection between
derivations and the semantic results must wait for chapter 10.
7 After the work of R. Dedekind and G. Peano. For historical discussion, see Wang, "The
Axiomatization of Arithmetic."


is an instance of the scheme. You should see why this is so.

It will be convenient to have the principle of mathematical induction in a rule
form. Given P^x_∅ and ∀x(P → P^x_Sx) on accessible lines a and b, one may move
to ∀xP with justification a,b IN.

IN    a. P^x_∅
      b. ∀x(P → P^x_Sx)
      ──────────────────
      ∀xP        a,b IN

 1. P^x_∅                                   P
 2. ∀x(P → P^x_Sx)                          P
 3. (P^x_∅ ∧ ∀x(P → P^x_Sx)) → ∀xP          PA7
 4. P^x_∅ ∧ ∀x(P → P^x_Sx)                  1,2 ∧I
 5. ∀xP                                     3,4 →E

The rule is justified from PA7 by reasoning as in the derivation above. That is,
given P^x_∅ and ∀x(P → P^x_Sx) on accessible lines, one can always conjoin them,
then with an instance of PA7 as a premise reach ∀xP by →E. The use of IN merely
saves a couple of steps, and avoids some relatively long formulas we would have to
deal with using PA7 alone. Thus, from our previous example, to apply IN we need
P^x_∅ and ∀x(P → P^x_Sx) to move to ∀xP. So, if P is ¬(x = Sx), we would need
¬(∅ = S∅) and ∀x(¬(x = Sx) → ¬(Sx = SSx)) to move to ∀x¬(x = Sx) by
IN. You should see that this is no different from before.
In this system, there is no need for an axiom like Q7, insofar as we shall be able
to derive it with the aid of PA7. That is, for y not in t we shall be able to show,

T6.48. PA ⊢ND ¬(t = ∅) → ∃y(t = Sy)

    Since it is to follow from PA1 - PA7, the proof must, of course, not depend
    on Q7, and so not on any of T6.43, T6.46, or T6.47.

But T6.48 has Q7 as an instance. Given this, any ND theorem of Q is automatically
an ND theorem of PA: we can derive T6.48, and use it as it would have been
used in a derivation for Q. We thus freely use any theorem from Q in the derivations
that follow.
With these axioms, including the principle of mathematical induction, in hand,
we set out to show some general principles of commutativity, associativity, and
distribution for addition and multiplication. But we build gradually to them. For a
first application of IN, let P be (∅ + x) = x; then P^x_∅ is (∅ + ∅) = ∅ and P^x_Sx
is (∅ + Sx) = Sx.

T6.49. PA ⊢ND (∅ + t) = t

 1. (∅ + ∅) = ∅                              T6.39
 2. (∅ + Sj) = S(∅ + j)                      T6.40
 3. | (∅ + j) = j                            A (g, →I)
 4. | (∅ + Sj) = Sj                          2,3 =E
 5. (∅ + j) = j → (∅ + Sj) = Sj              3-4 →I
 6. ∀x((∅ + x) = x → (∅ + Sx) = Sx)          5 ∀I
 7. ∀x(∅ + x) = x                            1,6 IN
 8. (∅ + t) = t                              7 ∀E

The key to this derivation, and others like it, is bringing IN into play. That we want
to do this is sufficient to drive us to the following as setup.

(CB)
    (∅ + ∅) = ∅                               (goal)
    | (∅ + j) = j                             A (g, →I)
    | (∅ + Sj) = Sj                           (goal)
    (∅ + j) = j → (∅ + Sj) = Sj               →I
    ∀x((∅ + x) = x → (∅ + Sx) = Sx)           ∀I
    ∀x(∅ + x) = x                             IN
    (∅ + t) = t                               ∀E

Our aim is to get the goal by ∀E from ∀x(∅ + x) = x. And we will get this by
IN. So we need the inputs to IN: P^x_∅, that is, (∅ + ∅) = ∅, and ∀x(P → P^x_Sx),
that is, ∀x((∅ + x) = x → (∅ + Sx) = Sx). As is often the case, P^x_∅, here
(∅ + ∅) = ∅, is easy to get. It is natural to get the latter by ∀I from (∅ + j) = j →
(∅ + Sj) = Sj, and to go for this by →I. The work of the derivation is reaching
our two goals. But that is not hard. The first is an immediate instance of T6.39. And
the second follows from the equality on (3), with an instance of T6.40. We are in a
better position to think about which (axioms or) theorems we need as premises once
we have gone through this standard setup for IN. We will see this pattern over and
over.
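The role of IN here can also be seen from outside the proof system. With addition
defined by the Q3/Q4 recursion (which recurses on the second argument), (∅ + x) = x
is not a definitional identity, which is exactly why T6.49 needs IN. The sketch
below (illustrative only, not from the text) checks the zero case and finitely many
instances of the inductive step; such finite checking is of course no substitute for
the derivation, which covers every case at once:

```python
# P(x) is (0 + x) = x, where + is defined by the Q3/Q4 recursion on the
# SECOND argument. So add(ZERO, x) == x is not definitional; establishing
# it for all x is what the derivation by IN accomplishes (T6.49).

ZERO = ()

def S(n):
    return ("S", n)

def add(x, y):
    return x if y == ZERO else S(add(x, y[1]))  # Q3 / Q4

def P(x):
    return add(ZERO, x) == x

def numeral(n):
    return ZERO if n == 0 else S(numeral(n - 1))

# zero case: P(0)
assert P(ZERO)

# inductive step, sampled instances: P(n) -> P(Sn)
for k in range(50):
    n = numeral(k)
    assert (not P(n)) or P(S(n))

print("zero case and sampled inductive steps hold")
```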
T6.50. PA ⊢ND (St + ∅) = S(t + ∅)

 1. (St + ∅) = St              T6.39
 2. (t + ∅) = t                T6.39
 3. (St + ∅) = S(t + ∅)        1,2 =E

This simple derivation results by using the equality on (2) to justify a substitution
for t in (1). This result forms the zero case for the one that follows.

T6.51. PA ⊢ND (St + s) = S(t + s)

 1. (St + ∅) = S(t + ∅)                                    T6.50
 2. (t + Sj) = S(t + j)                                    T6.40
 3. (St + Sj) = S(St + j)                                  T6.40
 4. | (St + j) = S(t + j)                                  A (g, →I)
 5. | (St + Sj) = SS(t + j)                                3,4 =E
 6. | (St + Sj) = S(t + Sj)                                5,2 =E
 7. (St + j) = S(t + j) → (St + Sj) = S(t + Sj)            4-6 →I
 8. ∀x((St + x) = S(t + x) → (St + Sx) = S(t + Sx))        7 ∀I
 9. ∀x(St + x) = S(t + x)                                  1,8 IN
10. (St + s) = S(t + s)                                    9 ∀E

Again, the idea is to bring IN into play. Here P is (St + x) = S(t + x). Given that
we have the zero case on line (1), with standard setup, the derivation reduces to
obtaining the formula on (6) given the assumption on (4). Line (6) is like (3) except
for the right-hand side. So it is a matter of applying the equalities on (4) and (2) to
reach the goal. You should study this derivation, to be sure that you follow the
applications of =E. If you do, you are managing some reasonably complex
applications of the rule!
T6.52. PA ⊢ND (t + s) = (s + t)        commutativity of addition

 1. (t + ∅) = t                                        T6.39
 2. (∅ + t) = t                                        T6.49
 3. (t + Sj) = S(t + j)                                T6.40
 4. (Sj + t) = S(j + t)                                T6.51
 5. (t + ∅) = (∅ + t)                                  1,2 =E
 6. | (t + j) = (j + t)                                A (g, →I)
 7. | (t + Sj) = S(j + t)                              3,6 =E
 8. | (t + Sj) = (Sj + t)                              7,4 =E
 9. (t + j) = (j + t) → (t + Sj) = (Sj + t)            6-8 →I
10. ∀x((t + x) = (x + t) → (t + Sx) = (Sx + t))        9 ∀I
11. ∀x(t + x) = (x + t)                                5,10 IN
12. (t + s) = (s + t)                                  11 ∀E

Again the derivation is by IN, where P is (t + x) = (x + t). We achieve the zero
case on (5) from (1) and (2). So the derivation reduces to getting (8) given the
assumption on (6). The left-hand side of (8) is like (3). So it is a matter of applying
the equalities on (6) and then (4) to reach the goal. Very often the challenge in these
cases is not so much doing the derivations, as organizing in your mind which
equalities you have, and which are required to reach the goal.


T6.52 is an interesting result! No doubt, you have heard from your mother's knee
that (t + s) = (s + t). But it is a sweeping claim with application to all numbers.
Surely you have not been able to test every case. But here we have a derivation of
the result, from the Peano Axioms. And similarly for results that follow. Now that
you have this result, recognize that you can use instances of it to switch around terms
in additions just as you would have done automatically for addition in elementary
school.
*T6.53. PA ⊢ND (r + s) + ∅ = r + (s + ∅)

    Hint: Begin with ((r + s) + ∅) = (r + s) as an instance of T6.39. The
    derivation is then a simple matter of using T6.39 again to replace s on the
    right-hand side with s + ∅.
*T6.54. PA ⊢ND (r + s) + t = r + (s + t)        associativity of addition

    Hint: For an application of IN let P be (r + s) + x = r + (s + x). You
    already have the zero case from T6.53. Inside the subderivation for →I, use
    the assumption together with some instances of T6.40 to reach the goal.

Again, once you have this result, be aware that you can use its instances for
association as you would have done long ago. It is good to think about what the
different theorems give you, so that you can make sense of what to use where!
T6.55. PA ⊢ND (t × S∅) = t

    Hint: This does not require IN. It is rather a simple result which you can do
    in just five lines.

T6.56. PA ⊢ND (∅ × t) = ∅

    Hint: For an application of IN, let P be (∅ × x) = ∅. The derivation is easy
    enough with an application of T6.41 for the zero case, and instances of T6.42
    and T6.39 for the main result.

T6.57. PA ⊢ND (St × ∅) = (t × ∅) + ∅

    Hint: This does not require IN. It follows rather by some simple applications
    of T6.39 and T6.41.


T6.58. PA ⊢ND (St × s) = (t × s) + s

    Hint: For this longish derivation, plan to reach the goal through IN where P
    is (St × x) = (t × x) + x. You will be able to use your assumption for →I
    with an instance of T6.42 to show (St × Sj) = ((t × j) + j) + St. And you
    should be able to use associativity and the like to manipulate the right-hand
    side into the result you want. You will need several theorems as premises.

T6.59. PA ⊢ND (t × s) = (s × t)        commutativity for multiplication

    Hint: Plan on reaching the goal by IN where P is (t × x) = (x × t). Apart
    from theorems for the zero case, you will need an instance of T6.42, and an
    instance of T6.58.

T6.60. PA ⊢ND r × (s + ∅) = (r × s) + (r × ∅)

    Hint: You will not need IN for this.

T6.61. PA ⊢ND r × (s + t) = (r × s) + (r × t)        distributivity

    Hint: Plan on reaching the goal by IN where P is r × (s + x) = (r × s) + (r × x).
    Perhaps the simplest thing is to start with r × (s + Sj) = r × (s + Sj) by =I.
    Then the left side is what you want, and you can work on the right. Working
    on the right-hand side, (s + Sj) = S(s + j) by T6.40. And r × S(s + j) =
    (r × (s + j)) + r by T6.42. With this, you will be able to apply the assumption
    for →I. And further simplification should get you to your goal.

T6.62. PA ⊢ND (s + t) × r = (s × r) + (t × r)        distributivity

    Hint: You will not need IN for this. Rather, it is enough to use T6.61 with a
    few applications of T6.59.

T6.63. PA ⊢ND (r + s) × (t + u) = ((r × t) + (r × u)) + ((s × t) + (s × u))

    Hint: This is a simple application of distributivity.

T6.64. PA ⊢ND (s × t) × ∅ = s × (t × ∅)

    Hint: This is easy without an application of IN.


T6.65. PA ⊢ND (s × t) × r = s × (t × r)        associativity of multiplication

    Hint: Go after the goal by IN where P is (s × t) × x = s × (t × x). You
    should be able to use the assumption with T6.42 to show that (s × t) × Sj =
    (s × (t × j)) + (s × t); then you can reduce the right-hand side to what you
    want.

T6.66. PA ⊢ND (r + t = s + t) → r = s        cancellation law for addition

T6.67. PA ⊢ND (s ≠ ∅ ∧ t × s = r × s) → t = r        cancellation law for multiplication

After you have completed the exercises, if you are looking for more to do, you might
take a look at the additional results from T13.13 on p. 615. These are more theorems
of the sort you are prepared to work at this stage.
Peano Arithmetic is thus sufficient for results of ordinary arithmetic that we could
not obtain in Q alone. However, insofar as it includes the language and results of
Q, it too is sufficient for Gödel's incompleteness theorem. So PA is not complete,
and it is not possible for a nicely specified theory including PA to be such that it
proves either P or ¬P for every P. But such results must wait for later.
*E6.33. Produce derivations to show T6.31 - T6.36. Hint: it may help to begin with
concrete versions of the theorems and then move to the general case. Thus,
for example, for T6.32, show that ⊢ND (y = j) → (g³xyz = g³xjz). Then
you will be able to show the general case.

E6.34. Produce derivations to show each of the following.

*a. ⊢ND ∀x∃y(x = y)
b. ⊢ND ∀x∃y(f¹x = y)
c. ⊢ND ∀x∀y((Fx ∧ ¬Fy) → ¬(x = y))
d. ∀x(Rxa → x = c), ∀x(Rxb → x = d), ∃x(Rxa ∧ Rxb) ⊢ND c = d
e. ⊢ND ∀x((f¹x = x) → ∀y((f¹x = y) → (x = y)))
f. ⊢ND ∀x∀y((f¹x = y ∧ f¹y = x) → f¹f¹x = x)


Robinson and Peano Arithmetic (ND)

Q/PA
  1. ¬(Sx = ∅)
  2. (Sx = Sy) → (x = y)
  3. (x + ∅) = x
  4. (x + Sy) = S(x + y)
  5. (x × ∅) = ∅
  6. (x × Sy) = (x × y) + x

Q7     ¬(x = ∅) → ∃y(x = Sy)
PA7    (P^x_∅ ∧ ∀x(P → P^x_Sx)) → ∀xP

IN (derived from PA7)
  a. P^x_∅
  b. ∀x(P → P^x_Sx)
  ──────────────────
  ∀xP        a,b IN

T6.37 Q ⊢ND ¬(St = ∅)
T6.38 Q ⊢ND (St = Ss) → (t = s)
T6.39 Q ⊢ND (t + ∅) = t
T6.40 Q ⊢ND (t + Ss) = S(t + s)
T6.41 Q ⊢ND (t × ∅) = ∅
T6.42 Q ⊢ND (t × Ss) = ((t × s) + t)
T6.43 Q ⊢ND ¬(t = ∅) → ∃y(t = Sy)        where variable y does not appear in t
T6.44 Q ⊢ND S∅ + S∅ = SS∅
T6.45 Q ⊢ND t + S∅ = St
T6.46 Q ⊢ND ∀x(x ≤ ∅ → x = ∅)
T6.47 Q ⊢ND ∀x¬(x < ∅)
T6.48 PA ⊢ND ¬(t = ∅) → ∃y(t = Sy)       (y not in t) and so Q7
T6.49 PA ⊢ND (∅ + t) = t
T6.50 PA ⊢ND (St + ∅) = S(t + ∅)
T6.51 PA ⊢ND (St + s) = S(t + s)
T6.52 PA ⊢ND (t + s) = (s + t)           commutativity of addition
T6.53 PA ⊢ND (r + s) + ∅ = r + (s + ∅)
T6.54 PA ⊢ND (r + s) + t = r + (s + t)   associativity of addition
T6.55 PA ⊢ND (t × S∅) = t
T6.56 PA ⊢ND (∅ × t) = ∅
T6.57 PA ⊢ND (St × ∅) = (t × ∅) + ∅
T6.58 PA ⊢ND (St × s) = (t × s) + s
T6.59 PA ⊢ND (t × s) = (s × t)           commutativity for multiplication
T6.60 PA ⊢ND r × (s + ∅) = (r × s) + (r × ∅)
T6.61 PA ⊢ND r × (s + t) = (r × s) + (r × t)        distributivity
T6.62 PA ⊢ND (s + t) × r = (s × r) + (t × r)        distributivity
T6.63 PA ⊢ND (r + s) × (t + u) = ((r × t) + (r × u)) + ((s × t) + (s × u))
T6.64 PA ⊢ND (s × t) × ∅ = s × (t × ∅)
T6.65 PA ⊢ND (s × t) × r = s × (t × r)   associativity of multiplication
T6.66 PA ⊢ND (r + t = s + t) → r = s     cancellation law for addition
T6.67 PA ⊢ND (s ≠ ∅ ∧ t × s = r × s) → t = r        cancellation law for multiplication


g. ∃x∃yHxy, ∀y∀z(Dyz ↔ Hzy), ∀x∀y(¬Hxy ∨ x = y)
    ⊢ND ∃x(Hxx ∧ Dxx)
h. ∀x∀y((Rxy ∧ Ryx) → x = y), ∀x∀y(Rxy → Ryx)
    ⊢ND ∀x(∃y(Rxy ∨ Ryx) → Rxx)
i. ∃x∀y(x = y ↔ Fy), ∀x(Gx → Fx) ⊢ND ∀x∀y((Gx ∧ Gy) → x = y)
j. ∀x(Fx → ∃y(Gyx ∧ ¬Gxy)), ∀x∀y((Fx ∧ Fy) → x = y)
    ⊢ND ∀x(Fx → ∃y¬Fy)
k. ∃xFx, ∀x∀y(x = y ∨ ¬(Fx ∧ Fy)) ⊢ND ∃x∀y(x = y ↔ Fy)
*E6.35. Produce derivations to show derived rules for the bounded quantifiers along
with T6.39 - T6.43, T6.45 - T6.47, and each of the following. You should hold
off on derivations for T6.46 and T6.47 until the end. For any problem, you
may appeal to results before.

*a. Q ⊢ND (SS∅ + S∅) = SSS∅
b. Q ⊢ND (SS∅ + SS∅) = SSSS∅
c. Q ⊢ND (∅ + S∅) = S∅
d. Q ⊢ND (S∅ × S∅) = S∅
e. Q ⊢ND (SS∅ × SS∅) = SSSS∅
    Hint: You may decide some preliminary results will be helpful.
*f. Q ⊢ND ¬∃x(x + SS∅ = S∅)
    Hint: Do not forget that you can appeal to T6.37 and T6.38.
g. Q ⊢ND ∀x((x = ∅ ∨ x = S∅) → x ≤ S∅)
h. Q ⊢ND ∀x((x = ∅ ∨ x = S∅) → x < SS∅)
i. Q ⊢ND (∀x ≤ S∅)(x = ∅ ∨ x = S∅)
    Hint: You will be able to use T6.46 to show that if a + b = ∅ then b = ∅.
j. Q ⊢ND (∀x ≤ S∅)(x ≤ SS∅)
    Hint: You may find the previous result helpful.


*E6.36. Produce derivations to show T6.53 - T6.67.

E6.37. Produce a derivation to show T6.48, and so that any ND theorem of Q is an
ND theorem of PA. Hint: For an application of IN let P be ¬(x = ∅) →
∃y(x = Sy).

6.4 The system ND+

ND+ includes all the rules of ND, with four new inference rules, and some new
replacement rules. It is not possible to derive anything in ND+ that cannot already
be derived in ND. Thus the new rules do not add extra derivation power. They are
rather shortcuts for things that can already be done in ND. This is particularly
obvious in the case of the inference rules.

For the first, suppose in an ND derivation we have P → Q and ¬Q and want to
reach ¬P. No doubt, we would proceed as follows.
(CC)
 1. P → Q        P
 2. ¬Q           P
 3. | P          A (c, ¬I)
 4. | Q          1,3 →E
 5. | ⊥          4,2 ⊥I
 6. ¬P           3-5 ¬I

We assume P, get the contradiction, and conclude by ¬I. Perhaps you have done
this so many times that you can do it in your sleep. In ND+ you are given a way
to shortcut the routine, and go directly from an accessible P → Q on a, and an
accessible ¬Q on b, to ¬P with justification a,b MT (modus tollens).

MT    a. P → Q
      b. ¬Q
      ─────────
      ¬P        a,b MT

The justification for this is that the rule does not let you do anything that you could
not already do in ND. So if the rules of ND preserve truth, this rule preserves truth.
And, as a matter of fact, we already demonstrated that P → Q, ¬Q ⊢ND ¬P in
T6.4. Similarly, T6.5, T6.6, T6.7, T6.8, and T6.9 justify the other inference rules
included in ND+.


NB    a. P ↔ Q                a. P ↔ Q
      b. ¬P                   b. ¬Q
      ─────────               ─────────
      ¬Q        a,b NB        ¬P        a,b NB

NB (negated biconditional) lets you move from a biconditional and the negation of
one side, to the negation of the other. It is like MT, but with the arrow going both
ways. The parts are justified in T6.8 and T6.9.

DS    a. P ∨ Q                a. P ∨ Q
      b. ¬P                   b. ¬Q
      ─────────               ─────────
      Q         a,b DS        P         a,b DS

DS (disjunctive syllogism) lets you move from a disjunction and the negation of one
side, to the other side of the disjunction. We saw an intuitive version of this rule on
p. 25. The two parts are justified by T6.6 and T6.7.
HS    a. O → P
      b. P → Q
      ─────────
      O → Q     a,b HS

HS (hypothetical syllogism) is a principle of transitivity by which you may string a
pair of conditionals together into one. It is justified by T6.5.

Each of these rules should be clear, and easy to use. Here is an example that puts
all of them together into one derivation.

(CD)
 1. A ↔ B          P
 2. ¬B             P
 3. A ∨ (C → D)    P
 4. D → B          P
 5. ¬A             1,2 NB
 6. C → D          3,5 DS
 7. C → B          6,4 HS
 8. ¬C             7,2 MT

 1. A ↔ B                   P
 2. ¬B                      P
 3. A ∨ (C → D)             P
 4. D → B                   P
 5. | A                     A (g, 3∨E)
 6. | | C                   A (c, ¬I)
 7. | | B                   1,5 ↔E
 8. | | ⊥                   7,2 ⊥I
 9. | ¬C                    6-8 ¬I
10. | C → D                 A (g, 3∨E)
11. | | C                   A (c, ¬I)
12. | | D                   10,11 →E
13. | | B                   4,12 →E
14. | | ⊥                   13,2 ⊥I
15. | ¬C                    11-14 ¬I
16. ¬C                      3,5-9,10-15 ∨E

We can do it by our normal methods with the rules of ND, as in the second
derivation. But it is easier with the shortcuts from ND+, as in the first. It may take
you some time to see applications of the new rules when you are doing derivations,
but the simplification makes it worth getting used to them.
The replacement rules of ND+ are different from ones we have seen before in
two respects. First, replacement rules go in two directions. Consider the following
simple rule.

DN    P ⇄ ¬¬P

According to DN (double negation), given P on an accessible line a, you may move
to ¬¬P with justification a DN; and given ¬¬P on an accessible line a, you may
move to P with justification a DN. This two-way rule is justified by T6.16, in which
we showed ⊢ND P ↔ ¬¬P. Given P we could use the routine from one half of the
derivation to reach ¬¬P, and given ¬¬P we could use the routine from the other
half of the derivation to reach P.

But, further, we can use replacement rules to replace a subformula that is just a
proper part of another formula. Thus, for example, in the following list, we could
move in one step by DN from the formula on the left, to any of the ones on the right,
and from any of the ones on the right, to the one on the left.

(CE)    A ∧ (B → C)      ¬¬(A ∧ (B → C))
                         ¬¬A ∧ (B → C)
                         A ∧ ¬¬(B → C)
                         A ∧ (¬¬B → C)
                         A ∧ (B → ¬¬C)

The first application is of the sort we have seen before, in which the whole formula
is replaced. In the second, the replacement is between the subformulas A and ¬¬A.
In the third, between the subformulas (B → C) and ¬¬(B → C). The fourth switches
B and ¬¬B, and the last C and ¬¬C. Thus the DN rule allows the substitution of
any subformula P with one of the form ¬¬P, and vice versa.
The application of replacement rules to subformulas is not so easily justified as
their application to whole formulas. A complete justification that ND+ does not let
you go beyond what can be derived in ND will have to wait for Part III. Roughly,
though, the idea is this: given a complex formula, we can take it apart, do the
replacement, and then put it back together. Here is a very simple example from above.

(CF)
 1. A ∧ (B → C)          P
 2. A ∧ ¬¬(B → C)        1 DN

 1. A ∧ (B → C)          P
 2. A                    1 ∧E
 3. | ¬(B → C)           A (c, ¬I)
 4. | B → C              1 ∧E
 5. | ⊥                  4,3 ⊥I
 6. ¬¬(B → C)            3-5 ¬I
 7. A ∧ ¬¬(B → C)        2,6 ∧I

In the first derivation, we make the move from A ∧ (B → C) to A ∧ ¬¬(B → C) in
one step by DN. In the second, using just the rules of ND, we begin by taking off
the A. Then we convert B → C to ¬¬(B → C), and put it back together with the A.
Though we will not be able to show that sort of thing is generally possible until
Part III, for now I will continue to say that replacement rules are justified by the
corresponding biconditionals. As it happens, for replacement rules, the biconditionals
play a crucial role in the demonstration that Γ ⊢ND P iff Γ ⊢ND+ P.
The rest of the replacement rules work the same way.
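Incidentally, in a computational setting this take-it-apart, put-it-back-together
procedure is just a recursive rewrite over the formula tree. Here is a rough sketch
(illustrative only; the tuple encoding and the function name `dn_intro` are my own,
not from the text):

```python
# Formulas as nested tuples: an atom is a string; ("~", P), ("&", P, Q),
# ("->", P, Q) build complex formulas. dn_intro replaces the subformula at
# a given position by its double negation: it recurses down to the position,
# swaps in ~~P, and rebuilds the surrounding formula on the way back out.

def dn_intro(f, path):
    """Double-negate the subformula at `path`, a tuple of child indices."""
    if not path:
        return ("~", ("~", f))
    i, rest = path[0], path[1:]
    parts = list(f)
    parts[i] = dn_intro(parts[i], rest)
    return tuple(parts)

f = ("&", "A", ("->", "B", "C"))   # A & (B -> C)

whole = dn_intro(f, ())        # corresponds to ~~(A & (B -> C))
left  = dn_intro(f, (1,))      # corresponds to ~~A & (B -> C)
right = dn_intro(f, (2,))      # corresponds to A & ~~(B -> C)
inner = dn_intro(f, (2, 1))    # corresponds to A & (~~B -> C)

print(whole)
print(inner)
```

Each of the (CE) replacements above corresponds to one choice of position in the
tree; only the targeted subformula changes, and the rest of the formula is rebuilt
intact.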
Com    P ∧ Q ⇄ Q ∧ P
       P ∨ Q ⇄ Q ∨ P

Com (commutation) lets you reverse the order of conjuncts or disjuncts around an
operator. By Com you could go from, say, A ∧ (B ∨ C) to (B ∨ C) ∧ A, switching
the order around ∧, or from A ∧ (B ∨ C) to A ∧ (C ∨ B), switching the order around
∨. You should be clear about why this is so. The two forms are justified by T6.10
and T6.11.

Assoc    O ∧ (P ∧ Q) ⇄ (O ∧ P) ∧ Q
         O ∨ (P ∨ Q) ⇄ (O ∨ P) ∨ Q

Assoc (association) lets you shift parentheses for conjoined or disjoined formulas.
The two forms are justified by T6.14 and T6.15.

Idem    P ⇄ P ∧ P
        P ⇄ P ∨ P

Idem (idempotence) exposes the equivalence between P and P ∧ P, and between
P and P ∨ P. The two forms are justified by T6.17 and T6.18.
Impl    P → Q ⇄ ¬P ∨ Q
        ¬P → Q ⇄ P ∨ Q

Impl (implication) lets you move between a conditional and a corresponding
disjunction. Thus, for example, by the first form of Impl you could move from
A → (B ∨ C) to ¬A ∨ (B ∨ C), using the rule from left to right, or to A → (¬B → C),
using the rule from right to left. As we will see, this rule can be particularly useful.
The two forms are justified by T6.21 and T6.22.

Trans    P → Q ⇄ ¬Q → ¬P

Trans (transposition) lets you reverse the antecedent and consequent around a
conditional, subject to the addition or removal of negations. From left to right, this
rule should remind you of MT, as Trans plus →E has the same effect as one
application of MT. Trans is justified by T6.12.
DeM    ¬(P ∧ Q) ⇄ ¬P ∨ ¬Q
       ¬(P ∨ Q) ⇄ ¬P ∧ ¬Q

DeM (DeMorgan) should remind you of equivalences we learned in chapter 5, for
not both (the first form) and neither nor (the second form). This rule also can be
very useful. The two forms are justified by T6.19 and T6.20.

Exp    O → (P → Q) ⇄ (O ∧ P) → Q

Exp (exportation) is another equivalence that may have arisen in translation. It is
justified by T6.13.

Equiv    P ↔ Q ⇄ (P → Q) ∧ (Q → P)
         P ↔ Q ⇄ (P ∧ Q) ∨ (¬P ∧ ¬Q)

Equiv (equivalence) converts between a biconditional and the corresponding pair of
conditionals, or converts between a biconditional and a formula on which the sides
are both true or both false. The two forms are justified by T6.25 and T6.26.

Dist    O ∧ (P ∨ Q) ⇄ (O ∧ P) ∨ (O ∧ Q)
        O ∨ (P ∧ Q) ⇄ (O ∨ P) ∧ (O ∨ Q)

Dist (distribution) works something like the mathematical principle for multiplying
across a sum. In each case, moving from left to right, the operator from outside
attaches to each of the parts inside the parenthesis, and the operator from inside
becomes the main operator. The two forms are justified by T6.23 and T6.24. Finally,
QN

∼∀xP ⊣⊢ ∃x∼P
∼∃xP ⊣⊢ ∀x∼P

QN (quantifier negation) is another principle we encountered in chapter 5. It lets
you push or pull a negation across a quantifier, with a corresponding flip from one
quantifier to the other. The forms are justified by T6.29 and T6.30.
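Because QN concerns quantifiers, a truth table cannot check it; but it can be confirmed on any particular finite domain. A small Python sketch (my own illustration, not part of the text) checks both forms over every possible extension of a predicate on a three-element domain:

```python
from itertools import product

D = [0, 1, 2]  # a small, arbitrary domain

def qn_holds(P):
    # P maps each element of D to True/False: the extension of the predicate
    first = (not all(P[x] for x in D)) == any(not P[x] for x in D)   # ~AxPx vs Ex~Px
    second = (not any(P[x] for x in D)) == all(not P[x] for x in D)  # ~ExPx vs Ax~Px
    return first and second

# every one of the 2^3 extensions satisfies both QN equivalences
for ext in product([True, False], repeat=len(D)):
    assert qn_holds(dict(zip(D, ext)))
```

A check over one finite domain is no proof for all interpretations; it is the derivations behind T6.29 and T6.30 that establish the rule in general.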
Thus end the rules of ND+. They are a lot to absorb at once. But you do not
need to absorb all the rules at once. Again, the rules do not let you do anything you
could not already do in ND. For the most part, you should proceed as if you were in
ND. If an ND+ shortcut occurs to you, use it. You will gradually become familiar
with more and more of the special ND+ rules. Perhaps, though, we can make a
few observations about strategy that will get you started. First, again, do not get
too distracted by the extra rules! You should continue with the overall goal-directed
approach from ND. There are, however, a few contexts where special rules from ND+
can make a substantive difference. I comment on three.
First, as we have seen, in ND, formulas with ∨ can be problematic. ∨E is awkward to apply, and ∨I does not always work. In simple cases, DS can get you out
of ∨E. But this is not always so, and you will want to keep ∨E among your standard strategies. More importantly, Impl can convert between awkward formulas with
main operator ∨ and more manageable ones with main operator →. For premises,
this does not help much: DS gets you just as much as Impl and then →E or MT
(think about it). But converting to → does matter when a goal has main operator ∨.
Though a disjunction may be derivable but not by ∨I, if a conditional is derivable,
it is derivable by →I. Thus to reach a goal with main operator ∨, consider going for
the corresponding →, and converting with Impl.
a.   given
        ⋮
     A ∨ B      (goal)

b.   given
        ⋮
     | ∼A       A (g, →I)
     |    ⋮
     | B        (goal)
     ∼A → B    →I
     A ∨ B     Impl

And the other form of Impl may be helpful for a goal of the sort ∼A ∨ B. Here is a
quick example.

ND+ Quick Reference

Inference Rules

MT (Modus Tollens)           a. P → Q;  b. ∼Q;  conclude ∼P      a,b MT
NB (Negated Biconditional)   a. P ↔ Q;  b. ∼P;  conclude ∼Q      a,b NB
NB (Negated Biconditional)   a. P ↔ Q;  b. ∼Q;  conclude ∼P      a,b NB
DS (Disjunctive Syllogism)   a. P ∨ Q;  b. ∼P;  conclude Q        a,b DS
DS (Disjunctive Syllogism)   a. P ∨ Q;  b. ∼Q;  conclude P        a,b DS
HS (Hypothetical Syllogism)  a. O → P;  b. P → Q;  conclude O → Q  a,b HS

Replacement Rules

DN      P ⊣⊢ ∼∼P
Com     P ∧ Q ⊣⊢ Q ∧ P          P ∨ Q ⊣⊢ Q ∨ P
Assoc   O ∧ (P ∧ Q) ⊣⊢ (O ∧ P) ∧ Q          O ∨ (P ∨ Q) ⊣⊢ (O ∨ P) ∨ Q
Idem    P ⊣⊢ P ∧ P          P ⊣⊢ P ∨ P
Impl    P → Q ⊣⊢ ∼P ∨ Q          ∼P → Q ⊣⊢ P ∨ Q
Trans   P → Q ⊣⊢ ∼Q → ∼P
DeM     ∼(P ∧ Q) ⊣⊢ ∼P ∨ ∼Q          ∼(P ∨ Q) ⊣⊢ ∼P ∧ ∼Q
Exp     O → (P → Q) ⊣⊢ (O ∧ P) → Q
Equiv   P ↔ Q ⊣⊢ (P → Q) ∧ (Q → P)          P ↔ Q ⊣⊢ (P ∧ Q) ∨ (∼P ∧ ∼Q)
Dist    O ∧ (P ∨ Q) ⊣⊢ (O ∧ P) ∨ (O ∧ Q)          O ∨ (P ∧ Q) ⊣⊢ (O ∨ P) ∧ (O ∨ Q)
QN      ∼∀xP ⊣⊢ ∃x∼P          ∼∃xP ⊣⊢ ∀x∼P
(CG)

1. | ∼A            A (g, →I)
2. | ∼A            1 R
3. ∼A → ∼A        1-2 →I
4. A ∨ ∼A          3 Impl

1. | ∼(A ∨ ∼A)    A (c, ∼E)
2. | | A            A (c, ∼I)
3. | | A ∨ ∼A      2 ∨I
4. | | ⊥            3,1 ⊥I
5. | ∼A            2-4 ∼I
6. | A ∨ ∼A        5 ∨I
7. | ⊥              6,1 ⊥I
8. A ∨ ∼A          1-7 ∼E
Perhaps the number of lines is not all that different. However, the first derivation,
using Impl, is completely trivial, requiring just a derivation of ∼A → ∼A. But the
second derivation is not. It falls through to SG5, and then requires a challenging
application of SC3 or SC4. This proposed strategy replaces or simplifies the pattern
(AQ) for disjunctions described on p. 262. Observe that the work of getting to one
side of a disjunction from the negation of the other is exactly the same. It is only
that we use the derived rule to simplify away the distracting and messy setup.

Second, among the most useless formulas for exploitation are ones with main
operator ∼. But the combination of QN, DeM, Impl, and Equiv lets you push negations into arbitrary formulas. Thus you can convert formulas with main operator ∼
into a more useful form. To see how these rules can be manipulated, consider the
following sequence.
(CH)

1. ∼∃x(Ax → Bx)        P
2. ∀x∼(Ax → Bx)        1 QN
3. ∀x∼(∼Ax ∨ Bx)      2 Impl
4. ∀x(∼∼Ax ∧ ∼Bx)    3 DeM
5. ∀x(Ax ∧ ∼Bx)        4 DN
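The end points of this chain, ∼∃x(Ax → Bx) and ∀x(Ax ∧ ∼Bx), should agree on any interpretation. A Python sketch (an illustration of mine, not the text's method) confirms that they agree for every extension of A and B over a two-element domain:

```python
from itertools import product

D = [0, 1]  # a small, arbitrary domain

def agree_everywhere():
    # try every extension of A and of B over the domain
    for A_ext, B_ext in product(product([True, False], repeat=len(D)), repeat=2):
        A, B = dict(zip(D, A_ext)), dict(zip(D, B_ext))
        lhs = not any((not A[x]) or B[x] for x in D)  # ~Ex(Ax -> Bx)
        rhs = all(A[x] and not B[x] for x in D)       # Ax(Ax & ~Bx)
        if lhs != rhs:
            return False
    return True

assert agree_everywhere()
```

Again, agreement on one finite domain is only a spot check; the replacement rules themselves guarantee equivalence on every interpretation.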

We begin with the negation as main operator, and end with a negation only against
an atomic. This sort of thing is often very useful. For example, in going for a contradiction, you have the option of breaking down a formula with main operator ∼
rather than automatically building up to its opposite, according to SC3. And other
strategies can be affected as well. Thus, for example, if you see a negated universal
on some accessible line, you should think of it as if it were an existentially quantified
expression: push the negation through, get the existential, and go for the goal by ∃E
as usual. Here is an example.

(CI)

1. ∼∀x(Fx → Gx)        P
2. ∃x∼(Fx → Gx)        1 QN
3. | ∼(Fj → Gj)         A (g, 2∃E)
4. | ∼(∼Fj ∨ Gj)       3 Impl
5. | ∼∼Fj ∧ ∼Gj       4 DeM
6. | ∼Gj                 5 ∧E
7. | ∃x∼Gx              6 ∃I
8. ∃x∼Gx                2,3-7 ∃E

1. ∼∀x(Fx → Gx)        P
2. | ∼∃x∼Gx            A (c, ∼E)
3. | | Fj                A (g, →I)
4. | | | ∼Gj            A (c, ∼E)
5. | | | ∃x∼Gx         4 ∃I
6. | | | ⊥               5,2 ⊥I
7. | | Gj                4-6 ∼E
8. | Fj → Gj            3-7 →I
9. | ∀x(Fx → Gx)       8 ∀I
10. | ⊥                  9,1 ⊥I
11. ∃x∼Gx              2-10 ∼E

The first derivation is much to be preferred over the second, where we are caught
up in a difficult case of SG5 and then SC3 or SC4. But, after QN, the first derivation
is straightforward, and it would be relatively straightforward even if we missed the
uses of Impl and DeM. Observe that, as above, the uses of Impl and DeM help us
convert the negated conditional into a conjunction that can be broken into its parts.

Finally, observe that derivations which can be conducted entirely by replacement
rules are reversible. Thus, for a simple case,

(CJ)

1. | ∼(A ∧ ∼B)                A (g, ↔I)
2. | ∼A ∨ ∼∼B                1 DeM
3. | ∼A ∨ B                    2 DN
4. | A → B                     3 Impl

5. | A → B                     A (g, ↔I)
6. | ∼A ∨ B                    5 Impl
7. | ∼A ∨ ∼∼B                6 DN
8. | ∼(A ∧ ∼B)                7 DeM

9. ∼(A ∧ ∼B) ↔ (A → B)     1-4,5-8 ↔I

We set up for ↔I in the usual way. Then the subderivations work by precisely the
same steps, DeM, DN, Impl, but in the reverse order. This is not surprising, since
replacement rules work in both directions. Notice that reversal does not generally
work where regular inference rules are involved.
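The reversibility just illustrated rests on the fact that ∼(A ∧ ∼B) and A → B are true on exactly the same interpretations, which a brute-force check confirms (a Python sketch of mine, not part of the text):

```python
from itertools import product

def sides_match(A, B):
    # ~(A & ~B) and A -> B agree in truth value on this assignment
    lhs = not (A and (not B))  # ~(A & ~B)
    rhs = (not A) or B         # A -> B
    return lhs == rhs

# the two sides of (CJ)'s biconditional agree on every row
assert all(sides_match(A, B) for A, B in product([True, False], repeat=2))
```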
Finally, it is worth noting that the rules for quantifier negation appear also as
derived rules for the bounded quantifiers introduced in the previous section.

BQN

∼(∀x ≤ t)P ⊣⊢ (∃x ≤ t)∼P
∼(∃x ≤ t)P ⊣⊢ (∀x ≤ t)∼P

And similarly for <. Demonstration of these rules is easy and left as an exercise.

The rules of ND+ are not a magic bullet to make all difficult derivations go
away! Rather, with the derived rules, we set aside a certain sort of difficulty that
should no longer worry us, so that we are in a position to take on new challenges
without becoming overwhelmed by details.
E6.38. Produce derivations to show each of the following.
a. 9x.Rx ^ S xx/, Saa `NDC Ra
b. 8x.Axf 1 x _ 9yBg 1 y/ `NDC 9xAf 1 xf 1 f 1 x ! 9yBg 1 y
c. 8x.C xb _ H x/ ! Lxx, 9yLyy `NDC 9xC xb
d. 9x.F x ^ Gx/ _ 9xGx, 8yGy `NDC 8z.F z ! Gz/
e. 8xF x, 8zH z `NDC 9y.F y _ Hy/
*f. 8x8y9zAf 1 xyz, 8x8y8zAxyz ! .C xyz _ Bzyx/
`NDC 9x9y8zBzg 1 yf 1 g 1 x
g. 9x8y.P xy ^ Qxy/ `NDC 8x9y.P xy ! Qxy/
h. 9y.T y _ 9xH xy/ `NDC 8x8yH xy ^ 8xT x
i. 9x.F x ! 9yF y/ `NDC 8xF x
j. `NDC 8x.Ax ! Bx/ _ 9xAx
k. `NDC 8x.F x _ A/ ! .8xF x _ A/
l. 9x.F x $ Gx/, 8xGx ! .H x ! J x/
`NDC 9xJ x _ 8xF x ! 9x.Gx ^ H x/
m. 9xBxa ^ 8y.Cy ! Gxy/, 8z8y.W y ! Gzy/ ! Bza
`NDC 8x.C x ! W x/
*n. 9xF x ! 8yGy, 8x.Kx ! 9yJy/, 9yGy ! 9xKx
`NDC 9xF x _ 9yJy
o. 9zQz ! 8w.Lww ! H w/, 9xBx ! 8y.Ay ! Hy/
`NDC 9w.Qw ^ Bw/ ! 8y.Lyy ! Ay/
p. 8x.P x _H x/ ! 8xC x ^8y.Ly ! Axy/, 9xH x ^8y.Ly ! Axy/ !
8x.Rx ^ 8yBxy/ `NDC 8x8yBxy ! 8x.P x _ H x/
q. `NDC .9xAx ! 9xBx/ ! 9x.Ax ! Bx/


r. 8xF x ! A `NDC 9x.F x ! A/


s. 8x9y.Ax _ By/ `NDC 9y8x.Ax _ By/
t. 8xF x $ 9x9yRxy `NDC 9x8y8z.F x ! Ryz/

E6.39. Provide derivations to demonstrate BQN for the bounded quantifiers (in the
case of ≤). That is, show ⊢ND ∼(∀x ≤ t)P ↔ (∃x ≤ t)∼P, and
⊢ND ∼(∃x ≤ t)P ↔ (∀x ≤ t)∼P. Hint: Do not forget your derived rules
for the bounded quantifiers.
E6.40. For each of the following, produce a translation into Lq , including interpretation function and formal sentences, and show that the resulting arguments
are valid in ND.
a. If a first person is taller than a second, then the second is not taller than
the first. So nobody is taller than themselves. (An asymmetric relation is
irreflexive.)
b. A barber shaves all and only people who do not shave themselves. So there
are no barbers.
c. Bob is taller than every other man. If a first person is taller than a second,
then the second is not taller than the first. So only Bob is taller than every
other man.
d. There is at most one dog, and at least one flea. Any dog is a host for some
flea, and any flea has a dog for a host. So there is exactly one dog.
e. Some conception includes god. If one conception includes a thing and another does not, then the greatness of the thing in the first exceeds its greatness
in the other. The greatness of no thing in any conception exceeds that of god
in a true conception. Therefore, god is included in any true conception.
Hints: Let your universe include conceptions, things in them, along with measures of greatness. Then implement a greatness function g² = {⟨⟨m, n⟩, o⟩ | o
is the greatness of m in conception n}. With an appropriate relation symbol, the greatness of a thing in one conception then exceeds that of a thing
in another if something like Eg²wx g²yz. This, of course, is a version of
Anselm's Ontological Argument. For discussion see Plantinga, God, Freedom, and Evil.


E6.41. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples of your own construction (iii) where the concept
applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.
a. The rules 8I and 9E, including especially restrictions on the rules.
b. The axioms of Q and PA and the way theorems derive from them.
c. The relation between the rules of ND and the rules of ND+.

Part II

Transition: Reasoning About Logic

Introductory
We have expended a great deal of energy learning to do logic. What we have learned
constitutes the complete classical predicate calculus with equality. This is a system
of tremendous power, including for reasoning in the foundations of arithmetic.

But our work itself raises questions. In chapter 4 we used truth trees and tables for
an account of the conditions under which sentential formulas are true and arguments
are valid. In the quantificational case, though, we were not able to use our graphical
methods for a general account of truth and validity: there were simply too many
branches, and too many interpretations, for a general account by means of trees.
Thus there is an open question about whether and how quantificational validity can
be shown.

And once we have introduced our notions of validity, many interesting questions
can be asked about how they work: Are the arguments that are valid in AD the same
as the ones that are valid in ND? Are the arguments that are valid in ND the same
as the ones that are quantificationally valid? Are the theorems of Q the same as
the theorems of PA? Are the theorems of PA the same as the truths on N, the standard
interpretation for number theory? Is it possible for a computing device to identify the
theorems of the different logical systems?

It is one thing to ask such questions, and perhaps amazing that there are demonstrable answers. We will come to that. However, in this short section we do not
attempt answers. Rather, we put ourselves in a position to think about answers by
introducing methods for thinking about logic. Thus this part looks both backward
and forward: by our methods we plug the hole left from chapter 4, in that chapter 7
accomplishes what could not be done with the tables and trees of chapter 4, and we are
able to demonstrate quantificational validity. At the same time, we lay a foundation
to ask and answer core questions about logic.

Chapter 7 begins with our basic method of reasoning from definitions. Chapter 8
introduces mathematical induction. These methods are important not only for results,
but for their own sakes, as part of the package that comes with mathematical logic.

Chapter 7

Direct Semantic Reasoning


It is the task of this chapter to think about reasoning directly from definitions. Frequently, students who already reason quite skillfully with definitions flounder when
asked to do so explicitly, in the style of this chapter.¹ Thus I propose to begin in a restricted context, one with which we are already familiar, using a fairly rigid framework as a guide. Perhaps you first learned to ride a bicycle with training wheels,
but eventually learned to ride without them, and so to go faster, and to places other
than the wheels would let you go. Similarly, in the end, we will want to apply our
methods beyond the restricted context in which we begin, working outside the initial
framework. But the framework should give us a good start. In this section, then, I introduce the framework in the context of reasoning for specifically semantic notions,
and against the background of semantic reasoning we have already done.

In chapter 4 we used truth trees and tables for an account of the conditions under
which sentential formulas are true and arguments are valid. In the quantificational
case, though, we were not able to use our graphical methods for a general account
of truth and validity: there were simply too many branches, and too many interpretations, for a general account by means of trees. For a complete account, we will
need to reason more directly from the definitions. But the tables and trees do exhibit
the semantic definitions. So we can build on what we have already done with them.
Our goal will be to move past the tables and trees, and learn to function without
¹ The ability to reason clearly and directly with definitions is important not only here, but also
beyond. In philosophy, compare the humorous, but also serious, verb 'to chisholm,' after Roderick
Chisholm, who was a master of the technique, where one proposes a definition; considers a counterexample; modifies to account for the example; considers another counterexample; modifies again;
and so forth. As, "He started with definition (d.8) and kept chisholming away at it until he ended up
with (d.8′′′′′′′′)" (The Philosopher's Lexicon). Such reasoning is impossible to understand apart from
explicit attention to consequences of definitions of the sort we have in mind.


them. After some general remarks, we start with the sentential case, and move to the
quantificational.

7.1 General

I begin with some considerations about what we are trying to accomplish, and how it
is related to what we have done. Consider the following row of a truth table, meant
to show that B → C ⊭s ∼B.
(A)

B C   B → C / ∼B
T T     T       F

Since there is an interpretation on which the premise is true and the conclusion is not,
the argument is not sententially valid. Now, what justifies the move from I[B] = T
and I[C] = T, to the conclusion that B → C is T? One might respond, "the truth
table." But the truth table, T(→), is itself derived from definition ST(→). According
to ST(→), for sentences P and Q, I[(P → Q)] = T iff I[P] = F or I[Q] = T (or both).
In this case, I[C] = T; so I[B] = F or I[C] = T; so the condition from ST(→) is
met, and I[B → C] = T. It may seem odd to move from I[C] = T to "I[B] = F
or I[C] = T" when in fact I[B] = T; but it is certainly correct, just as for ∨I in
ND; the point is merely to make explicit that, in virtue of the fact that I[C] = T,
the interpretation meets the disjunctive condition from ST(→). And what justifies
the move from I[B] = T to the conclusion that I[∼B] = F? ST(∼). According to
ST(∼), for any sentence P, I[∼P] = T iff I[P] = F. In this case, I[B] = T; and since
I[B] is not F, I[∼B] is not T; so I[∼B] = F. Similarly, definition SV justifies the
conclusion that the argument is not sententially valid. According to SV, an argument
is sententially valid just in case there is no sentential interpretation I on which the
premises are all T but the conclusion is F. Since we have produced an I such that
I[B → C] = T but I[∼B] = F, it follows that B → C ⊭s ∼B. So the definitions
drive the tables.

In chapter 4, we used tables to express these conditions. But we might have
reasoned directly.

(B)

Consider any interpretation I such that I[B] = T and I[C] = T. Since I[C] = T,
I[B] = F or I[C] = T; so by ST(→), I[B → C] = T. But since I[B] = T, by ST(∼),
I[∼B] = F. So there is a sentential interpretation I such that I[B → C] = T but
I[∼B] = F; so by SV, B → C ⊭s ∼B.

Presumably, all this is contained in the one line of the truth table, when we use it
to conclude that the argument is not sententially valid.
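The reasoning in (B) can be mirrored mechanically. In the sketch below (Python, with helper names of my own; nothing like this appears in the text), st_neg and st_arrow implement the conditions of ST(∼) and ST(→), and the chosen I is the row of table (A):

```python
def st_neg(p):
    # ST(~): I[~P] = T iff I[P] = F
    return not p

def st_arrow(p, q):
    # ST(->): I[P -> Q] = T iff I[P] = F or I[Q] = T (or both)
    return (not p) or q

I = {"B": True, "C": True}           # the interpretation from row (A)
premise = st_arrow(I["B"], I["C"])   # I[B -> C]
conclusion = st_neg(I["B"])          # I[~B]
assert premise and not conclusion    # premise true, conclusion not: invalid by SV
```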
Similarly, consider the following table, meant to show that ∼∼A ⊨s ∼A → A.

(C)

A   ∼∼A / ∼A → A
T     T         T
F     F         F

Since there is no row where the premise is true and the conclusion is false, the argument is sententially valid. Again, ST(∼) and ST(→) justify the way you build the
table. And SV lets you conclude that the argument is sententially valid. Since no row
makes the premise true and the conclusion false, and any sentential interpretation is
like some row in its assignment to A, no sentential interpretation makes the premise
true and conclusion false; so, by SV, the argument is sententially valid.
Thus the table represents reasoning as follows (omitting the second row). To
follow, notice how we simply reason through each place in a row, and then about
whether the row shows invalidity.

(D)

For any sentential interpretation I, either (i) I[A] = T or (ii) I[A] = F. Suppose (i);
then I[A] = T; so by ST(∼), I[∼A] = F; so by ST(∼) again, I[∼∼A] = T. But
I[A] = T, and by ST(∼), I[∼A] = F; from either of these it follows that I[∼A] = F
or I[A] = T; so by ST(→), I[∼A → A] = T. From this either I[∼∼A] = F or
I[∼A → A] = T; so it is not the case that I[∼∼A] = T and I[∼A → A] = F.
Suppose (ii); then by related reasoning. . . it is not the case that I[∼∼A] = T and
I[∼A → A] = F. So no interpretation makes it the case that I[∼∼A] = T and
I[∼A → A] = F. So by SV, ∼∼A ⊨s ∼A → A.

Thus we might recapitulate reasoning in the table. Perhaps we typically whip
through tables without explicitly considering all the definitions involved. But the
definitions are involved when we complete the table.

Strictly, though, not all of this is necessary for the conclusion that the argument
is valid. Thus, for example, in the reasoning at (i), for the conditional there is no
need to establish both that I[∼A] = F and that I[A] = T. From either it follows
that I[∼A] = F or I[A] = T, and so by ST(→) that I[∼A → A] = T. So we might
have omitted one or the other. Similarly at (i) there is no need to make the point that
I[∼∼A] = T. What matters is that I[∼A → A] = T, so that I[∼∼A] = F or I[∼A →
A] = T, and it is therefore not the case that I[∼∼A] = T and I[∼A → A] = F. So
reasoning for the full table might be shortcut as follows.

(E)

For any sentential interpretation either (i) I[A] = T or (ii) I[A] = F. Suppose (i); then
I[A] = T; so I[∼A] = F or I[A] = T; so by ST(→), I[∼A → A] = T. From this
either I[∼∼A] = F or I[∼A → A] = T; so it is not the case that I[∼∼A] = T and
I[∼A → A] = F. Suppose (ii); then I[A] = F; so by ST(∼), I[∼A] = T; so by ST(∼)
again, I[∼∼A] = F; so either I[∼∼A] = F or I[∼A → A] = T; so it is not the case
that I[∼∼A] = T and I[∼A → A] = F. So no interpretation makes it the case that
I[∼∼A] = T and I[∼A → A] = F. So by SV, ∼∼A ⊨s ∼A → A.


This is better. These shortcuts may reflect what you have already done when you
realize that, say, a true conclusion eliminates the need to think about the premises on
some row of a table. Though the shortcuts make things better, however, the idea of
reasoning in this way corresponding to a 4, 8, or more (!) row table remains painful.
But there is a way out.

Recall what happens when you apply the short truth-table method from chapter 4
to valid arguments. You start with the assumption that the premises are true and the
conclusion is not. If the argument is valid, you reach some conflict so that it is not,
in fact, possible to complete the row. Then, as we said on p. 106, you know in your
heart that the argument is valid. Let us turn this into an official argument form.

(F)

Suppose ∼∼A ⊭s ∼A → A; then by SV, there is an I such that I[∼∼A] = T and
I[∼A → A] = F. From the former, by ST(∼), I[∼A] = F. But from the latter,
by ST(→), I[∼A] = T and I[A] = F; and since I[∼A] = T, I[∼A] ≠ F. This is
impossible; reject the assumption: ∼∼A ⊨s ∼A → A.

This is better still. The assumption that the argument is invalid leads to the conclusion
that for some I, I[∼A] = T and I[∼A] = F; but a formula is T just in case it is not
F, so this is impossible and we reject the assumption. The pattern is like ∼I in ND.
This approach is particularly important insofar as we do not reason individually about
each of the possible interpretations. This is nice in the sentential case, when there are
too many to reason about conveniently. And in the quantificational case, we will not
be able to argue individually about each of the possible interpretations. So we need
to avoid talking about interpretations one-by-one.
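The reductio in (F) corresponds to an exhaustive search that finds no counterexample. Continuing the same hypothetical helpers (a sketch of mine, not the text's own apparatus):

```python
def st_neg(p):       # ST(~)
    return not p

def st_arrow(p, q):  # ST(->)
    return (not p) or q

def counterexample():
    # look for an I with I[~~A] = T and I[~A -> A] not T
    for A in (True, False):
        if st_neg(st_neg(A)) and not st_arrow(st_neg(A), A):
            return A
    return None

assert counterexample() is None  # none exists: ~~A entails ~A -> A
```

In the sentential case such a search is possible but tedious; in the quantificational case there are too many interpretations to survey, which is exactly why the chapter develops reasoning that avoids going interpretation-by-interpretation.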
Thus we arrive at two strategies: To show that an argument is invalid, we produce
an interpretation, and show by the definitions that it makes the premises true and the
conclusion not. That is what we did in (B) above. To show that an argument is valid,
we assume the opposite, and show by the definitions that the assumption leads to
contradiction. Again, that is what we did just above, at (F).
Before we get to the details, let us consider an important point about what we are
trying to do: Our reasoning takes place in the metalanguage, based on the definitions,
where object-level expressions are uninterpreted apart from the definitions. To
see this, ask yourself whether a sentence P conflicts with P|P. "Well," you might
respond, "I have never encountered this symbol '|' before, so I am not in a position
to say." But that is the point: whether P conflicts with P|P depends entirely on
a definition for stroke '|'. As it happens, this symbol is typically read "not-both," as
given by what might be a further clause of ST,

ST(|)

For any sentences P and Q, I[(P|Q)] = T iff I[P] = F or I[Q] = F (or both);
otherwise I[(P|Q)] = F.


The resultant table is,

T(|)

P Q   P|Q
T T    F
T F    T
F T    T
F F    T

P|Q is false when P and Q are both T, and otherwise true. Given this, P does
conflict with P|P. Suppose I[P] = T and I[P|P] = T; from the latter, by
ST(|), I[P] = F or I[P] = F; either way, I[P] = F; but this is impossible given our
assumption that I[P] = T. In fact, P|P has the same table as ∼P, and P|(Q|Q)
the same as P → Q.
(G)

P   P|P   ∼P          P Q   P|(Q|Q)   P → Q
T    F     F            T T      T          T
F    T     T            T F      F          F
                         F T      T          T
                         F F      T          T
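That P|P matches ∼P and P|(Q|Q) matches P → Q can likewise be checked by brute force (a Python sketch, my own illustration):

```python
from itertools import product

def stroke(p, q):
    # ST(|): I[P|Q] = T iff I[P] = F or I[Q] = F (or both)
    return (not p) or (not q)

for P, Q in product([True, False], repeat=2):
    assert stroke(P, P) == (not P)                    # same table as ~P
    assert stroke(P, stroke(Q, Q)) == ((not P) or Q)  # same table as P -> Q
```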

From this, we might have treated ∼ and →, and so ∧, ∨, and ↔, all as abbreviations for expressions whose only operator is |. At best, however, this leaves official
expressions difficult to read. Here is the point that matters: Operators have their
significance entirely from the definitions. In this chapter, we make metalinguistic
claims about object expressions, where these can only be based on the definitions.
P and P|P do not themselves conflict, apart from the definition which makes P
with P|P have the consequence that I[P] = T and I[P] = F. And similarly for
operators with which we are more familiar. At every stage, it is the definitions which
justify conclusions.

7.2 Sentential

With this much said, it remains possible to become confused about details while
working with the definitions. It is one thing to be able to follow such reasoning,
as I hope you have been able to do, and another to produce it. The idea now is to
make use of something at which we are already good, doing derivations, to further
structure and guide the way we proceed. The result will be a sort of derivation system
for reasoning about definitions. We build up this system in stages.

7.2.1 Truth

Let us begin with some notation. Where the script characters A, B, C, D, . . . represent object expressions in the usual way, let the Fraktur characters 𝔄, 𝔅, ℭ, 𝔇, . . .
represent metalinguistic expressions (𝔄 is the Fraktur 'A'). Thus 𝔄 might represent
an expression of the sort I[B] = T. Then ⇒ and ⇔ are the metalinguistic conditional and biconditional respectively; ¬, ⋏, and ⋎ represent metalinguistic negation,
conjunction, and disjunction. In practice, negation is indicated by a slash (as in ≠) as
well.

Now consider the following restatement of definition ST. Each clause is given in
both a positive and a negative form. For any sentences P and Q and interpretation I,

ST (∼)  I[∼P] = T ⇔ I[P] ≠ T
         I[∼P] ≠ T ⇔ I[P] = T

   (→)  I[P → Q] = T ⇔ I[P] ≠ T ⋎ I[Q] = T
         I[P → Q] ≠ T ⇔ I[P] = T ⋏ I[Q] ≠ T

Given the new symbols, and that a sentence is F iff it is not true, this is a simple
restatement of ST. As we develop our formal system, we will treat the metalinguistic
biconditionals both as (replacement) rules and as axioms. Thus, for example, it will
be legitimate to move by ST(∼) directly from I[P] ≠ T to I[∼P] = T, moving from
right-to-left across the arrow. And similarly in the other direction. Alternatively, it
will be appropriate to assert by ST(∼) the entire biconditional, that I[∼P] = T ⇔
I[P] ≠ T. For now, we will mostly use the biconditionals, in the first form, as rules.

To manipulate the definitions, we require some rules. These are like ones you
have seen before, only pitched at the metalinguistic level.

com   𝔄 ⋎ 𝔅 ⇔ 𝔅 ⋎ 𝔄          𝔄 ⋏ 𝔅 ⇔ 𝔅 ⋏ 𝔄

idm   𝔄 ⇔ 𝔄 ⋎ 𝔄              𝔄 ⇔ 𝔄 ⋏ 𝔄

dem   ¬(𝔄 ⋏ 𝔅) ⇔ ¬𝔄 ⋎ ¬𝔅   ¬(𝔄 ⋎ 𝔅) ⇔ ¬𝔄 ⋏ ¬𝔅

cnj   from 𝔄 and 𝔅, conclude 𝔄 ⋏ 𝔅; from 𝔄 ⋏ 𝔅, conclude 𝔄; from 𝔄 ⋏ 𝔅, conclude 𝔅

dsj   from 𝔄, conclude 𝔄 ⋎ 𝔅 or 𝔅 ⋎ 𝔄; from 𝔄 ⋎ 𝔅 and ¬𝔄, conclude 𝔅; from 𝔄 ⋎ 𝔅 and ¬𝔅, conclude 𝔄

neg   𝔄 ⇔ ¬¬𝔄; if an assumption 𝔄 leads to both 𝔅 and ¬𝔅, conclude ¬𝔄; if an assumption ¬𝔄 leads to both 𝔅 and ¬𝔅, conclude 𝔄

ret   from 𝔄, conclude 𝔄 (reiteration)

Each of these should remind you of rules from ND or ND+. In practice, we will
allow generalized versions of cnj that let us move directly from 𝔄₁, 𝔄₂ . . . 𝔄ₙ to
𝔄₁ ⋏ 𝔄₂ ⋏ . . . ⋏ 𝔄ₙ. Similarly, we will allow applications of dsj and dem that skip
officially required applications of neg. Thus, for example, instead of going from
¬𝔄 ⋎ 𝔅 to ¬𝔄 ⋎ ¬¬𝔅 and then by dem to ¬(𝔄 ⋏ ¬𝔅), we might move by dem
directly from ¬𝔄 ⋎ 𝔅 to ¬(𝔄 ⋏ ¬𝔅). All this should become more clear as we
proceed.
With definition ST and these rules, we can begin to reason about consequences of
the definition. Suppose we want to show that an interpretation with I[A] = I[B] = T
is such that I[∼(A → ∼B)] = T.

(H)

1. I[A] = T                          prem
2. I[B] = T                          prem
3. I[∼B] ≠ T                        2 ST(∼)
4. I[A] = T ⋏ I[∼B] ≠ T           1,3 cnj
5. I[A → ∼B] ≠ T                   4 ST(→)
6. I[∼(A → ∼B)] = T               5 ST(∼)

We are given that I[A] = T and I[B] = T.
From the latter, by ST(∼), I[∼B] ≠ T; so
I[A] = T and I[∼B] ≠ T; so by ST(→),
I[A → ∼B] ≠ T; so by ST(∼), I[∼(A →
∼B)] = T.

The first version is a metalinguistic derivation in the sense that every step is
either a premise, or justified by a definition or rule. You should be able to follow each
step. In the second version, we simply tell the story of the derivation, mirroring it step-for-step. This latter style is the one we want to develop. As we shall see, it gives us
power to go beyond where the formalized derivations will take us. But the derivations
serve a purpose. If we can do them, we can use them to construct reasoning of the sort
we want. Each stage on one side corresponds to one on the other. So the derivations
can guide us as we construct our reasoning, and constrain the moves we make. Note:
First, in the informal version, we replace line references with language ("from the latter")
meant to serve the same purpose. Second, the metalinguistic symbols ⇒, ⇔, ¬, ⋏, ⋎
are replaced with ordinary language. Finally, in the informal version, though
we cite every definition when we use it, we do not cite the additional rules (in this
case cnj). In general, as much as possible, you should strive to put the reader (and
yourself at a later time) in a position to follow your reasoning, supposing just a
basic familiarity with the definitions.
Consider now another example. Suppose we want to show that an interpretation
with I[∼B] ≠ T is such that I[∼(A → B)] ≠ T. In this case we begin with the
opposite and break down to the parts, for an application of neg.

(I)

1. I[∼(A → B)] = T                assp
2. I[A → B] ≠ T                    1 ST(∼)
3. I[A] = T ⋏ I[B] ≠ T           2 ST(→)
4. I[B] ≠ T                         3 cnj
5. I[∼B] = T                       4 ST(∼)
6. I[∼B] ≠ T                       prem
7. I[∼(A → B)] ≠ T                1-6 neg

Suppose I[∼(A → B)] = T; then from
ST(∼), I[A → B] ≠ T; so by ST(→),
I[A] = T and I[B] ≠ T; so I[B] ≠ T;
so by ST(∼), I[∼B] = T. But we are given
that I[∼B] ≠ T. This is impossible; reject
the assumption: I[∼(A → B)] ≠ T.

Again, the reasoning on the one side mirrors that on the other. So we can use the
formalized derivation as a guide for the informal reasoning. Again, we leave out
the special metalinguistic symbols. And again we cite all instances of definitions,
but not the additional rules (this time, cnj and neg). We might have used dsj to argue
directly from the premise, that I[A] ≠ T ⋎ I[B] = T, and so that I[A → B] = T,
and by ST(∼) that I[∼(A → B)] ≠ T. (Try this.) But either way works. As you
work the exercises that follow, to the extent that you can, it is good to have one line
depend on the one before or in the immediate neighborhood, so as to minimize the
need for extended references in the written versions. As you work these and other
problems, you may find the sentential metalinguistic reference on p. 343 helpful.
E7.1. Suppose I[A] = T, I[B] ≠ T, and I[C] = T. For each of the following, produce
a formalized derivation, and then non-formalized reasoning, to demonstrate
either that it is or is not true on I. Hint: You may find a quick row of the truth
table helpful to let you see which you want to show. Also, (e) is much easier
than it looks.
a. B ! C
*b. B ! C
c. .A ! B/ ! C
d. A ! .B ! C /
e. A ! ..A ! B/ ! C / ! .C ! B/

7.2.2 Validity

So far, we have been able to reason about ST and truth. Let us now extend results
to validity. For this, we need to augment our formalized system. Let S be a metalinguistic existential quantifier; it asserts the existence of some object. For now,
S will appear only in contexts asserting the existence of interpretations. Thus,
for example, SI(I[P] = T) says there is an interpretation I such that I[P] = T,
and ¬SI(I[P] = T) says it is not the case that there is an interpretation I such that
I[P] = T. Given this, we can state SV as follows, again in positive and negative
forms.

SV   ¬SI(I[P₁] = T ⋏ . . . ⋏ I[Pₙ] = T ⋏ I[Q] ≠ T) ⇔ P₁ . . . Pₙ ⊨s Q
      SI(I[P₁] = T ⋏ . . . ⋏ I[Pₙ] = T ⋏ I[Q] ≠ T) ⇔ P₁ . . . Pₙ ⊭s Q


These should look familiar. An argument is valid when it is not the case that there
is some interpretation that makes the premises true and the conclusion not. An argument is invalid if there is some interpretation that makes the premises true and the
conclusion not.
Again, we need rules to manipulate the new operator. In general, whenever a
metalinguistic term t first appears outside the scope of a metalinguistic quantifier, it
is labeled arbitrary or particular. For the sentential case, terms will always be of the
sort I, J. . . , for interpretations, and labeled particular when they first appear apart
from the quantifier S . Say At is some metalinguistic expression in which term t
appears, and Au is like At but with free instances of t replaced by u. Perhaps
At is IA D T and Au is JA D T. Then,
exs

Au

u arbitrary or particular

S tAt

S tAt
Au

u particular and new

As an instance of the left-hand introduction rule, we might move from JA D T,


for a J labeled either arbitrary or particular, to S I.IA D T/. If interpretation J is such
that JA D T, then there is some interpretation I such that IA D T. For the other
exploitation rule, we may move from S I.IA D T/ to the result that JA D T so
long as J is identified as particular and is new to the derivation, in the sense required
for 9E in chapter 6. In particular, it must be that the term does not so-far appear
outside the scope of a metalinguistic quantifier, and does not appear free in the final
result of the derivation. Given that some I is such that IA D T, we set up J as a
particular interpretation for which it is so.2
In addition, it will be helpful to allow a rule which lets us make assertions by
inspection about already given interpretations; and we will limit justifications by
(ins) just to assertions about interpretations (and, later, variable assignments). Thus,
for example, in the context of an interpretation I on which IA D T, we might allow,
n. IA D T        ins (I particular)

as a line of one of our derivations. In this case, I is a name of the interpretation, and
listed as particular on first use.
Now suppose we want to show that .B ! D/, B 6s D. Recall that our strategy for showing that an argument is invalid is to produce an interpretation, and show
2. Observe that, insofar as it is quantified, term I may itself be new in the sense that it does not so far appear outside the scope of a quantifier. Thus we may be justified in moving from S I.IA D T/ to IA D T, with I particular. However, as a matter of style, we will typically switch terms upon application of the exs rule.


that it makes the premises true and the conclusion not. So consider an interpretation
J such that JB T and JD T.

(J)
 1. JB T                                      ins (J particular)
 2. JB T O JD D T                             1 dsj
 3. JB ! D D T                                2 ST(!)
 4. JB D T                                    1 ST()
 5. JD T                                      ins
 6. JB ! D D T M JB D T M JD T                3,4,5 cnj
 7. S I.IB ! D D T M IB D T M ID T/           6 exs
 8. B ! D; B 6s D                             7 SV

(1) and (5) are by inspection of the interpretation J, where an individual name is
always labeled particular when it first appears. At (6) we have a conclusion about
interpretation J, and at (7) we generalize to the existential, for an application of
SV at (8). Here is the corresponding informal reasoning.
JB T; so either JB T or JD D T; so by ST(!), JB ! D D T. But

since JB T, by ST(), JB D T. And JD T. So JB ! D D T and


JB D T but JD T. So there is an interpretation I such that IB ! D D T
and IB D T but ID T. So by SV, .B ! D/, B 6s D

It should be clear that this reasoning reflects that of the derivation. The derivation
thus constrains the steps we make, and guides us to our goal. We show the argument
is invalid by showing that there exists an interpretation on which the premises are
true and the conclusion is not.
Say we want to show that .A ! B/ s A. To show that an argument is
valid, our idea has been to assume otherwise, and show that the assumption leads to
contradiction. So we might reason as follows.

(K)
 1. .A ! B/ 6s A                              assp
 2. S I.I.A ! B/ D T M IA T/                  1 SV
 3. J.A ! B/ D T M JA T                       2 exs (J particular)
 4. J.A ! B/ D T                              3 cnj
 5. JA ! B T                                  4 ST()
 6. JA D T M JB T                             5 ST(!)
 7. JA D T                                    6 cnj
 8. JA T                                      3 cnj
 9. .A ! B/ s A                               1-8 neg

Suppose .A ! B/ 6s A; then by SV there is some I such that I.A ! B/ D T and
IA T. Let J be a particular interpretation of this sort; then J.A ! B/ D T and


JA T. From the former, by ST(), JA ! B T; so by ST(!), JA D T and


JB T. So both JA D T and JA T. This is impossible; reject the assumption:

.A ! B/ s A.

At (2) we have the result that there is some interpretation on which the premise is true
and the conclusion is not. At (3), we set up to reason about a particular J for which
this is so. J does not so-far appear in the derivation, and does not appear in the goal at
(9). So we instantiate to it. This puts us in a position to reason by ST. The pattern is
typical. Given that the assumption leads to contradiction, we are justified in rejecting
the assumption, and thus conclude that the argument is valid. It is important that
we show the argument is valid, without reasoning individually about every possible
interpretation of the basic sentences!
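For sentential forms there are only finitely many relevant interpretations, so SV can also be checked by brute force. The sketch below (in Python; arguments are encoded as functions of an interpretation, and the sample arguments are illustrative ones of my own, not examples from the text) searches for a countermodel: none exists for a valid argument, and any one found witnesses invalidity.

```python
from itertools import product

def countermodel(letters, premises, conclusion):
    """Search every interpretation of the sentence letters for one that
    makes all premises true and the conclusion not true (SV); return it,
    or None if the argument is sententially valid."""
    for values in product([True, False], repeat=len(letters)):
        I = dict(zip(letters, values))
        if all(p(I) for p in premises) and not conclusion(I):
            return I
    return None

# Modus ponens is valid: no countermodel exists.
print(countermodel(['P', 'Q'],
                   [lambda I: I['P'],
                    lambda I: (not I['P']) or I['Q']],   # P -> Q
                   lambda I: I['Q']))                    # → None

# Affirming the consequent is invalid: A -> B and B do not entail A.
print(countermodel(['A', 'B'],
                   [lambda I: (not I['A']) or I['B'],    # A -> B
                    lambda I: I['B']],
                   lambda I: I['A']))                    # → {'A': False, 'B': True}
```

The chapter's point, of course, is to establish validity by direct reasoning rather than by such a survey; the search is only a sanity check on results obtained by the rules.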
Notice that we can also reason generally about forms. Here is a case of that sort.
T7.4s. s .Q ! P / ! .Q ! P / ! Q
 1. 6s .Q ! P / ! ..Q ! P / ! Q/              assp
 2. S I.I.Q ! P / ! ..Q ! P / ! Q/ T/         1 SV
 3. J.Q ! P / ! ..Q ! P / ! Q/ T              2 exs (J particular)
 4. JQ ! P D T M J.Q ! P / ! Q T              3 ST(!)
 5. J.Q ! P / ! Q T                           4 cnj
 6. JQ ! P D T M JQ T                         5 ST(!)
 7. JQ T                                      6 cnj
 8. JQ D T                                    7 ST()
 9. JQ ! P D T                                6 cnj
10. JQ T O JP D T                             9 ST(!)
11. JP D T                                    8,10 dsj
12. JQ ! P D T                                4 cnj
13. JQ T O JP D T                             12 ST(!)
14. JP D T                                    8,13 dsj
15. JP T                                      14 ST()
16. s .Q ! P / ! ..Q ! P / ! Q/               1-15 neg

Suppose 6s .Q ! P / ! ..Q ! P / ! Q/; then by SV there is some I


such that I.Q ! P / ! ..Q ! P / ! Q/ T. Let J be a particular
interpretation of this sort; then J.Q ! P / ! ..Q ! P / ! Q/ T; so
by ST(!), JQ ! P D T and J.Q ! P / ! Q T; from the latter,
by ST(!), JQ ! P D T and JQ T; from the latter of these, by ST(),
JQ D T. Since JQ ! P D T, by ST(!), JQ T or JP D T; but
JQ D T, so JP D T. Since JQ ! P D T, by ST(!), JQ T or
JP D T; but JQ D T, so JP D T; so by ST(), JP T. This is
impossible; reject the assumption: s .Q ! P / ! ..Q ! P / ! Q/.


Observe that the steps represented by (11) and (14) are not by cnj but by the dsj
rule with A O B and :A for the result that B.3 Observe also that contradictions
are obtained at the metalinguistic level. Thus JP D T at (11) does not contradict
JP D T at (14). Of course, it is a short step to the result that JP D T and
JP T which do contradict. As a general point of strategy, it is much easier to
manage a negated conditional than an unnegated one, for the negated conditional
yields a conjunctive result, and the unnegated a disjunctive. Thus we begin above
with the negated conditionals, and use the results to set up applications of dsj. This
is typical.
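The strategic point can be checked directly: on the classical truth functions, a negated conditional fixes both values, while an unnegated one only rules out a combination. A quick sketch in Python (the encoding is mine, not the text's):

```python
def arrow(p, q):
    """ST(->): a conditional is true iff its antecedent is not true
    or its consequent is true."""
    return (not p) or q

for P in (True, False):
    for Q in (True, False):
        # Negated conditional: conjunctive information, both values pinned.
        assert (not arrow(P, Q)) == (P and not Q)
        # Unnegated conditional: only disjunctive information.
        assert arrow(P, Q) == ((not P) or Q)
```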
There is nothing special about reasoning with forms. Thus similarly we can show,
T7.1s. P , P ! Q s Q
T7.2s. s P ! .Q ! P /
T7.3s. s .O ! .P ! Q// ! ..O ! P / ! .O ! Q//
T7.1s - T7.4s should remind you of the axioms and rule for the sentential part of AD
from chapter 3. These results (or, rather, analogues for the quantificational case) play
an important role for things to come.
These derivations are structurally much simpler than ones you have seen before
from ND. The challenge is accommodating new notation with the different mix of
rules. Again, to show that an argument is invalid, produce an interpretation; then
use it for a demonstration that there exists an interpretation that makes premises
true and the conclusion not. To show that an argument is valid, suppose otherwise;
then demonstrate that your assumption leads to contradiction. The derivations then
provide the pattern for your informal reasoning.
E7.2. Produce a formalized derivation, and then informal reasoning to demonstrate
each of the following. To show invalidity, you will have to produce an interpretation to which your argument refers.
*a. A ! B, A 6s B
*b. A ! B, B s A
c. A ! B, B ! C , C ! D s A ! D
3. Or, rather, we have :A O B and A, and thus skip application of neg to obtain the proper ::A for this application of dsj.


d. A ! B, B ! A s A
e. A ! B, A ! B 6s .A ! B/
f. .A ! B/ ! A s A ! B
g. A ! B, B s .B ! A/
h. A ! B, B ! A 6s A ! B
i. 6s .A ! B/ ! .A ! C / ! .A ! B/ ! C
j. s .A ! B/ ! .B ! C / ! .C ! A/
E7.3. Provide demonstrations for T7.1s - T7.3s in the informal style. Hint: you may
or may not find that truth tables, or formalized derivations, would be helpful
as a guide.

7.2.3 Derived Rules

Finally, for this section on sentential forms, we expand the range of our results by
means of some rules for ) and ,.
cnd    A ) B, A          A ) B, B ) C
       B                 A ) C

bcnd   A , B, A          A , B, B          A ) B; B ) A          A , B, B , C
       B                 A                 A , B                 A , C
We will also allow versions of bcnd which move from, say, A , B and :A, to :B
(like NB from ND+). And we will allow generalized versions of these rules moving
directly from, say, A ) B, B ) C, and C ) D to A ) D; and similarly, from
A , B, B , C, and C , D to A , D. In this last case, the natural informal
description is, A iff B; B iff C; C iff D; so A iff D. In real cases, however, repetition
of terms can be awkward and get in the way of reading. In practice, then, the pattern
collapses to, A iff B; iff C; iff D; so A iff D, where this is understood as in the
official version.
Also, when demonstrating that A ) B, in many cases, it is helpful to get B by
neg; officially, the pattern is as on the left,

   A
      :B
      C
      :C
   B
   A ) B

But the result is automatic once we derive a contradiction from A and :B; so, in practice, this pattern collapses into:

   A M :B
   C
   :C
   A ) B

So to demonstrate a conditional, it is enough to derive a contradiction from the antecedent and negation of the consequent. Let us also include among our definitions,
(abv) for unpacking abbreviations. This is to be understood as justifying any biconditional A , A0 where A0 abbreviates A. Such a biconditional can be used as either
an axiom or a rule.
We are now in a position to produce derived clauses for ST. In table form, we
have already seen derived forms for ST from chapter 4. But we did not then have the
official means to extend the definition.
ST0

(^) IP ^ Q D T , IP D T M IQ D T
IP ^ Q T , IP T O IQ T

(_) IP _ Q D T , IP D T O IQ D T
IP _ Q T , IP T M IQ T

($) IP $ Q D T , .IP D T M IQ D T/ O .IP T M IQ T/


IP $ Q T , .IP D T M IQ T/ O .IP T M IQ D T/

Again, you should recognize the derived clauses based on what you already know
from truth tables.
First, consider the positive form for ST0 (^). We reason about the arbitrary interpretation. The demonstration begins by abv, and strings together biconditionals to
reach the final result.

(L)
 1. IP ^ Q D T , I.P ! Q/ D T                 abv (I arbitrary)
 2. I.P ! Q/ D T , IP ! Q T                   ST()
 3. IP ! Q T , IP D T M IQ T                  ST(!)
 4. IP D T M IQ T , IP D T M IQ D T           ST()
 5. IP ^ Q D T , IP D T M IQ D T              1,2,3,4 bcnd

This derivation puts together a string of biconditionals of the form A , B, B , C,


C , D, D , E; the conclusion follows by bcnd. Notice that we use the abbreviation and first two definitions as axioms, to state the biconditionals. Technically, (4)
results from an implicit IP D T M IQ T , IP D T M IQ T with ST()
as a replacement rule, substituting IQ D T for IQ T on the right-hand side. In
the collapsed biconditional form, the result is as follows.


By abv, IP ^ Q D T iff I.P ! Q/ D T; by ST(), iff IP ! Q T;


by ST(!), iff IP D T and IQ T; by ST(), iff IP D T and IQ D T. So
IP ^ Q D T iff IP D T and IQ D T.

In this abbreviated form, each stage implies the next from start to finish. But similarly, each stage implies the one before from finish to start. So one might think of it
as demonstrating conditionals in both directions all at once for eventual application
of bcnd. Because we have just shown a biconditional, it follows immediately that
IP ^ Q T just in case the right hand side fails just in case one of IP T or
IQ T. However, we can also make the point directly.
By abv, IP ^ Q T iff I.P ! Q/ T; by ST(), iff IP ! Q D T;
by ST(!), iff IP T or IQ D T; by ST(), iff IP T or IQ T. So
IP ^ Q T iff IP T or IQ T.

Reasoning for ST0 (_) is similar. For ST0 ($) it will be helpful to introduce, as a
derived rule, a sort of distribution principle.
dst

.:A O B/ M .:B O A/ , .A M B/ O .:A M :B/

To show this, our basic idea will be to obtain the conditional going in both directions,
and then apply bcnd. Here is the argument from left-to-right.
 1. .:A O B/ M .:B O A/ M :.A M B/ O .:A M :B/       assp
 2. :.A M B/ O .:A M :B/                             1 cnj
 3. .:A O B/ M .:B O A/                              1 cnj
 4. :A O B                                           3 cnj
 5. :B O A                                           3 cnj
 6. :.A M B/ M :.:A M :B/                            2 dem
 7. :.A M B/                                         6 cnj
 8. :A O :B                                          7 dem
 9.    A                                             assp
10.    B                                             4,9 dsj
11.    :B                                            8,9 dsj
12. :A                                               9-11 neg
13. :B                                               5,12 dsj
14. :.:A M :B/                                       6 cnj
15. A O B                                            14 dem
16. B                                                12,15 dsj
17. .:A O B/ M .:B O A/ ) .A M B/ O .:A M :B/        1-16 cnd


The conditional is demonstrated in the collapsed form, where we assume the antecedent with the negation of the consequent, and go for a contradiction. Note the
little subderivation at (9) - (11); often the way to make headway with metalinguistic disjunction is to assume the negation of one side. This can feed into dsj and
neg. Demonstration of the conditional in the other direction is left as an exercise. Given dst, you should be able to demonstrate ST($), also in the collapsed
biconditional style. You will begin by observing by abv that IP $ Q D T iff
I..P ! Q/ ! .Q ! P // D T; by neg iff . . . . The negative side is relatively
straightforward, and does not require dst.
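dst is a classical equivalence, so although the derivation is what establishes it within the formalized system, a brute-force survey of truth values is a useful sanity check; a quick sketch:

```python
# Survey all four combinations of values for A and B and confirm that
# (not-A or B) and (not-B or A) holds exactly when (A and B) or
# (not-A and not-B) does -- the dst equivalence.
results = []
for A in (True, False):
    for B in (True, False):
        lhs = ((not A) or B) and ((not B) or A)
        rhs = (A and B) or ((not A) and (not B))
        results.append(lhs == rhs)

print(all(results))  # → True
```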
Having established the derived clauses for ST0 , we can use them directly in our
reasoning. Thus, for example, let us show that B _ .A ^ C /, .C ! A/ $ B 6s
.A ^ C /. For this, consider an interpretation J such that JA D JB D JC D T.

(M)
 1. JA D T                                                       ins (J particular)
 2. JC D T                                                       ins
 3. JA D T M JC D T                                              1,2 cnj
 4. JA ^ C D T                                                   3 ST0 (^)
 5. J.A ^ C / T                                                  4 ST()
 6. JB D T                                                       ins
 7. JB D T O JA ^ C D T                                          6 dsj
 8. JB _ .A ^ C / D T                                            7 ST0 (_)
 9. JC T O JA D T                                                1 dsj
10. JC ! A D T                                                   9 ST(!)
11. JC ! A D T M JB D T                                          10,6 cnj
12. .JC ! A D T M JB D T/ O .JC ! A T M JB T/                    11 dsj
13. J.C ! A/ $ B D T                                             12 ST0 ($)
14. JB _ .A ^ C / D T M J.C ! A/ $ B D T M J.A ^ C / T           8,13,5 cnj
15. S I.IB _ .A ^ C / D T M I.C ! A/ $ B D T M I.A ^ C / T/      14 exs
16. B _ .A ^ C /; .C ! A/ $ B 6s .A ^ C /                        15 SV

Since JA D T and JC D T, by ST0 (^), JA ^ C D T; so by ST(), J.A ^


C / T. Since JB D T, either JB D T or JA ^ C D T; so by ST0 (_),
JB _ .A ^ C / D T. Since JA D T, either JC T or JA D T; so by
ST(!), JC ! A D T; so both JC ! A D T and JB D T; so either both
JC ! A D T and JB D T or both JC ! A T and JB T; so by ST0 ($),
J.C ! A/ $ B D T. So JB _ .A ^ C / D T and J.C ! A/ $ B D T but
J.A ^ C / T; so there exists an interpretation I such that IB _ .A ^ C / D T
and I.C ! A/ $ B D T but I.A ^ C / T; so by SV, B _ .A ^ C /,
.C ! A/ $ B 6s .A ^ C /.


Metalinguistic Quick Reference (sentential)

DEFINITIONS:

ST   ()   IP D T , IP T
          IP T , IP D T
     (!)  IP ! Q D T , IP T O IQ D T
          IP ! Q T , IP D T M IQ T
     ()   IP Q D T , IP T O IQ T
          IP Q T , IP D T M IQ D T

ST0  (^)  IP ^ Q D T , IP D T M IQ D T
          IP ^ Q T , IP T O IQ T
     (_)  IP _ Q D T , IP D T O IQ D T
          IP _ Q T , IP T M IQ T
     ($)  IP $ Q D T , .IP D T M IQ D T/ O .IP T M IQ T/
          IP $ Q T , .IP D T M IQ T/ O .IP T M IQ D T/

SV   :S I.IP1 D T M : : : M IPn D T M IQ T/ , P1 : : : Pn s Q
     S I.IP1 D T M : : : M IPn D T M IQ T/ , P1 : : : Pn 6s Q

abv  Abbreviation allows A , A0 where A0 abbreviates A.

RULES:

com   A O B , B O A              A M B , B M A

idm   A , A O A                  A , A M A

dem   :.A M B/ , :A O :B         :.A O B/ , :A M :B

neg   A , ::A; and given a contradiction from the assumption A, conclude
      :A; given a contradiction from the assumption :A, conclude A.

cnj   from A and B, conclude A M B; from A M B, conclude A; from A M B,
      conclude B.

dsj   from A, conclude A O B; from B, conclude A O B; from A O B and :A,
      conclude B; from A O B and :B, conclude A.

ret   from A, conclude A.

exs   from Au, conclude S tAt (u arbitrary or particular); from S tAt,
      conclude Au (u particular and new).

cnd   from A ) B and A, conclude B; from A ) B and B ) C, conclude
      A ) C; given a contradiction from A M :B, conclude A ) B.

bcnd  from A , B and A, conclude B; from A , B and B, conclude A; from
      A ) B and B ) A, conclude A , B; from A , B and B , C, conclude
      A , C.

dst   .:A O B/ M .:B O A/ , .A M B/ O .:A M :B/

ins   Inspection allows assertions about interpretations and variable assignments.


Similarly we can show that A ! .B _ C /, C $ B, C s A. As usual, our


strategy is to assume otherwise, and go for contradiction.

(N)
 1. A ! .B _ C /; C $ B; C 6s A                                  assp
 2. S I.IA ! .B _ C / D T M IC $ B D T M IC D T M IA T/          1 SV
 3. JA ! .B _ C / D T M JC $ B D T M JC D T M JA T               2 exs (J particular)
 4. JC D T                                                       3 cnj
 5. JC T                                                         4 ST()
 6. JC T O JB T                                                  5 dsj
 7. :.JC D T M JB D T/                                           6 dem
 8. JC $ B D T                                                   3 cnj
 9. .JC D T M JB D T/ O .JC T M JB T/                            8 ST0 ($)
10. JC T M JB T                                                  9,7 dsj
11. :.JC D T O JB D T/                                           10 dem
12. JA T                                                         3 cnj
13. JA D T                                                       12 ST()
14. JA ! .B _ C / D T                                            3 cnj
15. JA T O JB _ C D T                                            14 ST(!)
16. JB _ C D T                                                   13,15 dsj
17. JB D T O JC D T                                              16 ST0 (_)
18. JC D T O JB D T                                              17 com
19. A ! .B _ C /; C $ B; C s A                                   1-18 neg

Suppose A ! .B _ C /, C $ B, C 6s A; then by SV there is some I such that


IA ! .B _ C / D T, and IC $ B D T, and IC D T, but IA T. Let J be a
particular interpretation of this sort; then JA ! .B _ C / D T, and JC $ B D T,
and JC D T, but JA T. Since JC D T, by ST(), JC T; so either
JC T or JB T; so it is not the case that both JC D T and JB D T. But
JC $ B D T; so by ST0 ($), both JC D T and JB D T, or both JC T and
JB T; but not the former, so JC T and JB T; so it is not the case that either
JC D T or JB D T. JA T; so by ST(), JA D T. But JA ! .B _C / D T;
so by ST(!), JA T or JB _ C D T; but JA D T; so JB _ C D T; so by
ST0 (_), JB D T or JC D T; so either JC D T or JB D T. But this is impossible;
reject the assumption: A ! .B _ C /, C $ B, C 6s A.

Though the formalized derivations are useful to discipline the way we reason, in the end you may find the written versions to be both quicker and easier to follow. As you work the exercises, try to free yourself from the formalized derivations to work the informal versions independently, though you should continue to use the formalized versions as a check for your work.


*E7.4. Complete the demonstration of derived clauses of ST0 by completing the demonstration for dst from right-to-left, and providing non-formalized reasonings for both the positive and negative parts of ST0 (_) and ST0 ($).

E7.5. Using ST() as above on p. 330, produce non-formalized reasonings to show each of the following. Again, you may or may not find formalized derivations helpful, but your reasoning should be no less clean than that guided by the rules. Hint: by ST(), IP Q T iff IP D T and IQ D T.
a. IP P D T iff IP D T
*b. IP .Q Q/ D T iff IP ! Q D T
c. I.P P / .Q Q/ D T iff IP _ Q D T
d. I.P Q/ .P Q/ D T iff IP ^ Q D T
E7.6. Produce non-formalized reasoning to demonstrate each of the following.
a. A ! .B ^ C /, C $ B, C s A
*b. .A $ B/, A, B s C ^ C
*c. .A ^ B/ 6s A ^ B
d. A ! B, B ! A 6s B ! A
e. A ^ .B ! C / 6s .A ^ C / _ .A ^ B/
f. .C _ D/ ^ B ! A, D s B ! A
g. s A _ ..C ! B/ ^ A/ _ A
h. D ! .A ! B/, A ! D, C ^ D s B
i. .A _ B/ ! .C ^ D/, .A _ B/ 6s .C ^ D/
j. A ^ .B _ C /, .C _ D/ ^ .D ! D/ s A ^ B

7.3 Quantificational

So far, we might have obtained sentential results for validity and invalidity by truth
tables. But our method positions us to make progress for the quantificational case,
compared to what we were able to do before. Again, we will depend on, and gradually expand, our formalized system as a guide.

7.3.1 Satisfaction

Given what we have done, it is easy to state definition SF for satisfaction as it applies to sentence letters, , and !. In this case, as described in chapter 4, we are
reasoning about satisfaction, and satisfaction depends not just on interpretations, but
on interpretations with variable assignments. For S an arbitrary sentence letter and
P and Q any formulas, where Id is an interpretation with variable assignment,
SF

(s) Id S D S , IS D T

Id S S , IS T

() Id P D S , Id P S

Id P S , Id P D S

(!) Id P ! Q D S , Id P S O Id Q D S

Id P ! Q S , Id P D S M Id Q S

Again, you should recognize this as a simple restatement of SF from p. 118. Rules
for manipulating the definitions remain as before. Already, then, we can produce
derived clauses for _, ^ and $.
SF0 (_) Id .P _ Q/ D S , Id P D S O Id Q D S
Id .P _ Q/ S , Id P S M Id Q S

(^) Id .P ^ Q/ D S , Id P D S M Id Q D S
Id .P ^ Q/ S , Id P S O Id Q S

($) Id .P $ Q/ D S , .Id P D S M Id Q D S/ O .Id P S M Id Q S/


Id .P $ Q/ S , .Id P D S M Id Q S/ O .Id P S M Id Q D S/

All these are like ones from before. For the first,
(O)
 1. Id P _ Q D S , Id P ! Q D S                       abv
 2. Id P ! Q D S , Id P S O Id Q D S                  SF(!)
 3. Id P S O Id Q D S , Id P D S O Id Q D S           SF()
 4. Id P _ Q D S , Id P D S O Id Q D S                1,2,3 bcnd

Again, line (3) results from an implicit Id P S O Id Q D S , Id P S O


Id Q D S with SF() as a replacement rule, substituting Id P D S for Id P S
on the right-hand side. The informal reasoning is straightforward.
By abv, Id P _Q D S iff Id P ! Q D S; by SF(!), iff Id P S or Id Q D S;
by SF(), iff Id P D S or Id Q D S. So Id P _ Q D S iff Id P D S or Id Q D S.


The reasoning is as before, except that our condition for satisfaction depends on an
interpretation with variable assignment, rather than an interpretation alone.
Of course, given these definitions, we can use them in our reasoning. As a simple
example, let us demonstrate that if Id P _ Q D S and Id Q D S, then Id P D S.

(P)
 1. Id P _ Q D S M Id Q D S                           assp
 2. Id P _ Q D S                                      1 cnj
 3. Id P D S O Id Q D S                               2 SF0 (_)
 4. Id Q D S                                          1 cnj
 5. Id Q S                                            4 SF()
 6. Id P D S                                          3,5 dsj
 7. .Id P _ Q D S M Id Q D S/ ) Id P D S              1-6 cnd

Suppose Id P _ Q D S and Id Q D S. From the former, by SF0 (_), Id P D S


or Id Q D S; but Id Q D S; so by SF(), Id Q S; so Id P D S. So if
Id P _ Q D S and Id Q D S, then Id P D S.

Again, basic reasoning is as in the sentential case, except that we carry along reference to variable assignments.
Observe that, given IA D T for a sentence letter A, to show that Id A _ B D S,
we reason,
(Q)
 1. IA D T                                            ins
 2. Id A D S                                          1 SF(s)
 3. Id A D S O Id B D S                               2 dsj
 4. Id A _ B D S                                      3 SF0 (_)

moving by SF(s) from the premise that the letter is true, to the result that it is satisfied,
so that we are in a position to apply other clauses of the definition for satisfaction. SF
applies to satisfaction, not truth! So we have to bridge from one to the other before
SF can apply!
This much should be straightforward, but let us pause to demonstrate derived
clauses for satisfaction, and reinforce familiarity with the quantificational definition
SF. As you work these and other problems, you may find the quantificational metalinguistic reference on p. 364 helpful.
E7.7. Produce formalized derivations and then informal reasoning to complete demonstrations for both positive and negative parts of derived clauses for SF0 . Hint:
you have been through the reasoning before!


*E7.8. Consider some Id and suppose IA D T, IB T and IC D T. For each of the expressions in E7.1, produce the formalized and then informal reasoning to demonstrate either that it is or is not satisfied on Id .

7.3.2 Validity

In the quantificational case, there is a distinction between satisfaction and truth. We


have been working with the definition for satisfaction. But validity is defined in terms
of truth. So to reason about validity, we need a bridge from satisfaction to truth that
applies beyond the case of sentence letters. For this, let A be a metalinguistic universal quantifier. So, for example, Ad.Id P D S/ says that any variable assignment
d is such that Id P D S. Then we have,
TI

IP D T , Ad.Id P D S/

IP T , S d.Id P S/

P is true on I iff it is satisfied for any variable assignment d. P is not true on I iff it
is not satisfied for some variable assignment d. The definition QV is like SV.
QV

:S I.IP1 D T M : : : M IPn D T M IQ T/ , P1 : : : Pn  Q
S I.IP1 D T M : : : M IPn D T M IQ T/ , P1 : : : Pn Q

An argument is quantificationally valid just in case there is no interpretation on which


the premises are true and the conclusion is not. Of course, we are now talking about
quantificational interpretations. Again, all of this repeats what was established in
chapter 4.
To manipulate the universal quantifier, we will need some new rules. In chapter 6, we used 8E to instantiate to any term (variable, constant, or otherwise). But
8I was restricted, the idea being to generalize only on variables for truly arbitrary
individuals. Corresponding restrictions are enforced here by the way terms are introduced. We generalize from variables for arbitrary individuals, but may instantiate to
variables or constants of any kind. The universal rules are,
unv    AtAt                      Au
       Au                        AtAt
       (u of any type)           (u arbitrary and new)

If some A is true for any t, then it is true for individual u. Thus we might move from
the generalization, Ad.Id A D S/ to the particular claim Ih A D S for assignment h.
For the right-hand introduction rule, we require that u be new in the sense required
for 8I in chapter 6. In particular, if u is new to a derivation for goal AtAt, u will
not appear free in any undischarged assumption when the universal rule is applied


(typically, our derivations will be so simple that this will not be an issue). If we can
show, say, Ih A D S for arbitrary assignment h, then it is appropriate to move to the
conclusion Ad.Id A D S/. We will also accept a metalinguistic quantifier negation,
as in ND+.
qn

:AtA , S t:A

:S tA , At:A

With these definitions and rules, we are ready to reason about validity at least
for sentential forms. Suppose we want to show,
T7.1. P , P ! Q  Q
 1. P ; P ! Q Q                                       assp
 2. S I.IP D T M IP ! Q D T M IQ T/                   1 QV
 3. JP D T M JP ! Q D T M JQ T                        2 exs (J particular)
 4. JQ T                                              3 cnj
 5. S d.Jd Q S/                                       4 TI
 6. Jh Q S                                            5 exs (h particular)
 7. JP ! Q D T                                        3 cnj
 8. Ad.Jd P ! Q D S/                                  7 TI
 9. Jh P ! Q D S                                      8 unv
10. Jh P S O Jh Q D S                                 9 SF(!)
11. Jh P S                                            6,10 dsj
12. JP D T                                            3 cnj
13. Ad.Jd P D S/                                      12 TI
14. Jh P D S                                          13 unv
15. P ; P ! Q  Q                                      1-14 neg

As usual, we begin with the assumption that the theorem is not valid, and apply the
definition of validity for the result that the premises are true and the conclusion not.
The goal is a contradiction. What is interesting are the applications of TI to bridge
between truth and satisfaction. We begin by working on the conclusion. Since the
conclusion is not true, by TI with exs we introduce a new variable assignment h on
which the conclusion is not satisfied. Then, because the premises are true, by TI with
unv the premises are satisfied on that very same assignment h. Then we use SF in the
usual way. All this is like the strategy from ND by which we jump on existentials: If
we had started with the premises, the requirement from exs that we instantiate to a
new term would have forced a different variable assignment. But, by beginning with
the conclusion, and coming with the universals from the premises after, we bring
results into contact for contradiction.
Suppose P , P ! Q Q. Then by QV, there is some I such that IP D T and
IP ! Q D T but IQ T; let J be a particular interpretation of this sort; then


JP D T and JP ! Q D T but JQ T. From the latter, by TI, there is some


d such that Jd Q S; let h be a particular assignment of this sort; then Jh Q S.

But since JP ! Q D T, by TI, for any d, Jd P ! Q D S; so Jh P ! Q D S;


so by SF(!), Jh P S or Jh Q D S; so Jh P S. But since JP D T, by TI,
for any d, Jd P D S; so Jh P D S. This is impossible; reject the assumption: P ,
P ! Q  Q.

Similarly we can show,


T7.2.  P ! .Q ! P /
T7.3.  .O ! .P ! Q// ! ..O ! P / ! .O ! Q//
T7.4. .Q ! P / ! .Q ! P / ! Q
T7.5. There is no interpretation I and formula P such that IP D T and IP D T.
Hint: Your goal is to show :S I.IP D T M IP D T/. You can get this by
neg.
In each case, the pattern is the same: Bridge assumptions about truth to definition SF
by TI with exs and unv. Reasoning with SF is as before. Given the requirement that
the metalinguistic existential quantifier always be instantiated to a new variable or
constant, it makes sense always to instantiate that which is not true, and so comes out
as a metalinguistic existential, first, and then come with universals on top of terms
already introduced. This is what we did above, and is like your derivation strategy in
ND.
*E7.9. Produce formalized derivations and non-formalized reasoning to show that
a,b,f,g,h from E7.6 are quantificationally valid.

E7.10. Provide demonstrations for T7.2, T7.3, T7.4 and T7.5 in the non-formalized
style. Hint: You may or may not decide that formalized derivations would be
helpful.

7.3.3 Terms and Atomics

So far, we have addressed only validity for sentential forms, and have not even seen
the (r) and (8) clauses for SF. We will get the quantifier clause in the next section. Here we come to the atomic clause for definition SF, but must first address
the connection with interpretations via definition TA. For constant c, variable x, and
complex term hn t1 : : : tn , we say Ihn ha1 : : : an i is the thing the function I[hn ] associates with input ha1 : : : an i (see p. 116).
TA

(c) Id c D Ic.
(v) Id x D dx.
(f) Id hn t1 : : : tn D Ihn hId t1 : : : Id tn i

This is a direct restatement of the definition. To manipulate it, we need rules for
equality.
eq    t D t        t D u , u D t        t D u, u D v        t D u, At
                                        t D v               Au

These should remind you of results from ND. We will allow generalized versions so
that from t D u, u D v, and v D w, we might move directly to t D w. And we will
not worry much about order around the equals sign so that, for example, we could
move directly from t D u and Au to At without first converting t D u to u D t
as required by the rule as stated. As in other cases, we will treat clauses from TA as
both axioms and rules, though as usual, we typically take them as rules.
Let us consider first how this enables us to determine term assignments. Here is
a relatively complex case. Suppose I has U D f1; 2g, If 2 D fhh1; 1i; 1i; hh1; 2i; 1i;
hh2; 1i; 2i; hh2; 2i; 2ig, Ig 1 D fh1; 2i; h2; 1ig, and Ia D 1. Recall that one-tuples
are equated with their members so that Ig 1 is officially fhh1i; 2i; hh2i; 1ig. Suppose
dx D 2 and consider Id g 1 f 2 xg 1 a. We might do this on a tree as in chapter 4.
(R) On a tree: x is assigned [2] and a is assigned [1] (by TA(v) and TA(c)); then g 1 a is assigned [2] (by TA(f)); then f 2 xg 1 a is assigned [2] (by TA(f)); and finally g 1 f 2 xg 1 a is assigned [1] (by TA(f)).


Perhaps we whip through this on the tree. But the derivation follows the very same
path, with explicit appeal to the definitions at every stage. In the derivation below,
lines (1) - (4) cover the top row by application of TA(v) and TA(c). Lines (5) - (7)
are like the second row, using the assignment to a with the interpretation of g 1 to
determine the assignment to g 1 a. Lines (8) - (10) cover the third row. And (11) - (13) use this to reach the final result.
 1. Ia D 1                                            ins (I particular)
 2. Id a D 1                                          1 TA(c)
 3. dx D 2                                            ins (d particular)
 4. Id x D 2                                          3 TA(v)
 5. Id g 1 a D Ig 1 h1i                               2 TA(f)
 6. Ig 1 h1i D 2                                      ins
 7. Id g 1 a D 2                                      5,6 eq
 8. Id f 2 xg 1 a D If 2 h2; 2i                       4,7 TA(f)
 9. If 2 h2; 2i D 2                                   ins
10. Id f 2 xg 1 a D 2                                 8,9 eq
11. Id g 1 f 2 xg 1 a D Ig 1 h2i                      10 TA(f)
12. Ig 1 h2i D 1                                      ins
13. Id g 1 f 2 xg 1 a D 1                             11,12 eq

As with trees, to discover that to which a complex term is assigned, we find the
assignment to the parts. Beginning with assignments to the parts, we work up to
the assignment to the whole. Notice that assertions about the interpretation and the
variable assignment are justified by ins. And notice the way we use TA as a rule at
(2) and (4), and then again at (5), (8) and (11).
Ia D 1; so by TA(c), Id a D 1. And dx D 2; so by TA(v), Id x D 2. Since
Id a D 1, by TA(f), Id g 1 a D Ig 1 h1i; but Ig 1 h1i D 2; so Id g 1 a D 2. Since
Id x D 2 and Id g 1 a D 2, by TA(f), Id f 2 xg 1 a D If 2 h2; 2i; but If 2 h2; 2i D

2; so Id f 2 xg 1 a D 2. And from this, by TA(f), Id g 1 f 2 xg 1 a D Ig 1 h2i; but


Ig 1 h2i D 1; so Id g 1 f 2 xg 1 a D 1.
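The recursion in TA is easy to mechanize. Here is a sketch in Python of the worked example (U = {1, 2}, with the given assignments to f 2, g 1, a, and x); the tuple encoding of complex terms is my own device, not the text's notation:

```python
# The interpretation and assignment from the worked example.
I = {
    'a': 1,
    'f': {(1, 1): 1, (1, 2): 1, (2, 1): 2, (2, 2): 2},   # I[f2]
    'g': {(1,): 2, (2,): 1},                             # I[g1]
}
d = {'x': 2}

def term(t):
    """TA: look up constants in I and variables in d; for a complex term,
    apply the interpreted function to the assignments of the subterms."""
    if isinstance(t, str):
        return I[t] if t in I else d[t]         # TA(c) / TA(v)
    f, *args = t
    return I[f][tuple(term(u) for u in args)]   # TA(f)

# The term g1 f2 x g1 a, encoded as nested tuples:
print(term(('g', ('f', 'x', ('g', 'a')))))  # → 1, as in the derivation
```

As with the tree and the derivation, the computation works from the assignments to the parts up to the assignment to the whole.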

With the ability to manipulate terms by TA, we can think about satisfaction and
truth for arbitrary formulas without quantifiers. This brings us to SF(r). Say Rn is
an n place relation symbol, and t1 : : : tn are terms.
SF(r) Id Rn t1 : : : tn D S , hId t1 : : : Id tn i 2 IRn
Id Rn t1 : : : tn S , hId t1 : : : Id tn i 62 IRn

This is a simple restatement of the definition from p. 118 in chapter 4. In fact,


because of the simple negative version, we will apply the definition just in its positive
form, and generate the negative case directly from it (as in NB from ND+).

CHAPTER 7. DIRECT SEMANTIC REASONING

353

Let us expand the above interpretation and variable assignment so that I[A¹] = {2} (or {⟨2⟩}) and I[B²] = {⟨1,2⟩, ⟨2,1⟩}. Then Id[Af²xa] = S.
(S)
 1. d[x] = 2                          ins (d particular)
 2. Id[x] = 2                         1 TA(v)
 3. I[a] = 1                          ins (I particular)
 4. Id[a] = 1                         3 TA(c)
 5. Id[f²xa] = I[f²]⟨2,1⟩             2,4 TA(f)
 6. I[f²]⟨2,1⟩ = 2                    ins
 7. Id[f²xa] = 2                      5,6 eq
 8. Id[Af²xa] = S ⇔ ⟨2⟩ ∈ I[A]        7 SF(r)
 9. ⟨2⟩ ∈ I[A]                        ins
10. Id[Af²xa] = S                     8,9 bcnd
Again, this mirrors what we did with trees, moving through the term assignments to the value of the atomic. Observe that satisfaction is not the same as truth! Insofar as d is particular, (unv) does not apply for the result that Af²xa is satisfied on every variable assignment, and so by TI that the formula is true. In this case, it is a simple matter to identify a variable assignment other than d on which the formula is not satisfied, and so to show that it is not true on I. Set h[x] = 1.
 1. h[x] = 1                          ins (h particular)
 2. Ih[x] = 1                         1 TA(v)
 3. I[a] = 1                          ins (I particular)
 4. Ih[a] = 1                         3 TA(c)
 5. Ih[f²xa] = I[f²]⟨1,1⟩             2,4 TA(f)
 6. I[f²]⟨1,1⟩ = 1                    ins
 7. Ih[f²xa] = 1                      5,6 eq
 8. Ih[Af²xa] = S ⇔ ⟨1⟩ ∈ I[A]        7 SF(r)
 9. ⟨1⟩ ∉ I[A]                        ins
10. Ih[Af²xa] ≠ S                     8,9 bcnd
11. Sd(Id[Af²xa] ≠ S)                 10 exs
12. I[Af²xa] ≠ T                      11 TI
Given that it is not satisfied on the particular variable assignment h, (exs) and TI give the result that Af²xa is not true. In this case, we simply pick the variable assignment we want: since the formula is not satisfied on this assignment, there is an assignment on which it is not satisfied; so it is not true. For a formula that is not a sentence, this is often the way to go. Just as it may be advantageous to find a particular interpretation to show invalidity, so it may be advantageous to seek out particular variable assignments for truth, in the case of open formulas.
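When U is finite, TI can even be checked by brute force: compute satisfaction on every variable assignment. A quick Python sketch for Af²xa (our own encoding, with just the two values of f² used above):

```python
# SF(r) plus TI for the open formula A f2 x a on the interpretation above:
# U = {1,2}, I[A] = {2}, I[a] = 1, I[f2]<2,1> = 2, I[f2]<1,1> = 1.
U = {1, 2}
I_A = {2}
I_a = 1
I_f2 = {(2, 1): 2, (1, 1): 1}

def satisfied(x):
    """SF(r): Id[A f2 x a] = S iff Id[f2 x a] is in I[A]."""
    return I_f2[(x, I_a)] in I_A

print(satisfied(2))                  # assignment d with d[x] = 2: satisfied
print(satisfied(1))                  # assignment h with h[x] = 1: not satisfied
print(all(satisfied(o) for o in U))  # TI: so the formula is not true on I
```

Only the assignment to the free variable x matters, so iterating over U covers every relevant assignment.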
h[x] = 1; so by TA(v), Ih[x] = 1. And I[a] = 1; so by TA(c), Ih[a] = 1. So by TA(f), Ih[f²xa] = I[f²]⟨1,1⟩; but I[f²]⟨1,1⟩ = 1; so Ih[f²xa] = 1. So by SF(r), Ih[Af²xa] = S iff ⟨1⟩ ∈ I[A]; but ⟨1⟩ ∉ I[A]; so Ih[Af²xa] ≠ S. So there is a variable assignment d such that Id[Af²xa] ≠ S; so by TI, I[Af²xa] ≠ T.
In contrast, even though it has free variables, Bxg¹x is true on this I. To show truth (a fact about every variable assignment) we assume otherwise, and demonstrate a contradiction. This parallels our strategy for validity. Say o is a metalinguistic variable that ranges over members of U. In this case, it will be necessary to make an assertion by ins that Ao(o = 1 O o = 2). This is clear enough, since U = {1,2}.

(T)

1. IBxg 1 x T
2. S d.Id Bxg 1 x S/
3. Ih Bxg 1 x S
4. Ao.o D 1 O o D 2/
5. Ih x D 1 O Ih x D 2
6.
Ih x D 1
7.
Ih g 1 x D Ig 1 h1i
8.
Ig 1 h1i D 2
9.
Ih g 1 x D 2
10.
Ih Bxg 1 x D S , h1; 2i 2 IB
11.
h1; 2i 62 IB
h1; 2i 2 IB
12.
13. Ih x 1
14. Ih x D 2
15. Ih g 1 x D Ig 1 h2i
16. Ig 1 h2i D 1
17. Ih g 1 x D 1
18. Ih Bxg 1 x D S , h2; 1i 2 IB
19. h2; 1i 62 IB
20. h2; 1i 2 IB
21. IBxg 1 x D T

assp (I particular)
1 TI
2 exs (h particular)
ins
4 unv
assp
6 TA(f)
ins
7,8 eq
6,9 SF(r)
10,3 bcnd
ins
6-12 neg
5,13 dsj
14 TA(f)
ins
15,16 eq
14,17 SF(r)
18,3 bcnd
ins
1-20 neg

Up to this point, by ins we have made only particular claims about an assignment or interpretation, for example that ⟨2,1⟩ ∈ I[B] or that I[g¹]⟨2⟩ = 1. This is the typical use of ins. In this case, however, at (4), we make a universal claim about U: any o ∈ U is equal to 1 or 2. Since Ih[x] is a metalinguistic term, picking out some member of U, we instantiate the universal to it, with the result that Ih[x] = 1 or Ih[x] = 2. When U is small, this is often helpful: by ins we identify all the members of U; then we are in a position to argue about them individually. This argument works because we get the result no matter which thing Ih[x] happens to be.
Suppose I[Bxg¹x] ≠ T; then by TI, for some d, Id[Bxg¹x] ≠ S; let h be a particular assignment of this sort; then Ih[Bxg¹x] ≠ S. Since U = {1,2}, Ih[x] = 1 or Ih[x] = 2. Suppose the former; then by TA(f), Ih[g¹x] = I[g¹]⟨1⟩; but I[g¹]⟨1⟩ = 2; so Ih[g¹x] = 2; so by SF(r), Ih[Bxg¹x] = S iff ⟨1,2⟩ ∈ I[B]; so ⟨1,2⟩ ∉ I[B]; but ⟨1,2⟩ ∈ I[B]; and this is impossible; reject the assumption: Ih[x] ≠ 1. So Ih[x] = 2; so by TA(f), Ih[g¹x] = I[g¹]⟨2⟩; but I[g¹]⟨2⟩ = 1; so Ih[g¹x] = 1; so by SF(r), Ih[Bxg¹x] = S iff ⟨2,1⟩ ∈ I[B]; so ⟨2,1⟩ ∉ I[B]. But ⟨2,1⟩ ∈ I[B]. And this is impossible; reject the original assumption: I[Bxg¹x] = T.

To show that the formula is true, we assume otherwise. If there are no free variables,
the argument may be straightforward. In this case with free variables, however, we
are forced to reason individually about each of the possible assignments to x. It
remains that we have been forced into cases. This is doable when U is small. We will
have to consider other options when it is larger!
E7.11. Consider an I and d such that U = {1,2}, I[a] = 1, I[f²] = {⟨⟨1,1⟩,2⟩, ⟨⟨1,2⟩,1⟩, ⟨⟨2,1⟩,1⟩, ⟨⟨2,2⟩,2⟩}, I[g¹] = {⟨1,1⟩, ⟨2,1⟩}, d[x] = 1 and d[y] = 2. Produce formalized derivations and non-formalized reasoning to determine the assignment Id for each of the following.
 a. a
 b. g¹y
*c. g¹g¹x
 d. f²g¹ax
 e. f²g¹af²yx
E7.12. Augment the above interpretation for E7.11 so that I[A¹] = {1} and I[B²] = {⟨1,2⟩, ⟨2,2⟩}. Produce formalized derivations and non-formalized reasoning to demonstrate each of the following.
 a. Id[Ax] = S
*b. I[Byx] ≠ T
 c. I[Bg¹ay] ≠ T
 d. I[Aa] = T
 e. I[¬Bxg¹x] = T

7.3.4  Quantifiers

We are finally ready to think more generally about validity and truth for quantifier forms. For this, we will complete our formalized system by adding the quantifier clause to definition SF.

SF(∀)  Id[∀xP] = S ⇔ Ao(Id(x|o)[P] = S)
       Id[∀xP] ≠ S ⇔ So(Id(x|o)[P] ≠ S)

This is a simple statement of the definition from p. 118. Again, we treat the metalinguistic individual variable o as implicitly restricted to the members of U (for any o ∈ U ...). You should think about this in relation to trees: From Id[∀xP] there are branches with Id(x|o)[P] for each object o ∈ U. The universal is satisfied when each branch is satisfied; not satisfied when some branch is unsatisfied. That is what is happening above. We have the derived clause too.

SF′(∃)  Id[∃xP] = S ⇔ So(Id(x|o)[P] = S)
        Id[∃xP] ≠ S ⇔ Ao(Id(x|o)[P] ≠ S)

The existential is satisfied when some branch is satisfied; not satisfied when every branch is not satisfied. For the positive form,

(U)
1. Id[∃xP] = S ⇔ Id[¬∀x¬P] = S                      abv
2. Id[¬∀x¬P] = S ⇔ Id[∀x¬P] ≠ S                     SF(¬)
3. Id[∀x¬P] ≠ S ⇔ So(Id(x|o)[¬P] ≠ S)               SF(∀)
4. So(Id(x|o)[¬P] ≠ S) ⇔ So(Id(x|o)[P] = S)         SF(¬)
5. Id[∃xP] = S ⇔ So(Id(x|o)[P] = S)                 1,2,3,4 bcnd

By abv, Id[∃xP] = S iff Id[¬∀x¬P] = S; by SF(¬), iff Id[∀x¬P] ≠ S; by SF(∀), iff for some o ∈ U, Id(x|o)[¬P] ≠ S; by SF(¬), iff for some o ∈ U, Id(x|o)[P] = S. So Id[∃xP] = S iff there is some o ∈ U such that Id(x|o)[P] = S.
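On a finite interpretation the clauses SF(¬), SF(→), SF(∀), and the derived SF′(∃) can be run as a recursion over modified assignments d(x|o). The following Python sketch (our own encoding, not the text's apparatus) does just that; of course such enumeration only checks particular finite interpretations and is no substitute for the general arguments below, which must cover all interpretations.

```python
# Satisfaction by recursion on formulas, over U = {1,2}, with a single
# one-place relation symbol A whose interpretation IA is passed in.
from itertools import chain, combinations

U = {1, 2}

def sat(f, d, IA):
    """Id[f] = S? Formulas are tuples; d is a dict from variables to U."""
    kind = f[0]
    if kind == 'A':                         # atomic Ax
        return d[f[1]] in IA
    if kind == 'neg':                       # SF(~)
        return not sat(f[1], d, IA)
    if kind == 'cond':                      # SF(->)
        return (not sat(f[1], d, IA)) or sat(f[2], d, IA)
    if kind == 'forall':                    # SF(forall): every d(x|o), o in U
        return all(sat(f[2], {**d, f[1]: o}, IA) for o in U)
    if kind == 'exists':                    # SF'(exists): some d(x|o)
        return any(sat(f[2], {**d, f[1]: o}, IA) for o in U)

# forall x(Ax -> Ax) is satisfied for every choice of I[A] over this U:
fx = ('forall', 'x', ('cond', ('A', 'x'), ('A', 'x')))
every_IA = [set(s) for s in chain.from_iterable(combinations(U, r) for r in range(3))]
print(all(sat(fx, {}, IA) for IA in every_IA))  # True
```

The `{**d, f[1]: o}` construction is the modified assignment d(x|o): like d except that x goes to o.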

Recall that we were not able to use trees to demonstrate validity in the quantificational case, because there were too many interpretations to have trees for all of them, and because universes may have too many members to have branches for all their members. But this is not a special difficulty for us now. For a simple case, let us show that ⊨ ∀x(Ax → Ax).

(V)
 1. ⊭ ∀x(Ax → Ax)                              assp
 2. SI(I[∀x(Ax → Ax)] ≠ T)                     1 QV
 3. J[∀x(Ax → Ax)] ≠ T                         2 exs (J particular)
 4. Sd(Jd[∀x(Ax → Ax)] ≠ S)                    3 TI
 5. Jh[∀x(Ax → Ax)] ≠ S                        4 exs (h particular)
 6. So(Jh(x|o)[Ax → Ax] ≠ S)                   5 SF(∀)
 7. Jh(x|m)[Ax → Ax] ≠ S                       6 exs (m particular)
 8. Jh(x|m)[Ax] = S M Jh(x|m)[Ax] ≠ S          7 SF(→)
 9. Jh(x|m)[Ax] = S                            8 cnj
10. Jh(x|m)[Ax] ≠ S                            8 cnj
11. ⊨ ∀x(Ax → Ax)                              1-10 neg

If ∀x(Ax → Ax) is not valid, there has to be some I on which it is not true. If ∀x(Ax → Ax) is not true on some I, there has to be some d on which it is not satisfied. And if the universal is not satisfied, there has to be some o ∈ U for which the corresponding branch is not satisfied. But this is impossible, for we cannot have a branch where this is so.

Suppose ⊭ ∀x(Ax → Ax); then by QV, there is some I such that I[∀x(Ax → Ax)] ≠ T. Let J be a particular interpretation of this sort; then J[∀x(Ax → Ax)] ≠ T; so by TI, for some d, Jd[∀x(Ax → Ax)] ≠ S. Let h be a particular assignment of this sort; then Jh[∀x(Ax → Ax)] ≠ S; so by SF(∀), there is some o ∈ U such that Jh(x|o)[Ax → Ax] ≠ S. Let m be a particular individual of this sort; then Jh(x|m)[Ax → Ax] ≠ S; so by SF(→), Jh(x|m)[Ax] = S and Jh(x|m)[Ax] ≠ S. But this is impossible; reject the assumption: ⊨ ∀x(Ax → Ax).

Notice, again, that the general strategy is to instantiate metalinguistic existential quantifiers as quickly as possible. Contradictions tend to arise at the level of atomic expressions and individuals.

Here is a case that is similar, but somewhat more involved. We show ∀x(Ax → Bx), ∃xAx ⊨ ∃zBz. Here is a start.

(W)
 1. ∀x(Ax → Bx), ∃xAx ⊭ ∃zBz                               assp
 2. SI(I[∀x(Ax → Bx)] = T M I[∃xAx] = T M I[∃zBz] ≠ T)     1 QV
 3. J[∀x(Ax → Bx)] = T M J[∃xAx] = T M J[∃zBz] ≠ T         2 exs (J particular)
 4. J[∃zBz] ≠ T                                            3 cnj
 5. Sd(Jd[∃zBz] ≠ S)                                       4 TI
 6. Jh[∃zBz] ≠ S                                           5 exs (h particular)
 7. J[∃xAx] = T                                            3 cnj
 8. Ad(Jd[∃xAx] = S)                                       7 TI
 9. Jh[∃xAx] = S                                           8 unv
10. So(Jh(x|o)[Ax] = S)                                    9 SF′(∃)
11. Jh(x|m)[Ax] = S                                        10 exs (m particular)
12. J[∀x(Ax → Bx)] = T                                     3 cnj
13. Ad(Jd[∀x(Ax → Bx)] = S)                                12 TI
14. Jh[∀x(Ax → Bx)] = S                                    13 unv
15. Ao(Jh(x|o)[Ax → Bx] = S)                               14 SF(∀)
16. Jh(x|m)[Ax → Bx] = S                                   15 unv
17. Jh(x|m)[Ax] ≠ S O Jh(x|m)[Bx] = S                      16 SF(→)
18. Jh(x|m)[Bx] = S                                        17,11 dsj
19. Ao(Jh(z|o)[Bz] ≠ S)                                    6 SF′(∃)
20. Jh(z|m)[Bz] ≠ S                                        19 unv

Note again the way we work with the metalinguistic quantifiers: We begin with the conclusion, because it is the one that requires a particular variable assignment; the premises can then be instantiated to that same assignment. Similarly, with that particular variable assignment on the table, we focus on the second premise, because it is the one that requires an instantiation to a particular individual. The other premise and the conclusion then come in later with universal quantifications that go onto the same thing. Also, Jh(x|m)[Ax] = S contradicts Jh(x|m)[Ax] ≠ S; this justifies dsj at (18). However Jh(x|m)[Bx] = S at (18) does not contradict Jh(z|m)[Bz] ≠ S at (20). There would have been a contradiction if the variable had been the same. But it is not. However, with the distinct variables, we can bring out the contradiction by forcing the result into the interpretation as follows.
21. h(x|m)[x] = m                              ins
22. Jh(x|m)[x] = m                             21 TA(v)
23. Jh(x|m)[Bx] = S ⇔ m ∈ J[B]                 22 SF(r)
24. m ∈ J[B]                                   23,18 bcnd
25. h(z|m)[z] = m                              ins
26. Jh(z|m)[z] = m                             25 TA(v)
27. Jh(z|m)[Bz] = S ⇔ m ∈ J[B]                 26 SF(r)
28. m ∉ J[B]                                   27,20 bcnd
29. ∀x(Ax → Bx), ∃xAx ⊨ ∃zBz                   1-28 neg


The assumption that the argument is not valid leads to the result that there is some interpretation J and m ∈ U such that m ∈ J[B] and m ∉ J[B]; so there can be no such interpretation, and the argument is quantificationally valid. Observe that, though we do not know anything else about h, simple inspection reveals that h(x|m) assigns object m to x. So we allow ourselves to assert it at (21) by ins; and similarly at (25). This pattern of moving from facts about satisfaction to facts about the interpretation is typical.
Suppose ∀x(Ax → Bx), ∃xAx ⊭ ∃zBz; then by QV, there is some I such that I[∀x(Ax → Bx)] = T and I[∃xAx] = T but I[∃zBz] ≠ T. Let J be a particular interpretation of this sort; then J[∀x(Ax → Bx)] = T and J[∃xAx] = T but J[∃zBz] ≠ T. From the latter, by TI, there is some d such that Jd[∃zBz] ≠ S. Let h be a particular assignment of this sort; then Jh[∃zBz] ≠ S. Since J[∃xAx] = T, by TI, for any d, Jd[∃xAx] = S; so Jh[∃xAx] = S; so by SF′(∃) there is some o ∈ U such that Jh(x|o)[Ax] = S. Let m be a particular individual of this sort; then Jh(x|m)[Ax] = S. Since J[∀x(Ax → Bx)] = T, by TI, for any d, Jd[∀x(Ax → Bx)] = S; so Jh[∀x(Ax → Bx)] = S; so by SF(∀), for any o ∈ U, Jh(x|o)[Ax → Bx] = S; so Jh(x|m)[Ax → Bx] = S; so by SF(→), either Jh(x|m)[Ax] ≠ S or Jh(x|m)[Bx] = S; so Jh(x|m)[Bx] = S; h(x|m)[x] = m; so by TA(v), Jh(x|m)[x] = m; so by SF(r), Jh(x|m)[Bx] = S iff m ∈ J[B]; so m ∈ J[B]. But since Jh[∃zBz] ≠ S, by SF′(∃), for any o ∈ U, Jh(z|o)[Bz] ≠ S; so Jh(z|m)[Bz] ≠ S; h(z|m)[z] = m; so by TA(v), Jh(z|m)[z] = m; so by SF(r), Jh(z|m)[Bz] = S iff m ∈ J[B]; so m ∉ J[B]. This is impossible; reject the assumption: ∀x(Ax → Bx), ∃xAx ⊨ ∃zBz.

Observe again the repeated use of the pattern that moves from truth through TI with the quantifier rules to satisfaction, so that SF gets a grip, and the pattern that moves through satisfaction to the interpretation. These should be nearly automatic.
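Validity cannot be established by enumeration, since there are too many interpretations; but as a sanity check on the result just shown, we can confirm by machine (a sketch of our own, outside the text's apparatus) that no countermodel to the argument exists over small finite universes:

```python
# Search for a countermodel to: forall x(Ax -> Bx), exists x Ax / exists z Bz,
# over universes of up to 3 individuals with all possible I[A], I[B].
from itertools import chain, combinations

def subsets(U):
    return [set(s) for s in
            chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

def countermodel(max_size=3):
    """Return (U, A, B) making the premises true and conclusion not, if any."""
    for n in range(1, max_size + 1):
        U = set(range(n))
        for A in subsets(U):
            for B in subsets(U):
                p1 = all(o not in A or o in B for o in U)   # forall x(Ax -> Bx)
                p2 = any(o in A for o in U)                 # exists x Ax
                c = any(o in B for o in U)                  # exists z Bz
                if p1 and p2 and not c:
                    return (U, A, B)
    return None

print(countermodel())  # None: no countermodel among these interpretations
```

The search coming up empty agrees with, but of course does not replace, the general demonstration.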
Here is an example that is particularly challenging in the way quantifier rules apply. We show ∃x∀yAxy ⊨ ∀y∃xAxy.

(X)
 1. ∃x∀yAxy ⊭ ∀y∃xAxy                                assp
 2. SI(I[∃x∀yAxy] = T M I[∀y∃xAxy] ≠ T)              1 QV
 3. J[∃x∀yAxy] = T M J[∀y∃xAxy] ≠ T                  2 exs (J particular)
 4. J[∀y∃xAxy] ≠ T                                   3 cnj
 5. Sd(Jd[∀y∃xAxy] ≠ S)                              4 TI
 6. Jh[∀y∃xAxy] ≠ S                                  5 exs (h particular)
 7. So(Jh(y|o)[∃xAxy] ≠ S)                           6 SF(∀)
 8. Jh(y|m)[∃xAxy] ≠ S                               7 exs (m particular)
 9. J[∃x∀yAxy] = T                                   3 cnj
10. Ad(Jd[∃x∀yAxy] = S)                              9 TI
11. Jh[∃x∀yAxy] = S                                  10 unv
12. So(Jh(x|o)[∀yAxy] = S)                           11 SF′(∃)
13. Jh(x|n)[∀yAxy] = S                               12 exs (n particular)
14. Ao(Jh(x|n,y|o)[Axy] = S)                         13 SF(∀)
15. Jh(x|n,y|m)[Axy] = S                             14 unv
16. Ao(Jh(y|m,x|o)[Axy] ≠ S)                         8 SF′(∃)
17. Jh(y|m,x|n)[Axy] ≠ S                             16 unv
18. h(y|m, x|n) = h(x|n, y|m)                        ins
19. Jh(x|n,y|m)[Axy] ≠ S                             17,18 eq
20. ∃x∀yAxy ⊨ ∀y∃xAxy                                1-19 neg

When multiple quantifiers come off, variable assignments are simply modified again, just as with trees. Observe again that we instantiate the metalinguistic existential quantifiers before universals. Also, the different existential quantifiers go to different individuals, to respect the requirement that individuals from exs be new. The key to this derivation is getting out both metalinguistic existentials for m and n before applying the corresponding universals; and what makes the derivation difficult is seeing that this needs to be done. Strictly, the variable assignment at (15) is the same as the one at (17); only the names are variants of one another. Thus we observe by ins that the assignments are the same, and apply eq for the contradiction. Another approach would have been to push for contradiction at the level of the interpretation. Thus, after (17) we might have continued,

18. h(x|n, y|m)[x] = n                             ins
19. h(x|n, y|m)[y] = m                             ins
20. Jh(x|n,y|m)[x] = n                             18 TA(v)
21. Jh(x|n,y|m)[y] = m                             19 TA(v)
22. Jh(x|n,y|m)[Axy] = S ⇔ ⟨n,m⟩ ∈ J[A]            20,21 SF(r)
23. ⟨n,m⟩ ∈ J[A]                                   22,15 bcnd
24. h(y|m, x|n)[x] = n                             ins
25. h(y|m, x|n)[y] = m                             ins
26. Jh(y|m,x|n)[x] = n                             24 TA(v)
27. Jh(y|m,x|n)[y] = m                             25 TA(v)
28. Jh(y|m,x|n)[Axy] = S ⇔ ⟨n,m⟩ ∈ J[A]            26,27 SF(r)
29. ⟨n,m⟩ ∉ J[A]                                   28,17 bcnd

This takes more steps, but follows a standard pattern. And you want to be particularly good at this pattern. We use facts about satisfaction to say that individuals assigned to terms are, or are not, in the interpretation of the relation symbol. Something along these lines would have been required if the conclusion had been, say, ∀w∃zAzw. Based on this latter strategy, here is the non-formalized version.
Suppose ∃x∀yAxy ⊭ ∀y∃xAxy; then by QV there is some I such that I[∃x∀yAxy] = T and I[∀y∃xAxy] ≠ T; let J be a particular interpretation of this sort; then J[∃x∀yAxy] = T and J[∀y∃xAxy] ≠ T. From the latter, by TI, there is some d such that Jd[∀y∃xAxy] ≠ S; let h be a particular assignment of this sort; then Jh[∀y∃xAxy] ≠ S; so by SF(∀), there is some o ∈ U such that Jh(y|o)[∃xAxy] ≠ S; let m be a particular individual of this sort; then Jh(y|m)[∃xAxy] ≠ S. Since J[∃x∀yAxy] = T, by TI for any d, Jd[∃x∀yAxy] = S; so Jh[∃x∀yAxy] = S; so by SF′(∃), there is some o ∈ U such that Jh(x|o)[∀yAxy] = S; let n be a particular individual of this sort; then Jh(x|n)[∀yAxy] = S; so by SF(∀), for any o ∈ U, Jh(x|n,y|o)[Axy] = S; so Jh(x|n,y|m)[Axy] = S. h(x|n, y|m)[x] = n and h(x|n, y|m)[y] = m; so by TA(v), Jh(x|n,y|m)[x] = n and Jh(x|n,y|m)[y] = m; so by SF(r), Jh(x|n,y|m)[Axy] = S iff ⟨n,m⟩ ∈ J[A]; so ⟨n,m⟩ ∈ J[A]. Since Jh(y|m)[∃xAxy] ≠ S, by SF′(∃), for any o ∈ U, Jh(y|m,x|o)[Axy] ≠ S; so Jh(y|m,x|n)[Axy] ≠ S. h(y|m, x|n)[x] = n and h(y|m, x|n)[y] = m; so by TA(v), Jh(y|m,x|n)[x] = n and Jh(y|m,x|n)[y] = m; so by SF(r), Jh(y|m,x|n)[Axy] = S iff ⟨n,m⟩ ∈ J[A]; so ⟨n,m⟩ ∉ J[A]. This is impossible; reject the assumption: ∃x∀yAxy ⊨ ∀y∃xAxy.

Try reading that to your roommate or parents! If you have followed to this stage, you have accomplished something significant. These are important results, given that we wondered in chapter 4 how this sort of thing could be done at all.

Here is a last trick that can sometimes be useful. Suppose we are trying to show ∀xPx ⊨ Pa. We will come to a stage where we want to use the premise to instantiate a variable o to the thing that is Jh[a]. So we might move directly from Ao(Jh(x|o)[Px] = S) to Jh(x|Jh[a])[Px] = S by unv. But this is ugly, and hard to follow. An alternative is to allow a rule (def) that defines m as a metalinguistic term for the same object as Jh[a]. The result is as follows.

(Y)
 1. ∀xPx ⊭ Pa                                   assp
 2. SI(I[∀xPx] = T M I[Pa] ≠ T)                 1 QV
 3. J[∀xPx] = T M J[Pa] ≠ T                     2 exs (J particular)
 4. J[Pa] ≠ T                                   3 cnj
 5. Sd(Jd[Pa] ≠ S)                              4 TI
 6. Jh[Pa] ≠ S                                  5 exs (h particular)
 7. Jh[a] = m                                   def (m particular)
 8. Jh[Pa] = S ⇔ m ∈ J[P]                       7 SF(r)
 9. m ∉ J[P]                                    6,8 bcnd
10. J[∀xPx] = T                                 3 cnj
11. Ad(Jd[∀xPx] = S)                            10 TI
12. Jh[∀xPx] = S                                11 unv
13. Ao(Jh(x|o)[Px] = S)                         12 SF(∀)
14. Jh(x|m)[Px] = S                             13 unv
15. h(x|m)[x] = m                               ins
16. Jh(x|m)[x] = m                              15 TA(v)
17. Jh(x|m)[Px] = S ⇔ m ∈ J[P]                  16 SF(r)
18. m ∈ J[P]                                    17,14 bcnd
19. ∀xPx ⊨ Pa                                   1-18 neg

The result adds a couple lines, but is perhaps easier to follow. Though an interpretation is not specified, we can be sure that Jh[a] is some particular member of U; we simply let m designate that individual, and instantiate the universal to it.
Suppose ∀xPx ⊭ Pa; then by QV, there is some I such that I[∀xPx] = T and I[Pa] ≠ T; let J be a particular interpretation of this sort; then J[∀xPx] = T and J[Pa] ≠ T. From the latter, by TI, there is some d such that Jd[Pa] ≠ S; let h be a particular assignment of this sort; then Jh[Pa] ≠ S; where m = Jh[a], by SF(r), Jh[Pa] = S iff m ∈ J[P]; so m ∉ J[P]. Since J[∀xPx] = T, by TI, for any d, Jd[∀xPx] = S; so Jh[∀xPx] = S; so by SF(∀), for any o ∈ U, Jh(x|o)[Px] = S; so Jh(x|m)[Px] = S; h(x|m)[x] = m; so by TA(v), Jh(x|m)[x] = m; so by SF(r), Jh(x|m)[Px] = S iff m ∈ J[P]; so m ∈ J[P]. This is impossible; reject the assumption: ∀xPx ⊨ Pa.

Since we can instantiate Ao(Jh(x|o)[Px] = S) to any object, we can instantiate it to the one that happens to be Jh[a]. The extra name streamlines the process. One can always do without the name. But there is no harm introducing it when it will help.


At this stage, we have the tools for a proof of the following theorem, which will be useful for later chapters.

T7.6. For any I and P, I[P] = T iff I[∀xP] = T

Hint: If P is satisfied for the arbitrary assignment, you may conclude that it is satisfied on one like h(x|m). In the other direction, if you can instantiate o to any object, you can instantiate it to the thing that is h[x]. But by ins, h with this assigned to x just is h. So after substitution, you can end up with the very same assignment as the one with which you started.

This result is interesting insofar as it underlies principles like A4 and Gen in AD or ∀E and ∀I in ND. We further explore this link in following chapters.
E7.13. Produce formalized derivations and non-formalized reasoning to demonstrate each of the following.
 a. ⊨ ∀x(Ax → Ax)
 b. ⊨ ¬∃x(Ax ∧ ¬Ax)
*c. Pa ⊨ ∃xPx
 d. ∀x(Ax ∧ Bx) ⊨ ∀yBy
 e. ∀yPy ⊨ ∀xPf¹x
 f. ∃yAy ⊨ ∃x(Ax ∨ Bx)
 g. ∀x(Ax → Dx) ⊨ ¬∃x(Ax ∧ ¬Dx)
 h. ∀x(Ax → Bx), ∀x(Bx → Cx) ⊨ ∀x(Ax → Cx)
 i. ∀x∀yAxy ⊨ ∀y∀xAxy
 j. ∀x∃y(Ay → Bx) ⊨ ∀x(∀yAy → Bx)

*E7.14. Provide demonstrations for (a) the negative form of SF′(∃) and then (b) T7.6, both in the non-formalized style. Hint: You may or may not decide that formalized derivations would be helpful.


Metalinguistic Quick Reference (quantificational)

DEFINITIONS:

SF  (s) Id[S] = S ⇔ I[S] = T

    (r) Id[Rⁿt₁...tₙ] = S ⇔ ⟨Id[t₁],...,Id[tₙ]⟩ ∈ I[Rⁿ]

    (¬) Id[¬P] = S ⇔ Id[P] ≠ S
        Id[¬P] ≠ S ⇔ Id[P] = S

    (→) Id[P → Q] = S ⇔ Id[P] ≠ S O Id[Q] = S
        Id[P → Q] ≠ S ⇔ Id[P] = S M Id[Q] ≠ S

    (∀) Id[∀xP] = S ⇔ Ao(Id(x|o)[P] = S)
        Id[∀xP] ≠ S ⇔ So(Id(x|o)[P] ≠ S)

SF′ (∨) Id[(P ∨ Q)] = S ⇔ Id[P] = S O Id[Q] = S
        Id[(P ∨ Q)] ≠ S ⇔ Id[P] ≠ S M Id[Q] ≠ S

    (∧) Id[(P ∧ Q)] = S ⇔ Id[P] = S M Id[Q] = S
        Id[(P ∧ Q)] ≠ S ⇔ Id[P] ≠ S O Id[Q] ≠ S

    (↔) Id[(P ↔ Q)] = S ⇔ (Id[P] = S M Id[Q] = S) O (Id[P] ≠ S M Id[Q] ≠ S)
        Id[(P ↔ Q)] ≠ S ⇔ (Id[P] = S M Id[Q] ≠ S) O (Id[P] ≠ S M Id[Q] = S)

    (∃) Id[∃xP] = S ⇔ So(Id(x|o)[P] = S)
        Id[∃xP] ≠ S ⇔ Ao(Id(x|o)[P] ≠ S)

TA  (c) Id[c] = I[c]
    (v) Id[x] = d[x]
    (f) Id[hⁿt₁...tₙ] = I[hⁿ]⟨Id[t₁],...,Id[tₙ]⟩

TI  I[P] = T ⇔ Ad(Id[P] = S)
    I[P] ≠ T ⇔ Sd(Id[P] ≠ S)

QV  ¬SI(I[P₁] = T M ... M I[Pₙ] = T M I[Q] ≠ T) ⇔ P₁...Pₙ ⊨ Q
    SI(I[P₁] = T M ... M I[Pₙ] = T M I[Q] ≠ T) ⇔ P₁...Pₙ ⊭ Q

RULES:

All the rules from the sentential metalinguistic reference (p. 343) plus:

unv  From At[A(t)] infer A(u), u of any type; from A(u) infer At[A(t)], u arbitrary and new.

qn   ¬At[A] ⇔ St[¬A]        ¬St[A] ⇔ At[¬A]

eq   t = t;  t = u ⇔ u = t;  from t = u and u = v infer t = v;  from t = u and A(t) infer A(u).

def  Defines one metalinguistic term t by another u so that t = u.

7.3.5  Invalidity

We already have in hand concepts required for showing invalidity. Difficulties are mainly strategic and practical. As usual, for invalidity, the idea is to produce an interpretation, and show that it makes the premises true and the conclusion not. Here is a case parallel to one you worked with trees in homework from E4.14. We show ∀xPf¹x ⊭ ∀xPx. For the interpretation J set U = {1,2}, J[P] = {1}, J[f¹] = {⟨1,1⟩, ⟨2,1⟩}. We want to take advantage of the particular features of this interpretation to show that it makes the premise true and the conclusion not. Begin as follows.

(Z)
 1. J[∀xPx] = T                       assp (J particular)
 2. Ad(Jd[∀xPx] = S)                  1 TI
 3. Jh[∀xPx] = S                      2 unv (h particular)
 4. Ao(Jh(x|o)[Px] = S)               3 SF(∀)
 5. Jh(x|2)[Px] = S                   4 unv
 6. h(x|2)[x] = 2                     ins
 7. Jh(x|2)[x] = 2                    6 TA(v)
 8. Jh(x|2)[Px] = S ⇔ 2 ∈ J[P]        7 SF(r)
 9. 2 ∈ J[P]                          8,5 bcnd
10. 2 ∉ J[P]                          ins
11. J[∀xPx] ≠ T                       1-10 neg

This much is straightforward. We instantiate the metalinguistic universal quantifier to 2, because that is the individual which exposes the conclusion as not true. Now one option is to reason individually about each member of U. This is always possible, and sometimes necessary. Thus the argument is straightforward but tedious by methods we have seen before.

12. J[∀xPf¹x] ≠ T                               assp
13. Sd(Jd[∀xPf¹x] ≠ S)                          12 TI
14. Jh[∀xPf¹x] ≠ S                              13 exs (h particular)
15. So(Jh(x|o)[Pf¹x] ≠ S)                       14 SF(∀)
16. Jh(x|m)[Pf¹x] ≠ S                           15 exs (m particular)
17. h(x|m)[x] = m                               ins
18. Jh(x|m)[x] = m                              17 TA(v)
19. Ao(o = 1 O o = 2)                           ins
20. m = 1 O m = 2                               19 unv
21. | m = 1                                     assp
22. | Jh(x|m)[x] = 1                            18,21 eq
23. | Jh(x|m)[f¹x] = J[f¹]⟨1⟩                   22 TA(f)
24. | J[f¹]⟨1⟩ = 1                              ins
25. | Jh(x|m)[f¹x] = 1                          24,23 eq
26. | Jh(x|m)[Pf¹x] = S ⇔ 1 ∈ J[P]              25 SF(r)
27. | 1 ∉ J[P]                                  26,16 bcnd
28. | 1 ∈ J[P]                                  ins
29. m ≠ 1                                       21-28 neg
30. m = 2                                       20,29 dsj
31. Jh(x|m)[x] = 2                              18,30 eq
32. Jh(x|m)[f¹x] = J[f¹]⟨2⟩                     31 TA(f)
33. J[f¹]⟨2⟩ = 1                                ins
34. Jh(x|m)[f¹x] = 1                            33,32 eq
35. Jh(x|m)[Pf¹x] = S ⇔ 1 ∈ J[P]                34 SF(r)
36. 1 ∉ J[P]                                    35,16 bcnd
37. 1 ∈ J[P]                                    ins
38. J[∀xPf¹x] = T                               12-37 neg
39. J[∀xPf¹x] = T M J[∀xPx] ≠ T                 11,38 cnj
40. SI(I[∀xPf¹x] = T M I[∀xPx] ≠ T)             39 exs
41. ∀xPf¹x ⊭ ∀xPx                               40 QV

m has to be some member of U, so we instantiate the universal at (19) to it, and reason about the cases individually. This reflects what we have done before. But this interpretation is designed so that no matter what o may be, J[f¹]⟨o⟩ = 1. And, rather than the simple generalization about the universe of discourse, we might have generalized by ins about the interpretation of the function symbol itself. Thus, we might have substituted for lines (19)-(34) as follows,
19. Jh(x|m)[f¹x] = J[f¹]⟨m⟩           18 TA(f)
20. Ao(J[f¹]⟨o⟩ = 1)                  ins
21. J[f¹]⟨m⟩ = 1                      20 unv
22. Jh(x|m)[f¹x] = 1                  19,21 eq
picking up with (35) after. This is better! Before, we found the contradiction when m was 1 and again when m was 2. But, in either case, the reason for the contradiction is that the function has output 1. So this version avoids the cases, by reasoning directly about the result from the function. Here is the non-formalized version on this latter strategy.
Suppose J[∀xPx] = T; then by TI, for any d, Jd[∀xPx] = S; let h be a particular assignment; then Jh[∀xPx] = S; so by SF(∀), for any o ∈ U, Jh(x|o)[Px] = S; so Jh(x|2)[Px] = S; h(x|2)[x] = 2; so by TA(v), Jh(x|2)[x] = 2; so by SF(r), Jh(x|2)[Px] = S iff 2 ∈ J[P]; so 2 ∈ J[P]. But 2 ∉ J[P]. This is impossible; reject the assumption: J[∀xPx] ≠ T.

Suppose J[∀xPf¹x] ≠ T; then by TI, for some d, Jd[∀xPf¹x] ≠ S; let h be a particular assignment of this sort; then Jh[∀xPf¹x] ≠ S; so by SF(∀), for some o ∈ U, Jh(x|o)[Pf¹x] ≠ S; let m be a particular individual of this sort; then Jh(x|m)[Pf¹x] ≠ S. h(x|m)[x] = m; so by TA(v), Jh(x|m)[x] = m; so by TA(f), Jh(x|m)[f¹x] = J[f¹]⟨m⟩; but for any o ∈ U, J[f¹]⟨o⟩ = 1; so J[f¹]⟨m⟩ = 1; so Jh(x|m)[f¹x] = 1; so by SF(r), Jh(x|m)[Pf¹x] = S iff 1 ∈ J[P]; so 1 ∉ J[P]; but 1 ∈ J[P]. This is impossible; reject the assumption: J[∀xPf¹x] = T.

So there is an interpretation I such that I[∀xPf¹x] = T and I[∀xPx] ≠ T; so by QV, ∀xPf¹x ⊭ ∀xPx.

Reasoning about cases is possible, and sometimes necessary, when the universe is
small. But it is often convenient to organize your reasoning by generalizations about
the interpretation as above. Such generalizations are required when the universe is
large.
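For an interpretation this small, the two truth checks can also be confirmed at a glance by machine. A Python sketch of the check (our own, not the book's method), directly encoding J:

```python
# J from the example: U = {1,2}, J[P] = {1}, J[f1] = {<1,1>, <2,1>}.
U = {1, 2}
JP = {1}
Jf1 = {1: 1, 2: 1}

premise = all(Jf1[o] in JP for o in U)    # forall x Pf1x: true, since f1
conclusion = all(o in JP for o in U)      # always yields 1; forall x Px fails
print(premise, conclusion)                # True False
```

The premise comes out true and the conclusion not, which is just what QV requires for invalidity.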
Here is a case that requires such generalizations, insofar as the universe U for the interpretation to show invalidity has infinitely many members. We show ∀x∀y(Sx = Sy → x = y) ⊭ ∃x(Sx = ∅). First note that no interpretation with finite U makes the premise true and conclusion false. For suppose U has finitely many members and the successor function is represented by arrows as follows,

o₀ → o₁ → o₂ → o₃ → o₄ → o₅ ... → oₙ

with I[∅] = o₀. So I[S] includes ⟨o₀,o₁⟩, ⟨o₁,o₂⟩, ⟨o₂,o₃⟩, and so forth. What is paired with oₙ? It cannot be any of o₁ through oₙ, or the premise is violated, because some one thing is the successor of different elements (you should see how this works). And if the conclusion is false, it cannot be o₀ either. And similarly for any finite universe. But, as should be obvious by consideration of a standard interpretation of the symbols, the argument is not valid. To show this, let the interpretation be N, where,

U = {0, 1, 2, ...}
N[∅] = 0
N[S] = {⟨0,1⟩, ⟨1,2⟩, ⟨2,3⟩, ...}
N[=] = {⟨0,0⟩, ⟨1,1⟩, ⟨2,2⟩, ...}

First we show that N[∃x(Sx = ∅)] ≠ T. Note that we might have specified the interpretation for equality by saying something like AoAp(⟨o,p⟩ ∈ N[=] ⇔ o = p). Similarly, the interpretation of S is such that no o has a successor equal to zero: Ao(N[S]⟨o⟩ ≠ 0). We will simply appeal to these facts by ins in the following.

(AA)
 1. N[∃x(Sx = ∅)] = T                                      assp (N particular)
 2. Ad(Nd[∃x(Sx = ∅)] = S)                                 1 TI
 3. Nh[∃x(Sx = ∅)] = S                                     2 unv (h particular)
 4. So(Nh(x|o)[Sx = ∅] = S)                                3 SF′(∃)
 5. Nh(x|m)[Sx = ∅] = S                                    4 exs (m particular)
 6. N[∅] = 0                                               ins
 7. Nh(x|m)[∅] = 0                                         6 TA(c)
 8. Nh(x|m)[Sx = ∅] = S ⇔ ⟨Nh(x|m)[Sx], 0⟩ ∈ N[=]          7 SF(r)
 9. ⟨Nh(x|m)[Sx], 0⟩ ∈ N[=]                                8,5 bcnd
10. AoAp(⟨o,p⟩ ∈ N[=] ⇒ o = p)                             ins
11. Nh(x|m)[Sx] = 0                                        10,9 unv
12. h(x|m)[x] = m                                          ins
13. Nh(x|m)[x] = m                                         12 TA(v)
14. Nh(x|m)[Sx] = N[S]⟨m⟩                                  13 TA(f)
15. N[S]⟨m⟩ = 0                                            11,14 eq
16. Ao(N[S]⟨o⟩ ≠ 0)                                        ins
17. N[S]⟨m⟩ ≠ 0                                            16 unv
18. N[∃x(Sx = ∅)] ≠ T                                      1-17 neg

Most of this is as usual. What is interesting is that at (10) we assert, by ins, that for any o and p in U, if ⟨o,p⟩ ∈ N[=], then o = p. This should be obvious from the initial (automatic) specification of N[=]. And at (16) we assert that no o is such that ⟨o,0⟩ ∈ N[S]. Again, this should be clear from the specification of N[S]. In this case, there is no way to instantiate the metalinguistic quantifiers to every member of U, on the pattern of what we have been able to do with two-member universes! But we do not have to, as the general facts are sufficient for the result.
Suppose N[∃x(Sx = ∅)] = T; then by TI, for any d, Nd[∃x(Sx = ∅)] = S; let h be some particular d; then Nh[∃x(Sx = ∅)] = S; so by SF′(∃), for some o ∈ U, Nh(x|o)[Sx = ∅] = S; let m be a particular individual of this sort; then Nh(x|m)[Sx = ∅] = S. N[∅] = 0; so by TA(c), Nh(x|m)[∅] = 0; so by SF(r), Nh(x|m)[Sx = ∅] = S iff ⟨Nh(x|m)[Sx], 0⟩ ∈ N[=]; so ⟨Nh(x|m)[Sx], 0⟩ ∈ N[=]; but for any o, p ∈ U, if ⟨o,p⟩ ∈ N[=] then o = p; so Nh(x|m)[Sx] = 0. h(x|m)[x] = m; so by TA(v), Nh(x|m)[x] = m; so by TA(f), Nh(x|m)[Sx] = N[S]⟨m⟩; so N[S]⟨m⟩ = 0. But for any o ∈ U, N[S]⟨o⟩ ≠ 0; so N[S]⟨m⟩ ≠ 0. This is impossible; reject the assumption: N[∃x(Sx = ∅)] ≠ T.

Given what we have already seen, this should be straightforward. Demonstration that N[∀x∀y(Sx = Sy → x = y)] = T, and so that the argument is not valid, is left as an exercise. Hint: In addition to facts about equality, you may find it helpful to assert AoAp(o ≠ p ⇒ N[S]⟨o⟩ ≠ N[S]⟨p⟩). Be sure that you understand this before you assert it! Of course, we have here something that could never have been accomplished with trees, insofar as the universe is infinite!
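The earlier claim that no finite interpretation works here can itself be confirmed by brute force for small universes, a Python sketch of our own:

```python
# On a finite universe of size n, try every function S: U -> U. The premise
# requires S to be injective; falsity of the conclusion requires that nothing
# maps to 0 (the interpretation of the zero constant).
from itertools import product

def finite_countermodel(n):
    U = range(n)
    for values in product(U, repeat=n):       # values[i] is S(i)
        injective = len(set(values)) == n
        misses_zero = 0 not in values
        if injective and misses_zero:
            return values
    return None

print(any(finite_countermodel(n) for n in range(1, 5)))  # False: none exists
```

This is just the pigeonhole point of the arrow diagram above: an injective function on a finite set is a permutation, so its range cannot miss o₀.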
Recall that the interpretation of equality is the same across all interpretations.
Thus our general assertion is possible in case of the arbitrary interpretation, and we
are positioned to prove some last theorems.
T7.7. ⊨ (t = t)

Hint: By ins, for any I and any o ∈ U, ⟨o,o⟩ ∈ I[=]. Given this, the argument is easy.

*T7.8. ⊨ (xᵢ = y) → (hⁿx₁...xᵢ...xₙ = hⁿx₁...y...xₙ)

Hint: If you have trouble with this, try showing a simplified version: ⊨ (x = y) → (h¹x = h¹y).

T7.9. ⊨ (xᵢ = y) → (Rⁿx₁...xᵢ...xₙ → Rⁿx₁...y...xₙ)

Hint: If you have trouble with this, try showing a simplified version: ⊨ (x = y) → (Rx → Ry).
At this stage, we have introduced a method for reasoning about semantic definitions. As you continue to work with the definitions, it should become increasingly
clear how they fit together into a coherent (and pleasing) whole. In later chapters,
we will leave the formalized system behind as we encounter further definitions in
diverse contexts. But from this chapter you should have gained a solid grounding in
the sort of thing we will want to do.
E7.15. Produce interpretations (with, if necessary, variable assignments) and then
formalized derivations and non-formalized reasoning to show each of the following.

CHAPTER 7. DIRECT SEMANTIC REASONING

370

Theorems of Chapter 7

T7.1s  P, P → Q ⊨s Q
T7.2s  ⊨s P → (Q → P)
T7.3s  ⊨s (O → (P → Q)) → ((O → P) → (O → Q))
T7.4s  ⊨s (¬Q → ¬P) → ((¬Q → P) → Q)
T7.1   P, P → Q ⊨ Q
T7.2   ⊨ P → (Q → P)
T7.3   ⊨ (O → (P → Q)) → ((O → P) → (O → Q))
T7.4   ⊨ (¬Q → ¬P) → ((¬Q → P) → Q)
T7.5   There is no interpretation I and formula P such that I[P] = T and I[¬P] = T.
T7.6   For any I and P, I[P] = T iff I[∀xP] = T
T7.7   ⊨ (t = t)
T7.8   ⊨ (xᵢ = y) → (hⁿx₁...xᵢ...xₙ = hⁿx₁...y...xₙ)
T7.9   ⊨ (xᵢ = y) → (Rⁿx₁...xᵢ...xₙ → Rⁿx₁...y...xₙ)
 a. ∃xPx ⊭ Pa
*b. ⊭ f¹g¹x = g¹f¹x
 c. ∃xFx, ∃yGy ⊭ ∃z(Fz ∧ Gz)
 d. ∀x∃yAxy ⊭ ∃y∀xAxy
 e. ∀x∃y(Ay → Bx) ⊭ ∀x(∃yAy → Bx)

*E7.16. Provide demonstrations for (simplified versions of) T7.7 - T7.9 in the non-formalized style. Hint: You may or may not decide that a formalized derivation would be helpful. Challenge: can you show the theorems in their general form?


E7.17. Show that N[∀x∀y(Sx = Sy → x = y)] = T, and so complete the demonstration that ∀x∀y(Sx = Sy → x = y) ⊭ ∃x(Sx = ∅). You may simply assert that N[∃x(Sx = ∅)] ≠ T with justification from the text.

E7.18. Suppose we want to show that ∀x∃yRxy, ∀x∃yRyx, ∀x∀y∀z((Rxy ∧ Ryz) → Rxz) ⊭ ∃xRxx.
*a. Explain why no interpretation with a finite universe will do.
 b. Explain why the standard interpretation N with U = {0, 1, 2, ...} and N[R] = {⟨m,n⟩ | m < n} will not do.
 c. Find an appropriate interpretation and use it to show that ∀x∃yRxy, ∀x∃yRyx, ∀x∀y∀z((Rxy ∧ Ryz) → Rxz) ⊭ ∃xRxx.
E7.19. Here is an interpretation to show ⊭ ∃x∀y((Axy ∧ ¬Ayx) → (Axx ↔ Ayy)).

U = {1, 2, 3, . . .}
I[A] = {⟨m, n⟩ | m ≤ n and m is odd, or m < n and m is even}

So I[A] has members,

⟨1,1⟩, ⟨1,2⟩, ⟨1,3⟩ . . .

⟨2,3⟩, ⟨2,4⟩, ⟨2,5⟩ . . .

⟨3,3⟩, ⟨3,4⟩, ⟨3,5⟩ . . .

⟨4,5⟩, ⟨4,6⟩, ⟨4,7⟩ . . .

and so forth. Try to understand why this works, and why ≤ or < will not work
by themselves. Then see if you can find an interpretation where U has ≤ four
members, and use your interpretation to demonstrate that ⊭ ∃x∀y((Axy ∧
¬Ayx) → (Axx ↔ Ayy)).
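The interpretation offered for E7.19 can be checked mechanically over an initial segment of U. The following Python sketch is my illustration, not part of the text, and the precise shape of the target formula, ∃x∀y((Axy ∧ ¬Ayx) → (Axx ↔ Ayy)), is a reconstruction from a degraded original; the check confirms that every candidate x has a witness y falsifying the conditional, so no x verifies the existential claim.

```python
# Mechanical check of the E7.19 interpretation over an initial segment of U.
def A(m, n):
    """I[A] = {<m,n> | m <= n and m is odd, or m < n and m is even}."""
    return (m <= n and m % 2 == 1) or (m < n and m % 2 == 0)

def counterexample(x, bound=100):
    """Some y with Axy and not Ayx, while Axx and Ayy disagree."""
    return any(A(x, y) and not A(y, x) and (A(x, x) != A(y, y))
               for y in range(1, bound))

# for every x there is such a y, so the existential formula comes out false
assert all(counterexample(x) for x in range(1, 50))
```

For odd x the witness is the even number x + 1; for even x it is the odd number x + 1, which is why neither ≤ nor < alone would do.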
E7.20. Consider LNT as in chapter 6 (p. 299) with just the constant ∅, the function symbols S, + and ×, and the relation symbol = along with the axioms of Robinson Arithmetic as in the Robinson and Peano reference on p. 311. Then (i)
use the standard interpretation N to show that Q ⊭ ¬∀x((∅ × x) = ∅) and
Q ⊭ ¬∀x∀y((x × y) = (y × x)). And (ii) take a nonstandard interpretation
that has U = {0, 1, 2, . . . , a} for some object a that is not a number; assign 0
to ∅ in the usual way. Then set,

   S |           + | j      a
   i | i+1       i | i+j    a
   a | a         a | a      a

   × | 0    j      a
   0 | 0    0      a
   i | 0    i×j    a
   a | 0    a      a

(with i and j arbitrary nonzero numbers; so S and + are as usual on the numbers, with a absorbing, while x × 0 = 0 for every x, and otherwise a absorbs multiplication).
Use this interpretation to show Q ⊭ ∀x((∅ × x) = ∅) and Q ⊭ ∀x∀y((x ×
y) = (y × x)). This result, together with T10.3, according to which our
derivation system is sound, is sufficient to show that Robinson Arithmetic is
not (negation) complete: there are sentences P of LNT such that Q proves
neither P nor ¬P.
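The nonstandard interpretation of part (ii) can itself be checked mechanically. The Python sketch below is my illustration, and the particular way the operations treat the extra object (represented here by the string 'a') is my reading of the damaged tables: S, + and × behave normally on numbers, a absorbs addition, x × 0 = 0 for every x, and otherwise a absorbs multiplication.

```python
# A sketch of the nonstandard interpretation in E7.20, with the extra
# (non-numeric) object represented by the string 'a'.
a = 'a'

def S(x):
    return a if x == a else x + 1

def add(x, y):
    return a if a in (x, y) else x + y

def mul(x, y):
    if y == 0:
        return 0          # keeps the Q axiom x*0 = 0 true, even at x = a
    return a if a in (x, y) else x * y

# forall x (0*x = 0) fails at x = a, and with it commutativity of x:
assert mul(0, a) == a != 0
assert mul(0, a) != mul(a, 0)

# spot-check two Q axioms, x + Sy = S(x + y) and x * Sy = (x * y) + x:
dom = list(range(6)) + [a]
for x in dom:
    for y in dom:
        assert add(x, S(y)) == S(add(x, y))
        assert mul(x, S(y)) == add(mul(x, y), x)
```

Since the axioms hold while the two universal sentences fail, the interpretation makes Q true without making those sentences true, which is what the exercise requires.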
E7.21. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples of your own construction (iii) where the concept
applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.

a. The difference between satisfaction and truth.

b. The definitions SF(r) and SF(∀).

c. The way your reasoning works. For this you can provide an example of some
reasonably complex but clean bits of reasoning, (a) for validity, and (b) for invalidity. Then explain to Hannah how your reasoning works. That is, provide
her a commentary on what you have done, so that she could understand.

Chapter 8

Mathematical Induction
In chapter 1 (p. 11), we distinguished deductive from inductive arguments. As described there, in a deductive argument, conclusions are supposed to be guaranteed
by premises. In an inductive argument, conclusions are merely made probable or
plausible. Typical cases of inductive arguments involve generalization from cases.
Thus, for example, one might reason from the premise that every crow we have ever
seen is black, to the conclusion that all crows are black. The premise does not guarantee the conclusion, but it does give it some probability or plausibility. Similarly,
mathematical induction involves a sort of generalization. But mathematical induction
is a deductive argument form. The conclusion of a valid argument by mathematical
induction is guaranteed by its premises. So mathematical induction is to be distinguished from the sort of induction described in chapter 1. In this chapter, I begin with
a general characterization of mathematical induction, and turn to a series of examples.
Some of the examples will matter for things to come. But the primary aim is to gain
facility with this crucial argument form.

8.1 General Characterization

Arguments by mathematical induction apply to objects that are arranged in series.
The conclusion of an argument by mathematical induction is that all the elements
of the series are of a certain sort. For cases with which we will be concerned, the
elements of a series are ordered by integers: there is a first member, a second member,
and so forth (we may thus think of a series as a function from the integers to the
members). Consider, for example, a series of dominoes.

CHAPTER 8. MATHEMATICAL INDUCTION
d0 d1 d2 d3 d4 d5 d6 d7 d8 . . .
This series is ordered spatially. d0 is the first domino, d1 the second, and so forth.
Alternatively, we might think of the series as defined by a function D from the natural
numbers to the dominoes, with D(0) = d0, D(1) = d1, and so forth, where this
ordering is merely exhibited by the spatial arrangement.
Suppose we are interested in showing that all the dominoes fall, and consider the
following two claims:
(i) the first domino falls
(ii) for any domino, if all the ones prior to it fall, then it falls.
By itself, (i) does not tell us that all the dominoes fall. For all we know, there might be
some flaw in the series so that for some j < k, dj falls, but dk does not. Perhaps the
space between dk-1 and dk is too large. In this case, under ordinary circumstances,
neither dk nor any of the dominoes after it fall. (ii) tells us that there is no such flaw
in the series: if all the dominoes up to dk fall, then dk falls. But (ii) is not, by
itself, sufficient for the conclusion that all the dominoes fall. From the fact that the
dominoes are so arranged, it does not follow that any of the dominoes fall. Perhaps
you do the arrangement, and are so impressed with your work, that you leave the
setup forever as a memorial!
However, given both (i) and (ii), it is safe to conclude that all the dominoes fall.
There are a couple of ways to see this. First, we can reason from one domino to the
next. By (i), the first domino falls. This means that all the dominoes prior to the
second domino fall. So by (ii), the second falls. But this means all the dominoes
prior to the third fall. So by (ii), the third falls. So all the dominoes prior to the fourth
fall. And so forth. Thus we reach the conclusion that each domino falls. So all the
dominoes fall. Here is another way to make the point: Suppose not every member of
the series falls. Then there must be some least member da of the series which does
not fall. da cannot be the first member of the series, since by (i) the first member of
the series falls. And since da is the least member of the series which does not fall, all
the members of the series prior to it do fall! So by (ii), da falls. This is impossible;
reject the assumption: every member of the series falls.
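The first way of seeing the point, reasoning from one domino to the next, can be played out concretely for a finite run of dominoes. The Python sketch below is my illustration, not part of the text: it encodes (i) directly and uses an analogue of (ii) to propagate falling down the line.

```python
# Finite model of the domino reasoning: from (i) "d0 falls" and (ii) "if all
# dominoes prior to dk fall, then dk falls", every domino ends up fallen.
N = 9  # dominoes d0 .. d8, as in the picture

def step_holds(k, fallen):
    # (ii) for a flawless arrangement: dk falls whenever d0 .. d(k-1) have
    # all fallen; a "flaw" in the series would make this False for some k.
    return all(fallen[:k])

fallen = [False] * N
fallen[0] = True              # (i): you push the first domino
for k in range(1, N):
    if step_holds(k, fallen):
        fallen[k] = True

assert all(fallen)            # every domino falls
```

If step_holds were made to fail at some k, that domino and all later ones would stay standing, which is exactly the "flaw in the series" the text describes.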
Suppose we have some reason for accepting (i), that the first domino falls: perhaps you push it with your finger. Suppose further that we have some special
reason for moving from the premise that all the dominoes prior to an arbitrary dk


fall, to the conclusion that dk falls: perhaps the setup only gets better and better
as the series continues, and the builder gains experience. Then we might attempt to
show that all the dominoes fall as follows.

(A)
    a. d0 falls                                                prem (d0 particular)
       b. all the dominoes prior to dk fall                    assp (dk arbitrary)
          . . .
       c. dk falls                                             (special reason)
    d. if all the dominoes prior to dk fall, then dk falls     b-c cnd
    e. for any domino, if all the dominoes prior to it fall, then it falls   d unv
    f. every domino falls                                      a,e induction

(a) is just (i); d0 falls because you push it. (e) is (ii); to get this, we reason from the
assumption at (b), and the special reason, to the conclusion that dk falls, and then
move to (e) by cnd and unv. The conclusion that every domino falls follows from (a)
and (e) by mathematical induction. This is in fact how we reason. However, all the
moves are automatic once we complete the subderivation: the moves by cnd to get
(d), by unv to get (e), and by mathematical induction to get (f) are automatic once we
reach (c). In practice, then, those steps are usually left implicit and omitted. Having
gotten (a) and, from the assumption that all the dominoes prior to dk fall, reached
the conclusion that dk falls, we move directly to the conclusion that all the dominoes
fall.
Thus we arrive at a general form for arguments by mathematical induction. Suppose we want to show that P holds for each member of some series. Then an argument from mathematical induction goes as follows.
(B) Basis: Show that P holds for the first member of the series.
Assp: Assume, for arbitrary k, that P holds for every member of the series
prior to the kth member.
Show: Show that P holds for the kth member.
Indct: Conclude that P holds for every member of the series.
In the domino case, for the basis we show (i). At the assp (assumption) step, we
assume that all the dominoes prior to dk fall. In the show step, we would complete
the subderivation with the conclusion that domino dk falls. From this, moves by cnd,
to the conditional statement, and by unv to its generalization, are omitted, and we
move directly to the conclusion that all the dominoes fall. Notice that the assumption
is nothing more than a standard assumption for the (suppressed) application of cnd.
Perhaps the "special reason" is too special, and it is not clear how we might
generally reason from the assumption that some P holds for every member of a series


prior to the kth, to the conclusion that it holds for the kth. For our purposes, the key is
that such reasoning is possible in contexts characterized by recursive definitions. As
we have seen, a recursive definition always moves from the parts to the whole. There
are some basic elements, and some rules for combining elements to form further
elements. In general, it is a fallacy (the fallacy of composition) to move directly from
characteristics of parts, to characteristics of a whole. From the fact that the bricks are
small, it does not follow that a building made from them is small. But there are cases
where facts about parts, together with the way they are arranged, are sufficient for
conclusions about wholes. If the bricks are hard, it may be that the building is hard.
And similarly with recursive definitions.
To see how this works, let us turn to another example. We show that every term of
a certain language has an odd number of symbols. Recall that the recursive definition
TR tells us how terms are formed from others. Variables and constants are terms;
and if hⁿ is an n-place function symbol and t1 . . . tn are n terms, then hⁿt1 . . . tn is
a term. On tree diagrams, across the top are variables and constants: terms with
no function symbols; in the next row are terms constructed out of them; and for any
n > 1, terms in row n are constructed out of terms from earlier rows. Let this series
of rows be our series for mathematical induction. Every term must appear in some
row of a tree. We consider a series whose first element consists of terms which appear
in the top row of a tree, whose second element consists of terms which appear in the
second, and so forth. Let Lt be a language with variables and constants as usual, but
just two function symbols, a two-place function symbol f² and a four-place function
symbol g⁴. We show, by induction on the rows in which terms appear, that the total
number of symbols in any term t of this language is odd. Here is the argument:
(C) Basis: If t appears in a top row (row zero), then it is a variable or a constant; in
this case, t consists of just one variable or constant symbol; so the total
number of symbols in t is odd.
Assp: For any i such that 0 ≤ i < k, the total number of symbols in any t
appearing in row i is odd.
Show: The total number of symbols in any t appearing in row k is odd.
If t appears in row k, then it is of the form f²t1t2 or g⁴t1t2t3t4 where
t1 . . . t4 appear in rows prior to k. So there are two cases.
(f) Suppose t is f²t1t2. Let a be the total number of symbols in t1 and b
be the total number of symbols in t2; then the total number of symbols
in t is (a + b) + 1: all the symbols in t1, all the symbols in t2, plus
the symbol f². Since t1 and t2 each appear in rows prior to k, by
assumption, both a and b are odd. But the sum of two odds is an even,


and the sum of an even plus one is odd; so (a + b) + 1 is odd; so the
total number of symbols in t is odd.
(g) Suppose t is g⁴t1t2t3t4. Let a be the total number of symbols in t1, b
be the total number of symbols in t2, c be the total number of symbols
in t3, and d be the total number of symbols in t4; then the total number
of symbols in t is (a + b) + (c + d) + 1. Since t1 . . . t4 each appear
in rows prior to k, by assumption a, b, c, and d are all odd. But the sum
of two odds is an even; the sum of two evens is an even; and the sum of
an even plus one is odd; so (a + b) + (c + d) + 1 is odd; so the total
number of symbols in t is odd.
In either case, then, if t appears in row k, the total number of symbols
in t is odd.
Indct: For any term t in Lt , the total number of symbols in t is odd.
Notice that this argument is entirely structured by the recursive definition for terms.
The definition TR includes clauses (v) and (c) for terms that appear in the top row.
In the basis stage, we show that all such terms consist of an odd number of symbols.
Then, for (suppressed) application of cnd and unv we assume that all terms prior to
an arbitrary row k have an odd number of symbols. The show line simply announces
what we plan to do. The sentence after derives directly from clause (f) of TR: In this
case, there are just two ways to construct terms out of other terms. If f²t1t2 appears
in row k, t1 and t2 must appear in previous rows. So, by the assumption, they have
an odd number of symbols. And similarly for g⁴t1t2t3t4. In the reasoning for the
show stage we demonstrate that, either way, if the total number of symbols in the
parts are odd, then the total number of symbols in the whole is odd. It follows that
every term in this language Lt consists of an odd number of symbols.
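Though no substitute for the induction, the claim can be spot-checked mechanically. The following Python sketch is my illustration, with terms of Lt modeled as nested tuples (that representation is an assumption of the encoding, not the text's): randomly built terms over the two-place f and four-place g always come out with an odd symbol count.

```python
import random

random.seed(0)

def random_term(depth):
    # a variable or constant, or f applied to 2 terms, or g applied to 4
    if depth == 0 or random.random() < 0.3:
        return 'x'
    if random.random() < 0.5:
        return ('f',) + tuple(random_term(depth - 1) for _ in range(2))
    return ('g',) + tuple(random_term(depth - 1) for _ in range(4))

def symbols(t):
    # one symbol for a variable/constant; else the function symbol
    # plus all the symbols of the operands
    if t == 'x':
        return 1
    return 1 + sum(symbols(s) for s in t[1:])

for _ in range(1000):
    assert symbols(random_term(4)) % 2 == 1   # always odd
```

The counting in symbols mirrors the show step: 1 + odd + odd, or 1 + four odds, is odd either way.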
Returning to the domino analogy, the basis is like (i), where we show that the
first member of the series falls: terms appearing in the top row always have an odd
number of symbols. Then, for arbitrary k, we assume that all the members of the
series prior to the kth fall: that terms appearing in rows prior to the kth always
have an odd number of symbols. We then reason that, given this, the kth member
falls: terms constructed out of others which, by assumption, have an odd number
of symbols, must themselves have an odd number of symbols. From this, (ii) follows
by cnd and unv, and the general conclusion by mathematical induction.
The argument works for the same reasons as before: Insofar as a variable or constant is regarded as a single element of the vocabulary, it is automatic that variables
and constants have an odd number of symbols. Given this, where function symbols


are also regarded as single elements of the vocabulary, expressions in the next row
of a tree, as f²xc or g⁴xycz, must have an odd number of symbols: one function symbol, plus two or four variables and constants. But if terms from the first
and second rows of a tree have an odd number of symbols, by reasoning from the
show step, terms constructed out of them must have an odd number of symbols as
well. And so forth. Alternatively, suppose some terms in Lt have an even number
of symbols; then there must be a least row a where such terms appear. From the
basis, this row a is not the first. But since a is the least row at which terms have an
even number of symbols, terms at all the earlier rows must have an odd number of
symbols. But then, by reasoning as in the show step, terms at row a have an odd
number of symbols. Reject the assumption: no terms in Lt have an even number of
symbols.
In practice, for this sort of case, it is common to reason, not based on the row
in which a term appears, but on the number of function symbols in the term. This
differs in detail, but not in effect, from what we have done. In our trees, it may be
that a term in the third row, combining one from the first and one from the second,
has two function symbols, as f²xf²ab, or it may be that a term in the third row,
combining ones from the second, has three function symbols, as f²f²xyf²ab, or
five, as g⁴f²xyf²abf²zwf²cd, and so forth. However, it remains that the total
number of function symbols in each of some terms s1 . . . sn is fewer than the total
number of function symbols in hⁿs1 . . . sn; for the latter includes all the function
symbols in s1 . . . sn plus hⁿ. Thus we may consider the series: terms with no function symbols, terms with one function symbol, and so forth; and be sure that for
any n > 0, terms at stage n are constructed of ones before. Here is a sketch of the
argument modified along these lines.
(D) Basis: If t has no function symbols, then it is a variable or a constant; in this
case, t consists of just the one variable or constant symbol; so the total
number of symbols in t is odd.
Assp: For any i such that 0 ≤ i < k, the total number of symbols in any t with i
function symbols is odd.
Show: The total number of symbols in any t with k function symbols is odd.
If t has k function symbols, then it is of the form f²t1t2 or g⁴t1t2t3t4
where t1 . . . t4 have less than k function symbols. So there are two
cases.
(f) Suppose t is f²t1t2. [As before. . . ] the total number of symbols in t
is odd.


(g) Suppose t is g⁴t1t2t3t4. [As before. . . ] the total number of symbols
in t is odd.
In either case, then, if t has k function symbols, then the total number
of symbols in t is odd.
Indct: For any term t in Lt , the total number of symbols in t is odd.
Here is the key point: If f²t1t2 has k function symbols, the total number of function
symbols in t1 and t2 combined has to be k - 1; and since the number of function
symbols in t1 and in t2 must individually be less than or equal to the combined
total, the number of function symbols in t1 and the number of function symbols in
t2 must also be less than k. And similarly for g⁴t1t2t3t4. That is why the inductive
assumption applies to t1 . . . t4, and reasoning in the cases can proceed as before.
If you find this confusing, you might picture our trees regimented so that rows
correspond to the number of function symbols. Then this reasoning is no different
than before.

8.2 Preliminary Examples

Let us turn now to a series of examples, meant to illustrate mathematical induction in
a variety of contexts. Some of the examples have to do with our subject matter. But
some do not. For now, the primary aim is to gain facility with the argument form. As
you work through the cases, think about why the induction works. At first, examples
may be difficult to follow. But they should be more clear by the end.

8.2.1 Case

First, a case where the conclusion may seem too obvious even to merit argument.
We show that any (official) formula P of a quantificational language has an equal
number of left and right parentheses. Again, the relevant definition FR is recursive.
Its basis clause specifies formulas without operator symbols; these occur across the
top row of our trees. FR then includes clauses which say how complex formulas
are constructed out of those that are less complex. We take as our series, formulas
with no operator symbols, formulas with one operator symbol, and so forth; thus the
argument is by induction on the number of operator symbols. As in the above case
with terms, this orders formulas so that we can use facts from the recursive definition
in our reasoning. Let us say L(P) is the number of left parentheses in P, and R(P)
is the number of right parentheses in P. Our goal is to show that for any formula P,
L(P) = R(P).


Induction Schemes

Schemes for mathematical induction sometimes appear in different forms. But for
our purposes, these amount to the same thing. Suppose a series of objects, and
consider the following.

I.   (a) Show that P holds for the first member
     (b) Assume that P holds for members < k
     (c) Show P holds for member k
     (d) Conclude P holds for every member

     This is the form as we have seen it.

II.  (a) Show that P holds for the first member
     (b) Assume that P holds for members ≤ j
     (c) Show P holds for member j + 1
     (d) Conclude P holds for every member

     This comes to the same thing if we think of j as k - 1. Then P holds
     for members ≤ j just in case it holds for members < k.

III. (a) Show that Q holds for the first member
     (b) Assume that Q holds for member j
     (c) Show Q holds for member j + 1
     (d) Conclude Q holds for every member

     This comes to the same thing if we think of j as k - 1 and Q as the
     proposition that P holds for members ≤ j.

And similarly the other forms follow from ours. So, though in a given context one
form may be more convenient than another, the forms are equivalent, or at least
they are equivalent for sequences corresponding to the natural numbers.

Where ω is the first infinite ordinal, there is no ordinal α such that α + 1 = ω.
So for a sequence ordered by these ordinals, our assumption that P holds for all
the members < k might hold though there is no j = k - 1 as in the second and
third cases. So the equivalence between the forms breaks down for series that are
so ordered. We do not need to worry about infinite ordinals, as our concerns will
be restricted to series ordered by the integers.

Our form of induction (I) is known as Strong Induction, for its relatively strong
inductive assumption, and the third as Weak. The second is a sometimes-encountered
blend of the other two.


(E) Basis: If P has no operator symbols, then P is a sentence letter S or an atomic
        Rⁿt1 . . . tn for some relation symbol Rⁿ and terms t1 . . . tn. In either
        case, P has no parentheses. So L(P) = 0 and R(P) = 0; so L(P) =
        R(P).
    Assp: For any i such that 0 ≤ i < k, if P has i operator symbols, then
        L(P) = R(P).
    Show: For every P with k operator symbols, L(P) = R(P).
        If P has k operator symbols, then it is of the form ¬A, (A → B), or
        ∀xA for variable x and formulas A and B with < k operator symbols.
    (¬) Suppose P is ¬A. Then L(P) = L(A) and R(P) = R(A). But by
        assumption L(A) = R(A); so L(P) = R(P).
    (→) Suppose P is (A → B). Then L(P) = L(A) + L(B) + 1 and R(P) =
        R(A) + R(B) + 1. But by assumption L(A) = R(A), and L(B) =
        R(B); so the sums are the same, and L(P) = R(P).
    (∀) Suppose P is ∀xA. Then as in the case for (¬), L(P) = L(A) and
        R(P) = R(A). But by assumption L(A) = R(A); so L(P) = R(P).
        If P has k operator symbols, L(P) = R(P).
    Indct: For any formula P, L(P) = R(P).
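The conclusion can be spot-checked mechanically. This Python sketch is my illustration, not part of the text, and the ASCII stand-ins '~', 'Vx', and '->' for the operators are assumptions of the encoding: randomly generated official formulas always balance their parentheses.

```python
import random

random.seed(1)

def random_formula(depth):
    # an atomic, a negation, a universal, or a parenthesized conditional
    if depth == 0:
        return 'S'
    op = random.choice(['~', 'V', '->'])
    if op == '~':
        return '~' + random_formula(depth - 1)
    if op == 'V':
        return 'Vx' + random_formula(depth - 1)
    return '(' + random_formula(depth - 1) + '->' + random_formula(depth - 1) + ')'

for _ in range(1000):
    P = random_formula(5)
    assert P.count('(') == P.count(')')   # L(P) = R(P)
```

As in the induction, only the conditional clause introduces parentheses, and it introduces them in a pair.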
No doubt, you already knew that the numbers of left and right parentheses match.
But, presumably, you knew it by reasoning of this very sort. Atomic formulas have
no parentheses; after that, parentheses are always added in pairs; so, no matter how
complex a formula may be, there is never a left parenthesis without a right to match.
Reasoning by mathematical induction may thus seem perfectly natural! All we have
done is to make explicit the various stages that are required to reach the conclusion.
But it is important to make the stages explicit, for in many cases results are not so
obvious. Here are some closely related problems.
*E8.1. For any (official) formula P of a quantificational language, where A(P) is
the number of its atomic formulas, and C(P) is the number of its arrow symbols, show that A(P) = C(P) + 1. Hint: Argue by induction on the number
of operator symbols in P. For the basis, when P has no operator symbols, it
is an atomic, so that A(P) = 1 and C(P) = 0. Then, as above, you will have
cases for ¬, →, and ∀. The hardest case is when P is of the form (A → B).
E8.2. Consider now expressions which allow abbreviations (∨), (∧), (↔), and (∃).
Where A(P) is the number of atomic formulas in P and B(P) is the number
of its binary operators, show that A(P) = B(P) + 1. Hint: now you have
seven cases: (¬), (→), and (∀) as before, but also cases for (∨), (∧), (↔), and
(∃). This suggests the beauty of reasoning just about the minimal language!

8.2.2 Case

Mathematical induction is so-called because many applications occur in mathematics. It will be helpful to have a couple of examples of this sort. These should be
illuminating, at least if you do not get bogged down in the details of the arithmetic! The series of odd integers is 1, 3, 5, 7 . . . where the nth odd integer is 2n - 1.
(The nth even integer is 2n; to find the nth odd, go to the even just above it, and
come down one.) Let S(n) be the sum of the first n odd integers. So S(1) = 1,
S(2) = 1 + 3 = 4, S(3) = 1 + 3 + 5 = 9, S(4) = 1 + 3 + 5 + 7 = 16 and, in
general,

S(n) = 1 + 3 + . . . + (2n - 1)

We consider the series of these sums, S(1), S(2), and so forth, and show that, for any
n ≥ 1, S(n) = n². The key to our argument is the realization that the sum of all the
odd numbers up to the nth odd number is equal to the sum of all the odd numbers
up to the (n - 1)th odd number plus the nth odd number. That is, since the nth odd
number is 2n - 1, S(n) = S(n - 1) + (2n - 1). We argue by induction on the series
of sums.
(F) Basis: If n = 1 then S(n) = 1 and n² = 1; so S(n) = n².
    Assp: For any i, 1 ≤ i < k, S(i) = i².
    Show: S(k) = k². As above, S(k) = S(k - 1) + (2k - 1). But since k - 1 < k,
        by the inductive assumption, S(k - 1) = (k - 1)²; so S(k) = (k - 1)² +
        (2k - 1) = (k² - 2k + 1) + (2k - 1) = k². So S(k) = k².
    Indct: For any n, S(n) = n².
As is often the case in mathematical arguments, the kth element is completely determined by the one before; so we do not need to consider any more than this one
way that elements at stage k are determined by those at earlier stages.¹ Surely this is
an interesting result: though you might have wondered about it after testing initial
cases, we have a demonstration that it holds for every n.
¹ Thus arguments by induction in arithmetic and geometry are often conveniently cast according
to the third, weak, induction scheme from the induction schemes box on p. 380. But, as above, our
standard scheme applies as well.
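Both the identity and the recurrence driving the show step are easy to confirm by direct computation. A small Python check (my illustration, not part of the text):

```python
# S(n) = 1 + 3 + ... + (2n - 1) computed directly, checked against n^2 and
# against the recurrence S(n) = S(n-1) + (2n - 1) used in the show step.
def S(n):
    return sum(2 * i - 1 for i in range(1, n + 1))

for n in range(1, 200):
    assert S(n) == n ** 2
    if n > 1:
        assert S(n) == S(n - 1) + (2 * n - 1)
```

Of course, no amount of testing initial cases proves the claim for every n; that is what the induction supplies.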


*E8.3. Let S(n) be the sum of the first n even integers; that is, S(n) = 2 + 4 + . . . + 2n.
So S(1) = 2, S(2) = 2 + 4 = 6, S(3) = 2 + 4 + 6 = 12, and so forth. Show,
by mathematical induction, that for any n ≥ 1, S(n) = n(n + 1).

E8.4. Let S(n) be the sum of the first n integers; that is, S(n) = 1 + 2 + 3 + . . . + n.
So S(1) = 1, S(2) = 1 + 2 = 3, S(3) = 1 + 2 + 3 = 6, and so forth. Show,
by mathematical induction, that for any n ≥ 1, S(n) = n(n + 1)/2.

8.2.3 Case

Now a case from geometry. Where a polygon is convex iff each of its interior angles
is less than 180°, we show that the sum of the interior angles in any convex polygon with n sides, S(P) = (n - 2)180°. Let us consider polygons with three sides,
polygons with four sides, polygons with five sides, and so forth. The key is that any
n-sided polygon may be regarded as one with n - 1 sides combined with a triangle.
Thus given an n-sided polygon P, construct a line connecting opposite ends of a pair
of adjacent sides. [A diagram here shows the result: a triangle Q and a figure R with
n - 1 sides, where the divided angles of P satisfy a = c + d and b = e + f.]
The sum of the interior angles of P is the same as the sum of the interior angles of
Q plus the sum of the interior angles of R. Once we realize this, our argument by
mathematical induction is straightforward. For any convex n-sided polygon P, we
show that the sum of the interior angles of P, S(P) = (n - 2)180°. The argument is
by induction on the number n of sides of the polygon.
(G) Basis: If n = 3, then P is a triangle; but by reasoning as follows. [A diagram
        shows a triangle between parallel horizontal lines, with interior angles
        a, b, c and auxiliary angles d, e, f.] By definition, a + f = 180°; but
        b = d and, if the horizontal lines are parallel, c = e and d + e = f; so
        a + (b + c) = a + (d + e) = a + f = 180°. So the sum of the angles in
        a triangle is 180°. So S(P) = 180. But (3 - 2)180 = 180. So S(P) =
        (n - 2)180.


    Assp: For any i, 3 ≤ i < k, every P with i sides has S(P) = (i - 2)180.
    Show: For every P with k sides, S(P) = (k - 2)180.
        If P has k sides, then for some triangle Q and polygon R with k - 1 sides,
        S(P) = S(Q) + S(R). Q is a triangle, so S(Q) = 180. Since k - 1 < k,
        the inductive assumption applies to R; so S(R) = ((k - 1) - 2)180. So
        S(P) = 180 + ((k - 1) - 2)180 = (1 + k - 1 - 2)180 = (k - 2)180.
        So S(P) = (k - 2)180.
    Indct: For any n-sided polygon P, S(P) = (n - 2)180.

Perhaps reasoning in the basis brings back good (or bad) memories of high school
geometry! But you do not have to worry about that. In this case, the sum of the
angles of a figure with n sides is completely determined once we are given the sum
of the angles for a figure with n - 1 sides. So we do not need to consider any more
than this one way that elements at stage k are determined by those at earlier stages.
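That determination of stage k by stage k - 1 is exactly a recurrence, and unwinding it recovers the closed form. A Python check of this (my illustration, not part of the text):

```python
# The recurrence behind (G): an n-sided convex polygon splits into a triangle
# (angle sum 180) plus an (n-1)-sided polygon, so S(n) = 180 + S(n-1) with
# S(3) = 180; this agrees with the closed form (n - 2)180 everywhere.
def angle_sum(n):
    if n == 3:
        return 180
    return 180 + angle_sum(n - 1)

for n in range(3, 100):
    assert angle_sum(n) == (n - 2) * 180
```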
It is worth noting, however, that we do not have to see a k-sided polygon as composed of a triangle and a figure with k - 1 sides. For consider any diagonal of a
k-sided polygon; it divides the figure into two, each with < k sides. So the inductive
assumption applies to each figure. So we might reason about the angles of a k-sided
figure as the sum of angles of these arbitrary parts, as in the exercise that follows.
*E8.5. Using the fact that any diagonal of a k-sided polygon divides it into two
polygons with < k sides, show by mathematical induction that the sum of the
interior angles of any convex polygon P, S(P) = (n - 2)180. Hint: If a figure
has k sides, then for some a such that both a and k - a are at least two (> 1),
a diagonal divides it into a figure Q with a + 1 sides (a sides from P, plus the
diagonal), and a figure R with (k - a) + 1 sides (the remaining sides from P,
plus the diagonal). From a > 1, k + a > k + 1 so that k > k - a + 1; and
from k - a > 1, k > a + 1. So the inductive assumption applies to both Q
and R.

E8.6. Where P is a convex polygon with n sides, and D(P) is the number of its
diagonals (where a diagonal is a line from one vertex to another that is not
a side), show by mathematical induction that any P with n ≥ 3 sides is such
that D(P) = n(n - 3)/2.


Hint: When you add a triangle to a convex figure to form a new convex figure
with k sides, the diagonals are all the diagonals you had before, plus the base
of the triangle, plus k - 3 lines from vertices not belonging to the triangle to
the apex of the triangle. Also, in case your algebra is rusty, (k - 1)(k - 4) =
k² - 5k + 4.

8.2.4 Case

Finally we take up a couple of cases of real interest for our purposes, though
we limit consideration just to sentential forms. We have seen cases structured by
the recursive definitions TR and FR. Here is one that uses ST. Say a formula is in
normal form iff its only operators are ∨, ∧, and ¬, and the only instances of ¬ are
immediately prefixed to atomics (of course, any normal form is an abbreviation of a
formula whose only operators are → and ¬). Where P is a normal form, let P′ be
like P except that ∨ and ∧ are interchanged and, for a sentence letter S, S and ¬S
are interchanged. Thus, for example, if P is an atomic A, then P′ is ¬A; if P is
(A ∨ (¬B ∧ C)), then P′ is (¬A ∧ (B ∨ ¬C)). We show that if P is in normal
form, then I[¬P] = T iff I[P′] = T. Thus, for the case we have just seen,

I[¬(A ∨ (¬B ∧ C))] = T    iff    I[(¬A ∧ (B ∨ ¬C))] = T

So the result works like a generalized semantic version of DeM in combination with
DN: When you push a negation into a normal form, ∧ flips to ∨, ∨ flips to ∧, and
atomics switch between S and ¬S.
Our argument is by induction on the number of operators in a formula P. Let P
be any normal form.
(H) Basis: If P has no operators, then P is an atomic S; so ¬P is ¬S and
        P′ is ¬S; so I[¬P] = T iff I[P′] = T.
    Assp: For any i, 0 ≤ i < k, any P in normal form with i operator symbols is
        such that I[¬P] = T iff I[P′] = T.
    Show: Any P in normal form with k operator symbols is such that I[¬P] = T
        iff I[P′] = T.
        If P is in normal form and has k operator symbols, then it is of the form
        ¬S, A ∨ B, or A ∧ B where S is atomic and A and B are in normal
        form with less than k operator symbols. So there are three cases.
    (¬) Suppose P is ¬S. Then ¬P is ¬¬S, and P′ is S. So I[¬P] = T iff
        I[¬¬S] = T; by ST(¬) iff I[¬S] ≠ T; by ST(¬) again iff I[S] = T; iff
        I[P′] = T. So I[¬P] = T iff I[P′] = T.

CHAPTER 8. MATHEMATICAL INDUCTION

386

(_) Suppose P is A _ B. Then P is .A _ B/, and P 0 is A0 ^ B 0 . So


IP D T iff I.A _ B/ D T; by ST() iff IA _ B T; by ST0 (_)
iff IA T and IB T; by ST() iff IA D T and IB D T; by
assumption iff IA0 D T and IB 0 D T; by ST0 (^) iff IA0 ^ B 0 D T;
iff IP 0 D T. So IP D T iff IP 0 D T.
(^) Homework.
Every P with k operator symbols is such that IP D T iff IP 0 D T.
Indct: Every P is such that IP D T iff IP 0 D T.
For the show step, it is important that A and B are in normal form. If they were
not, then the inductive assumption, which applies only to formulas in normal form,
would not apply to them. Similarly, it is important that A and B have < k operators.
If they did not, then the inductive assumption, which applies only to formulas with
< k operators, would not apply to them. The pattern here is typical: In the cases, we
break down to parts to which the assumption applies, apply the assumption, and put
the resultant parts back together. In the second case, we assert that if P is A_B, then
P 0 is A0 ^ B 0 . Here A and B may be complex. We do the conversion on P iff we
do the conversion on its main operator, and then do the conversion on its parts. And
similarly for (^). It is this which enables us to feed into the inductive assumption.
Notice that it is convenient to cast reasoning in the collapsed biconditional style.
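The prime transform and the equivalence it secures can be checked mechanically.
The following is a minimal sketch, not from the text: formulas are encoded as
nested tuples, and the helper names `prime` and `value` are hypothetical. The
brute-force loop confirms, for the example above, that I[¬P] = T iff I[P′] = T
on every interpretation of its sentence letters.

```python
# A minimal sketch (not from the text): normal forms as nested tuples,
# with 'not' only immediately prefixed to atomics (strings).
from itertools import product

def prime(p):
    """The ' transform: swap 'and'/'or'; swap S with ('not', S) at atomics."""
    if isinstance(p, str):                 # atomic S  ->  ~S
        return ('not', p)
    op, *rest = p
    if op == 'not':                        # ~S  ->  S
        return rest[0]
    dual = 'and' if op == 'or' else 'or'
    return (dual, prime(rest[0]), prime(rest[1]))

def value(p, I):
    """Truth value of p on interpretation I, a dict from atomics to booleans."""
    if isinstance(p, str):
        return I[p]
    op, *rest = p
    if op == 'not':
        return not value(rest[0], I)
    vals = [value(r, I) for r in rest]
    return all(vals) if op == 'and' else any(vals)

# P = (A v (~B & C));  P' = (~A & (B v ~C));  I[~P] = T iff I[P'] = T
P = ('or', 'A', ('and', ('not', 'B'), 'C'))
assert prime(P) == ('and', ('not', 'A'), ('or', 'B', ('not', 'C')))
for vals in product([True, False], repeat=3):
    I = dict(zip(['A', 'B', 'C'], vals))
    assert value(prime(P), I) == (not value(P, I))
```

Of course the exhaustive check covers only one formula; the induction (H) is
what establishes the result for every normal form at once.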
Where P is any form whose operators are ¬, ∨, ∧, or →, we now show that P is
equivalent to a normal form. Consider a transform P* defined as follows: For
atomic S, S* = S; for arbitrary formulas A and B with just those operators,
(A ∨ B)* = (A* ∨ B*), (A ∧ B)* = (A* ∧ B*), and with prime defined as above,
(A → B)* = (A*′ ∨ B*), and (¬A)* = A*′. To see how this works, consider how
you would construct P* on a tree.

(I) [Figure: two trees. The left tree builds A → ¬(B ∨ A) from its parts:
    A, B, and A at the tips; then B ∨ A; then ¬(B ∨ A); then A → ¬(B ∨ A).
    For any P on the left, the corresponding P* appears at the matching node
    on the right: A, B, and A; then B ∨ A; then (¬B ∧ ¬A); and at the root,
    ¬A ∨ (¬B ∧ ¬A).]

For the last line, A* is A and A*′ is ¬A. The star-transform, and the
right-hand tree, works very much like unabbreviating from subsection 2.1.3.
The conversion of a complex formula depends on the conversion of its parts.
So starting with the parts, we construct the star-transform of the whole, one
component at a time. Observe that, at each stage of the right-hand tree, the
result is a normal form.

We show by mathematical induction on the number of operators in P that P*
must be a normal form and that I[P] = T iff I[P*] = T. For the argument it
will be important not only to use the inductive assumption, but also the
result from above that for any P in normal form, I[¬P] = T iff I[P′] = T. In
order to apply this result, it will be crucial that every P* is in normal
form! Let P be any formula with just operators ¬, ∨, ∧ and →. Here is an
outline of the argument, with parts left as homework.

T8.1. For any P whose operators are ¬, ∨, ∧ and →, P* is in normal form and
I[P] = T iff I[P*] = T.

    Basis: If P is an atomic S, then P* = S. But an atomic S is in normal
      form; so P* is in normal form; and since they are the same, I[P] = T
      iff I[P*] = T.

    Assp: For any i, 0 ≤ i < k, if P has i operator symbols, then P* is in
      normal form and I[P] = T iff I[P*] = T.

    Show: If P has k operator symbols, then P* is in normal form and I[P] = T
      iff I[P*] = T.

      If P has k operator symbols, then P is of the form ¬A, A ∨ B, A ∧ B,
      or A → B for formulas A and B with less than k operator symbols.

      (¬) Suppose P is ¬A. Then P* = A*′. By assumption A* is in normal
        form; so since the prime operation converts a normal form to another
        normal form, A*′ is in normal form; so P* is in normal form.
        I[P] = T iff I[¬A] = T; by ST(¬), iff I[A] ≠ T; by assumption iff
        I[A*] ≠ T; by ST(¬) iff I[¬A*] = T; by assumption A* is in normal
        form, so by our previous result, iff I[A*′] = T; iff I[P*] = T. So
        I[P] = T iff I[P*] = T.

      (∧) Homework.
      (∨) Homework.
      (→) Homework.

      In any case, if P has k operator symbols, P* is in normal form and
      I[P] = T iff I[P*] = T.

    Indct: For any P, P* is in normal form and I[P] = T iff I[P*] = T.

The inductive assumption applies just to formulas with < k operator symbols.
So it applies just to formulas on the order of A and B. The result from
before applies to any formulas in normal form. So it applies to A*, once we
have determined that A* is in normal form.
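The star-transform is also easy to trace in code. This is a minimal sketch,
not from the text; the helper names `star`, `prime`, `is_normal`, and `value`
are hypothetical. It reproduces the tree example (I) and then checks the two
claims of T8.1 for that formula: the result is in normal form, and it agrees
with the original on every interpretation.

```python
# A minimal sketch (not from the text): the star-transform on formulas
# built from 'not', 'or', 'and', 'imp', with atomics as strings.
from itertools import product

def prime(p):                                 # the ' transform on normal forms
    if isinstance(p, str):
        return ('not', p)
    op, *rest = p
    if op == 'not':
        return rest[0]
    dual = 'and' if op == 'or' else 'or'
    return (dual, prime(rest[0]), prime(rest[1]))

def star(p):
    if isinstance(p, str):                    # S* = S
        return p
    op, *rest = p
    if op == 'not':                           # (~A)* = A*'
        return prime(star(rest[0]))
    a, b = star(rest[0]), star(rest[1])
    if op == 'imp':                           # (A -> B)* = (A*' v B*)
        return ('or', prime(a), b)
    return (op, a, b)                         # (A v B)*, (A & B)* componentwise

def is_normal(p):
    """Only 'or'/'and', with 'not' immediately prefixed to atomics."""
    if isinstance(p, str):
        return True
    op, *rest = p
    if op == 'not':
        return isinstance(rest[0], str)
    return op in ('or', 'and') and all(is_normal(r) for r in rest)

def value(p, I):
    if isinstance(p, str):
        return I[p]
    op, *rest = p
    if op == 'not':
        return not value(rest[0], I)
    if op == 'imp':
        return (not value(rest[0], I)) or value(rest[1], I)
    vals = [value(r, I) for r in rest]
    return all(vals) if op == 'and' else any(vals)

# The tree example (I): A -> ~(B v A) star-transforms to ~A v (~B & ~A).
P = ('imp', 'A', ('not', ('or', 'B', 'A')))
assert star(P) == ('or', ('not', 'A'), ('and', ('not', 'B'), ('not', 'A')))
assert is_normal(star(P))
for va, vb in product([True, False], repeat=2):
    I = {'A': va, 'B': vb}
    assert value(P, I) == value(star(P), I)
```

As in the text, `star` calls itself on the parts before assembling the whole,
mirroring the way the right-hand tree is built bottom-up.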
E8.7. Complete induction (H) to show that every P in normal form is such that
I[¬P] = T iff I[P′] = T. You should set up the whole induction with statements
for the basis, assumption, and show parts. But then you may appeal to the text
for parts already done, as the text appeals to homework. Hint: If P = (A ∧ B)
then P′ = (A′ ∨ B′).

E8.8. Complete T8.1 to show that any P with just operators ¬, ∨, ∧ and → has
a P* in normal form such that I[P] = T iff I[P*] = T. Again, you should set up
the whole induction with statements for the basis, assumption, and show parts.
But then you may appeal to the text for parts already done, as the text
appeals to homework.

E8.9. Show that for any P whose operators are ¬, ∨, ∧ and →, P* is in normal
form and ⊢ P ↔ P*. Hint: the reasoning is parallel to the semantic case, but
now about what you can derive. You will need results for both the prime and
star.

E8.10. Let I[S] = T for every sentence letter S. Where P is any sentential
formula whose only operators are →, ∧, ∨ and ↔, show by induction on the
number of operators in P that I[P] = T. Use this result to show that ⊭s ¬P.

8.2.5 Case

Here is a result like one we will seek later for the quantificational case.
It depends on the (recursive) notion of a derivation. Because of their
relative simplicity, we will focus on axiomatic derivations. If we were
working with derivations of the sort described in the diagram on p. 68, then
we could reason by induction on the row in which a formula appears. Formulas
in the top row result directly as axioms; those in the next row, from ones
before with MP; and so forth. Similarly, we could regiment diagrams and
proceed by induction on the number of applications of MP by which a formula
is derived. But our official notion of an axiomatic derivation is not this;
in an official axiomatic derivation, lines are ordered, where each line is
either an axiom, a premise, or follows from previous lines by a rule. But
this is sufficient for us to reason about one line of an axiomatic derivation
based on ones that come before; that is, we reason by induction on the line
number of a derivation. Say ⊢ADs P just in case there is a derivation of P in
the sentential fragment of AD; that is, there is a derivation using just A1,
A2, A3 and MP from definition AS. We show that if P is a theorem of ADs, then
P is true on any sentential interpretation: if ⊢ADs P then ⊨s P. Insofar as
it applies where there are no premises, this result is known as weak
soundness.

Suppose ⊢ADs P; then there is an ADs derivation ⟨A1, A2 ... An⟩ of P from no
premises, with An = P. By induction on the line numbers of this derivation,
we show that for any j, ⊨s Aj. The case when j = n is the desired result.

(J) Basis: Since ⟨A1, A2 ... An⟩ is a derivation from no premises, A1 can
      only be an instance of A1, A2 or A3.

      (A1) Say A1 is an instance of A1 and so of the form P → (Q → P).
        Suppose ⊭s P → (Q → P); then by SV, there is an I such that
        I[P → (Q → P)] ≠ T; so by ST(→), I[P] = T and I[Q → P] ≠ T; from the
        latter, by ST(→), I[Q] = T and I[P] ≠ T. This is impossible; reject
        the assumption: ⊨s P → (Q → P).

      (A2) Similarly.
      (A3) Similarly.

    Assp: For any i, 1 ≤ i < k, ⊨s Ai.

    Show: ⊨s Ak.

      Ak is either an axiom or arises from previous lines by MP. If Ak is an
      axiom then, as in the basis, ⊨s Ak. So suppose Ak arises from previous
      lines by MP. In this case, the picture is something like this:

          a. B → C
          b. B
          k. C        a,b MP

      where a, b < k and C is Ak. By assumption, ⊨s B and ⊨s B → C. Suppose
      ⊭s Ak; then ⊭s C; so by SV there is some I such that I[C] ≠ T; let J be
      a particular interpretation of this sort; then J[C] ≠ T; but by SV, for
      any I, I[B] = T and I[B → C] = T; so J[B] = T and J[B → C] = T; from
      the latter, by ST(→), J[B] ≠ T or J[C] = T; so J[C] = T. This is
      impossible; reject the assumption: ⊨s Ak.

    Indct: For any line j of the derivation, ⊨s Aj.

We might have continued as above for (A2) and (A3). Alternatively, since we
have already done the work, we might have appealed directly to T7.2s, T7.3s
and T7.4s for (A1), (A2) and (A3) respectively. From the case when Aj = P we
have ⊨s P. This result is a precursor to one we will obtain in chapter 10.
There, we will show strong soundness for the complete system AD: if Γ ⊢AD P,
then Γ ⊨ P. This tells us that our derivation system can never lead us
astray. There is no situation where a derivation moves from premises that are
true to a conclusion that is not. Still, what we have is interesting in its
own right: It is a first connection between the syntactic notions associated
with derivations, and the semantic notions of validity and truth.
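The semantic fact that drives the basis of (J) is that every axiom instance is
true on every interpretation. That can be confirmed by exhaustive truth
tables. Below is a minimal sketch, not from the text; it checks particular
instances of the three schemes, with A2 and A3 written in their usual
Mendelson-style forms, which the reader should compare against definition AS
in the text.

```python
# A minimal sketch (not from the text): truth-table verification of axiom
# instances, the semantic fact behind the basis of induction (J).
from itertools import product

def imp(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

# A1: P -> (Q -> P)
assert all(imp(p, imp(q, p))
           for p, q in product([True, False], repeat=2))

# A2 (standard form): (O -> (P -> Q)) -> ((O -> P) -> (O -> Q))
assert all(imp(imp(o, imp(p, q)), imp(imp(o, p), imp(o, q)))
           for o, p, q in product([True, False], repeat=3))

# A3 (standard form): (~Q -> ~P) -> ((~Q -> P) -> Q)
assert all(imp(imp(not q, not p), imp(imp(not q, p), q))
           for p, q in product([True, False], repeat=2))
```

A check like this covers only finitely many instances with particular
letters; the schematic claim for arbitrary P, Q, O is what the reductio
arguments in the basis of (J) establish.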
E8.11. Let A3 be like A2 for exercise E3.4 (p. 77) except that the rule MP is
stated entirely in ¬ and ∧. Then the axiom and rule schemes are,

    A3  A1. P → (P ∧ P)
        A2. (P ∧ Q) → P
        A3. (O → P) → (¬(P ∧ Q) → ¬(Q ∧ O))
        MP. From ¬(P ∧ ¬Q) and P, infer Q

Show by mathematical induction that A3 is weakly sound. That is, show that if
⊢A3 P then ⊨s P.

E8.12. Modify your above argument to show that A3 is strongly sound. That is,
modify the argument to show that if Γ ⊢A3 P then Γ ⊨s P. You may appeal to
reasoning from the previous problem where it is applicable. Hint: When
premises are allowed, Aj is either an axiom, a premise, or arises by a rule.
So there is one additional case in the basis; but that case is trivial: if
all of the premises are true, and Aj is a premise, then Aj cannot be false.
And your reasoning for the show will be modified; now the assumption gives
you Γ ⊨s B → C and Γ ⊨s B and your goal is to show Γ ⊨s C.

E8.13. Modify table T(¬) so that I[¬P] = F both when I[P] = T and when
I[P] = F; let table T(→) remain as before. Say a formula is ideal iff it is
true on every interpretation, given the revised tables. Show by mathematical
induction that every consequence in AD of MP with A1 and A2 alone is ideal.
Then by a table show that A3 is not ideal, and so that there is no derivation
of A3 from A1 and A2 alone. Hint: your induction may be a simple modification
of argument (J) from above.

E8.14. Where t is a term of Lq, let X(t) be the sum of all the superscripts
in t and Y(t) be the number of symbols in t. So, for example, if t is z, then
X(t) = 0 and Y(t) = 1; if t is g¹f²cx, then X(t) = 3 and Y(t) = 4. By
induction on the number of function symbols in t, show that for any t in Lq,
X(t) + 1 = Y(t).
E8.15. Show, by mathematical induction, that at a recent convention, the
number of logicians who shook hands an odd number of times is even. Assume
that 0 is even. Hints: Reason by induction on the number of handshakes at the
convention. At any stage n, let O(n) be the number of people who have shaken
hands an odd number of times. Your task is to show that for any n, O(n) is
even. You will want to consider cases for what happens to O(n) when (i)
someone who has already shaken hands an odd number of times shakes with
someone who has shaken an odd number of times; (ii) someone who has already
shaken hands an even number of times shakes with someone who has shaken an
even number of times; and (iii) someone who has already shaken hands an odd
number of times shakes with someone who has shaken an even number of times.
E8.16. For any n ≥ 1, given a 2ⁿ × 2ⁿ checkerboard with any one square
deleted, show by mathematical induction that it is possible to cover the
board with 3-square L-shaped pieces. For example, a 4 × 4 board with a corner
deleted could be covered as follows,

[Figure: a 4 × 4 board with one corner square deleted, tiled by five
L-shaped pieces.]

Hint: The basis is easy: a 2 × 2 board with one square missing is covered by
a single L-shaped piece. The trick is to see how an arbitrary 2ᵏ × 2ᵏ board
with one square missing can be constructed out of an L-shaped piece and
2ᵏ⁻¹ × 2ᵏ⁻¹ boards with a square missing. But this is not hard!

8.3 Further Examples (for Part III)

We continue our series of examples, moving now to quantificational cases, and
to some theorems that will be useful especially if you go on to consider
Part III.

8.3.1 Case

For variables x and v, where v does not appear in term t, it should be
obvious that (t^x_v)^v_x = t (writing t^x_v for the result of replacing every
instance of the variable x in t by v). If we replace every instance of x with
v, and then all the instances of v with x, we get back to where we started.
The restriction that v not appear in t is required to prevent putting back
instances of x where there were none in the original: (fxv)^x_v is fvv, but
then (fvv)^v_x is fxx. We demonstrate more rigorously that when v does not
appear in t, (t^x_v)^v_x = t, by a simple induction on the number of function
symbols in t. Suppose v does not appear in t.

(K) Basis: If t has no function symbols, then it is a variable or a constant.
      If it is a variable or a constant other than x, then t^x_v = t (nothing
      is replaced); and since v does not appear in t, t^v_x = t (nothing is
      replaced); so (t^x_v)^v_x = t. If t is the variable x, then t^x_v = v;
      and v^v_x = x; so (t^x_v)^v_x = x = t. In either case, then,
      (t^x_v)^v_x = t.

    Assp: For any i, 0 ≤ i < k, if t has i function symbols, and v does not
      appear in t, then (t^x_v)^v_x = t.

    Show: If t has k function symbols, and v does not appear in t, then
      (t^x_v)^v_x = t.

      If t has k function symbols, then it is of the form h^n s1 ... sn for
      some function symbol h^n and terms s1 ... sn each of which has < k
      function symbols; since v does not appear in t, it does not appear in
      any of s1 ... sn; so the inductive assumption applies to s1 ... sn; so
      by assumption (s1^x_v)^v_x = s1, and ... and (sn^x_v)^v_x = sn. But
      (t^x_v)^v_x = ((h^n s1 ... sn)^x_v)^v_x; and since replacements only
      occur within the terms, this is h^n (s1^x_v)^v_x ... (sn^x_v)^v_x; and
      by assumption this is h^n s1 ... sn = t. So (t^x_v)^v_x = t.

    Indct: For any term t, if v does not appear in t, (t^x_v)^v_x = t.

Consider a concrete application of the point that replacements occur only
within the terms. We find (f²g²axb)^x_v ^v_x if we find (g²ax)^x_v ^v_x and
(b)^x_v ^v_x and compose the whole from them, for the function symbol f²
cannot be affected by substitutions on the variables! It is also worthwhile
to note the place where it matters that v is not a variable in t: In the
basis case where t is a variable other than x, t^x_v = t insofar as nothing
is replaced; but suppose t is v; then t^v_x = x ≠ t, and we do not achieve
the desired result.
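The round-trip behavior of substitution, and its failure when v appears in
the term, can be exhibited directly. This is a minimal sketch, not from the
text: terms are nested tuples whose first element is the function symbol, and
the helper name `sub` is hypothetical.

```python
# A minimal sketch (not from the text): terms as nested tuples, e.g.
# ('f2', ('g2', 'a', 'x'), 'b') for f2 g2 a x b, with variables and
# constants as strings.

def sub(t, old, new):
    """Replace every occurrence of the variable `old` in term t by `new`."""
    if isinstance(t, str):
        return new if t == old else t
    # replacements occur only within the terms: the function symbol t[0]
    # is never touched
    return (t[0],) + tuple(sub(s, old, new) for s in t[1:])

t = ('f2', ('g2', 'a', 'x'), 'b')
assert sub(sub(t, 'x', 'v'), 'v', 'x') == t       # v does not appear in t

u = ('f2', 'x', 'v')                              # v *does* appear here
assert sub(sub(u, 'x', 'v'), 'v', 'x') == ('f2', 'x', 'x') != u
```

The second assertion is exactly the cautionary example from the text: once x
is replaced by v, the original v is indistinguishable from the new one, so
the reverse substitution puts back too many instances of x.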
This result can be extended to one with application to formulas. If v is not
free in a formula P and v is free for x in P, then (P^x_v)^v_x = P. We
require the restriction that v is not free in P for the same reason as
before: if v were free in P, we might end up with instances of x where there
are none in the original: (Rxv)^x_v is Rvv, but then (Rvv)^v_x is Rxx. And we
need the restriction that v is free for x in P so that instances of x go back
for all the instances of v when free instances of v are replaced by x:
(∀vRxv)^x_v is ∀vRvv, but then remains the same when x is substituted for
free instances of v. Here is the basic structure of the argument, with parts
left for homework.

T8.2. For variables x and v, if v is not free in a formula P and v is free
for x in P, then (P^x_v)^v_x = P.

Let P be any formula such that v is not free in P and v is free for x in P.
We show that (P^x_v)^v_x = P by induction on the number of operator symbols
in P.

    Basis: If P has no operator symbols, then it is a sentence letter S or an
      atomic of the form R^n t1 ... tn for some relation symbol R^n and terms
      t1 ... tn. (i) If P is S then it has no variables; so S^x_v = S and
      S^v_x = S; so (S^x_v)^v_x = S; so (P^x_v)^v_x = P. (ii) If P is
      R^n t1 ... tn then (P^x_v)^v_x = R^n (t1^x_v)^v_x ... (tn^x_v)^v_x; but
      since v is not free in P, v does not appear at all in P or its terms;
      so by the previous result, (t1^x_v)^v_x = t1, and ... and
      (tn^x_v)^v_x = tn; so (P^x_v)^v_x = R^n t1 ... tn; which is to say,
      (P^x_v)^v_x = P.

    Assp: For any i, 0 ≤ i < k, if P has i operator symbols, where v is not
      free in P and v is free for x in P, then (P^x_v)^v_x = P.

    Show: Any P with k operator symbols is such that if v is not free in P
      and v is free for x in P, then (P^x_v)^v_x = P.

      If P has k operator symbols, then it is of the form ¬A, (A → B), or
      ∀wA for some variable w and formulas A and B with < k operator symbols.

      (¬) Suppose P is ¬A, v is not free in P, and v is free for x in P.
        Then (P^x_v)^v_x = ((¬A)^x_v)^v_x = ¬(A^x_v)^v_x. Since v is not
        free in P, v is not free in A; and since v is free for x in P, v is
        free for x in A. So the assumption applies to A . . .

      (→) Homework.

      (∀) Suppose P is ∀wA, v is not free in P, and v is free for x in P.
        Either x is free in P or not. (i) If x is not free in P, then
        P^x_v = P; and since v is not free in P, P^v_x = P; so
        (P^x_v)^v_x = P. (ii) Suppose x is free in P = ∀wA. Then x is other
        than w; and since v is free for x in P, v is other than w; so the
        quantifier does not affect the replacements, and (P^x_v)^v_x is
        ∀w(A^x_v)^v_x. Since v is not free in P and is not w, v is not free
        in A; and since v is free for x in P, v is free for x in A. So the
        inductive assumption applies to A; so (A^x_v)^v_x = A; so
        ∀w(A^x_v)^v_x = ∀wA; but this is just to say, (P^x_v)^v_x = P.

      If P has k operator symbols, if v is not free in P and v is free for x
      in P, then (P^x_v)^v_x = P.

    Indct: For any P, if v is not free in P and v is free for x in P, then
      (P^x_v)^v_x = P.

There are a few things to note about this argument. First, again, we have to
be careful that the formulas A and B of which P is composed are in fact of
the sort to which the inductive assumption applies. In this case, the
requirement is not only that A and B have < k operator symbols, but that they
satisfy the additional assumptions, that v is not free but is free for x. It
is easy to see that this condition obtains in the cases for ¬ and →, but it
is relatively complicated in the case for ∀, where there is interaction with
another quantifier. Observe also that we cannot assume that the arbitrary
quantifier has the same variable as x or v. In fact, it is because the
variable may be different that we are able to reason the way we do. Finally,
observe that the arguments of this section for (K) and T8.2 are a linked pair
in the sense that the result of the first for terms is required for the basis
of the next for formulas. This pattern repeats in the next cases.
*E8.17. Provide a complete argument for T8.2, completing the cases for (¬)
and (→). You should set up the complete induction, but may appeal to the text
at parts that are already completed, just as the text appeals to homework.


E8.18. Consider language Lt from examples (C) and (D) that has just function
symbols f² and g⁴, and let it be developed so that it has just one constant
symbol c, and just the primitive operators from p. 330, together with ∃.
Provide a complete demonstration for expressions in this language that
(P^x_v)^v_x = P. Hint: You will need arguments parallel to (K) and then T8.2,
but structured by the symbols of this language.

8.3.2 Case

This example develops another pair of linked results which may seem obvious.
Even so, the reasoning is instructive, and we will need the results for
things to come. First,

T8.3. For any interpretation I, variable assignments d and h, and term t, if
d[x] = h[x] for every variable x in t, then I_d[t] = I_h[t].

If variable assignments agree at least on assignments to the variables in t,
then corresponding term assignments agree on the assignment to t. The
reasoning, as one might expect, is by induction on the number of function
symbols in t. Let I, d, h and t be arbitrary, and suppose d[x] = h[x] for
every variable x in t.

    Basis: If t has no function symbols, then it is a variable x or a
      constant c. (i) Suppose t is a constant c. Then by TA(c),
      I_d[t] = I_d[c] = I[c]; and by TA(c) again, I[c] = I_h[c] = I_h[t]. So
      I_d[t] = I_h[t]. (ii) Suppose t is a variable x. Then by TA(v),
      I_d[t] = I_d[x] = d[x]; but by the assumption to the theorem,
      d[x] = h[x]; and by TA(v) again, h[x] = I_h[x] = I_h[t]. So
      I_d[t] = I_h[t].

    Assp: For any i, 0 ≤ i < k, if t has i function symbols, and d[x] = h[x]
      for every variable x in t, then I_d[t] = I_h[t].

    Show: If t has k function symbols, and d[x] = h[x] for every variable x
      in t, then I_d[t] = I_h[t].

      If t has k function symbols, then it is of the form h^n s1 ... sn for
      some function symbol h^n and terms s1 ... sn with < k function symbols.
      Suppose d[x] = h[x] for every variable x in t. Since d[x] = h[x] for
      every variable x in t, d[x] = h[x] for every variable x in s1 ... sn;
      so the inductive assumption applies to s1 ... sn. So I_d[s1] = I_h[s1],
      and ... and I_d[sn] = I_h[sn]; so
      ⟨I_d[s1], ... I_d[sn]⟩ = ⟨I_h[s1], ... I_h[sn]⟩; so
      I[h^n]⟨I_d[s1], ... I_d[sn]⟩ = I[h^n]⟨I_h[s1], ... I_h[sn]⟩; so by
      TA(f), I_d[h^n s1 ... sn] = I_h[h^n s1 ... sn]; which is to say
      I_d[t] = I_h[t].

    Indct: For any t, I_d[t] = I_h[t].

So for any interpretation I, variable assignments d and h, and term t, if
d[x] = h[x] for every variable in t, then I_d[t] = I_h[t]. It should be clear
that we follow our usual pattern to complete the show step: The assumption
gives us information about the parts, in this case about assignments to
s1 ... sn; from this, with TA, we move to a conclusion about the whole term
t. Notice again that it is important to show that the parts are of the right
sort for the inductive assumption to apply: it matters that s1 ... sn have
< k function symbols, and that d[x] = h[x] for every variable in s1 ... sn.
Perhaps the overall result is intuitively obvious: If there is no difference
in assignments to relevant variables, then there is no difference in
assignments to the whole terms. Our proof merely makes explicit how this
result follows from the definitions.
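The content of T8.3 is easy to see in a toy computation. The following is a
minimal sketch, not from the text: the interpretation's treatment of
constants and function symbols lives in a dictionary, the helper name
`term_value` is hypothetical, and the particular interpretation (c as 3, f²
as addition over the integers) is invented for illustration.

```python
# A minimal sketch (not from the text): term evaluation relative to an
# interpretation and a variable assignment, in the spirit of TA.

def term_value(t, funcs, d):
    """I_d[t]: variables are looked up in assignment d; constants and
    function symbols in funcs."""
    if isinstance(t, str):
        return d[t] if t in d else funcs[t]   # variable, else constant
    f, *args = t
    return funcs[f](*(term_value(s, funcs, d) for s in args))

# A toy interpretation over the integers: c -> 3, f2 -> addition.
funcs = {'c': 3, 'f2': lambda m, n: m + n}
t = ('f2', ('f2', 'x', 'c'), 'y')             # the term f2 f2 x c y

d = {'x': 1, 'y': 5, 'z': 7}                  # d and h agree on x and y,
h = {'x': 1, 'y': 5, 'z': 0}                  # the variables in t, and
                                              # disagree elsewhere
assert term_value(t, funcs, d) == term_value(t, funcs, h) == 9
```

The assignments d and h differ on z, but z does not occur in t, so by T8.3
the two term assignments cannot differ, as the final check confirms.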
We now turn to a result that is very similar, except that it applies to
formulas. In this case, T8.3 is essential for reasoning in the basis.

T8.4. For any interpretation I, variable assignments d and h, and formula P,
if d[x] = h[x] for every free variable x in P, then I_d[P] = S iff
I_h[P] = S.

The argument, as you should expect, is by induction on the number of operator
symbols in the formula P. Let I, d, h and P be arbitrary, and suppose
d[x] = h[x] for every variable x free in P.

    Basis: If P has no operator symbols, then it is a sentence letter S or an
      atomic of the form R^n t1 ... tn for some relation symbol R^n and terms
      t1 ... tn. (i) Suppose P is a sentence letter S. Then I_d[P] = S; iff
      I_d[S] = S; by SF(s) iff I[S] = T; by SF(s) again iff I_h[S] = S; iff
      I_h[P] = S. (ii) Suppose P is R^n t1 ... tn. Then since every variable
      in P is free, by the assumption for the theorem, d[x] = h[x] for every
      variable in P; so d[x] = h[x] for every variable in t1 ... tn; so by
      T8.3, I_d[t1] = I_h[t1], and ... and I_d[tn] = I_h[tn]; so
      ⟨I_d[t1], ... I_d[tn]⟩ = ⟨I_h[t1], ... I_h[tn]⟩; so
      ⟨I_d[t1], ... I_d[tn]⟩ ∈ I[R^n] iff ⟨I_h[t1], ... I_h[tn]⟩ ∈ I[R^n];
      so by SF(r), I_d[R^n t1 ... tn] = S iff I_h[R^n t1 ... tn] = S; which
      is to say, I_d[P] = S iff I_h[P] = S.

    Assp: For any i, 0 ≤ i < k, if P has i operator symbols and d[x] = h[x]
      for every free variable x in P, then I_d[P] = S iff I_h[P] = S.

    Show: If P has k operator symbols and d[x] = h[x] for every free variable
      x in P, then I_d[P] = S iff I_h[P] = S.

      If P has k operator symbols, then it is of the form ¬A, A → B, or ∀vA
      for variable v and formulas A and B with < k operator symbols. Suppose
      d[x] = h[x] for every free variable x in P.

      (¬) Suppose P is ¬A. Then since d[x] = h[x] for every free variable x
        in P, and every variable free in A is free in P, d[x] = h[x] for
        every free variable in A; so the inductive assumption applies to A.
        I_d[P] = S iff I_d[¬A] = S; by SF(¬) iff I_d[A] ≠ S; by assumption
        iff I_h[A] ≠ S; by SF(¬), iff I_h[¬A] = S; iff I_h[P] = S.

      (→) Homework.

      (∀) Suppose P is ∀vA. Then since d[x] = h[x] for every free variable x
        in P, d[x] = h[x] for every free variable in A with the possible
        exception of v; so for arbitrary o ∈ U, d(v|o)[x] = h(v|o)[x] for
        every free variable x in A. Since the assumption applies to arbitrary
        assignments, it applies to d(v|o) and h(v|o); so by assumption for
        any o ∈ U, I_d(v|o)[A] = S iff I_h(v|o)[A] = S. Now suppose
        I_d[P] = S but I_h[P] ≠ S; then I_d[∀vA] = S but I_h[∀vA] ≠ S; from
        the latter, by SF(∀), there is some o ∈ U such that I_h(v|o)[A] ≠ S;
        let m be a particular individual of this sort; then I_h(v|m)[A] ≠ S;
        so, as above, with the inductive assumption, I_d(v|m)[A] ≠ S. But
        I_d[∀vA] = S; so by SF(∀), for any o ∈ U, I_d(v|o)[A] = S; so
        I_d(v|m)[A] = S. This is impossible; reject the assumption: if
        I_d[P] = S, then I_h[P] = S. And similarly [by homework] in the other
        direction.

      If P has k operator symbols, then I_d[P] = S iff I_h[P] = S.

    Indct: For any P, I_d[P] = S iff I_h[P] = S.

So for any interpretation I, variable assignments d and h, and formula P, if
d[x] = h[x] for every free variable x in P, then I_d[P] = S iff I_h[P] = S.
Notice again that it is important to make sure the inductive assumption
applies. In the (∀) case, first we are careful to distinguish the arbitrary
variable of quantification v from the x of the assumption. For the quantifier
case, the condition that d and h agree on assignments to all the free
variables in A is not satisfied merely because they agree on assignments to
all the free variables in P. We solve the problem by switching to assignments
d(v|o) and h(v|o), which must agree on all the free variables in A. (Why?)
The overall reasoning in the quantifier case is fairly sophisticated. But you
should be in a position to bear down and follow each step.
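Over a small finite universe, satisfaction can be computed outright, which
makes T8.4 concrete. This is a minimal sketch, not from the text: the
formula encoding, the sample interpretation, and the helper names `sat` and
`rels` are all invented for illustration. Note how the `'all'` clause builds
the modified assignment d(v|o) exactly as in case (∀) of the induction.

```python
# A minimal sketch (not from the text): satisfaction over a finite universe.
# Formulas: ('R2', x, y), ('not', A), ('imp', A, B), ('all', v, A),
# with variables as strings.

U = [0, 1, 2]
rels = {'R2': {(m, n) for m in U for n in U if m <= n}}   # a sample relation

def sat(p, d):
    """I_d[P] = S?  Assignment d is a dict from variables to members of U."""
    op = p[0]
    if op == 'R2':
        return (d[p[1]], d[p[2]]) in rels['R2']
    if op == 'not':
        return not sat(p[1], d)
    if op == 'imp':
        return (not sat(p[1], d)) or sat(p[2], d)
    if op == 'all':
        v, a = p[1], p[2]
        return all(sat(a, {**d, v: o}) for o in U)        # d(v|o)

# P = (all x)R2 x y has just y free.
P = ('all', 'x', ('R2', 'x', 'y'))
d = {'x': 0, 'y': 2}
h = {'x': 1, 'y': 2}          # agrees with d on the free variable y only
assert sat(P, d) == sat(P, h)
```

The assignments d and h disagree on x, but x is bound in P; since they agree
on the one free variable y, T8.4 says they must agree on satisfaction, and
they do.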


From T8.4 it is a short step to a corollary, the proof of which was promised
in chapter 4: If a sentence P is satisfied on any variable assignment, then
it is satisfied on every variable assignment, and so true.

T8.5. For any interpretation I and sentence P, if there is some assignment d
such that I_d[P] = S, then I[P] = T.

    For sentence P and interpretation I, suppose there is some assignment d
    such that I_d[P] = S, but I[P] ≠ T. From the latter, by TI, there is some
    particular assignment h such that I_h[P] ≠ S; but if P is a sentence, it
    has no free variables; so every assignment agrees with h in its
    assignment to every free variable in P; in particular, d agrees with h in
    its assignment to every free variable in P; so by T8.4, I_d[P] ≠ S. This
    is impossible; reject the assumption: if I_d[P] = S then I[P] = T.

In effect, the reasoning is as sketched in chapter 4. Whether ∀xP is
satisfied by d does not depend on what particular object d assigns to x, for
satisfaction of the quantified formula depends on satisfaction for every
assignment to x. The key step is contained in the reasoning for the (∀) case
of the induction. Given this, the move to T8.5 is straightforward.

T8.5 puts us in a position to recover simple semantic conditions for
sentences of the sort ¬P and P → Q.

T8.6. For any sentences P and Q, (i) I[¬P] = T iff I[P] ≠ T; and
(ii) I[P → Q] = T iff I[P] ≠ T or I[Q] = T.

    (¬) Suppose I[¬P] = T; then by TI, for any d, I_d[¬P] = S; so by SF(¬),
    I_d[P] ≠ S; and by TI again, I[P] ≠ T. Suppose I[P] ≠ T; then by TI,
    there is some d such that I_d[P] ≠ S; let h be a particular assignment of
    this sort; then I_h[P] ≠ S; so by SF(¬), I_h[¬P] = S; and since P is a
    sentence, ¬P is a sentence; so by T8.5, I[¬P] = T. So I[¬P] = T iff
    I[P] ≠ T.

    (→) Homework.

*E8.19. Provide a complete argument for T8.4, completing the case for (→),
and expanding the other direction for (∀). You should set up the complete
induction, but may appeal to the text at parts that are already completed, as
the text appeals to homework.

E8.20. Complete the demonstration of T8.6 by working the case for (→).

E8.21. Show that T8.4 holds for expressions in Lt from E8.18. Hint: you will
need results parallel to both T8.3 and T8.4.

E8.22. Show that for any interpretation I and sentence P, either I[P] = T or
I[¬P] = T. Hint: This is not an argument by induction, but rather another
corollary to T8.4. So begin by supposing the result is false. . . .

8.3.3 Case

Finally, we turn to another pair of results, with reasoning like what we have
already seen.

T8.7. For any formula P, term t, constant c, and distinct variables v and x,
(P^v_t)^c_x is the same formula as (P^c_x)^v_{t^c_x}.

Notice that switching t for v and then x for c is not the same as switching x
for c and then t for v, for if t contains an instance of c, that instance of
c is replaced in the first case, but not in the second. The proof breaks into
two parts. (i) By induction on the number of function symbols in an arbitrary
term r, we show that (r^v_t)^c_x = (r^c_x)^v_{t^c_x}. Given this, (ii) by
induction on the number of operator symbols in an arbitrary formula P, we
show that (P^v_t)^c_x = (P^c_x)^v_{t^c_x}. Only part (i) is completed here;
(ii) is left for homework. Suppose v ≠ x.
    Basis: If r has no function symbols, then it is either v, c, or some
      other constant or variable.

      (v) Suppose r is v. Then r^v_t is t and (r^v_t)^c_x is t^c_x. But
        r^c_x is v; so (r^c_x)^v_{t^c_x} is t^c_x. So
        (r^v_t)^c_x = (r^c_x)^v_{t^c_x}.

      (c) Suppose r is c. Then r^v_t is c and (r^v_t)^c_x is x. But r^c_x is
        x; and, since v ≠ x, (r^c_x)^v_{t^c_x} is x. So
        (r^v_t)^c_x = (r^c_x)^v_{t^c_x}.

      (oth) Suppose r is some variable or constant other than v or c. Then
        r^v_t = (r^v_t)^c_x = r. Similarly, r^c_x = (r^c_x)^v_{t^c_x} = r.
        So (r^v_t)^c_x = (r^c_x)^v_{t^c_x}.

    Assp: For any i, 0 ≤ i < k, if r has i function symbols, then
      (r^v_t)^c_x = (r^c_x)^v_{t^c_x}.


First Theorems of Chapter 8

T8.1 For any P whose operators are ¬, ∨, ∧ and →, P* is in normal form and
     I[P] = T iff I[P*] = T.

T8.2 For variables x and v, if v is not free in a formula P and v is free
     for x in P, then (P^x_v)^v_x = P.

T8.3 For any interpretation I, variable assignments d and h, and term t, if
     d[x] = h[x] for every variable x in t, then I_d[t] = I_h[t].

T8.4 For any interpretation I, variable assignments d and h, and formula P,
     if d[x] = h[x] for every free variable x in P, then I_d[P] = S iff
     I_h[P] = S.

T8.5 For any interpretation I and sentence P, if there is some assignment d
     such that I_d[P] = S, then I[P] = T.

T8.6 For any sentences P and Q, (i) I[¬P] = T iff I[P] ≠ T; and
     (ii) I[P → Q] = T iff I[P] ≠ T or I[Q] = T.

T8.7 For any formula P, term t, constant c, and distinct variables v and x,
     (P^v_t)^c_x is the same formula as (P^c_x)^v_{t^c_x}.

    Show: If r has k function symbols, then (r^v_t)^c_x = (r^c_x)^v_{t^c_x}.

      If r has k function symbols, then it is of the form h^n s1 ... sn for
      some function symbol h^n and terms s1 ... sn each of which has < k
      function symbols. In this case,
      (r^v_t)^c_x = h^n((s1^v_t)^c_x ... (sn^v_t)^c_x). Similarly,
      (r^c_x)^v_{t^c_x} = h^n((s1^c_x)^v_{t^c_x} ... (sn^c_x)^v_{t^c_x}). But
      by assumption, (s1^v_t)^c_x = (s1^c_x)^v_{t^c_x}, and ... and
      (sn^v_t)^c_x = (sn^c_x)^v_{t^c_x}; so
      h^n((s1^v_t)^c_x ... (sn^v_t)^c_x) =
      h^n((s1^c_x)^v_{t^c_x} ... (sn^c_x)^v_{t^c_x}); so
      (r^v_t)^c_x = (r^c_x)^v_{t^c_x}.

    Indct: For any r, (r^v_t)^c_x = (r^c_x)^v_{t^c_x}.

You will find this result useful when you turn to the final proof of T8.7.
That argument is a straightforward induction on the number of operator
symbols in P. For the case where P is of the form ∀wA, notice that v is
either w or it is not. On the one hand, if v is w, then P = ∀wA has no free
instances of v, so that P^v_t = P, and (P^v_t)^c_x = P^c_x; but, similarly,
P^c_x has no free instances of v, so (P^c_x)^v_{t^c_x} = P^c_x. On the other
hand, if v is a variable other than w, then (P^v_t)^c_x = ∀w(A^v_t)^c_x and
(P^c_x)^v_{t^c_x} = ∀w(A^c_x)^v_{t^c_x}, and you will be able to use the
inductive assumption.
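The term half of T8.7 can be spot-checked by computing both substitution
orders on a sample term. This is a minimal sketch, not from the text: terms
are nested tuples and the helper name `sub` is hypothetical; the sample term
is chosen so that t contains the constant c, which is exactly the case where
the two orders threaten to diverge.

```python
# A minimal sketch (not from the text): checking the term half of T8.7 on a
# sample term: (r[v->t])[c->x] comes to the same thing as
# (r[c->x])[v -> t[c->x]].

def sub(r, old, new):
    """Replace every occurrence of the variable/constant `old` in r by the
    (possibly complex) term `new`."""
    if r == old:
        return new
    if isinstance(r, str):
        return r
    return (r[0],) + tuple(sub(s, old, new) for s in r[1:])

t = ('f2', 'c', 'y')                          # t contains the constant c
r = ('g2', 'v', ('f2', 'c', 'v'))             # a term containing v and c

lhs = sub(sub(r, 'v', t), 'c', 'x')           # (r[v->t])[c->x]
t_cx = sub(t, 'c', 'x')                       # t[c->x]
rhs = sub(sub(r, 'c', 'x'), 'v', t_cx)        # (r[c->x])[v -> t[c->x]]
assert lhs == rhs

# The orders differ in the naive sense: x for c and then plain t for v
# leaves the c inside t unreplaced.
naive = sub(sub(r, 'c', 'x'), 'v', t)
assert naive != lhs
```

The final assertion makes the cautionary remark of the text concrete: it is
t^c_x, not t itself, that must go in for v after c has been replaced by x.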


*E8.23. Complete the proof of T8.7 by showing by induction on the number of
operator symbols in an arbitrary formula P that if v is distinct from x, then
(P^v_t)^c_x = (P^c_x)^v_{t^c_x}.

E8.24. Show that T8.7 holds for expressions in Lt from E8.18.


E8.25. Set U = {1}, I[S] = T for every sentence letter S, I[R¹] = {1} for every R¹, I[R²] = {⟨1,1⟩} for every R², and in general, I[Rⁿ] = Uⁿ for every Rⁿ. Where P is any formula whose only operators are →, ∧, ∨, ↔, ∀ and ∃, show by induction on the number of operators in P that I_d[P] = S. Use this result to show that ⊭ ¬P. Hint: This is a quantificational version of E8.10.
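The interpretation in E8.25 can be checked mechanically. Here is a sketch in Python (the nested-tuple representation and the name `sat` are mine, not the text's): over a one-element universe on which every atomic comes out true, satisfaction for a ¬-free formula recurses just as in the induction, and every branch bottoms out at a true atomic.

```python
def sat(f):
    # satisfaction over U = {1} with every atomic true:
    # atomics are strings; complex formulas are tagged tuples
    if isinstance(f, str):
        return True                      # every sentence letter / atomic is satisfied
    op = f[0]
    if op in ('forall', 'exists'):
        return sat(f[2])                 # only one assignment to check over U = {1}
    if op == '->':
        return (not sat(f[1])) or sat(f[2])
    if op == 'and':
        return sat(f[1]) and sat(f[2])
    if op == 'or':
        return sat(f[1]) or sat(f[2])
    if op == '<->':
        return sat(f[1]) == sat(f[2])
    raise ValueError(f"unexpected operator: {op!r}")

# (forall x)(Fx -> (exists y)(Rxy and Fy)) is satisfied on this interpretation
f = ('forall', 'x', ('->', 'Fx', ('exists', 'y', ('and', 'Rxy', 'Fy'))))
print(sat(f))  # True
```

Since the quantifiers range over a single element, each quantifier case reduces to its immediate subformula, which is just what makes the induction go through.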
E8.26. Where the only operator in formula P is ↔, show that P is valid, ⊨ P, iff each atomic in P occurs an even number of times. For this, say formulas P and Q are equivalent just in case I_d[P] = S iff I_d[Q] = S. Then the argument breaks into three parts.

(i) Show com: A ↔ B is equivalent to B ↔ A; assoc: A ↔ (B ↔ C) is equivalent to (A ↔ B) ↔ C; and sub: if A is equivalent to B, then B ↔ C is equivalent to A ↔ C. These are simple arguments in the style of chapter 7.

(ii) Suppose the only operator in formula P is ↔, and Q and R are any formulas, whose only operator is ↔, such that the atomics of Q plus the atomics of R are the same as the atomics of P. Where P has at least one operator symbol, show by induction on the number of operator symbols in P that P is equivalent to Q ↔ R. Hint: If P is of the form A ↔ B, then you will be able to use the assumption to say that A ↔ B is equivalent to some (Q_A ↔ R_A) ↔ (Q_B ↔ R_B), which sorts the atomics of A and B into the atomics of Q and the atomics of R. Then you can use (i) to force a form (Q_A ↔ Q_B) ↔ (R_A ↔ R_B). But you will also have to take account of (simplified) cases where A and B lack atomics from Q or from R.

(iii) Where the only operator in formula P is ↔, show by induction on the number of operators in P that P is valid, ⊨ P, iff each atomic in P occurs an even number of times. Hints: Say an atomic which occurs an odd number of times has an unmatched occurrence. Then, if P has k operator symbols, either (a) all of the atomics in P are matched, (b) P has both matched and unmatched atomics, or (c) P includes only unmatched atomics. In the first two cases, you will be able to use result (ii) with the assumption. For (c) use (ii) to get an expression of the sort A ↔ B where atomics in A are disjoint from the atomics in B; then, depending on how you have set things up, you may not even need the inductive assumption.
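The parity criterion of E8.26 is easy to confirm by brute force for small cases. A sketch (the nested-pair representation and function names are mine): a formula whose only operator is ↔ is a nested pair, and validity is checked against every row of its truth table.

```python
from itertools import product

def atoms(f):
    # f is an atomic (a string) or a pair (left, right) standing for left <-> right
    if isinstance(f, str):
        return [f]
    left, right = f
    return atoms(left) + atoms(right)

def value(f, row):
    # truth value of a <->-only formula under an assignment (dict atom -> bool)
    if isinstance(f, str):
        return row[f]
    left, right = f
    return value(left, row) == value(right, row)

def valid(f):
    letters = sorted(set(atoms(f)))
    return all(value(f, dict(zip(letters, vals)))
               for vals in product([True, False], repeat=len(letters)))

def even_occurrences(f):
    occ = atoms(f)
    return all(occ.count(a) % 2 == 0 for a in set(occ))

# (A <-> B) <-> (B <-> A) is valid, and each atomic occurs twice
P = (('A', 'B'), ('B', 'A'))
print(valid(P), even_occurrences(P))  # True True
```

Enumerating all ↔-only formulas of a given size and checking that `valid` and `even_occurrences` always agree gives a quick empirical test of (iii) before attempting the induction.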
E8.27. Show that any sentential form P whose only operators are ¬ and ↔ and whose truth table has at least four rows has an even number of Ts and Fs under its main operator. Hints: Reason by induction on the number of operators in P, where P is (a subformula) on a table with at least four rows; so for atomics you may be sure that a table with at least four rows has an even number of Ts and Fs. The show step has cases for ¬ and ↔. The former is easy, the latter is not.

8.4 Additional Examples (for Part IV)

Our primary motivation in this section is to practice doing mathematical induction. However, a final series of examples develops some results about Q that will be particularly useful if you go on to consider Part IV. As we have already mentioned (p. 304, compare E7.20), many true generalizations are not provable in Robinson Arithmetic. However, we shall be able to show that Q is generally adequate for some interesting classes of results. As you work through these results, you may find it convenient to refer to the final chapter 8 theorems reference on p. 417.

First we shall string together a series of results sufficient to show that Q correctly decides atomic sentences of L_NT: where N is the standard interpretation for number theory and P is a sentence s = t, s ≤ t or s < t, if N[P] = T then Q ⊢ND P, and if N[P] ≠ T then Q ⊢ND ¬P. Observe that if P is atomic and a sentence, it has no variables.

8.4.1 Case

Let n̄ abbreviate S…S∅ with n copies of S. So, for example, 2̄ is SS∅, and 0̄ is just ∅. We begin with some simple results for the addition and multiplication of these numerals.
T8.8. For any a, b, c ∈ U, if a + b = c, then Q ⊢ND ā + b̄ = c̄.

By induction on the value of b. Recall that by Q3, Q ⊢ND x + ∅ = x, and from Q4, Q ⊢ND x + Sy = S(x + y). In addition, we depend on the general fact that, so long as a > 0, S(a−1)‾ is the same numeral as ā.

Basis: Suppose b = 0 and a + b = c; then a = c; but by Q3, Q ⊢ND ā + ∅ = ā; so Q ⊢ND ā + b̄ = c̄.

Assp: For any i, 0 ≤ i < k, if a + i = c, then Q ⊢ND ā + ī = c̄.

Show: If a + k = c, then Q ⊢ND ā + k̄ = c̄.

Suppose a + k = c. Since 0 ≤ i < k, k > 0. So k̄ is the same as S(k−1)‾; and a + (k−1) = c−1; and by assumption Q ⊢ND ā + (k−1)‾ = (c−1)‾. By Q4, Q ⊢ND ā + S(k−1)‾ = S(ā + (k−1)‾); but S(k−1)‾ is k̄, so Q ⊢ND ā + k̄ = S(ā + (k−1)‾); so with =E, Q ⊢ND ā + k̄ = S(c−1)‾; and S(c−1)‾ is c̄. So Q ⊢ND ā + k̄ = c̄.

Indct: For any a, b and c, if a + b = c, then Q ⊢ND ā + b̄ = c̄.

There are some manipulations to get the result, but the idea is simple: From the basis, ā + ∅ = ā; then, given the assumption for one value of b, we use Q4 to get the next. Observe that we informally manipulate objects in the universe by expressions of the sort a + b = c; but doing so is not itself to manipulate the corresponding expression of L_NT, which would appear ā + b̄ = c̄.
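The induction in T8.8 mirrors a simple computation: Q4 peels one S off the right summand at each step, and Q3 finishes when the right summand is ∅. A minimal sketch (the string representation and function names are mine, not the text's), writing the zero constant as 'O':

```python
def numeral(n):
    # the numeral n-bar: n copies of 'S' applied to the zero constant 'O'
    return 'S' * n + 'O'

def add_derivation(a, b):
    """Trace the Q3/Q4 rewriting behind Q's proof of numeral(a) + numeral(b) = numeral(a+b)."""
    steps = []
    while b > 0:
        # Q4: x + Sy = S(x + y) -- move one successor outside the sum
        steps.append(f"{numeral(a)} + {numeral(b)} = S({numeral(a)} + {numeral(b - 1)})")
        b -= 1
    # Q3: x + O = x
    steps.append(f"{numeral(a)} + O = {numeral(a)}")
    return steps

for line in add_derivation(2, 2):
    print(line)
```

Chaining the displayed equations with =E collapses them to SSO + SSO = SSSSO, the object-language counterpart of 2 + 2 = 4, just as in the proof.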
*T8.9. For any a, b, c ∈ U, if a × b = c, then Q ⊢ND ā × b̄ = c̄. By induction on the value of b.

Hint: You should come to a stage where you want to apply the assumption to ā × (k−1)‾ + ā; but since a × (k−1) = a×k − a = c − a, the inductive assumption tells you that Q ⊢ND ā × (k−1)‾ = (c−a)‾; and you will be able to apply T8.8 for the desired result.
*E8.28. Provide an argument to show T8.9.

8.4.2 Case

T8.10. For any a, b ∈ U, if a ≠ b, then Q ⊢ND ā ≠ b̄.

Whenever a ≠ b, there is some d > 0 that is the difference between them. We show that for any n, Q ⊢ND n̄ ≠ (d+n)‾. Then the case when n = a and d + n = b gives the desired result. Recall that according to Q1, Q ⊢ND ¬(Sx = ∅), and from Q2, Q ⊢ND (Sx = Sy) → (x = y).

Suppose a ≠ b; then a < b or b < a; without loss of generality, suppose a < b; then there is some d > 0 such that d + a = b. By induction on n, we show Q ⊢ND n̄ ≠ (d+n)‾; the case when n = a gives Q ⊢ND ā ≠ (d+a)‾, which is to say, Q ⊢ND ā ≠ b̄.

Basis: Suppose n = 0. Since d > 0, d̄ = S(d−1)‾; and since n = 0, d = d + n. By Q1 with reflexivity, Q ⊢ND ∅ ≠ S(d−1)‾; so Q ⊢ND n̄ ≠ d̄; but d̄ is (d+n)‾; so Q ⊢ND n̄ ≠ (d+n)‾.

Assp: For 0 ≤ i < k, Q ⊢ND ī ≠ (d+i)‾.

Show: Q ⊢ND k̄ ≠ (d+k)‾.

In this case, both k and d + k are > 0; so k̄ is S(k−1)‾ and (d+k)‾ is S(d+k−1)‾; by Q2, Q ⊢ND S(k−1)‾ = S(d+k−1)‾ → (k−1)‾ = (d+k−1)‾; but by assumption, Q ⊢ND (k−1)‾ ≠ (d+k−1)‾; so by MT, Q ⊢ND S(k−1)‾ ≠ S(d+k−1)‾; which is to say, Q ⊢ND k̄ ≠ (d+k)‾.

Indct: For any n, Q ⊢ND n̄ ≠ (d+n)‾.

So Q ⊢ND ā ≠ (d+a)‾; and (d+a)‾ is b̄. In the basis, we show that Q proves the difference d between a and b is not equal to 0. Given this, at the show, Q proves that adding one to each side results in an inequality; and similarly adding one again results in an inequality, until we get the result that Q proves ā ≠ b̄. The demonstration that Q ⊢ND ā ≠ b̄ works so long as we start with d the difference between a and b.

The same basic strategy applies in a related case. But we need a preliminary theorem for one of the parts.

T8.11. Q ⊢ND Sj + n̄ = j + Sn̄.

Hint: this is a simple induction on n. You will want the assumption in the form Q ⊢ND Sj + (k−1)‾ = j + S(k−1)‾ = j + k̄.

Now we are ready for a result like T8.10.
T8.12. (i) If a ≰ b, then Q ⊢ND ¬(ā ≤ b̄); and (ii) if a ≮ b, then Q ⊢ND ¬(ā < b̄).

Recall that s ≤ t is ∃v(v + s = t) and s < t is ∃v(Sv + s = t) for v not in s or t. Suppose a ≰ b; then a > b; so, again, there is a difference d between them.

For (i) we need that if a ≰ b then Q ⊢ND ¬∃v(v + ā = b̄). Suppose a ≰ b; then a > b; so for some d > 0, a = d + b. By induction on n, we show that for any n, Q ⊢ND j + (d+n)‾ ≠ n̄; the case when n = b gives Q ⊢ND j + ā ≠ b̄; then by ∀I, Q ⊢ND ∀v(v + ā ≠ b̄); and the result follows by QN.

Basis: Suppose n = 0; then d + n = d; since d > 0, d̄ = S(d−1)‾. By Q1, Q ⊢ND S(j + (d−1)‾) ≠ ∅; but by Q4, Q ⊢ND j + S(d−1)‾ = S(j + (d−1)‾); so Q ⊢ND j + S(d−1)‾ ≠ ∅; but S(d−1)‾ is d̄ = (d+n)‾ and ∅ is n̄; so Q ⊢ND j + (d+n)‾ ≠ n̄.

Assp: For 0 ≤ i < k, Q ⊢ND j + (d+i)‾ ≠ ī.

Show: Q ⊢ND j + (d+k)‾ ≠ k̄.

In this case, k and d + k are > 0, so that k̄ = S(k−1)‾ and (d+k)‾ = S(d+k−1)‾. By assumption, Q ⊢ND j + (d+k−1)‾ ≠ (k−1)‾. But by Q2, Q ⊢ND S(j + (d+k−1)‾) = S(k−1)‾ → j + (d+k−1)‾ = (k−1)‾; so by MT, Q ⊢ND S(j + (d+k−1)‾) ≠ S(k−1)‾; by Q4, Q ⊢ND j + S(d+k−1)‾ = S(j + (d+k−1)‾); so Q ⊢ND j + S(d+k−1)‾ ≠ S(k−1)‾; but this is just to say, Q ⊢ND j + (d+k)‾ ≠ k̄.

Indct: For any n, Q ⊢ND j + (d+n)‾ ≠ n̄.

So Q ⊢ND j + (d+b)‾ ≠ b̄, which is to say Q ⊢ND j + ā ≠ b̄. So by ∀I, Q ⊢ND ∀v(v + ā ≠ b̄); and by QN, Q ⊢ND ¬∃v(v + ā = b̄); which is to say, Q ⊢ND ¬(ā ≤ b̄).

In the basis, we show that for d > 0, Q proves j + d̄ ≠ ∅. Then, at the show, each side is incremented by one until Q proves j + ā ≠ b̄. Again, this works because we begin with d the difference between a and b.

E8.29. Provide arguments to show T8.11 and then (ii) of T8.12. Hint: For the latter, the induction is to show Q ⊢ND Sj + (d+n)‾ ≠ n̄. There is a complication, however, in the basis. From a ≮ b, a = b + d for d ≥ 0. So we cannot set d̄ = S(d−1)‾. You can solve the problem by obtaining T8.11 as a preliminary result. Then it will be easy to show j + Sd̄ ≠ ∅ and apply the preliminary theorem. For the show, since k > 0, the argument remains straightforward.

8.4.3 Case

Up to this stage, we have been dealing entirely with atomics whose only terms are numerals of the sort n̄. We now broaden our results to include atomic sentences with arbitrary terms.

We have said a formula is true iff it is satisfied on every variable assignment. Let us introduce a parallel notion for terms.

AI The assignment to a term on an interpretation: I[t] = n iff with any d for I, I_d[t] = n.

In particular, from T8.3, if assignments d and h agree on assignments to free variables in t, then I_d[t] = I_h[t]; so if t is without free variables, any assignments must agree on assignments to all the free variables in t, so it is automatic that any I_d[t] = I_h[t] = I[t].

Given this, we start by establishing a connection between numerals and complex terms.

T8.13. For any variable-free term t of L_NT, where N[t] = n, Q ⊢ND t = n̄.

By induction on the number of function symbols in t.

Basis: If a variable-free term t has no function symbols, then it is the constant ∅. N[∅] = 0. But by =I, Q ⊢ND ∅ = ∅; so Q ⊢ND t = n̄.

Assp: For any i, 0 ≤ i < k, if t has i function symbols and N[t] = n, then Q ⊢ND t = n̄.

Show: If t has k function symbols and N[t] = n, then Q ⊢ND t = n̄.

If t has k function symbols, it is of the form Sr, r + s, or r × s for r, s with < k function symbols.

(S) t is Sr. Suppose N[t] = n. Since r is variable-free, N[r] = N_d[r] = a for some a. Since t is variable-free, N[t] = N_d[t] = N_d[Sr]; by TA(f), N_d[Sr] = N[S]⟨a⟩ = a + 1; so N[t] = a + 1; so a + 1 = n. By assumption Q ⊢ND r = ā; but Q ⊢ND Sr = Sr; so by =E, Q ⊢ND Sr = Sā; and Sā is the numeral (a+1)‾, that is, n̄; so Q ⊢ND t = n̄.

(+) t is r + s. Suppose N[t] = n. Since r and s are variable-free, N[r] = N_d[r] = a and N[s] = N_d[s] = b for some a and b. Since t is variable-free, N[t] = N_d[t] = N_d[r + s]; by TA(f), N_d[r + s] = N[+]⟨a,b⟩ = a + b; so N[t] = a + b; so a + b = n. By assumption, Q ⊢ND r = ā and Q ⊢ND s = b̄; but by =I, Q ⊢ND r + s = r + s; so by =E, Q ⊢ND r + s = ā + b̄; and since a + b = n, by T8.8, Q ⊢ND ā + b̄ = n̄; so Q ⊢ND r + s = n̄. So Q ⊢ND t = n̄.

(×) Similarly by homework.

Indct: So for any variable-free term t, with N[t] = n, Q ⊢ND t = n̄.
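The induction of T8.13 is exactly the recursion performed by an evaluator for variable-free terms: the value of a complex term is computed from the values of terms with fewer function symbols. A sketch (the nested-tuple representation is mine, not the text's):

```python
def N(t):
    # value of a variable-free term of L_NT under the standard interpretation:
    # 'O' is the zero constant; ('S', r), ('+', r, s), ('*', r, s) are complex terms
    if t == 'O':
        return 0
    if t[0] == 'S':
        return N(t[1]) + 1
    if t[0] == '+':
        return N(t[1]) + N(t[2])
    if t[0] == '*':
        return N(t[1]) * N(t[2])
    raise ValueError(f"not a variable-free term: {t!r}")

# (SO + SSO) * SSO evaluates to 6, so by T8.13 Q proves it equal to the numeral for 6
t = ('*', ('+', ('S', 'O'), ('S', ('S', 'O'))), ('S', ('S', 'O')))
print(N(t))  # 6
```

Each `return` corresponds to one case of the show step: the (S), (+), and (×) cases each appeal to the values of the immediate subterms, just as the proof appeals to the inductive assumption.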
Our intended result, that Q correctly decides atomic sentences of L_NT, is not an argument by induction, but rather collects what we have done into a simple argument.

T8.14. Q correctly decides atomic sentences of L_NT. For any sentence P of the sort s = t, s ≤ t or s < t, if N[P] = T then Q ⊢ND P; and if N[P] ≠ T then Q ⊢ND ¬P.

Since the atomics are sentences (and the quantified variable does not appear in the terms for the inequalities), s and t are variable-free. A few selected parts are worked as examples.
(a) N[s = t] = T. Then by TI, for any d, N_d[s = t] = S; so by SF(r), ⟨N_d[s], N_d[t]⟩ ∈ N[=]; so N_d[s] = N_d[t]. But since s and t are variable-free, N[s] = N_d[s] = a = N_d[t] = N[t] for some a; so by T8.13, Q ⊢ND s = ā and Q ⊢ND t = ā; but by =I, Q ⊢ND ā = ā; so by =E, Q ⊢ND s = t.

(b) N[s = t] ≠ T.

(c) N[s ≤ t] = T. Then N[∃v(v + s = t)] = T; so by TI, for any d, N_d[∃v(v + s = t)] = S; so by SF(∃), for some m ∈ U, N_d(v|m)[v + s = t] = S; but d(v|m)[v] = m; and by TA(v), N_d(v|m)[v] = m; and since s and t are variable-free, N_d(v|m)[s] = N[s] = a and N_d(v|m)[t] = N[t] = b for some a and b. By TA(f), N_d(v|m)[v + s] = N[+]⟨m, a⟩ = m + a; and by SF(r), ⟨m + a, b⟩ ∈ N[=]; so m + a = b. From the latter, by T8.8, Q ⊢ND m̄ + ā = b̄. So by ∃I, Q ⊢ND ∃v(v + ā = b̄); which is to say, Q ⊢ND ā ≤ b̄. But since N[s] = a and N[t] = b, by T8.13, Q ⊢ND s = ā and Q ⊢ND t = b̄; so by =E, Q ⊢ND s ≤ t.

(d) N[s ≤ t] ≠ T. Then N[∃v(v + s = t)] ≠ T; so by TI, for some d, N_d[∃v(v + s = t)] ≠ S; so by SF(∃), for any o ∈ U, N_d(v|o)[v + s = t] ≠ S; let m be an arbitrary individual of this sort; then N_d(v|m)[v + s = t] ≠ S. d(v|m)[v] = m; so by TA(v), N_d(v|m)[v] = m; and since s and t are variable-free, N_d(v|m)[s] = N[s] = a and N_d(v|m)[t] = N[t] = b for some a and b. By TA(f), N_d(v|m)[v + s] = N[+]⟨m, a⟩ = m + a; so that by SF(r), ⟨m + a, b⟩ ∉ N[=]; so m + a ≠ b; and since m is arbitrary, for any o ∈ U, o + a ≠ b; so a ≰ b; so by T8.12, Q ⊢ND ¬(ā ≤ b̄). But since N[s] = a and N[t] = b, by T8.13, Q ⊢ND s = ā and Q ⊢ND t = b̄; so by =E, Q ⊢ND ¬(s ≤ t).

(e) N[s < t] = T.

(f) N[s < t] ≠ T.

Since we are able to obtain the required results at the level of numerals, and then equalities between numerals and arbitrary terms, we are able to combine the two to correctly decide arbitrary atomics.
E8.30. Complete the argument for T8.13 by completing the case for (). You should
set up the entire induction, but may appeal to the text for parts that are already
completed, just as the text appeals to homework.

E8.31. Complete the remaining cases of T8.14 to show that Q correctly decides
atomic sentences of LNT .
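Taken together, T8.8 through T8.14 say that Q settles every atomic sentence, and semantically the test is simple: evaluate both sides and compare. A sketch of that decision procedure (the representation and names are mine, not the text's):

```python
def N(t):
    # value of a variable-free term: 'O' is zero; ('S', r), ('+', r, s), ('*', r, s)
    if t == 'O':
        return 0
    op = t[0]
    if op == 'S':
        return N(t[1]) + 1
    if op == '+':
        return N(t[1]) + N(t[2])
    return N(t[1]) * N(t[2])

def decide(rel, s, t):
    """Which of P, ~P does Q prove, for atomic P of the sort s = t, s <= t, s < t?"""
    a, b = N(s), N(t)
    holds = {'=': a == b, '<=': a <= b, '<': a < b}[rel]
    return 'Q proves it' if holds else 'Q refutes it'

# SO < SO + SO: since 1 < 2, by T8.14 Q proves the sentence
print(decide('<', ('S', 'O'), ('+', ('S', 'O'), ('S', 'O'))))
```

Of course T8.14 is a claim about derivations, not evaluations; the point of the chapter's inductions is that the numerical comparison computed here can always be converted into a derivation in Q.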

8.4.4 Case
We conclude the chapter with some more examples of mathematical induction, this time working toward important results about inequality. We begin by aiming at a result sometimes called trichotomy: for any n, Q ⊢ND ∀x(x < n̄ ∨ x = n̄ ∨ n̄ < x). Again, though, we begin with preliminaries. Recall that the bounded quantifiers (∀x < t)P, (∃x < t)P, (∀x ≤ t)P, and (∃x ≤ t)P are abbreviations with associated derived introduction and exploitation rules (see p. 299). First, a simple argument that repeats a pattern of reasoning we shall see again.

T8.15. For any n and T, if T ⊢ND x = Sy and T ⊢ND y = 0̄ ∨ y = 1̄ ∨ … ∨ y = n̄, then T ⊢ND x = S0̄ ∨ x = S1̄ ∨ … ∨ x = Sn̄.
The argument is by induction on the value of n. Suppose T ⊢ND x = Sy.

Basis: n = 0. Suppose T ⊢ND y = 0̄; we need that T ⊢ND x = S0̄. But this is immediate by =E.

Assp: For any i, 0 ≤ i < k, if T ⊢ND y = 0̄ ∨ y = 1̄ ∨ … ∨ y = ī, then T ⊢ND x = S0̄ ∨ x = S1̄ ∨ … ∨ x = Sī.

Show: If T ⊢ND y = 0̄ ∨ y = 1̄ ∨ … ∨ y = k̄, then T ⊢ND x = S0̄ ∨ x = S1̄ ∨ … ∨ x = Sk̄. Suppose T ⊢ND y = 0̄ ∨ y = 1̄ ∨ … ∨ y = k̄.

1.  x = Sy                                          given from T
2.  y = 0̄ ∨ y = 1̄ ∨ … ∨ y = (k−1)‾ ∨ y = k̄        given from T
3.  | y = 0̄ ∨ y = 1̄ ∨ … ∨ y = (k−1)‾               A (g 2∨E)
4.  | x = S0̄ ∨ x = S1̄ ∨ … ∨ x = S(k−1)‾             1,3 assp
5.  | x = S0̄ ∨ x = S1̄ ∨ … ∨ x = Sk̄                 4 ∨I
6.  | y = k̄                                          A (g 2∨E)
7.  | x = Sk̄                                         1,6 =E
8.  | x = S0̄ ∨ x = S1̄ ∨ … ∨ x = Sk̄                 7 ∨I
9.  x = S0̄ ∨ x = S1̄ ∨ … ∨ x = Sk̄                   2,3-5,6-8 ∨E

So T ⊢ND x = S0̄ ∨ x = S1̄ ∨ … ∨ x = Sk̄.

Indct: For any n, if T ⊢ND x = Sy and T ⊢ND y = 0̄ ∨ y = 1̄ ∨ … ∨ y = n̄, then T ⊢ND x = S0̄ ∨ x = S1̄ ∨ … ∨ x = Sn̄.
Intuitively, we can use x = Sy together with an extended version of ∨E on y = 0̄ ∨ y = 1̄ ∨ … ∨ y = n̄ to get the result. The induction works by obtaining the result for the first disjunct, and then showing that no matter how far we have gone, it is always possible to go to the next stage. This theorem is useful for the next.
T8.16. For any n, (i) Q ⊢ND (∀x ≤ n̄)(x = 0̄ ∨ x = 1̄ ∨ … ∨ x = n̄) and (ii) Q ⊢ND (∀x < n̄)(∅ ≠ ∅ ∨ x = 0̄ ∨ x = 1̄ ∨ … ∨ x = (n−1)‾).

The first disjunct ∅ ≠ ∅ in (ii) is to guarantee that the result is a well-formed sentence, even when n = 0. We work part (ii). By induction on n.

Basis: We need to show (∀x < ∅)(∅ ≠ ∅). But this is easy with T6.47.

1.  | j < ∅              A (g (∀I))
2.  | | ∅ = ∅            A (c ¬I)
3.  | | j ≮ ∅            from T6.47
4.  | | ⊥                1,3 ⊥I
5.  | ∅ ≠ ∅              2-4 ¬I
6.  (∀x < ∅)(∅ ≠ ∅)      1-5 (∀I)

Assp: For 0 ≤ i < k, Q ⊢ND (∀x < ī)(∅ ≠ ∅ ∨ x = 0̄ ∨ … ∨ x = (i−1)‾).

Show: Q ⊢ND (∀x < k̄)(∅ ≠ ∅ ∨ x = 0̄ ∨ … ∨ x = (k−1)‾). When i = k−1, by assumption Q ⊢ND (∀x < (k−1)‾)(∅ ≠ ∅ ∨ x = 0̄ ∨ … ∨ x = (k−2)‾); observe that in the case when i = 0 (k = 1) this series remains defined but reduces to ∅ ≠ ∅, since it contains all the members up to (k−2)‾ and there are not any; when i = 1 (k = 2) the series is ∅ ≠ ∅ ∨ x = 0̄; and so forth. Here are the main outlines of the derivation.

1.  (∀x < (k−1)‾)(∅ ≠ ∅ ∨ x = 0̄ ∨ … ∨ x = (k−2)‾)            by assp
2.  | j < k̄                                                    A (g (∀I))
3.  | j = ∅ ∨ ∃y(j = Sy)                                        from Q7
4.  | | j = ∅                                                   A (g 3∨E)
5.  | | ∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (k−1)‾                        4 ∨I
6.  | | ∃y(j = Sy)                                              A (g 3∨E)
7.  | | | j = Sl                                                A (g 6∃E)
8.  | | | ∃v(Sv + j = k̄)                                        2 abv
9.  | | | | Sh + j = k̄                                          A (g 8∃E)
10. | | | | Sh + Sl = k̄                                         9,7 =E
11. | | | | S(Sh + l) = S(k−1)‾                                  10 with Q4
12. | | | | Sh + l = (k−1)‾                                      11 with Q2
13. | | | | ∃v(Sv + l = (k−1)‾)                                  12 ∃I
14. | | | | l < (k−1)‾                                           13 abv
15. | | | | ∅ ≠ ∅ ∨ l = 0̄ ∨ … ∨ l = (k−2)‾                     1,14 (∀E)
16. | | | | ∅ ≠ ∅ ∨ j = 1̄ ∨ … ∨ j = (k−1)‾                     7,15 with T8.15
17. | | | | ∅ ≠ ∅ ∨ j = 0̄ ∨ j = 1̄ ∨ … ∨ j = (k−1)‾            16 ∨I
18. | | | ∅ ≠ ∅ ∨ j = 0̄ ∨ j = 1̄ ∨ … ∨ j = (k−1)‾              8,9-17 ∃E
19. | | ∅ ≠ ∅ ∨ j = 0̄ ∨ j = 1̄ ∨ … ∨ j = (k−1)‾                6,7-18 ∃E
20. | ∅ ≠ ∅ ∨ j = 0̄ ∨ j = 1̄ ∨ … ∨ j = (k−1)‾                  3,4-5,6-19 ∨E
21. (∀x < k̄)(∅ ≠ ∅ ∨ x = 0̄ ∨ x = 1̄ ∨ … ∨ x = (k−1)‾)          2-20 (∀I)

So Q ⊢ND (∀x < k̄)(∅ ≠ ∅ ∨ x = 0̄ ∨ x = 1̄ ∨ … ∨ x = (k−1)‾).

Indct: So for any n, Q ⊢ND (∀x < n̄)(∅ ≠ ∅ ∨ x = 0̄ ∨ x = 1̄ ∨ … ∨ x = (n−1)‾).

From Q7, either j is zero or it is not. If j is zero, then the result is easy. If j is a successor, then (with a little work) there is an l < (k−1)‾ to which we may apply the assumption; once we have done that, it is a short step to the result again.

E8.32. Complete the demonstration of T8.16 by showing part (i). Hint: You have the basis already from T6.46.

8.4.5

Case

The next theorem is a sort of mirror to T8.16, and illustrates a pattern of reasoning
we have already seen in application to extended disjunctions.

CHAPTER 8. MATHEMATICAL INDUCTION

411

T8.17. For any n, (i) Q ⊢ND ∀x[(x = 0̄ ∨ x = 1̄ ∨ … ∨ x = n̄) → x ≤ n̄] and (ii) Q ⊢ND ∀x[(∅ ≠ ∅ ∨ x = 0̄ ∨ … ∨ x = (n−1)‾) → x < n̄].

Again I illustrate just (ii). For any n and a ≤ n we show, by induction on the value of a, that Q ⊢ND (∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (a−1)‾) → j < n̄; the case when a = n gives Q ⊢ND (∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (n−1)‾) → j < n̄; and the desired result follows immediately by ∀I. Observe that when a = 0 the series reduces to ∅ ≠ ∅ as before.

Basis: a = 0. We need Q ⊢ND ∅ ≠ ∅ → j < n̄.

1.  | ∅ ≠ ∅           A (g →I)
2.  | | j ≮ n̄         A (c ¬E)
3.  | | ∅ = ∅         =I
4.  | | ⊥             3,1 ⊥I
5.  | j < n̄           2-4 ¬E
6.  ∅ ≠ ∅ → j < n̄    1-5 →I

Assp: For any i, 0 ≤ i < k ≤ n, Q ⊢ND (∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (i−1)‾) → j < n̄.

Show: Q ⊢ND (∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (k−1)‾) → j < n̄.

1.  (∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (k−2)‾) → j < n̄                 assp
2.  | ∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (k−2)‾ ∨ j = (k−1)‾             A (g →I)
3.  | | ∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (k−2)‾                         A (g 2∨E)
4.  | | j < n̄                                                    1,3 →E
5.  | | j = (k−1)‾                                               A (g 2∨E)
6.  | | (k−1)‾ < n̄                                               T8.14 (since k ≤ n, k−1 < n)
7.  | | j < n̄                                                    6,5 =E
8.  | j < n̄                                                      2,3-4,5-7 ∨E
9.  (∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (k−1)‾) → j < n̄                  2-8 →I

So Q ⊢ND (∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (k−1)‾) → j < n̄.

Indct: For any a ≤ n, Q ⊢ND (∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (a−1)‾) → j < n̄; so with a = n, Q ⊢ND (∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (n−1)‾) → j < n̄.

So by ∀I, Q ⊢ND ∀x[(∅ ≠ ∅ ∨ x = 0̄ ∨ … ∨ x = (n−1)‾) → x < n̄]. The basis is easy. Once we set it up by ∨E, the show is easy too. Observe the use of T8.14 in the second case: since k ≤ n, k−1 < n; so by T8.14, Q ⊢ND (k−1)‾ < n̄. The next theorem does not require mathematical induction at all.

T8.18. For any n, (i) Q ⊢ND ∀x[n̄ ≤ x → (n̄ = x ∨ Sn̄ ≤ x)] and (ii) Q ⊢ND ∀x[n̄ < x → (Sn̄ = x ∨ Sn̄ < x)].

Again I illustrate (ii).

1.  | n̄ < j                           A (g →I)
2.  | ∃v(Sv + n̄ = j)                  1 abv
3.  | | Sk + n̄ = j                    A (g 2∃E)
4.  | | k = ∅ ∨ ∃y(k = Sy)            from Q7
5.  | | | k = ∅                        A (g 4∨E)
6.  | | | S∅ + n̄ = j                  3,5 =E
7.  | | | S∅ + n̄ = Sn̄                 from T8.8
8.  | | | j = Sn̄                       6,7 =E
9.  | | | j = Sn̄ ∨ Sn̄ < j             8 ∨I
10. | | | ∃y(k = Sy)                   A (g 4∨E)
11. | | | | k = Sl                     A (g 10∃E)
12. | | | | k + Sn̄ = j                 from 3 with T8.11
13. | | | | Sl + Sn̄ = j                12,11 =E
14. | | | | ∃v(Sv + Sn̄ = j)            13 ∃I
15. | | | | Sn̄ < j                     14 abv
16. | | | | j = Sn̄ ∨ Sn̄ < j           15 ∨I
17. | | | j = Sn̄ ∨ Sn̄ < j             10,11-16 ∃E
18. | | j = Sn̄ ∨ Sn̄ < j               4,5-9,10-17 ∨E
19. | j = Sn̄ ∨ Sn̄ < j                 2,3-18 ∃E
20. n̄ < j → (j = Sn̄ ∨ Sn̄ < j)        1-19 →I
21. ∀x[n̄ < x → (x = Sn̄ ∨ Sn̄ < x)]    20 ∀I

From Q7, either k is zero or it is not. If k is zero, it is a simple addition problem to show that j = Sn̄, and so obtain the desired result. If k is a successor, then Sn̄ < j and again we have the desired result.

With these theorems in hand, we are ready to obtain the result at which we have been aiming.
T8.19. For any n, (i) Q ⊢ND ∀x(x ≤ n̄ ∨ n̄ ≤ x) and (ii) Q ⊢ND ∀x(x < n̄ ∨ x = n̄ ∨ n̄ < x).

We show (ii). By induction on n we show Q ⊢ND j < n̄ ∨ j = n̄ ∨ n̄ < j; the result immediately follows by ∀I.

Basis: n = 0. We need to show that Q ⊢ND j < 0̄ ∨ j = 0̄ ∨ 0̄ < j.

1.  j = ∅ ∨ ∃y(j = Sy)         from Q7
2.  | j = ∅                     A (g 1∨E)
3.  | j = 0̄ ∨ 0̄ < j            2 ∨I
4.  | ∃y(j = Sy)                A (g 1∨E)
5.  | | j = Sk                  A (g 4∃E)
6.  | | Sk + ∅ = Sk             from Q3
7.  | | Sk + ∅ = j              6,5 =E
8.  | | ∃v(Sv + ∅ = j)          7 ∃I
9.  | | ∅ < j                   8 abv
10. | | j = 0̄ ∨ 0̄ < j          9 ∨I
11. | j = 0̄ ∨ 0̄ < j            4,5-10 ∃E
12. j = 0̄ ∨ 0̄ < j              1,2-3,4-11 ∨E
13. j < 0̄ ∨ j = 0̄ ∨ 0̄ < j     12 ∨I

Assp: For any i, 0 ≤ i < k, Q ⊢ND j < ī ∨ j = ī ∨ ī < j.

Show: Q ⊢ND j < k̄ ∨ j = k̄ ∨ k̄ < j.

1.  j < (k−1)‾ ∨ j = (k−1)‾ ∨ (k−1)‾ < j              by assumption
2.  | j < (k−1)‾                                       A (g 1∨E)
3.  | ∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (k−2)‾                 from 2 with T8.16
4.  | ∅ ≠ ∅ ∨ j = 0̄ ∨ … ∨ j = (k−1)‾                 3 ∨I
5.  | j < k̄                                            from 4 with T8.17
6.  | j < k̄ ∨ j = k̄ ∨ k̄ < j                          5 ∨I
7.  | j = (k−1)‾                                       A (g 1∨E)
8.  | (k−1)‾ < k̄                                       T8.14 (k−1 < k)
9.  | j < k̄                                            8,7 =E
10. | j < k̄ ∨ j = k̄ ∨ k̄ < j                          9 ∨I
11. | (k−1)‾ < j                                       A (g 1∨E)
12. | j = k̄ ∨ k̄ < j                                   from 11 with T8.18
13. | j < k̄ ∨ j = k̄ ∨ k̄ < j                          12 ∨I
14. j < k̄ ∨ j = k̄ ∨ k̄ < j                            1,2-6,7-10,11-13 ∨E

So Q ⊢ND j < k̄ ∨ j = k̄ ∨ k̄ < j.

Indct: For any n, Q ⊢ND j < n̄ ∨ j = n̄ ∨ n̄ < j; and the desired result follows by ∀I.

Note the use of theorems T8.16, T8.17 and T8.18. In the first case of the show, we convert from one inequality to another by switching to an extended disjunction, adding a disjunct, and then converting back to the second inequality. Also, again, you should be clear about how the extended disjunctions work. If k−1 = 0, then the disjunction at (3) reduces to ∅ ≠ ∅ and the one at (4) to ∅ ≠ ∅ ∨ j = 0̄. But this is just why we have been sure that there is some formula in these cases, so that the argument continues to work.
E8.33. Complete the demonstration of T8.19 by showing part (i) of T8.17, T8.18
and then T8.19.
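Read semantically, T8.16, T8.17, and T8.19 make elementary claims about the numbers, which can be sanity-checked by brute force over an initial segment before working out the derivations. A sketch (names and ranges are mine, not the text's):

```python
def trichotomy_holds(n, bound=50):
    # T8.19(ii), read semantically: every x is < n, = n, or > n
    return all(x < n or x == n or n < x for x in range(bound))

def bounded_disjunction_holds(n, bound=50):
    # T8.16(ii) and T8.17(ii), read semantically:
    # x < n iff x is one of 0, ..., n-1 (the initial "0 != 0" disjunct is vacuous)
    return all((x < n) == any(x == m for m in range(n)) for x in range(bound))

assert all(trichotomy_holds(n) for n in range(10))
assert all(bounded_disjunction_holds(n) for n in range(10))
print("trichotomy and bounded-disjunction checks pass")
```

The content of the theorems is of course not the semantic facts but that Q derives them; still, a semantic check like this confirms that the extended disjunctions have been set up with the right bounds (note the n = 0 case, where the disjunction is empty apart from the vacuous first disjunct).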

8.4.6 Case

Finally, three theorems to round out results about inequality.
T8.20. For any n and formula P(x), (i) if Q ⊢ND P(0̄) or Q ⊢ND P(1̄) or … or Q ⊢ND P(n̄), then Q ⊢ND (∃x ≤ n̄)P(x); and (ii) if 0 ≠ 0 or Q ⊢ND P(0̄) or … or Q ⊢ND P((n−1)‾), then Q ⊢ND (∃x < n̄)P(x).

In the second case, again, we include the first disjunct to keep the conditional defined in the case when n = 0; then the conditional obtains because the antecedent does not. This theorem is nearly trivial. (i) For some m ≤ n, suppose Q ⊢ND P(m̄); by T8.14, Q ⊢ND m̄ ≤ n̄; so by (∃I), Q ⊢ND (∃x ≤ n̄)P(x). Similarly for (ii).

If P is true of some individual ≤ n or < n, then it is immediate that the corresponding bounded existential generalization is true.

*T8.21. For any n and formula P(x), (i) if Q ⊢ND P(0̄) and Q ⊢ND P(1̄) and … and Q ⊢ND P(n̄), then Q ⊢ND (∀x ≤ n̄)P(x); and (ii) if 0 = 0 and Q ⊢ND P(0̄) and … and Q ⊢ND P((n−1)‾), then Q ⊢ND (∀x < n̄)P(x).

This time, in the second case, we include a trivial truth in order to keep the conditional defined when n = 0; when n = 0, the antecedent is trivially true, but the consequent follows from nothing. The argument is by induction on the value of n.

If Q proves P for each individual ≤ n or < n, then Q proves the corresponding bounded universal generalization.
*T8.22. For any n, (i) Q ⊢ND ∀x[x ≤ n̄ ↔ (x < n̄ ∨ x = n̄)], and (ii) Q ⊢ND ∀x[x < n̄ ↔ (x ≤ n̄ ∧ x ≠ n̄)].

Hint: You will be able to move between the long disjunctions on the one hand, and inequalities of the different types on the other. Part (i) does not require induction. For (ii), it will be helpful to begin by showing, by induction on a, that for any a ≤ n, Q ⊢ND j < ā → j ≠ n̄; the case when a = n gives Q ⊢ND j < n̄ → j ≠ n̄.

In the obvious way, we are able to express s ≤ t in terms of s < t, and similarly s < t in terms of s ≤ t.
*E8.34. Provide derivations to show both parts of T8.21.
*E8.35. Provide derivations to show both parts of T8.22.
E8.36. After a few days studying mathematical logic, Zeno hits upon what he thinks is conclusive proof that all is one. He argues, by mathematical induction, that all the members of any n-tuple are identical. From this, he considers the n-tuple consisting of you and Mount Rushmore, and concludes that you are identical; similarly for you and G.W. Bush, and so forth. What is the matter with Zeno's reasoning? Hint: Is the reasoning at the show stage truly arbitrary? Does it apply to any k?
Basis: If A is a 1-tuple, then it is of the sort ⟨o⟩, and every member of ⟨o⟩ is identical. So every member of A is identical.

Assp: For any i, 1 ≤ i < k, all the members of any i-tuple are identical.

Show: All the members of any k-tuple are identical.

If A is a k-tuple, then it is of the form ⟨o1 … o(k−2), o(k−1), ok⟩. But both ⟨o1 … o(k−2), o(k−1)⟩ and ⟨o1 … o(k−2), ok⟩ are k−1 tuples; so by the inductive assumption, all their members are identical; but these have o1 in common and together include all the members of A; so all the members of A are identical to o1 and so to one another.

Indct: All the members of any A are identical.
E8.37. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples (iii) where the concept applies, and (iv) where
it does not. Your essay should exhibit an understanding of methods from the
text.

a. The use of the inductive assumption in an argument from mathematical induction.


b. The reason mathematical induction works as a deductive argument form.

Final Theorems of Chapter 8


T8.8 For any a, b, c ∈ U, if a + b = c, then Q ⊢ND ā + b̄ = c̄.

T8.9 For any a, b, c ∈ U, if a × b = c, then Q ⊢ND ā × b̄ = c̄.

T8.10 For any a, b ∈ U, if a ≠ b, then Q ⊢ND ā ≠ b̄.

T8.11 Q ⊢ND Sj + n̄ = j + Sn̄.

T8.12 (i) If a ≰ b, then Q ⊢ND ¬(ā ≤ b̄); and (ii) if a ≮ b, then Q ⊢ND ¬(ā < b̄).

T8.13 For any variable-free term t of L_NT, where N[t] = n, Q ⊢ND t = n̄.

T8.14 Q correctly decides atomic sentences of L_NT. For any sentence P of the sort s = t, s ≤ t or s < t, if N[P] = T then Q ⊢ND P; and if N[P] ≠ T then Q ⊢ND ¬P.

T8.15 For any n and T, if T ⊢ND x = Sy and T ⊢ND y = 0̄ ∨ y = 1̄ ∨ … ∨ y = n̄, then T ⊢ND x = S0̄ ∨ x = S1̄ ∨ … ∨ x = Sn̄.

T8.16 For any n, (i) Q ⊢ND (∀x ≤ n̄)(x = 0̄ ∨ x = 1̄ ∨ … ∨ x = n̄) and (ii) Q ⊢ND (∀x < n̄)(∅ ≠ ∅ ∨ x = 0̄ ∨ x = 1̄ ∨ … ∨ x = (n−1)‾).

T8.17 For any n, (i) Q ⊢ND ∀x[(x = 0̄ ∨ x = 1̄ ∨ … ∨ x = n̄) → x ≤ n̄] and (ii) Q ⊢ND ∀x[(∅ ≠ ∅ ∨ x = 0̄ ∨ … ∨ x = (n−1)‾) → x < n̄].

T8.18 For any n, (i) Q ⊢ND ∀x[n̄ ≤ x → (n̄ = x ∨ Sn̄ ≤ x)] and (ii) Q ⊢ND ∀x[n̄ < x → (Sn̄ = x ∨ Sn̄ < x)].

T8.19 For any n, (i) Q ⊢ND ∀x(x ≤ n̄ ∨ n̄ ≤ x) and (ii) Q ⊢ND ∀x(x < n̄ ∨ x = n̄ ∨ n̄ < x).

T8.20 For any n and formula P(x), (i) if Q ⊢ND P(0̄) or Q ⊢ND P(1̄) or … or Q ⊢ND P(n̄), then Q ⊢ND (∃x ≤ n̄)P(x); and (ii) if 0 ≠ 0 or Q ⊢ND P(0̄) or … or Q ⊢ND P((n−1)‾), then Q ⊢ND (∃x < n̄)P(x).

T8.21 For any n and formula P(x), (i) if Q ⊢ND P(0̄) and Q ⊢ND P(1̄) and … and Q ⊢ND P(n̄), then Q ⊢ND (∀x ≤ n̄)P(x); and (ii) if 0 = 0 and Q ⊢ND P(0̄) and … and Q ⊢ND P((n−1)‾), then Q ⊢ND (∀x < n̄)P(x).

T8.22 For any n, (i) Q ⊢ND ∀x[x ≤ n̄ ↔ (x < n̄ ∨ x = n̄)], and (ii) Q ⊢ND ∀x[x < n̄ ↔ (x ≤ n̄ ∧ x ≠ n̄)].

Part III

Classical Metalogic: Soundness and Adequacy

Introductory

In Part I we introduced four notions of validity. In this part, we set out to show that they are interrelated as follows.

                           Validity in AD
                         ↗↙              |
   Logical Validity ←  Semantic Validity ↕
                         ↘↖              |
                           Validity in ND

An argument is semantically valid iff it is valid in the derivation systems. So the three formal notions apply to exactly the same arguments. And if an argument is semantically valid, then it is logically valid. So any of the formal notions imply logical validity for a corresponding ordinary argument.

More carefully, in Part I we introduced four main notions of validity. There are logical validity from chapter 1, semantic validity from chapter 4, and syntactic validity in the derivation systems AD, from chapter 3, and ND, from chapter 6. We turn in this part to the task of thinking about these notions, and especially about how they are related. The primary result is that Γ ⊨ P iff Γ ⊢AD P iff Γ ⊢ND P (iff Γ ⊢ND+ P). Thus our different formal notions of validity are met by just the same arguments, and the derivation systems, themselves defined in terms of form, are faithful to the semantic notion: what is derivable is neither more nor less than what is semantically valid. And this is just right: If what is derivable were more than what is semantically valid, derivations could lead us from true premises to false conclusions; if it were less, not all semantically valid arguments could be identified as such by derivations. That the derivable is no more than what is semantically valid is known as soundness of a derivation system; that it is no less is adequacy. In addition, we show that if an argument is semantically valid, then a corresponding ordinary argument is logically valid. Given the equivalence between the formal notions of validity, it follows that if an argument is valid in any of the formal senses, then it is logically valid. This connects the formal machinery to the notion of validity with which we began.²
We begin in chapter 9 showing that just the same arguments are valid in the derivation systems ND and AD. This puts us in a position to demonstrate in chapter 10 the core result that the derivation systems are both sound and adequate. Chapter 11 fills out this core picture in different directions.

² Adequacy is commonly described as completeness. However, this only invites confusion with theory completeness as described in Part IV.

Chapter 9

Preliminary Results

We have said that the aim of this part is to establish the following relations: An argument is semantically valid iff it is valid in AD; iff it is valid in ND; and if an argument is semantically valid, then it is logically valid.

                           Validity in AD
                         ↗↙              |
   Logical Validity ←  Semantic Validity ↕
                         ↘↖              |
                           Validity in ND

In this chapter, we begin to develop these relations, taking up some of the simpler cases. We consider the leftmost horizontal arrow and the rightmost vertical ones. Thus we show that quantificational (semantic) validity implies logical validity, that validity in AD implies validity in ND, and that validity in ND implies validity in AD (and similarly for ND+). Implications between semantic validity and the syntactical notions will wait for chapter 10.

9.1

Semantic Validity Implies Logical Validity

Logical validity is defined for arguments in ordinary language. From LV, an argument is logically valid iff there is no consistent story in which all the premises are
true and the conclusion is false. Quantificational validity is defined for arguments in
421

CHAPTER 9. PRELIMINARY RESULTS

422

a formal language. From QV, an argument is quantificationally valid iff there is no


interpretation on which all the premises are true and the conclusion is not. So our
task is to show how facts about formal expressions and interpretations connect with
ordinary expressions and stories. In particular, where P1 : : : Pn =Q is an ordinary-language argument, and P10 : : : Pn0 , Q0 are the formulas of a good translation, we show that if P10 : : : Pn0  Q0 , then the ordinary argument P1 : : : Pn =Q is logically valid. The reasoning itself is straightforward. We will spend a bit more time discussing the result.
Recall our criterion of goodness for translation CG from chapter 5 (p. 136).
When we identify an interpretation function II (sentential or quantificational), we
thereby identify an intended interpretation II! corresponding to any way ! that the
world can be. For example, corresponding to the interpretation function,
II    B: Bill is happy
      H : Hill is happy

II! B D T just in case Bill is happy at !, and similarly for H. Given this, a formal
translation A0 of some ordinary A is good only if at any !, II! A0 has the same truth
value as A at !. Given this, we can show,

T9.1. For any ordinary argument P1 : : : Pn =Q, with good translation consisting of
II and P10 : : : Pn0 , Q0 , if P10 : : : Pn0  Q0 , then P1 : : : Pn =Q is logically valid.
Suppose P10 : : : Pn0  Q0 but P1 : : : Pn =Q is not logically valid. From the latter, by LV, there is some consistent story ! where each of P1 : : : Pn is true but Q is false. Since P1 : : : Pn are true at !, by CG, II! P10 D T, and . . . and II! Pn0 D T. And since ! is consistent, with Q false at !, Q is not both true and false at !; so Q is not true at !; so by CG, it is not the case that II! Q0 D T. So there is an I that makes each of IP10 D T, and . . . and IPn0 D T but not IQ0 D T; so by QV, P10 : : : Pn0 6 Q0 . This is impossible; reject the assumption: if P10 : : : Pn0  Q0 then P1 : : : Pn =Q is logically valid.
It is that easy. If there is no interpretation where P10 : : : Pn0 are true but Q0 is not, then
there is no intended interpretation where P10 : : : Pn0 are true but Q0 is not; so, by CG,
there is no consistent story where the premises are true and the conclusion is not; so
P1 : : : Pn =Q, is logically valid. So if P10 : : : Pn0  Q0 then P1 : : : Pn =Q is logically
valid.
Let us make a couple of observations: First, CG is stronger than is actually required for our application of semantic to logical validity. CG requires a biconditional
for good translation.

A is true at ! iff II! A0 D T. But our reasoning applies, for premises, just the left-to-right portion of this condition: if P is true at ! then II! P 0 D T. And for the conclusion, the reasoning goes in the opposite direction: if II! Q0 D T then Q is true at ! (so that if the consequent fails at !, then the antecedent fails at II! ). The
biconditional from CG guarantees both. But, strictly, for premises, all we need is
that truth of an ordinary expression at a story guarantees truth for the corresponding
formal one at the intended interpretation. And for a conclusion, all we need is that
truth of the formal expression on the intended interpretation guarantees truth of the
corresponding ordinary expression at the story.
Thus we might use our methods to identify logical validity even where translations are less than completely good. Consider, for example, the following argument.
(A)  Bob took a shower and got dressed
     Bob took a shower
As discussed in chapter 5 (p. 154), where II gives S the same value as Bob took a
shower and D the same as Bob got dressed, we might agree that there are cases
where II! S ^ D D T but Bob took a shower and got dressed is false. So we might
agree that the right-to-left conditional is false, and the translation is not good.
However, even if this is so, given our interpretation function, there is no situation
where Bob took a shower and got dressed is true but S ^D is F at the corresponding
intended interpretation. So the left-to-right conditional is sustained. So, even if the
translation is not good by CG, it remains possible to use our methods to demonstrate
logical validity. Since it remains that if the ordinary premise is true at a story, then
the formal expression is true at the corresponding intended interpretation, semantic
validity implies logical validity. A similar point applies to conclusions. Of course,
we already knew that this argument is logically valid. But the point applies to more
complex arguments as well.
Second, observe that our reasoning does not work in reverse. It might be that P1 : : : Pn =Q is logically valid, even though P10 : : : Pn0 6 Q0 . Finding a quantificational interpretation where P10 : : : Pn0 are true and Q0 is not shows that P10 : : : Pn0 6 Q0 . However it does not show that P1 : : : Pn =Q is not logically valid. Here is why: There may be quantificational interpretations which do not correspond to any consistent story. The situation is like this:


[Diagram: the intended interpretations form a region inside the larger region of all quantificational interpretations.]
Intended interpretations correspond to stories. If no interpretation whatsoever has


the premises true and the conclusion not, then no intended interpretation has the
premises true and conclusion not, so no consistent story makes the premises true and
the conclusion not. But it may be that some (unintended) interpretation makes the
premises true and conclusion false, even though no intended interpretation is that
way. Thus, if we were to attempt to run the above reasoning in reverse, a move from
the assumption that P10 : : : Pn0 Q0 , to the conclusion that there is a consistent story
where P1 : : : Pn are true but Q is not, would fail.
It is easy to see why there might be unintended interpretations. Consider, first,
this standard argument.
(B)  All humans are mortal
     Socrates is human
     Socrates is mortal

It is logically valid. But consider what happens when we translate into a sentential
language. We might try an interpretation function as follows.
A: All humans are mortal
H : Socrates is human
M : Socrates is mortal
with translation, A, H=M . But, of course, there is a row of the truth table on which A
and H are T and M is F. So the argument is not sententially valid. This interpretation
is unintended in the sense that it corresponds to no consistent story whatsoever. Sentential languages are sufficient to identify validity when validity results from truth
functional structure; but this argument is not valid because of truth functional structure.
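The row-of-the-truth-table point can be checked mechanically. The following sketch (the helper name and representation are ours, not the text's: premises and conclusion are given as Boolean functions of a valuation) hunts for a row with all premises true and the conclusion false:

```python
from itertools import product

def sententially_valid(premises, conclusion, letters):
    """True iff no valuation makes every premise T and the conclusion F."""
    for row in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, row))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counterexample row
    return True

# Argument (B) under the sentential translation A, H / M:
print(sententially_valid(
    [lambda v: v['A'], lambda v: v['H']],
    lambda v: v['M'],
    ['A', 'H', 'M']))  # False: the row with A, H true and M false
```

The brute-force search mirrors the truth-table test exactly: with n sentence letters there are 2^n rows, and invalidity is witnessed by any single row.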
We are in a position to expose its validity only in the quantificational case. Thus
we might have,


s: Socrates
H 1 : fo j o is humang
M 1 : fo j o is mortalg
with translation 8x.H x ! M x/, H s=M s. The argument is quantificationally valid.
And, as above, it follows that the ordinary one is logically valid.
But related problems may arise even for quantificational languages. Thus, consider,
(C)  Socrates is necessarily human
     Socrates is human
Again, the argument is logically valid. But now we end up with something like an additional relation symbol N 1 for fo j o is necessarily humang, and translation N s=H s.
And this is not quantificationally valid. Consider, for example, an interpretation with
U D f1g, Is D 1, IN D f1g, and IH D fg. Then the premise is true, but
the conclusion is not. Again, the interpretation corresponds to no consistent story.
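The interpretation just described can be checked directly. In this sketch (variable names are ours) a one-place relation symbol is assigned a subset of U, and an atomic sentence is true just in case the object assigned to the constant is in that subset:

```python
# The unintended interpretation for (C): U = {1}, I[s] = 1, I[N] = {1}, I[H] = {}
U = {1}
I_s = 1
I_N = {1}    # extension assigned to N
I_H = set()  # extension assigned to H

Ns = I_s in I_N  # the premise Ns
Hs = I_s in I_H  # the conclusion Hs
print(Ns, Hs)    # True False: premise true, conclusion not
```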
And, again, the argument includes structure that our quantificational language fails
to capture. As it turns out, modal logic is precisely an attempt to work with structure introduced by notions of possibility and necessity. Where the box ‘□’ represents necessity, this argument, with translation □H s=H s, is valid on standard modal systems.
The upshot of this discussion is that our methods are adequate when they work
to identify validity. When an argument is semantically valid, we can be sure that it
is logically valid. But we are not in a position to identify all the arguments that are
logically valid. Thus quantificational invalidity does not imply logical invalidity. We
should not be discouraged by this or somehow put off the logical project. Rather, we
have a rationale for expanding the logical project! In Part I, we set up formal logic as
a tool or machine to identify logical validity. Beginning with the notion of logical validity, we introduce our formal languages, learn to translate into them, and to
manipulate arguments by semantical and syntactical methods. The sentential notions
have some utility. But when it turns out that sentential languages miss important
structure, we expand the language to include quantificational structure, developing
the semantical and syntactical methods to match. And similarly, if our quantificational languages should turn out to miss important structure, we expand the language
to capture that structure, and further develop the semantical and syntactical methods.
As it happens, the classical quantificational logic we have so far seen is sufficient to
identify validity in a wide variety of contexts and, in particular, for arguments in


mathematics. Also, controversy may be introduced as one expands beyond the classical quantificational level. So the logical project is a live one. But let us return to
the kinds of validity we have already seen.
E9.1. (i) Recast the above reasoning to show directly a corollary to T9.1: If  Q0 , then Q is necessarily true (that is, true in any consistent story). (ii) Suppose 6 Q0 ; does it follow that Q is not necessary (that is, not true in some consistent story)? Explain.

9.2 Validity in AD Implies Validity in ND

It is easy to see that if Γ `AD P , then Γ `ND P . Roughly, anything we can accomplish in AD, we can accomplish in ND as well. If a premise appears in an AD
derivation, that same premise can be used in ND. If an axiom appears in an AD derivation, that axiom can be derived in ND. And if a line is justified by MP or Gen in AD,
that same line may be justified by rules of ND. So anything that can be derived in AD
can be derived in ND. Officially, this reasoning is by induction on the line numbers
of an AD derivation, and it is appropriate to work out the details more formally. The
argument by mathematical induction is longer than anything we have seen so far, but
the reasoning is straightforward.
T9.2. If Γ `AD P , then Γ `ND P .
Suppose Γ `AD P . Then there is an AD derivation A D hQ1 : : : Qn i of P from premises in Γ, with Qn D P . We show that there is a corresponding ND derivation N , such that if Qi appears on line i of A, then Qi appears, under the scope of the premises alone, on the line numbered i of N . It follows that Γ `ND P . For any premises Qa , Qb , . . . Qj in A, let N begin,
0.a Qa        P
0.b Qb        P
::
:
0.j Qj        P

Now we reason by induction on the line numbers in A. The general plan is


to construct a derivation N which accomplishes just what is accomplished in
A. Fractional line numbers, as above, maintain the parallel between the two
derivations.
Basis: Q1 in A is a premise or an instance of A1, A2, A3, A4, A5, A6 or A7.


(prem) If Q1 is a premise Qi , continue N as follows,

0.a Qa        P
0.b Qb        P
::
:
0.j Qj        P
1   Qi        0.i R

So Q1 appears, under the scope of the premises alone, on the line


numbered 1 of N .
(A1) If Q1 is an instance of A1, then it is of the form, B ! .C ! B/, and we continue N as follows,

0.a Qa                 P
0.b Qb                 P
::
:
0.j Qj                 P
1.1   B                A (g, !I)
1.2     C              A (g, !I)
1.3     B              1.1 R
1.4   C ! B            1.2-1.3 !I
1   B ! .C ! B/        1.1-1.4 !I

So Q1 appears, under the scope of the premises alone, on the line


numbered 1 of N .
(A2) If Q1 is an instance of A2, then it is of the form, .B ! .C ! D// ! ..B ! C/ ! .B ! D//, and we continue N as follows,

0.a Qa                 P
0.b Qb                 P
::
:
0.j Qj                 P
1.1   B ! .C ! D/      A (g, !I)
1.2     B ! C          A (g, !I)
1.3       B            A (g, !I)
1.4       C            1.2,1.3 !E
1.5       C ! D        1.1,1.3 !E
1.6       D            1.5,1.4 !E
1.7     B ! D          1.3-1.6 !I
1.8   .B ! C/ ! .B ! D/                      1.2-1.7 !I
1   .B ! .C ! D// ! ..B ! C/ ! .B ! D//      1.1-1.8 !I

CHAPTER 9. PRELIMINARY RESULTS

428

So Q1 appears, under the scope of the premises alone, on the line


numbered 1 of N .
(A3) Homework.
(A4) If Q1 is an instance of A4, then it is of the form 8xB ! Btx for some variable x and term t that is free for x in B, and we continue N as follows,

0.a Qa             P
0.b Qb             P
::
:
0.j Qj             P
1.1   8xB          A (g, !I)
1.2   Btx          1.1 8E
1   8xB ! Btx      1.1-1.2 !I

Since we are given that t is free for x in B, the parallel requirement on


8E is met at line 1.2. So Q1 appears, under the scope of the premises
alone, on the line numbered 1 of N .
(A5) Homework.
(A6) If Q1 is an instance of A6, then it is of the form .xi D y/ ! .hn x1 : : : xi : : : xn D hn x1 : : : y : : : xn / for some variables x1 : : : xn and y and function symbol hn ; and we continue N as follows,

0.a Qa             P
0.b Qb             P
::
:
0.j Qj             P
1.1   xi D y                                            A (g, !I)
1.2   hn x1 : : : xi : : : xn D hn x1 : : : xi : : : xn     =I
1.3   hn x1 : : : xi : : : xn D hn x1 : : : y : : : xn      1.2,1.1 =E
1   .xi D y/ ! .hn x1 : : : xi : : : xn D hn x1 : : : y : : : xn /    1.1-1.3 !I

So Q1 appears, under the scope of the premises alone, on the line


numbered 1 of N .
(A7) Homework.
Assp: For any i , 1  i < k, if Qi appears on line i of A, then Qi appears,
under the scope of the premises alone, on the line numbered i of N .
Show: If Qk appears on line k of A, then Qk appears, under the scope of the
premises alone, on the line numbered k of N .


Qk in A is a premise, an axiom, or arises from previous lines by MP


or Gen. If Qk is a premise or an axiom then, by reasoning as in the
basis (with line numbers adjusted to k:n) if Qk appears on line k of
A, then Qk appears, under the scope of the premises alone, on the line
numbered k of A. So suppose Qk arises by MP or Gen.
(MP) If Qk arises from previous lines by MP, then A is as follows,
i   B
::
:
j   B ! C
::
:
k   C        i,j MP

where i; j < k and Qk is C. By assumption, then, there are lines in


N,
i B
::
:
j B!C

So we simply continue derivation N ,


i   B
::
:
j   B ! C
::
:
k   C        i,j !E

So Qk appears under the scope of the premises alone, on the line numbered k of N .
(Gen) If Qk arises from previous lines by Gen, then A is as follows,
i   B ! C
::
:
k   B ! 8xC      i Gen

where i < k, variable x is not free in B, and Qk is B ! 8xC. By


assumption N has a line i ,
::
:
i B!C
::
:


under the scope of the premises alone. So we continue N as follows,

i   B ! C
::
:
k:1   B          A (g, !I)
k:2   C          i, k:1 !E
k:3   8xC        k:2 8I
k   B ! 8xC      k:1-k:3 !I

Since k:1 is the only undischarged assumption, and we are given that x
is not free in B, x is not free in any undischarged assumption. Further,
since there is no change of variables, we can be sure that x is free for
every free instance of x in C, and that x is not free in 8xC. So the
restrictions are met on 8I at line k:3. So Qk appears under the scope
of the premises alone, on the line numbered k of N .
In any case then, Qk appears under the scope of the premises alone,
on the line numbered k of N .
Indct: For any line j of A, Qj appears under the scope of the premises alone,
on the line numbered j of N .
So Γ `ND Qn , where this is just to say Γ `ND P . So T9.2, if Γ `AD P , then Γ `ND P .
Notice the way we use line numbers, i:1, i:2,. . . i:n, i in N to make good on the claim
that for each Qi in A, Qi appears on the line numbered i of N where the line
numbered i may or may not be the ith line of N . We need this parallel between the
line numbers when it comes to cases for MP and Gen. With the parallel, we are in a
position to make use of line numbers from justifications in derivation A, directly in
the specification of derivation N .
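The recipe can be pictured as a mechanical transformation. This sketch (a toy representation of ours, not the text's official notation: formulas are strings, justifications are written as in A) shows the two simplest cases, with premises reiterated and MP steps becoming !E steps under the same line numbers; axioms would be derived at fractional lines i.1, i.2, . . . as in the basis:

```python
def ad_to_nd(ad_lines, premises):
    """Map an AD derivation to ND lines: premises get numbers 0.a, 0.b, ...
    and the AD line numbered i yields an ND line numbered i."""
    nd = [("0." + chr(ord('a') + k), p, "P") for k, p in enumerate(premises)]
    for i, (formula, just) in enumerate(ad_lines, start=1):
        if just == "prem":
            src = next(num for num, f, _ in nd if f == formula)
            nd.append((str(i), formula, src + " R"))       # reiterate the premise
        elif just.endswith("MP"):
            nd.append((str(i), formula, just.split()[0] + " ->E"))
        else:  # an axiom: the full recipe derives it at i.1, i.2, ...
            nd.append((str(i), formula, just + " (derived as in the basis)"))
    return nd

# First three lines of derivation (D):
for line in ad_to_nd(
        [("B->C", "prem"),
         ("(B->C)->(A->(B->C))", "A1"),
         ("A->(B->C)", "1,2 MP")],
        ["B->C", "A->B"]):
    print(line)
```

Because the ND line for AD line i is numbered i, the justification "1,2 MP" carries over directly as "1,2 ->E", which is exactly why the parallel numbering matters.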
Given an AD derivation, what we have done shows that there exists an ND derivation, by showing how to construct it. We can see into how this works, by considering
an application. Thus, for example, consider the derivation of T3.2 on p. 71.

(D)
1. B ! C                                      prem
2. .B ! C/ ! .A ! .B ! C//                    A1
3. A ! .B ! C/                                1,2 MP
4. .A ! .B ! C// ! ..A ! B/ ! .A ! C//        A2
5. .A ! B/ ! .A ! C/                          3,4 MP
6. A ! B                                      prem
7. A ! C                                      5,6 MP

Let this be derivation A; we will follow the method of our induction to construct a
corresponding ND derivation N . The first step is to list the premises.

0.1 B ! C        P
0.2 A ! B        P

Now to the induction itself. The first line of A is a premise. Looking back to the basis
case of the induction, we see that we are instructed to produce the line numbered 1
by reiteration. So that is what we do.
0.1 B ! C        P
0.2 A ! B        P
1   B ! C        0.1 R

This may strike you as somewhat pointless! But, again, we need B ! C on the line
numbered 1 in order to maintain the parallel between the derivations. So our recipe
requires this simple step.
Line 2 of A is an instance of A1, and the induction therefore tells us to get it by
reasoning as in the basis. Looking then to the case for A1 in the basis, we continue
on that pattern as follows,
0.1 B ! C                        P
0.2 A ! B                        P
1   B ! C                        0.1 R
2.1   B ! C                      A (g, !I)
2.2     A                        A (g, !I)
2.3     B ! C                    2.1 R
2.4   A ! .B ! C/                2.2-2.3 !I
2   .B ! C/ ! .A ! .B ! C//      2.1-2.4 !I

Notice that this reasoning for the show step now applies to line 2, so that the line
numbers are 2.1, 2.2, 2.3, 2.4, 2 instead of 1.1, 1.2, 1.3, 1.4, 1 as for the basis. Also,
what we have added follows exactly the pattern from the recipe in the induction,
given the relevant instance of A1.
Line 3 is justified by 1,2 MP. Again, by the recipe from the induction, we continue,
0.1 B ! C                        P
0.2 A ! B                        P
1   B ! C                        0.1 R
2.1   B ! C                      A (g, !I)
2.2     A                        A (g, !I)
2.3     B ! C                    2.1 R
2.4   A ! .B ! C/                2.2-2.3 !I
2   .B ! C/ ! .A ! .B ! C//      2.1-2.4 !I
3   A ! .B ! C/                  1,2 !E


Notice that the line numbers of the justification are identical to those in the justification from A. And similarly, we are in a position to generate each line in A. Thus, for
example, line 4 of A is an instance of A2. So we would continue with lines 4.1-4.8
and 4 to generate the appropriate instance of A2. And so forth. As it turns out, the
resultant ND derivation is not very efficient! But it is a derivation, and our point is
merely to show that some ND derivation of the same result exists. So if `AD P ,
then `ND P .
*E9.2. Set up the above induction for T9.2, and complete the unfinished cases to
show that if `AD P , then `ND P . For cases completed in the text, you
may simply refer to the text, as the text refers cases to homework.

E9.3. (i) Where A is the derivation for T3.2, complete the process of finding the
corresponding derivation N . Hint: if you follow the recipe correctly, the
result should have exactly 21 lines. (ii) This derivation N is not very efficient!
See if you can find an ND derivation to show A ! B, B ! C `ND A ! C
that takes fewer than 10 lines.

E9.4. Consider the axiomatic system A3 as described for E8.11 on p. 390, and
produce a complete demonstration that if `A3 P , then `ND P .

9.3 Validity in ND Implies Validity in AD

Perhaps the result we have just attained is obvious: if `AD P , then of course
`ND P . But the other direction may be less obvious. Insofar as AD may seem
to have fewer resources than ND, one might wonder whether it is the case that if
`ND P , then `AD P . But, in fact, it is possible to do in AD whatever can be
done in ND. To show this, we need a couple of preliminary results. I begin with an
important result known as the deduction theorem, turn to some substitution theorems,
and finally to the intended result that whatever is provable in ND is provable in AD.

9.3.1 Deduction Theorem

According to the deduction theorem (subject to an important restriction), if there is an AD derivation of Q from the members of some set of sentences Γ plus P , then there is an AD derivation of P ! Q from the members of Γ alone: if Γ [ fP g `AD Q then Γ `AD P ! Q. In practice, this lets us reason just as we do with !I.

(E)
      members of Γ
  a.    P
  b.    Q
  c.  P ! Q        a-b deduction theorem

At (b), there is a derivation of Q from the members of Γ plus P . At (c), the assumption is discharged to indicate a derivation of P ! Q from the members of Γ alone. By the deduction theorem, if there is a derivation of Q from Γ plus P , then there is a derivation of P ! Q from Γ alone. Here is the restriction: The discharge
of an auxiliary assumption P is legitimate just in case no application of Gen under its
scope generalizes on a variable free in P . The effect is like that of the ND restriction
on 8I; here, though, the restriction is not on Gen, but rather on the discharge of
auxiliary assumptions. In the one case, an assumption available for discharge is one
such that no application of Gen under its scope is to a variable free in the assumption;
in the other, we cannot apply 8I to a variable free in an undischarged assumption (so
that, effectively, every assumption is always available for discharge).
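The restriction itself amounts to a simple check. In this sketch (the representation is ours: free variables and generalized variables are given directly as sets and lists), an assumption may be discharged just in case no Gen application under its scope generalizes on one of its free variables:

```python
def may_discharge(assumption_free_vars, gen_vars_under_scope):
    """Discharge is legitimate iff no Gen under the scope of the
    assumption generalizes on a variable free in the assumption."""
    return not (set(assumption_free_vars) & set(gen_vars_under_scope))

print(may_discharge({'x'}, ['y', 'z']))  # True: discharge allowed
print(may_discharge({'x'}, ['x']))       # False: Gen on x, which is free in the assumption
```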
Again, our strategy is to show that given one derivation, it is possible to construct another. In this case, we begin with an AD derivation (A) as below, with premises Γ [ fP g. Treating P as an auxiliary premise, with scope as indicated in (B), we set out to show that there is an AD derivation (C), with premises in Γ alone, and lines numbered 1, 2, . . . corresponding to 1, 2, . . . in (A).

(F)
(A) 1. Q1
    2. Q2
    ::
    :
       P
    ::
    :
    n. Qn

(B) 1. Q1
    2. Q2
    ::
    :
       P        (auxiliary premise, with scope to the end)
    ::
    :
    n. Qn

(C) 1. P ! Q1
    2. P ! Q2
    ::
    :
       P ! P
    ::
    :
    n. P ! Qn

That is, we construct a derivation with premises in Γ such that for any formula A on line i of the first derivation, P ! A appears on the line numbered i of the constructed derivation. The last line n of the resultant derivation is the desired result, Γ `AD P ! Q.
T9.3. (Deduction Theorem) If Γ [ fP g `AD Q, and no application of Gen under the scope of P is to a variable free in P , then Γ `AD P ! Q.
Suppose A D hQ1 ; Q2 ; : : : Qn i is an AD derivation of Q from Γ [ fP g, where Q is Qn and no application of Gen under the scope of P is to a variable free in P . By induction on the line numbers in derivation A, we show there


is a derivation C with premises only in Γ, such that for any line i of A, P ! Qi appears on the line numbered i of C . The case when i D n gives the desired result, that Γ `AD P ! Q.
Basis: Q1 of A is an axiom, a member of Γ, or P itself.
(i) If Q1 is an axiom or a member of Γ, then begin C as follows,

1.1 Q1                     axiom / premise
1.2 Q1 ! .P ! Q1 /         A1
1   P ! Q1                 1.1, 1.2 MP

(ii) Q1 is P itself. By T3.1, `AD P ! P ; which is to say P ! Q1 ; so begin derivation C ,

1 P ! P        T3.1

In either case, P ! Q1 appears on the line numbered 1 of C with premises in Γ alone.
Assp: For any i , 1  i < k, P ! Qi appears on the line numbered i of C , with premises in Γ alone.
Show: P ! Qk appears on the line numbered k of C , with premises in Γ alone.
Qk of A is a member of Γ, an axiom, P itself, or arises from previous lines by MP or Gen. If Qk is a member of Γ, an axiom or P itself then, by reasoning as in the basis, P ! Qk appears on the line numbered k of C from premises in Γ alone. So two cases remain.
(MP) If Qk arises from previous lines by MP, then there are lines in derivation A of the sort,
i   B
::
:
j   B ! C
::
:
k   C        i,j MP

where i; j < k and Qk is C . By assumption, there are lines in C ,


i P !B
::
:
j P ! .B ! C/

So continue derivation C as follows,


i   P ! B
::
:
j   P ! .B ! C/
::
:
k.1 .P ! .B ! C// ! ..P ! B/ ! .P ! C//        A2
k.2 .P ! B/ ! .P ! C/                          j, k.1 MP
k   P ! C                                      i, k.2 MP

So P ! Qk appears on the line numbered k of C , with premises in Γ alone.
(Gen) If Qk arises from a previous line by Gen, then there are lines in derivation A of the sort,
i B!C
::
:
k B ! 8xC

where i < k, Qk is B ! 8xC and x is not free in B. Either line k is


under the scope of P in derivation A or not.
(i) If line k is not under the scope of P , then B ! 8xC in A follows
from alone. So continue C as follows to derive B ! 8xC and
apply A1,
k.1 Q1
k.2 Q2
::
:
k.k B ! 8xC                              exactly as in A but with prefix k. for numeric references
k.k+1 .B ! 8xC/ ! .P ! .B ! 8xC//        A1
k   P ! .B ! 8xC/                        k.k+1, k.k MP

Since each of the lines in A up to k is derived from Γ alone, we have P ! Qk on the line numbered k of C , from premises in Γ alone.
(ii) If line k is under the scope of P , we depend on the assumption, and
continue C as follows,
i   P ! .B ! C/                  (by inductive assumption)
::
:
k.1 P ! 8x.B ! C/                i Gen
k.2 8x.B ! C/ ! .B ! 8xC/        T3.31
k   P ! .B ! 8xC/                k.1, k.2 T3.2


If line k is under the scope of P then, since no application of Gen under the scope of P is to a variable free in P , x is not free in P ; so k.1 meets the restriction on Gen. And since Gen is applied to line k in A, x is not free in B; so line k.2 meets the restriction on T3.31. So we have P ! Qk on the line numbered k of C , from premises in Γ alone.
Indct: For any i , P ! Qi appears on the line numbered i of C , from premises in Γ alone.
So given an AD derivation of Q from Γ [ fP g, where no application of Gen under the scope of assumption P is to a variable free in P , there is sure to be an AD derivation of P ! Q from Γ alone. Notice that Gen*, T3.30 and T3.31 abbreviate sequences which include applications of Gen. So the restriction on Gen for the deduction theorem applies to applications of these results as well.
As a sample application of the deduction theorem (DT), let us consider another derivation of T3.2. In this case, Γ D fA ! B; B ! C g, and we argue as follows,

(G)
1. A ! B        prem
2. B ! C        prem
3.   A          assp (g, DT)
4.   B          1,3 MP
5.   C          2,4 MP
6. A ! C        3-5 DT

At line (5) we have established that Γ [ fAg `AD C ; it follows from the deduction theorem that Γ `AD A ! C . But we should be careful: this is not an AD derivation of A ! C from A ! B and B ! C . And it is not an abbreviation in the sense that we have seen so far: we do not appeal to a result whose derivation could be inserted at that very stage. Rather, what we have is a demonstration, via the deduction theorem, that there exists an AD derivation of A ! C from the premises. If there is any abbreviating, the entire derivation abbreviates, or indicates the existence of, another. Our proof of the deduction theorem shows us that, given a derivation of Γ [ fP g `AD Q, it is possible to construct a derivation for Γ `AD P ! Q.
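The induction behind the deduction theorem is itself recipe-like. This sketch (a toy representation of ours: formulas are strings, and each line carries a kind marking how it was justified) emits, for each line Qi of the given derivation, the corresponding P -> Qi line of the constructed one, with the strategy used in each case of the proof of T9.3:

```python
def discharge(lines, P):
    """For each (formula, kind) in an AD derivation from premises plus P,
    emit P -> formula, tagged with the case from the proof of T9.3."""
    out = []
    for formula, kind in lines:
        if kind in ("prem", "axiom"):
            out.append(("(%s)->(%s)" % (P, formula), "A1 + MP"))
        elif kind == "assp":            # the line is P itself
            out.append(("(%s)->(%s)" % (P, P), "T3.1"))
        else:                           # kind == "mp"
            out.append(("(%s)->(%s)" % (P, formula), "A2 + 2 x MP"))
    return out

# Derivation (G), lines 1-5, discharging the assumption A:
for line in discharge(
        [("A->B", "prem"), ("B->C", "prem"), ("A", "assp"),
         ("B", "mp"), ("C", "mp")], "A"):
    print(line)
```

Running the transformation once corresponds to one application of DT; in example (H), the output would itself be fed back through with P = B.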
Let us see how this works in the example. Lines 1-5 become our derivation A, with Γ D fA ! B; B ! C g. For each Qi in derivation A, the induction tells us how to derive A ! Qi from Γ alone. Thus Qi on the first line is a member of Γ: reasoning from the basis tells us to use A1 as follows,
1.1 A ! B                          prem
1.2 .A ! B/ ! .A ! .A ! B//        A1
1   A ! .A ! B/                    1.2,1.1 MP

CHAPTER 9. PRELIMINARY RESULTS

437

to get A ! the form on line 1 of A. Notice that we are again using fractional
line numbers to make lines in derivation A correspond to lines in the constructed
derivation. One may wonder why we bother getting A ! Q1 . And again, the
answer is that our recipe calls for this ingredient at stages connected to MP and
Gen. Similarly, we can use A1 to get A ! the form on line (2).
1.1 A ! B                          prem
1.2 .A ! B/ ! .A ! .A ! B//        A1
1   A ! .A ! B/                    1.2,1.1 MP
2.1 B ! C                          prem
2.2 .B ! C/ ! .A ! .B ! C//        A1
2   A ! .B ! C/                    2.2,2.1 MP

The form on line (3) is A itself. If we wanted a derivation in the primitive system,
we could repeat the steps in our derivation of T3.1. But we will simply continue, as
in the induction,
1.1 A ! B                          prem
1.2 .A ! B/ ! .A ! .A ! B//        A1
1   A ! .A ! B/                    1.2,1.1 MP
2.1 B ! C                          prem
2.2 .B ! C/ ! .A ! .B ! C//        A1
2   A ! .B ! C/                    2.2,2.1 MP
3   A ! A                          T3.1

to get A ! the form on line (3) of A. The form on line (4) arises from lines (1)
and (3) by MP; reasoning in our show step tells us to continue,
1.1 A ! B                                      prem
1.2 .A ! B/ ! .A ! .A ! B//                    A1
1   A ! .A ! B/                                1.2,1.1 MP
2.1 B ! C                                      prem
2.2 .B ! C/ ! .A ! .B ! C//                    A1
2   A ! .B ! C/                                2.2,2.1 MP
3   A ! A                                      T3.1
4.1 .A ! .A ! B// ! ..A ! A/ ! .A ! B//        A2
4.2 .A ! A/ ! .A ! B/                          4.1,1 MP
4   A ! B                                      4.2,3 MP

using A2 to get A ! B. Notice that the original justification from lines (1) and (3)
dictates the appeal to (1) at line (4.2) and to (3) at line (4). The form on line (5) arises
from lines (2) and (4) by MP; so, finally, we continue,


1.1 A ! B                                      prem
1.2 .A ! B/ ! .A ! .A ! B//                    A1
1   A ! .A ! B/                                1.2,1.1 MP
2.1 B ! C                                      prem
2.2 .B ! C/ ! .A ! .B ! C//                    A1
2   A ! .B ! C/                                2.2,2.1 MP
3   A ! A                                      T3.1
4.1 .A ! .A ! B// ! ..A ! A/ ! .A ! B//        A2
4.2 .A ! A/ ! .A ! B/                          4.1,1 MP
4   A ! B                                      4.2,3 MP
5.1 .A ! .B ! C// ! ..A ! B/ ! .A ! C//        A2
5.2 .A ! B/ ! .A ! C/                          5.1,2 MP
5   A ! C                                      5.2,4 MP

And we have the AD derivation which our proof of the deduction theorem told us
there would be. Notice that this derivation is not very efficient! We did it in seven
lines (without appeal to T3.1) in chapter 3. What our proof of the deduction theorem tells us is that there is sure to be some derivation; there is no expectation that the guaranteed derivation is particularly elegant or efficient.
Here is a last example which makes use of the deduction theorem. First, an
alternate derivation of T3.3.
(H)
1. A ! .B ! C/        prem
2.   B                assp (g, DT)
3.     A              assp (g, DT)
4.     B ! C          1,3 MP
5.     C              4,2 MP
6.   A ! C            3-5 DT
7. B ! .A ! C/        2-6 DT

In chapter 3 we proved T3.3 in five lines (with an appeal to T3.2). But perhaps this
version is relatively intuitive, coinciding as it does, with strategies from ND. In this
case, there are two applications of DT, and reasoning from the induction therefore
applies twice. First, at line (5), there is an AD derivation of C from fA ! .B !
C/; Bg [ fAg. By reasoning from the induction, then, there is an AD derivation from
just fA ! .B ! C/; Bg with A ! each of the forms on lines 1-5. So there
is a derivation of A ! C from fA ! .B ! C /; Bg. But then reasoning from
the induction applies again. By reasoning from the induction applied to this new
derivation, there is a derivation from just A ! .B ! C / with B ! each of the
forms in it. So there is a derivation of B ! .A ! C/ from just A ! .B ! C/.
So the first derivation, lines 1-5 above, is replaced by another, by the reasoning from


DT. Then it is replaced by another, again given the reasoning from DT. The result is
an AD derivation of the desired result.
Here are a couple more cases, where the latter at least, may inspire a certain
affection for the deduction theorem.
T9.4. `AD A ! .B ! .A ^ B//
T9.5. `AD .A ! C/ ! ..B ! C/ ! ..A _ B/ ! C//
E9.5. Making use of the deduction theorem, prove T9.4 and T9.5. Having done so,
see if you can prove them in the style of chapter 3, without any appeal to DT.

E9.6. By the method of our proof of the deduction theorem, convert the above
derivation (H) for T3.3 into an official AD derivation. Hint: As described
above, the method of the induction applies twice: first to lines 1-5, and then
to the new derivation. The result should be derivations with 13, and then 37
lines.

E9.7. Consider the axiomatic system A2 from E3.4 on p. 77, and produce a demonstration of the deduction theorem for it. That is, show that if [ fP g `A2 Q,
then `A2 P ! Q. You may appeal to any of the A2 theorems listed on 77.

9.3.2 Substitution Theorems

Recall what we are after. Our goal is to show that if Γ ⊢ND P, then Γ ⊢AD P. Toward this end, the deduction theorem lets AD mimic rules in ND which require subderivations. For equality, we turn to some substitution results. Say a complex term r is free in an expression P just in case no variable in r is bound. Then where T is any term or formula, let T^{r//s} be T where at most one free instance of r is replaced by term s. Having shown in T3.37 that ⊢AD (q_i = s) → (R^n q_1 … q_i … q_n → R^n q_1 … s … q_n), one might think we have proved that ⊢AD (r = s) → (A → A^{r//s}) for any atomic formula A and any terms r and s. But this is not so. Similarly, having proved in T3.36 that ⊢AD (q_i = s) → (h^n q_1 … q_i … q_n = h^n q_1 … s … q_n), one might think we have proved that ⊢AD (r = s) → (t = t^{r//s}) for any terms r, s, and t. But this is not so. In each case, the difficulty is that the replaced term r might be a component of the other terms q_1 … q_n, and so might not be any of q_1 … q_n. What we have shown is only that it is possible to replace any of the whole terms q_1 … q_n. Thus (x = y) → (f¹g¹x = f¹g¹y) is not an instance of T3.36, because we do not replace g¹x but rather a component of it.
However, as one might expect, it is possible to replace terms in basic parts, use the result to make replacements in the terms of which they are parts, and so forth, all the way up to wholes. Both (x = y) → (g¹x = g¹y) and (g¹x = g¹y) → (f¹g¹x = f¹g¹y) are instances of T3.36. (Be clear about these examples in your mind.) From these, with T3.2, it follows that (x = y) → (f¹g¹x = f¹g¹y). This example suggests a method for obtaining the more general results: using T3.36, we work from equalities at the level of the parts to equalities at the level of the whole. For the case of terms, the proof is by induction on the number of function symbols in an arbitrary term t.
T9.6. For arbitrary terms r, s, and t, ⊢AD (r = s) → (t = t^{r//s}).

Basis: If t has no function symbols, then t is a variable or a constant. In this case, either (i) r ≠ t and t^{r//s} = t (nothing is replaced) or (ii) r = t and t^{r//s} = s (all of t is replaced). (i) In this case, by T3.32, ⊢AD t = t; which is to say, ⊢AD t = t^{r//s}; so with A1, ⊢AD (r = s) → (t = t^{r//s}). (ii) In this case, (r = s) → (t = t^{r//s}) is the same as (r = s) → (r = s); so by T3.1, ⊢AD (r = s) → (t = t^{r//s}).

Assp: For any i, 0 ≤ i < k, if t has i function symbols, then ⊢AD (r = s) → (t = t^{r//s}).

Show: If t has k function symbols, then ⊢AD (r = s) → (t = t^{r//s}).

If t has k function symbols, then t is of the form h^n q_1 … q_n for terms q_1 … q_n with < k function symbols. If all of t is replaced, or no part of t is replaced, then reason as in the basis. So suppose r is some subcomponent of t; then for some q_i, t^{r//s} is h^n q_1 … q_i^{r//s} … q_n. By assumption, ⊢AD (r = s) → (q_i = q_i^{r//s}); and by T3.36, ⊢AD (q_i = q_i^{r//s}) → (h^n q_1 … q_i … q_n = h^n q_1 … q_i^{r//s} … q_n); so by T3.2, ⊢AD (r = s) → (h^n q_1 … q_i … q_n = h^n q_1 … q_i^{r//s} … q_n); but this is to say, ⊢AD (r = s) → (t = t^{r//s}).

Indct: For any terms r, s, and t, ⊢AD (r = s) → (t = t^{r//s}).

We might think of this result as a further strengthened or generalized version of the AD axiom A6. Where A6 lets us replace just variables in terms of the sort h^n x_1 … x_n, we are now in a position to make replacements in arbitrary terms with arbitrary terms.
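The inductive structure of T9.6, recursing through subterms q_i down to variables and constants, is easy to mirror computationally. Here is a minimal Python sketch (the tuple representation and function names are my own, not the book's): a variable or constant is a string, h^n q_1 … q_n is a tuple ('h', q1, …, qn), and the function returns every result of replacing at most one occurrence of r in t by s, just as t^{r//s} is t with at most one free instance of r replaced.

```python
# Terms as nested tuples: a variable or constant is a string, and a
# complex term h^n(q1, ..., qn) is ('h', q1, ..., qn).  replace_one
# mirrors T9.6's induction on the number of function symbols in t.

def replace_one(t, r, s):
    """All results of replacing at most one occurrence of r in t by s."""
    results = [t]                       # the "nothing is replaced" case
    if t == r:
        results.append(s)               # all of t is replaced
    if isinstance(t, tuple):            # t is h^n q1 ... qn
        head, args = t[0], t[1:]
        for i, q in enumerate(args):    # replace inside exactly one qi
            for q_new in replace_one(q, r, s):
                if q_new != q:
                    results.append((head,) + args[:i] + (q_new,) + args[i + 1:])
    return results

# The book's example: with t = f1g1x, replacing the subterm g1x by y
# yields f1y; replacing the variable x by y yields f1g1y.
t = ('f', ('g', 'x'))
print(replace_one(t, ('g', 'x'), 'y'))  # [('f', ('g', 'x')), ('f', 'y')]
print(replace_one(t, 'x', 'y'))         # includes ('f', ('g', 'y'))
```

As in the proof, the recursion bottoms out at function-symbol-free terms (the basis), and each recursive call works on a q_i with strictly fewer function symbols.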


Now we can go after a similarly strengthened version of A7. We show that for any formula A, if s is free for any replaced instance of r in A^{r//s}, then ⊢AD (r = s) → (A → A^{r//s}). The argument is by induction on the number of operators in A.

T9.7. For any formula A and terms r and s, if s is free for any replaced instance of r in A, then ⊢AD (r = s) → (A → A^{r//s}).

Consider arbitrary r, s, and A, and suppose s is free for any replaced instance of r in A^{r//s}.
Basis: If A is atomic, then (i) A^{r//s} = A (nothing is replaced), or (ii) A is an atomic of the form R^n t_1 … t_i … t_n and A^{r//s} is R^n t_1 … t_i^{r//s} … t_n. (i) In this case, by T3.1, ⊢AD A → A, which is to say ⊢AD A → A^{r//s}; so with A1, ⊢AD (r = s) → (A → A^{r//s}). (ii) In this case, by T9.6, ⊢AD (r = s) → (t_i = t_i^{r//s}); and by T3.37, ⊢AD (t_i = t_i^{r//s}) → (R^n t_1 … t_i … t_n → R^n t_1 … t_i^{r//s} … t_n); so by T3.2, ⊢AD (r = s) → (R^n t_1 … t_i … t_n → R^n t_1 … t_i^{r//s} … t_n); and this is just to say, ⊢AD (r = s) → (A → A^{r//s}).

Assp: For any i, 0 ≤ i < k, if A has i operator symbols and s is free for any replaced instance of r in A, then ⊢AD (r = s) → (A → A^{r//s}).

Corollary to the assumption: If A has < k operators, then A^{r//s} has < k operators; and since s replaces only a free instance of r in A, r is free for the replacing instance of s in A^{r//s}; so where the outer substitution is made to sustain (A^{r//s})^{s//r} = A, we have ⊢AD (s = r) → (A^{r//s} → (A^{r//s})^{s//r}) as an instance of the inductive assumption, which is just ⊢AD (s = r) → (A^{r//s} → A). And by T3.33, ⊢AD (r = s) → (s = r); so with T3.2, ⊢AD (r = s) → (A^{r//s} → A).

Show: If A has k operator symbols and s is free for any replaced instance of r in A, then ⊢AD (r = s) → (A → A^{r//s}).

If A has k operator symbols, then A is of the form ¬P, P → Q, or ∀xP for variable x and formulas P and Q with < k operator symbols. Suppose s is free for any replaced instance of r in A.

(¬) Suppose A is ¬P. Then A^{r//s} is (¬P)^{r//s}, which is the same as ¬(P^{r//s}). Since s is free for a replaced instance of r in A, it is free for that instance of r in P; so by the corollary to the assumption, ⊢AD (r = s) → (P^{r//s} → P). But by T3.13, ⊢AD (P^{r//s} → P) → (¬P → ¬P^{r//s}); so by T3.2, ⊢AD (r = s) → (¬P → ¬P^{r//s}); which is to say, ⊢AD (r = s) → (A → A^{r//s}).


(→) Suppose A is P → Q. Then A^{r//s} is P^{r//s} → Q or P → Q^{r//s}. (i) In the former case, since s is free for a replaced instance of r in A, it is free for that instance of r in P; so by the corollary to the assumption, ⊢AD (r = s) → (P^{r//s} → P); so we may reason as follows:

    1.  (r = s) → (P^{r//s} → P)                  prem
    2.  | r = s                                   assp (g, DT)
    3.  | | P → Q                                 assp (g, DT)
    4.  | | | P^{r//s}                            assp (g, DT)
    5.  | | | P^{r//s} → P                        1,2 MP
    6.  | | | P                                   5,4 MP
    7.  | | | Q                                   3,6 MP
    8.  | | P^{r//s} → Q                          4-7 DT
    9.  | (P → Q) → (P^{r//s} → Q)                3-8 DT
    10. (r = s) → ((P → Q) → (P^{r//s} → Q))      2-9 DT

So ⊢AD (r = s) → ((P → Q) → (P^{r//s} → Q)); which is to say, ⊢AD (r = s) → (A → A^{r//s}). (ii) And similarly in the other case [by homework], ⊢AD (r = s) → ((P → Q) → (P → Q^{r//s})). So in either case, ⊢AD (r = s) → (A → A^{r//s}).
(∀) Suppose A is ∀xP. Then a free instance of r in A remains free in P, and A^{r//s} is ∀xP^{r//s}. Since s is free for r in A, s is free for r in P; so by assumption, ⊢AD (r = s) → (P → P^{r//s}); so we may reason as follows:

    1. (r = s) → (P → P^{r//s})           prem
    2. | r = s                            assp (g, DT)
    3. | ∀xP → P                          A4
    4. | P → P^{r//s}                     1,2 MP
    5. | ∀xP → P^{r//s}                   3,4 T3.2
    6. | ∀xP → ∀xP^{r//s}                 5 Gen
    7. (r = s) → (∀xP → ∀xP^{r//s})       2-6 DT

Notice that x is sure to be free for itself in P, so that (3) is an instance of A4. And x is bound in ∀xP, so (6) is an instance of Gen. And because r is free in A, and s is free for r in A, x cannot be a variable in r or s; so the restriction on DT is met at (7). So ⊢AD (r = s) → (∀xP → ∀xP^{r//s}); which is to say, ⊢AD (r = s) → (A → A^{r//s}).

So for any A with k operator symbols, ⊢AD (r = s) → (A → A^{r//s}).


Indct: For any A, ⊢AD (r = s) → (A → A^{r//s}).

So T9.7: for any formula A and terms r and s, if s is free for a replaced instance of r in A, then ⊢AD (r = s) → (A → A^{r//s}).
It is a short step from T9.7, which allows substitution of just a single term, to T9.8, which allows substitution of arbitrarily many. Where, as in chapter 6, P^{t/s} is P with some, but not necessarily all, free instances of term t replaced by term s,

T9.8. For any formula A and terms r and s, if s is free for the replaced instances of r in A, then ⊢AD (r = s) → (A → A^{r/s}).
By induction on the number of instances of r that are replaced by s in A. Say A_i is A with i free instances of r replaced by s. Suppose s is free for the replaced instances of r in A. We show that for any i, ⊢AD (r = s) → (A → A_i).

Basis: If no instances of r are replaced by s, then A_0 = A. But by T3.1, ⊢AD A → A, and by A1, ⊢AD (A → A) → ((r = s) → (A → A)); so by MP, ⊢AD (r = s) → (A → A); which is to say, ⊢AD (r = s) → (A → A_0).

Assp: For any i, 0 ≤ i < k, ⊢AD (r = s) → (A → A_i).

Show: ⊢AD (r = s) → (A → A_k).

A_k is of the sort A_i^{r//s} for i < k. By assumption, then, ⊢AD (r = s) → (A → A_i), and by T9.7, ⊢AD (r = s) → (A_i → A_i^{r//s}), which is the same as ⊢AD (r = s) → (A_i → A_k). So reason as follows:

    1. (r = s) → (A → A_i)         by assumption
    2. (r = s) → (A_i → A_k)       T9.7
    3. | r = s                     assp (g, DT)
    4. | A → A_i                   1,3 MP
    5. | A_i → A_k                 2,3 MP
    6. | A → A_k                   4,5 T3.2
    7. (r = s) → (A → A_k)         3-6 DT

Since s is free for the replaced instances of r in A, (2) is an instance of T9.7. So ⊢AD (r = s) → (A → A_k).

Indct: For any i, ⊢AD (r = s) → (A → A_i).


In effect, the result is by multiple applications of T9.7. No matter how many instances
of r have been replaced by s, we may use T9.7 to replace another!
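That iteration can be pictured computationally. In this hedged Python sketch (the representation is invented for illustration, and it assumes s does not itself contain r), a one-instance replacement step is applied repeatedly until no instance of r remains, just as T9.8 chains applications of T9.7 through A, A_1, A_2, and so on.

```python
# Expressions as nested tuples with string leaves.  step performs one
# replacement (the analogue of a single application of T9.7); replace_all
# iterates it (the analogue of T9.8's induction on replaced instances).
# Caveat: this loop assumes s does not contain r, else it would not halt.

def step(t, r, s):
    """Replace the leftmost occurrence of r in t by s (one step)."""
    if t == r:
        return s
    if isinstance(t, tuple):
        for i, q in enumerate(t[1:], start=1):
            q2 = step(q, r, s)
            if q2 != q:
                return t[:i] + (q2,) + t[i + 1:]
    return t

def replace_all(t, r, s):
    """Iterate one-step replacements until a fixed point is reached."""
    while True:
        t2 = step(t, r, s)
        if t2 == t:
            return t
        t = t2

# Replacing both instances of x in R(x, f(x)) takes two steps:
print(replace_all(('R', 'x', ('f', 'x')), 'x', 'y'))  # ('R', 'y', ('f', 'y'))
```

Each pass changes one instance, so after i passes the expression corresponds to A_i; the loop terminates exactly when A_k = A^{r/s} with every targeted instance replaced.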
A final substitution result allows substitution of formulas rather than terms. Where A^{B//C} is A with exactly one instance of a subformula B replaced by formula C,

T9.9. For any formulas A, B, and C, if ⊢AD B ↔ C, then ⊢AD A ↔ A^{B//C}.

The proof is by induction on the number of operators in A. If you have understood the previous two inductions, this one should be straightforward. Observe that, in the basis, when A is atomic, B can only be all of A, and A^{B//C} is C. For the show, either B is all of A or it is not. If it is, then the result holds by reasoning as in the basis. If B is a proper part of A, then the assumption applies.

*E9.8. Set up the above demonstration for T9.7 and complete the unfinished case to provide a complete demonstration that for any formula A and terms r and s, if s is free for the replaced instance of r in A, then ⊢AD (r = s) → (A → A^{r//s}).

E9.9. Suppose our primitive operators are ¬, ∧, and ∃ rather than ¬, →, and ∀. Modify your argument for T9.7 to show that for any formula A and terms r and s, if s is free for the replaced instance of r in A, then ⊢AD (r = s) → (A → A^{r//s}). Hint: Do not forget that you may appeal to T9.4.

*E9.10. Prove T9.9: show that for any formulas A, B, and C, if ⊢AD B ↔ C, then ⊢AD A ↔ A^{B//C}. Hint: Where P ↔ Q abbreviates (P → Q) ∧ (Q → P), you can use (abv) along with T3.19, T3.20, and T9.4 to manipulate formulas of the sort P ↔ Q.

E9.11. Where A^{B/C} replaces some, but not necessarily all, instances of formula B with formula C, use your result from E9.10 to show that if ⊢AD B ↔ C, then ⊢AD A ↔ A^{B/C}.

9.3.3 Intended Result

We are finally ready to show that if Γ ⊢ND P, then Γ ⊢AD P. As usual, the idea is that the existence of one derivation guarantees the existence of another. In this case, we begin with a derivation in ND and move to the existence of one in AD. Suppose Γ ⊢ND P. Then there is an ND derivation N of P from premises in Γ, with lines ⟨Q_1 … Q_n⟩ and Q_n = P. We show that there is an AD derivation A of the same result (with possible appeal to DT). Say derivation A matches N iff any Q_i from N appears at the same scope on the line numbered i of A; and say derivation A is good iff it has no application of Gen to a variable free in an undischarged auxiliary assumption. Then, given derivation N, we show that there is a good derivation A that matches N. The reason for the restriction on free variables is to be sure that DT is available at any stage in derivation A. The argument is by induction on the line number of N, where we show that for any i, there is a good derivation A_i that matches N through line i. The case when i = n is an AD derivation of P under the scope of the premises alone, and so a demonstration of the desired result.

T9.10. If Γ ⊢ND P, then Γ ⊢AD P.

Suppose Γ ⊢ND P; then there is an ND derivation N of P from premises in Γ. We show that for any i, there is a good AD derivation A_i that matches N through line i.
Basis: The first line of N is a premise or an assumption. Let A_1 be the same. Then A_1 matches N; and since there is no application of Gen, A_1 is good.

Assp: For any i, 1 ≤ i < k, there is a good derivation A_i that matches N through line i.

Show: There is a good derivation A_k that matches N through line k.

Either Q_k is a premise or assumption, or arises from previous lines by R, ∧E, ∧I, →E, →I, ¬E, ¬I, ∨E, ∨I, ↔E, ↔I, ∀E, ∀I, ∃E, ∃I, =E, or =I.

(p/a) If Q_k is a premise or an assumption, let A_k continue in the same way. Then, by reasoning as in the basis, A_k matches N and is good.
(R) If Q_k arises from previous lines by R, then N is something like this:

    i  B
    k  B         i R

where i < k, B is accessible at line k, and Q_k = B. By assumption, A_{k-1} matches N through line k-1 and is good. So B appears at the same scope on the line numbered i of A_{k-1} and is accessible in A_{k-1}. So let A_k continue as follows:

    i    B
    k.1  B → B       T3.1
    k    B           k.1, i MP

So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. And since there is no new application of Gen, A_k is good.
(∧E) If Q_k arises by ∧E, then N is something like this:

    i  B ∧ C              i  B ∧ C
                   or
    k  B        i ∧E      k  C        i ∧E

where i < k and B ∧ C is accessible at line k. In the first case, Q_k = B. By assumption, A_{k-1} matches N through line k-1 and is good. So B ∧ C appears at the same scope on the line numbered i of A_{k-1} and is accessible in A_{k-1}. So let A_k continue as follows:

    i    B ∧ C
    k.1  (B ∧ C) → B       T3.20
    k    B                 k.1, i MP

So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. And since there is no new application of Gen, A_k is good. And similarly in the other case, by application of T3.19.
(∧I) If Q_k arises from previous lines by ∧I, then N is something like this:

    i  B
    j  C
    k  B ∧ C       i,j ∧I

where i, j < k, B and C are accessible at line k, and Q_k = B ∧ C. By assumption, A_{k-1} matches N through line k-1 and is good. So B and C appear at the same scope on the lines numbered i and j of A_{k-1} and are accessible in A_{k-1}. So let A_k continue as follows:

    i    B
    j    C
    k.1  B → (C → (B ∧ C))       T9.4
    k.2  C → (B ∧ C)             k.1, i MP
    k    B ∧ C                   k.2, j MP

So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. And since there is no new application of Gen, A_k is good.
(→E) If Q_k arises from previous lines by →E, then N is something like this:

    i  B → C
    j  B
    k  C         i,j →E

where i, j < k, B → C and B are accessible at line k, and Q_k = C. By assumption, A_{k-1} matches N through line k-1 and is good. So B → C and B appear at the same scope on the lines numbered i and j of A_{k-1} and are accessible in A_{k-1}. So let A_k continue as follows:

    i  B → C
    j  B
    k  C         i,j MP

So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. And since there is no new application of Gen, A_k is good.
(→I) If Q_k arises by →I, then N is something like this:

    i  | B
    j  | C
    k  B → C       i-j →I

where i, j < k, the subderivation is accessible at line k, and Q_k = B → C. By assumption, A_{k-1} matches N through line k-1 and is good. So B and C appear at the same scope on the lines numbered i and j of A_{k-1}; since they appear at the same scope, the parallel subderivation is accessible in A_{k-1}; and since A_{k-1} is good, no application of Gen under the scope of B is to a variable free in B. So let A_k continue as follows:

    i  | B
    j  | C
    k  B → C       i-j DT

So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. And since there is no new application of Gen, A_k is good.
(¬E) If Q_k arises by ¬E (reverting to the unabbreviated form), then N is something like this:

    i  | ¬B
    j  | C ∧ ¬C
    k  B           i-j ¬E

where i, j < k, the subderivation is accessible at line k, and Q_k = B. By assumption, A_{k-1} matches N through line k-1 and is good. So ¬B and C ∧ ¬C appear at the same scope on the lines numbered i and j of A_{k-1}; since they appear at the same scope, the parallel subderivation is accessible in A_{k-1}; and since A_{k-1} is good, no application of Gen under the scope of ¬B is to a variable free in ¬B. So let A_k continue as follows:

    i    | ¬B
    j    | C ∧ ¬C
    k.1  ¬B → (C ∧ ¬C)                   i-j DT
    k.2  (C ∧ ¬C) → C                    T3.20
    k.3  (C ∧ ¬C) → ¬C                   T3.19
    k.4  ¬B → C                          k.1, k.2 T3.2
    k.5  ¬B → ¬C                         k.1, k.3 T3.2
    k.6  (¬B → ¬C) → ((¬B → C) → B)      A3
    k.7  (¬B → C) → B                    k.6, k.5 MP
    k    B                               k.7, k.4 MP

So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. And since there is no new application of Gen, A_k is good.


(¬I) Homework.
(∨E) If Q_k arises by ∨E, then N is something like this:

    f  B ∨ C
    g  | B
    h  | D
    i  | C
    j  | D
    k  D           f, g-h, i-j ∨E

where f, g, h, i, j < k, B ∨ C and the two subderivations are accessible at line k, and Q_k = D. By assumption, A_{k-1} matches N through line k-1 and is good. So the formulas at lines f, g, h, i, j appear at the same scope on corresponding lines in A_{k-1}; since they appear at the same scope, B ∨ C and the corresponding subderivations are accessible in A_{k-1}; and since A_{k-1} is good, no application of Gen under the scope of B is to a variable free in B, and no application of Gen under the scope of C is to a variable free in C. So let A_k continue as follows:

    f    B ∨ C
    g    | B
    h    | D
    i    | C
    j    | D
    k.1  B → D                                     g-h DT
    k.2  C → D                                     i-j DT
    k.3  (B → D) → ((C → D) → ((B ∨ C) → D))       T9.5
    k.4  (C → D) → ((B ∨ C) → D)                   k.3, k.1 MP
    k.5  (B ∨ C) → D                               k.4, k.2 MP
    k    D                                         k.5, f MP

So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. And since there is no new application of Gen, A_k is good.
(∨I) Homework.
(↔E) Homework.
(↔I) Homework.
(∀E) Homework.


(∀I) If Q_k arises by ∀I, then N looks something like this:

    i  B^x_v
    k  ∀xB         i ∀I

where i < k, B^x_v is accessible at line k, and Q_k = ∀xB; further, the ND restrictions on ∀I are met: (i) v is free for x in B, (ii) v is not free in any undischarged auxiliary assumption, and (iii) v is not free in ∀xB. By assumption, A_{k-1} matches N through line k-1 and is good. So B^x_v appears at the same scope on the line numbered i of A_{k-1} and is accessible in A_{k-1}. So let A_k continue as follows:

    i    B^x_v
    k.1  ∀vB^x_v               i Gen*
    k.2  ∀vB^x_v → ∀xB         T3.27
    k    ∀xB                   k.1, k.2 MP

If v is x, we have the desired result already at k.1. So suppose x ≠ v. To see that k.2 is an instance of T3.27, consider first ∀vB^x_v → ∀x(B^x_v)^v_x; this is an instance of T3.27 so long as x is not free in ∀vB^x_v but free for v in B^x_v. First, since B^x_v has all its free instances of x replaced by v, x is not free in ∀vB^x_v. Second, since v ≠ x, with constraint (iii), that v is not free in ∀xB, v is not free in B; so every free instance of v in B^x_v replaces a free instance of x; so x is free for v in B^x_v. So ∀vB^x_v → ∀x(B^x_v)^v_x is an instance of T3.27. But since v is not free in B, and by constraint (i), v is free for x in B, by T8.2, (B^x_v)^v_x = B. So k.2 is a version of T3.27.

So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. This time, there is an application of Gen in Gen* at k.1. But A_{k-1} is good, and since A_k matches N and, by (ii), v is free in no undischarged auxiliary assumption of N, v is not free in any undischarged auxiliary assumption of A_k; so A_k is good. (Notice that, in this reasoning, we appeal to each of the restrictions that apply to ∀I in N.)
(∃E) If Q_k arises by ∃E, then N looks something like this:

    h  ∃xB
    i  | B^x_v
    j  | C
    k  C           h, i-j ∃E

where h, i, j < k, ∃xB and the subderivation are accessible at line k, and Q_k = C; further, the ND restrictions on ∃E are met: (i) v is free for x in B, (ii) v is not free in any undischarged auxiliary assumption, and (iii) v is not free in ∃xB or in C. By assumption, A_{k-1} matches N through line k-1 and is good. So the formulas at lines h, i, and j appear at the same scope on corresponding lines in A_{k-1}; since they appear at the same scope, ∃xB and the corresponding subderivation are accessible in A_{k-1}. Since A_{k-1} is good, no application of Gen under the scope of B^x_v is to a variable free in B^x_v. So let A_k continue as follows:

    h    ∃xB
    i    | B^x_v
    j    | C
    k.1  B^x_v → C                                    i-j DT
    k.2  ∃vB^x_v → C                                  k.1 T3.30
    k.3  ∀v¬B^x_v → ∀x¬B                              T3.27
    k.4  (∀v¬B^x_v → ∀x¬B) → (¬∀x¬B → ¬∀v¬B^x_v)      T3.13
    k.5  ¬∀x¬B → ¬∀v¬B^x_v                            k.4, k.3 MP
    k.6  ∃xB → ∃vB^x_v                                k.5 abv
    k.7  ∃vB^x_v                                      h, k.6 MP
    k    C                                            k.2, k.7 MP

From constraint (iii), that v is not free in C, k.2 meets the restriction on T3.30. If v = x, we can go directly from h and k.2 to k. So suppose v ≠ x. Then by [homework] ∀v¬B^x_v → ∀x¬B at k.3 is an instance of T3.27. So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. There is an application of Gen in T3.30 at k.2. But A_{k-1} is good, and since A_k matches N and, by (ii), v is free in no undischarged auxiliary assumption of N, v is not free in any undischarged auxiliary assumption of A_k; so A_k is good. (Notice again that we appeal to each of the restrictions that apply to ∃E in N.)
(∃I) Homework.
(=E) Homework.
(=I) Homework.
In any case, Ak matches N through line k and is good.
Indct: Derivation A matches N and is good.


So if there is an ND derivation to show Γ ⊢ND P, then there is a matching AD derivation to show the same; so T9.10: if Γ ⊢ND P, then Γ ⊢AD P. So with T9.2, AD and ND are equivalent; that is, Γ ⊢ND P iff Γ ⊢AD P. Given this, we will often ignore the difference between AD and ND and simply write Γ ⊢ P when there is an (AD or ND) derivation of P from premises in Γ. Also, given the equivalence between the systems, we are in a position to transfer results from one system to the other without demonstrating them directly for both. We will come to appreciate this, and especially the relative simplicity of AD, as time goes by.

As before, given any ND derivation, we can use the method of our induction to find a corresponding AD derivation. For a simple example, consider the following demonstration that ¬A → (A ∧ B) ⊢ND A.
(I)
    1. ¬A → (A ∧ B)          prem
    2. | ¬A                  A (c, ¬E)
    3. | A ∧ B               1,2 →E
    4. | A                   3 ∧E
    5. | A ∧ ¬A              4,2 ∧I
    6. A                     2-5 ¬E

Given relevant cases from the induction, the corresponding AD derivation is as follows:

    1    ¬A → (A ∧ B)                        prem
    2    | ¬A                                assp
    3    | A ∧ B                             1,2 MP
    4.1  | (A ∧ B) → A                       T3.20
    4    | A                                 4.1,3 MP
    5.1  | A → (¬A → (A ∧ ¬A))               T9.4
    5.2  | ¬A → (A ∧ ¬A)                     4,5.1 MP
    5    | A ∧ ¬A                            5.2,2 MP
    6.1  ¬A → (A ∧ ¬A)                       2-5 DT
    6.2  (A ∧ ¬A) → A                        T3.20
    6.3  (A ∧ ¬A) → ¬A                       T3.19
    6.4  ¬A → A                              6.1,6.2 T3.2
    6.5  ¬A → ¬A                             6.1,6.3 T3.2
    6.6  (¬A → ¬A) → ((¬A → A) → A)          A3
    6.7  (¬A → A) → A                        6.6,6.5 MP
    6    A                                   6.7,6.4 MP

For the first two lines, we simply take over the premise and assumption from the ND derivation. For (3), the induction uses MP in AD where →E appears in ND; so that is what we do. For (4), our induction shows that we can get the effect of ∧E by appeal to T3.20 with MP. (5) in the ND derivation is by ∧I and, as above, we get the same effect by T9.4 with MP. (6) in the ND derivation is by ¬E. Following the strategy from the induction, we set up for application of A3 by getting the conditional by DT. As usual, the constructed derivation is not very efficient! You should be able to get the same result in just five lines by appeal to T3.20, T3.2, and then T3.7 (try it). But, again, the point is just to show that there always is a corresponding derivation.
*E9.12. Set up the above induction for T9.10 and complete the unfinished cases (including the case for ∃E) to show that if Γ ⊢ND P, then Γ ⊢AD P. For cases completed in the text, you may simply refer to the text, as the text refers cases to homework.

E9.13. Consider a system N2 which is like ND except that its only rules are ∧E, ∧I, ¬E, and ¬I, along with the system A2 from E3.4 on p. 77. Produce a complete demonstration that if Γ ⊢N2 P, then Γ ⊢A2 P. You may use any of the theorems for A2 from E3.4, along with DT from E9.7.

E9.14. Consider the following ND derivation and, using the method from the induction, construct a derivation to show ∃x(C ∧ Bx) ⊢AD C.

    1. ∃x(C ∧ Bx)        prem
    2. | C ∧ By          A (g, 1 ∃E)
    3. | C               2 ∧E
    4. C                 1, 2-3 ∃E

Hint: your derivation should have 12 lines.

9.4 Extending to ND+

ND+ adds fifteen rules to ND: the four inference rules MT, HS, DS, and NB, and the eleven replacement rules DN, Com, Assoc, Idem, Impl, Trans, DeM, Exp, Equiv, Dist, and QN, where some of these have multiple forms. It might seem tedious to go through all the cases but, as it happens, we have already done most of the work. First, it is easy to see that,

T9.11. If Γ ⊢ND P, then Γ ⊢ND+ P.


Suppose Γ ⊢ND P. Then there is an ND derivation N of P from premises in Γ. But since every rule of ND is a rule of ND+, N is a derivation in ND+ as well. So Γ ⊢ND+ P.
From T9.2 and T9.11, then, the situation is as follows:

    Γ ⊢AD P  --(9.2)-->  Γ ⊢ND P  --(9.11)-->  Γ ⊢ND+ P

If an argument is valid in AD, it is valid in ND, and in ND+. From T9.10, the leftmost arrow is a biconditional. Again, however, one might think that ND+ has more resources than ND, so that more could be derived in ND+ than in ND. But this is not so. To see this, we might begin with the closer systems ND and ND+ and attempt to show that anything derivable in ND+ is derivable in ND. Alternatively, we choose simply to expand the induction of the previous section to include cases for all the rules of ND+. The result is a demonstration that if Γ ⊢ND+ P, then Γ ⊢AD P. Given this, the three systems are connected in a loop, so that if there is a derivation in any one of the systems, there is a derivation in the others as well.
T9.12. If Γ ⊢ND+ P, then Γ ⊢AD P.

Suppose Γ ⊢ND+ P; then there is an ND+ derivation N of P from premises in Γ. We show that for any i, there is a good AD derivation A_i that matches N through line i.

Basis: The first line of N is a premise or an assumption. Let A_1 be the same. Then A_1 matches N; and since there is no application of Gen, A_1 is good.

Assp: For any i, 1 ≤ i < k, there is a good derivation A_i that matches N through line i.

Show: There is a good derivation A_k that matches N through line k.

Either Q_k is a premise or assumption, arises by a rule of ND, or arises by one of the ND+ derivation rules MT, HS, DS, NB, or replacement rules DN, Com, Assoc, Idem, Impl, Trans, DeM, Exp, Equiv, Dist, or QN. If Q_k is a premise or assumption or arises by a rule of ND, then by reasoning as for T9.10, there is a good derivation A_k that matches N through line k. So suppose Q_k arises by one of the ND+ rules.
(MT) If Q_k arises from previous lines by MT, then N is something like this:

    i  B → C
    j  ¬C
    k  ¬B          i,j MT

where i, j < k, B → C and ¬C are accessible at line k, and Q_k = ¬B. By assumption, A_{k-1} matches N through line k-1 and is good. So B → C and ¬C appear at the same scope on the lines numbered i and j of A_{k-1} and are accessible in A_{k-1}. So let A_k continue as follows:

    i    B → C
    j    ¬C
    k.1  (B → C) → (¬C → ¬B)       T3.13
    k.2  ¬C → ¬B                   k.1, i MP
    k    ¬B                        k.2, j MP

So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. And since there is no new application of Gen, A_k is good.

(HS) Homework.
(DS) Homework.
(NB) Homework.
(rep) If Q_k arises from a replacement rule rep of the form C ⇔ D, then N is something like this:

    i  B                            i  B
                        or
    k  B^{C//D}      i rep          k  B^{D//C}      i rep

where i < k, B is accessible at line k, and, in the first case, Q_k = B^{C//D}. By assumption, A_{k-1} matches N through line k-1 and is good. But by T6.10-T6.26, T6.29, and T6.30, ⊢ND C ↔ D; so with T9.10, ⊢AD C ↔ D; so by T9.9, ⊢AD B ↔ B^{C//D}. Call an arbitrary particular result of this sort Tx, and augment A_k as follows:

    0.k  B ↔ B^{C//D}                                           Tx
    i    B
    k.1  (B → B^{C//D}) ∧ (B^{C//D} → B)                        0.k abv
    k.2  ((B → B^{C//D}) ∧ (B^{C//D} → B)) → (B → B^{C//D})     T3.20
    k.3  B → B^{C//D}                                           k.2, k.1 MP
    k    B^{C//D}                                               k.3, i MP

So Q_k appears at the same scope on the line numbered k of A_k; so A_k matches N through line k. There may be applications of Gen in the derivation of Tx; but that derivation is under the scope of no undischarged assumption. And under the scope of any undischarged assumptions, there is no new application of Gen. So A_k is good. And similarly in the other case, with some work to flip the biconditional ⊢AD C ↔ D to ⊢AD D ↔ C.
In any case, Ak matches N through line k and is good.
Indct: Derivation A matches N and is good.
That is it! The key is that work we have already done collapses cases for all the replacement rules into one. So each of the derivation systems AD, ND, and ND+ is equivalent to the others. That is, Γ ⊢AD P iff Γ ⊢ND P iff Γ ⊢ND+ P. And that is what we set out to show.
*E9.15. Set up the above induction and complete the unfinished cases to show that if Γ ⊢ND+ P, then Γ ⊢AD P. For cases completed in the text, you may simply refer to the text, as the text refers cases to homework.

E9.16. Consider a sentential language with ¬ and ∧ primitive, along with system N2 with rules ∧E, ∧I, ¬E, and ¬I from E9.13, and A2 from E3.4 on p. 77. Suppose N2 is augmented to a system N2+ that includes rules MT and Com (for ∧). Augment your argument from E9.13 to produce a complete demonstration that if Γ ⊢N2+ P, then Γ ⊢A2 P. Hint: You will have to prove some A2 results parallel to ones for which we have merely appealed to theorems above. Do not forget that you have DT from E9.7.

E9.17. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples (iii) where the concept applies, and (iv) where
it does not. Your essay should exhibit an understanding of methods from the
text.
a. The reason semantic validity implies logical validity, but not the other way
around.
b. The notion of a constructive proof by mathematical induction.

Chapter 10

Main Results

We have introduced four notions of validity and started to think about their interrelations. In chapter 9, we showed that if an argument is semantically valid, then it is logically valid, and that an argument is valid in AD iff it is valid in ND. We turn now to the relation between these derivation systems and semantic validity. This completes the project of demonstrating that the different notions of validity are related as follows.

[Figure: the four notions of validity. Two-way arrows connect Semantic Validity with Validity in AD and with Validity in ND, and an arrow runs from Semantic Validity up to Logical Validity.]

Since AD and ND are equivalent, it is not necessary separately to establish the relations between AD and semantic validity, and between ND and semantic validity. Because it is relatively easy to reason about AD, we mostly reason about a system like AD to establish that an argument is valid in AD iff it is semantically valid. From the equivalence between AD and ND it then follows that an argument is valid in ND iff it is semantically valid.
The project divides into two parts. First, we take up the arrows from right to left, and show that if an argument is valid in AD, then it is semantically valid: if Γ ⊢AD P, then Γ ⊨ P. Thus our derivation system is sound. If a derivation system is sound, it never leads from premises that are true on an interpretation to a conclusion that is not. Second, moving in the other direction, we show that if an argument is semantically valid, then it is valid in AD: if Γ ⊨ P, then Γ ⊢AD P. Thus our derivation system is adequate. If a derivation system is adequate, there is a derivation from the premises to the conclusion for every argument that is semantically valid.

10.1 Soundness

It is easy to construct derivation systems that are not sound. For example, consider a derivation system like AD but without the restriction on A4 that the substituted term t be free for the variable x in formula P. Given this, we might reason as follows:
(A)
    1. ∀x∃y¬(x = y)                        prem
    2. ∀x∃y¬(x = y) → ∃y¬(y = y)           A4
    3. ∃y¬(y = y)                          1,2 MP

y is not free for x in ∃y¬(x = y); so line (2) is not an instance of A4. And it is a good thing: consider any interpretation with at least two elements in U. Then it is true that for every x there is some y not identical to it; so the premise is true. But there is no y in U that is not identical to itself; so the conclusion is not true. So the true premise leads to a conclusion that is not true, and the derivation system is not sound.
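The restriction at issue, that t be free for x in P, is a purely syntactic condition, so it can be checked mechanically. The following Python sketch (the formula representation and names are invented for illustration, not from the text) implements the check and confirms that y is not free for x in ∃y¬(x = y), while a fresh variable z is.

```python
# Formulas as tuples ('all', v, P), ('exists', v, P), ('not', P),
# ('imp', P, Q), or atomic such as ('=', t1, t2); terms are variable
# strings or tuples ('f', t1, ..., tn).

def term_vars(t):
    """The set of variables occurring in term t."""
    if isinstance(t, str):
        return {t}
    return set().union(*(term_vars(u) for u in t[1:])) if t[1:] else set()

def free_for(t, x, P, bound=frozenset()):
    """True iff no free occurrence of x in P lies within the scope of a
    quantifier binding a variable of t."""
    op = P[0]
    if op in ('all', 'exists'):
        v, body = P[1], P[2]
        if v == x:
            return True                      # x is not free in P at all
        return free_for(t, x, body, bound | {v})
    if op == 'not':
        return free_for(t, x, P[1], bound)
    if op == 'imp':
        return all(free_for(t, x, Q, bound) for Q in P[1:])
    # atomic: a capture occurs iff x appears here and some variable of t
    # is bound at this point
    occurs = any(x in term_vars(u) for u in P[1:])
    return not (occurs and term_vars(t) & bound)

P = ('exists', 'y', ('not', ('=', 'x', 'y')))   # the premise's matrix
print(free_for('y', 'x', P))   # False: substituting y would be captured
print(free_for('z', 'x', P))   # True: a fresh variable is safe
```

This is exactly why line (2) above fails to be an instance of A4: the free x in ∃y¬(x = y) sits inside the scope of ∃y, so substituting y for it would capture the new occurrence.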
We would like to show that AD is sound: there is no sequence of moves, no matter how complex or clever, that leads from premises that are true to a conclusion that is not true. The argument itself is straightforward: suppose Γ ⊢AD P; then there is an AD derivation A = ⟨Q_1 … Q_n⟩ of P with Q_n = P. By induction on line numbers in A, we show that for any i, Γ ⊨ Q_i. The case when i = n is the desired result. So if Γ ⊢AD P, then Γ ⊨ P. This general strategy should by now be familiar. However, for the case involving A4, it will be helpful to obtain a pair of preliminary results.

10.1.1 Switching Theorems

In this section, we develop a couple of theorems which link substitutions into formulas and terms with substitutions in variable assignments. As we have seen before, the results are a matched pair, with a first result for terms that feeds into the basis clause for a result about formulas. Perhaps the hardest part is not so much the proofs of the theorems as understanding what the theorems say. So let us turn to the first.
Suppose we have some terms t and r with interpretation I and variable assignment d. Say I_d[r] = o. Then the first proposition is this: term t is assigned the same object on I_d(x|o) as t^x_r is assigned on I_d. Intuitively, this is because the same object is fed into the x-place of the term in each case. With t and d(x|o),

(B)   t:        hⁿ . . . x . . .
                         |
      d(x|o):        . . . o . . .

object o is the input to the slot occupied by x. But we are given that I_d[r] = o. So with t^x_r and d,

(C)   t^x_r:    hⁿ . . . r . . .
                         |
      d:             . . . o . . .

object o is the input into the slot that was occupied by x. So if I_d[r] = o, then I_d(x|o)[t] = I_d[t^x_r]. In the one case, we guarantee that object o goes into the x-place by meddling with the variable assignment. In the other, we get the same result by meddling with the term. Be sure you are clear about this in your own mind. This will be our first result.

T10.1. For any interpretation I, variable assignment d, with terms t and r, if I_d[r] = o, then I_d(x|o)[t] = I_d[t^x_r].

For arbitrary terms t and r, with interpretation I and variable assignment d, suppose I_d[r] = o. By induction on the number of function symbols in t, I_d(x|o)[t] = I_d[t^x_r].

Basis: If t has no function symbols, then it is a constant or a variable. Either t is the variable x or it is not. (i) Suppose t is a constant or variable other than x; then t^x_r = t (no replacement is made); but d and d(x|o) assign just the same things to variables other than x; so they assign just the same things to any variable in t; so by T8.3, I_d[t] = I_d(x|o)[t]. So I_d[t^x_r] = I_d(x|o)[t]. (ii) If t is x, then t^x_r is r (all of t is replaced by r); so I_d[t^x_r] = I_d[r] = o. But t is x; so I_d(x|o)[t] = I_d(x|o)[x]; and by TA(v), I_d(x|o)[x] = d(x|o)[x] = o. So I_d[t^x_r] = I_d(x|o)[t].

Assp: For any i, 0 ≤ i < k, for t with i function symbols, I_d[t^x_r] = I_d(x|o)[t].

Show: If t has k function symbols, then I_d[t^x_r] = I_d(x|o)[t].
If t has k function symbols, then it is of the form hⁿs₁ . . . sₙ where s₁ . . . sₙ have < k function symbols. In this case, t^x_r = [hⁿs₁ . . . sₙ]^x_r = hⁿ[s₁]^x_r . . . [sₙ]^x_r. So I_d[t^x_r] = I_d[hⁿ[s₁]^x_r . . . [sₙ]^x_r]; by TA(f), this is I[hⁿ]⟨I_d[s₁^x_r] . . . I_d[sₙ^x_r]⟩. Similarly, I_d(x|o)[t] = I_d(x|o)[hⁿs₁ . . . sₙ]; and by TA(f), this is I[hⁿ]⟨I_d(x|o)[s₁] . . . I_d(x|o)[sₙ]⟩. But by assumption, I_d[s₁^x_r] = I_d(x|o)[s₁], and . . . and I_d[sₙ^x_r] = I_d(x|o)[sₙ]; so ⟨I_d[s₁^x_r] . . . I_d[sₙ^x_r]⟩ = ⟨I_d(x|o)[s₁] . . . I_d(x|o)[sₙ]⟩; so I[hⁿ]⟨I_d[s₁^x_r] . . . I_d[sₙ^x_r]⟩ = I[hⁿ]⟨I_d(x|o)[s₁] . . . I_d(x|o)[sₙ]⟩; so I_d[t^x_r] = I_d(x|o)[t].

Indct: For any t, I_d[t^x_r] = I_d(x|o)[t].

Since the switching leaves assignments to the parts the same, the assignment to the whole remains the same as well.
Similarly, suppose we have term r with interpretation I and variable assignment d, where I_d[r] = o as before. Suppose r is free for variable x in formula Q. Then the second proposition is that a formula Q is satisfied on I_d(x|o) iff Q^x_r is satisfied on I_d. Again, intuitively, this is because the same object is fed into the x-place of the formula in each case. With Q and d(x|o),

(D)   Q:        Q . . . x . . .
                        |
      d(x|o):       . . . o . . .

object o is the input to the slot occupied by x. But I_d[r] = o. So with Q^x_r and d,

(E)   Q^x_r:    Q . . . r . . .
                        |
      d:            . . . o . . .

object o is the input into the slot that was occupied by x. So if I_d[r] = o (and r is free for x in Q), then I_d(x|o)[Q] = S iff I_d[Q^x_r] = S. In the one case, we guarantee that object o goes into the x-place by meddling with the variable assignment. In the other, we get the same result by meddling with the formula. This is our second result, which draws directly upon the first.

T10.2. For any interpretation I, variable assignment d, term r, and formula Q, if I_d[r] = o, and r is free for x in Q, then I_d[Q^x_r] = S iff I_d(x|o)[Q] = S.

For arbitrary formula Q, term r, and interpretation I, suppose r is free for x in Q. By induction on the number of operator symbols in Q,

Basis: Suppose I_d[r] = o. If Q has no operator symbols, then it is a sentence letter S or an atomic of the form Rⁿt₁ . . . tₙ. In the first case, Q^x_r = S^x_r = S. So I_d[Q^x_r] = S iff I_d[S] = S; by SF(s), iff I[S] = T; by SF(s) again, iff I_d(x|o)[S] = S; iff I_d(x|o)[Q] = S. In the second case, Q^x_r = [Rⁿt₁ . . . tₙ]^x_r = Rⁿ[t₁]^x_r . . . [tₙ]^x_r. So I_d[Q^x_r] = S iff I_d[Rⁿ[t₁]^x_r . . . [tₙ]^x_r] = S; by SF(r), iff ⟨I_d[t₁^x_r] . . . I_d[tₙ^x_r]⟩ ∈ I[Rⁿ]; since I_d[r] = o, by T10.1, iff ⟨I_d(x|o)[t₁] . . . I_d(x|o)[tₙ]⟩ ∈ I[Rⁿ]; by SF(r), iff I_d(x|o)[Rⁿt₁ . . . tₙ] = S; iff I_d(x|o)[Q] = S.

Assp: For any i, 0 ≤ i < k, if Q has i operator symbols, r is free for x in Q, and I_d[r] = o, then I_d[Q^x_r] = S iff I_d(x|o)[Q] = S.

Show: If Q has k operator symbols, r is free for x in Q, and I_d[r] = o, then I_d[Q^x_r] = S iff I_d(x|o)[Q] = S.
Suppose I_d[r] = o. If Q has k operator symbols, then Q is of the form ∼B, B → C, or ∀vB for variable v and formulas B and C with < k operator symbols.

(∼) Suppose Q is ∼B. Then Q^x_r = [∼B]^x_r = ∼[B^x_r]. Since r is free for x in Q, r is free for x in B; so the assumption applies to B. I_d[Q^x_r] = S iff I_d[∼B^x_r] = S; by SF(∼), iff I_d[B^x_r] ≠ S; by assumption, iff I_d(x|o)[B] ≠ S; by SF(∼), iff I_d(x|o)[∼B] = S; iff I_d(x|o)[Q] = S.

(→) Homework.

(∀) Suppose Q is ∀vB. Either there are free occurrences of x in Q or not.
(i) Suppose there are no free occurrences of x in Q. Then Q^x_r is just Q (no replacement is made). But since d and d(x|o) make just the same assignments to variables other than x, they make just the same assignments to all the variables free in Q; so by T8.4, I_d[Q] = S iff I_d(x|o)[Q] = S. So I_d[Q^x_r] = S iff I_d(x|o)[Q] = S.
(ii) Suppose there are free occurrences of x in Q. Then x is some variable other than v, and Q^x_r = [∀vB]^x_r = ∀v[B^x_r].
First, since r is free for x in Q, r is free for x in B, and v is not a variable in r; from this, for any m ∈ U, the variable assignments d and d(v|m) agree on assignments to variables in r; so by T8.3, I_d[r] = I_d(v|m)[r]; so I_d(v|m)[r] = o; so the requirement of the assumption is met for the assignment d(v|m) and, as an instance of the assumption, for any m ∈ U we have, I_d(v|m)[B^x_r] = S iff I_d(v|m,x|o)[B] = S.
Now suppose I_d(x|o)[Q] = S but I_d[Q^x_r] ≠ S; then I_d(x|o)[∀vB] = S but I_d[∀v[B^x_r]] ≠ S. From the latter, by SF(∀), there is some m ∈ U such that I_d(v|m)[B^x_r] ≠ S; so by the above result, I_d(v|m,x|o)[B] ≠ S; so by SF(∀), I_d(x|o)[∀vB] ≠ S; this is impossible. And similarly [by homework] in the other direction. So I_d(x|o)[Q] = S iff I_d[Q^x_r] = S.

If Q has k operator symbols, r is free for x in Q, and I_d[r] = o, then I_d[Q^x_r] = S iff I_d(x|o)[Q] = S.

Indct: For any Q, if r is free for x in Q and I_d[r] = o, then I_d[Q^x_r] = S iff I_d(x|o)[Q] = S.

Perhaps the quantifier case looks more difficult than it is. The key point is that since r is free for x in Q, changes in the assignment to v do not affect the assignment to r. Thus the assumption applies to B for variable assignments that differ in their assignments to v. This lets us take the quantifier off, apply the assumption, and then put the quantifier back on in the usual way. Another way to make this point is to see how the argument fails when r is not free for x in Q. If r is not free for x in Q, then a change in the assignment to v may affect the assignment to r. In this case, although I_d[r] = o, I_d(v|m)[r] might be something else. So there is no reason to think that substituting r for x will have the same effect as assigning x to o. As we shall see, this restriction corresponds directly to the one on axiom A4. An example of failure for the axiom is the one (A) with which we began the chapter.
*E10.1. Complete the cases for (→) and (∀) to complete the demonstration of T10.2. You should set up the complete demonstration, but for cases completed in the text, you may simply refer to the text, as the text refers cases to homework.

10.1.2 Soundness

We are now ready for our main proof of soundness for AD. Actually, all the parts are already on the table. It is simply a matter of pulling them together into a complete demonstration.

T10.3. If Γ ⊢AD P, then Γ ⊨ P.    (Soundness)

Suppose Γ ⊢AD P. Then there is an AD derivation A = ⟨Q₁ . . . Qₙ⟩ of P from premises in Γ, with Qₙ = P. By induction on the line numbers in A, we show that for any i, Γ ⊨ Qᵢ. The case when i = n is the desired result.

Basis: The first line of A is a premise or an axiom. So Q₁ is either a member of Γ or an instance of A1, A2, A3, A4, A5, A6, or A7. The cases for A1, A2, A3, A5, A6, and A7 are parallel.
(prem) If Q₁ is a member of Γ, then there is no interpretation where all the members of Γ are true and Q₁ is not; so by QV, Γ ⊨ Q₁.
(Ax) Suppose Q₁ is an instance of A1, A2, A3, A5, A6, or A7 and Γ ⊭ Q₁. Then by QV, there is some I such that I[Γ] = T but I[Q₁] ≠ T. But by T7.2, T7.3, T7.4, T7.7, T7.8, and T7.9, ⊨ Q₁; so by QV, I[Q₁] = T. This is impossible; reject the assumption: Γ ⊨ Q₁.
(A4) If Q₁ is an instance of A4, then it is of the form ∀xB → B^x_r, where term r is free for variable x in formula B. Suppose Γ ⊭ Q₁. Then by QV, there is an I such that I[Γ] = T, but I[∀xB → B^x_r] ≠ T. From the latter, by TI, there is some d such that I_d[∀xB → B^x_r] ≠ S; so by SF(→), I_d[∀xB] = S but I_d[B^x_r] ≠ S; from the first of these, by SF(∀), for any m ∈ U, I_d(x|m)[B] = S; in particular, where for some object o, I_d[r] = o, I_d(x|o)[B] = S; so, with r free for x in formula B, by T10.2, I_d[B^x_r] = S. This is impossible; reject the assumption: Γ ⊨ Q₁.

Assp: For any i, 1 ≤ i < k, Γ ⊨ Qᵢ.

Show: Γ ⊨ Qₖ.
Qₖ is either a premise, an axiom, or arises from previous lines by MP or Gen. If Qₖ is a premise or an axiom then, as in the basis, Γ ⊨ Qₖ. So suppose Qₖ arises by MP or Gen.
(MP) Homework.
(Gen) If Qₖ arises by Gen, then A is something like this,

    i.  B → C
        ⋮
    k.  B → ∀xC    i Gen

where i < k, variable x is not free in formula B, and Qₖ = B → ∀xC. Suppose Γ ⊭ Qₖ; then Γ ⊭ B → ∀xC; so by QV, there is some I such that I[Γ] = T but I[B → ∀xC] ≠ T; from the latter, by TI, there is a d such that I_d[B → ∀xC] ≠ S; so by SF(→), I_d[B] = S but I_d[∀xC] ≠ S; from the second of these, by SF(∀), there is some o ∈ U such that I_d(x|o)[C] ≠ S. But I[Γ] = T, and by assumption, Γ ⊨ B → C; so by QV, I[B → C] = T; so by TI, for any variable assignment h, I_h[B → C] = S; in particular, then, I_d(x|o)[B → C] = S; so by SF(→), I_d(x|o)[B] ≠ S or I_d(x|o)[C] = S. But since I_d(x|o)[C] ≠ S, we have I_d(x|o)[B] ≠ S; since x is not free in B, d and d(x|o) agree in their assignments to all the variables free in B; so by T8.4, I_d[B] ≠ S. This is impossible; reject the assumption: Γ ⊨ Qₖ.
So Γ ⊨ Qₖ.

Indct: For any n, Γ ⊨ Qₙ.

So if Γ ⊢AD P, then Γ ⊨ P. So AD is sound. And since AD is sound, with theorems T9.2, T9.11, and T9.12 it follows that ND and ND+ are sound as well.
It is worth commenting on the restriction for Gen. If the restriction fails and x is free in B, then B → C is satisfied on an arbitrary assignment to x just in case each object is such that if it satisfies B then it satisfies C, where this may be the case though not every object satisfies C. On the other hand, if x is not free in B, then B → C is satisfied on arbitrary assignments to x just in case if B is satisfied, then C is satisfied for each assignment to x. Thus, for example, consider the following derivation which violates the restriction on Gen.

    1. ∀x(Bx → Cx)                  prem
    2. ∀x(Bx → Cx) → (Bx → Cx)      A4
    3. Bx → Cx                      2,1 MP
    4. Bx → ∀xCx                    3 Gen

Suppose U is N, the set of all natural numbers, with I[B] = {o | o > 5} and I[C] = {o | o > 4}. Then every x is such that if it is greater than 5, then it is greater than 4. So the premise is true. But on any assignment that makes x a number greater than 5, it will not be the case that every number is greater than 4. So the conclusion is not true. So the derivation system with the unrestricted Gen is not sound. This point transfers from Gen to the associated restriction on uses of DT: From the proof of DT, every Qᵢ under an auxiliary assumption B is implicitly of the form B → Qᵢ. So the restriction on Gen naturally transfers to variables free in the assumption B.
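The failure at line (4) can be confirmed directly on an initial segment of the natural numbers. A minimal sketch (the finite segment and the encoding of the interpretation are my own illustration, not the text's):

```python
# I[B] = numbers > 5, I[C] = numbers > 4, on an initial segment of N.
U = range(20)  # any initial segment containing both 0..4 and 6 suffices
B = lambda o: o > 5
C = lambda o: o > 4

# Premise: every x is such that Bx -> Cx.
premise = all((not B(x)) or C(x) for x in U)

# Conclusion Bx -> AxCx, evaluated on an assignment d with d[x] = 6:
x = 6
conclusion_on_d = (not B(x)) or all(C(o) for o in U)

print(premise, conclusion_on_d)  # True False
```

The premise holds everywhere, but with x assigned 6 the antecedent Bx is satisfied while ∀xCx fails (0 through 4 are not greater than 4), so the conclusion is not true.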
*E10.2. Complete the case for (MP) to round out the demonstration that AD is sound.
You should set up the complete demonstration, but for cases completed in the
text, you may simply refer to the text, as the text refers cases to homework.

E10.3. Consider a derivation system A4 which has axioms and rules,

A4  A1. Any sentential form P such that ⊨s P.
    A2. ⊢ P^x_t → ∃xP        where t is free for x in P
    MP. P → Q, P ⊢ Q
    ∃E. P → Q ⊢ ∃xP → Q      where x is not free in Q

Provide a complete demonstration that A4 is sound. You may appeal to substitution results from the text as appropriate. Hint: By the soundness of AD, if P is a sentential form and ⊢AD P, then P is among the A1 axioms.

10.1.3 Consistency

The proof of soundness is the main result we set out to achieve in this section. But before we go on, it is worth pausing to make an application to consistency. Say a set Σ (Sigma) of formulas is consistent iff there is no formula A such that Σ ⊢ A and Σ ⊢ ∼A. Consistency is thus defined in terms of derivations rather than semantic notions. But we show,

T10.4. If there is an interpretation M such that M[Σ] = T (a model for Σ), then Σ is consistent.

Suppose there is an interpretation M such that M[Σ] = T but Σ is inconsistent. From the latter, there is a formula A such that Σ ⊢ A and Σ ⊢ ∼A; so by T10.3, Σ ⊨ A and Σ ⊨ ∼A. But M[Σ] = T; so by QV, M[A] = T and M[∼A] = T; so by TI, for any d, M_d[A] = S and M_d[∼A] = S; from the second of these, by SF(∼), M_d[A] ≠ S. This is impossible; reject the assumption: if there is an interpretation M such that M[Σ] = T, then Σ is consistent.
This is an interesting and important theorem. Suppose we want to show that some set of formulas is inconsistent. For this, it is enough to derive a contradiction from the set. But suppose we want to show that there is no way to derive a contradiction. Merely failing to find a derivation does not show that there is not one! But, with soundness, we can demonstrate that there is no such derivation by finding a model for the set.
Similarly, if we want to show that Γ ⊢ A, it is enough to produce the derivation. But suppose we want to show that Γ ⊬ A. Merely failing to find a derivation does not show that there is not one! Still, as above, given soundness, we can demonstrate that there is no derivation by finding a model on which the premises are true, with the negation of the conclusion.
T10.5. If there is an interpretation M such that M[Γ ∪ {∼A}] = T, then Γ ⊬ A.

The reasoning is left for homework. But the idea is very much as above. With soundness, it is impossible to have both M[Γ ∪ {∼A}] = T and Γ ⊢ A.

Again, the result is useful. Suppose, for example, we want to show that ∼∀xAx ⊬ ∼Aa. You may be unable to find a derivation, and be able to point out flaws in a friend's attempt. But we show that there is no derivation by finding a model on which both ∼∀xAx and ∼∼Aa are true. And this is easy. Let U = {1, 2} with M[a] = 1 and M[A] = {1}.

(i) Suppose M[∼∀xAx] ≠ T; then by TI, there is some d such that M_d[∼∀xAx] ≠ S; so by SF(∼), M_d[∀xAx] = S; so by SF(∀), for any o ∈ U, M_d(x|o)[Ax] = S; so M_d(x|2)[Ax] = S. But d(x|2)[x] = 2; so by TA(v), M_d(x|2)[x] = 2; so by SF(r), 2 ∈ M[A]; but 2 ∉ M[A]. This is impossible; reject the assumption: M[∼∀xAx] = T.

(ii) Suppose M[∼∼Aa] ≠ T; then by TI, there is some d such that M_d[∼∼Aa] ≠ S; so by SF(∼), M_d[∼Aa] = S; and by SF(∼) again, M_d[Aa] ≠ S. But M[a] = 1; so by TA(c), M_d[a] = 1; so by SF(r), 1 ∉ M[A]; but 1 ∈ M[A]. This is impossible; reject the assumption: M[∼∼Aa] = T. So M[∼∀xAx] = T and M[∼∼Aa] = T. So by T10.5, ∼∀xAx ⊬ ∼Aa.

If there is a model on which all the members of Γ are true and ∼A is true, then it is not the case that every model with Γ true has A true. So, with soundness, there cannot be a derivation of A from Γ.
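The two verifications on this tiny model can be mirrored by a direct brute-force check. A minimal sketch (the Python encoding of the model is my own illustration, not the text's apparatus):

```python
# The model of the text: U = {1, 2}, M[a] = 1, M[A] = {1}.
U = {1, 2}
a = 1
A = {1}

# ~AxAx: not every member of U is in the extension of A.
premise_true = not all(x in A for x in U)

# ~~Aa comes to Aa: the referent of a is in the extension of A.
neg_conclusion_true = a in A

print(premise_true, neg_conclusion_true)  # True True
```

Both the premise and the negation of the conclusion come out true, which is what T10.5 requires.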
*E10.4. Provide an argument to show T10.5. Hint: The reasoning is very much as
for T10.4.

E10.5. (a) Show that {∃xAx, ∼Aa} is consistent. (b) Show that ∀x(Ax → Bx), ∼Ba ⊬ ∼∃xAx.

10.2 Sentential Adequacy

The proof of soundness is straightforward given methods we have used before. But the proof of adequacy was revolutionary when Gödel first produced it in 1930. It is easy to construct derivation systems that are not adequate. Thus, for example, consider a system like the sentential part of AD but without A3. It is easy to see that such a system is sound, and so that derivations without A3 do not go astray. (All we have to do is leave the case for A3 out of the proof for soundness.) But, by our discussion of independence from section 11.2 (see also E8.13), there is no derivation of A3 from A1 and A2 alone. So there are sentential expressions P such that ⊨s P, but for which there is no derivation. So the resultant derivation system would not be adequate. We turn now to showing that our derivation systems are in fact adequate: if Γ ⊨ P, then Γ ⊢ P. Given this, with soundness, we have Γ ⊨ P iff Γ ⊢ P, so that our derivation systems deliver just the results they are supposed to.

Adequacy for a system like AD was first proved by K. Gödel in his 1930 doctoral dissertation. The version of the proof that we will consider is the standard one, essentially due to L. Henkin.¹ An interesting feature of these proofs is that they are not constructive. So far, in proving the equivalence of deductive systems, we have been able to show that there are certain derivations, by showing how to construct them. In this case, we show that there are derivations, but without showing how to construct them. As we shall see in Part IV, a constructive proof of adequacy for our full predicate logic is impossible. So this is the only way to go.

The proof of adequacy is more involved than any we have encountered so far. Each of the parts is comparable to what has gone before, and all the parts are straightforward. But there are enough parts that it is possible to lose the forest for the trees. I thus propose to do the proof three times. In this section, we will prove sentential adequacy: that for expressions in a sentential language, if Γ ⊨s P, then Γ ⊢ P. This should enable us to grasp the overall shape of the argument without interference from too many details. We will then consider a basic version of the quantificational argument and, after addressing a few complications, put it all together for the full version. Notation and theorem numbers are organized to preserve parallels between the cases.

¹Henkin, "The Completeness of the First-Order Functional Calculus." Kurt Gödel, "Die Vollständigkeit der Axiome des logischen Funktionenkalküls"; English translation in From Frege to Gödel, reprint in Gödel's Collected Works.

10.2.1 Basic Idea

The basic idea is straightforward: Let us restrict ourselves to an arbitrary sentential language Ls and to sentential semantic rules. Derivations are automatically restricted to sentential rules by the restricted language. So derivations and semantics are particularly simple. For formulas in this language, our goal is to show that if Γ ⊨s P, then Γ ⊢ P. We can see how this works with just a couple of preliminaries.

We begin with a definition and a theorem. As before, let us say,

Con  A set Γ of formulas is consistent iff there is no formula A such that Γ ⊢ A and Γ ⊢ ∼A.

So consistency is a syntactical notion. A set of formulas is consistent just in case there is no way to derive a contradiction from it. Now for the theorem,

T10.6s. For any set of formulas Γ and sentence P, if Γ ⊬ ∼P, then Γ ∪ {P} is consistent.

Suppose Γ ⊬ ∼P, but Γ ∪ {P} is not consistent. From the latter, there is some A such that Γ ∪ {P} ⊢ A and Γ ∪ {P} ⊢ ∼A. So by DT, Γ ⊢ P → A and Γ ⊢ P → ∼A; by T3.10, Γ ⊢ ∼∼P → P; so by T3.2, Γ ⊢ ∼∼P → A and Γ ⊢ ∼∼P → ∼A; but by A3, ⊢ (∼∼P → ∼A) → ((∼∼P → A) → ∼P); so by two instances of MP, Γ ⊢ ∼P. But this is impossible; reject the assumption: if Γ ⊬ ∼P, then Γ ∪ {P} is consistent.

The idea is simple: if Γ ∪ {P} is inconsistent, then by reasoning as for ∼I in ND, ∼P follows from Γ alone; so if ∼P cannot be derived from Γ alone, then Γ ∪ {P} is consistent. Notice that, insofar as the language is sentential, the derivation does not include any applications of Gen, so the applications of DT are sure to meet the restriction on Gen.

In the last section, we saw that any set with a model is consistent. Now suppose we knew the converse, that any consistent set has a model.

(⋆) For any consistent set of formulas Γ′, there is an interpretation M′ such that M′[Γ′] = T.

This sets up the key connection between syntactic and semantic notions, between consistency on the one hand and truth on the other, that we will need for adequacy. Schematically, then, with (⋆) we have the following,

    1. Γ ∪ {∼P} has a model        ⟺    Γ ⊭s P
    2. Γ ∪ {∼P} is consistent      ⟺    Γ ∪ {∼P} has a model    (⋆)
    3. Γ ∪ {∼P} is not consistent  ⟹    Γ ⊢ P

(2) is just (⋆). (1) is by simple semantic reasoning: Suppose Γ ∪ {∼P} has a model; then there is some M such that M[Γ ∪ {∼P}] = T; so M[Γ] = T and M[∼P] = T; from the latter, by ST(∼), M[P] ≠ T; so M[Γ] = T and M[P] ≠ T; so by SV, Γ ⊭s P. (3) is by straightforward syntactic reasoning: Suppose Γ ∪ {∼P} is not consistent; then by an application of T10.6s, Γ ⊢ ∼∼P; but by T3.10, ⊢ ∼∼P → P; so by MP, Γ ⊢ P. Now suppose Γ ⊨s P; then by (1), reading from right to left, Γ ∪ {∼P} does not have a model; so by (2), again from right to left, Γ ∪ {∼P} is not consistent; so by (3), Γ ⊢ P. So if Γ ⊨s P, then Γ ⊢ P, which was to be shown. Of course, knowing that there is some way to derive P is not the same as knowing what that way is. All the same, (⋆) tells us that there must exist a model of a certain sort, from which it follows that there must exist a derivation. And the work of our demonstration of adequacy reduces to a demonstration of (⋆).
So we need to show that every consistent set of formulas Γ′ has an interpretation M′ such that M′[Γ′] = T. Here is the basic idea: We show that any consistent Γ′ is a subset of a corresponding big set Γ″ specified in such a way that it must have a model M′, which in turn is a model for the smaller Γ′. Following the arrows,

    Γ″ ───→ M′
     ↑       │
     │       ↓
     Γ′ ←────┘

Given a consistent Γ′, we show that there is the big set Γ″. From this we show that there must be an M′ that is a model not only for Γ″ but for Γ′ as well. So if Γ′ is consistent, then it has a model. We proceed through a series of theorems to show that this can be done.

10.2.2 Gödel Numbering

In constructing our big sets, we will want to consider formulas, for inclusion or exclusion, serially, one after another. For this, we need to line them up for consideration. Thus, in this section we show,

T10.7s. There is an enumeration Q₁, Q₂ . . . of all formulas in Ls.

The proof is by construction. We develop a method by which the formulas can be lined up. The method is interesting in its own right, and foreshadows the method of Gödel's incompleteness theorem for arithmetic.

In subsection 2.1.1, we required that any sentential language Ls has countably many sentence letters, which can be ordered into a series, S₀, S₁. . . . Assume some such series. We want to show that the formulas of Ls can be so ordered as well. Begin by assigning to each symbol α (alpha) in the language an integer g[α], called its Gödel number.

a. g[(] = 3
b. g[)] = 5
c. g[∼] = 7
d. g[→] = 9
e. g[Sₙ] = 11 + 2n

So, for example, g[S₀] = 11 and g[S₄] = 11 + 2×4 = 19. Clearly each symbol gets a unique Gödel number, and Gödel numbers for individual symbols are odd positive integers.

Now we are in a position to assign a Gödel number to each formula as follows: Where α₀, α₁ . . . αₙ are the symbols, in order from left to right, in some expression Q,

g[Q] = 2^g[α₀] × 3^g[α₁] × 5^g[α₂] × . . . × πₙ^g[αₙ]

where 2, 3, 5 . . . πₙ are the first prime numbers, in order. So, for example, g[∼∼S₀] = 2⁷ × 3⁷ × 5¹¹; similarly, g[∼(S₀ → S₄)] = 2⁷ × 3³ × 5¹¹ × 7⁹ × 11¹⁹ × 13⁵ = 154633619379608903647104241201870668750000000, a very big integer! All the same, it is an integer, and it is clear that every expression is assigned to some integer.

Further, different expressions get different Gödel numbers. It is a theorem of arithmetic that every integer is uniquely factored into primes (see the "arithmetic for Gödel numbering" and "more arithmetic for Gödel numbering" references). So a given integer can correspond to at most one formula: Given a Gödel number, we can find its unique prime factorization; then if there are seven 2s in the factorization, the first symbol is ∼; if there are seven 3s, the second symbol is ∼; if there are eleven 5s, the third symbol is S₀; and so forth. Notice that numbers for individual symbols are odd, where numbers for expressions are even (where the number for an atomic comes out odd when it is thought of as a symbol, but then even when it is thought of as a formula).

The point is not that this is a practical, or a fun, procedure. Rather, the point is that we have integers associated with each expression of the language. Given this, we can take the set of all formulas, and order its members according to their Gödel numbers, so that there is an enumeration Q₁, Q₂ . . . of all formulas. And this is what was to be shown.
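The numbering scheme is easy to mechanize. A sketch of encoding and decoding (the ASCII spellings '~' and '>' for tilde and arrow, and the list-of-symbols input format, are my own stand-ins):

```python
# Godel numbering for the sentential language: fixed codes for the
# four logical symbols, 11 + 2n for sentence letters 'S0', 'S1', ...
SYM = {'(': 3, ')': 5, '~': 7, '>': 9}

def g_sym(s):
    """Godel number of a single symbol."""
    return SYM[s] if s in SYM else 11 + 2 * int(s[1:])

def primes():
    """Generate 2, 3, 5, ... by trial division (fine at this scale)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def g(expr):
    """Godel number of an expression, given as a list of symbols:
    multiply successive primes raised to the symbols' codes."""
    n = 1
    for p, s in zip(primes(), expr):
        n *= p ** g_sym(s)
    return n

def decode(n):
    """Recover the symbol codes as the exponents in the unique
    prime factorization of n."""
    exps = []
    for p in primes():
        if n == 1:
            return exps
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)

print(g(['~', '~', 'S0']))          # 2**7 * 3**7 * 5**11
print(decode(g(['~', '~', 'S0'])))  # [7, 7, 11], i.e. tilde, tilde, S0
```

Decoding just counts how many times each successive prime divides the number, which is well defined precisely because factorization into primes is unique.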
E10.6. Find Gödel numbers for the following sentences (for the last, you need not do the calculation).

    S₇        ∼S₀        ∼S₀ → (S₁ → ∼S₀)

E10.7. Determine the expressions that have the following Gödel numbers.

    49        1944        2⁷ × 3³ × 5¹¹ × 7⁹ × 11⁷ × 13¹³ × 17⁵

E10.8. Which would come first in the official enumeration of formulas, S₁ → S₂ or S₂ → S₂? Explain. Hint: you should be able to do this without actually calculating the Gödel numbers.


Some Arithmetic Relevant to Gödel Numbering

Say an integer i has a representation as a product of primes if there are some primes p_a, p_b . . . p_j such that p_a × p_b × . . . × p_j = i. We understand a single prime p to be its own representation.

G1. Every integer > 1 has at least one representation as a product of primes.

Basis: 2 is prime and so is its own representation; so the first integer > 1 has a representation as a product of primes.
Assp: For any i, 1 < i < k, i has a representation as a product of primes.
Show: k has a representation as a product of primes.
If k is prime, the result is immediate; so suppose there are some i, j < k such that k = i × j; by assumption, i has a representation as a product of primes p_a × . . . × p_b, and j has a representation as a product of primes q_a × . . . × q_b; so k = i × j = p_a × . . . × p_b × q_a × . . . × q_b has a representation as a product of primes.
Indct: Any i > 1 has a representation as a product of primes.

Corollary: any integer > 1 is divided by at least one prime.

G2. There are infinitely many prime numbers.

Suppose the number of primes is finite; then there is some list p₁, p₂ . . . pₙ of all the primes; consider q = (p₁ × p₂ × . . . × pₙ) + 1; no pᵢ in the list p₁ . . . pₙ divides q evenly, since each leaves remainder 1; but by the corollary to (G1), q is divided by some prime; so some prime is not on the list; reject the assumption: there are infinitely many primes.

Note: Sometimes q, calculated this way, is itself prime: when the list is {2}, q = 2 + 1 = 3, and 3 is prime. Similarly, 2×3 + 1 = 7, 2×3×5 + 1 = 31, 2×3×5×7 + 1 = 211, and 2×3×5×7×11 + 1 = 2311, where 7, 31, 211, and 2311 are all prime. But 2×3×5×7×11×13 + 1 = 30031 = 59 × 509. So we are not always finding a prime not on the list, but rather only showing that there is a prime not on it.
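The arithmetic in this note is easy to confirm. A quick check (my own illustration):

```python
from math import prod

def is_prime(n):
    """Trial division up to the square root; adequate at this scale."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# q = p1 * ... * pn + 1 for successive initial lists of primes.
lists = [[2], [2, 3], [2, 3, 5], [2, 3, 5, 7], [2, 3, 5, 7, 11],
         [2, 3, 5, 7, 11, 13]]
for ps in lists:
    q = prod(ps) + 1
    print(q, is_prime(q))
# 3, 7, 31, 211, and 2311 come out prime; 30031 does not (30031 = 59 * 509),
# though each q still leaves remainder 1 on division by every prime listed.
```

So q always has a prime factor outside the list, whether or not q itself is prime, which is exactly what the proof of G2 uses.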
G3. For any i > 1, if i is the product of the primes p₁, p₂ . . . p_a, then no distinct collection of primes q₁, q₂ . . . q_b is such that i is the product of them. (The Fundamental Theorem of Arithmetic)

For a proof, see the "more arithmetic for Gödel numbering" reference in the corresponding part of the next section.

10.2.3 The Big Set

Recall that a set Γ is consistent iff there is no A such that Γ implies both A and ∼A. Now, a set is maximal iff for any A the set implies one or the other.

Max  A set Σ of formulas is maximal iff for any sentence A, Σ ⊢ A or Σ ⊢ ∼A.

Again, this is a syntactical notion. If a set is maximal, then it implies A or ∼A for any sentence A; if it is consistent, then it does not imply both. We set out to construct a big set Γ″ from Γ′, and show that Γ″ is both maximal and consistent.

CnsΓ″  Construct Γ″ from Γ′ as follows: By T10.7s, there is an enumeration Q₁, Q₂ . . . of all the formulas in Ls. Consider this enumeration, and let Ω₀ (Omega₀) be the same as Γ′. Then for any i > 0, let

    Ωᵢ = Ωᵢ₋₁              if Ωᵢ₋₁ ⊢ ∼Qᵢ
    Ωᵢ = Ωᵢ₋₁ ∪ {Qᵢ}       if Ωᵢ₋₁ ⊬ ∼Qᵢ

then,

    Γ″ = ⋃_{i≥0} Ωᵢ        that is, Γ″ is the union of all the Ωᵢs

Beginning with set Ω₀ (= Γ′), we consider the formulas in the enumeration Q₁, Q₂ . . . one by one, adding a formula to the set just in case its negation is not already derivable. Γ″ contains all the members of Γ′ together with all the formulas added this way. Observe that Γ′ ⊆ Γ″. One might think of the Ωᵢs as constituting a big sack of formulas, and the Qᵢs as coming along on a conveyor belt: for a given Qᵢ, if there is no way to derive its negation from formulas already in the sack, we throw the Qᵢ in; otherwise, we let it go on by. Of course, this is not a procedure we could complete in finite time. Rather, we give a logical condition which specifies, for any Qᵢ in the language, whether it is to be included in Γ″ or not. The important point is that some Γ″ meeting these conditions exists.

As an example, suppose Γ′ = {A → B} and consider an enumeration which begins A, ∼A, B, ∼B. . . . Then,

(F)
    Ω₀ = Γ′; so Ω₀ = {A → B}.
    Q₁ = A, and Ω₀ ⊬ ∼A; so Ω₁ = {A → B} ∪ {A} = {A → B, A}.
    Q₂ = ∼A, and Ω₁ ⊢ ∼∼A; so Ω₂ is unchanged; so Ω₂ = {A → B, A}.
    Q₃ = B, and Ω₂ ⊬ ∼B; so Ω₃ = {A → B, A} ∪ {B} = {A → B, A, B}.
    Q₄ = ∼B, and Ω₃ ⊢ ∼∼B; so Ω₄ is unchanged; so Ω₄ = {A → B, A, B}.

So we include Qᵢ each time its negation is not implied. Ultimately, we will use this set to construct a model. For now, though, the point is simply to understand the condition under which a formula is included or excluded from the set.
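The construction in (F) can be simulated for the four formulas listed. A sketch, with truth-table consequence standing in for derivability (a legitimate substitute in the sentential case once both soundness and adequacy are in hand; the tuple encoding of formulas is my own):

```python
from itertools import product

# Sentential formulas as nested tuples: ('S', name), ('~', P), ('>', P, Q).
def value(f, row):
    if f[0] == 'S':
        return row[f[1]]
    if f[0] == '~':
        return not value(f[1], row)
    return (not value(f[1], row)) or value(f[2], row)  # conditional

def entails(gamma, f, letters=('A', 'B')):
    """Truth-table consequence; stands in for derivability here."""
    rows = [dict(zip(letters, vs))
            for vs in product([True, False], repeat=len(letters))]
    return all(value(f, r) for r in rows
               if all(value(g, r) for g in gamma))

A, B = ('S', 'A'), ('S', 'B')
omega = [(('>', A, B),)]              # Omega_0 = Gamma' = {A -> B}
for q in [A, ('~', A), B, ('~', B)]:  # enumeration A, ~A, B, ~B
    last = omega[-1]
    omega.append(last if entails(last, ('~', q)) else last + (q,))

print([len(o) for o in omega])  # [1, 2, 2, 3, 3]: A and B added, ~A and ~B skipped
```

The sizes match the steps of (F): A and B get thrown into the sack, while ∼A and ∼B pass by because their negations are already consequences.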
We now show that if 0 is consistent, then 00 is maximal and consistent. Perhaps
the first is obvious: We guarantee that 00 is maximal by including Qi as a member
whenever Qi is not already a consequence.
T10.8s . If 0 is consistent, then 00 is maximal and consistent.
The proof comes to the demonstration of three results. Given the assumption
that 0 is consistent, we show, (a) 00 is maximal; (b) each i is consistent;
and use this to show (c), 00 is consistent. Suppose 0 is consistent.
(a) 00 is maximal. Suppose otherwise. Then there is some Qi such that
both 00 Qi and 00 Qi . For this i , by construction, each member
of i 1 is in 00 ; so if i 1 ` Qi then 00 ` Qi ; but 00 Qi ; so
i 1 Qi ; so by construction, i D i 1 [ fQi g; and by construction
again, Qi 2 00 ; so 00 ` Qi . This is impossible; reject the assumption: 00
is maximal.
(b) Each i is consistent. By induction on the series of i s.
Basis: 0 D 0 and 0 is consistent; so 0 is consistent.
Assp: For any i , 0  i < k, i is consistent.
Show: k is consistent.
k is either k 1 or k 1 [ fQk g. Suppose the former; by assumption, k 1 is consistent; so k is consistent. Suppose the latter; then
by construction, k 1 Qk ; so by T10.6s , k 1 [ fQk g is consistent; so k is consistent. So, either way, k is consistent.
Indct: For any i , i is consistent.
(c) 00 is consistent. Suppose 00 is not consistent; then there is some A such
that 00 ` A and 00 ` A. Consider derivations D1 and D2 of these results,
and the premises Qi : : : Qj of these derivations. Where Qj is the last of these
premises in the enumeration of formulas, by the construction of 00 , each of
Qi : : : Qj must be a member of j ; so D1 and D2 are derivations from j ;
so j is inconsistent. But by the previous result, j is consistent. This is
impossible; reject the assumption: 00 is consistent.
Because derivations of A and ∼A have only finitely many premises, all the premises
in a derivation of a contradiction must show up in some Σj; so if Σ″ is inconsistent,
then some Σj is inconsistent. But no Σj is inconsistent. So Σ″ is consistent. So we
have what we set out to show: Σ′ ⊆ Σ″, and if Σ′ is consistent, then Σ″ is both
maximal and consistent.
E10.9. (i) Suppose Σ′ = {∼A → B} and the enumeration of formulas begins ∼A,
A, ∼B, B .... What are Σ0, Σ1, Σ2, Σ3, and Σ4? (ii) What are they
when the enumeration begins ∼B, B, ∼A, A ...? In each case, produce a
(sentential) model to show that the resultant Σ4 is consistent.

10.2.4 The Model

We now construct a model M′ for Σ′. In this sentential case, the specification is
particularly simple.

CnsM′ For any atomic S, let M′[S] = T iff Σ″ ⊢ S.

Notice that there clearly exists some such interpretation M′: we assign T to every
sentence letter that can be derived from Σ″, and F to the others. We may not be
in a position to do all the derivations, and so to know what all the assignments
to the atomics are. Still, it must be that any atomic either is or is not a consequence
of Σ″, and so that there exists a corresponding interpretation M′ on which those
sentence letters either are or are not assigned T.
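The staged construction of the big set and the induced assignment to atomics can be sketched in code. This is only an illustrative sketch under assumptions not in the text: formulas are nested Python tuples, and the derivability test is played by a brute-force semantic check over valuations, which agrees with derivability for finite sentential sets by soundness together with the adequacy result being proved.

```python
from itertools import product

def atoms(f):
    """Collect the sentence letters in a formula. Atoms are strings;
    compounds are ('~', P) or ('->', P, Q)."""
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(g) for g in f[1:]))

def value(f, v):
    """Truth value of f on a valuation v (a dict from letters to bools)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == '~':
        return not value(f[1], v)
    return (not value(f[1], v)) or value(f[2], v)   # material conditional

def derives(gamma, f):
    """Stand-in for gamma |- f: every valuation satisfying gamma satisfies f."""
    letters = sorted(atoms(f).union(*(atoms(g) for g in gamma)))
    return all(value(f, v)
               for bits in product([True, False], repeat=len(letters))
               for v in [dict(zip(letters, bits))]
               if all(value(g, v) for g in gamma))

def big_set(sigma_prime, enumeration):
    """The staged construction: include Q_i just in case its negation
    is not derivable from what has been included so far."""
    sigma = list(sigma_prime)
    for q in enumeration:
        if not derives(sigma, ('~', q)):
            sigma.append(q)
    return sigma

def model(sigma):
    """CnsM': assign T to an atomic S iff the big set derives S."""
    return {s: derives(sigma, s)
            for s in sorted(set().union(*map(atoms, sigma)))}
```

With the premise ∼A → B and the enumeration A, ∼A, B, ∼B, the construction adds A and B and skips their negations, and the induced model makes both A and B true.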
We now want to show that if Σ′ is consistent, then M′ is a model for Σ′, that is,
that if Σ′ is consistent then M′[Σ′] = T. As we shall see, this results immediately
from the following theorem.

T10.9s. If Σ′ is consistent, then for any sentence B of Ls, M′[B] = T iff Σ″ ⊢ B.

Suppose Σ′ is consistent. Then by T10.8s, Σ″ is maximal and consistent.
Now by induction on the number of operators in B,

Basis: If B has no operators, then it is an atomic of the sort S. But by the
construction of M′, M′[S] = T iff Σ″ ⊢ S; so M′[B] = T iff Σ″ ⊢ B.
Assp: For any i, 0 ≤ i < k, if B has i operator symbols, then M′[B] = T iff
Σ″ ⊢ B.
Show: If B has k operator symbols, then M′[B] = T iff Σ″ ⊢ B.
If B has k operator symbols, then it is of the form ∼P or P → Q,
where P and Q have < k operator symbols.


(∼) Suppose B is ∼P. (i) Suppose M′[B] = T; then M′[∼P] = T; so
by ST(∼), M′[P] ≠ T; so by assumption, Σ″ ⊬ P; so by maximality,
Σ″ ⊢ ∼P; which is to say, Σ″ ⊢ B. (ii) Suppose Σ″ ⊢ B; then Σ″ ⊢
∼P; so by consistency, Σ″ ⊬ P; so by assumption, M′[P] ≠ T; so
by ST(∼), M′[∼P] = T; which is to say, M′[B] = T. So M′[B] = T
iff Σ″ ⊢ B.

(→) Suppose B is P → Q. (i) Suppose M′[B] = T; then M′[P → Q] =
T; so by ST(→), M′[P] ≠ T or M′[Q] = T; so by assumption, Σ″ ⊬ P
or Σ″ ⊢ Q. Suppose the latter; by A1, ⊢ Q → (P → Q); so by MP,
Σ″ ⊢ P → Q. Suppose the former; then by maximality, Σ″ ⊢ ∼P;
but by T3.9, ⊢ ∼P → (P → Q); so by MP, Σ″ ⊢ P → Q. So in
either case, Σ″ ⊢ P → Q; where this is to say, Σ″ ⊢ B. (ii) Suppose
Σ″ ⊢ B but M′[B] ≠ T; by [homework], this is impossible: so if
Σ″ ⊢ B, then M′[B] = T. So M′[B] = T iff Σ″ ⊢ B.

If B has k operator symbols, then M′[B] = T iff Σ″ ⊢ B.
Indct: For any B, M′[B] = T iff Σ″ ⊢ B.

So if Σ′ is consistent, then for any sentence B of Ls, M′[B] = T iff Σ″ ⊢ B.
The key to this is that Σ″ is both maximal and consistent. In (F), for example,
Σ′ = {∼A → B}; so Σ′ ⊬ A and Σ′ ⊬ B; if we were simply to follow our
construction procedure as applied to this set, the result would have M′[A] ≠ T and
M′[B] ≠ T; but then M′[∼A → B] ≠ T, and M′ is no model for Σ′. But Σ4 has
A and B as members; so Σ4 ⊢ A and Σ4 ⊢ B. So by the construction procedure,
M′[A] = T and M′[B] = T; so M′[∼A → B] = T. Thus it is the construction,
with maximality and consistency of Σ″, that puts us in a position to draw the parallel
between the implications of Σ″ and what is true on M′. It is now a short step to seeing
that we have a model for Σ′, and so (⋆) that we have been after.
*E10.10. Complete the second half of the conditional case to complete the proof of
T10.9s . You should set up the entire induction, but may refer to the text for
parts completed there, as the text refers to homework.
E10.11. (i) Where Σ′ = {∼A → B} and the enumeration of formulas is as in the
first part of E10.9, what assignments does M′ make to A and B? (ii) What
assignments does it make on the second enumeration? Use a truth table to
show, for each case, that the assignments result in a model for Σ′. Explain.

10.2.5 Final Result

The proof of sentential adequacy is now a simple matter of pulling together what we
have done. First, it is easy to show,

T10.10s. If Σ′ is consistent, then M′[Σ′] = T.

Suppose Σ′ is consistent but M′[Σ′] ≠ T. From the latter, there is some
formula B ∈ Σ′ such that M′[B] ≠ T. Since B ∈ Σ′, by construction,
B ∈ Σ″; so Σ″ ⊢ B; so, since Σ′ is consistent, by T10.9s, M′[B] = T. This
is impossible; reject the assumption: if Σ′ is consistent, then M′[Σ′] = T.
That is it! Going back to the beginning of our discussion of sentential adequacy, all
we needed was (⋆), and now we have it. So the final argument is as sketched before:

T10.11s. If Γ ⊨s P, then Γ ⊢ P.    (sentential adequacy)

Suppose Γ ⊨s P but Γ ⊬ P. Say, for the moment, that Γ ⊢ ∼∼P; by T3.10,
⊢ ∼∼P → P; so by MP, Γ ⊢ P; but this is impossible; so Γ ⊬ ∼∼P.
Given this, by T10.6s, Γ ∪ {∼P} is consistent; so by T10.10s, there is a model
M′ such that M′[Γ ∪ {∼P}] = T; so M′[∼P] = T; so by ST(∼), M′[P] ≠ T;
so M′[Γ] = T but M′[P] ≠ T; so by SV, Γ ⊭s P. This is impossible; reject
the assumption: if Γ ⊨s P, then Γ ⊢ P.
Try again to get the complete picture in your mind: the key is that consistent sets
always have models. If there is no derivation of P from Γ, then Γ ∪ {∼P} is consistent; and if Γ ∪ {∼P} is consistent, then it has a model, so that Γ ⊭s P. Thus, put
the other way around, if Γ ⊨s P, then there is a derivation of P from Γ. We get the
key point, that consistent sets have models, by finding a relation between consistent
and maximal consistent sets. If a set is both maximal and consistent, then it contains
enough information about its atomics that a model for its atomics is a model for the
whole.

It is obvious that the argument is not constructive: we do not see how to show
that Γ ⊢ P whenever Γ ⊨s P. But it is interesting to see why. The argument turns
on the existence of our big sets under certain conditions, and so on the existence of
models. We show that the sets must exist and have certain properties, though we are
not in a position to find all their members. This puts us in a position to know the
existence of derivations, though we do not say what they are.²

² In fact, there are constructive approaches to sentential adequacy. See, for example, Lemma 1.13
and Proposition 1.14 of Mendelson, Introduction to Mathematical Logic. Our primary purpose, however, is to set up the argument for the quantificational case, where such methods do not apply.


E10.12. Suppose our primitive operators are ∼ and ∧ and the derivation system is A2
from E3.4 on p. 77. Present a complete demonstration of adequacy for this
derivation system, with all the definitions and theorems. You may simply
appeal to the text for results that require no change.

10.3 Quantificational Adequacy: Basic Version

As promised, the demonstration of quantificational adequacy is parallel to what we
have seen. Return to a quantificational language and to our regular quantificational
semantic and derivation notions. The goal is to show that if Γ ⊨ P, then Γ ⊢ P. Certain complications are avoided if we suppose that the language L′ includes infinitely
many constants not in Γ, and does not include the symbol = for equality. The constants not already in Γ are required for the construction of our big sets. And without
= in the language, the model specification is simplified. We will work through the
basic argument in this section and, dropping constraints on the language, return to
the general case in the next. If you are confused at any stage, it may help to refer
back to the parallel section for the sentential case.

Before launching into the main argument, it will be helpful to have a preliminary
theorem. Where D = ⟨B1 ... Bn⟩ is an AD derivation, and Γ′ = {C1 ... Cn} is a
set of formulas, for some constant a and variable x, say D^a_x = ⟨B1^a_x ... Bn^a_x⟩ and
Γ′^a_x = {C1^a_x ... Cn^a_x}. By induction on the line numbers in D, we show,

T10.12. If D is a derivation from Γ′, and x is a variable that does not appear in D,
then for any constant a, D^a_x is a derivation from Γ′^a_x.
Basis: B1 is either a member of Γ′ or an axiom.

(prem) If B1 is a member of Γ′, then B1^a_x is a member of Γ′^a_x; so ⟨B1^a_x⟩ is
a derivation from Γ′^a_x.

(eq) If B1 is an equality axiom, A5, A6, or A7, then it includes no constants; so B1 = B1^a_x; so B1^a_x is an equality axiom, and ⟨B1^a_x⟩ is a
derivation from Γ′^a_x.

(A1) If B1 is an instance of A1, then it is of the form P → (Q → P); so
B1^a_x is P^a_x → (Q^a_x → P^a_x); but this is an instance of A1; so if B1
is an instance of A1, then B1^a_x is an instance of A1, and ⟨B1^a_x⟩ is a
derivation from Γ′^a_x.

(A2) Homework.
(A3) Homework.


(A4) If B1 is an instance of A4, then it is of the form ∀vP → P^v_t, for
some variable v and term t that is free for v in P. So B1^a_x =
(∀vP → P^v_t)^a_x = (∀vP)^a_x → (P^v_t)^a_x. But since x does not appear in D, x ≠ v; so (∀vP)^a_x = ∀v(P^a_x). And by T8.7, (P^v_t)^a_x =
(P^a_x)^v_[t^a_x]. So B1^a_x = ∀v(P^a_x) → (P^a_x)^v_[t^a_x]; and since x is new to D and
t is free for v in P, t^a_x is free for v in P^a_x; so ∀v(P^a_x) → (P^a_x)^v_[t^a_x]
is an instance of A4; so if B1 is an instance of A4, then B1^a_x is an
instance of A4, and ⟨B1^a_x⟩ is a derivation from Γ′^a_x.

Assp: For any i, 1 ≤ i < k, ⟨B1^a_x ... Bi^a_x⟩ is a derivation from Γ′^a_x.
Show: ⟨B1^a_x ... Bk^a_x⟩ is a derivation from Γ′^a_x.

Bk is a member of Γ′, an axiom, or arises from previous lines by MP
or Gen. If Bk is a member of Γ′ or an axiom then, by reasoning as in
the basis, ⟨B1^a_x ... Bk^a_x⟩ is a derivation from Γ′^a_x. So two cases remain.

(MP) Homework.
(Gen) If Bk arises by Gen, then there are some lines in D,

    i    P → Q
         ...
    k    P → ∀vQ        i Gen

where i < k, v is not free in P, and Bk = P → ∀vQ. By assumption, (P → Q)^a_x is a member of the derivation ⟨B1^a_x ... Bk-1^a_x⟩ from
Γ′^a_x; but (P → Q)^a_x is P^a_x → Q^a_x; and since x does not appear in D,
it cannot be that x is the same variable as v; so v is not free in P^a_x;
so P^a_x → ∀v(Q^a_x) = (P → ∀vQ)^a_x follows in this new derivation by
Gen. So ⟨B1^a_x ... Bk^a_x⟩ is a derivation from Γ′^a_x.

So ⟨B1^a_x ... Bk^a_x⟩ is a derivation from Γ′^a_x.
Indct: For any n, ⟨B1^a_x ... Bn^a_x⟩ is a derivation from Γ′^a_x.

The reason this works is that none of the justifications change: switching x for a
leaves each line justified for the same reasons as before. The only sticking point
may be the case for A4. But we did the real work for this by induction in T8.7.
And that result should be intuitive once we see what it says. Given this, the rest is
straightforward.

*E10.13. Finish the cases for A2, A3, and MP to complete the proof of T10.12. You
should set up the complete demonstration, but may refer to the text for cases
completed there, as the text refers cases to homework.


E10.14. Where Γ′ = {Ab} and D is as follows,

    1. ∀x∼Ax → ∼Ab                              A4
    2. (∀x∼Ax → ∼Ab) → (∼∼Ab → ∼∀x∼Ax)          T3.13
    3. ∼∼Ab → ∼∀x∼Ax                            2,1 MP
    4. Ab → ∼∼Ab                                T3.11
    5. Ab → ∼∀x∼Ax                              4,3 T3.2
    6. Ab                                       prem
    7. ∼∀x∼Ax                                   5,6 MP
    8. ∃xAx                                     7 abv

apply T10.12 to show that D^b_y is a derivation from Γ′^b_y. Do any of the justifications change? Explain.

10.3.1 Basic Idea

As before, our main argument turns on the idea that every consistent set has a model.
Thus we begin with a definition and a theorem.

Con A set Γ of formulas is consistent iff there is no formula A such that Γ ⊢ A
and Γ ⊢ ∼A.

So a set of formulas is consistent just in case there is no way to derive a contradiction
from it. Of course, now we are working with full quantificational languages, and so
with our complete quantificational derivation systems.

For the following theorem, notice that Γ is a set of formulas, and P a sentence
(a distinction without a difference in the sentential case). Again as before,

T10.6. For any set of formulas Γ and sentence P, if Γ ⊬ ∼P, then Γ ∪ {P} is
consistent.

For some sentence P, suppose Γ ⊬ ∼P but Γ ∪ {P} is not consistent. From
the latter, there is some formula A such that Γ ∪ {P} ⊢ A and Γ ∪ {P} ⊢
∼A; since P is a sentence, it has no free variables; so by DT, Γ ⊢ P → A
and Γ ⊢ P → ∼A; by T3.10, ⊢ ∼∼P → P; so by T3.2, Γ ⊢ ∼∼P → A
and Γ ⊢ ∼∼P → ∼A; but by A3, ⊢ (∼∼P → ∼A) → ((∼∼P → A) →
∼P); so by two instances of MP, Γ ⊢ ∼P. This is impossible; reject the
assumption: if Γ ⊬ ∼P, then Γ ∪ {P} is consistent.

Insofar as P is required to be a sentence, the restriction on applications of DT is sure
to be met: since P has no free variables, no application of Gen is to a variable free
in P. So T10.6 does not apply to arbitrary formulas.


To the extent that T10.6 plays a direct role in our basic argument for adequacy,
this point that it does not apply to arbitrary formulas might seem to present a problem
about reaching our general result, that if Γ ⊨ P then Γ ⊢ P, which is supposed to
apply in the arbitrary case. But there is a way around the problem. For any formula
P, let its (universal) closure P^c be P prefixed by a universal quantifier for every
variable free in P. To make P^c unique, for some enumeration of variables, x1, x2 ...,
let the quantifiers be in order of ascending subscripts. So if P has no free variables,
P^c = P; if x1 is free in P, then P^c = ∀x1 P; if x1 and x3 are free in P, then
P^c = ∀x1∀x3 P; and so forth. So for any formula P, P^c is a sentence. As it turns
out, we will be able to argue about arbitrary formulas P by using their closures P^c
as intermediaries.
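The closure operation is mechanical enough to sketch in code. This is a sketch under assumptions not in the text: formulas are nested Python tuples, variables are strings 'x1', 'x2', ..., and only ∼, →, and ∀ appear as operators.

```python
def free_vars(f):
    """Free variables of f. Formulas are ('all', v, P), ('~', P),
    ('->', P, Q), or atomic tuples like ('R', 'x1', 'a2')."""
    tag = f[0]
    if tag == 'all':
        return free_vars(f[2]) - {f[1]}       # v is bound in the body
    if tag == '~':
        return free_vars(f[1])
    if tag == '->':
        return free_vars(f[1]) | free_vars(f[2])
    # atomic: keep just the terms that are variables
    return {t for t in f[1:] if t.startswith('x')}

def closure(f):
    """P^c: prefix a universal quantifier for each free variable, in
    order of ascending subscripts (so the highest subscript is innermost)."""
    for v in sorted(free_vars(f), key=lambda s: int(s[1:]), reverse=True):
        f = ('all', v, f)
    return f
```

So closure applied to Rx1x3 yields ∀x1∀x3 Rx1x3, and a sentence (no free variables) is returned unchanged.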
Suppose that the members of Γ ∪ {∼P^c} = Σ′ are formulas of L′. Then it will
be sufficient for us to show that any consistent set of this sort has a model.

(?) For any consistent set Σ′ of formulas in L′, there is an interpretation M′ such
that M′[Σ′] = T.

Again, this sets up the key connection between syntactic and semantic notions,
between consistency on the one hand and truth on the other, that we will need for
adequacy. Supposing (?), we have the following,
1. If Γ ∪ {∼P^c} has a model, then Γ ⊭ P.
2. If Γ ∪ {∼P^c} is consistent, then Γ ∪ {∼P^c} has a model.    (?)
3. If Γ ∪ {∼P^c} is not consistent, then Γ ⊢ P.

(2) is just (?). (1) and (3) are easy. (1) is by semantic reasoning: suppose Γ ∪ {∼P^c}
has a model; then there is some M such that M[Γ ∪ {∼P^c}] = T; so M[Γ] = T and
M[∼P^c] = T; from the latter, by TI, for arbitrary d, M_d[∼P^c] = S; so by SF(∼),
M_d[P^c] ≠ S; so by TI, M[P^c] ≠ T; so by repeated applications of T7.6 on page 363,
M[P] ≠ T; so M[Γ] = T and M[P] ≠ T; so by QV, Γ ⊭ P. (3) is by syntactic
reasoning: suppose Γ ∪ {∼P^c} is not consistent; then since ∼P^c is a sentence, by
an application of T10.6, Γ ⊢ ∼∼P^c; but by T3.10, ⊢ ∼∼P^c → P^c; so by MP,
Γ ⊢ P^c; and by repeated applications of A4 and MP, Γ ⊢ P.

Now suppose Γ ⊨ P; then from (1), Γ ∪ {∼P^c} does not have a model; so by
(2), Γ ∪ {∼P^c} is not consistent; so by (3), Γ ⊢ P. So if Γ ⊨ P, then Γ ⊢ P,
and this is the result we want. T7.6, according to which M[P] = T iff M[∀xP] = T,
along with A4 and Gen*, which let us derive P from ∀xP and vice versa, bridge
between P and P^c, so that our suppositions about formulas can be converted into
claims about sentences and then back again.
Again, it remains to show (?), that every consistent set Σ′ of formulas has a
model. And, again, our strategy is to find a big set related to Σ′ which can be used
to specify a model for Σ′.

10.3.2 Gödel Numbering

As before, in constructing our big sets, we will want to line up expressions serially,
one after another. The method merely expands our approach for the sentential case.

T10.7. There is an enumeration Q1, Q2 ... of all the formulas, terms, and the like,
in L′.

The proof is again by construction: we develop a method by which all the
expressions of L′ can be lined up. Then the collection of all formulas, taken
in that order, is an enumeration of all formulas; the collection of all terms,
taken in that order, is an enumeration of all terms; and so forth.

Insofar as the collections of variable symbols, constant symbols, function symbols,
sentence letters, and relation symbols in any quantificational language are countable, they are capable of being sorted into series, x0, x1 ... and a0, a1 ... and h^n_0,
h^n_1 ... and R^n_0, R^n_1 ... for variables, constants, function symbols, and relation symbols, respectively (where we think of sentence letters as 0-place relation symbols).
Supposing that they are sorted into such series, begin by assigning to each symbol
in L′ an integer g[·] called its Gödel number.

    a.  g[(] = 3                f.  g[∀] = 13
    b.  g[)] = 5                g.  g[xi] = 15 + 10i
    c.  g[∼] = 7                h.  g[ai] = 17 + 10i
    d.  g[→] = 9                i.  g[h^n_i] = 19 + 10(2^n · 3^i)
    *e. g[=] = 11               j.  g[R^n_i] = 21 + 10(2^n · 3^i)

Officially, we do not yet have = in the language, but it is easy enough to leave
it out for now. So, for example, g[x0] = 15, g[x1] = 15 + 10·1 = 25, and
g[R^2_1] = 21 + 10(2^2 · 3^1) = 141.
To see that each symbol gets a distinct Gödel number, first notice that numbers
in different categories cannot overlap: each of (a)-(f) is obviously distinct and
≤ 13. But (g)-(j) are all greater than 13 and, when divided by 10, the remainder is
5 for variables, 7 for constants, 9 for function symbols, and 1 for relation symbols;
so variables, constants, function symbols, and relation symbols all get different
numbers. Second, different symbols get different numbers within the categories.
This is obvious except in cases (i) and (j). For these we need to see that each n/i
combination results in a different multiplier.

Suppose this is not so, that there are some combinations n, i and m, j such that 2^n ·
3^i = 2^m · 3^j but n ≠ m or i ≠ j. If n = m then, dividing both sides by 2^n, we
get 3^i = 3^j, so that i = j. So suppose n ≠ m and, without loss of generality, that
n > m. Dividing each side by 2^m and 3^i, we get 2^(n-m) = 3^(j-i); since n > m, n - m is
a positive integer; so 2^(n-m) is > 1 and even. But 3^(j-i) is either < 1 or odd. Reject the
assumption: if 2^n · 3^i = 2^m · 3^j, then n = m and i = j.

So each n/i combination gets a different multiplier, and we conclude that each symbol gets a different Gödel number. (This result is a special case of the Fundamental
Theorem of Arithmetic, treated in the 'arithmetic for Gödel numbering' and 'more
arithmetic for Gödel numbering' references.)
Now, as before, assign Gödel numbers to expressions as follows: where α0, α1
... αn are the symbols, in order from left to right, in some expression Q,

    g[Q] = 2^g[α0] · 3^g[α1] · 5^g[α2] · ... · π^g[αn]

where 2, 3, 5 ... π are the first n + 1 prime numbers. So, for example, g[∼∼R^2_1 x0x1] =
2^7 · 3^7 · 5^141 · 7^15 · 11^25, a relatively large integer (one with over 130 digits)!
All the same, it is an integer, and different expressions get different Gödel numbers.
Given a Gödel number, we can find the corresponding expression by finding its prime
factorization; then if there are seven 2s in the factorization, the first symbol is ∼; if
there are seven 3s, the second symbol is ∼; if there are one hundred forty-one 5s,
the third symbol is R^2_1; and so forth. Notice that numbers for individual symbols are
odd, where numbers for expressions are even.

So we can take the set of all formulas, the set of all terms, or whatever, and order
their members according to their Gödel numbers, so that there is an enumeration
Q1, Q2 ... of all formulas, terms, and so forth. And this is what was to be shown.
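The numbering scheme is easy to mechanize; a minimal sketch follows. The function names (g_var, g_rel, and so on) are illustrative only; the codes follow the table above, and decoding recovers the symbol codes of an expression by trial division, relying on the uniqueness of prime factorization.

```python
def next_prime(p):
    """Smallest prime greater than p (naive trial division)."""
    q = p + 1
    while any(q % d == 0 for d in range(2, q)):
        q += 1
    return q

def g_var(i):                      # g[x_i] = 15 + 10i
    return 15 + 10 * i

def g_rel(n, i):                   # g[R^n_i] = 21 + 10(2^n * 3^i)
    return 21 + 10 * (2 ** n * 3 ** i)

G_TILDE = 7                        # g[~], from the table

def g_expr(symbol_codes):
    """g[Q] = 2^g[a0] * 3^g[a1] * 5^g[a2] * ... over ascending primes."""
    code, p = 1, 2
    for c in symbol_codes:
        code *= p ** c
        p = next_prime(p)
    return code

def decode(n):
    """Recover the symbol codes from an expression's Gödel number
    by counting how often each prime divides it."""
    codes, p = [], 2
    while n > 1:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        codes.append(e)
        p = next_prime(p)
    return codes
```

Python's arbitrary-precision integers make the very large products unproblematic: g_rel(2, 1) is 141, and the expression ∼∼R^2_1 x0x1 encodes as g_expr([7, 7, 141, 15, 25]), from which decode recovers exactly those codes.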
E10.15. Find Gödel numbers for each of the following. Treat the first as a simple
symbol. (For the last, you need not do the calculation!)

    R^2_3        h^1_1 x1        ∀x2 R^2_1 a2x2

E10.16. Determine the objects that have the following Gödel numbers.

    61        2^13 · 3^15 · 5^3 · 7^15 · 11^11 · 13^15 · 17^5


More Arithmetic Relevant to Gödel Numbering

G3. For any i > 1, if i is the product of the primes p1, p2 ... pa, then no distinct collection of primes q1, q2 ... qb is such that i is the product of them. (The Fundamental
Theorem of Arithmetic)

Basis: The first integer > 1 is 2; but the only collection of primes such that their
product is equal to 2 is the collection containing just 2 itself; so no distinct
collection of primes is such that 2 is the product of them.
Assp: For any i, 1 ≤ i < k, if i is the product of primes p1 ... pa, then no distinct
collection of primes q1 ... qb is such that i is the product of them.
Show: k is such that if it is the product of the primes p1 ... pa, then no distinct
collection of primes q1 ... qb is such that k is the product of them.
Suppose there are distinct collections of primes p1 ... pa and q1 ... qb such
that k = p1 · ... · pa = q1 · ... · qb; divide out terms common to both lists
of primes; then for some subclasses of the original lists, n = p1 · ... · pc =
q1 · ... · qd, where no member of p1 ... pc is a member of q1 ... qd and
vice versa (of course this p1 may be distinct from the one in the original list,
and so forth). So p1 ≠ q1; suppose, without loss of generality, that p1 > q1;
and let m = q1(n/q1 - n/p1) = n - (q1/p1)n = n - q1 · p2 · ... · pc.
Some preliminary results: (i) m < n ≤ k; so m < k. Further, n/q1 and
n/p1 are integers, with the first greater than the second; so the difference is
an integer > 0; any prime is > 1; so q1 is > 1; so the product of q1 and
(n/q1 - n/p1) is > 1; so m > 1. So the inductive assumption applies to m.
(ii) q1 divides n and q1 divides q1 · p2 · ... · pc; so (n - q1 · p2 · ... · pc)/q1
is an integer; so m/q1 is an integer, and q1 divides m. (iii) (p1 - q1)/q1 =
p1/q1 - 1; since p1 is prime, this is no integer; so q1 does not divide (p1 - q1).
Notice that m = (p1 - q1)(n/p1); either p1 - q1 = 1 or it has some prime
factorization, and n/p1 has a prime factorization, p2 · ... · pc; the product
of the factorization(s) is a prime factorization of m. Given the cancellation of
common terms to get n, q1 is not a member of p2 ... pc; by (iii), q1 is
not a member of the factorization of p1 - q1; so q1 is not a member of this
factorization of m. By (ii), q1 divides m, and however many times it goes into
m, by (G1), that number has a prime factorization; the product of q1 and this
factorization is a prime factorization of m; so q1 is a member of some prime
factorization of m. But by (i), the inductive assumption applies to m; so m
has only one prime factorization. Reject the assumption: there are no distinct
collections of primes, p1 ... pa and q1 ... qb, such that k = p1 · ... · pa =
q1 · ... · qb.
Indct: For any i > 1, if i is the product of the primes p1, p2 ... pa, then no distinct
collection of primes q1, q2 ... qb is such that i is the product of them.

10.3.3 The Big Set

This section, along with the next, constitutes the heart of our demonstration of adequacy. Last time, to build our big set we added formulas to Σ′ to form a Σ″ that was
both maximal and consistent. A set of formulas is consistent just in case there is no
formula A such that both A and ∼A are consequences. To accommodate restrictions
from T10.6, maximality is defined in terms of sentences.

Max A set Σ of formulas is maximal iff for any sentence A, Σ ⊢ A or Σ ⊢ ∼A.

This time, however, we need an additional property for our big sets. If a maximal
and consistent set has ∀xP as a member, then it has P^x_a as a consequence for every
constant a. (Be clear about why this is so.) But in a maximal and consistent set, the
status of a universal ∀xP is not always reflected at the level of its instances. Thus,
for example, though a set has P^x_a as a consequence for every constant a, it may
consistently include ∼∀xP as well, for it may be that a universal is falsified by
some individual to which no constant is assigned. But when we come to showing by
induction that there is a model for our big set, it will be important that the status of a
universal is reflected at the level of its instances. We guarantee this by building the
set to satisfy the following condition.

Scgt A set Σ of formulas is a scapegoat set iff for any sentence ∼∀xP, if Σ ⊢
∼∀xP, then there is some constant a such that Σ ⊢ ∼P^x_a.

Equivalently, Σ is a scapegoat set just in case any sentence ∃xP is such
that if Σ ⊢ ∃xP, then there is some constant a such that Σ ⊢ P^x_a. In a scapegoat
set, we assert the existence of a particular individual (a scapegoat) corresponding to
any existential claim. Notice that, since ∼∀xP is a sentence, ∼P^x_a is a sentence too.

So we set out to construct from Σ′ a maximal, consistent, scapegoat set. As
before, the idea is to line the formulas up and consider them for inclusion one by one. In addition, this time, we consider an enumeration of constants c1, c2 ..., and
for any included sentence of the form ∼∀xP, we include ∼P^x_c, where c is a constant
that does not so far appear in the construction. Notice that if, as we have assumed,
L′ includes infinitely many constants not in Γ, there are sure to be infinitely many
constants not already in a Σ′ built on Γ.
CnsΣ″ Construct Σ″ from Σ′ as follows: By T10.7, there is an enumeration, Q1,
Q2 ..., of all the sentences in L′ and also an enumeration c1, c2 ... of constants not in Σ′. Let Σ0 = Σ′. Then for any i > 0, let

    Σi* = Σi-1              if Σi-1 ⊢ ∼Qi
    Σi* = Σi-1 ∪ {Qi}       if Σi-1 ⊬ ∼Qi

and,

    Σi = Σi* ∪ {∼P^x_c}     if Qi is of the form ∼∀xP, with c the first
                            constant not in Σi*
    Σi = Σi*                otherwise

then,

    Σ″ = ⋃i≥0 Σi            that is, Σ″ is the union of all the Σi's

Beginning with set Σ0 (= Σ′), we consider the sentences in the enumeration Q1,
Q2 ... one by one, adding a sentence just in case its negation is not already derivable.
In addition, if Qi is of the sort ∼∀xP, we add an instance of it, using a new constant.
This time, Σi* functions as an intermediate set. Observe that if c is not in Σi*, then
c is not in ∼∀xP. Σ″ contains all the members of Σ′, together with all the formulas
added this way.
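The staging of CnsΣ″ can be pictured in code. This is only a schematic sketch: derivability is supplied as an oracle parameter (it is not in general decidable, so this is a picture of the bookkeeping, not an effective procedure), and the tuple syntax and the subst helper are assumptions for illustration.

```python
def subst(f, x, c):
    """P^x_c: replace free occurrences of variable x by constant c."""
    tag = f[0]
    if tag == 'all':
        return f if f[1] == x else ('all', f[1], subst(f[2], x, c))
    if tag == '~':
        return ('~', subst(f[1], x, c))
    if tag == '->':
        return ('->', subst(f[1], x, c), subst(f[2], x, c))
    return (tag,) + tuple(c if t == x else t for t in f[1:])

def construct(sigma_prime, sentences, fresh_constants, derives):
    """Stage through Q1, Q2, ...; derives(gamma, f) plays gamma |- f."""
    sigma = list(sigma_prime)
    fresh = iter(fresh_constants)
    for q in sentences:
        if derives(sigma, ('~', q)):
            continue                          # Sigma_i = Sigma_i-1
        sigma.append(q)                       # Sigma_i* adds Qi
        if q[0] == '~' and q[1][0] == 'all':  # Qi of the form ~AxP:
            x, body = q[1][1], q[1][2]        # add a scapegoat instance
            sigma.append(('~', subst(body, x, next(fresh))))
    return sigma
```

With a stub oracle that never derives a negation, a sentence of the form ∼∀x1 Bx1 is added along with its witness instance ∼Bc1, built from the first fresh constant.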
It remains to show that if Σ′ is consistent, then Σ″ is a maximal, consistent,
scapegoat set.

T10.8. If Σ′ is consistent, then Σ″ is a maximal, consistent, scapegoat set.

The proof comes to showing (a) Σ″ is maximal; (b) if Σ′ is consistent, then
each Σi is consistent; from this, (c) if Σ′ is consistent, then Σ″ is consistent; and (d) if Σ′ is consistent, then Σ″ is a scapegoat set. Suppose Σ′ is
consistent.

(a) Σ″ is maximal. Suppose Σ″ is not maximal. Then there is some sentence
Qi such that both Σ″ ⊬ Qi and Σ″ ⊬ ∼Qi. For this i, by construction,
each member of Σi-1 is in Σ″; so if Σi-1 ⊢ ∼Qi then Σ″ ⊢ ∼Qi; but
Σ″ ⊬ ∼Qi; so Σi-1 ⊬ ∼Qi; so by construction, Σi* = Σi-1 ∪ {Qi}; and
by construction again, Qi ∈ Σ″; so Σ″ ⊢ Qi. This is impossible; reject the
assumption: Σ″ is maximal.
(b) Each Σi is consistent. By induction on the series of Σi's.

Basis: Σ0 = Σ′ and Σ′ is consistent; so Σ0 is consistent.
Assp: For any i, 0 ≤ i < k, Σi is consistent.
Show: Σk is consistent.
Σk is either (i) Σk-1, (ii) Σk* = Σk-1 ∪ {Qk}, or (iii) Σk* ∪ {∼P^x_c}.

(i) Suppose Σk is Σk-1. By assumption, Σk-1 is consistent; so Σk is
consistent.

(ii) Suppose Σk is Σk* = Σk-1 ∪ {Qk}. Then by construction, Σk-1 ⊬
∼Qk; so, since Qk is a sentence, by T10.6, Σk-1 ∪ {Qk} is consistent;
so Σk* is consistent, and Σk is consistent.

(iii) Suppose Σk is Σk* ∪ {∼P^x_c} for c not in Σk* or in ∼∀xP. In this
case, as in (ii) above, Σk* is consistent; and, by construction, ∼∀xP ∈
Σk*; so Σk* ⊢ ∼∀xP. Suppose Σk is inconsistent; then there are
formulas A and ∼A such that Σk ⊢ A and Σk ⊢ ∼A; so Σk* ∪
{∼P^x_c} ⊢ A and Σk* ∪ {∼P^x_c} ⊢ ∼A. But since ∼P^x_c is a sentence,
the restriction on DT is met, and both Σk* ⊢ ∼P^x_c → A and Σk* ⊢
∼P^x_c → ∼A; by A3, ⊢ (∼P^x_c → ∼A) → ((∼P^x_c → A) → P^x_c);
so by two instances of MP, Σk* ⊢ P^x_c.
Consider some derivation of this result; by T10.12, we can switch c
for some variable v that does not occur in Σk* or in the derivation,
and the result is a derivation; so (Σk*)^c_v ⊢ (P^x_c)^c_v; but since c does not
occur in Σk* or in ∼∀xP, this is to say, Σk* ⊢ P^x_v; so by Gen*,
Σk* ⊢ ∀v(P^x_v); but x is not free in ∀v(P^x_v) and x is free for v in P^x_v,
so by T3.27, ⊢ ∀v(P^x_v) → ∀x(P^x_v)^v_x; so by MP, Σk* ⊢ ∀x(P^x_v)^v_x;
and since v is not a variable in P, it is not free in P and free for x in
P; so by T8.2, (P^x_v)^v_x = P; so Σk* ⊢ ∀xP.
But Σk* ⊢ ∼∀xP. So Σk* is inconsistent. This is impossible; reject
the assumption: Σk is consistent.

So Σk is consistent.
Indct: For any i, Σi is consistent.
(c) Σ″ is consistent. Suppose Σ″ is not consistent; then there is some A such
that Σ″ ⊢ A and Σ″ ⊢ ∼A. Consider derivations D1 and D2 of these results,
and the premises Qi ... Qj of these derivations. Where Qj is the last of these
premises in the enumeration of formulas, by the construction of Σ″, each of
Qi ... Qj must be a member of Σj; so D1 and D2 are derivations from Σj;
so Σj is inconsistent. But by the previous result, Σj is consistent. This is
impossible; reject the assumption: Σ″ is consistent.

(d) Σ″ is a scapegoat set. Suppose Σ″ ⊢ Qi, for Qi of the form ∼∀xP.
By (c), Σ″ is consistent; so Σ″ ⊬ ∼∼∀xP; which is to say, Σ″ ⊬ ∼Qi;
so Σi-1 ⊬ ∼Qi; so by construction, Σi* = Σi-1 ∪ {∼∀xP} and Σi =
Σi* ∪ {∼P^x_c}; so by construction, ∼P^x_c ∈ Σ″; so Σ″ ⊢ ∼P^x_c. So if
Σ″ ⊢ ∼∀xP, then Σ″ ⊢ ∼P^x_c, and Σ″ is a scapegoat set.
In a pattern that should be familiar by now, we guarantee maximal scapegoat sets
by including instances as required. The most difficult case is (iii) for consistency.
Having shown that Σk* ⊢ P^x_c for c not in Σk* or in P, we want to generalize to
show that Σk* ⊢ ∀xP. But, in our derivation systems, generalization is on variables,
not constants. To get the generalization we want, we first use T10.12 to replace c with
an arbitrary variable v. From this, we might have moved immediately to ∀xP by the
ND rule ∀I. However, in the above reasoning, we stick with the pattern of AD rules,
applying Gen*, and then T3.27 to switch bound variables, for the desired result, which
contradicts ∼∀xP.

E10.17. Let Σ′ = {∼∀xBx, Ca} and consider enumerations of sentences and extra
constants in L′ that begin Aa, Ba, ∼∀xCx ... and c1, c2 .... What are Σ0,
Σ1*, Σ1, Σ2*, Σ2, Σ3*, Σ3? Produce a model to show that the resultant set
Σ3 is consistent.

E10.18. Suppose some Σi-1 = {Ac2, ∀x(Ax → Bx)}. Show that Σi* is consistent, but Σi is not, if Qi = ∼∀xBx and we add ∼∀xBx with ∼Bc2 to form
Σi* and Σi. Why cannot this happen in the construction of Σ″?

10.3.4 The Model

We turn now to constructing the model M′ for Σ′. As it turns out, the construction is
simplified by our assumption that = does not appear in the language. A quantificational interpretation has a universe, with assignments to sentence letters, constants,
function symbols, and relation symbols.

CnsM′ Let the universe U be the set of positive integers, {1, 2 ...}. Then, where a
variable-free term consists just of function symbols and constants, consider
an enumeration t1, t2 ... of all the variable-free terms in L′. If tz is a constant, set M′[tz] = z. If tz = h^n ta ... tb for some function symbol h^n and
n variable-free terms ta ... tb, then let ⟨⟨a ... b⟩, z⟩ ∈ M′[h^n]. For a sentence letter S, let M′[S] = T iff Σ″ ⊢ S. And for a relation symbol R^n, let
⟨a ... b⟩ ∈ M′[R^n] iff Σ″ ⊢ R^n ta ... tb.³

³ It is common to let U just be the set of variable-free terms in L′, and the interpretation of a term be
itself. There is nothing the matter with this. However, working with the integers emphasizes continuity
with other models we have seen, and positions us for further results.


Thus, for example, where t1 and t3 from the enumeration of terms are constants and Σ′′ ⊢ Rt1t3, then M′[t1] = 1, M′[t3] = 3, and ⟨1, 3⟩ ∈ M′[R]. Given this, it should be clear why Rt1t3 comes out satisfied on M′. Put generally, where ta ... tb are constants, we set M′[ta] = a, and ... and M′[tb] = b; so by TA(c), for any variable assignment d, M′d[ta] = a, and ... and M′d[tb] = b. So by SF(r), M′d[Rn ta ... tb] = S iff ⟨a ... b⟩ ∈ M′[Rn]; by construction, iff Σ′′ ⊢ Rn ta ... tb. Just as in the sentential case, our idea is to make atomic sentences true on M′ just in case they are proved by Σ′′.
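The recipe in CnsM′ is mechanical enough to mimic in a few lines of code. The sketch below is a toy illustration of my own, not the text's official construction: "provability from Σ′′" is mocked by membership in a finite set of atomic facts, and the names build_model and proved_atoms are mine.

```python
# Toy mock-up of CnsM': interpret the z-th variable-free term as the
# integer z, and put a tuple in the extension of a relation symbol R
# just in case the (mocked) set Sigma'' proves the atom R ta ... tb.

def build_model(terms, proved_atoms):
    """terms: the enumeration t1, t2, ... of variable-free terms.
    proved_atoms: pairs (R, args) standing in for Sigma'' |- R ta ... tb."""
    interp = {t: z for z, t in enumerate(terms, start=1)}
    extension = {}
    for rel, args in proved_atoms:
        extension.setdefault(rel, set()).add(tuple(interp[t] for t in args))
    return interp, extension
```

So if t1 and t3 are constants and Rt1t3 is among the proved atoms, the resulting model puts ⟨1, 3⟩ in the extension of R, just as in the example above.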
Our aim has been to show that if Σ′ is consistent, then Σ′ has a model. We have constructed an interpretation M′, and now show which sentences are true on it. As in the sentential case, the main weight is carried by a preliminary theorem. And, as in the sentential case, the key is that we can appeal to special features of Σ′′, this time that it is a maximal, consistent, scapegoat set. Notice that B is a sentence.

T10.9. If Σ′ is consistent, then for any sentence B of L′, M′[B] = T iff Σ′′ ⊢ B.

Suppose Σ′ is consistent and B is a sentence of L′. By T10.8, Σ′′ is a maximal, consistent, scapegoat set. We begin with a preliminary result, which connects arbitrary variable-free terms to our treatment of constants in the example above: for any variable-free term tz and variable assignment d, M′d[tz] = z.

Suppose tz is a variable-free term and d is an arbitrary variable assignment. By induction on the number of function symbols in tz, M′d[tz] = z.

Basis: If tz has no function symbols, then it is a constant. In this case, by construction, M′[tz] = z; so by TA(c), M′d[tz] = z.

Assp: For any i, 0 ≤ i < k, if tz has i function symbols, then M′d[tz] = z.

Show: If tz has k function symbols, then M′d[tz] = z.

If tz has k function symbols, then it is of the form hn ta ... tb for function symbol hn and variable-free terms ta ... tb each with < k function symbols. By TA(f), M′d[tz] = M′d[hn ta ... tb] = M′[hn]⟨M′d[ta] ... M′d[tb]⟩; but by assumption, M′d[ta] = a, and ... and M′d[tb] = b; so M′d[tz] = M′[hn]⟨a ... b⟩. But since tz = hn ta ... tb is a variable-free term, by construction, ⟨⟨a ... b⟩, z⟩ ∈ M′[hn]; so we have M′d[tz] = M′[hn]⟨a ... b⟩ = z.

Indct: For any variable-free term tz, M′d[tz] = z.

Given this, we are ready to show, by induction on the number of operators in B, that M′[B] = T iff Σ′′ ⊢ B. Suppose B is a sentence.

Basis: If B is a sentence with no operators, then it is a sentence letter S, or an atomic Rn ta ... tb for relation symbol Rn and variable-free terms ta ... tb. In the first case, by construction, M′[S] = T iff Σ′′ ⊢ S. In the second case, by TI, M′[Rn ta ... tb] = T iff for arbitrary d, M′d[Rn ta ... tb] = S; by SF(r), iff ⟨M′d[ta] ... M′d[tb]⟩ ∈ M′[Rn]; since ta ... tb are variable-free terms, by the above result, iff ⟨a ... b⟩ ∈ M′[Rn]; by construction, iff Σ′′ ⊢ Rn ta ... tb. In either case, then, M′[B] = T iff Σ′′ ⊢ B.

Assp: For any i, 0 ≤ i < k, if a sentence B has i operator symbols, then M′[B] = T iff Σ′′ ⊢ B.

Show: If a sentence B has k operator symbols, then M′[B] = T iff Σ′′ ⊢ B.

If B has k operator symbols, then it is of the form ∼P, P → Q, or ∀xP, for variable x and P and Q with < k operator symbols.

(∼) Suppose B is ∼P. Homework. Hint: given T8.6, your reasoning may be very much as in the sentential case.

(→) Suppose B is P → Q. Homework.

(∀) Suppose B is ∀xP. Then since B is a sentence, x is the only variable that could be free in P.

(i) Suppose M′[B] = T but Σ′′ ⊬ B; from the latter, Σ′′ ⊬ ∀xP; since Σ′′ is maximal, Σ′′ ⊢ ∼∀xP; and since Σ′′ is a scapegoat set, for some constant c, Σ′′ ⊢ ∼P^x_c; so by consistency, Σ′′ ⊬ P^x_c; but P^x_c is a sentence; so by assumption, M′[P^x_c] ≠ T; so by TI, for some d, M′d[P^x_c] ≠ S; but, where c is some ta, by construction, M′[c] = a; so by TA(c), M′d[c] = a; so, since c is free for x in P, by T10.2, M′d(x|a)[P] ≠ S; so by SF(∀), M′d[∀xP] ≠ S; so by TI, M′[∀xP] ≠ T; and this is just to say, M′[B] ≠ T. But this is impossible; reject the assumption: if M′[B] = T, then Σ′′ ⊢ B.

(ii) Suppose Σ′′ ⊢ B but M′[B] ≠ T; from the latter, M′[∀xP] ≠ T; so by TI, there is some d such that M′d[∀xP] ≠ S; so by SF(∀), there is some a ∈ U such that M′d(x|a)[P] ≠ S; but for variable-free term ta, by our above result, M′d[ta] = a, and since ta is variable-free, it is free for x in P; so by T10.2, M′d[P^x_ta] ≠ S; so by TI, M′[P^x_ta] ≠ T; but P^x_ta is a sentence; so by assumption, Σ′′ ⊬ P^x_ta; so by the maximality of Σ′′, Σ′′ ⊢ ∼P^x_ta; but ta is free for x in P, so by A4, ⊢ ∀xP → P^x_ta; and by T3.13, ⊢ (∀xP → P^x_ta) → (∼P^x_ta → ∼∀xP); so by a couple instances of MP, Σ′′ ⊢ ∼∀xP; so by the consistency of Σ′′,

Σ′′ ⊬ ∀xP; which is to say, Σ′′ ⊬ B. This is impossible; reject the assumption: if Σ′′ ⊢ B, then M′[B] = T.

If B has k operator symbols, then M′[B] = T iff Σ′′ ⊢ B.

Indct: For any sentence B, M′[B] = T iff Σ′′ ⊢ B.
So if Σ′ is consistent, then for any sentence B of L′, M′[B] = T iff Σ′′ ⊢ B. We are now just one step away from (⋆). It will be easy to see that M′[Σ′] = T, and so to reach the final result.

E10.19. Complete the (∼) and (→) cases to complete the demonstration of T10.9. You should set up the complete demonstration, but may refer to the text for cases completed there, as the text refers cases to homework.

10.3.5 Final Result

And now we are in a position to get the final result. This works just as before. First,

T10.10. If Σ′ is consistent, then M′[Σ′] = T.    (⋆)

Suppose Σ′ is consistent, but M′[Σ′] ≠ T. From the latter, there is some formula B ∈ Σ′ such that M′[B] ≠ T. Since B ∈ Σ′, by construction, B ∈ Σ′′, so Σ′′ ⊢ B; so, where B^c is the universal closure of B, by application of Gen* as necessary, Σ′′ ⊢ B^c; so since Σ′ is consistent, by T10.9, M′[B^c] = T; so by applications of T7.6 as necessary, M′[B] = T. This is impossible; reject the assumption: if Σ′ is consistent, then M′[Σ′] = T.

Notice that this result applies to arbitrary sets of formulas. We are able to bridge between formulas and sentences by T10.6 and Gen*. But now we have the (⋆) that we have needed for adequacy.
So that is it! All we needed for the proof of adequacy was (⋆). And we have it. So here is the final argument. Suppose the members of Γ and P are formulas of L′.

T10.11. If Γ ⊨ P, then Γ ⊢ P.    (quantificational adequacy)

Suppose Γ ⊨ P but Γ ⊬ P. Say, for the moment, that Γ ⊢ ∼∼P^c; by T3.10, ⊢ ∼∼P^c → P^c; so by MP, Γ ⊢ P^c; so by repeated applications of A4 and MP, Γ ⊢ P; but this is impossible; so Γ ⊬ ∼∼P^c. Given this, since ∼P^c is a sentence, by T10.6, Γ ∪ {∼P^c} = Σ′ is consistent; so by T10.10, there is a model M′ constructed as above such that M′[Σ′] = T. So M′[Γ] = T and M′[∼P^c] = T; from the latter, by T8.6, M′[P^c] ≠ T; so by repeated
applications of T7.6, M′[P] ≠ T; so by QV, Γ ⊭ P. This is impossible; reject the assumption: if Γ ⊨ P then Γ ⊢ P.
Again, you should try to get the complete picture in your mind: The key is that consistent sets always have models. If Γ ∪ {∼P} is not consistent, then there is a derivation of P from Γ. So if there is no derivation of P from Γ, then Γ ∪ {∼P} is consistent and so must have a model, with the result that Γ ⊭ P. We get the key point, that consistent sets have models, by finding a relation between consistent sets and maximal, consistent, scapegoat sets. If a set is maximal and consistent and a scapegoat set, then it contains enough information to specify a model for the whole. The model for the big set then guarantees the existence of a model M for the original Γ. All of this is very much parallel to the sentential case.
E10.20. Consider a quantificational language L which has function symbols as usual but with ∧, ∼, and ∃ as primitive operators. Suppose axioms and rules are as in A4 of E10.3 on p. 464. You may suppose there is no symbol for equality, and there are infinitely many constants not in Γ. Provide a complete demonstration that A4 is adequate. You may appeal to any results from the text whose demonstration remains unchanged, but should recreate parts whose demonstration is not the same.

Hints: As preliminaries you will need revised versions of DT and T10.12. In addition, a few quick theorems for derivations, along with an analog to one side of T7.6, might be helpful,

(a) ⊢ ∃yP^x_y → ∃xP    y free for x in P and not free in ∃xP

(b) ⊢ ∼∃xP → ∼∃yP^x_y    y free for x in P and not free in ∃xP

(c) P^x_v ⊢ ∃xP    use ∃E with Q some X ∧ ∼X; note that ⊢ ∼(X ∧ ∼X)

(7.6*) If I[∼∃x∼P] = T then I[P] = T

Then redefine key notions (such as scapegoat set) in terms of the existential quantifier, so that you can work cases directly within the new system. Say P^e is the existential closure of P. Note that ∼(∼P)^e is equivalent to P^c (imagine replacing all the added universal quantifiers in P^c with ∼∃x∼ and using DN on inner double tildes). This will help with T10.10 and T10.11.

10.4 Quantificational Adequacy: Full Version

So far, we have shown that if Γ ⊨ P, then Γ ⊢ P where the members of Γ and P are formulas of L′. Now allow that the members of Γ and P are in an arbitrary quantificational language L. Then we shall require not (⋆) with application just to L′, but the more general,

(⋆⋆) For any consistent set of formulas Σ, there is an interpretation M such that M[Σ] = T.
Given this, reasoning is exactly as before.

1. If Γ ∪ {∼P^c} has a model, then Γ ⊭ P
2. If Γ ∪ {∼P^c} is consistent, then Γ ∪ {∼P^c} has a model    (⋆⋆)
3. If Γ ∪ {∼P^c} is not consistent, then Γ ⊢ P

Reasoning for (1) and (3) remains the same. (2) is (⋆⋆). Now suppose Γ ⊨ P; then from (1), Γ ∪ {∼P^c} does not have a model; so by (2), Γ ∪ {∼P^c} is not consistent; so by (3), Γ ⊢ P. So if Γ ⊨ P, then Γ ⊢ P. Supposing that (⋆⋆) has application to arbitrary sets of formulas, the result has application to arbitrary premises and conclusion. So we are left with two issues relative to our reasoning from before: L might lack the infinitely many constants not in the premises, and L might include equality.

10.4.1 Adding Constants

Suppose L does not have infinitely many constants not in Σ. This can happen in different ways. Perhaps L simply does not have infinitely many constants. Or perhaps the constants of L are a1, a2 ... and Σ = {Ra1, Ra2 ...}; then L has infinitely many constants, but there are not any constants in L that do not appear in Σ. And we need the extra constants for construction of the maximal, consistent, scapegoat set. To avoid this sort of worry, we simply add infinitely many constants to form a language L′ out of L.

CnsL′ Where L is a language whose constants are some of a1, a2 ..., let L′ be like L but with the addition of new constants c1, c2 ....

By reasoning as in the countability reference on p. 32, insofar as they can be lined up, a1, c1, a2, c2 ..., the collection of constants remains countable, so that L′ remains

a perfectly legitimate quantificational language. Clearly, every formula of L remains a formula of L′. Thus, where Σ is a set of formulas in language L, let Σ′ be like Σ except that its members are formulas of language L′.
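The interleaving that keeps the constants countable is easy to exhibit concretely. The generator below is my own illustration (only the names a_n and c_n follow the text):

```python
from itertools import count, islice

def constants():
    # line up the old constants a1, a2, ... with the new c1, c2, ...
    # as a1, c1, a2, c2, ...: every constant gets a finite position
    for n in count(1):
        yield f"a{n}"
        yield f"c{n}"
```

Here a_n sits at position 2n − 1 and c_n at position 2n, so the combined collection is enumerated without gaps and remains countable.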
Our reasoning for (⋆) has application to sets of the sort Σ′. That is, where L′ has infinitely many constants not in Σ′, we have been able to find a maximal, consistent, scapegoat set Σ′′, and from this a model M′ for Σ′. But, given an arbitrary set Σ of formulas in L, we need that it has a model M. That is, we shall have to establish a bridge between Σ and Σ′, and between M′ and M. Thus, to obtain (⋆⋆), we show,
2a. If Σ is consistent, then Σ′ is consistent
2b. If Σ′ is consistent, then Σ′ has a model M′
2c. If Σ′ has a model M′, then Σ has a model M

(2b) is just (⋆) from before. And by a sort of hypothetical syllogism, together these yield (⋆⋆).
For the first result, we need that if Σ is consistent, then Σ′ is consistent. Of course, Σ and Σ′ contain just the same formulas; only the formulas of the one are in a language with extra constants. But there might be derivations in L′ from Σ′ that are not derivations in L from Σ. So we need to show that these extra derivations do not result in contradiction. For this, the overall idea is simple: If we can derive a contradiction from Σ′ in the enriched language then, by a modified version of that very derivation, we can derive a contradiction from Σ in the reduced language. So if there is no contradiction in the reduced language L, then there can be no contradiction in the enriched language L′. The argument is straightforward, given the preliminary result T10.12. Let Σ be a set of formulas in L, and Σ′ those same formulas in L′. We show,

T10.13. If Σ is consistent, then Σ′ is consistent.

Suppose Σ is consistent. If Σ′ is not consistent, then there is a formula A in L′ such that Σ′ ⊢ A and Σ′ ⊢ ∼A; but by T9.4, ⊢ A → (∼A → (A ∧ ∼A)); so by two instances of MP, Σ′ ⊢ A ∧ ∼A. So if Σ′ is not consistent, there is a derivation of a contradiction from Σ′. By induction on the number of new constants which appear in a derivation D = ⟨B1, B2 ...⟩, we show that no such D is a derivation of a contradiction from Σ′.

Basis: Suppose D contains no new constants and D is a derivation of some contradiction A ∧ ∼A from Σ′. Since D contains no new constants, every member of D is also a formula of L, so D = ⟨B1, B2 ...⟩ is

a derivation of A ∧ ∼A from Σ; so by T3.19 and T3.20 with MP, Σ ⊢ A and Σ ⊢ ∼A; so Σ is not consistent. This is impossible; reject the assumption: D is not a derivation of a contradiction from Σ′.

Assp: For any i, 0 ≤ i < k, if D contains i new constants, then it is not a derivation of a contradiction from Σ′.

Show: If D contains k new constants, then it is not a derivation of a contradiction from Σ′.

Suppose D contains k new constants and is a derivation of a contradiction A ∧ ∼A from Σ′. Where c is one of the new constants in D and x is a variable not in D, by T10.12, D^c_x is a derivation of (A ∧ ∼A)^c_x from Σ′^c_x. But all the members of Σ′ are in L; so c does not appear in any member of Σ′; so Σ′^c_x = Σ′. And (A ∧ ∼A)^c_x = A^c_x ∧ ∼A^c_x. So D^c_x is a derivation of a contradiction from Σ′. But D^c_x has k − 1 new constants and so, by assumption, is not a derivation of a contradiction from Σ′. This is impossible; reject the assumption: D is not a derivation of a contradiction from Σ′.

Indct: No derivation D is a derivation of a contradiction from Σ′.

So if Σ is consistent, then Σ′ is consistent. So if we have a consistent set of sentences in L, and convert to L′ with additional constants, we can be sure that the converted set is consistent as well.
With the extra constants in hand, all our reasoning goes through as before to show that there is a model M′ for Σ′. Officially, though, an interpretation for some sentences in L′ is not a model for some sentences in L: a model for sentences in L has assignments for the constants, function symbols, and relation symbols of L, where a model for L′ has assignments for the constants, function symbols, and relation symbols of L′. A model M′ for Σ′, then, is not the same as a model M for Σ. But it is a short step to a solution.

CnsM Let M be like M′ but without assignments to constants not in L.

M is an interpretation for language L. M and M′ have exactly the same universe of discourse, and exactly the same interpretations for all the symbols that are in L. It turns out that the evaluation of any formula in L is therefore the same on M as on M′; that is, for any P in L, M[P] = T iff M′[P] = T. Perhaps this is obvious. However, it is worthwhile to consider a proof. Thus we need the following matched pair of theorems (in fact, we show somewhat more than is necessary, as M and M′
differ only by assignments to constants). The proofs are straightforward, and mostly left as an exercise. I do just enough to get you started.

Suppose L′ extends L, and M′ is like M except that it makes assignments to constants, function symbols, and relation symbols in L′ but not in L.

T10.14. For any variable assignment d, and for any term t in L, Md[t] = M′d[t].

The argument is by induction on the number of function symbols in t. Let d be a variable assignment, and t a term in L.

Basis: Homework.

Assp: For any i, 0 ≤ i < k, if t has i function symbols, then Md[t] = M′d[t].

Show: If t has k function symbols, then Md[t] = M′d[t].

If t has k function symbols, then it is of the form hn t1 ... tn for function symbol hn and terms t1 ... tn with < k function symbols. By TA(f), Md[t] = Md[hn t1 ... tn] = M[hn]⟨Md[t1] ... Md[tn]⟩; similarly, M′d[t] = M′d[hn t1 ... tn] = M′[hn]⟨M′d[t1] ... M′d[tn]⟩. But by assumption, Md[t1] = M′d[t1], and ... and Md[tn] = M′d[tn]; and by construction, M[hn] = M′[hn]; so M[hn]⟨Md[t1] ... Md[tn]⟩ = M′[hn]⟨M′d[t1] ... M′d[tn]⟩; so Md[t] = M′d[t].

Indct: For any t in L, Md[t] = M′d[t].
T10.15. For any variable assignment d, and for any formula P in L, Md[P] = S iff M′d[P] = S.

The argument is by induction on the number of operator symbols in P. Let d be a variable assignment, and P a formula in L.

Basis: If P has no operator symbols, then it is a sentence letter S or an atomic Rn t1 ... tn for relation symbol Rn and terms t1 ... tn in L. In the first case, by SF(s), Md[S] = S iff M[S] = T; by construction, iff M′[S] = T; by SF(s), iff M′d[S] = S. In the second case, by SF(r), Md[P] = S iff Md[Rn t1 ... tn] = S; iff ⟨Md[t1] ... Md[tn]⟩ ∈ M[Rn]; similarly, M′d[P] = S iff M′d[Rn t1 ... tn] = S; iff ⟨M′d[t1] ... M′d[tn]⟩ ∈ M′[Rn]. But by T10.14, Md[t1] = M′d[t1], and ... and Md[tn] = M′d[tn]; and by construction, M[Rn] = M′[Rn]; so ⟨Md[t1] ... Md[tn]⟩ ∈ M[Rn] iff ⟨M′d[t1] ... M′d[tn]⟩ ∈ M′[Rn]; so Md[P] = S iff M′d[P] = S.

Assp: For any i, 0 ≤ i < k, and any variable assignment d, if P has i operator symbols, Md[P] = S iff M′d[P] = S.

Show: Homework.

Indct: For any formula P of L, Md[P] = S iff M′d[P] = S.
And now we are in a position to show that M is indeed a model for Σ. In particular, it is easy to show,

T10.16. If M′[Σ′] = T, then M[Σ] = T.

Suppose M′[Σ′] = T, but M[Σ] ≠ T. From the latter, there is some formula B ∈ Σ such that M[B] ≠ T; so by TI, for some d, Md[B] ≠ S; so by T10.15, M′d[B] ≠ S; so by TI, M′[B] ≠ T; and since B ∈ Σ, we have B ∈ Σ′; so M′[Σ′] ≠ T. This is impossible; reject the assumption: if M′[Σ′] = T, then M[Σ] = T.

T10.13, T10.10, and T10.16 together yield,

T10.17. If Σ is consistent, then Σ has a model M.

Suppose Σ is consistent; then by T10.13, Σ′ is consistent; so by T10.10, Σ′ has a model M′; so by T10.16, Σ has a model M.

And that is what we needed to recover the adequacy result for the generic language L. Where L does not include infinitely many constants not in Σ, we simply add them to form L′. Our theorems from this section ensure that the results go through as before.
*E10.21. Complete the proof of T10.14. You should set up the complete induction,
but may refer to the text, as the text refers to homework.

*E10.22. Complete the proof of T10.15. As usual, you should set up the complete
induction, but may refer to the text for cases completed there, as the text refers
to homework.

E10.23. Adapt the demonstration of T10.11 for the supposition that L need not be the same as L′. You may appeal to theorems from this section.

10.4.2 Accommodating Equality

Dropping the assumption that language L lacks the symbol '=' for equality results in another sort of complication. In constructing our models, where t1 and t3 from the enumeration of variable-free terms are constants and Σ′′ ⊢ Rt1t3, we set M′[t1] = 1, M′[t3] = 3, and ⟨1, 3⟩ ∈ M′[R]. But suppose R is the equal sign, '='; then by our procedure, ⟨1, 3⟩ ∈ M′[=]. But this is wrong! Where U = {1, 2 ...}, the proper interpretation of '=' is {⟨1, 1⟩, ⟨2, 2⟩ ...}, and ⟨1, 3⟩ is not a member of this set at all. So our procedure does not result in the specification of a legitimate model. The procedure works fine for relation symbols other than equality. There are no restrictions on assignments to other relation symbols, so nothing stops us from specifying interpretations as above. But there is a restriction on the interpretation of '='. So we cannot proceed blindly this way.

Here is the nub of a solution: Say Σ′′ ⊢ a1 = a3; then let the set {1, 3} be an element of U, and let M′[a1] = M′[a3] = {1, 3}. Similarly, if a2 = a4 and a4 = a5 are consequences of Σ′′, let {2, 4, 5} be a member of U, and M′[a2] = M′[a4] = M′[a5] = {2, 4, 5}. That is, let U consist of certain sets of integers, where these sets are specified by atomic equalities that are consequences of Σ′′. Then let M′[az] be the set of which z is a member. Given this, if Σ′′ ⊢ Rn ta ... tb, then include the tuple consisting of the set assigned to ta, and ... and the set assigned to tb, in the interpretation of Rn. So on the above interpretation of the constants, if Σ′′ ⊢ Ra1a4, then ⟨{1, 3}, {2, 4, 5}⟩ ∈ M′[R]. And if Σ′′ ⊢ a1 = a3, then ⟨{1, 3}, {1, 3}⟩ ∈ M′[=]. You should see why this is so. And it is just right! If {1, 3} ∈ U, then ⟨{1, 3}, {1, 3}⟩ should be in M′[=]. So we respond to the problem by a revision of the specification for CnsM′.
Let us now turn to the details. Put abstractly, the reason the argument in the basis of T10.9 works is that our model M′ assigns each t in the enumeration of variable-free terms an object m such that whenever Σ′′ ⊢ Rt, then m ∈ M′[R]; and for the universal case, it is important that for each object there is a constant to which it is assigned. We want an interpretation that preserves these features. And it will be important to demonstrate that our specifications are coherent. A model consists of a universe U, along with assignments to constants, function symbols, sentence letters, and relation symbols. We take up these elements, one after another.

The universe. The elements of our universe U are to be certain sets of integers.⁴ Consider an enumeration t1, t2 ... of all the variable-free terms in L′, and let there be a relation ≈ on the set {1, 2 ...} of positive integers such that i ≈ j iff Σ′′ ⊢ ti = tj. Let n̄ be the set of integers which stand in the ≈ relation to n, that is, n̄ = {z | z ≈ n}. So whenever z ≈ n, then z ∈ n̄. The universe U of M′ is then the collection of all these sets, that is,

CnsM′ For each integer greater than or equal to one, the universe includes the class corresponding to it. U = {n̄ | n ≥ 1}.

The way this works is really quite simple. If according to Σ′′, t1 equals only itself, then the only z such that z ≈ 1 is 1; so 1̄ = {1}, and this is a member of U. If, according to Σ′′, t1 equals just itself and t2, then 1 ≈ 2 so that 1̄ = 2̄ = {1, 2}, and this set is a member of U. If, according to Σ′′, t1 equals itself, t2, and t3, then 1 ≈ 2 ≈ 3 so that 1̄ = 2̄ = 3̄ = {1, 2, 3}, and this set is a member of U. And so forth.
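The passage from proved equalities to the classes n̄ is exactly the computation of the partition induced by an equivalence relation, and it can be sketched concretely. The helper below is my own illustration (a standard union-find, not the book's construction): the input pairs stand in for the facts Σ′′ ⊢ ti = tj, and closure under reflexivity, symmetry, and transitivity falls out of the data structure.

```python
# Form the classes n-bar over {1, ..., n} from a list of proved
# equalities (i, j), each standing in for Sigma'' |- ti = tj.

def equivalence_classes(n, equalities):
    parent = list(range(n + 1))          # union-find forest over 1..n

    def find(i):                         # root of i's class, with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in equalities:              # symmetry/transitivity via union
        parent[find(i)] = find(j)

    classes = {}
    for i in range(1, n + 1):            # reflexivity: every i lands somewhere
        classes.setdefault(find(i), set()).add(i)
    return sorted(classes.values(), key=min)
```

So if Σ′′ proves t1 = t2 and t2 = t3 (and no other nontrivial equalities among t1 ... t4), the partition is {1, 2, 3} together with the singleton {4}, matching the informal picture above.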
In order to make progress, it will be convenient to establish some facts about the ≈ relation, and about the sets in U. Recall that ≈ is a relation on the integers which is specified relative to expressions in Σ′′, so that i ≈ j iff Σ′′ ⊢ ti = tj. First we show that ≈ is reflexive, symmetric, and transitive.

Reflexivity. For any i, i ≈ i. By T3.32, ⊢ ti = ti; so Σ′′ ⊢ ti = ti; so by construction, i ≈ i.

Symmetry. For any i and j, if i ≈ j, then j ≈ i. Suppose i ≈ j; then by construction, Σ′′ ⊢ ti = tj; but by T3.33, ⊢ ti = tj → tj = ti; so by MP, Σ′′ ⊢ tj = ti; so by construction, j ≈ i.

Transitivity. For any i, j, and k, if i ≈ j and j ≈ k, then i ≈ k. Suppose i ≈ j and j ≈ k; then by construction, Σ′′ ⊢ ti = tj and Σ′′ ⊢ tj = tk; but by T3.34, ⊢ ti = tj → (tj = tk → ti = tk); so by two instances of MP, Σ′′ ⊢ ti = tk; so by construction, i ≈ k.

A relation which is reflexive, symmetric, and transitive is called an equivalence relation. As an equivalence relation, it divides or partitions the members of {1, 2 ...} into mutually exclusive classes such that each member of a class bears ≈ to each

⁴Again, it is common to let the universe be sets of terms in L′. There is nothing the matter with this. However, working with the integers emphasizes continuity with other models we have seen, and positions us for further results.

of the others in its partition, but not to integers outside the partition. More particularly, because ≈ is an equivalence relation, the collections n̄ = {z | z ≈ n} in U are characterized as follows.

Self-membership. For any n, n ∈ n̄. By reflexivity, n ≈ n; so by construction, n ∈ n̄. Corollary: Every integer i is a member of at least one class.

Uniqueness. For any i, i is an element of at most one class. Suppose i is an element of more than one class; then there are some m̄ and n̄ such that i ∈ m̄ and i ∈ n̄ but m̄ ≠ n̄. Since m̄ ≠ n̄ there is some j such that j ∈ m̄ and j ∉ n̄, or j ∈ n̄ and j ∉ m̄; without loss of generality, suppose j ∈ m̄ and j ∉ n̄. Since j ∈ m̄, by construction, j ≈ m; and since i ∈ m̄, by construction i ≈ m; so by symmetry, m ≈ i; so by transitivity, j ≈ i. Since i ∈ n̄, by construction i ≈ n; so by transitivity again, j ≈ n; so by construction, j ∈ n̄. This is impossible; reject the assumption: i is an element of at most one class.

Equality. For any m̄ and n̄, m ≈ n iff m̄ = n̄. (i) Suppose m ≈ n. Then by construction, m ∈ n̄; but by self-membership, m ∈ m̄; so by uniqueness, n̄ = m̄. (ii) Suppose m̄ = n̄; by self-membership, m ∈ m̄; so m ∈ n̄; so by construction, m ≈ n.

Corresponding to the relations by which they are formed, classes characterized by self-membership, uniqueness, and equality are equivalence classes. From self-membership and uniqueness, every n is a member of exactly one such class. And from equality, m ≈ n just when m̄ is the very same thing as n̄. So, for example, if 1 ≈ 1 and 2 ≈ 1 (and nothing else), then 1̄ = 2̄ = {1, 2}. You should be able to see that these formal specifications develop just the informal picture with which we began.
Terms. The specification for constants is simple.

CnsM′ If tz in the enumeration of variable-free terms t1, t2 ... is a constant, then M′[tz] = z̄.

Thus, with self-membership, any constant tz designates the equivalence class of which z is a member. In this case, we need to be sure that the specification picks out exactly one member of U for each constant. The specification would fail if the relation ≈ generated classes such that some integer was an element of no class, or some integer was an element of more than one. But, as we have just seen, by self-membership and uniqueness, every z is a member of exactly one class. So far, so good!

CnsM′ If tz in the enumeration of variable-free terms t1, t2 ... is hn ta ... tb for function symbol hn and variable-free terms ta ... tb, then ⟨⟨ā ... b̄⟩, z̄⟩ ∈ M′[hn].

Thus when the input to hn is ⟨ā ... b̄⟩, the output is z̄. This time, we must be sure that the result is a function: that (i) there is a defined output object for every input n-tuple, and (ii) there is at most one output object associated with any one input n-tuple. The former worry is easily dispatched. The second concern is that there might be some tm = h¹ta and tn = h¹tb in the list of variable-free terms, where ā = b̄ but m̄ ≠ n̄. Then ⟨ā, m̄⟩, ⟨b̄, n̄⟩ ∈ M′[h¹], and we fail to specify a function.

(i) There is at least one output object. Corresponding to any ⟨ā ... b̄⟩ where ā ... b̄ are members of U, there is some variable-free tz = hn ta ... tb in the sequence t1, t2 ...; so by construction, ⟨⟨ā ... b̄⟩, z̄⟩ ∈ M′[hn]. So M′[hn] has a defined output object when the input is ⟨ā ... b̄⟩.

(ii) There is at most one output object. Suppose ⟨⟨ā ... c̄⟩, m̄⟩ ∈ M′[hn] and ⟨⟨d̄ ... f̄⟩, n̄⟩ ∈ M′[hn], where ⟨ā ... c̄⟩ = ⟨d̄ ... f̄⟩, but m̄ ≠ n̄. Since ⟨ā ... c̄⟩ = ⟨d̄ ... f̄⟩, ā = d̄, and ... and c̄ = f̄; so by equality, a ≈ d, and ... and c ≈ f; so by construction, Σ′′ ⊢ ta = td, and ... and Σ′′ ⊢ tc = tf. Since ⟨⟨ā ... c̄⟩, m̄⟩ ∈ M′[hn] and ⟨⟨d̄ ... f̄⟩, n̄⟩ ∈ M′[hn], by construction, there are some variable-free terms tm = hn ta ... tc and tn = hn td ... tf in the enumeration; but by T3.36, ⊢ tb = te → hn ta ... tb ... tc = hn ta ... te ... tc, and so forth; so collecting repeated applications of this theorem with MP and T3.35, Σ′′ ⊢ hn ta ... tc = hn td ... tf; but this is to say, Σ′′ ⊢ tm = tn; so by construction, m ≈ n; so by equality, m̄ = n̄. This is impossible; reject the assumption: if ⟨⟨ā ... c̄⟩, m̄⟩ ∈ M′[hn] and ⟨⟨d̄ ... f̄⟩, n̄⟩ ∈ M′[hn], where ⟨ā ... c̄⟩ = ⟨d̄ ... f̄⟩, then m̄ = n̄.

So, as they should be, functions are well-defined.
We are now in a position to recover an analogue to the preliminary result for demonstration of T10.9: for any variable-free term tz and variable assignment d, M′d[tz] = z̄. The argument is very much as before. Suppose tz is a variable-free term. By induction on the number of function symbols in tz.

Basis: If tz has no function symbols, then it is a constant. In this case, by construction, M′[tz] = z̄; so by TA(c), M′d[tz] = z̄.

Assp: For any i, 0 ≤ i < k, if tz has i function symbols, then M′d[tz] = z̄.
Show: If tz has k function symbols, then M′d[tz] = z̄.

If tz has k function symbols, then it is of the form hn ta ... tb where ta ... tb have < k function symbols. By TA(f) we have M′d[tz] = M′d[hn ta ... tb] = M′[hn]⟨M′d[ta] ... M′d[tb]⟩; but by assumption, M′d[ta] = ā, and ... and M′d[tb] = b̄; so M′d[tz] = M′[hn]⟨ā ... b̄⟩. But since tz = hn ta ... tb is a variable-free term, ⟨⟨ā ... b̄⟩, z̄⟩ ∈ M′[hn]; so M′[hn]⟨ā ... b̄⟩ = z̄; so M′d[tz] = z̄.

Indct: For any variable-free term tz, M′d[tz] = z̄.

So the interpretation of any variable-free term is the equivalence class corresponding to its position in the enumeration of terms.
Atomics. The result we have just seen for terms makes the specification for atomics seem particularly natural. Sentence letters are easy. As before,

CnsM′ For a sentence letter S, M′[S] = T iff Σ′′ ⊢ S.

Then for relation symbols, the idea is as sketched above. We simply let the assignment be such as to make a variable-free atomic come out true iff it is a consequence of Σ′′.

CnsM′ For a relation symbol Rn, where ta ... tb are n members of the enumeration of variable-free terms, let ⟨ā ... b̄⟩ ∈ M′[Rn] iff Σ′′ ⊢ Rn ta ... tb.

To see that the specification for relation symbols is legitimate, we need to be clear that the specification is consistent, that we do not both assert and deny that some tuple is in the extension of Rn; and we need to be sure that M′[=] is as it should be, that it is {⟨n̄, n̄⟩ | n̄ ∈ U}. The case for equality is easy. The former concern is that we might have some ā ∈ M′[R] and b̄ ∉ M′[R] but ā = b̄.

(i) The specification is consistent. Suppose otherwise. Then there is some ⟨ā ... c̄⟩ ∈ M′[Rn] and ⟨d̄ ... f̄⟩ ∉ M′[Rn], where ⟨ā ... c̄⟩ = ⟨d̄ ... f̄⟩. From the latter, ā = d̄, and ... and c̄ = f̄; so by equality, a ≈ d, and ... and c ≈ f; so by construction, Σ′′ ⊢ ta = td, and ... and Σ′′ ⊢ tc = tf. But since ⟨ā ... c̄⟩ ∈ M′[Rn] and ⟨d̄ ... f̄⟩ ∉ M′[Rn], by construction, Σ′′ ⊢ Rn ta ... tc and Σ′′ ⊬ Rn td ... tf; and by T3.37, ⊢ tb = te → (Rn ta ... tb ... tc → Rn ta ... te ... tc), and so forth; so by repeated applications of this theorem with MP, Σ′′ ⊢ Rn td ... tf. This is impossible; reject the assumption: if ⟨ā ... c̄⟩ ∈ M′[Rn] and ⟨d̄ ... f̄⟩ ∉ M′[Rn], then ⟨ā ... c̄⟩ ≠ ⟨d̄ ... f̄⟩.

(ii) The case for equality is easy. By equality, m̄ = n̄ iff m ≈ n; by construction, iff Σ′′ ⊢ tm = tn; by construction, iff ⟨m̄, n̄⟩ ∈ M′[=].

This completes the specification of M′. The specification is more complex than for the basic version, and we have had to work to demonstrate its consistency. Still, the result is a perfectly ordinary model M′, with a domain, assignments to constants, assignments to function symbols, and assignments to relation symbols.

With this revised specification for M′, the demonstration of T10.9 proceeds as before. Here is the key portion of the basis. We are showing that M′[B] = T iff Σ′′ ⊢ B.

Suppose B is an atomic Rn ta ... tb; then by TI, M′[Rn ta ... tb] = T iff for arbitrary d, M′d[Rn ta ... tb] = S; by SF(r), iff ⟨M′d[ta] ... M′d[tb]⟩ ∈ M′[Rn]; since ta ... tb are variable-free terms, as we have just seen, iff ⟨ā ... b̄⟩ ∈ M′[Rn]; by construction, iff Σ′′ ⊢ Rn ta ... tb. So M′[B] = T iff Σ′′ ⊢ B.

So all that happens is that we depend on the conversion from individuals to sets of individuals for both assignments to terms and assignments to relation symbols. Given this, the argument is exactly parallel to the one from before.
E10.24. Suppose the enumeration of variable-free terms begins a, b, f¹a, f¹b ... (so these are t1 ... t4) and, for these terms, Σ′′ proves just a = a, b = b, f¹a = f¹a, f¹b = f¹b, a = f¹a, and f¹a = a. What objects stand in the ≈ relation? What are 1̄, 2̄, 3̄, and 4̄? Which corresponding sets are members of U?

E10.25. Return to the case from E10.24. Explain how ≈ satisfies reflexivity, symmetry, and transitivity. Explain how U satisfies self-membership, uniqueness, and equality.

E10.26. Where Σ′′ and U are as in the previous two exercises, what are M′[a], M′[b], and M′[f¹]? Supposing that Σ′′ ⊢ R¹a, R¹f¹a, and R¹f¹b, but Σ′′ ⊬ R¹b, what is M′[R¹]? According to the method, what is M′[=]? Is this as it should be? Explain.
10.4.3 The Final Result

We are really done with the demonstration of adequacy. Perhaps, though, it will be
helpful to draw some parts together. Begin with the basic definitions.
Con A set of formulas Γ is consistent iff there is no formula A such that Γ ⊢ A
and Γ ⊢ ∼A.

Max A set of formulas Γ is maximal iff for any sentence A, Γ ⊢ A or Γ ⊢ ∼A.

Scgt A set of formulas Γ is a scapegoat set iff for any sentence ∼∀xP, if Γ ⊢
∼∀xP, then there is some constant a such that Γ ⊢ ∼P^x_a.
Then we proceed in language L′, for a maximal, consistent, scapegoat set Γ′′ constructed from any consistent Γ′.
T10.6 For any set of formulas Γ and sentence P, if Γ ⊬ ∼P, then Γ ∪ {P} is
consistent.
T10.7 There is an enumeration Q1, Q2 … of all the formulas, terms, and the like,
in L′.
CnsΓ′′ Construct Γ′′ from Γ′ as follows: By T10.7, there is an enumeration, Q1,
Q2 … of all the sentences in L′ and also an enumeration c1, c2 … of constants not in Γ′. Let Γ0 = Γ′. Then for any i > 0, let Γi = Γi−1 if
Γi−1 ⊢ ∼Qi. Otherwise, Γi* = Γi−1 ∪ {Qi} if Γi−1 ⊬ ∼Qi. Then
Γi = Γi* if Qi is not of the form ∼∀xP, and Γi = Γi* ∪ {∼P^x_c} if
Qi is of the form ∼∀xP, where c is the first constant not in Γi*. Then
Γ′′ = ∪i≥0 Γi.
T10.8 If Γ′ is consistent, then Γ′′ is a maximal, consistent, scapegoat set.
Given the maximal, consistent, scapegoat set Γ′′, there are results and a definition
for a model M′ such that M′[Γ′] = T.
CnsM′ U = {n̄ | n ≥ 1}. If tz in an enumeration of variable-free terms t1, t2 …
is a constant, then M′[tz] = z̄. If tz is hⁿta … tb for function symbol hⁿ
and variable-free terms ta … tb, then ⟨⟨ā … b̄⟩, z̄⟩ ∈ M′[hⁿ]. For a sentence
letter S, M′[S] = T iff Γ′′ ⊢ S. For a relation symbol Rⁿ, where ta … tb are
n members of the enumeration of variable-free terms, let ⟨ā … b̄⟩ ∈ M′[Rⁿ]
iff Γ′′ ⊢ Rⁿta … tb.


This modifies the relatively simple version where U = {1, 2 …}. And for an
enumeration of variable-free terms, if tz is a constant, M′[tz] = z. If tz =
hⁿta … tb for some function symbol hⁿ and n variable-free terms ta … tb,
⟨⟨a … b⟩, z⟩ ∈ M′[hⁿ]. For a sentence letter S, M′[S] = T iff Γ′′ ⊢ S. And
for a relation symbol Rⁿ, ⟨a … b⟩ ∈ M′[Rⁿ] iff Γ′′ ⊢ Rⁿta … tb.
T10.9 If Γ′ is consistent, then for any sentence B of L′, M′[B] = T iff Γ′′ ⊢ B.

T10.10 If Γ′ is consistent, then M′[Γ′] = T.  (⋆)
Then we have had to connect results for Γ′ in L′ to an arbitrary Γ in language L.

T10.13 If Γ is consistent, then Γ′ is consistent.

This is supported by T10.12, on which if D is a derivation from Γ′, and x is a
variable that does not appear in D, then for any constant a, the result of replacing
a by x throughout D is a derivation from the result of replacing a by x throughout Γ′.

T10.16 If M′[Γ′] = T, then M[Γ] = T.

This is supported by the matched pair of theorems, T10.14, on which, if d is a
variable assignment, then for any term t in L, Md[t] = M′d[t], and T10.15, on
which, if d is a variable assignment, then for any formula P in L, Md[P] = S
iff M′d[P] = S.
These theorems together yield,

T10.17 If Γ is consistent, then Γ has a model M.  (⋆⋆)

This puts us in a position to recover the main result. Recall that our argument runs
through P^c, the universal closure of P.

T10.11. If Γ ⊨ P, then Γ ⊢ P.  (quantificational adequacy)

Suppose Γ ⊨ P but Γ ⊬ P. Say, for the moment, that Γ ⊢ ∼∼P^c; by
T3.10, ⊢ ∼∼P^c → P^c; so by MP, Γ ⊢ P^c; so by repeated applications
of A4 and MP, Γ ⊢ P; but this is impossible; so Γ ⊬ ∼∼P^c. Given
this, since ∼P^c is a sentence, by T10.6, Γ ∪ {∼P^c} is consistent. Since
Γ ∪ {∼P^c} is consistent, by T10.17, there is a model M constructed
as above such that M[Γ ∪ {∼P^c}] = T. So M[Γ] = T and M[∼P^c] = T; from the
latter, by T8.6, M[P^c] ≠ T; so by repeated applications of T7.6, M[P] ≠ T;
so by QV, Γ ⊭ P. This is impossible; reject the assumption: if Γ ⊨ P then
Γ ⊢ P.


The sentential version had parallels to Con, Max, CnsΓ′′, and CnsM′ along with theorems T10.6s–T10.11s. (The distinction between (⋆) and (⋆⋆) is a distinction without
a difference in the sentential case.) The basic quantificational version requires these
along with Scgt, T10.12, and the simple version of CnsM′. For the full version, we
have had to appeal also to T10.13 and T10.16 (and so T10.17), and use the relatively
complex specification for CnsM′.
Again, you should try to get the complete picture in your mind: As always, the
key is that consistent sets have models. If Γ ∪ {∼P} is not consistent, then there is
a derivation of P from Γ. So if there is no derivation of P from Γ, then Γ ∪ {∼P}
is consistent, and so has a model; and the existence of a model for Γ ∪ {∼P} is
sufficient to show that Γ ⊭ P. Put the other way around, if Γ ⊨ P, then there is a
derivation of P from Γ. We get the key point, that consistent sets have models, by
finding a relation between consistent sets, and maximal consistent scapegoat sets. If a set
is a maximal consistent scapegoat set, then it contains enough information to specify
a model for the whole. The model for the big set then guarantees the existence of a
model M for the original Γ.
E10.27. Return to the case from E10.20 on p. 491, but dropping the assumptions
that there is no symbol for equality, and that L is identical to L′. Add to the
derivation system axioms,

A3 ⊢ t = t
A4 ⊢ r = s → (P → P^r/s)    where s is free for replaced instances of r in P

Provide a complete demonstration that this version of A4 is adequate. You
may appeal to any results from the text whose demonstration remains unchanged, but should recreate parts whose demonstration is not the same. Hint:
You may find it helpful to demonstrate a relation to T8.5 as follows,

T8.5* For any formula P, terms s and t, constant c, and variable x, the result
of substituting t for s in P and then x for c is the same formula as the
result of substituting x for c in P and then t^c_x for s^c_x, where the
same instance(s) of s are replaced in each case.

E10.28. We have shown from T10.4 that if a set of formulas has a model, then it is
consistent; and now that if an arbitrary set of formulas is consistent, then it has
a model, and one whose U is this set of sets of positive integers. Notice that
any such U is countable insofar as its members can be put into correspondence
with the integers (we might, say, order the members by their least elements).
Considering what we showed in the more on countability reference on p. 46,
how might this be a problem for the logic of real numbers? Hint: Think about
the consequences sentences in an arbitrary Γ may have about the number of
elements in U.

E10.29. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples (iii) where the concept applies, and (iv) where
it does not. Your essay should exhibit an understanding of methods from the
text.
a. The soundness of a derivation system, and its demonstration by mathematical
induction.
b. The adequacy of a derivation system, and the basic strategy for its demonstration.
c. Maximality and consistency, and the reasons for them.
d. Scapegoat sets, and the reasons for them.

E10.30. Give yourself a pat on the back for having gotten this far!

Chapter 11

More Main Results


In this chapter, we take up results which deepen our understanding of the power
and limits of logic. The first sections restrict discussion to sentential forms, for
discussion of expressive completeness and independence. Then we turn to discussion
of the conditions under which models are isomorphic, and transition to a discussion
of submodels, and especially the Löwenheim-Skolem theorems, which help us see
some conditions under which models are not isomorphic.

11.1 Expressive Completeness

In chapter 5 on translation, we introduced the idea of a truth functional operator,
where the truth value of the whole is a function of the truth values of the parts. We
exhibited operators as truth functional by tables. Thus, if some ordinary expression
P with components A and B has table,

         A  B | P
         T  T | T
    (A)  T  F | F
         F  T | F
         F  F | F

then it is truth functional. And we translate by an equivalent formal operator: in this
case A ∧ B does fine. Of course, not every such table, or truth function, is directly
represented by one of our operators. Thus, if P is neither A nor B we have the
table,


         A  B | P
         T  T | F
    (B)  T  F | F
         F  T | F
         F  F | T

where none of our operators is equivalent to this. But it takes only a little ingenuity to
see that, say, (∼A ∧ ∼B) or ∼(A ∨ B) have the same table, and so result in a good
translation. In chapter 5 (p. 156), we claimed that for any table a truth functional
operator may have, there is always some way to generate that table by means of our
formal operators and, in fact, by means of just the operators ∼ and ∧, or just the
operators ∼ and ∨, or just the operators ∼ and →. As it turns out, it is also possible
to express any truth function by means of just the operator ↑. In this section, we prove
these results. First,
T11.1. It is possible to represent any truth function by means of an expression with
just the operators ∼, ∧, and ∨.

The proof of this result is simple. Given an arbitrary truth function, we provide a
recipe for constructing an expression with the same table. Insofar as for any truth
function it is always possible to construct an expression with the same table, there
must always be a formal expression with the same table.
Suppose we are given an arbitrary truth function, in this case with four basic
sentences as on the left.
(C)
        S1 S2 S3 S4   P
     1   T  T  T  T   F        C1  = S1 ∧ S2 ∧ S3 ∧ S4
     2   T  T  T  F   F        C2  = S1 ∧ S2 ∧ S3 ∧ ∼S4
     3   T  T  F  T   T        C3  = S1 ∧ S2 ∧ ∼S3 ∧ S4
     4   T  T  F  F   F        C4  = S1 ∧ S2 ∧ ∼S3 ∧ ∼S4
     5   T  F  T  T   T        C5  = S1 ∧ ∼S2 ∧ S3 ∧ S4
     6   T  F  T  F   F        C6  = S1 ∧ ∼S2 ∧ S3 ∧ ∼S4
     7   T  F  F  T   F        C7  = S1 ∧ ∼S2 ∧ ∼S3 ∧ S4
     8   T  F  F  F   F        C8  = S1 ∧ ∼S2 ∧ ∼S3 ∧ ∼S4
     9   F  T  T  T   F        C9  = ∼S1 ∧ S2 ∧ S3 ∧ S4
    10   F  T  T  F   F        C10 = ∼S1 ∧ S2 ∧ S3 ∧ ∼S4
    11   F  T  F  T   F        C11 = ∼S1 ∧ S2 ∧ ∼S3 ∧ S4
    12   F  T  F  F   T        C12 = ∼S1 ∧ S2 ∧ ∼S3 ∧ ∼S4
    13   F  F  T  T   T        C13 = ∼S1 ∧ ∼S2 ∧ S3 ∧ S4
    14   F  F  T  F   F        C14 = ∼S1 ∧ ∼S2 ∧ S3 ∧ ∼S4
    15   F  F  F  T   F        C15 = ∼S1 ∧ ∼S2 ∧ ∼S3 ∧ S4
    16   F  F  F  F   F        C16 = ∼S1 ∧ ∼S2 ∧ ∼S3 ∧ ∼S4

For this sentence P with basic sentences S1 … Sn, begin by constructing the characteristic sentence Cj corresponding to each row: If the interpretation Ij corresponding
to row j has Ij[Si] = T, then let Si′ = Si. If Ij[Si] = F, let Si′ = ∼Si. Then the
characteristic sentence Cj corresponding to Ij is the conjunction of each Si′. So
Cj = S1′ ∧ … ∧ Sn′ (with appropriate parentheses). These sentences are exhibited
above. The characteristic sentences are true only on their corresponding rows. Thus
C4 above is true only when I[S1] = T, I[S2] = T, I[S3] = F, and I[S4] = F.
Then, given the characteristic sentences, if P is F on every row, S1 ∧ ∼S1 has
the same table as P. Otherwise, where P is T on rows a, b … d, Ca ∨ Cb ∨ … ∨ Cd
(with appropriate parentheses) has the same table as P. Thus, for example, C3 ∨
C5 ∨ C12 ∨ C13, that is,

(S1 ∧ S2 ∧ ∼S3 ∧ S4) ∨ (S1 ∧ ∼S2 ∧ S3 ∧ S4) ∨ (∼S1 ∧ S2 ∧ ∼S3 ∧ ∼S4) ∨ (∼S1 ∧ ∼S2 ∧ S3 ∧ S4)

has the same table as P . Inserting parentheses, the resultant table is,
(D)
        S1 S2 S3 S4   C3  C5  C12 C13   (C3 ∨ C5) ∨ (C12 ∨ C13)   P
     1   T  T  T  T    F   F   F   F               F               F
     2   T  T  T  F    F   F   F   F               F               F
     3   T  T  F  T    T   F   F   F               T               T
     4   T  T  F  F    F   F   F   F               F               F
     5   T  F  T  T    F   T   F   F               T               T
     6   T  F  T  F    F   F   F   F               F               F
     7   T  F  F  T    F   F   F   F               F               F
     8   T  F  F  F    F   F   F   F               F               F
     9   F  T  T  T    F   F   F   F               F               F
    10   F  T  T  F    F   F   F   F               F               F
    11   F  T  F  T    F   F   F   F               F               F
    12   F  T  F  F    F   F   T   F               T               T
    13   F  F  T  T    F   F   F   T               T               T
    14   F  F  T  F    F   F   F   F               F               F
    15   F  F  F  T    F   F   F   F               F               F
    16   F  F  F  F    F   F   F   F               F               F

And we have constructed an expression with the same table as P . And similarly for
any truth function with which we are confronted. So given any truth function, there
is a formal expression with the same table.
In a by-now familiar pattern, the expressions produced by this method are not
particularly elegant or efficient. Thus for the table,
         A  B | P
         T  T | T
    (E)  T  F | F
         F  T | T
         F  F | T

by our method we get the expression (A ∧ B) ∨ (∼A ∧ B) ∨ (∼A ∧ ∼B). It has
the right table. But, of course, A → B is much simpler! The point is not that the
resultant expressions are elegant or efficient, but that for any truth function, there
exists a formal expression that works the same way.
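The recipe behind T11.1 is mechanical enough to automate. Here is a small sketch under an obvious encoding (not the book's own code): given basic sentence names and a truth function, it returns the disjunction of characteristic conjunctions, with ~ and & standing in for ∼ and ∧.

```python
from itertools import product

def dnf(names, f):
    """Return a disjunction of characteristic conjunctions with f's table.
    f maps one tuple of True/False values (one per name) to True/False."""
    def char(vals):  # characteristic conjunction for one row
        lits = [n if v else "~" + n for n, v in zip(names, vals)]
        return "(" + " & ".join(lits) + ")"
    true_rows = [v for v in product([True, False], repeat=len(names)) if f(v)]
    if not true_rows:                    # f is F on every row
        return "(%s & ~%s)" % (names[0], names[0])
    return " | ".join(char(v) for v in true_rows)

# the 'neither A nor B' function of table (B)
print(dnf(["A", "B"], lambda v: not (v[0] or v[1])))   # prints (~A & ~B)
```

As in the text, the output is rarely the shortest equivalent expression; it merely always exists and always has the right table.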
We have shown that we can represent any truth function by an expression with
operators ∼, ∧, and ∨. But any such expression is an abbreviation of one whose only
operators are ∼ and →. So we can represent any truth function by an expression with
just operators ∼ and →. And we can argue for other cases. Thus, for example,

T11.2. It is possible to represent any truth function by means of an expression with
just the operators ∼ and ∧.

Again, the proof is simple. Given T11.1, if we can show that any P whose
operators are ∼, ∧ and ∨ corresponds to a P* whose operators are just ∼ and ∧,
such that P and P* have the same table (such that I[P] = I[P*] for any I) we
will have shown that any truth function can be represented by an expression with just
∼ and ∧. To see that this is so, where P is an atomic S, set P* = S; where P is
∼A, set P* = ∼A*; where P is A ∧ B, set P* = A* ∧ B*; and where P is
A ∨ B, set P* = ∼(∼A* ∧ ∼B*). Suppose the only operators in P are ∼, ∧, and
∨, and consider an arbitrary interpretation I.
Basis: Where P is a sentence letter S, then P* is S. So I[P] = I[P*].

Assp: For any i, 0 ≤ i < k, if P has i operator symbols, then I[P] = I[P*].

Show: If P has k operator symbols, then I[P] = I[P*].
If P has k operator symbols, then it is of the form ∼A, A ∧ B, or A ∨ B
where A and B have < k operator symbols.

(∼) Suppose P is ∼A; then P* is ∼A*. I[P] = T iff I[∼A] = T; by ST(∼),
iff I[A] = F; by assumption iff I[A*] = F; by ST(∼), iff I[∼A*] = T; iff
I[P*] = T.

(∧) Suppose P is A ∧ B; then P* is A* ∧ B*. I[P] = T iff I[A ∧ B] = T;
by ST′(∧), iff I[A] = T and I[B] = T; by assumption iff I[A*] = T and
I[B*] = T; by ST′(∧), iff I[A* ∧ B*] = T; iff I[P*] = T.

(∨) Suppose P is A ∨ B; then P* is ∼(∼A* ∧ ∼B*). I[P] = T iff I[A ∨
B] = T; by ST′(∨), iff I[A] = T or I[B] = T; by assumption iff I[A*] = T
or I[B*] = T; by ST(∼), iff I[∼A*] = F or I[∼B*] = F; by ST′(∧), iff
I[∼A* ∧ ∼B*] = F; by ST(∼), iff I[∼(∼A* ∧ ∼B*)] = T; iff I[P*] = T.

If P has k operator symbols then I[P] = I[P*].

Indct: For any P, I[P] = I[P*].


So if the operators in P are ∼, ∧ and ∨, there is a P* with just operators ∼ and ∧ that
has the same table. Perhaps this was obvious as soon as we saw that ∼(∼A ∧ ∼B)
has the same table as A ∨ B. Since we can represent any truth function by an
expression whose only operators are ∼, ∧ and ∨, and we can represent any such
P by a P* whose only operators are ∼ and ∧, we can represent any truth function
by an expression with just operators ∼ and ∧. And, by similar reasoning, we can
represent any truth function by expressions whose only operators are ∼ and ∨, and
by expressions whose only operator is ↑. This is left for homework.
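The P* translation in the proof of T11.2 can also be run as a program. In this sketch (with a hypothetical tuple encoding of formulas, not the book's notation), star eliminates ∨ in favor of ∼ and ∧, and value checks that the tables agree on every interpretation:

```python
from itertools import product

# formulas as nested tuples: ("S", name), ("~", P), ("&", P, Q), ("|", P, Q)
def star(p):
    op = p[0]
    if op == "S": return p
    if op == "~": return ("~", star(p[1]))
    if op == "&": return ("&", star(p[1]), star(p[2]))
    # A v B  goes to  ~(~A* & ~B*)
    return ("~", ("&", ("~", star(p[1])), ("~", star(p[2]))))

def value(p, I):
    op = p[0]
    if op == "S": return I[p[1]]
    if op == "~": return not value(p[1], I)
    if op == "&": return value(p[1], I) and value(p[2], I)
    return value(p[1], I) or value(p[2], I)   # the "|" case

P = ("|", ("S", "A"), ("~", ("S", "B")))
for a, b in product([True, False], repeat=2):
    assert value(P, {"A": a, "B": b}) == value(star(P), {"A": a, "B": b})
```

The loop is just the induction of T11.2 made concrete: on each of the four interpretations, P and P* receive the same value.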
In E8.10, we showed that if the operators in P are limited to →, ∧, ∨, and ↔
then when the interpretation of every atomic is T, the interpretation of P is T. Perhaps
this is obvious by consideration of the tables. It follows that not every truth function
can be represented by expressions whose only operators are →, ∧, ∨, and ↔; for
there is no way to represent a function that is F on the top row, when all the atomics
are T. Though it is much more difficult to establish, we showed in E8.27 that any
expression whose only operators are ∼ and ↔ (with at least four rows in its truth
table) has an even number of Ts and Fs under its main operator. It follows that not
every truth function can be represented by expressions whose only operators are ∼
and ↔.
E11.1. Use the method of this section to find expressions with tables corresponding
to P1 , P2 , and P3 . Then show on a table that your expression for P1 in fact
has the same truth function as P1 .
    A  B  C   P1 P2 P3
    T  T  T    F  T  F
    T  T  F    T  T  F
    T  F  T    T  F  T
    T  F  F    F  F  F
    F  T  T    F  F  T
    F  T  F    T  F  F
    F  F  T    F  F  T
    F  F  F    T  F  T

E11.2. (i) Show that we can represent any truth function by expressions whose only
operators are ∼ and ∨. (ii) Show that we can represent any truth function
by expressions whose only operator is ↑. Hint: Given what we have shown
above, it is enough to show that you can represent expressions whose only
operators are ∼ and →, or ∼ and ∧.


E11.3. Show that it is not possible to represent arbitrary truth functions by expressions whose only operator is ∼. Hint: it is easy to show by induction that any
such expression has at least one T and one F under its main operator.

11.2 Independence

As we have seen, axiomatic systems are convenient insofar as their compact form
makes reasoning about them relatively easy. Also, theoretically, axiomatic systems
are attractive insofar as they expose what is at the base or foundation of logical systems. Given this latter aim, it is natural to wonder whether we could get the same
results without one or more of our axioms. Say an axiom or rule is independent in a
derivation system just in case its omission matters for what can be derived. In particular, then, an axiom is independent in a derivation system if it cannot be derived
from the other axioms and rules. For suppose otherwise: that it can be derived from
the other axioms and rules; then it is a theorem of the derivation system without the
axiom, and any result of the system with the axiom can be derived using the theorem
in place of the axiom; so the omission of the axiom does not matter for what can be
derived, and the axiom is not independent. In this section, we show that A1, A2 and
A3 of the sentential fragment of AD are independent of one another.
Say we want to show that A1 is independent of A2 and A3. When we showed,
in chapter 8, that the sentential part of AD is weakly sound, we showed that A1, A2,
A3 and their consequences have a certain feature: that there is no interpretation
where a consequence is false. The basic idea here is to find a sort of interpretation
on which A2, A3 and their consequences are sustained, but A1 is not. It follows that
A1 is not among the consequences of A2 and A3, and so is independent of A2 and
A3. Here is the key point: Any interpretation will do. In particular, consider the
following tables which define a sort of numerical property for forms involving ∼ and
→.
    A1(∼)   P | ∼P          A1(→)   P Q | P → Q
            0 |  1                  0 0 |   0
            1 |  1                  0 1 |   2
            2 |  0                  0 2 |   2
                                    1 0 |   2
                                    1 1 |   2
                                    1 2 |   0
                                    2 0 |   0
                                    2 1 |   0
                                    2 2 |   0


Do not worry about what these tables say; it is sufficient that, given a numerical
interpretation of the parts, we can always calculate the numerical value N of the
whole. Thus, for example,

    (F)    A (0)             B (2)
             |              /
           ∼A (1)          /        by A1(∼) row 1
              \           /
              ∼A → B (0)            by A1(→) row 6

if N[A] = 0 and N[B] = 2, then N[∼A → B] = 0. The calculation is straightforward, based on the tables. And similarly for sentential forms of arbitrary complexity.
Say a form is select iff it takes the value 0 on every numerical interpretation of its
parts. (Compare the notion of semantic validity on which a form is valid iff it is T
on every interpretation of its parts.) Again, do not worry about what the tables mean.
They are constructed for the special purpose of demonstrating independence: We
show that every consequence of A2 and A3 is select, but A1 is not. It follows that A1
is not a consequence of A2 and A3.
To see that A3 is select, and that A1 is not, all we have to do is complete the
tables.
(G)
    A B   B→A  A→(B→A)    ∼B  ∼A  ∼B→∼A  ∼B→A  (∼B→A)→B  (∼B→∼A)→((∼B→A)→B)
    0 0    0      0        1   1    2      2       0               0
    0 1    2      2        1   1    2      2       0               0
    0 2    0      0        0   1    2      0       2               0
    1 0    2      0        1   1    2      2       0               0
    1 1    2      0        1   1    2      2       0               0
    1 2    0      2        0   1    2      2       0               0
    2 0    2      0        1   0    2      0       0               0
    2 1    0      0        1   0    2      0       2               0
    2 2    0      0        0   0    0      2       0               0

Since A1 has twos in the second and sixth rows, A1 is not select. Since A3 has zeros
in every row, it is select. Alternatively, for A1, we might have reasoned as follows,

    Suppose N[A] = 0 and N[B] = 1. Then by A1(→), N[B → A] = 2; so by A1(→)
    again, N[A → (B → A)] = 2. Since there is such an assignment, A → (B → A) is
    not select.

And the result is the same. To see that A2 is select, again, it is enough to complete
the table. It is painful, but we can do it:


(H)
    A B C   B→C  A→(B→C)   A→B  A→C  (A→B)→(A→C)   (A→(B→C))→((A→B)→(A→C))
    0 0 0    0      0       0    0        0                     0
    0 0 1    2      2       0    2        2                     0
    0 0 2    2      2       0    2        2                     0
    0 1 0    2      2       2    0        0                     0
    0 1 1    2      2       2    2        0                     0
    0 1 2    0      0       2    2        0                     0
    0 2 0    0      0       2    0        0                     0
    0 2 1    0      0       2    2        0                     0
    0 2 2    0      0       2    2        0                     0
    1 0 0    0      2       2    2        0                     0
    1 0 1    2      0       2    2        0                     0
    1 0 2    2      0       2    0        0                     0
    1 1 0    2      0       2    2        0                     0
    1 1 1    2      0       2    2        0                     0
    1 1 2    0      2       2    0        0                     0
    1 2 0    0      2       0    2        2                     0
    1 2 1    0      2       0    2        2                     0
    1 2 2    0      2       0    0        0                     0
    2 0 0    0      0       0    0        0                     0
    2 0 1    2      0       0    0        0                     0
    2 0 2    2      0       0    0        0                     0
    2 1 0    2      0       0    0        0                     0
    2 1 1    2      0       0    0        0                     0
    2 1 2    0      0       0    0        0                     0
    2 2 0    0      0       0    0        0                     0
    2 2 1    0      0       0    0        0                     0
    2 2 2    0      0       0    0        0                     0
So both A2 and A3 are select. But now we are in a position to show,


T11.3. A1 is independent of A2 and A3.
Consider any derivation ⟨Q1, Q2 … Qn⟩ where there are no premises, and
the only axioms are instances of A2 and A3. By induction on line number,
for any i, Qi is select.
Basis: Q1 is an instance of A2 or A3, and as we have just seen, instances of
A2 and A3 are select. So Q1 is select.

Assp: For any i, 0 ≤ i < k, Qi is select.

Show: Qk is select.
Qk is an instance of A2 or A3 or arises from previous lines by MP. If
Qk is an instance of A2 or A3, then by reasoning as in the basis, Qk
is select. If Qk arises from previous lines by MP, then the derivation
has some lines,

    a. B
    b. B → C
    k. C          a,b MP

where a, b < k and C is Qk. By assumption, B and B → C are
select. But by A1(→), both B and B → C evaluate to 0 only in the
case when C also evaluates to 0; so if both B and B → C are select,
then C is select as well. So Qk is select.

Indct: For any n, Qn is select.

So A1 cannot be derived from A2 and A3, which is to say, A1 is independent of A2 and A3.
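Because the tables are finite, selectness can be checked by brute force. This sketch (an illustration, not part of the text's official apparatus) encodes the A1(∼) and A1(→) tables above and confirms the facts the induction relies on: A2 and A3 are select, A1 is not, and MP preserves selectness, since when B and B → C both take value 0, C takes value 0 as well.

```python
from itertools import product

# the A1(->) and A1(~) tables from the text
IMP = {(0, 0): 0, (0, 1): 2, (0, 2): 2,
       (1, 0): 2, (1, 1): 2, (1, 2): 0,
       (2, 0): 0, (2, 1): 0, (2, 2): 0}
NEG = {0: 1, 1: 1, 2: 0}

def select(form, n):
    """A form is select iff it takes 0 on every assignment of 0/1/2."""
    return all(form(*vals) == 0 for vals in product((0, 1, 2), repeat=n))

A1 = lambda a, b: IMP[a, IMP[b, a]]
A2 = lambda a, b, c: IMP[IMP[a, IMP[b, c]], IMP[IMP[a, b], IMP[a, c]]]
A3 = lambda a, b: IMP[IMP[NEG[b], NEG[a]], IMP[IMP[NEG[b], a], b]]

assert not select(A1, 2)               # A1 takes value 2 at A=0, B=1
assert select(A2, 3) and select(A3, 2)
# MP preserves selectness: whenever B is 0 and B -> C is 0, C is 0
assert all(c == 0 for b, c in product((0, 1, 2), repeat=2)
           if b == 0 and IMP[b, c] == 0)
```

The same three checks are exactly the basis and MP steps of the induction in T11.3.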

E11.4. Use the following tables to show that A2 is independent of A1 and A3.
    A2(∼)   P | ∼P          A2(→)   P Q | P → Q
            0 |  1                  0 0 |   0
            1 |  0                  0 1 |   2
            2 |  1                  0 2 |   1
                                    1 0 |   0
                                    1 1 |   2
                                    1 2 |   0
                                    2 0 |   0
                                    2 1 |   0
                                    2 2 |   0

E11.5. Use the table method to show that A3 is independent of A1 and A2. That
is, (i) find appropriate tables for ∼ and →, and (ii) use your tables to show
by induction that A3 is independent of A1 and A2. Hint: You do not need
three-valued interpretations, and have already done the work in E8.13.

11.3 Isomorphic Models

Interpretations are isomorphic when they are structurally similar. Say a function f
from rⁿ to s is onto set s just in case for each o ∈ s there is some ⟨m1 … mn⟩ ∈
rⁿ such that ⟨⟨m1 … mn⟩, o⟩ ∈ f; a function is onto set s when it reaches every
member of s. Then,


IS For some language L, interpretation I is isomorphic to interpretation I′ iff there
is a 1:1 function ι (iota) from the universe of I onto the universe of I′ where:
for any sentence letter S, I[S] = I′[S]; for any constant c, I[c] = m iff I′[c] =
ι(m); for any relation symbol Rⁿ, ⟨ma … mb⟩ ∈ I[Rⁿ] iff ⟨ι(ma) … ι(mb)⟩ ∈
I′[Rⁿ]; and for any function symbol hⁿ, ⟨⟨ma … mb⟩, o⟩ ∈ I[hⁿ] iff ⟨⟨ι(ma) …
ι(mb)⟩, ι(o)⟩ ∈ I′[hⁿ].

If I is isomorphic to I′, we write I ≅ I′. Notice that the condition on constants requires
just that ι(I[c]) = I′[c]; applying ι to the thing assigned to c by I, results in the thing
assigned to c by I′. And similarly, the condition on function symbols requires that
ι(I[hⁿ]⟨ma … mb⟩) = I′[hⁿ]⟨ι(ma) … ι(mb)⟩; for we have I[hⁿ]⟨ma … mb⟩ = o,
and ι(o) = I′[hⁿ]⟨ι(ma) … ι(mb)⟩. We might think of the two interpretations as
already existing, and finding a function ι to exhibit them as isomorphic. Alternatively,
given an interpretation I, and function ι from the universe of I onto some set U′, we
might think of I′ as resulting from application of ι to I.
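For finite interpretations, the second way of thinking can be applied directly. The following sketch (with a hypothetical dictionary encoding, and covering just constants and relation symbols; function symbols would be handled the same way) pushes an interpretation I through a 1:1 onto map ι, producing the isomorphic I′ exactly as the clauses of IS prescribe. The data is the dog-and-cat universe of example (I) below.

```python
def apply_iota(iota, I):
    """Build I' from I: I'[c] = iota[I[c]]; tuples in I'[R] are imagewise."""
    return {
        "constants": {c: iota[m] for c, m in I["constants"].items()},
        "relations": {R: {tuple(iota[m] for m in tup) for tup in ext}
                      for R, ext in I["relations"].items()},
    }

I = {"constants": {"r": "Rover", "m": "Morris"},
     "relations": {"D": {("Rover",), ("Fido",)},
                   "P": {("Rover", "Morris"), ("Fido", "Sylvester")}}}
iota = {"Rover": "Ralph", "Fido": "Fredo",
        "Morris": "Manny", "Sylvester": "Salvador"}

Iprime = apply_iota(iota, I)
assert Iprime["constants"]["r"] == "Ralph"
assert ("Fredo", "Salvador") in Iprime["relations"]["P"]
```

Since ι is 1:1 and onto, every structural fact about I has a unique image in I′, which is what the theorems of the next subsection exploit.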
Here are some examples. In the first, it is perhaps particularly obvious that I and
I′ have the required structural similarity.

    (I)   U:   Rover    Fido     Morris   Sylvester
                 ↓        ↓        ↓         ↓
          U′:  Ralph    Fredo    Manny    Salvador

U = {Rover, Fido, Morris, Sylvester}. As represented by the arrows, function ι maps
these onto a disjoint set U′. Then given I as below on the left, the corresponding
isomorphic interpretation is I′ as on the right.
    I[r] = Rover                                  I′[r] = Ralph
    I[m] = Morris                                 I′[m] = Manny
    I[D] = {Rover, Fido}                          I′[D] = {Ralph, Fredo}
    I[C] = {Morris, Sylvester}                    I′[C] = {Manny, Salvador}
    I[P] = {⟨Rover, Morris⟩, ⟨Fido, Sylvester⟩}   I′[P] = {⟨Ralph, Manny⟩, ⟨Fredo, Salvador⟩}

On interpretation I, where Rover and Fido are dogs, and Morris and Sylvester are
cats, we have that every dog pursues at least one cat. And, supposing that Ralph and
Fredo are dogs, and Manny and Salvador are cats, the same properties and relations
are preserved on I′ with only the particular individuals changed.
For a second case, let U be the same, but U′ the very same set, only permuted or
shuffled so that each object in U has a mate in U′.

    (J)   U:   Rover    Fido     Morris   Sylvester
                 ↓        ↓        ↓         ↓
          U′:  Rover    Morris   Fido     Sylvester


So ι maps members of U to members of the very same set. Then given I as before,
the corresponding isomorphic interpretation I′ is as follows.

    I[r] = Rover                                  I′[r] = Rover
    I[m] = Morris                                 I′[m] = Fido
    I[D] = {Rover, Fido}                          I′[D] = {Rover, Morris}
    I[C] = {Morris, Sylvester}                    I′[C] = {Fido, Sylvester}
    I[P] = {⟨Rover, Morris⟩, ⟨Fido, Sylvester⟩}   I′[P] = {⟨Rover, Fido⟩, ⟨Morris, Sylvester⟩}

This time, there is no simple way to understand I′[D] as the set of all dogs, and I′[C]
as the set of all cats. And we cannot say that the interpretation of P reflects dogs
pursuing cats. But Morris plays the same role in I′ as Fido in I; and similarly Fido
plays the same role in I′ as Morris in I. Thus, on I′, each thing in the interpretation of
D is such that it stands in the relation P to at least one thing in the interpretation of
C; and this is just as in interpretation I.
A final example switches to L<NT and has an infinite U. We let U be the set N of
natural numbers, U′ be the set P of positive integers, and ι be the function n + 1.

    (K)   U:   0   1   2   3   …
               ↓   ↓   ↓   ↓
          U′:  1   2   3   4   …

Then where N is the standard interpretation for symbols of L<NT,

    N[∅] = 0
    N[<] = {⟨m, n⟩ | m, n ∈ N, and m is less than n}
    N[S] = {⟨m, n⟩ | m, n ∈ N, and n is the successor of m}
    N[+] = {⟨⟨m, n⟩, o⟩ | m, n, o ∈ N, and m plus n equals o}

we obtain N′ as follows,

    N′[∅] = 1
    N′[<] = {⟨m + 1, n + 1⟩ | m, n ∈ N, and m is less than n}
    N′[S] = {⟨m + 1, n + 1⟩ | m, n ∈ N, and n is the successor of m}
    N′[+] = {⟨⟨m + 1, n + 1⟩, o + 1⟩ | m, n, o ∈ N, and m plus n equals o}

Observe that anything in N′ is taken from P. In this case, we build N′ explicitly by
the rule for isomorphisms, simply finding ι(m) = m + 1 from the corresponding
element of N.
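The shift in example (K) can be checked on an initial segment of the numbers. A small sketch (an illustration only): build N[<] and N[S] on {0, …, 9}, push them through ι(n) = n + 1, and confirm that the images are just the displayed clauses for N′.

```python
N = range(10)
less = {(m, n) for m in N for n in N if m < n}        # N[<] on the segment
succ = {(m, n) for m in N for n in N if n == m + 1}   # N[S] on the segment

iota = lambda n: n + 1                                # the shift map
less_p = {(iota(m), iota(n)) for (m, n) in less}
succ_p = {(iota(m), iota(n)) for (m, n) in succ}

# matches N'[<] and N'[S]: pairs <m+1, n+1> with m < n, resp. n = m + 1
assert less_p == {(m + 1, n + 1) for m in N for n in N if m < n}
assert (1, 2) in less_p and (0, 1) not in less_p      # 0 is not in U'
```

Nothing about the order structure changes under ι; only the labels of the objects do, which is the point of the example.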

11.3.1 Isomorphism implies Equivalence

Given these examples, perhaps it is obvious that when interpretations are isomorphic,
they make all the same formulas true.¹ Say,

EE For some language L, interpretations I and I′ are elementarily equivalent iff for
any formula P, I[P] = T iff I′[P] = T.

If I is elementarily equivalent to I′, write I ≡ I′. We show that isomorphic interpretations are elementarily equivalent. This is straightforward given a matched pair of
results, of the sort we have often seen before.

T11.4. For some language L, if interpretations D ≅ H, and assignments d for D
and h for H are such that for any x, ι(d[x]) = h[x], then for any term t,
ι(Dd[t]) = Hh[t].
Suppose D ≅ H, and corresponding assignments d and h are such that for any
x, ι(d(x)) = h(x). By induction on the number of function symbols in t.

Basis: If t has no function symbols, then it is a variable or a constant. If t is
a variable x, then by TA(v), Dd[x] = d(x); so ι(Dd[x]) = ι(d[x]); but
we have supposed ι(d[x]) = h[x]; and by TA(v) again, h[x] = Hh[x];
so ι(Dd[x]) = Hh[x]. If t is a constant c, then by TA(c), Dd[c] = D[c];
so ι(Dd[c]) = ι(D[c]); but since D ≅ H, ι(D[c]) = H[c]; and by TA(c)
again, H[c] = Hh[c]; so ι(Dd[c]) = Hh[c].

Assp: For any i, 0 ≤ i < k, if t has i function symbols, then ι(Dd[t]) =
Hh[t].

Show: If t has k function symbols, then ι(Dd[t]) = Hh[t].
If t has k function symbols, then it is of the form hⁿt1 … tn for function symbol hⁿ and terms t1 … tn with < k function symbols. Then
Dd[t] = Dd[hⁿt1 … tn]; by TA(f), Dd[hⁿt1 … tn] = D[hⁿ]⟨Dd[t1]
… Dd[tn]⟩. So ι(Dd[t]) = ι(D[hⁿ]⟨Dd[t1] … Dd[tn]⟩); but since D ≅
H, ι(D[hⁿ]⟨Dd[t1] … Dd[tn]⟩) = H[hⁿ]⟨ι(Dd[t1]) … ι(Dd[tn])⟩; and by
assumption, ι(Dd[t1]) = Hh[t1], and … and ι(Dd[tn]) = Hh[tn];
so H[hⁿ]⟨ι(Dd[t1]) … ι(Dd[tn])⟩ = H[hⁿ]⟨Hh[t1] … Hh[tn]⟩; and by
TA(f), H[hⁿ]⟨Hh[t1] … Hh[tn]⟩ = Hh[hⁿt1 … tn]; which is just Hh[t];
so ι(Dd[t]) = Hh[t].

¹In Reason, Truth and History, Hilary Putnam makes this point to show that truth values of sentences are not sufficient to fix the interpretation of a language. As we shall see in this section, the
technical point is clear enough. It is another matter whether it bears the philosophical weight he means
for it to bear!


Indct: For any t, ι(Dd[t]) = Hh[t].

So when D and H are isomorphic, and for any variable x, ι maps d[x] to h[x], then
for any term t, ι maps Dd[t] to Hh[t].
Now we are in a position to extend the result to one for satisfaction of formulas.
If D and H are isomorphic, and for any variable x, ι maps d[x] to h[x], then a formula
P will be satisfied on D with d just in case it is satisfied on H with h.

T11.5. For some language L, if interpretations D ≅ H, and assignments d for D
and h for H are such that for any x, ι(d[x]) = h[x], then for any formula P,
Dd[P] = S iff Hh[P] = S.
By induction on the number of operators in P. Suppose D ≅ H.

Basis: Suppose P has no operator symbols and d and h are such that for any
x, ι(d[x]) = h[x]. If P has no operator symbols, then it is a sentence
letter S or an atomic Rⁿt1 … tn for relation symbol Rⁿ and terms
t1 … tn. Suppose the former; then by SF(s), Dd[S] = S iff D[S] = T;
since D ≅ H, iff H[S] = T; by SF(s), iff Hh[S] = S. Suppose the
latter; by SF(r), Dd[Rⁿt1 … tn] = S iff ⟨Dd[t1] … Dd[tn]⟩ ∈ D[Rⁿ];
since D ≅ H, iff ⟨ι(Dd[t1]) … ι(Dd[tn])⟩ ∈ H[Rⁿ]; since D ≅ H and
ι(d[x]) = h[x], by T11.4, iff ⟨Hh[t1] … Hh[tn]⟩ ∈ H[Rⁿ]; by SF(r),
iff Hh[Rⁿt1 … tn] = S.

Assp: For any i, 0 ≤ i < k, for d and h such that for any x, ι(d[x]) = h[x]
and P with i operator symbols, Dd[P] = S iff Hh[P] = S.

Show: For d and h such that for any x, ι(d[x]) = h[x] and P with k operator
symbols, Dd[P] = S iff Hh[P] = S.
If P has k operator symbols, then it is of the form ∼A, A → B, or
∀xA for variable x and formulas A and B with < k operator symbols.
Suppose for any x, ι(d[x]) = h[x].

(∼) Suppose P is of the form ∼A. Then Dd[P] = S iff Dd[∼A] = S; by
SF(∼), iff Dd[A] ≠ S; by assumption, iff Hh[A] ≠ S; by SF(∼), iff
Hh[∼A] = S; iff Hh[P] = S.

(→) Homework.

(∀) Suppose P is of the form ∀xA. Then Dd[P] = S iff Dd[∀xA] = S;
by SF(∀), iff for any m ∈ U_D, Dd(x|m)[A] = S. Similarly, Hh[P] = S
iff Hh[∀xA] = S; by SF(∀), iff for any n ∈ U_H, Hh(x|n)[A] = S. (i)
Suppose Hh[P] = S but Dd[P] ≠ S; then any n ∈ U_H is such that
Hh(x|n)[A] = S, but there is some m ∈ U_D such that Dd(x|m)[A] ≠ S.
From the latter, insofar as d(x|m) and h(x|ι(m)) have each member
related by ι, the assumption applies and, Hh(x|ι(m))[A] ≠ S; so there
is an n ∈ U_H such that Hh(x|n)[A] ≠ S; this is impossible; reject
the assumption: if Hh[P] = S, then Dd[P] = S. (ii) Similarly, [by
homework] if Dd[P] = S, then Hh[P] = S. Hint: given h(x|n), there
must be an m such that ι(m) = n; then d(x|m) and h(x|n) are related
so that the assumption applies.

For d and h such that for any x, ι(d[x]) = h[x] and P with k operator
symbols, Dd[P] = S iff Hh[P] = S.

Indct: For d and h such that for any x, ι(d[x]) = h[x], and any P, Dd[P] = S
iff Hh[P] = S.
As often occurs, the most difficult case is for the quantifier. The key is that the
assumption applies to Dd P and Hh P for any assignments d and h related so that
for any x, .dx/ D hx. Supposing that d and h are so related, there is no reason
to think that d.xjm/ and h remain in that relation. The problem is solved with a
corresponding modification to h: with d.xjm/; we modify h so that the assignment
to x simply is .m/. Thus d.xjm/ and h.xj.m// are related so that the assumption
applies.
Now it is a simple matter to show that isomorphic models are elementarily equivalent.

T11.6. If D ≅ H, then D ≡ H.

Suppose D ≅ H. By TI, D[P] ≠ T iff there is some assignment d such that Dd[P] ≠ S; since D ≅ H, where d and h are related as in T11.5, iff Hh[P] ≠ S; by TI, iff H[P] ≠ T. So D[P] = T iff H[P] = T; and D ≡ H.
Thus it is only the structures of interpretations up to isomorphism that matter for the
truth values of formulas. And such structures are completely sufficient to determine
truth values of formulas. It is another question whether truth values of formulas are
sufficient to determine models, even up to isomorphism.
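For finite interpretations the content of these theorems can be checked by brute force. The following sketch is mine, not the text's: formulas are encoded as nested tuples over a single binary relation symbol R, and a sentence is evaluated both on an interpretation and on its image under a bijection ι, confirming that the truth values agree, as T11.5 and T11.6 require.

```python
# Illustrative only: a toy evaluator for sentences over one binary relation R.
def satisfies(U, R, d, P):
    """Satisfaction of formula P on universe U, relation R, assignment d."""
    op = P[0]
    if op == 'R':                       # atomic R t1 t2 (variables only)
        return (d[P[1]], d[P[2]]) in R
    if op == 'not':
        return not satisfies(U, R, d, P[1])
    if op == 'arrow':
        return (not satisfies(U, R, d, P[1])) or satisfies(U, R, d, P[2])
    if op == 'all':                     # ('all', x, body)
        return all(satisfies(U, R, {**d, P[1]: m}, P[2]) for m in U)
    raise ValueError(op)

# Interpretation D: universe {0, 1}, R the "less-than" pairs.
UD, RD = {0, 1}, {(0, 1)}
# Isomorphic interpretation H via the bijection iota.
iota = {0: 'a', 1: 'b'}
UH = set(iota.values())
RH = {(iota[m], iota[n]) for (m, n) in RD}

# Sentences built from these pieces get the same value on D and on H.
sentences = [
    ('all', 'x', ('not', ('R', 'x', 'x'))),
    ('all', 'x', ('all', 'y', ('arrow', ('R', 'x', 'y'),
                               ('not', ('R', 'y', 'x'))))),
    ('all', 'x', ('all', 'y', ('R', 'x', 'y'))),
]
for P in sentences:
    assert satisfies(UD, RD, {}, P) == satisfies(UH, RH, {}, P)
```

The loop at the end is the finite analogue of T11.6: since H is the ι-image of D, every sentence receives the same truth value on both.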
*E11.6. Complete the proof of T11.5. You should set up the complete induction, but
may refer to the text, as the text refers to homework.
E11.7. (i) Explain what truth value the sentence ∀x(Dx → ∃y(Cy ∧ Pxy)) has on interpretation I and then I′ in example (I). Explain what truth values it has on I and then I′ in example (J). (ii) Explain what truth value the sentence S∅ + S∅ = SS∅ has on interpretations N and N′ in example (K). Are these results as you expect? Explain.

11.3.2 When Equivalence implies Isomorphism
It turns out that when the universe of discourse is finite, elementary equivalence is
sufficient to show isomorphism. Suppose UD is finite and interpretations D and H are
elementarily equivalent, so that every formula has the same truth value on the two
interpretations. We find a sequence of formulas which contain sufficient information
to show that D and H are isomorphic.
For some language L, suppose D ≡ H and UD = {m1, m2, ..., mn}. For an enumeration x1, x2, ... of the variables, consider some assignment d such that d(x1) = m1, d(x2) = m2, and ... and d(xn) = mn, and let C0 be the open formula,

(∼x1 = x2 ∧ ∼x1 = x3 ∧ ... ∧ ∼x1 = xn) ∧ (∼x2 = x3 ∧ ... ∧ ∼x2 = xn) ∧ ... ∧ (∼xn−1 = xn) ∧ ∀v(v = x1 ∨ v = x2 ∨ ... ∨ v = xn)

with appropriate parentheses. You should see this expression on analogy with quantity expressions from chapter 5 on translation. Its existential closure, that is, ∃x1∃x2...∃xn C0, is true just when there are exactly n things.
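For concreteness, the existential closure of C0 can be generated mechanically. This little helper is my own illustration, not the text's notation, using ASCII stand-ins: E for ∃, A for ∀, ~ for ∼, & for ∧, and v for ∨.

```python
# Hypothetical helper (mine, not the text's): build the "exactly n things"
# sentence, i.e. the existential closure of C0, as a string.
def exactly_n(n):
    xs = [f"x{i}" for i in range(1, n + 1)]
    # one inequality for each pair of distinct variables
    distinct = [f"~{a} = {b}" for i, a in enumerate(xs) for b in xs[i + 1:]]
    # every object is one of x1 ... xn
    cover = " v ".join(f"v = {x}" for x in xs)
    body = " & ".join(distinct + [f"Av({cover})"])
    return "".join(f"E{x}" for x in xs) + f"({body})"

print(exactly_n(2))   # Ex1Ex2(~x1 = x2 & Av(v = x1 v v = x2))
```

For n = 2 the output asserts that there are two distinct things and nothing else, exactly on the pattern of C0 above.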
Now consider an enumeration A1, A2, ... of those atomic formulas in L whose only variables are x1...xn. And set Ci = Ci−1 ∧ Ai if Dd[Ai] = S, and otherwise Ci = Ci−1 ∧ ∼Ai. It is easy to see that for any i, Dd[Ci] = S. The argument is by induction on i.

T11.7. For any i, Dd[Ci] = S.

Basis: For any a and b such that 1 ≤ a, b ≤ n and a ≠ b, since xa and xb are assigned distinct members of UD, Dd[xa = xb] ≠ S; so by SF(∼), Dd[∼xa = xb] = S; so by repeated applications of SF(∧), Dd[(∼x1 = x2 ∧ ∼x1 = x3 ∧ ... ∧ ∼x1 = xn) ∧ (∼x2 = x3 ∧ ... ∧ ∼x2 = xn) ∧ ... ∧ (∼xn−1 = xn)] = S. And since each member of UD is assigned to some variable in x1...xn, for any m ∈ UD, there is some a, 1 ≤ a ≤ n, such that Dd(v|m)[v = xa] = S. So by repeated applications of SF(∨), for any m ∈ UD, Dd(v|m)[v = x1 ∨ v = x2 ∨ ... ∨ v = xn] = S; so by SF(∀), Dd[∀v(v = x1 ∨ v = x2 ∨ ... ∨ v = xn)] = S; so by SF(∧), Dd[C0] = S.
Assp: For any i, 0 ≤ i < k, Dd[Ci] = S.

Show: Dd[Ck] = S.

Ck is of the form Ck−1 ∧ Ak or Ck−1 ∧ ∼Ak. In the first case, by assumption, Dd[Ck−1] = S, and by construction, Dd[Ak] = S; so by SF(∧), Dd[Ck−1 ∧ Ak] = S; which is to say, Dd[Ck] = S. In the second case, again Dd[Ck−1] = S; and by construction, Dd[Ak] ≠ S; so by SF(∼), Dd[∼Ak] = S; so by SF(∧), Dd[Ck−1 ∧ ∼Ak] = S; which is to say, Dd[Ck] = S.

Indct: For any i, Dd[Ci] = S.
So these formulas, though increasingly long, are all satisfied on assignment d.
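The chain C0, C1, C2, ... can be built mechanically for a toy interpretation. In this sketch (the encoding and names are mine, not the text's), the universe is {0, 1} with one binary relation R, and d assigns x1 to 0 and x2 to 1; each atomic goes in positively or negated according to whether it is satisfied on d, so every literal added to the chain comes out satisfied on d.

```python
# Toy interpretation: universe {0, 1}, binary relation R = {(0, 1)}.
R = {(0, 1)}
d = {"x1": 0, "x2": 1}
vars_ = ["x1", "x2"]

def sat(A):
    """Satisfaction of an atomic ('R', a, b) or ('=', a, b) on d."""
    kind, a, b = A
    return (d[a], d[b]) in R if kind == "R" else d[a] == d[b]

# Enumerate the atomics whose only variables are x1, x2.
atomics = [(k, a, b) for k in ("R", "=") for a in vars_ for b in vars_]

# Each Ci adds the i-th atomic, or its negation, per satisfaction on d.
conjuncts = []
for A in atomics:
    conjuncts.append(A if sat(A) else ("not",) + A)

# By construction every literal in the chain is satisfied on d.
assert all((lit[0] != "not") == sat(lit[1:] if lit[0] == "not" else lit)
           for lit in conjuncts)
```

The final assertion is the content of T11.7 in miniature: the construction guarantees satisfaction at every stage.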
Now, for the specification of an isomorphism between the interpretations, we set out to show there is a corresponding assignment h on which all the same expressions are satisfied. First, for any Ci, consider its existential closure, ∃x1...∃xn Ci. It is easy to see that for any Ci, H[∃x1...∃xn Ci] = T. Suppose otherwise; then since D ≡ H, D[∃x1...∃xn Ci] ≠ T; so by TI, there is some assignment d′ such that Dd′[∃x1...∃xn Ci] ≠ S; so, since the closure of Ci has no free variables, by T8.4, Dd[∃x1...∃xn Ci] ≠ S; so by repeated application of SF(∃), Dd[Ci] ≠ S; but by T11.7, this is impossible; reject the assumption: H[∃x1...∃xn Ci] = T. When the existential is not satisfied on d, as we remove the quantifiers, in each case the resultant formula without a quantifier is unsatisfied on d(x|m) for any m ∈ UD; so it is unsatisfied when m = d(x), so that the formula without the quantifier is unsatisfied on the original d. Observe that there are thus exactly n members of UH: H[∃x1...∃xn C0] = T; and, as we have already noted, this can be the case iff there are exactly n members of UH.
Now for some assignment h0, let h range over assignments that differ from h0 at most in assignment to x1...xn. Set Δi = {h | Hh[Ci] = S}, and Δ = ⋂i≥0 Δi. Observe: (i) No Δi is empty. Since H[∃x1...∃xn Ci] = T, by TI, it is satisfied on every assignment; so Hh0[∃x1...∃xn Ci] = S; so by repeated applications of SF(∃), there is some h such that Hh[Ci] = S. When the quantifiers come off, the result is some assignment that differs from h0 at most in assignments to x1...xn, and so some assignment in Δi. (ii) For any j ≥ i, Δj ⊆ Δi. Suppose otherwise; then there is some h such that h ∈ Δj but h ∉ Δi; so by construction, Hh[Cj] = S but Hh[Ci] ≠ S; if j = i this is impossible; so suppose j > i; then Cj is of the sort Ci ∧ Bi+1 ∧ Bi+2 ∧ ... ∧ Bj, where Bi+1...Bj are either atomics or negated atomics; so by repeated application of SF(∧), Hh[Ci] = S; this is impossible; reject the assumption: Δj ⊆ Δi. (iii) Finally, there are at most finitely many assignments
of the sort h. Since any h differs from h0 at most in assignments to x1...xn, and there are just n members of UH, there are nⁿ assignments of the sort h.

From these results it follows that Δ is non-empty. Suppose otherwise. Then for any h, there is some Δi such that h ∉ Δi. But there are only finitely many assignments of the sort h. So we may consider finitely many Δa...Δb from which for any h there is some Δi such that h ∉ Δi. But where each subscript in a...b is ≤ b, for each such Δi, Δb ⊆ Δi; and since each h is missing from at least one of them, Δb must lack each of the assignments missing from prior members of the sequence; so Δb is empty. But this is impossible; reject the assumption: Δ is not empty. So we have what we wanted: any h in Δ is an assignment that satisfies every Ci.
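The finiteness reasoning here is easy to check concretely. This sketch (mine, not the text's) counts the nⁿ assignments and verifies that a decreasing chain of nonempty subsets of that finite set has a nonempty intersection, which is the step that yields Δ ≠ ∅.

```python
from itertools import product

# With n members in the universe, there are n**n assignments to x1 ... xn.
n = 3
assignments = set(product(range(n), repeat=n))
assert len(assignments) == n ** n

# A decreasing chain D0 ⊇ D1 ⊇ ... of nonempty subsets of a finite set has
# nonempty intersection: the chain must stabilize.  Here Di keeps just the
# assignments whose first i values are 0.
deltas = [{a for a in assignments if all(v == 0 for v in a[:i])}
          for i in range(n + 1)]
assert all(deltas[j] <= deltas[i]
           for i in range(n + 1) for j in range(i, n + 1))   # nested
assert all(deltas[i] for i in range(n + 1))                  # nonempty
assert set.intersection(*deltas) == {(0,) * n}               # Δ nonempty
```

The infinite chain Δ0 ⊇ Δ1 ⊇ ... in the text behaves the same way because there are only finitely many assignments available.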
Now we are ready to specify a mapping for our isomorphism! Indeed, we are ready to show,

T11.8. If D ≡ H and UD is finite, then D ≅ H.

Suppose D ≡ H and UD is finite. Then there are Δ and formulas Ci as above. For some particular h ∈ Δ, for any i, 1 ≤ i ≤ n, let ι(d(xi)) = h(xi). Since h ∈ Δ, for any Ci, Hh[Ci] = S. So Hh[C0] = S. So h assigns each xi to a different member of UH, and ι is onto UH, as it should be. We now set out to show that the other conditions for isomorphism are met.
Sentence letters. Since D ≡ H, for any sentence letter S, D[S] = T iff H[S] = T; so D[S] = H[S].

Constants. We require that for any constant c, D[c] = mi iff H[c] = ι(mi). (i) For some constant c, suppose D[c] = mi. Since d(xi) = mi, ι(mi) = ι(d(xi)) = h(xi). By TA(c), Dd[c] = D[c] = mi; and by TA(v), Dd[xi] = d(xi) = mi; so Dd[c] = Dd[xi]; so ⟨Dd[c], Dd[xi]⟩ ∈ D[=]; so by SF(r), Dd[c = xi] = S; so c = xi is a conjunct in some Cn; but Hh[Cn] = S; so by repeated applications of SF(∧), Hh[c = xi] = S; so by SF(r), ⟨Hh[c], Hh[xi]⟩ ∈ H[=]; so Hh[c] = Hh[xi]; but by TA(c), Hh[c] = H[c], and by TA(v), Hh[xi] = h(xi); so H[c] = h(xi); so H[c] = ι(mi).

(ii) Suppose D[c] ≠ mi. As before, ι(mi) = h(xi); and Dd[xi] = mi. But by TA(c), Dd[c] = D[c]; so Dd[c] ≠ mi; so Dd[c] ≠ Dd[xi]; so ⟨Dd[c], Dd[xi]⟩ ∉ D[=]; so by SF(r), Dd[c = xi] ≠ S; so ∼c = xi is a conjunct in some Cn; but Hh[Cn] = S; so by repeated applications of SF(∧), Hh[∼c = xi] = S; so by SF(∼) and SF(r), ⟨Hh[c], Hh[xi]⟩ ∉ H[=]; so Hh[c] ≠ Hh[xi]; but by TA(c), Hh[c] = H[c], and by TA(v), Hh[xi] = h(xi); so H[c] ≠ h(xi); so H[c] ≠ ι(mi).
Relation Symbols. We require that for any relation symbol Rⁿ, ⟨ma ... mb⟩ ∈ D[Rⁿ] iff ⟨ι(ma) ... ι(mb)⟩ ∈ H[Rⁿ]. (i) Suppose ⟨ma ... mb⟩ ∈ D[Rⁿ]. Since d(xa) = ma, and ... and d(xb) = mb, we have ι(ma) = ι(d(xa)) = h(xa), and ... and ι(mb) = ι(d(xb)) = h(xb), and also by TA(v), Dd[xa] = ma, and ... and Dd[xb] = mb; so ⟨Dd[xa], ..., Dd[xb]⟩ ∈ D[Rⁿ]; so by SF(r), Dd[Rⁿxa...xb] = S; so Rⁿxa...xb is a conjunct of some Cn; but Hh[Cn] = S; so by repeated applications of SF(∧), Hh[Rⁿxa...xb] = S; so by SF(r), ⟨Hh[xa], ..., Hh[xb]⟩ ∈ H[Rⁿ]; but by TA(v), Hh[xa] = h(xa) = ι(ma), and ... and Hh[xb] = h(xb) = ι(mb); so ⟨ι(ma) ... ι(mb)⟩ ∈ H[Rⁿ].

(ii) Suppose ⟨ma ... mb⟩ ∉ D[Rⁿ]. As before, ι(ma) = h(xa), and ... and ι(mb) = h(xb); similarly, Dd[xa] = ma, and ... and Dd[xb] = mb; so ⟨Dd[xa], ..., Dd[xb]⟩ ∉ D[Rⁿ]; so by SF(r), Dd[Rⁿxa...xb] ≠ S; and so ∼Rⁿxa...xb is a conjunct of some Cn; but Hh[Cn] = S; so by repeated applications of SF(∧), Hh[∼Rⁿxa...xb] = S; so by SF(∼) and SF(r), ⟨Hh[xa], ..., Hh[xb]⟩ ∉ H[Rⁿ]; but as before, Hh[xa] = ι(ma), and ... and Hh[xb] = ι(mb); so ⟨ι(ma) ... ι(mb)⟩ ∉ H[Rⁿ].
Function symbols. We require that for any function symbol hⁿ, ⟨⟨ma ... mb⟩, mc⟩ ∈ D[hⁿ] iff ⟨⟨ι(ma) ... ι(mb)⟩, ι(mc)⟩ ∈ H[hⁿ]. (i) Suppose ⟨⟨ma ... mb⟩, mc⟩ ∈ D[hⁿ]. Since d(xa) = ma, and ... and d(xb) = mb, and d(xc) = mc, we have ι(ma) = ι(d(xa)) = h(xa), and ... and ι(mb) = ι(d(xb)) = h(xb), and ι(mc) = ι(d(xc)) = h(xc); and also by TA(v), Dd[xa] = ma, and ... and Dd[xb] = mb, and Dd[xc] = mc; so ⟨⟨Dd[xa] ... Dd[xb]⟩, Dd[xc]⟩ ∈ D[hⁿ]; so D[hⁿ]⟨Dd[xa] ... Dd[xb]⟩ = Dd[xc]; so by TA(f), Dd[hⁿxa...xb] = Dd[xc]; so ⟨Dd[hⁿxa...xb], Dd[xc]⟩ ∈ D[=]; so by SF(r), Dd[hⁿxa...xb = xc] = S; so hⁿxa...xb = xc is a conjunct of some Cn; but Hh[Cn] = S; so by repeated applications of SF(∧), Hh[hⁿxa...xb = xc] = S; so by SF(r), ⟨Hh[hⁿxa...xb], Hh[xc]⟩ ∈ H[=]; so Hh[hⁿxa...xb] = Hh[xc]; but by TA(f), Hh[hⁿxa...xb] = H[hⁿ]⟨Hh[xa] ... Hh[xb]⟩; so H[hⁿ]⟨Hh[xa] ... Hh[xb]⟩ = Hh[xc]; so ⟨⟨Hh[xa] ... Hh[xb]⟩, Hh[xc]⟩ ∈ H[hⁿ]; but by TA(v), Hh[xa] = h(xa) = ι(ma), and ... Hh[xb] = h(xb) = ι(mb), and Hh[xc] = h(xc) = ι(mc); so ⟨⟨ι(ma) ... ι(mb)⟩, ι(mc)⟩ ∈ H[hⁿ].

(ii) Suppose ⟨⟨ma ... mb⟩, mc⟩ ∉ D[hⁿ]. As before, ι(ma) = h(xa), and ... and ι(mb) = h(xb), and ι(mc) = h(xc); and also Dd[xa] = ma, and ... and Dd[xb] = mb, and Dd[xc] = mc; so ⟨⟨Dd[xa] ... Dd[xb]⟩, Dd[xc]⟩ ∉ D[hⁿ]; so D[hⁿ]⟨Dd[xa] ... Dd[xb]⟩ ≠ Dd[xc]; so by TA(f), Dd[hⁿxa...xb] ≠ Dd[xc]; so ⟨Dd[hⁿxa...xb], Dd[xc]⟩ ∉ D[=]; so by SF(r), Dd[hⁿxa...xb = xc] ≠ S; so ∼hⁿxa...xb = xc is a conjunct of some Cn; but Hh[Cn] = S; so by repeated applications of SF(∧), Hh[∼hⁿxa...xb = xc] = S; so by SF(∼) and SF(r), ⟨Hh[hⁿxa...xb], Hh[xc]⟩ ∉ H[=]; so Hh[hⁿxa...xb] ≠ Hh[xc]; but by TA(f), Hh[hⁿxa...xb] = H[hⁿ]⟨Hh[xa] ... Hh[xb]⟩; so H[hⁿ]⟨Hh[xa] ... Hh[xb]⟩ ≠ Hh[xc]; so ⟨⟨Hh[xa] ... Hh[xb]⟩, Hh[xc]⟩ ∉ H[hⁿ]; but as before, Hh[xa] = ι(ma), and ... Hh[xb] = ι(mb), and Hh[xc] = ι(mc); so ⟨⟨ι(ma) ... ι(mb)⟩, ι(mc)⟩ ∉ H[hⁿ].
Thus elementary equivalence is sufficient for isomorphism in the case where the universe of discourse is finite. This is an interesting result! Consider any interpretation D with a finite UD, and the set Δ of formulas true on D. By our result, any other model H that makes all the formulas in Δ true (any H such that D ≡ H) is such that D is isomorphic to H. As we shall shortly see, the situation is not so straightforward when UD is infinite.

11.4 Compactness and Isomorphism
Compactness takes the link between syntax and semantics from adequacy, and combines it with the finite length of derivations. The result is simple enough, and puts us in a position to obtain a range of further conclusions.

ST A set Γ of formulas is satisfiable iff it has a model. Γ is finitely satisfiable iff every finite subset of it has a model.

Now compactness draws a connection between satisfiability and finite satisfiability,

T11.9. A set Γ of formulas is satisfiable iff it is finitely satisfiable. (compactness)
(i) Suppose Γ is satisfiable, but not finitely satisfiable. Then there is some M such that M[Γ] = T; but there is a finite Γ′ ⊆ Γ such that any M′ has M′[Γ′] ≠ T; so M[Γ′] ≠ T; so there is a formula P ∈ Γ′ such that M[P] ≠ T; but since Γ′ ⊆ Γ, P ∈ Γ; so M[Γ] ≠ T. This is impossible; reject the assumption: if Γ is satisfiable, then it is finitely satisfiable.

(ii) Suppose Γ is finitely satisfiable, but not satisfiable. By T10.17, if Γ is consistent, then it has a model M. But since Γ is not satisfiable, it has no model; so it is not consistent; so there is some formula A such that Γ ⊢ A and Γ ⊢ ∼A; consider derivations of these results, and the set Δ of premises of these derivations; since derivations are finite, Δ is finite; and since Δ includes all the premises, Δ ⊢ A and Δ ⊢ ∼A; so by soundness, Δ ⊨ A and Δ ⊨ ∼A; since Γ is finitely satisfiable, there must be some model M such that M[Δ] = T; then by QV, M[A] = T and M[∼A] = T. But by T7.5, there is no M and A such that M[A] = T and M[∼A] = T. This is impossible; reject the assumption: if Γ is finitely satisfiable, then it is satisfiable.
This theorem puts us in a position to reason from finite satisfiability to satisfiability. And the results of such reasoning may be startling. Consider again the standard interpretation N for L<NT,

N[∅] = 0
N[<] = {⟨m,n⟩ | m, n ∈ ℕ, and m is less than n}
N[S] = {⟨m,n⟩ | m, n ∈ ℕ, and n is the successor of m}
N[+] = {⟨⟨m,n⟩,o⟩ | m, n, o ∈ ℕ, and m plus n equals o}
N[×] = {⟨⟨m,n⟩,o⟩ | m, n, o ∈ ℕ, and m times n equals o}

Let Γ include all the sentences true on N. Now consider a language L′ like L<NT but with the addition of a single constant c. And consider a set of sentences,

Γ′ = Γ ∪ {∅ < c, S∅ < c, SS∅ < c, SSS∅ < c, SSSS∅ < c, ...}

that is like Γ but with the addition of sentences asserting that c is greater than each integer. Clearly there is no such individual on the standard interpretation N. A finite subset of Γ′ can have at most finitely many of these sentences as members. Thus a finite subset of Γ′ is a subset of,

Γ ∪ {∅ < c, S∅ < c, SS∅ < c, ..., SS...S∅ < c}

with n Ss in the last of these sentences, for some n. But any such set is satisfiable: simply let the interpretation N′ be like N but with N′[c] = n + 1. So Γ′ is finitely satisfiable, and it follows from T11.9 that Γ′ has a model M′. But, further, by reasoning as for T10.16, a model M like M′ but without the assignment to c is a model of L<NT for all the sentences in Γ. So N ≡ M. But N ≇ M. For there must be a member of UM with infinitely many members of UM that stand in the < relation to it, whereas every member of UN has only finitely many members of UN that stand in the < relation to it; and any isomorphism would preserve this feature.
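The finite satisfiability step is mechanical: given any finite set of the new sentences, an interpretation like N but assigning c a large enough number satisfies them all. A sketch (the encoding and function name are mine):

```python
# Read a finite subset of the new sentences as the set of naturals k for
# which the sentence "S...S∅ < c" (with k occurrences of S) is a member.
def model_for(finite_subset):
    """Return an interpretation of c making every sentence k < c true."""
    c = max(finite_subset, default=0) + 1
    assert all(k < c for k in finite_subset)   # every bound is beaten
    return c

print(model_for({0, 1, 2}))   # 3: one more than the largest bound mentioned
```

No single choice of c works for the whole infinite set, which is exactly why compactness, rather than a direct construction on N, is needed for Γ′ itself.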
It is worth observing that we have demonstrated the existence of a model for the completely nonstandard M by appeal to the more standard models N′ for finite subsets of Γ′, through the compactness theorem. Also, it is now clear that there can be no analog to the result of the previous section for models with an infinite domain: for models with an infinite domain, elementary equivalence does not in general imply isomorphism. In the next section, we begin to see just how general this phenomenon is.

11.5 Submodels and Löwenheim-Skolem
The construction for the adequacy theorem gives us a countable model for any consistent set of sentences. Already, this suggests that models for a set of sentences do not always have the same size domain. Suppose Γ has a model I. Then by T10.4, Γ is consistent; so by T10.17, Γ has a model M where the universe of this latter model is constructed of disjoint sets of integers. But this means that if Γ has a model at all, then it has a countable model, for we might order the members of UM by, say, their least elements into a countable series. In fact, we might set up a function ι from each set in UM to its least element, to establish an isomorphic interpretation M′ whose universe just is a set of integers. Then by T11.6, M′[Γ] = T. So consider any model whose universe is not countable; it must be elementarily equivalent to one whose universe is a countable set of integers. But, of course, there is no one-to-one map from an uncountable universe onto a countable one, so the models are not isomorphic.

This sort of result is strengthened in an interesting way by the Löwenheim-Skolem theorems. In the first form, we show that every model has a submodel with a countable domain.

11.5.1 Submodels
SM A model M of a language L is a submodel of model N (M ⊆ N) iff

1. UM ⊆ UN,
2. For any sentence letter S, M[S] = N[S],
3. For any constant c of L, M[c] = N[c],
4. For any function symbol hⁿ of L and any ⟨a1...an⟩ from the members of UM, ⟨⟨a1...an⟩, b⟩ ∈ M[hⁿ] iff ⟨⟨a1...an⟩, b⟩ ∈ N[hⁿ],
5. For any relation symbol Rⁿ of L and any ⟨a1...an⟩ from the members of UM, ⟨a1...an⟩ ∈ M[Rⁿ] iff ⟨a1...an⟩ ∈ N[Rⁿ].

The interpretations of hⁿ and of Rⁿ on M are the restrictions of their respective interpretations on N. Observe that a submodel is completely determined once its domain is given. A submodel is not well-defined if its domain does not include objects for the interpretation of the constants, and the closure of its functions.
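That a submodel's domain must contain the constants' denotations and be closed under the functions can be made concrete. This sketch (names mine, not the text's) computes the least domain containing a seed set, the constants' values, and everything reachable by the (here unary) functions:

```python
# Compute the smallest admissible submodel domain containing `seed`:
# it must include each constant's denotation and be closed under every
# function in `functions` (unary functions, for simplicity).
def submodel_domain(seed, constants, functions):
    dom = set(seed) | set(constants)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for f in functions:
            for a in list(dom):
                b = f(a)
                if b not in dom:
                    dom.add(b)
                    changed = True
    return dom

# Successor-mod-5 over universe {0,...,4}, constant denoting 0: starting
# from {2}, closure forces the whole universe into the domain.
print(submodel_domain({2}, {0}, [lambda a: (a + 1) % 5]))
```

Here the closure requirement is not optional: dropping any element of the result would leave the function interpretation undefined somewhere on the domain.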
ES Say d is a variable assignment into the members of UM. Then M is an elementary submodel of N iff M ⊆ N and for any formula P of L and any such d, Md[P] = S iff Nd[P] = S.

If M is an elementary submodel of N, we write M ≼ N. First,

T11.10. If M ≼ N then for any sentence P of L, M[P] = T iff N[P] = T.

Suppose M ≼ N and consider some sentence P. By TI, M[P] = T iff Md[P] = S for every assignment d into UM; since P is a sentence, by T8.4, iff for some particular assignment h, Mh[P] = S; since M ≼ N, iff Nh[P] = S; since P is a sentence, by T8.4, iff Nd[P] = S for every d into UN; by TI, iff N[P] = T. So M[P] = T iff N[P] = T.
This much is clear. It is not so easy to demonstrate the conditions under which a submodel is an elementary submodel. We make a beginning with the following theorems.
T11.11. Suppose M ⊆ N and d is a variable assignment into UM. Then for any term t, Md[t] = Nd[t].

By induction on the number of function symbols in t. Suppose M ⊆ N and d is a variable assignment into UM.

Basis: Suppose t has no function symbols. Then t is a variable x or a constant c. (i) Suppose t is a constant c. Then Md[t] is Md[c]; by TA(c) this is M[c]; and since M ⊆ N, this is N[c]; by TA(c) again, this is Nd[c]; which is just Nd[t]. (ii) Suppose t is a variable x. Then Md[t] is Md[x]; by TA(v), this is d(x), and by TA(v) again, this is Nd[x]; which is just Nd[t].

Assp: For any i, 0 ≤ i < k, if t has i function symbols, then Md[t] = Nd[t].

Show: If t has k function symbols, Md[t] = Nd[t].

If t has k function symbols, then it is of the form hⁿt1...tn for some terms t1...tn with < k function symbols. So Md[t] is Md[hⁿt1...tn]; by TA(f) this is M[hⁿ]⟨Md[t1], ..., Md[tn]⟩; since M ⊆ N, with the assumption, this is N[hⁿ]⟨Nd[t1], ..., Nd[tn]⟩; by TA(f), this is Nd[hⁿt1...tn]; which is just Nd[t].

Indct: For any term t, Md[t] = Nd[t].
T11.12. Suppose that M ⊆ N and that for any formula P and every variable assignment d such that Nd[∃xP] = S there is an m ∈ UM such that Nd(x|m)[P] = S. Then M ≼ N.
Suppose M ⊆ N and that for any formula P and every variable assignment d such that Nd[∃xP] = S there is an m ∈ UM such that Nd(x|m)[P] = S. We show, by induction on the number of operators in P, that for d any assignment into the members of UM, Md[P] = S iff Nd[P] = S.

Basis: If P is atomic then it is either a sentence letter S or an atomic of the form Rⁿt1...tn for some relation symbol Rⁿ and terms t1...tn. (i) Suppose P is S. Then Md[P] = S iff Md[S] = S; by SF(s), iff M[S] = T; since M ⊆ N, iff N[S] = T; by SF(s), iff Nd[S] = S; iff Nd[P] = S. (ii) Suppose P is Rⁿt1...tn. Then Md[P] = S iff Md[Rⁿt1...tn] = S; by SF(r), iff ⟨Md[t1], ..., Md[tn]⟩ ∈ M[Rⁿ]; since M ⊆ N, with T11.11, iff ⟨Nd[t1], ..., Nd[tn]⟩ ∈ N[Rⁿ]; by SF(r), iff Nd[Rⁿt1...tn] = S; iff Nd[P] = S.

Assp: For any i, 0 ≤ i < k, for d any assignment into the members of UM, if P has i operator symbols, then Md[P] = S iff Nd[P] = S.

Show: If P has k operator symbols, then for d any assignment into the members of UM, Md[P] = S iff Nd[P] = S.

If P has k operator symbols, then it is of the form ∼A, A → B, or ∃xA for variable x and formulas A and B with < k operator symbols (treating universally quantified expressions as equivalent to existentially quantified ones). Let d be an assignment into the members of UM.

(∼) Suppose P is ∼A. Md[P] = S iff Md[∼A] = S; by SF(∼), iff Md[A] ≠ S; by assumption, iff Nd[A] ≠ S; by SF(∼), iff Nd[∼A] = S; iff Nd[P] = S.

(→) Homework.

(∃) Suppose P is ∃xA. (i) Suppose Md[P] = S; then Md[∃xA] = S; so by SF(∃), there is some o ∈ UM such that Md(x|o)[A] = S; so, since d(x|o) is an assignment into the members of UM, by assumption, Nd(x|o)[A] = S; so by SF(∃), Nd[∃xA] = S; so Nd[P] = S. (ii) Suppose Nd[P] = S; then Nd[∃xA] = S; so by the assumption of the theorem, there is an m ∈ UM such that Nd(x|m)[A] = S; since d(x|m) is an assignment into the members of UM, by assumption, Md(x|m)[A] = S; so by SF(∃), Md[∃xA] = S; so Md[P] = S. So Md[P] = S iff Nd[P] = S.

In any case, if P has k operator symbols, Md[P] = S iff Nd[P] = S.

Indct: For any P, Md[P] = S iff Nd[P] = S.
So the result works only so long as the quantifier case is guaranteed by witnesses for each existential claim in the universe of the submodel. The Löwenheim-Skolem theorem takes advantage of what we have done by producing a model in which these witnesses are present.

11.5.2 Downward Löwenheim-Skolem
The Löwenheim-Skolem theorem takes advantage of what we have just done by producing a model in which the required witnesses are present.
UM Consider some model N and suppose a well-ordering of the objects of UN. We construct a countable submodel M as follows. Let A0 be a countable subset of UN. We construct a series A0, A1, A2, .... For a formula of the form ∃xP in the language L, and a variable assignment d into Ai, let d′ be like d in its assignments to the variables free in P, and after that assign a constant object m0 from A0. Then for any P and d such that Nd[∃xP] = S, find the first object o in the well-ordering of UN such that Nd′(x|o)[P] = S. To form Ai+1, augment Ai with all the objects obtained this way. Because there are countably many formulas, and countably many initial segments of the variable assignments, countably many objects are added to form Ai+1; so if Ai is countable, Ai+1 is countable. Let UM be ⋃i≥0 Ai. Again, if each Ai is countable, UM is countable.
There may be uncountably many variable assignments into a given Ai. However, for a given formula P, no matter how many assignments there may be on which it is satisfied, there can be at most countably many initial segments of the sort d′. So at most countably many objects are added. The functions from formulas and variable assignments to individuals are Skolem functions, and we consider the closure of A0 under the set of all Skolem functions.
T11.13. With UM constructed as above, a submodel M of N is well-defined.

Clearly UM ⊆ UN. For constants, consider the case when ∃xP is ∃x(x = c); then at any stage i, Nd′(x|o)[x = c] = S iff o = N[c]. So N[c] is a member of Ai+1 and so of UM. Similarly, for functions, consider the case when ∃xP is ∃x(hⁿv1...vn = x) for some function symbol hⁿ and variables v1...vn and x. For any d, consider some d′ which assigns objects to each of the variables v1...vn; then there is some Ai such that d′ is an assignment into it; so by construction, Ai+1 includes an object o such that Nd′(x|o)[hⁿv1...vn = x] = S. But this must be the object N[hⁿ]⟨Nd′[v1], ..., Nd′[vn]⟩.
T11.14. For any model N there is an M ≼ N such that M has a countable domain. (Löwenheim-Skolem)

To show M ≼ N by T11.12, it remains to show that for any formula P and every variable assignment d such that Nd[∃xP] = S there is an m ∈ UM such that Nd(x|m)[P] = S. But this is easy. Suppose Nd[∃xP] = S; then where d and d′ agree on assignments to all the free variables in ∃xP, by T8.4, Nd′[∃xP] = S. But all assignments from d′ are elements of some Ai; so by construction there is an object m, in Ai+1 and so in UM, such that Nd′(x|m)[P] = S; and since d and d′ agree on their assignments to all the free variables in P, by T8.4, Nd(x|m)[P] = S.
[applications]

11.5.3 Upward Löwenheim-Skolem

Part IV

Logic and Arithmetic: Incompleteness and Computability

Introductory

In Part III we showed that our semantical and syntactical logical notions are related as we want them to be: exactly the same arguments are semantically valid as are provable. So,

Γ ⊢ P   iff   Γ ⊨ P

Thus our derivation system is both sound and adequate, as it should be. In this part, however, we encounter a series of limiting results with particular application to arithmetic and computing.
First, it is natural to think of mathematics as characterized by proofs and derivations. Thus one might anticipate that there would be some system of premises Γ such that for any P in LNT, with N the standard interpretation of number theory, we would have,

Γ ⊢ P   iff   N[P] = T

Note the difference between our claims. In the first, derivations from premises are matched to entailments from premises; in the second, derivations (and so entailments) are matched to truths on an interpretation. Perhaps inspired by suspicions about the existence or nature of numbers, one might expect that derivations would even entirely replace the notion of mathematical truth. And Q or PA may already seem to be deductive systems of this sort. But we shall see that there can be no such deductive system. From Gödel's first incompleteness theorem, under certain constraints, no consistent deductive system has as consequences either P or ∼P for every P of LNT; any such theory is (negation) incomplete. But then, subject to those constraints, any consistent deductive system must omit some truths of arithmetic from among its consequences.2

2 Gödel's groundbreaking paper is "On the Formally Undecidable Propositions of Principia Mathematica and Related Systems."


Suppose there is no one-to-one map between truths of arithmetic and consequences of our theories. Rather, we propose a theory R(eal) whose consequences are unproblematically true, and another theory I(deal) whose consequences outrun those of R and whose literal truth is therefore somehow suspect. Perhaps R is sufficient only for something like basic arithmetic, whereas I seems to quantify over all members of a far-flung infinite domain. Even though not itself a vehicle for truth, theory I may be useful under certain circumstances. Suppose,

(a) For any P in the scope of R, if P is not true, then R ⊢ ∼P
(b) I extends R: If R ⊢ P then I ⊢ P
(c) I is consistent: There is no P such that I ⊢ P and I ⊢ ∼P

Then theory I may be treated as a tool for achieving results in the scope of R: Suppose P is a result in the scope of R, and I ⊢ P; then by consistency, I ⊬ ∼P; and because I extends R, R ⊬ ∼P; so by (a), P is true. This is (a sketch of) the famous Hilbert program for mathematics, which aims to make sense of infinitary mathematics based not on the truth but rather the consistency of theory I.

Because consistency is a syntactical result about proof systems, not itself about far-flung mathematical structures, one might have hoped for proofs of consistency from real, rather than ideal, theories. But Gödel's second incompleteness theorem tells us that derivation systems extending PA cannot prove even their own consistency. So a weaker real theory will not be able to prove the consistency of PA and its extensions. But this seems to remove a demonstration of (c), and so to doom the Hilbert strategy.3
Even though no one derivation system has as consequences every mathematical
truth, derivations remain useful, and mathematicians continue to do proofs! Given
that we care about them, there is a question about the automation of proofs. Say a
property or relation is effectively decidable iff there is an algorithm or program that
3 We

are familiar with the Pythagorean Theorem according to which the hypotenuse and sides of
a right triangle are such that a2 D b 2 C c 2 . In the 1600s Fermat famously proposed that there are
no integers a; b; c such that an D b n C c n for n > 2; so, for example, there are no a; b; c such
that a3 D b 3 C c 3 . In 1995 Andrew Wiles proved that this is so. But Wiless proof requires some
fantastically abstract (and difficult) mathematics. Even if Wiless abstract theory (I ) is not true Hilbert
could still accept the demonstration of Fermats (real) theorem so long as I is shown to be consistent.
Gdels result seems to doom this strategy. Of course, one might simply accept Wiless proof on the
ground that his advanced mathematics is true so that its consequences are true as well. But this is a
topic in philosophy of mathematics, not logic! See, for example, Shapiro, Thinking About Mathematics
for an introduction to options in the philosophy of mathematics. Our limiting results may very well
stimulate interest in that field!

could decide, in a finite number of steps, whether the property or relation applies in any given case. Abstracting from the limitations of particular computing devices, we shall identify a class of relations which are decidable. A corollary of Gödel's first theorem is that validity in systems like ND and AD is not among the decidable relations. Thus there are interesting limits on the decidable relations, where it is possible also to look back through this lens at Gödel's first theorem.
Chapter 12 lays down background required for chapters that follow. It begins with a discussion of recursive functions, and concludes with a few essential results, including a demonstration of the incompleteness of arithmetic. Chapters 13 and 14 deepen and extend those results in different ways. Chapter 13 includes Gödel's own argument for incompleteness from the construction of a sentence such that neither it nor its negation is provable, along with a demonstration of the second incompleteness theorem. Chapter 14 again shows that there must exist a sentence such that neither it nor its negation is provable, but this time in association with an account of computability. Chapter 12 is required for either chapter 13 or chapter 14; but those chapters may be taken in either order.

Chapter 12

Recursive Functions and Q
A formal theory consists of a language, with some axioms and a proof system. Q and PA are example theories. A theory T is (negation) complete iff for any sentence P in its language L, either T ⊢ P or T ⊢ ∼P. Observe again that a derivation system is adequate when it proves every entailment of some premises. Our standard logic does that. Granting, then, the adequacy of the logic, negation completeness is a matter of premises proving a sufficiently robust set of consequences: consequences which include P or ∼P for every P in the language.
Let us pause to consider why completeness matters. From E8.22, as soon as a language L has an interpretation I, for any sentence P in L, either I[P] = T or I[∼P] = T. So if we set out to characterize by means of a theory the sentences that are true on some interpretation, our theory is bound to omit some sentences unless it is such that for any P, either T ⊢ P or T ⊢ ∼P. To the extent that we desire a characterization of all true sentences in some domain, of arithmetic or whatever, a complete theory is a desirable theory.
By itself negation completeness is no extraordinary thing. Consider a theory whose language has just two sentence letters A and B, along with the usual sentential operators and rules. The axioms of our theory are just A and ¬B. On a truth table, there is just one row where these axioms are both true, and on that row, any P in the language is either T or F, so that one of P or ¬P is T.
(A)

     A B    A   ¬B  /  P    ¬P
     T T    T   F
     T F    T   T      T/F  F/T
     F T    F   F
     F F    F   T
So for any P, either A, ¬B ⊨ P or A, ¬B ⊨ ¬P. But from the adequacy of the derivation system, if Γ ⊨ P then Γ ⊢ P (T10.11, p. 476); so for any P, either A, ¬B ⊢ P or A, ¬B ⊢ ¬P. So our little theory with its restricted language is negation complete. Contrast this with a theory that has the same language and rules, but A as its only axiom. In this case, it is easy to see from truth tables that, say, A ⊭ B and A ⊭ ¬B. But by soundness, if Γ ⊢ P then Γ ⊨ P (T10.3, p. 462); it follows that A ⊬ B and A ⊬ ¬B. So this theory is not negation complete.
These theories are not very interesting. However, let LNT^{S+} be a language like LNT whose only function symbols are S and + (without ×), and let LNT^× be a language like LNT whose only function symbol is × (without S and +). Then there is a complete theory for the arithmetic of LNT^{S+} (Presburger Arithmetic), and a complete theory for the arithmetic of LNT^× (Skolem Arithmetic).¹ These are interesting and powerful theories. So, again, by itself negation completeness is not so extraordinary.
However there is no complete theory for the arithmetic of LNT, which includes all of S, +, and ×. It turns out that theories are something like superheroes. In the ordinary case, a complete, and so a happy, life is at least within reach. However, as theories acquire certain powers, they take on a fatal flaw just because of their powers, where this flaw makes completeness unattainable. On its face, theory Q does not appear particularly heroic. We have seen already in E7.20 that Q ⊬ x × y = y × x and Q ⊬ ¬(x × y = y × x). So Q is negation incomplete. PA, which does prove x × y = y × x along with other standard results in arithmetic, might seem a more likely candidate for heroism. But Q already includes features sufficient to generate the flaw, which appears also in any theories, like PA, which have at least all the powers of Q. It is our task to identify this flaw.
It turns out that a system with the powers of Q, including S, +, and ×, can express and capture all the recursive functions, and a system with these powers must have the fatal flaw. Thus, in this chapter we focus on the recursive functions, and associate them with powers of our formal systems. We conclude with a few applications of these powers.

12.1 Recursive Functions

In chapter 6 (p. 311) for Q and PA we had axioms of the sort,


a: x C ; D x
b: x C Sy D S.x C y/
¹For demonstration of completeness for Presburger Arithmetic, see Fisher, Formal Number Theory and Computability, chapter 7, along with Boolos, Burgess, and Jeffrey, Computability and Logic, chapter 24.


and
c. x × ∅ = ∅
d. x × Sy = (x × y) + x
These enable us to derive x + y and x × y for arbitrary values of x and y. Thus, by (a), 2 + 0 = 2; so by (b), 2 + 1 = 3; and by (b) again, 2 + 2 = 4; and so forth. From the values at any one stage, we are in a position to calculate values at the next. And similarly for multiplication. From E6.35 on p. 312, all this should be familiar.
While axioms thus supply effective means for calculating the values of these functions, the functions themselves might be similarly identified or specified. So, given a successor function suc(x), we may identify the functions plus(x, y):

a. plus(x, 0) = x
b. plus(x, suc(y)) = suc(plus(x, y))

and times(x, y):

c. times(x, 0) = 0
d. times(x, suc(y)) = plus(times(x, y), x)

For ease of reading, let us typically revert to the more ordinary notation S, +, and × for these functions, though we stick with the (emphasized) sans serif font. We have been thinking of functions as certain complex sets. Thus the plus function is a set with elements {. . . ⟨⟨2,0⟩, 2⟩, ⟨⟨2,1⟩, 3⟩, ⟨⟨2,2⟩, 4⟩ . . .}. Our specification picks out this set. From the first clause, plus(x, y) has ⟨⟨2,0⟩, 2⟩ as a member; given this, ⟨⟨2,1⟩, 3⟩ is a member; and so forth. So the two clauses work together to specify the plus function. And similarly for times.
But these are not the only sets which may be specified this way. Thus the standard factorial fact(x):

e. fact(0) = S(0)
f. fact(Sy) = fact(y) × Sy

Again, we will often revert to the more typical x! notation. Zero factorial is one. And the factorial of Sy multiplies 1 × 2 × . . . × y by Sy. Similarly power(x, y):

g. power(x, 0) = S0
h. power(x, Sy) = power(x, y) × x

Any number to the power of zero is one (x⁰ = 1). And then x^Sy multiplies x^y = x × x × . . . × x (y times) by another x.
We shall be interested in a class of functions, the recursive functions, which may
be specified (in part) by this two-stage strategy. To make progress, we turn to a
general account in five stages.

12.1.1 Initial Functions

Our examples have simply taken suc(x) as given. Similarly, we shall require a stock of initial functions. There are initial functions of three different types.
First, we shall continue to include suc(x) among the initial functions. So suc(x) = {⟨0,1⟩, ⟨1,2⟩, ⟨2,3⟩ . . .}.
Second, zero(x) is a function which returns zero for any input value. So zero(x) = {⟨0,0⟩, ⟨1,0⟩, ⟨2,0⟩ . . .}.
Finally, for any 1 ≤ k ≤ j, we require a collection of identity functions idntʲₖ(x₁ . . . xⱼ). Each idntʲₖ function takes j places and simply returns the value from the kth place. Thus idnt³₂(4, 5, 6) = 5. So idnt³₂ = {. . . ⟨⟨1,2,3⟩, 2⟩ . . . ⟨⟨4,5,6⟩, 5⟩ . . .}. And in the simplest case, idnt¹₁(x) = x.

12.1.2 Composition

In our examples, we have let one function be composed from others, as when we consider times(x, suc(y)) or the like. Say x⃗ represents a (possibly empty) series of variables x₁ . . . xₙ.
CM Let g(y⃗) and h(x⃗, w, z⃗) be any functions. Then f(x⃗, y⃗, z⃗) is defined by composition from g(y⃗) and h(x⃗, w, z⃗) iff f(x⃗, y⃗, z⃗) = h(x⃗, g(y⃗), z⃗).

So h(x⃗, w, z⃗) gets its value in the w-place from g(y⃗). Here is a simple example: f(y, z) = zero(y) + z results by composition from substitution of zero(y) into plus(w, z); so plus(w, z) gets its value in the w-place from zero(y). The result is the set with members {. . . ⟨⟨2,0⟩, 0⟩, ⟨⟨2,1⟩, 1⟩, ⟨⟨2,2⟩, 2⟩ . . .}. Given, say, input ⟨2,2⟩, zero(y) takes the input 2 and supplies a zero to the first place of the plus function; then from plus the result is the sum of 0 and 2, which is 2. And similarly in other cases. In contrast, zero(x + y) has members {. . . ⟨⟨2,0⟩, 0⟩, ⟨⟨2,1⟩, 0⟩, ⟨⟨2,2⟩, 0⟩ . . .}. You should see how this works.
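A quick sketch of our own may help fix the contrast between zero(y) + z and zero(x + y); the function names below are ours, not the text's:

```ruby
def zero(x)
  0
end

def plus(w, z)
  w + z
end

# f(y, z) = plus(zero(y), z): zero(y) supplies the w-place of plus
def f(y, z)
  plus(zero(y), z)
end

# By contrast, here zero applies to the value of the whole sum
def zero_of_sum(x, y)
  zero(plus(x, y))
end
```

Given input ⟨2, 2⟩, f returns 0 + 2 = 2, where zero_of_sum returns 0.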

12.1.3 Recursion

For each of our examples, plus.x; y/, times.x; y/, fact.y/, and power.x; y/, the value
of the function is set for y D 0 and then for suc.y/ given its value for y. These
illustrate the method of recursion. Put generally,
RC Given some functions g.Ex/ and h.Ex; y; u/, f.Ex; y/ is defined by recursion when,
f.Ex; 0/ D g.Ex/
f.Ex; Sy/ D h.Ex; y; f.Ex; y//


This general scheme includes flexibility that is not always required. In the cases of plus, times, and power, x⃗ reduces to a simple variable x; for fact, x⃗ disappears altogether, so that the function g(x⃗) reduces to a constant. And, as we shall see, the function h(x⃗, y, u) need not depend on each of its variables x, y, and u.
However, by clever use of our initial functions, it is possible to see each of our sample functions on this pattern. Thus for plus(x, y), set gplus(x) = idnt¹₁(x) and hplus(x, y, u) = suc(idnt³₃(x, y, u)). Then,

a′. plus(x, 0) = idnt¹₁(x)
b′. plus(x, Sy) = suc(idnt³₃(x, y, plus(x, y)))

And these work as they should: idnt¹₁(x) = x, and suc(idnt³₃(x, y, plus(x, y))) is the same as suc(plus(x, y)). So we recover the conditions (a) and (b) from above.
Similarly, for times(x, y), let gtimes(x) = zero(x) and htimes(x, y, u) = plus(idnt³₃(x, y, u), x). Then,

c′. times(x, 0) = zero(x)
d′. times(x, Sy) = plus(idnt³₃(x, y, times(x, y)), x)

So times(x, 0) = 0 and times(x, Sy) = plus(times(x, y), x), and all is well. Observe that we would obtain the same result with htimes(x, y, u) = plus(u, idnt³₁(x, y, u)) or perhaps plus(idnt³₃(x, y, u), idnt³₁(x, y, u)). The role of the identity functions in these formulations is to preserve h as a function of x, y, and u, even where not each place is required (as the y-place is not required for times), and so to adhere to the official form which makes h(x, y, u) a function of variables in each place. And there are these different ways to produce a function of all the variables to achieve the desired result.
In the case of fact(y), there are no places to the x⃗ vector. So gfact is reduced to a zero-place function, that is, to a constant, and hfact to a function of y and u. (For times(x, y), x⃗ retains one place, so gtimes(x) is not reduced to a constant; rather, gtimes(x) = zero(x) remains a full-fledged function, only one which returns the same value for every value of x.) For fact(y), set gfact = suc(0) and hfact(y, u) = times(u, suc(y)).
Again, identity functions would work to preserve h as a function of y and u, even where not each place is required, in order to adhere to the official form. However, there is no requirement that the places be picked out by identity functions! In this case, each variable is used in a natural way, so identity functions are not required. It is left as an exercise to show that gfact and hfact identify the same function as constraints (e) and (f), and then to find gpower(x) and hpower(x, y, u).


The Recursion Theorem

One may wonder whether our specification of f(x⃗, y) by recursion from g(x⃗) and h(x⃗, y, u) results in a unique function. However it is possible to show that it does.

RT Suppose g(x⃗) and h(x⃗, y, u) are total functions on ℕ; then there exists a unique function f(x⃗, y) such that for any x⃗ and y ∈ ω,

a. f(x⃗, 0) = g(x⃗)
b. f(x⃗, suc(y)) = h(x⃗, y, f(x⃗, y))

We identify this function as a union of functions which may be constructed by means of g and h. The domain of a total function from rⁿ to s is always rⁿ; for a partial function, the domain of the function is that subset of rⁿ whose members are matched by the function to members of s (for background see the set theory reference p. 112). Say a (maybe partial) function s(x⃗, y) is acceptable iff,

i. If ⟨x⃗, 0⟩ ∈ dom(s), then s(x⃗, 0) = g(x⃗)
ii. If ⟨x⃗, suc(n)⟩ ∈ dom(s), then ⟨x⃗, n⟩ ∈ dom(s) and s(x⃗, suc(n)) = h(x⃗, n, s(x⃗, n))

A function with members {⟨⟨x⃗, 0⟩, g(x⃗)⟩, ⟨⟨x⃗, 1⟩, h(x⃗, 0, g(x⃗))⟩} would satisfy (i) and (ii). A function which satisfies the theorem is acceptable, though not every function which is acceptable satisfies the theorem; we show just one acceptable function satisfies the theorem. Let F be the collection of all acceptable functions, and f be ∪F. Thus ⟨⟨x⃗, n⟩, a⟩ ∈ f iff ⟨⟨x⃗, n⟩, a⟩ is a member of some acceptable s; iff s(x⃗, n) = a for some acceptable s. We sketch reasoning to show that f has the right features.

I. If ⟨⟨x⃗, n⟩, a⟩ ∈ s and ⟨⟨x⃗, n⟩, b⟩ ∈ s′, then a = b. By induction on n: Suppose ⟨⟨x⃗, 0⟩, a⟩ ∈ s and ⟨⟨x⃗, 0⟩, b⟩ ∈ s′; then by (i), a = b = g(x⃗). Assume that if ⟨⟨x⃗, k⟩, a⟩ ∈ s and ⟨⟨x⃗, k⟩, b⟩ ∈ s′ then a = b. Show that if ⟨⟨x⃗, suc(k)⟩, c⟩ ∈ s and ⟨⟨x⃗, suc(k)⟩, d⟩ ∈ s′ then c = d. So suppose ⟨⟨x⃗, suc(k)⟩, c⟩ ∈ s and ⟨⟨x⃗, suc(k)⟩, d⟩ ∈ s′. Then by (ii), c = h(x⃗, k, s(x⃗, k)) and d = h(x⃗, k, s′(x⃗, k)). But by assumption s(x⃗, k) = s′(x⃗, k); so c = d.

II. dom(f) includes every ⟨x⃗, n⟩. By induction on n: For any x⃗, {⟨⟨x⃗, 0⟩, g(x⃗)⟩} is itself an acceptable function. Assume that for any x⃗, ⟨x⃗, k⟩ ∈ dom(f). Show that for any x⃗, ⟨x⃗, suc(k)⟩ ∈ dom(f). Suppose otherwise, and consider a function s = f ∪ {⟨⟨x⃗, suc(k)⟩, h(x⃗, k, f(x⃗, k))⟩}. But we may show that s so defined is an acceptable function; and since s is acceptable, it is a subset of f; so ⟨x⃗, suc(k)⟩ ∈ dom(f). Reject the assumption.

III. Now by (I), if ⟨⟨x⃗, n⟩, a⟩ ∈ f and ⟨⟨x⃗, n⟩, b⟩ ∈ f, then a = b; so f is a function; and by (II) the domain of f includes every ⟨x⃗, n⟩; by construction it is easy to see that f is itself acceptable. From this, f satisfies the theorem. With (I), f is the unique acceptable function which satisfies the theorem; and since any function that satisfies the theorem is acceptable, the theorem is uniquely satisfied.

*We employ weak induction from the induction schemes reference p. 380. Enderton, Elements of Set Theory, and Drake and Singh, Intermediate Set Theory, include nice discussions of this result.

12.1.4 Regular Minimization

So far, the method of our examples is easily matched to the capacities of computing devices. To find the value of a recursive function, begin by finding the value for y = 0, and then calculate other values, from one stage to the next. But this is just what computing devices do well. So, for example, in the syntax of the Ruby language,² given some functions g(x) and h(x,y,u),

(B)

def recfunc(a,b)
  k = g(a)
  for y in 0..b-1
    k = h(a,y,k)
  end
  return k
end

Using g(a), this program calculates the value of k for input (a,0). And then, given current values of y and of k for input (a,y), it repeatedly uses h to calculate k for the next value of y, until it finally reaches and returns the value of k for input (a,b). Observe that the calculation of recfunc(a,b) requires exactly b iterations before it completes.
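For a concrete run of this scheme (our own illustration, not from the text), take g and h as for plus, so that recfunc computes addition:

```ruby
# gplus and hplus for addition: plus(a, 0) = a; plus(a, Sy) = suc(plus(a, y))
def g(a)
  a
end

def h(a, y, u)
  u + 1
end

def recfunc(a, b)
  k = g(a)
  for y in 0..b-1
    k = h(a, y, k)
  end
  return k
end
```

Then recfunc(2, 5) runs exactly 5 iterations, taking k from 2 through 3, 4, 5, and 6 to 7.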
But there is a different repetitive mechanism available for computing devices, where this mechanism does not begin with a fixed number of iterations. Suppose we have some function g(a,b) with values g(a,0), g(a,1), g(a,2), . . . where for each a there are at least some values of b such that g(a,b) = 0. For any value of a, suppose we want the least b such that g(a,b) = 0. Then we might reason as follows.

(C)

def minfunc(a)
  y = 0
  until g(a,y) == 0
    y = y+1
  end
  return y
end

This program begins with y = 0 and tests each value of g(a,y) until it returns a
value of 0. Once it finds this value, minfunc(a) is set equal to y. Given g(a,b),
then, minfunc(a) calculates a function which returns some value of y for any input
value a.
²Ruby is convenient insofar as it is interpreted and so easy to run, and available at no cost on multiple platforms (see http://www.ruby-lang.org/en/downloads/). We depend only on very basic features familiar from most any exposure to computing.


But, as before, we might reason similarly to specify functions so calculated. For this, recall that a function is total iff it is defined on all members of its domain. Say a function g(x⃗, y) is regular iff it is total and for all values of x⃗ there is at least one y such that g(x⃗, y) = 0. Then,

RM If g(x⃗, y) is a regular function, the function f(x⃗) = μy[g(x⃗, y) = 0], which for each x⃗ takes as its value the least y such that g(x⃗, y) = 0, is defined by regular minimization from g(x⃗, y).

For a simple example, suppose some sets of integers and g(x, y) such that g(x, y) = 0 iff y ∈ x. Then f(x) = μy[g(x, y) = 0] is the least element of x.

12.1.5 Final Definition

Finally, our sample functions are cumulative. Thus plus(x, y) depends on suc(x); times(x, y), on plus(x, y); and so forth. We are thus led to our final account.

RF A function fk is recursive iff there is a series of functions f0, f1 . . . fk such that for any i ≤ k,

(i) fi is an initial function suc(x), zero(x), or idntʲₖ(x₁ . . . xⱼ).


(c) There are a; b < i such that fi .Ex; yE; zE/ results by composition from fa .Ey/
and fb .Ex; w; zE/.
(r) There are a; b < i such that fi .Ex; y/ results by recursion from fa .Ex/ and
fb .Ex; y; u/.
(m) There is some a < i such that fi .Ex/ results by regular minimization from
fa .Ex; y/.
If there is a series of functions f0 , f1 . . . fk such that for any i  k, just (i), (c) or (r),
then (PR) fk is primitive recursive.
So any recursive function results from a series of functions each of which satisfies one of these conditions. And such a series demonstrates that its members are
recursive. For a simple example, plus is primitive recursive.

(D)

1. idnt¹₁(x)              initial function
2. idnt³₃(x, y, u)        initial function
3. suc(w)                 initial function
4. suc(idnt³₃(x, y, u))   2,3 composition
5. plus(x, y)             1,4 recursion
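The derivation can be mirrored step for step in code, a sketch of our own in the spirit of the chapter's recursive1.rb: each definition appeals only to functions from earlier in the chain.

```ruby
def idnt11(x)        # 1. initial function
  x
end

def idnt33(x, y, u)  # 2. initial function
  u
end

def suc(w)           # 3. initial function
  w + 1
end

def h(x, y, u)       # 4. suc(idnt33(x, y, u)), by 2,3 composition
  suc(idnt33(x, y, u))
end

def plus(x, y)       # 5. by 1,4 recursion from idnt11 and h
  return idnt11(x) if y == 0
  h(x, y - 1, plus(x, y - 1))
end
```

No appeal is made to Ruby's built-in + except inside suc; the chain alone yields addition.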


From this list by itself, one might reasonably wonder whether plus(x, y), so defined, is the addition function we know and love. What follows, given primitive recursive functions idnt¹₁(x) and suc(idnt³₃(x, y, u)), is that a primitive recursive function results by recursion from them. It turns out that this is the addition function. It is left as an exercise to exhibit times(x, y), fact(x), and power(x, y) as primitive recursive as well.
*E12.1. (a) Show that the proposed gfact and hfact(y, u) result in conditions (e) and (f). Then (b) produce a definition for power(x, y) by finding functions gpower(x) and hpower(x, y, u), and then show that they have the same result as conditions (g) and (h).

E12.2. Generate a sequence of functions sufficient to show that power(x, y) is primitive recursive.

E12.3. Install some convenient version of Ruby on your computing platform (see http://www.ruby-lang.org/en/downloads/). Then open recursive1.rb from the course website. Extend the sequence of functions started there to include fact(x) and power(x,y). Calculate some values of these functions and print the results, along with your program (do not worry if these latter functions run slowly for even moderate values of x and y). This assignment does not require any particular computing expertise; especially, there should be no appeal to functions except from earlier in the chain. (This exercise suggests a point, to be developed in chapter 14, that recursive functions are computable.)

12.2 Expressing Recursive Functions

Having identified the recursive functions, we turn now to the first of two powers to be
associated with theory incompleteness. In this case, it is an expressive power. Recall
that a theory is sound iff its axioms are true and its proof system is sound, so that
all the theorems of a sound theory are true. Then the first power is this: If a theory
is sound and its interpreted language expresses all the recursive functions, then it
must be negation incomplete. In this section, then, we show that LNT , on its standard
interpretation, expresses the recursive functions.

12.2.1 Definition and Basic Results

For a language L and interpretation I, suppose that for each m ∈ U there is some unique variable-free term m̄ such that, in the sense of definition AI, I[m̄] = m, so that for any variable assignment d, I_d[m̄] = m. The simplest way for this to happen is if there is exactly one constant assigned to each member of the universe. But the standard interpretation for number theory N also has the special feature that a variable-free term is assigned to each member of U. On this interpretation, ∅, S∅ . . . are terms for each object. In this case, then, for any n, we simply take as n̄ the term S . . . S∅ with n repetitions of the successor operator. So 0̄ abbreviates the term ∅, 1̄ the term S∅, etc.
Given this, we shall say that a formula R(x) expresses a relation R(x) on interpretation I just in case if m ∈ R then I[R(m̄)] = T and if m ∉ R then I[¬R(m̄)] = T. So the formula is true when the individual is a member of the relation and false when it is not. To express a relation on an interpretation, a formula must say which individuals fall under the relation. Expressing a relation is closely related to translation. A formula R(x) expresses a relation R(x) when every sentence R(m̄) is a good translation of the sentence m ∈ R (compare chapter 5). So there is a single intended interpretation I, and a corresponding class of good translations, when R(x) expresses R(x) on the interpretation I. Thus, generalizing,
EXr For any language L, interpretation I, and objects m₁ . . . mₙ ∈ U, relation R(x₁ . . . xₙ) is expressed by formula R(x₁ . . . xₙ) iff,

(i) If ⟨m₁ . . . mₙ⟩ ∈ R then I[R(m̄₁ . . . m̄ₙ)] = T
(ii) If ⟨m₁ . . . mₙ⟩ ∉ R then I[¬R(m̄₁ . . . m̄ₙ)] = T
Similarly, a one-place function f(x) has members of the sort ⟨x, v⟩ and so is really a kind of two-place relation. Thus to express a function f(x), we require a formula F(x, v) where if ⟨m, a⟩ ∈ f, then I[F(m̄, ā)] = T. It would be natural to go on to require that if ⟨m, a⟩ ∉ f then I[¬F(m̄, ā)] = T. However this is not necessary once we build in another feature of functions: that they have a unique output for each input value. Thus we shall require,

EXf For any language L, interpretation I, and objects m₁ . . . mₙ, a ∈ U, function f(x₁ . . . xₙ) is expressed by formula F(x₁ . . . xₙ, v) iff,

if ⟨⟨m₁ . . . mₙ⟩, a⟩ ∈ f then
(i) I[F(m̄₁ . . . m̄ₙ, ā)] = T
(ii) I[∀z(F(m̄₁ . . . m̄ₙ, z) → z = ā)] = T


From (i), F is true for ā; from (ii), any z for which it is true is identical to ā.
Let us illustrate these definitions with some first applications. First, on any interpretation with the required variable-free terms, the formula x = y expresses the equality relation eq(x, y). For if ⟨m, n⟩ ∈ eq then I[m̄] = I[n̄], so that I[m̄ = n̄] = T; and if ⟨m, n⟩ ∉ eq then I[m̄] ≠ I[n̄], so that I[¬(m̄ = n̄)] = T. This works because I[=] just is the equality relation eq. Similarly, on the standard interpretation N for number theory, suc(x) is expressed by Sx = v, plus(x, y) by x + y = v, and times(x, y) by x × y = v. Taking just the addition case, suppose ⟨⟨m, n⟩, a⟩ ∈ plus; then N[m̄ + n̄ = ā] = T. And because addition is a function, N[∀z((m̄ + n̄ = z) → z = ā)] = T. Again, this works because N[+] just is the plus function. And similarly in the other cases. Put more generally,
T12.1. For an interpretation with the required variable-free terms assigned to members of the universe: (a) If R is a relation symbol and R is a relation, and I[R] = R(x₁ . . . xₙ), then R(x₁ . . . xₙ) is expressed by Rx₁ . . . xₙ. And (b) if h is a function symbol and h is a function and I[h] = h(x₁ . . . xₙ), then h(x₁ . . . xₙ) is expressed by hx₁ . . . xₙ = v.

It is possible to argue semantically for these claims. However, as for translation, we take the project of demonstrating expression to be one of providing or supplying relevant formulas. So the theorem is immediate.
Also, as we have suggested, (i) and (ii) of condition EXf taken together are sufficient to generate a condition like EXr(ii).

T12.2. Suppose function f(x₁ . . . xₙ) is expressed by formula F(x₁ . . . xₙ, y); then if ⟨⟨m₁ . . . mₙ⟩, a⟩ ∉ f, I[¬F(m̄₁ . . . m̄ₙ, ā)] = T.

For simplicity, consider just a one-place function f(x). Suppose f(x) is expressed by F(x, y) and ⟨m, a⟩ ∉ f. Then since f is a function, there is some b such that ⟨m, b⟩ ∈ f for a ≠ b, and so ⟨a, b⟩ ∉ eq. Suppose I[¬F(m̄, ā)] ≠ T; then by TI, for some d, I_d[¬F(m̄, ā)] ≠ S; let h be a particular assignment of this sort; so I_h[¬F(m̄, ā)] ≠ S; so by SF(¬), I_h[F(m̄, ā)] = S. But since ⟨m, b⟩ ∈ f, by EXf(ii), I[∀z(F(m̄, z) → z = b̄)] = T; so by TI, for any d, I_d[∀z(F(m̄, z) → z = b̄)] = S; so I_h[∀z(F(m̄, z) → z = b̄)] = S; so by SF(∀), I_h(z|a)[F(m̄, z) → z = b̄] = S; so since I_h[ā] = a, by T10.2, I_h[F(m̄, ā) → ā = b̄] = S; so by SF(→), I_h[F(m̄, ā)] ≠ S or I_h[ā = b̄] = S; so I_h[ā = b̄] = S; but I_h[ā] = a and I_h[b̄] = b; so by SF(r), ⟨a, b⟩ ∈ I[=]; so ⟨a, b⟩ ∈ eq. This is impossible; reject the assumption: If f(x) is expressed by F(x, y) and ⟨m, a⟩ ∉ f, then I[¬F(m̄, ā)] = T.


So if both ⟨m, a⟩ ∉ f and I[¬F(m̄, ā)] ≠ T, then with condition EXf(i) we end up with an assignment where both I_h[F(m̄, ā)] = S and I_h[F(m̄, b̄)] = S. But this violates the uniqueness constraint EXf(ii). So if ⟨m, a⟩ ∉ f then I[¬F(m̄, ā)] = T. So this gives us the same kind of constraint for functions as for relations.
E12.4. Provide semantic arguments to prove both parts of T12.1. So, for the first part, assume that I[R] = R(x₁ . . . xₙ). Then show (i) if ⟨m₁ . . . mₙ⟩ ∈ R then I[R(m̄₁ . . . m̄ₙ)] = T; and (ii) if ⟨m₁ . . . mₙ⟩ ∉ R then I[¬R(m̄₁ . . . m̄ₙ)] = T. And similarly for the second part based on EXf, where you may treat ⟨⟨m₁ . . . mₙ⟩, a⟩ as the same object as ⟨m₁ . . . mₙ, a⟩.

12.2.2 Core Result

So far, on interpretation N, we have been able to express the relation eq and the functions suc, plus, and times. But our aim is to show that, on the standard interpretation N of LNT, every recursive function f(x⃗) is expressed by some formula F(x⃗, v).
But it is not obvious that this can be done. At least some functions must remain inexpressible in any language that has a countable vocabulary, and so in LNT. We shall see a concrete example later in the chapter. For now, consider a straightforward diagonal argument. By reasoning as from T10.7 (p. 469), there is an enumeration of all the formulas in a countable language. Isolate just the formulas P₀, P₁, P₂ . . . that express functions of one variable, and consider the functions f0(x), f1(x), f2(x) . . . so expressed. These are all the expressible functions of one variable. Consider a grid with the functions listed down the left-hand column, and their values for each integer from left to right.
           0       1       2     . . .

 f0(x)    f0(0)   f0(1)   f0(2)
 f1(x)    f1(0)   f1(1)   f1(2)
 f2(x)    f2(0)   f2(1)   f2(2)
   ⋮
Moving along the diagonal, consider a function fd(x) such that for any n, fd(n) = fn(n) + 1. So fd(x) is {⟨0, f0(0)+1⟩, ⟨1, f1(1)+1⟩, ⟨2, f2(2)+1⟩, . . .}. So for any integer n, this function finds the value of fn along the diagonal, and adds one. But fd(x) cannot be any of the expressible functions. It differs from f0(x) insofar as fd(0) ≠ f0(0); it differs from f1(x) insofar as fd(1) ≠ f1(1); and so forth. So fd(x) is an inexpressible function. Though it has a unique output for every input value, there is no finite formula sufficient to express it.
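For a toy version of the diagonal construction (entirely our own illustration), suppose just three one-place functions were listed; the diagonal function differs from each at its diagonal point:

```ruby
# a toy "enumeration" of one-place functions
fs = [
  ->(x) { x },       # f0
  ->(x) { x * x },   # f1
  ->(x) { x + 3 }    # f2
]

# the diagonal function: fd(n) = fn(n) + 1
fd = ->(n) { fs[n].call(n) + 1 }

# fd differs from f0 at 0, from f1 at 1, and from f2 at 2,
# so it appears nowhere in the list
```

With a genuinely complete enumeration of the expressible functions, the same comparison along the diagonal shows fd is not among them.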


We have already seen that plus(x, y) and times(x, y) are expressible in LNT. But there is no obvious mechanism in LNT to express, say, fact(x). Given that not all functions are expressible, it is a significant matter, then, to see that all the recursive functions are expressible with interpretation N in LNT. Our main argument shall be an induction on the sequence of recursive functions. For one key case, we defer discussion to the next section.

T12.3. On the standard interpretation N of LNT, each recursive function f(x⃗) is expressed by some formula F(x⃗, v).
For any recursive function fa there is a sequence of functions f0, f1 . . . fa such that each member is an initial function or arises from previous members by composition, recursion, or regular minimization. By induction on functions in this sequence.
Basis: f0 is an initial function suc(x), zero(x), or idntʲₖ(x₁ . . . xⱼ).

(s) f0 is suc(x). Then by T12.1, f0 is expressed by F(x, v) =def Sx = v.
(z) f0 is zero(x). Then f0 is expressed by F(x, v) =def x = x ∧ v = ∅. Suppose ⟨m, a⟩ ∈ zero. Then since a is zero, N[m̄ = m̄ ∧ ā = ∅] = T. And any z that is zero is equal to a, so that N[∀z((m̄ = m̄ ∧ z = ∅) → z = ā)] = T.
(i) f0 is idntʲₖ(x₁ . . . xⱼ). Then f0 is expressed by F(x₁ . . . xⱼ, v) =def (x₁ = x₁ ∧ . . . ∧ xⱼ = xⱼ) ∧ xₖ = v.³ Suppose ⟨⟨m₁ . . . mⱼ⟩, a⟩ ∈ idntʲₖ. Then since a = mₖ, N[(m̄₁ = m̄₁ ∧ . . . ∧ m̄ⱼ = m̄ⱼ) ∧ m̄ₖ = ā] = T. And any z = mₖ is equal to a, so that N[∀z(((m̄₁ = m̄₁ ∧ . . . ∧ m̄ⱼ = m̄ⱼ) ∧ m̄ₖ = z) → z = ā)] = T.
Assp: For any i, 0 ≤ i < k, fi(x⃗) is expressed by some F(x⃗, v).
Show: fk(x⃗) is expressed by some F(x⃗, v).

fk is either an initial function or arises from previous members by composition, recursion, or regular minimization. If it is an initial function, then reason as in the basis. So suppose fk arises from previous members.
(c) fk(x⃗, y⃗, z⃗) arises by composition from g(y⃗) and h(x⃗, w, z⃗). By assumption g(y⃗) is expressed by some G(y⃗, w) and h(x⃗, w, z⃗) by H(x⃗, w, z⃗, v); then their composition f(x⃗, y⃗, z⃗) is expressed by F(x⃗, y⃗, z⃗, v) =def ∃w(G(y⃗, w) ∧ H(x⃗, w, z⃗, v)). For simplicity, consider a case where x⃗ and z⃗ drop out and y⃗ is a single variable y; so F(y, v) =def ∃w(G(y, w) ∧ H(w, v)). Suppose ⟨m, a⟩ ∈ fk; then by composition there is some b such that ⟨m, b⟩ ∈ g and ⟨b, a⟩ ∈ h. Because G and H express g and h, N[G(m̄, b̄)] = T and N[H(b̄, ā)] = T; so N[G(m̄, b̄) ∧ H(b̄, ā)] = T, and N[∃w(G(m̄, w) ∧ H(w, ā))] = T. Further, by expression, N[∀z(G(m̄, z) → z = b̄)] = T and N[∀z(H(b̄, z) → z = ā)] = T; so that for a given m there is just one w = b, and so one z = a, to satisfy the expression, and N[∀z(∃w(G(m̄, w) ∧ H(w, z)) → z = ā)] = T.

³Perhaps it will have occurred to the reader that idnt³₂(x, y, z), say, is expressed by x = x ∧ z = z ∧ y = v as well as by (x = x ∧ y = y ∧ z = z) ∧ y = v, where the first is relatively efficient insofar as it saves a conjunct. But we are after a different efficiency of notation and demonstration, where the formulation above serves our purposes nicely.
(r) fk(x⃗, y) arises by recursion from g(x⃗) and h(x⃗, y, u). By assumption g(x⃗) is expressed by some G(x⃗, v) and h(x⃗, y, u) is expressed by some H(x⃗, y, u, v). And fk(x⃗, y) is therefore expressed by means of Gödel's β-function, as discussed in the next section.
(m) fk(x⃗) arises by regular minimization from g(x⃗, y). By assumption, g(x⃗, y) is expressed by some G(x⃗, y, z). Then fk(x⃗) is expressed by F(x⃗, v) =def G(x⃗, v, ∅) ∧ (∀y < v)¬G(x⃗, y, ∅). Suppose x⃗ reduces to a single variable and ⟨m, a⟩ ∈ f; then ⟨⟨m, a⟩, 0⟩ ∈ g and for any n < a, ⟨⟨m, n⟩, 0⟩ ∉ g. So because G expresses g, N[G(m̄, ā, ∅) ∧ (∀y < ā)¬G(m̄, y, ∅)] = T. And the result is unique: for any k < a, N[G(m̄, k̄, ∅)] ≠ T; so the conjunction N[F(m̄, k̄)] ≠ T. And for k > a, the other clause, N[(∀y < k̄)¬G(m̄, y, ∅)], fails in the case when y = a; so the conjunction F(m̄, z) is satisfied only in the case when z is ā, and N[∀z((G(m̄, z, ∅) ∧ (∀y < z)¬G(m̄, y, ∅)) → z = ā)] = T.
Indct: Any recursive f(x⃗) is expressed by some F(x⃗, v).
We can fill out reasoning for the conclusions that the proposed formulas express the functions. However, the general idea should be clear. There might be formulas other than F(x⃗, v) to express a recursive f(x⃗): for example, if F(x⃗, v) expresses f(x⃗), then so does F(x⃗, v) ∧ A for any logical truth A. We shall see an important alternative formula in what follows. Let us say that F(x⃗, v) so described is the original formula by which f(x⃗) is expressed. It remains to fill out the case for the recursion clause. This is the task of the next section.
*E12.5. By the method of our core induction, write down formulas to express the following recursive functions.

a. suc(zero(x))
b. idnt³₂(x, suc(zero(x)), z)

Hint: As setup for the compositions, give each function a different output variable, where the output to one is the input to the next.
*E12.6. Fill out semantic reasoning to demonstrate that the proposed (original) formulas satisfy the conditions for expression for the (z), (i), (c), and (m) clauses of T12.3. So, for example, for (c) you will apply semantic definitions to show that N[∃w(G(m̄, w) ∧ H(w, ā))] = T and that N[∀z(∃w(G(m̄, w) ∧ H(w, z)) → z = ā)] = T. Rather than go to the unabbreviated form for the bounded quantifier in case (m), it will be fine to anticipate T12.6 and apply the (obvious) semantic clause directly.
E12.7. Say a function is μ-recursive just in case it satisfies the conditions for the recursive functions but without the regularity requirement for minimization. So all the recursive functions are μ-recursive, but some μ-recursive functions are not recursive. Where every recursive function f(x⃗) is total in the sense that it returns a value for every x⃗ (recall the set theory reference on p. 112), some μ-recursive functions are partial insofar as there may be values of x⃗ for which they return no value (as occurs when minimization is applied to a g(x⃗, y) that never evaluates to zero). Our argument for T12.2 simply assumes that functions are recursive and so total. In the context of partial functions, EXf would need to be augmented with the requirement that if ⟨⟨m1 ... mn⟩, a⟩ ∉ f, then I[¬F(m1 ... mn, a)] = T as a third condition. Extend the argument for T12.3 to show that on the standard interpretation N of LNT, on the extended account of expression, each μ-recursive function f(x⃗) is expressed by some formula F(x⃗, v).
12.2.3 The β-Function
Suppose a recursive function f(m, n) = a. Then for the given value of m, there is a sequence k0, k1 ... kn with kn = a, such that k0 takes some initial value, and each of the other members is specially related to the one before. Thus, in the simple case of plus(m, n) where m = 2, k0 = 2, and each ki is the successor of the one before. So, corresponding to 2 + 5 = 7 is the sequence,

    2    3    4    5    6    7
whose first member is set by gplus(2), where subsequent members result from the one before by plus(2, Sy) = hplus(2, y, plus(2, y)), and whose last member is 7. And, generalizing, we shall be in a position to express recursive functions if we can express the existence of sequences of integers so defined. We shall be able to say f(m, n) = a if we can say there is a sequence whose first member is g(m), with members related one to another by f(m, Sy) = h(m, y, f(m, y)), whose nth member is a. This is a mouthful. And LNT is not obviously equipped to do it. In particular, LNT has straightforward mechanisms for asserting the existence of integers; but on its face, it is not clear how to assert the existence of the arbitrary sequences which result from the recursion clause.
But Gödel shows a way out. We have already seen an instance of the general strategy we shall require in our discussion of Gödel numbering from chapter 10 (p. 469). In that case, we took a sequence of integers (keyed to vocabulary), g0, g1 ... gn, and collected them into a single Gödel number G = 2^g0 × 3^g1 × ... × πn^gn, where 2, 3 ... πn are the first n primes. By the fundamental theorem of arithmetic, any number has a unique prime factorization, so the original sequence is recovered from G by factoring to find the power of 2, the power of 3, and so forth. So the single integer G represents the original sequence. And LNT has no problem expressing the existence of a single integer! Unfortunately, however, this particular way out is unavailable to us insofar as it involves exponentiation, and the resources of LNT so far include only S, + and ×.⁴
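The coding scheme just described is easy to check numerically. Here is a minimal Ruby sketch (the helper names `primes`, `encode` and `decode` are ours, not the text's): a sequence goes in as exponents on successive primes, and comes back out by factoring.

```ruby
# First n primes by trial division (plenty for short sequences).
def primes(n)
  ps = []
  c = 2
  while ps.length < n
    ps << c if ps.none? { |p| c % p == 0 }
    c += 1
  end
  ps
end

# Code g0, g1 ... gn as G = 2**g0 * 3**g1 * ... * pn**gn.
def encode(seq)
  primes(seq.length).zip(seq).reduce(1) { |g, (p, e)| g * p**e }
end

# Recover the sequence by finding the power of 2, the power of 3, and so forth.
def decode(g, length)
  primes(length).map do |p|
    e = 0
    while g % p == 0
      g /= p
      e += 1
    end
    e
  end
end

puts encode([2, 0, 5])                    # 2**2 * 3**0 * 5**5
puts decode(encode([2, 0, 5]), 3).inspect
```

Because prime factorization is unique, `decode(encode(seq), seq.length)` always returns the original sequence.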
All the same, within the resources of LNT, by the Chinese remainder theorem (whose history reaches to ancient China), there must be pairs of integers sufficient to represent any sequence. Consider the remainder function rm(x, y), which returns the remainder after x is divided by y. The remainder of x divided by y equals z just in case z < y and for some w, x = (y × w) + z. Then let,

    β(p, q, i) =def rm(p, q × S(i) + 1)

So for some fixed values of p and q the function yields different remainders for different values of i. By the Chinese remainder theorem, for any sequence k0, k1 ... kn there are some p and q such that for i ≤ n, β(p, q, i) = ki. So p and q together code the sequence, and the β-function returns member ki as a function of p, q and i. Intuitively, when we divide p by q × S(i) + 1, for i ≤ n, the result is a series of remainders. The theorem tells us that any series k0, k1 ... kn may be so represented (see the beta function reference).
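In numerical terms the β-function is a one-liner. A minimal Ruby sketch (method names ours): rm is the remainder function, and β(p, q, i) is just rm applied to p and the modulus q × S(i) + 1.

```ruby
# rm(x, y): the remainder after x is divided by y.
def rm(x, y)
  x % y
end

# beta(p, q, i) =def rm(p, q * S(i) + 1), with S(i) = i + 1.
def beta(p, q, i)
  rm(p, q * (i + 1) + 1)
end

# For fixed p and q, different i give different remainders:
puts (0..2).map { |i| beta(59, 2, i) }.inspect
```

With p = 59 and q = 2 the moduli are 3, 5, 7, so the three values of i pick out the remainders of 59 on those moduli.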
⁴ Some treatments begin with a language including exponentiation precisely in order to smooth the exposition at this stage. But our results are all the more interesting insofar as even the relatively weak LNT retains powers sufficient for the fatal flaw.
Arithmetic for the Beta Function
Say rm(c, d) is the remainder of c/d. For a sequence d0, d1 ... dn, let |D| be the product d0 × d1 × ... × dn. We say d0, d1 ... dn are relatively prime if no two members have a common factor other than 1. Then,

I. For any relatively prime sequence d0, d1 ... dn, the sequences of remainders rm(c, d0), rm(c, d1) ... rm(c, dn) as c runs from 0 to |D| − 1 are all different from each other.

Suppose otherwise. Then there are c1 and c2, 0 ≤ c1 < c2 < |D|, such that rm(c1, d0), rm(c1, d1) ... rm(c1, dn) is the same as rm(c2, d0), rm(c2, d1) ... rm(c2, dn). So for each di, rm(c1, di) = rm(c2, di); say c1 = a·di + r and c2 = b·di + r; then since the remainders are equal, c2 − c1 = b·di − a·di; so each di divides c2 − c1 evenly. So each di collects a distinct set of prime factors of c2 − c1; and since c2 − c1 is divided by any product of its primes, c2 − c1 is divided by |D|. So |D| ≤ c2 − c1. But 0 ≤ c1 < c2 < |D|, so c2 − c1 < |D|. Reject the assumption: the sequences of remainders as c runs from 0 to |D| − 1 are distinct.
II. The sequences of remainders rm(c, d0), rm(c, d1) ... rm(c, dn) as c runs from 0 to |D| − 1 are all the possible sequences of remainders.

There are di possible remainders a number might have when divided by di (0, 1, ... di − 1). But if rm(c, d0) takes d0 possible values, rm(c, d1) may take its d1 values for each value of rm(c, d0); and so forth. So there are |D| possible sequences of remainders. But as c runs from 0 to |D| − 1, by (I), there are |D| different sequences. So these are all the possible sequences.
III. Let s be the maximum of n, k0, k1 ... kn. Then for 0 ≤ i ≤ n, the numbers di = s!(i + 1) + 1 are each greater than any kj and are relatively prime.

Since s is the maximum of n, k0, k1 ... kn, the first is obvious. To see that the di are relatively prime, suppose otherwise. Then for some j, k, 1 ≤ j < k ≤ n + 1, s!j + 1 and s!k + 1 have a common prime factor p. But any number up to s leaves remainder 1 when dividing s!j + 1; so p > s. And since p divides s!j + 1 and s!k + 1, it divides their difference, s!(k − j); but if p divides s!, then it does not evenly divide s!j + 1; so p does not divide s!; so p divides k − j. But 1 ≤ j < k ≤ n + 1; so k − j ≤ n; so p ≤ n; so p ≤ s. Reject the assumption: the di are relatively prime.

IV. For any k0, k1 ... kn, we can find a pair of numbers p, q such that for i ≤ n, β(p, q, i) = ki.

With s as above, set q = s!, and let β(p, q, i) = rm(p, q(i + 1) + 1). By (III), for 0 ≤ i ≤ n the numbers qi = q(i + 1) + 1 are relatively prime. So by (II), all the possible sequences of remainders appear as p ranges from 0 to |D| − 1. And since by (III) each of the qi is greater than any ki, the sequence k0, k1 ... kn is among the possible sequences of remainders. So there is some p such that the ki are rm(p, q(i + 1) + 1).
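Fact (III) is easy to spot-check. A Ruby sketch (helper name ours), assuming the moduli have the form s!(i + 1) + 1 used in (IV):

```ruby
# The moduli d_i = s!(i + 1) + 1 for i = 0..n.
def moduli(s, n)
  fact = (1..s).reduce(1, :*)
  (0..n).map { |i| fact * (i + 1) + 1 }
end

ds = moduli(5, 3)
puts ds.inspect
# Every pair should have greatest common divisor 1:
puts ds.combination(2).all? { |a, b| a.gcd(b) == 1 }
```

Checking pairwise gcds this way confirms relative primality for any small s and n one cares to try.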
Here is a simple example. Suppose k0, k1 and k2 are 5, 2, 3. So the last subscript in the series is n = 2. Set s = max(n, 5, 2, 3) = 5; and set q = s! = 120. So β(p, q, i) = rm(p, 120 × S(i) + 1). So as i increases, we are looking at,

    rm(p, 121)     rm(p, 241)     rm(p, 361)

But 121, 241 and 361 so constructed must have no common factor other than 1; the remainder theorem therefore tells us that as p varies between 0 and 121 × 241 × 361 − 1 = 10527120 the remainders take on every possible sequence of remainder values. But the remainders will be values up to 120, 240 and 360, which is to say, q = s! is large enough that our simple sequence must therefore appear among the sequences of remainders. In this case, p = 261728 gives rm(p, 121) = 5, rm(p, 241) = 2 and rm(p, 361) = 3. There may be easier ways to generate this sequence. But there is no shortage of integers (!) so there are no worries about using large ones, and by this method Gödel gives a perfectly general way to represent the arbitrary sequence.
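A brute-force search makes the example concrete. This Ruby sketch (method name ours) scans p upward until the remainders modulo 121, 241 and 361 hit the target sequence:

```ruby
# Least p whose remainders on the given moduli match the target sequence.
def find_p(targets, moduli)
  p = 0
  p += 1 until moduli.zip(targets).all? { |m, k| p % m == k }
  p
end

p_val = find_p([5, 2, 3], [121, 241, 361])
puts "p = #{p_val}"
puts [121, 241, 361].map { |m| p_val % m }.inspect
```

Since the moduli are relatively prime, the Chinese remainder theorem guarantees the search stops before 121 × 241 × 361.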
And we can express the β-function with the resources of LNT. Thus, for β(p, q, i),

    B(p, q, i, v) =def (∃w ≤ p)(p = (S(q × Si) × w) + v) ∧ v < S(q × Si)

So v is the remainder after p is divided by S(q × Si). And for appropriate choice of p and q, the variable v takes on the values k0 through kn as i runs through the values ∅ to n.
Now return to our claim that when a recursive function f(m, n) = a there is a sequence k0, k1 ... kn with kn = a such that k0 takes some initial value, and each of the other members is related to the one before according to some other recursive function. More officially, a function f(x⃗, y) = z just in case there is a sequence k0, k1 ... ky with,

(i) k0 = g(x⃗)
(ii) if i < y, then kSi = h(x⃗, i, ki)
(iii) ky = z

Put in terms of the β-function, this requires: f(x⃗, y) = z just in case there are some p, q such that,

(i) β(p, q, 0) = g(x⃗)
(ii) if i < y, then β(p, q, Si) = h(x⃗, i, β(p, q, i))
(iii) β(p, q, y) = z

By assumption, g(x⃗) is expressed by some G(x⃗, v) and h(x⃗, y, u) by some H(x⃗, y, u, v). So we can express the combination of these conditions as follows. f(x⃗, y) is expressed by F(x⃗, y, z) =def

    ∃p∃q{∃v(B(p, q, ∅, v) ∧ G(x⃗, v)) ∧
         (∀i < y)∃u∃v(B(p, q, i, u) ∧ B(p, q, Si, v) ∧ H(x⃗, i, u, v)) ∧
         B(p, q, y, z)}
In the case of factorial, we have G(v) =def (v = S∅) and H(y, u, v) =def (v = Sy × u). So the factorial function is expressed by F(y, z) =def

    ∃p∃q{∃v(B(p, q, ∅, v) ∧ v = S∅) ∧
         (∀i < y)∃u∃v(B(p, q, i, u) ∧ B(p, q, Si, v) ∧ v = Si × u) ∧
         B(p, q, y, z)}
This expression is long, particularly if expanded to unabbreviate the β-function, but it is just right. If ⟨n, a⟩ ∈ fac, then N[F(n, a)] = T, and the expression satisfies uniqueness as well. And similarly in the general case. So with LNT we satisfy the recursive clause for T12.3. So its demonstration is complete, and LNT has the resources to express any recursive function.
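The k-sequence behind this formula can be generated directly: with g = S∅ and h(y, u) = Sy × u, each member comes from the one before by multiplication. A Ruby sketch (method name ours):

```ruby
# k_0 = g = 1; k_{Si} = h(i, k_i) = (i + 1) * k_i.  Then k_n = fac(n).
def k_sequence(n)
  ks = [1]
  (0...n).each { |i| ks << (i + 1) * ks[i] }
  ks
end

puts k_sequence(4).inspect
```

The β-function's job in the formula above is just to assert that some such sequence exists, coded by a pair p, q.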
E12.8. Suppose k0, k1, k2 and k3 are 3, 4, 0, 2. By the method of the text, find values of p and q so that β(p, q, i) = ki. Use your values of p and q to calculate β(p, q, 0), β(p, q, 1), β(p, q, 2) and β(p, q, 3). You will need some programmable device to search for the value of p. In Ruby, a routine along the following lines, with numerical values substituted for a, b, c and d, should suffice.
# Substitute the numerical values of the four moduli q(i + 1) + 1
# for a, b, c and d before running.
def search(a, b, c, d)
  y = 0
  until y % a == 3 and y % b == 4 and y % c == 0 and y % d == 2
    y = y + 1
  end
  y
end
puts "p = #{search(a, b, c, d)}"

(In Ruby x % y returns the remainder of x divided by y.) Be prepared for it


to take a while!
E12.9. Produce a formula to show that LNT expresses the plus function by the initial
functions with the beta function. You need not reduce the beta form to its
primitive expression!
E12.10. Say a function fk is simple iff there is a series of functions f0, f1 ... fk such that for any i ≤ k,

(b) fi is plus(x, y)
(r) There are a, b < i such that fi(x⃗, y⃗) is plus(fa(x⃗), fb(y⃗))

Show that on the standard interpretation N of LNT each simple f(x⃗) is expressed by some formula F(x⃗, v). Except for appeal to T10.2 as appropriate, you should not depend on special theorems from the text, but show your result directly from basic definitions.

12.3

Capturing Recursive Functions

The second of the powers to be associated with theory incompleteness has to do with the theory's proof system. If a theory is consistent and captures recursive functions, then it is negation incomplete. In this section, we show that Q, and so any theory that includes Q, captures the recursive functions.

12.3.1

Definition and Basic Results

Where expression requires that if objects stand in a given relation, then a corresponding formula be true, capture requires that when objects stand in a relation, a corresponding formula be provable in the theory.

CP For any language L, interpretation I, objects m1 ... mn, a ∈ U and theory T,

(r) Relation R(x1 ... xn) is captured by formula R(x1 ... xn) in T just in case,
    (i) If ⟨m1 ... mn⟩ ∈ R then T ⊢ R(m1 ... mn)
    (ii) If ⟨m1 ... mn⟩ ∉ R then T ⊢ ¬R(m1 ... mn)

(f) Function f(x1 ... xn) is captured by formula F(x1 ... xn, y) in T just in case, if ⟨⟨m1 ... mn⟩, a⟩ ∈ f then,
    (i) T ⊢ F(m1 ... mn, a)
    (ii) T ⊢ ∀z(F(m1 ... mn, z) → z = a)
As a first result, and to see how these definitions work, it is easy to see that in a theory at least as strong as Q, conditions (f.i) and (f.ii) combine to yield a result like (r.ii).

T12.4. If T includes Q and function f(x1 ... xn) is captured by formula F(x1 ... xn, y) so that conditions (f.i) and (f.ii) hold, then if ⟨⟨m1 ... mn⟩, a⟩ ∉ f then T ⊢ ¬F(m1 ... mn, a).

Suppose f(x1 ... xn) is captured by F(x1 ... xn, y) and ⟨⟨m1 ... mn⟩, a⟩ ∉ f. Then, since f is a function, there is some b ≠ a such that ⟨⟨m1 ... mn⟩, b⟩ ∈ f; so by (f.i), T ⊢ F(m1 ... mn, b); and instantiating (f.ii) to a, T ⊢ F(m1 ... mn, a) → a = b. But since a ≠ b, and T includes Q, by T8.14, T ⊢ a ≠ b; so by MT, T ⊢ ¬F(m1 ... mn, a).
Our aim is to show that recursive functions are captured in Q. In chapter 8, we showed that Q correctly decides atomic formulas of LNT. As a preliminary to showing that Q captures the recursive functions, in this section we extend that result to show that Q correctly decides a broadened range of formulas.

To understand the result to which we build in this section, we need to identify some important subclasses of formulas in LNT: the Δ0, Σ1 and Π1 formulas.
Δ0 (b) If P is of the form s = t, s < t or s ≤ t for terms s and t, then P is a Δ0 formula.

   (s) If P and Q are Δ0 formulas, then so are ¬P and (P → Q).

   (q) If P is a Δ0 formula, then so are (∀x ≤ t)P, (∀x < t)P, (∃x ≤ t)P and (∃x < t)P, where x does not appear in t.

   (c) Nothing else is a Δ0 formula.

Σ1 A formula is strictly Σ1 iff it is of the form ∃x1∃x2 ... ∃xnP for Δ0 P. A formula is Σ1 iff it is logically equivalent to a strictly Σ1 formula.

Π1 A formula is strictly Π1 iff it is of the form ∀x1∀x2 ... ∀xnP for Δ0 P. A formula is Π1 iff it is logically equivalent to a strictly Π1 formula.
Given the soundness and adequacy of our derivation systems, we may understand equivalence in either the semantic or syntactical sense, so that P and Q are equivalent just in case ⊨ P ↔ Q or ⊢ P ↔ Q. A Δ0 formula is (trivially) both Σ1 and Π1 insofar as it is preceded by a block of zero unbounded quantifiers. We allow the usual abbreviations, and so ∧, ∨ and ↔. So, for example, n ≠ ∅ ∧ (∃v ≤ n)(SS∅ × v = n) is Δ0 by a tree that works like ones we have seen many times before.
    n = ∅              SS∅ × v = n                   By Δ0(b)

    n ≠ ∅              (∃v ≤ n)(SS∅ × v = n)         By Δ0(s), (q)

         n ≠ ∅ ∧ (∃v ≤ n)(SS∅ × v = n)              By Δ0(s)
It turns out that this formula is true just in case n is an even number other than zero. For a Δ0 formula, all is as usual, except quantifiers are bounded. Its existential quantification,

(E) ∃n(n ≠ ∅ ∧ (∃v ≤ n)(SS∅ × v = n))

is strictly Σ1, for it consists of an (in this case single) unbounded existential quantifier followed by a Δ0 formula. This sentence asserts the existence of an even number other than zero. Observe that,
(F) k = k ∧ ∃n(n ≠ ∅ ∧ (∃v ≤ n)(SS∅ × v = n))

is not strictly Σ1. For it does not have the existential quantifier attached as main operator to a Δ0 formula. However, by standard quantifier placement rules, the unbounded existential quantifier can be pulled to the main operator position to form an equivalent strictly Σ1 sentence. Because (F) is equivalent to a sentence that is strictly Σ1, it too is Σ1. Finally, by reasoning as for QN in ND, observe that the negation of a Σ1 formula is not Σ1; rather it is Π1. And the negation of a Π1 formula is Σ1.
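Since every quantifier in a Δ0 sentence is bounded, its truth value can be settled by finite search. A Ruby sketch of the example above (method name ours), checking n ≠ ∅ ∧ (∃v ≤ n)(SS∅ × v = n):

```ruby
# True just in case n is nonzero and some v <= n has 2 * v == n,
# i.e., just in case n is an even number other than zero.
def delta0_even?(n)
  n != 0 && (0..n).any? { |v| 2 * v == n }
end

puts (0..6).select { |n| delta0_even?(n) }.inspect
```

The bounded existential quantifier becomes a finite scan from 0 to n; nothing unbounded remains to search.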
We shall show that Q correctly decides Δ0 sentences: if P is Δ0 and N[P] = T then Q ⊢ND P, and if N[P] ≠ T then Q ⊢ND ¬P. Further, Q proves true Σ1 sentences: if P is Σ1 and N[P] = T, then Q ⊢ND P. Observe that where P is Σ1, if N[P] ≠ T, then N[¬P] = T where ¬P is not Σ1 at all. So, though we show Q correctly decides Δ0 sentences and proves true Σ1 sentences, we will not have shown that Q proves ¬P when N[P] ≠ T, and so not have shown that Q decides all Σ1 sentences.
We begin with some preliminary theorems to set up the main result. These are not
hard, but need to be wrapped up before we can attack our intended result. First some
semantic theorems that work like derived clauses to SF for inequalities and bounded
quantifiers. We could not obtain these in chapter 7 because they rely on theorems
from chapter 8 (and since they are not inductions, they did not belong in chapter 8).
However, we introduce them now in order to make progress.
T12.5. On the standard interpretation N for LNT, (i) Nd[s ≤ t] = S iff Nd[s] ≤ Nd[t], and (ii) Nd[s < t] = S iff Nd[s] < Nd[t].
(i) By abv, Nd[s ≤ t] = S iff Nd[∃v(v + s = t)] = S, where v is not free in s or t; by SF(∃), iff there is some m ∈ U such that Nd(v|m)[v + s = t] = S. But d(v|m)[v] = m; so by TA(v), Nd(v|m)[v] = m; so by TA(f), Nd(v|m)[v + s] = N[+]⟨m, Nd(v|m)[s]⟩ = m + Nd(v|m)[s]. So by SF(r), Nd(v|m)[v + s = t] = S iff ⟨m + Nd(v|m)[s], Nd(v|m)[t]⟩ ∈ N[=]; iff m + Nd(v|m)[s] = Nd(v|m)[t]. But since v is not free in s or t, d and d(v|m) make the same assignments to variables free in s and t; so by T8.3, Nd[s] = Nd(v|m)[s] and Nd[t] = Nd(v|m)[t]; so m + Nd(v|m)[s] = Nd(v|m)[t] iff m + Nd[s] = Nd[t]; and there exists such an m just in case Nd[s] ≤ Nd[t]. So Nd[s ≤ t] = S iff Nd[s] ≤ Nd[t].

(ii) is homework.

As an immediate corollary, Nd[s ≤ t] ≠ S just in case Nd[s] > Nd[t]; and similarly for <.
T12.6. On the standard interpretation N for LNT, (i) Nd[(∀x ≤ t)P] = S iff for every m ≤ Nd[t], Nd(x|m)[P] = S; and (ii) Nd[(∀x < t)P] = S iff for every m < Nd[t], Nd(x|m)[P] = S.

(i) By abv, Nd[(∀x ≤ t)P] = S iff Nd[∀x(x ≤ t → P)] = S, where x does not appear in t; by SF(∀), iff for any m ∈ U, Nd(x|m)[x ≤ t → P] = S; by SF(→), iff for any m ∈ U, Nd(x|m)[x ≤ t] ≠ S or Nd(x|m)[P] = S; which is to say, iff for any m ∈ U, if Nd(x|m)[x ≤ t] = S, then Nd(x|m)[P] = S. But d(x|m)[x] = m; so Nd(x|m)[x] = m; and since x is not free in t, d and d(x|m) agree on assignments to variables free in t; so by T8.3, Nd(x|m)[t] = Nd[t]; so with T12.5, Nd(x|m)[x ≤ t] = S iff m ≤ Nd[t]; so Nd[(∀x ≤ t)P] = S iff for any m, if m ≤ Nd[t], then Nd(x|m)[P] = S.

(ii) is homework.
T12.7. On the standard interpretation N for LNT, (i) Nd[(∃x ≤ t)P] = S iff for some m ≤ Nd[t], Nd(x|m)[P] = S; and (ii) Nd[(∃x < t)P] = S iff for some m < Nd[t], Nd(x|m)[P] = S.

Homework.
We are finally ready for the results to which we have been building: first, Q correctly decides Δ0 sentences of LNT.
T12.8. For any Δ0 sentence P, if N[P] = T, then Q ⊢ND P, and if N[P] ≠ T, then Q ⊢ND ¬P.

By induction on the number of operators in P.

Basis: If P is an atomic Δ0 sentence it is t = s, t ≤ s or t < s. So by T8.14, if N[P] = T, Q ⊢ND P, and if N[P] ≠ T, Q ⊢ND ¬P.

Assp: For any i, 0 ≤ i < k, if a Δ0 sentence P has i operator symbols, then if N[P] = T, Q ⊢ND P and if N[P] ≠ T, Q ⊢ND ¬P.

Show: If a Δ0 sentence P has k operator symbols, then if N[P] = T, Q ⊢ND P and if N[P] ≠ T, Q ⊢ND ¬P.

If a Δ0 sentence P has k operator symbols, then it is of the form ¬A, A → B, (∀x ≤ t)A, (∀x < t)A, (∃x ≤ t)A, or (∃x < t)A, where A, B have < k operator symbols and x does not appear in t.

(¬) P is ¬A. (i) Suppose N[P] = T; then N[¬A] = T; so by T8.6, N[A] ≠ T; so by assumption, Q ⊢ND ¬A; so Q ⊢ND P. (ii) Suppose N[P] ≠ T; then N[¬A] ≠ T; so by T8.6, N[A] = T; so by assumption Q ⊢ND A; so by DN, Q ⊢ND ¬¬A; so Q ⊢ND ¬P.

(→) P is A → B. (i) Suppose N[A → B] = T; then by T8.6, N[A] ≠ T or N[B] = T. So by assumption, Q ⊢ND ¬A or Q ⊢ND B. So by ∨I twice, Q ⊢ND ¬A ∨ B or Q ⊢ND ¬A ∨ B; so Q ⊢ND ¬A ∨ B; so by Impl, Q ⊢ND A → B. Part (ii) is homework.

(∀≤) P is (∀x ≤ t)A(x). Since P is a sentence, x is the only variable free in A; in particular, since x does not appear in t, t must be variable-free; so Nd[t] = N[t] and where N[t] = n, by T8.13, Q ⊢ND t = n; so by =E, Q ⊢ND P just in case Q ⊢ND (∀x ≤ n)A(x).

(i) Suppose N[P] = T; then N[(∀x ≤ t)A(x)] = T; so by TI, for any d, Nd[(∀x ≤ t)A(x)] = S; so by T12.6, for any m ≤ Nd[t], Nd(x|m)[A(x)] = S; so where Nd[t] = N[t] = n, for any m ≤ n, Nd(x|m)[A(x)] = S; but Nd[m] = m, so with T10.2, for any m ≤ n, Nd[A(m)] = S; since x is the only variable free in A, A(m) is a sentence; so with T8.5, for any m ≤ n, N[A(m)] = T; so N[A(∅)] = T and N[A(1)] = T and ... and N[A(n)] = T; so by assumption, Q ⊢ND A(∅) and Q ⊢ND A(1) and ... and Q ⊢ND A(n); so by T8.21, Q ⊢ND (∀x ≤ n)A(x); so with our preliminary result, Q ⊢ND P.

(ii) Suppose N[P] ≠ T; then N[(∀x ≤ t)A(x)] ≠ T; so by TI, for some d, Nd[(∀x ≤ t)A(x)] ≠ S; so by T12.6, for some m ≤ Nd[t], Nd(x|m)[A(x)] ≠ S; so where Nd[t] = N[t] = n, for some m ≤ n, Nd(x|m)[A(x)] ≠ S; but Nd[m] = m, so with T10.2, for some m ≤ n, Nd[A(m)] ≠ S; so by TI, for some m ≤ n, N[A(m)] ≠ T; so by assumption, for some m ≤ n, Q ⊢ND ¬A(m); so by T8.20, Q ⊢ND (∃x ≤ n)¬A(x); so by bounded quantifier negation (BQN), Q ⊢ND ¬(∀x ≤ n)A(x); so with our preliminary result, Q ⊢ND ¬P.

(oth) Other cases are left for homework.

Indct: So for any Δ0 sentence P, if N[P] = T, then Q ⊢ND P, and if N[P] ≠ T, then Q ⊢ND ¬P.
And now, Q proves true Σ1 sentences.

T12.9. For any (strict) Σ1 sentence P, if N[P] = T, then Q ⊢ND P.

This is a simple induction on the number of unbounded existential quantifiers in P. Hint: If P has no unbounded existential quantifiers, then it is Δ0. Otherwise, if ∃xP is true, it will be easy to show that for some m, P(m) is true; you can then apply your assumption, and ∃I.

Corollary: For any Σ1 sentence P, if N[P] = T, then Q ⊢ND P. Suppose a Σ1 P is such that N[P] = T; then by equivalence there is some strict Σ1 P* such that N[P*] = T; so by the main theorem, Q ⊢ND P*; and by equivalence again, Q ⊢ND P.

This completes what we set out to show in this subsection. These results should seem intuitive: Q proves results about particular numbers, 1 + 1 = 2 and the like. But Δ0 sentences assert (potentially complex) particular facts about numbers, and we show that Q proves any Δ0 sentence. Similarly, any Σ1 sentence is true because of some particular fact about numbers; since Q proves that particular fact, it is sufficient to prove the Σ1 sentence.
E12.11. Complete the demonstration of T12.5 - T12.7 by showing the remaining
parts. These should be straightforward, given parts worked in the text.
*E12.12. Complete the demonstration of T12.8 by finishing the remaining cases.
You should set up the entire argument, but may appeal to the text for parts
already completed, as the text appeals to homework.
E12.13. Provide an argument to demonstrate T12.9.
12.3.2 Basic Result
We now set out to show that Q captures all the recursive functions. In fact, we give the result in two forms: first in a straightforward basic version. However, this version gets a result slightly weaker than the one we would like. But it is easily strengthened to the final form.

First, an argument that the original formulas by which we have expressed recursive functions are Σ1. This argument merely reviews the strategy from T12.3 for expression, to show that each formula is equivalent to a strictly Σ1 formula and so is Σ1.
T12.10. The original formula by which any recursive function is expressed is Σ1.

By induction on the sequence of recursive functions.

Basis: From T12.3, suc(x) is originally expressed by Sx = v; zero(x) by x = x ∧ v = ∅; and idnt^j_k(x1 ... xj) by (x1 = x1 ∧ ... ∧ xj = xj) ∧ xk = v. These are all Δ0, and therefore Σ1.

Assp: For any i, 0 ≤ i < k, the original formula F(x⃗, v) by which fi(x⃗) is expressed is Σ1.

Show: The original formula F(x⃗, v) by which fk(x⃗) is expressed is Σ1.

fk is either an initial function or arises from previous members by composition, recursion or regular minimization. If it is an initial function, then as in the basis. So suppose fk arises from previous members.

(c) fk(x⃗, y⃗, z⃗) arises by composition from g(y⃗) and h(x⃗, w, z⃗). By assumption g(y⃗) is expressed by some Σ1 formula equivalent to ∃j⃗G(y⃗, w) and h(x⃗, w, z⃗) by a Σ1 formula equivalent to ∃k⃗H(x⃗, w, z⃗, v), where G and H are individually Δ0. Then their original composition F(x⃗, y⃗, z⃗, v) is equivalent to ∃w(∃j⃗G(y⃗, w) ∧ ∃k⃗H(x⃗, w, z⃗, v)); and by standard quantifier placement rules, this is equivalent to ∃w∃j⃗∃k⃗(G(y⃗, w) ∧ H(x⃗, w, z⃗, v)), where this is Σ1.

(r) fk(x⃗, y) arises by recursion from g(x⃗) and h(x⃗, y, u). By assumption g(x⃗) is expressed by some Σ1 formula ∃j⃗G(x⃗, v) and h(x⃗, y, u) by ∃k⃗H(x⃗, y, u, v). And, as before, the β-function is expressed by,

    B(p, q, i, v) =def (∃w ≤ p)(p = (S(q × Si) × w) + v) ∧ v < S(q × Si)

where this is Δ0. Then the original formula F(x⃗, y, z) by which fk(x⃗, y) is expressed is equivalent to,

    ∃p∃q{∃v(B(p, q, ∅, v) ∧ ∃j⃗G(x⃗, v)) ∧
         (∀i < y)∃u∃v(B(p, q, i, u) ∧ B(p, q, Si, v) ∧ ∃k⃗H(x⃗, i, u, v)) ∧
         B(p, q, y, z)}

This time, standard quantifier placement rules are not enough to identify the formula as Σ1. We can pull the initial v and j⃗ quantifiers out. And the k⃗ quantifiers come out with the u and v quantifiers. The problem is getting these past the bounded universal i quantifier. For this, we use a sort of trick. For a simplified case, consider (∀i < y)∃vP(i, v); this requires that for each i < y there is at least one v that makes P(i, v) true; for each i < y consider the least such v, and let a be the greatest member of this collection. Then (∀i < y)(∃v ≤ a)P(i, v) says the same as the original expression. And therefore, no matter what y may be, ∃j(∀i < y)(∃v ≤ j)P(i, v) is true iff the original expression is true. Thus the existential u, v and k⃗ quantifiers come to the front, and the result is Σ1.

(m) fk(x⃗) arises by regular minimization from g(x⃗, y). By assumption, g(x⃗, y) is expressed by some ∃j⃗G(x⃗, y, z). Then the original expression by which fk(x⃗) is expressed is equivalent to ∃j⃗G(x⃗, v, ∅) ∧ (∀y < v)¬∃j⃗G(x⃗, y, ∅); but since G expresses a function, ¬∃j⃗G(x⃗, y, ∅) just when ∃z(∃j⃗G(x⃗, y, z) ∧ z ≠ ∅); so the original expression is equivalent to ∃j⃗G(x⃗, v, ∅) ∧ (∀y < v)∃z(∃j⃗G(x⃗, y, z) ∧ z ≠ ∅). The first set of j quantifiers comes directly to the front, and the second set, together with the z quantifier, come out as in the previous case, leaving bounded existential quantifiers behind. So the result is Σ1.

Indct: The original formula by which any recursive function is expressed is Σ1.
It is not proper to drag an existential quantifier out past a universal quantifier; however, it is legitimate to drag an existential past a bounded universal, with a bounded
existential quantifier left behind as shadow or witness.
Now for our main result. Here is the sense in which our result is weaker than we might like: rather than Q, let us suppose we are in a system Qs, strengthened Q, which has (as an axiom or) a theorem uniqueness of remainder,

    ∀x∀y(((∃w ≤ m)(m = Sn × w + x) ∧ x < Sn ∧ (∃w ≤ m)(m = Sn × w + y) ∧ y < Sn) → x = y)

that is, for any x and y, if x is the remainder of m/(n + 1) and y is the remainder of m/(n + 1), then x = y. As we shall see for Def [rm] in chapter 13, PA is a system
of this sort, though, insofar as m and n are free variables rather than numerals, Q is not. Notice that m and n are free in this formulation; if they are instantiated to p and q × Si respectively, from uniqueness for remainder there immediately follows a parallel uniqueness result for the β-function,

    ∀x∀y((B(p, q, i, x) ∧ B(p, q, i, y)) → x = y)

Further, if ⟨⟨p, q, i⟩, a⟩ ∈ β then, since B expresses the β-function, N[B(p, q, i, a)] = T; and since B is Δ0, by T12.8, Q ⊢ND B(p, q, i, a). From this, with uniqueness, it is immediate with ∀E that Qs ⊢ND ∀z(B(p, q, i, z) → z = a). So B captures β in Qs.
Now we are positioned to offer a perfectly straightforward argument for capture
of the recursive functions in Qs . Again our main argument is an induction on the
sequence of recursive functions. We show that Qs captures the initial functions, and
then that it captures functions from composition, recursion and regular minimization.
T12.11. On the standard interpretation N for LNT, any recursive function is captured in Qs by the original formula by which it is expressed.

By induction on the sequence of recursive functions.

Basis: f0 is an initial function suc(x), zero(x), or idnt^j_k(x1 ... xj).

(s) The original formula F(x, v) by which suc(x) is expressed is Sx = v. Suppose ⟨m, a⟩ ∈ suc.

(i) Since Sx = v expresses suc(x), N[Sm = a] = T; so, since it is Δ0, by T12.8, Q ⊢ND Sm = a; so Qs ⊢ND F(m, a).

(ii) Reason as follows,

    1. Sm = a                   from (i)
    2.  | Sm = j                A (g, →I)
    3.  | j = a                 1,2 =E
    4. Sm = j → j = a           2-3 →I
    5. ∀z(Sm = z → z = a)       4 ∀I

So Qs ⊢ND ∀z(F(m, z) → z = a).

(oth) It is left as homework to show that zero(x) is captured by x = x ∧ v = ∅ and idnt^j_k(x1 ... xj) by (x1 = x1 ∧ ... ∧ xj = xj) ∧ xk = v.
Assp: For any i, 0 ≤ i < k, fi(x⃗) is captured in Qs by the original formula by which it is expressed.

Show: fk(x⃗) is captured in Qs by the original formula by which it is expressed.

fk is either an initial function or arises from previous members by composition, recursion or regular minimization. If it is an initial function, then as in the basis. So suppose fk arises from previous members.
(c) fk(x⃗, y⃗, z⃗) arises by composition from g(y⃗) and h(x⃗, w, z⃗). By assumption g(y⃗) is captured by some G(y⃗, w) and h(x⃗, w, z⃗) by H(x⃗, w, z⃗, v); the original formula F(x⃗, y⃗, z⃗, v) by which the composition f(x⃗, y⃗, z⃗) is expressed is ∃w(G(y⃗, w) ∧ H(x⃗, w, z⃗, v)). For simplicity, consider a case where x⃗ and z⃗ drop out and y⃗ is a single variable y. Suppose ⟨m, a⟩ ∈ fk; then by composition there is some b such that ⟨m, b⟩ ∈ g and ⟨b, a⟩ ∈ h.

(i) Since ⟨m, a⟩ ∈ fk, and F(y, v) expresses f, N[F(m, a)] = T; so, since F(y, v) is Σ1, by T12.9, Qs ⊢ND F(m, a).

(ii) Since G(y, w) captures g(y) and H(w, v) captures h(w), by assumption Qs ⊢ND ∀z(G(m, z) → z = b) and Qs ⊢ND ∀z(H(b, z) → z = a). It is then a simple derivation for you to show that Qs ⊢ND ∀z(∃w(G(m, w) ∧ H(w, z)) → z = a).
(r) fk(x⃗, y) arises by recursion from g(x⃗) and h(x⃗, y, u). By assumption g(x⃗) is captured by some G(x⃗, v) and h(x⃗, y, u) by H(x⃗, y, u, v); the original formula F(x⃗, y, z) by which fk(x⃗, y) is expressed is,

    ∃p∃q{∃v(B(p, q, ∅, v) ∧ G(x⃗, v)) ∧
         (∀i < y)∃u∃v(B(p, q, i, u) ∧ B(p, q, Si, v) ∧ H(x⃗, i, u, v)) ∧
         B(p, q, y, z)}

Suppose x⃗ reduces to a single variable and ⟨m, n, a⟩ ∈ fk. (i) Then since F(x, y, z) expresses f, N[F(m, n, a)] = T; so, since F(x, y, z) is Σ1, by T12.9, Qs ⊢ND F(m, n, a). And (ii) by T12.12, immediately following, Qs ⊢ND ∀w(F(m, n, w) → w = a).
(m) fk(x⃗) arises by regular minimization from g(x⃗, y). By assumption, g(x⃗, y) is captured by some G(x⃗, y, z); the original formula F(x⃗, v) by which fk(x⃗) is expressed is G(x⃗, v, ∅) ∧ (∀y < v)¬G(x⃗, y, ∅). Suppose x⃗ reduces to a single variable and ⟨m, a⟩ ∈ fk.

(i) Since ⟨m, a⟩ ∈ fk, and F(x, v) expresses f, N[F(m, a)] = T; so since F(x, v) is Σ1, by T12.9, Qs ⊢ND F(m, a).

(ii) Reason as follows,

CHAPTER 12. RECURSIVE FUNCTIONS AND Q


 1. G(m,a,∅) ∧ (∀y < a)~G(m,y,∅)                 from (i)
 2. j < a ∨ j = a ∨ a < j                         T8.19
 3. | G(m,j,∅) ∧ (∀y < j)~G(m,y,∅)               A (g, →I)
 4. | | j < a                                     A (c, ~I)
 5. | | G(m,j,∅)                                  3 ∧E
 6. | | (∀y < a)~G(m,y,∅)                         1 ∧E
 7. | | ~G(m,j,∅)                                 6,4 (∀E)
 8. | | ⊥                                         5,7 ⊥I
 9. | ~(j < a)                                    4-8 ~I
10. | | a < j                                     A (c, ~I)
11. | | G(m,a,∅)                                  1 ∧E
12. | | (∀y < j)~G(m,y,∅)                         3 ∧E
13. | | ~G(m,a,∅)                                 12,10 (∀E)
14. | | ⊥                                         11,13 ⊥I
15. | ~(a < j)                                    10-14 ~I
16. | j = a                                       2,9,15 DS
17. G(m,j,∅) ∧ (∀y < j)~G(m,y,∅) → j = a         3-16 →I
18. ∀z(G(m,z,∅) ∧ (∀y < z)~G(m,y,∅) → z = a)     17 ∀I

So Qs ⊢ND ∀z(G(m,z,∅) ∧ (∀y < z)~G(m,y,∅) → z = a).

Indct: Any recursive f(x̄) is captured by the original formula by which it is expressed in Qs.

For this argument, we simply rely on the ability of Q to prove particular truths, and so the Σ1 sentences that express recursive functions. The uniqueness clauses are not Σ1, so we have to show them directly. The case for recursion remains outstanding, and is addressed in the theorem immediately following.
T12.12. Suppose f(x̄,y) results by recursion from functions g(x̄) and h(x̄,y,u) where g(x̄) is captured by some G(x̄,v) and h(x̄,y,u) by H(x̄,y,u,v). Then for the original expression F(x̄,y,z) of f(x̄,y), if ⟨⟨m1...mb, n⟩, a⟩ ∈ f, Qs ⊢ ∀w(F(m1...mb, n, w) → w = a).

Suppose x̄ reduces to a single variable and ⟨m,n,a⟩ ∈ f. When ⟨m,n,a⟩ ∈ f, there are k0...kn such that kn = a, k0 = g(m); for 0 ≤ i < n, there are p, q such that β(p,q,i) = ki, β(p,q,Si) = kSi, and h(m,i,ki) = kSi. The argument is by induction on the value of n from f(m,n) = a. Observe that F is long, and we shall better be able to manage the formulas given its general form ∃p∃q(A ∧ C ∧ B). Given the structure of the definition for this recursion clause, it will be convenient to lapse


into induction scheme III from the induction schemes reference on p. 380, making
the assumption for a single member of the series n, and then showing that it holds
for the next. Thus, assuming that Qs ` 8wF .m; n; w/ ! w D kn , we show
Qs ` 8wF .m; S n; w/ ! w D kSn .
Basis: Suppose n = 0. From capture, Qs ⊢ND ∀z(G(m,z) → z = k0). By uniqueness of remainder (and generalizing on p and q), Qs ⊢ND ∀p∀q∀x∀y((B(p,q,∅,x) ∧ B(p,q,∅,y)) → x = y). F is of the sort, ∃p∃q{∃v(B(p,q,∅,v) ∧ G(x̄,v)) ∧ C ∧ B(p,q,∅,z)}. You need to show, Qs ⊢ ∀w(∃p∃q{∃v(B(p,q,∅,v) ∧ G(m,v)) ∧ C ∧ B(p,q,∅,w)} → w = k0). This is straightforward. So Qs ⊢ ∀w(F(m,∅,w) → w = k0).

Assp: Qs ⊢ ∀w(F(m,n,w) → w = kn)

Show: Qs ⊢ ∀w(F(m,Sn,w) → w = kSn)

From capture, Qs ⊢ND ∀w(H(m,n,kn,w) → w = kSn). And again we make an appeal to uniqueness:

 1. ∀w(F(m,n,w) → w = kn)                                           by assumption
 2. ∀w(H(m,n,kn,w) → w = kSn)                                        by capture
 3. ∀p∀q∀x∀y((B(p,q,Sn,x) ∧ B(p,q,Sn,y)) → x = y)                    uniqueness
 4. | F(m,Sn,j)                                                      A (g, →I)
 5. | ∃p∃q(A ∧ C ∧ B)                                                4 abv
 6. | | ∃q(A ∧ C ∧ B)                                                A (g, 5∃E)
 7. | | | A ∧ C ∧ B                                                  A (g, 6∃E)
 8. | | | ∃v(B(p,q,∅,v) ∧ G(m,v))                                    7 ∧E (A)
 9. | | | (∀i < Sn)∃u∃v(B(p,q,i,u) ∧ B(p,q,Si,v) ∧ H(m,i,u,v))       7 ∧E (C)
10. | | | B(p,q,Sn,j)                                                7 ∧E (B)
11. | | | n < Sn                                                     T8.14
12. | | | ∃u∃v(B(p,q,n,u) ∧ B(p,q,Sn,v) ∧ H(m,n,u,v))                9,11 (∀E)
13. | | | | ∃v(B(p,q,n,u) ∧ B(p,q,Sn,v) ∧ H(m,n,u,v))                A (g, 12∃E)
14. | | | | | B(p,q,n,u) ∧ B(p,q,Sn,v) ∧ H(m,n,u,v)                  A (g, 13∃E)
15. | | | | | B(p,q,n,u)                                             14 ∧E
16. | | | | | (∀i < n)∃u∃v(B(p,q,i,u) ∧ B(p,q,Si,v) ∧ H(m,i,u,v))    9 with T8.21
17. | | | | | F(m,n,u)                                               8,16,15 with ∃I
18. | | | | | u = kn                                                 1,17 with ∀E
19. | | | | | H(m,n,u,v)                                             14 ∧E
20. | | | | | H(m,n,kn,v)                                            19,18 =E
21. | | | | | v = kSn                                                 2,20 with ∀E
22. | | | | | B(p,q,Sn,v)                                            14 ∧E
23. | | | | | B(p,q,Sn,kSn)                                           22,21 =E
24. | | | | | j = kSn                                                 3,10,23 with ∀E
25. | | | | j = kSn                                                   13,14-24 ∃E
26. | | | j = kSn                                                     12,13-25 ∃E
27. | | j = kSn                                                       6,7-26 ∃E
28. | j = kSn                                                         5,6-27 ∃E
29. F(m,Sn,j) → j = kSn                                               4-28 →I
30. ∀w(F(m,Sn,w) → w = kSn)                                           29 ∀I

Lines 8-10 show the content of the assumptions on 4-7, which are too long to display in expanded form. Once we are able to show F(m,n,u) at (17), the inductive assumption lets us pin u onto kn. Then uniqueness conditions for H and B allow us to move to unique outputs for H and B and so for F. Line 16 perhaps obviously follows from (9), but its derivation may be obscure: by T8.14, Q ⊢ 0 < Sn and ... and Q ⊢ n−1 < Sn; so where P is the quantified formula, with (9) and (∀E), Q ⊢ P(0) and ... and Q ⊢ P(n−1);


so with T8.21, Q ⊢ (∀i < n)P(i).


Indct: For any n, Qs ⊢ND ∀w(F(m,n,w) → w = kn).

Observe that in both the basis and show clauses we require the generalized uniqueness for B: this is because it is being applied inside assumptions for ∃E, where p and q are arbitrary variables, not numerals, to which the ordinary notion of capture for B would apply. So Qs ⊢ ∀w(F(m,n,w) → w = a). So we satisfy the recursive clause for T12.11. So the theorem is proved. And we have shown that Qs has the resources to capture any recursive function.
This theorem has a number of attractive features: We show that recursive functions are captured directly by the original formulas by which they are expressed. A byproduct is that recursive functions are captured by Σ1 formulas. The argument is a straightforward induction on the sequence of recursive functions, of a type we have seen before. But we do not show that recursive functions are captured in Q. It is that to which we turn.
*E12.14. Complete the demonstration of T12.11 by completing the remaining cases,
including the basis and part (ii) of the case for composition.

*E12.15. Produce a derivation to show the basis of T12.12.

E12.16. Continuing along the lines from E12.7, observe that T12.4 assumes that functions are recursive and so total. In the context of partial functions, CPf would have to be augmented with the condition that if ⟨⟨m1...mn⟩, a⟩ ∉ f then T ⊢ ~F(m1...mn, a). Extend the argument for T12.11 to show that on the standard interpretation N for LNT, any μ-recursive function is captured, on the extended account, in Qs by the original formula by which it is expressed.
E12.17. Return to the simple functions from E12.10. Show that on the standard interpretation N of LNT each simple function f(x̄) is captured in Qs by the formula used to express it. Restrict appeal to external theorems just to your result from E12.10 and T8.14 as appropriate.

12.3.3 The result strengthened

T12.11 shows that the recursive functions are captured in Qs by their Σ1 original expressers. As we have suggested, this argument is easily strengthened to show that the recursive functions are captured in Q. To do so, we give up the capture by original expressers, though we retain the result that the recursive functions are captured by Σ1 formulas.

In the previous section, we appealed to uniqueness of remainder for the β-function. In Qs, the original formula B captures the β-function, and gives a strengthened uniqueness result important for T12.12. But we can simulate this effect by some easy theorems. Recall that the β-function is originally expressed by a Δ0 formula B.
T12.13. If a function f(x̄) is expressed by a Δ0 formula F(x̄,v), then F′(x̄,v) =def F(x̄,v) ∧ (∀z ≤ v)(F(x̄,z) → z = v) is Δ0 and captures f in Q.

Suppose f(x̄) is expressed by a Δ0 formula F(x̄,v) and x̄ reduces to a single variable. Suppose ⟨m,a⟩ ∈ f. (a) Then, N[F(m,a)] = T; and since F is Δ0, by T12.8, Q ⊢ND F(m,a). (b) Suppose n ≠ a; then ⟨m,n⟩ ∉ f; so with T12.2, N[~F(m,n)] = T and N[F(m,n)] ≠ T; so by T12.8, Q ⊢ND ~F(m,n).

(i) From (a), Q ⊢ F(m,a). And ⊢ a = a, so ⊢ F(m,a) → a = a; and from (b), for q < a, Q ⊢ ~F(m,q); so trivially, Q ⊢ F(m,q) → q = a; so for any p ≤ a, Q ⊢ F(m,p) → p = a; so by T8.21, Q ⊢ (∀z ≤ a)(F(m,z) → z = a). So with ∧I, Q ⊢ F(m,a) ∧ (∀z ≤ a)(F(m,z) → z = a); which is to say, Q ⊢ F′(m,a).

(ii) Hint: You need to show Q ⊢ ∀w(F(m,w) ∧ (∀z ≤ w)(F(m,z) → z = w) → w = a). Take as premises F(m,a) ∧ (∀z ≤ a)(F(m,z) → z = a) from (i), along with j ≤ a ∨ a ≤ j from T8.19.
This result effectively tells us that if conditions (a) and (b) are met, then there is an F′ that captures f. This F′ is not the same as the original F that expresses the function. Still, if the Δ0 B expresses the β-function, we have B′ that captures it in Q. Intuitively, the second conjunct of F′ tells us that any z < v cannot satisfy F. Further, formulas of the sort F′ yield a modified uniqueness result.
T12.14. For F′(x̄,v) =def F(x̄,v) ∧ (∀z ≤ v)(F(x̄,z) → z = v) as above, for any n, Q ⊢ ∀x∀y((F′(x,n) ∧ F′(x,y)) → y = n). Suppose x̄ reduces to a single variable and reason as follows,


 1. ∀x(x ≤ n ∨ n ≤ x)                            T8.19
 2. | F′(j,n) ∧ F′(j,k)                           A (g →I)
 3. | F(j,n) ∧ (∀z ≤ n)(F(j,z) → z = n)           2 ∧E (unabv)
 4. | F(j,k) ∧ (∀z ≤ k)(F(j,z) → z = k)           2 ∧E (unabv)
 5. | k ≤ n ∨ n ≤ k                               1 ∀E
 6. | | k ≤ n                                     A (g 5∨E)
 7. | | (∀z ≤ n)(F(j,z) → z = n)                  3 ∧E
 8. | | F(j,k) → k = n                            7,6 (∀E)
 9. | | F(j,k)                                    4 ∧E
10. | | k = n                                     8,9 →E
11. | | n ≤ k                                     A (g 5∨E)
12. | | ⋮
    | | k = n
13. | k = n                                       5,6-10,11-12 ∨E
14. (F′(j,n) ∧ F′(j,k)) → k = n                   2-13 →I
15. ∀y((F′(j,n) ∧ F′(j,y)) → y = n)               14 ∀I
16. ∀x∀y((F′(x,n) ∧ F′(x,y)) → y = n)             15 ∀I

Reasoning for the second subderivation is similar to the first.

So where p, q and v are universally quantified we shall have, Q ⊢ ∀p∀q∀v((B′(p,q,m,n) ∧ B′(p,q,m,v)) → v = n). Because n is a numeral, this is not quite what we had from Qs, but it will be sufficient for what we want.

Observe also that insofar as F′(x̄,v) is built on an F(x̄,v) that expresses a function, F′ continues to express f(x̄). Perhaps this is obvious given what F′ says. However, we can argue for the result directly.
T12.15. If F(x̄,v) expresses f(x̄), then F′(x̄,v) = F(x̄,v) ∧ (∀z ≤ v)(F(x̄,z) → z = v) expresses f(x̄).

Suppose x̄ reduces to a single variable and f(x) is expressed by F(x,v). Suppose ⟨m,a⟩ ∈ f. (a) By expression, N[F(m,a)] = T. (b) Suppose n ≠ a; then ⟨m,n⟩ ∉ f; so with T12.2, N[~F(m,n)] = T.

(i) Suppose N[F′(m,a)] ≠ T. This is impossible. You will need applications of T12.6 and T10.2; observe that for n ≤ a either n = a or n < a (so that n ≠ a).

(ii) Suppose N[∀w(F(m,w) ∧ (∀z ≤ w)(F(m,z) → z = w) → w = a)] ≠ T. This is impossible. This time, you will be able to reason that for any n either n = a or n ≠ a.


And now we are in a position to recover the main result, except that the recursive
functions are captured in Q rather than Qs .
T12.16. Any recursive function is captured by a Σ1 formula in Q.

The β-function is expressed by a Δ0 formula B(p,q,i,v); so by T12.15 and T12.13 there is a Δ0 formula B′(p,q,i,v) that expresses and captures it in Q. For any f(x̄) originally expressed by F(x̄,v), let F* be like F except that instances of B are replaced by B′. Since B′ is Δ0, F* remains Σ1.

The argument is now a matter of showing that demonstrations of T12.3, T12.11 and T12.12 go through with application to these formulas and in Q. For the first two, the argument is nearly trivial: everything is the same as before with formulas of the sort F* replacing F. For the last, it will be important that derivations which rely on uniqueness for the β-function go through with the result from T12.14, that for any m and n, Q ⊢ ∀p∀q∀v((B′(p,q,m,n) ∧ B′(p,q,m,v)) → v = n).
As for the case of expression, formulas other than F*(x̄,v) might capture the recursive functions: for example, if F*(x̄,v) captures f(x̄), then so does F*(x̄,v) ∧ A for any theorem A. Let us say that F*(x̄,v) is the canonical formula that captures f(x̄) in Q. Of course, the canonical formula which captures f(x̄) need not be the same as the corresponding original formula, for the β-function is not captured by its original formula (and so any formula which includes a β-function fails to be original). Because the β-function is captured by a Δ0 formula we do, however, retain the result that every recursive function is captured in Q by some Σ1 formula.

For the following, unless otherwise noted, when on the basis of our theorems we assert the existence of a formula to express or capture some recursive function, we shall have in mind the canonical formula. Thus a function is expressed and captured by the same formula.
E12.18. Provide an argument to demonstrate (ii) of T12.13.

E12.19. Finish the derivation for T12.14 by completing the second subderivation.

E12.20. Complete the demonstration of T12.15.


*E12.21. Work carefully through the demonstration of T12.16 by setting up revised arguments T12.3*, T12.11* and T12.12*. As feasible, you may simply explain how parts differ from the originals. For the last, be sure that derivations work with revised uniqueness conditions.

12.4 More Recursive Functions

Now that we have seen what the recursive functions are, and the powers of our logical
systems to express and capture recursive functions, we turn to extending their range.
In fact, in this section, we shall generate a series of functions that are primitive recursive. In addition to the initial functions, so far, we have seen that plus, times, fact
and power are primitive recursive. As we increase the range of (primitive) recursive
functions, it immediately follows that our logical systems have the power to express
and capture all the same functions.

12.4.1 Preliminary Functions

We begin with some simple primitive recursive functions that will serve as a foundation for things to come.
Predecessor with cutoff. Set the predecessor of zero to zero itself, and for any other value to the one before. Since pred(y) is a one-place function, gpred is a constant, in this case, gpred = 0. And hpred = idnt21(y,u). So, as we expect for pred(y),

pred(0) = 0
pred(suc(y)) = y

So predecessor is a primitive recursive function.
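Along the lines of the Ruby sequence from E12.3, the recursion equations can be sketched directly; treating the suc(y) clause with Ruby's built-in subtraction is a shortcut of ours, not part of the official definition:

```ruby
# Predecessor with cutoff: pred(0) = 0; pred(suc(y)) = y.
def pred(y)
  y.zero? ? 0 : y - 1
end
```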


Subtraction with cutoff. When y ≥ x, subc(x,y) = 0. Otherwise subc(x,y) = x − y. For subc(x,y), set gsubc(x) = idnt11(x). And hsubc(x,y,u) = pred(idnt33(x,y,u)). So,

subc(x, 0) = x
subc(x, suc(y)) = pred(subc(x,y))

So as y increases by one, the difference decreases by one. Informally, indicate subc(x,y) = (x ∸ y).
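A matching Ruby sketch, assuming the pred method from above, follows the two recursion equations literally:

```ruby
# Subtraction with cutoff: subc(x, 0) = x; subc(x, suc(y)) = pred(subc(x, y)).
def pred(y)
  y.zero? ? 0 : y - 1
end

def subc(x, y)
  y.zero? ? x : pred(subc(x, y - 1))
end
```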

CHAPTER 12. RECURSIVE FUNCTIONS AND Q

573

Absolute value. absval(x − y) = (x ∸ y) + (y ∸ x). So we find the absolute value of the difference between x and y by doing the subtraction with cutoff both ways. One direction yields zero. The other yields the value we want. So the sum comes out to the absolute value. This is a function with two arguments (only separated by '−' rather than comma to remind us of the nature of the function). This function results entirely by composition, without a recursion clause. Informally, we indicate absolute value in the usual way, absval(x − y) = |x − y|.
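Since absval arises purely by composition, its Ruby sketch is a one-liner over subc (helpers repeated to keep the sketch self-contained):

```ruby
def pred(y); y.zero? ? 0 : y - 1; end
def subc(x, y); y.zero? ? x : pred(subc(x, y - 1)); end

# |x - y| = (x ∸ y) + (y ∸ x): one direction of cutoff subtraction is
# zero; the other is the distance.
def absval(x, y)
  subc(x, y) + subc(y, x)
end
```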
Sign. The function sg(y) is zero when y is zero and otherwise one. For sg(y), set gsg = 0. And hsg(y,u) = suc(zero(idnt21(y,u))). So,

sg(0) = 0
sg(suc(y)) = suc(zero(y))

So the sign of any successor is just the successor of zero, which is one.

Converse sign. The function csg(y) is one when y is zero and otherwise zero. So it inverts sg. For csg(y), set gcsg = suc(0). And hcsg(y,u) = zero(idnt21(y,u)). So,

csg(0) = suc(0)
csg(suc(y)) = zero(y)

So the converse sign of any successor is just zero. Informally, we indicate the converse sign with a bar, s̄g(y).
E12.22. Consider again your file recursive1.rb from E12.3. Extend your sequence of functions to include pred(x), subc(x,y), absval(x - y), sg(x),
and csg(x). Calculate some values of these functions and print the results,
along with your program. Again, there should be no appeal to functions except from earlier in the chain.

12.4.2 Characteristic Functions

(CF) For any function p(x̄), sg(p(x̄)) is the characteristic function of the relation R such that x̄ ∈ R iff sg(p(x̄)) = 0. So a characteristic function for relation R takes just the values 0 and 1: if R(x̄) is true, then chR(x̄) = 0, and if R(x̄) is not true, then chR(x̄) = 1.⁵ A (primitive) recursive property or relation is one that has a (primitive) recursive characteristic function (though when p already takes just the values 0 and 1, so that sg(p(x̄)) = p(x̄), we generally omit sg from our specifications). These definitions immediately result in corollaries to T12.3 and T12.16.

⁵ It is perhaps more common to reverse the values of zero and one for the characteristic function. However, the choice is arbitrary, and this choice is technically convenient.
T12.3 (corollary). On the standard interpretation N of LNT, each recursive relation R(x̄) is expressed by some formula R(x̄).

Suppose R(x̄) is a recursive relation; then it has a recursive characteristic function chR(x̄); so by T12.3 there is some formula R(x̄,y) that expresses chR(x̄). So in the case where x̄ reduces to a single variable, if m ∈ R, then ⟨m,0⟩ ∈ chR; and by expression, I[R(m,∅)] = T; and if m ∉ R, then ⟨m,0⟩ ∉ chR, so that with T12.2, I[~R(m,∅)] = T. So, generally, R(x̄,∅) expresses R(x̄).

T12.16 (corollary). Any recursive relation is captured by a Σ1 formula in Q.

Suppose R(x̄) is a recursive relation; then it has a recursive characteristic function chR(x̄); so by T12.16 there is some Σ1 formula R(x̄,y) that captures chR(x̄). So in the case where x̄ reduces to a single variable, if m ∈ R, then ⟨m,0⟩ ∈ chR; and by capture T ⊢ R(m,∅); and if m ∉ R, then ⟨m,0⟩ ∉ chR; so by capture with T12.4, T ⊢ ~R(m,∅). So, generally, R(x̄,∅) captures R(x̄).

So our results for the expression and capture of recursive functions extend directly to the expression and capture of recursive relations: a recursive relation has a recursive characteristic function; as such, the function is expressed and captured; so, as we have just seen, the corresponding relation is expressed and captured.
Equality. Say t(x̄) is a recursive term just in case it is a variable, constant, or a recursive function. Then for any recursive terms s(x̄) and t(ȳ), EQ(s(x̄), t(ȳ)), typically rendered s(x̄) = t(ȳ), is a recursive relation with characteristic function chEQ(x̄,ȳ) = sg|s(x̄) − t(ȳ)|. When s(x̄) is equal to t(ȳ), the absolute value of the difference is zero, so the value of sg is zero. But when s(x̄) is other than t(ȳ), the absolute value of the difference is other than zero, so the value of sg is one. And, supposing that s(x̄) and t(ȳ) are recursive, this characteristic function is a composition of recursive functions. So the result is recursive. So s(x̄) = t(ȳ) is a recursive relation.

A couple of observations: First, be clear that EQ is the standard relation we all know and love. The trick is to show that it is recursive. We are not given that EQ is a recursive relation; rather we demonstrate that it is, by showing that it has a recursive characteristic function. Second, one might think that we could express f(x̄) = g(ȳ), say, by some relatively simple expression that would compose expressions for the functions with equality, say ∃u∃v(F(x̄,u) ∧ G(ȳ,v) ∧ u = v). This would be fine. However, where sg|f(x̄) − g(ȳ)| is expressed and captured by some S(x̄,ȳ,v), our method, which works by modification of the characteristic function, generates the relatively complex E(x̄,ȳ) =def S(x̄,ȳ,∅).
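In Ruby (anticipating the cheq(a,b) of E12.23), the characteristic function of equality composes sg with the absolute difference:

```ruby
def pred(y); y.zero? ? 0 : y - 1; end
def subc(x, y); y.zero? ? x : pred(subc(x, y - 1)); end
def absval(x, y); subc(x, y) + subc(y, x); end
def sg(y); y.zero? ? 0 : 1; end

# ch_EQ(a, b) = sg|a - b|: 0 just in case a = b.
def cheq(a, b)
  sg(absval(a, b))
end
```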
Inequality. The relation LEQ(s(x̄), t(ȳ)) has characteristic function sg(s(x̄) ∸ t(ȳ)). When s(x̄) ≤ t(ȳ), s(x̄) ∸ t(ȳ) = 0; so sg = 0. Otherwise the value is 1. The relation LESS(s(x̄), t(ȳ)) has characteristic function sg(suc(s(x̄)) ∸ t(ȳ)). When s(x̄) < t(ȳ), suc(s(x̄)) ∸ t(ȳ) = 0; so sg = 0. Otherwise the value is 1. These are typically represented s(x̄) ≤ t(ȳ) and s(x̄) < t(ȳ).
Truth functions. Suppose P(x̄) and Q(x̄) are recursive relations. Then NEG(P(x̄)) and DSJ(P(x̄), Q(x̄)) are recursive relations. Suppose chP(x̄) and chQ(x̄) are the characteristic functions of P(x̄) and Q(x̄).

NEG(P(x̄)) (typically ~P(x̄)) has characteristic function s̄g(chP(x̄)). When P(x̄) does not obtain, the characteristic function of P(x̄) takes value one, so the converse sign goes to zero. And when P(x̄) does obtain, its characteristic function is zero, so the converse sign is one, which is as it should be.

DSJ(P(x̄), Q(ȳ)) (typically P(x̄) ∨ Q(ȳ)) has characteristic function chP(x̄) × chQ(ȳ). When one of P(x̄) or Q(ȳ) is true, the disjunction is true; but in this case, at least one characteristic function is zero, and so the product of the functions goes to zero. If neither P(x̄) nor Q(ȳ) is true, the disjunction is not true; in this case, both characteristic functions, and so the product of the functions, take the value one.

Other truth functions are definable in the same terms as for negation and disjunction. So, for example, IMP(P(x̄), Q(ȳ)), that is P(x̄) → Q(ȳ), is just ~P(x̄) ∨ Q(ȳ).
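Sketched in Ruby with the names E12.24 uses (0 for true, 1 for false, on our convention):

```ruby
def csg(y); y.zero? ? 1 : 0; end

def chneg(a);    csg(a); end   # ~P: flip the characteristic value
def chdsj(a, b); a * b;  end   # P v Q: 0 if either factor is 0
def chimp(a, b); chdsj(chneg(a), b); end   # P -> Q as ~P v Q
```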
Bounded quantifiers. Consider a relation S(x̄,z) = (∃y ≤ z)P(x̄,z,y) which obtains when there is a y less than or equal to z such that P(x̄,z,y). The variable z for the bound may or may not have a natural place in P, though we treat it as at least a placeholder insofar as it has a definite place in S(x̄,z). Given chP(x̄,z,y), consider a further relation R(x̄,z,v) corresponding to (∃y ≤ v)P(x̄,z,y). If we can find chR(x̄,z,v) then chS(x̄,z) is automatic as chR(x̄,z,z). For this chR(x̄,z,v) set,

gchR(x̄,z) = chP(x̄,z,0)
hchR(x̄,z,y,u) = u × chP(x̄,z,Sy)

In the simple case where x̄ drops out, chR(z,0) = chP(z,0). And chR(z,Sy) = chR(z,y) × chP(z,Sy). The result is,

chR(z,v) = chP(z,0) × chP(z,1) × ... × chP(z,v)

Think of these as grouped to the left. So the result has chR(z,n) = 1 unless and until one of the members is zero, and then stays zero. So the function for R(z,v) goes to zero just in case P(z,y) is true for some value of y between 0 and v. So set chS(x̄,z) = chR(x̄,z,z), so that the characteristic function for the bounded quantifier runs the R function up to the bound z.

For S<(x̄,z) = (∃y < z)P(x̄,z,y), adopt R<(x̄,z,v) for (∃y < v)P(x̄,z,y) with chR<(x̄,z,v) such that gchR<(x̄,z) = suc(zero(chP(x̄,z,0))); so that chR<(x̄,z,0) = 1; since there is no y less than zero such that P(x̄,z,y), chR< goes automatically to one. And set hchR<(x̄,z,y,u) = u × chP(x̄,z,y); so in the simple case, chR<(z,Sy) = chR<(z,y) × chP(z,y), and we check only values prior to Sy. Then as before, chS<(x̄,z) = chR<(x̄,z,z).

For (∀z ≤ y)P(x̄,z) and (∀z < y)P(x̄,z), it is simplest just to consider ~(∃z ≤ y)~P(x̄,z), and similarly in the other case. And we are done by previous results.
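The product construction can be sketched in Ruby; chp here is an arbitrary illustrative stand-in (it tests whether y × y = z) just to have something to bound:

```ruby
# Illustrative P(z, y): y * y = z (0 = true, 1 = false).
def chp(z, y); y * y == z ? 0 : 1; end

# ch_R(z, v) = chp(z, 0) * chp(z, 1) * ... * chp(z, v).
def chr(z, v)
  v.zero? ? chp(z, 0) : chr(z, v - 1) * chp(z, v)
end

# ch_S(z) for (exists y <= z)P(z, y): run the product up to the bound z.
def chs(z)
  chr(z, z)
end
```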
Least element. Let m(x̄,z) = (μy ≤ z)P(x̄,z,y) be the least y ≤ z such that P(x̄,z,y) if one exists, and otherwise z. Then if P(x̄,z,y) is a recursive relation, (μy ≤ z)P(x̄,z,y) is a recursive function. First take R(x̄,z,v) for (∃y ≤ v)P(x̄,z,y) and chR(x̄,z,v) as described above. So chR(x̄,z,v) goes to 0 when P is true for some j ≤ v. Then, second, adopt a function q(x̄,z,v) corresponding to (μy ≤ v)P(x̄,z,y). Given this, very much as before, m(x̄,z) is automatic as q(x̄,z,z). For q(x̄,z,v) set,

gq(x̄,z) = zero(chR(x̄,z,0))
hq(x̄,z,y,u) = u + chR(x̄,z,y)

So in the simple case where x̄ drops out, q(z,0) = 0; for the least y ≤ 0 that satisfies any P can only be 0. And then q(z,Sy) = q(z,y) + chR(z,y). The result is,

q(z,Sn) = 0 + chR(z,0) + ... + chR(z,n)

where chR is 1 until it hits a member that is P and then goes to 0 and stays there. Observe that this series starts with y = 0 so that (excluding the first member) it has Sn members, and so if all the values are 1 it evaluates to Sn. We stop adding one at the least n that is P. If there is some least a < Sn such that chR(z,a) is zero, then all the members prior to it are 1 and the sum is a. So set m(x̄,z) = q(x̄,z,z), so that we take the sum up to the limit z. Observe that (μy ≤ z)P(x̄,z,y) = z does not require that P(x̄,z,z), only that no a < z is such that P(x̄,z,a).
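A Ruby sketch of the bounded μ-operator, again with an arbitrary illustrative P (here, the least y with y × y ≥ z):

```ruby
# Illustrative P(z, y): y * y >= z.
def chp(z, y); y * y >= z ? 0 : 1; end

# ch_R(z, v): 0 once P holds for some y <= v.
def chr(z, v); v.zero? ? chp(z, 0) : chr(z, v - 1) * chp(z, v); end

# q(z, 0) = 0; q(z, Sy) = q(z, y) + chr(z, y): the sum counts
# failures below the first success.
def q(z, v)
  v.zero? ? 0 : q(z, v - 1) + chr(z, v - 1)
end

# (mu y <= z)P(z, y) as q(z, z).
def mu(z); q(z, z); end
```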
Selection by cases. Suppose f0(x̄) ... fk(x̄) are recursive functions and C0(x̄) ... Ck(x̄) are mutually exclusive recursive relations. Then f(x̄) defined by cases from C0 ... Ck as follows is recursive.

f(x̄) = f0(x̄) if C0(x̄)
       f1(x̄) if C1(x̄)
       ...
       fk(x̄) if Ck(x̄)
       a     otherwise

Observe that,

f(x̄) = s̄g(chC0(x̄)) × f0(x̄) + s̄g(chC1(x̄)) × f1(x̄) + ... + s̄g(chCk(x̄)) × fk(x̄) + chC0(x̄) × chC1(x̄) × ... × chCk(x̄) × a

works as we want. Each of the first terms in this sum is 0 unless the Ci is met, in which case s̄g(chCi(x̄)) is 1 and the term goes to fi(x̄). The final term is 0 unless no condition Ci is met, in which case it is a. So f(x̄) is a composition of recursive functions, and itself recursive.
We turn now to some applications that will be particularly useful for things to come. In many ways, the project is like a cool translation exercise pitched at the level of functions.
Factor. Let FCTR(m,n) be the relation that obtains between m and n when m + 1 evenly divides n (typically, m|n). Division is by m + 1 to avoid worries about division by zero.⁶ Then m|n is recursive. This relation is defined as follows.

(∃y ≤ n)(Sm × y = n)

Observe that this makes (the predecessor of) both 1 and n factors of n, and any number a factor of zero. Since each part is recursive, the whole is recursive. The argument is from the parts to the whole: Sm × y = n has a recursive characteristic function; so the bounded quantification has a recursive characteristic function; so the factor relation is recursive.
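Following the definition, a Ruby sketch of the characteristic function of FCTR(m,n), built as a bounded existential over the equation Sm × y = n:

```ruby
# ch of Sm * y = n.
def chp(m, n, y); (m + 1) * y == n ? 0 : 1; end

# ch of (exists y <= v)(Sm * y = n), by the product recursion.
def chr(m, n, v); v.zero? ? chp(m, n, 0) : chr(m, n, v - 1) * chp(m, n, v); end

# ch of FCTR(m, n): bound the search by n itself.
def chfctr(m, n); chr(m, n, n); end
```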
Prime number. Say PRIME(n) is true just when n is a prime number. This property is defined as follows.

n > 1 ∧ (∀j < n)(j|n → (j = 0 ∨ Sj = n))

So n is greater than 1 and the successor of any number that divides it is either 1 or n itself.

⁶ In fact, this is a (minor) complication at this stage, but it will be helpful down the road. See p. 626n9.
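A direct Ruby transcription of the defining condition, shortcutting the characteristic-function machinery with Ruby booleans (E12.24 asks for the official chain; the method names here, with Ruby-style `?`, are ours):

```ruby
# j|n abbreviates FCTR(j, n): j + 1 evenly divides n.
def fctr?(j, n); (0..n).any? { |y| (j + 1) * y == n }; end

# PRIME(n): n > 1 and every divisor Sj of n is 1 or n itself.
def prime?(n)
  n > 1 && (0...n).all? { |j| !fctr?(j, n) || j == 0 || j + 1 == n }
end
```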
Prime sequence. Say the primes are π0, π1, .... Let the value of the function pi(n) be πn. Then pi(n) is defined by recursion as follows.

gpi = suc(suc(0))
hpi(y,u) = (μz ≤ u! + 1)(u < z ∧ PRIME(z))

So the first prime, π(0) = 2. And π(Sn) = (μz ≤ π(n)! + 1)(π(n) < z ∧ PRIME(z)). So at any stage, the next prime is the least prime which is greater than π(n). This depends on the point that all the primes ≤ πn are included in the product π(n)!. Let p(n) = π0 × π1 × ... × πn. By a standard argument (see G2 in the arithmetic for Gödel numbering reference, p. 471), p(n) + 1 is not divisible by any of the primes up to πn; so either p(n) + 1 is itself prime, or there is some prime greater than πn but less than p(n) + 1. But since π(n)! is a product including all the primes up to πn, p(n) ≤ π(n)!; so either π(n)! + 1 is prime or there is a prime greater than πn but less than π(n)! + 1, and the next prime is sure to appear in the specified range.
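In Ruby, with a simple primality test standing in for PRIME, the sequence can be sketched by searching upward from the previous prime (the text's bounded μ with its π(n)! + 1 bound guarantees this search terminates):

```ruby
def prime?(n); n > 1 && (2...n).all? { |d| n % d != 0 }; end

# pi(0) = 2; pi(Sn) = the least prime greater than pi(n).
def pi(n)
  n.zero? ? 2 : ((pi(n - 1) + 1)..).find { |y| prime?(y) }
end
```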
Prime exponent. Let exp(n,i) be the (possibly 0) exponent of πi in the unique prime factorization of n. Then exp(n,i) is recursive. This function may be defined as follows.

(μx ≤ n)(pred(πi^x)|n ∧ ~pred(πi^(x+1))|n)

And, of course, πi is just pi(i). Observe that no exponent in the prime factorization of n is greater than n itself (for any x ≥ 2, x^n ≥ n) so the bound is safe. This function returns the first x such that πi^x divides n but πi^(x+1) does not.
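A Ruby sketch, using % in place of the official factor relation:

```ruby
def prime?(n); n > 1 && (2...n).all? { |d| n % d != 0 }; end
def pi(n); n.zero? ? 2 : ((pi(n - 1) + 1)..).find { |y| prime?(y) }; end

# exp(n, i): least x <= n such that pi(i)**x divides n but
# pi(i)**(x + 1) does not (0 when n is 0, matching the bounded mu).
def exp(n, i)
  p = pi(i)
  (0..n).find { |x| n % p**x == 0 && n % p**(x + 1) != 0 } || 0
end
```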
Prime length. Say a prime πa is included in the factorization of n just in case a ≤ b and for some exponent eb > 0, πb^eb is a member of the factorization of n. So we think of a prime factorization as,

π0^e0 × π1^e1 × ... × πb^eb

where eb > 0, but exponents for prior members of the series may be zero or not. Then len(n) is the number of primes included in the prime factorization of n; so len(0) = len(1) = 0 and otherwise, since the series of primes begins with zero, len(n) = b + 1. For this set,

len(n) =def (μy ≤ n)(∀z : y ≤ z ≤ n)exp(n,z) = 0

Officially: (μy ≤ n)(∀z ≤ n)(y ≤ z → exp(n,z) = 0). So we find the least y such that none of the primes between y and n are part of the factorization of n; but then all of the primes prior to it are members of the factorization, so that y numbers the length of the factorization. This depends on its being the case that n < πn, so that πn is never included in the factorization of n.
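The official definition transcribes almost word for word (helpers as in the previous sketches; a sketch, not tuned for speed):

```ruby
def prime?(n); n > 1 && (2...n).all? { |d| n % d != 0 }; end
def pi(n); n.zero? ? 2 : ((pi(n - 1) + 1)..).find { |y| prime?(y) }; end
def exp(n, i)
  p = pi(i)
  (0..n).find { |x| n % p**x == 0 && n % p**(x + 1) != 0 } || 0
end

# len(n): least y <= n such that exp(n, z) = 0 for all z with y <= z <= n.
def len(n)
  (0..n).find { |y| (y..n).all? { |z| exp(n, z) == 0 } }
end
```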
E12.23. Returning to your file recursive1.rb from E12.3 and E12.22, extend the sequence of functions to include the characteristic function for FCTR(m,n). You will need to begin with cheq(a,b) for the characteristic function of a = b and then the characteristic function of Sm × y = n. Then you will require a function like chR(m,n,v) corresponding to (∃y ≤ v)(Sm × y = n). Calculate some values of these functions and print the results, along with your program.
E12.24. Continue in your file recursive1.rb to build the characteristic function for PRIME(n). You will have to build gradually to this result (treating the existential quantifier as primitive so that the universal quantifier appears as ~(∃j < n)~P). You will need chless(a,b) and then chneg(a), chdsj(a,b), chimp(a,b), and chand(a,b) for the relevant truth functions. With these in hand, you can build a function chp(n,j) corresponding to ~(j|n → (j = 0 ∨ Sj = n)). And with that, you can obtain a function like R<(n,j,v) and then the characteristic function of the bounded existential. Then, finally, build prime(n). Calculate some values of these functions and print the results, along with your program.
E12.25. Continue in your file recursive1.rb to generate lcm(m,n), the least common multiple of Sm and Sn, that is, (μy ≤ Sm × Sn)(y > 0 ∧ m|y ∧ n|y). For this you will need the characteristic function of y > 0 ∧ m|y ∧ n|y; and then one like chR(m,n,v) corresponding to (∃y ≤ v)(y > 0 ∧ m|y ∧ n|y). Then you will be able to find the function like q(m,n,v) corresponding to (μy ≤ v)(y > 0 ∧ m|y ∧ n|y) and finally the lcm.


*E12.26. Functions f1(x̄,y) and f2(x̄,y) are defined by simultaneous (mutual) recursion just in case,

f1(x̄,0) = g1(x̄)
f2(x̄,0) = g2(x̄)
f1(x̄,Sy) = h1(x̄,y,f1(x̄,y),f2(x̄,y))
f2(x̄,Sy) = h2(x̄,y,f1(x̄,y),f2(x̄,y))

Show that f1 and f2 so defined are recursive. Hint: Let F(x̄,y) = 2^f1(x̄,y) × 3^f2(x̄,y); then find G(x̄) in terms of g1 and g2, and H(x̄,y,u) in terms of h1 and h2 so that F(x̄,0) = G(x̄) and F(x̄,Sy) = H(x̄,y,F(x̄,y)). So F(x̄,y) is recursive. Then f1(x̄,y) = exp(F(x̄,y),0) and f2(x̄,y) = exp(F(x̄,y),1); so f1 and f2 are recursive.

12.4.3 Arithmetization

Our aim in this section is to assign numbers to expressions and sequences of expressions in LNT and build a (primitive) recursive property PRFQ(m,n) which is true just in case m numbers a sequence of expressions that is a proof of the expression numbered by n. This requires a number of steps. In this part, we develop at least the notion of a sentential proof, which should be sufficient for the general idea. The next section develops details for the full quantificational case.

Gödel numbers. We begin with a strategy familiar from 10.2.2 and 10.3.2 (to which you may find it helpful to refer), now adapted to LNT. The idea is to assign numbers to symbols and expressions of LNT. Then we shall be able to operate on the associated numbers by means of ordinary numerical functions. Insofar as the variable symbols in any quantificational language are countable, they are capable of being sorted into series, x1, x2 .... Supposing that this is done, begin by assigning to each symbol in LNT an integer g[·] called its Gödel number.
a. g[(] = 3
b. g[)] = 5
c. g[~] = 7
d. g[→] = 9
e. g[=] = 11
f. g[∀] = 13
g. g[∅] = 15
h. g[S] = 17
i. g[+] = 19
j. g[×] = 21
k. g[xi] = 23 + 2i


So, for example, gx5 = 23 + 2×5 = 33. Clearly each symbol gets a unique Gödel number, and Gödel numbers for individual symbols are odd positive integers.7

Now we are in a position to assign a Gödel number to each expression as follows: Where α0, α1 . . . αn are the symbols, in order from left to right, in some expression Q,

  gQ = 2^gα0 × 3^gα1 × 5^gα2 × . . . × πn^gαn

where 2, 3, 5 . . . πn are the first primes. So, for example, g(x0 × x5) = 2^23 × 3^21 × 5^33. This is a big integer. But it is an integer, and different expressions get different Gödel numbers. Given a Gödel number, we can find the corresponding expression by finding its prime factorization; then if there are twenty-three 2s in the factorization, the first symbol is x0; if there are twenty-one 3s, the second symbol is ×; and so forth. Notice that numbers for individual symbols are odd, where numbers for expressions are even.
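These assignments are easy to mechanize. Here is a minimal Python sketch of encoding and decoding; the ASCII names for symbols and the helper names are our own stand-ins, not the book's:

```python
# Gödel numbers for the symbols of LNT, per the table above.
# ASCII stand-ins: 'A' for the universal quantifier, '0' for the constant.
SYM = {'(': 3, ')': 5, '~': 7, '->': 9, '=': 11,
       'A': 13, '0': 15, 'S': 17, '+': 19, '*': 21}

def gsym(s):
    """Gödel number of a single symbol; variables 'xi' get 23 + 2i."""
    if s.startswith('x'):
        return 23 + 2 * int(s[1:])
    return SYM[s]

def primes(n):
    """First n primes by trial division (fine for tiny n)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def gnum(symbols):
    """Gödel number of an expression: 2^g(a0) * 3^g(a1) * ..."""
    g = 1
    for p, s in zip(primes(len(symbols)), symbols):
        g *= p ** gsym(s)
    return g

def decode(n):
    """Recover the symbol codes from a Gödel number by factorization."""
    codes, k = [], 2
    while n > 1:
        e = 0
        while n % k == 0:
            n //= k
            e += 1
        if e:
            codes.append(e)
        k += 1
    return codes

assert gsym('x5') == 33
assert gnum(['x0', '*', 'x5']) == 2**23 * 3**21 * 5**33
assert decode(gnum(['x0', '*', 'x5'])) == [23, 21, 33]
```

Decoding walks the primes in order, so it recovers the symbol codes exactly as in the text's factorization argument.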
Now consider a sequence of expressions, Q0, Q1 . . . Qn (as in an axiomatic derivation). These expressions have Gödel numbers g0, g1 . . . gn. Then,

  π0^g0 × π1^g1 × π2^g2 × . . . × πn^gn

is the super Gödel number for the sequence Q0, Q1 . . . Qn. Again, given a super Gödel number, we can find the corresponding expressions by finding its prime factorization; then, if there are g0 2s, we can proceed to the prime factorization of g0, to discover the symbols of the first expression; and so forth. Observe that super Gödel numbers are even, but are distinct from Gödel numbers for expressions, insofar as the exponent of 2 in the factorization of any expression is odd (the first element of any expression is a symbol, and so has an odd number), while the exponent of 2 in the factorization of any super Gödel number is even (the first element of a sequence is an expression, and so has an even number).

Recall that exp(n, i) returns the exponent of πi in the prime factorization of n. So for a Gödel number n, exp(n, i) returns the code of αi; and for a super Gödel number n, exp(n, i) returns the code of Qi.

Where P is any expression, let ⌜P⌝ be its Gödel number; and also write ⌜P⌝ for the standard numeral for its Gödel number (context distinguishes the number from its numeral). In this case, say, ⌜∅⌝ = 2^15 rather than 15, for we take the number of the bracketed expression.
7 There are many ways to do this; we pick just one.


Concatenation. The function cncat(m, n), ordinarily indicated m ⋆ n, returns the Gödel number of the expression with Gödel number m followed by the expression with Gödel number n. So ⌜x × y⌝ ⋆ ⌜= z⌝ = ⌜x × y = z⌝, for some numbered variables x, y and z. This function is (primitive) recursive. Recall that len(n) is recursive and returns the number of distinct prime factors of n. Set m ⋆ n to,

  (μx ≤ Bm,n)[(∀i < len(m)){exp(x, i) = exp(m, i)} ∧ (∀i < len(n)){exp(x, i + len(m)) = exp(n, i)}]

We search for the least number x such that exponents of initial primes in its factorization match the exponents of primes in m, and exponents of later primes match exponents of primes in n. The bounded quantifiers take i < len(m) and i < len(n) insofar as len returns the number of primes, but exp(x, i) starts the list of primes at π0; so if len(m) = 3, its primes are π0, π1 and π2. So the first len(m) exponents of x are the same as the exponents in m, and the next len(n) exponents of x are the same as the exponents in n.

To ensure that the function is recursive, we use the bounded least element quantifier as main operator, where Bm,n is the bound under which we search for x. In this case it is sufficient to set

  Bm,n = π_len(m)+len(n)^((m+n) × (len(m)+len(n)))

The idea is that all the primes in x will be ≤ π_len(m)+len(n). And any exponent in the factorization of m must be ≤ m and any exponent for n must be ≤ n; so that m + n is greater than any exponent in the factorization of x. So B results from multiplying a prime larger than any in x, to a power greater than that of any in x, together as many times as there are primes in x; so x must be smaller than B.

Observe that (m ⋆ n) ⋆ o = m ⋆ (n ⋆ o); so we often drop parentheses for the concatenation operation.
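Computationally one need not search under Bm,n; the exponent lists of m and n can be spliced directly, which is exactly the value the least-x condition pins down. A small sketch (feasible only for tiny Gödel numbers; the helper names are ours):

```python
def factor_exponents(n):
    """Exponents of successive primes 2, 3, 5, ... in n's factorization."""
    exps, k = [], 2
    while n > 1:
        if all(k % d for d in range(2, k)):  # k prime
            e = 0
            while n % k == 0:
                n //= k
                e += 1
            exps.append(e)
        k += 1
    return exps

def from_exponents(es):
    """Rebuild a number from exponents on successive primes."""
    out, k, i = 1, 2, 0
    while i < len(es):
        if all(k % d for d in range(2, k)):
            out *= k ** es[i]
            i += 1
        k += 1
    return out

def cncat(m, n):
    """m * n: number of expression m followed by expression n."""
    return from_exponents(factor_exponents(m) + factor_exponents(n))

# ⌜S⌝ ⋆ ⌜∅⌝ = ⌜S∅⌝ : 2^17 ⋆ 2^15 = 2^17 * 3^15
assert cncat(2**17, 2**15) == 2**17 * 3**15
# associativity: (m ⋆ n) ⋆ o = m ⋆ (n ⋆ o)
m, n, o = 2**3, 2**5 * 3**7, 2**11
assert cncat(cncat(m, n), o) == cncat(m, cncat(n, o))
```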
Terms and Atomics. TERM(n) is true iff n is the Gödel number of a term. Think of the trees on which we show that an expression is a term. Put formally, for any term tn, there is a term sequence t0, t1 . . . tn such that each expression is either,

  a. ∅
  b. a variable
  c. Stj where tj occurs earlier in the sequence
  d. +titj where ti and tj occur earlier in the sequence
  e. ×titj where ti and tj occur earlier in the sequence

where we represent terms in unabbreviated form. A term is the last element of such a sequence. Let us try to say this.
First, VAR(n) is true just in case n is the Gödel number of a variable, conceived as an expression rather than a symbol. Then VAR is (primitive) recursive. Set,

  VAR(n) =def (∃x ≤ n)(n = 2^(23+2x))

If there is such an x, then n must be the Gödel number of a variable. And it is clear that this x is less than n itself. So the result is recursive.
Now TERMSEQ(m, n) is true when m is the super Gödel number of a sequence of expressions whose last member has Gödel number n. For TERMSEQ(m, n) set,

  exp(m, len(m) − 1) = n ∧ (∀k < len(m)){
    exp(m, k) = ⌜∅⌝ ∨ VAR(exp(m, k)) ∨
    (∃j < k)[exp(m, k) = ⌜S⌝ ⋆ exp(m, j)] ∨
    (∃i < k)(∃j < k)[exp(m, k) = ⌜+⌝ ⋆ exp(m, i) ⋆ exp(m, j)] ∨
    (∃i < k)(∃j < k)[exp(m, k) = ⌜×⌝ ⋆ exp(m, i) ⋆ exp(m, j)]}

Recall that len(m) returns the number of primes in the prime factorization of m; so supposing that m is other than zero or one, len(m) ≥ 1, and if there is one prime it is π0, if there are two primes they are π0 and π1, etc. So the last member of the sequence has Gödel number n, and any member of the sequence is a constant or a variable, or made up in the usual way by prior members.
Then set TERM(n) as follows,

  TERM(n) =def (∃x ≤ Bn)TERMSEQ(x, n)

If some x numbers a term sequence for n, then n is a term. In this case, the Gödel numbers of all prior terms in the sequence must be less than n. Further, the number of terms in the sequence is the same as the number of variables and constants together with the number of function symbols in the term (one term for each variable and constant, and another corresponding to each function symbol); so the number of terms in the sequence is the same as len(n); so all the primes in the sequence are < π_len(n). So multiply π_len(n)^n together len(n) times and set Bn = π_len(n)^(n × len(n)). We take a prime π_len(n) greater than all the primes in the sequence, to a power n greater than all the powers in the sequence, and multiply it together as many times as there are members of the sequence. The result must be greater than x, the number of the term sequence.
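Actual super Gödel numbers are far too large to compute with, so this sketch represents a sequence by its list of expression numbers (conceptually, the exponents of the super Gödel number); the clauses of TERMSEQ then carry over directly. A witness sequence is supplied rather than searched for under Bn. The encodings follow the table above; the helper names are ours:

```python
EMPTY = 2**15                          # ⌜∅⌝
S, PLUS, TIMES = 2**17, 2**19, 2**21   # one-symbol expressions ⌜S⌝, ⌜+⌝, ⌜×⌝

def exps(n):
    """Exponents of successive primes in n."""
    out, k = [], 2
    while n > 1:
        if all(k % d for d in range(2, k)):
            e = 0
            while n % k == 0:
                n //= k
                e += 1
            out.append(e)
        k += 1
    return out

def unexps(es):
    out, k, i = 1, 2, 0
    while i < len(es):
        if all(k % d for d in range(2, k)):
            out *= k ** es[i]
            i += 1
        k += 1
    return out

def cat(m, n):
    """Concatenation m ⋆ n on expression Gödel numbers."""
    return unexps(exps(m) + exps(n))

def VAR(n):
    # n = 2^(23 + 2x): a power of 2 whose exponent is 23, 25, 27, ...
    return n > 0 and n & (n - 1) == 0 and \
        n.bit_length() >= 24 and (n.bit_length() - 24) % 2 == 0

def termseq(seq):
    """seq: list of expression Gödel numbers; checks the TERMSEQ clauses."""
    for k, e in enumerate(seq):
        prior = seq[:k]
        if e == EMPTY or VAR(e):
            continue
        if any(e == cat(S, t) for t in prior):
            continue
        if any(e == cat(cat(f, a), b)
               for f in (PLUS, TIMES) for a in prior for b in prior):
            continue
        return False
    return bool(seq)

def TERM(n, seq):
    """n numbers a term, as witnessed by the given term sequence."""
    return termseq(seq) and seq[-1] == n

x0 = 2**23
s0 = cat(S, EMPTY)                 # ⌜S∅⌝
t = cat(cat(PLUS, s0), x0)         # ⌜+S∅x0⌝
assert TERM(t, [EMPTY, s0, x0, t])
assert not termseq([t])            # no antecedents supplied
```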


Finally, ATOM(n) is true iff n is the number of an atomic formula. The only atomic formulas of LNT are of the form =t1t2. So it is sufficient to set,

  ATOM(n) =def (∃x ≤ n)(∃y ≤ n)[TERM(x) ∧ TERM(y) ∧ n = ⌜=⌝ ⋆ x ⋆ y]

Clearly the numbers of t1 and t2 are ≤ n itself.

Formulas. WFF(n) is to be true iff n is the number of a (well-formed) formula. Again, think of the tree by which a formula is formed. There is a sequence of which each member is,

  a. an atomic
  b. ~P for some previous member of the sequence P
  c. (P → Q) for previous members of the sequence P and Q
  d. ∀xP for some previous member of the sequence P and variable x

So, on the model of what has gone before, we let FORMSEQ(m, n) be true when m is the super Gödel number of a sequence of formulas whose last member has Gödel number n. For FORMSEQ(m, n) set,

  exp(m, len(m) − 1) = n ∧ (∀k < len(m)){
    ATOM(exp(m, k)) ∨
    (∃j < k)[exp(m, k) = ⌜~⌝ ⋆ exp(m, j)] ∨
    (∃i < k)(∃j < k)[exp(m, k) = ⌜(⌝ ⋆ exp(m, i) ⋆ ⌜→⌝ ⋆ exp(m, j) ⋆ ⌜)⌝] ∨
    (∃i < k)(∃j < n)[VAR(j) ∧ exp(m, k) = ⌜∀⌝ ⋆ j ⋆ exp(m, i)]}

So a formula is the last member of a sequence each member of which is an atomic, or formed from previous members in the usual way. Clearly the number of a variable in an expression with number n is itself ≤ n. Then,

  WFF(n) =def (∃x ≤ Bn)FORMSEQ(x, n)

An expression is a formula iff there is a formula sequence of which it is the last member. Again, the Gödel numbers of all the prior formulas in the sequence must be ≤ n. And there are as many members of the sequence as there are atomics and operator symbols in the formula numbered n. So all the primes are ≤ π_len(n); so multiply π_len(n)^n together len(n) times and set Bn = π_len(n)^(n × len(n)).
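The same pattern as term sequences applies, and it is perhaps clearest with formulas represented as nested tuples rather than Gödel numbers. A sketch (the tuple representation is our own):

```python
# Formulas as nested tuples: atomics ('=', t1, t2), then ('~', P),
# ('->', P, Q), and ('A', v, P) for the universal quantifier.

def atom(f):
    return isinstance(f, tuple) and len(f) == 3 and f[0] == '='

def formseq(seq):
    """Each member is an atomic, or built from earlier members."""
    for k, f in enumerate(seq):
        prior = seq[:k]
        if atom(f):
            continue
        if f[0] == '~' and f[1] in prior:
            continue
        if f[0] == '->' and f[1] in prior and f[2] in prior:
            continue
        if f[0] == 'A' and f[2] in prior:   # f[1] is the bound variable
            continue
        return False
    return bool(seq)

def wff(f, seq):
    """f is a formula, as witnessed by the given formula sequence."""
    return formseq(seq) and seq[-1] == f

a = ('=', 'x0', 'x1')
f = ('A', 'x0', ('~', a))
assert wff(f, [a, ('~', a), f])
assert not formseq([f])   # no antecedents supplied
```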


Sentential Proof. SENTPRF(m, n) is to be true iff m is the super Gödel number of a sequence of formulas that is a (sentential) proof of the formula with Gödel number n. We revert to the relatively simple axiomatic system of chapter 3. So, for example, A1 is of the sort (P → (Q → P)), and the only rule is MP. For the sentential case we need SENTAXIOM(n), true when n is the number of an axiom. For this,

  AXIOM1(n) =def (∃x ≤ n)(∃y ≤ n)[WFF(x) ∧ WFF(y) ∧ n = ⌜(⌝ ⋆ x ⋆ ⌜→⌝ ⋆ ⌜(⌝ ⋆ y ⋆ ⌜→⌝ ⋆ x ⋆ ⌜))⌝]

  AXIOM2(n) =def Homework.

  AXIOM3(n) =def Homework.

Then,

  SENTAXIOM(n) =def AXIOM1(n) ∨ AXIOM2(n) ∨ AXIOM3(n)

In the next section, we will add all the logical axioms plus the axioms for Q. But this is all that is required for proofs of theorems of sentential logic.
Now cnd(n, o) = m when n = ⌜P⌝, o = ⌜Q⌝ and m = ⌜(P → Q)⌝. And MP(m, n, o) is true when the formula with Gödel number o follows by MP from ones with numbers m and n.

  cnd(n, o) = ⌜(⌝ ⋆ n ⋆ ⌜→⌝ ⋆ o ⋆ ⌜)⌝

  MP(m, n, o) =def cnd(n, o) = m

So m numbers the conditional, n its antecedent, and o the consequent.

And SENTPRF(m, n) when m is the super Gödel number of a sequence that is a proof whose last member has Gödel number n. This works like TERMSEQ and FORMSEQ. For SENTPRF set,

  exp(m, len(m) − 1) = n ∧ (∀k < len(m)){
    SENTAXIOM(exp(m, k)) ∨
    (∃i < k)(∃j < k)MP(exp(m, i), exp(m, j), exp(m, k))}

So every formula is either an axiom, or follows from previous members by MP. It is a significant matter to have shown that there is such a function! Again, in the next section, we will extend this notion to include the rule Gen.

This construction for SENTPRF exhibits the essential steps that are required for the parallel relation PRFQ(m, n) for theorems of Q. That discussion is taken up in the following section, and adds considerable detail. It is not clear that the detail is required for understanding results to follow; though of course, to the extent that those results rely on the recursive PRFQ relation, the detail underlies proof of the results!


E12.27. Find Gödel numbers for each of the following. Treat the first as an expression, rather than as a simple symbol; the last is a sequence of expressions. For the latter two, you need not do the calculation!

  x2
  x0 = x1
  x0 = x1, ∅ = x0, ∅ = x1

E12.28. Complete the cases for AXIOM2(n) and AXIOM3(n).

E12.29. In chapter 8 we define the notion of a normal sentential form (p. 385). Using ATOM from above, define a recursive relation NORM(n) for LNT. Hint: You will need a formula sequence to do this.

12.4.4 Completing the Construction

Quantifier rules for derivations include axioms like (A4), (∀vP → Pᵛₛ), where Pᵛₛ is P with term s substituted for free occurrences of variable v, and s is free for v in P. This is easy enough to apply in practice. But it takes some work to represent. We tackle the problem piece by piece.
Substitution in terms. Say t = ⌜t⌝, v = ⌜v⌝, and s = ⌜s⌝ for some terms t and s, and variable v. Then TERMSUB(t, v, s, u) is true when u is the Gödel number of tᵛₛ. For this, we begin with a term sequence (with Gödel number m) for t, and consider a parallel sequence, not necessarily a term sequence (with Gödel number n), that includes modified versions of the terms in the sequence with Gödel number m. For TERMSUB(t, v, s, u) set,

  (∃m ≤ X)(∃n ≤ Y)[(TERMSEQ(m, t) ∧ exp(n, len(n) − 1) = u) ∧ (∀k < len(n)){
    (exp(m, k) = ⌜∅⌝ ∧ exp(n, k) = ⌜∅⌝) ∨
    (VAR(exp(m, k)) ∧ exp(m, k) ≠ v ∧ exp(n, k) = exp(m, k)) ∨
    (VAR(exp(m, k)) ∧ exp(m, k) = v ∧ exp(n, k) = s) ∨
    (∃i < k)(exp(m, k) = ⌜S⌝ ⋆ exp(m, i) ∧ exp(n, k) = ⌜S⌝ ⋆ exp(n, i)) ∨
    (∃i < k)(∃j < k)(exp(m, k) = ⌜+⌝ ⋆ exp(m, i) ⋆ exp(m, j) ∧ exp(n, k) = ⌜+⌝ ⋆ exp(n, i) ⋆ exp(n, j)) ∨
    (∃i < k)(∃j < k)(exp(m, k) = ⌜×⌝ ⋆ exp(m, i) ⋆ exp(m, j) ∧ exp(n, k) = ⌜×⌝ ⋆ exp(n, i) ⋆ exp(n, j))}]

So the sequence for tᵛₛ (numbered by n) is like one of our unabbreviating trees from chapter 2. In any place where the sequence for t (numbered by m) numbers ∅, the sequence for tᵛₛ numbers ∅. Where the sequence for t numbers a variable other than v, the sequence for tᵛₛ numbers the same variable. But where the sequence for t numbers variable v, the sequence for tᵛₛ numbers s. Then later parts are built out of prior in parallel. The second sequence may not itself be a term sequence, insofar as it need not include all the antecedents to s (just as an unabbreviating tree would not include all the parts of a resultant term or formula).

In this case, reasoning as for WFF, the Gödel numbers in the sequence with number m must be less than t, and numbers in the sequence with number n must be less than u. And primes in the sequences range up to π_len(t). So it is sufficient to set X = π_len(t)^(t × len(t)) and Y = π_len(t)^(u × len(t)).
Substitution in atomics. Say p = ⌜P⌝, v = ⌜v⌝, and s = ⌜s⌝ for some atomic formula P, variable v and term s. Then ATOMSUB(p, v, s, u) is true when u is the Gödel number of Pᵛₛ. The condition is straightforward given TERMSUB. For ATOMSUB(p, v, s, u),

  (∃i ≤ p)(∃j ≤ p)(∃i′ ≤ u)(∃j′ ≤ u)[TERM(i) ∧ TERM(j) ∧ p = ⌜=⌝ ⋆ i ⋆ j ∧ TERMSUB(i, v, s, i′) ∧ TERMSUB(j, v, s, j′) ∧ u = ⌜=⌝ ⋆ i′ ⋆ j′]

Pᵛₛ simply substitutes into the terms on either side of the equals sign.
Substitution into formulas. In the general case, Pᵛₛ is complicated insofar as s replaces only free instances of v. Again, we build a parallel sequence with number n. No replacements are carried forward in subformulas beginning with a quantifier binding instances of variable v. Where p = ⌜P⌝, v = ⌜v⌝, and s = ⌜s⌝ for an arbitrary formula P, variable v and term s, FORMSUB(p, v, s, u) is true when u is the Gödel number of Pᵛₛ. For this set,

  (∃m ≤ X)(∃n ≤ Y)[(FORMSEQ(m, p) ∧ exp(n, len(n) − 1) = u) ∧ (∀k < len(n)){
    (ATOM(exp(m, k)) ∧ ATOMSUB(exp(m, k), v, s, exp(n, k))) ∨
    (∃i < k)(exp(m, k) = ⌜~⌝ ⋆ exp(m, i) ∧ exp(n, k) = ⌜~⌝ ⋆ exp(n, i)) ∨
    (∃i < k)(∃j < k)(exp(m, k) = ⌜(⌝ ⋆ exp(m, i) ⋆ ⌜→⌝ ⋆ exp(m, j) ⋆ ⌜)⌝ ∧ exp(n, k) = ⌜(⌝ ⋆ exp(n, i) ⋆ ⌜→⌝ ⋆ exp(n, j) ⋆ ⌜)⌝) ∨
    (∃i < k)(∃j < p)(VAR(j) ∧ j ≠ v ∧ exp(m, k) = ⌜∀⌝ ⋆ j ⋆ exp(m, i) ∧ exp(n, k) = ⌜∀⌝ ⋆ j ⋆ exp(n, i)) ∨
    (∃i < k)(∃j < p)(VAR(j) ∧ j = v ∧ exp(m, k) = ⌜∀⌝ ⋆ j ⋆ exp(m, i) ∧ exp(n, k) = exp(m, k))}]

So substitutions are made in atomics, and carried forward in the parallel sequence so long as no quantifier binds variable v; at that stage, the sequence reverts to the form without substitution. Again, set X = π_len(p)^(p × len(p)) and Y = π_len(p)^(u × len(p)).

Given FORMSUB(p, v, s, u), there is a corresponding function formsub(p, v, s) = (μu ≤ Z)FORMSUB(p, v, s, u). In this case, the number of symbols in Pᵛₛ is sure to be no greater than the number of symbols in P times the number of symbols in s. And the Gödel number of each symbol is no greater than p + s. So it is sufficient to set Z = π_(len(p)×len(s))^((p+s) × len(p) × len(s)). Again, we take a prime at least as great as that of any symbol, to a power greater than that of any exponent, and multiply it together as many times as there are symbols.
Free and bound variables. FREE(p, v) is true when v is the Gödel number of a variable that is free in a term or formula with Gödel number p. For a given variable xi, initially assigned number 23 + 2i, ⌜xi⌝ = 2^(23+2i); and ⌜xi⌝ × 2² = 2^(23+2i+2) is the number of the next variable. In particular then, for v the number of a variable, v × 2² numbers a different variable. The idea is that if there is some change in a formula upon substitution of a variable different from v, then v must have been free in the original formula. For FREE(p, v) set,

  VAR(v) ∧ [(TERM(p) ∧ ~TERMSUB(p, v, v × 2², p)) ∨ (WFF(p) ∧ ~FORMSUB(p, v, v × 2², p))]

So v is free iff the result upon substitution is other than the original expression. Given FREE(p, v), it is a simple matter to specify SENT(n), true when n numbers a sentence.

  SENT(n) =def WFF(n) ∧ (∀x < n)[VAR(x) → ~FREE(n, x)]

So n numbers a sentence if it numbers a formula and nothing is the number of a variable free in the formula numbered by n.
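The substitution functions and the FREE trick can be mirrored on formulas represented as nested tuples. This is our own sketch: it ignores the FREEFOR worry about capture, and it uses a primed variable name where the text multiplies the code by 2² to get the next variable:

```python
# Formulas: ('=', t1, t2), ('~', P), ('->', P, Q), ('A', v, P);
# terms: '0', variables 'x0', 'x1', ..., ('S', t), ('+', t, u), ('*', t, u).

def sub(e, v, s):
    """e with free occurrences of variable v replaced by term s."""
    if e == v:
        return s
    if isinstance(e, str):
        return e
    if e[0] == 'A' and e[1] == v:   # quantifier binds v: stop substituting
        return e
    return (e[0],) + tuple(sub(p, v, s) for p in e[1:])

def free(e, v):
    """v is free in e iff substituting a different variable changes e.
    v + "'" plays the role of the 'next' variable (assumed fresh)."""
    return sub(e, v, v + "'") != e

def sent(f, variables):
    """f is a sentence relative to the given stock of variables."""
    return all(not free(f, v) for v in variables)

P = ('A', 'x0', ('=', 'x0', 'x1'))   # ∀x0 (x0 = x1)
assert not free(P, 'x0') and free(P, 'x1')
assert sub(P, 'x1', ('S', '0')) == ('A', 'x0', ('=', 'x0', ('S', '0')))
assert not sent(P, ['x0', 'x1']) and sent(('A', 'x1', P), ['x0', 'x1'])
```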
Finally, suppose s = ⌜s⌝ and v = ⌜v⌝; then FREEFOR(s, v, u) is true iff s is free for v in the formula numbered by u. For this, we set up a modified formula sequence that identifies just admissible subformulas, ones where s is free for v, in the formula numbered by u. For FFSEQ(m, s, v, u) set,

  exp(m, len(m) − 1) = u ∧ (∀k < len(m)){
    ATOM(exp(m, k)) ∨
    (∃j < k)[exp(m, k) = ⌜~⌝ ⋆ exp(m, j)] ∨
    (∃i < k)(∃j < k)[exp(m, k) = ⌜(⌝ ⋆ exp(m, i) ⋆ ⌜→⌝ ⋆ exp(m, j) ⋆ ⌜)⌝] ∨
    (∃j < u)[WFF(j) ∧ exp(m, k) = ⌜∀⌝ ⋆ v ⋆ j] ∨
    (∃i < k)(∃j < u)[VAR(j) ∧ j ≠ v ∧ (FREE(s, j) → ~FREE(exp(m, i), v)) ∧ exp(m, k) = ⌜∀⌝ ⋆ j ⋆ exp(m, i)]}

If the main operator of a subformula Q binds variable v, then no variables in s are bound upon substitution, because there are no substitutions: only free instances of v are replaced. Observe that this Q need not appear earlier in the sequence, as any formula with the v quantifier satisfies the condition. Alternatively, if the main operator binds a different variable, we require that either that variable is not free in s or v is not free in Q, else variables in s become bound upon substitution. Given this,

  FREEFOR(s, v, u) =def (∃x < Bu)FFSEQ(x, s, v, u)

In this case, every member of the sequence for FFSEQ is a member of the FORMSEQ for u, so Bu may be set as before.
Proofs. After all this work, we are finally ready for axiom 4. AXIOM4(n) obtains when n is the Gödel number of an instance of A4.

  (∃p ≤ n)(∃v ≤ n)(∃s ≤ n)[WFF(p) ∧ VAR(v) ∧ TERM(s) ∧ FREEFOR(s, v, p) ∧ n = cnd(⌜∀⌝ ⋆ v ⋆ p, formsub(p, v, s))]

So there is a formula P, variable v and term s where s is free for v in P; and the axiom is of the form (∀vP → Pᵛₛ).

GEN(m, n) holds when n is the Gödel number of a formula that follows by Gen from a formula with Gödel number m. Hint: you will need to assert the existence of numbers for formulas P, Q and variable v, where v is not free in P. Then simply require that m numbers a formula of the sort (P → Q) and n one of the sort (P → ∀vQ).
After what we have done, axioms for equality and Robinson Arithmetic are not hard. A few are worked as examples.

  AXIOM5(n) =def (∃v ≤ n)[VAR(v) ∧ n = v ⋆ ⌜=⌝ ⋆ v]

For simplicity I drop the unabbreviated style of the original formulas.

Axiom six is of the sort (xi = y) → (hⁿx1 . . . xi . . . xn = hⁿx1 . . . y . . . xn), for function symbol h and variables x1 . . . xn and y. In LNT the function symbol is S, + or ×. Because just a single replacement is made, we do not want to use TERMSUB. However, we are in a position simply to list all the combinations in which one variable is replaced. So, for AXIOM6(n),

  (∃s < n)(∃t < n)(∃x < n)(∃y < n){VAR(x) ∧ VAR(y) ∧ [
    (s = ⌜S⌝ ⋆ x ∧ t = ⌜S⌝ ⋆ y) ∨
    (∃z < n)[VAR(z) ∧ ((s = ⌜+⌝ ⋆ x ⋆ z ∧ t = ⌜+⌝ ⋆ y ⋆ z) ∨ (s = ⌜+⌝ ⋆ z ⋆ x ∧ t = ⌜+⌝ ⋆ z ⋆ y))] ∨
    (∃z < n)[VAR(z) ∧ ((s = ⌜×⌝ ⋆ x ⋆ z ∧ t = ⌜×⌝ ⋆ y ⋆ z) ∨ (s = ⌜×⌝ ⋆ z ⋆ x ∧ t = ⌜×⌝ ⋆ z ⋆ y))]
  ] ∧ n = ⌜(=⌝ ⋆ x ⋆ y ⋆ ⌜→ =⌝ ⋆ s ⋆ t ⋆ ⌜)⌝}

So there is a term s, and a term t which replaces one instance of x in s with y. Then the axiom is of the sort (=xy → =st).
Axiom seven is similar. It is stated in terms of atomics of the sort Rⁿx1 . . . xn for relation symbol R and variables x1 . . . xn. In LNT the relation symbol is the equals sign, so these atomics are of the form x = y. Again, because just a single replacement is made, we do not want to use FORMSUB. However, we may proceed by analogy with AXIOM6. This is left as an exercise.

The axioms of Q are particular sentences. So, for example, axiom Q2 is of the sort (Sx = Sy) → (x = y). Let x and y be x0 and x1 respectively. Then,

  AXIOMQ2(n) =def n = ⌜(Sx = Sy) → (x = y)⌝

For ease of reading, I do not reduce it to unabbreviated form. Other axioms of Q may be treated in the same way.
And now it is straightforward to produce generalized versions of AXIOM(n) and PRFQ(m, n). For the latter, it will be convenient to have a relation ICON(m, n, o), true when the formula with Gödel number o is an immediate consequence of ones numbered m and n.

  ICON(m, n, o) =def MP(m, n, o) ∨ (m = n ∧ GEN(n, o))

It is a significant matter to have found these functions. Now we put them to work.

E12.30. Complete the construction with recursive relations for GEN(m, n), AXIOM7(n), the remaining axioms for Robinson Arithmetic, and then AXIOM(n) and PRFQ(m, n).

12.5 Essential Results

In this section, we develop some first fruits of our labor. We shall need some initial theorems, important in their own right. With these theorems in hand, our results follow in short order. The results are developed and extended in later chapters. But it is worth putting them on the table at the start. (And some results at this stage provide a fitting cap to our labors.) We have expended a great deal of energy showing that, under appropriate conditions, recursive functions can be expressed and captured, and that there are recursive functions and relations including PRFQ. Now we put these results to work.

12.5.1 Preliminary Theorems

A couple of definitions: If f is a function from (an initial segment of) N onto some set, so that the objects in the set are f(0), f(1) . . . , say f enumerates the members of the set. A set is recursively enumerable if there is a recursive function that enumerates it. Also, say T is a recursively axiomatized formal theory if there is a recursive relation

First Results of Chapter 12

T12.1 For an interpretation with the required variable-free terms: (a) if R is a relation symbol and R is a relation, and I[R] = R(x1 . . . xn), then R(x1 . . . xn) is expressed by Rx1 . . . xn. And (b) if h is a function symbol and h is a function and I[h] = h(x1 . . . xn), then h(x1 . . . xn) is expressed by hx1 . . . xn = v.

T12.2 Suppose function f(x1 . . . xn) is expressed by formula F(x1 . . . xn, y); then if ⟨⟨m1 . . . mn⟩, a⟩ ∉ f, I[~F(m1 . . . mn, a)] = T.

T12.3 On the standard interpretation N of LNT, each recursive function f(x⃗) is expressed by some formula F(x⃗, v). Corollary: on the standard interpretation N of LNT, each recursive relation R(x⃗) is expressed by some formula R(x⃗).

T12.4 If T includes Q and function f(x1 . . . xn) is captured by formula F(x1 . . . xn, y) so that conditions (f.i) and (f.ii) hold, then if ⟨⟨m1 . . . mn⟩, a⟩ ∉ f then T ⊢ ~F(m1 . . . mn, a).

T12.5 On the standard interpretation N for LNT, (i) Nd[s ≤ t] = S iff Nd[s] ≤ Nd[t], and (ii) Nd[s < t] = S iff Nd[s] < Nd[t].

T12.6 On the standard interpretation N for LNT, (i) Nd[(∀x ≤ t)P] = S iff for every m ≤ Nd[t], Nd(x|m)[P] = S, and (ii) Nd[(∀x < t)P] = S iff for every m < Nd[t], Nd(x|m)[P] = S.

T12.7 On the standard interpretation N for LNT, (i) Nd[(∃x ≤ t)P] = S iff for some m ≤ Nd[t], Nd(x|m)[P] = S, and (ii) Nd[(∃x < t)P] = S iff for some m < Nd[t], Nd(x|m)[P] = S.

T12.8 For any Δ0 sentence P, if N[P] = T, then Q ⊢ND P; and if N[P] ≠ T, then Q ⊢ND ~P.

T12.9 For any Σ1 sentence P, if N[P] = T, then Q ⊢ND P.

T12.10 The original formula by which any recursive function is expressed is Σ1.

T12.11 On the standard interpretation N for LNT, any recursive function is captured in Qs by the original formula by which it is expressed.

T12.12 Suppose f(x⃗, y) results by recursion from functions g(x⃗) and h(x⃗, y, u), where g(x⃗) is captured by some G(x⃗, z) and h(x⃗, y, u) by H(x⃗, y, u, z). Then for the original expression F(x⃗, y, z) of f(x⃗, y), if ⟨⟨m1 . . . mb, n⟩, a⟩ ∈ f, Qs ⊢ ∀w[F(m1 . . . mb, n, w) → w = a].

T12.13 If a function f(x1 . . . xn) is expressed by a Δ0 formula F(x1 . . . xn, y), then there is a Δ0 formula F′ that captures f in Q.

T12.14 For F′(x⃗, y) =def F(x⃗, y) ∧ (∀z ≤ y)[F(x⃗, z) → z = y], and for any n, Q ⊢ ∀x⃗∀y[(F′(x⃗, n) ∧ F′(x⃗, y)) → y = n].

T12.15 If F(x⃗, y) expresses f(x⃗), then F′(x⃗, y) = F(x⃗, y) ∧ (∀z < y)[F(x⃗, z) → z = y] expresses f(x⃗).

T12.16 Any recursive function is captured by a Σ1 formula in Q. Corollary: any recursive relation is captured by a Σ1 formula in Q.


PRFT(m, n) which holds just in case m is the super Gödel number of a proof in T of the formula with Gödel number n. We have seen that Q is recursively axiomatized; but so is PA, and any reasonable theory whose axioms and rules are recursively described.

T12.17. If T is a recursively axiomatized formal theory then the set of theorems of T is recursively enumerable.

Consider pairs ⟨p, t⟩ where p numbers a proof of the theorem numbered t, each such pair itself associated with a number, 2^p × 3^t. Then there is a recursive function from the integers to these codes as follows.

  code(0) = (μz)[(∃p < z)(∃t < z)(z = 2^p × 3^t ∧ PRFT(p, t))]
  code(Sn) = (μz)[(∃p < z)(∃t < z)(z > code(n) ∧ z = 2^p × 3^t ∧ PRFT(p, t))]

So 0 is associated with the least integer that codes a proof of a sentence, 1 with the next, and so forth. Then,

  enum(n) = exp(code(n), 1)

returns the Gödel number of theorem n in this ordering. Recall that π1 is 3; so exp(code(n), 1) returns the number of the proved formula. A given theorem might appear more than once in the enumeration, corresponding to codes with different proofs of it; but this is no problem, as each theorem appears in some position(s) of the list. Observe that we have, for the first time, made use of regular minimization, so that this function is recursive but not primitive recursive. Supposing that T has an infinite number of theorems, there is always some z at which the characteristic function upon which the minimization operates returns zero, so that the function is well-defined. So the theorems of a recursively axiomatized formal theory T are recursively enumerable.
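The search pattern behind code and enum can be mirrored in a short Python sketch. The relation prft here is a toy decidable stand-in (our assumption, purely for illustration), not the book's PRFT; the point is only how the enumeration is computed from any such relation:

```python
def make_enum(prft):
    """Given a decidable prft(p, t) ('p numbers a proof of t'), return
    enum(n): the number of the n-th theorem, ordering proof pairs by
    their codes 2^p * 3^t."""
    def code(n):
        z = code(n - 1) + 1 if n > 0 else 1
        while True:
            p = t = 0
            m = z
            while m % 2 == 0:
                m //= 2
                p += 1
            while m % 3 == 0:
                m //= 3
                t += 1
            if m == 1 and prft(p, t):   # z = 2^p * 3^t and prft holds
                return z
            z += 1
    return lambda n: exp3(code(n))

def exp3(z):
    """Exponent of 3 in z: the 'theorem' component of a code."""
    t = 0
    while z % 3 == 0:
        z //= 3
        t += 1
    return t

# Toy stand-in: t counts as a 'theorem' iff t is even, 'proved' by p = t//2.
enum = make_enum(lambda p, t: t % 2 == 0 and p == t // 2)
assert [enum(n) for n in range(3)] == [0, 2, 4]
```

Just as in the text, a theorem with several proofs would appear at several positions; the enumeration is by code, not by theorem.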
Suppose we add that T is consistent and negation complete. Then there is a recursive relation THRMT(n) true just of numbers for theorems of T. Intuitively, we can enumerate the theorems; then if T is consistent and negation complete, for any sentence P, exactly one of P or ~P must show up in the enumeration. So we can search through the list until we find either P or ~P; and if the one we find is P, then P is a theorem. In particular, we find P or ~P at the position (μj)[enum(j) = ⌜P⌝ ∨ enum(j) = ⌜~P⌝]. For this, first take,

  neg(n) =def ⌜~⌝ ⋆ n

So if n is the number of a formula P, neg(n) is the number of ~P. Now,


T12.18. For any recursively axiomatized, consistent, negation complete formal theory T there is a recursive relation THRMT(n) true just in case n numbers a theorem of T. Set,

  THRMT(n) =def enum((μj){[SENT(n) ∧ (enum(j) = n ∨ enum(j) = neg(n))] ∨ [~SENT(n) ∧ j = 0]}) = n

The inner minimization function returns 0 if n is not the number of a sentence (from the second disjunct), and otherwise the position, in the enumeration, of the sentence numbered n or of its negation (from the first); then enum applied to that value gives the Gödel number of the resultant formula; this is either the number of the sentence numbered n, of its negation, or of the first theorem in the enumeration. If this value is in fact equal to n, then n is the number of a theorem and THRMT(n) is true. Observe that if n is not even the number of a sentence, then it is sure not to be equal to the value of the first theorem, so that THRMT(n) is false; and if n is the number of the first theorem, THRMT(n) is true in the usual way.

Again, we appeal to regular minimization. It is only because T is negation complete that the function to which the minimization operator applies is regular. So long as n numbers a sentence, the characteristic function for the first square brackets is sure to go to zero for one disjunct or the other; and when n does not number a sentence, the function for the second square brackets goes to zero. So the function is well-defined.
Now consider a formula P(x) with free variable x. The diagonalization of P is the formula ∃x(x = ⌜P⌝ ∧ P(x)). So the diagonalization of P is true just when P applies to its own Gödel number. To understand this nomenclature, consider a grid with formulas listed down the left in order of their Gödel numbers, and the integer Gödel numbers across the top.

           a        b        c      . . .
  Pa(x)  Pa(a)    Pa(b)    Pa(c)
  Pb(x)  Pb(a)    Pb(b)    Pb(c)
  Pc(x)  Pc(a)    Pc(b)    Pc(c)
    ⋮

So, going down the main diagonal, formulas are of the sort Pn(n), where the formula numbered n is applied to its Gödel number n.

Let num(n) be the Gödel number of the standard numeral for n. So,

  num(0) = ⌜∅⌝
  num(Sy) = ⌜S⌝ ⋆ num(y)

So num is (primitive) recursive. Now diag(n) is the Gödel number of the diagonalization of the formula with Gödel number n.

  diag(n) =def ⌜∃x(x =⌝ ⋆ num(n) ⋆ ⌜∧⌝ ⋆ n ⋆ ⌜)⌝

Since diag(n) is recursive, for any theory T extending Q there is a formula Diag(x, y) that captures it. So if diag(m) = n, then T ⊢ Diag(m, n) and T ⊢ ∀z[Diag(m, z) → z = n].
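At the level of strings, diagonalization has a quine-like flavor. In this sketch quotation stands in for the standard numeral of a Gödel number (an assumption for illustration; the book's diag operates on numbers):

```python
def diag(formula):
    """String version of diag: from P(x), build ∃x(x = "P" ∧ P(x)),
    with quotation playing the role of the numeral num(⌜P⌝)."""
    return f'∃x(x = "{formula}" ∧ {formula})'

# A(x): informally, 'F applies to the diagonalization of x'.
A = 'F(diag(x))'
H = diag(A)
# H = '∃x(x = "F(diag(x))" ∧ F(diag(x)))'

# The formula quoted inside H is A, and the diagonalization of A is H
# itself: H 'talks about' its own diagonalization.
quoted = H.split('"')[1]
assert quoted == A and diag(quoted) == H
```

This is exactly the fixed-point phenomenon the diagonal lemma below turns into a theorem about provability.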
T12.19. Let T be any theory that extends Q. Then for any formula F(y) containing just the variable y free, there is a sentence H such that T ⊢ H ↔ F(⌜H⌝). The Diagonal Lemma.

Suppose T extends Q; since diag(n) is recursive, there is a formula Diag(x, y) that captures diag. Let A(x) =def ∃y(F(y) ∧ Diag(x, y)) and a = ⌜A⌝, the Gödel number of A. Then set H =def ∃x(x = a ∧ ∃y(F(y) ∧ Diag(x, y))) and h = ⌜H⌝, the Gödel number of H. H is the diagonalization of A; so diag(a) = h. Intuitively, A says F applies to the diagonalization of x; so H says that F applies to the diagonalization of A, which is just to say that, according to H, F(⌜H⌝). Reason as follows.

   1. H ↔ ∃x(x = a ∧ ∃y(F(y) ∧ Diag(x, y)))            from def H
   2. Diag(a, h)                                       from capture
   3. ∀z(Diag(a, z) → z = h)                           from capture
   4. | H                                              A (g ↔I)
   5. | ∃x(x = a ∧ ∃y(F(y) ∧ Diag(x, y)))              1,4 ↔E
   6. | | j = a ∧ ∃y(F(y) ∧ Diag(j, y))                A (g 5∃E)
   7. | | j = a                                        6 ∧E
   8. | | ∃y(F(y) ∧ Diag(j, y))                        6 ∧E
   9. | | | F(k) ∧ Diag(j, k)                          A (g 8∃E)
  10. | | | F(k)                                       9 ∧E
  11. | | | Diag(j, k)                                 9 ∧E
  12. | | | Diag(a, k)                                 11,7 =E
  13. | | | Diag(a, k) → k = h                         3 ∀E
  14. | | | k = h                                      13,12 →E
  15. | | | F(h)                                       10,14 =E
  16. | | F(h)                                         8,9-15 ∃E
  17. | F(h)                                           5,6-16 ∃E
  18. | F(h)                                           A (g ↔I)
  19. | F(h) ∧ Diag(a, h)                              18,2 ∧I
  20. | ∃y(F(y) ∧ Diag(a, y))                          19 ∃I
  21. | a = a                                          =I
  22. | a = a ∧ ∃y(F(y) ∧ Diag(a, y))                  21,20 ∧I
  23. | ∃x(x = a ∧ ∃y(F(y) ∧ Diag(x, y)))              22 ∃I
  24. | H                                              1,23 ↔E
  25. H ↔ F(h)                                         4-17,18-24 ↔I
  26. H ↔ F(⌜H⌝)                                       25 abv

So T ⊢ H ↔ F(⌜H⌝).
If n is such that f(n) = n, then n is said to be a fixed point for f. And by a (possibly strained) analogy, H is said to be a fixed point for F(y).

Given things to come, and especially Gödel's own sentence G which is true though unprovable, it is worth observing that if T is an unsound theory extending Q, then there are false fixed points for F. To see this, recall that if Diag captures diag, then so does Diag ∧ X for any theorem X, where this remains so even if X is among the theorems that are not true. So, for an unsound theory, let Diag* be Diag ∧ X for some false theorem X, and let everything else be the same. Then with Diag* in place of Diag, T ⊢ H* ↔ F(⌜H*⌝); but H* is not true, insofar as it includes the false conjunct X.


Now we are very close to the incompleteness of arithmetic. As a final preliminary,

T12.20. For no consistent theory T that extends Q is there a recursive relation THRMT(n) that is true just in case n is a Gödel number of a theorem of T.

Consider a consistent theory extending Q, and suppose there is a recursive relation THRMT(n) true just in case n numbers a theorem of T. Since T extends Q and THRMT is recursive, with T12.16 there is some formula Thrmt(y) that captures THRMT. And again since T extends Q, by the diagonal lemma T12.19, there is a formula H with Gödel number ⌜H⌝ = h such that,8

  T ⊢ H ↔ ~Thrmt(⌜H⌝)

Suppose T ⊬ H; then H is not a theorem of T, so that h ∉ THRMT; so by capture, T ⊢ ~Thrmt(⌜H⌝); so by ↔E, T ⊢ H. This is impossible; reject the assumption: T ⊢ H. But then H is a theorem of T; so h ∈ THRMT; so by capture, T ⊢ Thrmt(⌜H⌝); so by NB, T ⊢ ~H, and T is inconsistent; but by hypothesis, T is consistent. Reject the original assumption: there is no recursive relation THRMT.

So from T12.18 any recursively axiomatized, consistent, negation complete formal theory has a recursive relation THRMT(n) true just in case n numbers a theorem. But from T12.20 for no consistent theory extending Q is there such a relation. This already suggests results to follow.
*E12.31. Let T be any theory that extends Q. For any formulas F1(y) and F2(y), generalize the diagonal lemma to find sentences H1 and H2 such that,

T ⊢ H1 ↔ F1(⌜H2⌝)
T ⊢ H2 ↔ F2(⌜H1⌝)

Demonstrate your result. Hint: You will want to generalize the notion of diagonalization so that the alternation of formulas F1(z), F2(z), and P is ∃w∃x∃y(w = ⌜P⌝ ∧ x = ⌜F2⌝ ∧ y = ⌜F1⌝ ∧ ∃z(F1(z) ∧ P)). Then you can find a recursive function alt(p, f1, f2) whose output is the number of the alternation of formulas numbered p, f1, and f2, where this function is captured by some formula Alt(w, x, y, z) that itself has Gödel number a. Then alt(a, f1, f2) and alt(a, f2, f1) number the formulas you need for H1 and H2.

8. Often G for Gödel; but this existential variable is not the same as Gödel's constructed sentence; so H, after Gödel.
E12.32. Use your version of the diagonal lemma from E12.31 to provide an alternate
demonstration of T12.20. Hint: You will be able to set up sentences such that
the first says the second is not a theorem, while the second says the first is a
theorem.

12.5.2

First Applications

Here are three quick results from our theorems. Do not let the simplicity of their proofs (if a proof can seem simple after all we have done) distract from the significance of their content!
The Incompleteness of Arithmetic.
T12.21. No consistent, recursively axiomatizable theory extending Q is negation complete. The incompleteness of arithmetic.

Consider a theory T that is a consistent, recursively axiomatizable extension of Q. Then since T is consistent and extends Q, by T12.20 there is no recursive relation THRMT(n) true iff n is the Gödel number of a theorem. Suppose T is negation complete; then since T is also consistent and recursively axiomatized, by T12.18 there is a recursive relation THRMT(n) true iff n is the Gödel number of a theorem. This is impossible; reject the assumption: T is not negation complete.
It immediately follows that Q and PA are not negation complete. But similarly for any consistent recursively axiomatizable theory that extends Q. We already knew that there were formulas P such that Q ⊬ P and Q ⊬ ¬P. But we did not already have this result for PA; and we certainly did not have the result generally for recursively axiomatizable theories extending Q.
There are other ways to obtain this result. We explore Gödel's own strategy in the next chapter. And we shall see an approach from computability in chapter 14. However, this first argument is sufficient to establish the point.


The Decision Problem


It is a short step from the result that if Q is consistent, then no recursive relation identifies the theorems of Q, to the result that if Q is consistent, then no recursive relation identifies the theorems of predicate logic.

T12.22. If Q is consistent, then no recursive relation THRMPL(n) is true iff n numbers a theorem of predicate logic.

Suppose otherwise, that Q is consistent and some recursive relation THRMPL(n) is true iff n numbers a theorem of predicate logic. Let 𝒬 be the conjunction of the axioms of Q; then P is a theorem of Q iff ⊢ (𝒬 → P). Let q = ⌜𝒬⌝; then,

THRMQ(n) =def THRMPL(⌜(⌝ ⋆ q ⋆ ⌜→⌝ ⋆ n ⋆ ⌜)⌝)

defines a recursive relation true iff n numbers a theorem of Q. But, given the consistency of Q, by T12.20 there is no such recursive relation THRMQ(n). Reject the assumption: if Q is consistent, then there is no recursive relation THRMPL(n) true iff n numbers a theorem of predicate logic.
And, of course, given that Q is consistent, it follows that no recursive relation numbers the theorems of predicate logic. From T12.20 no recursive relation numbers the theorems of Q. Now we see that this result extends to the theorems of predicate logic. At this stage, these results may seem to be a sort of curiosity about what recursive functions do. They gain significance when, as we have already hinted can be done, we identify the recursive functions with the computable functions in chapter 14.
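The reduction at the heart of T12.22 can be sketched with formulas as plain strings; the oracle `thrm_pl` below is hypothetical (by the theorem itself, no such recursive oracle exists), which is exactly the point:

```python
# P is a theorem of Q just in case (Q_conj -> P) is a theorem of pure
# predicate logic; so a decision procedure for predicate logic would
# yield one for Q, contradicting T12.20.

def make_thrm_q(thrm_pl, q_conj):
    def thrm_q(p):
        return thrm_pl("(" + q_conj + " -> " + p + ")")
    return thrm_q

# toy stand-in for the (nonexistent) predicate-logic oracle
toy_oracle = lambda s: s == "(Q -> P0)"
thrm_q = make_thrm_q(toy_oracle, "Q")
print(thrm_q("P0"), thrm_q("P1"))  # True False
```

The text performs the same construction on Gödel numbers by concatenation rather than on strings.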
Tarski's Theorems

A couple of related theorems fall under this heading. Say TRUE(n) is true iff n numbers a true sentence of some language L. Suppose True(x) expresses TRUE(n). Then by expression, I[True(⌜P⌝)] = T iff ⌜P⌝ ∈ TRUE; and this iff I[P] = T. So, with some manipulation,

I[True(⌜P⌝) ↔ P] = T

Let us say T is a truth theory for language L iff for any sentence of L, T proves this result.

T ⊢ True(⌜P⌝) ↔ P


Nothing prevents theories of this sort. However, a first theorem is to the effect that theories in our range cannot be theories of truth for their own language L.

T12.23. No recursively axiomatized consistent theory extending Q is a theory of truth for its own language L.

Suppose otherwise, that a recursively axiomatized consistent T extending Q is a theory of truth for its own L. Since T extends Q, by the diagonal lemma there is a sentence F (a false or liar sentence) such that

T ⊢ F ↔ ¬True(⌜F⌝)

But since T is a truth theory, T ⊢ True(⌜F⌝) ↔ F; so T ⊢ True(⌜F⌝) ↔ ¬True(⌜F⌝); so T is inconsistent. Reject the assumption: T is not a truth theory for its language L.
This theorem explains our standard jump to the metalanguage when we give conditions like ST and SF. Nothing prevents stating truth conditions; trouble results when a theory purports to give conditions for all the sentences in its own language.
A second theorem takes on the slightly stronger (but still plausible) assumption that Q is a sound theory, so that all of its theorems are true. Under this condition, there is trouble even expressing a truth predicate for language L in that language L.

T12.24. If Q is sound, and L includes LNT, then there is no True to express TRUE in L.

Suppose otherwise, that Q is sound and some formula True(x) expresses TRUE(n) in L; since Q is a theory that extends Q, by the diagonal lemma there is a sentence F such that Q ⊢ F ↔ ¬True(⌜F⌝); since the theorems of Q are true, N[F ↔ ¬True(⌜F⌝)] = T; so with a bit of manipulation,

N[F] = T iff N[¬True(⌜F⌝)] = T, iff N[True(⌜F⌝)] ≠ T

(i) Suppose N[True(⌜F⌝)] ≠ T; then by expression, ⌜F⌝ ∉ TRUE, so that N[F] ≠ T; so by the above equivalence, N[True(⌜F⌝)] = T; reject the assumption. (ii) So N[True(⌜F⌝)] = T; but then by the equivalence, N[F] ≠ T; so ⌜F⌝ ∉ TRUE; so by expression, N[¬True(⌜F⌝)] = T; so N[True(⌜F⌝)] ≠ T; this is impossible.

Reject the original assumption: no formula True(x) expresses TRUE(n).


Observe that some numerical properties are both expressed and captured, as are the recursive relations. As we have seen, though THRMQ(n) is a relation on the integers, it is not a recursive relation. It can however be expressed by the formula ∃xPrfq(x, n). Then, once we show (in T14.10) that all the functions captured by a recursively axiomatized consistent theory extending Q are recursive, it follows that THRMQ(n) is expressed but not captured. And now we have seen a relation TRUE(n) not even expressed in LNT.
This is a decent start into the results of Part IV of the text. In the following, we
turn to deepening and extending them in different directions.

Final Results of Chapter 12

T12.17 If T is a recursively axiomatized formal theory then the set of theorems of T is recursively enumerable.

T12.18 For any recursively axiomatized, consistent, negation complete formal theory T there is a recursive relation THRMT(n) true just in case n numbers a theorem of T.

T12.19 Let T be any theory that extends Q. Then for any formula F(y) containing just the variable y free, there is a sentence H such that T ⊢ H ↔ F(⌜H⌝). The Diagonal Lemma.

T12.20 For no consistent theory T that extends Q is there a recursive relation THRMT(n) that is true just in case n is a Gödel number of a theorem of T.

T12.21 No consistent, recursively axiomatizable extension of Q is negation complete. The incompleteness of arithmetic.

T12.22 If Q is consistent, then no recursive relation THRMPL(n) is true iff n numbers a theorem of predicate logic.

T12.23 No recursively axiomatized consistent theory extending Q is a theory of truth for its own language L.

T12.24 If Q is sound, and L includes LNT, then there is no True to express TRUE in L.

E12.33. Use the alternate version of the diagonal lemma from E12.31 to provide alternate demonstrations of T12.23 and T12.24. Include the bit of manipulation left out of the text for T12.24.


E12.34. For each of the following concepts, explain in an essay of about two pages, so that Hannah could understand. In your essay, you should (i) identify the objects to which the concept applies, (ii) give and explain the definition, and give and explicate examples (iii) where the concept applies, and (iv) where it does not. Your essay should exhibit an understanding of methods from the text.

a. The recursive functions and the role of the beta function in their expression and capture.

b. The essential elements from this chapter contributing to the proof of the incompleteness of arithmetic.

c. The essential elements from this chapter contributing to the proof that no recursive relation identifies the theorems of predicate logic.

d. The essential elements from this chapter contributing to the proof of Tarski's theorem.

Chapter 13

Gödel's Theorems

We have seen a demonstration of the incompleteness of arithmetic. In this chapter, we take another run at that result, this time by Gödel's original strategy of producing sentences that are true iff not provable. This enables us to extend and deepen the incompleteness result, and puts us in a position to take up Gödel's second incompleteness theorem, according to which theories (of a certain sort) are not sufficient for demonstrations of consistency.

13.1 Gödel's First Theorem

Recall that the diagonalization of a formula P(x) is ∃x(x = ⌜P⌝ ∧ P(x)). In addition, there is a recursive function diag(n) which numbers the diagonalization of the formula with number n and, if T is recursively axiomatized, a recursive relation PRFT(m, n) true when m numbers a proof of the formula with number n. Our previous argument for incompleteness required PRFT(m, n) for T12.17, and a Diag(x, y) to capture diag(n) for the diagonal lemma. Previously, under the assumption that there is a THRMT and so Thrmt, we applied the diagonal lemma so that T ⊢ H ↔ ¬Thrmt(⌜H⌝) to reach contradiction, and argued that there must be a sentence such that neither it nor its negation is provable, without any suggestion what that sentence might be. This time, by related methods, we construct a particular sentence such that neither it nor its negation is provable.

13.1.1 Semantic Version

Consider some recursively axiomatized theory T whose language includes LNT. Since PRFT(m, n) and diag(n) are recursive, they are expressed by some formulas Prft(x, y) and Diag(x, y). Let A(z) =def ¬∃x∃y(Prft(x, y) ∧ Diag(z, y)), and a = ⌜A⌝. So A says nothing numbers a proof of the diagonalization of a formula with number z. Then,

G =def ∃z(z = a ∧ ¬∃x∃y(Prft(x, y) ∧ Diag(z, y)))

So G is the diagonalization of A, and intuitively G says that nothing numbers a proof of it. Let g = ⌜G⌝. Observe that G is defined relative to Prft for T; so each T yields its own Gödel sentence (if it were not ugly, we might sensibly introduce subscripts G_T). Thus,
T13.1. For any recursively axiomatized theory T whose language includes LNT, G is true iff it is unprovable in T (iff T ⊬ G).

Consider a recursively axiomatized theory T whose language includes LNT and G as described above. (i) Suppose N[G] = T; then, with some work, there are no m, n such that N[Prft(m, n)] = T and N[Diag(a, n)] = T; so by expression, there are no m, n such that ⟨m, n⟩ ∈ PRFT and ⟨a, n⟩ ∈ diag; but diag(a) = g; so no m numbers a proof of G, which is to say T ⊬ G. (ii) Suppose N[G] ≠ T; then, again with some work, the second conjunct of G must fail in the case when z = a, so that there are m and n such that both Prft(m, n) and Diag(a, n) are satisfied on N with d, so that N[¬Prft(m, n)] ≠ T and N[¬Diag(a, n)] ≠ T; and by expression ⟨m, n⟩ ∈ PRFT and ⟨a, n⟩ ∈ diag; but again, diag(a) = g; so ⟨m, g⟩ ∈ PRFT; so T ⊢ G; so by transposition, if T ⊬ G, then N[G] = T.
It is not a difficult exercise to fill in the details. Intuitively this result should seem
right. Suppose G says that it is unprovable: then if it is true it is unprovable; and if
it is unprovable it is true; so it is true iff it is unprovable.
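The self-referential flavor of G can be mimicked with strings (a loose analogy only; the real construction works through Gödel numbers and the arithmetized diag, not string substitution):

```python
# Diagonalization as an operation on strings: substitute a quotation of
# the template for its placeholder "x". The result talks about the very
# template from which it was built, as G is the diagonalization of A.

def diagonalize(template):
    return template.replace("x", repr(template))

a = "nothing proves the diagonalization of x"
g = diagonalize(a)
print(g)
```

Here `g` asserts something about a quotation of `a`, and `g` itself is the diagonalization of `a`; so, read charitably, `g` speaks about itself.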
Now suppose that T is a recursively axiomatized and sound theory (so that its theorems are true), whose language includes LNT. Then T is negation incomplete.
T13.2. If T is a recursively axiomatized sound theory whose language includes LNT, then T is negation incomplete.

Suppose T is a recursively axiomatized theory whose language includes LNT; then there is a sentence G to which the conditions for T13.1 apply. (i) Suppose T ⊢ G; then, since T is sound, G is true; so by T13.1, T ⊬ G; reject the assumption: T ⊬ G. (ii) Suppose T ⊢ ¬G; then since T is sound, ¬G is true; so G is not true; so by T13.1, T ⊢ G; so by soundness again, G is true; reject the assumption: T ⊬ ¬G.


So G is a sentence such that if T is a recursively axiomatized sound theory whose language includes LNT, neither G nor its negation is a theorem. And, from T13.1, given that G is unprovable, if T is a recursively axiomatized theory whose language includes LNT, then G is a true non-theorem. This version of the incompleteness result depends on the ability to express G, together with the soundness of theory T.

13.1.2 Syntactic Version

Gödel's first theorem is usually presented with the capture and consistency, rather than the expression and soundness, constraints. We turn now to a version of this first sort which, again, builds a particular sentence such that neither it nor its negation is provable.
Since PRFT(m, n) and diag(n) are recursive, in theories extending Q they are captured by canonical formulas Prft(x, y) and Diag(x, y). As before, let A(z) =def ¬∃x∃y(Prft(x, y) ∧ Diag(z, y)), and a = ⌜A⌝. So A says nothing numbers a proof of the diagonalization of a formula with number z. Then,

G =def ∃z(z = a ∧ ¬∃x∃y(Prft(x, y) ∧ Diag(z, y)))

So G is the diagonalization of A; and let g be the Gödel number of G. This time, we shall be able to prove the relation between G and a proof of it. Reasoning as for the diagonal lemma,
T13.3. Let T be any recursively axiomatized theory extending Q; then T ⊢ G ↔ ¬∃xPrft(x, ⌜G⌝).

Since T is recursively axiomatized, there is a recursive PRFT; and since T extends Q there are Prft and Diag that capture PRFT and diag. From the definition of G, T ⊢ G ↔ ∃z(z = a ∧ ¬∃x∃y(Prft(x, y) ∧ Diag(z, y))); from capture, T ⊢ Diag(a, g); and T ⊢ ∀z(Diag(a, z) → z = g). From these it follows that T ⊢ G ↔ ¬∃xPrft(x, g); which is to say, T ⊢ G ↔ ¬∃xPrft(x, ⌜G⌝) (homework).
From the diagonal lemma, under appropriate conditions, given a formula F(y), there is some H such that T ⊢ H ↔ F(⌜H⌝). Under the assumption that there is THRMT, we applied this to show there would be some H such that T ⊢ H ↔ ¬Thrmt(⌜H⌝). This led to contradiction. In this case, however, we show that there really is a particular sentence G such that T ⊢ G ↔ ¬∃xPrft(x, ⌜G⌝).
Our idea is to show that if T is a consistent, recursively axiomatized theory extending Q, then T ⊬ G and T ⊬ ¬G. The first is easy enough.


T13.4. If T is a consistent, recursively axiomatized theory extending Q, then T ⊬ G.

Suppose T is a consistent recursively axiomatized theory extending Q. Suppose T ⊢ G; then since T is recursively axiomatized, for some m, PRFT(m, g); and since T extends Q, by capture, T ⊢ Prft(m, g); so by ∃I, T ⊢ ∃xPrft(x, g), which is to say, T ⊢ ∃xPrft(x, ⌜G⌝). But since T ⊢ G, by T13.3, T ⊢ ¬∃xPrft(x, ⌜G⌝). So T is inconsistent; reject the assumption: T ⊬ G.
That is the first half of what we are after. But we can't quite get that if T is a consistent, recursively axiomatized theory extending Q, then T ⊬ ¬G. Rather, we need a strengthened notion of consistency. Say a theory T is ω-incomplete iff for some P(x), T can prove each P(m) but T cannot go on to prove ∀xP(x). Equivalently, T is ω-incomplete iff for every m, T ⊢ ¬P(m) but T ⊬ ¬∃xP(x). We have seen that Q is ω-incomplete: we can prove, say, n × m = m × n for every m and n, but cannot go on to prove the corresponding universal generalization, ∀x∀y(x × y = y × x). Say T is ω-inconsistent iff for some P(x), T proves each P(m) but also proves ¬∀xP(x). Equivalently, T is ω-inconsistent iff for every m, T ⊢ ¬P(m) and T ⊢ ∃xP(x). ω-incompleteness is a theoretical weakness: there are some things true but not provable. But ω-inconsistency is a theoretical disaster: It is not possible for the theorems of an ω-inconsistent theory all to be true on any interpretation (assuming some m for each m ∈ U). ω-inconsistency is not itself inconsistency, for we do not have any sentence such that T ⊢ P and T ⊢ ¬P. But inconsistent theories are automatically ω-inconsistent, for from contradiction all consequences follow (including each P(m) and also ¬∀xP(x)), so that an ω-consistent theory is consistent. Now we show,
T13.5. If T is an ω-consistent, recursively axiomatized theory extending Q, then T ⊬ ¬G.

Suppose T is an ω-consistent recursively axiomatized theory extending Q. Suppose T ⊢ ¬G; if T is ω-consistent, then it is consistent, so T ⊬ G; so since T is recursively axiomatized, for all m, ⟨m, g⟩ ∉ PRFT; and since T extends Q, by capture, T ⊢ ¬Prft(m, g); and since T is ω-consistent, T ⊬ ∃xPrft(x, g); which is to say, T ⊬ ∃xPrft(x, ⌜G⌝). But since T ⊢ ¬G, by T13.3 with NB, T ⊢ ∃xPrft(x, ⌜G⌝). This is impossible; reject the assumption: T ⊬ ¬G.
So if a recursively axiomatized theory extending Q has the relevant consistency properties, then it is negation incomplete. Further, insofar as T canonically captures the recursive functions, it expresses the recursive functions; so by T13.1, G is true iff T ⊬ G. So if T is a consistent recursively axiomatized theory extending Q, then G is both unprovable and true.¹
This is roughly the form in which Gödel proved the incompleteness of arithmetic in 1931: If T is a consistent, recursively axiomatized theory extending Q, then T ⊬ G; and if T is an ω-consistent, recursively axiomatized theory extending Q, then T ⊬ ¬G. Since we believe that standard theories including Q and PA are consistent and ω-consistent, this is sufficient for the incompleteness of arithmetic.
E13.1. Fill in the details for the argument of T13.1.
*E13.2. Complete the demonstration of T13.3 by providing a derivation to show T ⊢ G ↔ ¬∃xPrft(x, ⌜G⌝). The demonstration for the diagonal lemma is a model, though steps will be adapted to the particular form of these sentences.

13.1.3 Rosser's Sentence

But it is possible to drop the special assumption of ω-consistency by means of a sentence somewhat different from G.² Recall that neg(n) is the Gödel number of the negation of the sentence with number n. So P̄RFT(m, n) =def PRFT(m, neg(n)) obtains when m numbers a proof of the negation of the sentence numbered n. Since it is recursive, it is captured by some P̄rft(x, y). Set,

RPrft(x, y) =def Prft(x, y) ∧ (∀w ≤ x)¬P̄rft(w, y)

So RPrft(x, y) just in case x numbers a proof of the sentence numbered y and no number less than or equal to x numbers a proof of the negation of that sentence. Now, working as before, set A′(z) =def ¬∃x∃y(RPrft(x, y) ∧ Diag(z, y)), and a = ⌜A′⌝. So A′ says nothing numbers an R-proof of the diagonalization of a formula with number z. Then,

R =def ∃z(z = a ∧ ¬∃x∃y(RPrft(x, y) ∧ Diag(z, y)))

So R is the diagonalization of A′; let r be the Gödel number of R. And R has the key syntactic property just like G. Again, reasoning as we did for the diagonal lemma,
1. Given that an unsound theory has false fixed points, here is another reason to distinguish this constructed G from the variable H of the previous chapter. See p. 596n8.
2. Barkley Rosser, "Extensions of Some Theorems of Gödel and Church."


T13.6. Let T be any recursively axiomatized theory extending Q; then T ⊢ R ↔ ¬∃xRPrft(x, ⌜R⌝).

You can show this just as for T13.3.
Now the first half of the incompleteness result is straightforward.
T13.7. If T is a consistent, recursively axiomatized theory extending Q, then T ⊬ R.

Suppose T is a consistent recursively axiomatized theory extending Q. Suppose T ⊢ R; then since T is recursively axiomatized, for some m, PRFT(m, r); and since T extends Q, by capture, T ⊢ Prft(m, r). But by consistency, T ⊬ ¬R; so for all n, and in particular all n ≤ m, ⟨n, r⟩ ∉ P̄RFT; so by capture, T ⊢ ¬P̄rft(n, r); so by T8.21, T ⊢ (∀w ≤ m)¬P̄rft(w, r); so T ⊢ Prft(m, r) ∧ (∀w ≤ m)¬P̄rft(w, r); so T ⊢ RPrft(m, r); so T ⊢ ∃xRPrft(x, r), which is to say, T ⊢ ∃xRPrft(x, ⌜R⌝). But since T ⊢ R, by T13.6, T ⊢ ¬∃xRPrft(x, ⌜R⌝); so T is inconsistent. This is impossible; reject the assumption: T ⊬ R.
So, with consistency, it is not much harder to prove T ⊢ ∃xRPrft(x, ⌜R⌝) from the assumption that T ⊢ R than to prove T ⊢ ∃xPrft(x, ⌜G⌝) from the assumption that T ⊢ G.
Reasoning for the other direction is somewhat more involved, but still straightforward.
T13.8. If T is a consistent, recursively axiomatized theory extending Q, then T ⊬ ¬R.

Suppose T is a consistent recursively axiomatized theory extending Q. Suppose T ⊢ ¬R. Then since T is recursively axiomatized, for some m, ⟨m, r⟩ ∈ P̄RFT; and since T extends Q, by capture, T ⊢ P̄rft(m, r). By consistency, T ⊬ R; so for any n, and in particular any n ≤ m, ⟨n, r⟩ ∉ PRFT; so by capture, T ⊢ ¬Prft(n, r); and by T8.21, T ⊢ (∀w ≤ m)¬Prft(w, r). Now reason as follows.

1.  ¬R                                                from T
2.  P̄rft(m, r)                                        capture
3.  (∀w ≤ m)¬Prft(w, r)                               capture and T8.21
4.  R ↔ ¬∃xRPrft(x, r)                                from T13.6
5.  ∃xRPrft(x, r)                                     1,4 NB
6.  ∃x(Prft(x, r) ∧ (∀w ≤ x)¬P̄rft(w, r))             5 abv
7.  | Prft(j, r) ∧ (∀w ≤ j)¬P̄rft(w, r)               A (g, 6∃E)
8.  | j ≤ m ∨ m ≤ j                                   T8.19
9.  | | j ≤ m                                         A (g, 8∨E)
10. | | Prft(j, r)                                    7 ∧E
11. | | ¬Prft(j, r)                                   3,9 (∀)E
12. | | ⊥                                             10,11 ⊥I
13. | | m ≤ j                                         A (g, 8∨E)
14. | | (∀w ≤ j)¬P̄rft(w, r)                          7 ∧E
15. | | ¬P̄rft(m, r)                                  14,13 (∀)E
16. | | ⊥                                             2,15 ⊥I
17. | ⊥                                               8,9-12,13-16 ∨E
18. ⊥                                                 6,7-17 ∃E

So T ⊢ ⊥, that is, T ⊢ Z ∧ ¬Z, and T is inconsistent. Reject the assumption: T ⊬ ¬R.
In the previous case, with G, we had no way to convert ∃xPrft(x, g) to a contradiction with ¬Prft(0, g), ¬Prft(1, g), . . .; that is why we needed ω-consistency. In this case, the special nature of R aids the argument: From ∃xRPrft(x, r), consider a j such that RPrft(j, r). If j ≤ m, there is contradiction insofar as we are in the scope of the bounded universal quantifier (∀w ≤ m)¬Prft(w, r). If m ≤ j, then we end up with both P̄rft(m, r) and ¬P̄rft(m, r), as RPrft(j, r) builds in inconsistency with P̄rft(m, r). So T ⊬ R and T ⊬ ¬R.
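The way the Rosser condition builds the comparison of proof numbers into the formula can be modeled in miniature (a toy only, with an invented proofs table; nothing here is the text's arithmetized PRFT):

```python
# Toy model: proofs is a hypothetical table assigning proved sentences to
# numbers; prf(m, s) says m numbers a proof of s; rprf demands a proof of
# s with no earlier-or-equal proof of the negation of s.

proofs = {0: "~R", 3: "R"}  # invented: 0 proves ~R, 3 proves R

def prf(m, s):
    return proofs.get(m) == s

def rprf(m, s):
    neg = s[1:] if s.startswith("~") else "~" + s
    return prf(m, s) and not any(prf(w, neg) for w in range(m + 1))

print(rprf(3, "R"), rprf(0, "~R"))  # False True
```

The proof of R at 3 fails the R-condition because the proof of ~R at 0 comes no later, which is just the asymmetry the derivation above exploits.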
Let us close this section with some reflections on what we have shown. First,

Q is sound ⟹ Q is ω-consistent ⟹ Q is consistent

So our results are progressively stronger, as the assumptions have become correspondingly weaker. Of course,

canonical capture ⟹ canonical expression

So the second requirement is increased as we move from expression to capture.
Second, we have not shown that there are truths of LNT not provable in any recursively axiomatizable, consistent theory extending Q. Rather, what we have shown is that for any recursively axiomatizable consistent theory extending Q, there are some truths of LNT not provable in that theory. For a given recursively axiomatizable theory, there will be a given relation PRFT(m, n) and Prft(x, y), depending on the particular axioms of that theory, and so unique sentences G and R constructed as above. In particular, given that a theory cannot prove, say, R, we might simply add R to its axioms; then of course there is a derivation of R from the axioms of the revised theory! But then the new theory will generate a new relation PRFT′(m, n) and a new Prft′(x, y), and so a new unprovable sentence R′. So any theory extending Q is negation incomplete.
But it is worth a word about what are theories extending Q. Any such theory should build in equivalents of the LNT vocabulary ∅, S, +, and ×, and should have a predicate Nat(x) to identify a class of objects to count as the numbers. Then if the theory makes the axioms of Q true on these objects, it is incomplete. Straightforward extensions of Q are ones like PA which simply add to its axioms. But ordinary ZF set theory also falls into this category, for it is possible to define a class of sets, say, ∅, {∅}, {∅, {∅}}, {∅, {∅}, {∅, {∅}}}, . . . where any n is the set of all the numbers prior to it, along with operations on sets which obey the axioms of Q.³ It follows that ZF is negation incomplete. In contrast, the domain for the standard theory of real numbers has all the entities required to do arithmetic. However that theory does not have a predicate Nat(x) to pick out the natural numbers, and cannot recapitulate the theory of natural numbers on any subclass of its domain. So our incompleteness theorem does not get a grip, and in fact the theory of real numbers is demonstrably complete. Observe, though, that it is a weakness in this theory of real numbers, its inability to specify a certain class, that makes room for its completeness.⁴
E13.3. Demonstrate T13.6.

13.2 Gödel's Second Theorem: Overview

We turn now to Gdels second incompleteness theorem on the unprovability of consistency. In order to separate the forest from the trees, we divide this discussion into
four main parts. First, in this section, Gdels second theorem is proved subject to
3 For

discussion, see any introduction to set theory, for example, Enderton, Elements of Set Theory,
chapter 4.
4 There are real numbers 0 and 1; so it is natural to identify the integers with 0, 0 C 1, 0 C 1 C 1
and so forth. The difficulty is to define a property within the theory of real numbers that picks out just
the members of this series, as we have been able to define infinite recursive properties in LNT . The
completeness of the theory of real numbers was originally proved by Tarski, and is discussed in books
on model theory, for example, Hodges A Shorter Model Theory, theorems 2.7.2 and 7.4.4.

CHAPTER 13. GDELS THEOREMS

610

three derivability conditions. Then we turn to the derivability conditions themselves.


The first is easy. But the second and third require extended discussion. There is
some background (section 13.3). Then discussion of the second and third conditions
(section 13.4). This completes the proof. We conclude with some reflections and
consequences from our results (section 13.5). Many (most) texts end their discussion
of the second theorem at the first stage, offering only some general perspective on
the rest.5 However, even if you decide to bypass the details, this general perspective
will be enhanced if you have some object at which to wave as you pass them by.
For this discussion we switch to PA. The result is that PA and its extensions cannot prove their own consistency. The reason for this switch will become vivid in demonstration of the derivability conditions, as many arguments that would have been by induction are forced into the theory and so are by IN.
Main argument. We have seen that for recursively axiomatized theories there is a recursive relation PRFT(m, n). Since it is recursive, in theories extending Q this relation is captured by a corresponding Prft(x, y). Let,
Prvt(y) =def ∃xPrft(x, y)

So Prvt(y) just when something numbers a proof of the formula numbered y, that is, when the formula numbered by y is provable. Insofar as the quantifier is unbounded, there is no suggestion that there is a corresponding recursive relation; in fact, we have seen in T12.20 that no recursive relation numbers the theorems of Q. Let,

Cont =def ¬Prvt(⌜∅ = S∅⌝)

So Cont is true just in case there is no proof of 0 = 1. There are different ways to express consistency but, for theories extending Q, this does as well as any other. Suppose T extends Q. Then T ⊢ 0 ≠ 1; so if T ⊢ 0 = 1 it proves a contradiction and is inconsistent. And if T is inconsistent, then it proves anything; so T ⊢ 0 = 1. So T is inconsistent iff T ⊢ 0 = 1; and, transposing, T is consistent iff T ⊬ 0 = 1.
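The equivalence "inconsistent iff proves 0 = 1" rests on ex falso quodlibet; a toy closure model makes it concrete (illustrative only, with sentences as bare strings):

```python
# If a 'theory' contains some P together with ~P, it proves everything,
# including "0=1"; otherwise it proves just what it contains.

def proves(theorems, s):
    if any("~" + t in theorems for t in theorems):
        return True  # from a contradiction, all consequences follow
    return s in theorems

print(proves({"P"}, "0=1"), proves({"P", "~P"}, "0=1"))  # False True
```

So refusing to prove 0 = 1 is as good a mark of consistency as any, which is what Cont records inside the theory.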
The second theorem is this simple result: Under certain conditions, if T is consistent, then T ⊬ Cont. If it is consistent, then T cannot prove its own consistency. Suppose the first theorem applies to T, and suppose we could show,

(**) T ⊢ Cont → ¬Prvt(⌜G⌝)

5. The only other really complete development in English of the Second Theorem that I have found is Tourlakis, Lectures in Logic and Set Theory: I. His presentation does not match in detail with the one developed here though, of course, the underlying argument is the same.


Then, given what has gone before, we could make the following very simple argument. Suppose T is a recursively axiomatized theory extending Q.

By T13.3, T ⊢ G ↔ ¬∃xPrft(x, ⌜G⌝), which is to say, T ⊢ G ↔ ¬Prvt(⌜G⌝); from this and (**), T ⊢ Cont → G; so if T ⊢ Cont then T ⊢ G; but from the first theorem (T13.4), if T is consistent, then T ⊬ G; so if T is consistent, T ⊬ Cont.
So the argument reduces to showing (**). Observe that, in reasoning for T13.4, we have already shown,

T is consistent ⟹ T ⊬ G

So the argument reduces to showing that T proves what we have already seen is so.
Let us abbreviate Prvt(⌜P⌝) by □P. Observe that this obscures the corner quotes. Still, we shall find it useful. So we need T ⊢ Cont → G, which is just to say, T ⊢ ¬□(0 = 1) → G. Suppose T satisfies the following derivability conditions.

D1. If T ⊢ P then T ⊢ □P
D2. T ⊢ □(P → Q) → (□P → □Q)
D3. T ⊢ □P → □□P

Then we shall be able to show T ⊢ Cont → G.
The utility of □ in this context is that D1-D3 are exactly the conditions that define a standard modal logic, K4, and it is not surprising that provability should correspond to a kind of necessity.⁶ There is an elegant natural derivation system for this modal logic. For this you might check out Roy, "Natural Derivations for Priest 2" (but in the nomenclature there borrowed from Priest, the system is NKτ). However, rather than explain and introduce a new derivation system, we obtain a version of K4 simply by adding A1-A3 and MP from AD to D1-D3. So K4 has D1 as a new rule, and D2 and D3 as new axioms. Since A1-A3 and MP remain, we have all the theorems from before. Thus, as a simple example,

6. While K4 correctly represents these principles, it is not a complete logic of provability. The complete system GL of provability for PA strengthens D3 to an axiom □(□P → P) → □P. For discussion see Boolos, The Logic of Provability.

(A)
1. ¬P → (P → Q)                                  T3.9
2. □(¬P → (P → Q))                               1 D1
3. □(¬P → (P → Q)) → (□¬P → □(P → Q))            D2
4. □¬P → □(P → Q)                                3,2 MP

So in this system ⊢ □¬P → □(P → Q).
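Steps like 3,2 MP are purely mechanical, which a few lines make vivid (a sketch with formulas encoded as nested tuples; the encoding is my own illustration, not the text's):

```python
# Modus ponens over formulas as nested tuples: ('->', A, B) for A -> B,
# ('box', A) for box-A, ('not', A) for negation. mp checks that the
# conditional's antecedent matches exactly before detaching the consequent.

def mp(conditional, antecedent):
    assert conditional[0] == '->' and conditional[1] == antecedent
    return conditional[2]

P, Q = 'P', 'Q'
line2 = ('box', ('->', ('not', P), ('->', P, Q)))        # line 2 of (A), via D1
line3 = ('->', line2,                                     # line 3, a D2 instance
         ('->', ('box', ('not', P)), ('box', ('->', P, Q))))
line4 = mp(line3, line2)
print(line4)  # ('->', ('box', ('not', 'P')), ('box', ('->', 'P', 'Q')))
```

The detached result is exactly line 4 of (A), □¬P → □(P → Q), in the tuple encoding.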


Now, given that T ⊢ G → ¬∃xPrft(x, ⌜G⌝) from T13.3, we shall be able to show that T ⊢ Cont → ¬□G.

T13.9. Let T be a recursively axiomatized theory extending Q. Then supposing T satisfies the derivability conditions and so the K4 logic of provability, T ⊢ Cont → ¬Prvt(⌜G⌝).

1.  G → ¬□G                                                  from T13.3
2.  □(G → ¬□G)                                               1 D1
3.  □(G → ¬□G) → (□G → □¬□G)                                 D2
4.  □G → □¬□G                                                3,2 MP
5.  □¬□G → □(□G → 0 = 1)                                     (A)
6.  □G → □(□G → 0 = 1)                                       4,5 T3.2
7.  □(□G → 0 = 1) → (□□G → □(0 = 1))                         D2
8.  □G → (□□G → □(0 = 1))                                    6,7 T3.2
9.  (□G → (□□G → □(0 = 1))) → ((□G → □□G) → (□G → □(0 = 1)))  A2
10. (□G → □□G) → (□G → □(0 = 1))                             9,8 MP
11. □G → □□G                                                 D3
12. □G → □(0 = 1)                                            10,11 MP
13. (□G → □(0 = 1)) → (¬□(0 = 1) → ¬□G)                      T3.13
14. ¬□(0 = 1) → ¬□G                                          13,12 MP

Which is to say, T ⊢ Cont → ¬Prvt(⌜G⌝).


As usual for an axiomatic derivation, the reasoning is not entirely transparent. However we are at the stage where, given the derivability conditions, T proves the result.
Given this, reason as before,
T13.10. Let T be a recursively axiomatized theory extending Q. Then supposing T satisfies the derivability conditions, if T is consistent, T ⊬ Cont.

Suppose T is a recursively axiomatized theory extending Q that satisfies the derivability conditions. Then by T13.9, T ⊢ Cont → ¬Prvt(⌜G⌝); and by T13.3, T ⊢ G ↔ ¬Prvt(⌜G⌝); so T ⊢ Cont → G; so if T ⊢ Cont then T ⊢ G; but from the first incompleteness theorem (T13.4), if T is consistent, then T ⊬ G; so if T is consistent, T ⊬ Cont.


One might wonder about the significance of this theorem: If T were inconsistent,
it would prove Cont. So a failure to prove Cont is no reason to think that T is
inconsistent. And a proof of Cont might itself be an indication of inconsistency!
The interesting point here results from using one theory to prove the consistency of
another. Recall the main Hilbert strategy as outlined in the introduction to Part IV;
a key component is the demonstration by means of some real theory R that an ideal
theory I is consistent. But, supposing that PA cannot prove its own consistency, we
can be sure that no weaker theory can prove the consistency of PA. And if PA cannot
prove even the consistency of PA, then PA and theories weaker than PA cannot be
used to prove the consistency of theories stronger than PA. So a leg of the Hilbert
strategy seems to be removed. Observe, however, that the theorem does not show
that the consistency of PA is unprovable: a theory stronger than PA at least in some
respects might still prove the consistency of PA.7 This may be a straightforward
theorem of the second theory. Of course, as a means of demonstrating consistency
such an argument may seem problematic insofar as one requires some reason for
thinking the second theory sound which does not already attach to the first, and so
already show that the first theory is consistent.
Another theorem is easy to show, and left as an exercise.
T13.11. Let T be a recursively axiomatized theory extending Q. Then supposing T satisfies the derivability conditions and so the K4 logic of provability, T ⊢ Cont ↔ ¬Prvt(⌜Cont⌝).

Hints: (i) Show that T ⊢ Cont → ¬□Cont; you can do this starting with Cont → ¬□G from T13.9 and ¬□G → G from T13.3. Then (ii) show T ⊢ ¬□Cont → Cont; for this, use T3.39 with T3.9 to show T ⊢ 0 = 1 → Cont; then you should be able to obtain ¬□Cont → ¬□(0 = 1), which is to say ¬□Cont → Cont. Together these give the desired result.

From this theorem, supposing the derivability conditions, Cont is another P which, like G, is such that T ⊢ P ↔ ¬Prvt(⌜P⌝); so Cont is another fixed point for ¬Prvt(x). It follows that Cont is another sentence such that both it and its negation are unprovable. Interestingly, Cont uses the notion of provability, but is not constructed so as to say anything about its own provability; so this instance of incompleteness does not depend on self-reference for the unprovable sentence.
⁷ G. Gentzen shows this very thing: "The Consistency of Elementary Number Theory," and "New Version of the Consistency Proof for Elementary Number Theory," both in The Collected Papers of Gerhard Gentzen, ed. Szabo. See also Gentzen, "The Concept of Infinity in Mathematics," also in Szabo, along with Pohlers, Proof Theory, chapter 1, and Takeuti, Proof Theory, §12.


We have shown that the second theorem holds for a theory if it meets the derivability conditions. But this is not to show that the theorem holds for any theories! In order to tie the result to something concrete, we turn now to showing that PA meets the derivability conditions, and so that PA, and theories extending PA, satisfy the theorem.

Demonstration of the first condition is simple.

T13.12. Suppose T is a recursively axiomatized theory extending Q. Then if T ⊢ P, then T ⊢ □P.

Suppose T ⊢ P; then since T is recursively axiomatized, for some m, PRFT(m, ⌜P⌝); and since T extends Q, there is a Prft that captures PRFT; so T ⊢ Prft(m̄, ⌜P⌝); so by ∃I, T ⊢ ∃xPrft(x, ⌜P⌝); so T ⊢ Prvt(⌜P⌝); so T ⊢ □P.
The next conditions are considerably more difficult. We build gradually to the required results in PA.
E13.4. Produce derivations to show both parts of T13.11.

13.3 The Derivability Conditions: Background

In this section we develop some results required for demonstration of derivability conditions two and three. We proceed by introducing functions and relations into PA by definition, and then proving some results about them.

13.3.1 Remarks on Definition

To obtain the derivability conditions, we begin with some remarks on definition. So far, we have taken a language, such as Lq or LNT, as basic, and introduced any additional symbols, for example ≤, as means of abbreviation for expressions in the original language. But in more complex contexts, especially involving function symbols, it will be convenient to extend the language with the addition of new symbols by means of definition. Thus given a theory T in language L, we might introduce symbols with corresponding axioms to obtain T′ and L′ as follows,


Additional Theorems of PA

*T13.13. The following are theorems of PA:

(a) PA ⊢ (r ≤ s ∧ s ≤ t) → r ≤ t
(b) PA ⊢ (r < s ∧ s < t) → r < t
(c) PA ⊢ (r ≤ s ∧ s < t) → r < t
(d) PA ⊢ ∅ ≤ t
(e) PA ⊢ ∅ < St
(f) PA ⊢ t ≠ ∅ ↔ ∅ < t
(g) PA ⊢ t < St
(h) PA ⊢ St = s → t < s
(i) PA ⊢ s ≤ t ↔ Ss ≤ St
(j) PA ⊢ s < t ↔ Ss < St
(k) PA ⊢ s < t ↔ Ss ≤ t
(l) PA ⊢ s ≤ t ↔ (s < t ∨ s = t)
(m) PA ⊢ s < St ↔ (s < t ∨ s = t)
(n) PA ⊢ s ≤ St ↔ (s ≤ t ∨ s = St)
(o) PA ⊢ s < t ∨ s = t ∨ t < s
(p) PA ⊢ s ≤ t ∨ t < s
(q) PA ⊢ s ≤ t ↔ ¬(t < s)
(r) PA ⊢ t < s → t ≠ s
(s) PA ⊢ (s ≤ t ∧ t ≤ s) → s = t
(t) PA ⊢ s ≤ s + t
(u) PA ⊢ r ≤ s ↔ r + t ≤ s + t
(v) PA ⊢ r < s ↔ r + t < s + t
(w) PA ⊢ (r ≤ s ∧ t ≤ u) → r + t ≤ s + u
(x) PA ⊢ (r < s ∧ t ≤ u) → r + t < s + u
(y) PA ⊢ ∅ < t → s ≤ s × t
(z) PA ⊢ r ≤ s → r × t ≤ s × t
(aa) PA ⊢ r × s > ∅ → s > ∅
(ab) PA ⊢ (r > 1̄ ∧ s > ∅) → r × s > s
(ac) PA ⊢ (t > ∅ ∧ r < s) → r × t < s × t
(ad) PA ⊢ (r < s ∧ t < u) → r × t < s × u
(ae) PA ⊢ ∀x[(∀z < x)P^x_z → P] → ∀xP                          strong induction (a)
(af) PA ⊢ [P^x_∅ ∧ ∀x((∀z ≤ x)P^x_z → P^x_Sx)] → ∀xP           strong induction (b)
(ag) PA ⊢ ∃xP → ∃x[P ∧ (∀z < x)¬P^x_z]                         least number principle

Some of these are related to results we obtained in chapter 8 for Q. But those results were of the sort: for any n, Q ⊢ t < n̄ ∨ t = n̄ ∨ n̄ < t; with PA, the induction is in the logic rather than in the metalanguage, and we obtain the universal quantifier (or rather, an arbitrary term which may be a free variable) in the object formula.


Symbol    Axiom                                        Condition

≤         x ≤ y ↔ ∃z(z + x = y)

∅         y = ∅ ↔ ∀x(x ∉ y)                            T ⊢ ∃!y∀x(x ∉ y)

S         y = Sx ↔ ∀z[z ∈ y ↔ (z ∈ x ∨ z = x)]         T ⊢ ∃!y∀z[z ∈ y ↔ (z ∈ x ∨ z = x)]

Let ∃!yP(y) abbreviate ∃y[P(y) ∧ ∀z(P(z) → z = y)], or equivalently ∃yP(y) ∧ ∀y∀z[(P(y) ∧ P(z)) → y = z], so that exactly one thing is P. The first example, for a relation symbol, is one with which we are familiar. The others, for a constant and function symbol, are standard examples from set theory, where zero and successor are defined (the condition for successor sets Sx = x ∪ {x} so that the integers are ∅, {∅}, {∅, {∅}} and so forth). Observe that the constant and function cases require that T prove a uniqueness condition for the symbol. The details of the examples are not important; we illustrate only the idea of definition. We begin with a formal account, and extend it in different directions.
Basic Account. Consider some theory T and language L. We will consider a language L′ extended with some new symbol, and theory T′ extended with the corresponding axiom. There are separate cases for a relation symbol, constant, and function symbol.

Relation symbol. To introduce a new relation symbol Rx⃗ we require an axiom in the extended theory such that,

T′ ⊢ R(x⃗) ↔ Q(x⃗)

where Q(x⃗) is in L. Then for a formula F′ including the new symbol, there should be a conversion C such that CF′ = F for F in the original L and,

T′ ⊢ F′    iff    T ⊢ CF′

So F is like our unabbreviated formula, always available in the original T when F′ is a theorem of T′. The conversion for a relation Rs⃗ is straightforward. We are given T′ ⊢ R(x⃗) ↔ Q(x⃗). Make sure the bound variables of Q do not overlap the variables of Rs⃗. Then CF′ = F′^{Rs⃗}_{Q(s⃗)}, that is, F′ with each Rs⃗ replaced by Q(s⃗). So, from the example above, suppose F′ = ∃z(a ≤ z). So F′ involves the new symbol. We are given T′ ⊢ x ≤ y ↔ ∃z(z + x = y). It will not make sense to instantiate x and y from Q to a and z from F′ insofar as z is not free for y in Q. But we solve the problem by revising bound variables; so T′ ⊢ x ≤ y ↔ ∃w(w + x = y); so T′ ⊢ a ≤ z ↔ ∃w(w + a = z); then CF′ replaces (a ≤ z) in F′ with ∃w(w + a = z) to obtain ∃z∃w(w + a = z).
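As a computational aside (not from the text), the conversion just described can be mimicked by a toy string rewriter; the function names, the variable pool, and the pattern used here are all hypothetical illustration, not the book's apparatus:

```python
import re

# A toy, string-based sketch (not from the text) of the conversion C for the
# defined relation <=, using the defining axiom x <= y <-> Ez(z + x = y).
# To avoid capturing variables bound in F', the bound variable of Q is first
# renamed to one that is fresh relative to F' (here 'w' in place of 'z').

def fresh_var(used):
    """Pick a variable name that does not occur in the formula."""
    for v in "wuvpqrst":
        if v not in used:
            return v
    raise ValueError("no fresh variable available")

def convert_leq(formula):
    """Replace each atom 's <= t' by 'E<fresh>(<fresh> + s = t)'."""
    w = fresh_var(set(formula))
    def repl(match):
        s, t = match.group(1), match.group(2)
        return f"E{w}({w} + {s} = {t})"
    return re.sub(r"(\w+) <= (\w+)", repl, formula)

print(convert_leq("Ez(a <= z)"))  # Ez(Ew(w + a = z))
```

The renaming step corresponds to the revision of bound variables in the text: a naive substitution of z for y in ∃z(z + x = y) would capture z.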


Constant symbol. To introduce a new constant symbol we require an axiom in the extended theory, along with a condition in the original theory, such that,

T′ ⊢ y = c ↔ Q(y)    and    T ⊢ ∃!yQ(y)

Again for a formula F′ including the new symbol, we expect a conversion C such that CF′ = F, where T′ ⊢ F′ iff T ⊢ CF′. Let z be a variable that does not appear in F′ or Q. Then,

CF′ = ∃z(Q(z) ∧ F′^c_z)

So, from the example above, suppose F′ = ∃x(∅ ∈ x). Then z is a variable that does not appear in F′ or Q. So CF′ = ∃z[∀x(x ∉ z) ∧ ∃x(z ∈ x)].
Function symbol. To introduce a function symbol, there is an axiom and condition,

T′ ⊢ y = hx⃗ ↔ Q(x⃗, y)    and    T ⊢ ∃!yQ(x⃗, y)

The conversion for a function symbol works like that for constants when a single instance of hs⃗ appears in F′. Again, make sure the bound variables of Q do not overlap the variables of hs⃗, and let z be a variable that does not appear in F′ or in Q. Then it is sufficient to set CF′ = ∃z(Q(s⃗, z) ∧ F′^{hs⃗}_z). In general, however, F′ may include multiple instances of h, including one in the scope of another. For the general case, begin where F′ is an atomic R′ = Rt1...tn and t1...tn may involve instances of hs⃗. Order instances of hs⃗ in R′ from the left (or, on a chapter 2 tree, from the bottom) into a list hs⃗1, hs⃗2, ..., hs⃗m, so that when i < j, no hs⃗i appears in the scope of hs⃗j. Then set R0 = R′, and for i ≥ 1, Ri = ∃z(Q(s⃗i, z) ∧ (Ri-1)^{hs⃗i}_z). Then CR′ = Rm, and for an arbitrary F′, CF′ = F′^{R′}_{Rm}. So, for example, if R′ = R0 = Rwh²h²xyz, the tree is as follows,
(B) [Parse tree for Rwh²h²xyz, in the style of chapter 2: from the terms x and y, the term h²xy; from h²xy and z, the term h²h²xyz; and from w and h²h²xyz, the atomic Rwh²h²xyz.]

So instances of hs⃗ are ordered ⟨h²h²xyz, h²xy⟩. Then we use Q to replace instances of h, working our way up through the tree. So,


R0 = Rwh²h²xyz
R1 = ∃u(Q(h²xy, z, u) ∧ Rwu)
R2 = ∃v(Q(x, y, v) ∧ ∃u(Q(v, z, u) ∧ Rwu))

R1 uses Q to replace all of h²h²xyz, operating on the terms h²xy and z. R2 uses Q to replace h²xy in R1, operating on x and y. Observe that no quantifier ever binds a variable still in the scope of h; and in the end, the free variables are the same as in R0.
To show that this works, that T′ ⊢ F′ iff T ⊢ F, we need a couple of theorems.

T13.14. For a defined relation symbol, function symbol or constant, with its associated axiom and conversion procedure, T′ ⊢ F′ ↔ F.
(a) For a relation symbol, we are given T′ ⊢ Rx⃗ ↔ Q(x⃗); then so long as the bound variables of Q do not overlap the variables of Rs⃗ (which we can guarantee by reasoning as for T3.27), s⃗ is free for x⃗ in Q, so T′ ⊢ Rs⃗ ↔ Q(s⃗); so by T9.9, T′ ⊢ F′ ↔ F′^{Rs⃗}_{Q(s⃗)}; so T′ ⊢ F′ ↔ F.
(b) The case for constants is left as an exercise.
(c) For a function symbol h, begin with a derivation to show T′ ⊢ Ri-1 ↔ Ri. For an Ri-1 containing an instance h(s⃗), Ri is ∃z(Q(s⃗, z) ∧ (Ri-1)^{h(s⃗)}_z).

 1. | Ri-1 h(s⃗)                                A (g ↔I)
 2. | h(s⃗) = h(s⃗) ↔ Q(s⃗, h(s⃗))              from T′
 3. | h(s⃗) = h(s⃗)                             =I
 4. | Q(s⃗, h(s⃗))                              2,3 ↔E
 5. | Q(s⃗, h(s⃗)) ∧ Ri-1 h(s⃗)                 1,4 ∧I
 6. | ∃z(Q(s⃗, z) ∧ Ri-1 z)                    5 ∃I

 7. | ∃z(Q(s⃗, z) ∧ Ri-1 z)                    A (g ↔I)
 8. | | Q(s⃗, j) ∧ Ri-1 j                      A (g 7∃E)
 9. | | Q(s⃗, j)                               8 ∧E
10. | | j = h(s⃗) ↔ Q(s⃗, j)                   from T′
11. | | j = h(s⃗)                              10,9 ↔E
12. | | Ri-1 j                                 8 ∧E
13. | | Ri-1 h(s⃗)                             11,12 =E
14. | Ri-1 h(s⃗)                               7,8-13 ∃E

15. Ri-1 h(s⃗) ↔ ∃z(Q(s⃗, z) ∧ Ri-1 z)        1-6,7-14 ↔I

Things are arranged so that the variables of hs⃗ are not bound upon substitution into Q. So instances of the axiom at (2) and (10) and ∃I at (6) satisfy constraints. So T′ ⊢ Ri-1 ↔ Ri; and by repeated applications of this theorem, T′ ⊢ R0 ↔ Rm; so by T9.9, T′ ⊢ F′ ↔ F′^{R′}_{Rm}; so T′ ⊢ F′ ↔ F.
So far, so good, but this only says what the extended T′ proves: that the richer T′ proves F′ iff it proves F. But we want to see that T′ proves F′ iff the original T proves F. We bridge the gap between T and T′ by a couple of additional theorems. First,

T13.15. For a T and L, given a defined relation symbol, function symbol or constant with its associated axiom, then for any formula F in the original L, T′ ⊢ F iff T ⊢ F.

Since T′ proves everything T proves, the direction from right to left is obvious. So suppose T′ ⊢ F. To show T ⊢ F, we show T ⊨ F and apply adequacy. So suppose there is a model M such that M[T] = T, to show M[F] = T. Since T′ ⊢ F, by soundness, T′ ⊨ F.
(i) Relation symbol. Extend M to a model M′ like M except that for arbitrary d, ⟨d(x1)...d(xn)⟩ ∈ M′[R] iff Md[Q(x1...xn)] = S, iff M′d[Q(x1...xn)] = S (the latter by T10.15, since M and M′ agree on assignments to symbols in Q). Since M′ and M agree on assignments to symbols other than R, by T10.15 M′[T] = T. And M′[Rx⃗ ↔ Q(x⃗)] = T: suppose otherwise; then by TI there is some d such that M′d[Rx1...xn ↔ Q(x1...xn)] ≠ S; so by SF(↔), M′d[Rx1...xn] ≠ S and M′d[Q(x1...xn)] = S (or the other way around); so ⟨d(x1)...d(xn)⟩ ∉ M′[R] and M′d[Q(x1...xn)] = S; but by construction, this is impossible; and similarly in the other case; reject the assumption: M′[Rx⃗ ↔ Q(x⃗)] = T. So M′[T′] = T; so since T′ ⊨ F, M′[F] = T; and by T10.15 again, M[F] = T; and since this reasoning applies for arbitrary M, T ⊨ F; so by adequacy, T ⊢ F.
(ii) Again, the case for constants is left as an exercise.
(iii) Function symbol. Since T ⊢ ∃!yQ(x⃗, y), by soundness T ⊨ ∃!yQ(x⃗, y); so since M[T] = T, M[∃!yQ(x⃗, y)] = T; so by TI, for any d, Md[∃!yQ(x⃗, y)] = S, and there is exactly one m ∈ U such that Md(y|m)[Q(x⃗, y)] = S. Extend M to a model M′ like M except that for arbitrary d, ⟨⟨d(x1)...d(xn)⟩, m⟩ ∈ M′[h] iff Md(y|m)[Q(x1...xn, y)] = S; by T10.15, iff M′d(y|m)[Q(x1...xn, y)] = S. Since M′ and M agree on assignments to symbols other than h, by T10.15 M′[T] = T. And M′[y = hx⃗ ↔ Q(x⃗, y)] = T: suppose otherwise; then by TI there is some h such that M′h[y = hx1...xn ↔ Q(x1...xn, y)] ≠ S; so by SF(↔), M′h[y = hx1...xn] ≠ S and M′h[Q(x1...xn, y)] = S (or the other way around). Where h(y) = m, h = h(y|m); so M′h(y|m)[Q(x1...xn, y)] = S; so by construction with TA(f), M′h[hx1...xn] = m; and since h(y) = m, M′h[y] = m; so M′h[y = hx1...xn] = S; this is impossible; and similarly in the other case; reject the assumption: M′[y = hx⃗ ↔ Q(x⃗, y)] = T. So M′[T′] = T; so since T′ ⊨ F, M′[F] = T; and by T10.15 again, M[F] = T; and since this reasoning applies for arbitrary M, T ⊨ F; so by adequacy, T ⊢ F.
It is, in fact, important to show that these specifications are consistent: that we do not both assert and deny that some objects are in the interpretation of a relation symbol, function symbol or constant when we specify for assignments that are arbitrary. But this is easily done. Here is the case for function symbols.

This specification is consistent: Suppose otherwise; that is, suppose ⟨⟨d(x1)...d(xn)⟩, m⟩ ∈ M′[h] and ⟨⟨h(x1)...h(xn)⟩, m⟩ ∉ M′[h], but d(x1) = h(x1) and ... and d(xn) = h(xn). From the first, Md(y|m)[Q(x1...xn, y)] = S; from the second, Mh(y|m)[Q(x1...xn, y)] ≠ S; but d(y|m) and h(y|m) make the same assignments to variables free in Q(x⃗, y); so by T8.4, Md(y|m)[Q(x⃗, y)] = Mh(y|m)[Q(x⃗, y)]; so Mh(y|m)[Q(x⃗, y)] = S; reject the assumption: if d(x1) = h(x1) and ... and d(xn) = h(xn) and ⟨⟨d(x1)...d(xn)⟩, m⟩ ∈ M′[h], then ⟨⟨h(x1)...h(xn)⟩, m⟩ ∈ M′[h].
And now our desired result is simple. The basic idea is that for some T and L with a defined constant, relation symbol or function symbol, from T13.14 T′ ⊢ F′ ↔ F, and from T13.15 T′ ⊢ F iff T ⊢ F; so that T′ ⊢ F′ iff T ⊢ F. Put more generally,

T13.16. For some defined relation symbols, function symbols or constants, with their associated axioms and conversion procedures, T′ ⊢ F′ iff T ⊢ F.

Consider a sequence of formulas F0...Fn and theories T0...Tn ordered according to the number of new symbols, where for any i, Fi = CFi+1. By our results, Ti+1 ⊢ Fi+1 ↔ Fi, and Ti+1 ⊢ Fi iff Ti ⊢ Fi. It follows that Ti+1 ⊢ Fi+1 iff Ti ⊢ Fi. And by a simple induction, Tn ⊢ Fn iff T0 ⊢ F0, which is to say T′ ⊢ F′ iff T ⊢ F.

In the following, we will be clear about when new symbols and associated axioms are introduced, and about the conditions under which this may be done. In light of the results we have achieved, however, we will not generally distinguish between a theory and its definitional extensions.


It is worth remarking on the increased requirement for definition relative to capture. In particular, for a function, capture requires T ⊢ ∀z[F(m̄1...m̄n, z) → z = ā]. For definition, the comparable condition is T ⊢ ∀y∀z[(F(x⃗, y) ∧ F(x⃗, z)) → y = z]. So definition builds in a sort of generality not required in the other case. Q is great about proving particular facts, but not so great when it comes to generality (this was a sticking point for the shift between Q and Qs in chapter 12, p. 562 and following). But this is just the sort of thing PA is fitted to do. And once we have functions within the theory, we shall be able to manipulate them in ways comparable to the ways we have so far informally manipulated recursive functions.⁸
*E13.5. Show T13.13ae and T13.13ag. Hard core: show each of the results in
T13.13.

E13.6. (i) Complete the unfinished cases for constants in T13.14 and T13.15. (ii)
Show consistency results for both relation and constant symbols.
First applications. Here are a couple of quick results that will be helpful as we move forward. First, if PA defines some functions h(x⃗, w, z⃗) and g(y⃗), then PA defines their composition, f(x⃗, y⃗, z⃗) = h(x⃗, g(y⃗), z⃗).

T13.17. If PA defines some h(x⃗, w, z⃗) and g(y⃗), then PA defines f(x⃗, y⃗, z⃗) = h(x⃗, g(y⃗), z⃗).

Suppose PA defines some h(x⃗, w, z⃗) and g(y⃗). Let,

Def [f(x⃗, y⃗, z⃗)]  PA ⊢ v = f(x⃗, y⃗, z⃗) ↔ v = h(x⃗, g(y⃗), z⃗)

Then,

(i) PA ⊢ ∃v(v = h(x⃗, g(y⃗), z⃗))

1. h(x⃗, g(y⃗), z⃗) = h(x⃗, g(y⃗), z⃗)        =I
2. ∃v(v = h(x⃗, g(y⃗), z⃗))                   1 ∃I

(ii) PA ⊢ ∀u∀v[(u = h(x⃗, g(y⃗), z⃗) ∧ v = h(x⃗, g(y⃗), z⃗)) → u = v]

⁸ Is definition so described necessary for reasoning to follow? We might continue to think in terms of abbreviation, or even unabbreviated formulas themselves, so that there are no new symbols. Even so, the conditions on such formulas would be like those for definition, so that the overall argument would remain the same.


1. | j = h(x⃗, g(y⃗), z⃗) ∧ k = h(x⃗, g(y⃗), z⃗)                      A (g →I)
2. | j = h(x⃗, g(y⃗), z⃗)                                             1 ∧E
3. | k = h(x⃗, g(y⃗), z⃗)                                             1 ∧E
4. | j = k                                                            2,3 =E
5. (j = h(x⃗, g(y⃗), z⃗) ∧ k = h(x⃗, g(y⃗), z⃗)) → j = k             1-4 →I
6. ∀v[(j = h(x⃗, g(y⃗), z⃗) ∧ v = h(x⃗, g(y⃗), z⃗)) → j = v]        5 ∀I
7. ∀u∀v[(u = h(x⃗, g(y⃗), z⃗) ∧ v = h(x⃗, g(y⃗), z⃗)) → u = v]      6 ∀I

So PA ⊢ ∃v(v = h(x⃗, g(y⃗), z⃗)), and PA defines f(x⃗, y⃗, z⃗).
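As a numeric gloss (not from the text) on what the composition clause says: v = f(y⃗) holds just in case some witness w satisfies both G(y⃗, w) and H(w, v). The sample functions g and h below are hypothetical stand-ins for PA-defined functions:

```python
# Toy stand-ins (not from the text) for PA-defined functions g and h.
def g(y): return y + 3
def h(w): return 2 * w

def f(y):
    """The composition f = h o g that T13.17 shows to be PA-definable."""
    return h(g(y))

def satisfies_R(y, v, bound=100):
    """Finite-search check of the defining formula Ew(G(y,w) & H(w,v))."""
    return any(w == g(y) and v == h(w) for w in range(bound))

# f(4) = h(g(4)) = h(7) = 14, and 14 is the unique v satisfying R(4, v)
assert f(4) == 14
assert satisfies_R(4, 14) and not satisfies_R(4, 15)
print("composition witness check passed")
```

The witness w in the existential is just the intermediate value g(y), mirroring line 14 of the derivation for the composition case of T13.21 below.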
In addition, we can introduce a function for minimization. The idea is to set v = μyQ(x⃗, y) ↔ Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z). In the ordinary case, a new function symbol h is introduced with an axiom of the sort v = hx⃗ ↔ Q(x⃗, v) under the condition T ⊢ ∃!vQ(x⃗, v). But, in this case, the situation is simplified by the following theorem.
T13.18. If PA ⊢ ∃vQ(x⃗, v), then PA ⊢ ∃v[Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z)].

(i) Suppose PA ⊢ ∃vQ(x⃗, v). Then by the least number principle T13.13(ag), PA ⊢ ∃v[Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z)].
(ii) Further, PA ⊢ ∀u∀v([Q(x⃗, u) ∧ (∀z < u)¬Q(x⃗, z) ∧ Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z)] → u = v).

 1. | Q(x⃗, j) ∧ (∀z < j)¬Q(x⃗, z) ∧ Q(x⃗, k) ∧ (∀z < k)¬Q(x⃗, z)              A (g →I)
 2. | j < k ∨ j = k ∨ k < j                                                     T13.13o
 3. | | j < k                                                                   A (c ¬I)
 4. | | (∀z < k)¬Q(x⃗, z)                                                       1 ∧E
 5. | | ¬Q(x⃗, j)                                                               4,3 (∀E)
 6. | | Q(x⃗, j)                                                                1 ∧E
 7. | | ⊥                                                                       6,5 ⊥I
 8. | ¬(j < k)                                                                  3-7 ¬I
 9. | | k < j                                                                   A (c ¬I)
10. | | (∀z < j)¬Q(x⃗, z)                                                       1 ∧E
11. | | ¬Q(x⃗, k)                                                               10,9 (∀E)
12. | | Q(x⃗, k)                                                                1 ∧E
13. | | ⊥                                                                       12,11 ⊥I
14. | ¬(k < j)                                                                  9-13 ¬I
15. | j = k                                                                     2,8,14 DS
16. [Q(x⃗, j) ∧ (∀z < j)¬Q(x⃗, z) ∧ Q(x⃗, k) ∧ (∀z < k)¬Q(x⃗, z)] → j = k     1-15 →I
17. ∀v([Q(x⃗, j) ∧ (∀z < j)¬Q(x⃗, z) ∧ Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z)] → j = v)   16 ∀I
18. ∀u∀v([Q(x⃗, u) ∧ (∀z < u)¬Q(x⃗, z) ∧ Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z)] → u = v)  17 ∀I


So under the condition ∃vQ(x⃗, v), we have ∃v[Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z)]. Thus we may define functions for minimization and bounded minimization under revised conditions. Let,

Def [μvQ(x⃗, v)]  PA ⊢ v = μvQ(x⃗, v) ↔ Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z)

(i) PA ⊢ ∃v[Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z)]

(ii) PA ⊢ ∀u∀v([Q(x⃗, u) ∧ (∀z < u)¬Q(x⃗, z) ∧ Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z)] → u = v)

But given T13.18, these conditions are met so long as PA ⊢ ∃vQ(x⃗, v).

And,

Def [(μy ≤ z)Q(x⃗, z, y)]  PA ⊢ v = (μy ≤ z)Q(x⃗, z, y) ↔ v = μy[y = z ∨ Q(x⃗, z, y)]

Let m(x⃗, z) = μy[y = z ∨ Q(x⃗, z, y)]; then we require,

(i) PA ⊢ ∃v(v = m(x⃗, z))

(ii) PA ⊢ ∀u∀v[(u = m(x⃗, z) ∧ v = m(x⃗, z)) → u = v]

These conditions are trivially met so long as m(x⃗, z) is defined; and for this, the existential condition PA ⊢ ∃y[y = z ∨ Q(x⃗, z, y)] follows immediately from PA ⊢ z = z; so the conditions for bounded minimization are automatically satisfied.
Given these notions, we may write down some immediate, simple results.

*T13.19. Let m(x⃗) = μvQ(x⃗, v); then,

(a) PA ⊢ Q(x⃗, m(x⃗)) ∧ (∀z < m(x⃗))¬Q(x⃗, z)
(b) PA ⊢ Q(x⃗, m(x⃗))
(c) PA ⊢ (∀z < m(x⃗))¬Q(x⃗, z)
(d) PA ⊢ Q(x⃗, v) → m(x⃗) ≤ v

Because it is always possible to switch bound variables so that Q is converted to an equivalent Q′ whose bound variables do not overlap with variables free in m(x⃗), we simply assume m(x⃗) is free for v in Q(x⃗, v) (and we will generally make this move). Thus (a) follows from the definition v = m(x⃗) ↔ Q(x⃗, v) ∧ (∀z < v)¬Q(x⃗, z) with v instantiated to m(x⃗), together with m(x⃗) = m(x⃗). Both conjuncts, and so (b) and (c), follow from (a). And (d) can be done in eight or nine lines with (c).

Of these, (a) - (c) simply observe that the definition applies to the function defined. From (d), the least v such that Q(x⃗, v) is always ≤ an arbitrary v such that Q(x⃗, v).
In addition, a couple of results for bounded minimization.
T13.20. The following results in PA,

(a) PA ⊢ (μy ≤ ∅)Q(x⃗, ∅, y) = ∅

(b) If PA ⊢ (∃v ≤ t(u))Q(x⃗, u, v), then (i) PA defines μvQ(x⃗, u, v) and (ii) PA ⊢ (μv ≤ t(u))Q(x⃗, u, v) = μvQ(x⃗, u, v).

Hints: (a) follows easily from the definition. For (b), the existential for (i) follows simply from (∃v ≤ t(u))Q(x⃗, u, v). For (ii),


 1. (∃v ≤ t(u))Q(x⃗, u, v)
 2. n(x⃗, u) = (μv ≤ t(u))Q(x⃗, u, v)                          abv
 3. n(x⃗, u) = μv[v = t(u) ∨ Q(x⃗, u, v)]                      2 def
 4. n(x⃗, u) = t(u) ∨ Q(x⃗, u, n(x⃗, u))                       3 T13.19b
 5. | Q(x⃗, u, a)                                               A (g 1(∃E))
 6. | a ≤ t(u)                                                  A (g 1(∃E))
 7. | a < t(u) ∨ a = t(u)                                       6 T13.13l
 8. | | a = t(u)                                                A (g 7∨E)
 9. | | t(u) = n(x⃗, u) ∨ t(u) ≠ n(x⃗, u)                      T3.1
10. | | | t(u) = n(x⃗, u)                                       A (g 9∨E)
11. | | | Q(x⃗, u, t(u))                                        5,8 =E
12. | | | Q(x⃗, u, n(x⃗, u))                                    11,10 =E
13. | | | t(u) ≠ n(x⃗, u)                                       A (g 9∨E)
14. | | | Q(x⃗, u, n(x⃗, u))                                    4,13 DS
15. | | Q(x⃗, u, n(x⃗, u))                                      9,10-12,13-14 ∨E
16. | | a < t(u)                                                A (g 7∨E)
17. | | a = t(u) ∨ Q(x⃗, u, a)                                  5 ∨I
18. | | n(x⃗, u) ≤ a                                            17 T13.19d
19. | | n(x⃗, u) < t(u)                                         18,16 T13.13c
20. | | n(x⃗, u) ≠ t(u)                                         19 T13.13r
21. | | Q(x⃗, u, n(x⃗, u))                                      4,20 DS
22. | Q(x⃗, u, n(x⃗, u))                                        7,8-15,16-21 ∨E
23. | (∀w < n(x⃗, u))¬[w = t(u) ∨ Q(x⃗, u, w)]                 3 T13.19c
24. | | l < n(x⃗, u)                                            A (g (∀I))
25. | | ¬[l = t(u) ∨ Q(x⃗, u, l)]                               23,24 (∀E)
26. | | l ≠ t(u) ∧ ¬Q(x⃗, u, l)                                 25 DeM
27. | | ¬Q(x⃗, u, l)                                            26 ∧E
28. | (∀w < n(x⃗, u))¬Q(x⃗, u, w)                               24-27 (∀I)
29. | Q(x⃗, u, n(x⃗, u)) ∧ (∀w < n(x⃗, u))¬Q(x⃗, u, w)         22,28 ∧I
30. | n(x⃗, u) = μvQ(x⃗, u, v)                                  29 def
31. n(x⃗, u) = μvQ(x⃗, u, v)                                    1,5-30 (∃E)
32. (μv ≤ t(u))Q(x⃗, u, v) = μvQ(x⃗, u, v)                     31 abv

The most interesting case is the one at (10), where n(x⃗, u) is equal to the bound t(u); then it remains that Q(x⃗, u, n(x⃗, u)), because we have a = t(u) and Q(x⃗, u, a); in the other cases, we get Q(x⃗, u, n(x⃗, u)) because n(x⃗, u) is not equal to the bound. From T13.20a, it does not matter about Q: the least y under the bound ∅ is always ∅. T13.20b converts a bounded minimization into one without a bound; thus when T13.20b applies, results from T13.19 for unbounded minimization apply to the bounded case.
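Computationally (a gloss, not from the text), both parts of T13.20 are easy to observe; mu and mu_bounded below are hypothetical stand-ins for the defined operators:

```python
def mu(Q, search_limit=10**6):
    """Least v with Q(v); assumes a witness exists below search_limit."""
    for v in range(search_limit):
        if Q(v):
            return v
    raise ValueError("mu is defined only when Ev Q(v) holds")

def mu_bounded(Q, z):
    """(mu y <= z)Q(y): least y with y = z or Q(y)."""
    return mu(lambda y: y == z or Q(y))

Q = lambda y: y * y > 20              # least witness is 5

# T13.20(a): under the bound 0, the value is 0 no matter what Q is
assert mu_bounded(Q, 0) == 0
assert mu_bounded(lambda y: False, 0) == 0

# T13.20(b): with a witness under the bound, bounded and unbounded agree
assert mu(Q) == 5 and mu_bounded(Q, 9) == 5

# without a witness under the bound, the bounded operator returns the bound
assert mu_bounded(Q, 3) == 3
print("T13.20 checks passed")
```

The disjunct y = z in mu_bounded plays the same role as in the formal definition: it guarantees a witness (the bound itself), so the bounded operator is always defined.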
*E13.7. Produce the quick derivation to show T13.19d.

E13.8. Complete the unfinished parts of T13.20.

13.3.2 Definitions for recursive functions

We now set out to show that PA defines relations and functions corresponding to
recursive relations and functions. Insofar as we understand what a theorem of PA is,
not all of the demonstrations are required to understand the argument and some
may obscure the overall flow. Thus, for our main argument, we often list results (with
hints), shifting demonstrations into exercises and answers to exercises. To retain
demonstration of results, a great many exercises are in fact worked in the answers
section. Also, since the only constant in LNT is ;, and derivations are becoming
increasingly complex, it will be convenient to suppose that all of a : : : z are variables
of the language.
The core result.

With an eye to the β-function, we begin showing that PA defines remainder rm(m, n) and quotient qt(m, n) functions corresponding to m/(n + 1). Division is by n + 1 to avoid the possibility of division by zero.⁹

*Def [rm] Let PA ⊢ v = rm(m, n) ↔ (∃w ≤ m)[m = Sn × w + v ∧ v < Sn]. Justification:

(i) PA ⊢ ∃x(∃w ≤ m)[m = Sn × w + x ∧ x < Sn]. Hint: This is an argument by IN on m. It is easy to show ∃x(∃w ≤ ∅)[∅ = Sn × w + x ∧ x < Sn], from ∅ = Sn × ∅ + ∅ ∧ ∅ < Sn with (∃I) and ∃I. Then, for the main argument, for the remainder k, k < n ∨ k = n. In the first case Sj is divided by leaving the quotient l the same, and incrementing k; in the second case Sj is divided by Sl with remainder zero.
⁹ A choice is made: another option is to define the functions with an arbitrary value for division by zero. Our selection makes for somewhat unintuitive statements of that which is intuitively true, rather than (relatively) intuitive statements including that which is intuitively undefined or false.


(ii) PA ⊢ ∀x∀y([(∃w ≤ m)(m = Sn × w + x ∧ x < Sn) ∧ (∃w ≤ m)(m = Sn × w + y ∧ y < Sn)] → x = y). Hint: This does not require IN, but is an involved derivation all the same. Once you instantiate the bounded existential quantifiers to quotients p with remainder j and q with remainder k, you have p < q ∨ p = q ∨ q < p. When p = q, j = k follows easily with cancellation for addition. And the other cases contradict. So, if p < q, you will be able to set up an l such that Sl + p = q, and show ¬(j < Sn). And similarly in the other case.

Def [qt] Let PA ⊢ v = qt(m, n) ↔ m = Sn × v + rm(m, n). Justification:

(i) PA ⊢ ∃x(m = Sn × x + rm(m, n)). Hint: By =I, rm(m, n) = rm(m, n); so with Def [rm], (∃w ≤ m)(m = Sn × w + rm(m, n)) ∧ rm(m, n) < Sn; and the result follows easily.

(ii) PA ⊢ ∀x∀y[(m = Sn × x + rm(m, n) ∧ m = Sn × y + rm(m, n)) → x = y]. Hint: This is easy with cancellation laws for addition and multiplication.

Def [β] PA ⊢ β(p, q, i) = rm(p, q × Si). Justification: Since this is a composition of functions, immediate from T13.17.

Observe that, from the definition, PA ⊢ v = β(p, q, i) ↔ (∃w ≤ p)[p = S(q × Si) × w + v ∧ v < S(q × Si)], which is to say PA ⊢ v = β(p, q, i) ↔ B(p, q, i, v), where B is the original formula to express the beta function.
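As a numeric sketch (not from the text), the three functions and their defining conditions can be checked directly; the sample values are hypothetical:

```python
def rm(m, n):
    """Remainder of m on division by Sn = n + 1."""
    return m % (n + 1)

def qt(m, n):
    """Quotient of m on division by Sn = n + 1."""
    return m // (n + 1)

def beta(p, q, i):
    """beta(p, q, i) = rm(p, q x Si): composition as in Def [beta]."""
    return rm(p, q * (i + 1))

m, n = 17, 4
assert m == (n + 1) * qt(m, n) + rm(m, n)    # m = Sn*qt(m,n) + rm(m,n)
assert rm(m, n) < n + 1                       # rm(m,n) < Sn
assert beta(17, 3, 1) == rm(17, 3 * 2)        # the composition clause
print(rm(m, n), qt(m, n))                     # 2 3
```

Since the divisor n + 1 is never zero, the two defining conditions (the division identity and the bound on the remainder) hold for every m and n.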
And now our main argument that PA defines relations and functions corresponding to recursive relations and functions. The main result is for functions; relations follow as an easy corollary. But we shall not be able to show that PA defines relations and functions corresponding to all the recursive relations and functions: Say an application of regular minimization to generate f(x⃗) from g(x⃗, y) is (PA) friendly just in case PA ⊢ ∃yG(x⃗, y, ∅), where G(x⃗, y, v) is the original formula that expresses and captures g(x⃗, y); and a recursive function is (PA) friendly just in case it is an initial function or arises by applications of composition, recursion or friendly regular minimization. Observe that all primitive recursive functions are automatically friendly insofar as they involve no applications of minimization at all.
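A small sketch (not from the text) of regular minimization and the role of the friendly proviso; the sample g here is hypothetical:

```python
def regular_min(g, x, limit=10**6):
    """f(x) = least y with g(x, y) = 0. The friendly proviso corresponds to
    the requirement that such a y exists, so this search terminates."""
    for y in range(limit):
        if g(x, y) == 0:
            return y
    raise ValueError("g has no zero for this x; minimization undefined")

# sample g: zero exactly when y*y >= x
g = lambda x, y: 0 if y * y >= x else 1
assert regular_min(g, 10) == 4    # least y with y*y >= 10
print("regular minimization check passed")
```

When g has no zero for some x, the minimization is undefined at that argument, which is exactly the case the friendly restriction excludes.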
*T13.21. For any friendly recursive function r(x⃗) and original formula R(x⃗, v) by which it is expressed and captured, PA defines a function r(x⃗) such that PA ⊢ v = r(x⃗) ↔ R(x⃗, v).

By induction on the sequence of recursive functions.

Basis: r0(x⃗) is an initial function suc(x), zero(x), or idnt^j_k(x1...xj).


(s) r0 .Ex/ is suc.x/. Let PA ` v D suc.x/ $ S x D v. But S x D v is the
original formula Suc.x; v/ by which suc.x/ is expressed and captured;
so PA ` v D suc.x/ $ Suc.x; v/. And by reasoning as follows,
1. Sx D Sx
2. 9y.Sx D y/

DI
1 9I

1.

Sx D j ^ Sx D k

A (g !I)

2.
3.
4.

Sx D j
Sx D k
j Dk

1 ^E
1 ^E
2,3 DE

5. .Sx D j ^ Sx D k/ ! j D k
6. 8z.Sx D j ^ Sx D z/ ! j D z
7. 8y8z.Sx D y ^ Sx D z/ ! y D z

1-4 !I
5 8I
6 8I

PA ` 9y.S x D y/. So PA defines suc.x/.


(z) r0(x⃗) is zero(x). Let PA ⊢ v = zero(x) ↔ (x = x ∧ v = ∅). Then PA ⊢ v = zero(x) ↔ Zero(x, v). And by (homework) PA defines zero(x).

(i) r0(x⃗) is idnt^j_k(x1...xj). Let PA ⊢ v = idnt^j_k(x1...xj) ↔ [(x1 = x1 ∧ ... ∧ xj = xj) ∧ xk = v]. Then PA ⊢ v = idnt^j_k(x1...xj) ↔ Idnt^j_k(x1...xj, v). And by (homework) PA defines idnt^j_k(x1...xj).
Assp: For any i, 0 ≤ i < k, and ri(x⃗) with Ri(x⃗, v), PA defines ri(x⃗) such that PA ⊢ v = ri(x⃗) ↔ Ri(x⃗, v).

Show: PA defines rk(x⃗) such that PA ⊢ v = rk(x⃗) ↔ Rk(x⃗, v).

rk(x⃗) is either an initial function or arises by composition, recursion or PA friendly regular minimization. If rk(x⃗) is an initial function, then reason as in the basis. So suppose one of the other cases.


(c) rk .Ex; yE; zE/ is h.Ex; g.Ey/; zE/ for some hi .Ex; w; zE/ and gj .Ey/ where i; j < k.
By assumption PA defines h.x;
E w; zE/ such that PA ` v D h.x;
E w; zE/
$ H .x;
E w; zE; v/ and PA defines g.y/
E such that PA ` w D g.y/
E $
G .y;
E w/. Let PA ` rk .x;
E y;
E zE/ D h.x;
E g.y/;
E zE/. Then by T13.17 PA
defines rk . And PA ` v D rk .x/
E $ Rk .x;
E v/. Thus, dropping xE and
zE and reducing yE to a single variable,

 1. r(y) = h(g(y))                               def
 2. v = h(w) ↔ H(w, v)                           by assp
 3. w = g(y) ↔ G(y, w)                           by assp
 4. | v = r(y)                                   A (g ↔I)
 5. | v = h(g(y))                                1,4 =E
 6. | g(y) = g(y)                                =I
 7. | g(y) = g(y) ↔ G(y, g(y))                   3 ∀E
 8. | G(y, g(y))                                 7,6 ↔E
 9. | h(g(y)) = h(g(y))                          =I
10. | h(g(y)) = h(g(y)) ↔ H(g(y), h(g(y)))       2 ∀E
11. | H(g(y), h(g(y)))                           10,9 ↔E
12. | H(g(y), v)                                 11,5 =E
13. | G(y, g(y)) ∧ H(g(y), v)                    8,12 ∧I
14. | ∃w[G(y, w) ∧ H(w, v)]                      13 ∃I

15. | ∃w[G(y, w) ∧ H(w, v)]                      A (g ↔I)
16. | | G(y, j) ∧ H(j, v)                        A (g 15∃E)
17. | | j = g(y) ↔ G(y, j)                       3 ∀E
18. | | G(y, j)                                  16 ∧E
19. | | j = g(y)                                 17,18 ↔E
20. | | v = h(j) ↔ H(j, v)                       2 ∀E
21. | | H(j, v)                                  16 ∧E
22. | | v = h(j)                                 20,21 ↔E
23. | | v = h(g(y))                              22,19 =E
24. | | v = r(y)                                 1,23 =E
25. | v = r(y)                                   15,16-24 ∃E

26. v = r(y) ↔ ∃w[G(y, w) ∧ H(w, v)]             4-14,15-25 ↔I

In the first subderivation, as usual, we suppose that quantifiers are arranged so that substitutions are allowed, and in particular so that g(y) is free for w in H(w, v) and G(y, w). Thus, with dropped variables restored we have PA ⊢ v = rk(x⃗, y⃗, z⃗) ↔ ∃w[G(y⃗, w) ∧ H(x⃗, w, z⃗, v)], which is to say PA ⊢ v = rk(x⃗) ↔ Rk(x⃗, v).
(r) rk(x⃗, y) arises by recursion from some gi(x⃗) and hj(x⃗, y, u) where i, j < k. By assumption PA defines g(x⃗) such that PA ⊢ v = g(x⃗) ↔ G(x⃗, v), and PA defines h(x⃗, y, u) such that PA ⊢ v = h(x⃗, y, u) ↔ H(x⃗, y, u, v). Let PA ⊢ z = rk(x⃗, y) ↔

∃p∃q[β(p, q, ∅) = g(x⃗) ∧ (∀i < y)(h(x⃗, i, β(p, q, i)) = β(p, q, Si)) ∧ β(p, q, y) = z]

By the argument of the next section, PA defines r.x;


E y/. And where
the original R.x;
E y; z/ D

CHAPTER 13. GDELS THEOREMS

630

9p9qf9vB.p; q; ;; v/^G .x;


E v/^.8i < y/9u9vB.p; q; i; u/^B.p; q; S i; v/^H .x;
E i; u; v/^
B.p; q; y; z/g

we require PA ` z D rk .x;
E y/ $ Rk .x;
E y; z/. Here is the argument
from left to right.
 1. v = β(p, q, i) ↔ B(p, q, i, v)                                        def
 2. v = g(x⃗) ↔ G(x⃗, v)                                                   assp
 3. v = h(x⃗, y, u) ↔ H(x⃗, y, u, v)                                       assp
 4. | z = r(x⃗, y)                                                        A (g →I)
 5. | ∃p∃q[β(p,q,∅) = g(x⃗) ∧ (∀i < y)h(x⃗,i,β(p,q,i)) = β(p,q,Si) ∧ β(p,q,y) = z]    4 def r
 6. | | β(a,b,∅) = g(x⃗) ∧ (∀i < y)h(x⃗,i,β(a,b,i)) = β(a,b,Si) ∧ β(a,b,y) = z        A (g 5∃E)
 7. | | β(a,b,∅) = g(x⃗)                                                  6 ∧E
 8. | | G(x⃗, g(x⃗))                                                       from 2
 9. | | B(a,b,∅,β(a,b,∅))                                                from 1
10. | | B(a,b,∅,g(x⃗))                                                    7,9 =E
11. | | B(a,b,∅,g(x⃗)) ∧ G(x⃗, g(x⃗))                                      10,8 ∧I
12. | | ∃v(B(a,b,∅,v) ∧ G(x⃗, v))                                         11 ∃I
13. | | (∀i < y)h(x⃗,i,β(a,b,i)) = β(a,b,Si)                              6 ∧E
14. | | | l < y                                                          A (g (∀I))
15. | | | h(x⃗,l,β(a,b,l)) = β(a,b,Sl)                                    13,14 (∀E)
16. | | | B(a,b,l,β(a,b,l))                                              from 1
17. | | | B(a,b,Sl,β(a,b,Sl))                                            from 1
18. | | | H(x⃗,l,β(a,b,l),h(x⃗,l,β(a,b,l)))                               from 3
19. | | | H(x⃗,l,β(a,b,l),β(a,b,Sl))                                      18,15 =E
20. | | | B(a,b,l,β(a,b,l)) ∧ B(a,b,Sl,β(a,b,Sl)) ∧ H(x⃗,l,β(a,b,l),β(a,b,Sl))       16,17,19 ∧I
21. | | | ∃u∃v(B(a,b,l,u) ∧ B(a,b,Sl,v) ∧ H(x⃗,l,u,v))                    20 ∃I
22. | | (∀i < y)∃u∃v(B(a,b,i,u) ∧ B(a,b,Si,v) ∧ H(x⃗,i,u,v))              14-21 (∀I)
23. | | β(a,b,y) = z                                                     6 ∧E
24. | | B(a,b,y,β(a,b,y))                                                from 1
25. | | B(a,b,y,z)                                                       23,24 =E
26. | | ∃v(B(a,b,∅,v) ∧ G(x⃗,v)) ∧ (∀i < y)∃u∃v(B(a,b,i,u) ∧ B(a,b,Si,v) ∧ H(x⃗,i,u,v)) ∧ B(a,b,y,z)    12,22,25 ∧I
27. | | ∃p∃q{∃v(B(p,q,∅,v) ∧ G(x⃗,v)) ∧ (∀i < y)∃u∃v(B(p,q,i,u) ∧ B(p,q,Si,v) ∧ H(x⃗,i,u,v)) ∧ B(p,q,y,z)}    26 ∃I
28. | | R(x⃗, y, z)                                                       27 def
29. | R(x⃗, y, z)                                                         5,6-28 ∃E
30. z = r(x⃗, y) → R(x⃗, y, z)                                             4-29 →I

The other direction is left as an exercise.


(m) r_k(x⃗) arises by friendly regular minimization from g(x⃗, y). By assumption PA defines g(x⃗, y) such that PA ⊢ v = g(x⃗, y) ↔ G(x⃗, y, v), where G is the original formula to express and capture g. Let PA ⊢ r_k(x⃗) = μy G(x⃗, y, ∅). Since the minimization is friendly, PA ⊢ ∃y G(x⃗, y, ∅); so by T13.19, PA defines r_k(x⃗). And by definition, PA ⊢ v = r_k(x⃗) ↔ G(x⃗, v, ∅) ∧ (∀y < v)∼G(x⃗, y, ∅). So PA ⊢ v = r_k(x⃗) ↔ R_k(x⃗, v).

Indct: For any friendly recursive function r(x⃗) and the original formula R(x⃗, v) by which it is expressed and captured, PA defines a function r(x⃗) such that PA ⊢ v = r(x⃗) ↔ R(x⃗, v) (subject to the recursion clause).
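The regular-minimization case has a direct computational analogue. As a minimal sketch outside the PA development (the helper name `mu` is mine), μy g(x⃗, y) = ∅ returns the least y at which g vanishes, and "friendly" regularity corresponds to the assumption that such a y exists:

```python
def mu(g, x):
    """Least y with g(x, y) == 0; assumes regularity: some such y exists."""
    y = 0
    while g(x, y) != 0:
        y += 1
    return y

# Example: g vanishes once y*y >= x, so mu computes the ceiling of sqrt(x).
g = lambda x, y: max(x - y * y, 0)
print(mu(g, 10))  # 4
```

If no zero exists the search never halts — which is why the formal development demands the provable existence claim PA ⊢ ∃y G(x⃗, y, ∅) before defining the function symbol.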
*E13.9. Complete the justifications for Def [rm] and Def [qt].

*E13.10. Complete the unfinished cases of T13.21. You should set up the entire induction, but may refer to the text insofar as the text refers unfinished cases to homework.

First theorems of chapter 13

T13.1 For any recursively axiomatized theory T whose language includes L_NT, G is true iff it is unprovable in T (iff T ⊬ G).
T13.2 If T is a recursively axiomatized sound theory whose language includes L_NT, then T is negation incomplete.
T13.3 Let T be any recursively axiomatized theory extending Q; then T ⊢ G ↔ ∼∃xPrft(x, ⌜G⌝).
T13.4 If T is a consistent, recursively axiomatized theory extending Q, then T ⊬ G.
T13.5 If T is an ω-consistent, recursively axiomatized theory extending Q, then T ⊬ ∼G.
T13.6 Let T be any recursively axiomatized theory extending Q; then T ⊢ R ↔ ∼∃xRPrf(x, ⌜R⌝).
T13.7 If T is a consistent, recursively axiomatized theory extending Q, then T ⊬ R.
T13.8 If T is a consistent, recursively axiomatized theory extending Q, then T ⊬ ∼R.
T13.9 Let T be a recursively axiomatized theory extending Q. Then supposing T satisfies the derivability conditions and so the K4 logic of provability, T ⊢ Cont → ∼Prvt(⌜G⌝).
T13.10 Let T be a recursively axiomatized theory extending Q. Then supposing T satisfies the derivability conditions, if T is consistent, T ⊬ Cont.
T13.11 Let T be a recursively axiomatized theory extending Q. Then supposing T satisfies the derivability conditions and so the K4 logic of provability, T ⊢ Cont ↔ ∼Prvt(⌜Cont⌝).
T13.12 Suppose T is a recursively axiomatized theory extending Q. Then if T ⊢ P, then T ⊢ Prvt(⌜P⌝).
T13.13 This lists a number of straightforward theorems of PA.
T13.14 For a defined relation symbol, function symbol, or constant, with its associated axiom and conversion procedure, T′ ⊢ F′ ↔ F.
T13.15 For a defined relation symbol, function symbol, or constant, with its associated axiom, and any formula F in the original language, T′ ⊢ F iff T ⊢ F.
T13.16 For some defined relation symbols, function symbols, or constants, with their associated axioms and conversion procedures, T′ ⊢ F′ iff T ⊢ F.
T13.17 If PA defines some h(x⃗, w, z⃗) and g(y⃗), then PA defines f(x⃗, y⃗, z⃗) = h(x⃗, g(y⃗), z⃗).
T13.18 If PA ⊢ ∃vQ(x⃗, v), then PA ⊢ ∃v(Q(x⃗, v) ∧ (∀z < v)∼Q(x⃗, z)).
T13.19 Where m(x⃗) = μvQ(x⃗, v): (a) PA ⊢ Q(x⃗, m(x⃗)) ∧ (∀z < m(x⃗))∼Q(x⃗, z); (b) PA ⊢ Q(x⃗, m(x⃗)); (c) PA ⊢ (∀z < m(x⃗))∼Q(x⃗, z); (d) PA ⊢ Q(x⃗, v) → m(x⃗) ≤ v.
T13.20 (a) PA ⊢ (μy ≤ ∅)Q(x⃗, ∅, y) = ∅; (b) if PA ⊢ (∃v ≤ t(u))Q(x⃗, u, v) then (i) PA defines (μv ≤ t(u))Q(x⃗, u, v) and (ii) PA ⊢ (μv ≤ t(u))Q(x⃗, u, v) = μvQ(x⃗, u, v).
T13.21 For any friendly recursive function r(x⃗) and original formula R(x⃗, v) by which it is expressed and captured, PA defines a function r(x⃗) such that PA ⊢ v = r(x⃗) ↔ R(x⃗, v). This theorem depends on conditions for the recursion clause and so on T13.22 and T13.31.

The Recursion Clause

We turn now to a series of results with the aim of showing that PA defines r in the case when r arises by recursion. This will require a series of definitions and results in PA. Some of the functions so defined are like ones that result from recursive functions. However, insofar as we have not yet proved the core result, we cannot use it! So we are showing directly that PA gives the required results.

Uniqueness. It will be easiest to begin with the uniqueness clause. Where F(x⃗, y, v) is our formula,

   ∃p∃q[β(p, q, ∅) = g(x⃗) ∧ (∀i < y)h(x⃗, i, β(p, q, i)) = β(p, q, Si) ∧ β(p, q, y) = v]

we want PA ⊢ ∀m∀n((F(x⃗, y, m) ∧ F(x⃗, y, n)) → m = n). The argument is structured very much as for the parallel uniqueness case in Q (T12.12), except that the argument is in PA and so by IN, and uniqueness conditions are simplified by the use of function symbols. The argument is simplified — but that does not mean that it is simple!

T13.22. With F(x⃗, y, v) as described above, PA ⊢ ∀m∀n((F(x⃗, y, m) ∧ F(x⃗, y, n)) → m = n).

The zero case is simple enough and left as an exercise. Given the zero case, here is the main argument by IN.

 1. ∀m∀n((F(x⃗, ∅, m) ∧ F(x⃗, ∅, n)) → m = n)                             zero case
 2. | ∀m∀n((F(x⃗, j, m) ∧ F(x⃗, j, n)) → m = n)                           A (g →I)
 3. | | F(x⃗, Sj, u) ∧ F(x⃗, Sj, v)                                       A (g →I)
 4. | | ∃p∃q[β(p,q,∅) = g(x⃗) ∧ (∀i < Sj)h(x⃗,i,β(p,q,i)) = β(p,q,Si) ∧ β(p,q,Sj) = u]    3 ∧E
 5. | | ∃p∃q[β(p,q,∅) = g(x⃗) ∧ (∀i < Sj)h(x⃗,i,β(p,q,i)) = β(p,q,Si) ∧ β(p,q,Sj) = v]    3 ∧E
 6. | | | β(a,b,∅) = g(x⃗) ∧ (∀i < Sj)h(x⃗,i,β(a,b,i)) = β(a,b,Si) ∧ β(a,b,Sj) = u        A (g 4∃E)
 7. | | | β(a,b,∅) = g(x⃗)                                               6 ∧E
 8. | | | (∀i < Sj)h(x⃗,i,β(a,b,i)) = β(a,b,Si)                          6 ∧E
 9. | | | β(a,b,Sj) = u                                                 6 ∧E
10. | | | | β(c,d,∅) = g(x⃗) ∧ (∀i < Sj)h(x⃗,i,β(c,d,i)) = β(c,d,Si) ∧ β(c,d,Sj) = v      A (g 5∃E)
11. | | | | β(c,d,∅) = g(x⃗)                                             10 ∧E
12. | | | | (∀i < Sj)h(x⃗,i,β(c,d,i)) = β(c,d,Si)                        10 ∧E
13. | | | | β(c,d,Sj) = v                                               10 ∧E
14. | | | | j < Sj                                                      T13.13g
15. | | | | h(x⃗,j,β(a,b,j)) = β(a,b,Sj)                                 8,14 (∀E)
16. | | | | h(x⃗,j,β(c,d,j)) = β(c,d,Sj)                                 12,14 (∀E)
17. | | | | | k < j                                                     A (g (∀I))
18. | | | | | k < Sj                                                    17, T13.13m
19. | | | | | h(x⃗,k,β(a,b,k)) = β(a,b,Sk)                               8,18 (∀E)
20. | | | | (∀i < j)h(x⃗,i,β(a,b,i)) = β(a,b,Si)                         17-19 (∀I)
21. | | | | β(a,b,j) = β(a,b,j)                                         =I
22. | | | | β(a,b,∅) = g(x⃗) ∧ (∀i < j)h(x⃗,i,β(a,b,i)) = β(a,b,Si) ∧ β(a,b,j) = β(a,b,j)    7,20,21 ∧I
23. | | | | ∃p∃q[β(p,q,∅) = g(x⃗) ∧ (∀i < j)h(x⃗,i,β(p,q,i)) = β(p,q,Si) ∧ β(p,q,j) = β(a,b,j)]    22 ∃I
24. | | | | F(x⃗, j, β(a,b,j))                                           23 abv
25. | | | | | k < j                                                     A (g (∀I))
26. | | | | | k < Sj                                                    25, T13.13m
27. | | | | | h(x⃗,k,β(c,d,k)) = β(c,d,Sk)                               12,26 (∀E)
28. | | | | (∀i < j)h(x⃗,i,β(c,d,i)) = β(c,d,Si)                         25-27 (∀I)
29. | | | | β(c,d,j) = β(c,d,j)                                         =I
30. | | | | β(c,d,∅) = g(x⃗) ∧ (∀i < j)h(x⃗,i,β(c,d,i)) = β(c,d,Si) ∧ β(c,d,j) = β(c,d,j)    11,28,29 ∧I
31. | | | | ∃p∃q[β(p,q,∅) = g(x⃗) ∧ (∀i < j)h(x⃗,i,β(p,q,i)) = β(p,q,Si) ∧ β(p,q,j) = β(c,d,j)]    30 ∃I
32. | | | | F(x⃗, j, β(c,d,j))                                           31 abv
33. | | | | β(a,b,j) = β(c,d,j)                                         2,24,32 ∀E
34. | | | | h(x⃗,j,β(c,d,j)) = β(a,b,Sj)                                 15,33 =E
35. | | | | β(a,b,Sj) = β(c,d,Sj)                                       34,16 =E
36. | | | | u = v                                                       9,13,35 =E
37. | | | u = v                                                         5,10-36 ∃E
38. | | u = v                                                           4,6-37 ∃E
39. | (F(x⃗, Sj, u) ∧ F(x⃗, Sj, v)) → u = v                               3-38 →I
40. | ∀m∀n((F(x⃗, Sj, m) ∧ F(x⃗, Sj, n)) → m = n)                         39 ∀I
41. ∀m∀n((F(x⃗, j, m) ∧ F(x⃗, j, n)) → m = n) → ∀m∀n((F(x⃗, Sj, m) ∧ F(x⃗, Sj, n)) → m = n)    2-40 →I
42. ∀y[∀m∀n((F(x⃗, y, m) ∧ F(x⃗, y, n)) → m = n) → ∀m∀n((F(x⃗, Sy, m) ∧ F(x⃗, Sy, n)) → m = n)]    41 ∀I
43. ∀y∀m∀n((F(x⃗, y, m) ∧ F(x⃗, y, n)) → m = n)                           1,42 IN
44. ∀m∀n((F(x⃗, y, m) ∧ F(x⃗, y, n)) → m = n)                             43 ∀E

As before, the key to this argument is attaining F(x⃗, j, β(a,b,j)) and F(x⃗, j, β(c,d,j)) on lines (24) and (32). From these the assumption on (2) comes into play, and the result follows with other equalities.

*E13.11. Complete the demonstration for T13.22 by completing the demonstration of the zero case.
Existence. Considerably more difficult is the existential condition. To show this, we must show the Chinese remainder theorem in PA. Again, we build by a series of results.

First, subtraction with cutoff. The definition is not recursive as before. However, the effect is the same: x ∸ y works like subtraction when x ≥ y, and otherwise goes to ∅.

*Def [∸] PA ⊢ v = x ∸ y ↔ (x = y + v ∨ (x < y ∧ v = ∅))
   (i) PA ⊢ ∃v(x = y + v ∨ (x < y ∧ v = ∅))
   (ii) PA ⊢ ∀m∀n[((x = y + m ∨ (x < y ∧ m = ∅)) ∧ (x = y + n ∨ (x < y ∧ n = ∅))) → m = n]

The proof of (i) and (ii) is left as an exercise. So PA defines (∸). And it proves a series of intuitive results.

*T13.23. The following result in PA:
(a) PA ⊢ a ≥ b → a = b + (a ∸ b)
(b) PA ⊢ b ≥ a → a ∸ b = ∅
(c) PA ⊢ a ∸ b ≤ a
*(d) PA ⊢ a > b → a ∸ b > ∅
(e) PA ⊢ a ∸ ∅ = a
(f) PA ⊢ Sa ∸ a = 1
(g) PA ⊢ a > ∅ → a ∸ 1 < a
*(h) PA ⊢ a ≥ c → (a ∸ c) + b = (a + b) ∸ c
*(i) PA ⊢ (a ∸ b) ∸ c = a ∸ (b + c)
(j) PA ⊢ (a + c) ∸ (b + c) = a ∸ b
*(k) PA ⊢ a·(b ∸ c) = a·b ∸ a·c

Hints. (d): with the assumption you can get both a = Sj + b and a = b + (a ∸ b); then you have what you need with T6.66. (h): with the assumption a ≥ c you have also a + b ≥ c; so that both a = c + (a ∸ c) and a + b = c + ((a + b) ∸ c); then =E and T6.66 do the work. (i): First, a ≥ b + c ∨ a < b + c; in the second case, a ≥ b ∨ a < b; in each of these cases, both sides equal ∅; for the first main option, you will be able to show that (b + c) + ((a ∸ b) ∸ c) = (b + c) + (a ∸ (b + c)) and apply T6.66. (k): First a = ∅ ∨ a > ∅; in the first case, both sides equal ∅; then in the second case, b < c ∨ b ≥ c; again, in the first of these cases both sides equal ∅; in the last case, you will be able to show a·c + a·(b ∸ c) = a·c + (a·b ∸ a·c) and apply T6.66.

Many of these state standard results for subtraction, except where the inequalities are required to protect against cases when a ∸ b goes to ∅. (a) and (b) extract basic facts from the definition upon which others depend. (c)–(g) are simple subtraction facts. And (h)–(k) are some results for association and distribution.
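As a computational sanity check outside PA, cutoff subtraction and the identities of T13.23(h)–(k) can be spot-checked over small numbers; the helper name `monus` is mine:

```python
def monus(x, y):
    """Cutoff subtraction: x - y when x >= y, and 0 otherwise."""
    return x - y if x >= y else 0

# Spot-check T13.23 (h)-(k) over a small range of values.
for a in range(8):
    for b in range(8):
        for c in range(8):
            assert monus(monus(a, b), c) == monus(a, b + c)      # (i)
            assert monus(a + c, b + c) == monus(a, b)            # (j)
            assert a * monus(b, c) == monus(a * b, a * c)        # (k)
            if a >= c:
                assert monus(a, c) + b == monus(a + b, c)        # (h)
print("identities hold")
```

The guard `a >= c` on (h) mirrors the inequality in the theorem statement: without it, the left side can be "clipped" to ∅ while the right side is not.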
Next, factor. Again, consistent with remainder and quotient, we say m|n when m + 1 divides n.

Def [|] PA ⊢ m|n ↔ ∃q(Sm·q = n)

Since factor is a relation, no condition is required over and above the axiom, so that the definition is good as it stands. And, again, PA proves a series of results. These are reasonably intuitive. Observe, however, that our choice to divide by m + 1 means that, as in T13.24a below, ∅|a.

*T13.24. The following result in PA:
(a) PA ⊢ ∅|a
(b) PA ⊢ a|Sa
(c) PA ⊢ a|∅
(d) PA ⊢ a|b → a|(b·c)
(e) PA ⊢ (a|Sb ∧ b|c) → a|c
*(f) PA ⊢ a|b → (a|(b + c) ↔ a|c)
(g) PA ⊢ (b ≥ c ∧ a|b) → (a|(b ∸ c) ↔ a|c)
(h) PA ⊢ b > a → ∼b|Sa
(i) PA ⊢ a|b ↔ rm(b, a) = ∅
*(j) PA ⊢ rm(a + (y·Sd), d) = rm(a, d)
*(k) PA ⊢ Sd·z ≤ a → z ≤ qt(a, d)
*(l) PA ⊢ a ≥ y·Sd → rm(a ∸ (y·Sd), d) = rm(a, d)

Hints. (f): The assumption a|b gives Sa·j = b; then a|(b + c) gives Sa·k = b + c; you will have to show j ≤ k, so that l + j = k; a|c follows with these; then a|c gives Sa·k = c and you will be able to substitute for both b and c to get (Sa·j) + (Sa·k) = b + c; the result follows with this. (j): From the assumption you have a = (Sd·j) + r ∧ r < Sd; and if you assert a + (y·Sd) = a + (y·Sd) by =I you should be able to show a + (y·Sd) = Sd·(j + y) + r ∧ r < Sd; then with j + y ≤ a + (y·Sd) you can apply (∃I) and the definition. (k): With r = rm(a, d) and q = qt(a, d), by Def [qt] you have a = Sd·q + r ∧ r < Sd; assume Sd·z ≤ a for →I and z > q for ∼I; then you should be able to show a < Sd·z to contradict the assumption for →I. (l): Again let r = rm(a, d) and q = qt(a, d); then by Def [qt] you have a = Sd·q + r ∧ r < Sd; assume a ≥ y·Sd for →I; you should be able to show a ∸ (y·Sd) = Sd·(q ∸ y) + r ∧ r < Sd toward (∃w ≤ a ∸ (y·Sd))(a ∸ (y·Sd) = Sd·w + r ∧ r < Sd) by (∃I), to apply Def [rm].

So (a): (the successor of) ∅ divides any number; (b): (the successor of) a divides Sa; and (c): any number divides into ∅ zero times. (d): if a divides b then it divides b·c; and (e): if a divides Sb and (the successor of) b divides c, then a divides c. (f) is like (b + c)/a = b/a + c/a, so that dividing the sum breaks into dividing the members; (g) is the comparable principle for subtraction. From (h), if b > a, then (the successor of) b does not divide Sa. (i) makes the obvious connection between remainder and factor. In (j) the remainder of the second part (y·Sd) is ∅, so that the remainder of the sum is just whatever there is from the first, rm(a, d); (l) is the comparable principle for subtraction. The intervening (k) is required for (l), and tells us that if z multiples of (the successor of) d come to ≤ a, then z ≤ qt(a, d) — since the quotient maximizes the multiples of (the successor of) d that are ≤ a.
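The successor convention for factor can be mirrored directly; the following sketch (with my hypothetical name `divides`) checks T13.24(a)–(c) numerically:

```python
def divides(m, n):
    """The book's factor relation m|n: m + 1 divides n."""
    return n % (m + 1) == 0

assert divides(0, 7)                        # T13.24a: 0|a, since 1 divides anything
assert divides(4, 5)                        # T13.24b: a|Sa
assert divides(9, 0)                        # T13.24c: a|0
assert divides(2, 6) and not divides(2, 7)  # 3 divides 6 but not 7
print("T13.24(a)-(c) spot-checked")
```

Dividing by m + 1 rather than m is what removes any worry about division by zero while still letting every ordinary divisor be expressed.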
And now PA defines relations prime and relatively prime. Prime has its usual sense. And numbers are relatively prime when they have no common divisor other than one — though they may not therefore individually be prime. Though division is by successor, these notions are given their usual sense by adjusting the numbers that are said to divide.

Def [Pr] PA ⊢ Pr(n) ↔ 1 < n ∧ ∀x(x|n → (x = ∅ ∨ Sx = n))
Def [Rp] PA ⊢ Rp(a, b) ↔ ∀x((x|a ∧ x|b) → x = ∅)

Since these are relations, no condition is required over and above the axioms. Observe that 1 is relatively prime with anything, since the only number that divides both 1 and some b is (the successor of) ∅; so ∅ is relatively prime with 1, but not with anything else — since for b = Sl with l > ∅, both l|∅ and l|b. It will also be helpful to introduce a couple of subsidiary notions. When G(a, b, i) we say that i is good, and d is the least good (with a and b):

Def [G] PA ⊢ G(a, b, i) ↔ ∃x∃y(a·x + i = b·y)
Def [d] PA ⊢ d(a, b) = μv((a > ∅ ∧ b > ∅) → G(a, b, Sv))
   (i) PA ⊢ ∃v((a > ∅ ∧ b > ∅) → G(a, b, Sv))

Begin with b = ∅ ∨ b > ∅ and go for the existentially quantified goal. In the second case, there is some l such that b = Sl, and it is easy to show a·∅ + b = b·1 and generalize.

Again, PA proves a series of results. Observe again that if we are interested in whether a prime divides some b, we are interested in whether Pr(Sa) ∧ a|b — since it is the successor that is divided into b.
*T13.25. The following result in PA:
(a) PA ⊢ ∼Pr(∅)
(b) PA ⊢ ∼Pr(1)
(c) PA ⊢ Pr(2)
*(d) PA ⊢ ∀x(x > 1 → ∃z(Pr(Sz) ∧ z|x))
*(e) PA ⊢ Rp(a, b) ↔ ∼∃x(Pr(Sx) ∧ x|a ∧ x|b)
(f) PA ⊢ ∀x∀y(G(a, b, x) → G(a, b, x·y))
*(g) PA ⊢ (a > ∅ ∧ b > ∅) → ∀x∀y((G(a, b, x) ∧ G(a, b, y) ∧ x ≥ y) → G(a, b, x ∸ y))
*(h) PA ⊢ (Rp(a, b) ∧ a > ∅ ∧ b > ∅) → G(a, b, 1)
*(i) PA ⊢ (Pr(Sa) ∧ a|(b·c)) → (a|b ∨ a|c)

Hints. (c): This is straightforward with T13.24h. (d): You can do this by the second form of strong induction T13.13af; the zero case is trivial; to reach ∀x{(∀y ≤ x)(y > 1 → ∃z(Pr(Sz) ∧ z|y)) → (Sx > 1 → ∃z(Pr(Sz) ∧ z|Sx))}, assume (∀y ≤ k)(y > 1 → ∃z(Pr(Sz) ∧ z|y)) and Sk > 1; then Sk is prime or not; if it is prime, the result is immediate; if it is not, you will be able to show Sj ≤ k and apply the assumption. (e): From left to right, under the assumption for ↔I assume ∃x(Pr(Sx) ∧ x|a ∧ x|b) and Pr(Sj) ∧ j|a ∧ j|b for ∼I and ∃E; then you should be able to show both 1 < Sj and ∼(1 < Sj); in the other direction, under the assumption for ↔I and then j|a ∧ j|b for →I, j = ∅ ∨ j > ∅ by T13.13f; the latter is impossible, which gives the result you want. (g): Under the assumptions a > ∅ ∧ b > ∅ and then G(a, b, i) ∧ G(a, b, j) ∧ i ≥ j for →I, and then a·p + i = b·q and a·r + j = b·s for ∃E, starting with (b·q + b·a·r) + (b·s·a ∸ b·s) = (b·q + b·a·r) + (b·s·a ∸ b·s) by =I, with some effort you will be able to show a·(p + b·s) + (b·r ∸ r) + (i ∸ j) = b·(q + a·r) + (s·a ∸ s) and generalize. (i): Under the assumption Pr(Sa) ∧ a|(b·c), assume a∤b with the idea of obtaining a∤b → a|c for Impl; set out to show Rp(b, Sa) for an application of T13.25h, to get ∃x∃y(b·x + 1 = Sa·y); with this, you will have b·p + 1 = Sa·q by ∃E; and you should be able to show a|c·b·p and a|(c·b·p + c) for an application of T13.24f.

T13.25h is important. But the argument is relatively complex; it has the following main stages.

 1. (a > ∅ ∧ b > ∅) → G(a, b, Sd(a, b)) ∧ (∀y < d(a, b))∼((a > ∅ ∧ b > ∅) → G(a, b, Sy))    def d
 2. (a > ∅ ∧ b > ∅) → G(a, b, Sd(a, b))                                 1 ∧E
 3. | Rp(a, b) ∧ a > ∅ ∧ b > ∅                                          A (g →I)
 4. | Rp(a, b)                                                          3 ∧E
 5. | ∀x((x|a ∧ x|b) → x = ∅)                                           4 def
 6. | a > ∅ ∧ b > ∅                                                     3 ∧E
 7. | G(a, b, Sd(a, b))                                                 2,6 →E
 8. | G(a, b, a)                                                        [a]
 9. | G(a, b, b)                                                        [b]
10. | ∀x(G(a, b, x) → d(a, b)|x)                                        [c]
11. | d(a, b)|a                                                         8,10 ∀E
12. | d(a, b)|b                                                         9,10 ∀E
13. | d(a, b)|a ∧ d(a, b)|b                                             11,12 ∧I
14. | d(a, b) = ∅                                                       5,13 ∀E
15. | G(a, b, 1)                                                        7,14 =E
16. (Rp(a, b) ∧ a > ∅ ∧ b > ∅) → G(a, b, 1)                             3-15 →I

Hint. For (c) let q = qt(i, d(a, b)) and r = rm(i, d(a, b)); then from the definitions you have i = (Sd(a, b)·q) + r and r < Sd(a, b), and from (1) of the main argument, (∀y < d(a, b))∼((a > ∅ ∧ b > ∅) → G(a, b, Sy)); then under the assumption G(a, b, i) for →I you should be able to show G(a, b, i ∸ (Sd(a, b)·q)) using (6) from the main argument with (f) and (g); but also i ∸ (Sd(a, b)·q) = r, so that G(a, b, r). Now the assumption that r is a successor leads to contradiction; so r = ∅ and d(a, b)|i.

T13.25(a)–(c) are simple particular facts. From (d), every number greater than one is divided by some prime (which may or may not be itself). From (e), a and b are relatively prime iff there is no prime that divides them both; in one direction this is obvious — if a prime divides them both, then they are not relatively prime; in the other direction, if some number other than (the successor of) zero divides them both, then some prime of it divides them both. (f) and (g) let you manipulate G; they are required for (h), which is in turn required for (i) — according to which, if Sa is prime and (the successor of) a divides b·c, then (the successor of) a divides b or c; if Sa is prime and divides b·c, then it must appear in the factorization of b or the factorization of c, so that it divides one or the other.
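T13.25h has the flavor of the Bézout identity. A brute-force numerical sketch (my hypothetical helper `good` for G, with ordinary gcd standing in for Rp in its successor sense):

```python
from math import gcd

def good(a, b, i, bound=200):
    """G(a, b, i): some x, y with a*x + i == b*y (searched up to a bound)."""
    return any((a * x + i) % b == 0 for x in range(bound))

# Rp(a, b) in the book's successor sense amounts to gcd(a, b) == 1.
for a in range(1, 12):
    for b in range(1, 12):
        if gcd(a, b) == 1:
            assert good(a, b, 1)  # T13.25h: 1 is good when a, b are coprime
print("T13.25h spot-checked")
```

The search bound is safe here because, when a solution exists at all, one exists with x < b.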
Now, least common multiple. Given a function m(i), lcm{m(i) | i < k} is the least y > ∅ such that for any i < k, Sm(i) divides y. We avoid worries about the case when m(i) = ∅ by our usual account of factor. And since y > ∅ it is possible to define a predecessor to the least common multiple, helpful when switching between the numerator and denominator of fractions.

*Def [lcm] lcm{m(i) | i < k} = μv(v > ∅ ∧ (∀i < k)m(i)|v)
   (i) PA ⊢ ∃x(x > ∅ ∧ (∀i < k)m(i)|x)
   Hint: This is an argument by IN on k. For the basis, you may assert that 1 > ∅; then the argument is trivial. For the main argument, under the assumptions ∃x(x > ∅ ∧ (∀i < j)m(i)|x) for →I and a > ∅ ∧ (∀i < j)m(i)|a for ∃E, set out to show a·Sm(j) > ∅ ∧ (∀i < Sj)m(i)|(a·Sm(j)) and generalize.

Because lcm is defined by minimization, only the existence condition is required. As a matter of notation, let lm_k = lcm{m(i) | i < k} and, where m is understood, let l_k = lcm{m(i) | i < k}.

Def [plm] v = plm{m(i) | i < k} ↔ Sv = lcm{m(i) | i < k}
   (i) PA ⊢ ∃v(Sv = l_k)
   (ii) PA ⊢ ∀x∀y((Sx = l_k ∧ Sy = l_k) → x = y)

Again, let pm_k = plm{m(i) | i < k} and, where m is understood, p_k = plm{m(i) | i < k}.

*T13.26. The following result in PA:
(a) PA ⊢ l_∅ = 1
(b) PA ⊢ j < k → m(j)|l_k
*(c) PA ⊢ (∀i < k)m(i)|x → p_k|x
*(d) PA ⊢ ∀n((Pr(Sn) ∧ n|l_k) → (∃i < k)n|Sm(i))

Hints. (c): Let q = qt(x, p_k) and r = rm(x, p_k); assume (∀i < k)m(i)|x for →I; you have (∀y < l_k)∼(y > ∅ ∧ (∀i < k)m(i)|y) from def l_k with T13.19c; you should be able to apply this to show that r = ∅, and so that p_k|x. (d): This is an induction on k. The basis is straightforward given l_∅ = 1 from T13.26a; for the main argument, you have (∀i < j)m(i)|l_j from def l_j; under assumptions ∀n((Pr(Sn) ∧ n|l_j) → (∃i < j)n|Sm(i)) and Pr(Sa) ∧ a|l_Sj for →I, you should be able to use T13.26c to show p_Sj|(l_j·Sm(j)); and from this, a|l_j ∨ a|Sm(j); in either case, you have your result.

(a): for any function m(i), the least common multiple for i < ∅ defaults to 1. (b) applies the definition for the result that when j < k, m(j) divides lcm{m(i) | i < k}. (c) is perhaps best conceived by prime factorization: the least common multiple of some collection has all the primes of its members and no more; but any number into which all the members of the collection divide must include all those primes; so the least common multiple divides it as well. (d) is the related result that if a prime divides the least common multiple of some collection, then it divides some member of the collection.
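Def [lcm] can be mirrored numerically — the least y > ∅ divisible by Sm(i) for each i < k, defaulting to 1 when k = ∅; the helper name `lcm_k` is mine:

```python
from math import lcm

def lcm_k(m, k):
    """Least y > 0 divisible by m(i) + 1 for every i < k; 1 when k == 0."""
    return lcm(*(m(i) + 1 for i in range(k))) if k else 1

m = lambda i: i                    # divisors are then 1, 2, ..., k
print(lcm_k(m, 0))                 # 1  (T13.26a: l_0 = 1)
print(lcm_k(m, 5))                 # 60 = lcm(1, 2, 3, 4, 5)
assert all(lcm_k(m, 5) % (m(j) + 1) == 0 for j in range(5))   # T13.26b
```

`math.lcm` requires Python 3.9 or later; on earlier versions one could fold `gcd` by hand.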
Finally, we arrive at the Chinese remainder theorem. As one might expect, this is fundamental to the result we want. Let m(i) be a function ultimately to be specified in the beta function; h(i) is the one whose values are to be matched by remainders. Then the theorem tells us that if for all i < k, m(i) > ∅ and m(i) ≥ h(i), and if for all i < j < k, Rp(Sm(i), Sm(j)), then ∃p(∀i < k)rm(p, m(i)) = h(i). This will be the p that figures in the recursion clause.

*T13.27. PA ⊢ (∀i < k)(m(i) > ∅ ∧ m(i) ≥ h(i)) ∧ ∀i∀j((i < j ∧ j < k) → Rp(Sm(i), Sm(j))) → ∃p(∀i < k)rm(p, m(i)) = h(i)    (CRT)

Let A(k) =def (∀i < k)(m(i) > ∅ ∧ m(i) ≥ h(i)) ∧ ∀i∀j((i < j ∧ j < k) → Rp(Sm(i), Sm(j))), and B(k) =def ∃p(∀i < k)rm(p, m(i)) = h(i).

So we want PA ⊢ A(k) → B(k). By induction on n we show (∀n ≤ k)(A(n) → B(n)). The result follows immediately with k ≤ k. Here is the overall structure of the argument:

 1. ∅ ≤ k → (A(∅) → B(∅))                                               [a]
 2. | a ≤ k → (A(a) → B(a))                                             A (g →I)
 3. | | Sa ≤ k                                                          A (g →I)
 4. | | a < k                                                           3 T13.13k
 5. | | a ≤ k                                                           4 T13.13l
 6. | | A(a) → B(a)                                                     2,5 →E
 7. | | | A(Sa)                                                         A (g →I)
 8. | | | (∀i < a)(m(i) > ∅ ∧ m(i) ≥ h(i)) ∧ ∀i∀j((i < j ∧ j < a) → Rp(Sm(i), Sm(j))) → ∃p(∀i < a)rm(p, m(i)) = h(i)    6 abv
 9. | | | (∀i < Sa)(m(i) > ∅ ∧ m(i) ≥ h(i)) ∧ ∀i∀j((i < j ∧ j < Sa) → Rp(Sm(i), Sm(j)))    7 abv
10. | | | (∀i < Sa)(m(i) > ∅ ∧ m(i) ≥ h(i))                             9 ∧E
11. | | | ∀i∀j((i < j ∧ j < Sa) → Rp(Sm(i), Sm(j)))                     9 ∧E
12. | | | ∃p(∀i < a)rm(p, m(i)) = h(i)                                  [b]
13. | | | | (∀i < a)rm(r, m(i)) = h(i)                                  A (g 12∃E)
14. | | | | Rp(lm_a, Sm(a))                                             [c]
15. | | | | Sm(a) > ∅                                                   T13.13e
16. | | | | l_a > ∅                                                     def l_a
17. | | | | G(l_a, Sm(a), 1)                                            14,15,16 T13.25h
18. | | | | G(l_a, Sm(a), r + (l_a ∸ 1)·h(a))                           17 T13.25f
19. | | | | ∃x∃y(l_a·x + (r + (l_a ∸ 1)·h(a)) = Sm(a)·y)                18 def G
20. | | | | | l_a·b + (r + (l_a ∸ 1)·h(a)) = Sm(a)·c                    A (g 19∃E)
21. | | | | | s = l_a·(b + h(a)) + r                                    def
22. | | | | | s = Sm(a)·c + h(a)                                        [d]
23. | | | | | (∀i < Sa)rm(s, m(i)) = h(i)                               [e]
24. | | | | | ∃p(∀i < Sa)rm(p, m(i)) = h(i)                             23 ∃I
25. | | | | | B(Sa)                                                     24 abv
26. | | | | B(Sa)                                                       19,20-25 ∃E
27. | | | B(Sa)                                                         12,13-26 ∃E
28. | | A(Sa) → B(Sa)                                                   7-27 →I
29. | Sa ≤ k → (A(Sa) → B(Sa))                                          3-28 →I
30. (a ≤ k → (A(a) → B(a))) → (Sa ≤ k → (A(Sa) → B(Sa)))                2-29 →I
31. ∀n[(n ≤ k → (A(n) → B(n))) → (Sn ≤ k → (A(Sn) → B(Sn)))]            30 ∀I
32. (∀n ≤ k)(A(n) → B(n))                                               1,31 IN
33. k ≤ k                                                               T13.13l
34. A(k) → B(k)                                                         32,33 (∀E)

Hints. (c): Suppose otherwise; with T13.25e there is a u such that Pr(Su) ∧ u|l_a ∧ u|Sm(a); then with T13.26d there is a v < a such that u|Sm(v), so that with (11), Rp(Sm(v), Sm(a)). But this is impossible with u|Sm(a), u|Sm(v), and T13.25e. (d): By Def [lcm], l_a > ∅, so that h(a)·l_a ≥ h(a). Then with T13.23a and T13.23k you can show s = (l_a·b + r + (l_a ∸ 1)·h(a)) + h(a) and apply (20). (e): Suppose for (∀I) u < Sa; then u < a ∨ u = a. In the first case, with T13.26b and T13.24d, m(u)|l_a·(b + h(a)); so that there is a v such that Sm(u)·v = l_a·(b + h(a)); then using (21) and T13.24j, rm(s, m(u)) = rm(r, m(u)); so that you can apply (13). In the second case, with (22) and T13.24j, rm(s, m(u)) = rm(h(u), m(u)); but from (10), m(u) ≥ h(u), and you will be able to show that rm(h(u), m(u)) = h(u).

(12) is simple enough once you use (10) and (11) to generate the antecedent to (8). After that, we expect (14) insofar as the values of Sm(i) are relatively prime up to and including a; so the values of Sm(i) have no primes in common; since l_a includes just the primes of members < a, it has no prime in common with Sm(a); so l_a and Sm(a) are relatively prime. This yields a straight path to (20). Then the idea is that s appears in the forms from both (21) and (22). From the version on (21), for any i < a, the remainder of m(i) and s is the same as the remainder of m(i) with r — since m(i) divides the first term evenly. And from the version on (22), the remainder of m(a) and s is equal to h(a) — since m(a) divides the first term evenly and m(a) ≥ h(a). So for any i < Sa, the remainder of m(i) and s is h(i). The trick is in the construction of s (following Boolos, The Logic of Provability, 30–31).
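The content of T13.27 can be checked by brute force outside PA: with the successor conventions, rm(p, m(i)) is p mod (m(i) + 1), and a witness p exists below the product of the moduli. All names here are my own:

```python
from math import gcd, prod

def crt_witness(ms, hs):
    """Brute-force p with p % (m + 1) == h for each pair (m, h), per T13.27."""
    assert all(m > 0 and m >= h for m, h in zip(ms, hs))
    assert all(gcd(ms[i] + 1, ms[j] + 1) == 1
               for i in range(len(ms)) for j in range(i + 1, len(ms)))
    for p in range(prod(m + 1 for m in ms)):  # a witness exists below the product
        if all(p % (m + 1) == h for m, h in zip(ms, hs)):
            return p

ms, hs = [2, 4, 6], [1, 3, 5]   # moduli 3, 5, 7 are pairwise coprime
p = crt_witness(ms, hs)
print(p)                        # 103: 103 % 3 == 1, 103 % 5 == 3, 103 % 7 == 5
```

The two `assert` lines mirror the antecedent of the theorem — m(i) > ∅ ∧ m(i) ≥ h(i), and pairwise relative primality of the Sm(i).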
For our final results, we require a couple of notions for maximum value: first maxs, for the maximum from a set of values, and then maxp, for the greatest of a pair.

*Def [maxs] PA ⊢ v = maxs{m(i) | i < k} ↔ (k = ∅ ∧ v = ∅) ∨ ((∃i < k)m(i) = v ∧ (∀i < k)m(i) ≤ v)

Let A(k, v) =def (k = ∅ ∧ v = ∅) and B(k, v) =def (∃i < k)m(i) = v ∧ (∀i < k)m(i) ≤ v. Then we require:
   (i) PA ⊢ ∃v(A(k, v) ∨ B(k, v))
   (ii) PA ⊢ ∀y∀z(((A(k, y) ∨ B(k, y)) ∧ (A(k, z) ∨ B(k, z))) → y = z)

The argument for (ii) is long and disjunctive, but straightforward. (i) is an argument by IN on k. It is not difficult but, again, long and disjunctive. Here is the basic structure, including key subgoals.

 1. A(∅, ∅) ∨ B(∅, ∅)                                                   [a]
 2. ∃v(A(∅, v) ∨ B(∅, v))                                               1 ∃I
 3. | ∃v(A(j, v) ∨ B(j, v))                                             A (g →I)
 4. | | A(j, u) ∨ B(j, u)                                               A (g 3∃E)
 5. | | j = ∅ ∨ j ≠ ∅                                                   T3.1
 6. | | | j = ∅                                                         A (g 5∨E)
 7. | | | A(Sj, m(∅)) ∨ B(Sj, m(∅))                                     [b]
 8. | | | ∃v(A(Sj, v) ∨ B(Sj, v))                                       7 ∃I
 9. | | | j ≠ ∅                                                         A (g 5∨E)
10. | | | j ≠ ∅ ∨ u ≠ ∅                                                 9 ∨I
11. | | | ∼(j = ∅ ∧ u = ∅)                                              10 DeM
12. | | | ∼A(j, u)                                                      11 abv
13. | | | B(j, u)                                                       4,12 DS
14. | | | (∃i < j)m(i) = u ∧ (∀i < j)m(i) ≤ u                           13 abv
15. | | | (∀i < j)m(i) ≤ u                                              14 ∧E
16. | | | (∃i < j)m(i) = u                                              14 ∧E
17. | | | | m(a) = u                                                    A (g 16(∃E))
18. | | | | a < j                                                       A (g 16(∃E))
19. | | | | m(j) ≤ m(a) ∨ m(j) > m(a)                                   T13.13p
20. | | | | | m(j) ≤ m(a)                                               A (g 19∨E)
21. | | | | | A(Sj, m(a)) ∨ B(Sj, m(a))                                 [c]
22. | | | | | ∃v(A(Sj, v) ∨ B(Sj, v))                                   21 ∃I
23. | | | | | m(j) > m(a)                                               A (g 19∨E)
24. | | | | | A(Sj, m(j)) ∨ B(Sj, m(j))                                 [d]
25. | | | | | ∃v(A(Sj, v) ∨ B(Sj, v))                                   24 ∃I
26. | | | | ∃v(A(Sj, v) ∨ B(Sj, v))                                     19,20-22,23-25 ∨E
27. | | | ∃v(A(Sj, v) ∨ B(Sj, v))                                       16,17-26 (∃E)
28. | | ∃v(A(Sj, v) ∨ B(Sj, v))                                         5,6-8,9-27 ∨E
29. | ∃v(A(Sj, v) ∨ B(Sj, v))                                           3,4-28 ∃E
30. ∃v(A(j, v) ∨ B(j, v)) → ∃v(A(Sj, v) ∨ B(Sj, v))                     3-29 →I
31. ∀y(∃v(A(y, v) ∨ B(y, v)) → ∃v(A(Sy, v) ∨ B(Sy, v)))                 30 ∀I
32. ∃v(A(k, v) ∨ B(k, v))                                               2,31 IN

So the generalization is from different individuals in the different cases [a], [b], [c], and [d]. As a matter of notation, let maxsm_k = maxs{m(i) | i < k} and, where m is understood, maxs_k = maxs{m(i) | i < k}.

*Def [maxp] PA ⊢ v = maxp(x, y) ↔ (x ≥ y ∧ v = x) ∨ (x < y ∧ v = y)
   (i) PA ⊢ ∃v((x ≥ y ∧ v = x) ∨ (x < y ∧ v = y))
   (ii) PA ⊢ ∀u∀v{[((x ≥ y ∧ u = x) ∨ (x < y ∧ u = y)) ∧ ((x ≥ y ∧ v = x) ∨ (x < y ∧ v = y))] → u = v}

And a couple of results that make the obvious applications from the definitions.

*T13.28. The following result in PA:
(a) PA ⊢ maxp(x, y) ≥ x ∧ maxp(x, y) ≥ y
(b) PA ⊢ (∀i < k)m(i) ≤ maxs_k

These simply state the obvious: that the maximum is greater than or equal to the rest. For (a), that the maximum is the greater of the two in the pair; for (b), that the maximum is the greater of the values of the function.
Now we are in a position to generate some results for the β-function. First, as applied to the β-function, we may demonstrate the antecedent to the CRT (T13.27); thus we may obtain its consequent.

*T13.29. PA ⊢ ∃p∃q(∀i < k)β(p, q, i) = h(i).

Let r =def maxp(k, maxsh_k); s =def Sr; q =def lcm{i | i < s}; m(i) =def q·Si. Then β(p, q, i) = rm(p, q·Si). And we may reason,

 1. (∀i < k)(m(i) > ∅ ∧ m(i) ≥ h(i))                                    [i]
 2. ∀i∀j((i < j ∧ j < k) → Rp(Sm(i), Sm(j)))                            [ii]
 3. ∃p(∀i < k)rm(p, m(i)) = h(i)                                        1,2 T13.27
 4. m(i) = q·Si                                                         def
 5. ∃p(∀i < k)rm(p, q·Si) = h(i)                                        3,4 =E
 6. β(p, q, i) = rm(p, q·Si)                                            def
 7. ∃p(∀i < k)β(p, q, i) = h(i)                                         5,6 =E
 8. | (∀i < k)β(p, q, i) = h(i)                                         A (g 7∃E)
 9. | ∃q(∀i < k)β(p, q, i) = h(i)                                       8 ∃I
10. | ∃p∃q(∀i < k)β(p, q, i) = h(i)                                     9 ∃I
11. ∃p∃q(∀i < k)β(p, q, i) = h(i)                                       7,8-10 ∃E

So the demonstration reduces to that of (i) and (ii), the two conjuncts of the antecedent of CRT (T13.27). (i): Under the assumption j < k for (∀I) it will be easy to show m(j) > ∅; then you will be able to use T13.28 to show h(j) < s; but also, with T13.26b, that r|q, and from this that s ≤ q — which gives s ≤ q·Sj and the result you want. (ii): Here is the main outline of the argument.

 1. | i < j ∧ j < k                                                     A (g →I)
 2. | i < j                                                             1 ∧E
 3. | j < k                                                             1 ∧E
 4. | | ∼Rp(Sm(i), Sm(j))                                               A (c ∼E)
 5. | | ∃x(Pr(Sx) ∧ x|S(q·Si) ∧ x|S(q·Sj))                              4 T13.25e
 6. | | | Pr(Sa) ∧ a|S(q·Si) ∧ a|S(q·Sj)                                A (c 5∃E)
 7. | | | Pr(Sa)                                                        6 ∧E
 8. | | | a|S(q·Si)                                                     6 ∧E
 9. | | | a|S(q·Sj)                                                     6 ∧E
10. | | | a|q·(j ∸ i)                                                   [a]
11. | | | a|q ∨ a|(j ∸ i)                                               7,10 T13.25i
12. | | | | a|q                                                         A (g 11∨E)
13. | | | | a|q                                                         12 R
14. | | | | a|(j ∸ i)                                                   A (g 11∨E)
15. | | | | a|q                                                         [b]
16. | | | a|q                                                           11,12-13,14-15 ∨E
17. | | | a|(q·Si)                                                      16 T13.24d
18. | | | S(q·Si) > q·Si                                                T13.13g
19. | | | S(q·Si) ≥ q·Si                                                18 T13.13l
20. | | | a|(S(q·Si) ∸ q·Si)                                            19,8,17 T13.24g
21. | | | a|1                                                           20 T13.23f
22. | | | S∅ < Sa                                                       def Pr
23. | | | ∅ < a                                                         22 T13.13j
24. | | | ∼a|1                                                          23 T13.24h
25. | | | ⊥                                                             21,24 ⊥I
26. | | ⊥                                                               5,6-25 ∃E
27. | Rp(Sm(i), Sm(j))                                                  4-26 ∼E
28. (i < j ∧ j < k) → Rp(Sm(i), Sm(j))                                  1-27 →I
29. ∀i∀j((i < j ∧ j < k) → Rp(Sm(i), Sm(j)))                            28 ∀I

Hints. (a): With i < j you will be able to show a|(S(q·Sj) ∸ S(q·Si)); and, with some work, that S(q·Sj) ∸ S(q·Si) = q·(j ∸ i). (b): With i < j, you have j ∸ i > ∅; so there is an l such that Sl + ∅ = j ∸ i; you will be able to show a|Sl and, with T13.26b, l|q, so with T13.24e, a|q.
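The construction in T13.29 can be run numerically: given any finite sequence h, the text's choices of s and q yield a p with β(p, q, i) = h(i) for every i < k. A brute-force sketch with my hypothetical helpers `beta` and `encode`:

```python
from math import lcm

def beta(p, q, i):
    """The book's beta function: rm(p, q * Si), i.e. p mod (q*(i+1) + 1)."""
    return p % (q * (i + 1) + 1)

def encode(h):
    """Find p, q with beta(p, q, i) == h[i] for all i < len(h)."""
    k = len(h)
    s = max([k] + h) + 1                # s = S maxp(k, maxs of h)
    q = lcm(*range(1, s + 1))           # q = lcm{i | i < s}, successor-style
    p = 0
    while any(beta(p, q, i) != h[i] for i in range(k)):
        p += 1                          # CRT (T13.27) guarantees termination
    return p, q

h = [3, 1, 2]
p, q = encode(h)
assert [beta(p, q, i) for i in range(len(h))] == h
```

Re-encoding the first k decoded values of one code together with an arbitrary new value n — `encode([beta(r, s, i) for i in range(k)] + [n])` — is the computational content of the extension result T13.30 below.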
Next a theorem that leads directly to our main result. We show that given .r; s; i /
there are sure to be p and q such that .p; q; i / is like .r; s; i / for i < k and for
arbitrary n, .p; q; k/ D n. This is because we may define a function h which is like
.r; s; i / for i < k and otherwise n and find p; q such that .p; q; i / matches it.
As a preliminary,
Def [h.i/] PA ` v D h.i / $ .i < k ^ v D .r; s; i // _ i  k ^ v D n/
(i) PA ` 9v.i < k ^ v D .r; s; i // _ i  k ^ v D n/

CHAPTER 13. GDELS THEOREMS

648

(ii) PA ` 8x8y..i < k ^ x D .r; s; i // _ i  k ^ x D n/ ^ .i < k ^ y D


.r; s; i // _ i  k ^ y D n// ! x D y
Then,
*T13.30. PA ⊢ ∃p∃q((∀i < k)β(p, q, i) = β(r, s, i) ∧ β(p, q, k) = n).
Hints: From Def [h(i)] you have (k < k ∧ h(k) = β(r, s, k)) ∨ (k ≥ k ∧
h(k) = n) and (l < k ∧ h(l) = β(r, s, l)) ∨ (l ≥ k ∧ h(l) = n); and from
T13.29 applied to Sk, ∃p∃q(∀i < Sk)β(p, q, i) = h(i); then with (∀i <
Sk)β(a, b, i) = h(i) for ∃E, you will be able to show that β(a, b, k) = n
and, under l < k for (∀I), that β(a, b, l) = β(r, s, l).
For application, it is important that free variables are universally quantified. So
this theorem is effectively, ∀k∀n∀r∀s∃p∃q((∀i < k)β(p, q, i) = β(r, s, i) ∧
β(p, q, k) = n).
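The arithmetical fact that T13.29 and T13.30 internalize in PA can be illustrated computationally. The following Python sketch is not from the text (the names beta and encode are ours): beta(p, q, i) is Gödel's remainder function, and a brute-force search finds p, q coding any given finite sequence, which is what the Chinese remainder theorem guarantees.

```python
# Goedel's beta function: beta(p, q, i) = remainder of p on division by 1 + (i+1)*q.
def beta(p, q, i):
    return p % (1 + (i + 1) * q)

# Brute-force search for p, q coding a finite sequence. Adequate only for tiny
# examples, but it exhibits the existence claim behind the CRT construction.
def encode(seq):
    q = 1
    while True:
        limit = (1 + len(seq) * q) ** len(seq)
        for p in range(limit):
            if all(beta(p, q, i) == v for i, v in enumerate(seq)):
                return p, q
        q += 1
```

For instance, encode([3, 1, 2]) returns a pair p, q with beta(p, q, 0) = 3, beta(p, q, 1) = 1 and beta(p, q, 2) = 2; T13.30 adds, inside PA, that such a coding can moreover be extended at position k by an arbitrary n.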
And finally the result we have been after in this section: As before, let F(x⃗, y, v)
be our formula,
∃p∃q(β(p, q, ∅) = g(x⃗) ∧ (∀i < y)h(x⃗, i, β(p, q, i)) = β(p, q, Si) ∧ β(p, q, y) = v)
Then we want, PA ⊢ ∃vF(x⃗, y, v).
*T13.31. PA ⊢ ∃v∃p∃q(β(p, q, ∅) = g(x⃗) ∧ (∀i < y)h(x⃗, i, β(p, q, i)) = β(p, q, Si) ∧ β(p, q, y) = v).
Let F(x⃗, y, v) be as above; the argument is by IN on y. The zero case is left
as an exercise. Here is the main argument.
[Derivation display for T13.31 (induction step): a 35-line derivation whose layout did not survive extraction. In outline: assume ∃vF(x⃗, j, v) for →I and let a, b be instances for p, q, so that β(a, b, ∅) = g(x⃗), (∀i < j)h(x⃗, i, β(a, b, i)) = β(a, b, Si) and β(a, b, j) = z. By T13.30 there are c, d with (∀i < Sj)β(c, d, i) = β(a, b, i) and β(c, d, Sj) = h(x⃗, j, β(a, b, j)). Then β(c, d, ∅) = g(x⃗); and under l < Sj, the cases l < j and l = j both give h(x⃗, l, β(c, d, l)) = β(c, d, Sl), so (∀i < Sj)h(x⃗, i, β(c, d, i)) = β(c, d, Si); with β(c, d, Sj) = β(c, d, Sj) by =I, ∧I and ∃I yield ∃vF(x⃗, Sj, v). Then →I, ∀I and IN with the zero case give ∃vF(x⃗, y, v).]
This completes the demonstration of T13.21. So for any friendly recursive function r(x⃗) and original formula R(x⃗, v) by which it is expressed and captured, PA
defines a function r(x⃗) such that PA ⊢ v = r(x⃗) ↔ R(x⃗, v). In particular, then, PA
defines functions corresponding to all the primitive recursive functions from chapter 12.
In addition, say a recursive relation is friendly iff it has a friendly characteristic
function. Then as a simple corollary, PA defines relations corresponding to each
friendly recursive relation, equivalent to the original formulas used to express them.
T13.32. For any friendly recursive relation R(x⃗) with characteristic function chR(x⃗),
PA defines a relation R(x⃗) such that PA ⊢ R(x⃗) ↔ chR(x⃗) = ∅. And for a
recursive operator OP(P1(x⃗) ... Pn(x⃗)) with characteristic function f(chP1(x⃗) ...
chPn(x⃗)), PA ⊢ Op(P1(x⃗) ... Pn(x⃗)) ↔ f(chP1(x⃗) ... chPn(x⃗)) = ∅. As a
simple corollary, where R(x⃗) is originally captured by R(x⃗, ∅), PA ⊢ R(x⃗) ↔
R(x⃗, ∅).
Reasoning somewhat informally,
(a) Say an atomic recursive relation is one like EQ, LEQ or LESS whose characteristic function does not depend on the characteristic functions of other recursive
relations. Then let,
PA ⊢ R(x⃗) ↔ chR(x⃗) = ∅
So PA ⊢ Eq(x⃗) ↔ chEQ(x⃗) = ∅. And similarly in other cases.
(b) Now consider a recursive operator, OP(P1(x⃗) ... Pn(x⃗)) with characteristic function f(chP1(x⃗) ... chPn(x⃗)); as DSJ(P(x⃗), Q(x⃗)) with its characteristic function
times(chP(x⃗), chQ(x⃗)). And suppose PA ⊢ P1(x⃗) ↔ chP1(x⃗) = ∅ and
... and PA ⊢ Pn(x⃗) ↔ chPn(x⃗) = ∅. Let cP(x⃗) = μv((P(x⃗) ∧ v =
∅) ∨ (¬P(x⃗) ∧ v = 1)) and set,
PA ⊢ Op(P1(x⃗) ... Pn(x⃗)) ↔ f(cP1(x⃗) ... cPn(x⃗)) = ∅
On this basis, the operator is defined by a formula with P1 ... Pn as parts.
However, by T13.37 (which we shall see shortly), PA ⊢ chP(x⃗) = ∅ ∨
chP(x⃗) = 1; so with the assumption, PA ⊢ μv((P(x⃗) ∧ v = ∅) ∨ (¬P(x⃗) ∧
v = 1)) = chP(x⃗), which is to say, PA ⊢ cP(x⃗) = chP(x⃗); so with =E,
PA ⊢ Op(P1(x⃗) ... Pn(x⃗)) ↔ f(chP1(x⃗) ... chPn(x⃗)) = ∅. As for example,
then, PA ⊢ Dsj(P(x⃗), Q(x⃗)) ↔ times(chP(x⃗), chQ(x⃗)) = ∅.
(c) For any R(x⃗) = OP(P1(x⃗) ... Pn(x⃗)) set,
PA ⊢ R(x⃗) ↔ Op(P1(x⃗) ... Pn(x⃗))
Then, with the result from (b), PA ⊢ R(x⃗) ↔ f(chP1(x⃗) ... chPn(x⃗)) = ∅;
which is to say, PA ⊢ R(x⃗) ↔ chR(x⃗) = ∅. So where R(x⃗) = DSJ(P(x⃗), Q(x⃗)),
PA ⊢ R(x⃗) ↔ Dsj(P(x⃗), Q(x⃗)).
(d) Finally, with T13.21, PA ⊢ v = chR(x⃗) ↔ R(x⃗, v); so PA ⊢ ∅ = chR(x⃗) ↔
R(x⃗, ∅); so with the result from (c), PA ⊢ R(x⃗) ↔ R(x⃗, ∅).
Thus PA defines both functions and relations corresponding to the friendly recursive functions and relations, equivalent to the original formulas used to express and
capture them.
*E13.12. Show (i) and (ii) for Def [∸]. Then show T13.23 (a) and (j). Hard core:
show all of the results in T13.23.
*E13.13. Show T13.24d and T13.24h. Hard core: show all of the results in T13.24.
*E13.14. Provide a complete demonstration of T13.25h including the justification
for d. Hard core: Show all of the results from T13.25.
*E13.15. Show the condition for Def [lcm] and provide a demonstration for T13.26d.
Hard core: show all of the results for Def [lcm], Def [plm] and T13.26.
*E13.16. Provide derivations to show each of [a]-[e] to complete the derivation for
T13.27.
*E13.17. Complete the argument for condition (i) of Def [maxs] by producing
arguments for (a), (b), (c) and (d). Hard core: Provide complete justifications
for Def [maxs] and Def [maxp]; and show each of the results in T13.28.
*E13.18. Complete the demonstration for T13.29.
*E13.19. Show T13.30. Hard core: show the conditions for Def [h(i)].
*E13.20. Complete the demonstration of T13.31 by showing the zero case.
E13.21. Give the demonstration to show PA ⊢ Op(P1(x⃗) ... Pn(x⃗)) ↔ f(chP1(x⃗)
... chPn(x⃗)) = ∅ from (b) of T13.32.
13.3.3 Some Applications
We have now shown that PA defines all the functions we require. However, this is
not quite everything we want. Observe that plus(x, y), say, is defined by a complex
expression through recursion, and so is not the same expression as our old friend x +
y. Thus it is not obvious that our standard means for manipulation of + apply to plus.
We could recover our ordinary results if we could show PA ⊢ x + y = plus(x, y).
And similar comments apply to other ordinary functions and relations. Thus initially
we seek to show that defined functions and relations are equivalent to ones with which we
are familiar. Again many details are shifted to exercises and/or answers to exercises.
Equivalencies. We begin with equivalences between functions and relations already defined in PA, and those that result from the recursive functions and relations
(by T13.21 and T13.32), starting with ones from LNT including S, +, ×, =, ≤,
<, truth functional operators, bounded quantifiers and bounded minimization. Given
the nature of recursive functions, these will require a few additional notions along
the way.
First a result that is fundamental to every case where a function is defined by
recursion. As above let F(x⃗, y, v) be,
∃p∃q(β(p, q, ∅) = g(x⃗) ∧ (∀i < y)h(x⃗, i, β(p, q, i)) = β(p, q, Si) ∧ β(p, q, y) = v)

Then when f(x⃗, y) is defined by recursion, the standard recursive conditions apply.
That is,
T13.33. Suppose f(x⃗, y) is defined by g(x⃗) and h(x⃗, y, u) so that PA ⊢ v =
f(x⃗, y) ↔ F(x⃗, y, v). Then,
(a) f(x⃗, ∅) = g(x⃗)
(b) f(x⃗, Sy) = h(x⃗, y, f(x⃗, y))
Hint: (a) follows easily in 6 lines with β(p, q, ∅) = g(x⃗) ∧ (∀i <
∅)h(x⃗, i, β(p, q, i)) = β(p, q, Si) ∧ β(p, q, ∅) = f(x⃗, ∅) from the definition. For (b),
[Derivation display for T13.33(b): an 18-line derivation whose layout did not survive extraction. In outline: from the definition there are a, b with β(a, b, ∅) = g(x⃗), (∀i < Sy)h(x⃗, i, β(a, b, i)) = β(a, b, Si) and β(a, b, Sy) = f(x⃗, Sy); since y < Sy (T13.13g), f(x⃗, Sy) = h(x⃗, y, β(a, b, y)) (line 7); under j < y, j < Sy gives h(x⃗, j, β(a, b, j)) = β(a, b, Sj), so (∀i < y)h(x⃗, i, β(a, b, i)) = β(a, b, Si); then ∧I, ∃I and the definition give f(x⃗, y) = β(a, b, y) (line 16); so with =E, f(x⃗, Sy) = h(x⃗, y, f(x⃗, y)).]
The key stages of this argument are at (7), which has the result with β(a, b, y)
where we want f(x⃗, y), and then (16), which shows they are one and the same.
From this theorem, our defined functions behave like ones we have seen before, with
clauses for the basis and then for successor. This lets us manipulate the functions
very much as before. The importance of this point will emerge shortly, in application
to recursive cases.
Observe that from T13.21 PA proves results parallel to friendly recursive definitions. From the basis, PA defines suc, zero and idnt. Then when f(x⃗, y⃗, z⃗) =
h(x⃗, g(y⃗), z⃗) by composition, PA ⊢ f(x⃗, y⃗, z⃗) = h(x⃗, g(y⃗), z⃗). If f(x⃗) = μy g(x⃗, y)
by friendly regular minimization, PA ⊢ f(x⃗) = μy g(x⃗, y). And with T13.33,
when f(x⃗, y) is defined by recursion from g(x⃗) and h(x⃗, y, u), then PA ⊢ f(x⃗, ∅) =
g(x⃗) and PA ⊢ f(x⃗, Sy) = h(x⃗, y, f(x⃗, y)). And with T13.32 a similar point
applies to friendly recursive relations. There are Eq, Leq and Less. Then for any
R(x⃗) = OP(P1(x⃗) ... Pn(x⃗)), PA ⊢ R(x⃗) ↔ Op(P1(x⃗) ... Pn(x⃗)). This lets us write
down defined functions directly from the recursive definitions. With this said, we
turn to our results.
T13.34. The following result in PA.
(a) PA ⊢ suc(x) = Sx

1. v = suc(x) ↔ Sx = v                 def suc
2. suc(x) = suc(x) ↔ Sx = suc(x)       1 ∀E
3. suc(x) = suc(x)                     =I
4. suc(x) = Sx                         2,3 =E

(b) PA ⊢ zero(x) = ∅
(c) PA ⊢ idnt^j_k(x1 ... xj) = xk
(d) PA ⊢ plus(x, y) = x + y
(e) PA ⊢ times(x, y) = x × y
Arguments for (a)-(c) are very much the same and nearly trivial. Arguments
for (d) and (e) are by IN. Here is the case for (d) as an example.
 1. gplus(x) = idnt^1_1(x)                           def from plus, T13.21
 2. gplus(x) = x                                     1 with T13.34c
 3. plus(x, ∅) = gplus(x)                            T13.33a
 4. plus(x, ∅) = x                                   3,2 =E
 5. x + ∅ = x                                        T6.39
 6. plus(x, ∅) = x + ∅                               4,5 =E
 7. | plus(x, j) = x + j                             A (g →I)
 8. | plus(x, Sj) = hplus(x, j, plus(x, j))          T13.33b
 9. | hplus(x, j, u) = suc(idnt^3_3(x, j, u))        def from plus, T13.21
10. | hplus(x, j, u) = Su                            9 with T13.34a,c
11. | hplus(x, j, plus(x, j)) = S plus(x, j)         10 ∀E
12. | plus(x, Sj) = S plus(x, j)                     8,11 =E
13. | plus(x, Sj) = S(x + j)                         12,7 =E
14. | S(x + j) = x + Sj                              T6.40
15. | plus(x, Sj) = x + Sj                           13,14 =E
16. plus(x, j) = x + j → plus(x, Sj) = x + Sj        7-15 →I
17. ∀y(plus(x, y) = x + y → plus(x, Sy) = x + Sy)    16 ∀I
18. plus(x, y) = x + y                               6,17 IN

It is crucial for this argument that we may simply write down the expressions on (1) and (9) with T13.21; and then that the induction works insofar as
T13.33 makes the conditions for plus.x; y/ work like the ones for x C y.
So this theorem establishes the equivalences we expect for the defined symbols suc,
zero, idnt, plus and times. It is important that C,  and the like are primitive symbols of LNT where plus and times are defined according to our induction from the
corresponding recursive functions. Having shown that the functions are equivalent,
however, we may manipulate the one with all the results we have achieved for the
other.
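The recursion scheme that T13.33 makes available can be mirrored computationally. Here is a hedged Python sketch (our names, not the text's): recursion(g, h) builds f with f(x, 0) = g(x) and f(x, Sy) = h(x, y, f(x, y)), and plus and times arise from suc, zero and the identity functions just as in T13.34.

```python
def idnt(k):
    # projection function: idnt(k)(x1, ..., xj) = xk
    return lambda *args: args[k - 1]

def suc(u):
    return u + 1

def recursion(g, h):
    # f(x, 0) = g(x);  f(x, Sy) = h(x, y, f(x, y))
    def f(x, y):
        return g(x) if y == 0 else h(x, y - 1, f(x, y - 1))
    return f

# plus from g = idnt^1_1 and h(x, j, u) = suc(idnt^3_3(x, j, u)) = Su
plus = recursion(idnt(1), lambda x, j, u: suc(idnt(3)(x, j, u)))

# times from g = zero and h(x, j, u) = plus(u, x)
times = recursion(lambda x: 0, lambda x, j, u: plus(u, x))
```

The point of T13.34 is that PA proves these recursively defined symbols agree with the primitive + and × of LNT.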
Some additional results will be facilitated by a couple of auxiliary definitions.
pred(y), sg(y) and csg(y) are defined directly, without appeal to recursive functions,
but still behave as we expect.
Def [pred] PA ⊢ v = pred(y) ↔ (y = ∅ ∧ v = ∅) ∨ (y > ∅ ∧ Sv = y)
(i) ∃v((y = ∅ ∧ v = ∅) ∨ (y > ∅ ∧ Sv = y))
(ii) ∀u∀v(((y = ∅ ∧ u = ∅) ∨ (y > ∅ ∧ Su = y)) ∧ ((y = ∅ ∧ v = ∅) ∨ (y > ∅ ∧ Sv = y)) → u = v)
Def [sg] PA ⊢ v = sg(y) ↔ (y = ∅ ∧ v = ∅) ∨ (y > ∅ ∧ v = S∅)
(i) PA ⊢ ∃v((y = ∅ ∧ v = ∅) ∨ (y > ∅ ∧ v = 1))
(ii) PA ⊢ ∀u∀v(((y = ∅ ∧ u = ∅) ∨ (y > ∅ ∧ u = 1)) ∧ ((y = ∅ ∧ v = ∅) ∨ (y > ∅ ∧ v = 1)) → u = v)
Def [csg] PA ⊢ v = csg(y) ↔ (y = ∅ ∧ v = 1) ∨ (y > ∅ ∧ v = ∅)
(i) PA ⊢ ∃v((y = ∅ ∧ v = 1) ∨ (y > ∅ ∧ v = ∅))
(ii) PA ⊢ ∀u∀v(((y = ∅ ∧ u = 1) ∨ (y > ∅ ∧ u = ∅)) ∧ ((y = ∅ ∧ v = 1) ∨ (y > ∅ ∧ v = ∅)) → u = v)
And some basic results on these notions,
T13.35. The following result in PA.
(a) PA ⊢ pred(∅) = ∅
(b) PA ⊢ y > ∅ → S pred(y) = y
(c) PA ⊢ pred(1) = ∅
(d) PA ⊢ y = ∅ ↔ sg(y) = ∅
(e) PA ⊢ y > ∅ ↔ sg(y) = 1
(f) PA ⊢ y = ∅ ↔ csg(y) = 1
(g) PA ⊢ y > ∅ ↔ csg(y) = ∅
(a) and (b) extract basic information from the definition of pred; (c) is a simple
particular result. (d) and (e) extract from the definition basic information for the
behavior of sg; and (f) and (g) for csg.
And given these notions in PA, we can build on them for another set of equivalents.
*T13.36. The following result in PA (where in (a), (d) and (e) the left-hand symbol is the one defined from the corresponding recursive function, and the right-hand symbol the auxiliary just defined).
(a) PA ⊢ pred(y) = pred(y)
*(b) PA ⊢ subc(x, y) = x ∸ y
(c) PA ⊢ absval(x − y) = (x ∸ y) + (y ∸ x)
(d) PA ⊢ sg(y) = sg(y)
(e) PA ⊢ csg(y) = csg(y)
*(f) PA ⊢ Eq(x, y) ↔ x = y
(g) PA ⊢ Leq(x, y) ↔ x ≤ y
(h) PA ⊢ Less(x, y) ↔ x < y
*(i) PA ⊢ Neg(P(x⃗)) ↔ ¬P(x⃗)
(j) PA ⊢ Dsj(P(x⃗), Q(y⃗)) ↔ P(x⃗) ∨ Q(y⃗)
Hints. (b): This works in the usual way up to the point in the show stage
where you get subc(x, Sj) = pred(x ∸ j); then it will take some work to
show x ∸ Sj = pred(x ∸ j); for this begin with x ≤ j ∨ x > j by T13.13p;
the first case is straightforward; for the second, you will be able to show
S(x ∸ Sj) = S pred(x ∸ j) and apply T6.38. (f): For this relation, you have
Eq(x, y) ↔ sg(absval(x − y)) = ∅ from the def EQ and T13.32; this gives
Eq(x, y) ↔ (x ∸ y) + (y ∸ x) = ∅; now for ↔I, the case from x = y is
easy; from Eq(x, y), you have x ≥ y ∨ x < y from T13.13p; the cases are
not hard and similar (since x < y gives y > x). (i): This is straightforward
with P(x⃗) ↔ chP(x⃗) = ∅ and Neg(P(x⃗)) ↔ csg(chP(x⃗)) = ∅ from NEG with
T13.32.
So this theorem delivers the equivalences we expect for pred, subc, absval, sg, csg,
Eq, Leq, Less, Neg, and Dsj. Given this, we will typically move without comment

from some PA ` Dsj.A; B/ given from T13.32 to PA ` A _ B. And similarly in


other cases.
We pause to remark on a on a simple consequence for characteristic functions.
Recall from (CF) that a characteristic function is (officially) of the sort sg.p.Ex// so
that,
T13.37. For any recursive characteristic function chR .Ex/, PA ` chR .x/
E D ;_
chR .x/
E D 1.
From (CF), chR .Ex/ is of the sort sg.p.Ex//; so with T13.21, PA ` chR .x/
E D
sg.p.x//.
E
The result is nearly immediate with PA ` p.x/
E D ; _ p.x/
E >;
and results for sg.
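A computational sketch may help fix the conventions here (Python; the names are ours): sg collapses any positive value to 1, csg is its complement, and a characteristic function built as in (CF) takes only the values ∅ and 1, with value ∅ just in case the relation holds, as in T13.32 and T13.37.

```python
def pred(y):
    return 0 if y == 0 else y - 1

def sg(y):
    return 0 if y == 0 else 1

def csg(y):
    return 1 if y == 0 else 0

def subc(x, y):
    # truncated subtraction x -. y
    return max(x - y, 0)

def ch_less(x, y):
    # a characteristic function for x < y: csg(y -. x) is 0 iff y -. x > 0 iff x < y
    return csg(subc(y, x))
```

So ch_less only ever returns 0 or 1, and returns 0 exactly on the pairs where the relation holds, mirroring the convention R(x⃗) ↔ chR(x⃗) = ∅.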
Now reasoning for corresponding results for the bounded quantifiers and bounded
minimization, and a couple relations built on them. (In (a)-(e) the left-hand operators are the defined correlates of the recursive bounded operators, the right-hand ones the ordinary bounded operators of LNT.)
*T13.38.
*(a) PA ⊢ (∃y ≤ z)P(x⃗, z, y) ↔ (∃y ≤ z)P(x⃗, z, y)
(b) PA ⊢ (∃y < z)P(x⃗, z, y) ↔ (∃y < z)P(x⃗, z, y)
(c) PA ⊢ (∀y ≤ z)P(x⃗, z, y) ↔ (∀y ≤ z)P(x⃗, z, y)
(d) PA ⊢ (∀y < z)P(x⃗, z, y) ↔ (∀y < z)P(x⃗, z, y)
*(e) PA ⊢ (μy ≤ z)P(x⃗, z, y) = (μy ≤ z)P(x⃗, z, y)
(f) PA ⊢ Fctr(m, n) ↔ m|n
*(g) PA ⊢ Prime(n) ↔ Pr(n)
Hints. (a): Recall from chapter 12 that s(x⃗, z) = (∃y ≤ z)P(x⃗, z, y) is defined
by means of an R(x⃗, z, n) corresponding to (∃y ≤ n)P(x⃗, z, y); the main argument is to show by IN that PA ⊢ chR(x⃗, z, n) = ∅ ↔ (∃y ≤ n)P(x⃗, z, y).
You have P(x⃗, z, y) ↔ chP(x⃗, z, y) = ∅ from T13.32. For the zero case, you
have chR(x⃗, z, ∅) = gchR(x⃗, z) from T13.33a, and gchR(x⃗, z) = chP(x⃗, z, ∅)
from the definition with T13.21; for the main reasoning, you have chR(x⃗, z, Sj)
= hchR(x⃗, z, j, chR(x⃗, z, j)) from T13.33b, and hchR(x⃗, z, j, u) = times(u,
chP(x⃗, z, suc(j))) from the definition with T13.21; once you have finished
the induction, it is a simple matter of applying chS(x⃗, z) = chR(x⃗, z, z) from
the definition and T13.21, and, where S(x⃗, z) just abbreviates the defined
(∃y ≤ z)P(x⃗, z, y), applying S(x⃗, z) ↔ chS(x⃗, z) = ∅ from T13.32 to get
the equivalence in (a). (f) and (g): Given previous results, these
have nearly matching definitions, except that the recursive side includes a
bounded quantifier, so that you have to work to show the bound obtains
for one direction of the biconditional.
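The recursion behind T13.38e admits a small computational sketch (Python; the names are ours): q(n) is the sum of the characteristic values chR(j) for j < n, so it counts the initial segment on which no witness has yet appeared, and thereby equals the least y ≤ n satisfying P, or n when there is none.

```python
def bounded_mu(P, n):
    # q(0) = 0; q(Sj) = q(j) + ch_R(j), where ch_R(j) = 0 iff some y <= j has P(y)
    q = 0
    for j in range(n):
        ch = 0 if any(P(y) for y in range(j + 1)) else 1
        q = q + ch
    return q
```

For example, bounded_mu with P(y): y² ≥ 10 and bound 8 returns 4, the least witness; with an unsatisfiable P it returns the bound itself, matching the standard convention for bounded minimization.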
The argument for T13.38e is particularly involved. Recall from chapter 12
that m(x⃗, z) = (μy ≤ z)P(x⃗, z, y) is defined by means of R(x⃗, z, n) as above
and a q(x⃗, z, n) corresponding to (μy ≤ n)P(x⃗, z, y). The main reasoning is
by IN to show q(x⃗, z, n) = (μy ≤ n)P(x⃗, z, y); here are the main outlines of
that part.
[Derivation display for the induction in T13.38e: a 32-line derivation whose layout did not survive extraction. In outline: the zero case q(x⃗, z, ∅) = (μy ≤ ∅)P(x⃗, z, y) is subderivation [a]; T13.37 gives chR(x⃗, z, j) = ∅ ∨ chR(x⃗, z, j) = 1, and from T13.38a, chR(x⃗, z, j) = ∅ ↔ (∃y ≤ j)P(x⃗, z, y); by T13.33b with the definition and T13.34d, q(x⃗, z, Sj) = q(x⃗, z, j) + chR(x⃗, z, j). Under the inductive assumption, with a = q(x⃗, z, j) and b = q(x⃗, z, Sj), T13.19b,c unpack a = μy(y = j ∨ P(x⃗, z, y)); cases on a = j ∨ P(x⃗, z, a) and on the value of chR (subderivations [b], [c], [d]) then give b = μy(y = Sj ∨ P(x⃗, z, y)), i.e. b = (μy ≤ Sj)P(x⃗, z, y); so →I, ∀I and IN complete the argument.]
Hints: The zero case (a) is straightforward with T13.20a; for (b) you will be
able to show that b = Sj; for (c) and (d) you will be able to show b = a.
And the final result is nearly automatic from this.
T13.38 delivers the equivalences we expect for the bounded quantifiers, bounded
minimization, factor and prime.
At this stage, we have defined in PA functions and relations corresponding to the
recursive functions and relations. And we have taken advantage of equivalences to
functions and relations already defined. Thus we are in a position simply to write
down the following.
T13.39. The following are theorems of PA:
(a) PA ⊢ Mp(m, n, o) ↔ cnd(n, o) = m
(b) PA ⊢ Icon(m, n, o) ↔ Mp(m, n, o) ∨ (m = n ∧ Gen(n, o))
(c) PA ⊢ Prft(m, n) ↔ exp(m, len(m) ∸ 1) = n ∧ (∀k < len(m))(Axiomt(exp(m, k)) ∨ (∃i < k)(∃j < k)Icon(exp(m, i), exp(m, j), exp(m, k)))
These follow directly from our results with their definitions MP, ICON and
PRFQ. The definition with T13.32 gives us, say, PA ⊢ Mp(m, n, o) ↔
Eq(cnd(n, o), m); then with T13.36f, we arrive at (a). And similarly in other
cases.
Where Mp, cnd and the like are defined relative to corresponding recursive functions,
it is important that the operators in these expressions are the ordinary operators of
LNT. Thus we shall be able to manipulate them in the usual ways. We shall find these
results useful for what follows.
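The shape of the proof predicate in T13.39c can be sketched apart from the arithmetization (Python; the names and the toy axioms and rule below are ours, not the text's): a sequence of lines proves n just in case its last line is n and every line is an axiom or an immediate consequence of earlier lines.

```python
def prft(lines, n, is_axiom, icon):
    # lines proves n iff the last line is n, and each line k is an axiom or
    # icon(lines[i], lines[j], lines[k]) holds for some i, j < k
    if not lines or lines[-1] != n:
        return False
    for k, line in enumerate(lines):
        if is_axiom(line):
            continue
        if any(icon(lines[i], lines[j], line)
               for i in range(k) for j in range(k)):
            continue
        return False
    return True
```

With toy "formulas" as strings, is_axiom deciding a fixed set, and icon(a, b, c) holding when a is the conditional from b to c (mirroring Mp(m, n, o) with cnd(n, o) = m), the sequence "A", "A->B", "B" counts as a proof of "B".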
E13.22. Produce derivations to show T13.33a and T13.34e. Hard core: show the
remaining cases from T13.34.

E13.23. Show (i) of the condition for Def [pred] and then T13.35b. Hard core: Show
each of the conditions for Def [pred], Def [sg] and Def [csg] and all of the
results in T13.35.

*E13.24. Show a, g and j from T13.36. Hard core: Demonstrate each of the results
in T13.36.

*E13.25. Show T13.38a. Hard core: show T13.37 along with each of the results in
T13.38.
Further results. Where T13.39 is interesting and important, we shall require some
further results, especially involving functions from chapter 12 up to concatenation.
Thus we begin with some results for exponentiation, factorial and the like, upon which
concatenation depends. In this case, we shall be acquiring results not by demonstrating equivalence to expressions already defined (since there are no such expressions
already defined), but directly for symbols defined from the recursive functions.
*T13.40. The following are theorems of PA.
(a) (i) PA ⊢ m^∅ = 1
(ii) PA ⊢ m^Sn = m^n × m
(b) PA ⊢ m^1 = m
(c) PA ⊢ a > ∅ → ∅^a = ∅
(d) PA ⊢ m^a × m^b = m^(a+b)
(e) PA ⊢ m ≤ n → m^a ≤ n^a
(f) PA ⊢ pred(m^b)|m^(a+b)
(g) PA ⊢ m > ∅ → m^a > ∅
(h) PA ⊢ (m > ∅ ∧ a ≤ b) → m^a ≤ m^b
(i) PA ⊢ a > ∅ → m^a ≥ m
*(j) PA ⊢ m > 1 → a < m^a
Hints: (a) is from the definition of power and prior results. (d) uses IN
on the value of b and (e) uses IN on a. (f) is straightforward with cases for
m^b = ∅ and m^b > ∅. (g), (h) and (j) are by IN.
(a) gives the recursive conditions from which the rest follow. Then (b)-(i) are basic
results that should be accessible from ordinary arithmetic.
*T13.41. The following are theorems of PA.
(a) (i) PA ⊢ fact(∅) = 1
(ii) PA ⊢ fact(Sn) = fact(n) × Sn
(b) PA ⊢ fact(1) = 1
(c) PA ⊢ fact(n) > ∅
(d) PA ⊢ (∀y < n)y|fact(n)
*(e) PA ⊢ (μy ≤ fact(n) + 1)(n < y ∧ Pr(y)) = μy(n < y ∧ Pr(y))
Hints: (a) is from the definition of fact and prior results. (c) and (d) are
straightforward by IN. (e) is from (∃y ≤ fact(n) + 1)(n < y ∧ Pr(y)) and
then T13.20b.
These are some basic results for factorial. Again (a) gives the recursive conditions
from which the rest follow. (b) is a simple particular fact; and the result from (c)
is obvious. (d) is a consequence of the way the factorial includes all the numbers
less than it. Reasoning for (e) is like (G2) in the arithmetic for Gödel numbering
reference, once you realize that all the primes less than n are included in fact(n); we
will be able to take advantage of (e) immediately below.
*T13.42. The following are theorems of PA.
(a) (i) PA ⊢ pi(∅) = 2
(ii) PA ⊢ pi(Sn) = (μy ≤ fact(pi(n)) + 1)(pi(n) < y ∧ Pr(y))
(b) PA ⊢ pi(Sn) = μy(pi(n) < y ∧ Pr(y))
(c) PA ⊢ pi(n) < pi(Sn) ∧ Pr(pi(Sn))
(d) PA ⊢ (∀w < pi(Sn))¬(pi(n) < w ∧ Pr(w))
(e) PA ⊢ Pr(pi(n))
(f) PA ⊢ pi(n) > 1
(g) PA ⊢ pi(n)^a > ∅
(h) PA ⊢ S pred(pi(n)^a) = pi(n)^a
(i) PA ⊢ (∀m < n)pi(m) < pi(n)
(j) PA ⊢ (∀m ≤ n)Sm < pi(n)
*(k) PA ⊢ ∀y(Pr(y) → ∃j pi(j) = y)
*(l) PA ⊢ m ≠ n → pred(pi(m)) ∤ pi(n)^a
(m) PA ⊢ m ≠ n → pred(pi(m)^Sb) ∤ pi(n)^a
*(n) PA ⊢ (m ≠ n ∧ pred(pi(m)^b)|(s × pi(n)^a)) → pred(pi(m)^b)|s
Hints: (a) is from definition pi and prior results. (b) is with T13.41e, and then
(c) and (d) are by T13.19(b) and (c). (e), (i) and (j) are simple inductions.
(k) is by using IN to show (∀y ≤ pi(i))(Pr(y) → ∃j pi(j) = y); the result then follows easily with (j). Under the assumption for →I, (l) is by IN
on a. For (m) you will be able to show that if pred(pi(m)^Sb)|pi(n)^a then
pred(pi(m))|pi(n)^a, and use (l). For (n), under the assumption for →I you
will be able to show i ≤ b → pred(pi(m)^i)|s by induction on i; the result
then follows easily with b ≤ b.
These are some basic results for prime sequences. (a) gives the basic recursive
conditions. Then (b) extracts the successor condition from bounded to unbounded
minimization; this allows application of the definition in (c) and (d). (e)-(h) are some
simple consequences of the fact that pi(n) is prime. Then the primes are ordered (i).
And (j) each prime is greater than the successor of its index. (k) for any prime y,
there is some j such that pi(j) = y. And (l)-(n) echo results for factor, except
combined with primes and exponentiation.
In order to manipulate exp, it will be convenient to introduce a function ex that
finds the least exponent x such that pi(i)^x does not divide Sn.
Def [ex] ex(n, i) = μx pred(pi(i)^x) ∤ Sn
(i) PA ⊢ ∃x pred(pi(i)^x) ∤ Sn
1. pi(i) > 1                          T13.42f
2. Sn < pi(i)^Sn                      1 T13.40j
3. S pred(pi(i)^Sn) = pi(i)^Sn        T13.42h
4. Sn < S pred(pi(i)^Sn)              2,3 =E
5. n < pred(pi(i)^Sn)                 4 T13.13j
6. pred(pi(i)^Sn) ∤ Sn                5 T13.24h
7. ∃x pred(pi(i)^x) ∤ Sn              6 ∃I

*T13.43. The following are theorems of PA.
(a) PA ⊢ exp(n, i) = (μx ≤ n)(pred(pi(i)^x)|n ∧ pred(pi(i)^(x+1)) ∤ n)
(b) PA ⊢ exp(∅, i) = ∅
*(c) PA ⊢ exp(Sn, i) = μx(pred(pi(i)^x)|Sn ∧ pred(pi(i)^(x+1)) ∤ Sn)
(d) PA ⊢ pred(pi(i)^exp(Sn,i))|Sn ∧ pred(pi(i)^(exp(Sn,i)+1)) ∤ Sn
(e) PA ⊢ (∀w < exp(Sn, i))¬(pred(pi(i)^w)|Sn ∧ pred(pi(i)^(w+1)) ∤ Sn)
(f) PA ⊢ (pred(pi(i)^a)|Sn ∧ pred(pi(i)^(a+1)) ∤ Sn) → exp(Sn, i) = a
(g) PA ⊢ exp(m, j) ≤ m
(h) PA ⊢ j ≥ n → exp(Sn, j) = ∅
(i) PA ⊢ pred(pi(i))|Sm ↔ exp(Sm, i) ≥ 1
(j) PA ⊢ ∃q(pi(i)^exp(Sn,i) × q = Sn ∧ pred(pi(i)) ∤ q)
*(k) PA ⊢ ∃q(pi(i)^exp(Sn,i) × q = Sn ∧ ∀y(y ≠ i → exp(q, y) = exp(Sn, y)))
Hints: (a) is from definition exp and prior results. (c) is by PA ⊢ (∃x ≤
Sn)(pred(pi(i)^x)|Sn ∧ pred(pi(i)^(x+1)) ∤ Sn) and then T13.20b; ex(n, i) =
∅ ∨ ex(n, i) > ∅; in the latter case, the trick is to generalize on the number prior to ex(n, i). (f) is by showing that a = μx(pred(pi(i)^x)|Sn ∧
pred(pi(i)^(x+1)) ∤ Sn). (j): from pred(pi(i)^exp(Sn,i))|Sn there is an a such
that pred(pi(i)^exp(Sn,i)) × a = Sn; you will be able to show that pred(pi(i))
∤ a. (k): from pred(pi(i)^exp(Sn,i))|Sn there is a j such that pi(i)^exp(Sn,i) × j =
Sn; the hard part is to show k ≠ i → exp(j, k) = exp(Sn, k); for this, it
will be helpful to establish that j is a successor.
(a) is from the definition. (b) is the standard result with bound ∅. (c) extracts the successor case from the bounded to an unbounded minimization; this allows application
of the definition in (d) and (e). From (f) the reasoning goes the other way around: not
only does the condition apply to the exponent, but if the condition applies to some
a, then a is the exponent. Then (g) the exponent of some prime in the factorization
of m cannot be greater than m; and (h) a prime whose index is greater than or equal
to n does not divide into Sn. (i) makes the obvious connection between factor and
exponent. According to (j), once you divide Sn by pi(i) exp(Sn, i) times you are
left with a q such that pi(i) does not divide into it any more. And (k), once you divide
Sn by pi(i) exp(Sn, i) times you are left with a q such that the exponents of all the
other primes remain the same as in Sn.
*T13.44. The following are theorems of PA.
(a) PA ⊢ len(n) = (μy ≤ n)(∀z ≤ n)(z ≥ y → exp(n, z) = ∅)
(b) PA ⊢ len(∅) = ∅
(c) PA ⊢ len(Sn) = μy(∀z ≤ Sn)(z ≥ y → exp(Sn, z) = ∅)
(d) PA ⊢ (∀z ≤ Sn)(z ≥ len(Sn) → exp(Sn, z) = ∅)
(e) PA ⊢ (∀w < len(Sn))¬(∀z ≤ Sn)(z ≥ w → exp(Sn, z) = ∅)
(f) PA ⊢ len(1) = ∅
(g) PA ⊢ len(m) > ∅ → m > 1
*(h) PA ⊢ exp(m, i) > ∅ → len(m) > i
(i) PA ⊢ (∀z ≥ len(n))exp(n, z) = ∅
*(j) PA ⊢ len(n) = Sl → exp(n, l) ≥ 1
Hints: (a) is from definition length and prior results. (c) follows with T13.43h
and existentially generalizing on Sn itself. (f) is by application of (c). Under
the assumption for →I, (h) divides into cases for m = ∅ and m > ∅; for the
latter, suppose len(m) ≤ i; then you will be able to make use of (d). (i): under
the assumption a ≥ len(n) for (∀I), either n = ∅ or n > ∅; the first case is
easy; for the second, there is some m such that n = Sm; your main reasoning
will be to show exp(Sm, a) = ∅. (j): under the assumption for →I, the case
when n = ∅ is impossible; so there is some m such that n = Sm; with this,
suppose exp(Sm, l) < 1; then you will be able to show, contrary to your
assumption, that len(Sm) = l.
Again (a) is from the definition and (b) gives the standard result for bound ∅. (c)
extracts the successor case from bounded to unbounded minimization; (d) and (e)
then apply the definition. (f) is a simple particular result; and then (g) is an immediate
consequence of (b) and (f). From (h), if an exponent of some prime is greater than
zero, it cannot be the last prime involved in the factorization of m; and from (i), primes
≥ the length of n must all have exponent ∅. Length is set up so that it finds the first
prime such that it and all the ones after have exponent zero; so (j) the prime prior to
the length has exponent ≠ ∅.
For our last results in this part, it will be helpful to introduce a couple of auxiliary
notions. We introduce them at the level of recursive functions.
First, exc(m, n, i), which (indirectly) takes the value of the ith exponent in the
concatenation of m and n:
(μy ≤ exp(m, i) + exp(n, i ∸ len(m)))((i < len(m) ∧ y = exp(m, i)) ∨ (i ≥ len(m) ∧ y = exp(n, i ∸ len(m))))
The idea is simply to set y to one or the other of exp(m, i) or exp(n, i ∸ len(m)),
so that y takes the value of the ith exponent in the concatenation of m and n. Then
val(m, n, i) is defined by recursion as follows.
val(m, n, 0) = 1
val(m, n, Sy) = val(m, n, y) × pi(y)^exc(m,n,y)
So val(m, n, i) returns the product of the first i primes in the factorization of m ⋆ n.
This notion will be helpful for manipulating concatenation. Say m ∗ n is the defined
correlate to m ⋆ n, and let l = len(m) + len(n).
*T13.45. The following are theorems of PA.
(a) PA ⊢ exc(m, n, i) = (μy ≤ exp(m, i) + exp(n, i ∸ len(m)))((i < len(m) ∧
y = exp(m, i)) ∨ (i ≥ len(m) ∧ y = exp(n, i ∸ len(m))))
(b) (i) PA ⊢ val(m, n, ∅) = 1
(ii) PA ⊢ val(m, n, Sy) = val(m, n, y) × pi(y)^exc(m,n,y)
(c) (i) PA ⊢ m ∗ n = (μx ≤ Bm,n)((∀i < len(m)){exp(x, i) = exp(m, i)} ∧
(∀i < len(n)){exp(x, i + len(m)) = exp(n, i)})
(ii) PA ⊢ Bm,n = pi(l)^((m+n)×l)
(d) PA ⊢ exc(m, n, i) = μy((i < len(m) ∧ y = exp(m, i)) ∨ (i ≥ len(m) ∧ y =
exp(n, i ∸ len(m))))
(e) PA ⊢ i < len(m) → exc(m, n, i) = exp(m, i)
(f) PA ⊢ i ≥ len(m) → exc(m, n, i) = exp(n, i ∸ len(m))
(g) PA ⊢ val(m, n, i) > ∅
*(h) PA ⊢ (∀i ≥ a)pred(pi(i)) ∤ val(m, n, a)
*(i) PA ⊢ (∀j < i)exp(val(m, n, i), j) = exc(m, n, j)
*(j) PA ⊢ (∀i < len(m))exp(val(m, n, l), i) = exp(m, i) ∧
(∀i < len(n))exp(val(m, n, l), i + len(m)) = exp(n, i)
*(k) PA ⊢ i ≤ l → pi(l)^((m+n)×i) ≥ val(m, n, i)
(l) PA ⊢ m ∗ n = μx((∀i < len(m)){exp(x, i) = exp(m, i)} ∧ (∀i < len(n)){exp(x,
i + len(m)) = exp(n, i)})
(m) PA ⊢ (∀i < len(m)){exp(m ∗ n, i) = exp(m, i)} ∧ (∀i < len(n)){exp(m ∗ n,
i + len(m)) = exp(n, i)}
(n) PA ⊢ (∀w < m ∗ n)¬((∀i < len(m)){exp(w, i) = exp(m, i)} ∧ (∀i <
len(n)){exp(w, i + len(m)) = exp(n, i)})
*(o) PA ⊢ len(m ∗ n) ≥ l
*(p) PA ⊢ len(m ∗ n) = l
Hints: (a), (b) and (c) are from the definitions exc, val and concatenation
with prior results. (h) is by IN on a. (i) is by IN on i; in the show, under
the assumptions (∀j < i)exp(val(m, n, i), j) = exc(m, n, j) and a < Si, you
will have separate cases for a < i and a = i. (j) is straightforward with
applications of (i), (e) and (f). (k) is an induction on i; in the show, the main
task is to obtain exc(m, n, i) ≤ m + n; the result then follows with previously
established inequalities. (o) divides into cases for len(n) = ∅ and len(n) > ∅;
and within the first, again, cases for len(m) = ∅ and len(m) > ∅. For (p)
show len(m ∗ n) ≤ l and apply (o); for the main argument (which will be
long!) assume len(m ∗ n) > l; then you will be able to apply T13.43k and
show that the q so obtained contradicts T13.45n.
A great deal of work goes into obtaining (k) as a basis for (l); the idea is the same as behind the intuitive account of the bound from chapter 12: pi(l)^(m+n) is greater than every term in the factorization of m ∗ n; so by a simple induction, for each i ≤ l, pi(l)^((m+n)·i) remains greater than val(m, n, i); val(m, n, l) is therefore both under the bound and satisfies the condition for m ∗ n, so that the existential condition is satisfied, and we may extract the bounded to an unbounded minimization. Once this is accomplished, we are most of the way home.

(a), (b), and (c) are from the definitions. Then (d) extracts exc from the bounded to unbounded minimization; and (e) and (f) apply the definition. (g) is obvious. (h) results because val(m, n, a) is a product of primes prior to pi(a), so that greater primes do not divide it. Then by (i) the exponents in val are like the exponents in exc. This gives us (j), that the exponents in val are like the exponents in m and n. But val is constructed so that an induction enables a natural comparison between B_{m,n} and val(m, n, i), so that by (k), up to any i ≤ l, pi(l)^((m+n)·i) ≥ val(m, n, i). This enables us to extract m ∗ n from bounded to unbounded minimization (l) and apply the definition, (m) and (n). Then (o) and (p) establish that the length of m ∗ n sums the lengths of m and n.
the remainder of this section is work in progress
Recall from chapter 12 that where p = ⌜P⌝, v = ⌜v⌝, and s = ⌜s⌝, there is a recursive formsub(p, v, s) which returns the Gödel number of P^v_s. In addition, there is a recursive num(n) that returns the Gödel number of the standard numeral for n. Let gvar(n) =def 2^(23 + 2n) be the Gödel number of the variable x_n. Then


formsub(p, gvar(n), num(y)) is a function which returns the number of the formula that substitutes a numeral for the value (number) assigned to y into the place of x_n. So, for example, if y is assigned the value 2, then formsub(p, gvar(n), num(y)) returns the Gödel number of P with the numeral for 2 in the place of x_n. So PA defines formsub(p, gvar(n), num(y)). Now,

T13.46. The following are theorems of PA.

(a) If PA ⊢ Prvt(p) then PA ⊢ Prvt(⌜∀⌝ ∗ gvar(n) ∗ p)
(b) PA ⊢ Prvt(cnd(⌜∀⌝ ∗ gvar(n) ∗ p, formsub(p, gvar(n), num(x))))

Effectively, (a) is like Gen* and (b) like A4.
T13.47. The following are theorems of PA. Suppose x = x_i and y = x_j.

(a) If x is not free in P, then PA ⊢ formsub(⌜P⌝, ⌜x⌝, y) = ⌜P⌝
(b) PA ⊢ formsub(formsub(p, gvar(m), num(x_m)), gvar(n), num(x_n)) = formsub(formsub(p, gvar(n), num(x_n)), gvar(m), num(x_m))
(c) PA ⊢ formsub(cnd(⌜P⌝, ⌜Q⌝), gvar(i), num(x)) = cnd(formsub(⌜P⌝, gvar(i), num(x)), formsub(⌜Q⌝, gvar(i), num(x)))
(d) PA ⊢ formsub(⌜P^x_y⌝, gvar(j), num(y)) = formsub(⌜P⌝, gvar(i), num(y))
(e) PA ⊢ formsub(⌜P^x_{Sy}⌝, gvar(j), num(y)) = formsub(⌜P⌝, gvar(i), num(Sy))
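As a toy illustration of the substitution facts in T13.47a and T13.47b, here is a string-level sketch (strings stand in for Gödel numbers; the names `num` and `formsub` below are hypothetical stand-ins, and the real formsub respects bound occurrences rather than doing textual replacement):

```python
# Substituting numerals for distinct variables commutes (T13.47b), and
# substitution for a variable with no occurrence is vacuous (T13.47a).

def num(n):
    """Standard numeral for n: S...S0."""
    return "S" * n + "0"

def formsub(formula, var, numeral):
    """Toy substitution: replace every occurrence of var with the numeral."""
    return formula.replace(var, numeral)

P = "x1 + x2 = x3"
# T13.47b: order of substitution for distinct variables does not matter.
a = formsub(formsub(P, "x1", num(2)), "x2", num(3))
b = formsub(formsub(P, "x2", num(3)), "x1", num(2))
assert a == b == "SS0 + SSS0 = x3"
# T13.47a: substituting for a variable not in the formula changes nothing.
assert formsub(P, "x4", num(5)) == P
```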

*E13.26. Show (d) and (h) from T13.40. Hard core: show each of the results from
T13.40.

*E13.27. Show (d) and (e) from T13.41. Hard core: show each of the results from
T13.41.

*E13.28. Show (i) and (j) from T13.42. Hard core: show each of the results from
T13.42.

*E13.29. Show (c) and (f) from T13.43. Hard core: show each of the results from
T13.43.


Theorems to carry forward from 13.3.3

Together with the results from T13.39, the following theorems achieved in this part have application in the sections that follow.

T13.44h PA ⊢ exp(m, i) > ∅ → len(m) > i
T13.45m PA ⊢ (∀i < len(m)){exp(m ∗ n, i) = exp(m, i)} ∧ (∀i < len(n)){exp(m ∗ n, i + len(m)) = exp(n, i)}
T13.45p PA ⊢ len(m ∗ n) = len(m) + len(n)
T13.46a If PA ⊢ Prvt(p) then PA ⊢ Prvt(⌜∀⌝ ∗ gvar(n) ∗ p)
T13.46b PA ⊢ Prvt(cnd(⌜∀⌝ ∗ gvar(n) ∗ p, formsub(p, gvar(n), num(x))))
T13.47a If x is not free in P, then PA ⊢ formsub(⌜P⌝, ⌜x⌝, y) = ⌜P⌝
T13.47b PA ⊢ formsub(formsub(p, gvar(m), num(x_m)), gvar(n), num(x_n)) = formsub(formsub(p, gvar(n), num(x_n)), gvar(m), num(x_m))
T13.47c PA ⊢ formsub(cnd(⌜P⌝, ⌜Q⌝), gvar(i), num(x)) = cnd(formsub(⌜P⌝, gvar(i), num(x)), formsub(⌜Q⌝, gvar(i), num(x)))
T13.47d PA ⊢ formsub(⌜P^x_y⌝, gvar(j), num(y)) = formsub(⌜P⌝, gvar(i), num(y))
T13.47e PA ⊢ formsub(⌜P^x_{Sy}⌝, gvar(j), num(y)) = formsub(⌜P⌝, gvar(i), num(Sy))

*E13.30. Show (f) and (i) from T13.44. Hard core: show each of the results from
T13.44.
*E13.31. Show (d), (e), (l) and (o) from T13.45. Hard core: show each of the results
from T13.45.

13.4 The second and third conditions

After all our preparation, we are ready to turn to the second and third conditions.

13.4.1 The second condition: □(P → Q) → (□P → □Q)

Suppose both □(P → Q) and □P. Then there are j and k such that PRFT(j, ⌜P → Q⌝) and PRFT(k, ⌜P⌝). Where 2^⌜Q⌝ numbers the sequence whose only member is Q, intuitively, l = j ⋆ k ⋆ 2^⌜Q⌝ numbers a proof of Q: for we prove P → Q, then P,


then Q follows immediately as the last line by MP. So the idea is that PRFT(l, ⌜Q⌝), and so □Q follows from the assumptions. The task is to prove all of this in PA.
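The concatenation idea can be mirrored in a few lines (a toy sketch with string formulas and lists in place of Gödel numbers; `icon` and `is_proof_of` are hypothetical stand-ins for Icon and PRFT, and immediate consequence is restricted to MP):

```python
# If j codes a proof of P -> Q and k codes a proof of P, then j followed
# by k followed by Q is a proof of Q: its last line follows from two
# earlier lines by MP.

def icon(a, b, c):
    """c is an immediate consequence of a and b by MP."""
    return a == f"({b} -> {c})"

def is_proof_of(seq, target, axioms):
    """Every line is an axiom or follows from two earlier lines; last is target."""
    for i, line in enumerate(seq):
        ok = line in axioms or any(
            icon(seq[m], seq[n], line)
            for m in range(i) for n in range(i))
        if not ok:
            return False
    return seq[-1] == target

P, Q = "P", "Q"
axioms = {f"({P} -> {Q})", P}   # stand-ins so that j and k count as proofs
j = [f"({P} -> {Q})"]           # a proof of P -> Q
k = [P]                         # a proof of P
l = j + k + [Q]                 # the concatenated sequence
assert is_proof_of(l, Q, axioms)
```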
Observe that we have on the table expressions of the sort +, Plus, and plus, where the first is a primitive symbol of L_NT, the second the original relation to capture the recursive function plus, and the last a function symbol defined from the recursive function. In view of demonstrated equivalences, we will tend to slide between them without notice. So, for example, given that ⟨⟨2, 2⟩, 4⟩ ∈ plus, by capture PA ⊢ Plus(2, 2, 4); and by demonstrated equivalences, PA ⊢ 2 + 2 = 4 and PA ⊢ plus(2, 2) = 4; and similarly in other cases. We require such a move at different stages in the following, and suppose that P and Q are particular sentences, so that ⌜P⌝ and ⌜Q⌝ are particular numerals, and capture applies in the natural way.
*T13.48. PA ⊢ □(P → Q) → (□P → □Q). Corollary: PA ⊢ Prvt(cnd(⌜P⌝, ⌜Q⌝)) → (Prvt(⌜P⌝) → Prvt(⌜Q⌝)).

We begin by showing the corollary. The main theorem is then immediate, with PA ⊢ ⌜P → Q⌝ = cnd(⌜P⌝, ⌜Q⌝) by capture.

 1. | Prvt(cnd(⌜P⌝, ⌜Q⌝))   A (g →I)
 2. | | Prvt(⌜P⌝)   A (g →I)
 3. | | Icon(cnd(⌜P⌝, ⌜Q⌝), ⌜P⌝, ⌜Q⌝)   T13.39a,b
 4. | | ∃vPrft(v, cnd(⌜P⌝, ⌜Q⌝))   1 abv
 5. | | ∃vPrft(v, ⌜P⌝)   2 abv
 6. | | | Prft(j, cnd(⌜P⌝, ⌜Q⌝))   A (g 4∃E)
 7. | | | | Prft(k, ⌜P⌝)   A (g 5∃E)
 8. | | | | l = (j ∗ k) ∗ 2^⌜Q⌝   def
 9. | | | | exp(j, len(j) ∸ 1) = cnd(⌜P⌝, ⌜Q⌝)   6 T13.39c
10. | | | | exp(k, len(k) ∸ 1) = ⌜P⌝   7 T13.39c
11. | | | | exp(l, len(j) + len(k)) = ⌜Q⌝   8 T13.45m,p
(A) 12. | | | | Icon(exp(j, len(j) ∸ 1), exp(k, len(k) ∸ 1), exp(l, len(j) + len(k)))   3,9,10,11 =E
13. | | | | (∀i < len(j))exp(l, i) = exp(j, i)   8 T13.45m
14. | | | | (∀i < len(k))exp(l, len(j) + i) = exp(k, i)   8 T13.45m
15. | | | | exp(l, len(j) ∸ 1) = exp(j, len(j) ∸ 1)   13 T13.44h (∀E)
16. | | | | exp(l, len(j) + len(k) ∸ 1) = exp(k, len(k) ∸ 1)   14 T13.44h (∀E)
(B) 17. | | | | Icon(exp(l, len(j) ∸ 1), exp(l, len(j) + len(k) ∸ 1), exp(l, len(j) + len(k)))   12,15,16 =E
18. | | | | (∀i < len(j))[Axiom(exp(l, i)) ∨ (∃m < i)(∃n < i)Icon(exp(l, m), exp(l, n), exp(l, i))]   6,13 T13.39c
19. | | | | (∀i < len(k))[Axiom(exp(l, len(j) + i)) ∨ (∃m < i)(∃n < i)Icon(exp(l, len(j) + m), exp(l, len(j) + n), exp(l, len(j) + i))]   7,14 T13.39c
(C) 20. | | | | (∀i : len(j) ≤ i < len(j) + len(k))[Axiom(exp(l, i)) ∨ (∃m < i)(∃n < i)Icon(exp(l, m), exp(l, n), exp(l, i))]   from 19
21. | | | | | x < len(l)   A (g (∀I))
22. | | | | | x < len(j) ∨ len(j) ≤ x < len(j) + len(k) ∨ x = len(j) + len(k)   8,21 T13.45p
23. | | | | | | x < len(j)   A (g 22∨E)
24. | | | | | | Axiom(exp(l, x)) ∨ (∃m < x)(∃n < x)Icon(exp(l, m), exp(l, n), exp(l, x))   18,23 (∀E)
25. | | | | | | len(j) ≤ x < len(j) + len(k)   A (g 22∨E)
26. | | | | | | Axiom(exp(l, x)) ∨ (∃m < x)(∃n < x)Icon(exp(l, m), exp(l, n), exp(l, x))   20 (∀E)
27. | | | | | | x = len(j) + len(k)   A (g 22∨E)
28. | | | | | | (∃m < x)(∃n < x)Icon(exp(l, m), exp(l, n), exp(l, x))   17,27
29. | | | | | | Axiom(exp(l, x)) ∨ (∃m < x)(∃n < x)Icon(exp(l, m), exp(l, n), exp(l, x))   28 ∨I
30. | | | | | Axiom(exp(l, x)) ∨ (∃m < x)(∃n < x)Icon(exp(l, m), exp(l, n), exp(l, x))   22 ∨E
(D) 31. | | | | (∀x < len(l))[Axiom(exp(l, x)) ∨ (∃m < x)(∃n < x)Icon(exp(l, m), exp(l, n), exp(l, x))]   21-30 (∀I)
32. | | | | exp(l, len(l) ∸ 1) = ⌜Q⌝   11 T13.45p
33. | | | | exp(l, len(l) ∸ 1) = ⌜Q⌝ ∧ (∀x < len(l))[Axiom(exp(l, x)) ∨ (∃m < x)(∃n < x)Icon(exp(l, m), exp(l, n), exp(l, x))]   31,32 ∧I
34. | | | | Prft(l, ⌜Q⌝)   33 T13.39c
35. | | | | Prvt(⌜Q⌝)   34 abv
36. | | | Prvt(⌜Q⌝)   5,7-35 ∃E
37. | | Prvt(⌜Q⌝)   4,6-36 ∃E
38. | Prvt(⌜P⌝) → Prvt(⌜Q⌝)   2-37 →I
(E) 39. Prvt(cnd(⌜P⌝, ⌜Q⌝)) → (Prvt(⌜P⌝) → Prvt(⌜Q⌝))   1-38 →I

This derivation is long, and skips steps; but it should be enough for you to see how


the argument works and to fill in the details if you choose. First, at stage (A), under assumptions for →I, there are derivations numbered j and k and a longer sequence numbered l; and the last member of this longer sequence is an immediate consequence of the last members of the derivations numbered j and k. At (B) the results from (12) are all applied to the sequence numbered l, so that the last sentence in the longer sequence is an immediate consequence of its earlier members. At (C) the different fragments of the longer sequence have the character of a proof. And at (D) the whole sequence numbered l has the character of a proof. Finally, at (E) we observe that this longer sequence is a proof of Q, and discharge the assumptions for the result that Prvt(cnd(⌜P⌝, ⌜Q⌝)) → (Prvt(⌜P⌝) → Prvt(⌜Q⌝)), so that with T13.32, Prvt(cnd(⌜P⌝, ⌜Q⌝)) → (Prvt(⌜P⌝) → Prvt(⌜Q⌝)). This gives us the corollary; and by capture, Prvt(⌜P → Q⌝) → (Prvt(⌜P⌝) → Prvt(⌜Q⌝)), so that PA ⊢ □(P → Q) → (□P → □Q). Thus the second derivability condition is established.
*E13.32. As a start to a complete demonstration of T13.48, provide a demonstration
through part (C) that does not skip any steps. You may find it helpful to
divide your demonstration into separate parts for (A), (B) and then for lines
(18), (19) and (20). Hard core: complete the entire derivation.

13.4.2 The third condition: PA ⊢ □P → □□P

To show the third condition, that PA ⊢ □P → □□P, it is sufficient to show PA ⊢ Q → □Q for Σ₁ sentences Q. For when Q is □P, the result is immediate. Further, □P is Prvt(⌜P⌝); thus, since Prvt is Σ₁, for any P, Prvt(⌜P⌝) is Σ₁. So it is sufficient to show that for any Σ₁ sentence Q, PA ⊢ Q → □Q. We build gradually to this result. Observe that, insofar as we appeal to theorems from before (including D2), the results of this section are no less dependent on our work from section 13.3 than the one before.
Substitutions. Return to our function formsub(p, gvar(n), num(y)) and to the corresponding defined formsub(p, gvar(n), num(y)) in PA. Where x⃗ is a (possibly empty) sequence x₁ ... x_n including at least all the free variables in P,

PA ⊢ sub₀(⌜P⌝, x⃗) = ⌜P⌝
PA ⊢ sub_{Si}(⌜P⌝, x⃗) = formsub(sub_i(⌜P⌝, x⃗), gvar(Si), num(x_{Si}))

And PA ⊢ sub(⌜P⌝, x⃗) = sub_n(⌜P⌝, x⃗). Observe that sub(⌜P⌝, x⃗) has as free variables the variables free in P but, intuitively, returns the Gödel number of a sentence: the sentence which substitutes into places for free variables numerals for the values assigned to those variables.

With T13.47a and T13.47b, we can show that sub(⌜P⌝, x⃗) = sub(⌜P⌝, y⃗) so long as x⃗ and y⃗ include all the free variables of P. Thus,
*T13.49. If x⃗ and y⃗ are the same except that y⃗ includes some variables not in x⃗ (and so not free in P), then PA ⊢ sub(⌜P⌝, x⃗) = sub(⌜P⌝, y⃗).

Hint: Where the variables of x⃗ are ordered x_{1.0}, x_{2.0} ... x_{n.0}, let the variables of y⃗ be of the sort x_{0.1} ... x_{0.a}; x_{1.0} ... x_{1.b}; ...; x_{n.0} ... x_{n.c}. So S(n.m) is either n.Sm or Sn.0. Then by a simple induction on the value of n.m you will be able to show that sub_{n.0}(⌜P⌝, x⃗) = sub_{n.m}(⌜P⌝, y⃗).
*T13.50. PA ⊢ sub(⌜P⌝, x₀, x⃗, y⃗) = sub(⌜P⌝, x⃗, x₀, y⃗).

Observe that for any x⃗ = x₁ ... x_n, the value of sub_{n+1}(⌜P⌝, x₀, x⃗, y⃗) and of sub_{n+1}(⌜P⌝, x⃗, x₀, y⃗) does not depend on the variables in y⃗. So, as a minor simplification, it is enough to concentrate on showing PA ⊢ sub_{n+1}(⌜P⌝, x₀, x₁ ... x_n) = sub_{n+1}(⌜P⌝, x₁ ... x_n, x₀).

The argument is an induction on the value of n. The key is that sub_{i+2}(⌜P⌝, x₁ ... x_{i+1}, x₀) = formsub(formsub(sub_i(⌜P⌝, x₁ ... x_i), gvar(i + 1), num(x_{i+1})), gvar(0), num(x₀)); then you will be able to apply T13.47b and the assumption.
This effectively gives the ability to sort variables from one order into another. Suppose the members of x⃗ are in the standard order. To convert y⃗ to x⃗, a straightforward approach is to switch members into the first position in the reverse of their order in x⃗: so for n members, at stage i, the result is x_{Sn∸i} ... x_n, y⃗′ where y⃗′ is like y⃗ less the members that precede it. So for a vector with 6 members, at stage 0 we begin with some sub(⌜P⌝, y⃗); then at stage three PA proves this is equivalent to sub(⌜P⌝, x₄, x₅, x₆, y⃗′); and at stage 6 that it is equivalent to sub(⌜P⌝, x⃗). This is an induction, but simple enough, so left as an exercise.

Given that PA ⊢ sub(⌜P⌝, x⃗) = sub(⌜P⌝, y⃗) for vectors including all the free variables in P, simply select a standard vector with just the free variables in P and all the variables in a standard order. Then, introducing double brackets as a special notation,

Prvt[[P]](x⃗) =def Prvt(sub(⌜P⌝, x⃗))
E


Where P has free variables x⃗, Prvt(⌜P⌝) asserts the provability of the open formula P(x⃗). But Prvt[[P]](x⃗) itself has all the free variables of P and asserts the provability of whatever sentences have numerals for the variables free in P: so, for example, ∀xPrvt[[P]](x) asserts the provability of P^x_∅, P^x_{S∅}, and so forth. When P is a sentence, there are no substitutions to be made, and Prvt[[P]] is the same as Prvt(⌜P⌝).

We show PA ⊢ P → Prvt[[P]] for Σ₁ formulas P. When P is a sentence, this gives PA ⊢ P → Prvt(⌜P⌝), which is to be shown.
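As a toy reading of the double-bracket construction (strings stand in for Gödel numbers, "provability" is faked by truth of the closed toy equation in the standard model, and all names below are hypothetical):

```python
# Prvt[[P]](x, y, z): substitute numerals for the free variables of P,
# then assert "provability" of the resulting sentence.

def num(n):
    return "S" * n + "0"            # standard numeral S...S0

def sub(formula, assignment):
    """Substitute numerals for all the free variables of the toy formula."""
    for var, val in assignment.items():
        formula = formula.replace(var, num(val))
    return formula

def toy_prvt(sentence):
    """Stand-in for Prvt: 'provable' iff the closed toy equation is true."""
    left, right = sentence.split(" = ")
    a, b = (s.count("S") for s in left.split(" + "))
    return a + b == right.count("S")

P = "x + y = z"                      # open formula with free x, y, z
def prvt_P(x, y, z):                 # Prvt[[x + y = z]]
    return toy_prvt(sub(P, {"x": x, "y": y, "z": z}))

assert prvt_P(2, 2, 4)               # the instance 2 + 2 = 4 is "provable"
assert not prvt_P(2, 2, 5)
```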
Finally, in this part, we require some short theorems in order to manipulate this
new notion. Each is by a short induction. First analogs to D1 and D2.
T13.51. If PA ⊢ P, then PA ⊢ Prvt[[P]].    (analog to D1)

Suppose PA ⊢ P. By induction on the value of n, PA ⊢ Prvt(sub_n(⌜P⌝, x⃗)); the case when i = n gives the desired result.

Basis: sub₀(⌜P⌝, x⃗) = ⌜P⌝. Since PA ⊢ P, by D1, PA ⊢ Prvt(⌜P⌝); so PA ⊢ Prvt(sub₀(⌜P⌝, x⃗)).
Assp: PA ⊢ Prvt(sub_i(⌜P⌝, x⃗)).
Show: PA ⊢ Prvt(sub_{Si}(⌜P⌝, x⃗)). By assumption, PA ⊢ Prvt(sub_i(⌜P⌝, x⃗)); so by T13.46a, PA ⊢ Prvt(⌜∀⌝ ∗ gvar(Si) ∗ sub_i(⌜P⌝, x⃗)); but by T13.46b we have PA ⊢ Prvt(cnd(⌜∀⌝ ∗ gvar(Si) ∗ sub_i(⌜P⌝, x⃗), formsub(sub_i(⌜P⌝, x⃗), gvar(Si), num(x_{Si})))); so with D2, PA ⊢ Prvt(⌜∀⌝ ∗ gvar(Si) ∗ sub_i(⌜P⌝, x⃗)) → Prvt(formsub(sub_i(⌜P⌝, x⃗), gvar(Si), num(x_{Si}))); so by →E, PA ⊢ Prvt(formsub(sub_i(⌜P⌝, x⃗), gvar(Si), num(x_{Si}))); so with the definition, PA ⊢ Prvt(sub_{Si}(⌜P⌝, x⃗)).
Indct: For any n, PA ⊢ Prvt(sub_n(⌜P⌝, x⃗)).

So PA ⊢ Prvt(sub(⌜P⌝, x⃗)), and PA ⊢ Prvt[[P]].
T13.52. PA ⊢ Prvt[[P → Q]] → (Prvt[[P]] → Prvt[[Q]]).    (analog to D2)

By induction on n, PA ⊢ Prvt(sub_n(⌜P → Q⌝, x⃗)) → (Prvt(sub_n(⌜P⌝, x⃗)) → Prvt(sub_n(⌜Q⌝, x⃗))).

Basis: PA ⊢ sub₀(⌜P → Q⌝, x⃗) = ⌜P → Q⌝; PA ⊢ sub₀(⌜P⌝, x⃗) = ⌜P⌝; and PA ⊢ sub₀(⌜Q⌝, x⃗) = ⌜Q⌝. By D2, PA ⊢ Prvt(⌜P → Q⌝) → (Prvt(⌜P⌝) → Prvt(⌜Q⌝)); so PA ⊢ Prvt(sub₀(⌜P → Q⌝, x⃗)) → (Prvt(sub₀(⌜P⌝, x⃗)) → Prvt(sub₀(⌜Q⌝, x⃗))).
Assp: PA ⊢ Prvt(sub_i(⌜P → Q⌝, x⃗)) → (Prvt(sub_i(⌜P⌝, x⃗)) → Prvt(sub_i(⌜Q⌝, x⃗))).
Show: PA ⊢ Prvt(sub_{Si}(⌜P → Q⌝, x⃗)) → (Prvt(sub_{Si}(⌜P⌝, x⃗)) → Prvt(sub_{Si}(⌜Q⌝, x⃗))).
Suppose PA ⊢ Prvt(sub_{Si}(⌜P → Q⌝, x⃗)). By capture, PA ⊢ ⌜P → Q⌝ = cnd(⌜P⌝, ⌜Q⌝); so PA ⊢ Prvt(sub_{Si}(cnd(⌜P⌝, ⌜Q⌝), x⃗)). We have PA ⊢ sub_{Si}(cnd(⌜P⌝, ⌜Q⌝), x⃗) = formsub(sub_i(cnd(⌜P⌝, ⌜Q⌝), x⃗), gvar(Si), num(x_{Si})); PA ⊢ sub_{Si}(⌜P⌝, x⃗) = formsub(sub_i(⌜P⌝, x⃗), gvar(Si), num(x_{Si})); and PA ⊢ sub_{Si}(⌜Q⌝, x⃗) = formsub(sub_i(⌜Q⌝, x⃗), gvar(Si), num(x_{Si})). From PA ⊢ Prvt(sub_{Si}(cnd(⌜P⌝, ⌜Q⌝), x⃗)), PA ⊢ Prvt(formsub(sub_i(cnd(⌜P⌝, ⌜Q⌝), x⃗), gvar(Si), num(x_{Si}))); so with T13.47c, PA ⊢ Prvt(cnd(formsub(sub_i(⌜P⌝, x⃗), gvar(Si), num(x_{Si})), formsub(sub_i(⌜Q⌝, x⃗), gvar(Si), num(x_{Si})))); so by =E, PA ⊢ Prvt(cnd(sub_{Si}(⌜P⌝, x⃗), sub_{Si}(⌜Q⌝, x⃗))); so with D2 and MP, PA ⊢ Prvt(sub_{Si}(⌜P⌝, x⃗)) → Prvt(sub_{Si}(⌜Q⌝, x⃗)). So by DT, PA ⊢ Prvt(sub_{Si}(⌜P → Q⌝, x⃗)) → (Prvt(sub_{Si}(⌜P⌝, x⃗)) → Prvt(sub_{Si}(⌜Q⌝, x⃗))).
Indct: For any n, PA ⊢ Prvt(sub_n(⌜P → Q⌝, x⃗)) → (Prvt(sub_n(⌜P⌝, x⃗)) → Prvt(sub_n(⌜Q⌝, x⃗))).

So PA ⊢ Prvt(sub(⌜P → Q⌝, x⃗)) → (Prvt(sub(⌜P⌝, x⃗)) → Prvt(sub(⌜Q⌝, x⃗))). And PA ⊢ Prvt[[P → Q]] → (Prvt[[P]] → Prvt[[Q]]).
T13.53. If t is one of ∅, y, or Sy, and t is free for x in P, then PA ⊢ Prvt[[P^x_t]] ↔ Prvt[[P]]^x_t.

(t = Sy) Suppose x = x₀, y is some x_j, and consider variables in the order x₀, x₁ ... x_n = x₀, x⃗. First, by induction, PA ⊢ sub_n(⌜P^{x₀}_{Sy}⌝, y, x⃗) = sub_n(⌜P⌝, x₀, x⃗)^{x₀}_{Sy}.

Basis: By def, PA ⊢ sub₁(⌜P^{x₀}_{Sy}⌝, y, x⃗) = formsub(⌜P^{x₀}_{Sy}⌝, gvar(j), num(y)) = (by T13.47e) formsub(⌜P⌝, gvar(0), num(Sy)) = formsub(⌜P⌝, gvar(0), num(x₀))^{x₀}_{Sy} = (by def) sub₁(⌜P⌝, x₀, x⃗)^{x₀}_{Sy}.
Assp: For any i ≥ 1, PA ⊢ sub_i(⌜P^{x₀}_{Sy}⌝, y, x⃗) = sub_i(⌜P⌝, x₀, x⃗)^{x₀}_{Sy}.
Show: PA ⊢ sub_{i+1}(⌜P^{x₀}_{Sy}⌝, y, x⃗) = sub_{i+1}(⌜P⌝, x₀, x⃗)^{x₀}_{Sy}.
By def, PA ⊢ sub_{i+1}(⌜P^{x₀}_{Sy}⌝, y, x⃗) = formsub(sub_i(⌜P^{x₀}_{Sy}⌝, y, x⃗), gvar(i + 1), num(x_{i+1})) = (by assp) formsub(sub_i(⌜P⌝, x₀, x⃗)^{x₀}_{Sy}, gvar(i + 1), num(x_{i+1})) = formsub(sub_i(⌜P⌝, x₀, x⃗), gvar(i + 1), num(x_{i+1}))^{x₀}_{Sy} = (by def) sub_{i+1}(⌜P⌝, x₀, x⃗)^{x₀}_{Sy}.
Indct: PA ⊢ sub_n(⌜P^{x₀}_{Sy}⌝, y, x⃗) = sub_n(⌜P⌝, x₀, x⃗)^{x₀}_{Sy}.

So PA ⊢ sub(⌜P^{x₀}_{Sy}⌝, y, x⃗) = sub(⌜P⌝, x₀, x⃗)^{x₀}_{Sy}. And PA ⊢ Prvt[[P^{x₀}_{Sy}]] ↔ Prvt(sub(⌜P^{x₀}_{Sy}⌝, y, x⃗)) ↔ Prvt(sub(⌜P⌝, x₀, x⃗)^{x₀}_{Sy}) ↔ Prvt(sub(⌜P⌝, x₀, x⃗))^{x₀}_{Sy} ↔ Prvt[[P]]^{x₀}_{Sy}.

Other cases are similar and left for homework.


*E13.33. Provide a demonstration for T13.49.

*E13.34. (i) Provide a demonstration for T13.50. (ii) Then provide a demonstration for the sorting result described above as simple enough to be left as an exercise.
E13.35. Complete the demonstration of T13.53 by completing the remaining cases.
Sigma star. Now we introduce a sort of simplification. Our aim is to demonstrate a result for all the Σ₁ formulas. Given our minimal resources, the task will be simplified if we can give a minimal specification of the Δ₀ formulas themselves. Toward this end, we introduce a special class of formulas, the Σ★ formulas, show that every Σ₁ formula is provably equivalent to a Σ★ formula, and demonstrate our result with respect to this special class. Say a Σ★ formula is defined as follows.

(Σ★) For any variables x, y, and z:
(a) ∅ = z, y = z, Sy = z, x + y = z, and x × y = z are Σ★ formulas.
(s) If P and Q are Σ★ formulas, then so are (P ∨ Q) and (P ∧ Q).
(∀) If P is a Σ★ formula, then so is (∀x ≤ y)P where y does not occur in P.
(∃) If P is a Σ★ formula, then so is ∃xP.
(c) Nothing else is a Σ★ formula.
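The clauses above can be mirrored by a small recognizer over a toy AST (a hypothetical tuple representation; the book's Σ★ formulas are formulas of L_NT, not data structures):

```python
# Atomics are the five listed equation shapes; formulas are closed under
# conjunction, disjunction, bounded universal quantification (with the
# bound variable not occurring in the body), and existential quantification.

ATOMIC = {"zero=", "var=", "succ=", "plus=", "times="}

def variables(f):
    if f[0] in ATOMIC:
        return set(f[1:])
    if f[0] in ("and", "or"):
        return variables(f[1]) | variables(f[2])
    if f[0] == "ball":
        return {f[1], f[2]} | variables(f[3])
    return {f[1]} | variables(f[2])          # exists

def is_sigma_star(f):
    tag = f[0]
    if tag in ATOMIC:                        # e.g. ("plus=", "x", "y", "z")
        return True
    if tag in ("and", "or"):
        return is_sigma_star(f[1]) and is_sigma_star(f[2])
    if tag == "ball":                        # ("ball", x, y, body): (Ax <= y)body
        return f[2] not in variables(f[3]) and is_sigma_star(f[3])
    if tag == "exists":                      # ("exists", x, body)
        return is_sigma_star(f[2])
    return False                             # anything else (e.g. negation)

f = ("exists", "z", ("and", ("plus=", "x", "y", "z"),
                     ("ball", "u", "w", ("var=", "u", "v"))))
assert is_sigma_star(f)
assert not is_sigma_star(("not", ("var=", "x", "y")))
```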
We aim to show that any Σ₁ formula is provably equivalent to a Σ★ formula. Then results which apply to all the Σ★ formulas immediately transfer to the Σ₁ formulas. The argument proceeds through results for atomics and Δ₀ formulas to the Σ₁ formulas. First, a preliminary result for atomic equalities,


T13.54. For any P of the form t = x, there is a P★ such that PA ⊢ P ↔ P★.

By induction on the function symbols in t.

Basis: If t has no function symbols, then it is the constant ∅ or a variable y, so P is of the form ∅ = x or y = x; but these are already Σ★ formulas. So let P★ be the same as P. Then PA ⊢ P ↔ P★.
Assp: For any i, 0 ≤ i < k, if t has i function symbols, there is a P★ such that PA ⊢ P ↔ P★.
Show: If t has k function symbols, there is a P★ such that PA ⊢ P ↔ P★.
If t has k function symbols, then it is of the form Sr, r + s, or r × s for r and s with < k function symbols.

(S) t is Sr, so that P is Sr = x. Set P★ = ∃z((r = z)★ ∧ Sz = x); then by assumption, PA ⊢ r = z ↔ (r = z)★. So reason as follows,

 1. r = z ↔ (r = z)★   assp
 2. | Sr = x   A (g ↔I)
 3. | r = r ∧ Sr = x   from 2
 4. | ∃z(r = z ∧ Sz = x)   3 ∃I
 5. | ∃z((r = z)★ ∧ Sz = x)   1,4 with T9.9
 6. | ∃z((r = z)★ ∧ Sz = x)   A (g ↔I)
 7. | | (r = z)★ ∧ Sz = x   A (g 6∃E)
 8. | | r = z   1,7 ↔E
 9. | | Sr = x   from 7,8
10. | Sr = x   6,7-9 ∃E
11. Sr = x ↔ ∃z((r = z)★ ∧ Sz = x)   2-5,6-10 ↔I

So PA ⊢ P ↔ P★.

(+) t is s + r, so that P is s + r = x. Set P★ = ∃u∃v((s = u)★ ∧ (r = v)★ ∧ u + v = x). Then PA ⊢ P ↔ P★.
(×) Similarly.

Indct: For any P of the form t = x, there is a P★ such that PA ⊢ P ↔ P★.
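The term-flattening idea behind the (S), (+), and (×) cases can be sketched as follows (a hypothetical AST and helper names; fresh variables are generated as z0, z1, ...):

```python
# An equation t = x with a complex term t is unfolded into existentials
# over fresh variables so that every atomic has one of the basic shapes.
from itertools import count

def flatten(term, x, fresh=None):
    """Return a Sigma-star style formula equivalent to term = x."""
    fresh = fresh or count()
    if isinstance(term, str):                  # the constant 0 or a variable
        return ("eq", term, x)
    op, args = term[0], term[1:]               # e.g. ("succ", "r")
    names = [f"z{next(fresh)}" for _ in args]
    body = (op + "=", *names, x)               # e.g. ("succ=", "z0", "x")
    for name, arg in zip(names, args):
        body = ("and", flatten(arg, name, fresh), body)
    for name in reversed(names):
        body = ("exists", name, body)
    return body

# S(r) = x  becomes  Ez0 (r = z0 ^ Sz0 = x), as in case (S) above.
assert flatten(("succ", "r"), "x") == \
    ("exists", "z0", ("and", ("eq", "r", "z0"), ("succ=", "z0", "x")))
```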
Now we can show that each Δ₀ formula is provably equivalent to a Σ★ formula. First, it will be convenient to generalize the definitions for a normal form from T8.1. Thus, in an extended sense, say a formula is in normal form iff its only operators are ∨, ∧, ¬, or a bounded quantifier, and the only instances of ¬ are immediately prefixed to atomics (which may include inequalities). Where P is a normal form, let P′ be like P except that ∨ and ∧, universal and existential quantifiers, and, for an atomic A, A and ¬A are interchanged. So, for example, [(∃x ≤ p)(x = p ∨ x ≯ p)]′ = (∀x ≤ p)(x ≠ p ∧ x > p). Then, for any Δ₀ formula with operators ¬, →, and bounded quantifiers: for atomic A, A* = A; (¬P)* = P*′; (P → Q)* = (P*′ ∨ Q*); [(∃x ≤ t)P]* = (∃x ≤ t)P* and [(∀x ≤ t)P]* = (∀x ≤ t)P* (and similarly for (∃x < t) and (∀x < t)). Then as a simple extension to the result from E8.9,

T13.55. For any Δ₀ formula P, there is a normal formula P* such that ⊢ P ↔ P*.

The demonstration is a straightforward extension of the reasoning from E8.9.
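The dual operation P′ can be sketched over the same sort of toy AST (a hypothetical representation; negation is marked only on atomics, as the normal form requires):

```python
# Swap "or" with "and", bounded universal with bounded existential, and
# A with its negation on atomics; bounds and bound variables are kept.

SWAP = {"or": "and", "and": "or", "ball": "bex", "bex": "ball"}

def dual(f):
    tag = f[0]
    if tag in ("or", "and"):
        return (SWAP[tag], dual(f[1]), dual(f[2]))
    if tag in ("ball", "bex"):            # ("bex", x, t, body): (Ex <= t)body
        return (SWAP[tag], f[1], f[2], dual(f[3]))
    if tag == "neg":                      # negated atomic -> atomic
        return f[1]
    return ("neg", f)                     # atomic -> negated atomic

# (Ex <= p)(x = p v x > p)'  =  (Ax <= p)(~(x = p) ^ ~(x > p))
f = ("bex", "x", "p", ("or", ("eq", "x", "p"), ("gt", "x", "p")))
assert dual(f) == ("ball", "x", "p",
                   ("and", ("neg", ("eq", "x", "p")),
                           ("neg", ("gt", "x", "p"))))
```

Note that the operation is an involution: applying it twice gives back the original formula.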
We show our result as applied to these normal forms. Thus,

*T13.56. For any Δ₀ formula P there is a Σ★ formula P★ such that PA ⊢ P ↔ P★.

From T13.55, for any Δ₀ formula P, there is a normal P* such that ⊢ P ↔ P*. Now by induction on the number of operators in P*, we show PA ⊢ P* ↔ P★.

Basis: If P* has no operators, then it is an atomic of the sort s = t, s ≤ t, or s < t.
(=) P* is s = t. Set P★ = ∃z((s = z)★ ∧ (t = z)★). By T13.54, PA ⊢ s = z ↔ (s = z)★ and PA ⊢ t = z ↔ (t = z)★; so PA ⊢ P* ↔ P★.
(≤) P* is s ≤ t, which is to say ∃z(z + s = t). By the case immediately above, PA ⊢ z + s = t ↔ (z + s = t)★. Set P★ = ∃z(z + s = t)★. Then PA ⊢ P* ↔ P★. And similarly for <.
Assp: For any i, 0 ≤ i < k, if a normal P* has i operator symbols, then there is a Σ★ formula P★ such that PA ⊢ P* ↔ P★.
Show: If a normal P* has k operator symbols, then there is a Σ★ formula P★ such that PA ⊢ P* ↔ P★.
If P* has k operator symbols, then it is of the form ¬A, B ∧ C, B ∨ C, (∃x ≤ t)B, (∃x < t)B, (∀x ≤ t)B, or (∀x < t)B, where A is atomic and B and C are normal with < k operator symbols.
(¬) P* is ¬A. (i) ¬A is s ≠ t. Set P★ = (s < t)★ ∨ (t < s)★; then by assumption, PA ⊢ s < t ↔ (s < t)★ and PA ⊢ t < s ↔ (t < s)★; and with T13.13o, PA ⊢ P* ↔ P★.
(ii) ¬A is s ≰ t; set P★ = (t < s)★; then by assumption, PA ⊢ t < s ↔ (t < s)★; and with T13.13q, PA ⊢ P* ↔ P★. And similarly for s ≮ t with P★ = (t ≤ s)★.
(∧) P* is B ∧ C. Set P★ = B★ ∧ C★; since B and C are normal, by assumption PA ⊢ B ↔ B★ and PA ⊢ C ↔ C★; so PA ⊢ P* ↔ P★. And similarly for ∨.
(∀) P* is (∀x ≤ t)B. Set P★ = ∃z((t = z)★ ∧ (∀x ≤ z)B★); by T13.54 PA ⊢ t = z ↔ (t = z)★ and by assumption PA ⊢ B ↔ B★; so PA ⊢ P* ↔ P★. And, by a related construction, similarly for (∀x < t)B.
(∃) P* is (∃x ≤ t)B. Set P★ = ∃x((x ≤ t)★ ∧ B★); then by assumption PA ⊢ x ≤ t ↔ (x ≤ t)★ and PA ⊢ B ↔ B★; so PA ⊢ P* ↔ P★. And similarly for (∃x < t)B.
Indct: For any normal P* there is a P★ such that PA ⊢ P* ↔ P★.

So for any Δ₀ formula P, there is a P* such that ⊢ P ↔ P* and now PA ⊢ P* ↔ P★. So PA ⊢ P ↔ P★.
Now it is immediate that for any Σ₁ formula P there is a Σ★ formula P★ such that PA ⊢ P ↔ P★.

T13.57. For any Σ₁ formula P there is a Σ★ formula P★ such that PA ⊢ P ↔ P★.

Consider any Σ₁ formula P. This formula is of the form ∃x₁ ... ∃x_n A for Δ₀ formula A. But by T13.56, there is an A★ such that PA ⊢ A ↔ A★. Let P★ be ∃x₁ ... ∃x_n A★. Then PA ⊢ P ↔ P★.
E13.36. Provide a demonstration to show T13.55.

*E13.37. Fill in the parts of T13.54 and T13.56 that are left as "similarly" to show that PA ⊢ P ↔ P★.
The result. And now we can show PA ⊢ P → Prvt[[P]] by induction on the number of operators in a Σ★ formula P. From this, by the previous theorem, we have that PA ⊢ P → Prvt[[P]] for any Σ₁ formula P. And this is the result we need for D3.

Before we launch into the main argument, a word about substitution. From their original statement, the rules ∀I and =E result in formulas of the sort P^x_t or P^{t//s}. So from, say, ∀E applied to ∀xPrvt[[P]] we get something of the sort Prvt[[P]]^x_t. But we need to be careful about what the substitution comes to. In the simplest case, Prvt[[P(x)]] is of the sort Prvt(formsub(⌜P(x)⌝, gvar(i), num(x))), where there is a free x to be replaced by t; but this does not automatically convert to Prvt[[P(t)]], insofar as that includes modifying the (number of the) formula to which substitutions are applied. But we do have a theorem, T13.53, which tells us that in certain cases PA ⊢ Prvt[[P^x_t]] ↔ Prvt[[P]]^x_t, so that the replacements can be moved across the brackets in the natural way. With this said, we turn to our theorem.
T13.58. For any Σ★ formula P, PA ⊢ P → Prvt[[P]].

By induction on the number of operators in P.

Basis: If a Σ★ P has no operator symbols, then it is an atomic of the sort ∅ = z, y = z, Sy = z, x + y = z, or x × y = z.

(Sy) Suppose P is Sy = z. Reason as follows,

1. Sy = Sy   =I
2. Prvt[[Sy = Sy]]   1 T13.51
3. | Sy = z   A (g →I)
4. | Prvt[[(Sy = z)^z_{Sy}]]   2 abv
5. | Prvt[[Sy = z]]^z_{Sy}   4 T13.53
6. | Prvt[[Sy = z]]   3,5 =E
7. Sy = z → Prvt[[Sy = z]]   3-6 →I

Observe that T13.51 applies to theorems, and so not to formulas under the assumption for →I. Thus we take care to restrict its application to formulas against the main scope line. Also, at (5) we use T13.53 to move the substitution across the brackets. With this done, the substitution applies only to the free z of Prvt[[Sy = z]]; so =E applies in a straightforward way to substitute a z back into that place. The argument is similar for ∅ = z and y = z.
(+) Suppose P is x + y = z. The proof in PA requires appeal to IN, with induction on the value of x in ∀y∀z(x + y = z → Prvt[[x + y = z]]).

 1. ∅ + y = y   T6.49
 2. Prvt[[∅ + y = y]]   1 T13.51
 3. | (x + y = z)^x_∅   A (g →I)
 4. | ∅ + y = z   3 abv
 5. | y = z   1,4 =E
 6. | Prvt[[(∅ + y = z)^z_y]]   2 abv
 7. | Prvt[[∅ + y = z]]^z_y   6 T13.53
 8. | Prvt[[∅ + y = z]]   7,5 =E
 9. | Prvt[[(x + y = z)^x_∅]]   8 abv
10. | Prvt[[x + y = z]]^x_∅   9 T13.53
11. (x + y = z)^x_∅ → Prvt[[x + y = z]]^x_∅   3-10 →I
12. (x + y = z → Prvt[[x + y = z]])^x_∅   11 abv
13. ∀y∀z(x + y = z → Prvt[[x + y = z]])^x_∅   12 ∀I

And the inductive stage,

14. x + Sy = z ↔ Sx + y = z   T6.40, T6.51
15. Prvt[[x + Sy = z → Sx + y = z]]   14 T13.51
16. | ∀y∀z(x + y = z → Prvt[[x + y = z]])   A (g →I)
17. | | (x + y = z)^x_{Sx}   A (g →I)
18. | | Sx + y = z   17 abv
19. | | x + Sy = z   14,18 ↔E
20. | | x + Sy = z → Prvt[[x + y = z]]^y_{Sy}   16 ∀E
21. | | Prvt[[x + y = z]]^y_{Sy}   20,19 →E
22. | | Prvt[[x + Sy = z]]   21 T13.53
23. | | Prvt[[x + Sy = z]] → Prvt[[Sx + y = z]]   15 T13.52
24. | | Prvt[[Sx + y = z]]   23,22 →E
25. | | Prvt[[x + y = z]]^x_{Sx}   24 T13.53
26. | (x + y = z)^x_{Sx} → Prvt[[x + y = z]]^x_{Sx}   17-25 →I
27. | (x + y = z → Prvt[[x + y = z]])^x_{Sx}   26 abv
28. | ∀y∀z(x + y = z → Prvt[[x + y = z]])^x_{Sx}   27 ∀I
29. ∀y∀z(x + y = z → Prvt[[x + y = z]]) → ∀y∀z(x + y = z → Prvt[[x + y = z]])^x_{Sx}   16-28 →I
30. ∀y∀z(x + y = z → Prvt[[x + y = z]])   13,29 IN

We are able to apply the assumption to get Prvt[[x + y = z]]^y_{Sy} and convert this into the desired result. So PA ⊢ x + y = z → Prvt[[x + y = z]].
(×) Suppose P is x × y = z. The proof in PA requires appeal to IN, on the value of x in ∀y∀z(x × y = z → Prvt[[x × y = z]]). The zero case is straightforward. Then,

 1. ∀y∀z(x × y = z → Prvt[[x × y = z]])^x_∅   zero case
 2. Sx × y = z ↔ x × y + y = z   T6.58
 3. x × y = v → (v + y = z → x × y + y = z)   simple ND
 4. Prvt[[x × y + y = z → Sx × y = z]]   2 T13.51
 5. Prvt[[x × y = v → (v + y = z → x × y + y = z)]]   3 T13.51
 6. | ∀y∀z(x × y = z → Prvt[[x × y = z]])   A (g →I)
 7. | | (x × y = z)^x_{Sx}   A (g →I)
 8. | | Sx × y = z   7 abv
 9. | | x × y + y = z   2,8 ↔E
10. | | ∃v(x × y = v)   =I, ∃I
11. | | | x × y = v   A (g 10∃E)
12. | | | v + y = z   9,11 =E
13. | | | Prvt[[v + y = z]]   12 (+) case
14. | | | Prvt[[x × y = z]]^z_v   6,11 ∀E, →E
15. | | | Prvt[[x × y = v]]   14 T13.53
16. | | | Prvt[[x × y = v]] → Prvt[[v + y = z → x × y + y = z]]   5 T13.52
17. | | | Prvt[[v + y = z → x × y + y = z]]   15,16 →E
18. | | | Prvt[[v + y = z]] → Prvt[[x × y + y = z]]   17 T13.52
19. | | | Prvt[[x × y + y = z]]   18,13 →E
20. | | | Prvt[[x × y + y = z]] → Prvt[[Sx × y = z]]   4 T13.52
21. | | | Prvt[[Sx × y = z]]   20,19 →E
22. | | | Prvt[[x × y = z]]^x_{Sx}   21 T13.53
23. | | Prvt[[x × y = z]]^x_{Sx}   10,11-22 ∃E
24. | (x × y = z)^x_{Sx} → Prvt[[x × y = z]]^x_{Sx}   7-23 →I
25. | (x × y = z → Prvt[[x × y = z]])^x_{Sx}   24 abv
26. | ∀y∀z(x × y = z → Prvt[[x × y = z]])^x_{Sx}   25 ∀I
27. ∀y∀z(x × y = z → Prvt[[x × y = z]]) → ∀y∀z(x × y = z → Prvt[[x × y = z]])^x_{Sx}   6-26 →I
28. ∀y∀z(x × y = z → Prvt[[x × y = z]])   1,27 IN

The previous result does not directly apply to x × y + y = z. However, having identified x × y with the variable v, we get Prvt[[v + y = z]], and with the inductive assumption, Prvt[[x × y = v]]. These then unpack into Prvt[[Sx × y = z]]. So PA ⊢ x × y = z → Prvt[[x × y = z]].

Assp: For any i, 0 ≤ i < k, if a Σ★ P has i operator symbols, then PA ⊢ P → Prvt[[P]].
Show: If a Σ★ P has k operator symbols, then PA ⊢ P → Prvt[[P]].
If Σ★ P has k operator symbols, then it is of the form A ∨ B, A ∧ B, (∀x ≤ y)A (y not in A), or ∃xA for Σ★ A and B with < k operator symbols.


(∧) P is A ∧ B. Reason as follows.

 1. A → Prvt[[A]]   by assp
 2. B → Prvt[[B]]   by assp
 3. A → (B → (A ∧ B))   T9.4
 4. Prvt[[A → (B → (A ∧ B))]]   3 T13.51
 5. | A ∧ B   A (g →I)
 6. | Prvt[[A]]   1,5
 7. | Prvt[[B]]   2,5
 8. | Prvt[[A]] → Prvt[[B → (A ∧ B)]]   4 T13.52
 9. | Prvt[[B → (A ∧ B)]]   6,8 →E
10. | Prvt[[B]] → Prvt[[A ∧ B]]   9 T13.52
11. | Prvt[[A ∧ B]]   7,10 →E
12. (A ∧ B) → Prvt[[A ∧ B]]   5-11 →I

And similarly for ∨.


(∃) P is ∃xA. Reason as follows.

 1. A → Prvt[[A]]   by assp
 2. A → ∃xA   T3.29
 3. Prvt[[A → ∃xA]]   2 T13.51
 4. | ∃xA   A (g →I)
 5. | | A   A (g 4∃E)
 6. | | Prvt[[A]]   1,5 →E
 7. | | Prvt[[A]] → Prvt[[∃xA]]   3 T13.52
 8. | | Prvt[[∃xA]]   7,6 →E
 9. | Prvt[[∃xA]]   4,5-8 ∃E
10. ∃xA → Prvt[[∃xA]]   4-9 →I

(∀) P is (∀x ≤ y)A. The argument in PA requires appeal to IN, for induction on the value of y. For the zero case,

 1. A^x_∅ → Prvt[[A^x_∅]]   by assp
 2. (∀x ≤ ∅)A ↔ A^x_∅   thrm (with T8.21)
 3. Prvt[[A^x_∅ → (∀x ≤ ∅)A]]   2 T13.51
 4. | ((∀x ≤ y)A)^y_∅   A (g →I)
 5. | (∀x ≤ ∅)A   4 abv
 6. | A^x_∅   2,5 ↔E
 7. | Prvt[[A^x_∅]]   1,6 →E
 8. | Prvt[[A^x_∅]] → Prvt[[(∀x ≤ ∅)A]]   3 T13.52
 9. | Prvt[[(∀x ≤ ∅)A]]   8,7 →E
10. | Prvt[[((∀x ≤ y)A)^y_∅]]   9 abv
11. | Prvt[[(∀x ≤ y)A]]^y_∅   10 T13.53
12. ((∀x ≤ y)A)^y_∅ → Prvt[[(∀x ≤ y)A]]^y_∅   4-11 →I
13. ((∀x ≤ y)A → Prvt[[(∀x ≤ y)A]])^y_∅   12 abv

For (6) and (10) it is important that the y in a bound quantifier of the Σ★ formula does not appear in A. Now the inductive stage.

14. A^x_{Sy} → Prvt[[A^x_{Sy}]]   by assp
15. (∀x ≤ Sy)A ↔ (∀x ≤ y)A ∧ A^x_{Sy}   with T13.13n
16. Prvt[[((∀x ≤ y)A ∧ A^x_{Sy}) → (∀x ≤ Sy)A]]   15 T13.51
17. | (∀x ≤ y)A → Prvt[[(∀x ≤ y)A]]   A (g →I)
18. | ((∀x ≤ y)A ∧ A^x_{Sy}) → Prvt[[(∀x ≤ y)A ∧ A^x_{Sy}]]   14,17 as for ∧
19. | | (∀x ≤ Sy)A   A (g →I)
20. | | (∀x ≤ y)A ∧ A^x_{Sy}   15,19 ↔E
21. | | Prvt[[(∀x ≤ y)A ∧ A^x_{Sy}]]   18,20 →E
22. | | Prvt[[(∀x ≤ y)A ∧ A^x_{Sy}]] → Prvt[[(∀x ≤ Sy)A]]   16 T13.52
23. | | Prvt[[(∀x ≤ Sy)A]]   22,21 →E
24. | | Prvt[[(∀x ≤ y)A]]^y_{Sy}   23, T13.53
25. | (∀x ≤ Sy)A → Prvt[[(∀x ≤ y)A]]^y_{Sy}   19-24 →I
26. | ((∀x ≤ y)A → Prvt[[(∀x ≤ y)A]])^y_{Sy}   25 abv
27. ((∀x ≤ y)A → Prvt[[(∀x ≤ y)A]]) → ((∀x ≤ y)A → Prvt[[(∀x ≤ y)A]])^y_{Sy}   17-26 →I
28. (∀x ≤ y)A → Prvt[[(∀x ≤ y)A]]   13,27 IN

So PA ⊢ (∀x ≤ y)A → Prvt[[(∀x ≤ y)A]].

Indct: For any Σ★ formula P, PA ⊢ P → Prvt[[P]].
Now it is a simple matter to pull together our results into the third derivability
condition.
T13.59. For any formula P , PA ` P ! P

CHAPTER 13. GDELS THEOREMS

685

Consider any formula P and the Σ1 sentence □P. By T13.57, there is a (□P)⋆ such that PA ⊢ □P ↔ (□P)⋆. By T13.58, PA ⊢ (□P)⋆ → Prvt⌜(□P)⋆⌝. Reason as follows.

1. (□P)⋆ → Prvt⌜(□P)⋆⌝                        T13.58
2. □P ↔ (□P)⋆                                 T13.57
3. Prvt⌜(□P)⋆ → □P⌝                           2 T13.51
4. Prvt⌜(□P)⋆⌝ → Prvt⌜□P⌝                     3 T13.52
5. □P → Prvt⌜□P⌝                              2,1,4 HS

So PA ⊢ □P → Prvt⌜□P⌝; and since □P is a sentence, this is to say PA ⊢ □P → Prvt(⌜□P⌝), which is to say PA ⊢ □P → □□P.
So, at long last, we have a demonstration of D3 and so, given demonstrations of the other conditions, of Gödel's second incompleteness theorem.

E13.38. Complete the demonstration of T13.58 by completing the remaining cases.

13.5 Reflections on the theorem

We conclude this chapter with a couple of final reflections on, and consequences of, our results.

13.5.1 Consistency sentences

As is typically done, we have let Cont be ¬Prvt(⌜∅ = S∅⌝). If T is inconsistent, then T proves anything, so T ⊢ 0 = 1. And, supposing T extends Q, T ⊢ 0 ≠ 1; so if T ⊢ 0 = 1, then T is inconsistent. But other sentences would do as well. So, where T is any theorem of T, we might let Cont′ be ¬Prvt(⌜¬T⌝). In particular, we might simply consider the case where ¬T is (equivalent to) ⊥ and set Cont′ = ¬Prvt(⌜⊥⌝). (Where ⊥ is Z ∧ ¬Z, it is equivalent to the negation of the theorem ¬(Z ∧ ¬Z).) Then it is easy to see that PA ⊢ Cont ↔ Cont′.

PA ⊢ ∅ = S∅ ↔ ⊥; so with D1, PA ⊢ Prvt(⌜∅ = S∅ ↔ ⊥⌝); so with D2, PA ⊢ Prvt(⌜∅ = S∅⌝) ↔ Prvt(⌜⊥⌝); and contraposing, PA ⊢ Cont ↔ Cont′.

Again, one might let Cont″ = ¬∃x(Prvt(x) ∧ P̅rvt(x)), where P̅rvt(x) just in case there is a proof of the negation of the formula with Gödel number x. Then T is consistent just in case there is no proof of a formula and its negation. Again, PA ⊢ Cont ↔ Cont″. This time the result requires a bit more work.
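Before the derivations, it may help to collect the three candidate sentences in one display. This is just a summary of the definitions above, set in LaTeX notation, with the overlined predicate marking provability-of-the-negation:

```latex
\begin{align*}
Cont   &=_{\text{def}} \lnot Prvt(\ulcorner \emptyset = S\emptyset \urcorner)\\
Cont'  &=_{\text{def}} \lnot Prvt(\ulcorner \bot \urcorner)\\
Cont'' &=_{\text{def}} \lnot \exists x\,\big(Prvt(x) \land \overline{Prvt}(x)\big)
\end{align*}
```

The claims to be established are that PA ⊢ Cont ↔ Cont′ and PA ⊢ Cont ↔ Cont″.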


(i) Since a contradiction implies anything, PA ⊢ ∅ = S∅ → A and PA ⊢ ∅ = S∅ → ¬A; reason as follows.

1. ∅ = S∅ → A                                     thrm
2. ∅ = S∅ → ¬A                                    thrm
3. Prvt(⌜∅ = S∅ → A⌝)                             1 D1
4. Prvt(⌜∅ = S∅ → ¬A⌝)                            2 D1
5. | Prvt(⌜∅ = S∅⌝)                               A (g →I)
6. | Prvt(⌜∅ = S∅⌝) → Prvt(⌜A⌝)                   3 D2
7. | Prvt(⌜∅ = S∅⌝) → Prvt(⌜¬A⌝)                  4 D2
8. | Prvt(⌜A⌝) ∧ P̅rvt(⌜A⌝)                        5,6,7
9. | ∃x(Prvt(x) ∧ P̅rvt(x))                        8 ∃I
10. Prvt(⌜∅ = S∅⌝) → ∃x(Prvt(x) ∧ P̅rvt(x))        5-9 →I

So PA ⊢ Prvt(⌜∅ = S∅⌝) → ∃x(Prvt(x) ∧ P̅rvt(x)).


(ii) For the other direction,

1. A → (¬A → (A ∧ ¬A))                                    T9.4
2. (A ∧ ¬A) → (∅ = S∅)                                    thrm
3. Prvt(⌜A → (¬A → (A ∧ ¬A))⌝)                            1 D1
4. Prvt(⌜(A ∧ ¬A) → (∅ = S∅)⌝)                            2 D1
5. | Prvt(⌜A⌝) ∧ P̅rvt(⌜A⌝)                                A (g →I)
6. | Prvt(⌜A⌝) → Prvt(⌜¬A → (A ∧ ¬A)⌝)                    3 D2
7. | Prvt(⌜¬A → (A ∧ ¬A)⌝)                                6,5
8. | Prvt(⌜¬A⌝) → Prvt(⌜A ∧ ¬A⌝)                          7 D2
9. | Prvt(⌜A ∧ ¬A⌝)                                       8,5
10. | Prvt(⌜A ∧ ¬A⌝) → Prvt(⌜∅ = S∅⌝)                     4 D2
11. | Prvt(⌜∅ = S∅⌝)                                      10,9 →E
12. (Prvt(⌜A⌝) ∧ P̅rvt(⌜A⌝)) → Prvt(⌜∅ = S∅⌝)              5-11 →I

And this is to say, PA ⊢ (Prvt(⌜A⌝) ∧ P̅rvt(⌜A⌝)) → Prvt(⌜∅ = S∅⌝); and, since A does not appear in an assumption to the theorem, with T10.12 we can switch ⌜A⌝ for a variable that does not appear in the derivation, so that PA ⊢ (Prvt(x) ∧ P̅rvt(x)) → Prvt(⌜∅ = S∅⌝); and by T3.30, PA ⊢ ∃x(Prvt(x) ∧ P̅rvt(x)) → Prvt(⌜∅ = S∅⌝).
Putting (i) and (ii) together, PA ⊢ Prvt(⌜∅ = S∅⌝) ↔ ∃x(Prvt(x) ∧ P̅rvt(x)); and contraposing, PA ⊢ Cont ↔ Cont″.

So, to this extent, it does not matter which version of the consistency statement we select. Underlying the point that these different statements are equivalent is that


anything follows from a contradiction, so that the one follows from the others.¹⁰

Having proved PA ⊬ Cont, we have PA ⊬ Cont′ and PA ⊬ Cont″. These are particular sentences which, like G, are unprovable. And, now that we have the derivability conditions, with T13.11, neither are their negations provable. They have special interest because each says that PA is consistent. Still, it is worth asking whether there is some different sentence to express the consistency of PA such that it would be provable. Consider, for example, a trick related to the Rosser sentence,
Prft^c(x, y) =def Prft(x, y) ∧ (∀v ≤ x)¬Prft(v, ⌜∅ = S∅⌝)

Then so long as PA is consistent, Prft^c(x, y) captures PRFT(x, y).

(i) Suppose ⟨m, n⟩ ∈ PRFT. (a) By capture, PA ⊢ Prft(m, n). And (b), since PA is consistent, there is no proof of a contradiction in PA, and again by capture, PA ⊢ ¬Prft(0, ⌜∅ = S∅⌝), PA ⊢ ¬Prft(1, ⌜∅ = S∅⌝), ..., and PA ⊢ ¬Prft(m, ⌜∅ = S∅⌝); so with T8.21, PA ⊢ (∀v ≤ m)¬Prft(v, ⌜∅ = S∅⌝); so PA ⊢ Prft^c(m, n).

(ii) Suppose ⟨m, n⟩ ∉ PRFT; then by capture, PA ⊢ ¬Prft(m, n). So PA ⊢ ¬(Prft(m, n) ∧ (∀v ≤ m)¬Prft(v, ⌜∅ = S∅⌝)), which is to say PA ⊢ ¬Prft^c(m, n).

And, with T12.6, Prft^c(x, y) expresses PRFT(x, y) as well. Given this, set Prvt^c(y) =def ∃xPrft^c(x, y), and Cont^c =def ¬Prvt^c(⌜∅ = S∅⌝). The idea, then, is that Cont^c just in case PA is consistent.

And it is easy to see that Cont^c is provable.
1. | ∃x(Prft(x, ⌜∅ = S∅⌝) ∧ (∀v ≤ x)¬Prft(v, ⌜∅ = S∅⌝))       A (c ¬I)
2. | | Prft(j, ⌜∅ = S∅⌝) ∧ (∀v ≤ j)¬Prft(v, ⌜∅ = S∅⌝)         A (c 1∃E)
3. | | Prft(j, ⌜∅ = S∅⌝)                                      2 ∧E
4. | | (∀v ≤ j)¬Prft(v, ⌜∅ = S∅⌝)                             2 ∧E
5. | | j ≤ j                                                  with T13.13l
6. | | ¬Prft(j, ⌜∅ = S∅⌝)                                     4,5 (∀E)
7. | | ⊥                                                      3,6 ⊥I
8. | ⊥                                                        1,2-7 ∃E
9. ¬∃x(Prft(x, ⌜∅ = S∅⌝) ∧ (∀v ≤ x)¬Prft(v, ⌜∅ = S∅⌝))        1-8 ¬I

¹⁰This equivalence breaks down in a non-classical logic which blocks ex falso quodlibet, the principle that from a contradiction anything follows. So, for example, in relevant logic, it might be that there is some A such that T ⊢ A ∧ ¬A but T ⊬ ∅ = S∅. See Priest, Non-Classical Logics, for an introduction to these matters.


So PA ⊢ ¬∃x(Prft(x, ⌜∅ = S∅⌝) ∧ (∀v ≤ x)¬Prft(v, ⌜∅ = S∅⌝)), which is to say PA ⊢ Cont^c. This is because Prft^c builds in from the start that nothing numbers a proof of ∅ = S∅.

Intuitively, so long as PA is consistent, Prft^c works just fine. But if PA is not consistent, then it no longer tracks with proof. Similarly, if PA is consistent, Cont^c plausibly says PA is consistent. But if PA is inconsistent (so that everything is provable), Cont^c remains provable, but it is no longer plausibly construed as saying ∅ = S∅ is not provable. So its provability is, in this sense, uninteresting.
Insofar as Cont^c is provable, it must be that Prvt^c fails one or more of the derivability conditions. To see how this might be, consider D2, and suppose PA is inconsistent and proofs are ordered according to their Gödel numbers as follows,

  ... A → B ... ∅ = S∅ ... B ...

Then PA ⊢ Prvt(⌜B⌝) but, insofar as the proof of B is numbered greater than the proof of ∅ = S∅, PA ⊢ ¬Prvt^c(⌜B⌝). In this case, D2 fails, so that our main argument to show PA ⊬ Cont does not apply to Cont^c.

13.5.2 Löb's Theorem

If T is a recursively axiomatized theory extending Q, by the diagonal lemma there is a sentence H, of which G is a sample, such that T ⊢ H ↔ ¬Prvt(⌜H⌝), that is, T ⊢ H ↔ ¬□H. We have seen that such a formula H is not provable. But, of course, by the diagonal lemma, there is another sentence H such that T ⊢ H ↔ □H. In a brief note, "A Problem Concerning Provability," L. Henkin asks whether this H is provable. Supposing the first is analogous to the liar, "this sentence is not true," the latter is like the truth-teller, "this sentence is true." An answer to Henkin's question follows immediately from Löb's theorem.

T13.60. Suppose T is a recursively axiomatized theory for which the derivability conditions D1-D3 hold and T ⊢ □P → P; then T ⊢ P. (Löb's Theorem.)

Suppose T is a recursively axiomatized theory for which the derivability conditions hold and T ⊢ □P → P. Then the diagonal lemma obtains as well. Consider Prvt(y) → P; this is an expression of the sort F(y) to which the diagonal lemma applies; so by the diagonal lemma there is some H such that,


T ⊢ H ↔ (Prvt(⌜H⌝) → P), that is, T ⊢ H ↔ (□H → P). Now reason as follows.
1. □P → P                                                  prem
2. H ↔ (□H → P)                                            diag lemma
3. (H → (□H → P)) ∧ ((□H → P) → H)                         2 abv
4. H → (□H → P)                                            3 with T3.20
5. □(H → (□H → P))                                         4 D1
6. □(H → (□H → P)) → (□H → □(□H → P))                      D2
7. □H → □(□H → P)                                          6,5 MP
8. □(□H → P) → (□□H → □P)                                  D2
9. □H → (□□H → □P)                                         7,8 T3.2
10. (□H → (□□H → □P)) → ((□H → □□H) → (□H → □P))           A2
11. (□H → □□H) → (□H → □P)                                 10,9 MP
12. □H → □□H                                               D3
13. □H → □P                                                11,12 MP
14. □H → P                                                 13,1 T3.2
15. (□H → P) → H                                           3 with T3.19
16. H                                                      15,14 MP
17. □H                                                     16 D1
18. P                                                      14,17 MP

So T ⊢ P. Now return to our original question. Suppose T ⊢ H ↔ □H; then T ⊢ □H → H; so by Löb's theorem, T ⊢ H. So if T proves H ↔ □H, then T proves H. Observe that in the presence of incompleteness, it must be the case that there is some sentence L such that T ⊬ L, so that T ⊬ □L → L. But for a sound theory, any sentence □L → L must be true; so here is another sentence true, but not provable.


Final theorems of chapter 13

T13.22 Where F(x⃗, y, v) is the formula for recursion, PA ⊢ ∀m∀n((F(x⃗, y, m) ∧ F(x⃗, y, n)) → m = n).

T13.23 Results for a ∸ b. T13.24 Results for a|b. T13.25 Results for Pr(a) and Rp(a). T13.26 Results for lcm(a).

T13.27 PA ⊢ (∀i < k)(m(i) > ∅ ∧ m(i) > h(i)) ∧ ∀i∀j(i < j ∧ j < k → Rp(Sm(i), Sm(j))) → ∃p(∀i < k)rm(p, m(i)) = h(i)  (CRT).

T13.28 Results for maxp and maxs.

T13.29 PA ⊢ ∃p∃q(∀i < k)β(p, q, i) = h(i).

T13.30 PA ⊢ ∃p∃q((∀i < k)β(p, q, i) = β(r, s, i) ∧ β(p, q, k) = n).

T13.31 PA ⊢ ∃v∃p∃q(β(p, q, ∅) = g(x⃗) ∧ (∀i < y)h(x⃗, i, β(p, q, i)) = β(p, q, Si) ∧ β(p, q, y) = v).

T13.32 For any friendly recursive relation R(x⃗) with characteristic function chR(x⃗), PA ⊢ R(x⃗) ↔ chR(x⃗) = ∅. And for a recursive operator OP(P1(x⃗) ... Pn(x⃗)) with characteristic function f(chP1(x⃗) ... chPn(x⃗)), PA ⊢ Op(P1(x⃗) ... Pn(x⃗)) ↔ f(chP1(x⃗) ... chPn(x⃗)) = ∅. Corollary: where R(x⃗) is originally captured by R(x⃗, ∅), PA ⊢ R(x⃗) ↔ R(x⃗, ∅).

T13.33 Suppose f(x⃗, y) is defined by g(x⃗) and h(x⃗, y, u) so that PA ⊢ v = f(x⃗, y) ↔ F(x⃗, y, v); then (i) f(x⃗, ∅) = g(x⃗) and (ii) f(x⃗, Sy) = h(x⃗, y, f(x⃗, y)).

T13.34 Equivalences for suc, zero, idnt^j_k, plus, and times. T13.35 Results for pred, sg, and csg. T13.36 Equivalences for pred, subc, absval, sg, csg, Eq, Leq, Less, Neg, and Dsj. T13.37 PA proves a characteristic function takes the value ∅ or 1. T13.38 Equivalences for (∃y ≤ z), (∃y < z), (∀y ≤ z), (∀y < z), (μy ≤ z), Fctr, and Prime.

T13.39 First applications to recursive functions.

T13.40 Results for m^a. T13.41 Results for fact. T13.42 Results for pi. T13.43 Results for exp. T13.44 Results for len. T13.45 Results for m ⋆ n.

T13.48 PA ⊢ □(P → Q) → (□P → □Q).

T13.51 If PA ⊢ P, then PA ⊢ Prvt⌜P⌝  (analog to D1).

T13.52 PA ⊢ Prvt⌜P → Q⌝ → (Prvt⌜P⌝ → Prvt⌜Q⌝)  (analog to D2).

T13.53 If t is one of ∅, y, or Sy, then PA ⊢ Prvt⌜P^x_t⌝ ↔ (Prvt⌜P⌝)^x_t.

T13.54 For any P of the form t = x, there is a P⋆ such that PA ⊢ P ↔ P⋆.

T13.55 For any Δ0 formula P, there is a normal formula P′ such that ⊢ P ↔ P′.

T13.56 For any Δ0 formula P, there is a ⋆ formula P⋆ such that PA ⊢ P ↔ P⋆.

T13.57 For any Σ1 formula P, there is a ⋆ formula P⋆ such that PA ⊢ P ↔ P⋆.

T13.58 For any ⋆ formula P, PA ⊢ P → Prvt⌜P⌝.

T13.59 For any formula P, PA ⊢ □P → □□P.

T13.60 Suppose T is a recursively axiomatized theory for which the derivability conditions D1-D3 hold and T ⊢ □P → P; then T ⊢ P. (Löb's Theorem.)

Chapter 14

Logic and Computability

In this chapter, we begin with the notion of a Turing machine, and a Turing computable function. It turns out that the Turing computable functions are the same as the recursive functions. Once we have seen this, it is a short step from a problem about computability, the halting problem, to another demonstration of essential results. Further, according to Church's thesis, the Turing computable functions, and so the recursive functions, are all the algorithmically computable functions. This converts results like T12.22, according to which no recursive function is true just of (numbers for) theorems of predicate logic, into ones according to which no algorithmically computable function is true just of theorems of predicate logic; and this result is much more than a curiosity about an obscure class of functions.

14.1 Turing Computable Functions

We begin by saying what a Turing machine, and the Turing computable functions, are. Then we turn to demonstrations that Turing computable functions are recursive, and recursive functions are Turing computable.

14.1.1 Turing Machines

A Turing machine is a simple device which, despite its simplicity, is capable of computing any recursive function, and capable of computing whatever is computable by the more sophisticated computers with which we are familiar.¹
¹So called after Alan Turing, who originally proposed them hypothetically, prior to the existence of modern computing devices, for purposes much like our own. Turing went on to develop electromechanical machines for code breaking during World War II, and was involved in development of early stored-program computers after the war.

CHAPTER 14. LOGIC AND COMPUTABILITY

692

We may think of a Turing machine as consisting of a tape and a finite set of instruction quadruples.²

(A)
  1 0 1
  ↑

The tape is a sequence of cells, infinite in two directions, where the cells may be empty or filled with 0 or 1. There is a machine head, indicated by the arrow, which reads or writes the contents of a given cell, and moves left or right, one cell at a time. The head is capable of five actions: (L) move left one cell; (R) move right one cell; (B) write a blank; (0) write a zero; (1) write a one. When the head is over a cell it is capable of reading or writing the contents of that cell.

Instruction quadruples are of the sort ⟨q1, C, A, q2⟩, and constitute a function in the sense that no two quadruples have ⟨q1, C⟩ the same but ⟨A, q2⟩ different. For an instruction quadruple: (q1) labels the quadruple; (C) is a possible state or content of the scanned cell; (A) is one of the five actions; (q2) is a label for some (other) quadruple. In effect, an instruction quadruple says: if the current cell has content C, perform action A and go to instruction q2. The machine begins at an instruction with label q1 = 1, and stops when q2 = 0.

For a simple example, consider the following quadruples, along with the tape (A) from above.
(B)
⟨1, 0, R, 1⟩     if 0, move right
⟨1, 1, 0, 1⟩     if 1, write 0
⟨1, B, L, 2⟩     end of word: back up and go to instruction 2
⟨2, 0, L, 2⟩     while value is 0, move left
⟨2, B, R, 0⟩     end of word: return right and stop

The machine begins at label 1. In this case, the head is over a cell with content 1; so from the second instruction the machine writes 0 in that cell and returns to instruction label 1. Because the cell now contains 0, the machine reads 0; so, from instruction 1, the head moves right one space and returns to instruction 1 again. Now the machine reads 0; so it moves right again and returns to instruction 1. Because it reads 1, again the machine writes 0 and goes to instruction 1, where it moves right and goes to 1. Now the head is over a blank; so it moves left one cell, and goes to 2. At instruction 2, the head moves left so long as the tape reads 0. When the head reaches a blank, it moves right one space, back over the word, and stops. So the result is,

²Specifications of Turing machines differ somewhat. So, for example, some versions allow instruction quintuples, and allow different symbols on the tape. Nothing about what is computable changes on the different accounts.

(C)
  0 0 0
  ↑

In the standard case, we begin with a blank tape except for one or more binary words, where the words are separated by single blank cells, and the machine head is over the left-most cell of the left-most block. The above example is a simple case of this sort, but also,

(D)
  1 0 1 1   1 1 0 1 0
  ↑

And in the usual case the program halts with the head over the leftmost cell of a single word on the tape. A function f(x⃗) is Turing computable when, beginning with x⃗ on the tape in binary digits, the result is f(x⃗). Thus our little program computes zero(x): beginning with any x, it returns the value 0.
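The behavior just traced can be checked mechanically. Here is a minimal simulator sketch in Python; the dictionary encoding of quadruples and the string tape are my own conventions for illustration, not the text's official notation. Program (B) is run on the tape from (A).

```python
# A minimal Turing machine simulator for quadruple programs of the sort
# described in the text. Cells hold 'B' (blank), '0', or '1'; a machine
# is a dict from (label, content) to (action, next_label). This encoding
# is an illustration only, not the text's official notation.

def run(machine, tape, head=0, label=1):
    cells = dict(enumerate(tape))          # sparse tape; unwritten cells are blank
    while label != 0:                      # label 0 forces a halt
        c = cells.get(head, 'B')
        action, label = machine[(label, c)]
        if action == 'L':
            head -= 1
        elif action == 'R':
            head += 1
        else:                              # write 'B', '0', or '1'
            cells[head] = action
    lo, hi = min(cells), max(cells)
    word = ''.join(cells.get(i, 'B') for i in range(lo, hi + 1)).strip('B')
    return word, head

# Program (B) from the text: overwrite every 1 with 0, then return to start.
B = {(1, '0'): ('R', 1), (1, '1'): ('0', 1), (1, 'B'): ('L', 2),
     (2, '0'): ('L', 2), (2, 'B'): ('R', 0)}

word, head = run(B, '101')
print(word, head)   # -> 000 0
```

On input 101 the simulated program halts with 000 under the head at the initial cell, matching (C).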
It will be convenient to require that programs are dextral (right-handed), in the sense that (a) in executing a program we never write in a cell to the left of the initial cell, or scan a cell more than one to the left of the initial cell; and (b) when the program halts, the head is over the initial cell, and the final result begins in the same cell as the initial scanned cell. This does not affect what can be computed, but aids in predicting results when Turing programs are combined. Our little program is dextral.
A program to compute suc(x) is not much more difficult. Let us begin by thinking about what we want the program to do. With a three-digit input word, the desired outputs are,

  000 → 001        100 → 101
  001 → 010        101 → 110
  010 → 011        110 → 111
  011 → 100        111 → 1000

Moving from the right of the input word, we want to turn any one into a zero, until we can turn a zero (or a blank) into a one. Here is a way to do that.

(E)
⟨1, 0, R, 1⟩
⟨1, 1, R, 1⟩     move to end of word
⟨1, B, L, 5⟩

⟨5, 0, 1, 7⟩
⟨5, 1, 0, 6⟩     flip 1 to 0, then 0 or blank to 1
⟨5, B, 1, 7⟩

⟨6, 0, L, 5⟩

⟨7, 0, L, 7⟩
⟨7, 1, L, 7⟩     return to start
⟨7, B, R, 0⟩

Do not worry about the gap in instruction labels; nothing so far requires that instruction labels be sequential. This program moves the head to the right end of the word; from the right, it flips ones to zeros until it finds a zero or blank; once it has acted on a zero or blank, it returns to the start.

So far, so good. But there is a problem with this program: In the case when the input is, say,

(F)
  1 1 1
  ↑

the output is,

  1 0 0 0
  ↑

with the first symbol one to the left of the initial position. We turn the first blank to the left of the initial position into a one. So the program is not dextral. The problem is solved by shifting the word in the case when it is all ones.

(G)
⟨1, 0, R, 4⟩
⟨1, 1, R, 1⟩     if solid ones, shift right
⟨1, B, 1, 2⟩

⟨2, 1, L, 2⟩
⟨2, B, R, 3⟩

⟨3, 1, B, 3⟩
⟨3, B, R, 4⟩

⟨4, 0, R, 4⟩
⟨4, 1, R, 4⟩
⟨4, B, L, 5⟩

⟨5, 0, 1, 7⟩
⟨5, 1, 0, 6⟩     flip 1 to 0, then 0 to 1
⟨5, B, 1, 7⟩

⟨6, 0, L, 5⟩

⟨7, 0, L, 7⟩
⟨7, 1, L, 7⟩     return to start
⟨7, B, R, 0⟩


States 5, 6, and 7 are as before. This time we test to see if the word is all ones. If not, the program jumps to 4, where it goes to the end, and then to the routine from before. If it gets to the end without encountering a zero, it writes a one, returns to the beginning, and deletes the initial symbol, so that the entire word is shifted one to the right. Then it goes to instruction 4, so that it goes to the right and works entirely as before. This time the output from (F) is,

  1 0 0 0
  ↑

as it should be. It is worthwhile to follow the actual operation of this and the previous program on one of the many Turing simulators available on the web (see E14.1).
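The flip-from-the-right method that (E) and (G) implement on the tape can be stated directly on binary strings. The following Python sketch (mine, for checking the table of desired outputs above) mirrors the method; note how the all-ones case grows the word by a digit, just as (G) must shift the word.

```python
def binary_suc(word):
    """Increment a binary numeral the way the Turing program does:
    moving from the right, flip 1s to 0s until a 0 (or the leading
    blank) can be flipped to 1."""
    digits = list(word)
    i = len(digits) - 1
    while i >= 0 and digits[i] == '1':
        digits[i] = '0'                 # flip 1 to 0
        i -= 1
    if i >= 0:
        digits[i] = '1'                 # flip the first 0 found
    else:
        digits.insert(0, '1')           # all ones: the word grows by a digit
    return ''.join(digits)

for w in ['000', '011', '111']:
    print(w, '->', binary_suc(w))       # 000 -> 001, 011 -> 100, 111 -> 1000
```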
More complex is a copy program to take an input x and return x:x. This program has four basic elements.

(1) A sort of control section, which says what to do depending on what sort of character we have in the original word.

(2) A program to copy 0; this will write a blank in the original word to mark the spot; move right to the second blank (across the blank between words, and to the blank to be filled); write a 0; move left to the original position; and replace the 0.

(3) Similarly, a program to copy 1; this will write a blank in the original word to mark the spot; move right to the second blank; write a 1; move left to the original position; and replace the 1.

(4) And a program to move the head back to the original position when we are done.

Here is a program to do the job.

(H)
(1) Control
⟨1, 0, B, 10⟩
⟨1, 1, B, 20⟩
⟨1, B, L, 30⟩

(2) Copy 0
write blank:
⟨10, B, R, 11⟩
right 2 blanks:
⟨11, 0, R, 11⟩  ⟨11, 1, R, 11⟩  ⟨11, B, R, 12⟩
⟨12, 0, R, 12⟩  ⟨12, 1, R, 12⟩  ⟨12, B, 0, 13⟩
left 2 blanks:
⟨13, 0, L, 13⟩  ⟨13, 1, L, 13⟩  ⟨13, B, L, 14⟩
⟨14, 0, L, 14⟩  ⟨14, 1, L, 14⟩  ⟨14, B, 0, 15⟩
next char:
⟨15, 0, R, 1⟩

(3) Copy 1
write blank:
⟨20, B, R, 21⟩
right 2 blanks:
⟨21, 0, R, 21⟩  ⟨21, 1, R, 21⟩  ⟨21, B, R, 22⟩
⟨22, 0, R, 22⟩  ⟨22, 1, R, 22⟩  ⟨22, B, 1, 23⟩
left 2 blanks:
⟨23, 0, L, 23⟩  ⟨23, 1, L, 23⟩  ⟨23, B, L, 24⟩
⟨24, 0, L, 24⟩  ⟨24, 1, L, 24⟩  ⟨24, B, 1, 25⟩
next char:
⟨25, 1, R, 1⟩

(4) Finish
start of word:
⟨30, 0, L, 30⟩  ⟨30, 1, L, 30⟩  ⟨30, B, R, 0⟩

You should be able to follow each stage.


E14.1. Study the copy program from the text, along with the samples zero and suc from the course website. Then, starting with the file blank.rb, create Turing programs to compute the following. It will be best to submit your programs electronically.

a. copy(n). Takes input n and returns n:n. This is a simple implementation of the program from the text.

b. Create a Turing program to compute pred(n). Hint: Give your function two separate exit paths: one when the input is a string of 0s, returning with the input. In any other case, the output for input n is the predecessor of n. The method simply flips that for successor: From the right, change 0 to 1 until some 1 can be flipped to 0. There is no need to worry about the addition of a possible leading 0 to your result.

c. Create a Turing program to compute idnt^3_3(x, y, z). For x:y:z observe that z might be longer than x and y put together; but, of course, it is not longer than x, y, and z put together. Here is one way to proceed: Move to the start of the third word; use copy to generate x:y:z:z; then plug spaces so that you have one long first word, x∘y∘z:z; you can mark the first position of the long word with a blank (and similarly, each time you write a character, mark the next position to the right with a blank, so that you are always writing into the second blank up from the one where the character is read); then it is a simple matter of running a basic copy routine from right-to-left, and erasing junk when you are done.

14.1.2 Turing Computable Functions are Recursive

We turn now to showing that the (dextral) Turing computable functions are the same as the recursive functions. Our first aim is to show that every Turing computable function is recursive. But we begin with the simpler result that there is a recursive enumeration of Turing machines. We shall need this as we go forward, and it will let us compile some important preliminary results along the way.

The method is by now familiar. It will require some work, but we can do it in the same way as we approached recursive functions before. Begin by assigning to each symbol a Gödel number.

a. g(B) = 3        f. g(L) = 9
b. g(0) = 5        g. g(R) = 11
c. g(1) = 7        h. g(qi) = 13 + 2i

For a quadruple, say ⟨q1, B, L, q1⟩, set g = 2^15 × 3^3 × 5^9 × 7^15. And for a sequence of quadruples with numbers g0, g1, ..., gn, the super Gödel number gs = 2^g0 × 3^g1 × ..., continuing through the nth prime raised to gn. For convenience, we frequently refer to the individual symbol codes with angle quotes around the symbol, so ⟨B⟩ = 3, where ⌜B⌝, the number of the expression, is 2^3.
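These codes are easy to compute with. The sketch below (my own helper functions, not from the text) checks the worked example ⟨q1, B, L, q1⟩ and illustrates the sequence coding on toy exponents; real super Gödel numbers are astronomically large, so the sequence coding is demonstrated only on small values.

```python
# Symbol codes from the text: B = 3, 0 = 5, 1 = 7, L = 9, R = 11, q_i = 13 + 2i.
g = {'B': 3, '0': 5, '1': 7, 'L': 9, 'R': 11}

def lb(i):
    """Code of instruction label q_i."""
    return 13 + 2 * i

def primes(n):
    """First n primes, by trial division (a small helper, not in the text)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def quad_num(q1, c, a, q2):
    """Goedel number of a quadruple: the four codes as exponents on 2, 3, 5, 7."""
    e = [lb(q1), g[c], g[a], lb(q2)]
    return 2**e[0] * 3**e[1] * 5**e[2] * 7**e[3]

def super_num(exps):
    """Sequence code: the i-th prime raised to the i-th value."""
    n = 1
    for p, x in zip(primes(len(exps)), exps):
        n *= p ** x
    return n

# The text's example: <q1, B, L, q1> has number 2^15 * 3^3 * 5^9 * 7^15.
print(quad_num(1, 'B', 'L', 1) == 2**15 * 3**3 * 5**9 * 7**15)   # True
print(super_num([3, 2]))                                         # 2^3 * 3^2 = 72
```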
Now we define a simple recursive function, and with it a series of recursive relations.

lb(v) = 13 + 2v
LB(n)    =def (∃v ≤ n)(n = lb(v))
SYM(n)   =def n = ⟨B⟩ ∨ n = ⟨0⟩ ∨ n = ⟨1⟩
ACT(n)   =def SYM(n) ∨ n = ⟨L⟩ ∨ n = ⟨R⟩
QUAD(n)  =def len(n) = 4 ∧ LB(exp(n, 0)) ∧ SYM(exp(n, 1)) ∧ ACT(exp(n, 2)) ∧ LB(exp(n, 3))


lb(v) is the Gödel number of instruction label v. Then the relations are true when n is, respectively, the number for an instruction label, a symbol, an action, and a quadruple. In particular, a code for a quadruple numbers a sequence of four symbols of the appropriate sort.
We are now ready to number the Turing machines. For this, adopt a simple modification of our original specification: We have so far supposed that a Turing machine might lack any given quadruple, say ⟨3, 1, x, y⟩. In case it lacks this quadruple, if the machine reads 1 and is sent to state 3, it simply hangs, with no place to go. Where q is the largest label in the machine, we now suppose that for any p ≤ q, if no ⟨p, C, x, y⟩ is a member of the machine, the machine is simply supplemented with ⟨p, C, C, p⟩. The effect is as before: In this case, there is a place for the machine to go; but if the machine goes to ⟨p, C, C, p⟩, it remains in that state, repeating it over and over. In the case of label 0, the states are added to the machine but serve no function, as the zero label forces a halt. Further, we suppose that the quadruples in a Turing machine are taken in order, ⟨0, 0, x, y⟩, ⟨0, 1, x, y⟩, ⟨0, B, x, y⟩, ⟨1, 0, x, y⟩, ..., ⟨q, 0, x, y⟩, ⟨q, 1, x, y⟩, ⟨q, B, x, y⟩. So each Turing machine has a unique specification. On this account, a Turing machine halts only when it reaches a state of the sort ⟨x, x, x, 0⟩. And the ordered specification itself guarantees the functional requirement that there are no two quadruples with the first inputs the same and the latter different. So for TMACH(n),

(∃w < len(n))(len(n) = 3 × (w + 2)) ∧ (∀v: 3 × v + 2 < len(n))(∀x ≤ n){
  (x = exp(n, 3 × v) → (QUAD(x) ∧ exp(x, 0) = lb(v) ∧ exp(x, 1) = ⟨0⟩)) ∧
  (x = exp(n, 3 × v + 1) → (QUAD(x) ∧ exp(x, 0) = lb(v) ∧ exp(x, 1) = ⟨1⟩)) ∧
  (x = exp(n, 3 × v + 2) → (QUAD(x) ∧ exp(x, 0) = lb(v) ∧ exp(x, 1) = ⟨B⟩))}

Given our modifications, the length of a Turing machine must be a non-zero multiple of three, including at least the initial labels zero and one. So for some w, len(n) = 3 × (w + 2). Then for each initial label v, there are three quadruples; so there are quadruples 3 × v, 3 × v + 1, and 3 × v + 2, taken in the standard order, and each with initial label v. Since n is a super Gödel number, and each x the number of a quadruple, it is the exponents of x that reveal the instruction label and cell content.

But now it is easy to see,
T14.1. There is a recursive enumeration of the Turing machines. Set,

mach(0) = μz[TMACH(z)]
mach(Sn) = μz[z > mach(n) ∧ TMACH(z)]

Since mach(n) is a recursive function from the natural numbers onto the Turing machines, they are recursively enumerable. While this enumeration is recursive, it is not primitive recursive.


Now, as we work toward a demonstration that Turing computable functions are recursive, let us pause for some key ideas. Consider a tape divided as follows,

(I)
  left      right
  1 0  |  1 0 1 1 0
       ↑

We shall code the tape with a pair of numbers. Where at any stage the head divides the tape into left and right parts, the first is a standard code for the right-hand side, ⌜10110⌝, and the second a code for the left side read from the inside out, ⌜B01⌝. Taken as a pair, these numbers record at once the contents of the tape and the position of the head, always under the first digit of the coded right number.
Say a dextral Turing machine computes a function f(n) = m. Let us suppose that we have functions code(n) and decode(m) to move between m and n and their codes (where this requires moving from the numbers m and n through their binary representations, and then to the codes). So we concentrate on the machine itself, and wish to track the status of Turing machine i, given input n, for each step j of its operation. In order to track the status of the machine, we shall require functions left(i, n, j) and right(i, n, j) to record codes of the left and right portions of the tape, and state(i, n, j) for the current quadruple state of the machine.

First, as we have observed, for any Turing machine there is a unique quadruple for any instruction label and tape value. Thus machs(i, m, n) numbers a quadruple as a function of the number of the machine in the enumeration, and Gödel numbers for initial label and tape value. Thus machs(i, m, n) is,

(μy ≤ mach(i))[(∃v < len(mach(i)))(y = exp(mach(i), v)) ∧ exp(y, 0) = m ∧ exp(y, 1) = n]

So machs(i, m, n) returns the number of that quadruple in machine i whose initial label has number m, and initial value number n. Since the machine is a function, there must be a unique state with those initial values.

In addition, where n = a ⋆ b, let us adopt a sort of converse to concatenation, ⊖, such that a ⊖ n = b.

a ⊖ n = (μx ≤ n)(∀i < len(n) ∸ len(a))(exp(x, i) = exp(n, len(a) + i))

So we want the least x such that its length is the length of n less the length of a, and the values of x at any position i are the same as those of n at len(a) + i. Thus a ⊖ n lops off the portion numbered a from the expression numbered n.
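Concatenation ⋆ and the lop-off operation can be checked concretely on small exponent codes. In this sketch (my own, matching the behavior of the text's exp and len on prime-power codes; it assumes, as with the symbol codes here, that every coded value is nonzero), lop(a, star(a, b)) recovers b.

```python
def primes(n):
    """First n primes, by trial division (a small helper, not in the text)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def code(seq):
    """Exponent coding: 2^e0 * 3^e1 * ... for seq = [e0, e1, ...]."""
    n = 1
    for p, e in zip(primes(len(seq)), seq):
        n *= p ** e
    return n

def exp(n, i):
    """Exponent of the i-th prime in n."""
    p = primes(i + 1)[-1]
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def length(n):
    """Number of leading primes dividing n (the text's len). This works
    because all coded values here are nonzero."""
    i = 0
    while exp(n, i) > 0:
        i += 1
    return i

def star(a, b):
    """Concatenation: b's exponents appended after a's."""
    return code([exp(a, i) for i in range(length(a))] +
                [exp(b, i) for i in range(length(b))])

def lop(a, n):
    """The converse: with n = star(a, b), recover b by shifting off a."""
    return code([exp(n, length(a) + i) for i in range(length(n) - length(a))])

a, b = code([3, 5]), code([7, 3, 5])
print(lop(a, star(a, b)) == b)   # True
```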


Recall that our Turing machine is to calculate a function f(n) = m. Initial values of left(i, n, j), right(i, n, j), and state(i, n, j) are straightforward.

left(i, n, 0) = ⌜BB⌝
right(i, n, 0) = code(n)
state(i, n, 0) = machs(i, ⟨q1⟩, exp(right(i, n, 0), 0))

On a dextral machine, the machine never writes to the left of its initial position, and the head never moves more than one position to the left of its initial position; so we simply set the value of the left portion to a couple of blanks. This ensures that there is enough space on the left for the machine to operate (and that, for any position of the machine head, there is always a left portion of the tape). The starting right number is just the code of the input to the function. And the initial state value is determined by the input label 1 and the first value on the tape, which is coded by the first exponent of right(i, n, 0).
For the successor values,
left.i; n; Sj/ D

8
< left.i; n; j/

2exp.right.i;n;j/;0/ ? left.i; n; j/
: exp.left.i;n;j/;0/
2
left.i; n; j/

if
if
if

SYM.exp.state.i; n; j/; 2//

exp.state.i; n; j/; 2/ D hRi


exp.state.i; n; j/; 2/ D hLi

If a symbol is written in the current cell, there is no change in the left number. If
the head moves to the left or the right, the first value is either appended or deleted,
depending on direction. And similarly for right.i; n; Sj/ but with separate clauses
for each of the symbols that may be written onto the first position. And now the
successor value for state is determined by the Turing machine together with the new
label and the value under the head after the current action has been performed.
state.i; n; Sj/ D machs.i; exp.state.i; n; j/; 3/; exp.right.i; n; Sj/; 0//

The machine jumps to a new state depending on the label and value on the tape.
Observe that we are here proceeding by simultaneous recursion, defining multiple
functions together. It should be clear enough how this works (see E12.26, p. 579).
If the machine enters a zero state then it halts. So set,
stop.i; n; j/ Ddef .y  len.mach.i///.exp.state.i; n; j/; 0/ D lb.y//

exp.state.i; n; j/; 0/ is the number of of the instruction label. So exp.state.i; n; j/; 0/


D lb.y/ when y is the label. And stop.i; n; j/ takes the value 0 just in case machine i
with input n is halted at step j. When the first member of state.i; n; j/ codes zero, the
machine is halted, otherwise it is running. So y takes the value zero just in case the

machine is halted.
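To picture how the left and right codes evolve, here is a toy Python sketch with lists of symbols standing in for the coded numbers (left stored inside-out, as in the text). It illustrates only the shape of the update clauses, not the arithmetized definitions themselves.

```python
def step(left, right, action):
    """One update of the (left, right) tape pair. left is read inside-out,
    so left[0] is the cell just left of the head; right[0] is the scanned
    cell. Lists of symbols stand in for the text's coded numbers."""
    if action == 'R':
        # the scanned cell moves to the front of left
        return [right[0]] + left, right[1:] + ['B']
    if action == 'L':
        # the innermost left cell becomes the scanned cell
        return left[1:] + ['B'], [left[0]] + right
    # otherwise a symbol is written: left unchanged, scanned cell replaced
    return left, [action] + right[1:]

l, r = ['B', 'B'], ['1', '0', '1']
l, r = step(l, r, '0')      # write 0 over the scanned 1
l, r = step(l, r, 'R')      # move right
print(l, r)   # ['0', 'B', 'B'] ['0', '1', 'B']
```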


T14.2. Every Turing computable function is a recursive function. Supposing Turing machine i computes a function f(n),

f(n) = decode(right(i, n, μj[stop(i, n, j) = 0]))

When a dextral Turing machine stops, the value of right is just the code of its output value m; so if we decode right(i, n, j) at that stage, we have the value of the function calculated by the Turing machine. Supposing, as we have, that the machine does return a value, minimization operates on a regular function. Since this function is recursive, the function calculated by Turing machine i is a recursive function.
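The minimization in T14.2 is ordinary unbounded μ-search. As a Python sketch, with a stand-in for stop(i, n, j), since running actual machine codes is impractical:

```python
def mu(f):
    """Least j with f(j) == 0. Defined only when f is regular, i.e. some
    such j exists -- exactly the situation in T14.2, where the machine
    is assumed to halt and return a value."""
    j = 0
    while f(j) != 0:
        j += 1
    return j

# Stand-in for stop(i, n, j): pretend machine i on input n halts at step 5.
print(mu(lambda j: 0 if j >= 5 else 1))   # 5
```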
E14.2. Find a recursive function to calculate right(i, n, j). Hint: You might find a combination of ⋆ and ⊖ useful for the case when a symbol is written into the first cell.

E14.3. Find a recursive function to calculate decode(n).

E14.4. Suppose a dual Turing machine has two tapes, with a machine head for each. Instructions are of the sort ⟨qi, C^ta, A^tb, qj⟩, where ta and tb indicate the relevant tape. Show that every function that is dual Turing computable is recursive.

14.1.3

Recursive Functions are Turing Computable

To show that the recursive functions are identical to the Turing computable functions,
we now show that all recursive functions are Turing computable.
T14.3. Every recursive function is Turing computable.

Suppose f(x⃗) is a recursive function. Then there is a sequence of recursive
functions f0, f1, . . . , fn such that fn = f, where each member is either an initial function or arises from previous members by composition, recursion, or
regular minimization. The argument is by induction on this sequence.

Basis: We have already seen that the initial functions zero(x), suc(x), and idntjk, as
illustrated in E14.1, are Turing computable.

Assp: For any i, 0 ≤ i < k, fi(x⃗) is Turing computable.

Show: fk(x⃗) is Turing computable.

fk is either an initial function or arises from previous members by composition,
recursion, or regular minimization. If an initial function, then as in the basis.
So suppose fk arises from previous members.

(c) fk(x⃗, y⃗, z⃗) arises by composition from g(y⃗) and h(x⃗, w, z⃗). By assumption g(y⃗)
and h(x⃗, w, z⃗) are Turing computable. For the simplest case, consider h(g(y)):
Chain together Turing programs to calculate g(y) and then h(w) so that the
first program operates upon y to calculate g(y) and the second begins where
the first leaves off, operating on the result to calculate h(g(y)). A case like
h(x, g(y), z) is more complex insofar as g(y) may take up a different number
of cells from y: it is sufficient to run a copy to get x:y:z:y; then g(y) to get
x:y:z:g(y); then copy for x:y:z:g(y):z and a copy that replaces the last two
numbers to get x:g(y):z. Then you can run h. And similarly in other cases.
(r) fk(x⃗, y) arises by recursion from g(x⃗) and h(x⃗, y, u). By assumption g(x⃗) and
h(x⃗, y, u) are Turing computable. Recall our little programs from chapter 12
which begin by using g(x⃗) to find f(0) and then use h(x⃗, y, u) repeatedly for
y in 0 to b − 1 to find the value of f(x⃗, b) (see, for example, p. 542). For a
representative case, consider f(m, b).

a. Produce a sequence,

m:b:m:b−1:m:b−2 . . . m:2:m:1:m:0:m

This requires a copypair(x, y) that takes m:n and returns m:n:m:n, and
pred(x). Given m:b on the tape, run copypair to get m:b:m:b (and mark
the first m with a blank). Then loop as follows: run pred on the final
b; if it is already 0, erase the final 0, go to the previous m, and move on to (b);
otherwise, move to the previous m, run copypair, and loop.

b. Run g on the last block of digits m. This gives,

m:b:m:b−1:m:b−2 . . . m:2:m:1:m:0:f(m, 0)

c. Back up to the previous m and run h on the concluding three blocks
m:0:f(m, 0). This gives,

m:b:m:b−1:m:b−2 . . . m:2:m:1:f(m, 1)

And so forth. Stop when you reach the m with an extra blank (with
two blanks in a row). At that stage, we have m:b:f(m, b), with the damaged
m. Fill the first blank, run idnt33, and you are done. Observe that the original
m:b plays no role in the calculation other than to serve as the initial template
for the series, and then as an end marker on your way back up; there is never
a need to apply h to any value greater than b − 1 in the calculation of
f(m, b).
(m) fk(x⃗) arises by regular minimization from g(x⃗, y). By assumption, g(x⃗, y) is
Turing computable. For a representative case, suppose we are given m and
want μy[g(m, y) = 0].

a. Given m, produce m:0:m:0.

b. From a tape of the form m:y:m:y loop as follows: Move to the second
m; run g on m:y; this gives m:y:g(m, y); check to see if the result is zero;
if it is, run idnt32 and you are done (this is the same as deleting the last
zero and running idnt22); if the result is not zero, delete g(m, y) to get
m:y; run suc on y; and then a copier to get m:y′:m:y′, and loop. The
loop halts when it reaches the value of y for which g has output 0, and
there must be some such value if g is regular.
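Stripped of the tape management, the loop in (m) is ordinary unbounded search: start from y = 0 and apply suc until g returns 0. A minimal Python sketch (the predecessor example anticipates E14.6 and is an illustrative assumption):

```python
def regular_mu(g, m):
    """mu y [g(m, y) = 0]: the least y with g(m, y) = 0.

    Mirrors step (m): test y = 0, 1, 2, ... in turn.  The loop terminates
    only if g is regular, i.e. some such y exists for every m."""
    y = 0
    while g(m, y) != 0:
        y += 1
    return y

# Least y with m = pred(y): for m > 0 this is m + 1 (cf. E14.6).
print(regular_mu(lambda m, y: 0 if max(y - 1, 0) == m else 1, 4))  # -> 5
```

If g is not regular, the loop never halts for some inputs, which is exactly why regular minimization is required for totality.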
Indct: Any recursive function f(x⃗) is Turing computable.

And from T14.2 together with T14.3, the Turing computable functions are identical to the recursive functions. It is perhaps an amazing coincidence that functions
independently defined in these ways should turn out to be identical. And we
have here the beginnings of an idea behind Church's thesis, which we shall explore in
section 14.3.
E14.5. From exercise E14.1 you should already have Turing programs for suc(x),
pred(x), copy(x), and idnt33(x, y, z). Now produce each of the following, in
order, leading up to the recursive addition function. When you require one as
part of another simply copy it into the larger file.

a. The function h(x, y, u). For addition, h(x, y, u) is suc(idnt33(x, y, u)). So this
is a simple combination of suc and idnt33. For addition, g(x) = idnt11(x) = x,
which requires no action; so we will not worry about that.

b. The function copypair. Take a:b and return a:b:a:b. One approach is to
produce a simple modification of copy that takes a:b and produces a:b:a.
Run this program starting at a, and then another copy of it starting at b.

c. The function cascade. This is the program to produce m:n:m:n−1:m:n−2
. . . m:0:m. The key elements are copypair and pred. The main loop runs pred
on the last word; if the object is not zero, back up one and run copypair; and
so forth. To prepare for the next stage, you should begin by running copypair
and then damage the very first m by putting a blank in its first cell. Let the
program finish with the head on m at the end.

d. The function plus(m, n). From m at the far right of the sequence, back up
two words; check to see if there is an extra blank; if so, run idnt33 and you are
done; if not, run h(x, y, u). Though m:n is part of the cascade series, we
never run h on m:n:u. In the program we may make use of m:n as described,
but in damaged form as an end marker for the series.
There are easier ways to do addition on a Turing machine! The obvious
strategy is to put m in a location x and n in a location y; run pred on the
value in location x and then suc on the value in location y; the result appears
in y when pred hits zero. The advantage of our approach is that it illustrates
(an important case of) the demonstration that a Turing machine can compute
any recursive function.
E14.6. Produce each of the following, leading up to a Turing program for the function μy[ch(x = pred(y)) = 0], that is, the function which returns the least y
such that the characteristic function of x = pred(y) returns 0, and so the least y
such that x equals the predecessor of y.

a. The function idnt22(x, y). This can be a simple modification of idnt33.

b. The function ch(x = y), which returns 0 when x = y and otherwise 1. This
is, of course, a recursive function. But you can get it more efficiently and
more directly. To compare numbers, you have to worry about leading zeros
that might make equivalent numbers physically distinct. Here is one strategy:
From x:y check to see if one or both are all zeros; exit with 1 or 0 in the
different cases; if neither works, apply pred to x and to y and return to the
start; eventually you will come to a stage where the check for zero returns a
result.

c. The function ch(x = pred(y)). This is a simple case of composition.

d. The function μy[ch(x = pred(y)) = 0], by the routine discussed in the text.
Of course, for any number except 0, this is nothing but a long-winded equivalent to suc(x). The point, however, is to apply the algorithm for regular
minimization, and so to work through the last stage of the demonstration that
recursive functions are Turing computable.

14.2 Essential Results

In chapter 12 essential results were built on the diagonal lemma (T12.19). This time,
we depend on a halting problem with special application to Turing machines. Once
we have established the halting problem, results like ones from before follow in short
order.

14.2.1 Halting

A Turing machine is a set of quadruples. Things are arranged so that Turing machines
do not hang in the sense that they reach a state with no applicable instruction. But
a Turing machine may go into a loop or routine from which it never emerges. That
is, a Turing machine may or may not halt in a finite number of steps. This raises the
question whether there is a general way to tell whether Turing machines halt when
started on a given input. This is an issue of significance for computing theory. And,
as we shall see, the answer has consequences beyond computing.

The problem divides into a narrower self-halting and a broader general halting
version. First, the self-halting problem: By T14.1 there is an enumeration of the
Turing machines. Consider some such enumeration, Π0, Π1, . . . of Turing machines
and an array as follows,

(J)
          0        1        2       . . .
    Π0    Π0(0)    Π0(1)    Π0(2)
    Π1    Π1(0)    Π1(1)    Π1(2)
    Π2    Π2(0)    Π2(1)    Π2(2)
    ⋮

We run Π0 on inputs 0, 1, . . . ; Π1 on 0, 1, . . . ; and so forth. Now ask whether there
is a Turing program (that is, a recursive function) to decide in general whether Πi
halts when applied to its own number in the enumeration, a program H(i) such that
H(i) = 0 if Πi(i) halts, and H(i) = 1 if Πi(i) does not halt.
T14.4. There is no Turing machine H(i) such that H(i) = 0 if Πi(i) halts and H(i) = 1
if it does not.

Suppose otherwise. That is, suppose there is a halting machine H(i) where
for any Πi(i), H(i) = 0 if Πi(i) halts and H(i) = 1 if it does not. Chain this
program into a simple looping machine Λ(j) defined as follows,

⟨q, 0, 0, q⟩
⟨q, 1, 1, 0⟩

So when j = 0, Λ goes into an infinite loop, remaining in state q forever;
when j = 1, Λ halts gracefully with output 1. Let the combination of H and
Λ be Δ(i); so Δ(i) calculates Λ(H(i)). On our assumption that there is a Turing machine H(i), the machine Δ must appear in the enumeration of Turing
machines with some number d.

But this is impossible. Consider Δ(d) and suppose Δ(d) halts; since Δ halts
on input d, the halting machine H(d) = 0; and with this input, Λ goes into
the infinite loop; so the composition Λ(H(d)) does not halt; and this is just
to say Δ(d) does not halt. Reject the assumption: Δ(d) does not halt. But
since Δ(d) does not halt, the halting machine H(d) = 1; and with this input,
Λ halts gracefully with output 1; so the composition Λ(H(d)) halts; and this
is just to say Δ(d) halts. Reject the original assumption: there is no machine
H(i) which says whether an arbitrary Πi(i) halts.

For this argument, it is important that H is a component of Δ. Information about
whether Δ halts gives information about the behavior of H, and information about
the behavior of H, about whether Δ halts.
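The same diagonal argument can be sketched in Python. The decider is only a stub, since the whole point is that no correct implementation of it can exist; the names are chosen to match the halting machine, the looper, and their combination in the proof:

```python
def halts(i, x):
    """Hypothetical halting decider: 0 if machine i halts on input x,
    1 if not.  By the argument above no correct implementation exists;
    this stub is a placeholder (an assumption of the sketch)."""
    raise NotImplementedError("no such decider")

def loop(j):
    """The looping machine: on 0, remain in state q forever; on 1, halt
    gracefully with output 1."""
    while j == 0:
        pass
    return 1

def diag(i):
    """The chained machine: diag(i) computes loop(halts(i, i))."""
    return loop(halts(i, i))

# Were halts correct, diag applied to its own index would halt just in
# case halts says it does not: a contradiction either way.
print(loop(1))  # -> 1 (the graceful branch)
```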
The more general question is whether there is a machine to decide for any i
and n whether Πi(n) halts. But it is immediate that if there is no Turing machine to
decide the more narrow self-halting problem, there is no Turing machine to decide
this more general version.

T14.5. There is no Turing machine H(i, n) such that H(i, n) = 0 if Πi(n) halts and
H(i, n) = 1 if it does not.

Suppose otherwise. That is, suppose there is a halting machine H(i, n) where
for any Πi(n), H(i, n) = 0 if Πi(n) halts and H(i, n) = 1 if it does not. Chain
this program after a copier K(n) which takes input n and gives n:n. The
combination H(K(i)) decides whether Πi(i) halts. This is impossible; reject
the assumption: There is no such Turing machine H(i, n).
And when combined with T14.3, according to which every recursive function is Turing computable, these theorems, which tell us that no Turing program is sufficient to
solve the halting problem, yield the result that no recursive function solves the halting problem: if a function is recursive, then it is Turing computable; and since it is
Turing computable, it does not solve the halting problem. Observe that we may be
able to decide in particular cases whether a program halts. No doubt you have been
able to do so in particular cases! What we have shown is that there is no perfectly
general recursive method to decide whether Πi(n) halts.


E14.7. Consider again the μ-recursive functions introduced in E12.7. Suppose that
these functions can be numbered and that there is a μ-recursive function enf(i)
to enumerate them; so enf(i) returns the Gödel number of the ith function
in the enumeration. (You will have occasion to produce this function in a
later exercise.) Show that there is no μ-recursive function def(i) such that
def(i) = 0 if fi(i) is defined and def(i) = 1 if fi(i) is undefined. Hint: Let your
diagonal function diag(i) = μy[def(i) = y ∧ y = 1]. We might think of this
as the definition problem.

14.2.2 The Decision Problem

Recall our demonstration from section 12.5.2 that if Q is consistent then no recursive
relation identifies the theorems of predicate logic. With the identity between the
recursive functions and the Turing computable functions, this is the same as the result
that if Q is consistent then no Turing computable function identifies the theorems
of predicate logic. We are now in a position to obtain a related result directly, by
means of the halting problem. Recall from chapter 13 (p. 605) that a theory T is
ω-inconsistent iff for some P(x), T proves each P(m̅) but also proves ¬∀xP(x).
Equivalently, T is ω-inconsistent iff for some P(x), for every m, T ⊢ ¬P(m̅) and
T ⊢ ∃xP(x). We show,
T14.6. If Q is ω-consistent, then no Turing computable function f(n) is such that
f(n) = 0 just in case n numbers a theorem of predicate logic.

Suppose Q is ω-consistent, and consider our recursive function stop(i, n, j)
which takes the value 0 iff Πi(n) is halted. Since it is recursive, stop is
captured by some Stop(i, n, j, z) so that,

(i) If Πi(i) is halted by step j, Q ⊢ Stop(i̅, i̅, j̅, ∅)
(ii) If Πi(i) never halts, Q ⊢ ¬Stop(i̅, i̅, j̅, ∅) for any j

For any i, let H(i̅) = ∃zStop(i̅, i̅, z, ∅). Then if Πi(i) halts, there is some
j such that Q ⊢ Stop(i̅, i̅, j̅, ∅); so Q ⊢ H(i̅). And if Πi(i) never halts, for
every j, Q ⊢ ¬Stop(i̅, i̅, j̅, ∅); so since Q is ω-consistent, Q ⊬ H(i̅). So
where Q is a conjunction of the axioms of Q, if Πi(i) halts ⊢ Q → H(i̅) and
if Πi(i) never halts ⊬ Q → H(i̅); so,

⊢ Q → H(i̅)    iff    Πi(i) halts

Suppose some Turing computable function f(n) takes the value 0 just in case n
numbers a theorem. Then for any i, f applied to ⌜Q → H(i̅)⌝ takes the value
0 iff Q → H(i̅) is a theorem, iff Πi(i) halts. But this is impossible; reject the
assumption: If Q is ω-consistent, then there is no Turing computable function
that returns the value zero for numbers of theorems of predicate logic.

And, of course, this result, according to which if Q is ω-consistent no Turing computable function returns zero just for theorems of predicate logic, is equivalent to
the result that if Q is ω-consistent, then no recursive function returns zero just for
theorems of predicate logic.3
E14.8. Return again to the μ-recursive functions from the previous exercise (with
E12.7 and E12.16). Suppose that in addition to enf(i) to enumerate the functions
there is a μ-recursive val(i, n) to return the value of fi(n); so val(i, n) = fi(n).
(Again, you will have the opportunity to construct this function in a later
exercise.) From E12.16 this function is captured in Qs by some Val(i, n, y).
Use your result from the definition problem in E14.7 to show that if Qs is ω-consistent, then no μ-recursive function f(n) is such that f(n) = 0 just in case
n numbers a theorem of predicate logic. Hint: Let Def(i̅) =def ∃zVal(i̅, i̅, z).

14.2.3 Incompleteness Again

In T12.21 we saw that no consistent, recursively axiomatizable theory extending Q
is negation complete. We shall see this again. However, as described in chapter 13,
the incompleteness result comes in different forms: in particular, one, as from
chapter 12, which depends on consistency and capture, and another which depends
on soundness and expression. We are positioned to see the result in both forms.
Semantic Version
A key preliminary to the chapter 12 demonstration of incompleteness is T12.20
which applies the diagonal lemma to show that for no consistent theory T extending
Q is a recursive relation true of (numbers for) its theorems. This time, by means of
the halting result, we show that the truths of LNT are not recursively enumerable.
T14.7. The set of truths of LNT is not recursively enumerable.
3 This argument, and the parallel one in chapter 12, have the advantage of simplicity. However, this
result that no recursive function is true just of the theorems of predicate logic need not be conditional
on the consistency (or ω-consistency) of Q. For an illuminating version of the strengthened result from
the halting problem, see chapter 11 of Boolos et al., Computability and Logic.


Consider again our recursive function stop(i, n, j); since it is recursive, it is expressed by some Stop(i, n, j, z); for arbitrary i, set H(i̅) = ∃zStop(i̅, i̅, z, ∅).

Suppose some Πe enumerates the truths of LNT, halting with output 0 if (the
number for) H(i̅) appears in the enumeration, and with output 1 if ¬H(i̅)
appears. Exactly one of H(i̅) or ¬H(i̅) is true; so the number for one of
them will eventually turn up, since Πe enumerates all the truths of LNT.

(i) Suppose N[H(i̅)] = T; then for some m, N[Stop(i̅, i̅, m̅, ∅)] = T; so
N[¬Stop(i̅, i̅, m̅, ∅)] ≠ T; so by expression, ⟨⟨i, i, m⟩, 0⟩ ∈ stop; so Πi(i)
stops.

(ii) Suppose N[¬H(i̅)] = T; then N[H(i̅)] ≠ T; so for any m ∈ U,
N[Stop(i̅, i̅, m̅, ∅)] ≠ T; so by expression, ⟨⟨i, i, m⟩, 0⟩ ∉ stop; so Πi(i) never
stops.

So Πe halts with output 0 iff N[H(i̅)] = T (by its definition); iff Πi(i) halts
(by (i) and (ii)); so Πe solves the halting problem. This is impossible; there is
no such Turing machine. And since no Turing machine enumerates the truths
of LNT, no recursive function enumerates the truths of LNT.
This theorem, together with T12.17, which tells us that if T is a recursively axiomatized formal theory then the set of theorems of T is recursively enumerable, puts
us in a position to obtain an incompleteness result mirroring T13.2.

T14.8. If T is a recursively axiomatized sound theory whose language includes LNT,
then T is negation incomplete.

Suppose T is a recursively axiomatized sound theory whose language includes LNT. By T12.17, there is an enumeration of the theorems of T, and
since T is sound, all of the theorems in the enumeration are true. But by
T14.7, there is no enumeration of all the truths of LNT; so the enumeration of
theorems is not an enumeration of all truths; so some true P is not among the
theorems of T; and since P is true, ¬P is not true; and since T is sound,
neither is ¬P among the theorems of T. So T ⊬ P and T ⊬ ¬P.

This incompleteness result requires the soundness of T, where soundness is
more than mere consistency. But it requires only that the language include LNT, and
so have the power to express recursive functions; this leaves to the side a
requirement that T extend Q, and so be able to capture recursive functions.


Syntactic Version

From the halting problem, we can obtain the other sort of incompleteness result as
well. Thus we have a theorem like the combination of T13.4 and T13.5.

T14.9. If T is a recursively axiomatized theory extending Q, then there is a sentence
P such that if T is consistent, T ⊬ P, and if T is ω-consistent, T ⊬ ¬P.

Suppose T is a recursively axiomatized theory extending Q. Once again consider stop(i, n, j); since stop is recursive and T extends Q, stop is captured in
T by some Stop(i, n, j, z); let H(i̅) = ∃zStop(i̅, i̅, z, ∅), and consider a Turing machine Πs(i) which for arbitrary i tests whether successive values of m
number a proof of ¬H(i̅), halting if some m numbers a proof and otherwise
continuing forever; so Πs(i) evaluates PRFT(m, ⌜¬H(i̅)⌝) for successive
values of m;4 since T is a recursively axiomatized theory, this is a recursive
relation, so that there must be some such Turing machine. We can think of
Πs(i) as seeking a proof that Πi(i) does not halt.

Suppose Πs(s) halts. By definition, Πs(i) halts just in case some m numbers a proof of ¬H(i̅); since Πs(s) halts, then, there is some m such that
PRFT(m, ⌜¬H(s̅)⌝); so T ⊢ ¬H(s̅). But if Πs(s) halts, for some m, ⟨⟨s, s, m⟩,
0⟩ ∈ stop; so by capture, T ⊢ Stop(s̅, s̅, m̅, ∅); so T ⊢ ∃zStop(s̅, s̅, z, ∅),
which is to say, T ⊢ H(s̅). So if T is consistent, Πs(s) does not halt.

(i) Suppose T is consistent and T ⊢ ¬H(s̅); then for some m, PRFT(m, ⌜¬H(s̅)⌝);
so by its definition, Πs(s) halts; but if T is consistent, Πs(s) does not halt;
so T ⊬ ¬H(s̅).

(ii) Suppose T is ω-consistent and T ⊢ H(s̅); then T ⊢ ∃zStop(s̅, s̅, z, ∅). But
since Πs(s) does not halt, for any m, ⟨⟨s, s, m⟩, 0⟩ ∉ stop; and by capture,
for any m, T ⊢ ¬Stop(s̅, s̅, m̅, ∅); so by ω-consistency, T ⊬ ∃zStop(s̅, s̅, z, ∅).
This is impossible; T ⊬ H(s̅).

Again, this is roughly the form in which Gödel first proved the incompleteness of
arithmetic. However, as we have seen, it is possible to strengthen this version of the
result to drop the requirement of ω-consistency for the simple result that no consistent, recursively axiomatizable theory extending Q is negation complete.
E14.9. Use the definition problem for μ-recursive functions to show that there is no
μ-recursive enumeration of the set of truths of LNT. Hint: Return to val(i, n),
Val(i, n, y), and Def(i̅) (this time depending on T12.7 for the result that Val
expresses val). Suppose there is an enumeration ent(n) of the truths of LNT;
then, to get something that returns 0 and 1 in the right way, the characteristic
function of ent(μy[ent(y) = ⌜Def(i̅)⌝ ∨ ent(y) = ⌜¬Def(i̅)⌝]) = ⌜Def(i̅)⌝
is 0 when the minimization finds Def(i̅) in the enumeration, and otherwise
1.

4 Or, rather, since it has i free but numbers a formula with i̅ for x, the second term is
FORMSUB(⌜¬H(x)⌝, ⌜x⌝, num(i)). See p. 672 and p. 716 below.

E14.10. Use your results for μ-recursive functions from other exercises to show that
if T is a recursively axiomatized theory extending Qs, then there is a sentence
P such that if T is consistent, T ⊬ P, and if T is ω-consistent, T ⊬ ¬P.

14.3 Church's Thesis

We have attained a number of negative results: as T14.6, that if Q is ω-consistent
then no Turing computable function f(n) returns zero just for numbers of theorems of
predicate logic, and T14.7, that the set of truths of LNT is not recursively enumerable.
These are interesting. But, one might very well think, if no Turing machine computes
a function, then we ought simply to compute the function some other way. So the
significance of our negative results is magnified if the Turing computable functions
are, in some sense, the only computable functions. If in some important sense the
Turing computable functions are the only computable functions, and no Turing machine computes a function, then in the relevant sense the function is not computable.
Thus Church's thesis:

CT The total numerical functions that are effectively computable by some algorithmic method are just the recursive functions.

We want to be clear, first, on the content of this thesis, and once we know what it says,
on reasons for thinking that it is true.

14.3.1 The content of Church's thesis

Church's thesis makes a claim about total numerical functions that are effectively
computable by an algorithmic method. Original motivations for computation of this
sort are from the simple routines we learn in grade school for addition, multiplication,
and the like. By themselves, such methods are of interest. However, we mean to
include the sorts of methods contemporary computing devices can execute. These
are of considerable interest as well. Let us take up the different elements of the
proposal in turn.

First, as always, a numerical function is total iff it is defined on the entire numerical domain. Arbitrary functions on a finite domain may be finitely specified by
listing their members, and then computed by simple lookup. This was our approach
with simple, but arbitrary, functions from chapter 4. The question of computability
becomes interesting when domains are not finite (and by methods like those in the
countability reference, a function on an infinite domain is always comparable to one
that is total). So Church's thesis is a thesis about the computability of total functions.
A function is effectively computable iff there is a method for finding its value for
any given argument. Correspondingly, a property or relation is effectively decidable
iff its characteristic function is effectively computable. So methods for addition and
multiplication are adequate to calculate the value of the function for any inputs. Or
consider a Turing machine programmed to enumerate the theorems of T, stopping
with output 0 if it reaches (the number for) P, and output 1 if it reaches ¬P. If T
is a consistent, recursively axiomatized, and negation complete theory, then this is an
effective method for deciding the theorems of T. If P is a theorem, it eventually
shows up in the enumeration, and the Turing machine stops with output 0. If P is
not a theorem, ¬P is a theorem, so ¬P eventually shows up in the enumeration, and
the machine stops with output 1. This was the idea behind T12.18. But if T is not
negation complete, this is not an effective method for deciding theorems of T. If P
is a theorem, it eventually shows up in the enumeration, and the machine stops with
output 0. But if T is not negation complete, ¬P might fail to be a theorem. In this
case, the machine continues forever, and does not stop with output 1; so for some
arguments, this method does not find the value of the characteristic function, and we
have not described an effective method for deciding the theorems of this T.
From the start, we may agree that there is some uncertainty about the notion
of an algorithmic method; so, for example, different texts offer somewhat different
definitions. However, as we did for logical validity and soundness in chapter 1,
we shall take a particular account as a technical definition, partly as clarified in
examples that follow. Difficulties to the side, there does seem to be a relevant core
notion: for our purposes an algorithmic method is a finitely constrained rule-based
procedure (rote, if you will).5

There is some vagueness in how much processing is allowed in following a
rule. (So, an algorithm for multiplication does not typically include instructions
for required additions.) However, we may take it that if some instructions are sufficient for a computer to calculate a function, then the function is algorithmically
computable. Thus that a function is Turing computable is sufficient to show that it is
algorithmically computable. Again, standard methods for addition and multiplication
are examples of algorithmic procedures. Truth table construction is another example
of a method that proceeds by rote in this way. Given the basic tables for the operators,
one simply follows the rules to complete the tables and determine validity; and one
could program a computer to perform the same task. Thus validity in sentential logic
is effectively decidable by an algorithmic method. In contrast, derivations are not an
algorithmic method. The strategies are helpful! But, at least in complex cases, there
may come a stage where insight or something like lucky guessing is required. And
at such a stage, you are not following any rules by rote, and so not following any
specific algorithm to reach your result.

5 We have no intention of engaging Wittgensteinian concerns about following a rule. See, for example, Kripke, Wittgenstein on Rules and Private Language.
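The rote character of truth table construction is easy to make concrete. Here is a minimal Python sketch of a validity checker for sentential formulas over ¬ and ∧, with formulas given as nested tuples; the encoding is an assumption chosen for illustration:

```python
from itertools import product

def value(formula, row):
    """Evaluate a formula under an assignment (one row of the table).

    A formula is an atom name (str), ('not', F), or ('and', F, G)."""
    if isinstance(formula, str):
        return row[formula]
    if formula[0] == 'not':
        return not value(formula[1], row)
    if formula[0] == 'and':
        return value(formula[1], row) and value(formula[2], row)
    raise ValueError(formula)

def atoms(formula, found=None):
    """Collect the atomic sentences occurring in a formula."""
    found = set() if found is None else found
    if isinstance(formula, str):
        found.add(formula)
    else:
        for part in formula[1:]:
            atoms(part, found)
    return found

def tautology(formula):
    """Follow the rules by rote: check every one of the 2^n rows."""
    names = sorted(atoms(formula))
    return all(value(formula, dict(zip(names, row)))
               for row in product([True, False], repeat=len(names)))

# not-(A and not-A) is a tautology; (A and B) is not.
print(tautology(('not', ('and', 'A', ('not', 'A')))))  # -> True
print(tautology(('and', 'A', 'B')))                    # -> False
```

Every step is fixed by the rules and the input; no insight or guessing enters anywhere, which is what makes the method algorithmic.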
And algorithmic methods operate under finite constraints. In general, we shall
not worry about how large these constraints may be, so long as they remain finite.
Consider first truth table construction. If this is to be an effective method for determining validity, it should return a result for any sentence. But for any n > 0 there
are sentences with that many atomic sentences (for example, A1 ∧ A2 ∧ . . . ∧ An),
so the corresponding table requires 2^n rows. This number may be arbitrarily large,
and a table may require more paper or memory than in the entire universe. But, in
every case, the limit is finite. So, for our purposes, it qualifies as an effective algorithmic method. Contrast this case with a device, which we may call god's mind,
that stores all the theorems of predicate logic sorted in order of their Gödel numbers.
To calculate whether P is a theorem, simply search up to the Gödel number of P
to see if that sentence is in the database: if it is, P is a theorem; if it is not, P is
not a theorem. Alternatively, perhaps this machine does infinite parallel processing,
and for every n runs a Turing machine to evaluate PRFPL(n, ⌜P⌝), all at the same
time as it were, so that if some calculation evaluates to 0, P is a theorem, and if
all evaluate to 1, P is not. It is not our intent to deny the existence of god, or that
one might very well solve mathematical problems by prayer (though this might not
go over very well on examinations which require that you show your work)! But,
insofar as this device requires infinite memory, infinitely many instructions, infinite
processing power, or the capacity to evaluate at once infinite ranges of data, it will
not, for our purposes, count as an algorithmic method.
Or consider again a Turing machine programmed to enumerate the theorems of
T, stopping with output 0 if it reaches (the number for) P, but continuing forever
if P does not appear. One might suppose the information that P is not a theorem
is contained already in the fact that the machine never halts, and that god or some
being with an infinite perspective might very well extract this information from the
machine. Perhaps so. But this method is not algorithmic just because it requires
the infinite perspective. But there are interesting attempts to attain the effect of this
latter machine without appeals to god. Consider, first, Zeno's machine. As before,
the machine enumerates theorems, this time flashing a light if P appears in the list.
However, for some finite time t (say 60 seconds), this machine takes its first step in
t/2 seconds, its second step in t/4 seconds, and for any n, step n in t/2^n seconds.
But the sum of t/2 + t/4 + . . . = t, and the Turing machine runs through all of
infinitely many steps in time t. So start the machine. If the light flashes before t
seconds elapse, P is a theorem. If t elapses, the machine has run through all of
infinitely many steps, so if the light does not flash, P is not a theorem.
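The convergence claim is elementary to check: the partial sums t/2 + t/4 + . . . + t/2^n fall short of t by exactly t/2^n, so they approach t as n grows. A quick Python illustration with t = 60, using exact rational arithmetic:

```python
from fractions import Fraction

t = Fraction(60)                               # t = 60 seconds
partial = sum(t / 2**k for k in range(1, 51))  # t/2 + t/4 + ... + t/2^50
print(float(partial))                          # just under 60
print(t - partial == t / 2**50)                # gap is exactly t/2^50 -> True
```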
One might object that this proposal reduces to a tautology of the sort: if such-and-such (impossible) circumstances obtain, then the theorems are decidable. Great, but
who cares? However, we should not reject the general strategy out of hand. From
even a very basic introduction to special relativity, one is exposed to time dilation
effects (for a simple case see the time dilation reference). General relativity allows
a related effect. Where special relativity applies just to reference frames moving at
constant velocity relative to one another, general relativity allows accelerated frames.
And it is at least consistent with the laws of general relativity for one frame to have
an infinite elapsed time, while another's time is finite.6 So, for a Malament-Hogarth
(M-H) machine, put a Turing machine in the one frame and an observer in the other.
The Turing machine operates in the usual way in its frame, enumerating the theorems
forever. If P is a theorem, it sends a signal back to the observer's frame that is
received within the finite interval. From the observer's perspective, this machine runs
through infinitely many operations. So if a signal is received in the finite interval, P
is a theorem. If no signal is received in the finite interval, then P is not a theorem.
(And similarly, the M-H machine might search for a counterexample to the Goldbach
conjecture, or the like.) There is considerable room for debate about whether such a
machine is physically possible. But, even if physically realized, it is not algorithmic.
For we require that an algorithmic method terminate in a finite number of steps.
Churchs thesis is thus that the total numerical functions that are effectively computable by some algorithmic method are the the same as the recursive functions. Suppose we obtain a negative result that some function is not algorithmically computable.
Even with the finite limits we have placed on memory, number of instructions and the
like, the negative result remains of considerable interest: So long as a routine follows
6. Students with the requisite math and physics background might be interested in Hogarth, "Does General Relativity Allow an Observer To View an Eternity In a Finite Time?" See also Earman and Norton, "Forever is a Day," and for the same content, chapter 4 of Earman, Bangs, Crunches, Whimpers, and Shrieks (but with additional, though still difficult, setup in earlier chapters of the text).

CHAPTER 14. LOGIC AND COMPUTABILITY


Simple Time Dilation


It is natural to think that, just as a wave in water approaches a boat faster when the boat is moving toward it than when the boat is moving away, so light would approach an observer faster when she is moving toward it, and more slowly when she is moving away. But this is not so. The 1887 Michelson-Morley
experiment (and many others) verifies that the speed of light has the same value for all observers. Special relativity takes as foundational:

1. The laws of physics may be expressed in equations having the same form in all frames of reference moving at constant velocity with respect to one another.

2. The speed of light in free space has the same value for all observers, regardless of their state of motion.
These principles have many counterintuitive consequences. Here is one: Consider a clock which consists of a pulse of light bouncing between two mirrors separated by distance L as in (A) below. Where c is the constant speed of light, the time between ticks is the distance traveled by the pulse divided by its speed, L/c.
[Figure: (A) a pulse of light bounces vertically between two mirrors separated by distance L. (B) the same clock viewed from a frame relative to which it moves at velocity v; the pulse traces a longer zigzag path between the mirrors.]
Now consider the same clock as observed from a reference frame relative to which it is in motion, as in (B). The speed of light remains c (instead of being increased, as one might expect, by the addition of the horizontal component to its velocity). But the distance traveled between ticks is greater than L, so the time between ticks is greater than L/c, which is to say the clock ticks more slowly from the perspective of the second frame.
One might wonder what happens if this clock is rotated 90 degrees so that the pulse is bouncing parallel to the direction of motion, or what would happen if time were
measured by a pendulum clock. But within a frame, everything is coordinated
according to the usual laws: On special relativity, there are coordinated changes to
length, mass and the like so that the effect is robust. As observed from a reference
frame relative to which the frame is in motion, time, mass, and length are distorted
together. For further discussion, consult any textbook on introductory modern
physics.
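The zigzag geometry in (B) fixes the dilation quantitatively: in a half tick the pulse travels ct/2 along the hypotenuse while the clock advances vt/2, so (ct/2)^2 = L^2 + (vt/2)^2, giving t = (2L/c)/sqrt(1 - v^2/c^2). A sketch of the computation (illustrative only; the function name is mine):

```python
import math

def tick(L, v, c=299_792_458.0):
    # time between ticks of the light clock as seen from a frame in which
    # it moves at speed v: t = (2L/c) / sqrt(1 - v^2/c^2)
    t0 = 2 * L / c                       # rest-frame tick, 2L/c
    return t0 / math.sqrt(1 - (v / c) ** 2)
```

At v = 0.6c, for instance, the moving clock's tick is longer than the rest tick by the familiar factor 1.25.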


definite rules, no (finite) amount of parallel processing, high-speed memory, and so forth is going to make a difference: the function remains uncomputable.

14.3.2 The basis for Church's thesis

It is widely accepted that Church's thesis is true, but also that it is not susceptible to proof. We shall return to the question of proof. There are perhaps three sorts of reasons that have led philosophers, computer scientists, and logicians to think it is true. (i) A number of independently defined notions plausibly associated with computability converge on the recursive functions. (ii) No plausible counterexamples (algorithmically computable functions that are not recursive) have come to light. And (iii) there is a sort of rationale from the nature of an algorithm. This last may verge on, or amount to, demonstration of Church's thesis.
Independent definitions. We have already seen that the Turing computable functions are the same as the recursive functions. And we are in a position to close another loop. From T12.16, any recursive function is captured by a recursively axiomatized consistent theory extending Q. But also,

T14.10. Every (total) function that can be captured by a recursively axiomatized consistent theory extending Q is recursive.

Suppose a function f(m) = n can be captured in a recursively axiomatized consistent theory T extending Q; then there is some F(x, y) such that if ⟨m, n⟩ ∈ f, then T ⊢ F(m, n), and if ⟨m, n⟩ ∉ f then T ⊢ ¬F(m, n); and from the latter, since T is consistent, T ⊬ F(m, n). But since f is a function, if ⟨m, n⟩ ∈ f, any k ≠ n is such that ⟨m, k⟩ ∉ f, so that T ⊬ F(m, k). So if ⟨m, n⟩ ∈ f then (i) for b = ⌜F(m, n)⌝ there is some a such that PRFT(a, b); and (ii) n is the only (and so least) number such that T ⊢ F(m, n).
Intuitively, we can find the value of f(m) by searching the theorems until we find one of the sort F(m, n), and from this derive the value n. More formally: First, for the number of F(m, n),
First, for the number of F .m; n/,
numf.m; n/ Ddef formsubformsub.pF .x; y/q; pxq; num.m//; pyq; num.n/

Recall that formsub.p; v; s/ takes the Gdel numbers of a formula P , variable


x and term s and returns the number of Psx ; and num.m/ returns the Gdel
number of the standard numeral for m. So this gives the Gdel number of
F .m; n/ as a function of m and n. Now since T is recursively axiomatized
and extends Q there is a recursive PRFT and,


f(m) =def exp(μz[len(z) = 2 ∧ PRFT(exp(z, 0), numf(m, exp(z, 1)))], 1)

So z is of the sort 2^a · 3^n, where a numbers a proof of F(m, n); that is, exp(z, 0) numbers a proof of the formula numbered numf(m, exp(z, 1)). But there is only one n that could result in a proof of F(m, n). And n is easily recovered from z. So f(m) is a recursive function.
So a function is captured in a recursively axiomatized consistent theory extending Q iff it is recursive. So these three independently defined notions for computing functions are extensionally equivalent.7 And increasing the power of a deductive system from Q to PA and beyond does not extend the range of captured functions.
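The search in T14.10 is easy to mimic concretely. In the toy sketch below, prf and numf are hypothetical stand-ins of mine (prf plays PRFT, rigged so that the "captured" function is m ↦ m + 2); the point is only the shape of the search, which must halt because exactly one n can ever be proved:

```python
from itertools import count

def numf(m, n):                  # stand-in Goedel number of F(m, n)
    return 2 ** m * 3 ** n

def prf(a, b):                   # stand-in proof relation PRFT(a, b)
    m = 0
    while b % 2 == 0:
        b //= 2
        m += 1
    n = 0
    while b % 3 == 0:
        b //= 3
        n += 1
    # "the empty proof 0 proves F(m, n)" exactly when n = m + 2
    return b == 1 and a == 0 and n == m + 2

def f(m):
    # dovetail over pairs (a, n) until a proof of F(m, n) turns up,
    # then read off the value n
    for k in count():
        for a in range(k + 1):
            for n in range(k + 1):
                if prf(a, numf(m, n)):
                    return n
```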
E14.11. (i) Explain how the result that the constructed f(m) in T14.10 is recursive requires that the original function captured by F(x, y) is total. (ii) Explain how the result changes in case we drop the requirement that captured functions be total. Hint: The construction still works, but the result is μ-recursive, not recursive.
Failure of counterexamples. Another reason for accepting Church's thesis is the failure to find counterexamples. This may be very much the same point as before: When we set out to define a notion of computability, or compute a function, what we end up with are recursive functions, rather than something other. Of course, god's mind, Zeno's machine, an M-H machine, or the like might compute a non-recursive function. Perhaps there are such devices. However, on our account, they are not algorithmic. What we do not seem to have are algorithmic methods for computing non-recursive functions.

But also in this category of reasons to accept Church's thesis is the failure of a natural strategy for showing that Church's thesis is false. Suppose one were to propose that the primitive recursive functions are all the computable functions, and so that regular minimization is redundant (perhaps you have had this very idea). Here is a way to see that this hypothesis is false: Observe that the primitive recursive functions are recursively enumerable. For this, treat composition and recursion as operations on functions so that,

plus(x, y) =def Rec(zero(x), Comp(suc(x), idnt^3_3(x, y, u)))
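In a language with higher-order functions, composition and recursion really can be treated as operations on functions. The sketch below is by analogy only (the names and conventions are mine, not the text's; in particular the base function here returns its argument, so that plus(x, 0) = x):

```python
def suc(x):
    return x + 1

def idnt(k, j):
    # projection idnt: of k arguments, return the j-th (1-indexed)
    return lambda *xs: xs[j - 1]

def Comp(f, *gs):
    # Comp(f, g1, ..., gm)(x...) = f(g1(x...), ..., gm(x...))
    return lambda *xs: f(*(g(*xs) for g in gs))

def Rec(g, h):
    # f(x..., 0) = g(x...);  f(x..., Sy) = h(x..., y, f(x..., y))
    def f(*args):
        *xs, y = args
        val = g(*xs)
        for i in range(y):
            val = h(*xs, i, val)
        return val
    return f

# addition built from the initial functions by Rec and Comp
plus = Rec(idnt(1, 1), Comp(suc, idnt(3, 3)))
```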
7. And there are more. Church himself was originally impressed by an equivalence between his lambda calculus and the recursive functions. As additional examples, Markov algorithms are discussed in Mendelson, Introduction to Mathematical Logic, 5.5; abacus machines in Boolos et al., Computability and Logic, 5; see below for discussion of the Kolmogorov-Uspenskii machine.


And so forth. Then assign numbers in the usual way,

a. g_, = 3
b. g_) = 5
c. g_( = 7
d. g_zero = 9
e. g_suc = 11
f. g_Comp = 13
g. g_Rec = 15
i. g_x_i = 19 + 4i
j. g_idnt^j_k = 21 + 4(2^j · 3^k)

There is a gap at (h) for g_Min that will be convenient later. When divided by 4, g_x_i has remainder 3 and g_idnt^j_k remainder 1; so every symbol gets a distinct number.
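The distinctness claim is easy to spot-check over finite samples (a sketch only; the sample ranges are arbitrary):

```python
# Fixed codes 3..15 all lie below 19; variable codes 19 + 4i are
# congruent to 3 (mod 4); idnt codes 21 + 4(2^j * 3^k) are congruent
# to 1 (mod 4); within each family, codes are distinct (by i, and by
# unique factorization of 2^j * 3^k).
fixed = {3, 5, 7, 9, 11, 13, 15}
g_x = {19 + 4 * i for i in range(20)}
g_idnt = {21 + 4 * (2 ** j * 3 ** k) for j in range(1, 5) for k in range(1, 5)}

assert all(n % 4 == 3 for n in g_x)
assert all(n % 4 == 1 for n in g_idnt)
assert len(fixed | g_x | g_idnt) == len(fixed) + len(g_x) + len(g_idnt)
```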
Number expressions as usual. Then, on the pattern of what has gone before, we can specify a function PR(n) true of the primitive recursive functions. In this case, you will require rvar(n) to track variables, and rvec(n) for a vector of variables; and perhaps new(j, n), true when j does not number a variable in vector n. Then it will be convenient to let members of PRSEQ(m, n) be pairs 2^i · 3^j, to track both numbers for functions and their free variables. So, for PRSEQ(m, n),
exp(exp(m, len(m) − 1), 0) = n ∧ (∀k < len(m)){

(∃a < m)[rvar(a) ∧ exp(m, k) = 2^(⌜suc(⌝ ⋆ a ⋆ ⌜)⌝) · 3^a] ∨

(∃a < m)[rvar(a) ∧ exp(m, k) = 2^(⌜zero(⌝ ⋆ a ⋆ ⌜)⌝) · 3^a] ∨

(∃a < m)(∃i | 1 ≤ i < len(n))(∃j | 1 ≤ j ≤ i)[rvec(a) ∧ len(a) = i ∧ exp(m, k) = 2^(2^(21+4(2^i · 3^j)) ⋆ ⌜(⌝ ⋆ a ⋆ ⌜)⌝) · 3^a] ∨

(∃a < m)(∃b < m)(∃c < m)(∃d < m)(∃i < k)(∃j < k)(∃e < m)(∃f < m)[rvec(a) ∧ rvec(b) ∧ rvec(c) ∧ rvar(d) ∧ exp(exp(m, i), 0) = e ∧ exp(exp(m, j), 0) = f ∧ exp(exp(m, i), 1) = a ⋆ d ⋆ c ∧ exp(exp(m, j), 1) = b ∧ exp(m, k) = 2^(⌜Comp(⌝ ⋆ e ⋆ ⌜,⌝ ⋆ f ⋆ ⌜)⌝) · 3^(a ⋆ b ⋆ c)] ∨

(∃a < m)(∃b < m)(∃c < m)(∃i < k)(∃j < k)(∃d < m)(∃e < m)[rvec(a) ∧ rvar(b) ∧ rvar(c) ∧ b ≠ c ∧ new(b, a) ∧ new(c, a) ∧ exp(exp(m, i), 0) = d ∧ exp(exp(m, j), 0) = e ∧ exp(exp(m, i), 1) = a ∧ exp(exp(m, j), 1) = a ⋆ b ⋆ c ∧ exp(m, k) = 2^(⌜Rec(⌝ ⋆ d ⋆ ⌜,⌝ ⋆ e ⋆ ⌜)⌝) · 3^(a ⋆ b)] }

Except for the need for expressions like exp(exp(m, i), 0) and exp(exp(m, i), 1) to extract the first and second components from members of the sequence m, all this should be reasonably straightforward on the pattern of something like FORMSEQ from chapter 12. Then, remembering that we have so far been tracking pairs, you can construct PR(n). And epr(0) =def μz[PR(z)], and epr(Sy) =def μz[z > epr(y) ∧ PR(z)]. So there is a recursive enumeration of the primitive recursive functions; there is an enumeration of the functions of one free variable, and so forth.
So consider an enumeration of the primitive recursive functions of one free variable and an array as follows.

(K)
         0      1      2     ...
  f0   f0(0)  f0(1)  f0(2)
  f1   f1(0)  f1(1)  f1(2)
  f2   f2(0)  f2(1)  f2(2)
  ...
And consider the function d(n) = fn(n) + 1. This function is computable; for any n: (i) run the enumeration to find fn; (ii) run fn to find fn(n); (iii) add one. Since each step is recursive, the whole is computable. But d(n) is not primitive recursive: d(0) ≠ f0(0); d(1) ≠ f1(1); and in general, d(n) ≠ fn(n); so d is not identical to any of the primitive recursive functions. So there are computable functions that are not primitive recursive.
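The diagonal construction is mechanical enough to mimic directly. Here a short list of functions stands in for the enumeration f0, f1, f2, ... (any effective listing of total one-place functions would do; the particular functions are arbitrary):

```python
# fs stands in for the enumeration f0, f1, f2, ...; the diagonal function
# d(n) = fn(n) + 1 differs from each fn at the argument n.
fs = [lambda n: 0, lambda n: n, lambda n: n * n]

def d(n):
    return fs[n](n) + 1
```

For instance d(2) = fs[2](2) + 1 = 5, while fs[2](2) = 4; so d disagrees with the third listed function at 2, and likewise down the whole diagonal.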
It is natural to think that a related argument would show that not all computable functions are recursive: recursively enumerate the recursive functions; then diagonalize to find a computable function not on the list. But this does not work! It is an entirely grammatical matter to identify the primitive recursive functions: our function epr(n) is purely a matter of form. Similarly, as we shall see in homework, it is a grammatical matter to identify the μ-recursive functions. But there is no parallel means of identifying the recursive functions. The problem is that there is no recursive means of saying when a minimization operation halts, and so when a function is regular. A program to pick out regular functions would solve a version of the definition problem explored in homework.
For any μ-recursive function f(x), μy[y = f(x)] is a μ-recursive function equivalent to it. So we simply suppose that μ-recursive functions are cast in this form, and consider an enumeration of the μ-recursive functions of a single free variable that have main operator μ. Consider fi(x) = μy[g(x, y)] and fj(y) = g(i, y). Suppose some μ-recursive function reg(i) returns zero when i numbers a regular function and is otherwise 1. Then reg(j) iff μy[g(i, y)] = fi(i) is defined; iff def(i). So for any i there is a j such that reg(j) iff def(i); so reg is sufficient to solve the definition problem; reject the assumption.
So we are blocked from recursively enumerating the recursive functions, and so from
this means of finding a computable function that is not a recursive function.
*E14.12. (i) Clean up and complete the reasoning to show that there is a recursive enumeration of the primitive recursive functions; that is, find rvar(n), rvec(n), new(j, n), and then PR(n). (ii) For any (primitive) recursive function f(x) there is a canonical formula F(x, y) to capture it in theories extending Q. Thus the enumeration eprf(n) of primitive recursive functions extends to an enumeration eprc(n) whose value is the number of the formula to capture eprf(n). Given this enumeration, extend the construction from T14.10 to find the (recursive) function that is (Turing) computable but not primitive recursive. Hint: You will be able to construct a function valpr(i, m) to return the value of fi(m) and use this for the final result.
E14.13. (i) Extend the demonstration that the primitive recursive functions are enumerable to show that there is an enumeration of the μ-recursive functions. (ii) From E12.16, for any μ-recursive function f(x) there is a canonical formula F(x, y) to capture it in theories extending Qs; thus, again, your enumeration emrf(n) of μ-recursive functions extends to an enumeration emrc(n) whose value is the number of the formula to capture emrf(n). Given this enumeration, extend the construction from T14.10 to find a μ-recursive val(i, n) = fi(n).
The nature of an algorithm. There are also reasons for Church's thesis from the very nature of an algorithm.8 Perhaps the received wisdom with respect to Church's thesis is as follows.

The reason why Church's [Thesis] is called a thesis is that it has not been rigorously proved and, in this sense, it is something like a working hypothesis. Its plausibility can be attested inductively; this time not in the sense of mathematical induction, but on the basis of particular confirming cases. The Thesis is corroborated by the number of intuitively computable functions commonly used by mathematicians, which can be defined within recursion theory. But Church's Thesis is believed by many to be destined to remain a thesis. The reason lies, again, in the fact that the notion of effectively computable function is a merely intuitive and somewhat fuzzy one. It is quite difficult to produce a completely rigorous proof of the equivalence between intuitively computable and recursive functions, precisely because one of the sides of the equivalence is not well-defined. (Berto, There's Something About Gödel, pp. 76-77)
8. Material in this section is developed from Smith, An Introduction to Gödel's Theorems, chapter 45; Smith, "Squeezing Arguments"; along with Kolmogorov and Uspenskii, "On the Definition of an Algorithm." See also Black, "Proving Church's Thesis."

There are a couple of themes in this passage. First, that Church's thesis is typically accepted on grounds of the sort we have already considered. Fair enough. But second, that it is not, and perhaps cannot, be proved. The idea seems to be that the recursive functions are a precise mathematically defined class, while the algorithmically
computable functions are not. Thus there is no hope of a demonstrable equivalence
between the two.
But we should be careful. Granted: If we start with an inchoate notion of computable function that includes, at once, calculations with pencil and paper, calculations on the latest and greatest supercomputer, and calculations on Zeno's machine, there will be no saying whether the computable functions definitely are, or are not, identical to the Turing computable functions. But this is not the notion with which we are working. We have a relatively refined technical account of algorithmic computability. Of course, it is not yet a mathematical definition. But neither are our chapter 1 accounts of logical validity and soundness; yet we have been able to show in T9.1 that any argument that is quantificationally valid (in our mathematical sense) is logically valid. And similarly, the whole translation project of chapter 5 assumes the possibility of moving between ordinary and mathematical notions. It is at least possible that a vaguely defined predicate might pick out a precise object (the number of people on campus, on a university with a core campus area and other empty but vaguely associated land, might be 15,214 despite vagueness in "campus"). The question is whether we can translate the notion of an algorithm to formal terms.
So let us turn to the hard work of considering whether there is an argument for accepting Church's thesis. A natural first suggestion is that the step-by-step and finite nature of any algorithm is always within the reach of, or reflected by, some Turing program or recursive function, so that the algorithmically computable functions are inevitably recursively computable.9 Already, this may amount to a consideration or reason in favor of accepting the Thesis. In chapter 45 of his An Introduction to Gödel's Theorems, Peter Smith advances a proposal according to which such considerations amount to proof.
Smith's overall strategy involves squeezing algorithmic computability between a pair of mathematically precise notions. Even if a condition C (say, being a tall person) is vague, it might remain that there is some completely precise sufficient condition S (being over seven feet tall), such that anything that is S is C, and a perfectly precise necessary condition N (being over five feet tall) such that anything that is C is N. So,

S ⟹ C ⟹ N
If it should also happen that N implies S, then the loop is closed, so that,

9. This idea is contained already in the foundational papers of Church, "An Unsolvable Problem," and Turing, "On Computable Numbers."



S ⟹ C ⟹ N ⟹ S

And the target condition C is equivalent to (squeezed between) the precise necessary and sufficient conditions. Of course, in our simple example, N does not imply S: being over five feet tall does not imply being over seven feet tall.
For Church's thesis, we already have that Turing computability is sufficient for algorithmic computability. So what is required is some necessary condition so that,

T ⟹ A ⟹ N
Turing computability implies algorithmic computability, and algorithmic computability implies the necessary condition. Church's thesis follows if, in addition, N implies Turing computability. As it turns out, we shall be able to specify a condition N which (mathematically) implies T. For demonstration of Church's thesis, it will be more controversial whether A implies N.
The argument has three stages: The idea is that (i) there are some necessary features of an algorithm, such that any algorithm has those features; (ii) any routine with those features is embodied in a generalized version of a Turing machine, a Kolmogorov-Uspenskii (K-U) machine; (iii) every function that is K-U computable is recursive, and so Turing computable.

Necessary features ⟹ K-U computability ⟹ Turing computability

The result is that K-U computability works as the precise condition N in the squeezing argument: A implies N, and N implies T. So T iff A iff N, and Church's thesis is established, or no less plausible than the conclusion of this argument.
Perhaps the following are necessary conditions on any algorithm. We are free, however, to hold that any routine which satisfies the constraints is an algorithm; if this is so the conditions are necessary and sufficient, and we may see them as an extension of our initial more sketchy account.10

AC (1) There is some dataspace consisting of a finite array of cells which may stand in some relations R0, R1 ... Ra and contain some symbols s0, s1 ... sb.

10. Smith seems to grant that some such conditions are necessary, even though some method may satisfy the conditions yet fail to count as an algorithm. Perhaps this is because he is impressed by the initial examples of routines implemented by human agents with relatively limited computing power. This is not a problem for his squeezing argument, since the corresponding recursive function may yet be computable by some other method which satisfies more narrow constraints, for example, by a Turing machine.


(2) At every stage in a computation, there is some finite active portion of the dataspace upon which the algorithm operates.

(3) The body of the algorithm includes finitely many instructions for modifying the active portion of the dataspace depending on its character, and for jumping to the next set of instructions.

(4) For the calculation of a function f(x⃗) = y there is some finite initial representation of x⃗ and some way to read off the value of y, after a finite number of steps.
So this sets up an algorithm abstractly described. It is hard to see how an algorithm would not involve some space, portions of which would stand in different relations. At any given stage, the algorithm operates on some portion of the space, where these operations may depend upon, and modify, the arrangement of the active space. The algorithm itself consists of some instructions for operating on the dataspace, where these are generically of the sort: if the active area is of type t, perform action a, and go to new instructions q. The calculation of a function f(x⃗) somehow takes x⃗ as an input, and gives a way to read off the value of y as an output. And an algorithm terminates in a finite number of steps. The finite constraints on the dataspace, relations, and symbols from (1) seem to be consequences of the rest: Beginning with a finite initial representation of some x⃗, including finitely many cells of the dataspace standing in finitely many relations, filled with finitely many symbols, and then modifying finite portions of the space finitely many times by means of finite instructions, all we are going to get are finitely many cells, standing in finitely many relations, filled with finitely many symbols.
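The generic instruction shape ("if the active area is of type t, perform action a, and go to instructions q") amounts to a dispatch loop. A schematic sketch with toy types and actions (nothing here is from the text; labels, types, and the dataspace are arbitrary stand-ins):

```python
def run(instructions, space, classify):
    # instructions: {(label, type): (action, next_label)}; label 0 halts
    q, steps = 1, 0
    while q != 0:
        action, q = instructions[(q, classify(space))]
        space = action(space)
        steps += 1
    return space, steps

# toy example: count a number down to zero
ins = {
    (1, "pos"): (lambda s: s - 1, 1),
    (1, "zero"): (lambda s: s, 0),
}
```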
On the face of it, given their extreme simplicity, it is not obvious that Turing machines compute every algorithmically computable function. But a related device, the K-U machine, described in 1958 (the cited English translation is from later), purports to implement conditions along these lines. A somewhat modified version of the original K-U machine is as follows.

KU (1) There are some cells c0, c1 ... ca which may stand in binary relations R0, R1 ... Rb and contain symbols s0, s1 ... sc. In simple cases, we may think of such arrangements graphically as follows, where different relations are represented by arrows of different colors.

(L) [Figure: three cells; the origin cell, containing ?, is joined to cell a by an R0 arrow and to cell b by an R1 arrow.]

Each arrow, regardless of direction, is an edge.


(2) The dataspace always includes exactly one origin whose content is some arbitrary symbol, as ? in the upper cell of (L), where the active area includes all cells on paths ≤ n edges from the origin, for n ≥ 1.

(3) Instructions are finitely many quadruples of the sort ⟨qi, Sa, Sb, qj⟩, where qi and qj are instruction labels, Sa describes an active area, and Sb a state with which the active area is to be replaced. Associate each cell in Sa with the least number of edges between it and the origin; let n be the greatest such integer in Sa; this n remains the same in every quadruple with label qi, though the value of n may vary as qi varies. Again, instructions are a function in the sense that no instruction has ⟨qi, Sa⟩ the same but ⟨Sb, qj⟩ different. We may see Sa and Sb as follows.
(M) [Figure: an active area Sa of depth n = 2 and a replacement configuration Sb, each a small graph of cells (?, a, b, c) linked by arrows; concentric rectangles mark the boundary cells.]

In this case n = 2. The active area Sa is replaced by the configuration Sb. The concentric rectangles indicate the boundary cells, which may themselves be related to cells not part of the active area; the replacing area must have a boundary with cells to match boundary cells of the active area.
(4) There is some finite initial setup, and some means of reading off the final value of the function (for different relation and symbol sets, these may be different). We think of the origin cell as the machine head, where an algorithm always begins with an instruction label qi = 1 and terminates when qi = 0.
So a K-U machine is a significant generalization of a Turing machine. We allow arbitrarily many symbols. And the dataspace is no longer a tape with cells in a fixed linear relation, but a space with cells in arbitrary relations which may themselves be modified by the program. Instructions respond to, and modify, not just individual cells, but arbitrarily large areas of the dataspace. At the same time, a K-U machine remains a generalized Turing machine: It remains that an instruction qi is of the sort, if Sa perform action A and go to instruction qj. So, the instruction (M) might be applied to get,

(N) [Figure: a dataspace (A) whose active area, marked by a dotted line, is of the sort required by instruction (M); and the resultant dataspace (B), in which that area has been replaced by Sb.]

As indicated by the dotted line, the dataspace (A) has an active area of the sort required in instruction (M); so the active area is replaced according to the instruction for the resultant space (B). The example is arbitrary. But that is the point: The machine allows arbitrary rote modifications of a dataspace. Observe that instructions with Sa ≠ Sa′ might both map onto a given dataspace in case the number n of edges from the origin in Sa is different from Sa′ (say an active area with a box for n = 1 inside the box in (N)). But the consistency requirement is satisfied with constant n: for consistency, it is sufficient to require that so long as n(qi, Sa) is a constant, there is no instruction with ⟨qi, Sa⟩ the same but ⟨Sb, qj⟩ different.
Perhaps the relation to Turing machines already makes it plausible that every K-U computable function is recursive. But we can argue for this result directly, very much as for T14.2. We have been through this sort of thing a couple of times now, and I indicate only some of the key steps. (You will find further details in answers to E14.14, though, of course, you should try it yourself!) Begin assigning numbers to labels, symbols, cells, and relations in some reasonable way.

a. g_qi = 3 + 8i
b. g_si = 5 + 8i
c. g_ci = 7 + 8i
d. g_r^j_i = 9 + 8(2^i · 3^j)
The number for an edge, EDGE(e), is of the sort 2^⟨ca⟩ · 3^⟨r¹⟩ · 5^⟨si⟩ · 7^⟨cb⟩, where the superscript on the relation is 1 or 0 depending on the direction of the arrow. Thus an edge represents information as follows,

(O) [Figure: cells ca and cb joined by an arrow labeled r, with ca containing symbol s.]

There are cells ca and cb related by r (in one direction or the other) where ca has content s. The edge leaves the content of cb undetermined, though it would be filled by an edge in which that cell were the first. Then some data, DATA(d), is a sequence of edges 2^g(e0) · ... · πn^g(en). Cells m and n are connected on d just when edges beginning with the one reach to the other; that is, when there is a sequence with


members from d, beginning with m, ending with n, where the ends are linked by
intermediate members.
CONNECTED(d, m, n) =def

(∃x ≤ d){(∀i < len(x))(∃j < len(d)) exp(x, i) = exp(d, j) ∧

exp(exp(x, 0), 0) = m ∧ exp(exp(x, len(x) − 1), 3) = n ∧

(∀i | 0 ≤ i < len(x) − 1) exp(exp(x, i), 3) = exp(exp(x, i + 1), 0)}

So there is a sequence x with edges from d, whose first cell is numbered m and last cell numbered n, such that the last cell of one edge is the same as the first cell of the next. Then say a dataspace, DATASP(d), is some data every cell of which is connected to an origin cell 0, and no cell of which is connected back to itself (so connection in a dataspace is a strict partial order).

DATASP(d) =def

DATA(d) ∧ (∀i < len(d)) CONNECTED(d, ⟨0⟩, exp(exp(d, i), 3)) ∧

¬(∃i < len(d)) CONNECTED(d, exp(exp(d, i), 0), exp(exp(d, i), 0))
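With edges as tuples (ca, r, s, cb) standing in for the coded sequences (a sketch of mine, not the arithmetized definitions themselves), CONNECTED and DATASP come to this:

```python
def connected(d, m, n):
    # is there a nonempty chain of edges from d leading cell m to cell n,
    # where the end cell of each edge begins the next?
    reach, frontier = set(), {cb for (ca, r, s, cb) in d if ca == m}
    while frontier:
        c = frontier.pop()
        if c in reach:
            continue
        reach.add(c)
        frontier |= {cb for (ca, r, s, cb) in d if ca == c}
    return n in reach

def datasp(d):
    # every end cell connected from the origin 0, and no cell connected
    # back to itself (so connection is a strict partial order)
    return all(connected(d, 0, cb) for (ca, r, s, cb) in d) and \
           not any(connected(d, ca, ca) for (ca, r, s, cb) in d)
```

For example, [(0, "R0", "s", 1), (1, "R1", "s", 2)] is a dataspace, while adding the edge (2, "R0", "s", 0) creates a cycle, and the acyclicity clause fails.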

Then a subspace s of d, SUBSP(d, s), is a dataspace every link of which belongs to d. The minimum links to cell n, minlnks(d, n), is the least y that is the length of a subspace connecting n to the origin. The depth, depth(d), of a dataspace is the least y greater than or equal to the minimum number of links to every cell in the space. A border cell, BORDER(d, n), is a cell with minlnks(d, n) = depth(d). The n-space, nspace(d, n), is the least y including all the links in any subspace of d with depth n; so it includes all the cells in d up to depth n. And the maximum cell of a dataspace, maxcell(d), is the least y greater than or equal to every cell number in the space.
Where the cells are sequenced and numbered, spaces are comparable, not when they are identical, but when they are isomorphic. For this, a pair, PAIR(p), is of the sort 2^i · 3^j; and a relation on a finite domain, REL(r), is a sequence 2^g(p0) · ... · πn^g(pn). A relation is a 1:1 map, MAP(m), iff no x is related to more than one y, and different objects x are not related to the same y; so,
MAP(m) =def

REL(m) ∧ (∀i < len(m))(∀j < len(m))

[exp(exp(m, i), 0) = exp(exp(m, j), 0) ↔ exp(exp(m, i), 1) = exp(exp(m, j), 1)]
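In a quick Python rendering (plain pairs instead of coded pairs; a sketch, not the arithmetized definition), the biconditional does double duty: left-to-right makes the relation functional, right-to-left makes it injective.

```python
def is_map(r):
    # r is a list of (x, y) pairs; r is a 1:1 map exactly when, for every
    # two pairs, the first coordinates agree iff the second coordinates do
    return all((x1 == x2) == (y1 == y2)
               for (x1, y1) in r for (x2, y2) in r)
```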

Map m has the cells of dataspace d in its domain, DOM(m, d), just in case m is a map (that takes 0 to 0 and) for any edge ⟨ca, r^j_i, si, cb⟩ in d, has some pair ⟨cb, x⟩ in m. The output value of a map for a given input, mapv(m, x) = y, is the least y such that ⟨x, y⟩ is in the map, and otherwise is some default value. Then dataspace b is a projection of dataspace a on map m, proj(m, a) = b, just in case a and b are identical except that the cell numbers in a are mapped to cell numbers in b.
proj(m, a) =def μy(∀i < len(a))

[mapv(m, exp(exp(a, i), 0)) = exp(exp(y, i), 0) ∧ exp(exp(a, i), 1) = exp(exp(y, i), 1) ∧

exp(exp(a, i), 2) = exp(exp(y, i), 2) ∧ mapv(m, exp(exp(a, i), 3)) = exp(exp(y, i), 3)]

Spaces a and b match on map m, MATCH(m, a, b), just in case each link in proj(m, a) is identical to a link in b and each link in b is identical to one in proj(m, a). And spaces are isomorphic, ISO(a, b), just in case there exists a map including domain a on which they so match (where the bound for the map is a function of the maximum cell numbers from the spaces).
The number for an instruction, INS(n), is of the sort 2^⟨qi⟩ · 3^⌜Sa⌝ · 5^⌜Sb⌝ · 7^⟨qj⟩, where any cell in the border of Sa reappears in Sb. And a K-U machine, KUMACH(m), is a sequence of instructions 2^g(i0) · ... · πn^g(in), where instructions at any label have the depth of Sa the same, but no instructions at the same label have Sa isomorphic. Then each K-U machine is associated with a Gödel number, and there is an enumeration of K-U machines. And from a K-U machine, instruction number, and dataspace, there is a function to machine states: the machine state is that instruction which for machine m has instruction label qi, with Sa isomorphic to the same-sized portion of the dataspace d. As before, if a K-U machine includes states with instruction label qi, but no instruction of the sort ⟨qi, Sa, x, y⟩, let the machine be augmented to include ⟨qi, Sa, Sa, qi⟩; that way, it will loop rather than hang in that state.
Then,

machs(m, q, d) = μy(∃i < len(m))[y = exp(m, i) ∧ exp(y, 0) = q ∧ ISO(exp(y, 1), nspace(d, depth(exp(y, 1))))]

So the machine state is the least y with label q such that Sa maps to the dataspace.
Now we are ready for recursive definitions space(m, n, j) and state(m, n, j) that
describe the dataspace and machine state as a function of the K-U machine, initial
value n of f(n), and step j of operation. Suppose functions code(n) and decode(d)
that take the initial value n into a dataspace, and the final dataspace into the value it
represents. We require an operation d − a that takes a dataspace d and an active area
a and returns d without a. For this, recursively define del(d, a, y).
del(d, a, 0) = 1

del(d, a, Sy) = del(d, a, y)                      if (∃i < len(a)) exp(a, i) = exp(d, y)
del(d, a, Sy) = ⟨exp(d, y)⟩ ⋆ del(d, a, y)        otherwise

where ⟨exp(d, y)⟩ is the number coding the one-member sequence with just exp(d, y).
So del picks out the members of d that are not in a (since the length of 1 is 0, a ⋆ 1
is just a). Then d − a =def del(d, a, len(d)). Now the base cases for the functions are
straightforward.
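Rendered concretely (a sketch only, with dataspaces as Python lists rather than coded numbers), del and the subtraction it defines come to this:

```python
def del_rec(d, a, y):
    """The members of d with index < y that do not occur in a."""
    if y == 0:
        return []                      # 1 codes the empty sequence
    rest = del_rec(d, a, y - 1)
    if d[y - 1] in a:                  # some member of a equals this member of d
        return rest
    return [d[y - 1]] + rest           # prepend the one-member sequence

def minus(d, a):                       # d - a
    return del_rec(d, a, len(d))
```

So minus([1, 2, 3, 2], [2]) drops both occurrences of 2 (the order of the surviving links does not matter for a dataspace).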
space(m, n, 0) = code(n)

state(m, n, 0) = machs(m, ⟨1⟩, space(m, n, 0))
And where state(m, n, j) is some ⟨qi, Sa, Sb, qj⟩, say the active area is the n-space of
space(m, n, j) where n is the depth of Sa; and for an active area a, the complement
space is space(m, n, j) − a. Then,
space(m, n, Sj) = the least y such that there are maps a on Sa and b on Sb, and

    Sa matches the active area on map a, and
    a and b agree on the mapping of any cell in the border of Sa, and
    b maps any cell not in the border of Sa to a cell not in the complement space, and
    y is the projection of b with Sb, concatenated to the complement space.
The idea is to delete the cells from state.m; n; j/ that are matched with Sa and replace
them with the cells from Sb ; for this, it is important to get the mappings to line up
so that the borders match as they should, and new cells do not walk on old ones; once
this is done, the replacement is straightforward. So there is a map a on which Sa
matches the active area and a map b that gives the destination cells for Sb . Map
b is such that: numbers of border cells are properly connected up with the existing
dataspace; cells not in the border are sent to open numbers; and the new dataspace
consists of the complement space together with the projected cells from b and Sb .
Then,

state(m, n, Sj) = machs(m, exp(state(m, n, j), 3), space(m, n, Sj))
At this stage, functions for stop(i, n, j) and f(n) are as before.
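The replacement step can be illustrated in a drastically simplified setting (a sketch only: instructions carry concrete links, so no isomorphic matching, renaming, or border bookkeeping is needed; the little machine shown is invented for illustration):

```python
def step(machine, q, space):
    """One step of a much-simplified K-U style machine. An instruction is
    (qi, Sa, Sb, qj), with Sa and Sb frozensets of links; fire the first
    instruction at label q whose Sa is literally present, deleting Sa
    from the space and adding Sb."""
    for (qi, Sa, Sb, qj) in machine:
        if qi == q and Sa <= space:
            return qj, (space - Sa) | Sb
    return None, space                 # no applicable instruction

# a one-instruction machine that rewrites the value at cell 0 from 0 to 1
m = [(1, frozenset({(0, 0)}), frozenset({(0, 1)}), 2)]
q, d = step(m, 1, frozenset({(0, 0)}))
```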


There are a lot of details (and you have a chance to work some of them out in the exercises)! But it should already be clear that any K-U computable function is recursive.
So the squeezing argument is complete: Turing computability implies algorithmic
computability, and algorithmic computability implies K-U computability. But every
K-U computable function is recursive and so Turing computable. So the algorithmically computable functions are the same as the Turing computable functions. So
Church's thesis!
This argument is just as strong as the premise that algorithmic computability
implies K-U computability. For this, we have translated an informal notion into
a formal one. But this strategy is vulnerable to the charge that we have somehow
excluded from the formal account methods that are properly algorithmic, though not
Turing computable. There are different responses. First, we should be clear about the
range of K-U computability. Say we are interested in parallel computing, whether by
individuals following instructions or computing devices. A K-U machine has but a
single origin; this might seem to be a problem. Still, an active area might have many
shapes and things might be set up as follows,

[Diagram (P): a dataspace with a single origin cell linked outward to several satellite centers]

with satellite centers, to achieve the effect of parallel computing. So it is important
to recognize the generality already built into the K-U machine.
Second, it may be that we have ruled out some method that is properly algorithmic, but that our strategies naturally adapt to show that this new method calculates
nothing but recursive functions as well. So, for example, cells in our implementation
of the K-U machine stand just in binary relations. An obvious extension would be to
allow relations other than binary. Given an extended argument to show that the result
computes recursive functions, Church's thesis is not threatened.
Finally, it may be that our argument goes some distance to illuminating the effective range of the equation between computability and recursive functions. Perhaps the K-U machine is plausible as a technical specification for algorithmic computability, or for a specific (and important) sort of algorithmic computability. Then
Church's thesis is demonstrably true with respect to it. Perhaps Zeno's machine or
the M-H machine computes functions other than recursive functions. Still, insofar as
these are not algorithmic (or of the specified sort), they will be irrelevant to the thesis
as specified. In this case, Church's thesis is established.
To the extent that Church's thesis is either plausible or established, our limiting results become full-fledged incomputability results. And, together with incompleteness for our logical systems, they are foundational to thinking about the subject
matter.
*E14.14. Assuming functions code(n) and decode(d), use the outline in the text to
complete the demonstration that any K-U computable function f(n) is recursive.
E14.15. For each of the following concepts, explain in an essay of about two pages,
so that Hannah could understand. In your essay, you should (i) identify the
objects to which the concept applies, (ii) give and explain the definition, and
give and explicate examples (iii) where the concept applies, and (iv) where
it does not. Your essay should exhibit an understanding of methods from the
text.

a. The Turing computable functions, and their relation to the recursive functions.
b. The essential elements from the chapter contributing to a demonstration of
the decision problem, along with the significance of Church's thesis for this
result.
c. The essential elements from this chapter contributing to a demonstration of
(the semantic version of) the incompleteness of arithmetic.
d. Church's thesis, along with reasons for thinking it is true, including the possibility of demonstrating its truth.

Concluding Remarks

Looking Forward and Back


We began this text in Part I setting up the elements of classical symbolic logic. Thus
we began with four notions of validity: logical validity, validity in our derivation
systems AD and ND, along with semantic (sentential and) quantificational validity.
After a parenthesis in Part II to think about techniques for reasoning about logic, we
began to put those techniques to work. The main burden of Part III was to show
soundness and adequacy of our classical logic, that Γ ⊢ P iff Γ ⊨ P. This is the
good news. In Part IV we established some limiting results. These include Gödel's
first and second theorems, that no consistent, recursively axiomatizable extension
of Q is negation complete, and that no consistent recursively axiomatized theory
extending PA proves its own consistency. Results about derivations are associated
with computations, and the significance of this association is extended by means of
Church's thesis. This much constitutes a solid introduction to classical logic, and
should position you to make progress in logic and philosophy, along with related areas
of mathematics and computer science.
Excellent texts which mostly overlap the content of this one, but extend it in different
ways, are Mendelson, Introduction to Mathematical Logic; Enderton, Introduction
to Mathematical Logic; and Boolos, Burgess, and Jeffrey, Computability and Logic;
these put increased demands on the reader (and such demands are one motivation for
our text), but should be accessible to you now; Shoenfield, Introduction to Mathematical Logic is excellent yet still more difficult. Smith, An Introduction to Gödel's
Theorems extends the material of Part IV. Much of what we have done presumes
some set theory, as Enderton, Elements of Set Theory, or model theory, as Manzano,
Model Theory and, more advanced, Hodges, A Shorter Model Theory.
In places, we have touched on logics alternative to classical logic, including
multi-valued logic, modal logic, and logics with alternative accounts of the conditional. A good place to start is Priest, Non-Classical Logics, which is profitably read
with Roy, "Natural Derivations for Priest," which introduces derivations in a style
much like our own. Our logic is first-order insofar as quantifiers bind just variables
for objects. Second-order logic lets quantifiers bind variables for predicates as well
(so ∀x∀y(x = y → ∀F(Fx ↔ Fy)) expresses the indiscernibility of identicals).
Second-order logic has important applications in mathematics, and raises important
issues in metalogic. For this, see Shapiro, Foundations Without Foundationalism,
and Manzano, Extensions of First Order Logic.
Philosophy of logic and mathematics is a subject matter of its own. Shapiro,
Philosophy of Mathematics and Its Logic (along with the rest of the articles in the
Oxford Handbook), and Shapiro, Thinking About Mathematics are a good place to
start. Benacerraf and Putnam, Philosophy of Mathematics is a collection of classic
articles.
Smith's online Teach Yourself Logic is an excellent comprehensive guide to
further resources.
Have fun!

Answers to Selected Exercises

ANSWERS FOR CHAPTER 1

Chapter One
E1.1. Say whether each of the following stories is internally consistent or inconsistent. In either case, explain why.
a. Smoking cigarettes greatly increases the risk of lung cancer, although most
people who smoke cigarettes do not get lung cancer.
Consistent. Even though the risk of cancer goes up with smoking, it may
be that most people who smoke do not have cancer. Perhaps 49% of people
who smoke get cancer, and 1% of people who do not smoke get cancer. Then
smoking greatly increases the risk, even though most people who smoke do
not get it.
c. Abortion is always morally wrong, though abortion is morally right in order
to save a woman's life.
Inconsistent. Suppose (whether you believe it or not) that abortion is always
morally wrong. Then it is wrong to save a woman's life. So the story requires
that it is and is not wrong to save a woman's life.
e. No rabbits are nearsighted, though some rabbits wear glasses.
Consistent. One reason for wearing glasses is to correct nearsightedness. But
glasses may be worn for other reasons. It might be that rabbits who wear
glasses are farsighted, or have astigmatism, or think that glasses are stylish.
Or maybe they wear sunglasses just to look cool.
g. Bill Clinton was never president of the United States, although Hillary is
president right now.
Consistent. Do not get confused by the facts! In a story it may be that Bill
was never president and his wife was. Thus this story does not contradict
itself and is consistent.
i. The death star is a weapon more powerful than that in any galaxy, though
there is, in a galaxy far far away, a weapon more powerful than it.
Inconsistent. If the death star is more powerful than any weapon in any
galaxy, then according to this story it is and is not more powerful than the
weapon in the galaxy far far away.
E1.2. For each of the following sentences, (i) say whether it is true or false in the
real world and then (ii) say if you can whether it is true or false according to
the accompanying story. In each case, explain your answers.
c. Sentence: After overrunning Phoenix in early 2006, a herd of buffalo overran


Newark, New Jersey.
Story: A thundering herd of buffalo overran Phoenix Arizona in early 2006.
The city no longer exists.
(i) It is false in the real world that any herd of buffalo overran Newark anytime
after 2006. (ii) And, though the story says something about Phoenix, the story
does not contain enough information to say whether the sentence regarding
Newark is true or false.
e. Sentence: Jack Nicholson has swum the Atlantic.
Story: No human being has swum the Atlantic. Jack Nicholson and Bill
Clinton and you are all human beings, and at least one of you swam all the
way across!
(i) It is false in the real world that Jack Nicholson has swum the Atlantic. (ii)
This story is inconsistent! It requires that some human both has and has not
swum the Atlantic. Thus we refuse to say that it makes the sentence true or
false.
g. Sentence: Your instructor is not a human being.
Story: No beings from other planets have ever made it to this country. However, your instructor made it to this country from another planet.
(i) Presumably, the claim that your instructor is not a human being is false
in the real world (assuming that you are not working by independent, or
computer-aided study). (ii) But this story is inconsistent! It says both that
no beings from other planets have made it to this country and that some being
has. Thus we refuse to say that it makes any sentence true or false.
i. Sentence: The Yugo is the most expensive car in the world.
Story: Jaguar and Rolls Royce are expensive cars. But the Yugo is more
expensive than either of them.
(i) The Yugo is a famously cheap automobile. So the sentence is false in the
real world. (ii) According to the story, the Yugo is more expensive than some
expensive cars. But this is not enough information to say whether it is the
most expensive car in the world. So there is not enough information to say
whether the sentence is true or false.

E1.3. Use our invalidity test to show that each of the following arguments is not
logically valid, and so not logically sound.
*For each of these problems, different stories might do the job.
a. If Joe works hard, then he will get an A
Joe will get an A
Joe works hard
a. In any story with premises true and conclusion false,
1. If Joe works hard, then he will get an A
2. Joe will get an A
3. Joe does not work hard
b. Story: Joe is very smart, and if he works hard, then he will get an A.
Joe will get an A; however, Joe cheats and gets the A without working
hard.
c. This is a consistent story that makes the premises true and the conclusion false; thus, by definition, the argument is not logically valid.
d. Since the argument is not logically valid, by definition, it is not logically
sound.
E1.4. Use our validity procedure to show that each of the following is logically
valid, and to decide (if you can) whether it is logically sound.
*For each of these problems, particular reasonings might take different forms.
a. If Bill is president, then Hillary is first lady
Hillary is not first lady
Bill is not president
a. In any story with premises true and conclusion false,
(1) If Bill is president, then Hillary is first lady
(2) Hillary is not first lady
(3) Bill is president


b. In any such story,
Given (1) and (3),
(4) Hillary is first lady
Given (2) and (4),
(5) Hillary is and is not first lady

c. So no story with the premises true and conclusion false is a consistent


story; so by definition, the argument is logically valid.
d. In the real world Hillary is not first lady, and Bill and Hillary are married,
so it is true that if Bill is president, then Hillary is first lady; so all the
premises are true, and by definition the argument is logically sound.
E1.5. Use our procedures to say whether the following are logically valid or invalid,
and sound or unsound. Hint: You may have to do some experimenting to
decide whether the arguments are logically valid or invalid and so to decide
which procedure applies.
c. Some dogs have red hair
Some dogs have long hair
Some dogs have long red hair
a. In any story with the premise true and conclusion false,
1. Some dogs have red hair
2. Some dogs have long hair
3. No dogs have long red hair
b. Story: There are dogs with red hair, and there are dogs with long hair.
However, due to a genetic defect, no dogs have long red hair.
c. This is a consistent story that makes the premise true and the conclusion
false; thus, by definition, the argument is not logically valid.
d. Since the argument is not logically valid, by definition, it is not logically
sound.
E1.6. Use our procedures to say whether the following are logically valid or invalid,
and sound or unsound.


c. The earth is (approximately) round
There is no round square
a. In any story with the premise true and conclusion false,
(1) The earth is (approximately) round
(2) There is a round square
b. In any such story, given (2),
(3) Something is round and not round

c. So no story with the premises true and conclusion false is a consistent


story; so by definition, the argument is logically valid.
d. In the real world the earth is (approximately) round, so the premise is
true and by definition the argument is logically sound.
E1.8. Which of the following are true, and which are false? In each case, explain
your answers, with reference to the relevant definitions.
c. If the conclusion of an argument is true in the real world, then the argument
must be logically valid.
False. An argument is logically valid iff there is no consistent story that
makes the premises true and the conclusion false. Though the conclusion is
true in the real world (and so in the true story), there may be some other story
that makes the premises true and the conclusion false. If this is so, then the
argument is not logically valid.
e. If a premise of an argument is false in the real world, then the argument cannot
be logically valid.
False. An argument is logically valid iff there is no consistent story that
makes the premises true and the conclusion false. For logical validity, there
is no requirement that every story have true premises, only that ones that
do also have true conclusions. So an argument might be logically valid, and
have premises that are false in many stories, including the true story.
g. If an argument is logically sound, then its conclusion is true in the real world.
True. An argument is logically valid iff there is no consistent story that makes
the premises true and the conclusion false. An argument is logically sound
iff it is logically valid and its premises are true in the real world. Since the
premises are true in the real world, they hold in the true story; since the
argument is valid, this story cannot be one where the conclusion is false. So
the conclusion of a sound argument is true in the real world.
i. If the conclusion of an argument cannot be false (is false in no consistent
story), then the argument is logically valid.
True. If there is no consistent story where the conclusion is false, then there
is no consistent story where the premises are true and the conclusion is false;
but an argument is logically valid iff there is no consistent story where the
premises are true and the conclusion is false. So the argument is logically
valid.

Chapter Two
E2.1. Assuming that S may represent any sentence letter, and P any arbitrary
expression of Ls , use maps to determine whether each of the following expressions is (i) of the form .S ! P / and then (ii) whether it is of the form
.P ! P /. In each case, explain your answers.
e. ((… → …) → (… → …))

[map diagrams]

(i) Since (… → …) is not a sentence letter, there is nothing to which S maps,
and ((… → …) → (… → …)) is not of the form (S → P). (ii) Since P maps
to any expression, ((… → …) → (… → …)) is of the form (P → P) by a map
sending each P to (… → …).
E2.3. For each of the following expressions, demonstrate that it is a formula and a
sentence of Ls with a tree. Then on the tree (i) bracket all the subformulas,
(ii) box the immediate subformula(s), (iii) star the atomic subformulas, and
(iv) circle the main operator.
a. A

This is a formula by FR(s). In this case, the tree is very simple: the only
subformula is A itself, an atomic subformula. There are no operators, and so no
main operator. There are no immediate subformulas.
E2.4. Explain why the following expressions are not formulas or sentences of Ls .
Hint: you may find that an attempted tree will help you see what is wrong.
b. (P → Q)

This is not a formula because P and Q are not sentence letters of Ls. They
are part of the metalanguage by which we describe Ls, but are not among the
Roman italics (with or without subscripts) that are the sentence letters. Since
it is not a formula, it is not a sentence.
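The formation rules can themselves be put to work mechanically. Here is a sketch of a checker for a pared-down version of Ls in which any capital Roman letter counts as a sentence letter, '~' writes the negation, and '->' the conditional (the ASCII spellings are our own, not the text's):

```python
def is_formula(s):
    """True iff s is a formula: a sentence letter, ~P for a formula P,
    or (P->Q) for formulas P and Q, with the outer parentheses required."""
    ok, rest = parse(s.replace(' ', ''))
    return ok and rest == ''

def parse(s):
    if s == '':
        return False, s
    if s[0].isupper():                       # FR(s): sentence letters are formulas
        return True, s[1:]
    if s[0] == '~':                          # FR(~): if P is a formula, so is ~P
        return parse(s[1:])
    if s[0] == '(':                          # FR(->): if P, Q are, so is (P->Q)
        ok, rest = parse(s[1:])
        if not ok or not rest.startswith('->'):
            return False, s
        ok, rest = parse(rest[2:])
        if not ok or not rest.startswith(')'):
            return False, s
        return True, rest[1:]
    return False, s
```

So '(~A->(B->C))' passes, while 'A->B', with its outer parentheses missing, does not.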
E2.5. For each of the following expressions, determine whether it is a formula and
sentence of Ls . If it is, show it on a tree, and exhibit its parts as in E2.3. If it
is not, explain why as in E2.4.
a. ((A → B) → ((A → B) → A))

This is a formula and a sentence.

[construction tree: subformulas bracketed, immediate subformulas boxed, atomic
subformulas starred, and the main operator circled]

c. (A → B) → ((A → B) → A)

[attempted construction tree]

Mistake! Not a formula or sentence. The attempt to apply FR(→) at the last step
fails, insofar as the outer parentheses are missing.
E2.6. For each of the following expressions, demonstrate that it is a formula and a
sentence of Ls with a tree. Then on the tree (i) bracket all the subformulas,
(ii) box the immediate subformula(s), (iii) star the atomic subformulas, and
(iv) circle the main operator.
a. (A ∧ B) → C

[tree: A, B, and C are formulas by FR(s); (A ∧ B) is a formula by FR′(∧);
(A ∧ B) → C is a formula by FR(→), outer parentheses dropped]

E2.7. For each of the formulas in E2.6a - e, produce an unabbreviating tree to find
the unabbreviated expression it represents.


a. (A ∧ B) → C

[unabbreviating tree: (A ∧ B) unabbreviates to ∼(A → ∼B) by AB(∧); then,
adding outer parentheses, the whole is (∼(A → ∼B) → C)]
E2.8. For each of the unabbreviated expressions from E2.7a-e, produce a complete
tree to show by direct application of FR that it is an official formula.
a. (∼(A → ∼B) → C)

[complete tree: A, B, and C are formulas by FR(s); ∼B is a formula by FR(∼);
(A → ∼B) by FR(→); ∼(A → ∼B) by FR(∼); and (∼(A → ∼B) → C) by FR(→)]
E2.12. For each of the following expressions, demonstrate that it is a term of Lq


with a tree.
c. h³cf¹yx

This is a term as follows.

[tree: c, y, and x are terms by TR(c), TR(v), and TR(v); since y is a term,
f¹y is a term by TR(f); given the three input terms c, f¹y, and x, h³cf¹yx is a
term by TR(f)]
E2.13. Explain why the following expressions are not terms of Lq .



d. g²yf¹xc

y is a term, f¹x is a term, and c is a term; but g² followed by these three
terms is not a term. g²yf¹x is a term, but not g²yf¹xc.
E2.14. For each of the following expressions, determine whether it is a term of Lq ;
if it is, demonstrate with a tree; if not, explain why.
a. g²g²xyf¹x

This is a term as follows.

[tree: x, y, and x are terms by TR(v); g²xy and f¹x are terms by TR(f);
g²g²xyf¹x is a term by TR(f)]

b. h³cf²yx

This is not a term. c is a term, and f²yx is a term; but h³ followed by these
two terms is not a term.
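The TR rules likewise support a mechanical check. A sketch, with an expression given as a list of symbols and an arity table for the function symbols (constants and variables take no arguments; the token spellings are our own):

```python
def is_term(tokens, arity):
    """True iff tokens is exactly one term, given the arity table."""
    ok, rest = parse_term(tokens, arity)
    return ok and rest == []

def parse_term(tokens, arity):
    if not tokens:
        return False, tokens
    head, rest = tokens[0], tokens[1:]
    for _ in range(arity.get(head, 0)):      # TR(f): an n-place symbol needs n terms
        ok, rest = parse_term(rest, arity)
        if not ok:
            return False, tokens
    return True, rest                        # TR(c)/TR(v) when the arity is 0

ar = {'f1': 1, 'f2': 2, 'g2': 2, 'h3': 3}
```

So is_term(['g2', 'g2', 'x', 'y', 'f1', 'x'], ar) holds, as in (a), while is_term(['h3', 'c', 'f2', 'y', 'x'], ar) fails, since h³ is followed by only two terms, as in (b).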
E2.15. For each of the following expressions, (i) Demonstrate that it is a formula
of Lq with a tree. (ii) On the tree bracket all the subformulas, box the immediate subformulas, star the atomic subformulas, circle the main operator, and
indicate quantifier scope with underlines. Then (iii) say whether the formula
is a sentence, and if it is not, explain why.
b. B²ac

[tree: a and c are terms by TR(c); B²ac is an atomic formula by FR(r), and the
only subformula]

Since there are no variables, there are no free variables, and it is a sentence.
E2.16. Explain why the following expressions are not formulas or sentences of Lq .
c. ∀xB²xg²ax

This is not a formula because x is not a variable and a is not a constant. They
are symbols of the metalanguage, rather than symbols of Lq.
E2.17. For each of the following expressions, determine whether it is a formula and
a sentence of Lq . If it is a formula, show it on a tree, and exhibit its parts as
in E2.15. If it fails one or both, explain why.
d. ∀z(L¹z → (∀wR²wf³axw → ∀wR²f³azww))

This has a tree, so it is a formula. But x is free, so it is not a sentence.

[tree: a, z, w, and x are terms by TR(c) and TR(v); f³axw and f³azw are terms
by TR(f); L¹z, R²wf³axw, and R²f³azww are atomic formulas by FR(r);
∀wR²wf³axw and ∀wR²f³azww are formulas by FR(∀); (∀wR²wf³axw →
∀wR²f³azww) is a formula by FR(→); (L¹z → (∀wR²wf³axw → ∀wR²f³azww))
is a formula by FR(→); and the whole is a formula by FR(∀)]
E2.18. For each of the following expressions, (i) Demonstrate that it is a formula
of Lq with a tree. (ii) On the tree bracket all the subformulas, box the immediate subformulas, star the atomic subformulas, circle the main operator, and
indicate quantifier scope with underlines. Then (iii) say whether the formula
is a sentence, and if it is not, explain why.


c. ∃xAf¹g²ah³zwf¹x ∨ S

[tree: a, z, w, and x are terms by TR(c) and TR(v); f¹x is a term by TR(f);
h³zwf¹x is a term by TR(f); g²ah³zwf¹x is a term by TR(f); f¹g²ah³zwf¹x is a
term by TR(f); Af¹g²ah³zwf¹x and S are formulas by FR(r) and FR(s);
∃xAf¹g²ah³zwf¹x is a formula by FR′(∃); and the whole is a formula by FR′(∨)]

This has a tree, so it is a formula, but z and w are free, so it is not a sentence.
E2.19. For each of the formulas in E2.18, produce an unabbreviating tree to find the
unabbreviated expression it represents.


c. ∃xAf¹g²ah³zwf¹x ∨ S

[unabbreviating tree: the terms are built as in E2.18; the superscript is replaced
to give A¹f¹g²ah³zwf¹x; by AB(∃) the existential becomes ∼∀x∼A¹f¹g²ah³zwf¹x;
and by AB(∨), with outer parentheses, the whole becomes
(∼∼∀x∼A¹f¹g²ah³zwf¹x → S)]

So ∃xAf¹g²ah³zwf¹x ∨ S abbreviates (∼∼∀x∼A¹f¹g²ah³zwf¹x → S).
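Unabbreviating by AB(∨) and AB(∃) is a straightforward recursion over a formula's structure. A sketch, with formulas as nested tuples (the representation is ours, not the text's):

```python
def unabbreviate(f):
    """Expand ('or', P, Q) to (~P -> Q) and ('exists', x, P) to ~(all x)~P,
    leaving the primitive operators alone."""
    op = f[0]
    if op == 'atom':
        return f
    if op == 'not':
        return ('not', unabbreviate(f[1]))
    if op == 'imp':
        return ('imp', unabbreviate(f[1]), unabbreviate(f[2]))
    if op == 'all':
        return ('all', f[1], unabbreviate(f[2]))
    if op == 'or':                           # AB(v): P v Q abbreviates (~P -> Q)
        return ('imp', ('not', unabbreviate(f[1])), unabbreviate(f[2]))
    if op == 'exists':                       # AB(E): ExP abbreviates ~Ax~P
        return ('not', ('all', f[1], ('not', unabbreviate(f[2]))))
    raise ValueError(op)
```

Applied to a disjunction whose left disjunct is an existential, it produces the same doubled negation ∼∼∀x∼… seen in this answer.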
E2.20. For each of the unabbreviated expressions from E2.19, produce a complete
tree to show by direct application of FR that it is an official formula. In
each case, using underlines to indicate quantifier scope, is the expression a
sentence? Does this match with the result of E2.18?


c. (∼∼∀x∼A¹f¹g²ah³zwf¹x → S)

[complete tree: a, z, w, and x are terms by TR(c) and TR(v); f¹x, h³zwf¹x,
g²ah³zwf¹x, and f¹g²ah³zwf¹x are terms by TR(f); A¹f¹g²ah³zwf¹x and S are
formulas by FR(r) and FR(s); ∼A¹f¹g²ah³zwf¹x by FR(∼); ∀x∼A¹f¹g²ah³zwf¹x
by FR(∀); ∼∀x∼A¹f¹g²ah³zwf¹x and then ∼∼∀x∼A¹f¹g²ah³zwf¹x by FR(∼);
and the whole by FR(→)]

Since it has a tree it is a formula. But z and w are free, so it is not a sentence.
This is exactly the same situation as for E2.18(c).
E2.21. For each of the following expressions, (i) Demonstrate that it is a formula of
LNT with a tree. (ii) On the tree bracket all the subformulas, box the immediate subformulas, star the atomic subformulas, circle the main operator, and
indicate quantifier scope with underlines. Then (iii) say whether the formula
is a sentence, and if it is not, explain why.

b. ∃x∀y(x × y = x)

Both a formula and a sentence.

[tree: x and y are terms by TR(v); x × y is a term by TR(f); (x × y = x) is an
atomic formula by FR(r); ∀y(x × y = x) is a formula by FR(∀); and
∃x∀y(x × y = x) is a formula by FR(∃)]
E2.22. For each of the formulas in E2.21, produce an unabbreviating tree to find the
unabbreviated expression it represents.
b. ∃x∀y(x × y = x)

[unabbreviating tree: ×xy is the function symbol followed by two terms; =×xyx
is the relation symbol followed by two terms; then ∀y=×xyx; and with the
existential unabbreviated, ∼∀x∼∀y=×xyx]

So ∃x∀y(x × y = x) abbreviates ∼∀x∼∀y=×xyx.

Chapter Three
E3.1. Where A1 is as above, construct derivations to show each of the following.


a. A ∧ (B ∧ C) ⊢A1 B

1. A ∧ (B ∧ C)                         prem
2. (A ∧ (B ∧ C)) → (B ∧ C)             ∧2
3. B ∧ C                               2,1 MP
4. (B ∧ C) → B                         ∧1
5. B                                   4,3 MP
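Checking such a derivation is itself mechanical: each line must be a premise, an axiom, or follow from earlier lines by a rule. A sketch for the modus ponens bookkeeping alone, with formulas as atoms or (P, Q) pairs standing for P → Q (the format is ours, not the text's):

```python
def check(lines):
    """lines: list of (formula, justification); a justification is 'prem',
    'axiom', or ('MP', i, j) citing earlier 1-based lines i: P -> Q, j: P."""
    for n, (form, just) in enumerate(lines, start=1):
        if just in ('prem', 'axiom'):
            continue
        tag, i, j = just
        if tag != 'MP' or i >= n or j >= n:    # may cite only earlier lines
            return False
        cond, ante = lines[i - 1][0], lines[j - 1][0]
        # MP: from (P -> Q) and P, conclude Q
        if not (isinstance(cond, tuple) and cond == (ante, form)):
            return False
    return True

deriv = [
    ('A', 'prem'),
    (('A', 'B'), 'prem'),      # A -> B
    ('B', ('MP', 2, 1)),
]
```

Here check(deriv) succeeds, and would fail if line 3 concluded anything other than B.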

E3.2. Provide derivations for T3.6, T3.7, T3.9, T3.10, T3.11, T3.12, T3.13, T3.14,
T3.15, T3.16, T3.18, T3.19, T3.20, T3.21, T3.22, T3.23, T3.24, T3.25, and
T3.26. As you are working these problems, you may find it helpful to refer to
the AD summary on p. 85.
T3.12. ⊢AD (A → B) → (∼∼A → ∼∼B)

1. ∼∼A → A                                       T3.10
2. (∼∼A → A) → ((A → B) → (∼∼A → B))             T3.5
3. (A → B) → (∼∼A → B)                           2,1 MP
4. B → ∼∼B                                       T3.11
5. (B → ∼∼B) → ((∼∼A → B) → (∼∼A → ∼∼B))         T3.4
6. (∼∼A → B) → (∼∼A → ∼∼B)                       5,4 MP
7. (A → B) → (∼∼A → ∼∼B)                         3,6 T3.2

T3.16. ⊢AD A → (∼B → ∼(A → B))

1. (A → B) → (A → B)                             T3.1
2. A → ((A → B) → B)                             1 T3.3
3. ((A → B) → B) → (∼B → ∼(A → B))               T3.13
4. A → (∼B → ∼(A → B))                           2,3 T3.2

T3.21. A → (B → C) ⊢AD (A ∧ B) → C

1. A → (B → C)                                   prem
2. (B → C) → (∼C → ∼B)                           T3.13
3. A → (∼C → ∼B)                                 1,2 T3.2
4. ∼C → (A → ∼B)                                 3 T3.3
5. (∼C → (A → ∼B)) → (∼(A → ∼B) → C)             T3.14
6. ∼(A → ∼B) → C                                 5,4 MP
7. (A ∧ B) → C                                   6 abv

E3.3. For each of the following, expand the derivations to include all the steps
from theorems. The result should be a derivation in which each step is either
a premise, an axiom, or follows from previous lines by a rule.


b. Expand the derivation for T3.4

1. (B → C) → (A → (B → C))                                                A1
2. (A → (B → C)) → ((A → B) → (A → C))                                    A2
3. ((A → (B → C)) → ((A → B) → (A → C))) →
   ((B → C) → ((A → (B → C)) → ((A → B) → (A → C))))                      A1
4. (B → C) → ((A → (B → C)) → ((A → B) → (A → C)))                        3,2 MP
5. ((B → C) → ((A → (B → C)) → ((A → B) → (A → C)))) →
   (((B → C) → (A → (B → C))) → ((B → C) → ((A → B) → (A → C))))          A2
6. ((B → C) → (A → (B → C))) → ((B → C) → ((A → B) → (A → C)))            5,4 MP
7. (B → C) → ((A → B) → (A → C))                                          6,1 MP

E3.4. Consider an axiomatic system A2 as described in the main problem. Provide
derivations for each of the following, where derivations may appeal to any
prior result (no matter what you have done).

a. A → B; B → C ⊢A2 (C ∧ A)

1. A → B                               prem
2. (A → B) → ((B ∧ C) → (C ∧ A))       A3
3. (B ∧ C) → (C ∧ A)                   2,1 MP
4. B → C                               prem
5. (B ∧ C)                             4 abv
6. (C ∧ A)                             5,3 MP
d. ⊢A2 (A ∧ B) → (B → A)

1. A → A                               (c)
2. (A → A) → ((A ∧ B) → (B ∧ A))       A3
3. (A ∧ B) → (B ∧ A)                   2,1 MP
4. (A ∧ B) → (B → A)                   3 abv

g. A → B ⊢A2 B → A

1. A → B                               prem
2. (A → B) → ((B ∧ B) → (B ∧ A))       A3
3. (B ∧ B) → (B ∧ A)                   2,1 MP
4. (B ∧ B)                             (b)
5. (B ∧ A)                             3,4 MP
6. B → A                               5 abv

i. A → B; B → C; C → D ⊢A2 A → D

1. A → B                               prem
2. B → C                               prem
3. (C ∧ A)                             1,2 (a)
4. C → D                               prem
5. (C → D) → (D → C)                   (f)
6. D → C                               5,4 MP
7. (D → C) → ((C ∧ A) → (A ∧ D))       A3
8. (C ∧ A) → (A ∧ D)                   7,6 MP
9. (A ∧ D)                             8,3 MP
10. A → D                              9 abv

u. ⊢A2 (A → (B → C)) → ((A ∧ B) → C)

1. ((A ∧ B) ∧ C) → (A ∧ (B ∧ C))       (s)
2. (B ∧ C) → (B ∧ C)                   (e)
3. (A ∧ (B ∧ C)) → (A ∧ (B ∧ C))       2 (q)
4. ((A ∧ B) ∧ C) → (A ∧ (B ∧ C))       1,3 (l)
5. (((A ∧ B) ∧ C) → (A ∧ (B ∧ C))) →
   ((A ∧ (B ∧ C)) → ((A ∧ B) ∧ C))     (f)
6. (A ∧ (B ∧ C)) → ((A ∧ B) ∧ C)       5,4 MP
7. (A → (B → C)) → ((A ∧ B) → C)       6 abv

w. A → B; A → (B → C) ⊢A2 A → C

1. A → (B → C)                         prem
2. (A → (B → C)) → ((A ∧ B) → C)       (u)
3. (A ∧ B) → C                         2,1 MP
4. A → A                               (j)
5. A → B                               prem
6. A → (A ∧ B)                         4,5 (r)
7. A → C                               6,3 (l)
E3.5. Provide derivations for T3.29, T3.30 and T3.31, explaining in words for every step that has a restriction, how you know that that restriction is met.

T3.29. ⊢AD Aˣₜ → ∃xA   for any term t free for x in A

1. ∀x¬A → ¬Aˣₜ                            A4
2. ∀x¬A → [¬A]ˣₜ                          same expression
3. (∀x¬A → [¬A]ˣₜ) → (¬[¬A]ˣₜ → ¬∀x¬A)    T3.13
4. ¬[¬A]ˣₜ → ¬∀x¬A                        3,2 MP
5. Aˣₜ → ¬[¬A]ˣₜ                          T3.11
6. Aˣₜ → ¬∀x¬A                            4,5 T3.2
7. Aˣₜ → ∃xA                              6 abv

For (1): Since t is free for x in A, we can be sure that t is free for x in ¬A, and so that (1) is an instance of A4. Also for line (2) (not strictly necessary, as it involves no change) observe that ¬Aˣₜ is the same expression as [¬A]ˣₜ; this shift is tracked by the square brackets; it matters when it comes time to apply T3.11.
E3.6. Provide derivations to show each of the following.

a. ∀x(Hx → Rx), ∀yHy ⊢AD ∀zRz

1. ∀x(Hx → Rx)                            prem
2. ∀yHy                                   prem
3. ∀x(Hx → Rx) → (Hz → Rz)                A4
4. Hz → Rz                                3,1 MP
5. ∀yHy → Hz                              A4
6. Hz                                     5,2 MP
7. Rz                                     4,6 MP
8. ∀zRz                                   7 Gen*

c. ⊢AD ∃x∀yRxy → ∀y∃xRxy

1. ∀yRxy → Rxy                            A4
2. Rxy → ∃xRxy                            T3.29
3. ∀yRxy → ∃xRxy                          1,2 T3.2
4. ∃x∀yRxy → ∃xRxy                        3 T3.30
5. ∃x∀yRxy → ∀y∃xRxy                      4 Gen

E3.8. Provide demonstrations for the following instances of T3.36 and T3.37. Then, in each case, say in words how you would go about showing the results for an arbitrary number of places.

b. (s = t) → (A²rs → A²rt)

1. (y = u) → (A²xy → A²xu)                                    A7
2. ∀x((y = u) → (A²xy → A²xu))                                1 Gen*
3. ∀x((y = u) → (A²xy → A²xu)) → ((y = u) → (A²ry → A²ru))    A4
4. (y = u) → (A²ry → A²ru)                                    3,2 MP
5. ∀y((y = u) → (A²ry → A²ru))                                4 Gen*
6. ∀y((y = u) → (A²ry → A²ru)) → ((s = u) → (A²rs → A²ru))    A4
7. (s = u) → (A²rs → A²ru)                                    6,5 MP
8. ∀u((s = u) → (A²rs → A²ru))                                7 Gen*
9. ∀u((s = u) → (A²rs → A²ru)) → ((s = t) → (A²rs → A²rt))    A4
10. (s = t) → (A²rs → A²rt)                                   9,8 MP

For an arbitrary tᵢ = s and Rⁿt₁…tₙ, begin with an instance of A7 that has xᵢ = y and Rⁿx₁…xₙ; then apply the Gen* / A4 / MP pattern n times to convert x₁…xₙ to t₁…tₙ, and then once more to convert y to s.
E3.9. Provide derivations to show each of T3.40, T3.41, T3.42, T3.43, T3.44,
T3.??, T3.49, T3.50, T3.51, T3.52, T3.53, and T3.54.
T3.40. ⊢PA (St = Ss) → (t = s)

1. (Sx = Sy) → (x = y)                                        P2
2. ∀x((Sx = Sy) → (x = y))                                    1 Gen*
3. ∀x((Sx = Sy) → (x = y)) → ((St = Sy) → (t = y))            A4
4. (St = Sy) → (t = y)                                        3,2 MP
5. ∀y((St = Sy) → (t = y))                                    4 Gen*
6. ∀y((St = Sy) → (t = y)) → ((St = Ss) → (t = s))            A4
7. (St = Ss) → (t = s)                                        6,5 MP

T3.50. ⊢PA ((r + s) + t) = (r + (s + t))

1. ((r + s) + ∅) = (r + (s + ∅))                                      T3.49
2. ((r + s) + x) = (r + (s + x)) → S((r + s) + x) = S(r + (s + x))    T3.36
3. S((r + s) + x) = ((r + s) + Sx)                                    T3.42*
4. S((r + s) + x) = ((r + s) + Sx) →
   (S((r + s) + x) = S(r + (s + x)) → ((r + s) + Sx) = S(r + (s + x)))   T3.37
5. S((r + s) + x) = S(r + (s + x)) → ((r + s) + Sx) = S(r + (s + x))  4,3 MP
6. ((r + s) + x) = (r + (s + x)) → ((r + s) + Sx) = S(r + (s + x))    2,5 T3.2
7. S(r + (s + x)) = (r + S(s + x))                                    T3.42*
8. S(s + x) = (s + Sx)                                                T3.42*
9. S(s + x) = (s + Sx) → (r + S(s + x)) = (r + (s + Sx))              T3.36
10. (r + S(s + x)) = (r + (s + Sx))                                   9,8 MP
11. S(r + (s + x)) = (r + (s + Sx))                                   7,10 T3.35
12. S(r + (s + x)) = (r + (s + Sx)) →
    (((r + s) + Sx) = S(r + (s + x)) → ((r + s) + Sx) = (r + (s + Sx)))  T3.37
13. ((r + s) + Sx) = S(r + (s + x)) → ((r + s) + Sx) = (r + (s + Sx)) 12,11 MP
14. ((r + s) + x) = (r + (s + x)) → ((r + s) + Sx) = (r + (s + Sx))   6,13 T3.2
15. ∀x(((r + s) + x) = (r + (s + x)) → ((r + s) + Sx) = (r + (s + Sx)))  14 Gen*
16. ∀x(((r + s) + x) = (r + (s + x)))                                 1,15 Ind*
17. ∀x(((r + s) + x) = (r + (s + x))) → ((r + s) + t) = (r + (s + t)) A4
18. ((r + s) + t) = (r + (s + t))                                     17,16 MP
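The Ind* step in T3.50 is ordinary induction on t: T3.49 supplies the base case and lines 2-14 the inductive step, pushing S through +. For comparison only, here is a hedged sketch of the same induction in Lean 4 (assuming core's Nat and the lemma Nat.add_succ : n + Nat.succ m = Nat.succ (n + m)); it is an analogue of the argument, not the PA derivation itself.

```lean
-- Sketch (assumptions: Lean 4 core Nat, Nat.add_succ). Mirrors T3.50:
-- base case t = 0, then push S through + and close with the hypothesis.
theorem add_assoc' (r s t : Nat) : (r + s) + t = r + (s + t) := by
  induction t with
  | zero => rfl
  | succ t ih =>
    rw [Nat.add_succ, Nat.add_succ, Nat.add_succ, ih]
```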



T3.53. ⊢PA (St · s) = ((t · s) + s)

first (a): ⊢PA ((t · x) + (x + St)) = ((t · Sx) + Sx)

1. (x + St) = S(x + t)                                                T3.42
2. S(x + t) = (Sx + t)                                                T3.47*
3. (x + St) = (Sx + t)                                                1,2 T3.35
4. (Sx + t) = (t + Sx)                                                T3.48
5. (x + St) = (t + Sx)                                                3,4 T3.35
6. (x + St) = (t + Sx) → ((t · x) + (x + St)) = ((t · x) + (t + Sx))  T3.36
7. ((t · x) + (x + St)) = ((t · x) + (t + Sx))                        6,5 MP
8. ((t · x) + (t + Sx)) = (((t · x) + t) + Sx)                        T3.50*
9. ((t · x) + (x + St)) = (((t · x) + t) + Sx)                        7,8 T3.35
10. ((t · x) + t) = (t · Sx)                                          T3.44*
11. ((t · x) + t) = (t · Sx) → ((((t · x) + t) + Sx) = ((t · Sx) + Sx))  T3.36
12. (((t · x) + t) + Sx) = ((t · Sx) + Sx)                            11,10 MP
13. ((t · x) + (x + St)) = ((t · Sx) + Sx)                            9,12 T3.35

main result:

1. (St · ∅) = ((t · ∅) + ∅)                                           T3.52
2. (St · x) = ((t · x) + x) → ((St · x) + St) = (((t · x) + x) + St)  T3.36
3. ((St · x) + St) = (St · Sx)                                        T3.44*
4. ((St · x) + St) = (St · Sx) →
   (((St · x) + St) = (((t · x) + x) + St) → (St · Sx) = (((t · x) + x) + St))  T3.37
5. ((St · x) + St) = (((t · x) + x) + St) → (St · Sx) = (((t · x) + x) + St)    4,3 MP
6. (St · x) = ((t · x) + x) → (St · Sx) = (((t · x) + x) + St)        2,5 T3.2
7. (((t · x) + x) + St) = ((t · x) + (x + St))                        T3.50
8. ((t · x) + (x + St)) = ((t · Sx) + Sx)                             (a)
9. (((t · x) + x) + St) = ((t · Sx) + Sx)                             7,8 T3.35
10. (((t · x) + x) + St) = ((t · Sx) + Sx) →
    ((St · Sx) = (((t · x) + x) + St) → (St · Sx) = ((t · Sx) + Sx))  T3.36
11. (St · Sx) = (((t · x) + x) + St) → (St · Sx) = ((t · Sx) + Sx)    10,9 MP
12. (St · x) = ((t · x) + x) → (St · Sx) = ((t · Sx) + Sx)            6,11 T3.2
13. ∀x((St · x) = ((t · x) + x) → (St · Sx) = ((t · Sx) + Sx))        12 Gen*
14. ∀x((St · x) = ((t · x) + x))                                      1,13 Ind*
15. ∀x((St · x) = ((t · x) + x)) → (St · s) = ((t · s) + s)           A4
16. (St · s) = ((t · s) + s)                                          15,14 MP


Chapter Four
E4.1. Where the interpretation is as in J from p. 95, use trees to decide whether the following sentences of Ls are T or F.

a. ¬A   false
A (T)                          From J
¬A (F)                         By T(¬), row 1

e. ¬(A → A)   false
A (T)                          From J
(A → A) (T)                    By T(→), row 1
¬(A → A) (F)                   By T(¬), row 1

f. (¬A → A)   true
A (T)                          From J
¬A (F)                         By T(¬), row 1
(¬A → A) (T)                   By T(→), row 3

i. (A → ¬B) → ¬(B → ¬A)   true
A (T), B (T)                   From J
¬A (F), ¬B (F)                 By T(¬), row 1
(A → ¬B) (F), (B → ¬A) (F)     By T(→), row 2
¬(B → ¬A) (T)                  By T(¬), row 2
(A → ¬B) → ¬(B → ¬A) (T)       By T(→), row 3

E4.2. For each of the following sentences of Ls construct a truth table to determine its truth value for each of the possible interpretations of its basic sentences.

a. ¬¬A

A   ¬ ¬ A
T   T F T
F   F T F

d. (¬B → A) → B

A B   (¬ B → A) → B
T T    F      T   T
T F    T      T   F
F T    F      T   T
F F    T      F   T

g. C → (A → B)

A B C   C → (A → B)
T T T     T    T
T T F     T    T
T F T     F    F
T F F     T    F
F T T     T    T
F T F     T    T
F F T     T    T
F F F     T    T

i. (¬A → B) → (¬C → D)

(values under ¬A, the first →, the main →, ¬C, and the second →)

A B C D   ¬  →   →    ¬  →
T T T T   F  T   T    F  T
T T T F   F  T   T    F  T
T T F T   F  T   T    T  T
T T F F   F  T   F    T  F
T F T T   F  T   T    F  T
T F T F   F  T   T    F  T
T F F T   F  T   T    T  T
T F F F   F  T   F    T  F
F T T T   T  T   T    F  T
F T T F   T  T   T    F  T
F T F T   T  T   T    T  T
F T F F   T  T   F    T  F
F F T T   T  F   T    F  T
F F T F   T  F   T    F  T
F F F T   T  F   T    T  T
F F F F   T  F   T    T  F

E4.3. For each of the following, use truth tables to decide whether the entailment claims hold.

a. ¬A → A ⊨s A   valid

A   ¬ A → A / A
T    F    T   T
F    T    F   F

There is no row where the premise is true and the conclusion is false, so the entailment holds.

c. A → B, ¬A ⊨s ¬B   invalid

A B   A → B   ¬ A / ¬ B
T T     T      F      F
T F     F      F      T
F T     T      T      F ⇐
F F     T      T      T

On the third row the premises are both true and the conclusion is false, so the entailment does not hold.

g. ⊨s (A → (C → B)) → ((A → C) → (A → B))   valid

(values under the first →, (C → B), the main →, (A → C), the consequent →, and (A → B))

A B C   (A → (C → B)) → ((A → C) → (A → B))
T T T       T    T      T     T    T    T
T T F       T    T      T     F    T    T
T F T       F    F      T     T    F    F
T F F       T    T      T     F    T    F
F T T       T    T      T     T    T    T
F T F       T    T      T     T    T    T
F F T       T    F      T     T    T    T
F F F       T    T      T     T    T    T

The main column is T on every row, so the sentence is valid.
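The row-by-row test that E4.3 applies by hand can also be brute-forced. Here is a hedged sketch (not from the text; the function names are my own): enumerate all valuations of the atoms and look for a row where every premise is T and the conclusion is F.

```python
# Hedged sketch (not from the text): brute-force the truth-table entailment
# test. Sentences are functions from a valuation dict to a boolean.
from itertools import product

def imp(p, q):
    # material conditional
    return (not p) or q

def valid(atoms, premises, conclusion):
    for row in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, row))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counterexample row found
    return True

# E4.3(a): ~A -> A entails A
print(valid('A', [lambda v: imp(not v['A'], v['A'])], lambda v: v['A']))  # True
# E4.3(c): A -> B and ~A do not entail ~B
print(valid('AB',
            [lambda v: imp(v['A'], v['B']), lambda v: not v['A']],
            lambda v: not v['B']))  # False
```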


E4.4. For each of the following, use truth tables to decide whether the entailment claims hold.

c. B ∨ ¬C ⊨s B → C   invalid

B C   B ∨ ¬ C / B → C
T T     T  F      T
T F     T  T      F ⇐
F T     F  F      T
F F     T  T      T

d. A ∨ B, ¬C → ¬A, ¬(B ∧ ¬C) ⊨s C   valid

A B C   A ∨ B   ¬C → ¬A   ¬(B ∧ ¬C) / C
T T T     T        T          T        T
T T F     T        F          F        F
T F T     T        T          T        T
T F F     T        F          T        F
F T T     T        T          T        T
F T F     T        T          F        F
F F T     F        T          T        T
F F F     F        T          T        F

There is no row where all three premises are true and the conclusion is false, so the entailment holds.

h. ⊨s ¬(A ↔ B) ↔ (A ∧ ¬B)   invalid

A B   ¬ (A ↔ B) ↔ (A ∧ ¬ B)
T T   F    T    T    F  F
T F   T    F    T    T  T
F T   T    F    F    F  F ⇐
F F   F    T    T    F  T

Since the main ↔ is F on the third row, the sentence is not valid.

E4.5. For each of the following, use truth tables to decide whether the entailment claims hold.

a. ∃xAx → ∃xBx, ¬∃xAx ⊨s ¬∃xBx   invalid

∃xAx ∃xBx   ∃xAx → ∃xBx   ¬∃xAx / ¬∃xBx
  T    T         T           F        F
  T    F         F           F        T
  F    T         T           T        F ⇐
  F    F         T           T        T

E4.8. For LNT and interpretation N as on p. 113, with d as described in the main problem, use trees to determine each of the following.

a. Nd[+xS∅] = 3
x → 2                       By TA(v)
∅ → 0                       By TA(c)
S∅ → 1                      Since ⟨0,1⟩ ∈ N[S], by TA(f)
+xS∅ → 3                    Since ⟨⟨2,1⟩,3⟩ ∈ N[+], by TA(f)

d. Nd(x|4)[x + (SS∅ · x)] = 12
x → 4, ∅ → 0, x → 4         By TA(v), TA(c) and TA(v)
S∅ → 1                      Since ⟨0,1⟩ ∈ N[S], by TA(f)
SS∅ → 2                     Since ⟨1,2⟩ ∈ N[S], by TA(f)
(SS∅ · x) → 8               Since ⟨⟨2,4⟩,8⟩ ∈ N[·], by TA(f)
x + (SS∅ · x) → 12          Since ⟨⟨4,8⟩,12⟩ ∈ N[+], by TA(f)

E4.9. For LNT and interpretation I as above on p. 114, with d as described in the main problem, use trees to determine each of the following.

a. Id[+xS∅] = Hill
x → Hill                    By TA(v)
∅ → Bill                    By TA(c)
S∅ → Bill                   Since ⟨Bill, Bill⟩ ∈ I[S], by TA(f)
+xS∅ → Hill                 Since ⟨⟨Hill, Bill⟩, Hill⟩ ∈ I[+], by TA(f)

d. Id(x|Bill)[x + (SS∅ · x)] = Bill
x → Bill, ∅ → Bill, x → Bill   By TA(v), TA(c) and TA(v)
S∅ → Bill                   Since ⟨Bill, Bill⟩ ∈ I[S], by TA(f)
SS∅ → Bill                  Since ⟨Bill, Bill⟩ ∈ I[S], by TA(f)
(SS∅ · x) → Hill            Since ⟨⟨Bill, Bill⟩, Hill⟩ ∈ I[·], by TA(f)
x + (SS∅ · x) → Bill        Since ⟨⟨Bill, Hill⟩, Bill⟩ ∈ I[+], by TA(f)

E4.11. For Lq and interpretation K with variable assignment d as described in the main problem, use trees to determine each of the following.

b. Kd[g²yf¹c] = Amy
y → Amy                     By TA(v)
c → Chris                   By TA(c)
f¹c → Amy                   Since ⟨Chris, Amy⟩ ∈ K[f¹], by TA(f)
g²yf¹c → Amy                Since ⟨⟨Amy, Amy⟩, Amy⟩ ∈ K[g²], by TA(f)

E4.12. Where the interpretation K and variable assignment d are as described in the main problem, use trees to determine whether the following formulas are satisfied on K with d.

a. Hx   satisfied
With x → Amy, Kd[Hx] = S.

f. ¬∀x(Hx → ¬S)   satisfied
With d(x|Amy): Kd(x|Amy)[Hx] = S and Kd(x|Amy)[S] = S, so Kd(x|Amy)[¬S] = N and Kd(x|Amy)[Hx → ¬S] = N. (Similarly the instance for Bob is N; with d(x|Chris), Kd(x|Chris)[Hx] = N, so that conditional is S.) Since some instance is unsatisfied, Kd[∀x(Hx → ¬S)] = N; so Kd[¬∀x(Hx → ¬S)] = S.

g. ¬∀y∀xLxy   satisfied
With d(y|Amy): Kd(y|Amy, x|Amy)[Lxy] = S, but Kd(y|Amy, x|Bob)[Lxy] = N and Kd(y|Amy, x|Chris)[Lxy] = N; so Kd(y|Amy)[∀xLxy] = N. (Likewise Kd(y|Bob, x|Chris)[Lxy] = N and Kd(y|Chris, x|Chris)[Lxy] = N, so the universal fails for Bob and Chris as well.) So Kd[∀y∀xLxy] = N, and Kd[¬∀y∀xLxy] = S.

E4.14. For language Lq consider an interpretation I such that U = {1, 2}, I[a] = 1, I[A] = T, I[P¹] = {1}, I[f¹] = {⟨1,2⟩, ⟨2,1⟩}. Use interpretation I and trees to show that (a) is not quantificationally valid. Each of the others can be shown to be invalid on an interpretation I′ that modifies just one of the main parts of I. Produce the modified interpretations, and use them to show that the other arguments also are invalid.

c. ∀xPf¹x ⊭ ∀xPx

Set I′[f¹] = {⟨1,1⟩, ⟨2,1⟩}.
Premise: with x → 1, f¹x → 1 and I′d(x|1)[Pf¹x] = S; with x → 2, f¹x → 1 and I′d(x|2)[Pf¹x] = S; so I′d[∀xPf¹x] = S.
Conclusion: I′d(x|1)[Px] = S, but I′d(x|2)[Px] = N; so I′d[∀xPx] = N.
Since the premise is satisfied and a sentence, it is true; since the conclusion is not satisfied, it is not true. Since I′ makes the premise T and the conclusion not, the argument is not quantificationally valid.
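On a finite domain the tree computation reduces to finitely many checks, so countermodels like the one for (c) can be verified mechanically. A hedged sketch (not from the text; the variable names are my own):

```python
# Hedged sketch (not from the text): verify the E4.14(c) countermodel by brute
# force over the two-element domain.
U = [1, 2]
P = {1}             # extension of P
f = {1: 1, 2: 1}    # the modified extension of the function symbol

premise = all(f[x] in P for x in U)      # corresponds to "for all x, Pf(x)"
conclusion = all(x in P for x in U)      # corresponds to "for all x, Px"

print(premise, conclusion)  # True False
```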
E4.15. Find interpretations and use trees to demonstrate each of the following. Be sure to explain why your interpretations and trees have the desired result. For these exercises, other interpretations might do the job!

a. ∀x(Qx → Px) ⊭ ∀x(Px → Qx)

For an interpretation I set U = {1}, I[P] = {1}, I[Q] = {}.
Premise: Id(x|1)[Qx] = N, so Id(x|1)[Qx → Px] = S; so Id[∀x(Qx → Px)] = S.
Conclusion: Id(x|1)[Px] = S and Id(x|1)[Qx] = N, so Id(x|1)[Px → Qx] = N; so Id[∀x(Px → Qx)] = N.
Since the premise is satisfied and a sentence, it is true; since the conclusion is not satisfied, it is not true. Since I makes the premise T and the conclusion not, the argument is not quantificationally valid.

c. ¬∀xPx ⊭ ¬Pa

For an interpretation I, set U = {1, 2}, I[a] = 1, and I[P] = {1}.
Premise: Id(x|2)[Px] = N, so Id[∀xPx] = N and Id[¬∀xPx] = S.
Conclusion: a → 1 and Id[Pa] = S, so Id[¬Pa] = N.
Since the premise is satisfied and a sentence, it is true; since the conclusion is not satisfied, it is not true. So the argument is not quantificationally valid.

h. ¬∀x∀yRxy ⊭ ∀x¬∀yRxy

For an interpretation I, set U = {1, 2}, I[R] = {⟨1,1⟩, ⟨1,2⟩}.
Premise: Id(x|2, y|1)[Rxy] = N, so Id(x|2)[∀yRxy] = N; so Id[∀x∀yRxy] = N and Id[¬∀x∀yRxy] = S.
Conclusion: Id(x|1, y|1)[Rxy] = S and Id(x|1, y|2)[Rxy] = S, so Id(x|1)[∀yRxy] = S and Id(x|1)[¬∀yRxy] = N; so Id[∀x¬∀yRxy] = N.
Since the premise is satisfied and a sentence, it is true; since the conclusion is not satisfied, it is not true. So the argument is not quantificationally valid.

ANSWERS FOR CHAPTER 4

E4.17. Produce interpretations to demonstrate each of the following. Use trees, with derived clauses as necessary, to demonstrate your results. Be sure to explain why your interpretations and trees have the results they do.

a. ∃xPx ⊭ ∀yPy

For an interpretation I, set U = {1, 2}, and I[P] = {1}.
Premise: Id(x|1)[Px] = S, so Id[∃xPx] = S.
Conclusion: Id(y|2)[Py] = N, so Id[∀yPy] = N.
Since the premise is satisfied and a sentence, it is true; since the conclusion is not satisfied, it is not true. So the argument is not quantificationally valid.

g. ∀x(∃yRxy ↔ ¬A) ⊭ ∃xRxx ∨ A

For an interpretation I, set U = {1, 2}, I[A] = F, I[R] = {⟨1,2⟩, ⟨2,1⟩}.
Premise: Id(x|1, y|2)[Rxy] = S, so Id(x|1)[∃yRxy] = S; and Id(x|1)[A] = N, so Id(x|1)[¬A] = S; so Id(x|1)[∃yRxy ↔ ¬A] = S. Similarly, with Id(x|2, y|1)[Rxy] = S, Id(x|2)[∃yRxy ↔ ¬A] = S. So Id[∀x(∃yRxy ↔ ¬A)] = S.
Conclusion: Id(x|1)[Rxx] = N and Id(x|2)[Rxx] = N, so Id[∃xRxx] = N; and Id[A] = N; so Id[∃xRxx ∨ A] = N.
Since the premise is satisfied and a sentence, it is true; since the conclusion is not satisfied, it is not true. So the argument is not quantificationally valid.
E4.18. Produce an interpretation to demonstrate each of the following (now in LNT). Use trees to demonstrate your results. Be sure to explain why your interpretations and trees have the results they do.

d. ⊭ ∀x∀y(¬(x = y) → (x < y ∨ y < x))

For an interpretation I, set U = {1, 2}, and I[<] = {}. The interpretation of = is given.
With x → 1 and y → 2: Id(x|1, y|2)[x = y] = N, so Id(x|1, y|2)[¬(x = y)] = S; but Id(x|1, y|2)[x < y] = N and Id(x|1, y|2)[y < x] = N, so Id(x|1, y|2)[x < y ∨ y < x] = N; so Id(x|1, y|2)[¬(x = y) → (x < y ∨ y < x)] = N.
The quantifiers generate additional branches. However, this part of the tree is sufficient to show that the entire formula is not satisfied. Since it is not satisfied, it is not true. And since I makes the formula not true, it is not quantificationally valid.


Chapter Five
E5.1. For each of the following, identify the simple sentences that are parts. If the sentence is compound, use underlines to exhibit its operator structure, and say what is its main operator.

h. Hermione believes that studying is good, and Hermione studies hard, but Ron believes studying is good, and it is not the case that Ron studies hard.

Simple sentences:
Studying is good
Hermione studies hard
Ron studies hard

Compound: Hermione believes that studying is good and Hermione studies hard, but Ron believes studying is good and it is not the case that Ron studies hard.

main operator: but

E5.2. Which of the following operators are truth-functional and which are not? If the operator is truth-functional, display the relevant table; if it is not, give a case to show that it is not. Clearly explain your response.

a. It is a fact that ___   truth-functional

___   It is a fact that ___
 T             T
 F             F

In any situation, the compound takes the same value as the sentence in the blank. So the operator is truth-functional.

c. ___ but ___   truth-functional

___  but  ___
 T    T    T
 T    F    F
 F    F    T
 F    F    F

In any situation this operator takes the same value as "___ and ___". Though "but" may carry a conversational sense of opposition not present with "and", the truth value of the compound works the same. Thus, where "Bob loves Sue" is true, even "Bob loves Sue but Bob loves Sue" might elicit the response, "True, but why did you say that?"

f. It is always the case that ___   not truth-functional

It may be that any false sentence in the blank results in a false compound. However, consider something true in the blank: perhaps "I am at my desk" and "Life is hard" are both true. But
It is always the case that I am at my desk
It is always the case that life is hard
are such that the first is false, but the second remains true. For perhaps I sometimes get up from my desk (so that the first is false), but the difficult character of living goes on and on (and on). Thus there are situations where truth values of sentences in the blanks are the same, but the truth values of resultant compounds are different. So the operator is not truth-functional.
E5.3. Use our method to expose truth functional structure and produce parse trees for each of the following. Use your trees to produce an interpretation function for the sentences.

d. It is not the case that: Bingo is spotted and Spot can play bingo.

It is not the case that (Bingo is spotted and Spot can play bingo)
  Bingo is spotted and Spot can play bingo
    Bingo is spotted
    Spot can play bingo

From this sentence, II includes,
B: Bingo is spotted
S: Spot can play bingo

E5.4. Use our method to expose truth functional structure and produce parse trees for each of the following. Use your trees to produce an interpretation function for the sentences.

a. People have rights and dogs have rights, but rocks do not.

People have rights and dogs have rights, but it is not the case that rocks have rights
  People have rights and dogs have rights
    People have rights
    dogs have rights
  it is not the case that rocks have rights
    rocks have rights

From this sentence, II includes,
P: People have rights
D: Dogs have rights
R: Rocks have rights
R: Rocks have rights
E5.5. Construct parallel trees to complete the translation of the sentences from E5.3 and E5.4.

d. It is not the case that: Bingo is spotted and Spot can play bingo.

¬(B ∧ S)
  (B ∧ S)
    B
    S

Where II includes,
B: Bingo is spotted
S: Spot can play bingo

a. People have rights and dogs have rights, but rocks do not.

((P ∧ D) ∧ ¬R)
  (P ∧ D)
    P
    D
  ¬R
    R

Where II includes,
P: People have rights
D: Dogs have rights
R: Rocks have rights
E5.6. Use our method to translate each of the following. That is, generate parse trees with an interpretation function for all the sentences, and then parallel trees to produce formal equivalents.

c. It is not the case that: everything Plato, and Aristotle, and Ayn Rand said was true.

¬(P ∧ (A ∧ R))
  (P ∧ (A ∧ R))
    P
    (A ∧ R)
      A
      R

Another natural result is ¬((P ∧ A) ∧ R).

Where II includes,
P: Everything Plato said was true
A: Everything Aristotle said was true
R: Everything Ayn Rand said was true


E5.8. Using the given interpretation function, produce parse trees and then parallel ones to complete the translation for each of the following.

h. Not both Bob and Sue are cool.

It is not the case that (Bob is cool and Sue is cool)
  Bob is cool and Sue is cool
    Bob is cool
    Sue is cool

¬(B₁ ∧ S₁)
  (B₁ ∧ S₁)
    B₁
    S₁

E5.9. Use our method to translate each of the following. That is, generate parse trees with an interpretation function for all the sentences, and then parallel trees to produce formal equivalents.

d. Neither Harry, nor Ron, nor Hermione are Muggles.

((¬H ∧ ¬R) ∧ ¬M)
  (¬H ∧ ¬R)
    ¬H
      H
    ¬R
      R
  ¬M
    M

Other natural options are (¬H ∧ (¬R ∧ ¬M)), ¬((H ∨ R) ∨ M), and ¬(H ∨ (R ∨ M)).

Include in the interpretation function,
H: Harry is a Muggle
R: Ron is a Muggle
M: Hermione is a Muggle

g. Although blatching and blagging are illegal in Quidditch, the woolongong shimmy is not.

((¬T ∧ ¬G) ∧ ¬¬W)
  (¬T ∧ ¬G)
    ¬T
      T
    ¬G
      G
  ¬¬W
    ¬W
      W

It is tempting to build "Blatching is illegal" and so forth into the interpretation function; but this is to leave out sentential structure. And it is tempting to leave out the double negation on the right conjunct; the result is in fact equivalent, though the above translation reflects the actual claim.

Include in the interpretation function,
T: Blatching is legal in Quidditch
G: Blagging is legal in Quidditch
W: The woolongong shimmy is legal in Quidditch
776

ANSWERS FOR CHAPTER 5

E5.10. Using the given interpretation function, produce parse trees and then parallel ones to complete the translation for each of the following.

e. If Timmy is in trouble, then if Lassie barks Pa will help.

(T → (L → P))
  T
  (L → P)
    L
    P

i. If Timmy is in trouble, then either Lassie is not healthy or if Lassie barks then Pa will help.

(T → (¬H ∨ (L → P)))
  T
  (¬H ∨ (L → P))
    ¬H
      H
    (L → P)
      L
      P

E5.11. Use our method, with or without parse trees, to produce a translation, including interpretation function for the following.

g. If you think animals do not feel pain, then vegetarianism is not right.
Include in the interpretation function,
N: You think it is not the case that animals feel pain
V: Vegetarianism is right
(N → ¬V)

i. Vegetarianism is right only if both animals feel pain, and animals have intrinsic value just in case they feel pain; but it is not the case that animals have intrinsic value just in case they feel pain.
Include in the interpretation function,
V: Vegetarianism is right
P: Animals feel pain
I: Animals have intrinsic value
(V → (P ∧ (I ↔ P))) ∧ ¬(I ↔ P)
E5.12. For each of the following arguments: (i) Produce an adequate translation, including interpretation function and translations for the premises and conclusion. Then (ii) use truth tables to determine whether the argument is sententially valid.

a. Our car will not run unless it has gasoline
Our car has gasoline
Our car will run

Include in the interpretation function:
R: Our car will run
G: Our car has gasoline

Formal sentences:
¬R ∨ G
G
R

Truth table:
G R   ¬R ∨ G   G / R
T T    F  T    T   T
T F    T  T    T   F ⇐
F T    F  F    F   T
F F    T  T    F   F

On the second row both premises are true and the conclusion is false, so the argument is not sententially valid.

E5.17. Using the given interpretation function for Lq, complete the translation for each of the following.

e. If Harold gets a higher grade than Ninfa, then he gets a higher grade than her homework partner.
If Harold gets a higher grade than Ninfa, then Harold gets a higher grade than Ninfa's homework partner.
(Hda → Hdp¹a)

g. If someone gets a good grade, then Ninfa's homework partner does.
If someone gets a good grade, then Ninfa's homework partner gets a good grade.
(∃xGx → Gp¹a)

i. Nobody gets a grade higher than their own grade.
∀x¬Hxx // ¬∃xHxx
E5.18. Produce an adequate quantificational translation for each of the following. In this case you should provide an interpretation function for the sentences. Let U be the set of famous philosophers, and, assuming that each has a unique successor, implement a successor function.

d. If Plato is good, then his successor and successor's successor are good.
If Plato is good, then Plato's successor is good and Plato's successor's successor is good.
Where the interpretation function includes,
a: Plato
s¹: {⟨m, n⟩ | m, n ∈ U and n is the successor of m}
G¹: {o | o ∈ U and o is a good philosopher}
(Ga → (Gs¹a ∧ Gs¹s¹a))

i. If some philosopher is better than Plato, then Aristotle is.
If some philosopher is better than Plato, then Aristotle is better than Plato.
Where the interpretation function includes,
a: Plato
b: Aristotle
B²: {⟨m, n⟩ | m, n ∈ U and m is a better philosopher than n}
(∃xBxa → Bba)

E5.20. Using the given interpretation function, complete the translation for each of the following.

b. Some Ford is an unreliable piece of junk.
∃x(Fx ∧ (¬Rx ∧ Jx))

g. Any Ford built in the eighties is a piece of junk.
∀x((Fx ∧ Ex) → Jx)

k. If a car is unreliable, then it is a piece of junk.
∀x(¬Rx → Jx)
E5.21. Using the given interpretation function, complete the translation for each of the following.

b. Someone is married to Bob.
∃xMxb

h. Anyone who loves and is loved by their spouse is happy, though some are not employed.
∀x((Lxs¹x ∧ Ls¹xx) → Hx) ∧ ∃x((Lxs¹x ∧ Ls¹xx) ∧ ¬Ex)

l. Anyone married to Bob is happy if Bob is not having an affair.
¬Ab → ∀x(Mxb → Hx) // ∀x(Mxb → (¬Ab → Hx))

E5.25. Using the given interpretation function, complete the translation for each of the following.

g. Any man is shaved by someone.
∀x(Mx → ∃ySyx)

j. Any man who shaves everyone is a barber.
∀x((Mx ∧ ∀ySxy) → Bx)

n. A barber shaves only people who do not shave themselves.
∀x(Bx → ∀y(Sxy → ¬Syy))

E5.26. Using the given extended version of LNT and standard interpretation, complete the translation for each of the following.

a. One plus one equals two.
(S∅ + S∅) = SS∅

g. Any odd (non-even) number is equal to the successor of some even number.
∀x(¬Ex → ∃y(Ey ∧ (x = Sy)))

m. The sum of one odd with another odd is even.
∀x∀y((¬Ex ∧ ¬Ey) → E(x + y))
E5.28. Using the given interpretation function, complete the translation for each of the following.

c. There are at least three snakes in the grass.
∃x∃y∃z(((Gx ∧ Gy) ∧ Gz) ∧ ((¬(x = y) ∧ ¬(x = z)) ∧ ¬(y = z)))

k. The snake in the grass is deadly.
∃x((Gx ∧ ∀y(Gy → x = y)) ∧ Dx)

m. Aaalph is bigger than any other snake in the grass.
∀x((Gx ∧ ¬(x = a)) → Bax) ∧ Ga
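Numerical quantity claims like (c) unpack into plain quantifier clauses, and such clauses can be sanity-checked on small models. A hedged sketch (not from the text; the function name and sample extensions are my own):

```python
# Hedged sketch (not from the text): "at least three snakes in the grass" as a
# triple existential with pairwise distinctness, tested on small extensions.
from itertools import product

def at_least_three(U, G):
    return any(x in G and y in G and z in G and x != y and x != z and y != z
               for x, y, z in product(U, repeat=3))

U = {1, 2, 3, 4}
print(at_least_three(U, {1, 2, 3}))  # True
print(at_least_three(U, {1, 2}))     # False
```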
E5.29. Given LNT and the standard interpretation, complete the translation for each of the following.

e. If a number a is less than a number b, then b is not less than a.
∀x∀y((x < y) → ¬(y < x))

h. Four is even.
∃x((SS∅ · x) = SSSS∅)

j. Any odd number is the sum of an odd and an even.
∀x(∃w(((SS∅ · w) + S∅) = x) →
∃y∃z((∃w(((SS∅ · w) + S∅) = y) ∧ ∃w((SS∅ · w) = z)) ∧ x = (y + z)))

n. Three is prime.
¬∃x((¬(x = S∅) ∧ ¬(x = SSS∅)) ∧ ∃y((x · y) = SSS∅))

E5.30. For each of the following arguments: (i) Produce an adequate translation, including interpretation function and translations for the premises and conclusion. Then (ii) for each argument that is not quantificationally valid, produce an interpretation (trees optional) to show that the argument is not quantificationally valid.

c. Bob is taller than every other man
Only Bob is taller than every other man

U: {o | o is a man}
b: Bob
T²: {⟨m, n⟩ | m, n ∈ U and m is taller than n}

∀x(¬(x = b) → Tbx)
∀x(∀y(¬(x = y) → Txy) → (x = b))

This argument is quantificationally invalid. To see this, consider a (non-intended) interpretation with,
U = {1, 2}
I[b] = 1
I[T] = {⟨1,2⟩, ⟨2,1⟩}

This makes the premise true, but the conclusion not. To see this, you may want to consider trees. So the argument is not quantificationally valid.

Chapter Six
E6.1. Show that each of the following is valid in N1. Complete (a) - (d) using just rules R1, R3 and R4. You will need an application of R2 for (e).

a. (A ∧ B) ∧ C ⊢N1 A

1. (A ∧ B) ∧ C      P
2. A ∧ B            1 R3
3. A                2 R3 Win!


E6.2. (i) For each of the arguments in E6.1, use a truth table to decide if the argument is sententially valid.
a. .A ^ B/ ^ C `N1 A
A B C .A ^ B/ ^ C / A
T
T
T
T

T
T
F
F

T
F
T
F

T
T
F
F

T
F
F
F

T
T
T
T

F
F
F
F

T
T
F
F

T
F
T
F

F
F
F
F

F
F
F
F

F
F
F
F

There is no row where the premise is true and the conclusion is false; so this
argument is sententially valid.
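The table method generalizes to a brute-force validity test. A minimal Python sketch (not from the text), where formulas are rendered as functions from rows to truth values:

```python
from itertools import product

# Brute-force truth tables to test sentential validity,
# here for the argument (A and B) and C, therefore A.
def valid(premises, conclusion, atoms):
    for vals in product([True, False], repeat=len(atoms)):
        row = dict(zip(atoms, vals))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # a row with true premises and false conclusion
    return True

premise = lambda r: (r['A'] and r['B']) and r['C']
conclusion = lambda r: r['A']
print(valid([premise], conclusion, ['A', 'B', 'C']))  # True
```

With no premises the same function tests tautologies, since `all` of an empty list is true on every row.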
E6.3. Consider a derivation with structure as in the main problem. For each of the lines (3), (6), (7) and (8), which lines are accessible? Which subderivations (if any) are accessible?

line (6):
accessible lines: (1), (4), (5)
accessible subderivations: 2-3

E6.4. Suppose in a derivation with structure as in E6.3 we have obtained a formula A on line (3). (i) On what lines would we be allowed to conclude A by 3 R? Suppose there is a formula B on line (4). (ii) On what lines would we be allowed to conclude B by 4 R?

(i) There are no lines on which we could conclude A by 3 R.
E6.6. The following are not legitimate ND derivations. In each case, explain why.

a.
1. (A ∧ B) ∧ (C → B)    P
2. A                    1 ∧E

This does not apply the rule to the main operator. From (1) by ∧E we can get A ∧ B or C → B. From the first, A would follow by a second application of the rule.
E6.7. Provide derivations to show each of the following.



b. A ∧ B, B → C ⊢ND C

1. A ∧ B    P
2. B → C    P
3. B        1 ∧E
4. C        2,3 →E

e. A → (A → B) ⊢ND A → B

1. A → (A → B)    P
2.    A           A (g, →I)
3.    A → B       1,2 →E
4.    B           3,2 →E
5. A → B          2-4 →I

h. A → B, B → C ⊢ND (A ∧ K) → C

1. A → B          P
2. B → C          P
3.    A ∧ K       A (g, →I)
4.    A           3 ∧E
5.    B           1,4 →E
6.    C           2,5 →E
7. (A ∧ K) → C    3-6 →I

l. A → B ⊢ND (C → A) → (C → B)

1. A → B                  P
2.    C → A               A (g, →I)
3.       C                A (g, →I)
4.       A                2,3 →E
5.       B                1,4 →E
6.    C → B               3-5 →I
7. (C → A) → (C → B)      2-6 →I

E6.9. The following are not legitimate ND derivations. In each case, explain why.
c.
1. W          P
2.    ∼R      A (c, ∼E)
3.       ∼W   A (c, ∼E)
4.       ⊥    1,3 ⊥I
5. R          2-4 ∼E


There is no contradiction against the scope line for assumption ∼R. So we are not justified in exiting the subderivation that begins on (2). The contradiction does justify exiting the subderivation that begins on (3) with the conclusion W by 3-4 ∼E. But this would still be under the scope of assumption ∼R, and does not get us anywhere, as we already had W at line (1)!
E6.10. Produce derivations to show each of the following.
c. ∼A → ∼B, B ⊢ND A

1. ∼A → ∼B    P
2. B          P
3.    ∼A      A (c, ∼E)
4.    ∼B      1,3 →E
5.    ⊥       4,2 ⊥I
6. A          3-5 ∼E

g. A ∨ (A ∧ B) ⊢ND A

1. A ∨ (A ∧ B)    P
2.    A           A (g, 1∨E)
3.    A           2 R
4.    A ∧ B       A (g, 1∨E)
5.    A           4 ∧E
6. A              1,2-3,4-5 ∨E

l. A → B ⊢ND ∼B → ∼A

1. A → B      P
2.    ∼B      A (g, →I)
3.       A    A (c, ∼I)
4.       B    1,3 →E
5.       ⊥    2,4 ⊥I
6.    ∼A      3-5 ∼I
7. ∼B → ∼A    2-6 →I
E6.12. The following are not legitimate ND derivations. In each case, explain why.


c.
1. A ↔ B    P
2. A        1 ↔E

↔E takes as inputs a biconditional and one side or the other. We cannot get A from (1) unless we already have B.
E6.13. Produce derivations to show each of the following.
a. (A ∧ B) ↔ A ⊢ND A → B

1. (A ∧ B) ↔ A    P
2.    A           A (g, →I)
3.    A ∧ B       1,2 ↔E
4.    B           3 ∧E
5. A → B          2-4 →I

e. A ↔ (B ∧ C), B ⊢ND A ↔ C

1. A ↔ (B ∧ C)    P
2. B              P
3.    A           A (g, ↔I)
4.    B ∧ C       1,3 ↔E
5.    C           4 ∧E
6.    C           A (g, ↔I)
7.    B ∧ C       2,6 ∧I
8.    A           1,7 ↔E
9. A ↔ C          3-5,6-8 ↔I


k. ⊢ND ∼∼A ↔ A

1.    ∼∼A        A (g, ↔I)
2.       ∼A      A (c, ∼E)
3.       ∼∼A     1 R
4.       ⊥       2,3 ⊥I
5.    A          2-4 ∼E
6.    A          A (g, ↔I)
7.       ∼A      A (c, ∼I)
8.       A       6 R
9.       ⊥       8,7 ⊥I
10.   ∼∼A        7-9 ∼I
11. ∼∼A ↔ A      1-5,6-10 ↔I

E6.14. For each of the following, (i) which primary strategy applies? and (ii) what
is the next step? If the strategy calls for a new subgoal, show the subgoal; if it
calls for a subderivation, set up the subderivation. In each case, explain your
response.
c.
1. A ↔ B    P

   ∼B ↔ ∼A

(i) There is no contradiction in accessible lines so SG1 does not apply. There is no disjunction in accessible lines so SG2 does not apply. The goal does not appear in the premises so SG3 does not apply. (ii) Given this, we apply SG4 and go for the goal by ↔I. For this goal ↔I requires a pair of subderivations which set up as follows.

1. A ↔ B      P
2.    ∼B      A (g, ↔I)

   ∼A
      ∼A      A (g, ↔I)

   ∼B
   ∼B ↔ ∼A    ↔I


E6.15. Produce derivations to show each of the following. No worked out answers
are provided. However, if you get stuck, you will find strategy hints in the
back.
a. A $ .A ! B/ `ND A ! B
Hint: There is no contradiction or disjunction; and the goal is not in the
premises. So set up to get the primary goal by !I in application of SG4.
b. .A _ B/ ! .B $ D/, B `ND B ^ D
Hint: There is no contradiction or disjunction; and the goal is not in the
premises. So plan to get the primary goal by ^I in application of SG4. Then
it is a matter of SG3 to get the parts.
c. .A ^ C /, .A ^ C / $ B `ND A _ B
Hint: There is no contradiction or disjunction; and the goal is not in the
premises. So plan to get the primary goal by (one form of) _I in application
of SG4.
d. A ^ .C ^ B/, .A _ D/ ! E `ND E
Hint: There is no contradiction or disjunction; but the goal exists in the
premises. So proceed by application of SG3.
e. A ! B, B ! C `ND A ! C
Hint: There is no contradiction or disjunction; and the goal is not in the
premises. So set up to get the primary goal by !I in application of SG4.
f. .A ^ B/ ! .C ^ D/ `ND .A ^ B/ ! C ^ .A ^ B/ ! D
Hint: There is no contradiction or disjunction; and the goal is not in the
premises. So set up to get the primary goal by ^I in application of SG4. Then
apply SG4 and !I again for your new subgoals.
g. A ! .B ! C /, .A ^ D/ ! E, C ! D `ND .A ^ B/ ! E
Hint: There is no contradiction or disjunction; and the goal is not in the
premises. So set up to get the primary goal by !I in application of SG4.
Then it is a matter of SG3.
h. .A ! B/ ^ .B ! C /, .D _ E/ _ H ! A, .D _ E/ ^ H `ND C
Hint: There is no contradiction or disjunction; but the goal is in the premises.
So proceed by application of SG3.
Exercise 6.15.h

789

ANSWERS FOR CHAPTER 6


i. A ! .B ^ C /, C `ND .A ^ D/

Hint: There is no contradiction or disjunction; and the goal is not in the


premises. So set up to get the primary goal by I in application of SG4.
j. A ! .B ! C /, D ! B `ND A ! .D ! C /
Hint: There is no contradiction or disjunction; and the goal is not in the
premises. So set up to get the primary goal by !I in application of SG4.
Similar reasoning applies to the secondary goal.
k. A ! .B ! C / `ND C ! .A ^ B/
Hint: There is no contradiction or disjunction; and the goal is not in the
premises. So set up to get the primary goal by !I in application of SG4. You
can also apply SG4 to the secondary goal.
l. .A ^ B/ ! A `ND A ! B
Hint: There is no simple contradiction or disjunction; and the goal is not in
the premises. So set up to get the primary goal by !I in application of SG4.
This time the secondary goal has no operator, and so falls all the way through
to SG5.
m. B $ A, C ! B, A ^ C `ND K
Hint: There is no contradiction or disjunction; and the goal is not in the
premises. So set up to get the primary goal by I in application of SG4. This
works because the premises are themselves inconsistent.
n. A `ND A ! B
Hint: After you set up for the main goal, look for an application of SG1.
o. A $ B `ND A $ B
Hint: After you set up for the main goal, look for applications of SG5.
p. .A _ B/ _ C , B $ C `ND C _ A
Hint: This is not hard, if you recognize each of the places where SG2 applies.
q. `ND A ! .A _ B/
Hint: Do not panic. Without premises, there is definitely no contradiction
or disjunction; and the goal is not in accessible lines! So set up to get the
primary goal by !I in application of SG4.

r. `ND A ! .B ! A/
Hint: Apply SG4 to get the goal, and again for the subgoal.
s. `ND .A $ B/ ! .A ! B/
Hint: This requires multiple applications of SG4.
t. `ND .A ^ A/ ! .B ^ B/
Hint: Once you set up for the main goal, look for an application of SG1.
u. `ND .A ! B/ ! .C ! A/ ! .C ! B/
Hint: This requires multiple applications of SG4.
v. `ND .A ! B/ ^ B ! A
Hint: Apply SG4 to get the main goal, and again to get the subgoal.
w. `ND A ! B ! .A ! B/
Hint: This requires multiple applications of SG4.
x. `ND A ! .B ^ A/ ! C

Hint: After a couple applications of SG4, you will have occasion to make use of SG1 or, equivalently, SG5.
y. `ND .A ! B/ ! B ! .A ^ D/
Hint: This requires multiple applications of SG4.
E6.16. Produce derivations to demonstrate each of T6.1 - T6.18.
T6.3. ⊢ND (∼Q → ∼P) → ((∼Q → P) → Q)

1.    ∼Q → ∼P                       A (g, →I)
2.       ∼Q → P                     A (g, →I)
3.          ∼Q                      A (c, ∼E)
4.          P                       2,3 →E
5.          ∼P                      1,3 →E
6.          ⊥                       4,5 ⊥I
7.       Q                          3-6 ∼E
8.    (∼Q → P) → Q                  2-7 →I
9. (∼Q → ∼P) → ((∼Q → P) → Q)       1-8 →I


T6.11. ⊢ND (A ∨ B) ↔ (B ∨ A)

1.    A ∨ B              A (g, ↔I)
2.       A               A (g, 1∨E)
3.       B ∨ A           2 ∨I
4.       B               A (g, 1∨E)
5.       B ∨ A           4 ∨I
6.    B ∨ A              1,2-3,4-5 ∨E
7.    B ∨ A              A (g, ↔I)
8.       B               A (g, 7∨E)
9.       A ∨ B           8 ∨I
10.      A               A (g, 7∨E)
11.      A ∨ B           10 ∨I
12.   A ∨ B              7,8-9,10-11 ∨E
13. (A ∨ B) ↔ (B ∨ A)    1-6,7-12 ↔I

E6.17. Each of the following begins with a simple application of ∼I or ∼E for SG4 or SG5. Complete the derivations, and explain your use of secondary strategy.

a.
1. A ∧ B        P
2. ∼(A ∧ C)     P
3.    C         A (c, ∼I)

      ⊥
   ∼C

1. A ∧ B        P
2. ∼(A ∧ C)     P
3.    C         A (c, ∼I)
4.    A         1 ∧E
5.    A ∧ C     4,3 ∧I
6.    ⊥         5,2 ⊥I
7. ∼C           3-6 ∼I

There is no contradiction by atomics and negated atomics. And there is no disjunction in the scope of the assumption for ∼I. So we fall through to SC3. For this set the opposite of (2) as goal, and use primary strategies for it. The derivation of A ∧ C is easy.
E6.18. Produce derivations to show each of the following. No worked out answers
are provided. However, if you get stuck, you will find strategy hints in the
back.


a. A ! .B ^ C /, B ! C `ND A ! B

Apply primary strategies for !I and I. Then there will be occasion for a
simple application of SC3.
b. `ND .A ! A/ ! A
Apply primary strategies for !I and E. Then there will be occasion for a
simple application of SC3.
c. ∼A ∨ ∼B ⊢ND ∼(A ∧ B)
This requires no more than SC1, if you follow the primary strategies properly. From the start, apply SG2 to go for the whole goal ∼(A ∧ B) by ∨E.
d. .A ^ B/, .A ^ B/ `ND A
You will go for the main goal by I in an instance of SG4. Then it is easiest
to see this as a case where you use the premises for separate instances of SC3.
It is, however, also possible to see the derivation along the lines of SC4.
e. `ND A _ A
For your primary strategy, fall all the way through to SG5. Then you will be
able to see the derivation either along the lines of SC3 or 4, building up to the
opposite of .A _ A/ twice.
f. `ND A _ .A ! B/
Your primary strategy falls through to SG5. Then A is sufficient to prove
A ! B, and this turns into a pure version of the pattern (AQ) for formulas
with main operator _.
g. A _ B, A _ B `ND B
For this you will want to apply SG2 to one of the premises (it does not matter
which) for the goal. This gives you a pair of subderivations. One is easy. In
the other, SG2 applies again!
h. A $ .B _ C /, B ! C `ND A
The goal is in the premises, so your primary strategy is SG3. The real challenge is getting B _ C . For this you will fall through to SG5, and assume its
negation. Then the derivation can be conceived either along the lines of SC3
or SC4, and on the standard pattern for disjunctions.


i. A $ B `ND .C $ A/ $ .C $ B/

Applying SG4, set up for the primary goal by $I. You will then need $I for
the subgoals as well.
j. A $ .B $ C /, .A _ B/ `ND C
Fall through to SG5 for the primary goal. Then you can think of the derivation
along the lines of SC3 or SC4. The derivation of A _ B works on the standard
pattern, insofar as with the assumption C , A gets you B.
k. C _ .A _ B/ ^ .C ! E/, A ! D, D ! A `ND C _ B
Though officially there is no formula with main operator _, a minor reshuffle
exposes C _ .A _ B/ on an accessible line. Then the derivation is naturally
driven by applications of SG2.
l. .A ! B/, .B ! C / `ND D
Go for the main goal by ∼I in application of SG4. Then it is most natural
to see the derivation as involving two separate applications of SC3. It is also
possible to set the derivation up along the lines of SC4, though this leads to a
rather different result.
m. C ! A, .B ^ C / `ND .A _ B/ ! C
Go for the primary goal by !I in application of SG4. Then you will need to
apply SG2 to reach the subgoal.
n. ∼(A ↔ B) ⊢ND A ↔ ∼B
Go for the primary goal by ↔I in application of SG4. You can go for one subgoal by ∼E, the other by ∼I. Then fall through to SC3 for the contradictions, where this will involve you in further instances of ↔I. The derivation is long, but should be straightforward if you follow the strategies.
o. A $ B, B $ C `ND .A $ C /
Go for the primary goal by I in application of SG4. Then the contradiction
comes by application of SC4.
p. A _ B, B _ C , C `ND A
This will set up as a couple instances of _E. If you begin with A _ B, one
subderivation is easy. In the second, be on the lookout for a couple instances
of SG1.

q. .A _ C / _ D, D ! B `ND .A ^ B/ ! C

Officially, the primary strategy should be _E in application of SG2. However,


in this case it will not hurt to begin with →I, and set up ∨E inside the
subderivation for that.
r. A _ D, D $ .E _ C /, .C ^ B/ _ C ^ .F ! C / `ND A
The two disjunctions require applications of SG2. In fact, there are ways to
simplify this from the mechanical version entirely driven by the strategy.
s. .A_B/_.C ^D/, .A $ E/^.B ! F /, G $ .E _F /, C ! B `ND G
This derivation is driven by _E in application of SG2 and then SC3. Again,
there are ways to make the derivation relatively more elegant.
t. .A _ B/ ^ C , C ! .D ^ A/, B ! .A _ E/ `ND E _ F
Since there is no F in the premises, it makes sense to think the conclusion is
true because E is true. So it is safe to set up to get the conclusion from E by
_I. After some simplification, the overall strategy is revealed to be _E based
on A _ B, in application of SG2. One subderivation has another formula with
main operator _, and so another instance of _E.
E6.19. Produce derivations to demonstrate each of T6.19 - T6.26.


T6.19. ⊢ND ∼(A ∧ B) ↔ (∼A ∨ ∼B)

1.    ∼(A ∧ B)              A (g, ↔I)
2.       ∼(∼A ∨ ∼B)         A (c, ∼E)
3.          ∼A              A (c, ∼E)
4.          ∼A ∨ ∼B         3 ∨I
5.          ⊥               4,2 ⊥I
6.       A                  3-5 ∼E
7.          ∼B              A (c, ∼E)
8.          ∼A ∨ ∼B         7 ∨I
9.          ⊥               8,2 ⊥I
10.      B                  7-9 ∼E
11.      A ∧ B              6,10 ∧I
12.      ⊥                  11,1 ⊥I
13.   ∼A ∨ ∼B               2-12 ∼E
14.   ∼A ∨ ∼B               A (g, ↔I)
15.      ∼A                 A (g, 14∨E)
16.         A ∧ B           A (c, ∼I)
17.         A               16 ∧E
18.         ⊥               17,15 ⊥I
19.      ∼(A ∧ B)           16-18 ∼I
20.      ∼B                 A (g, 14∨E)
21.         A ∧ B           A (c, ∼I)
22.         B               21 ∧E
23.         ⊥               22,20 ⊥I
24.      ∼(A ∧ B)           21-23 ∼I
25.   ∼(A ∧ B)              14,15-19,20-24 ∨E
26. ∼(A ∧ B) ↔ (∼A ∨ ∼B)    1-13,14-25 ↔I

E6.23. Complete the following derivations by filling in justifications for each line. Then for each application of ∀E or ∃I, show that the free for constraint is met.

b.
1. Gaa         P
2. ∃yGay       1 ∃I
3. ∃x∃yGxy     2 ∃I

For (2), a is free for y in Gay (as a constant must be). And again, for (3), a is free for x in ∃yGxy (as a constant must be). So the restriction is met in each case.
E6.24. The following are not legitimate ND derivations. In each case, explain why.

b.
1. ∀x∃yGxy    P
2. ∃yGyy      1 ∀E

y is not free for x in ∃yGxy. So the constraint is not met: we cannot instantiate to a term whose variables are bound in the result!
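The "free for" constraint itself is mechanical. A toy Python sketch (all names hypothetical, not from the text) over a tiny formula representation, showing why y is not free for x in ∃yGxy while a constant is:

```python
# Formulas as nested tuples: ('E', var, body) for an existential,
# ('G', t1, t2) for an atomic with two terms.
def free_vars(f):
    if f[0] == 'G':
        return {t for t in f[1:] if t.islower()}
    if f[0] == 'E':
        return free_vars(f[2]) - {f[1]}

def free_for(term, var, f):
    # term is free for var in f iff no free occurrence of var lies in the
    # scope of a quantifier binding term itself
    if f[0] == 'G':
        return True
    if f[0] == 'E':
        if var not in free_vars(f):
            return True  # var has no free occurrence here; nothing to capture
        return term != f[1] and free_for(term, var, f[2])

f = ('E', 'y', ('G', 'x', 'y'))       # represents the formula "exists y, Gxy"
print(free_for('y', 'x', f))          # y would be captured
print(free_for('a', 'x', f))          # a constant cannot be captured
```

This only handles variable-or-constant terms, but the capture phenomenon it exhibits is exactly the one the answer describes.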
E6.25. Provide derivations to show each of the following.
b. ∀x∀yFxy ⊢ND Fab ∧ Fba

1. ∀x∀yFxy      P
2. ∀yFay        1 ∀E
3. Fab          2 ∀E
4. ∀yFby        1 ∀E
5. Fba          4 ∀E
6. Fab ∧ Fba    3,5 ∧I

g. Gaf¹z ⊢ND ∃x∃yGxy

1. Gaf¹z       P
2. ∃yGay       1 ∃I
3. ∃x∃yGxy     2 ∃I

k. ∀x(Fx → Gx), ∃yGy → Ka ⊢ND Fa → ∃xKx

1. ∀x(Fx → Gx)    P
2. ∃yGy → Ka      P
3.    Fa          A (g, →I)
4.    Fa → Ga     1 ∀E
5.    Ga          4,3 →E
6.    ∃yGy        5 ∃I
7.    Ka          2,6 →E
8.    ∃xKx        7 ∃I
9. Fa → ∃xKx      3-8 →I

E6.26. Complete the following derivations by filling in justifications for each line. Then for each application of ∀I or ∃E show that the constraints are met by running through each of the three requirements.


b.
1. ∀y(Fy → Gy)    P
2. ∃zFz           P
3.    Fj          A (g, 2∃E)
4.    Fj → Gj     1 ∀E
5.    Gj          3,4 →E
6.    ∃xGx        5 ∃I
7. ∃xGx           2,3-6 ∃E

For ∃E at (7): (i) j is free for z in Fz; (ii) j is not free in any undischarged auxiliary assumption; (iii) j is not free in ∃zFz or in ∃xGx. So the restrictions are met.
E6.27. The following are not legitimate ND derivations. In each case, explain why.

a.
1. Gjy → Fjy          P
2. ∀z(Gzy → Fjy)      1 ∀I

j is free in ∀z(Gzy → Fjy); so constraint (iii) on ∀I is not met. The restriction requires that each instance of the variable be replaced!
E6.28. Provide derivations to show each of the following.
c. ∀xKx, ∀x(Kx → Sx) ⊢ND ∀x(Hx ∨ Sx)

1. ∀xKx            P
2. ∀x(Kx → Sx)     P
3. Kj              1 ∀E
4. Kj → Sj         2 ∀E
5. Sj              4,3 →E
6. Hj ∨ Sj         5 ∨I
7. ∀x(Hx ∨ Sx)     6 ∀I

f. ∃yByyy ⊢ND ∃x∃y∃zBxyz

1. ∃yByyy           P
2.    Bjjj          A (g, 1∃E)
3.    ∃zBjjz        2 ∃I
4.    ∃y∃zBjyz      3 ∃I
5.    ∃x∃y∃zBxyz    4 ∃I
6. ∃x∃y∃zBxyz       1,2-5 ∃E



k. ∀x∀y(Fx → Gy) ⊢ND ∀x(Fx → ∀yGy)

1. ∀x∀y(Fx → Gy)     P
2.    Fj             A (g, →I)
3.    ∀y(Fj → Gy)    1 ∀E
4.    Fj → Gk        3 ∀E
5.    Gk             4,2 →E
6.    ∀yGy           5 ∀I
7. Fj → ∀yGy         2-6 →I
8. ∀x(Fx → ∀yGy)     7 ∀I

E6.29. For each of the following, (i) which primary strategies apply? and (ii) show
the next two steps. If the strategies call for a new subgoal, show the subgoal;
if they call for a subderivation, set up the subderivation. In each case, explain
your response.
a.
1. ∃x∃y(Fxy ∧ Gyx)    P

   ∃x∃yFyx

There is no contradiction in accessible lines, so SG1 does not apply. Since the premise has main operator ∃, SG2 does apply; so we set up for ∃E. The result leaves another accessible formula with main operator ∃. So we set up for ∃E again. The result is as follows.

1. ∃x∃y(Fxy ∧ Gyx)        P
2.    ∃y(Fjy ∧ Gyj)       A (g, 1∃E)
3.       Fjk ∧ Gkj        A (g, 2∃E)

         ∃x∃yFyx
      ∃x∃yFyx             2,3- ∃E
   ∃x∃yFyx                1,2- ∃E

E6.30. Each of the following sets up an application of ∼I or ∼E for SG4 or SG5. Complete the derivations, and explain your use of secondary strategy.



a.
1. ∼∃x(Fx ∧ Gx)      P
2.    Fj             A (g, →I)
3.       Gj          A (c, ∼I)

         ⊥
      ∼Gj            3- ∼I
   Fj → ∼Gj          2- →I
   ∀x(Fx → ∼Gx)      ∀I

There are no atomics and negated atomics to be had, other than the ones on (2) and (3), so SC1 does not apply. There is no existential or disjunction in the subderivation for ∼I, so SC2 does not apply. But it is easy to build up to the opposite of ∼∃x(Fx ∧ Gx) on (1) in application of SC3. The result is as follows.

1. ∼∃x(Fx ∧ Gx)         P
2.    Fj                A (g, →I)
3.       Gj             A (c, ∼I)
4.       Fj ∧ Gj        2,3 ∧I
5.       ∃x(Fx ∧ Gx)    4 ∃I
6.       ⊥              5,1 ⊥I
7.    ∼Gj               3-6 ∼I
8. Fj → ∼Gj             2-7 →I
9. ∀x(Fx → ∼Gx)         8 ∀I

E6.31. Produce derivations to show each of the following. Though no full answers
are provided, strategy hints are available for the first problems.
a. 8x.Bx ! W x/, 9xW x `ND 9xBx
With an existential in the premises, you can go for the primary goal by 9E, in
application of SG2. Then set up for 9I.
b. 8x8y8zGxyz `ND 8x8y8z.H xyz ! Gzyx/
Think repeatedly from the bottom up in terms of SG4. This sets you up for
three applications of 8I and one of !I. Then the derivation is easy by 8E in
application of SG3.
c. 8xAx ! 8y.Dxy $ Bf 1 f 1 y/, 8x.Ax ^ Bx/ `ND 8xDf 1 xf 1 x


After setting up to go for the goal by 8I in application of SG4, it will be


natural to fall through to SG5, and go for a contradiction. For this, you can
aim for conflict at the level of atomics and negated atomics, in application of
SC1. Do not forget that you can use a premise more than once. And do not
forget that you can instantiate a universal quantifier to complex terms of the
sort f 1 j or even f 1 f 1 f 1 j .
d. 8x.H x ! 8yRxyb/, 8x8z.Razx ! S xzz/ `ND Ha ! 9xS xcc
The primary goal has main operator !, so set up to get it by !I, in application of SG4. This gives 9xS xcc as a subgoal which, again in another
application of SG4 you can set out to get from Stcc for some term t. The
key is then to chose terms so that in application of SG3, you can exploit the
premises for such an expression.
e. 8x.F x ^ Abx/ $ 8xKx, 8y9x.F x ^ Abx/ ^ Ryy `ND 8xKx
Though it is tempting to go for the goal in the usual way by I, notice that
it exists whole in the premises; so you should go for it by the higher priority
strategy SG3. This gives you 8x.F x ^ Abx/ as a subgoal. Also notice
that a little bookkeeping (8E with ^E) exposes an existential in the second
premise. The argument will be smoothest if you expose the existential, and
go for the goal by SG2.
f. 9x.J xa ^ C b/, 9x.S x ^ H xx/, 8x.C b ^ S x/ ! Ax `ND 9z.Az ^
H zz/
With two existential premises, set up to get the goal by two applications of
9E, in application of SG2 (order does not matter, though you will need to use
different variables). Then you can think about getting the existential goal by
9I in application of SG4.
g. 8x8y.Dxy ! C xy/, 8x9yDxy, 8x8y.Cyx ! Dxy/ `ND 9x9y.C xy ^
Cyx/
The second premise is one instance of 8E away from an existential, and it
makes sense to take this step, and go for the goal by 9E, in application of
SG2. Then you will want to go for the goal by 9I, by a couple of applications
of SG4. This gets you into a case of exploiting the premises by SG3. Do not
forget that you can use a premise more than once.
h. 8x8y.Ry _ Dx/ ! Ky, 8x9y.Ax ! Ky/, 9x.Ax _ Rx/ `ND
9xKx

With an existential premise, go for the goal by 9E, in application of SG2.


Then in another application of SG2, you can go for the goal by _E. The
second premise will be helpful in one of the subderivations, and the first in
the other.
i. 8y.My ! Ay/, 9x9y.Bx ^M x/^.Ry ^Syx/, 9xAx ! 8y8z.Syz !
Ay/ `ND 9x.Rx ^ Ax/
Given the existentially quantified premise, set up to reach the primary goal
by 9E with a couple applications of SG2. Then you can go for the goal by
9I in application of SG4. You then have an extended project of exploiting the
premises to reach your subgoal, in application of SG3.
j. 8x8y.H by ^ H xb/ ! H xy, 8z.Bz ! H bz/, 9x.Bx ^ H xb/
`ND 9zBz ^ 8y.By ! H zy/
You can go for the primary goal by 9E, in application of SG2. Then set up for
subgoals by a series of applications of SG4 (for 9I, ^I, 8I and !I). Then the
derivation reduces to exploiting the premises for the subgoal in application of
SG3.
k. 8x..F x ^ Kx/ ! 9y.F y ^ Hyx/ ^ Ky/,
8x.F x ^ 8y.F y ^ Hyx/ ! Ky/ ! Kx ! M a `ND M a
The goal exists as such in the premises; so it is natural to set up to get it, in
application of SG3, by !E. This results in 8x.F x ^ 8y.F y ^ Hyx/ !
Ky/ ! Kx as a subgoal. Do not chicken out, this is the real problem! You
can set up for this by a couple applications of SG4. In the end, you end up
with an atomic subgoal, and may fall through for this to SG5; in this case,
SC2 helps for the contradiction.
l. 8x8y.Gx ^ Gy/ ! .H xy ! Hyx/, 8x8y8z..Gx ^ Gy/ ^ Gz !
.H xy ^ Hyz/ ! H xz/ `ND 8w.Gw ^ 9z.Gz ^ H wz/ ! H ww/
You can go for the primary goal by 8I, and then the subgoal by !I, in
straightforward applications of SG4. This gives you an accessible existential as a conjunct of the assumption for !I, and after ^E, you can go for
the goal by 9E, in application of SG2. then it is a matter of exploiting the
premises in application of SG3. Notice that you can instantiate x and z in the
second premise to the same variable.
m. 8x8y.Ax ^By/ ! C xy, 9yEy ^8w.H w ! Cyw/, 8x8y8z.C xy ^
Cyz/ ! C xz, 8w.Ew ! Bw/ `ND 8z8w.Az ^ H w/ ! C zw

With an existential in the premises, you can go for the goal by 9E, in application of SG2. Then you can apply SG4 for a couple applications of 8I and
one of !I. After that, it is a matter of exploiting the premises for the goal.
n. 8x9y8z.Axyz _ Bzyx/, 9x9y9zBzyx `ND 8x9y8zAxyz
It is reasonable to think about reaching goals by SG4 and, after applications
of 8E, using 9E and then _E in application of SG2. The trick is to set things
up so that you do not screen off variables for universal introduction with
the assumptions.
o. A → ∃xFx ⊢ND ∃x(A → Fx)
It is reasonable to try for the goal by ∃I, but this is a dead end, and we fall through to SG5. You can get the contradiction by building up to the opposite of your assumption in application of SC3. Then you will be able to use the premise and then, in an application of ∃E, build up to the opposite of the assumption again! Other options apply the SC3/SC4 model, with either the consequent or the negation of the antecedent (either of which give you the conditional and so a first contradiction) as starting assumption.
p. 8xF x ! A `ND 9x.F x ! A/
It is reasonable to try for the goal by 9I, but this is a dead end, and we fall
through to SG5. You can get the contradiction by building up to the opposite
of your assumption in application of SC3. This will let you use the premise
and, in order to obtain the antecedent of the premise, build up to the opposite
of the assumption again! Other options apply the SC3/SC4 model, with either
the consequent or the negation of the antecedent (either of which give you the
conditional and so a first contradiction) as starting assumption.
E6.32. Produce derivations to demonstrate each of T6.27 - T6.30, explaining for
each application how quantifier restrictions are met.
T6.28. P → Q ⊢ND P → ∀xQ    where variable x is not free in formula P

1. P → Q       P
2.    P        A (g, →I)
3.    Q        1,2 →E
4.    ∀xQ      3 ∀I
5. P → ∀xQ     2-4 →I


On ∀I at (4): (i) x is sure to be free for every free instance of itself in Q; so the first condition is satisfied. (ii) It is given that x is not free in P, so that it cannot be free in the auxiliary assumption at (2); so the second condition is satisfied. (iii) x is automatically bound in ∀xQ; so the third condition is satisfied.
E6.33. Produce derivations to show T6.31 - T6.36.
T6.32. ⊢ND (xi = y) → (hⁿx1 … xi … xn = hⁿx1 … y … xn)

1.    xi = y                              A (g, →I)
2.    hⁿx1 … xi … xn = hⁿx1 … xi … xn     =I
3.    hⁿx1 … xi … xn = hⁿx1 … y … xn      2,1 =E
4. (xi = y) → (hⁿx1 … xi … xn = hⁿx1 … y … xn)    1-3 →I

E6.34. Produce derivations to show each of the following.

a. ⊢ND ∀x∃y(x = y)

1. j = j           =I
2. ∃y(j = y)       1 ∃I
3. ∀x∃y(x = y)     2 ∀I

E6.35. Produce derivations to show the following.

T6.39. ⊢PN (t + ∅) = t

1. (j + ∅) = j         Q3
2. ∀x((x + ∅) = x)     1 ∀I
3. (t + ∅) = t         2 ∀E

a. ⊢PN (SS∅ + S∅) = SSS∅

1. (SS∅ + S∅) = S(SS∅ + ∅)    T6.40
2. (SS∅ + ∅) = SS∅            T6.39
3. (SS∅ + S∅) = SSS∅          1,2 =E
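The recursive clauses behind T6.39 and T6.40 can be run directly. A small Python sketch (not part of the text) with numerals as nested tuples, checking the derived claim SS∅ + S∅ = SSS∅:

```python
# Z stands for the zero numeral; S wraps a numeral in one more successor.
Z = ()

def S(n):
    return (n,)

def add(m, n):
    # the recursion equations: (m + 0) = m and (m + Sn) = S(m + n)
    if n == Z:
        return m
    return S(add(m, n[0]))

two, one, three = S(S(Z)), S(Z), S(S(S(Z)))
print(add(two, one) == three)  # True: SS0 + S0 = SSS0
```

The derivation above is exactly one unfolding of the second equation followed by one of the first.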

f. ⊢QN ∼∃x((x + SS∅) = S∅)

1. (j + SS∅) = S(j + S∅)               T6.40
2. (j + S∅) = S(j + ∅)                 T6.40
3. S(j + S∅) = S∅ → (j + S∅) = ∅       T6.38
4. ∼(S(j + ∅) = ∅)                     T6.37
5.    ∃x((x + SS∅) = S∅)               A (c, ∼I)
6.       (j + SS∅) = S∅                A (c, 5∃E)
7.       S(j + S∅) = S∅                6,1 =E
8.       (j + S∅) = ∅                  3,7 →E
9.       S(j + ∅) = ∅                  8,2 =E
10.      ⊥                             9,4 ⊥I
11.   ⊥                                5,6-10 ∃E
12. ∼∃x((x + SS∅) = S∅)                5-11 ∼I

E6.36. Produce derivations to show T6.53 - T6.65.

T6.53. ⊢PN ((r + s) + ∅) = (r + (s + ∅))

1. ((r + s) + ∅) = (r + s)          T6.39
2. (s + ∅) = s                      T6.39
3. ((r + s) + ∅) = (r + (s + ∅))    1,2 =E

T6.54. ⊢PN ((r + s) + t) = (r + (s + t))

1. ((r + s) + ∅) = (r + (s + ∅))                T6.53
2. ((r + s) + Sj) = S((r + s) + j)              T6.40
3. (r + S(s + j)) = S(r + (s + j))              T6.40
4. (s + Sj) = S(s + j)                          T6.40
5.    ((r + s) + j) = (r + (s + j))             A (g, →I)
6.    ((r + s) + Sj) = S(r + (s + j))           2,5 =E
7.    ((r + s) + Sj) = (r + S(s + j))           6,3 =E
8.    ((r + s) + Sj) = (r + (s + Sj))           7,4 =E
9. (((r + s) + j) = (r + (s + j))) → (((r + s) + Sj) = (r + (s + Sj)))     5-8 →I
10. ∀x((((r + s) + x) = (r + (s + x))) → (((r + s) + Sx) = (r + (s + Sx))))    9 ∀I
11. ∀x(((r + s) + x) = (r + (s + x)))           1,10 IN
12. ((r + s) + t) = (r + (s + t))               11 ∀E

E6.38. Produce derivations to show each of the following.

f. ∀x∀y∃zAf¹xyz, ∀x∀y∀z(Axyz → ∼(∼Cxyz ∨ ∼Bzyx))
⊢ND+ ∃x∃y∼∀z∼Bzg¹yf¹g¹x

805

ANSWERS FOR CHAPTER 7


1. ∀x∀y∃zAf¹xyz                                              P
2. ∀x∀y∀z(Axyz → ∼(∼Cxyz ∨ ∼Bzyx))                           P
3. ∀y∃zAf¹g¹jyz                                              1 ∀E
4. ∃zAf¹g¹jg¹kz                                              3 ∀E
5.    Af¹g¹jg¹kl                                             A (g, 4∃E)
6.    ∀y∀z(Af¹g¹jyz → ∼(∼Cf¹g¹jyz ∨ ∼Bzyf¹g¹j))              2 ∀E
7.    ∀z(Af¹g¹jg¹kz → ∼(∼Cf¹g¹jg¹kz ∨ ∼Bzg¹kf¹g¹j))          6 ∀E
8.    Af¹g¹jg¹kl → ∼(∼Cf¹g¹jg¹kl ∨ ∼Blg¹kf¹g¹j)              7 ∀E
9.    ∼(∼Cf¹g¹jg¹kl ∨ ∼Blg¹kf¹g¹j)                           8,5 →E
10.   Cf¹g¹jg¹kl ∧ Blg¹kf¹g¹j                                9 DeM
11.   Blg¹kf¹g¹j                                             10 ∧E
12.   ∃zBzg¹kf¹g¹j                                           11 ∃I
13.   ∼∀z∼Bzg¹kf¹g¹j                                         12 QN
14.   ∃y∼∀z∼Bzg¹yf¹g¹j                                       13 ∃I
15.   ∃x∃y∼∀z∼Bzg¹yf¹g¹x                                     14 ∃I
16. ∃x∃y∼∀z∼Bzg¹yf¹g¹x                                       4,5-15 ∃E
n. ∃xFx → ∀yGy, ∀x(Kx → ∃yJy), ∼∃y∼Gy → ∃xKx ⊢ND+ ∼∃xFx ∨ ∃yJy

1. ∃xFx → ∀yGy         P
2. ∀x(Kx → ∃yJy)       P
3. ∼∃y∼Gy → ∃xKx       P
4.    ∃xFx             A (g, →I)
5.    ∀yGy             1,4 →E
6.    ∼∃y∼Gy           5 QN
7.    ∃xKx             3,6 →E
8.       Kj            A (g, 7∃E)
9.       Kj → ∃yJy     2 ∀E
10.      ∃yJy          9,8 →E
11.   ∃yJy             7,8-10 ∃E
12. ∃xFx → ∃yJy        4-11 →I
13. ∼∃xFx ∨ ∃yJy       12 Impl

Chapter Seven
E7.1. Suppose I[A] = T, I[B] ≠ T and I[C] = T. For each of the following, produce a formalized derivation, and then non-formalized reasoning to demonstrate either that it is or is not true on I.


b. I[∼B → ∼C] ≠ T

1. I[B] ≠ T                        prem
2. I[∼B] = T                       1 ST(∼)
3. I[C] = T                        prem
4. I[∼C] ≠ T                       3 ST(∼)
5. I[∼B] = T and I[∼C] ≠ T         2,4 cnj
6. I[∼B → ∼C] ≠ T                  5 ST(→)

It is given that I[B] ≠ T; so by ST(∼), I[∼B] = T. But it is given that I[C] = T; so by ST(∼), I[∼C] ≠ T. So I[∼B] = T and I[∼C] ≠ T; so by ST(→), I[∼B → ∼C] ≠ T.
E7.2. Produce a formalized derivation, and then informal reasoning to demonstrate each of the following.

a. A → B, ∼A ⊬s ∼B

Set J[A] ≠ T, J[B] = T.

1. J[A] ≠ T                                           ins (J particular)
2. J[∼A] = T                                          1 ST(∼)
3. J[A] ≠ T or J[B] = T                               1 dsj
4. J[A → B] = T                                       3 ST(→)
5. J[B] = T                                           ins
6. J[∼B] ≠ T                                          5 ST(∼)
7. J[A → B] = T and J[∼A] = T and J[∼B] ≠ T           4,2,6 cnj
8. for some I, I[A → B] = T and I[∼A] = T and I[∼B] ≠ T    7 exs
9. A → B, ∼A ⊬s ∼B                                    8 SV

J[A] ≠ T; so by ST(∼), J[∼A] = T. But since J[A] ≠ T, J[A] ≠ T or J[B] = T; so by ST(→), J[A → B] = T. And J[B] = T; so by ST(∼), J[∼B] ≠ T. So J[A → B] = T, and J[∼A] = T, but J[∼B] ≠ T; so there is an interpretation I such that I[A → B] = T, and I[∼A] = T, but I[∼B] ≠ T; so by SV, A → B, ∼A ⊬s ∼B.

b. A → B, ∼B ⊨s ∼A

1. Suppose A → B, ∼B ⊬s ∼A                                      assp
2. for some I, I[A → B] = T and I[∼B] = T and I[∼A] ≠ T         1 SV
3. J[A → B] = T and J[∼B] = T and J[∼A] ≠ T                     2 exs (J particular)
4. J[∼B] = T                                                    3 cnj
5. J[B] ≠ T                                                     4 ST(∼)
6. J[A → B] = T                                                 3 cnj
7. J[A] ≠ T or J[B] = T                                         6 ST(→)
8. J[A] ≠ T                                                     7,5 dsj
9. J[∼A] ≠ T                                                    3 cnj
10. J[A] = T                                                    9 ST(∼)
11. A → B, ∼B ⊨s ∼A                                             1-10 neg

Suppose A → B, ∼B ⊬s ∼A; then by SV there is an I such that I[A → B] = T and I[∼B] = T and I[∼A] ≠ T. Let J be a particular interpretation of this sort; then J[A → B] = T and J[∼B] = T and J[∼A] ≠ T. Since J[∼B] = T, by ST(∼), J[B] ≠ T. And since J[A → B] = T, either J[A] ≠ T or J[B] = T; so J[A] ≠ T. But since J[∼A] ≠ T, by ST(∼), J[A] = T. This is impossible; reject the assumption: A → B, ∼B ⊨s ∼A.

E7.4. Complete the demonstration of derived clauses ST0 by completing the demonstration for dst in the other direction (and providing demonstrations for other
clauses).
1. .A M B/ O .:A M :B/ M :.:A O B/ M .:B O A/
2. .A M B/ O .:A M :B/
3. :.:A O B/ M .:B O A/
4. :.:A O B/ O :.:B O A/
5.
:A O B
:.:B O A/
6.
7.
B M :A
B
8.
AOB
9.
10.
:.:A M :B/
AMB
11.
12.
A
:A
13.
14. :.:A O B/
15. A M :B
16. A
17. A O B
18. :.:A M :B/
19. A M B
20. B
21. :B
22. .A M B/ O .:A M :B/ ) .:A O B/ M .:B O A/

assp
1 cnj
1 cnj
3 dem
assp
4,5 dsj
6 dem
7 cnj
8 dsj
9 dem
2,10 dsj
11 cnj
7 cnj
5-13 neg
14 dem
15 cnj
16 dsj
17 dem
2,18 dsj
19 cnj
15 cnj
1-22 cnd

E7.5. Using ST(|) as on p. 330, produce non-formalized reasonings to show each of the following.

b. I[P | (Q | Q)] = T iff I[P → Q] = T

By ST(|), I[P | (Q | Q)] = T iff I[P] ≠ T or I[Q | Q] ≠ T; by ST(|), iff I[P] ≠ T or (I[Q] = T and I[Q] = T); iff I[P] ≠ T or I[Q] = T; by ST(→), iff I[P → Q] = T. So I[P | (Q | Q)] = T iff I[P → Q] = T.

E7.6. Produce non-formalized reasoning to demonstrate each of the following.



b. ∼(∼A ↔ B), A, ∼B ⊨s C ∧ ∼C

Suppose ∼(∼A ↔ B), A, ∼B ⊭s C ∧ ∼C; then by SV there is some I such that I[∼(∼A ↔ B)] = T, and I[A] = T, and I[∼B] = T, but I[C ∧ ∼C] ≠ T. Let J be a particular interpretation of this sort; then J[∼(∼A ↔ B)] = T, and J[A] = T, and J[∼B] = T, but J[C ∧ ∼C] ≠ T. From the first, by ST(∼), J[∼A ↔ B] ≠ T; so by ST′(↔), (J[∼A] = T and J[B] ≠ T) or (J[∼A] ≠ T and J[B] = T). But since J[A] = T, by ST(∼), J[∼A] ≠ T; so J[∼A] ≠ T or J[B] = T; so it is not the case that J[∼A] = T and J[B] ≠ T; so J[∼A] ≠ T and J[B] = T; so J[B] = T. But J[∼B] = T; so by ST(∼), J[B] ≠ T. This is impossible; reject the assumption: ∼(∼A ↔ B), A, ∼B ⊨s C ∧ ∼C.

c. ∼(A ∧ B) ⊭s ∼A ∧ ∼B

Set J[A] = T and J[B] ≠ T.

J[A] = T; so by ST(∼), J[∼A] ≠ T; so J[∼A] ≠ T or J[∼B] ≠ T; so by ST′(∧), J[∼A ∧ ∼B] ≠ T. But it is given that J[B] ≠ T; so J[A] ≠ T or J[B] ≠ T; so by ST′(∧), J[A ∧ B] ≠ T; so by ST(∼), J[∼(A ∧ B)] = T. So J[∼(A ∧ B)] = T and J[∼A ∧ ∼B] ≠ T; so by SV, ∼(A ∧ B) ⊭s ∼A ∧ ∼B.
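The counterexample can also be confirmed mechanically. The sketch below (an illustration, not the book's method) enumerates all valuations and collects those on which the premise ∼(A ∧ B) is true while ∼A ∧ ∼B is false:

```python
from itertools import product

# Truth-table search (illustrative): find valuations witnessing that
# ~(A & B) does not sententially entail ~A & ~B.
def counterexamples():
    found = []
    for A, B in product([True, False], repeat=2):
        premise = not (A and B)
        conclusion = (not A) and (not B)
        if premise and not conclusion:
            found.append((A, B))
    return found

print(counterexamples())  # [(True, False), (False, True)]
```

The valuation J[A] = T, J[B] ≠ T used above appears as the first witness.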

E7.8. Consider some I and d, and suppose I[A] = T, I[B] ≠ T and I[C] = T. For each of the expressions in E7.1, produce the formalized and then informal reasoning to demonstrate either that it is or is not satisfied on Id.

b. Id[∼B → ∼C] ≠ S

1. I[B] ≠ T    ins
2. Id[B] ≠ S    1 SF(s)
3. Id[∼B] = S    2 SF(∼)
4. I[C] = T    ins
5. Id[C] = S    4 SF(s)
6. Id[∼C] ≠ S    5 SF(∼)
7. Id[∼B] = S ∧ Id[∼C] ≠ S    3,6 cnj
8. Id[∼B → ∼C] ≠ S    7 SF(→)

I[B] ≠ T; so by SF(s), Id[B] ≠ S; so by SF(∼), Id[∼B] = S. But I[C] = T; so by SF(s), Id[C] = S; so by SF(∼), Id[∼C] ≠ S. So Id[∼B] = S and Id[∼C] ≠ S; so by SF(→), Id[∼B → ∼C] ≠ S.

E7.9. Produce formalized derivations and non-formalized reasoning to show that each of the expressions in E7.6 that is sententially valid (a,b,f,g,h,j) is quantificationally valid.



b. ∼(A ↔ B), ∼A, ∼B ⊨ C ∧ ∼C

1. ∼(A ↔ B), ∼A, ∼B ⊭ C ∧ ∼C    assp
2. ∃I(I[∼(A ↔ B)] = T ∧ I[∼A] = T ∧ I[∼B] = T ∧ I[C ∧ ∼C] ≠ T)    1 QV
3. J[∼(A ↔ B)] = T ∧ J[∼A] = T ∧ J[∼B] = T ∧ J[C ∧ ∼C] ≠ T    2 exs (J particular)
4. J[C ∧ ∼C] ≠ T    3 cnj
5. ∃d(Jd[C ∧ ∼C] ≠ S)    4 TI
6. Jh[C ∧ ∼C] ≠ S    5 exs (h particular)
7. J[∼(A ↔ B)] = T    3 cnj
8. ∀d(Jd[∼(A ↔ B)] = S)    7 TI
9. Jh[∼(A ↔ B)] = S    8 unv
10. Jh[A ↔ B] ≠ S    9 SF(∼)
11. (Jh[A] = S ∧ Jh[B] ≠ S) ∨ (Jh[A] ≠ S ∧ Jh[B] = S)    10 SF′(↔)
12. J[∼A] = T    3 cnj
13. ∀d(Jd[∼A] = S)    12 TI
14. Jh[∼A] = S    13 unv
15. Jh[A] ≠ S    14 SF(∼)
16. Jh[A] ≠ S ∨ Jh[B] = S    15 dsj
17. ∼(Jh[A] = S ∧ Jh[B] ≠ S)    16 dem
18. Jh[A] ≠ S ∧ Jh[B] = S    11,17 dsj
19. Jh[B] = S    18 cnj
20. J[∼B] = T    3 cnj
21. ∀d(Jd[∼B] = S)    20 TI
22. Jh[∼B] = S    21 unv
23. Jh[B] ≠ S    22 SF(∼)
24. ∼(A ↔ B), ∼A, ∼B ⊨ C ∧ ∼C    1-23 neg

Suppose ∼(A ↔ B), ∼A, ∼B ⊭ C ∧ ∼C; then by QV, there is some I such that I[∼(A ↔ B)] = T and I[∼A] = T and I[∼B] = T but I[C ∧ ∼C] ≠ T. Let J be a particular interpretation of this sort; then J[∼(A ↔ B)] = T and J[∼A] = T and J[∼B] = T but J[C ∧ ∼C] ≠ T. From the latter, by TI, there is some d such that Jd[C ∧ ∼C] ≠ S; let h be a particular assignment of this sort; then Jh[C ∧ ∼C] ≠ S. Since J[∼(A ↔ B)] = T, by TI, for any d, Jd[∼(A ↔ B)] = S; so Jh[∼(A ↔ B)] = S; so by SF(∼), Jh[A ↔ B] ≠ S; so by SF′(↔), (*) either both Jh[A] = S and Jh[B] ≠ S or both Jh[A] ≠ S and Jh[B] = S. But since J[∼A] = T, by TI, for any d, Jd[∼A] = S; so Jh[∼A] = S; so by SF(∼), Jh[A] ≠ S; so either Jh[A] ≠ S or Jh[B] = S; so it is not the case that both Jh[A] = S and Jh[B] ≠ S; so with (*) Jh[A] ≠ S and Jh[B] = S; so Jh[B] = S. But since J[∼B] = T, by TI, for any d, Jd[∼B] = S; so Jh[∼B] = S; so by SF(∼), Jh[B] ≠ S. This is impossible; reject the assumption: ∼(A ↔ B), ∼A, ∼B ⊨ C ∧ ∼C.

E7.11. Consider an I and d such that U = {1, 2}, I[a] = 1, I[f²] = {⟨⟨1,1⟩, 2⟩, ⟨⟨1,2⟩, 1⟩, ⟨⟨2,1⟩, 1⟩, ⟨⟨2,2⟩, 2⟩}, I[g¹] = {⟨1,1⟩, ⟨2,1⟩}, d[x] = 1 and d[y] = 2. Produce formalized derivations and non-formalized reasoning to determine the assignment Id for each of the following.

c. g¹g¹x

1. d[x] = 1    ins (d particular)
2. Id[x] = 1    1 TA(v) (I particular)
3. Id[g¹x] = I[g¹]⟨1⟩    2 TA(f)
4. I[g¹]⟨1⟩ = 1    ins
5. Id[g¹x] = 1    3,4 eq
6. Id[g¹g¹x] = I[g¹]⟨1⟩    5 TA(f)
7. Id[g¹g¹x] = 1    6,4 eq

d[x] = 1; so by TA(v), Id[x] = 1; so by TA(f), Id[g¹x] = I[g¹]⟨1⟩. But I[g¹]⟨1⟩ = 1; so Id[g¹x] = 1; so by TA(f), Id[g¹g¹x] = I[g¹]⟨1⟩; so, since I[g¹]⟨1⟩ = 1, Id[g¹g¹x] = 1.
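The same computation can be run directly. The sketch below (an illustration, not from the text) transcribes I[g¹] and d as Python dictionaries and evaluates the term recursively, mirroring TA(v) and TA(f):

```python
# Term evaluation on the given interpretation (illustrative transcription):
# U = {1, 2}, I[g1] maps 1 -> 1 and 2 -> 1, and d[x] = 1, d[y] = 2.
g1 = {1: 1, 2: 1}
d = {"x": 1, "y": 2}

def term_value(t):
    # t is a variable name (TA(v)) or a pair ("g1", subterm) (TA(f))
    if isinstance(t, str):
        return d[t]
    return g1[term_value(t[1])]

print(term_value(("g1", ("g1", "x"))))  # 1, matching Id[g1 g1 x] = 1
```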

E7.12. Augment the above interpretation for E7.11 so that I[A¹] = {1} and I[B²] = {⟨1,2⟩, ⟨2,2⟩}. Produce formalized derivations and non-formalized reasoning to demonstrate each of the following.

b. I[Byx] ≠ T

1. d[y] = 2    ins
2. Id[y] = 2    1 TA(v)
3. d[x] = 1    ins
4. Id[x] = 1    3 TA(v)
5. Id[Byx] = S iff ⟨2,1⟩ ∈ I[B]    2,4 SF(r)
6. ⟨2,1⟩ ∉ I[B]    ins
7. Id[Byx] ≠ S    5,6 bcnd
8. ∃h(Ih[Byx] ≠ S)    7 exs
9. I[Byx] ≠ T    8 TI

d[y] = 2 and d[x] = 1; so by TA(v), Id[y] = 2 and Id[x] = 1; so by SF(r), Id[Byx] = S iff ⟨2,1⟩ ∈ I[B]; but ⟨2,1⟩ ∉ I[B]; so Id[Byx] ≠ S; so there is an assignment h such that Ih[Byx] ≠ S; so by TI, I[Byx] ≠ T.

E7.13. Produce formalized derivations and non-formalized reasoning to demonstrate each of the following.



c. Pa ⊨ ∃xPx

1. Pa ⊭ ∃xPx    assp
2. ∃I(I[Pa] = T ∧ I[∃xPx] ≠ T)    1 QV
3. J[Pa] = T ∧ J[∃xPx] ≠ T    2 exs (J particular)
4. J[∃xPx] ≠ T    3 cnj
5. ∃d(Jd[∃xPx] ≠ S)    4 TI
6. Jh[∃xPx] ≠ S    5 exs (h particular)
7. J[Pa] = T    3 cnj
8. ∀d(Jd[Pa] = S)    7 TI
9. Jh[Pa] = S    8 unv
10. Jh[a] = m    def
11. Jh[Pa] = S iff m ∈ J[P]    10 SF(r)
12. m ∈ J[P]    9,11 bcnd
13. ∀o(Jh(x|o)[Px] ≠ S)    6 SF′(∃)
14. Jh(x|m)[Px] ≠ S    13 unv
15. h(x|m)[x] = m    ins
16. Jh(x|m)[x] = m    15 TA(v)
17. Jh(x|m)[Px] = S iff m ∈ J[P]    16 SF(r)
18. m ∉ J[P]    17,14 bcnd
19. Pa ⊨ ∃xPx    1-18 neg

Suppose Pa ⊭ ∃xPx; then by QV, there is some I such that I[Pa] = T but I[∃xPx] ≠ T; let J be a particular interpretation of this sort; then J[Pa] = T but J[∃xPx] ≠ T; from the latter, by TI, there is a d such that Jd[∃xPx] ≠ S; let h be a particular assignment of this sort; then Jh[∃xPx] ≠ S. Since J[Pa] = T, by TI, for any d, Jd[Pa] = S; so Jh[Pa] = S. Let Jh[a] = m; then by SF(r), Jh[Pa] = S iff m ∈ J[P]; so m ∈ J[P]. But since Jh[∃xPx] ≠ S, by SF′(∃), for any o ∈ U, Jh(x|o)[Px] ≠ S; so Jh(x|m)[Px] ≠ S. h(x|m)[x] = m; so by TA(v), Jh(x|m)[x] = m; so by SF(r), Jh(x|m)[Px] = S iff m ∈ J[P]; so m ∉ J[P]. This is impossible; reject the assumption: Pa ⊨ ∃xPx.
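The validity can also be checked exhaustively over small universes. The sketch below (an illustration, not part of the text) enumerates every extension for P and every denotation for a, and confirms that whenever Pa is true, ∃xPx is true as well:

```python
from itertools import product

# Exhaustive model check on small universes (illustrative): if the
# denotation of a falls in I[P], some object satisfies Px, so no
# interpretation makes Pa true and Exists x Px false.
def check(n):
    U = range(n)
    for a in U:
        for bits in product([False, True], repeat=n):
            P = {o for o, b in zip(U, bits) if b}
            if a in P and not any(o in P for o in U):
                return False  # would be a countermodel
    return True

print(all(check(n) for n in (1, 2, 3, 4)))  # True
```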

E7.14. Provide a demonstration for (b) T7.6 in the non-formalized style.

For any I and P, I[P] = T iff I[∀xP] = T

(i) Show: For arbitrary I and P, suppose I[P] = T but I[∀xP] ≠ T . . . . This is impossible; reject the assumption: if I[P] = T then I[∀xP] = T. (ii) Suppose I[∀xP] = T but I[P] ≠ T; from the latter, by TI, there is some d such that Id[P] ≠ S; let h be a particular assignment of this sort; then Ih[P] ≠ S. But I[∀xP] = T; so for any d, Id[∀xP] = S; so Ih[∀xP] = S; so by SF(∀), for any o ∈ U, Ih(x|o)[P] = S; let m = h[x]; then Ih(x|m)[P] = S; but where m = h[x], h(x|m) = h; so Ih[P] = S. This is impossible; reject the assumption: if I[∀xP] = T then I[P] = T. So from (i) and (ii), for arbitrary I and P, I[P] = T iff I[∀xP] = T.

E7.15. Produce interpretations (with, if necessary, variable assignments) and then formalized derivations and non-formalized reasoning to show each of the following.

b. ⊭ f¹g¹x = g¹f¹x

For interpretation J set U = {1, 2}, J[g¹] = {⟨1,1⟩, ⟨2,1⟩}, J[f¹] = {⟨1,2⟩, ⟨2,2⟩}, and for assignment h, set h[x] = 1.

1. h[x] = 1    ins (h particular)
2. Jh[x] = 1    1 TA(v) (J particular)
3. Jh[g¹x] = J[g¹]⟨1⟩    2 TA(f)
4. J[g¹]⟨1⟩ = 1    ins
5. Jh[g¹x] = 1    3,4 eq
6. Jh[f¹g¹x] = J[f¹]⟨1⟩    5 TA(f)
7. J[f¹]⟨1⟩ = 2    ins
8. Jh[f¹g¹x] = 2    6,7 eq
9. Jh[f¹x] = J[f¹]⟨1⟩    2 TA(f)
10. Jh[f¹x] = 2    9,7 eq
11. Jh[g¹f¹x] = J[g¹]⟨2⟩    10 TA(f)
12. J[g¹]⟨2⟩ = 1    ins
13. Jh[g¹f¹x] = 1    11,12 eq
14. Jh[f¹g¹x = g¹f¹x] = S iff ⟨2,1⟩ ∈ J[=]    8,13 SF(r)
15. ⟨2,1⟩ ∉ J[=]    ins
16. Jh[f¹g¹x = g¹f¹x] ≠ S    14,15 bcnd
17. ∃d(Jd[f¹g¹x = g¹f¹x] ≠ S)    16 exs
18. J[f¹g¹x = g¹f¹x] ≠ T    17 TI
19. ∃I(I[f¹g¹x = g¹f¹x] ≠ T)    18 exs
20. ⊭ f¹g¹x = g¹f¹x    19 QV

h[x] = 1; so by TA(v), Jh[x] = 1; so by TA(f), Jh[g¹x] = J[g¹]⟨1⟩; but J[g¹]⟨1⟩ = 1; so Jh[g¹x] = 1; so by TA(f), Jh[f¹g¹x] = J[f¹]⟨1⟩; but J[f¹]⟨1⟩ = 2; so Jh[f¹g¹x] = 2. Since Jh[x] = 1, by TA(f), Jh[f¹x] = J[f¹]⟨1⟩; so Jh[f¹x] = 2; so by TA(f), Jh[g¹f¹x] = J[g¹]⟨2⟩; but J[g¹]⟨2⟩ = 1; so Jh[g¹f¹x] = 1. So by SF(r), Jh[f¹g¹x = g¹f¹x] = S iff ⟨2,1⟩ ∈ J[=]; but ⟨2,1⟩ ∉ J[=]; so Jh[f¹g¹x = g¹f¹x] ≠ S. So there is a d such that Jd[f¹g¹x = g¹f¹x] ≠ S; so by TI, J[f¹g¹x = g¹f¹x] ≠ T; so there is an I such that I[f¹g¹x = g¹f¹x] ≠ T; so by QV, ⊭ f¹g¹x = g¹f¹x.
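The countermodel can be computed directly. The sketch below (an illustration, not from the text) transcribes J's function tables and confirms that the two terms denote different objects:

```python
# Direct transcription of interpretation J (illustrative):
# U = {1, 2}, J[g1] = {1 -> 1, 2 -> 1}, J[f1] = {1 -> 2, 2 -> 2}, h[x] = 1.
g1 = {1: 1, 2: 1}
f1 = {1: 2, 2: 2}
x = 1

lhs = f1[g1[x]]   # the term f1 g1 x
rhs = g1[f1[x]]   # the term g1 f1 x
print(lhs, rhs)   # 2 1 -- distinct, so the identity fails on J
```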

E7.16. Provide demonstrations for T7.7 - T7.9 in the non-formalized style.


ANSWERS FOR CHAPTER 8


T7.8. ⊨ (xi = y) → (hⁿx₁ . . . xi . . . xn = hⁿx₁ . . . y . . . xn)
Simplified version: ⊨ (x = y) → (h¹x = h¹y)

Suppose ⊭ (x = y) → (h¹x = h¹y); then by QV, there is some I such that I[(x = y) → (h¹x = h¹y)] ≠ T; let J be a particular interpretation of this sort; then J[(x = y) → (h¹x = h¹y)] ≠ T; so by TI there is a d such that Jd[(x = y) → (h¹x = h¹y)] ≠ S; let h be a particular assignment of this sort; then Jh[(x = y) → (h¹x = h¹y)] ≠ S; so by SF(→), Jh[x = y] = S and Jh[h¹x = h¹y] ≠ S. From the former, by SF(r), ⟨Jh[x], Jh[y]⟩ ∈ J[=]; but for any o, p ∈ U, ⟨o,p⟩ ∈ J[=] iff o = p; so Jh[x] = Jh[y]. From the latter, by SF(r), ⟨Jh[h¹x], Jh[h¹y]⟩ ∉ J[=]; so Jh[h¹x] ≠ Jh[h¹y]. But by TA(f), Jh[h¹x] = J[h¹]⟨Jh[x]⟩ and Jh[h¹y] = J[h¹]⟨Jh[y]⟩; so with Jh[x] = Jh[y], J[h¹]⟨Jh[x]⟩ = J[h¹]⟨Jh[y]⟩; so Jh[h¹x] = Jh[h¹y]. This is impossible; reject the assumption: ⊨ (x = y) → (h¹x = h¹y).

E7.18. Suppose we want to show that ∀x∃yRxy, ∀x∃yRyx, ∀x∀y∀z((Rxy ∧ Ryz) → Rxz) ⊭ ∃xRxx.

a. Explain why no interpretation with a finite universe will do.

Suppose U is finite and that things are related by R as indicated by the arrows.

o0 → o1 → o2 → o3 → o4 → o5 → . . . → on

From the first premise, there can be no thing, like on, that does not have R to any thing. From the second, there can be no thing, like o0, such that nothing bears R to it. The third premise guarantees that R obtains along any path along the arrows; so in this case, we must also have ⟨o0, o2⟩, ⟨o0, o3⟩, all the way to ⟨o0, on⟩. Given this, if there is a loop, so that one thing bears R to a thing before, it bears R to itself, and the conclusion is not false. The solution for keeping the premises true and conclusion false is to let the series continue in both directions.
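The impossibility on finite universes can be verified by brute force. The sketch below (an illustration, not part of the text) enumerates every relation R on a universe of size n and reports whether any is serial in both directions and transitive while keeping ∃xRxx false:

```python
from itertools import product

# Brute-force check (illustrative): over a finite universe, no R is
# serial (forall x exists y Rxy), inverse-serial (forall x exists y Ryx),
# and transitive while remaining irreflexive -- so no finite
# interpretation makes the premises true and Exists x Rxx false.
def witness_exists(n):
    U = range(n)
    pairs = [(x, y) for x in U for y in U]
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, b in zip(pairs, bits) if b}
        serial = all(any((x, y) in R for y in U) for x in U)
        inv_serial = all(any((y, x) in R for y in U) for x in U)
        transitive = all((x, z) in R
                         for (x, y1) in R for (y2, z) in R if y1 == y2)
        irreflexive = all((x, x) not in R for x in U)
        if serial and inv_serial and transitive and irreflexive:
            return True
    return False

for n in (1, 2, 3):
    print(n, witness_exists(n))  # False for each size checked
```

An infinite countermodel, by contrast, is easy: the integers under <.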

Chapter Eight
E8.1. For any (official) formula P of a quantificational language, where A(P) is the number of its atomic formulas and C(P) is the number of its arrow symbols, show that A(P) = C(P) + 1.

Basis: If P has no operator symbols, then P is a sentence letter S or an atomic Rⁿt₁ . . . tn. In either case, A(P) = 1. But there are no arrow symbols; so C(P) = 0; so C(P) + 1 = 1. So A(P) = C(P) + 1.
Assp: For any i, 0 ≤ i < k, if P has i operator symbols, then A(P) = C(P) + 1.
Show: If P has k operator symbols, then A(P) = C(P) + 1.
If P has k operator symbols, then it is of the form ∼A, (A → B), or ∀xA for variable x and formulas A and B with less than k operator symbols.
(∼) Suppose P is ∼A. Then A(P) = A(A) and C(P) = C(A). But by assumption, A(A) = C(A) + 1. So A(P) = C(P) + 1.
(→) Suppose P is (A → B). Then A(P) = A(A) + A(B) and C(P) = C(A) + C(B) + 1. Applying the assumption to the first, A(P) = [C(A) + 1] + [C(B) + 1] = [C(A) + C(B) + 1] + 1 = C(P) + 1. So A(P) = C(P) + 1.
(∀) Suppose P is ∀xA. Then as in the case for (∼), A(P) = A(A) and C(P) = C(A). But by assumption, A(A) = C(A) + 1. So A(P) = C(P) + 1.
So in any case, if P has k operator symbols, A(P) = C(P) + 1.
Indct: So for any P in a quantificational language, A(P) = C(P) + 1.
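The induction has a direct computational analogue. The sketch below (an illustration, not from the text) samples random formulas over the primitive operators and confirms the count:

```python
import random

# Empirical check (illustrative): in the primitive language with operators
# ~ ("neg"), -> ("arrow"), and the universal quantifier ("univ"), atomic
# subformulas always outnumber arrows by exactly one.
def random_formula(depth):
    if depth == 0 or random.random() < 0.3:
        return ("atom",)
    op = random.choice(["neg", "arrow", "univ"])
    if op == "arrow":
        return ("arrow", random_formula(depth - 1), random_formula(depth - 1))
    return (op, random_formula(depth - 1))

def atoms(f):
    return 1 if f[0] == "atom" else sum(atoms(g) for g in f[1:])

def arrows(f):
    here = 1 if f[0] == "arrow" else 0
    return here + sum(arrows(g) for g in f[1:])

random.seed(0)
assert all(atoms(f) == arrows(f) + 1
           for f in (random_formula(6) for _ in range(1000)))
print("A(P) = C(P) + 1 holds on all sampled formulas")
```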
E8.3. Let S(n) be the sum of the first n even integers; that is S(n) = 2 + 4 + . . . + 2n. Show, by mathematical induction, that for any n ≥ 1, S(n) = n(n + 1).
Basis: If n = 1 then S(n) = 2, and n(n + 1) = 1(1 + 1) = 2. So S(n) = n(n + 1).
Assp: For any i, 1 ≤ i < k, S(i) = i(i + 1).
Show: S(k) = k(k + 1). S(k) is equal to the sum of all the even numbers up to the one before the kth even number, added to the kth even number; that is, S(k) = S(k - 1) + 2k. But since k - 1 < k, by assumption S(k - 1) = (k - 1)((k - 1) + 1) = k² - k. So S(k) = (k² - k) + 2k = k² + k = k(k + 1). So S(k) = k(k + 1).
Indct: For any n, S(n) = n(n + 1).
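A numeric check of the identity (an illustration, not part of the text):

```python
# E8.3, checked numerically: the sum of the first n even integers is n(n+1).
def S(n):
    return sum(2 * i for i in range(1, n + 1))

assert all(S(n) == n * (n + 1) for n in range(1, 200))
print(S(10))  # 110 = 10 * 11
```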
E8.5. Using the fact that any diagonal of a k-sided polygon divides it into two polygons with < k sides, show by mathematical induction that the sum of the interior angles of any convex polygon P with n sides, S(P) = (n - 2)180.

Basis: If n = 3, then P is a triangle; but by reasoning as in the main text, the sum of the angles in a triangle is 180. So S(P) = 180. But (3 - 2)180 = 180. So S(P) = (n - 2)180.
Assp: For any i, 3 ≤ i < k, every P with i sides has S(P) = (i - 2)180.
Show: For every P with k sides, S(P) = (k - 2)180.
If P has k sides, then for some a such that both a and k - a are > 1, a diagonal divides it into a figure Q with a + 1 sides, and a figure R with (k - a) + 1 sides, where S(P) = S(Q) + S(R). Since a > 1, k > (k - a) + 1; and since k - a > 1, k > a + 1; so by assumption, S(Q) = ((a + 1) - 2)180 and S(R) = ((k - a + 1) - 2)180. So S(P) = ((a + 1) - 2)180 + ((k - a + 1) - 2)180 = ((a + 1 - 2) + (k - a + 1 - 2))180 = (k - 2)180.
Indct: For any P, S(P) = (n - 2)180.
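The arithmetic in the Show step can be checked mechanically (an illustration, not part of the text): splitting a k-gon into an (a + 1)-gon and a ((k - a) + 1)-gon preserves the (k - 2)180 total for every admissible a:

```python
# Check of the Show-step arithmetic (illustrative): the two pieces'
# angle sums recombine to (k - 2) * 180.
def split_sum(k, a):
    return ((a + 1) - 2) * 180 + (((k - a) + 1) - 2) * 180

assert all(split_sum(k, a) == (k - 2) * 180
           for k in range(4, 60) for a in range(2, k - 1))
print(split_sum(7, 3))  # 900 = (7 - 2) * 180
```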

E8.17. Provide a complete argument for T8.2, completing cases for (∼) and (→). You should set up the complete induction, but may appeal to the text at parts that are already completed, just as the text appeals to homework.

T8.2. For variables x and v, if v is not free in a formula P and v is free for x in P, then (Pˣᵥ)ᵛₓ = P.

Let P be any formula such that v is not free in P and v is free for x in P. We show that (Pˣᵥ)ᵛₓ = P by induction on the number of operator symbols in P.

Basis: If P has no operator symbols, then (Pˣᵥ)ᵛₓ = P [from text].
Assp: For any i, 0 ≤ i < k, if P has i operator symbols, where v is not free in P and v is free for x in P, then (Pˣᵥ)ᵛₓ = P.
Show: Any P with k operator symbols is such that if v is not free in P and v is free for x in P, then (Pˣᵥ)ᵛₓ = P.
If P has k operator symbols, then it is of the form ∼A, A → B, or ∀wA for some variable w and formulas A and B with < k operator symbols.
(∼) Suppose P is ∼A, v is not free in P, and v is free for x in P. Then (Pˣᵥ)ᵛₓ = ∼(Aˣᵥ)ᵛₓ. Since v is not free in P, v is not free in A; and since v is free for x in P, v is free for x in A. So the assumption applies to A; so by assumption (Aˣᵥ)ᵛₓ = A; so ∼(Aˣᵥ)ᵛₓ = ∼A; which is to say, (Pˣᵥ)ᵛₓ = P.
(→) Suppose P is A → B, v is not free in P, and v is free for x in P. Then (Pˣᵥ)ᵛₓ = (Aˣᵥ)ᵛₓ → (Bˣᵥ)ᵛₓ. Since v is not free in P, v is not free in A or in B; and since v is free for x in P, v is free for x in A and B. So the assumption applies to A and B; so by assumption (Aˣᵥ)ᵛₓ = A and (Bˣᵥ)ᵛₓ = B; so (Aˣᵥ)ᵛₓ → (Bˣᵥ)ᵛₓ = A → B; which is to say, (Pˣᵥ)ᵛₓ = P.
(∀) Suppose P is ∀wA, v is not free in P, and v is free for x in P. Then (Pˣᵥ)ᵛₓ = P [from text].
If P has k operator symbols, if v is not free in P and v is free for x in P, then (Pˣᵥ)ᵛₓ = P.
Indct: For any P, if v is not free in P and v is free for x in P, then (Pˣᵥ)ᵛₓ = P.
E8.19. Provide a complete argument for T8.4, completing the case for (→), and expanding the other direction for (∀). You should set up the complete induction, but may appeal to the text at parts that are already completed, as the text appeals to homework.

T8.4. For any interpretation I, variable assignments d and h, and formula P, if d[x] = h[x] for every free variable x in P, then Id[P] = S iff Ih[P] = S.

By induction on the number of operator symbols in the formula P. Let I, d, h and P be arbitrary, and suppose d[x] = h[x] for every variable x free in P.
Basis: If P has no operator symbols, then Id[P] = S iff Ih[P] = S [as in text].
Assp: For any i, 0 ≤ i < k, if P has i operator symbols and d[x] = h[x] for every free variable x in P, then Id[P] = S iff Ih[P] = S.
Show: If P has k operator symbols and d[x] = h[x] for every free variable x in P, then Id[P] = S iff Ih[P] = S.
If P has k operator symbols, then it is of the form ∼A, A → B, or ∀vA for variable v and formulas A and B with < k operator symbols. Suppose d[x] = h[x] for every free variable x in P.
(∼) Suppose P is ∼A. Then Id[P] = S iff Ih[P] = S [as in text].
(→) Suppose P is A → B. Then since d[x] = h[x] for every free variable x in P, and every variable free in A and in B is free in P, d[x] = h[x] for every free variable in A and in B; so the inductive assumption applies to A and to B. Id[P] = S iff Id[A → B] = S; by SF(→), iff Id[A] ≠ S or Id[B] = S; by assumption, iff Ih[A] ≠ S or Ih[B] = S; by SF(→), iff Ih[A → B] = S; iff Ih[P] = S.
(∀) Suppose P is ∀vA. Then since d[x] = h[x] for every free variable x in P, d[x] = h[x] for every free variable in A with the possible exception of v; so for arbitrary o ∈ U, d(v|o)[x] = h(v|o)[x] for every free variable x in A. Since the assumption applies to arbitrary assignments, it applies to d(v|o) and h(v|o); so by assumption Id(v|o)[A] = S iff Ih(v|o)[A] = S.
If Id[P] = S, then Ih[P] = S [from the text]. Suppose Ih[P] = S but Id[P] ≠ S; then Ih[∀vA] = S but Id[∀vA] ≠ S; from the latter, by SF(∀), there is some o ∈ U such that Id(v|o)[A] ≠ S; so, as above, with the inductive assumption, Ih(v|o)[A] ≠ S. But Ih[∀vA] = S; so by SF(∀), for any m ∈ U, Ih(v|m)[A] = S; so Ih(v|o)[A] = S. This is impossible; reject the assumption: if Ih[P] = S, then Id[P] = S. So Id[P] = S iff Ih[P] = S.
If P has k operator symbols, then Id[P] = S iff Ih[P] = S.
Indct: For any P, Id[P] = S iff Ih[P] = S.
E8.23. Complete the proof of T8.7 by showing by induction on the number of operator symbols in an arbitrary formula P that if v is distinct from x, then (Pᵛₜ)ᶜₓ = (Pᶜₓ)ᵛ_{tᶜₓ}.

Suppose v is distinct from x.

Basis: If P has no operator symbols, then it is a sentence letter S or an atomic of the form Rⁿr₁ . . . rn for some relation symbol Rⁿ and terms r₁ . . . rn. (i) Suppose P is a sentence letter S. Then no changes are made, and (Sᵛₜ)ᶜₓ = S = (Sᶜₓ)ᵛ_{tᶜₓ}. (ii) So suppose P is Rⁿr₁ . . . rn. Then (Pᵛₜ)ᶜₓ = Rⁿ((r₁ᵛₜ)ᶜₓ . . . (rnᵛₜ)ᶜₓ) and (Pᶜₓ)ᵛ_{tᶜₓ} = Rⁿ((r₁ᶜₓ)ᵛ_{tᶜₓ} . . . (rnᶜₓ)ᵛ_{tᶜₓ}). But since v is distinct from x, by part (i) from the text, (r₁ᵛₜ)ᶜₓ = (r₁ᶜₓ)ᵛ_{tᶜₓ}, and . . . and (rnᵛₜ)ᶜₓ = (rnᶜₓ)ᵛ_{tᶜₓ}; so Rⁿ((r₁ᵛₜ)ᶜₓ . . . (rnᵛₜ)ᶜₓ) = Rⁿ((r₁ᶜₓ)ᵛ_{tᶜₓ} . . . (rnᶜₓ)ᵛ_{tᶜₓ}); so (Pᵛₜ)ᶜₓ = (Pᶜₓ)ᵛ_{tᶜₓ}.
Assp: For any i, 0 ≤ i < k, if P has i operator symbols and v is distinct from x, then (Pᵛₜ)ᶜₓ = (Pᶜₓ)ᵛ_{tᶜₓ}.
Show: Any P with k operator symbols is such that if v is distinct from x then (Pᵛₜ)ᶜₓ = (Pᶜₓ)ᵛ_{tᶜₓ}.
If P has k operator symbols, then it is of the form ∼A, A → B, or ∀wA for variable w and formulas A and B with < k operator symbols.
(∼) Suppose P is ∼A. Then (Pᵛₜ)ᶜₓ = ∼(Aᵛₜ)ᶜₓ and (Pᶜₓ)ᵛ_{tᶜₓ} = ∼(Aᶜₓ)ᵛ_{tᶜₓ}. Since v is distinct from x, by assumption, (Aᵛₜ)ᶜₓ = (Aᶜₓ)ᵛ_{tᶜₓ}; so ∼(Aᵛₜ)ᶜₓ = ∼(Aᶜₓ)ᵛ_{tᶜₓ}; and this is just to say, (Pᵛₜ)ᶜₓ = (Pᶜₓ)ᵛ_{tᶜₓ}.
(→) Suppose P is A → B. Then (Pᵛₜ)ᶜₓ = (Aᵛₜ)ᶜₓ → (Bᵛₜ)ᶜₓ and (Pᶜₓ)ᵛ_{tᶜₓ} = (Aᶜₓ)ᵛ_{tᶜₓ} → (Bᶜₓ)ᵛ_{tᶜₓ}. Since v is distinct from x, by assumption (Aᵛₜ)ᶜₓ = (Aᶜₓ)ᵛ_{tᶜₓ} and (Bᵛₜ)ᶜₓ = (Bᶜₓ)ᵛ_{tᶜₓ}; so (Aᵛₜ)ᶜₓ → (Bᵛₜ)ᶜₓ = (Aᶜₓ)ᵛ_{tᶜₓ} → (Bᶜₓ)ᵛ_{tᶜₓ}; and this is just to say, (Pᵛₜ)ᶜₓ = (Pᶜₓ)ᵛ_{tᶜₓ}.
(∀) Suppose P is ∀wA. If w is the same variable as v, then there are no free instances of v in P; so Pᵛₜ = P and (Pᵛₜ)ᶜₓ = Pᶜₓ; but similarly, (Pᶜₓ)ᵛ_{tᶜₓ} = Pᶜₓ; so (Pᵛₜ)ᶜₓ = (Pᶜₓ)ᵛ_{tᶜₓ}. If w is different from v then just the same instances of v are replaced in P as in A; so (Pᵛₜ)ᶜₓ = ∀w(Aᵛₜ)ᶜₓ and (Pᶜₓ)ᵛ_{tᶜₓ} = ∀w(Aᶜₓ)ᵛ_{tᶜₓ}; but since v is distinct from x, by assumption, (Aᵛₜ)ᶜₓ = (Aᶜₓ)ᵛ_{tᶜₓ}; so ∀w(Aᵛₜ)ᶜₓ = ∀w(Aᶜₓ)ᵛ_{tᶜₓ}; and this is just to say, (Pᵛₜ)ᶜₓ = (Pᶜₓ)ᵛ_{tᶜₓ}.
For any P with k operator symbols, (Pᵛₜ)ᶜₓ = (Pᶜₓ)ᵛ_{tᶜₓ}.
Indct: For any P, (Pᵛₜ)ᶜₓ = (Pᶜₓ)ᵛ_{tᶜₓ}.

E8.28. Provide an argument to show T8.9.

T8.9. For any a, b, c ∈ U, if a × b = c then Q ⊢ND a × b = c. By induction on the value of b.
Basis: Suppose b = 0 and a × b = c; then c = 0; but by Q5, Q ⊢ND a × ∅ = ∅; so Q ⊢ND a × b = c.
Assp: For any i, 0 ≤ i < k, if a × i = c, then Q ⊢ND a × i = c.
Show: If a × k = c, then Q ⊢ND a × k = c.
Suppose a × k = c. Since k > 0, k is the same as S(k - 1); and a × (k - 1) = (a × k) - a = c - a; and by assumption Q ⊢ND a × (k - 1) = c - a. By Q6, Q ⊢ND a × S(k - 1) = (a × (k - 1)) + a; but S(k - 1) is k, so Q ⊢ND a × k = (a × (k - 1)) + a; so with =E, Q ⊢ND a × k = (c - a) + a. But (c - a) + a = c; so with T8.9, Q ⊢ND (c - a) + a = c; and by =E again, Q ⊢ND a × k = c.
Indct: For any a, b and c, if a × b = c, then Q ⊢ND a × b = c.
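The induction's computational content can be sketched directly. Assuming the standard reading of the Q axioms used above (Q5: a × ∅ = ∅; Q6: a × Sb = (a × b) + a), unfolding the recursion on b reproduces ordinary multiplication (an illustration, not part of the text):

```python
# Recursion mirroring the appeal to Q5 and Q6 in T8.9 (illustrative).
def times(a, b):
    if b == 0:                    # Q5: a * 0 = 0
        return 0
    return times(a, b - 1) + a    # Q6: a * S(b-1) = (a * (b-1)) + a

assert all(times(a, b) == a * b for a in range(20) for b in range(20))
print(times(6, 7))  # 42
```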
E8.34. Provide derivations to show both parts of T8.21.

T8.21. For any n and formula P(x), (i) if Q ⊢ND P(0) and Q ⊢ND P(1) and . . . and Q ⊢ND P(n) then Q ⊢ND (∀x ≤ n)P(x).

Basis: n = 0; so we need: if Q ⊢ND P(∅) then Q ⊢ND (∀x ≤ ∅)P(x).

1. P(∅)    given from Q
2. | j ≤ ∅    A (g (∀≤)I)
3. | j = ∅    2 with T8.16
4. | P(j)    1,3 =E
5. (∀x ≤ ∅)P(x)    2-4 (∀≤)I

Assp: For any i, 0 ≤ i < k, if Q ⊢ND P(0) and . . . and Q ⊢ND P(i), then Q ⊢ND (∀x ≤ i)P(x).
Show: If Q ⊢ND P(0) and . . . and Q ⊢ND P(k), then Q ⊢ND (∀x ≤ k)P(x).
Suppose Q ⊢ND P(0) and . . . and Q ⊢ND P(k).

1. | j ≤ k    A (g (∀≤)I)
2. | j = ∅ ∨ . . . ∨ j = k-1 ∨ j = k    1 with T8.16
3. | | j = ∅ ∨ . . . ∨ j = k-1    A (g 2∨E)
4. | | j ≤ k-1    3 with T8.17
5. | | (∀x ≤ k-1)P(x)    by assp
6. | | P(j)    5,4 (∀≤)E
7. | | j = k    A (g 2∨E)
8. | | P(k)    given from Q
9. | | P(j)    8,7 =E
10. | P(j)    2,3-6,7-9 ∨E
11. (∀x ≤ k)P(x)    1-10 (∀≤)I

So Q ⊢ND (∀x ≤ k)P(x).

Indct: For any n, if Q ⊢ND P(0) and . . . and Q ⊢ND P(n), then Q ⊢ND (∀x ≤ n)P(x).
E8.35. Provide demonstrations for both parts of T8.22.

T8.22. For any n, (i) Q ⊢ND ∀x(x ≤ n ↔ (x < n ∨ x = n)).


ANSWERS FOR CHAPTER 9


1.

j n

A (g $I)

2.
3.

j D 0 _ j D 1 _ ::: _ j D n
j D 0 _ ::: _ j D n 1

from 1 with T8.16


A (g 2_E)

4.
5.
6.

; ; _ j D 0 _ ::: _ j D n
j <n
j <n_j Dn

7.

j Dn

A (g 2_E)

8.

j <n_j Dn

7 _I

3 _I
from 4 with T8.17
5 _I

9.

j <n_j Dn

2,3-6,7-8 _E

10.

j <n_j Dn

A (g $I)

11.

j <n

A (g 10_E)

12.
13.
14.
15.
16.

; ; _ j D 0 _ ::: _ j D n 1
.; ;/
j D ; _ ::: _ j D n 1
j D ; _ ::: _ j D n 1 _ j D n
j n

from 11 with T8.16


by DI and DN
12,13 DS
14 _I
from 15 with T8.17

17.

j Dn

A (g 10_E)

18.
19.

j D ; _ j D 1 _ ::: _ j D n
j n

17 _I
from 18 with T8.17

20.

j n

10,11-16,17-19 _E

21. j  n $ .j < n _ j D n/
22. 8xx  n $ .x < n _ x D n/

1-10,11-20 $I
21 8I

Chapter Nine
E9.2. Set up the above induction for T9.2, and complete the unfinished cases to show that if Γ ⊢AD P, then Γ ⊢ND P. For cases completed in the text, you may simply refer to the text, as the text refers cases to homework.

Basis: Q1 in A is a premise or an instance of A1, A2, A3, A4, A5, A6 or A7.
(prem) From text.
(A1) From text.
(A2) From text.
(A3) If Q1 is an instance of A3, then it is of the form (∼C → ∼B) → ((∼C → B) → C), and we continue N as follows,

0.a Qa    P
0.b Qb    P
. . .
0.j Qj    P
1.1 | ∼C → ∼B    A (g, →I)
1.2 | | ∼C → B    A (g, →I)
1.3 | | | ∼C    A (c, ∼E)
1.4 | | | B    1.2,1.3 →E
1.5 | | | ∼B    1.1,1.3 →E
1.6 | | | ⊥    1.4,1.5 ⊥I
1.7 | | C    1.3-1.6 ∼E
1.8 | (∼C → B) → C    1.2-1.7 →I
1 (∼C → ∼B) → ((∼C → B) → C)    1.1-1.8 →I

So Q1 appears, under the scope of the premises alone, on the line numbered 1 of N.
(A4) From text.
(A5) If Q1 is an instance of A5, then it is of the form x = x for some variable x, and we continue N as follows,

0.a Qa    P
0.b Qb    P
. . .
0.j Qj    P
1 x = x    =I

So Q1 appears, under the scope of the premises alone, on the line numbered 1 of N.
(A6) From text.
(A7) If Q1 is an instance of A7, then it is of the form (xi = y) → (Rⁿx₁ . . . xi . . . xn → Rⁿx₁ . . . y . . . xn) for some variables x₁ . . . xn and y, and relation symbol Rⁿ; and we continue N as follows,

0.a Qa    P
0.b Qb    P
. . .
0.j Qj    P
1.1 | xi = y    A (g, →I)
1.2 | | Rⁿx₁ . . . xi . . . xn    A (g, →I)
1.3 | | Rⁿx₁ . . . y . . . xn    1.2,1.1 =E
1.4 | Rⁿx₁ . . . xi . . . xn → Rⁿx₁ . . . y . . . xn    1.2-1.3 →I
1 (xi = y) → (Rⁿx₁ . . . xi . . . xn → Rⁿx₁ . . . y . . . xn)    1.1-1.4 →I

So Q1 appears, under the scope of the premises alone, on the line numbered 1 of N.
Assp: For any i, 1 ≤ i < k, if Qi appears on line i of A, then Qi appears, under the scope of the premises alone, on the line numbered i of N.
Show: If Qk appears on line k of A, then Qk appears, under the scope of the premises alone, on the line numbered k of N.
Qk in A is a premise, an axiom, or arises from previous lines by MP or Gen. If Qk is a premise or an axiom then, by reasoning as in the basis (with line numbers adjusted to k.n), if Qk appears on line k of A, then Qk appears, under the scope of the premises alone, on the line numbered k of N. So suppose Qk arises by MP or Gen.
(MP) From text.
(Gen) From text.
In any case then, Qk appears, under the scope of the premises alone, on the line numbered k of N.
Indct: For any line j of A, Qj appears, under the scope of the premises alone, on the line numbered j of N.
E9.8. Set up the above demonstration for T9.7 and complete the unfinished case to provide a complete demonstration that for any formula A, and terms r and s, if s is free for the replaced instance of r in A, then ⊢AD (r = s) → (A → Ar//s).

Consider arbitrary r, s and A, and suppose s is free for the replaced instance of r in Ar//s.
Basis: If A has no operators and some term in it is replaced, then [from text] ⊢AD (r = s) → (A → Ar//s).
Assp: For any i, 0 ≤ i < k, if A has i operator symbols, then ⊢AD (r = s) → (A → Ar//s).
Show: If A has k operator symbols, then ⊢AD (r = s) → (A → Ar//s).
If A has k operator symbols, then A is of the form ∼P, P → Q, or ∀xP for variable x and formulas P and Q with < k operator symbols.
(∼) Suppose A is ∼P. Then [from text] ⊢AD (r = s) → (A → Ar//s).
(→) Suppose A is P → Q. Then Ar//s is Pr//s → Q or P → Qr//s. (i) In the former case [from text], ⊢AD (r = s) → (A → Ar//s). (ii) In the latter case, since s is free for the replaced instance of r in A, it is free for that instance of r in Q; so by assumption, ⊢AD (r = s) → (Q → Qr//s); so we may reason as follows,

1. (r = s) → (Q → Qr//s)    prem
2. | r = s    assp (g, DT)
3. | | P → Q    assp (g, DT)
4. | | | P    assp (g, DT)
5. | | | Q    3,4 MP
6. | | | Q → Qr//s    1,2 MP
7. | | | Qr//s    6,5 MP
8. | | P → Qr//s    4-7 DT
9. | (P → Q) → (P → Qr//s)    3-8 DT
10. (r = s) → ((P → Q) → (P → Qr//s))    2-9 DT

So ⊢AD (r = s) → ((P → Q) → (P → Qr//s)); which is to say, ⊢AD (r = s) → (A → Ar//s). So in either case, ⊢AD (r = s) → (A → Ar//s).
(∀) Suppose A is ∀xP. Then [from text] ⊢AD (r = s) → (A → Ar//s).
So for any A with k operator symbols, ⊢AD (r = s) → (A → Ar//s).
Indct: For any A, ⊢AD (r = s) → (A → Ar//s).
E9.10. Prove T9.9, to show that for any formulas A, B and C, if ⊢AD B ↔ C, then ⊢AD A ↔ AB//C.

Basis: If A is atomic, then the only formula to be replaced is A itself, and B is A; so AB//C is C. But then A ↔ AB//C is the same as B ↔ C. So if ⊢AD B ↔ C, then ⊢AD A ↔ AB//C.
Assp: For any i, 0 ≤ i < k, if A has i operator symbols, then if ⊢AD B ↔ C, then ⊢AD A ↔ AB//C.
Show: If A has k operator symbols, then if ⊢AD B ↔ C, then ⊢AD A ↔ AB//C.
If A has k operator symbols, then it is of the form ∼P, P → Q, or ∀xP, for variable x and formulas P and Q with < k operator symbols. If B is all of A, then as in the basis, if ⊢AD B ↔ C, then ⊢AD A ↔ AB//C. So suppose B is a proper subformula of A.
(∼) Suppose A is ∼P and B is a proper subformula of A. Then AB//C is ∼PB//C. Suppose ⊢AD B ↔ C. By assumption, ⊢AD P ↔ PB//C; so by (abv), ⊢AD (P → PB//C) ∧ (PB//C → P); so by T3.20 with MP, ⊢AD P → PB//C; and by T3.13 with MP, ⊢AD ∼PB//C → ∼P; similarly, by T3.19 with MP, ⊢AD PB//C → P; so by T3.13 with MP, ⊢AD ∼P → ∼PB//C; so by T9.4 with two applications of MP, ⊢AD (∼P → ∼PB//C) ∧ (∼PB//C → ∼P); so by abv, ⊢AD ∼P ↔ ∼PB//C; which is just to say, ⊢AD A ↔ AB//C.
(→) Suppose A is P → Q and B is a proper subformula of A. Then AB//C is PB//C → Q or P → QB//C. Suppose ⊢AD B ↔ C.
(i) Say AB//C is PB//C → Q. By assumption, ⊢AD P ↔ PB//C; so by (abv), ⊢AD (P → PB//C) ∧ (PB//C → P); by T3.19 with MP, ⊢AD PB//C → P; but by T3.5, ⊢AD (PB//C → P) → ((P → Q) → (PB//C → Q)); so by MP, ⊢AD (P → Q) → (PB//C → Q). Similarly, by T3.20 with MP, ⊢AD P → PB//C; and by T3.5, ⊢AD (P → PB//C) → ((PB//C → Q) → (P → Q)); so by MP, ⊢AD (PB//C → Q) → (P → Q). So by T9.4 with two applications of MP, ⊢AD ((P → Q) → (PB//C → Q)) ∧ ((PB//C → Q) → (P → Q)); so by abv, ⊢AD (P → Q) ↔ (PB//C → Q); which is just to say, ⊢AD A ↔ AB//C.
(ii) Say AB//C is P → QB//C. By assumption, ⊢AD Q ↔ QB//C; so by (abv), ⊢AD (Q → QB//C) ∧ (QB//C → Q); so by T3.20 with MP, ⊢AD Q → QB//C; but by T3.4, ⊢AD (Q → QB//C) → ((P → Q) → (P → QB//C)); so by MP, ⊢AD (P → Q) → (P → QB//C). Similarly, by T3.19 with MP, ⊢AD QB//C → Q; and by T3.4, ⊢AD (QB//C → Q) → ((P → QB//C) → (P → Q)); so by MP, ⊢AD (P → QB//C) → (P → Q). So by T9.4 with two applications of MP, ⊢AD ((P → Q) → (P → QB//C)) ∧ ((P → QB//C) → (P → Q)); so by abv, ⊢AD (P → Q) ↔ (P → QB//C); and this is just to say, ⊢AD A ↔ AB//C.
(∀) Suppose A is ∀xP and B is a proper subformula of A. Then AB//C is ∀xPB//C. Suppose ⊢AD B ↔ C. Then by assumption ⊢AD P ↔ PB//C; so by abv, ⊢AD (P → PB//C) ∧ (PB//C → P); so by T3.20 with MP, ⊢AD P → PB//C. But since x is always free for itself in P, by A4, ⊢AD ∀xP → P; so by T3.2, ⊢AD ∀xP → PB//C; and since x is not free in ∀xP, by Gen, ⊢AD ∀xP → ∀xPB//C. Similarly, by T3.19 with MP, ⊢AD PB//C → P; but, since x is free for itself in PB//C, by A4, ⊢AD ∀xPB//C → PB//C; so by T3.2, ⊢AD ∀xPB//C → P; and since x is not free in ∀xPB//C, by Gen, ⊢AD ∀xPB//C → ∀xP. So by T9.4 with two applications of MP, ⊢AD (∀xP → ∀xPB//C) ∧ (∀xPB//C → ∀xP); so by abv, ⊢AD ∀xP ↔ ∀xPB//C; which is to say ⊢AD A ↔ AB//C.
If A has k operator symbols, then if ⊢AD B ↔ C, then ⊢AD A ↔ AB//C.
Indct: For any A, if ⊢AD B ↔ C, then ⊢AD A ↔ AB//C.
E9.12. Set up the above induction for T9.10 and complete the unfinished cases (including the case for 9E) to show that if `ND P , then `AD P . For cases
completed in the text, you may simply refer to the text, as the text refers cases
to homework.
Suppose `ND P ; then there is an ND derivation N of P from premises in
. We show that for any i , there is a good AD derivation Ai that matches N
through line i .
Basis: The first line of N is a premise or an assumption. [From text] A1
matches N and is good.
Assp: For any i , 0  i < k, there is a good derivation Ai that matches N
through line i .
Show: There is a good derivation Ak that matches N through line k.
Either Qk is a premise or assumption, or arises from previous lines by
R, ^E, ^I, !E, !I, E, I, _E, _I, $E, $I, 8E, 8I, 9E, 9I, =E or
=I.
(p/a) From text.
(R) From text.
(^E) From text.
Exercise 9.12

826

ANSWERS FOR CHAPTER 9


(^I) From text.
(!E) From text.
(!I) From text.
(E) From text.
(I) If Qk arises by I, then N is something like this,
i

C ^ C

k B

i-j I

where i; j < k, the subderivation is accessible at line k, and Qk D


B. By assumption Ak 1 matches N through line k 1 and is good.
So B and C ^ C appear at the same scope on the lines numbered
i and j of Ak 1 ; since they appear at the same scope, the parallel
subderivation is accessible in Ak 1 ; since Ak 1 is good, no application of Gen under the scope of B is to a variable free in B. So let Ak
continue as follows,
i

C ^ C

k:1
k:2
k:3
k:4
k:5
k:6
k:7
k:8
k:9
k:10
k

B ! .C ^ C /
.C ^ C/ ! C
.C ^ C/ ! C
B!C
B ! C
B ! B
B ! C
B ! C
.B ! C/ ! ..B ! C / ! B/
.B ! C / ! B
B

i -j DT
T3.20
T3.19
k:1,k:2 T3.2
k:1,k:3 T3.2
T3.10
k:6,k:4 T3.2
k:6,k:5 T3.2
A3
k:9,k:8 MP
k:10,k:7 MP

So Qk appears at the same scope on the line numbered k of Ak ; so


Ak matches N through line k. And since there is no new application
of Gen, Ak is good.
(∨E) From text.
(∨I) If Qk arises by ∨I, then N is something like this,

    i   B                           i   B
    k   B ∨ C       i ∨I     or     k   C ∨ B       i ∨I

where i < k and B is accessible at line k. In the first case, Qk =
B ∨ C. By assumption Ak−1 matches N through line k−1 and is
good. So B appears at the same scope on the line numbered i of
Ak−1 and is accessible in Ak−1. So let Ak continue as follows,

    i    B
    k.1  B → (B ∨ C)                             T3.17
    k    B ∨ C                                   k.1,i MP

So Qk appears at the same scope on the line numbered k of Ak; so
Ak matches N through line k. And since there is no new application
of Gen, Ak is good. And similarly in the other case, by application of
T3.18.
(↔E) If Qk arises by ↔E, then N is something like this,

    i   B ↔ C                       i   B ↔ C
    j   B                           j   C
    k   C          i,j ↔E    or     k   B          i,j ↔E

where i, j < k and B ↔ C and B or C are accessible at line k. In
the first case, Qk = C. By assumption Ak−1 matches N through line
k−1 and is good. So B ↔ C and B appear at the same scope on the
lines numbered i and j of Ak−1 and are accessible in Ak−1. So let
Ak continue as follows,

    i    B ↔ C
    j    B
    k.1  (B → C) ∧ (C → B)                       i abv
    k.2  ((B → C) ∧ (C → B)) → (B → C)           T3.20
    k.3  B → C                                   k.2,k.1 MP
    k    C                                       k.3,j MP

So Qk appears at the same scope on the line numbered k of Ak; so
Ak matches N through line k. And since there is no new application
of Gen, Ak is good. And similarly in the other case, by application of
T3.19.
(↔I) If Qk arises by ↔I, then N is something like this,

    g   | B
    h   | C
    i   | C
    j   | B
    k   B ↔ C                                    g-h,i-j ↔I

where g, h, i, j < k, the two subderivations are accessible at line
k and Qk = B ↔ C. By assumption Ak−1 matches N through
line k−1 and is good. So the formulas at lines g, h, i, j appear at
the same scope on corresponding lines in Ak−1; since they appear at
the same scope, corresponding subderivations are accessible in Ak−1;
since Ak−1 is good, no application of Gen under the scope of B is to
a variable free in B and no application of Gen under the scope of C is
to a variable free in C. So let Ak continue as follows,

    g    | B
    h    | C
    i    | C
    j    | B
    k.1  B → C                                              g-h DT
    k.2  C → B                                              i-j DT
    k.3  (B → C) → ((C → B) → ((B → C) ∧ (C → B)))          T9.4
    k.4  (C → B) → ((B → C) ∧ (C → B))                      k.3,k.1 MP
    k.5  (B → C) ∧ (C → B)                                  k.4,k.2 MP
    k    B ↔ C                                              k.5 abv

So Qk appears at the same scope on the line numbered k of Ak; so
Ak matches N through line k. And since there is no new application
of Gen, Ak is good.
(∀E) If Qk arises by ∀E, then N looks something like this,

    i   ∀xB
    k   B^x_t                                    i ∀E

where i < k, ∀xB is accessible at line k, term t is free for variable
x in B, and Qk = B^x_t. By assumption Ak−1 matches N through line
k−1 and is good. So ∀xB appears at the same scope on the line
numbered i of Ak−1 and is accessible in Ak−1. So let Ak continue
as follows,

    i    ∀xB
    k.1  ∀xB → B^x_t                             A4
    k    B^x_t                                   k.1,i MP

Since t is free for x in B, k.1 is an instance of A4. So Qk appears
at the same scope on the line numbered k of Ak; so Ak matches N
through line k. And since there is no new application of Gen, Ak is
good.
(∀I) From text.
(∃E) If Qk arises by ∃E, then N looks something like this,

    h   ∃xB
    i   | B^x_v
    j   | C
    k   C                                        h,i-j ∃E

where h, i, j < k, ∃xB and the subderivation are accessible at line k,
and C is Qk; further, the ND restrictions on ∃E are met: (i) v is free
for x in B, (ii) v is not free in any undischarged auxiliary assumption,
and (iii) v is not free in ∃xB or in C. By assumption Ak−1 matches
N through line k−1 and is good. So the formulas at lines h, i and j
appear at the same scope on corresponding lines in Ak−1; since they
appear at the same scope, ∃xB and the corresponding subderivation
are accessible in Ak−1. Since Ak−1 is good, no application of Gen
under the scope of B^x_v is to a variable free in B^x_v. So let Ak continue
as follows,

    h    ∃xB
    i    | B^x_v
    j    | C
    k.1  B^x_v → C                                       i-j DT
    k.2  ∃vB^x_v → C                                     k.1 T3.30
    k.3  ∀v¬B^x_v → ∀x¬B                                 T3.27
    k.4  (∀v¬B^x_v → ∀x¬B) → (¬∀x¬B → ¬∀v¬B^x_v)         T3.13
    k.5  ¬∀x¬B → ¬∀v¬B^x_v                               k.4,k.3 MP
    k.6  ∃xB → ∃vB^x_v                                   k.5 abv
    k.7  ∃vB^x_v                                         h,k.6 MP
    k    C                                               k.2,k.7 MP

Since from constraint (iii), v is not free in C, k.2 meets the restriction
on T3.30. If v = x we can go directly from h and k.2 to k. So
suppose v ≠ x. To see that k.3 is an instance of T3.27, consider first,
∀v¬B^x_v → ∀x(¬B^x_v)^v_x; this is an instance of T3.27 so long as x is
not free in ∀v¬B^x_v but free for v in ¬B^x_v. First, since B^x_v has all
its free instances of x replaced by v, x is not free in ∀v¬B^x_v. Second,
since v ≠ x, with the constraint (iii), that v is not free in ∃xB, v is
not free in B, and so not in ¬B; but by (i), v is free for x in B and so
in ¬B; so v appears free in ¬B^x_v just where x is free in ¬B; so x is
free for every free instance of v in ¬B^x_v. So ∀v¬B^x_v → ∀x(¬B^x_v)^v_x
is an instance of T3.27. But since v is not free in B, and free for x in
B, by T8.2, (¬B^x_v)^v_x = ¬B. So k.3 is a version of T3.27.
So Qk appears at the same scope on the line numbered k of Ak; so
Ak matches N through line k. There is an application of Gen in T3.30
at k.2. But Ak−1 is good and since Ak matches N and, by (ii), v is
free in no undischarged auxiliary assumption of N, v is not free in
any undischarged auxiliary assumption of Ak; so Ak is good.
(∃I) If Qk arises by ∃I, then N looks something like this,

    i   B^x_t
    k   ∃xB                                      i ∃I

where i < k, B^x_t is accessible at line k, term t is free for variable
x in B, and Qk = ∃xB. By assumption Ak−1 matches N through
line k−1 and is good. So B^x_t appears at the same scope on the line
numbered i of Ak−1 and is accessible in Ak−1. So let Ak continue
as follows,

    i    B^x_t
    k.1  B^x_t → ∃xB                             T3.29
    k    ∃xB                                     k.1,i MP

Since t is free for x in B, k.1 is an instance of T3.29. So Qk appears
at the same scope on the line numbered k of Ak; so Ak matches N
through line k. And since there is no new application of Gen, Ak is
good.
(=E) If Qk arises by =E, then N is something like this,

    i   B                           i   B
    j   t = s                       j   s = t
    k   B^t/s      i,j =E    or     k   B^t/s     i,j =E

where i, j < k, s is free for the replaced instances of t in B, B and
the equality are accessible at line k, and Qk = B^t/s. By assumption
Ak−1 matches N through line k−1 and is good. So in the first case,
B and t = s appear at the same scope on the lines numbered i and
j of Ak−1 and are accessible in Ak−1. So augment Ak as follows,

    0.k  (t = s) → (B → B^t/s)                   T9.8
    i    B
    j    t = s
    k.1  B → B^t/s                               0.k,j MP
    k    B^t/s                                   k.1,i MP

Since s is free for the replaced instances of t in B, 0.k is an instance
of T9.8. So Qk appears at the same scope on the line numbered k
of Ak; so Ak matches N through line k. There may be applications
of Gen in the derivation of T9.8; but that derivation is under the scope
of no undischarged assumption. And under the scope of any undis-
charged assumptions, there is no new application of Gen; so Ak is
good. And similarly in the other case, with an initial application of
T3.33 and MP.
(=I) If Qk arises by =I, then N looks something like this,

    k   t = t                                    =I

where Qk is t = t. By assumption Ak−1 matches N through line
k−1 and is good. So let Ak continue as follows,

    k   t = t                                    T3.32

So Qk appears at the same scope on the line numbered k of Ak; so
Ak matches N through line k. And since there is no new application
of Gen, Ak is good.
In any case, Ak matches N through line k and is good.
Indct: Derivation A matches N and is good.
E9.15. Set up the above induction and complete the unfinished cases to show that if
Γ ⊢ND+ P, then Γ ⊢AD P. For cases completed in the text, you may simply
refer to the text, as the text refers cases to homework.
Suppose Γ ⊢ND+ P; then there is an ND+ derivation N of P from premises
in Γ. We show that for any i, there is a good AD derivation Ai that matches
N through line i.
Basis: The first line of N is a premise or an assumption. Let A1 be the same.
Then A1 matches N; and since there is no application of Gen, A1 is
good.
Assp: For any i, 0 ≤ i < k, there is a good derivation Ai that matches N
through line i.
Show: There is a good derivation Ak that matches N through line k.
Either Qk is a premise or assumption, arises by a rule of ND, or by
one of the ND+ derivation rules, MT, HS, DS, NB or a replacement rule.
If Qk arises by any of the rules other than HS, DS or NB, then by
reasoning from the text, there is a good derivation Ak that matches N
through line k.
(HS) If Qk arises from previous lines by HS, then N is something like this,

    i   B → C
    j   C → D
    k   B → D                                    i,j HS

where i, j < k, B → C and C → D are accessible at line k, and
Qk = B → D. By assumption Ak−1 matches N through line k−1
and is good. So B → C and C → D appear at the same scope on the
lines numbered i and j of Ak−1 and are accessible in Ak−1. So let
Ak continue as follows,

    i    B → C
    j    C → D
    k    B → D                                   i,j T3.2

So Qk appears at the same scope on the line numbered k of Ak; so
Ak matches N through line k. And since there is no new application
of Gen, Ak is good.

(DS) If Qk arises by DS, then N is something like this,

    i   B ∨ C                       i   B ∨ C
    j   ¬C                          j   ¬B
    k   B          i,j DS    or     k   C          i,j DS

where i, j < k, and the formulas at lines i and j are accessible at
line k. In the first case, Qk = B. By assumption Ak−1 matches N
through line k−1 and is good. So B ∨ C and ¬C appear at the same
scope on the lines numbered i and j of Ak−1 and are accessible in
Ak−1. So let Ak continue as follows,

    i    B ∨ C
    j    ¬C
    k.1  ¬B → C                                  i abv
    k.2  (¬B → C) → (¬C → B)                     T3.14
    k.3  ¬C → B                                  k.2,k.1 MP
    k    B                                       k.3,j MP

So Qk appears at the same scope on the line numbered k of Ak; so
Ak matches N through line k. And since there is no new application
of Gen, Ak is good. And similarly in the other case, by application of
MP immediately after k.1.
(NB) If Qk arises by NB, then N is something like this,

    i   B ↔ C                       i   B ↔ C
    j   ¬B                          j   ¬C
    k   ¬C         i,j NB    or     k   ¬B         i,j NB

where i, j < k, and the formulas at lines i and j are accessible at
line k. In the first case, Qk = ¬C. By assumption Ak−1 matches N
through line k−1 and is good. So B ↔ C and ¬B appear at the same
scope on the lines numbered i and j of Ak−1 and are accessible in
Ak−1. So let Ak continue as follows,

    i    B ↔ C
    j    ¬B
    k.1  (B → C) ∧ (C → B)                       i abv
    k.2  ((B → C) ∧ (C → B)) → (C → B)           T3.19
    k.3  C → B                                   k.2,k.1 MP
    k.4  (C → B) → (¬B → ¬C)                     T3.13
    k.5  ¬B → ¬C                                 k.4,k.3 MP
    k    ¬C                                      k.5,j MP

So Qk appears at the same scope on the line numbered k of Ak; so
Ak matches N through line k. And since there is no new application
of Gen, Ak is good. And similarly in the other case, with application
of T3.20 in place of T3.19.
In any case, Ak matches N through line k and is good.
Indct: Derivation A matches N and is good.

Chapter Ten
E10.1. Complete the case for (→) to complete the demonstration of T10.2. You
should set up the complete demonstration, but for cases completed in the text,
you may simply refer to the text, as the text refers cases to homework.
For arbitrary formula Q, term r and interpretation I, suppose r is free for x
in Q. By induction on the number of operator symbols in Q,
Basis: Suppose Id[r] = o. Then [from the text], Id[Q^x_r] = S iff Id(x|o)[Q] =
S.
Assp: For any i, 0 ≤ i < k, if Q has i operator symbols, r is free for x in Q
and Id[r] = o, then Id[Q^x_r] = S iff Id(x|o)[Q] = S.
Show: If Q has k operator symbols, r is free for x in Q and Id[r] = o, then
Id[Q^x_r] = S iff Id(x|o)[Q] = S.
Suppose Id[r] = o. If Q has k operator symbols, then Q is of the form
¬B, B → C, or ∀vB for variable v and formulas B and C with
< k operator symbols.
(¬) Suppose Q is ¬B. Then [from the text], Id[Q^x_r] = S iff Id(x|o)[Q] =
S.
(→) Suppose Q is B → C. Then Q^x_r = (B → C)^x_r = B^x_r → C^x_r. Since
r is free for x in Q, r is free for x in B and C; so by assumption,
Id[B^x_r] = S iff Id(x|o)[B] = S, and Id[C^x_r] = S iff Id(x|o)[C] = S.
But by SF(→), Id[B^x_r → C^x_r] = S iff Id[B^x_r] ≠ S or Id[C^x_r] = S;
by assumption, iff Id(x|o)[B] ≠ S or Id(x|o)[C] = S; by SF(→), iff
Id(x|o)[B → C] = S. So Id[Q^x_r] = S iff Id(x|o)[Q] = S.
(∀) Suppose Q is ∀vB. From the text, by the assumption, for any m ∈ U,
Id(v|m)[B^x_r] = S iff Id(v|m,x|o)[B] = S. In addition, if Id(x|o)[Q] = S
then Id[Q^x_r] = S. Now suppose Id[Q^x_r] = S but Id(x|o)[Q] ≠ S; then
Id[∀vB^x_r] = S but Id(x|o)[∀vB] ≠ S. From the latter, by SF(∀),
there is some m ∈ U such that Id(v|m,x|o)[B] ≠ S; so by the result
from the assumption, Id(v|m)[B^x_r] ≠ S; so by SF(∀), Id[∀vB^x_r] ≠ S;
this is impossible. So Id[Q^x_r] = S iff Id(x|o)[Q] = S.
If Q has k operator symbols, if r is free for x in Q and Id[r] = o, then
Id[Q^x_r] = S iff Id(x|o)[Q] = S.
Indct: For any Q, if r is free for x in Q and Id[r] = o, then Id[Q^x_r] = S iff
Id(x|o)[Q] = S.
E10.2. Complete the case for (MP) to round out the demonstration that AD is sound.
You should set up the complete demonstration, but for cases completed in the
text, you may simply refer to the text, as the text refers cases to homework.
Suppose Γ ⊢AD P. Then there is an AD derivation A = ⟨Q1 ... Qn⟩ of P
from premises in Γ, with Qn = P. By induction on the line numbers in A,
for any i, Γ ⊨ Qi. The case when i = n is the desired result.
Basis: The first line of A is a premise or an axiom. Then [from the text],
Γ ⊨ Q1.
Assp: For any i, 1 ≤ i < k, Γ ⊨ Qi.
Show: Γ ⊨ Qk.
Qk is either a premise, an axiom, or arises from previous lines by MP
or Gen. If Qk is a premise or an axiom then, as in the basis, Γ ⊨ Qk.
So suppose Qk arises by MP or Gen.
(MP) If Qk arises by MP, then A is something like this,

    i   B → C
    j   B
    ⋮
    k   C                                        i,j MP

where i, j < k and Qk = C. Suppose Γ ⊭ Qk; then Γ ⊭ C; so by
QV, there is some I such that I[Γ] = T but I[C] ≠ T; from the latter,
by TI, there is some d such that Id[C] ≠ S. But I[Γ] = T and by
assumption, Γ ⊨ B → C and Γ ⊨ B; so by QV, I[B → C] = T and
I[B] = T; so by TI, Id[B → C] = S and Id[B] = S; from the first
of these, by SF(→), Id[B] ≠ S or Id[C] = S; so Id[C] = S. This is
impossible; reject the assumption: Γ ⊨ Qk.
(Gen) If Qk arises by Gen, then [from the text], Γ ⊨ Qk.
Γ ⊨ Qk.
Indct: For any n, Γ ⊨ Qn.
E10.4. Provide an argument to show T10.5.
If there is an interpretation M such that M[Γ ∪ {¬A}] = T, then Γ ⊬ A.
Suppose there is an interpretation M such that M[Γ ∪ {¬A}] = T but Γ ⊢ A.
From the former, M[Γ] = T and M[¬A] = T. From the latter, by soundness,
Γ ⊨ A; but M[Γ] = T; so by QV, M[A] = T; so by TI, for any d, Md[A] = S,
and since M[¬A] = T, Md[¬A] = S; so by SF(¬), Md[A] ≠ S. This is
impossible; reject the assumption: if there is an interpretation M such that
M[Γ ∪ {¬A}] = T, then Γ ⊬ A.
E10.10. Complete the second half of the conditional case to complete the proof of
T10.9s . You should set up the entire induction, but may refer to the text for
parts completed there, as the text refers to homework.
Suppose Γ′ is consistent. Then by T10.8s, Γ′′ is maximal and consistent.
Now by induction on the number of operators in B,
Basis: If B has no operators, then it is an atomic of the sort S. But by the
construction of M′, M′[S] = T iff Γ′′ ⊢ S; so M′[B] = T iff Γ′′ ⊢ B.
Assp: For any i, 0 ≤ i < k, if B has i operator symbols, then M′[B] = T iff
Γ′′ ⊢ B.
Show: If B has k operator symbols, then M′[B] = T iff Γ′′ ⊢ B.
If B has k operator symbols, then it is of the form ¬P or P → Q
where P and Q have < k operator symbols.
(¬) Suppose B is ¬P. [From the text], M′[B] = T iff Γ′′ ⊢ B.
(→) Suppose B is P → Q. (i) Suppose M′[B] = T; then [from the text],
Γ′′ ⊢ B. (ii) Suppose Γ′′ ⊢ B but M′[B] ≠ T; then Γ′′ ⊢ P → Q
but M′[P → Q] ≠ T; from the latter, by ST(→), M′[P] = T and
M′[Q] ≠ T; so by assumption, Γ′′ ⊢ P and Γ′′ ⊬ Q; from the
second of these, by maximality, Γ′′ ⊢ ¬Q. But since Γ′′ ⊢ P and
Γ′′ ⊢ P → Q, by MP, Γ′′ ⊢ Q; so by consistency, Γ′′ ⊬ ¬Q. This
is impossible; reject the assumption: if Γ′′ ⊢ B, then M′[B] = T. So
M′[B] = T iff Γ′′ ⊢ B.
If B has k operator symbols, then M′[B] = T iff Γ′′ ⊢ B.
Indct: For any B, M′[B] = T iff Γ′′ ⊢ B.
E10.13. Finish the cases for A2, A3 and MP to complete the proof of T10.12. You
should set up the complete demonstration, but may refer to the text for cases
completed there, as the text refers cases to homework.
Basis: B1 is either a member of Γ′ or an axiom.
(prem) If B1 is a member of Γ′, then [from text], ⟨B1^a_x⟩ is a derivation from
Γ′^a_x.
(eq) If B1 is an equality axiom, A5, A6 or A7, then [from text], ⟨B1^a_x⟩ is
a derivation from Γ′^a_x.
(A1) If B1 is an instance of A1, then [from text], ⟨B1^a_x⟩ is a derivation
from Γ′^a_x.
(A2) If B1 is an instance of A2, then it is of the form (O → (P → Q)) →
((O → P) → (O → Q)); so B1^a_x is (O^a_x → (P^a_x → Q^a_x)) →
((O^a_x → P^a_x) → (O^a_x → Q^a_x)); but this is an instance of A2; so if B1
is an instance of A2, then B1^a_x is an instance of A2, and ⟨B1^a_x⟩ is a
derivation from Γ′^a_x.
(A3) If B1 is an instance of A3, then it is of the form (¬Q → ¬P) →
((¬Q → P) → Q); so B1^a_x is (¬Q^a_x → ¬P^a_x) → ((¬Q^a_x →
P^a_x) → Q^a_x); but this is an instance of A3; so if B1 is an instance of
A3, then B1^a_x is an instance of A3, and ⟨B1^a_x⟩ is a derivation from
Γ′^a_x.
(A4) If B1 is an instance of A4, then [from text], ⟨B1^a_x⟩ is a derivation
from Γ′^a_x.
Assp: For any i, 1 ≤ i < k, ⟨B1^a_x ... Bi^a_x⟩ is a derivation from Γ′^a_x.
Show: ⟨B1^a_x ... Bk^a_x⟩ is a derivation from Γ′^a_x.
Bk is a member of Γ′, an axiom, or arises from previous lines by MP
or Gen. If Bk is a member of Γ′ or an axiom then, by reasoning as in
the basis, ⟨B1^a_x ... Bk^a_x⟩ is a derivation from Γ′^a_x. So two cases remain.
(MP) If Bk arises by MP, then there are some lines in D,

    i   P → Q
    j   P
    ⋮
    k   Q                                        i,j MP

where i, j < k and Bk = Q. By assumption (P → Q)^a_x and P^a_x are
members of the derivation ⟨B1^a_x ... Bk−1^a_x⟩ from Γ′^a_x; but (P →
Q)^a_x is P^a_x → Q^a_x; so by MP, Q^a_x follows in this new derivation. So
⟨B1^a_x ... Bk^a_x⟩ is a derivation from Γ′^a_x.
(Gen) If Bk arises by Gen, then [from text], ⟨B1^a_x ... Bk^a_x⟩ is a derivation
from Γ′^a_x.
So ⟨B1^a_x ... Bk^a_x⟩ is a derivation from Γ′^a_x.
Indct: For any n, ⟨B1^a_x ... Bn^a_x⟩ is a derivation from Γ′^a_x.

E10.21. Complete the proof of T10.14. You should set up the complete induction,
but may refer to the text, as the text refers to homework.
The argument is by induction on the number of function symbols in t. Let d
be a variable assignment, and t a term in L.
Basis: If t has no function symbols, then it is a variable or a constant in L.
If t is a constant, then by construction, M[t] = M′[t]; so by TA(c),
Md[t] = M′d[t]. If t is a variable, by TA(v), Md[t] = d[t] = M′d[t]. In
either case, then, Md[t] = M′d[t].
Assp: For any i, 0 ≤ i < k, if t has i function symbols, then Md[t] = M′d[t].
Show: If t has k function symbols, then Md[t] = M′d[t].
If t has k function symbols, then [from text] Md[t] = M′d[t].
Indct: For any t in L, Md[t] = M′d[t].
E10.22. Complete the proof of T10.15. As usual, you should set up the complete
induction, but may refer to the text for cases completed there, as the text refers
to homework.
The argument is by induction on the number of operator symbols in P. Let d
be a variable assignment, and P a formula in L.
Basis: If P has no operator symbols, then [from text] Md[P] = S iff M′d[P] =
S.
Assp: For any i, 0 ≤ i < k, and any common variable assignment d, if P
has i operator symbols, Md[P] = S iff M′d[P] = S.
Show: For any variable assignment d for M, if P has k operator symbols,
Md[P] = S iff M′d[P] = S.
If P has k operator symbols, then it is of the form ¬A, A → B or
∀xA for variable x and formulas A and B with < k operator symbols.
(¬) Suppose P is of the form ¬A. Then Md[P] = S iff Md[¬A] = S; by
SF(¬), iff Md[A] ≠ S; by assumption, iff M′d[A] ≠ S; by SF(¬), iff
M′d[¬A] = S; iff M′d[P] = S.
(→) Suppose P is of the form A → B. Then Md[P] = S iff Md[A →
B] = S; by SF(→), iff Md[A] ≠ S or Md[B] = S; by assumption,
iff M′d[A] ≠ S or M′d[B] = S; by SF(→), iff M′d[A → B] = S; iff
M′d[P] = S.
(∀) Suppose P is of the form ∀xA. Then Md[P] = S iff Md[∀xA] = S;
by SF(∀), iff for any m ∈ U, Md(x|m)[A] = S; by assumption, iff
for any m ∈ U, M′d(x|m)[A] = S; by SF(∀), iff M′d[∀xA] = S; iff
M′d[P] = S.
If P has k operator symbols, Md[P] = S iff M′d[P] = S.
Indct: For any formula P in L, Md[P] = S iff M′d[P] = S.

Chapter Eleven
E11.6. Complete the proof of T11.5. You should set up the complete induction, but
may refer to the text, as the text refers to homework.
By induction on the number of operators in P. Suppose D ≅ H.
Basis: Suppose P has no operator symbols and d and h are such that for any
x, ι(d[x]) = h[x]. If P has no operator symbols, then [from text]
Dd[P] = S iff Hh[P] = S.
Assp: For any i, 0 ≤ i < k, for d and h such that for any x, ι(d[x]) = h[x],
and P with i operator symbols, Dd[P] = S iff Hh[P] = S.
Show: For d and h such that for any x, ι(d[x]) = h[x], and P with k operator
symbols, Dd[P] = S iff Hh[P] = S.
If P has k operator symbols, then it is of the form ¬A, A → B, or
∀xA for variable x and formulas A and B with < k operator symbols.
Suppose for any x, ι(d[x]) = h[x].
(¬) Suppose P is of the form ¬A. Then Dd[P] = S iff Dd[¬A] = S; by
SF(¬), iff Dd[A] ≠ S; by assumption, iff Hh[A] ≠ S; by SF(¬), iff
Hh[¬A] = S; iff Hh[P] = S.
(→) Suppose P is of the form A → B. Then Dd[P] = S iff Dd[A → B] = S;
by SF(→), iff Dd[A] ≠ S or Dd[B] = S; by assumption, iff Hh[A] ≠ S
or Hh[B] = S; by SF(→), iff Hh[A → B] = S; iff Hh[P] = S.
(∀) Suppose P is of the form ∀xA. Then Dd[P] = S iff Dd[∀xA] = S;
by SF(∀), iff for any m ∈ UD, Dd(x|m)[A] = S. Similarly, Hh[P] = S
iff Hh[∀xA] = S; by SF(∀), iff for any n ∈ UH, Hh(x|n)[A] = S. (i)
[From the text], if Hh[P] = S, then Dd[P] = S. (ii) Suppose Dd[P] =
S but Hh[P] ≠ S; then any m ∈ UD is such that Dd(x|m)[A] = S, but
there is some n ∈ UH such that Hh(x|n)[A] ≠ S. Since ι is onto UH,
there is some o ∈ UD such that ι(o) = n; so insofar as d(x|o) and
h(x|n) have each member related by ι, the assumption applies and
Dd(x|o)[A] ≠ S; so there is some m ∈ UD such that Dd(x|m)[A] ≠ S;
this is impossible; reject the assumption: if Dd[P] = S, then Hh[P] =
S.
For d and h such that for any x, ι(d[x]) = h[x], and P with k operator
symbols, Dd[P] = S iff Hh[P] = S.
Indct: For d and h such that for any x, ι(d[x]) = h[x], and any P, Dd[P] = S
iff Hh[P] = S.

Chapter Twelve
E12.1. (b) Produce functions gpower(x) and hpower(x, y, u) and show that they
have the same result as conditions (g) and (h).
Set gpower(x) = suc(zero(x)) and hpower(x, y, u) = times(idnt³₃(x, y, u), x).
Then,

    (g′)  power(x, 0) = S(zero(x)) = S∅
    (h′)  power(x, Sy) = idnt³₃(x, y, power(x, y)) × x = power(x, y) × x
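These clauses can be checked concretely. The following Python sketch (our own rendering, not from the text) implements the initial functions and runs the primitive recursion scheme with exactly the g and h above:

```python
def zero(x):
    # initial function: constant zero
    return 0

def suc(x):
    # initial function: successor
    return x + 1

def idnt33(x, y, u):
    # initial (projection) function: third of three arguments
    return u

def times(a, b):
    return a * b

def gpower(x):
    return suc(zero(x))

def hpower(x, y, u):
    return times(idnt33(x, y, u), x)

def power(x, y):
    # primitive recursion from gpower and hpower:
    #   power(x, 0)  = gpower(x)
    #   power(x, Sy) = hpower(x, y, power(x, y))
    if y == 0:
        return gpower(x)
    return hpower(x, y - 1, power(x, y - 1))
```

So, for instance, power(2, 3) unwinds to ((1 × 2) × 2) × 2 = 8, agreeing with x^y.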

E12.5. (a) By the method of our core induction, write down formulas to express the
following recursive function: suc(zero(x)).
Z(x, w) is x = x ∧ w = ∅; and S(w, y) is Sw = y; so their composition is
F(x, y) = ∃w((x = x ∧ w = ∅) ∧ Sw = y).


E12.6. Fill out semantic reasoning to demonstrate that proposed (original) formulas
satisfy the conditions for expression for the (z), (i), (c) and (m) clauses to
T12.3.
(c) fk(y) arises by composition from g(y) and h(w). By assumption g(y) is
expressed by some G(y, w) and h(w) by H(w, v). And the composition f(y)
is expressed by F(y, v) =def ∃w(G(y, w) ∧ H(w, v)). Suppose ⟨m, a⟩ ∈ fk;
then by composition there is some b such that ⟨m, b⟩ ∈ g and ⟨b, a⟩ ∈ h.
(i) Because G and H express g and h, N[G(m, b)] = T and N[H(b, a)] = T.
Suppose N[∃w(G(m, w) ∧ H(w, a))] ≠ T; then by TI, there is some d such
that Nd[∃w(G(m, w) ∧ H(w, a))] ≠ S; let h be a particular assignment of
this sort; then Nh[∃w(G(m, w) ∧ H(w, a))] ≠ S; so by SF(∃), for any o ∈ U,
Nh(w|o)[G(m, w) ∧ H(w, a)] ≠ S; so Nh(w|b)[G(m, w) ∧ H(w, a)] ≠ S;
so since Nh[b] = b, with T10.2, Nh[G(m, b) ∧ H(b, a)] ≠ S; so by SF(∧),
Nh[G(m, b)] ≠ S or Nh[H(b, a)] ≠ S. But N[G(m, b)] = T; so by TI, for
any d, Nd[G(m, b)] = S; so Nh[G(m, b)] = S; so Nh[H(b, a)] ≠ S; but
N[H(b, a)] = T; so by TI, for any d, Nd[H(b, a)] = S; so Nh[H(b, a)] = S.
This is impossible; reject the assumption: N[∃w(G(m, w) ∧ H(w, a))] = T.
(ii) Suppose N[∀z(∃w(G(m, w) ∧ H(w, z)) → z = a)] ≠ T; then by TI,
there is some d such that Nd[∀z(∃w(G(m, w) ∧ H(w, z)) → z = a)] ≠
S; let h be a particular assignment of this sort; then Nh[∀z(∃w(G(m, w) ∧
H(w, z)) → z = a)] ≠ S; so by SF(∀), for some o ∈ U, Nh(z|o)[∃w(G(m, w)
∧ H(w, z)) → z = a] ≠ S; let p be a particular individual of this sort;
then Nh(z|p)[∃w(G(m, w) ∧ H(w, z)) → z = a] ≠ S; since Nh[p] = p,
with T10.2, Nh[∃w(G(m, w) ∧ H(w, p)) → p = a] ≠ S; so by SF(→),
Nh[∃w(G(m, w) ∧ H(w, p))] = S and Nh[p = a] ≠ S. From the first of these,
by SF(∃), there is some o ∈ U such that Nh(w|o)[G(m, w) ∧ H(w, p)] = S;
let q be a particular individual of this sort; then Nh(w|q)[G(m, w) ∧ H(w, p)]
= S; since Nh[q] = q, with T10.2, Nh[G(m, q) ∧ H(q, p)] = S; so by SF(∧),
Nh[G(m, q)] = S and Nh[H(q, p)] = S.
Because G expresses g and ⟨m, b⟩ ∈ g, N[∀z(G(m, z) → z = b)] = T; so
by TI, for any d, Nd[∀z(G(m, z) → z = b)] = S; so Nh[∀z(G(m, z) → z =
b)] = S; so by SF(∀), for any o ∈ U, Nh(z|o)[G(m, z) → z = b] = S; so
Nh(z|q)[G(m, z) → z = b] = S; since Nh[q] = q, with T10.2, Nh[G(m, q) →
q = b] = S; so by SF(→), Nh[G(m, q)] ≠ S or Nh[q = b] = S; but
Nh[G(m, q)] = S; so Nh[q = b] = S; and since Nh[q] = q and Nh[b] = b,
with SF(r), q = b.
Since H expresses h, and ⟨b, a⟩ ∈ h, ⟨q, a⟩ ∈ h and N[∀z(H(q, z) →
z = a)] = T; so by TI, for any d, Nd[∀z(H(q, z) → z = a)] = S; so
Nh[∀z(H(q, z) → z = a)] = S; so by SF(∀), for any o ∈ U, Nh(z|o)[H(q, z)
→ z = a] = S; so Nh(z|p)[H(q, z) → z = a] = S; since Nh[p] = p,
with T10.2, Nh[H(q, p) → p = a] = S; so by SF(→), Nh[H(q, p)] ≠ S or
Nh[p = a] = S; but Nh[H(q, p)] = S; so Nh[p = a] = S. This is impossible;
reject the assumption: N[∀z(∃w(G(m, w) ∧ H(w, z)) → z = a)] = T.
E12.12. Complete the demonstration of T12.8 by finishing the remaining cases. You
should set up the entire argument, but may appeal to the text for parts already
completed, as the text appeals to homework.
(∃≤) P is (∃x ≤ t)A(x). Since P is a sentence, x is the only variable free
in A; in particular, since x does not appear in t, t is variable-free; so
Nd[t] = N[t] and where N[t] = n, by T8.13, Q ⊢ND t = n; so Q ⊢ND P
just in case Q ⊢ND (∃x ≤ n)A(x).
(i) Suppose N[P] = T; then N[(∃x ≤ t)A(x)] = T; so by TI, for
any d, Nd[(∃x ≤ t)A(x)] = S; so by T12.7, for some m ≤ Nd[t],
Nd(x|m)[A(x)] = S; so where Nd[t] = N[t] = n, for some m ≤ n,
Nd(x|m)[A(x)] = S; so with T10.2, for some m ≤ n, Nd[A(m)] = S;
since x is the only variable free in A, A(m) is a sentence; so with T8.5,
for some m ≤ n, N[A(m)] = T; so by assumption for some m ≤ n,
Q ⊢ND A(m); so by T8.20, Q ⊢ND (∃x ≤ n)A(x); so Q ⊢ND P.
(ii) Suppose N[P] ≠ T; then N[(∃x ≤ t)A(x)] ≠ T; so by TI, for
some d, Nd[(∃x ≤ t)A(x)] ≠ S; so by T12.7, for any m ≤ Nd[t],
Nd(x|m)[A(x)] ≠ S; so where Nd[t] = N[t] = n, for any m ≤ n,
Nd(x|m)[A(x)] ≠ S; so with T10.2, for any m ≤ n, Nd[A(m)] ≠ S;
so by TI, for any m ≤ n, N[A(m)] ≠ T; so N[A(∅)] ≠ T and ... and
N[A(n)] ≠ T; so by assumption, Q ⊢ND ¬A(∅) and ... and Q ⊢ND
¬A(n); so by T8.21, Q ⊢ND (∀x ≤ n)¬A(x); so by BQN, Q ⊢ND
¬(∃x ≤ n)A(x); so Q ⊢ND ¬P.
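The reasoning in (i) and (ii) trades on the fact that a bounded quantification reduces to a finite search: (∃x ≤ n)A(x) is settled by checking just A(∅), ..., A(n). A minimal Python sketch of that reduction (the helper names are ours, purely illustrative):

```python
def bounded_exists(n, A):
    # (∃x ≤ n)A(x): true iff A holds of some m with 0 <= m <= n,
    # i.e. the finite disjunction A(0) or ... or A(n)
    return any(A(m) for m in range(n + 1))

def bounded_forall(n, A):
    # (∀x ≤ n)A(x): the finite conjunction A(0) and ... and A(n)
    return all(A(m) for m in range(n + 1))
```

Note that bounded_exists(n, A) is false exactly when bounded_forall(n, lambda m: not A(m)) is true, which is the shape of the appeal to BQN in case (ii).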
E12.14. Complete the demonstration of T12.11 by completing the remaining cases,
including the basis and part (ii) of the case for composition.

    1.  ∀z(G(m, z) → z = b)                                  G captures g
    2.  ∀z(H(b, z) → z = a)                                  H captures h
    3.   | ∃w(G(m, w) ∧ H(w, j))                             A (g, →I)
    4.   | | G(m, k) ∧ H(k, j)                               A (g, 3∃E)
    5.   | | G(m, k)                                         4 ∧E
    6.   | | G(m, k) → k = b                                 1 ∀E
    7.   | | k = b                                           6,5 →E
    8.   | | H(k, j)                                         4 ∧E
    9.   | | H(b, j)                                         8,7 =E
    10.  | | H(b, j) → j = a                                 2 ∀E
    11.  | | j = a                                           10,9 →E
    12.  | j = a                                             3,4-11 ∃E
    13. ∃w(G(m, w) ∧ H(w, j)) → j = a                        3-12 →I
    14. ∀z(∃w(G(m, w) ∧ H(w, z)) → z = a)                    13 ∀I

E12.15. Produce a derivation to show the basis in the argument for the uniqueness
condition.
    1.  ∀z(G(m, z) → z = a)                                              from capture
    2.  ∀p∀q∀x∀y((B(p, q, ∅, x) ∧ B(p, q, ∅, y)) → x = y)                from uniqueness
    3.   | ∃p∃q{∃z(B(p, q, ∅, z) ∧ G(m, z)) ∧ A ∧ B(p, q, ∅, j)}         A (g, →I)
    4.   | | ∃q{∃z(B(p, q, ∅, z) ∧ G(m, z)) ∧ A ∧ B(p, q, ∅, j)}         A (g, 3∃E)
    5.   | | | ∃z(B(p, q, ∅, z) ∧ G(m, z)) ∧ A ∧ B(p, q, ∅, j)           A (g, 4∃E)
    6.   | | | ∃z(B(p, q, ∅, z) ∧ G(m, z))                               5 ∧E
    7.   | | | B(p, q, ∅, j)                                             5 ∧E
    8.   | | | | B(p, q, ∅, k) ∧ G(m, k)                                 A (g, 6∃E)
    9.   | | | | B(p, q, ∅, k)                                           8 ∧E
    10.  | | | | G(m, k)                                                 8 ∧E
    11.  | | | | G(m, k) → k = a                                         1 ∀E
    12.  | | | | k = a                                                   11,10 →E
    13.  | | | | B(p, q, ∅, a)                                           9,12 =E
    14.  | | | | j = a                                                   2,7,13
    15.  | | | j = a                                                     6,8-14 ∃E
    16.  | | j = a                                                       4,5-15 ∃E
    17.  | j = a                                                         3,4-16 ∃E
    18. ∃p∃q{∃z(B(p, q, ∅, z) ∧ G(m, z)) ∧ A ∧ B(p, q, ∅, j)} → j = a    3-17 →I
    19. ∀w(∃p∃q{∃z(B(p, q, ∅, z) ∧ G(m, z)) ∧ A ∧ B(p, q, ∅, w)} → w = a)    18 ∀I

E12.21. Work carefully through the demonstration of T12.16 by setting up revised
arguments T12.3*, T12.11* and T12.12*.
T12.11*. For any recursive f(x⃗) originally expressed by F(x⃗, v), let F′(x⃗, v)
be like F(x⃗, v) except that B is replaced by B′. Then f(x⃗) is captured in Q
by F′(x⃗, v).
By induction on the sequence of recursive functions.
Basis: f0 is an initial function. Everything is the same, except that conclusions
are for Q rather than Qs.
Assp: For any i, 0 ≤ i < k, fi(x⃗) is captured in Q by F′(x⃗, v).
Show: fk(x⃗) is captured in Q by F′(x⃗, v).
fk is either an initial function or arises from previous members by com-
position, recursion or regular minimization. If it is an initial function,
then as in the basis. So suppose fk arises from previous members.
(c) fk(x⃗, y⃗, z⃗) arises by composition from g(y⃗) and h(x⃗, w, z⃗). By assump-
tion g(y⃗) is captured by G′(y⃗, w) and h(x⃗, w, z⃗) by H′(x⃗, w, z⃗, v).
F′(x⃗, y⃗, z⃗, v) is ∃w(G′(y⃗, w) ∧ H′(x⃗, w, z⃗, v)). Consider the case
where x⃗ and z⃗ drop out and y⃗ is a single variable y. Suppose ⟨m, a⟩ ∈
fk; then by composition there is some b such that ⟨m, b⟩ ∈ g and
⟨b, a⟩ ∈ h.
(i) Since F(y, v) expresses f(y), by T12.15, F′(y, v) expresses f(y);
thus, since ⟨m, a⟩ ∈ fk, N[F′(m, a)] = T; so, since F′(y, v) is Σ1, by
T12.9, Q ⊢ND F′(m, a).
(ii) Same but with G′, H′ uniformly substituted for G, H.
(r) fk(x⃗, y) arises by recursion from g(x⃗) and h(x⃗, y, u). By assumption
g(x⃗) is captured by G′(x⃗, v) and h(x⃗, y, u) by H′(x⃗, y, u, v). F′(x⃗, y, z)
is,

∃p∃q{∃v(B′(p, q, ∅, v) ∧ G′(x⃗, v)) ∧ (∀i < y)∃u∃v(B′(p, q, i, u) ∧
B′(p, q, Si, v) ∧ H′(x⃗, i, u, v)) ∧ B′(p, q, y, z)}

Suppose x⃗ reduces to a single variable and ⟨m, n, a⟩ ∈ fk. (i) Since
F(x, y, v) expresses f(x, y), by T12.15, F′(x, y, v) expresses f(x, y);
thus N[F′(m, n, a)] = T; so, since F′(x, y, v) is Σ1, by T12.9, Q ⊢ND
F′(m, n, a). And (ii) by T12.12*, Q ⊢ND ∀w(F′(m, n, w) → w = a).
(m) fk(x⃗) arises by regular minimization from g(x⃗, y). By assumption,
g(x⃗, y) is captured by some G′(x⃗, y, z). F′(x⃗, v) is G′(x⃗, v, ∅) ∧
(∀y < v)¬G′(x⃗, y, ∅). Suppose x⃗ reduces to a single variable and
⟨m, a⟩ ∈ fk.
(i) Since F(x, v) expresses f(x), by T12.15, F′(x, v) expresses f(x);
thus, since ⟨m, a⟩ ∈ fk, N[F′(m, a)] = T; so, since F′(x, v) is Σ1, by
T12.9, Q ⊢ND F′(m, a).
(ii) Same but with G′ uniformly substituted for G.
Indct: Any recursive f(x⃗) is captured in Q by F′(x⃗, v).
E12.26. Functions f1(x⃗, y) and f2(x⃗, y) are defined by simultaneous (mutual) recur-
sion just in case,

    f1(x⃗, 0) = g1(x⃗)
    f2(x⃗, 0) = g2(x⃗)
    f1(x⃗, Sy) = h1(x⃗, y, f1(x⃗, y), f2(x⃗, y))
    f2(x⃗, Sy) = h2(x⃗, y, f1(x⃗, y), f2(x⃗, y))

Show that f1 and f2 so defined are recursive. For F(x⃗, y) = π₀^f1(x⃗,y) × π₁^f2(x⃗,y),
set

    G(x⃗) = π₀^g1(x⃗) × π₁^g2(x⃗)
    H(x⃗, y, u) = π₀^h1(x⃗,y,exp(u,0),exp(u,1)) × π₁^h2(x⃗,y,exp(u,0),exp(u,1))

You should explain how these contribute to the desired result.
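The point of G and H is that F is then definable by ordinary primitive recursion from them, and f1 and f2 are recovered from F with the recursive exponent function exp; so all of F, f1, f2 are recursive. A Python sketch of the scheme, with sample g's and h's (the sample ingredient functions are our own, purely for illustration):

```python
def exp(u, i):
    # exponent of the i-th prime (pi_0 = 2, pi_1 = 3) in the factorization of u
    p = [2, 3][i]
    n = 0
    while u % p == 0:
        u //= p
        n += 1
    return n

# sample ingredient functions (hypothetical choices, just to run the scheme)
def g1(x): return x
def g2(x): return x + 1
def h1(x, y, u1, u2): return u1 + u2
def h2(x, y, u1, u2): return u2 + x

def G(x):
    return 2 ** g1(x) * 3 ** g2(x)

def H(x, y, u):
    return 2 ** h1(x, y, exp(u, 0), exp(u, 1)) * 3 ** h2(x, y, exp(u, 0), exp(u, 1))

def F(x, y):
    # ordinary primitive recursion from G and H
    return G(x) if y == 0 else H(x, y - 1, F(x, y - 1))

def f1(x, y): return exp(F(x, y), 0)   # f1 recovered from F
def f2(x, y): return exp(F(x, y), 1)   # f2 recovered from F

# the same functions by simultaneous recursion, for comparison
def f1_direct(x, y):
    return g1(x) if y == 0 else h1(x, y - 1, f1_direct(x, y - 1), f2_direct(x, y - 1))

def f2_direct(x, y):
    return g2(x) if y == 0 else h2(x, y - 1, f1_direct(x, y - 1), f2_direct(x, y - 1))
```

At every stage F packs the pair (f1, f2) into the exponents of 2 and 3, and exp unpacks it, so the values computed through F agree with the direct simultaneous recursion.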


E12.31. Let T be any theory that extends Q. For any formulas F1(y) and F2(y),
generalize the diagonal lemma to find sentences H1 and H2 such that,

    T ⊢ H1 ↔ F1(⌜H2⌝)
    T ⊢ H2 ↔ F2(⌜H1⌝)

Demonstrate your result.
Let alt(p, f1, f2) = ⌜∃w∃x∃y(w =⌝ ⋆ num(p) ⋆ ⌜∧ x =⌝ ⋆ num(f2) ⋆ ⌜∧ y =⌝ ⋆
num(f1) ⋆ ⌜∧ ∃z(⌝ ⋆ f1 ⋆ ⌜∧⌝ ⋆ p ⋆ ⌜))⌝. Then by capture there is a for-
mula Alt(w, x, y, z) that captures alt; let a = ⌜Alt(w, x, y, z)⌝. Then H1 =
∃w∃x∃y(w = a ∧ x = f2 ∧ y = f1 ∧ ∃z(F1(z) ∧ Alt(w, x, y, z))); and
h1 = ⌜H1⌝ = alt(a, f1, f2). And H2 = ∃w∃x∃y(w = a ∧ x = f1 ∧ y =
f2 ∧ ∃z(F2(z) ∧ Alt(w, x, y, z))); and h2 = ⌜H2⌝ = alt(a, f2, f1). The trick
to this is that H1 says F1(h2) and H2 says F2(h1). For the first case, argue
as follows (broken into separate derivations for the biconditional).

    1.  H1 ↔ ∃w∃x∃y(w = a ∧ x = f2 ∧ y = f1 ∧ ∃z(F1(z) ∧ Alt(w, x, y, z)))   def H1
    2.  ∀x(Alt(a, f2, f1, x) → x = h2)                                       from capture
    3.   | H1                                                                A (g →I)
    4.   | ∃w∃x∃y(w = a ∧ x = f2 ∧ y = f1 ∧ ∃z(F1(z) ∧ Alt(w, x, y, z)))     1,3 ↔E
    5.   | | ∃x∃y(j = a ∧ x = f2 ∧ y = f1 ∧ ∃z(F1(z) ∧ Alt(j, x, y, z)))     A (g 4∃E)
    6.   | | | ∃y(j = a ∧ k = f2 ∧ y = f1 ∧ ∃z(F1(z) ∧ Alt(j, k, y, z)))     A (g 5∃E)
    7.   | | | | j = a ∧ k = f2 ∧ l = f1 ∧ ∃z(F1(z) ∧ Alt(j, k, l, z))       A (g 6∃E)
    8.   | | | | j = a                                                       7 ∧E
    9.   | | | | k = f2                                                      7 ∧E
    10.  | | | | l = f1                                                      7 ∧E
    11.  | | | | ∃z(F1(z) ∧ Alt(j, k, l, z))                                 7 ∧E
    12.  | | | | | F1(m) ∧ Alt(j, k, l, m)                                   A (g 11∃E)
    13.  | | | | | F1(m)                                                     12 ∧E
    14.  | | | | | Alt(j, k, l, m)                                           12 ∧E
    15.  | | | | | Alt(a, f2, f1, m) → m = h2                                2 ∀E
    16.  | | | | | Alt(a, f2, f1, m)                                         14,8,9,10 =E
    17.  | | | | | m = h2                                                    15,16 →E
    18.  | | | | | F1(h2)                                                    13,17 =E
    19.  | | | | F1(h2)                                                      11,12-18 ∃E
    20.  | | | F1(h2)                                                        6,7-19 ∃E
    21.  | | F1(h2)                                                          5,6-20 ∃E
    22.  | F1(h2)                                                            4,5-21 ∃E
    23. H1 → F1(h2)                                                          3-22 →I

1. H1 $ 9w9x9y.w D a ^ x D f 2 ^ y D f 1 ^ 9z.F1 .z/ ^ Alt.w; x; y; z///


2. Alt.a; f 2 ; f 1 ; h2 /
3.
4.
5.
6.
7.
8.
9.
10.
11.

def H1
from capture

F1 .h2 /

A (g !I)

F1 .h2 / ^ Alt.a; f 2 ; f 1 ; h2 /
9z.F1 .z/ ^ Alt.a; f 2 ; f 1 ; z//
a D a ^ f2 D f2 ^ f1 D f1
a D a ^ f 2 D f 2 ^ f 1 D f 1 ^ 9z.F1 .z/ ^ Alt.a; f 2 ; f 1 ; z//
9y.a D a ^ f 2 D f 2 ^ y D f 1 ^ 9z.F1 .z/ ^ Alt.a; f 2 ; y; z///
9x9y.a D a ^ x D f 2 ^ y D f 1 ^ 9z.F1 .z/ ^ Alt.a; x; y; z///
9w9x9y.w D a ^ x D f 2 ^ y D f 1 ^ 9z.F1 .z/ ^ Alt.w; x; y; z///
H1

3,2 ^I
4 9I
DI, ^I
6,5 ^I
7 9I
8 9I
9 9I
1,10 $E

12. F1 .h2 / ! H1

3-11 !I

So T ` H1 $ F1 .pH2 q/.


Chapter Thirteen
E13.2. Complete the demonstration of T13.3 by providing a derivation to show
T ` G $ 9xPrft.x; pG q/.
1.
1.
2.
3.

G $ 9z.z D a ^ 9x9yPrft.x; y/ ^ Diag.z; y//


G $ 9z.z D a ^ 9x9yPrft.x; y/ ^ Diag.z; y//
Diag.a; g/
8z.Diag.a; z/ ! z D g/

from def G
from def G
from capture
from capture

4.

A (g $I)

5.
6.

9z.z D a ^ 9x9yPrft.x; y/ ^ Diag.z; y//


j D a ^ 9x9yPrft.x; y/ ^ Diag.j; y/

1,4 $E
A (g 59E)

7.
8.
9.

j Da
9x9yPrft.x; y/ ^ Diag.j; y/
9xPrft.x; g/

6 ^E
6 ^E
A (c I)

10.

Prft.k; g/

A (c 99E)

11.
12.
13.
14.
15.

Diag.j; g/
Prft.k; g/ ^ Diag.j; g/
9yPrft.k; y/ ^ Diag.j; y/
9x9yPrft.x; y/ ^ Diag.j; y/
?

2,7 DE
10,11 ^I
12 9I
13 9I
8,14 ?I

16.
17.

9,10-15 9E

9xPrft.x; g/

9-16 I

18.

9xPrft.x; g/

5,6-17 9E

19.

9xPrft.x; g/

A (g $I)

20.

9x9yPrft.x; y/ ^ Diag.a; y/

21.

9yPrft.j; y/ ^ Diag.a; y/

A (c 209E)

22.

Prft.j; k/ ^ Diag.a; k/

A (c 219E)

23.
24.
25.
26.
27.
28.
29.

Diag.a; k/
Diag.a; k/ ! k D g
kDg
Prft.j; k/
Prft.j; g/
9xPrft.x; g/
?

22 ^E
2 8E
24,23 !E
22 ^E
26,25 DE
27 9I
19,28 ?I

30.
31.
32.
33.
34.
35.
36.

A (c I)

21,22-29 9E

20,21-30 9E

9x9yPrft.x; y/ ^ Diag.a; y/
aDa
a D a ^ 9x9yPrft.x; y/ ^ Diag.a; y/
9z.z D a ^ 9x9yPrft.x; y/ ^ Diag.z; y//
G

37. G $ 9xPrft.x; g/

20-31 I
DI
33,32 ^I
34 9I
1,35 $E
4-18,19-36 $I


So T ` G $ 9xPrft.x; g/, which is to say, T ` G $ 9xPrft.x; pG q/.


E13.5. Complete the unfinished cases to T13.13.
T13.13.a. PA ` .r  s ^ s  t/ ! r  t
Hint: This does not require IN. It is not hard and can be worked directly from
the definitions.
T13.13.b. PA ` .r < s ^ s < t/ ! r < t
Hint: This does not require IN. It is not hard and can be worked directly from
the definitions.
T13.13.c. PA ` .r  s ^ s < t/ ! r < t
Hint: This does not require IN. It is not hard and can be worked directly from
the definitions.
T13.13.d. PA ` ;  t
Hint: This is nearly trivial with the definition.
T13.13.e. PA ` ; < St
Hint: This is nearly trivial with the definition.
T13.13.f. PA ` t ; $ ; < t
Hint: This does not require IN. It is straightforward with the definitions.
T13.13.g. PA ` t < St
Hint: This is easy. It does not require IN.
T13.13.h. PA ` S t D s ! t < s
Hint: This does not require IN. It is not hard and can be worked directly from
the definitions.
T13.13.i. PA ` s  t $ S s  S t
Hint: This does not require IN. It is not hard and can be worked directly from
the definitions. Do not forget about T6.38.
T13.13.j. PA ` s < t $ S s < St.
Hint: This does not require IN. It is not hard and can be worked directly from
the definitions.

T13.13.k. PA ` s < t $ S s  t
Hint: This does not require IN. It is not hard and can be worked directly from
the definitions.
T13.13.l. PA ` s  t $ s < t _ s D t
Hint: This does not require IN. It works as a direct argument from the definitions. Do not forget that you have j D ; _ j ; with T6.43.
T13.13.m. PA ` s < St $ s < t _ s D t
Hint: This does not require IN. It is simplified with (l).
T13.13.n. PA ` s  St $ s  t _ s D S t
Hint: This does not require IN. For one direction, it will be helpful to apply
(l) and (m).
T13.13.o. PA ` s < t _ s D t _ t < s
Hint: This is a moderately interesting argument by IN where P is s < x _
s D x _ x < s. Under the assumption s < j _ s D j _ j < s, for the third
case, you may find (k) and (l) helpful.
T13.13.p. PA ` s  t _ t < s
Hint: This is a direct consequence of (o) and (l).
T13.13.q. PA ` s  t $ t s
Hint: When s  t you will be able to show t s with the definitions. In the
other direction, use (o) and (l).
T13.13.r. PA ` t < s ! t s
Hint: This does not require IN. It works from the definitions.
T13.13.s. PA ` .s  t ^ t  s/ ! s D t
Hint: Use (q) and (l) with the assumption for !I.
T13.13.t. PA ` s  s C t
Hint: This is nearly trivial from the definition.
T13.13.u. PA ` r  s ! r C t  s C t
Hint: This does not require IN. It is straightforward from the definition and
T6.66.


T13.13.v. PA ` r < s ! r C t < s C t

Hint: This does not require IN. It is straightforward from the definition and
T6.66.
T13.13.w. PA ` .r  s ^ t  u/ ! r C t  s C u
Hint: This does not require IN. It is straightforward from the definitions.
T13.13.x. PA ` .r < s ^ t  u/ ! r C t < s C u
Hint: This does not require IN. It is straightforward from the definitions.
T13.13.y. PA ` ; < t ! s  s  t
Hint: This is straightforward with (f) and T6.48.
T13.13.z. PA ` r  s ! r  t  s  t
Hint: This is straightforward with distributivity (T6.62).
T13.13.aa. PA ` r  s > ; ! s > ;
Hint: Under the assumption for !I, assume the opposite and go for a contradiction.
T13.13.ab. PA ` .r > 1 ^ s > ;/ ! r  s > s
Hint: You can apply the definition for > multiple times.
T13.13.ac. PA ` .t > ; ^ r < s/ ! r  t < s  t
Hint: This combines strategies from previous problems.
T13.13.ad. PA ` .r < s ^ t < u/ ! r  t < s  u
Hint: This does not require IN. It is straightforward with T6.63.
T13.13.ae. PA ` 8x.8: < x/P:x ! P ! 8xP

strong induction (a)

Hint: Under the assumption for !I, you will have a goal like P .j /; you can
get .8z < j /P .z/ ! P .j / from the assumption; go for .8z < j /P .z/ by
IN (where the induction is on j ). Then the goal follows immediately by !E.
T13.13.af. PA ` P;x ^ 8x.8:  x/P:x ! PSxx ! 8xP

strong induction (b)

Again under the assumption for !I, you will be able to obtain 8xP , this
time by (ae).



T13.13.ag. PA ` 9xP ! 9xP ^ .8: < x/P:x

least number principle

Hint: This follows immediately from T13.13ae applied to P .
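Each T13.13 clause is a theorem of PA about all numbers; the PA derivations establish them outright. Still, it can help to spot-check small instances numerically. The sketch below is my own throwaway check, not part of the text, covering clauses (a), (j), (m), and (o).

```python
# Spot-check small instances of a few T13.13 order facts; the PA proofs
# establish them in general, this only tests cases over a finite range.
def check_t13_13(bound=8):
    for r in range(bound):
        for s in range(bound):
            for t in range(bound):
                assert not (r <= s and s <= t) or r <= t   # (a) transitivity
                assert (s < t) == (s + 1 < t + 1)          # (j) S preserves <
                assert (s < t + 1) == (s < t or s == t)    # (m) s < St
                assert s < t or s == t or t < s            # (o) trichotomy
    return True

check_t13_13()
```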


E13.7. Produce the quick derivation to show T13.19d.
T13.19.
1. .8z < m.x//Q.
E
x;
E z/
2.

Q.x;
E v/

T13.19c
A (g !I)

3.

v < m.x/
E

A (c I)

4.
5.

Q.x;
E v/
?

1,3 (8E)
2,4 ?I

6.
7.

v m.x/
E
m.x/
E v

8. Q.x;
E v/ ! m.x/
E v

3-5 I
6 T13.13q
2-7 !I

E13.9. Complete the justifications for Def [rm] and Def [qt].
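Before the derivations, it may help to see what rm and qt compute numerically. Since the divisor in both definitions is S n, division is always by a positive number: rm(a, n) is a mod (n + 1) and qt(a, n) is the integer quotient of a by n + 1. The sketch below is illustrative; the Python function names are mine.

```python
# Illustration: remainder and quotient with divisor S n = n + 1.
def rm(a, n):
    return a % (n + 1)

def qt(a, n):
    return a // (n + 1)

# The defining condition of Def [rm]: a = Sn*w + x with x < Sn and w <= a.
for a in range(40):
    for n in range(6):
        w, x = qt(a, n), rm(a, n)
        assert a == (n + 1) * w + x and x < n + 1 and w <= a
```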
Def [rm]. (i) PA ` 9x.9w  ;/; D S n  w C x ^ x < S n.
Supposing the zero case is done,


1. 9x.9w  ;/; D S n  w C x ^ x < S n

zero case

2.

9x.9w  j /j D S n  w C x ^ x < S n

A (g !I)

3.

.9w  j /j D S n  w C k ^ k < S n

A (g 29E)

4.
5.
6.
7.
8.
9.
10.
11.
12.

j D Sn  l C k ^ k < Sn
l j

A (g 3 (9E))

j D Sn  l C k
k < Sn
Sj D S S n  l C k
S n  l C S k D S S n  l C k
Sj D S n  l C S k
k <n_k Dn
k<n

4 ^E
4 ^E
from 6
T6.40
8,9 DE
7 T13.13m
A (g 11_E)

13.
14.
15.
16.
17.
18.

Sk < Sn
Sj D S n  l C S k ^ S k < S n
l  j _ l D Sj
l  Sj
.9w  Sj /Sj D S n  w C S k ^ S k < S n
9x.9w  Sj /Sj D S n  w C x ^ x < S n

9 T13.13j
10,13 ^I
5 _I
15 T13.13n
14,16 (9I)
17 9I

19.

kDn

A (g 11_E)

20.
21.
22.
23.
24.
25.
26.
27.
28.
29.

Sj D S n  l C S n
Sn  Sl D Sn  l C Sn
Sj D S n  S l
Sn  Sl D Sn  Sl C ;
Sj D S n  S l C ;
; < Sn
Sj D S n  S l C ; ^ ; < S n
S l  Sj
.9w  Sj /Sj D S n  w C ; ^ ; < S n
9x.9w  Sj /Sj D S n  w C x ^ x < S n

10,19 DE
T6.42
20,21 DE
T6.39
22,23 DE
25 T13.13e
24,25 ^I
5 T13.13j
26,27 (9I)
28 9I

30.
31.
32.
33.
34.
35.
36.

9x.9w  Sj /Sj D S n  w C x ^ x < S n


9x.9w  Sj /Sj D S n  w C x ^ x < S n
9x.9w  Sj /Sj D S n  w C x ^ x < S n
9x.9w  j /j D S n  w C x ^ x < S n ! 9x.9w  Sj /Sj D S n  w C x ^ x < S n
8z.9x.9w  z/z D S n  w C x ^ x < S n ! 9x.9w  S z/S z D S n  w C x ^ x < S n/
8z9x.9w  z/z D S n  w C x ^ x < S n
9x.9w  m/m D S n  w C x ^ x < S n

(ii) PA ` 8x8y..9w  m/m D S n  w C x ^ x < S n ^ .9w  m/m D


S n  w C y ^ y < S n/ ! x D y


11,12-18,19-29 _E
3,4-30 (9E)
2,3-31 9E
2-32 !I
33 8I
1,34 IN
35 8E


1.

.9w  m/m D S n  w C j ^ j < S n ^ .9w  m/m D S n  w C k ^ k < S n

A (g !I)

2.
3.
4.
5.

.9w  m/m D S n  w C j ^ j < S n


.9w  m/m D S n  w C k ^ k < S n
m D Sn  p C j ^ j < Sn
pm

1 ^E
1 ^E
A (g 2(9E))

6.
7.

m D Sn  q C k ^ k < Sn
qm

A (g 3(9E))

8.
9.
10.
11.
12.
13.
14.

m D Sn  p C j
j < Sn
m D Sn  q C k
k < Sn
Sn  p C j D Sn  q C k
p <q_p Dq_q <p
p<q

4 ^E
4 ^E
6 ^E
6 ^E
8,10 DE
T13.13o
A (c I)

15.
16.

9v.S v C p D q/
Sl C p D q
p C Sl D q
S n  p C j D S n  .p C S l/ C k
S n  p C j D .S n  p C S n  S l/ C k
S n  p C j D S n  p C .S n  S l C k/
j D Sn  Sl C k
; < Sl
Sn  Sn  Sl
Sn  Sl  Sn  Sl C k
Sn  Sn  Sl C k
Sn  j
j 6< S n
?

17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.

14 abv
A (c 159E)

16, T6.52
12,17 DE
18 T6.61
19 T6.54
20 T6.66
T13.13e
22 T13.13y
T13.13t
23,24 T13.13a
21,25 DE
26 T13.13q
9,27 ?I
15,16-28 9E

p<
6 q
q<p

14-29 I
A (c I)

similarly

q 6< p
pDq
Sn  p C j D Sn  p C k
j Dk
j Dk

31-32 I
13,30,33 DS
12,34 DE
35 T6.66
3,6-36 (9E)

j Dk

2,4-37 (9E)

39. ..9w  m/m D S n  w C j ^ j < S n ^ .9w  m/m D S n  w C k ^ k < S n/ ! j D k


40. 8y..9w  m/m D S n  w C j ^ j < S n ^ .9w  m/m D S n  w C y ^ y < S n/ ! j D y
41. 8x8y..9w  m/m D S n  w C x ^ x < S n ^ .9w  m/m D S n  w C y ^ y < S n/ ! x D y

E13.10. Complete the unfinished cases to T13.21.


For the recursion clause from right to left:

1-38 !I
39 8I
40 8I


1. v D .p; q; i / $ B.p; q; i; v/
2. v D g.x/
E $ G .x;
E v/
E y; u/ $ H .x;
E y; u; v/
3. v D h.x;

def
assp
assp

4.

R.x;
E y; z/

A (g !I)

5.

9p9qf9vB.p; q; ;; v/ ^ G .x;
E v/ ^
.8i < y/9u9vB.p; q; i; u/ ^ B.p; q; S i; v/ ^ H .x;
E i; u; v/ ^ B.p; q; y; z/g
9vB.a; b; ;; v/ ^ G .x;
E v/ ^
.8i < y/9u9vB.a; b; i; u/ ^ B.a; b; S i; v/ ^ H .x;
E i; u; v/ ^ B.a; b; y; z/

4 def

6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.

9vB.a; b; ;; v/ ^ G .x;
E v/
B.a; b; ;; k/ ^ G .x;
E k/
B.a; b; ;; k/
k D .a; b; ;/
G .x;
E k/
k D g.x/
E
.a; b; ;/ D g.x/
E

A (g 59E)
6 ^E
A (g 79E)
8 ^E
9 with 1
8 ^E
11 with 2
10,12 DE

.a; b; ;/ D g.x/
E
.8i < y/9u9vB.a; b; i; u/ ^ B.a; b; S i; v/ ^ H .x;
E i; u; v/
l <y
9u9vB.a; b; l; u/ ^ B.a; b; S l; v/ ^ H .x;
E l; u; v/
B.a; b; l; r/ ^ B.a; b; S l; s/ ^ H .x;
E l; r; s/
B.a; b; l; r/
r D .a; b; l/
B.a; b; S l; s/
s D .a; b; S l/
H .x;
E l; r; s/
s D h.x;
E l; r/
h.x;
E l; .a; b; l// D .a; b; S l/
h.x;
E l; .a; b; l// D .a; b; S l/
.8i < y/h.x;
E i; .a; b; i // D .a; b; S i /
B.a; b; y; z/
.a; b; y/ D z
.a; b; ;/ D g.x/
E ^ .8i < y/h.x;
E i; .a; b; i // D .a; b; S i / ^ .a; b; y/ D z
9p9q .p; q; ;/ D g.x/
E ^ .8i < y/h.x;
E i; .p; q; i // D .p; q; S i / ^ .p; q; y/ D z
z D r.x;
E y/
z D r.x;
E y/

7,8-13 9E
6 ^E
A (g (8I))
15,16 (8E)
A (g 179E)
18 ^E
19, with 1
18 ^E
21 with 1
18 ^E
23 with 3
24,20,22 DE
17,18-25 9E
16-26 (8I)
6 ^E
28 with 1
14,27,29 ^I
30 9I
31 def
5,6-32 9E

34. R.x;
E y; z/ ! z D r.x;
E y/

E13.11. Complete the justification for T13.22 by demonstrating the zero case.
T13.22. With F .Ex; y; v/ as described in the main text,


4-33 !I


1.

F .x;
E ;; m/ ^ F .x;
E ;; n/

A (g !I)

2.
3.
4.

9p9q .p; q; ;/ D g.x/


E ^ .8i < ;/h.x;
E i; .p; q; i // D .p; q; S i / ^ .p; q; ;/ D m
9p9q .p; q; ;/ D g.x/
E ^ .8i < ;/h.x;
E i; .p; q; i // D .p; q; S i / ^ .p; q; ;/ D n
.a; b; ;/ D g.x/
E ^ .8i < ;/h.x;
E i; .a; b; i // D .a; b; S i / ^ .a; b; ;/ D m

1 ^E
1 ^E
A (g 29E)

5.
6.
7.
8.
9.
10.
11.
12.
13.
14.

.a; b; ;/ D g.x/
E
.a; b; ;/ D m
m D g.x/
E
.c; d; ;/ D g.x/
E ^ .8i < ;/h.x;
E i; .c; d; i // D .c; d; S i / ^ .c; d; ;/ D n
.c; d; ;/ D g.x/
E
.c; d; ;/ D n
n D g.x/
E
mDn
mDn

4 ^E
4 ^E
5,6 DE
A (g 39E)
8 ^E
8 ^E
9,10 DE
7,11 DE
3,8-12 9E

mDn

2,4-13 9E

15. .F .x;
E ;; m/ ^ F .x;
E ;; n// ! m D n
16. 8m8n.F .x;
E ;; m/ ^ F .x;
E ;; n// ! m D n

E13.12. Show (i) and (ii) for Def [ : ]. Then show each of the results in T13.23.
Def [ : ].
(i) PA ` 9vx D y C v _ .x < y ^ v D ;/. Beginning with T13.13p, this is
a straightforward derivation.
(ii) PA ` 8m8n.x D y C m _ .x < y ^ m D ;/ ^ x D y C n _ .x < y ^ n D
;// ! m D n


1-14 !I
15 8I


1.

x D y C j _ .x < y ^ j D ;/ ^ x D y C k _ .x < y ^ k D ;/

A (g !I)

2.
3.
4.
5.

x D y C j _ .x < y ^ j D ;/
x D y C k _ .x < y ^ k D ;/
y x_x <y
yx

1 ^E
1 ^E
T13.13p
A (g 4_E)

6.
7.
8.
9.
10.
11.
12.

xy
.x < y ^ j D ;/
.x < y ^ k D ;/
x DyCj
x DyCk
yCj DyCk
j Dk

5 T13.13q
6 _I, DeM
6 _I, DeM
2,7 DS
3,8 DS
9,10 DE
11 T6.66

13.

x<y

A (g 4_E)

14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.

y
x
x
y
x
x
x
x
j
k
j

T13.13t
13,14 T13.13c
15 T13.13r
T13.13t
13,17 T13.13c
18 T13.13r
2,16 DS
3,19 DS
20 ^E
21 ^E
22,23 DE

25.

j Dk

yCj
<yCj
yCj
yCk
<yCk
yCk
<y^j D;
<y^k D;
D;
D;
Dk

4,5-12,13-24 _E

26. .x D y C j _ .x < y ^ j D ;/ ^ x D y C k _ .x < y ^ k D ;// ! j D k


27. 8m8n.x D y C m _ .x < y ^ m D ;/ ^ x D y C n _ .x < y ^ n D ;// ! m D n

T13.23.
T13.23.a. PA ` a  b ! a D b C .a : b/.
This is straightforward with a D b C .a : b/ _ a < b ^ a : b D ; from
the definition.
T13.23.b. PA ` b  a ! a : b D ;.
From your assumption b  a you have a < b _ a D b with T13.13l. In the
first case, as in the previous problem, you get the result with the definition. In
the second case, a  b by T13.13l and you can use (a) with T6.66.
T13.23.c. PA ` a : b  a.
By T13.13p, a  b _ a < b. In the first case apply (a); and in the second you
have a  b so that you can apply (b).


1-25!I
26 8I



T13.23.d. PA ` a > b ! a : b > ;.
1.

a>b

A (g !I)

2.
3.

9v.S v C b D a/
Sj C b D a

1 def
A (g 29E)

ab
:
a D b C .a b/
:
Sj C b D b C .a b/
:
Sj D a b
; < Sj
:
;<a b
:
;<a b
10.
:
11. a > b ! ; < a b
4.
5.
6.
7.
8.
9.

1 T13.13l
4 T13.23a
3,5 DE
6 T6.66
T13.13e
7,8 DE
2,3-9 9E

T13.23.e. PA ` a : ; D a.
T13.23.f. PA ` Sa : a D 1.
Given T6.66, this is simple once you see from (a) that Sa D a C .Sa : a/
and from T6.45 that Sa D a C 1.
T13.23.g. PA ` a > ; ! a : 1 < a
You can do this in just a few lines.
T13.23.h. PA ` a  c ! .a : c/ C b D .a C b/ : c.
1.

ac

A (g !I)
:

a D c C .a c/
aCb a
aCb c
:
a C b D c C .a C b/ c
:
:
c C .a c/ C b D c C .a C b/ c
:
:
c C .a c/ C b D c C .a C b/ c
:
:
.a c/ C b D .a C b/ c
:
:
9. a  c ! .a c/ C b D .a C b/ c
2.
3.
4.
5.
6.
7.
8.

1 T13.23a
T13.13t
1,3 T13.13a
4 T13.23a
2,5 DE
6 T6.54
7 T6.66
1-8 !I

T13.23.i. PA ` .a : b/ : c D a : .b C c/.




1. a  b C c _ a < b C c

T13.13p

2.

bCc >a

A (g 1_E)

3.
4.
5.
6.

bCc a
:
a .b C c/ D ;
a b_a <b
b>a

2 T13.13l
3 T13.23b
T13.13p
A (g 5_E)

7.
8.
9.
10.
11.

ba
:
a bD;
c;
:
ca b
:
:
.a b/ c D ;

6 T13.13l
7 T13.23b
T13.13d
8,9 DE
10 T13.23b

12.

ab

A (g 5_E)
:

12 T13.23a
3,13 DE
14 T13.13u
15 T13.23b

17.
18.

a D b C .a b/
:
b C c  b C .a b/
:
ca b
:
:
.a b/ c D ;
:
:
.a b/ c D ;
:
:
:
.a b/ c D a .b C c/

19.

a bCc

A (g 1_E)

13.
14.
15.
16.

a D .b C c/ C a .b C c/
bCc b
ab
:
a D b C .a b/
:
b C .a b/  b C c
:
a bc
:
:
:
a b D c C .a b/ c
:
:
b C .a b/ D .b C c/ C a .b C c/
:
:
:
b C .c C .a b/ c/ D .b C c/ C a .b C c/
:
:
:
.b C c/ C .a b/ c D .b C c/ C a .b C c/
:
:
:
.a b/ c D a .b C c/
:
:
:
31. .a b/ c D a .b C c/

20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.

5,6-11,12-16 _E
17,4 DE

19 T13.23a
T13.13t
19,21 T13.13a
22 T13.23a
19,23 DE
24 T13.13u
25 T13.23a
20,23 DE
26,27 DE
28 T6.54
29 T6.66
1,2-18,19-30 _E

T13.23.j. PA ` .a C c/ : .b C c/ D a : b.
Start with a  b _ a < b. The second case is easy. For the first, you can
apply T13.23a to both a  b and to a C c  b C c.
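The subtraction throughout T13.23 is PA's cutoff subtraction a ∸ b (which the extraction renders with a bare dot): it returns a − b when a ≥ b and 0 otherwise. The instances of clauses (i) and (j) can be spot-checked numerically; the sketch below is mine, with `monus` as an illustrative name.

```python
# Cutoff ("monus") subtraction and spot-checks of T13.23 (i) and (j).
def monus(a, b):
    return a - b if a >= b else 0

for a in range(12):
    for b in range(12):
        for c in range(12):
            assert monus(monus(a, b), c) == monus(a, b + c)   # T13.23.i
            assert monus(a + c, b + c) == monus(a, b)         # T13.23.j
```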
T13.23.k. PA ` a  .b : c/ D a  b : a  c.




1. a D ; _ a > ;

T13.13f
A (g 1_E)

3.
4.
5.
6.
7.
8.

aD;
:
a.b c/ D ;
ab D ;
ac  ;
ac  ab
:
ab ac D ;
:
:
a.b c/ D ab ac

9.

a>;

A (g 1_E)

b c_b <c
c>b

T13.13p
A (g 10_E)

2.

10.
11.

2 T6.56
2 T6.56
T13.13d
5,4 DE
6 T13.23b
3,7 DE

12.
13.
14.
15.
16.
17.

cb
:
b cD;
:
a.b c/ D ;
ac  ab
:
ab ac D ;
:
:
a.b c/ D ab ac

11 T13.13l
12 T13.23b
13 T6.41
12 T13.13z
15 T13.23b
14,16 DE

18.

bc

A (g 10_E)

:
b D c C .b c/
ab D ab
:
ab D ac C .b c/
:
ab D ac C a.b c/
ab  ac
:
ab D ac C .ab ac/
:
:
ac C a.b c/ D ac C .ab ac/
:
:
a.b c/ D ab ac
:
:
27.
a.b c/ D ab ac
:
:
28. a.b c/ D ab ac

19.
20.
21.
22.
23.
24.
25.
26.

18 T13.23a
DI
20,19 DE
21 T6.61
18 T13.13z
23 T13.23a
22,24 DE
25 T6.66
10,11-17,18-26 _E
1,2-8,9-27 _E

E13.13. Show each of the results in T13.24


T13.24.
T13.24.a. PA ` ;ja
This is nearly immediate from the definition and T6.55.
T13.24.b. PA ` ajSa.
This is nearly immediate from the definition and T6.55.
T13.24.d. PA ` ajb ! aj.b  c/.
With the assumption for !I, you will be able to get .Sa  j /c D bc; then
simple association and the definition give the result.




T13.24.e. PA ` .ajS b ^ bjc/ ! ajc.

This is straightforward once you apply the definition to your assumption for
!I, and then make the assumptions for 9E.
T13.24.f. PA ` ajb ! aj.b C c/ $ ajc.
1.

ajb

A (g !I)

2.
3.

9q.Sa  q D b/
Sa  j D b

1 def
A (g 29E)

4.

aj.b C c/

A (g $I)

5.
6.

9q.Sa  q D b C c/
Sa  k D b C c

4 def
A (g 59E)

7.
8.
9.

Sa  k D .Sa  j / C c
j k_k <j
k<j
Sa  j  .Sa  j / C c
; < Sa
Sa  k < Sa  j
Sa  k < .Sa  j / C c
Sa  k .Sa  j / C c
?

10.
11.
12.
13.
14.
15.
16.
17.
18.
19.

kj
j k
9v.v C j D k/
l Cj Dk

25.

T13.13t
T13.13e
9,11 T13.13ac
10,12 T13.13c
13 T13.13r
7,14 ?I
9-15 I
8,16 DS
17 def
A (g 189E)

Sa  .l C j / D .Sa  j / C c
.Sa  l/ C .Sa  j / D .Sa  j / C c
Sa  l D c
9q.Sa  q D c/
ajc

20.
21.
22.
23.
24.

3,6 DE
T13.13p
A (c I)

ajc

7,19 DE
20 T6.61
21 T6.66
22 9I
23 def
18,19-24 9E

26.

ajc

5,6-25 9E

27.

ajc

A (g $I)

28.
29.

9q.Sa  q D c/
Sa  k D c

27 def
A (g 289E)

30.
31.
32.
33.
34.
35.
36.

bCc DbCc
.Sa  j / C .Sa  k/ D b C c
Sa  .j C k/ D b C c
9q.Sa  q D b C c/
aj.b C c/
aj.b C c/

DI
30,3,29 DE
31 T6.61
32 9I
33 def
28,29-34 9E

aj.b C c/ $ ajc

4-26,27-35 $I

aj.b C c/ $ ajc

2,3-36 9E

38. ajb ! aj.b C c/ $ ajc

1-37 !I

37.




T13.24.g. PA ` .b  c ^ ajb/ ! aj.b : c/ $ ajc.

From the assumption for !I you have aj.c C .b : c//; then with each of the
assumptions for $I you will be able to apply (f).
T13.24.h. PA ` a < b ! b Sa.
Make the standard assumptions for !I, I and, from the definition, 9E to get
S b  j D Sa; then, using the last strategy for reaching a contradiction, both
j D ; and j ; lead to contradiction.
T13.24.i. PA ` ajb $ rm.b; a/ D ;.
This is a matter of connecting the definitions. From ajb you get Sa  j D b
and from rm.b; a/ D ;, b D Sa  j C ; ^ ; < Sa; observe also that when
Sa  j D b you have j  b for (9I).
T13.24.j. PA ` rma C .y  Sd /; d D rm.a; d /.
Let r D rm.a; d /
1. .9w  a/a D Sd  w C r ^ r < Sd
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.

def rm

a D .Sd  j / C r ^ r < Sd
j a

A (g 1(9E))

a D .Sd  j / C r
a C .y  Sd / D a C .y  Sd /
a C .y  Sd / D .Sd  j / C r C .y  Sd /
a C .y  Sd / D .Sd  j / C .Sd  y/ C r
a C .y  Sd / D Sd  .j C y/ C r
r < Sd
a C .y  Sd / D Sd  .j C y/ C r ^ r < Sd
a C .y  Sd / D d  .j C y/ C .j C y/ C r
a C .y  Sd / D .j C y/ C d  .j C y/ C r
9vv C .j C y/ D a C .y  Sd /
j C y  a C .y  Sd /
.9w  a C .y  Sd //a C .y  Sd / D Sd  w C r ^ r < Sd
rm.a C .y  Sd /; d / D r

2 ^E
DI
4,5 DE
6 with T6.54
7 T6.61
2 ^E
8,9 ^I
8 T6.58
11 with T6.54
12 9I
13 def
10,14 (9I)
15 def

17. rm.a C .y  Sd /; d / D r

T13.24.k. PA ` Sd  z  a ! z  qt.a; d /.
Let r D rm.a; d / and q D qt.a; d /


1,2-16 (9E)



1. a D Sd  q C r ^ r < Sd
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.

Sd  z  a

def qt
A (g !I)

z>q

A (c I)

z  Sq
a D Sd  q C r
Sd  S q D .Sd  q/ C Sd
Sd  z  Sd  S q
Sd  z  .Sd  q/ C Sd
r < Sd
.Sd  q/ C r < .Sd  q/ C Sd
a < .Sd  q/ C Sd
a < Sd  z
a Sd  z
?

3 T13.13k
1 ^E
T6.42
4 T13.13z
7,6 DE
1 ^E
9 T13.13v
5,10 DE
8,11 T13.13c
2 T13.13q
12,13 ?I

zq
zq

3-14 I
15 T13.13q

17. Sd  z  a ! z  q
18. Sd  z  a ! z  qt.a; d /

2-16 !I
17 abv

T13.24.l. PA ` a  y  Sd ! rma : .y  Sd /; d D rm.a; d /


Let r D rm.a; d / and q D qt.a; d /
1. a D Sd  q C r ^ r < Sd
2.

a  y  Sd

def qt
A (g !I)

a D Sd  q C r
:
a D .y  Sd / C a .y  Sd /
:
Sd  q C r D .y  Sd / C a .y  Sd /
yq
Sd  y  Sd  q
:
Sd  q D .Sd  y/ C .Sd  q/ .Sd  y/
.Sd  q/ C r D .Sd  q/ C r
:
.Sd  q/ .Sd  y/ C .Sd  y/ C r D .Sd  q/ C r
:
:
.Sd  q/ .Sd  y/ C .Sd  y/ C r D .y  Sd / C a .y  Sd /
:
:
.Sd  q/ .Sd  y/ C r D a .y  Sd /
:
:
a .y  Sd / D Sd.q y/ C r
r < Sd
:
:
a .y  Sd / D Sd.q y/ C r ^ r < Sd
:
:
:
a .y  Sd / D d.q y/ C .q y/ C r
:
:
9vv C .q y/ D a .y  Sd /
:
:
q y  a .y  Sd /
:
:
.9w < a .y  Sd //a .y  Sd / D Sd  w C r ^ r < Sd
:
rm.a .y  Sd /; d / D r
:
21. a  y  Sd ! rm.a .y  Sd /; d / D r

3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.

E13.14. Show each of the results in T13.25.


T13.25.

1 ^E
2 T13.23a
3,4 DE
2 T13.24k
6 T13.13z
7 T13.23a
DI
8,9 DE
5,10 DE
11 T6.66
12 T13.23k
1 ^E
13,14 ^I
13 T6.58
16 9I
17 def
15,18 (9I)
19 def rm
2-20 !I



T13.25.d. PA ` 8xx > 1 ! 9z.Pr.S z/ ^ zjx/
1. ; > 1 ! 9z.Pr.S z/ ^ zj;/
2.

.8y  k/y > 1 ! 9z.Pr.S z/ ^ zjy/

trivial
A (g !I)

3.

Sk > 1

A (g !I)

4.
5.

Pr.S k/ _ Pr.S k/
Pr.S k/

T3.1
A (g 4_E)

6.
7.
8.

kjS k
Pr.S k/ ^ kjS k
9z.Pr.S z/ ^ zjS k/

9.

Pr.S k/

A (g 4_E)

.1 < S k ^ 8d d jS k ! .d D ; _ Sd D S k/
1 S k _ 9d d jS k ^ d ; ^ Sd S k
9d d jS k ^ d ; ^ Sd S k
j jS k ^ j ; ^ Sj S k

9 def
10 DeM,QN
3,11 DS
A (g 129E)

10.
11.
12.
13.
14.
15.
16.
17.
18.

j jS k
j ;
Sj S k
Sj  k _ k < Sj
k < Sj

19.
20.

k <j _k Dj
kDj

T13.24b
5,6 ^I
7 9I

13 ^E
13 ^E
13 ^E
T13.13p
A (c I)
18 T13.13m
A (c 19_E)

21.
22.
23.

Sk D Sk
Sj D S k
?

DI
21,20 DE
16,22 ?I

24.

k<j

A (c 19_E)

25.
26.

j Sk
?

24 T13.24h
14,25 ?I

27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.
43.
44.

19,20-23,24-26 _E

k Sj
Sj  k
Sj > 1 ! 9z.Pr.S z/ ^ zjSj /
j >;
Sj > 1
9z.Pr.S z/ ^ zjSj /
Pr.S l/ ^ ljSj
ljSj
ljSj ^ j jS k
ljS k
Pr.S l/
Pr.S l/ ^ ljS k
9z.Pr.S z/ ^ zjS k
9z.Pr.S z/ ^ zjS k

18-27 I
17,28 DS
2,29 (8E)
15 T13.13f
31 T13.13j
30,32 !E
A (g 339E)
34 ^E
35,14 ^I
36 T13.24e
34 ^E
38,37 ^I
39 9I
33,34-40 9E

9z.Pr.S z/ ^ zjS k

12,13-41 9E

9z.Pr.S z/ ^ zjS k

4,4-8,9-42 _E

S k > 1 ! 9z.Pr.S z/ ^ zjS k/

45. .8y  k/y > 1 ! 9z.Pr.S z/ ^ zjy/ ! S k > 1 ! 9z.Pr.S z/ ^ zjS k/


46. 8xf.8y  x/y > 1 ! 9z.Pr.S z/ ^ zjy/ ! S x > 1 ! 9z.Pr.S z/ ^ zjS x/g
47. 8xx > 1 ! 9z.Pr.S z/ ^ zjx/

T13.25.e. PA ` Rp.a; b/ $ 9xPr.S x/ ^ xja ^ xjb.


3-43 !I
2-44 !I
45 8I
1,46 T13.13af


1.

Rp.a; b/

A (g $I)

2.
3.

8d .d ja ^ d jb/ ! d D ;
9xPr.S x/ ^ xja ^ xjb

1 def
A (c I)

4.

Pr.Sj / ^ j ja ^ j jb

A (c 39E)

j ja ^ j jb
j D;
11
Sj  1
1 Sj
Pr.Sj /
1 < Sj ^ 8d d jSj ! .d D ; _ Sd D Sj /
1 < Sj
?

4 ^E
2,5 8E
T13.13l
6,7 DE
8 T13.13q
4 ^E
10 def
11 ^E
9,12 ?I

5.
6.
7.
8.
9.
10.
11.
12.
13.
14.

3,4-13 9E

15.

9xPr.S x/ ^ xja ^ xjb

3-14 I

16.

9xPr.S x/ ^ xja ^ xjb

A (g $I)

17.
18.

8xPr.S x/ ! .xja ^ xjb/


j ja ^ j jb

16 QN,DeM
A (g !I)

19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.

j D;_j >;
j >;
Sj > 1
9z.Pr.S z/ ^ zjSj /
Pr.S k/ ^ kjSj
kjSj
j ja
kjSj ^ j ja
kja
j jb
kjSj ^ j jb
kjb
kja ^ kjb
Pr.S k/
.kja ^ kjb/
?

35.

36.
37.

j ;
j D;

38.
39.
40.

T13.13f
A (c I)
20 T13.13j
21 T13.25d
A (c 229E)
23 ^E
18 ^E
24,25 ^I
26 T13.24e
18 ^E
26,28 ^E
29 T13.24e
27,30 ^I
23 ^E
17,32 8E
31,33 ?I
22,23-34 9E
20-35 I
19,36 DS

.j ja ^ j jb/ ! j D ;
8d .d ja ^ d jb/ ! d D ;
Rpa; b

41. Rp.a; b/ $ 9xPr.S x/ ^ xja ^ xjb

18-37 !I
38 8I
39 def
1-15,16-40 $I
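Reading Pr(Sn) as "n + 1 is prime" and x|a as "(x + 1) divides a", T13.25.e says Rp(a, b) holds just in case no prime divides both a and b, i.e. gcd(a, b) = 1. A numeric check under that reading, plus an instance check of T13.25.d (every number > 1 has a prime divisor); the sketch and function names are my own.

```python
from math import gcd

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, m))

def divides(x, b):                 # x|b in the text: (x + 1) divides b
    return b % (x + 1) == 0

def rp(a, b):
    """Rp(a, b) via T13.25.e: no x with Sx prime dividing both a and b."""
    return not any(is_prime(x + 1) and divides(x, a) and divides(x, b)
                   for x in range(max(a, b) + 2))

# Rp agrees with coprimality on a sample range.
for a in range(1, 15):
    for b in range(1, 15):
        assert rp(a, b) == (gcd(a, b) == 1)

# T13.25.d instances: every number > 1 has a prime divisor.
for n in range(2, 50):
    assert any(is_prime(p) and n % p == 0 for p in range(2, n + 1))
```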

T13.25.f. PA ` 8x8yG.a; b; x/ ! G.a; b; x  y/


With the assumptions G.a; b; j / and then au C j D bv for !I and 9E, you


can show auk C j k D bkv and generalize.


T13.25.g. PA ` .a > ; ^ b > ;/ ! 8x8y.G.a; b; x/ ^ G.a; b; y/ ^ x  y/ !
G.a; b; x : y/
1.

a >;^b >;

A (g !I)

2.
3.
4.

a>;
b>;
G.a; b; i / ^ G.a; b; j / ^ i  j

1 ^E
1 ^E
A (g !I)

5.
6.
7.
8.
9.

G.a; b; i /
9x9y.ax C i D by/
G.a; b; j /
9x9y.ax C j D by/
ap C i D bq

10.

ar C j D bs

11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.

4 ^E
5 def
4 ^E
7 def
A (g 69E)
A (g 89E)

i j
bar  ar
abs  bs
ap C i  i
bq  i
bq  j
bar C bq  ar C j
bar C bq  bs
:
:
.bq C bar/ C .bsa bs/ D .bq C bar/ C .bsa bs/
:
:
bsa C .bq C bar/ bs D .bq C bar/ C .bsa bs/
:
:
.bq C bar/ bs C bsa D .bq C bar/ C .bsa bs/
:
:
.bq C bar/ .ar C j / C bsa D .bq C bar/ C .bsa bs/
:
:
:
..bq C bar/ j / ar C bsa D .bq C bar/ C .bsa bs/
:
:
:
..bq j / C bar/ ar C bsa D .bq C bar/ C .bsa bs/
:
:
:
.bar ar/ C .bq j / C bsa D .bq C bar/ C .bsa bs/
:
:
:
.bar ar/ C ..ap C i / j / C bsa D .bq C bar/ C .bsa bs/
:
:
:
.bar ar/ C ..i j / C ap C bsa D .bq C bar/ C .bsa bs/
:
:
:
.ap C abs/ C .bar ar/ C .i j / D .bq C bar/ C .bsa bs/
:
:
:
a.p C bs/ C .bar ar/ C .i j / D b.q C ar/ C .bsa bs/
:
:
:
a.p C bs/ C a.br r/ C .i j / D b.q C ar/ C b.sa s/
:
:
:
a.p C bs/ C .br r/ C .i j / D b.q C ar/ C .sa s/
:
9x9yax C .i j / D by
:
G.a; b; i j /
:
G.a; b; i j /
:
G.a; b; i j /
:
G.a; b; i / ^ G.a; b; j / ^ i  j ! G.a; b; i j /
:
8x8y.G.a; b; x/ ^ G.a; b; y/ ^ x  y ! G.a; b; x y//
:
.a > ; ^ b > ;/ ! 8x8y.G.a; b; x/ ^ G.a; b; y/ ^ x  y ! G.a; b; x y//

T13.25.h. PA ` Rp.a; b/ ^ a > 1 ^ b > 1 ! 9x9y.ax C 1 D by/


(a) Show a  .b : 1/ C a D b  a and generalize.

4 ^E
3 T13.13y
2 T13.13y
T13.13t
9,14 DE
11,15 T13.13a
12,16 T13.13w
10,17 DE
DI
13,19 T13.23h
18,20 T13.23h
10,21 DE
22 T13.23i
16,23 T13.23h
12,24 T13.23h
9,25 DE
11,26 T13.23h
27 assoc com
28 T6.61
29 T13.23k
30 T6.61
31 9I
32 def
8,10-33 9E
6,9-34 9E
4-35 !I
36 8I
1-37 !I



(b) Show a  ; C b D b  1 and generalize.
(c) Let q D qt.i; d.a; b// and r D rm.i; d.a; b//.
c1. i D .Sd.a; b/  q/ C r
c2. r < Sd.a; b/

def qt
from def rm

c3. .8y < d.a; b//.a > ; ^ b > ;/ ! G.a; b; Sy/


c4.
G.a; b; i /

1 ^E
A (g !I)

c5.
c6.
c7.
c8.
c9.
c10.
c11.
c12.
c13.
c14.

G.a; b; Sd.a; b/  q/
Sd.a; b/  q  .Sd.a; b/  q/ C r
Sd.a; b/  q  i
:
8x8y.G.a; b; x/ ^ G.a; b; y/ ^ x  y/ ! G.a; b; x y/
:
G.a; b; i .Sd.a; b/  q//
:
i D Sd.a; b/  q C i .Sd.a; b/  q/
:
Sd.a; b/  q C i .Sd.a; b/  q/ D .Sd.a; b/  q/ C r
:
i .Sd.a; b/  q/ D r
G.a; b; r/
9y.r D Sy/

7 T13.25f
T13.13t
c1,c6 DE
6 T13.25g
c4,c5,c7,c8 8E
c7 T13.23a
c1,c10 DE
c11 T6.66
c9,c11 DE
A (c I)

c15.

r D Sk

A (c c149E)

c16.
c17.
c18.
c19.
c20.
c21.
c22.

S k < Sd.a; b/
k < d.a; b/
.a > ; ^ b > ;/ ! G.a; b; S k/
.a > ; ^ b > ;/ ^ G.a; b; S k/
G.a; b; S k/
G.a; b; r/
?

c2,c16 DE
c16 T13.13j
c3,c17 (8E)
c18 Impl, Dem
c19 ^E
c20,c15 DE
c13,c21 ?I

c23.
c24.
c25.
c26.

c14,c15-c22 9E

9y.r D Sy/
r D;
d.a; b/ji

c14-c23 I
c24 T6.43
c25 T13.24i

c27. G.a; b; i / ! d.a; b/ji


c28. 8xG.a; b; x/ ! d.a; b/jx

T13.25.i. PA ` Pr.Sa/ ^ aj.b  c/ ! .ajb _ ajc/


c4-c24 !I
c27 8I


1.

Pr.Sa/ ^ aj.b  c/

A (g !I)

2.
3.
4.
5.
6.

Pr.Sa/
1 < Sa ^ 8xxjSa ! .x D ; _ S x D Sa/
8xxjSa ! .x D ; _ S x D Sa/
aj.b  c/
ab

1 ^E
2 def
3 ^E
1 ^E
A (g !I)

7.
8.
9.
10.

j jb ^ j jSa

A (g !I)

j jSa
j D ; _ Sj D Sa
Sj D Sa

7 ^E
4,8 8E
A (c I)

11.
12.
13.
14.

j Da
j jb
ajb
?

10 T6.38
7 ^E
12,11 DE
6,13 ?I

15.
16.

Sj Sa
j D;

10-14 I
9,15 DS

17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.

.j jb ^ j jSa/ ! j D ;
8x.xjb ^ xjSa/ ! x D ;
Rp.b; Sa/
Sa > ;
b;
bD;
aj;
ajb
?

7-16 !I
17 8I
18 def
T13.13e
A (c E)
21 T13.13f
T13.24c
22,23 DE
6,24 ?I

b>;
9x9ybx C 1 D Sa  y
bp C 1 D Sa  q
c.Sa  q/ D c.Sa  q/
c.bp C 1/ D c.Sa  q/
cbp C c D c.Sa  q/
ajcbp
ajSa
ajc.Sa  q/
aj.cbp C c/
ajc
ajc

21-25 E
19,20,26 T13.25h
A (g 279E)
DI
28,29 DE
30 T6.61
5 T13.24d
T13.24b
33 T13.24d
31,34 DE
32,35 T13.24f
27,28-36 9E

a b ! ajc
ajb _ ajc

6-37 !I
38 Impl

40. Pr.Sa/ ^ aj.b  c/ ! .ajb _ ajc/

1-39 !I

E13.15. Show the conditions for Def [lcm] and Def [plm]. Then show each of the
results in T13.26.
Def [lcm].



(i) PA ` 9xx > ; ^ .8i < k/m.i /jx
Supposing the zero case is done.
1. 9xx > ; ^ .8i < ;/m.i /jx

zero case

2.

9xx > ; ^ .8i < j /m.i /jx

A (g !I)

3.

a > ; ^ .8i < j /m.i /ja

A (g 29E)

a>;
.8i < j /m.i /ja
S m.j / > ;
a  S m.j /  S m.j /
a  S m.j / > ;
l < Sj

3 ^E
3 ^E
T13.13e
4 T13.13y
6,8 T13.13c
A (g (8I))

4.
5.
6.
8.
9.
10.
11.
12.

l <j _l Dj
l <j

10 T13.13m
A (g 11_E)

13.
14.

m.l/ja
m.l/j.a  S m.j //

5,12 (8E)
13 T13.24d

15.

l Dj

A (g 11_E)

16.
17.
18.

m.j /jS m.j /


m.l/jS m.j /
m.l/j.a  S m.j //

T13.24b
16,15 DE
17 T13.24d

19.
20.
21.
22.
23.

m.l/j.a  S m.j //
.8i < Sj /m.i /j.a  S m.j //
a  S m.j / > ; ^ .8i < Sj /m.i /j.a  S m.j //
9xx > ; ^ .8i < Sj /m.i /jx
9xx > ; ^ .8i < Sj /m.i /jx

24. 9xx > ; ^ .8i < j /m.i /jx ! 9xx > ; ^ .8i < Sj /m.i /jx
25. 8y.9xx > ; ^ .8i < y/m.i /jx ! 9xx > ; ^ .8i < Sy/m.i /jx/
26. 9xx > ; ^ .8i < k/m.i /jx

11,12-14,15-18 _E
10-19 (8I)
9,20 ^I
21 9I
2,3-22 9E
2-23 !I
24 8I
1,25 IN
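For Def [lcm](i), a concrete witness is the product of all S m(i) for i < k: it is positive, and, on the text's reading of m(i)|x, each S m(i) divides it. A numeric sketch (the `witness` name and the sample values are mine):

```python
# Witness for Def [lcm](i): x = product of S m(i) for i < k.
def witness(ms):                  # ms lists sample values m(0), ..., m(k-1)
    x = 1
    for m in ms:
        x *= m + 1
    return x

ms = [2, 4, 0, 3]
x = witness(ms)
assert x > 0 and all(x % (m + 1) == 0 for m in ms)
```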

Def [plm]. These are straightforward.


T13.26.
T13.26.a. Show 1 > ; ^ .8i < ;/m.i /j1 ^ .8z < 1/z > ; ^ .8i < ;/m.i /jz
and apply the definition.
T13.26.b. This is straightforward.
T13.26.c. PA ` .8i < k/m.i /jx ! pk jx
Let q D qt.x; pk / and r D rm.x; pk /.



1.
2.
3.
4.

.8y < lk /y > ; ^ .8i < k/m.i /jy


Spk D lk
x D .Spk  q/ C r
r < Spk

def lk T13.19c
def pk
def q
from def r

5.

.8i < k/m.i /jx

A (g !I)

6.
7.

r < lk
a<k

4,2 DE
A (g (8I))

8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.

m.a/jx
m.a/j..Spk  q/ C r/
m.a/jlk
m.a/jSpk
m.a/j.Spk  q/
m.a/jr
.8i < k/m.i /jr
r > ; ^ .8i < k/m.i /jr
r ; _ .8i < k/m.i /jr
r ;
r D;
pk jx

20. .8i < k/m.i /jx ! pk jx

5,7 (8E)
8,3 DE
7 T13.26b
2,10 DE
11 T13.24d
9,12 T13.24f
7-13 (8I)
1,6 (8E)
15 DeM
14,16 DS
17 T13.13f
18 T13.24i
5-19 !I

T13.26.d. PA ` 8n.Pr.S n/ ^ njlk / ! .9i < k/njS m.i /


Supposing the zero case is done.




1. 8n.Pr.S n/ ^ njl; / ! .9i < ;/njS m.i /
2. lj > ; ^ .8i < j /m.i /jlj

zero case
def lj T13.19b

3. .8i < j /m.i /jlj


4.
8n.Pr.S n/ ^ njlj / ! .9i < j /njS m.i /

2 ^E
A (g !I)

5.

Pr.Sa/ ^ ajlSj

A (g !I)

6.
7.

Pr.Sa/
b < Sj

5 ^E
A (g (8I))

8.
9.

b <j _b Dj
b<j

7 T13.13m
A (g 8_E)

10.
11.

m.b/jlj
m.b/j.lj  S m.j //

3,9 (8E)
10 T13.24d

12.

bDj

A (g 8_E)

13.
14.
15.

m.j /jS m.j /


m.b/jS m.j /
m.b/j.lj  S m.j //

T13.24b
12,13 DE
14 T13.24d

16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.

m.b/j.lj  S m.j //

8,9-11,12-15 _E

.8i < Sj /m.i /j.lj  S m.j //


pSj j.lj  S m.j //
SpSj D lSj
ajlSj
ajSpSj
aj.lj  S m.j //
ajlj _ ajS m.j /
j < Sj
ajlj
Pr.Sa/ ^ ajlj
.9i < j /ajS m.i /
ajS m.b/
b<j

7-16 (8I)
17 T13.26c
def pSj
5 ^E
20,19 DE
21,18 T13.24e
6,22 T13.25i
T13.13g
A (g 23_E)
6,25 ^I
4,26 8E
A (g 27(9E))

b < Sj
.9i < Sj /ajS m.i /

29,24 T13.13b
28,30 (9I)

32.

.9i < Sj /ajS m.i /

27,28-31 (9E)

33.

ajS m.j /

A (g 23_E)

.9i < Sj /ajS m.i /

33,24 (9I)

34.
35.

.9i < Sj /ajS m.i /

23,25-32,33-34 _E

36.
.Pr.Sa/ ^ ajlSj / ! .9i < Sj /ajS m.i /
37.
8n.Pr.S n/ ^ njlSj / ! .9i < Sj /njS m.i /
38. 8n.Pr.S n/ ^ njlj / ! .9i < j /njS m.i / ! 8n.Pr.S n/ ^ njlSj / ! .9i < Sj /njS m.i /
39. 8y.8n.Pr.S n/ ^ njly / ! .9i < y/njS m.i / ! 8n.Pr.S n/ ^ njlSy / ! .9i < Sy/njS m.i //
40. 8n.Pr.S n/ ^ njlk / ! .9i < k/njS m.i /

E13.16. Provide derivations to show each of [a] - [e] to complete the derivation for
T13.27.

5-35 !I
36 8I
4-37 !I
38 8I
1,39 IN



T13.27.
a. PA ` ;  k ! .A.;/ ! B.;//

Trivially .8i < ;/rm.;; m.i // D h.i /; this gives you B.;/ and (1) follows
easily from this.
b. You will be able to use (10) and (11) to generate the antecedent to (8); (13)
then follows by !E.
c. PA; .11/ ` Rp.la ; S m.a//
c1.

Rp.la ; S m.a//

A (c E)

c2.
c3.

9xPr.S x/ ^ xjla ^ xjS m.a/


Pr.S u/ ^ ujla ^ ujS m.a/

c1, T13.25e
A (c c29E)

c4.
c5.
c6.
c7.
c8.
c9.
c10.

ujS m.a/
Pr.S u/
ujla
Pr.S u/ ^ ujla
.9i < a/ujS m.i /
ujS m.v/
v<a
a < Sa
v < a ^ a < Sa
.v < a ^ a < Sa/ ! Rp.S m.v/; S m.a//
Rp.S m.v/; S m.a//
Pr.S u/ ^ ujS m.v/ ^ ujS m.a/
9xPr.S x/ ^ xjS m.v/ ^ xjS m.a/
Rp.S m.v/; S m.a//
?

c11.
c12.
c13.
c14.
c15.
c16.
c17.
c18.
c19.
c20.

c3 ^E
c3 ^E
c3 ^E
c5,c6 ^I
c7 T13.26d
A (c c8(9E))

T13.13m
c10,c11 ^I
11 8E
c13,c12 !E
c5,c9,c4 ^I
c15 9I
c16 T13.25e
c14,c17 ?I
c8,c9-c18 (9E)

c2,c3-c19 9E

c21. Rp.la ; S m.a//

c1-c20 E

d. PA; .20/; .21/ ` s D Sm.a/  c C h.a/


d1.
d2.
d3.
d4.
d5.
d6.
d7.
d8.
d9.

s D .la b C r/ C h.a/la
la > ;
h.a/la  h.a/
:
h.a/la D h.a/ C h.a/la h.a/
:
h.a/la D h.a/ C h.a/la h.a/1
:
h.a/la D h.a/ C h.a/la 1
:
s D .la b C r/ C .h.a/ C h.a/la 1/
:
s D la b C .r C la 1h.a// C h.a/
s D S m.a/c C h.a/

21 T6.61
def la
d2 T13.13y
d3 T13.23a
d4 T6.55
d5 T13.23k
d1,d6 DE
d7 T6.53
20,d8 DE

e. PA; .10/; .13/; .21/; .22/ ` .8i < Sa/rm.s; m.i // D h.i /

e1.

u < Sa

A (g (8I))

e2.
e3.

u<a_uDa
u<a

e1 T13.13m
A (g e2_E)

e4.
e5.
e6.
e7.
e8.
e9.
e10.
e11.
e12.
e13.

m.u/jla
m.u/jla .b C h.a//
9qS m.u/q D la .b C h.a//
S m.u/v D la .b C h.a//
rm.s; m.u// D rm.s; m.u//
rm.s; m.u// D rm.la .b C h.a// C r; m.u//
rm.s; m.u// D rm.S m.u/v C r; m.u//
rm.s; m.u// D rm.r; m.u//
rm.r; .m.u// D h.u/
rm.s; m.u// D h.u/

e3 T13.26b
e4 T13.24d
def |
A (g e69E)
DI
e8,21 DE
e9,e7 DE
e10 T13.24j
13,e3 (8E)
e11,e12 DE

e14.

rm.s; m.u// D h.u/

e15.

uDa

A (g e2_E)

e16.
e17.
e18.
e19.
e20.
e21.
e22.
e23.
e24.
e25.
e26.
e27.
e28.
e29.
e30.

rm.s; m.u// D rm.s; m.u//


rm.s; m.u// D rm.S m.a/c C h.a/; m.u//
rm.s; m.u// D rm.S m.u/c C h.u/; m.u//
rm.s; m.u// D rm.h.u/; m.u//
a < Sa
u < Sa
m.u/  h.u/
h.u/ < S m.u/
S m.u/  ; D ;
; C h.u/ D h.u/
h.u/ D S m.u/  ; C h.u/
h.u/ D S m.u/  ; C h.u/ ^ h.u/ < S m.u/
9wh.u/ D S m.u/  w C h.u/ ^ h.u/ < S m.u/
rm.h.u/; m.u// D h.u/
rm.s; m.u// D h.u/

DI
e16,22 DE
e15,e17 DE
e18 T13.24j
T13.13g
e20,e15 DE
10,e21 (8E)
e22 T13.13l,m
T6.41
T6.49
e25,e24 =E
e26,e23 ^I
e26 9I
e28 def rm
e19,e29 DE

e31.

rm.s; m.u// D h.u/

e32. .8i < Sa/rm.s; m.i // D h.i /

e6,e7-e13 9E

e2,e3-e14,e15-e30 _E
e1-e31 (8I)

E13.17. Show the conditions for Def [maxs] and Def [maxp]. Then show each of the
results in T13.28.
Def [maxs].
(i) (a): A.;; ;/ is ; D ;^; D ;. (b): This time, you will obtain B.Sj; m.;//.
For the second part, you can use T8.21. (c): Again, you will obtain B.Sj; m.a//;
for the second part, under the assumption l < Sj for (8I) you have l D
j _ l < j by T13.13m; in either case, it is easy to show m.l/  m.a/. (d):
This time you obtain B.Sj; m.j //; for the second part under the assumption
l < Sj for (8I) again you have l D j _ l < j and you will be able to show
m.l/  m.j / in either case.

(ii) From your assumption for !I, you have two disjunctions. Two pairs (A
from one and B from the other) are incompatible. The other options give the
result you want.
Def [maxp].
(i) This argument is very straightforward under x  y _x < y from T13.13p.
(ii) This is like (ii) from Def [maxs] with the disjunctions only easier.
T13.28.
T13.28.a. PA ` maxp.x; y/  x ^ maxp.x; y/  y
From the definition, .x  y ^maxp.x; y/ D x/_.x < y ^maxp.x; y/ D y/;
then the argument is straightforward under x  y _ x < y from T13.13p.
T13.28.b. PA ` .8i < k/m.i /  maxsk
From the definition, .k D ;^maxsk D ;/_..9i < k/m.i / D maxsk ^.8i <
k/m.i /  maxsk / then the argument is straightforward under k D ; _ k ;
from T3.1.
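Read computationally, maxp picks the larger of a pair and maxs is the maximum of m(0), ..., m(k-1), with value 0 for the empty sequence (k = 0). Here is a minimal Python sketch under that reading; the names are illustrative, not the book's formal defined symbols.

```python
def maxp(x, y):
    # Def [maxp]: (x >= y and result is x) or (x < y and result is y)
    return x if x >= y else y

def maxs(m, k):
    # Def [maxs]: maximum of m(0), ..., m(k-1); 0 when k == 0
    result = 0
    for i in range(k):
        result = maxp(result, m(i))
    return result
```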
E13.18. Complete the demonstration for T13.29.
T13.29.



(i)

1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.

j <k

A (g (8I))

Sj > ;
q  Sj  q
q>;
q  Sj > ;
m.j / > ;
maxp.k; maxshk /  maxshk
r  maxshk
.8i < k/h.i /  maxshk
h.j /  maxshk
h.j /  r
r < Sr
Sr D s
r <s
h.j / < s
rjq
9vS r  v D q
Sr  a D q

T13.13e
2 T13.13y
def q
3,4 T13.13c
5 def m
T13.28a
7 def r
T13.28b
9,1 (8E)
8,10 T13.13a
T13.13g
def s
12,13 DE
11,14 T13.13c
14 T13.26b
def j
A (g 179E)

sa Dq
a D;_a >;
aD;

13,18 DE
T13.13f
A (c I)

s;D;
sa D;
qD;
q;
?

T6.41
22,21 DE
19,23 DE
4 T13.13f
24,25 ?I

a;
a>;
sa s
sq
q  Sj  s
q  Sj > h.j /
m.j / > h.j /
m.j /  h.j /

21-26 I
20,27 DS
28 T13.13y
19,29 DE
30,3 T13.13a
15,31 T13.13c
32 def m
33 T13.13zz

35.

m.j /  h.j /

17,18-34 9E

36.

m.j / > ; ^ m.j /  h.j /

6,35 ^I

37. .8i < k/.m.i / > ; ^ m.i /  h.i //

1-36 (8I)



(ii) (a)

(b)

a1.
a2.
a3.
a4.
a5.
a6.
a7.
a8.
a9.
a10.
a11.
a12.
a13.
a14.
a15.
a16.
a17.
a18.
a19.

i j
S i  Sj
q  S i  q  Sj
S.q  S i /  S.q  Sj /
:
aj.S.q  Sj / S.q  S i //
:
:
S.q  Sj / S.q  S i / D S.q  Sj / S.q  S i /
S.q  Sj / D .q  Sj / C 1
S.q  S i / D .q  S i / C 1
:
:
S.q  Sj / S.q  S i / D .q  Sj / C 1 .q  S i / C 1
:
:
.q  Sj / C 1 .q  S i / C 1 D .q  Sj / .q  S i /
:
:
.q  Sj / .q  S i / D q.Sj
Si/
:
:
q.Sj
S i / D q.Sj
Si/
Sj D j C 1
Si D i C 1
:
:
q.Sj
S i / D q..j C 1/ .i C 1//
:
:
:
.j C 1/ .i 1/ D j
i
:
:
q.Sj
S i / D q.j
i/
:
:
S.q  Sj / S.q  S i / D q.j
i/
:
ajq.j
i/

:
b1. j
i >;
:
i
b2. 9vS v C ; D j
:
b3.
Sl C ; D j
i
b4.
b5.
b6.
b7.
b8.
b9.
b10.
b11.
b12.
b13.
b14.
b15.
b16.
b17.

Sl C ; D Sl
:
Sl D j
i
ajS l
:
j
i j
:
j
i <k
maxp.k; maxshk /  k
Sr > r
k<s
:
j
i <s
Sl < s
l < Sl
l <s
ljq
ajq

b18. ajq

2 T13.13l
a1 T13.13i
a2 T13.13z
a3 T13.13i
a4,8,9 T13.24g
DI
T6.45
T6.45
a6,a7,a8 DE
T13.23j
T13.23k
DI
T6.45
T6.45
a12,a13 DE
T13.23j
a14,a15 DE
a6,a9,a10,a11,a17 DE
a5,a18 DE

2 T13.23d
b1 def
A (g b29E)
T6.39
b3,b4 DE
14,b5 DE
T13.23c
b7,3 T13.13c
T13.1
T13.13g
b9,b10 T13.13c
b8,b11 T13.13b
b5,b12 DE
T13.13g
b13,b14 T13.13b
b15 T13.26b
b6,b16 T13.24e
b2,b3-b17 9E

E13.19. Show the conditions for Def [h.i /] and then show T13.30.
Def [h.i /]. (i) is straightforward under i < k _ i  k from T13.13p. And (ii) is also
straightforward.
T13.30.



1. .k < k ^ h.k/ D .r; s; k// _ .k  k ^ h.k/ D n/
2. .l < k ^ h.l/ D .r; s; l// _ .l  k ^ h.l/ D n/
3. 9p9q.8i < S k/ .p; q; i / D h.i /
4.
.8i < S k/ .a; b; i / D h.i /
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.

k < Sk
.a; b; k/ D h.k/
kk
kk
k k _ h.k/ .r; s; k/
.k < k ^ h.k/ D .r; s; k//
k  k ^ h.k/ D n
h.k/ D n
.a; b; k/ D n
l <k
l < Sk
.a; b; l/ D h.l/
l k
l k _ h.l/ n
.l  k ^ h.l/ D n/
l < k ^ h.l/ D .r; s; l/
h.l/ D .r; s; l/
.a; b; l/ D .r; s; l/
.8i < k/ .a; b; i / D .r; s; i /
.8i < k/ .a; b; i / D .r; s; i / ^ .a; b; k/ D n
9p9q.8i < k/ .p; q; i / D .r; s; i / ^ .p; q; k/ D n

26. 9p9q.8i < k/ .p; q; i / D .r; s; i / ^ .p; q; k/ D n

def h
def h
T13.29
A (g 39E)
T13.13g
4,5 (8E)
T13.13l
7 T13.13q
8 _I
9 DeM
1,10 DS
11 ^E
6,12 DE
A (g (8I))
14,5 T13.13b
4,15 (8E)
14 T13.13q
17 _I
18 DeM
2,19 DS
20 ^E
16,21 DE
14-22 (8I)
23,13 ^I
24 9I
3,4-25 9E

E13.20. Complete the demonstration of T13.31 by showing the zero case.


E
T13.31. Apply T13.29 with h.i / D g.x/ to get 9p9q.8i < 1/.p; q; i / D g.x/;
then under an assumption for 9E, with ; < 1 the result easily follows.
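The machinery of T13.29-T13.31 codes an arbitrary finite sequence by a pair of numbers, via Gödel's β-function: β(r, s, i) is the remainder of r on division by 1 + s(i + 1). For readers who want to experiment, here is a hedged Python sketch; `beta` and `code` are illustrative names, and the brute-force `code` helper merely witnesses, for tiny sequences, that coding pairs exist (the Chinese-remainder reasoning of T13.26-T13.27 is what guarantees this in general).

```python
def beta(r, s, i):
    # Goedel's beta: the remainder of r on division by 1 + s*(i+1)
    return r % (1 + s * (i + 1))

def code(seq):
    # Brute-force search for a pair (r, s) with beta(r, s, i) == seq[i]
    # for every i; only sensible for very short sequences.
    s = 1
    while True:
        bound = 1
        for i in range(len(seq)):
            bound *= 1 + s * (i + 1)
        for r in range(bound):
            if all(beta(r, s, i) == v for i, v in enumerate(seq)):
                return r, s
        s += 1
```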
E13.24. Demonstrate each of the results in T13.36.
T13.36.
T13.36.b. PA ` subc.x; y/ D x : y


1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.

gsubc.x/ D idnt11 .x/


gsubc.x/ D x
subc.x; ;/ D gsubc.x/
subc.x; ;/ D x
:
x ;Dx
:
subc.x; ;/ D x ;
:
subc.x; j / D x j

def from subc, T13.21


1 with T13.34c
T13.33a
3,2 DE
T13.23e
4,5 DE
A (g !I)

subc.x; Sj / D hsubc.x; j; subc.x; j //


hsubc.x; j; u/ D pred.idnt33 .x; j; u//
hsubc.x; j; u/ D pred.u/
hsubc.x; j; subc.x; j // D pred.subc.x; j //
subc.x; Sj / D pred.subc.x; j //
:
subc.x; Sj / D pred.x j /
x j _x >j
xj

T13.33b
def from subc, T13.21
9 with T13.34c,T13.36a
10 8E
8,11 DE
7,12 DE
T13.13p
A (g 14_E)

16.
17.
18.
19.
20.
21.

x  Sj
:
x Sj D ;
:
x j D;
pred.;/ D ;
:
pred.x j / D ;
:
:
pred.x j / D x Sj

15 T13.13n
16 T13.23b
15 T13.23b
T13.35a
18,19 DE
20,17 DE

22.

x>j

A (g 14_E)

23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.

x  Sj
:
x D Sj C .x Sj /
xj
:
x D j C .x j /
:
:
Sj C .x Sj / D j C .x j /
:
:
j C S ; C .x Sj / D j C .x j /
:
:
S ; C .x Sj / D x j
:
:
S ; C .x Sj / D S ; C .x Sj /
:
:
S ; C .x Sj / D x j
:
:
; C .x Sj / D x Sj
:
:
S.x Sj / D x j
:
x j >;
:
:
S pred.x j / D x j
:
:
S.x Sj / D S pred.x j /
:
:
x Sj D pred.x j /
:
:
x Sj D pred.x j /
:
subc.x; Sj / D x Sj
:
:
subc.x; j / D x j ! subc.x; Sj / D x Sj
:
:
8y.subc.x; y/ D x y ! subc.x; Sy/ D x Sy/
:
subc.x; y/ D x y
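Computationally, the recursion just established — subc(x, 0) = x and subc(x, Sy) = pred(subc(x, y)) — iterates the predecessor function, which is exactly cutoff subtraction. A sketch, with names mirroring the text's subc and pred:

```python
def pred(n):
    # predecessor, with pred(0) = 0
    return n - 1 if n > 0 else 0

def subc(x, y):
    # subc(x, 0) = x; subc(x, Sy) = pred(subc(x, y)) -- i.e. cutoff subtraction
    r = x
    for _ in range(y):
        r = pred(r)
    return r
```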

T13.36.f. PA ` Eq.x; y/ $ x D y


T13.13k
23 T13.23a
22 T13.13l
25 T13.23a
24,26 DE
27 with T6.45
28 T6.66
T6.51
29,30 DE
T6.49
31,32 DE
22 T13.23d
34 T13.35a
33,35 DE
36 T6.38
14,15-21,22-37 _E
13,38 DE
7-39 !I
40 8I
6,41 IN



1. Eq.x; y/ $ sg.absval.x - y// D ;
:
:
2. Eq.x; y/ $ sg.x y/ C .y x/ D ;
:
:
3. Eq.x; y/ $ .x y/ C .y x/ D ;
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.

Eq.x; y/
:
:
.x y/ C .y x/ D ;
x y_x <y
xy
:
y xD;
:
.x y/ C ; D ;
;C;D;
:
.x y/ C ; D ; C ;
:
x yD;
:
x D y C .x y/
x DyC;
yC;Dy
xDy

def from EQ, T13.32


1 with T13.36d,c
2 T13.35d
A (g $I)
3,4 $E
T13.13p
A (g 6_E)
7 T13.23b
5,8 DE
T6.39
9,10 DE
11 T6.66
7 T13.23a
12,13 DE
T6.39
14,15 DE

17.

x<y

A (g 6_E)

18.
19.

yx
xDy

17 T13.13l
similarly

20.

xDy

6,7-17,18-19 _E

21.

xDy

A (g $I)

22.
23.
24.
25.
26.
27.
28.

yx
:
x yD;
xy
:
y xD;
;C;D;
:
:
.x y/ C .y x/ D ;
Eq.x; y/

21 T13.13l
22 T13.23b
21 T13.13l
24 T13.23b
T6.39
26,23,25 DE
3,27 $E

29 . Eq.x; y/ $ x D y

4-20,21-28 $I

T13.36.i. PA ` Neg.P.x//
E $ P.x/
E



1. P.x/
E $ chP .x/
E D;
2. Neg.P.x//
E $ csg.chP .x//
E D;

T13.32
def from NEG, T13.32

3. Neg.P.x//
E $ csg.chP .x//
E D;
4.
Neg.P.x//
E

2 T13.36e
A (g $I)

5.
6.
7.
8.

csg.chP .x//
E D;
chP .x/
E >;
chP .x/
E ;
P.x/
E

3,4 $E
5 T13.35g
6 T13.13f
1,7 NB

9.

P.x/
E

A (g $I)

chP .x/
E ;
chP .x/
E >;
csg.chP .x//
E D;
Neg.P.x//
E

1,9 NB
10 T13.13f
11 T13.35g
3,12 $E

10.
11.
12.
13.

14. Neg.P.x//
E $ P.x/
E

4-8,9-13 $I
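In this stretch of the chapter a relation P is captured by a characteristic function chP with chP = 0 exactly when P holds; sg, csg, and absval then yield equality (T13.36.f) and negation (T13.36.i). A small Python sketch under that 0-for-true convention (the names are illustrative):

```python
def sg(n):
    # sg(0) = 0, sg(n) = 1 for n > 0
    return 0 if n == 0 else 1

def csg(n):
    # converse: csg(0) = 1, csg(n) = 0 for n > 0
    return 1 if n == 0 else 0

def absval(x, y):
    # |x - y| as cutoff subtraction in both directions, summed
    return (x - y if x > y else 0) + (y - x if y > x else 0)

def ch_eq(x, y):
    # characteristic function of equality: 0 exactly when x == y (T13.36.f)
    return sg(absval(x, y))

def ch_neg(ch_value):
    # characteristic function of a negation, from the value of chP (T13.36.i)
    return csg(ch_value)
```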

E13.25. Demonstrate each of the results in T13.38.


T13.38.
T13.38.a. PA ` .99y  z/P.x;
E z; y/ $ .9y  z/P.x;
E y; z/
1. P.x;
E z; y/ $ chP .x;
E z; y/ D ;
E z; ;/ D gchR .x;
E z/
2. chR .x;
3. gchR .x;
E z/ D chP .x;
E z; ;/

T13.32
T13.33a
def from ELEQ, T13.21

4. chR .x;
E z; ;/ D chP .x;
E z; ;/
5.
chR .x;
E z; ;/ D ;

2,3 DE
A (g $I)

6.
7.
8.
9.

chP .x;
E z; ;/ D ;
P.x;
E z; ;/
;;
.9y  ;/P.x;
E z; y/

4,5 DE
1,6 8E, $E
T13.13l
7,8 (9I)

10.

.9y  ;/P.x;
E z; y/

A (g $I)

11.
12.

P.x;
E z; j /
j ;

A (g 10(9E))

13.
14.
15.
16.

j D;
P.x;
E z; ;/
chP .x;
E z; ;/ D ;
chR .x;
E z; ;/ D ;

12 T13.13l, T6.47
11,13 DE
1,14 8E, $E
4,15 DE

17.

chR .x;
E z; ;/ D ;

10,11-16 (9E)

18. chR .x;


E z; ;/ D ; $ .9y  ;/P.x;
E z; y/

5-9,10-17 $I


1.
2.
3.
4.


chR .x;
E z; ;/ D ; $ .9y  ;/P.x;
E z; y/
P.x;
E z; y/ $ chP .x;
E z; y/ D ;
chR .x;
E z; Sj / D hchR .x;
E z; j; chR .x; z; j //
hchR .x;
E z; j; u/ D times.u; chP .x;
E z; suc.j //

5. hchR .x;
E z; j; u/ D u  chP .x;
E z; Sj /
E z; j; chR .x; z; j // D chR .x; z; j /  chP .x;
E z; Sj /
6. hchR .x;
7. chR .x;
E z; Sj / D chR .x; z; j /  chP .x;
E z; Sj /
chR .x;
E z; j / D ; $ .9y  j /P.x;
E z; y/
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.

zero case
T13.32
T13.33b
def from ELEQ, T13.21
4 T13.34a,e
5 8E
3,6 DE
A (g !I)

chR .x;
E z; Sj / D ;

A (g $I)

chR .x; z; j /  chP .x;


E z; Sj / D ;
chR .x; z; j / D ; _ chR .x; z; j / > ;
chR .x; z; j / D ;

7,9 DE
T13.13f
A (g 11_E)

.9y  j /P.x;
E z; y/
P.x;
E z; a/
aj

8,12 $E
A (g 13(9E))

a  Sj
.9y  Sj /P.x;
E z; y/

15 T13.13n
14,16 (9I)

18.

.9y  Sj /P.x;
E z; y/

13,14-17 (9E)

19.

chR .x; z; j / > ;

A (g 11_E)

20.
22.
23.
24.
25.
26.
27.

chR .x; z; j / ;
chR .x; z; j /  ; D ;
chR .x; z; j /  chP .x;
E z; Sj / D chR .x; z; j /  ;
chP .x;
E z; Sj / D ;
P.x;
E z; Sj /
Sj  Sj
.9y  Sj /P.x;
E z; y/

19 T13.13f
T6.41
10,22 DE
23,20 T6.67
2,24 8E,$E
T13.13l
15,26 (9I)

28.

.9y  Sj /P.x;
E z; y/

11,12-18,19-27 _E

29.

.9y  Sj /P.x;
E z; y/

A (g $I)

30.
31.

P.x;
E z; a/
a  Sj

A (g 29(9E))

32.
33.

a  j _ a D Sj
aj

31 T13.13n
A (g 32_E)

34.
35.
36.
37.

.9y  j /P.x;
E z; y/
chR .x;
E z; j / D ;
chR .x;
E z; j /  chP .x;
E z; Sj / D ;
chR .x;
E z; Sj / D ;

38.

a D Sj

A (g 32_E)

39.
40.
41.
42.

P.x;
E z; Sj /
chP .x;
E z; Sj / D ;
chR .x;
E z; j /  chP .x;
E z; Sj / D ;
chR .x;
E z; Sj / D ;

30,38 DE
2,39 8E, $E
40 T6.56
7,41 DE

43.
44.
45.

chR .x;
E z; Sj / D ;

30,33 (9I)
8,34 $E
35 T6.56
7,36 DE

32,33-37,38-42 _E

chR .x;
E z; Sj / D ;

29,30-43 (9E)

chR .x;
E z; Sj / D ; $ .9y  Sj /P.x;
E z; y/

46. chR .x;


E z; j / D ; $ .9y  j /P.x;
E z; y/ ! chR .x;
E z; Sj / D ; $ .9y  Sj /P.x;
E z; y/
47. 8w.chR .x;
E z; w/ D ; $ .9y  w/P.x;
E z; y/ ! chR .x;
E z; S w/ D ; $ .9y  S w/P.x;
E z; y//
48. chR .x;
E z; n/ D ; $ .9y  n/P.x;
E z; y/


9-28,29-44 $I
8-45 !I
46 8I
1,47 IN


1.
2.
3.
4.
5.
6.
7.

chR .x;
E z; n/ D ; $ .9y  n/P.x;
E z; y/
chS .x;
E z/ D chR .x;
E z; z/
S.x;
E z/ $ chS .x;
E z/ D ;
chR .x;
E z; z/ D ; $ .9y  z/P.x;
E z; y/
chS .x;
E z/ D ; $ .9y  z/P.x;
E z; y/
S.x;
E z/ $ .9y  z/P.x;
E z; y/
.99y  z/P.x;
E z; y/ $ .9y  z/P.x;
E y; z/

from above
def from ELEQ, T13.21
T13.32
1 8E
2,4 DE
from 3,5
6 abv
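The recursion in this argument makes chR a running product of chP values: the product drops to 0 (true, on the chapter's convention) as soon as some witness y up to the bound satisfies P, and stays positive otherwise. A sketch, assuming a 0-valued characteristic function means the relation holds:

```python
def ch_bounded_exists(ch_p, z):
    # chR(0) = chP(0); chR(Sj) = chR(j) * chP(Sj) -- the recursion of T13.38.a.
    # Result is 0 iff ch_p(y) == 0 for some y <= z.
    r = ch_p(0)
    for j in range(1, z + 1):
        r = r * ch_p(j)
    return r
```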

T13.38.e. PA ` .y  z/P.x;


E z; y/ $ .y  z/P.x;
E z; y/
(a)

(b)

a1.
a2.
a3.
a4.
a5.
a6.

q.x;
E z; ;/ D gq.x;
E z/
gq.x;
E z/ D zero.chR .x;
E z; ;//
gq.x;
E z/ D ;
q.x;
E z; ;/ D ;
.y  ;/P.x;
E z; y/ D ;
q.x;
E z; ;/ D .y  ;/P.x;
E z; y/

T13.33a
def from least, T13.21
a2 T13.34b
a1,a3 DE
T13.20a
a4,a5 DE

b1.

kj

A (g (8I))

b2.
b3.

k <j _k Dj
k<j

b1 T13.13l
A (g b2_E)

b4.
b5.

k<a
P.x;
E z; k/

b3,17 DE
15,b4 (8E)

b6.

kDj

A (g b2_E)

P.x;
E z; k/

19,b6 DE

b7.
b8.
b9.
b10.
b11.
b12.
b13.
b14.
b15.
b16.
b17.

P.x;
E z; k/

b2,b3-b5,b6-b7 _E

.8y  j /P.x;
E z; y/
.9y  j /P.x;
E z; y/
chR .x;
E z; j / ;
chR .x;
E z; j / D 1
b DaC1
b D Sa
b D Sj
b D Sj _ P .x;
E z; b/
k<b

b1-b8 (8I)
b9 (QN)
3,b10 NB
2,b11 DS
12,b12 DE
b13 T6.45
b14,17 DE
b15 _I
A (g (8I))

b18.
b19.
b20.
b21.

k < Sj
k Sj
k <j _k Dj
k<j

b17,b15 DE
b18 T13.13r
b18 T13.13l
A (g b20_E)

b22.
b23.

k<a
P.x;
E z; k/

b21,17 DE
15,b22 (8E)

b24.

kDj

A (g b20_E)

P.x;
E z; k/

19,b24 DE

b25.
b26.
b27.

P.x;
E z; k/
k Sj ^ P.x;
E z; k/

b28. .8w < b/.w Sj ^ P.x;


E z; w//
b29. b D Sj _ P.x;
E z; b/ ^ .8w < b/.w Sj ^ P.x;
E z; w//


b20,b21-b23,b24-b25 _E
b19,b26 ^I
b17-b27 (8I)
b16,b28 ^I



(c)

c1.
c2.
c3.
c4.
c5.
c6.
c7.
c8.
c9.
c10.
c11.
c12.
c13.
c14.
c15.
c16.

j j
.9y  j /P.x;
E z; y/
chR .x;
E z; j / D ;
b DaC;
aC;Da
bDa
bDj
P.x;
E z; b/
b D Sj _ P.x;
E z; b/
k<b

T13.13l
21,c1 (9I)
3,c2 $E
12,c3 DE
T6.39
c4,c5 DE
17,c6 DE
21,c7 DE
c8 _I
A (g (8I))

k<j
k < Sj
k Sj
k<a
P.x;
E z; k/
k Sj ^ P.x;
E z; k/

c10,c7 DE
c11 T13.13m
c12 T13.13r
c10,c6 DE
15,c14 (8E)
c13,c15 ^I

c17. .8w < b/.w Sj ^ P.x;


E z; w/
c18. b D Sj _ P.x;
E z; b/ ^ .8w < b/.w Sj ^ P.x;
E z; w//

(d)

d1.

j <a

A (c I)

d2.
d3.
d4.

j j
j Dj
?

15,d1 (8E)
DI
d2,d3 ?I

d5.
d6.
d7.
d8.
d9.
d10.
d11.
d12.
d13.
d14.
d15.
d16.
d17.
d18.
d19.
d20.

j a
aj
.9y  j /P.x;
E z; y/
chR .x;
E z; j / D ;
b DaC;
aC;Da
bDa
P.x;
E z; b/
b D Sj _ P.x;
E z; b/
k<b

d1-d4 I
d5 T13.13p
24,d6 (9I)
3,d7 $E
12,d8 DE
T6.39
d9,d10 DE
24,d11 DE
d12 _I
A (g (8I))

k<a
k<j
k < Sj
k Sj
P.x;
E z; k/
k Sj ^ P.x;
E z; k/

d14,d11 DE
d14,d6 T13.13c
d16 T13.13m
d17 T13.13r
15,d15 (8E)
d18,d19 ^I

d21. .8w < b/.w Sj ^ P.x;


E z; w//
d22. b D Sj _ P.x;
E z; b/ ^ .8w < b/.w Sj ^ P.x;
E z; w//
1.
2.
3.
4.
5.

c10-c16 (8I)
c9,c17 ^I

q.x;
E z; n/ D .y  n/P.x;
E z; y/
m.x;
E z/ D q.x;
E z; z/
q.x;
E z; z/ D .y  z/P.x;
E z; y/
m.x;
E z/ D .y  z/P.x;
E z; y/
.y  z/P.x;
E z; y/ $ .y  z/P.x;
E y; z/

from main arg


def from least, T13.21
1 8E
2,3 DE
4 abv
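T13.38.e concerns the bounded least operator. On one common convention — assumed here purely for illustration; the text's own boundary clauses are given by its cases (a)-(d) — the operator returns the least witness y up to the bound z when there is one, and z + 1 otherwise:

```python
def bounded_least(p, z):
    # least y <= z with p(y); z + 1 when no such y exists.
    # Sketch only: the book's exact convention for the no-witness case may differ.
    for y in range(z + 1):
        if p(y):
            return y
    return z + 1
```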

T13.38.g. PA ` Prime.n/ $ Pr.n/



d14-d20 (8I)
d13,d21 ^I


1.

Pr.n/

A (g $I)

2.
3.
4.
5.

1 < n ^ 8xxjn ! .x D ; _ S x D n/
1<n

1 Def [Pr]
2 ^E
2 ^E
A (g (8I))

6.

8xxjn ! .x D ; _ S x D n/
a<n
ajn ! .a D ; _ Sa D n/

4 8E

.8j < n/j jn ! .j D ; _ Sj D n/


1 < n ^ .8j < n/j jn ! .j D ; _ Sj D n/
Prime.n/

5-6 (8I)
3,7 ^I
8 def PRIME and equivalence

10.

Prime.n/

A (g $I)

11.
12.
13.
14.
15.

1 < n ^ .8j < n/j jn ! .j D ; _ Sj D n/


1<n

10 def PRIME and equivalence


11 ^E
11 ^E
T13.13p
A (g 14_E)

7.
8.
9.

.8j < n/j jn ! .j D ; _ Sj D n/


a <n_na
a<n

16.

ajn ! .a D ; _ Sa D n/

13,15 (8E)

17.

na

A (g 14_E)

18.
19.
20.
21.

;<1
;<n
9v.S v C ; D n/
Sb C ; D n

T13.13e
18,12 T13.13b
19 def
A (g 209E)

22.
23.
24.
25.
26.
27.

Sb C ; D Sb
Sb D n
Sb  a
b<a
a Sb
an

28.
29.
30.
31.
32.
33.
34.

T6.39
21,22 DE
17,23 DE
24 T13.13k
25 T13.24h
26,23 DE

an
a n _ .a D ; _ Sa D n/
ajn ! .a D ; _ Sa D n/
ajn ! .a D ; _ Sa D n/
8xxjn ! .x D ; _ S x D n/
1 < n ^ 8xxjn ! .x D ; _ S x D n/
Pr.n/

20,21-27 9E
28 _I
29 Impl
14,15-16,17-30 _E
31 8I
12,32 ^I
33 Def [Pr]
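T13.38.g identifies the bounded predicate Prime with Pr; both amount to the ordinary condition that 1 < n and n has no divisor strictly between 1 and n (bounding the divisor search below n costs nothing, as the derivation shows). The computational content, as a sketch:

```python
def is_prime(n):
    # 1 < n, and no d with 1 < d < n divides n
    return n > 1 and all(n % d != 0 for d in range(2, n))
```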

E13.26. Show each of the results from T13.40.


T13.40.
T13.40.j. PA ` m > 1 ! a < ma


1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.

m; D 1
;<1
; < m;
m 1 _ ; < m;
m > 1 ! ; < m;
m > 1 ! j < mj

T13.40a
T13.13e
1,2 DE
3 _I
4 Impl
A (g !I)

m>1

A (g !I)

mj

j <
Sj < S mj
mSj D mj  m
9v.S v C 1 D m/
Sl C 1 D m

6,7 !E
8 T13.13j
T13.40a
7 def
A (g 119E)

Sl C 1 D SSl
m D SSl
mj  S S l D mj  S l C mj
mj  m D mj  S l C mj
mSj D mj  S l C mj
Sl > ;
mj  S l  mj
m>;
mj > ;
mj  S l > ;
mj  S l  1
mj  S l C mj  1 C mj
1 C mj D S mj
mj  S l C mj  S mj
mSj  S mj
Sj < mSj
Sj < mSj

12 T6.45
12,13 DE
T6.42
15,14 DE
16,10 DE
T13.13e
T13.13y
7,2 T13.13b
T13.40g
19,21 T13.13c
T13.13k
T13.13u
T6.45
24,25 DE
17,26 DE
9,27 T13.13c
11,12-28 9E

m > 1 ! Sj < mSj

31. .m > 1 ! j < mj / ! .m > 1 ! Sj < mSj /


32. 8y.m > 1 ! y < my / ! .m > 1 ! Sy < mSy /
33. m > 1 ! a < ma

7-29 !I
6-30 !I
31 8I
5,32 IN

E13.27. Show each of the results from T13.41.


T13.41.
T13.41.e. PA ` .y  fact.n/ C 1/n < y ^ Pr.y/ D yn < y ^ Pr.y/



1. fact.n/ > 0
2. fact.n/ C 1 > 1
3. 9zPr.S z/ ^ zj.fact.n/ C 1/
4.
Pr.S k/ ^ kj.fact.n/ C 1/
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.

Pr.S k/
Sk > 1
kj.fact.n/ C 1/
k<n

1 T13.41c
1 with T13.13v
2 T13.25d
A (g 39E)
4 ^E
5 def
4 ^E
A (g I)

kjfact.n/
kj1
;<k
k1
?

8 T13.41d
7,9 T13.24f
6 T13.13j
11 T13.24h
10,12 ?I

kn
nk
n < Sk
n < S k ^ Pr.S k/
fact.n/ C 1 D S fact.n/
kjS fact.n/
fact.n/ k
k  fact.n/
S k  S fact.n/
S k  fact.n/ C 1
.9y  fact.n/ C 1/n < y ^ Pr.y/

25. .9y  fact.n/ C 1/n < y ^ Pr.y/


26. .y  fact.n/ C 1/n < y ^ Pr.y/ D yn < y ^ Pr.y/
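The mathematical content of T13.41.e is Euclid's observation: every prime divisor of n! + 1 exceeds n, so a prime y with n < y and y no greater than n! + 1 always exists. A quick computational check (brute force, only sensible for small n; `prime_above` is an illustrative name, not a defined symbol of the text):

```python
from math import factorial

def _is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

def prime_above(n):
    # Euclid's step behind T13.41.e: search n < y <= n! + 1 for a prime.
    bound = factorial(n) + 1
    for y in range(n + 1, bound + 1):
        if _is_prime(y):
            return y
```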

E13.28. Show each of the results from T13.42.


T13.42.
T13.42.k. PA ` 8yPr.y/ ! 9j pi.j / D y


8-12 I
13 T13.13q
15 T13.13l,m
6,17 ^I
T6.45
7,18 DE
19 T13.24h
20 T13.13q
21 T13.13i
22 T6.45
19,22 (9I)
3,4-24 9E
25 T13.20b


1.

a  pi.0/

A (g (8I))

2.
3.
4.
5.

pi.0/ D 2
a2
a D0_a D1_a D2
aD0

T13.42a
1,2 DE
3 T8.16
A (g 4_E)

6.
7.
8.

Pr.;/
Pr.a/
Pr.a/ _ 9j pi.j / D a

9.

T13.25a
6,5 DE
6 _I

aD1

A (g 4_E)

10.
11.
12.

Pr.1/
Pr.a/
Pr.a/ _ 9j pi.j / D a

T13.25b
10,9 DE
11 _I

13.

aD2

A (g 4_E)

14.
15.
16.

pi.0/ D a
9j pi.j / D a
Pr.a/ _ 9j pi.j / D a

2,13 DE
14 9I
15 _I

17.
18.

Pr.a/ _ 9j pi.j / D a
Pr.a/ ! 9j pi.j / D a

19. .8y  pi.;//Pr.y/ ! 9j pi.j / D y

4,5-8,9-12,13-16 _E
17 Impl
1-17 (8I)


20.


.8y  pi.k//Pr.y/ ! 9j pi.j / D y

A (g !I)

21.

a  pi.S k/

A (g (8I))

22.
23.

a D pi.S k/ _ a < pi.S k/


a D pi.S k/

21 T13.13l
A (g 22_E)

24.
25.
26.

9j pi.j / D a
Pr.a/ _ 9j pi.j / D a
Pr.a/ ! 9j pi.j / D a

23 9I
24 _I
25 Impl

27.

a < pi.S k/

A (g 22_E)

28.
29.

a  pi.k/ _ a > pi.k/


a  pi.k/

T13.13p
A (g 28_E)

30.

Pr.a/ ! 9j pi.j / D a

20,29 (8E)

31.

a > pi.k/

A (g 28_E)

32.
33.
34.
35.
36.
37.

.8w < pi.S k//pi.k/ < w ^ Pr.w/


pi.k/ < a ^ Pr.a/
pi.k/ a _ Pr.a/
Pr.a/
Pr.a/ _ 9j pi.j / D a
Pr.a/ ! 9j pi.j / D a

T13.42d
32,27 (8E)
33 DeM
34,31 DS
35 _I
36 Impl

38.
39.
40.

Pr.a/ ! 9j pi.j / D a
Pr.a/ ! 9j pi.j / D a
.8y  pi.S k//Pr.y/ ! 9j pi.j / D y

41. .8y  pi.k//Pr.y/ ! 9j pi.j / D y ! .8y  pi.S k//Pr.y/ ! 9j pi.j / D y


42. 8z..8y  pi.z//Pr.y/ ! 9j pi.j / D y ! .8y  pi.S z//Pr.y/ ! 9j pi.j / D y/
43. .8y  pi.i //Pr.y/ ! 9j pi.j / D y

T13.42.l. PA ` m n ! pred.pi.m// pi.n/a


28,29-30,31-37 _E
22,23-26,27-38 _E
21-39 (8I)
20-40 !I
41 8I
19,42 IN


1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.

mn

A (g !I)

pi.n/; D 1
S pred.pi.n/; / D pi.n/;
S pred.pi.n/; / D 1
S pred.pi.m/1 / D pi.m/1
pi.m/1 D pi.m/
S pred.pi.m// D pi.m/
pi.m/ > 1
S pred.pi.m// > Spred.pi.n/; /
pred.pi.m// > pred.pi.n/; /
pred.pi.m// S pred.pi.n/; /
pred.pi.m// pi.n/;
pred.pi.m// pi.n/j

T13.40a
T13.42h
2,3 DE
T13.42h
T13.40b
5,6 DE
T13.42f
8,7,4 DE
9 T13.13j
10 T13.24h
11,3 DE
A (g !I)

14.

pred.pi.m//jpi.n/Sj

A (c I)

15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.

pi.n/Sj

T13.40a
14,15 DE
T13.42e
7,17 DE
16,18 T13.25i
19,13 DS
T13.42e
20,21 def Pr
7,8 DE
23 T13.13j
24 T13.13f
22,25 DS
7,26 DE
1 with T13.13o
A (g 28_E)

pi.n/j

D
 pi.n/
pred.pi.m//j.pi.n/j  pi.n//
Prpi.m/
PrSpred.pi.m//
pred.pi.m//jpi.n/j _ pred.pi.m//jpi.n/
pred.pi.m//jpi.n/
Prpi.n/
pred.pi.m// D ; _ S pred.pi.m// D pi.n/
S pred.pi.m// > S;
pred.pi.m// > ;
pred.pi.m// ;
S pred.pi.m// D pi.n/
pi.m/ D pi.n/
m<n_n<m
m<n

30.
31.

pi.m/ < pi.n/


pi.m/ pi.n/

29 T13.42i
30 T13.13f

32.

n<m

A (g 28_E)

33.
34.

pi.n/ < pi.m/


pi.m/ pi.n/

32 T13.42i
33 T13.13f

35.
36.
37.
38.
39.
40.

pi.m/ pi.n/
?

28,29-31,32-34 _E
27,35 ?I

pred.pi.m// pi.n/Sj
pi.n/j

14-36 I
pi.n/Sj

pi.m//
! pred.pi.m//
8y.pi.m// pi.n/y ! pred.pi.m// pi.n/Sy /
pred.pi.m// pi.n/a

41. m n ! pred.pi.m// pi.n/a

13-37!I
38 8I
12,39 IN
1-40 !I

T13.42.n. PA ` m n ^ pred.pi.m/b /j.s  pi.n/a / ! pred.pi.m/b /js



1.

m n ^ pred.pi.m/b /j.s  pi.n/a /

A (g !I)

2.
3.
4.
5.
6.
7.
8.
9.

mn
pred.pi.m/b /j.s  pi.n/a /
pi.m/; D 1
pred.1/ D ;
;js
pred.pi.m/; /js
; b _ pred.pi.m/; /js
;  b ! pred.pi.m/; /js

1 ^E
1 ^E
T13.40a
T13.35c
T13.24a
6,4,5 DE
7 _I
8 Impl



10.

j  b ! pred.pi.m/j /js

A (g !I)

11.

Sj  b

A (g !I)

12.
13.
14.
15.
16.
17.
18.

j b
pred.pi.m/j /js
Spred.pi.m/b / D pi.m/b
Spred.pi.m/j / D pi.m/j
9qSpred.pi.m/b /  q D s  pi.n/a
9qSpred.pi.m/j /  q D s
S pred.pi.m/b /  u D s  pi.n/a

T13.13k,l
10,12 !E
T13.42h
T13.42h
3 def
13 def
A (g 169E)

19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.
43.
44.
45.
46.
47.
48.
49.
50.
51.
52.
53.
54.
55.
56.
57.
58.
59.
60.
61.
62.
63.
64.

pi.m/b  u D s  pi.n/a
S pred.pi.m/j /  v D s
pi.m/j  v D s
j <b
9v.S v C j D b/
Sl C j D b

14,18 DE
A (g 179E)
15,20 DE
11 T13.13k
22 def
A (g 239E)

pi.m/S lCj D pi.m/S l  pi.m/j


pi.m/b D pi.m/S l  pi.m/j
pi.m/S l  pi.m/j  u D s  pi.n/a
pi.m/S l  pi.m/j  u D pi.m/j  v  pi.n/a
pi.m/j ;
pi.m/S l  u D v  pi.n/a
pred.pi.m/1 /jpi.m/lC1
pi.m/1 D pi.m/
l C 1 D Sl
pred.pi.m//jpi.m/S l
pred.pi.m//jpi.m/S l  u
pred.pi.m//jv  pi.n/a
Spred.pi.m/1 / D pi.m/1
pi.m/1 D pi.m/
Spred.pi.m// D pi.m/
Prpi.m/
PrS pred.pi.m//
pred.pi.m//jv _ pred.pi.m//jpi.n/a
pred.pi.m// pi.n/a
pred.pi.m//jv
9qSpred.pi.m//  q D v
S pred.pi.m//  t D v
pi.m/  t D v
pi.m/j  pi.m/  t D s
pi.m/j  pi.m/ D pi.m/Sj
pi.m/Sj  t D s
S pred.pi.m/Sj / D pi.m/Sj
S pred.pi.m/Sj /  t D s
9qSpred.pi.m/Sj /  q D s
pred.pi.m/Sj /js
pred.pi.m/Sj /js
pred.pi.m/Sj /js

T13.40d
25,24 DE
19,26 DE
27,21 DE
with T13.42g
28,29 T6.67
T13.40f
T13.40b
T6.45
31,32,33 DE
34 T13.24d
35,30 DE
T13.42h
T13.40b
37,38 DE
T13.42e
40,39 DE
36,41 T13.25i
2 T13.42l
42,43 DS
44 def
A (g 459E)
46,39 DE
21,47 DE
T13.40a
48,49 DE
T13.42h
50,51 DE
52 9I
53 def
45,46-54 9E
23,24-55 9E

pred.pi.m/Sj /js

17,20-56 9E

pred.pi.m/Sj /js

16,18-57 9E

Sj  b ! pred.pi.m/Sj /js
j  b ! pred.pi.m/j /js ! Sj  b ! pred.pi.m/Sj /js
i  b ! pred.pi.m/i /js
b  b ! pred.pi.m/b /js
bb
pred.pi.m/b /js

65. m n ^ pred.pi.m/b /j.s  pi.n/a / ! pred.pi.m/b /js


11-58 !I
10-59 !I
9,60 IN
61 8E
T13.13l
62,63 !E
1-64 !I


E13.29. Show each of the results from T13.43.


T13.43.
T13.43.c. PA ` exp.S n; i / D xpred.pi.i /x /jS n ^ pred.pi.i /xC1 / S n
1. pred.pi.i /ex.n;i/ / S n
2. .8z < ex.n; i //pred.pi.i /z /jS n
3. ex.n; i / D ; _ ex.n; i / > ;
ex.n; i / D ;
4.
5.
6.
7.
8.
9.
10.
11.
12.

pi.i /; D 1
pi.i /ex.n;i/ D 1
S pred.pi.i /ex.n;i/ / D pi.i /ex.n;i/
S pred.pi.i /ex.n;i/ / D S ;
pred.pi.i /ex.n;i/ / D ;
;jS n
pred.pi.i /ex.n;i/ /jS n
?

13. ex.n; i / ;
14. ex.n; i / > ;
15. 9vS v C ; D ex.n; i /
16.
Sa C ; D ex.n; i /
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.

Sa C ; D Sa
ex.n; i / D Sa
a < Sa
a < ex.n; i /
pred.pi.i /a /jS n
pred.pi.i /Sa / S n
Sa D a C 1
pred.pi.i /aC1 / S n
pred.pi.i /a /jS n ^ pred.pi.i /aC1 / S n
pi.i / > 1
a < pi.i /a
S pred.pi.i /a / D pi.i /a
n pred.pi.i /a /
pred.pi.i /a /  n
S pred.pi.i /a /  S n
pi.i /a  S n
a < Sn
a  Sn
.9x  S n/pred.pi.i /x /jS n ^ pred.pi.i /xC1 / S n

36. .9x  S n/pred.pi.i /x /jS n ^ pred.pi.i /xC1 / S n


37. .x  S n/pred.pi.i /x /jS n ^ pred.pi.i /xC1 / S n D xpred.pi.i /x /jS n ^ pred.pi.i /xC1 / S n
38. exp.S n; i / D xpred.pi.i /x /jS n ^ pred.pi.i /xC1 / S n

T13.43.k. PA ` 9qpi.i /exp.S n;i / q D S n^8y.y i ! exp.q; y/ D exp.S n; y//


T13.19b
T13.19c
T13.13d,l
A (c I)
T13.40a
4,5 DE
T13.42h
6,7 DE
8 T6.38
T13.24a
9,10 DE
1,11 ?I
4-12 I
3,13 DS
14 def
A (g 159E)
T6.39
16,17 DE
T13.13g
19,18 DE
2,20 (8E)
1,18 DE
T6.45
22,23 DE
21,24 ^I
T13.42f
26 T13.40j
T13.42h
21 T13.24h
29 T13.13q
30 T13.13i
28,31 DE
27,32 T13.13c
33 T13.13l
25,34 (9I)
15,16-35 9E
36 T13.20b
37 def
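Computationally, exp(n, i) is the exponent of the i-th prime pi(i) in the factorization of n, with pi(0) = 2 (compare T13.42a) and exp(0, i) = 0 (compare T13.43b). A self-contained sketch with illustrative names:

```python
def pi(i):
    # the i-th prime, with pi(0) = 2
    count, n = -1, 1
    while count < i:
        n += 1
        if all(n % d for d in range(2, n)):
            count += 1
    return n

def exp(m, i):
    # the largest x with pi(i)**x dividing m; by convention exp(0, i) = 0
    if m == 0:
        return 0
    p, x = pi(i), 0
    while m % p == 0:
        m //= p
        x += 1
    return x
```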


1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.

pred.pi.i /exp.Sn;i/ /jS n


exp.S n; i / D a
9qS pred.pi.i /a /  q D S n
S pred.pi.i /a / D pi.i /a
9qpi.i /a  q D S n
pi.i /a  j D S n
j D;_j >;
j D;

T13.43d
abv
1 def
T13.42h
3,4 DE
A (g 59E)
T13.13f
A (c I)

pi.i /a  ; D ;
pi.i /a  j D ;
; D Sn
; Sn
?

T6.41
9,8 DE
6,10 DE
with T13.13e
11,12 ?I

j ;
j >;
ki

8-13 I
7,14 DS
A (g !I)

9v.S v C ; D j /
Sl C ; D j

15 def
A (g 179E)

Sl C ; D Sl
Sl D j
pi.i /a  S l D S n
exp.S n; k/ D b
pred.pi.k/b /jS n
pred.pi.k/b /jpi.i /a  S l
pred.pi.k/b /jS l
pred.pi.k/bC1 / S n
pred.pi.k/bC1 /jS l
pred.pi.k/bC1 /jpi.i /a  S l
pred.pi.k/bC1 /jS n
?
pred.pi.k/bC1 / S l
pred.pi.k/b /jS l ^ pred.pi.k/bC1 / S l
exp.S l; k/ D b
exp.j; k/ D b
exp.j; k/ D exp.S n; k/
exp.j; k/ D exp.S n; k/

k i ! exp.j; k/ D exp.S n; k/
37.
38.
8y.y i ! exp.j; y/ D exp.S n; y//
39.
pi.i /a  j D S n ^ 8y.y i ! exp.j; y/ D exp.S n; y//
40.
9qpi.i /a  q D S n ^ 8y.y i ! exp.q; y/ D exp.S n; y//
41. 9qpi.i /exp.Sn;i/  q D S n ^ 8y.y i ! exp.q; y/ D exp.S n; y//

E13.30. Show each of the results from T13.44.


T13.44.


T6.39
18,19 DE
6,20 DE
abv
T13.43d
21,23 DE
16,24 T13.42n
T13.43d
A (g I)
27 T13.24d
28,21 DE
26,29 ?I
27-30 I
25,31 ^I
32 T13.43f
33,20 DE
34 abv
17,18-35 9E
16-36 !I
37 8I
6,38 ^I
39 9I
5,6-40 9E



T13.44.h. PA ` exp.m; i / > ; ! len.m/ > i
1.

exp.m; i / > ;

A (g !I)

2.
3.
4.

exp.m; i / ;
mD;_m>;
mD;

1 T13.13f
T13.13f
A (g 3_E)

5.

len.m/ i

A (c E)

6.
7.
8.

exp.;; i / D ;
exp.m; i / D ;
?

T13.43b
6,4 DE
2,7 ?I

9.
10.

len.m/ > i

5-8 E

m>;

A (g 3_E)

11.

len.m/ i

A (c E)

12.
13.
14.

len.m/  i
9v.S v C ; D m/
Sa C ; D m

11 T13.13q
10 def
A (g 139E)

15.
16.
17.
18.
19.

Sa C ; D Sa
Sa D m
exp.Sa; i / ;
len.Sa/  i
i > Sa
i a
exp.Sa; i / D ;
?

20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.

T6.39
14,15 DE
2,16 DE
12,16 DE
A (g I)

i Sa
i  Sa
.8z  Sa/z  len.Sa/ ! exp.Sa; z/ D ;
i  len.Sa/ ! exp.Sa; i / D ;
exp.Sa; i / D ;
?
?

19 T13.13l,m
20 T13.43h
17,21 ?I
19-22 I
T13.13q
T13.44d
25,24 (8E)
26,18 !E
17,27 ?I
13,14-28 9E

len.m/ > i

11-29 E

len.m/ > i

3,4-9,10-30 _E

32. exp.m; i / > ; ! len.m/ > i
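On the reading the T13.44 results suggest, len(m) marks where the prime-exponent code of m ends: the exponent of the z-th prime in m is 0 for every z at or beyond len(m), while the exponent at position len(m) - 1 is at least 1 when len(m) is a successor (compare T13.44.j). Concretely, len(m) is 0 for m up to 1 and otherwise one more than the index (counting pi(0) = 2, pi(1) = 3, ...) of m's largest prime factor. A self-contained sketch — `length` is an illustrative name chosen to avoid shadowing Python's built-in `len`:

```python
def length(m):
    # len(m): 0 for m <= 1; otherwise one more than the index of the
    # largest prime factor of m among the primes 2, 3, 5, ...
    if m <= 1:
        return 0
    # find the largest prime factor of m
    p, rest, d = 0, m, 2
    while d <= rest:
        while rest % d == 0:
            p, rest = d, rest // d
        d += 1
    # index that prime: 2 has index 0, 3 has index 1, ...
    i, n = 0, 2
    while n != p:
        n += 1
        if all(n % q for q in range(2, n)):
            i += 1
    return i + 1
```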

T13.44.j. PA ` len.n/ D S l ! exp.n; l/  1


1-31 !I



1.

len.n/ D S l

A (g !I)

2.
3.

nD;_n>;
nD;

T13.13f
A (c I)

4.
5.
6.
7.
8.

len.;/ D ;
len.n/ D ;
; D Sl
; Sl
?

T13.44b
3,4 DE
1,5 DE
T6.37
6,7 ?I

9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.

n;
n>;
9v.S v C ; D n/
Sm C ; D n

3-8 I
2,9 DS
10 def
A (g 119E)

Sm C ; D Sm
Sm D n
len.S m/ D S l
.8z  S m/z  S l ! exp.S m; z/ D ;
.8w < S l/.8z  S m/z  w ! exp.S m; z/ D ;
exp.S m; l/ 1
exp.S m; l/ < 1
exp.S m; l/ D ;
a  Sm

T6.39
12,13 DE
1,14 DE
T13.44d
T13.44e
A (c E)
18 T13.13q
19 T8.16
A (g (8I))

22.

al

A (g !I)

23.
24.

l Da_l <a
l Da

22 T13.13l
A (g 23_E)

25.

exp.S m; a/ D ;

20,24 DE

26.

l <a

A (g 23_E)

27.
28.
29.

Sl  a
len.S m/  a
exp.S m; a/ D ;

26 T13.13k
27,15 DE
28 T13.44i

30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.
43.

exp.S m; a/ D ;
a  l ! exp.S m; a/ D ;
.8z  S m/z  l ! exp.S m; z/ D ;
a<l
a < Sl
.8z  S m/z  a ! exp.S m; z/ D ;
.8w < l/.8z  S m/z  w ! exp.S m; z/ D ;
len.S m/ D l
Sl D l
Sl l
?
exp.S m; l/  1
exp.n; l/  1
exp.n; l/  1

23,24-25,26-29 _E
22-30 !I
21-31 (8I)
A (g (8I))
33 T13.13m
17,34 (8E)
33-35 (8I)
16,36 T13.44c
15,37 DE
T13.13g,r
38,39 ?I
18-40 E
14,41 DE
11,12-42 9E

44. len.n/ D S l ! exp.n; l/  1


1-43 !I



E13.31. Show each of the results from T13.45.
T13.45.
T13.45.h. PA ` .8i  a/pred.pi.i // val.m; n; i /
1.

j ;

A (g (8I))

2.
3.
4.
5.
6.
7.
8.
9.

val.m; n; ;/ D 1
pi.j / > 1
pi.j / > ;
S pred.pi.j // D pi.j /
S pred.pi.j // > S;
pred.pi.j // > ;
pred.pi.j // 1
pred.pi.j // val.m; n; ;/

T13.45b
T13.42f
3 with T13.13e
4 T13.35b
3,5 DE
7 T13.13j
7 T13.24h
8,2 DE

10. .8i  ;/pred.pi.i // val.m; n; ;/


11.
.8i  a/pred.pi.i // val.m; n; a/

1-9 (8I)
A (g !I)

12.

j  Sa

A (g (8I))

13.
14.
15.
16.
17.

j >a
j a
pred.pi.j // val.m; n; a/
val.m; n; Sa/ D val.m; n; a/  pi.a/exc.m;n;a/
pred.pi.j //jval.m; n; Sa/

12 T13.13k
13 T13.13l
11,14 (8E)
T13.45b
A (c I)

18.
19.
20.
21.

pred.pi.j //jval.m; n; a/  pi.a/exc.m;n;a/


j a
pred.pi.j //jval.m; n; a/
?

22.
23.

pred.pi.j // val.m; n; Sa/

17-21 I

.8i  Sa/pred.pi.i // val.m; n; Sa/

12-22 (8I)

24. .8i  a/pred.pi.i // val.m; n; a/ ! .8i  Sa/pred.pi.i // val.m; n; Sa/


25. .8i  a/pred.pi.i // val.m; n; a/

T13.45.i. PA ` .8j < i /exp.val.m; n; i /; j / D exc.m; n; j /


1.

a<;

A (g (8I))

2.

exp.val.m; n; ;/; a/ exc.m; n; a/

A (c E)

3.
4.

a;
?

T6.47
1,3 ?I

5.

exp.val.m; n; ;/; a/ D exc.m; n; a/

6. .8j < ;/exp.val.m; n; ;/; j / D exc.m; n; j /

16,17 DE
13 T13.13r
18,19 T13.42n
15,20 ?I

2-4 E
1-5 (8I)


11-23 !I
10,24 IN


7.

.8j < i /exp.val.m; n; i /; j / D exc.m; n; j /

A (g !I)

a < Si

A (g (8I))

9.
10.
11.
12.

val.m; n; S i / D val.m; n; i /  pi.i /exc.m;n;i /


exc.m; n; a/ D e
a <i _a Di
a<i

T13.45b
abv
8 T13.13l
A (g 11_E)

13.
14.
15.
16.
17.
18.

exp.val.m; n; i /; a/ D exc.m; n; a/
pred.pi.a/exp.val.m;n;i /;a/ /jval.m; n; i /
pred.pi.a/e /jval.m; n; i /
pred.pi.a/e /jval.m; n; i /  pi.i /exc.m;n;i /
pred.pi.a/e /jval.m; n; S i /
pred.pi.a/eC1 /jval.m; n; S i /

7,12 (8E)
T13.43d*
10,13,14 DE
15 T13.24d
16,9 DE
A (c I)

8.

19.
20.
21.
22.
23.
24.

pred.pi.a/eC1 /jval.m; n; i /  pi.i /exc.m;n;i /


ai
pred.pi.a/eC1 /jval.m; n; i /
pred.pi.a/exp.val.m;n;i /;a/C1 / val.m; n; i /
pred.pi.a/eC1 / val.m; n; i /
?

18,9 DE
12 T13.13f
19,20 T13.42n
T13.43d*
10,13,22 DE
21,23 ?I

25.
26.
27.

pred.pi.a/eC1 / val.m; n; S i /
pred.pi.a/e /jval.m; n; S i / ^ pred.pi.a/eC1 / val.m; n; S i /
exp.val.m; n; S i /; a/ D exc.m; n; a/

18-24 I
17,25 ^I
26 T13.43f*

28.

aDi

A (g 11_E)

29.
30.
31.
32.
33.
34.
35.

val.m; n; S i / D val.m; n; a/  pi.a/e


pred.pi.a/e /jS pred.pi.a/e /
Spred.pi.a/e / D pi.a/e
pred.pi.a/e /jpi.a/e
pred.pi.a/e /jval.m; n; a/  pi.a/e
pred.pi.a/e /jval.m; n; S i /
pred.pi.a/eC1 /jval.m; n; S i /

9,10,28 DE
T13.24b
T13.42h
30,31 DE
32 T13.24d
33,29 DE
A (c I)

36.
37.
38.
39.
40.
41.
42.
43.
44.
45.
46.
47.
48.
49.
50.
51.
52.
53.
54.
55.
56.
57.
58.

pred.pi.a/eC1 /jval.m; n; a/  pi.a/e


9qS pred.pi.a/eC1 /  q D val.m; n; a/  pi.a/e
S pred.pi.a/eC1 / D pi.a/eC1
9qpi.a/eC1  q D val.m; n; a/  pi.a/e
pi.a/eC1  q D val.m; n; a/  pi.a/e
e C 1 D Se
pi.a/Se  q D val.m; n; a/  pi.a/e
pi.a/e  pi.a/  q D val.m; n; a/  pi.a/e
pi.a/e ;
pi.a/  q D val.m; n; a/
S pred.pi.a// D pi.a/
S pred.pi.a//  q D val.m; n; a/
9qS pred.pi.a//  q D val.m; n; a/
pred.pi.a//jval.m; n; a/
aa
pred.pi.a// val.m; n; a/
?
?

35,29 DE
36 def
T13.42h
37,38 DE
A (c 399E)
T6.45
A (g 40,41 DE
42 T13.40a
with T13.42g
43,44 T6.67
with T13.42h
45,46 DE
47 9I
48 def
T13.13l
50 T13.45h
49,51 ?I
39,40-52 9E

pred.pi.a/eC1 / val.m; n; S i /
pred.pi.a/e /jval.m; n; S i / ^ pred.pi.a/eC1 / val.m; n; S i /
exp.val.m; n; S i /; a/ D exc.m; n; a/
exp.val.m; n; S i /; a/ D exc.m; n; a/
.8j < S i /exp.val.m; n; S i /; j / D exc.m; n; j /

59. (∀j < i) exp(val(m, n, i), j) = exc(m, n, j) → (∀j < Si) exp(val(m, n, Si), j) = exc(m, n, j)


60. (∀j < i) exp(val(m, n, i), j) = exc(m, n, j)


35-53 I
34,54 ^I
55 T13.43f*
11,12-27,28-56 _E
8-57 (8I)
7-58 !I
6,59 IN


*In light of T13.45g and T13.35b, for application to T13.43d val.m; n; i /


must be S pred.val.m; n; i // and similarly for val.m; n; S i /.
T13.45.j. PA ⊢ (∀i < len(m)) exp(val(m, n, l), i) = exp(m, i) ∧ (∀i < len(n)) exp(val(m, n, l), i + len(m)) = exp(n, i)
1. l D len.m/ C len.n/
2. .8j < l/exp.val.m; n; l/; j / D exc.m; n; j /
3. j < len.m/ ! exc.m; n; j / D exp.m; j /
j < len.m/
4.
5.
6.
7.
8.
9.

exc.m; n; j / D exp.m; j /
len.m/  len.m/ C len.n/
j <l
exp.val.m; n; l/; j / D exc.m; n; j /
exp.val.m; n; l/; j / D exp.m; j /

10. .8i < len.m//exp.val.m; n; l/; i / D exp.m; i /


:
11. j C len.m/  len.m/ ! exc.m; n; j C len.m// D exp.n; .j C len.m// len.m//
12.
j < len.n/
13.
14.
15.
16.
17.
18.
19.
20.
21.

j C len.m/  len.m/
:
exc.m; n; j C len.m// D exp.n; .j C len.m// len.m//
:
j C len.m/ D len.m/ C .j C len.m// len.m/
:
j D .j C len.m// len.m/
exc.m; n; j C len.m// D exp.n; j /
j C len.m/ < len.n/ C len.m/
j C len.m/ < l
exp.val.m; n; l/; j C len.m// D exc.m; n; j C len.m//
exp.val.m; n; l/; j C len.m// D exp.n; j /

22. .8i < len.n//exp.val.m; n; l/; i C len.m// D exp.n; i /


23. (∀i < len(m)) exp(val(m, n, l), i) = exp(m, i) ∧ (∀i < len(n)) exp(val(m, n, l), i + len(m)) = exp(n, i)

T13.45.k. PA ⊢ i ≤ l → pi(l)^((m+n)·i) ≥ val(m, n, i)

Exercise 13.31 T13.45.k

abv
T13.45i
T13.45e
A (g (8I))
3,4 !E
T13.13t
4,6 T13.13c
2,7 (8E)
5,8 DE
4-9 (8I)
T13.45f
A (g (8I))
T13.13t
11,12 !E
13 T13.23a
15 T6.66
14,16 DE
12 T13.13v
1,18 DE
2,19 (8E)
20,17 DE
12-21 (8I)
10,22 ^I


1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.

l D len.m/ C len.n/
11
pi.l/mCn ; D 1
val.m; n; ;/ D 1
pi.l/mCn ;  val.m; n; ;/
; l _ pi.l/mCn ;  val.m; n; ;/
;  l ! pi.l/mCn ;  val.m; n; ;/
i  l ! pi.l/mCn i  val.m; n; i /

abv
T8.14
T13.40a
T13.45b
2,3,4 DE
5 _I
6 Impl
A (g !I)

Si  l

A (g !I)

i <l
i l
pi.l/mCn i  val.m; n; i /
pi.l/mCn Si D pi.l/mCn i  pi.l/mCn
val.m; n; S i / D val.m; n; i /  pi.i /exc.m;n;i/
pi.i / < pi.l/
i < len.m/ _ i  len.m/
i < len.m/

9 T13.13k
10 T13.13l
8,11 !E
T13.40a
T13.45b
10 T13.42i
T14.13p
A (g 16_E)

18.
19.
20.
21.
22.

exc.m; n; i / D exp.m; i /
exp.m; i /  m
exc.m; n; i /  m
mmCn
exc.m; n; i /  m C n

17 T13.45e
T13.43g
18,19 DE
T13.13t
20,21 T13.13a

23.

i  len.m/

A (g 16_E)

24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.

exc.m; n; i / D exp.n; i
:
exp.n; i len.m//  n
exc.m; n; i /  n
nmCn
exc.m; n; i /  m C n

len.m//

exc.m; n; i /  m C n
pi.i /exc.m;n;i/  pi.l/exc.m;n;i/
pi.l/ > ;
pi.l/exc.m;n;i/  pi.l/mCn
pi.i /exc.m;n;i/  pi.l/mCn
val.m; n; i /  pi.i /exc.m;n;i/  val.m; n; i /  pi.l/mCn
val.m; n; i /  pi.l/mCn  pi.l/mCn i  pi.l/mCn
val.m; n; i /  pi.i /exc.m;n;i/  pi.l/mCn i  pi.l/mCn
pi.l/mCn Si  val.m; n; S i /
S i  l ! pi.l/mCn Si  val.m; n; S i /

39. [i ≤ l → pi(l)^((m+n)·i) ≥ val(m, n, i)] → [Si ≤ l → pi(l)^((m+n)·Si) ≥ val(m, n, Si)]


40. i ≤ l → pi(l)^((m+n)·i) ≥ val(m, n, i)

T13.45.o. PA ⊢ len(m ★ n) ≥ l

Exercise 13.31 T13.45.o

23 T13.45f
T13.43g
24,25 DE
T13.13t
26,27 T13.13a
16,17-22,23-28 _E
15 T13.40e
with T13.42f
29,31 T13.40h
30,32 T13.13a
33 T13.13z
12 T13.13z
34,35 T13.13a
13,14,36 DE
9-37 !I
8-38 !I
7,39 IN



1. l D len.m/ C len.n/
2. len.n/ D ; _ len.n/ > ;
len.n/ D ;
3.
4.
5.

len.m/ D ; _ len.m/ > ;


len.m/ D ;

abv
T13.13f
A (g 2_E)
T13.13f
A (g 4 _E)

;C;D;
len.m  n/  ;
len.m  n/  ; C ;
len.m  n/  l

T6.39
T13.13d
6,7 DE
8,5,3 DE

10.

len.m/ > ;

A (g 4_E)

11.
12.

9vS v C ; D len.m/
Sa C ; D len.m/

10 def
A (g 119E)

6.
7.
8.
9.

13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.

Sa C ; D Sa
Sa D len.m/
exp.m; a/ > ;
a < len.m/
.8i < len.m//exp.m  n; i / D exp.m; i /
exp.m  n; a/ D exp.m; a/
exp.m  n; a/ > ;
len.m  n/ > a
len.m  n/  Sa
len.m  n/  len.m/
len.m/ C ; D len.m/
l D len.m/
len.m  n/  l
len.m  n/  l

T6.39
12,13 DE
14 with T13.44j
14 T13.13h
T13.45m
17,16 (8E)
15,18 DE
19 T13.44h
21 T13.13k
21,14 DE
T6.39
23,3 DE
22,24 DE
11,12-25 9E

27.

len.m  n/  l

4,5-9,10-26 _E

28.

len.n/ > ;

A (g 2_E)

29.
30.

9vS v C ; D len.n/
Sa C ; D len.n/

28 def
A (g 299E)

31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.

Sa C ; D Sa
Sa D len.n/
exp.n; a/ > ;
a < len.n/
.8i < len.n//exp.m  n; i C len.m// D exp.n; i /
exp.m  n; a C len.m// D exp.n; a/
exp.m  n; a C len.m// > ;
len.m  n/ > a C len.m/
len.m  n/  S.a C len.m//
S.a C len.m// D Sa C len.m/
S.a C len.m// D l
len.m  n/  l

43.
len.m  n/  l
44. len(m ★ n) ≥ l

T6.39
30,31 DE
32 with T13.44j
32 T13.13h
T13.45m
35,34 (8E)
33,36 DE
37 T13.44h
38 T13.13k
T6.51
40,32 DE
39,41 DE
29,30-42 9E
2,3-27,28-43 _E


T13.45.p. PA ⊢ len(m ★ n) = l
1. l D len.m/ C len.n/
2. len.m  n/  l
3.
len.m  n/ l
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.

abv
T13.45o
A (c E)

len.m  n/ > l
l ;
len.m  n/ > ;
mn>1
1>;
mn>;
9vS v C ; D m  n
Sp C ; D m  n
Sp C ; D Sp
Sp D m  n
len.Sp/ > l
9vS v C l D len.Sp/
Sa C l D len.Sp/
S.a C l/ D Sa C l
S.a C l/ D len.Sp/
exp.Sp; a C l/  1
9qpi.a C l/exp.Sp;aCl/  q D Sp ^ 8y.y a C l ! exp.q; y/ D exp.Sp; y//
pi.a C l/exp.Sp;aCl/  j D Sp ^ 8y.y a C l ! exp.j; y/ D exp.Sp; y//
pi.a C l/exp.Sp;aCl/  j D Sp
pi.a C l/exp.Sp;aCl/  pi.a C l/
pi.a C l/ > 1
pi.a C l/exp.Sp;aCl/ > 1
Sp > ;
pi.a C l/exp.Sp;aCl/  j > ;
j >;
pi.a C l/exp.Sp;aCl/  j > j
j < Sp

Exercise 13.31 T13.45.p

3 T13.13q
T13.13d
4,5 T13.13c
6 T13.44g
T8.14
7,8 T13.13b
9 def
A (c 109E)
T6.39
11,12 DE
4,13 DE
14 def
A (c 159E)
T6.51
16,17 DE
18 T13.44j
T13.43k
A (c 209E)
21 ^E
19 with T13.40i
T13.42f
23,24 T13.13c
T13.13e
26,22 DE
27 T13.13aa
25,28 T13.13ab
22,29 DE


31.
32.

8y.y a C l ! exp.j; y/ D exp.Sp; y//


b < len.m/

33.
34.
35.
36.
37.
38.
39.
40.
41.

.8i < len.m//exp.m  n; i / D exp.m; i /


exp.m  n; b/ D exp.m; b/
exp.Sp; b/ D exp.m; b/
len.m/  a C l
b <aCl
b aCl
b a C l ! exp.j; b/ D exp.Sp; b/
exp.j; b/ D exp.Sp; b/
exp.j; b/ D exp.m; b/
.8i < len.m//exp.j; i / D exp.m; i /
b < len.n/

42.
43.

.8i < len.n//exp.m  n; i C len.m// D exp.n; i /


exp.m  n; b C len.m// D exp.n; b/
exp.Sp; b C len.m// D exp.n; b/
len.m/  a C len.m/
b C len.m/ < a C l
b C len.m/ a C l
b C len.m/ a C l ! exp.j; b C len.m// D exp.Sp; b C len.m//
exp.j; b C len.m// D exp.Sp; b C len.m//
exp.j; b C len.m// D exp.n; b/

44.
45.
46.
47.
48.
49.
50.
51.
52.

.8i < len.n//exp.j; i C len.m// D exp.n; i /


.8i < len.m//exp.j; i / D exp.m; i / ^ .8i < len.n//exp.j; i C len.m// D exp.n; i /
.8w < m  n/.8i < len.m//exp.w; i / D exp.m; i / ^
.8i < len.n//exp.w; i C len.m// D exp.n; i /
.8w < Sp/.8i < len.m//exp.w; i / D exp.m; i / ^
.8i < len.n//exp.w; i C len.m// D exp.n; i /
.8i < len.m//exp.j; i / D exp.m; i / ^ .8i < len.n//exp.j; i C len.m// D exp.n; i /
?

53.
54.
55.
56.
57.
58.

59.
60.
61.

21 ^E
A (g (8I))
T13.45m
33,32 (8E)
13,34 DE
T13.13t
32,36 T13.13c
37 T13.13r
31 8E
39,38 !E
35,40 DE
32-41 (8I)
A (g (8I))
T13.45m
44,43 (8E)
45,13 DE
T13.13t
43,47 T13.13x
48 T13.13r
31 8E
50,49 !E
46,51 DE
43-52 (8I)
42,53 ^I
T13.45n
55,13 DE
56,30 (8E)
54,57 ?I
20,21-58 9E

15,16-59 9E

10,11-60 9E

62. len(m ★ n) ≤ l
63. len(m ★ n) = l

3-61 E
2,62 T13.13s
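T13.45.m–p concern the prime-power coding of sequences: exp(n, i) is the exponent of the i-th prime in n, len(n) is one past the last nonzero exponent, and m ★ n concatenates two coded sequences, so that len(m ★ n) = len(m) + len(n). Outside PA the coding is directly computable; the following sketch (illustrative helper names, not Roy's defined function symbols) checks the length fact on a small example.

```python
# Numeric sketch of the prime-power coding behind T13.45. The names
# nth_prime, exp, length, code, star are illustrative stand-ins for
# pi, exp, len, and the concatenation m * n of the text.

def nth_prime(i):
    # 0-indexed: nth_prime(0) = 2, nth_prime(1) = 3, ...
    n, count = 1, -1
    while count < i:
        n += 1
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return n

def exp(n, i):
    # exponent of the i-th prime in the factorization of n
    p, e = nth_prime(i), 0
    while n % p == 0:
        n, e = n // p, e + 1
    return e

def length(n):
    # len(n): one past the index of the largest prime dividing n
    l, i = 0, 0
    while n > 1:
        p = nth_prime(i)
        while n % p == 0:
            n //= p
            l = i + 1
        i += 1
    return l

def code(seq):
    # code a sequence of positive entries as a product of prime powers
    n = 1
    for i, a in enumerate(seq):
        n *= nth_prime(i) ** a
    return n

def star(m, n):
    # concatenation: copy n's entries in above m's, shifted by length(m)
    r = m
    for i in range(length(n)):
        r *= nth_prime(i + length(m)) ** exp(n, i)
    return r
```

With m = code([3, 1, 2]) and n = code([2, 5]), star(m, n) has length 3 + 2 = 5, and its entries are 3, 1, 2, 2, 5, matching the pattern of T13.45.m, n, and p.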

E13.32. As a start to a complete demonstration of T13.48, provide a demonstration through part (C) that does not skip any steps.
T13.48.
(A)

Exercise 13.32 T13.48


1.

Prvt.cnd.pP q; pQq//

A (g !I)

2.

Prvt.pP q/

A (g !I)

3.
4.
5.
6.
7.
8.

Mp.cnd.pP q; pQq/; pP q; pQq/


Mp.cnd.pP q; pQq/; pP q; pQq/ _ .cnd.pP q; pQq/ D pP q ^ Gen.pP q; pQq//
Icon.cnd.pP q; pQq/; pP q; pQq/
9vPrft.v; cnd.pP q; pQq//
9vPrft.v; pP q/
Prft.j; cnd.pP q; pQq//

T13.39a 8E
3 _I
4 T13.39b
1 abv
2 abv
A (g 69E)

Prft.k; pP q/

A (g 79E)

l Ddef .j  k/  2pQq
:
exp.j; len.j / 1/ D cnd.pP q; pQq/
:
exp.k; len.k/ 1/ D pP q
len.j  k/ D len.j / C len.k/
len.2pQq / D 1

def
8 T13.39c
9 T13.39c
T13.45p
10,14 T13.45m
T13.13e

17.

.8i < 1/exp.l; i C len.j  k// D exp.2pQq ; i /


;<1
exp.l; len.j  k// D exp.2pQq ; ;/

15,16 (8E)

18.
19.
20.
21.

exp.2pQq ; ;/ D pQq
exp.l; len.j  k// D pQq
exp.l; len.j / C len.k// D pQq
:
:
Iconexp.j; len.j / 1/; exp.k; len.k/ 1/; exp.l; len.j / C len.k//

cap
17,18 DE
19,13 DE
5,11,12,20 DE

9.
10.
11.
12.
13.
14.
15.
16.

(B)


cap


22.
23.
24.
25.
26.
27.
28.
29.
30.

.8i < len.j  k//exp.l; i / D exp.j  k; i /


.8i < len.j //exp.j  k; i / D exp.j; i /
a < len.j /
exp.j  k; a/ D exp.j; a/
len.j /  len.j / C len.k/
a < len.j / C len.k/
a < len.j  k/
exp.l; a/ D exp.j  k; a/
exp.l; a/ D exp.j; a/

31.
32.
33.

.8i < len.j //exp.l; i / D exp.j; i /


.8i < len.k//exp.j  k; i C len.j // D exp.k; i /
a < len.k/

34.
35.
36.
37.
38.

exp.j  k; a C len.j // D exp.k; a/


len.j / C a < len.j / C len.k/
len.j / C a < len.j  k/
exp.l; len.j / C a/ D exp.j  k; len.j / C a/
exp.l; len.j / C a/ D exp.k; a/

39.
40.
41.
42.
43.
44.
45.
46.
47.
48.

.8i < len.k//exp.l; len.j / C i / D exp.k; i /


cnd.pP q; pQq/ > ;
:
exp.j; len.j / 1/ > ;
:
len.j / 1 < len.j /
:
:
exp.l; len.j / 1/ D exp.j; len.j / 1/
pP q > ;
:
exp.k; len.k/ 1/ > ;
:
len.k/ 1 < len.k/
:
:
exp.l; len.j / C len.k/ 1/ D exp.k; len.k/ 1/
:
:
Iconexp.l; len.j / 1/; exp.l; len.j / C len.k/ 1/; exp.l; len.j / C len.k//

(C1)

10 T13.45m
T13.45m
A (g (8I))
23,24 (8E)
T13.13t
24,26 T13.13c
27,13 DE
22,28 (8E)
29,25 DE
24-30 (8I)
T13.45mT13.45m
A (g (8I))
32,33 (8E)
33 T13.13v
35,13 DE
22,36 (8E)
37,34 DE
33-38 (8I)
cap
11,40 DE
41 T13.44h
31,42 (8E)
cap
44,12 DE
45 T13.44h
39,46 (8E)
21,43,47 DE



49.
50.
51.
52.
53.


.8i < len.j //Axiomt.exp.j; i // _ .9m < i /.9n < i /Icon.exp.j; m/; exp.j; n/; exp.j; i //
a < len.j /
Axiomt.exp.j; a// _ .9m < a/.9n < a/Icon.exp.j; m/; exp.j; n/; exp.j; a//
exp.l; a/ D exp.j; a/
Axiomt.exp.j; a//

T13.39c
A (g (8I))
49,50 (8E)
31,50 (8E)
A (g 51_E)

54.
55.

Axiomt.exp.l; a//
Axiomt.exp.l; a// _ .9m < a/.9n < a/Icon.exp.l; m/; exp.l; n/; exp.l; a//

53,52 DE
54 _I

56.

.9m < a/.9n < a/Icon.exp.j; m/; exp.j; n/; exp.j; a//

A (g 51_E)

57.
58.
59.

Icon.exp.j; m0 /; exp.j; n0 /; exp.j; a//

60.
61.
62.
63.
64.
65.

m0 < len.j /
n0 < len.j /
exp.l; m0 / D exp.j; m0 /
exp.l; n0 / D exp.j; n0 /
Icon.exp.l; m0 /; exp.l; n0 /; exp.l; a//
.9m < a/.9n < a/Icon.exp.l; m/; exp.l; n/; exp.l; a//

66.
67.
68.
69.

A (g 569E)

m0 < a
n0 < a

.9m < a/.9n < a/Icon.exp.l; m/; exp.l; n/; exp.l; a//
Axiomt.exp.l; a// _ .9m < a/.9n < a/Icon.exp.l; m/; exp.l; n/; exp.l; a//
Axiomt.exp.l; a// _ .9m < a/.9n < a/Icon.exp.l; m/; exp.l; n/; exp.l; a//
.8i < len.j //Axiom.exp.l; i // _ .9m < i /.9n < i /Icon.exp.l; m/; exp.l; n/; exp.l; i //

50,58 T13.13b
50,59 T13.13b
31,60 (8E)
31,61 (8E)
57,62,63,52 DE
64,58,59 (9I)
56,57-65 (9E)
66 _I
51,53-55,56-67 _E
50-68 (8I)

(C2) The argument is similar for,

(∀i < len(k))[Axiom(exp(l, len(j) + i)) ∨ (∃m < i)(∃n < i)Icon(exp(l, len(j) + m), exp(l, len(j) + n), exp(l, len(j) + i))]

(C3) Here is a schematic argument (or theorem) you can apply.



1. (∀i < s)[P(t + i) ∨ (∃m < i)(∃n < i)Q(t + m, t + n, t + i)]

prem

2.

t a^a <tCs

A (g !I)

3.
4.
5.
6.

ta
a <tCs
9v.v C t D a/
l Ct Da

2 ^E
2 ^E
3 def
A (g 59E)

7.
8.
9.
10.

tCl <tCs
l <s
P .t C l/ _ .9m < l/.9n < l/Q.t C m; t C n; t C l/
P .t C l/

4,6 DE
7 T13.13v
1,8 (8E)
A (g 9_E)

11.
12.

P .a/
P .a/ _ .9m < a/.9n < a/Q.m; n; a/

10,6 DE
11 _I

13.

.9m < l/.9n < l/Q.t C m; t C n; t C l/

A (g 9_E)

14.
15.
16.

Q.t C m0 ; t C n0 ; t C l/
m0 < l
n0 < l

A (g 13(9E))

17.
18.
19.
20.
21.
22.

t C m0 < t C l
t C m0 < a
t C n0 < t C l
t C n0 < a
.9m < a/.9n < a/Q.m; n; t C l/
.9m < a/.9n < a/Q.m; n; a/

15 T13.13v
17,6 DE
16 T13.13v
19,6 DE
14,18,20 (9I)
21,6 DE

23.
24.

.9m < a/.9n < a/Q.m; n; a/


P .a/ _ .9m < a/.9n < a/Q.m; n; a/

13,14-22 (9E)
23 _I

25.
26.

P .a/ _ .9m < a/.9n < a/Q.m; n; a/


P .a/ _ .9m < a/.9n < a/Q.m; n; a/

27. (t ≤ a ∧ a < t + s) → [P(a) ∨ (∃m < a)(∃n < a)Q(m, n, a)]

28. ∀i[(t ≤ i ∧ i < t + s) → (P(i) ∨ (∃m < i)(∃n < i)Q(m, n, i))]
29. (∀i: t ≤ i < t + s)[P(i) ∨ (∃m < i)(∃n < i)Q(m, n, i)]

9,10-12,13-24 _E
5,6-25 9E
2-26 !I
278I
28 abv
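Behind parts (A) and (B) is a simple fact about linear proofs: if j numbers a proof of cnd(⌜P⌝, ⌜Q⌝) and k numbers a proof of ⌜P⌝, then the concatenation of the two proofs followed by ⌜Q⌝ is itself a proof, its last line following from two earlier lines by the immediate-consequence relation. Stripped of the arithmetization, the check looks like this (a toy list representation, for illustration only; not Roy's coded Prft and Icon):

```python
def is_proof(lines, axioms):
    # Toy linear proof checker: each line must be an axiom or follow
    # from two earlier lines by modus ponens, where a conditional is
    # represented as the tuple ("->", antecedent, consequent).
    for i, line in enumerate(lines):
        ok = line in axioms or any(
            lines[m] == ("->", lines[n], line)
            for m in range(i) for n in range(i)
        )
        if not ok:
            return False
    return True

# If j proves P -> Q and k proves P, then j + k + [Q] is a proof of Q:
# the plain-syntax shadow of the coded (j * k) . 2^|Q| in T13.48(A).
P, Q = "P", "Q"
axioms = {P, ("->", P, Q)}
j = [("->", P, Q)]  # a one-line proof of P -> Q
k = [P]             # a one-line proof of P
assert is_proof(j + k + [Q], axioms)
```

The appended last line is justified exactly as in the derivation above: it follows by modus ponens from one line of j and one line of k.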

E13.33. Provide a demonstration for T13.49.

T13.49. Suppose variables are ordered as in the hint to T13.49.

Basis: PA ⊢ sub0:0(⌜P⌝, x⃗) = ⌜P⌝ = sub0:0(⌜P⌝, y⃗).

Assp: For any i:j, PA ⊢ subi:0(⌜P⌝, x⃗) = subi:j(⌜P⌝, y⃗).

Show: For k:l = S(i:j), PA ⊢ subk:0(⌜P⌝, x⃗) = subk:l(⌜P⌝, y⃗). k:l = i:Sj or k:l = Si:0.

(i) k:l = i:Sj. PA ⊢ subi:Sj(⌜P⌝, y⃗) = formsub(subi:j(⌜P⌝, y⃗), gvar(i:Sj), num(xi:Sj)) = (by T13.47a) subi:j(⌜P⌝, y⃗) = (by assp) subi:0(⌜P⌝, x⃗). So PA ⊢ subk:0(⌜P⌝, x⃗) = subk:l(⌜P⌝, y⃗).
Exercise 13.33 T13.49


(ii) k:l = Si:0. PA ⊢ subS(i:j)(⌜P⌝, y⃗) = formsub(subi:j(⌜P⌝, y⃗), gvar(Si:0), num(xSi:0)) = (by assp) formsub(subi:0(⌜P⌝, x⃗), gvar(Si:0), num(xSi:0)) = (by def) subSi:0(⌜P⌝, x⃗). So PA ⊢ subk:0(⌜P⌝, x⃗) = subk:l(⌜P⌝, y⃗).

Indct: For any n:m, subn:0(⌜P⌝, x⃗) = subn:m(⌜P⌝, y⃗).

And sub(⌜P⌝, x⃗) = sub(⌜P⌝, y⃗).
E13.34. Provide a demonstration for T13.50.

T13.50.

Basis: PA ⊢ sub1(⌜P⌝, x0) = sub1(⌜P⌝, x0).

Assp: For any i, PA ⊢ subi+1(⌜P⌝, x0, x1 … xi) = subi+1(⌜P⌝, x1 … xi, x0).

Show: PA ⊢ subi+2(⌜P⌝, x0, x1 … xi+1) = subi+2(⌜P⌝, x1 … xi+1, x0).

PA ⊢ subi+2(⌜P⌝, x1 … xi+1, x0) = formsub(formsub(subi(⌜P⌝, x1 … xi), gvar(i+1), num(xi+1)), gvar(0), num(x0)) = (by T13.47b) formsub(formsub(subi(⌜P⌝, x1 … xi), gvar(0), num(x0)), gvar(i+1), num(xi+1)) = (by def) formsub(subi+1(⌜P⌝, x1 … xi, x0), gvar(i+1), num(xi+1)) = (by assp) formsub(subi+1(⌜P⌝, x0, x1 … xi), gvar(i+1), num(xi+1)) = (by def) subi+2(⌜P⌝, x0, x1 … xi+1).

Indct: For any n, PA ⊢ subn+1(⌜P⌝, x0, x1 … xn) = subn+1(⌜P⌝, x1 … xn, x0).

So PA ⊢ sub(⌜P⌝, x0, x⃗) = sub(⌜P⌝, x⃗, x0).
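T13.50 turns on T13.47b: substitutions of numerals for distinct variables commute. Stripped of the Gödel numbering the point is elementary; the formsub below is a naive textual stand-in for the arithmetized function, for illustration only.

```python
def formsub(formula, var, numeral):
    # Naive textual substitution of a numeral for a whole-token
    # variable; an illustrative stand-in for the arithmetized formsub.
    return " ".join(numeral if tok == var else tok
                    for tok in formula.split())

f = "x0 + x1 = x1 + x0"
one_way = formsub(formsub(f, "x0", "S0"), "x1", "SS0")
other_way = formsub(formsub(f, "x1", "SS0"), "x0", "S0")
assert one_way == other_way == "S0 + SS0 = SS0 + S0"
```

The two orders agree because numerals contain no variables, so neither substitution can disturb the other's target; that is the kind of interference the arithmetized argument has to rule out.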
E13.37. Fill in the parts of T13.56 that are left, similarly, to show that PA ⊢ P ↔ P★.

T13.56. For any Δ0 formula P there is a ★ formula P★ such that PA ⊢ P ↔ P★.

P is (∀x < t)B. Set P★ = ∃z[(t = z)★ ∧ (∀x ≤ z)((x ≠ z)★ → B★)].

Exercise 13.37 T13.56


ANSWERS FOR CHAPTER 14


1. t D z $ .t D z/?
2. x z $ .x z/?
3. B $ B ?

T13.54
() case
by assp

4.

P?

A (g $I)

5.
6.
7.

9z.t D z/? ^ .8x  z/..x z/? ! B ? .x//


9zt D z ^ .8x  z/.x z ! B.x//
t D a ^ .8x  a/.x a ! B.x//

4 abv
5 with 1,2,3
A (g 69E)

8.
9.
10.
11.
12.
13.
14.
15.
16.

l <t

A (g (8I)

tDa
.8x  a/.x a ! B.x//
.8x  t/.x t ! B.x//
l t
l t ! B.l/
l t
B.l/

7 ^E
7 ^E
10,9 DE
8 T13.13l
11,12 (8E)
8 T13.13r
13,14 !E

.8x < t /B.x/

8-15 (8I)

17.
18.

.8x < t/B.x/


P

6,7-16 9E
abv

19.

P

A (g $I)

20.
21.
22.

.8x < t/B.x/


tDt
at

19 abv
DI
A (g (8I))

23.
24.

a <t_a Dt
a<t

22 T13.13l
A (g 23_E)

25.
26.
27.

B.a/
a D t _ B.a/
a t ! B.a/

28.

aDt

A (g 23_E)

29.
30.

a D t _ B.a/
a t ! B.a/

28 _I
29 Impl

31.
32.
33.
34.
35.
36.

20,24 (8E)
25 _I
26 Impl

a t ! B.a/

23,24-27,28-30 _E

.8x  t/.x t ! B.x//


t D t ^ .8x  t/.x t ! B.x//
9zt D z ^ .8x  z/.x z ! B.x//
9z.t D z/? ^ .8x  z/..x z/? ! B ? .x//
P?

37. P ? $ P 

22-31 (8I)
21,32 ^I
33 9I
34 with 1,2,3
35 abv
4-18,19-36 $I

Chapter Fourteen
E14.12. (ii) For any (primitive) recursive function f(x) there is a canonical formula F(x, y) to capture it in theories extending Q. Thus the enumeration eprf(n) of primitive recursive functions extends to an enumeration eprc(n) whose value is the number of the formula to capture eprf(n). Given this enumeration, extend the construction from T14.10 to find the (recursive) function that is (Turing) computable but not primitive recursive.

From T13.10 we set,

numf(m, n) =def formsub(formsub(⌜F(x, y)⌝, ⌜x⌝, num(m)), ⌜y⌝, num(n))

To generalize for arbitrary formulas numbered eprc(i) take,

numfn(i, m, n) =def formsub(formsub(eprc(i), ⌜x⌝, num(m)), ⌜y⌝, num(n))

Then,

valpr(i, m) =def exp(μz[len(z) = 2 ∧ PRFT(exp(z, 0), numfn(i, m, exp(z, 1)))], 1)

So the function that is recursive but not primitive recursive is,

fd(i) = valpr(i, i) + 1

Exercise 14.12
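The effect of the diagonal construction can be seen with any effective enumeration of total one-place functions. The enum below is a toy stand-in for eprf (purely illustrative; the point of the exercise is that valpr makes the diagonal computable over the actual enumeration): fd(i) = enum(i)(i) + 1 disagrees with the k-th enumerated function at input k, so fd cannot occur anywhere in the enumeration.

```python
def enum(i):
    # Toy effective enumeration of total functions, an illustrative
    # stand-in for eprf: enum(i) is the i-th function in the list.
    return lambda x: (x + 1) * (i + 1)

def fd(i):
    # The diagonal function: computed from the enumeration, yet it
    # differs from enum(k) at input k for every k.
    return enum(i)(i) + 1

assert all(fd(k) != enum(k)(k) for k in range(100))
```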

E14.14. Assuming functions code(n) and decode(d), use the outline in the text to complete the demonstration that any K-U computable function f(n) is recursive.

Set INSNUM(n) =def (∃v ≤ n)(n = 3 + 8·v); SYMNUM(n) =def (∃v ≤ n)(n = 5 + 8·v); CELLNUM(n) =def (∃v ≤ n)(n = 7 + 8·v); RELNUM(n) =def (∃u ≤ n)(∃v ≤ n)(n = 9 + 8·(3^u · 5^v)). Then EDGE(e) =def len(e) = 4 ∧ CELLNUM(exp(e, 0)) ∧ RELNUM(exp(e, 1)) ∧ SYMNUM(exp(e, 2)) ∧ CELLNUM(exp(e, 3)); and DATA(d) =def (∀i < len(d))EDGE(exp(d, i)). Then where both CONNECTED(d, m, n) and DATASP(d) are as in the text, SUBSP(d, s) =def DATASP(s) ∧ (∀i < len(s))(∃j < len(d))exp(s, i) = exp(d, j); minlnks(d, n) =def (μy < len(d))(∃x ≤ d)[SUBSP(d, x) ∧ CONNECTED(x, ⟨0⟩, n) ∧ y = len(x)]; depth(d) =def (μy < len(d))(∀i < len(d))(y ≥ minlnks(d, exp(exp(d, i), 3))); nspace(d, n) =def (μy < d)(∀x ≤ d)[(SUBSP(d, x) ∧ depth(x) = n) → (∀j < len(x))(∃k < len(y))(exp(x, j) = exp(y, k))]; and maxcell(d) =def μy(∀j < len(d))(y ≥ exp(exp(d, j), 3)).

Then PAIR(p) =def (∃i ≤ p)(∃j ≤ p)(p = pi(0)^i · pi(1)^j); REL(r) =def (∀i < len(r))PAIR(exp(r, i)). Then with MAP(m) as in the main text, DOM(m, d) =def (∃i < len(d))(exp(exp(m, i), 0) = ⟨0⟩ ∧ exp(exp(m, i), 1) = ⟨0⟩) ∧ (∀i < len(d))(∃j < len(m))exp(exp(d, i), 3) = exp(exp(m, j), 0). Then,

mapv(m, x) =def μy{[(∃i < len(m))(exp(exp(m, i), 0) = x ∧ y = exp(exp(m, i), 1))] ∨ [∼(∃i < len(m))exp(exp(m, i), 0) = x ∧ y = 0]}

And with proj(m, a) as in the main text, MATCH(m, a, b) =def (∀i < len(proj(m, a)))(∃j < len(b))exp(proj(m, a), i) = exp(b, j) ∧ (∀j < len(b))(∃i < len(proj(m, a)))exp(b, j) = exp(proj(m, a), i). And ISO(a, b) =def (∃m ≤ B)[DOM(m, a) ∧ MATCH(m, a, b)];
 maxcell.a/ maxcell.b/ len.a/
 1
and set B D 0
, where the length of the map is the same as
len.a/

Exercise 14.14


the length of a, we take the largest prime in the map to a power as great as that of
any member of the map and multiply it together as many times as there are pairs in
the map.
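ISO works by a bounded search over coded maps: some map under B takes the cells of a onto the cells of b so that the relabeled edges of a are exactly the edges of b. With the arithmetization stripped away, the same idea is a finite search over assignments of cells to cells; the tuple-and-dict representation below is an illustrative stand-in for the coded objects and the bound B.

```python
from itertools import product

def iso(a, b):
    # Brute-force analogue of ISO(a, b). Edges are tuples
    # (cell, relation, symbol, cell); we try every map from a's cells
    # into b's cells and accept when relabeling a's edges yields
    # exactly the edge set of b (both inclusions, as in MATCH).
    cells_a = sorted({x for (c, r, s, d) in a for x in (c, d)})
    cells_b = sorted({x for (c, r, s, d) in b for x in (c, d)})
    for values in product(cells_b, repeat=len(cells_a)):
        m = dict(zip(cells_a, values))
        if {(m[c], r, s, m[d]) for (c, r, s, d) in a} == set(b):
            return True
    return False

a = [(0, 1, 5, 1), (1, 1, 5, 0)]
b = [(7, 1, 5, 9), (9, 1, 5, 7)]
assert iso(a, b)
```

The arithmetized version does the same search, with the bound B playing the role of the finite product over candidate maps.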
INS.n/ Ddef

len.n/ D 4 ^ insnum.exp.n; 0// ^ datasp.exp.n; 1// ^

datasp.exp.n; 2// ^ insnum.exp.n; 3//^

.8i < len.exp.n; 1///border.exp.exp.exp.n; 1/; i/; 3/// !


.9j < len.exp.n; 2///.exp.exp.exp.n; 1/; i/; 3/ D exp.exp.exp.n; 2/; j/; 3//
KUMACH.m/ Ddef

.8i < len.m//INS.exp.m; i// ^ .8j < len.m//f.exp.exp.m; i/; 0/ D exp.exp.m; j/; 0// !

.depth.exp.exp.m; i/; 1// D depth.exp.exp.m; j/; 1// ^ ISO.exp.exp.m; i/; 1/; exp.exp.m; j/; 1///g

Now with machs and d a as in the main text, let space .m; n; j/ be
space.m; n; j/ nspace.space.m; n; j/;depth.exp.state.m; n; j/; 1///;

So space .m; n; j/ is the complement space which takes space.m; n; j/ with the active area deleted. Then,
space.m; n; Sj/ D y.9a  A/.9b  B/fDOM.a; exp.state.m; n; j/; 1// ^ DOM.b; exp.state.m; n; j/; 2// ^
MATCHa; exp.state.m; n; j/; 1/; nspace.space.m; n; j/; depth.exp.state.m; n; j/; 1/// ^

.8i < len.exp.state.m; n; j/; 1///border.exp.state.m; n; j/; 1/; exp.exp.exp.state.m; n; j/; 1/; i/; 3// !
mapv.a; exp.exp.exp.state.m; n; j/; 1/; i/; 3// D mapv.b; exp.exp.exp.state.m; n; j/; 1/; i/; 3// ^

.8k < len.exp.state.m; n; j/; 2///f.8i < len.exp.state.m; n; j/; 1///border.exp.state.m; n; j/; 1/; exp.exp.exp.state.m; n; j/; 1/; i/; 3// !
exp.exp.exp.state.m; n; j/; 2/; k/; 3/ exp.exp.exp.state.m; n; j/; 1/; i/; 3/ !

.8h < len.space .m; n; j///mapv.b; exp.exp.exp.state.m; n; j/; 2/; k/; 3// exp.exp.space .m; n; j/; h/; 3/g ^
y D proj.b; exp.state.m; n; j/; 2// ? space .m; n; j/g

For the third condition, b takes a cell not in the border of Sa to a cell not
in the complement space. Suppose sa numbers Sa , sb numbers Sb and d
 maxcell.sa / maxcell.d/ len.sa /
1
numbers the dataspace. Set A D 0
. And B D
len.sa /
 maxcell.sb / maxcell.d/Clen.sb / len.sb /

1
. In this case, the maximum cell number
len0.sb /
of the destination is the maximum cell number of the dataspace plus enough
room to fit all the cells from Sb .


Bibliography

Benacerraf, P., and H. Putnam. Philosophy of Mathematics: Selected Readings. Cambridge: Cambridge University Press, 1983, 2nd edition.

Bergmann, M., J. Moor, and J. Nelson. The Logic Book. New York: McGraw-Hill, 2004, 4th edition.

Berto, Francesco. There's Something About Gödel: The Complete Guide to the Incompleteness Theorem. Oxford: Wiley-Blackwell, 2009.

Black, Robert. "Proving Church's Thesis." Philosophia Mathematica 8 (2000): 244–258.

Boolos, G., J. Burgess, and R. Jeffrey. Computability and Logic. Cambridge: Cambridge University Press, 2002, 4th edition.

Boolos, George. The Logic of Provability. Cambridge: Cambridge University Press, 1993.

Cederblom, J., and D. Paulsen. Critical Reasoning. Belmont: Wadsworth, 2005, 6th edition.

Church, Alonzo. "An Unsolvable Problem of Elementary Number Theory." American Journal of Mathematics 58 (1936): 345–363.

Dennett, Daniel, editor. The Philosopher's Lexicon. 1987. URL http://www.blackwellpublishing.com/lexicon/.

Drake, F., and D. Singh. Intermediate Set Theory. Chichester, England: John Wiley & Sons, 1996.

Earman, J., and J. Norton. "Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes." Philosophy of Science 60 (1993): 22–42.

Earman, John. Bangs, Crunches, Whimpers, and Shrieks: Singularities and Acausalities in Relativistic Spacetimes. New York: Oxford University Press, 1995.

Enderton, H. Elements of Set Theory. Boston: Academic Press, Inc., 1977.

Feferman, et al., editors. Gödel's Collected Works: Vol. I. New York: Oxford University Press, 1986.

Fisher, A. Formal Number Theory and Computability. Oxford: Clarendon Press, 1982.

Gödel, K. "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I." In Collected Works, Vol. I: Publications 1929–1936, Oxford: Oxford University Press, 1986, 144–95.

Gödel, Kurt. "Die Vollständigkeit der Axiome des logischen Funktionenkalküls." Monatshefte für Mathematik und Physik 37 (1930): 349–360.

van Heijenoort, J., editor. From Frege to Gödel. Cambridge: Harvard University Press, 1967.

Henkin, Leon. "The Completeness of the First-Order Functional Calculus." Journal of Symbolic Logic 14 (1949): 159–166.

———. "A Problem Concerning Provability." Journal of Symbolic Logic 17 (1952): 160.

Hodges, W. A Shorter Model Theory. Cambridge: Cambridge University Press, 1997.

Hogarth, Mark. "Does General Relativity Allow an Observer To View an Eternity In a Finite Time?" Foundations of Physics Letters: 173–181.

Kolmogorov, and Uspenskii. "On the Definition of an Algorithm." American Mathematical Society Translations 29 (1963): 217–245.

Kripke, Saul. Wittgenstein on Rules and Private Language: An Elementary Exposition. Cambridge, Mass.: Harvard University Press, 1982.

Manzano, María. Extensions of First Order Logic. Cambridge: Cambridge University Press, 1996.

———. Model Theory. Oxford: Clarendon Press, 1999.

Mendelson, Elliott. Introduction to Mathematical Logic. New York: Chapman and Hall, 1997, 4th edition.

Pietroski, Paul. "Logical Form." In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, 2009. Fall 2009 edition. URL http://plato.stanford.edu/archives/fall2009/entries/logical-form/.

Plantinga, Alvin. God, Freedom, and Evil. Grand Rapids: Eerdmans, 1977.

Pohlers, W. Proof Theory. Berlin: Springer-Verlag, 1989.

Priest, Graham. Non-Classical Logics. Cambridge: Cambridge University Press, 2001.

Putnam, Hilary. Reason, Truth and History. Cambridge: Cambridge University Press, 1981.

Robinson, R. "An Essentially Undecidable Axiom System." Proceedings of the International Congress of Mathematics 1 (1950): 729–730.

Rosser, Barkley. "Extensions of Some Theorems of Gödel and Church." Journal of Symbolic Logic 1 (1936): 230–235.

Roy, Tony. "Natural Derivations for Priest, An Introduction to Non-Classical Logic." The Australasian Journal of Logic: 47–192. URL http://ojs.victoria.ac.nz/ajl/article/view/1779.

———. "Modality." In The Continuum Companion to Metaphysics, London: Continuum Publishing Group, 2012, 46–66.

Russell, B. "On Denoting." Mind 14.

Shapiro, S. Foundations Without Foundationalism: A Case for Second Order Logic. Oxford: Clarendon Press, 1991.

———. Thinking About Mathematics: The Philosophy of Mathematics. Oxford: Oxford University Press, 2000.

———. "Philosophy of Mathematics and Its Logic: Introduction." In The Oxford Handbook of Philosophy of Mathematics and Logic, edited by S. Shapiro, Oxford: Oxford University Press, 2005, 3–28.

Smith, Peter. "Squeezing Arguments." Analysis 71 (2011): 22–30.

———. An Introduction to Gödel's Theorems. Cambridge: Cambridge University Press, 2013a, second edition.

———. Teach Yourself Logic: A Study Guide. 2013b. URL http://www.logicmatters.net/tyl/.

Szabo, M., editor. The Collected Papers of Gerhard Gentzen. Amsterdam: North-Holland, 1969.

Takeuti, G. Proof Theory. Amsterdam: North-Holland, 1975.

Tourlakis, George. Lectures in Logic and Set Theory, Volume I: Mathematical Logic. Cambridge: Cambridge University Press, 2003.

Turing, Alan. "On Computable Numbers, With an Application to the Entscheidungsproblem." Proceedings of the London Mathematical Society 42 (1936): 230–265.

Wang, Hao. "The Axiomatization of Arithmetic." Journal of Symbolic Logic 22 (1957): 145–158.
