Language, Proof and Logic
CSLI Publications
Center for the Study of Language and Information
Leland Stanford Junior University
First Edition 1999
Second Edition 2011
Printed in the United States
Library of Congress Cataloging-in-Publication Data
Barker-Plummer, Dave.
Language, proof, and logic. 2nd ed. / Dave Barker-Plummer, Jon
Barwise, and John Etchemendy in collaboration with Albert Liu, Michael
Murray, and Emma Pease.
p. cm.
Rev. ed. of: Language, proof, and logic / Jon Barwise & John Etchemendy.
Includes index.
ISBN 978-1-57586-632-1 (pbk. : alk. paper)
1. Logic. I. Barwise, Jon. II. Etchemendy, John, 1952- III. Barwise, Jon.
Language, proof, and logic. IV. Title.
BC71.B25 2011
160-dc23
2011019703
CIP
The acid-free paper used in this book meets the minimum requirements of the
American National Standard for Information Sciences - Permanence of Paper for
Printed Library Materials, ANSI Z39.48-1984.
Acknowledgements
Our primary debt of gratitude goes to our main collaborators on this project:
Gerry Allwein and Albert Liu. They have worked with us in designing the
entire package, developing and implementing the software, and teaching from
and refining the text. Without their intelligence, dedication, and hard work,
LPL would neither exist nor have most of its other good properties.
In addition to the five of us, many people have contributed directly and indirectly to the creation of the package. First, over two dozen programmers have
worked on predecessors of the software included with the package, both earlier
versions of Tarski's World and the program Hyperproof, some of whose code
has been incorporated into Fitch. We want especially to mention Christopher
Fuselier, Mark Greaves, Mike Lenz, Eric Ly, and Rick Wong, whose outstanding contributions to the earlier programs provided the foundation of the new
software. Second, we thank several people who have helped with the development of the new software in essential ways: Rick Sanders, Rachel Farber, Jon
Russell Barwise, Alex Lau, Brad Dolin, Thomas Robertson, Larry Lemmon,
and Daniel Chai. Their contributions have improved the package in a host of
ways.
Prerelease versions of LPL have been tested at several colleges and universities. In addition, other colleagues have provided excellent advice that we have
tried to incorporate into the final package. We thank Selmer Bringsjord, Rensselaer Polytechnic Institute; Tom Burke, University of South Carolina; Robin
Cooper, Gothenburg University; James Derden, Humboldt State University;
Josh Dever, SUNY Albany; Avrom Faderman, University of Rochester; James
Garson, University of Houston; Christopher Gauker, University of Cincinnati;
Ted Hodgson, Montana State University; John Justice, Randolph-Macon Woman's College; Ralph Kennedy, Wake Forest University; Michael O'Rourke,
University of Idaho; Greg Ray, University of Florida; Cindy Stern, California State University, Northridge; Richard Tieszen, San Jose State University;
Saul Traiger, Occidental College; and Lyle Zynda, Indiana University at South
Bend. We are particularly grateful to John Justice, Ralph Kennedy, and their
students (as well as the students at Stanford and Indiana University), for
their patience with early versions of the software and for their extensive comments and suggestions. We would also like to thank the many instructors and
students who have offered useful feedback since the initial publication of LPL.
We would also like to thank Stanford's Center for the Study of Language
and Information and Indiana University's College of Arts and Sciences for
their financial support of the project. Finally, we are grateful to our publisher,
Dikran Karagueuzian and his team at CSLI Publications, for their skill and
enthusiasm about LPL, and to Lauri Kanerva for his dedication and skill in
the preparation of the final manuscript.
We have benefitted greatly from the feedback of the many instructors who
have adopted the LPL package in their teaching. We would particularly like
to thank Richard Zach, University of Calgary; S. Marc Cohen, University of
Washington and Bram van Heuveln, Rensselaer Polytechnic Institute for much
appreciated comments on the package. Bram suggested to us the addition of
the Add Support Steps feature of the new Fitch program. Richard Johns
of the University of British Columbia suggested the new "goggles" features
which are also included in that program.
The Openproof project continues to benefit from generous funding from
Stanford University and from its home in the intellectually stimulating environment of Stanford's Center for the Study of Language and Information
(CSLI). As always, we are grateful to our publisher, Dikran Karagueuzian,
and his team at CSLI Publications for their continued enthusiasm for LPL.
Contents
Acknowledgements

Introduction
    The special role of logic in rational inquiry
    Why learn an artificial language?
    Consequence and proof
    Instructions about homework exercises (essential!)
    To the instructor
    Web address

I Propositional Logic

1 Atomic Sentences
    1.1 Individual constants
    1.2 Predicate symbols
    1.3 Atomic sentences
    1.4 General first-order languages
    1.5 Function symbols (optional)
    1.6 The first-order language of set theory (optional)
    1.7 The first-order language of arithmetic (optional)
    1.8 Alternative notation (optional)

2 The Logic of Atomic Sentences

3 The Boolean Connectives
    3.1 Negation symbol: ¬
    3.2 Conjunction symbol: ∧
    3.3 Disjunction symbol: ∨
    3.4 Remarks about the game

4 The Logic of Boolean Connectives

5 Methods of Proof for Boolean Logic

6 Formal Proofs and Boolean Logic

7 Conditionals
    7.1 Material conditional symbol: →
    7.2 Biconditional symbol: ↔
    7.3 Conversational implicature
    7.4 Truth-functional completeness (optional)
    7.5 Alternative notation (optional)

8 The Logic of Conditionals
    8.1 Informal methods of proof
    8.2 Formal rules of proof for → and ↔
    8.3 Soundness and completeness (optional)
    8.4 Valid arguments: some review exercises

II Quantifiers

9 Introduction to Quantification
    9.1 Variables and atomic wffs
    9.2 The quantifier symbols: ∀, ∃
    9.3 Wffs and sentences
    9.4 Semantics for the quantifiers
    9.5 The four Aristotelian forms
    9.6 Translating complex noun phrases
    9.7 Quantifiers and function symbols (optional)
    9.8 Alternative notation (optional)

10 The Logic of Quantifiers
    10.1 Tautologies and quantification
    10.2 First-order validity and consequence
    10.3 First-order equivalence and DeMorgan's laws
    10.4 Other quantifier equivalences (optional)
    10.5 The axiomatic method (optional)
    10.6 Lemmas

11 Multiple Quantifiers
    11.1 Multiple uses of a single quantifier
    11.2 Mixed quantifiers
    11.3 The step-by-step method of translation
    11.4 Paraphrasing English
    11.5 Ambiguity and context sensitivity
    11.6 Translations using function symbols (optional)
    11.7 Prenex form (optional)
    11.8 Some extra translation problems

12 Methods of Proof for Quantifiers

13 Formal Proofs and Quantifiers

14 More about Quantification

III Applications and Metatheory

15 First-order Set Theory

16 Mathematical Induction
    16.1 Inductive definitions and inductive proofs
    16.2 Inductive definitions in set theory
    16.3 Induction on the natural numbers
    16.4 Axiomatizing the natural numbers (optional)
    16.5 Induction in Fitch
    16.6 Ordering the Natural Numbers (optional)
    16.7 Strong Induction (optional)

17 Advanced Topics in Propositional Logic

18 Advanced Topics in FOL

19 Completeness and Incompleteness

Glossary

File Index

Exercise Index

General Index
Introduction
The special role of logic in rational inquiry
What do the fields of astronomy, economics, finance, law, mathematics, medicine, physics, and sociology have in common? Not much in the way of subject matter, that's for sure. And not all that much in the way of methodology.
What they do have in common, with each other and with many other fields, is
their dependence on a certain standard of rationality. In each of these fields,
it is assumed that the participants can differentiate between rational argumentation based on assumed principles or evidence, and wild speculation or
non sequiturs, claims that in no way follow from the assumptions. In other
words, these fields all presuppose an underlying acceptance of basic principles
of logic.
For that matter, all rational inquiry depends on logic, on the ability of
people to reason correctly most of the time, and, when they fail to reason
correctly, on the ability of others to point out the gaps in their reasoning.
While people may not all agree on a whole lot, they do seem to be able to agree
on what can legitimately be concluded from given information. Acceptance of
these commonly held principles of rationality is what differentiates rational
inquiry from other forms of human activity.
Just what are the principles of rationality presupposed by these disciplines?
And what are the techniques by which we can distinguish correct or valid
reasoning from incorrect or invalid reasoning? More basically, what is it
that makes one claim follow logically from some given information, while
some other claim does not?
Many answers to these questions have been explored. Some people have
claimed that the laws of logic are simply a matter of convention. If this is so,
we could presumably decide to change the conventions, and so adopt different
principles of logic, the way we can decide which side of the road we drive
on. But there is an overwhelming intuition that the laws of logic are somehow
more fundamental, less subject to repeal, than the laws of the land, or even the
laws of physics. We can imagine a country in which a red traffic light means
go, and a world on which water flows uphill. But we can't even imagine a
world in which there both are and are not nine planets.
The importance of logic has been recognized since antiquity. After all, no
science can be any more certain than its weakest link. If there is something
arbitrary about logic, then the same must hold of all rational inquiry. Thus
it becomes crucial to understand just what the laws of logic are, and even
more important, why they are laws of logic. These are the questions that one
takes up when one studies logic itself. To study logic is to use the methods of
rational inquiry on rationality itself.
Over the past century the study of logic has undergone rapid and important advances. Spurred on by logical problems in that most deductive of
disciplines, mathematics, it developed into a discipline in its own right, with its
own concepts, methods, techniques, and language. The Encyclopædia Britannica lists logic as one of the seven main branches of knowledge. More recently,
the study of logic has played a major role in the development of modern-day
computers and programming languages. Logic continues to play an important
part in computer science; indeed, it has been said that computer science is
just logic implemented in electrical engineering.
This book is intended to introduce you to some of the most important
concepts and tools of logic. Our goal is to provide detailed and systematic
answers to the questions raised above. We want you to understand just how
the laws of logic follow inevitably from the meanings of the expressions we
use to make claims. Convention is crucial in giving meaning to a language,
but once the meaning is established, the laws of logic follow inevitably.
More particularly, we have two main aims. The first is to help you learn
a new language, the language of first-order logic. The second is to help you
learn about the notion of logical consequence, and about how one goes about
establishing whether some claim is or is not a logical consequence of other
accepted claims. While there is much more to logic than we can even hint at
in this book, or than any one person could learn in a lifetime, we can at least
cover these most basic of issues.
Why learn an artificial language?
This language of first-order logic is very important. Like Latin, the language is
not spoken, but unlike Latin, it is used every day by mathematicians, philosophers, computer scientists, linguists, and practitioners of artificial intelligence.
Indeed, in some ways it is the universal language, the lingua franca, of the symbolic sciences. Although it is not so frequently used in other forms of rational
inquiry, like medicine and finance, it is also a valuable tool for understanding
the principles of rationality underlying these disciplines as well.
The language goes by various names: the lower predicate calculus, the
functional calculus, the language of first-order logic, and fol. The last of
these is pronounced "ef-oh-el," not "fall," and is the name we will use.
Certain elements of fol go back to Aristotle, but the language as we know
it today has emerged over the past hundred years. The names chiefly associated with its development are those of Gottlob Frege, Giuseppe Peano, and
Charles Sanders Peirce. In the late nineteenth century, these three logicians
independently came up with the most important elements of the language,
known as the quantifiers. Since then, there has been a process of standardization and simplification, resulting in the language in its present form. Even
so, there remain certain dialects of fol, differing mainly in the choice of the
particular symbols used to express the basic notions of the language. We will
use the dialect most common in mathematics, though we will also tell you
about several other dialects along the way. Fol is used in different ways in
different fields. In mathematics, it is used in an informal way quite extensively. The various connectives and quantifiers find their way into a great deal
of mathematical discourse, both formal and informal, as in a classroom setting. Here you will often find elements of fol interspersed with English or
the mathematician's native language. If you've ever taken calculus you have
probably seen such formulas as:
∀ε > 0 ∃δ > 0 . . .

Here, the unusual, rotated letters are taken directly from the language fol.
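Formulas of this shape typically abbreviate the definition of a limit. Written out in full in fol-style notation (our own rendering, offered only as an illustration), the definition of lim x→a f(x) = L reads:

```latex
% The epsilon-delta definition of a limit, lim_{x -> a} f(x) = L,
% using the rotated quantifier letters mentioned above.
\forall \varepsilon \,\bigl( \varepsilon > 0 \rightarrow
  \exists \delta \,\bigl( \delta > 0 \wedge
    \forall x \,( 0 < |x - a| < \delta \rightarrow |f(x) - L| < \varepsilon )
  \bigr)\bigr)
```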
In philosophy, fol and enrichments of it are used in two different ways. As
in mathematics, the notation of fol is used when absolute clarity, rigor, and
lack of ambiguity are essential. But it is also used as a case study of making
informal notions (like grammaticality, meaning, truth, and proof) precise and
rigorous. The applications in linguistics stem from this use, since linguistics
is concerned, in large part, with understanding some of these same informal
notions.
In artificial intelligence, fol is also used in two ways. Some researchers
take advantage of the simple structure of fol sentences to use it as a way to
encode knowledge to be stored and used by a computer. Thinking is modeled
by manipulations involving sentences of fol. The other use is as a precise
specification language for stating axioms and proving results about artificial
agents.
In computer science, fol has had an even more profound influence. The
very idea of an artificial language that is precise yet rich enough to program
computers was inspired by this language. In addition, all extant programming
languages borrow some notions from one or another dialect of fol. Finally,
there are so-called logic programming languages, like Prolog, whose programs
are sequences of sentences in a certain dialect of fol.

Consequence and proof
Earlier, we asked what makes one claim follow from others: convention, or
something else? Giving an answer to this question for fol takes up a significant part of this book. But a short answer can be given here. Modern logic
teaches us that one claim is a logical consequence of another if there is no way
the latter could be true without the former also being true.
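For the propositional fragment of the language, this definition can be checked mechanically: list every assignment of truth values to the atomic sentences, and look for one that makes the premise true and the candidate conclusion false. The sketch below is our own illustration in Python; it is not part of the LPL software, and the function names in it are ours.

```python
from itertools import product

def is_consequence(premise, conclusion, atoms):
    """Return True if every truth assignment that makes `premise` true
    also makes `conclusion` true (i.e., no counterexample exists)."""
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if premise(assignment) and not conclusion(assignment):
            return False  # counterexample: premise true, conclusion false
    return True

# "A and B" has "A" as a logical consequence...
print(is_consequence(lambda v: v["A"] and v["B"],
                     lambda v: v["A"], ["A", "B"]))   # True

# ...but "A or B" does not: A=False, B=True is a counterexample.
print(is_consequence(lambda v: v["A"] or v["B"],
                     lambda v: v["A"], ["A", "B"]))   # False
```

The same exhaustive search that certifies a consequence also produces the counterexample when one exists, which is the pair of methods this introduction goes on to describe.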
This is the notion of logical consequence implicit in all rational inquiry.
All the rational disciplines presuppose that this notion makes sense, and that
we can use it to extract consequences of what we know to be so, or what we
think might be so. It is also used in disconfirming a theory. For if a particular
claim is a logical consequence of a theory, and we discover that the claim
is false, then we know the theory itself must be incorrect in some way or
other. If our physical theory has as a consequence that the planetary orbits
are circular when in fact they are elliptical, then there is something wrong
with our physics. If our economic theory says that inflation is a necessary
consequence of low unemployment, but today's low unemployment has not
caused inflation, then our economic theory needs reassessment.
Rational inquiry, in our sense, is not limited to academic disciplines, and so
neither are the principles of logic. If your beliefs about a close friend logically
imply that he would never spread rumors behind your back, but you find that
he has, then your beliefs need revision. Logical consequence is central, not
only to the sciences, but to virtually every aspect of everyday life.
One of our major concerns in this book is to examine this notion of logical
consequence as it applies specifically to the language fol. But in so doing, we
will also learn a great deal about the relation of logical consequence in natural
languages. Our main concern will be to learn how to recognize when a specific
claim follows logically from others, and conversely, when it does not. This is
an extremely valuable skill, even if you never have occasion to use fol again
after taking this course. Much of our lives are spent trying to convince other
people of things, or being convinced of things by other people, whether the
issue is inflation and unemployment, the kind of car to buy, or how to spend
the evening. The ability to distinguish good reasoning from bad will help you
recognize when your own reasoning could be strengthened, or when that of
others should be rejected, despite superficial plausibility.
It is not always obvious when one claim is a logical consequence of others, but powerful methods have been developed to address this problem, at
least for fol. In this book, we will explore methods of proof (how we can
prove that one claim is a logical consequence of another) and also methods
for showing that a claim is not a consequence of others. In addition to the
language fol itself, these two methods, the method of proof and the method
of counterexample, form the principal subject matter of this book.
Instructions about homework exercises (essential!)

Many of the exercises in this book are submitted to a grading server, the Grade
Grinder, which assesses your files and sends a report to
both you and your instructor. (If you are not using this book as a part of a
formal class, you can have the reports sent just to you.)
Exercises in the book are numbered n.m, where n is the number of the
chapter and m is the number of the exercise in that chapter. Exercises whose
solutions consist of one or more files created with the LPL applications that
you are to submit to the Grade Grinder are indicated with an arrow (→), so
that you know the solutions are to be sent off into the Internet ether. Exercises
that are not completed using the applications are indicated with a pencil (✎).
For example, Exercises 36 and 37 in Chapter 6 might look like this:

6.36 (→)

6.37 (✎)
The arrow on Exercise 6.36 tells you that the world you create using
Tarskis World is to be submitted electronically, and that there is nothing
else to turn in. The pencil on Exercise 6.37 tells you that you do not complete
the exercise using one of the applications. Your instructor will tell you how
to turn in the solution. A solution to this exercise can be submitted as a text
file using Submit, or your instructor might prefer to collect solutions to this
exercise on paper.
Some exercises ask you to turn in something to your instructor in addition
to submitting a file electronically. These are indicated with both an arrow and
a pencil (→|✎). This is also used when the exercise may require a file to be
submitted, but may not, depending on the solution. For example, the next
problem in Chapter 6 might ask:
6.38 (→|✎)

When you submit a file, you will receive an acknowledgement that the file has
been received, and your instructor will be able to retrieve a copy of your work
from the Grade Grinder web site.
When you create files to be submitted to the Grade Grinder, it is important that you name them correctly. Sometimes we will tell you what to name
the files, but more often we expect you to follow a few standard conventions.
Our naming conventions are simple. If you are creating a proof using Fitch,
then you should name the file Proof n.m, where n.m is the number of the
exercise. If you are creating a world or sentence file in Tarski's World, then
you should call it either World n.m or Sentences n.m, where n.m is the number
of the exercise. If you are creating a truth table using Boole, you should name
it Table n.m, and finally if you are submitting a text file it must be named
Solution n.m. The key thing is to get the right exercise number in the name,
since otherwise your solution will be graded incorrectly. We'll remind you of
these naming conventions a few times, but after that you're on your own.
When an exercise asks you to construct a formal proof using Fitch, you
will find a file on your disk called Exercise n.m. This file contains the proof set
up, so you should open it and construct your solution in this file. This is a lot
easier for you and also guarantees that the Grade Grinder will know which
exercise you are solving. So make sure you always start with the packaged
Exercise file when you create your solution.
Exercises may also have from one to three stars (★, ★★, ★★★), as a rough
indication of the difficulty of the problem. For example, this would be an
exercise that is a little more difficult than average (and whose solution you
turn in to your instructor):

6.39
Remember
1. The arrow (→) means that you submit your solution electronically.
2. The pencil (✎) means that you turn in your solution to your instructor.
3. The combination (→|✎) means that your solution may be either a
submitted file or something to turn in, or possibly both.
4. Stars (★, ★★, ★★★) indicate exercises that are more difficult than
average.
5. Unless otherwise instructed, name your files Proof n.m, World n.m,
Sentences n.m, Table n.m, or Solution n.m, where n.m is the number
of the exercise.
6. When using Fitch to construct Proof n.m, start with the exercise file
Exercise n.m, which contains the problem setup.
7. If you use Submit to submit a text file, the Grade Grinder will not
assess the file, but simply acknowledge that it has been received.
You will need to know your email address, your instructor's name and email
address, and your Book ID number before you can do the exercise. If you don't
know any of these, talk to your instructor first. Your computer must be connected to
the internet to submit files. If it's not, use a public computer at your school
or at a public library.
You try it
................................................................
1. We're going to step you through the process of submitting a file to the
Grade Grinder. The file is called World Submit Me 1. It is a Tarski's World
file, but you won't have to open it using Tarski's World in order to submit it. We'll pretend that it is an exercise file that you've created while
doing your homework, and now you're ready to submit it. More complete
instructions on running Submit are contained in the instruction manual
that came with the software.
2. Find the program Submit on the CD-ROM that came with your book. Submit has an icon that looks like a cog with a capital G on it, and appears
inside a folder called Submit Folder. Once you've found it, double-click on
the icon to launch the program.
3. After a moment, you will see the main Submit window, which has a rotating cog in the upper-left corner. The first thing you should do is fill in the
requested information in the five fields. Enter your Book ID first, then your
name and email address. You have to use your complete email address (for
example, [email protected], not just claire or claire@cs), since
the Grade Grinder will need the full address to send its response back to
you. Also, if you have more than one email address, you have to use the
same one every time you submit files, since your email address and Book ID
together are how the Grade Grinder will know that it is really you submitting
files. Finally, fill in your instructors name and complete email address. Be
very careful to enter the correct and complete email addresses!
"
4. Were now ready to specify the file to submit. Click on the button Choose
Files To Submit in the lower-left corner. This opens a window showing
two file lists. The list on the left shows files on your computer, while the one
on the right (which is currently empty) will list files you want to submit.
We need to locate the file World Submit Me 1 on the left and copy it over to
the right. By the way, our software uses files with the extensions .sen, .wld,
.prf, and .tt, but we don't mention these in this book when referring to files;
our informal name for the file we are looking for is World Submit Me 1.
"
The file World Submit Me 1 is located in the Tarski's World exercise files
folder. To find this folder you will have to navigate among folders until it
appears in the file list on the left. Start by clicking once on the Submit
Folder button above the left-hand list. A menu will appear and you can
then move up to higher folders by choosing their names (the higher folders
appear lower on this menu). Move to the next folder up from the Submit
Folder, which should be called LPL Software. When you choose this folder,
the list of files will change. On the new list, find the folder Tarski's World
Folder and double-click on its name to see the contents of the folder. The
list will again change and you should now be able to see the folder TW Exercise Files. Double-click on this folder and the file list will show the contents
of this folder. Toward the bottom of the list (you will have to scroll down
the list by clicking on the scroll buttons), you will find World Submit Me
1. Double-click on this file and its name will move to the list on the right.
5. When you have successfully gotten the file World Submit Me 1 on the right-hand list, click the Done button underneath the list. This should bring you
back to the original Submit window, only now the file you want to submit
appears in the list of files. (Macintosh users can get to this point quickly by
dragging the files they want to submit onto the Submit icon in the Finder.
This will launch Submit and put those files in the submission list. If you
drag a folder of files, it will put all the files in the folder onto the list.)
6. When you have the correct file on the submission list, click on the Submit
Files button under this list. As this is your very first submission, Submit
requires you to confirm your email address, in addition to asking you to
confirm that you want to submit World Submit Me 1. The confirmation
of your email address will only happen this time. You also have an opportunity to say whether you want to send the results just to you or also
to your instructor. In this case, select Just Me. When you are submitting finished homework exercises, you should select Instructor Too. Once
you've chosen who the results should go to, click the Proceed button and
your submission will be sent. (With real homework, you can always do a
trial submission to see if you got the answers right, asking that the results
be sent just to you. When you are satisfied with your solutions, submit
the files again, asking that the results be sent to the instructor too. But
don't forget the second submission!)
7. In a moment, you will get a dialog box that will tell you if your submission
has been successful. If so, it will give you a receipt message that you can
save, if you like. If you do not get this receipt, then your submission has
not gone through and you will have to try again.
8. A few minutes after the Grade Grinder receives your file, you should get
an email message saying that it has been received. If this were a real homework exercise, it would also tell you if the Grade Grinder found any errors
in your homework solutions. You won't get an email report if you put in
the wrong, or a misspelled, email address. If you don't get a report, try
submitting again with the right address.
9. Quit from Submit when you are done. Congratulations on submitting your
first file.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Here's an important thing for you to know: when you submit files to the
Grade Grinder, Submit sends a copy of the files. The original files are still
on the disk where you originally saved them. If you saved them on a public
computer, it is best not to leave them lying around. Put them on a thumb drive
that you can take with you, and delete any copies from the public computer's hard disk.
You should carefully read the email that you receive from the Grade
Grinder since it contains information concerning the errors that the Grade
Grinder found in your work. Even if there are no errors, you should keep the
email that you receive as a reminder that you have submitted the work. In
addition, if you log in at our web site, http://lpl.stanford.edu, you will
be able to see the complete history of your submissions to the Grade Grinder.
To the instructor
Students, you may skip this section. It is a personal note from us, the authors,
to instructors planning to use this package in their logic courses.
Practical matters
We use the Language, Proof and Logic package (LPL) in two very different
sorts of courses. One is a first course in logic for undergraduates with no
previous background in logic, philosophy, mathematics, or computer science.
This important course, sometimes disparagingly referred to as "baby logic,"
is often an undergraduate's first and only exposure to the rigorous study of
reasoning. When we teach this course, we cover much of the first two parts
of the book, leaving out many of the sections indicated as optional in the
table of contents. Although some of the material in these two parts may seem
more advanced than is usually covered in a traditional introductory course,
we find that the software makes it completely accessible to even the relatively
unprepared student.
At the other end of the spectrum, we use LPL in an introductory graduate-level course in metatheory, designed for students who have already had some
exposure to logic. In this course, we quickly move through the first two parts,
thereby giving the students both a review and a common framework for use
in the discussions of soundness and completeness. Using the Grade Grinder,
students can progress through much of the early material at their own pace,
doing only as many exercises as are needed to demonstrate competence.
There are no doubt many other courses for which the package would be
suitable. Though we have not had the opportunity to use it this way, it would
be ideally suited for a two-term course in logic and its metatheory.
Our courses are typically listed as philosophy courses, though many of the
students come from other majors. Since LPL is designed to satisfy the logical
needs of students from a wide variety of disciplines, it fits naturally into logic
courses taught in other departments, most typically mathematics and computer science. Instructors in different departments may select different parts
of the optional material. For example, computer science instructors may want
to cover the sections on resolution in Part 14.6, though philosophy instructors
generally do not cover this material.
If you have not used software in your teaching before, you may be concerned about how to incorporate it into your class. Again, there is a spectrum
of possibilities. At one end is to conduct your class exactly the way you always
do, letting the students use the software on their own to complete homework
assignments. This is a perfectly fine way to use the package, and the students
will still benefit significantly from the suite of software tools. We find that
most students now have easy access to computers and the Internet, and so
no special provisions are necessary to allow them to complete and submit the
homework.
At the other end are courses given in computer labs or classrooms, where
the instructor is more a mentor offering help to students as they proceed at
their own pace, a pace you can keep in step with periodic quizzes and exams.
Here the student becomes a more active participant in the learning, but such
a class requires a high computer-to-student ratio, at least one computer for every three students. For a class
of 30 or fewer students, this can be a very effective way to teach a beginning
logic course.
In between, and the style we typically use, is to give reasonably traditional
presentations, but to bring a laptop to class from time to time to illustrate
important material using the programs. This requires some sort of projection
system, but also allows you to ask the students to do some of the computer
problems in class. We encourage you to get students to operate the computer
themselves in front of the class, since they thereby learn from one another,
both about strategies for solving problems and constructing proofs, and about
different ways to use the software. A variant of this is to schedule a weekly
lab session as part of the course.
The book contains an extremely wide variety of exercises, ranging from
solving puzzles expressed in fol to conducting Boolean searches on the World
Wide Web. There are far more exercises than you can expect your students
to do in a single quarter or semester. Beware that many exercises, especially
those using Tarski's World, should be thought of as exercise sets. They may, for
example, involve translating ten or twenty sentences, or transforming several
sentences into conjunctive normal form. Students can find hints and solutions
to selected exercises on our web site. You can download a list of these exercises
from the same site.
Although there are more exercises than you can reasonably assign in a
semester, and so you will have to select those that best suit your course, we
do urge you to assign all of the "You try it" exercises. These are not difficult
and do not test students' knowledge. Instead, they are designed to illustrate
important logical concepts, to introduce students to important features of the
programs, or both. The Grade Grinder will check any files that the students
create in these sections.
We should say a few words about the Grade Grinder, since it is a truly
innovative feature of this package. Most important, the Grade Grinder will
free you from the most tedious aspect of teaching logic, namely, grading those
kinds of problems whose assessment can be mechanized. These include formal
proofs, translation into fol, truth tables, and various other kinds of exercises.
This will allow you to spend more time on the more rewarding parts of teaching
the material.
That said, it is important to emphasize two points. The first is that the
Grade Grinder is not limited in the way that most computerized grading
programs are. It uses sophisticated techniques, including a powerful first-order
theorem prover, in assessing student answers and providing intelligent reports
on those answers. Second, in designing this package, we have not fallen into
the trap of tailoring the material to what can be mechanically assessed. We
firmly believe that computer-assisted learning has an important but limited
role to play in logic instruction. Much of what we teach goes beyond what
can be assessed automatically. This is why about half of the exercises in the
book still require human attention.
It is a bit misleading to say that the Grade Grinder "grades" the homework. The Grade Grinder simply reports to you any errors in the students' solutions, leaving to you the decision of what weight to give to individual problems and whether partial credit is appropriate for certain mistakes. A more
detailed explanation of what the Grade Grinder does and what grade reports
look like can be found at the web address given on page 16.
Before your students can request that their Grade Grinder results be sent
to you, you will have to register with the Grade Grinder as an instructor. This
can be done by going to the LPL web site and following the Instructor links.
Philosophical remarks
This book, and the supporting software that comes with it, grew out of our
own dissatisfaction with beginning logic courses. It seems to us that students
all too often come away from these courses with neither of the things we
want them to have. They do not understand the first-order language or the
rationale for it, and they are unable to explain why or even whether one claim
follows logically from another. Worse, they often come away with a complete
misconception about logic. They leave their first (and only) course in logic
having learned what seem like a bunch of useless formal rules. They gain little
if any understanding about why those rules, rather than some others, were
chosen, and they are unable to take any of what they have learned and apply
it in other fields of rational inquiry or in their daily lives. Indeed, many come
away convinced that logic is both arbitrary and irrelevant. Nothing could be
further from the truth.
The real problem, as we see it, is a failure on the part of logicians to find a
simple way to explain the relationship between meaning and the laws of logic.
In particular, we do not succeed in conveying to students what sentences
in fol mean, or in conveying how the meanings of sentences govern which
methods of inference are valid and which are not. It is this problem we set
out to solve with LPL.
There are two ways to learn a second language. One is to learn how to
translate sentences of the language to and from sentences of your native language. The other is to learn by using the language directly. In teaching fol,
the first way has always been the prevailing method of instruction. There are
serious problems with this approach. Some of the problems, oddly enough,
stem from the simplicity, precision, and elegance of fol. This results in a distracting mismatch between the students' native language and fol. It forces
students trying to learn fol to be sensitive to subtleties of their native language that normally go unnoticed. While this is useful, it often interferes with
the learning of fol. Students mistake complexities of their native tongue for
complexities of the new language they are learning.
In LPL, we adopt the second method for learning fol. Students are given
many tasks involving the language, tasks that help them understand the meanings of sentences in fol. Only then, after learning the basics of the symbolic
language, are they asked to translate between English and fol. Correct translation involves finding a sentence in the target language whose meaning approximates, as closely as possible, the meaning of the sentence being translated. To do this well, a translator must already be fluent in both languages.
We have been using this approach for several years. What allows it to
work is Tarski's World, one of the computer programs in this package. Tarski's
World provides a simple environment in which fol can be used in many of
the ways that we use our native language. We provide a large number of
problems and exercises that walk students through the use of the language in
this setting. We build on this in other problems where they learn how to put
the language to more sophisticated uses.
As we said earlier, besides teaching the language fol, we also discuss basic
methods of proof and how to use them. In this regard, too, our approach
is somewhat unusual. We emphasize both informal and formal methods of
proof. We first discuss and analyze informal reasoning methods, the kind
used in everyday life, and then formalize these using a Fitch-style natural
deduction system. The second piece of software that comes with the book,
which we call Fitch, makes it easy for students to learn this formal system
and to understand its relation to the crucial informal methods that will assist
them in other disciplines and in any walk of life.
A word is in order about why we chose a Fitch-style system of deduction,
rather than a more semantically based method like truth trees or semantic
tableau. In our experience, these semantic methods are easy to teach, but
are only really applicable to arguments in formal languages. In contrast, the
important rules in the Fitch system, those involving subproofs, correspond
closely to essential methods of reasoning and proof, methods that can be used
in virtually any context: formal or informal, deductive or inductive, practical
or theoretical. The point of teaching a formal system of deduction is not
so students will use the specific system later in life, but rather to foster an
understanding of the most basic methods of reasoning, methods that they
will use, and to provide a precise model of reasoning for use in discussions of
soundness and completeness.
Tarski's World also plays a significant role in our discussion of proof, along
with Fitch, by providing an environment for showing that one claim does
not follow from another. With LPL, students learn not just how to prove
consequences of premises, but also the equally important technique of showing
that a given claim does not follow logically from its premises. To do this, they
learn how to give counterexamples, which are really proofs of nonconsequence.
These will often be given using Tarski's World.
The approach we take in LPL is also unusual in two other respects. One
is our emphasis on languages in which all the basic symbols are assumed to
be meaningful. This is in contrast to the so-called "uninterpreted languages"
(surely an oxymoron) so often found in logic textbooks. Another is the inclusion of various topics not usually covered in introductory logic books. These
include the theory of conversational implicature, material on generalized quantifiers, and most of the material in Part 14.6. We believe that even if these
topics are not covered, their presence in the book illustrates to the student
the richness and open-endedness of the discipline of logic.
Web address
In addition to the book, software, and grading service, additional material can
be found on the Web at the following address:
http://lpl.stanford.edu
At the web site you will find hints and solutions to selected exercises, an
online version of the software manuals, support pages where you can browse
our list of frequently asked questions, and directly request technical support
and submit bug reports. In addition, registered users may log in at the site.
Students can view their history of submissions to the Grade Grinder, and
download the latest versions of the software, while instructors can view the
history of submissions by all of their students. You are automatically registered
as a student user of the package when you make your first submission to the
Grade Grinder (do it now, if you haven't already done so).
Part I
Propositional Logic
Chapter 1
Atomic Sentences
In the Introduction, we talked about fol as though it were a single language.
Actually, it is more like a family of languages, all having a similar grammar
and sharing certain important vocabulary items, known as the connectives
and quantifiers. Languages in this family can differ, however, in the specific
vocabulary used to form their most basic sentences, the so-called atomic sentences.
Atomic sentences correspond to the simplest sentences of English, sentences consisting of some names connected by a predicate. Examples are Max
ran, Max saw Claire, and Claire gave Scruffy to Max. Similarly, in fol atomic
sentences are formed by combining names (or individual constants, as they
are often called) and predicates, though the way they are combined is a bit
different from English, as you will see.
Different versions of fol have available different names and predicates. We
will frequently use a first-order language designed to describe blocks arranged
on a chessboard, arrangements that you will be able to create in the program
Tarski's World. This language has names like b, e, and n2, and predicates
like Cube, Larger, and Between. Some examples of atomic sentences in this
language are Cube(b), Larger(c, f), and Between(b, c, d). These sentences say,
respectively, that b is a cube, that c is larger than f, and that b is between c
and d.
Later in this chapter, we will look at the atomic sentences used in two
other versions of fol, the first-order languages of set theory and arithmetic.
In the next chapter, we begin our discussion of the connectives and quantifiers
common to all first-order languages.
Section 1.1
Individual constants
Individual constants are simply symbols that are used to refer to some fixed
individual object. They are the fol analogue of names, though in fol we
generally don't capitalize them. For example, we might use max as an individual constant to denote a particular person, named Max, or 1 as an individual
constant to denote a particular number, the number one. In either case, they
would basically work exactly the way names work in English. Our blocks
Section 1.2
Predicate symbols
a predicate, likes, that expresses a relation between the referents of the names.
Thus, atomic sentences of fol often have two or more logical subjects, and the
predicate is, so to speak, whatever is left. The logical subjects are called the
arguments of the predicate. In this case, the predicate is said to be binary,
since it takes two arguments.
In English, some predicates have optional arguments. Thus you can say
Claire gave, Claire gave Scruffy, or Claire gave Scruffy to Max. Here the
predicate gave is taking one, two, and three arguments, respectively. But in
fol, each predicate has a fixed number of arguments, a fixed arity as it is
called. This is a number that tells you how many individual constants the
predicate symbol needs in order to form a sentence. The term arity comes
from the fact that predicates taking one argument are called unary, those
taking two are binary, those taking three are ternary, and so forth.
If the arity of a predicate symbol Pred is 1, then Pred will be used to
express some property of objects, and so will require exactly one argument (a
name) to make a claim. For example, we might use the unary predicate symbol
Home to express the property of being at home. We could then combine this
with the name max to get the expression Home(max), which expresses the
claim that Max is at home.
If the arity of Pred is 2, then Pred will be used to represent a relation
between two objects. Thus, we might use the expression Taller(claire, max) to
express a claim about Max and Claire, the claim that Claire is taller than
Max. In fol, we can have predicate symbols of any arity. However, in the
blocks language used in Tarski's World we restrict ourselves to predicates
with arities 1, 2, and 3. Here we list the predicates of that language, this time
with their arity.
Atomic Sentence     Interpretation
Tet(a)              a is a tetrahedron
Cube(a)             a is a cube
Dodec(a)            a is a dodecahedron
Small(a)            a is small
Medium(a)           a is medium
Large(a)            a is large
SameSize(a, b)      a is the same size as b
SameShape(a, b)     a is the same shape as b
Larger(a, b)        a is larger than b
Smaller(a, b)       a is smaller than b
SameCol(a, b)       a is in the same column as b
SameRow(a, b)       a is in the same row as b
Adjoins(a, b)       a and b are located on adjacent (but not diagonally) squares
LeftOf(a, b)        a is located nearer to the left edge of the grid than b
RightOf(a, b)       a is located nearer to the right edge of the grid than b
FrontOf(a, b)       a is located nearer to the front of the grid than b
BackOf(a, b)        a is located nearer to the back of the grid than b
Between(a, b, c)    a, b and c are in the same row, column, or diagonal, and a is between b and c
an individual has the property in question or not. For example, Claire, who
is sixteen, is young. She will not be young when she is 96. But there is no
determinate age at which a person stops being young: it is a gradual sort of
thing. Fol, however, assumes that every predicate is interpreted by a determinate property or relation. By a determinate property, we mean a property
for which, given any object, there is a definite fact of the matter whether or
not the object has the property.
This is one of the reasons we say that the blocks language predicates are
Section 1.3
Atomic sentences
In fol, the simplest kinds of claims are those made with a single predicate
and the appropriate number of individual constants. A sentence formed by a
predicate followed by the right number of names is called an atomic sentence.
For example Taller(claire, max) and Cube(a) are atomic sentences, provided
the names and predicate symbols in question are part of the vocabulary of
our language. In the case of the identity symbol, we put the two required
names on either side of the predicate, as in a = b. This is called infix notation, since the predicate symbol = appears in between its two arguments.
With the other predicates we use prefix notation: the predicate precedes
the arguments.
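These formation rules are mechanical enough to sketch in a few lines of code. The following Python fragment is purely our own illustration, not part of Tarski's World or the book's software; the small arity table covers just a few of the blocks language predicates:

```python
# Illustrative sketch only: forming atomic sentences of a small
# first-order language, where each predicate has a fixed arity.

ARITY = {"Cube": 1, "Larger": 2, "Between": 3, "=": 2}

def atomic(pred, *names):
    """Build an atomic sentence, enforcing the predicate's arity."""
    if ARITY[pred] != len(names):
        raise ValueError(f"{pred} takes {ARITY[pred]} argument(s), got {len(names)}")
    if pred == "=":                        # identity uses infix notation
        return f"{names[0]} = {names[1]}"
    return f"{pred}({', '.join(names)})"   # other predicates use prefix notation

print(atomic("Between", "b", "c", "d"))   # Between(b, c, d)
print(atomic("=", "a", "b"))              # a = b
```

Asking for atomic("Cube", "a", "b") would raise an error, mirroring the fact that a unary predicate followed by two names is simply not a sentence of the language.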
The order of the names in an atomic sentence is quite important. Just
as Claire is taller than Max means something different from Max is taller
than Claire, so too Taller(claire, max) means something completely different
than Taller(max, claire). We have set things up in our blocks language so that
the order of the arguments of the predicates is like that in English. Thus
LeftOf(b, c) means more or less the same thing as the English sentence b is
left of c, and Between(b, c, d) means roughly the same as the English b is
between c and d.
Predicates and names designate properties and objects, respectively. What
makes sentences special is that they make claims (or express propositions).
A claim is something that is either true or false; which of these it is we call
its truth value. Thus Taller(claire, max) expresses a claim whose truth value is
true, while Taller(max, claire) expresses a claim whose truth value is false.
(You probably didn't know that, but now you do.) Given our assumption
that predicates express determinate properties and that names denote definite
individuals, it follows that each atomic sentence of fol must express a claim
that is either true or false.
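The way an atomic sentence gets its truth value can itself be sketched in code. The toy Python model below is ours, not Tarski's World's implementation, and the height figures are invented for illustration; the point is that once names denote definite individuals and the predicate denotes a determinate relation, every atomic sentence comes out definitely true or definitely false:

```python
# Toy model of truth values for atomic sentences: names denote
# individuals, Taller denotes a determinate relation between them.

heights = {"claire": 160, "max": 150}   # hypothetical facts about the world

def taller(x, y):
    """True iff the individual named x is taller than the one named y."""
    return heights[x] > heights[y]

print(taller("claire", "max"))  # True:  Taller(claire, max)
print(taller("max", "claire"))  # False: Taller(max, claire)
```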
You try it
................................................................
1. It is time to try your hand at using Tarski's World. In this exercise, you
will use Tarski's World to become familiar with the interpretations of the
atomic sentences of the blocks language. Before starting, though, you need
to learn how to launch Tarski's World and perform some basic operations.
Read the appropriate sections of the user's manual describing Tarski's
World before going on.
2. Launch Tarski's World and open the files called Wittgenstein's World and
Wittgenstein's Sentences. You will find these in the folder TW Exercises. In
these files, you will see a blocks world and a list of atomic sentences. (We
have added comments to some of the sentences. Comments are prefaced
by a semicolon (;), which tells Tarski's World to ignore the rest of the
line.)
3. Move through the sentences using the arrow keys on your keyboard, mentally assessing the truth value of each sentence in the given world. Use the
Verify Sentence button to check your assessments. This button is on the
left of the group of three colored buttons on the toolbar (the one which
has T/F written on it). (Since the sentences are all atomic sentences the
Game button, on the right of the same group, will not be helpful.) If
you are surprised by any of the evaluations, try to figure out how your
interpretation of the predicate differs from the correct interpretation.
4. Next change Wittgenstein's World in many different ways, seeing what happens to the truth of the various sentences. The main point of this is to
help you figure out how Tarski's World interprets the various predicates.
For example, what does BackOf(d, c) mean? Do two things have to be in
the same column for one to be in back of the other?
5. Play around as much as you need until you are sure you understand the
meanings of the atomic sentences in this file. For example, in the original
world none of the sentences using Adjoins comes out true. You should try
to modify the world to make some of them true. As you do this, you will
notice that large blocks cannot adjoin other blocks.
6. In doing this exercise, you will no doubt notice that Between does not mean
exactly what the English between means. This is due to the necessity of
interpreting Between as a determinate predicate. For simplicity, we insist
that in order for b to be between c and d, all three must be in the same
row, column, or diagonal.
7. When you are finished, close the files, but do not save the changes you
have made to them.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Remember
In fol,

- Atomic sentences are formed by putting a predicate of arity n in front of n names (enclosed in parentheses and separated by commas).
- Atomic sentences are built from the identity predicate, =, using infix notation: the arguments are placed on either side of the predicate.
- The order of the names is crucial in forming atomic sentences.
Exercises
You will eventually want to read the entire chapter of the user's manual on how to use Tarski's World. To
do the following problems, you will need to read at least the first four sections. Also, if you don't remember
how to name and submit your solution files, you should review the section on essential instructions in
the Introduction, starting on page 5.
1.1  If you skipped the "You try it" section, go back and do it now. This is an easy but crucial
exercise that will familiarize you with the atomic sentences of the blocks language. There is
nothing you need to turn in or submit, but don't skip the exercise!
1.2  (Copying some atomic sentences) This exercise will give you some practice with the Tarski's
World keyboard window, as well as with the syntax of atomic sentences. The following are all
atomic sentences of our language. Start a new sentence file and copy them into it. Have Tarski's
World check each formula after you write it to see that it is a sentence. If you make a mistake,
edit it before going on. Make sure you use the Add Sentence command between sentences,
not the return key. If you've done this correctly, the sentences in your list will be numbered
and separated by horizontal lines.
1. Tet(a)
2. Medium(a)
3. Dodec(b)
4. Cube(c)
5. FrontOf(a, b)
6. Between(a, b, c)
7. a = d
8. Larger(a, b)
9. Smaller(a, c)
10. LeftOf(b, c)
Remember, you should save these sentences in a file named Sentences 1.2. When you've finished
your first assignment, submit all of your solution files using the Submit program.
1.3  (Building a world) Build a world in which all the sentences in Exercise 1.2 are simultaneously
true. Remember to name and submit your world file as World 1.3.

1.4  (Translating atomic sentences) Here are some simple sentences of English. Start a new sentence
file and translate them into fol.
1. a is a cube.
2. b is smaller than a.
3. c is between a and d.
4. d is large.
5. e is larger than a.
6. b is a tetrahedron.
7. e is a dodecahedron.
8. e is right of b.
9. a is smaller than e.
10. d is in back of a.
11. b is in the same row as d.
12. b is the same size as c.
After you've translated the sentences, build a world in which all of your translations are true.
Submit your sentence and world files as Sentences 1.4 and World 1.4.
1.5  (Naming objects) Open Lestrade's Sentences and Lestrade's World. You will notice that none of
the objects in this world has a name. Your task is to assign the objects names in such a way
that all the sentences in the list come out true. Remember to save your solution in a file named
World 1.5. Be sure to use Save World As..., not Save World.
1.6  (Naming objects, continued) Not all of the choices in Exercise 1.5 were forced on you. That
is, you could have assigned the names differently and still had the sentences come out true.
Change the assignment of as many names as possible while still making all the sentences true,
and submit the changed world as World 1.6. In order for us to compare your files, you must
submit both World 1.5 and World 1.6 at the same time.

1.7  (Context sensitivity of predicates) We have stressed the fact that fol assumes that every
predicate is interpreted by a determinate relation, whereas this is not the case in natural
languages like English. Indeed, even when things seem quite determinate, there is often some
form of context sensitivity. In fact, we have built some of this into Tarskis World. Consider,
for example, the difference between the predicates Larger and BackOf. Whether or not cube a is
larger than cube b is a determinate matter, and also one that does not vary depending on your
perspective on the world. Whether or not a is back of b is also determinate, but in this case it
does depend on your perspective. If you rotate the world by 90 , the answer might change.
Open Austin's Sentences and Wittgenstein's World. Evaluate the sentences in this file and
tabulate the resulting truth values in a table like the one below. We've already filled in the first
column, showing the values in the original world. Rotate the world 90° clockwise and evaluate
the sentences again, adding the results to the table. Repeat until the world has come full circle.
        Original    Rotated 90°    Rotated 180°    Rotated 270°
  1.    false
  2.    false
  3.    true
  4.    false
  5.    true
  6.    false
You should be able to think of an atomic sentence in the blocks language that would produce
a row across the table with the following pattern:
    true    false    true    false
Add a seventh sentence to Austin's Sentences that would display the above pattern.
Are there any atomic sentences in the language that would produce a row with this pattern?
    false    true    false    false
If so, add such a sentence as sentence eight in Austin's Sentences. If not, leave sentence eight
blank.
Are there any atomic sentences that would produce a row in the table containing exactly
three trues? If so, add such a sentence as number nine. If not, leave sentence nine blank.
Submit your modified sentence file as Sentences 1.7. Turn in your completed table to your
instructor.
Section 1.4
General first-order languages
First-order languages differ in the names and predicates they contain, and so in
the atomic sentences that can be formed. What they share are the connectives
and quantifiers that enable us to build more complex sentences from these
simpler parts. We will get to those common elements in later chapters.
When you translate a sentence of English into fol, you will sometimes
have a predefined first-order language that you want to use, like the blocks
language of Tarskis World, or the language of set theory or arithmetic described later in this chapter. If so, your goal is to come up with a translation
that captures the meaning of the original English sentence as nearly as possible, given the names and predicates available in your predefined first-order
language.
Other times, though, you will not have a predefined language to use for
your translation. If not, the first thing you have to do is decide what names and
predicates you need for your translation. In effect, you are designing, on the fly,
a new first-order language capable of expressing the English sentence you want
to translate. We've been doing this all along, for example when we introduced
Home(max) as the translation of Max is at home and Taller(claire, max) as the
translation of Claire is taller than Max.
When you make these decisions, there are often alternative ways to go.
For example, suppose you were asked to translate the sentence Claire gave
Scruffy to Max. You might introduce a binary predicate GaveScruffy(x, y),
meaning x gave Scruffy to y, and then translate the original sentence as
GaveScruffy(claire, max). Alternatively, you might introduce a three-place predicate Gave(x, y, z), meaning x gave y to z, and then translate the sentence as
Gave(claire, scruffy, max).
There is nothing wrong with either of these predicates, or their resulting
translations, so long as you have clearly specified what the predicates mean.
Of course, they may not be equally useful when you go on to translate other
sentences. The first predicate will allow you to translate sentences like Max
gave Scruffy to Evan and Evan gave Scruffy to Miles. But if you then run into
the sentence Max gave Carl to Claire, you would be stuck, and would have
to introduce an entirely new predicate, say, GaveCarl(x, y). The three-place
predicate is thus more flexible. A first-order language that contained it (plus
the relevant names) would be able to translate any of these sentences.
In general, when designing a first-order language we try to economize on
the predicates by introducing more flexible ones, like Gave(x, y, z), rather than
less flexible ones, like GaveScruffy(x, y) and GaveCarl(x, y). This produces a
more expressive language, and one that makes the logical relations between
various claims more perspicuous.
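The combinatorial payoff of the more flexible predicate can be made concrete with a small sketch. This is our own illustration, not part of Tarski's World or the book's software; the name and predicate lists are just the ones from the discussion above:

```python
from itertools import product

# Names available in our hypothetical language.
names = ["max", "claire", "scruffy", "carl"]

# A single ternary predicate Gave(x, y, z) yields one atomic sentence
# for every ordered triple of names.
ternary_sentences = [f"Gave({x}, {y}, {z})" for x, y, z in product(names, repeat=3)]

# Specialized binary predicates would need one predicate per object given,
# each covering only the ordered pairs of giver and recipient.
binary_sentences = [f"Gave{given.capitalize()}({x}, {y})"
                    for given in ["scruffy", "carl"]
                    for x, y in product(names, repeat=2)]

print(len(ternary_sentences))  # 4 names ** 3 argument places = 64 sentences
print(len(binary_sentences))   # 2 predicates * 4**2 pairs = 32 sentences
```

The single ternary predicate expresses every giving claim about the four named objects, while the specialized predicates would need a new predicate for each possible gift.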
Names can be introduced into a first-order language to refer to anything
that can be considered an object. But we construe the notion of an object
pretty flexibly, to cover anything that we can make claims about. We've already seen languages with names for people and the blocks of Tarski's World.
Later in the chapter, we'll introduce languages with names for sets and numbers. Sometimes we will want to have names for still other kinds of objects,
like days or times. Suppose, for example, that we want to translate the sentences:
1.8
Suppose we have two first-order languages: the first contains the binary predicates
GaveScruffy(x, y) and GaveCarl(x, y), and the names max and claire; the second contains the
ternary predicate Gave(x, y, z) and the names max, claire, scruffy, and carl.
1. List all of the atomic sentences that can be expressed in the first language. (Some of
these may say weird things like GaveScruffy(claire, claire), but don't worry about that.)
2. How many atomic sentences can be expressed in the second language? (Count all of
them, including odd ones like Gave(scruffy, scruffy, scruffy).)
3. How many names and binary predicates would a language like the first need in order
to say everything you can say in the second?
Table 1.2

Names:
  English                   fol
  Max                       max
  Claire                    claire
  Folly                     folly
  Carl                      carl
  Scruffy                   scruffy
  Pris                      pris
  2 pm, Jan 2, 2011         2:00
  2:01 pm, Jan 2, 2011      2:01
  ...                       ...

Predicates:
  English                   fol
  x is a pet                Pet(x)
  x is a person             Person(x)
  x is a student            Student(x)
  t is earlier than t′      t < t′
  x was hungry at time t    Hungry(x, t)
  x was angry at time t     Angry(x, t)
  x owned y at time t       Owned(x, y, t)
  x gave y to z at t        Gave(x, y, z, t)
  x fed y at time t         Fed(x, y, t)

1.9
We will be giving a number of problems that use the symbols explained in Table 1.2. Start a
new sentence file in Tarski's World and translate the following into fol, using the names and
predicates listed in the table. (You will have to type the names and predicates in by hand.
Make sure you type them exactly as they appear in the table; for example, use 2:00, not 2:00
pm or 2 pm.) All references to times are assumed to be to times on January 2, 2011.
1. Claire owned Folly at 2 pm.
2. Claire gave Pris to Max at 2:05 pm.
3. Max is a student.
4. Claire fed Carl at 2 pm.
5. Folly belonged to Max at 3:05 pm.
6. 2:00 pm is earlier than 2:05 pm.
Name and submit your file in the usual way.
1.10
Translate the following into natural sounding, colloquial English, consulting Table 1.2.

1. Owned(max, scruffy, 2:00)
2. Fed(max, scruffy, 2:30)
3. Gave(max, scruffy, claire, 3:00)
4. 2:00 < 2:00

1.11
For each sentence in the following list, suggest a translation into an atomic sentence of fol. In
addition to giving the translation, explain what kinds of objects your names refer to and the
intended meaning of the predicate you use.
1. Max shook hands with Claire.
2. Max shook hands with Claire yesterday.
3. AIDS is less contagious than influenza.
4. Spain is between France and Portugal in size.
5. Misery loves company.
Section 1.5
Function symbols
Some first-order languages have, in addition to names and predicates, other
expressions that can appear in atomic sentences. These expressions are called
function symbols. Function symbols allow us to form name-like terms from
names and other name-like terms. They allow us to express, using atomic
sentences, complex claims that could not be perspicuously expressed using
just names and predicates. Some English examples will help clarify this.
English has many sorts of noun phrases, expressions that can be combined
with a verb phrase to get a sentence. Besides names like Max and Claire,
other noun phrases include expressions like Max's father, Claire's mother,
Every girl who knows Max, No boy who knows Claire, Someone, and so forth.
Each of these combines with a singular verb phrase such as likes unbuttered
popcorn to make a sentence. But notice that the sentences that result have
very different logical properties.
In fol, the expressions corresponding to noun phrases like Max's father are
called terms, and behave like the individual constants we have already discussed. In fact, individual constants are the simplest terms, and more complex
terms are built from them using function symbols. Noun phrases like No boy
who knows Claire are handled with very different devices, known as quantifiers, which we will discuss later.
The fol analog of the noun phrase Maxs father is the term father(max).
It is formed by putting a function symbol, father, in front of the individual
constant max. The result is a complex term that we use to refer to the father
of the person referred to by the name max. Similarly, we can put the function
symbol mother together with the name claire and get the term mother(claire),
which functions pretty much like the English term Claire's mother.
We can repeat this construction as many times as we like, forming more
and more complex terms:
father(father(max))
mother(father(claire))
mother(mother(mother(claire)))
The first of these refers to Max's paternal grandfather, the second to Claire's
paternal grandmother, and so forth.
These function symbols are called unary function symbols, because, like
unary predicates, they take one argument. The resulting terms function just
like names, and can be used in forming atomic sentences. For instance, the
fol sentence
Taller(father(max), max)
says that Maxs father is taller than Max. Thus, in a language containing
function symbols, the definition of atomic sentence needs to be modified to
allow complex terms to appear in the argument positions in addition to names.
Students often confuse function symbols with predicates, because both
take terms as arguments. But there is a big difference. When you combine a
unary function symbol with a term you do not get a sentence, but another
term: something that refers (or should refer) to an object of some sort. This is
why function symbols can be reapplied over and over again. As we have seen,
the following makes perfectly good sense:
father(father(max))
This, on the other hand, is total nonsense:
Dodec(Dodec(a))
To help prevent this confusion, we will always capitalize predicates of fol and
leave function symbols and names in lower case.
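The type distinction behind this convention can be mirrored in a small sketch of our own (the toy data and names below are hypothetical, not the book's): a function symbol maps an object to another object, while a predicate maps objects to a truth value.

```python
# A toy interpretation: each person's father, where known.
people = {"max": {"father": "john"}, "john": {"father": "henry"}, "henry": {}}

def father(x):
    """A function symbol: takes an object and returns another object,
    the referent of the complex term father(x)."""
    return people[x]["father"]

def Dodec(x):
    """A predicate: takes an object and returns a truth value, not an object."""
    return x in {"d1", "d2"}

print(father(father("max")))  # "henry": reapplying a function symbol makes sense
print(Dodec("d1"))            # True: a truth value, not something Dodec can apply to
# Dodec(Dodec("d1")) would hand Dodec a truth value instead of an object:
# nonsense, just as the text says of Dodec(Dodec(a)).
```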
Besides unary function symbols, fol allows function symbols of any arity. Thus, for example, we can have binary function symbols. Simple English
counterparts of binary function symbols are hard to come up with, but they
are quite common in mathematics. For instance, we might have a function
symbol sum that combines with two terms, t1 and t2 , to give a new term,
sum(t1 , t2 ), which refers to the sum of the numbers referred to by t1 and t2 .
Then the complex term sum(3, 5) would give us another way of referring to
8. In a later section, we will introduce a function symbol to denote addition,
but we will use infix notation, rather than prefix notation. Thus 3 + 5 will be
used instead of sum(3, 5).
In fol, just as we assume that every name refers to an actual object,
we also assume that every complex term refers to exactly one object. This
is a somewhat artificial assumption, since many function-like expressions in
English don't always work this way. Though we may assume that
mother(father(father(max)))
refers to an actual (deceased) individual, one of Max's great-grandmothers,
there may be other uses of these function symbols that don't seem to give
us genuinely referring expressions. For example, perhaps the complex terms
mother(adam) and mother(eve) fail to refer to any individuals, if Adam and Eve
were in fact the first people. And certainly the complex term mother(3) doesn't
refer to anything, since the number three has no mother. When designing a
first-order language with function symbols, you should try to ensure that your
complex terms always refer to unique, existing individuals.
The blocks world language as it is implemented in Tarski's World does not
contain function symbols, but we could easily extend the language to include
some. Suppose for example we introduced the function expressions fm, bm, lm
and rm, that allowed us to form complex terms like:
fm(a)
lm(bm(c))
rm(rm(fm(d)))
We could interpret these function symbols so that, for example, fm(a) refers
to the frontmost block in the same column as a. Thus, if there are several
blocks in the column with a, then fm(a) refers to whichever one is nearest the
front. (Notice that fm(a) may not itself have a name; fm(a) may be our only
way to refer to it.) If a is the only block in the column, or is the frontmost in
its column, then fm(a) would refer to a. Analogously, bm, lm and rm could be
interpreted to mean backmost, leftmost and rightmost, respectively.
With this interpretation, the term lm(bm(c)) would refer to the leftmost
block in the same row as the backmost block in the same column as c.
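The intended interpretation of these function symbols can be made concrete with a sketch. The world representation below, blocks as (row, column) pairs with row 1 at the front and column 1 at the left, is our own invention, not part of Tarski's World:

```python
# A toy blocks world: each block name maps to its (row, column) position.
world = {"a": (4, 2), "b": (2, 2), "c": (4, 5), "d": (2, 5)}

def fm(block):
    """Frontmost block in the same column as `block` (smallest row number)."""
    col = world[block][1]
    return min((n for n in world if world[n][1] == col), key=lambda n: world[n][0])

def bm(block):
    """Backmost block in the same column as `block` (largest row number)."""
    col = world[block][1]
    return max((n for n in world if world[n][1] == col), key=lambda n: world[n][0])

def lm(block):
    """Leftmost block in the same row as `block` (smallest column number)."""
    row = world[block][0]
    return min((n for n in world if world[n][0] == row), key=lambda n: world[n][1])

print(fm("a"))       # b: the frontmost block in a's column
print(fm(fm("a")))   # b again: once we reach the frontmost block, fm returns it
print(lm(bm("c")))   # a: leftmost block in the row of the backmost block in c's column
```

Note that, as the text observes, a complex term like fm(a) refers to a block whether or not that block has a name of its own; here it happens to refer to the named block b.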
Exercises
1.12
Express in English the claims made by the following sentences of fol as clearly as you can.
You should try to make your English sentences as natural as possible. All the sentences are,
by the way, true.

1. Taller(father(claire), father(max))
2. john = father(max)
3. Taller(claire, mother(mother(claire)))
4. Taller(mother(mother(max)), mother(father(max)))
5. mother(melanie) = mother(claire)

1.13
Assume that we have expanded the blocks language to include the function symbols fm, bm, lm
and rm described earlier. Then the following formulas would all be sentences of the language:
1. Tet(lm(e))
2. fm(c) = c
3. bm(b) = bm(e)
4. FrontOf(fm(e), e)
5. LeftOf(fm(b), b)
6. SameRow(rm(c), c)
7. bm(lm(c)) = lm(bm(c))
8. SameShape(lm(b), bm(rm(e)))
9. d = lm(fm(rm(bm(d))))
10. Between(b, lm(b), rm(b))
Fill in the following table with trues and falses according to whether the indicated sentence
is true or false in the indicated world. Since Tarski's World does not understand the function
symbols, you will not be able to check your answers. We have filled in a few of the entries for
you. Turn in the completed table to your instructor.
       Leibniz's   Bolzano's   Boole's   Wittgenstein's
 1.    true                              false
 2.                                      true
 3.                                      false
 4.
 5.
 6.
 7.
 8.
 9.
10.

1.14
As you probably noticed in doing Exercise 1.13, three of the sentences came out true in all
four worlds. It turns out that one of these three cannot be falsified in any world, because of
the meanings of the predicates and function symbols it contains. Your goal in this problem is
to build a world in which all of the other sentences in Exercise 1.13 come out false. When you
have found such a world, submit it as World 1.14.

1.15
Suppose we have two first-order languages for talking about fathers. The first, which we'll
call the functional language, contains the names claire, melanie, and jon, the function symbol
father, and the predicates = and Taller. The second language, which we will call the relational
language, has the same names, no function symbols, and the binary predicates =, Taller, and
FatherOf, where FatherOf(c, b) means that c is the father of b. Translate the following atomic
sentences from the relational language into the functional language. Be careful. Some atomic
sentences, such as claire = claire, are in both languages! Such a sentence counts as a translation
of itself.
1. FatherOf(jon, claire)
2. FatherOf(jon, melanie)
3. Taller(claire, melanie)
Which of the following atomic sentences of the functional language can be translated into atomic
sentences of the relational language? Translate those that can be and explain the problem with
those that can't.
4. father(melanie) = jon
5. father(melanie) = father(claire)
6. Taller(father(claire), father(jon))
When we add connectives and quantifiers to the language, we will be able to translate freely
back and forth between the functional and relational languages.
1.16
Let's suppose that everyone has a favorite movie star. Given this assumption, make up a first-order language for talking about people and their favorite movie stars. Use a function symbol
that allows you to refer to an individual's favorite actor, plus a relation symbol that allows
you to say that one person is a better actor than another. Explain the interpretation of your
function and relation symbols, and then use your language to express the following claims:

1. Harrison is Nancy's favorite actor.
2. Nancy's favorite actor is better than Sean.
3. Nancy's favorite actor is better than Max's.
4. Claire's favorite actor's favorite actor is Brad.
5. Sean is his own favorite actor.

1.17
Make up a first-order language for talking about people and their relative heights. Instead of
using relation symbols like Taller, however, use a function symbol that allows you to refer to
people's heights, plus the relation symbols = and <. Explain the interpretation of your function
symbol, and then use your language to express the following two claims:
1. George is taller than Sam.
2. Sam and Mary are the same height.
Do you see any problem with this function symbol? If so, explain the problem. [Hint: What
happens if you apply the function symbol twice?]
1.18
For each sentence in the following list, suggest a translation into an atomic sentence of fol. In
addition to giving the translation, explain what kinds of objects your names refer to and the
intended meaning of the predicates and function symbols you use.
1. Indiana's capital is larger than California's.
2. Hitler's mistress died in 1945.
3. Max shook Claire's father's hand.
4. Max is his father's son.
5. John and Nancy's eldest child is younger than Jon and Mary Ellen's.
Section 1.6
The first-order language of set theory
Notice that there is one striking difference between the atomic sentences
of set theory and the atomic sentences of the blocks language. In the blocks
language, you can have a sentence, like LeftOf(a, b), that is true in a world,
but which can be made false simply by moving one of the objects. Moving
an object does not change the way the name works, but it can turn a true
sentence into a false one, just as the sentence Claire is sitting down can go
from true to false in virtue of Claire's standing up.
In set theory, we won't find this sort of thing happening. Here, the analog
of a world is just a domain of objects and sets. For example, our domain
might consist of all natural numbers, sets of natural numbers, sets of sets of
natural numbers, and so forth. The difference between these worlds and
those of Tarski's World is that the truth or falsity of the atomic sentences is
determined entirely once the reference of the names is fixed. There is nothing
that corresponds to moving the blocks around. Thus if the universe contains
2. For the purposes of this discussion we are assuming that numbers are not sets, and that
sets can contain either numbers or other sets as members.
the objects 2 and {2, 4, 6}, and if the names a and b are assigned to them,
then the atomic sentences must get the values indicated in the previous table.
The only way those values can change is if the names name different things.
Identity claims also work this way, both in set theory and in Tarski's World.
Exercises
1.19
Which of the following atomic sentences in the first-order language of set theory are true
and which are false? We use, in addition to a and b as above, the name c for 6 and d for
{2, 7, {2, 4, 6}}.
1. a ∈ c
2. a ∈ d
3. b ∈ c
4. b ∈ d
5. c ∈ d
6. c ∈ b
To answer this exercise, submit a Tarski's World sentence file with an uppercase T or F in each
sentence slot to indicate your assessment.
Section 1.7
The first-order language of arithmetic
While neither the blocks language as implemented in Tarski's World nor the
language of set theory has function symbols, there are languages that use
them extensively. One such first-order language is the language of arithmetic.
This language allows us to express statements about the natural numbers
0, 1, 2, 3, . . . , and the usual operations of addition and multiplication.
There are several more or less equivalent ways of setting up this language.
The one we will use has two names, 0 and 1, two binary relation symbols, =
and <, and two binary function symbols, + and ×. The atomic sentences are
those that can be built up out of these symbols. We will use infix notation
both for the relation symbols and the function symbols.
Notice that there are infinitely many different terms in this language (for
example, 0, 1, (1 + 1), ((1 + 1) + 1), (((1 + 1) + 1) + 1), . . . ), and so an infinite
number of atomic sentences. Our list also shows that every natural number is
named by some term of the language. This raises the question of how we can
specify the set of terms in a precise way. We can't list them all explicitly, since
there are too many. The way we get around this is by using what is known as
an inductive definition.
Definition The terms of first-order arithmetic are formed in the following
way:

1. The names 0 and 1 are terms.
2. If t1 and t2 are terms, then so are (t1 + t2) and (t1 × t2).

The atomic sentences of first-order arithmetic are formed by combining two terms with one of the binary relation symbols, = or <. For example, the following is an atomic sentence of arithmetic:

(1 × 1) < (1 + 1)
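The inductive structure of terms is exactly what makes them easy to process recursively. As an illustration of our own (not part of the book's software), here is a sketch that evaluates a term, represented as a nested tuple, to the natural number it refers to:

```python
# A term is either the name "0", the name "1", or a tuple (op, t1, t2)
# where op is "+" or "×" and t1 and t2 are themselves terms,
# mirroring the two clauses of the inductive definition.
def evaluate(term):
    """Return the natural number that a term of first-order arithmetic refers to."""
    if term == "0":
        return 0
    if term == "1":
        return 1
    op, t1, t2 = term
    if op == "+":
        return evaluate(t1) + evaluate(t2)
    if op == "×":
        return evaluate(t1) * evaluate(t2)
    raise ValueError(f"not a term: {term!r}")

# ((1 + 1) + 1) names the number 3:
print(evaluate(("+", ("+", "1", "1"), "1")))  # 3

# (1 × 1) < (1 + 1) is a true atomic sentence, since 1 < 2:
print(evaluate(("×", "1", "1")) < evaluate(("+", "1", "1")))  # True
```

The recursion terminates because every term is built from 0 and 1 in finitely many applications of the two clauses.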
Exercises
1.20
Show that the following expressions are terms in the first-order language of arithmetic. Do this
by explaining which clauses of the definition are applied and in what order. What numbers do
they refer to?

1. (0 + 0)
2. (0 + (1 × 0))
3. ((1 + 1) + ((1 + 1) × (1 + 1)))
4. (((1 × 1) × 1) × 1)

1.21
Find a way to express the fact that three is less than four using the first-order language of
arithmetic.
Section 1.8
Alternative notation
As we said before, fol is like a family of languages. But, as if that were not
enough diversity, even the very same first-order language comes in a variety
of dialects. Indeed, almost no two logic books use exactly the same notational
conventions in writing first-order sentences. For this reason, it is important
to have some familiarity with the different dialects, the different notational
conventions, and to be able to translate smoothly between them. At the end
of most chapters, we discuss common notational differences that you are likely
to encounter.
Some notational differences, though not many, occur even at the level of
atomic sentences. For example, some authors insist on putting parentheses
around atomic sentences whose binary predicates are in infix position. So
(a = b) is used rather than a = b. By contrast, some authors omit parentheses
surrounding the argument positions (and the commas between them) when
the predicate is in prefix position. These authors use Rab instead of R(a, b).
We have opted for the latter simply because we use predicates made up of
several letters, and the parentheses make it clear where the predicate ends
and the arguments begin: Cubed is not nearly as perspicuous as Cube(d).
What is important in these choices is that sentences should be unambiguous and easy to read. Typically, the first aim requires parentheses to be used in
one way or another, while the second suggests using no more than is necessary.
Chapter 2
The Logic of Atomic Sentences
One difference between these two arguments is the placement of the conclusion. In the first argument, the conclusion comes at the end, while in the
second, it comes at the start. This is indicated by the words so and after all,
respectively. A more important difference is that the first argument is good,
while the second is bad. We will say that the first argument is logically valid,
or that its conclusion is a logical consequence of its premises. The reason we
say this is that it is impossible for this conclusion to be false if the premises are
true. In contrast, our second conclusion might be false (suppose Lucretius is
my pet goldfish), even though the premises are true (goldfish are notoriously
mortal). The second conclusion is not a logical consequence of its premises.
Roughly speaking, an argument is logically valid if and only if the conclusion must be true on the assumption that the premises are true. Notice that
this does not mean that an arguments premises have to be true in order for it
to be valid. When we give arguments, we naturally intend the premises to be
true, but sometimes we're wrong about that. We'll say more about this possibility in a minute. In the meantime, note that our first example above would
be a valid argument even if it turned out that we were mistaken about one
of the premises, say if Socrates turned out to be a robot rather than a man.
It would still be impossible for the premises to be true and the conclusion
false. In that eventuality, we would still say that the argument was logically
valid, but since it had a false premise, we would not be guaranteed that the
conclusion was true. It would be a valid argument with a false premise.
Here is another example of a valid argument, this time one expressed in
the blocks language. Suppose we are told that Cube(c) and that c = b. Then it
certainly follows that Cube(b). Why? Because there is no possible way for the
premises to be true (for c to be a cube and for c to be the very same object
as b) without the conclusion being true as well. Note that we can recognize
that the last statement is a consequence of the first two without knowing that
the premises are actually, as a matter of fact, true. For the crucial observation
is that if the premises are true, then the conclusion must also be true.
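For an argument this small, the impossibility claim can even be checked mechanically. The following sketch (our illustration, not part of the book's software) enumerates every way of interpreting the names c and b and the predicate Cube over a small two-object domain, and confirms that no interpretation makes both premises true and the conclusion false. An exhaustive search over a fixed finite domain is of course only an illustration of validity, not a proof of it for all possible worlds:

```python
from itertools import product

domain = ["obj1", "obj2"]  # a small domain suffices to illustrate the point

counterexamples = 0
# An interpretation assigns each name an object and each object a cube-status.
for c_ref, b_ref in product(domain, repeat=2):
    for cubes in product([True, False], repeat=len(domain)):
        is_cube = dict(zip(domain, cubes))
        premise1 = is_cube[c_ref]     # Cube(c)
        premise2 = c_ref == b_ref     # c = b
        conclusion = is_cube[b_ref]   # Cube(b)
        if premise1 and premise2 and not conclusion:
            counterexamples += 1

print(counterexamples)  # 0: no interpretation makes the premises true and the conclusion false
```

The count is zero for a simple reason visible in the code: whenever premise2 holds, c_ref and b_ref are the same object, so Cube(c) and Cube(b) look up the very same entry.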
A valid argument is one that guarantees the truth of its conclusion on
the assumption that the premises are true. Now, as we said before, when we
actually present arguments, we want them to be more than just valid: we also
want the premises to be true. If an argument is valid and the premises are also
true, then the argument is said to be sound. Thus a sound argument ensures
the truth of its conclusion. The argument about Socrates given above was not
only valid, it was sound, since its premises were true. (He was not, contrary
to rumors, a robot.) But here is an example of a valid argument that is not
sound:
All rich actors are good actors. Brad Pitt is a rich actor. So he must
be a good actor.
The reason this argument is unsound is that its first premise is false.
Because of this, although the argument is indeed valid, we are not assured
that the conclusion is true. It may be, but then again it may not. We in fact
think that Brad Pitt is a good actor, but the present argument does not show
this.
Logic focuses, for the most part, on the validity of arguments, rather than
their soundness. There is a simple reason for this. The truth of an argument's
premises is generally an issue that is none of the logician's business: the truth
of Socrates is a man is something historians had to ascertain; the falsity of
of Socrates is a man is something historians had to ascertain; the falsity of
All rich actors are good actors is something a movie critic might weigh in
about. What logicians can tell you is how to reason correctly, given what you
know or believe to be true. Making sure that the premises of your arguments
are true is something that, by and large, we leave up to you.
In this book, we often use a special format to display arguments, which we
call Fitch format after the logician Frederic Fitch. The format makes clear
which sentences are premises and which is the conclusion. In Fitch format, we
would display the above, unsound argument like this:
  All rich actors are good actors.
  Brad Pitt is a rich actor.
  --------------------------------
  Brad Pitt is a good actor.

The horizontal line separating the premises from the conclusion is called the Fitch bar. When arguments are stated in English, words like so and after all indicate which sentence is the conclusion; in Fitch
format, the Fitch bar gives us this information, and so these words are no longer needed.
Remember
1. An argument is a series of statements in which one, called the conclusion, is meant to be a consequence of the others, called the premises.
2. An argument is valid if the conclusion must be true in any circumstance in which the premises are true. We say that the conclusion of
a logically valid argument is a logical consequence of its premises.
3. An argument is sound if it is valid and the premises are all true.
Exercises
2.1
(Classifying arguments) Open the file Socrates Sentences. This file contains eight arguments
separated by dashed lines, with the premises and conclusion of each labeled.
1. In the first column of the following table, classify each of these arguments as valid or
invalid. In making these assessments, you may presuppose any general features of the
worlds that can be built in Tarski's World (for example, that two blocks cannot occupy
the same square on the grid).
Argument   Valid?   Sound in Socrates' World?   Sound in Wittgenstein's World?
1.
2.
3.
4.
5.
6.
7.
8.
2. Now open Socrates' World and evaluate each sentence. Use the results of your evaluation
to enter sound or unsound in each row of the second column in the table, depending on
whether the argument is sound or unsound in this world. (Remember that only valid
arguments can be sound; invalid arguments are automatically unsound.)
3. Open Wittgenstein's World and fill in the third column of the table.
4. For each argument that you have marked invalid in the table, construct a world in
which the argument's premises are all true but the conclusion is false. Submit the
world as World 2.1.x, where x is the number of the argument. (If you have trouble
doing this, you may want to rethink your assessment of the argument's validity.) Turn
in your completed table to your instructor.
This problem makes a very important point, one that students of logic sometimes forget. The
point is that the validity of an argument depends only on the argument, not on facts about
the specific world the statements are about. The soundness of an argument, on the other hand,
depends on both the argument and the world.
By the way, the Grade Grinder will only tell you that the files that you submit are or are
not counterexamples. For obvious reasons, if there is a counterexample to an argument but you
don't submit one, the Grade Grinder will not complain (to you, but it will tell the instructor).
2.2
(Classifying arguments) For each of the arguments below, identify the premises and conclusion
by putting the argument into Fitch format. Then say whether the argument is valid. For the
first five arguments, also give your opinion about whether they are sound. (Remember that
only valid arguments can be sound.) If your assessment of an argument depends on particular
interpretations of the predicates, explain these dependencies.

1. Anyone who wins an academy award is famous. Meryl Streep won an academy award.
Hence, Meryl Streep is famous.
2. Harrison Ford is not famous. After all, actors who win academy awards are famous,
and he has never won one.
3. The right to bear arms is the most important freedom. Charlton Heston said so, and
he's never wrong.
4. Al Gore must be dishonest. After all, he's a politician and hardly any politicians are
honest.
5. Mark Twain lived in Hannibal, Missouri, since Sam Clemens was born there, and Mark
Twain is Sam Clemens.
6. No one under 21 bought beer here last night, officer. Geez, we were closed, so no one
bought anything last night.
7. Claire must live on the same street as Laura, since she lives on the same street as Max
and he and Laura live on the same street.

2.3
For each of the arguments below, identify the premises and conclusion by putting the argument
into Fitch format, and state whether the argument is valid. If your assessment of an argument
depends on particular interpretations of the predicates, explain these dependencies.
1. Many of the students in the film class attend film screenings. Consequently, there must
be many students in the film class.
2. There are few students in the film class, but many of them attend the film screenings.
So there are many students in the film class.
Section 2.1
3. There are many students in the film class. After all, many students attend film screenings and only students in the film class attend screenings.
4. There are thirty students in my logic class. Some of the students turned in their
homework on time. Most of the students went to the all-night party. So some student
who went to the party managed to turn in the homework on time.
5. There are thirty students in my logic class. Some student who went to the all-night
party must have turned in the homework on time. Some of the students turned in their
homework on time, and they all went to the party.
6. There are thirty students in my logic class. Most of the students turned in their homework on time. Most of the students went to the all-night party. Thus, some student
who went to the party turned in the homework on time.
2.4
(Validity and truth) Can a valid argument have false premises and a false conclusion? False
premises and a true conclusion? True premises and a false conclusion? True premises and a
true conclusion? If you answer yes to any of these, give an example of such an argument. If
your answer is no, explain why.
Section 2.2
Methods of proof
You then began to prove conclusions, called theorems, from these axioms. As
you went on to prove more interesting theorems, your proofs would cite earlier
theorems. These earlier theorems were treated as intermediate conclusions in
justifying the new results. What this means is that the complete proofs of
the later theorems really include the proofs of the earlier theorems that they
presuppose. Thus, if they were written out in full, they would contain hundreds
or perhaps thousands of steps. Now suppose we only insisted that each step
show with probability .99 that the conclusion follows from the premises. Then
each step in such a proof would be a pretty good bet, but given a long enough
proof, the proof would carry virtually no weight at all about the truth of the
conclusion.
This demand for certainty becomes even more important in proofs done by
computers. Nowadays, theorems are sometimes proven by computers, and the
proofs can be millions of steps long. If we allowed even the slightest uncertainty
in the individual steps, then this uncertainty would multiply until the alleged
proof made the truth of the conclusion no more likely than its falsity.
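The arithmetic behind this point is worth checking for yourself. If each step of a proof were only 99% reliable, and (a simplifying assumption of ours) the steps failed independently, the reliability of the whole chain would decay as 0.99 raised to the number of steps:

```python
# How quickly confidence decays when each proof step is only
# 99% reliable and failures are assumed independent.
for n in [1, 10, 100, 1000, 1_000_000]:
    print(f"{n:>9} steps: {0.99 ** n:.6g}")
```

Already at a thousand steps the chain is almost certainly broken somewhere, which is why each individual step is required to be certain, not merely a good bet.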
Each time we introduce new types of expressions into our language, we will
discuss new methods of proof supported by those expressions. We begin by
discussing the main informal methods of proof used in mathematics, science,
and everyday life, emphasizing the more important methods like indirect and
conditional proof. Following this discussion we will formalize the methods
by incorporating them into what we call a formal system of deduction. A
formal system of deduction uses a fixed set of rules specifying what counts as
an acceptable step in a proof.
The difference between an informal proof and a formal proof is not one of
rigor, but of style. An informal proof of the sort used by mathematicians is
every bit as rigorous as a formal proof. But it is stated in English and is usually more free-wheeling, leaving out the more obvious steps. For example, we
could present our earlier argument about Socrates in the form of the following
informal proof:
Proof: Since Socrates is a man and all men are mortal, it follows
that Socrates is mortal. But all mortals will eventually die, since
that is what it means to be mortal. So Socrates will eventually die.
But we are given that everyone who will eventually die sometimes
worries about it. Hence Socrates sometimes worries about dying.
A formal proof, by contrast, employs a fixed stock of rules and a highly stylized method of presentation. For example, the simple argument from Cube(c)
and c = b to Cube(b) discussed in the last section will, in our formal system,
take the following form:
| 1. Cube(c)
| 2. c = b
|-----
| 3. Cube(b)    = Elim: 1, 2
As you can see, we use an extension of the Fitch format as a way of presenting
formal proofs. The main difference is that a formal proof will usually have more
than one step following the Fitch bar (though not in this example), and each
of these steps will be justified by citing a rule of the formal system. We will
explain later the various conventions used in formal proofs.
In the course of this book you will learn how to give both informal and
formal proofs. We do not want to give the impression that formal proofs are
somehow better than informal proofs. On the contrary, for purposes of proving
things for ourselves, or communicating proofs to others, informal methods are
usually preferable. Formal proofs come into their own in two ways. One is that
they display the logical structure of a proof in a form that can be mechanically
checked. There are advantages to this, if you are a logic teacher grading lots
of homework, a computer, or not inclined to think for some other reason. The
other is that they allow us to prove things about provability itself, such as
Gödel's Completeness Theorem and Incompleteness Theorems, discussed in
the final section of this book.
Remember
1. A proof of a statement S from premises P1 , . . . , Pn is a step-by-step
demonstration which shows that S must be true in any circumstances
in which the premises P1 , . . . , Pn are all true.
2. Informal and formal proofs differ in style, not in rigor.
The formal rule corresponding to the indiscernibility of identicals is called Identity Elimination, abbreviated = Elim. The reason for this name is that an application
of this rule eliminates a use of the identity symbol when we move from the
premises of the argument to its conclusion. We will have another rule that
introduces the identity symbol.
The principle of identity elimination is used repeatedly in mathematics.
For example, the following derivation uses the principle in conjunction with
the well-known algebraic identity x² − 1 = (x − 1)(x + 1):

x² > x² − 1

so

x² > (x − 1)(x + 1)
We are all familiar with reasoning that uses such substitutions repeatedly.
Another principle, so simple that one often overlooks it, is the so-called
reflexivity of identity. The formal rule corresponding to it is called Identity
Introduction, or = Intro, since it allows us to introduce identity statements
into proofs. It tells us that any sentence of the form a = a can be validly
inferred from whatever premises are at hand, or from no premises at all. This
is because of the assumption made in fol that names always refer to one and
only one object. This is not true about English, as we have noted before. But
it is in fol, which means that in a proof you can always take any name a
that is in use and assert a = a, if it suits your purpose for some reason. (As a
matter of fact, it is rarely of much use.) Gertrude Stein was surely referring
to this principle when she observed "A rose is a rose is a rose."
Another principle, a bit more useful, is that of the symmetry of identity. It
allows us to conclude b = a from a = b. Actually, if we wanted, we could derive
this as a consequence of our first two principles, by means of the following
proof.
Proof: Suppose that a = b. We know that a = a, by the reflexivity
of identity. Now substitute the name b for the first use of the name
a in a = a, using the indiscernibility of identicals. We come up with
b = a, as desired.
The previous paragraph is another example of an informal proof. In an
informal proof, we often begin by stating the premises or assumptions of the
proof, and then explain in a step-by-step fashion how we can get from these
assumptions to the desired conclusion. There are no strict rules about how
detailed the explanation needs to be. This depends on the sophistication of
the intended audience for the proof. But each step must be phrased in clear
and unambiguous English, and the validity of the step must be apparent. In
the next section, we will see how to formalize the above proof.
A third principle about identity that bears noting is its so-called transitivity. If a = b and b = c are both true, then so is a = c. This is so obvious
that there is no particular need to prove it, but it can be proved using the
indiscernibility of identicals. (See Exercise 2.5.)
If you are using a language that contains function symbols (introduced
in the optional Section 1.5), the identity principles we've discussed also hold
for complex terms built up using function symbols. For example, if you know
that Happy(john) and john = father(max), you can use identity elimination to
conclude Happy(father(max)), even though father(max) is a complex term, not
a name. In fact, the example where we substituted (x − 1)(x + 1) for x² − 1
also applied the indiscernibility of identicals to complex terms.
Remember
There are four important principles that hold of the identity relation:
1. = Elim: If b = c, then whatever holds of b holds of c. This is also
known as the indiscernibility of identicals.
2. = Intro: Sentences of the form b = b are always true (in fol). This
is also known as the reflexivity of identity.
3. Symmetry of Identity: If b = c, then c = b.
4. Transitivity of Identity: If a = b and b = c, then a = c.
The latter two principles follow from the first two.
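To see concretely how the latter two principles follow from the first two, we can mechanize the substitution that = Elim performs. The little helper below is our own illustration, not part of the system F; it treats sentences as strings and assumes names are single lower-case letters:

```python
import re

def elim(premise, identity, occurrence=0):
    """= Elim as substitution: replace one occurrence of n with m
    in premise, where identity is the string 'n = m'."""
    n, m = [t.strip() for t in identity.split('=')]
    spots = [s for s in re.finditer(r'\b[a-z]\b', premise) if s.group() == n]
    s = spots[occurrence]
    return premise[:s.start()] + m + premise[s.end():]

# Symmetry: start from a = a (licensed by = Intro) and use a = b
# to replace the first a, just as in the informal proof earlier.
assert elim('a = a', 'a = b', occurrence=0) == 'b = a'

# Transitivity: in a = b, use b = c to replace the b.
assert elim('a = b', 'b = c') == 'a = c'
```

The full = Elim principle allows replacing any number of occurrences; replacing one at a time is already enough for both derivations.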
Exercises
2.5
"
Give an informal proof of the transitivity of identity, using only the indiscernibility
of identicals:

b = c
a = b
a = c

2.6
"
Give an informal proof that the following argument is valid. If you proved the
transitivity of identity by doing Exercise 2.5, you may use this principle; otherwise,
use only the indiscernibility of identicals.

SameRow(a, a)
a = b
b = c
SameRow(c, a)
2.7
"
Given the meanings of the atomic predicates in the blocks language, assess the following arguments for
validity. (You may again assume any general facts about the worlds that can be built in Tarski's World.)
If the argument is valid, give an informal proof of its validity and turn it in on paper to your instructor.
If the conclusion is not a consequence of the premises, submit a world in which the premises are true
and the conclusion false.
2.8
!|"
Large(a)
Larger(a, c)
2.9
!|"
Small(c)
2.11
!|"
LeftOf(a, b)
RightOf(c, a)
LeftOf(b, c)
LeftOf(a, b)
b=c
2.10
!|"
RightOf(c, a)
2.12
!|"
BackOf(a, b)
FrontOf(a, c)
FrontOf(b, c)
SameSize(b, c)
SameShape(b, c)
b=c
2.13
!|"
SameSize(a, b)
Larger(a, c)
Smaller(d, c)
Smaller(d, b)
2.14
!|"
Between(b, a, c)
LeftOf(a, c)
LeftOf(a, b)
Section 2.3
Formal proofs
In this section we will begin introducing our system for presenting formal
proofs, what is known as a deductive system. There are many different
styles of deductive systems. The system we present in the first two parts of
the book, which we will call F, is a Fitch-style system, so called because
Frederic Fitch first introduced this format for giving proofs. We will look at
a very different deductive system in Part IV, one known as the resolution
method, which is of considerable importance in computer science.
In the system F, a proof of a conclusion S from premises P, Q, and R looks
very much like an argument presented in Fitch format. The main difference is
that the proof displays, in addition to the conclusion S, all of the intermediate
conclusions S1 , . . . , Sn that we derive in getting from the premises to the
conclusion S:
| P
| Q
| R
|-----
| S1    Justification 1
| ...
| Sn    Justification n
| S     Justification n+1
There are two graphical devices to notice here, the vertical and horizontal
lines. The vertical line that runs on the left of the steps draws our attention
to the fact that we have a single purported proof consisting of a sequence
of several steps. The horizontal Fitch bar indicates the division between the
claims that are assumed and those that allegedly follow from them. Thus the
fact that P, Q, and R are above the bar shows that these are the premises of
our proof, while the fact that S1 , . . . , Sn , and S are below the bar shows that
these sentences are supposed to follow logically from the premises.
Notice that on the right of every step below the Fitch bar, we give a
justification of the step. In our deductive system, a justification indicates
which rule allows us to make the step, and which earlier steps (if any) the rule
is applied to. In giving an actual formal proof, we will number the steps, so
we can refer to them in justifying later steps.
We already gave one example of a formal proof in the system F, back on
page 48. For another example, here is a formalization of our informal proof of
the symmetry of identity.
| 1. a = b
|-----
| 2. a = a    = Intro
| 3. b = a    = Elim: 2, 1
In the right hand margin of this proof you find a justification for each step
below the Fitch bar. These are applications of rules we are about to introduce.
The numbers at the right of step 3 show that this step follows from steps 2
and 1 by means of the rule cited.
The first rule we use in the above proof is Identity Introduction. This
rule allows you to introduce, for any name (or complex term) n in use in
the proof, the assertion n = n. You are allowed to do this at any step in the
proof, and need not cite any earlier step as justification. We will abbreviate
our statement of this rule in the following way:
= Intro:

⊳ n = n
We don't do this because there are just too many such rules. We could state
them for a few predicates, but certainly not all of the predicates you will
encounter in first-order languages.
There is one rule that is not technically necessary, but which will make
some proofs look more natural. This rule is called Reiteration, and simply
allows you to repeat an earlier step, if you so desire.
Reiteration (Reit):
P
...
⊳ P
To use the Reiteration rule, just repeat the sentence in question and, on the
right, write Reit: x, where x is the number of the earlier occurrence of the
sentence.
= Elim: 1, ?
Since we have already seen how to prove the symmetry of identity, we can
now fill in all the steps of the proof. The finished proof looks like this. Make
sure you understand why all the steps are there and how we arrived at them.
| 1. SameRow(a, a)
| 2. b = a
|-----
| 3. b = b           = Intro
| 4. a = b           = Elim: 3, 2
| 5. SameRow(b, a)   = Elim: 1, 4
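To make the bookkeeping in such proofs concrete, here is a toy verifier for the two identity rules. It is our own sketch, not how Fitch is implemented; it assumes names are single lower-case letters and that = Elim replaces exactly one occurrence:

```python
import re

def is_intro(sentence):
    # = Intro: any sentence of the form n = n.
    parts = [p.strip() for p in sentence.split('=')]
    return len(parts) == 2 and parts[0] == parts[1]

def is_elim(conclusion, premise, identity):
    # = Elim: conclusion comes from premise by replacing one
    # occurrence of n with m, where identity is 'n = m'.
    n, m = [p.strip() for p in identity.split('=')]
    for spot in re.finditer(r'\b[a-z]\b', premise):
        if spot.group() == n:
            candidate = premise[:spot.start()] + m + premise[spot.end():]
            if candidate == conclusion:
                return True
    return False

# The completed proof above: steps 1 and 2 are the premises.
steps = {1: 'SameRow(a, a)', 2: 'b = a', 3: 'b = b',
         4: 'a = b', 5: 'SameRow(b, a)'}
assert is_intro(steps[3])                     # 3.  = Intro
assert is_elim(steps[4], steps[3], steps[2])  # 4.  = Elim: 3, 2
assert is_elim(steps[5], steps[1], steps[4])  # 5.  = Elim: 1, 4
```

The real rule also allows replacing several occurrences at once, and applies to complex terms; handling those cases is what a full checker must add.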
Section 2.4
Constructing proofs in Fitch
You try it
................................................................
2. Before we start to construct the proof, notice that at the bottom of the
proof window there is a separate pane called the goal strip, containing the goal of the proof. In this case the goal is to prove the sentence
SameRow(b, a). If we successfully satisfy this goal, we will be able to get
Fitch to put a checkmark to the right of the goal.
3. Let's construct the proof. What we need to do is fill in the steps needed
to complete the proof, just as we did at the end of the last section. Add
a new step to the proof by choosing Add Step After from the Proof
menu. In the new step, enter the sentence a = b, either by typing it in or
by using the toolbar at the top of the proof window. We will first use this
step to get our conclusion and then go back and prove this step.
4. Once you have entered a = b, add another step below this and enter the
goal sentence SameRow(b, a). Use the mouse to click on the word Rule?
that appears to the right of SameRow(b, a). In the menu that pops up, go
to the Elimination Rules and select =. If you did this right, the rule name
should now say = Elim. If not, try again.
"
5. Next cite the first premise and the intermediate sentence you first entered.
You do this in Fitch by clicking on the two sentences, in either order. If
you click on the wrong one, just click again and it will be un-cited. Once
you have the right sentences cited, choose Verify Proof from the Proof
menu. The last step should now check out, as it is a valid instance of =
Elim. The step containing a = b will not check out, since we haven't yet
indicated what it follows from. Nor will the goal check out, since we don't
yet have a complete proof of SameRow(b, a). All in good time.
"
6. Now add a step before the first introduced step (the one containing a = b),
and enter the sentence b = b. Do this by moving the focus slider (the
triangle in the left margin) to the step containing a = b and choosing
Add Step Before from the Proof menu. (If the new step appears in
the wrong place, choose Delete Step from the Proof menu.) Enter the
sentence b = b and justify it by using the rule = Intro. Check the step.
"
7. Finally, justify the step containing a = b by using the = Elim rule. You
will need to move the focus slider to this step, and then cite the second
premise and the sentence b = b. Now the whole proof, including the goal,
should check out. To find out if it does, choose Verify Proof from the
Proof menu. The proof should look like the completed proof on page 57,
except for the absence of numbers on the steps. (Try out Show Step
Numbers from the Proof menu now. The highlighting on support steps
will go away and numbers will appear, just like in the book.)
"
8. We mentioned earlier that Fitch lets you take some shortcuts, allowing
you to do things in one step that would take several if we adhered strictly
to F. This proof is a case in point. We have constructed a proof that falls
under F but Fitch actually has symmetry of identity built into = Elim.
So we could prove the conclusion directly from the two premises, using a
single application of the rule = Elim. We'll do this next.
"
9. Add another step at the very end of your proof. Here's a trick you will find
handy: Click on the goal sentence at the very bottom of the window. This
puts the focus on the goal sentence. Choose Copy from the Edit menu,
and then click back on the empty step at the end of your proof. Choose
Paste from the Edit menu and the goal sentence will be entered into this
step. This time, justify the new step using = Elim and citing just the two
premises. You will see that the step checks out.
10. Save your proof as Proof Identity 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Since the proof system F does not have any rules for atomic predicates
other than identity, neither does Fitch. However, Fitch does have a mechanism that, among other things, lets you check for consequences among atomic
sentences that involve many of the predicates in the blocks world language.1
This is a rule we call Analytic Consequence or Ana Con for short. Ana
Con is not restricted to atomic sentences, but that is the only application
of the rule we will discuss at the moment. This rule allows you to cite some
sentences in support of a claim if any world that makes the cited sentences
true also makes the conclusion true, given the meaning of the predicates as
used in Tarski's World. Let's get a feeling for Ana Con with some examples.
You try it
................................................................
!
1. Use Fitch to open the file Ana Con 1. In this file you will find nine premises
followed by six conclusions that are consequences of these premises. Indeed,
each of the conclusions follows from three or fewer of the premises.
2. Position the focus slider (the little triangle) at the first conclusion following
the Fitch bar, SameShape(c, b). We have invoked the rule Ana Con but
we have not cited any sentences. This conclusion follows from Cube(b) and
Cube(c). Cite these sentences and check the step.
3. Now move the focus slider to the step containing SameRow(b, a). Since
the relation of being in the same row is symmetric and transitive, this
follows from SameRow(b, c) and SameRow(a, c). Cite these two sentences
and check the step.
¹This mechanism does not handle the predicates Adjoins and Between, due to the complexity of the ways the meanings of these predicates interact with the others.
4. The third conclusion, BackOf(e, c), follows from three of the premises. See
if you can find them. Cite them. If you get it wrong, Fitch will give you
an X when you try to check the step.
"
5. Now fill in the citations needed to make the fourth and fifth conclusions
check out. For these, you will have to invoke the Ana Con rule yourself.
(You will find the rule on the Con submenu of the Rule? popup.)
"
6. The final conclusion, SameCol(b, b), does not require that any premises be
cited in support. It is simply an analytic truth, that is, true in virtue of
its meaning. Specify the rule and check this step.
"
7. When you are done, choose Verify Proof to see that all the goals check
out. Save your work as Proof Ana Con 1.
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
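The idea behind Ana Con, that every world making the cited sentences true also makes the conclusion true, given what the predicates mean, can be sketched by brute force when only shapes matter. The enumeration below is our own illustration, not Fitch's actual procedure:

```python
from itertools import product

blocks = ['a', 'b', 'c']
shapes = ['cube', 'tet', 'dodec']

# Every possible world, as far as the shapes of a, b, c are concerned.
worlds = [dict(zip(blocks, combo)) for combo in product(shapes, repeat=3)]

# SameShape(c, b) follows from Cube(b) and Cube(c): every world in
# which both cited sentences hold also makes the conclusion true.
assert all(w['c'] == w['b']
           for w in worlds if w['b'] == 'cube' and w['c'] == 'cube')

# Without the cited sentences it does not follow: some world makes
# b and c differ in shape.
assert any(w['c'] != w['b'] for w in worlds)
```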
The Ana Con mechanism is not really a rule, technically speaking, though
we will continue to call it that since it appears on the Rule? menu in Fitch.
This mechanism, along with the two others appearing on the Con submenu,
applies complicated procedures to see whether the sentence in question follows
from the cited sentences. As we will explain later, these three items try to find
proofs of the sentence in question behind the scenes, and then give you a
checkmark if they succeed. The proof they find may in fact apply many, many
different rules in getting from the cited steps to the target sentence.
The main difference you will run into between the genuine rules in Fitch
and the mechanisms appearing on the Con menu is that the latter rules
will sometimes fail even though your step is actually correct. With the genuine
rules, Fitch will always give your step either a checkmark or an X, depending
on whether the rule is applied correctly. But with the Con mechanisms, Fitch
will sometimes try to find a proof of the target sentence but fail. In these
cases, Fitch will give the step a question mark rather than a check or an X,
since there might be a complicated proof that it just couldn't find.
To mark the difference between the genuine rules of F and the three consequence mechanisms, Fitch displays the rule names in green and the consequence mechanisms in blue. Because the Con mechanisms look for a proof
behind the scenes, we will often ask you not to use them in giving solutions to
homework problems. After all, the point is not to have Fitch do your homework for you! In the following problems, you should only use the Ana Con
rule if we explicitly say you can. To see whether a problem allows you to use
any of the Con mechanisms, double click on the goal or choose View Goal
Constraints from the Goal menu.
Section 2.4
Remember
The deductive system you will be learning is a Fitch-style deductive system, named F. The computer application that assists you in constructing
proofs in F is therefore called Fitch. If you write out your proofs on paper,
you are using the system F, but not the program Fitch.
Exercises
2.15
!
2.16
!
If you skipped the You try it sections, go back and do them now. Submit the files Proof
Identity 1 and Proof Ana Con 1.
Use Fitch to give a formal version of the informal proof you gave in Exercise 2.5. Remember,
you will find the problem setup in the file Exercise 2.16. You should begin your proof from this
saved file. Save your completed proof as Proof 2.16.
In the following exercises, use Fitch to construct a formal proof that the conclusion is a consequence of
the premises. Remember, begin your proof by opening the corresponding file, Exercise 2.x, and save your
solution as Proof 2.x. We're going to stop reminding you.
2.17
SameCol(a, b)
b = c
c = d
SameCol(a, d)

2.18
!
Smaller(a, b)
Smaller(b, c)
Smaller(a, c)

You will need to use Ana Con in this proof. This proof shows that the predicate
Smaller in the blocks language is transitive.

2.19
Between(a, d, b)
a = c
e = b
Between(c, d, e)

2.20
!
RightOf(b, c)
LeftOf(d, e)
b = d
LeftOf(c, e)

Make your proof parallel the informal proof we gave on page 52, using both
an identity rule and Ana Con (where necessary).
Section 2.5
Demonstrating nonconsequence
Proofs come in a variety of different forms. When a mathematician proves
a theorem, or when a prosecutor proves a defendant's guilt, they are showing that a particular claim follows from certain accepted information, the
information they take as given. This kind of proof is what we call a proof of
consequence, a proof that a particular piece of information must be true if the
given information, the premises of the argument, are correct.
A very different, but equally important kind of proof is a proof of nonconsequence. When a defense attorney shows that the crime might have been committed by someone other than the client, say by the butler, the attorney is
trying to prove that the client's guilt does not follow from the evidence in the
case. When mathematicians show that the parallel postulate is not a consequence of the other axioms of Euclidean geometry, they are doing the same
thing: they are showing that it would be possible for the claim in question (the
parallel postulate) to be false, even if the other information (the remaining
axioms) is true.
We have introduced a few methods for demonstrating the validity of an
argument, for showing that its conclusion is a consequence of its premises. We
will be returning to this topic repeatedly in the chapters that follow, adding
new tools for demonstrating consequence as we add new expressions to our
language. In this section, we discuss the most important method for demonstrating nonconsequence, that is, for showing that some purported conclusion
is not a consequence of the premises provided in the argument.
Recall that logical consequence was defined in terms of the validity of
arguments. An argument is valid if every possible circumstance that makes
the premises of the argument true also makes the conclusion true. Put the
other way around, the argument is invalid if there is some circumstance that
makes the premises true but the conclusion false. Finding such a circumstance
is the key to demonstrating nonconsequence.
To show that a sentence Q is not a consequence of premises P1 , . . . , Pn ,
we must show that the argument with premises P1 , . . . , Pn and conclusion Q
is invalid. This requires us to demonstrate that it is possible for P1 , . . . , Pn to
be true while Q is simultaneously false. That is, we must show that there is
a possible situation or circumstance in which the premises are all true while
the conclusion is false. Such a circumstance is said to be a counterexample to
the argument.
Informal proofs of nonconsequence can resort to many ingenious ways of showing
that it is possible for the premises of an argument to be true while its conclusion
is false. Consider, for example, the following argument:
Al Gore is a politician.
Hardly any politicians are honest.
Al Gore is dishonest.
If the premises of this argument are true, then the conclusion is likely. But
still the argument is not valid: the conclusion is not a logical consequence of
the premises. How can we see this? Well, imagine a situation where there are
10,000 politicians, and that Al Gore is the only honest one of the lot. In such
circumstances both premises would be true but the conclusion would be false.
Such a situation is a counterexample to the argument; it demonstrates that
the argument is invalid.
What we have just given is an informal proof of nonconsequence. Are
there such things as formal proofs of nonconsequence, similar to the formal
proofs of validity constructed in F? In general, no. But we will define the
notion of a formal proof of nonconsequence for the blocks language used in
Tarski's World. These formal proofs of nonconsequence are simply stylized
counterparts of informal counterexamples.
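The imagined circumstance in the politician example can be spelled out as a concrete structure. Here we stipulate, since the English is vague, that "hardly any" means fewer than one percent:

```python
# A counterexample world: 10,000 politicians, Al Gore the only honest one.
politicians = {f'politician{i}' for i in range(9999)} | {'Al Gore'}
honest = {'Al Gore'}

premise1 = 'Al Gore' in politicians               # Al Gore is a politician
premise2 = len(honest) / len(politicians) < 0.01  # hardly any politicians are honest
conclusion = 'Al Gore' not in honest              # Al Gore is dishonest

# Both premises hold in this circumstance, yet the conclusion is false:
assert premise1 and premise2 and not conclusion
```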
For the blocks language, we will say that a formal proof that Q is not a
consequence of P1 , . . . , Pn consists of a sentence file with P1 , . . . , Pn labeled
as premises, Q labeled as conclusion, and a world file that makes each of
P1 , . . . , Pn true and Q false. The world depicted in the world file will be called
the counterexample to the argument in the sentence file.
You try it
................................................................
1. Launch Tarski's World and open the sentence file Bill's Argument. This
argument claims that Between(b, a, d) follows from these three premises:
Between(b, c, d), Between(a, b, d), and LeftOf(a, c). Do you think it does?
2. Start a new world and put four blocks, labeled a, b, c, and d on one row
of the grid.
3. Arrange the blocks so that the conclusion is false. Check the premises. If
any of them are false, rearrange the blocks until they are all true. Is the
conclusion still false? If not, keep trying.
"
4. If you have trouble, try putting them in the order d, a, b, c. Now you will
find that all the premises are true but the conclusion is false. This world is
a counterexample to the argument. Thus we have demonstrated that the
conclusion does not follow from the premises.
"
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
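A counterexample of this kind can even be found mechanically. The sketch below (ours, not Tarski's World's algorithm) tries every left-to-right ordering of the four blocks in a single row, reading LeftOf as "strictly to the left" and Between(x, y, z) as "strictly between y and z":

```python
from itertools import permutations

def counterexamples():
    """Orderings of a, b, c, d in one row that make the premises of
    Bill's Argument true and its conclusion false."""
    found = []
    for order in permutations('abcd'):
        pos = {name: i for i, name in enumerate(order)}
        def leftof(x, y):
            return pos[x] < pos[y]
        def between(x, y, z):
            return min(pos[y], pos[z]) < pos[x] < max(pos[y], pos[z])
        premises = (between('b', 'c', 'd') and
                    between('a', 'b', 'd') and
                    leftof('a', 'c'))
        if premises and not between('b', 'a', 'd'):
            found.append(order)
    return found

# The ordering suggested above, d a b c, is among the counterexamples.
assert ('d', 'a', 'b', 'c') in counterexamples()
```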
Remember
To demonstrate the invalidity of an argument with premises P1 , . . . , Pn
and conclusion Q, find a counterexample: a possible circumstance that
makes P1 , . . . , Pn all true but Q false. Such a counterexample shows that
Q is not a consequence of P1 , . . . , Pn .
Exercises
2.21
!
2.22
"
If you have skipped the You try it section, go back and do it now. Submit the world file World
Counterexample 1.
Is the following argument valid? Sound? If it is valid, give an informal proof of it. If it is not
valid, give an informal counterexample to it.
All computer scientists are rich. Anyone who knows how to program a computer is a
computer scientist. Bill Gates is rich. Therefore, Bill Gates knows how to program a
computer.
2.23
"
Is the following argument valid? Sound? If it is valid, give an informal proof of it. If it is not
valid, give an informal counterexample to it.
Philosophers have the intelligence needed to be computer scientists. Anyone who becomes a computer scientist will eventually become wealthy. Anyone with the intelligence needed to be a computer scientist will become one. Therefore, every philosopher
will become wealthy.
Each of the following problems presents a formal argument in the blocks language. If the argument is
valid, submit a proof of it using Fitch. (You will find Exercise files for each of these in the usual place.)
Important: if you use Ana Con in your proof, cite at most two sentences in each application. If the
argument is not valid, submit a counterexample world using Tarski's World.
2.24
!
Larger(b, c)
Smaller(b, d)
SameSize(d, e)
Larger(e, c)

2.25
FrontOf(a, b)
LeftOf(a, c)
SameCol(a, b)
FrontOf(c, b)

2.26
!
SameRow(b, c)
SameRow(a, d)
SameRow(d, f)
LeftOf(a, b)
LeftOf(f, c)

2.27
SameRow(b, c)
SameRow(a, d)
SameRow(d, f)
FrontOf(a, b)
FrontOf(f, c)
Section 2.6
Alternative notation
You will often see arguments presented in the following way, rather than
in Fitch format. The symbol ∴ (read "therefore") is used to indicate the
conclusion:

All men are mortal.
Socrates is a man.
∴ Socrates is mortal.
There is a huge variety of formal deductive systems, each with its own
notation. We can't possibly cover all of these alternatives, though we describe
one, the resolution method, in Chapter 17.
Chapter 3
Boolean connectives
Section 3.1
Negation symbol: ¬
The symbol ¬ is used to express negation in our language, the notion we
commonly express in English using terms like not, it is not the case that, non-, and un-. In first-order logic, we always apply this symbol to the front of a
sentence to be negated, while in English there is a much more subtle system
for expressing negative claims. For example, the English sentences John isn't
home and It is not the case that John is home have the same first-order
translation:
¬Home(john)
This sentence is true if and only if Home(john) isn't true, that is, just in case
John isn't home.
In English, we generally avoid double negatives, that is, negatives inside other
negatives. For example, the sentence It doesn't make no difference is problematic. If someone says it, they usually mean that it doesn't make any difference.
In other words, the second negative just functions as an intensifier of some
sort. On the other hand, this sentence could be used to mean just what it
says, that it does not make no difference, it makes some difference.
Fol is much more systematic. You can put a negation symbol in front of
any sentence whatsoever, and it always negates it, no matter how many other
negation symbols the sentence already contains. For example, the sentence
¬¬Home(john)
negates the sentence
¬Home(john)
P       ¬P
true    false
false   true
The game rule for negation is very simple, since you never have to do
anything. Once you commit yourself to the truth of ¬P this is the same as
committing yourself to the falsity of P. Similarly, if you commit yourself to
the falsity of ¬P, this is tantamount to committing yourself to the truth of
P. So in either case Tarski's World simply replaces your commitment about
the more complex sentence by the opposite commitment about the simpler
sentence.
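The effect of stacking negation symbols can be sketched directly from the truth table: each ¬ flips the value once, and that is all the game exploits as it strips one symbol at a time:

```python
def eval_negations(n, p):
    """Truth value of a sentence with n negation symbols in front
    of a sentence whose truth value is p."""
    value = p
    for _ in range(n):
        value = not value  # each negation symbol flips the value
    return value

assert eval_negations(1, True) is False   # one negation flips the value
assert eval_negations(2, True) is True    # two negations cancel out
assert eval_negations(3, True) is False   # an odd number flips it
```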
You try it
................................................................
1. Open Wittgenstein's World. Start a new sentence file and write the following
sentence.

¬¬¬Between(e, d, f)
"
2. Use the Verify Sentence button to check the truth value of the sentence.
"
3. Now play the game, choosing whichever commitment you please. What
happens to the number of negation symbols as the game proceeds? What
happens to your commitment?
"
4. Now play the game again with the opposite commitment. If you won the
first time, you should lose this time, and vice versa. Don't feel bad about
losing.
"
5. There is no need to save the sentence file when you are done.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Remember
1. If P is a sentence of fol, then so is ¬P.
2. The sentence ¬P is true if and only if P is not true.
3. A sentence that is either atomic or the negation of an atomic sentence
is called a literal.
Section 3.1
Exercises
3.1
If you skipped the You try it section, go back and do it now. There are no files to submit,
but you wouldn't want to miss it.
3.2
(Assessing negated sentences) Open Boole's World and Brouwer's Sentences. In the sentence file
you will find a list of sentences built up from atomic sentences using only the negation symbol.
Read each sentence and decide whether you think it is true or false. Check your assessment. If
the sentence is false, make it true by adding or deleting a negation sign. When you have made
all the sentences in the file true, submit the modified file as Sentences 3.2.
3.3
!
(Building a world) Start a new sentence file. Write the following sentences in your file and save
the file as Sentences 3.3.
1. ¬Tet(f)
2. SameCol(c, a)
3. ¬SameCol(c, b)
4. Dodec(f)
5. c ≠ b
6. ¬(d = e)
7. SameShape(f, c)
8. ¬SameShape(d, c)
9. Cube(e)
10. ¬Tet(c)
Now start a new world file and build a world where all these sentences are true. As you modify
the world to make the later sentences true, make sure that you have not accidentally falsified
any of the earlier sentences. When you are done, submit both your sentences and your world.
3.4
"
Let P be a true sentence, and let Q be formed by putting some number of negation symbols
in front of P. Show that if you put an even number of negation symbols, then Q is true, but
that if you put an odd number, then Q is false. [Hint: A complete proof of this simple fact
would require what is known as mathematical induction. If you are familiar with proof by
induction, then go ahead and give a proof. If you are not, just explain as clearly as you can
why this is true.]
Now assume that P is atomic but of unknown truth value, and that Q is formed as before.
No matter how many negation symbols Q has, it will always have the same truth value as a
literal, namely either the literal P or the literal ¬P. Describe a simple procedure for determining
which.
Section 3.2
Conjunction symbol: ∧
The symbol ∧ is used to express conjunction in our language, the notion we
normally express in English using terms like and, moreover, and but. In
first-order logic, this connective is always placed between two sentences, whereas in
English we can also conjoin other parts of speech, such as nouns. For example,
the English sentences John and Mary are home and John is home and Mary
is home have the same first-order translation:
Home(john) ∧ Home(mary)
This sentence is read aloud as Home John and home Mary. It is true if and
only if John is home and Mary is home.
In English, we can also conjoin verb phrases, as in the sentence John slipped
and fell. But in fol we must translate this the same way we would translate
John slipped and John fell:
Slipped(john) ∧ Fell(john)
This sentence is true if and only if the atomic sentences Slipped(john) and
Fell(john) are both true.
Often, a sentence of fol will contain ∧ when there is no visible
sign of conjunction in the English sentence at all. How, for example, do you
think we might express the English sentence d is a large cube in fol? If you
guessed
Large(d) ∧ Cube(d)
you were right. This sentence is true if and only if d is large and d is a cube,
that is, if d is a large cube.
Some uses of the English and are not accurately mirrored by the fol
conjunction symbol. For example, suppose we are talking about an evening
when Max and Claire were together. If we were to say Max went home and
Claire went to sleep, our assertion would carry with it a temporal implication,
namely that Max went home before Claire went to sleep. Similarly, if we were to
reverse the order and assert Claire went to sleep and Max went home it would
suggest a very different sort of situation. By contrast, no such implication,
implicit or explicit, is intended when we use the symbol ∧. The sentence
WentHome(max) ∧ FellAsleep(claire)
is true in exactly the same circumstances as
FellAsleep(claire) ∧ WentHome(max)
P       Q       P ∧ Q
true    true    true
true    false   false
false   true    false
false   false   false
The Tarski's World game is more interesting for conjunctions than for
negations. The way the game proceeds depends on whether you have committed
to true or to false. If you commit to the truth of P ∧ Q, then you have
implicitly committed yourself to the truth of each of P and Q. Thus, Tarski's
World gets to choose either one of these simpler sentences and hold you to the
truth of it. (Which one will Tarski's World choose? If one or both of them are
false, it will choose a false one so that it can win the game. If both are true,
it will choose at random, hoping that you will make a mistake later on.)
If you commit to the falsity of P ∧ Q, then you are claiming that at least
one of P or Q is false. In this case, Tarski's World will ask you to choose one of
the two and thereby explicitly commit to its being false. The one you choose
had better be false, or you will eventually lose the game.
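The strategy just described can be sketched in a few lines of Python. The toy representation of a world as a set of true atomic sentences is our own illustration, and tarski_choice only mimics the behavior described above:

```python
import random

# Illustration only: a "world" is the set of atomic sentences true in it.

def tarski_choice(conjuncts, world):
    """You have committed to TRUE for a conjunction, so Tarski's World
    moves: it picks a false conjunct if there is one (and will win),
    and otherwise picks at random, hoping you slip up later."""
    false_ones = [c for c in conjuncts if c not in world]
    if false_ones:
        return false_ones[0]
    return random.choice(conjuncts)

world = {"Cube(a)", "Cube(b)"}  # c is not a cube in this toy world
print(tarski_choice(["Cube(a)", "Cube(b)", "Cube(c)"], world))  # Cube(c)
```

When you commit to false instead, the roles reverse: you are the one who must pick a conjunct and hope it is really false.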
You try it
................................................................
1. Open Claire's World. Start a new sentence file and enter the sentence
Cube(a) ∧ Cube(b) ∧ ¬Cube(c)
2. Notice that this sentence is false in this world, since c is a cube. Play
the game committed (mistakenly) to the truth of the sentence. You will
see that Tarski's World immediately zeros in on the false conjunct. Your
commitment to the truth of the sentence guarantees that you will lose the
game, but along the way, the reason the sentence is false becomes apparent.
3. Now begin playing the game committed to the falsity of the sentence.
When Tarski's World asks you to choose a conjunct you think is false,
pick the first sentence. This is not the false conjunct, but select it anyway
and see what happens after you choose OK.
4. Play until Tarski's World says that you have lost. Then click on Back a
couple of times, until you are back to where you are asked to choose a
false conjunct. This time pick the false conjunct and resume the play of
the game from that point. This time you will win.
5. Notice that you can lose the game even when your original assessment
is correct, if you make a bad choice along the way. But Tarski's World
always allows you to back up and make different choices. If your original
assessment is correct, there will always be a way to win the game. If it
is impossible for you to win the game, then your original assessment was
wrong.
6. Save your sentence file as Sentences Game 1 when you are done.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Remember
1. If P and Q are sentences of fol, then so is P ∧ Q.
2. The sentence P ∧ Q is true if and only if both P and Q are true.
Exercises
3.5
!
If you skipped the You try it section, go back and do it now. Make sure you follow all the
instructions. Submit the file Sentences Game 1.
3.6
!
Start a new sentence file and open Wittgenstein's World. Write the following sentences in the
sentence file.
1. Tet(f) ∧ Small(f)
2. Tet(f) ∧ Large(f)
3. ¬Tet(f) ∧ Small(f)
4. ¬Tet(f) ∧ Large(f)
5. ¬Tet(f) ∧ ¬Small(f)
6. ¬Tet(f) ∧ ¬Large(f)
7. ¬(Tet(f) ∧ Small(f))
8. ¬(Tet(f) ∧ Large(f))
9. ¬(¬Tet(f) ∧ ¬Small(f))
10. ¬(¬Tet(f) ∧ ¬Large(f))
Once you have written these sentences, decide which you think are true. Record your
evaluations, to help you remember. Then go through and use Tarski's World to evaluate your
assessments. Whenever you are wrong, play the game to see where you went wrong.
If you are never wrong, playing the game will not be very instructive. Play the game a
couple of times anyway, just for fun. In particular, try playing the game committed to the falsity
of sentence 9. Since this sentence is true in Wittgenstein's World, Tarski's World should be able
to beat you. Make sure you understand everything that happens as the game proceeds.
Next, change the size or shape of block f, predict how this will affect the truth values of
your ten sentences, and see if your prediction is right. What is the maximum number of these
sentences that you can get to be true in a single world? Build a world in which the maximum
number of sentences are true. Submit both your sentence file and your world file, naming them
as usual.
3.7
!
(Building a world) Open Max's Sentences. Build a world where all these sentences are true.
You should start with a world with six blocks and make changes to it, trying to make all the
sentences true. Be sure that as you make a later sentence true you do not inadvertently falsify
an earlier sentence.
Section 3.3
Disjunction symbol: ∨
The symbol ∨ is used to express disjunction in our language, the notion we
express in English using or. In first-order logic, this connective, like the
conjunction sign, is always placed between two sentences, whereas in English we
can also disjoin nouns, verbs, and other parts of speech. For example, the
English sentences John or Mary is home and John is home or Mary is home
both have the same first-order translation:
Home(john) ∨ Home(mary)
exclusive vs. inclusive
disjunction
P       Q       P ∨ Q
true    true    true
true    false   true
false   true    true
false   false   false
The game rules for ∨ are the duals of those for ∧. If you commit yourself
to the truth of P ∨ Q, then Tarski's World will make you live up to this by
committing yourself to the truth of one or the other. If you commit yourself to
the falsity of P ∨ Q, then you are implicitly committing yourself to the falsity
of each, so Tarski's World will choose one and hold you to the commitment
that it is false. (Tarski's World will, of course, try to win by picking a true
one, if it can.)
You try it
................................................................
1. Open the file Ackermann's World. Start a new sentence file and enter the
sentence
Cube(c) ∨ (Cube(a) ∧ Cube(b))
Make sure you get the parentheses right!
2. Play the game committed (mistakenly) to this sentence being true. Since
the sentence is a disjunction, and you are committed to true, you will
be asked to pick a disjunct that you think is true. Since the first one is
obviously false, pick the second.
4. Play the game again, this time committed to the falsity of the sentence.
You should be able to win the game this time. If you don't, back up and
try again.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Remember
1. If P and Q are sentences of fol, then so is P ∨ Q.
2. The sentence P ∨ Q is true if and only if P is true or Q is true (or both
are true).
Exercises
3.8
!
If you skipped the You try it section, go back and do it now. You'll be glad you did. Well,
maybe. Submit the file Sentences Game 2.
3.9
!
Open Wittgenstein's World and the sentence file Sentences 3.6 that you created for Exercise 3.6.
Edit the sentences by replacing ∧ by ∨ throughout, saving the edited list as Sentences 3.9.
Once you have changed these sentences, decide which you think are true. Again, record your
evaluations to help you remember them. Then go through and use Tarski's World to evaluate
your assessment. Whenever you are wrong, play the game to see where you went wrong. If you
are never wrong, then play the game anyway a couple of times, knowing that you should win. As
in Exercise 3.6, find the maximum number of sentences you can make true by changing the
size or shape (or both) of block f. Submit both your sentences and world.
3.10
!
Open Ramsey's World and start a new sentence file. Type the following four sentences into the
file:
1. Between(a, b, c) ∨ Between(b, a, c)
2. FrontOf(a, b) ∨ FrontOf(c, b)
3. SameRow(b, c) ∨ LeftOf(b, a)
4. RightOf(b, a) ∨ Tet(a)
Assess each of these sentences in Ramsey's World and check your assessment. Then make a single
change to the world that makes all four of the sentences come out false. Save the modified world
as World 3.10. Submit both files.
Section 3.4
Remarks about the game
Form     Your commitment   Player to move    Goal
P ∨ Q    true              you               Choose one of P, Q
         false             Tarski's World    that is true.
P ∧ Q    true              Tarski's World    Choose one of P, Q
         false             you               that is false.
¬P       either                              Replace ¬P by P and
                                             switch commitment.
If you commit to the truth of a sentence of the form P ∨ ¬P, then you know that it is true, no matter how the world is. After all,
if P is not true, then ¬P will be true, and vice versa; in either event P ∨ ¬P
will be true. But if P is quite complex, or if you have imperfect information
about the world, you may not know which of P or ¬P is true. Suppose P
is a sentence like There is a whale swimming below the Golden Gate Bridge
right now. In such a case you would be willing to commit to the truth of the
disjunction (since either there is or there isn't) without knowing just how to
play the game and win. You know that there is a winning strategy for the
game, but just don't know what it is.
Since there is a moral imperative to live up to one's commitments, the
use of the term commitment in describing the game is a bit misleading.
You are perfectly justified in asserting the truth of P ∨ ¬P, even if you do
not happen to know your winning strategy for playing the game. Indeed, it
would be foolish to claim that the sentence is not true. But if you do claim
that P ∨ ¬P is true, and then play the game, you will be asked to say which
of P or ¬P you think is true. With Tarski's World, unlike in real life, you can
always get complete information about the world by going to the 2-D view,
and so always live up to such commitments.
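The claim that P ∨ ¬P is true no matter how the world is can even be checked mechanically. The following Python check is our own illustration, with Python's or and not standing in for ∨ and ¬:

```python
# P ∨ ¬P comes out true under both possible truth values for P, which
# is why you can safely commit to its truth before knowing which
# disjunct actually holds.
for p in (True, False):
    assert (p or not p) is True
print("P or not P holds in every circumstance")
```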
Exercises
Here is a problem that illustrates the remarks we made about sometimes being able to tell that a sentence
is true, without knowing how to win the game.
3.11
"
Make sure Tarski's World is set to display the world in 3-D. Then open Kleene's World and
Kleene's Sentences. Some objects are hidden behind other objects, thus making it impossible
to assess the truth of some of the sentences. Each of the six names a, b, c, d, e, and f is in use,
naming some object. Now even though you cannot see all the objects, some of the sentences in
the list can be evaluated with just the information at hand. Assess the truth of each claim, if
you can, without recourse to the 2-D view. Then play the game. If your initial commitment is
right, but you lose the game, back up and play over again. Then go through and add comments
to each sentence explaining whether you can assess its truth in the world as shown, and why.
Finally, display the 2-D view and check your work. We have annotated the first sentence for you
to give you the idea. (The semicolon ; tells Tarski's World that what follows is a comment.)
When you are done, print out your annotated sentences to turn in to your instructor.
Section 3.5
Ambiguity and parentheses
scope of negation
The sentences
¬Home(claire) ∧ Home(max)
¬(Home(claire) ∧ Home(max))
mean quite different things. The first is a conjunction of literals, the first of
which says Claire is not home, the second of which says that Max is home. By
contrast, the second sentence is a negation of a sentence which is itself a
conjunction: it says that they are not both home. You have already encountered
this use of parentheses in earlier exercises.
Many logic books require that you always put parentheses around any pair
of sentences joined by a binary connective (such as ∧ or ∨). These books do
not allow sentences of the form:
P ∧ Q ∨ R
but instead require one of the following:
((P ∧ Q) ∨ R)
(P ∧ (Q ∨ R))
leaving out parentheses
The version of fol that we use in this book is not so fussy, in a couple of ways.
First of all, it allows you to conjoin any number of sentences without using
parentheses, since the result is not ambiguous, and similarly for disjunctions.
Second, it allows you to leave off the outermost parentheses, since they serve
no useful purpose. You can also add extra parentheses (or brackets or braces)
if you want to for the sake of readability. For the most part, all we will require
is that your expression be unambiguous.
Remember
Parentheses must be used whenever ambiguity would result from their
omission. In practice, this means that conjunctions and disjunctions must
be wrapped in parentheses whenever combined by means of some other
connective.
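To see concretely that the two ways of restoring parentheses to an ambiguous string really differ, here is a small Python check. This is our own illustration, with Python's and and or standing in for ∧ and ∨:

```python
from itertools import product

# The two readings of the ambiguous string "P ∧ Q ∨ R", checked on all
# eight combinations of truth values for P, Q, and R.
differ = [(p, q, r)
          for p, q, r in product((True, False), repeat=3)
          if ((p and q) or r) != (p and (q or r))]
print(differ)  # [(False, True, True), (False, False, True)]
```

The readings disagree exactly when P is false and R is true, which is why the parentheses must be made explicit whenever ambiguity could arise.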
You try it
................................................................
1. Let's try our hand at evaluating some sentences built up from atomic
sentences using all three connectives ¬, ∧, ∨. Open Boole's Sentences and
Wittgenstein's World. If you changed the size or shape of f while doing
Exercises 3.6 and 3.9, make sure that you change it back to a large tetrahedron.
2. Evaluate each sentence in the file and check your assessment. If your
assessment is wrong, play the game to see why. Don't go from one sentence
to the next until you understand why it has the truth value it does.
3. Do you see the importance of parentheses? After you understand all the
sentences, go back and see which of the false sentences you can make true
just by adding, deleting, or moving parentheses, but without making any
other changes. Save your file as Sentences Ambiguity 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Exercises
To really master a new language, you have to use it, not just read about it. The exercises and problems
that follow are intended to let you do just that.
3.12
!
If you skipped the You try it section, go back and do it now. Submit the file Sentences
Ambiguity 1.
3.13
!
(Building a world) Open Schröder's Sentences. Build a single world where all the sentences
in this file are true. As you work through the sentences, you will find yourself successively
modifying the world. Whenever you make a change in the world, be careful that you don't
make one of your earlier sentences false. When you are finished, verify that all the sentences
are really true. Submit your world as World 3.13.
3.14
!
(Parentheses) Show that the sentence
¬(Small(a) ∨ Small(b))
is not a consequence of the sentence
¬Small(a) ∨ ¬Small(b)
3.15
!
3.16
!
(DeMorgan Equivalences) Open the file DeMorgan's Sentences. Construct a world where all the
odd numbered sentences are true. Notice that no matter how you do this, the even numbered
sentences also come out true. Submit this as World 3.16.1. Next build a world where all the
odd numbered sentences are false. Notice that no matter how you do it, the even numbered
sentences also come out false. Submit this as World 3.16.2.
3.17
"
In Exercise 3.16, you noticed an important fact about the relation between the even and odd
numbered sentences in DeMorgan's Sentences. Try to explain why each even numbered sentence
always has the same truth value as the odd numbered sentence that precedes it.
Section 3.6
Equivalent ways of saying things
DeMorgan's laws
Every language has many ways of saying the same thing. This is particularly
true of English, which has absorbed a remarkable number of words from other
languages in the course of its history. But in any language, speakers always
have a choice of many synonymous ways of getting across their point. The
world would be a boring place if there were just one way to make a given
claim.
Fol is no exception, even though it is far less rich in its expressive
capacities than English. In the blocks language, for example, none of our predicates
is synonymous with another predicate, though it is obvious that we could
do without many of them without cutting down on the claims expressible in
the language. For instance, we could get by without the predicate RightOf by
expressing everything we need to say in terms of the predicate LeftOf,
systematically reversing the order of the names to get equivalent claims. This is
not to say that RightOf means the same thing as LeftOf (it obviously does
not), but just that the blocks language offers us a simple way to construct
equivalent claims using these predicates. In the exercises at the end of this
section, we explore a number of equivalences made possible by the predicates
of the blocks language.
Some versions of fol are more parsimonious with their basic predicates
than the blocks language, and so may not provide equivalent ways of
expressing atomic claims. But even these languages cannot avoid multiple ways of
expressing more complex claims. For example, P ∧ Q and Q ∧ P express the
same claim in any first-order language. More interesting, because of the
superficial differences in form, are the equivalences illustrated in Exercise 3.16,
known as DeMorgan's laws. The first of DeMorgan's laws tells us that the
negation of a conjunction, ¬(P ∧ Q), is logically equivalent to the disjunction
of the negations of the original conjuncts: ¬P ∨ ¬Q. The other tells us that
the negation of a disjunction, ¬(P ∨ Q), is equivalent to the conjunction of
the negations of the original disjuncts: ¬P ∧ ¬Q. These laws are simple
consequences of the meanings of the Boolean connectives. Writing S1 ⇔ S2 to
indicate that S1 and S2 are logically equivalent, we can express DeMorgan's
laws as follows:
¬(P ∧ Q) ⇔ (¬P ∨ ¬Q)
¬(P ∨ Q) ⇔ (¬P ∧ ¬Q)
double negation
Remember
(Double negation and DeMorgan's Laws) For any sentences P and Q:
1. Double negation: ¬¬P ⇔ P
2. DeMorgan: ¬(P ∧ Q) ⇔ (¬P ∨ ¬Q)
3. DeMorgan: ¬(P ∨ Q) ⇔ (¬P ∧ ¬Q)
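Because each law concerns only the truth values of P and Q, all three can be confirmed by brute force over the four rows of the joint truth table. A small Python sketch, our own illustration, with and, or, and not standing in for the Boolean connectives:

```python
from itertools import product

# Check double negation and both DeMorgan laws on every row of the
# joint truth table for P and Q.
for p, q in product((True, False), repeat=2):
    assert (not (not p)) == p                       # double negation
    assert (not (p and q)) == ((not p) or (not q))  # first DeMorgan law
    assert (not (p or q)) == ((not p) and (not q))  # second DeMorgan law
print("all three equivalences hold")
```

This brute-force check is exactly what you do by hand when you compare the columns of two truth tables.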
Exercises
3.18
!
(Equivalences in the blocks language) In the blocks language used in Tarski's World there are
a number of equivalent ways of expressing some of the predicates. Open Bernays' Sentences.
You will find a list of atomic sentences, where every other sentence is left blank. In each blank,
write a sentence that is equivalent to the sentence above it, but does not use the predicate
used in that sentence. (In doing this, you may presuppose any general facts about Tarski's
World, for example that blocks come in only three shapes.) If your answers are correct, the odd
numbered sentences will have the same truth values as the even numbered sentences in every
world. Check that they do in Ackermann's World, Bolzano's World, Boole's World, and Leibniz's
World. Submit the modified sentence file as Sentences 3.18.
3.19
"
(Equivalences in English) There are also equivalent ways of expressing predicates in English.
For each of the following sentences of fol, find an atomic sentence in English that expresses
the same thing. For example, the sentence Man(max) ∧ Married(max) could be expressed in
English as Max is a husband.
1.
2.
3.
4.
5.
6.
Section 3.7
Translation
correct translation
truth conditions
An important skill that you will want to master is that of translating from
English to fol, and vice versa. But before you can do that, you need to know
how to express yourself in both languages. The problems below are designed
to help you learn these related skills.
How do we know if a translation is correct? Intuitively, a correct translation
is a sentence with the same meaning as the one being translated. But what
is the meaning? Fol finesses this question, settling for truth conditions.
What we require of a correct translation in fol is that it be true in the same
circumstances as the original sentence. If two sentences are true in exactly
the same circumstances, we say that they have the same truth conditions. For
sentences of Tarski's World, this boils down to being true in the very same
worlds.
Note that it is not sufficient that the two sentences have the same truth
value in some particular world. If that were so, then any true sentence of
English could be translated by any true sentence of fol. So, for example,
if Claire and Max are both at home, we could translate Max is at home by
means of Home(claire). No, having the same actual truth value is not enough.
They have to have the same truth values in all circumstances.
Remember
In order for an fol sentence to be a good translation of an English sentence, it is sufficient that the two sentences have the same truth values
in all possible circumstances, that is, that they have the same truth conditions.
In general, this is all we require of translations into and out of fol. Thus,
given an English sentence S and a good fol translation of it, say S′, any other
sentence S′′ that is equivalent to S′ will also count as an acceptable translation
of it, since S′ and S′′ have the same truth conditions. But there is a matter of
style. Some good translations are better than others. You want sentences that
are easy to understand. But you also want to keep the fol connectives close
to the English, if possible.
For example, a good translation of It is not true that Claire and Max are
both at home would be given by
¬(Home(claire) ∧ Home(max))
This is equivalent to the following sentence (by the first DeMorgan law), so
we count it too as an acceptable translation:
¬Home(claire) ∨ ¬Home(max)
But there is a clear stylistic sense in which the first is a better translation, since
it conforms more closely to the form of the original. There are no hard and
fast rules for determining which among several logically equivalent sentences
is the best translation of a given sentence.
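The acceptability criterion, same truth value in all circumstances, is mechanical for Boolean combinations of atomic sentences. This Python sketch, our own illustration, compares the two translations of the Claire-and-Max sentence across the four possible circumstances:

```python
from itertools import product

def t1(home_claire, home_max):
    # The stylistically better translation: not both are home.
    return not (home_claire and home_max)

def t2(home_claire, home_max):
    # The DeMorgan-equivalent translation: one or the other is not home.
    return (not home_claire) or (not home_max)

# Same truth value in all four circumstances, hence the same truth
# conditions: each is an acceptable translation whenever the other is.
assert all(t1(c, m) == t2(c, m)
           for c, m in product((True, False), repeat=2))
print("same truth conditions")
```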
Many stylistic features of English have nothing to do with the truth
conditions of a sentence, and simply can't be captured in an fol translation. For
example, consider the English sentence Pris is hungry but Carl is not. This
sentence tells us two things: that Pris is hungry and that Carl is not hungry.
So it would be translated into fol as
Hungry(pris) ∧ ¬Hungry(carl)
When it comes to truth conditions, but expresses the same truth function
as and. Yet it is clear that but carries an additional suggestion that and does
not, namely, that the listener may find the sentence following the but a bit
surprising, given the expectations raised by the sentence preceding it. The words
but, however, yet, nonetheless, and so forth, all express ordinary conjunction,
and so are translated into fol using ∧. The fact that they also communicate
a sense of unexpectedness is just lost in the translation. Fol, as much as we
love it, sometimes sacrifices style for clarity.
In Exercise 3.21, sentences 1, 8, and 10, you will discover an important
function that the English phrases either...or and both...and sometimes play.
Either helps disambiguate the following or by indicating how far to the left
its scope extends; similarly, both indicates how far to the left the following
and extends. For example, Either Max is home and Claire is home or Carl
Exercises
3.20
!
(Describing a simple world) Open Boole's World. Start a new sentence file, named
Sentences 3.20, where you will describe some features of this world. Check each of your sentences
to see that it is indeed a sentence and that it is true in this world.
1. Notice that f (the large dodecahedron in the back) is not in front of a. Use your first
sentence to say this.
2. Notice that f is to the right of a and to the left of b. Use your second sentence to say
this.
3. Use your third sentence to say that f is either in back of or smaller than a.
4. Express the fact that both e and d are between c and a.
5. Note that neither e nor d is larger than c. Use your fifth sentence to say this.
6. Notice that e is neither larger than nor smaller than d. Use your sixth sentence to say
this.
7. Notice that c is smaller than a but larger than e. State this fact.
8. Note that c is in front of f; moreover, it is smaller than f. Use your eighth sentence
to state these things.
9. Notice that b is in the same row as a but is not in the same column as f . Use your
ninth sentence to express this fact.
10. Notice that e is not in the same column as either c or d. Use your tenth sentence to
state this.
Now let's change the world so that none of the above mentioned facts hold. We can do this as
follows. First move f to the front right corner of the grid. (Be careful not to drop it off the
edge. You might find it easier to make the move from the 2-D view. If you accidentally drop
it, just open Boole's World again.) Then move e to the back left corner of the grid and make
it large. Now none of the facts hold; if your answers to 1-10 are correct, all of the sentences
should now be false. Verify that they are. If any are still true, can you figure out where you went
wrong? Submit your sentences when you think they are correct. There is no need to submit
the modified world file.
3.21
!
(Some translations) Tarski's World provides you with a very useful way to check whether your
translation of a given English sentence is correct. If it is correct, then it will always have the
same truth value as the English sentence, no matter what world the two are evaluated in. So
when you are in doubt about one of your translations, simply build some worlds where the
English sentence is true, others where it is false, and check to see that your translation has
the right truth values in these worlds. You should use this technique frequently in all of the
translation exercises.
Start a new sentence file, and use it to enter translations of the following English sentences
into first-order logic. You will only need to use the connectives ∧, ∨, and ¬.
1. Either a is small or both c and d are large.
2. d and e are both in back of b.
3. d and e are both in back of b and larger than it.
4. Both d and c are cubes, however neither of them is small.
5. Neither e nor a is to the right of c and to the left of b.
6. Either e is not large or it is in back of a.
7. c is neither between a and b, nor in front of either of them.
8. Either both a and e are tetrahedra or both a and f are.
9. Neither d nor c is in front of either c or b.
10. c is either between d and f or smaller than both of them.
11. It is not the case that b is in the same row as c.
12. b is in the same column as e, which is in the same row as d, which in turn is in the
same column as a.
Before you submit your sentence file, do the next exercise.
3.22
!
3.23
!
3.24
"
(Checking your translations) Open Wittgensteins World. Notice that all of the English sentences
from Exercise 3.21 are true in this world. Thus, if your translations are accurate, they will also
be true in this world. Check to see that they are. If you made any mistakes, go back and fix
them. But as we have stressed, even if one of your sentences comes out true in Wittgensteins
World, it does not mean that it is a proper translation of the corresponding English sentence.
All you know for sure is that your translation and the original sentence have the same truth
value in this particular world. If the translation is correct, it will have the same truth value as
the English sentence in every world. Thus, to have a better test of your translations, we will
examine them in a number of worlds, to see if they have the same truth values as their English
counterparts in all of these worlds.
Lets start by making modifications to Wittgensteins World. Make all the large or medium
objects small, and the small objects large. With these changes in the world, the English sentences 1, 3, 4, and 10 become false, while the rest remain true. Verify that the same holds for
your translations. If not, correct your translations. Next, rotate your modified Wittgensteins
World 90 clockwise. Now sentences 5, 6, 8, 9, and 11 should be the only true ones that remain.
Lets check your translations in another world. Open Booles World. The only English sentences that are true in this world are sentences 6 and 11. Verify that all of your translations
except 6 and 11 are false. If not, correct your translations.
Now modify Booles World by exchanging the positions of b and c. With this change, the
English sentences 2, 5, 6, 7, and 11 come out true, while the rest are false. Check that the same
is true of your translations.
There is nothing to submit except Sentences 3.21.
Start a new sentence file and translate the following into fol. Use the names and predicates
presented in Table 1.2 on page 30.
1. Max is a student, not a pet.
2. Claire fed Folly at 2 pm and then ten minutes later gave her to Max.
3. Folly belonged to either Max or Claire at 2:05 pm.
4. Neither Max nor Claire fed Folly at 2 pm or at 2:05 pm.
5. 2:00 pm is between 1:55 pm and 2:05 pm.
6. When Max gave Folly to Claire at 2 pm, Folly wasn't hungry, but she was an hour
later.
Referring again to Table 1.2, page 30, translate the following into natural, colloquial English.
Turn in your translations to your instructor.
1. Student(claire) Student(max)
2. Pet(pris) Owned(max, pris, 2:00)
3. Owned(claire, pris, 2:00) Owned(claire, folly, 2:00)
4. (Fed(max, pris, 2:00) Fed(max, folly, 2:00))
3.25
Translate the following into fol, introducing names, predicates, and function symbols as
needed. Explain the meaning of each predicate and function symbol, unless it is completely
obvious.
1. AIDS is less contagious than influenza, but more deadly.
2. Abe fooled Stephen on Sunday, but not on Monday.
3. Sean or Brad admires Meryl and Harrison.
4. Daisy is a jolly miller, and lives on the River Dee.
5. Polonius's eldest child was neither a borrower nor a lender.
3.26 (Boolean solids) Many of you know how to do a Boolean search on the Web or on your
computer. When we do a Boolean search, we are really using a generalization of the Boolean
truth functions. We specify a Boolean combination of words as a criterion for finding documents
that contain (or do not contain) those words. Another generalization of the Boolean operations
is to spatial objects. In Figure 3.1 we show four ways to combine a vertical cylinder (A) with a
horizontal cylinder (B) to yield a new solid. Give an intuitive explanation of how the Boolean
connectives are being applied in this example. Then describe what the object ¬(A ∨ B) would be like and explain why we didn't give you a picture of this solid.
Section 3.8
Alternative notation
As we mentioned in Chapter 2, there are various dialect differences among
users of fol. It is important to be aware of these so that you will not be
stymied by superficial differences. In fact, you will run into alternate symbols
being used for each of the three connectives studied in this chapter.
The most common variant of the negation sign, ¬, is the symbol known as the tilde, ∼. Thus you will frequently encounter ∼P where we would write ¬P. A more old-fashioned alternative is to draw a bar completely across the negated sentence, as in P̄. This has one advantage over ¬, in that it allows you to avoid certain uses of parentheses, since the bar indicates its own scope by what lies under it. For example, where we have to write ¬(P ∨ Q), the bar equivalent would simply be P ∨ Q with the bar drawn over the whole disjunction. None of these symbols are available
on all keyboards, a serious problem in some contexts, such as programming
languages. Because of this, many programming languages use an exclamation
point to indicate negation. In the Java programming language, for example, ¬P would be written !P.
There are only two common variants of ∧. By far the most common is &, or sometimes (as in Java), &&. An older notation uses a centered dot, as in multiplication. To make things more confusing still, the dot is sometimes omitted, again as in multiplication. Thus, for P ∧ Q you might see any of the following: P & Q, P && Q, P · Q, or just PQ.
Happily, the symbol ∨ is pretty standard. The only exception you may encounter is a single or double vertical line, used in programming languages. So if you see P | Q or P ‖ Q, what is meant is probably P ∨ Q. Unfortunately, though, some old textbooks use P | Q to express not both P and Q.
Alternatives to parentheses
dot notation
Polish notation
There are ways to get around the use of parentheses in fol. At one time, a common alternative to parentheses was a system known as dot notation. This system involved placing little dots next to connectives, indicating their relative power or scope. In this system, the two sentences we write as P ∧ (Q ∨ R) and (P ∧ Q) ∨ R would have been written P ∧ . Q ∨ R and P ∧ Q . ∨ R, respectively. With more complex sentences, multiple dots were used. Fortunately, this notation has just about died out, and the present authors never speak to anyone who uses it.
Another approach to parentheses is known as Polish notation. In Polish notation, the usual infix notation is replaced by prefix notation, and this makes parentheses unnecessary: ¬P is written Np, P ∧ Q is written Kpq, and P ∨ Q is written Apq.
Remember

The following table summarizes the alternative notations discussed so far.

Our notation    Common equivalents
¬P              ∼P,  P̄,  !P,  Np
P ∧ Q           P&Q,  P&&Q,  P·Q,  PQ,  Kpq
P ∨ Q           P|Q,  P‖Q,  Apq
Exercises
3.27
P&Q
!(P . (Q&&P))
( P Q) P
P( Q RS)
3.28

Chapter 4
column splits each of these, marking the first and third quarters of the rows
with true, the second and fourth quarters with false, and so on. This will
result in the last column having true and false alternating down the column.
Let's start by looking at a very simple example of a truth table, one for the sentence Cube(a) ∨ ¬Cube(a). Since this sentence is built up from one atomic sentence, our truth table will contain two rows, one for the case where Cube(a) is true and one for when it is false.
Cube(a) | Cube(a) ∨ ¬Cube(a)
   T    |
   F    |
In a truth table, the column or columns under the atomic sentences are
called reference columns. Once the reference columns have been filled in, we
are ready to fill in the remainder of the table. To do this, we construct columns
of Ts and Fs beneath each connective of the target sentence S. These columns
are filled in one by one, using the truth tables for the various connectives. We
start by working on connectives that apply only to atomic sentences. Once
this is done, we work on connectives that apply to sentences whose main
connective has already had its column filled in. We continue this process until
the main connective of S has had its column filled in. This is the column that
shows how the truth of S depends on the truth of its atomic parts.
Our first step in filling in this truth table, then, is to calculate the truth values that should go in the column under the innermost connective, which in this case is the ¬. We do this by referring to the truth values in the reference column under Cube(a), switching values in accord with the meaning of ¬.
Cube(a) | ¬Cube(a)
   T    |    F
   F    |    T
Once this column is filled in, we can determine the truth values that should go under the ∨ by looking at the values under Cube(a) and those under the negation sign, since these correspond to the values of the two disjuncts to which ∨ is applied. (Do you understand this?) Since there is at least one T in
each row, the final column of the truth table looks like this.
Cube(a) | ¬Cube(a) | Cube(a) ∨ ¬Cube(a)
   T    |    F     |         T
   F    |    T     |         T
Not surprisingly, our table tells us that the sentence Cube(a) ∨ ¬Cube(a) cannot be false. It is what we will call a tautology, an especially simple kind of logical truth. We will give a precise definition of tautologies later. Our sentence is in fact an instance of a principle, P ∨ ¬P, that is known as the law of the excluded middle. Every instance of this principle is a tautology.
Let's next look at a more complex truth table, one for a sentence built up from three atomic sentences, (A ∧ B) ∨ ¬C.

A  B  C | (A ∧ B) ∨ ¬C
T  T  T |
T  T  F |
T  F  T |
T  F  F |
F  T  T |
F  T  F |
F  F  T |
F  F  F |
Since two of the connectives in the target sentence apply to atomic sentences whose values are specified in the reference columns, we can fill in these columns using the truth tables for ∧ and ¬ given earlier.

A  B  C | A ∧ B | ¬C
T  T  T |   T   | F
T  T  F |   T   | T
T  F  T |   F   | F
T  F  F |   F   | T
F  T  T |   F   | F
F  T  F |   F   | T
F  F  T |   F   | F
F  F  F |   F   | T
This leaves only one connective, the main connective of the sentence. We fill in the column under it by referring to the two columns just completed, using the truth table for ∨.

A  B  C | A ∧ B | ¬C | (A ∧ B) ∨ ¬C
T  T  T |   T   | F  |      T
T  T  F |   T   | T  |      T
T  F  T |   F   | F  |      F
T  F  F |   F   | T  |      T
F  T  T |   F   | F  |      F
F  T  F |   F   | T  |      T
F  F  T |   F   | F  |      F
F  F  F |   F   | T  |      T
When we inspect the final column of this table, the one beneath the connective ∨, we see that the sentence will be false in any circumstance where Cube(c) is true and one of Cube(a) or Cube(b) is false. This table shows that
our sentence is not a tautology. Furthermore, since there clearly are blocks
worlds in which c is a cube and either a or b is not, the claim made by our
original sentence is not logically necessary.
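This observation about the falsifying rows can be confirmed by brute force. The following Python check (ours, purely illustrative) runs through all eight rows:

```python
from itertools import product

# Confirm the claim just made about (A ∧ B) ∨ ¬C: it is false exactly
# in the rows where C is true and A ∧ B is false.
for A, B, C in product((True, False), repeat=3):
    sentence = (A and B) or (not C)
    assert sentence == (not (C and not (A and B)))
```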
Let's look at one more example, this time for a sentence of the form

¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B

This sentence, though it has the same number of atomic constituents, is considerably more complex than our previous example. We begin the truth table by filling in the columns under the two connectives that apply directly to atomic sentences.
A  B  C | ¬A | B ∧ C
T  T  T | F  |   T
T  T  F | F  |   F
T  F  T | F  |   F
T  F  F | F  |   F
F  T  T | T  |   T
F  T  F | T  |   F
F  F  T | T  |   F
F  F  F | T  |   F
We can now fill in the column under the ∨ that connects ¬A and B ∧ C by referring to the columns just filled in. This column will have an F in it if and only if both of these constituents are false.
A  B  C | ¬A | B ∧ C | ¬A ∨ (B ∧ C)
T  T  T | F  |   T   |      T
T  T  F | F  |   F   |      F
T  F  T | F  |   F   |      F
T  F  F | F  |   F   |      F
F  T  T | T  |   T   |      T
F  T  F | T  |   F   |      T
F  F  T | T  |   F   |      T
F  F  F | T  |   F   |      T

Next, we fill in the column under the ∧ that connects A to this disjunction. It gets a T only in rows where both A and the disjunction are true.

A  B  C | ¬A ∨ (B ∧ C) | A ∧ (¬A ∨ (B ∧ C))
T  T  T |      T       |         T
T  T  F |      F       |         F
T  F  T |      F       |         F
T  F  F |      F       |         F
F  T  T |      T       |         F
F  T  F |      T       |         F
F  F  T |      T       |         F
F  F  F |      T       |         F
We can now fill in the column for the remaining ¬ by referring to the previously completed column. The ¬ simply reverses Ts and Fs.
A  B  C | A ∧ (¬A ∨ (B ∧ C)) | ¬(A ∧ (¬A ∨ (B ∧ C)))
T  T  T |         T          |           F
T  T  F |         F          |           T
T  F  T |         F          |           T
T  F  F |         F          |           T
F  T  T |         F          |           T
F  T  F |         F          |           T
F  F  T |         F          |           T
F  F  F |         F          |           T
Finally, we can fill in the column under the main connective of our sentence. We do this with the two-finger method: running our fingers down the reference column for B and the just-completed column, entering T whenever at least one finger points to a T.
A  B  C | ¬(A ∧ (¬A ∨ (B ∧ C))) | ¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B
T  T  T |           F           |             T
T  T  F |           T           |             T
T  F  T |           T           |             T
T  F  F |           T           |             T
F  T  T |           T           |             T
F  T  F |           T           |             T
F  F  T |           T           |             T
F  F  F |           T           |             T
We will say that a tautology is any sentence whose truth table has only Ts in the column under its main connective. Thus, we see from the final column of the above table that any sentence of the form

¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B

is a tautology.
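A quick mechanical check of this verdict, in the form of a Python sketch (ours, not part of the LPL software), confirms that every row of the table comes out true:

```python
from itertools import product

def sentence(A, B, C):
    # ¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B
    return (not (A and ((not A) or (B and C)))) or B

# A tautology: true on every row of its truth table.
assert all(sentence(A, B, C) for A, B, C in product((True, False), repeat=3))
```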
You try it
................................................................
Chapter 4
1. Open the program Boole from the software that came with the book. We will use Boole to reconstruct the truth table just discussed. The first thing to do is enter the sentence ¬(A ∧ (¬A ∨ (B ∧ C))) ∨ B at the top right of the table. To do this, use the toolbar to enter the logical symbols and the keyboard to type the letters A, B, and C. (You can also enter the logical symbols from the keyboard by typing &, |, and ~ for ∧, ∨, and ¬, respectively. If you enter the logical symbols from the keyboard, make sure you add spaces before and after the binary connectives so that the columns under them will be reasonably spaced out.) If your sentence is well formed, the small (1) above the sentence will turn green.
2. To build the reference columns, click in the top left portion of the table to
move your insertion point to the top of the first reference column. Enter C
in this column. Then choose Add Column Before from the Table menu
and enter B. Repeat this procedure and add a column headed by A. To fill
in the reference columns, click under each of them in turn, and type the
desired pattern of Ts and Fs.
3. Click under the various connectives in the target sentence, and notice that green squares appear in the columns whose values the connective depends upon. Select a column whose highlighted columns are already filled in, and fill in that column with the appropriate truth values. Continue this process until your table is complete. When you are done, use the Verify
process until your table is complete. When you are done use the Verify
Assessment item from the Table menu to see if all the values are correct
and your table complete. You can also verify your table using the colored
button on the toolbar (just to the left of the print button). If you have
filled the table correctly, green check marks should appear to the left of
each row, and next to the target sentence. Red crosses indicate that you
have made a mistake, and you should fix these now.
4. Once you have a correct and complete truth table, click on the Assessment button in the pink area under the toolbar. This will allow you to
say whether you think the sentence is a tautology. Say that it is (since
it is), and check your assessment by again selecting Verify Assessment
from the Table menu (or by using the toolbar button). You should now
see a green check mark next to the word Tautology on the assessment
pane. Save your table as Table Tautology 1.
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
There is a slight problem with our definition of a tautology, in that it assumes that every sentence has a main connective. This is almost always the case, but not in sentences like:

P ∧ Q ∧ R

For purposes of constructing truth tables, we will assume that the main connective in conjunctions with more than two conjuncts is always the rightmost ∧. That is to say, we will construct a truth table for P ∧ Q ∧ R the same way we would construct a truth table for:

(P ∧ Q) ∧ R

More generally, we construct the truth table for:

P1 ∧ P2 ∧ P3 ∧ . . . ∧ Pn

as if it were punctuated like this:

(((P1 ∧ P2) ∧ P3) ∧ . . .) ∧ Pn

We treat long disjunctions similarly.
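This punctuation convention is what programmers call a left fold: combine the first two conjuncts, then fold each remaining conjunct in from the right. A Python sketch (ours, for illustration; the name big_conjunction is invented):

```python
from functools import reduce

def big_conjunction(conjuncts):
    # Left fold: evaluates as (((P1 ∧ P2) ∧ P3) ∧ ...) ∧ Pn,
    # matching the punctuation convention in the text.
    return reduce(lambda left, p: left and p, conjuncts)
```

A long disjunction can be treated the same way, with `or` in place of `and`.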
Any tautology is logically necessary. After all, its truth is guaranteed simply by its structure and the meanings of the truth-functional connectives.
Tautologies are logical necessities in a very strong sense. Their truth is independent of both the way the world happens to be and even the meanings of
the atomic sentences out of which they are composed.
It should be clear, however, that not all logically necessary claims are
tautologies. The simplest example of a logically necessary claim that is not
a tautology is the fol sentence a = a. Since this is an atomic sentence, its
truth table would contain one T and one F. The truth table method is too
coarse to recognize that the row containing the F does not represent a genuine
possibility.
[Figure 4.1 here: three nested Euler circles labeled, from innermost to outermost, Tautologies, Logical Necessities, and Tarski's World Necessities, with example sentences in each region.]

Figure 4.1: The relation between tautologies, logical truths, and tw-necessities.
You should be able to think of any number of sentences that are not
tautological, but which nonetheless seem logically necessary. For example, the
sentence
¬(Larger(a, b) ∧ Larger(b, a))

cannot possibly be false, yet a truth table for the sentence will not show this. The sentence will be false in the row of the truth table that assigns T to both Larger(a, b) and Larger(b, a).
We now have two methods for exploring the notions of logical possibility
and necessity, at least for the blocks language. First, there are the blocks
worlds that can be constructed using Tarskis World. If a sentence is true in
Remember
Let S be a sentence of fol built up from atomic sentences by means of
truth-functional connectives alone. A truth table for S shows how the
truth of S depends on the truth of its atomic parts.
1. S is a tautology if and only if every row of the truth table assigns true
to S.
2. If S is a tautology, then S is a logical truth (that is, is logically necessary).
3. Some logical truths are not tautologies.
4. S is tt-possible if and only if at least one row of the truth table assigns
true to S.
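The two definitions in the Remember box translate directly into code. Here is an illustrative Python sketch (the function names are ours, not part of the Boole software):

```python
from itertools import product

def is_tautology(sentence, n):
    # S is a tautology iff every row of its truth table assigns true to S.
    return all(sentence(*row) for row in product((True, False), repeat=n))

def is_tt_possible(sentence, n):
    # S is tt-possible iff at least one row assigns true to S.
    return any(sentence(*row) for row in product((True, False), repeat=n))

# Law of excluded middle is a tautology; A ∧ ¬A is not even tt-possible.
assert is_tautology(lambda a: a or not a, 1)
assert not is_tt_possible(lambda a: a and not a, 1)
```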
Exercises
In this chapter, you will often be using Boole to construct truth tables. Although Boole has the capability
of building and filling in reference columns for you, do not use this feature. To understand truth tables,
you need to be able to do this yourself. In later chapters, we will let you use the feature, once you've
learned how to do it yourself. The Grade Grinder will, by the way, be able to tell if Boole constructed
the reference columns.
4.1
If you skipped the You try it section, go back and do it now. Submit the file Table Tautology 1.
4.2
Assume that A, B, and C are atomic sentences. Use Boole to construct truth tables for each of
the following sentences and, based on your truth tables, say which are tautologies. Name your
tables Table 4.2.x, where x is the number of the sentence.
1. (A B) (A B)
2. (A B) (A B)
3. (A B) C
4. (A B) (A (B C))
4.3 In Exercise 4.2 you should have discovered that two of the four sentences are tautologies, and
hence logical truths.
1. Suppose you are told that the atomic sentence A is in fact a logical truth (for example,
a = a). Can you determine whether any additional sentences in the list (1)-(4) are
logically necessary based on this information?
2. Suppose you are told that A is in fact a logically false sentence (for example, a ≠ a).
Can you determine whether any additional sentences in the list (1)-(4) are logical
truths based on this information?
In the following four exercises, use Boole to construct truth tables and indicate whether the sentence
is tt-possible and whether it is a tautology. Remember how you should treat long conjunctions and
disjunctions.
4.4
(B C B)

4.5
[A (B C) (A B)]

4.6
A (B (C A))

4.7
[(A B) (C D)]

4.8
Make a copy of the Euler circle diagram on page 102 and place the numbers of the following
sentences in the appropriate region.
1. a = b
2. a = b b = b
3. a=bb=b
4. (Large(a) Large(b) Adjoins(a, b))
5. Larger(a, b) Larger(a, b)
6. Larger(a, b) Smaller(a, b)
7. Tet(a) Cube(b) a ≠ b
8. (Small(a) Small(b)) Small(a)
9. SameSize(a, b) (Small(a) Small(b))
10. (SameCol(a, b) SameRow(a, b))

4.9

1. Make a table with a row for each of the ten sentences above and columns headed tt-possible and tw-possible. In the first column, put yes if the sentence is tt-possible and no otherwise.
2. In the second column of the table, put yes if you think the sentence is tw-possible,
that is, if it is possible to make the sentence true by building a world in Tarski's World,
and no otherwise. For each sentence that you mark tw-possible, actually build a world
in which it is true and name it World 4.9.x, where x is the number of the sentence in
question. The truth tables you constructed before may help you build these worlds.
3. Are any of the sentences tt-possible but not tw-possible? Explain why this can happen. Are any of the sentences tw-possible but not tt-possible? Explain why not.
Submit the files you created and turn in the table and explanations to your instructor.
4.10
Draw an Euler circle diagram similar to the diagram on page 102, but this time showing the
relationship between the notions of logical possibility, tw-possibility, and tt-possibility. For
each region in the diagram, indicate an example sentence that would fall in that region. Don't forget the region that falls outside all the circles.
4.11 All necessary truths are obviously possible: since they are true in all possible circumstances,
they are surely true in some possible circumstances. Given this reflection, where would the
sentences from our previous diagram on page 102 fit into the new diagram?
Suppose that S is a tautology, with atomic sentences A, B, and C. Suppose that we replace all occurrences of A by another sentence P, possibly complex. Explain why the resulting sentence is also a tautology.
Section 4.2
logical equivalence
tautological equivalence
A  B | A ∧ B | ¬(A ∧ B) | ¬A | ¬B | ¬A ∨ ¬B
T  T |   T   |    F     | F  | F  |    F
T  F |   F   |    T     | F  | T  |    T
F  T |   F   |    T     | T  | F  |    T
F  F |   F   |    T     | T  | T  |    T
In this table, the columns under ¬(A ∧ B) and ¬A ∨ ¬B correspond to the main connectives of the two sentences. Since these columns are identical, we know that the sentences must have the same truth values, no matter what the truth values of their atomic constituents may be. This holds simply in virtue of the structure of the two sentences and the meanings of the Boolean connectives. So, the two sentences are indeed tautologically equivalent.
Let's look at a second example, this time to see whether the sentence ¬((A ∨ B) ∧ ¬C) is tautologically equivalent to (¬A ∧ ¬B) ∨ C. To construct a truth table for this pair of sentences, we will need eight rows, since there are three atomic sentences. The completed table looks like this.
A  B  C | A ∨ B | ¬C | (A ∨ B) ∧ ¬C | ¬((A ∨ B) ∧ ¬C) | ¬A | ¬B | ¬A ∧ ¬B | (¬A ∧ ¬B) ∨ C
T  T  T |   T   | F  |      F       |        T        | F  | F  |    F    |       T
T  T  F |   T   | T  |      T       |        F        | F  | F  |    F    |       F
T  F  T |   T   | F  |      F       |        T        | F  | T  |    F    |       T
T  F  F |   T   | T  |      T       |        F        | F  | T  |    F    |       F
F  T  T |   T   | F  |      F       |        T        | T  | F  |    F    |       T
F  T  F |   T   | T  |      T       |        F        | T  | F  |    F    |       F
F  F  T |   F   | F  |      F       |        T        | T  | T  |    T    |       T
F  F  F |   F   | T  |      F       |        T        | T  | T  |    T    |       T
Once again, scanning the final columns under the two main connectives reveals
that the sentences are tautologically equivalent, and hence logically equivalent.
All tautologically equivalent sentences are logically equivalent, but the
reverse does not in general hold. Indeed, the relationship between these notions is the same as that between tautologies and logical truths. Tautological
equivalence is a strict form of logical equivalence, one that won't apply to
some logically equivalent pairs of sentences. Consider the pair of sentences:
a = b ∧ Cube(a)
a = b ∧ Cube(b)
These sentences are logically equivalent, as is demonstrated in the following
informal proof.
Proof: Suppose that the sentence a = b ∧ Cube(a) is true. Then a = b and Cube(a) are both true. Using the indiscernibility of identicals (Identity Elimination), we know that Cube(b) is true, and hence that a = b ∧ Cube(b) is true. So the truth of a = b ∧ Cube(a) logically implies the truth of a = b ∧ Cube(b).
The reverse holds as well. For suppose that a = b ∧ Cube(b) is true. Then by symmetry of identity, we also know b = a. From this and Cube(b) we can conclude Cube(a), and hence that a = b ∧ Cube(a) is true as well.
number of rows in
joint table
This proof shows that these two sentences have the same truth values in
any possible circumstance. For if one were true and the other false, this would
contradict the conclusion of one of the two parts of the proof. But consider
what happens when we construct a joint truth table for these sentences. Three
atomic sentences appear in the pair of sentences, so the joint table will look
like this. (Notice that the ordinary truth table for either of the sentences alone
would have only four rows, but that the joint table must have eight. Do you
understand why?)
a = b | Cube(a) | Cube(b) | a = b ∧ Cube(a) | a = b ∧ Cube(b)
  T   |    T    |    T    |        T        |        T
  T   |    T    |    F    |        T        |        F
  T   |    F    |    T    |        F        |        T
  T   |    F    |    F    |        F        |        F
  F   |    T    |    T    |        F        |        F
  F   |    T    |    F    |        F        |        F
  F   |    F    |    T    |        F        |        F
  F   |    F    |    F    |        F        |        F
This table shows that the two sentences are not tautologically equivalent, since it assigns the sentences different values in the second and third rows. Look closely at those two rows to see what's going on. Notice that in both of these rows, a = b is assigned T while Cube(a) and Cube(b) are assigned different truth values. Of course, we know that neither of these rows corresponds to a logically possible circumstance, since if a and b are identical, the truth values of Cube(a) and Cube(b) must be the same. But the truth table method doesn't detect this, since it is sensitive only to the meanings of the truth-functional connectives.
As we expand our language to include quantifiers, we will find many logical
equivalences that are not tautological equivalences. But this is not to say there aren't a lot of important and interesting tautological equivalences. We've
already highlighted three in the last chapter: double negation and the two
DeMorgan equivalences. We leave it to you to check that these principles are,
in fact, tautological equivalences. In the next section, we will introduce other
principles and see how they can be used to simplify sentences of fol.
Remember

Let S and S′ be sentences of fol built up from atomic sentences by means of truth-functional connectives alone. To test for tautological equivalence, we construct a joint truth table for the two sentences.

1. S and S′ are tautologically equivalent if and only if every row of the joint truth table assigns the same values to S and S′.
2. If S and S′ are tautologically equivalent, then they are logically equivalent.
3. Some logically equivalent sentences are not tautologically equivalent.
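The test in the Remember box is easy to mechanize. An illustrative Python sketch (the name taut_equivalent is ours, not part of the Boole software):

```python
from itertools import product

def taut_equivalent(s1, s2, n):
    # Joint truth table test: tautologically equivalent iff every row
    # assigns the two sentences the same truth value.
    return all(s1(*row) == s2(*row)
               for row in product((True, False), repeat=n))

# DeMorgan: ¬(A ∧ B) is tautologically equivalent to ¬A ∨ ¬B.
assert taut_equivalent(lambda a, b: not (a and b),
                       lambda a, b: (not a) or (not b), 2)
```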
Exercises
In Exercises 4.12-4.18, use Boole to construct joint truth tables showing that the pairs of sentences are
logically (indeed, tautologically) equivalent. To add a second sentence to your joint truth table, choose
Add Column After from the Table menu. Don't forget to specify your assessments, and remember,
you should build and fill in your own reference columns.
4.12 (DeMorgan) ¬(A ∧ B) and ¬A ∨ ¬B

4.13 (Associativity) (A ∧ B) ∧ C and A ∧ (B ∧ C)

4.14 (Associativity) (A ∨ B) ∨ C and A ∨ (B ∨ C)

4.15 (Idempotence) A ∧ B ∧ A and A ∧ B

4.16 (Idempotence) A ∨ B ∨ A and A ∨ B

4.17 (Distribution) A ∧ (B ∨ C) and (A ∧ B) ∨ (A ∧ C)

4.18 (Distribution) A ∨ (B ∧ C) and (A ∨ B) ∧ (A ∨ C)

4.19
(tw-equivalence) Suppose we introduced the notion of tw-equivalence, saying that two sentences of the blocks language are tw-equivalent if and only if they have the same truth value
in every world that can be constructed in Tarski's World.
1. What is the relationship between tw-equivalence, tautological equivalence and logical
equivalence?
2. Give an example of a pair of sentences that are tw-equivalent but not logically equivalent.
Section 4.3
tautological consequence
Our main concern in this book is with the logical consequence relation, of
which logical truth and logical equivalence can be thought of as very special
cases: A logical truth is a sentence that is a logical consequence of any set
of premises, and logically equivalent sentences are sentences that are logical
consequences of one another.
As you've probably guessed, truth tables allow us to define a precise notion
of tautological consequence, a strict form of logical consequence, just as they
allowed us to define tautologies and tautological equivalence, strict forms of
logical truth and logical equivalence.
Let's look at the simple case of two sentences, P and Q, both built from
atomic sentences by means of truth-functional connectives. Suppose you want
to know whether Q is a consequence of P. Create a joint truth table for P
and Q, just like you would if you were testing for tautological equivalence.
After you fill in the columns for P and Q, scan the columns under the main
connectives for these sentences. In particular, look at every row of the table in
which P is true. If each such row is also one in which Q is true, then Q is said
to be a tautological consequence of P. The truth table shows that if P is true,
then Q must be true as well, and that this holds simply due to the meanings
of the truth-functional connectives.
Just as tautologies are logically necessary, so too any tautological consequence Q of a sentence P must also be a logical consequence of P. We can
see this by proving that if Q is not a logical consequence of P, then it can't
possibly pass our truth table test for tautological consequence.
Proof: Suppose Q is not a logical consequence of P. Then by our definition of logical consequence, there must be a possible circumstance
in which P is true but Q is false. This circumstance will determine
truth values for the atomic sentences in P and Q, and these values
will correspond to a row in the joint truth table for P and Q, since
all possible assignments of truth values to the atomic sentences are
represented in the truth table. Further, since P and Q are built up
from the atomic sentences by truth-functional connectives, and since
the former is true in the original circumstance and the latter false,
P will be assigned T in this row and Q will be assigned F. Hence, Q
is not a tautological consequence of P.
Let's look at a very simple example. Suppose we wanted to check to see whether A ∨ B is a consequence of A ∧ B. The joint truth table for these sentences comes out like this.
A  B | A ∧ B | A ∨ B
T  T |   T   |   T
T  F |   F   |   T
F  T |   F   |   T
F  F |   F   |   F
When you compare the columns under these two sentences, you see that the sentences are most definitely not tautologically equivalent. No surprise. But we are interested in whether A ∧ B logically implies A ∨ B, and so the only rows we care about are those in which the former sentence is true. A ∧ B is only true in the first row, and A ∨ B is also true in that row. So this table shows that A ∨ B is a tautological consequence (and hence a logical consequence) of A ∧ B.
Notice that our table also shows that A ∧ B is not a tautological consequence of A ∨ B, since there are rows in which the latter is true and the former false. Does this show that A ∧ B is not a logical consequence of A ∨ B? Well, we have to be careful. A ∧ B is not in general a logical consequence of A ∨ B, but it might be in certain cases, depending on the sentences A and B. We'll ask you to come up with an example in the exercises.
Not every logical consequence of a sentence is a tautological consequence of that sentence. For example, the sentence a = c is a logical consequence of the sentence (a = b ∧ b = c), but it is not a tautological consequence of it. Think about the row that assigns T to the atomic sentences a = b and b = c, but F to the sentence a = c. Clearly this row, which prevents a = c from being a tautological consequence of (a = b ∧ b = c), does not respect the meanings of the atomic sentences out of which the sentences are built. It does not correspond to a genuinely possible circumstance, but the truth table method does not detect this.
The truth table method of checking tautological consequence is not restricted to just one premise. You can apply it to arguments with any number
of premises P1 , . . . , Pn and conclusion Q. To do so, you have to construct a
joint truth table for all of the sentences P1 , . . . , Pn and Q. Once you've done
this, you need to check every row in which the premises all come out true to
see whether the conclusion comes out true as well. If so, the conclusion is a
tautological consequence of the premises.
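This multi-premise test is also easy to mechanize. An illustrative Python sketch (the name taut_consequence is ours, not part of the Boole software):

```python
from itertools import product

def taut_consequence(premises, conclusion, n):
    # Q is a tautological consequence of the premises iff every row of
    # the joint truth table making all premises true also makes Q true.
    return all(conclusion(*row)
               for row in product((True, False), repeat=n)
               if all(p(*row) for p in premises))

# B follows tautologically from A ∨ B and ¬A; A ∧ B does not follow
# from A ∨ B alone.
assert taut_consequence([lambda a, b: a or b, lambda a, b: not a],
                        lambda a, b: b, 2)
assert not taut_consequence([lambda a, b: a or b],
                            lambda a, b: a and b, 2)
```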
Let's try this out on a couple of simple examples. First, suppose we want to check to see whether B is a consequence of the two premises A ∨ B and ¬A. The joint truth table for these three sentences comes out like this. (Notice that since one of our target sentences, the conclusion B, is atomic, we have simply repeated the reference column when this sentence appears again on the right.)
A  B | A ∨ B | ¬A | B
T  T |   T   | F  | T
T  F |   T   | F  | F
F  T |   T   | T  | T
F  F |   F   | T  | F
Scanning the columns under our two premises, A ∨ B and ¬A, we see that there is only one row where both premises come out true, namely the third. And in the third row, the conclusion B also comes out true. So B is indeed a tautological (and hence logical) consequence of these premises.
In both of the examples we've looked at so far, there has been only one row in which the premises all came out true. This makes the arguments easy to check for validity, but it's not at all something you can count on. For example, suppose we used the truth table method to check whether A ∨ C is a consequence of A ∨ ¬B and B ∨ C. The joint truth table for these three sentences looks like this.
A  B  C | A ∨ ¬B | B ∨ C | A ∨ C
T  T  T |   T    |   T   |   T
T  T  F |   T    |   T   |   T
T  F  T |   T    |   T   |   T
T  F  F |   T    |   F   |   T
F  T  T |   F    |   T   |   T
F  T  F |   F    |   T   |   F
F  F  T |   T    |   T   |   T
F  F  F |   T    |   F   |   F
Here, there are four rows in which the premises, A ∨ ¬B and B ∨ C, are both true: the first, second, third, and seventh. But in each of these rows the conclusion, A ∨ C, is also true. The conclusion is true in other rows as well, but we don't care about that. This inference, from A ∨ ¬B and B ∨ C to A ∨ C, is logically valid, and is an instance of an important pattern known in computer science as resolution.
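The resolution inference can be checked by brute force over all eight rows, as in this illustrative Python sketch (ours, not part of the text):

```python
from itertools import product

# Resolution: from A ∨ ¬B and B ∨ C, conclude A ∨ C.
# In every row where both premises are true, the conclusion is true.
for A, B, C in product((True, False), repeat=3):
    if (A or not B) and (B or C):
        assert A or C
```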
We should look at an example where the truth table method reveals that the conclusion is not a tautological consequence of the premises. Actually, the last truth table will serve this purpose. For this table also shows that the sentence A ∨ ¬B is not a tautological consequence of the two premises B ∨ C and A ∨ C. Can you find the row that shows this? (Hint: It's got to be the first, second, third, fifth, or seventh, since these are the rows in which B ∨ C and A ∨ C are both true.)
Remember
Let P1 , . . . , Pn and Q be sentences of fol built up from atomic sentences
by means of truth-functional connectives alone. Construct a joint truth
table for all of these sentences.
1. Q is a tautological consequence of P1 , . . . , Pn if and only if every row
that assigns T to each of P1 , . . . , Pn also assigns T to Q.
2. If Q is a tautological consequence of P1 , . . . , Pn , then Q is also a logical
consequence of P1 , . . . , Pn .
3. Some logical consequences are not tautological consequences.
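The test in this box is mechanical, so it can be sketched in a few lines of Python (a brute-force illustration only, not the procedure Boole or Fitch uses; representing sentences as Python functions from a row, a dict of truth values, to a truth value is an assumption of the example):

```python
from itertools import product

def taut_consequence(premises, conclusion, atoms):
    """Brute-force truth table check: the conclusion is a tautological
    consequence iff no row makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        row = dict(zip(atoms, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # this row is a counterexample
    return True

# The resolution inference: A ∨ ¬B and B ∨ C tautologically imply A ∨ C.
p1 = lambda row: row["A"] or not row["B"]
p2 = lambda row: row["B"] or row["C"]
print(taut_consequence([p1, p2], lambda row: row["A"] or row["C"], "ABC"))  # True
```

Running the same check with conclusion A ∨ B and premises B ∨ C and A ∨ C returns False, matching the counterexample row discussed in the text.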
Exercises
For each of the arguments below, use the truth table method to determine whether the conclusion is a
tautological consequence of the premises. Your truth table for Exercise 4.24 will be fairly large. It's good
for the soul to build a large truth table every once in a while. Be thankful you have Boole to help you.
(But make sure you build your own reference columns!)
4.20
!
4.21
!
Small(a) Small(b)
4.22
!
Large(a)
Cube(a) Dodec(a)
(Cube(a) Large(a)) (Dodec(a) Large(a))
4.23
!!
A ∨ B
B ∨ C
C ∨ D
A ∨ D
4.24
!!
A B C
C D
(B E)
D A E
4.25
"!
Give an example of two different sentences A and B in the blocks language such that A ∧ B is
a logical consequence of A ∨ B. [Hint: Note that A ∧ A is a logical consequence of A ∨ A, but
here we insist that A and B be distinct sentences.]
Section 4.3
Section 4.4
You try it
................................................................
1. Launch Fitch and open the file Taut Con 1. In this file you will find an
argument that has the same form as the argument in Exercise 4.23. (Ignore
the two goal sentences. We'll get to them later.) Move the focus slider to
the last step of the proof. From the Rule? menu, go down to the Con
submenu and choose Taut Con.
2. Now cite the three premises as support for this sentence and check the
step. The step will not check out since this sentence is not a tautological
consequence of the premises, as you discovered if you did Exercise 4.23,
which has the same form as this inference.
Use Taut Con to see if this sentence follows tautologically from the three
premises. Choose Verify Proof from the Proof menu. You will find that
although the step checks out, the goal does not. This is because we have
put a special constraint on your use of Taut Con in this exercise.
5. Choose View Goal Constraints from the Goal menu. You will find that
in this proof, you are allowed to use Taut Con, but can only cite two or
fewer support sentences when you use it. Close the goal window to get
back to the proof.
6. The sentence you entered also follows from the sentence immediately above
it plus just one of the three premises. Uncite the three premises and see
if you can get the step to check out citing just two sentences in support.
Once you succeed, verify the proof and save it as Proof Taut Con 1. Do
not close the proof, since it will be needed in the next You Try It.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
You are probably curious about the relationship between Taut Con and
Ana Con, and for that matter, what the other mysterious item on the Con
menu, FO Con, might do. These are in fact three increasingly strong methods
that Fitch uses to test for logical consequence. Taut Con is the weakest. It
checks to see whether the current step follows from the cited sentences in virtue
of the meanings of the truth-functional connectives. It ignores the meanings of
any predicates that appear in the sentence and, when we introduce quantifiers
into the language, it will ignore those as well.
To help you keep track of the information that Taut Con considers when
checking a step, Fitch has special goggles which obscure the information
that will not be considered when checking the step. As we said above, the
only things that matter when checking a step justified by Taut Con are
the propositional connectives in the formulae, and the pattern of occurrence
of the atomic formulae.
What this means is that the meanings of the predicate symbols and names
in the formulae do not matter, and Fitch's goggles obscure this information.
When you put the goggles on, every individual atomic formula involved in the
step appears as a block of color, hiding the particular atomic formula that is
present. Every occurrence of the same atomic formula will be represented by
the same color, and different formulae will have different colors.
You try it
................................................................
1. Return to the file Taut Con 1 from the previous You try it section, and
focus on the last step of the proof, which contains an application of Taut Con.
2. Click on the picture of a pair of goggles that appears to the right of the
rule name, and notice how the conclusion and the cited sentences change
into blocks of color.
3. You should see a more colorful version of this
Hungry(carl) Hungry(pris)
Home(max) Hungry(carl)
Hungry(carl) (Home(max) Hungry(pris))
Taut Con
Be sure to understand how the atomic formulas relate to their corresponding colors.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
FO Con, which stands for first-order consequence, pays attention to the
truth-functional connectives, the quantifiers, and the identity predicate when
it checks for consequence. FO Con would, for example, identify a = c as a
consequence of a = b ∧ b = c. It is stronger than Taut Con in the sense that
any consequence that Taut Con recognizes as valid will also be recognized
by FO Con. But it may take longer since it has to apply a more complex
procedure, thanks to identity and the quantifiers. After we get to quantifiers,
we'll talk more about the procedure it is applying.
The strongest rule of the three is Ana Con, which tries to recognize consequences due to truth-functional connectives, quantifiers, identity, and most
of the blocks language predicates. (Ana Con ignores Between and Adjoins,
simply for practical reasons.) Any inference that checks out using either Taut
Con or FO Con should, in principle, check out using Ana Con as well. In
practice, though, the procedure that Ana Con uses may bog down or run
out of memory in cases where the first two have no trouble.
As we said before, you should only use a procedure from the Con menu
when the exercise makes clear that the procedure is allowed in the solution.
Moreover, if an exercise asks you to use Taut Con, don't use FO Con or Ana
Con instead, even if these more powerful rules seem to work just as well. If
you are in doubt about which rules you are allowed to use, choose View Goal
Constraints from the Goal menu.
You try it
................................................................
1. Open the file Taut Con 2. You will find a proof containing ten steps whose
rules have not been specified.
2. Focus on each step in turn. You will find that the supporting steps have
already been cited. Convince yourself that the step follows from the cited
sentences. Is it a tautological consequence of the sentences cited? If so,
change the rule to Taut Con and see if you were right. If not, change it
to Ana Con and see if it checks out. (If Taut Con will work, make sure
you use it rather than the stronger Ana Con.)
3. When all of your steps check out using Taut Con or Ana Con, go back
and find the one step whose rule can be changed from Ana Con to the
weaker FO Con.
4. When each step checks out using the weakest Con rule possible, save your
proof as Proof Taut Con 2.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Exercises
4.26
!
If you skipped the You try it sections, go back and do them now. Submit the files Proof Taut
Con 1 and Proof Taut Con 2.
For each of the following arguments, decide whether the conclusion is a tautological consequence of the
premises. If it is, submit a proof that establishes the conclusion using one or more applications of Taut
Con. Do not cite more than two sentences at a time for any of your applications of Taut Con. If
the conclusion is not a consequence of the premises, submit a counterexample world showing that the
argument is not valid.
4.27
!
Cube(a) Cube(b)
Dodec(c) Dodec(d)
Cube(a) Dodec(c)
Cube(b) Dodec(d)
4.28
!
Large(a) Large(b)
Large(a) Large(c)
Large(a) (Large(b) Large(c))
4.29
!
Small(a) Small(b)
Small(b) Small(c)
Small(c) Small(d)
Small(d) Small(e)
Small(c)
4.30
!
Small(a) Small(e)
Section 4.5
substitution of logical equivalents
S(P) ⇔ S(Q)
This is known as the principle of substitution of logical equivalents.
We won't prove this principle at the moment, because it requires a proof
by induction, a style of proof we get to in a later chapter. But the observation
allows us to use a few simple equivalences to do some pretty amazing things.
For example, using only the two DeMorgan laws and double negation, we can
take any sentence built up with ¬, ∧, and ∨, and transform it into one where
¬ applies only to atomic sentences. Another way of expressing this is that any
sentence built out of atomic sentences using the three connectives ¬, ∧, and
∨ is logically equivalent to one built from literals using just ∧ and ∨.
To obtain such a sentence, you simply drive the ¬ in, switching ∧ to ∨,
∨ to ∧, and canceling any pair of ¬s that are right next to each other, not
separated by any parentheses. Such a sentence is said to be in negation normal
form or NNF. Here is an example of a derivation of the negation normal form
of a sentence. We use A, B, and C to stand for any atomic sentences of the
language.
¬((A ∧ B) ∨ ¬C)
⇔ ¬(A ∧ B) ∧ ¬¬C
⇔ ¬(A ∧ B) ∧ C
⇔ (¬A ∨ ¬B) ∧ C
In reading and giving derivations of this sort, remember that the symbol
⇔ is not itself a symbol of the first-order language, but a shorthand way of
saying that two sentences are logically equivalent. In this derivation, the first
step is an application of the first DeMorgan law to the whole sentence. The
second step applies double negation to the component ¬¬C. The final step is
an application of the second DeMorgan law to the component ¬(A ∧ B). The
sentence we end up with is in negation normal form, since the negation signs
apply only to atomic sentences.
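The drive-the-negations-in procedure is mechanical enough to sketch in Python (an illustration only; representing formulas as nested tuples such as ("not", P) and ("and", P, Q) is an assumption of the example):

```python
def nnf(f):
    """Drive negations inward using the DeMorgan laws and double negation.
    Formulas are atom strings or tuples ("not", P), ("and", P, Q), ("or", P, Q)."""
    if isinstance(f, str):
        return f
    op = f[0]
    if op in ("and", "or"):
        return (op, nnf(f[1]), nnf(f[2]))
    # op == "not": look at what is being negated
    g = f[1]
    if isinstance(g, str):
        return f                      # a literal is already in NNF
    if g[0] == "not":
        return nnf(g[1])              # double negation
    dual = "or" if g[0] == "and" else "and"
    return (dual, nnf(("not", g[1])), nnf(("not", g[2])))  # DeMorgan

# The derivation in the text: ¬((A ∧ B) ∨ ¬C) ⇔ (¬A ∨ ¬B) ∧ C
s = ("not", ("or", ("and", "A", "B"), ("not", "C")))
print(nnf(s))  # ('and', ('or', ('not', 'A'), ('not', 'B')), 'C')
```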
We end this section with a list of some additional logical equivalences
that allow us to simplify sentences in useful ways. You already constructed
truth tables for most of these equivalences in Exercises 4.13-4.16 at the end
of Section 4.2.
associativity

1. (Associativity of ∧) An fol sentence P ∧ (Q ∧ R) is logically equivalent
to (P ∧ Q) ∧ R, which is in turn equivalent to P ∧ Q ∧ R. That is,

P ∧ (Q ∧ R) ⇔ (P ∧ Q) ∧ R ⇔ P ∧ Q ∧ R

2. (Associativity of ∨) An fol sentence P ∨ (Q ∨ R) is logically equivalent
to (P ∨ Q) ∨ R, which is in turn equivalent to P ∨ Q ∨ R. That is,

P ∨ (Q ∨ R) ⇔ (P ∨ Q) ∨ R ⇔ P ∨ Q ∨ R
commutativity
idempotence
(A ∧ B) ∨ (C ∨ ((B ∧ A) ∧ B))
⇔ ((A ∧ B) ∨ C) ∨ ((B ∧ A) ∧ B)
⇔ ((A ∧ B) ∨ C) ∨ (B ∧ A ∧ B)
⇔ ((A ∧ B) ∨ C) ∨ (B ∧ A)
⇔ ((A ∧ B) ∨ C) ∨ (A ∧ B)
⇔ (A ∧ B) ∨ C
chain of equivalences
Remember
1. Substitution of equivalents: If P and Q are logically equivalent:

P ⇔ Q

then the results of substituting one for the other in the context of a
larger sentence are also logically equivalent:

S(P) ⇔ S(Q)
2. A sentence is in negation normal form (NNF) if all occurrences of ¬
apply directly to atomic sentences.
3. Any sentence built from atomic sentences using just ¬, ∧, and ∨ can
be put into negation normal form by repeated application of the DeMorgan laws and double negation.
4. Sentences can often be further simplified using the principles of associativity, commutativity, and idempotence.
Exercises
4.31
!
(Negation normal form) Use Tarski's World to open Turing's Sentences. You will find the following five sentences, each followed by an empty sentence position.
1. (Cube(a) Larger(a, b))
3. (Cube(a) Larger(b, a))
5. (Cube(a) Larger(a, b) a ≠ b)
7. (Tet(b) (Large(c) Smaller(d, e)))
9. Dodec(f) (Tet(b) Tet(f) Dodec(f))
In the empty positions, write the negation normal form of the sentence above it. Then build
any world where all of the names are in use. If you have gotten the negation normal forms
correct, each even numbered sentence will have the same truth value in your world as the odd
numbered sentence above it. Verify that this is so in your world. Submit the modified sentence
file as Sentences 4.31.
4.32
!
(Negation normal form) Use Tarski's World to open the file Sextus Sentences. In the odd
numbered slots, you will find the following sentences.
1. (Home(carl) Home(claire))
3. [Happy(max) (Likes(carl, claire) Likes(claire, carl))]
5. [(Home(max) Home(carl)) (Happy(max) Happy(carl))]
Use Double Negation and DeMorgan's laws to put each sentence into negation normal form in
the slot below it. Submit the modified file as Sentences 4.32.
In each of the following exercises, use associativity, commutativity, and idempotence to simplify the
sentence as much as you can using just these rules. Your answer should consist of a chain of logical
equivalences like the chain given on page 120. At each step of the chain, indicate which principle you
are using.
4.33
"
4.35
"
4.37
"
(A B) A
4.34
(A B) (C D) A
4.36
"
"
(B (A B C))
(A B) (B C)
(A B) C (B A) A
Section 4.6
distribution
We have seen that with a few simple principles of Boolean logic, we can
start with a sentence and transform it into a logically equivalent sentence
in negation normal form, one where all negations occur in front of atomic
sentences. We can improve on this by introducing the so-called distributive
laws. These additional equivalences will allow us to transform sentences into
what are known as conjunctive normal form (CNF) and disjunctive normal
form (DNF). These normal forms are quite important in certain applications
of logic in computer science, as we discuss in Chapter 17. We will also use
disjunctive normal form to demonstrate an important fact about the Boolean
connectives in Chapter 7.
Recall that in algebra you learned that multiplication distributes over addition: a × (b + c) = (a × b) + (a × c). The distributive laws of logic look formally similar.
disjunctive normal
form (DNF)
[(A ∨ B) ∧ C] ∨ [(A ∨ B) ∧ D]
⇔ (A ∧ C) ∨ (B ∧ C) ∨ [(A ∨ B) ∧ D]
⇔ (A ∧ C) ∨ (B ∧ C) ∨ (A ∧ D) ∨ (B ∧ D)
conjunctive normal
form (CNF)
[(A ∧ B) ∨ C] ∧ [(A ∧ B) ∨ D]
⇔ (A ∨ C) ∧ (B ∨ C) ∧ [(A ∧ B) ∨ D]
⇔ (A ∨ C) ∧ (B ∨ C) ∧ (A ∨ D) ∧ (B ∨ D)
¬(¬A ∨ ¬B) ∨ C
⇔ (¬¬A ∧ ¬¬B) ∨ C
⇔ (A ∧ B) ∨ C
⇔ (A ∨ C) ∧ (B ∨ C)
Now look at the above sentence again and notice that it passes both of
these tests (in the CNF case because it has no disjunction signs).
Remember
1. A sentence is in disjunctive normal form (DNF) if it is a disjunction
of one or more conjunctions of one or more literals.
2. A sentence is in conjunctive normal form (CNF) if it is a conjunction
of one or more disjunctions of one or more literals.
3. Distribution of ∧ over ∨ allows you to transform any sentence in negation normal form into disjunctive normal form.
4. Distribution of ∨ over ∧ allows you to transform any sentence in negation normal form into conjunctive normal form.
5. Some sentences are in both CNF and DNF.
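Distribution is also mechanical. Here is a Python sketch of the DNF half (an illustration only; it assumes the input is already in negation normal form, with literals as plain strings, and represents a DNF as a list of conjunctions, each a list of literals):

```python
def dnf(f):
    """Distribute ∧ over ∨. Formulas are ("and", P, Q), ("or", P, Q),
    or literal strings such as "A" and "¬A". Result: list of conjunctions."""
    if isinstance(f, str):
        return [[f]]
    op, p, q = f
    dp, dq = dnf(p), dnf(q)
    if op == "or":
        return dp + dq
    # op == "and": each conjunction of p pairs with each conjunction of q
    return [cp + cq for cp in dp for cq in dq]

# The text's example: [(A ∨ B) ∧ C] ∨ [(A ∨ B) ∧ D]
f = ("or", ("and", ("or", "A", "B"), "C"),
           ("and", ("or", "A", "B"), "D"))
print(dnf(f))  # [['A', 'C'], ['B', 'C'], ['A', 'D'], ['B', 'D']]
```

The CNF direction is symmetric: swap the roles of "and" and "or" and read the result as a conjunction of disjunctions.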
You try it
................................................................
1. Use Tarski's World to open the file DNF Example. In this file you will find
two sentences. The second sentence is the result of putting the first into
disjunctive normal form, so the two sentences are logically equivalent.
2. Build a world in which the sentences are true. Since they are equivalent,
you could try to make either one true, but you will find the second one
easier to work on.
3. Play the game for each sentence, committed correctly to the truth of the
sentence. You should be able to win both times. Count the number of steps
it takes you to win.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Exercises
4.38
If you skipped the You try it section, go back and do it now. Submit the file World DNF 1.
4.39
!
Open CNF Sentences. In this file you will find the following conjunctive normal form sentences
in the odd numbered positions, but you will see that the even numbered positions are blank.
1. (LeftOf(a, b) BackOf(a, b)) Cube(a)
3. Larger(a, b) (Cube(a) Tet(a) a = b)
5. (Between(a, b, c) Tet(a) Tet(b)) Dodec(c)
7. Cube(a) Cube(b) (Small(a) Small(b))
9. (Small(a) Medium(a)) (Cube(a) Dodec(a))
In the even numbered positions you should fill in a DNF sentence logically equivalent to the
sentence above it. Check your work by opening several worlds and checking to see that each
of your sentences has the same truth value as the one above it. Submit the modified file as
Sentences 4.39.
4.40
!
Open More CNF Sentences. In this file you will find the following sentences in every third
position.
1. [(Cube(a) Small(a)) (Cube(a) Small(a))]
4. [(Cube(a) Small(a)) (Cube(a) Small(a))]
7. (Cube(a) Larger(a, b)) Dodec(b)
10. (Cube(a) Tet(b))
13. Cube(a) Tet(b)
The two blanks that follow each sentence are for you to first transform the sentence into negation
normal form, and then put that sentence into CNF. Again, check your work by opening several
worlds to see that each of your sentences has the same truth value as the original. When you
are finished, submit the modified file as Sentences 4.40.
In Exercises 4.41-4.43, use a chain of equivalences to convert each sentence into an equivalent sentence
in disjunctive normal form. Simplify your answer as much as possible using the laws of associativity,
commutativity, and idempotence. At each step in your chain, indicate which principle you are applying.
Assume that A, B, C, and D are literals.
4.41
"
4.43
"
C (A (B C))
4.42
"
B (A B (A B (B C)))
A (A (B (A C)))
Chapter 5
limitations of truth
table methods
Truth tables give us powerful techniques for investigating the logic of the
Boolean operators. But they are by no means the end of the story. Truth
tables are fine for showing the validity of simple arguments that depend only
on truth-functional connectives, but the method has two very significant limitations.
First, truth tables get extremely large as the number of atomic sentences
goes up. An argument involving seven atomic sentences is hardly unusual, but
testing it for validity would call for a truth table with 2^7 = 128 rows. Testing
an argument with 14 atomic sentences, just twice as many, would take a table
containing over 16 thousand rows. You could probably get a Ph.D. in logic for
building a truth table that size. This exponential growth severely limits the
practical value of the truth table method.
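The arithmetic behind this growth is easy to check:

```python
# Each additional atomic sentence doubles the number of rows.
for n in (7, 14, 20):
    print(n, "atomic sentences need", 2 ** n, "rows")  # 128, 16384, 1048576
```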
The second limitation is, surprisingly enough, even more significant. Truth
table methods can't be easily extended to reasoning whose validity depends
on more than just truth-functional connectives. As you might guess from the
artificiality of the arguments looked at in the previous chapter, this rules out
most kinds of reasoning you'll encounter in everyday life. Ordinary reasoning
relies heavily on the logic of the Boolean connectives, make no mistake about
that. But it also relies on the logic of other kinds of expressions. Since the
truth table method detects only tautological consequence, we need a method
of applying Boolean logic that can work along with other valid principles of
reasoning.
Methods of proof, both formal and informal, give us the required extensibility. In this chapter we will discuss legitimate patterns of inference that
arise when we introduce the Boolean connectives into a language, and show
how to apply the patterns in informal proofs. In Chapter 6, well extend our
formal system with corresponding rules. The key advantage of proof methods
over truth tables is that well be able to use them even when the validity of
our proof depends on more than just the Boolean operators.
The Boolean connectives give rise to many valid patterns of inference.
Some of these are extremely simple, like the entailment from the sentence
P ∧ Q to P. These we will refer to as valid inference steps, and will discuss
them briefly in the first section. Much more interesting are two new methods
of proof that are allowed by the new expressions: proof by cases and proof by
contradiction. We will discuss these later, one at a time.
Section 5.1
conjunction
elimination
(simplification)
conjunction
introduction
disjunction
introduction
good thing, since we would get pretty tired of reading if everyone wrote with
the very same style. So too in giving proofs. If you go on to study mathematics, you will read lots of proofs, and you will find that every writer has his or
her own style. You will even develop a style of your own.
Every step in a good proof, besides being correct, should have two properties. It should be easily understood and significant. By easily understood
we mean that other people should be able to follow the step without undue
difficulty: they should be able to see that the step is valid without having to
engage in a piece of complex reasoning of their own. By significant we mean
that the step should be informative, not a waste of the reader's time.
These two criteria pull in opposite directions. Typically, the more significant the step, the harder it is to follow. Good style requires a reasonable
balance between the two. And that in turn requires some sense of who your
audience is. For example, if you and your audience have been working with
logic for a while, you will recognize a number of equivalences that you will
want to use without further proof. But if you or your audience are beginners,
the same inference may require several steps.
Remember
1. In giving an informal proof from some premises, if Q is already
known to be a logical consequence of sentences P1 , . . . , Pn and each of
P1 , . . . , Pn has been proven from the premises, then you may assert Q
in your proof.
2. Each step in an informal proof should be significant but easily understood.
3. Whether a step is significant or easily understood depends on the
audience to whom it is addressed.
4. The following are valid patterns of inference that generally go unmentioned in informal proofs:
From P ∧ Q, infer P.
Exercises
In the following exercises we list a number of patterns of inference, only some of which are valid. For
each pattern, determine whether it is valid. If it is, explain why it is valid, appealing to the truth tables
for the connectives involved. If it is not, give a specific example of how the step could be used to get from
true premises to a false conclusion.
5.1
"
5.3
"
5.5
"
5.2
5.4
5.6
"
"
"!
Section 5.2
Proof by cases
The simple forms of inference discussed in the last section are all instances of
the principle that you can use already established cases of logical consequence
in informal proofs. But the Boolean connectives also give rise to two entirely
new methods of proof, methods that are explicitly applied in all types of
rigorous reasoning. The first of these is the method of proof by cases. In our
formal system F, this method will be called disjunction elimination, but don't
be misled by the ordinary sounding name: it is far more significant than, say,
disjunction introduction or conjunction elimination.
We begin by illustrating proof by cases with a well-known piece of mathematical reasoning. The reasoning proves that there are irrational numbers b
and c such that b^c is rational. First, let's review what this means. A number
is said to be rational if it can be expressed as a fraction n/m, for integers
n and m. If it can't be so expressed, then it is irrational. Thus 2 is rational
(2 = 2/1), but √2 is irrational. (We will prove this latter fact in the next section, to illustrate proof by contradiction; for now, just take it as a well-known
truth.) Here now is our proof:

Proof: To show that there are irrational numbers b and c such that
b^c is rational, we will consider the number √2^√2. We note that this
number is either rational or irrational.
If √2^√2 is rational, then we have found our b and c; namely, we take
b = c = √2.
Suppose, on the other hand, that √2^√2 is irrational. Then we take
b = √2^√2 and c = √2. In this case, b^c = (√2^√2)^√2 = √2^(√2·√2) = √2^2 = 2,
which is certainly rational.
proof by cases
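The identity at the heart of the second case can be checked numerically; the following is a floating-point sanity check of the arithmetic only, not a substitute for the proof:

```python
import math

# If √2^√2 is irrational, take b = √2^√2 and c = √2; then
# b^c = (√2^√2)^√2 = √2^(√2·√2) = √2^2 = 2, which is rational.
r2 = math.sqrt(2)
b = r2 ** r2
print(b ** r2)                      # approximately 2.0
assert math.isclose(b ** r2, 2.0)
```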
Exercises
The next two exercises present valid arguments. Turn in informal proofs of the arguments' validity. Your
proofs should be phrased in complete, well-formed English sentences, making use of first-order sentences
as convenient, much in the style we have used above. Whenever you use proof by cases, say so. You don't
have to be explicit about the use of simple proof steps like conjunction elimination. By the way, there is
typically more than one way to prove a given result.
5.7
Home(max) ∨ Home(claire)
Home(max) → Happy(carl)
Home(claire) → Happy(carl)
"
Happy(carl)
5.8
"
LeftOf(a, b) ∨ RightOf(a, b)
BackOf(a, b) ∨ ¬LeftOf(a, b)
FrontOf(b, a) ∨ ¬RightOf(a, b)
SameCol(c, a) ∨ SameRow(c, b)
BackOf(a, b)
5.9
!|"
5.10
"
5.11
"
5.12
"
5.13
"
5.14
"!
Assume the same four premises as in Exercise 5.8. Is LeftOf(b, c) a logical consequence of
these premises? If so, turn in an informal proof of the argument's validity. If not, submit a
counterexample world.
Suppose Max's favorite basketball team is the Chicago Bulls and his favorite football team is the
Denver Broncos. Max's father John is returning from Indianapolis to San Francisco on United
Airlines, and promises that he will buy Max a souvenir from one of his favorite teams on the
way. Explain John's reasoning, appealing to the annoying fact that all United flights between
Indianapolis and San Francisco stop in either Denver or Chicago. Make explicit the role proof
by cases plays in this reasoning.
Suppose the police are investigating a burglary and discover the following facts. All the doors
to the house were bolted from the inside and show no sign of forced entry. In fact, the only
possible ways in and out of the house were a small bathroom window on the first floor that
was left open and an unlocked bedroom window on the second floor. On the basis of this, the
detectives rule out a well-known burglar, Julius, who weighs two hundred and fifty pounds and
is arthritic. Explain their reasoning.
In our proof that there are irrational numbers b and c where b^c is rational, one of our steps
was to assert that √2^√2 is either rational or irrational. What justifies the introduction of this
claim into our proof?
Describe an everyday example of reasoning by cases that you have performed in the last few
days.
Give an informal proof that if S is a tautological consequence of P and a tautological consequence of Q, then S is a tautological consequence of P ∨ Q. Remember that the joint truth
table for P ∨ Q and S may have more rows than either the joint truth table for P and S, or the
joint truth table for Q and S. [Hint: Assume you are looking at a single row of the joint truth
table for P ∨ Q and S in which P ∨ Q is true. Break into cases based on whether P is true or Q
is true and prove that S must be true in either case.]
Section 5.3
Proof: With an eye toward getting a contradiction, we will assume
that √2 is rational, and so can be expressed in the form p/q, where at
least one of p and q is odd. Since p/q = √2, we can square both sides
to get:

p²/q² = 2
Multiplying both sides by q², we get p² = 2q². But this shows that
p² is an even number. As we noted before, this allows us to conclude
that p is even and that p² is divisible by 4. Looking again at the
equation p² = 2q², we see that if p² is divisible by 4, then 2q² is
divisible by 4 and hence q² must be divisible by 2. In which case, q is
even as well. So both p and q are even, contradicting the fact that at
least one of them is odd. Thus, our assumption that √2 is rational
led us to a contradiction, and so we conclude that it is irrational.
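The proof is about all integers, so no finite search can establish it, but a small search (an illustration only) turns up no counterexample to p² ≠ 2q² among small integers:

```python
def root_two_fraction_witness(bound):
    """Search for integers p, q with p^2 == 2 q^2, for q up to `bound`.
    The proof above shows no such pair exists; the search comes back empty."""
    for q in range(1, bound + 1):
        for p in range(1, 2 * q + 1):   # p/q = √2 < 2 forces p < 2q
            if p * p == 2 * q * q:
                return (p, q)
    return None

print(root_two_fraction_witness(500))   # None
```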
contradiction
contradiction symbol (⊥)
Similarly, the truth table method gives us a way of showing that a collection of sentences are mutually contradictory. Construct a joint truth table
for P1 , . . . , Pn . These sentences are tt-contradictory if every row has an F assigned to at least one of the sentences. If the sentences are tt-contradictory,
we know they cannot all be true at once, simply in virtue of the meanings
of the truth functional connectives out of which they are built. We have already mentioned one such example: any pair of sentences, one of which is the
negation of the other.
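The tt-contradictory test is the same kind of brute-force table check as tautological consequence; here is a Python sketch (an illustration only, with sentences represented as functions from an assignment dict to a truth value):

```python
from itertools import product

def tt_contradictory(sentences, atoms):
    """True iff every row of the joint truth table makes at least one of
    the sentences false, so they can never all be true at once."""
    return all(
        not all(s(dict(zip(atoms, row))) for s in sentences)
        for row in product([True, False], repeat=len(atoms))
    )

# A sentence together with its negation is tt-contradictory ...
s = lambda row: row["A"] and row["B"]
print(tt_contradictory([s, lambda row: not s(row)], "AB"))  # True
# ... but A ∨ B and ¬A are jointly satisfiable (take A false, B true).
print(tt_contradictory([lambda row: row["A"] or row["B"],
                        lambda row: not row["A"]], "AB"))   # False
```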
The method of proof by contradiction, like proof by cases, is often encountered in everyday reasoning, though the derived contradiction is sometimes
left implicit. People will often assume a claim for the sake of argument and
then show that the assumption leads to something else that is known to be
false. They then conclude the negation of the original claim. This sort of reasoning is in fact an indirect proof: the inconsistency becomes explicit if we
add the known fact to our set of premises.
Let's look at an example of this kind of reasoning. Imagine a defense
attorney presenting the following summary to the jury:
tt-contradictory
The prosecution claims that my client killed the owner of the KitKat
Club. Assume that they are correct. You've heard their own experts
testify that the murder took place at 5:15 in the afternoon. We also
know the defendant was still at work at City Hall at 4:45, according
to the testimony of five co-workers. It follows that my client had to
get from City Hall to the KitKat Club in 30 minutes or less. But
to make that trip takes 35 minutes under the best of circumstances,
and police records show that there was a massive traffic jam the day
of the murder. I submit that my client is innocent.
Clearly, reasoning like this is used all the time: whenever we assume something and then rule out the assumption on the basis of its consequences.
Sometimes these consequences are not contradictions, or even things that we
know to be false, but rather future consequences that we consider unacceptable. You might for example assume that you will go to Hawaii for spring
break, calculate the impact on your finances and ability to finish the term
papers coming due, and reluctantly conclude that you can't make the trip.
When you reason like this, you are using the method of indirect proof.
Remember
Proof by contradiction: To prove ¬S using this method, assume S and
prove a contradiction ⊥.
Exercises
In the following exercises, decide whether the displayed argument is valid. If it is, turn in an informal proof, phrased in complete, well-formed English sentences, making use of first-order sentences as
convenient. Whenever you use proof by cases or proof by contradiction, say so. You don't have to be
explicit about the use of simple proof steps like conjunction elimination. If the argument is invalid, construct a counterexample world in Tarski's World. (Argument 5.16 is valid, and so will not require a
counterexample.)
5.15
b is a tetrahedron.
c is a cube.
Either c is larger than b or else they
are identical.
!|"
5.16
"
b is smaller than c.
5.17
!|"
5.18
!|"
a = b ∨ a = c
5.19
"
5.20
"
Suppose it is Friday night and you are going out with your boyfriend. He wants to see a romantic
comedy, while you want to see the latest Wes Craven slasher movie. He points out that if he
watches the Wes Craven movie, he will not be able to sleep because he can't stand the sight of
blood, and he has to take the MCAT test tomorrow. If he does not do well on the MCAT, he
won't get into medical school. Analyze your boyfriend's argument, pointing out where indirect
proof is being used. How would you rebut his argument?
5.21
Describe an everyday example of an indirect proof that you have used in the last few days.
"
5.22
"!
Prove that indirect proof is a tautologically valid method of proof. That is, show that if
P1, ..., Pn, ¬S is tt-contradictory, then S is a tautological consequence of P1, ..., Pn.
In the next three exercises we ask you to prove simple facts about the natural numbers. We do not expect
you to phrase the proofs in fol. You will have to appeal to basic facts of arithmetic plus the definitions
of even and odd number. This is OK, but make these appeals explicit. Also make explicit any use of proof
by contradiction.
5.23
"
5.26
"!!
Assume that n² is
odd. Prove that n is
odd.
5.24
5.25
Assume that n² is
divisible by 3. Prove
that n² is divisible
by 9.
A good way to make sure you understand a proof is to try to generalize it. Prove that √3 is
irrational. [Hint: You will need to figure out some facts about divisibility by 3 that parallel the
facts we used about even and odd, for example, the fact expressed in Exercise 5.25.] Can you
generalize these two results?
!
Assume that n + m
is odd. Prove that
n m is even.
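The arithmetic facts behind these exercises are easy to spot-check numerically before attempting an informal proof. A quick brute-force sketch (the bounds 200 and 100 are arbitrary):

```python
# Spot-check the facts behind Exercises 5.23-5.25 on small numbers.
for n in range(200):
    if (n * n) % 2 == 1:            # n^2 odd ...
        assert n % 2 == 1           # ... forces n odd (5.23)
    if (n * n) % 3 == 0:            # n^2 divisible by 3 ...
        assert (n * n) % 9 == 0     # ... forces n^2 divisible by 9 (5.25)

for n in range(100):
    for m in range(100):
        if (n + m) % 2 == 1:        # n + m odd ...
            assert (n * m) % 2 == 0 # ... forces n * m even (5.24)

print("all checks passed")
```

A numerical check is no substitute for the proof, of course; it only tells you that the claim is worth trying to prove.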
Section 5.4
Arguments with inconsistent premises
always valid
never sound
much on its own. After all, the reason we are interested in logical consequence
is because of its relation to truth. If the premises can't possibly be true, then
even knowing that the argument is valid gives us no clue as to the truth or
falsity of the conclusion. An unsound argument gives no more support for its
conclusion than an invalid one.
In general, methods of proof don't allow us to show that an argument
is unsound. After all, the truth or falsity of the premises is not a matter of
logic, but of how the world happens to be. But in the case of arguments with
inconsistent premises, our methods of proof do give us a way to show that at
least one of the premises is false (though we might not know which one), and
hence that the argument is unsound. To do this, we prove that the premises
are inconsistent by deriving a contradiction.
Suppose, for example, you are given a proof that the following argument
is valid:
Home(max) ∨ Home(claire)
¬Home(max)
¬Home(claire)
----------
Home(max) ∧ Happy(carl)
While it is true that this conclusion is a consequence of the premises, your
reaction should not be to believe the conclusion. Indeed, using proof by cases
we can show that the premises are inconsistent, and hence that the argument
is unsound. There is no reason to be convinced of the conclusion of an unsound
argument.
Remember
A proof of a contradiction from premises P1 , . . . , Pn (without additional assumptions) shows that the premises are inconsistent. An argument with inconsistent premises is always valid, but more importantly,
always unsound.
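The point of the Remember box can be confirmed by brute force. A sketch in Python, treating Home(max), Home(claire), and Happy(carl) as three independent atoms (the premise shapes follow the argument discussed above):

```python
from itertools import product

# All eight truth assignments to the three atoms.
rows = [dict(zip(["hm", "hc", "hap"], vals))
        for vals in product([True, False], repeat=3)]

premises = [
    lambda v: v["hm"] or v["hc"],   # Home(max) or Home(claire)
    lambda v: not v["hm"],          # not Home(max)
    lambda v: not v["hc"],          # not Home(claire)
]
conclusion = lambda v: v["hm"] and v["hap"]  # Home(max) and Happy(carl)

# Consistent: some row satisfies every premise.
consistent = any(all(p(v) for p in premises) for v in rows)
# Valid: every row satisfying the premises satisfies the conclusion
# (vacuously true when no row satisfies the premises).
valid = all(conclusion(v) for v in rows if all(p(v) for p in premises))

print(consistent, valid)  # False True: unsatisfiable premises, vacuously valid
```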
Exercises
5.27
Give two different proofs that the premises of the above argument are inconsistent. Your first should use proof by cases but not DeMorgan's law, while your second can use DeMorgan but not proof by cases.
Chapter 6
Formal Proofs and Boolean Logic
natural deduction
introduction and
elimination rules
elimination rules is a bit more generous in spirit than F. It doesn't allow you to do anything that F wouldn't permit, but there are cases where Fitch will let you do in one step what might take several in F. Also, many of Fitch's rules have default applications that can save you a lot of time. If you want the default use of some rule, all you have to do is specify the rule and cite the step or steps you are applying it to; Fitch will then fill in the appropriate conclusion for you. Similarly, if you have filled in the formula and rule, Fitch can sometimes add appropriate support steps for you via the Add Support Steps command. At the end of each section below we'll explain the default uses of the rules introduced in that section.
rule defaults
Section 6.1
Conjunction rules
The simplest principles to formalize are those that involve the conjunction symbol ∧. These are the rules of conjunction elimination and conjunction introduction.
Conjunction elimination
The rule of conjunction elimination allows you to assert any conjunct Pi of a conjunctive sentence P1 ∧ . . . ∧ Pi ∧ . . . ∧ Pn that you have already derived in the proof. (Pi can, by the way, be any conjunct, including the first or the last.) You justify the new step by citing the step containing the conjunction. We abbreviate this rule with the following schema:

Conjunction Elimination (∧ Elim):

P1 ∧ . . . ∧ Pi ∧ . . . ∧ Pn
⋮
▷ Pi
You try it
................................................................
1. Open the file Conjunction 1. There are three sentences that you are asked
to prove. They are shown in the goal strip at the bottom of the proof
window as usual.
2. The first sentence you are to prove is Tet(a). To do this, first add a new
step to the proof and write the sentence Tet(a).
3. Next, go to the popup Rule? menu and under the Elimination Rules, choose ∧.
4. If you try to check this step, you will see that it fails, because you have not yet cited any sentences in support of the step. In this example, you need to cite the single premise in support. Do this and then check the step.
5. You should be able to prove each of the other sentences similarly, by means of a single application of ∧ Elim. When you have proven these sentences, check your goals and save the proof as Proof Conjunction 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Conjunction introduction
The corresponding introduction rule, conjunction introduction, allows you to assert a conjunction P1 ∧ . . . ∧ Pn provided you have already established each of its constituent conjuncts P1 through Pn. We will symbolize this rule in the following way:

Conjunction Introduction (∧ Intro):

P1
⋮
Pn
⋮
▷ P1 ∧ . . . ∧ Pn
In this rule, we have used the notation:

P1
⋮
Pn

to indicate that each of P1 through Pn must appear in the proof before you can assert their conjunction. The order in which they appear does not matter, and they do not have to appear one right after another. They just need to appear somewhere earlier in the proof.
Here is a simple example of our two conjunction rules at work together. It is a proof of C ∧ B from A ∧ B ∧ C.
1. A ∧ B ∧ C
2. B          ∧ Elim: 1
3. C          ∧ Elim: 1
4. C ∧ B      ∧ Intro: 3, 2
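The little proof above can also be double-checked semantically: every truth assignment that makes A ∧ B ∧ C true makes C ∧ B true. A quick Python sketch:

```python
from itertools import product

# All eight truth assignments to the atoms A, B, C.
rows = [dict(zip("ABC", vals)) for vals in product([True, False], repeat=3)]

premise = lambda v: v["A"] and v["B"] and v["C"]
conclusion = lambda v: v["C"] and v["B"]

# Tautological consequence: no row makes the premise true
# and the conclusion false.
assert all(conclusion(v) for v in rows if premise(v))
print("C and B is a tautological consequence of A and B and C")
```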
You try it
................................................................
1. Open the file Conjunction 2. We will help you prove the two sentences
requested in the goals. You will need to use both of the conjunction rules
in each case.
2. The first goal is Medium(d) ∧ Large(c). Add a new step and enter this sentence. (Remember that you can copy the sentence from the goal pane and paste it into the new step. It's faster than typing it in.)
3. Above the step you just created, add two more steps, typing one of the conjuncts in each. If you can prove these, then the conclusion will follow by ∧ Intro. Show this by choosing this rule at the conjunction step and citing the two conjuncts in support.
4. Now all you need to do is prove each of the conjuncts. This is easily done using the rule ∧ Elim at each of these steps. Do this, cite the appropriate support sentences, and check the proof. The first goal should check out.
5. Prove the second goal sentence similarly. Once both goals check out, save
your proof as Proof Conjunction 2.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Default and generous uses of the rules
As we said, Fitch is generous in its interpretation of the inference rules of F. For example, Fitch considers the following to be an acceptable use of ∧ Elim:

17. Tet(a) ∧ Tet(b) ∧ Tet(c) ∧ Tet(d)
⋮
26. Tet(d) ∧ Tet(b)      ∧ Elim: 17

What we have done here is pick two of the conjuncts from step 17 and assert the conjunction of these in step 26. Technically, F would require us to derive the two conjuncts separately and, like Humpty Dumpty, put them back together again. Fitch does this for us.
Since Fitch lets you take any collection of conjuncts in the cited sentence and assert their conjunction in any order, Fitch's interpretation of ∧ Elim allows you to prove that conjunction is commutative. In other words, you can use it to take a conjunction and reorder its conjuncts however you please:

13. Tet(a) ∧ Tet(b)
⋮
21. Tet(b) ∧ Tet(a)      ∧ Elim: 13
You try it
................................................................
1. Open the file Conjunction 3. Notice that there are two goals. The first goal asks you to prove Tet(c) ∧ Tet(a) from the premise. Strictly speaking, this would take two uses of ∧ Elim followed by one use of ∧ Intro. However, Fitch lets you do this with a single use of ∧ Elim. Try this and then check the step.
2. Verify that the second goal sentence also follows by a single application of Fitch's rule of ∧ Elim. When you have proven these sentences, check your goals and save the proof as Proof Conjunction 3.
3. Next try out other sentences to see whether they follow from the given sentence by ∧ Elim. For example, does Tet(c) ∧ Small(a) follow? Should it?
4. When you are satisfied you understand conjunction elimination, close the file, but don't save the changes you made in step 3.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
The ∧ Intro rule implemented in Fitch is also less restrictive than our discussion of the formal rule might suggest. First of all, Fitch does not care about the order in which you cite the supporting sentences. Second, if you cite a sentence, that sentence can appear more than once as a conjunct in the concluding sentence. For example, you can use this rule to conclude Cube(a) ∧ Cube(a) from the sentence Cube(a), if you want to for some reason.
Both of the conjunction rules have default uses. If at a new step you cite a conjunction and specify the rule as ∧ Elim, then when you check the step (or choose Check Proof), Fitch will fill in the blank step with the leftmost conjunct in the cited sentence. If you cite several sentences and apply ∧ Intro, Fitch will fill in the conjunction of those steps, ordering conjuncts in the same order they were cited.
default uses of
conjunction rules
You try it
................................................................
2. Move the focus to the first blank step, the one immediately following the
premises. Notice that this step has a rule specified, as well as a support
sentence cited. Check the step to see what default Fitch generates.
3. Then, focus on each successive step, try to predict what the default will
be, and check the step. (The last two steps give different results because
we entered the support steps in different orders.)
4. When you have checked all the steps, save your proof as Proof Conjunction 4.
5. Feel free to experiment with the rule defaults some more, to see when they
are useful.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
parentheses and
conjunction rules
You can use the Add Support Steps command (found on the Proof
menu) with either of the conjunction rules. In either case you must have
chosen a rule, and have entered a formula in the focus step. In the case of ∧ Elim, a single support step will be created, and this step will contain the formula at the focus step, followed by a conjunction symbol to indicate that you must enter more conjuncts to complete the support formula. If Add Support Steps is used with the ∧ Intro rule, and the focus formula is a conjunction, then one support step is introduced for each conjunct of the focus formula.
One final point: In applying conjunction introduction, you will sometimes
have to be careful about parentheses, due to our conventions about dropping
outermost parentheses. If one of the conjuncts is itself a conjunction, then
of course there is no need to add any parentheses before forming the larger
conjunction, unless you want to. For example, the following are both correct
applications of the rule. (The first is what Fitch's default mechanism would give you.)
Correct:
1. A ∧ B
2. C
3. (A ∧ B) ∧ C      ∧ Intro: 1, 2

Correct:
1. A ∧ B
2. C
3. A ∧ B ∧ C        ∧ Intro: 1, 2

Correct:
1. A ∨ B
2. C
3. (A ∨ B) ∧ C      ∧ Intro: 1, 2

Wrong:
1. A ∨ B
2. C
3. A ∨ B ∧ C        ∧ Intro: 1, 2
Section 6.2
Disjunction rules
We know: the conjunction rules were boring. Not so the disjunction rules,
particularly disjunction elimination.
Disjunction introduction
The rule of disjunction introduction allows you to go from a sentence Pi to any disjunction that has Pi among its disjuncts, say P1 ∨ . . . ∨ Pi ∨ . . . ∨ Pn. In schematic form:

Disjunction Introduction (∨ Intro):

Pi
⋮
▷ P1 ∨ . . . ∨ Pi ∨ . . . ∨ Pn
Once again, we stress that Pi may be the first or last disjunct of the conclusion.
Further, as with conjunction introduction, some thought ought to be given to
whether parentheses must be added to Pi to prevent ambiguity.
As we explained in Chapter 5, disjunction introduction is a less peculiar
rule than it may at first appear. But before we look at a sensible example of
how it is used, we need to have at our disposal the second disjunction rule.
Disjunction elimination
subproofs
temporary
assumptions
We now come to the first rule that corresponds to what we called a method
of proof in the last chapter. This is the rule of disjunction elimination, the
formal counterpart of proof by cases. Recall that proof by cases allows you
to conclude a sentence S from a disjunction P1 ∨ . . . ∨ Pn if you can prove
S from each of P1 through Pn individually. The form of this rule requires us
to discuss an important new structural feature of the Fitch-style system of
deduction. This is the notion of a subproof.
A subproof, as the name suggests, is a proof that occurs within the context
of a larger proof. As with any proof, a subproof generally begins with an assumption, separated from the rest of the subproof by the Fitch bar. But the
assumption of a subproof, unlike a premise of the main proof, is only temporarily assumed. Throughout the course of the subproof itself, the assumption acts
just like an additional premise. But after the subproof, the assumption is no
longer in force.
Before we give the schematic form of disjunction elimination, let's look at
a particular proof that uses the rule. This will serve as a concrete illustration
of how subproofs appear in F.
1. (A ∧ B) ∨ (C ∧ D)
2.    A ∧ B
3.    B              ∧ Elim: 2
4.    B ∨ D          ∨ Intro: 3
5.    C ∧ D
6.    D              ∧ Elim: 5
7.    B ∨ D          ∨ Intro: 6
8. B ∨ D             ∨ Elim: 1, 2–4, 5–7
Disjunction Elimination (∨ Elim):

P1 ∨ . . . ∨ Pn
   P1
   ⋮
   S
⋮
   Pn
   ⋮
   S
▷ S
What this says is that if you have established a disjunction P1 ∨ . . . ∨ Pn, and you have also shown that S follows from each of the disjuncts P1 through Pn, then you can conclude S. Again, it does not matter what order the subproofs appear in, or even that they come after the disjunction. When applying the rule, you will cite the step containing the disjunction, plus each of the required subproofs.
Let's look at another example of this rule, to emphasize how justifications involving subproofs are given. Here is a proof showing that A follows from the sentence (B ∧ A) ∨ (A ∧ C).

1. (B ∧ A) ∨ (A ∧ C)
2.    B ∧ A
3.    A            ∧ Elim: 2
4.    A ∧ C
5.    A            ∧ Elim: 4
6. A               ∨ Elim: 1, 2–3, 4–5

The citation for step 6 shows the form we use when citing subproofs. The citation n–m is our way of referring to the subproof that begins on line n and ends on line m.
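Disjunction elimination mirrors a semantic fact: if each disjunct entails S, then so does the disjunction. For the proof just given, a truth-table sketch in Python:

```python
from itertools import product

rows = [dict(zip("ABC", vals)) for vals in product([True, False], repeat=3)]

def entails(premise, conclusion):
    """True if every row satisfying premise also satisfies conclusion."""
    return all(conclusion(v) for v in rows if premise(v))

case1 = lambda v: v["B"] and v["A"]            # B and A
case2 = lambda v: v["A"] and v["C"]            # A and C
disjunction = lambda v: case1(v) or case2(v)   # (B and A) or (A and C)
goal = lambda v: v["A"]

# Each case entails the goal, hence so does the disjunction.
assert entails(case1, goal) and entails(case2, goal)
assert entails(disjunction, goal)
print("A follows from (B and A) or (A and C)")
```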
Sometimes, in using disjunction elimination, you will find it natural to use the reiteration rule introduced in Chapter 3. For example, suppose we modify the above proof to show that A follows from (B ∧ A) ∨ A.

1. (B ∧ A) ∨ A
2.    B ∧ A
3.    A            ∧ Elim: 2
4.    A
5.    A            Reit: 4
6. A               ∨ Elim: 1, 2–3, 4–5
You try it
................................................................
1. Open the file Disjunction 1. In this file, you are asked to prove Medium(c) ∨ Large(c) from the premise (Cube(c) ∧ Large(c)) ∨ Medium(c). The completed proof will look like this:

1. (Cube(c) ∧ Large(c)) ∨ Medium(c)
2.    Cube(c) ∧ Large(c)
3.    Large(c)                     ∧ Elim: 2
4.    Medium(c) ∨ Large(c)         ∨ Intro: 3
5.    Medium(c)
6.    Medium(c) ∨ Large(c)         ∨ Intro: 5
7. Medium(c) ∨ Large(c)            ∨ Elim: 1, 2–4, 5–6
2. To use ∨ Elim in this case, we need to get two subproofs, one for each of the disjuncts in the premise. It is a good policy to begin by specifying both of the necessary subproofs before doing anything else. To start a subproof, add a new step and choose New Subproof from the Proof menu. Fitch will indent the step and allow you to enter the sentence you want to assume. Enter the first disjunct of the premise, Cube(c) ∧ Large(c), as the assumption of this subproof.
3. Rather than work on this subproof now, let's specify the second case before we forget what we're trying to do. To do this, we need to end the first subproof and start a second subproof after it. You end the current subproof by choosing End Subproof from the Proof menu. This will give you a new step outside of, but immediately following, the subproof.
4. Start your second subproof at this new step by choosing New Subproof from the Proof menu. This time type the other disjunct of the premise, Medium(c). We have now specified the assumptions of the two cases we need to consider. Our goal is to prove that the conclusion follows in both of these cases.
5. Go back to the first subproof and add a step following the assumption. (Focus on the assumption step of the subproof and choose Add Step After from the Proof menu.) In this step use ∧ Elim to prove Large(c). Then add another step to that subproof and prove the goal sentence, using ∨ Intro. In both steps, you will have to cite the necessary support sentences.
6. After you've finished the first subproof and all the steps check out, move the focus slider to the assumption step of the second subproof and add a new step. Use ∨ Intro to prove the goal sentence from your assumption.
7. We've now derived the goal sentence in both of the subproofs, and so are ready to add the final step of our proof. While focused on the last step of the second subproof, choose End Subproof from the Proof menu. Enter the goal sentence into this new step.
8. Specify the rule in the final step as ∨ Elim. For support, cite the two subproofs and the premise. Check your completed proof. If it does not check out, compare your proof carefully with the proof displayed above. Have you accidentally gotten one of your subproofs inside the other one? If so, delete the misplaced subproof by focusing on the assumption and choosing Delete Step from the Proof menu. Then try again.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Default and generous uses of the rules
default uses of
disjunction rules
There are a couple of ways in which Fitch is more lenient in checking ∨ Elim than the strict form of the rule suggests. First, the sentence S does not have to be the last sentence in the subproof, though usually it will be. S simply has to appear on the main level of each subproof, not necessarily as the very last step. Second, if you start with a disjunction containing more than two disjuncts, say P ∨ Q ∨ R, Fitch doesn't require three subproofs. If you have one subproof starting with P and one starting with Q ∨ R, or one starting with Q and one starting with P ∨ R, then Fitch will still be happy, as long as you've proven S in each of these cases.
Both disjunction rules have default applications, though they work rather differently. If you cite appropriate support for ∨ Elim (i.e., a disjunction and subproofs for each disjunct) and then check the step without typing a sentence, Fitch will look at the subproofs cited and, if they all end with the same sentence, insert that sentence into the step. If you cite a sentence and apply ∨ Intro without typing a sentence, Fitch will insert the cited sentence followed by ∨, leaving the insertion point after the ∨ so you can type in the rest of the disjunction you had in mind.
You try it
................................................................
1. Open the file Disjunction 2. The goal is to prove the sentence

3. When you are finished, see if the proof checks out. Do you understand the proof? Could you have come up with it on your own?
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
When you choose the ∨ Intro rule, and enter a disjunction at the focus step, you can use the Add Support Steps command to insert an appropriate support step. Fitch has to guess at the formula that you might want to cite as support. Fitch chooses the first disjunct, although any disjunct of the focus formula would be appropriate. Add Support Steps cannot be used with the ∨ Elim rule. When you use this rule, Fitch does not have enough information to fill in the support steps, even when you have given a formula at the focus step. You are on your own for this rule!
Exercises
6.1
If you skipped any of the You try it sections, go back and do them now. Submit the files Proof Conjunction 1, Proof Conjunction 2, Proof Conjunction 3, Proof Conjunction 4, Proof Disjunction 1, and Proof Disjunction 2.

6.2
Open the file Exercise 6.2, which contains an incomplete formal proof. As it stands, none of the steps check out, either because no rule has been specified, no support steps cited, or no sentence typed in. Provide the missing pieces and submit the completed proof.
Use Fitch to construct formal proofs for the following arguments. You will find Exercise files for each
argument in the usual place. As usual, name your solutions Proof 6.x.
6.3
a = b ∧ b = c ∧ c = d
----------
a = c ∧ b = d

6.4
(A ∨ B) ∨ C
----------
C ∨ B

6.5
A ∧ (B ∨ C)
----------
(A ∧ B) ∨ (A ∧ C)

6.6
(A ∧ B) ∨ (A ∧ C)
----------
A ∧ (B ∨ C)
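The last two arguments are converses of one another; together they say that A ∧ (B ∨ C) and (A ∧ B) ∨ (A ∧ C) are tautologically equivalent (the distribution of ∧ over ∨). Before building the formal proofs, the equivalence can be confirmed by truth table:

```python
from itertools import product

lhs = lambda a, b, c: a and (b or c)            # A and (B or C)
rhs = lambda a, b, c: (a and b) or (a and c)    # (A and B) or (A and C)

# Check all eight rows of the truth table.
equivalent = all(lhs(a, b, c) == rhs(a, b, c)
                 for a, b, c in product([True, False], repeat=3))
print(equivalent)  # True
```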
Section 6.3
Negation rules
Last but not least are the negation rules. It turns out that negation introduction is our most interesting and complex rule.
Negation elimination
The rule of negation elimination corresponds to a very trivial valid step, from ¬¬P to P. Schematically:

Negation Elimination (¬ Elim):

¬¬P
⋮
▷ P
Negation elimination gives us one direction of the principle of double negation. You might reasonably expect that our second negation rule, negation introduction, would simply give us the other direction. But if that's what you guessed, you guessed wrong.
Negation introduction
The rule of negation introduction corresponds to the method of indirect proof or proof by contradiction. Like ∨ Elim, it involves the use of a subproof, as will the formal analogs of all nontrivial methods of proof. The rule says that if you can prove a contradiction ⊥ on the basis of an additional assumption P, then you are entitled to infer ¬P from the original premises. Schematically:

Negation Introduction (¬ Intro):

   P
   ⋮
   ⊥
▷ ¬P
formal proofs of
inconsistency
You try it
................................................................
1. A
2.    ¬A
3.    ⊥          ⊥ Intro: 1, 2
4. ¬¬A           ¬ Intro: 2–3
3. To construct this proof, add a step immediately after the premise. Turn it into a subproof by choosing New Subproof from the Proof menu. Enter the assumption ¬A.
4. Add a new step to the subproof and enter ⊥, changing the rule to ⊥ Intro. Cite the appropriate steps and check the step.
5. Now end the subproof and enter the final sentence, ¬¬A, after the subproof. Specify the rule as ¬ Intro, cite the preceding subproof and check the step. Your whole proof should now check out.
6. Notice that in the third line of your proof you cited a step outside the
subproof, namely the premise. This is legitimate, but raises an important
issue. Just what steps can be cited at a given point in a proof? As a first
guess, you might think that you can cite any earlier step. But this turns
out to be wrong. We will explain why, and what the correct answer is, in
the next section.
7. Save your proof as Proof Negation 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
The contradiction symbol ⊥ acts just like any other sentence in a proof. In particular, if you are reasoning by cases and derive ⊥ in each of your subproofs, then you can use ∨ Elim to derive ⊥ in your main proof. For example, here is a proof that the premises A ∨ B, ¬A, and ¬B are inconsistent.
1. A ∨ B
2. ¬A
3. ¬B
4.    A
5.    ⊥          ⊥ Intro: 4, 2
6.    B
7.    ⊥          ⊥ Intro: 6, 3
8. ⊥             ∨ Elim: 1, 4–5, 6–7
introducing ⊥ with Taut Con
whose inconsistency results from the Boolean connectives plus the identity predicate, you can check this using the FO Con mechanism, since FO Con understands the meaning of =. If FO Con says that ⊥ follows from the cited sentences (and if those sentences do not contain quantifiers), then you should be able to prove ⊥ using just the introduction and elimination rules for =, ∧, ∨, ¬, and ⊥.
The only time you may arrive at a contradiction but not be able to prove ⊥ using the rules of F is if the inconsistency depends on the meanings of predicates other than identity. For example, suppose you derived the contradiction n < n, or the contradictory pair of sentences Cube(b) and Tet(b). The rules of F give you no way to get from these sentences to a contradiction of the form P and ¬P, at least without some further premises.
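The gap can be made precise: treated as bare propositional atoms, Cube(b) and Tet(b) are jointly satisfiable, so no purely truth-functional check can find a contradiction in them; the inconsistency lives in the meanings of the predicates. A sketch:

```python
from itertools import product

# Treat Cube(b) and Tet(b) as independent propositional atoms p and q,
# which is all a truth-functional mechanism can see.
sentences = [lambda v: v["p"], lambda v: v["q"]]

tt_satisfiable = any(all(s(v) for s in sentences)
                     for vals in product([True, False], repeat=2)
                     for v in [dict(zip("pq", vals))])
print(tt_satisfiable)  # True: no truth-functional contradiction here
```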
What this means is that in Fitch, the Ana Con mechanism will let you establish contradictions that can't be derived in F. Of course, the Ana Con mechanism only understands predicates in the blocks language (and even there, it excludes Adjoins and Between). But it will allow you to derive ⊥ from, for example, the two sentences Cube(b) and Tet(b). You can either do this directly, by entering ⊥ and citing the two sentences, or indirectly, by using Ana Con to prove, say, ¬Cube(b) from Tet(b).
introducing ⊥ with FO Con
introducing ⊥ with Ana Con
You try it
................................................................
1. Open Negation 2 using Fitch. In this file you will find an incomplete proof.
As premises, we have listed a number of sentences, several groups of which
are contradictory.
2. Focus on each step that contains the ⊥ symbol. You will see that various sentences are cited in support of the step. Only one of these steps is an application of the ⊥ Intro rule. Which one? Specify the rule for that step as ⊥ Intro and check it.
3. Among the remaining steps, you will find one where the cited sentences form a tt-contradictory set of sentences. Which one? Change the justification at that step to Taut Con and check the step. Since it checks out, we assure you that you can derive ⊥ from these same premises using just the Boolean rules.
4. Of the remaining steps, the supports of two are contradictory in view of the meaning of the identity symbol =. Which steps? Change the justification at those steps to FO Con and check the steps. To derive ⊥ from these premises, you would need the identity rules (in one case = Elim, in the other = Intro).
5. Verify that the remaining steps cannot be justified by any of the rules ⊥ Intro, Taut Con or FO Con. Change the justification at those steps to Ana Con and check the steps.
6. Save your proof as Proof Negation 2. (Needless to say, this is a formal proof of inconsistency with a vengeance!)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
⊥ Elimination
As we remarked earlier, if in a proof, or more importantly in some subproof, you are able to establish a contradiction, then you are entitled to assert any fol sentence P whatsoever. In our formal system, this is modeled by the rule of ⊥ Elimination (⊥ Elim).

⊥ Elimination (⊥ Elim):

⊥
⋮
▷ P

The following You try it section illustrates both of the ⊥ rules. Be sure to go through it, as it presents a proof tactic you will have several occasions to use.
You try it
................................................................
1. It often happens in giving proofs using ∨ Elim that one really wants to eliminate one or more of the disjuncts, because they contradict other assumptions. The form of the ∨ Elim rule does not permit this, though. The proof we will construct here shows how to get around this difficulty.
2. Using Fitch, open the file Negation 3. We will use ∨ Elim and the two ⊥ rules to prove P from the premises P ∨ Q and ¬Q.
3. Start two subproofs, the first with assumption P, the second with assumption Q. Our goal is to establish P in both subproofs.
4. In the first subproof, we can simply use reiteration to repeat the assumption P.
6. Since you now have P in both subproofs, you can finish the proof using
Elim. Complete the proof.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
It turns out that we do not really need the rule of ⊥ Elim. You can prove any sentence from a contradiction without it; it just takes longer. Suppose, for example, that you have established a contradiction ⊥ at step 17 of some proof. Here is how you can introduce P at step 21 without using ⊥ Elim.

17. ⊥
⋮
18.    ¬P
19.    ⊥         Reit: 17
20. ¬¬P          ¬ Intro: 18–19
21. P            ¬ Elim: 20

Still, we include ⊥ Elim to make our proofs shorter and more natural.
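Semantically, the derivation above reflects the principle of explosion: since no truth assignment satisfies contradictory premises, every sentence is vacuously a tautological consequence of them. A sketch with the premises A and ¬A:

```python
from itertools import product

rows = [dict(zip("AB", vals)) for vals in product([True, False], repeat=2)]
premises = [lambda v: v["A"], lambda v: not v["A"]]   # A and not A

def consequence(conclusion):
    """True if every row satisfying all premises satisfies the conclusion."""
    return all(conclusion(v) for v in rows
               if all(p(v) for p in premises))

# No row satisfies the premises, so anything follows, even an unrelated B.
assert not any(all(p(v) for p in premises) for v in rows)
assert consequence(lambda v: v["B"])
assert consequence(lambda v: not v["B"])
print("from a contradiction, everything follows")
```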
Default and generous uses of the rules
default uses of
negation rules
The rule of ¬ Elim allows you to take off two negation signs from the front of a sentence. Repeated uses of this rule would allow you to remove four, six, or indeed any even number of negation signs. For this reason, the implementation of ¬ Elim in Fitch allows you to remove any even number of negation signs in one step. Similarly for ¬ Intro: if the sentence in the assumption step of the cited subproof is a negation, ¬A, say, we allow you to deduce the unnegated sentence A, instead of ¬¬A.
Both of the negation rules have default applications. In a default application of ¬ Elim, Fitch will remove as many negation signs as possible from the front of the cited sentence (the number must be even, of course) and insert the resulting sentence at the ¬ Elim step. In a default application of ¬ Intro, the inserted sentence will be the negation of the assumption step of the cited subproof.
You try it
................................................................
1. Open the file Negation 4. First look at the goal to see what sentence we
are trying to prove. Then focus on each step in succession and check the
step. Before moving to the next step, make sure you understand why the
step checks out and, more important, why we are doing what we are doing
at that step. At the empty steps, try to predict which sentence Fitch will
provide as a default before you check the step.
2. When you are done, make sure you understand the completed proof. Save
your file as Proof Negation 4.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Fitch will add a single support step if you use the Add Support Steps command when you have entered a formula and chosen the ¬ Elim rule. The support formula will be the formula from the focus step with two negation symbols preceding it. If you choose the ¬ Intro rule and use Add Support Steps, then Fitch will insert a subproof as support, with the negation of the focus formula as the assumption of the subproof and ⊥ as the only other step in the subproof. You can also use Add Support Steps with ⊥ Elim. Whatever formula is present, Fitch inserts a single support step containing the support formula ⊥.
Exercises
6.7
If you skipped any of the You try it sections, go back and do them now. Submit the files Proof Negation 1, Proof Negation 2, Proof Negation 3, and Proof Negation 4.

6.8
(Substitution) In informal proofs, we allow you to substitute logically equivalent sentences for one another, even when they occur in the context of a larger sentence. For example, the following inference results from two uses of double negation, each applied to a part of the whole sentence:

¬¬P ∨ (Q ∧ ¬¬R)
----------
P ∨ (Q ∧ R)

How would we prove this using F, which has no substitution rule? Open the file Exercise 6.8, which contains an incomplete formal proof of this argument. As it stands, none of the proof's steps check out, because no rules or support steps have been cited. Provide the missing justifications and submit the completed proof.
Evaluate each of the following arguments. If the argument is valid, use Fitch to give a formal proof using the rules you have learned. If it is not valid, use Tarski's World to construct a counterexample world. In the last two proofs you will need to use Ana Con to show that certain atomic sentences contradict one another to introduce ⊥. Use Ana Con only in this way. That is, your use of Ana Con should cite exactly two atomic sentences in support of an introduction of ⊥. If you have difficulty with any of these exercises, you may want to skip ahead and read Section 6.5.
6.9
Cube(b)
¬(Cube(c) ∧ Cube(b))
----------
¬Cube(c)

6.10
Cube(a) ∨ Cube(b)
¬(Cube(c) ∧ Cube(b))
----------
¬Cube(c)

6.11
Dodec(e)
Small(e)
Dodec(e) ∨ Dodec(f) ∨ Small(e)
----------
Dodec(f)

6.12
Dodec(e)
Small(e)
Dodec(e) ∨ Dodec(f) ∨ Small(e)
----------
Dodec(f)

6.13
Dodec(e)
Large(e)
Dodec(e) ∨ Dodec(f) ∨ Small(e)
----------
Dodec(f)

6.14
SameRow(b, f) ∨ SameRow(c, f) ∨ SameRow(d, f)
SameRow(c, f)
FrontOf(b, f)
¬(SameRow(d, f) ∧ Cube(f))
----------
Cube(f)
In the following two exercises, determine whether the sentences are consistent. If they are, use Tarski's World to build a world where the sentences are both true. If they are inconsistent, use Fitch to give a proof that they are inconsistent (that is, derive ⊥ from them). You may use Ana Con in your proof, but only applied to literals (that is, atomic sentences or negations of atomic sentences).
6.15
!
Chapter 6
6.16
!
Smaller(a, b) Smaller(b, a)
SameSize(a, b)
Section 6.4
∧ Elim: 2
∧ Elim: 2
5.   A ∧ C
6.   A                  ∧ Elim: 5
7. A                    ∨ Elim: 1, 2–4, 5–6
8. A ∧ ¬B               ∧ Intro: 7, 3
The problem with this proof is step 8. In this step we have used step
3, a step that occurs within an earlier subproof. But it turns out that this
sort of justification, one that reaches back inside a subproof that has already
ended, is not legitimate. To understand why it's not legitimate, we need to
think about what function subproofs play in a piece of reasoning.
A subproof typically looks something like this:
P
⋮
Q
  R
  ⋮
  S
T
⋮
discharging assumptions by ending subproofs
Subproofs begin with the introduction of a new assumption, in this example R. The reasoning within the subproof depends on this new assumption,
together with any other premises or assumptions of the parent proof. So in
our example, the derivation of S may depend on both P and R. When the
subproof ends, indicated by the end of the vertical line that ties the subproof
together, the subsequent reasoning can no longer use the subproof's assumption, or anything that depends on it. We say that the assumption has been
discharged or that the subproof has been ended.
When an assumption has been discharged, the individual steps of its subproof are no longer accessible. It is only the subproof as a whole that can be
cited as justification for some later step. What this means is that in justifying
the assertion of T in our example, we could cite P, Q, and the subproof as a
whole, but we could not cite individual items in the subproof like R or S. For
these steps rely on assumptions we no longer have at our disposal. Once the
subproof has been ended, they are no longer accessible.
This, of course, is where we went wrong in step 8 of the fallacious proof
given earlier. We cited a step in a subproof that had been ended, namely,
step 3. But the sentence at that step, ¬B, had been proven on the basis of the
assumption ¬B ∧ A, an assumption we only made temporarily. The assumption
is no longer in force at step 8, and so cannot be used at that point.
This injunction does not prevent you from citing, from within a subproof,
items that occur earlier outside the subproof, as long as they do not occur in
subproofs that ended before that step. For example, in the schematic proof
given above, the justification for S could well include the step that contains Q.
This observation becomes more pointed when you are working in a subproof of a subproof. We have not yet seen any examples where we needed to
have subproofs within subproofs, but the following proof, of one direction of
the first DeMorgan law, is one.
Notice that the subproof 2–15 contains two subproofs, 3–5 and 8–10. In
step 5 of subproof 3–5, we cite step 2 from the parent subproof 2–15. Similarly,
in step 10 of the subproof 8–10, we cite step 2. This is legitimate since the
subproof 2–15 has not been ended by step 10. While we did not need to in
this proof, we could in fact have cited step 1 in either of the sub-subproofs.
Another thing to note about this proof is the use of the Reiteration rule at
step 14. We did not need to use Reiteration here, but did so just to illustrate
a point. When it comes to subproofs, Reiteration is like any other rule: when
you use it, you can cite steps outside of the immediate subproof, if the proofs
that contain the cited steps have not yet ended. But you cannot cite a step
inside a subproof that has already ended. For example, if we replaced the
justification for step 15 with Reit: 10, then our proof would no longer be
correct.
1. ¬(P ∧ Q)
2.   ¬(¬P ∨ ¬Q)
3.     ¬P
4.     ¬P ∨ ¬Q          ∨ Intro: 3
5.     ⊥                ⊥ Intro: 4, 2
6.   ¬¬P                ¬ Intro: 3–5
7.   P                  ¬ Elim: 6
8.     ¬Q
9.     ¬P ∨ ¬Q          ∨ Intro: 8
10.    ⊥                ⊥ Intro: 9, 2
11.  ¬¬Q                ¬ Intro: 8–10
12.  Q                  ¬ Elim: 11
13.  P ∧ Q              ∧ Intro: 7, 12
14.  ¬(P ∧ Q)           Reit: 1
15.  ⊥                  ⊥ Intro: 13, 14
16. ¬¬(¬P ∨ ¬Q)         ¬ Intro: 2–15
17. ¬P ∨ ¬Q             ¬ Elim: 16
nested subproofs
Remember
In justifying a step of a subproof, you may cite any earlier step contained in the main proof, or in any subproof whose assumption is still
in force. You may never cite individual steps inside a subproof that
has already ended.
Fitch enforces this automatically by not permitting the citation of
individual steps inside subproofs that have ended.
Exercises
6.17
"
3.   ¬Tet(a)
4.   ¬Tet(a) ∧ Dodec(b)
5.   Dodec(b)              ∧ Elim: 4
6.   ¬Tet(a)               ∧ Elim: 4
7. ¬Tet(a)                 ∨ Elim: 1, 2–3, 4–6
8. ¬Tet(a) ∧ Dodec(b)      ∧ Intro: 7, 5
What step won't Fitch let you perform? Why? Is the conclusion a consequence of the premise?
Discuss this example in the form of a clear English paragraph, and turn your paragraph in to
your instructor.
Use Fitch to give formal proofs for the following arguments. You will need to use subproofs within
subproofs to prove these.
6.18
!
6.19
AB
A B
AB
B C
AC
6.20
!
AB
AC
A (B C)
Section 6.5
an important maxim
Many students try constructing formal proofs by blindly piecing together a sequence of steps permitted by the introduction and elimination rules, a process
no more related to reasoning than playing solitaire. This approach occasionally works, but more often than not it will fail or, at any rate, make it harder
to find a proof. In this section, we will give you some advice about how to
go about finding proofs when they don't jump right out at you. The advice
consists of two important strategies and an essential maxim.
Here is the maxim: Always keep firmly in mind what the sentences in your
proof mean! Students who pay attention to the meanings of the sentences avoid
innumerable pitfalls, among them the pitfall of trying to prove a sentence that
doesn't really follow from the information given. Your first step in trying to
construct a proof should always be to convince yourself that the claim made
by the conclusion is a consequence of the premises. You should do this even if
the exercise tells you that the argument is valid and simply asks you to find a
proof. For in the process of understanding the sentences and recognizing the
arguments validity, you will often get some idea how to prove it.
After you're convinced that the argument is indeed valid, the first strategy
for finding a formal proof is to try giving an informal proof, the kind you might
use to convince a fellow classmate. Often the basic structure of your informal
reasoning can be directly formalized using the rules of F. For example, if
you find yourself using an indirect proof, then that part of the reasoning will
probably require negation introduction in F. If you use proof by cases, then
you'll almost surely formalize the proof using disjunction elimination.
Suppose you have decided that the argument is valid, but are having trouble finding an informal proof. Or suppose you can't see how your informal
proof can be converted into a proof that uses just the rules of F. The second
strategy is helpful in either of these cases. It is known as working backwards.
What you do is look at the conclusion and see what additional sentence or
sentences would allow you to infer that conclusion. Then you simply insert
these steps into your proof, not worrying about exactly how they will be justified, and cite them in support of your goal sentence. You then take these
intermediate steps as new goals and see if you can prove them. Once you do,
your proof will be complete.
Let's work through an example that applies both of these strategies. Suppose you are asked to give a formal proof of the argument:
working backwards
¬P ∨ ¬Q
¬(P ∧ Q)
You'll recognize this as an application of one of the DeMorgan laws, so you
know it's valid. But when you think about it (applying our maxim) you may
find that what convinces you of its validity is the following observation, which
is hard to formalize: if the premise is true, then either P or Q is false, and
that will make P ∧ Q false, and hence the conclusion true. Though this is
a completely convincing argument, it is not immediately clear how it would
translate into the introduction and elimination rules of F.
Let's try working backwards to see if we can come up with an informal
proof that is easier to formalize. Since the conclusion is a negation, we could
prove it by assuming P ∧ Q and deriving a contradiction. So let's suppose
P ∧ Q and take ⊥ as our new goal. Now things look a little clearer. For the
premise tells us that either ¬P or ¬Q is true, and either of these cases directly contradicts one of the conjuncts of our assumption P ∧ Q.
You try it
................................................................
!
1. Open the file Strategy 1. Begin by entering the desired conclusion in a new
step of the proof. We will construct the proof working backwards, just
like we found our informal proof. Add a step before the conclusion you've
entered so that your proof looks something like this:
1. ¬P ∨ ¬Q
2. . . .              Rule?
3. ¬(P ∧ Q)           Rule?
2. The main method used in our informal proof was reductio, which corresponds to negation introduction. So change the blank step into a subproof
with the assumption P ∧ Q and the contradiction symbol ⊥ at the bottom.
(You can also use Add Support Steps to achieve this.) Also add a step
in between these to remind you that that's where you still need to fill
things in, and enter your justification for the final step, so you remember
why you added the subproof. At this point your proof should look roughly
like this:
1. ¬P ∨ ¬Q
2.   P ∧ Q
3.   . . .            Rule?
4.   ⊥                Rule?
5. ¬(P ∧ Q)           ¬ Intro: 2–4
"
1. ¬P ∨ ¬Q
2.   P ∧ Q
3.     ¬P
4.     . . .          Rule?
5.     ⊥              Rule?
6.     ¬Q
7.     . . .          Rule?
8.     ⊥              Rule?
9.   ⊥                ∨ Elim: 1, 3–5, 6–8
10. ¬(P ∧ Q)          ¬ Intro: 2–9
"
1. ¬P ∨ ¬Q
2.   P ∧ Q
3.     ¬P
4.     P              ∧ Elim: 2
5.     ⊥              ⊥ Intro: 4, 3
6.     ¬Q
7.     Q              ∧ Elim: 2
8.     ⊥              ⊥ Intro: 7, 6
9.   ⊥                ∨ Elim: 1, 3–5, 6–8
10. ¬(P ∧ Q)          ¬ Intro: 2–9
pitfalls of working backwards
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
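Because the Boolean connectives are truth functional, the validity just established in F can also be spot-checked by running through all four truth assignments, which is exactly what a truth table does. Here is a small Python sketch of that check (the variable names are ours; this is not part of the LPL software):

```python
from itertools import product

# Check that every assignment making the premise ¬P ∨ ¬Q true
# also makes the conclusion ¬(P ∧ Q) true.
for P, Q in product([True, False], repeat=2):
    premise = (not P) or (not Q)
    conclusion = not (P and Q)
    if premise:
        assert conclusion, f"counterexample at P={P}, Q={Q}"
print("no counterexamples: the argument is tautologically valid")
```

Of course, such a check only confirms validity; it is no substitute for the formal proof itself.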
Working backwards can be a very useful technique, since it often allows
you to replace a complex goal with simpler ones or to add new assumptions
from which to reason. But you should not think that the technique can be
applied mechanically, without giving it any thought. Each time you add new
intermediate goals, whether they are sentences or subproofs, it is essential
that you stop and check whether the new goals are actually reasonable. If
they don't seem plausible, you should try something else.
Here's a simple example of why this constant checking is so important.
Suppose you were asked to prove the sentence A ∨ C from the given sentence
(A ∧ B) ∨ (C ∧ D). Working backwards you might notice that if you could
prove A, from this you could infer the desired conclusion by the rule ∨ Intro.
Sketched in, your partial proof would look like this:
1. (A ∧ B) ∨ (C ∧ D)
2. A                  Rule?
3. A ∨ C              ∨ Intro: 2
The problem with this is that A does not follow from the given sentence,
and no amount of work will allow you to prove that it does. If you didn't notice this from the outset, you could spend a lot of time trying to construct an
impossible proof! But if you notice it, you can try a more promising approach.
(In this case, disjunction elimination is clearly the right way to go.) Working backwards, though a valuable tactic, is no replacement for good honest
thinking.
When you're constructing a formal proof in Fitch, you can avoid trying
to prove an incorrect intermediate conclusion by checking the step with Taut
Con. In the above example, for instance, if you use Taut Con at the second
step, citing the premise as support, you would immediately find that it is
hopeless to try to prove A from the given premise.
Many of the problems in this book ask you to determine whether an argument is valid and to back up your answer with either a proof of consequence
or a counterexample, a proof of non-consequence. You will approach these
problems in much the same way we've described, first trying to understand
the claims involved and deciding whether the conclusion follows from the
premises. If you think the conclusion does not follow, or really don't have a
good hunch one way or the other, try to find a counterexample. You may
succeed, in which case you will have shown the argument to be invalid. If you
cannot find a counterexample, trying to find one often gives rise to insights
about why the argument is valid, insights that can help you find the required
proof.
We can summarize our strategy advice with a seven step procedure for
approaching problems of this sort.
Remember
In assessing the validity of an argument, use the following method:
1. Understand what the sentences are saying.
2. Decide whether you think the conclusion follows from the premises.
3. If you think it does not follow, or are not sure, try to find a counterexample.
4. If you think it does follow, try to give an informal proof.
5. If a formal proof is called for, use the informal proof to guide you in
finding one.
6. In giving consequence proofs, both formal and informal, don't forget
the tactic of working backwards.
7. In working backwards, though, always check that your intermediate
goals are consequences of the available information.
One final warning: One of the nice things about Fitch is that it will give
you instant feedback about whether your proof is correct. This is a valuable
learning tool, but it can be misused. You should not use Fitch as a crutch,
trying out rule applications and letting Fitch tell you if they are correct. If
you do this, then you are not really learning the system F. One way to check
up on yourself is to write a formal proof out on paper every now and then. If
you try this and find you can't do it without Fitch's help, then you are using
Fitch as a crutch, not a learning tool.
Exercises
6.21
If you skipped the You try it section, go back and do it now. Submit the file Proof Strategy 1.
6.22
!
6.23
"
In each of the following exercises, give an informal proof of the validity of the indicated argument. (You
should never use the principle you are proving in your informal proof; for example, in Exercise 6.24,
you should not use DeMorgan in your informal proof.) Then use Fitch to construct a formal proof that
mirrors your informal proof as much as possible. Turn in your informal proofs to your instructor and
submit the formal proof in the usual way.
6.24
!|"
(A B)
6.25
!|"
A B
6.26
!|"
A (B C)
B C D
A B
(A B)
6.27
!|"
AD
(A B) (C D)
(B C) (D E)
C (A E)
In each of the following exercises, you should assess whether the argument is valid. If it is, use Fitch to
construct a formal proof. You may use Ana Con but only involving literals and ⊥. If it is not valid,
use Tarski's World to construct a counterexample.
6.28
!
Cube(c) Small(c)
Dodec(c)
6.29
!
Small(c)
6.30
!
(Cube(a) Cube(b))
(Cube(b) Cube(c))
Cube(a)
Larger(a, b) Larger(a, c)
Smaller(b, a) Larger(a, c)
Larger(a, b)
6.31
!
Dodec(b) Cube(b)
Small(b) Medium(b)
(Small(b) Cube(b))
Medium(b) Dodec(b)
6.32
!
Dodec(b) Cube(b)
Small(b) Medium(b)
Small(b) Cube(b)
Medium(b) Dodec(b)
Section 6.6
demonstrating logical truth

1. a = a                   = Intro
2. b = b                   = Intro
3. a = a ∧ b = b           ∧ Intro: 1, 2
The first step of this proof is not a premise, but an application of = Intro.
You might think that any proof without premises would have to start with
this rule, since it is the only one that doesn't have to cite any supporting steps
earlier in the proof. But in fact, this is not a very representative example of
such proofs. A more typical and interesting proof without premises is the
following, which shows that ¬(P ∧ ¬P) is a logical truth.
1.   P ∧ ¬P
2.   P                 ∧ Elim: 1
3.   ¬P                ∧ Elim: 1
4.   ⊥                 ⊥ Intro: 2, 3
5. ¬(P ∧ ¬P)           ¬ Intro: 1–4
Notice that there are no assumptions above the first horizontal Fitch bar,
indicating that the main proof has no premises. The first step of the proof is
the subproof's assumption. The subproof proceeds to derive a contradiction,
based on this assumption, thus allowing us to conclude that the negation
of the subproof's assumption follows without the need of premises. In other
words, it is a logical truth.
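Since a logical truth must be true under every truth assignment, and ¬(P ∧ ¬P) contains only the one atomic sentence P, a two-row check settles the matter. A minimal Python sketch of that check (ours, not something F or the Grade Grinder provides):

```python
# ¬(P ∧ ¬P) comes out true whether P is true or false,
# so no premises are needed to establish it.
for P in (True, False):
    assert not (P and not P)
print("true under every assignment: a logical truth")
```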
When we want you to prove that a sentence is a logical truth, we will use
Fitch notation to indicate that you must prove this without assuming any
premises. For example the above proof shows that the following argument
is valid:
¬(P ∧ ¬P)
We close this section with the following reminder:
Remember
A proof without any premises shows that its conclusion is a logical truth.
Exercises
6.33
!
(Excluded Middle) Open the file Exercise 6.33. This contains an incomplete proof of the law
of excluded middle, P ∨ ¬P. As it stands, the proof does not check out because it's missing
some sentences, some support citations, and some rules. Fill in the missing pieces and submit
the completed proof as Proof 6.33. The proof shows that we can derive excluded middle in F
without any premises.
In the following exercises, assess whether the indicated sentence is a logical truth in the blocks language.
If so, use Fitch to construct a formal proof of the sentence from no premises (using Ana Con if
necessary, but only applied to literals). If not, use Tarski's World to construct a counterexample. (A
counterexample here will simply be a world that makes the purported conclusion false.)
6.34
6.35
!
(a = b Dodec(a) Dodec(b))
6.36
(a = b Dodec(a) Cube(b))
6.37
!
¬(a = b ∧ b = c ∧ a ≠ c)
6.38
!
¬(SameRow(a, b) ∧ SameRow(b, c) ∧ FrontOf(c, a))
6.39
!
¬(SameCol(a, b) ∧ SameCol(b, c) ∧ FrontOf(a, c))
¬(a ≠ b ∧ b ≠ c ∧ a = c)
The following sentences are all tautologies, and so should be provable in F. Although the informal proofs
are relatively simple, F makes fairly heavy going of them, since it forces us to prove even very obvious
steps. Use Fitch to construct formal proofs. You may want to build on the proof of Excluded Middle
given in Exercise 6.33. Alternatively, with the permission of your instructor, you may use Taut Con,
but only to justify an instance of Excluded Middle. The Grade Grinder will indicate whether you used
Taut Con or not.
6.40
6.41
!!
A (A B)
(A B) A B
6.42
!!
A (B (A B))
Chapter 7
Conditionals
There are many logically important constructions in English besides the Boolean connectives. Even if we restrict ourselves to words and phrases that connect two simple indicative sentences, we still find many that go beyond the
Boolean operators. For example, besides saying:
Max is home and Claire is at the library,
and
Max is home or Claire is at the library,
we can combine these same atomic sentences in the following ways, among
others:
Max is home if Claire is at the library,
Max is home only if Claire is at the library,
Max is home provided Claire is at the library,
Max is home unless Claire is at the library,
Max is home whenever Claire is at the library,
Max is home if and only if Claire is at the library,
Max is home just in case Claire is at the library,
Max is home even though Claire is at the library,
Max is home despite the fact that Claire is at the library,
Max is home because Claire is at the library.
And these are just the tip of the iceberg. There are also constructions that
combine three atomic sentences to form new sentences:
If Max is home then Claire is at the library, otherwise Claire is
concerned,
and constructions that combine four:
If Max is home then Claire is at the library, otherwise Claire is
concerned unless Carl is with him,
and so forth.
Some of these constructions are truth functional, or have important truth-functional uses, while others do not. Recall that a connective is truth functional if the truth or falsity of compound statements made with it is completely
determined by the truth values of its constituents. Its meaning, in other words,
can be captured by a truth table.
Fol does not include connectives that are not truth functional. This is
not to say that such connectives aren't important, but their meanings tend to
be vague and subject to conflicting interpretations. The decision to exclude
them is analogous to our assumption that all the predicates of fol have precise
interpretations.
Whether or not a connective in English can be, or always is, used truth
functionally is a tricky matter, about which we'll have more to say later in
the chapter. Of the connectives listed above, though, there is one that is very
clearly not truth functional: the connective because. This is not hard to prove.
non-truth-functional connectives
do, however, make it much easier to say and prove certain things, and so are
valuable additions to the language.
Section 7.1
P   Q  |  P → Q
t   t  |    T
t   f  |    F
f   t  |    T
f   f  |    T
this English conditional, like the material conditional, is false if P is true and
Q is false. Thus, we will translate, for example, If Max is home then Claire is
at the library as:
Home(max) → Library(claire)
In this course we will always translate if. . . then. . . using →, but there
are in fact many uses of the English expression that cannot be adequately
expressed with the material conditional. Consider, for example, the sentence,
If Max had been at home, then Carl would have been there too.
This sentence can be false even if Max is not in fact at home. (Suppose the
speaker mistakenly thought Carl was with Max, when in fact Claire had taken
him to the vet.) But the first-order sentence,
Home(max) → Home(carl)
is automatically true if Max is not at home. A material conditional with a
false antecedent is always true.
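The truth-functional behavior of the material conditional can be reproduced in a few lines of Python; the helper name `implies` is our own choice, introduced only for illustration:

```python
def implies(p, q):
    # Material conditional: false only when the antecedent is true
    # and the consequent is false.
    return (not p) or q

# The four rows of the truth table for P → Q.
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))

# A material conditional with a false antecedent is always true.
assert implies(False, True) and implies(False, False)
```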
We have already seen that the connective because is not truth functional
since it expresses a causal connection between its antecedent and consequent.
The English construction if. . . then. . . can also be used to express a sort of
causal connection between antecedent and consequent. That's what seems to
be going on in the above example. As a result, many uses of if. . . then. . .
in English just aren't truth functional. The truth of the whole depends on
something more than the truth values of the parts; it depends on there being
some genuine connection between the subject matter of the antecedent and
the consequent.
Notice that we started with the truth table for → and decided to read
it as if. . . then. . . . What if we had started the other way around, looking for
a truth-functional approximation of the English conditional? Could we have
found a better truth table to go with if. . . then. . . ? The answer is clearly no.
While the material conditional is sometimes inadequate for capturing subtleties of English conditionals, it is the best we can do with a truth-functional
connective. But these are controversial matters. We will take them up further
in Section 7.3.
Necessary and sufficient conditions
Other English expressions that we will translate using the material conditional
P → Q include: P only if Q, Q provided P, and Q if P. Notice in particular
that P only if Q is translated P → Q, while P if Q is translated Q → P. To
necessary condition
sufficient condition
understand why, we need to think carefully about the difference between only
if and if.
In English, the expression only if introduces what is called a necessary
condition, a condition that must hold in order for something else to obtain.
For example, suppose your instructor announces at the beginning of the course
that you will pass the course only if you turn in all the homework assignments.
Your instructor is telling you that turning in the homework is a necessary
condition for passing: if you don't do it, you won't pass. But the instructor is
not guaranteeing that you will pass if you do turn in the homework: clearly,
there are other ways to fail, such as skipping the tests and getting all the
homework problems wrong.
The assertion that you will pass only if you turn in all the homework
really excludes just one possibility: that you pass but did not turn in all the
homework. In other words, P only if Q is false only when P is true and Q is
false, and this is just the case in which P → Q is false.
Contrast this with the assertion that you will pass the course if you turn
in all the homework. Now this is a very different kettle of fish. An instructor
who makes this promise is establishing a very lax grading policy: just turn in
the homework and you'll get a passing grade, regardless of how well you do
on the homework or whether you even bother to take the tests!
In English, the expression if introduces what is called a sufficient condition,
one that guarantees that something else (in this case, passing the course) will
obtain. Because of this an English sentence P if Q must be translated as
Q → P. The sentence rules out Q being true (turning in the homework) and
P being false (failing the course).
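The contrast between the two announcements can be put truth-functionally. In this Python sketch, `only_if` and `if_` are hypothetical helper names of ours, with `p` standing for passing and `q` for turning in the homework:

```python
def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def only_if(p, q):
    # "P only if Q" translates as P → Q: Q is a necessary condition for P.
    return implies(p, q)

def if_(p, q):
    # "P if Q" translates as Q → P: Q is a sufficient condition for P.
    return implies(q, p)

# Passing without turning in the homework violates "pass only if homework"...
assert only_if(True, False) is False
# ...but not "pass if homework", which promises nothing when no homework is handed in.
assert if_(True, False) is True
```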
Other uses of unless
This says that any object you pick will either fail to be an A or else be a B.
We'll learn about such sentences in Part II of this book.
There is one other thing we should say about the material conditional,
which helps explain its importance in logic. The conditional allows us to reduce
the notion of logical consequence to that of logical truth, at least in cases
where we have only finitely many premises. We said that a sentence Q is a
consequence of premises P1 , . . . , Pn if and only if it is impossible for all the
premises to be true while the conclusion is false. Another way of saying this
is that it is impossible for the single sentence (P1 ∧ · · · ∧ Pn ) to be true while
Q is false.
reducing logical consequence to logical truth
Given the meaning of →, we see that Q is a consequence of P1 , . . . , Pn if
and only if it is impossible for the single sentence
(P1 ∧ · · · ∧ Pn ) → Q
to be false, that is, just in case this conditional sentence is a logical truth. Thus,
one way to verify the tautological validity of an argument in propositional
logic, at least in theory, is to construct a truth table for this sentence and
see whether the final column contains only true. This method is usually not
very practical, however, since the truth tables quickly get too large to be
manageable.
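The truth-table method just described is easy to mechanize, and doing so also shows why it bogs down: the table has 2^n rows for n atomic sentences. A Python sketch (the function name and the lambda encoding of sentences are ours):

```python
from itertools import product

def tautologically_valid(premises, conclusion, num_atoms):
    # The argument is valid iff (P1 ∧ ... ∧ Pn) → Q is a logical truth,
    # i.e. true in every row of its truth table.
    for row in product([True, False], repeat=num_atoms):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # a row making the conditional false
    return True

# Example with atoms A, B: from A ∨ B and ¬A we may infer B.
print(tautologically_valid(
    premises=[lambda a, b: a or b, lambda a, b: not a],
    conclusion=lambda a, b: b,
    num_atoms=2,
))  # → True
```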
Remember
1. The following English constructions are all translated P → Q: If P
then Q; Q if P; P only if Q; and Provided P, Q.
2. Unless P, Q and Q unless P are translated ¬P → Q.
3. Q is a logical consequence of P1 , . . . , Pn if and only if the sentence
(P1 ∧ · · · ∧ Pn ) → Q is a logical truth.
Section 7.2
Biconditional symbol: ↔
Our final connective is called the material biconditional symbol. Given any
sentences P and Q there is another sentence formed by connecting these by
if and only if
iff
just in case
vs.
P   Q  |  P ↔ Q
t   t  |    T
t   f  |    F
f   t  |    F
f   f  |    T
Notice that the final column of this truth table is the same as that for
(P → Q) ∧ (Q → P). (See Exercise 7.3 below.) For this reason, logicians often
treat a sentence of the form P ↔ Q as an abbreviation of (P → Q) ∧ (Q → P).
Tarski's World also uses this abbreviation in the game. Thus, the game rule
for P ↔ Q is simple. Whenever a sentence of this form is encountered, it is
replaced by (P → Q) ∧ (Q → P).
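The abbreviation behind the game rule can be confirmed by comparing the two forms on all four truth assignments; this Python sketch uses helper names of our own:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def iff(p, q):
    # P ↔ Q is true exactly when P and Q have the same truth value.
    return p == q

# P ↔ Q agrees with (P → Q) ∧ (Q → P) in every row.
for p, q in product([True, False], repeat=2):
    assert iff(p, q) == (implies(p, q) and implies(q, p))
print("same truth table in all four rows")
```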
Remember
1. If P and Q are sentences of fol, then so is P ↔ Q.
2. The sentence P ↔ Q is true if and only if P and Q have the same truth
value.
Exercises
For the following exercises, use Boole to determine whether the indicated pairs of sentences are tautologically equivalent. Feel free to have Boole build your reference columns and fill them out for you. Don't
forget to indicate your assessment.
7.1
!
7.3
!
7.5
!
7.7
!
7.9
"
7.10
!
A → B and ¬A ∨ B.
7.2
A ↔ B and (A → B) ∧ (B → A).
7.4
(A B) C and A (B C).
7.6
A (B (C D)) and
((A B) C) D.
7.8
!
!
!
!
¬(A → B) and A ∧ ¬B.
A ↔ B and (A ∧ B) ∨ (¬A ∧ ¬B).
(A B) C and A (B C).
A (B (C D)) and
((A B) C) D.
(Just in case) Prove that the ordinary (nonmathematical) use of just in case does not express
a truth-functional connective. Use as your example the sentence Max went home just in case
Carl was hungry.
(Evaluating sentences in a world) Using Tarski's World, run through Abelard's Sentences, evaluating them in Wittgenstein's World. If you make a mistake, play the game to see where you
have gone wrong. Once you have gone through all the sentences, go back and make all the false
ones true by changing one or more names used in the sentence. Submit your edited sentences
as Sentences 7.10.
7.11
!
(Describing a world) Launch Tarski's World and choose Hide Labels from the World menu.
Then, with the labels hidden, open Montague's World. In this world, each object has a name,
and no object has more than one name. Start a new sentence file where you will describe some
features of this world. Check each of your sentences to see that it is indeed a sentence and that
it is true in this world.
1. Notice that if c is a tetrahedron, then a is not a tetrahedron. (Remember, in this world
each object has exactly one name.) Use your first sentence to express this fact.
2. However, note that the same is true of b and d. That is, if b is a tetrahedron, then d
isn't. Use your second sentence to express this.
3. Finally, observe that if b is a tetrahedron, then c isn't. Express this.
4. Notice that if a is a cube and b is a dodecahedron, then a is to the left of b. Use your
next sentence to express this fact.
5. Use your next sentence to express the fact that if b and c are both cubes, then they
are in the same row but not in the same column.
6. Use your next sentence to express the fact that b is a tetrahedron only if it is small.
[Check this sentence carefully. If your sentence evaluates as false, then you've got the
arrow pointing in the wrong direction.]
7. Next, express the fact that if a and d are both cubes, then one is to the left of the
other. [Note: You will need to use a disjunction to express the fact that one is to the
left of the other.]
8. Notice that d is a cube if and only if it is either medium or large. Express this.
9. Observe that if b is neither to the right nor left of d, then one of them is a tetrahedron.
Express this observation.
10. Finally, express the fact that b and c are the same size if and only if one is a tetrahedron
and the other is a dodecahedron.
Save your sentences as Sentences 7.11. Now choose Show Labels from the World menu.
Verify that all of your sentences are indeed true. When verifying the first three, pay particular
attention to the truth values of the various constituents. Notice that sometimes the conditional
has a false antecedent and sometimes a true consequent. What it never has is a true antecedent
and a false consequent. In each of these three cases, play the game committed to true. Make
sure you understand why the game proceeds as it does.
7.12
!
(Translation) Translate the following English sentences into fol. Your translations will use all
of the propositional connectives.
1. If a is a tetrahedron then it is in front of d.
2. a is to the left of or right of d only if it's a cube.
3. c is between either a and e or a and d.
4. c is to the right of a, provided it (i.e., c) is small.
5. …
⋮
20. …
Save your list of sentences as Sentences 7.12. Before submitting the file, you should complete
Exercise 7.13.
7.13
!
(Checking your translations) Open Bolzano's World. Notice that all the English sentences from
Exercise 7.12 are true in this world. Thus, if your translations are accurate, they will also be
true in this world. Check to see that they are. If you made any mistakes, go back and fix them.
Remember that even if one of your sentences comes out true in Bolzano's World, it does not
mean that it is a proper translation of the corresponding English sentence. If the translation is
correct, it will have the same truth value as the English sentence in every world. So let's check
your translations in some other worlds.
Open Wittgenstein's World. Here we see that the English sentences 3, 5, 9, 11, 12, 13, 14,
and 20 are false, while the rest are true. Check to see that the same holds of your translations.
If not, correct your translations (and make sure they are still true in Bolzano's World).
Next open Leibniz's World. Here half the English sentences are true (1, 2, 4, 6, 7, 10, 11, 14,
18, and 20) and half false (3, 5, 8, 9, 12, 13, 15, 16, 17, and 19). Check to see that the same
holds of your translations. If not, correct your translations.
Finally, open Venn's World. In this world, all of the English sentences are false. Check to
see that the same holds of your translations and correct them if necessary.
There is no need to submit any files for this exercise, but don't forget to submit Sentences
7.12.
7.14
!
7.15
!!
(Figuring out sizes and shapes) Open Euler's Sentences. The nine sentences in this file uniquely
determine the shapes and sizes of blocks a, b, and c. See if you can figure out the solution just
by thinking about what the sentences mean and using the informal methods of proof youve
already studied. When youve figured it out, submit a world in which all of the sentences are
true.
(More sizes and shapes) Start a new sentence file and use it to translate the following English
sentences.
1. If a is a tetrahedron, then b is also a tetrahedron.
2. c is a tetrahedron if b is.
3. a and c are both tetrahedra only if at least one of them is large.
4. a is a tetrahedron but c isn't large.
5. If c is small and d is a dodecahedron, then d is neither large nor small.
6. c is medium only if none of d, e, and f are cubes.
7. d is a small dodecahedron unless a is small.
8. e is large just in case it is a fact that d is large if and only if f is.
9. d and e are the same size.
10. d and e are the same shape.
11. f is either a cube or a dodecahedron, if it is large.
12. c is larger than e only if b is larger than c.
Save these sentences as Sentences 7.15. Then see if you can figure out the sizes and shapes of
a, b, c, d, e, and f. You will find it helpful to approach this problem systematically, filling in
the following table as you reason about the sentences:
          a    b    c    d    e    f
Shape:
Size:
When you have filled in the table, use it to guide you in building a world in which the twelve
English sentences are true. Verify that your translations are true in this world as well. Submit
both your sentence file and your world file.
7.16
!
7.17
!!
(Name that object) Open Sherlock's World and Sherlock's Sentences. You will notice that none
of the objects in this world has a name. Your task is to assign the names a, b, and c in such a
way that all the sentences in the list come out true. Submit the modified world as World 7.16.
(Building a world) Open Boolos' Sentences. Submit a world in which all five sentences in this
file are true.
7.18
!
7.19
"
7.20
"
7.21
!|"
Using the symbols introduced in Table 1.2, page 30, translate the following sentences into fol.
Submit your translations as a sentence file.
1. If Claire gave Folly to Max at 2:03 then Folly belonged to her at 2:00 and to him at
2:05.
2. Max fed Folly at 2:00 pm, but if he gave her to Claire then, Folly was not hungry five
minutes later.
3. If neither Max nor Claire fed Folly at 2:00, then she was hungry.
4. Max was angry at 2:05 only if Claire fed either Folly or Scruffy five minutes before.
5. Max is a student if and only if Claire is not.
Using Table 1.2 on page 30, translate the following into colloquial English.
1. (Fed(max, folly, 2:00) Fed(claire, folly, 2:00)) Pet(folly)
2. Fed(max, folly, 2:30) Fed(claire, scruffy, 2:00)
3. Hungry(folly, 2:00) Hungry(scruffy, 2:00)
4. (Hungry(folly, 2:00) Hungry(scruffy, 2:00))
Translate the following into fol as best you can. Explain any predicates and function symbols
you use, and any shortcomings in your first-order translations.
1. If Abe can fool Stephen, surely he can fool Ulysses.
2. If you scratch my back, I'll scratch yours.
3. France will sign the treaty only if Germany does.
4. If Tweedledee gets a party, so will Tweedledum, and vice versa.
5. If John and Mary went to the concert together, they must like each other.
(The monkey principle) One of the stranger uses of if. . . then. . . in English is as a roundabout
way to express negation. Suppose a friend of yours says If Keanu Reeves is a great actor, then
I'm a monkey's uncle. This is simply a way of denying the antecedent of the conditional, in
this case that Keanu Reeves is a great actor. Explain why this works. Your explanation should
appeal to the truth table for →, but it will have to go beyond that. Turn in your explanation
and also submit a Boole table showing that A → ⊥ is equivalent to ¬A.
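The equivalence behind the monkey principle can also be checked mechanically. Here is a quick sketch in Python (ours, standing in for a Boole table; the helper name implies is an assumption, not part of the LPL software) confirming that a conditional with an absurd consequent behaves exactly like the negation of its antecedent:

```python
def implies(p, q):
    # Material conditional: false only when the antecedent is true
    # and the consequent is false.
    return (not p) or q

BOTTOM = False  # an absurd sentence is false in every row

for a in (True, False):
    # A -> bottom is true exactly when A is false, i.e. it has the
    # same truth table as the negation of A.
    assert implies(a, BOTTOM) == (not a)
```

Since the only way for such a conditional to be true is for its antecedent to be false, asserting it amounts to denying the antecedent.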
Section 7.3
Conversational implicature
In translating from English to fol, there are many problematic cases. For
example, many students resist translating a sentence like Max is home unless
Claire is at the library as:
¬Library(claire) → Home(max)
These students usually think that the meaning of this English sentence would
be more accurately captured by the biconditional claim:
¬Library(claire) ↔ Home(max)
The reason the latter seems natural is that when we assert the English sentence, there is some suggestion that if Claire is at the library, then Max is
not at home.
To resolve problematic cases like this, it is often useful to distinguish between the truth conditions of a sentence, and other things that in some sense
follow from the assertion of the sentence. To take an obvious case, suppose
someone asserts the sentence It is a lovely day. One thing you may conclude
from this is that the speaker understands English. This is not part of what
the speaker said, however, but part of what can be inferred from his saying it.
The truth or falsity of the claim has nothing to do with the speaker's linguistic
abilities.
The philosopher H. P. Grice developed a theory of what he called conversational implicature to help sort out the genuine truth conditions of a
sentence from other conclusions we may draw from its assertion. These other
conclusions are what Grice called implicatures. We won't go into this theory
in detail, but knowing a little bit about it can be a great aid in translation,
so we present an introduction to Grice's theory.
Suppose we have an English sentence S that someone asserts, and we
are trying to decide whether a particular conclusion we draw is part of the
meaning of S or, instead, one of its implicatures. Grice pointed out that if
the conclusion is part of the meaning, then it cannot be cancelled by some
further elaboration by the speaker. Thus, for example, the conclusion that
Max is home is part of the meaning of an assertion of Max and Claire are
home, so we can't cancel this conclusion by saying Max and Claire are home,
but Max isn't home. We would simply be contradicting ourselves.
Contrast this with the speaker who said It is a lovely day. Suppose he
had gone on to say, perhaps reading haltingly from a phrase book: Do you
speak any French? In that case, the suggestion that the speaker understands
English is effectively cancelled.
A more illuminating use of Grice's cancellability test concerns the expression either. . . or. . . . Recall that we claimed that this should be translated into
fol as an inclusive disjunction, using ∨. We can now see that the suggestion
that this phrase expresses exclusive disjunction is generally just a conversational implicature. For example, if the waiter says You can have either soup or
salad, there is a strong suggestion that you cannot have both. But it is clear
that this is just an implicature, since the waiter could, without contradicting
himself, go on to say And you can have both, if you want. Had the original
either. . . or. . . expressed the exclusive disjunction, this would be like saying
You can have soup or salad but not both, and you can have both, if you want.
Let's go back now to the sentence Max is at home unless Claire is at the
library. Earlier we denied that the correct translation was

¬Library(claire) ↔ Home(max)

which is equivalent to the conjunction of the correct translation

¬Library(claire) → Home(max)

with the additional claim

Library(claire) → ¬Home(max)
Is this second claim part of the meaning of the English sentence, or is it
simply a conversational implicature? Grice's cancellability test shows that it
is just an implicature. After all, it makes perfectly good sense for the speaker
to go on to say On the other hand, if Claire is at the library, I have no idea
where Max is. This elaboration takes away the suggestion that if Claire is at
the library, then Max isn't at home.
Another common implicature arises with the phrase only if, which people
often construe as the stronger if and only if. For example, suppose a father
tells his son, You can have dessert only if you eat all your lima beans. We've
seen that this is not a guarantee that if the child does eat his lima beans he
will get dessert, since only if introduces a necessary, not sufficient, condition.
Still, it is clear that the father's assertion suggests that, other things equal, the
child can have dessert if he eats the dreaded beans. But again, the suggestion
can be cancelled. Suppose the father goes on to say: If you eat the beans, I'll
check to see if there's any ice cream left. This cancels the suggestion that
dessert is guaranteed.
Remember
If the assertion of a sentence carries with it a suggestion that could be
cancelled (without contradiction) by further elaboration by the speaker,
then the suggestion is a conversational implicature, not part of the content
of the original claim.
Exercises
7.22
"
7.23
"
7.24
"!
Suppose Claire asserts the sentence Max managed to get Carl home. Does this logically imply,
or just conversationally implicate, that it was hard to get Carl home? Justify your answer.
Suppose Max asserts the sentence We can walk to the movie or we can drive. Does his assertion
logically imply, or merely implicate, that we cannot both walk and drive? How does this differ
from the soup or salad example?
Consider the sentence Max is home in spite of the fact that Claire is at the library. What would
be the best translation of this sentence into fol? Clearly, whether you would be inclined to use
this sentence is not determined simply by the truth values of the atomic sentences Max is home
and Claire is at the library. This may be because in spite of the fact is, like because, a non-truth-functional connective, or because it carries, like but, additional conversational implicatures. (See
our discussion of because earlier in this chapter and the discussion of but in Chapter 3.) Which
explanation do you think is right? Justify your answer.
Section 7.4
Truth-functional completeness
We now have at our disposal five truth-functional connectives: one unary
(¬) and four binary (∧, ∨, →, ↔). Should we introduce any more? Though
we've seen a few English expressions that can't be expressed in fol, like
because, these have not been truth functional. We've also run into others, like
neither. . . nor. . . , that are truth functional, but which we can easily express
using the existing connectives of fol.
The question we will address in the current section is whether there are any
truth-functional connectives that we need to add to our language. Is it possible
that we might encounter an English construction that is truth functional but
which we cannot express using the symbols we have introduced so far? If so,
this would be an unfortunate limitation of our language.
How can we possibly answer this question? Well, let's begin by thinking
about binary connectives, those that apply to two sentences to make a third.
How many binary truth-functional connectives are possible? If we think about
the possible truth tables for such connectives, we can compute the total number. First, since we are dealing with binary connectives, there are four rows
in each table. Each row can be assigned either true or false, so there are
2⁴ = 16 ways of doing this. For example, here is the table that captures the
truth function expressed by neither. . . nor. . . .
P    Q    Neither P nor Q
t    t    F
t    f    F
f    t    F
f    f    T
Since there are only 16 different ways of filling in the final column of such
a table, there are only 16 binary truth functions, and so 16 possible binary
truth-functional connectives. We could look at each of these tables in turn
and show how to express the truth function with existing connectives, just as
we captured neither P nor Q with ¬(P ∨ Q). But there is a more general and
systematic way to show this.
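Both the count of sixteen tables and the row values for neither. . . nor. . . can be confirmed by brute force. A small sketch in Python (ours, used purely as a truth-table calculator; not part of the book's software):

```python
from itertools import product

# The four rows of a binary truth table: tt, tf, ft, ff.
rows = list(product((True, False), repeat=2))
assert len(rows) == 4

# A binary truth function assigns True or False to each of the four
# rows, so there are 2**4 = 16 of them.
tables = list(product((True, False), repeat=len(rows)))
assert len(tables) == 16

# Neither P nor Q is true only in the last row, where both P and Q
# are false; it agrees with not-(P or Q) everywhere.
for p, q in rows:
    assert (not (p or q)) == ((not p) and (not q))
```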
Suppose we are thinking about introducing a binary truth-functional connective, say #. It will have a truth table like the following, with one of the
values true or false in each row.
P    Q    P # Q
t    t    1st value
t    f    2nd value
f    t    3rd value
f    f    4th value
If all four values are false, then we can clearly express P # Q with the
sentence P ∧ ¬P ∧ Q ∧ ¬Q. So suppose at least one of the values is true.
How can we express P # Q? One way would be this. Let C1, . . . , C4 stand for
the following four conjunctions:

C1 = (P ∧ Q)
C2 = (P ∧ ¬Q)
C3 = (¬P ∧ Q)
C4 = (¬P ∧ ¬Q)
Notice that sentence C1 will be true if the truth values of P and Q are as
specified in the first row of the truth table, and that if the values of P and Q
are anything else, then C1 will be false. Similarly with C2 and the second row
of the truth table, and so forth. To build a sentence that gets the value true
in exactly the same rows as P # Q, all we need do is take the disjunction of the
appropriate Cs. For example, if P # Q is true in rows 2 and 4, then C2 ∨ C4 is
equivalent to this sentence.
What this shows is that all binary truth functions are already expressible
using just the connectives ∧, ∨, and ¬. In fact, it shows that they can be expressed using sentences in disjunctive normal form, as described in Chapter 4.
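Because the procedure is completely mechanical, it is easy to sketch in code. The helper below (our illustration; the name dnf is an assumption and nothing here belongs to Tarski's World or Fitch) builds the disjunction of the conjunctions Ci for the rows marked true, then checks it on the example from the text, where P # Q is true in rows 2 and 4:

```python
# The four rows of a binary truth table, in the book's order.
ROWS = [(True, True), (True, False), (False, True), (False, False)]

def dnf(column):
    """column lists the desired truth value for each row, top to
    bottom.  The result is a function equivalent to the disjunction
    of the conjunctions C_i for the rows marked True (and to the
    contradiction when no row is marked True)."""
    true_rows = [row for row, value in zip(ROWS, column) if value]
    # Each C_i is true exactly when (p, q) matches its row, so the
    # disjunction is true exactly on the chosen rows.
    return lambda p, q: any((p, q) == row for row in true_rows)

# Rows 2 and 4 true: a connective that is true just when Q is false,
# so the constructed sentence C2 or C4 should match not-Q.
sharp = dnf([False, True, False, True])
for p, q in ROWS:
    assert sharp(p, q) == (not q)
```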
It's easy to see that a similar procedure allows us to express all possible
unary truth functions. A unary connective, say %, will have a truth table like
this:

P    %P
t    1st value
f    2nd value

If both of the values under %P are false, then we can express it using the
sentence P ∧ ¬P. Otherwise, we can express %P as a disjunction of one or
more of the following:

C1 = P
C2 = ¬P
C1 will be included as one of the disjuncts if the first value is true, and C2
will be included if the second value is true. (Of course, in only one case will
there be more than one disjunct.)
Once we understand how this procedure is working, we see that it will
apply equally well to truth-functional connectives of any arity. Suppose, for
example, that we want to express the ternary truth-functional connective ♦
defined by the following truth table:

P    Q    R    ♦(P, Q, R)
t    t    t    T
t    t    f    T
t    f    t    F
t    f    f    F
f    t    t    T
f    t    f    F
f    f    t    T
f    f    f    F
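For this table the general procedure produces a disjunction of four conjunctions, one for each true row. The check below (a Python sketch of ours, reading the connective as "if P then Q, else R" as Exercise 7.27 does) verifies that this disjunction reproduces the whole table:

```python
from itertools import product

def ite(p, q, r):
    # "if P then Q, else R": the value is Q when P is true, R otherwise.
    return q if p else r

for p, q, r in product((True, False), repeat=3):
    # One conjunction per true row of the table: ttt, ttf, ftt, fft.
    sentence = ((p and q and r) or (p and q and not r) or
                (not p and q and r) or (not p and not q and r))
    assert ite(p, q, r) == sentence
```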
one use of ∧ Elim, and one use of ¬ Intro (see Exercise 7.26). This is why
we haven't skimped on connectives.
Remember
1. A set of connectives is truth-functionally complete if the connectives
allow us to express every truth function.
2. Various sets of connectives, including the Boolean connectives, are
truth-functionally complete.
Exercises
7.25
!
(Replacing ∨, →, and ↔) Use Tarski's World to open the file Sheffer's Sentences. In this file,
you will find the following sentences in the odd-numbered positions:
1. Tet(a) ∨ Small(a)
3. Tet(a) → Small(a)
5. Tet(a) ↔ Small(a)
7. (Cube(b) ∨ Cube(c)) → (Small(b) ↔ Small(c))
In each even-numbered slot, enter a sentence that is equivalent to the one above it, but which
uses only the connectives ∧ and ¬. Before submitting your solution file, you might want to try
out your sentences in several worlds to make sure the new sentences have the expected truth
values.
7.26
!
7.27
!
(Basic versus defined symbols in proofs) Treating a symbol as basic, with its own rules, or as a
defined symbol, without its own rules, makes a big difference to the complexity of proofs. Use
Fitch to open the file Exercise 7.26. In this file, you are asked to construct a proof of ¬(A ∧ B)
from the premises ¬A and ¬B. A proof of the equivalent sentence ¬A ∨ ¬B would of course take a
single step.
(Simplifying if. . . then. . . else) Assume that P, Q, and R are atomic sentences. See if you can
simplify the sentence we came up with to express ♦(P, Q, R) (if P then Q, else R), so that it
becomes a disjunction of two sentences, each of which is a conjunction of two literals. Submit
your solution as a Tarski's World sentence file.
7.28
!!!
7.29
"!
(Expressing another ternary connective) Start a new sentence file using Tarski's World. Use the
method we have developed to express the ternary connective defined in the following truth
table, and enter this as the first sentence in your file. Then see if you can simplify the result
as much as possible. Enter the simplified form as your second sentence. (This sentence should
have no more than two occurrences each of P, Q, and R, and no more than six occurrences of
the Boolean connectives ∧, ∨, and ¬.)
P    Q    R    (P, Q, R)
t    t    t    T
t    t    f    T
t    f    t    T
t    f    f    F
f    t    t    F
f    t    f    T
f    f    t    T
f    f    f    T
(Sheffer stroke) Another binary connective that is truth-functionally complete on its own is
called the Sheffer stroke, named after H. M. Sheffer, one of the logicians who discovered and
studied it. It is also known as nand by analogy with nor. Here is its truth table:
P    Q    P | Q
t    t    F
t    f    T
f    t    T
f    f    T
Show how to express ¬P, P ∨ Q, and P ∧ Q using the Sheffer stroke. (We remind you that
nowadays, the symbol | has been appropriated as an alternative for ∨. Don't let that confuse
you.)
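The stroke's table is just that of "not both," which a short check makes vivid (Python stands in for Boole here; this only restates the table and gives nothing of the exercise away):

```python
def stroke(p, q):
    # Sheffer stroke (nand): true except when both inputs are true.
    return not (p and q)

# The four rows of the table, top to bottom: tt -> F, tf -> T,
# ft -> T, ff -> T.
expected = {(True, True): False, (True, False): True,
            (False, True): True, (False, False): True}
for (p, q), value in expected.items():
    assert stroke(p, q) == value
```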
7.30
"!
7.31
"!
(Putting monkeys to work) Suppose we have the single binary connective →, plus the symbol
for absurdity, ⊥. Using just these expressions, see if you can find a way to express ¬P, P ∨ Q,
and P ∧ Q. [Hint: Don't forget what you learned in Exercise 7.21.]
(Another non-truth-functional connective) Show that the truth value at a particular time of the
sentence Max is home whenever Claire is at the library is not determined by the truth values
of the atomic sentences Max is home and Claire is at the library at that same time. That is,
show that whenever is not truth functional.
7.32
"!
Section 7.5
Alternative notation
As with the other truth-functional connectives, there are alternative notations
for the material conditional and biconditional. The most common alternative
to P → Q is P ⊃ Q. Polish notation for the conditional is Cpq. The most common
alternative to P ↔ Q is P ≡ Q. The Polish notation for the biconditional is Epq.
Remember
The following table summarizes the alternative notations discussed so far.
Our notation    Common equivalents
¬P              ~P, P̄, !P, Np
P ∧ Q           P&Q, P&&Q, P·Q, PQ, Kpq
P ∨ Q           P | Q, P ∥ Q, Apq
P → Q           P ⊃ Q, Cpq
P ↔ Q           P ≡ Q, Epq
Chapter 8
Section 8.1
Valid steps
The most common valid proof step involving → goes by the Latin name modus
ponens, or by the English conditional elimination. The rule says that if you
have established both P → Q and P, then you can infer Q. This rule is obviously valid, as a review of the truth table for → shows, since if P → Q and P
are both true, then so must be Q.
There is a similar proof step for the biconditional, since the biconditional
is logically equivalent to a conjunction of two conditionals. If you have established either P ↔ Q or Q ↔ P, then if you can establish P, you can infer Q.
This is called biconditional elimination.
In addition to these simple rules, there are a number of useful equivalences
involving our new symbols. One of the most important is known as the Law of
Contraposition. It states that P → Q is logically equivalent to ¬Q → ¬P. This
latter conditional is known as the contrapositive of the original conditional. It
is easy to see that the original conditional is equivalent to the contrapositive,
since the latter is false if and only if ¬Q is true and ¬P is false, which is
to say, when P is true and Q is false. Contraposition is a particularly useful
equivalence since it is often easier to prove the contrapositive of a conditional
than the conditional itself. We'll see an example of this in a moment.
Here are some logical equivalences to bear in mind, beginning with contraposition. Make sure you understand them all and see why they are equivalent.
Use Boole to construct truth tables for any you don't immediately see.
P → Q   ⇔   ¬Q → ¬P
P → Q   ⇔   ¬P ∨ Q
¬(P → Q)   ⇔   P ∧ ¬Q
P ↔ Q   ⇔   (P → Q) ∧ (Q → P)
P ↔ Q   ⇔   (P ∧ Q) ∨ (¬P ∧ ¬Q)
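Each of these equivalences can be confirmed row by row. Here is a sketch in Python standing in for Boole (the helper names implies and iff are ours, not the book's):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def iff(p, q):
    return p == q

for p, q in product((True, False), repeat=2):
    # Contraposition.
    assert implies(p, q) == implies(not q, not p)
    # A conditional is its negated antecedent disjoined with its consequent.
    assert implies(p, q) == ((not p) or q)
    # A negated conditional: true antecedent, false consequent.
    assert (not implies(p, q)) == (p and not q)
    # A biconditional as two conditionals.
    assert iff(p, q) == (implies(p, q) and implies(q, p))
    # A biconditional as "both true or both false".
    assert iff(p, q) == ((p and q) or ((not p) and (not q)))
```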
Remember
Let P and Q be any sentences of fol.
1. Modus ponens: From P → Q and P, infer Q.
2. Biconditional elimination: From P and either P ↔ Q or Q ↔ P, infer
Q.
3. Contraposition: P → Q ⇔ ¬Q → ¬P
n² = (2m + 1)²
   = 4m² + 4m + 1
   = 2(2m² + 2m) + 1
Did you get lost? This proof has a pretty complicated structure, since we
first assumed Even(n²) for the purpose of conditional proof, but then immediately assumed ¬Even(n) to get an indirect proof of Even(n). The contradiction
that we arrived at was ¬Even(n²), which contradicted our first assumption.
Proofs of this sort are fairly common, and this is why it is often easier to
prove the contrapositive of a conditional. The contrapositive of our original
claim is this:
¬Even(n) → ¬Even(n²)
Let's look at the proof of this contrapositive.

Proof: Assume ¬Even(n); that is, n is odd. Then n = 2m + 1 for some
natural number m. Consequently,

n² = (2m + 1)²
   = 4m² + 4m + 1
   = 2(2m² + 2m) + 1

But this shows that n² is also odd, hence ¬Even(n²). Thus, by conditional proof, we have established ¬Even(n) → ¬Even(n²).
By proving the contrapositive, we avoided the need for an indirect proof
inside the conditional proof. This makes the proof easier to understand, and
since the contrapositive is logically equivalent to our original claim, our second
proof could serve as a proof of the original claim as well.
The method of conditional proof is used extensively in everyday reasoning. Some years ago Bill was trying to decide whether to take English 301,
Postmodernism. His friend Sarah claimed that if Bill takes Postmodernism,
he will not get into medical school. Sarah's argument, when challenged by Bill,
took the form of a conditional proof, combined with a proof by cases.
Suppose you take Postmodernism. Then either you will adopt the
postmodern disdain for rationality or you won't. If you don't, you will
fail the class, which will lower your GPA so much that you will not get
into medical school. But if you do adopt the postmodern contempt
toward rationality, you won't be able to pass organic chemistry, and
so will not get into medical school. So in either case, you will not get
into medical school. Hence, if you take Postmodernism, you won't
get into medical school.
Unfortunately for Bill, he had already succumbed to postmodernism, and
so rejected Sarah's argument. He went ahead and took the course, failed chemistry, and did not get into medical school. He's now a wealthy lobbyist in
Washington. Sarah is an executive in the computer industry in California.
Proving biconditionals
Not surprisingly, we can also use conditional proof to prove biconditionals,
though we have to work twice as hard. To prove P ↔ Q by conditional proof,
you need to do two things: assume P and prove Q; then assume Q and prove
P. This gives us both P → Q and Q → P, whose conjunction is equivalent to
P ↔ Q.
There is another form of proof involving ↔ that is common in mathematics. Mathematicians are quite fond of finding results which show that
several different conditions are equivalent. Thus you will find theorems that
make claims like this: The following conditions are all equivalent: Q1, Q2, Q3.
What they mean by this is that all of the following biconditionals hold:

Q1 ↔ Q2
Q2 ↔ Q3
Q1 ↔ Q3
To prove these three biconditionals in the standard way, you would have
to give six conditional proofs, two for each biconditional. But we can cut our
work in half by noting that it suffices to prove some cycle of results like the
following:
Q1 → Q2
Q2 → Q3
Q3 → Q1
These would be shown by three conditional proofs, rather than the six that
would otherwise be required. Once we have these, there is no need to prove the
reverse directions, since they follow from the transitivity of →. For example,
we don't need to explicitly prove Q2 → Q1, the reverse of the first conditional,
since this follows from Q2 → Q3 and Q3 → Q1, our other two conditionals.
When we apply this technique, we don't have to arrange the cycle in
exactly the order in which the conditions are given. But we do have to make
sure we have a genuine cycle, one that allows us to get from any one of our
conditions to any other.
Let's give a very simple example. We will prove that the following conditions on a natural number n are all equivalent:
1. n is even
2. n² is even
3. n² is divisible by 4.

Proof: Rather than prove all six conditionals, we prove that (3) → (2),
(2) → (1), and (1) → (3). Assume (3). Now clearly, if n² is divisible by 4, then
Exercises
8.1
"
8.2
!|"
In the following list we give a number of inference patterns, some of which are valid, some
invalid. For each pattern, decide whether you think it is valid and say so. Later, we will return
to these patterns and ask you to give formal proofs for the valid ones and counterexamples for
the invalid ones. But for now, just assess their validity.
1. Affirming the Consequent: From A → B and B, infer A.
2. Modus Tollens: From A → B and ¬B, infer ¬A.
3. Strengthening the Antecedent: From B → C, infer (A ∧ B) → C.
4. Weakening the Antecedent: From B → C, infer (A ∨ B) → C.
5. Strengthening the Consequent: From A → B, infer A → (B ∧ C).
6. Weakening the Consequent: From A → B, infer A → (B ∨ C).
7. Constructive Dilemma: From A ∨ B, A → C, and B → D, infer C ∨ D.
8. Transitivity of the Biconditional: From A ↔ B and B ↔ C, infer A ↔ C.
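Before committing to your assessments, you can also test patterns like these by brute force over truth assignments. The sketch below (ours, not part of the courseware) declares a pattern valid when no assignment makes the premises true and the conclusion false, and illustrates it on two of the best-known cases:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion, nvars):
    # Tautologically valid: no row makes every premise true while
    # the conclusion is false.
    for row in product((True, False), repeat=nvars):
        if all(pr(*row) for pr in premises) and not conclusion(*row):
            return False
    return True

# Modus Tollens is valid.
assert valid([lambda a, b: implies(a, b), lambda a, b: not b],
             lambda a, b: not a, 2)

# Affirming the Consequent is not: A false, B true is a counterexample.
assert not valid([lambda a, b: implies(a, b), lambda a, b: b],
                 lambda a, b: a, 2)
```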
Open Conditional Sentences. Suppose that the sentences in this file are your premises. Now
consider the five sentences listed below. Some of these sentences are consequences of these
premises, some are not. For those that are consequences, give informal proofs and turn them
in to your instructor. For those that are not consequences, submit counterexample worlds in
which the premises are true but the conclusion false. Name the counterexamples World 8.2.x,
where x is the number of the sentence.
1. Tet(e)
2. Tet(c) Tet(e)
3. Tet(c) Larger(f, e)
4. Tet(c) LeftOf(c, f)
5. Dodec(e) Smaller(e, f)
The following arguments are all valid. Turn in informal proofs of their validity. You may find it helpful
to translate the arguments into fol before trying to give proofs, though that's not required. Explicitly
note any inferences using modus ponens, biconditional elimination, or conditional proof.
8.3
"
8.4
"
8.5
"
8.6
"
8.7
"
8.8
"
8.9
"
8.10
!|"
8.11
"
Open Between Sentences. Determine whether this set of sentences is satisfiable or not. If it
is, submit a world in which all the sentences are true. If not, give an informal proof that the
sentences are inconsistent. That is, assume all of them and derive a contradiction.
Analyze the structure of the informal proof in support of the following claim: If the U.S. does
not cut back on its use of oil soon, parts of California will be flooded within 50 years. Are there
weak points in the argument? What premises are implicitly assumed in the proof? Are they
plausible?
Proof: Suppose the U.S. does not cut back on its oil use soon. Then it will be unable
to reduce its carbon dioxide emissions substantially in the next few years. But then
the countries of China, India and Brazil will refuse to join in efforts to curb carbon
dioxide emissions. As these countries develop without such efforts, the emission of
carbon dioxide will get much worse, and so the greenhouse effect will accelerate. As a
result the sea will get warmer, ice will melt, and the sea level will rise. In which case,
low-lying coastal areas in California will be subject to flooding within 50 years. So if
we do not cut back on our oil use, parts of California will be flooded within 50 years.
8.12
Describe an everyday example of reasoning that uses the method of conditional proof.
"
8.13
"!
8.15
"!
8.16
"!
8.14
"!
Prove that the following conditions on the natural number n are all equivalent. Use as few
conditional proofs as possible.
1. n is divisible by 3
2. n² is divisible by 3
3. n² is divisible by 9
4. n³ is divisible by 3
5. n³ is divisible by 9
6. n³ is divisible by 27
Give an informal proof that if R is a tautological consequence of P1, . . . , Pn and Q, then Q → R
is a tautological consequence of P1, . . . , Pn.
Section 8.2
Once we have conditional introduction at our disposal, we can convert
any proof with premises into the proof, without premises, of a corresponding
conditional. For example, we showed in Chapter 6 (page 158) how to give a
formal proof of ¬¬A from premise A. We can now use the earlier proof to
build a proof of the logically true sentence A → ¬¬A.
1. A
     2. ¬A
     3. ⊥              ⊥ Intro: 1, 2
   4. ¬¬A              ¬ Intro: 2–3
5. A → ¬¬A             → Intro: 1–4
Notice that the subproof here is identical to the original proof given on
page 158. We simply embedded that proof in our new proof and applied conditional introduction to derive A → ¬¬A.
Default and generous uses of the rules
The rule → Elim does not care in which order you cite the support sentences.
The rule → Intro does not insist that the consequent be at the last step of
the cited subproof, though it usually is. Also, the assumption step might be
the only step in the subproof, as in a proof of a sentence of the form P → P.
The default applications of the conditional rules work exactly as you would
expect. If you cite supports of the form indicated in the rule statements, Fitch
will fill in the appropriate conclusion for you.
You try it
................................................................
1. Open the file Conditional 2. Look at the goal to see what sentence we are
trying to prove. Then focus on each step in succession and check the step.
On the empty steps, try to predict what default Fitch will supply.
2. When you are finished, make sure you understand the proof. Save the
checked proof as Proof Conditional 2.
"
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
You can use the Add Support Steps command when you are using the
→ Intro rule, and have an implication at the focus step. Fitch will insert a
subproof as support. The antecedent of the implication will be the assumption
of the subproof, and the consequent will appear at the last line of the subproof.
You cannot use Add Support Steps with → Elim.
This means that you can conclude Q if you can establish P and either of the
biconditionals indicated.
The introduction rule for the biconditional P ↔ Q requires that you give
two subproofs, one showing that Q follows from P, and one showing that P
follows from Q:
Biconditional Introduction (↔ Intro):

P
⋮
Q

Q
⋮
P

▷ P ↔ Q
1. P
     2. ¬P
     3. ⊥              ⊥ Intro: 1, 2
   4. ¬¬P              ¬ Intro: 2–3

5. ¬¬P
   6. P                ¬ Elim: 5

7. P ↔ ¬¬P             ↔ Intro: 1–4, 5–6
You try it
................................................................
1. Open the file Conditional 3. In this file, you are asked to prove, without
premises, the law of contraposition:

(P → Q) ↔ (¬Q → ¬P)
2. Start your proof by sketching in the two subproofs that you know you'll
have to prove, plus the desired conclusion. Your partial proof will look like
this:

1. P → Q
2. ¬Q → ¬P                     Rule?

3. ¬Q → ¬P
4. P → Q                       Rule?

5. (P → Q) ↔ (¬Q → ¬P)         ↔ Intro: 1–2, 3–4
Section 8.2
3. Now that you have the overall structure, start filling in the first subproof.
Since the goal of that subproof is a conditional claim, sketch in a conditional proof that would give you that claim:

1. P → Q
     2. ¬Q
     3. ¬P                     Rule?
   4. ¬Q → ¬P                  → Intro: 2–3

5. ¬Q → ¬P
6. P → Q                       Rule?

7. (P → Q) ↔ (¬Q → ¬P)         ↔ Intro: 1–4, 5–6
4. The first subproof can now be completed with an indirect proof of ¬P:
assume P, get Q by → Elim, and the contradiction with ¬Q follows:

1. P → Q
     2. ¬Q
          3. P
          4. Q                 → Elim: 1, 3
          5. ⊥                 ⊥ Intro: 4, 2
     6. ¬P                     ¬ Intro: 3–5
   7. ¬Q → ¬P                  → Intro: 2–6

8. ¬Q → ¬P
9. P → Q                       Rule?

10. (P → Q) ↔ (¬Q → ¬P)        ↔ Intro: 1–7, 8–9
5. This completes the first subproof. Luckily, you sketched in the second
subproof so you know what you want to do next. You should be able to
finish the second subproof on your own, since it is almost identical to the
first.
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
The default and generous uses of the biconditional rules are exactly like those
for the conditional connective, and Add Support Steps works exactly as
you would expect.
Exercises
8.17
!
If you skipped any of the You try it sections, go back and do them now. Submit the files
Proof Conditional 1, Proof Conditional 2, and Proof Conditional 3.
In the following exercises we return to the patterns of inference discussed in Exercise 8.1. Some of
these are valid, some invalid. For each valid pattern, construct a formal proof in Fitch. For each invalid
pattern, give a counterexample using Tarski's World. To give a counterexample in these cases, you will
have to come up with sentences of the blocks language that fit the pattern, and a world that makes
those specific premises true and the conclusion false. Submit both the world and the sentence file. In the
sentence file, list the premises first and the conclusion last.
8.18
!
Modus Tollens:
From A → B and ¬B, infer ¬A.
8.19
Strengthening the Antecedent:
From B → C, infer (A ∧ B) → C.
8.20
!
Weakening the Antecedent:
From B → C, infer (A ∨ B) → C.
8.21
Strengthening the Consequent:
From A → B, infer A → (B ∧ C).
8.22
!
Weakening the Consequent:
From A → B, infer A → (B ∨ C).
8.23
Constructive Dilemma:
From A ∨ B, A → C, and B → D, infer C ∨ D.
8.24
!
Transitivity of the Biconditional:
From A ↔ B and B ↔ C, infer A ↔ C.
8.25
Use Fitch to construct formal proofs for the following arguments. In two cases, you may find yourself
re-proving an instance of the law of Excluded Middle, P ∨ ¬P, in order to complete your proof. If you've
forgotten how to do that, look back at your solution to Exercise 6.33. Alternatively, with the permission
of your instructor, you may use Taut Con to justify an instance of Excluded Middle.
8.26
8.27
!
P → (Q → P)
(P → (Q → R)) ↔ ((P ∧ Q) → R)
8.28
!
¬¬P ↔ P
8.29
!
¬(P ∧ Q) ↔ (¬P ∨ ¬Q)
8.30
!
¬(P ∨ Q) ↔ (¬P ∧ ¬Q)
The following arguments are translations of those given in Exercises 8.3–8.9. (For simplicity we have
assumed "the unicorn" refers to a specific unicorn named Charlie. This is less than ideal, but the best we
can do without quantifiers.) Use Fitch to formalize the proofs you gave of their validity. You will need
to use Ana Con to introduce ⊥ in two of your proofs.
8.31
!
(Mythical(c) Mammal(c))
(Mythical(c) Mortal(c))
(Mortal(c) Mammal(c)) Horned(c)
Horned(c) Magical(c)
8.32
!
Horned(c) Mammal(c)
Magical(c)
8.33
!
Horned(c) → (Elusive(c) ∧ Dangerous(c))
(Elusive(c) ∨ Mythical(c)) → Rare(c)
Mammal(c) → ¬Rare(c)
Horned(c) → ¬Mammal(c)
8.34
!
8.35
!
Cube(b) Small(b)
Small(c) (Small(d) Small(e))
Small(d) Small(c)
Cube(b) Small(e)
Small(c) Small(b)
8.36
!
SameRow(d, a) SameRow(d, b)
SameRow(d, c)
SameRow(d, b) (SameRow(d, a)
SameRow(d, c))
SameRow(d, a) SameRow(d, c)
SameRow(d, a) SameRow(d, b)
8.37
!
8.38
!!
Use Fitch to give formal proofs of both (P Q) P and the equivalent sentence (P Q) P.
(You will find the exercise files in Exercise 8.38.1 and Exercise 8.38.2.) Do you see why it is
convenient to include ⊥ in fol, rather than define it in terms of the Boolean connectives?
Section 8.3
Soundness
We intend our formal system F to be a correct system of deduction in the
sense that any argument that can be proven valid in F should be genuinely
valid. The first question that we will ask, then, is whether we have succeeded
in this goal. Does the system F allow us to construct proofs only of genuinely
valid arguments? This is known as the soundness question for the deductive
system F.
The answer to this question may seem obvious, but it deserves a closer look.
After all, consider the rule of inference suggested in Exercise 7.32 on page 198.
Probably, when you first looked at this rule, it seemed pretty reasonable, even
though on closer inspection you realized it was not (or maybe you got the
problem wrong). How can we be sure that something similar might not be the
case for one of our official rules? Maybe there is a flaw in one of them but we
just haven't thought long enough or hard enough to discover it.
Or maybe there are problems that go beyond the individual rules, something
about the way the rules interact. Consider for example the following
argument:
¬(Happy(carl) ∧ Happy(scruffy))
¬Happy(carl)
soundness of a deductive system
We know this argument isn't valid since it is clearly possible for the premise to
be true and the conclusion false. But how do we know that the rules of proof
we've introduced do not allow some very complicated and ingenious proof
of the conclusion from the premise? After all, there is no way to examine all
possible proofs and make sure there isn't one with this premise and conclusion:
there are infinitely many proofs.
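We cannot survey infinitely many proofs, but we can survey truth assignments. The following Python sketch (ours, not part of the text) treats Happy(carl) and Happy(scruffy) as unanalyzed atoms and searches the four truth-value rows for one that makes the premise true and the conclusion false:

```python
from itertools import product

# Premise: not (Happy(carl) and Happy(scruffy)); conclusion: not Happy(carl).
premise = lambda carl, scruffy: not (carl and scruffy)
conclusion = lambda carl, scruffy: not carl

# A counterexample row makes the premise true and the conclusion false.
counterexamples = [
    (carl, scruffy)
    for carl, scruffy in product([True, False], repeat=2)
    if premise(carl, scruffy) and not conclusion(carl, scruffy)
]

print(counterexamples)  # [(True, False)]: Carl happy, Scruffy not
```

Such a row shows that the argument is not tautologically valid; the soundness result developed below is what turns that observation into the guarantee that no proof of it exists.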
To answer our question, we need to make it more precise. We have seen that
there is a certain vagueness in the notion of logical consequence. The concept
of tautological consequence was introduced as a precise approximation of the
informal notion. One way to make our question more precise is to ask whether
the rules for the truth-functional connectives allow us to prove only arguments
that are tautologically valid. This question leaves out the issue of whether the
identity rules are legitimate, but we will address that question later.
Let's introduce some new symbols to make it easier to express the claim
we want to investigate. We will use FT to refer to the portion of our deductive
system that contains the introduction and elimination rules for ∧, ∨, ¬, →, ↔,
and ⊥. You can think of the subscript T as standing for either tautology or
truth-functional. We will also write P1, . . . , Pn ⊢T S to indicate that there
is a formal proof in FT of S from premises P1, . . . , Pn. (The symbol ⊢ is
commonly used in logic to indicate the provability of what's on the right from
what's on the left. If you have trouble remembering what this symbol means,
just think of it as a tiny Fitch bar.) We can now state our claim as follows.
Theorem (Soundness of FT) If P1, . . . , Pn ⊢T S then S is a tautological consequence of P1, . . . , Pn.
Proof: Suppose that p is a proof constructed in the system FT . We
will show that any sentence that occurs at any step in proof p is
a tautological consequence of the assumptions in force at that step.
This claim applies not just to sentences at the main level of p, but also
to sentences appearing in subproofs, no matter how deeply nested.
The assumptions in force at a step always include the main premises
of the proof, but if we are dealing with a step inside some nested
subproofs, they also include all the assumptions of these subproofs.
The soundness theorem follows from our claim because if S appears
Chapter 8
at the main level of p, then the only assumptions in force are the
premises P1 , . . . , Pn . So S is a tautological consequence of P1 , . . . , Pn .
To prove this claim we will use proof by contradiction. Suppose that
there is a step in p containing a sentence that is not a tautological
consequence of the assumptions in force at that step. Call this an
invalid step. The idea of our proof is to look at the first invalid step
in p and show that none of the twelve rules of FT could have justified
that step. In other words, we will apply proof by cases to show that,
no matter which rule of FT was applied at the invalid step, we get a
contradiction. (Actually, we will only look at three of the cases and
leave the remaining rules as exercises.) This allows us to conclude
that there can be no invalid steps in proofs in FT .
→ Intro: Suppose the first invalid step derives the sentence Q → R
from an application of → Intro to an earlier subproof with assumption Q and conclusion R.

| ...
| | Q
| | ...
| | R
| ...
| Q → R
| ...

Again let A1, . . . , Ak be the assumptions in force at Q → R. Note
that the assumptions in force at R are A1, . . . , Ak and Q. Since step
R is earlier than the first invalid step, R must be a tautological consequence of A1, . . . , Ak and Q.
Imagine constructing a joint truth table for the sentences A1, . . . , Ak,
Q, Q → R, and R. There must be a row h of this table in which
A1, . . . , Ak all come out true, but Q → R comes out false, by the
assumption that this step is invalid. Since Q → R is false in this
row, Q must be true and R must be false. But this contradicts our
observation that R is a tautological consequence of A1, . . . , Ak and
Q.
Figure 8.1: The soundness theorem for FT tells us that only tautologies are
provable (without premises) in FT .
Completeness
completeness of a deductive system
Figure 8.2: Completeness and soundness of FT tells us that all and only tautologies are provable (without premises) in FT .
Could there be a tautologically valid argument that is not provable in the
deductive system FT? The next theorem assures us that this cannot happen.
Theorem (Completeness of FT) If a sentence S is a tautological consequence
of P1, . . . , Pn, then P1, . . . , Pn ⊢T S.
The proof of this result is quite a bit more complicated than the proof of
the Soundness Theorem, and requires material we have not yet introduced.
Consequently, we will not be able to give the proof here, but will prove it in
Chapter 17.
This result is called the Completeness Theorem because it tells us that
the introduction and elimination rules are complete for the logic of the truth-functional connectives: anything that is a logical consequence simply in virtue
of the meanings of the truth-functional connectives can be proven in FT. As
illustrated in Figure 8.2, it assures us that all tautologies (and tautologically
valid arguments) are provable in FT .
Notice, however, that the Soundness Theorem implies a kind of incompleteness, since it shows that the rules of FT allow us to prove only tautological
consequences of our premises. They do not allow us to prove any logical
consequence of the premises that is not a tautological consequence of those
premises. For example, it shows that there is no way to prove Dodec(c) from
Dodec(b) ∧ b = c in FT, since the former is not a tautological consequence
of the latter. To prove something like this, we will need the identity rules in
addition to the rules for the truth-functional connectives. Similarly, to prove
completeness of FT
soundness and
incompleteness
Larger(c, b) from Larger(b, c), we would need rules having to do with the
predicate Larger. We will return to these issues in Chapter 19.
The Soundness and Completeness Theorems have practical uses that are
worth keeping in mind. The Completeness Theorem gives us a method for
showing that an argument has a proof without actually having to find such
a proof: just show that the conclusion is a tautological consequence of
the premises. For example, it is obvious that A → (B → A) is a tautology, so
by the Completeness Theorem we know it must have a proof. Similarly, the
sentence B ∧ D is a tautological consequence of ((A ∧ B) ∧ (C ∧ D)), so we
know it must be possible to find a proof of the former from the latter.
The Soundness Theorem, on the other hand, gives us a method for telling
that an argument does not have a proof in FT: show that the conclusion is not
a tautological consequence of the premises. For example, A → (A ∧ B) is not
a tautology, so it is impossible to construct a proof of it in FT, no matter how
hard you try. Similarly, the sentence B ∨ D is not a tautological consequence
of ((A ∨ B) ∧ (C ∨ D)), so we know there is no proof of this in FT.
Recall our earlier discussion of the Taut Con routine in Fitch. This procedure checks to see whether a sentence is a tautological consequence of whatever
sentences you cite in support. You can use the observations in the preceding
paragraphs, along with Taut Con, to decide whether it is possible to give
a proof using the rules of FT. If Taut Con says a particular sentence is a
tautological consequence of the cited sentences, then you know it is possible
to give a full proof of the sentence, even though you may not see exactly how
the proof goes. On the other hand, if Taut Con says it is not a tautological
consequence of the cited sentences, then there is no point in trying to find a
proof in FT, for the simple reason that no such proof is possible.
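The observations in the last two paragraphs can be mechanized. The Python sketch below (our illustration; it is not Fitch's actual Taut Con procedure) decides tautological consequence by checking every truth assignment to the atoms, with formulas represented as Boolean functions of an assignment:

```python
from itertools import product

def taut_con(premises, conclusion, atoms):
    """True iff conclusion is a tautological consequence of premises."""
    for values in product([False, True], repeat=len(atoms)):
        row = dict(zip(atoms, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # a row with true premises and false conclusion
    return True

# A -> (B -> A) is a tautology, so by Completeness it has a proof in FT.
print(taut_con([], lambda v: (not v["A"]) or ((not v["B"]) or v["A"]),
               ["A", "B"]))  # True

# A -> (A and B) is not a tautology, so by Soundness no proof exists.
print(taut_con([], lambda v: (not v["A"]) or (v["A"] and v["B"]),
               ["A", "B"]))  # False
```

The search is exponential in the number of atoms, which is why this works as a check of provability but not as a replacement for finding proofs.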
Remember
Given an argument with premises P1 , . . . , Pn and conclusion S:
1. (Completeness of FT) If S is a tautological consequence of P1, . . . , Pn,
then there is a proof of S from premises P1, . . . , Pn using only the
introduction and elimination rules for ∧, ∨, ¬, →, ↔, and ⊥.
2. (Soundness of FT) If S is not a tautological consequence of P1, . . . , Pn,
then there is no proof of S from premises P1, . . . , Pn using only the
rules for ∧, ∨, ¬, →, ↔, and ⊥.
3. Which of these alternatives holds can be determined with the Taut
Con procedure of Fitch.
Exercises
Decide whether the following two arguments are provable in FT without actually trying to find proofs.
Do this by constructing a truth table in Boole to assess their tautological validity. Submit the table. Then
explain clearly how you know the argument is or is not provable by applying the Soundness and Completeness results. Turn in your explanations to your instructor. (The explanations are more important
than the tables, so don't forget the second part!)
8.39
!|"
8.40
A (B A (C D))
E (D (A (B D)))
!|"
A (B A (C D)) (A D)
(E (D (A (B D))))
AB
In the proof of the Soundness Theorem, we only treated three of the twelve rules of FT . The next three
problems ask you to treat some of the other rules.
8.41
"!
8.43
"!!
8.42
"!!
Section 8.4
middle or to apply a DeMorgan equivalence. But you should still use rules like
∨ Elim, ¬ Intro, and → Intro when your informal proof would use proof by
cases, proof by contradiction, or conditional proof. Any one-step proofs that
consist of a single application of Taut Con will be counted as wrong!
Before doing these problems, go back and read the material in the Remember boxes, paying special attention to the strategy for evaluating arguments on page 173.
Remember
From this point on in the book, you may use Taut Con in formal proofs,
but only to skip simple steps that would go unmentioned in an informal
proof.
Exercises
In the following exercises, you are given arguments in the blocks language. Evaluate each argument's
validity. If it is valid, construct a formal proof to show this. If you need to use Ana Con, use it only to
derive ⊥ from atomic sentences. If the argument is invalid, you should use Tarski's World to construct
a counterexample world.
8.44
!
Adjoins(a, b) Adjoins(b, c)
SameRow(a, c)
8.45
!
(Cube(b) b = c) Cube(c)
a ≠ c
8.46
!
8.47
!
Cube(a) Small(b)
8.48
!
8.49
!
(Dodec(a) Dodec(b))
(SameCol(a, c) Small(a))
(SameCol(b, c) Small(b))
(Dodec(b) Small(a))
SameCol(a, c) SameCol(b, c)
Dodec(a) Small(b)
8.50
!
8.51
!
8.52
!
8.53
!
Small(a) Small(b)
Small(b) (SameSize(b, c) Small(c))
Small(a) (Large(a) Large(c))
SameSize(b, c) (Large(c) Small(c))
Part II
Quantifiers
Chapter 9
Introduction to Quantification
In English and other natural languages, basic sentences are made by combining
noun phrases and verb phrases. The simplest noun phrases are names, like Max
and Claire, which correspond to the constant symbols of fol. More complex
noun phrases are formed by combining common nouns with words known as
determiners, such as every, some, most, the, three, and no, giving us noun
phrases like every cube, some man from Indiana, most children in the class,
the dodecahedron in the corner, three blind mice, and no student of logic.
Logicians call noun phrases of this sort quantified expressions, and sentences containing them quantified sentences. They are so called because they
allow us to talk about quantities of things: every cube, most children, and so
forth.
The logical properties of quantified sentences are highly dependent on
which determiner is used. Compare, for example, the following arguments:
Every rich actor is a good actor.
Brad Pitt is a rich actor.
Brad Pitt is a good actor.
Many rich actors are good actors.
Brad Pitt is a rich actor.
Brad Pitt is a good actor.
No rich actor is a good actor.
Brad Pitt is a rich actor.
Brad Pitt is a good actor.
What a difference a determiner makes! The first of these arguments is obviously valid. The second is not logically valid, though the premises do make the
conclusion at least plausible. The third argument is just plain dumb: in fact
the premises logically imply the negation of the conclusion. You can hardly
get a worse argument than that.
Quantification takes us out of the realm of truth-functional connectives.
Notice that we cant determine the truth of quantified sentences by looking
at the truth values of constituent sentences. Indeed, sentences like Every rich
determiners
quantified sentences
hidden quantification
actor is a good actor and No rich actor is a good actor really aren't made up
of simpler sentences, at least not in any obvious way. Their truth values are
determined by the relationship between the collection of rich actors and the
collection of good actors: by whether all of the former or none of the former
are members of the latter.
Various non-truth-functional constructions that weve already looked at
are, in fact, hidden forms of quantification. Recall, for example, the sentence:
Max is home whenever Claire is at the library.
You saw in Exercise 7.31 that the truth of this sentence at a particular time is
not a truth function of its parts at that time. The reason is that whenever is
an implicit form of quantification, meaning at every time that. The sentence
means something like:
Every time when Claire is at the library is a time when Max is at home.
quantifiers of fol
Section 9.1
Before we can show you how fol's quantifier symbols work, we need to introduce a new type of term, called a variable. Variables are a kind of auxiliary
symbol. In some ways they behave like individual constants, since they can
appear in the list of arguments immediately following a predicate or function
symbol. But in other ways they are very different from individual constants. In
particular, their semantic function is not to refer to objects. Rather, they are
placeholders that indicate relationships between quantifiers and the argument
positions of various predicates.
atomic wffs
Remember
1. The language fol has an infinite number of variables, any of t, u, v,
w, x, y, and z, with or without numerical subscripts.
2. Variables can occur in atomic wffs in any position normally occupied
by a name.
Section 9.2
Universal quantifier (∀): everything, each thing, all things, anything
Existential quantifier (∃): something, at least one thing, a, an
1. If P is a wff, so is ¬P.
2. If P1, . . . , Pn are wffs, so is (P1 ∧ . . . ∧ Pn).
3. If P1, . . . , Pn are wffs, so is (P1 ∨ . . . ∨ Pn).
4. If P and Q are wffs, so is (P → Q).
5. If P and Q are wffs, so is (P ↔ Q).
6. If P is a wff and ν is a variable (i.e., one of t, u, v, w, x, . . . ), then
∀ν P is a wff.
7. If P is a wff and ν is a variable, then ∃ν P is a wff.
well-formed formula (wff)
Section 9.3
free variable
bound variable
Some wffs have the important property that every occurrence of a variable
has a quantifier associated with it, to tell us whether the variable is treated
existentially or universally. When does a variable have an associated quantifier? We make this precise by defining the twin notions of free and bound
variables.
sentences
None of the wffs displayed above are sentences, since they all contain free variables. To make
a sentence out of the last of these, we can simply apply rule 6 to produce:
∀x ((Cube(x) ∧ Small(x)) → ∃y LeftOf(x, y))
Here all occurrences of the variable x have been bound by the quantifier ∀x.
This wff is a sentence since it has no free variables. It claims that for every
object x, if x is both a cube and small, then there is an object y such that x
is to the left of y. Or, to put it more naturally, every small cube is to the left
of something.
These rules can be applied over and over again to form more and more
complex wffs. So, for example, repeated application of the first rule to the wff
Home(max) will give us all of the following wffs:
¬Home(max)
¬¬Home(max)
¬¬¬Home(max)
. . .
Since none of these contains any variables, and so no free variables, they are
all sentences. They claim, as you know, that Max is not home, that it is not
the case that Max is not home, that it is not the case that it is not the case
that Max is not home, and so forth.
We have said that a sentence is a wff with no free variables. However, it
can sometimes be a bit tricky deciding whether a variable is free in a wff. For
example, there are no free variables in the wff,
∃x (Doctor(x) ∧ Smart(x))
However there is a free variable in the deceptively similar wff,
∃x Doctor(x) ∧ Smart(x)
Here the last occurrence of the variable x is still free. We can see why this is the
case by thinking about when the existential quantifier was applied in building
up these two formulas. In the first one, the parentheses show that the quantifier
was applied to the conjunction (Doctor(x) ∧ Smart(x)). As a consequence, all
occurrences of x in the conjunction were bound by this quantifier. In contrast,
the lack of parentheses shows that in building up the second formula, the
existential quantifier was applied to form ∃x Doctor(x), thus binding only the
occurrence of x in Doctor(x). This formula was then conjoined with Smart(x),
and so the latter's occurrence of x did not get bound.
scope of quantifier
Parentheses, as you can see from this example, make a big difference.
They are the way you can tell what the scope of a quantifier is, that is, which
variables fall under its influence and which dont. This example also shows
that a variable can occur both free and bound in a formula. It is really an
occurrence of a variable that is either free or bound, not the variable itself. In
the formula
∃x Doctor(x) ∧ Smart(x)
the last occurrence of x is free and the first two are bound.
Remember
1. Complex wffs are built from atomic wffs by means of truth-functional
connectives and quantifiers in accord with the rules on page 233.
2. When you append either quantifier ∀x or ∃x to a wff P, we say that
the quantifier binds all the free occurrences of x in P.
3. A sentence is a wff in which no variables occur free (unbound).
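The bookkeeping behind these clauses is a simple recursion over the structure of a wff. In the Python sketch below (ours; the nested-tuple encoding of wffs is an assumption of this illustration), a quantifier removes its variable from the free variables of its body:

```python
def free_vars(wff):
    """Return the set of variables occurring free in a wff."""
    op = wff[0]
    if op == "atom":                      # ("atom", predicate, *terms)
        # Variables are the single letters t through z.
        return {t for t in wff[2:] if t in "tuvwxyz"}
    if op == "not":                       # ("not", body)
        return free_vars(wff[1])
    if op in ("and", "or", "->", "<->"):  # ("and", left, right), etc.
        return free_vars(wff[1]) | free_vars(wff[2])
    if op in ("forall", "exists"):        # ("forall", variable, body)
        return free_vars(wff[2]) - {wff[1]}
    raise ValueError(f"unknown operator: {op}")

# Ex (Doctor(x) and Smart(x)): a sentence, no free occurrences.
s1 = ("exists", "x", ("and", ("atom", "Doctor", "x"),
                             ("atom", "Smart", "x")))
# (Ex Doctor(x)) and Smart(x): the occurrence in Smart(x) stays free.
s2 = ("and", ("exists", "x", ("atom", "Doctor", "x")),
             ("atom", "Smart", "x"))

print(free_vars(s1))  # set()
print(free_vars(s2))  # {'x'}
```

A wff is a sentence exactly when this function returns the empty set, which is how the two deceptively similar wffs discussed above come apart.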
Exercises
9.1
!
9.2
!
9.3
!
(Fixing some expressions) Open the sentence file Bernstein's Sentences. The expressions in this
list are not quite well-formed sentences of our language, but they can all be made sentences by
slight modification. Turn them into sentences without adding or deleting any quantifier symbols.
With some of them, there is more than one way to make them a sentence. Use Verify to make
sure your results are sentences and then submit the corrected file.
(Fixing some more expressions) Open the sentence file Schönfinkel's Sentences. Again, the expressions in this list are not well-formed sentences. Turn them into sentences, but this time,
do it only by adding quantifier symbols or variables, or both. Do not add any parentheses. Use
Verify to make sure your results are sentences and submit the corrected file.
(Making them true) Open Bozo's Sentences and Leibniz's World. Some of the expressions in this
file are not wffs, some are wffs but not sentences, and one is a sentence but false. Read and
assess each one. See if you can adjust each one to make it a true sentence with as little change
as possible. Try to capture the intent of the original expression, if you can tell what that was
(if not, don't worry). Use Verify to make sure your results are true sentences and then submit
your file.
Section 9.4
satisfaction
semantics of ∃
semantics of ∀
domain of discourse
A sentence of the form ∃x S(x) is true if and only if there is at least
one object that satisfies the constituent wff S(x). So ∃x (Cube(x) ∧ Small(x))
is true if there is at least one object that satisfies Cube(x) ∧ Small(x), that is,
if there is at least one small cube. Similarly, a sentence of the form ∀x S(x)
is true if and only if every object satisfies the constituent wff S(x). Thus
∀x (Cube(x) → Small(x)) is true if every object satisfies Cube(x) → Small(x),
that is, if every object either isn't a cube or is small.
This approach to satisfaction is conceptually simpler than some. A more
common approach is to avoid the introduction of new names by defining satisfaction for wffs with an arbitrary number of free variables. We will not need
this for specifying the meaning of quantifiers, but we will need it in some of the
more advanced sections. For this reason, we postpone the general discussion
until later.
In giving the semantics for the quantifiers, we have implicitly assumed that
there is a clear, non-empty collection of objects that we are talking about.
For example, if we encounter the sentence ∀x Cube(x) in Tarski's World, we
interpret this to be a claim about the objects depicted in the world window.
We do not judge it to be false just because the moon is not a cube. Similarly,
if we encounter the sentence ∀x (Even(x²) → Even(x)), we interpret this as a
claim about the natural numbers. It is true because every object in the domain
we are talking about, natural numbers, satisfies the constituent wff.
In general, sentences containing quantifiers are only true or false relative to
some domain of discourse or domain of quantification. Sometimes the intended
domain contains all objects there are. Usually, though, the intended domain
is a much more restricted collection of things, say the people in the room,
or some particular set of physical objects, or some collection of numbers. In
this book, we will specify the domain explicitly unless it is clear from context
what domain is intended. Also, in FOL we always assume that the domain
of discourse contains at least one object and that every individual constant
in the language stands for an object in that domain. (We could give up these
idealizations, but it would complicate things considerably without much gain
in realism.)
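Relative to a finite domain of discourse, these truth conditions amount to brute-force checks: ∃ behaves like Python's any and ∀ like all. A small sketch (ours, with a made-up miniature domain):

```python
# A tiny domain of discourse: each object is a dict of its properties.
domain = [
    {"name": "a", "shape": "cube", "size": "small"},
    {"name": "b", "shape": "tet",  "size": "small"},
    {"name": "c", "shape": "cube", "size": "small"},
]

cube  = lambda o: o["shape"] == "cube"
small = lambda o: o["size"] == "small"

# Ex (Cube(x) and Small(x)): some object satisfies the wff.
print(any(cube(o) and small(o) for o in domain))       # True

# Ax (Cube(x) -> Small(x)): every object satisfies the wff, i.e.
# every object either isn't a cube or is small.
print(all((not cube(o)) or small(o) for o in domain))  # True
```

Change the domain and both answers may change with it, which is the point: a quantified sentence is true or false only relative to a chosen domain of discourse.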
In the above discussion, we introduced some notation that we will use a
lot. Just as we often used P or Q to stand for a possibly complex sentence of
propositional logic, so too we will often use S(x) or P(y) to stand for a possibly
complex wff of first-order logic. Thus, P(y) may stand for a wff like:
∀x (LeftOf(x, y) ∨ RightOf(x, y))
When we then write, say, P(b), this stands for the result of replacing all the
free occurrences of y by the individual constant b:
∀x (LeftOf(x, b) ∨ RightOf(x, b))
Table 9.1: The game rules

Form       Your commitment   Player to move    Goal
P ∨ Q      TRUE              you               Choose one of P, Q that is true.
P ∨ Q      FALSE             Tarski's World    Choose one of P, Q that is true.
P ∧ Q      TRUE              Tarski's World    Choose one of P, Q that is false.
P ∧ Q      FALSE             you               Choose one of P, Q that is false.
∃x P(x)    TRUE              you               Choose some b that satisfies the wff P(x).
∃x P(x)    FALSE             Tarski's World    Choose some b that satisfies the wff P(x).
∀x P(x)    TRUE              Tarski's World    Choose some b that does not satisfy P(x).
∀x P(x)    FALSE             you               Choose some b that does not satisfy P(x).
¬P         either            –                 Replace ¬P by P and switch commitment.
P → Q      either            –                 Replace P → Q by ¬P ∨ Q and keep commitment.
P ↔ Q      either            –                 Replace P ↔ Q by (P → Q) ∧ (Q → P) and keep commitment.
are committed to there being some object that does not satisfy P(x). Tarski's
World will ask you to live up to your commitment by finding such an object.
We have now seen all the game rules. We summarize them in Table 9.1.
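The quantifier rows of the table can be read as a little procedure. The Python fragment below (ours, a simplification covering only the ∃ and ∀ rows of Table 9.1) works out who moves and what kind of object the mover should look for:

```python
def game_step(form, commitment, domain, satisfies):
    """One round of the game for Ax P(x) or Ex P(x).

    form is "forall" or "exists"; commitment is True or False;
    satisfies(b) says whether object b satisfies the wff P(x).
    Returns (player to move, the object that player would choose).
    """
    # Per Table 9.1: you move on exists/TRUE and forall/FALSE;
    # Tarski's World moves on exists/FALSE and forall/TRUE.
    you_move = (form == "exists") == commitment
    mover = "you" if you_move else "Tarski's World"
    want = (form == "exists")  # exists rows seek a satisfier,
                               # forall rows seek a non-satisfier
    choice = next((b for b in domain if satisfies(b) == want), None)
    return mover, choice

blocks = ["a", "b", "c"]
is_cube = lambda b: b in {"a", "c"}

# Committed to TRUE of Ex Cube(x): you must produce a cube.
print(game_step("exists", True, blocks, is_cube))   # ('you', 'a')

# Committed to TRUE of Ax Cube(x): Tarski's World hunts for a non-cube.
print(game_step("forall", True, blocks, is_cube))   # ("Tarski's World", 'b')
```

If the sought object exists, the mover wins the round; if `choice` comes back None, the mover's side of the game is lost, matching the intuition behind the table.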
You try it
................................................................
1. Open the files Game World and Game Sentences. Go through each sentence
and see if you can tell whether it is true or false. Check your evaluation.
2. Whether you evaluated the sentence correctly or not, play the game twice
for each sentence, first committed to true, then committed to false.
Make sure you understand how the game works at each step.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Exercises
9.4
"
9.5
!
9.6
!
9.7
"
If you skipped the You try it section, go back and do it now. This is an easy but important
exercise that will familiarize you with the game rules for the quantifiers. There is nothing you
need to turn in or submit.
(Evaluating sentences in a world) Open Peirce's World and Peirce's Sentences. There are 30
sentences in this file. Work through them, assessing their truth and playing the game when
necessary. Make sure you understand why they have the truth values they do. (You may need to
switch to the 2-D view for some of the sentences.) After you understand each of the sentences,
go back and make the false ones true by adding or deleting a negation sign. Submit the file
when the sentences all come out true in Peirce's World.
(Evaluating sentences in a world) Open Leibniz's World and Zorn's Sentences. The sentences
in this file contain both quantifiers and the identity symbol. Work through them, assessing
their truth and playing the game when necessary. After you're sure you understand why the
sentences get the values they do, modify the false ones to make them true. But this time you
can make any change you want except adding or deleting a negation sign.
In English we sometimes say things like Every Jason is envied, meaning that everyone named
Jason is envied. For this reason, students are sometimes tempted to write expressions like
∀b Cube(b) to mean something like Everything named b is a cube. Explain why this is not well
formed according to the grammatical rules on page 233.
Section 9.5
All Ps are Qs
Some Ps are Qs
No Ps are Qs
Some Ps are not Qs
Aristotelian forms
We will begin by looking at the first two of these forms, which we have
already discussed to a certain extent. These forms are translated as follows.
The form All Ps are Qs is translated as:
∀x (P(x) → Q(x))
whereas the form Some Ps are Qs is translated as:
∃x (P(x) ∧ Q(x))
Beginning students are often tempted to translate the latter more like the
former, namely as:
∃x (P(x) → Q(x))
This is in fact an extremely unnatural sentence of first-order logic. It is meaningful, but it doesn't mean what you might think. It is true just in case there
is an object which is either not a P or else is a Q, which is something quite
different from saying that some Ps are Qs. We can quickly illustrate this
difference with Tarski's World.
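A quick way to see the difference is to evaluate both candidate translations over a one-object domain. A Python sketch (ours), using a world that contains just one small tetrahedron and nothing else:

```python
# A world containing a single small tetrahedron.
domain = [{"shape": "tet", "size": "small"}]

cube  = lambda o: o["shape"] == "cube"
large = lambda o: o["size"] == "large"

# Ex (Cube(x) and Large(x)): "some cube is large" -- false here.
print(any(cube(o) and large(o) for o in domain))       # False

# Ex (Cube(x) -> Large(x)): true here, since the tetrahedron is not
# a cube and so satisfies the conditional vacuously.
print(any((not cube(o)) or large(o) for o in domain))  # True
```

This is the same behavior the You try it section below has you observe inside Tarski's World itself.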
You try it
................................................................
1. Use Tarski's World to build a world containing a single large cube and
nothing else.
2. Write the sentence ∃x (Cube(x) → Large(x)) in a new sentence file and
check to see that it is true in this world.
3. Now change the large cube into a small tetrahedron and check to see if
the sentence is true or false. Do you understand why the sentence is still
true? Even if you do, play the game twice, once committed to its being
false, once to its being true.
4. Add a second sentence that correctly expresses the claim that there is a
large cube. Make sure it is false in the current world but becomes true when
you add a large cube. Save your two sentences as Sentences Quantifier 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
The other two Aristotelian forms are translated similarly, but using a
negation. In particular No Ps are Qs is translated
∀x (P(x) → ¬Q(x))
Many students, and one of the authors, find it more natural to use the following, logically equivalent sentence:
¬∃x (P(x) ∧ Q(x))
Both of these assert that nothing that is a P is also a Q.
The last of the four forms, Some Ps are not Qs, is translated by
∃x (P(x) ∧ ¬Q(x))
which says there is something that is a P but not a Q.
The four Aristotelian forms are the very simplest sorts of sentences built
using quantifiers. Since many of the more complicated forms we talk about
later are elaborations of these, you should learn them well.
Remember
The four Aristotelian forms are translated as follows:
All Ps are Qs.         ∀x (P(x) → Q(x))
Some Ps are Qs.        ∃x (P(x) ∧ Q(x))
No Ps are Qs.          ∀x (P(x) → ¬Q(x))
Some Ps are not Qs.    ∃x (P(x) ∧ ¬Q(x))
Exercises
9.8
!
9.9
!
If you skipped the You try it section, go back and do it now. Submit the file Sentences
Quantifier 1.
(Building a world) Open Aristotle's Sentences. Each of these sentences is of one of the four
Aristotelian forms. Build a single world where all the sentences in the file are true. As you
work through the sentences, you will find yourself successively modifying the world. Whenever
you make a change in the world, you had better go back and check that you haven't made
any of the earlier sentences false. Then, when you are finished, verify that all the sentences are
really true and submit your world.
9.10
"
9.11
!|"
(Common translation mistakes) Open Edgar's Sentences and evaluate them in Edgar's World.
Make sure you understand why each of them has the truth value it does. Play the game if
any of the evaluations surprise you. Which of these sentences would be a good translation of
There is a tetrahedron that is large? (Clearly this English sentence is false in Edgar's World,
since there are no tetrahedra at all.) Which sentence would be a good translation of There is
a cube between a and b? Which would be a good translation of There is a large dodecahedron?
Express in clear English the claim made by each sentence in the file and turn in your answers
to your instructor.
(Common mistakes, part 2) Open Allan's Sentences. In this file, sentences 1 and 4 are the
correct translations of Some dodecahedron is large and All tetrahedra are small, respectively.
Let's investigate the logical relations between these and sentences 2 and 3.
1. Construct a world in which sentences 2 and 4 are true, but sentences 1 and 3 are false.
Save it as World 9.11.1. This shows that sentence 1 is not a consequence of 2, and
sentence 3 is not a consequence of 4.
2. Can you construct a world in which sentence 3 is true and sentence 4 is false? If so, do
so and save it as World 9.11.2. If not, explain why you can't and what this shows.
3. Can you construct a world in which sentence 1 is true and sentence 2 is false? If so, do
so and save it as World 9.11.3. If not, explain why not.
Submit any world files you constructed and turn in any explanations to your instructor.
9.12
!
(Describing a world) Open Reichenbach's World 1. Start a new sentence file where you will
describe some features of this world using sentences of the simple Aristotelian forms. Check
each of your sentences to see that it is indeed a sentence and that it is true in this world.
1. Use your first sentence to describe the size of all the tetrahedra.
2. Use your second sentence to describe the size of all the cubes.
3. Use your third sentence to express the truism that every dodecahedron is either small,
medium, or large.
4. Notice that some dodecahedron is large. Express this fact.
5. Observe that some dodecahedron is not large. Express this.
6. Notice that some dodecahedron is small. Express this fact.
7. Observe that some dodecahedron is not small. Express this.
8. Notice that some dodecahedron is neither large nor small. Express this.
9. Express the observation that no tetrahedron is large.
10. Express the fact that no cube is large.
Now change the sizes of the objects in the following way: make one of the cubes large, one
of the tetrahedra medium, and all the dodecahedra small. With these changes, the following
should come out false: 1, 2, 4, 7, 8, and 10. If not, then you have made an error in describing
the original world. Can you figure out what it is? Try making other changes and see if your
sentences have the expected truth values. Submit your sentence file.
9.13
!
Assume we are working in an extension of the first-order language of arithmetic with the
additional predicates Even(x) and Prime(x), meaning, respectively, x is an even number and
x is a prime number. Create a sentence file in which you express the following claims:
1. Every even number is prime.
2. No even number is prime.
3. Some prime is even.
4. Some prime is not even.
5. Every prime is either odd or equal to 2.
[Note that you should assume your domain of discourse consists of the natural numbers, so
there is no need for a predicate Number(x). Also, remember that 2 is not a constant in the
language, so it must be expressed using + and 1.]
9.14
!
(Name that object) Open Maigret's World and Maigret's Sentences. The goal is to try to figure
out which objects have names, and what they are. You should be able to figure this out from
the sentences, all of which are true. Once you have come to your conclusion, assign the six
names to objects in the world in such a way that all the sentences do indeed evaluate as true.
Submit your modified world.
Section 9.6
existential
noun phrases
universal
noun phrases
We have put parentheses around the first three predicates to indicate that
they were all part of the translation of the subject noun phrase. But this is
not really necessary.
Universal noun phrases are those that begin with determiners like every,
each, and all. These are usually translated with the universal quantifier.
Sometimes noun phrases beginning with no and with any are also translated
with the universal quantifier. Two of our four Aristotelian forms involve
universal noun phrases, so we also know the general pattern here: universal
noun phrases are usually translated using ∀, frequently together with →.
Let's consider the sentence Every small dog that is at home is happy. This
claims that everything with a complex property, that of being a small dog
at home, has another property, that of being happy. This suggests that the
overall sentence has the form All As are Bs. But in this case, to express the
complex property that fills the A position, we will use a conjunction. Thus
it would be translated as
∀x [(Small(x) ∧ Dog(x) ∧ Home(x)) → Happy(x)]
noun phrases in
non-subject positions
In this case, the parentheses are not optional. Without them the expression
would not be well formed.
In both of the above examples, the complex noun phrase appeared at
the beginning of the English sentence, much like the quantifier in the fol
translation. Often, however, the English noun phrase will appear somewhere
else in the sentence, say as the direct object, and in these cases the fol
translation may be ordered very differently from the English sentence. For
example, the sentence Max owns a small, happy dog might be translated:
∃x [(Small(x) ∧ Happy(x) ∧ Dog(x)) ∧ Owns(max, x)]
which says there is a small, happy dog that Max owns. Similarly, the English
sentence Max owns every small, happy dog would end up turned around like
this:
∀x [(Small(x) ∧ Happy(x) ∧ Dog(x)) → Owns(max, x)]
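The contrast between the two translations can be checked against a small invented domain. The sketch below (ours, not part of Tarski's World) shows why the existential version uses a conjunction while the universal version uses a conditional:

```python
# Invented domain: dogs with properties, plus an ownership relation for max.
dogs = [
    {"name": "rex",  "small": True, "happy": True},
    {"name": "fido", "small": True, "happy": True},
]
owns = {("max", "rex")}  # Max owns rex but not fido

def small_happy_dog(d):
    return d["small"] and d["happy"]

# "Max owns a small, happy dog": some small happy dog is owned by Max.
max_owns_a_small_happy_dog = any(
    small_happy_dog(d) and ("max", d["name"]) in owns for d in dogs)

# "Max owns every small, happy dog": each small happy dog is owned by Max.
max_owns_every_small_happy_dog = all(
    ("max", d["name"]) in owns for d in dogs if small_happy_dog(d))

print(max_owns_a_small_happy_dog)      # True: rex qualifies
print(max_owns_every_small_happy_dog)  # False: fido is a small, happy dog Max doesn't own
```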
You will be given lots of practice translating complex noun phrases in the
exercises that follow. First, however, we discuss some troublesome cases.
Remember
1. Translations of complex quantified noun phrases frequently employ
conjunctions of atomic predicates.
2. The order of an English sentence may not correspond to the order of
its fol translation.
vacuously true
generalizations
∀y (Tet(y) → Small(y))
which asserts that every tetrahedron is small. But imagine that it has been
asserted about a world in which there are no tetrahedra. In such a world the
sentence is true simply because there are no tetrahedra at all, small, medium,
or large. Consequently, it is impossible to find a counterexample, a tetrahedron
which is not small.
What strikes students as especially odd are examples like
∀y (Tet(y) → Cube(y))
On the face of it, such a sentence looks contradictory. But we see that if it is
asserted about a world in which there are no tetrahedra, then it is in fact true.
But that is the only way it can be true: if there are no tetrahedra. In other
words, the only way this sentence can be true is if it is vacuously true. Let's
call generalizations with this property inherently vacuous. Thus, a sentence of
the form ∀x (P(x) → Q(x)) is inherently vacuous if the only worlds in which it
is true are those in which ¬∃x P(x) is true.
inherently vacuous
generalizations
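A quick finite-domain check makes the point concrete. The two test worlds here are our own invented examples:

```python
def all_tets_are_cubes(world):
    # The generalization is true exactly when there is no counterexample,
    # i.e. no tetrahedron that fails to be a cube.
    return all(obj["cube"] for obj in world if obj["tet"])

with_tets    = [{"tet": True,  "cube": False}, {"tet": False, "cube": True}]
without_tets = [{"tet": False, "cube": True},  {"tet": False, "cube": False}]

print(all_tets_are_cubes(with_tets))     # False: the tetrahedron is a counterexample
print(all_tets_are_cubes(without_tets))  # True, vacuously: there are no tetrahedra
```

Since no object can be both a tetrahedron and a cube, a world with no tetrahedra is the only kind of world in which this generalization comes out true: it is inherently vacuous.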
You try it
................................................................
!
1. Open Dodgson's Sentences. Note that the first sentence says that every
tetrahedron is large.
2. Open Peano's World. Sentence 1 is clearly false in this world, since the
small tetrahedron is a counterexample to the universal claim. What this
means is that if you play the game committed to the falsity of this claim,
then when Tarski's World asks you to pick an object you will be able to
pick the small tetrahedron and win the game. Try this.
4. Now open Peirce's World. Verify that sentence 1 is again false, this time
because there are three counterexamples. (Now if you play the game committed to the falsity of the sentence, you will have three different winning
moves when asked to pick an object: you can pick any of the small tetrahedra and win.)
5. Delete all three counterexamples, and evaluate the claim. Is the result
what you expected? The generalization is true, because there are no counterexamples to it. It is what we called a vacuously true generalization,
since there are no objects that satisfy the antecedent. That is, there are
no tetrahedra at all, small, medium, or large. Confirm that all of sentences
1–3 are vacuously true in the current world.
6. Two more vacuously true sentences are given in sentences 4 and 5. However, these sentences are different in another respect. Each of the first three
sentences could have been non-vacuously true in a world, but these latter
two can only be true in worlds containing no tetrahedra. That is, they are
inherently vacuous.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
In everyday conversation, it is rare to encounter a vacuously true generalization, let alone an inherently vacuous generalization. When we do find
either of these, we feel that the speaker has misled us. For example, suppose a
professor claims Every freshman who took the class got an A, when in fact
no freshman took her class. Here we wouldn't say that she lied, but we would
certainly say that she misled us. Her statement typically carries the conversational implicature that there were freshmen in the class. If there were no
freshmen, then that's what she would have said if she were being forthright.
Inherently vacuous claims are true only when they are misleading, so they
strike us as intuitively false.
Another source of confusion concerns the relationship between the following two Aristotelian sentences:
Some Ps are Qs
All Ps are Qs
conversational
implicature
Students often have the intuition that the first should contradict the second.
After all, why would you say that some student got an A if every student got
an A? If this intuition were right, then the correct translation of Some Ps
are Qs would not be what we have suggested above, but rather
∃x (P(x) ∧ Q(x)) ∧ ¬∀x (P(x) → Q(x))
It is easy to see, however, that the second conjunct of this sentence does not
represent part of the meaning of the sentence. It is, rather, another example
of a conversational implicature. It makes perfectly good sense to say Some
student got an A on the exam. In fact, every student did. If the proposed
conjunction were the right form of translation, this amplification would be
contradictory.
Remember
1. All Ps are Qs does not imply, though it may conversationally suggest,
that there are some Ps.
2. Some Ps are Qs does not imply, though it may conversationally suggest, that not all Ps are Qs.
Exercises
9.15
!
If you skipped the You try it section, go back and do it now. Submit the file Sentences
Vacuous 1.
9.16
!
(Translating existential noun phrases) Start a new sentence file and enter translations of the
following English sentences. Each will use the symbol exactly once. None will use the symbol
. As you go, check that your entries are well-formed sentences. By the way, you will find that
many of these English sentences are translated using the same first-order sentence.
1. Something is large.
2. Something is a cube.
3. Something is a large cube.
4. Some cube is large.
5. Some large cube is to the left of b.
6. A large cube is to the left of b.
7. b has a large cube to its left.
8. b is to the right of a large cube. [Hint: This translation should be almost the same as
the last, but it should contain the predicate symbol RightOf.]
9. Something to the left of b is in back of c.
10. A large cube to the left of b is in back of c.
11. Some large cube is to the left of b and in back of c.
12. Some dodecahedron is not large.
13. Something is not a large dodecahedron.
14. Its not the case that something is a large dodecahedron.
15. b is not to the left of a cube. [Warning: This sentence is ambiguous. Can you think of
two importantly different translations? One starts with ∃, the other starts with ¬. Use
the second of these for your translation, since this is the most natural reading of the
English sentence.]
Now let's check the translations against a world. Open Montague's World.
Notice that all the English sentences above are true in this world. Check that all your
translations are also true. If not, you have made a mistake. Can you figure out what is
wrong with your translation?
Move the large cube to the back right corner of the grid. Observe that English sentences
5, 6, 7, 8, 10, 11, and 15 are now false, while the rest remain true. Check that the same
holds of your translations. If not, you have made a mistake. Figure out what is wrong
with your translation and fix it.
Now make the large cube small. The English sentences 1, 3, 4, 5, 6, 7, 8, 10, 11, and
15 are false in the modified world, the rest are true. Again, check that your translations
have the same truth values. If not, figure out what is wrong.
Finally, move c straight back to the back row, and make the dodecahedron large. All the
English sentences other than 1, 2, and 13 are false. Check that the same holds for your
translations. If not, figure out where you have gone wrong and fix them.
When you are satisfied that your translations are correct, submit your sentence file.
9.17
!
(Translating universal noun phrases) Start a new sentence file, and enter translations of the
following sentences. This time each translation will contain exactly one ∀ and no ∃.
1. All cubes are small.
2. Each small cube is to the right of a.
3. a is to the left of every dodecahedron.
4. Every medium tetrahedron is in front of b.
5. Each cube is either in front of b or in back of a.
6. Every cube is to the right of a and to the left of b.
7. Everything between a and b is a cube.
8. Everything smaller than a is a cube.
9. All dodecahedra are not small. [Note: Most people find this sentence ambiguous. Can
you find both readings? One starts with ∀, the other with ¬. Use the former, the one
that means all the dodecahedra are either medium or large.]
10. No dodecahedron is small.
11. a does not adjoin everything. [Note: This sentence is ambiguous. We want you to
interpret it as a denial of the claim that a adjoins everything.]
12. a does not adjoin anything. [Note: These last two sentences mean different things,
though they can both be translated using ¬, ∀, and Adjoins.]
13. a is not to the right of any cube.
14. (#) If something is a cube, then it is not in the same column as either a or b. [Warning:
While this sentence contains the noun phrase something, it is actually making a
universal claim, and so should be translated with ∀. You might first try to paraphrase
it using the English phrase every cube.]
15. (#) Something is a cube if and only if it is not in the same column as either a or b.
Now let's check the translations in some worlds.
Open Claire's World. Check to see that all the English sentences are true in this world,
then make sure the same holds of your translations. If you have made any mistakes, fix
them.
Adjust Claire's World by moving a directly in front of c. With this change, the English
sentences 2, 6, and 12–15 are false, while the rest are true. Make sure that the same holds
of your translations. If not, try to figure out what is wrong and fix it.
Next, open Wittgenstein's World. Observe that the English sentences 2, 3, 7, 8, 11, 12,
and 13 are true, but the rest are false. Check that the same holds for your translations.
If not, try to fix them.
Finally, open Venn's World. English sentences 2, 4, 7, and 11–14 are true; does the same
hold for your translations?
When you are satisfied that your translations are correct, submit your sentence file.
9.18
!
(Translation) Open Leibniz's World. This time, we will translate some sentences while looking
at the world they are meant to describe.
Start a new sentence file, and enter translations of the following sentences. Each of the
English sentences is true in this world. As you go, check to make sure that your translation
is indeed a true sentence.
1.
2.
3.
4.
5.
Now let's change the world so that none of the English sentences is true. We can do this
as follows. First change b into a medium cube. Next, delete the leftmost tetrahedron and
move b to exactly the position just vacated by the late tetrahedron. Finally, add a small
cube to the world, locating it exactly where b used to sit. If your answers to 1–5 are
correct, all of the translations should now be false. Verify that they are.
Make various changes to the world, so that some of the English sentences come out true
and some come out false. Then check to see that the truth values of your translations
track the truth values of the English sentences.
9.19
!!
9.20
"!
Start a new sentence file and translate the following into fol using the symbols from Table 1.2,
page 30. Note that all of your translations will involve quantifiers, though this may not be
obvious from the English sentences. (Some of your translations will also require the identity
predicate.)
1. People are not pets.
2. Pets are not people.
3. Scruffy was not fed at either 2:00 or 2:05. [Remember, Fed is a ternary predicate.]
4. Claire fed Folly at some time between 2:00 and 3:00.
5. Claire gave a pet to Max at 2:00.
6. Claire had only hungry pets at 2:00.
7. Of all the students, only Claire was angry at 3:00.
8. No one fed Folly at 2:00.
9. If someone fed Pris at 2:00, they were angry.
10. Whoever owned Pris at 2:00 was angry five minutes later.
Using Table 1.2, page 30, translate the following into colloquial English.
1. ∃t Gave(claire, folly, max, t)
2. ∀x (Pet(x) → Hungry(x, 2:00))
9.21
"!!
Translate the following into fol, introducing names, predicates, and function symbols as
needed. As usual, explain your predicates and function symbols, and any shortcomings in
your translations. If you assume a particular domain of discourse, mention that as well.
1. Only the brave know how to forgive.
2. No man is an island.
3. I care for nobody, not I,
If no one cares for me.
4. Every nation has the government it deserves.
5. There are no certainties, save logic.
6. Misery (that is, a miserable person) loves company.
7. All that glitters is not gold.
8. There was a jolly miller once
Lived on the River Dee.
9. If you praise everybody, you praise nobody.
10. Something is rotten in the state of Denmark.
Section 9.7
9.22
"
Assume that we have expanded the blocks language to include the function symbols fm, bm, lm
and rm described earlier. Then the following formulas would all be sentences of the language:
1. ∃y (fm(y) = e)
2. ∃x (lm(x) = b ∧ x ≠ b)
3. ∀x Small(fm(x))
4. ∀x (Small(x) → fm(x) = x)
5. ∀x (Cube(x) → Dodec(lm(x)))
6. ∀x (rm(lm(x)) = x)
7. ∀x (fm(bm(x)) = x)
8. ∃x (fm(x) ≠ x ∧ Tet(fm(x)))
9. ∀x (lm(x) = b → SameRow(x, b))
10. ∀y (lm(fm(y)) = fm(lm(y)) → Small(y))
Fill in the following table with trues and falses according to whether the indicated sentence
is true or false in the indicated world. Since Tarski's World does not understand the function
symbols, you will not be able to check your answers. We have filled in a few of the entries for
you. Turn in the completed table to your instructor.
        Malcev's   Bolzano's   Boole's   Wittgenstein's
  1.                                     false
  2.                                     false
  3.                                     true
  4.                                     true
  5.
  6.
  7.
  8.
  9.
 10.
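Although Tarski's World does not evaluate function symbols, they are easy to model directly. The sketch below is our own invented three-block grid, on our reading of the earlier description of these symbols: fm(x) is the frontmost block in x's column and lm(x) the leftmost block in x's row:

```python
# Blocks on a grid: larger row number = further front (a convention of this sketch).
blocks = {
    "a": {"row": 1, "col": 1, "small": True},
    "b": {"row": 3, "col": 1, "small": False},
    "c": {"row": 3, "col": 2, "small": True},
}

def fm(name):  # frontmost block in the same column as `name`
    col = blocks[name]["col"]
    return max((n for n in blocks if blocks[n]["col"] == col),
               key=lambda n: blocks[n]["row"])

def lm(name):  # leftmost block in the same row as `name`
    row = blocks[name]["row"]
    return min((n for n in blocks if blocks[n]["row"] == row),
               key=lambda n: blocks[n]["col"])

# Sentence 3 above, evaluated in this world: is fm(x) small for every x?
print(all(blocks[fm(n)]["small"] for n in blocks))
# prints: False, since fm(a) = b and b is not small
```

Note that a function symbol, unlike a predicate, always denotes exactly one object: fm and lm here are total functions on the domain.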
Figure 9.1: A family tree, with heights:
Burton (5' 8"), Edith (5' 11"), John (6' 2"), Ellen (5' 4"), Archie (5' 8"), Addie (5' 3"), William (5' 8"), Anna (5' 7")
Kenneth (5' 10"), Evelyn (5' 7"), Jim (6' 0"), Helen (5' 6")
Jon (6' 5"), Mary (5' 6")
Melanie (5' 2"), J.R. (6' 2"), Claire (5' 9")
9.23
"
9.24
"
Consider the first-order language with function symbols mother and father, plus names for each
of the people shown in the family tree in Figure 9.1. Here are some atomic wffs, each with a
single free variable x. For each, pick a person for x that satisfies the wff, if you can. If there is
no such person indicated in the family tree, say so.
1. mother(x) = ellen
2. father(x) = jon
3. mother(father(x)) = mary
4. father(mother(x)) = john
5. mother(father(x)) = addie
6. father(mother(father(x))) = john
7. father(father(mother(x))) = archie
8. father(father(jim)) = x
9. father(father(mother(claire))) = x
10. mother(mother(mary)) = mother(x)
Again using Figure 9.1, figure out which of the sentences listed below are true. Assume that
the domain of discourse consists of the people listed in the family tree.
1. ∃x Taller(x, mother(x))
2. ∀x Taller(father(x), mother(x))
3. ∃y Taller(mother(mother(y)), mother(father(y)))
4. ∀z [z ≠ father(claire) → Taller(father(claire), z)]
5. ∃x [Taller(x, father(x)) ∧ Taller(x, claire)]
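Function symbols like mother and father can be modeled as lookup tables, and satisfaction of a wff by an object checked mechanically. The miniature tree below is invented for illustration; it is not the tree of Figure 9.1:

```python
# A miniature family tree (invented; heights in inches).
mother = {"claire": "mary", "mary": "ellen"}
father = {"claire": "jon"}
height = {"claire": 69, "mary": 66, "ellen": 64, "jon": 77}

def taller(x, y):
    return height[x] > height[y]

# Which people satisfy the wff Taller(x, mother(x))?
for x in ["claire", "mary"]:
    print(x, taller(x, mother[x]))
# prints: claire True / mary True
```

A wff with a free variable, like Taller(x, mother(x)), is not true or false outright; it is satisfied (or not) by each object we try in place of x, which is exactly what the loop checks.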
9.25
!
Assume you are working in an extension of the first-order language of arithmetic with the
additional predicates Even(x) and Prime(x). Express the following in this language, explicitly
using the function symbol ×, as in z × z, rather than z². Note that you do not have a predicate
Square(x).
1. No square is prime.
2. Some square is odd.
3. The square of any prime is prime.
4. The square of any prime other than 2 is odd.
5. The square of any number greater than 1 is greater than the number itself.
Submit your sentence file.
Section 9.8
Alternative notation
The notation we have been using for the quantifiers is currently the most
popular. An older notation that is still in some use employs (x) for ∀x. Thus,
for example, in this notation our
∀x [Tet(x) → Small(x)]
would be written:
(x) [Tet(x) → Small(x)]
Another notation that is occasionally used exploits the similarity between
universal quantification and conjunction by writing ⋀x instead of ∀x. In this
notation our sentence would be rendered:
⋀x [Tet(x) → Small(x)]
Finally, you will sometimes encounter the universal quantifier written Πx, as
in:
Πx [Tet(x) → Small(x)]
Similar variants of ∃x are in use. One version writes (∃x) or (Ex). Other
versions write ⋁x or Σx. Thus the following are notational variants of one
another.
∃x [Cube(x) ∧ Large(x)]
(Ex)[Cube(x) ∧ Large(x)]
⋁x [Cube(x) ∧ Large(x)]
Σx [Cube(x) ∧ Large(x)]
Remember
The following table summarizes the alternative notations.
Our notation    Common equivalents
¬P              ~P, −P, !P, Np
P ∧ Q           P&Q, P&&Q, P·Q, PQ, Kpq
P ∨ Q           P | Q, P ∥ Q, Apq
P → Q           P ⊃ Q, Cpq
P ↔ Q           P ≡ Q, Epq
∀x S(x)         (x)S(x), ⋀x S(x), Πx S(x)
∃x S(x)         (∃x)S(x), (Ex)S(x), ⋁x S(x), Σx S(x)
Exercises
9.26
!
(Overcoming dialect differences) The following are all sentences of fol. But they're in different
dialects. Start a new sentence file in Tarski's World and translate them into our dialect.
1. (x)(P(x) ⊃ Q(x))
2. ∃y ((P(y) ∨ Q(y)) & R(y))
3. ⋀x P(x) → ⋁x P(x)
Chapter 10
first-order logic
second-order
quantifiers
Section 10.1
quantified sentences
and tautological
consequence
1.
∀x (Cube(x) → Small(x))
∀x Cube(x)
∀x Small(x)
2.
∀x Cube(x)
∀x Small(x)
∀x (Cube(x) ∧ Small(x))
The first of these is valid because if every cube is small, and everything is
a cube, then everything is small. The second is valid because if everything
is a cube, and everything is small, then everything is a small cube. But are
these arguments tautologically valid? Or, to put it another way, can we simply
ignore the quantifiers appearing in these arguments and apply the principles
of modus ponens and ∧ Intro?
It doesn't take long to see that ignoring quantifiers doesn't work. For example, neither of the following arguments is valid, tautologically or otherwise:
3.
∃x (Cube(x) → Small(x))
∃x Cube(x)
∃x Small(x)
4.
∃x Cube(x)
∃x Small(x)
∃x (Cube(x) ∧ Small(x))
tautology and
quantification
∃x Cube(x) ∨ ∃x ¬Cube(x)
But is this sentence a tautology, true simply in virtue of the meanings of
the truth-functional connectives? Again, the answer is no, as we can see by
considering what happens when we replace the existential quantifier with a
universal quantifier:
∀x Cube(x) ∨ ∀x ¬Cube(x)
This sentence says that either everything is a cube or everything is not a
cube, which of course is false in any world inhabited by a mixture of cubes
and non-cubes.
Are there no tautologies in a language containing quantifiers? Of course
there are, but you don't find them by pretending the quantifiers simply aren't
there. For example, the following sentence is a tautology:
∀x Cube(x) ∨ ¬∀x Cube(x)
This sentence, unlike the previous one, is an instance of the law of excluded
middle. It says that either everything is a cube or it's not the case that everything is a cube, and that's going to be true so long as the constituent
sentence ∀x Cube(x) has a definite truth value. It would hold equally well if
the constituent sentence were ∃x Cube(x), a fact you could recognize even if
you didn't know exactly what this sentence meant.
Recall that if we have a tautology and replace its atomic sentences by
complex sentences, the result is still a tautology, and hence also a logical
truth. This holds as long as the things we are substituting are sentences that
have definite truth values (whether true or not). We can use this observation
to discover a large number of quantified sentences that are logical truths.
Consider the following tautology:
(A ∧ B) → (B ∨ A)
(A ∧ B) → C
A → (B ∨ C)
(A ∧ B) → (C ∨ D)
(A ∧ B) → (B ∨ C)
(A ∧ B) → (B ∨ A)
For instance, we can obtain Phred from the fourth of these by substituting as
follows:
Replace A by (∀y (P(y) → R(y)) ∨ ∃x (P(x) ∧ Q(x)))
Replace B by ∃x (P(x) ∧ Q(x))
Replace C by ∀y (P(y) → R(y)).
But of the seven candidates for substitution, only the last, (A ∧ B) →
(B ∨ A), is a tautology. If a sentence can be obtained from so many different formulas by substitution, how do we know which one to look at to see
if it is a tautology?
Here is a simple method for solving this problem: The basic idea is that
for purposes of testing whether a sentence is a tautology, we must treat any
quantified constituent of the sentence as if it is atomic. We don't look inside
quantified parts of the sentence. But we do pay attention to all of the truth-functional connectives that are not in the scope of a quantifier. We'll describe
truth-functional form
truth-functional
form algorithm
Let's see how we would apply this algorithm to another sentence. See if you
understand each of the following steps, which begin with a quantified sentence
of fol, and end with that sentence's truth-functional form. Pay particular
attention to steps 4 and 5. In step 4, do you understand why the label A
is used again? In step 5, do you understand why ∃y Small(y) gets labeled C
rather than B?
Section 10.1
Sentence                              t.f. form
∃x Cube(x) ∨ ¬∃x Cube(x)              A ∨ ¬A
(∀x Cube(x) ∧ Cube(a)) → Cube(a)      (A ∧ B) → B
∀x Cube(x) → ∃y Tet(y)                A → B
∀x (Cube(x) ∨ ¬Cube(x))               A
A
B
C
This shows that argument 3 is not an instance of → Elim. But when we apply
the algorithm to a deceptively similar argument:
∀x Cube(x) → ∀x Small(x)    [A → B]
∀x Cube(x)                  [A]
∀x Small(x)                 [B]
we see that this argument is indeed an instance of modus ponens:
A → B
A
B
The Taut Con procedure of Fitch uses the truth-functional form algorithm, so you can use it to check whether a quantified sentence is a tautology,
or whether it is a tautological consequence of other sentences.
The truth-functional form algorithm allows us to apply all of the concepts
of propositional logic to sentences and arguments containing quantifiers. But
we have also encountered several examples of logical truths that are not tautologies, and logically valid arguments that are not tautologically valid. In the
next section we look at these.
Remember
1. Use the truth-functional form algorithm to determine the truth-functional form of a sentence or argument containing quantifiers.
2. The truth-functional form of a sentence shows how the sentence is
built up from atomic and quantified sentences using truth-functional
connectives.
3. A quantified sentence is a tautology if and only if its truth-functional
form is a tautology.
4. Every tautology is a logical truth, but among quantified sentences
there are many logical truths that are not tautologies.
5. Similarly, there are many logically valid arguments of fol that are not
tautologically valid.
Exercises
10.1
"
For each of the following, use the truth-functional form algorithm to annotate the sentence and
determine its form. Then classify the sentence as (a) a tautology, (b) a logical truth but not
a tautology, or (c) not a logical truth. (If your answer is (a), feel free to use the Taut Con
routine in Fitch to check your answer.)
1. ∀x x = x
2. ∀x Cube(x) → Cube(a)
3. Cube(a) → ∃x Cube(x)
4. ∀x (Cube(x) → Small(x)) → ∀x (Small(x) → Cube(x))
5. ∀v (Cube(v) → Small(v)) ∨ ¬∀v (Cube(v) → Small(v))
6. ∀x Cube(x) ∨ ∃x ¬Cube(x)
7. [∀z (Cube(z) → Large(z)) ∧ Cube(b)] → Large(b)
8. ∀x Cube(x) → (∀x Cube(x) ∨ ∃y Dodec(y))
9. (∀x Cube(x) ∧ ∃y Dodec(y)) → ∀x Cube(x)
10. [(∀u Cube(u) → ∀u Small(u)) ∧ ¬∀u Small(u)] → ¬∀u Cube(u)
Turn in your answers by filling in a table of the following form:
Annotated sentence
Truth-functional form
a/b/c
1.
⋮
In the following six exercises, use the truth-functional form algorithm to annotate the argument. Then
write out its truth-functional form. Finally, assess whether the argument is (a) tautologically valid, (b)
logically but not tautologically valid, or (c) invalid. Feel free to check your answers with Taut Con.
(Exercises 10.6 and 10.7 are, by the way, particularly relevant to the proof of the Completeness Theorem
for F given in Chapter 19.)
10.2
"
Cube(a) ∧ Cube(b)
Small(a) ∧ Large(b)
∃x (Cube(x) ∧ Small(x)) ∧ ∃x (Cube(x) ∧ Large(x))
10.3
"
∀x Cube(x) → ∃y Small(y)
¬∃y Small(y)
¬∀x Cube(x)
10.4
"
∀x Cube(x) → ∃y Small(y)
∃y Small(y)
∀x Cube(x)
10.5
"!
10.6
"!
10.7
"!
[In 10.6 and 10.7, we could think of the first premise as a way of introducing a new constant, c, by
means of the assertion: Let the constant c name a large cube, if there are any; otherwise, it may name
any object. Sentences of this sort are called Henkin witnessing axioms, and are put to important use
in proving completeness for F. The arguments show that if a constant introduced in this way ends up
naming a tetrahedron, it can only be because there aren't any large cubes.]
Section 10.2
first-order consequence
first-order validity
Propositional logic        First-order logic    General notion
Tautology                  ??                   Logical truth
Tautological consequence   ??                   Logical consequence
Tautological equivalence   ??                   Logical equivalence
One option would be to use the terms first-order logical truth, first-order
logical consequence, and first-order logical equivalence. But these are just too
much of a mouthful for repeated use, so we will abbreviate them. Instead of
first-order logical consequence, we will use first-order consequence or simply
FO consequence, and for first-order logical equivalence, we'll use first-order
(or FO) equivalence. We will not, however, use first-order truth for first-order
logical truth, since this might suggest that we are talking about a true (but
not logically true) sentence of first-order logic.
For first-order logical truth, it is standard to use the term first-order validity. This may surprise you, since so far we've only used valid to apply
to arguments, not sentences. This is a slight terminological inconsistency, but
it shouldn't cause any problems so long as you're aware of it. In first-order
logic, we use valid to apply to both sentences and arguments: to sentences
that can't be false, and to arguments whose conclusions can't be false if their
premises are true. Our completed table, then, looks like this:
Propositional logic        First-order logic    General notion
Tautology                  FO validity          Logical truth
Tautological consequence   FO consequence       Logical consequence
Tautological equivalence   FO equivalence       Logical equivalence
So what do we mean by the notions of first-order validity, first-order consequence and first-order equivalence? These concepts are meant to apply to
those logical truths, consequences, and equivalences that are such solely in
virtue of the truth-functional connectives, the quantifiers, and the identity
symbol. Thus, for purposes of determining first-order consequence, we ignore
the specific meanings of names, function symbols, and predicates other than
identity.
There are two reasons for treating identity along with the quantifiers and
connectives, rather than like any other predicate. The first is that almost
all first-order languages use =. Other predicates, by contrast, vary from one
first-order language to another. For example, the blocks language uses the
binary predicate LeftOf, while the language of set theory uses ∈, and the
language of arithmetic uses <. This makes it a reasonable division of labor to
try first to understand the logic implicit in the connectives, quantifiers, and
identity, without regard to the meanings of the other predicates, names, and
function symbols. The second reason is that the identity predicate is crucial for
expressing many quantified noun phrases of English. For instance, well soon
see how to express things like at least three tetrahedra and at most four cubes,
but to express these in fol we need identity in addition to the quantifiers
and . There is a sense in which identity and the quantifiers go hand in hand.
If we can recognize that a sentence is logically true without knowing the
meanings of the names or predicates it contains (other than identity), then
we'll say the sentence is a first-order validity. Let's consider some examples
from the blocks language:
identity
∀x SameSize(x, x)
∀x Cube(x) → Cube(b)
(Cube(b) ∧ b = c) → Cube(c)
(Small(b) ∧ SameSize(b, c)) → Small(c)
All of these are arguably logical truths of the blocks language, but only
the middle two are first-order validities. One way to see this is to replace
the familiar blocks language predicates with nonsensical predicates, like those
used in Lewis Carroll's famous poem Jabberwocky.1 The results would look
1 The full text of Jabberwocky can be found at http://en.wikipedia.org/wiki/
Jabberwocky. The first stanza is: 'Twas brillig, and the slithy toves / Did gyre and
gimble in the wabe: / All mimsy were the borogoves, / And the mome raths outgrabe.
using nonsense predicates to test for FO validity
Section 10.2
[Figure: four individuals, labeled Moriarty (a), Scrooge (b), Romeo (c), and Juliet (d)]
first-order counterexamples
∀x ¬R(x, a)
∀x ¬R(b, x)
R(c, d)
R(a, b)
Next, we'll describe a specific interpretation of R (and the names a, b,
c, and d), along with a possible circumstance that would count as a counterexample to the new argument. This is easy. Suppose R means likes, and we
are describing a situation with four individuals: Romeo and Juliet (who like
each other), and Moriarty and Scrooge (who like nobody, and the feelings are
mutual).
If we let a refer to Moriarty, b refer to Scrooge, and c and d refer to Romeo
and Juliet, then the premises of our argument are all true, though the conclusion is false. This possible circumstance, like an alternate truth assignment
in propositional logic, shows that our original conclusion is not a first-order
consequence of the premises. Thus we call it a first-order counterexample.
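This check can be carried out mechanically. Below is a minimal Python sketch (our own illustration, not part of the LPL software) of the interpretation just described, under one plausible reading of the argument's premises; the variable names are ours:

```python
# A first-order counterexample, modeled as a finite interpretation.
# Domain: four individuals; "likes" is the relation described in the text.
domain = ["Moriarty", "Scrooge", "Romeo", "Juliet"]
likes = {("Romeo", "Juliet"), ("Juliet", "Romeo")}  # Moriarty and Scrooge like nobody

# The names a, b, c, d are interpreted as:
a, b, c, d = "Moriarty", "Scrooge", "Romeo", "Juliet"

def R(x, y):
    """R means likes under this interpretation."""
    return (x, y) in likes

# Premises (one plausible reading of the argument): all true in this circumstance.
premise1 = all(not R(x, a) for x in domain)   # forall x not R(x, a): nobody likes Moriarty
premise2 = all(not R(b, x) for x in domain)   # forall x not R(b, x): Scrooge likes nobody
premise3 = R(c, d)                            # R(c, d): Romeo likes Juliet

conclusion = R(a, b)                          # R(a, b): Moriarty likes Scrooge

print(premise1, premise2, premise3, conclusion)  # True True True False
```

Since the premises come out true and the conclusion false, this finite structure witnesses that the conclusion is not a first-order consequence of the premises.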
Replacement Method:
1. To check for first-order validity or first-order consequence, systematically
replace all of the predicates, other than identity, with new, meaningless
predicate symbols, making sure that if a predicate appears more than
once, you replace all instances of it with the same meaningless predicate.
(If there are function symbols, replace these as well.)
2. To see if S is a first-order validity, try to describe a circumstance, along
with interpretations for the names, predicates, and functions in S, in
which the sentence is false. If there is no such circumstance, the original
sentence is a first-order validity.
3. To see if S is a first-order consequence of P1 , . . . , Pn , try to find a circumstance and interpretation in which S is false while P1 , . . . , Pn are all
true. If there is no such circumstance, the original inference counts as a
first-order consequence.
Chapter 10
Figure 10.2: The relation between tautologies, first-order validities, and logical
truths (three nested regions: Tautologies within First-order validities within
Logical truths, each region illustrated with sample sentences from the blocks language)
1. If a sentence S is a tautology, then it is a first-order validity; and if S is a
first-order validity, it is a logical truth. The converse of neither of these
statements is true, though. (See Figure 10.2.)
2. Similarly, if S is a tautological consequence of premises P1, . . . , Pn, then
it is a first-order consequence of these premises. And if S is a first-order consequence of premises P1, . . . , Pn, then it is a logical consequence
of these premises. Again, the converse of neither of these statements is
true.
Let's try our hand at applying all of these concepts, using the various
consequence mechanisms in Fitch.
You try it
................................................................
1. Open the file FO Con 1. Here you are given a collection of premises, plus a
series of sentences that follow logically from them. Your task is to cite support sentences and specify one of the consequence rules to justify each step.
But the trick is that you must use the weakest consequence mechanism
possible and cite the minimal number of support sentences possible.
"
2. Focus on the first step after the Fitch bar, ∀x Cube(x) → Cube(b). You will
recognize this as a logical truth, which means that you should not have
to cite any premises in support of this step. First, ask yourself whether
the sentence is a tautology. No, it is not, so Taut Con will not check out.
Is it a first-order validity? Yes, so change the rule to FO Con and see
if it checks out. It would also check out using Ana Con, but this rule is
stronger than necessary, so your answer would be counted wrong if you
used this mechanism.
3. Continue through the remaining sentences, citing only necessary supporting premises and the weakest Con mechanism possible.
4. Save your proof as Proof FO Con 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Just as for Taut Con, FO Con has its own special goggles which allow you
to obscure the information not considered when checking the inference rule.
In this case less is obscured by the goggles, since more information is taken
into account. Consider inferring Tove(b) from ∀x (Tove(x) ↔ Slithy(x)) and
Slithy(b).
With FO Con's goggles, this inference looks like this:
∀x (Tove(x) ↔ Slithy(x))
Slithy(b)
Tove(b)
FO Con
You try it
................................................................
1. Open the file Goggles Example, which contains the inference described
above. Use the FO Con goggles to verify that the effect is as we have
described.
2. The particular goggles used by Fitch depend on the inference rule. To use
Taut Con goggles on this inference, first take the FO Con goggles off
(you can't wear both kinds at the same time), and then change the rule
to Taut Con. Then switch on the Taut Con goggles.
3. Notice that you see something different now. The display should look like
this:
"
∀x (Tove(x) ↔ Slithy(x))
Slithy(b)
Tove(b)
Taut Con
Because the Taut Con inference rule does not take into account the meaning of quantifiers, the entire quantified formula in the initial step is treated
as a single atomic formula. Since the inference contains three completely
unrelated formulas, as far as Taut Con is concerned, this inference will
not check out.
4. Take off the Taut Con goggles, and check the inference. As predicted, it
will not check out, since the conclusion is not a tautological consequence
of the premises. There is nothing to save.
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Remember
1. A sentence of fol is a first-order validity if it is a logical truth when you
ignore the meanings of the names, function symbols, and predicates
other than the identity symbol.
2. A sentence S is a first-order consequence of premises P1, . . . , Pn if it is a
logical consequence of these premises when you ignore the meanings
of the names, function symbols, and predicates other than identity.
3. The Replacement Method is useful for determining whether a sentence
is a first-order validity and whether the conclusion of an argument is
a first-order consequence of the premises.
4. All tautologies are first-order validities; all first-order validities are
logical truths. Similarly for consequence.
Exercises
10.8
If you skipped the You try it section, go back and do it now. Submit the file Proof FO Con 1.
10.9
!|"
Each of the following arguments is valid. Some of the conclusions are (a) tautological consequences of
the premises, some are (b) first-order consequences that are not tautological consequences, and some are
(c) logical consequences that are not first-order consequences. Use the truth-functional form algorithm
and the replacement method to classify each argument. You should justify your classifications by turning
in (a) the truth-functional form of the argument, (b) the truth-functional form and the argument with
nonsense predicates substituted, or (c) the truth-functional form, the nonsense argument, and a first-order counterexample.
10.10
"
Cube(a) ∧ Cube(b)
Small(a) ∨ Large(b)
∃x (Cube(x) ∧ Small(x)) ∨ ∃x (Cube(x) ∧ Large(x))
10.11
"
Cube(a) ∧ Cube(b)
Small(a) ∨ Large(b)
¬∃x (Cube(x) ∧ Large(x) ∧ Smaller(x, x))
10.12
"
∀x Cube(x) → ∃y Small(y)
¬∃y Small(y)
¬∀x Cube(x)
10.13
"
∀x (Cube(x) → ∃y Small(y))
¬∃y Small(y)
¬∃x Cube(x)
10.14
Cube(a)
Dodec(b)
¬(a = b)
10.15
Cube(a)
¬Cube(b)
¬(a = b)
10.16
Cube(a)
a = b
Cube(b)
10.17
¬Cube(a)
Cube(b)
¬(a = b)
10.18
Cube(a)
Cube(b)
∀z (Small(z) → Cube(z))
Small(d)
Cube(d)
10.19
∀z (Small(z) → Cube(z))
∀w (Cube(w) → LeftOf(w, c))
∀y (Small(y) → LeftOf(y, c))
Section 10.3
∀x (Cube(x) → Small(x))
∀x (¬Small(x) → ¬Cube(x))
A moment's thought will convince you that each of these sentences is a first-order consequence of the other, and so they are first-order equivalent. But
unlike the previous examples, they are not tautologically equivalent.
To see why Contraposition (and other principles of equivalence) can be
applied in the scope of quantifiers, we need to consider the wffs to which the
principle was applied:
Cube(x) → Small(x)
¬Small(x) → ¬Cube(x)
Or, more generally, consider the wffs:
P(x) → Q(x)
¬Q(x) → ¬P(x)
where P(x) and Q(x) may be any formulas, atomic or complex, containing the
single free variable x.
Now since these formulas are not sentences, it makes no sense to say they
are true in exactly the same circumstances, or that they are logical (or tautological) consequences of one another. Formulas with free variables are neither
true nor false. But there is an obvious extension of the notion of logical equivalence that applies to formulas with free variables. It is easy to see that in
any possible circumstance, the above two formulas will be satisfied by exactly
the same objects. Here's a proof of this fact:
Proof: We show this by indirect proof. Assume that in some circumstance there is an object that satisfies one but not the other of these
two formulas. Let's give this object a new name, say n1. Consider
the results of replacing x by n1 in our formulas:
P(n1) → Q(n1)
¬Q(n1) → ¬P(n1)
Since x was the only free variable, these are sentences. But by our
assumption, one of them is true and one is false, since that is how
we defined satisfaction. But this is a contradiction, since these two
sentences are logically equivalent by Contraposition.
logically equivalent wffs
We will say that two wffs with free variables are logically equivalent if, in
any possible circumstance, they are satisfied by the same objects.2 Or, what
2 Though we haven't discussed satisfaction for wffs with more than one free variable, a
similar argument can be applied to such wffs: the only difference is that more than one
name is substituted in for the free variables.
comes to the same thing, two wffs are logically equivalent if, when you replace their free variables with new names, the resulting sentences are logically
equivalent.
The above proof, suitably generalized, shows that when we apply any of our
principles of logical equivalence to a formula, the result is a logically equivalent
formula, one that is satisfied by exactly the same objects as the original. This
in turn is why the sentence ∀x (Cube(x) → Small(x)) is logically equivalent
to the sentence ∀x (¬Small(x) → ¬Cube(x)). If every object in the domain of
discourse (or one object, or thirteen objects) satisfies the first formula, then
every object (or one or thirteen) must satisfy the second.
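The claim that the two contrapositive wffs are satisfied by exactly the same objects can be spot-checked by brute force. Here is a minimal Python sketch (our own illustration, not part of Tarski's World) that checks every possible assignment of Cube and Small over a three-object domain:

```python
from itertools import product

# Check that Cube(x) -> Small(x) and not Small(x) -> not Cube(x) are
# satisfied by the same objects, in every "world" over a three-object domain.
domain = [0, 1, 2]

for cubes in product([False, True], repeat=len(domain)):
    for smalls in product([False, True], repeat=len(domain)):
        Cube = dict(zip(domain, cubes))
        Small = dict(zip(domain, smalls))
        for x in domain:
            wff1 = (not Cube[x]) or Small[x]               # Cube(x) -> Small(x)
            wff2 = (not (not Small[x])) or (not Cube[x])   # not Small(x) -> not Cube(x)
            assert wff1 == wff2  # same objects satisfy both wffs

print("contrapositives agree on every object in all", 8 * 8, "worlds")
```

A finite check like this is only evidence, not the proof given above, but since the two wffs are truth-functional combinations of Cube(x) and Small(x), agreement on all four truth-value combinations is in fact decisive.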
Equipped with the notion of logically equivalent wffs, we can restate the
principle of substitution of equivalents so that it applies to full first-order
logic. Let P and Q be wffs, possibly containing free variables, and let S(P) be
any sentence containing P as a component part. Then if P and Q are logically
equivalent:
P ⇔ Q
substitution of equivalent wffs
the result S(Q) of replacing the component P by Q is logically equivalent to
S(P). For example, since Cube(x) → Small(x) is equivalent to
¬Small(x) → ¬Cube(x), the sentence ∀x (Cube(x) → Small(x)) is equivalent
to ∀x (¬Small(x) → ¬Cube(x)).
If you remember the meanings of our quantifiers, you will see that there is a strong analogy between
∀ and ∧, on the one hand, and between ∃ and ∨, on the other. For example,
suppose we are talking about a world consisting of four named blocks, say
a, b, c, and d. Then the sentence ∀x Cube(x) will be true if and only if the
following conjunction is true:
Cube(a) ∧ Cube(b) ∧ Cube(c) ∧ Cube(d)
Likewise, ∃x Cube(x) will be true if and only if this disjunction is true:
Cube(a) ∨ Cube(b) ∨ Cube(c) ∨ Cube(d)
This analogy suggests that the quantifiers may interact with negation in a
way similar to conjunction and disjunction. Indeed, in our four-block world,
the sentence
¬∀x Small(x)
will be true if and only if the following negation is true:
¬(Small(a) ∧ Small(b) ∧ Small(c) ∧ Small(d))
The DeMorgan laws for the quantifiers allow you to push a negation sign
past a quantifier by switching the quantifier from ∀ to ∃ or from ∃ to ∀. So, for
example, if we know that not everything has some property (¬∀x P(x)), then
we know that something does not have the property (∃x ¬P(x)), and vice versa.
Similarly, if we know that it is not the case that something has some property
(¬∃x P(x)), then we know that everything must fail to have it (∀x ¬P(x)), and
vice versa. We call these the DeMorgan laws for quantifiers, due to the analogy
described above; they are also known as the quantifier/negation equivalences:
¬∀x P(x) ⇔ ∃x ¬P(x)
¬∃x P(x) ⇔ ∀x ¬P(x)
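These two laws can be verified by brute force over any finite domain. A small Python sketch (our own illustration): a property P over a three-element domain is just an assignment of truth values to the three objects, and both laws hold for all eight such assignments:

```python
from itertools import product

# Spot-check the quantifier DeMorgan laws over every property P
# on a three-element domain.
domain = [0, 1, 2]

for values in product([True, False], repeat=len(domain)):
    P = dict(zip(domain, values))
    # not forall x P(x)  <=>  exists x not P(x)
    assert (not all(P[x] for x in domain)) == any(not P[x] for x in domain)
    # not exists x P(x)  <=>  forall x not P(x)
    assert (not any(P[x] for x in domain)) == all(not P[x] for x in domain)

print("DeMorgan laws hold for all", 2 ** len(domain), "properties")
```

Over a finite domain, ∀ is just `all` and ∃ is just `any`, so this check is exactly the finite conjunction/disjunction analogy described above.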
By applying these laws along with some earlier equivalences, we can see
that there is a close relationship between certain pairs of Aristotelian sentences. In particular, the negation of All Ps are Qs is logically equivalent to
Some Ps are not Qs. To demonstrate this equivalence, we note the following
chain of equivalences. The first is the translation of It is not true that all Ps
are Qs while the last is the translation of Some Ps are not Qs.
¬∀x (P(x) → Q(x))
⇔ ¬∀x (¬P(x) ∨ Q(x))
⇔ ∃x ¬(¬P(x) ∨ Q(x))
⇔ ∃x (¬¬P(x) ∧ ¬Q(x))
⇔ ∃x (P(x) ∧ ¬Q(x))
The first step uses the equivalence of P(x) → Q(x) and ¬P(x) ∨ Q(x). The second and third steps use DeMorgan's laws, first one of the quantifier versions,
and then one of the Boolean versions. The last step uses the double negation
law applied to P(x).
A similar chain of equivalences shows that the negation of Some Ps are
Qs is equivalent to No Ps are Qs:
¬∃x (P(x) ∧ Q(x))
∀x (P(x) → ¬Q(x))
Exercises
10.20 Give a chain of equivalences showing that the negation of Some Ps are Qs (¬∃x (P(x) ∧ Q(x)))
is logically equivalent to ∀x (P(x) → ¬Q(x)), the translation of No Ps are Qs.
"
10.21 Open DeMorgan's Sentences 2. This file contains six sentences, but each of sentences 4, 5, and
6 is logically equivalent to one of the first three. Without looking at what the sentences say, see
if you can figure out which is equivalent to which by opening various world files and evaluating
the sentences. (You should be able to figure this out from Ackermann's, Bolzano's, and Claire's
Worlds, plus what we've told you.) Once you think you've figured out which are equivalent to
which, write out three equivalence chains to prove you're right. Turn these in to your instructor.
10.22 (∀ versus ∧) We pointed out the similarity between ∀ and ∧, as well as that between ∃ and
∨. But we were careful not to claim that the universally quantified sentence was logically
equivalent to the analogous conjunction. This problem will show you why we did not make this
claim.
Open Church's Sentences and Ramsey's World. Evaluate the sentences in this world. You
will notice that the first two sentences have the same truth value, as do the second two.
Modify Ramsey's World in any way you like, but do not add or delete objects, and do not
change the names used. Verify that the first two sentences always have the same truth
values, as do the last two.
Now add one object to the world. Adjust the objects so that the first sentence is false, the
second and third true, and the last false. Submit your work as World 10.22. This world
shows that the first two sentences are not logically equivalent. Neither are the last two.
Section 10.4
The quantifier DeMorgan laws tell us how quantifiers interact with negation.
Equally important is the question of how quantifiers interact with conjunction
and disjunction. The laws governing this interaction, though less interesting
than DeMorgan's, are harder to remember, so you need to pay attention!
First of all, notice that ∀x (P(x) ∧ Q(x)), which says that everything is
both P and Q, is logically equivalent to ∀x P(x) ∧ ∀x Q(x), which says that
everything is P and everything is Q. These are just two different ways of saying
that every object in the domain of discourse has both properties P and Q. By
contrast, ∀x (P(x) ∨ Q(x)) is not logically equivalent to ∀x P(x) ∨ ∀x Q(x). For
example, the sentence ∀x (Cube(x) ∨ Tet(x)) says that everything is either a
cube or a tetrahedron, but the sentence ∀x Cube(x) ∨ ∀x Tet(x) says that either
everything is a cube or everything is a tetrahedron, clearly a very different
kettle of fish. We summarize these two observations, positive and negative, as
follows:
∀x (P(x) ∧ Q(x)) ⇔ ∀x P(x) ∧ ∀x Q(x)
∀x (P(x) ∨ Q(x)) is not equivalent to ∀x P(x) ∨ ∀x Q(x)
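The failure of the second pattern is easy to exhibit concretely. In this minimal Python sketch (our own illustration, not part of Tarski's World), a world containing one cube and one tetrahedron makes ∀x (Cube(x) ∨ Tet(x)) true while ∀x Cube(x) ∨ ∀x Tet(x) comes out false:

```python
# A two-block world: one cube and one tetrahedron.
world = {"b1": "cube", "b2": "tet"}

def Cube(x):
    return world[x] == "cube"

def Tet(x):
    return world[x] == "tet"

# forall x (Cube(x) or Tet(x)): every block is a cube or a tetrahedron.
every_is_cube_or_tet = all(Cube(x) or Tet(x) for x in world)

# forall x Cube(x) or forall x Tet(x): everything is a cube, or everything is a tet.
every_cube_or_every_tet = (all(Cube(x) for x in world)
                           or all(Tet(x) for x in world))

print(every_is_cube_or_tet, every_cube_or_every_tet)  # True False
```

One world in which the two sentences get different truth values is all it takes to show they are not logically equivalent.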
Analogous observations hold for ∃. The sentence ∃x (P(x) ∨ Q(x)) is equivalent to the claim that something is
P or something is Q: ∃x P(x) ∨ ∃x Q(x). But this equivalence fails the moment we replace ∨ with ∧. The fact that there is a cube and a tetrahedron,
∃x Cube(x) ∧ ∃x Tet(x), hardly means that there is something which is both
a cube and a tetrahedron: ∃x (Cube(x) ∧ Tet(x))! Again, we summarize both
positive and negative observations together:
∃x (P(x) ∨ Q(x)) ⇔ ∃x P(x) ∨ ∃x Q(x)
∃x (P(x) ∧ Q(x)) is not equivalent to ∃x P(x) ∧ ∃x Q(x)
There is one circumstance when you can push a universal quantifier in past
a disjunction, or move an existential quantifier out from inside a conjunction.
But to explain this circumstance, we first have to talk a bit about a degenerate
form of quantification. In defining the class of wffs, we did not insist that the
variable being quantified actually occur free (or at all) in the wff to which the
quantifier is applied. Thus, for example, the expression ∀x Cube(b) is a wff,
even though Cube(b) does not contain the variable x. Similarly, the sentence
∃y Small(y) doesn't contain any free variables. Still, we could form the sentences ∀x ∃y Small(y) or even ∃y ∃y Small(y). We call this sort of quantification
null quantification. Let's think for a moment about what it might mean.
Consider the case of the sentence ∀x Cube(b). This sentence is true in a
world if and only if every object in the domain of discourse satisfies the wff
Cube(b). But what does that mean, since this wff does not contain any free
variables? Or, to put it another way, what does it mean to substitute a name
for the (nonexistent) free variable in Cube(b)? Well, if you replace every occurrence of x in Cube(b) with the name n1, the result is simply Cube(b). So, in
a rather degenerate way, the question of whether an object satisfies Cube(b)
simply boils down to the question of whether Cube(b) is true. Thus, ∀x Cube(b)
and Cube(b) are true in exactly the same worlds, and so are logically equivalent. The same holds of ∃x Cube(b), which is also equivalent to Cube(b). More
generally, if the variable x is not free in wff P, then we have the following
equivalences:
null quantification
∀x P ⇔ P
∃x P ⇔ P
(in which case, they will all satisfy the disjunctive wff by satisfying the second
disjunct), or both. That is, this sentence imposes the same conditions on a
world as the sentence Cube(b) ∨ ∀x Small(x). Indeed, when the variable x is
not free in a wff P, we have both of the following:
∀x (P ∨ Q(x)) ⇔ P ∨ ∀x Q(x)
∃x (P ∧ Q(x)) ⇔ P ∧ ∃x Q(x)
replacing bound variables
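Both equivalences can be spot-checked the same way as the DeMorgan laws. In the sketch below (our own illustration), P is a closed sentence, represented by a bare truth value because x is not free in it, while Q ranges over all properties on a three-element domain:

```python
from itertools import product

domain = [0, 1, 2]

# P is a closed sentence (x not free in it): just a truth value.
# Q assigns a truth value to each object. Check both push-in laws.
for P in (True, False):
    for values in product([True, False], repeat=len(domain)):
        Q = dict(zip(domain, values))
        # forall x (P or Q(x))  <=>  P or forall x Q(x)
        assert all(P or Q[x] for x in domain) == (P or all(Q[x] for x in domain))
        # exists x (P and Q(x))  <=>  P and exists x Q(x)
        assert any(P and Q[x] for x in domain) == (P and any(Q[x] for x in domain))

print("both null-quantification equivalences verified")
```

The check makes the intuition explicit: since P's truth value is fixed once and for all, it can be factored out of the quantifier.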
Compare these two equivalences to the non-equivalences highlighted a moment ago, and make sure you understand the differences. The equivalences involving null quantification, surprisingly enough, will become very useful later,
when we learn how to put sentences into what is called prenex form, where
all the quantifiers are out in front.
The last principles we mention are so basic they are easily overlooked.
When you are translating from English to fol and find a quantified noun
phrase, you must pick some variable to use in your translation. But we have
given you no guidelines as to which one is right. This is because it doesn't
matter which variable you use, as long as you don't end up with two quantifiers whose scopes overlap but which bind the same variable. In general, the
variable itself makes no difference whatsoever. For example, the sentences
∀x Cube(x) and ∀y Cube(y) are logically equivalent.
We codify this by means of the following: for any wff P(x) and variable y that
does not occur in P(x),
∀x P(x) ⇔ ∀y P(y)
∃x P(x) ⇔ ∃y P(y)
Remember
1. (Pushing quantifiers past ∧ and ∨) For any wffs P(x) and Q(x):
(a) ∀x (P(x) ∧ Q(x)) ⇔ ∀x P(x) ∧ ∀x Q(x)
(b) ∃x (P(x) ∨ Q(x)) ⇔ ∃x P(x) ∨ ∃x Q(x)
2. (Null quantification) If x is not free in P: ∀x P ⇔ P and ∃x P ⇔ P
Exercises
10.23 (Null quantification) Open Null Quantification Sentences. In this file you will find sentences in
the odd-numbered slots. Notice that each sentence is obtained by putting a quantifier in front
of a sentence in which the quantified variable is not free.
1. Open Gödel's World and evaluate the truth of the first sentence. Do you understand why
it is false? Repeatedly play the game committed to the truth of this sentence, each time
choosing a different block when your turn comes around. Not only do you always lose,
but your choice has no impact on the remainder of the game. Frustrating, eh?
2. Check the truth of the remaining sentences and make sure you understand why they have
the truth values they do. Play the game a few times on the second sentence, committed
to both true and false. Notice that neither your choice of a block (when committed to
false) nor Tarski's World's choice (when committed to true) has any effect on the game.
3. In the even-numbered slots, write the sentence from which the one above it was obtained.
Check that the even- and odd-numbered sentences have the same truth value, no matter
how you modify the world. This is because they are logically equivalent. Save and submit
your sentence file.
Some of the following biconditionals are logical truths (which is the same as saying that the two sides of
the biconditional are logically equivalent); some are not. If you think the biconditional is a logical truth,
create a file with Fitch, enter the sentence, and check it using FO Con. If the sentence is not a logical
truth, create a world in Tarski's World in which it is false. Submit the file you create.
∀x (Cube(x) ∨ Dodec(x)) ↔ (∀x Cube(x) ∨ ∀x Dodec(x))
(∃w Dodec(w) ∧ Large(b)) ↔ ∃w (Dodec(w) ∧ Large(b))
(∃w Dodec(w) ∧ ∃w Large(w)) ↔ ∃w (Dodec(w) ∧ Large(w))
Section 10.5
axiomatic method
meaning postulates
1. ¬∃x (Cube(x) ∧ Tet(x))
2. ¬∃x (Tet(x) ∧ Dodec(x))
3. ¬∃x (Dodec(x) ∧ Cube(x))
4. ∀x (Tet(x) ∨ Dodec(x) ∨ Cube(x))
The first three axioms stem from the meanings of our three basic shape
predicates. Being one of these shapes simply precludes being another. The
fourth axiom, however, is presumably not part of the meaning of the three
predicates, as there are certainly other possible shapes. Still, if our goal is to
capture reasoning about blocks worlds of the sort that can be built in Tarski's
World, we will want to include 4 as an axiom.
Any argument that is intuitively valid and involves only the three basic shape predicates is first-order valid if we add these axioms as additional
premises. For example, the following intuitively valid argument is not first-order valid:
¬∃x Tet(x)
∀x (Cube(x) ↔ ¬Dodec(x))
If we add axioms 3 and 4 as premises, however, then the resulting argument
is
¬∃x Tet(x)
¬∃x (Dodec(x) ∧ Cube(x))
∀x (Tet(x) ∨ Dodec(x) ∨ Cube(x))
∀x (Cube(x) ↔ ¬Dodec(x))
This argument is first-order valid, as can be seen by replacing the shape
predicates with meaningless predicates, say P, Q, and R:
¬∃x P(x)
¬∃x (Q(x) ∧ R(x))
∀x (P(x) ∨ Q(x) ∨ R(x))
∀x (R(x) ↔ ¬Q(x))
If the validity of this argument is not entirely obvious, try to construct a
first-order counterexample. You'll see that you can't.
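A computer can also do the search for a counterexample. The following Python sketch is our own illustration (not part of the LPL package): because the predicates here are monadic, a world matters only through which combinations of P, Q, and R are realized in it, so checking every nonempty set of such "types" is an exhaustive search.

```python
from itertools import product

# Search for a first-order counterexample to the argument:
#   not exists x P(x)
#   not exists x (Q(x) and R(x))
#   forall x (P(x) or Q(x) or R(x))
#   therefore: forall x (R(x) iff not Q(x))
# For monadic predicates only an object's "type" matters: which of
# P, Q, R it satisfies. So checking every nonempty set of types suffices.
types = list(product([False, True], repeat=3))  # (P(x), Q(x), R(x)) for one object

counterexamples = []
for bits in product([False, True], repeat=len(types)):
    world = [t for t, present in zip(types, bits) if present]
    if not world:
        continue  # domains must be nonempty
    premises = (
        not any(p for p, q, r in world)              # not exists x P(x)
        and not any(q and r for p, q, r in world)    # not exists x (Q(x) and R(x))
        and all(p or q or r for p, q, r in world)    # forall x (P(x) or Q(x) or R(x))
    )
    conclusion = all(r == (not q) for p, q, r in world)  # forall x (R(x) iff not Q(x))
    if premises and not conclusion:
        counterexamples.append(world)

print("counterexamples found:", len(counterexamples))  # counterexamples found: 0
```

For arbitrary first-order arguments a finite search like this is only evidence, but for monadic sentences like these the type-based enumeration really is exhaustive, so the empty result confirms first-order validity.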
Let's look at an example where we can replace some instances of Ana
Con by the basic shape axioms and the weaker FO Con rule.
You try it ................................................................
1. Open the Fitch file Axioms 1. The premises in this file are just the four basic
shape axioms. Below the Fitch bar are four sentences. Each is justified by
a use of the rule Ana Con, without any sentences cited in support. Verify
that each of the steps checks out.
2. Now change each of the justifications from Ana Con to FO Con. Verify
that none of the steps now checks out. See if you can make each of them
check out by finding a single shape axiom to cite in its support.
"
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
The basic shape axioms don't express everything there is to say about the
shapes in Tarski's World. We have not yet said anything about the binary
predicate SameShape. At the very least, we would need the following as a fifth
axiom:
∀x SameShape(x, x)
We will not in fact add this as an axiom since, as we will see in Chapter 12, it
leaves out essential facts about the relation between SameShape and the basic
shape predicates. When we add these other facts, it turns out that the above
axiom is unnecessary.
Axiomatization has another interesting use that can be illustrated with our
axioms about shape. Notice that if we systematically replace the predicates
Cube, Tet, and Dodec by the predicates Small, Medium, and Large, the resulting
sentences are true in all of the blocks worlds. It follows from this that if we
take any valid argument involving just the shape predicates and perform the
stated substitution, the result will be another valid argument.
Presupposing a range of circumstances
The intuitive difference between the first three shape axioms and the fourth,
which asserts a general fact about our block worlds but not one that follows
from the meanings of the predicates, highlights an important characteristic
of much everyday reasoning. More often than not, when we reason, we do so
against the background of an assumed range of possibilities. When you reason
about Tarskis World, it is natural to presuppose the various constraints that
have been built into the program. When you reason about what movie to
go to, you implicitly presuppose that your options are limited to the movies
showing in the vicinity.
The inferences that you make against this background may not, strictly
speaking, be logically valid. That is, they may be correct relative to the presupposed circumstances but not correct relative to some broader range of possibilities. For example, if you reason from ¬Cube(d) and ¬Tet(d) to Dodec(d),
your reasoning is valid within the domain of Tarski's World, but not relative
to worlds where there are spheres and icosahedra. When you decide that the
assuming a range of possibilities
background assumptions
latest Harrison Ford movie is the best choice, this may be a correct inference
in your vicinity, but perhaps not if you were willing to fly to other cities.
In general, background assumptions about the range of relevant circumstances are not made an explicit part of everyday reasoning, and this can give
rise to disagreements about the reasoning's validity. People with different assumptions may come up with very different assessments of the validity of
some explicit piece of reasoning. In such cases, it is often helpful to articulate
general facts about the presupposed circumstances. By making these explicit,
we can often identify the source of the disagreement.
The axiomatic method can be thought of as a natural extension of this everyday process. Using this method, it is often possible to transform arguments
that are valid only relative to a particular range of circumstances into arguments that are first-order valid. The axioms that result express facts about
the meanings of the relevant predicates, but also facts about the presupposed
circumstances.
The history of the axiomatic method is closely entwined with the history of logic. You were probably already familiar with axioms from studying
Euclidean geometry. In investigating the properties of points, lines, and geometrical shapes, the ancient Greeks discovered the notion of proof which lies
at the heart of deductive reasoning. This came about as follows. By the time
of the Greeks, an enormous number of interesting and important discoveries
about geometrical objects had already been made, some dating back to the
time of the ancient Babylonians. For example, ancient clay tablets show that
the Babylonians knew what is now called the Pythagorean Theorem. But for
the Babylonians, geometry was an empirical science, one whose facts were
discovered by observation.
Somewhere lost in the prehistory of mathematics, someone had a brilliant
idea. They realized that there are logical relationships among the known facts
of geometry. Some follow from others by logic alone. Might it not be possible
to choose a few, relatively clear observations as basic, and derive all the others
by logic? The starting truths are accepted as axioms, while their consequences
are called theorems. Since the axioms are supposed to be obviously true, and
since the methods of proof are logically valid, we can be sure the theorems
are true as well.
This general procedure for systematizing a body of knowledge became
known as the axiomatic method. A set of axioms is chosen, statements which
we are certain hold of the worlds or circumstances under consideration.
Some of these may be meaning postulates, truths that hold simply in virtue
of meaning. Others may express obvious facts that hold in the domain in
question, facts like our fourth shape axiom. We can be sure that anything
that follows from the axioms by valid methods of inference is on just as firm
a footing as the axioms from which we start.
Exercises
10.30 If you skipped the You try it section, go back and do it now. Submit the file Proof Axioms 1.
10.31 Suppose we state our four basic shape axioms in the following schematic form:
1. ¬∃x (R(x) ∧ P(x))
2. ¬∃x (P(x) ∧ Q(x))
3. ¬∃x (Q(x) ∧ R(x))
4. ∀x (P(x) ∨ Q(x) ∨ R(x))
We noted that any valid argument involving just the three shape predicates remains valid
when you substitute other predicates, like the Tarski's World size predicates, that satisfy these
axioms. Which of the following triplets of properties satisfy the axioms in the indicated domain
(that is, make them true when you substitute them for P, Q, and R)? If they don't, say which
axioms fail and why.
1.
2.
3.
4.
[Note: Your answers in some cases will depend on how you are construing the predicates. The
important thing is that you explain your interpretations clearly, and how the interpretations
lead to the success or failure of the axioms.]
Section 10.6
Lemmas
One important feature of the axiomatic method is that as theorems are proved,
they become available as new facts that can be used in later proofs. If S has
been shown to be a consequence of the axioms, then S can be used in any
proof which employs those same axioms. After all, we could just re-prove S
in the new proof, since all of its premises are available in the new proof, and
then cite S in subsequent inference steps.
For example, suppose that you have proved that
∀x (Cube(x) ∨ Dodec(x) ∨ Tet(x))
follows from the shape axioms that we presented in the previous section. If
you are later asked to prove
∀x (Cube(x) ∨ Dodec(x) ∨ Tet(x)) ∨ Small(a)
from the same axioms, you could simply reproduce the steps of the earlier
proof, and then use the ∨-Intro rule to reach the desired conclusion. But you
would have to repeat the work that you already did. That could be a drag if the
proof of the original result is long and complex.
Logicians are a lazy lot, and to avoid repeating work that they have already done they usually simply say something like we already know that
∀x (Cube(x) ∨ Dodec(x) ∨ Tet(x)) follows from the axioms, and so obviously,
∀x (Cube(x) ∨ Dodec(x) ∨ Tet(x)) ∨ Small(a) does too. They know that if a
picky person were to ask them to justify this move, they could simply reproduce the proof. But they know better than to do all that work without having
been asked.
A result that is used in this way is often called a lemma. There's nothing
special about a lemma; it's just a theorem that has been proved and is being
used in the course of proving another theorem. We use the word lemma simply
to indicate that the result is being used in this way.
Fitch has a rule that takes advantage of the ability to reuse theorems as
lemmas in later proofs. The Lemma rule allows you to use a result that you
have previously proved within a new proof. To use this rule, you must select
a proof file that you have already created, and that file must have a single goal
which checks out. You cite the formulas in your proof that correspond to
the premises in the file, and you are then permitted to conclude the goal formula
of that lemma in your proof. Let's try it out.
You try it
................................................................
1. Open the file Lemma 1, which contains a proof of the validity of the following argument:

¬∃x (Tet(x) ∧ Cube(x))
¬∃x (Tet(x) ∧ Dodec(x))
¬∃x (Cube(x) ∧ Dodec(x))
∀x ¬(Tet(x) ∧ Cube(x) ∧ Dodec(x))

2. Open the file Lemma Example 1, which contains the goal of proving:

¬∃x (Tet(x) ∧ Cube(x))
¬∃x (Tet(x) ∧ Dodec(x))
¬∃x (Cube(x) ∧ Dodec(x))
∀x ¬(Tet(x) ∧ Cube(x) ∧ Dodec(x)) ∨ Small(a)

Create a new step and insert the formula ∀x ¬(Tet(x) ∧ Cube(x) ∧ Dodec(x));
this is the goal formula of Lemma 1. Cite all three premises, which are also
the premises of Lemma 1. Finally, open the Rule menu, and look for the
Lemma item; this is a submenu which initially contains Add Lemma. . . .
Select this item. A file chooser dialog will appear. Navigate to and select
the file Lemma 1. Check the step (it will check out).
3. The Lemma rule will check out when the formula at the lemma step is
the only goal in the lemma file, the same number of formulas are cited as
there are premises in the lemma file, and these cited steps contain all of
the premise formulas (in any order). The premises and the cited formulas
have to match exactly. If the lemma contains the premises P and Q, for
example, citing the formula P ∧ Q won't work.
4. Complete the proof of this exercise by using a single application of ∨ Intro.
"
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Situations where you can use a lemma arise frequently when reasoning with a collection of presupposed axioms, because every proof in the
collection will share the axioms as common premises. Consequently, results
that are proved only from the axioms will be readily reusable in other proofs.
However, there is nothing special about the axioms. As we have seen, they are
just premises used in proofs. If you complete a proof of a result using some
premises, and then you later have exactly the same premises (perhaps in addition to others) in another proof, then you are entitled to use that previous
result as a lemma.
The premises of the lemma do not even have to be premises of the new
proof, as long as they appear in the new proof and can be cited from the step
that uses the lemma. Thus the premises of the lemma could instead be derived
from whatever premises you have in the new proof. For instance, in the next
example, you are required to derive the premises of the lemma before you can
use it.
You try it
................................................................
1. Open the file Lemma 2, which contains a proof of the validity of the following argument:

Dodec(d) ∨ Cube(c)
Dodec(d) → Large(d)
Cube(c) → Small(c)
Large(d) ∨ Small(c)
2. Open the file Lemma Example 2, which contains the goal of proving:

Dodec(d) ∨ Cube(c)
∀x (Dodec(x) → Large(x))
∀x (Cube(x) → Small(x))
Large(d) ∨ Small(c)
3. Notice that the second and third premises of the lemma are instances of
the second and third premises of this proof, respectively. We can complete
the proof by deriving those formulas, and then applying the lemma.
First, use ∀ Elim to derive the formula Dodec(d) → Large(d) in the first
non-premise step of the proof. In the next step, again use ∀ Elim to derive
the formula Cube(c) → Small(c). Finally, complete the proof by applying
the lemma.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Whenever you prove a result, you want it to be as general as possible, just
in case you need to use the result as a lemma in a later proof. If you look
closely at the proof in the file Lemma 2, which you used above,
you will notice that only one of the premises of the lemma is used in the proof
of the lemma. That means that the number of situations in which the lemma
can be used is much smaller than it needs to be, since to use the lemma we
have to be able to cite all of the premises. We want the number of premises
to be as small as possible to ensure that the proof is most general and most
useful. In fact, even if the proof is never used as a lemma, we want this to
be true, since otherwise we might be fooled into believing that the result is
more specific than it actually is.
There is another kind of generality that is very important. Imagine that
we had a proof of the following result:

¬(Tet(a) ∧ Small(b))
¬Tet(a) ∨ ¬Small(b)

This is, of course, an instance of De Morgan's laws where the specific formulas Tet(a) and Small(b) are used. We know that this argument is valid regardless of the sentences that are chosen in place of Tet(a) and Small(b), a
fact that we usually express by

¬(P ∧ Q)
¬P ∨ ¬Q
Because the general form of the result is valid, we know that any specific
instance is valid. Ideally, we could prove the result in the general form, and
then use the general form as a lemma any time a specific instance of the
result is needed. The Lemma rule in Fitch allows us to do exactly that.
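Because the general De Morgan form quantifies over all sentences P and Q, its validity can be confirmed by checking every truth assignment. The following sketch (our illustration in Python, not part of the LPL courseware) does this brute-force check:

```python
from itertools import product

# Check De Morgan's law on every truth assignment to P and Q:
# ¬(P ∧ Q) is true exactly when ¬P ∨ ¬Q is true.
for P, Q in product([True, False], repeat=2):
    lhs = not (P and Q)
    rhs = (not P) or (not Q)
    assert lhs == rhs  # the two forms agree on this assignment

print("De Morgan's law holds on all four assignments")
```

Since the two forms agree on all assignments, any instance obtained by substituting specific sentences for P and Q is valid as well.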
You try it
................................................................
1. Open the file Lemma 3, which contains a proof of the validity of the following argument:

A ∨ B
A → C
B → D
C ∨ D
2. Open the file Lemma Example 3, which asks you to prove the following
argument:

LeftOf(a, b) ∨ RightOf(a, b)
LeftOf(a, b) → (Small(a) ∧ Tet(a))
RightOf(a, b) → (Large(b) ∧ Cube(b))
(Small(a) ∧ Tet(a)) ∨ (Large(b) ∧ Cube(b))
3. Notice that the two arguments have identical form. Lemma 3 contains a
proof that arguments of a general form (known as constructive dilemma)
are valid, while Lemma Example 3 contains a particular instance of this
result.
"
4. Add a new step to Lemma Example 3, and insert the goal formula. Cite
all three premises, and use the Lemma rule, selecting the file Lemma 3.
Verify that the step checks out.
"
by
by
by
by
LeftOf(a, b)
RightOf(a, b)
Small(a) Tet(a)
Large(b) Cube(b)
Section 10.6
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
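The validity of the constructive dilemma form itself can be confirmed by brute force over all sixteen truth assignments to A, B, C, and D. A small sketch (our illustration, not part of Fitch):

```python
from itertools import product

def implies(x, y):
    """Material conditional: x → y."""
    return (not x) or y

# Constructive dilemma: from A ∨ B, A → C, B → D, infer C ∨ D.
for A, B, C, D in product([True, False], repeat=4):
    premises = (A or B) and implies(A, C) and implies(B, D)
    if premises:
        assert C or D  # the conclusion holds whenever all premises do

print("constructive dilemma is valid on all 16 assignments")
```

No assignment makes all three premises true and the conclusion false, which is exactly what it means for the general form to be valid.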
Just as for predicate symbols and formulas, you can use the general constant symbols n1, . . . , n9 in your lemmas to say that the lemma does not
concern specific constant symbols. A lemma such as Tet(a) → Cube(a) can
be used only to match that exact formula, but Tet(n1) → Cube(n1) could be
used to prove Tet(a) → Cube(a), Tet(f) → Cube(f), or even
Tet(n2) → Cube(n2).
Remember

1. Lemmas are just results that you have previously proved, used
to justify a step in another proof.

2. Fitch's Lemma rule requires that each cited formula matches a premise
in the lemma file, and the derived formula matches the goal of the
lemma file.

3. The letters P, Q, etc., in a lemma file will match any formula.

4. The constant symbols n1, . . . , n9 in a lemma file will match any constant symbol.
Exercises
10.32 If you skipped the You try it sections, go back and do them now. Submit the files Proof
10.33 Which of the following arguments can be justified by a single application of Lemma 3? For each
one that can be justified, turn in the substitution of formulas that is required to justify the
application. For each argument that cannot be justified, explain why not.
1.
Cube(a) ∨ Cube(b)
Cube(b) → SameSize(a, b)
Cube(a) → SameShape(a, b)
SameSize(a, b) ∨ SameShape(a, b)
2.
Tet(a) ∨ Tet(b)
Tet(b) → Larger(e, f)
Tet(a) → Smaller(f, e)
Smaller(f, e) ∨ Larger(e, f)
3.
4.
Tet(e)
Tet(e) → Small(e)
Tet(e) → SameSize(e, e)
Small(e) ∨ SameSize(e, e)
5.
LeftOf(a, b) ∨ Smaller(a, b)
Smaller(a, b) → P
LeftOf(a, b) → Q
Q ∨ P
10.34 Prove a single lemma which can be used to complete each of the proofs in the files Exercise 10.34.1 and Exercise 10.34.2 in one step using the Lemma rule. Submit the lemma file as
Proof 10.34.
Chapter 11
Multiple Quantifiers
So far, we've considered only sentences that contain a single quantifier symbol.
This was enough to express the simple quantified forms studied by Aristotle,
but hardly shows the expressive power of the modern quantifiers of first-order
logic. Where the quantifiers of fol come into their own is in expressing claims
which, in English, involve several quantified noun phrases.
Long, long ago, probably before you were even born, there was an advertising campaign that ended with the tag line: "Everybody doesn't like something,
but nobody doesn't like Sara Lee." Now there's a quantified sentence! It goes
without saying that this was every logician's favorite ad campaign. Or consider Lincoln's famous line: "You may fool all of the people some of the time;
you can even fool some of the people all of the time; but you can't fool all of
the people all of the time." Why, the mind reels!
To express claims like these, and to reveal their logic, we need to juggle
more than one quantifier in a single sentence. But it turns out that, like
juggling, this requires a fair bit of preparation and practice.
Section 11.1
You try it
................................................................
1. Suppose you are evaluating the following sentence in a world with four
cubes lined up in the front row:
"
"
"
4. If we really wanted to express the claim that every cube is to the left or
right of every other cube, then we would have to write

∀x ∀y ((Cube(x) ∧ Cube(y) ∧ x ≠ y) → (LeftOf(x, y) ∨ RightOf(x, y)))
5. The second sentence in the file looks for all the world like it says there are
two cubes. But it doesn't. Delete all but one cube in the world and check
to see that it's still true. Play the game committed to false and see what
happens.
6. See if you can modify the second sentence so it is false in a world with only
one cube, but true if there are two or more. (Use ≠ like we did above.)
Save the modified sentences as Sentences Multiple 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
In general, to say that every pair of distinct objects stands in some relation,
you need a sentence of the form ∀x ∀y (x ≠ y → . . . ), and to say that there
are two objects with a certain property, you need a sentence of the form
∃x ∃y (x ≠ y ∧ . . . ). Of course, other parts of the sentence often guarantee the
distinctness for you. For example, if you say that every tetrahedron is larger
than every cube:

∀x ∀y ((Tet(x) ∧ Cube(y)) → Larger(x, y))

then the fact that x must be a tetrahedron and y a cube ensures that your
claim says what you intended.
Remember
When evaluating a sentence with multiple quantifiers, don't fall into the
trap of thinking that distinct variables range over distinct objects. In fact,
the sentence ∀x ∀y P(x, y) logically implies ∀x P(x, x), and the sentence
∃x P(x, x) logically implies ∃x ∃y P(x, y)!
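Both implications in the box can be checked exhaustively for small domains. The sketch below (our illustration, not part of Tarski's World) enumerates every interpretation of P over a two-element domain and verifies the two implications in each model:

```python
from itertools import product

# Every binary relation P over the domain {0, 1} is one assignment of
# True/False to the four pairs (x, y).
domain = [0, 1]
pairs = [(x, y) for x in domain for y in domain]

for bits in product([True, False], repeat=len(pairs)):
    P = dict(zip(pairs, bits))  # one possible interpretation of P

    all_xy = all(P[(x, y)] for x in domain for y in domain)   # ∀x∀y P(x,y)
    all_xx = all(P[(x, x)] for x in domain)                   # ∀x P(x,x)
    some_xx = any(P[(x, x)] for x in domain)                  # ∃x P(x,x)
    some_xy = any(P[(x, y)] for x in domain for y in domain)  # ∃x∃y P(x,y)

    if all_xy:
        assert all_xx   # ∀x∀y P(x,y) implies ∀x P(x,x)
    if some_xx:
        assert some_xy  # ∃x P(x,x) implies ∃x∃y P(x,y)

print("both implications hold in all 16 interpretations")
```

The point is that nothing stops x and y from picking out the same object: a pair of variables does not demand a pair of objects.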
Exercises
11.1 If you skipped the You try it section, go back and do it now. Submit the file Sentences
Multiple 1.

11.2 (Simple multiple quantifier sentences) The file Frege's Sentences contains 14 sentences; the first
seven begin with a pair of existential quantifiers, the second seven with a pair of universal
quantifiers. Go through the sentences one by one, evaluating them in Peirce's World. Though
you probably won't have any trouble understanding these sentences, don't forget to use the
game if you do. When you understand all the sentences, modify the size and location of a single
block so that the first seven sentences are true and the second seven false. Submit the resulting
world.
11.3 (Getting fancier) Open up Peano's World and Peano's Sentences. The sentence file contains 30
assertions that Alex made about this world. Evaluate Alex's claims. If you have trouble with
any, play the game (several times if necessary) until you see where you are going wrong. Then
change each of Alex's false claims into a true claim. If you can make the sentence true by
adding a clause of the form x ≠ y, do so. Otherwise, see if you can turn the false claim into
an interesting truth: don't just add a negation sign to the front of the sentence. Submit your
corrected list of sentences.
11.4 (Describing a world) Let's try our hand at describing a world using multiple quantifiers. Open
Finsler's World and start a new sentence file.
1. Notice that all the small blocks are in front of all the large blocks. Use your first
sentence to say this.
2. With your second sentence, point out that there's a cube that is larger than a tetrahedron.
3. Next, say that all the cubes are in the same column.
4. Notice, however, that this is not true of the tetrahedra. So write the same sentence
about the tetrahedra, but put a negation sign out front.
5. Every cube is also in a different row from every other cube. Say this.
6. Again, this isn't true of the tetrahedra, so say that it's not.
7. Notice there are different tetrahedra that are the same size. Express this fact.
8. But there aren't different cubes of the same size, so say that, too.
Are all your translations true in Finsler's World? If not, try to figure out why. In fact, play
around with the world and see if your first-order sentences always have the same truth values
as the claims you meant to express. Check them out in König's World, where all of the original
claims are false. Are your sentences all false? When you think you've got them right, submit
your sentence file.
11.5 (Building a world) Open Ramsey's Sentences. Build a world in which sentences 1–10 are all true
at once (ignore sentences 11–20 for now). These first ten sentences all make either particular
claims (that is, they contain no quantifiers) or existential claims (that is, they assert that things
of a certain sort exist). Consequently, you could make them true by successively adding objects
to the world. But part of the exercise is to make them all true with as few objects as possible.
You should be able to do it with a total of six objects. So rather than adding objects for each
new sentence, only add new objects when absolutely necessary. Again, be sure to go back and
check that all the sentences are true when you are finished. Submit your world as World 11.5.
[Hint: To make all the sentences true with six blocks, you will have to watch out for some
intentionally misleading implicatures. For example, one of the objects will have to have two
names.]
11.6 (Modifying the world) Sentences 11–20 of Ramsey's Sentences all make universal claims. That is,
they all say that every object in the world has some property or other. Check to see whether the
world you built in Exercise 11.5 satisfies the universal claims expressed by these sentences.
If not, modify the world so it makes all 20 sentences true at once. Submit your modified world
as World 11.6. (Make sure you submit both World 11.5 and World 11.6 to get credit for both
exercises.)
11.7 (Block parties) The interaction of quantifiers and negation gives rise to subtleties that can be
pretty confusing. Open Löwenheim's Sentences, which contains eight sentences divided into two
sets. Suppose we imagine a column containing blocks to be a party and think of the blocks in
the column as the attendees. We'll say a party is lonely if there's only one block attending it,
and say a party is exclusive if there's any block who's not there (i.e., who's in another column).
1. Using this terminology, give simple and clear English renditions of each of the sentences.
For example, sentence 2 says some of the parties are not lonely, and sentence 7 says there's
only one party. You'll find sentences 4 and 9 the hardest to understand. Construct a lot
of worlds to see what they mean.
2. With the exception of 4 and 9, all of the sentences are first-order equivalent to other
sentences on the list, or to negations of other sentences (or both). Which sentences are 3
and 5 equivalent to? Which sentences do 3 and 5 negate?
3. Sentences 4 and 9 are logically independent: it's possible for the two to have any pattern
of truth values. Construct four worlds: one in which both are true (World 11.7.1), one
in which 4 is true and 9 false (World 11.7.2), one in which 4 is false and 9 true (World
11.7.3), and one in which both are false (World 11.7.4).
Submit the worlds you've constructed and turn the remaining answers in to your instructor.
Section 11.2
Mixed quantifiers
Ready to start juggling with both hands? We now turn to the important case
in which universal and existential quantifiers get mixed together. Let's start
with the following sentence:

∀x [Cube(x) → ∃y (Tet(y) ∧ LeftOf(x, y))]

This sentence shouldn't throw you. It has the overall Aristotelian form
∀x [P(x) → Q(x)], which we have seen many times before. It says that every
cube has some property or other. What property? The property expressed
by ∃y (Tet(y) ∧ LeftOf(x, y)): the property of being to the left of some
tetrahedron.
You try it
................................................................
1. Open the files Mixed Sentences and König's World. If you evaluate the two
sentences, you'll see that the first is true and the second false. We're going
to play the game to see why they aren't both true.
2. Play the game on the first sentence, specifying your initial commitment as
true. Since this sentence is indeed true, you should find it easy to win.
When Tarski's World makes its choice, all you need to do is choose any
block in the same row as Tarski's.
3. Now play the game with the second sentence, again specifying your initial commitment as true. This time Tarski's World is going to beat you,
because you've got to choose first. As soon as you choose a block, Tarski's
World chooses a block in the other row. Play a couple of times, choosing blocks
in different rows. See who's got the advantage now?
4. Just for fun, delete a row of blocks so that both of the sentences come out
true. Now you can win the game. So there, Tarski! She who laughs last
laughs best. Save the modified world as World Mixed 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Have you noticed that switching the order of the quantifiers does something
quite different from switching around the variables in the body of the sentence?
For example, consider the sentences
∀x ∃y Likes(x, y)
∀x ∃y Likes(y, x)
Assuming our domain consists of people, the first of these says that everybody
likes somebody or other, while the second says everybody is liked by somebody
or other. These are both very different claims from either of these:
∃y ∀x Likes(x, y)
∃y ∀x Likes(y, x)
Here, the first claims that there is a (very popular) person whom everybody
likes, while the second claims that there is a (very indiscriminate?) person
who likes absolutely everyone.
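One way to see that the four sentences make different claims is to evaluate all of them in a single small model. The sketch below (our illustration; the likes relation is made up, not from the book) uses a world in which everybody likes b, but b likes only b:

```python
# A made-up model: three people, and a "likes" relation given as a set of pairs.
people = ["a", "b", "c"]
likes = {("a", "b"), ("b", "b"), ("c", "b")}  # everyone likes b; b likes only b

# ∀x ∃y Likes(x, y): everybody likes somebody or other.
s1 = all(any((x, y) in likes for y in people) for x in people)
# ∀x ∃y Likes(y, x): everybody is liked by somebody or other.
s2 = all(any((y, x) in likes for y in people) for x in people)
# ∃y ∀x Likes(x, y): there is somebody whom everybody likes.
s3 = any(all((x, y) in likes for x in people) for y in people)
# ∃y ∀x Likes(y, x): there is somebody who likes everyone.
s4 = any(all((y, x) in likes for x in people) for y in people)

print(s1)  # True: each person likes b
print(s2)  # False: nobody likes a or c
print(s3)  # True: b is liked by everybody
print(s4)  # False: nobody likes all three people
```

Since the four sentences take different truth values in this one model, no two of them can be logically equivalent across the board.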
In the last section, we saw how, using two existential quantifiers and the
identity predicate, we can say that there are at least two things with a particular property (say cubes):

∃x ∃y (x ≠ y ∧ Cube(x) ∧ Cube(y))
With mixed quantifiers and identity, we can say quite a bit more. For example,
consider the sentence

∃x (Cube(x) ∧ ∀y (Cube(y) → y = x))

This says that there is a cube, and furthermore every cube is identical to it.
Some cube, in other words, is the only cube. Thus, this sentence will be true
if and only if there is exactly one cube. There are many ways of saying things
like this in fol; we'll run across others in the exercises. We discuss numerical
claims more systematically in Chapter 14.
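You can confirm the "exactly one cube" reading by evaluating the sentence in miniature worlds with zero, one, and two cubes. A sketch (our illustration; the mini-worlds are invented, not Tarski's World files):

```python
def exactly_one_cube(world, cube):
    """Evaluate ∃x (Cube(x) ∧ ∀y (Cube(y) → y = x)) in a tiny world.

    `world` is a list of object names; `cube` maps each name to whether
    that object is a cube.
    """
    return any(cube[x] and all((not cube[y]) or y == x for y in world)
               for x in world)

world = ["a", "b", "c"]
print(exactly_one_cube(world, {"a": False, "b": False, "c": False}))  # False: no cube
print(exactly_one_cube(world, {"a": True,  "b": False, "c": False}))  # True: one cube
print(exactly_one_cube(world, {"a": True,  "b": True,  "c": False}))  # False: two cubes
```

The sentence comes out true in exactly the middle case, matching the "exactly one" reading.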
Remember
When you are dealing with mixed quantifiers, the order is very important.
∀x ∃y R(x, y) is not logically equivalent to ∃y ∀x R(x, y).
Exercises
11.8 If you skipped the You try it section, go back and do it now. Submit the file World Mixed 1.

11.9 (Simple mixed quantifier sentences) Open Hilbert's Sentences and Peano's World. Evaluate the
sentences one by one, playing the game if an evaluation surprises you. Once you understand
the sentences, modify the false ones by adding a single negation sign so that they come out
true. The catch is that you aren't allowed to add the negation sign to the front of the sentence!
Add it to an atomic formula, if possible, and try to make the claim nonvacuously true. (This
won't always be possible.) Make sure you understand both why the original sentence is false
and why your modified sentence is true. When you're done, submit your sentence list with the
changes.
11.10 (Mixed quantifier sentences with identity) Open Leibniz's World and use it to evaluate the
sentences in Leibniz's Sentences. Make sure you understand all the sentences and follow any
instructions in the file. Submit your modified sentence list.
11.11 (Building a world) Create a world in which all ten sentences in Arnault's Sentences are true.
11.12 (Name that object) Open Carroll's World and Hercule's Sentences. Try to figure out which objects
have names, and what they are. You should be able to figure this out from the sentences, all
of which are true. Once you have come to your conclusion, add the names to the objects and
check to see if all the sentences are true. Submit your modified world.
The remaining three exercises all have to do with the sentences in the file Buridan's Sentences and build
on one another.
11.13 (Building a world) Open Buridan's Sentences. Build a world in which all ten sentences are true.
11.14 (Consequence) These two English sentences are consequences of the ten sentences in Buridan's
Sentences.
1. There are no cubes.
2. There is exactly one large tetrahedron.
Because of this, they must be true in any world in which Buridan's sentences are all true. So
of course they must be true in World 11.13, no matter how you built it.
Translate the two sentences, adding them to the list in Buridan's Sentences. Name the
expanded list Sentences 11.14. Verify that they are all true in World 11.13.
Modify the world by adding a cube. Try placing it at various locations and giving it
various sizes to see what happens to the truth values of the sentences in your file. One or
more of the original ten sentences will always be false, though different ones at different
times. Find a world in which only one of the original ten sentences is false and name it
World 11.14.1.
Next, get rid of the cube and add a second large tetrahedron. Again, move it around and
see what happens to the truth values of the sentences. Find a world in which only one of
the original ten sentences is false and name it World 11.14.2.
Submit your sentence file and two world files.
11.15 (Independence) Show that the following sentence is independent of those in Buridan's Sentences,
Section 11.3
Exercises
11.16 Open Montague's Sentences. This file contains expressions that are halfway between English and first-order logic. Our goal is to edit this file until it contains translations of
the following English sentences. You should read each English sentence below, make sure
you understand how we got to the halfway point, and then complete the translation by
replacing the hyphenated expression with a wff of first-order logic.
1. Every cube is to the left of every tetrahedron. [In the Sentence window, you
see the halfway completed translation, together with some blanks that need to
be replaced by wffs. Commented out below this, you will find an intermediate
sentence. Make sure you understand how we got to this intermediate stage of
the translation. Then complete the translation by replacing the blank with

∀y (Tet(y) → LeftOf(x, y))

Once this is done, check to see if you have a well-formed sentence. Does it look
like a proper translation of the original English? It should.]
2. Every small cube is in back of a large cube.
3. Some cube is in front of every tetrahedron.
4. A large cube is in front of a small cube.
5. Nothing is larger than everything.
6. Every cube in front of every tetrahedron is large.
7. Everything to the right of a large cube is small.
8. Nothing in back of a cube and in front of a cube is large.
9. Anything with nothing in back of it is a cube.
10. Every dodecahedron is smaller than some tetrahedron.
11.17 (More multiple quantifier sentences) Now we will try translating some multiple quantifier
sentences completely from scratch. You should try to use the step-by-step procedure.
Start a new sentence file and translate the following English sentences.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
Open Bolzano's World. All of the above English sentences are true in this world. Verify
that all your translations are true as well.
Now open Ron's World. The English sentences 4, 5, 8, 9, and 10 are true, but the rest are
false. Verify that the same holds of your translations.
Open Claire's World. Here you will find that the English sentences 1, 3, 5, 7, 9, and 10
are true, the rest false. Again, check to see that your translations have the appropriate
truth value.
Finally, open Peano's World. Notice that only sentences 8 and 9 are true. Check to see
that your translations have the same truth values.
Section 11.4
Paraphrasing English
Some English sentences do not easily lend themselves to direct translation
using the step-by-step procedure. With such sentences, however, it is often
quite easy to come up with an English paraphrase that is amenable to the
procedure. Consider, for example, "If a freshman takes a logic class, then he
or she must be smart." The step-by-step procedure does not work here. If we
try to apply the procedure, we would get something like
There is one particularly notorious kind of sentence that needs paraphrasing to get an adequate first-order translation. They are known as donkey
sentences, because the first and most discussed example of this kind is the
sentence

Every farmer who owns a donkey beats it.
What makes such a sentence a bit tricky is the existential noun phrase "a
donkey" in the noun phrase "every farmer who owns a donkey." The existential
noun phrase serves as the antecedent of the pronoun "it" in the verb phrase;
it's the donkey that gets beaten. Applying the step-by-step method might lead
you to translate this as follows:

∀x ((Farmer(x) ∧ ∃y (Donkey(y) ∧ Owns(x, y))) → Beats(x, y))
This translation, however, cannot be correct, since it's not even a sentence; the
occurrence of y in Beats(x, y) is free, not bound. If we move the parenthesis
to capture this free variable, we obtain the following, which means something
quite different from our English sentence.

∀x (Farmer(x) ∧ ∃y (Donkey(y) ∧ Owns(x, y) ∧ Beats(x, y)))
This means that everything in the domain of discourse is a farmer who owns
and beats a donkey, something which neither implies nor is implied by the
original sentence.
To get a correct first-order translation of the original donkey sentence, it
can be paraphrased as

Every donkey owned by any farmer is beaten by them.

This sentence clearly needs two universal quantifiers in its translation:

∀x (Donkey(x) → ∀y ((Farmer(y) ∧ Owns(y, x)) → Beats(y, x)))
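The difference between the correct translation and the flawed one shows up in a tiny invented model (ours, for illustration): one farmer who owns and beats both of his donkeys. The English sentence is true there, and so is the correct translation, but the flawed formula is false because the donkeys themselves are not farmers:

```python
# A tiny invented model: one farmer who owns and beats two donkeys.
farmers = {"f"}
donkeys = {"d1", "d2"}
things = farmers | donkeys
owns = {("f", "d1"), ("f", "d2")}
beats = {("f", "d1"), ("f", "d2")}

# Correct translation:
# ∀x (Donkey(x) → ∀y ((Farmer(y) ∧ Owns(y, x)) → Beats(y, x)))
correct = all(all((y, x) in beats for y in farmers if (y, x) in owns)
              for x in donkeys)

# Flawed translation:
# ∀x (Farmer(x) ∧ ∃y (Donkey(y) ∧ Owns(x, y) ∧ Beats(x, y)))
flawed = all(x in farmers and any((x, y) in owns and (x, y) in beats
                                  for y in donkeys)
             for x in things)

print(correct)  # True: every owned donkey is beaten by its owner
print(flawed)   # False: the donkeys are not farmers
```

The flawed formula demands that every object be a donkey-beating farmer, which is why it fails in any world that contains a donkey at all.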
Remember
In translating from English to fol, the goal is to get a sentence that has
the same meaning as the original. This sometimes requires changes in the
surface form of the sentence.
Exercises
11.18 (Sentences that need paraphrasing before translation) Translate the following sentences by first
giving a suitable English paraphrase. Some of them are donkey sentences, so be careful.
1. Only large objects have nothing in front of them.
2. If a cube has something in front of it, then it's small.
3. Every cube in back of a dodecahedron is also smaller than it.
4. If e is between two objects, then they are both small.
5. If a tetrahedron is between two objects, then they are both small.
Open Ron's World. Recall that there are lots of hidden things in this world. Each of the above
English sentences is true in this world, so the same should hold of your translations. Check to
see that it does. Now open Bolzano's World. In this world, only sentence 3 is true. Check that
the same holds of your translations. Next open Wittgenstein's World. In this world, only the
English sentence 5 is true. Verify that your translations have the same truth values. Submit
your sentence file.
11.19 (More sentences that need paraphrasing before translation) Translate the following sentences
same holds of your translations. Next open Wittgenstein's World. In this world, only the English
sentences 2 and 3 are true. Verify that your translations have the same truth values. Submit
your sentence file.
11.20 (More translations) The following English sentences are true in Gödel's World. Translate them,
and make sure your translations are also true. Then modify the world in various ways, and
check that your translations track the truth value of the English sentence.
1. Nothing to the left of a is larger than everything to the left of b.
2. Nothing to the left of a is smaller than anything to the left of b.
3. The same things are left of a as are left of b.
4. Anything to the left of a is smaller than something that is in back of every cube to the
right of b.
5. Every cube is smaller than some dodecahedron but no cube is smaller than every dodecahedron.
6. If a is larger than some cube then it is smaller than every tetrahedron.
7. Only dodecahedra are larger than everything else.
8. All objects with nothing in front of them are tetrahedra.
9. Nothing is between two objects which are the same shape.
10. Nothing but a cube is between two other objects.
11. b has something behind it which has at least two objects behind it.
12. More than one thing is smaller than something larger than b.
Submit your sentence file.
11.21 Using the symbols introduced in Table 1.2, page 30, translate the following into fol. Do
not introduce any additional names or predicates. Comment on any shortcomings in your
translations. When you are done, submit your sentence file and turn in your comments to your
instructor.
1. Every student gave a pet to some other student sometime or other.
2. Claire is not a student unless she owned a pet (at some time or other).
3. No one ever owned both Folly and Scruffy at the same time.
4. No student fed every pet.
5. No one who owned a pet at 2:00 was angry.
6. No one gave Claire a pet this morning. (Assume that "this morning" simply means
before 12:00.)
7. If Max ever gave Claire a pet, she owned it then and he didn't.
8. You can't give someone something you don't own.
9. Max fed all of his pets before Claire fed any of her pets. (Assume that Max's pets
are the pets he owned at 2:00, and the same for Claire.)
10. Max gave Claire a pet between 2:00 and 3:00. It was hungry.
11.22 Using the symbols introduced in Table 1.2, page 30, translate the following into colloquial
English. Assume that each of the sentences is asserted at 2 p.m. on January 2, 2011, and
use this fact to make your translations more natural. For example, you could translate
Owned(max, folly, 2:00) as "Max owns Folly."
1. ∀x [Student(x) → ∃z (Pet(z) ∧ Owned(x, z, 2:00))]
2. ∃x [Student(x) ∧ ∀z (Pet(z) → Owned(x, z, 2:00))]
3. ∀x ∀t [Gave(max, x, claire, t) → ∃y ∃t′ Gave(claire, x, y, t′)]
4. ∀x [Owned(claire, x, 2:00) → ∃t (t < 2:00 ∧ Gave(max, x, claire, t))]
5. ∃x ∃t (1:55 < t ∧ t < 2:00 ∧ Gave(max, x, claire, t))
6. ∀y [Person(y) → ∃x ∃t (1:55 < t ∧ t < 2:00 ∧ Gave(max, x, y, t))]
7. ∃z {Student(z) ∧ ∀y [Person(y) → ∃x ∃t (1:55 < t ∧ t < 2:00 ∧
Gave(z, x, y, t))]}
11.23 Translate the following into fol. As usual, explain the meanings of the names, predicates, and
function symbols you use, and comment on any shortcomings in your translations.
1. There's a sucker born every minute.
2. Whither thou goest, I will go.
3. Soothsayers make a better living in the world than truthsayers.
4. To whom nothing is given, nothing can be required.
5. If you always do right, you will gratify some people and astonish the rest.
Section 11.5
ambiguity
context sensitivity
You try it
................................................................
2. Open Anderson's First World. Notice that this world has four medium dodecahedra surrounding a single medium cube.
3. Imagine that Max makes the following claim about this situation:
"
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
The problems of translation are much more difficult when we look at extended discourse, where more than one sentence comes in. To get a feeling for
the difficulty, we start off with a couple of problems about extended discourse.
extended discourse
Remember
An important source of ambiguity in English stems from the order in which
quantifiers are interpreted. To translate such a sentence into fol, you
must know which order the speaker of the sentence had in mind. This
can often be determined by looking at the context in which the sentence
was used.
Exercises
11.24 If you skipped the You try it section, go back and do it now. Save your sentence file as
Sentences Max 1.
11.25 Open Reichenbach's World 1 and examine it. Check to see that all of the sentences in the
following discourse are true in this world.
There are (at least) two cubes. There is something between them. It is a medium
dodecahedron. It is in front of a large dodecahedron. These two are left of a small
dodecahedron. There are two tetrahedra.
Translate this discourse into a single first-order sentence. Check to see that your translation is true. Now check to see that your translation is false in Reichenbach's World
2.
Open Reichenbach's World 2. Check to see that all of the sentences in the following
discourse are true in this world.
There are two tetrahedra. There is something between them. It is a medium
dodecahedron. It is in front of a large dodecahedron. There are two cubes. These
two are left of a small dodecahedron.
Translate this into a single first-order sentence. Check to see that your translation is
true. Now check to see that your translation is false in Reichenbach's World 1. However,
note that the English sentences in the two discourses are in fact exactly the same; they
have just been rearranged! The moral of this exercise is that the correct translation of a
sentence into first-order logic (or any other language) can be very dependent on context.
Submit your sentence file.
11.26 (Ambiguity) Use Tarski's World to create a new sentence file and use it to translate the following
sentences into fol. Each of these sentences is ambiguous, so you should have two different
translations of each. Put the two translations of sentence 1 in slots 1 and 2, the two translations
of sentence 3 in slots 3 and 4, and so forth.
1. Every cube is between a pair of dodecahedra.
3. Every cube to the right of a dodecahedron is smaller than it is.
5. Cube a is not larger than every dodecahedron.
11.27
11.28
Section 11.6
∀x OlderThan(mother(x), x)
It expresses the claim that a person's mother is always older than the person.
To express the same thing with the relation symbol, we might write
∀x ∃y [MotherOf(y, x) ∧ OlderThan(y, x)]
Actually, one might wonder whether the second sentence quite manages to
express the claim made by the first, since all it says is that everyone has at
least one mother who is older than they are. One might prefer something like
∀x ∀y [MotherOf(y, x) → OlderThan(y, x)]
This says that every mother of everyone is older than they are. But this too
seems somewhat deficient. A still better translation would be to conjoin one of
the above sentences with the following two sentences which, together, assert
that the relation of being the mother of someone is functional. Everyone has
at least one, and everyone has at most one.
∀x ∃y MotherOf(y, x)
and
∀x ∀y ∀z [(MotherOf(y, x) ∧ MotherOf(z, x)) → y = z]
We will study this sort of thing much more in Chapter 14, where we will
see that these two sentences can jointly be expressed by one rather opaque
sentence:
∀x ∃y [MotherOf(y, x) ∧ ∀z [MotherOf(z, x) → y = z]]
And, if we wanted to, we could then incorporate our earlier sentence and
express the first claim by means of the horrendous looking:
∀x ∃y [MotherOf(y, x) ∧ OlderThan(y, x) ∧ ∀z [MotherOf(z, x) → y = z]]
By now it should be clearer why function symbols are so useful. Look at all
the connectives and additional quantifiers that have come into translating our
very simple sentence
∀x OlderThan(mother(x), x)
We present some exercises below that will give you practice translating
sentences from English into fol, sentences that show why it is nice to have
function symbols around.
Remember
Anything you can express using an n-ary function symbol can also be
expressed using an n + 1-ary relation symbol, plus the identity predicate,
but at a cost in terms of the complexity of the sentences used.
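This trade-off can be checked mechanically. The sketch below (an illustration, not from the text) enumerates every candidate mother function and every OlderThan relation on a three-element domain and confirms that the function-symbol sentence agrees with its relational paraphrase whenever MotherOf is the graph of the function:

```python
from itertools import product

D = range(3)  # a three-element domain

def functional_version(mother, older_than):
    # ∀x OlderThan(mother(x), x)
    return all((mother[x], x) in older_than for x in D)

def relational_version(mother_of, older_than):
    # ∀x ∃y [MotherOf(y, x) ∧ OlderThan(y, x) ∧ ∀z (MotherOf(z, x) → y = z)]
    return all(
        any((y, x) in mother_of and (y, x) in older_than and
            all((z, x) not in mother_of or z == y for z in D)
            for y in D)
        for x in D)

pairs = list(product(D, repeat=2))
for mother in product(D, repeat=3):            # every function D -> D
    graph = {(mother[x], x) for x in D}        # MotherOf as the graph of mother
    for bits in product([False, True], repeat=9):
        older = {p for p, b in zip(pairs, bits) if b}
        assert functional_version(mother, older) == relational_version(graph, older)
print("function and relation versions agree on all 13824 structures")
```

The check succeeds precisely because the graph of a function automatically satisfies the "at least one, at most one" conditions that the relational sentence must state explicitly.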
Exercises
11.29 Translate the following sentences into fol twice, once using the function symbol mother, once
"
11.30 Translate the following into a version of fol that has function symbols height, mother, and
"
father, the predicate >, and names for the people mentioned.
1. Mary's father is taller than Mary but not taller than Claire's father.
2. Someone is taller than Claire's father.
3. Someones mother is taller than their father.
4. Everyone is taller than someone else.
5. No one is taller than himself.
6. Everyone but J.R. who is taller than Claire is taller than J.R.
7. Everyone who is shorter than Claire is shorter than someone who is shorter than
Melanie's father.
8. Someone is taller than Jon's paternal grandmother but shorter than his maternal grandfather.
Say which sentences are true, referring to the table in Figure 9.1 (p. 256). Take the domain of
quantification to be the people mentioned in the table. Turn in your answers.
11.31 Translate the following sentences into the blocks language augmented with the four function
"
symbols lm, rm, fm, and bm discussed in Section 1.5 (page 33) and further discussed in connection with quantifiers in Section 9.7 (page 254). Tell which of these sentences are true in
Malcev's World.
1. Every cube is to the right of the leftmost block in the same row.
2. Every block is in the same row as the leftmost block in the same row.
3. Some block is in the same row as the backmost block in the same column.
4. Given any two blocks, the first is the leftmost block in the same row as the second if
and only if there is nothing to the left of the second.
5. Given any two blocks, the first is the leftmost block in the same row as the second if
and only if there is nothing to the left of the second and the two blocks are in the
same row.
Turn in your answers.
11.32 Using the first-order language of arithmetic described earlier, express each of the following in
"
fol.
1. Every number is either 0 or greater than 0.
2. The sum of any two numbers greater than 1 is smaller than the product of the same
two numbers.
3. Every number is even. [This is false, of course.]
4. If x² = 1 then x = 1. [Hint: Don't forget the implicit quantifier.]
5. For any number x, if ax² + bx + c = 0 then either x = (−b + √(b² − 4ac))/2a
or x = (−b − √(b² − 4ac))/2a.
In this problem treat a, b, c as constants but x as a variable, as usual in algebra.
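A quick numeric sanity check of the two roots (an illustrative sketch; the coefficients are arbitrary sample values with a nonnegative discriminant):

```python
import math

a, b, c = 1.0, -3.0, 2.0            # arbitrary sample coefficients
disc = b * b - 4 * a * c            # discriminant b² − 4ac

root_plus = (-b + math.sqrt(disc)) / (2 * a)
root_minus = (-b - math.sqrt(disc)) / (2 * a)

# Each root satisfies a·x² + b·x + c = 0 (up to floating-point error).
for x in (root_plus, root_minus):
    assert abs(a * x * x + b * x + c) < 1e-9
print(root_plus, root_minus)  # 2.0 1.0
```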
Section 11.7
Prenex form
When we translate complex sentences of English into fol, it is common to
end up with sentences where the quantifiers and connectives are all scrambled
together. This is usually due to the way in which the translations of complex
noun phrases of English use both quantifiers and connectives:
∀x (P(x) → . . . )
∃x (P(x) ∧ . . . )
As a result, the translation of (the most likely reading of) a sentence like
Every cube to the left of a tetrahedron is in back of a dodecahedron ends up
looking like
∀x [(Cube(x) ∧ ∃y (Tet(y) ∧ LeftOf(x, y))) → ∃y (Dodec(y) ∧ BackOf(x, y))]
prenex form
While this is the most natural translation of our sentence, there are situations where it is not the most convenient one. It is sometimes important
that we be able to rearrange sentences like this so that all the quantifiers are
out in front and all the connectives in back. Such a sentence is said to be in
prenex form, since all the quantifiers come first.
Stated more precisely, a wff is in prenex normal form if either it contains
no quantifiers at all, or else is of the form
Q₁v₁ Q₂v₂ . . . Qₙvₙ P
where each Qᵢ is either ∀ or ∃, each vᵢ is some variable, and the wff P is
quantifier-free.
There are several reasons one might want to put sentences into prenex
form. One is that it gives you a nice measure of the logical complexity of the
sentences. What turns out to matter is not so much the number of quantifiers,
as the number of times you get a flip from ∀ to ∃ or the other way round.
The more of these so-called alternations, the more complex the sentence is,
logically speaking. Another reason is that this prenex form is quite analogous
to the conjunctive normal form for quantifier-free wffs we studied earlier. And
like that normal form, it is used extensively in automated theorem proving.
It turns out that every sentence is logically equivalent to one (in fact many)
in prenex form. In this section we will present some rules for carrying out this
transformation. When we apply the rules to our earlier example, we will get
quantifier alternations
converting to
prenex form
∀x P(x) → ∀y Q(y)
In getting a formula into prenex form, it's a good idea to get rid of conditionals
in favor of Boolean connectives, since these interact more straightforwardly
∀y ∃x (¬P(x) ∨ Q(y))
If we had done it in the other order, we would have obtained the superficially
different
∃x ∀y (¬P(x) ∨ Q(y))
While the order of mixed quantifiers is usually quite important, in this case it
does not matter because of the pattern of variables within the matrix of the
wff.
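This insensitivity to quantifier order can be confirmed by brute force on a small domain. The sketch below (an illustration; P and Q stand for arbitrary unary predicates over three objects) checks that both quantifier orders agree with the conditional ∀x P(x) → ∀y Q(y) on every interpretation:

```python
from itertools import product

D = range(3)

def conditional(P, Q):    # ∀x P(x) → ∀y Q(y)
    return (not all(P)) or all(Q)

def forall_exists(P, Q):  # ∀y ∃x (¬P(x) ∨ Q(y))
    return all(any((not P[x]) or Q[y] for x in D) for y in D)

def exists_forall(P, Q):  # ∃x ∀y (¬P(x) ∨ Q(y))
    return any(all((not P[x]) or Q[y] for y in D) for x in D)

agree = all(
    conditional(P, Q) == forall_exists(P, Q) == exists_forall(P, Q)
    for P in product([False, True], repeat=3)
    for Q in product([False, True], repeat=3))
assert agree
print("both prenex forms match the conditional on all 64 interpretations")
```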
Here is another example:
(∀x P(x) → R(b)) → ∀x (P(x) ∧ ∀x Q(x))
Again we have a conditional, but this time neither the antecedent nor the
consequent is in prenex normal form. Following the basic strategy of working
from the inside out, lets first put the antecedent and then the consequent
each into prenex form and then worry about what to do about the conditional.
Using the principle of null quantification on the antecedent we obtain
∃x (P(x) → R(b)) → ∀x (P(x) ∧ ∀x Q(x))
Next we use the principle involving the distribution of ∀ and ∧ on the consequent:
∃x (P(x) → R(b)) → ∀x (P(x) ∧ Q(x))
Now both the antecedent and consequent are in prenex form. Recall that it's
a good idea to get rid of conditionals in favor of Boolean connectives. Hence,
we replace → by its equivalent using ¬ and ∨:
¬∃x (P(x) → R(b)) ∨ ∀x (P(x) ∧ Q(x))
Now we have a disjunction, but one of the disjuncts is not in prenex form.
Again, that can be fixed using DeMorgan's law:
∀x ¬(P(x) → R(b)) ∨ ∀x (P(x) ∧ Q(x))
Now both disjuncts are in prenex form. We need to pull the ∀s out in front. (If
they were both ∃s, we could do this easily, but they aren't.) Here is probably
the least obvious step in the process: In order to get ready to pull the ∀s out
in front, we replace the x in the second disjunct by a variable (say z) not in
the first disjunct:
∀x ¬(P(x) → R(b)) ∨ ∀z (P(z) ∧ Q(z))
We now use the principle of null quantification twice, first on ∀x:
∀x [¬(P(x) → R(b)) ∨ ∀z (P(z) ∧ Q(z))]
Finally, we use the same principle on ∀z, giving a wff in prenex form:
∀x ∀z [¬(P(x) → R(b)) ∨ (P(z) ∧ Q(z))]
It is at this step that things would have gone wrong if we had not first changed
the second x to a z. Do you see why? The wrong quantifiers would have bound
the variables in the second disjunct.
If we wanted to, for some reason, we could now go on and put the inner
part, the part following all the quantifiers, into one of our propositional normal
forms, CNF or DNF.
With these examples behind us, here is a step-by-step transformation of
our original sentence into the one in prenex form given above. We have abbreviated the predicates in order to make it easier to read.
∀x [(C(x) ∧ ∃y (T(y) ∧ L(x, y))) → ∃y (D(y) ∧ B(x, y))]
∀x [¬(C(x) ∧ ∃y (T(y) ∧ L(x, y))) ∨ ∃y (D(y) ∧ B(x, y))]
∀x [¬∃y (C(x) ∧ T(y) ∧ L(x, y)) ∨ ∃y (D(y) ∧ B(x, y))]
∀x [∀y ¬(C(x) ∧ T(y) ∧ L(x, y)) ∨ ∃y (D(y) ∧ B(x, y))]
∀x [∀y ¬(C(x) ∧ T(y) ∧ L(x, y)) ∨ ∃z (D(z) ∧ B(x, z))]
∀x ∀y [¬(C(x) ∧ T(y) ∧ L(x, y)) ∨ ∃z (D(z) ∧ B(x, z))]
∀x ∀y [∃z ¬(C(x) ∧ T(y) ∧ L(x, y)) ∨ ∃z (D(z) ∧ B(x, z))]
∀x ∀y ∃z [¬(C(x) ∧ T(y) ∧ L(x, y)) ∨ (D(z) ∧ B(x, z))]
∀x ∀y ∃z [(C(x) ∧ T(y) ∧ L(x, y)) → (D(z) ∧ B(x, z))]
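As a sanity check on a transformation like this (an illustrative script, not part of the text), one can verify by exhaustive search that the first and last lines of the derivation are true in exactly the same interpretations over a two-element domain:

```python
from itertools import product

D2 = range(2)

def original(C, T, L, Dd, B):
    # ∀x [(C(x) ∧ ∃y (T(y) ∧ L(x,y))) → ∃y (D(y) ∧ B(x,y))]
    return all(
        (not (C[x] and any(T[y] and L[x][y] for y in D2)))
        or any(Dd[y] and B[x][y] for y in D2)
        for x in D2)

def prenex(C, T, L, Dd, B):
    # ∀x ∀y ∃z [(C(x) ∧ T(y) ∧ L(x,y)) → (D(z) ∧ B(x,z))]
    return all(
        any((not (C[x] and T[y] and L[x][y])) or (Dd[z] and B[x][z])
            for z in D2)
        for x in D2 for y in D2)

unary = list(product([False, True], repeat=2))
binary = [(r[:2], r[2:]) for r in product([False, True], repeat=4)]

equivalent = all(
    original(C, T, L, Dd, B) == prenex(C, T, L, Dd, B)
    for C in unary for T in unary for Dd in unary
    for L in binary for B in binary)
assert equivalent
print("original and prenex forms agree on all 16384 interpretations")
```

A finite check like this is not a proof of logical equivalence, but it is a fast way to catch a slip in a hand conversion.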
Remember
A sentence is in prenex form if any quantifiers contained in it are out in
front. Any sentence is logically equivalent to one in prenex form.
Exercises
Derive the following from the principles given earlier, by replacing → by its definition in terms of
¬ and ∨.
11.33
∀x P → Q  ⇔  ∃x [P → Q], if x not free in Q
11.34
∃x P → Q  ⇔  ∀x [P → Q], if x not free in Q
11.35
P → ∀x Q  ⇔  ∀x [P → Q], if x not free in P
11.36
P → ∃x Q  ⇔  ∃x [P → Q], if x not free in P
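These null-quantification principles can be spot-checked by exhaustive evaluation on a small domain (a sketch for illustration; since x is not free in the unquantified wff, it is modeled as a plain truth value q):

```python
from itertools import product

D = range(3)

def verify_null_quantification():
    for P in product([False, True], repeat=3):
        for q in [False, True]:
            # ∀x P(x) → Q   ⇔   ∃x [P(x) → Q]
            assert ((not all(P)) or q) == any((not P[x]) or q for x in D)
            # ∃x P(x) → Q   ⇔   ∀x [P(x) → Q]
            assert ((not any(P)) or q) == all((not P[x]) or q for x in D)
            # Q → ∀x P(x)   ⇔   ∀x [Q → P(x)]
            assert ((not q) or all(P)) == all((not q) or P[x] for x in D)
            # Q → ∃x P(x)   ⇔   ∃x [Q → P(x)]
            assert ((not q) or any(P)) == any((not q) or P[x] for x in D)
    return True

assert verify_null_quantification()
print("all four null-quantification principles hold on a 3-element domain")
```

Note that the check relies on the domain being nonempty, just as the principles themselves do in fol.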
11.37 (Putting sentences in Prenex form) Open Jon Russell's Sentences. You will find ten sentences,
at the odd numbered positions. Write a prenex form of each sentence in the space below it.
Save your sentences. Open a few worlds, and make sure that your prenex form has the same
truth value as the sentence above it.
11.38 (Some invalid quantifier manipulations) We remarked above on the invalidity of some quantifier
manipulations that are superficially similar to the valid ones. In fact, in both cases one side
is a logical consequence of the other side, but not vice versa. We will illustrate this. Build a
world in which (1) and (3) below are true, but (2) and (4) are false.
1. ∀x [Cube(x) ∨ Tet(x)]
2. ∀x Cube(x) ∨ ∀x Tet(x)
3. ∃x Cube(x) ∧ ∃x Small(x)
4. ∃x [Cube(x) ∧ Small(x)]
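One world that does the job (a hypothetical sketch; blocks are modeled simply as shape/size pairs) contains a large cube and a small tetrahedron:

```python
# Hypothetical world: one large cube, one small tetrahedron.
world = [("cube", "large"), ("tet", "small")]

def Cube(b):  return b[0] == "cube"
def Tet(b):   return b[0] == "tet"
def Small(b): return b[1] == "small"

s1 = all(Cube(b) or Tet(b) for b in world)                          # ∀x [Cube(x) ∨ Tet(x)]
s2 = all(Cube(b) for b in world) or all(Tet(b) for b in world)      # ∀x Cube(x) ∨ ∀x Tet(x)
s3 = any(Cube(b) for b in world) and any(Small(b) for b in world)   # ∃x Cube(x) ∧ ∃x Small(x)
s4 = any(Cube(b) and Small(b) for b in world)                       # ∃x [Cube(x) ∧ Small(x)]

assert s1 and not s2      # (1) true, (2) false
assert s3 and not s4      # (3) true, (4) false
print("consequence runs one way only")
```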
Section 11.8
Exercises
11.39 (Translation) Open Peirce's World. Look at it in 2-D to remind yourself of the hidden objects.
Start a new sentence file where you will translate the following English sentences. Again, be
sure to check each of your translations to see that it is indeed a true sentence.
1. Everything is either a cube or a tetrahedron.
2. Every cube is to the left of every tetrahedron.
3. There are at least three tetrahedra.
4. Every small cube is in back of a particular large cube.
5. Every tetrahedron is small.
6. Every dodecahedron is smaller than some tetrahedron. [Note: This is vacuously true in
this world.]
Now let's change the world so that none of the English sentences are true. (We can do this by
changing the large cube in front to a dodecahedron, the large cube in back to a tetrahedron,
and deleting the two small tetrahedra in the far right column.) If your answers to 1–5 are
correct, all of your translations should be false as well. If not, you have made a mistake in
translation. Make further changes, and check to see that the truth values of your translations
track those of the English sentences. Submit your sentence file.
11.40 (More translations for practice) This exercise is just to give you more practice translating
sentences of various sorts. They are all true in Skolem's World, in case you want to look while
translating.
Translate the following sentences.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
Open Skolem's World. Notice that all of the above English sentences are true. Verify that
the same holds of your translations.
This time, rather than open other worlds, make changes to Skolem's World and see
that the truth values of your translations track those of the English sentences. For example, consider sentence 5. Add a small dodecahedron between the front two cubes.
The English sentence is still true. Is your translation? Now move the dodecahedron
over between two tetrahedra. The English sentence is false. Is your translation? Now
make the dodecahedron medium. The English sentence is again true. How about your
translation?
Submit your sentence file.
11.41 Using the symbols introduced in Table 1.2, page 30, translate the following into fol. Do
not introduce any additional names or predicates. Comment on any shortcomings in your
translations.
1. No student owned two pets at a time.
2. No student owned two pets until Claire did.
3. Anyone who owns a pet feeds it sometime.
4. Anyone who owns a pet feeds it sometime while they own it.
5. Only pets that are hungry are fed.
11.42 Translate the following into fol. As usual, explain the meanings of the names, predicates, and
"!
function symbols you use, and comment on any shortcomings in your translations.
1. You should always except the present company.
2. There was a jolly miller once
Lived on the River Dee;
He worked and sang from morn till night
No lark more blithe than he.
3. Man is the only animal that blushes. Or needs to.
4. You can fool all of the people some of the time, and some of the people all of the time,
but you can't fool all of the people all of the time.
5. Everybody loves a lover.
11.43 Give two translations of each of the following and discuss which is the most plausible reading,
"!
and why.
1. Every senior in the class likes his or her computer, and so does the professor. [Treat
the professor as a name here and in the next sentence.]
2. Every senior in the class likes his or her advisor, and so does the professor.
3. In some countries, every student must take an exam before going to college.
4. In some countries, every student learns a foreign language before going to college.
11.44 (Using DeMorgan's Laws in mathematics) The DeMorgan Laws for quantifiers are quite helpful
Chapter 12
After all, there is no way the universal claim could be true without the specific
claim also being true. This inference step is called universal instantiation or
universal elimination. Notice that it allows you to move from a known result
that begins with a quantifier ∀x (. . . x . . .) to one (. . . c . . .) where the quantifier
has been eliminated.
universal elimination
(instantiation)
Existential introduction
There is also a simple proof step for ∃, but it allows you to introduce the
quantifier. Suppose you have established that c is a small tetrahedron. It
follows, of course, that there is a small tetrahedron. There is no way for the
specific claim about c to be true without the existential claim also being
true. More generally, if we have established a claim of the form S(c) then we
may infer ∃x S(x). This step is called existential generalization or existential
introduction.
In mathematical proofs, the preferred way to demonstrate the truth of
an existential claim is to find (or construct) a specific instance that satisfies
the requirement, and then apply existential generalization. For example, if
we wanted to prove that there are natural numbers x, y, and z for which
x² + y² = z², we could simply note that 3² + 4² = 5² and apply existential
generalization (thrice over).
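The Pythagorean example can be mimicked directly (a small illustrative script): exhibit the witness, check the specific instance, and the existential claim follows:

```python
# Witness for ∃x ∃y ∃z (x² + y² = z²): the triple (3, 4, 5).
x, y, z = 3, 4, 5
assert x**2 + y**2 == z**2   # the specific instance: 9 + 16 = 25

# Existential generalization, rendered as a finite search:
# some triple in the search space satisfies the condition.
exists = any(a*a + b*b == c*c
             for a in range(1, 10)
             for b in range(1, 10)
             for c in range(1, 10))
assert exists
print("witness:", (x, y, z))
```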
The validity of both of these inference steps is not unconditional in English.
They are valid as long as any name used denotes some object in the domain
of discourse. This holds for fol by convention, as we have already stressed,
but English is a bit more subtle here. Consider, for example, the name Santa.
The sentence
existential introduction
(generalization)
presuppositions
of these rules
Section 12.1
This is a rather obvious result, which is all the better for illustrating the
obviousness of these steps.
Proof: Using universal instantiation, we get
Cube(d) → Large(d)
and
Large(d) → LeftOf(d, b)
Applying modus ponens to Cube(d) and the first of these conditional
claims gives us Large(d). Another application of modus ponens gives
us LeftOf(d, b). But then we have
Large(d) ∧ LeftOf(d, b)
Finally, applying existential introduction gives us our desired conclusion:
∃x [Large(x) ∧ LeftOf(x, b)]
Before leaving this section, we should point out that there are ways to prove
existential statements other than by existential generalization. In particular,
to prove ∃x P(x) we could use proof by contradiction, assuming ¬∃x P(x) and
deriving a contradiction. This method of proceeding is somewhat less satisfying, since it does not actually tell you which object it is that satisfies the
condition P(x). Still, it does show that there is some such object, which is all
that is claimed. This was in fact the method we used back on page 132 to
prove that there are irrational numbers x and y such that xʸ is rational.
Remember
1. Universal instantiation: From ∀x S(x), infer S(c), so long as c denotes
an object in the domain of discourse.
2. Existential generalization: From S(c), infer ∃x S(x), so long as c denotes
an object in the domain of discourse.
Section 12.2
temporary names
existential elimination
(instantiation)
∀x [Cube(x) → Large(x)]
∀x [Large(x) → LeftOf(x, b)]
∃x Cube(x)
∃x [Large(x) ∧ LeftOf(x, b)]
The first two premises are the same but the third is weaker, since it does not
tell us which block is a cube, only that there is one. We would like to eliminate
the in our third premise, since then we would be back to the case we have
already examined. How then should we proceed? The proof would take the
following form:
Proof: We first note that the third premise assures us that there is at
least one cube. Let e name one of these cubes. We can now proceed
just as in our earlier reasoning. Applying the first premise, we see
that e must be large. (What steps are we using here?) Applying
the second premise, we see that e must also be left of b. Thus, we
have shown that e is both large and left of b. Our desired conclusion
follows (by what inference step?) from this claim.
an important condition
Section 12.3
Finally, let us suppose we are able to prove from these premises that Sandy,
a math major, is smart. Under what conditions would we be entitled to infer
that every math major at the school is smart?
At first sight, it seems that we could never draw such a conclusion, unless
there were only one math major at the school. After all, it does not follow
from the fact that one math major is smart that all math majors are. But
what if our proof that Sandy is smart uses nothing at all that is particular to
Sandy? What if the proof would apply equally well to any math major? Then
it seems that we should be able to conclude that every math major is smart.
How might one use this in a real example? Let us suppose that our argument took the following form:
Anyone who passes Logic 101 with an A is smart.
Every math major has passed Logic 101 with an A.
Every math major is smart.
Our reasoning proceeds as follows.
Proof: Let Sandy refer to any one of the math majors. By the
second premise, Sandy passed Logic 101 with an A. By the first
premise, then, Sandy is smart. But since Sandy is an arbitrarily
chosen math major, it follows that every math major is smart.
This method of reasoning is used at every turn in doing mathematics. The
general form is the following: Suppose we want to prove ∀x [P(x) → Q(x)] from
some premises. The most straightforward way to proceed is to choose a name
that is not in use, say c, assume P(c), and prove Q(c). If you are able to do
this, then you are entitled to infer the desired result.
Let's look at another example. Suppose we wanted to prove that every
prime number has an irrational square root. To apply general conditional
proof, we begin by assuming that p is an arbitrary prime number. Our goal is
general conditional
proof
universal introduction
(generalization)
In fact, it was the first example we looked at back in Chapter 10. Let's give a
proof of this argument.
Proof: We begin by taking a new name d, and think of it as standing for any member of the domain of discourse. Applying universal
instantiation twice, once to each premise, gives us
1. Cube(d) → Small(d)
2. Cube(d)
By modus ponens, we conclude Small(d). But d denotes an arbitrary
object in the domain, so our conclusion, ∀x Small(x), follows by universal generalization.
Any proof using general conditional proof can be converted into a proof
using universal generalization, together with the method of conditional proof.
Suppose we have managed to prove ∀x [P(x) → Q(x)] using general conditional
proof. Here is how we would go about proving it with universal generalization
instead. First we would introduce a new name c, and think of it as standing
for an arbitrary member of the domain of discourse. We know we can then
prove P(c) Q(c) using ordinary conditional proof, since that is what we did
in our original proof. But then, since c stands for an arbitrary member of the
domain, we can use universal generalization to get ∀x [P(x) → Q(x)].
This is how formal systems of deduction can get by without having an
explicit rule of general conditional proof. One could in a sense think of universal generalization as a special case of general conditional proof. After all,
if we wanted to prove ∀x S(x) we could apply general conditional proof to
the logically equivalent sentence ∀x [x = x → S(x)]. Or, if our language has the
predicate Thing(x) that holds of everything in the domain of discourse, we
could use general conditional proof to obtain ∀x [Thing(x) → S(x)]. But since
general conditional proof may not allow us to prove ∀x S(x) alone, universal
generalization is, well, more general.1 (The relation between general conditional proof and universal generalization will become clearer when we get to
the topic of generalized quantifiers in Section 14.4.)
We have chosen to emphasize general conditional proof since it is the
method most often used in giving rigorous informal proofs. The division of this
method into conditional proof and universal generalization is a clever trick,
but it does not correspond well to actual reasoning. This is at least in part
due to the fact that universal noun phrases of English are always restricted by
some common noun, if only the noun thing. The natural counterparts of such
statements in fol have the form ∀x [P(x) → Q(x)], which is why we typically
prove them by general conditional proof.
universal generalization
and general conditional
proof
1 We would like to thank S. Marc Cohen for his observations on the relationship between
universal generalization and general conditional proof.
1.
∀x (Cube(x) → Small(x))
∀x Cube(x)
∀x Small(x)
2.
∀x Cube(x)
∀x Small(x)
∀x (Cube(x) ∧ Small(x))
We saw there that the truth functional rules did not suffice to establish these
arguments. In this chapter we have seen (on page 335) how to establish the
first using valid methods that apply to the quantifiers. Let's conclude this
discussion by giving an informal proof of the second.
Proof: Let d be any object in the domain of discourse. By the first
premise, we obtain (by universal elimination) Cube(d). By the second
premise, we obtain Small(d). Hence we have (Cube(d) ∧ Small(d)).
But since d is an arbitrary object in the domain, we can conclude
∀x (Cube(x) ∧ Small(x)), by universal generalization.
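Both arguments can also be checked exhaustively on small domains (a brute-force sketch; a finite check is evidence rather than a proof, though these two arguments are in fact valid on every domain):

```python
from itertools import product

D = range(3)

def no_countermodel():
    for Cube in product([False, True], repeat=3):
        for Small in product([False, True], repeat=3):
            # Argument 1: ∀x (Cube(x) → Small(x)), ∀x Cube(x) ⊢ ∀x Small(x)
            if all((not Cube[x]) or Small[x] for x in D) and all(Cube):
                assert all(Small)
            # Argument 2: ∀x Cube(x), ∀x Small(x) ⊢ ∀x (Cube(x) ∧ Small(x))
            if all(Cube) and all(Small):
                assert all(Cube[x] and Small[x] for x in D)
    return True

assert no_countermodel()
print("no countermodel on a 3-element domain")
```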
Exercises
The following exercises each contain a formal argument and something that purports to be an informal
proof of it. Some of these proofs are correct while others are not. Give a logical critique of the purported
proof. Your critique should take the form of a short essay that makes explicit each proof step or method
of proof used, indicating whether it is valid or not. If there is a mistake, see if you can patch it up by
giving a correct proof of the conclusion from the premises. If the argument in question is valid, you
should be able to fix up the proof. If the argument is invalid, then of course you will not be able to fix
the proof.
12.1
"
12.2
"
12.3
"
The following exercises each contain an argument; some are valid, some not. If the argument is valid,
give an informal proof. If it is not valid, use Tarski's World to construct a counterexample.
12.4
y [Cube(y) Dodec(y)]
x [Cube(x) Large(x)]
x Large(x)
12.5
x Dodec(x)
12.6
x [Cube(x) Dodec(x)]
x [Small(x) Tet(x)]
x Small(x)
y [Cube(y) Dodec(y)]
x [Cube(x) Large(x)]
x Large(x)
x [Dodec(x) Small(x)]
12.7
x [Cube(x) Dodec(x)]
x [Cube(x) (Large(x) LeftOf(c, x))]
x [Small(x) Tet(x)]
z Dodec(z)
12.8
12.9
12.10
Section 12.4
hidden dependencies
[Figure: Alex, Zoe, Eric, Rachel, Matt, Laura, Brad, Betsy, Tom, Sarah]
a new restriction
Pseudo-proof: Assume ∀x ∃y Adjoins(x, y). We will show that, ignoring the above restriction, we can prove ∃y ∀x Adjoins(x, y). We
begin by taking c as a name for an arbitrary member of the domain. By universal instantiation, we get ∃y Adjoins(c, y). Let d be
such that Adjoins(c, d). Since c stands for an arbitrary object, we
have ∀x Adjoins(x, d). Hence, by existential generalization, we get
∃y ∀x Adjoins(x, y).
Can you spot the fallacious step in this proof? The problem is that we
generalized from Adjoins(c, d) to ∀x Adjoins(x, d). But the constant d was introduced by existential instantiation (though we did not say so explicitly) after
the constant c was introduced. Hence, the choice of the object d depends on
which object c we are talking about. The subsequent universal generalization
is just what our restriction rules out.
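A three-element countermodel makes the fallacy concrete (an illustrative sketch; Adjoins is modeled as "next around a cycle"): the premise of the pseudo-proof is true while its conclusion is false.

```python
D = [0, 1, 2]

def adjoins(x, y):
    return y == (x + 1) % 3   # each object adjoins exactly the next one

premise = all(any(adjoins(x, y) for y in D) for x in D)     # ∀x ∃y Adjoins(x, y)
conclusion = any(all(adjoins(x, y) for x in D) for y in D)  # ∃y ∀x Adjoins(x, y)

assert premise and not conclusion
print("∀x∃y holds but ∃y∀x fails: the generalization on d was illegal")
```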
Let us now give a summary statement of the main methods of proof involving the first-order quantifiers.
Remember
Let S(x), P(x), and Q(x) be wffs.
1. Existential Instantiation: If you have proven ∃x S(x) then you may
choose a new constant symbol c to stand for any object satisfying S(x)
and so you may assume S(c).
2. General Conditional Proof: If you want to prove ∀x [P(x) → Q(x)] then
you may choose a new constant symbol c, assume P(c), and prove
Q(c), making sure that Q(c) does not contain any names introduced by
existential instantiation after the assumption of P(c).
3. Universal Generalization: If you want to prove ∀x S(x) then you may
choose a new constant symbol c and prove S(c), making sure that S(c)
does not contain any names introduced by existential instantiation
after the introduction of c.
Euclid's Theorem
Notice the order of the last two steps. Had we violated the new condition on the application of general conditional proof to conclude that p is a
prime number greater than or equal to every natural number, we would have
obtained a patently false result.
Here, by the way, is a closely related conjecture, called the Twin Prime
Conjecture. No one knows whether it is true or not.
∀x ∃y [y > x ∧ Prime(y) ∧ Prime(y + 2)]
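Though unproven, the conjecture is easy to test empirically for small values (an illustrative script; the search bound 10 000 is arbitrary):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def twin_above(x, limit=10_000):
    # smallest y with y > x such that y and y + 2 are both prime, else None
    return next((y for y in range(x + 1, limit)
                 if is_prime(y) and is_prime(y + 2)), None)

# The instances of the conjecture hold for every x checked here.
assert all(twin_above(x) is not None for x in range(1000))
print(twin_above(100))  # 101, since 101 and 103 are both prime
```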
The Barber Paradox
There was once a small town in Indiana where there was a barber who shaved
all and only the men of the town who did not shave themselves. We might
formalize this in fol as follows:
∃z ∃x [BarberOf(x, z) ∧ ∀y (ManOf(y, z) → (Shave(x, y) ↔ ¬Shave(y, y)))]
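Before reading on, one can already check by exhaustive search that the embedded condition ∃x ∀y (Shave(x, y) ↔ ¬Shave(y, y)) is unsatisfiable on small domains (an illustrative sketch, anticipating the argument below):

```python
from itertools import product

def barber_exists(n):
    """Is there a Shave relation on n men with some x such that
    for every y, Shave(x, y) iff not Shave(y, y)?"""
    people = range(n)
    for bits in product([False, True], repeat=n * n):
        shave = dict(zip(product(people, people), bits))
        if any(all(shave[(x, y)] == (not shave[(y, y)]) for y in people)
               for x in people):
            return True
    return False

assert not any(barber_exists(n) for n in range(1, 4))
print("no such barber on any domain of size 1-3: taking y = x forces a contradiction")
```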
Now there does not on the face of it seem to be anything logically incoherent about the existence of such a town. But here is a proof that there can
be no such town.
Purported proof: Suppose there is such a town. Let's call it Hoosierville, and let's call Hoosierville's barber Fred. By assumption, Fred
shaves all and only those men of Hoosierville who do not shave themselves.
Now either Fred shaves himself, or he doesn't. But either possibility
leads to a contradiction, as we now show. As to the first possibility, if
Barber Paradox
12.11
"
There is a number greater than every other number.
Purported proof: Let n be an arbitrary number. Then n is less than some other
number, n + 1 for example. Let m be any such number. Thus n ≤ m. But n is an
arbitrary number, so every number is less than or equal to m. Hence there is a number that
is greater than every other number.
12.12
"
12.13
"
12.14
"
There is at most one object.
Purported proof: Toward a proof by contradiction, suppose that there is more than
one object in the domain of discourse. Let c be any one of these objects. Then there is
some other object d, so that d ≠ c. But since c was arbitrary, ∀x (d ≠ x). But then, by
universal instantiation, d ≠ d. But d = d, so we have our contradiction. Hence there
can be at most one object in the domain of discourse.
12.15
"
The next three exercises contain arguments from a single set of premises. In each case decide whether
or not the argument is valid. If it is, give an informal proof. If it isn't, use Tarski's World to construct
a counterexample.
12.16
12.17
12.18
The next three exercises contain arguments from a single set of premises. In each, decide whether the
argument is valid. If it is, give an informal proof. If it isn't valid, use Tarski's World to build a counterexample.
12.19
12.20
12.21
12.23 Translate the following argument into fol and determine whether or not the conclusion follows
In the next three exercises, we work in the first-order language of arithmetic with the added predicates
Even(x), Prime(x), and DivisibleBy(x, y), where these have the obvious meanings (the last means that the
natural number y divides the number x without remainder). Prove the result stated in the exercise. In
some cases, you have already done all the hard work in earlier problems.
12.27 Are sentences (1) and (2) in Exercise 9.19 on page 252 logically equivalent? If so, give a proof.
12.28 Show that it would be impossible to construct a reference book that lists all and only those
reference books that do not list themselves.
12.29 Call a natural number a near prime if its prime factorization contains at most two distinct
primes. The first number which is not a near prime is 2 · 3 · 5 = 30. Prove
∀x ∃y [y > x ∧ ¬NearPrime(y)]
You may appeal to our earlier result that there is no largest prime.
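Although the exercise asks for a logical proof, the arithmetic behind it is easy to check by computation. In this sketch (the function names are ours), we count distinct prime factors and confirm that 30 = 2 · 3 · 5 is the least number that is not a near prime:

```python
def distinct_prime_factors(n):
    """Count the distinct primes in n's prime factorization."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:            # whatever remains is itself a prime factor
        count += 1
    return count

def is_near_prime(n):
    """A near prime has at most two distinct primes in its factorization."""
    return distinct_prime_factors(n) <= 2

first = next(n for n in range(2, 1000) if not is_near_prime(n))
print(first)   # 30
```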
Section 12.5
Axiomatizing shape
Let's return to the project of giving axioms for the shape properties in Tarski's
World. In Section 10.5, we gave axioms that described basic facts about the
three shapes, but we stopped short of giving axioms for the binary relation
SameShape. The reason we stopped was that the needed axioms require multiple quantifiers, which we had not covered at the time.
How do we choose which sentences to take as axioms? The main consideration is correctness: the axioms must be true in all relevant circumstances,
either in virtue of the meanings of the predicates involved, or because we have
restricted our attention to a specific type of circumstance.
The two possibilities are reflected in our first four axioms about shape,
which we repeat here for ease of reference:
correctness of axioms
completeness of axioms
Peano axioms
Gödel's Incompleteness Theorem
Exercises
Give informal proofs of the following arguments, if they are valid, making use of any of the ten shape
axioms as needed, so that your proof uses only first-order methods of proof. Be very explicit about which
axioms you are using at various steps. If the argument is not valid, use Tarski's World to provide a
counterexample.
12.30
x (Cube(x) Dodec(x))
x y SameShape(x, y)
12.31
x Tet(x)
12.32
12.33
x y SameShape(x, y)
12.34
12.35
SameShape(b, c)
x y SameShape(x, y)
SameShape(c, b)
SameShape(b, c)
SameShape(c, d)
SameShape(b, d)
12.36 The last six shape axioms are quite intuitive and easy to remember, but we could have gotten
by with fewer. In fact, there is a single sentence that completely captures the meaning of
SameShape, given the first four axioms. This is the sentence that says that two things are the
same shape if and only if they are both cubes, both tetrahedra, or both dodecahedra:
∀x ∀y (SameShape(x, y) ↔
((Cube(x) ∧ Cube(y)) ∨
(Tet(x) ∧ Tet(y)) ∨
(Dodec(x) ∧ Dodec(y))))
Use this axiom and the basic shape axioms (1)-(4) to give informal proofs of axioms (5)
and (8).
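The biconditional axiom, together with the basic axioms guaranteeing that every block has exactly one of the three shapes, makes SameShape agreement of shape category. As an informal cross-check (ours, not part of the exercise), brute force over every shape assignment on a small domain confirms that the relation so defined is reflexive, symmetric, and transitive, the kind of fact the remaining shape axioms record:

```python
from itertools import product

SHAPES = ("cube", "tet", "dodec")

def same_shape(shape, x, y):
    """SameShape as defined by the biconditional axiom: both cubes,
    both tetrahedra, or both dodecahedra."""
    return any(shape[x] == s and shape[y] == s for s in SHAPES)

def is_equivalence_everywhere(size):
    """Check reflexivity, symmetry, and transitivity in every world,
    i.e. under every assignment of shapes to the domain."""
    dom = range(size)
    for shape in product(SHAPES, repeat=size):
        ok_refl = all(same_shape(shape, x, x) for x in dom)
        ok_symm = all(same_shape(shape, x, y) == same_shape(shape, y, x)
                      for x in dom for y in dom)
        ok_trans = all(not (same_shape(shape, x, y) and same_shape(shape, y, z))
                       or same_shape(shape, x, z)
                       for x in dom for y in dom for z in dom)
        if not (ok_refl and ok_symm and ok_trans):
            return False
    return True

print(is_equivalence_everywhere(3))   # True
```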
12.37 Let us imagine adding new atomic sentences involving a binary predicate MoreSides. We
assume that MoreSides(b, c) holds if block b has more sides than block c. See if you can come
up with axioms that completely capture the meaning of this predicate. The natural way to do
this involves two or three introduction axioms and three or four elimination axioms. Turn in
your axioms to your instructor.
12.38 Find first-order axioms for the six size predicates of the blocks language. [Hint: use the axiom
Chapter 13
Formal Proofs and Quantifiers
boxed constant
General Conditional Proof (∀ Intro):

| c  P(c)
|    ⋮
|    Q(c)
▷ ∀x (P(x) → Q(x))
When we give the justification for universal introduction, we will cite the
subproof, as we do in the case of conditional introduction. The requirement
that c not occur outside the subproof in which it is introduced does not preclude it occurring within subproofs of that subproof. A sentence in a subproof
of a subproof still counts as a sentence of the larger subproof.
As a special case of ∀ Intro we allow a subproof where there is no sentential
assumption at all, just the boxed constant on its own. This corresponds to
the method of universal generalization discussed earlier, where one assumes
that the constant in question stands for an arbitrary object in the domain of
discourse.
Universal Introduction (∀ Intro):

| c
|    ⋮
|    P(c)
▷ ∀x P(x)
As we have indicated, we don't really need both forms of ∀ Intro. We use
both because the first is more natural while the second is more general.
Let's illustrate how to use these rules by giving a formal proof mirroring
the informal proof given on page 335. We prove that the following argument
is valid:
∀x (P(x) → Q(x))
∀z (Q(z) → R(z))
∀x (P(x) → R(x))
(This is a general form of the argument about all math majors being smart
given earlier.) Here is a completed proof:
1. ∀x (P(x) → Q(x))
2. ∀z (Q(z) → R(z))
3. | d  P(d)
4. |    P(d) → Q(d)        ∀ Elim: 1
5. |    Q(d)               → Elim: 3, 4
6. |    Q(d) → R(d)        ∀ Elim: 2
7. |    R(d)               → Elim: 5, 6
8. ∀x (P(x) → R(x))        ∀ Intro: 3-7
Notice that the constant symbol d does not appear outside the subproof. It is
newly introduced at the beginning of that subproof, and occurs nowhere else
outside it. That is what allows the introduction of the universal quantifier in
the final step.
You try it
................................................................
1. Open the file Universal 1. This file contains the argument proven above.
We'll show you how to construct this proof in Fitch.
2. Start a new subproof immediately after the premises. Before typing anything in, notice that there is a blue, downward pointing triangle to the left
of the blinking cursor. It looks just like the focus slider, but sort of standing on its head. Use your mouse to click down on this triangle. A menu
will pop up, allowing you to choose the constant(s) you want boxed in
this subproof. Choose d from the menu. (If you choose the wrong one, say
c, then choose it again to unbox it.)
3. After you have d in the constant box, enter the sentence P(d) as your
assumption. Then add a step and continue the subproof.
4. You should now be able to complete the proof on your own. When you're
done, save it as Proof Universal 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Default and generous uses of the rules
default uses of rules
indicating substitutions
Both of the universal quantifier rules have default uses. If you cite a universal
sentence and apply ∀ Elim without entering a sentence, Fitch will replace the
universally quantified variable with its best guess of the name you intended. It
will either choose the alphabetically first name that does not already appear
in the sentence, or the first name that appears as a boxed constant in the
current subproof. For example, in steps 4 and 6 of the above proof, the default
mechanism would choose d, and so generate the correct instances.
If you know you want a different name substituted for the universally
quantified variable, you can indicate this by typing a colon (:), followed by
the variable, followed by the greater-than sign (>), followed by the name
you want. In other words, if instead of a sentence you enter : x > c, Fitch
will instantiate ∀x P(x) as P(c), rather than picking its own default instance.
(Think of : x > c as saying "substitute c for x.")
If you apply ∀ Intro to a subproof that starts with a boxed constant on its
own, without entering a sentence, Fitch will take the last sentence in the cited
subproof and universally quantify the name introduced at the beginning of the
subproof. If the cited subproof starts with a boxed constant and a sentence,
then Fitch will write the corresponding universal conditional, using the first
sentence and the last sentence of the proof to create the conditional.
You try it
................................................................
1. Open the file Universal 2. Look at the goal to see what sentence we are
trying to prove. Then focus on each step in succession and check the step.
Before moving to the next step, make sure you understand why the step
checks out and, more important, why we are doing what we are doing at
that step. At the empty steps, try to predict which sentence Fitch will
provide as a default before you check the step.
2. When you are done, make sure you understand the completed proof. Save
the file as Proof Universal 2.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Fitch has generous uses of both rules. ∀ Elim will allow you to remove
several universal quantifiers from the front of a sentence simultaneously. For
example, if you have proven ∀x ∀y SameCol(x, y) you could infer SameCol(f, c)
in one step in Fitch. If you want to use the default mechanism to generate this
step, you can enter the substitutions : x > f : y > c before checking the step.
In a like manner, you can often prove a sentence starting with more than
one universal quantifier by means of a single application of ∀ Intro. You do
this by starting a subproof with the appropriate number of boxed constants.
If you then prove a sentence containing these constants you may end the
subproof and infer the result of universally quantifying each of these constants
using ∀ Intro. The default mechanism allows you to specify the variables to
be used in the generated sentence by indicating the desired substitutions; for
example : a > z : b > w will generate ∀z ∀w R(w, z) when applied to R(b, a).
Notice the order used to specify substitutions: for ∀ Elim it will always be
: variable > name, while for ∀ Intro it must be : name > variable.
Add Support Steps can be used with the ∀ Intro rule. If the focus step
contains a universally quantified formula then the support will be a subproof
with a new constant at the assumption step. The last step of the subproof will
contain the appropriate instance of the universal formula. With the ∀ Elim
rule a single support step containing a universal formula will be added. The
support formula will be a universal generalization of the formula at the focus
step, with the first constant replaced by the variable of quantification.
Remember
The formal rule of ∀ Intro corresponds to the informal method of general
conditional proof, including the special case of universal generalization.
Exercises
13.1
If you skipped the You try it sections, go back and do them now. Submit the files Proof
Universal 1 and Proof Universal 2.
For each of the following arguments, decide whether or not it is valid. If it is, use Fitch to give a formal
proof. If it isn't, use Tarski's World to give a counterexample. In this chapter you are free to use Taut
Con to justify proof steps involving only propositional connectives.
13.2
x (Cube(x) Small(x))
x Cube(x)
13.3
x Small(x)
13.4
x Cube(x)
x Small(x)
x (Cube(x) Small(x))
13.5
x Cube(x)
x (Cube(x) Small(x))
∀x ∀y ((Cube(x) ∧ Dodec(y)) → Larger(y, x))
∀x ∀y (Larger(x, y) → LeftOf(x, y))
∀x ∀y ((Cube(x) ∧ Dodec(y)) → LeftOf(y, x))
13.6
x ((Cube(x) Large(x))
(Tet(x) Small(x)))
x (Tet(x) BackOf(x, c))
x (Small(x) Large(x))
13.7
∀x ∀y ((Cube(x) ∧ Dodec(y)) → FrontOf(x, y))
∀x (Cube(x) → ∀y (Dodec(y) → FrontOf(x, y)))
13.8
∀x (Cube(x) → ∀y (Dodec(y) → FrontOf(x, y)))
∀x ∀y ((Cube(x) ∧ Dodec(y)) → FrontOf(x, y))
13.9
∀x ∀y ((Cube(x) ∧ Dodec(y)) → Larger(x, y))
∀x ∀y ((Dodec(x) ∧ Tet(y)) → Larger(x, y))
∀x ∀y ((Cube(x) ∧ Tet(y)) → Larger(x, y))
Section 13.2
Existential Elimination (∃ Elim):

∃x S(x)
| c  S(c)
|    ⋮
|    Q
▷ Q
Again we think of the notation at the beginning of the subproof as the formal
counterpart of the English Let c be an arbitrary individual such that S(c).
The rule of existential elimination is quite analogous to the rule of disjunction elimination, both formally and intuitively. With disjunction elimination,
we have a disjunction and break into cases, one for each disjunct, and establish the same result in each case. With existential elimination, we can think
of having one case for each object in the domain of discourse. We are required
to show that, whichever object it is that satisfies the condition S(x), the same
result Q can be obtained. If we can do this, we may conclude Q.
To illustrate the two existential rules, we will give a formal counterpart to
the proof given on page 332.
comparison with ∨ Elim
1. ∀x [Cube(x) → Large(x)]
2. ∀x [Large(x) → LeftOf(x, b)]
3. ∃x Cube(x)
4. | e  Cube(e)
5. |    Cube(e) → Large(e)            ∀ Elim: 1
6. |    Large(e)                      → Elim: 5, 4
7. |    Large(e) → LeftOf(e, b)       ∀ Elim: 2
8. |    LeftOf(e, b)                  → Elim: 7, 6
9. |    Large(e) ∧ LeftOf(e, b)       ∧ Intro: 6, 8
10. |   ∃x (Large(x) ∧ LeftOf(x, b))  ∃ Intro: 9
11. ∃x (Large(x) ∧ LeftOf(x, b))      ∃ Elim: 3, 4-10
Defaults for the existential quantifier rules work similarly to those for the
universal quantifier. If you cite a sentence and apply ∃ Intro without typing
a sentence, Fitch will supply a sentence that existentially quantifies the alphabetically first name appearing in the cited sentence. When replacing the
name with a variable, Fitch will choose the first variable in the list of variables
that does not already appear in the cited sentence. If this isn't the name or
variable you want used, you can specify the substitution yourself; for example
: max > z will replace max with z and add ∃z to the front of the result.
In a default application of ∃ Elim, Fitch will supply the last sentence in
the cited subproof, providing that sentence does not contain the temporary
name introduced at the beginning of the subproof.
You try it
................................................................
1. Open the file Existential 1. Look at the goal to see the sentence we are
trying to prove. Then focus on each step in succession and check the step.
Before moving to the next step, make sure you understand why the step
checks out and, more important, why we are doing what we are doing at
that step.
2. At any empty steps, you should try to predict which sentence Fitch will
provide as a default before you check the step. Notice in particular step
eight, the one that contains : a > y. Can you guess what sentence would
have been supplied by Fitch had we not specified this substitution? You
could try it if you like.
3. When you are done, make sure you understand the completed proof. Save
the file as Proof Existential 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
As with ∀, Fitch has generous uses of both rules. ∃ Intro will allow
you to add several existential quantifiers to the front of a sentence. For example,
if you have proved SameCol(b, a) you could infer ∃y ∃z SameCol(y, z) in one
step in Fitch. In a like manner, you can use a sentence beginning with more
than one existential quantifier in a single application of ∃ Elim. You do this
by starting a subproof with the appropriate number of boxed constants. If
you then prove a sentence not containing these constants, you may end the
subproof and infer the result using ∃ Elim.
The Add Support Steps command cannot be used with either of the ∃ rules.
Remember
The formal rule of ∃ Elim corresponds to the informal method of existential instantiation.
Exercises
13.10 If you skipped the You try it section, go back and do it now. Submit the file Proof Existential 1.
For each of the following arguments, decide whether or not it is valid. If it is, use Fitch to give a formal
proof. If it isn't, use Tarski's World to give a counterexample. Remember that in this chapter you are
free to use Taut Con to justify proof steps involving only propositional connectives.
13.11
x (Cube(x) Tet(x))
x Cube(x)
13.12
x Tet(x)
13.13
y [Cube(y) Dodec(y)]
x [Cube(x) Large(x)]
x Large(x)
x Dodec(x)
x (Cube(x) Tet(x))
x Cube(x)
x Tet(x)
13.14
x (Cube(x) Small(x))
x Cube(x)
x Small(x)
13.15
x (Cube(x) Small(x))
x Cube(x)
13.16
x Small(x)
x y Adjoins(x, y)
x y (Adjoins(x, y)
SameSize(x, y))
x y SameSize(y, x)
In our discussion of the informal methods, we observed that the methods that introduce new constants
can interact to give defective proofs if not used with care. The formal system F automatically prevents
these misapplications of the quantifier rules. The next two exercises are designed to show you how the
formal rules prevent these invalid steps by formalizing one of the fallacious informal proofs we gave
earlier.
13.17
1. ∀x ∃y SameCol(x, y)
2. | c
3. |    ∃y SameCol(c, y)          ∀ Elim: 1
4. |    | d  SameCol(c, d)
5. |    |    SameCol(c, d)        Reit: 4
6. |    SameCol(c, d)             ∃ Elim: 3, 4-5
7. ∀x SameCol(x, d)               ∀ Intro: 2-6
8. ∃y ∀x SameCol(x, y)            ∃ Intro: 7
1. Write this proof in a file using Fitch and check it out. You will discover that step
6 is incorrect; it violates the restriction on existential elimination that requires the
constant d to appear only in the subproof where it is introduced. Notice that the other
steps all check out, so if we could make that move, then the rest of the proof would be
fine.
2. Construct a counterexample to the argument to show that no proof is possible.
Submit both files.
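A tiny finite model (ours, for illustration) makes the required counterexample concrete: put two blocks in different columns, so that each block is in the same column as some block (namely itself) while no single block is in the same column as everything.

```python
# Two blocks in different columns (model and names are ours, for illustration).
col = {"a": 1, "b": 2}
blocks = list(col)

def same_col(x, y):
    """SameCol(x, y): x and y are in the same column."""
    return col[x] == col[y]

# Premise: every block is in the same column as some block.
premise = all(any(same_col(x, y) for y in blocks) for x in blocks)

# Conclusion: some one block is in the same column as every block.
conclusion = any(all(same_col(x, y) for x in blocks) for y in blocks)

print(premise, conclusion)   # True False
```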
13.18 Let's contrast the faulty proof from the preceding exercise with a genuine proof
that ∀x ∃y R(x, y) follows from ∃y ∀x R(x, y). Use Fitch to create the following proof.
1. ∃y ∀x SameCol(x, y)
2. | d  ∀x SameCol(x, d)
3. |    | c
4. |    |    SameCol(c, d)        ∀ Elim: 2
5. |    |    ∃y SameCol(c, y)     ∃ Intro: 4
6. |    ∀x ∃y SameCol(x, y)       ∀ Intro: 3-5
7. ∀x ∃y SameCol(x, y)            ∃ Elim: 1, 2-6
Notice that in this proof, unlike the one in the previous exercise, both constant symbols c
and d are properly sequestered within the subproofs where they are introduced. Therefore the
quantifier rules have been applied properly. Submit your proof.
Section 13.3
consider meaning
∃x (Tet(x) ∧ Small(x))
∀x (Small(x) → LeftOf(x, b))
∃x LeftOf(x, b)
Obviously, the conclusion follows from the given sentences. But ask yourself
how you would prove it, say, to your stubborn roommate, the one who likes
to play devil's advocate. You might argue as follows:
Look, Bozo, we're told that there is a small tetrahedron. So we know
that it is small, right? But we're also told that anything that's small
is left of b. So if it's small, it's got to be left of b, too. So something's
left of b, namely the small tetrahedron.
Now we don't recommend calling your roommate Bozo, so ignore that bit.
The important thing to notice here is the implicit use of three of our quantifier
rules: ∃ Elim, ∀ Elim, and ∃ Intro. Do you see them?
What indicates the use of ∃ Elim is the "it" appearing in the second
sentence. What we are doing there is introducing a temporary name (in this
case, the pronoun "it") and using it to refer to a small tetrahedron. That
corresponds to starting the subproof needed for an application of ∃ Elim.
So after the second sentence of our informal proof, we can already see the
following steps in our reasoning (using c for "it"):
1. ∃x (Tet(x) ∧ Small(x))
2. ∀x (Small(x) → LeftOf(x, b))
3. | c  Tet(c) ∧ Small(c)
4. |    Small(c)                  ∧ Elim: 3
   |    ⋮
5. |    ∃x LeftOf(x, b)           ??
6. ∃x LeftOf(x, b)                ∃ Elim: 1, 3-5
In general, the key to recognizing ∃ Elim is to watch out for any reference to
an object whose existence is guaranteed by an existential claim. The reference
might use a pronoun (it, he, she), as in our example, or it might use a definite
noun phrase (the small tetrahedron), or finally it might use an actual name
(let n be a small tetrahedron). Any of these are signs that the reasoning is
proceeding via existential elimination.
The third and fourth sentences of our informal argument are where the
implicit use of ∀ Elim shows up. There we apply the claim about all small
things to the small tetrahedron we are calling "it." This gives us a couple
more steps in our formal proof:
1. ∃x (Tet(x) ∧ Small(x))
2. ∀x (Small(x) → LeftOf(x, b))
3. | c  Tet(c) ∧ Small(c)
4. |    Small(c)                  ∧ Elim: 3
5. |    Small(c) → LeftOf(c, b)   ∀ Elim: 2
6. |    LeftOf(c, b)              → Elim: 4, 5
   |    ⋮
8. |    ∃x LeftOf(x, b)
9. ∃x LeftOf(x, b)                ∃ Elim: 1, 3-8
The distinctive mark of universal elimination is just the application of a general claim to a specific individual. For example, we might also have said at
this point: "So the small tetrahedron [there's the specific individual] must be
left of b."
The implicit use of ∃ Intro appears in the last sentence of the informal
reasoning, where we conclude that something is left of b, based on the fact that
"it," the small tetrahedron, is left of b. In our formal proof, this application
of ∃ Intro will be done within the subproof, giving us a sentence that we can
export out of the subproof since it doesn't contain the temporary name c.
1. ∃x (Tet(x) ∧ Small(x))
2. ∀x (Small(x) → LeftOf(x, b))
3. | c  Tet(c) ∧ Small(c)
4. |    Small(c)                  ∧ Elim: 3
5. |    Small(c) → LeftOf(c, b)   ∀ Elim: 2
6. |    LeftOf(c, b)              → Elim: 4, 5
7. |    ∃x LeftOf(x, b)           ∃ Intro: 6
8. ∃x LeftOf(x, b)                ∃ Elim: 1, 3-7
One thing that's a bit tricky is that in informal reasoning we often leave out
simple steps like ∃ Intro, since they are so obvious. Thus in our example, we
might have left out the last sentence completely. After all, once we conclude
that the small tetrahedron is left of b, it hardly seems necessary to point out
that something is left of b. So you've got to watch out for these omitted steps.
working backward
This completes our formal proof. To a trained eye, the proof matches the
informal reasoning exactly. But you shouldn't feel discouraged if you would
have missed it on your own. It takes a lot of practice to recognize the steps
implicit in our own reasoning, but it is practice that in the end makes us more
careful and able reasoners.
The second strategy that we stressed is that of working backwards: starting
from the goal sentence and inserting steps or subproofs that would enable us
to infer that goal. It turns out that of the four new quantifier rules, only
∀ Intro really lends itself to this technique.
Suppose your goal sentence is of the form ∀x (P(x) → Q(x)). After surveying your given sentences to see whether there is any immediate way to infer
this conclusion, it is almost always a good idea to start a subproof in which
you introduce an arbitrary name, say c, and assume P(c). Then add a step to
the subproof and enter the sentence Q(c), leaving the rule unspecified. Next,
end the subproof and infer ∀x (P(x) → Q(x)) by ∀ Intro, citing the subproof
in support. When you check this partial proof, an X will appear next to the
sentence Q(c), indicating that your new goal is to prove this sentence.
Remember
1. Always be clear about the meaning of the sentences you are using.
2. A good strategy is to find an informal proof and then try to formalize
it.
3. Working backwards can be very useful in proving universal claims,
especially those of the form ∀x (P(x) → Q(x)).
4. Working backwards is not useful in proving an existential claim ∃x S(x)
unless you can think of a particular instance S(c) of the claim that
follows from the premises.
5. If you get stuck, consider using proof by contradiction.
A worked example
We are going to work through a moderately difficult proof, step by step, using
what we have learned in this section. Consider the following argument:
¬∀x P(x)
∃x ¬P(x)
This is one of four such inferences associated with the DeMorgan rules relating
quantifiers and negation. The fact that this inference can be validated in F
is one we will need in our proof of the Completeness Theorem for the system
F in the final chapter. (The other three DeMorgan rules will be given in the
review exercises at the end of this chapter. Understanding this example will
be a big help in doing those exercises.)
Before embarking on the proof, we mention that this inference is one of the
hallmarks of first-order logic. Notice that it allows us to assert the existence
of something having a property from a negative fact: that not everything has
the opposite property.
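While the text goes on to validate this inference with a formal proof, the semantic fact behind it is easy to confirm by brute force (our sketch, not the book's method): in every interpretation over a finite domain, ¬∀x P(x) and ∃x ¬P(x) have the same truth value.

```python
from itertools import product

def check_demorgan(size):
    """Check that 'not every x is P' and 'some x is not P' agree in every
    interpretation of P over a domain of the given size."""
    domain = range(size)
    for extension in product([False, True], repeat=size):
        P = dict(zip(domain, extension))
        not_all = not all(P[x] for x in domain)
        exists_not = any(not P[x] for x in domain)
        if not_all != exists_not:
            return False
    return True

print(check_demorgan(3))   # True
```

Note that the check is thoroughly classical: it assumes each P(x) is simply true or false, which is just what the intuitionists discussed below decline to grant.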
In the late nineteenth and early twentieth century, the validity of this sort
of inference was hotly debated in mathematical circles. While it seems obvious
to us now, it is because we have come to understand existence claims in a
somewhat different way than some (the so-called intuitionists) understood
them. While the first-order understanding of ∃x Q(x) is as asserting that some
Q exists, the intuitionist took it as asserting something far stronger: that the
asserter had actually found a Q and proven it to be a Q. Under this stronger
reading, the DeMorgan principle under discussion would not be valid. This
point will be relevant to our proof.
Let us now turn to the proof. Following our strategy, we begin with an
informal proof, and then formalize it.
intuitionists
You try it
................................................................
1. Open the file Quantifier Strategy 1. We want to prove ∃x ¬P(x) from the
premise ¬∀x P(x), so the finished proof will have this overall shape:

1. ¬∀x P(x)
   ⋮
2. ∃x ¬P(x)
2. The first step in our informal proof was to decide to try to give a proof by
contradiction. Formalize this idea by filling in the following:
1. ¬∀x P(x)
2. | ¬∃x ¬P(x)
   |    ⋮
4. |    ⊥                  ⊥ Intro: ?, ?
5. ∃x ¬P(x)                ¬ Intro: 2-4
This step will check out because of the generous nature of Fitch's ¬ Intro
rule, which lets us strip off as well as add a negation.
3. Our informal proof then derived ∀x P(x), which contradicts the premise,
by proving P(c) for an arbitrary c. Formalize this by filling in the following:

1. ¬∀x P(x)
2. | ¬∃x ¬P(x)
3. |    | c
   |    |    ⋮
5. |    |    P(c)          ?
6. |    ∀x P(x)            ∀ Intro: 3-5
7. |    ⊥                  ⊥ Intro: 6, 1
8. ∃x ¬P(x)                ¬ Intro: 2-7
4. Recall how we proved P(c). We said that if P(c) were not the case, then
we would have ¬P(c), and hence ∃x ¬P(x). But this contradicted the assumption at step 2. Formalize this reasoning by filling in the rest of the
proof.
1. ¬∀x P(x)
2. | ¬∃x ¬P(x)
3. |    | c
4. |    |    | ¬P(c)
5. |    |    |    ∃x ¬P(x)     ∃ Intro: 4
6. |    |    |    ⊥            ⊥ Intro: 5, 2
7. |    |    ¬¬P(c)            ¬ Intro: 4-6
8. |    |    P(c)              ¬ Elim: 7
9. |    ∀x P(x)                ∀ Intro: 3-8
10. |   ⊥                      ⊥ Intro: 9, 1
11. ∃x ¬P(x)                   ¬ Intro: 2-10
5. This completes our formal proof of ∃x ¬P(x) from the premise ¬∀x P(x).
Verify your proof and save it as Proof Quantifier Strategy 1.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Exercises
13.19 If you skipped the You try it section, go back and do it now. Submit the file Proof Quantifier
Strategy 1.
Recall that in Exercises 12.1-12.3 on page 336, you were asked to give logical analyses of purported
proofs of some arguments involving nonsense predicates. In the following exercises, we return to these
arguments. If the argument is valid, submit a formal proof. If it is invalid, turn in an informal counterexample. If you submit a formal proof, be sure to use the Exercise file supplied with Fitch. In order
to keep your hand in at using the propositional rules, we ask you not to use Taut Con in these proofs.
13.20
13.21
13.22
∀x [Brillig(x) → (Mimsy(x) ∧ Slithy(x))]
∀y [(Slithy(y) ∨ Mimsy(y)) → Tove(y)]
∀x [Tove(x) → (Outgrabe(x, b) ∧ Brillig(x))]
∀z [Brillig(z) ↔ Mimsy(z)]
∃z Slithy(z)
Some of the following arguments are valid, some are not. For each, either use Fitch to give a formal
proof or use Tarski's World to construct a counterexample. In giving proofs, feel free to use Taut Con
if it helps.
13.23
y [Cube(y) Dodec(y)]
x [Cube(x) Large(x)]
x Large(x)
13.24
x (Cube(x) Small(x))
x Cube(x) x Small(x)
x Dodec(x)
13.25
x Cube(x) x Small(x)
13.26
x (Cube(x) Small(x))
13.27
x (Cube(x) Small(x))
x (Adjoins(x, b) Small(x))
x ((Cube(x) Small(x))
Adjoins(x, b))
x (Cube(x) Small(x))
x (Adjoins(x, b) Small(x))
x ((Cube(x) Small(x)) Adjoins(x, b))
For each of the following, use Fitch to give a formal proof of the argument. These look simple but some
of them are a bit tricky. Don't forget to first figure out an informal proof. Use Taut Con whenever it
is convenient but do not use FO Con.
13.28
x y Likes(x, y)
13.29
x y Likes(x, y)
13.30
Likes(carl, max)
x [y (Likes(y, x) Likes(x, y))
Likes(x, x)]
x Likes(x, carl)
x (Small(x) Cube(x))
x Cube(x) x Small(x)
x Cube(x)
13.31
The following valid arguments come in pairs. The validity of the first of the pair makes crucial use of the
meanings of the blocks language predicates, whereas the second adds one or more premises, making the
result a first-order valid argument. For the latter, give a proof that does not make use of Ana Con. For
the former, give a proof that uses Ana Con but only where the premises and conclusions of the citation
are literals (including ⊥). You may use Taut Con but do not use FO Con in any of the proofs.
13.32
x (Tet(x) Small(x))
13.33
13.34
x (Tet(x) Small(x))
y (Small(y) Medium(y) Large(y))
x [Tet(x) (Large(x) Medium(x))]
13.35
13.36
13.37
x (SameCol(x, a) Cube(x))
13.38
13.39
∀x (Cube(x) → ∀y (Dodec(y) → Larger(x, y)))
∀x (Dodec(x) → ∀y (Tet(y) → Larger(x, y)))
∃x Dodec(x)
∀x (Cube(x) → ∀y (Tet(y) → Larger(x, y)))

∀x (Cube(x) → ∀y (Dodec(y) → Larger(x, y)))
∀x (Dodec(x) → ∀y (Tet(y) → Larger(x, y)))
∃x Dodec(x)
∀x ∀y ∀z ((Larger(x, y) ∧ Larger(y, z)) → Larger(x, z))
∀x (Cube(x) → ∀y (Tet(y) → Larger(x, y)))
Section 13.4
soundness of F
completeness of F
Section 13.5
13.40
x Cube(x) Small(d)
x (Cube(x) Small(d))
13.41
x (Cube(x) Small(x))
x Cube(x) x Small(x)
13.42
x Cube(x) x Small(x)
x (Cube(x) Small(x))
Each of the following is a valid argument of a type discussed in Section 10.3. Use Fitch to give a proof
of its validity. You may use Taut Con freely in these proofs.
13.43
x Cube(x)
13.44
x Cube(x)
13.45
x Cube(x)
x Cube(x)
x Cube(x)
x Cube(x)
x Cube(x)
y Cube(y)
x Tet(x)
y Tet(y)
13.49
∃x P(x)
∀x ∀y ((P(x) ∧ P(y)) → x = y)
Cube(b) x Cube(b)
13.50
∀x ∀y ((P(x) ∧ P(y)) → x = y)
13.51
13.52
∃x (P(x) → ∀y P(y))
[Hint: Review your answer to Exercise 12.22 where you should have given
an informal proof of something of this
form.]
13.53 Is x y LeftOf(x, y) a first-order consequence of x LeftOf(x, x)? If so, give a formal proof.
If not, give a reinterpretation of LeftOf and an example where the premise is true and the
conclusion is false.
The next exercises are intended to help you review the difference between first-order satisfiability and
true logical possibility. All involve the four sentences in the file Padoa's Sentences. Open that file now.
13.54 Any three of the sentences in Padoa's Sentences form a satisfiable set. There are four sets of three
sentences, so to show this, build four worlds, World 13.54.123, World 13.54.124, World 13.54.134,
and World 13.54.234, where the four sets are true. (Thus, for example, sentences 1, 2 and 4 should
be true in World 13.54.124.)
13.55 Give an informal proof that the four sentences in Padoa's Sentences taken together are inconsistent.
13.56 Is the set of sentences in Padoa's Sentences first-order satisfiable, that is, satisfiable with some
reinterpretation of the predicates other than identity? [Hint: Imagine a world where one of the
blocks is a sphere.]
13.57 Reinterpret the predicates Tet and Dodec in such a way that sentence 3 of Padoa's Sentences
comes out true in World 13.54.124. Since this is the only sentence that uses these predicates,
it follows that all four sentences would, with this reinterpretation, be true in this world. (This
shows that the set is first-order satisfiable.)
13.58 (Logical truth versus non-logical truth in all worlds) A distinction Tarski's World helps us to
understand is the difference between sentences that are logically true and sentences that are,
for reasons that have nothing to do with logic, true in all worlds. The notion of logical truth
has to do with a sentence being true simply in virtue of the meaning of the sentence, and so
no matter how the world is. However, some sentences are true in all worlds, not because of
the meaning of the sentence or its parts, but because of, say, laws governing the world. We
can think of the constraints imposed by the innards of Tarski's World as analogues of physical
laws governing how the world can be. For example, the sentence which asserts that there are at
most 12 objects happens to hold in all the worlds that we can construct with Tarski's World.
However, it is not a logical truth.
Open Post's Sentences. Classify each sentence in one of the following ways: (A) a logical
truth, (B) true in all worlds that can be depicted using Tarski's World, but not a logical truth,
or (C) falsifiable in some world that can be depicted by Tarski's World. For each sentence of
type (C), build a world in which it is false, and save it as World 13.58.x, where x is the number
of the sentence. For each sentence of type (B), use a pencil and paper to depict a world in
which it is false. (In doing this exercise, assume that Medium simply means neither small nor
large, which seems plausible. However, it is not plausible to assume that Cube means neither a
dodecahedron nor tetrahedron, so you should not assume anything like this.)
Chapter 14
example, to express more than half the As are Bs, it turns out that we need
to supplement fol to include new expressions that behave something like
∀ and ∃. When we add such expressions to the formal language, we call them
generalized quantifiers, since they extend the kinds of quantification we can
express in the language.
In this chapter, we will look at the logic of some English determiners
beyond some and all. We will consider not only determiners that can be expressed using the usual quantifiers of fol, but also determiners whose meanings can only be captured by adding new quantifiers to fol.
In English, there are ways of expressing quantification other than determiners. For example, the sentences
Max always eats pizza.
Max usually eats pizza.
Max often eats pizza.
Max sometimes eats pizza.
Max seldom eats pizza.
Max never eats pizza.
each express a quantitative relation between the set of times when Max eats
and the set of times when he eats pizza. But in these sentences it is the adverb
that is expressing quantification, not a determiner. While we are going to
discuss the logic only of determiners, much of what we say can be extended to
other forms of quantification, including this kind of adverbial quantification.
In a sentence of the form Q A B, different determiners express very different
relations between A and B and so have very different logical properties. A valid
argument typically becomes invalid if we change any of the determiners. For
instance, while
No cube is small
d is a cube
d is not small
is a valid argument, it would become invalid if we replaced no by any of the
other determiners listed above. On the other hand, the valid argument
Many cubes are small
Every small block is left of d
Many cubes are left of d
remains valid if many is replaced by any of the above determiners other than
no. These are clearly logical facts, things we'd like to understand at a more
theoretical level. For example, we'll soon see that the determiners that can
replace Many in the second argument and still yield a valid argument are the
monotone increasing determiners.
There are two rather different approaches to studying quantification. One
approach studies determiners that can be expressed using the existing resources of fol. In the first three sections, we look at several important English
determiners that can be defined in terms of ∀, ∃, =, and the truth-functional
connectives, and then analyze their logical properties by means of these definitions. The second approach is to strengthen fol by allowing a wider range of
quantifiers, capturing kinds of quantification not already expressible in fol.
In the final three sections, we look briefly at this second approach and its
resulting logic.
Section 14.1
Numerical quantification
We have already seen that many complex noun phrases can be expressed in
terms of ∀ (which really means everything, not just every) and ∃ (which
means something, not some). For example, Every cube left of b is small
can be paraphrased as Everything that is a cube and left of b is small, a
sentence that can easily be translated into fol using ∀, ∧, and →. Similarly,
No cube is small can be paraphrased as Everything is such that if it is a cube
then it is not small, which can again be easily translated into fol.
Other important examples of quantification that can be indirectly expressed in fol are numerical claims. By a numerical claim we mean one
that explicitly uses the numbers 1, 2, 3, . . . to say something about the relation between the As and the Bs. Here are three different kinds of numerical
claims:

There are at least two cubes.
There are at most two cubes.
There are exactly two cubes.

Neither of the following sentences expresses the first of these claims, since each could be true even if there were only
one object:

Cube(a) ∧ Small(a) ∧ Cube(b)
∃x ∃y [Cube(x) ∧ Small(x) ∧ Cube(y)]
In order to say that there are at least two cubes, you must find a way to
guarantee that they are different. For example, either of the following would
do:

Cube(a) ∧ Small(a) ∧ Cube(b) ∧ Large(b)
∃x ∃y [Cube(x) ∧ Small(x) ∧ Cube(y) ∧ LeftOf(x, y)]

The most direct way, though, is simply to say that they are different:

∃x ∃y [Cube(x) ∧ Cube(y) ∧ x ≠ y]

This sentence asserts that there are at least two cubes. To say that there are
at least three cubes we need to add another ∃ and some more inequalities:

∃x ∃y ∃z [Cube(x) ∧ Cube(y) ∧ Cube(z) ∧ x ≠ y ∧ x ≠ z ∧ y ≠ z]
You will see in the You try it section below that all three of these inequalities
are really needed. To say that there are at least four objects takes four ∃'s
and six (= 3 + 2 + 1) inequalities; to say there are at least five takes five ∃'s
and 10 (= 4 + 3 + 2 + 1) inequalities, and so forth.
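The counting pattern can be checked with a short script. The sketch below is our own illustration (the function name at_least is invented; it is not part of Tarski's World or Fitch): it writes out the fol sentence for there are at least n cubes as a string, with one existential per variable and one inequality per pair of variables.

```python
from itertools import combinations

def at_least(n, pred="Cube"):
    """Build the fol sentence 'there are at least n objects satisfying pred'
    as a plain string: n existentials, plus one inequality per pair of variables."""
    vars_ = [f"x{i}" for i in range(1, n + 1)]
    quants = " ".join(f"∃{v}" for v in vars_)
    atoms = [f"{pred}({v})" for v in vars_]
    diffs = [f"{a} ≠ {b}" for a, b in combinations(vars_, 2)]
    return f"{quants} [{' ∧ '.join(atoms + diffs)}]"

# For n = 4 the sentence uses four quantifiers and 3 + 2 + 1 = 6 inequalities.
print(at_least(4))
```

In general the sentence for n objects needs n(n − 1)/2 inequalities, which is why these sentences get unwieldy so quickly.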
Turning to the second kind of numerical quantification, how can we say
that there are at most two cubes? Well, one way to do it is by saying that
there are not at least three cubes:

¬∃x ∃y ∃z [Cube(x) ∧ Cube(y) ∧ Cube(z) ∧ x ≠ y ∧ x ≠ z ∧ y ≠ z]

Applying some (by now familiar) quantifier equivalences, starting with DeMorgan's Law, gives us the following equivalent sentence:

∀x ∀y ∀z [(Cube(x) ∧ Cube(y) ∧ Cube(z)) → (x = y ∨ x = z ∨ y = z)]
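Since both of these sentences are first-order, their equivalence can be spot-checked by brute force over small finite worlds. The evaluator below is our own sketch (names and world representation invented for illustration): a world is a tuple of booleans recording which objects are cubes, and each form is evaluated directly over all triples.

```python
from itertools import product

def not_at_least_three(world):
    # ¬∃x ∃y ∃z [Cube(x) ∧ Cube(y) ∧ Cube(z) ∧ x ≠ y ∧ x ≠ z ∧ y ≠ z]
    n = len(world)
    return not any(world[x] and world[y] and world[z]
                   for x in range(n) for y in range(n) for z in range(n)
                   if x != y and x != z and y != z)

def all_form(world):
    # ∀x ∀y ∀z [(Cube(x) ∧ Cube(y) ∧ Cube(z)) → (x = y ∨ x = z ∨ y = z)]
    n = len(world)
    return all((not (world[x] and world[y] and world[z]))
               or x == y or x == z or y == z
               for x in range(n) for y in range(n) for z in range(n))

# The two forms agree on every world with up to five objects.
for size in range(6):
    for world in product([False, True], repeat=size):
        assert not_at_least_three(world) == all_form(world)
```

Of course, agreement on small worlds is not a proof of equivalence; the proof goes through the quantifier equivalences mentioned above.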
Finally, to say that there are exactly two cubes, we can paraphrase
it as follows: There are at least two cubes and there are at most two cubes.
Translating each conjunct gives us a rather long sentence using five quantifiers:

∃x ∃y [Cube(x) ∧ Cube(y) ∧ x ≠ y] ∧
∀x ∀y ∀z [(Cube(x) ∧ Cube(y) ∧ Cube(z)) → (x = y ∨ x = z ∨ y = z)]

The same claim can be expressed more succinctly, however, as follows:

∃x ∃y [Cube(x) ∧ Cube(y) ∧ x ≠ y ∧ ∀z (Cube(z) → (z = x ∨ z = y))]
If we translate this into English, we see that it says there are two distinct
objects, both cubes, and that any cube is one of these. This is a different way
of saying that there are exactly two cubes. (We ask you to give formal proofs
of their equivalence in Exercises 14.12 and 14.13.) Notice that this sentence
uses two existential quantifiers and one universal quantifier. An equivalent
way of saying this is as follows:
∃x ∃y [x ≠ y ∧ ∀z (Cube(z) ↔ (z = x ∨ z = y))]
Put in prenex form, this becomes:
∃x ∃y ∀z [x ≠ y ∧ (Cube(z) ↔ (z = x ∨ z = y))]
All three expressions consist of two existential quantifiers followed by a single
universal quantifier. More generally, to say that there are exactly n objects
satisfying some condition requires n + 1 quantifiers, n existential followed by
one universal.
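In general, then, there are exactly n objects satisfying P(x) can be written with n existential quantifiers followed by one universal. Using indexed variables (our own display; in fol itself one would use distinct variable letters), the pattern is:

```latex
\exists x_1 \cdots \exists x_n \, \forall y \,
  \Bigl[ \bigwedge_{1 \le i < j \le n} x_i \neq x_j
         \;\land\; \bigl( P(y) \leftrightarrow (y = x_1 \lor \cdots \lor y = x_n) \bigr) \Bigr]
```

The inequalities say the n witnesses are distinct, and the biconditional says nothing else satisfies P.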
You try it
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1. In this Try It, you will get to examine some of the claims made above in
more detail. Open Whitehead's Sentences.
2. The first sentence says that there are at least two objects and the second
sentence says that there are at most two objects. (Do you see how they
manage to say these things?) Build a model where the first two sentences
are both true.
"
"
Section 14.1
Chapter 14
5. Sentence 5 appears, at first sight, to assert that there are at least three
objects, so it should be false in a world with two objects. Check to see if
it is indeed false in such a world. Why isn't it? Play the game to confirm
your suspicions.
6. The sixth sentence actually manages to express the claim that there are at
least three objects. Do you see how it's different from the fifth sentence?
Check to see that it is false in the current world, but is true if you add a
third object to the world.
7. The seventh sentence says that there are exactly three objects in the world.
Check to see that it is true in the world with three objects, but false if you
either delete an object or add another object.
8. Sentence 8 asserts that a is a large object, and in fact the only large object.
To see just how the sentence manages to say this, start with a world with
three small objects and name one of them a. Play the game committed
to true to see why the sentence is false. You can quit the game as soon as
you understand why the sentence is false. Now make a large. Again play
the game committed to true and see why you can now win (does it matter
which block Tarski picks?). Finally, make one of the other objects large as
well, and play the game committed to true to see why it is false.
9. Sentence 8 asserted that a was the only large object. How might we say
that there is exactly one large object, without using a name for the object? Compare sentence 8 with sentence 9. The latter asserts that there is
something that is the only large object. Check to see that it is true only
in worlds in which there is exactly one large object.
10. If you have understood sentence 9, you should also be able to understand
sentence 10. Construct a world in which sentence 10 is true. Save this
world as World Numerical 1.
11. Sentence 11 says there is exactly one medium dodecahedron, while sentence
12 says there are at least two dodecahedra. There is nothing incompatible
about these claims. Make sentences 11 and 12 true in a single world. Save
the world as World Numerical 2.
"
13. Sentence 14 says that there are exactly two tetrahedra. Check that it is
true in such worlds, but false if there are fewer or more than two.
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Numerical quantification, when written out in full in fol, is hard to read
because of all the inequalities, especially when the numbers get to be more
than 3 or 4, so a special notation has become fairly common:
∃≥n x P(x) for the fol sentence asserting There are at least n objects
satisfying P(x).
∃≤n x P(x) for the fol sentence asserting There are at most n objects
satisfying P(x).
∃!n x P(x) for the fol sentence asserting There are exactly n objects
satisfying P(x).
It is important to remember that this notation is not part of the official
language of fol, but an abbreviation for a much longer fol expression.
The special case of n = 1 is important enough to warrant special comment.
The assertion that there is exactly one object satisfying some condition P(x)
can be expressed in fol as follows:

∃x [P(x) ∧ ∀y (P(y) → y = x)]
Similarly, to say There are at most n cubes that are small, we say There are
at most n things that are small cubes. Finally, to say There are exactly n cubes
that are small, we say There are exactly n things that are small cubes. These
observations probably seem so obvious that they don't require mentioning.
But we will soon see that nothing like this holds for some determiners, and that
the consequences are rather important for the general theory of quantification.
Remember
The notations ∃≥n, ∃≤n, and ∃!n are abbreviations for complex fol expressions meaning there are at least/at most/exactly n things such that
. . . .
Exercises
14.1 If you skipped the You try it section, go back and do it now. Submit the files World Numerical 1
and World Numerical 2.

14.2 Give clear English translations of the following sentences of fol. Which of the following are
logically equivalent and which are not? Explain your answers.
1. ∃!x Tove(x) [Remember that the notation ∃! is an abbreviation, as explained above.]
2. ∃x ∀y [Tove(y) ↔ y = x]
3. ∃x ∀y [Tove(y) → y = x]
4. ∀x ∀y [(Tove(x) ∧ Tove(y)) → x = y]
5. ∃x ∃y [(Tove(x) ∧ Tove(y)) → x = y]

14.3 (Translating numerical claims) In this exercise we will try our hand at translating English
sentences involving numerical claims.
Using Tarski's World, translate the following English sentences.
1. There . . .
2. There . . .
3. There . . .
4. There . . .
5. There . . .
Open Peano's World. Note that all of the English sentences are true in this world. Check
to see that your translations are as well.
Open Bolzano's World. Here sentences 1, 3, and 5 are the only true ones. Verify that your
translations have the right truth values in this world.
Open Skolem's World. Only sentence 5 is true in this world. Check your translations.
Finally, open Montague's World. In this world, sentences 2, 3, and 5 are the only true
ones. Check your translations.
14.4 (Saying more complicated things) Open Skolem's World. Create a file called Sentences 14.4 and
describe the following features of Skolem's World.
1. Use your first sentence to say that there are only cubes and tetrahedra.
2. Next say that there are exactly three cubes.
3. Express the fact that every cube has a tetrahedron that is to its right but is neither
in front of nor in back of it.
4. Express the fact that at least one of the tetrahedra is between two other tetrahedra.
5. Notice that the further back something is, the larger it is. Say this.
6. Note that none of the cubes is to the right of any of the other cubes. Try to say this.
7. Observe that there is a single small tetrahedron and that it is in front of but to neither
side of all the other tetrahedra. State this.
If you have expressed yourself correctly, there is very little you can do to Skolem's World without
making at least one of your sentences false. Basically, all you can do is stretch things out,
that is, move things apart while keeping them aligned. To see this, try making the following
changes. (There's no need to turn in your answers, but try the changes.)
1. Add a new tetrahedron to the world. Find one of your sentences that comes out false.
Move the new tetrahedron so that a different sentence comes out false.
2. Change the size of one of the objects. What sentence now comes out false?
3. Change the shape of one of the objects. What sentence comes out false?
4. Slide one of the cubes to the left. What sentence comes out false?
5. Rearrange the three cubes. What goes wrong now?
14.5 (Ambiguity and numerical quantification) In the Try It on page 314, we saw that the sentence
At least four medium dodecahedra are adjacent to a medium cube.
is ambiguous, having both a strong and a weak reading. Using Tarski's World, open a new
sentence file and translate the strong and weak readings of this sentence into fol as sentences
(1) and (2). Remember that Tarski's World does not understand our abbreviation for at least
four so you will need to write this out in full. Check that the first sentence is true in Anderson's
First World but not in Anderson's Second World, while the second sentence is true in both worlds.
Make some changes to the worlds to help you check that your translations express what you
intend. Submit your sentence file.
14.6 (Games of incomplete information) As you recall, you can sometimes know that a sentence
is true in a world without knowing how to play the game and win. Open Mostowski's World.
Translate the following into first-order logic. Save your sentences as Sentences 14.6. Now, without using the 2-D view, make as good a guess as you can about whether the sentences are true
or not in the world. Once you have assessed a given sentence, use Verify to see if you are right.
Then, with the correct truth value checked, see how far you can go in playing the game. Quit
whenever you get stuck, and play again. Can you predict in advance when you will be able to
win? Do not look at the 2-D view until you have finished the whole exercise.
1. There are at least two tetrahedra.
2. There are at least three tetrahedra.
3. There are at least two dodecahedra.
4. There are at least three dodecahedra.
5. Either there is a small tetrahedron behind a small cube or there isn't.
6. Every large cube is in front of something.
7. Every tetrahedron is in back of something.
8. Every small cube is in back of something.
9. Every cube has something behind it.
10. Every dodecahedron is small, medium, or large.
11. If e is to the left of every dodecahedron, then it is not a dodecahedron.
Now modify the world so that the true sentences are still true, but so that it will be clear how
to play the game and win. When you are done, just submit your sentence file.
14.7 (Satisfiability) Recall that a set of sentences is satisfiable if there is a world in which all of its
sentences are true.
Determine whether the following set of sentences is satisfiable. If it is, build a world. If it is
not, use informal methods of proof to derive a contradiction from the set.
1. Every cube is to the left of every tetrahedron.
2. There are no dodecahedra.
3. There are exactly four cubes.
4. There are exactly four tetrahedra.
5. No tetrahedron is large.
6. Nothing is larger than anything to its right.
7. One thing is to the left of another just in case the latter is behind the former.
14.8 (Numbers of variables) Tarski's World only allows you to use six variables. Let's explore what
kind of limitation this imposes on our language.
1. Translate the sentence There are at least two objects, using only the predicate =. How
many variables do you need?
2. Translate There are at least three objects. How many variables do you need?
3. It is impossible to express the sentence There are at least seven objects using only = and
the six variables available in Tarski's World, no matter how many quantifiers you use.
Try to prove this. [Warning: This is true, but it is very challenging to prove. Contrast
this problem with the one below.] Submit your two sentences and turn in your proof.
14.9 (Reusing variables) In spite of the above exercise, there are in fact sentences we can express
using just the six available variables that can only be true in worlds with at least seven objects.
For example, in Robinson's Sentences, we give such a sentence, one that only uses the variables
x and y.
1. Open this file. Build a world where there are six small cubes arranged on the front
row and test the sentence's truth. Now add one more small cube to the front row, and
test the sentence's truth again. Then play the game committed (incorrectly) to false.
Can you see the pattern in Tarski's World's choice of objects? When it needs to pick
an object for the variable x, it picks the leftmost object to the right of all the previous
choices. Then, when it needs to pick an object for the variable y, it picks the last object
chosen. Can you now see how the reused variables are working?
2. Now delete one of the cubes, and play the game committed (incorrectly) to true. Do
you see why you can't win?
3. Now write a sentence that says there are at least four objects, one in front of the next.
Use only variables x and y. Build some worlds to check whether your sentence is true
under the right conditions. Submit your sentence file.
Section 14.2
This may seem like making pretty heavy weather of an obvious fact, but it
illustrates two things. First, to prove a numerical claim of the form there exist
exactly n objects x such that P(x), which we agreed to abbreviate as ∃!n x P(x),
you need to prove two things: that there are at least n such objects, and that
there are at most n such objects.
The proof also illustrates a point about fol. If we were to translate our
premises and desired conclusion into fol, things would get quite complicated.
If we then tried to prove our fol conclusion from the fol premises using the
rules we have presented earlier, we would completely lose track of the basic
fact that makes the proof work, namely, that 2 × 3 = 6. Rather than explicitly
state and use this fact, as we did above, we would have to rely on it in a hidden
way in the combinatorial details of the proof. While it would be possible to
give such a proof, no one would really do it that way.
The problem has to do with a syntactic shortcoming of fol. Not having
quantifiers that directly express numerical claims in terms of numbers, such
claims must be translated using just ∃, ∀, and =. If we were to add numerical
quantifiers to fol, we would be able to give proofs that correspond much
more closely to the intuitive proofs. Still, the theoretical expressive power of
the language would remain the same.
We can think of the above proof as illustrating a new method of proof.
When trying to prove ∃!n x P(x), prove two things: that there are at least n
objects satisfying P(x), and that there are at most n such objects.
A particularly important special case of this method is with uniqueness
claims, those of the form ∃!x P(x), which say there is exactly one object with
some property. To prove such a claim, we must prove two things, existence
and uniqueness. In proving existence, we prove that there is at least one
object satisfying P(x). Given that, we can then show uniqueness by showing that there is at most one such object. To give an example, let us prove
∃!x [Even(x) ∧ Prime(x)].
Proof: We first prove existence, that is, that there is an even prime.
This we do simply by noting that 2 is even and a prime. Thus,
Exercises
Use Fitch to give formal proofs of the following arguments. You may use Taut Con where it is convenient. We urge you to work backwards, especially with the last problem, whose proof is simple in
conception but complex in execution.
14.10

14.11
∃x ∀y (Cube(y) ↔ y = x)
∃x (Cube(x) ∧ ∀y (Cube(y) → y = x))

14.12
∃x ∃y (Cube(x) ∧ Cube(y) ∧ x ≠ y)
∀x ∀y ∀z ((Cube(x) ∧ Cube(y) ∧ Cube(z)) → (x = y ∨ x = z ∨ y = z))
∃x ∃y (Cube(x) ∧ Cube(y) ∧ x ≠ y ∧ ∀z (Cube(z) → (z = x ∨ z = y)))

14.13
The next two exercises contain arguments with similar premises and the same conclusion. If the argument
is valid, turn in an informal proof. If it is not, submit a world in which the premises are true but the
conclusion is false.
14.14

14.15
The following exercises state some logical truths or valid arguments involving numerical quantifiers. Give
informal proofs of each. Contemplate what it would be like to give a formal proof (for specific values of
n and m) and be thankful we didn't ask you to give one!
14.16
∃≤0 x S(x) ↔ ¬∃x S(x)
[The only hard part about this is figuring out what ∃≤0 x S(x) abbreviates.]
14.17

14.18
∃≥n x A(x)
∃≥m x B(x)
¬∃x (A(x) ∧ B(x))
∃≥n+m x (A(x) ∨ B(x))

14.19
∃≤n x A(x)
∃≤m x B(x)
∃≤n+m x (A(x) ∨ B(x))
14.20
∃x (A(x) ∧ B(x))
14.21 We have seen that ∀x ∀y R(x, y) is logically equivalent to ∀y ∀x R(x, y), and similarly for ∃.
What happens if we replace both of these quantifiers by some numerical quantifier? In particular, is the following argument valid?

∃!x ∃!y R(x, y)
∃!y ∃!x R(x, y)
If so, give an informal proof. If not, describe a counterexample.
The following exercises contain true statements about the domain of natural numbers 0, 1, . . . . Give
informal proofs of these statements.

14.22 ∃!x [x² − 2x + 1 = 0]

14.23 ∃!2 y [y + y = y × y]

14.24 ∃!2 x [x² − 4x + 3 = 0]
Section 14.3
Noun phrases of the form the A are called definite descriptions and the above
analysis is called the Russellian analysis of definite descriptions.
While Russell did not explicitly consider both or neither, the spirit of his
analysis extends naturally to these determiners. We could analyze Both cubes
are small as saying that there are exactly two cubes and each of them is small:
∃!2 x Cube(x) ∧ ∀x [Cube(x) → Small(x)]
Similarly, Neither cube is small would be construed as saying that there are
exactly two cubes and each of them is not small:
∃!2 x Cube(x) ∧ ∀x [Cube(x) → ¬Small(x)]
More generally, Both As are Bs would be translated as:
∃!2 x A(x) ∧ ∀x [A(x) → B(x)]
and Neither A is a B would be translated as:
∃!2 x A(x) ∧ ∀x [A(x) → ¬B(x)]
Notice that on Russells analysis of definite descriptions, the sentence The
cube is not small would be translated as:
∃x [Cube(x) ∧ ∀y (Cube(y) → y = x) ∧ ¬Small(x)]
This is not, logically speaking, the negation of The cube is small. Indeed
both sentences could be false if there are no cubes or if there are too many.
The superficial form of the English sentences makes them look like negations
of one another, but according to Russell, the negation of The cube is small
is something like Either there is not exactly one cube or it is not small. Or
perhaps more clearly, If there is exactly one cube then it is not small. Similarly,
the negation of Both cubes are small would not be Both cubes are not small
but If there are exactly two cubes then they are not both small.
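The point that the two sentences are not negations of each other can be checked with a toy evaluator. The sketch below is our own (the world representation and the function name exactly_one_cube_and are invented for illustration): in a world with no cubes, both Russellian translations come out false.

```python
def exactly_one_cube_and(world, cond):
    """Russellian 'The cube is <cond>': there is exactly one cube,
    and that cube satisfies cond."""
    cubes = [obj for obj in world if obj["shape"] == "cube"]
    return len(cubes) == 1 and cond(cubes[0])

# A world with no cubes at all:
world = [{"shape": "tet", "size": "small"},
         {"shape": "dodec", "size": "large"}]

the_cube_is_small     = exactly_one_cube_and(world, lambda o: o["size"] == "small")
the_cube_is_not_small = exactly_one_cube_and(world, lambda o: o["size"] != "small")

# Both come out false, so neither sentence is the negation of the other.
print(the_cube_is_small, the_cube_is_not_small)  # prints False False
```

A genuine negation pair would have opposite truth values in every world; these two agree (both false) whenever the number of cubes is anything other than one.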
Russells analysis is not without its detractors. The philosopher P. F.
Strawson, for example, argued that Russells analysis misses an important
feature of our use of the determiner the. Return to our example of the elephant. Consider these three sentences:
cannot make a successful claim unless the presuppositions of your claim are
satisfied. With our elephant example, the sentence can only be used to make
a claim in case there is one, and only one, elephant in the speaker's closet.
Otherwise the sentence simply misfires, and so does not have a truth value at
all. It is much like using an fol sentence containing a name b to describe a
world where no object is named b. Similarly, on Strawson's approach, if we
use both elephants in my closet or neither elephant in my closet, our statement
simply misfires unless there are exactly two elephants in my closet.
If Strawson's objection is right, then there will be no general way of translating the, both, or neither into fol, since fol sentences (at least those without
names in them) always have truth values. There is nothing to stop us from
enriching fol to have expressions that work this way. Indeed, this has been
proposed and studied, but that is a different, richer language than fol.
On the other hand, there have been rejoinders to Strawson's objection. For
example, it has been suggested that when we say The elephant in my closet is
not wrinkling my clothes, the suggestion that there is an elephant in my closet
is simply a conversational implicature. To see if this is plausible, we try the
cancellability test. Does the following seem coherent or not? The elephant
in my closet is not wrinkling my clothes. In fact, there is no elephant in my
closet. Some people think that, read with the right intonation, this makes
perfectly good sense. Others disagree.
As we said at the start of this section, these are subtle matters and there is
still no universally accepted theory of how these determiners work in English.
What we can say is that the Russellian analysis is as close as we can come
in fol, that it is important, and that it captures at least some uses of these
determiners. It is the one we will treat in the exercises that follow.
Remember
1. The Russellian analysis of The A is a B is the fol translation of There
is exactly one A and it is a B.
2. The Russellian analysis of Both As are Bs is the fol translation of
There are exactly two As and each of them is a B.
3. The Russellian analysis of Neither A is a B is the fol translation of
There are exactly two As and each of them is not a B.
4. The competing Strawsonian analysis of these determiners treats them
as having presuppositions, and so as only making claims when these
presuppositions are met. On Strawson's analysis, these determiners
cannot be adequately translated in fol.
Exercises
1. Open Russell's Sentences. Sentence 1 is the second of the two ways we saw in the You
try it section on page 377 for saying that there is a single cube. Compare sentence
1 with sentence 2. Sentence 2 is the Russellian analysis of our sentence The cube is
small. Construct a world in which sentence 2 is true.
small. Construct a world in which sentence 2 is true.
2. Construct a world in which sentences 2-7 are all true. (Sentence 7 contains the Russellian analysis of The small dodecahedron is to the left of the medium dodecahedron.)
Submit your world.
14.27 (The Strawsonian analysis of definite descriptions) Using Tarski's World, open a sentence file

14.28 (The Russellian analysis of both and neither) Open Russell's World. Notice that the following
14.29 Discuss the meaning of the determiner Max's. Notice that you can say Max's pet is happy, but
also Max's pets are happy. Give a Russellian and a Strawsonian analysis of this determiner.
Which do you think is better?
Section 14.4
We have seen that many English determiners can be captured in fol, though
by somewhat convoluted circumlocutions. But there are also many determiners that simply aren't expressible in fol. A simple example is the determiner
Most, as in Most cubes are large. There are two difficulties. One is that the
meaning of most is a bit indeterminate. Most cubes are large clearly implies
More than half the cubes are large, but does the latter imply the former? Intuitions differ. But even if we take it to mean the same as More than half,
it cannot be expressed in fol, since the determiner More than half is not
expressible in fol.
It is possible to give a mathematical proof of this fact. For example, consider the sentence:
More than half the dodecahedra are small.
To see the problem, notice that the English sentence makes a claim about the
relative sizes of the set A of small dodecahedra and the set B of dodecahedra that are not small. It says that the set A is larger than the set B and
it does so without claiming anything about how many objects there are in
these sets or in the domain of discourse. To express the desired sentence, we
might try something like the following (where we use A(x) as shorthand for
Dodec(x) ∧ Small(x), and B(x) as shorthand for Dodec(x) ∧ ¬Small(x)):

[∃x A(x) ∧ ¬∃x B(x)] ∨ [∃≥2 x A(x) ∧ ∃≤1 x B(x)] ∨ [∃≥3 x A(x) ∧ ∃≤2 x B(x)] ∨ . . .
The trouble is, there is no place to stop this disjunction! Without some
fixed finite upper bound on the total number of objects in the domain, we
need all of the disjuncts, and so the translation of the English sentence would
be an infinitely long sentence, which fol does not allow. If we knew there
were a maximum of twelve objects in the world, as in Tarskis World, then we
could write a sentence that said what we needed; but without this constraint,
the sentence would have to be infinite.
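The role of the fixed bound can be made concrete with a little arithmetic. The sketch below is our own (both function names are invented): with at most twelve objects in the world, the truncated disjunction agrees everywhere with the direct comparison of the sizes of A and B.

```python
def more_than_half(A, B):
    """The semantic condition: the set A (small dodecahedra) is
    larger than the set B (dodecahedra that are not small)."""
    return len(A) > len(B)

def bounded_disjunction(A, B, bound=12):
    """The disjunction [∃x A ∧ ¬∃x B] ∨ [∃≥2 A ∧ ∃≤1 B] ∨ ...,
    truncated because at most `bound` objects can exist."""
    return any(len(A) >= k + 1 and len(B) <= k for k in range(bound))

# With at most 12 objects, the truncated disjunction agrees with the
# direct size comparison on every possible split of the dodecahedra.
for a in range(13):
    for b in range(13 - a):
        A, B = set(range(a)), set(range(100, 100 + b))
        assert more_than_half(A, B) == bounded_disjunction(A, B)
```

Drop the bound and the agreement requires every disjunct, which is exactly why the fol translation would have to be infinite.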
This is not in itself a proof that the English sentence cannot be expressed
in fol. But it does pinpoint the problem and, using this idea, one can actually
give such a proof. In particular, it is possible to show that for any first-order
sentence S of the blocks language, if S is true in every world where more than
half the dodecahedra are small, then it is also true in some world where less
than half the dodecahedra are small. Unfortunately, the proof of this would
take us beyond the scope of this book.
The fact that we cannot express more than half in fol doesn't mean there
is anything suspect about this determiner. It just means that it does not fall
within the expressive resources of the invented language fol. Nothing stops
us from enriching fol by adding a new quantifier symbol, say Most. Let's
explore this idea for a moment, since it will shed light on some topics from
earlier in the book.
How not to add a determiner
We'll begin by telling you how not to add the determiner Most to the language.
Following the lead from ∀ and ∃, we might start by adding the following clause
to our grammatical rules on page 233:
If S is a wff and ν is a variable, then Most ν S is a wff, and any
occurrence of ν in Most ν S is said to be bound.
We might then say that the sentence Most x S(x) is true in a world just in
case more objects in the domain satisfy S(x) than don't.¹ Thus the sentence
Most x Cube(x) says that most things are cubes.
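This truth condition is easy to state over a finite domain. The sketch below is our own (the name most is invented, and a world is just a Python list): it implements the proposed semantics for the special form Most x S(x).

```python
def most(domain, S):
    """Most x S(x): true in a world just in case more objects
    in the domain satisfy S(x) than don't."""
    yes = sum(1 for obj in domain if S(obj))
    return yes > len(domain) - yes

# A world of three cubes and two dodecahedra: Most x Cube(x) is true.
blocks = ["cube", "cube", "cube", "dodec", "dodec"]
print(most(blocks, lambda b: b == "cube"))  # prints True
```

Notice that most always compares against the whole domain. There is no way to hand it just the dodecahedra and still claim to be translating Most dodecahedra are small, which is exactly the irreducibility problem discussed next.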
How can we use our new language to express our sentence Most dodecahedra are small? The answer is, we can't. If we look back at ∀, ∃, and the
numerical determiners, we note something interesting. It so happens that we
numerical determiners, we note something interesting. It so happens that we
can paraphrase every cube is small and some cube is small using everything
and something; namely, Everything is such that if it is a cube then it is small
and Something is a cube and it is small. At the end of the section on numerical
quantification, we made a similar observation. There is, however, simply no
way to paraphrase Most dodecahedra are small using Most things and expressions that can be translated into fol. After all, it may be that most cubes
are small, even when there are only three or four cubes and millions of dodecahedra and tetrahedra in our domain. Talking about most things is not going
to let us say much of interest about the lonely cubes.
These observations point to something interesting about quantification
and the way it is represented in fol. For any determiner Q, let us mean by
its general form any use of the form Q A B as described at the beginning of
this chapter. In contrast, by its special form we'll mean a use of the form Q
thing(s) B. The following table of examples makes this clearer.
1 For the set-theoretically sophisticated, we note that this definition makes sense even if
the domain of discourse is infinite.
Section 14.4
Determiner      Special form           General form
every           everything             every cube, every student of logic, . . .
some            something              some cube, some student of logic, . . .
no              nothing                no cube, no student of logic, . . .
exactly two     exactly two things     exactly two cubes, exactly two students of logic, . . .
most            most things            most cubes, most students of logic, . . .
Many determiners have the property that the general form can be reduced
to the special form by the suitable use of truth-functional connectives. Let's
call such a determiner reducible. We have seen that every, some, no, and the
various numerical determiners are reducible in this sense. Here are a couple
of the reductions:
Every A B ⇔ Everything is such that if it is an A then it is a B
Exactly two A B ⇔ Exactly two things satisfy A and B
But some determiners, including most, many, few, and the, are not reducible. For non-reducible determiners Q, we cannot add Q to fol by simply
adding the special form in the way we attempted here. We will see how we
can add such determiners in a moment.
There was some good fortune involved when logicians added ∀ and ∃ as
they did. Since every and some are reducible, the definition of fol can get away
with just the special forms, which makes the language particularly simple. On
the other hand, the fact that fol takes the special form as basic also results
in many of the difficulties in translating from English to fol that we have
noted. In particular, the fact that the reduction of Every A uses →, while that
of Some A uses ∧, causes a lot of confusion among beginning students.
How to add a determiner
The observations made above show that if we are going to add a quantifier
like Most to our language, we must add the general form, not just the special
form. Thus, the formation rule should take two wffs and a variable and create
a new wff:
If A and B are wffs and ν is a variable, then Most ν (A, B) is a wff,
and any occurrence of ν in Most ν (A, B) is said to be bound.
The wff Most x (A, B) is read most x satisfying A satisfy B. Notice that the
syntactic form of this wff exhibits the fact that Most x (A, B) expresses a binary
relation between the set A of things satisfying A and the set B of things that
satisfy B. We could use the abbreviation Most x (S) for Most x (x = x, S); this
Chapter 14
is read most things x satisfy S. This, of course, is the special form of the
determiner, whereas the general form takes two wffs.
We need to make sure that our new symbol Most means what we want it
to. Toward this end, let us agree that the sentence Most x (A, B) is true in a
world just in case most objects that satisfy A(x) satisfy B(x) (where by this
we mean more objects satisfy A(x) ∧ B(x) than satisfy A(x) ∧ ¬B(x)).
With these conventions, we can translate our English sentence faithfully as:
Most x (Dodec(x), Small(x))
The order here is very important. While the above sentence says that most
dodecahedra are small, the sentence
Most x (Small(x), Dodec(x))
says that most small things are dodecahedra. These sentences are true under
very different conditions. We will look more closely at the logical properties
of Most and some other determiners in the next section.
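The asymmetry is easy to check by brute force over a small world. A sketch in Python (the helper names are our own):

```python
# Most x (A, B): more objects satisfying A and B than satisfying A but not B.
def most(domain, A, B):
    a = [o for o in domain if A(o)]
    yes = sum(1 for o in a if B(o))
    return yes > len(a) - yes

# Two small dodecahedra, one large dodecahedron, three small cubes.
world = [("dodec", "small")] * 2 + [("dodec", "large")] + [("cube", "small")] * 3
dodec = lambda o: o[0] == "dodec"
small = lambda o: o[1] == "small"
print(most(world, dodec, small))  # True: 2 of the 3 dodecahedra are small
print(most(world, small, dodec))  # False: only 2 of the 5 small things are dodecahedra
```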
Once we see the general pattern, we see that any meaningful determiner
Q of English can be added to fol in a similar manner.
If A and B are wffs and ν is a variable, then Q ν (A, B) is a wff, and
any occurrence of ν in Q ν (A, B) is said to be bound.
The wff Q x (A, B) is read Q x satisfying A satisfy B, or more simply, Q
As are Bs. Thus, for example,
Few x (Cube(x), Small(x))
is read Few cubes are small.
As for the special form, again we use the abbreviation
Q x (S)
for Q x (x = x, S); this is read Q things x satisfy S. For instance, the wff
Many x (Cube(x)) is shorthand for Many x (x = x, Cube(x)), and is read Many
things are cubes.
What about the truth conditions of such wffs? Our reading of them suggests how we might define their truth conditions. We say that the sentence
Q x (A, B) is true in a world just in case Q of the objects that satisfy A(x) also
satisfy B(x). Here are some instances of this definition:
1. At least a quarter x (Cube(x), Small(x)) is true in a world iff at least a quarter of the cubes in that world are small.
Section 14.4
2. At least two x (Cube(x), Small(x)) is true in a world iff at least two cubes
in that world are small.
3. Finitely many x (Cube(x), Small(x)) is true in a world iff finitely many of
the cubes in that world are small.
4. Many x (Cube(x), Small(x)) is true in a world iff many of the cubes in
that world are small.
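Since each such determiner expresses a binary relation between the set of As and the set of Bs, the instances above can be sketched uniformly as functions on finite sets. The dictionary below is our own illustration of this idea:

```python
# Each determiner Q becomes a relation between the extension of A and of B.
determiners = {
    "at least a quarter": lambda A, B: 4 * len(A & B) >= len(A),
    "at least two":       lambda A, B: len(A & B) >= 2,
    "finitely many":      lambda A, B: True,  # trivially true over a finite domain
    "most":               lambda A, B: len(A & B) > len(A - B),
}

cubes = {"c1", "c2", "c3", "c4"}
small = {"c1", "c2", "t1"}
for name, Q in determiners.items():
    print(name, Q(cubes, small))
```

On this world, exactly two of the four cubes are small, so "at least a quarter" and "at least two" come out true while "most" comes out false.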
Chapter 14
Remember
Given any English determiner Q, we can add a corresponding quantifier
Q to fol. In this extended language, the sentence Q x (A, B) is true in a
world just in case Q of the objects that satisfy A(x) also satisfy B(x).
Exercises
14.30 Some of the following English determiners are reducible, some are not. If they are reducible,
explain how the general form can be reduced to the special form. If they do not seem to be
reducible, simply say so.
1. At least three
2. Both
3. Finitely many
4. At least a third
5. All but one
14.31 Open Cooper's World. Suppose we have expanded fol by adding the following expressions:
14.32 Once again open Cooper's World. This time translate the following sentences into English and
say which are true in Cooper's World. Make sure your English translations are clear and
unambiguous.
1. Most y (Tet(y), Small(y))
2. Most z (Cube(z), LeftOf(z, b))
3. Most y Cube(y)
4. Most x (Tet(x), ∃y Adjoins(x, y))
5. ∃y Most x (Tet(x), Adjoins(x, y))
6. Most x (Cube(x), ∃y Adjoins(x, y))
7. ∃y Most x (Cube(x), Adjoins(x, y))
8. Most y (y ≠ b)
9. ∀x (Most y (y ≠ x))
10. Most x (Cube(x), Most y (Tet(y), FrontOf(x, y)))
Section 14.5
This is called the conservativity property of determiners. Here are two instances
of one direction of conservativity, followed by two instances of the other:

If no doctor is a doctor and a lawyer, then no doctor is a lawyer.
If exactly three cubes are small cubes, then exactly three cubes are small.
If few actors are rich, then few actors are rich and actors.
If all good actors are rich, then all good actors are rich and good actors.
It is interesting to speculate why this principle holds of single-word determiners in human languages. There is no logical reason why there could not be
determiners that did not satisfy it. (See, for example, Exercise 14.52.) It might
have something to do with the difficulty of understanding quantification that
does not satisfy the condition, but if so, exactly why remains a puzzle.
There is one word which has the superficial appearance of a determiner
that is not conservative, namely the word only. For example, it is true that
only actors are rich actors but it does not follow that only actors are rich, as
it would if only were conservative. There are independent linguistic grounds
for thinking that only is not really a determiner. One piece of evidence is the
fact that determiners can't be attached to complete noun phrases. You can't
say Many some books are on the table or Few Claire eats pizza. But you can
say Only some books are on the table and Only Claire eats pizza, suggesting
that it is not functioning as a determiner. In addition, only is much more
versatile than determiners, as is shown by the sentences Claire only eats pizza
and Claire eats only pizza. You can't replace only in these sentences with a
determiner and get a grammatical sentence. If only is not a determiner, it is
not a counterexample to the conservativity principle.
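Conservativity says that Q A B holds just in case Q A (A and B) does. We can spot-check instances of this equivalence on particular sets, and watch only fail it; the encodings below are our own, with "few" and similar vague determiners omitted:

```python
# One instance of conservativity: Q(A, B) iff Q(A, A ∩ B).
def conservative_here(Q, A, B):
    return Q(A, B) == Q(A, A & B)

no       = lambda A, B: len(A & B) == 0
exactly3 = lambda A, B: len(A & B) == 3
all_     = lambda A, B: A <= B
only     = lambda A, B: B <= A  # "only As are Bs": every B is an A

A, B = {1, 2, 3, 4}, {3, 4, 5}
for name, Q in [("no", no), ("exactly three", exactly3), ("all", all_), ("only", only)]:
    print(name, conservative_here(Q, A, B))
```

The first three print True on this instance; the entry for only prints False, a counterexample witnessing its non-conservativity.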
Monotonicity
The monotonicity of a determiner has to do with what happens when we
increase or decrease the set B of things satisfying the verb phrase in a sentence of the form Q A B. The determiner Q is said to be monotone increasing
provided for all A, B, and B′, the following argument is valid:

Q x (A(x), B(x))
∀x (B(x) → B′(x))
Q x (A(x), B′(x))

In words, if Q(A, B) and you increase B to a larger set B′, then Q(A, B′).
There is a simple test to determine whether a determiner is monotone increasing:
Test for monotone increasing determiners: Q is monotone increasing if
and only if the following argument is valid:

Q cube(s) is (are) small and in the same row as c.
Q cube(s) is (are) small.

Table 14.1
Monotone increasing    Monotone decreasing    Neither
some                   no                     all but one
every                  neither                exactly two
most                   few
                       at most two
                       finitely many
The reason this test works is that the second premise in the definition of
monotone increasing, ∀x (B(x) → B′(x)), is automatically true. If we try out
the test with a few determiners, we see, for example, that some, every, and
most are monotone increasing, but few is not.
On the other hand, Q is said to be monotone decreasing if things work in
the opposite direction, moving from the larger set B′ to a smaller set B:

Q x (A(x), B′(x))
∀x (B(x) → B′(x))
Q x (A(x), B(x))
The test for monotone decreasing determiners is just the opposite of the
test for monotone increasing determiners:
Test for monotone decreasing determiners: Q is monotone decreasing
if and only if the following argument is valid:
Q cube(s) is (are) small.
Q cube(s) is (are) small and in the same row as c.
Many determiners are monotone increasing, several are monotone decreasing, but some are neither. Using our tests, you can easily verify for yourself
the classifications shown in Table 14.1. To apply our test to the first column
of the table, note that the following argument is valid, and remains so even if
most is replaced by any determiner in this column:
Most cubes are small and in the same row as c.
Most cubes are small.
On the other hand, if we replace most by any of the determiners in the other
columns, the resulting argument is clearly invalid.
To apply the test to the list of monotone decreasing determiners we observe
that the following argument is valid, and remains so if no is replaced by any
of the other determiners in the second column:
No cubes are small.
No cubes are small and in the same row as c.
On the other hand, if we replace no by the determiners in the other columns,
the resulting argument is no longer valid.
If you examine Table 14.1, you might notice that there are no simple one-word determiners in the third column. This is because there aren't any. It
so happens that all the one-word determiners are either monotone increasing
or monotone decreasing, and only a few fall into the decreasing category.
Again, this may have to do with the relative simplicity of monotonic versus
non-monotonic quantification.
Persistence
Persistence is a property of determiners very similar to monotonicity, but
persistence has to do with what happens if we increase or decrease the set
of things satisfying the common noun: the A in a sentence of the form Q A B.
The determiner Q is said to be persistent provided for all A, A′, and B, the
following argument is valid:2
Q x (A(x), B(x))
∀x (A(x) → A′(x))
Q x (A′(x), B(x))

In words, if Q A B and you increase A to a larger A′, then Q A′ B. On the other
hand, Q is said to be anti-persistent if things work in the opposite direction:
2 Some authors refer to persistence as left monotonicity, and what we have been calling
monotonicity as right monotonicity, since they have to do with the left and right arguments,
respectively, when we look at Q A B as a binary relation Q(A, B).
Q x (A′(x), B(x))
∀x (A(x) → A′(x))
Q x (A(x), B(x))

Table 14.2
Anti-persistent    Neither
every              all but one
few                most
at most two        exactly two
finitely many      many
no                 the
                   both
                   neither
You want to be rich, right? Well, according to this report, few actors
have incomes above the federal poverty level. Hence, few actors are
rich.
Your father's argument depends on the fact that few is monotone decreasing.
The set of rich people is a subset of those with incomes above the poverty
level, so if few actors are in the second set, few are in the first. Notice that
we immediately recognize the validity of this inference without even thinking
twice about it.
Suppose you were to continue the discussion by pointing out that the actor
Brad Pitt is extraordinarily rich. Your father might go on this way:
Several organic farmers I know are richer than Brad Pitt. So even
some farmers are extraordinarily rich.
This may seem like an implausible premise, but you know fathers. In any
case, the argument is valid, though perhaps unsound. Its validity rests on the
fact that Several is both persistent and monotone increasing. By persistence,
we can conclude that several farmers are richer than Brad Pitt (since the
organic farmers are a subset of the farmers), and by monotonicity that several
farmers are extraordinarily rich (since everyone richer than Brad Pitt is).
Finally, from the fact that several farmers are extraordinarily rich it obviously
follows that some farmers are (see Exercise 14.51).
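We can replay the father's reasoning with explicit sets, reading several as "at least three" (a common but assumed threshold; all the names below are invented for illustration):

```python
several = lambda A, B: len(A & B) >= 3  # assumed reading: "at least three"

organic_farmers = {"ann", "bo", "cy"}
farmers = organic_farmers | {"dee", "ed"}          # organic farmers ⊆ farmers
richer_than_pitt = {"ann", "bo", "cy"}
extraordinarily_rich = richer_than_pitt | {"zoe"}  # richer-than-Pitt ⊆ rich

print(several(organic_farmers, richer_than_pitt))  # True: the premise
print(several(farmers, richer_than_pitt))          # True by persistence (A grew)
print(several(farmers, extraordinarily_rich))      # True by monotonicity (B grew)
```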
There are many other interesting topics related to the study of determiners,
but this introduction should give you a feel for the kinds of things we can
discover about determiners, and the large role they play in everyday reasoning.
Remember
1. There are three properties of determiners that are critical to their
logical behavior: conservativity, monotonicity, and persistence.
2. All English determiners are conservative (with the exception of only,
which is not usually considered a determiner).
3. Monotonicity has to do with the behavior of the second argument of
the determiner. All basic determiners in English are monotone increasing or decreasing, with most being monotone increasing.
4. Persistence has to do with the behavior of the first argument of the
determiner. It is less common than monotonicity.
Section 14.5
Exercises
For each of the following arguments, decide whether it is valid. If it is, explain why. This explanation
could consist in referring to one of the determiner properties mentioned in this section or it could consist
in an informal proof. If the argument is not valid, carefully describe a counterexample.
14.33
14.34
14.35
14.36
14.37
14.38
14.39
14.40
14.41
14.42 ∃!x Dodec(x)
14.43
14.44
14.45
14.46
14.47
14.48
14.49
14.50
14.51 In one of our example arguments, we noted that Several A B implies Some A B. In general, a
14.52 Consider a hypothetical English determiner allbut. For example, we might say Allbut cubes
are small to mean that all the blocks except the cubes are small. Give an example to show
that allbut is not conservative. Is it monotone increasing or decreasing? Persistent or anti-persistent? Illustrate with arguments expressed in English augmented with allbut.
14.53 (Only) Whether or not only is a determiner, it could still be added to fol, allowing expressions
of the form Only x (A, B), which would be true if and only if only As are Bs.
1. While Only is not conservative, it does satisfy a very similar property. What is it?
2. Discuss monotonicity and persistence for Only. Illustrate your discussion with arguments expressed in English.
14.54 (Adverbs of temporal quantification) It is interesting to extend the above discussion of quantification from determiners to so-called adverbs of temporal quantification, like always, often,
usually, seldom, sometimes, and never. To get a hint how this might go, let's explore the
ambiguities in the English sentence Max usually feeds Carl at 2:00 p.m.
Earlier, we treated expressions like 2:00 as names of times on a particular day. To
interpret this sentence in a reasonable way, however, we need to treat such expressions as predicates of times. So we need to add to our language a predicate 2pm(t)
that holds of those times t (in the domain of discourse) that occur at 2 p.m., no matter on what day they occur. Let us suppose that Usually means most times. Thus,
Section 14.6
The expressions in bold take two common noun expressions and a verb expression to make a sentence. The techniques used to study generalized quantification in earlier sections can be extended to study these determiners, but
we have to think of them as expressing three-place relations on sets, not just
two-place relations. Thus, if we added these determiners to the language, they
would have the general form Q x (A(x), B(x), C(x)).
Exercises
14.55 Try to translate the nursery rhyme about Humpty Dumpty into fol. Point out the various
14.56 Consider the following two claims. Does either follow logically from the other? Are they logically
14.57 Recall the first-order language introduced in Table 1.2, page 30. Some of the following can be
given first-order translations using that language, some cannot. Translate those that can be.
For the others, explain why they cannot be faithfully translated, and discuss whether they
could be translated with additional names, predicates, function symbols, and quantifiers, or if
the shortcoming in the language is more serious.
1. Claire gave Max at least two pets at 2:00 pm.
2. Claire gave Max at most two pets at 2:00 pm.
3. Claire gave Max several pets at 2:00 pm.
4. Claire was a student before Max was.
5. The pet Max gave Claire at 2:00 pm was hungry.
6. Most pets were hungry at noon.
7. All but two pets were hungry at noon.
8. There is at least one student who made Max angry every time he (or she) gave Max a
pet.
9. Max was angry whenever a particular student gave him a pet.
10. If someone gave Max a pet, it must have been Claire.
11. No pet fed by Max between 2:00 and 2:05 belonged to Claire.
12. If Claire fed one of Max's pets before 2:00 pm, then Max was angry at 2:00 pm.
13. Folly's owner was a student.
14. Before 3:00, no one gave anyone a pet unless it was hungry.
15. No one should give anyone a pet unless it is hungry.
16. A pet that is not hungry always belongs to someone or other.
17. A pet that is not hungry must belong to someone or other.
18. Max was angry at 2:00 pm because Claire had fed one of his pets.
19. When Max gave Folly to Claire, Folly was hungry, but Folly was not hungry five minutes
later.
20. No student could possibly be a pet.
14.58 Here is a famous puzzle. There was a Roman who went by two names, Cicero and Tully.
What is at stake here is nothing more or less than the principle that if (. . . a . . . ) is true, and
a = b, then (. . . b . . . ) is true. [Hint: Does the argument sound more reasonable if we replace
claims by claims that? By the way, the puzzle is usually stated with believes rather than
claims.]
The following more difficult exercises are not specifically relevant to this section, but to the general topic
of truth of quantified sentences. They can be considered as research projects in certain types of classes.
14.59 (Persistence through expansion) As we saw in Exercise 11.5, page 301, some sentences simply
cant be made false by adding objects of various sorts to the world. Once they are true, they
stay true. For example, the sentence There is at least one cube and one tetrahedron, if true,
cannot be made false by adding objects to the world. This exercise delves into the analysis of
this phenomenon in a bit more depth.
Lets say that a sentence A is persistent through expansion if, whenever it is true, it remains
true no matter how many objects are added to the world. (In logic books, this is usually called
just persistence, or persistence under extensions.) Notice that this is a semantic notion. That
is, its defined in terms of truth in worlds. But there is a corresponding syntactic notion. Call a
sentence existential if it is logically equivalent to a prenex sentence containing only existential
quantifiers.
Show that Cube(a) ∧ ∃x FrontOf(x, a) is an existential sentence.
Is ∃x FrontOf(x, a) → Cube(a) an existential sentence?
Show that every existential sentence is persistent through expansion. [Hint: You will have
to prove something slightly stronger, by induction on wffs. If you are not familiar with
induction on wffs, just try to understand why this is the case. If you are familiar with
induction, try to give a rigorous proof.] Conclude that every sentence equivalent to an
existential sentence is persistent through expansion.
It is a theorem, due to Tarski and Łoś (a Polish logician whose name is pronounced more like
wash than like loss), that any sentence that is persistent through expansion is existential.
Since this is the converse of what you were asked to prove, we can conclude that a sentence
is persistent through expansion if and only if it is existential. This is a classic example of a
theorem that gives a syntactic characterization of some semantic notion. For a proof of the
theorem, see any textbook in model theory.
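One half of the Tarski-Łoś theorem can at least be illustrated by brute force: evaluate an existential sentence in a world, then expand the world and observe that the witnesses survive. This is a sketch of the phenomenon, not a proof, and the encoding of worlds as lists of shapes is our own:

```python
# Sentence: ∃x ∃y (Cube(x) ∧ Tet(y)), over a world given as a list of shapes.
def sentence(world):
    return any(s == "cube" for s in world) and any(s == "tet" for s in world)

world = ["cube", "tet"]
assert sentence(world)       # true in the small world
world += ["dodec"] * 50      # expand the world with new objects
assert sentence(world)       # still true: the original witnesses remain
print("persistent through expansion on this example")
```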
14.60 (Invariance under motion, part 1) The real world does not hold still, the way the world of
mathematical objects does. Things move around. The truth values of some sentences change
with such motion, while the truth values of other sentences don't. Open Ockham's World and
Ockham's Sentences. Verify that all the sentences are true in the given world. Make as many
of Ockham's Sentences false as you can by just moving objects around. Don't add or remove
any objects from the world, or change their size or shape. You should be able to make false (in a
single world) all of the sentences containing any spatial predicates, that is, containing LeftOf,
RightOf, FrontOf, BackOf, or Between. (However, this is a quirk of this list of sentences, as we
will see in the next exercise.) Save the world as World 14.60.
14.61 (Invariance under motion, part 2) Call a sentence invariant under motion if, for every world,
the truth value of the sentence (whether true or false) does not vary as objects move around
in that world.
1. Prove that if a sentence does not contain any spatial predicates, then it is invariant
under motion.
2. Give an example of a sentence containing a spatial predicate that is nonetheless invariant under motion.
3. Give another such example. But this time, make sure your sentence is not first-order
equivalent to any sentence that doesn't contain spatial predicates.
14.62 (Persistence under growth, part 1) In the real world, things not only move around, they also
grow larger. (Some things also shrink, but ignore that for now.) Starting with Ockham's World,
make the following sentences true by allowing some of the objects to grow:
1. ¬∃x Small(x)
2. ∀x ∀y (Cube(x) ∧ Dodec(y) → Larger(y, x))
3. ∃y (Cube(y) ∧ ∀v (v ≠ y → Larger(v, y)))
4. ∃x ∃y (Large(x) ∧ Large(y) ∧ x ≠ y)
How many of Ockhams Sentences are false in this world? Save your world as World 14.62.
14.63 (Persistence under growth, part 2) Say that a sentence S is persistent under growth if, for
every world in which S is true, S remains true if some or all of the objects in that world get
larger. Thus, Large(a) and ¬Small(a) are persistent under growth, but Smaller(a, b) isn't. Give
a syntactic definition of as large a set of sentences as you can for which every sentence in the
set is persistent under growth. Can you prove that all of these sentences are persistent under
growth?
Chapter 14
Part III
Applications and Metatheory
Chapter 15
Russell's Paradox
Section 15.1
The first person to study sets extensively and to appreciate the inconsistencies
lurking in the naive conception was the nineteenth-century German mathematician Georg Cantor. According to the naive conception, a set is just a
collection of things, like a set of chairs, a set of dominoes, or a set of numbers.
The things in the collection are said to be members of the set. We write a ∈ b,
and read a is a member (or an element) of b, if a is one of the objects that
makes up the set b.
There is only one constant symbol in set theory and this is denoted by a
special symbol. We therefore don't need to use the letters at the beginning
you read the section on generalized quantifiers in Chapter 14, you will recognize these
as the quantifiers formed from some and the nouns thing and time respectively.
Notice that this is not just one axiom, but an infinite collection of axioms,
one for each wff P (x). For this reason, it is called an axiom scheme. When we
replace P (x) by some specific wff then we call the result an instance of the
axiom scheme. We will see later that some instances of this axiom scheme are
inconsistent, so we will have to modify the scheme. But for now we assume
all of its instances as axioms in our theory of sets.
Actually, the Axiom of Comprehension is a bit more general than our notation suggests, since the wff P (x) can contain variables other than x, say
z1 , . . . , zn . What we really want is the universal closure of the displayed formula, where all the other variables are universally quantified:
Axiom 1. Axiom of Unrestricted Comprehension
∀z1 . . . ∀zn ∃a ∀x [x ∈ a ↔ P (x)]
Most applications of the axiom will in fact make use of these additional
variables. For example, the claim that for any objects z1 and z2 , there is a set
containing z1 and z2 as its only members, is an instance of this axiom scheme:
∀z1 ∀z2 ∃a ∀x [x ∈ a ↔ (x = z1 ∨ x = z2 )]
Here the formula P (x) that we are using as an instance is x = z1 ∨ x = z2 ,
which contains the free variables z1 and z2 . These must be universally
quantified to form an instance of comprehension.
The Axiom of Comprehension, as we have stated it, is weaker than the intuitive principle that motivated it. After all, we have already seen that there
are many determinate properties expressible in English that cannot be expressed in any particular version of fol. For example, we can't express the
English connective because or the quantifier many in fol. Since there is no
formula of fol capable of expressing properties requiring these connectives,
the axiom does not guarantee that there are sets containing just those objects
that satisfy such properties. These sets are getting left out of our axiomatization. Still, the axiom as stated is quite strong. In fact, it is too strong, as we
will soon see.
As we said, there are two principles that capture the naive conception of a set.
The second principle is that a set is completely determined by its members.
If you know the members of a set b, then you know everything there is to
know about the identity of the set. This principle is captured by the Axiom
of Extensionality. Stated precisely, the Axiom of Extensionality says that if
sets a and b have the same elements, then a = b. We can express this in fol
as follows:
Axiom 2. (Axiom of Extensionality)
∀a ∀b [∀x (x ∈ a ↔ x ∈ b) → a = b]
In particular, the identity of a set does not depend on how it is described.
For example, suppose we have the set containing just the two numbers 7 and
11. It can be described as the set of prime numbers between 6 and 12, or as
the set of solutions to the equation x² − 18x + 77 = 0. It might even be the
set of Max's favorite numbers, who knows? The important point is that the
Axiom of Extensionality tells us that all of these descriptions pick out the
same set.
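The point is mirrored exactly by set values in a programming language: a set is identified by its members, not by the expression that produced it. A small Python check, using only the arithmetic already in the text:

```python
# Three descriptions, one set: extensionality in miniature.
primes_6_to_12 = frozenset(n for n in range(7, 12)
                           if all(n % d != 0 for d in range(2, n)))
roots = frozenset(n for n in range(100) if n * n - 18 * n + 77 == 0)
listed = frozenset({7, 11})
print(primes_6_to_12 == roots == listed)  # True
```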
Notice that if we were developing a theory of properties rather than sets,
we would not take extensionality as an axiom. It is perfectly reasonable to have
two distinct properties that apply to exactly the same things. For example,
the property of being a prime number between 6 and 12 is a different property
from that of being a solution to the equation x² − 18x + 77 = 0, and both of
these are different properties from the property of being one of Max's favorite
numbers. It happens that these properties hold of exactly the same numbers,
but the properties themselves are still different.
We can use the Axioms of Extensionality and Comprehension together to
prove an important claim about sets, namely that each formula gives rise to
a unique set.
Proposition 1. For each wff P (x) we can prove that there is a unique set of
objects that satisfy P (x). Using the notation introduced in Section 14.1:

∀z1 . . . ∀zn ∃!a ∀x [x ∈ a ↔ P (x)]
This is our first chance to apply our techniques of informal proof to a claim
in set theory. Our proof might look like this:
Proof: We will prove the claim using universal generalization. Let
z1 ,. . . ,zn be arbitrary objects. The Axiom of Comprehension assures
us that there is at least one set of objects that satisfy P (x). So we
need only prove that there is at most one such set. Suppose a and b
are both sets that have as members exactly those things that satisfy
P (x). That is, a and b satisfy:
∀x [x ∈ a ↔ P (x)]
∀x [x ∈ b ↔ P (x)]
This brace notation for sets is convenient but inessential. It is not part of
the official first-order language of set theory, since it doesn't fit the format of
first-order languages. We will only use it in informal contexts. In any event,
anything that can be said using brace notation can be said in the official
language. For example, b ∈ {x | P (x)} could be written:

∃a [∀x (x ∈ a ↔ P (x)) ∧ b ∈ a]
Remember
Naive set theory has the Axiom of Extensionality and the Axiom Scheme
of Comprehension. Comprehension asserts that every first-order formula
determines a set. Extensionality says that sets with the same members
are identical.
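Restricted to a fixed finite universe, both axioms have a direct computational analogue. (The restriction to a fixed universe is our own device to keep the sketch consistent; it is precisely what unrestricted comprehension does not do.)

```python
# Comprehension over a fixed universe: each predicate P determines a set.
def comprehension(universe, P):
    return frozenset(x for x in universe if P(x))

U = range(10)
evens   = comprehension(U, lambda x: x % 2 == 0)
doubles = comprehension(U, lambda x: x in {2 * k for k in range(5)})
# Extensionality: same members, therefore the very same set.
print(evens == doubles)  # True
```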
Chapter 15
Exercises
15.1
15.2
15.3

Section 15.2
to represent it. Some authors use 0 to denote the empty set. It can also be
informally denoted by {}.
When there is one and only one object x satisfying P (x) the Axiom of
Comprehension guarantees there is a set whose only member is that object.
We call this the singleton set containing x, and denote it by {x}.
Some students are tempted to confuse an object with the singleton set
containing that object. But in that direction lies, if not madness, at least
dreadful confusion. After all, a singleton set is a set (an abstract object) and
its member might have been any object at all, say the Washington Monument.
The Washington Monument is a physical object, not a set. So we must not
confuse an object x with the set {x}. Even if x is a set, we must not confuse
it with its own singleton. For example, a set x might have any number of
elements in it, but {x} has exactly one element: x.
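The distinction between x and {x} is also visible in Python, where a set is an object distinct from its members (frozenset is used below only so that a set can itself be a member of a set):

```python
x = "Washington Monument"
print(len({x}))  # 1: the singleton has exactly one member, whatever x is

s = frozenset({1, 2, 3})  # even when x is itself a set with three elements...
print(len({s}))           # ...the singleton {x} still has exactly one element
```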
The notation for singleton sets is an example of list notation for sets, which
allows us to just list the elements of the set in braces. We write {3} for the set
that contains only the number 3, {2, 3} for the set that contains just the numbers 2 and 3, and {Washington Monument, White House, Lincoln Memorial}
for the set that contains those three landmarks. Like brace notation, list notation is a convenient but dispensable part of the notation that we use for
discussing set theory. The description of a set in list notation is just an abbreviation for a longer expression in the official language.
Introducing list notation suggests that there are sets with two elements,
three elements, four elements and so on, but before we can be sure of this we
have to justify the existence of any such set using the Axiom of Comprehension. The next proposition proves the existence of a set containing any two
elements that we choose.
Proposition 2. (Unordered Pairs) For any objects x and y there is a (unique)
set a = {x, y}. In symbols:
∀x ∀y ∃!a ∀w (w ∈ a ↔ (w = x ∨ w = y))
Proof: Let x and y be arbitrary objects, and let
a = {w | w = x ∨ w = y}
The existence of a is guaranteed by Comprehension, and its uniqueness follows from the Axiom of Extensionality. Clearly a has x and
y and nothing else as elements.
We did not previously prove the existence of singleton sets, but we now
have two ways to show that singleton sets exist. One is to use an instance of
Comprehension,

∀y ∃a ∀x (x ∈ a ↔ x = y)

which says that for any y there is a set whose elements are those that are
equal to y, i.e. {y}.
Another is to apply the previous result about pairs, with the same object
playing the role of both x and y, thus:
Proposition 3. (Singletons) For any object x there is a singleton set {x}.
singleton sets
15.7 Suppose that a1 and a2 are sets, each of which has only the Washington Monument as a
member. Prove (informally) that a1 = a2 .
15.8 Give an informal proof that there is only one empty set. (Hint: Use the Axiom of Extensionality.)
15.9 Give an informal proof that the set of even primes greater than 10 is equal to the set of even
primes greater than 100.
1. {7, 8, 9}
2. {3 + 4, 4 + 4, 5 + 4}
3. {7, 8, 9, 3 + 4}
4. {7, 3 + 4, 5 + 2, 6 + 1}
Section 15.3
Subsets
The next notion is closely related to the membership relation, but fundamentally different. It is the subset relation, and is defined as follows:
subset (⊆)
a ⊆ b ↔ ∀x (x ∈ a → x ∈ b)
It doesn't make much difference which way you think of it. Different people
prefer different understandings. The first is probably the most common, since
it keeps the official language of set theory pretty sparse.
Let's prove a proposition involving the subset relation that is very obvious,
but worth noting.
Proposition 4. For any set a, a ⊆ a.
Proof: Let a be an arbitrary set. For purposes of general conditional
proof, assume that c is an arbitrary member of a. Then trivially (by
reiteration), c is a member of a. So ∀x (x ∈ a → x ∈ a). But then
we can apply our definition of subset to conclude that a ⊆ a. Hence,
∀a (a ⊆ a). (You are asked to formalize this proof in Exercise 15.14.)
The following proposition is very easy to prove, but it is also extremely
useful. You will have many opportunities to apply it in what follows.
Proposition 5. For all sets a and b, a = b if and only if a ⊆ b and b ⊆ a.
In symbols:
∀a ∀b (a = b ↔ (a ⊆ b ∧ b ⊆ a))
Proof: Again, we use the method of universal generalization. Let a
and b be arbitrary sets. To prove the biconditional, we first prove
that if a = b then a ⊆ b and b ⊆ a. So, assume that a = b. We need
to prove that a ⊆ b and b ⊆ a. But this follows from Proposition 4
and two uses of the indiscernibility of identicals.
To prove the other direction of the biconditional, we assume that
a ⊆ b and b ⊆ a, and show that a = b. To prove this, we use the
Axiom of Extensionality. By that axiom, it suffices to prove that a
and b have the same members. But this follows from our assumptions,
which tell us that every member of a is a member of b and vice versa.
Since a and b were arbitrary sets, our proof is complete. (You are
asked to formalize this proof in Exercise 15.15.)
Remember
Let a and b be sets.
1. a ⊆ b iff every element of a is an element of b.
2. a = b iff a ⊆ b and b ⊆ a.
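Python's frozenset operators happen to track both of these facts: `<=` is the subset test, and equality holds exactly when the subset test succeeds in both directions. A quick illustration, not part of the formal development:

```python
a = frozenset({1, 2, 3})
b = frozenset({3, 2, 1})
c = frozenset({1, 2})

assert c <= a               # every element of c is an element of a
assert not (a <= c)
assert a <= a               # Proposition 4: every set is a subset of itself

# Proposition 5: a = b iff a is a subset of b and b is a subset of a.
assert (a == b) == (a <= b and b <= a)
```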
Exercises
15.12 Give an informal proof of the following simple theorem: For every set a, ∅ ⊆ a.
15.13 Give a formal proof of the following simple theorem: For every set a, ∅ ⊆ a.
15.14 In the file Exercise 15.14, you are asked to give a formal proof of Proposition 4 from the
definition of the subset relation. The proof is very easy, so you should not use any of the Con
rules. (You will find the symbol ⊆ if you choose the Set tab in the Fitch toolbar.)
15.15 In the file Exercise 15.15, you are asked to give a formal proof of Proposition 5 from the Axiom
of Extensionality, the definition of subset, and Proposition 4. The proof is a bit more complex,
so you may use Taut Con if you like.
15.16 Give a formal proof that the subset relation is transitive, that is, that
∀x ∀y ∀z ((x ⊆ y ∧ y ⊆ z) → x ⊆ z)
15.17 Proposition 4 tells us that every set is a subset of itself. Sometimes we are interested in just
those subsets of a set that are not equal to the set. We can define these proper subsets using
the formula ∀x ∀y (x ⊂ y ↔ (x ⊆ y ∧ x ≠ y)). Give an informal proof of
∀x ∀y (x ⊂ y → ¬(y ⊂ x))
Section 15.4
Intersection and Union
1. The intersection of a and b is the set whose members are just those objects in
both a and b. This set is generally written a ∩ b. In symbols:
intersection (∩)
∀a ∀b ∀z (z ∈ a ∩ b ↔ (z ∈ a ∧ z ∈ b))
2. The union of a and b is the set whose members are just those objects in
either a or b or both. This set is generally written a ∪ b. In symbols:
union (∪)
∀a ∀b ∀z (z ∈ a ∪ b ↔ (z ∈ a ∨ z ∈ b))
At first sight, these definitions seem no more problematic than the definition of the subset relation. But if you think about it, you will see that there
is actually something a bit fishy about them as they stand. For how do we
know that there are sets of the kind described? For example, even if we know
that a and b are sets, how do we know that there is a set whose members
are the objects in both a and b? And how do we know that there is exactly
one such set? Remember the rules of the road. We have to prove everything
from explicitly given axioms. Can we prove, based on our axioms, that there
is such a unique set?
It turns out that we can, at least with the naive axioms. But later, we
will have to modify the Axiom of Comprehension to avoid inconsistencies.
The modified form of this axiom will allow us to justify only one of these two
operations. To justify the union operation, we will need a new axiom. But we
will get to that in good time.
Proposition 6. (Intersection) For any pair of sets a and b there is one and
only one set c whose members are the objects in both a and b. In symbols:
existence and
uniqueness of a ∩ b
∀a ∀b ∃!c ∀x (x ∈ c ↔ (x ∈ a ∧ x ∈ b))
This proposition is actually just an instance of Proposition 1 on page 417.
Look back at the formula displayed for that proposition, and consider the
special case where z1 is a, z2 is b, and P (x) is the wff x ∈ a ∧ x ∈ b. So
Proposition 6 is really just a corollary (that is, an immediate consequence) of
Proposition 1.
2 Function symbols are discussed in the optional Section 1.5. You should read this section
now if you skipped over it.
We can make this same point using our brace notation. Proposition 1
guarantees a unique set {x | P (x)} for any formula P (x), and we are simply
noting that the intersection of sets a and b is the set c = {x | x ∈ a ∧ x ∈ b}.
The union operation is very similar to intersection, except that it forms
the set of objects that are in either of its argument sets, not both as for
intersection.
existence and
uniqueness of a ∪ b
Proposition 7. (Union) For any pair of sets a and b there is one and only
one set c whose members are the objects in either a or b or both. In symbols:
∀a ∀b ∃!c ∀x (x ∈ c ↔ (x ∈ a ∨ x ∈ b))
Again, this is a corollary of Proposition 1, since c = {x | x ∈ a ∨ x ∈ b}.
This set clearly has the desired members.
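In Python, the role of Comprehension can be played by a set comprehension; the helper names `intersection` and `union` below are hypothetical, chosen only for this sketch:

```python
def intersection(a, b):
    # {x | x in a and x in b} -- a comprehension in the Axiom's sense
    return frozenset(x for x in a if x in b)

def union(a, b):
    # {x | x in a or x in b}; we draw candidates from both sets, since a
    # Python comprehension needs a concrete collection to range over
    return frozenset(list(a) + list(b))

a, b = frozenset({2, 3, 4, 5}), frozenset({2, 4, 6, 8})
assert intersection(a, b) == frozenset({2, 4})
assert union(a, b) == frozenset({2, 3, 4, 5, 6, 8})
```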
The definition of union is superficially similar to the definition of pair sets
that we encountered on page 420. Can you spot the difference? The pair set
axiom says that there is a set containing objects that are equal to one of two
objects, while the union axiom says that there is a set containing objects that
are members of one of two objects. A pair set contains at most two objects,
while there is no limit to the number of objects that a union could contain.
Here are several theorems we can prove using the above definitions and
results.
Proposition 8. Let a, b, and c be any sets.
1. a ∩ b = b ∩ a
2. a ∪ b = b ∪ a
3. a ∩ b = b if and only if b ⊆ a
4. a ∪ b = b if and only if a ⊆ b
5. a ∩ (b ∪ c) = (a ∩ b) ∪ (a ∩ c)
6. a ∪ (b ∩ c) = (a ∪ b) ∩ (a ∪ c)
We prove two of these and leave the rest as exercises.
Proof of 8.1: This follows quite easily from the definition of intersection and the Axiom of Extensionality. To show that a ∩ b = b ∩ a,
we need only show that a ∩ b and b ∩ a have the same members. By
the definition of intersection, the members of a ∩ b are the things
that are in both a and b, whereas the members of b ∩ a are the things
that are in both b and a. These are clearly the same things. We will
look at a formal proof of this in the next You try it section.
Proof of 8.3: Since (8.3) is the most interesting, we prove it. Let a
and b be arbitrary sets. We need to prove a ∩ b = b iff b ⊆ a. To prove
this, we give two conditional proofs. First, assume a ∩ b = b. We need
to prove that b ⊆ a. But this means ∀x (x ∈ b → x ∈ a), so we will
use the method of general conditional proof. Let x be an arbitrary
member of b. We need to show that x ∈ a. But since b = a ∩ b, we see
that x ∈ a ∩ b. Thus x ∈ a ∧ x ∈ b by the definition of intersection.
Then it follows, of course, that x ∈ a, as desired.
Now let's prove the other half of the biconditional. Thus, assume that
b ⊆ a and let us prove that a ∩ b = b. By Proposition 5, it suffices to
prove a ∩ b ⊆ b and b ⊆ a ∩ b. The first of these is easy, and does not
even use our assumption. So let's prove the second, that b ⊆ a ∩ b.
That is, we must prove that ∀x (x ∈ b → x ∈ a ∩ b). This is proven
by general conditional proof. Thus, let x be an arbitrary member of
b. We need to prove that x ∈ a ∩ b. But by our assumption, b ⊆ a,
so x ∈ a. Hence, x ∈ a ∩ b, as desired.
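Parts 8.3 and 8.4 can be spot-checked exhaustively over all subsets of a small universe; this brute-force check is only an illustration, of course, not a substitute for the proofs:

```python
from itertools import combinations, product

universe = [1, 2, 3]
# Every subset of the universe, of every size.
subsets = [frozenset(c) for r in range(len(universe) + 1)
           for c in combinations(universe, r)]

for a, b in product(subsets, repeat=2):
    assert ((a & b) == b) == (b <= a)   # Proposition 8.3
    assert ((a | b) == b) == (a <= b)   # Proposition 8.4
```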
You try it
................................................................
1. Open the Fitch file Intersection 1. Here we have given a complete formal
proof of Proposition 8.1 from the definition of intersection and the Axiom
of Extensionality. (We have written int(x, y) for x ∩ y.) We haven't
specified the rules or support steps in the proof, so this is what you need to
do. This is the first formal proof we've given using function symbols. The
appearance of complex terms makes it a little harder to spot the instances
of the quantifier rules.
"
2. Specify the rules and support steps for each step except the next to last
(i.e., step 22). The heart of the proof is really the steps in which c ∈ a ∧ c ∈ b
is commuted to c ∈ b ∧ c ∈ a, and vice versa.
"
"
4. When you have a completed proof specifying all rules and supports, save
it as Proof Intersection 1.
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Exercises
15.18 If you skipped the You try it section, go back and do it now. Submit the file Proof Intersection 1.
15.19 Let a = {2, 3, 4, 5}, b = {2, 4, 6, 8}, and c = {3, 5, 7, 9}. Compute the following and express your
answers in list notation.
"
tion 8.2.
Proposition 8.2. You will find the problem set up in the file Exercise 15.21. You
may use Taut Con, since a completely
formal proof would be quite tedious.
"
!!
tion 8.4.
Proposition 8.4. You will find the problem set up in the file Exercise 15.23. You
may use Taut Con in your proof.
"
"
tion 8.5.
Chapter 15
tion 8.6.
15.26 Give an informal proof that for every set a there is a unique set c such that for all x, x ∈ c
Section 15.5
Ordered Pairs
In order for set theory to be a useful framework for modeling structures of
various sorts, it is important to find a way to represent order. For example, in
high school you learned about the representation of lines and curves as sets of
ordered pairs of real numbers. A circle of radius one, centered at the origin,
is represented as the following set of ordered pairs:
modeling order
{Cx, yD | x2 + y 2 = 1}
But sets themselves are unordered. For example {1, 0} = {0, 1} by Extensionality. So how are we to represent ordered pairs and other ordered objects?
What we need is some way of modeling ordered pairs that allows us to
prove the following:
⟨x, y⟩ = ⟨u, v⟩ ↔ (x = u ∧ y = v)
If we can prove that this holds of our representation of ordered pairs, then we
know that the representation allows us to determine which is the first element
of the ordered pair and which is the second.
It turns out that there are many ways to do this. The simplest and most
widely used is to model the ordered pair ⟨x, y⟩ by means of the unlikely set
{{x}, {x, y}}.
Definition For any objects x and y, we take the ordered pair ⟨x, y⟩ to be the
set {{x}, {x, y}}. In symbols:
ordered pair
⟨x, y⟩ = {{x}, {x, y}}
ordered n-tuples
Notice that there is something about this set that we have not encountered
explicitly before, namely that an ordered pair is a set whose members are
themselves sets: it is a set of sets. There is nothing mysterious about this fact;
it follows directly from the fact that the axiom of comprehension is stated
very generally. The range of the quantifier x in the axiom is the entire domain
of naive set theory, which certainly contains all of the sets, in addition to any
other objects that might exist.
Once we have figured out how to represent ordered pairs, the way is open
for us to represent ordered triples, quadruples, etc. For example, we will represent
the ordered triple ⟨x, y, z⟩ as ⟨x, ⟨y, z⟩⟩. More generally, we will represent
ordered n-tuples as ⟨x1 , ⟨x2 , . . . , xn ⟩⟩.
By the way, as with brace notation for sets, the ordered pair notation
⟨x, y⟩ is not part of the official language of set theory. It can be eliminated
from formulas without difficulty, though the formulas get rather long.
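The Kuratowski coding and its fundamental property can be played with in Python, again modeling sets as frozensets; `pair` and `triple` are hypothetical helper names for this sketch:

```python
from itertools import product

def pair(x, y):
    # Kuratowski coding of the ordered pair: {{x}, {x, y}}
    return frozenset({frozenset({x}), frozenset({x, y})})

def triple(x, y, z):
    # an ordered triple represented as a nested pair
    return pair(x, pair(y, z))

# The fundamental property: pairs are equal iff they agree componentwise.
for x, y, u, v in product(range(3), repeat=4):
    assert (pair(x, y) == pair(u, v)) == ((x, y) == (u, v))

assert pair(1, 1) == frozenset({frozenset({1})})   # collapses when x = y
```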
Exercises
15.27 Using Propositions 2 and 3, let a = {2, 3} and let b = {a}. How many members does a have?
How many members does b have? Does a = b? That is, is {2, 3} = {{2, 3}}?
15.28 How many sets are members of the set described below?
{{}, {{}, 3, {}}, {}}
[Hint: First rewrite this using ∅ as a notation for the empty set. Then delete from each
description of a set any redundancies.]
15.29 Apply the Unordered Pair theorem (Proposition 2) to x = y = ∅. What set is obtained? Call
this set c. Now apply the theorem to x = ∅, y = c. Do you obtain the same set or a different
set?
15.30 This exercise and the one to follow lead you through the basic properties of ordered pairs.
1. How many members does the set {{x}, {x, y}} contain if x ≠ y? How many if x = y?
2. Recall that we defined ⟨x, y⟩ = {{x}, {x, y}}. How do we know that for any x and y
there is a unique set ⟨x, y⟩?
3. Give an informal proof that the easy half of the fundamental property of ordered pairs
holds with this definition:
(x = u ∧ y = v) → ⟨x, y⟩ = ⟨u, v⟩
15.31 Building on Problem 15.30, prove that for any two sets a and b, there is a set of all ordered
pairs ⟨x, y⟩ such that x ∈ a and y ∈ b. This set is called the Cartesian Product of a and b, and
is denoted by a × b.
15.32 Suppose that a has three elements and b has five. What can you say about the size of a ∪ b,
a ∩ b, and a × b? (a × b is defined in Exercise 15.31.) [Hint: in some of these cases, all you can
do is give upper and lower bounds on the size of the resulting set. In other words, you'll have
to say the set contains at least such and such members and at most so and so.]
Section 15.6
Modeling Relations in Set Theory
properties of relations
A binary relation R may or may not have each of the following properties:
∀x R(x, x) (reflexivity)
∀x ¬R(x, x) (irreflexivity)
∀x ∀y (R(x, y) → R(y, x)) (symmetry)
∀x ∀y (R(x, y) → ¬R(y, x)) (asymmetry)
∀x ∀y [(R(x, y) ∧ R(y, x)) → x = y] (antisymmetry)
inverse or converse
The inverse (or converse) of a relation R is the relation
R⁻¹ = {⟨x, y⟩ | ⟨y, x⟩ ∈ R}
Thus, for example, the extension of smaller in some domain is always the
inverse of the extension of larger. In an exercise, we ask you to prove some
simple properties of inverse relations, including one showing that if S is the
inverse of R, then R is the inverse of S.
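For relations on a small finite domain, these definitions translate directly into Python, representing a relation as a set of tuples; the helper names here are hypothetical, chosen for this sketch:

```python
def is_reflexive(R, D):
    # every object in the domain D bears R to itself
    return all((x, x) in R for x in D)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

def inverse(R):
    # R^-1 = {<x, y> | <y, x> in R}: swap each pair
    return {(y, x) for (x, y) in R}

less_than = {(1, 2), (1, 3), (2, 3)}   # "less than" on {1, 2, 3}
assert is_transitive(less_than)
assert not is_symmetric(less_than)
assert inverse(inverse(less_than)) == less_than   # cf. Exercise 15.44
```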
Equivalence relations and equivalence classes
Many relations have the properties of reflexivity, symmetry, and transitivity.
We have seen one example: being the same shape as. Such relations are called
equivalence relations, since they each express some kind of equivalence among
objects. Some other equivalence relations expressible in the blocks language
include being the same size as, being in the same row as, and being in the same
column as. Other equivalence relations include has the same birthday as, has
the same parents as, and wears the same size shoes as. The identity relation
is also an equivalence relation, even though it never classifies distinct objects
as equivalent, the way others do.
As these examples illustrate, equivalence relations group together objects
that are the same in some dimension or other. This fact makes it natural to
talk about the collections of objects that are the same as one another along the
given dimension. For example, if we are talking about the same size relation,
equivalence relations
say among shirts in a store, we can talk about all the shirts of a particular size,
say small, medium, and large, and even group them onto three appropriate
racks.
We can model this grouping process very nicely in set theory with an important construction known as equivalence classes. This construction is widely
used in mathematics and will be needed in our proof of the Completeness Theorem for the formal proof system F.
Given any equivalence relation R on a set D, we can group together the
objects that are deemed equivalent by means of R. Specifically, for each x ∈ D,
let [x]R be the set
{y ∈ D | ⟨x, y⟩ ∈ R}
equivalence classes
In words, [x]R is the set of things equivalent to x with respect to the relation
R. It is called the equivalence class of x. (If x is a small shirt, then think
of [x]SameSize as the store's small rack.) The fact that this grouping operation behaves the way we would hope and expect is captured by the following
proposition. (We typically omit writing the subscript R from [x]R when it is
clear from context, as in the following proposition.)
Proposition 9. Let R be an equivalence relation on a set D.
1. For each x, x ∈ [x].
2. For all x, y, [x] = [y] if and only if ⟨x, y⟩ ∈ R.
3. For all x, y, [x] = [y] if and only if [x] ∩ [y] ≠ ∅.
Proof: (1) follows from the fact that R is reflexive on D. (2) is more
substantive. Suppose that [x] = [y]. By (1), y ∈ [y], so y ∈ [x]. But
then by the definition of [x], ⟨x, y⟩ ∈ R. For the converse, suppose
that ⟨x, y⟩ ∈ R. We need to show that [x] = [y]. To do this, it suffices
to prove that [x] ⊆ [y] and [y] ⊆ [x]. We prove the first, the second
being entirely similar. Let z ∈ [x]. We need to show that z ∈ [y]. Since
z ∈ [x], ⟨x, z⟩ ∈ R. From the fact that ⟨x, y⟩ ∈ R, using symmetry,
we obtain ⟨y, x⟩ ∈ R. By transitivity, from ⟨y, x⟩ ∈ R and ⟨x, z⟩ ∈ R
we obtain ⟨y, z⟩ ∈ R. But then z ∈ [y], as desired. The proof of (3)
is similar and is left as an exercise.
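The construction can be carried out concretely for a finite equivalence relation; here "same parity" on {0, …, 5} plays the role of R, with `equivalence_class` a hypothetical helper name:

```python
def equivalence_class(x, R):
    # [x]_R = {y | <x, y> in R}
    return frozenset(y for (u, y) in R if u == x)

# "Same parity" is an equivalence relation on D = {0, ..., 5}.
D = range(6)
R = {(x, y) for x in D for y in D if x % 2 == y % 2}

classes = {equivalence_class(x, R) for x in D}
assert classes == {frozenset({0, 2, 4}), frozenset({1, 3, 5})}

# Proposition 9.2: [x] = [y] iff <x, y> is in R.
assert all((equivalence_class(x, R) == equivalence_class(y, R)) == ((x, y) in R)
           for x in D for y in D)
```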
Exercises
15.33 Open the Fitch file Exercise 15.33. This file contains as goals the sentences expressing that the
same shape relation is reflexive, symmetric, and transitive (and hence an equivalence relation).
You can check that each of these sentences can be proven outright with a single application
of Ana Con. However, in this exercise we ask you to prove this applying Ana Con only to
atomic sentences. Thus, the exercise is to show how these sentences follow from the meaning
of the basic predicate, using just the quantifier rules and propositional logic.
For the next six exercises, we define relations R and S so that R(a, b) holds if either a or b is a
tetrahedron, and a is in the same row as b, whereas S(a, b) holds if both a and b are tetrahedra, and in
the same row. The exercises ask you to decide whether R or S has various of the properties we have been
studying. If it does, open the appropriate Fitch exercise file and submit a proof. If it does not, submit
a world that provides a counterexample. Thus, for example, when we ask whether R is reflexive, you
should create a world in which there is an object that does not bear R to itself, since R is not in fact
reflexive. In cases where you give a proof, you may use Ana Con applied to literals.
15.34 Is R reflexive?
15.35 Is R symmetric?
15.36 Is R transitive?
15.37 Is S reflexive?
15.38 Is S symmetric?
15.39 Is S transitive?
15.40 Fill in the following table, putting yes or no to indicate whether the relation expressed by the
predicate at the top of the column has the property indicated at the left.

                Smaller   SameCol   Adjoins   LeftOf
Transitive
Reflexive
Irreflexive
Symmetric
Asymmetric
Antisymmetric
15.41 Use Tarski's World to open the file Venn's World. Write out the extension of the same column
relation in this world. (It contains eight ordered pairs.) Then write out the extension of the
between relation in this world. (This will be a set of ordered triples.) Finally, what is the
extension of the adjoins relation in this world? Turn in your answers.
15.42 Describe a valid inference scheme (similar to the one displayed on page 432) that goes with
each of the following properties of binary relations: symmetry, antisymmetry, asymmetry, and
irreflexivity.
15.43 What are the inverses of the following binary relations: older than, as tall as, sibling of, father of?
15.44 Give informal proofs of the following simple facts about inverse relations.
1. R is symmetric iff R = R⁻¹.
2. For any relation R, (R⁻¹)⁻¹ = R.
15.45 Use Tarski's World to open the file Venn's World. Write out the equivalence classes that go with
each of the following equivalence relations: same shape, same size, same row, and identity.
You can write the equivalence classes using list notation. For example, one of the same shape
equivalence classes is {a, e}.
15.46 Consider the definition
[x]R = {y ∈ D | ⟨x, y⟩ ∈ R}
Explain how Proposition 1 can be used to show that the set displayed on the right side of this
equation exists.
15.47 (Partitions and equivalence relations) Let D be some set and let P be some set of non-empty
subsets of D with the property that every element of D is in exactly one member of P. Such a
set is said to be a partition of D. Define a relation E on D by: ⟨a, b⟩ ∈ E iff there is an X ∈ P
such that a ∈ X and b ∈ X. Show that E is an equivalence relation and that P is the set of its
equivalence classes.
15.48 If a and b are subsets of D, then the Cartesian product (defined in Exercise 15.31) a × b is a
binary relation on D. Which of the properties of relations discussed in this section does this
relation have? (As an example, you will discover that a × b is irreflexive if and only if a ∩ b = ∅.)
Your answer should show that in the case where a = b = D, a × b is an equivalence relation.
How many equivalence classes does it have?
Section 15.7
Functions
The notion of a function is one of the most important in mathematics. We
have already discussed functions to some extent in Section 1.5.
Intuitively, a function is simply a way of doing things to things or assigning things to things: assigning license numbers to cars, assigning grades to
functions as special
kind of relation
∀x ∃≤1 y R(x, y)
In other words, a relation is a function if for any input there is at most one
output. If the function also has the following property, then it is called a
total function on D:
total functions
Totality: ∀x ∃y R(x, y)
Total functions give answers for every object in the domain. If a function
is not total on D, it is called a partial function on D.3
partial functions
Whether or not a function is total or partial depends very much on just
what the domain D of discourse is. If D is the set of all people living or dead,
then intuitively the father of function is total, though admittedly things get a
bit hazy at the dawn of humankind. But if D is the set of living people, then
this function is definitely partial. It only assigns a value to a person whose
father is still living.
There are some standard notational conventions used with functions. First,
it is standard to use letters like f, g, h and so forth to range over functions.
Second, it is common practice to write f (x) = y rather than ⟨x, y⟩ ∈ f when
f is a function.
The domain of a function f is the set of objects x for which f (x) is defined:
domain of function
{x | ∃y ⟨x, y⟩ ∈ f }
3 Actually, usage varies. Some authors use partial function to include total functions,
that is, to be synonymous with function.
When dealing with partial functions, the notion of an extension of a function is important. A function f′ is an extension of a function f if the domain
of f is a subset of the domain of f′ and f (a) = f′(a) for any a in the domain of f . In other words, f and f′ agree on their assignments wherever they
are both defined, but the extension f′ may make additional assignments not
made by f . Notice that this is a different meaning of extension from the one
that we mentioned earlier in the section on modeling relations. Sorry for the
confusion, but there you have it.
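Partial functions and their extensions can be modeled in Python with dictionaries of input/output pairs; `extends` and the names in the example domain are hypothetical, chosen only for this sketch:

```python
# A partial function on a domain D, modeled as a dict of input/output pairs.
D = {"alice", "bob", "carol"}
f = {"alice": "ted"}                    # defined only for alice: partial on D
g = {"alice": "ted", "bob": "sam"}      # makes an additional assignment

def extends(g, f):
    # g extends f: dom(f) is a subset of dom(g), and they agree on dom(f)
    return set(f) <= set(g) and all(g[x] == f[x] for x in f)

assert extends(g, f)
assert not extends(f, g)
assert set(f) < D                       # f is not total on D
```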
Notice that the identity relation on D, {⟨x, x⟩ | x ∈ D}, is a total function
on D: it assigns each object to itself. When we think of this relation as a
function we usually write it as id. Thus, id(x) = x for all x ∈ D.
Later in the book we will be using functions to model rows of truth tables,
naming functions for individual constants, and most importantly in defining
the notion of a first-order structure, the notion needed to make the concept
of first-order consequence mathematically rigorous.
Exercises
15.50 Use Tarski's World to open the file Venn's World. List the ordered pairs in the frontmost (fm)
function described in Section 1.5 (page 33). Is the function total or partial? What is its range?
15.51 Which of the following sets represent functions on the set D = {1, 2, 3, 4}? For those which are
functions, what are their domains and ranges?
15.52 What is the domain and range of the square root function on the set N = {0, 1, 2, . . .} of all
natural numbers?
15.53 Open the Fitch file Exercise 15.53. The premise here defines R to be the frontmost relation.
The goal of the exercise is to prove that this relation is functional. You may use Taut Con as
well as Ana Con applied to literals.
A function f is said to be injective or one-to-one if it always assigns different values to different objects
in its domain. In symbols, if f (x) = f (y) then x = y for all x, y in the domain of f .
15.54 Which of the following functions are one-to-one: father of, student id number of, frontmost, and
fingerprint of ? (You may need to decide just what the domain of the function should be before
deciding whether the function is injective. For frontmost, take the domain to be Venn's World.)
15.55 Let f (x) = 2x for any natural number x. What is the domain of this function? What is its
range? Is the function one-to-one?
15.56 Let f (x) = x² for any natural number x. What is the domain of this function? What is its
range? Is the function one-to-one? How does your answer change if we take the domain to
consist of all the integers, both positive and negative?
15.57 Let E be an equivalence relation on a set D. Consider the relation R that holds between any
x in D and its equivalence class [x]E . Is this a function? If so, what is its domain? What is its
range? Under what conditions is it a one-to-one function?
Section 15.8
The Powerset of a Set
powersets (℘)
Given a set b, the powerset of b, written ℘b, is the set of all subsets of b.
Proposition 10. (Powersets) For any set b there is a unique set whose members are just the subsets of b. In symbols:
∀b ∃c ∀x (x ∈ c ↔ x ⊆ b)
Proof: By the Axiom of Comprehension, we may form the set c =
{x | x ⊆ b}. This is the desired set. By the Axiom of Extensionality,
there can be only one such set.
By way of example, let us form the powerset of the set b = {2, 3}. Thus,
we need a set whose members are all the subsets of b. There are four of these.
The most obvious two are the singletons {2} and {3}. The other two are the
empty set, which is a subset of every set, as we saw in Problem 15.12, and the
set b itself, since every set is a subset of itself. Thus:
℘b = {∅, {2}, {3}, {2, 3}}
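For finite sets, the powerset can be computed directly, e.g. with itertools; `powerset` is a hypothetical helper name for this sketch:

```python
from itertools import chain, combinations

def powerset(b):
    # all subsets of b, of every size from 0 to len(b)
    items = list(b)
    return frozenset(frozenset(c) for c in
                     chain.from_iterable(combinations(items, r)
                                         for r in range(len(items) + 1)))

b = frozenset({2, 3})
assert powerset(b) == {frozenset(), frozenset({2}),
                       frozenset({3}), frozenset({2, 3})}
assert b in powerset(b)             # every set is a subset of itself
assert frozenset() in powerset(b)   # the empty set is a subset of every set
```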
Here are some facts about the powerset operation. We will ask you to prove
them in the exercises.
Proposition 11. Let a and b be any sets.
1. b ∈ ℘b
2. ∅ ∈ ℘b
3. a ⊆ b iff ℘a ⊆ ℘b
It is possible for a set to have some of its own subsets as elements. For
example, any set that has the empty set as an element has a subset as an
element, since the empty set is a subset of every set. To take another example,
the set
{Washington Monument}
is both a subset and an element of the set
{Washington Monument, {Washington Monument}}
However, it turns out that no set can have all of its subsets as elements.
Proposition 12. For any set b, it is not the case that ℘b ⊆ b.
Proof: Let b be any set. We want to prove that ℘b ⊈ b, or equivalently, that there is a member of ℘b that is not a member of b.
To prove this, we construct a particular subset of b that is not an
element of b. Let
c = {x | x ∈ b ∧ x ∉ x}
by the Axiom of Comprehension. This set c is clearly a subset of b,
since it was defined to consist of those members of b satisfying some
additional condition. It follows from the definition of the powerset
operation that c is an element of ℘b. We will show that c ∉ b.
Toward a proof by contradiction, suppose that c ∈ b. Then either
c ∈ c or c ∉ c. But which? It is not hard to see that neither can
be the case. First, suppose that c ∈ c. Then by our definition of
c, c is one of those members of b that is left out of c. So c ∉ c.
Next consider the possibility that c ∉ c. But then c is one of those
members of b that satisfies the defining condition for c. Thus c ∈ c.
Thus we have proven that c ∈ c ↔ c ∉ c, which is a contradiction.
So our assumption that c ∈ b must be false, so ℘b ⊈ b.
This theorem applies to both finite and infinite sets. The proof shows how
to take any set b and find a set c which is a subset of b but not a member of b,
namely the set c = {x | x ∈ b and x ∉ x}. This is sometimes called the Russell
set for b, after Bertrand Russell. So what we have proved in the preceding can
be restated as:
Proposition 13. For any set b, the Russell set for b, the set
{x | x ∈ b ∧ x ∉ x},
is a subset of b but not a member of b.
This result is, as we will see, a very important result, one that immediately
implies Proposition 12.
Let's compute the Russell set for a few sets. If b = {0, 1}, then the Russell
set for b is just b itself. If b = {0, {0, {0, . . . }}}, then the Russell set for b is just
{0}, since b ∈ b. Finally, if b = {Washington Monument}, then the Russell set
for b is just b itself.
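For well-founded finite sets (which is all that Python's frozenset can represent, since a frozenset can never contain itself), the Russell set for b always comes out equal to b itself, as in the {0, 1} example; `russell_set` is a hypothetical helper name for this sketch:

```python
def russell_set(b):
    # {x | x in b and x not in x}; the test "x in x" only makes sense
    # when x is itself a set (a frozenset here)
    return frozenset(x for x in b
                     if not (isinstance(x, frozenset) and x in x))

b = frozenset({0, 1})
c = russell_set(b)
assert c == b       # no frozenset is a member of itself, so nothing is excluded
assert c not in b   # as Proposition 13 predicts, c is not a member of b
```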
Remember
The powerset of a set b is the set of all its subsets:
℘b = {a | a ⊆ b}
Exercises
"
"
15.61 Compute .
"
"
"
"
tion 11.
15.64 Here are a number of conjectures you might make. Some are true, but some are false. Prove
the true ones, and find examples to show that the others are false.
1. For any set b, ∅ ∈ ℘b.
2. For any set b, b ⊆ ℘b.
3. For any sets a and b, ℘(a ∩ b) = ℘a ∩ ℘b.
4. For any sets a and b, ℘(a ∪ b) = ℘a ∪ ℘b.
15.65 What is the Russell set for each of the following sets?
1. {}
2. A set a satisfying a = {a}
3. A set {1, a} where a = {a}
4. The set of all sets
Section 15.9
Russell's Paradox
We are now in a position to show that something is seriously amiss with
the theory we have been developing. Namely, we can prove the negation of
Proposition 12. In fact, we can prove the following, which directly contradicts
Proposition 12.
Proposition 14. There is a set c such that ℘c ⊆ c.
Proof: Using the Axiom of Comprehension, there is a universal set,
a set that contains everything. This is the set c = {x | x = x}. But
then every subset of c is a member of c, so ℘c is a subset of c.
universal set (V )
Russell's Paradox
The set c used in the above proof is called the universal set and is usually
denoted by V . It is called that because it contains everything as a member, including itself. What we have in fact shown is that the powerset of the
universal set both is and is not a subset of the universal set.
Let us look at our contradiction a bit more closely. Our proof of Proposition 12, applied to the special case of the universal set, gives rise to the
set
Z = {x | x ∈ V ∧ x ∉ x}
This is just the Russell set for the universal set. But Proposition 13 tells us
that for any set a, the Russell set for a is not a member of a. So the Russell
set for V is not a member of V , which is a contradiction because everything
is in V .
This set Z is called the (absolute) Russell set, and the contradiction we
have just established is called Russell's Paradox.
It would be hard to overdramatize the impact the discovery of Russell's
Paradox had on set theory at the turn of the century. Simple as it is, it shook
the subject to its foundations. It is just as if in arithmetic we discovered a
proof that 23 + 27 = 50 and 23 + 27 ≠ 50. Or as if in geometry we could prove
that the area of a square both is and is not the square of the side. But here
we are in just that position. This shows that there is something wrong with
our starting assumptions of the whole theory, the two axioms with which we
began. There simply is no domain of sets which satisfies these assumptions.
This discovery was regarded as a paradox just because it had earlier seemed to
most mathematicians that the intuitive universe of sets did satisfy the axioms.
Russell's Paradox is just the tip of an iceberg of problematic results in
naive set theory. These paradoxes resulted in a wide-ranging attempt to clarify
Exercises
15.66 In the file Exercise 15.66, you are asked to give a formal proof of ⊥ from the single premise
∃y ∀x (x ∈ y ↔ x ∉ x). The problem is, this is a straightforward instance of the Axiom of
Comprehension (using x ∉ x for the formula P (x)). Your formal proof will thus show that this
axiom is inconsistent. (We could also have asked you to prove, from no premises, the negation
of this instance of Comprehension, showing that the negation is a logical truth.)
Section 15.9
Section 15.10: Zermelo-Fraenkel set theory zfc
The paradoxes of naive set theory show us that our intuitive notion of set is
simply inconsistent. We must go back and rethink the assumptions on which
the theory rests. However, in doing this rethinking, we do not want to throw
out the baby with the bath water. Set theory has proved so valuable as a useful
toolkit for mathematicians that we would like to develop a new set theory in
which all of the previous results, except for the proof of inconsistency, are
provable.
If we examine the Russell Paradox closely, we see that it is actually a
straightforward refutation of the Axiom of Comprehension. It shows that there
is no set determined by the property of not belonging to itself. That is, the
following is, on the one hand, a logical truth, but also the negation of an
instance of Comprehension:
¬∃c ∀x (x ∈ c ↔ x ∉ x)
The Axiom of Extensionality is not needed in the derivation of this fact.
So it is the Comprehension Axiom that is the problem, as you proved in
exercise 15.66. In fact, back in Chapter 13, Exercise 13.52, we asked you to
give a formal proof of
¬∃y ∀x [E(x, y) ↔ ¬E(x, x)]
This is just the above sentence with E(x, y) used instead of x ∈ y. The
proof shows that the sentence is actually a first-order validity; its validity does
not depend on anything about the meaning of ∈. It follows that no coherent
conception of set can countenance the Russell set.
But why is there no such set? It is not enough to say that the set leads
us to a contradiction. We would like to understand why this is so. Various
answers have been proposed to this question.
Cumulative sets
One popular view is the cumulation metaphor due to the logician Ernst
Zermelo. Zermelo's idea is that sets should be thought of as formed by abstract
acts of collecting together previously given objects. We start with some basic
objects. We collect sets of these objects. Then we collect sets whose members
are the objects and earlier sets, and so on and on. Before one can form a set
by this abstract act of collecting, one must already have all of its members,
Zermelo suggested.
Chapter 15
On this conception, sets come in distinct, discrete stages, each set arising
at the first stage after the stages where all of its members arise. For example,
if set x arises at stage 17 and set y at stage 37, then a = {x, y} would arise
at stage 38. If b is constructed at some stage, then its powerset ℘b will be
constructed at the next stage. Each stage gives rise to sets that can be used
to form new sets at a later stage. Once a set has been formed, we can always
create a new stage by forming its power set, and so there is no last stage of this
process. On Zermelo's conception, the reason there can never be a universal
set is that as any set b arises, there is always its powerset ℘b to be formed later,
and so there is no stage at which the universal set V can appear. In this sense,
V is too big to be a set.
The most common form of modern set theory is Zermelo-Fraenkel set theory, also known as zfc. The axioms of zfc capture this cumulative idea of
sets. The axioms allow us to prove the existence of a basic collection of sets,
those that exist at stage 1, and additional axioms permit us to collect together
sets of objects from one stage to form the members of the next stage.
In zfc, it is generally assumed that we are dealing with pure sets, that
is, there is nothing but sets in the domain of discourse. The only basic object
with which we start our collection operations is the empty set ∅, and its
existence must be justified by an axiom. If we want to speak about numbers
or any other objects in zfc, we must build models of them within the theory.
For example, in zfc, we could model 0 by the empty set ∅, 1 by {∅}, 2 by {{∅}},
and so on.
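This modelling of the numbers can be sketched in a few lines of Python, assuming (as in the illustration above) that frozensets stand in for pure sets; numeral is our own illustrative name:

```python
def numeral(n):
    """Model n as a pure set: 0 is the empty set, k+1 is the singleton {k}."""
    s = frozenset()                # 0 = empty set
    for _ in range(n):
        s = frozenset([s])         # successor: wrap in a singleton
    return s

assert numeral(0) == frozenset()
assert numeral(1) == frozenset([frozenset()])       # {empty set}
assert numeral(2) == frozenset([numeral(1)])        # {{empty set}}
```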
Here is a list of the axioms of zfc.
1. Axiom of Extensionality:
∀a ∀b (∀x (x ∈ a ↔ x ∈ b) → a = b)
Since the Axiom of Extensionality is not implicated in any set theoretic
paradox, and since it does not even assert the existence of any sets, we
are safe using it in the same form as before. If we are developing a theory
of pure sets, however, it is more common to state it in a single-sorted
language:
∀y ∀z (∀x (x ∈ y ↔ x ∈ z) → y = z)
The difference between these two is that the first only addresses the
identity of sets, while the second addresses the identity of everything in
the domain of discourse. Note in particular that the second version implies that there is at most one thing in the domain that has no members
Section 15.10
(which would be the empty set). We will continue to state the remaining axioms in our many-sorted language, since it makes them easier to
understand.
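Python's frozensets behave extensionally in just this way, which is what makes them a handy (if rough) model for experimenting with these axioms:

```python
# Extensionality in miniature: two frozensets are identical exactly when
# they have the same members, however they were written down.
a = frozenset([1, 2, 3])
b = frozenset([3, 2, 1, 1])        # same members, different presentation
assert a == b                      # identical by extensionality
assert frozenset() == frozenset(set())   # there is only one empty set
```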
2. Axiom of Separation:
∀z1 . . . ∀zn ∀a ∃b ∀x [x ∈ b ↔ (x ∈ a ∧ P(x))]
The Axiom of Separation is a weakened version of the Axiom of Unrestricted Comprehension that led us into contradiction in naive set theory.
To see how it differs, compare it to the full version of Comprehension
stated on page 416.
Where the Axiom of Comprehension allowed us to prove the existence
of any set of objects satisfying a formula, the Axiom of Separation only
allows us to use a set that already exists, and form the subset of its
members that satisfy some formula. We can separate the elements satisfying the property out from some larger set. The idea here is that if the
set a has already been collected from previous stages of the cumulation,
then any subset must also be available at the same stage.
As promised, the existence of the intersection a ∩ c of sets a and c follows
directly from the Axiom of Separation. Using the property x ∈ c as P(x)
in the Axiom of Separation yields the same formula as using x ∈ a ∧ x ∈ c
in the Axiom of Comprehension. However, there are some sets that the
Axiom of Comprehension would permit that cannot be proved to exist
on the basis of Separation. This is good, since in particular we cannot
prove the existence of the universal set. (In fact we can show that it does
not exist.) It is easy to show that the resulting theory is consistent (see
Exercise 15.71). Unfortunately, we also cannot prove the existence of
sets that we do want in our set theory. In particular it blocks the proofs
that the empty set, unions and powersets exist. None of the instances of
the Axiom of Comprehension that we used to prove the existence of such
sets are also instances of the Axiom of Separation. We need to add extra
axioms to ensure that these unproblematic operations are still available
to us.
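Separation has a direct analogue in Python's comprehension syntax, which likewise only ever filters an already-given collection. A minimal sketch (separate is our own illustrative helper):

```python
def separate(a, pred):
    """The subset {x in a | pred(x)}: always a subset of the given set a."""
    return frozenset(x for x in a if pred(x))

a = frozenset(range(10))
c = frozenset([2, 3, 5, 7, 11])

# Intersection via Separation, taking P(x) to be "x is in c":
assert separate(a, lambda x: x in c) == a & c
# Whatever the property, the result never escapes a:
assert separate(a, lambda x: True) <= a
```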
3. Unordered Pair Axiom: For any two objects there is a set that has
both as elements (and no others).
∀x ∀y ∃a ∀z (z ∈ a ↔ (z = x ∨ z = y))
It is a requirement of first-order logic that the domain of discourse is
non-empty, and so the Unordered Pair Axiom implies the existence of a
Chapter 15
set that contains this object, whatever it is. From this set, we can use
the Axiom of Separation to derive the existence of the empty set ∅ as the
subset of this set whose elements satisfy x ≠ x.
4. Union Axiom: Given any set a of sets, the union of all the members
of a is also a set. That is:
∀a ∃b ∀x [x ∈ b ↔ ∃c (c ∈ a ∧ x ∈ c)]
This is a generalization of the union axiom that we presented earlier.
The idea here is that a is a set of sets, and the union set whose existence
is guaranteed by this axiom is the set of all objects that are in at least
one member of the set a.
Using this axiom, we can show that for any sets c and d, c ∪ d exists. c ∪ d
is the set of objects that are in either c or d, which means that they are
members of some member of {c, d}. The Union Axiom generalizes this
idea, because the set a may have infinitely many members and so we can
form the union of all of the members of this set. There is not necessarily
a finite sequence of binary union operations that could be used to form
the same set.
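In the same illustrative frozenset model, the Union Axiom corresponds to flattening a set of sets by one level (big_union is our own name):

```python
def big_union(a):
    """The union of all members of a: everything in at least one member."""
    return frozenset(x for c in a for x in c)

c, d = frozenset([1, 2]), frozenset([2, 3])
# Binary union is the special case applied to the pair {c, d}:
assert big_union(frozenset([c, d])) == c | d
# The axiom also covers many sets at once:
assert big_union(frozenset(frozenset([n]) for n in range(5))) == frozenset(range(5))
```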
5. Powerset Axiom: Every set has a powerset.
∀a ∃b ∀x (x ∈ b ↔ x ⊆ a)
The preceding four axioms are important instances of the Axiom of
Unrestricted Comprehension that we used in the development of naive
set theory. In zfc we are not allowed to use every instance of this axiom,
since this leads to inconsistency. But we can use these instances which
we can show lead to a consistent theory.
The next axiom guarantees the existence of an infinite set.
6. Axiom of Infinity: There is a set containing the empty set and
containing {x} for every x that it contains.
∃a (∅ ∈ a ∧ ∀x (x ∈ a → {x} ∈ a))
The set whose existence is guaranteed by this axiom has as many elements as there are natural numbers. We can think of the set ∅ as
representing the number 0, {∅} as representing 1, {{∅}} as representing
2, and so on. In general, the singleton set containing the representation
of a number is the representation of the next number.
Section 15.10
The next two axioms use functions to construct new sets out of existing
sets.
7. Axiom of Replacement: Suppose that you have a formula P (x, y)
which holds of exactly one y for each x in a set a, that is:
∀x (x ∈ a → ∃!y P(x, y))
Then, there is a set
{y | ∃x (x ∈ a ∧ P(x, y))}
The Axiom of Replacement tells us that if we have a set and a formula
which relates each element of the set to a unique object, then the collection of those objects is also a set. This axiom is actually a schema,
since P (x, y) can be any formula with the required uniqueness property,
and each such formula defines a set.
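With a Python function standing in for a functional relation P(x, y), Replacement can be sketched as forming the image of a set (replacement is our own illustrative name):

```python
def replacement(a, f):
    """The image {y | for some x in a, y = f(x)} of the set a under f."""
    return frozenset(f(x) for x in a)

a = frozenset([0, 1, 2])
singletons = replacement(a, lambda x: frozenset([x]))
assert singletons == frozenset([frozenset([0]), frozenset([1]), frozenset([2])])
```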
8. Axiom of Choice: Suppose a is a set whose members are all non-empty
sets. Then there is a function f whose domain is a and which satisfies
the following:
∀x (x ∈ a → f(x) ∈ x)
The idea here is that the function f looks at each set in a and chooses
exactly one member of that set, sort of like Max picking his favorite
book from each shelf in the bookcase. The function f is called a choice
function for a. The Axiom of Choice guarantees the existence of f and,
thanks to Replacement, of the set that forms the range of f:
{f(x) | x ∈ a}
Unlike the previous axioms, the Axiom of Choice is not a straightforward logical consequence of naive set theory.4 The Axiom of Choice has
a long and somewhat convoluted history. There are many, many equivalent ways of stating it; in fact there is a whole book of statements
equivalent to the axiom of choice. In the early days of set theory some
authors took it for granted, others saw no reason to suppose it to be
true. Nowadays it is taken for granted as being obviously true by most
mathematicians. The attitude is that while there may be no way to define a choice function for a given set a, and so no way to prove that
one exists by means of Separation, such functions exist nonetheless, and
so are asserted to exist by this axiom. It is extremely widely used in
modern mathematics.
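For finite families of sets we can always write down a choice function concretely. The Python sketch below picks the least element of each set; here min is just one definable way to choose, whereas the point of the axiom is that a choice function exists even when no definable rule does:

```python
def choice_function(a):
    """Map each non-empty set in a to one of its own members."""
    return {x: min(x) for x in a}

a = frozenset([frozenset([3, 1]), frozenset([2]), frozenset([9, 4, 7])])
f = choice_function(a)
assert all(f[x] in x for x in a)                      # f(x) is in x, for every x in a
assert frozenset(f.values()) == frozenset([1, 2, 4])  # the range of f is a set
```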
4 Technically, the Axiom of Choice does follow from the axioms of naive set theory, but only because those axioms are inconsistent, and everything follows from inconsistent premises.
9. Axiom of Regularity: Every non-empty set has a member with which it
has no members in common.
∀a (a ≠ ∅ → ∃x (x ∈ a ∧ ¬∃z (z ∈ x ∧ z ∈ a)))
On the cumulative conception of sets this axiom is justified, since the
members of a set must all arise at stages earlier than the set itself. The
axiom is also known as the Axiom of Foundation, because of its relationship to a particular kind of induction called wellfounded induction. For
the relation between wellfounded induction and the Axiom of Regularity, see
Exercise 16.10.
You should examine the axioms of zfc in turn to see if you think they
hold on Zermelo's conception of set.
Sizes of infinite sets
One view of the problem caused by considering the collection of all objects
to be a set is that this collection is just too big to be a completed totality.
As we have seen, the universal set V cannot appear at any stage of Zermelos
cumulative hierarchy of sets. But this is a slightly different objection to considering V to be a set, an objection that is due to John von Neumann.
If V is too big, then exactly how big does a collection have to get to be
too big to be considered a set? Some philosophers have suggested that the
powerset of an infinite set might be too large to be considered as a completed
totality, and this has led to concern that the Powerset Axiom is not justified
on this conception of set. To see why, let us start by thinking about the size
of the powerset of finite sets.
We have seen that if we start with a set b of size n, then its powerset
℘b has 2ⁿ members. For example, if b has five members, then its power set
has 2⁵ = 32 members. But if b has 1000 members, then its power set has
2¹⁰⁰⁰ members, an incredibly large number indeed; larger, they say, than the
number of atoms in the universe. And then we could form the powerset of
that, and the powerset of that: gargantuan sets indeed.
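The 2ⁿ count is easy to check experimentally for small sets; the powerset helper below is our own sketch built from itertools.combinations:

```python
from itertools import combinations

def powerset(b):
    """All subsets of b, as a frozenset of frozensets."""
    return frozenset(
        frozenset(c) for r in range(len(b) + 1) for c in combinations(b, r)
    )

b = frozenset(range(5))
assert len(powerset(b)) == 2 ** 5          # a 5-element set has 32 subsets
assert all(s <= b for s in powerset(b))    # and each of them is a subset of b
```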
But what happens if b is infinite? To address this question, we first have to
figure out what exactly we mean by the size of an infinite set. Cantor answered
this question by giving a rigorous analysis of size that applies to all sets, finite
and infinite. For any set b, the Cantorian size of b is denoted |b|. Informally,
|b| = |c| just in case the members of b and the members of c can be associated
with one another in a unique fashion. More precisely, what is required is that
there be a one-to-one function with domain b and range c. (The notion of a
one-to-one function was defined in Exercise 15.54.)
For finite sets, | b | behaves just as one would expect. This notion of size is
somewhat subtle when it comes to infinite sets, though. It turns out that for
infinite sets, a set can have the same size as some of its proper subsets.5 The
set N of all natural numbers, for example, has the same size as the set E of
even numbers; that is | N | = | E |. The main idea of the proof is contained in
5 Recall from Exercise 15.17 that a proper subset is a subset which is not equal to the
set; in other words, it leaves out at least one member of the set.
the following picture, which pairs each natural number n with the even number 2n:
N: 1 2 3 . . . n . . .
E: 2 4 6 . . . 2n . . .
This picture shows the sense in which there are as many even integers as there
are integers. (This was really the point of Exercise 15.55.) Indeed, it turns out
that many sets have the same size as the set of natural numbers, including
the set of all rational numbers. The set of real numbers, however, is strictly
larger than the set of natural numbers, as Cantor proved.
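The pairing between N and E described above can be spot-checked in Python. A machine can only test an initial segment, but the rule n ↦ 2n is uniform, which is what the correspondence requires:

```python
# The one-to-one correspondence between N and E, checked on 1..1000.
naturals = range(1, 1001)
pairing = {n: 2 * n for n in naturals}

assert len(set(pairing.values())) == len(naturals)   # no two n share an image
assert all(m % 2 == 0 for m in pairing.values())     # every image is even
assert all(pairing[n] // 2 == n for n in naturals)   # the pairing is invertible
```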
Cantor also showed that for any set b whatsoever,
|℘b| > |b|
This result is not surprising, given what we have seen for finite sets. (The
proof of Proposition 12 was really extracted from Cantor's proof of this fact.)
The two together do raise the question as to whether an infinite set b could
be small but its powerset ℘b too large to be a set. Thus the Powerset Axiom
is not as unproblematic as the other axioms in terms of von Neumann's size
metaphor. Still, it is almost universally assumed that if b can be coherently
regarded as a fixed totality, so can ℘b. Thus the Powerset Axiom is a full-fledged part of modern set theory.
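For finite sets, the heart of Cantor's argument (and of the proof of Proposition 12) can be run directly: given any attempted one-to-one map f from b into ℘b, the diagonal set {x ∈ b | x ∉ f(x)} is a subset of b that f misses. A sketch, with diagonal as our own name:

```python
def diagonal(b, f):
    """Cantor's diagonal set for f from b into the powerset of b."""
    return frozenset(x for x in b if x not in f(x))

b = frozenset([1, 2, 3])
f = {1: frozenset(), 2: frozenset([1, 2]), 3: frozenset([2])}

d = diagonal(b, f.__getitem__)
assert d == frozenset([1, 3])
assert all(f[x] != d for x in b)   # d is a subset of b outside the range of f
```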
The modern conception of set really combines these two ideas, von Neumann's and Zermelo's. This conception views a set as a small collection
that is formed at some stage of the cumulation process.
Remember
1. Modern set theory replaces the naive concept of set, which is inconsistent, with a concept of set as a collection that is not too large.
2. These collections are seen as arising in stages, where a set arises only
after all its members are present.
3. The Axiom of Comprehension of naive set theory is replaced by the Axiom
of Separation and some of the intuitively correct consequences of the
Axiom of Comprehension.
4. Modern set theory also contains the Axiom of Regularity, which is
justified on the basis of (2).
5. All the propositions stated in this chapterwith the exception of
Propositions 1 and 14are theorems of zfc.
Exercises
15.67 Try to derive the existence of the absolute Russell set from the Axiom of Separation. Where
does the proof break down?
15.68 Verify our claim that all of Propositions 2–13 are provable using the axioms of zfc. (Some of
the proofs are trivial in that the theorems were thrown in as axioms. Others are not trivial.)
15.69 (Cantor's Theorem) Show that for any set b whatsoever, |b| ≠ |℘b|. [Hint: Suppose that f is
a function mapping b one-to-one into ℘b and then modify the proof of Proposition 12.]
15.70
1. Verify that our proof of Proposition 12 can be carried out using the axioms of zfc.
2. Use (1) to prove there is no universal set.
15.71 Prove that the Axioms of Separation and Extensionality are consistent. That is, find a universe
of discourse in which both are clearly true. [Hint: consider the domain whose only element is
the empty set.]
15.72 Show that the theorem about the existence of a ∩ b can be proven using the Axiom of Separation,
but that the theorem about the existence of a ∪ b cannot be so proven. [Come up with a domain
of sets in which the Separation Axiom is true but the theorem in question is false.]
15.73 (The Union Axiom and ∪) Exercise 15.72 shows us that we cannot prove the existence of a ∪ b
from the Axiom of Separation. However, the Union Axiom of zfc is stronger than this. It says
not just that a ∪ b exists, but that the union of any set of sets exists.
1. Show how to prove the existence of a ∪ b from the Union Axiom. What other axioms
of zfc do you need to use?
2. Apply the Union Axiom to show that there is no set of all singletons. [Hint: Use proof
by contradiction and the fact that there is no universal set.]
15.74 Prove in zfc that for any two sets a and b, the Cartesian product a × b exists. The proof you
gave in an earlier exercise will probably not work here, but the result is provable.
15.75 While ∧ and ∨ have set-theoretic counterparts in ∩ and ∪, there is no absolute counterpart
to ¬.
1. Use the axioms of zfc to prove that no set has an absolute complement.
2. In practice, when using set theory, this negative result is not a serious problem. We
usually work relative to some domain of discourse, and form relative complements.
Justify this by showing, within zfc, that for any sets a and b, there is a set c = {x |
x ∈ a ∧ x ∉ b}. This is called the relative complement of b with respect to a.
15.76 Assume the Axiom of Regularity. Show that no set is a member of itself. Conclude that, if we
assume Regularity, then for any set b, the Russell set for b is simply b itself.
15.77
1. Show that if there is a sequence of sets with the following property, then the Axiom of
Regularity is false:
. . . ∈ an+1 ∈ an ∈ . . . ∈ a2 ∈ a1
2. Show that in zfc we can prove that there are no sets b1, b2, . . . , bn, . . . , where bn =
{n, bn+1}.
3. In computer science, a stream is defined to be an ordered pair ⟨x, y⟩ whose first element
is an atom and whose second element is a stream. Show that if we work in zfc and
define ordered pairs as usual, then there are no streams.
There are alternatives to the Axiom of Regularity which have been explored in recent years.
We mention our own favorite, the axiom afa, due to Peter Aczel and others. The name afa
stands for anti-foundation axiom. Using afa you can prove that a great many sets exist with
properties that contradict the Axiom of Regularity. We wrote a book, The Liar, in which we
used afa to model and analyze the so-called Liar's Paradox (see Exercise 19.32, page 571).
Chapter 16
Mathematical Induction
In the first two parts of this book, we covered most of the important methods
of proof used in rigorous reasoning. But we left out one extremely important
method: proof by mathematical induction.
By and large, the methods of proof discussed earlier line up fairly nicely
with various connectives and quantifiers, in the sense that you can often tell
from the syntactic form of your premises or conclusion what methods you
will be using. The most obvious exception is proof by contradiction, or its
formal counterpart ¬ Intro. This method can in principle be used to prove
any form of statement, no matter what its main connective or quantifier. This
is because any sentence S is logically equivalent to one that begins with a
negation symbol, namely, ¬¬S.
In terms of syntactic form, mathematical induction is typically used to
prove statements of the form
∀x [P(x) → Q(x)]
This is also the form of statements proved using general conditional proof.
In fact, proof by induction is really a pumped-up version of this method:
general conditional proof on steroids, you might say. It works when these
statements involve a predicate P(x) defined in a special way. Specifically, proof
by induction is available when the predicate P(x) is defined by what is called an
inductive definition. For this reason, we need to discuss proof by induction and
inductive definitions side by side. We will see that whenever a predicate P(x)
is defined by means of an inductive definition, proof by induction provides a
much more powerful method of proof than ordinary general conditional proof.
Before we can discuss either of these, though, we should distinguish both
from yet a third process that is also known as induction. In science, we use
the term induction whenever we draw a general conclusion on the basis of a
finite number of observations. For example, every day we observe that the sun
comes up, that dropped things fall down, and that people smile more when
it is sunny. We come to infer that this is always the case: that the sun comes
up every morning, that dropped things always fall, that people are always
happier when the sun is out.
Of course there is no strict logical justification for such inferences. We may
have correctly inferred some general law of nature, or we may have simply observed a bunch of facts without any law that backs them up. Some time in
the future, people may be happier if it rains, for example after a long drought.
Induction, in this sense, does not guarantee that the conclusion follows necessarily from the premises. It is not a deductively valid form of inference, since
it is logically possible for the premises to be true and the conclusion false.
This is all by way of contrast with mathematical induction, where we can
justify a general conclusion, with infinitely many instances, on the basis of a
finite proof. How is this possible? The key lies in the inductive definitions that
underwrite this method of proof. Induction, in our sense, is a logically valid
method of proof, as certain as any we have studied so far.
Usually, discussions of mathematical induction start (and end) with induction on the natural numbers, to prove statements of the form
∀x [NatNum(x) → Q(x)]
We will start with other examples, examples which show that mathematical
induction applies much more widely than just to natural numbers. The reason
it applies to natural numbers is simply that the natural numbers can be
specified by means of an inductive definition. But so can many other things.
Section 16.1
Dominoes
When they were younger, Claire and Max liked to build long chains of dominoes, all around the house. Then they would knock down the first and, if
things were set up right, the rest would all fall down. Little did they know
that in so doing they were practicing induction. Setting up the dominoes is
like giving an inductive definition. Knocking them all down is like proving a
theorem by induction.
There are two things required to make all the dominoes fall over. They
must be close enough together that when any one domino falls, it knocks down
the next. And then, of course, you need to knock down the first. In a proof by
induction, these two steps correspond to what are called the inductive step
(getting from one to the next) and the basis step (getting the whole thing
started).
Notice that there is no need to have just one domino following each domino.
You can have two, as long as the one in front will knock down both of its
successors. In this way you can build quite elaborate designs, branching out
here and there, and, when the time is right, you can knock them all down
with a single flick of the finger. The same is true, as we'll see, with induction.
Inductive definitions
Inductive definitions are used a great deal in logic. In fact, we have been using
them implicitly throughout this book. For example, our definitions of the wffs
of fol were really inductive definitions. So was our definition of the set of
terms of first-order arithmetic. Both of these definitions started by specifying
the simplest members of the defined collection, and then gave rules that told
us how to generate new members of the collection from old ones. This is
how inductive definitions work.
Let's look at another example, just to make things more explicit. Suppose
that for some reason we wanted to study an ambiguous variant of propositional logic, maybe as a mathematical model of English that builds in some
ambiguity. Let's take some primitive symbols, say A1, . . . , An, and call these
propositional letters. Next, we will build up wffs from these using our old
friends ¬, ∧, ∨, →, and ↔. But we are going to let the language be ambiguous,
unlike fol, by leaving out all parentheses. How will we do this? To distinguish
these strings from wffs, let us call them ambig-wffs. Intuitively, what we want
to say is the following:
1. Each propositional letter is an ambig-wff.
2. If p is any ambig-wff, so is the string ¬p.
3. If p and q are ambig-wffs, so are p ∧ q, p ∨ q, p → q, and p ↔ q.
4. Nothing is an ambig-wff unless it is generated by repeated applications
of (1), (2), and (3).
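Clauses (1)–(3) translate directly into a recursive recognizer. The sketch below is our own illustration: it uses ASCII stand-ins for the connectives and, in clause (3), tries every possible split at a binary connective, which is exactly where the ambiguity of the language comes from:

```python
BINARY = {'&', 'v', '->', '<->'}   # ASCII stand-ins for the binary connectives

def is_ambig_wff(s):
    """Decide whether the tuple of symbols s is an ambig-wff."""
    if len(s) == 1:
        return s[0].startswith('A')                 # clause (1): a letter
    if s[0] == '~':
        return is_ambig_wff(s[1:])                  # clause (2): negation
    return any(                                     # clause (3): any split p op q
        s[i] in BINARY and is_ambig_wff(s[:i]) and is_ambig_wff(s[i + 1:])
        for i in range(1, len(s) - 1)
    )

assert is_ambig_wff(('A1', '&', 'A2', '->', '~', 'A3'))
assert not is_ambig_wff(('~', '&'))   # contains no propositional letter
```

Because the recognizer accepts a string as soon as any split works, a string like A1 & A2 -> A3 is accepted via two different splits, mirroring its two derivations in the inductive definition.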
In this definition, clause (1) specifies the basic ambig-wffs. It is called the
base clause of the definition. Clauses (2) and (3) tell us how to form new
ambig-wffs from old ones. They are called inductive clauses. The final clause
just informs us that all ambig-wffs are generated by the earlier clauses, in case
we thought that the World Trade Center or the actor Brad Pitt or the set {2}
might be an ambig-wff.
Remember
An inductive definition consists of
a base clause, which specifies the basic elements of the defined set,
one or more inductive clauses, which tell us how to generate additional
elements, and
a final clause, which tells us that all the elements are either basic or
generated by the inductive clauses.
Inductive proofs
Having set up an inductive definition of the set of ambig-wffs, we are in a
position to prove things about this set. For example, assuming the clauses of
our inductive definition as premises, we can easily prove that A1 ∧ A2 → ¬A3
is an ambig-wff.
Proof: First, A1, A2, and A3 are ambig-wffs by clause (1). ¬A3 is
thus an ambig-wff by clause (2). Then A2 → ¬A3 is an ambig-wff by
clause (3). Another use of clause (3) gives us the desired ambig-wff
A1 ∧ A2 → ¬A3. (Can you give a different derivation of this ambig-wff,
one that applies ∧ before →?)
This proof shows us how the inductive definition of the ambig-wffs is supposed to work, but it is not an inductive proof. So let's try to prove something
about ambig-wffs using the method of inductive proof. Indeed, let's prove a
few things that will help us identify strings that are not ambig-wffs.
Consider the string ¬¬¬. Obviously, this is not an ambig-wff. But how do
we know? Well, clause (4) says it has to be formed by repeated applications of
clauses (1)–(3). Examining these clauses, it seems obvious that anything you
get from them will have to contain at least one propositional letter. But what
kind of proof is that? What method are we applying when we say "examining
these clauses, it seems obvious that . . . "? What we need is a way to prove the
following simple fact:
Proposition 1. Every ambig-wff contains at least one propositional letter.
Notice that this claim has the form of a general conditional, where the
antecedent involves an inductively defined predicate:
∀p [(p is an ambig-wff) → Q(p)]
wff except the basic elements and things that can be generated from them by
repeated applications of our two rules, we can be sure that all the ambig-wffs
have the property in question.
Let's try another example. Suppose we want to prove that the string
A1 ¬∧ A2 is not an ambig-wff. Again, this is pretty obvious, but to prove
it we need to prove a general fact about the ambig-wffs, one that will allow
us to conclude that this particular string does not qualify. The following fact
would suffice:
Proposition 2. No ambig-wff has the symbol ¬ occurring immediately before
one of the binary connectives: ∧, ∨, →, ↔.
Once again, note that the desired result has the form of a general conditional claim, where the antecedent is our inductively defined predicate:
∀p [(p is an ambig-wff) → Q(p)]
Make sure you understand how this definition works. For example, come up
with a string of seven letters that is a pal.
Now let's prove that every pal reads the same way back to front and front
to back; in other words, every pal is a palindrome. Here is our inductive proof
of this fact:
Proof: We prove by induction that every pal reads the same forwards
and backwards, that is, when the order of letters in the string is
reversed.
Basis: The basic elements of pal are single letters of the alphabet.
Clearly, any single letter reads the same way forwards or backwards.
Induction: Suppose that the pal α reads the same way forwards and
backwards. (This is our inductive hypothesis.) Then we must show
that if you add a letter, say l, to the beginning and end of α, then
the result, lαl, reads the same way forwards and backwards. When
you reverse the string lαl, you get lα′l, where α′ is the result of
reversing the string α. But by the inductive hypothesis, α = α′, and
so the result of reversing lαl is lαl, i.e., it reads the same forwards
and backwards.
We conclude by induction that every pal is a palindrome.
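The inductive clauses also tell us how to build pals, and the proof's claim is easy to spot-check in Python; make_pal is our own helper mirroring the "add the same letter at both ends" clause:

```python
def make_pal(core, letters):
    """Start from a single-letter pal and wrap each letter around both ends."""
    s = core
    for l in letters:
        s = l + s + l
    return s

p = make_pal('a', 'bcd')        # a seven-letter pal
assert p == 'dcbabcd'
assert p == p[::-1]             # it reads the same forwards and backwards
```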
Remember
Given an inductive definition of a set, an inductive proof requires
a basis step, which shows that the property holds of the basic elements,
and
an inductive step, which shows that if the property holds of some
elements, then it holds of any elements generated from them by the
inductive clauses.
The assumption that begins the inductive step is called the inductive
hypothesis.
Exercises
16.1 Raymond Smullyan, a famous logician/magician, gives the following good advice: (1) always
speak the truth, and (2) each day, say "I will repeat this sentence tomorrow." Prove that
anyone who did these two things would live forever. Then explain why it won't work.
16.2 Give at least two distinct derivations which show that the following is an ambig-wff: A1 ∧
A2 ∨ A2.
16.3 Prove by induction that no ambig-wff begins with a binary connective, ends with a negation
sign, or has a negation sign immediately preceding a binary connective. Conclude that the
string A1 ¬∧ A2 is not an ambig-wff.
16.4 Prove that no ambig-wff ever has two binary connectives next to one another. Conclude that
A1 ∧∨ A2 is not an ambig-wff.
16.5 Modify
1.
2.
3.
4.
16.6 Prove by induction that every semi-wff has the following property: the number of right parentheses is equal to the number of left parentheses plus the number of negation signs.
16.7 In the text, we proved that every pal is a palindrome, a string of letters that reads the same
back to front and front to back. Is the converse true, that is, is every palindrome a pal? If so,
prove it. If not, fix up the definition so that it becomes true.
16.8 (Existential wffs) In this problem we return to a topic raised in Problem 14.59. In that problem
we defined an existential sentence as one whose prenex form contains only existential quantifiers.
A more satisfactory definition can be given by means of the following inductive definition. The
existential wffs are defined inductively by the following clauses:
1. Every atomic or negated atomic wff is existential.
2. If P1, . . . , Pn are existential, so are (P1 ∧ . . . ∧ Pn) and (P1 ∨ . . . ∨ Pn).
3. If P is an existential wff, so is ∃ν P, for any variable ν.
4. Nothing is an existential wff except in virtue of (1)–(3).
Prove the following facts by induction:
1. If P is an existential wff, then it is logically equivalent to a prenex wff with no universal
quantifiers.
2. Suppose P is an existential sentence of the blocks language. Prove that if P is true in
some world, then it will remain true if new objects are added to the world. [You will need
to prove something a bit stronger to keep the induction going.]
Is our new definition equivalent to our old one? If not, how could it be modified to make it
equivalent?
16.9 Give a definition of universal wff, just like that of existential wff in the previous problem, but
with universal quantifiers instead of existential. State and prove results analogous to the results
you proved there. Then show that every universal wff is logically equivalent to the negation of
an existential wff.
16.10 Define the class of wellfounded sets by means of the following inductive definition:
1. If C is any set of objects, each of which is either not a set or is itself a wellfounded set,
then C is a wellfounded set.
2. Nothing is a wellfounded set except as justified by (1).
This exercise explores the relationship between the wellfounded sets and the cumulative conception of sets discussed in the preceding chapter.
1. Which of the following sets are wellfounded?
∅, {∅}, {Washington Monument}, {{{. . .}}}
Chapter 16
Section 16.2
Figure 16.1: The set of ambig-wffs is the intersection of all sets satisfying
(1)-(3). [The figure shows the region of sets satisfying clauses (1)-(3), with
the ambig-wffs as their intersection.]
Definition The set S of ambig-wffs is the smallest set satisfying the following
clauses:
1. Each propositional letter is in S.
2. If p is in S, then so is ¬p.
3. If p and q are in S, then so are p ∧ q, p ∨ q, p → q, and p ↔ q.
smallest set
What we have done here is replace the puzzling clause (4) by one that
refers to the smallest set satisfying (1)-(3). How does that help? First of all,
what do we mean by smallest? We mean smallest in the sense of subset: we
want a set that satisfies (1)-(3), but one that is a subset of any other set
satisfying (1)-(3). How do we know that there is such a smallest set? We need
to prove a lemma, to show that our definition makes sense.1
Lemma 3. If S is the intersection of a collection X of sets, each of which
satisfies (1)-(3), S will also satisfy (1)-(3).
We leave the proof of the lemma as Exercise 16.11.
As a result of this lemma, we know that if we define the set of ambig-wffs
to be the intersection of all sets that satisfy (1)-(3), then we will have a set
that satisfies (1)-(3). Further, it must be the smallest such set, since when
you take the intersection of a bunch of sets, the result is always a subset of
all of the original sets.
1 A lemma is an auxiliary result, one that is of little intrinsic interest, but which is
needed for some larger end. Lemmas have the same formal status as theorems or propositions, but are usually less important.
The situation is illustrated in Figure 16.1. There are lots of sets that satisfy
clauses (1)-(3) of our definition, most of which contain many elements that are
not ambig-wffs. For example, the set of all finite strings of propositional letters
and connectives satisfies (1)-(3), but it contains strings like A1 A2 that
aren't ambig-wffs. Our set-theoretic definition takes the set S of ambig-wffs
to be the smallest, that is, the intersection of all these sets.
Notice that we can now explain exactly why proof by induction is a valid
form of reasoning. When we give an inductive proof, say that all ambig-wffs
have property Q, what we are really doing is showing that the set {x | Q(x)}
satisfies clauses (1)-(3). We show that the basic elements all have property
Q and that if you apply the generation rules to things that have Q, you will
get other things that have Q. But if Q satisfies clauses (1)-(3), and S is the
intersection of all the sets that satisfy these clauses, then S ⊆ Q. Which is to
say: all ambig-wffs have property Q.
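The bottom-up construction implicit in this definition can be mimicked computationally. The sketch below is not from the text; it uses ASCII stand-ins for the connectives and approximates the smallest set by closing a set of strings under clauses (1)-(3) up to a length bound:

```python
# Sketch (not from the text): approximate the smallest set satisfying
# clauses (1)-(3) by repeatedly closing under the rules, keeping only
# strings up to a fixed length so the process terminates.
def ambig_wffs(letters, max_len):
    s = set(letters)                       # clause (1): propositional letters
    changed = True
    while changed:
        changed = False
        new = set()
        for p in s:
            if len(p) + 1 <= max_len:      # clause (2): negation
                new.add("~" + p)
        for p in s:
            for q in s:
                for c in ("&", "|", "->", "<->"):   # clause (3): binary connectives
                    w = p + c + q
                    if len(w) <= max_len:
                        new.add(w)
        if not new <= s:                   # anything genuinely new?
            s |= new
            changed = True
    return s

wffs = ambig_wffs(["A1", "A2"], 6)
assert "~A1" in wffs and "A1&A2" in wffs
assert "A1A2" not in wffs   # mere concatenations of letters are never generated
```

Every string in the result is produced by finitely many applications of the clauses, which is exactly why induction over the clauses covers all of them.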
justifying induction
Exercises
16.12 Give an inductive definition of the set of wffs of propositional logic, similar to the above
definition, but putting in the parentheses in clause (3). That is, the set of wffs should be
defined as the smallest set satisfying various clauses. Be sure to verify that there is such a
smallest set.
16.13 Based on your answer to Exercise 16.12, prove that every wff has the same number of left
and right parentheses.
Section 16.3
defining natural numbers
about its members using an inductive proof. Still, the natural numbers are
one of the simplest and most useful examples to which induction applies.
Just how are the natural numbers defined? Intuitively, the definition runs
as follows:
1. 0 is a natural number.
2. If n is a natural number, then n + 1 is a natural number.
3. Nothing is a natural number except in virtue of repeated applications
of (1) and (2).
In set theory, this definition gets codified as follows. The set N of natural
numbers is the smallest set satisfying:
1. 0 ∈ N
2. If n ∈ N, then n + 1 ∈ N
Based on this definition, we can prove statements about natural numbers
by induction. Suppose we have some set Q of natural numbers and want to
prove that the set contains all natural numbers:
∀x [x ∈ N → x ∈ Q]
induction on N
Basis: The basis step requires that we prove that the sum of the first
0 natural numbers is 0, which it is. (If you don't like this case, you
might check to see that Q(1) holds. You might even go so far as to
check Q(2), although it's not necessary.)
Induction: To prove the inductive step, we assume that we have a
natural number k for which Q(k) holds, and show that Q(k+1) holds.
That is, our inductive hypothesis is that the sum of the first k natural
numbers is k(k - 1)/2. We must show that the sum of the first k + 1
natural numbers is k(k + 1)/2. How do we conclude this? We simply
note that the sum of the first k + 1 natural numbers is k greater
than the sum of the first k natural numbers (since the first natural
number is zero, the second is one, and so on). We already know by
the inductive hypothesis that this latter sum is simply k(k - 1)/2.
Thus the sum of the first k + 1 numbers is

k(k - 1)/2 + k

Getting a common denominator gives us

k(k - 1)/2 + 2k/2

which we factor to get

k(k + 1)/2

the desired result.
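The closed form used in this proof is easy to spot-check mechanically. A quick sketch (not from the text):

```python
# Sketch (not from the text): check that the sum of the first n natural
# numbers (0 through n-1) equals n*(n-1)//2, the formula in the proof.
def sum_first(n):
    return sum(range(n))    # 0 + 1 + ... + (n - 1)

for n in range(100):
    assert sum_first(n) == n * (n - 1) // 2
```

Of course, checking finitely many cases proves nothing by itself; the induction above is what covers every natural number.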
Exercises
16.15 Prove by induction that for all natural numbers n, 0 + 1 + . . . + n ≤ n². Your proof should
not presuppose Proposition 4, which we proved in the text, though it will closely follow the
structure of that proof.
16.16 Prove by induction that for all natural numbers n,
1 + 3 + 5 + . . . + (2n + 1) = (n + 1)²
16.17 Prove by induction that for all natural numbers n ≥ 2,
(1 - 1/2)(1 - 1/3) . . . (1 - 1/n) = 1/n
Section 16.3
16.18 Notice that 1³ + 2³ + 3³ = 36 = 6² and that 1³ + 2³ + 3³ + 4³ + 5³ = 225 = 15². Prove that the
sum of the first n perfect cubes is a square. [Hint: This is an instance of the inventor's paradox.
You will have to prove something stronger than this.]
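One candidate for the stronger statement the hint asks for is that the sum of the first n cubes equals the square of 1 + 2 + . . . + n. A numerical spot-check of that candidate (not from the text):

```python
# Sketch (not from the text): check numerically that 1^3 + ... + n^3
# equals the square of the triangular number 1 + 2 + ... + n.
for n in range(1, 50):
    cubes = sum(k ** 3 for k in range(1, n + 1))
    tri = n * (n + 1) // 2          # 1 + 2 + ... + n
    assert cubes == tri ** 2
```

The induction asked for in the exercise is what turns this pattern into a proof.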
16.19 (after Pólya) Examine the following incorrect proof that all logicians have the same shoe size.
Section 16.4
successor function
1. ∀x (s(x) ≠ 0)
2. ∀x ∀y (s(x) = s(y) → x = y)
3. ∀x (x + 0 = x)
4. ∀x ∀y [x + s(y) = s(x + y)]
5. ∀x (x × 0 = 0)
6. ∀x ∀y [x × s(y) = (x × y) + x]
The first two axioms tell us essential properties of the successor function.
Together they ensure that the numbers are arranged in a sequence with no
loops. The first axiom tells us that the sequence cannot loop back to zero, since
then zero would be the successor of something (contradicting axiom one), and
that it can't loop back to some other number, since then that number would
be the successor of two different numbers (contradicting axiom two).
Axioms 3 and 4 define the properties of the binary addition operator in
terms of the successor function. The definition of + reflects the structure of
the natural numbers given by their inductive definition. In the first axiom we
state the truth that the result of adding 0 to any number is that number.
The second axiom says that the result of adding the successor of n to m is
the successor of the result of adding n to m, for any n and m. Together these
axioms tell us how to add any pair of numbers, since the first concerns adding
0, and the second the successor of a number, and all numbers are one or the
other.
Axioms 5 and 6 follow a similar pattern, and provide a definition for multiplication. The first tells the result of multiplying by 0, and the second the
result of multiplying by the successor of a number. Again the second axiom
relates the result of multiplying by the successor of a number to the result of
multiplying by that number.
We should view the claim that these axioms can serve as definitions of
addition and multiplication with a little suspicion. The second addition axiom
contains + on both sides of the equality making it look like an inductive
definition. But we are not giving an inductive definition of a set, but rather
defining the function +. The reason the definition works is that the function
applies to the natural numbers, a set that is itself inductively defined using
the successor function. So while axiom 4 does not allow us to eliminate the
+ function in one step, we can argue that + can be eliminated after some
number of applications of this axiom, followed by an application of axiom 3.
To see how this works, let's see how we could work out the value of
s(s(s(0))) + s(s(0)) by eliminating the + function. This is our official way
of writing 3 + 2.
induction scheme
s(s(s(0))) + s(s(0)) = s( s(s(s(0))) + s(0) )    by axiom 4.
                     = s(s( s(s(s(0))) + 0 ))    by axiom 4.
                     = s(s( s(s(s(0))) ))        by axiom 3.
laws of addition and multiplication. It turns out, however, that these facts,
and all other obvious facts, and many not-so-obvious facts, can be proven
from the axioms as set out above. Here is one of the simplest; we give an
informal proof, using just these axioms, of
∀x (s(x) = s(0) + x)
Proof: The proof is by the formalized version of mathematical induction. The predicate Q(x) in question is (s(x) = s(0) + x). We
need to prove the basis case, Q(0), and the induction step
∀x (Q(x) → Q(s(x)))
The basis case requires us to prove that s(0) = s(0) + 0, which is
obviously true by axiom 3.
We prove the induction step by general conditional proof. Let n be
any number such that Q(n). This is our inductive hypothesis. Using
it, we need to prove Q(s(n)). That is, our induction hypothesis is that
s(n) = s(0) + n and our goal is to prove that s(s(n)) = s(0) + s(n).
This takes the following steps:
s( s(n) ) = s( s(0) + n )    by induction hypothesis.
          = s(0) + s(n)      by axiom 4.
Let's see how we might prove a more complicated result, like the commutativity of +:
∀x ∀y (x + y = y + x)
We will not prove this, since it is a valuable exercise that you should complete.
But we will sketch the proof, since proving it requires a technique we haven't
encountered before, known as double induction. It turns out that to prove this
claim, we need to use induction on both x and y.
Overall, the proof of this claim will be an induction on x, with the goal of
proving the universal claim ∀x Q(x), where Q(x) is the formula ∀y (x + y =
y + x). So the base case of the proof must establish Q(0), that is:
∀y (0 + y = y + 0)
and the inductive case will assume Q(n):
∀y (n + y = y + n)
with the goal of showing Q(s(n)):
∀y (s(n) + y = y + s(n))
Now, what's fun about a double inductive proof is that you also need to use
induction to prove the claims within the cases. For example, your base case
above will be demonstrated by induction on y, with the new predicate Q′(y):
0 + y = y + 0
This induction on y will have a base case establishing (the trivial) Q′(0):
0 + 0 = 0 + 0
Its inductive case will assume Q′(m):
0 + m = m + 0
and try to show Q′(s(m)):
0 + s(m) = s(m) + 0
This all turns out to be easy. But wait, there's more! That was just the
base case of our induction on x. So now we assume:
∀y (n + y = y + n)
and try to prove:
∀y (s(n) + y = y + s(n))
And this claim we also have to prove by induction on y. That is, we have to
prove the base case:
s(n) + 0 = 0 + s(n)
followed by an inductive case starting from the inductive hypothesis:
s(n) + m = m + s(n)
and concluding with:
s(n) + s(m) = s(m) + s(n)
Gödel Incompleteness Theorem
Exercises
Give informal proofs, similar in style to the one in the text, that the following statements are consequences
of pa. Explicitly identify any predicates to which you apply induction. When proving the later theorems,
you may assume the results of the earlier problems.
16.20 ∀x (0 + x = x)
16.21 ∀x (s(0) × x = x)
16.22 ∀x (0 × x = 0)
16.24 ∀x ∀y (x + s(y) = s(x) + y) [Hint: Don't get confused by the two universal quantifiers. Start
by assuming that x is an arbitrary number and perform induction on y. This does not require
double induction.]
16.25 ∀x ∀y ∀z (x + z = y + z → x = y) [Hint: this is an easy induction on z.]
16.26 ∀x ∀y ∀z ((x + y) + z = x + (y + z)) [Hint: This is relatively easy, but you have to perform induction on z. That is, your basis case is to show that (x + y) + 0 = x + (y + 0).
You should then assume (x + y) + n = x + (y + n) as your inductive hypothesis and show
(x + y) + s(n) = x + (y + s(n)).]
16.27 ∀x ∀y (x + y = y + x) [Hint: this requires double induction.]
16.28 ∀x ∀y (x × y = y × x) [Hint: To prove this, you will first need to prove the lemma
∀x ∀y (s(x) × y = (x × y) + y). Prove this by induction on y.]
Section 16.5
Induction in Fitch
Fitch contains an inference rule for induction on the natural numbers which
exactly parallels the informal technique that we have just described. In order
to complete a formal proof by induction of a universally quantified formula,
you must be able to cite the statement of the base case of the induction, and
a subproof which represents the step case:
Peano Induction:

P(0)

n   P(n)
    ...
    P(s(n))

∀x P(x)
Like the rules of universal introduction and existential elimination, the
Peano Induction rule requires that we use a boxed constant in the subproof
where we introduce an arbitrary n for the step case of the induction. This
ensures that n is indeed arbitrary, and that all we know about it is that the
property P holds of that element of the domain. Our goal is to then prove
that the property is inherited by s(n). Once we have that proof, and the proof
that 0 has the property P , then we can infer that every natural number has
the property.
When we use the Peano induction rule, we must interpret the domain of
quantification as being just the set of natural numbers.
Default and generous uses of the Induction rule
Like many of the rules of Fitch, the induction rule has a default use. If no
conclusion is given, then Fitch tries to determine the appropriate conclusion
on the basis of the citations. Specifically it will try to use the base case formula
to decide what you are trying to use the rule to prove.
If you use the Add Support Steps command with a step containing a universally quantified formula using this rule, then the necessary base case and
step proofs will be inserted into the proof.
Exercises
Use Fitch to construct formal proofs of the following theorems from the six Peano Axioms plus the
Peano Induction rule. In the corresponding Exercise file, you will find as premises only the specific
axioms needed to prove the goal theorem. In the starred exercises, you may want to use one or more
earlier proofs as lemmas. If you have not read Section 10.6 describing the Lemma rule in Fitch, you
might want to do so before attempting the starred exercises. (Lemmas are, of course, never necessary;
they simply make life a whole lot easier.)
16.29 ∀x (0 + x = x)
16.30 ∀x (s(0) × x = x)
16.31 ∀x (0 × x = 0)
16.33 ∀x ∀y (x + s(y) = s(x) + y) [Hint: This does not require double induction. Your final step will
be universal generalization on x, but within the subproof leading up to that step, you will need
to perform induction on y.]
16.35 ∀x ∀y ∀z (x + z = y + z → x = y)
16.37 ∀x ∀y (s(x) × y = (x × y) + y) [Hint: This is the key lemma that you will need to prove Exercise 16.38. In order to prove it, you will need the Associativity and Commutativity of Addition
(Exercises 16.34 and 16.36, respectively).]
16.38 ∀x ∀y (x × y = y × x) [Hint: This is pretty easy, once you have Exercises 16.31 and 16.37 to use
as lemmas.]
Section 16.6
1. Irreflexive: ∀x ¬(xRx)
2. Transitivity: ∀x ∀y ∀z ((xRy ∧ yRz) → xRz)
3. Trichotomy: ∀x ∀y (xRy ∨ x = y ∨ yRx)
In addition to < as we have just defined it, other examples of total strict
orderings include alphabetical ordering among letters, or among words. Many
familiar orderings are not total strict orderings though. For example, the relation "taller than" is not a TSO, since trichotomy fails. Consider, for example,
two different people who are the same height. Such a relation is called a partial
ordering.
Anyway, we would like to show that < is a TSO.
Proposition 5. < is irreflexive.
∀x ¬(x < x)
Proof: Suppose that there is some number a with a < a. Then by the
definition of < there is a number k such that a+s(k) = a. Axiom 3 of
pa tells us that a+0 = a and so by substitution a+s(k) = a+0. This
tells us that s(k) = 0 (see Exercise 16.25) contradicting axiom 1 of
pa. So there can be no such number a, showing that < is irreflexive.
Proposition 6. < is transitive.
x y z ((x < y y < z) x < z)
Proof: Let a, b and c be arbitrary numbers with a < b and b < c.
This tells us that there are numbers j and k such that a + s(k) = b
and b + s(j) = c. We can substitute the value of b into the second of
these equations to obtain (a + s(k)) + s(j) = c. Assuming that + is
associative (see Exercise 16.26), we have a + (s(k) + s(j)) = c, which
by axiom 4 gives us that a + s(s(k) + j) = c. So there is a number,
namely s(k) + j, whose successor added to a gives c, and hence a < c.
Any relation that is transitive and irreflexive is also antisymmetric, that is,
∀x ∀y ¬(xRy ∧ yRx). This is easy to show, since if we assume that aRb and
bRa, then transitivity yields aRa, which contradicts irreflexivity. So it follows
that if a < b then ¬(b < a).
The final property required of TSOs is that trichotomy holds. This says
that every pair of different objects is related by the ordering relation: one
must be less than the other. It is this property that accounts for the appearance of the word total in "total strict order."
Proposition 7. Trichotomy
∀x ∀y (x < y ∨ x = y ∨ y < x)
Proof: Let a be arbitrary. We will show that
∀y (a < y ∨ a = y ∨ y < a)
by induction on y.
Basis: We must show (a < 0 ∨ a = 0 ∨ 0 < a). We know that every
number, including a, is either 0, or the successor of some number. 0
is less than any successor, so one of the second two disjuncts must
hold.
Induction: We assume that (a < k ∨ a = k ∨ k < a) for some k and
show that (a < s(k) ∨ a = s(k) ∨ s(k) < a). Let's split into cases
according to the disjuncts of the induction hypothesis.
a < k In this case a < s(k), and we are done.
a = k In this case too, a < s(k).
k < a This tells us that ∃y (k + s(y) = a). y must be either 0 or the
successor of some number. If y = 0, then k + s(0) = a, or equivalently s(k) = a. If on the other hand y = s(b) for some b,
then k + s(s(b)) = a, which means that s(k) + s(b) = a, and so
s(k) < a (see Exercise 16.24).
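Trichotomy can also be spot-checked over a finite range directly from the definition of <, under which x < y holds just in case x + s(z) = y for some z. A sketch (not from the text):

```python
# Sketch (not from the text): x < y iff there is a z with x + (z + 1) == y,
# mirroring the book's definition via the successor. Then check that for
# each pair exactly one of x < y, x = y, y < x holds.
def less(x, y):
    return any(x + (z + 1) == y for z in range(y + 1))

for a in range(20):
    for b in range(20):
        holds = [less(a, b), a == b, less(b, a)]
        assert holds.count(True) == 1    # trichotomy: exactly one disjunct
```

Again, the finite check only illustrates the claim; the induction in the proof is what establishes it for all numbers.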
Exercises
Use Fitch to construct formal proofs of the following theorems from the Peano Axioms plus the definition
of <. In the corresponding Exercise file, you will find as premises only the specific axioms needed to prove
the goal theorem.
16.39 ∀x ¬(x < 0)
16.41 ∀x ∃y (x < y) [Hint: Notice that the Exercise file contains the definition of < but none of the
Peano Axioms! So although this follows from Exercise 16.40, you won't be able to use your
proof of that as a lemma. The proof is actually pretty simple, but requires some thought.]
For the following exercises you will want to use your proofs from Section 16.5 as lemmas.
16.43 ∀x (x = 0 ∨ 0 < x)
16.45 ∀x ¬(x < x)
16.47 ∀x ∀y (x < y ∨ x = y ∨ y < x) [Hint: You'll find some earlier exercises from this section to be
useful.]
16.48 Define ≤ so that ∀x ∀y (x ≤ y ↔ ∃z (x + z = y)). Give informal proofs that ≤ is reflexive and
transitive.
16.49 Define ≤ so that ∀x ∀y (x ≤ y ↔ ¬(y < x)). Give informal proofs that ≤ is reflexive and
transitive.
Section 16.7
Strong Induction
Sometimes the induction principle that we have described does not fit well
with the property of natural numbers that we are trying to prove. As an
example of this, imagine proving that every natural number greater than one
is either prime, or can be expressed as the product of primes. Mathematicians
call this fact The Fundamental Theorem of Arithmetic. Anything that merits
such a grand name deserves an investigation. Let's see what happens when
we try to prove this using ordinary induction on the natural numbers. We will
start with a base case of n = 2, since the claim does not apply to 0 or 1.
Proposition 8. For any number n greater than 1, n is either prime or can
be expressed as a product of primes, i.e., n = p1 × . . . × pm , where each of the
p's is prime.
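The existence claim in Proposition 8 has a natural computational shape: factoring n relies on factoring a strictly smaller number, which is exactly the kind of hypothesis strong induction provides. A sketch of that recursion (not from the text):

```python
# Sketch (not from the text): factor n by peeling off its least divisor d
# (which is necessarily prime) and recursing on the strictly smaller n // d.
# The recursion mirrors the strong-induction hypothesis: the result is
# assumed for every number below n.
def prime_factors(n):
    assert n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            return [d] + prime_factors(n // d)   # least divisor d is prime
        d += 1
    return [n]    # no divisor up to sqrt(n): n itself is prime

assert prime_factors(60) == [2, 2, 3, 5]
assert prime_factors(13) == [13]
```

Ordinary induction from n to n + 1 gives no grip here, since the factors of n + 1 bear no simple relation to those of n; that mismatch is what motivates strong induction below.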
strong induction
How would an informal proof using this principle look? The goal of the
proof would be to show that the antecedent of this conditional is true, which
would then allow us to conclude ∀x Q(x) by → Elim. So we need to show:
(1) ∀k (k < n → Q(k))
In other words, we assume that every number less than n has the property
in question. This is our inductive hypothesis. The goal of the proof is then to
show that n has the property, that is, Q(n).
Here, we have to consider two cases. If n = 0, then our inductive hypothesis
(2) tells us nothing at all, since it holds trivially for any Q. So we need to show
Q(0) without the benefit of any other information. But if n ≠ 0, then our goal
is to show Q(n) based on the substantive knowledge that all the predecessors
of n have the property Q. So just like ordinary induction, we in effect have
a base case, Q(0), and an inductive case, Q(n) for n > 0. Only in the latter
case will the inductive hypothesis (2) give us any help.
Proofs by strong induction thus take the following form:
Basis (n = 0): Show Q(0).
Inductive (n > 0): Assume ∀k (k < n → Q(k)). Show Q(n).
Conclude: ∀x Q(x)
Now how can we justify the principle of strong induction? Ordinary induction is justified by virtue of how the set of natural numbers is inductively
defined, as we saw in Section 16.3. But strong induction does not follow the
clauses of the inductive definition, so it is legitimate to ask how we know it is
a valid principle. It turns out that we can justify the principle by proving that
it follows from ordinary induction. That is, (SI) can be proven by induction.
The key trick is to remember that strong induction is really just a generalized solution to the inventor's paradox, and so in order to prove it, we will
use ordinary induction on a stronger property.
Proposition 9.
∀n [∀k (k < n → Q(k)) → Q(n)] → ∀x Q(x)
Proof: We will prove this by assuming:
(a) ∀n [∀k (k < n → Q(k)) → Q(n)]
and showing:
(b) ∀x Q(x)
But instead of proving (b) directly, we will first prove the stronger:
(c) ∀x [Q(x) ∧ ∀k (k < x → Q(k))]
This says that for every x, x and all of its predecessors have the property
Q. Clearly, (c) → (b), so if we can prove (a) → (c), we will have
(a) → (b), as desired.
We prove (c) from (a) by ordinary induction on x.
You might think that strong induction is poorly named, since it follows
from ordinary weak induction. But the point of the name is not that the
principle is stronger than ordinary induction. In fact, anything you can prove
by one you can also prove by the other. The difference is simply that strong
induction allows you to use a stronger inductive hypothesis. You get to assume
that all the numbers smaller than n have the property, not just its immediate
predecessor.
You should probably be able to guess the form of Fitch's strong induction
rule:
Strong Induction:

n   ∀x (x < n → P(x))
    ...
    P(n)

∀x P(x)

Where n does not occur outside the subproof where it is introduced.
Exercises
The following exercises work together to result in a formal proof of Proposition 9. Youll need to do them
all to get to the final proof, but it will be worth the work.
16.50 We begin by proving a simple lemma, namely that every number is either 0 or the successor
of some other number. If you completed exercise 16.43, then you already proved something
similar.
∀x (x = 0 ∨ ∃y (x = s(y)))
16.51 You may have thought that we would use the lemma in the previous exercise in the proof of
Proposition 9, but in fact we are going to use it to prove a second lemma. In the informal proof
of Proposition 9 we appealed twice to the following fact:
∀x ∀y (x < s(y) → (x = y ∨ x < y))
Prove this fact, using lemma 16.50 if necessary.
16.52 Open the file Lemma 16.52. This file contains the skeleton of the proof of the inductive step
in Proposition 9. Complete the proof by selecting the correct rules, and citations for all of the
steps. Submit your file as Proof 16.52.
Chapter 16
16.53 Finally open the file Exercise 16.53. This asks you to prove Proposition 9. Formalize the informal
proof of this result that we gave in the previous section, using the lemma from Exercise 16.52
where necessary. You may use FO Con, but if you like a challenge you can try to complete the
proof without using it.
Chapter 17
Advanced Topics in
Propositional Logic
This chapter contains some more advanced ideas and results from propositional logic, logic without quantifiers. The most important part of the chapter
is the proof of the Completeness Theorem for the propositional proof system
FT that you learned in Part I. This result was discussed in Section 8.3 and
will be used in the final chapter when we prove the Completeness Theorem
for the full system F. The final two sections of this chapter treat topics in
propositional logic of considerable importance in computer science.
Section 17.1
truth assignments
modeling semantics
In Part I, we kept our discussion of truth tables pretty informal. For example,
we did not give a precise definition of truth tables. For some purposes this
informality suffices, but if we are going to prove any theorems about fol,
such as the Completeness Theorem for the system FT , this notion needs to
be modeled in a mathematically precise way. As promised, we use set theory
to do this modeling.
We can abstract away from the particulars of truth tables and capture
what is essential to the notion as follows. Let us define a truth assignment for
a first-order language to be any function h from the set of all atomic sentences
of that language into the set {true, false}, provided that h assigns false to
the formula ⊥. That is, for each atomic sentence A of the language, h gives us
a truth value, written h(A), either true or false. Intuitively, we can think
of each such function h as representing one row of the reference columns of a
large truth table.
Given a truth assignment h, we can define what it means for h to make an
arbitrary sentence of the language true or false. There are many equivalent
ways to do this. One natural way is to extend h to a function h̄ defined on
the set of all sentences and taking values in the set {true, false}. Thus if
we think of h as giving us a row of the reference column, then h̄ fills in the
values of the truth tables for all sentences of the language, that is, the values
1. h̄(Q) = h(Q) for atomic sentences Q.
2. h̄(¬Q) = true if and only if h̄(Q) = false.
3. h̄(Q ∧ R) = true if and only if h̄(Q) = true and h̄(R) = true.
4. h̄(Q ∨ R) = true if and only if h̄(Q) = true or h̄(R) = true, or both.
5. h̄(Q → R) = true if and only if h̄(Q) = false or h̄(R) = true, or both.
6. h̄(Q ↔ R) = true if and only if h̄(Q) = h̄(R).
A truth assignment h assigns values to every atomic sentence in the language. But intuitively, to compute the truth table for a sentence S, we need
only fill out the reference rows for the atomic sentences that actually appear
in S. In Exercise 17.3, we ask you to prove that the only values of h that
matter to h̄(S) are those assigned to the atomic constituents of S.
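The recursive clauses for h̄ translate directly into a short evaluator. A sketch (not from the text), representing sentences as nested tuples with our own ASCII operator names standing in for the connectives:

```python
# Sketch (not from the text): sentences as nested tuples, e.g.
# ("->", "A", ("~", "B")), evaluated against a truth assignment h
# (a dict from atomic sentence names to booleans) by recursing
# on the clauses defining h-bar.
def evaluate(h, sent):
    if isinstance(sent, str):              # clause 1: atomic sentence
        return h[sent]
    op = sent[0]
    if op == "~":                          # clause 2: negation
        return not evaluate(h, sent[1])
    p, q = evaluate(h, sent[1]), evaluate(h, sent[2])
    if op == "&":                          # clause 3: conjunction
        return p and q
    if op == "|":                          # clause 4: disjunction
        return p or q
    if op == "->":                         # clause 5: conditional
        return (not p) or q
    if op == "<->":                        # clause 6: biconditional
        return p == q
    raise ValueError("unknown connective: " + op)

h = {"A": True, "B": False}
assert evaluate(h, ("->", "A", ("~", "B"))) is True
assert evaluate(h, ("&", "A", "B")) is False
```

The recursion is well defined for the same reason h̄ is: each clause reduces the question to strictly smaller sentences, bottoming out at the atomic case.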
With this precise model of a truth assignment, we can give a mathematically
precise version of our definitions of a tautology and a tt-satisfiable sentence.
Namely, we say that S is a tautology if every truth assignment h has S coming
out true, that is, h̄(S) = true.
modeling tautology and consequence
tt-satisfiable
Exercises
17.1
17.2 Recall the Sheffer stroke symbol | from Exercise 7.29, page 197, and the three-place symbol
discussed on page 194. Suppose we had included these as basic symbols of our language. Write
clauses for h̄(Q | R) and for the three-place symbol applied to (P, Q, R) that extend the
definition of h̄ to these symbols.
17.3 Let h1 and h2 be truth assignments that agree on (assign the same value to) all the atomic
sentences in S. Show that h̄1(S) = h̄2(S). [Hint: use induction on wffs.]
Section 17.2
This formal proof shows that T ⊢T S, as desired. Proving the other
direction of this lemma is simple. We leave it as Exercise 17.13.
This lemma shows that our assumption that T ⊬T S is tantamount to
assuming that T ∪ {¬S} ⊬T ⊥. We can state our observations so far in a more
positive and memorable way by introducing the following notion. Let us say
that a set of sentences T is formally consistent if and only if T ⊬T ⊥, that is,
if and only if there is no proof of ⊥ from T in FT . With this notion in hand,
formal consistency
Section 17.2
we can state the following theorem, which turns out to be equivalent to the
Completeness Theorem:
Theorem (Reformulation of Completeness) Every formally consistent set of
sentences is tt-satisfiable.
outline of proof
The Completeness Theorem results from applying this to the set T ∪ {¬S}.
The remainder of the section is devoted to proving this theorem. The proof is
quite simple in outline.
Completeness for formally complete sets: First we will show that this
theorem holds of any formally consistent set with an additional property,
known as formal completeness. A set T is formally complete if for any
sentence S of the language, either T ⊢T S or T ⊢T ¬S. This is really
an unusual property of sets of sentences, since it says that the set is
so strong that it settles every question that can be expressed in the
language, since for any sentence, either it or its negation is provable
from T .
Extending to formally complete sets: Once we show that every formally
consistent, formally complete set of sentences is tt-satisfiable, we will
show that every formally consistent set can be expanded to a set that
is both formally consistent and formally complete.
Putting things together: The fact that this expanded set is tt-satisfiable
will guarantee that the original set is as well, since a truth value assignment that satisfies the more inclusive set will also satisfy the original
set.
The rest of this section is taken up with filling out this outline.
Completeness for formally complete sets of sentences
To prove that every formally consistent, formally complete set of sentences is
tt-satisfiable, the following lemma will be crucial.
Lemma 3. Let T be a formally consistent, formally complete set of sentences,
and let R and S be any sentences of the language.
1. T ⊢T (R ∧ S) iff T ⊢T R and T ⊢T S
2. T ⊢T (R ∨ S) iff T ⊢T R or T ⊢T S
3. T ⊢T ¬S iff T ⊬T S
4. T ⊢T (R → S) iff T ⊬T R or T ⊢T S
5. T ⊢T (R ↔ S) iff either T ⊢T R and T ⊢T S, or T ⊬T R and T ⊬T S
Chapter 17
Proof: Let us first prove (1). Since it is an iff, we need to prove that
each side entails the other. Let us first assume that T ⊢T (R ∧ S).
We will show that T ⊢T R. The proof that T ⊢T S will be exactly
the same. Since T ⊢T (R ∧ S), there is a formal proof of (R ∧ S) from
premises in T . Take this proof and add one more step. At this step,
write the desired sentence R, using the rule ∧ Elim.
Next, let us suppose that T ⊢T R and T ⊢T S. Thus, there are proofs
of each of R and S from premises in T . What we need to do is merge
these two proofs into one. Suppose the proof of R uses the premises
P1 , . . . , Pn and looks like this:
P1
..
.
Pn
..
.
R
And suppose the proof of S uses the premises Q1 , . . . , Qk and looks
like this:
Q1
..
.
Qk
..
.
S
To merge these two proofs into a single proof, we simply take the
premises of both and put them into a single list above the Fitch
bar. Then we follow the Fitch bar with the steps from the proof of
R, followed by the steps from the proof of S. The citations in these
steps need to be renumbered, but other than that, the result is a
legitimate proof in FT . At the end of this proof, we add a single step
containing R ∧ S which we justify by ∧ Intro. The merged proof
looks like this:
Section 17.2
P1
..
.
Pn
Q1
..
.
Qk
..
.
R
..
.
S
R ∧ S
We now turn to (2). One half of this, the direction from right to
left, is very easy, using the rule of ∨ Intro, so let's prove the other
direction. Thus, we want to show that if T ⊢T (R ∨ S) then T ⊢T R or
T ⊢T S. (This is not true in general, but it is for formally consistent,
formally complete sets.)
Assume that T 9T (R S), but, toward a proof by contradiction,
that T +9T R and T +9T S. Since T is formally complete, it follows
that T 9T R and T 9T S. This means that we have two formal
proofs p1 and p2 from premises in T , p1 having R as a conclusion,
p2 having S as a conclusion. As we have seen, we can merge these
two proofs into one long proof p that has both of these as conclusions.
Then, by Intro, we can prove R S. But then using the proof of
the version of DeMorgan from Exercise 6.25, we can extend this proof
to get a proof of (R S). Thus T 9T (R S). But by assumption
we also have T 9T (R S). By merging the proofs of (R S) and
R S we can get a proof of by adding a single step, justified by
Intro. But this means that T is formally inconsistent, contradicting
our assumption that it is formally consistent.
One direction of part (3) follows immediately from the definition of
formal completeness, while the left to right half follows easily from
the definition of formal consistency.
Parts (4) and (5) are similar to part (2) and are left as an exercise.
With this lemma in hand, we can now fill in the first step in our outline.
Compactness Theorem
Exercises
17.4
!
For the following three exercises, suppose our language contains only two predicates, Cube and Small,
two individual constants, a and b, and the sentences that can be formed from these by means of the
truth-functional connectives.
17.5
"
17.6
"
17.7
!|"
This time let T be the following set of sentences (note the difference in the first sentence):
{¬(Cube(a) ∧ Small(a)), Cube(b) ∨ Cube(a), Small(a) ∨ Small(b)}
This set is not formally complete. Use the procedure described in the proof of Proposition 6 to
extend this to a formally consistent, formally complete set. (Use alphabetical ordering of atomic
sentences.) What is the resulting set? What is the truth value assignment h that satisfies this
set? Submit a world making the sentences in your formally complete set true.
17.8
"!
Suppose our language has an infinite number of atomic sentences A1 , A2 , A3 , . . .. Let T be the
following set of sentences:
{A1 ∨ A2, A2 ∨ A3, A3 ∨ A4, . . .}
There are infinitely many distinct truth value assignments satisfying this set. Give a general
description of these assignments. Which of these assignments would be generated from the
procedure we used in our proof of the Completeness Theorem?
Each of the following four exercises contains an argument. Classify each argument as being (A) provable
in FT , (B) provable in F but not in FT , or (C) not provable in F. In justifying your answer, make
explicit any appeal you make to the Soundness and Completeness Theorems for FT and for F. (Of
course we have not yet proven the latter.) Recall from Chapter 10 that sentences whose main operator
is a quantifier are treated as atomic in the definition of tautological consequence.
17.9
"
x Dodec(x) x Large(x)
x Dodec(x)
17.10
"
x Large(x)
17.11
"
x Dodec(x) x Large(x)
x Dodec(x)
x (Dodec(x) Large(x))
x Dodec(x)
x Large(x)
17.12
"
x Large(x)
x (Dodec(x) Large(x))
x Dodec(x)
x Large(x)
17.13 Prove the half of Lemma 2 that we did not prove, the direction from right to left.

17.14 Prove part (4) of Lemma 3.

17.15 Prove part (5) of Lemma 3.
17.16 In the inductive proof of Proposition 4, carry out the step for sentences of the form R S.
"!
Section 17.3
Horn sentences
In Chapter 4 you learned how to take any sentence built up without quantifiers
and transform it into one in conjunctive normal form (CNF), that is, one
which is a conjunction of one or more sentences, each of which is a disjunction
of one or more literals. Literals are atomic sentences and their negations. We
will call a literal positive or negative depending on whether it is an atomic
sentence or the negation of an atomic sentence, respectively.
A particular kind of CNF sentence turns out to be important in computer
science. These are the so-called Horn sentences, named not after their shape,
but after the American logician Alfred Horn, who first isolated them and studied some of their properties. A Horn sentence is a sentence in CNF that has
the following additional property: every disjunction of literals in the sentence
contains at most one positive literal. Later in the section we will find that
there is a more intuitive way of writing Horn sentences if we use the connective →. But for now we restrict attention to sentences involving only ∧, ∨,
and ¬.
The following sentences are all in CNF but none of them are Horn sentences:

(A ∨ B) ∧ (¬A ∨ C)
¬A ∨ B ∨ C
A ∧ (¬B ∨ C ∨ D)

Examination of each shows that some conjunct contains more than one positive
literal as a disjunct, which is why these fail to be Horn sentences. Verify this
for yourself to make sure you understand the definition. (Remember that the
definition of CNF allows some degenerate cases, as we stressed in Chapter 4.)
The definition of Horn sentences may seem a bit ad hoc. Why is this
particular type of CNF sentence singled out as special? Using the material
conditional, we can put them in a form that is more intuitive. Consider the
following sentence:

(Home(claire) ∧ Home(max)) → Happy(carl)

If we replace → by its equivalent in terms of ¬ and ∨, and then use DeMorgan's
Law, we obtain the following equivalent form:

¬Home(claire) ∨ ¬Home(max) ∨ Happy(carl)
This is a disjunction of literals, with only one positive literal. Horn sentences
are just conjunctions of sentences of this sort.
Here are some more examples. Assume that A, B, C, and D are atomic
sentences. If we replace → by its definition, and use DeMorgan's laws, we find
that each sentence on the left is logically equivalent to the Horn sentence on
the right.

(A → B) ∧ ((B ∧ C) → D)      (¬A ∨ B) ∧ (¬B ∨ ¬C ∨ D)
((B ∧ C ∧ D) → A) ∧ ¬A       (¬B ∨ ¬C ∨ ¬D ∨ A) ∧ ¬A
A ∧ ((B ∧ C) → D)            A ∧ (¬B ∨ ¬C ∨ D)
The typical conjunct of a Horn sentence is thus a disjunction of some negative
literals together with one positive literal, say,

¬A1 ∨ . . . ∨ ¬An ∨ B

This can be rewritten using ∧ and → as:

(A1 ∧ . . . ∧ An) → B
This is the typical case, but there are the important limiting cases: disjunctions with a positive literal but no negative literals, and disjunctions with
some negative literals but no positive literal. By a logical sleight of hand,
though, we can in fact rewrite these in the same conditional form. The sleight
of hand is achieved by introducing a couple of rather odd atomic sentences,
F and our old friend ⊥. The first of these is assumed to be always true. The
second, of course, is always false. Using these,

¬A1 ∨ . . . ∨ ¬An

can be rewritten as:

(A1 ∧ . . . ∧ An) → ⊥

Similarly, we can rewrite the lone atomic sentence B as F → B. We summarize
these observations by stating the following result.
Proposition 7. Any Horn sentence of propositional logic is logically equivalent to a conjunction of conditional statements of the following three forms,
where the Ai and B stand for ordinary atomic sentences:
1. (A1 ∧ . . . ∧ An) → B
2. (A1 ∧ . . . ∧ An) → ⊥
3. F → B
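The rewriting behind Proposition 7 is mechanical, and can be sketched in a few lines of code. The following Python fragment is not from the text: the representation of literals (plain strings for atoms, ("not", atom) pairs for negations) and the names "F" and "FALSUM" for the two special sentences are assumptions made for illustration.

```python
def to_conditional(clause):
    """Rewrite one Horn clause (a list of literals) as a conditional.

    Returns (antecedents, consequent), standing for (A1 & ... & An) -> B,
    with "F" as the always-true atom and "FALSUM" standing for the
    always-false sentence.
    """
    antecedents = [lit[1] for lit in clause if isinstance(lit, tuple)]
    positives = [lit for lit in clause if not isinstance(lit, tuple)]
    assert len(positives) <= 1, "not a Horn clause"
    # Form 2: no positive literal means the consequent is falsum.
    consequent = positives[0] if positives else "FALSUM"
    # Form 3: a lone positive literal B becomes F -> B.
    if not antecedents:
        antecedents = ["F"]
    return antecedents, consequent
```

For example, the clause ¬A ∨ ¬B ∨ C comes out as (["A", "B"], "C"), i.e. (A ∧ B) → C.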
Using the truth table method, we could program a computer to check to see
if a sentence is tt-satisfiable or not, since the truth table method is completely
mechanical. You can think of our Taut Con routine as doing something like
this, though actually it is more clever than this brute force method. In general,
though, any method of checking arbitrary formulas for tt-satisfiability is quite
expensive. It consumes a lot of resources. For example, a sentence involving
50 atomic sentences has 2⁵⁰ rows in its truth table, a very big number. For
Horn sentences, however, we can in effect restrict attention to a single row. It
is this fact that accounts for the importance of this class of sentences.
This efficient method for checking the satisfiability of Horn sentences,
known as the satisfaction algorithm for Horn sentences, is really quite simple.
We first describe the method, and then apply it to a couple of examples. The
idea behind the method is to build a one-row truth table by working back and
forth, using the conjuncts of the sentence to figure out which atomic sentences
need to have true written beneath them. We will state the algorithm twice,
once for the Horn sentences in CNF form, but then also for the conditional
form.
Satisfaction algorithm for Horn sentences: Suppose we have a Horn
sentence S built out of atomic sentences A1 , . . . , An . Here is an efficient
procedure for determining whether S is tt-satisfiable.
1. Start out as though you were going to build a truth table, by listing all
the atomic sentences in a row, followed by S. But do not write true or
false beneath any of them yet.
2. Check to see which if any of the atomic sentences are themselves conjuncts of S. If so, write true in the reference column under these atomic
sentences.
3. If some of the atomic sentences are now assigned true, then use these to
fill in as much as you can of the right hand side of the table. For example,
if you have written true under A5, then you will write false wherever
you find ¬A5. This, in turn, may tell you to fill in some more atomic
sentences with true. For example, if ¬A1 ∨ A3 ∨ ¬A5 is a conjunct of S,
and each of ¬A1 and ¬A5 has been assigned false, then write true
under A3. Proceed in this way until you run out of things to do.
4. One of two things will happen. One possibility is that you will reach a
point where you are forced to assign false to one of the conjuncts of
S, and hence to S itself. In this case, the sentence is not tt-satisfiable.
But if this does not happen, then S is tt-satisfiable. For then you can
fill in all the remaining columns of atomic sentences with false. This
will give you a truth assignment that makes S come out true, as we will
prove below. (There may be other assignments that make S true; our
algorithm just generates one of them.)
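The four steps above amount to a small fixed-point computation. The following Python fragment is an illustrative sketch, not part of the text; it assumes clauses are represented as frozensets of literals, with ("not", atom) pairs for negative literals.

```python
def horn_satisfiable(clauses):
    """Run the satisfaction algorithm on a collection of Horn clauses.

    Returns the set of atoms forced true if the sentence is tt-satisfiable
    (all remaining atoms default to false), or None if it is not.
    """
    true_atoms = set()
    changed = True
    while changed:                   # repeat until nothing new is forced
        changed = False
        for clause in clauses:
            positives = {l for l in clause if not isinstance(l, tuple)}
            negatives = {l[1] for l in clause if isinstance(l, tuple)}
            if positives & true_atoms:
                continue             # clause already satisfied
            if negatives <= true_atoms:
                # Every negative literal has been falsified, so the single
                # positive literal (if any) is forced to be true.
                if not positives:
                    return None      # a conjunct is forced false: unsatisfiable
                true_atoms |= positives
                changed = True
    return true_atoms
```

For instance, on the conjuncts of A ∧ (¬A ∨ B) the procedure forces A and then B and reports the assignment {A, B}; adding the conjunct ¬B makes it report unsatisfiability.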
Let's apply this algorithm to an example.
You try it
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
!
1. To make this fit on the page, let's abbreviate the two atomic sentences
Home(claire) and Home(max) by C and M, respectively. Open Boole and
create the following table (it will be easier if you choose By Row in the
Edit menu):

C | M | C ∧ (¬C ∨ M) ∧ (¬M ∨ ¬C)

2. The first step of the above method tells us to put true under any atomic
sentence that is a conjunct of S. In this case, this means we should put a
true under C. So enter a t under the reference column for C.

3. We now check to see how much of the right side of the table we can fill in.
Using Boole, check which columns on the right hand side call on columns
that are already filled in. There is only one, the one under ¬C. Fill it in
to obtain the following:

C | M | C ∧ (¬C ∨ M) ∧ (¬M ∨ ¬C)
t |   |        f

4. Since ¬C is false, the only way the conjunct ¬C ∨ M can come out true
is if M is true, so we are forced to enter a t under the reference column
for M as well:

C | M | C ∧ (¬C ∨ M) ∧ (¬M ∨ ¬C)
t | t |        f

5. But this means the last conjunct, ¬M ∨ ¬C, gets assigned false, so the whole
sentence comes out false:

C | M | C ∧ (¬C ∨ M) ∧ (¬M ∨ ¬C)
t | t |        f          f
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
Let's restate the satisfaction algorithm for Horn sentences in conditional
form, since many people find it more intuitive, and then apply it to an example.

Satisfaction algorithm for Horn sentences in conditional form: Suppose we have a Horn sentence S in conditional form, built out of atomic
sentences A1, . . . , An, as well as F and ⊥.

1. If there are any conjuncts of the form F → Ai, write true in the reference column under each such Ai.
2. If (A1 ∧ . . . ∧ An) → B is a conjunct of S and true has been written
under each of A1, . . . , An, then write true under B as well. Repeat this
step until it yields no new entries.
3. If some conjunct of the form (A1 ∧ . . . ∧ An) → ⊥ has true written
under each of A1, . . . , An, then S is not tt-satisfiable. Otherwise, S is
tt-satisfiable: assign false to all the remaining atomic sentences.
We won't actually write out the table, but instead will just talk through the
method. First, we see that if the sentence is to be satisfied, we must assign
true to B, since F → B is a conjunct. Then, looking at the second conjunct,
B → C, we see that assigning true to B forces us to assign true to C. But at
this point, we run out of things that we are forced to do. So we can assign A
the value false, getting an assignment that makes the remaining conditional,
and hence the whole sentence, true.
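Talked through in code, the conditional form of the algorithm is just forward chaining: keep firing conditionals whose antecedents have all become true. The sketch below is hypothetical (the (antecedents, consequent) representation, with "F" for the always-true atom and "FALSUM" for ⊥, is assumed for illustration); it parallels the sets T0, T1, . . . used in the correctness proof later in this section.

```python
def horn_conditional_satisfiable(conditionals):
    """Forward chaining on Horn conditionals (antecedent_list, consequent).

    Returns the set of atoms forced true (everything else may safely be
    false), or None if a conditional with consequent "FALSUM" fires.
    """
    true_atoms = {"F"}               # F is true by fiat; this is the seed set
    changed = True
    while changed:                   # compute the stages up to a fixed point
        changed = False
        for antecedents, consequent in conditionals:
            if set(antecedents) <= true_atoms and consequent not in true_atoms:
                if consequent == "FALSUM":
                    return None      # forced contradiction: not tt-satisfiable
                true_atoms.add(consequent)
                changed = True
    return true_atoms - {"F"}
```

On the pair of conjuncts F → B and B → C, for example, this fires twice and reports that B and C must be true, just as in the walk-through above.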
How do we know that this algorithm is correct? Well, we don't, yet. The
examples may have convinced you, but they shouldn't have. We really need
to give a proof.
Theorem The algorithm for the satisfiability of Horn sentences is correct, in
that it classifies as tt-satisfiable exactly the tt-satisfiable Horn sentences.
Proof: There are two things to be proved here. One is that any
tt-satisfiable sentence is classified as tt-satisfiable by the algorithm.
The other is that anything classified by the algorithm as tt-satisfiable
really is tt-satisfiable. We are going to prove this result for the form
of the algorithm that deals with conditionals. Before getting down
to work, let's rephrase the algorithm with a bit more precision. Define sets T0, T1, . . . of atomic sentences, together with F and ⊥, as
follows. Let T0 = {F}. Let Tn+1 be the set consisting of F together
h̄(C) = true for each conditional C that is a conjunct of S. There
are three types of conditionals to consider:
Case 1: The conjunct is of the form F → A. In this case A is in T1.
But then h̄ assigns true to A and so to the conditional.
Case 2: The conjunct is of the form (A1 ∧ . . . ∧ An) → B. If each of the
Ai gets assigned true, then each is in TN and so B is in TN+1 = TN.
But then h̄ assigns true to B and so to the conditional. On the other
hand, if one of the Ai gets assigned false then the conditional comes
out true, since its antecedent is false.
Exercises
17.17 If you skipped the You try it section, go back and do it now. Submit the file Table Horn 1.
!
17.18 A sentence in CNF can be thought of as a list of sentences, each of which is a disjunction of
!
literals. In the case of Horn sentences, each of these disjunctions contains at most one positive
literal. Open Horn's Sentences. You will see that this is a list of sentences, each of which is a
disjunction of literals, at most one of which is positive. Use the algorithm given above to build
a world where all the sentences come out true, and save it as World 17.18.
17.19 Open Horn's Other Sentences. You will see that this is a list of sentences, each of which is a
!|"
disjunctive Horn sentence. Use the algorithm given above to see if you can build a world where
all the sentences come out true. If you can, save the world as World 17.19. If you cannot, explain
how the algorithm shows this.
17.20 Rewrite the following Horn sentences in conditional form. Here, as usual, A, B, and C are taken
"
to be atomic sentences.
1. A (A B C) C
2. (A B C) C
3. (A B) (A B)
Use Boole to try out the satisfaction algorithm on the following Horn sentences (two are in conditional
form). Give the complete row that results from the application of the algorithm. In other words, the table
you submit should have a single row corresponding to the assignment that results from the application
of the algorithm. Assume that A, B, C, and D are atomic sentences. (If you use Verify Table to check
your table, Boole will tell you that there aren't enough rows. Simply ignore the complaint.)
17.21 A (A B) (B C)
!
17.22 A (A B) D
17.23 A (A B) B
17.24 A (A B) B
The programming language Prolog is based on Horn sentences. It uses a slightly different notation,
though. The clause

(A1 ∧ . . . ∧ An) → B

is frequently written

B :- A1 , . . . , An

or

B ← A1 , . . . , An

and read B, if A1 through An. The following exercises use this Prolog notation.
17.27

AncestorOf(a, b) ← MotherOf(a, b)
AncestorOf(b, c) ← MotherOf(b, c)
AncestorOf(a, b) ← FatherOf(a, b)
AncestorOf(b, c) ← FatherOf(b, c)
AncestorOf(a, c) ← AncestorOf(a, b), AncestorOf(b, c)
MotherOf(a, b) ← true
FatherOf(b, c) ← true
FatherOf(b, d) ← true
The first five clauses state instances of some general facts about the relations mother of, father
of, and ancestor of. (Prolog actually lets you say things with variables, so we would not actually
need multiple instances of the same scheme. For example, rather than state both of the first two
clauses, we could just state AncestorOf(x, y) ← MotherOf(x, y).) The last three clauses describe
some particular facts about a, b, c, and d. Use the Horn satisfaction algorithm to determine
whether the above set of Horn sentences (in conditional form) is satisfiable.
17.28 The Prolog program in Exercise 17.27 might be considered as part of a database. To ask
"
17.29 Use the procedure of the Exercise 17.28 to determine whether the following are consequences
"
17.30 Suppose you have a Horn sentence which can be put into conditional form in a way that does
"
not contain any conjunct of form 3 in Proposition 7. Show that it is satisfiable. Similarly, show
that if it can be put into a conditional form that does not contain a conjunct of form 2, then
it is satisfiable.
Section 17.4
Resolution
People are pretty good at figuring out when one sentence is a tautological
consequence of another, and when it isn't. If it is, we can usually come up
with a proof, especially when we have been taught the important methods of
proof. And when it isn't, we can usually come up with an assignment of truth
values that makes the premises true and the conclusion false. But for computer
applications, we need a reliable and efficient algorithm for determining when
one sentence is a tautological consequence of another sentence or a set of
sentences.
Recall that S is a tautological consequence of premises P1, . . . , Pn if and
only if the set {P1, . . . , Pn, ¬S} is not tt-satisfiable, that is to say, its conjunction is not tt-satisfiable. Thus, the problem of checking for tautological consequence and the problem of checking to see that a sentence is not tt-satisfiable
amount to the same thing. The truth table method provides us with a reliable
method for doing this. The trouble is that it can be highly expensive in terms
of time and paper (or computer memory). If we had used it in Fitch, there
are many problems that would have bogged down your computer intolerably.
In the case of Horn sentences, we have seen a much more efficient method,
one that accounts for the importance of Horn sentences in logic programming.
In this section, we present a method that applies to arbitrary sentences in
CNF. It is not in general as efficient as the Horn sentence algorithm, but
it is often much more efficient than brute force checking of truth tables. It
also has the advantage that it extends to the full first-order language with
quantifiers. It is known as the resolution method, and lies at the heart of many
applications of logic in computer science. While it is not the algorithm that
we have actually implemented in Fitch, it is closely related to that algorithm.
The basic notion in resolution is that of a set of clauses. A clause is just
any finite set of literals. Thus, for example,

C1 = {Small(a), Cube(a), BackOf(b, a)}

is a clause. So is

C2 = {¬Small(a), Cube(b)}
The special notation □ is used for the empty clause. A clause C is said to
be satisfied by a truth assignment h provided at least one of the literals in
C is assigned true by h. The empty clause □ clearly is not satisfied by
any assignment, since it does not contain any elements to be made true. If
C ≠ □ then h satisfies C if and only if the disjunction of the sentences in C
is assigned true by h.
A nonempty set S of clauses is said to be satisfied by the truth assignment
h provided each clause C in S is satisfied by h. Again, this is equivalent to
saying that the CNF sentence formed by conjoining the disjunctions formed
from the clauses in S is assigned true by h.
1. Start with a set T of sentences in CNF which you hope to show is not
tt-satisfiable. Transform each of these sentences into a set of clauses in
the natural way: replace disjunctions of literals by clauses made up of
the same literals, and replace conjunctions by sets of clauses. Call the
set of all these clauses S. The aim now is to show S is not tt-satisfiable.
2. To show S is not tt-satisfiable, systematically add clauses to the set in
such a way that the resulting set is satisfied by the same assignments
as the old set. The new clauses to throw in are called resolvents of old
clauses. If you can finally get a set of clauses which contains □, and
so obviously cannot be satisfied, then you know that the original set S
could not be satisfied.
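The two steps of the method can be sketched as follows. This is an illustrative Python sketch, not the algorithm actually implemented in Fitch; clauses are assumed to be frozensets of literals, with ("not", atom) pairs for negative literals and the empty frozenset playing the role of □.

```python
from itertools import combinations

def negate(lit):
    """The literal not-A for A, and A for not-A."""
    return lit[1] if isinstance(lit, tuple) else ("not", lit)

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one literal."""
    return {(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2}

def unsatisfiable_by_resolution(clauses):
    """True if the empty clause is derivable, i.e. the set is not tt-satisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            new |= resolvents(c1, c2)
        if frozenset() in new:
            return True              # derived the empty clause
        if new <= clauses:
            return False             # saturated without deriving it
        clauses |= new
```

To test whether S is a tautological consequence of P1, . . . , Pn, one would feed in the clauses of P1, . . . , Pn together with those of ¬S, using the observation made earlier in this section.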
Notice that in order for an assignment h to satisfy the set {C1, C2}, h will have
to assign true to at least one of Cube(a), Cube(b), or BackOf(b, a). So let
C3 = {Cube(a), Cube(b), BackOf(b, a)} be an additional clause. Then the sets
of clauses {C1, C2} and {C1, C2, C3} are satisfied by exactly the same assignments. The clause C3 is a resolvent of the first set of clauses.
For another example, let C1, C2, and C3 be the following three clauses:

C1 = {Home(max), Home(claire)}
C2 = {¬Home(claire)}
C3 = {¬Home(max)}

Notice that in order for an assignment to satisfy both C1 and C2, you will
have to satisfy the clause

C4 = {Home(max)}
Thus we can throw this resolvent C4 into our set. But when we look at
{C1 , C2 , C3 , C4 }, it is obvious that this new set of clauses cannot be satisfied. C3 and C4 are in direct conflict. So the original set is not tt-satisfiable.
With these examples in mind, we now define what it means for one clause,
say R, to be a resolvent of two other clauses, say C1 and C2 .
[Resolution diagram: starting from the clauses of S and adding resolvents in stages, the empty clause □ is eventually derived.]
Since we are able to start with clauses in S and resolve to the empty clause,
we know that the original set T of sentences is not tt-satisfiable. A figure of
this sort is sometimes called a proof by resolution.
A proof by resolution shows that a set of sentences, or set of clauses, is not
tt-satisfiable. But it can also be used to show that a sentence S is a tautological
consequence of premises P1, . . . , Pn. This depends on the observation, made
earlier, that S is a consequence of premises P1, . . . , Pn if and only if the set
{P1, . . . , Pn, ¬S} is not tt-satisfiable.
Remember
1. Every set of propositional sentences can be expressed as a set of
clauses.
2. The resolution method is an important method for determining
whether a set of clauses is tt-satisfiable. The key notion is that of
a resolvent for a set of clauses.
3. A clause R is a resolvent of clauses C1 and C2 provided there is an
atomic sentence in one of the clauses whose negation is in the other
clause, and R is the set of all the remaining literals in either clause.
Exercises
17.31 Open Alan Robinson's Sentences. The sentences in this file are not mutually satisfiable in any
!|"
world. Indeed, the first six sentences are not mutually satisfiable. Show that the first five
sentences are mutually satisfiable by building a world in which they are all true. Submit this
as World 17.31. Go on to show that each sentence from 7 on can be obtained from earlier
sentences by resolution, if we think of the disjunctions in clausal form. The last sentence, □,
is clearly not satisfiable, so this shows that the first six are not mutually satisfiable. Turn in
your resolution proof to your instructor.
17.32 Use Fitch to give an ordinary proof that the first six sentences of Alan Robinson's Sentences are
!
not satisfiable.
17.33 Construct a proof by resolution showing that the following CNF sentence is not satisfiable:
"
(A C B) A (C B A) (A B)
17.34 Construct a proof by resolution showing that the following sentence is not satisfiable. Since the
"
17.35 Resolution can also be used to show that a sentence is logically true. To show that a sentence
"
is logically true, we need only show that its negation is not satisfiable. Use resolution to show
that the following sentence is logically true:
A (B C) (A B) (A B C)
Give resolution proofs of the following arguments. Remember, a resolution proof will demonstrate that
the premises and the negation of the conclusion form an unsatisfiable set.
17.36
"
B
A C
(C B)
17.37
"
"
AB
A
B
17.38
"
AB
17.39
CA
C
17.40
"
BC
(A B) C
A B
C (A B)
A (B C)
17.41
"
AB
AC
BD
CD
17.42
"
A (B C)
E
(A B) (D E)
A
CD
17.43
"
A B
C (D E)
D C
A E
CB
17.45 (Completeness of resolution) In this exercise we outline the theorem stated in the section to
"!!
7. Now you have shown that any unsatisfiable set S of clauses built from just two atomic
sentences has □ as an eventual resolvent. Can you see how this method generalizes to
the case of three atomic sentences? You will need to use your results for one and two
atomic sentences.
8. If you have studied the chapter on induction, complete this proof to obtain a general
proof of Theorem 17.4. Nothing new is involved except induction.
Chapter 18
First-order structures
In our treatment of propositional logic, we introduced the idea of logical consequence in virtue of the meanings of the truth-functional connectives. We
developed the rigorous notion of tautological consequence as a precise approximation of the intuitive notion. We achieved this precision thanks to truth table techniques, which we later extended by means of truth assignments. Truth
assignments have two advantages over truth tables: First, in assigning truth
values to all atomic sentences at once, they thereby determine the truth or
falsity of every sentence in the language, which allows us to apply the concept
of tautological consequence to infinite sets of sentences. Second, they allow us
to do this with complete mathematical rigor.
In Chapter 10, we introduced another approximation of the intuitive notion
of consequence, that of first-order consequence, consequence in virtue of the
meanings of ∀, ∃, and =, in addition to the truth-functional connectives. We
described a vague technique for determining when a sentence was a first-order
consequence of others, but did not have an analog of truth tables that gave
us enough precision to prove results about this notion, such as the Soundness
Theorem for F.
Now that we have available some tools from set theory, we can solve this
problem. In this section, we define the notion of a first-order structure. A first-order structure is analogous to a truth assignment in propositional logic. It
represents circumstances that determine the truth values of all of the sentences
of a language, but it does so in such a way that identity and the first-order
quantifiers ∀ and ∃ are respected. This will allow us to give a precise definition
of first-order consequence and first-order validity.
In our intuitive explanation of the semantics of quantified sentences, we
appealed to the notion of a domain of discourse, defining truth and satisfaction relative to such a domain. We took this notion to be an intuitive
one, familiar both from our experience using Tarski's World and from our
ordinary experience communicating with others about real-world situations.
The notion of a first-order structure results from modeling these domains in
a natural way using set theory.
Let's begin with a very simple language, a sublanguage of the blocks language. Assume that we have only three predicates, Cube, Larger, and =, and
one name, say c. Even with this simple language there are infinitely many
sentences. How should we represent, in a rigorous way, the circumstances that
determine the truth values of sentences in this language?
By way of example, consider Mary Ellen's World, shown in Figure 18.1.
This world has three cubes, one of each size, and one small tetrahedron. The
small cube is named c. Our goal is to construct a mathematical object that
represents everything about this world that is relevant to the truth values
of sentences in our toy language. Later, we will generalize this to arbitrary
first-order languages.
Since sentences are going to be evaluated in Mary Ellens World, one thing
we obviously need to represent is that the world contains four objects. We do
this by using a set D = {b1 , b2 , b3 , b4 } of four objects, where b1 represents the
leftmost block, b2 the next, and so forth. Thus b4 represents the tetrahedron.
This set D is said to be the domain of discourse of our first-order structure.
To keep first-order structures as clean as possible, we represent only those
features of the domain of discourse that are relevant to the truth of sentences
in the given first-order language. Given our current sublanguage, there are
many features of Mary Ellen's World that are totally irrelevant to the truth
of sentences. For example, since we cannot say anything about position, our
mathematical structure need not represent any facts about the positions of our
blocks. On the other hand, we can say things about size and shape. Namely,
we can say that an object is (or is not) a cube and that one object is (or is
not) larger than another. So we will need to represent these sorts of facts. We
do this by assigning to the predicate Cube a certain subset Cu of the domain
of discourse D, namely, the set of cubes. This set is called the extension of
the predicate Cube in our structure. In modeling the world depicted above,
this extension is the set Cu = {b1, b2, b3}. Similarly, to represent facts about
the relative sizes of the objects, we assign to the predicate Larger a set La of
ordered pairs ⟨x, y⟩, where x, y ∈ D. If ⟨x, y⟩ ∈ La, then this represents the
fact that x is larger than y. So in our example, we would have
Section 18.1
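In code, such a structure is just a bundle of sets. The sketch below is hypothetical: the text does not say here which block has which size, so the size assignment (and hence the extension La) is an assumption made purely for illustration.

```python
D  = {"b1", "b2", "b3", "b4"}        # domain of discourse
Cu = {"b1", "b2", "b3"}              # extension of Cube (b4 is the tetrahedron)
c  = "b3"                            # referent of the name c (assumed: the small cube)

# Assumed relative sizes, just to have a concrete extension for Larger.
size = {"b1": 3, "b2": 2, "b3": 1, "b4": 1}
La = {(x, y) for x in D for y in D if size[x] > size[y]}

def is_true_cube(x):
    """Truth of the atomic sentence Cube(x) in the structure."""
    return x in Cu

def is_true_larger(x, y):
    """Truth of the atomic sentence Larger(x, y) in the structure."""
    return (x, y) in La
```

Every atomic sentence of the toy language then gets a truth value by looking up the relevant set, which is exactly the role the structure M plays.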
Exercises
18.1
"
Write out a complete description of a first-order structure M that would represent Mary Ellen's
World. This has been done above except for the packaging into a single function.
18.2
!
(Simon says) Open Mary Ellen's World. The structure M that we have used to model this world,
with respect to the sublanguage involving only Cube, Larger, and c, is also a good model of
many other worlds. What follows is a list of proposed changes to the world. Some of them
are allowable changes, in that if you make the change, the model M still represents the world
with respect to this language. Other changes are not. Make the allowable changes, but not the
others.
1. Move everything back one row.
2. Interchange the position of the tetrahedron and the large cube.
3. Make the tetrahedron a dodecahedron.
4. Make the large cube a dodecahedron.
5. Make the tetrahedron (or what was the tetrahedron, if you have changed it) large.
6. Add a cube to the world.
7. Add a dodecahedron to the world.
Now open Mary Ellen's Sentences. Check to see that all these sentences are true in the world you
have built. If they are not, you have made some unallowable changes. Submit your modified
world.
18.3
"
18.4
"
18.5
"!
In the text we modeled Mary Ellen's World with respect to one sublanguage of Tarski's World.
How would our structure have to be modified if we added the following to the language: Tet,
Dodec, Between? That is, describe the first-order structure that would represent Mary Ellens
World, in its original state, for this expanded language. [Hint: One of your extensions will be
the empty set.]
Consider a first-order language with one binary predicate Outgrabe. Suppose for some reason
we are interested in first-order structures M for this language which have the particular domain
{Alice, Mad Hatter }. List all the sets of ordered pairs that could serve as the extension of the
symbol Outgrabe. How many would there be if the domain had three elements?
In Section 14.4 (page 396) we promised to show how to make the semantics of generalized
quantifiers rigorous. How could we extend the notion of a first-order structure to accommodate the addition of a generalized quantifier Q? Intuitively, as we have seen, a sentence like
Q x (A(x), B(x)) asserts that a certain binary relation Q holds between the set A of things that
satisfy A(x) and the set B that satisfies B(x) in M. Thus, the natural way to interpret them
is by means of a binary relation on ℘(DM). What quantifier corresponds to each of the
following binary relations on sets?
1. A ⊆ B
2. A ∩ B = ∅
3. A ∩ B ≠ ∅
4. |A ∩ B| = 1
5. |A ∩ B| ≥ 3
6. |A ∩ B| > |A − B|
18.6 While we can't say with precision exactly which binary relation a speaker might have in mind
with the use of some quantifiers, like many, we can still use this framework to illustrate the
nature of the logical properties like conservativity, monotonicity, and so forth discussed in
Section 14.5. Each of the following properties of binary relations Q on subsets of D corresponds
to a property of quantifiers. Identify them.
1. Q(A, B) if and only if Q(A, A ∩ B)
2. If Q(A, B) and A ⊆ A′ then Q(A′, B)
3. If Q(A, B) and A′ ⊆ A then Q(A′, B)
4. If Q(A, B) and B′ ⊆ B then Q(A, B′)
5. If Q(A, B) and B ⊆ B′ then Q(A, B′)
Section 18.2
Modeling satisfaction and truth

A variable assignment is a function defined on some set of variables, assigning to each of them
an object from the domain of a first-order structure. We say that an assignment g is appropriate
for a wff P if every free variable of P is in the domain of g. For example, suppose g1 is defined
only on the variable x, g2 only on the variables x, y, and z, g3 on all the variables of the
language, and g4 on no variables at all; g4 is the empty variable assignment, which we agree to
write as g∅. Then:
1. g1 is appropriate for any wff with the single free variable x, or with no
free variables at all;
2. g2 is appropriate for any wff whose free variables are a subset of {x, y, z};
3. g3 is appropriate for any wff at all; and
4. g4 (which we just agreed to write as g∅) is appropriate for any wff with no
free variables, that is, for sentences, but not for wffs with free variables.
We next come to the definition of truth by way of satisfaction. The definition we gave earlier required us to define this by means of substituting names
for variables. The definition we are about to give ends up being equivalent,
but it avoids this detour. It works by defining satisfaction more generally. In
particular, we will define what it means for an assignment g to satisfy a wff
P(x1 , . . . , xn ) in M. We will define this by induction, with cases corresponding
to the various ways of building up wffs from atomic wffs. This will reduce the
problem gradually to the base case of atomic wffs, where we say explicitly
what satisfaction means.
In order to handle the two inductive clauses in which P starts with a
quantifier, we need a way to modify a variable assignment. For example, if g
is defined on x and we want to say what it means for g to satisfy ∀z Likes(x, z),
then we need to be able to take any object b in the domain of discourse and
consider the variable assignment which is just like g except that it assigns the
value b to the variable z. We will say that g satisfies our wff ∀z Likes(x, z) if
and only if every such modified assignment g′ satisfies Likes(x, z). To make
this a bit easier to say, we introduce the notation g[z/b] for the modified
variable assignment. Thus, in general, g[v/b] is the variable assignment whose
domain is that of g plus the variable v and which assigns the same values as
g, except that the new assignment assigns b to the variable v.
Here are a couple of examples, harking back to our earlier examples of variable assignments given above:
1. g1 assigns b to the variable x, so g1 [y/c] assigns b to x and c to y. By
contrast, g1 [x/c] assigns a value only to x, the value c.
2. g2 assigns a, b, c to the variables x, y, and z, respectively. Then g2 [x/b]
assigns the values b, b, and c to x, y, and z, respectively. The assignment
g2 [u/c] assigns the values c, a, b, and c to the variables u, x, y, and z,
respectively.
3. g3 assigns b to all the variables of the language. g3 [y/b] is the same
assignment, g3 , but g3 [y/c] is different. It assigns c to y and b to every
other variable.
4. g4 , the empty function, does not assign values to any variables. Thus
g4 [x/b] is the function which assigns b to x. Notice that this is the same
function as g1 .
Given a structure M and an assignment g, each term t whose variables are in the domain of g
denotes an object, which we write [[t]]M g: an individual constant denotes whatever object M
assigns to it, and a variable denotes whatever object g assigns to it.

Definition (Satisfaction) Let P be a wff and let g be an assignment in M appropriate for P.
We write M |= P [g] to mean that g satisfies P in M, and define satisfaction by induction on
wffs:
1. If P is an atomic wff R(t1, . . . , tn), then M |= P [g] iff the n-tuple ⟨[[t1]]M g, . . . , [[tn]]M g⟩
is in the extension of R in M.
2. If P is ¬Q, then M |= P [g] iff it is not the case that M |= Q [g].
3. If P is Q ∧ R, then M |= P [g] iff M |= Q [g] and M |= R [g], and similarly for the other
truth-functional connectives.
4. If P is ∀v Q, then M |= P [g] iff M |= Q [g[v/b]] for every object b in the domain of M.
5. If P is ∃v Q, then M |= P [g] iff M |= Q [g[v/b]] for some object b in the domain of M.

To see how this definition works, suppose M is a structure in which the extension of Likes is
the single pair ⟨a, b⟩, and ask which assignments g, defined on the variable x, satisfy the wff
∃y (Likes(x, y) ∧ ¬Likes(y, y)). Suppose g assigns the object e to x. By the clause for ∃, g
satisfies this wff if and only if some modified assignment g[y/d] satisfies the wff
Likes(x, y) ∧ ¬Likes(y, y).
But g[y/d] satisfies this wff if and only if it satisfies Likes(x, y) but does not
satisfy Likes(y, y), by the clauses for conjunction and negation. Looking at
the atomic case, we see that this is true just in case the pair ⟨e, d⟩ is in the
extension of Likes, while the pair ⟨d, d⟩ is not. But this can only happen if
e = a and d = b. Thus the only way our original g can satisfy our wff is if it
assigns a to the variable x, as we anticipated.
Notice in the above example how we started off with a wff with one free
variable and an assignment defined on that one variable, but in order to give
our analysis, we had to move to consider a wff with two free variables and
so to assignments defined on those two free variables. This is typical. After
all, what we are really interested in is truth for sentences, that is, wffs with
no free variables, but in order to define this, we must define something more
general, satisfaction of wffs with free variables by assignments defined on those
variables. Indeed, having defined satisfaction, we are now in a position to look
at the special case where the wffs have no free variables and use it for our
definition of truth.
Definition (Truth) Let L be some first-order language and let M be a structure for L. A
sentence P of L is true in M if and only if the empty variable assignment g∅ satisfies P in M.
Otherwise P is false in M.
Just as we write M |= Q [g] if g satisfies a wff Q in M, so too we write:
M |= P
if the sentence P is true in M.
Let's look back at the structure given just above and see if the sentence

∃x ∃y (Likes(x, y) ∧ ¬Likes(y, y))

comes out as it should under this definition. First, notice that it is a sentence,
that is, has no free variables. Thus, the empty assignment is appropriate
for it. Does the empty assignment satisfy it? According to the definition of
satisfaction, it does if and only if there is an object that we can assign to the
variable x so that the resulting assignment satisfies
∃y (Likes(x, y) ∧ ¬Likes(y, y))
But we have seen that there is such an object, namely, a. So the sentence is
true in M; in symbols, M |= ∃x ∃y (Likes(x, y) ∧ ¬Likes(y, y)).
Consider next the sentence
∀x ∃y (Likes(x, y) ∧ ¬Likes(y, y))
Does the empty assignment satisfy this? It does if and only if for every object
e in the domain, if we assign e to x, the resulting assignment g satisfies
∃y (Likes(x, y) ∧ ¬Likes(y, y))
But, as we showed earlier, g satisfies this only if g assigns a to x. If it assigns,
say, b to x, then it does not satisfy the wff. Hence, the empty assignment does
not satisfy our sentence, i.e., the sentence is not true in M. So its negation is;
in symbols, M |= ¬∀x ∃y (Likes(x, y) ∧ ¬Likes(y, y)).
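The inductive definition of satisfaction translates directly into a recursive procedure. The sketch below evaluates the two sentences just discussed; the tuple encoding of wffs and the particular structure (domain {a, b}, with Likes having extension {⟨a, b⟩}) are our own illustrative assumptions, chosen to be consistent with the discussion above:

```python
# Wffs as nested tuples: ("Likes", "x", "y"), ("not", P), ("and", P, Q),
# ("exists", v, P), ("forall", v, P). Terms here are just variables.

DOMAIN = {"a", "b"}
LIKES = {("a", "b")}          # extension of Likes in our illustrative structure

def satisfies(wff, g):
    """Does the variable assignment g (a dict) satisfy wff in the structure?"""
    op = wff[0]
    if op == "Likes":                         # the atomic case
        return (g[wff[1]], g[wff[2]]) in LIKES
    if op == "not":
        return not satisfies(wff[1], g)
    if op == "and":
        return satisfies(wff[1], g) and satisfies(wff[2], g)
    if op == "exists":                        # g[v/b] for some b in the domain
        return any(satisfies(wff[2], {**g, wff[1]: b}) for b in DOMAIN)
    if op == "forall":                        # g[v/b] for every b in the domain
        return all(satisfies(wff[2], {**g, wff[1]: b}) for b in DOMAIN)
    raise ValueError(op)

def is_true(sentence):
    """A sentence is true iff the empty assignment satisfies it."""
    return satisfies(sentence, {})

matrix = ("and", ("Likes", "x", "y"), ("not", ("Likes", "y", "y")))
print(is_true(("exists", "x", ("exists", "y", matrix))))   # True
print(is_true(("forall", "x", ("exists", "y", matrix))))   # False
```

The two results mirror the text: the existential sentence is true in the structure, while the universal one is not, since assigning b to x leaves no witness for y.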
A number of problems are given later to help you understand that this
does indeed model the informal, intuitive notion. In the meantime, we will
state a proposition that will be important in proving the Soundness Theorem
for fol. Intuitively, whether or not a sentence is true in a structure should
depend only on the meanings specified in the structure for the predicates and
individual constants that actually occur in the sentence. That this is the case
is a consequence of the following, somewhat stronger claim.
Proposition 1. Let M1 and M2 be structures which have the same domain
and assign the same interpretations to the predicates and constant symbols
in a wff P. Let g1 and g2 be variable assignments that assign the same objects
to the free variables in P. Then M1 |= P[g1 ] iff M2 |= P[g2 ].
The proof of this proposition, which uses induction on wffs, is a good
exercise to see if you understand the definition of satisfaction. Consequently,
we ask you to prove it in Exercise 18.10.
Once we have truth, we can define the important notions of first-order
consequence and first-order validity, our new approximations of the intuitive
notions of logical consequence and logical truth. In the following definitions,
we assume that we have a fixed first-order language and that all sentences
come from that language. By a structure, we mean any first-order structure
that interprets all the predicates and individual constants of the language.
Definition (First-order consequence) A sentence Q is a first-order consequence
of a set T = {P1 , . . . } of sentences if and only if every structure that makes
all the sentences in T true also makes Q true.
You can see that this definition is the exact analogue of our definition of
tautological consequence. The only difference is that instead of rows of a truth
table (or truth value assignments), we are using first-order structures in the
definition. We similarly modify our definition of tautology to get the following
definition of first-order validity.
Definition (First-order validity) A sentence P is a first-order validity if and
only if every structure makes P true.
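First-order consequence and validity quantify over all structures, so they cannot be established by finite enumeration; still, for a fixed small domain one can search for counterexamples among all possible extensions of a predicate. The sketch below, using an argument and a two-element domain of our own choosing, finds no counterexample there, which of course falls short of checking every structure:

```python
from itertools import product

DOMAIN = [0, 1]
PAIRS = list(product(DOMAIN, repeat=2))

def all_relations():
    """Every possible extension of a binary predicate L on the domain."""
    for bits in product([False, True], repeat=len(PAIRS)):
        yield {p for p, keep in zip(PAIRS, bits) if keep}

def premise(L):      # forall x forall y L(x, y)
    return all((x, y) in L for x in DOMAIN for y in DOMAIN)

def conclusion(L):   # forall x L(x, x)
    return all((x, x) in L for x in DOMAIN)

# A counterexample would be a structure making the premise true and the
# conclusion false; over this domain there is none.
counterexamples = [L for L in all_relations() if premise(L) and not conclusion(L)]
print(counterexamples)   # []
```

Finding no counterexample of size two does not establish consequence in general; a genuine counterexample might require a larger, even infinite, domain.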
We will also use other notions analogous to those introduced in propositional logic in
discussing first-order sentences and sets of sentences. For example, we will call a sentence
fo-satisfiable if there is a first-order structure that makes it true, and call a set of sentences
fo-satisfiable if there is a structure that makes all the members of the set true. Sometimes we
will leave out the fo if the context makes it clear what kind of satisfiability we are referring to.
You may have wondered why Tarski's World is so named. It is our way of
paying tribute to Alfred Tarski, the logician who played the pivotal role in the
development of the semantic conception of logic. It was Tarski who developed
the notion of a first-order structure, the notion of satisfaction, and who gave
the first analysis of truth, first-order validity, and first-order consequence along
the lines we have sketched here.
One final note. If you go on to study logic further, you will discover that our
treatment of satisfaction is a bit more general than most. Tarski, and most
of those who have followed him, have restricted attention to total variable
assignments, that is, to variable assignments that are defined on all variables.
Then, to define truth, they pick out one of these total assignments and use it,
since they cannot use the empty assignment. The two approaches agree on the
resulting notion of truth, and hence on the notion of logical consequence. The
approach adopted here using partial assignments is more general, seems to us
more natural, and fits in better with our implementation of Tarski's World. It
is easy to represent finite partial assignments in the computer's memory, but
not so easy to deal with infinite assignments.
Remember
1. First-order structures are mathematical models of the domains about
which we make claims using fol.
2. Variable assignments are functions mapping variables into the domain
of some first-order structure.
3. A variable assignment satisfies a wff in a structure if, intuitively, the
objects assigned to the variables make the wff true in the structure.
4. Using the notion of satisfaction, we can define what it means for a
sentence to be true in a structure.
5. Finally, once we have the notion of truth in a structure, we can model
the notions of logical truth and logical consequence.
Exercises

18.7 (Modifying variable assignments.) Suppose D = {a, b, c, d} and let g be the variable assignment
which is defined only on the variable x and takes value b. Describe explicitly each of the
following:
1. g[y/c]
2. g[x/c]
3. g[z/b]
4. g[x/b]
5. (g[x/c])[z/d]
6. (g[x/c])[x/d]

18.8 Consider the language with only one binary predicate symbol P and let M be the structure
with domain D = {1, 2, 3} and where the extension of P consists of those pairs ⟨n, m⟩ such
that m = n + 1. For each of the following wffs, first describe which variable assignments are
appropriate for it. Then describe the variable assignments which satisfy it, much the way we
described the variable assignments that satisfy the wff ∀z Likes(x, z) on page 517.
1. P(y,z)
2. y P(y, z)
3. z P(y, z)
4. P(x,x)
5. x P(x, x)
6. x P(x, x)
7. P(x, x) P(y, z)
8. x (P(x, x) P(y, z))
9. y (P(x, x) P(y, z))
10. y z P(y, z)
11. y y P(y, z)
Now consider the structure N with the same domain but where the extension of P is the set
of those pairs ⟨n, m⟩ such that n m. How do your answers change?
18.9 Let g be a variable assignment in M which is appropriate for the wff P. Show that the following
three statements are equivalent:
1. g satisfies P in M
2. g′ satisfies P in M for some extension g′ of g
3. g′ satisfies P in M for every extension g′ of g
Intuitively, this is true because whether a variable assignment satisfies P can depend only on
the free variables of P, but it needs a proof. What does this result say in the case where P is
a sentence? Express your answer using the concept of truth. [Hint: You will need to prove this
by induction on wffs.]
18.10 Prove Proposition 1. [Hint: Use induction on wffs.]
18.11 (From first-order structures to truth assignments.) Recall from Section 10.1 that when dealing
with sentences containing quantifiers, any sentence that starts with a quantifier is treated just
like an atomic sentence from the point of view of truth tables and hence truth assignments.
Given a first-order structure M for a language L, define a truth assignment hM as follows: for
any sentence S that is atomic or begins with a quantifier,
hM (S) = true if and only if M |= S
Show that the same if and only if holds for all sentences.
18.12 (From truth assignments to first-order structures.) Let h be any truth assignment for a
first-order language without function symbols. Construct a first-order structure Mh as follows. Let
the domain of Mh be the set of individual constants of the language. Given a relation symbol
R, binary let's say for simplicity of notation, define its extension to be

{⟨c, d⟩ | h(R(c, d)) = true}
Finally, interpret each individual constant as naming itself.
1. Show that for any sentence S that does not contain quantifiers or the identity symbol:
Mh |= S iff h(S) = true
[Hint: use induction on wffs.]
2. Show that the result in (1) does not extend to sentences containing the identity symbol.
[Hint: consider an h that assigns false to b = b.]
3. Recall from Section 10.1 that it is possible for a truth assignment h to assign true to
Cube(b) but false to ∃x Cube(x). Show that for such an h, the result in (1) does not
extend to quantified sentences.
18.13 (An important problem about satisfiability.) Open Skolem's Sentences. You will notice that
these sentences come in pairs. Each even-numbered sentence is obtained from the preceding
sentence by replacing some names with variables and existentially quantifying the variables.
The odd-numbered sentence logically implies the even-numbered sentence which follows it, of
course, by existential generalization. The converse does not hold. But something close to it
does. To see what, open Thoralf's First World and check the truth values of the sentences in
the world. The even-numbered sentences all come out true, while the odd-numbered sentences
can't be evaluated because they contain names not in use in the world.
Extend Thoralf's First World by assigning the names b, c, d and e in such a way that the
odd-numbered sentences are also true. Do the same for Thoralf's Second World, saving the resulting
worlds as World 18.13.1 and World 18.13.2. Submit these worlds.
Explain under what conditions a world in which ∃x P(x) is true can be extended to one in
which P(c) is true. Turn in your explanation to your instructor.
Section 18.3
Soundness of F

Theorem (Soundness of F) If a sentence S is provable in the system F from the premises in a
set T, then S is a first-order consequence of T.
Proof: The proof is very similar to the proof of the Soundness Theorem for FT, the
propositional part of F, on page 216. We will show that any sentence that occurs at any step in a
proof p in F is a first-order consequence of the assumptions in force at that step (which
include the premises of p). This claim applies not just to sentences
at the main level of proof p, but also to sentences appearing in subproofs, no matter how deeply nested. The theorem follows from this
claim because if S appears at the main level of p, then the only assumptions in force are premises drawn from T . So S is a first-order
consequence of T .
Call a step of a proof valid if the sentence at that step is a first-order
consequence of the assumptions in force at that step. Our earlier
proof of soundness for FT was actually a disguised form of induction
on the number of the step in question. Since we had not yet discussed
induction, we disguised this by assuming there was an invalid step
and considering the first of these. (Recall that the formal proof system F
includes all the introduction and elimination rules, but not the Con
procedures.) When you think about it, you
see that this is really just the inductive step in an inductive proof.
Assuming we have the first invalid step allows us to assume that all
the earlier steps are valid, which is the inductive hypothesis, and
then prove (by contradiction) that the current step is valid after all.
We could proceed in the same way here, but we will instead make
the induction explicit. We thus assume that we are at the nth step,
that all earlier steps are valid, and show that this step is valid as
well.
The proof is by cases, depending on which rule is applied at step
n. The cases for the rules for the truth-functional connectives work
out pretty much as before. We will look at one, to point out the
similarity to our earlier soundness proof.
→ Elim: Suppose the nth step derives the sentence R from an application of → Elim to
sentences Q → R and Q appearing earlier in the proof. Let A1, . . . , Ak be a list of all the
assumptions in force at step n. By our induction hypothesis we know that Q → R and Q are both
established at valid steps, that is, they are first-order consequences of the assumptions in
force at those steps. Furthermore, since F only allows us to cite sentences in the main proof
or in subproofs whose assumptions are still in force, we know that the assumptions in force
at steps Q → R and Q are also in force at R. Hence, the assumptions for these steps are
among A1, . . . , Ak. Thus, both Q → R and Q are first-order consequences of A1, . . . , Ak.
We now show that R is a first-order consequence of A1, . . . , Ak.
Suppose M is a first-order structure in which all of A1, . . . , Ak are
true. Then we know that M |= Q → R and M |= Q, since these sentences are first-order
consequences of A1, . . . , Ak. But in that case,
by the definition of truth in a structure we see that M |= R as well.
So R is a first-order consequence of A1 , . . . , Ak . Hence, step n is a
valid step.
Notice that the only difference in this case from the corresponding
case in the proof of soundness of FT is our appeal to first-order
structures rather than rows of a truth table. The remaining
truth-functional rules are all similar. Let's now consider a quantifier rule.
∃ Elim: Suppose the nth step derives the sentence R from an application of ∃ Elim to the
sentence ∃x P(x) and a subproof containing R at its main level, say at step m. Let c be the
new constant introduced in the subproof.
The Soundness Theorem for F assures us that we will never prove an invalid argument using just the rules of F. It also warns us that we will never be
able to prove a valid argument whose validity depends on meanings of predicates other than identity. The Completeness Theorem for F is significantly
harder to prove than the Soundness Theorem for F, or for that matter, than
the Completeness Theorem for FT . In fact, it is the most significant theorem
that we prove in this book and forms the main topic of Chapter 19.
Section 18.4

What "just like" means here is that the structures are isomorphic, a notion we have
not defined. The intuitive notion should be enough to convince you of our claim.
Section 18.5
Skolemization
One important role function symbols play in first-order logic is as a way of
simplifying (for certain purposes) sentences that have lots of quantifiers nested
inside one another. To see an example of this, consider the sentence
∀x ∃y Neighbor(x, y)
Given a fixed domain of discourse (represented by a first-order structure M,
say) this sentence asserts that every b in the domain of discourse has at least
one neighbor c. Let us write this as
M |= Neighbor(x, y)[b, c]
rather than the more formal M |= Neighbor(x, y)[g] where g is the variable
assignment that assigns b to x and c to y. Now if the original quantified
sentence is true, then we can pick out, for each b, one of b's neighbors, say the one given by
a function f, so that the sentence ∀x Neighbor(x, f(x)) comes out true. Such a function is
known as a Skolem function, and the result of replacing an existential quantifier by a
function symbol in this way is called a Skolem normal form of the original sentence; in general,
∀x ∃y P(x, y) is Skolemized as ∀x P(x, f(x)). The claim that a suitable function exists would
be expressed directly by the sentence

∃f ∀x P(x, f(x))
This sort of sentence, however, takes us into what is known as second-order
logic, which is beyond the scope of this book.
Skolem functions, and Skolem normal form, are very important in advanced parts of logic. We will discuss one application of them later in the
chapter, when we sketch how to apply the resolution method to fol with
quantifiers.
One of the reasons that natural language does not get bogged down in lots
of embedded quantifiers is that there are plenty of expressions that act like
function symbols, so we can usually get by with Skolemizations. Possessives,
for example, act as very general Skolem functions. We usually think of the
possessive apostrophe "s" as indicating ownership, as in John's car. But it really
functions much more generally as a kind of Skolem function. For example,
if we are trying to decide where the group will eat out, then Max's restaurant
can refer to the restaurant that Max likes best. Or if we are talking about
logic books, we can use Kleene's book to refer not to one Kleene owns, but to
one he wrote.
Remember

(Simplest case of Skolemization) Given a sentence of the form ∀x ∃y P(x, y)
in some first-order language, we Skolemize it by choosing a function symbol f not in the
language and writing ∀x P(x, f(x)). Every world that makes
the Skolemization true also makes the original sentence true. Every world
that makes the original sentence true can be turned into one that makes
the Skolemization true by interpreting the function symbol f by a function which picks out,
for any object b in the domain, some object c such that b and c satisfy the wff P(x, y).
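Over a finite world both halves of the box above can be checked by brute force: ∀x ∃y P(x, y) holds exactly when some interpretation of the function symbol f makes ∀x P(x, f(x)) hold. In the sketch below the domain and the extension of P are our own illustrative choices, not examples from the text:

```python
from itertools import product

DOMAIN = ["a", "b", "c"]
P = {("a", "b"), ("b", "c"), ("c", "a")}   # illustrative extension of P(x, y)

# Original sentence: for every x there is some y with P(x, y).
original = all(any((x, y) in P for y in DOMAIN) for x in DOMAIN)

def interpretations():
    """All ways of interpreting the unary function symbol f on DOMAIN."""
    for values in product(DOMAIN, repeat=len(DOMAIN)):
        yield dict(zip(DOMAIN, values))

# Skolemization: some interpretation of f makes P(x, f(x)) hold for every x.
skolemized = any(all((x, f[x]) in P for x in DOMAIN) for f in interpretations())

print(original, skolemized)   # True True: the two agree on this world
```

Searching all interpretations of f is exponential in the domain size, but for a finite world it makes the equivalence of the two sentences with respect to satisfiability concrete.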
Section 18.6
Unification of terms
We now turn to a rather different topic, unification, that applies mainly to
languages that contain function symbols. Unification is of crucial importance
when we come to extend the resolution method to the full first-order language.
The basic idea behind unification can be illustrated by comparing a couple
of claims. Suppose first that Nancy tells you that Max's father drives a Honda,
and that no one's grandfather drives a Honda. Now this is not true, but there is
nothing logically incompatible about the two claims. Note that if Nancy went
on to say that Max was a father (so that Max's father was a grandfather),
then the three claims taken together would be contradictory.
Two or more terms are said to be unifiable if there is a single substitution of terms for some
or all of their variables which transforms all of them into one and the same term. For example,
consider the three terms f(g(z), x), f(y, h(a)), and f(y, x). Can you find a substitution that
unifies them?
If you said to substitute h(a) for the variable x and g(z) for y you were right.
All three terms are transformed into the term f(g(z), h(a)). Are there any
other substitutions that would work? Yes, there are. We could plug any term
in for z and get another substitution. The one we chose was the simplest in
that it was the most general. We could get any other from it by means of a
substitution.
Here are some examples of pairs, some of which can, others of which cannot, be unified.
See if you can tell which are which before reading on.

g(x), h(y)
h(f(x, x)), h(y)
f(x, y), f(y, x)
g(g(x)), g(h(y))
g(x), g(h(z))
g(x), g(h(x))
Half of these go each way. The ones that are unifiable are the second, third,
and fifth. The others are not unifiable. The most general unifiers of the three
that are unifiable are, in order:
Substitute f(x, x) for y
Substitute some other variable z for both x and y
Substitute h(z) for x
The first pair is not unifiable because no matter what you do, one term will always
start with g while the other starts with h. Similarly, the fourth pair is not
unifiable because the first will always start with a pair of g's, while the second
will always start with a g followed by an h. (The reason the last pair cannot
be unified is a tad more subtle. Do you see why?)
There is a very general procedure for checking when two (or more) terms
are unifiable or not. It is known as the Unification Algorithm. We will not
explain it in this book. But once you have done the following exercises, you
will basically understand how the algorithm works.
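Although the algorithm is not explained in this book, its core fits in a few lines: walk two terms in parallel, binding a variable whenever one side is an unbound variable (refusing when that variable occurs inside the other side), and recursing on arguments when both sides are compound. The sketch below is our own rendering of this standard idea, not the book's presentation; variables are bare strings, and compound terms are tuples whose first element is the function symbol:

```python
# Terms: a variable is a string like "x"; a compound term is a tuple
# ("f", arg1, ..., argn). Constants are 0-argument compounds like ("a",).

def walk(t, subst):
    """Follow variable bindings until reaching a non-variable or unbound variable."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Does variable v occur inside term t, given the bindings built so far?"""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return a most general unifier of s and t as a dict, or None."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):                       # s is an unbound variable
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str):
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):         # different function symbols
        return None
    for a, b in zip(s[1:], t[1:]):               # unify the arguments in turn
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# The six pairs from the text: exactly the 2nd, 3rd and 5th are unifiable.
pairs = [(("g", "x"), ("h", "y")),
         (("h", ("f", "x", "x")), ("h", "y")),
         (("f", "x", "y"), ("f", "y", "x")),
         (("g", ("g", "x")), ("g", ("h", "y"))),
         (("g", "x"), ("g", ("h", "z"))),
         (("g", "x"), ("g", ("h", "x")))]
print([unify(s, t) is not None for s, t in pairs])
# [False, True, True, False, True, False]
```

The occurs check is what rules out the last pair, g(x) and g(h(x)): binding x to h(x) would produce an infinite term.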
Section 18.7
Resolution, revisited
In this section we discuss in an informal way how the resolution method for
propositional logic can be extended to full first-order logic by combining the
tools we have developed above.
The general situation is that you have some first-order premises P1 , . . . , Pn
and a potential conclusion Q. The question is whether Q is a first-order consequence of P1 , . . . , Pn . This, as we have seen, is the same as asking if there
is no first-order structure which is a counterexample to the argument that
Q follows from P1 , . . . , Pn . This in turn is the same as asking whether the
sentence

P1 ∧ . . . ∧ Pn ∧ ¬Q
is not fo-satisfiable. So the general problem can be reduced to that of determining, of a fixed finite sentence, say S, of fol, whether it is fo-satisfiable.
The resolution method discussed earlier gives a procedure for testing this
when the sentence S contains no quantifiers. But interesting sentences do
contain quantifiers. Surprisingly, there is a method for reducing the general
case to the case where there are no quantifiers.
An overview of this method goes as follows. First, we know that we can
always pull all quantifiers in a sentence S out in front by logically valid transformations, and so we can assume S is in prenex form.
Call a sentence universal if it is in prenex form and all of the quantifiers
in it are universal quantifiers. That is, a universal sentence S is of the form
∀x1 . . . ∀xn P(x1 , . . . , xn )
For simplicity, let us suppose that there are just two quantifiers:

∀x ∀y P(x, y)
Let's assume that P contains just two names, b and c, and, importantly, that
there are no function symbols in P.
We claim that S is fo-satisfiable if and only if the following set T of
quantifier-free sentences is fo-satisfiable:
T = {P(b, b), P(b, c), P(c, b), P(c, c)}
Note that we are not saying the two are equivalent. S obviously entails T , so
if S is fo-satisfiable so is T . T does not in general entail S, but it is the case
that if T is fo-satisfiable, so is S. The reason is fairly obvious. If you have a
structure that makes T true, look at the substructure that just consists of b
and c and the relationships they inherit. This little structure with only two
objects makes S true.
This neat little observation allows us to reduce the question of the unsatisfiability of the universal sentence S to a sentence of fol containing no
quantifiers, something we know how to solve using the resolution method for
propositional logic.
There are a couple of caveats, though. First, since the resolution method
for propositional logic gives us truth-assignments, in order for our proof to
work we must be able to go from a truth-assignment h for the atomic sentences
of our language to a first-order structure Mh for that language making the
same atomic sentences true. This works for sentences that do not contain =,
as we saw in Exercise 18.12, but not in general. This means that in order to
be sure our proof works, the sentence S cannot contain =.
Exercise 18.12 also required that the sentence not contain any function
symbols. This is a real pity, since Skolemization gives us a method for taking
any prenex sentence S and finding another one that is universal and
fo-satisfiable if and only if S is: just replace all the ∃'s one by one, left to right, by
function symbols. So if we could only generalize the above method to the case
where function symbols are allowed, we would have a general method. This is
where the Unification Algorithm comes to the rescue. The basic strategy of
resolution from propositional logic has to be strengthened a bit.
Resolution method for fol: Suppose we have sentences S, S′, S″, . . . and
want to show that they are not simultaneously fo-satisfiable. To do this using
resolution, we would carry out the following steps:

1. Put each of the sentences into prenex form.
2. Skolemize away the existential quantifiers, so that each sentence is universal.
3. Put the quantifier-free part of each sentence into conjunctive normal form.
4. Split each sentence into universal sentences whose quantifier-free parts are
single conjuncts.
5. Change variables so that no variable occurs in more than one sentence, and
drop the universal quantifiers.
6. Turn each of the resulting quantifier-free sentences into a set of clauses.
7. Try to derive the empty clause ! by resolution, using unification to find
substitutions under which literals from different clauses become complementary.
Example. Suppose you want to show that ∀x P(x, b) and ∀y ¬P(f(y), b) are
not jointly fo-satisfiable, that is, that their conjunction is not fo-satisfiable.
With this example, we can skip right to step 6, giving us two clauses, each
consisting of one literal. Since we can unify x and f(y), we see that these two
clauses resolve to !.
Example. Suppose we are told that the following are both true:

∀x (P(x, b) → Q(x))
∀y (P(f(y), b) ∨ ¬Q(y))

and we want to derive the sentence,

∀y (Q(y) → Q(f(y)))

To show that this sentence is a first-order consequence of the first two, we
need to show that those sentences together with the negation of this sentence
are not simultaneously fo-satisfiable. We begin by putting this negation into
prenex form:

∃y (Q(y) ∧ ¬Q(f(y)))

We now want to Skolemize this sentence. Since the existential quantifier in
our sentence is not preceded by any universal quantifiers, to Skolemize this
sentence we replace the variable y by a 0-ary function symbol, that is, an
individual constant:

Q(c) ∧ ¬Q(f(c))

Dropping the conjunction gives us the following two sentences:

Q(c)
¬Q(f(c))
We now have four sentences to which we can apply step 6. This yields the
following four clauses:

1. {¬P(x, b), Q(x)}
2. {P(f(y), b), ¬Q(y)}
3. {Q(c)}
4. {¬Q(f(c))}
Applying resolution to these shows that they are not fo-satisfiable. Here is a
step-by-step derivation of the empty clause.

5. {¬Q(y), Q(f(y))}   resolvent of clauses 1, 2; substitution: f(y) for x
6. {Q(f(c))}   resolvent of clauses 3, 5; substitution: c for y
7. !   resolvent of clauses 4, 6; no substitution needed
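Each line of such a derivation can be replayed mechanically: apply the substitution to both clauses, find a complementary pair of literals, and take the union of what remains. The sketch below replays the resolution of clauses 1 and 2 under the substitution f(y) for x; the encoding of literals as (sign, atom-string) pairs and the naive textual substitution are our own simplifications, adequate only for this small example:

```python
def substitute(lit, subst):
    """Apply a textual substitution to a literal given as (positive?, atom)."""
    sign, atom = lit
    for var, term in subst.items():
        atom = atom.replace(var, term)   # naive string replacement
    return (sign, atom)

def resolve(c1, c2, subst):
    """Apply subst to both clauses, then resolve on a complementary pair."""
    c1 = {substitute(l, subst) for l in c1}
    c2 = {substitute(l, subst) for l in c2}
    for sign, atom in c1:
        if (not sign, atom) in c2:       # found a complementary pair
            return (c1 - {(sign, atom)}) | (c2 - {(not sign, atom)})
    return None                          # the clauses do not resolve

# Clauses 1 and 2: {~P(x,b), Q(x)} and {P(f(y),b), ~Q(y)}.
clause1 = {(False, "P(x,b)"), (True, "Q(x)")}
clause2 = {(True, "P(f(y),b)"), (False, "Q(y)")}

# Step 5: substitute f(y) for x, then resolve on P(f(y),b).
resolvent = resolve(clause1, clause2, {"x": "f(y)"})
print(resolvent == {(False, "Q(y)"), (True, "Q(f(y))")})   # True
```

Resolving two complementary unit clauses with this function returns the empty set, which plays the role of the empty clause !.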
Example. Let's look at one more example that shows the whole method at
work. Consider the two English sentences:

1. Everyone admires someone who admires them unless they admire Quaid.
2. There are people who admire each other, at least one of whom admires Quaid.
Suppose we want to use resolution to show that under one plausible reading of
these sentences, (2) is a first-order consequence of (1). The readings we have
in mind are the following, writing A(x, y) for Admires(x, y), and using q for the
name Quaid:

(S1) ∀x (A(x, q) ∨ ∃y (A(x, y) ∧ A(y, x)))
(S2) ∃x ∃y (A(x, q) ∧ A(x, y) ∧ A(y, x))
(When you figure out why S1 logically entails S2 in Problem 18.27, you may
decide that these are not reasonable translations of the English. But that is
beside the point here.)
Our goal is to show that S1 and ¬S2 are not jointly fo-satisfiable. The
sentence ¬S2 is equivalent to the following universal sentence, by DeMorgan's
Laws:

∀x ∀y (¬A(x, q) ∨ ¬A(x, y) ∨ ¬A(y, x))
The sentence S1 is not logically equivalent to a universal sentence, so we
must Skolemize it. First, note that it is equivalent to the prenex form:

∀x ∃y [A(x, q) ∨ (A(x, y) ∧ A(y, x))]

Skolemizing, we get the universal sentence,

∀x [A(x, q) ∨ (A(x, f(x)) ∧ A(f(x), x))]

Putting the quantifier-free part of this in conjunctive normal form gives us:

∀x [(A(x, q) ∨ A(x, f(x))) ∧ (A(x, q) ∨ A(f(x), x))]

This in turn is logically equivalent to the conjunction of the following two
sentences:

∀x [A(x, q) ∨ A(x, f(x))]
∀x [A(x, q) ∨ A(f(x), x)]

Next, we change variables so that no variable is used in two sentences, drop
the universal quantifiers, and form clauses from the results. This leaves us
with the following three clauses:

1. {A(x, q), A(x, f(x))}
2. {A(y, q), A(f(y), y)}
3. {¬A(z, q), ¬A(z, w), ¬A(w, z)}
Applying resolution yields the empty clause:

4. {A(q, f(q))}   resolvent of clauses 1, 3; substitution: q for w, x, z
5. {A(f(q), q)}   resolvent of clauses 2, 3; substitution: q for w, y, z
6. {¬A(q, f(q))}   resolvent of clauses 3, 5; substitution: f(q) for z, q for w
7. !   resolvent of clauses 4, 6; no substitution needed
Chapter 19
Completeness and
Incompleteness
G
odels Completeness
Theorem
G
odels Incompleteness
Theorem
542
Section 19.1
theories
T ⊨ S
Completeness Theorem for F
Compactness Theorem
Henkin method
1 Recall that the formal proof system F includes all the introduction and elimination rules, but not the Con procedures.
In Exercise 18.12, we illustrated this defect. We noted there, for example, that there are truth assignments that assign true to both the sentences
Cube(b) and ¬∃x Cube(x) (since from the point of view of propositional logic,
∃x Cube(x) is an atomic sentence unrelated to Cube(b)), whereas no first-order
structure can make both of these sentences true.
Henkin's method finds a clever way to isolate the exact gap between the
first-order validities and the tautologies by means of a set of fol sentences H.
In a sense that we will make precise, H captures exactly what the truth table
method misses about the quantifiers and identity.2 For example, H will contain
the sentence Cube(b) → ∃x Cube(x), thereby ruling out truth assignments like
those mentioned above that assign true to both Cube(b) and ¬∃x Cube(x).
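The role of this Henkin sentence can be checked by brute force over truth assignments; here is a minimal Python sketch (the variable names are ours):

```python
from itertools import product

# Treat Cube(b) and ∃x Cube(x) as propositional atoms p and q; the Henkin
# sentence Cube(b) → ∃x Cube(x) then becomes the truth function p → q.
defective = [
    (p, q)
    for p, q in product([False, True], repeat=2)
    if p and not q           # assigns true to both Cube(b) and ¬∃x Cube(x)
    and ((not p) or q)       # ...while still satisfying the Henkin sentence
]
print(defective)             # [] : no such assignment survives
```

Adding the Henkin sentence rules out exactly the defective assignments while leaving all others untouched.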
Here is an outline of our version of Henkin's proof.
witnessing constants
Elimination Theorem
The Elimination Theorem: The Henkin theory is weak enough, and the
formal system F strong enough, to allow us to prove the following (Theorem 4): Let p be any formal first-order proof whose premises are all either sentences of L or sentences from H, with a conclusion that is also a
sentence of L. We can eliminate the premises from H from this proof in
favor of uses of the quantifier rules. More precisely, there exists a formal
proof p′ whose premises are those premises of p that are sentences of L
and with the same conclusion as p.
Henkin construction
The Henkin Construction: On the other hand, the Henkin theory is strong
enough, and the notion of first-order structure wide enough, to allow us
to prove the following result (Theorem 19.5): for every truth assignment
h that assigns true to all the sentences of H, there is a first-order structure Mh that makes true exactly the sentences to which h assigns true.
2 This remark will be further illustrated by Exercises 19.3-19.5, 19.17 and 19.18, which
we strongly encourage you to do when you get to them. They will really help you understand
the whole proof.
final steps
Let us show how we can use these results to prove the Completeness
Theorem. Assume that T and S are all from the original language L
and that S is a first-order consequence of T. We want to prove that
T ⊢ S. By assumption, there can be no first-order structure in which all
of T ∪ {¬S} is true. By the Henkin Construction there can be no truth
assignment h which assigns true to all sentences in T ∪ H ∪ {¬S}; if
there were, then the first-order structure Mh would make T ∪ {¬S} true.
Hence S is a tautological consequence of T ∪ H. The Completeness
Theorem for propositional logic tells us there is a formal proof p of S
from T ∪ H. The Elimination Theorem tells us that using the quantifier
rules, we can transform p into a formal proof p′ of S from premises in
T. Hence, T ⊢ S, as desired.
The next few sections of this chapter are devoted to filling in the details
of this outline. We label the sections to match the names used in our outline.
Section 19.2
witnessing constant for P
You might wonder just how we can form all these new constant symbols.
How do we write them down and how do we make sure that distinct wffs
get distinct witnessing constants? Good question. There are various ways we
could arrange this. One is simply to use a single symbol c not in the language
K and have the new symbol be the expression c with the wff as a subscript.
Thus, for example, in our above list, the constant symbol c1 would really be
the symbol
c(Small(x)∧Cube(x))
This is a pretty awkward symbol to write down, but it at least shows us how
we could arrange things in principle.
The language K " consists of all the symbols of K plus all these new witnessing constants. Now that we have all these new constant symbols, we can
use them in wffs. For example, the language K " allows us to form sentences
like
Smaller(a, cBetween(x,a,b) )
But then we also have sentences like
∃x Smaller(x, cBetween(x,a,b))
so we would like to have a witnessing constant symbol subscripted by
Smaller(x, cBetween(x,a,b) )
Unfortunately, this wff, while in K′, is not in our original language K, so we
have not added a witnessing constant for it in forming K′.
Bummer. Well, all is not lost. What we have to do is to iterate this construction over and over again. Starting with a language L, we define an infinite
sequence of larger and larger languages
L0 ⊆ L1 ⊆ L2 ⊆ . . .
where L0 = L and Ln+1 = L′n. That is, the language Ln+1 results by applying
the above construction to the language Ln . Finally, the Henkin language LH
for L consists of all the symbols of Ln for any n = 0, 1, 2, 3, . . ..
Each witnessing constant cP is introduced at a certain stage n ≥ 1 of this
construction. Let us call that stage the date of birth of cP . When we come to
proving the Elimination Theorem it will be crucial to remember the following
fact, which is obvious from the construction of LH .
Lemma 1. (Date of Birth Lemma) Let n + 1 be the date of birth of cP . If Q
is any wff of the language Ln , then cP does not appear in Q.
Exercises
19.1
"
This exercise and its companion (Exercise 19.2) are intended to give you a better feel for why
we have to keep iterating the witnessing constant construction. It deals with the constants that
would turn out to be important if our original set T contained the sentence ∀x ∃y Larger(x, y).
Write out the witnessing constants for the following wffs, keeping track of their dates of birth.
The constant symbol a is taken from the original language L.
1. Larger(a, x)
2. Larger(c1 , x), where c1 is your constant from 1.
3. Larger(c2 , x), where c2 is your constant from 2.
4. Larger(c3 , x), where c3 is your constant from 3.
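The iteration in this exercise can be sketched in a few lines of Python (the bracketed rendering of the subscript and the function name are ours):

```python
def witness(wff):
    """Form the witnessing constant for a wff: c subscripted by the wff itself."""
    return f"c[{wff}]"

c = "a"                      # the constant a from the original language L
births = []
for stage in range(1, 5):
    wff = f"Larger({c}, x)"  # wff with free variable x, as in parts 1-4
    c = witness(wff)         # its witnessing constant, born at this stage
    births.append((stage, c))

print(births[0])             # (1, 'c[Larger(a, x)]')
```

Each stage's constant mentions the previous one, which is why the construction must be iterated: the new wffs are never in the language of the previous stage.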
Section 19.3
witnessing axioms
Definition The Henkin theory H consists of all sentences of the following five
forms, where c and d are any constants and P(x) is any formula (with exactly
one free variable) of the language LH:

H1: All Henkin witnessing axioms
∃x P(x) → P(cP(x))

H2: All sentences of the form
P(c) → ∃x P(x)

H3: All sentences of the form
¬∀x P(x) ↔ ∃x ¬P(x)

H4: All sentences of the form
c = c

H5: All sentences of the form
(P(c) ∧ c = d) → P(d)
connection to
quantifier rules
Notice that there is a parallel between these sentences of H and the quantifier and identity rules of F:

H1 corresponds roughly to ∃ Elim, in that both are justified by the
same intuition,
H2 corresponds to ∃ Intro,
H3 reduces ∀ to ∃,
H4 corresponds to = Intro, and
H5 corresponds to = Elim.
Just what this correspondence amounts to is a bit different in the various cases.
For example, the axioms of types H2-H5 are all first-order validities, while this
is not true of H1, of course. The witnessing axioms make substantive claims
about the interpretations of the witnessing constants. The following result,
while not needed in the proof of completeness, does explain why the rest of
the proof has a chance of working.
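The schematic character of H2 and H5 can be illustrated with string templates (a toy rendering; the function names and the one-slot template convention are ours):

```python
def h2_axiom(P, c):
    """H2 schema: P(c) → ∃x P(x), for a formula template P with one slot."""
    return f"{P.format(c)} → ∃x {P.format('x')}"

def h5_axiom(P, c, d):
    """H5 schema: (P(c) ∧ c = d) → P(d)."""
    return f"({P.format(c)} ∧ {c} = {d}) → {P.format(d)}"

print(h2_axiom("Cube({})", "b"))        # Cube(b) → ∃x Cube(x)
print(h5_axiom("Small({})", "c", "d"))  # (Small(c) ∧ c = d) → Small(d)
```

H contains every instance of each schema, one for each choice of formula and constants from LH.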
19.2
Write out the witnessing axioms associated with the wffs in Exercise 19.1.
"
The next three exercises are designed to help you understand how the theory H fills the gap between
tautological and first-order consequence. For these exercises we take L to be the blocks language and T
to consist of the following set of sentences:
T = {Cube(a), Small(a), ∀x (Cube(x) → Small(x)) → ∃y Dodec(y)}
19.3

19.4

19.5
19.6 Use Tarski's World to open Henkin's Sentences. Take the constants c and d as shorthand for
the witnessing constants cCube(x) and cDodec(x)∨Small(x), respectively.
1. Show that these sentences are all members of H. Identify the form of each axiom from
our definition of H.
2. By (1) and Proposition 3, any world in which c and d are not used as names can be
turned into a world where all these sentences are true. Open Henkin's World. Name some
blocks c and d in such a way that all the sentences are true. Submit this world.
19.7 Show that for every constant symbol c of LH there is a distinct witnessing constant d such
that c = d is a tautological consequence of H. [Hint: consider the wff c = x.]

19.8 Show that for every binary relation symbol R of L and all constants c, c′, d, and d′ of LH, the
following is a tautological consequence of H:
(R(c, d) ∧ c = c′ ∧ d = d′) → R(c′, d′)
[Hint: Notice that (R(c, d) ∧ c = c′) → R(c′, d) is one of the H5 axioms of H. So is
(R(c′, d) ∧ d = d′) → R(c′, d′). Show that the desired sentence is a tautological consequence
of these two sentences and hence a tautological consequence of H.]
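The tautological-consequence claim in this hint can be verified by enumerating truth values for the five atomic sentences involved (a sketch; the variable names are ours):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Atoms: R(c,d), c = c', d = d', R(c',d), R(c',d')
tautology = all(
    implies(
        implies(Rcd and ceq, Rc_d) and implies(Rc_d and deq, Rc_d2),   # the two H5 axioms
        implies(Rcd and ceq and deq, Rc_d2),                           # the desired sentence
    )
    for Rcd, ceq, deq, Rc_d, Rc_d2 in product([False, True], repeat=5)
)
print(tautology)   # True
```

From the propositional point of view each quantifier-free sentence here is built from five independent atoms, so 32 rows of a truth table settle the claim.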
19.9 Let T be a theory of L. Use Proposition 3 (but without using the Completeness Theorem or
the Elimination Theorem) to show that if a sentence S of L is a first-order consequence of
T ∪ H, then it is a first-order consequence of T alone.
Section 19.4
The proof of this result will take up this section. We break the proof down
into a number of lemmas.
Proposition 5. (Deduction Theorem) If T ∪ {P} ⊢ Q, then T ⊢ P → Q.
Deduction Theorem
The proof of this is very similar to the proof of Lemma 2 and is left as an
exercise. It is also illustrated by the following.
You try it
................................................................
1. Open Deduction Thm 1. This contains a formal first-order proof of the
following argument:

3. For the first step of the new proof, start a subproof with ∀x Dodec(x) as
premise.
premise.
"
4. Now using Copy and Paste, copy the entire proof (but not the premises)
from Deduction Thm 1 into the subproof you just created in Proof Deduction
Thm 1. After you paste the proof, verify it. Youll see that you need to
add some support citations to get all the steps to check out, but you can
easily do this.
"
5. End the subproof and use Intro to obtain the desired conclusion. Save
your completed proof as Proof Deduction Thm 1.
"
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Congratulations
The following is proven using the Deduction Theorem and modus ponens
repeatedly to get rid of each of the Pi . The details are left as an exercise.
Proposition 6. If T ∪ {P1, . . . , Pn} ⊢ Q and, for each i = 1, . . . , n, T ⊢ Pi,
then T ⊢ Q.
eliminating witnessing
axioms
eliminating other
members of H
¬∀x P(x) ↔ ∃x ¬P(x)
(P(c) ∧ c = d) → P(d)
c = c
Proof: The only one of these that is not quite obvious from the
rules of inference of F is the DeMorgan biconditional. We essentially
proved half of this biconditional on page 364, and gave you the other
half as Exercise 13.44.
We have now assembled the tools we need to prove the Elimination Theorem.
proof of Elimination Theorem
Exercises
19.10 If you skipped the You try it section, go back and do it now. Submit the file Proof Deduction
Thm 1.
Give formal proofs of the following arguments. Because these results are used in the proof of Completeness, do not use any of the Con rules in your proofs.
19.11
P → Q
¬P ∨ Q

19.12
(P ∨ Q) → R
P → R

19.13
(P ∨ Q) → R
Q → R
19.14 Prove the Deduction Theorem (Proposition 5). [Hint: The proof of this is very similar to the
proof of Lemma 2.]

19.15 Prove Proposition 6. [Hint: Use induction on n and the Deduction Theorem.]
19.16 Use Fitch to open Exercise 19.16. Here you will find a first-order proof of the following argument:
∀x (Cube(x) → Small(x))
∃x ∃y ¬(x = y)
Cube(b) → ∃x Small(x)
Using the method of Lemma 8, transform this proof into a proof of
∀x (Cube(x) → Small(x))
∃x ∃y ¬(x = y)
∀y (Cube(y) → ∃x Small(x))
Submit your proof as Proof 19.16.
19.17 Open Exercise 19.17. This file contains the following argument:
∀x (Cube(x) → Small(x))
∃x Cube(x)
∃x Cube(x) → Cube(c)
Small(c) → ∃x Small(x)
¬(Cube(c) → Small(c)) → ∃x ¬(Cube(x) → Small(x))
¬∀x (Cube(x) → Small(x)) ↔ ∃x ¬(Cube(x) → Small(x))
∃x Small(x)
First use Taut Con to show that the conclusion is a tautological consequence of the premises.
Having convinced yourself, delete this step and give a proof of the conclusion that uses only
the propositional rules.
19.18 Open Exercise 19.17 again. Take the constant c as shorthand for the witnessing constant cCube(x).
Take T to be the first two premises of this proof. We saw in Exercise 19.6 that the other
sentences are all members of H. The Elimination Theorem thus applies to show that you could
transform your proof from the preceding exercise into a proof from the first two premises, one
that does not need the remaining premises. Open Exercise 19.18 and give such a proof. [If you
were to actually transform your previous proof, using the method we gave, the result would be
a very long proof indeed. You'll be far better off giving a new, direct proof.]
Section 19.5
constructing Mh
a problem: identity
equivalence classes to
the rescue
definition of Mh
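The idea behind the domain of Mh can be sketched in a few lines: the constants are grouped into equivalence classes according to which identity sentences h makes true. (The constants and identity facts below are invented for illustration.)

```python
constants = ['a', 'b', 'c', 'd']
true_identities = {('a', 'b')}        # pretend h assigns true to a = b

def h_eq(x, y):
    return x == y or (x, y) in true_identities or (y, x) in true_identities

# Group constants into equivalence classes; this single pass is enough
# because h's identity facts are transitively closed (h satisfies the
# H4 and H5 axioms).
classes = []
for con in constants:
    for cls in classes:
        if h_eq(con, cls[0]):
            cls.append(con)
            break
    else:
        classes.append([con])

print(classes)   # [['a', 'b'], ['c'], ['d']]
```

The elements of Mh's domain are these classes [c], not the constants themselves, which is exactly what makes the identity sentences come out true in Mh.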
We now need to prove that Mh makes true all and only the sentences to
which h assigns true. That is, we need to show that for any sentence S of
LH, Mh ⊨ S if and only if h(S) = true. The natural way to prove this is by
induction on the complexity of the sentence S.
For the atomic case, we have basically built Mh to guarantee that our claim
holds. But there is one important thing we need to check. Suppose we have
distinct constants c and c′, and d and d′, where [c] = [c′] and [d] = [d′]. We need
to rule out the possibility that h(R(c, d)) = true and h(R(c′, d′)) = false.
For if this were the case, ⟨[c′], [d′]⟩ would be in the extension of R (since
⟨[c], [d]⟩ is, and these are the same), and consequently Mh would assign the
wrong value to the atomic sentence R(c′, d′)! But this situation is impossible,
as is shown by the following lemma.

Lemma 12. If c ≡ c′, d ≡ d′, and h(R(c, d)) = true, then h(R(c′, d′)) =
true.

Proof: By Exercise 19.8, the following is a tautological consequence
of the Henkin theory H:
(R(c, d) ∧ c = c′ ∧ d = d′) → R(c′, d′)
Since h assigns everything in H true, and it assigns true to each
conjunct of R(c, d) ∧ c = c′ ∧ d = d′, it must also assign true to
R(c′, d′).
This lemma assures us that our construction of Mh works for the atomic
sentences. That is, Mh will make an atomic sentence true if and only if h
assigns true to that atomic sentence. The proof of the Henkin Construction
Lemma will be completed by proving the full version of this result.
the crucial lemma
Notice that the complexity of a wff P(x) is the same as that of P(c).
(See, for example, Exercise 19.21.) With this definition of complexity,
we can prove the lemma by induction on the complexity of sentences.
As remarked above, the case where the complexity is 0 is true by the
way we defined the structure Mh.
Assume that the lemma holds for all sentences of complexity ≤ k and
let S have complexity k + 1. There are several cases to consider,
depending on the main connective or quantifier of S. We treat one
of the truth-functional cases, as these are all similar, and then both
of the quantifier cases.
Case 1. Suppose S is P ∨ Q. If Mh ⊨ S, then at least one of P or Q is
true. Assume that P is true. Since the complexity of S is k + 1, the
complexity of P is at most k, so by induction hypothesis, h(P) = true. But
then h(P ∨ Q) = true, as desired. The proof in the other direction
is similar.
Case 2. Suppose that S is ∃x P(x). We need to show that Mh ⊨
∃x P(x) if and only if h assigns the sentence true. Assume first that
the sentence is true in Mh . Then since every object in the domain is
We can define f([d]) to be the equivalence class [cf(d)=x] of the witnessing constant cf(d)=x. Since
∃x (f(d) = x) → f(d) = cf(d)=x
is in H, it is not hard to check that all the details of the proof then
work out pretty much without change.
This completes our filling in of the outline of the proof of the Completeness
Theorem.
Exercises
19.19 Use Tarski's World to open Henkin Construction. This file lists eight sentences. Let's suppose
that the predicates used in these sentences (Cube, Dodec, and Small) exhaust the predicates of
L. (In particular, we banish = to avoid the complications it caused in the proof of the Henkin
Construction Lemma.) Let h be any truth assignment that assigns true to all these sentences.
Describe the first-order structure Mh. (How many objects will it have? What will they be
called? What shape and size predicates will hold of them?) Use Tarski's World to build a world
that would be represented by this first-order structure. There will be many such. It should, of
course, make all the sentences in this list true. Submit your world.
19.20 Show that all sentences of the following forms are tautological consequences of H:
1. c = c
2. c = d → d = c
3. (c = d ∧ d = e) → c = e
19.21 What are the complexities of the following wffs, where complexity is measured as in the proof
of Lemma 13?

19.22 In the inductive proof of Lemma 13, Case 1 considered only one of the truth-functional
connectives, namely, ∨. Give an analogous proof that covers the case where S is of the form P ∧ Q.

19.23 In the inductive proof of Lemma 13, Case 3 considered only one direction of the biconditional.
Section 19.6
The Löwenheim-Skolem Theorem
the structure Mh
Löwenheim-Skolem Theorem
One of the most striking things about the proof of completeness is the nature of the first-order structure Mh. Whereas our original language may be
talking about physical objects, numbers, sets, what-have-you, the first-order
structure Mh that we construct by means of the Henkin construction has as
elements something quite different: equivalence classes of constant symbols.
This observation allows us to exploit the proof to establish something known
as the Löwenheim-Skolem Theorem for fol.
Recall from our discussion of infinite sets in Chapter 15 that there are
different sizes of infinite sets. The smallest infinite sets are those that have
the same size as the set of natural numbers, those that can be put in one-to-one correspondence with the natural numbers. A set is countable if it is finite
or is the same size as the set of natural numbers. Digging into the details of
the proof of completeness lets us prove the following important theorem, due
originally to the logicians Löwenheim and Skolem. They proved it before Gödel
proved the Completeness Theorem, using a very different method. Löwenheim
proved it for single sentences; Skolem proved it for countably infinite sets of
sentences.
Theorem (Löwenheim-Skolem Theorem) Let T be a set of sentences in a
countable language L. Then if T is satisfied by a first-order structure, it is
satisfied by one whose domain is countable.
resolution of paradox
lesson of paradox
Section 19.7
nonstandard models
x > 0
x > 1
x > 1 + 1
x > (1 + 1) + 1
⋮
Let T consist of all sentences of L that are true of the natural numbers. Let n be a new constant symbol and let S be the set consisting
of the following sentences:
n > 0
n > 1
n > 1 + 1
n > (1 + 1) + 1
⋮
what nonstandard
models show
intended domain of discourse using just axioms stated in the first-order language of arithmetic. With first-order axioms, we can't rule out the existence
of natural numbers (that is, members of the domain) that are infinitely far
from zero. The distinction between being finitely far from zero, which holds
of all the genuine natural numbers, and being infinitely far from zero, which
holds of elements like n from the proof, is not one that we can make in the
first-order language.
We can recast this result by considering what would happen if we added
to our language a predicate NatNum, with the intended meaning "is a natural number." If we did nothing to supplement the deductive system F, then
it would be impossible to add sufficient meaning postulates to capture the
meaning of this predicate or to prove all the consequences expressible using
the predicate. For example, if we take the set T′ = T ∪ S from the proof above,
then intuitively the sentence ¬NatNum(n) is a consequence of T′. There can,
however, be no proof of this in F.
What would happen if we added new rules to F involving the predicate
NatNum? Could we somehow strengthen F in some way that would allow us
to prove ¬NatNum(n) from T′? The answer is that if the strengthened proof
system allows only finite proofs and is sound with respect to the intended
structure, then our attempt is doomed to fail. Any proof of ¬NatNum(n) would
use only finitely many premises from T′. This finite subset is satisfiable in the
natural numbers: just assign n to a large enough number. Consequently, by the
soundness of the extended proof system, ¬NatNum(n) must not be provable
from this finite subset.
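The finite-satisfiability step can be sketched directly: any finite subset of S bounds n below by only finitely many numerals, so a large enough value for n works. (Numerals are represented by their values; the names are ours.)

```python
def satisfies(finite_bounds, n_value):
    """Does interpreting the constant n as n_value satisfy every n > k in the subset?"""
    return all(n_value > k for k in finite_bounds)

finite_subset = [0, 1, 2, 17]        # bounds mentioned in some finite subset of S
n_value = max(finite_subset) + 1     # assign n a large enough natural number

print(satisfies(finite_subset, n_value))   # True
```

No single choice of n_value works for all of S at once; that is exactly the gap between finite satisfiability and satisfiability that the Compactness Theorem bridges.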
While these observations are about the natural numbers, they show something very general about any language that implicitly or explicitly expresses
the concept of finiteness. For example, if the language of set theory is supplemented with a predicate with the intended meaning "is a finite set," the
Compactness Theorem can be used to show that there are first-order structures in which this predicate applies to infinite sets, no matter what meaning
postulates we specify for the new predicate.
For a more down-to-earth example, we could consider the first-order language for talking about family relations. If this language has a predicate meaning "is an ancestor of," then however we try to capture its meaning with axioms,
we will fail. Implicit in the concept ancestor is the requirement that there are
only finitely many intermediate relatives. But since there is no fixed, finite
limit to how distant an ancestor can be, the Compactness Theorem guarantees that there will be structures allowing infinitely distant ancestors.
These examples are explored in Exercises 19.30 and 19.31.
Exercises
19.28 Let T be a set of first-order sentences. Suppose that for any natural number n, there is a
structure whose domain is larger than n that satisfies T. Use the Compactness Theorem to
show that there is a structure with an infinite domain that satisfies T. [Hint: Consider the
sentences that say there are at least n things, for each n.]
19.29 Let L" be the language of Peano arithmetic augmented with the predicate NatNum, with the
"!
intended interpretation is a natural number. Let T be the set of sentences in this language that
are true of the natural numbers. Let S be as in the proof of the non-standard model theorem.
Show that NatNum(n) is not a first-order consequence of T S.
19.30 Suppose we add the monadic predicate Finite to the first-order language of set theory, where
this is meant to hold of all and only finite sets. Suppose that T consists of the axioms of
ZFC plus new axioms involving this predicate, insisting only that the axioms are true in the
intended universe of sets. Use the Compactness Theorem to show that there is a first-order
structure satisfying T and containing an element c which satisfies Finite(x) but has infinitely
many members. [Hint: Add to the language a constant symbol c and infinitely many constants
b1, b2, . . .. Form a theory S that says that the b's are all different and all members of c. Show
that T ∪ S is satisfiable.]
19.31 Use the Compactness Theorem to show that the first-order language with the binary predicates
Par(x, y) and Anc(x, y), meaning "parent of" and "ancestor of," respectively, is not axiomatizable.
That is, there is no set of meaning postulates, finite or infinite, which characterizes those first-order structures which represent logically possible circumstances. [Hint: The crucial point is
that a is an ancestor of b if and only if there is some finite chain linking a to b by the parent-of
relation, but it is logically possible for that chain to be arbitrarily long.]
Section 19.8
The Gödel Incompleteness Theorem
completeness vs.
incompleteness
that were true of the natural numbers. This was part of an ambitious project
that came to be known as Hilbert's Program, after its main proponent, David
Hilbert. By the early 1930s a great deal of progress had been made in Hilbert's
Program. All the known theorems about arithmetic had been shown to follow
from relatively simple axiomatizations like Peano arithmetic. Furthermore,
the logician Mojżesz Presburger had shown that any true sentence of the
language not mentioning multiplication could be proven from the relevant
Peano axioms.
Gödel's Incompleteness Theorem showed that this positive progress was misleading, and that in fact the goal of Hilbert's Program could never be accomplished. A special case of Gödel's theorem can be stated as follows:
Theorem (Gödel's Incompleteness Theorem for pa) Peano Arithmetic is not
formally complete.
The proof of this theorem, which we will describe below, shows that the
result applies far more broadly than just to Peano's axiomatization, or just to
the particular formal system F. In fact, it shows that no reasonable extension
of either of these will give you a formally complete theory of arithmetic, in a
sense of reasonable that can be made precise.
We'll try to give you a general idea of how the proof goes. A key insight
is that any system of symbols can be represented in a coding scheme like
Morse code, where a sequence of dots and dashes, or equivalently, 0s and 1s,
is used to represent any individual symbol of the system. With a carefully
designed coding system, any string of symbols can be represented by a string
of 0s and 1s. But we can think of such a sequence as denoting a number in
binary notation. Hence, we can use natural numbers to code strings of our
basic symbols. The first thing Gödel established was that all of the important
syntactic notions of first-order logic can be represented in the language of
Peano arithmetic. For example, the following predicates are representable:
Hilbert's Program
Gödel's Incompleteness Theorem
idea of proof
coding system
representability
Diagonal Lemma
prove all and only the true instances of these predicates. So if p is a proof of
S and n and m are their codes, then the formal version of the last sentence
on our list would actually be a first-order consequence of the Peano axioms.
A lot of careful work has to be done to show that these notions are representable in Peano arithmetic, work that is very similar to what you have
to do to implement a system like F on a computer. (Perhaps Gödel was the
world's first real hacker.) But it is possible, and fairly routine once you get
the hang of it.
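A toy version of the coding idea can be written in a few lines of Python (the symbol table and bit widths are ours, not Gödel's actual scheme):

```python
SYMBOLS = ['0', '1', '+', '*', '=', '(', ')', 'x']
BITS = 3                              # 3 bits suffice for 8 symbols

def code(string):
    n = 1                             # leading 1 keeps codes of different lengths distinct
    for ch in string:
        n = (n << BITS) | SYMBOLS.index(ch)
    return n

def decode(n):
    out = []
    while n > 1:
        out.append(SYMBOLS[n & (2**BITS - 1)])
        n >>= BITS
    return ''.join(reversed(out))

print(code('(1+1)=x'))                # a single natural number coding the string
print(decode(code('(1+1)=x')))        # (1+1)=x
```

Because coding and decoding are purely arithmetical operations, claims about formulas and proofs become claims about numbers, which is what makes representability in Peano arithmetic possible.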
Gödel's second key insight was that it is possible to get sentences that
express facts about themselves, relative to the coding scheme. This is known
as the Diagonal Lemma. This lemma states that for any wff P(x) with a single
free variable, it is possible to find a number n that codes the sentence P(n)
asserting that n satisfies P(x). In other words, P(n) can be thought of as
asserting
This sentence has the property expressed by P.
Depending on what property P expresses, some of these will be true and some
false. For example, the formal versions of
This sentence is a well-formed formula
and
This sentence has no free variables
are true, while the formal versions of
This sentence is a proof
and
This sentence is an axiom of Peano arithmetic
are false.
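The trick behind the Diagonal Lemma is the same one that makes quines work: apply a predicate template to the quotation of its own text. A Python sketch (the template wording is ours):

```python
def diagonalize(template):
    """Apply a one-slot predicate template to the quotation of its own text."""
    return template.format(repr(template))

p = 'The string {} is a well-formed template'
print(diagonalize(p))
# The string 'The string {} is a well-formed template' is a well-formed template
```

The resulting sentence talks about the very string it was built from, just as P(n) talks about the number n that codes it.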
Now consider the formal version of the following sentence, whose existence
the Diagonal Lemma guarantees:
This sentence is not provable from the axioms of Peano arithmetic.
the Gödel sentence G
This sentence (the one above, not this one) is called G, after Gödel. Let's show
that G is true but not provable in pa.
Proof: To show that G is true, we give an indirect proof. Suppose G
is not true. Then given what it claims, it must be provable in pa. But
since the axioms of pa are true and F is sound, anything provable
applying theorem to
other theories
As long as the predicate "n encodes an axiom of T"
is representable in T, Gödel's whole argument can be repeated, generating
yet another true sentence that is unprovable in the extended system.
Gödel's incompleteness result is one of the most important theorems in
logic, one whose consequences are still being explored today. We urge the interested reader to explore it further. Detailed proofs of Gödel's theorem can be
found in many advanced textbooks in logic, including Smullyan's Gödel's Incompleteness Theorems, Enderton's Mathematical Introduction to Logic, and
Boolos and Jeffrey's Computability and Logic.
Exercises
19.32 Gödel's Incompleteness Theorem was inspired by the famous Liar's Paradox, the sentence "This
sentence is false."
19.33 (Undefinability of Truth) Show that the following predicate cannot be expressed in the language
of arithmetic:
n is the code of a true sentence.
This is a theorem due to Alfred Tarski. [Hint: Assume it were expressible. Apply the Diagonal
Lemma to obtain a sentence which says of itself that it is not true.]
19.34 (Löb's Paradox) Consider the sentence "If this conditional is true, then logic is the most fascinating subject in the world." Assume that the sentence makes an unambiguous claim.
1. Use the method of conditional proof (and modus ponens) to establish the claim.
2. Use modus ponens to conclude that logic is the most fascinating subject in the world.
Surely a good way to end a logic course.
Summary of Rules
Propositional rules (FT)

Conjunction Introduction (∧ Intro): From P1, . . . , Pn, infer P1 ∧ . . . ∧ Pn.

Conjunction Elimination (∧ Elim): From P1 ∧ . . . ∧ Pi ∧ . . . ∧ Pn, infer Pi.

Disjunction Introduction (∨ Intro): From Pi, infer P1 ∨ . . . ∨ Pi ∨ . . . ∨ Pn.

Disjunction Elimination (∨ Elim): From P1 ∨ . . . ∨ Pn, together with subproofs deriving S from each of P1, . . . , Pn, infer S.

Negation Introduction (¬ Intro): From a subproof deriving ⊥ from P, infer ¬P.

Negation Elimination (¬ Elim): From ¬¬P, infer P.

⊥ Introduction (⊥ Intro): From P and ¬P, infer ⊥.

⊥ Elimination (⊥ Elim): From ⊥, infer P.

Conditional Introduction (→ Intro): From a subproof deriving Q from P, infer P → Q.

Conditional Elimination (→ Elim): From P → Q and P, infer Q.

Biconditional Introduction (↔ Intro): From a subproof deriving Q from P and a subproof deriving P from Q, infer P ↔ Q.

Biconditional Elimination (↔ Elim): From P ↔ Q (or Q ↔ P) and P, infer Q.

Reiteration (Reit): From P, infer P.

First-order rules (F)

Identity Introduction (= Intro): Infer n = n.

Identity Elimination (= Elim): From P(n) and n = m, infer P(m).

Universal Elimination (∀ Elim): From ∀x S(x), infer S(c).

Universal Introduction (∀ Intro): From a subproof that introduces a new constant c and derives P(c), infer ∀x P(x), where c does not occur outside the subproof where it is introduced.

General Conditional Proof (∀ Intro): From a subproof that introduces a new constant c, assumes P(c), and derives Q(c), infer ∀x (P(x) → Q(x)), where c does not occur outside the subproof where it is introduced.

Existential Introduction (∃ Intro): From S(c), infer ∃x S(x).

Existential Elimination (∃ Elim): From ∃x S(x) and a subproof that assumes S(c) for a new constant c and derives Q, infer Q, where c does not occur outside the subproof where it is introduced.

Induction rules

Peano Induction: From P(0) and a subproof that introduces a new constant n, assumes P(n), and derives P(s(n)), infer ∀x P(x), where n does not occur outside the subproof where it is introduced.

Strong Induction: From a subproof that introduces a new constant n, assumes ∀x (x < n → P(x)), and derives P(n), infer ∀x P(x), where n does not occur outside the subproof where it is introduced.
Glossary
Ambiguity: A feature of natural languages that makes it possible for a single
sentence to have two or more meanings. For example, Max is happy or
Claire is happy and Carl is happy, can be used to claim that either Max
is happy or both Claire and Carl are happy, or it can be used to claim
that at least one of Max and Claire is happy and that Carl is happy.
Ambiguity can also arise from words that have two meanings, as in the
case of puns. Fol does not allow for ambiguity.
Analytical consequence: A sentence S is an analytical consequence of some
premises if S follows from the premises in virtue of the meanings of the
truth-functional connectives, identity, quantifiers, and predicate symbols
appearing in S and the premises.
Antecedent: The antecedent of a conditional is its first component clause.
In P → Q, P is the antecedent and Q is the consequent.
Argument: The word argument is ambiguous in logic.
1. One kind of argument consists of a sequence of statements in which
one (the conclusion) is supposed to follow from or be supported by
the others (the premises).
2. Another use of argument refers to the term(s) taken by a predicate in an atomic wff. In the atomic wff LeftOf(x, a), x and a are
the arguments of the binary predicate LeftOf.
Arity: The arity of a predicate indicates the number of arguments (in the
second sense of the word) it takes. A predicate with arity of one is called
unary. A predicate with an arity of two is called binary. It's possible for
a predicate to have any arity, so we can talk about 6-ary or even 113-ary
predicates.
Atomic sentences: Atomic sentences are the most basic sentences of fol,
those formed by a predicate followed by the right number (see arity) of
names (or complex terms, if the language contains function symbols).
Atomic sentences in fol correspond to the simplest sentences of English.
Axiom: An axiom is a proposition (or claim) that is accepted as true about
some domain and used to establish other truths about that domain.
Boolean connective (Boolean operator): The logical connectives conjunction, disjunction, and negation allow us to form complex claims
from simpler claims and are known as the Boolean connectives after the
logician George Boole. Conjunction corresponds to the English word
and, disjunction to or, and negation corresponds to the phrase it is not
the case that. (See also Truth-functional connective.)
Bound variable: A bound variable is an instance of a variable occurring
within the scope of a quantifier used with the same variable. For example, in ∀x P(x, y) the variable x is bound, but y is unbound or free.
Claim: Claims are made by people using declarative sentences. Sometimes
claims are called propositions.
Completeness: Completeness is an overworked word in logic.
1. A formal system of deduction is said to be complete if, roughly
speaking, every valid argument has a proof in the formal system.
This sense is discussed in Section 8.3 and elsewhere in the text.
(Compare with Soundness.)
2. A set of sentences of fol is said to be formally complete if for
every sentence of the language, either it or its negation can be
proven from the set, using the rules of the given formal system.
Completeness, in this sense, is discussed in Section 19.8.
3. A set of truth-functional connectives is said to be truth-functionally
complete if every truth-functional connective can be defined using
only connectives in the given set. Truth-functional completeness is
discussed in Section 7.4.
Conclusion: The conclusion of an argument is the statement that is meant to
follow from the other statements, or premises. In most formal systems,
the conclusion comes after the premises, but in natural language, things
are more subtle.
Conditional: The term conditional refers to a wide class of constructions
in English including if . . . then . . . , . . . because . . . , . . . unless . . . , and the
like, that express some kind of conditional relationship between the two
parts. Only some of these constructions are truth functional and can be
represented by means of the material conditional of fol. (See Material
conditional.)
Logical equivalence: Two sentences are logically equivalent if they have the
same truth values in all possible circumstances.
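For truth-functional sentences, this can be checked mechanically by comparing the two sentences on every assignment of truth values to their atoms. A minimal Python sketch (the helper name `equivalent` and the lambda encoding are illustrative, not from the text):

```python
from itertools import product

def equivalent(f, g, num_atoms):
    """Two truth-functional sentences are logically equivalent iff they
    agree on every assignment of truth values to their atoms."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=num_atoms))

# De Morgan: ¬(P ∧ Q) is equivalent to ¬P ∨ ¬Q
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))  # True
```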
Logical necessity: See Logical truth.
Logical possibility: We say that a sentence or claim is logically possible if
there is no logical reason it cannot be true, i.e., if there is a possible
circumstance in which it is true.
Logical truth: A logical truth is a sentence that is a logical consequence of
any set of premises. That is, no matter what the premises may be, it
is impossible for the conclusion to be false. This is also called a logical
necessity.
Logical validity: An argument is logically valid if the conclusion is a logical
consequence of the premises.
Material conditional: A truth-functional version of the conditional if. . .
then . . . . The material conditional P → Q is false if P is true and Q is
false, but otherwise is true. (See Conditional.)
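The single false row can be seen by tabulating the connective. A brief Python sketch (the function name `material_conditional` is a hypothetical helper, not from the text):

```python
def material_conditional(p, q):
    # P → Q is false only when P is true and Q is false
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(p, q, material_conditional(p, q))
```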
Modus ponens: The Latin name for the rule that allows us to infer Q from
P and P → Q. Also known as → Elimination.
Names: See Individual constants.
Necessary condition: A necessary condition for a statement S is a condition
that must hold in order for S to obtain. For example, if you must pass
the final to pass the course, then your passing the final is a necessary
condition for your passing the course. Compare with Sufficient condition.
Negation normal form (NNF): A sentence of fol is in negation normal
form if all occurrences of negation apply directly to atomic sentences.
For example, (¬A ∧ ¬B) is in NNF whereas ¬(A ∨ B) is not in NNF.
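Any truth-functional sentence built from ∧, ∨, and ¬ can be driven into NNF by applying double negation and the DeMorgan laws until negations sit on atoms. A sketch in Python (the tuple encoding and the name `to_nnf` are illustrative assumptions):

```python
# Sentences as nested tuples: ("not", S), ("and", S, T), ("or", S, T),
# or an atom given as a string.
def to_nnf(s):
    """Push negations inward (double negation + De Morgan) until they
    apply only to atomic sentences."""
    if isinstance(s, str):
        return s
    op = s[0]
    if op in ("and", "or"):
        return (op, to_nnf(s[1]), to_nnf(s[2]))
    # op == "not"
    inner = s[1]
    if isinstance(inner, str):
        return s                          # negated atom: already NNF
    if inner[0] == "not":
        return to_nnf(inner[1])           # ¬¬S  ⇒  S
    dual = "or" if inner[0] == "and" else "and"
    return (dual, to_nnf(("not", inner[1])), to_nnf(("not", inner[2])))

print(to_nnf(("not", ("or", "A", "B"))))
# ('and', ('not', 'A'), ('not', 'B'))
```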
Numerical quantifier: Numerical quantifiers are those quantifiers used to
express numerical claims, for example, at least two, exactly one, no more
than five, etc.
Predicate: Predicates are used to express properties of objects or relations
between objects. Larger and Cube are examples of predicates in the
blocks language.
Prefix notation: In prefix notation, the predicate or relation symbol precedes the terms denoting objects in the relation. Larger(a, b) is in prefix
notation. Compare with Infix notation.
Premise: A premise of an argument is one of the statements meant to support (lead us to accept) the conclusion of the argument.
Prenex normal form: A wff of fol is in prenex normal form if it contains
no quantifiers, or all the quantifiers are out in front.
Proof: A proof is a step-by-step demonstration that one statement (the conclusion) follows logically from some others (the premises). A formal proof
is a proof given in a formal system of deduction; an informal proof is
generally given in English, without the benefit of a formal system.
Proof by cases: A proof by cases consists in proving some statement S from
a disjunction by proving S from each disjunct.
Proof by contradiction: To prove S by contradiction, we assume S and
prove a contradiction. In other words, we assume the negation of what
we wish to prove and show that this assumption leads to a contradiction.
Proof by induction: See Inductive proof.
Proof of non-consequence: In a proof of non-consequence, we show that
an argument is invalid by finding a counterexample. That is, to show
that a sentence S is not a consequence of some given premises, we have to
show that it is possible for the premises to be true in some circumstance
where S is false.
Proposition: Something that is either true or false. Also called a claim.
Quantifier: In English, a quantified expression is a noun phrase using a
determiner such as every, some, three, etc. Quantifiers are the elements
of fol that allow us to express quantified expressions like every cube.
There are only two quantifiers in fol, the universal quantifier (∀) and
the existential quantifier (∃). From these two, we can, however, express
more complex quantified expressions.
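Over a finite domain, the two quantifiers behave like Python's `all` and `any`. A small sketch (the blocks-world domain and the predicates `cube` and `small` are hypothetical examples, not from the text):

```python
# A tiny, made-up blocks-world domain.
domain = [{"name": "a", "shape": "cube", "size": "small"},
          {"name": "b", "shape": "tet", "size": "large"},
          {"name": "c", "shape": "cube", "size": "large"}]

cube = lambda x: x["shape"] == "cube"
small = lambda x: x["size"] == "small"

# ∀x (Cube(x) → Small(x)) : "every cube is small"
every_cube_is_small = all((not cube(x)) or small(x) for x in domain)
# ∃x (Cube(x) ∧ Small(x)) : "some cube is small"
some_cube_is_small = any(cube(x) and small(x) for x in domain)
print(every_cube_is_small, some_cube_is_small)  # False True
```

Note the standard pairing: a universal claim restricts with →, while an existential claim conjoins with ∧.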
Reductio ad absurdum: See Proof by contradiction.
Truth table: Truth tables show the way in which the truth value of a sentence built up using truth-functional connectives depends on the truth
values of the sentence's components.
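A truth table can be generated by enumerating every assignment to the atoms. A minimal Python sketch (the helper `truth_table` and the example sentence are illustrative, not from the text):

```python
from itertools import product

def truth_table(sentence, atoms):
    """Return one row per assignment of truth values to the atoms,
    paired with the resulting truth value of the complex sentence."""
    rows = []
    for vals in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        rows.append((vals, sentence(env)))
    return rows

# (P ∨ Q) ∧ ¬P : true exactly when P is false and Q is true
table = truth_table(lambda e: (e["P"] or e["Q"]) and not e["P"], ["P", "Q"])
for vals, result in table:
    print(vals, result)
```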
Truth value: The truth value of a statement in some circumstances is true
if the statement is true in those circumstances, otherwise its truth value
is false. This is an informal notion but also has rigorous counterparts
in propositional logic, where circumstances are modeled by truth assignments, and in first-order logic where circumstances are modeled by
first-order structures.
Universal quantifier (∀): The universal quantifier is used to express universal claims. It corresponds, roughly, to English expressions such as
everything, all things, each thing, etc. (See also Quantifiers.)
Union (∪): The operation on sets a and b that returns the set a ∪ b whose
members are those objects in either a or b or both.
Validity: Validity is used in two ways in logic:
1. Validity as a property of arguments: An argument is valid if the
conclusion must be true in any circumstance in which the premises
are true. (See also Logical validity and Logical consequence.)
2. Validity as a property of sentences: A first-order sentence is said
to be valid if it is logically true simply in virtue of the meanings of
its connectives, quantifiers, and identity. (See First-order validity.)
Variable: Variables are expressions of fol that function somewhat like pronouns in English. They are like individual constants in that they may
be the arguments of predicates, but unlike constants, they can be bound
by quantifiers. Generally letters from the end of the alphabet, x, y, z,
etc., are used for variables.
Variable assignment: A function assigning objects to some or all of the
variables of a first-order language. This notion is used in defining truth
of sentences in a first-order structure.
Well-formed formula (wff ): Wffs are the grammatical expressions of
fol. They are defined inductively. First, an atomic wff is any n-ary
predicate followed by n terms. Complex wffs are constructed using connectives and quantifiers. The rules for constructing complex wffs are
found on page 233. Wffs may have free variables. Sentences of fol are
wffs with no free variables.
File Index
Abelard's Sentences, 185
Ackermann's World, 76, 83, 281
Alan Robinson's Sentences, 508
Allan's Sentences, 244
Ana Con 1, 60
Anderson's First World, 314
Anderson's Second World, 315
Aristotle's Sentences, 243
Arnault's Sentences, 305
Austin's Sentences, 27
Axioms 1, 288
Bernays' Sentences, 83
Bernstein's Sentences, 236
Between Sentences, 206
Bill's Argument, 64
Bolzano's World, 35, 83, 187, 276, 281, 309, 311, 381
Boole's Sentences, 80
Boole's World, 35, 70, 83, 86, 88
Boolos' Sentences, 188
Bozo's Sentences, 236
Brouwer's Sentences, 70
Buridan's Sentences, 306
Identity 1, 58
Intersection 1, 427
Jon Russell's Sentences, 324
Kleene's Sentences, 79
Kleene's World, 79
König's World, 301, 304
Löwenheim's Sentences, 302
Leibniz's Sentences, 305
Leibniz's World, 35, 83, 187, 236, 241, 252, 305, 308
Lestrade's Sentences, 26
Lestrade's World, 26
Maigret's Sentences, 245
Maigret's World, 245
Malcev's World, 319
Mary Ellen's Sentences, 515
Mary Ellen's World, 515
Max's Sentences, 74
Mixed Sentences, 304
Montague's Sentences, 308
Montague's World, 186, 250, 381
More CNF Sentences, 126
Mostowski's World, 382
Negation 1, 158
Negation 2, 160
Negation 3, 161
Negation 4, 163
Null Quantification Sentences, 285
Ockham's Sentences, 409, 410
Ockham's World, 409, 410
Padoa's Sentences, 371
Peano's Sentences, 301
Peano's World, 248, 301, 309, 380
Peirce's Sentences, 241
Universal 2, 354
Venn's World, 187, 251, 435, 438
Weiner's Sentences, 105
Whitehead's Sentences, 377
Wittgenstein's Sentences, 24
Wittgenstein's World, 24, 27, 35, 44, 69, 73, 77, 80, 88, 185, 187, 251, 311, 312
World Counterexample 1, 65
World DNF 1, 126
World Mixed 1, 304
World Submit Me 1, 8, 9
Zorn's Sentences, 241
Exercise Index
General Index
∀ (universal quantifier), 230, 232
∀ Elim, 351
∀ Intro, 352
↔ (biconditional), 183–189
↔ Elim, 210
↔ Intro, 210
∧ (conjunction), 71–74
∧ Elim, 144
∧ Intro, 145
⊥ (contradiction), 138, 157–159
⊥ Elim, 161
⊥ Intro, 157
∨ (disjunction), 74–77
∨ Elim, 151
∨ Intro, 149
= (equality), 25, 37
= Elim, 50, 51, 56
= Intro, 50, 51, 55
∃ (existential quantifier), 230, 232–233
∃ Elim, 357
∃ Intro, 357
→ (implication), 180–183
→ Elim, 207
→ Intro, 207
¬ (negation), 68–70
¬ Elim, 156, 162
¬ Intro, 157
⇔ (equivalent), 83
≠ (not equal), 68
∅ (empty set), 420
∩ (set intersection), 424–429
∈ (set membership), 37
℘ (powerset), 439–441
⊆ (subset), 422–424
∪ (set union), 424–429
9T , 486
F, 54, 143, 351
FT , 216, 486
H, 548
M, 514
| b | (Cantorian size), 450
g , 517
# (exercise difficulty), 7, 8
! (submit exercise) , 6, 8
" (turn-in exercise), 6, 8
!|" (submit/turn-in exercise), 6, 8
absolute complement, 429, 452
Aczel, Peter, 453
addition, 130
afa, 453
affirming the consequent, 204, 213
alternative notation, 40, 66, 90, 198, 257–258
ambig-wffs, 456
ambiguity, 4, 79, 316
and context, 313
and inference, 317
Ana Con, 60, 61, 114, 116, 160,
274, 288
analytic consequence, 60
antecedent of a conditional, 180
strengthening, 204, 213
anti-foundation axiom, 453
antisymmetry, 432
appropriate variable assignment, 517
counterexample, 5, 15, 63
first-order, 271
Date of Birth Lemma, 546
Deduction Theorem, 551
deductive system, 54
definite descriptions
Russellian analysis, 388
Strawson's objection, 390
DeMorgan laws, 81, 82–83, 106, 184, 321
and first-order equivalence, 277–281
for quantifiers, 281, 365
determinate property, 22, 24
determiners, 229, 373
adding to fol, 392–397
anti-persistent, 401
binary, 406
conservativity, 398
general form, 393
logic of, 398–403
monotone, 375, 399–401
decreasing, 400
increasing, 399
test for decreasing, 400
test for increasing, 399
persistent, 401–403
reducible, 394
semantics of, 395
discharged assumption, 166
disjunction, 67, 74
elimination, 150
exclusive, 74, 75
inclusive, 74
introduction, 130, 149
disjunctive normal form (DNF), 122–126
distributive laws, 123
of ∧ over ∨, 123, 125
partial
defined on an object, 437
partial on a domain, 437
range of, 437
total on a domain, 437
function symbols, 31–34, 231, 317, 530, 532
and quantification, 253–255
completeness for languages with, 560
of arithmetic, 38
functional calculus, 2
game, 67, 77
Henkin-Hintikka, 67
rules, 78, 240
for ∧, 72
for ↔, 184
for ¬, 68
for →, 180
for ∨, 75
for quantifiers, 239–240
winning strategy, 78
general conditional proof, 332–336, 338, 341, 351, 454
Gödel
Incompleteness Theorem, 568–571
numbering, 569
Gödel, Kurt, 542, 562
Grade Grinder, 5–11, 13
Grice, H. P., 190
Grice's cancellability test, 190
Henkin
construction, 544, 556–561
Construction Lemma, 556
theory, 544, 548
Henkin, Leon, 67
Henkin-Hintikka game, 67
Hintikka, Jaakko, 67
logical
consequence, 2, 45, 521, 542
dependence, 51, 105
truth, 94, 521
validity, 42
logically equivalent wffs, 279
Łoś, J., 409
Löwenheim, Leopold, 562
Löwenheim-Skolem Theorem, 562
lower predicate calculus, 2
Mary Ellen's World, 512, 514, 515
material biconditional, 183, 199
material conditional, 180
usefulness of, 199
meaning, 84
meaning postulates, 287
method of counterexample, 5, 15,
63
method of proof, 5, 46, 129, 199,
338
by cases, 132–135
by contradiction, 137–139
conditional, 200
general, 334
mixed quantifiers, 302, 305, 338
modus ponens, 199, 200, 207
modus tollens, 204, 213
monkey principle, 189
multiple quantifiers, 298
names, 19, 20
in English, 20
introducing new, 29
natural deduction, 143
natural language, 4
necessary
tt-, 103
tw-, 103
condition, 181
logically, 103, 138
negation, 67, 68
double, 68, 83
elimination, 156
introduction, 137, 156
normal form, 119–123
scope of, 80
non-logical truth in all worlds, 372
nonstandard models
of arithmetic, 564
of set theory, 563
normal form
conjunctive (CNF), 122–126, 321, 495
disjunctive (DNF), 122–126
negation, 119–123
prenex, 284, 320–324
Skolem, 531
notation, for sets
brace, 418
list, 420
noun phrase, 241
complex, 245
plural, 407
quantified, 307, 313
existential, 245, 250
universal, 246, 251
null quantification, 283
numerical claim, 375, 383
ordered pair, 429, 430
pa, 468
palindrome, 460, 462
paraphrasing English, 309
parentheses, 79, 80, 101, 148, 150,
236
alternatives to, 90
partition, 436
Peano, Giuseppe, 3, 468
Peano arithmetic, 468–472
Peano Induction, 474, 577
Peirce, Charles Sanders, 3
persistence
through expansion, 409
under growth, 410
Phred, 262–263
plural noun phrases, 407
Polish notation, 91, 92, 198
possessives, 531
possible
tt-, 103
tw-, 103
logically, 95, 138
powerset, 439
axiom, 447
predicate, 19, 21, 25
vs function symbol, 32
argument, 21
arity, 21
extension of, 431, 513, 514
introducing new, 28
symbols, 20
prefix notation, 23, 33, 91
premises, 41, 44
inconsistent, 141
prenex form, 284, 298, 320–324, 536
presuppositions, 289
and determiners, 389
and the axiomatic method, 290
prime number, 245, 341, 342
Prolog, 3
proof, 46, 49, 103
advantages over truth tables,
128
by cases, 132–135
by contradiction, 137–139
by induction
basis, 461
induction step, 461
conditional, 200
first-order, 259
game rules for, 239240
generalized
adding to fol, 392397
and vagueness, 396
logic of, 398403
semantics of, 515516
mixed, 302, 305, 338
multiple, 298
second-order, 259
semantics for, 237241
range of a function, 437
rational inquiry, 1–2, 4
rational number, 132
reasoning
invalid, 1
valid, 1
reductio ad absurdum, 137–139
reference columns, 96
referent of a name, 513, 514
reflexivity, 52, 432
of identity, 50
Reit, 56
reiteration, 56, 152
relations
antisymmetric, 432
asymmetric, 432
binary, 431
equivalence, 433–434
functional, 437
inverse, 433
irreflexive, 432
modeling in set theory, 431–434
reflexive, 432
transitive, 432
relative complement, 452
replacement method, 272
resolution clause, 504
tense, 407
terms, 32, 231
complex, 32, 34
of first-order arithmetic, 39
ternary connective, 197
theorem, 48, 195
theory
formally complete, 488
formally consistent, 487
transitivity, 432
of <, 52
of →, 204, 213
of identity, 51
translation, 14, 28, 84
and meaning, 84
and mixed quantifiers, 298–300, 317
and paraphrase, 309
extra exercises, 324–327
of
a, 233
all, 232
an, 233
and, 71
any, 232, 246
at least n, 375
at least one, 233
at most n, 375
both, 388
but, 71
each, 232, 246
every, 232, 241, 246
everything, 230
exactly n, 375
few, 395
if, 181
if and only if, 184
iff, 184
just in case, 184
many, 395
moreover, 71
most, 395
neither, 388
neither. . . nor, 75
no, 241, 246
non-, 68
not, 68
only if, 182
or, 74
provided, 181
some, 233, 241
something, 230
the, 388
un-, 68
unless, 182
of complex noun phrases, 245–247
of conditionals, 181
step-by-step method, 307
using function symbols, 317
truth, 516
assignment, 484
conditions, 84, 190
in a structure, 520, 522
in all worlds, 372
logical, 93, 94, 103, 183, 184,
269
non-logical, 372
undefinability of, 572
value, 24, 67
truth table, 67
disadvantages of, 128
for ∧, 72
for ↔, 184
for ¬, 68
for →, 180
for ∨, 75
joint, 106, 110
method, 497
modeling in set theory, 484–485
number of rows, 95
reference columns, 96
truth-functional
completeness, 192, 195
connective, 67
binary, 192
semantics for, 68, 72, 75, 180,
184
ternary, 197
form, 263
algorithm, 263–266
tt-satisfiable, 485
twin prime conjecture, 342
unary function symbol, 317
undefinability of truth, 572
unification, 533
algorithm, 534
uninterpreted language, 15
union, 425, 426
axiom, 447, 452
uniqueness
claim, 384
quantifier, ∃!, 384
universal
elimination, 351
generalization, 334, 335, 341,
351
instantiation, 330, 351
introduction, 334, 351, 352
noun phrase, 246, 251
quantifier, 232
sentence, 536
set, 442, 452
wff, 462
unordered pair, 420
axiom, 446
uses of fol, 34
vacuously true generalization, 247
vagueness, 21
valid
argument, 141
first-order, 521
proof, 46
step, 128, 129, 328–330
validity
first-order, 267–275
and logical truth, 273
variable, 230231
assignment, 516, 522
appropriate, 517
empty, 517
bound, 285
free, 234
limited number of, 382
reusing, 383
unbound, 234
von Neumann, John, 451
weakening the consequent, 204, 213
Web address, 16
well-formed formula (wff), 233, 233–236
atomic, 231, 233
existential, 462
universal, 462
wellfounded sets, 462
winning strategy for playing the game,
78
witnessing constant, 544
adding to a language, 545–546
for a wff, 545
You try it exercises, 7, 13
Zermelo, Ernst, 450
Zermelo-Fraenkel set theory, 445
zfc, 445, 563