Symbolic Logic: Syntax, Semantics, and Proof
David W. Agler
Rowman & Littlefield Publishers, Inc.
Lanham • Boulder • New York • Toronto • Plymouth, UK
All rights reserved. No part of this book may be reproduced in any form or by any electronic or
mechanical means, including information storage and retrieval systems, without written permission
from the publisher, except by a reviewer who may quote passages in a review.
BC38.A35 2013
511.3—dc23 2012026304
™ The paper used in this publication meets the minimum requirements of American
National Standard for Information Sciences—Permanence of Paper for Printed Library
Materials, ANSI/NISO Z39.48-1992.
Acknowledgments
Introduction
I.1 What Is Symbolic Logic?
I.2 Why Study Logic?
I.3 How Do I Study Logic?
I.4 How Is the Book Structured?
3 Truth Tables
3.1 Valuations (Truth-Value Assignments)
3.2 Truth Tables for Propositions
3.3 Truth Table Analysis of Propositions
3.4 Truth Table Analysis of Sets of Propositions
3.5 The Material Conditional Explained (Optional)
3.6 Truth Table Analysis of Arguments
3.7 Short Truth Table Test for Invalidity
4 Truth Trees
4.1 Truth-Tree Setup and Basics in Decomposition
4.2 Truth-Tree Decomposition Rules
4.3 The Remaining Decomposition Rules
4.4 Basic Strategies
4.5 Truth-Tree Walk Through
4.6 Logical Properties of Truth Trees
Appendix
Further Reading
Index
About the Author
Acknowledgments
Introduction
I.1 What Is Symbolic Logic?
I.2 Why Study Logic?
I.3 How Do I Study Logic?
I.4 How Is the Book Structured?
3 Truth Tables
3.1 Valuations (Truth-Value Assignments)
Exercise Set #1
Solutions to Starred Exercises in Exercise Set #1
3.2 Truth Tables for Propositions
Exercise Set #2
Solutions to Starred Exercises in Exercise Set #2
3.3 Truth Table Analysis of Propositions
3.3.1 Tautology
3.3.2 Contradiction
3.3.3 Contingency
Exercise Set #3
Solutions to Starred Exercises in Exercise Set #3
4 Truth Trees
4.1 Truth-Tree Setup and Basics in Decomposition
4.1.1 Truth-Tree Setup
4.1.2 Truth-Tree Decomposition
Exercise Set #1
4.2 Truth-Tree Decomposition Rules
4.2.1 Conjunction Decomposition Rule (∧D)
4.2.2 Disjunction Decomposition Rule (∨D)
4.2.3 Decompose Propositions under Every Descending Open Branch
Exercise Set #2
Solutions to Starred Exercises in Exercise Set #2
4.3 The Remaining Decomposition Rules
4.3.1 Conditional Decomposition (→D)
4.3.2 Biconditional Decomposition (↔D)
4.3.3 Negated Conjunction Decomposition (¬∧D)
4.3.4 Negated Disjunction Decomposition (¬∨D)
4.3.5 Negated Conditional Decomposition (¬→D)
4.3.6 Negated Biconditional Decomposition (¬↔D)
4.3.7 Double Negation Decomposition (¬¬D)
4.3.8 Truth-Tree Rules
Exercise Set #3
Solutions to Starred Exercises in Exercise Set #3
4.4 Basic Strategies
4.4.1 Strategic Rule 1
4.4.2 Strategic Rule 2
Appendix
Further Reading
Index
About the Author
In writing this book, I am very grateful to have had the support of my colleagues and friends. At the Pennsylvania State University, I received comments and encouragement about the text from a variety of graduate students and faculty: Kate Miffitt, Stuart Selber, Ayesha Abdullah, Elif Yavnik, Ryan Pollock, David Selzer, Lindsey Stewart, Deniz Durmus, Elizabeth Troisi, Cameron O’Mara, and Ronke Onke. I owe special thanks to Mark Fisher and Emily Grosholz, without whom this book would not be possible.
In addition, I owe thanks to my students for catching a number of typos, for influencing the style in which this book is written, and for their feedback on various sections. These pages began as my student notes, which transformed into lecture notes, then into handouts distributed in summer 2009, then into a course packet used in the fall of 2009; finally I used them as a textbook in the summer and fall 2010 and spring 2011 semesters. Later, a number of my colleagues used this book in the spring, summer, and fall 2011 and spring 2012 semesters. Some notable students include Isaac Bishop, Kristin Nuss, Karintha Parker, Sarah Mack, Amanda Wise, Meghan Barnett, Alexander McCormack, and Kevin Bogle. I owe special thanks to Courtney Pruitt, who provided me with a number of her solutions for exercises, and Robert Early, who pointed out a number of typos in a draft of the book.
I.1 What Is Symbolic Logic?

What is logic? Logic is a science that aims to identify principles for good and bad reasoning. As such, logic is a prescriptive study in that one of its goals is to tell you how you ought to reason. It is not a descriptive science in that it does not investigate the physical and psychological mechanisms of reasoning. That is, it does not describe how you do, in fact, reason. Thus, logic lays down certain ideals for how reasoning should occur.
Symbolic logic is a branch of logic that represents how we ought to reason by using a formal language consisting of abstract symbols. These abstract symbols, their method of combination, and their meanings (interpretations) provide a precise, widely applicable, and highly efficient language in which to reason.
I.2 Why Study Logic?

This is a good question, and the answer depends a lot upon who you are and what you want to do with your life. Here are a variety of answers to that question, not all of which may be relevant to you:
(1) Circuit design: Logical languages are helpful in the simplification of electrical switch/circuit design. This means that you can manipulate the formal language of logic to simplify the wiring in your home, a computer, or an automobile.

(2) Other formal languages: If you are interested in other formal or computer languages (e.g., HTML, Python), some components of symbolic logic overlap with these languages. This means that learning symbolic logic can supplement your existing knowledge or serve as a stepping-stone to these other languages.

(3) Rule-following behavior: Some of the procedures in learning symbolic logic require that you pay close attention to precise rules. Learning how to use these rules is a transferable skill. This means that if you can master the many precise rules of symbolic logic, then you should be able to take this skill and transfer it to something else you want to do in life (e.g., law, law enforcement, journalism, etc.).

(4) Problem-solving and analytic skills: To some extent, this text will teach you how to argue better by giving you a method for drawing out what is entailed by certain propositions. The various rules of inference you will learn map onto uncontroversial ways of arguing, and this has application for problem solving in general. If you are planning on taking an entrance exam for college, law school, graduate school, and so on, many students claim that logic offers one way, among a variety of others, to solve problems on these exams.

(5) Study in philosophy: Many introductory courses in logic are taught in the philosophy department, and so it is important to emphasize the importance and use of logic in philosophy. There are a variety of ways to show why logic matters to philosophy, but here are two simple ones. First, the precise syntax and semantics of logic can be used as a philosophical tool for clarifying a controversial or unclear claim, and the precise set of inference rules allows for a canonical way of arguing about certain philosophical topics (e.g., God’s existence). Second, some philosophers have argued that the syntax and semantics of natural language (e.g., English) are sloppy and ought to be revised to meet the standards of precision of our logical system. This has certain implications for what we count as meaningful and rational discourse.
I.3 How Do I Study Logic?

In order to gain competency in logic, you will need to read the textbook, do a variety of exercises (of varying difficulty) throughout the book, and check your answers against the solutions that are provided. Learning symbolic logic is much like learning a new language, becoming skilled at a particular athletic sport, or learning to play a musical instrument: competency requires regular and engaged practice. Insofar as competency requires regular practice, you will want to read the text and do the exercises in short, spaced-out increments rather than in one long block. Just as weightlifters, tennis players, or saxophone players don’t become great by practicing fifteen hours straight for one day, you won’t retain much of what you learn by solving a hundred proofs in one sitting. In addition, insofar as competency requires engaged practice, it is important that you use a pencil and paper (or Word document) to actually write down your solutions to various exercises. Just listening to great singers doesn’t make you a great singer. Being good at various logical operations requires hard work.
I.4 How Is the Book Structured?

The book is structured in three main parts. Part I begins with a discussion of a number of central concepts (e.g., argument, validity) in logic as they are used in everyday English. Part II articulates a system of symbolic logic known as propositional logic. The formal structure, semantics, proof system, and various ways of testing for logical properties are articulated. Part III investigates a second, more powerful system of logic known as predicate logic. The formal structure, semantics, proof system, and testing procedures are also investigated.

As there are many ways to learn symbolic logic (or teach a symbolic logic course) without reading every chapter, three different routes are provided in table I.1.
Propositions, Arguments, and Logical Properties
What is logic? Logic is a science that aims to identify principles for good and bad reasoning. As such, logic is a prescriptive study in that one of its goals is to tell you how you ought to reason. It is not a descriptive science in that it does not investigate the physical and psychological mechanisms of reasoning. That is, it does not describe how you do, in fact, reason. Two concepts central to logic are propositions and arguments. Propositions are the bearers of truth and falsity, while an argument is a series of propositions divided into those that are premises (or assumptions) and those that are conclusions.

There are three major goals for this chapter: (1) to define the concept of a proposition, (2) to distinguish arguments from other arrangements of propositions, such as narratives, and (3) to acquire a basic understanding of deductive validity.
1.1 Propositions
Typically, in order to say something true or false, you must utter a sentence. Thus,
if Victor says, ‘John is tall,’ then Victor has said something true if and only if (iff)
John is tall, and Victor has said something false if and only if John is not tall. Other
examples include the following:
Each of the above sentences is either true or false. However, not all sentences are
either true or false. For example, questions like ‘How tall are you?’ cannot be true or
false, commands like ‘Close the door!’ cannot be true or false, and exclamations like
‘Oh my!’ cannot be true or false. Typically, although not always, it is only declarative
sentences that express something that can be true or false.
The branch of logic that will be considered here is concerned with sentences that
express contents that can be true or false. The content expressed by sentences that can
be true or false is called a proposition.
(1) Yes.
In looking at (1), we don’t know what ‘yes’ is a response to, we don’t know what
question it is an answer to, and without this information, we cannot really say that (1)
expresses a proposition. However, given a broader context, for example, a conversa-
tion, we might say that (1) does express a proposition. For instance, consider the
conversation below:
In the above example, notice that we understand Victor to express the proposition I ate lunch even though he does not fully articulate this proposition by saying, ‘I ate lunch.’ Instead, we understand Victor’s one-word utterance ‘yes’ to express a complete proposition, but only because a certain amount of missing information is filled in by the context. Such a proposition would be true if and only if Victor ate lunch, and it would be false if and only if Victor did not eat lunch. In sum, not every language expression will express a proposition on its own; sometimes we need to look at the context in which the sentence is uttered to determine what proposition is being expressed.
Finally, a single sentence uttered in conversation or in an argument can express
different propositions depending upon whether we analyze what a sentence means or
what we think a speaker means in uttering a particular sentence. That is, our focus can
be upon what a sentence says (literal meaning) or on what a speaker means in uttering
the sentence (speaker meaning). Consider the following example:
In the above example, what Victor literally says is that John definitely attracts attention, but what Victor means is John is not a good dancer; he is wild, and this causes people to notice. Literal meaning and speaker meaning are two different types of meaning. To put the difference somewhat roughly, literal meaning focuses on what the individual words of the sentence mean and how these words are put together to form larger units of meaning. You can think of the literal meaning as the meaning of a sentence in isolation from the context in which it is uttered (though this is somewhat inaccurate). In contrast, speaker meaning tends to require language users to know not only what the speaker says but also something about what the speaker intends to get across to his listeners. This textbook focuses on the literal meaning of sentences.
To summarize, one key element in symbolic logic is the proposition. A proposition is a sentence (or some abstract meaning expressed by a sentence) that is capable of being true or false. While it is generally complete sentences that express propositions, not all sentences express propositions, since commands, questions, and exclamations are not capable of being true or false; meanwhile, some one-word utterances, in context, do express propositions. Finally, a single sentence can express multiple propositions depending upon whether we interpret its literal meaning or what the speaker means in uttering the sentence.
1.2 Arguments
A second key concept in logic is the notion of an argument. In this section, the concept
of an argument is defined, and arguments are distinguished from nonarguments.
The premises of an argument are propositions that are claimed to be true. In the
above example, (1) and (2) are claimed to be true. The conclusion of an argument
is the proposition or propositions claimed to follow from the premises. In the above
example, (3) is claimed to follow from premises (1) and (2).
Later in the text, you will find some arguments that do not have premises. These
arguments instead start from an assumption. Assumptions are propositions that are not
claimed to be true but instead are supposed to be true for the purpose of argument.
Below is an example of an argument that does not involve a premise but instead starts
with an assumption (in order to indicate assumptions, they are indented).
(1) Arguments have argument indicators (words like therefore, in conclusion, etc.).
(2) The conclusion of an argument is claimed to follow from the premises/assumptions.
Argument Indicators: therefore; so; in conclusion; consequently; it implies; I infer that; it follows that; for the reason that; inasmuch as; ergo; since; hence; thus; we deduce that; we can conclude that
If John is a crooked lawyer, then he will hide evidence. John is a crooked lawyer.
Therefore, John will hide evidence.
In the above example, the argument indicator therefore indicates that John will hide evidence is the conclusion and follows from the other two propositions.
In the standard organization of arguments, the argument begins by laying out all of
the premises or assumptions, then provides an argument indicator to signal that the
conclusion is the next proposition in the series, and finally presents the conclusion of
the argument.
Here is another example:
Ryan is either strong or weak. If Ryan is strong, then he can lift 200 lbs. Ryan can
lift 200 lbs. Thus, Ryan is strong.
Notice how the argument begins with a set of propositions that are the premises, then has the argument indicator thus, and ends with a proposition that follows from the premises (the conclusion).
We can conclude that John is the murderer. For John was at the scene of the crime
with a bloody glove. And if John was at the scene of the crime with a bloody glove,
then he is the murderer.
In the above example, the argument begins with the argument indicator, which is
followed by the conclusion, and the following two sentences are premises.
Imagine that you see a brown bag on a table. You reach inside and pull out a single black bean. You reach in again and pull out another black bean. You do this repeatedly until you have one hundred black beans in front of you. You might conclude that the next bean you pull out is black.
The above is a type of inductive argument. The premises provide some degree of
support for the conclusion but certainly do not guarantee the truth of the conclusion.
We are tempted to say that because all of the beans pulled from the bag thus far have
been black, the next bean will probably (but not necessarily) be black.
Consider another example:
Imagine that there is a closed brown bag on a table and some black beans next to
the bag. You are puzzled by where the beans came from and reason as follows:
The above is a type of abductive argument. In the above case, while the truth of the
premises does not guarantee the truth of the conclusion or even provide direct support
for the conclusion, the conclusion would (perhaps best) explain the premises.
Finally, one last example:
If John is a crooked lawyer, then he will hide evidence. John is a crooked lawyer.
Therefore, John will hide evidence.
The above is a type of deductively valid argument. In the above case, the truth of
the premises guarantees the truth of the conclusion. That is, it is necessarily the case
that, if the premises are true, then the conclusion is true. In other words, it is logically
impossible for the premises to be true and the conclusion to be false.
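This notion of logical impossibility can be checked mechanically for arguments built from propositional connectives, a preview of the truth-table method developed in chapter 3. The sketch below (an illustration of the idea, not the book's official method; the variable names are my own) enumerates every assignment of truth values to the two simple propositions in the crooked-lawyer argument and confirms that no assignment makes both premises true and the conclusion false.

```python
from itertools import product

# C = "John is a crooked lawyer", H = "John will hide evidence".
# Premise 1: if C then H (read here as a material conditional).
# Premise 2: C.  Conclusion: H.
counterexamples = []
for C, H in product([True, False], repeat=2):
    premise_1 = (not C) or H
    premise_2 = C
    conclusion = H
    # A counterexample row has all premises true and the conclusion false.
    if premise_1 and premise_2 and not conclusion:
        counterexamples.append((C, H))

print(counterexamples)  # -> []  (no counterexample row: the argument is valid)
```

Since no row makes the premises true and the conclusion false, asserting the premises while denying the conclusion would be contradictory, which is exactly what deductive validity requires.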
In all of the above examples of arguments, one proposition (the conclusion) follows
from a set of premises/assumptions. This sense of following from differs importantly
from other ways in which propositions are arranged. For example, in contrast to
arguments where a proposition follows from another proposition, propositions can
be ordered in a temporal arrangement or narrative arrangement, as in the case of a
chronological list, a work of fiction, a witness’s description of a brutal crime, or even
a grocery list.
To illustrate, the following passage is not an argument:
It was a sunny August afternoon when John walked to the store. On his way there,
he saw a little, white rabbit hopping in a meadow. He smiled and wondered about
the rabbit on the rest of his walk.
Not only are there no argument indicators in the above passage, but no proposition follows from any other. While we might be able to order these propositions, the ordering would be chronological or temporal. Here is another example:
A mist is rising slowly from the fields and casting an opaque veil over everything within eyesight. Lighted up by the moon, the mist gives the impression at one moment of a calm, boundless sea, at the next of an immense white wall. The air is damp and chilly. Morning is still far off. A step from the bye-road, which runs along the edge of the forest, a little fire is gleaming. A dead body, covered from head to foot with new white linen, is lying under a young oak-tree. (Anton Chekhov)
While a number of sentences in the above passage express propositions, there are no argument indicators, and no sentence follows from the others in an inductive, abductive, or deductive sense. The above passage does not express an argument but rather provides a description of a particular scene.
To review, an argument is a series of propositions in which a certain proposition (a conclusion) is represented as following from a set of premises or assumptions. Arguments can be identified by (1) the presence of argument indicators and (2) the fact that the conclusion follows from premises/assumptions in a way distinct from a temporal or narrative ordering.
Exercise Set #1
First, in elementary symbolic logic, validity only applies to arguments. That is, it
is a property that applies to arguments and nothing else. It is inappropriate to say that
propositions or sets of propositions that do not form an argument are valid. Consider
the following two passages:
(1) It was a sunny August afternoon when John walked to the store. On his way there,
he saw a little, white rabbit hopping in a meadow. He smiled and wondered about
the rabbit on the rest of his walk.
(2) Everyone with a vivid imagination has the mind of a child. When John walked to
the store, he had a number of highly unique, unusual, and colorful thoughts about
a rabbit he saw. Therefore, John has the mind of a child.
For the purpose of the above example, suppose that John does not live in Dallas or
Philadelphia. In fact, suppose that John lives in San Francisco. Despite the fact that
all of the propositions above are false, the conclusion still follows from the premises.
To see this more clearly, consider the following procedure:
First, assume that (1) and (2) are jointly true. That is, assume that (1) is true and
that (2) is true.
Second, now that you have assumed (1) and (2) are both true, given the truth of (1)
and (2), is it possible for (3) to be false? That is, if (1) and (2) are both true, can
(3) be false?
The answer to the final question is no. It is necessarily the case that if the premises are true, then the conclusion is true. When an argument is such that it is impossible for its premises to be true and its conclusion false, the argument is deductively valid.
To get a clearer hold of this notion, we turn to the definition of deductive validity. There are two ways of defining it. First, an argument is deductively valid if and only if it is necessarily the case that, on the assumption that the premises are true, the conclusion is true. Note that this does not mean that the premises are (in fact) true. It only means that it is necessary that if the premises are true, then the conclusion is true. The second formulation is as follows: an argument is deductively valid if and only if it is logically impossible for the premises to be true and the conclusion to be false.
Validity An argument is deductively valid if and only if it is necessarily the case that if the premises are true, then the conclusion is true. That is, an argument is deductively valid if and only if it is logically impossible for its premises/assumptions to be true and its conclusion to be false.
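For arguments whose premises and conclusion are built from propositional connectives, this definition suggests a brute-force procedure: search every assignment of truth values and look for one that makes all the premises true and the conclusion false. The Python sketch below is my own illustration of that idea (the function name and example arguments are not from the text); chapter 3 develops the official truth-table method.

```python
from itertools import product

def is_valid(premises, conclusion, letters):
    """Return True iff no assignment of truth values to the sentence
    letters makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(letters)):
        row = dict(zip(letters, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # counterexample: true premises, false conclusion
    return True

# A disjunctive-syllogism-style argument: A or B; not A; therefore B.
premises = [lambda r: r["A"] or r["B"], lambda r: not r["A"]]
conclusion = lambda r: r["B"]
print(is_valid(premises, conclusion, ["A", "B"]))  # -> True

# Affirming the consequent: if A then B; B; therefore A.
premises2 = [lambda r: (not r["A"]) or r["B"], lambda r: r["B"]]
print(is_valid(premises2, lambda r: r["A"], ["A", "B"]))  # -> False
```

The second example shows how the search finds a counterexample (A false, B true), which is precisely the row the definition rules out for valid arguments.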
To get a clearer understanding of deductive validity, let’s focus on the second formulation. This formulation says that a deductively valid argument is one where the following two conditions are jointly impossible:
What does it mean to say that these two conditions are impossible? To say that
something is logically impossible is to say that the state of affairs it proposes involves
a logical contradiction.
A proposition is a logical contradiction if and only if, no matter how the world is,
no matter what the facts, the proposition is always false.
(1) to (3) are logical contradictions. No matter how we imagine the world, no matter
what the circumstances are, (1) to (3) are always false. Since the state of affairs they
propose involves a contradiction, each one of these is logically impossible. That is,
under no situation, circumstance, or way the world could be can John be two different
heights, can Toronto be in Canada and not in Canada, or can Frank be the murderer
and not be the murderer.
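The definition of a logical contradiction can likewise be tested by enumeration: a proposition is a contradiction just in case it comes out false on every truth-value assignment. A minimal sketch (my own illustration, using the Toronto example from above):

```python
from itertools import product

def is_contradiction(prop, n_letters):
    """Return True iff prop comes out false under every truth-value
    assignment to its sentence letters."""
    return all(not prop(*vals)
               for vals in product([True, False], repeat=n_letters))

# 'Toronto is in Canada and Toronto is not in Canada': P and not-P.
print(is_contradiction(lambda p: p and not p, 1))  # -> True
# 'Toronto is in Canada' alone is contingent, not a contradiction:
print(is_contradiction(lambda p: p, 1))            # -> False
```

No way of setting the truth value of P makes P and not-P true, which is what it means to say the proposition is false no matter how the world is.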
Returning now to the definition of a deductively valid argument, to say that an argument is deductively valid is to say that it would be impossible for the argument’s premises to be true and its conclusion to be false. In other words, we would be uttering something contradictory (always false) if we were to say that a deductively valid argument’s premises/assumptions were true and its conclusion was false.
Using the two conditions above, we can determine whether the argument is valid
by asking the following question:
Is it logically impossible for the premises to be true (condition 1) and the conclusion
to be false (condition 2)?
This is a difficult question to answer for all cases, so let’s consider a more step-by-step method for determining validity. We will call this the negative test for validity. The negative test works as follows: first, ask whether it is possible for all of the premises to be true at the same time. Second, assuming that the premises are all true, ask whether it is possible for the conclusion to be false. If the conclusion can be false while the premises are true, the argument is invalid; if it cannot, the argument is valid.
To see how the negative test works, consider the following argument:
In order to determine whether the above argument is valid, the first step is to look at
the premises and to ask yourself whether it is possible for (1) and (2) to be true. That
is, can both (1) and (2) be true at the same time? The answer is yes. Even though there
might be some immortal man living on this planet, it seems that there is nothing that
would make it impossible for both to be true at the same time.
The second step is to consider, given that the premises are assumed true, whether it is possible for the conclusion to be false. In other words, under the assumption that all men are mortal and Barack Obama is a man, is it possible for Barack Obama is mortal to be false? The answer is no, since it is logically impossible for (1) and (2) to be true and (3) to be false. According to the negative test, the argument is deductively valid.
Since an argument is valid if and only if it is impossible for the premises to be true and the conclusion to be false, a deductively valid argument can have any of the following: true premises and a true conclusion, false premises and a true conclusion, or false premises and a false conclusion. The only combination a valid argument cannot have is true premises and a false conclusion.
In order to further clarify the notions of validity and invalidity, it will be helpful
to look at some concrete examples using the negative test. Consider the following
argument:
Step 1 says to ask whether it is possible for all of the premises to be true. The answer is yes; even though (1) is in fact false, we can imagine that (1) and (2) are true. According to the negative test, if the answer to the first step is yes, then we need to move to step 2. Step 2 asks the following: given that (1) and (2) are assumed true, can the conclusion be false? The answer is no; if all men are immortal and Barack Obama is a man, then it is impossible for Barack Obama is immortal to be false. Therefore, the above argument is valid.
Is it possible for all of the premises to be true? Yes; even though (1) is in fact false,
we can imagine that (1) and (2) are both true. Given that (1) and (2) are assumed true,
can the conclusion be false? No; if all men are rational, and some men are mortal,
then it is impossible for Some mortals are rational to be false. Therefore, the above
argument is valid.
Consider the next argument, which has true premises and a true conclusion.
Is it possible for all of the premises to be true? Yes; (1) and (2) are both true. Given that (1) and (2) are assumed true, can the conclusion be false? Yes; even if we assume that some horses are domesticated and all Clydesdales are horses, it is logically possible for All Clydesdales are domesticated to be false, for we can imagine a wild Clydesdale. While all Clydesdales are in fact domesticated, nothing about the premises requires (3) to be true. Since it is possible for the premises to be true and the conclusion to be false, the argument is invalid.
Consider the next argument where there is a false conclusion.
Notice that in the above example, the conclusion is unrelated to the premises. Is it
possible for all of the premises to be true? Yes; (1) and (2) are both true. Given that (1)
and (2) are assumed true, can the conclusion be false? Yes; in fact, (3) is false. Since
it is possible for the premises to be true and the conclusion to be false, the argument
is invalid.
Finally, consider an example of an argument where all of the premises are false, but
the conclusion is true.
Notice that (1) and (2) are both false, and (3) is true. Notice that it is necessarily the
case that if (1) and (2) are assumed true, then (3) is true. That is, the above argument
is valid.
Deductive arguments can be either valid or invalid, and if they are valid, they can be either sound or unsound. An argument is sound if and only if it is both valid and all of its premises are true.
Sound An argument is sound if and only if it is valid and all of its premises are true.
An argument is not sound (or unsound) in either of two cases: (1) if an argument
is invalid, then it is not sound; (2) if an argument is valid but has at least one false
premise, then it is not sound.
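Soundness, unlike validity, is not a purely formal property: whether the premises are actually true must come from outside logic. A minimal sketch (my own illustration, not the book's notation) of how the two unsoundness cases fall out of the definition:

```python
def is_sound(valid, premises_actually_true):
    """An argument is sound iff it is valid AND all of its premises are
    in fact true. The second conjunct is not a matter of logic: the
    actual truth values must be supplied by empirical investigation."""
    return valid and all(premises_actually_true)

# Case 1: an invalid argument is not sound, whatever its premises.
print(is_sound(False, [True, True]))   # -> False
# Case 2: a valid argument with a false premise is not sound.
print(is_sound(True, [False, True]))   # -> False
# A valid argument whose premises are all true is sound.
print(is_sound(True, [True, True]))    # -> True
```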
While the determination of whether an argument is valid or invalid falls within the scope of logic, the determination of the truth or falsity of the premises often falls outside logic. The reason is that the truth or falsity of a large number of propositions cannot be determined by their form. That is, if a premise is contingent (its truth or falsity depends upon the facts of the world), then its logical form does not tell us whether the premise is true or false, and while the argument may be valid, we will not be able to determine whether it is sound without empirical investigation.
1.4 Summary
In this introductory chapter, the first goal has been to define the concept of a proposition. The second goal has been to separate the concept of an argument from narrative and other discursive forms involving sentences and propositions. The third goal has been to acquire a basic understanding of deductive validity. In a later chapter, deductive validity, along with other logical properties, will be defined more rigorously by using a more precise logical language. It is to this language that we now turn.
End-of-Chapter Exercises
A. Logical possibility of the premises. Identify the premises of the following arguments and determine whether it is logically possible for the premises to be true.
1. * You should only believe in something that is backed by science. Belief
in God is not backed by science. Consequently, you should not believe
in God.
2. Drinking coffee or tea stains your teeth. You should never do anything to
stain your teeth. Therefore, you should not drink coffee or tea.
3. * Some athletes are drug users. Some drug users are athletes. Thus, we can
conclude that some people are both athletes and drug users.
4. Democrats think we should raise taxes. Republicans think we should
lower them. Hence, we should lower taxes.
5. * People can doubt many things, but you cannot doubt God exists. John is
a person. John doubts that God exists. Therefore, John is a bad person.
6. All politicians are crooks. John is a politician, but he is not a crook. Thus,
everyone should vote for John.
7. * We can conclude that in order to become an elite distance runner, it is
necessary that you learn to mid-foot strike when running. Elite distance
runners tend to mid-foot strike when running. Amateur distance runners
land on the heel of their feet when running.
8. Frank cooked a delicious pizza. Liz ate the pizza and got food poisoning.
Ergo, Frank’s delicious pizza caused Liz to get food poisoning.
9. John’s fingerprints were on the murder weapon. John was not the murderer. Therefore, John is innocent of any crime.
10. Alcohol is a dangerous substance. Marijuana is a dangerous substance.
Marijuana is illegal, but alcohol is legal. For that reason, alcohol should
be made illegal.
11. Alcohol is a dangerous substance. Marijuana is a dangerous substance.
Marijuana is illegal, but alcohol is legal. For that reason, marijuana should
be made legal.
12. John prayed to God for a new bike. John didn’t get the bike. God exists. It
follows that God was not listening to John’s prayers.
13. There is good reason to conclude that God does not like John. First, John
prayed to God for a new bike; second, John didn’t get the bike.
14. Studies show that people who get one hug a day are happier, more produc-
tive people than people who get less than one hug a day. Studies show that
people who get more than ten hugs a day are unhappier and less productive
than people who get one and only one hug a day. For that reason, I infer
that everyone should try to hug another person at least once a day.
15. Studies show that people who get more than one hug a day are happier,
more productive people than people who get less than one hug a day. Stud-
ies show that people who get more than ten hugs a day are unhappier and
less productive than people who do not get at least one hug a day. There-
fore, it is advisable to hug as many people as you can as much as you can.
B. Validity or invalidity. Using the negative test, determine whether the following
arguments are valid or invalid:
1. * All men are mortal. Socrates is a man. Therefore, Socrates is a mortal.
2. All fish are in the sea. Frank is a fish. Therefore, Frank is in the sea.
3. * Some monsters are friendly. Frank is a monster. Therefore, Frank is
friendly.
4. Some men are smokers. Some men ride bikes. Therefore, some men
smoke and ride bikes.
5. * God is good. God is great. Therefore God is good and great.
6. John is a nice person. Sarah is a nice person. Therefore, John and Sarah
are nice people.
7. * John loves Sarah. Sarah loves John. Therefore, John and Sarah are married.
8. John loves Sarah. Sarah loves John. Therefore, John and Sarah love each
other.
9. Democrats think that taxes should be raised, and Democrats are always
right. Therefore, taxes should be raised.
10. Republicans think that taxes should be lowered, and Republicans are al-
ways right. Therefore, taxes should be lowered.
11. Murder is always wrong and should be illegal. Abortion is murder. There-
fore, abortion is wrong and should be illegal.
12. Murder is always wrong and should be illegal. Abortion is not murder.
Therefore, abortion is not wrong and should not be illegal.
13. Smoking causes cancer, which raises health-care costs. We should never
do anything that raises health-care costs. We should never smoke.
14. Smoking causes cancer, which raises health-care costs. It is sometimes ac-
ceptable to do things that raise health-care costs. It is acceptable to smoke.
15. The government should not create any law that interferes with a person’s
basic human rights. Passing a law that makes smoking illegal interferes
with a person’s basic human rights. Therefore, the government should not
create a law that makes smoking illegal.
C. Conceptual questions. Answer the following questions about valid arguments:
1. * Is it possible for a valid argument to be sound?
2. Is it possible for a sound argument to be invalid?
3. * Is it possible for an argument with false premises to be sound?
4. Is it possible for an argument with false premises to be valid?
5. * If an argument has two premises, and these premises cannot both be true,
is the argument valid or invalid? Justify your answer.
Solutions to Starred Exercises in C. Conceptual Questions
1. * Yes. It is possible for a valid argument to be sound, provided the argument
has true premises.
3. * No. It is not possible for an argument with false premises to be sound
because all of the premises in a sound argument must be true.
Definitions
In the Introduction to this book, logic was defined as a science that aims to identify
principles for good and bad reasoning. Symbolic logic was defined as a branch of logic
that represents how we ought to reason through the use of a formal language. In this
chapter, you will be introduced to just such a logical language. This language is called
the language of propositional logic (or PL for short). The major goals of this chapter
are the following:
Many propositions in English are built up from one or more propositions. For ex-
ample, consider the following three propositions:
(1) John went to the store.
(2) Liz went to the game.
(3) John went to the store, and Liz went to the game.
Notice that (3) is built up from (1) and (2) by placing and between them. The term
and connects the sentence John went to the store to the sentence Liz went to the game
to form the complex proposition in (3). A number of other terms are used to generate
complex propositions from other propositions. For example,
Terms like and, or, if . . ., then. . ., and if and only if are called propositional connec-
tives. These are terms that connect propositions to create more complex propositions.
Since the terms above do not connect propositions to form complex propositions,
it would be misleading to call them propositional connectives. Instead, we call terms
similar to the above that work (or operate) on single propositions to form more com-
plex propositions, as well as the propositional connectives, propositional operators.
Propositional A propositional operator is a term (e.g., and, or, it is not the case that)
operator that operates on propositions to create more complex propositions.
These propositions do not contain terms like and, or, and it is not the case that that
work on propositions to create more complex propositions.
In contrast to atomic propositions there are complex propositions. These are propo-
sitions that contain at least one truth-functional operator.
Thus, the lightening-color function takes a particular color as an input (e.g., brown)
and yields a lighter version of that color as output (e.g., light brown).
Input Output
(colors) Lightening-Color Function (colors)
grey —————————► light grey
brown —————————► light brown
In the case of the lightening-color function, the output color is entirely determined
by the input color. Here is a slightly more complicated function that takes two colored
items (red or blue) as input and produces an emotion (happy or sad) as output:
Color-emotion function = df. If the color-value input of both of the colored items is
blue, then the output is happy. If the color-value input of either of the colored items
is red, then the output is sad.
In the above example, suppose that someone’s emotions are determined accord-
ing to the color-emotion function, and suppose that we present that person with two
different pieces of clothing, a blue shirt and a blue pair of pants. The color-emotion
function says that if the color-value input of both items is blue, then this individual
will be happy. Alternatively, if we hand him or her a pair of blue pants and a red shirt,
the color-emotion function says that if the color-value input of either of the items is
red, then the individual will be sad.1
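The color-emotion function can be sketched as ordinary code. A minimal sketch in Python (the function name and the use of strings for colors and emotions are our own choices, not from the text):

```python
def color_emotion(item1: str, item2: str) -> str:
    """Map the colors of two items ("blue" or "red") to an emotion.

    Per the definition above: if both items are blue, the output is
    "happy"; if either item is red, the output is "sad".
    """
    if item1 == "blue" and item2 == "blue":
        return "happy"
    return "sad"  # at least one of the items is red

print(color_emotion("blue", "blue"))  # happy
print(color_emotion("blue", "red"))   # sad
```

Note that the output is fixed entirely by the inputs, which is what makes this a function in the relevant sense.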
Let us now return to the truth-functional use of propositional operators. In order
to distinguish between a propositional operator that is used truth-functionally and
one that is not, it is important to define a truth-value function. A truth-value func-
tion is a kind of function where the truth-value output is entirely determined by the
truth-value input.
In order to illustrate, consider a type of function that takes the truth value of two
propositions as input and determines a truth value for a complex proposition as output.
This truth-value function is defined as follows:
Special truth function = df. If the truth-value input of both of the propositions is true,
then the complex proposition is true. If the truth-value input of either of the proposi-
tions is false, then the complex proposition is false.
The special truth function takes the truth value of two propositions and then, using
the truth-functional rule above, determines the value for a complex proposition that is
composed of these propositions. The truth value of the output proposition is entirely
determined by the truth values of the input propositions.
Input
(truth value of Output
propositions) Special Truth Function (truth value of complex proposition)
(1) (2) —————————► (3)
T T —————————► T
T F —————————► F
F T —————————► F
F F —————————► F
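The special truth function can likewise be written as code. A sketch in Python (the function name is ours; `True` and `False` play the roles of T and F):

```python
def special_truth_function(p: bool, q: bool) -> bool:
    """Truth function defined above: if both inputs are true, the
    complex proposition is true; if either input is false, it is false."""
    if p and q:
        return True
    return False

# Reproduce the input/output table above, row by row:
for p in (True, False):
    for q in (True, False):
        print(p, q, "->", special_truth_function(p, q))
```

The printed rows match the table: only the all-true input row yields a true output.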
Now take a complex proposition that is composed of (1) and (2) and connected by
the propositional operator and:
(3) John went to the store, and Liz went to the game.
Let’s further suppose that the propositional operator and is being used in a truth-
functional way and that and corresponds to the special truth function expressed above.
If this is the case, then the truth value of (3) should be entirely determined by the truth
values of (1) and (2). To illustrate, suppose that (1) is true, and (2) is true. According
to the special truth function, if the truth-value input of both of the propositions is true,
then the complex proposition is true. Thus, (3) is also true. And it is entirely deter-
mined by the input. Alternatively, suppose that (1) is true, and (2) is false. According
to the special truth function, if the truth-value input of either of the propositions is
false, then the complex proposition is false. Thus, (3) is false. In plain English, the
truth value of John went to the store and Liz went to the game is entirely determined
by the truth value of John went to the store, the truth value of Liz went to the game,
and the special truth function expressed by and.
To summarize, recall that many propositions in English are built up from one or
more propositions or involve propositional operators. A propositional operator is a
term (e.g., and, or, it is not the case that) that works on propositions to create more
complex propositions. Some propositional operators are used in ways that correspond
to truth functions. A truth function is a function where the truth-value output of a
proposition is entirely determined by the truth-value input. Propositional operators
that correspond to truth functions are called truth-functional operators. Complex
propositions involving truth-functional operators are propositions whose truth values
are determined entirely by the truth values of the propositions that compose them.
In this section, we introduce the formal language of propositional logic (PL). We start
by introducing the symbols that make up PL:
This atomic proposition can be abbreviated in the language of PL by any single, capi-
tal Roman letter of our choosing. Thus, John is grumpy can be abbreviated as follows:
The particular letter used to represent the proposition is unimportant (e.g., ‘Q’ or
‘W’ or ‘A’) provided that once a letter is chosen, it is used consistently and unambigu-
ously. Thus, John is grumpy could have been abbreviated in PL as follows:
To ensure that any simple English proposition can be represented in PL, integers
can be subscripted to uppercase Roman letters (e.g., A1, A2, A3, A4, B1, B2, B3, B4, Z1,
Z2, . . ., Z30). This ensures that PL has infinitely many symbols available to represent
English propositions.
As noted in the previous section, more complex propositions are built up by adding
propositional operators to them. Consider the following complex proposition:
To represent (2) in PL, we could abbreviate (2) as ‘J.’ However, the problem with
abbreviating (2) as ‘J’ is that this would cover over the fact that (2) is a complex
proposition composed of two propositions and a truth-functional operator. In abbre-
viating English propositions in PL, we want to represent as much of the underlying
truth-functional structure as possible. In order to do this, we introduce a number of
new symbols (∧, ∨, →, ↔, ¬) into PL that abbreviate various truth-functional opera-
tors (and, or, if . . ., then . . ., . . . if and only if . . ., not) that are found in English. Our
method for explaining these truth-functional operators in PL will not be to consider all
of the various propositional operators that occur in English one by one and then show
how they can be abbreviated by one of our new truth-functional symbols. Instead, we
will define five truth-functional operators in PL (∨, →, ↔, ¬, ∧), and then offer some
general suggestions as to how they relate to the truth-functional use of propositional
operators in English.
2.2.1 Conjunction
In the language of PL, where ‘P’ is a proposition and ‘Q’ is a proposition, a proposi-
tion of the form
P∧Q
is called a conjunction. The ‘∧’ symbol is a truth-functional operator called the caret.
Each of the two propositions that compose a conjunction is called one of the
proposition’s conjuncts. The proposition to the left of the caret is called the left
conjunct, while the proposition to the right of the caret is called the right conjunct.
That is,
Conjunction = df. If the truth-value input of both of the propositions is true, then the
complex proposition is true. If the truth-value input of either of the propositions is
false, then the complex proposition is false.
In other words, a conjunction is true only when both of its conjuncts are true. We can
also represent this truth function in terms of truth-functional input and output as
follows:
Input Output
Proposition P Q P∧Q
Truth value T T T
Truth value T F F
Truth value F T F
Truth value F F F
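Since conjunction is a truth function, its table can be generated mechanically. A sketch in Python using the standard library's `itertools.product` (the helper names are ours):

```python
from itertools import product

def conj(p: bool, q: bool) -> bool:
    """Truth function for 'P∧Q': true only when both conjuncts are true."""
    return p and q

def tv(value: bool) -> str:
    """Render a truth value as 'T' or 'F'."""
    return "T" if value else "F"

print("P Q P∧Q")
for p, q in product((True, False), repeat=2):
    print(tv(p), tv(q), tv(conj(p, q)))
```

Running this prints the same four rows as the table above: T T T, T F F, F T F, F F F.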
The best translations into English of ‘P∧Q’ are sentences that make use of a . . .
and . . . structure, such as ‘P and Q.’ For example, consider the following sentence:
(1) can be abbreviated by letting ‘G’ stand for John is grumpy, ‘L’ stand for Liz is
happy, and the caret for the truth-functional operator and. Thus, (1) can be abbrevi-
ated as follows:
G∧L
(2*) H∧L
(3*) L∧G
(4*) L∧G
2.2.2 Negation
In the language of PL, where ‘P’ is a proposition, a proposition of the form
¬P
is called a negation. The symbol for negation represents the following truth function:
Negation = df. If the truth-value input of the proposition is true, then the complex
proposition involving ‘¬’ is false. If the truth-value input of the proposition (atomic
or complex) is false, then the complex proposition involving ‘¬’ is true.
That is,
Input Output
Proposition P ¬P
Truth value T F
Truth value F T
In other words, the negation function changes the truth value of the proposition it
operates upon. If ‘M’ is true, then ‘¬M’ is false. And if ‘M’ is false, then ‘¬M’ is true.
To put this in plain English, if a proposition is true, adding ‘¬’ to it changes it to false.
If the proposition is false, adding ‘¬’ to it changes it to true.
The best translations into English of ‘¬P’ are sentences involving the use of not or
it is not the case that (e.g., ‘not-P’). For example, consider the following propositions:
(1*) ¬L
(2*) ¬G
(3*) ¬Z
You might be tempted to translate (4E) with a single letter. For example,
(4) J
The problem with this translation is that it does not translate the not in (4E) with the
‘¬’ operator. A more comprehensive translation of (4E) is the following:
(4*) ¬J
The main reason for not translating (4E) as (4) is that it would make the analysis
of English propositions in terms of their truth functions pointless. To see this more
clearly, consider the following proposition:
One way of translating (5E) is by a single letter, since (5E) is a proposition. Thus,
(5) J
(1) ¬M
(2) ¬(M∧J)
(3) ¬M∧J
The scope of ‘¬’ is the proposition (atomic or complex) that occurs to the right of
the negation. In the absence of parentheses, ‘¬’ simply operates on the propositional
letter to its immediate right. For example, in the case of (1), ‘¬’ applies to ‘M.’ In
order to indicate that ‘¬’ applies to a complex proposition, parentheses are used. For
instance, in the case of (2), ‘¬’ operates not merely on ‘M’ and not merely on ‘J’ but
on the complex proposition ‘M∧J’ contained within the parentheses. This proposition
is importantly different from one like (3), where ‘¬’ applies not to the conjunction
‘M∧J’ but only to ‘M.’
Thus, ‘¬’ has more (or wider) scope in (2) than in (3) since in (2) ‘¬’ applies to
the complex proposition (‘M∧J’) and in (3) it only applies to the atomic proposition
(‘M’). Conversely, ‘∧’ has more (or wider) scope in (3) but less scope in (2) since in
(3) ‘∧’ operates on ‘¬M’ and ‘J,’ whereas in (2) it is being operated on by ‘¬.’ Notice
further that in (2), ‘¬’ has ‘∧’ in its scope, whereas in (3), ‘∧’ has ‘¬’ in its scope.
The operator with the greatest scope is known as the main operator. The main op-
erator is the operator that has all other operators within its scope.
Thus, in the case of (2), since ‘¬’ has ‘∧’ in its scope, ‘¬’ is the main operator. In
the case of (3), since ‘∧’ has ‘¬’ in its scope, ‘∧’ is the main operator. Consider a few
more examples:
(4) ¬¬P
(5) ¬(P∧¬Q)
(6) ¬¬P∧¬¬Q
In the case of (4), the leftmost ‘¬’ has the most scope since it has ‘¬P’ within its
scope. In the case of (5), the leftmost ‘¬’ has the most scope since it has ‘∧’ and the
rightmost ‘¬’ in its scope. In the case of (6), ‘∧’ has the most scope since it has ‘¬¬P’
and ‘¬¬Q’ in its scope.
The main operator also determines how we classify a given proposition. Since
the operator with the greatest scope in (2) is ‘¬’ (negation), and it operates on ‘∧’
(conjunction), (2) is classified as a negated conjunction. In the case of (3), since the
operator with the greatest scope is ‘∧’ (conjunction), and it operates on two proposi-
tions (one of the conjuncts being negated), we can say that (3) is a conjunction with
a negated conjunct.
(2) ¬(M∧J)
(3) ¬M∧J
In (2), the order of operations is first to use the truth-functional rule corresponding
to ‘∧’ to determine a truth value for ‘M∧J’ and then to use the truth function corre-
sponding to ‘¬’ to determine the truth value for ‘¬(M∧J).’ In other words, in deter-
mining the truth value of a complex proposition, we move from the truth-functional
operators with the least scope to the truth-functional operators with the most scope.
Consider the following procedure for (2), supposing that ‘M’ is true and ‘J’ is false.
Begin by writing ‘T’ (for true) or ‘F’ (for false) underneath the corresponding propo-
sitional letter:
¬( M ∧ J)
T F
Next, start with the truth-functional operator with the least scope and assign a truth
value to the complex proposition that results from the input of the propositions being
operated upon. In short, use the truth values for ‘M’ and ‘J’ and the truth-functional
rule for conjunction to determine the truth value of ‘M∧J.’ Once determined, write the
corresponding truth value underneath ‘∧’:
¬( M ∧ J)
T F F
Finally, use the truth value of ‘M∧J’ and the truth-functional rule for negation to
determine the truth value of the whole proposition, writing the result underneath ‘¬’:
¬( M ∧ J)
T T F F
Thus, given the above truth values assigned to ‘M’ and ‘J,’ ‘¬(M∧J)’ is true.
Now consider the same process for (3). First, start by assigning truth values to the
propositional letters:
¬ M ∧ J
T F
In this case, since ‘¬’ has the least scope, we determine the truth value of ‘¬M’
first:
¬ M ∧ J
F T F
Next, use the truth values of ‘¬M’ and ‘J’ and the truth-functional rule for
conjunction to determine the truth value of the whole proposition, writing the result
underneath ‘∧’:
¬ M ∧ J
F T F F
Thus, given the above truth values, ‘¬M∧J’ is false. This shows that the order of
operations can affect the truth value of complex propositions. For when ‘M’ is true
and ‘J’ is false, then (2) is true and (3) is false.
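The effect of order of operations can be checked directly by letting Python's `not` and `and` stand in for ‘¬’ and ‘∧’ (a sketch; the variable names follow the example in the text):

```python
M, J = True, False  # 'M' is true and 'J' is false, as above

# (2) ¬(M∧J): apply '∧' first, then negate the result
prop_2 = not (M and J)

# (3) ¬M∧J: negate 'M' first, then conjoin with 'J'
prop_3 = (not M) and J

print(prop_2)  # True
print(prop_3)  # False
```

The two expressions differ only in where the parentheses fall, yet they receive opposite truth values on this assignment.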
The difference in the order of operations of (2) and (3) also influences how (2)
and (3) are translated into English. For the purpose of translation, let ‘M’ = Mary is
a zombie and ‘J’ = John is running. Translating (2) and (3) into English gives us the
following:
(2E) It is not both the case that Mary is a zombie and John is running.
(3E) Mary is not a zombie, and John is running.
Notice that both translations aim at capturing the scope of the operators. In (2), ‘¬’
has wide scope and operates on the complex proposition ‘M∧J.’ This is reflected in (2E),
where it is not both the case that operates on the complex sentence Mary is a zombie
and John is running. This differs from (3) and (3E), where ‘∧’ has wide scope. In the
case of (3), ‘∧’ connects two conjuncts,‘¬M’ and ‘J.’ This is reflected in (3E), where
the use of and operates on the sentence Mary is not a zombie and the sentence John
is running.
To keep the order of operations clear and to maximize readability, three different
scope indicators are employed: parentheses (( )), then brackets ([ ]), and then braces
({ }).
Finally, only the scope of ‘¬’ and ‘∧’ has been discussed. What about the remaining
truth-functional operators (∨, →, ↔)?
Remember that the scope of ‘¬’ (negation) is the proposition (atomic or complex)
to its immediate right, and the scope of ‘∧’ (conjunction) includes the propositions
(atomic or complex) to its immediate left and right. Truth-functional operators that
apply to one and only one proposition are called unary operators. The truth-functional
operator for negation, ‘¬,’ is a unary operator. The remaining truth-functional opera-
tors (∧, ∨, →, ↔) are binary operators in that they apply to two propositions (atomic
or complex). Since ‘∧,’ ‘∨,’ ‘→,’ and ‘↔’ always connect two propositions (atomic
or complex), these operators are called connectives because they are operators that
connect two propositions.
Unary operator ¬
Binary operator (connective) ∧, ∨, →, ↔
(6) {[(¬P→Q)↔R]∨S}∧M
In (6), the rightmost ‘∧’ has the greatest scope (and so is the main operator) be-
cause it contains all other operators in its scope. That is, it operates on two proposi-
tions: ‘[(¬P→Q) ↔ R]∨S’ and ‘M.’ Next, ‘∨’ has the next most scope since it has
‘↔’ and ‘→’ in its scope. The ‘∨’ operates on two propositions: ‘(¬P→Q)↔R’
and ‘S.’ Next, ‘↔’ has the next most scope. The ‘↔’ operates on two propositions:
‘¬P→Q’ and ‘R.’ Next, ‘→’ has the next most scope. The ‘→’ operates on two
propositions: ‘¬P’ and ‘Q.’ Finally, ‘¬’ has the least scope. The ‘¬’ operates on
only one proposition: ‘P.’
Proposition Main Operator
[(P∧Q)↔R]∨S ∨
S∨[(P∧Q)∧R] ∨
M↔W ↔
¬(¬M∧W) ¬
¬(¬R→S)∧(F→P) ∧
¬[(R→¬M)→M] ¬
¬¬Q→¬S →
(W∧¬P)∨¬(P∧¬S) ∨
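The reasoning used to find a main operator can be automated. A minimal sketch in Python, under two assumptions not stated in the text: the input is a well-formed PL proposition with its outermost scope indicators dropped (as in the examples above), and parentheses, brackets, and braces are treated alike:

```python
from typing import Optional

BINARY_OPERATORS = "∧∨→↔"
OPENERS, CLOSERS = "([{", ")]}"

def main_operator(formula: str) -> Optional[str]:
    """Return the main operator of a wff, or None for an atomic proposition.

    A binary connective outside all scope indicators has every other
    operator within its scope; failing that, a leading '¬' is the
    main operator.
    """
    depth = 0
    for ch in formula:
        if ch in OPENERS:
            depth += 1
        elif ch in CLOSERS:
            depth -= 1
        elif ch in BINARY_OPERATORS and depth == 0:
            return ch
    if formula.startswith("¬"):
        return "¬"
    return None  # atomic proposition, e.g. 'P'

print(main_operator("{[(¬P→Q)↔R]∨S}∧M"))  # ∧
print(main_operator("¬(¬R→S)∧(F→P)"))     # ∧
print(main_operator("¬¬Q→¬S"))            # →
```

Running it on the table above reproduces each main operator, since in a wff with its outer parentheses dropped at most one binary connective sits outside all brackets.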
2.3 Syntax of PL
While (1) and (2) are sentences, (3) is not since it is an ungrammatical expression.
The same is true in PL. The next few subsections articulate the syntax of PL. In these
subsections, we articulate what it means for an expression to be a well-formed formula
(grammatically correct) of PL. But before this, we need to make a crucial distinction
so that we can talk about PL without confusion.
In cases like (1), where we use language to express something about the world or
our mental state, we will say that an expression is being used. That is, the proper name
‘John’ is being used in (1) to refer to John. In contrast to (1), when we use language
not to talk about something in the world but to talk about language itself, we will say
that an expression is being mentioned. Here is an example:
In (2), the proper name ‘John’ is being mentioned and not used because the expres-
sion ‘John’ in (2) refers to the term John and not a real, living person John.
As you may have noticed from the examples above, the use/mention distinction is
commonly marked by the use of single quotation marks. We make use of two differ-
ent explicit methods to indicate that a term is being mentioned rather than used. First,
single quotation marks indicate that a term is being mentioned, and the absence of
quotation marks indicates that it is being used. Second, displaying an expression on a
line of its own indicates that it is being mentioned. For example,
¬P
P∧Q
(P→Q)∧S
are all instances where expressions in PL are being mentioned rather than used.
If this is your formulation of such a rule, you will fall miserably short of your goal.
Why? Well, because your goal is to say that for any proposition in PL (not just the
atomic proposition ‘P’), if you put a ‘¬’ in front of it, you form a negation. That is,
your goal is not to express something about a particular proposition but to characterize
a general feature of any proposition in PL.
In order to achieve this end, you will need metalinguistic variables (or meta-
variables). A metavariable is a variable in the metalanguage of PL (not an actual part
of the language of PL) that is used to talk about expressions in PL. In other words,
metavariables are variables in the metalanguage that allow us to make general state-
ments about the object language like the one we are currently aiming at. In order to
clearly demarcate metavariables from propositions that belong to PL, we will repre-
sent metavariables using bold, uppercase letters (with or without numerical subscripts)
(e.g., ‘P,’ ‘Q,’ ‘R,’ ‘Z1’).
Before we look at a number of examples involving the use of metalinguistic vari-
ables, at least two things should be pointed out. First, metavariables are part of the
metalanguage of PL. This means that they will be used to talk about PL and are not
part of PL itself. Second, metavariables are variables for expressions in the object lan-
guage. To illustrate, consider the following mathematical expression: for any positive
integer n, if n is odd, then n + 2 is odd. In this example, n does not stand for some
particular positive integer but is instead a variable for positive integers. Likewise, our
metalinguistic variables are variables, but they are variables for expressions in PL.
Let’s consider a few examples that illustrate the fact that metavariables provide a
very general means of talking about the object language. First, let’s consider our ear-
lier effort, where we wanted to say that for any proposition in PL (atomic or complex),
if you put a ‘¬’ in front of it, you get a negation.
In the above example, note that ‘P’ is a metavariable. ‘P’ is not part of the vocabu-
lary of PL. Instead, it is used to refer to any proposition in PL (e.g., ‘A,’ ‘¬A,’
‘A→B,’ ‘¬A→B,’ etc.). Since the above statement makes use of the metavariable ‘P,’ it
captures the general statement that if you place a negation in front of a proposition,
you get a negation.
Consider a second example:
If ‘P’ is a proposition in PL, and ‘Q’ is a proposition in PL, then ‘P∧Q’ is a proposi-
tion in PL.
In the above example, again note that ‘P’ and ‘Q’ are metavariables. They do not
stand for some particular proposition in PL (e.g.,‘P’ or ‘Q’) and are not part of the
language. Instead, they are being used to refer to any proposition in PL (e.g., ‘A,’
‘¬A,’ ‘A→B,’ ‘¬A→B,’ and so on). Again, since the above statement makes use of
the metavariables ‘P’ and ‘Q,’ it expresses a general statement about how conjunc-
tions are formed in PL.
Recall that the vocabulary of PL has three elements. First, there are uppercase
Roman letters (with or without subscripted integers) to represent atomic propositions.
Second, there are five truth-functional operators:
∨, →, ↔, ¬, ∧
Third, there are parentheses, brackets, and braces to indicate the scope of truth-
functional operators:
( ), [ ], { }
Other, complex propositions are formed by combining the above three elements in
a way that is determined by the grammar of PL. A syntactically correct proposition
in PL is known as a well-formed formula (wff, pronounced ‘woof’). The rules that
determine the grammatical and ungrammatical ways in which the elements of PL can
be combined are known as formation rules.
The formation rules for PL are as follows (where ‘P’ and ‘R’ are metavariables
ranging over expressions of PL):
(1) Every uppercase Roman (unbolded) letter, with or without a subscripted
integer, is a wff.
(2) If ‘P’ is a wff, then ‘¬P’ is a wff.
(3) If ‘P’ and ‘R’ are wffs, then ‘(P∧R)’ is a wff.
(4) If ‘P’ and ‘R’ are wffs, then ‘(P∨R)’ is a wff.
(5) If ‘P’ and ‘R’ are wffs, then ‘(P→R)’ is a wff.
(6) If ‘P’ and ‘R’ are wffs, then ‘(P↔R)’ is a wff.
(7) Nothing else is a wff except what can be formed by repeated uses of rules
(1) to (6).
Rule (1) specifies that every uppercase Roman (unbolded) letter (with or without
subscripted integers) is an atomic proposition. Rules (2) to (6) specify how complex
propositions are formed from simpler propositions. Finally, rule (7) specifies that
further complex propositions in PL can only be formed by repeated uses of rules
(1) to (6).
The formation rules listed above provide a method for determining whether or
not an expression is an expression in PL. That is, the formation rules are rules for
constructing well-formed (or grammatically correct) formulas, and so, if a formula
cannot be created using the rules, then that formula is not well formed (not grammati-
cally correct). To illustrate how these rules can be used to determine whether or not a
proposition is a wff, consider whether the following expression is a wff:
P→¬Q
Begin by showing that all of the atomic letters that compose P→¬Q are wffs. That
is, by rule (1) every propositional letter (e.g., ‘P,’ ‘Q,’ ‘R’) is a wff; thus, it follows that
‘P’ and ‘Q’ are wffs. Next, move to more complex propositions. That is, by rule (2)
if ‘P’ is a wff, then ‘¬P’ is a wff; thus, it follows that since ‘Q’ is a wff, then ‘¬Q’ is
a wff. Finally, by rule (5), if ‘P’ and ‘R’ are wffs, then ‘(P→R)’ is a wff; thus, it fol-
lows that since ‘P’ is a wff and ‘¬Q’ is a wff, then ‘P→¬Q’ is a wff.
Thus, using formation rules (1) to (7), it was shown that ‘P→¬Q’ is a wff. Next,
consider a slightly more complex example. That is, show that ‘P→(R∨¬M)’ is a wff.
First, rule (1) states that every propositional letter (e.g., ‘P,’ ‘Q,’ ‘R’) is a wff. Thus,
in the case of ‘P→(R∨¬M),’ it follows, by rule (1), that ‘P,’ ‘R,’ and ‘M’ are wffs.
Next, by rule (2), if ‘P’ is a wff, then ‘¬P’ is a wff. Thus, since ‘M’ is a wff, then
‘¬M’ is also a wff. Rule (4) states if ‘P’ and ‘R’ are wffs, then ‘(P∨R)’ is a wff. Thus,
in the case of ‘P→(R∨¬M),’ since ‘R’ and ‘¬M’ are wffs, then ‘(R∨¬M)’ is a wff.
Finally, rule (5) states that if ‘P’ and ‘R’ are wffs, then ‘(P→R)’ is a wff. Thus, in
the case of ‘P→(R∨¬M),’ since ‘P’ and ‘(R∨¬M)’ are wffs, then so is ‘P→(R∨¬M).’
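The formation rules lend themselves to a recursive check. A sketch in Python, with two assumptions that go beyond the text: brackets and braces are normalized to parentheses, and a formula whose outermost parentheses have been dropped (as in the displayed examples) is also accepted:

```python
import re

BINARY_OPERATORS = "∧∨→↔"

def is_strict_wff(s: str) -> bool:
    """Check a formula against the formation rules, requiring the
    parentheses that rules (3) to (6) introduce."""
    s = s.translate(str.maketrans("[]{}", "()()"))
    if re.fullmatch(r"[A-Z][0-9]*", s):              # rule (1): atomic letter
        return True
    if s.startswith("¬") and is_strict_wff(s[1:]):   # rule (2): negation
        return True
    if s.startswith("(") and s.endswith(")"):        # rules (3)-(6): binary
        depth = 0
        for i, ch in enumerate(s):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch in BINARY_OPERATORS and depth == 1:
                return is_strict_wff(s[1:i]) and is_strict_wff(s[i + 1:-1])
    return False

def is_wff(s: str) -> bool:
    """Accept strict wffs and wffs with their outermost parentheses dropped."""
    return is_strict_wff(s) or is_strict_wff("(" + s + ")")

print(is_wff("P→¬Q"))      # True
print(is_wff("P→(R∨¬M)"))  # True
print(is_wff("P∧∧Q"))      # False
```

The recursive calls mirror the derivations in the text: the checker succeeds exactly when the formula can be built by repeated uses of rules (1) to (6).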
Thus, ‘P→(R∨¬M)’ is a wff.
Now consider the following propositions:
(1) P
(2) ¬Q
(3) W→R
(4) W∧R
(5) P∧(R∨S)
The literal negations (or negated forms) of the above propositions are formed by
placing a negation before the entire proposition. Thus,
(1*) ¬P
(2*) ¬¬Q
(3*) ¬(W→R)
(4*) ¬(W∧R)
(5*) ¬(P∧(R∨S)) or ¬[P∧(R∨S)]
Another useful reason for the introduction of metavariables is that they allow for a
more succinct specification of the main operator of a proposition.
(1) P
(2) S∧M
(3) (P∧Q)→¬Z
In the case of (1), ‘P’ is an atomic proposition and so has no main operator. In the
case of (2), ‘S∧M’ is a proposition of the form ‘Q∧R’ where ‘Q’ is the proposition ‘S’
and ‘R’ is the proposition ‘M.’ Thus, the main operator is the truth-functional operator
between ‘Q’ and ‘R,’ which is the caret. In the case of (3), ‘(P∧Q)→¬Z’ has the form
‘Q→R,’ where ‘Q’ is the proposition ‘(P∧Q)’ and ‘R’ is the proposition ‘¬Z.’ Thus, the
main operator is the truth-functional operator between ‘Q’and ‘R,’ which is the arrow.
Exercise Set #1
A. Translate the following English sentences into PL. Let ‘J’ = John is tall, ‘F’ =
Frank is tall, ‘L’ = Liz is happy, and ‘Z’ = Zombies are coming to get me.
1. * John is tall and Frank is tall.
2. Frank is tall.
3. * Liz is happy.
4. Zombies are coming to get me and Liz is happy.
5. * Zombies are not coming to get me and Liz is not happy.
6. John is tall and Liz is happy.
7. * Liz is not happy and Frank is tall.
8. Liz is happy and Liz is not happy.
4. (P∧¬Q)→R
5. * ¬(P→¬Q)
6. ¬¬(P↔Q)
7. * ¬[P∧(Q∨R)]
8. (P→Q)∨(P∧R)
9. (P↔S)→[(¬R→S)∨¬T]
A.
1. * J∧F
3. * L
5. * ¬Z∧¬L
7. * ¬L∧F
9. * J∧Z
B.
1. * ¬J, where ‘J’ = John is a murderer.
3. * M∧J, where ‘M’ = Mary is an excellent painter, and ‘J’ = John is a fan-
tastic juggler.
7. * ¬J∧M, where ‘J’ = John is the murderer, and ‘M’ = Mary is the murderer.
10. * J∧¬J, where ‘J’ = John is a great juggler.
C.
1. * Wff; the ‘∧’ in ‘J∧(Q∨R).’
3. * Wff; the leftmost ‘∧’ in ‘J∧¬(Q∧R).’
5. * Not a wff.
19. * Wff; the ‘∨’ in ‘(J↔R)∨(R↔R).’
D.
1. * ‘P’ is a wff. Proof: by rule (1), ‘P’ is a wff.
3. * ‘P∧¬Q’ is a wff. Proof: by rule (1), ‘P’ and ‘Q’ are wffs. By rule (2), if
‘Q’ is a wff, then ‘¬Q’ is a wff. Finally, by rule (3), if ‘P’ and ‘¬Q’ are
wffs, then ‘P∧¬Q’ is a wff.
5. * ‘¬(P→¬Q)’ is a wff. Proof: by rule (1), ‘P’ and ‘Q’ are wffs. By rule (2),
if ‘Q’ is a wff, then ‘¬Q’ is a wff. By rule (5), if ‘P’ and ‘¬Q’ are wffs,
then ‘P→¬Q’ is a wff. Finally, by rule (2), if ‘P→¬Q’ is a wff, then
‘¬(P→¬Q)’ is a wff.
7. * ‘¬[P∧(Q∨R)]’ is a wff. Proof: by rule (1), ‘P,’ ‘Q,’ and ‘R’ are wffs. By
rule (4), if ‘Q’ and ‘R’ are wffs, then ‘Q∨R’ is a wff. By rule (3), if ‘P’
and ‘Q∨R’ are wffs, then ‘P∧(Q∨R)’ is a wff. Finally, by rule (2), if ‘P∧(Q∨R)’
is a wff, then ‘¬[P∧(Q∨R)]’ is a wff.
2.4.1 Disjunction
In the language of PL, where ‘P’ is a proposition and ‘Q’ is a proposition, a proposi-
tion of the form
P∨Q
is called a disjunction. The ‘∨’ symbol is a truth-functional operator called the wedge
(or vee). Each of the two propositions that compose the disjunction is called one of
the proposition’s disjuncts. The proposition to the left of the wedge is called the left
disjunct, while the proposition to the right of the wedge is called the right disjunct.
That is,
Disjunction = df. If the truth-value input of either of the propositions is true, then
the complex proposition is true. If the truth-value input of both of the propositions
is false, then the complex proposition is false.
In other words, a disjunction is true if either (or both) of the disjuncts is true and
false only when both of the disjuncts are false. This function can be represented as
follows:
Input Output
Proposition P Q P∨Q
Truth value T T T
Truth value T F T
Truth value F T T
Truth value F F F
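This truth function is easy to mirror computationally. Here is a minimal sketch in Python (the function name `disjunction` is ours, not the text's):

```python
def disjunction(p, q):
    """Inclusive 'or' ('∨'): true if either (or both) disjuncts are true."""
    return p or q

# Reproduce the four rows of the table above.
for p, q in [(True, True), (True, False), (False, True), (False, False)]:
    print(p, q, disjunction(p, q))
```

The output matches the table: true in the first three rows and false only in the last.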
The best translations into English of ‘P∨Q’ are sentences involving the inclusive
use of or as in ‘P or Q.’ Consider the following proposition:
(2) M∨J
Although complex translation is not the focus of this text, it should be noted that
while ‘∨’ is almost exclusively represented in English by the word or, ‘∨’ (in con-
junction with other truth-functional operators) is used to translate a number of other
English expressions. For example, let ‘Z’ = Mary is a zombie and ‘M’ = Mary is a
mutant. Now consider the following sentence.
One way to translate (3) is by using ‘∨,’ ‘¬,’ and a scope indicator:
(3) ¬(Z∨M)
(4E) Michael Jordan or Kobe Bryant is the greatest basketball player ever.
(5E) Either Julia Child or Jeff Smith is the greatest TV chef ever.
In (4E) and (5E), the connective or is interpreted exclusively. For (4E) or (5E) to be
true, one or the other of the simpler sentences (but not both) has to be true. Either one
or the other is the greatest, but both are not the greatest. Thus, the sentences are el-
liptical in that we could add not both to both (4E) and (5E). That is,
(4E*) Michael Jordan or Kobe Bryant is the greatest basketball player ever, not both.
(5E*) Either Julia Child or Jeff Smith is the greatest TV chef ever, not both.
The missing not both is implied by the fact that ‘greatest’ usually conveys that one
and only one person or thing is the greatest. Thus, (4E*) and (5E*) can only be true if
one and only one of the constitutive propositions is true. That is, (4E*) will be false if
both Michael Jordan and Kobe Bryant are the greatest, and (5E*) will be false if both
Julia Child and Jeff Smith are the greatest.
This use of or is distinct from the inclusive sense of or, where the complex propo-
sition can be true even if both of the constitutive propositions are true. Consider the
following disjunctions:
Suppose in the case of (6E) that Mary visits John and has lunch, and in the case of
(7E), Mary is a zombie, and John is a mutant. In these cases, nothing prompts us to add
not both to (6E) and (7E). Thus, (6E) and (7E) are true provided either of the disjuncts
are true (even if both are true).
(4E*) Michael Jordan or Kobe Bryant is the greatest basketball player ever, not both.
(4E*) Michael Jordan or Kobe Bryant is the greatest and not both.
(4E*) (M∨K)∧¬(M∧K)
Another way to do the same thing would be to introduce a new truth-functional op-
erator into the language of PL. We might introduce ‘⊕’ as the truth-functional operator
that stands for the exclusive sense of or. The operator could be defined in terms of the
following truth function:
Input Output
Proposition P Q P⊕Q
Truth value T T F
Truth value T F T
Truth value F T T
Truth value F F F
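The ‘⊕’ truth function corresponds to inequality of truth values; a quick sketch in Python (the name `xor` is ours):

```python
def xor(p, q):
    """Exclusive 'or' ('⊕'): true when exactly one of p and q is true."""
    return p != q

# Reproduce the four rows of the table above.
for p, q in [(True, True), (True, False), (False, True), (False, False)]:
    print(p, q, xor(p, q))
```

Unlike the wedge, this function is false when both inputs are true.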
P→Q
Antecedent Consequent
P → Q
Conditional = df. If the truth-value input of the proposition to the left of the ‘→’ is
true and the one to the right is false, then the complex proposition is false. For all
other truth-value inputs, the complex proposition is true.
Input Output
Proposition P Q P→Q
Truth value T T T
Truth value T F F
Truth value F T T
Truth value F F T
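The ‘→’ truth function can likewise be mirrored in Python; a minimal sketch (the name `conditional` is ours):

```python
def conditional(p, q):
    """Material conditional ('→'): false only when the antecedent is true
    and the consequent is false."""
    return (not p) or q

# Reproduce the four rows of the table above.
for p, q in [(True, True), (True, False), (False, True), (False, False)]:
    print(p, q, conditional(p, q))
```

Note the single false row: antecedent true, consequent false.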
The best translations into English of ‘P→Q’ are sentences that make use of an
if. . ., then. . . structure, such as ‘if P, then Q.’
For example,
(1PL) T→C
(2PL) M→J
(3PL) H→P
In addition to if. . ., then . . . statements, there are a number of other ways to express
conditionals.
Symbolic
English Proposition Representation
If Mary is a zombie, then John is a zombie M→J
If Mary is not a zombie, John is not a zombie ¬M→¬J
In the case that Mary is a zombie, John is a zombie. M→J
Mary being a zombie means that John is a zombie. M→J
On the condition that Mary is a zombie, John is a zombie. M→J
Only if John is a zombie, Mary is a zombie M→J
There is a question about which uses of the if . . ., then . . . construction in English corre-
spond to the ‘→’ truth function. At this point, we’ll ignore the philosophical and logical
debate concerning this issue and translate if . . ., then . . . and equivalent constructions
by using ‘→.’3 Later on in the text, we’ll give a defense of why truth-functional uses of
if . . ., then . . . constructions should behave like the truth function presented above, but
for now, try to commit the above truth function to memory.
However, it should be noted here that there are two different ways that an if . . .,
then . . . construction can be used in English: (1) a truth-functional way, and (2) a non-
truth-functional way. Truth-functional uses of if . . ., then . . . are uses where the truth
value of the complex proposition is determined by the truth values of the component
propositions. So, If John is in Toronto, then he is in Canada is true depending upon
the values of John is in Toronto and John is in Canada. However, we can use if . . .,
then . . . statements in a non-truth-functional way. Perhaps the most evident example
concerns causal statements.
Consider the following two causal statements:
(4) If John prays before his big logic exam, then he will receive an A.
(5) If John jumps up, then (assuming normal conditions) he will come down.
Assume that in the case of (5), the antecedent and consequent are true. If the causal
use of if. . ., then. . . is truth-functional, then we have the following:
Input Output
Proposition U → D
Truth value T T T
This is exactly what the truth-functional use of ‘→’ tells us should happen.
However, assume that in the case of (4), the antecedent and consequent are true.
John prays before his exam and receives an A. If the causal use of if . . ., then . . . is
truth-functional, then we have the following:
Input Output
Proposition J → S
Truth value T T T
However, what if the cause of John getting an A on the exam was not his praying
but that he cheated? If this is the case, (4) is false even if the antecedent and conse-
quent are both true. This is because (4) asserts that John’s prayer caused him to get
an A, and what caused John to get an A was his cheating. Thus, our input and output
conditions are as follows:
Input Output
Proposition J → S
Truth value T T F
P↔Q
Biconditional = df. If the truth value input of the propositions are identical, then the
complex proposition is true. If the truth value inputs differ, the complex proposi-
tion is false.
Input Output
Proposition P ↔ Q
Truth value T T T
Truth value T F F
Truth value F T F
Truth value F F T
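The ‘↔’ truth function amounts to identity of truth values; a short sketch in Python (the name `biconditional` is ours):

```python
def biconditional(p, q):
    """Material biconditional ('↔'): true when p and q have the same truth value."""
    return p == q

# Reproduce the four rows of the table above.
for p, q in [(True, True), (True, False), (False, True), (False, False)]:
    print(p, q, biconditional(p, q))
```

It is true on the two rows where the inputs agree and false on the two where they differ.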
The best translations into English of ‘P↔Q’ are sentences that make use of an . . .if
and only if . . . structure, such as ‘P if and only if Q.’ For example, Mary is a zombie
if and only if she was infected by the T-virus or John will win the election if and only
if he campaigns in southern states. By abbreviating Mary is a zombie as ‘M’ and Mary
was infected by the T-virus as ‘T,’ and by using the symbolic representation for the
double arrow, the above complex English proposition is abbreviated as follows:
In the previous sections, the emphasis has been on using the following truth-functional
operators (¬, ∧, ∨, →, ↔) to translate the following English expressions (not, and, or,
if . . ., then . . ., and if and only if). As it stands, your ability to translate from English
into propositional logic and from propositional logic into English is limited to these
expressions. This section considers a number of additional English expressions and
suggests various ways to translate these into the language of propositional logic.
In this section, the following proposition types are considered:
neither P nor Q
not both P and Q
P only if Q
P even if Q
not-P unless Q or P unless Q
First, a ‘neither P nor Q’ proposition is true if and only if both ‘P’ and ‘Q’ are false.
Thus, it can be translated as ‘¬P∧¬Q.’ Consider the following propositions:
(1E) says John does not play guitar and Liz does not play guitar, and so (1E) can be
translated as follows:
(1) ¬J∧¬L
Likewise, (2E) says that Barack is not a good president and George is not a good
president. Thus, (2E) is best translated as a conjunction where each of the conjuncts
is negated:
(2) ¬B∧¬G
Second, a ‘not both P and Q’ proposition is true so long as ‘P’ and ‘Q’ are not jointly
true. Thus, ‘not both P and Q’ is true in three different cases. First, it is true when ‘P’
is true and ‘Q’ is false. Second, it is true when ‘Q’ is true and ‘P’ is false. Third, it
is true when ‘P’ and ‘Q’ are both false. Given that ‘not both P and Q’ is true in three
different cases and false only when ‘P’ is true and ‘Q’ is true, the best way to translate
‘not both P and Q’ is as a negated conjunction, that is, ‘¬(P∧Q).’
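The three-true-rows behavior of the negated conjunction can be verified by enumeration; a quick Python check (the name `not_both` is ours):

```python
def not_both(p, q):
    """'not both P and Q' rendered as the negated conjunction ¬(P∧Q)."""
    return not (p and q)

# False only when both inputs are true; true in the other three rows.
for p, q in [(True, True), (True, False), (False, True), (False, False)]:
    print(p, q, not_both(p, q))
```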
To illustrate, consider the following propositions:
Start translating (3E) by isolating the two propositions that compose it. These are
When (3E) states that Frank did not kiss both Corinne and George, this means that
while he may have kissed one of them, it is not the case that Frank kissed Corinne and
George. Thus, (3E) is best translated as a negated conjunction:
(3) ¬(C∧G)
Likewise, (4E) receives a similar treatment. It may be the case that George ate the
hamburger but not the hot dog, or George may have eaten the hot dog but not the
hamburger, but (4E) says that he did not eat them both.
(4) ¬(B∧D)
Third, it is tempting to translate ‘P only if Q’ as ‘Q→P’ since you may think that
if signifies the antecedent like it does in ‘if P then Q.’ But the translation of ‘P only if
Q’ as ‘Q→P’ should be avoided. In discussing ‘P only if Q,’ it is helpful to consider
two different explanations for why ‘P only if Q’ should be translated as ‘P→Q’ rather
than ‘Q→P.’ The first way involves getting clearer on the distinction between a neces-
sary condition and a sufficient condition. ‘P’ is a sufficient condition for ‘Q’ when the
truth of ‘P’ guarantees the truth of ‘Q.’ By contrast, ‘P’ is a necessary condition for
‘Q’ when the falsity of ‘P’ guarantees the falsity of ‘Q.’
In the material conditional ‘P→Q,’ ‘P’ is a sufficient condition for ‘Q,’ while ‘Q’ is a
necessary condition for ‘P.’ To see this more clearly, consider the following argument:
If Toronto is the largest city in Canada (‘P’), then Toronto is the largest city in
Ontario (‘Q’).
Toronto is the largest city in Canada (‘P’).
Therefore, Toronto is the largest city in Ontario (‘Q’).
Notice that the truth of ‘P’ in the above argument guarantees the truth of ‘Q.’ Thus,
‘P’ is sufficient for ‘Q.’ In contrast, consider the following argument:
If Toronto is the largest city in Canada (‘P’), then Toronto is the largest city in
Ontario (‘Q’).
Toronto is not the largest city in Ontario (‘¬Q’).
Therefore, Toronto is not the largest city in Canada (‘¬P’).
Notice that the falsity of ‘Q’ in the above argument (as represented by the second
premise) guarantees the falsity of ‘P’ (as represented by the conclusion). Thus, ‘Q’ is
necessary for ‘P.’ Now consider that ‘P only if Q’ says that in order for ‘P’ to be true,
‘Q’ needs to be true. That is, ‘P only if Q’ says that ‘Q’ is a necessary condition for
‘P.’ Thus, ‘P only if Q’ should be translated as ‘P→Q.’
A second way to explain why ‘P only if Q’ should be translated as ‘P→Q’ begins by
considering the conditions under which ‘P only if Q’ is false. ‘P only if Q’ is false in
just one case, namely, where ‘P’ is true and ‘Q’ is false. Thus, in looking for a transla-
tion of ‘P only if Q,’ we want to use truth-functional operators that make ‘P only if Q’
true in every case, except when ‘P’ is true and ‘Q’ is false. In looking at the truth table
definitions (see below), we see that ‘P→Q’ is false just in the case that ‘P’ is true and
‘Q’ is false, and it is true in all others. Thus, ‘P only if Q’ is best translated as ‘P→Q.’
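This one-false-row characterization can be confirmed by brute force; a short Python check (the name `material_conditional` is ours):

```python
from itertools import product

def material_conditional(p, q):
    """'→': false only when p is true and q is false."""
    return (not p) or q

# Collect the valuations on which 'P only if Q' (i.e., P→Q) comes out false.
false_cases = [(p, q) for p, q in product((True, False), repeat=2)
               if not material_conditional(p, q)]
print(false_cases)  # [(True, False)]
```

Only the valuation with ‘P’ true and ‘Q’ false appears, as the argument above requires.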
(5E) Ryan will let Daniel live only if Daniel wins the lottery.
(6E) Stock prices will go up only if people buy more stocks.
(7E) Only if people buy more stocks will stock prices go up.
(5) R→D
Other only if propositions should be translated similarly. That is, the proposition
that immediately comes after the only if is the consequent of the condition while the
other proposition is the antecedent. Thus, in the case of (6E),
(6) U→B
(7) U→B
Fourth, ‘P even if Q’ is true if and only if ‘P’ is true. That is, ‘P even if Q’ says ‘P
regardless of Q,’ and so the truth (or falsity) of ‘P even if Q’ entirely depends upon
whether ‘P’ is true and is independent of whether ‘Q’ is true or false. Given that this is
the case, there are two ways to translate ‘P even if Q.’ First, if the goal of a translation
into a formal language is merely to capture the conditions under which a proposition
is true, then we can disregard ‘Q’ altogether and translate ‘P even if Q’ as simply ‘P.’
However, if the goal of a translation is to preserve what is expressed, then we can
translate ‘P even if Q’ as ‘P∧(Q∨¬Q).’
To illustrate, consider the following sentences:
(8E) says that Corinne is a good worker regardless of whether her employer is in-
competent. Since the truth or falsity of (8E) does not depend upon whether or not her
employer is incompetent, the truth or falsity of (8E) turns entirely on whether or not
Corinne is a good worker. Thus, (8E) is best translated simply as follows:
(8) C
(9) U
Fifth, and finally, one of the most difficult expressions to translate is ‘P unless Q’
because there are two seemingly conflicting ways to translate the expression. First,
consider the following proposition:
(11E) You will not win the lottery unless you acquire a ticket.
Before beginning, notice that (11E) is not ‘P unless Q’ but ‘not-P unless Q.’ In
thinking about the meaning of (11E), let’s consider the conditions under which (11E)
is true and false (beginning with the uppermost row).
(11E) is true if you acquired a ticket and won the lottery. Congratulations! Second,
(11E) is false if you did not acquire a ticket and did win the lottery. Third, (11E) is true
if you acquired a ticket and did not win the lottery. (11E) doesn’t say you will win
the lottery if you buy a ticket; it only says that acquiring a ticket is a precondition for
winning. If buying a ticket were a sufficient condition for winning the lottery, then
everyone would play! Finally, (11E) is true if you did not acquire a ticket and did not
win the lottery. Did you expect to win the lottery without acquiring a ticket? Get real!
If we let ‘P’ stand for You will win the lottery and ‘Q’ stand for You will acquire
a lottery ticket, a translation of (11E) will be a proposition that is false just in the
case that ‘P’ is true and ‘Q’ is false (and true in all others). Since ‘¬P∨Q’ is false
only when ‘P’ is true and ‘Q’ is false, ‘¬P∨Q’ is a translation of (11E). Thus, we
can translate propositions like ‘not-P unless Q’ as ‘¬P∨Q.’ Some further examples
include the following:
Other cases of ‘P unless Q’ seem to say something stronger. Consider the following
proposition:
In thinking about the meaning of (12E), let’s consider the conditions under which
(12E) is true and false.
Skipping row 1 and beginning with row 2, (12E) is true if John is at the party and
Liz did not call him. (12E) says that John is pretty much assured to be at the party, and
the only thing that is going to stop him from being there is Liz’s call. Moving to row
3, (12E) is true if John is not at the party and Liz did call him. Again, (12E) says that
the only thing that is going to keep John from being at the party is Liz’s call, and so,
if Liz called him and he is not at the party, (12E) makes good on what it says. Moving
to row 4, (12E) is false if John is not at the party and Liz did not call. Part of what
(12E) says is that if Liz does not call, then John will be there. So, in the case that Liz
did not call and John is not at the party, (12E) is false.
Thus far, our analysis of (12E) does not differ too much from our analysis of (11E).
However, what is problematic about (12E), and cases of ‘P unless Q’ in general, is
how to treat row 1. In one reading of (12E), (12E) is true if John is at the party and
Liz called John (she may have called John to tell him to have a great time at the
party). I find a variety of readings of this sort to be unnatural and to depend upon
ambiguous (and sometimes unarticulated) aspects of the sentence. For example,
consider the following:
Suppose I take the job, although I did get another offer, but the offer was not as
good. Thus, ‘P unless Q’ is true at row 1 (i.e., when ‘P’ is true and ‘Q’ is true). But
the rationale for this reading is built upon ambiguity, for the above sentence really says,
In that case, ‘P unless Q’ is false at row 1 (i.e., when ‘P’ is true and ‘Q’ is true).
Therefore, I think a more natural reading of (12E) is that (12E) is false if Liz called
John and John is at the party.
If we let ‘P’ stand for John is at the party and ‘Q’ stand for Liz called John, a trans-
lation of (12E) will be a proposition that is true in just two cases: (1) where ‘P’ is true
and ‘Q’ is false, and (2) where ‘P’ is false and ‘Q’ is true. Since ‘¬(P↔Q)’ is true
just in these cases, ‘¬(P↔Q)’ is a translation of (12E). Thus, we can translate proposi-
tions like ‘P unless Q’ using the exclusive disjunction ‘P⊕Q,’ which is equivalent to
‘(P∨Q)∧¬(P∧Q)’ or ‘¬(P↔Q).’ Some further examples include the following:
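The equivalence of these three renderings can be confirmed over all four valuations; a sketch in Python (the helper names are ours):

```python
from itertools import product

def equivalent(f, g):
    """True if the two truth functions agree on every valuation."""
    return all(f(p, q) == g(p, q) for p, q in product((True, False), repeat=2))

exclusive_or = lambda p, q: p != q                      # P⊕Q
long_form    = lambda p, q: (p or q) and not (p and q)  # (P∨Q)∧¬(P∧Q)
neg_bicond   = lambda p, q: not (p == q)                # ¬(P↔Q)

print(equivalent(exclusive_or, long_form))   # True
print(equivalent(exclusive_or, neg_bicond))  # True
```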
End-of-Chapter Exercises
8. (Z∨M)∧¬(Z↔P)
9. * (L→M)∧¬(¬Z↔¬P)
10. ¬Q
D. Basic translation. Translate the following English expressions into a symbolic
propositional logic expression. Make sure to capture as much of these expressions
as possible with the propositional operators.
1. * John is robbing the store, and Mary is in the getaway car.
2. John is not a happy man, and Mary is a happy women.
3. * John will go to the store, or he will buy a new car, or he will run from the
law.
4. If John is a zombie, then Mary should run, or Mary should fight.
5. If John is not a zombie, then Mary should run or fight.
6. If John is hungry or a zombie, then Mary should flee.
7. * If Mary left the store two hours ago, and John left one hour ago, and Frank
leaves now, then John will not arrive at the store before Mary.
8. Frank is hungry if and only if John stole his sandwich.
E. Translate the following symbolic propositional logic expressions into English.
Use the following: ‘J’ = John is a zombie, ‘M’ = Mary is a mobster, ‘F’ = Frank is
a fireman. If you are having difficulty, first identify and translate the main opera-
tor, then translate the component sentences, and finally make sure to pay attention
to the scope of negation.
1. * J→M
2. ¬J→F
3. * F→¬J
4. (F∨J)→¬M
5. ¬(F∨J)→¬J
6. J↔¬M
7. [(J∧M)∧F]→(¬M∧¬J)
8. (M↔J)→(J∨¬F)
9. * M∨(F∨¬J)
10. F∨(M∨¬J)
F. For each proposition, identify the number of different ways that they can be rep-
resented using metalinguistic variables.
1. * A↔B
2. (A→B)↔C
3. * ¬(T→R)
4. S∧R
5. * (P↔Q)→W
6. A∨¬B
7. * ¬¬P
8. ¬¬P∧Q
9. * ¬(W→¬R)
10. ¬(¬P↔¬E)→¬Q
A.
1. * ∧.
3. * The leftmost ¬.
5. * The leftmost ¬.
9. * The leftmost ¬.
B.
1. * Yes; A, ¬A.
3. * No.
5. * No.
7. * Yes; A∧B, ¬(A∧B).
9. * No.
C.
1. * ¬P.
3. * ¬(P→Q).
5. * ¬(P∧¬M).
7. * ¬[P∧(R↔S)].
9. * ¬[(L→M)∧¬(¬Z↔¬P)].
D.
1. * J∧M, where ‘J’ = John is robbing the store, and ‘M’ = Mary is in the
getaway car.
3. * (S∨B)∨L or S∨(B∨L), where ‘S’ = John will go to the store, ‘B’ = John
will buy a new car, and ‘L’ = John will run from the law.
7. * [(M∧J)∧F]→¬A, where ‘M’ = Mary left the store two hours ago, ‘J’ =
John left one hour ago,‘F’ = Frank leaves now, and ‘A’ = John will arrive
at the store before Mary.
E.
1. * J→M; if John is a zombie, then Mary is a mobster.
3. * F→¬J; if Frank is a fireman, then John is not a zombie.
9. * M∨(F∨¬J); Mary is a mobster, or Frank is a fireman, or John is not a
zombie.
F.
1. * P, P↔Q.
3. * P, ¬P, ¬(P→Q).
5. * P, P→Q, (P↔Q)→W.
7. * P, ¬P, ¬¬P.
9. * P, ¬P, ¬(P→Q), ¬(P→¬Q).
G.
1. * S∧(T∨¬T).
Key: ‘T’ = You are on the right track; ‘S’ = You will get run over if you
just sit there.
3. * (¬N∧¬L)∧P
Key: ‘N’ = Marriage is heaven; ‘L’ = Marriage is hell; ‘P’ = Marriage is
purgatory.
5. * ¬V∧¬P
Key: ‘V’ = Happiness is a virtue; ‘P’ = Happiness is a pleasure.
7. * A∧(W∨¬W)
Key: ‘A’ = Every author in some way portrays himself in his works; ‘W’
= Portraying oneself in one’s work is against one’s will.
9. * ¬A∨M
Key: ‘A’ = America is wholly herself; ‘M’ = America is engaged in high
moral principle.
Definitions
Notes
1. Functions are abundant in mathematics. They typically associate a quantity (the input)
with another quantity (the output). For example, f(x) = 2x is a function that associates with any
positive integer (the input) an integer twice as large (the output).
2. The truth function represented by ‘→’ is sometimes represented as ⊃, also known as the
horseshoe.
3. If you are interested, see J. Bennett, A Philosophical Guide to Conditionals (Oxford:
Clarendon Press, 2003); D. Sanford, If P, Then Q: Conditionals and the Foundations of Rea-
soning (New York: Routledge, 1989); J. Etchemendy, The Concept of Logical Consequence
(Cambridge, MA: Harvard University Press, 1990).
4. Symbolically, the material biconditional is sometimes represented by ‘≡,’ also known as
the tribar.
Truth Tables
Thus far, we have articulated the symbols and syntax of PL. The primary goal of this
chapter is to explain more fully the semantics of PL by (1) revisiting the notion of a
valuation (truth-value assignment), (2) articulating a mechanical method that shows
how the truth value of complex proposition ‘P’ in PL is determined by the truth value
of the propositions that make up ‘P,’ and (3) using this method to determine whether
certain logical properties belong to propositions, sets of propositions, and arguments.
This mechanical method will give us a determinate yes or no answer as to whether a
proposition is contingent, contradictory, or tautological; as to whether sets of proposi-
tions are consistent or inconsistent; as to whether a pair of propositions are equivalent
or nonequivalent; and as to whether arguments are valid or invalid. Such a method is
known as a decision procedure.
There are two things to note about the above definition. First, we stipulate that a
valuation can only assign a value of ‘T’ or ‘F’ to a proposition. This is an idealization
since it is an open question whether or not there are additional truth values (e.g., ‘I’ for
indeterminate). Second, previously when we wanted to say that a particular proposi-
tion has a certain truth value, we expressed this as follows:
‘A’ is true.
From now on, we will make use of a notational abbreviation to represent this same
fact. That is, we will use an italicized lowercase letter ‘v’ in order to represent that
‘A’ is true:
v(A) = T
The above says that ‘A’ is assigned the truth value of ‘T.’
A key feature of the syntax of PL is that every proposition in PL can be generated
using formation rules (see 2.3.3.). In addition, the truth value of any complex proposi-
tion in PL is determined by the truth value of the propositional letters that make it up
and the use of the following truth table definitions:
Using the truth table definitions, we can determine how the truth value of a complex
proposition is determined by the truth values of the atomic propositions that make it
up. For example, if ‘A’ is true, and ‘B’ is false, then using the above truth table defini-
tion, ‘A∧B’ is false. We will consider how this works in two steps.
The first step is to see that the truth value of a complex well-formed formula (wff,
pronounced ‘woof’) can be determined, provided the truth values are assigned to the
atomic propositions composing the complex wffs. For example, consider the follow-
ing proposition:
Translate Mary is a zombie as ‘Z’ and John is a mutant as ‘J.’ Next, insert the ap-
propriate symbolic operators to reflect the truth-functional syntax of English. In the
above example, (1E) can be translated as follows:
(1PL) Z∧¬J
Since ‘Z’ and ‘J’ are both propositions, they have a fixed truth value. Assume that
v(Z) = T, v(J) = F. Using the truth values of the atomic propositions and the truth-
functional definitions, the truth value of the complex proposition ‘Z∧¬J’ can be de-
termined. This is done in two steps.
(1) Write the appropriate truth value underneath each propositional letter.
(2) Starting with the truth-functional operator with the least scope and proceeding
to the truth-functional operator with the most scope, use the appropriate truth-
functional definition to determine the truth value of the complex proposition.
Starting with step 1, start by writing the truth values below each atomic proposition.
Z ∧ ¬ J
T F
Moving to step 2, starting with the truth-functional operator with the least amount
of scope and proceeding to the operators with more scope, assign truth values to com-
plex propositions until a truth value is assigned to the main operator. This procedure
will thus require knowledge of the corresponding truth-functional rules associated
with each truth-functional operator (see table above).
In the above example, ‘¬’ has the least amount of scope and operates on ‘J.’
The truth-functional rule for ‘¬’ says that if the truth-value input is ‘F,’ then the
truth-value output is ‘T.’ In the above example, v(J) = F, and so the truth value for
‘¬J’ is v(¬J) = T. Represent this determination by writing a ‘T’ under ‘¬’ to the
immediate left of ‘J.’
Z ∧ ¬ J
T T F
Next, proceed to the operator with the next-least scope. In the above example, this
is ‘∧.’ The ‘∧’ operates on ‘Z’ and ‘¬J,’ where v(Z) = T and v(¬J) = T. The truth-
functional definition for ‘∧’ determines that the complex proposition is true.
Z ∧ ¬ J
T T T F
Using the truth values of the atomic propositions and the truth-functional rules, the
truth value of ‘Z∧¬J’ has been determined, that is, v(Z∧¬J) = T.
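The same two-step computation can be mirrored directly; a minimal Python sketch (the dictionary `v` is our rendering of the valuation):

```python
# The valuation v(Z) = T, v(J) = F as a Python dictionary.
v = {'Z': True, 'J': False}

# Z∧¬J: apply '¬' (least scope) first, then '∧' (the main operator).
not_j = not v['J']          # v(¬J) = T
result = v['Z'] and not_j   # v(Z∧¬J) = T
print(result)  # True
```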
Consider a slightly more complex example:
Translate Mary is a zombie as ‘Z,’ John is a mutant as ‘J,’ and We are doomed
as ‘D.’ Next, insert the appropriate symbolic operators to reflect the truth-functional
syntax of English. (2E) can be translated as follows:
(2PL) (¬Z∨J)→D
Assume that v(Z) = T, v(J) = F, and v(D) = F. Following step 1, write the appropri-
ate truth value below each proposition.
(¬ Z ∨ J) → D
T F F
Following step 2, start with truth-functional operators with the least amount of
scope and assign truth values to complex propositions until a truth value is assigned
to the main operator. In the above example, ‘¬’ has the least amount of scope. Thus,
given the truth function associated with ‘¬’ and v(Z) = T, the truth value for the com-
plex proposition ‘¬Z’ is v(¬Z) = F. This is represented by writing an ‘F’ under ‘¬’
to the left of ‘Z.’
(¬ Z ∨ J) → D
F T F F
Continue to the operator with the next-least scope. In the above example, this is
‘∨.’ Thus,
(¬ Z ∨ J) → D
F T F F F
In the above example, note that ‘F’ is written underneath ‘∨’ because ‘∨’ operates
on (or connects) two false propositions, that is, v(¬Z) = F and v(J) = F. Now, look for
the operator with the next-least scope. In our case this is ‘→,’ which is also the main
operator. The ‘→’ operates on v(D) = F and the complex proposition v(¬Z∨J) = F.
(¬ Z ∨ J) → D
F T F F T F
Using the notion of scope and the truth-functional definition associated with the
various operators, the truth value of ‘(¬Z∨J)→D’ has been determined to be true. This
is represented as ‘T’ under the main operator.
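This determination can also be checked directly; a short Python sketch (again, the dictionary `v` is our rendering of the valuation):

```python
# The valuation v(Z) = T, v(J) = F, v(D) = F.
v = {'Z': True, 'J': False, 'D': False}

# (¬Z∨J)→D: work from the operator with the least scope ('¬')
# out to the main operator ('→').
antecedent = (not v['Z']) or v['J']   # v(¬Z∨J) = F
result = (not antecedent) or v['D']   # v((¬Z∨J)→D) = T
print(result)  # True
```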
Exercise Set #1
A.
1. * v(A∧¬B) = F
3. * v(A↔¬B) = F
5. * v((¬A→B)→C) = T
7. * v(¬B→(A∧¬A)) = T
9. * v([(A→B)→(B→C)]∨A) = T
In the previous section, truth table definitions were used to determine the truth value
of complex propositions in the case where the propositional letters that compose these
propositions were assigned valuations. In these cases, given the truth values of the
propositional letters, we were able to determine the truth value of the complex propo-
sitions. Namely, if John is tall is true, and Liz is happy is true, the proposition John is
tall, and Liz is happy is true.
However, the function of a truth table is much more general for it can be used
to give a description of the different ways in which truth values can be assigned to
propositional letters. To see this more clearly, let’s take a simple case involving the
proposition ‘P∧Q.’ Notice that the proposition ‘P∧Q’ consists of two propositional
letters, ‘P’ and ‘Q.’ Now, it might be the case that ‘P’ is true and ‘Q’ is true. Or, it
might be the case that ‘P’ is true and ‘Q’ is false. Or it might be the case that ‘P’ is
false and ‘Q’ is true. Or it might be the case that ‘P’ is false and ‘Q’ is false. A truth
table will take the different ways in which the propositional letters of ‘P∧Q’ might
be valuated and determine the truth value of ‘P∧Q’ on that basis.
To represent these different scenarios using a truth table, start by constructing a
table with three columns and five rows, where ‘P∧Q’ is placed in the upper-right-
most cell, and the atomic propositions ‘P’ and ‘Q’ are placed in the two columns
to the left.
P Q P∧Q
Next, we want to represent the different ways that ‘P’ and ‘Q’ can be evaluated.
To do this, start by writing ‘T,’ ‘T,’ ‘F,’ ‘F’ under the leftmost ‘P,’ and alternating
‘T,’ ‘F,’ ‘T,’ ‘F’ under ‘Q.’1
P Q P∧Q
T T
T F
F T
F F
Now that the truth table is set up, the procedure for computing the truth value of
the complex proposition ‘P∧Q’ is the same as computing the truth value for complex
propositions discussed in the previous section. We can follow the same two-step pro-
cedure we followed earlier:
(1) Write the appropriate truth value underneath each propositional letter.
(2) Starting with the truth-functional operator with the least scope and proceeding
to the truth-functional operator with the most scope, use the appropriate truth-
functional rule to determine the truth value of the complex proposition.
Thus, starting from the first row, we write a ‘T’ under every ‘P’ in ‘P∧Q’ and a ‘T’
under every ‘Q’ in ‘P∧Q.’
P Q P ∧ Q
T T T T
T F
F T
F F
Next, moving to the second row, write ‘T’ under every ‘P’ and ‘F’ under every ‘Q.’
P Q P ∧ Q
T T T T
T F T F
F T
F F
Continue the process until all of the data concerning the truth values of the propo-
sitional letters is under the complex proposition.
P Q P ∧ Q
T T T T
T F T F
F T F T
F F F F
Next, the truth value of the complex proposition ‘P∧Q’ is determined using the
truth values of the propositional letters plus the truth table definitions. Since ‘P∧Q’ is
a conjunction, we fill out the table accordingly:
P Q P ∧ Q
T T T T T
T F T F F
F T F F T
F F F F F
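The whole table can be generated in one pass by enumerating the valuations; a Python sketch (the column layout and helper name `fmt` are ours):

```python
from itertools import product

def fmt(b):
    """Render a Python boolean as 'T' or 'F'."""
    return 'T' if b else 'F'

# Enumerate every valuation of 'P' and 'Q' and compute 'P∧Q' for each row.
print('P Q P∧Q')
for p, q in product((True, False), repeat=2):
    print(fmt(p), fmt(q), fmt(p and q))
```

The four printed rows agree with the completed table above: ‘T’ only where both conjuncts are true.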
(P∨¬P)→Q
Next, notice that the propositional letters that compose ‘(P∨¬P)→Q’ are ‘P’ and
‘Q.’ So, write ‘P’ in the uppermost left box and ‘Q’ to the right of ‘P.’
P Q (P∨¬P)→Q
Next, we need to consider all possible combinations of truth and falsity for the
compound expression. Start by writing ‘T,’ ‘T,’ ‘F,’ ‘F’ under the leftmost ‘P.’ Now
alternate ‘T,’ ‘F,’ ‘T,’ ‘F’ under ‘Q.’
P Q (P∨¬P)→Q
T T
T F
F T
F F
Now that the truth table is set up, the procedure for computing the truth value of
the complex propositional form is essentially the same as computing the truth value
for complex propositions. We can follow the same two-step procedure we followed
earlier:
(1) Write the appropriate truth value underneath each propositional letter.
(2) Starting with the truth-functional operator with the least scope and proceeding
to the truth-functional operator with the most scope, use the appropriate truth-
functional rule to determine the truth value of the complex proposition.
Thus, first look at the truth value for ‘P’ in row 1. It is ‘T.’ Now move right across
the row, inserting ‘T’ wherever there is a ‘P.’ Do the same for rows 2, 3, and 4.
P Q (P ∨ ¬ P) → Q
1. T T T T
2. T F T T
3. F T F F
4. F F F F
Now do this for ‘Q’ using the ‘Ts’ and ‘Fs’ that occur below it.
P Q (P ∨ ¬ P) → Q
1. T T T T T
2. T F T T F
3. F T F F T
4. F F F F F
Now that all of the truth values have been transferred from the left side of the table,
the next step is to determine the truth value of the more complex propositions within
the table. In order to do this, assign truth values to propositions that have the least
scope, moving to the expression that has the most scope (i.e., the main operator). Since
the main operator of ‘(P∨¬P)→Q’ is ‘→,’ the proposition has a conditional form. The
operator with the least scope is ‘¬.’ Thus, start with negation by writing the appropri-
ate truth value below the negation (¬).
P Q (P ∨ ¬ P) → Q
1. T T T F T T
2. T F T F T F
3. F T F T F T
4. F F F T F F
Second, the wedge (∨) has the next-least scope in ‘(P∨¬P)→Q.’ Using the truth
values under the ‘¬’ in ‘¬P’ and the truth values under the non-negated ‘P,’ determine
the truth value for the disjunction ‘(P∨¬P)’ and write the truth value under ‘∨.’
P Q (P ∨ ¬ P) → Q
1. T T T T F T T
2. T F T T F T F
3. F T F T T F T
4. F F F T T F F
The next step is to finish the table by writing the correct truth value under the main
operator of the proposition. The truth value under the main operator will determine
the truth value for the propositional form. In order to do this, look at the truth value
under the main operator in ‘(P∨¬P)’ and the truth value of ‘Q.’ One is written under
‘∨,’ the other is written under ‘Q.’
P Q (P ∨ ¬ P) → Q
1. T T T T F T T
2. T F T T F T F
3. F T F T T F T
4. F F F T T F F
1 2 3 4 5 6 7 8
Look at row 1 of the truth table. We see from column 4 that the disjunction is
‘T,’ and the truth value of ‘Q’ is ‘T.’ What does the truth table definition say about
‘v(P→Q)’ when v(P) = T and v(Q) = T? The conditional is true, so write ‘T’ under-
neath the ‘→’ in row 1 of column 7.
P Q (P ∨ ¬ P) → Q
1. T T T T F T T T
2. T F T T F T F
3. F T F T T F T
4. F F F T T F F
1 2 3 4 5 6 7 8
What does it say about 'v(P→Q)' when v(P) = T and v(Q) = F? It says v(P→Q) =
F. Write 'F' in row 2, column 7 under the '→.'
P Q (P ∨ ¬ P) → Q
1. T T T T F T T T
2. T F T T F T F F
3. F T F T T F T
4. F F F T T F F
1 2 3 4 5 6 7 8
Doing the same for rows 3 and 4 completes the column under the '→.'
P Q (P ∨ ¬ P) → Q
1. T T T T F T T T
2. T F T T F T F F
3. F T F T T F T T
4. F F F T T F F F
1 2 3 4 5 6 7 8
This is a complete truth table. The truth value in the column under the main operator
of ‘(P∨¬P)→Q’ indicates the truth value of the proposition given a specific valuation
of its atomic parts.
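For readers who know a little programming, the two-step procedure above is mechanical enough to sketch in a few lines of Python. The code and names below are illustrative, not part of the text; truth values are Python booleans, and 'A → B' is computed as '(not A) or B.'

```python
from itertools import product

def truth_table(expr, letters):
    """One row per valuation; returns (valuation tuple, main-column value)."""
    rows = []
    for vals in product([True, False], repeat=len(letters)):
        valuation = dict(zip(letters, vals))   # step 1: assign values to letters
        rows.append((vals, expr(valuation)))   # step 2: compute the main operator
    return rows

# (P ∨ ¬P) → Q, with '→' written as 'not antecedent, or consequent'
prop = lambda v: (not (v['P'] or not v['P'])) or v['Q']
for vals, result in truth_table(prop, ['P', 'Q']):
    print(vals, result)
# main column: True, False, True, False — matching rows 1–4 of the table above
```

The row order produced by `product([True, False], ...)` matches the book's convention of alternating 'T' and 'F' fastest under the rightmost letter.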
Solutions to Starred Exercises in Exercise Set #2
A.
3. * (P↔Q)∧(P→¬Q)
P Q (P ↔ Q) ∧ (P → ¬ Q)
T T T T T F T F F T
T F T F F F T T T F
F T F F T F F T F T
F F F T F T F T T F
5. * (P↔Q)∧(P↔¬Q)
P Q (P ↔ Q) ∧ (P ↔ ¬ Q)
T T T T T F T F F T
T F T F F F T T T F
F T F F T F F T F T
F F F T F F F F T F
7. * (P→Q)→(P∨¬Q)
P Q (P → Q) → (P ∨ ¬ Q)
T T T T T T T T F T
T F T F F T T T T F
F T F T T F F F F T
F F F T F T F T T F
9. * ¬¬(¬P∧¬¬Q)
P Q ¬ ¬ (¬ P ∧ ¬ ¬ Q)
T T F T F T F T F T
T F F T F T F F T F
F T T F T F T T F T
F F F T T F F F T F
11. * [(P→Q)∧(R→P)]∧¬(R→Q)
P Q R [(P → Q) ∧ (R → P)] ∧ ¬ (R → Q)
T T T T T T T T T T F F T T T
T T F T T T T F T T F F F T T
T F T T F F F T T T F T T F F
T F F T F F F F T T F F F T F
F T T F T T F T F F F F T T T
F T F F T T T F T F F F F T T
F F T F T F F T F F F T T F F
F F F F T F T F T F F F F T F
13. * [(P∧Q)∧(R∧P)]∧¬(R∧Q)
P Q R [(P ∧ Q) ∧ (R ∧ P)] ∧ ¬ (R ∧ Q)
T T T T T T T T T T F F T T T
T T F T T T F F F T F T F F T
T F T T F F F T T T F T T F F
T F F T F F F F F T F T F F F
F T T F F T F T F F F F T T T
F T F F F T F F F F F T F F T
F F T F F F F T F F F T T F F
F F F F F F F F F F F T F F F
3.3 Truth Table Analysis of Propositions
The precise syntax of PL, the truth table definitions, and scope indicators allow for the use
of a decision procedure. A decision procedure is a mechanical method that determines
in a finite number of steps whether a proposition, set of propositions, or argument has
a certain logical property.
3.3.1 Tautology
In rough terms, a tautology is a proposition that is always true. Sentences like ‘John is
tall or not tall,’ ‘Mary is the murderer or she isn’t,’ or ‘It is what it is’ are all examples
of tautologies. More precisely, a proposition ‘P’ is a tautology if and only if (iff) ‘P’
is true under every valuation. That is, a proposition ‘P’ is a tautology if and only if
it is true no matter how we assign truth values to the atomic letters that compose it.
Using a completed truth table, we can determine whether or not a proposition is a
tautology simply by looking to see whether there are only ‘Ts’ under the proposition’s
main operator (or in the case of no operators, under the propositional letter).
Tautology A proposition ‘P’ is a tautology if and only if ‘P’ is true under every
valuation. A truth table for a tautology will have all ‘Ts’ under its main
operator (or in the case of no operators, under the propositional letter).
P Q P → (Q → P)
T T T T T T T
T F T T F T T
F T F T T F F
F F F T F T F
Notice that there is a single line of ‘Ts’ under the main operator in the above table.
This means that under every combination of valuations of ‘P’ and ‘Q,’ the complex
proposition ‘P→(Q→P)’ is true, and therefore ‘P→(Q→P)’ is a tautology.
As a second example, consider the following truth table for ‘P∧(Q→P)’:
P Q P ∧ (Q → P)
T T T T T T T
T F T T F T T
F T F F T F F
F F F F F T F
Notice that under the main operator of ‘P∧(Q→P),’ there are two ‘Fs,’ indicating
that under some valuation, ‘P∧(Q→P)’ is false. Given our definition that a propo-
sition is a tautology if and only if it is true under every valuation, the proposition
‘P∧(Q→P)’ is not a tautology.
3.3.2 Contradiction
In rough terms, a contradiction is a proposition that is always false. Sentences like
‘John is tall and not tall,’ ‘Mary is the murderer, and she isn’t,’ or ‘A is not A’ are
examples of contradictions. More precisely, a proposition ‘P’ is a contradiction if
and only if ‘P’ is false under every valuation. That is, a proposition ‘P’ is a contra-
diction if and only if it is false no matter how we assign truth values to the propo-
sitional letters that compose it.
Using a completed truth table, we can determine whether or not a proposition is a
contradiction simply by looking to see whether there are only ‘Fs’ under the proposi-
tion’s main operator (or in the case of no operators, under the propositional letter).
P Q ¬ P ∧ (Q ∧ P)
T T F T F T T T
T F F T F F F T
F T T F F T F F
F F T F F F F F
Notice that there is a single line of ‘Fs’ under the main operator in the above table.
This means that under every combination of valuations of ‘P’ and ‘Q,’ the complex
proposition ‘¬P∧(Q∧P)’ is false, and therefore ‘¬P∧(Q∧P)’ is a contradiction.
As a second example, consider the truth table for ‘P∧(Q→P).’
P Q P ∧ (Q → P)
T T T T T T T
T F T T F T T
F T F F T F F
F F F F F T F
Notice that under the main operator of ‘P∧(Q→P),’ there are two ‘Ts,’ indicating
that under some valuation, ‘P∧(Q→P)’ is true. Given our definition that a proposition
is a contradiction if and only if it is false under every valuation, the above truth table
shows that proposition ‘P∧(Q→P)’ is not a contradiction.
3.3.3 Contingency
In the previous two sections, a tautology was defined as a proposition that is true
under every valuation, while a contradiction was defined as a proposition that is false
under every valuation. This leaves one final case. In rough terms, a contingency is a
proposition whose truth value depends on how it is valuated. Sentences like ‘John is
tall,’ ‘Mary is the murderer,’ and ‘Politicians are trustworthy’ are all examples of con-
tingencies. More precisely, a contingency is a proposition ‘P’ that is neither always
true (a tautology) nor always false (a contradiction) under every valuation.
Using a completed truth table, we can determine whether or not a proposition is a
contingency simply by looking to see whether there is at least one ‘F’ and at least one
‘T’ under the proposition’s main operator (or in the case of no operators, under the
propositional letter).
P P
T T
F F
Notice that the truth table for ‘P’ has one ‘F’ under it and one ‘T’ under it. Thus, it
is a contingency since it is neither always true nor always false.
Next, consider the truth table for ‘P→Q.’
P Q P → Q
T T T T T
T F T F F
F T F T T
F F F T F
Notice that ‘P→Q’ is neither true under every valuation nor false under every valu-
ation. That is, ‘P→Q’ is neither always true nor always false. This is evident from
the fact that ‘P→Q’ is true when ‘P’ is true and ‘Q’ is true (as row 1 indicates), and
‘P→Q’ is false when ‘P’ is true and ‘Q’ is false (as row 2 indicates). Thus, ‘P→Q’ is
a contingency.
Finally, consider the truth table for ‘P∧(Q→P).’
P Q P ∧ (Q → P)
T T T T T T T
T F T T F T T
F T F F T F F
F F F F F T F
We considered the above proposition and its corresponding truth table earlier when
discussing contradictions and tautologies. In asking whether the proposition was a
tautology, we used a truth table of this proposition to show that ‘P∧(Q→P)’ is not a
tautology. In asking whether the proposition was a contradiction, we used a truth table
of this proposition to show that ‘P∧(Q→P)’ is not a contradiction.
Notice that under the main operator of ‘P∧(Q→P),’ there are two ‘Ts,’ indicating
that under some valuation ‘P∧(Q→P)’ is true. In addition, notice that under the main
operator of ‘P∧(Q→P),’ there are two ‘Fs,’ indicating that under some valuation
‘P∧(Q→P)’ is false. Thus, we know that ‘P∧(Q→P)’ is neither always true nor always
false, and therefore it is a contingency.
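The three categories partition all propositions, so the classification can be read off the main-operator column in one pass. This Python sketch is illustrative only; it uses the section's own examples.

```python
from itertools import product

def classify(expr, letters):
    """Read the main-operator column: all T, all F, or a mix."""
    column = [expr(dict(zip(letters, vals)))
              for vals in product([True, False], repeat=len(letters))]
    if all(column):
        return 'tautology'
    if not any(column):
        return 'contradiction'
    return 'contingency'

impl = lambda a, b: (not a) or b
print(classify(lambda v: v['P'] and impl(v['Q'], v['P']), ['P', 'Q']))      # contingency
print(classify(lambda v: not v['P'] and (v['Q'] and v['P']), ['P', 'Q']))   # contradiction
print(classify(lambda v: impl(v['P'], impl(v['Q'], v['P'])), ['P', 'Q']))   # tautology
```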
Exercise Set #3
A. Construct truth tables for the following propositions, then explain whether the
proposition is a tautology, contradiction, or contingency.
1. * R→¬R
2. (P→Q)∧(¬Q→¬P)
3. * (¬P∧¬Q)∧P
4. P∨¬(¬M∨M)
5. * (R∧R)∧R
6. (P→W)∧(P↔W)
7. ¬(P∧W)∧¬(¬P∨W)
8. ¬(M∨W)∧Q
9. Q→Q
10. (R∨R)∨R
Solutions to Starred Exercises in Exercise Set #3
A.
1. * R→¬R; contingent.
R R → ¬ R
T T F F T
F F T T F
3. * (¬P∧¬Q)∧P; contradiction.
P Q (¬ P ∧ ¬ Q) ∧ P
T T F T F F T F T
T F F T F T F F T
F T T F F F T F F
F F T F T T F F F
5. * (R∧R)∧R; contingency
R (R ∧ R) ∧ R
T T T T T T
F F F F F F
3.4 Truth Table Analysis of Sets of Propositions
A complete truth table allows for analysis of various properties of propositions, sets
of propositions, and arguments. In this section, we consider how to analyze sets of
propositions for logical equivalence and consistency with truth tables.
3.4.1 Equivalence
In rough terms, two propositions are logically equivalent if and only if they always
have the same truth value. For example, the proposition expressed by John is a mar-
ried man will always have the same truth value as the proposition expressed by John is
not an unmarried man. More precisely, a pair of propositions ‘P’ and ‘Q’ is logically
equivalent if and only if ‘P’ and ‘Q’ have identical truth values under every valuation.
That is, ‘P’ and ‘Q’ are equivalent if and only if, no matter how we assign truth values
to the propositional letters that compose them, there will never be a case where ‘P’ is
true and ‘Q’ is false or a case where ‘P’ is false and ‘Q’ is true.
Using a completed truth table, we can determine whether or not a pair of proposi-
tions 'P' and 'Q' is logically equivalent by looking to see whether there is a row of
the truth table where 'P' has a different truth value than 'Q.'
A truth table provides a decision procedure for showing logical equivalence by al-
lowing for a comparison of truth values at each row of a truth table. For example, take
the following two propositions: ‘P→Q’ and ‘Q∨¬P.’ Start by putting both of these
expressions into a truth table.
P Q (P → Q) (Q ∨ ¬ P)
T T T T T T T F T
T F T F F F F F T
F T F T T T T T F
F F F T F F T T F
The truth table above shows that ‘P→Q’ is logically equivalent to ‘Q∨¬P’ be-
cause whenever v(P→Q) = T, then v(Q∨¬P) = T, and whenever v(P→Q) = F, then
v(Q∨¬P) = F.
Notice that the truth values of ‘P→Q’ and ‘Q∨¬P’ do not differ, and so ‘P→Q’ and
‘Q∨¬P’ are logically equivalent. However, now consider the truth table where these
two propositions are joined together using the double arrow (↔).
P Q (P → Q) ↔ (Q ∨ ¬ P)
T T T T T T T T F T
T F T F F T F F F T
F T F T T T T T T F
F F F T F T F T T F
Notice that a biconditional is true if and only if each side of the biconditional has
the same truth value. And notice that in the case of logically equivalent propositions,
like those of ‘P→Q’ and ‘Q∨¬P,’ the truth values are the same for every row of the
truth table. Since they have the same truth value for every row of the truth table,
‘(P→Q)↔(Q∨¬P)’ determines a tautology.
Thus, we can determine whether two propositions are logically equivalent by
placing the double arrow between these propositions and testing to see whether the
proposition is a tautology. If the proposition is a tautology, then the propositions are
logically equivalent. If the proposition is not a tautology, then the propositions are not
logically equivalent.
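The biconditional test above amounts to checking that the two truth functions agree on every row. A minimal Python sketch (my own illustration) of this equivalence test:

```python
from itertools import product

def valuations(letters):
    """Generate every assignment of True/False to the given letters."""
    for vals in product([True, False], repeat=len(letters)):
        yield dict(zip(letters, vals))

def equivalent(p, q, letters):
    """'P ↔ Q' is a tautology exactly when P and Q agree on every row."""
    return all(p(v) == q(v) for v in valuations(letters))

impl = lambda a, b: (not a) or b
p = lambda v: impl(v['P'], v['Q'])          # P → Q
q = lambda v: v['Q'] or not v['P']          # Q ∨ ¬P
print(equivalent(p, q, ['P', 'Q']))         # True
```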
3.4.2 Consistency
In rough terms, a set of propositions is logically consistent if and only if it is logi-
cally possible for all of them to be true. For example, the following propositions
can all be true:
John is a bachelor.
Frank is a bachelor.
Vic is a bachelor.
To illustrate the truth table test for consistency, consider the propositions 'P→Q,'
'Q∨P,' and 'P↔Q.'
P Q (P → Q) (Q ∨ P) (P ↔ Q)
1. T T T T T T T T T T T
2. T F T F F F T T T F F
3. F T F T T T T F F F T
4. F F F T F F F F F T F
To determine whether the above three propositions are logically consistent, check
whether there is at least one row where a 'T' is located under the main operator of each
proposition. A 'T' is not located under the main operator of every proposition in rows
2, 3, and 4, but it is in row 1. Therefore, 'P→Q,' 'Q∨P,' and 'P↔Q'
are logically consistent.
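The consistency test, too, is a row search. The following Python sketch is illustrative (the names are mine): a set of propositions is consistent just in case some valuation makes all of them true at once.

```python
from itertools import product

def consistent(props, letters):
    """Consistent iff at least one valuation makes every proposition true."""
    for vals in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, vals))
        if all(p(v) for p in props):
            return True
    return False

impl = lambda a, b: (not a) or b
iff = lambda a, b: a == b                    # truth function for '↔'
props = [lambda v: impl(v['P'], v['Q']),     # P → Q
         lambda v: v['Q'] or v['P'],         # Q ∨ P
         lambda v: iff(v['P'], v['Q'])]      # P ↔ Q
print(consistent(props, ['P', 'Q']))         # True: row 1 makes all three true
```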
Remember that we are concerned with logical properties of propositions. A set of
propositions may be materially or factually inconsistent yet remain logically consis-
tent. For example, consider a case where empirical science informed us that v(P→Q)
= F and v(Q∨P) = T. If this happened, then ‘P→Q’ and ‘Q∨P’ would form a factu-
ally inconsistent set since they are not both true. However, at the level of logical or
semantic analysis, they could both be true since it could be the case that v(P→Q) = T
and v(Q∨P) = T.
Next, consider the pair 'P∨Q' and '¬(Q∨P).'
P Q (P ∨ Q) ¬ (Q ∨ P)
1. T T T T T F T T T
2. T F T T F F F T T
3. F T F T T F T T F
4. F F F F F T F F F
The above truth table shows that there is no row where both propositions are true
under the same truth valuations. Therefore, these two propositions are inconsistent.
When an inconsistent set of propositions is conjoined to form a conjunction, a
contradiction is formed. That is, given two separate yet inconsistent propositions, the
conjunction of these two propositions forms a contradiction.
P Q (P ∨ Q) ∧ ¬ (Q ∨ P)
1. T T T T T F F T T T
2. T F T T F F F F T T
3. F T F T T F F T T F
4. F F F F F F T F F F
Notice that there are only 'Fs' under the main operator '∧,' so the conjunction
'(P∨Q)∧¬(Q∨P)' is a contradiction.
3.5 The Material Conditional Explained (Optional)
In chapter 2, it was noted that there are difficulties assimilating the truth function
associated with '→' to the use of if . . ., then . . . in English. It was briefly explained how
various uses of the English if . . ., then . . . do not correspond to the truth-functional
‘→’ (e.g., causal statements). No justification was given for why ‘→’ should receive
the following evaluation:
P Q P→Q
T T T
T F F
F T T
F F T
With a better understanding of truth tables and logical properties defined in terms
of truth-functionality, a more compelling case can be made for why ‘if P then Q’
corresponds to ‘P→Q.’ There are two main strategies for justifying the claim that
truth-functional uses of if . . ., then . . . in English correspond with the truth function
associated with the arrow.
The first strategy involves considering all of the possible truth functions and elimi-
nating those that don’t correspond to an intuitive understanding of what conditionals
mean. The other involves considering certain intuitions about valid inference.
First, consider all of the possible truth functions.
P Q
T T T T T F T T T F
T F T F T T F T T F
F T F T T T T F T T
F F F F T T T T F T
1 2 3 4 5 6 7 8
P Q
T T T F F F F F T F
T F F T T F T F F F
F T F T F T F F F F
F F T F T F F T F F
9 10 11 12 13 14 15 16
T T T T T T
T F T F T T
F T T T F T
F F T T T F
1 2 3 5 6 7
T T
F F
F F
T F
9 15
Consider (2). Generally, we think that ‘P→Q’ and truth-functional uses of ‘if P then
Q’ say something more than simply ‘Q.’ That is, If John is in Toronto, then John is in
Canada says something more than John is in Canada. Thus, (2) should be eliminated.
Consider (9). The truth function in (9) can be expressed by the biconditional ‘P↔Q.’
However, there is a difference between biconditionals and conditionals. In the case
of the former, ‘P↔Q’ is logically equivalent to ‘Q↔P.’ However, this is not the case
with conditionals. For example, consider the following conditional:
(1) If John is in Toronto, then John is in Canada.
(2) If John is in Canada, then John is in Toronto.
Proposition (1) is not logically equivalent to proposition (2)
because (1) can be true while (2) is false. Namely, (2) is false provided John is in
Canada but not in Toronto. This allows us to eliminate (9).
Finally, consider (15). The truth function described in (15) can be represented by the
conjunction ‘P∧Q.’ If if . . ., then . . . statements are represented by the ‘∧’ function, then
every truth-functional use of ‘if P then Q’ can be replaced by a statement of the form ‘P
and Q.’ However, this does not seem to be the case. Consider the following propositions:
Clearly, (3) and (4) do not say the same thing. (4) is true if and only if John is both
in Toronto and in Canada. However, we think that (3) is true if John is in Canada but
not Toronto, e.g., if he were in Vancouver. Thus, the only truth function that repre-
sents truth-functional uses of if . . ., then . . . is represented in (5), and this column
corresponds to the truth function expressed by ‘→.’
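The elimination argument can be checked mechanically. The Python sketch below is my own illustration: it enumerates all sixteen candidate columns and keeps only those that fit the intuitions defended here and in the next paragraphs — true when 'P' and 'Q' are both true, false when 'P' is true and 'Q' is false, and true when the antecedent is false.

```python
from itertools import product

# Each candidate truth function for 'if P then Q' is a column of four truth
# values, one for each row: TT, TF, FT, FF.
candidates = list(product([True, False], repeat=4))      # all sixteen columns

survivors = [c for c in candidates
             if c[0]                 # true when P and Q are both true
             and not c[1]            # false when P is true and Q is false
             and c[2] and c[3]]      # true when the antecedent 'P' is false

print(survivors)   # [(True, False, True, True)] — exactly the column for '→'
```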
A second justification for why truth-functional uses of if . . ., then . . . correspond to
the ‘→’ function depends upon our understanding of logical properties. For suppose
that instead of treating truth-functional uses of ‘if P then Q’ in terms of ‘→,’they were
defined as the following truth function ‘→*’:
P→*Q
T
F
F
F
If this were the case, then the following two propositions would be logically incon-
sistent:
(5) If John is in Toronto, then John is in Canada.
(6) John is not in Toronto.
P Q P→*Q ¬P
T T T F
T F F F
F T F T
F F F T
However, we do not regard (5) and (6) as inconsistent. For suppose that John wants
to convey to his friend Liz that Toronto is in Canada. But also assume that John and
Liz are having this conversation, not in Toronto but in Chicago. John says to Liz, "If
I'm in Toronto, then I'm in Canada." What John says is true even though he is not in
Toronto, has never been to Toronto, and never plans on going to Toronto. Even if the
antecedent of (5) is false, the conditional should be true, for to deny this is to deny
that John's being in Toronto would put him in Canada, and Toronto is in Canada!
3.6 Truth Table Analysis of Arguments
A complete truth table allows for analysis of various properties of propositions, sets of
propositions, and arguments. In this section, we consider how to analyze arguments to
determine whether they are valid or invalid.
3.6.1 Validity
In this section, we use the truth table method to determine whether an argument is
valid or invalid. An argument is valid if and only if it is impossible for the premises to
be true and the conclusion to be false. In chapter 1, the negative test was used to deter-
mine whether or not arguments were valid. This test asked you to imagine whether it
is possible for the premises to be true and the conclusion to be false. The negative test,
however, has some limitations since it depends on an individual’s psychological ca-
pacities, and this capacity is taxed when dealing with extremely long arguments. Truth
tables provide an easier, more reliable, and purely mechanical method for determining
validity. Using a truth table, an argument is valid in PL if and only if there is no row of
the truth table where the premises are true and the conclusion is false. If there is a row
where the premises are true and the conclusion is false, then the argument is invalid.
Before providing some examples of truth tables that illustrate arguments that are valid
(or invalid), it will be helpful to introduce an additional symbol to represent arguments.
This symbol is the single turnstile (⊢). Although later in this text, the turnstile will stand
for something more specific, temporarily we use it to indicate the presence of an argu-
ment: the propositions to the left of the turnstile are the premises, while the proposition
to the right of the turnstile is the conclusion. For example, the turnstile in
P∧R ⊢ R
indicates the presence of an argument where ‘P∧R’ is the premise and ‘R’ is the con-
clusion. Likewise, the turnstile in
indicates the presence of an argument where ‘P∧R,’ ‘Z→Z,’ and ‘¬(P∧Q)’ are prem-
ises, and ‘R’ is the conclusion. Lastly, the turnstile in
⊢ R
indicates the presence of an argument that has no premises but has ‘R’ as a conclusion.
To illustrate, consider the following truth table for the following argument: ‘P→Q,
¬Q ¬P.’
P Q (P → Q) ¬ Q ¬ P
T T T T T F T F T
T F T F F T F F T
F T F T T F T T F
F F F T F T F T F
Notice that there is no row where the premises ‘P→Q’ and ‘¬Q’ are true and ‘¬P’
is false. Even though there is a row (the bottom row) where ‘P→Q’ and ‘¬Q’ are
jointly true, this is not a row where '¬P' is also false. Thus, 'P→Q, ¬Q ⊢ ¬P' is valid.
Next, consider the argument 'P→Q, Q ⊢ P' and its truth table.
P Q (P → Q) Q P
T T T T T T T
T F T F F F T
F T F T T T F
F F F T F F F
Notice that in the truth table above, there is a row where the premises are true and
the conclusion is false. This is row 3. Thus, 'P→Q, Q ⊢ P' is invalid.
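The validity test is again a mechanical search over rows. A minimal Python sketch (my own illustration, not the author's notation), run on the two arguments just discussed:

```python
from itertools import product

def is_valid(premises, conclusion, letters):
    """Valid iff no row makes every premise true and the conclusion false."""
    for vals in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False                      # found a counterexample row
    return True

impl = lambda a, b: (not a) or b
# 'P→Q, ¬Q ⊢ ¬P' (valid) versus 'P→Q, Q ⊢ P' (invalid)
print(is_valid([lambda v: impl(v['P'], v['Q']), lambda v: not v['Q']],
               lambda v: not v['P'], ['P', 'Q']))    # True
print(is_valid([lambda v: impl(v['P'], v['Q']), lambda v: v['Q']],
               lambda v: v['P'], ['P', 'Q']))        # False
```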
Finally, consider the argument 'P→Q, P ⊢ Q.'
Premises Conclusion
P Q P → Q P Q
T T T T T T T
T F T F F T F
F T F T T F T
F F F T F F F
Notice that in considering whether or not 'P→Q, P ⊢ Q' is valid, what is being ana-
lyzed is whether the following truth-value assignment is possible:
v(P→Q) = T, v(P) = T, v(Q) = F
If it is not possible, then 'P→Q, P ⊢ Q' is valid. If it is possible, then 'P→Q, P ⊢ Q'
is invalid. However, notice that if v(Q) = F, then v(¬Q) = T. This suggests another
way of determining validity, that is, determining whether the following truth-value
assignment is possible:
v(P→Q) = T, v(P) = T, v(¬Q) = T
That is, another way of checking for validity is by asking the following question:
Is it possible for both the premises and the negation of the conclusion to be true?
If the answer to this question is no, then the argument is valid. If the answer is yes,
then the argument is invalid.
An equivalent way of asking the same question is the following:
Are the premises and the negation of the conclusion logically consistent (i.e., all
true under the same truth-value assignment)?
If the premises and the negation of the conclusion are not logically consistent, then
the argument is valid. If the premises and the negation of the conclusion are logically
consistent, then the argument is invalid.
To see this more clearly, consider again the valid argument 'P→Q, P ⊢ Q,' but this
time determine whether the argument is valid by determining whether '{P→Q, P, ¬Q}'
is inconsistent.
P Q (P → Q) P ¬ Q
T T T T T T F T
T F T F F T T F
F T F T T F F T
F F F T F F T F
Notice that in the above table, there is no line on the truth table where ‘P→Q,’ ‘P,’
and ‘¬Q’ are all true. That is, ‘{P→Q, P, ¬Q}’ is inconsistent. In saying that ‘{P→Q,
P, ¬Q}’ is inconsistent, we are saying that it is impossible for the premises ‘P→Q’
and ‘P’ to be true and the negation of the conclusion ‘¬Q’ to be true. Thus, ‘P→Q,
P Q’ is valid.
To consider this more generally, compare the definitions for validity and inconsistency.
An argument is valid if and only if it is impossible for the premises 'P,' 'Q,' . . ., 'Y'
to be true and the conclusion 'Z' to be false. This is just another way of saying that
an argument is valid if and only if it is impossible for the propositions '{P, Q, . . ., Y,
¬Z}' all to be true. Notice, however, that if it is impossible for the propositions '{P, Q,
. . ., Y, ¬Z}' to all be true, then the propositions '{P, Q, . . ., Y, ¬Z}' are inconsistent.
Thus, validity can be defined in terms of inconsistency.
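The equivalence of the two definitions can be put directly into code. In this Python sketch (illustrative names, not the author's), validity is tested by checking that the premises together with the negated conclusion form an inconsistent set:

```python
from itertools import product

def consistent(props, letters):
    """Consistent iff some valuation makes every proposition true."""
    return any(all(p(dict(zip(letters, vals))) for p in props)
               for vals in product([True, False], repeat=len(letters)))

def is_valid(premises, conclusion, letters):
    # valid iff premises plus the negated conclusion form an inconsistent set
    return not consistent(premises + [lambda v: not conclusion(v)], letters)

impl = lambda a, b: (not a) or b
print(is_valid([lambda v: impl(v['P'], v['Q']), lambda v: v['P']],
               lambda v: v['Q'], ['P', 'Q']))   # True: {P→Q, P, ¬Q} is inconsistent
```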
3.7 Short Truth Table Test for Invalidity
To illustrate, consider the argument 'P→Q, R∧¬Q ⊢ Q.' Begin by assuming that the
argument is invalid: write 'T' under the main operator of each premise and 'F' under
the conclusion.
P Q R (P → Q) (R ∧ ¬ Q) Q
T T F
Next, we work backward from our knowledge of the truth-functional definition and
the truth values assigned to the complex propositions. For example, we know that if
‘R∧¬Q’ is true, then both of the conjuncts ‘R’ and ‘¬Q’ are true.
P Q R (P → Q) (R ∧ ¬ Q) Q
T T T T F
And since '¬Q' is true, 'Q' must be false:
P Q R (P → Q) (R ∧ ¬ Q) Q
F T T T T F F
Using this forcing strategy, we have determined that ‘Q’ is false and ‘R’ is true, and
so we can assign all other ‘Rs’ the value of true and ‘Qs’ the value of false.
P Q R (P → Q) (R ∧ ¬ Q) Q
F T T F T T T F F
Finally, we know that for ‘P→Q’ to be true and ‘Q’ to be false, ‘P’ must be false.
Thus,
P Q R (P → Q) (R ∧ ¬ Q) Q
F F T F T F T T T F F
And so, using the forcing method, we have shown that 'P→Q, R∧¬Q ⊢ Q' is invalid.
This method began by assuming the argument was invalid (i.e., the premises were
assumed to be true and the conclusion to be false). From there, we worked backward
using the truth values and the truth-table definitions to obtain a consistent assignment
of valuations to the propositional letters.
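A brute-force counterpart of the forcing method is to search the rows for a valuation that makes the premises true and the conclusion false. This Python sketch is my own illustration, applied to the same argument:

```python
from itertools import product

def counterexample(premises, conclusion, letters):
    """Return a valuation making the premises true and the conclusion false, if any."""
    for vals in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return v
    return None

impl = lambda a, b: (not a) or b
prems = [lambda v: impl(v['P'], v['Q']),        # P → Q
         lambda v: v['R'] and not v['Q']]       # R ∧ ¬Q
print(counterexample(prems, lambda v: v['Q'], ['P', 'Q', 'R']))
# {'P': False, 'Q': False, 'R': True} — the same valuation the forcing method found
```

Unlike forcing, this search examines up to 2^n rows, but for three letters that is only eight.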
End-of-Chapter Exercises
A. Construct a truth table for the following propositions. Determine whether they are
contradictions, tautologies, or contingencies.
1. * A→(B→A)
2. A∧¬A
3. * ¬(A∧¬A)
4. ¬(R∨¬R)
5. P→[P∧(¬P→R)]
6. P↔(Q∨¬P)
7. Q↔(Q∨P)
8. R∧[(S∨¬T)↔(R∧T)]
9. Q↔¬¬Q
10. P∨¬(Q→¬P)
B. Construct a truth table for the following sets of propositions. Determine whether
the set is consistent or inconsistent.
1. * A↔C, ¬C∨¬A
2. ¬(A∧¬A), A∨¬A
3. * P∨S, S∨P
4. P→(R∨¬R), (R∨¬R)→P
5. P↔R, ¬R↔¬P, (P→R)∧(R→P)
6. P, ¬¬P
7. P, (P∨Q)∨P
8. P→Q, ¬P∨¬Q, P
9. P↔Q, ¬P↔¬Q
10. P→Q, Q→P, ¬P→Q
C. Construct a truth table for the following pairs of propositions. Determine whether
the pairs are equivalent.
1. * (A↔B), (A→B)∧(B→A)
2. ¬(A∧¬A), A∨¬A
3. * (P→R)∧(R→P), P↔R
4. P→(S∧M), ¬P∨(S∧M)
5. ¬P→¬R, R∧(¬P∧R)
6. P, ¬¬P
7. R, ¬(R→S)→S
8. R∨¬P, P→R
9. P→Q, ¬Q→¬P
10. S∧T, T∧S, ¬T∨¬S
D. Construct a truth table for the following arguments. Determine whether the argu-
ment is valid or invalid.
1. * A ⊢ A
2. (A∧B)∧¬A ⊢ B
3. (¬M∨¬P)→M, ¬M∨¬P ⊢ M
4. * P→M, ¬P ⊢ ¬M
5. A∨S, ¬S ⊢ A
6. P↔M ⊢ (P→M)∧(M→P)
7. * A∨B ⊢ ¬(B∨¬B)
8. P, P∨¬Q ⊢ P∧¬Q
9. ¬(P∧Q) ⊢ ¬P∨¬Q
10. ¬(P∧Q) ⊢ ¬P∧¬Q
E. Determine the truth value for each of the following propositions where v(P) = T,
v(W) = F, and v(Q) = T. Note that the truth value of some propositions can be
determined even if their truth-value input is not known.
1. * P∧¬W
2. (P→W)→¬W
3. * (P∨W)↔¬Q
4. P∨¬R
5. * ¬(P∨R)↔(¬P∧¬R)
6. ¬(P→Q)→W
7. * ¬S∨S
8. ¬(S∧¬S)
9. * W→W
10. (P∧¬P)→W
F. The main purpose of truth tables is their use as a decision procedure. A deci-
sion procedure is a mechanical test that can be used to determine whether a
proposition, set of propositions, or argument has or does not have a certain logical
property. Thus, provided we can translate an English sentence into the language
of propositional logic, we don't have to think about whether a proposition is a
tautology, a contradiction, or a contingency; we can simply compute its truth table.
Solutions to Starred Exercises
A.
1. * A→(B→A); tautology.
A B A → (B → A)
T T T T T T T
T F T T F T T
F T F T T F F
F F F T F T F
3. * ¬(A∧¬A); tautology.
A ¬ (A ∧ ¬ A)
T T T F F T
F T F F T F
B.
1. * A↔C, ¬C∨¬A; consistent.
A C A ↔ C ¬ C ∨ ¬ A
T T T T T F T F F T
T F T F F T F T F T
F T F F T F T T T F
F F F T F T F T T F
C.
1. * (A↔B), [(A→B)∧(B→A)]; equivalent.
A B A ↔ B (A → B) ∧ (B → A)
T T T T T T T T T T T T
T F T F F T F F F F T T
F T F F T F T T F T F F
F F F T F F T F T F T F
D.
1. * A ⊢ A; valid.
A A
T T
F F
E.
1. * v(P∧¬W) = T
3. * v[(P∨W)↔¬Q] = F
5. * v[¬(P∨R)↔(¬P∧¬R)] = T
7. * v(¬S∨S) = T
9. * v(W→W) = T
Definitions
Tautology A proposition ‘P’ is a tautology if and only if ‘P’ is true under every
valuation. A truth table for a tautology will have all ‘Ts’ under its main
operator (or in the case of no operators, under the propositional letter).
Contradiction A proposition ‘P’ is a contradiction if and only if ‘P’ is false under
every valuation. A truth table for a contradiction will have all ‘Fs’
under its main operator (or in the case of no operators, under the
propositional letter).
Contingency A proposition ‘P’ is a contingency if and only if ‘P’ is neither always
false under every valuation nor always true under every valuation. A
truth table for a contingency will have at least one ‘T’ and at least
one ‘F’ under its main operator (or in the case of no operators, under
the propositional letter).
Equivalence A pair of propositions ‘P’ and ‘Q’ is equivalent if and only if ‘P’ and
‘Q’ have identical truth values under every valuation. In a truth table
for an equivalence, there is no row on the truth table where one of the
pair ‘P’ has a different truth value than the other ‘Q.’
Consistency A set of propositions '{P, Q, R, . . ., Z}' is logically consistent if
and only if there is at least one valuation where 'P,' 'Q,' 'R,' . . ., 'Z'
are true. A truth table shows that a set of propositions is consistent
when there is at least one row on the truth table where 'P,' 'Q,' 'R,'
. . ., 'Z' are all true.
Inconsistency A set of propositions '{P, Q, R, . . ., Z}' is logically inconsistent if
and only if there is no valuation where 'P,' 'Q,' 'R,' . . ., 'Z' are jointly
true. A truth table shows that a set of propositions is inconsistent
when there is no row on the truth table where 'P,' 'Q,' 'R,' . . ., 'Z' are
all true.
Validity An argument 'P, Q, . . ., Y ⊢ Z' is valid in PL if and only if it is impos-
sible for the premises to be true and the conclusion false. A truth table
shows that an argument is valid if and only if there is no row of the
truth table where the premises are true and the conclusion is false.
Invalidity An argument 'P, Q, . . ., Y ⊢ Z' is invalid in PL if and only if it is
possible for the premises to be true and the conclusion false. A truth
table shows that an argument is invalid if and only if there is a row of
the truth table where the premises are true and the conclusion is false.
Note
1. The number of rows you need to construct is determined by the number of variables. If
you have one variable (e.g., 'P'), then you will only need two rows, one for 'T' and one for
'F.' If you have two variables, you will need four rows. If you have three variables, you'll need
eight. You will not need to construct tables with more than three variables, but the general
expression is 2^n, where n equals the number of variables. For example, where n = 30 variables,
there will be 1,073,741,824 rows.
Truth Trees
The major goal of this chapter is to introduce you to the truth-tree decision proce-
dure. In the previous chapter, truth tables were employed to mechanically determine
various logical properties of propositions, sets of propositions, and arguments. As a
decision procedure, truth tables have the advantage of giving a complete and graphical
representation of all of the possible truth-value assignments for propositions, sets of
propositions, and arguments. However, this decision procedure has the disadvantage
of becoming unmanageable in cases involving more than three distinct propositional
letters. The goal of this chapter is to introduce a more economical decision procedure
called the truth-tree method. The truth-tree method is capable of yielding the exact
same information as the truth table method. In addition, the complexity of the truth-
tree method is not a function of the number of distinct propositional letters. Thus,
whereas determining whether '(R∧¬M)∨(Q∨W)' is contingent requires producing a
complex sixteen-row truth table, the truth-tree method will prove to be much more
economical. Thus, trees provide a simpler method for testing propositions, sets of
propositions, and arguments for logical properties.
In addition, truth trees will be useful in a later chapter where a more expressive logi-
cal language is introduced. In that language, the truth-table method will be unsuitable
because predicate logic is not a truth-functional language. However, it will be possible
to make use of the truth-tree method as a partial decision procedure.
A truth tree is set out in three columns: (1) a line number for each proposition, (2) the
proposition itself, and (3) a justification for the proposition's appearing on that line.
Step 1 involves setting up the truth tree by vertically stacking and numbering each
proposition. To illustrate, consider the following two propositions:
(R∧¬M), R∧(W∧¬M)
Begin your setup by vertically stacking each proposition (one under the other),
numbering each proposition, and then justifying why the proposition appears on that line.
1 R∧¬M P
2 R∧(W∧¬M) P
The order of stacking does not matter so long as all of the propositions are stacked
and numbered, and a ‘P’ (for ‘proposition’) is written along the right-hand side to
indicate that this particular proposition is part of the set of propositions provided.
P#Q P
P Stacking rule
Q Stacking rule
P#Q P
P Q Branching rule
P#Q P
P P Branching rule
Q Q Stacking rule
Stacking rule A stacking rule is a truth-tree rule where the condition under which
a proposition ‘P’ is true is represented by stacking. A stacking
rule is applied to propositions that are true under one truth-value
assignment.
In each of the above examples, ‘P#Q’ is true only under one truth-value assign-
ment. Each of these can be represented graphically by a stacking rule. For example,
suppose the truth function represented by column 1 for ‘P#Q’ is to be decomposed.
This truth function states that ‘P#Q’ is true if and only if (iff) v(P) = T and v(Q) =
T. This can be represented graphically by writing ‘P’ and ‘Q’ directly under ‘P#Q’
in the tree.
P#Q P
P
Q
To see this more clearly, consider the above diagram in an expanded form (read the
diagram below from left to right)
Consider the truth function represented by column 2 for ‘P#Q.’ Notice that under
this valuation of ‘P#Q,’v(P#Q) = T if and only if v(P) = T and v(Q) = F. In this case,
we cannot represent the conditions under which ‘P#Q’ is true by writing
because v(P#Q) = T only if v(Q) = F. But since v(Q) = F if and only if v(¬Q) = T, we
can represent the truth function represented by column 2 for ‘P#Q’ as follows:
Thus, in order to represent column 2 for ‘P#Q,’ we stack ‘P’ and ‘¬Q’ under ‘P#Q’
as follows:
P#Q P
P
¬Q
The above tree represents that v(P#Q) = T in one and only one case. That is, v(P#Q)
= T if and only if v(P) = T and v(¬Q) = T. This procedure can be used to represent the
remaining truth functions in columns 3 and 4.
Branching occurs when the proposition being decomposed is false only under one
truth-value assignment.
Branching rule A branching rule is a truth-tree rule where the condition under
which a proposition ‘P’ is true is represented by branching. A
branching rule is applied to propositions that are false only under
one truth-value assignment.
In each of the above columns, ‘P#Q’ is false only under one truth-value assignment.
Each of these can be represented graphically by a branching rule. For example, sup-
pose the truth function represented by column 1 for ‘P#Q’ is to be decomposed. This
truth function states that ‘P#Q’ is true if and only if v(P) = T or v(Q) = T. This can be
represented graphically by branching a ‘P’ and ‘Q’ from ‘P#Q’ in the tree.
P#Q P
P Q Branching rule
Branching ‘P’ and ‘Q’ graphically represents the fact that v(P#Q) = T if and only if
v(P) = T or v(Q) = T. We can think of the above diagram as representing the following:
Let’s turn to the truth function represented by column 2. This truth function states
that v(P#Q) = T if and only if v(P) = T or v(Q) = F. Otherwise put, v(P#Q) = T if and
only if v(P) = T or v(¬Q) = T. Thus, in order to represent column 2 for ‘P#Q,’ write
‘P’ under one branch and ‘¬Q’ on the other. That is,
P#Q P
P ¬Q Branching rule
Again, this procedure can be used to represent the remaining truth functions in
columns 3 and 4. These truth functions are left as an exercise.
Finally, branching and stacking occur when the proposition is true under two
truth-value assignments and false under two truth-value assignments.
Below are the truth functions involving two true truth-value assignments and two
false truth-value assignments.
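The taxonomy above can be summarized in a short sketch: which kind of rule a binary truth function gets is determined by how many of its four rows are true. The helper below is our own illustration, not part of the text's formal apparatus:

```python
# Classify a binary truth function from its column of four truth values
# (assumed to have one, two, or three true rows):
#   true on exactly one row  -> stacking rule
#   false on exactly one row -> branching rule
#   true on two rows         -> branching-and-stacking rule
def rule_type(column):
    trues = column.count(True)
    if trues == 1:
        return "stacking"
    if trues == 3:
        return "branching"
    return "branching and stacking"

print(rule_type([True, False, False, False]))  # stacking (conjunction-like)
print(rule_type([True, True, True, False]))    # branching (disjunction-like)
print(rule_type([True, False, False, True]))   # branching and stacking
```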
Exercise Set #1
A. Consider the following truth functions. Create a truth-tree decomposition rule that
accurately represents columns 3 and 4.
P Q P#Q P#Q P#Q P#Q
T T T F F F
T F F T F F
F T F F T F
F F F F F T
1 2 3 4
B. Consider the following truth functions. Create a truth-tree decomposition rule that
accurately represents columns 3 and 4.
P Q P#Q P#Q P#Q P#Q
T T T T T F
T F T T F T
F T T F T T
F F F T T T
1 2 3 4
In earlier sections, we learned how to set up the truth tree for decomposition (step 1)
and learned a very general approach to decomposition. In this section, we learn two
of the nine decomposition rules for PL and some vocabulary for talking about and
analyzing truth trees.
1 R∧¬M P
2 R∧(W∧¬M) P
Notice that both propositions are conjunctions of the form ‘P∧Q.’ From the truth
table analysis of conjunctions, we know that a conjunction is true if and only if both
of its conjuncts are true, and so it is true under only one truth-value assignment.
Thus, in the case of line 1, v(R∧¬M) = T if and only if v(R) = T and v(¬M) = T. In
order to represent that v(R∧¬M) = T under one and only one truth-value assignment,
a stacking rule is employed. That is, ‘R’ and ‘¬M’ are stacked below the existing
set of propositions.
1 R∧¬M P
2 R∧(W∧¬M) P
3 R
4 ¬M
In the case of the example above, a stacking decomposition rule was applied to a
conjunction. However, there is a more general point. Namely, since a conjunction is
a type of proposition that is true under one and only one truth-value assignment, a general
stacking rule can be formulated that is specific to conjunctions. This is known as con-
junction decomposition and is abbreviated as follows: ‘∧D.’
1 R∧¬M P
2 R∧(W∧¬M) P
3 R 1∧D
4 ¬M 1∧D
The above tree now indicates that ‘R’ at line 3 and ‘¬M’ at line 4 came about by
applying ‘∧D’ to line 1. In order to indicate that ‘R∧¬M’ has had a decomposition
rule applied to it, a checkmark (✓) is placed to the right of it. This indicates that the
proposition has been decomposed.
A fully decomposed truth tree is a tree where all the propositions that can be de-
composed have been decomposed.
Fully decomposed tree A fully decomposed truth tree is a tree where all the prop-
ositions that can be decomposed have been decomposed.
In the tree above, the proposition at line 2 can be decomposed but has not been
decomposed, so the tree is not a fully decomposed truth tree. Thus, the next step is to
decompose any remaining propositions. Notice that line 2 is a conjunction. Thus, ‘∧D’
can be applied to it. This produces the following truth tree:
1 R∧¬M P
2 R∧(W∧¬M) P
3 R 1∧D
4 ¬M 1∧D
5 R 2∧D
6 W∧¬M 2∧D
We still do not have a fully decomposed truth tree since line 6 can be decomposed
but has not been decomposed. Thus, we can apply another use of ‘∧D’ to line 6. This
produces the following:
1 R∧¬M P
2 R∧(W∧¬M) P
3 R 1∧D
4 ¬M 1∧D
5 R 2∧D
6 W∧¬M 2∧D
7 W 6∧D
8 ¬M 6∧D
1 R∧W P
2 M∨¬W P
Next, decompose the ‘R∧W’ with conjunction decomposition by stacking ‘R’ and
‘W’ under the existing stack.
1 R∧W P
2 M∨¬W P
3 R 1∧D
4 W 1∧D
P Q P∨Q
T T T
T F T
F T T
F F F
Notice that v(P∨Q) = F in one and only one case, and it is true in all others. Re-
member that a stacking rule is applied if and only if a proposition is true in one case,
and a branching rule is applied to propositions that are false only under one truth-value
assignment. Thus, disjunctions will branch when decomposed. In order to determine
what kind of branching rule to use, consider the four possibilities:
1 2 3 4
P∨Q P∨Q P∨Q P∨Q
P Q ¬P Q P ¬Q ¬P ¬Q
Consider (2) as a possible candidate for P∨Q. This tree says that v(P∨Q) = T if
and only if v(P) = T or v(¬Q) = T; that is, v(Q) = F. This does not represent the truth
conditions for P∨Q since row 3 of the truth table says that if v(P) = F and v(Q) = T,
then v(P∨Q) = T. The only acceptable candidate is (1) since it represents the fact that
a disjunction is true if either of its disjuncts is true. Thus, disjunction decomposition
(∨D) is the following decomposition rule:
P∨Q P
P Q ∨D
1 R∧W P
2 M∨¬W P
3 R 1∧D
4 W 1∧D
5 M ¬W 2∨D
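The two rules introduced so far can be contrasted in a small sketch. The list-of-branches representation and helper names below are ours, not the text's: ∧D adds both conjuncts to the same branch, while ∨D splits the branch in two.

```python
# ∧D stacks both conjuncts into the one existing branch; ∨D splits the branch
# in two, one disjunct per branch.
def apply_conj_d(branch, left, right):
    return [branch + [left, right]]             # one branch, conjuncts stacked

def apply_disj_d(branch, left, right):
    return [branch + [left], branch + [right]]  # two branches

branches = apply_conj_d(["R∧W", "M∨¬W"], "R", "W")
branches = apply_disj_d(branches[0], "M", "¬W")
print(len(branches))  # 2
```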
Before continuing further, it is helpful at this point to introduce some more termi-
nology to talk about the tree and its branches.
First, all of the propositions in a branch can be catalogued by starting from the bot-
tom of a branch and moving upward through the branch to the top of the tree.
Branch A branch includes all the propositions obtained by starting from the bot-
tom of the tree and reading upward through the tree.
For example, note that in the above tree, there are two branches.
1 A∨B P
2 B∨C P
3 C∨D P
4 A B
5 B C B C
6 C D C D C D C D
#1 #2 #3 #4 #5 #6 #7 #8
For example, consider the following partially decomposed tree consisting of ‘R∧W’
and ‘M∨¬W.’
1 R∧W P
2 M∨¬W P
3 R 1∧D
4 W 1∧D
Notice that there is only one branch in the tree above and that it contains a decom-
posable proposition that has not been decomposed. That is, ‘M∨¬W’ is decomposable
but has not been decomposed.
In contrast, notice that the tree below consists of two branches, and both are fully
decomposed.
1 R∧W P
2 M∨¬W P
3 R 1∧D
4 W 1∧D
5 M ¬W 2∨D
Third, there are (1) closed branches, (2) open branches, and (3) completed open
branches. A branch is closed (or closes) provided that branch contains a proposition
and its literal negation (e.g., ‘P’ and ‘¬P’).
Closed branch A closed branch contains a proposition ‘P’ and its literal negation
‘¬P.’ A closed branch is represented by an ‘X.’
1 R∧W P
2 ¬W P
3 R 1∧D
4 W 1∧D
X
Notice that there is one branch in the above tree, and that branch contains ‘W’ at
line 4 and the literal negation of ‘W’ at line 2. The above branch is closed since it
contains ‘W’ and ‘¬W’ in the branch.
A branch is open if and only if the branch is not closed. That is, the branch does
not contain any instance of a proposition and its literal negation.
Open branch An open branch is a branch that is not closed, that is, a branch that
does not contain a proposition ‘P’ and its literal negation ‘¬P.’
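The closure test for a branch amounts to checking whether some proposition occurs alongside its literal negation. A minimal sketch, with our own helper name and a list-of-strings representation of a branch:

```python
# A branch is closed when it contains some proposition together with its
# literal negation; otherwise it is open.
def branch_closed(branch):
    props = set(branch)
    return any(("¬" + p) in props for p in props)

print(branch_closed(["R", "W", "¬W"]))  # True  (contains 'W' and '¬W')
print(branch_closed(["R", "W", "M"]))   # False (the branch is open)
```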
For example, consider the following partially decomposed tree consisting of ‘R∧W’
and ‘M∨¬W.’
1 R∧W P
2 M∨¬W P
3 R 1∧D
4 W 1∧D
Notice that there is one branch, and it is open since the branch does not contain a
proposition and its literal negation.
To bring much of the above terminology together, consider that once line 2 in the
above tree has been decomposed, there are no undecomposed propositions in the tree,
and so the tree below has two fully decomposed branches.
1 R∧W P
2 M∨¬W P
3 R 1∧D
4 W 1∧D
5 M ¬W 2∨D
0 X
In the above tree, notice that the leftmost branch (consisting of ‘M,’‘W,’ and ‘R’)
is a completed open branch (indicated by ‘0’), and the rightmost branch (consisting
of ‘¬W,’‘W,’ and ‘R’) is a closed branch (indicated by ‘X’). The leftmost branch is a
completed open branch because (1) it does not contain any instance of a proposition
and its literal negation, and (2) it is fully decomposed.
The rightmost branch is a closed branch since it contains a proposition and its literal
negation (i.e., ‘¬W’ at line 5 and ‘W’ at line 4).
Consider another example of a tree examined earlier in this chapter:
1 R∧¬M P
2 R∧(W∧¬M) P
3 R 1∧D
4 ¬M 1∧D
5 R 2∧D
6 W∧¬M 2∧D
7 W 6∧D
8 ¬M 6∧D
0
In the above example, there is only one branch consisting of the following nonde-
composable propositions: ‘¬M,’ ‘W,’ and ‘R.’ Since the branch is fully decomposed
and does not contain an instance of a proposition and its literal negation, the branch
is a completed open branch.
Fourth, and finally, using the above terminology for branches, it is now possible to
talk about the whole tree. A tree is classified into the following three types:
A tree is a completed open tree if and only if there is one completed open branch.
Completed open tree A tree is a completed open tree if and only if it has at least
one completed open branch. That is, a tree is a completed
open tree if and only if it contains at least one fully decom-
posed branch that is not closed. A completed open tree is a
tree where there is at least one branch that has an ‘0’ under
it.
1 R∧¬M P
2 R∧(W∧¬M) P
3 R 1∧D
4 ¬M 1∧D
5 R 2∧D
6 W∧¬M 2∧D
7 W 6∧D
8 ¬M 6∧D
0
It was determined above that this tree has a completed open branch. Thus, the above
tree is a completed open tree.
Consider a second tree:
1 R∧W P
2 M∨¬W P
3 R 1∧D
4 W 1∧D
5 M ¬W 2∨D
0 X
Notice that the leftmost branch is a completed open branch. Thus, the tree is a
completed open tree.
A tree is closed if and only if all of its branches are closed. That is, if every branch
in a tree has an ‘X’ under it, then the tree is closed.
Closed tree A tree is closed when all of the tree’s branches are closed. A closed
tree will have an ‘X’ under every branch.
1 R∧W P
2 ¬W P
3 R 1∧D
4 W 1∧D
X
Notice that all of the tree’s branches are closed. Thus, the tree is closed.
Here is another example:
1 R∧W P
2 ¬R∨¬W P
3 R 1∧D
4 W 1∧D
5 ¬R ¬W 2∨D
X X
Notice that the above tree consists of two branches. Both are closed, and so the tree
is closed.
Next, consider the following tree consisting of ‘A→B,’‘¬(A∨B),’ and ‘B∧C’:
1 A→B P
2 ¬(A∨B) P
3 B∧C P
4 B 3∧D
5 C 3∧D
6 ¬A 2¬∨D
7 ¬B 2¬∨D
8 ¬A B 1→D
X X
The above tree uses some decomposition rules that have not been introduced, but
you should be able to determine that the tree is closed because every branch is closed.
The rightmost branch is closed because it contains a proposition and its literal nega-
tion (‘B’ and ‘¬B’), but also notice that the leftmost branch is closed because ‘B’ (line
4) and ‘¬B’ (line 7) are in the leftmost branch.
As one last example, consider the following tree consisting of ‘P∨Q’ and‘¬P∧Q.’
1 P∨Q P
2 ¬P∧Q P
3 ¬P 2∧D
4 Q 2∧D
5 P Q 1∨D
X 0
Notice that the above tree is not closed. This tree is a completed open tree because
the tree has at least one completed open branch. This is the rightmost branch consist-
ing of ‘Q,’‘Q,’ and ‘¬P.’ Remember that in order for a tree to be closed, all of the
branches must be closed, while in order for a tree to be a completed open tree, there
needs to be at least one completed open branch.
Finally, an uncompleted open tree is a tree that is neither a completed open tree nor
a closed tree. This is a tree that is unfinished because all of the branches have not been
closed or there is not one completed open branch.
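The three-way classification of trees can be sketched as follows. The pair representation of a branch (its propositions plus a flag for full decomposition) is our own illustrative encoding:

```python
# Classify a whole tree from its branches. Each branch is a pair:
# (list of propositions, whether the branch is fully decomposed).
def classify_tree(branches):
    def closed(props):
        return any(("¬" + p) in set(props) for p in props)
    if all(closed(props) for props, _ in branches):
        return "closed"
    if any(done and not closed(props) for props, done in branches):
        return "completed open"
    return "uncompleted open"

print(classify_tree([(["R", "W", "¬W"], True)]))                      # closed
print(classify_tree([(["M", "W", "R"], True), (["¬W", "W"], True)]))  # completed open
print(classify_tree([(["R", "W", "M∨¬W"], False)]))                   # uncompleted open
```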
1 R∧W P
2 C∨D P
Next, choose one of the propositions to decompose. Suppose that ‘C∨D’ is chosen
from line 2.
1 R∧W P
2 C∨D P
3 C D 2∨D
In the above tree, ‘C∨D’ is decomposed. However, ‘R∧W’ has not been decom-
posed. In decomposing line 1, you will decompose it under every open branch that
descends from ‘R∧W.’
So, in the case of the above tree, there are two open branches that descend from
‘R∧W,’ and so ‘R∧W’ must be decomposed under both of these branches. The com-
pleted tree is as follows:
1 R∧W P
2 C∨D P
3 C D 2∨D
4 R R 1∧D
5 W W 1∧D
0 0
The general rule for this is called the decomposition descending rule. It states that
when decomposing a proposition ‘P,’ decompose ‘P’ under every open branch that
descends from ‘P.’
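The decomposition descending rule can be sketched directly: the results of decomposing a proposition are added to every open branch below it, while closed branches are left untouched. Helper names and representation below are ours:

```python
# Decomposition descending rule: when a proposition is decomposed, its results
# are added to every open branch that descends from it; closed branches stay
# as they are.
def closed(branch):
    return any(("¬" + p) in set(branch) for p in branch)

def extend_open_branches(branches, new_props):
    return [b if closed(b) else b + new_props for b in branches]

# Decomposing 'R∧W' under the two branches left by decomposing 'C∨D':
print(extend_open_branches([["C"], ["D"]], ["R", "W"]))
# [['C', 'R', 'W'], ['D', 'R', 'W']]
```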
Consider the following propositions: ‘R∨(P∧M)’ and ‘C∧D.’ Again, start by stack-
ing the propositions and then decompose line 1.
1 R∨(P∧M) P
2 C∧D P
3 R P∧M 1∨D
Notice that there are two decomposable propositions remaining: ‘C∧D’ at line 2 and
‘P∧M’ at line 3. According to the decomposition descending rule, ‘C∧D’ must be de-
composed under every open branch that descends from ‘C∧D.’ Since the left and right
branches are both open, ‘C∧D’ must be decomposed under both of these branches.
1 R∨(P∧M) P
2 C∧D P
3 R P∧M 1∨D
4 C C 2∧D
5 D D 2∧D
Notice that the above tree is still not fully decomposed because there is a remaining
decomposable proposition in the rightmost branch. In order to complete the truth tree,
‘P∧M’ at line 3 must be decomposed. Since propositions are decomposed under every
remaining open branch that descends from the proposition upon which a decomposi-
tion rule is applied, ‘P∧M’ will only be decomposed in the rightmost branch. Thus,
the completed tree is as follows:
1 R∨(P∧M) P
2 C∧D P
3 R P∧M 1∨D
4 C C 2∧D
5 D D 2∧D
6 0 P 3∧D
7 M 3∧D
0
Notice that ‘P∧M’ is not decomposed under the leftmost branch. This is because
propositions are decomposed under open branches that descend from that proposition.
Thus, decomposing ‘P∧M’ at line 3 in the following way would be a violation of the
decomposition descending rule:
3 R P∧M 1∨D
4 C C 2∧D
5 D D 2∧D
6 NO! —► P P 3∧D
7 NO! —► M M 3∧D
1 P∧¬P P
2 W∨L P
3 P 1∧D
4 ¬P 1∧D
X
The decomposition descending rule states that when a proposition ‘P’ is decom-
posed, ‘P’ should be decomposed under every open branch that descends from ‘P.’
However, in the above tree, there are no open branches since ‘P’ and ‘¬P’ form a
closed branch. At this point, the branch is closed, and the tree is complete and consid-
ered closed since any further decomposition will only yield more closed branches. To
illustrate, consider what would happen if line 2 were decomposed:
1 P∧¬P P
2 W∨L P
3 P 1∧D
4 ¬P 1∧D
5 W L 2∨D
X X
Notice that both the leftmost and the rightmost branches close because each
contains ‘P’ and ‘¬P’ at lines 3 and 4.
Exercise Set #2
A. Using the truth-tree method, decompose the following sets of propositions to de-
termine whether the tree is an open or closed tree.
1. * P∧(R∧S)
2. P∨(R∨D)
3. * P∨Q, ¬P∧Q
4. P∧(R∧D), Z∧M
5. * P∧(P∨Z), P∧¬P
6. W∧¬W, Z∨(D∨R)
7. * (P∧Q)∧W, ¬W ∨ M
8. (R∨W)∨¬P, (¬R∧¬W)∧P
9. * (R∨¬W)∨¬P, C∧D, (¬R∧R)∧M
10. D∨¬D, Z∧[(Z∧P)∧¬R]
5 P Q 1∨D
X 0
5. * P∧(P∨Z), P∧¬P; closed tree.
1 P∧(P∨Z) P
2 P∧¬P P
3 P 2∧D
4 ¬P 2∧D
X
7. * (P∧Q)∧W, ¬W∨M; open tree.
1 (P∧Q)∧W P
2 ¬W ∨ M P
3 P∧Q 1∧D
4 W 1∧D
5 P 3∧D
6 Q 3∧D
7 ¬W M 2∨D
X 0
Thus far, we have formulated two decomposition rules (∧D and ∨D). What remains is
to formulate the decomposition rules for the remaining proposition types and to learn
how to read trees for logical properties (e.g., equivalence, tautology, validity, etc.).
This section addresses the remaining seven decomposition rules, the next section ad-
dresses the strategic use of rules, and a later section addresses logical properties.
P Q P→Q
T T T
T F F
F T T
F F T
Notice that a conditional is false under one and only one truth-value assignment.
That is, v(P→Q) = F if and only if v(P) = T and v(Q) = F. Thus, conditionals will
branch when decomposed. In order to determine what kind of branching rule to use,
again consider the four possibilities:
1 2 3 4
P→Q P→Q P→Q P→Q
P Q ¬P Q P ¬Q ¬P ¬Q
Rather than testing each of these possibilities, notice from the truth table that a
conditional (‘P→Q’) is true if and only if either v(P) = F or v(Q) = T. In other words,
v(P→Q) = T if and only if v(¬P) = T or v(Q) = T. Thus, the decomposition rule for
conditionals (→D) can be represented as the following branching rule:
P→Q P
¬P Q →D
As a quick illustration, consider the following tree for ‘P→(W→Z)’ where ‘→D’
is used twice:
1 P→(W→Z) P
2 ¬P W→Z 1→D
3 ¬W Z 2→D
Notice that in the above tree when ‘→D’ is used to decompose ‘W→Z,’‘¬W’ and
‘Z’ are placed under ‘W→Z’ on the right-hand side of the branch and not the left-hand
side.
P Q P↔Q
T T T
T F F
F T F
F F T
A biconditional is a proposition that is true in two cases and false in two cases. In
order to represent this, we will use a combination of stacking and branching.
P↔Q P
P ¬P ↔D
Q ¬Q ↔D
As a quick illustration, consider the following tree for ‘P↔(W∧Z),’ where ‘↔D’
is used:
1 P↔(W∧Z) P
2 P ¬P 1↔D
3 W∧Z ¬(W∧Z) 1↔D
4 W 3∧D
5 Z 3∧D
There are two things to note in the above tree. First, when ↔D is applied to line
1, on the left-hand side we have ‘P’ and ‘W∧Z,’ and the latter proposition can be
decomposed using ‘∧D.’ However, notice that on the right-hand side we have ‘¬P’
and ‘¬(W∧Z).’ While ‘¬P’ cannot be decomposed further, ‘¬(W∧Z)’ can, but we
currently do not have a rule for how to decompose it.
¬(P∧Q) P
¬P ¬Q ¬∧D
In order to illustrate ‘¬∧D,’ we return to the following tree considered in our dis-
cussion of ‘↔D’:
1 P↔(W∧Z) P
2 P ¬P 1↔D
3 W∧Z ¬(W∧Z) 1↔D
4 W 3∧D
5 Z 3∧D
6 ¬W ¬Z 3¬∧D
¬(P∨Q) P
¬P ¬∨D
¬Q ¬∨D
As a quick illustration, consider the following tree for ‘¬[Z∨(B∨R)],’ where ‘¬∨D’
is used twice.
1 ¬[Z∨(B∨R)] P
2 ¬Z 1¬∨D
3 ¬(B∨R) 1¬∨D
4 ¬B 3¬∨D
5 ¬R 3¬∨D
¬(P→ Q) P
P ¬→D
¬Q ¬→D
1 ¬[L→(B→R)] P
2 L 1¬→D
3 ¬(B→R) 1¬→D
4 B 3¬→D
5 ¬R 3¬→D
¬(P↔Q) P
P ¬P ¬↔D
¬Q Q ¬↔D
1 ¬[P↔(W∨Z)] P
2 P ¬P 1¬↔D
3 ¬(W∨Z) W∨Z 1¬↔D
4 ¬W 3¬∨D
5 ¬Z 3¬∨D
6 W Z 3∨D
¬¬P P
P ¬¬D
As a quick illustration, consider the following tree for ‘¬(¬P∨¬Z),’ where there
are two uses of ‘¬¬D’ after a use of ‘¬∨D’:
1 ¬(¬P∨¬Z) P
2 ¬¬P 1¬∨D
3 ¬¬Z 1¬∨D
4 P 2¬¬D
5 Z 3¬¬D
¬(P→Q)
P ¬→D
¬Q ¬→D

P→Q
¬P Q →D

¬¬P
P ¬¬D
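All nine decomposition rules can be gathered into one compact summary. The dictionary below is our own illustration, not the text's notation: the outer list represents branches, and each inner list represents a stack.

```python
# An illustrative summary of the nine decomposition rules of PL.
# Outer list = branches; inner list = propositions stacked in that branch.
RULES = {
    "P∧Q":    [["P", "Q"]],                # ∧D:  stack
    "P∨Q":    [["P"], ["Q"]],              # ∨D:  branch
    "P→Q":    [["¬P"], ["Q"]],             # →D:  branch
    "P↔Q":    [["P", "Q"], ["¬P", "¬Q"]],  # ↔D:  branch and stack
    "¬(P∧Q)": [["¬P"], ["¬Q"]],            # ¬∧D: branch
    "¬(P∨Q)": [["¬P", "¬Q"]],              # ¬∨D: stack
    "¬(P→Q)": [["P", "¬Q"]],               # ¬→D: stack
    "¬(P↔Q)": [["P", "¬Q"], ["¬P", "Q"]],  # ¬↔D: branch and stack
    "¬¬P":    [["P"]],                     # ¬¬D: stack
}

# Stacking rules yield one branch; branching rules yield two.
print(len(RULES["P∧Q"]), len(RULES["P∨Q"]))  # 1 2
```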
Exercise Set #3
A. Using the truth-tree method, decompose the following sets of propositions to de-
termine whether the tree is an open or closed tree.
1. * P→R, ¬(P→Z)
2. ¬(P∨L), P→Z
3. * ¬(P∧M), P
4. P↔R, P→R
5. * P∧¬L, ¬(P→L), Z∨F
6. ¬P→(Z∧M), ¬(P→Z)
7. * R→(R∨L), Z↔(Q∧¬R)
8. ¬¬P, (P∧¬P)→(Z∨V)
9. ¬(Z↔L), Z→(P∧V)
10. ¬(¬Z∨P)↔(P∧¬V)
11. ¬(¬Z→P), ¬[P∨(¬V→R)]
12. F∨(T∨R), ¬(P∧R), P∧(¬P∧R)
13. M∧(¬P∧¬Z), Z∧L, F→(R∧T)
14. (P∧Z)→¬Z, Z∧¬(M∨V)
15. (M∨T)↔¬(M↔¬Z)
5 ¬P R 1→D
X 0
3. * ¬(P∧M), P; open tree.
1 ¬(P∧M) P
2 P P
3 ¬P ¬M 1¬∧D
X 0
5. * P∧¬L, ¬(P→L), Z∨F; open tree.
1 P∧¬L P
2 ¬(P→L) P
3 Z∨F P
4 P 1∧D
5 ¬L 1∧D
6 P 2¬→D
7 ¬L 2¬→D
8 Z F 3∨D
0 0
3 Z ¬Z 2↔D
4 Q∧¬R ¬(Q∧¬R) 2↔D
5 Q 4∧D
6 ¬R 4∧D
7 ¬Q ¬¬R 4¬∧D
8 R 7¬¬D
9 ¬R R∨L 1→D
0
Before analyzing the remaining rules for truth-functional operators, it is helpful to for-
mulate a number of strategic rules that will simplify the decomposition of truth trees.
These rules are listed in order of decreasing priority. That is, if you have a choice
between using strategic rule 1 or strategic rule 3, use 1.
Suppose that all you wanted to know about a particular proposition or set of proposi-
tions was whether or not it produced an open tree. It is not always necessary to produce
a fully developed tree in order to make this determination since a completed open tree
is a tree with at least one completed open branch. For instance, consider whether the
following set of propositions is consistent: ‘(P∧¬W)∧¬M’ and ‘¬M∨(¬M∨P).’
1 (P∧¬W)∧¬M P
2 ¬M∨(¬M∨P) P
3 P∧¬W 1∧D
4 ¬M 1∧D
5 P 3∧D
6 ¬W 3∧D
7 ¬M ¬M∨P 2∨D
0
Now, the above tree is not fully decomposed since there is still a complex proposi-
tion ‘¬M∨P’ (at line 7). However, there is no reason to decompose it because a tree
is a completed open tree if and only if there is at least one completed open branch.
1 ¬M∨¬P P
2 M∨P P
3 P∨Q P
In the stack above, we want to construct the simplest possible tree to determine
whether the tree is open or closed. Strategic rule 2 suggests that we choose rules that
are likely to close branches. These are lines 1 and 2. Let us decompose ‘M∨P,’ then
‘¬M∨¬P.’
1 ¬M∨¬P P
2 M∨P P
3 P∨Q P
4 M P 2∨D
5 ¬M ¬P ¬M ¬P 1∨D
X X
Notice two things about the above tree. First, ‘¬M∨¬P’ had to be decomposed un-
der every open branch below it in the tree, not only under ‘P’ (line 4) but also under
‘M’ (line 4). Second, after decomposing ‘¬M∨¬P,’ there are two closed branches and
two open branches. If ‘P∨Q’ were decomposed before ‘¬M∨¬P,’ there would be four
open branches. This would require ‘¬M∨¬P’ to be decomposed under each of these
four open branches. Since we made use of the strategic rule that says to use decom-
position rules that close branches and decomposed ‘¬M∨¬P’ before ‘P∨Q,’ we only
have to decompose ‘P∨Q’ under the remaining two open branches.
1 ¬M∨¬P P
2 M∨P P
3 P∨Q P
4 M P 2∨D
5 ¬M ¬P ¬M ¬P 1∨D
X X
6 P Q P Q 3∨D
7 X 0 0 0
The above tree is a completed open tree since there is at least one completely de-
composed open branch.
One helpful way of remembering this rule is that you want your trees to be tall and
not bushy. Whenever you decompose a complex proposition, it must be decomposed
into every open branch below the decomposed proposition. Using stacking rules be-
fore branching rules has the benefit of simplifying trees. Consider the following set of
propositions: ‘¬M∨Q’ and ‘(R∧Q)∧P.’
1 ¬M∨Q P
2 (R∧Q)∧P P
In the above stack, we have the option of using a decomposition rule that stacks
(∧D) or a decomposition rule that branches (∨D). Consider what happens if we
branch first.
1 ¬M∨Q P
2 (R∧Q)∧P P
3 ¬M Q 1∨D
4 R∧Q R∧Q 2∧D
5 P P 2∧D
6 R R 4∧D
7 Q Q 4∧D
The above tree is seven lines long and requires writing two separate stacks for the
decomposition of ‘(R∧Q)∧P.’ Consider what happens if we use the stacking rule (∧D)
first.
1 ¬M∨Q P
2 (R∧Q)∧P P
3 R∧Q 2∧D
4 P 2∧D
5 R 3∧D
6 Q 3∧D
7 ¬M Q 1∨D
Here, the tree is again seven rows long but is less complex since it requires only one
column for the decomposition of the stacking rule (∧D). Thus, stacking rules produce
simpler trees than branching rules. So, stack before you branch!
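The cost of branching too early can be quantified in a tiny sketch (our own illustration, assuming no branches close along the way): each branching rule applied first doubles the number of open branches a later decomposition must be copied into.

```python
# Why stacking first pays off: each branching rule applied before a given
# proposition doubles the number of open branches that proposition must
# later be decomposed under (assuming no branches close along the way).
def copies_required(branchings_applied_first):
    return 2 ** branchings_applied_first

print(copies_required(0))  # 1: stack first, write '(R∧Q)∧P' only once
print(copies_required(1))  # 2: branch on '¬M∨Q' first, write it twice
```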
Strategic rule 4 is the weakest of the strategic rules. Compare the following two
trees for ‘(M∨S)∨(Q∨R), R∨S.’ In the first tree, the more complex proposition is
decomposed:
1 (M∨S)∨(Q∨R) P
2 R∨S P
3 M∨S Q∨R 1∨D
4 M S Q R 3∨D
5 R S R S R S R S 2∨D
In the above tree, notice that line 5 involves four decompositions of ‘R∨S’ under
the four open branches. In the second tree, the first proposition decomposed is the less
complex proposition at line 2. In the tree below, notice that line 5 has two decomposi-
tions of ‘M∨S’ and two decompositions of ‘Q∨R.’
1 (M∨S)∨(Q∨R) P
2 R∨S P
3 R S 2∨D
4 M∨S Q∨R M∨S Q∨R 1∨D
5 M S Q R M S Q R 4∨D
Although using strategic rule 4 may not reduce the overall size of the tree, it can
help simplify the number of different applications of a rule applied.
Exercise Set #4
A. Using the truth-tree method, construct a truth tree to determine whether the fol-
lowing yield a completed open tree or a closed tree. Remember to use the strategic
rules.
1. * A∧B, B∧C, C∧D
2. (A∧B)∧D, (D∧¬B)∧A
3. * A∧B, B∧¬C, C∧D
4. ¬A∨¬B, A∧¬B
5. * A→B, B→C, C→D
6. ¬(A→B), ¬(A∨B)
7. * A→B, ¬B→C, ¬C→D
8. B↔C, B→C, ¬B∨C
9. * A→B, ¬(A∨B), B∧C
10. [P→¬(Q∨R)], ¬(Q↔¬R)
11. * ¬[P→¬(Q∨R)], ¬(Q∨¬R)
12. (P↔¬L), P∧(¬P∧Z), L→(R→Z)
13. P∨(R∨D), R↔(V↔D), ¬(P→P)
14. ¬[Z∨(¬Z∨V)], ¬(C∨P)↔(M∧¬D)
15. P∨¬(¬P→R), V↔(M→¬M)
4 ¬A B 1→D
5 ¬B C ¬B C 2→D
X
6 ¬C D ¬C D ¬C D 3→D
0 0 X 0 X 0
4 ¬A B 1→D
5 B C B C 2→D
6 C D C D C D C D 3→D
0
9. * A→B, ¬(A∨B), B∧C; closed tree.
1 A→B P
2 ¬(A∨B) P
3 B∧C P
4 B 3∧D
5 C 3∧D
6 ¬A 2¬∨D
7 ¬B 2¬∨D
X
11. * ¬[P→¬(Q∨R)], ¬(Q∨¬R); open tree.
1 ¬[P→¬(Q∨R)] P
2 ¬(Q∨¬R) P
3 ¬Q 2¬∨D
4 ¬¬R 2¬∨D
5 R 4¬¬D
6 P 1¬→D
7 ¬¬(Q∨R) 1¬→D
8 Q∨R 7¬¬D
9 Q R 8∨D
X 0
In this section, three trees are examined. The first will come completed, the second
will have a step-by-step walkthrough, and the third will illustrate a simple strategic
point. In each case, you should look at the original stack of propositions, try to com-
plete the tree yourself, and check your work against the completed tree.
Consider the following set of propositions: ‘{M→P, ¬(P∨Q), (R∨S)∧¬P}.’ First,
there is the initial setup, which will simply consist of stacking the propositions:
1 M→P P
2 ¬(P∨Q) P
3 (R∨S)∧¬P P
1 M→P P
2 ¬(P∨Q) P
3 (R∨S)∧¬P P
4 R∨S 3∧D
5 ¬P 3∧D
6 ¬P 2¬∨D
7 ¬Q 2¬∨D
8 ¬M P 1→D
X
9 R S 4∨D
0 0
Third, we can analyze the tree to see whether or not it is open or closed. The above
tree is complete since all propositions are decomposed, and it is open since there is at
least one branch with an ‘0’ under it.
For the next example, consider the following set of propositions: ‘{¬(M→P), ¬(P↔Q), ¬(P∨M)∨¬Q, P→M}.’
Looking at this set, there is only one proposition that stacks: ‘¬(M→P).’ Strategic
rule 2 says to use rules that stack rather than branch, so we will start decomposing the
tree by applying ‘(¬→D)’ to ‘¬(M→P)’:
1 ¬(M→P) P
2 ¬(P↔Q) P
3 ¬(P∨M)∨¬Q P
4 P→M P
5 M 1¬→D
6 ¬P 1¬→D
Next, all of the remaining propositions branch, so we should consider using a de-
composition branching rule on a proposition that will close branches. It would not be
a good idea to decompose ‘P→M’ at line 4 since this will give us ‘¬P’ and ‘M’ in
a branch and would not close any branches. Another option is to decompose the ne-
gated biconditional ‘¬(P↔Q)’ using ‘(¬↔D)’ since this opens one branch yet closes
another. Consider this choice below.
1 ¬(M→P) P
2 ¬(P↔Q) P
3 ¬(P∨M)∨¬Q P
4 P→M P
5 M 1¬→D
6 ¬P 1¬→D
7 P ¬P 2¬↔D
8 ¬Q Q 2¬↔D
X
Next, decomposing ‘P→M’ from line 4 is still not a good idea since it will only open
more branches. However, applying (∨D) upon ‘¬(P∨M)∨¬Q’ will close one branch:
1 ¬(M→P) P
2 ¬(P↔Q) P
3 ¬(P∨M)∨¬Q P
4 P→M P
5 M 1¬→D
6 ¬P 1¬→D
7 P ¬P 2¬↔D
8 ¬Q Q 2¬↔D
X
9 ¬(P∨M) ¬Q 3∨D
X
There still is not a good reason to decompose ‘P→M’ since it will not close any
branches. Thus, try decomposing ‘¬(P∨M)’ since it stacks rather than branches.
1 ¬(M→P) P
2 ¬(P↔Q) P
3 ¬(P∨M)∨¬Q P
4 P→M P
5 M 1¬→D
6 ¬P 1¬→D
7 P ¬P 2¬↔D
8 ¬Q Q 2¬↔D
X
9 ¬(P∨M) ¬Q 3∨D
10 ¬P X 9¬∨D
11 ¬M 9¬∨D
X
All branches are closed; therefore the tree is closed. It is important to recognize
that (1) whenever a branch closes, you should not decompose any more proposi-
tions in that branch, and (2) when all branches close, the tree is finished, even if all
propositions have not been fully decomposed. The above walkthrough shows that
even though a tree can be fully decomposed by decomposing propositions randomly,
a tactical use of the decomposition rules will reduce the complexity of the tree and
your total amount of work.
Being able to identify which propositions are likely to close branches is extremely
helpful. To see this more clearly, consider the following set of propositions: ‘{P→Q, T→[(Q∨R)∨(¬S∨M)], (P∨Q)∨R, (P→M)∨[W↔(¬S∨S)], ¬(¬P∨Q)}.’
This set of propositions is likely to yield a very complex tree if you do not proceed
with a strategic use of the decomposition rules in mind. It can, however, be solved
quite easily if you see that ‘¬(¬P∨Q)’ will stack, giving ‘¬¬P’ and ‘¬Q,’ while
‘P→Q’ will branch, giving ‘¬P’ and ‘Q.’ This will immediately close both branches
and the tree, making the remaining decompositions irrelevant.
1 P→Q P
2 T→[(Q∨R)∨(¬S∨M)] P
3 (P∨Q)∨R P
4 (P→M)∨[W↔(¬S∨S)] P
5 ¬(¬P∨Q) P
6 ¬¬P 5¬∨D
7 ¬Q 5¬∨D
8 P 6¬¬D
9 ¬P Q 1→D
X X
1 R∧¬M P
2 R∧(W∧¬M) P
3 R 1∧D
4 ¬M 1∧D
5 R 2∧D
6 W∧¬M 2∧D
7 W 6∧D
8 ¬M 6∧D
0
To retrieve a set of valuations from the completed open branch above, read upward
from the bottom of the branch and assign a value of true to atomic propositions and
false to negated propositional letters. This procedure determines a valuation set. In
the case of ‘R,’ ‘W,’ and ‘¬M,’ we assign truth values as follows:
v(R) = T
v(W) = T
v(M) = F
1 R∧W P
2 M∨¬W P
3 R 1∧D
4 W 1∧D
5 M ¬W 2∨D
0 X
Notice that the above tree is a completed open tree where the leftmost branch is a
completed open branch and the rightmost branch is closed. Focusing only on atomic
propositions and their literal negations, branches 1 and 2 consist of the following:
Branch 1 M, W, R
Branch 2 ¬W, W, R
By assigning a value of true to atomic propositions and false to their literal nega-
tions, a consistent set of valuations can be assigned to propositional letters in the
completed open branch but not for the closed branch. With respect to branch 1, we
see that the stack of propositions is true when the following valuations are assigned
to propositional letters:
v(M) = T
v(W) = T
v(R) = T
M W R
Branch 1 T T T
Branch 2 Closed branch
In sum, valuations for propositional letters can be extracted by reading upward from
the base of a completed open branch and assigning a value of true to propositional
letters and false to literal negations of propositional letters.
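The valuation-extraction procedure just described can be sketched in a few lines. The helper name and branch representation are ours, purely illustrative:

```python
# Read a valuation off a completed open branch: a bare propositional letter is
# assigned true, and a literally negated letter is assigned false.
def extract_valuation(branch):
    valuation = {}
    for prop in branch:
        if prop.startswith("¬"):
            valuation[prop[1:]] = False
        else:
            valuation[prop] = True
    return valuation

print(extract_valuation(["¬M", "W", "R"]))
# {'M': False, 'W': True, 'R': True}
```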
Before turning to a discussion of how trees can be used to determine properties like
consistency, contingency, validity, and so forth, we finish our discussion here with a
tree that involves more than one valuation set (i.e., more than one way to assign truth
values to propositional letters). Consider a tree consisting of the following proposi-
tions: ‘M→P,’‘¬(P∨Q),’‘(R∨S)∧¬P.’
1 M→P P
2 ¬(P∨Q) P
3 (R∨S)∧¬P P
4 R∨S 3∧D
5 ¬P 3∧D
6 ¬P 2¬∨D
7 ¬Q 2¬∨D
8 ¬M P 1→D
X
9 R S 4∨D
0 0
In the above tree, there are two completed open branches. Moving upward from
the leftmost ‘R’ at the base of the completed tree, we can first assign ‘R’ a value of
true. Next, notice that the branch consists of ‘¬M,’‘¬P,’ and ‘¬Q,’ and so we assign
a value of false to ‘M,’‘P,’ and ‘Q.’
R S M P Q
Valuation set 1 T ? F F F
However, note that there is no ‘S’ in this branch. This means that the propositions
in the stack will be jointly true independent of whether ‘S’ is true or false. Let’s
call the valuation set where v(S) = T valuation set 1, and the one where v(S) = F
valuation set 2.
R S M P Q
Valuation set 1 T T F F F
Valuation set 2 T F F F F
Turning to the second completed open branch, we begin from the base of the tree
and move upward from ‘S,’ assigning it true as a value, then assigning truth values to
the other propositional letters: v(M) = F, v(P) = F, and v(Q) = F. However, note that in
this branch, ‘R’ is not present and so can be either v(R) = F (valuation set 3) or v(R)
= T (valuation set 4).
R S M P Q
Valuation set 1 T T F F F
Valuation set 2 T F F F F
Valuation set 3 F T F F F
Valuation set 4 T T F F F
Notice, however, that valuation set 4 and valuation set 1 are identical, and so one
of these is superfluous.
R S M P Q
Valuation set 1 T T F F F
Valuation set 2 T F F F F
Valuation set 3 F T F F F
Thus, using the truth-tree method, we have been able to determine that ‘M→P,’
‘¬(P∨Q),’ and ‘(R∨S)∧¬P’ are jointly true under three different possible valuations.
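The same three valuation sets can be recovered by brute force: evaluate the stack under every assignment and keep the ones that make all three propositions true. The sketch below is ours, not the book's; propositions are encoded as nested tuples:

```python
from itertools import product

# Encoding (ours): a proposition is a letter ('P') or a tuple such as
# ('not', p), ('and', p, q), ('or', p, q), ('imp', p, q), ('iff', p, q).

def ev(p, v):
    """Evaluate proposition p under valuation v (a dict of letter -> bool)."""
    if isinstance(p, str):
        return v[p]
    op, *args = p
    fn = {'not': lambda a: not a, 'and': lambda a, b: a and b,
          'or': lambda a, b: a or b, 'imp': lambda a, b: (not a) or b,
          'iff': lambda a, b: a == b}[op]
    return fn(*(ev(a, v) for a in args))

def models(props, letters):
    """Every valuation that makes all the propositions jointly true."""
    result = []
    for row in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, row))
        if all(ev(p, v) for p in props):
            result.append(v)
    return result

# 'M→P,' '¬(P∨Q),' '(R∨S)∧¬P' from the tree above:
stack = [('imp', 'M', 'P'),
         ('not', ('or', 'P', 'Q')),
         ('and', ('or', 'R', 'S'), ('not', 'P'))]
ms = models(stack, ['M', 'P', 'Q', 'R', 'S'])
print(len(ms))  # 3 valuation sets, as the tree found
```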
Exercise Set #5
A. Decompose the following truth trees. For any completed open branch, give the
truth-value assignments (valuations) to the propositional letters that compose the
branch.
1. * ¬(P→¬Q), P∧Q
2. ¬P∨¬Z, P→¬(Z∨R)
3. * ¬[P→¬(M∧¬Z)], ¬(¬P∨Z)
4. ¬(P↔Z), ¬P→¬Z
5. * (P↔¬Z), ¬(P→Z)
6. P∨(R→¬Z), ¬P∨¬(R→¬Z)
7. P∧(R↔¬Z), ¬P∧¬(R↔¬Z)
8. P∨(Q∨¬M)→R, ¬R∧Q
4.6.2 Consistency
Now that you are familiar with how to decompose truth trees, how to determine
whether the tree is open or closed, and how to retrieve a set of valuations from a tree
with a completed open branch, the next step is to learn how to analyze trees for various
properties. In this section, we learn how to determine whether a set of propositions is
consistent (or inconsistent).
In the previous chapter, we saw that a truth table can be used to determine whether
a set of propositions is consistent by determining whether or not there is some row
of the table in which all of the propositions are true (only the value under the main
operator of each proposition is shown below):
P Q   P→Q   Q∨P   P↔Q
T T    T     T     T
T F    F     T     F
F T    T     T     F
F F    T     F     T
From the above, ‘P→Q,’ ‘Q∨P,’ and ‘P↔Q’ are consistent when v(P) = T and v(Q)
= T. We are now in a position to define consistency and inconsistency for truth trees.
The method for determining whether the set of propositions ‘{P, Q, R, . . ., Z}’ is
consistent (or inconsistent) begins by putting each of the propositions in the set on its
own line in a stack. For example, in the case of ‘{R∧¬M, R∧(W∧¬M)},’ the proposi-
tions are first stacked:
1 R∧¬M P
2 R∧(W∧¬M) P
Second, the tree is decomposed until the tree either closes or there is a completed
open branch:
1 R∧¬M P
2 R∧(W∧¬M) P
3 R 1∧D
4 ¬M 1∧D
5 R 2∧D
6 W∧¬M 2∧D
7 W 6∧D
8 ¬M 6∧D
0
Finally, the tree is analyzed. Since the above tree contains at least one completed
open branch, there is at least one valuation set that would make the propositions in
the set jointly true. The definition of consistency states that if the tree has a completed
open branch, the set of propositions ‘{R∧¬M, R∧(W∧¬M)}’ is consistent.
As a second example, consider the set ‘{R∧W, M∨¬W}’:
1 R∧W P
2 M∨¬W P
3 R 1∧D
4 W 1∧D
5 M ¬W 2∨D
0 X
The above tree is a completed open tree (since it contains at least one completed
open branch), and so the set of propositions ‘{R∧W, M∨¬W}’ that formed the stack
is consistent.
Consider a final example:
1 P→Q P
2 T→[(Q∨R)∨(¬S∨M)] P
3 (P∨Q)∨R P
4 (P→M)∨[W↔(¬S∨S)] P
5 ¬(¬P∨Q) P
6 ¬¬P 5¬∨D
7 ¬Q 5¬∨D
8 P 6¬¬D
9 ¬P Q 1→D
X X
The above tree is complete and closed. The set is thus inconsistent, since there is no
completed open branch; all of the branches are closed.
A truth tree determines whether a set of propositions has one of these properties as
follows (each test requires at least one tree):
Consistency: for ‘{P, Q, R, . . ., Z},’ a truth tree for ‘P,’ ‘Q,’ ‘R,’ . . ., ‘Z’ shows the set is consistent iff it determines a completed open tree.
Inconsistency: for ‘{P, Q, R, . . ., Z},’ a truth tree for ‘P,’ ‘Q,’ ‘R,’ . . ., ‘Z’ shows the set is inconsistent iff it determines a closed tree.
Before moving on to the remaining logical properties, notice that a benefit of the
truth-tree method is that it is often more efficient than the truth table method. For
instance, a complete truth table of the
tree considered above would require constructing an eight-row table, involving 104
‘Ts’ and ‘Fs.’ The above truth tree is a much more economical method for testing for
the same property since it tested whether ‘R∧¬M’ and ‘R∧(W∧¬M)’ are logically
consistent in eight lines.
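The truth table test for consistency can also be sketched directly: a set is consistent iff some valuation makes every member true. The tuple encoding of propositions below is our own, not the book's:

```python
from itertools import product

def ev(p, v):
    """Evaluate a proposition under a valuation v (letter -> bool)."""
    if isinstance(p, str):
        return v[p]
    op, *args = p
    fn = {'not': lambda a: not a, 'and': lambda a, b: a and b,
          'or': lambda a, b: a or b, 'imp': lambda a, b: (not a) or b,
          'iff': lambda a, b: a == b}[op]
    return fn(*(ev(a, v) for a in args))

def consistent(props, letters):
    """True iff some valuation makes every proposition in the set true."""
    return any(all(ev(p, dict(zip(letters, row))) for p in props)
               for row in product([True, False], repeat=len(letters)))

# '{R∧¬M, R∧(W∧¬M)}' — the completed open tree above: consistent.
s1 = [('and', 'R', ('not', 'M')),
      ('and', 'R', ('and', 'W', ('not', 'M')))]
print(consistent(s1, ['R', 'M', 'W']))   # True

# 'P→Q' together with '¬(¬P∨Q)' (the propositions that closed the final tree):
s2 = [('imp', 'P', 'Q'), ('not', ('or', ('not', 'P'), 'Q'))]
print(consistent(s2, ['P', 'Q']))        # False
```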
Tautology A proposition ‘P’ is a tautology if and only if ‘P’ is true under ev-
ery valuation. A truth tree shows that ‘P’ is a tautology if and only
if a tree of the stack of ‘¬P’ determines a closed tree.
1 ¬(P∨¬P) P
2 ¬P 1¬∨D
3 ¬¬P 1¬∨D
4 P 3¬¬D
X
Notice that the above tree is closed. A closed tree for a stack consisting of ‘{¬(P∨¬P)}’
means that there is no valuation set that makes ‘¬(P∨¬P)’ true. If there is no valuation
set that makes ‘¬(P∨¬P)’ true, then ‘¬(P∨¬P)’ is a contradiction. However, the propo-
sition we want to know about is ‘P∨¬P.’ If ‘¬(P∨¬P)’ is always false, then ‘(P∨¬P)’ is
always true. And if ‘(P∨¬P)’ is always true, then ‘(P∨¬P)’ is a tautology.
Next, consider ‘P→(Q∧¬P).’ Let’s begin by testing this proposition to see if it is a
tautology by placing the literal negation of ‘P→(Q∧¬P)’ as the first line of the stack:
1 ¬[P→(Q∧¬P)] P
2 P 1¬→D
3 ¬(Q∧¬P) 1¬→D
4 ¬Q ¬¬P 3¬∧D
5 0 P 4¬¬D
Notice that the above tree does not determine a closed tree, and so ‘P→(Q∧¬P)’
is not a tautology. From the above tree, we know that there is at least one valuation
set that makes ‘¬[P→(Q∧¬P)]’ true. In other words, there is at least one valuation
set that makes ‘P→(Q∧¬P)’ false. We do not know, however, whether every way of
valuating the propositional letters in ‘P→(Q∧¬P)’ would make it false, in which case
‘P→(Q∧¬P)’ would be a contradiction, or if some ways of valuating ‘P→(Q∧¬P)’
make it true and some make it false, making ‘P→(Q∧¬P)’ a contingency. In order to
find this out, we need to make use of another test.
Before considering the other tests for ‘P→(Q∧¬P),’ let’s briefly turn to the truth-
tree test for contradiction. Consider ‘P∧¬P,’ which is obviously a contradiction. The
test for contradiction begins by simply writing ‘P∧¬P’ on the first line, decomposing
the proposition, and then checking to see whether the tree is open or closed.
1 P∧¬P P
2 P 1∧D
3 ¬P 1∧D
X
Notice that the above tree is closed, which means that there is no way of assigning
truth values to the propositional letters in ‘P∧¬P’ so as to make it true. In other words,
‘P∧¬P’ is a contradiction.
Let’s return to ‘P→(Q∧¬P)’ and see whether it is a contradiction. Begin the tree de-
composition by writing ‘P→(Q∧¬P)’ at line 1 and then decomposing the proposition.
1 P→(Q∧¬P) P
2 ¬P Q∧¬P 1→D
3 0 Q 2∧D
4 ¬P 2∧D
0
Notice that the above tree does not close, and so ‘P→(Q∧¬P)’ is not a contradic-
tion. An earlier tree (above) showed that ‘P→(Q∧¬P)’ is also not a tautology. This
leaves one option for ‘P→(Q∧¬P),’ namely, that it is a contingency.
To summarize, the truth-tree method can be used to determine whether a proposi-
tion ‘P’ is a tautology, contradiction, or contingency. In testing ‘P’ to see if it is a
tautology, begin the tree with ‘¬P.’ If the tree closes, you know that it is a tautology.
If the tree is open, then ‘P’ is either a contradiction or a contingency. Similarly, in
testing ‘P’ to see if it is a contradiction, begin the tree with ‘P.’ If the tree closes, you
know that it is a contradiction. If the tree is open, then ‘P’ is either a tautology or a
contingency. Lastly, if the truth-tree test shows that ‘P’ is neither a contradiction nor
a tautology, then ‘P’ is a contingency.
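The three-way classification just summarized can be mirrored semantically by checking ‘P’ under every valuation. A sketch under our own tuple encoding of propositions (not the book's notation):

```python
from itertools import product

def ev(p, v):
    """Evaluate a proposition under a valuation v (letter -> bool)."""
    if isinstance(p, str):
        return v[p]
    op, *args = p
    fn = {'not': lambda a: not a, 'and': lambda a, b: a and b,
          'or': lambda a, b: a or b, 'imp': lambda a, b: (not a) or b,
          'iff': lambda a, b: a == b}[op]
    return fn(*(ev(a, v) for a in args))

def classify(p, letters):
    """Return 'tautology', 'contradiction', or 'contingency' for p."""
    results = [ev(p, dict(zip(letters, row)))
               for row in product([True, False], repeat=len(letters))]
    if all(results):
        return 'tautology'
    if not any(results):
        return 'contradiction'
    return 'contingency'

print(classify(('or', 'P', ('not', 'P')), ['P']))                      # tautology
print(classify(('and', 'P', ('not', 'P')), ['P']))                     # contradiction
print(classify(('imp', 'P', ('and', 'Q', ('not', 'P'))), ['P', 'Q']))  # contingency
```

The third call confirms the chapter's running example: ‘P→(Q∧¬P)’ is neither a tautology nor a contradiction.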
A truth tree determines whether a proposition has one of these properties as follows:
Tautology (at least 1 tree): a truth tree for ‘¬P’ shows that ‘P’ is a tautology iff ‘¬P’ determines a closed tree.
Contradiction (at least 1 tree): a truth tree for ‘P’ shows that ‘P’ is a contradiction iff ‘P’ determines a closed tree.
Contingency (at least 2 trees): truth trees for ‘P’ and ‘¬P’ show that ‘P’ is a contingency iff neither ‘P’ nor ‘¬P’ determines a closed tree.
Provided that you are able to do a truth tree that checks a proposition for a tautol-
ogy, a truth tree checking for logical equivalence requires little more knowledge. Two
propositions ‘P’ and ‘Q’ are logically equivalent if and only if ‘P’ and ‘Q’ never have
different truth values. It follows that if ‘P’ and ‘Q’ can never have different truth val-
ues, then ‘P↔Q’ is a tautology. Thus, in order to determine whether ‘P’ and ‘Q’ are
logically equivalent, we only need to determine whether ‘P↔Q’ is a tautology. This
is done by considering whether ‘¬(P↔Q)’ determines a closed tree.
Consider the following two propositions: ‘P∨¬P’ and ‘¬(P∧¬P).’ In order to
determine whether or not they are equivalent, we combine them in the form of a
determine whether or not they are equivalent, we combine them in the form of a
biconditional, giving us ‘(P∨¬P)↔¬(P∧¬P)’; we then negate the biconditional,
which yields ‘¬[(P∨¬P)↔¬(P∧¬P)],’ and test to see whether the tree closes.
1 ¬[(P∨¬P)↔¬(P∧¬P)] P
Since both branches close, the tree is closed. Therefore, ‘P∨¬P’ and ‘¬(P∧¬P)’ are
logically equivalent.
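Since the equivalence test reduces to a tautology test on the biconditional, it too can be sketched by brute force. The tuple encoding of propositions is our own, not the book's:

```python
from itertools import product

def ev(p, v):
    """Evaluate a proposition under a valuation v (letter -> bool)."""
    if isinstance(p, str):
        return v[p]
    op, *args = p
    fn = {'not': lambda a: not a, 'and': lambda a, b: a and b,
          'or': lambda a, b: a or b, 'imp': lambda a, b: (not a) or b,
          'iff': lambda a, b: a == b}[op]
    return fn(*(ev(a, v) for a in args))

def equivalent(p, q, letters):
    """'P' and 'Q' are equivalent iff '¬(P↔Q)' is true under no valuation."""
    neg = ('not', ('iff', p, q))
    return not any(ev(neg, dict(zip(letters, row)))
                   for row in product([True, False], repeat=len(letters)))

# 'P∨¬P' and '¬(P∧¬P)' from the example above:
print(equivalent(('or', 'P', ('not', 'P')),
                 ('not', ('and', 'P', ('not', 'P'))), ['P']))        # True
# 'P→Q' and 'P↔Q' differ when v(P) = F and v(Q) = T:
print(equivalent(('imp', 'P', 'Q'), ('iff', 'P', 'Q'), ['P', 'Q']))  # False
```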
4.6.5 Validity
In this section, we use the truth-tree method to determine whether an argument is valid
or invalid. An argument is valid if and only if it is impossible for the premises to be
true and the conclusion to be false. In chapter 1, the negative test was used to deter-
mine whether or not arguments were valid. This test asks you to imagine whether it is
possible for the premises to be true and the conclusion to be false. In chapter 3, it was
argued that the truth table method provides a better way to test for validity because it
does not rely on an individual’s ability to imagine whether it is possible for the prem-
ises to be true and conclusion false. At the beginning of this chapter, it was pointed
out that the truth table method becomes increasingly unmanageable when arguments
involve a large number of propositional letters.
The truth-tree method circumvents these problems because the complexity of a
truth tree is not a function of the number of propositional letters in the argument. In
the case of a truth tree, an argument ‘P, Q, R, . . ., Y ⊢ Z’ is valid if and only if the
stack ‘P,’ ‘Q,’ ‘R,’ . . ., ‘Y,’ ‘¬Z’ determines a closed tree. Consider the following
argument:
P→Q, P ⊢ Q.
A corresponding truth tree can be created by stacking the premises and the negation
of the conclusion, then testing for consistency, that is, testing ‘P→Q,’‘P,’ and ‘¬Q.’
1 P→Q P
2 P P
3 ¬Q P
4 ¬P Q 1→D
X X
Since the stack ‘P→Q,’ ‘P,’ and ‘¬Q’ closes, it is inconsistent; therefore, ‘P→Q,
P ⊢ Q’ is a valid argument.
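The tree test for validity has the same semantic content as the following brute-force check: stack the premises with the negated conclusion and ask whether any valuation satisfies them all. The tuple encoding is ours, not the book's:

```python
from itertools import product

def ev(p, v):
    """Evaluate a proposition under a valuation v (letter -> bool)."""
    if isinstance(p, str):
        return v[p]
    op, *args = p
    fn = {'not': lambda a: not a, 'and': lambda a, b: a and b,
          'or': lambda a, b: a or b, 'imp': lambda a, b: (not a) or b,
          'iff': lambda a, b: a == b}[op]
    return fn(*(ev(a, v) for a in args))

def valid(premises, conclusion, letters):
    """Valid iff no valuation makes the premises and negated conclusion all true."""
    stack = list(premises) + [('not', conclusion)]
    return not any(all(ev(p, dict(zip(letters, row))) for p in stack)
                   for row in product([True, False], repeat=len(letters)))

# 'P→Q, P ⊢ Q' from above:
print(valid([('imp', 'P', 'Q'), 'P'], 'Q', ['P', 'Q']))   # True
# '¬P∨¬Q ⊢ ¬(P∨Q)', which the exercise solutions show to be invalid:
print(valid([('or', ('not', 'P'), ('not', 'Q'))],
            ('not', ('or', 'P', 'Q')), ['P', 'Q']))       # False
```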
Consider a more complex example: ‘P→¬(Q→¬W), (Q→¬W)∨(P↔S), P ⊢ S→P.’
In order to test this argument for validity, we test the following stack:
‘P→¬(Q→¬W),’ ‘(Q→¬W)∨(P↔S),’ ‘P,’ ‘¬(S→P).’
Notice that nothing about the premises is changed. The only difference is that the
conclusion is negated.
1 P→¬(Q→¬W) P
2 (Q→¬W)∨(P↔S) P
3 P P
4 ¬(S→P) P
5 S 4¬→D
6 ¬P 4¬→D
X
The tree immediately closes because there is an inconsistency in the trunk or main
branch of the tree. Therefore, ‘P→¬(Q→¬W), (Q→¬W)∨(P↔S), P ⊢ S→P’ is valid.
End-of-Chapter Exercises
A. Using the truth-tree method, determine whether the following sets of propositions
are logically consistent or inconsistent.
1. * P∧Q, P→Q
2. ¬P∧¬Q, ¬(P∨Q)
3. * P, P→Q, ¬Q
4. P→Q, P∨Q, ¬P∨Q
5. * ¬(P∧Q), P→¬Q, ¬(P→Q)
6. (R→¬S)→¬M, R↔M
7. R↔M, M↔R, R→¬M
8. ¬(R↔¬R), (R→R)
9. (P∨Q), P→Q, ¬Q∧P
10. W∧P, S↔Q, ¬(P↔W), W→Z, ¬P
B. Determine which of the following propositions are contradictions, tautologies, or
contingencies by using the truth-tree method.
1. * [(P∨¬W)∧P]∧W
2. (P→Q)∧P
3. * (P∨¬P)∨Q
4. (¬P∨Q)∧¬(P→Q)
5. * (P↔Q)∧¬[(P→Q)∧(Q→P)]
6. [(R→¬S)→¬M]→(¬P∧P)
7. ¬(¬P→¬P)
8. ¬(R↔¬R)∧(R↔R)
A.
1. * P∧Q, P→Q; consistent.
1 P∧Q P
2 P→Q P
3 P 1∧D
4 Q 1∧D
5 ¬P Q 2→D
X 0
3. * P, P→Q, ¬Q; inconsistent.
1 P P
2 P→Q P
3 ¬Q P
4 ¬P Q 2→D
X X
B.
1. * [(P∨¬W)∧P]∧W; contingent.
Tree #1: Not a Contradiction
1 [(P∨¬W)∧P]∧W P
2 (P∨¬W)∧P 1∧D
3 W 1∧D
4 (P∨¬W) 2∧D
5 P 2∧D
6 ¬W P 4∨D
X 0
Tree #2: Not a Tautology
1 ¬([(P∨¬W)∧P]∧W) P
2 ¬[(P∨¬W)∧P] ¬W 1¬∧D
0
3. * (P∨¬P)∨Q; tautology.
1 (P∨¬P)∨Q P
2 P∨¬P Q 1∨D
0
1 ¬[(P∨¬P)∨Q] P
2 ¬(P∨¬P) 1¬∨D
3 ¬Q 1¬∨D
4 ¬P 2¬∨D
5 ¬¬P 2¬∨D
X
5. * (P↔Q)∧¬[(P→Q)∧(Q→P)]; contradiction.
1 (P↔Q)∧¬[(P→Q)∧(Q→P)] P
2 (P↔Q) 1∧D
3 ¬[(P→Q)∧(Q→P)] 1∧D
4 ¬(P→Q) ¬(Q→P) 3¬∧D
5 P Q 4¬→D
6 ¬Q ¬P 4¬→D
7 P ¬P P ¬P 2↔D
8 Q ¬Q Q ¬Q 2↔D
X X X X
C.
1. * P→Q, ¬P∨Q; equivalent.
1 ¬[(P→Q)↔(¬P∨Q)] P
2 P→Q ¬(P→Q) 1¬↔D
3 ¬(¬P∨Q) ¬P∨Q 1¬↔D
4 ¬¬P 3¬∨D
5 ¬Q 3¬∨D
6 P 4¬¬D
7 P 2¬→D
8 ¬Q 2¬→D
9 ¬P Q 2→D
X X
10 ¬P Q 3∨D
X X
7. * ¬P∨¬Q ⊢ ¬(P∨Q); invalid.
1 ¬P∨¬Q P
2 ¬¬(P∨Q) P
3 P∨Q 2¬¬D
4 P Q 3∨D
5 ¬P ¬Q ¬P ¬Q 1∨D
X 0 0 X
9. * ⊢ P→P; valid.
1 ¬(P→P) P
2 P 1¬→D
3 ¬P 1¬→D
X
Definitions
Equivalence A pair of propositions ‘P,’ ‘Q’ is equivalent if and only if ‘P’ and ‘Q’
have identical truth values under every valuation. A truth tree shows
that ‘P’ and ‘Q’ are equivalent if and only if a tree of ‘¬(P↔Q)’
determines a closed tree.
Validity An argument ‘P, Q, . . ., Y ⊢ Z’ is valid in PL if and only if it is im-
possible for the premises to be true and the conclusion false. A truth
tree shows that an argument ‘P, Q, . . ., Y ⊢ Z’ is valid in PL if and
only if ‘P,’ ‘Q,’ ‘R,’ . . ., ‘Y,’ ‘¬Z’ determines a closed tree.
Truth-Tree Vocabulary
Open branch An open branch is a branch that is not closed, that is, a branch
that does not contain a proposition and its literal negation.
Stacking rule A stacking rule is a truth-tree rule where the condition under
which a proposition ‘P’ is true is represented by stacking. A
stacking rule is applied to propositions that are true under one
truth-value assignment.
Truth tree A truth tree is a schematic decision procedure typically used for
the purpose of testing propositions, pairs of propositions, and
arguments for logical properties.
Consistency (at least 1 tree): for ‘{P, Q, R, . . ., Z},’ a truth tree for ‘P,’ ‘Q,’ ‘R,’ . . ., ‘Z’ shows the set is consistent iff it determines a completed open tree.
Inconsistency (at least 1 tree): for ‘{P, Q, R, . . ., Z},’ a truth tree for ‘P,’ ‘Q,’ ‘R,’ . . ., ‘Z’ shows the set is inconsistent iff it determines a closed tree.
Tautology (at least 1 tree): a truth tree for ‘¬P’ shows that ‘P’ is a tautology iff ‘¬P’ determines a closed tree.
Contradiction (at least 1 tree): a truth tree for ‘P’ shows that ‘P’ is a contradiction iff ‘P’ determines a closed tree.
Contingency (at least 2 trees): truth trees for ‘P’ and ‘¬P’ show that ‘P’ is a contingency iff neither ‘P’ nor ‘¬P’ determines a closed tree.
Equivalence (at least 1 tree): a truth tree for ‘¬(P↔Q)’ shows that ‘P’ and ‘Q’ are equivalent iff ‘¬(P↔Q)’ determines a closed tree.
Validity (at least 1 tree): for ‘P, . . ., Y ⊢ Z,’ a truth tree for ‘P,’ . . ., ‘Y,’ ‘¬Z’ shows the argument is valid iff the stack determines a closed tree.
Decomposable Propositions
¬(P→Q) decomposes by stacking (¬→D): P, then ¬Q.
P→Q decomposes by branching (→D): ¬P | Q.
¬¬P decomposes by stacking (¬¬D): P.
Note
1. See Merrie Bergmann, James Moor, and Jack Nelson, The Logic Book, 5th edition (Boston: McGraw-Hill Education, 2009), pp. 137–40.
In the previous chapters, we used truth tables and truth trees to test whether individual
propositions, sets of sentences, and arguments had a given semantic property. For
example, with respect to a set of propositions, a truth tree could be devised to test
whether the propositions in the set were consistent. These tests do not, however, corre-
spond to the reasoning that takes place in daily life. The goal of this chapter is to intro-
duce a system of natural deduction. A natural deduction system is a set of derivational
rules (general steps of reasoning) that mirror everyday reasoning in certain noteworthy
ways. The particular system will be called a system of propositional derivations, or
PD for short. Unlike truth tables and truth trees, PD is a system of syntactic rules in-
sofar as they are formulated on the basis of the structure of everyday reasoning. Once
the basics of PD have been mastered, we turn to a more advanced system of reasoning
and a set of reasoning strategies that, while deviating from everyday reasoning, makes
reasoning more efficient.
In this chapter, our principal concern will be learning how to solve a variety of proofs
in an efficient manner. A proof is a finite sequence of well-formed formulas (wffs), or
propositions, each of which is either a premise, an assumption, or the result of preced-
ing formulas and a derivation rule.
Let’s unpack this definition. First, in calling a proof a finite sequence of well-formed
formulas, we are simply saying that no proof will be infinitely long, and no proof will
contain propositions that are not well-formed formulas. Let’s call the end point of any
proof the conclusion. Thus, every proof will have an ending point, and every proposition
161
in the proof should obey the formation rules laid down in chapter 2. Second, every line
or proposition in the proof will be one of three types: (1) a premise, (2) an assumption, or
(3) the result of some combination of (1) or (2) plus a derivation rule. A derivation rule
is an explicitly stated rule of PD that allows for moving forward in a proof. For example,
if there is a derivation rule that says whenever you have ‘P’ and ‘Q,’ you can move a step
forward in the proof to the proposition ‘P∧Q,’ then this derivation rule would justify (or
legitimate) ‘P∧Q’ in a proof involving ‘P’ and ‘Q.’
A special symbol is introduced in proofs. This symbol is the syntactic (or single)
turnstile (⊢). In previous chapters, we used ‘⊢’ to represent arguments. In this chapter,
‘⊢’ takes on a more precise meaning, namely, that of syntactic entailment. Thus,
P∧R ⊢ R
means that there is a proof of ‘R’ from ‘P∧R,’ or that ‘R’ is a syntactic consequence of
‘P∧R.’ In the above example, ‘R’ is the conclusion of the proof, whereas ‘P∧R’ is
its premise. In addition,
⊢ R
means that there is a proof of ‘R,’ or that ‘R’ is a theorem. In the above example, ‘R’
is the conclusion of a proof with no premises.
The simple idea of a proof then is one that begins with a set of premises and in
which each subsequent step in the proof is justified by a specified rule. To see how this
might look, let’s examine the following argument: ‘R∨S, ¬S ⊢ R.’ Setting up a proof
is relatively simple. There are three components. The first is a numerical ordering of
the lines of the proof. The second is a listing of the set of propositions that are the
premises of the proof. The third is the labeling of each of these premises with a ‘P’ for
premise. Thus, setting up the proof for ‘R∨S, ¬S ⊢ R’ looks as follows:
Line
Number Proposition Premises
1 R∨S P
2 ¬S P
As the proof proceeds, derivation rules will be used to derive propositions. These
propositions are listed under the premises and justified by citing any propositions used
in the argument form and the abbreviation for the derivation rule. Thus,
Line
Number Proposition Premises/Justification
1 R∨S P
2 ¬S P
3 R 1,2 + derivation rule abbreviation
Once the conclusion is reached in a proof, the proof is finished. Thus, in the case of
‘R∨S, ¬S ⊢ R,’ since ‘R’ is the conclusion, the proof is completed at line 3.
Certain propositions are premises. The justification for premises is symbolized by the
letter ‘P.’ So, if we were asked to prove ‘A→B, B→C ⊢ C→D,’ the proof would be
set up as follows:
1 A→B P
2 B→C P
Goal Proposition
1 A→B P
2 B→C P/C→D
In what follows, the derivation rules for PD are introduced. PD is an intelim system.
That is, for every propositional operator (¬, ∧, ∨, →, ↔), there are two derivation
rules: an introduction rule and an elimination rule. An introduction rule for a particu-
lar operator is a derivation rule that introduces a proposition with that operator into the
proof. An elimination rule for a particular operator is a derivation rule that crucially
begins with (or uses) a proposition with that operator in the proof.
W, Q, R ⊢ W∧R
Begin by writing the premises, labeling them with ‘P,’ and indicating that the goal
of the proof is ‘W∧R.’
1 W P
2 Q P
3 R P/W∧R
The goal of the proof is to derive ‘W∧R,’ which can be derived by using ‘∧I’ on
lines 1 and 3.
1 W P
2 Q P
3 R P/W∧R
4 W∧R 1,3∧I
Notice that line 4 is justified by using the conjunction introduction derivation rule,
and it is applied to lines 1 and 3.
It is important to remember that ‘P’ and ‘Q’ in the above form are metavariables
for propositions. Since ‘P’ and ‘Q’ are metavariables, ‘∧I’ can be used on both atomic
and complex propositions. Consider the following argument:
A→B, D ⊢ (A→B)∧D
1 A→B P
2 D P/(A→B)∧D
3 (A→B)∧D 1,2∧I
Next, consider the argument ‘(R↔S), W∧¬T, Z ⊢ [(R↔S)∧(W∧¬T)]∧Z’:
1 (R↔S) P
2 W∧¬T P
3 Z P/[(R↔S)∧(W∧¬T)]∧Z
Note that this proof will require multiple uses of ‘∧I’ in a particular order.
1 (R↔S) P
2 W∧¬T P
3 Z P/[(R↔S)∧(W∧¬T)]∧Z
4 (R↔S)∧(W∧¬T) 1,2∧I
5 [(R↔S)∧(W∧¬T)]∧Z 3,4∧I
In the introduction, it was noted that a natural deduction system is a set of deri-
vational rules (general steps of reasoning) that mirror everyday reasoning in certain
noteworthy ways. Conjunction introduction seems to do just this. For consider the
following argument:
1 John is angry. P
2 Liz is angry. P
3 John is angry, and Liz is angry. Conjunction introduction
In the above argument, John is angry, and Liz is angry follows from John is an-
gry and Liz is angry. More generally, it seems whenever we have two propositions,
it is legitimate to derive a second proposition that is the conjunction of these two
propositions.
Finally, it is important to note that conjunction introduction, as a derivation form,
is a statement about syntactic entailment. That is, the derivation rule is formulated
independently of the truth or falsity of the premises. However, while the rules are
formulated purely in terms of their structure, the rules we include in PD are guided
by semantic concerns. That is, we choose rules that are deductively valid. Given this
consideration, we can check whether or not we should include a given derivation rule
into PD using a truth table or truth tree.
In the case of ‘∧I’, consider the following truth table:
P Q   P∧Q
T T    T
T F    F
F T    F
F F    F
The above table shows that for ‘P, Q⊢P∧Q,’ in no truth-value assignment is it pos-
sible for the premises ‘P’ and ‘Q’ to be true and the conclusion ‘P∧Q’ false.
1 P P
2 Q P
3 ¬(P∧Q) P
4 ¬P ¬Q 3¬∧D
X X
The above tree shows that the premises and the negation of the conclusion produce
a closed tree; the argument form is therefore valid.
1 (P∧Q)∧W P
2 P∧Q 1∧E
Notice that ‘∧E’ was applied to line 1 to infer ‘P∧Q.’ When using ‘∧E’ either of the
conjuncts of the conjunction can be inferred. Thus, another application of ‘∧E’ allows
for inferring the right conjunct ‘W’:
1 (P∧Q)∧W P
2 P∧Q 1∧E
3 W 1∧E
Notice that line 2 is also a conjunction. This means that ‘∧E’ can also be applied to
‘P∧Q,’ allowing for an inference to either of its conjuncts.
1 (P∧Q)∧W P
2 P∧Q 1∧E
3 W 1∧E
4 P 2∧E
5 Q 2∧E
(A→B)∧(C∧D) ⊢ D
1 (A→B)∧(C∧D) P/D
2 C∧D 1∧E
3 D 2∧E
Notice that the proposition in line 1 is a conjunction. Thus, ‘∧E’ can be applied to
it in order to derive either of the conjuncts. Since the goal of the proof is to derive ‘D’
rather than ‘A→B,’‘∧E’ is used to derive ‘C∧D’ at line 2. At this point, another use
of ‘∧E’ allows for deriving ‘D’ and finishes the proof.
Again, conjunction elimination closely resembles the way everyday reasoning oc-
curs. For instance,
John is in the park, and Mary is in the subway. Therefore, John is in the park.
1 P∧Q P
2 ¬P P
3 P 1∧D
4 Q 1∧D
X
R∧B, D ⊢ (B∧D)∧R
1 R∧B P
2 D P/(B∧D)∧R
3 R 1∧E
4 B 1∧E
5 B∧D 2,4∧I
6 (B∧D)∧R 3,5∧I
In the above proof, it was necessary to make use of conjunction elimination to break
the complex propositions down into their component parts and to use conjunction
introduction on key atomic propositions to build up the desired complex proposition.
Keep this strategy of breaking down and building up in mind as you practice the ex-
ercises below.
Exercise Set #1
1. * (P∧Q)∧W ⊢ P
1 (P∧Q)∧W P
2 P∧Q 1∧E
3 P 2∧E
3. * M, F, R ⊢ (M∧F)∧R
1 M P/(M∧F)∧R
2 F P
3 R P
4 M∧F 1,2∧I
5 (M∧F)∧R 3,4∧I
5. * P→Q, S∧M ⊢ (P→Q)∧M
1 P→Q P/(P→Q)∧M
2 S∧M P
3 M 2∧E
4 (P→Q)∧M 1,3∧I
Let’s assume that God does exist. If God is as great as you say he is, then there
should be no poverty or suffering in the world. But there is suffering in the world.
This is inconsistent! Therefore, God does not exist.
The above argument does not start by simply asserting, God exists. It instead begins
by assuming that God exists and then, given this proposition and others, involves a
line of reasoning on the basis of this assumption. One way to look at the argument
above is that it involves two proofs. There is the main proof, which aims at the con-
clusion God does not exist, and there is a subproof, which aims to show that anyone
who assumes (or believes) that God does exist is forced into believing something that
cannot be the case, namely, something inconsistent (i.e., ‘There is both suffering and
no suffering in the world’).
Main line
Subproof
|
| |
▼ ▼
1 There is suffering in the world. Premise
2 If God exists, there is no suffering. Premise
3 Assume God exists. A
4 There is no suffering. From (2) and (3)
5 There is suffering. From (1)
6 God does not exist. Conclusion
Let’s assume that God does not exist. If God does not exist, then the universe just
magically came into existence out of nothing. But the universe did not come into
existence from nothing. Therefore, you are contradicting yourself in saying that
God does not exist and the world exists. Therefore, God does exist.
In the above case, your friend starts with an assumption that she does not believe,
namely, that God does not exist. Using this assumption, your friend reasons—within
a subproof—that if God does not exist is true, then The world both exists and does
not exist is true, which is inconsistent. Thus, your friend ultimately concludes that
God does exist.
Main line
| Subproof
|
| |
| |
▼ ▼
1 The universe cannot magically begin. Premise
2 If God does not exist, then the universe magically began. Premise
3 Assume God does not exist. A
4 The universe magically began. From (2) and (3)
5 The universe cannot magically begin. From (1)
6 God does exist. Conclusion
Main line
| Subproof
|
| |
| |
▼ ▼
1 S Premise
2 B A
3 B∧S 1,2∧I
Notice that at line 2, ‘B’ is assumed, and so the proof enters a subproof. In the above
case, the subproof is nested in (or built on) the main line of the proof.
Within a subproof, it is possible to make use of derivation rules and even to make
further assumptions. An example of a legitimate use of reasoning is ‘∧I’ at line 3
above. Further assumptions can also be made, producing a subproof within
a subproof. For example,
1 Q P
2 S A
3 W A
In the example above, the proof starts with ‘Q’ in the main line of the proof. Next,
at line 2, ‘S’ is assumed, which begins the first subproof (call this subproof1). Note
that subproof1 is not simply an independent proof but is part of a proof that is built
on the main line of the proof. Building one part of the proof on another in this way is
called nesting. In the above example, the main line of the proof nests subproof1 (or,
alternatively, subproof1 is in the nest of the main line) since subproof1 is built on the
main line. In addition, subproof1 is more deeply nested than the main line since the
main line nests (or contains) subproof1.
Next, at line 3, ‘W’ is assumed, which begins a second subproof (call this subproof2).
Again, notice that the assumption that begins subproof2 is not independent of subproof1.
Instead, subproof2 is built upon subproof1 and the main line of the proof. This means
that the main line nests subproof1, and subproof1 nests subproof2. In addition, subproof2
is more deeply nested than both subproof1 and the main line of the proof.
Here is a graphical representation:
1 Main line P
2 Subproof1 A
3 Subproof2 A
One way to think about the above argument is in terms of a conversation. That is,
the above example is similar to someone uttering the following:
1 A P
2 B A
3 A∧B 1,2∧I
4 C A
5 A∧C 1,4∧I
In the above case, there are two subproofs, but neither subproof nests the other. That
is, the subproof beginning with ‘C’ at line 4 is not built upon the subproof beginning
with ‘B’ at line 2. It is a separate subproof. In plain English, it is as though the follow-
ing conversation is taking place:
1 A P
2 B A
3 A∧B 1,2∧I
4 C A
5 C∧B 4,2∧I—NO!
Notice that the above example involves transferring a proposition from one sub-
proof into another subproof. That is, ‘B’ is assumed and then used in a different
subproof at line 5.
Consider another example:
1 A P
2 B∧C A
3 B 2∧E
4 A∧(B∧C) 1,2∧I—NO!
5 A∧B 1,3∧I—NO!
In the above case, lines 4 and 5 are not acceptable because propositions within the
subproof are taken outside the subproof. It would be as if you said, ‘Assume that I
am the greatest person in the world; therefore, I am the greatest person in the world.’
However, propositions from outside the subproof can be taken into a subproof that
it contains. For example,
1 A P
2 B∧C A
3 A∧(B∧C) 1,2∧I
Q ⊢ P→Q
1 Q P/P→Q
2 P A
3 P∧Q 1,2∧I
4 Q 3∧E
5 P→Q 2–4→I
Notice that conditional introduction begins by assuming ‘P’ (the antecedent of the
conclusion); then ‘Q’ (the consequent of the conclusion) is derived in the subproof;
finally, a conditional ‘P→Q’ is derived outside the subproof. When a subproof is
completed, we say that the assumption of the subproof has been discharged. That is,
the force or positive charge of the assumption, as well as the corresponding subproof in
which it occurs, is no longer in effect. In addition, we call a subproof with an undis-
charged assumption an open subproof and a subproof with a discharged assumption
a closed subproof.
Let’s consider the following proof in a more step-by-step manner:
S→D ⊢ S→(S→D)
First, start by writing down the premises and indicating the goal proposition of the
proof:
1 S→D P/S→(S→D)
Next, since the goal proposition is a conditional, assume its antecedent ‘S,’ indicating
that the goal within the subproof is ‘S→D’:
1 S→D P/S→(S→D)
2 S A/S→D
With ‘S’ assumed, the next step is to derive ‘S→D’ in the subproof.
1 S→D P/S→(S→D)
2 S A/S→D
3 S∧(S→D) 1,2∧I
4 S→D 3∧E
With ‘S→D’ derived in the subproof, we can discharge the assumption (close the
subproof) by using ‘→I’:
1 S→D P/S→(S→D)
2 S A/S→D
3 S∧(S→D) 1,2∧I
4 S→D 3∧E
5 S→(S→D) 2–4→I
The proof is now complete! When using ‘→I,’ you exit the subproof with a condi-
tional consisting of the assumption of the subproof as the antecedent and the derived
conclusion in the subproof as the consequent. In the case above, ‘S’ is the assumption
in the subproof, and ‘S→D’ is the derived proposition in the subproof. Conditional
introduction allows for deriving the proposition ‘S→(S→D)’ out of the subproof in
which ‘S’ and ‘S→D’ are found. Once ‘→I’ is used at line 5, the assumption ‘S’ is
discharged and the subproof is closed (i.e., not open).
When working with multiple subproofs, it is important to realize that the use of
conditional introduction only allows for deriving a conditional out of the subproof
containing the assumption and the derived proposition. In short, you can only use
conditional introduction to derive a conditional out of one subproof. Consider the
following proofs:
1 P P
2 | R A
3 | | Z A
4 | | Z∧R 2,3∧I
5 | | R 4∧E
6 | Z→R 3–5→I—OK!
1 P P
2 | R A
3 | | Z A
4 | | Z∧R 2,3∧I
5 | | R 4∧E
6 Z→R 3–5→I—NO!
In the first proof, ‘Z→R’ is properly derived out of the subproof where ‘Z’ is the
assumption. In the second proof, ‘Z→R’ is improperly derived not only out of the
subproof where ‘Z’ is the assumption but also the subproof involving ‘R.’
Here is a final example involving the use of conditional introduction. Prove the
following:
A ⊢ B→[B∧(C→A)]
1 A P/B→[B∧(C→A)]
2 B A/B∧(C→A)
3 C A/A
4 C∧A 1,3∧I
5 A 4∧E
6 C→A 3–5→I
7 B∧(C→A) 2,6∧I
8 B→[B∧(C→A)] 2–7→I
Notice that the above proof involves an auxiliary assumption ‘C,’ and while propo-
sitions can be brought into the subproof, any proposition involving ‘C’ cannot be used
outside the subproof until it has been discharged with ‘→I.’
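What a discharge step computes can be sketched with two toy helpers (the helper names and tuple encoding are ours, not part of PD): closing a subproof with assumption ‘P’ and derived proposition ‘Q’ yields ‘P→Q.’

```python
# Toy sketch of what '→I' and '∧I' produce, using nested tuples for propositions.

def imp_intro(assumption, derived):
    """→I: discharging assumption P after deriving Q yields P→Q."""
    return ('imp', assumption, derived)

def conj_intro(left, right):
    """∧I: from P and Q, form P∧Q."""
    return ('and', left, right)

# Mirroring 'A ⊢ B→[B∧(C→A)]' above: the inner subproof (assume 'C',
# derive 'A') is discharged first, then the outer subproof (assume 'B').
inner = imp_intro('C', 'A')                     # line 6: C→A
outer = imp_intro('B', conj_intro('B', inner))  # line 8: B→[B∧(C→A)]
print(outer)  # ('imp', 'B', ('and', 'B', ('imp', 'C', 'A')))
```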
1 (A∨B)→C P
2 A P
3 A∨B P/C
4 C 1,3→E
In the above example, notice that ‘→E’ is used on the propositions occurring in
lines 1 and 3. It is not permissible to use ‘→E’ on lines 1 and 2 because in order to
use ‘→E,’ you need a proposition that is the entire antecedent of the conditional. One
easy way to remember ‘→E’ is that it is a rule that affirms the antecedent, and since
only conditionals have antecedents, this rule will only apply to conditionals.
Let’s consider a more complicated use of ‘→E’ in the following proof:
1 A→B P
2 A P
3 B→C P
4 C→D P/D
5 B 1,2→E
6 C 3,5→E
7 D 4,6→E
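A single ‘→E’ step can be sketched as a small function (names and encoding are ours). Note how it enforces the restriction above: the second input must be the entire antecedent of the conditional.

```python
def imp_elim(conditional, antecedent):
    """→E (modus ponens): from 'P→Q' and 'P' (the entire antecedent), infer 'Q'."""
    op, p, q = conditional
    assert op == 'imp' and antecedent == p, 'antecedent must match exactly'
    return q

# The chain 'A→B, A, B→C, C→D ⊢ D' from the proof above:
b = imp_elim(('imp', 'A', 'B'), 'A')   # line 5: B
c = imp_elim(('imp', 'B', 'C'), b)     # line 6: C
d = imp_elim(('imp', 'C', 'D'), c)     # line 7: D
print(d)  # D
```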
5 Reiteration (R)
Any proposition ‘P’ that occurs in a proof or subproof may be rewritten at a level
of the proof that is equal to ‘P’ or more deeply nested than ‘P.’
P
P R
1 A→B P
2 A P
3 A 2R
Line 3 involves a use of reiteration from line 2. Next, turn to a slightly more com-
plex example:
1 A→B P
2 A P
3 ¬(B∨C) P
4 A→B 1R
5 A 2R
6 ¬(B∨C) 3R
There are two restrictions on the use of ‘R.’ It is not acceptable to
(1) reiterate a proposition from a more deeply nested part of the proof into a less
deeply nested part of the proof;
(2) reiterate a proposition from one part of the proof into another part that is not
within its nest.
Another way of putting (1) is that it is not acceptable to reiterate a proposition out of
a subproof, but it is acceptable to reiterate a proposition into a subproof. For example,
1 P→Q P
2 S A
3 W A
4 P→Q 1R
5 S 2R
6 W 3R—NO!
7 S 2R—NO!
Notice that while the use of ‘R’ at lines 4 and 5 is acceptable, the use of ‘R’ at lines
6 and 7 reiterates propositions from a more nested part of the proof into a less nested
part of the proof, which is not acceptable.
To see why this use of reiteration is invalid, consider the following argument, which
does not obey the above restriction.
This argument is clearly invalid since, from the mere assumption that I am the richest
person in the world, it does not follow outside that assumption that I am the richest
person in the world.
Consider the second restriction on the use of ‘R’; namely, it is not acceptable to
(2) reiterate a proposition from one part of the proof into another part that is not
within its nest.
1 P P
2 S A
3 P∧S 1,2∧I
4 T A
Notice that line 4 begins a subproof that is not nested (or contained) in the previous
subproof beginning at line 2. That is, it begins a subproof that, while contained in the
main line of the proof, is independent of the subproof that begins at line 2.
1 P P
2 S A
3 P∧S 1,2∧I
4 T A
5 S 2R—NO!
Notice that line 5 violates the second restriction since ‘S’ is reiterated from one
subproof into another that is not within its nest.
Finally, it is also important to note one distinguishing feature of reiteration, namely,
that it is a derived rule. This means that any use of reiteration is somewhat superfluous
since the inference that it achieves can be achieved using the existing set of derivation rules. To see this more clearly, consider the proof of the valid argument ‘R ⊢ R.’
1 R P/R
2 R A/R→R
3 R∧R 1,2∧I
4 R 3∧E
5 R→R 2–4→I
6 R 1,5→E
Although the introduction of reiteration into our set of derivation rules is not es-
sential, it is extremely convenient since the proof above can be simplified into the
following proof.
1 R P/R
2 R 1R
Exercise Set #2
1. * (A∧B)→C, A, B ⊢ C
1 (A∧B)→C P/C
2 A P
3 B P
4 A∧B 2,3∧I
5 C 1,4→E
3. * P, Q ⊢ P→Q
1 P P
2 Q P/P→Q
3 P A/Q
4 Q 2R
5 P→Q 3–4→I
5. * A, B ⊢ C→(A∧B)
1 A P
2 B P/C→(A∧B)
3 C A/A∧B
4 A∧B 1,2∧I
5 C→(A∧B) 3–4→I
7. * P ⊢ P, without using reiteration
1 P P/P
2 P A/P→P
3 P∧P 1,2∧I
4 P 3∧E
5 P→P 2–4→I
6 P 1,5→E
9. * ⊢ A→(A∧A)
1 A A/A∧A
2 A 1R
3 A∧A 1,2∧I
4 A→(A∧A) 1–3→I
Negation introduction is a derivation rule where ‘P’ is assumed, and in the course
of a subproof, an inconsistency of the form ‘Q’ and ‘¬Q’ is derived. Once an incon-
sistency is shown to follow from ‘P,’ negation introduction (¬I) allows for deriving
‘¬P’ out of the subproof.
Negation elimination follows a similar procedure, except the initial assumption is a
negated proposition ‘¬P’ and the proposition derived is the unnegated form (i.e., ‘P’).
Notice that ‘¬P’ is assumed, a contradiction is derived, and ‘P’ is discharged. The
idea again is that if ‘¬P’ leads to a contradiction, then ‘P’ must be the case.
The basic idea in using ‘¬I’ and ‘¬E’ is to (1) assume a proposition in a subproof,
(2) derive a contradiction, and (3a) derive the literal negation of the assumed proposi-
tion outside the subproof, or (3b) derive the unnegated form of the assumed proposi-
tion outside the subproof. Here is an example:
A→D, ¬D ⊢¬A
1 A→D P
2 ¬D P
3 A A/¬I
4 D 1,3→E
5 ¬D 2R
6 ¬A 3–5¬I
Thus, since the assumption of ‘A’ leads to an explicit inconsistency, it must be the
case that ‘¬A.’ Let us consider a case of negation elimination, that is, a use of ‘¬E.’
For example, ‘(A→D)∧A, ¬D ⊢C.’
1 (A→D)∧A P
2 ¬D P
3 ¬C A/¬E
4 A→D 1∧E
5 A 1∧E
6 D 4,5→E
7 ¬D 2R
8 C 3–7¬E
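Why can negation elimination deliver the arbitrary conclusion ‘C’ here? Semantically (sketched below in Python, outside the book's notation), the premises ‘(A→D)∧A’ and ‘¬D’ are jointly unsatisfiable: no row of the truth table makes them all true, so the argument is vacuously valid.

```python
from itertools import product

# Search for a valuation that satisfies both premises of
# (A->D)&A, ~D |- C.  There is none, which is why any conclusion
# whatsoever follows.
def implies(p, q):
    return (not p) or q

satisfying_rows = [
    (A, D)
    for A, D in product([True, False], repeat=2)
    if (implies(A, D) and A) and (not D)
]
print(satisfying_rows)  # [] -- the premises are jointly unsatisfiable
```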
Exercise Set #3
1. * A ⊢ B→A
1 A P/B→A
2 B A/A
3 A 1R
4 B→A 2–3→I
3. * (P→Q)→W, Q ⊢ W
1 (P→Q)→W P
2 Q P/W
3 P A/Q
4 Q 2R
5 P→Q 3–4→I
6 W 1,5→E
5. * (A∧B)∧C, R∧(W∧¬C) ⊢ S
1 (A∧B)∧C P
2 R∧(W∧¬C) P/S
3 C 1∧E
4 W∧¬C 2∧E
5 ¬C 4∧E
6 ¬S A/P, ¬P
7 C 3R
8 ¬C 5R
9 S 6–8¬E
7. * (A∨B)→M, M→¬(A∨B) ⊢ ¬(A∨B)
1 (A∨B)→M P
2 M→¬(A∨B) P/¬(A∨B)
3 (A∨B) A/P, ¬P
4 M 1,3→E
5 ¬(A∨B) 2,4→E
6 (A∨B) 3R
7 ¬(A∨B) 3–6¬I
9. * A, B, B→¬A ⊢ C→D
1 A P
2 B P
3 B→¬A P/C→D
4 ¬A 2,3→E
5 ¬(C→D) A/P, ¬P
6 A 1R
7 ¬A 4R
8 C→D 5–7¬E
P ⊢(P∨W)∧(Z∨P)
1 P P/(P∨W)∧(Z∨P)
2 P∨W 1∨I
3 Z∨P 1∨I
4 (P∨W)∧(Z∨P) 2,3∧I
In the above proof, there are two different uses of ‘∨I’ on ‘P’ in line 1. Notice that
the use of ‘∨I’ only applies to a single proposition and allows for deriving a disjunction.
Consider a more complex example. Prove the following:
[P∨(Q∨R)]→W, R ⊢W
1 [P∨(Q∨R)]→W P
2 R P/W
3 Q∨R 2∨I
4 P∨(Q∨R) 3∨I
5 W 1,4→E
In the above example, the desired conclusion is ‘W.’ Notice that ‘W’ could be
derived if ‘P∨(Q∨R)’ were in the proof. Using multiple instances of ‘∨I’ on ‘R’ in
line 2 allows us to obtain this proposition. Notice that ‘∨I’ can be applied to complex
propositions, even propositions that are already disjunctions.
One more proof:
P, (P∨¬W)→R ⊢R∨¬(Z∨Q)
1 P P
2 (P∨¬W)→R P/R∨¬(Z∨Q)
3 P∨¬W 1∨I
4 R 2,3→E
5 R∨¬(Z∨Q) 4∨I
This proof requires two uses of ‘∨I.’ The first use is similar to the one in the previous proof: ‘∨I’ is applied to ‘P’ to derive ‘P∨¬W’ in order to derive ‘R’ from line 2.
Once ‘R’ is inferred at line 4, ‘∨I’ is used again to infer the disjunction.
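A brute-force validity check of this last proof (a sketch in Python, not part of PD) confirms that ‘P, (P∨¬W)→R ⊢ R∨¬(Z∨Q)’ is valid on all thirty-two rows.

```python
from itertools import product

# Check P, (P v ~W) -> R |- R v ~(Z v Q): the conclusion must be true
# on every valuation that makes both premises true.
def implies(p, q):
    return (not p) or q

valid = all(
    R or not (Z or Q)                     # conclusion R v ~(Z v Q)
    for P, W, R, Z, Q in product([True, False], repeat=5)
    if P and implies(P or not W, R)       # both premises true
)
print(valid)  # True
```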
P∨Q
P A
.
.
.
R
Q A
.
.
.
R
R ∨E
In the above argument form, each of the disjuncts from ‘P∨Q’ is separately as-
sumed. From within both of these subproofs, ‘R’ is derived by some unspecified form
of valid reasoning. If ‘R’ can be derived in both subproofs, then ‘R’ can be derived
from ‘P∨Q.’
Here is an example involving ‘∨E’:
1 P∨Q P
2 P→R P
3 Q→R P/R
4 P A
5 R 2,4→E
6 Q A
7 R 3,6→E
8 R 1,4–5,6–7∨E
Notice that since ‘P’ implies ‘R’ in a subproof, and ‘Q’ implies ‘R’ in a subproof,
‘R’ can be discharged from the subproof. Also, notice that in the justification column,
the use of ‘∨E’ requires citing the original disjunction and all propositions in the two
subproofs. Here is another example:
1 P∨(Q∧T) P
2 P→T P/T
3 P A
4 T 2,3→E
5 Q∧T A
6 T 5∧E
7 T 1,3–4,5–6∨E
Again, the use of ‘∨E’ involves two separate subproofs. First, the proof begins by
assuming one of the disjuncts and deriving ‘T’; then there is a separate assumption
(involving the other disjunct), and the same proposition (i.e., ‘T’) is derived. Once ‘T’
is derived in both subproofs, then ‘T’ can be derived outside the subproof.
Here is a more complex proof involving ‘∨E.’ Prove the following:
1 S→W P
2 M→W P
3 P∧(R∨T) P
4 R→(S∨M) P
5 T→(S∨M) P
6 R∨T 3∧E
7 R A
8 S∨M 4,7→E
9 T A
10 S∨M 5,9→E
11 S∨M 6,7–8,9–10∨E
12 S A
13 W 1,12→E
14 M A
15 W 2,14→E
16 W 11,12–13,14–15∨E
Suppose that you wanted to show that John will have a great evening follows
from John will either go to the party or stay home. In order to show this, you
need to show both that if John goes to the party, he will have a great time and
that if John stays home, he will also have a great time. The reason that it needs
to follow from both disjuncts is because if John will have a great evening only if
he goes to the party and not if he stays home (and vice versa), then it is possible
for the premise John will either go to the party or stay home to be true, and the
conclusion John will have a great evening to be false.
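The same point can be checked semantically (a sketch in Python, outside the book's notation): if ‘R’ is only shown to follow from the first disjunct, ‘P∨Q ⊢ R’ is invalid, because some row makes ‘Q’ (and hence ‘P∨Q’) true while ‘P’ and ‘R’ are false.

```python
from itertools import product

# Search for a row where P v Q and P -> R are true but R is false --
# a counterexample to inferring R without the second subproof.
def implies(p, q):
    return (not p) or q

counterexamples = [
    (P, Q, R)
    for P, Q, R in product([True, False], repeat=3)
    if (P or Q) and implies(P, R) and not R
]
print(counterexamples)  # [(False, True, False)]
```

The single counterexample row is exactly the "John stays home" case: the disjunction is true in virtue of the other disjunct.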
P A
.
.
.
Q
Q A
.
.
.
P
P↔Q ↔I
P→Q, Q→P ⊢ P↔Q.
1 P→Q P
2 Q→P P/P↔Q
3 P A
4 Q 1,3→E
5 Q A
6 P 2,5→E
7 P↔Q 3–4,5–6↔I
P↔Q
Q
P ↔E
Let’s look at two examples (simple and complex) that use ‘↔E’ and then look at a
proof involving both ‘↔I’ and ‘↔E.’ Prove the following:
1 (P↔Q)↔(R↔T) P
2 R→T P/P↔Q
3 P↔Q 1,2↔E
In order to infer one side of the biconditional, it is necessary to have the other side
at some line in the proof. Since the right-hand side of the biconditional is at line 2, the
left-hand side can be derived using ‘↔E.’ Here is another example:
1 P↔Q P
2 P P
3 (Q∧P)↔W P/W
4 Q 1,2↔E
5 Q∧P 2,4∧I
6 W 3,5↔E
The above proof involves two uses of ‘↔E.’ The first use at line 4 is straightforward. However, notice that in the second use, in order to derive ‘W’ at line 6, ‘Q∧P’ is needed.
Finally, consider an example that combines both ‘↔I’ and ‘↔E.’ Prove the fol-
lowing:
P↔Q, Q ⊢ (P∨¬Z)↔(¬Z∨P)
1 P↔Q P
2 Q P/(P∨¬Z)↔(¬Z∨P)
3 P 1,2↔E
4 P∨¬Z A/(¬Z∨P)
5 ¬Z∨P 3∨I
6 ¬Z∨P A
7 P∨¬Z 3∨I
8 (P∨¬Z)↔(¬Z∨P) 4–5,6–7↔I
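As a quick semantic cross-check of this combined proof (a sketch in Python, outside the book's notation), the conclusion ‘(P∨¬Z)↔(¬Z∨P)’ comes out true on every row, so the argument is certainly valid.

```python
from itertools import product

# Check P<->Q, Q |- (P v ~Z) <-> (~Z v P).  The conclusion is in fact
# a tautology (disjunction is commutative), so it holds on every row
# where the premises are true.
valid = all(
    (P or not Z) == ((not Z) or P)        # the biconditional conclusion
    for P, Q, Z in product([True, False], repeat=3)
    if (P == Q) and Q                      # premises P<->Q and Q
)
print(valid)  # True
```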
Exercise Set #4
16. ⊢ ¬(A∧¬A)
17. P→Q, P⊢ Q→Q
18. A∨B, A→R, B→R⊢ W→R
19. P⊢ P→P
20. P→Q, Q→P⊢ P↔Q
21. (P∨W)→¬Q, P↔¬Q, W⊢ P
22. P, Q, R, S⊢ (P↔Q)↔(R↔S)
23. P↔(Q∨R), P, ¬Q⊢ R
B. Translate the following English arguments into propositional logic and prove
these arguments using the derivation rules in PD.
1. If John is happy, then Mary is sad. John is happy. Therefore, Mary is sad.
2. If John is happy, then Mary is sad. Mary is not sad. Therefore, John is not
happy.
3. John is happy, and Mary is happy. If John is happy, then John loves Mary.
If Mary is happy, then Mary loves John. Therefore, John loves Mary, and
Mary loves John.
4. God is good, and God is great. It follows that if God is good or God is not
good, then God is great.
5. If God is all-knowing and all-powerful and all-loving, then there is no evil
in the world. There is evil in the world. Therefore, it is not the case that
God is all-knowing and all-powerful and all-loving.
6. John will run from the law if and only if (iff) the police are after him. John
will run from the law. Thus, the police are after John.
7. John is a criminal. If John is a criminal, then the police are after him. If
John is a criminal, then John will run from the law. It follows that John
will run from the law if and only if the police are after him.
8. If the murder weapon is John’s or a witness saw John commit the crime,
then John is not innocent. A witness did see John commit the crime. Thus,
it follows that John is not innocent.
9. John is the murderer, or he isn’t. If John is the murderer, then there is
strong evidence showing that he is guilty. There is strong evidence show-
ing John is guilty if and only if the prosecution can show he pulled the
trigger. But it is not the case that the prosecution can show John pulled the
trigger. Thus, it follows that John is not the murderer.
10. Taxes will go up, or they won’t. If taxes go up, then people will lose
their jobs, and the price of housing will decline. If taxes don’t go up, then
people will lose their jobs, and the price of housing will decline. It follows
that people will lose their jobs.
11. If Mr. Z wins the election or gets control of the military, then Mr. Z will
either dissolve the federal government or save the country. Mr. Z has got-
ten control of the military if and only if he has convinced the generals that
the current president is inept. Mr. Z has convinced the generals that the
current president is inept and will not save the country. It follows that Mr.
Z will dissolve the federal government.
12. If John runs every day, then he has a healthy heart. If John has a healthy
heart, he won’t have a heart attack. It follows that if John runs every day,
he won’t have a heart attack.
13. If John runs every day and eats properly, then he will live a long life. John
will not live a long life. It follows that John does not both run every day
and eat properly.
14. If John is innocent, then the bloody glove found at the scene of the crime
and the murder weapon are not John’s, and John was not seen at the scene of
the crime. But the bloody glove is John’s, the murder weapon is John’s, and
John was seen at the scene of the crime. It follows that John is not innocent.
1. * P→Q, P ⊢Q
1 P→Q P
2 P P/Q
3 Q 1,2→E
3. * P, (P∨Q)→R ⊢R
1 P P
2 (P∨Q)→R P/R
3 P∨Q 1∨I
4 R 2,3→E
5. * P, P→Q ⊢Q∨M
1 P P
2 P→Q P/Q∨M
3 Q 1,2→E
4 Q∨M 3∨I
7. * P→Q, Q→R, P ⊢R
1 P→Q P
2 Q→R P
3 P P/R
4 Q 1,3→E
5 R 2,4→E
9. * (A∨B)∨C, (A∨B)→D, C→D ⊢ D∨M
1 (A∨B)∨C P
2 (A∨B)→D P
3 C→D P/D∨M
4 A∨B A
5 D 2,4→E
6 C A
7 D 3,6→E
8 D 1,4–5,6–7∨E
9 D∨M 8∨I
11. * P, Q ⊢P↔Q
1 P P
2 Q P/P↔Q
3 P A
4 Q 2R
5 Q A
6 P 1R
7 P↔Q 3–4,5–6↔I
13. * A→B, ¬B, A ⊢W
1 A→B P
2 ¬B P
3 A P/W
4 B 1,3→E
5 ¬W A/P, ¬P
6 ¬B 2R
7 B 4R
8 W 5–7¬E
If you struggled on a number of the exercises in the preceding sections, you are not
alone. Learning to do proofs quickly and accurately requires practice and a familiarity
with a basic set of proof strategies. In this section, two different kinds of strategies for
solving proofs are formulated: (1) strategies aimed at the direct manipulation of proposi-
tions in the proof, and (2) strategies aimed at the deliberate and tactical use of assump-
tions. In the next section, we supplement our existing set of derivation rules with some
additional derivation rules, and we then finish things off by refining our strategic rules.
SP#1(E) First, eliminate any conjunctions with ‘∧E,’ disjunctions with ‘∨E,’ con-
ditionals with ‘→E,’ and biconditionals with ‘↔E.’ Then, if necessary,
use any necessary introduction rules to reach the desired conclusion.
SP#2(B) First, work backward from the conclusion using introduction rules (e.g.,
‘∧I,’ ‘∨I,’‘ →I,’ ‘↔I’). Then, use SP#1(E).
Beginning with SP#1(E), the basic idea behind this strategic rule is to start a proof
by simplifying or breaking down any available premises. Consider the following:
P→(R∧M), (P∧S)∧Z ⊢ R
1 P→(R∧M) P
2 (P∧S)∧Z P/R
SP#1(E) suggests that you should use elimination rules to break down any complex
propositions into simpler propositions. Since line 2 is a complex conjunction, conjunction elimination (∧E) can be used to derive a number of atomic propositions:
1 P→(R∧M) P
2 (P∧S)∧Z P/R
3 P∧S 2∧E
4 Z 2∧E
5 P 3∧E
6 S 3∧E
At this point, we can follow SP#1(E) further and apply additional elimination rules.
Line 1 is a conditional, and since the antecedent of this conditional ‘P’ occurs at line
5, conditional elimination allows for deriving ‘R∧M.’
1 P→(R∧M) P
2 (P∧S)∧Z P/R
3 P∧S 2∧E
4 Z 2∧E
5 P 3∧E
6 S 3∧E
7 R∧M 1,5→E
Following SP#1(E) still further, notice that ‘R∧M’ is a conjunction, and so we can
apply conjunction elimination to finish the proof.
1 P→(R∧M) P
2 (P∧S)∧Z P/R
3 P∧S 2∧E
4 Z 2∧E
5 P 3∧E
6 S 3∧E
7 R∧M 1,5→E
8 R 7∧E
This proof will require not only the initial use of elimination rules to break propo-
sitions into simpler propositions but also the use of introduction rules to derive the
desired conclusion.
1 P→R P
2 (P∧S)∧Z P/R∧S
1 P→R P
2 (P∧S)∧Z P/R∧S
3 P∧S 2∧E
4 Z 2∧E
5 P 3∧E
6 S 3∧E
7 R 1,5→E
Now that elimination rules have been applied, SP#1(E) suggests trying to reach the
conclusion by applying any introduction rules that would lead to the conclusion. In the
case of the above proof, since the goal of the proof is ‘R∧S,’ and ‘R∧S’ is a conjunc-
tion, we can apply conjunction introduction. Thus,
1 P→R P
2 (P∧S)∧Z P/R∧S
3 P∧S 2∧E
4 Z 2∧E
5 P 3∧E
6 S 3∧E
7 R 1,5→E
8 R∧S 6,7∧I
Moving to SP#2(B), the basic idea behind this strategy is this: rather than moving
forward (downward) in a proof from the premises or assumptions to a conclusion,
work backward (upward) from the conclusion to the premises or assumptions.
SP#2(B) First, work backward from the conclusion using introduction rules (e.g.,
‘∧I,’ ‘∨I,’ ‘→I,’ ‘↔I’). Then, use SP#1(E).
1 P→R P
2 Z→W P
3 P P/R∨W
Rather than trying to use elimination rules, we might start the proof by skipping a
few lines in the proof and writing the conclusion at the bottom. That is,
1 P→R P
2 Z→W P
3 P P/R∨W
.
.
.
.
# R∨W ?
Next, consider what derivation rule would have allowed us to reach ‘R∨W.’ Since
‘R∨W’ is a disjunction, we can speculate that it could be derived by the use of disjunc-
tion introduction from either ‘W’ or ‘R.’ We will call propositions that are obtained
as a result of the working-backward method intermediate conclusions. Thus, working
backward we obtain the intermediate conclusions ‘R’ and ‘W’:
1 P→R P
2 Z→W P
3 P P/R∨W
.
.
R W
# R∨W #∨I
Now that we have worked backward a line, the next step will be to try to use the
premises and reach either of the intermediate conclusions. Using elimination rules, we
can derive the following:
1 P→R P
2 Z→W P
3 P P/R∨W
4 R 1,3→E
R W
# R∨W #∨I
In the above, we have created two paths. The first path links the premises to one of
the intermediate conclusions. The second path links the intermediate conclusion to the
conclusion of the proof. With both of these paths, we can finish the proof as follows:
1 P→R P
2 Z→W P
3 P P/R∨W
4 R 1,3→E
5 R∨W 4∨I
Thus, the general idea behind SP#2(B) is to start with the conclusion and work
backward, using any introduction rules that would yield an intermediate conclusion.
Once you have worked backward to a sufficient degree, try to use any elimination
rules that would lead you to an intermediate conclusion.
Exercise Set #5
A. Solve the following proofs using strategic rules SP#1(E) and SP#2(B). However,
start each proof by using SP#2(B).
1. * Z∧(B∧F), (M∧T)∧(L→P), Q∧(R∧P) ⊢ ¬R∨(S∨T)
2. (S∧W)∧(T∧X), (P∧W)∧F, F→R ⊢ (P∧R)∨(S∧L)
3. * (Z∧Q)∧(F∧L), R∧P, W∧B ⊢ (Z∨T)∨(M→R)
4. (L∧F)→S, W∧(F∧X), W→L ⊢ (S∨R)∨P
5. M∧(R∧¬Z), S∧(P∧W), Q ⊢ (S↔Q)∨[M∧(R∧Z)]
6. [(P∧Q)∧(W∧L)]∧[R∧(S∧T)], Z∧[(W∧R)∧(T∧Z)], (F→P)↔W ⊢ A∨Z
P→Q, ¬Q⊢¬P.
1 P→Q P
2 ¬Q P/¬P
First, notice that our strategic rules involving premises do not seem to offer any
help, for we cannot apply ‘→E’ to lines 1 and 2, and there is no obvious way to work
backward with introduction rules.
However, consider that the goal proposition ‘¬P’ is a negated proposition, and so
there is a strategic rule SA#1(P,¬Q) for atomic propositions and negated propositions:
1 P→Q P
2 ¬Q P/¬P
3 P A/contra
Each strategic rule involving assumptions will offer advice on what the goal of the
subproof will be. In the case of SA#1(P,¬Q), it says that the goal of the subproof will
be to derive a proposition and its literal negation.
1 P→Q P
2 ¬Q P/¬P
3 P A/contra
4 Q 1,3→E
5 ¬Q 2R
Once ‘Q’ and ‘¬Q’ are derived, SA#1(P,¬Q) offers advice on how to close the
subproof. In the case of SA#1(P,¬Q), close the subproof using either ‘¬I’ or ‘¬E.’ In
our case, since ‘P’ is assumed and ‘¬P’ is the goal, we will use ‘¬I’:
1 P→Q P
2 ¬Q P/¬P
3 P A/contra
4 Q 1,3→E
5 ¬Q 2R
6 ¬P 3–5¬I
P⊢¬¬P
1 P P/¬¬P
2 ¬P A/P∧¬P
3 P 1R
4 ¬¬P 2–3¬I
The goal proposition of this proof is a negated proposition. SA#1(P, ¬Q) says to
start by assuming the opposite of our desired goal. In this case, ‘¬P’ is assumed. Next,
SA#1(P,¬Q) says to derive a contradiction. This is done at lines 2 and 3. Finally,
SA#1(P,¬Q) says to close the subproof with ‘¬I.’ This is done at line 4.
Consider one more example:
⊢ ¬(P∧¬P)
This proof is a zero-premise derivation, so it will require starting the proof by mak-
ing an assumption.
1 P∧¬P A
2 P 1∧E
3 ¬P 1∧E
4 ¬(P∧¬P) 1–3¬I
The goal of this proof is a negated proposition. SA#1(P,¬Q) says to start by assuming the opposite of our desired goal. Since ‘¬(P∧¬P)’ is the goal, ‘P∧¬P’ is assumed. Next, SA#1(P,¬Q) says to derive a proposition and its literal negation within
the subproof. This is done at lines 2 and 3. Finally, SA#1(P,¬Q) says to exit the sub-
proof with ‘¬I.’ This is done at line 4.
Next, we turn to the second strategic rule involving assumptions. Consider the
following:
R P→R
Notice that the conclusion of this argument is the conditional ‘P→R.’ In order to
prove this, first consider a basic strategy for solving for proofs whose conclusions are
conditionals.
1 R P/P→R
2 P A/R
Here ‘P,’ the antecedent of the goal proposition ‘P→R,’ is the assumed proposition, and ‘R,’ the consequent of ‘P→R,’ is what we want to derive within the subproof.
Once the antecedent is assumed, SA#2(→) says to derive the consequent of the goal
proposition. This is ‘R,’ which can be derived by using reiteration.
1 R P/P→R
2 P A/R
3 R 1R
Once the consequent is derived, SA#2(→) says to use ‘→I,’ which completes the proof:
1 R P/P→R
2 P A/R
3 R 1R
4 P→R 2–3→I
Consider a second illustration of the strategic rule for assumptions, where the con-
clusion is a conditional:
R ⊢ (P∨R)→R
1 R P/(P∨R)→R
When making an assumption, first look at the main operator of the conclusion.
Since the main operator of ‘(P∨R)→R’ is the arrow, SA#2(→) says to assume the
antecedent of that proposition. Thus,
1 R P/(P∨R)→R
2 P∨R A/R
1 R P/(P∨R)→R
2 P∨R A/R
3 R 1R
With the consequent derived, SA#2(→) says to use ‘→I.’ This will take us out of
the subproof and complete the proof.
1 R P/(P∨R)→R
2 P∨R A/R
3 R 1R
4 (P∨R)→R 2–3→I
⊢ P→(Q→P)
Again, when making an assumption, first look at the main operator of the goal
proposition. Since the main operator of ‘P→(Q→P)’ is the arrow, SA#2(→) says to
assume the antecedent of that proposition and derive the consequent. Thus,
1 P A/Q→P
The next step will be to derive ‘Q→P’ in the subproof. However, there is no im-
mediately obvious way to do this. At this point, it might be helpful to make another
assumption. It is important to recognize that since our goal conclusion is ‘Q→P,’
what we assume will be guided by this proposition. Since ‘Q→P’ is a conditional,
SA#2(→) says to assume the antecedent of that proposition. Thus,
1 P A/Q→P
2 Q A/P
Now that we have assumed ‘Q,’ the goal proposition is ‘P,’ which we can easily
derive:
1 P A/Q→P
2 Q A/P
3 P 1R
Next, close the most deeply nested subproof with ‘→I,’ and then close the remain-
ing open subproof with another use of ‘→I’:
1 P A/Q→P
2 Q A/P
3 P 1R
4 Q→P 2–3→I
5 P→(Q→P) 1–4→I
SA#3(∧) If the conclusion is a conjunction, you will need two steps. First, as-
sume the negation of one of the conjuncts, derive a contradiction, and
then use ‘¬I’ or ‘¬E.’ Second, in a separate subproof, assume the nega-
tion of the other conjunct, derive a contradiction, and then use ‘¬I’ or
‘¬E.’ From this point, a use of ‘∧I’ will solve the proof.
¬(P∨Q) ⊢ ¬P∧¬Q
In order to solve this, an assumption is needed. In proceeding, notice that the main
operator of the conclusion is the operator for conjunction (i.e., ‘∧’). According to
SA#3(∧), solving a proof of this sort will require two separate assumptions, one for
each conjunct. Let’s begin by focusing on the left conjunct, ‘¬P’:
1 ¬(P∨Q) P/¬P∧¬Q
2 P A/P∧¬P
3 P∨Q 2∨I
4 ¬(P∨Q) 1R
5 ¬P 2–4¬I
Now that the left conjunct has been derived, it is time to derive the right conjunct
using a very similar procedure:
1 ¬(P∨Q) P/¬P∧¬Q
2 P A/P∧¬P
3 P∨Q 2∨I
4 ¬(P∨Q) 1R
5 ¬P 2–4¬I
6 Q A/Q∧¬Q
7 P∨Q 6∨I
8 ¬(P∨Q) 1R
9 ¬Q 6–8¬I
10 ¬P∧¬Q 5,9∧I
Looking at the above proof, notice that two assumptions were made. First, ‘P’ was
assumed, a contradiction was derived, and then a use of ‘¬I’ introduced ‘¬P’ to the
main line of the proof. Second, ‘Q’ was assumed, a contradiction was derived, and
then a use of ‘¬I’ introduced ‘¬Q’ to the main line of the proof. Finally, both ‘¬P’
and ‘¬Q’ were conjoined with ‘∧I.’
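This direction of De Morgan's Laws can also be confirmed by brute force (a sketch in Python, outside the book's notation): ‘¬(P∨Q) ⊢ ¬P∧¬Q’ holds on every row where the premise is true.

```python
from itertools import product

# Check ~(P v Q) |- ~P & ~Q: on the one row where the premise is true
# (P and Q both false), the conclusion is true as well.
valid = all(
    (not P) and (not Q)
    for P, Q in product([True, False], repeat=2)
    if not (P or Q)
)
print(valid)  # True
```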
Finally, let’s consider the fourth strategic rule involving assumptions:
¬(¬P∧¬Q) ⊢ P∨Q.
1 ¬(¬P∧¬Q) P/P∨Q
1 ¬(¬P∧¬Q) P/P∨Q
2 ¬(P∨Q) A/contra
Obtaining the contradiction in the subproof is no easy task. At this point, however,
we might try to work backward toward a proposition that would generate a contradic-
tion. That is, our goal is two different intermediate conclusions:
1 ¬(¬P∧¬Q) P/P∨Q
2 ¬(P∨Q) A/contra
# ¬P∧¬Q P∨Q ?
With either of the above goal propositions, we could derive a contradiction and
close the subproof with our desired conclusion. Let’s choose ‘¬P∧¬Q’ as our desired
conclusion. Since ‘¬P∧¬Q’ is a conjunction, we will use SA#3(∧) as our strategy.
Thus, begin by assuming the non-negated form of the left conjunct and work toward
a contradiction:
1 ¬(¬P∧¬Q) P/P∨Q
2 ¬(P∨Q) A/¬P∧¬Q
3 P A/contra
4 P∨Q 3∨I
5 ¬(P∨Q) 2R
6 ¬P 3–5¬I
Next, assume the non-negated form of the right conjunct and derive a contradiction:
1 ¬(¬P∧¬Q) P/P∨Q
2 ¬(P∨Q) A/¬P∧¬Q
3 P A/contra
4 P∨Q 3∨I
5 ¬(P∨Q) 2R
6 ¬P 3–5¬I
7 Q A / contra
8 P∨Q 7∨I
9 ¬(P∨Q) 2R
10 ¬Q 7–9¬I
Now, we can generate our desired contradiction and finish the proof:
1 ¬(¬P∧¬Q) P/P∨Q
2 ¬(P∨Q) A/¬P∧¬Q
3 P A/contra
4 P∨Q 3∨I
5 ¬(P∨Q) 2R
6 ¬P 3–5¬I
7 Q A/contra
8 P∨Q 7∨I
9 ¬(P∨Q) 2R
10 ¬Q 7–9¬I
11 ¬P∧¬Q 6,10∧I
12 ¬(¬P∧¬Q) 1R
13 P∨Q 2–12¬E
Let’s take stock. We are working with two different kinds of strategies for solving
proofs. First, there are the strategic proof rules that involve manipulating propositions
that are available in the proof by either breaking down complex propositions with
elimination rules or working backward with introduction rules. Second, there are
strategic rules involving assumptions. Whenever you need to make an assumption, the
proposition you assume should be guided by the main operator in the proposition you
are trying to derive; that is, if it is a conditional, use SA#2(→).
Exercise Set #6
A. Identify the strategic assumption rule that you would use and the proposition you
would assume (if necessary) if you were to solve the proof.
1. * ⊢ P→(R→R)
2. ⊢ P→(R∨¬R)
3. * ⊢ P∨¬P
4. ⊢ P∨¬P
5. * ⊢ ¬(P∧¬P)
6. ⊢ (P∨S)∨¬(P∨S)
7. * ⊢ (R∨S)→(P→P)
8. ¬(P∨Q) ⊢ ¬P∧¬Q
9. * ¬(P→Q) ⊢ P∧¬Q
B. Using the strategic assumption rules and strategic proof rules (if needed), solve
the proofs below.
1. * ⊢ P→(R→R)
2. ⊢ P→(R∨¬R)
3. * ⊢ P∨¬P
4. ⊢ P∨¬P
5. * ⊢ ¬(P∧¬P)
6. ⊢ (P∨S)∨¬(P∨S)
7. * ⊢ (R∨S)→(P→P)
8. ¬(P∨Q) ⊢ ¬P∧¬Q
9. * ¬(P→Q) ⊢ P∧¬Q
A.
1. * ⊢ P→(R→R). Assume ‘P’ and use SA#2(→).
3. * ⊢ P∨¬P. Assume ‘¬(P∨¬P)’ and use SA#4(∨).
5. * ⊢ ¬(P∧¬P). Assume ‘P∧¬P’ and use SA#1(P, ¬Q).
7. * ⊢ (R∨S)→(P→P). Assume ‘R∨S’ and use SA#2(→).
9. * ¬(P→Q) ⊢ P∧¬Q. Assume ‘¬P’ and derive a contradiction. Assume ‘Q’
and derive a contradiction. Use SA#3(∧).
B.
1. * ⊢ P→(R→R)
1 P A / R→R
2 R A/R
3 R 2R
4 R→R 2-3→I
5 P→(R→R) 1-4→I
3. * ⊢ P∨¬P
1 ¬(P∨¬P) A / P, ¬P
2 P A / P, ¬P
3 P∨¬P 2∨I
4 ¬(P∨¬P) 1R
5 ¬P 2-4¬I
6 P∨¬P 5∨I
7 P∨¬P 1-6¬E
5. * ⊢ ¬(P∧¬P)
1 P∧¬P A / P, ¬P
2 P 1∧E
3 ¬P 1∧E
4 ¬(P∧¬P) 1-3¬I
7. * ⊢ (R∨S)→(P→P)
1 R∨S A / P→P
2 P A/P
3 P 2R
4 P→P 2-3→I
5 (R∨S)→(P→P) 1-4→I
9. * ¬(P→Q) ⊢ P∧¬Q
1 ¬(P→Q) P / P∧¬Q
2 Q A / P, ¬P
3 P A/Q
4 Q 2R
5 P→Q 3-4→I
6 ¬(P→Q) 1R
7 ¬Q 2-6¬I
8 ¬P A / P, ¬P
9 P A/Q
10 ¬P∨Q 8∨I
11 ¬P A/Q
12 ¬Q A / P, ¬P
13 P 9R
14 ¬P 8R
15 Q 12-14¬E
16 Q A
17 Q 16R
18 Q 10, 11-15, 16-17∨E
19 P→Q 9-18→I
20 ¬(P→Q) 1R
21 P 8-20¬E
22 P∧¬Q 7,21∧I
While the various proof and assumption strategies provide some initial guidance on
how to solve proofs, you may notice that some proofs cannot be solved in a quick
and efficient manner. In order to further simplify proofs, we introduce six additional
derivation forms into our derivation system. The addition of these six derivation rules
to PD gives us PD+.
PD PD+
∧I, ∧E, ∨I, ∨E, →I, →E, ↔I, ↔E, ¬I, ¬E, R DS, MT, HS, DN, DEM, IMP
To see why we might want to introduce DS into the existing set of derivation rules,
consider the following proof of ‘P∨Q, ¬Q ⊢ P’ without the use of DS:
1 P∨Q P
2 ¬Q P
3 P A/P
4 P 3R
5 Q A/P
6 ¬P A/Q∧¬Q
7 Q 5R
8 ¬Q 2R
9 P 6–8¬E
10 P 1,3–4,5–9∨E
This is a lengthy proof to infer something so obvious. With DS, the other disjunct
can be derived in one step. That is, the above proof simplifies to the following:
1 P∨Q P
2 ¬Q P
3 P 1,2DS
This rule is also extremely helpful in proofs like the following:
P↔(Q∨R), P, ¬Q⊢R
1 P↔(Q∨R) P
2 P P
3 ¬Q P/R
4 Q∨R 1,2↔E
5 R A/R
6 R 5R
7 Q A/R
8 ¬R A/Q∧¬Q
9 Q 7R
10 ¬Q 3R
11 R 8–10¬E
12 R 4,5–6,7–11∨E
Again, the above is difficult to solve without the use of DS. The proof is now much
simpler:
1 P↔(Q∨R) P
2 P P
3 ¬Q P/R
4 Q∨R 1,2↔E
5 R 3,4DS
While DS simplifies reasoning with disjunctions, be careful not to confuse DS with
the following invalid form of reasoning:
1 P∨Q P
2 P P
3 Q DS—NO!
To see why this is invalid, remember that a disjunction is true provided at least one of
the disjuncts is true. From the fact that one of the disjuncts is true, the other cannot be
validly inferred since it is possible for v(P∨Q) = T and v(P) = T, yet v(Q) = F. Consider
a more concrete example: from ‘John is at home or John is at work’ and ‘John is at
home,’ it does not follow that John is at work.
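The invalid step can also be exhibited semantically (a sketch in Python, outside the book's notation): there is a row making both ‘P∨Q’ and ‘P’ true while ‘Q’ is false.

```python
from itertools import product

# Exhibit the counterexample row for the invalid inference from
# P v Q and P to Q ("affirming a disjunct").
counterexamples = [
    (P, Q)
    for P, Q in product([True, False], repeat=2)
    if (P or Q) and P and not Q
]
print(counterexamples)  # [(True, False)]
```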
Again, the introduction of MT also allows for simplifying a number of proofs. Con-
sider the proof of ‘P→Q, ¬Q ⊢¬P’:
1 P→Q P
2 ¬Q P/¬P
3 P A/contra
4 Q 1,3→E
5 ¬Q 2R
6 ¬P 3–5¬I
So, while it is not necessary to introduce MT in order to solve ‘P→Q, ¬Q⊢¬P,’ its
introduction serves to expedite the proof process.
Consider a slightly more complicated use of MT:
(P∧Z)→(Q∨Z), ¬(Q∨Z)⊢¬(P∧Z)
1 (P∧Z)→(Q∨Z) P
2 ¬(Q∨Z) P
3 ¬(P∧Z) 1,2MT
1 P→Q P
2 Q→R P/P→R
3 P A/R
4 Q 1,3→E
5 R 2,4→E
6 P→R 3–5→I
P⊣⊢¬¬P
P ⊢ ¬¬P
¬¬P ⊢ P
The above rule states that from a proposition ‘P,’ the proposition ‘¬¬P’ can be
derived, and from a proposition ‘¬¬P,’ the proposition ‘P’ can be derived.
Consider the uses of DN below:
1 P∧Q P
2 ¬¬(P∧Q) 1DN
3 ¬¬(¬¬P∧Q) 2DN
4 ¬¬(¬¬P∧¬¬Q) 3DN
5 ¬¬P∧¬¬Q 4DN
6 P∧Q 5DNx2
Notice that DN can apply to the complex proposition ‘P∧Q’ at line 1 or to any con-
stituent proposition in ‘P∧Q.’ That is, DN can apply to subformulas. Consider some
invalid examples below:
1 ¬(¬P∧¬Q) P
2 P∧¬Q 1DN—NO!
3 ¬P∧¬Q 2DN—NO!
4 P∧Q 1DN—NO!
5 ¬(¬P∧Q) 4DN—NO!
Notice that the use of DN at line 1 to remove two negations does not remove two
negations from a single proposition but removes one negation from the complex
proposition ‘(¬P∧¬Q)’ and one from ‘P.’ This is an incorrect use of the equivalence
rule because the rule demands that a single proposition can be replaced with a proposi-
tion that is doubly negated or a single doubly negated proposition can be replaced by
removing its double negation.
First, it is important to note that DeM is an equivalence rule, and so it allows for
substituting a formula of one type for a formula of another type. For example, De
Morgan’s Laws allow for deriving ‘¬P∧¬Q’ from ‘¬(P∨Q),’ and vice versa.
De Morgan’s Laws will make many proofs much easier for two reasons. First, DeM
will considerably shorten many proofs. For example, consider the following proof for
‘¬(P∧Q) ⊢ ¬P∨¬Q’:
1 ¬(P∧Q) P
2 ¬(¬P∨¬Q) A/P∧¬P
3 ¬P A/P∧¬P
4 ¬P∨¬Q 3∨I
5 ¬(¬P∨¬Q) 2R
6 P 3–5¬E
7 ¬Q A/P∧¬P
8 ¬P∨¬Q 7∨I
9 ¬(¬P∨¬Q) 2R
10 Q 7–9¬E
11 P∧Q 6,10∧I
12 ¬(P∧Q) 1R
13 ¬P∨¬Q 2–12¬E
Notice that the proof is thirteen lines long and involves a number of nested assump-
tions. By adding DeM to our set of derivation rules, the above proof can be simplified
to the following:
1 ¬(P∧Q) P
2 ¬P∨¬Q 1DeM
Another benefit of adding DeM to our existing set of derivation rules is that certain
versions of its use will offer a way to apply elimination rules. To see this more clearly,
consider the following:
¬(P∨Q) ⊢ ¬P
1 ¬(P∨Q) P/¬P
2 ¬P∧¬Q 1DeM
3 ¬P 2∧E
Notice that none of our elimination rules apply to line 1, but once DeM is applied
to line 1, ‘∧E’ can be applied to line 2.
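What licenses the substitution is that De Morgan's Laws express a row-by-row equivalence (checked below in a Python sketch, outside the book's notation): ‘¬(P∧Q)’ and ‘¬P∨¬Q’ receive the same truth value on every valuation.

```python
from itertools import product

# Verify the De Morgan equivalence ~(P & Q) -||- ~P v ~Q row by row.
equivalent = all(
    (not (P and Q)) == ((not P) or (not Q))
    for P, Q in product([True, False], repeat=2)
)
print(equivalent)  # True
```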
17 Implication (IMP)
From ‘P→Q,’ we can derive ‘¬P∨Q.’ P→Q⊣⊢¬P∨Q IMP
From ‘¬P∨Q,’ we can derive ‘P→Q.’
IMP is an equivalence rule that greatly increases the elegance of our proofs. For
example, consider the following:
P→Q ⊢ ¬P∨Q
1 P→Q P
2 ¬(¬P∨Q) A/contra
3 ¬¬P∧¬Q 2DeM
4 ¬¬P 3∧E
5 P 4DN
6 Q 1,5→E
7 ¬Q 3∧E
8 ¬P∨Q 2–7¬E
With the addition of IMP, ‘P→Q ⊢ ¬P∨Q’ can be solved in one step:
1 P→Q P
2 ¬P∨Q 1IMP
¬P∨Q ⊢ P→Q
1 ¬P∨Q P
2 P A/Q
3 ¬¬P 2DN
4 Q 1,3DS
5 P→Q 2–4→I
Again, with the addition of IMP, ‘¬P∨Q ⊢ P→Q’ can be solved in one step:
1 ¬P∨Q P
2 P→Q 1IMP
The above proofs show that while proofs involving IMP can be derived without
the use of IMP, the addition of this rule greatly shortens proofs. A second benefit of
introducing IMP is that it simplifies proofs involving negated conditionals. Consider
the following proof:
¬(P→Q) ⊢ ¬Q
1 ¬(P→Q) P/¬Q
2 ¬(¬P∨Q) 1IMP
Although we cannot apply any elimination rules to line 2, we can apply DeM and
then apply elimination rules to solve the proof.
1 ¬(P→Q) P/¬Q
2 ¬(¬P∨Q) 1 IMP
3 ¬¬P∧¬Q 2DeM
4 ¬Q 3∧E
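A brute-force check of this last result (a sketch in Python, outside the book's notation) confirms that ‘¬(P→Q) ⊢ ¬Q’ is valid: the only row making the premise true is the one where ‘P’ is true and ‘Q’ is false.

```python
from itertools import product

# Check ~(P -> Q) |- ~Q: the negated conditional is true only when
# P is true and Q is false, so ~Q holds on every such row.
def implies(p, q):
    return (not p) or q

valid = all(
    not Q
    for P, Q in product([True, False], repeat=2)
    if not implies(P, Q)
)
print(valid)  # True
```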
In this section, we conclude our discussion of proofs by refining our proof strategies
given the introduction of our new derivation rules.
1 P→Q P
2 ¬Q P
3 P∨R P
4 R→W P/W
1 P→Q P
2 ¬Q P
3 P∨R P
4 R→W P/W
5 ¬P 1,2MT
6 R 3,5DS
7 W 4,6→E
Again, the underlying idea behind this strategic proof rule is to start a proof by
simplifying or breaking down any available propositions. Consider the following:
1 (P∧M)∧(¬Q∨R) P
2 P→L P
3 P↔T P
4 ¬R∧W P/(L∧T)∧¬Q
1 (P∧M)∧(¬Q∨R) P
2 P→L P
3 P↔T P
4 ¬R∧W P/(L∧T)∧¬Q
5 P∧M 1∧E
6 ¬Q∨R 1∧E
7 ¬R 4∧E
8 W 4∧E
9 P 5∧E
10 M 5∧E
Notice that ‘∧E’ is used on every available conjunction in the proof. However, the
strategic rule suggests using ‘→E,’ MT, DS, and ‘↔E’ if possible. The remaining
elimination rules give the following:
1 (P∧M)∧(¬Q∨R) P
2 P→L P
3 P↔T P
4 ¬R∧W P/(L∧T)∧¬Q
5 P∧M 1∧E
6 ¬Q∨R 1∧E
7 ¬R 4∧E
8 W 4∧E
9 P 5∧E
10 M 5∧E
11 L 2,9→E
12 T 3,9↔E
13 ¬Q 6,7DS
No more elimination rules can be used. Since the conclusion is ‘(L∧T)∧¬Q,’ using
‘∧I’ allows for deriving the conclusion. Thus, the proof ends as follows.
1 (P∧M)∧(¬Q∨R) P
2 P→L P
3 P↔T P
4 ¬R∧W P/(L∧T)∧¬Q
5 P∧M 1∧E
6 ¬Q∨R 1∧E
7 ¬R 4∧E
8 W 4∧E
9 P 5∧E
10 M 5∧E
11 L 2,9→E
12 T 3,9↔E
13 ¬Q 6,7DS
14 (L∧T) 11,12∧I
15 (L∧T)∧¬Q 13,14∧I
¬[P∨(R∨M)], ¬M→T ⊢ T
1 ¬[P∨(R∨M)] P
2 ¬M→T P/T
1 ¬[P∨(R∨M)] P
2 ¬M→T P/T
3 ¬P∧¬(R∨M) 1DeM
1 ¬[P∨(R∨M)] P
2 ¬M→T P/T
3 ¬P∧¬(R∨M) 1DeM
4 ¬P 3∧E
5 ¬(R∨M) 3∧E
Again, no more elimination rules can be used, but since there is a negated disjunc-
tion, SP#3(EQ+) suggests using DeM again. Thus,
1 ¬[P∨(R∨M)] P
2 ¬M→T P/T
3 ¬P∧¬(R∨M) 1DeM
4 ¬P 3∧E
5 ¬(R∨M) 3∧E
6 ¬R∧¬M 5DeM
This use of De Morgan’s Laws again allows for the use of elimination rules.
1 ¬[P∨(R∨M)] P
2 ¬M→T P/T
3 ¬P∧¬(R∨M) 1DeM
4 ¬P 3∧E
5 ¬(R∨M) 3∧E
6 ¬R∧¬M 5DeM
7 ¬R 6∧E
8 ¬M 6∧E
9 T 2,8→E
⊢[P→(Q→R)]→[(¬Q→¬P)→(P→R)]
Since the valid argument form above does not involve any premises, the first step
will be to make an assumption.
First, notice that the main operator of the conclusion is ‘→.’ Since the main opera-
tor is ‘→,’ we will use SA#2(→). It reads,
1 P→(Q→R) A/(¬Q→¬P)→(P→R)
1 P→(Q→R) A/(¬Q→¬P)→(P→R)
2 ¬Q→¬P A/P→R
Again, no elimination rules apply, so make a third assumption, using ‘P→R’ as the
goal.
The main operator of ‘P→R’ is ‘→,’ so assume the antecedent ‘P’ of this proposi-
tion, and the goal will be to derive the consequent ‘R.’
1 P→(Q→R) A/(¬Q→¬P)→(P→R)
2 ¬Q→¬P A/P→R
3 P A/R
Now, we can apply some elimination rules. One way to solve this proof is now
simply to use ‘→E’ and MT. That is,
1 P→(Q→R) A/(¬Q→¬P)→(P→R)
2 ¬Q→¬P A/P→R
3 P A/R
4 Q→R 1,3→E
5 ¬¬P 3DN
6 ¬¬Q 2,5MT
7 Q 6DN
8 R 4,7→E
Now that we have derived ‘R’ at line 8, SA#2(→) says to use ‘→I.’
1 P→(Q→R) A/(¬Q→¬P)→(P→R)
2 ¬Q→¬P A/P→R
3 P A/R
4 Q→R 1,3→E
5 ¬¬P 3DN
6 ¬¬Q 2,5MT
7 Q 6DN
8 R 4,7→E
9 P→R 3–8→I
In assuming ‘¬Q→¬P’ at line 2, our goal was to derive ‘P→R’ in the same sub-
proof. Since we derived ‘P→R,’ SA#2(→) says to use ‘→I.’
1 P→(Q→R) A/(¬Q→¬P)→(P→R)
2 ¬Q→¬P A/P→R
3 P A/R
4 Q→R 1,3→E
5 ¬¬P 3DN
6 ¬¬Q 2,5MT
7 Q 6DN
8 R 4,7→E
9 P→R 3–8→I
10 (¬Q→¬P)→(P→R) 2–9→I
1 P→(Q→R) A/(¬Q→¬P)→(P→R)
2 ¬Q→¬P A/P→R
3 P A/R
4 Q→R 1,3→E
5 ¬¬P 3DN
6 ¬¬Q 2,5MT
7 Q 6DN
8 R 4,7→E
9 P→R 3–8→I
10 (¬Q→¬P)→(P→R) 2–9→I
11 [P→(Q→R)]→[(¬Q→¬P)→(P→R)] 1–10→I
It is important to see that strategic rules can be used iteratively and that what to
assume is almost always guided by the conclusion (or goal proposition) of the proof
(or subproofs).
Let’s look at another way to solve the same proof that starts in a similar manner.
1 P→(Q→R) A/(¬Q→¬P)→(P→R)
2 ¬Q→¬P A/P→R
3 P A/R
In the previous example, we used the elimination rules and then used ‘→I’ to com-
plete the proof. Another way to solve the same proof would be to see that in assuming
‘P’ at line 3, the goal of that proof is ‘R.’ Since ‘R’ is an atomic proposition, we can
use the strategic rule SA#1(P,¬Q). This rule says to assume the negation of ‘R’ and
try to derive a contradiction.
1 P→(Q→R) A/(¬Q→¬P)→(P→R)
2 ¬Q→¬P A/P→R
3 P A/R
4 ¬R A/R∧¬R
Our goal is to produce a contradiction in the proof. From here we will use elimina-
tion rules to do this and then ‘¬E’ to exit the subproof; then we will finish the proof
with repeated uses of ‘→I.’
1 P→(Q→R) A/(¬Q→¬P)→(P→R)
2 ¬Q→¬P A/P→R
3 P A/R
4 ¬R A/R∧¬R
5 Q→R 1,3→E
6 ¬Q 4,5MT
7 ¬P 2,6→E
8 P 3R
9 R 4–8¬E
10 P→R 3–9→I
11 (¬Q→¬P)→(P→R) 2–10→I
12 [P→(Q→R)]→[(¬Q→¬P)→(P→R)] 1–11→I
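Since the sequent has no premises, its conclusion should be a tautology. A quick exhaustive check over the eight valuations of ‘P,’ ‘Q,’ and ‘R’ (sketched in Python, not part of the proof system) confirms this:

```python
from itertools import product

def implies(a, b):
    # The material conditional.
    return (not a) or b

def theorem(p, q, r):
    # [P→(Q→R)]→[(¬Q→¬P)→(P→R)]
    return implies(implies(p, implies(q, r)),
                   implies(implies(not q, not p), implies(p, r)))

# True under all eight valuations, so the zero-premise sequent is sound.
assert all(theorem(p, q, r) for p, q, r in product([True, False], repeat=3))
print("tautology confirmed")
```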
End-of-Chapter Exercises
A. There are 125 valid arguments below. They are broken into four sections: easy,
medium, hard, and zero-premise deductions. In mastering the derivation rules as-
sociated with proofs, start with the easy proofs and work your way up to the hard
ones. Once you’ve mastered the hard proofs, try to prove some theorems (zero-
premise deductions) of PL.
Easy Proofs
1. * P∧¬Q, T∨Q ⊢ T
2. P→Q, Q→R, ¬R ⊢ ¬P
3. * A→C, A∧D ⊢ C
4. [(A∧B)∧C]∧D ⊢ A
5. * A ⊢ B→(B∧A)
6. ¬(A∨B) ⊢ ¬A∧¬B
7. * ¬(¬A∨B) ⊢ ¬¬A∧¬B
8. (P∨Q)∨R, (T∨W)→¬R, T∧¬P ⊢ Q
9. * A∨(¬B→D), ¬(A∨B) ⊢ D
10. ¬A∨¬B ⊢ ¬(A∧B)
11. * A∨(¬B→D), ¬(A∨D) ⊢ B
12. (¬B∧¬D), ¬(B∨D)→S ⊢ S
13. * (B↔D)∨S, ¬S, B ⊢ D∨W
14. ¬A→Q, (A∨D)→S, ¬S ⊢ Q
15. * R∨R ⊢ R
16. ¬(¬R∨R) ⊢ W
17. * A→B, B→C, ¬C ⊢ ¬A
18. A→B, B→C, A ⊢ C
19. * A→B, A∨C, ¬C ⊢ B
20. A→(B→C), A, C→D ⊢ B→D
21. * A→B, C→D, B→C ⊢ A→D
22. P ⊢ P∨{[(M∨W)∧(F∧Z)]→(C↔D)}
23. * A→(B→C), A, ¬C ⊢ ¬B∨(W→S)
24. M∨(Q∧D), M→F, (Q∧D) → F ⊢ F∨S
25. * P ⊢ (¬Q∨P)∨¬W
26. (C∧D)→(E∧Q), C, D ⊢ Q
27. * P→(Q∧¬S), F→S, R→P, R, ¬F→¬M ⊢ ¬F∧¬M
28. P, Q∧F, (P∧F)→(W∨R), ¬R, Q→S ⊢ W∧ S
29. * P→Q, Q→W, P ⊢ W
30. P→¬(Q∨¬R), P ⊢ ¬Q
31. * ¬(¬P∧¬Q), ¬Q ⊢ P
32. ¬[W∨(S∨M)] ⊢ ¬W∧¬M
33. ¬[P∨(S∧M)], M ⊢ ¬P∧¬S
1. * P∧¬Q, T∨Q ⊢ T
1 P∧¬Q P
2 T∨Q P/T
3 P 1∧E
4 ¬Q 1∧E
5 T 2,4DS
3. * A→C, A∧D ⊢ C
1 A→C P
2 A∧D P
3 A 2∧E
4 C 1,3→E
5. * A ⊢ B→(B∧A)
1 A P
2 B A/B∧A
3 A 1R
4 B∧A 2,3∧I
5 B→(B∧A) 2–4→I
7. * ¬(¬A∨B) ⊢ ¬¬A∧¬B
1 ¬(¬A∨B) A/¬¬A∧¬B
2 ¬¬A∧¬B 1DeM
9. * A∨(¬B→D), ¬(A∨B) ⊢ D
1 A∨(¬B→D) P
2 ¬(A∨B) P/D
3 ¬A∧¬B 2DeM
4 ¬A 3∧E
5 ¬B 3∧E
6 ¬B→D 1,4DS
7 D 5,6→E
11. * A∨(¬B→D), ¬(A∨D) ⊢ B
1 A∨(¬B→D) P
2 ¬(A∨D) P/B
3 ¬A∧¬D 2DeM
4 ¬A 3∧E
5 ¬D 3∧E
6 ¬B→D 1,4DS
7 ¬¬B 5,6MT
8 B 7DN
13. * (B↔D)∨S, ¬S, B ⊢ D∨W
1 (B↔D)∨S P
2 ¬S P
3 B P
4 B↔D 1,2DS
5 D 3,4↔E
6 D∨W 5∨I
15. * R∨R ⊢ R
1 R∨R P
2 ¬R A
3 R 1,2DS
4 R 2–3¬E
17. * A→B, B→C, ¬C ⊢ ¬A
1 A→B P
2 B→C P
3 ¬C P/¬A
4 ¬B 2,3MT
5 ¬A 1,4MT
19. * A→B, A∨C, ¬C ⊢ B
1 A→B P
2 A∨C P
3 ¬C P/B
4 A 2,3DS
5 B 1,4→E
21. * A→B, C→D, B→C ⊢ A→D
1 A→B P
2 C→D P
3 B→C P/A→D
4 A→C 1,3HS
5 A→D 4,2HS
23. * A→(B→C), A, ¬C ⊢ ¬B∨(W→S)
1 A→(B→C) P
2 A P
3 ¬C P
4 B→C 1,2→E
5 ¬B 3,4MT
6 ¬B∨(W→S) 5∨I
25. * P ⊢ (¬Q∨P)∨¬W
1 P P/(¬Q∨P)∨¬W
2 ¬Q∨P 1∨I
3 (¬Q∨P)∨¬W 2∨I
27. * P→(Q∧¬S), F→S, R→P, R, ¬F→¬M ⊢ ¬F∧¬M
1 P→(Q∧¬S) P
2 F→S P
3 R→P P
4 R P
5 ¬F→¬M P/¬F∧¬M
6 P 3,4→E
7 Q∧¬S 1,6→E
8 ¬S 7∧E
9 ¬F 2,8MT
10 ¬M 5,9→E
11 ¬F∧¬M 9,10∧I
29. * P→Q, Q→W, P ⊢ W
1 P→Q P
2 Q→W P
3 P P/W
4 Q 1,3→E
5 W 2,4→E
31. * ¬(¬P∧¬Q), ¬Q ⊢ P
1 ¬(¬P∧¬Q) P
2 ¬Q P/P
3 ¬¬P∨¬¬Q 1DeM
4 P∨Q 3DNx2
5 P 2,4DS
Medium Proofs
40. G→M ⊢ ¬M→¬G
41. * A→¬(B→C), ¬B ⊢ ¬A
42. * (B→C)→¬(D→E), C ⊢ ¬E
43. * (S∨W)→M, (S∧T)↔(R∨P), R ⊢ M
44. L∧(Y∧¬B), P, (P∨¬R)→Z, [Z∨(S∧T)]→W ⊢ W∨¬M
45. * R∧(S∧T), (T∨M)→W, (W∨¬P)→(A∧B) ⊢ B
46. B→D, ¬D ⊢ ¬B∨ D
47. * ¬P∨(¬Q∨R), ¬P→(W∧S), (¬Q∨R)→(W∧S) ⊢ S
48. ¬W→(R∨ S), M→¬W, ¬S∧M ⊢ R
49. * (A↔B) ⊢ A→B
50. [P↔(L∨M)]→W, P, L∨M ⊢ W
51. A, B ⊢ A↔B
52. A↔B ⊢ A→A
53. * C∧(D∨B) ⊢ (C∧D)∨(C∧B)
54. (C∧D)∨(¬C∧¬D) ⊢ (C∧D)∨¬C
55. * F∨[(G∧D)∧M] ⊢ (F∨M)∨R
56. A→B, ¬A→C ⊢ ¬B→C
57. M→¬S, ¬M→W ⊢ S→W
58. (A∨B)→¬D, ¬(A∨B)→R ⊢ D→R
59. P ⊢ ¬¬P∨P
60. * (Q∨B) ⊢ (B∨Q)
61. (A∧B)∧C ⊢ A∧(B∧C)
62. ¬(A∨B)→D, ¬D ⊢ A∨B
63. ¬[(A∧B)∧C]→R, ¬R ⊢ (B∧A)∧C
64. R↔[(M→T)→Z], S∧[¬P∧(¬Q∧¬S)] ⊢ Z
65. P∧[S∧(R∧¬P)], R→[(M→T)→W] ⊢ W
66. ¬¬(¬R→¬R)∨B ⊢ (R→R)∨¬¬B
37. * P ⊢ ¬P→¬S
1 P P/¬P→¬S
2 P∨¬S 1∨I
3 ¬¬P∨¬S 2DN
4 ¬P→¬S 3IMP
alternatively
1 P P/¬P→¬S
2 ¬P A/¬S
3 S A/contra
4 P 1R
5 ¬P 2R
6 ¬S 3–5¬I
7 ¬P→¬S 2–6→I
39. * P, (P∨Q)→W, ¬W ⊢ ¬(P∨Q)
1 P P
2 (P∨Q)→W P
3 ¬W P/¬(P∨Q)
4 P∨Q 1∨I
5 ¬(P∨Q) 2,3MT
41. * A→¬(B→C), ¬B ⊢ ¬A
1 A→¬(B→C) P
2 ¬B P/¬A
3 A A/contra
4 ¬(B→C) 1,3→E
5 ¬(¬B∨C) 4IMP
6 ¬¬B∧¬C 5DeM
7 ¬¬B 6∧E
8 B 7DN
9 ¬B 2R
10 ¬A 3–9¬I
42. * (B→C)→¬(D→E), C ⊢ ¬E
1 (B→C)→¬(D→E) P
2 C P/¬E
3 ¬B∨C 2∨I
4 B→C 3IMP
5 ¬(D→E) 1,4→E
6 ¬(¬D∨E) 5IMP
7 ¬¬D∧¬E 6DeM
8 ¬E 7∧E
47. * ¬P∨(¬Q∨R), ¬P→(W∧S), (¬Q∨R)→(W∧S) ⊢ S
1 ¬P∨(¬Q∨R) P
2 ¬P→(W∧S) P
3 (¬Q∨R)→(W∧S) P/S
4 ¬P A/S
5 W∧S 2,4→E
6 S 5∧E
7 ¬Q∨R A/S
8 W∧S 3,7→E
9 S 8∧E
10 S 1,4–6,7–9∨E
Hard Proofs
13 C A/R
14 R 9,13→E
15 R 10–14∨E
69. * A∨B, R∨¬(S∨M), A→S, B→M ⊢ R
1 A∨B P
2 R∨¬(S∨M) P
3 A→S P
4 B→M P/R
5 A A/S∨M
6 A→S 3R
7 S 5,6→E
8 S∨M 7∨I
9 B A/S∨M
10 B→M 4R
11 M 9,10→E
12 S∨M 11∨I
13 S∨M 1,5–8,9–12∨E
14 R 2,13DS
71. * A ⊢ ¬(¬A∧¬B)
1 A P/¬(¬A∧¬B)
2 ¬¬A 1DN
3 ¬¬A∨¬¬B 2∨I
4 ¬(¬A∧¬B) 3DeM
73. * A→B, D→E, (¬B∨¬E)∧(¬A∨¬B) ⊢ ¬A∨¬D
1 A→B P
2 D→E P
3 (¬B∨¬E)∧(¬A∨¬B) P/¬A∨¬D
4 ¬B∨¬E 3∧E
5 ¬A∨¬B 3∧E
6 ¬(¬A∨¬D) A/contra
7 A∧D 6DeM
8 A 7∧E
9 D 7∧E
10 B 1,8→E
11 E 2,9→E
12 ¬(B∧E) 4DeM
13 B∧E 10,11∧I
14 ¬A∨¬D 6–13¬E
75. * (A∨B)→(D∨E), [(D∨E)∨F]→(G∨H), (G∨H)→¬D, E→¬G, B ⊢ H
1 (A∨B)→(D∨E) P
2 [(D∨E)∨F]→(G∨H) P
3 (G∨H)→¬D P
4 E→¬G P
5 B P/H
6 A∨B 5∨I
7 D∨E 1,6→E
8 (D∨E)∨F 7∨I
9 G∨H 2,8→E
10 ¬D 3,9→E
11 E 7,10DS
12 ¬G 4,11→E
13 H 9,12DS
77. * A→(B→D), ¬(D→Y)→¬K, (Z∨¬K)∨¬(B→Y) ⊢ ¬Z→¬(A∧K)
1 A→(B→D) P
2 ¬(D→Y)→¬K P
3 (Z∨¬K)∨¬(B→Y) P/¬Z→¬(A∧K)
4 ¬Z A/¬(A∧K)
5 A∧K A/Z∧¬Z
6 A 5∧E
7 B→D 1,6→E
8 K 5∧E
9 ¬¬K 8DN
10 ¬¬(D→Y) 2,9MT
11 D→Y 10DN
12 B→Y 7,11HS
13 ¬¬(B→Y) 12DN
14 Z∨¬K 3,13DS
15 ¬K 4,14DS
16 K 8R
17 ¬(A∧K) 5–16¬I
18 ¬Z→¬(A∧K) 4–17→I
79. * ¬(P→Q) ⊣⊢ P∧¬Q, two proofs
1 ¬(P→Q) A/P∧¬Q
2 ¬P A
3 ¬P∨Q 2∨I
4 P→Q 3IMP
5 ¬(P→Q) 1R
6 P 2–5¬E
7 Q A
8 ¬P∨Q 7∨I
9 P→Q 8IMP
10 ¬(P→Q) 1R
11 ¬Q 7–10¬I
12 P∧¬Q 6,11∧I
1 P∧¬Q P/¬(P→Q)
2 ¬¬P∧¬Q 1DN
3 ¬(¬P∨Q) 2DeM
4 ¬(P→Q) 3IMP
86. * P∧(Q∨R) ⊣⊢ (P∧Q)∨(P∧R), distribution
1 P∧(Q∨R) P/(P∧Q)∨(P∧R)
2 ¬[(P∧Q)∨(P∧R)] A/contra
3 ¬(P∧Q)∧¬(P∧R) 2DeM
4 ¬(P∧Q) 3∧E
5 ¬(P∧R) 3∧E
6 ¬P∨¬Q 4DeM
7 P 1∧E
8 ¬¬P 7DN
9 ¬Q 6,8DS
10 Q∨R 1∧E
11 R 9,10DS
12 ¬P∨¬R 5DeM
13 ¬¬P 7DN
14 ¬R 12,13DS
15 R 11R
16 (P∧Q)∨(P∧R) 2–15¬E
1 (P∧Q)∨(P∧R) P/P∧(Q∨R)
2 ¬P A/P∧¬P
3 ¬P∨¬Q 2∨I
4 ¬(P∧Q) 3DeM
5 P∧R 1,4DS
6 P 5∧E
7 ¬P 2R
8 P 2–7¬E
9 ¬(Q∨R) A/R∧¬R
10 ¬Q∧¬R 9DeM
11 ¬Q 10∧E
12 ¬R 10∧E
13 ¬P∨¬Q 11∨I
14 ¬(P∧Q) 13DeM
15 P∧R 1,14DS
16 R 15∧E
17 ¬R 12R
18 Q∨R 9–17¬E
19 P∧(Q∨R) 8,18∧I
88. * P→Q ⊣⊢ ¬Q→¬P, contraposition
1 P→Q P/¬Q→¬P
2 ¬Q A/¬P
3 P A/P∧¬P
4 Q 1,3→E
5 ¬Q 2R
6 ¬P 3–5¬I
7 ¬Q→¬P 2–6→I
1 ¬Q→¬P P/P→Q
2 P A/Q
3 ¬Q A/P∧¬P
4 ¬P 1,3→E
5 P 2R
6 Q 3–5¬E
7 P→Q 2–6→I
Zero-Premise Deductions
102. ⊢ P→P
103. * ⊢ P∨¬P, law of excluded middle
104. ⊢ ¬(P∧¬P), principle of noncontradiction
105. * ⊢ (A∧B)→A
106. ⊢ A→¬(B∧¬B)
107. * ⊢ ¬A→(A→B)
108. ⊢ A→(A∨¬A)
109. * ⊢ P→(¬Q→P)
110. ⊢ [(P→Q)→P]→P
111. * ⊢ ¬[(A→¬A)∧(¬A→A)]
112. ⊢ A→(A∧A)
113. * ⊢ [(A→B)∧(A→D)]→[A→(B∧D)]
114. ⊢ ¬P→¬[(P→Q)→P]
115. * ⊢ [P→(Q→R)]→[(P→Q)→(P→R)], axiom scheme 2
116. ⊢ (A→B)∨(B→D)
117. * ⊢ A→[A→(A∨A)]
118. ⊢ ¬(P∧¬P)
119. * ⊢ [(¬A∨B)∧(¬A∨D)]→[¬A∨(B∧D)]
120. ⊢ [(P→Q)→R]→(¬R→P)
121. * ⊢ (A→B)→[¬(B∧D)→¬(D∧A)]
122. ⊢ (A∧B)→A
123. * ⊢ ¬¬A→A
124. ⊢ A→(B→A)
125. * ⊢ (A∨B)→[(¬A∨B)→ B]
109. * ⊢ P→(¬Q→P)
1 P A/¬Q→P
2 ¬Q A/P
3 P 1R
4 ¬Q→P 2–3→I
5 P→(¬Q→P) 1–4→I
111. * ⊢ ¬[(A→¬A)∧(¬A→A)]
1 (A→¬A)∧(¬A→A) A/P∧¬P
2 A→¬A 1∧E
3 ¬A→A 1∧E
4 A A/A∧¬A
5 ¬A 2,4→E
6 ¬A 4–5¬I
7 A 3,6→E
8 ¬[(A→¬A)∧(¬A→A)] 1–7¬I
113. * ⊢ [(A→B)∧(A→D)]→[A→(B∧D)]
1 (A→B)∧(A→D) A
2 A A/B∧D
3 A→B 1∧E
4 A→D 1∧E
5 B 2,3→E
6 D 2,4→E
7 B∧D 5,6∧I
8 A→(B∧D) 2–7→I
9 [(A→B)∧(A→D)]→[A→(B∧D)] 1–8→I
115. * ⊢ [P→(Q→R)]→[(P→Q)→(P→R)]
1 P→(Q→R) A/[(P→Q)→(P→R)]
2 P→Q A/P→R
3 P A/R
4 Q→R 1,3→E
5 Q 2,3→E
6 R 4,5→E
7 P→R 3–6→I
8 (P→Q)→(P→R) 2–7→I
9 [P→(Q→R)]→[(P→Q)→(P→R)] 1–8→I
117. * ⊢ A→[A→(A∨A)]
1 A A/A→(A∨A)
2 A A/A∨A
3 A∨A 2∨I
4 A→(A∨A) 2–3→I
5 A→[A→(A∨A)] 1–4→I
119. * ⊢ [(¬A∨B)∧(¬A∨D)]→[¬A∨(B∧D)]
1 (¬A∨B)∧(¬A∨D) A
2 ¬A∨B 1∧E
3 ¬A∨D 1∧E
4 ¬[¬A∨(B∧D)] A
5 ¬¬A∧¬(B∧D) 4DeM
6 ¬¬A 5∧E
7 ¬(B∧D) 5∧E
8 B 2,6DS
9 D 3,6DS
10 B∧D 8,9∧I
11 ¬A∨(B∧D) 4–10¬E
12 [(¬A∨B)∧(¬A∨D)]→[¬A∨(B∧D)] 1–11→I
121. * ⊢ (A → B) → [¬(B∧D) → ¬(D∧A)]
1 A→B A / ¬(B∧D)→¬(D∧A)
2 ¬(B∧D) A / ¬(D∧A)
3 D∧A A / P, ¬P
4 D 3∧E
5 A 3∧E
6 B 1,5→E
7 B∧D 4,6∧I
8 ¬(B∧D) 2R
9 ¬(D∧A) 3-8¬I
10 ¬(B∧D)→¬(D∧A) 2-9→I
11 (A→B)→[¬(B∧D)→¬(D∧A)] 1-10→I
123. * ⊢ ¬¬A → A
1 ¬¬A A/A
2 A 1DN
3 ¬¬A→A 1-2→I
125. * ⊢ (A∨B)→[(¬A∨B)→B]
1 A∨B A/(¬A∨B)→B
2 ¬A∨B A/B
3 ¬B A/B∧¬B
4 A 1,3DS
5 ¬A 2,3DS
6 B 3–5¬E
7 (¬A∨B)→B 2–6→I
8 (A∨B)→[(¬A∨B)→B] 1–7→I
also derive ‘P’ from a larger set of propositions ‘D.’ Can you think of some reasons
why one would reject this?
6. Notice that every disjunction can be rewritten as a corresponding conjunction (and
vice versa). What does this suggest about the use of the truth-functional operators
expressed by ‘∨’ and ‘∧’? How essential is it to the expressive range of PL that both
of these operators are present in the language?
7. Consider the following argument:
All ‘Ss’ are ‘Ps.’
All ‘Ps’ are ‘Qs.’
All ‘Ss’ are ‘Qs.’
This argument appears to be straightforwardly valid. However, now consider the
following translation of this argument into PL.
S
P
Q
Notice that this argument is not valid. What does this mean about the expressive
power of PL? What would we like a formal language to do concerning the set of
valid arguments expressible in English?
5 Reiteration (R)
Any proposition ‘P’ that occurs in a proof or subproof may be rewritten at a level of the proof that is equal to ‘P’ or more deeply nested than ‘P.’
6 Negation Introduction (¬I)
From a derivation of a proposition ‘Q’ and its literal negation ‘¬Q’ within a subproof involving an assumption ‘P,’ we can derive ‘¬P’ out of the subproof.
7 Negation Elimination (¬E)
From a derivation of a proposition ‘Q’ and its literal negation ‘¬Q’ within a subproof involving an assumption ‘¬P,’ we can derive ‘P’ out of the subproof.
8 Disjunction Introduction (∨I)
From ‘P,’ we can validly infer ‘P∨Q’ or ‘Q∨P.’
9 Disjunction Elimination (∨E)
From ‘P∨Q’ and two derivations of ‘R’—one involving ‘P’ as an assumption in a subproof, the other involving ‘Q’ as an assumption in a subproof—we can derive ‘R’ out of the subproof.
10 Biconditional Introduction (↔I)
From a derivation of ‘Q’ within a subproof involving an assumption ‘P,’ and a derivation of ‘P’ within a subproof involving an assumption ‘Q,’ we can derive ‘P↔Q’ out of the subproofs.
11 Biconditional Elimination (↔E)
From ‘P↔Q’ and ‘P,’ we can derive ‘Q.’ And from ‘P↔Q’ and ‘Q,’ we can derive ‘P.’
12 Modus Tollens (MT)
From ‘P→Q’ and ‘¬Q,’ we can derive ‘¬P.’
13 Disjunctive Syllogism (DS)
From ‘P∨Q’ and ‘¬P,’ we can derive ‘Q.’ And from ‘P∨Q’ and ‘¬Q,’ we can derive ‘P.’
14 Hypothetical Syllogism (HS)
From ‘P→Q’ and ‘Q→R,’ we can derive ‘P→R.’
15 Double Negation (DN)
From ‘P,’ we can derive ‘¬¬P.’ From ‘¬¬P,’ we can derive ‘P.’
16 De Morgan’s Laws (DeM)
From ‘¬(P∨Q),’ we can derive ‘¬P∧¬Q,’ and vice versa. From ‘¬(P∧Q),’ we can derive ‘¬P∨¬Q,’ and vice versa.
17 Implication (IMP)
From ‘P→Q,’ we can derive ‘¬P∨Q,’ and vice versa.
Note
1. These rules are named after British mathematician and logician Augustus De Morgan
(1806–1871).
The language, syntax, and semantics of PL have two strengths. First, logical properties
applicable to arguments and sets of propositions have a corresponding applicability in
English. So, if an argument is valid in PL, then the corresponding English argument is
also valid. Second, the semantic properties of arguments and sets of propositions
have decision procedures. That is, there are mechanical procedures for testing
whether any argument is valid, whether propositions in a set have some logical prop-
erty (e.g., they are consistent, equivalent, etc.), and whether any proposition is always
true (a tautology), always false (a contradiction), or neither always true nor always
false (a contingency).
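Such a decision procedure is easy to sketch: enumerate every valuation and look for one that makes all the premises true and the conclusion false. In the Python sketch below (the function name `valid` and the encoding of propositions as Python functions are illustrative conventions, not the book's), modus tollens passes the test:

```python
from itertools import product

def valid(premises, conclusion, n_atoms):
    # An argument is valid iff no valuation makes every premise true
    # while making the conclusion false.
    for vals in product([True, False], repeat=n_atoms):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False   # counterexample valuation found
    return True

# Example: modus tollens, P→Q, ¬Q ⊢ ¬P, is valid.
assert valid([lambda p, q: (not p) or q, lambda p, q: not q],
             lambda p, q: not p, 2)
```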
The weakness of PL is that it is not expressive enough. That is, some valid argu-
ments and semantic relationships in English cannot be expressed in propositional
logic. Consider the following example:
This argument is clearly valid in English but cannot be expressed as a valid argu-
ment in PL. Symbolically, the argument is represented as follows:
M
S
R
The above argument is clearly invalid. In order to bring English arguments like
the one above into the domain of symbolic logic, it is necessary to develop a formal
language that does not symbolize sentences as wholes (e.g., John is tall as ‘J’), but
symbolizes parts of sentences. That is, a formal language whose basic unit is not a
complete sentence but the subject(s) and predicate(s) of the sentence. Such a language
will be more expressive and able to represent the above argument as valid. This is the
language of predicate logic (sometimes called the logic of relations). We’ll symbolize
it as RL.
The following chapter first articulates each of the above items, details the syntax or
formation rules of RL, and then explains the semantics of RL (i.e., what it means for a
proposition to be true or false in RL).
There are two internal features of (1): the noun phrase (name) John and the verb
phrase (predicate) is tall. The typical role of proper names like John in (1) and Liz and
Jane in (2) is to designate a singular object of some kind (e.g., a person, place, monu-
ment, country, etc.). In RL, we represent names (individual constants) with lowercase
letters ‘a’ to ‘v,’ with or without numerical subscripts. For example,
j = John
l = Liz
d32 = Jane
One assumption we will make with respect to names is that a name always refers
to one and only one object. In other words, in RL, there is no such thing as an “empty
name” (i.e., a name that fails to refer to an object). However, while it is the case that
every name refers to one object, we will allow for the possibility of multiple names
referring to the same object (e.g., ‘John Santellano’ and ‘Mr. Santellano’ can desig-
nate the same person).
In addition to names, English sentences also involve predicates of different adic-
ity. The adicity of a predicate term is the number of individuals the predicate must
have in order to express a proposition. For example, (1) contains not only the proper
name John but also the predicate (or verb phrase) is tall. The predicate term is tall has
an adicity of 1 since it only needs one individual to yield a sentence that expresses
a proposition (i.e., a sentence capable of being true or false). Likewise, (2) contains
not only the proper names Liz and Jane but also the predicate term is smarter than.
The predicate term is smarter than has an adicity of 2 since it requires at least two
individuals to yield a sentence that expresses a proposition.
In RL, we represent predicates and relations with uppercase letters ‘A’ to ‘Z,’ with
or without numerical subscripts. Again, the numerical subscripts ensure that the vo-
cabulary is infinite. For example,
T = is tall
G = is green
R = is bigger than
F43 = is faster than
Thus far, we have articulated two key items in RL: names and predicates of varying
adicity. In addition to retaining the use of truth-functional operators that form part
of PL, we can now broach how to translate various sentences involving names and
predicates from English into RL. First, consider (1) again:
To translate (1) into RL, begin by replacing any names with a blank and assign the
name a letter with lowercase letters ‘a’ to ‘v.’ Thus,
____is tall
j = John
Since John is the only name in the sentence, we have isolated a predicate term with
a single blank. This shows that is tall has an adicity of 1. After all names have been
removed, what remains is called an unsaturated predicate or a rheme. Finally, we can
represent this unsaturated English predicate with the language of predicate logic by
replacing it with an uppercase letter ‘A’ to ‘Z.’ Thus,
T = ____ is tall
To complete the translation into the language of predicate logic, the name is placed
to the right of the predicate. Thus, a complete translation of John is tall into RL is the
following:
Tj
Not all predicates are one-place predicates (i.e., not all predicate terms have an adic-
ity of 1). Consider the following sentence:
Again, we start by removing all of the singular terms, assigning them lowercase
letters ‘j’ and ‘f’ and replacing these singular terms with blanks.
What remains is a two-place predicate, for there are two places where singular
terms might be inserted. We can represent this two-place predicate as follows:
Finally, to complete the translation of John is taller than Frank, we reinsert the sin-
gular terms. Thus, a complete translation of John is taller than Frank is the following:
Rjf
In order to represent the above sentence, begin by deleting all of the singular-
referring expressions from the sentence.
What remains after all of the names have been deleted is the three-place predi-
cate. In the above example, note that ‘and’ is not a sentential operator but part of
the predicate. The next step is to symbolize the three-place predicate as ‘S’ and
complete the translation by inserting abbreviations for individual constants for John,
Frank, and Mary.
Sjfm
Notice that (3) to (5) do not express propositions about singular objects but in-
stead predicate a property to a quantity of objects. To symbolize sentences of this
sort, we cannot use the procedure noted in the previous section. For example, the
proposition expressed by (4) does not name a zombie that is hungry. Instead, (4)
expresses the more indefinite proposition that some object in the universe is both a
zombie and is hungry.
In order to adequately represent (3) to (5), the introduction of two new symbols
and the notion of a domain of discourse is required. For convenience, let’s abbreviate
the domain of discourse as D and use lowercase letters w through z, with or without
subscripts, to represent individual variables. The domain of discourse D is all of the
objects we want to talk about or to which we can refer. So, if our discussion is on the
topic of positive integers, then we would say that the domain of discourse is just the
positive integers. Or, more compactly,
D: positive integers
If our discussion is about human beings, then we would say that the domain of
discourse is just those human beings who exist or have existed. That is,
D: human beings
Individual variables are placeholders whose possible values are the individuals in
the domain of discourse. Individual variables are said to range over individual par-
ticular objects in the domain in that they take these objects as their values. Thus, if
our discussion is on the topic of numbers, an individual variable z is a placeholder for
some number in the universe of discourse.
As placeholders, we can also use variables to indicate the adicity of a predicate. Pre-
viously we indicated this by using blanks (e.g., ____ is tall). Rather than representing
a predicate as a sentence with a blank attached to it, we will fill in the blanks with the
appropriate number of individual variables:
Tx = x is tall
Gx = x is green
Rxy = x is bigger than y
Bxyz = x is between y and z.
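For illustration only (this encoding is not part of RL), predicates of different adicity can be modeled as functions that take the corresponding number of arguments; the particular names and extensions below are invented:

```python
def T(x):
    # Tx = x is tall (adicity 1): one argument yields a truth value.
    return x in {"john", "liz"}

def R(x, y):
    # Rxy = x is bigger than y (adicity 2): two arguments are required.
    return (x, y) in {("john", "vic")}

print(T("john"))         # one-place predicate saturated by a name
print(R("john", "vic"))  # two-place predicate saturated by two names
```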
For example, suppose the objects to which variables and names can refer are limited to
human beings. If this is the case, then we would write,
D: human beings
Indicating the D in this way means that individual variables do not take nonhuman
beings as substitution instances. So, if someone were to say Everyone is crazy, this
would be a shorthand way of saying, Every human being is crazy.
In normal conversation, the domain of discourse is not always stipulated. The sec-
ond way the D is determined is contextually. That is, whenever you have a conversa-
tion, you don’t say, ‘Let’s only talk about colors’ or ‘Let’s only talk about human be-
ings living in the 1900s.’ Often, certain features about the subject matter being talked
about determine what is and is not part of the domain of discourse. But the domain
of discourse can change rapidly and is not always easily determined. If you and your
friends are talking about movies, then the D is movies, but the D can quickly switch to
books, to mutual friends of yours who behave similarly to characters in books, and so
on. For our purposes, whenever we need to translate from predicate logic to English,
or vice versa, we will always stipulate the D.
Finally, the domain is restricted or unrestricted. In the case of arithmetic, the do-
main of discourse for variables is restricted to numbers. In the case of a conversation
about individuals who pay taxes, the domain is restricted to humans. However, in
some cases the domain of discourse is unrestricted. This means that the variable can
be a placeholder for anything. If I wrote, Everything is crazy, this proposition means,
for all x, where x can be anything at all, x is crazy. This includes humans, animals,
rocks, and numbers. Another example: suppose you were to say, Everyone is crazy, in
an unrestricted domain. Here, it is implied that you are only referring to human beings.
But since you are working in an unrestricted domain, it is necessary to specify this.
Thus, Everyone is crazy is translated as for any x in the domain of discourse, if x is a
human being, then x is crazy.
The domain places a constraint on the possible individuals we can substitute for
individual variables. We will call these substitution instances for variables, or substi-
tution instances for short. For example, discussing the mathematical equation ‘x + 5
= 7,’ the domain consists of integers and not shoes, ships, or people. If someone were
to say that a possible substitution instance for x is ‘that patch of green over there,’
we would find this extremely strange because the only objects being considered as
substitution instances are integers and not patches of green. Likewise, if someone
were to say, ‘Everyone has to pay taxes,’ and someone else responded, ‘My cat does
not pay taxes,’ we would take this to be strange because the only objects being con-
sidered as substitution instances are people and not animals or numbers or patches of
green. Thus, it is important to note that the domain places a limitation on what can be
a substitution instance of a variable.
With names, variables, and predicates having been introduced, we now turn to
quantifiers. RL contains two quantifiers: the universal quantifier ‘∀,’ which captures
the English use of ‘all,’ ‘every,’ and ‘any,’ and the existential quantifier ‘∃,’ which
captures the English use of ‘some,’ ‘at least one,’ and the indefinite determiner ‘a.’
Consider the following sentence:
If we assume that the domain of discourse is people and let ‘Mx’ stand for x is
mortal and ‘Hx’ stand for x is happy, then (6) can be read as expressing a number of
possible expressions:
Or, equivalently,
By replacing for every x with the universal quantifier and x is mortal with ‘Mx,’ we
get the following predicate formula:
(6RL) (∀x)Mx
In the case of (7), (7) can be read as expressing a number of possible expressions:
Or, equivalently,
By replacing for at least one x with the existential quantifier and x is happy with
‘Hx,’ we get the following predicate formula:
(7RL) (∃x)Hx
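On a finite domain, these two quantifiers behave exactly like Python's built-in `all` and `any`. The domain and the extensions of ‘M’ and ‘H’ in the sketch below are invented for illustration:

```python
# (∀x)Mx: every object in the domain is mortal.
# (∃x)Hx: at least one object in the domain is happy.
domain = ["socrates", "plato", "zeno"]
mortal = {"socrates": True, "plato": True, "zeno": True}
happy = {"socrates": False, "plato": True, "zeno": False}

print(all(mortal[x] for x in domain))  # (∀x)Mx is true on this model
print(any(happy[x] for x in domain))   # (∃x)Hx is true on this model
```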
(1) (∃x)Fx
(2) ¬(∃x)(Fx∧Mx)
(3) ¬(∀x)Fx∧(∃y)Ry
(4) (∃x)(∀y)(Rx↔My)
In the case of (1), the scope of the quantifier ranges over ‘Fx,’ which is to the im-
mediate right. In the case of (2), the scope of the quantifier is ‘(Fx∧Mx),’ which is in
the parentheses to the immediate right. In the case of (3), the scope of (∀x) operates
upon ‘Fx’ while (∃y) operates on ‘Ry.’ Finally, (∃x) applies to ‘(∀y)(Rx ↔ My),’
while (∀y) applies to ‘(Rx ↔ My).’
Scope plays an important role in how we translate from English into RL, and vice
versa. Consider the following example:
(5) (∃x)(Ix)∧(∃x)(Ax)
(6) (∃x)(Ix∧Ax)
It may appear that (5) and (6) say the same thing, but they express different proposi-
tions and are true and false under different conditions. Consider the following para-
phrase of (5):
(5*) There exists an x that is intelligent, and there exists an x that is an alien.
In order for (5) to be true, it is not necessary that there exist something that is both
intelligent and an alien. Proposition (5) will be true, for example, if a stupid alien
exists and some intelligent dolphin exists. This is distinct from (6). Consider the fol-
lowing paraphrase of (6):
In the case of proposition (6), (6*) shows that (6) is true if and only if (iff) there
exists something that is both intelligent and an alien.
The difference between (5) and (6) is the result of a difference in the scope of the
quantifier. In the case of (5), the main operator is ‘∧,’ and two different propositional
forms are existentially quantified. In the case of (5), there are two quantifiers. The first
existential quantifier selects an x from the universe of discourse that is said to be intel-
ligent. The second existential quantifier selects an x from the universe that is said to be
an alien. In the case of (5), neither existential quantifier quantifies over the conjunc-
tion in ‘Ix∧Ax.’ This is distinct from the existential quantifier in (6). This existential
quantifier operates not only on ‘Ix’ and ‘Ax’ but on the conjunction of ‘Ix∧Ax.’ That
is, it selects an x from the universe of discourse that is both intelligent and an alien.
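The difference can be made concrete on a small, invented model containing a stupid alien and an intelligent dolphin: (5) comes out true while (6) comes out false.

```python
# A two-object model, invented for illustration, on which (5) and (6) diverge.
domain = ["alien", "dolphin"]
intelligent = {"alien": False, "dolphin": True}
is_alien = {"alien": True, "dolphin": False}

# (5) (∃x)(Ix)∧(∃x)(Ax): two separately quantified conjuncts.
prop5 = any(intelligent[x] for x in domain) and any(is_alien[x] for x in domain)
# (6) (∃x)(Ix∧Ax): one quantifier whose scope covers the conjunction.
prop6 = any(intelligent[x] and is_alien[x] for x in domain)

print(prop5, prop6)  # True False
```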
In each of the above cases, parentheses are used to determine what is within the
scope of a quantifier. There is thus a parallel between how the scope of quantifiers is
determined and how the scope of negation is determined. For example, the negation in
‘¬(P∧Q)’ applies to the conjunction ‘P∧Q,’ while the negation in ‘¬P∧Q’ applies only
to the atomic ‘P.’ This is the same for quantifiers since (∀x) in ‘(∀x)(Px→Qx)’ applies
to the conditional ‘Px→Qx,’ while (∀x) in ‘(∀x)(Px)∧(∀y)(Qy)’ applies only to ‘Px.’
To illustrate the notion of scope even further, consider the following three examples:
(7) (∃x)(Px∧Gy)∧(∀y)(Py→Gy)
(8) (∃x)[(Mx→Gx)→(∃y)(Py)]
(9) (∀z)¬(Pz∧Qz)∧(∃x)¬(Px)
In the case of (7), ‘Px∧Gy’ falls within the scope of (∃x) while ‘Py→Gy’ falls
within the scope of (∀y). This is indicated by the parentheses. In the case of (8),
‘[(Mx→Gx)→(∃y)(Py)]’ falls within the scope of the leftmost quantifier (∃x), while
(∃y) has ‘Py’ within its scope. Again, this is indicated by the parentheses. Finally,
in the case of (9), ‘¬(Pz∧Qz)’ is within the scope of (∀z) and ‘¬(Px)’ is within the
scope of (∃x).
Exercise Set #1
A. Identify the names and predicates in the following English sentences. Also, iden-
tify the adicity of any predicates you find.
1. * John is tall.
2. John is shorter than Liz.
3. * John is shorter than Liz but taller than Vic.
4. If Vic is standing between Liz and Vic, then Vic is not tall.
5. * If Vic is standing between Sam and Mary, then Vic is tall.
6. Liz is taller than John and taller than Vic, except when she is standing
next to Sam.
7. * If Liz is standing next to Vic, Sam, Mary, and John, then Liz is tall.
8. All men are mortal.
9. * Some men are not mortal.
10. Everyone loves someone.
A.
1. * John is a name, and is tall is a one-place predicate.
3. * John, Liz, and Vic are names, ____ is shorter than ____ is a two-place
predicate, and ____ is taller than ____ is a two-place predicate.
5. * Vic, Sam, and Mary are names, ____ standing between ____ and ____ is a
three-place predicate, and ____ is tall is a one-place predicate.
7. * Liz, Vic, Sam, Mary, and John are names, ____ is standing next to ____ is
a two-place predicate, and ____ is tall is a one-place predicate.
9. * There are no names; ____ is a man and ____ is a mortal are one-place
predicates.
In this section, we distinguish between free and bound variables in RL, specify how
to go about finding the main operator of a well-formed formula (wff, pronounced
‘woof’) in RL, and finally provide the formal syntax of RL.
(1) (∀x)(Fx→Bx)∨Wx
(2) (∃x)Mx∧(∃y)Ry
(3) (∀y)(Mx∧By)
In the case of (1), ‘Fx’ and ‘Bx’ are in the scope of the universal quantifier (∀x),
while ‘Wx’ is not in the scope. In the case of (2), ‘Mx’ is in the scope of (∃x) while
‘Ry’ is in the scope of (∃y). Finally, in the case of (3), ‘Mx’ and ‘By’ are both in the
scope of (∀y). In this section, we clarify the distinction between a bound variable and
a free variable.
Bound variable When a variable is within the scope of a quantifier that quantifies
that specific variable, then the variable is a bound variable.
Free variable A free variable is a variable that is not a bound variable.
(3) (∀y)(Mx∧By)
Thus, since ‘Mx’ is not in the scope of a quantifier that quantifies for that specific
variable, the x in ‘Mx’ is not a bound variable; and if it is not a bound variable,
then it is a free variable.
To consider this same distinction in a different way, a variable can be free (not
bound) in two cases: (1) when it is not contained in the scope of any quantifier, or (2)
when it is in the scope of a quantifier, but the quantifier does not specifically quantify
for that specific variable. Consider the following example:
(6) [(∀x)(Px→Gy)∧(∃z)(Pxy∧Wz)]∨Rz
In the case of (6), the universal quantifier has ‘Px→Gy’ in its scope, but only x can
be considered a bound variable since (∀x) only specifies the quantity for x. Likewise,
while ‘Pxy∧Wz’ is within the scope of the existential quantifier, only the z is a bound
variable since (∃z) only specifies the quantity for z. However, the z in ‘Rz’ is a free
variable since it does not fall within the scope of a quantifier.
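The free/bound distinction can also be checked mechanically. The sketch below is an illustration and not the book's notation: it encodes wffs as nested tuples, with `('forall', 'x', body)` standing for (∀x)body, and computes the free variables recursively.

```python
# A hedged sketch, not the book's notation: wffs as nested tuples.
VARIABLES = {'w', 'x', 'y', 'z'}

def free_variables(wff, bound=frozenset()):
    """Return the set of variables occurring free in wff."""
    kind = wff[0]
    if kind == 'pred':                         # atomic: ('pred', letter, terms)
        return {t for t in wff[2] if t in VARIABLES and t not in bound}
    if kind == 'not':
        return free_variables(wff[1], bound)
    if kind in ('and', 'or', 'if', 'iff'):     # binary connectives
        return free_variables(wff[1], bound) | free_variables(wff[2], bound)
    if kind in ('forall', 'exists'):           # the quantifier binds its variable
        return free_variables(wff[2], bound | {wff[1]})
    raise ValueError(kind)

# (6): [(∀x)(Px→Gy) ∧ (∃z)(Pxy∧Wz)] ∨ Rz
f6 = ('or',
      ('and',
       ('forall', 'x', ('if', ('pred', 'P', ['x']), ('pred', 'G', ['y']))),
       ('exists', 'z', ('and', ('pred', 'P', ['x', 'y']), ('pred', 'W', ['z'])))),
      ('pred', 'R', ['z']))
assert free_variables(f6) == {'x', 'y', 'z'}   # the y in 'Gy', the x and y in 'Pxy', the z in 'Rz'
```

The recursion mirrors the definition in the text: a variable is bound exactly when some enclosing quantifier on the path from the root quantifies that specific variable.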
(1) (∃x)(Px∧Qx)
(2) (∃x)(Px)∧(∃x)(Qx)
(3) ¬(∃x)(Px∧Qx)
(4) (∀y)(∃x)(Rx→Py)
Each of the above four predicate wffs has a different main operator. In the case of
(1), it is the existential quantifier (∃x). The reason for this is that the ‘∧’ is within the
scope of the quantifier. Thus, (∃x) is the operator with the greatest scope. In the case
of (2), the main operator is the ‘∧.’ The reason for this is that it does not fall within
the scope of a quantifier, and it operates on two separate quantified expressions, that
is, ‘(∃x)(Px)’ and ‘(∃x)(Qx).’ In the case of (3), the main operator is the negation (¬).
The reason it is not the ‘∧’ is that the ‘∧’ falls within the scope of the existential
quantifier, and the reason it is not the existential quantifier is that the negation
operates upon the entire existentially quantified expression. Finally, in the case of (4), the main operator
is the universal quantifier. In that expression, while (∃x) quantifies over the propositional form ‘(Rx→Py),’ (∀y) quantifies over ‘(∃x)(Rx→Py),’ making it the quantifier
with the greatest scope.
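Once a wff is parsed, finding the main operator is mechanical: it is the outermost node of the parse tree, i.e., the operator with the greatest scope. The nested-tuple encoding below is an illustrative assumption, not the book's notation.

```python
# A hedged sketch: the main operator of a parsed wff is its outermost node.
def main_operator(wff):
    """Return the outermost operator of a parsed wff (None for atomic wffs)."""
    return None if wff[0] == 'pred' else wff[0]

# (1) (∃x)(Px∧Qx)  versus  (2) (∃x)Px∧(∃x)Qx
f1 = ('exists', 'x', ('and', ('pred', 'P', ['x']), ('pred', 'Q', ['x'])))
f2 = ('and', ('exists', 'x', ('pred', 'P', ['x'])),
             ('exists', 'x', ('pred', 'Q', ['x'])))
assert main_operator(f1) == 'exists'   # the ∧ lies inside the quantifier's scope
assert main_operator(f2) == 'and'      # the ∧ joins two separate quantified wffs
assert main_operator(('not', f1)) == 'not'   # (3) ¬(∃x)(Px∧Qx)
```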
Consider a variety of examples:
Notice that (i) says ‘followed by n terms’ and that these terms are either names
or variables. Thus, if ‘P’ is a three-place predicate, then ‘Pabc’ and ‘Paaa’ are wffs
since they are formulas consisting of ‘P’ followed by three names. Similarly, ‘Pxyz,’
‘Pxxx,’ and ‘Pxyx’ are wffs since they are formulas consisting of ‘P’ followed by
three variables. In contrast, ‘Pab’ and ‘Pa’ are not wffs since they are formulas with
‘P’ but are not followed by three terms. It is helpful to distinguish between ‘Pxyz’ (a
wff with at least one free variable) and ‘Pabc’ (a wff consisting only of names). Let’s
call a wff consisting of an n-place predicate ‘P’ followed by n terms where one of
those terms is a free variable an open sentence or open formula. In contrast, let’s call
a wff consisting of an n-place predicate ‘P’ followed by n terms where no term is a
free variable a closed sentence or closed formula.
Notice that (ii) says that if you have a wff ‘P,’ then the negation of that wff is also
a wff. Thus, if ‘Pa,’ ‘¬Qab,’ and ‘(∀x)Px’ are all wffs, then ‘¬Pa,’ ‘¬¬Qab,’ and
‘¬(∀x)Px’ are also wffs. Notice that (iii) says that if you have two wffs ‘P’ and ‘Q,’
then placing a truth-functional operator between the two wffs forms a wff. Thus, if
‘Pa,’ ‘¬Qab,’ and ‘(∀x)Px’ are all wffs, then ‘Pa∧¬Qab,’ ‘¬Qab∨(∀x)Px,’
‘Pa→(∀x)Px,’ and ‘¬Qab↔Pa’ are wffs (as are many others).
Perhaps the most complicated of the four formation rules is (iv), and so we will
consider this rule in two parts. First, take a look at the antecedent of (iv):
If ‘P’ is a wff in RL containing a name ‘a,’ and if ‘P(x/a)’ is what results from substi-
tuting the variable x for every occurrence of ‘a’ in ‘P’
First, ‘P(x/a)’ symbolizes the substitution (or replacement) of names for variables.
For example, take the following wff:
(1) Pb
A replacement ‘P(x/b)’ for (1) is just the substitution of the variable x for every ‘b’
in (1). Thus,
Pb ————–—► Px
P(x/b)
(2) Pbb
In the case of (2), ‘P(z/b)’ results from substituting z for every ‘b.’ That is,
Notice that this process of substitution alone does not generate a wff. For that we
need to turn to the consequent clause of (iv):
(∀x)Px
(∃x)Px
Finally, (v) ensures that the only wffs allowed into RL are those that are the result
of using (i) to (iv).
To illustrate the use of these rules, we will consider three examples. Consider the
following formulas in RL:
(3) Pab∧Ra
(4) ¬Qa→(∀x)Rx
(5) (∀x)Pxx→¬(∃y)Gy
Assume that ‘Pxy’ is a two-place predicate, and all other predicates are one-place.
First, we start by showing that (3) is a wff.
Notice that ‘Pab’ and ‘Ra’ are both wffs by rule (i). Both have the requisite number
of names after them. Similar to propositional logic, rule (iii) justifies ‘Pab∧Ra’ as a
wff in RL.
Moving to a more complex example, consider (4):
Using the formation rules, the above shows that ‘¬Qa→(∀x)Rx’ is a wff in RL.
Perhaps key to showing this is the use of rule (iv) at line 2. Line 2 states that if ‘Ra’
is a wff and we can substitute the variable x for every ‘a’ in ‘Ra,’ then the resulting
universally quantified proposition is a wff. Since ‘Ra’ is clearly a wff, it follows that
‘(∀x)Rx’ is a wff.
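The formation rules lend themselves to a recursive check. The sketch below is a hedged illustration, not the book's notation: the nested-tuple encoding, the predicate arities, and the stock of names and variables are all assumptions, and rule (iv) is simplified to "a quantifier applied to a variable and a wff."

```python
# A hedged sketch of formation rules (i)-(v) as a recursive check.
ARITY = {'P': 2, 'Q': 1, 'R': 1, 'G': 1}    # 'Pxy' is two-place; the rest one-place
NAMES = {'a', 'b', 'c'}
VARS = {'x', 'y', 'z'}

def is_wff(f):
    kind = f[0]
    if kind == 'pred':                        # rule (i): n-place 'P' plus n terms
        return len(f[2]) == ARITY[f[1]] and all(t in NAMES | VARS for t in f[2])
    if kind == 'not':                         # rule (ii): negation of a wff
        return is_wff(f[1])
    if kind in ('and', 'or', 'if', 'iff'):    # rule (iii): binary connectives
        return is_wff(f[1]) and is_wff(f[2])
    if kind in ('forall', 'exists'):          # rule (iv), simplified: quantify a variable
        return f[1] in VARS and is_wff(f[2])
    return False                              # rule (v): nothing else is a wff

# (5): (∀x)Pxx → ¬(∃y)Gy
f5 = ('if',
      ('forall', 'x', ('pred', 'P', ['x', 'x'])),
      ('not', ('exists', 'y', ('pred', 'G', ['y']))))
assert is_wff(f5)
assert not is_wff(('pred', 'P', ['a']))       # 'Pa' fails rule (i): 'P' needs two terms
```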
Finally, consider the initial use of formation rules to show that (5) is a wff:
Exercise Set #2
7. * ¬(∃x)(Fx)∧(∃x)(Fx)
8. (∃x)(¬Fx)∧(∃x)(Fx)
9. (Tb∧Qa)→(∀x)(Fx→Gy)
10. Tb∧¬Tb
11. (Ta∧Ra)∨¬(Ta∧Ra)
12. Rx→(∀x)(Px)
C. Using the formation rules, show that the following propositions are wffs in RL,
where ‘Pxy’ is a two-place predicate and ‘Rx’ and ‘Zx’ are one-place predicates:
1. * Ra∧Paa
2. Ra→Paa
3. * (∀x)Pxx
4. (∃x)Pxx
5. * ¬(∃y)Pyy
6. ¬(∀x)Pxx∧(∃x)Zx
7. * (∃x)(∀y)Pxy
8. (¬Pab∧Rb)→Ra
9. (∃x)Pxx↔¬(∀z)Pzz
10. (∃x)(∀y)(Pxx∧Zy)
11. (∃x)Pxy→(∀y)Ry
12. ¬¬(∀x)Pxx
A.
1. * ∀
3. * ∧
5. * ¬
7. * ∨
9. * (∀x)
B.
1. * (∃x)(Rx → Ga)
There are no free variables. The ‘a’ in ‘Ga’ is a name. The x in ‘Rx’
is a bound variable and falls in the scope of (∃x). The proposition has a
constant truth value.
3. * (∃x)(Mx)∨(∀x)(Rx)
There are no free variables and no names. The x’s in ‘Rx’ and ‘Mx’ are
both bound variables. The x in ‘Mx’ falls within the scope of (∃x), and the
x in ‘Rx’ falls within the scope of (∀x). The proposition has a constant
truth value.
5. * Pa∧(∃w)(Vw∧Lx)
The x in ‘Lx’ is free, and the ‘a’ in ‘Pa’ is a name. The w in ‘Vw’ is
within the scope of (∃w) and is bound by that quantifier. The proposition does not
have a constant truth value.
7. * ¬(∃x)(Fx)∧(∃x)(Fx)
There are no free variables and no names. The x’s in both the left and
right ‘Fx’ are bound, each falling within the scope of its own existential
quantifier. The proposition has a constant truth value.
C.
1. * ‘Ra∧Paa’ is a wff. Proof: ‘Ra’ is a one-place predicate followed by one
name (rule i). ‘Paa’ is a two-place predicate followed by two names (rule
i). If ‘Ra’ and ‘Paa’ are wffs, then ‘Ra∧Paa’ is a wff (rule iii).
3. * (∀x)Pxx. Proof: If ‘Paa’ is a wff in RL containing a name ‘a,’ and ‘Pxx’ is
what results from substituting x for every occurrence of ‘a’ in ‘Paa,’ then
‘(∀x)Pxx’ is a wff (rule iv).
5. * ¬(∃y)Pyy. Proof: If ‘Paa’ is a wff in RL containing a name ‘a,’ and ‘Pyy’
is what results from substituting y for every occurrence of ‘a’ in ‘Paa,’
then ‘(∃y)Pyy’ is a wff (rule iv). If ‘(∃y)Pyy’ is a wff, then ‘¬(∃y)Pyy’ is
a wff (rule ii).
7. * (∃x)(∀y)Pxy. Proof: If ‘Pab’ is a wff in RL containing a name ‘b,’ and
‘Pay’ is what results from substituting y for every occurrence of ‘b’ in
‘Pab,’ then ‘(∀y)Pay’ is a wff (rule iv). If ‘(∀y)Pay’ is a wff in RL con-
taining a name ‘a,’ and ‘Pxy’ is what results from substituting x for every
occurrence of ‘a’ in ‘Pay,’ then ‘(∃x)(∀y)Pxy’ is a wff (rule iv).
The following provides the semantics of RL. This is done by (1) explaining the nature
of a set and set membership, then (2) articulating what it means to say that a formula
is true in RL.
P = {x | x is a politician}.
Here is another example. Consider the set of positive even integers. Such a set is
infinite, and so we cannot simply list all of the integers (since there are an infinite
number). To represent this set, we can simply write,

E = {x | x is a positive even integer}.
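Set-builder notation has a direct analogue in a Python set comprehension. The sketch below is an illustration (not from the book); since the set of positive even integers is infinite, the comprehension can only materialize a finite slice of it.

```python
# {x | x is a positive even integer}, restricted to x <= 20 so the set is finite
positive_evens = {x for x in range(1, 21) if x % 2 == 0}

assert 14 in positive_evens and 7 not in positive_evens
assert positive_evens == {2, 4, 6, 8, 10, 12, 14, 16, 18, 20}
```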
The notions of an interpretation and a domain of discourse both need further elabo-
ration. An interpretation of a language requires the stipulation (or selection) of a do-
main of discourse. The domain of discourse consists of all of the things that a language
can meaningfully refer to or talk about.
Domain The domain of discourse (D) consists of all of the things that a language
can meaningfully refer to or talk about.
We assume that domains are never empty. There is always at least one thing that
we can meaningfully talk about. A specification of D is achieved in one of two ways.
First, D can be specified simply by listing the individual objects in the domain. For
example, imagine a domain consisting of three people: John, Sally, and Mary. The
domain consisting of these three people can be represented as follows:
Second, another way of specifying the domain is by stating a class of objects. For
example, suppose that the domain consists of living human beings. It would be ex-
tremely cumbersome to list all of them by name. Instead, the domain consisting of
living human beings can be represented as follows:
Next, we turn to the interpretation of the names (object constants) in RL. In the case
of names, I assigns each name an object in D. This assignment of an element in D to
a name determines the meaning of the name by giving its extension. That is, it deter-
mines the meaning of a name by assigning it an element in D. In RL, if a name lacks
an interpretation, then it is known as a nonreferring (or uninterpreted) term.
Let’s consider this more concretely with an example. Imagine the following D:
Now consider the following set of names in RL: ‘a,’ ‘b,’ ‘c.’ In order for these names
to mean anything, they must be interpreted. Interpreting names consists of assigning
them an object in D. Thus, we might write,
I(a) = Alfred
I(b) = Bill
I(c) = Corinne
That is, Alfred in D is assigned to ‘a,’ Bill in D is assigned to ‘b,’ and Corinne in
D is assigned to ‘c.’
Not only does an interpretation function assign objects in D to names (individual
constants), but it also assigns n-tuples to n-place predicates. To get a clearer under-
standing of what an n-tuple is, consider the following n-place predicates:
Sx: x is short
Bx: x is bad
Notice that both of the above predicates are one-place predicates. One way we
might go about interpreting n-place predicates is by assigning each n-place predicate
to a set (or collection) of objects in D. Thus,
Lxy: x loves y
Here the interpretation function does not simply tell us about a collection of single
objects (i.e., about which objects are loved or which objects do the loving). Instead,
an interpretation of this two-place ‘Lxy’ tells us something about pairs of objects (i.e.,
about which objects love which objects). It tells us who loves whom. Thus, rather than
saying an interpretation of an n-place predicate tells us something about a collection of
objects, we say that it tells us something about a set (or collection) of n-tuples, where
an n-tuple is a sequence of n-objects. In the above case, an interpretation of ‘Lxy’ is
an assignment of a set of 2-tuples to ‘Lxy.’
For convenience, we will write tuples by listing the elements (objects) within angle
brackets (< >), separating elements with commas. For example,

<Alfred, Corinne>

denotes a 2-tuple.
Thus, suppose that in D, Bill loves Corinne (and no one else), Corinne loves Alfred
(and no one else), and Alfred loves no one. The interpretation of ‘Lxy’ in D would be
represented by the following set of 2-tuples:
This says that the interpretation of the predicate x loves y relative to D consists of a
set with two 2-tuples: one 2-tuple is <Bill, Corinne> and the other is <Corinne, Alfred>.
Finally, now that we have an understanding of how the interpretation function as-
signs objects to names and n-tuples to n-place predicates, it is possible to define when
a wff is true in RL. We will call an interpretation where a truth value is assigned to a
wff a valuation (v). For this, let ‘R’ be an n-place predicate, let ‘a1,’ . . . , ‘an’ be a finite
set of names in RL, and let ‘ai’ be a randomly selected name.
This says that we can assign a value of true to ‘Rai’ if and only if our interpreta-
tion of ‘ai’ is in the interpretation of ‘R.’ To consider this concretely, we examine two
examples. First, consider the following wff:
Sa
Let’s say that ‘Sa’ is the predicate logic translation of Alfred is short. According to
(1), ‘Sa’ is true if and only if ‘a’ is in ‘S,’ that is, if and only if an interpretation of the
predicate ‘short’ includes an interpretation of the name Alfred. Earlier, we said that
And so, ‘Sa’ is true in the model since Alfred belongs to the collection of objects
that are short. Second, consider the following more complex wff:
Lca
v(Lca) = T
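A model of this kind is easy to represent concretely. In the sketch below, the domain and the loves relation follow the example above; that Alfred is short is an added assumption used to make ‘Sa’ come out true, and the encoding itself is illustrative rather than the book's notation.

```python
# A hedged sketch of a model: I assigns names to objects and predicates to
# sets of n-tuples; an atomic sentence is true iff the tuple of the
# interpreted names is in the predicate's set.
D = {'Alfred', 'Bill', 'Corinne'}
I_names = {'a': 'Alfred', 'b': 'Bill', 'c': 'Corinne'}
I_preds = {
    'S': {('Alfred',)},                                   # assumed extension of 'short'
    'L': {('Bill', 'Corinne'), ('Corinne', 'Alfred')},    # who loves whom, as above
}

def v_atomic(predicate, names):
    """v(R a1...an) = T iff <I(a1), ..., I(an)> is in I(R)."""
    return tuple(I_names[n] for n in names) in I_preds[predicate]

assert v_atomic('S', ['a'])            # v(Sa) = T: Alfred is in I(S)
assert v_atomic('L', ['c', 'a'])       # v(Lca) = T: <Corinne, Alfred> is in I(L)
assert not v_atomic('L', ['a', 'b'])   # Alfred loves no one
```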
The interpretation of wffs that involve truth-functional operators as their main op-
erators is the same as in propositional logic and straightforward given that we know
the truth values of their components. Given that ‘P’ and ‘Q’ in (2) to (6) are well-
formed formulas, then relative to a model,
Let’s take stock. Assigning truth values using a model has involved (1) the selection
of a domain, (2) an interpretation function that assigns objects in the domain to names,
(3) an interpretation function that assigns a set of n-tuples to n-place predicates, and
(4) an interpretation function (valuation) that assigns the value true to an atomic sentence if and only if the tuple of objects that its names designate is in the set of n-tuples
designated by the n-place predicate. Missing from the above is a valuation of wffs of
the form ‘(∀x)P’ and ‘(∃x)Q.’ In what follows, we assume that formulas of the form
‘(∀x)P’ and ‘(∃x)Q’ are closed; that is, they do not contain any free variables, and x
and only x occurs free in ‘P’ and ‘Q.’ In what follows, we will define the truth values of
quantified formulas by relying on the truth values of simpler, nonquantified
formulas. This method requires some care, for, at least initially, we might say that
‘(∃x)Px’ is true if and only if ‘Px’ is true under some replacement of x with an object or a
name (object constant). Likewise, a wff like ‘(∀x)Px’ is true if and only if every
replacement of x with an object or name yields a true proposition.
However, this will not work without some further elaboration. On the one hand,
we cannot replace variables with objects from the domain since variables are linguistic
items, and replacing a variable with an object won’t yield a wff, or even a proposition.
On the other hand, we cannot replace variables with names from our logical vocabu-
lary because this falsely assumes that we have a name for every object in the domain.
It might be the case that some objects in the domain are unnamed.
The solution to this problem is not simply to expand our logical vocabulary so that
there is a name for every object but to consider the multitude of different ways in
which a single name can be interpreted relative to the domain, that is, to consider the
many different ways that an object in the domain can be assigned to a name.
To see this more clearly, consider the following domain:
I (a): John
I1(a): Vic
I2(a): Liz
Let’s say that for any name ‘a,’ an interpretation ‘Ia’ is a-variant or a-varies if and
only if ‘Ia’ interprets ‘a’ (i.e., it assigns it an object in D) and either does not differ
from I or differs only in the interpretation it assigns to ‘a’ (i.e., it doesn’t differ on the
interpretation of any other feature of RL). Thus, I, I1, and I2 are all a-variant interpreta-
tions of I since they all assign ‘a’ to an object in the domain and either do not differ
from I or differ only in the interpretation they assign to ‘a.’
Using the notion of a variant interpretation, we can define what it means for a quan-
tified formula to be true or false.
(7) v((∀x)P) = T iff for every name ‘a’ not in ‘P’ and every a-variant interpretation,
v(P(a/x)) = T.
v((∀x)P) = F iff for at least one name ‘a’ not in ‘P’ and at least one a-variant
interpretation, v(P(a/x)) = F.
(8) v((∃x)P) = T iff for at least one name ‘a’ not in ‘P’ and at least one a-variant
interpretation, v(P(a/x)) = T.
v((∃x)P) = F iff for every name ‘a’ not in ‘P’ and every a-variant interpretation,
v(P(a/x)) = F.
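Because the domain is finite here, these clauses can be run directly: to evaluate a quantified wff, introduce a fresh name and try every way of assigning it an object of D, which is a finite stand-in for the a-variant interpretations. The nested-tuple encoding and the sample model below are illustrative assumptions, not the book's notation.

```python
# A hedged sketch of clauses (7) and (8) over a small, assumed model.
D = {'Alfred', 'Bill', 'Corinne'}
I_preds = {'L': {('Bill', 'Corinne'), ('Corinne', 'Alfred')}}

def subst(wff, var, name):
    """P(a/x): replace every free occurrence of var in wff with name."""
    kind = wff[0]
    if kind == 'pred':
        return (kind, wff[1], [name if t == var else t for t in wff[2]])
    if kind == 'not':
        return (kind, subst(wff[1], var, name))
    if kind in ('forall', 'exists'):
        # a rebinding quantifier shields its own variable from substitution
        return wff if wff[1] == var else (kind, wff[1], subst(wff[2], var, name))
    raise ValueError(kind)

def v(wff, I_names):
    kind = wff[0]
    if kind == 'pred':          # atomic: is the tuple of interpreted names in I(R)?
        return tuple(I_names[t] for t in wff[2]) in I_preds[wff[1]]
    if kind == 'not':
        return not v(wff[1], I_names)
    if kind in ('forall', 'exists'):
        var, body = wff[1], wff[2]
        fresh = 'fresh_' + var  # a name not occurring in the formula
        results = (v(subst(body, var, fresh), {**I_names, fresh: obj}) for obj in D)
        return all(results) if kind == 'forall' else any(results)
    raise ValueError(kind)

# (∃x)(∃y)Lxy is true: Bill loves Corinne
assert v(('exists', 'x', ('exists', 'y', ('pred', 'L', ['x', 'y']))), {})
# (∀x)(∃y)Lxy is false: Alfred loves no one
assert not v(('forall', 'x', ('exists', 'y', ('pred', 'L', ['x', 'y']))), {})
```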
Exercise Set #3
A. Let D = {a, b, c}, I (a) = a, I (b) = b, I (c) = c, I (H) = {<a>, <b>, <c>}, I (L) =
{<a,b>, <c,b>}. Using this interpretation, determine the truth values of the fol-
lowing wffs:
1. * Lab
2. Lba
3. * Lac
4. Lab∧Lcb
5. * Lba∧Lbc
6. (∃x)(∃y)Lxy
7. (∀x)Hx
8. (∀x)Hx→(∀x)Lxx
9. (∃x)Lxx
B. Let D = {a, b, c, d}, I (a) = a, I (b) = b, I (c) = c, I (d) = d, I (H) = {<a>, <b>, <c>},
I (L) = {<a,a>, <a,b>, <b,b>, <c,c>}. Using this interpretation, determine the truth
values of the following wffs:
1. * (∀x)Lxx
2. (∀x)(∀y)Lxy
3. * (∃x)¬Lxx
4. (∃x)Lxx→Lca
5. * (∀x)Hx∧(∃x)Lxx
6. (∃x)(∃y)Lxy
7. (∀x)Hx
8. (∀x)Hx→(∀x)Lxx
9. (∃x)Lxx
A.
1. * v(Lab) = T
3. * v(Lac) = F
5. * v(Lba∧Lbc) = F
B.
1. * v(∀x)Lxx = F
3. * v(∃x)¬Lxx = T
5. * v((∀x)Hx∧(∃x)Lxx) = F
In this section, we explore various facets of translating from English into predicate
logic and vice versa. This section is designed to give you a basic understanding of how
to translate from natural language into RL.
Using the above translation key, we can translate the following sentences:
(1) to (3) can be translated using the key and the conventions for translating propo-
sitions involving predicates and singular terms:
(1*) Tjf
(2*) Ffj
(3*) Hf∧Lj
Our next step is to develop techniques and conventions for using a translation key
to translate expressions with quantifiers.
(1) (∀x)Hx
(2) (∀x)¬Zx
(3) (∀x)(Zx→Hx)
(4) (∀x)(Zx→¬Hx)
(5) ¬(∀x)(Zx→Hx)
Let’s consider a translation of (1) by taking one part of the formula at a time.
(∀x) is translated as for every x, for all x’s, or for each x. The second part of (1)
says that x is H or x is happy. Putting these two parts together, we get a bridge transla-
tion. A bridge translation is not quite English and not quite predicate logic. Here is a
bridge translation of (1):
Using this bridge translation, we can more easily translate (1) into colloquial Eng-
lish:
It may not be immediately obvious how to render (3B) into English. If it isn’t, then
you can try to make (3B) more concrete by expanding the bridge translation as follows:
(3B*) Choose any object you please in the domain of discourse; if that object is a
zombie, then it will also be happy.
(4B*) Choose any object you please in the domain of discourse consisting of human
beings (living or dead); if that object is a zombie, then it will not be happy.
Notice that in the case of (5), which is ‘¬(∀x)(Zx→Hx),’ the main operator is nega-
tion. One way to translate this is by translating ‘(∀x)(Zx→Hx)’ as follows:
Next, translate the negation into English by putting ‘not’ in front of this expression.
That is, (5) reads,
Finally, consider universally quantified expressions not involving ‘→’ as the main
operator:
(6) (∀x)(Zx∧Hx)
(7) (∀x)(Zx∨Hx)
(8) (∀x)(Zx↔Hx)
In other words,
(1) (∃x)Hx
(2) (∃x)¬Zx
(3) ¬(∃x)Zx
(4) (∃x)(Zx∧Hx)
(5) (∃x)Zx∧(∃x)Hx
Let’s consider a translation of (1) by taking one part of the formula at a time.
(∃x) is translated as for some x, there exists an x, or there is at least one x. The
second part of (1) says that x is H or x is happy. Putting these two parts together, we
get a bridge translation. Again, a bridge translation is not quite English and not quite
predicate logic. Here is a bridge translation of (1):
(1) says that there is at least one object in D that has the property of being happy.
Using the bridge translation (1B), we can more easily translate (1) into colloquial
English:
In the case of (3), note that the negation has wide scope. Thus, we can translate
‘(∃x)Zx’ first and then translate ‘¬(∃x)Zx.’ That is, ‘(∃x)Zx’ translates into Someone
is a zombie, and ‘¬(∃x)Zx’ translates as
Notice that (2) and (3) say something distinct. (2) says that something exists that
is not a zombie, while (3) says that zombies do not exist. Let’s consider (4) and (5)
together. The bridge translations for (4) and (5) are as follows:
Notice that these two propositions do not say the same thing. (4) asserts that there
is something that is both a zombie and happy, while (5) asserts that there is a zombie,
and there is someone who is happy.
Finally, consider some propositions where ‘∧’ is not the main operator.
(6) (∃x)(Zx→Hx)
(7) (∃x)(Zx∨Hx)
(8) (∃x)(Zx↔Hx)
(1) Some rich people are not miserly, and some miserly people are not rich.
D: unrestricted
Rx: x is rich
Px: x is a person
Mx: x is miserly
The second step is to determine the main operator of the sentence. In this case, the
proposition is a complex conjunction.
(1*) [Proposition]∧[Proposition]
The third step is to determine the subject of each of the constituent propositions.
The proposition to the left of the conjunct is about people, and the proposition to the
right is about people.
(1**) Px∧Px
Next determine what is said about the subject. In the left conjunct, the proposition
says that people who are rich are not miserly. In the right conjunct, the proposition
says that people who are miserly are not rich.
(1***) [Px∧Rx∧¬Mx]∧[Px∧Mx∧¬Rx]
(1****) (∃x)[(Px∧Rx)∧¬Mx]∧(∃x)[(Px∧Mx)∧¬Rx]
Universal Quantifier
Not everything is moveable. ¬(∀x)Mx
Everything is movable. (∀x)Mx
Nothing is moveable. (∀x)¬Mx
Everything is immoveable. (∀x)¬Mx
It is not true that everything is immoveable. ¬(∀x)¬Mx
Honey tastes sweet. (∀x)(Hx→Tx)
If something is honey, then it tastes sweet. (∀x)(Hx→Tx)
Everything is either sweet or gross. (∀x)(Sx ∨Gx)
Either everything is sweet, or else everything is gross. (∀x)(Sx)∨(∀x)(Gx)
Existential Quantifier
Some people are living. (∃x)(Px∧Lx)
Some people are not living. (∃x)(Px∧¬Lx)
Some living people are mistreated. (∃x)[(Px∧Lx)∧Mx]
Some dead people are mistreated. (∃x)[(Px∧¬Lx)∧Mx]
Some people are liars and thieves. (∃x)[Px∧(Lx∧Tx)]
It is not true that some people are honest. ¬(∃x)(Px∧Hx)
Some people are neither honest nor truthful. (∃x)[Px∧¬(Hx∨Tx)]
Some people are liars, and some are thieves. (∃x)(Px∧Lx)∧(∃x)(Px∧Tx)
Some thieving liars are caught, and some are not. (∃x)[(Tx∧Lx)∧Cx]∧(∃x)
[(Tx∧Lx)∧¬Cx]
Exercise Set #4
A. Using the following translation key, translate the predicate logic expressions be-
low into English: D: living humans, Hxy: x hates y, s: Sally, b: Bob, Lxy: x loves y
1. * (∀x)Lxb
2. (∃x)Hxs
3. * (∀x)(Lxb→¬Hxs)
4. (∃x)(Lxb∧Hxs)
5. * [(∃x)(Lbs∧Hbx)]→Lbs
6. (∀x)Lxx∧(∃y)Hyb
7. * (∃x)Lxx∧(∀y)Hyy
8. [(∃x)Lxb∧(∃x)Lxs]∧[(∃x)¬Lxb∧(∃x)¬Lxs]
9. * (∃x)Lxb∧(∃x)Lbx
10. [(∃x)(¬Hxs)∧(∃x)(Lxs)]∧(∀x)(Lxb)
In this section, quantifiers with overlapping scope are considered. Translation from
English into RL is an art, and so there is no foolproof method or decision procedure
for translating from one language to the other. In what follows, a four-step procedure
is outlined, and a number of examples are provided.
It is helpful to start with a simple case. Consider the following English expression
in a domain of discourse consisting of persons:
Step 3 is the most difficult. Here, you are asked to use the quantifiers from step 1
and the predicates from step 2 to represent the proposition that (1) expresses. (1) ex-
presses the proposition that at least one person in the domain loves at least one person
in the domain; that is, (∃x)(∃y)Lxy.
Finally, it can be helpful to add a fourth step that is used to check the translation.
Step 4 suggests that you read the RL wff in English and check this reading against the
literal meaning of the English sentence. In the case of (1), ‘(∃x)(∃y) Lxy’ is read as
These four steps will aid you in your efforts to translate various English sentences
into predicate logic wffs. To gain further clarity and practice, consider the following
sentence:
Again, let’s take our translation of (2) one step at a time. (2) expresses that every
single member of one set of objects (zombies) loves every single member of another
set of objects (humans). First, notice that (2) has two English expressions (every) that
can be captured by the universal quantifier, and so we can replace these with two
instances of the universal quantifier. Second, we need to isolate two different sets of
objects (all of the zombies and all of the humans), for (2) expresses that all of the zombies love all of the humans and not that all of the zombies love themselves. In order
to isolate these sets, we will ultimately predicate the respective properties of being a
zombie and being a human to two different bound variables x and y.
Next, translate any ordinary language predicates into predicates of RL, making sure
to pay attention to their adicity.
Third, use the quantifiers from step 1 and the predicates from step 2 to try to capture
the meaning of (2).
Finally, check the predicate logic formula against the original English translation
by reading off the predicate wff in English.
This step-by-step method can likewise capture the meaning of a number of other
English sentences.
It is important to point out that while sometimes the order of the quantifiers does
not matter, in other cases it is significant. Consider the following predicate logic ex-
pression in a domain of discourse consisting of persons where ‘Lxy’ is the relational
predicate that stands for x loves y:
(∀x)(∀y)Lxy
(∀y)(∀x)Lxy
These expressions are very similar and, in fact, express the same proposition.
That is, both predicate logic wffs express the proposition that Everyone loves every-
one. In addition, consider the following predicate logic expressions involving the
existential quantifier:
(∃x)(∃y)Lxy
(∃y)(∃x)Lxy
Again, both expressions appear similar, and both express the same proposition that
someone loves someone. Examples like those above may give you the impression that
the order of the quantifiers does not matter when you are either translating a predicate
wff into English or interpreting the expression. This, however, is not the case for the
following two wffs:
(∃x)(∀y)Lxy
(∀x)(∃y)Lxy
Exercise Set #5
A. Using the following translation key, translate the following English sentences into
the language of predicate logic: D: persons, b: Bob, Zx: x is a zombie, Ex: x eats
y, Kx: x kills y, Hx: x is a human, Lxy: x loves y
1. * All zombies are human.
2. No humans are zombies.
3. * Everyone is a zombie, or everyone is a human.
4. Some zombies eat some humans, but no human eats a zombie.
5. * If Bob is not a zombie, then some zombie has not eaten some human.
6. All zombies eat humans unless some human kills every zombie.
7. * If Bob is a zombie, then some zombie ate some human.
8. No humans eat zombies, but some zombies eat humans.
9. * If some zombie kills Bob and Bob eats some human, then some zombie
eats some human.
10. If every zombie eats every human, then there are no humans.
B. Using the following translation key, translate the predicate logic arguments below
into English: D: persons, Px: x is all-powerful, Ex: x is evil, Kx: x is all-knowing,
Hx: x is a human, Lxy: x loves y, s: Sally
1. * Someone loves Sally. Therefore, someone loves someone.
2. There exists something that is all-knowing. There exists something that
is all-loving. Therefore, there exists something that is all-knowing and
all-loving.
A.
1. * (∀x)(Zx→Hx)
3. * (∀x)Zx∨(∀x)Hx
5. * ¬Zb→{(∃x)(∃y)[(Zx∧Hy)∧¬Exy]}
7. * Zb→{(∃x)(∃y)[(Zx∧Hy)∧Exy]}
9. * [(∃x)(Zx∧Kxb)∧(∃x)(Hx∧Ebx)]→{(∃x)(∃y)[(Zx∧Hy)∧Exy]}
B.
1. * (∃x)Lxs (∃x)(∃y)Lxy
3. * (∃x)(Kx∧Px) (∃x)Kx∧(∃x)Px
5. * (∃x)Kx→(∃x)(∀y)Lxy, ¬(∃x)(∀y)Lxy ¬(∃x)Kx
7. * (∃x)(∀y)(Hy∧Lxy), (∃x)(∃y)[(Hx∧Hy)∧¬Lxy] (∀x)(∀y)¬Lxy
Definitions
Domain The domain of discourse (D) consists of all of the things that
a language can meaningfully refer to or talk about.
Interpretation-function An interpretation-function is an assignment of (1) objects in
D to names, (2) a set of n-tuples in D to n-place predicates,
and (3) truth values to sentences.
Model A model in RL is a structure consisting of a domain and an
interpretation function.
In this chapter, we explore the truth-tree method for predicate trees. Unlike proposi-
tional logic, the system of predicate logic is undecidable. As such, there is no decision
procedure like truth tables or trees that always produces a yes or no answer about
whether a given proposition, set of propositions, or argument has a logical property.
However, the truth-tree method does offer a partial decision procedure for predicate
logic formulas in that it will give an answer for a number of propositions, sets of
propositions, and arguments.
In propositional logic, there are nine proposition types that can undergo decomposition.
Propositions of this form are capable of undergoing decomposition using the rules
formulated for propositional truth trees.
283
¬(P→Q)
P          ¬→D
¬Q         ¬→D

P→Q
¬P    Q    →D

¬¬P
P          ¬¬D
In predicate logic, there are four additional proposition types that can undergo de-
composition. These are the following:
1 ¬(∃x)Px P
2 ¬(∀y)Wy P
3 (∀x)¬Px 1¬∃D
4 (∃y)¬Wy 2¬∀D
In the above example, notice that negated quantified expressions fully decompose.
In the case of line 1, the negated existential proposition is decomposed into a univer-
sally quantified proposition that ranges over a negated formula. In the case of line
2, the negated universal proposition is decomposed into an existentially quantified
proposition that ranges over a negated formula. Again, whenever a proposition is fully
decomposed, a checkmark (✓) is placed next to that proposition to indicate that the
proposition cannot be further decomposed.
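These two rules amount to pushing a negation through a quantifier while swapping ∃ and ∀. A small sketch (the nested-tuple encoding is an assumption, not the book's notation):

```python
# (¬∃D) turns ¬(∃x)P into (∀x)¬P, and (¬∀D) turns ¬(∀x)P into (∃x)¬P.
def decompose_negated_quantifier(wff):
    """Push a negation through a quantifier, swapping exists and forall."""
    assert wff[0] == 'not' and wff[1][0] in ('exists', 'forall')
    dual = {'exists': 'forall', 'forall': 'exists'}
    quantifier, var, body = wff[1]
    return (dual[quantifier], var, ('not', body))

# ¬(∃x)Px  =>  (∀x)¬Px
assert decompose_negated_quantifier(('not', ('exists', 'x', ('pred', 'P', ['x'])))) \
       == ('forall', 'x', ('not', ('pred', 'P', ['x'])))
```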
Consider another example:
1 ¬(∃x)(∀y)Pxy P
2 ¬(∀y)Wy∧(∃z)Pz P
3 (∀x)¬(∀y)Pxy 1¬∃D
4 ¬(∀y)Wy 2∧D
5 (∃z)Pz 2∧D
6 (∃y)¬Wy 4¬∀D
Notice the use of (¬∃D) at line 3. Line 1 is a negated existentially quantified
proposition, and a use of (¬∃D) results in a universally quantified proposition that
ranges over a negated universally quantified proposition. Also notice that since line 2
is a conjunction, (∧D) is applied to line 2, and then (¬∀D) is applied to ‘¬(∀y)Wy.’
The two rules for the decomposition of quantified expressions are as follows:
(∃D): where ‘a’ is an individual constant (name) that does not previously occur in
the branch.
(∀D): where ‘a’ is any individual constant (name).
According to (∃D) and (∀D), an individual constant (name) is substituted for a bound
variable in a quantified expression. This procedure is symbolized as ‘P(a/x)’ (i.e., re-
place x with ‘a’). Thus, if there is a quantified expression of the form ‘(∀x)P’ or ‘(∃x)P,’
a substitution instance of ‘P(a/x)’ replaces x’s bound by the quantifier with ‘a.’
In order to consider these decomposition rules more closely, consider (∀D). The
decomposition rule (∀D) can be more explicitly stated as follows:
Consistently replace every bound x with any individual constant (name) of your
choosing (even if it already occurs in an open branch) under any (not necessarily
both) open branch of your choosing.
1 (∀x)(Px→Rx) P
2 Pa∨Ra P
3 Pa Ra 2∨D
1 (∀x)(Px→Rx) P
2 Pa∨Ra P
3 Pa Ra 2∨D
4 Pa→Ra 1∀D
Or it can be decomposed under the right branch by replacing each x with an ‘a’:
1 (∀x)(Px→Rx) P
2 Pa∨Ra P
3 Pa Ra 2∨D
4 Pa→Ra 1∀D
1 (∀x)(Px→Rx) P
2 Pa∨Ra P
3 Pa Ra 2∨D
4 Pa→Ra Pa→Ra 1∀D
1 (∀x)(Px→Rx) P
2 Pa∨Ra P
3 Pa Ra 2∨D
4 Pb→Rb Pc→Rc 1∀D
1 (∀x)(Px→Rx) P
2 Pa∨Ra P
3 Pa Ra 2∨D
4 Pa→Ra 1∀D
5 Pb→Rb 1∀D
6 Pc→Rc 1∀D
7 Pd→Rd 1∀D
8 . 1∀D
. 1∀D
. 1∀D
But this is not to say that the use of (∀D) will never yield a completed tree since
there are many truth trees that will close. For example, consider a tree involving the
following set of propositions:
{(∀x)(Pxab→Rmxd), Psab∧¬Rmsd}
1 (∀x)(Pxab→Rmxd) P
2 Psab∧¬Rmsd P
3 Psab 2∧D
4 ¬Rmsd 2∧D
5 Psab→Rmsd 1∀D
Notice that in the above example, the replacement of x with s in line 5 ultimately
results in the tree closing.
(∃x)Px, Pa
1 (∃x)Px P
2 Pa P
3 Pb 1∃D
Notice that a use of (∃D) involves removing the existential quantifier and replacing
the bound variable with an individual constant foreign to the branch. Since ‘a’ already
occurs in the branch containing ‘(∃x)Px,’ we choose the variable replacement ‘P(b/x),’
but we could have chosen ‘P(c/x),’ ‘P(d/x),’ ‘P(e/x),’ and so on.
Consider another, slightly more complicated example involving the following set
of propositions:
{(∃y)(Py→Ry), (∃x)Px∨(∃z)Qz, Pa}
1 (∃y)(Py→Ry) P
2 (∃x)Px∨(∃z)Qz P
3 Pa P
4 Pb→Rb 1∃D
Notice that in decomposing line 1, the bound y’s were not replaced with ‘a’ since
this would violate the restriction on (∃D). Namely, it would violate the restriction that
states the individual constant used to replace the quantified variable must not occur
previously in the branch. Continuing the tree,
1 (∃y)(Py→Ry) P
2 (∃x)Px∨(∃z)Qz P
3 Pa P
4 Pb→Rb 1∃D
5 ¬Pb Rb 4→D
6 (∃x)Px (∃z)Qz (∃x)Px (∃z)Qz 2∨D
7 Pc Qd Pe Qf 6∃D
Notice that the above tree is completed and that the decomposition of line 6 in-
volves replacing existentially bound variables with a variety of different object con-
stants. Note that since each proposition occurs in a different branch, all of these could
be replaced with the same object constant (e.g., ‘c’). It is important to see that the only
restriction on using (∃D) is that you cannot replace a variable with an object constant
that already occurs in that branch.
The reason for the restriction on the use of (∃D) can be explained with an example.
Consider the following tree for the following set of propositions:
{(∃x)(Px), (∃x)(Qx)}
1 (∃x)Px P
2 (∃x)Qx P
3 Pa 1∃D
4 Qa 2∃D—NO!
In the above case, there are two propositions: ‘(∃x)Px’ says that some x in D has
property ‘P,’ while ‘(∃x)Qx’ says that some x in D has property ‘Q.’ These two
propositions do not say that some one object is both ‘P’ and ‘Q.’ That is, the condition
under which ‘(∃x)Px’ and ‘(∃x)Qx’ are true is not the same as the condition under
which ‘(∃x)(Px∧Qx)’ is true. The truth conditions of ‘(∃x)Px’ are represented by se-
lecting some unique and arbitrary individual ‘a’ in the universe of discourse, and the
truth conditions of ‘(∃x)Qx’ are represented by selecting some unique and arbitrary
individual ‘b’ in the universe of discourse. Following the restriction produces the fol-
lowing tree:
1 (∃x)Px P
2 (∃x)Qx P
3 Pa 1∃D
4 Qb 2∃D
Line 4 in the first tree is incorrect, but line 4 in the second tree is correct because
when a substitution instance for ‘(∃x)Qx’ is chosen, ‘a’ cannot be chosen: ‘(∃x)Px’
and ‘(∃x)Qx’ can both be true even if ‘Pa’ and ‘Qa’ are not both true.
In order to protect against the unwarranted assumption that each proposition is refer-
ring to the same object, the use of (∃D) is restricted by only allowing for substitution
instances of individual constants (names) that do not previously occur in the branch.
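The fresh-constant restriction on (∃D) is easy to mechanize. Below is a minimal Python sketch, not from the text: the string representation of formulas and the helper name `fresh_constant` are illustrative assumptions. It picks a constant foreign to a branch, treating ‘x,’ ‘y,’ and ‘z’ as variables rather than names:

```python
def fresh_constant(branch):
    """Return an individual constant (name) that does not occur anywhere
    in the branch; 'x', 'y', 'z' are reserved as variables, not names."""
    used = {ch for formula in branch for ch in formula
            if ch.islower() and ch not in "xyz"}
    for name in "abcdefghijklmnopqrstuvw":
        if name not in used:
            return name
    raise ValueError("no fresh constant available")

# The branch containing '(Ex)Px' and 'Pa' already uses 'a',
# so (ED) must instantiate with a new name such as 'b'.
print(fresh_constant(["(Ex)Px", "Pa"]))  # b
```

Here ‘E’ stands in for ‘∃’ in plain ASCII; the point is only that the constant returned never duplicates one already on the branch.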
In formulating a set of strategic rules for predicate truth trees, all of the previous
strategic rules are imported, and additional strategic rules are added specifically for
decomposing quantified expressions.
Of new interest are rules (2) and (5). Rule (2) gives priority to (¬∃D), (¬∀D), and
(∃D) over any use of (∀D). Rule (5) is present to avoid overly complex truth trees.
Consider the following example:
1 (∀x)(∀y)¬Pxy P
2 (∃y)Pay P
3 Pab 2∃D
4 (∀y)¬Pay 1∀D
5 ¬Pab 4∀D
X
The above truth tree closes. This is shown by first using (∃D) at line 3, and then
two instances of (∀D). Notice that the substitution instances for ‘(∀x)(∀y)¬Pxy’ are
‘P(a/x)’ and ‘P(b/y).’ This follows strategic rule (5) whereby constants are chosen
based on whether they already occur in the branch.
Now consider what would happen if we ignored the strategic rules and first used
(∀D) and then (∃D):
1 (∀x)(∀y)¬Pxy P
2 (∃y)Pay P
3 (∀y)¬Pmy 1∀D
4 ¬Pmb 3∀D
5 Pac 2∃D
6 (∀y)¬Pay 1∀D
7 ¬Pac 6∀D
X
In the above example, lines 3 and 4 turn out to be unhelpful. Since our use of
(∃D) at line 5 has the restriction that the substitution instance cannot already occur
previously in the branch, we cannot substitute ‘b’ for y. In the above example, when
using (∃D) at line 5, the substitution form is ‘P(c/y).’ Since a universally quantified
proposition never fully decomposes, we must decompose line 1 again, and this time
our choice of substitutions is guided by ‘Pac,’ which was obtained by (∃D) at line 5.
Exercise Set #1
A. Construct a predicate truth tree for the following sets of propositions. We have
not formulated all of the necessary definitions to determine whether the tree has a
completed open branch, so focus on trying to use the rules correctly.
1. * (∃x)Px, (∀x)¬Px
2. ¬(∀x)(Px), Pb
3. * (∃x)(Px∧Qx), (∀x)Px→(∀x)Qx
4. ¬(∀x)Px, ¬(∀y)(Py∧Gy), (∀z)(Pz∧¬Gz)
5. * ¬(∀x)(Px∧Qx), (∃y)(Py∧Qy)
6. ¬(∀x)¬Fx∧¬(∀x)Fx
7. * (∃x)(∀y)Pxy, (∀x)¬Pxx
8. (∃x)(∃y)Pxy∧(∃z)Pzz, (∀x)(∀y)Pxy
9. (∀x)(∀y)Pxyx↔(∀x)(∀y)Pyxy
10. ¬[(∃x)Px↔¬(∀x)¬Px]
Solutions to Starred Exercises in Exercise Set #1
1. * (∃x)Px, (∀x)¬Px
1 (∃x)Px P
2 (∀x)¬Px P
3 Pa 1∃D
4 ¬Pa 2∀D
X
It is important to see that we made use of (∃D) before (∀D) here. If we had
used (∀D) first, our subsequent use of (∃D) would have had to be an object
constant that was foreign to the branch.
3. * (∃x)(Px∧Qx), (∀x)Px→(∀x)Qx
1 (∃x)(Px∧Qx) P
2 (∀x)Px→(∀x)Qx P
3 Pa∧Qa 1∃D
4 Pa 3∧D
5 Qa 3∧D
There are two tasks for this section: first, to redefine a completed open branch and
explain how analyzing trees in RL differs from analyzing trees in PL; second, to use
the truth-tree method as a procedure for determining whether a particular proposition,
set of propositions, or argument has some logical property (e.g., consistency, validity).
In the chapter on propositional trees, a completed open branch was defined as follows:
a branch where all the complex propositions in the branch are decomposed into atomic
propositions or their literal negations. For trees in predicate logic, a new definition is
required.
Completed open branch: A branch is a completed open branch if and only if (1) all
complex propositions that can be decomposed into atomic propositions or negated
atomic propositions are decomposed; (2) for all universally quantified propositions
‘(∀x)P’ occurring in the branch, there is a substitution instance ‘P(a/x)’ for each
constant that occurs in that branch; and (3) the branch is not a closed branch.
Closed tree: A tree is a closed tree if and only if all branches close.
Closed branch: A branch is a closed branch if and only if there is a proposition
and its literal negation (e.g., ‘P’ and ‘¬P’).
In order to get clearer on the definition of a completed open branch, a few examples
are considered below. First, consider the following tree:
1 (∀x)(Px→Qx) P
2 (∃x)(Px∧¬Qx) P
3 Pa→Qa 1∀D
4 Pb∧¬Qb 2∃D
5 Pb 4∧D
6 ¬Qb 4∧D
7 ¬Pa Qa 3→D
At first glance, it may appear that the tree does contain a completed open branch
because there are no closed branches, and every decomposable proposition has been
decomposed. However, take a closer look at clause (2) in the definition of a completed
open branch:
(2) For all universally quantified propositions ‘(∀x)P’ occurring in the branch, there
is a substitution instance ‘P(a/x)’ for each constant that occurs in that branch.
Notice that ‘b’ is an object constant occurring in the branch at lines 4 to 6, but there
is no substitution instance ‘P(b/x)’ for ‘(∀x)(Px→Qx)’ occurring in the branch con-
taining ‘b.’ Otherwise put, we haven’t decomposed ‘(∀x)(Px→Qx)’ using ‘P(b/x).’
Thus, the tree does not contain a completed open branch. Now consider what happens
when ‘(∀x)(Px→Qx)’ is decomposed using ‘b’ as a substitution instance.
1 (∀x)(Px→Qx) P
2 (∃x)(Px∧¬Qx) P
3 Pa→Qa 1∀D
4 Pb∧¬Qb 2∃D
5 Pb 4∧D
6 ¬Qb 4∧D
7 ¬Pa Qa 3→D
8 Pb→Qb Pb→Qb 1∀D
In decomposing ‘(∀x)(Px→Qx)’ using ‘P(b/x),’ the tree turns out to close.
Thus, it is important that clause (2) of the definition of a completed open branch be
attended to because, as the example above shows, ignoring it will lead us to treat what
is really a closed tree as an open tree.
Consider a tree with the following stack of propositions:
(∀x)(¬Px→¬Rx), (∀x)(Rx→Px)
1 (∀x)(¬Px→¬Rx) P
2 (∀x)(Rx→Px) P
3 ¬Pa→¬Ra 1∀D
4 Ra→Pa 2∀D
In the above example, there are two universally quantified expressions that are not
checked off, yet the above tree is completed since for all universally quantified propo-
sitions ‘(∀x)P’ occurring on the branch, there is a substitution instance ‘P(a/x)’ for
each constant that occurs on that branch. This constant is ‘a.’ The tree is completed,
and since its branches do not close, the above tree is a completed open tree.
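Clause (2) of the definition lends itself to a mechanical check. The following Python sketch is my own illustration, not the book’s: the pair representation of ‘(∀x)P’ and the function name are assumptions. It tests whether every universal on a branch has been instantiated for every constant on that branch:

```python
def universals_saturated(universals, branch_lines, constants):
    """Clause (2): for each universally quantified proposition (var, body)
    on the branch, the instance body[const/var] must occur on the branch
    for every constant that occurs on the branch."""
    return all(body.replace(var, c) in branch_lines
               for var, body in universals
               for c in constants)

# '(Ax)(Px->Qx)' with constants 'a' and 'b', but only the 'a' instance present:
print(universals_saturated([("x", "Px->Qx")], {"Pa->Qa"}, {"a", "b"}))  # False
# After adding 'Pb->Qb' the universal is saturated:
print(universals_saturated([("x", "Px->Qx")],
                           {"Pa->Qa", "Pb->Qb"}, {"a", "b"}))  # True
```

The first call mirrors the tree above, where the missing ‘P(b/x)’ instance is exactly what disqualifies the branch from being completed.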
One way to show that a proposition, set of propositions, or argument has one of
these properties is to construct an interpretation in a model. For example, to show
that ‘{(∀x)Px, (∃x)Rx}’ is consistent in RL involves showing that there is at least one
interpretation in a model where v(∀x)Px = T and v(∃x)Rx = T. Here is an example of
such a model:
D = positive integers
P = {x | x is greater than 0}
R = {x | x is even}
On this interpretation, v(∀x)Px = T since every positive integer is greater than zero.
In addition, v(∃x)Rx = T since there is at least one positive integer that is even. For
example, four is a positive integer that is even. Since v(∀x)Px = T and v(∃x)Rx = T in
the interpretation above, there is at least one interpretation in the model such that all
of the propositions from the set are true.
Similar procedures can be formulated for each of the above properties. The focus
of this section, however, is to develop a clearer understanding of how these properties
can be determined using truth trees. In PL, a completed open branch tells us that there
is a valuation (truth-value assignment) that would make every proposition in the stack
true. Similarly, in RL, a completed open branch tells us that there is an interpretation
in a model for which every proposition in the stack is true. Thus, the presence of a
completed open branch tells us that we can construct a model such that every proposi-
tion in the stack is true.
To illustrate, consider a very simple tree consisting of ‘(∃x)Px’ and ‘Pa’:
1 (∃x)Px P
2 Pa P
3 Pb 1∃D
0
The above tree has a completed open branch, and so there is an interpretation for
which all of the propositions in the branch are true. If we wanted, we could construct
an interpretation in a model that would show ‘(∃x)Px,’‘Pa,’ and ‘Pb’ as being con-
sistent. To do this, we would stipulate a domain of discourse involving two objects,
letting ‘a’ stand for an object and ‘b’ stand for an object, and assign the one-place
predicate ‘P’ an extension.
D: {John, Fred}
Px: x is a person {John, Fred}
a: John
b: Fred
In this interpretation of the model, ‘Pa,’ and ‘Pb’ are true, and thus ‘(∃x)Px’ is also
true. ‘(∃x)Px’ is true because there is at least one object in the domain that is a person,
while ‘Pa’ is true because ‘a’ refers to John, and John is in the extension of persons;
likewise ‘Pb’ is true because ‘b’ refers to Fred, and Fred is in the extension of persons.
Thus, using the tree, we can read off the propositions in the completed open branches
and then give an interpretation in a model that shows the propositions in that set are
true. And if this is the case, then truth trees offer us a method for determining certain
properties of propositions, sets of propositions, and arguments. For instance, the above
tree has a completed open branch, which shows that the stack of propositions is true
under at least one interpretation and so is consistent.
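The interpretation just given can also be checked mechanically. Here is a small Python sketch (an illustration of the semantics, not part of the text) that evaluates ‘Pa,’ ‘Pb,’ and ‘(∃x)Px’ in the model:

```python
# The model read off the completed open branch.
D = {"John", "Fred"}              # domain of discourse
P = {"John", "Fred"}              # extension of 'P' (x is a person)
ref = {"a": "John", "b": "Fred"}  # reference of the names

v_Pa = ref["a"] in P              # v(Pa) = T: John is in the extension of P
v_Pb = ref["b"] in P              # v(Pb) = T: Fred is in the extension of P
v_ExPx = any(x in P for x in D)   # v((Ex)Px) = T: some object in D is in P
print(v_Pa, v_Pb, v_ExPx)  # True True True
```

Since all three valuations come out true, the model confirms what the completed open branch already indicated: the set is consistent.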
Consider a slightly more complicated example:
1 (∀x)Px P
2 (∃x)Px∨¬(∃x)Px P
3 (∃x)Px ¬(∃x)Px 2∨D
4 Pa 3∃D
5 Pa 1∀D
0
6 (∀x)¬Px 3¬∃D
7 ¬Pa 6∀D
8 Pa 1∀D
X
In the above tree, there is a completed open branch and a closed branch. The closed
branch on the right-hand side indicates that there is no interpretation for which ‘¬(∃x)
Px’ and ‘(∀x)Px’ are true. However, the completed open branch (on the left-hand
side) indicates that there is an interpretation for which all of the propositions in the
branch are true. As such, we can construct an interpretation in a model such that the
propositions in the branch are true.
D: {1}
Px: x is a number
In this model, ‘(∀x)Px’ is true since every number in the domain is a number, and
‘(∃x)Px’ is true since there is a number in the domain that is a number.
Finally, let’s consider a tree involving a slightly more complicated proposition:
1 (∃x)(∃y)[(Ox∧Ey)∧Gxy] P
2 (∃y)[(Oa∧Ey)∧Gay] 1∃D
3 (Oa∧Eb)∧Gab 2∃D
4 Oa∧Eb 3∧D
5 Gab 3∧D
6 Oa 4∧D
7 Eb 4∧D
0
The above tree has a completed open branch, and so there is an interpreta-
tion for which the propositions ‘Gab,’ ‘Oa,’ and ‘Eb’ are true, and thus ‘(∃x)(∃y)
[(Ox∧Ey)∧Gxy]’ is true. Again, a model can be constructed to reflect this fact:
D: {1, 2, 3}
Ox: x is an odd number
Ex: x is an even number
Gxy: x is greater than y
a: 3
b: 2
‘Gab’ is true since it is true that 3 is greater than 2. ‘Oa’ is true since three is an
odd number. Lastly, ‘Eb’ is true since 2 is an even number. Thus, the predicate wff
‘(∃x)(∃y)[(Ox∧Ey)∧Gxy],’ which says that there exists an odd number greater than
some existent even number, is also true. In short, the truth tree, along with the model,
demonstrates that the set ‘{(∃x)(∃y)[(Ox∧Ey)∧Gxy]}’ is not a contradiction in RL.
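The same model can be checked by brute force over the three-element domain. The following Python sketch is an illustration; encoding the extensions as predicates on integers is my assumption. It verifies ‘Gab,’ ‘Oa,’ ‘Eb,’ and the doubly existential claim:

```python
D = {1, 2, 3}                # domain
odd = lambda x: x % 2 == 1   # extension of 'O' (x is an odd number)
even = lambda x: x % 2 == 0  # extension of 'E' (x is an even number)
gt = lambda x, y: x > y      # extension of 'G' (x is greater than y)
a, b = 3, 2                  # reference of 'a' and 'b'

atoms = gt(a, b) and odd(a) and even(b)   # v(Gab), v(Oa), v(Eb)
# v((Ex)(Ey)[(Ox & Ey) & Gxy]): search the whole domain for witnesses.
existential = any(odd(x) and even(y) and gt(x, y) for x in D for y in D)
print(atoms, existential)  # True True
```

The search finds the witnesses x = 3, y = 2, which are precisely the objects the tree’s constants ‘a’ and ‘b’ pick out.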
It is important to note that the tree method can be analyzed semantically such that
a completed open branch indicates that there is at least one interpretation that makes
the propositions in the stack being decomposed true. For the remainder of this chapter,
however, we avoid the discussion and construction of models for predicate truth trees
and focus on how the truth-tree method can be used to determine whether a proposi-
tion, set of propositions, or argument has a particular logical property.
Below, we provide four examples of consistent and inconsistent trees. First, consider
(∀x)(Px→Rx), ¬(∀x)(¬Rx→¬Px)
1 (∀x)(Px→Rx) P
2 ¬(∀x)(¬Rx→¬Px) P
3 (∃x)¬(¬Rx→¬Px) 2¬∀D
4 ¬(¬Ra→¬Pa) 3∃D
5 Pa→Ra 1∀D
6 ¬Ra 4¬→D
7 ¬¬Pa 4¬→D
8 Pa 7¬¬D
9 ¬Pa Ra 5→D
X X
The above tree is a closed tree and shows that ‘{(∀x)(Px→Rx), ¬(∀x)(¬Rx→¬Px)}’
is inconsistent. Next, consider
1 (∀x)(Px)→(∀y)(Ry) P
2 ¬(∀y)Ry P
3 (∃x)¬Px P
4 (∃y)¬Ry 2¬∀D
5 ¬Ra 4∃D
6 ¬Pb 3∃D
7 ¬(∀x)Px (∀y)Ry 1→D
8 (∃x)¬Px 7¬∀D
9 ¬Pc 8∃D
0
10 Ra 7∀D
11 Rb 7∀D
X
The above tree has a completed open branch and so shows that the stack composing
the tree is consistent. Notice that in lines 10 and 11, two uses of (∀D) are required
to complete the tree since an ‘a’ and ‘b’ are found as object constants in the branch.
Consider a third tree involving the following propositions:
1 ¬¬(∀x)(Px)∨(∀y)(Ry) P
2 ¬(∀y)(Ry) P
3 (Ra∧Rb)∧Pa P
4 Ra∧Rb 3∧D
5 Pa 3∧D
6 Ra 4∧D
7 Rb 4∧D
8 (∃y)¬(Ry) 2¬∀D
9 ¬Rc 8∃D
10 ¬¬(∀x)(Px) (∀y)(Ry) 1∨D
11 (∀x)(Px) 10¬¬D
12 Pa 11∀D
13 Pb 11∀D
14 Pc 11∀D
0
15 Ra 10∀D
16 Rb 10∀D
17 Rc 10∀D
X
The above tree has a completed open branch, which shows that the stack compos-
ing the tree is consistent. Notice again that lines 12 to 17 required multiple uses of
(∀D) since each universally quantified proposition occurring on the branch requires a
substitution instance ‘P(a/x)’ for each constant that occurs on that branch.
Consider one final example involving the following set of propositions:
1 ¬(∀x)(∃y)(Pxy)∧(∀y)¬(∃x)(Rxy) P
2 ¬(∀y)(∀x)(Rxy) P
3 (Rab∧Rba)∧Pab P
4 Rab∧Rba 3∧D
5 Pab 3∧D
6 Rab 4∧D
7 Rba 4∧D
8 ¬(∀x)(∃y)(Pxy) 1∧D
9 (∀y)¬(∃x)(Rxy) 1∧D
10 (∃x)¬(∃y)(Pxy) 8¬∀D
11 (∃y)¬(∀x)(Rxy) 2¬∀D
12 ¬(∃y)(Pcy) 10∃D
13 (∀y)¬(Pcy) 12¬∃D
14 ¬Pca 13∀D
15 ¬Pcb 13∀D
16 ¬Pcc 13∀D
17 ¬(∀x)(Rxe) 11∃D
18 (∃x)¬(Rxe) 17¬∀D
19 ¬Rfe 18∃D
20 ¬(∃x)(Rxa) 9∀D
21 ¬(∃x)(Rxb) 9∀D
22 ¬(∃x)(Rxe) 9∀D
23 ¬(∃x)(Rxf) 9∀D
24 (∀x)¬(Rxa) 20¬∃D
25 (∀x)¬(Rxb) 21¬∃D
26 (∀x)¬(Rxe) 22¬∃D
27 (∀x)¬(Rxf) 23¬∃D
28 ¬Rab 25∀D
X
Yikes! The initial set of propositions is inconsistent because the tree is closed. In the
above tree, it is important to look for a proposition and its literal negation as soon as
possible. Rather than starting by decomposing line 24 with multiple uses of (∀D), you
can decompose line 25 into line 28 using one instance of (∀D) that substitutes ‘a’ for x.
The reason for this is to generate the contradiction with ‘Rab’ at line 6 and close the tree.
Exercise Set #2
A. Using the truth-tree method, test the following sets of propositions for logical
consistency and inconsistency.
1. * (∃x)(Px→Qx), (∃x)Px
2. (∃x)(Px→Rx), ¬Pa, ¬Pb
3. * (∀x)Px∨(∃y)Qy, (∃x)(Px∧Qa)
4. (∃x)(Px∨Gx), ¬(∀x)(Px→¬Gx)
5. * (∀x)(Px→Mx), (∃x)Px, ¬(∃x)Mx
6. (∃x)(∀y)(Px→Gy), (∃x)(¬Gx→¬Px)
7. * (∀x)(Px↔Wx), (∃x)[Px∧(∃y)(¬Py∧Wy)]
8. (∀x)(Px→Qx), (∀x)(Px∨Qx)
9. * (∀x)(Px∧Mx), Pa, Pb, (∃x)Rx
10. (∀x)(∀y)(Px→Py), ¬Pb, ¬Pa
11. * (∀x)(Px→Rx), (∃x)(Mx∧¬Rx), (∃x)¬(Px→Rx)
12. ¬(∀x)¬(∀y)(∀z)[Px→(My→Tz)]
Solutions to Starred Exercises in Exercise Set #2
A.
1. * (∃x)(Px→Qx), (∃x)Px; consistent.
1 (∃x)(Px→Qx) P
2 (∃x)Px P
3 Pa 2∃D
4 Pb→Qb 1∃D
5 ¬Pb Qb 4→D
0 0
5. * (∀x)(Px→Mx), (∃x)Px, ¬(∃x)Mx; inconsistent.
1 (∀x)(Px→Mx) P
2 (∃x)Px P
3 ¬(∃x)Mx P
4 Pa 2∃D
5 (∀x)¬Mx 3¬∃D
6 ¬Ma 5∀D
7 Pa→Ma 1∀D
8 ¬Pa Ma 7→D
X X
7. * (∀x)(Px↔Wx), (∃x)[Px∧(∃y)(¬Py∧Wy)]; inconsistent.
1 (∀x)(Px↔Wx) P
2 (∃x)[Px∧(∃y)(¬Py∧Wy)] P
3 Pa∧(∃y)(¬Py∧Wy) 2∃D
4 Pa 3∧D
5 (∃y)(¬Py∧Wy) 3∧D
6 ¬Pb∧Wb 5∃D
7 ¬Pb 6∧D
8 Wb 6∧D
9 Pa↔Wa 1∀D
10 Pb↔Wb 1∀D
11 Pb ¬Pb 10↔D
12 Wb ¬Wb 10↔D
X X
9. * (∀x)(Px∧Mx), Pa, Pb, (∃x)Rx; consistent.
1 (∀x)(Px∧Mx) P
2 Pa P
3 Pb P
4 (∃x)Rx P
5 Rc 4∃D
6 Pa∧Ma 1∀D
7 Pb∧Mb 1∀D
8 Pc∧Mc 1∀D
9 Pa 6∧D
10 Ma 6∧D
11 Pb 7∧D
12 Mb 7∧D
13 Pc 8∧D
14 Mc 8∧D
0
11. * (∀x)(Px→Rx), (∃x)(Mx∧¬Rx), (∃x)¬(Px→Rx); inconsistent.
1 (∀x)(Px→Rx) P
2 (∃x)(Mx∧¬Rx) P
3 (∃x)¬(Px→Rx) P
4 Ma∧¬Ra 2∃D
5 ¬(Pb→Rb) 3∃D
6 Ma 4∧D
7 ¬Ra 4∧D
8 Pb 5¬→D
9 ¬Rb 5¬→D
10 Pa→Ra 1∀D
11 Pb→Rb 1∀D
12 ¬Pb Rb 11→D
X
13 ¬Pa Ra 10→D
X X
(∃x)¬(∀y)[Px→(Qx∨¬Ry)]
1 (∃x)¬(∀y)[Px→(Qx∨¬Ry)] P
2 ¬(∀y)[Pa→(Qa∨¬Ry)] 1∃D
3 (∃y)¬[Pa→(Qa∨¬Ry)] 2¬∀D
4 ¬[Pa→(Qa∨¬Rb)] 3∃D
5 Pa 4¬→D
6 ¬(Qa∨¬Rb) 4¬→D
1 ¬(∃x)¬(∀y)[Px→(Qx∨¬Ry)] P
2 (∀x)¬¬(∀y)[Px→(Qx∨¬Ry)] 1¬∃D
3 ¬¬(∀y)[Pa→(Qa∨¬Ry)] 2∀D
4 (∀y)[Pa→(Qa∨¬Ry)] 3¬¬D
5 Pa→(Qa∨¬Ra) 4∀D
6 ¬Pa Qa∨¬Ra 5→D
7 Qa ¬Ra 6∨D
0 0
The above tree shows that the proposition is not a tautology because not all of the
branches for ‘¬P’ close. That is, there is at least one open and completed branch. Since
it is not the case that the trees for both ‘P’ and ‘¬P’ close, the proposition ‘P’ is neither
a contradiction nor a tautology. And if ‘P’ is neither a contradiction nor a tautology, it
is a contingency.
Consider another proposition:
(∀x)(Px→Qx)∧(∃x)(Px∧¬Qx)
1 (∀x)(Px→Qx)∧(∃x)(Px∧¬Qx) P
2 (∀x)(Px→Qx) 1∧D
3 (∃x)(Px∧¬Qx) 1∧D
4 Pa∧¬Qa 3∃D
5 Pa 4∧D
6 ¬Qa 4∧D
7 Pa→Qa 2∀D
8 ¬Pa Qa 7→D
9 X X
Since all of the branches for ‘(∀x)(Px→Qx)∧(∃x)(Px∧¬Qx)’ close, the proposition is a contradiction. Finally, consider whether the following proposition is a tautology:
(∀x)(Px→Px)∧(∀y)(Qy∨¬Qy)
1 ¬[(∀x)(Px→Px)∧(∀y)(Qy∨¬Qy)] P
If all the branches for ‘¬P’ close, then ‘P’ is a tautology. All of the branches for
‘¬[(∀x)(Px→Px)∧(∀y)(Qy∨¬Qy)]’ close, thus ‘(∀x)(Px→Px)∧(∀y)(Qy∨¬Qy)’ is a
tautology.
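The verdict of the closed tree can be supported semantically by enumerating every interpretation over a small finite domain. The Python sketch below is my own illustration, and restricting attention to a two-element domain is an assumption (a finite check supports, but does not by itself prove, tautology status over all domains):

```python
from itertools import chain, combinations

D = [0, 1]

def extensions(dom):
    """Every possible extension of a one-place predicate over the domain."""
    return [set(s) for s in
            chain.from_iterable(combinations(dom, r) for r in range(len(dom) + 1))]

def value(P, Q):
    # v((Ax)(Px->Px) & (Ay)(Qy v ~Qy)) on this interpretation
    left = all((x not in P) or (x in P) for x in D)
    right = all((y in Q) or (y not in Q) for y in D)
    return left and right

vals = {value(P, Q) for P in extensions(D) for Q in extensions(D)}
print(vals == {True})  # True: true on every interpretation over D
```

Every choice of extensions for ‘P’ and ‘Q’ makes the proposition true, which is what the closed tree for its negation predicts.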
Exercise Set #3
A. Using the truth-tree method, test the following propositions to determine whether
each is a contradiction, tautology, or contingency.
1. * (∃x)Px∨¬(∃x)Px
2. (∃x)Px∨(∃x)¬Px
3. * (∀x)(Px→Gx)
4. (∀x)(Px∨¬Px)
5. * (∀x)(Px∧¬Mx)∨(∃x)(¬Px∨Mx)
6. (∃x)(Fx∧Px)∨(∀y)(Py→Fy)
7. * (∀x)(∀y)Pxy∧(∃x)(∃y)¬Pxy
8. (∃y)(∀x)(Pxy∧¬Pyx)
9. * (∀x)Pxx→Paa
Solutions to Starred Exercises in Exercise Set #3
A.
1. * (∃x)Px∨¬(∃x)Px; tautology.
1 ¬[(∃x)Px∨¬(∃x)Px] P
2 ¬(∃x)Px 1¬∨D
3 ¬¬(∃x)Px 1¬∨D
4 (∃x)Px 3¬¬D
X
3. * (∀x)(Px→Gx); first tree, not a contradiction.
1 (∀x)(Px→Gx) P
2 Pa→Ga 1∀D
3 ¬Pa Ga 2→D
0 0
(∀x)(Px→Gx); second tree, not a tautology. Since it is neither a tautology
nor a contradiction, ‘(∀x)(Px→Gx)’ is a contingency.
1 ¬(∀x)(Px→Gx) P
2 (∃x)¬(Px→Gx) 1¬∀D
3 ¬(Pa→Ga) 2∃D
4 Pa 3¬→D
5 ¬Ga 3¬→D
0
5. * (∀x)(Px∧¬Mx)∨(∃x)(¬Px∨Mx); first tree, not a contradiction.
1 (∀x)(Px∧¬Mx)∨(∃x)(¬Px∨Mx) P
2 (∀x)(Px∧¬Mx) (∃x)(¬Px∨Mx) 1∨D
3 ¬Pa∨Ma 2∃D
4 ¬Pa Ma 3∨D
0 0
5 Pa∧¬Ma 2∀D
6 Pa 5∧D
7 ¬Ma 5∧D
0
(∀x)(Px∧¬Mx)∨(∃x)(¬Px∨Mx); second tree, a tautology. It is a tautology
because all branches close for ‘¬[(∀x)(Px∧¬Mx)∨(∃x)(¬Px∨Mx)].’
1 ¬[(∀x)(Px∧¬Mx)∨(∃x)(¬Px∨Mx)] P
2 ¬(∀x)(Px∧¬Mx) 1¬∨D
3 ¬(∃x)(¬Px∨Mx) 1¬∨D
4 (∃x)¬(Px∧¬Mx) 2¬∀D
5 (∀x)¬(¬Px∨Mx) 3¬∃D
6 ¬(Pa∧¬Ma) 4∃D
7 ¬(¬Pa∨Ma) 5∀D
8 ¬¬Pa 7¬∨D
9 ¬Ma 7¬∨D
10 Pa 8¬¬D
11 ¬Pa ¬¬Ma 6¬∧D
X
12 Ma 11¬¬D
X
9. * (∀x)Pxx→Paa; tautology.
1 ¬[(∀x)Pxx→Paa] P
2 (∀x)Pxx 1¬→D
3 ¬Paa 1¬→D
4 Paa 2∀D
X
Equivalence: A pair of propositions ‘P’ and ‘Q’ are shown by the truth-tree method
to be equivalent if and only if the tree of the stack of ‘¬(P↔Q)’ determines a closed
tree; that is, all branches for ‘¬(P↔Q)’ close.
First, consider the following two propositions: ‘(∀x)Px’ and ‘¬(∃x)Px.’ In order to
test whether two propositions ‘P’ and ‘Q’ are logically equivalent, put them in negated
biconditional form, ‘¬(P↔Q),’ and use a truth tree to determine whether all branches
close. If the tree closes, then ‘(∀x)Px’ and ‘¬(∃x)Px’ are equivalent. If there is a com-
pleted open branch, then ‘(∀x)Px’ and ‘¬(∃x)Px’ are not equivalent.
1 ¬[(∀x)Px↔¬(∃x)Px] P
The above tree has a completed open branch, and so the truth-tree method shows
that ‘(∀x)Px’ and ‘¬(∃x)Px’ are not equivalent.
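The non-equivalence can also be confirmed semantically by enumerating every extension of ‘P’ over a small domain. A Python sketch, my own illustration under the assumption that a two-element domain suffices to exhibit the difference:

```python
from itertools import chain, combinations

D = [0, 1]

def extensions(dom):
    """Every possible extension of a one-place predicate over the domain."""
    return chain.from_iterable(combinations(dom, r) for r in range(len(dom) + 1))

equivalent = True
for ext in extensions(D):
    P = set(ext)
    v_all = all(x in P for x in D)       # v((Ax)Px)
    v_none = not any(x in P for x in D)  # v(~(Ex)Px)
    if v_all != v_none:                  # the two differ on this interpretation
        equivalent = False
print(equivalent)  # False
```

For instance, when ‘P’ is given the empty extension, ‘(∀x)Px’ is false while ‘¬(∃x)Px’ is true, so the two propositions come apart, agreeing with the open branch in the tree.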
Here is a more complicated example. Test the following two propositions for logi-
cal equivalence: ‘(∀x)¬(Px∨Gx)’ and ‘(∀y)(¬Py∧¬Gy).’
1 ¬{[(∀x)¬(Px∨Gx)]↔[(∀y)(¬Py∧¬Gy)]} P
13 Pa Ga 12∨D
14 ¬Pa∧¬Ga ¬Pa∧¬Ga 3∀D
15 ¬Pa ¬Ga 14∧D
X X
In the above tree, we see that all branches of the negated biconditional close. Thus,
‘(∀x)¬(Px∨Gx)’ and ‘(∀y)(¬Py∧¬Gy)’ are logically equivalent.
Exercise Set #4
A. Using the truth-tree method, test the following sets of propositions for logical
equivalence.
1. * (∀x)¬Px, ¬(∃x)Px
2. ¬¬(∀x)Px, ¬(∃x)¬Px
3. * (∀y)Pyy, ¬(∃x)¬Pxx
4. (∀y)Pyy∧(∀z)Pzz, ¬(∃x)¬Pxx
5. * (∀x)(Px→Qx), ¬(∃x)(Px∧¬Qx)
6. (∃x)(Px∧¬Qx), ¬(∀x)(¬Px∨Qx)
7. * (∃x)(∀y)(Px→Gy), (∃x)¬(∃y)¬(Px→Gy)
8. (∃x)Px∧(∃y)Gy, (∀x)Px∧(∀y)Py
9. (∀x)Mxx, (∃x)Mxx
10. (∀x)(∀y)Pxy, ¬(∃x)(∃y)¬Pxy
Solutions to Starred Exercises in Exercise Set #4
A.
1. * (∀x)Px, ¬(∃x)Px; not equivalent.
1 ¬[(∀x)Px↔¬(∃x)Px] P
5. * (∀x)(Px→Qx), ¬(∃x)(Px∧¬Qx); equivalent.
1 ¬[(∀x)(Px→Qx)↔¬(∃x)(Px∧¬Qx)] P
2 (∀x)(Px→Qx) ¬(∀x)(Px→Qx) 1¬↔D
3 ¬¬(∃x)(Px∧¬Qx) ¬(∃x)(Px∧¬Qx) 1¬↔D
4 (∃x)(Px∧¬Qx) 3¬¬D
5 Pa∧¬Qa 4∃D
6 Pa 5∧D
7 ¬Qa 5∧D
8 Pa→Qa 2∀D
9 (∃x)¬(Px→Qx) 2¬∀D
10 (∀x)¬(Px∧¬Qx) 3¬∃D
11 ¬(Pa→Qa) 9∃D
12 Pa 11¬→D
13 ¬Qa 11¬→D
14 ¬(Pa∧¬Qa) 10∀D
7.3.5 Validity
In this section, the truth-tree method is used to determine whether an argument ‘P, Q,
R, . . ., Y ⊢ Z’ is valid in RL.
First, we consider a very simple argument: ‘(∀x)Px ⊢ Pa.’ Remember that setting
up the tree to test for validity requires listing the premises and the literal negation
of the conclusion in the stack.
1 (∀x)Px P
2 ¬Pa P
3 Pa 1∀D
X
The above tree is closed. This shows that under no interpretation is it the case that
all of the premises ‘P,’ ‘Q,’ ‘R,’ . . ., ‘Y’ and the negation of the conclusion ‘¬Z’ are
jointly true. In other words, under no interpretation is it the case that the premises are
true and the conclusion is false. Thus, the above tree shows that the argument
‘(∀x)Px ⊢ Pa’ is valid.
Moving on to a more complicated example, consider the following argument:
(∀x)(Px→Qx), (∃y)(Py) ⊢ (∃x)(Px→Qx)
In order to test this argument for validity, it is necessary to stack the premises and
the negation of the conclusion. That is, if the argument is valid, then the following set
of propositions should yield a closed tree:
1 (∀x)(Px→Qx) P
2 (∃y)(Py) P
3 ¬(∃x)(Px→Qx) P
4 Pa 2∃D
5 (∀x)¬(Px→Qx) 3¬∃D
6 Pa→Qa 1∀D
7 ¬(Pa→Qa) 5∀D
8 Pa 7¬→D
9 ¬Qa 7¬→D
10 ¬Pa Qa 6→D
X X
In the above tree, each of the branches is closed, so the tree is closed. Since the tree
closes, the argument is valid.
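The verdict of the closed tree can be double-checked semantically: over a small finite domain, no assignment of extensions to ‘P’ and ‘Q’ makes both premises true and the conclusion false. A Python sketch, my own illustration; restricting attention to a two-element domain is an assumption, since validity concerns all domains:

```python
from itertools import product

D = [0, 1]
subsets = [set(), {0}, {1}, {0, 1}]   # all extensions of a predicate over D

valid = True
for P, Q in product(subsets, repeat=2):
    prem1 = all((x not in P) or (x in Q) for x in D)   # (Ax)(Px->Qx)
    prem2 = any(y in P for y in D)                     # (Ey)Py
    concl = any((x not in P) or (x in Q) for x in D)   # (Ex)(Px->Qx)
    if prem1 and prem2 and not concl:                  # counterexample found?
        valid = False
print(valid)  # True
```

No pair of extensions yields true premises with a false conclusion, which is exactly what the closed tree establishes (and establishes for every domain, not just this one).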
Consider a final example:
Pa∧Qb, (∀x)(∀y)[Px→(Qy→Rx)] ⊢ Ra
Again, to determine whether or not this argument is valid, we test to see whether
the following set of propositions is consistent:
1 Pa∧Qb P
2 (∀x)(∀y)[Px→(Qy→Rx)] P
3 ¬Ra P
4 Pa 1∧D
5 Qb 1∧D
6 (∀y)[Pa→(Qy→Ra)] 2∀D
7 (∀y)[Pb→(Qy→Rb)] 2∀D
8 Pa→(Qa→Ra) 6∀D
9 Pa→(Qb→Ra) 6∀D
10 Pb→(Qa→Rb) 7∀D
11 Pb→(Qb→Rb) 7∀D
12 ¬Pa Qb→Ra 9→D
X
13 ¬Qb Ra 12→D
X X
In the above tree, all the branches close; therefore the tree is closed. Thus, ‘{Pa∧Qb,
(∀x)(∀y)[Px→(Qy→Rx)], ¬Ra}’ is inconsistent, and it is impossible for the premises
to be true and the conclusion to be false. Therefore, the argument is deductively valid.
Exercise Set #5
A. Using the truth-tree method, test the following arguments for validity.
1. * (∀x)(Px→Gx), Pa ⊢ Ga
2. (∀x)(Px→Gx), Ga ⊢ Pa
3. * (∀x)(∀y)(Pxy→Gxy), Pab ⊢ Gab
4. (∀x)(∀y)(Pyx→Gyx), Pab ⊢ (∃x)(∃y)Gyx
5. * Pa, Pb, Pc, (∀x)(Px→Gx) ⊢ ¬(∃y)Gy
Solutions to Starred Exercises in Exercise Set #5
A.
1. * (∀x)(Px→Gx), Pa ⊢ Ga; valid.
1 (∀x)(Px→Gx) P
2 Pa P
3 ¬Ga P
4 Pa→Ga 1∀D
5 ¬Pa Ga 4→D
X X
3. * (∀x)(∀y)(Pxy→Gxy), Pab ⊢ Gab; valid.
1 (∀x)(∀y)(Pxy→Gxy) P
2 Pab P
3 ¬Gab P
4 (∀y)(Pay→Gay) 1∀D
5 (∀y)(Pby→Gby) 1∀D
6 Pab→Gab 4∀D
7 ¬Pab Gab 6→D
X X
5. * Pa, Pb, Pc, (∀x)(Px→Gx) ⊢ ¬(∃y)Gy; invalid.
1 Pa P
2 Pb P
3 Pc P
4 (∀x)(Px→Gx) P
5 ¬¬(∃y)Gy P
6 (∃y)Gy 5¬¬D
7 Gd 6∃D
8 Pa→Ga 4∀D
9 Pb→Gb 4∀D
10 Pc→Gc 4∀D
11 Pd→Gd 4∀D
12 ¬Pa Ga 8→D
X
13 ¬Pb Gb 9→D
X
14 ¬Pc Gc 10→D
X
15 ¬Pd Gd 11→D
0 0
It is important to recognize that lines 8 to 11 are all necessary for the determination
of a completed open branch at line 15. They are necessary because in order for there
to be a completed open branch, for each universally quantified proposition ‘(∀x)P’
occurring on a branch, there must be a substitution instance ‘P(a/x)’ for each constant
already occurring on that branch. So, since object constants ‘a,’ ‘b,’ ‘c,’ and ‘d’ occur
on lines 1, 2, 3, and 7, respectively, we need to make use of each of the following
substitution instances for ‘(∀x)(Px→Gx),’ which occurs on line 4: ‘P(a/x),’ ‘P(b/x),’
‘P(c/x),’ ‘P(d/x).’
Unlike PL, RL is undecidable. That is, there is no mechanical procedure that can
always, in a finite number of steps, deliver a yes or no answer to questions about
whether a given proposition, set of propositions, or argument has a property like con-
sistency, tautology, validity, and the like. For some trees, the application of predicate
decomposition rules will result in a process of decomposition that does not, in a finite
number of steps, yield a closed tree or a completed open branch.
For example, consider the following tree for ‘(∀x)(∃y)(Pxy)’:
1 (∀x)(∃y)(Pxy) P
2 (∃y)Pay 1∀D
3 Pab 2∃D
4 (∃y)Pby 1∀D
5 Pbc 4∃D
6 (∃y)Pcy 1∀D
7 Pcd 6∃D
.
.
.
Notice that in the above tree, the decomposition procedure will continue indefi-
nitely since every time an (∃D) is used, another use of (∀D) will be required, followed
by another (∃D), followed by another (∀D), and so on. This indefinite process of de-
composition thus does not yield a completed open branch and so the truth-tree method
does not show that ‘(∀x)(∃y)(Pxy)’ is consistent. In order to avert this problem for
predicate formulas that have finite models (i.e., interpretations of domains that are
finite), there is a way to revise (∃D) to show that ‘(∀x)(∃y)(Pxy)’ yields a completed
open branch:¹
(∃x)P
1 (∀x)(∃y)Pxy P
2 (∃y)Pay 1∀D
Notice that after (∀D) is applied to line 1, we have ‘(∃y)Pay.’ When (N∃D) is ap-
plied at line 3, ‘P(a/y)’ is used as a substitution instance on the left-hand side, and
‘P(b/y)’ is used as a substitution instance on the right-hand side. Because of this, we
can construct a finite model such that ‘(∀x)(∃y)Pxy’ is true. However, notice that the
right-hand side does not yield a closed branch or a completed open branch. Since the
right-hand side involves an object constant ‘b’ and a universally quantified proposi-
tion, another round of (∀D) and (N∃D) will need to be undertaken.
1 (∀x)(∃y)Pxy P
2 (∃y)Pay 1∀D
While another round of (∀D) and (N∃D) yields additional completed open
branches, which will allow for the construction of finite models, the right-hand branch
does not close. This indicates that the revision of (∃D) as (N∃D) will ensure that the
truth-tree method can determine whether a set of propositions has a logical property
provided that set has a finite model. In cases where a set of propositions has only an
infinite model, the truth-tree method will neither yield a completed open branch nor
a closed tree.
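The runaway decomposition of ‘(∀x)(∃y)(Pxy)’ under the original rules can be simulated directly. A Python sketch (an illustration; the choice of successive single-letter constants is an assumption) reproduces the a, b, c, d, . . . chain from the tree above:

```python
def simulate(steps):
    """Each use of (ED) introduces a fresh constant, which in turn demands
    a further (AD) instance of (Ax)(Ey)Pxy, so the branch never completes."""
    constants = ["a"]
    branch = []
    for _ in range(steps):
        c = constants[-1]
        fresh = chr(ord(c) + 1)          # constant foreign to the branch
        branch.append("P" + c + fresh)   # from (Ey)Pcy by (ED)
        constants.append(fresh)
    return branch

print(simulate(3))  # ['Pab', 'Pbc', 'Pcd']
```

No matter how large `steps` is, the last line of the branch always contains a new constant that an unsatisfied universal instance must still be produced for, which is why the unrevised procedure never terminates here.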
Although we have touched on the undecidability of predicate logic in this section,
the proof of this feature of RL is a topic in metatheory and so is not discussed in this
introduction to logic. Nevertheless, despite the undecidability of predicate logic, this
undecidability only affects a small fraction of propositions in RL, and so the truth-tree
method remains a useful method for determining logical properties.
End-of-Chapter Exercises
A. Using the truth-tree method, test the following sets of propositions to determine
whether they are logically consistent.
1. * ¬(∀x)(Px→Qx), ¬(∃x)(Px∧¬Qx)
2. (∃x)Px∧(∃x)¬Px
3. * (∃x)(∃y)(Pxy∧Gxy), (∀y)(∀z)¬Gyz
4. ¬(∃x)¬(∀y)Pxy, (∃x)Px
5. * ¬(∀x)¬(Px→Mx), ¬(∃x)¬(Px∧¬Mx), (∀y)(Py→Zy)∨(∃x)(Mx∨Px)
B. Using the truth-tree method, test the following propositions to determine whether
each is a contradiction, tautology, or contingency.
1. * (∃x)[(¬Fx∨¬Px)∨(∀y)(Py→Fy)]
2. (∃x)(∃y)(Pxy→Rxy)
3. (∃x)Px∧(∃x)¬Px
4. (∀x)[¬Px↔(Px∨Rx)]
5. (∃x)(∀y)(∀z)(Pxyz∨¬Rxyz)
C. Using the truth-tree method, test the following pairs of propositions to determine
whether they are equivalent.
1. (∃x)Px, ¬(∀x)¬Px
2. (∃x)Px∧(∃x)Rx, (∃x)(Px∧Rx)
3. (∃x)(Px∧Rx), ¬(∀x)¬(¬Px∨¬Rx)
4. (∃x)Pxx, (∃x)(∃y)Pxy
5. (∃x)(∃y)Pxy, (∃x)(∃y)Pyx
D. Using the truth-tree method, test the following arguments for validity.
1. (∀x)(Px→Rx), (∃x)Px⊢(∃x)Rx
2. (∃x)(∃y)Pxy, (∃y)(∃x)Pyx→(∀x)Px⊢(∀y)Py
3. * (∃x)Px∧(∀y)¬Qy, (∃x)(Px∧Qx)⊢(∀x)(∀y)(∀z)Pxyz
4. (∃x)Px∨(∀y)Qy, ¬(∀y)Qy⊢¬(∃x)Px∨(∀x)(∀y)(∀z)Pxyz
5. * (∀x)[Px→¬(∀y)¬(Qy→My)], (∃x)Px, (∃x)Qx⊢(∃x)(Mx∨Px)
Solutions to Starred Exercises
A.
1. * ¬(∀x)(Px→Qx), ¬(∃x)(Px∧¬Qx); inconsistent.
1 ¬(∀x)(Px→Qx) P
2 ¬(∃x)(Px∧¬Qx) P
3 (∀x)¬(Px∧¬Qx) 2¬∃D
4 (∃x)¬(Px→Qx) 1¬∀D
5 ¬(Pa→Qa) 4∃D
6 ¬(Pa∧¬Qa) 3∀D
7 Pa 5¬→D
8 ¬Qa 5¬→D
9 ¬Pa ¬¬Qa 6¬∧D
X
10 Qa 9¬¬D
X
B.
1. * (∃x)[(¬Fx∨¬Px)∨(∀y)(Py→Fy)]; first tree, not a contradiction.
1 (∃x)[(¬Fx∨¬Px)∨(∀y)(Py→Fy)] P
2 (¬Fa∨¬Pa)∨(∀y)(Py→Fy) 1∃D
Completed open branch: A branch is a completed open branch if and only if (1) all
complex propositions that can be decomposed into atomic propositions or negated
atomic propositions are decomposed; (2) for all universally quantified propositions
‘(∀x)P’ occurring in the branch, there is a substitution instance ‘P(a/x)’ for each
constant that occurs in that branch; and (3) the branch is not a closed branch.
Closed tree: A tree is a closed tree if and only if all branches close.
Closed branch: A branch is a closed branch if and only if there is a proposition and
its literal negation (e.g., ‘P’ and ‘¬P’).
Consistency: A set of propositions ‘{P, Q, R, . . ., Z}’ is shown by the truth-tree
method to be consistent if and only if a completed tree of the stack of ‘P,’ ‘Q,’
‘R,’ . . ., ‘Z’ is an open tree; that is, there is at least one completed open branch.
Inconsistency: A set of propositions ‘{P, Q, R, . . ., Z}’ is shown by the truth-tree
method to be inconsistent if and only if a completed tree of the stack of ‘P,’ ‘Q,’
‘R,’ . . ., ‘Z’ is a closed tree; that is, all branches close.
Tautology: A proposition ‘P’ is shown by the truth-tree method to be a tautology
if and only if the tree for ‘¬P’ determines a closed tree; that is, all branches close.
Contradiction: A proposition ‘P’ is shown by the truth-tree method to be a contra-
diction if and only if the tree for ‘P’ determines a closed tree; that is, all branches
close.
Contingency: A proposition ‘P’ is shown by the truth-tree method to be a contin-
gency if and only if ‘P’ is neither a tautology nor a contradiction; that is, the tree of
‘P’ does not determine a closed tree, and the tree of ‘¬P’ does not determine a
closed tree.
Equivalence: A pair of propositions ‘P’ and ‘Q’ are shown by the truth-tree method
to be equivalent if and only if the tree of the stack of ‘¬(P↔Q)’ determines a closed
tree; that is, all branches for ‘¬(P↔Q)’ close.
Validity: An argument ‘P, Q, R, . . ., Y ⊢ Z’ is shown by the truth-tree method to be
valid in RL if and only if the stack ‘P,’ ‘Q,’ ‘R,’ . . ., ‘Y,’ ‘¬Z’ determines a closed
tree.
Invalidity: An argument ‘P, Q, R, . . ., Y ⊢ Z’ is shown by the truth-tree method to
be invalid in RL if and only if the stack ‘P,’ ‘Q,’ ‘R,’ . . ., ‘Y,’ ‘¬Z’ has at least one
completed open branch.
Note
1. See George Boolos, “Trees and Finite Satisfiability: Proof of a Conjecture of Burgess,”
Notre Dame Journal of Formal Logic 25, no. 3 (1984): 193–97.
The entire set of derivation rules from propositional logic (PD) is imported into the
natural deduction system of predicate logic (RD). In addition, new derivation rules
are formulated for derivations involving quantified expressions.
In this section, the natural deduction system (RD) is articulated. (RD) adds four
quantifier derivation rules. Two derivation rules pertain to the elimination of
quantifiers (‘∀E’ and ‘∃E’), and the other two derivation rules pertain to the introduc-
tion of quantifiers (‘∀I’ and ‘∃I’).
1 (∀x)Px P
2 Pa 1∀E
Notice that x is bound in ‘(∀x)Px,’ and that the application of (∀E) involves remov-
ing the quantifier and replacing x with an individual constant (name) of your choosing.
In the above example, (∀E) is used on ‘(∀x)Px’ to derive ‘Pa.’ However, this is
not the only proposition that can be inferred from ‘(∀x)Px’ since variables bound by
the universal quantifier can be consistently replaced with any substitution instance
‘P(a/x),’ ‘P(a₁/x),’ . . . , ‘P(aₙ/x)’ of your choosing.
1 (∀x)Px P
2 Pa 1∀E
3 Pb 1∀E
4 Pc 1∀E
5 Pd 1∀E
6 Pe100 1∀E
Another feature of (∀E) to keep in mind is that it demands that bound variables be
consistently (or uniformly) replaced with any substitution instance ‘P(a/x)’ of your
choosing.
1 (∀x)Pxx P
2 Paa 1∀E
3 Pcc 1∀E
Notice that each bound x has been uniformly replaced by an ‘a’ at line 2 and uni-
formly replaced by a ‘c’ at line 3. These are correct uses of (∀E). It is important to
note that in applying (∀E) to ‘(∀x)Pxx,’ we cannot replace one x with ‘a’ and another
with ‘b’:
1 (∀x)Pxx P
2 Pab 1∀E—NO!
3 Pba 1∀E—NO!
Notice that lines 2 and 3 are incorrect uses of (∀E) because they do not uniformly
replace bound variables with an individual constant. At line 2, one bound x is replaced
by ‘a’ while another is replaced by ‘b.’ This is not a consistent replacement of bound
variables with individual constants.
With a basic understanding of the correct formal use of (∀E), consider the follow-
ing English argument that involves (∀E):
1 Everyone is a person. P
2 If Alfred is a person, then Bob is a zombie. P
3 Alfred is a person. 1∀E
4 Therefore, Bob is a zombie. 2,3→E
Notice that since line 1 states that everyone is a person, a use of (∀E) allows for
deriving the proposition that a particular person in the domain of discourse is a person.
In the formal language of predicate logic the above argument is the following:
(∀x)Px, Pa→Zb ⊢ Zb
1 (∀x)Px P
2 Pa→Zb P
3 Pa 1∀E
4 Zb 2,3→E
In the above example, since line 1 states that everything is a person, we are justified
in inferring a substitution instance of that expression. In other words, from the quanti-
fied expression ‘(∀x)Px’ at line 1 we are justified in inferring a substitution instance
‘Pa’ at line 3. More concretely, since everything is a person, it follows that
Alfred is a person. Universal elimination (∀E) is valid, for it would be impossible for
Everything is a person to be true yet Alfred is a person false.
Again, note that a different substitution instance ‘P(b/x)’ or ‘P(c/x)’ or ‘P(d/x)’
could have been chosen. To repeat, any substitution instance within the domain of
discourse can be derived. That is, we could have inferred Bob is a person, or Frank
is a person, or Mary is a person from line 1 since ‘(∀x)Px’ says that everything is
a person. The reason ‘P(a/x)’ is chosen rather than any other proposition is that this
selection will aid in the solution of the proof.
Here is another example of a proof involving (∀E):
(∀x)[Px→(∀y)(Qx→Wy)],Pb∧Qb ⊢ Wt
1 (∀x)[Px→(∀y)(Qx→Wy)] P
2 Pb∧Qb P
3 Pb 2∧E
4 Pb→(∀y)(Qb→Wy) 1∀E
5 (∀y)(Qb→Wy) 3,4→E
6 Qb→Wt 5∀E
7 Qb 2∧E
8 Wt 6,7→E
Notice three things about the above proof. First, when (∀E) is applied to the left-
most quantifier in line 1, every variable bound by that quantifier is replaced with the
same substitution instance ‘P(b/x).’ This is another way of saying that (∀E) requires
uniform replacement of bound variables with individual constants. Second, notice that
at line 4 the universal quantifier (∀y) is not the main operator, so (∀E) cannot be ap-
plied to it. When using (∀E), make sure that the proposition you are applying it to is a
universally quantified proposition. Third, notice that at line 4, when (∀E) is applied to
line 1, ‘P(b/x)’ is chosen as a substitution instance. Also notice that at line 6, ‘P(t/y)’
is the substitution instance rather than ‘P(b/y).’ Consider the following question:
When using (∀E), I know that I can uniformly replace bound variables with any
individual constant of my choosing. However, which individual constant should
I choose?
There are two considerations: first, choose constants that already occur in the proof;
second, choose constants that occur in the conclusion. To see the first consideration
more clearly, take a look at the beginning portion of the above proof.
1 (∀x)[Px→(∀y)(Qx→Wy)] P
2 Pb∧Qb P
3 Pb 2∧E
4 Pb→(∀y)(Qb→Wy) 1∀E
5 (∀y)(Qb→Wy) 3,4→E
Notice that when (∀E) is used at line 4, ‘b’ is chosen because ‘b’ already occurs in
the proof at lines 2 and 3, and because choosing ‘b’ will allow for inferring line 5. The
general idea is that choosing an individual already found in the proof will facilitate the
use of other derivation rules.
To see the second consideration more clearly, take a look at the latter portion of
the above proof.
5 (∀y)(Qb→Wy) 3,4→E
6 Qb→Wt 5∀E
7 Qb 2∧E
8 Wt 6,7→E
Notice that when (∀E) is used at line 6, ‘t’ is chosen because ‘t’ occurs in the con-
clusion.
These considerations allow for formulating a strategic rule for the use of (∀E) and
the rest of the quantifier rules more generally.
SQ#1(∀E) When using (∀E), the choice of substitution instances ‘P(a/x)’ should be guided by the individual constants (names) already occurring in the proof and any individual constants (names) occurring in the conclusion.
Consider another example of a proof involving the use of (∀E) for the following
argument:
(∀x)(Pxa∧Qxa) ⊢ (Paa∧Pba)∧Pma
1 (∀x)(Pxa∧Qxa) P
2 Paa∧Qaa 1∀E
3 Pba∧Qba 1∀E
4 Pma∧Qma 1∀E
5 Paa 2∧E
6 Pba 3∧E
7 Pma 4∧E
8 Paa∧Pba 5,6∧I
9 (Paa∧Pba)∧Pma 7,8∧I
In the above proof, notice that the use of (∀E) at lines 2 to 4 is guided by the indi-
vidual constants occurring in the conclusion.
Zr ⊢ (∃x)Zx
1 Zr P
2 (∃x)Zx 1∃I
Pa→Qa, Pa ⊢ (∃y)Qy∧(∃x)Px
1 Pa→Qa P
2 Pa P
3 Qa 1,2→E
4 (∃x)Px 2∃I
5 (∃y)Qy 3∃I
6 (∃x)Px∧(∃y)Qy 4,5∧I
In the above example, from both ‘Pa’ and ‘Qa,’ two existentially quantified expres-
sions are inferred. That is, ‘P(a/x)’ and ‘Q(a/y)’ are replaced by existentially quanti-
fied expressions.
1 Laab P
2 (∃x)Lxab 1∃I
3 (∃x)Laxb 1∃I
4 (∃x)Lxxb 1∃I
5 (∃x)Lxxx 1∃I—NO!
Excluding line 5, each of the above uses of (∃I) is valid. Notice that at lines 2 and 3,
only one individual constant from line 1 is replaced by the use of (∃I). This is accept-
able since (∃I) says that an existentially quantified proposition ‘(∃x)Px’ can be derived
by replacing at least one individual constant with an existentially quantified variable.
Also note that line 4 is correct: although more than one individual constant is being
replaced, every replaced constant is an occurrence of the same constant ‘a.’ That is,
more than one ‘a’ is replaced by existentially quantified x’s. However, line 5 is not
correct since the replacement of individual constants is not uniform. At line 5, not
only is each ‘a’ replaced by an x but so is the ‘b.’
With a basic understanding of the correct formal use of (∃I), consider the following
English argument that involves (∃I):
1 Rick is a zombie. P
2 Therefore, someone is a zombie. 1∃I
The above argument formally corresponds to ‘Zr ⊢ (∃x)Zx.’ Next, suppose that Alfred
loves himself (‘Laa’). Each of the following uses of (∃I) is acceptable since if it is
true that Alfred loves himself, then lines 2 to 4 must be true:
1 Laa P
2 (∃x)Lxa 1∃I
3 (∃x)Lax 1∃I
4 (∃x)Lxx 1∃I
1 Lab P
2 (∃x)Lxx 1∃I—NO!
The derivation from line 1 to line 2 does not follow. For suppose that Alice is madly
in love with Bob: she sees him, and her heart goes aflutter. There is nothing about
Alice’s love for another person that entails that someone loves him- or herself.
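This failure can be confirmed with a small countermodel. In a two-person domain where Alice loves Bob and nothing else, ‘Lab’ is true while ‘(∃x)Lxx’ is false (a sketch; the names and the extension of ‘L’ are invented for illustration):

```python
domain = {"alice", "bob"}
L = {("alice", "bob")}  # extension of 'loves': Alice loves Bob, and nothing else

def Lab():
    """Truth value of the premise 'Lab'."""
    return ("alice", "bob") in L

def exists_Lxx():
    """Truth value of '(∃x)Lxx': someone loves him- or herself."""
    return any((x, x) in L for x in domain)

assert Lab()             # the premise is true in this interpretation
assert not exists_Lxx()  # the putative conclusion is false, so the inference fails
```

A single interpretation with a true premise and a false conclusion is enough to show that ‘Lab ⊢ (∃x)Lxx’ is invalid.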
However, suppose a slightly different conclusion to the above proof. That is, in-
stead of ‘Lab ⊢ (∃x)Lxx,’ suppose the proof is ‘Lab ⊢ (∃x)(∃y)Lxy.’
1 Lab P/(∃x)(∃y)Lxy
2 (∃y)Lay 1∃I
3 (∃x)(∃y)Lxy 2∃I
Notice that at line 2 (∃I) can be applied to a proposition that is already existentially
quantified. That is, (∃I) can be applied to ‘(∃y)Lay’ by consistently replacing at least
one individual constant with an existentially quantified variable.
Consider another example that illustrates mistaken uses of (∃I):
1 Lab P
2 Lab→Rab P
3 (∃x)Lxx 1∃I—NO!
4 (∃x)Lax→Rab 2∃I—NO!
5 (∃x)Lax→(∃x)Rax 4∃I—NO!
First, lines 4 and 5 are incorrect because (∃I), as a quantifier rule, can only be ap-
plied to the whole proposition ‘Lab→Rab’ at line 2 and not to subformulas within that
proposition. Second, line 3 is invalid because there is not a consistent replacement of
individual constants with an existentially quantified variable. In the case of line 3,
two different individual constants, ‘a’ and ‘b,’ are replaced by a single bound
variable x.
With a clarified notion of how to use (∃I) and why it is valid, it is helpful both to
develop a strategy for using (∃I) and to reinforce our understanding of (∃I) with a
number of examples. Consider an example involving both (∀E) and (∃I):
1 (∀x)(Px∧Rx) P
2 Pa→Wb P/(∃y)Wy
3 Pa∧Ra 1∀E
4 Pa 3∧E
5 Wb 2,4→E
6 (∃y)Wy 5∃I
The above example illustrates a strategy for using (∃I). Namely, if our aim is to
obtain an existentially quantified proposition ‘(∃x)P,’ try to formulate a substitution
instance ‘P(a/x)’ such that a use of (∃I) would result in ‘(∃x)P.’ In the proof above,
since the conclusion is ‘(∃y)Wy,’ a subgoal of the proof is a possible substitution
instance of ‘(∃y)Wy,’ that is, ‘W(a/y),’ ‘W(b/y),’ ‘W(c/y),’ and so on.
SQ#2(∃I) When using (∃I), aim at deriving a substitution instance ‘P(a/x)’ such that a use of (∃I) will result in the desired conclusion.
In other words, if the ultimate goal is to derive ‘(∃x)Px,’ aim to derive a substitution
instance of ‘(∃x)Px,’ like ‘Pa,’ ‘Pb,’ ‘Pr,’ so that a use of (∃I) will result in ‘(∃x)Px.’
One way of thinking about this is to work backward in the proof. Consider the fol-
lowing again:
1 (∀x)(Px∧Rx) P
2 Pa→Wb P/(∃y)Wy
.
.
k W(a, . . ., v/x)
k+1 (∃y)Wy k∃I
Notice that the above proof starts by moving one step backward from the conclusion
and then making ‘W(a, . . ., v/x)’ the goal. Once ‘W(a, . . ., v/x)’ is obtained, a
simple use of (∃I) will yield the conclusion.
Here is another example. Prove the following:
(∀x)Px, (∀y)Zy⊢(∃x)(Px∧Zx)
Before beginning this proof, it can be helpful to mentally work backward from the
conclusion.
1 (∀x)Px P
2 (∀y)Zy P/(∃x)(Px∧Zx)
.
.
k P(a/x)∧Z(a/x)
k+1 (∃x)(Px∧Zx) k∃I
If the goal of the conclusion is ‘(∃x)(Px∧Zx),’ the subgoal will be something like
‘Pa∧Za’ since propositions like ‘Pa∧Zb’ or ‘Pc∧Za’ will not allow for a uniform
replacement when (∃I) is used. With ‘Pa∧Za’ as a subgoal, consider the completed
proof.
1 (∀x)Px P
2 (∀y)Zy P/(∃x)(Px∧Zx)
3 Pa 1∀E
4 Za 2∀E
5 Pa∧Za 3,4∧I
6 (∃x)(Px∧Zx) 5∃I
Notice that when using (∀E) at line 4, it is important to choose ‘a’ rather than some
other individual constant (e.g., ‘b’) since using (∃I) on ‘Pa∧Zb’ would only allow for
inferring ‘(∃x)(Px∧Zb)’ and not ‘(∃x)(Px∧Zx).’
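The sequent just proved can also be double-checked semantically: in every interpretation over a small nonempty domain in which both premises are true, the conclusion is true as well. A sketch that checks every extension of ‘P’ and ‘Z’ exhaustively (the two-element domain is an arbitrary assumption):

```python
from itertools import chain, combinations

domain = ["a", "b"]

def powerset(xs):
    """All possible extensions of a one-place predicate over the domain."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Check every pair of extensions for P and Z.
for Pext in powerset(domain):
    for Zext in powerset(domain):
        P, Z = set(Pext), set(Zext)
        premises = all(x in P for x in domain) and all(y in Z for y in domain)
        conclusion = any(x in P and x in Z for x in domain)
        if premises:
            # No interpretation makes '(∀x)Px' and '(∀y)Zy' true
            # while '(∃x)(Px∧Zx)' is false.
            assert conclusion
```

Exhaustive checking over a finite domain is not a proof of validity in general, but it is a useful sanity check on a completed derivation.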
Exercise Set #1
A.
1. * Pa→Qb, Pa ⊢ (∃y)Qy
1 Pa→Qb P
2 Pa P/(∃y)Qy
3 Qb 1,2→E
4 (∃y)Qy 3∃I
3. * (∀x)(Px→Rx), Pa∧Ma ⊢ (∃z)Rz
1 (∀x)(Px→Rx) P
2 Pa∧Ma P/(∃z)Rz
3 Pa→Ra 1∀E
4 Pa 2∧E
5 Ra 3,4→E
6 (∃z)Rz 5∃I
5. * (∀x)Px, (∀x)Px→Gb ⊢ (∃x)Gx
1 (∀x)Px P
2 (∀x)Px→Gb P/(∃x)Gx
3 Gb 1,2→E
4 (∃x)Gx 3∃I
Notice that you cannot apply (∀E) to line 2 because the universal quantifier is not
the main operator of the wff. Line 2 is a conditional, and since the antecedent of the
conditional is in line 1, you can apply ‘→E.’
7. * (∀x)Px→(∃y)Gy, ¬(∃y)Gy ⊢ ¬(∀x)Px
1 (∀x)Px→(∃y)Gy P
2 ¬(∃y)Gy P
3 ¬(∀x)Px 1,2MT
Notice again that the main operator of line 1 is a conditional, and the
main operator of line 2 is a negation. You cannot apply (∀E) or (∃I) to
either of these lines since the quantifier is not the main operator of the
wff. You can, however, apply MT since line 2 is the negation of the
consequent of line 1.
9. * (∀x)(∀y)(Pxy→Rxy), Paa ⊢ (∃x)Rxx
1 (∀x)(∀y)(Pxy→Rxy) P
2 Paa P
3 (∀y)(Pay→Ray) 1∀E
4 Paa→Raa 3∀E
5 Raa 2,4→E
6 (∃x)Rxx 5∃I
11. * Wab∧Qbc ⊢ [(∃y)(Way∧Qyc)∧(∃y)(Wyb)]∧(∃y)(Qyc)
1 Wab∧Qbc P
2 (∃y)(Way∧Qyc) 1∃I
3 Wab 1∧E
4 Qbc 1∧E
5 (∃y)Wyb 3∃I
6 (∃y)Qyc 4∃I
7 (∃y)(Way∧Qyc)∧(∃y)(Wyb) 2,5∧I
8 [(∃y)(Way∧Qyc)∧(∃y)(Wyb)]∧(∃y)(Qyc) 6,7∧I
Notice that line 2 involves an application of (∃I) to the complex proposition
‘Wab∧Qbc,’ while lines 5 and 6 involve applications of (∃I) to atomic
propositions. Note that (∃I) can be applied to a complex proposition,
and this does not involve replacing two different individual constants
with a single variable, since only ‘b’ is uniformly replaced.
(∀x)Px ⊢ (∀y)Py
1 (∀x)Px P
2 Pa 1∀E
3 (∀y)Py 2∀I
When using (∀I), always check to see if either of the restrictions has been violated.
For instance,
(1) Does the instantiating constant ‘a’ occur in a premise or in an open assumption?
(2) Does the instantiating constant ‘a’ occur in the resulting quantified proposition ‘(∀x)P’?
Notice that the use of (∀I) at line 3 does not violate the two restrictions placed on
(∀I). Namely, (1) ‘a’ does not occur in any premise or open assumption in the proof,
and (2) there is not an ‘a’ in the resulting quantified proposition ‘(∀y)Py.’ In this case,
it should be somewhat obvious why it is valid to infer ‘(∀y)Py’ at line 3 from ‘Pa’ at
line 2. This is because ‘(∀x)Px’ and ‘(∀y)Py’ are notational variants. That is, ‘(∀x)
Px’ says that everything in the domain of discourse has the property ‘P,’ and ‘(∀y)
Py’ says exactly the same thing. So, if ‘(∀x)Px’ is true, then ‘(∀y)Py’ cannot be false
since ‘(∀y)Py’ and ‘(∀x)Px’ are logically equivalent.
Here is another example of a proof involving (∀I):
(∀x)(Px∧Rx) ⊢ (∀y)(Py)
1 (∀x)(Px∧Rx) P
2 Pb∧Rb 1∀E
3 Pb 2∧E
4 (∀y)Py 3∀I
Notice that the use of (∀I) at line 4 does not violate the two restrictions placed on
(∀I). Namely, (1) ‘b’ does not occur in any premise or open assumption in the proof,
and (2) there is not a ‘b’ in the resulting quantified proposition ‘(∀y)Py.’ Here, ‘(∀x)
(Px∧Rx)’ and ‘(∀y)Py’ are not notational variants, but it should be obvious that it is
impossible for ‘(∀x)(Px∧Rx)’ to be true and ‘(∀y)Py’ false since ‘(∀x)(Px∧Rx)’ says
that everything is both ‘P’ and ‘R,’ while ‘(∀y)Py’ says that everything is ‘P.’
Consider a third example:
1 (∀x)(Pxc∨Qx) P
2 (∀x)¬Qx P
3 Pac∨Qa 1∀E
4 ¬Qa 2∀E
5 Pac 3,4DS
6 (∀x)Pxc 5∀I
Notice that the use of (∀I) at line 6 does not violate the two restrictions placed on
(∀I). Namely, (1) ‘a’ does not occur in any premise or open assumption in the proof,
and (2) there is not an ‘a’ in the resulting quantified proposition ‘(∀x)Pxc.’ But in the
above proof, it is no longer immediately obvious why (∀I) is valid. Thus, you might
ask yourself the following question: When is a use of (∀I) valid, and when is it not?
In fact, there seem to be at least two cases where a use of (∀I) is invalid. First,
consider the statement Johnny Walker is a criminal (i.e., ‘Cj’). This proposition in-
volves an individual constant (name) that selects a single individual; that is, ‘j’ names
Johnny Walker. It would be invalid to generalize from this proposition to Everyone is
a criminal, or ‘(∀x)Cx.’ That is, it is possible for v(Cj) = T, yet v(∀x)Cx = F. This is
possible because there is nothing about the truth of Johnny Walker being a criminal
that makes it such that everyone else is a criminal. Second, consider the statement
Someone is a criminal, or ‘(∃x)Cx.’ Unlike in the first case where ‘Cj’ states that
some specific individual is a criminal, ‘(∃x)Cx’ states that there exists a criminal in the
domain of discourse but does not specifically identify that person. But, again, it would
be invalid to generalize from ‘(∃x)Cx’ to Everyone is a criminal since it is possible for
the former to be true while the latter is false. This is invalid for the same reason as in
the first case: there is nothing about the truth of someone being a criminal that makes
it such that everyone is a criminal. It is possible that the property of being a criminal
is idiosyncratic to that existent person.
These invalid uses of (∀I) are sometimes known as hasty generalizations. That a
single specific (or some nonspecific) object has a property does not entail that every
object has that property. These two cases are what motivates the two restrictions
placed on the use of (∀I). Namely, a use of (∀I) is not valid when it is used to gener-
alize an instantiating constant that names either a single specific individual or some
unknown individual. However, a use of (∀I) is valid when the instantiating constant it
generalizes is an arbitrarily-selected individual, that is, an individual chosen at random
from the domain of discourse.
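Both hasty generalizations can be exhibited in a single finite countermodel. Suppose a three-person domain in which only Johnny Walker is a criminal; then ‘Cj’ and ‘(∃x)Cx’ are both true while ‘(∀x)Cx’ is false (the domain and the other names are invented for illustration):

```python
domain = {"johnny", "sue", "tek"}
C = {"johnny"}  # extension of 'is a criminal': only Johnny Walker

Cj = "johnny" in C                         # 'Cj': Johnny Walker is a criminal
exists_Cx = any(x in C for x in domain)    # '(∃x)Cx': someone is a criminal
forall_Cx = all(x in C for x in domain)    # '(∀x)Cx': everyone is a criminal

assert Cj              # true: generalizing from it to '(∀x)Cx' would be hasty
assert exists_Cx       # true: generalizing from it to '(∀x)Cx' would also be hasty
assert not forall_Cx   # false: so neither generalization is valid
```

The countermodel makes the restrictions on (∀I) vivid: being a criminal is idiosyncratic to one member of the domain, so neither ‘Cj’ nor ‘(∃x)Cx’ supports the universal claim.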
Let’s look at three concrete examples involving (∀I), each increasing in complex-
ity. First, consider a domain of discourse consisting of living humans. Suppose that
you wanted to prove the following claim: All men are mortal, or ‘(∀x)(Mx→Rx).’
Before beginning, let’s take some propositions as premises. Let’s use All organisms
are mortal, or ‘(∀x)(Ox→Rx),’ as a premise and All men are organisms, or ‘(∀x)
(Mx→Ox),’ as a premise.
1 (∀x)(Ox→Rx) P
2 (∀x)(Mx→Ox) P
Your first step in trying to prove that All men are mortal would be to assume a
randomly selected man from the domain of discourse.
1 (∀x)(Ox→Rx) P
2 (∀x)(Mx→Ox) P
3 Ma A
In this case, the object constant ‘a’ refers not to a single man (e.g., Frank or some
unknown man), but to a man randomly selected from the domain of discourse. So, in
principle, it could be any man, although not every man. The next step would be to
derive from these assumptions that the randomly selected man is mortal.
1 (∀x)(Ox→Rx) P
2 (∀x)(Mx→Ox) P
3 Ma A
4 Oa→Ra 1∀E
5 Ma→Oa 2∀E
6 Oa 3,5→E
7 Ra 4,6→E
8 Ma→Ra 3–7→I
Line 8 reads, If ‘a’ is a man, then ‘a’ is mortal. The next step is to use (∀I) on line 8.
1 (∀x)(Ox→Rx) P
2 (∀x)(Mx→Ox) P
3 Ma A
4 Oa→Ra 1∀E
5 Ma→Oa 2∀E
6 Oa 3,5→E
7 Ra 4,6→E
8 Ma→Ra 3–7→I
9 (∀x)(Mx→Rx) 8∀I
Line 9 reads, All men are mortal. Note two things. First, line 9 does not violate
either of the two restrictions on the use of (∀I), and it is not a hasty generalization
because it generalizes not from some specific man and not from some unknown man
but from a randomly selected man. We have shown then that if we randomly select a
man from the universe of discourse and show that the man has the property of being
mortal, we can generalize that every man has that property. This is valid because the
generalization depends not on any idiosyncratic property that belongs to a specific
man but on a property of all men.
As a second example, consider that the smallest and the only even prime number is
2. This follows from the fact that 1 is not a prime number (by definition), that every
even number greater than 2 is divisible by 2, and that by definition, a prime number
is divisible only by itself and 1. Suppose then that we want to claim No positive, even
integer greater than 2 is prime. We know that this is true, but we want to prove that
it follows from the considerations above. In other words, we want to prove something
roughly translated as follows:
(∀x){[(Ix∧Qx)∧(Ex∧Gx)]→¬Px}
The above formula reads, No integer that is positive and even and greater than 2 is
prime. In order to derive this statement, start with the following assumption:
1 (Ia∧Qa)∧(Ea∧Ga) A
This says to assume that some randomly selected integer ‘a’ from a universe of
discourse consisting of positive integers is an even integer greater than 2. From this as-
sumption, the definition of a prime number, and the fact that every even number greater
than 2 is divisible by 2, we can infer that this randomly selected integer is not prime.
1 (Ia∧Qa)∧(Ea∧Ga) A
. . .
. . .
. . .
k ¬Pa .
Next, we can make use of conditional introduction and exit the subproof.
1 (Ia∧Qa)∧(Ea∧Ga) A
. . .
. . .
. . .
k ¬Pa .
k+1 [(Ia∧Qa)∧(Ea∧Ga)]→¬Pa 1 – k,→I
Line k + 1 states that this randomly selected integer is not prime. From here, we can
make use of (∀I) and conclude the proof.
1 (Ia∧Qa)∧(Ea∧Ga) A
. . .
. . .
. . .
k ¬Pa .
k+1 [(Ia∧Qa)∧(Ea∧Ga)]→¬Pa 1 – k,→I
k+2 (∀x){[(Ix∧Qx)∧(Ex∧Gx)]→¬Px} k + 1,∀I
The above proof does not violate either of our two restrictions: ‘a’ occurs in neither
a premise nor an open assumption. This is evident from the fact that the above proof
makes use of no premises, and the subproof involving an assumption closes once ‘→I’
is applied at line k +1. We can generalize at line k +2 because when we chose ‘a’ from
the universe of discourse, we did not choose a specific positive integer that is even and
greater than 2. That is, ‘a’ is not an abbreviation for 4, or 6, or 8. In the assumption,
the selection of ‘a’ is random or arbitrary, for ‘a’ is any individual number that has
the property of being an integer, positive, even, and greater than 2. So if we can say,
Take an integer that is positive, even, and greater than 2, it follows that this integer
will not be prime, and we can reason to Any integer that is positive, even, and greater
than 2 is not prime. And from this we can generalize to No integer that is positive,
even, and greater than 2 is prime.
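The arithmetic behind this example can be spot-checked computationally: every positive even integer greater than 2 has 2 as a divisor besides itself and 1, so none of them is prime. A quick sketch (the upper bound of 1,000 is an arbitrary choice, since no program can check every integer):

```python
def is_prime(n):
    """A number is prime iff it is greater than 1 and divisible only by 1 and itself."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, n))

# Every even integer greater than 2 is divisible by 2, hence not prime.
evens_greater_than_2 = range(4, 1000, 2)
assert all(not is_prime(n) for n in evens_greater_than_2)
assert is_prime(2)  # 2 remains the smallest and only even prime
```

Of course, the finite check is no substitute for the derivation above: the proof establishes the claim for all integers, while the program only confirms it up to a bound.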
As one final example, consider the general structure of Euclid’s proof of the Py-
thagorean theorem involving a randomly selected right triangle. This is proposition
I.47 in Euclid’s Elements. The conclusion of the proof is the following:
For every right triangle, the square of the hypotenuse is equal to the sum of the
squares of the legs. That is, c² = a² + b².
We can prove this by beginning with an assumption: assume that some randomly
selected Euclidean triangle has an angle equal to 90 degrees, that is, triangle ABC
where ∠ACB is 90°. From this assumption, various assumptions about space
and lines, and more general facts about triangles, we derive the consequence that the
randomly selected triangle is such that the square of the hypotenuse is equal to the
sum of the squares of its two sides. After deriving this consequence, a use of ‘→I’ is
applied in order to exit the subproof, and then the conditional is generalized.
The proof of the Pythagorean theorem thus involves an inference from an arbitrarily
selected individual constant to a universal statement about all right triangles.
Before considering a strategy for using (∀I), it is helpful to examine a number of
incorrect uses of (∀I) so as to reinforce its correct usage.
1 Fbc∧Fbd P
2 Fbc 1∧E
3 Fbd 1∧E
4 (∀x)(Fxc) 2∀I—NO!
5 (∀x)(Fbx) 2∀I—NO!
6 (∀x)(Fxd) 3∀I—NO!
7 (∀x)(Fbx) 3∀I—NO!
Lines 4 to 7 are all mistaken uses of (∀I) because they violate the first restriction,
namely, that the instantiating constant must not occur in a premise or in an open
assumption. In the above case, ‘b,’ ‘c,’ and ‘d’ all occur in premise 1 and therefore do
not refer to some arbitrarily selected individual that can be generalized. For example,
the use of (∀I) at line 4 on line 2 is incorrect since it generalizes on ‘b,’ which
occurs in the premise at line 1.
Consider another violation of this same restriction:
1 Fbb P
2 Raa A
3 (∀x)Rxx 2∀I—NO!
Line 3 violates the first restriction, namely, that the instantiating constant must not
occur in a premise or in an open assumption. At line 2, ‘a’ occurs in an open assump-
tion; therefore the use of (∀I) at line 3 is not acceptable. However, with this said, the
following is an acceptable usage:
1 Fbb P
2 Raa A
3 Raa∨Pa 2∨I
4 Raa→(Raa∨Pa) 2–3→I
5 (∀x)[Rxx→(Rxx∨Px)] 4∀I
Line 5 is valid because the instantiating constant ‘a’ no longer occurs in an open
assumption since the subproof involving the assumption closes at line 4.
Consider an example of the second restriction on the use of (∀I):
1 (∀x)(Pxx→Lb) P
2 Pcc→Lb 1∀E
3 (∀y)(Pyc→Lb) 2∀I—NO!
4 (∀y)(Pcy→Lb) 2∀I—NO!
5 (∀y)(Pyy→Lb) 2∀I
In the above example, lines 3 and 4 are incorrect insofar as they violate the second
restriction. Namely, the instantiating constant ‘c’ still occurs in the resulting quantified
expressions, that is, ‘(∀y)(Pyc→Lb)’ and ‘(∀y)(Pcy→Lb).’ However, line 5 is permissible,
for when (∀I) is applied, ‘c’ no longer occurs in the quantified expression.
Finally, note a strategic rule for the use of (∀I).
SQ#3(∀I) When using (∀I), aim at deriving a substitution instance ‘P(a/x)’ such that a use of (∀I) will result in the desired universally quantified proposition ‘(∀x)P.’
1 Rcc A
2 Rcc 1R
3 Rcc→Rcc 1–2→I
4 (∀x)(Rxx→Rxx) 3∀I
Notice that ‘Rcc→Rcc’ is a proposition such that a use of (∀I) will result in ‘(∀x)
(Rxx→Rxx).’ It is important to see that ‘Rcc’ is assumed rather than ‘(∀x)Rxx.’ Con-
sider the above proof but this time not using SQ#3(∀I) as a strategy.
1 (∀x)Rxx A
2 (∀x)Rxx 1R
3 (∀x)Rxx→(∀x)Rxx 1–2→I
Notice that line 3 is not the desired conclusion. This strategic rule suggests that
if the goal is a universally quantified proposition ‘(∀x)P,’ the first step should be a
substitution instance ‘P(a/x)’ from which a universally quantified proposition ‘(∀x)P’
can ultimately be derived using (∀I).
The strategic rule for (∀I) is similar to that for (∃I) insofar as one way of thinking
about its use is to mentally work backward in the proof. Consider the following proof:
⊢ (∀x)[Rxx→(Sxx∨Rxx)]
1 A
. .
. .
k .
k+1 Raa→(Saa∨Raa) 1–k→I
k+2 (∀x)[Rxx→(Sxx∨Rxx)] k + 1,∀I
Notice that while the conclusion of the proof is ‘(∀x)[Rxx→(Sxx∨Rxx)],’ the strate-
gic rule for obtaining universally quantified propositions says that we ought to derive a
substitution instance ‘P(a/x)’ such that a use of (∀I) will result in the desired conclusion.
In the above proof, ‘Raa→(Saa∨Raa)’ or some other similar proposition is an example.
With ‘Raa→(Saa∨Raa)’ as our subgoal, we can now use earlier strategic rules to solve
the proof. For instance, since ‘Raa→(Saa∨Raa)’ is a conditional, the strategic rule for
conditionals will help solve the proof.
1 Raa A/Saa∨Raa
2 Saa∨Raa 1∨I
3 Raa→(Saa∨Raa) 1–2→I
4 (∀x)[Rxx→(Sxx ∨ Rxx)] 3∀I
1 (∃x)Px P
2 Pa A/∃E
3 (∃y)Py 2∃I
4 (∃y)Py 1,2–3∃E
Note three things about this use of (∃E). First, note that a use of (∃E) requires that
an assumption be made at line 2. Second, note that ‘(∃y)Py’ clearly follows from ‘(∃x)
Px.’ The two are notational variants. Third, and finally, consider whether the use of
this derivation rule above has violated either of the two restrictions. Whenever using
(∃E), always ask whether these two restrictions have been violated:
(1) Does the individuating constant ‘a’ occur in any premise or in an active proof (or
subproof) prior to its arbitrary introduction in the assumption ‘P(a/x)’?
(2) Does the individuating constant ‘a’ occur in proposition ‘Q’ discharged from the
subproof?
If the answer to both of these questions is no, then you have not violated the restric-
tions placed on the use of this derivation rule. In the case of the above example, note
that neither of the two restrictions is violated. That is, when ‘Pa’ is assumed at line 2, the
individual constant ‘a’ does not occur in any premise or in an active proof (or subproof),
and when ‘(∃y)Py’ is discharged from the subproof, ‘a’ is not found in ‘(∃y)Py.’
Consider another slightly more complicated example involving (∃E):
(∃x)Px ⊢ (∃x)(Px∨¬Mx)
1 (∃x)Px P
2 Pa A/∃E
3 Pa∨¬Ma 2∨I
4 (∃x)(Px∨¬Mx) 3∃I
5 (∃x)(Px∨¬Mx) 1,2–4∃E
In the above example, note that none of the restrictions have been violated. This is
evident because (1) the individuating constant ‘a’ employed in the assumption at line
2 does not occur in the premise or any open assumptions. This includes the fact that
the individuating constant ‘a’ employed in the assumption at line 2 is not in the exis-
tentially quantified expression ‘(∃x)Px’ at line 1. Also, (2) the individuating constant
‘a’ does not occur in the proposition that is discharged from the subproof, that is, (∃x)
(Px∨¬Mx).
Consider another example:
1 (∃z)(Wz∧Mz) P
2 Wa∧Ma A/∃E
3 Wa 2∧E
4 Ma 2∧E
5 (∃z)Wz 3∃I
6 (∃z)Mz 4∃I
7 (∃z)Wz∧(∃z)Mz 5,6∧I
8 (∃z)Wz∧(∃z)Mz 1,2–7∃E
The schema for (∃E) is as follows:
1 (∃x)P P
2 P(a/x) A/∃E
.
.
k Q
k+1 Q 1,2–k∃E
(1) The individuating constant ‘a’ should not occur in any premise or in an active proof (or subproof) prior to its arbitrary introduction in the assumption ‘P(a/x).’
(2) The individuating constant ‘a’ should not occur in the proposition ‘Q’ discharged from the subproof.
Consider a violation of the first restriction, namely, that the individuating constant
‘a’ does not occur in any premise or in an active proof (or subproof) prior to its arbi-
trary introduction in the assumption ‘P(a/x).’
1 Ea→(∀x)Px P
2 (∃x)Ex P
3 Ea A
4 (∀x)(Px) 1,3→E
5 (∀x)Px 2,3–4∃E—NO!
1 If 3 is even, then every number is prime. P
2 Some number is even. P
3 3 is even. A
4 Every number is prime. 1,3→E
5 Therefore, every number is prime. 2,3–4∃E—NO!
Propositions in lines 1 and 2 are both true, but line 5 is clearly false since 10 is not
a prime number. The invalid use of (∃E) results from inferring more than what is as-
serted by the existential proposition at line 2. ‘(∃x)Ex’ says only that some number is
even, while the faulty use of (∃E) assumes of a specific number already named in
premise 1, namely 3, that it is even. However, the following is not a
violation of the first restriction:
1 (∃x)(Pxx∧Qxx) P
2 Paa∧Qaa A/∃E
3 Qaa 2∧E
4 (∃x)Qxx 3∃I
5 (∃x)Qxx 1,2–4∃E
Notice that the individuating constant ‘a’ at line 2 does not occur in any premise
or assumption.
Consider another example where the first restriction is violated:
1 (∀x)(∃y)Txy P
2 (∃y)Tay 1∀E
3 Taa A/∃E
4 (∃x)Txx 3∃I
5 (∃x)Txx 2,3–4∃E—NO!
The first restriction states that the individuating constant ‘a’ does not occur in any
premise or in an active proof (or subproof) prior to its arbitrary introduction in the as-
sumption ‘P(a/x).’ So, while no individuating constant occurs as a part of the premise
of line 1, the individuating constant ‘a’ does occur as a part of an active proof at line 2.
Given the above restriction, the following strategic rule can be formulated for using
(∃E): when making the assumption ‘P(a/x)’ for a use of (∃E), choose an instantiating
constant ‘a’ that does not already occur anywhere in the proof.
Next, consider an example that violates the second restriction on the use of (∃E).
This restriction states that the instantiating constant ‘a’ in the assumed substitution
instance ‘P(a/x)’ does not occur in proposition ‘Q’ discharged from the subproof.
1 (∃z)(Wzz∧Mz) P
2 Wbb∧Mb A/∃E
3 Wbb 2∧E
4 (∃x)(Wbx) 3∃I
5 (∃x)(Wbx) 1,2–4∃E—NO!
The above argument is invalid. The fallacious step of reasoning occurs at line 5 be-
cause the instantiating constant ‘b’ in the assumption at line 2 is in the proposition de-
rived by ‘∃E’ outside the subproof at line 5. To see that this is invalid, consider that line
1 says only that there is at least one object z that is both ‘Wzz’ and ‘Mz.’ While we
could validly infer ‘(∃x)Wxx,’ we could never validly infer a proposition containing a
specific individual constant, such as ‘(∃x)Wbx.’
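A finite countermodel makes the invalidity concrete. Suppose a two-object domain in which only the first object bears ‘W’ to itself and has ‘M’; then the premise ‘(∃z)(Wzz∧Mz)’ is true while ‘(∃x)Wbx’ is false (the extensions below are invented for illustration):

```python
domain = {"a", "b"}
W = {("a", "a")}  # extension of W: only a bears W to a
M = {"a"}         # extension of M: only a is M

# Premise '(∃z)(Wzz∧Mz)' is true: a is a witness.
premise = any((z, z) in W and z in M for z in domain)

# Conclusion '(∃x)Wbx' is false: b bears W to nothing.
conclusion = any(("b", x) in W for x in domain)

assert premise          # the premise is true in this interpretation
assert not conclusion   # the conclusion is false, so the inference is invalid
```

The premise guarantees a witness but says nothing about which object it is, so a conclusion that names ‘b’ outruns the information in the premise.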
Finally, note that the examples illustrating (∃E) all begin with an existentially quan-
tified proposition ‘(∃x)P’ and result in an existentially quantified proposition ‘(∃x)P.’
However, note that (∃E) does not mandate this, and in a number of examples the derived
proposition is not an existentially quantified proposition. Here are two, the first being
somewhat trivial.
1 (∀x)Px P
2 (∃x)Zx P
3 Za A/∃E
4 (∀x)Px 1R
5 (∀x)Px 2,3–4∃E
Notice that ‘(∀x)Px’ does not violate either of the two restrictions placed on the
use of (∃E).
Finally, consider the following derivation:
1 (∃y)Py→(∀y)Ry P
2 (∃x)Px P/(∀y)Ry
3 Pa A/∃E
4 (∃y)Py 3∃I
5 (∀y)Ry 1,4→E
6 (∀y)Ry 2,3–5∃E
In the above example, notice that ‘(∀y)Ry’ cannot be derived by using lines 1 and
2 and ‘→E’ since ‘(∃y)Py’ and ‘(∃x)Px’ are not the same proposition.
Exercise Set #2
5. * (∀x)(∀y)(Zxa∧Mxy) ⊢ (∀z)(Zza∧Mzz)
6. ⊢ (∀x)(Px)→(∀x)(Px)
7. * ⊢ (∀x)(Px)→(∀x)(Px∨Qx)
8. Zaaa→Qb, (∃x)Px, (∃y)Py→(∃x)Lx, (∃z)Lz→(∀x)Zxxx ⊢ Qb
1. * (∃x)(Gx) ⊢ (∃z)(Gz)
1 (∃x)Gx P
2 Ga A/∃E
3 (∃z)Gz 2∃I
4 (∃z)Gz 1,2–3∃E
This proof is a straightforward example of (∃E), although it illustrates that the
choice of variable is not relevant. Rather than ‘(∃z)Gz,’ we could have inferred
‘(∃y)Gy’ or ‘(∃x)Gx.’
3. * (∃x)[Rx∧(∃z)(Mz)] ⊢ (∃y)My
1 (∃x)[Rx∧(∃z)(Mz)] P/(∃y)My
2 Ra∧(∃z)Mz A/∃E
3 (∃z)Mz 2∧E
4 Mb A/∃E
5 (∃y)My 4∃I
6 (∃y)My 3,4–5∃E
7 (∃y)My 1,2–6∃E
The key to solving the above example is to make repeated use of (∃E).
5. * (∀x)(∀y)(Zxa∧Mxy) ⊢ (∀z)(Zza∧Mzz)
1 (∀x)(∀y)(Zxa∧Mxy) P/(∀z)(Zza∧Mzz)
2 (∀y)(Zba∧Mby) 1∀E
3 Zba∧Mbb 2∀E
4 (∀z)(Zza∧Mzz) 3∀I
7. * ⊢ (∀x)(Px)→(∀x)(Px∨Qx)
1 (∀x)Px A/(∀x)(Px∨Qx)
2 ¬(Pa∨Qa) A/P∧¬P
3 ¬Pa∧¬Qa 2DEM
4 Pa 1∀E
5 ¬Pa 3∧E
6 Pa∨Qa 2–5¬E
7 (∀x)(Px∨Qx) 6∀I
8 (∀x)Px→(∀x)(Px∨Qx) 1–7→I
The first step in solving the above theorem is recognizing that the theorem is a
conditional. Recognizing that it is a conditional, we first assume the anteced-
ent of the conditional. The second step is realizing that our goal formula is a
universally quantified disjunction. In order to obtain this, our next assumption
will not simply be a substitution instance of ‘(∀x)(Px∨Qx)’ but will be guided
by our strategic rule for a substitution instance that is a disjunction. Thus, we
assume the negation of the disjunction, derive a contradiction, and use ‘¬E’ to
exit with our desired conclusion.
In propositional logic, we devised a system of natural deduction (PD) and then added
a number of additional rules to form (PD+), whose main purpose was to simplify or
expedite proofs. Similarly, in predicate logic, (∀I), (∀E), (∃I), and (∃E) form a sys-
tem of natural deduction (RD). In this section, we develop this system by adding an
equivalence rule called quantifier negation (QN). The addition of QN to RD forms
RD+. One of the central benefits of adding QN to the existing set of derivation rules
is that it will allow us to readily deal with negated universal ‘¬(∀x)P’ and negated
existential ‘¬(∃x)P’ propositions.
1 ¬(∀x)Px P
2 (∃x)¬Px 1QN
3 ¬(∀x)Px 2QN
Notice that the application of QN on line 1 allows for replacing the negated
universal ‘¬(∀x)Px’ with an existential proposition that quantifies over a negated
propositional form ‘(∃x)¬Px.’ Also notice that QN allows for replacing the existen-
tial proposition that quantifies over a negated propositional form ‘(∃x)¬Px’ with a
negated universally quantified proposition ‘¬(∀x)Px.’
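The two QN equivalences can be verified exhaustively for a small domain by checking every possible extension of ‘P’ (a sketch; the three-element domain is an arbitrary assumption, and a finite check illustrates rather than proves the general equivalences):

```python
from itertools import chain, combinations

domain = ["a", "b", "c"]

def powerset(xs):
    """All possible extensions of a one-place predicate over the domain."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# For every interpretation of P:
#   '¬(∀x)Px' agrees with '(∃x)¬Px', and
#   '¬(∃x)Px' agrees with '(∀x)¬Px'.
for ext in powerset(domain):
    P = set(ext)
    assert (not all(x in P for x in domain)) == any(x not in P for x in domain)
    assert (not any(x in P for x in domain)) == all(x not in P for x in domain)
```

These are just De Morgan-style dualities: a universal quantifier behaves like a big conjunction over the domain, and an existential quantifier like a big disjunction.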
Consider another simple use of QN:
1 ¬(∃z)(Wzz∧Mz) P
2 (∀z)¬(Wzz∧Mz) 1QN
3 ¬(∃z)(Wzz∧Mz) 2QN
1 ¬(∀z)¬(Wzz→¬Mz) P
2 (∃x)(Px∧Rx) P
3 (∃z)¬¬(Wzz→¬Mz) 1QN
4 (∃z)(Wzz→¬Mz) 3DN
5 ¬¬(∃x)(Px∧Rx) 2DN
6 ¬(∀z)¬(Wzz→¬Mz) 3QN
1 ¬(∀z)¬(∃y)[Wzy→¬(∀x)¬My] P
2 ¬(∀z)¬(∃y)[Wzy→(∃x)¬¬My] 1QN
3 (∃z)¬¬(∃y)[Wzy→(∃x)¬¬My] 2QN
4 (∃z)(∃y)[Wzy→(∃x)¬¬My] 3DN
5 (∃z)(∃y)[Wzy→(∃x)My] 4DN
Notice that the first use of QN in line 2 is on the quantified consequent of the con-
ditional. That is, from the negated universally quantified subformula ‘¬(∀x)¬My,’ we
can infer the existentially quantified negated subformula ‘(∃x)¬¬My.’
QN is a derived rule and so can be proved using the underived quantifier rules
(‘∀E,’ ‘∀I,’ ‘∃I,’ ‘∃E’). In order to prove this, the following needs to be shown without
using QN:
¬(∀x)P⊣ ⊢ (∃x)¬P
¬(∃x)P⊣ ⊢ (∀x)¬P
The following proof shows ‘¬(∀x)Px ⊢ (∃x)¬Px.’ The remaining proofs are
left as exercises.
1 ¬(∀x)Px P/(∃x)¬Px
2 ¬(∃x)¬Px A/contra
3 ¬Pa A/contra
4 (∃x)¬Px 3∃I
5 ¬(∃x)¬Px 2R
6 Pa 3–5¬E
7 (∀x)Px 6∀I
8 ¬(∀x)Px 1R
9 (∃x)¬Px 2–8¬E
In this section, we illustrate the use of quantifier rules and various strategies for prov-
ing propositions in predicate logic.
Consider the following proof:
1 (∀x)(Ax→Bx) P
2 (∃x)¬Bx P
3 ¬Ba A/∃E
4 Aa→Ba 1∀E
5 ¬Aa 3,4MT
6 (∃x)¬Ax 5∃I
7 (∃x)¬Ax 2,3–6∃E
There are two things of note about this proof. First, notice that since the goal of the
proof is ‘(∃x)¬Ax,’ a subgoal of the proof will be ‘¬Aa’ or some proposition such that
we can use (∃I) to obtain ‘(∃x)¬Ax.’ Second, notice that since ‘(∃x)¬Bx’ will play a
role in the proof, one should begin by assuming ‘¬Ba’ in preparation for a use of
(∃E), since starting the proof by using (∀E) on line 1 will not further the proof.
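The same argument can be checked semantically: over a finite domain, no interpretation of ‘A’ and ‘B’ makes both premises true and the conclusion false. A Python sketch (our own, with arbitrary names):

```python
from itertools import product

def argument_valid(max_size=4):
    """Check that (∀x)(Ax→Bx), (∃x)¬Bx ⊨ (∃x)¬Ax over domains of
    size 1..max_size by searching for a counterexample."""
    for n in range(1, max_size + 1):
        for A in product([True, False], repeat=n):
            for B in product([True, False], repeat=n):
                prem1 = all((not A[x]) or B[x] for x in range(n))  # (∀x)(Ax→Bx)
                prem2 = any(not b for b in B)                      # (∃x)¬Bx
                concl = any(not a for a in A)                      # (∃x)¬Ax
                if prem1 and prem2 and not concl:
                    return False  # premises true, conclusion false
    return True

print(argument_valid())
```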
Consider another proof:
Before moving forward in this proof, notice again that the conclusion is an exis-
tentially quantified proposition. Thus, the subgoal of the proof will be a substitution
instance of it, here ‘Sb.’
1 (∀x)(∀y)(Pxy∧Qxy) P
2 (∀z)Pzz→Sb P/(∃x)Sx
3 (∀y)(Pay∧Qay) 1∀E
4 Paa∧Qaa 3∀E
5 Paa 4∧E
6 (∀z)Pzz 5∀I
7 Sb 2,6→E
8 (∃x)Sx 7∃I
⊢ (∀x)[¬(Qx→Rx)→¬Px]→(∀x)[¬(Px→Rx)→¬(Px→Qx)]
This is a zero-premise deduction, and while the size of the formula is intimidating,
you should recognize that it is simply a complex instance of ‘(∀x)P→(∀x)Q.’ Since
the proposition is a conditional, first assume the antecedent ‘(∀x)P.’ Our goal is the
consequent ‘(∀x)Q.’
1 (∀x)[¬(Qx→Rx)→¬Px] A/(∀x)[¬(Px→Rx)→¬(Px→Qx)]
Next, we examine the main connective of ‘(∀x)Q,’ which is the quantifier (∀x).
This means that we will ultimately have to use (∀I) in order to obtain ‘(∀x)Q.’ So, our
assumption will be a substitution instance of ‘(∀x)Q.’ Here we have two options.
First, we could assume the negation of the substitution instance, ‘¬Q(a/x),’ and derive
a contradiction. Second, since the substitution instance ‘Q(a/x)’ is itself a conditional,
we could assume its antecedent, derive its consequent, and then use ‘→I.’ We pursue
this second option.
1 (∀x)[¬(Qx→Rx)→¬Px] A/(∀x)[¬(Px→Rx)→¬(Px→Qx)]
2 ¬(Pa→Ra) A/¬(Pa→Qa)
Since our current goal is the negation of a conditional, we employ the strategic rule
for negated propositions: assume the unnegated conditional and work toward a
contradiction so that ‘¬I’ can be applied.
1 (∀x)[¬(Qx→Rx)→¬Px] A/(∀x)[¬(Px→Rx)→¬(Px→Qx)]
2 ¬(Pa→Ra) A/¬(Pa→Qa)
3 Pa→Qa A/P∧¬P
At this point, no more assumptions are necessary, and what remains is to work
toward the goals set on the right-hand side of the proof.
1 (∀x)[¬(Qx→Rx)→¬Px] A/(∀x)[¬(Px→Rx)→¬(Px→Qx)]
2 ¬(Pa→Ra) A/¬(Pa→Qa)
3 Pa→Qa A/P∧¬P
4 ¬(Qa→Ra)→¬Pa 1∀E
5 ¬(¬Pa∨Ra) 2IMP
6 Pa∧¬Ra 5DEM+DN
7 Pa 6∧E
8 Qa 3,7→E
9 ¬¬Pa 7DN
10 Qa→Ra 4,9MT+DN
11 Ra 8,10→E
12 ¬Ra 6∧E
13 ¬(Pa→Qa) 3–12¬I
14 ¬(Pa→Ra)→¬(Pa→Qa) 2–13→I
15 (∀x)[¬(Px→Rx)→¬(Px→Qx)] 14∀I
16 (∀x)[¬(Qx→Rx)→¬Px]→(∀x) 1–15→I
[¬(Px→Rx)→¬(Px→Qx)]
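As a sanity check on the completed proof, the theorem can also be verified semantically by brute force over small domains. This Python sketch is our own illustration:

```python
from itertools import product

def imp(p, q):
    """Material conditional: p → q."""
    return (not p) or q

def theorem_valid(max_size=3):
    """Brute-force check of
    (∀x)[¬(Qx→Rx)→¬Px] → (∀x)[¬(Px→Rx)→¬(Px→Qx)]
    over domains of size 1..max_size."""
    for n in range(1, max_size + 1):
        for P, Q, R in product(product([True, False], repeat=n), repeat=3):
            ante = all(imp(not imp(Q[x], R[x]), not P[x]) for x in range(n))
            cons = all(imp(not imp(P[x], R[x]), not imp(P[x], Q[x]))
                       for x in range(n))
            if ante and not cons:
                return False
    return True

print(theorem_valid())
```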
End-of-Chapter Exercises
A.
1. * Fc∧Fb ⊢ (∃x)Fx∧Fb
1 Fc∧Fb P/(∃x)Fx∧Fb
2 Fb 1∧E
3 (∃x)Fx 2∃I
4 (∃x)Fx∧Fb 2,3∧I
3. * (∃x)Gx∧(∀y)My ⊢ (∃z)Gz
1 (∃x)Gx∧(∀y)My P/(∃z)Gz
2 (∃x)Gx 1∧E
3 Ga A/∃E
4 (∃z)Gz 3∃I
5 (∃z)Gz 2,3–4∃E
5. * (∃x)(Px∧Wx), (∀x)(Px→Mx) ⊢ (∃x)(Mx)
1 (∃x)(Px∧Wx) P
2 (∀x)(Px→Mx) P/(∃x)(Mx)
3 Pa∧Wa A/∃E
4 Pa 3∧E
5 Pa→Ma 2∀E
6 Ma 4,5→E
7 (∃x)Mx 6∃I
8 (∃x)Mx 1,3–7∃E
7. * (∀x)(Rx∧Ma) ⊢ ¬(∃y)¬(Ry)
1 (∀x)(Rx∧Ma) P/¬(∃y)¬(Ry)
2 Rb∧Ma 1∀E
3 Rb 2∧E
4 (∀y)Ry 3∀I
5 ¬¬(∀y)Ry 4DN
6 ¬(∃y)¬Ry 5QN
Notice that we choose ‘P(b/x)’ rather than ‘P(a/x)’ as a substitution in-
stance for ‘(∀x)P’ at line 1. If you choose ‘P(a/x),’ then you cannot make
use of (∀I) at line 4, since one of the restrictions on (∀I) is that the instan-
tiating constant must not occur in a premise or an open assumption, and
‘a’ occurs in the premise ‘(∀x)(Rx∧Ma).’
5. * ⊢ (∃x)Ax→(∃x)(Ax∧Ax)
1 (∃x)Ax A/(∃x)(Ax∧Ax)
2 Aa A/∃E
3 Aa 2R
4 Aa∧Aa 2,3∧I
5 (∃x)(Ax∧Ax) 4∃I
6 (∃x)(Ax∧Ax) 1,2–5∃E
7 (∃x)Ax→(∃x)(Ax∧Ax) 1–6→I
7. * ⊢ ¬(∃x)¬[(Ax→Bx)∨(Bx→Dx)]
1 ¬[(Aa→Ba)∨(Ba→Da)] A/contra
2 ¬(Aa→Ba)∧¬(Ba→Da) 1DeM
3 ¬(Aa→Ba) 2∧E
4 ¬(Ba→Da) 2∧E
5 ¬(¬Aa∨Ba) 3IMP
6 ¬(¬Ba∨Da) 4IMP
7 ¬¬Aa∧¬Ba 5DeM
8 ¬¬Ba∧¬Da 6DeM
9 ¬Ba 7∧E
10 ¬¬Ba 8∧E
11 Ba 10DN
12 [(Aa→Ba)∨(Ba→Da)] 1–11¬E
13 (∀x)[(Ax→Bx)∨(Bx→Dx)] 12∀I
14 ¬¬(∀x)[(Ax→Bx)∨(Bx→Dx)] 13DN
15 ¬(∃x)¬[(Ax→Bx)∨(Bx→Dx)] 14QN
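This theorem holds because each instance ‘(Aa→Ba)∨(Ba→Da)’ is a propositional tautology: if ‘Ba’ is true, the first disjunct holds; if ‘Ba’ is false, the second does. A quick truth-table check in Python (our own sketch, not the book’s):

```python
from itertools import product

def exercise_7_valid():
    """Check that '(A→B)∨(B→D)' is true under every truth-value assignment,
    so no object can satisfy '¬[(Ax→Bx)∨(Bx→Dx)]'."""
    for A, B, D in product([True, False], repeat=3):
        instance = ((not A) or B) or ((not B) or D)
        if not instance:
            return False
    return True

print(exercise_7_valid())
```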
9. * ⊢ (∀x)(Ax→Bx)→(∃x)[¬(Bx∧Dx)→¬(Dx∧Ax)]
1 (∀x)(Ax→Bx) A/(∃x)[¬(Bx∧Dx)→
¬(Dx∧Ax)]
2 ¬(Ba∧Da) A/¬(Da∧Aa)
3 ¬Ba∨¬Da 2DEM
4 Aa→Ba 1∀E
5 Ba→¬Da 3IMP
6 Aa→¬Da 4,5HS
7 ¬Aa∨¬Da 6IMP
8 Da∧Aa A/contra
9 Da 8∧E
10 Aa 8∧E
11 ¬Aa 7,9DS
12 ¬(Da∧Aa) 8–11¬I
13 ¬(Ba∧Da)→¬(Da∧Aa) 2–12→I
14 (∃x)[¬(Bx∧Dx)→¬(Dx∧Ax)] 13∃I
15 (∀x)(Ax→Bx)→(∃x)[¬(Bx∧Dx)→¬(Dx∧Ax)] 1–14→I
11. * ⊢ (∀x)[Px→(Qx→Rx)]→(∀x)[(Px→Qx)→(Px→Rx)]
1 (∀x)[Px→(Qx→Rx)] A/(∀x)[(Px→Qx)→
(Px→Rx)]
2 Pa→Qa A/Pa→Ra
3 Pa A/Ra
4 Qa 2,3→E
5 Pa→(Qa→Ra) 1∀E
6 Qa→Ra 3,5→E
7 Ra 4,6→E
8 Pa→Ra 3–7→I
9 (Pa→Qa)→(Pa→Ra) 2–8→I
10 (∀x)[(Px→Qx)→(Px→Rx)] 9∀I
11 (∀x)[Px→(Qx→Rx)]→(∀x)[(Px→Qx)→(Px→Rx)] 1–10→I
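Theorem #11 can likewise be spot-checked semantically over small finite domains. The following Python sketch is our own illustration:

```python
from itertools import product

def imp(p, q):
    """Material conditional: p → q."""
    return (not p) or q

def exercise_11_valid(max_size=3):
    """Brute-force check of
    (∀x)[Px→(Qx→Rx)] → (∀x)[(Px→Qx)→(Px→Rx)]
    over domains of size 1..max_size."""
    for n in range(1, max_size + 1):
        for P, Q, R in product(product([True, False], repeat=n), repeat=3):
            ante = all(imp(P[x], imp(Q[x], R[x])) for x in range(n))
            cons = all(imp(imp(P[x], Q[x]), imp(P[x], R[x])) for x in range(n))
            if ante and not cons:
                return False
    return True

print(exercise_11_valid())
```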
SQ#1(∀E) When using (∀E), the choice of substitution instances ‘P(a/x)’ should
be guided by the individual constants already occurring in the proof and
any individual constants occurring in the conclusion.
SQ#2(∃I) When using (∃I), aim at deriving a substitution instance ‘P(a/x)’ such
that a use of (∃I) will result in the desired conclusion.
SQ#3(∀I) When the goal proposition is a universally quantified proposition ‘(∀x)
P,’ derive a substitution instance ‘P(a/x)’ such that a use of (∀I) will
result in the desired conclusion.
SQ#4(∃E) Generally, when deciding on a substitution instance ‘P(a/x)’ to assume
for a use of (∃E), choose one that is foreign to the proof.
Propositional Logic
Propositional Operators
Negated Conditional Decomposition (¬→D): from ‘¬(P→Q),’ write ‘P’ and ‘¬Q’ on the same branch.
Conditional Decomposition (→D): from ‘P→Q,’ branch into ‘¬P’ and ‘Q.’
Double Negation Decomposition (¬¬D): from ‘¬¬P,’ write ‘P.’
Conjunction P∧R
Disjunction P∨R
Conditional P→R
Biconditional P↔R
Negated conjunction ¬(P∧R)
Negated disjunction ¬(P∨R)
Negated conditional ¬(P→R)
Negated biconditional ¬(P↔R)
Double negation ¬¬P
Q A
.
.
.
R
R ∨E
10 Biconditional Introduction (↔I)
From a derivation of ‘Q’ within a subproof involving an assumption ‘P,’ and from a
derivation of ‘P’ within a separate subproof involving an assumption ‘Q,’ we can
derive ‘P↔Q’ out of the subproofs.
P A
.
.
.
Q
Q A
.
.
.
P
P↔Q ↔I
11 Biconditional Elimination (↔E)
From ‘P↔Q’ and ‘P,’ we can derive ‘Q.’ And from ‘P↔Q’ and ‘Q,’ we can derive ‘P.’
P↔Q
P
Q ↔E
P↔Q
Q
P ↔E
SA#3(∧) If the conclusion is a conjunction, you will need two steps. First, as-
sume the negation of one of the conjuncts, derive a contradiction, and
then use ‘¬I’ or ‘¬E.’ Second, in a separate subproof, assume the ne-
gation of the other conjunct, derive a contradiction, and then use ‘¬I’
or ‘¬E.’ From this point, a use of ‘∧I’ will solve the proof.
SA#4(∨) If the conclusion is a disjunction, assume the negation of the whole
disjunction, derive a contradiction, and then use ‘¬I’ or ‘¬E.’
Predicate Logic
Predicate Operators
Existential (∃x)P
Universal (∀x)P
Negated existential ¬(∃x)P
Negated universal ¬(∀x)P
SQ#1(∀E) When using (∀E), the choice of substitution instances ‘P(a/x)’ should
be guided by the object constants already occurring in the proof and any
object constants occurring in the conclusion.
SQ#2(∃I) When using (∃I), aim at deriving a substitution instance ‘P(a/x)’ such
that a use of (∃I) will result in the desired conclusion.
SQ#3(∀I) When the goal proposition is a universally quantified proposition ‘(∀x)P,’
derive a substitution instance ‘P(a/x)’ such that a use of (∀I) will result in
the desired conclusion.
SQ#4(∃E) Generally, when deciding on a substitution instance ‘P(a/x)’ to assume
for a use of (∃E), choose one that is foreign to the proof.
Logic plays an important role in philosophical theorizing and is itself a topic of philosophical
reflection. Philosophical logic is sometimes characterized as philosophy informed by and sen-
sitive to the work done in logic. This makes philosophical logic an extremely broad discipline
concerned with a variety of different linguistic, metaphysical, epistemological, or even ethical
problems. In contrast, philosophy of logic is philosophy about logic; that is, it is an area of
philosophy that takes logic as its primary object of inquiry.
Goldstein, Laurence. 2005. Logic: Key Concepts in Philosophy. New York: Continuum.
Grayling, A. C. 1997. An Introduction to Philosophical Logic. Malden, MA: Blackwell Press.
Haack, Susan. 1978. Philosophy of Logics. Cambridge: Cambridge University Press.
Jacquette, Dale, ed. 2002. Philosophy of Logic: An Anthology. Malden, MA: Blackwell
Publishers.
———, ed. 2005. A Companion to Philosophical Logic. Malden, MA: Wiley-Blackwell.
Sider, Theodore. 2010. Logic for Philosophy. Oxford: Oxford University Press.
Wolfram, Sybil. 1989. Philosophical Logic: An Introduction. New York: Routledge.
Modal Logic
Modal languages and syntax can be viewed as an extension of the language and syntax of
propositional and/or predicate logic. This type of logic aims to provide a syntax, semantics, and
deductive system for languages that involve modal expressions like can, may, possible, must,
and necessarily. So, in continuing your study of logic, looking at an introductory modal logic
textbook can be a great place to start.
Beall, J. C., and Bas C. van Fraassen. 2003. Possibilities and Paradox. Oxford: Oxford Uni-
versity Press.
Bell, John L., David DeVidi, and Graham Solomon. 2007. Logical Options: An Introduction to
Classical and Alternative Logics. Orchard Park, NY: Broadview Press.
Haack, Susan. 1978. “Modal Logic.” In Philosophy of Logics, 170–203. Cambridge: Cam-
bridge University Press.
Hughes, G. E., and M. J. Cresswell. 1996. A New Introduction to Modal Logic. London and
New York: Routledge.
Nolt, John. 1997. Logics. Belmont, CA: Wadsworth Publishing Company.
Priest, Graham. 2008. An Introduction to Non-Classical Logic: From If to Is. 2nd ed. Cam-
bridge: Cambridge University Press.
The semantics articulated in this book is known as classical semantics. One crucial feature
of classical semantics is that it operates under the assumption that every proposition is either
true or false (not both and not neither), also known as the principle of bivalence. Nonclassical
logics are systems that reject the principle of bivalence and are built around a semantics
involving more truth values than true and false (e.g., indeterminate).
In addition to revising the semantics of propositional or predicate logic, logicians also take is-
sue with certain derivation or inference rules (e.g., negation introduction or elimination) and
suggest a more restrictive or alternative system of derivation rules. Such logics are sometimes
known as deviant logics. Lastly, you may have noticed that the following derivation is valid in
RD: Pa ⊢ (∃z)Pz. This is because the semantics of RL is such that every object constant (name)
denotes a member of the domain of discourse D. What about names that do not have referents
or refer to objects that do not exist? Free logic is a logic where individual constants may either
fail to denote an existing object or denote objects outside D. This logic is useful for analyzing
discourse where names do not refer to existing entities (e.g., works of fiction).
Nondeductive Logics
In this text, the focus has been on deductively valid arguments. But symbolic logic—a branch
of logic that represents how we ought to reason by using a formal language consisting of
abstract symbols—is not confined to valid arguments. Logics have been developed for
nondeductive arguments, that is, arguments where the truth of the premises does not guarantee
the truth of the conclusion. These logics, known as nondeductive (or inductive) logics, range
from those that deal with probability, to those that deal with rational decision-making under
ignorance (decision theory), to those that deal with rational interaction between groups of
individuals (game theory).
Hacking, Ian. 2001. An Introduction to Probability and Inductive Logic. Cambridge: Cam-
bridge University Press.
Maher, Patrick. 1993. Betting on Theories. Cambridge: Cambridge University Press.
Resnik, Michael. 1987. Choices: An Introduction to Decision Theory. Minneapolis: University of
Minnesota Press.
Skyrms, Brian. 1966. Choice and Chance: An Introduction to Inductive Logic. Belmont, CA:
Dickenson Publishing Company.
Higher-Order Logic
In this text, we have focused primarily on first-order logic. In first-order logics, quantifiers range
over individual objects. A higher-order logic is a system of logic in which quantifiers range
over other types of entities as well (e.g., over propositions, by quantifying propositional
variables, or over properties, by quantifying n-place predicate variables).
Bell, John L., David DeVidi, and Graham Solomon. 2007. Logical Options: An Introduction to
Classical and Alternative Logics. Orchard Park, NY: Broadview Press.
Nolt, John. 1997. Logics. Belmont, CA: Wadsworth Publishing Company.
Van Benthem, Johan, and Kees Doets. 1983. “Higher-Order Logic.” In Handbook of Philo-
sophical Logic, edited by D. Gabbay and F. Guenthner, 275–329. Dordrecht: D. Reidel.
History of Logic
Logic has a long history, from Aristotle’s (384–322 BC) creation of the first logical system,
through Leibniz’s (AD 1646–1716) attempts at replacing all scientific thinking with a universal
logical calculus, to modern-day research in logic by mathematicians and philosophers.
Adamson, Robert. 1911. Short History of Logic. Edinburgh: W. Blackwood and Sons.
Gabbay, Dov M., and John Woods, eds. 2004. Handbook of the History of Logic. Amsterdam:
Elsevier North Holland.
Gensler, Harry J. 2006. Historical Dictionary of Logic. Lanham, MD: Scarecrow Press.
Haaparanta, Leila, ed. 2009. The Development of Modern Logic. Oxford: Oxford University
Press.
King, Peter, and Stewart Shapiro. 1995. “The History of Logic.” In The Oxford Companion
to Philosophy, edited by Ted Honderich, 495–500. Oxford: Oxford University Press.
Kneale, William, and Martha Kneale. 1962. The Development of Logic. Oxford: Clarendon
Press.
Prior, Arthur N. 1955. Formal Logic. Oxford: Clarendon Press.
There are a number of alternative symbolizations of propositional and predicate logic. Two ex-
amples are Polish (or prefix) notation and iconic (or diagrammatic) notation. In Polish notation
(developed by Jan Łukasiewicz) truth-functional operators are placed before the propositions
to which they apply. Instead of symbolizing ‘p and q’ as ‘P∧Q,’ in Polish notation ‘p and q’ is
symbolized as ‘Kpq,’ where ‘K’ is the truth-functional operator for conjunction. Although not
widely used in logic, prefixing and postfixing logical operators are often used in computer pro-
gramming languages, specifically one of the oldest, LISP. Early efforts to represent expressions
and conceptual relationships between propositions are found in Euler and Venn diagrams, but
these notations lacked an accompanying proof system. One form of visual logic (developed by
Charles S. Peirce) that overcame this limitation is known as existential graphs. Instead of symbol-
izing ‘not p and not q’ using specific symbols for truth-functional operators, Peirce proposed
various conventions, propositional letters, and the use of circles that circumscribe letters to
represent the same proposition. For example, ‘not both not-p and not-q,’ represented as
‘¬(¬P∧¬Q)’ in PL, is represented diagrammatically in Peirce’s existential graphs.
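Returning to the Polish notation mentioned above: a short evaluator makes the parenthesis-free reading concrete. This Python sketch is our own illustration, using Łukasiewicz’s operator letters ‘N’ (negation), ‘K’ (conjunction), ‘A’ (disjunction), and ‘C’ (conditional):

```python
def eval_polish(formula, valuation):
    """Evaluate a Polish-notation formula such as 'Kpq' ('p and q').
    N = negation, K = conjunction, A = disjunction, C = conditional;
    lowercase letters are propositional letters looked up in `valuation`."""
    def parse(i):
        ch = formula[i]
        if ch == 'N':
            value, j = parse(i + 1)
            return (not value), j
        if ch in 'KAC':
            left, j = parse(i + 1)
            right, k = parse(j)
            if ch == 'K':
                return (left and right), k
            if ch == 'A':
                return (left or right), k
            return ((not left) or right), k   # C: material conditional
        return valuation[ch], i + 1           # a propositional letter
    value, end = parse(0)
    assert end == len(formula), "leftover symbols in formula"
    return value

# 'Kpq' is 'p and q'; with p true and q false it comes out false:
print(eval_polish('Kpq', {'p': True, 'q': False}))  # False
```

Each operator consumes a fixed number of arguments immediately to its right, which is why no grouping symbols are ever needed.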
Roberts, Don D. 1973. The Existential Graphs of Charles S. Peirce. The Hague: Mouton.
Shin, Sun-Joo. 2002. The Iconic Logic of Peirce’s Graphs. Cambridge, MA: MIT Press.
Sowa, John F. 1993. “Relating Diagrams to Logic.” In Lecture Notes in Artificial Intelligence,
edited by Guy W. Mineau, Bernard Moulin, and John F. Sowa, 1–35. Berlin: Springer-Verlag.
Woleński, Jan. 2003. “The Achievements of the Polish School of Logic.” In The Cambridge
History of Philosophy, 1870–1945, edited by Thomas Baldwin, 401–16. Cambridge: Cam-
bridge University Press.
abductive argument. See argument, types of
adicity, 249–50
argument: definition of, 7–8, 23; how to identify, 8–10; organization of, 9; types of, 10–12, 14–17
argument indicators, 9
Aristotle, 60, 369
arrow, right (→). See truth-functional operators
arrow, double (↔). See truth-functional operators
assumptions, 8, 169–73; strategies involving, 199–206, 242
atomic proposition, 26, 29, 62
a-variant (a-varies). See variant interpretation
backward method for solving proofs. See under propositional derivation strategies
binary operator, 37. See also connective
biconditional, 51–52
bivalence, 60, 139, 368
bound variable. See under variable
branch: closed, 111, 158, 294, 323; completed open in propositional logic, 112–13, 158; completed open in predicate logic, 294–96, 323; definition of, 109, 158; fully vs. partially decomposed, 110–11; open, 111, 159
branching decomposition rule. See propositional decomposition rules; truth tree rules
caret (∧). See truth-functional operators
closed branch. See under branch
closed formula. See under formula
closed sentence. See formula
closed subproof. See under subproof
completed open branch. See under branch
completed open tree. See under truth tree
complex proposition, 27, 62
conclusion, 7–8, 10, 161
conditional, 48–51; and causal statements, 50; valuation explained, 84–88. See also truth-functional operators
conjunction, 30–32
connective, 25–26, 62. See also binary operator; truth-functional operators
consistency: using truth tables to test for, 82–84, 97; using truth trees to test for, 142–45, 157, 299–302, 323; in predicate logic, 296
constants, individual, 248–50
contingency: using truth tables to test for, 78–80, 97, 323; using truth trees to test for, 145–47, 157, 305–7; in predicate logic, 296, 305–7
contradiction: intuitive definition, 16, 23; using truth tables to test for, 77–78, 97, 323; using truth trees to test for, 145–47, 157, 305–7; in predicate logic, 296, 305–7
De Morgan, Augustus, 245
De Morgan’s Laws (DeM). See under propositional derivation (PD and PD+)
decision procedure, 65, 76, 96, 99, 283. See also truth tables; truth trees; undecidability