Logics is not a typographical error; it's intentional. Logics is a one-word description of the multitude of systems this book employs to teach the application of logic, from classical propositional and predicate logics to such modern alternatives as higher-order, modal, deontic, and non-classical logics. In presenting the formal and philosophical aspects of these various logics, this book grounds you in the practical considerations of logic.

Moving from the basics of formal logic to more advanced topics, Logics uses concrete problems to introduce each system and proceeds to an account of the system's semantics. The book's detailed, carefully paced coverage leads you step-by-step through the complex structure of syntax, semantics, and metatheory, ultimately allowing you to better understand logic and its applications.

In addition, Logics includes:

• Discussion of recent developments in logic, such as supervaluations, fuzzy logics, relevance logics, and nonmonotonic logics
• Careful balance between practical and formal philosophical issues
• Discussion of all formal systems motivated by practical considerations
• Semantics presented before proof theory to aid learning
• Explicit, measured presentation of metatheory that explains "the rules of the game" very clearly
• Examples and exercises that drive and reinforce the topic discussions
• ABACUS software (included with the book) that provides guidance and extra exercises on the relationship of logic to computer programming

ISBN 0-534-50640-2

To get extra value from this book for no additional cost, go to: http://www.thomson.com/wadsworth.html

thomson.com is the World Wide Web site for Wadsworth/ITP
and is your direct source to dozens of online resources. thomson.com helps you find out about supplements, experiment with demonstration software, search for a job, and send email to many of our authors. You can even preview new publications and exciting new technologies. thomson.com: It's where you'll find us in the future.

LOGICS

JOHN NOLT
University of Tennessee, Knoxville

Wadsworth Publishing Company
An International Thomson Publishing Company
Belmont, CA • Albany, NY • Bonn • Boston • Cincinnati • Detroit • Johannesburg • London • Madrid • Melbourne • Mexico City • New York • Paris • San Francisco • Singapore • Tokyo • Toronto • Washington

Philosophy Editor: Peter Adams
Assistant Editor: Clayton Glad
Editorial Assistant: Greg Bruck
Project Editor: Gary Mcdonald
Production: Greg Hubit Bookworks
Print Buyer: Barbara Britton
Advertising Project Manager: Joseph Jodar
Designer: John Edeen
Copy Editor: Margaret Moore
Cover Designer: Randall Goodall
Compositor: Thompson Type
Printer: Quebecor Printing / Fairfield

COPYRIGHT © 1997 by Wadsworth Publishing Company, a Division of International Thomson Publishing Inc. The ITP logo is a registered trademark under license. Duxbury Press and the leaf logo are trademarks used under license. This text is printed on acid-free recycled paper. Printed in the United States of America. 1 2 3 4 5 6 7 8 9 10

For more information, contact Wadsworth Publishing Company, 10 Davis Drive, Belmont, CA 94002, or electronically at http://www.thomson.com/wadsworth.html

International Thomson Publishing Europe, Berkshire House 168-173, High Holborn, London WC1V 7AA, England
International Thomson Editores, Campos Eliseos 385, Piso 7, Col. Polanco, 11560 México D.F., México
Thomas Nelson Australia, 102 Dodds Street, South Melbourne 3205, Victoria, Australia
International Thomson Publishing Asia, 221 Henderson Road, #05-10 Henderson Building, Singapore 0315
Nelson Canada, 1120 Birchmount Road, Scarborough, Ontario, Canada M1K 5G4
International Thomson Publishing Japan, Hirakawacho Kyowa Building, 3F, 2-2-1 Hirakawacho, Chiyoda-ku, Tokyo 102, Japan
International Thomson Publishing GmbH, Königswinterer Strasse 418, 53227 Bonn, Germany
International Thomson Publishing Southern Africa, Building 18, Constantia Park, 240 Old Pretoria Road, Halfway House, 1685 South Africa

All rights reserved. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or information storage and retrieval systems—without the written permission of the publisher.

Library of Congress Cataloging-in-Publication Data
Nolt, John Eric.
Logics / John Nolt.
p. cm.
Includes index.
ISBN 0-534-50640-2
1. Logics. I. Title.
BC71.N55 1997
160—dc20 96-32646

To Karen, Jenna, and Ben

CONTENTS

Preface xi

PART I  Informal Logic 1

Chapter 1  Informal Logic 3
1.1 What Is Logic? 3
1.2 Validity and Counterexamples 6
1.3 Relevance 13
1.4 Argument Indicators 17
1.5 Use and Mention 20

PART II  Classical Propositional Logic 23

Chapter 2  Classical Propositional Logic: Syntax 25
2.1 Argument Forms 25
2.2 Formalization 31
2.3 Formation Rules 36

Chapter 3  Classical Propositional Logic: Semantics 39
3.1 Truth Conditions 39
3.2 Truth Tables 51
3.3 Semantic Trees 65
3.4 Valuations and Possible Situations 75

Chapter 4  Classical Propositional Logic: Inference 79
4.1 Chains of Inference 79
4.2 Simple Formal Inference Rules 82
4.3 Hypothetical Derivations 89
4.4 Theorems and Shortcuts 102
4.5 Alternative Proof Techniques and the Limitations of Proofs 107

Chapter 5  Classical Propositional Logic: Metatheory 113
5.1 Introduction to Metalogic 113
5.2 Conditional Proof 116
5.3 Reductio ad Absurdum 121
5.4 Mixed Strategies 122
5.5 Mathematical Induction 124
5.6 Algorithms 129
5.7 Decidability 133
5.8 Soundness of the Tree Test 138
5.9 Completeness of the Tree Test 142
5.10 Soundness and Completeness of the Natural Deduction Rules 145

PART III  Classical Predicate Logic 159

Chapter 6  Classical Predicate Logic: Syntax 161
6.1 Quantifiers, Predicates, and Names 161
6.2 Syntax for Predicate Logic 171
6.3 Identity 174
6.4 Functions 180

Chapter 7  Classical Predicate Logic: Semantics 185
7.1 Sets and n-tuples 185
7.2 Semantics for Predicate Logic 188
7.3 Using the Semantics 202
7.4 Trees for Predicate Logic 208

Chapter 8  Classical Predicate Logic: Inference 224
8.1 Existential Introduction 224
8.2 Existential Elimination 227
8.3 Universal Elimination 233
8.4 Universal Introduction 235
8.5 Identity 241
8.6 Functions 243

Chapter 9  Classical Predicate Logic: Soundness, Completeness, and Inexpressibility 245
9.1 Soundness of the Tree Test 245
9.2 Completeness of the Tree Test 255
9.3 Soundness and Completeness of the Rules of Inference 259
9.4 Inexpressibility 261

Chapter 10  Classical Predicate Logic: Undecidability 268
10.1 Abaci 268
10.2 Logical Programming Notation 272
10.3 The ABACUS Program 275
10.4 Church's Thesis 283
10.5 The Halting Problem 288
10.6 The Undecidability of Predicate Logic 296
10.7 How Far Does the Undecidability Extend? 302

PART IV  Extensions of Classical Logic 305

Chapter 11  Leibnizian Modal Logic 307
11.1 Modal Operators 307
11.2 Leibnizian Semantics 310
11.3 A Natural Model? 325
11.4 Inference in Leibnizian Logic 328

Chapter 12  Kripkean Modal Logic 334
12.1 Kripkean Semantics 334
12.2 Inference in Kripkean Logics 344
12.3 Strict Conditionals 346
12.4 Lewis Conditionals 351

Chapter 13  Deontic and Tense Logics 357
13.1 A Modal Deontic Logic 357
13.2 A Modal Tense Logic 365

Chapter 14  Higher-Order Logics 382
14.1 Higher-Order Logics: Syntax 382
14.2 Second-Order Logic: Semantics 387

PART V  Nonclassical Logics 395

Chapter 15  Mildly Nonclassical Logics 397
15.1 Free Logics 397
15.2 Multivalued Logics 406
15.3 Supervaluations 414

Chapter 16  Radically Nonclassical Logics 420
16.1 Infinite-Valued and Fuzzy Logics 420
16.2 Intuitionistic Logics 427
16.3 Relevance Logics 439
16.4 A Nonmonotonic Logic: PROLOG 447
16.5 Conclusion: Logical Pluralism 461

Index 463

PREFACE

This book has many uses. Chapters 1, 2, 3, 4, 6, 7, and 8 provide the basics for a one-term introduction to formal logic. But later chapters contain an ample stock of advanced material as well, allowing for a variety of two-course sequences. It may also do duty for a second course alone, in which the early chapters provide review and the later chapters a selection of topics.

Regardless of how it is used, this book is designed to meet several specific needs. There is, first of all, the need to convey to students some of the diversity of recent developments in logic. Logic, as the title is intended to suggest, is no longer just logic but logics—a study of a multitude of systems developed for an impressive variety of applications. Though classical predicate logic is still the centerpiece, it is by no means all, or even most, of the subject. From the beginning, this book makes the presuppositions of classical logic explicit and points to alternatives.

Second, this is a text that seeks to balance the formal and philosophical with the practical. A wide range of formal topics is covered, and there is frequent reference to their philosophical roots.
But in no case do I treat any system as merely a formal object. Logic is, first and foremost, the study of reasoning, and its lifeblood is the elucidation of particular arguments. Thus, even where this book examines exotic formal systems, practical understanding of inference is always the primary concern.

Third, to facilitate understanding, each system is introduced first by way of concrete problems that motivate it and then by an account of its semantics. Proof theory, though usually historically prior, is relegated to third place, since much that is puzzling about proofs can be elucidated semantically, whereas relatively little that is puzzling about semantics can be illuminated by proofs. The ultimate step for each system is an ascent to the vantage point of metatheory, where the deepest understanding may be achieved.

In doing semantics, some metatheory is, of course, unavoidable. The main issue is how explicit to be about it. I have been very explicit. Metatheory baffles many students chiefly because the rules of the game are rarely explained. In the first five sections of Chapter 5 I have endeavored to explain them. With respect to metatheory itself, my aim has been to err on the side of too much help, rather than not enough. I hope, however, that that aim is not incompatible with elegance. Detailed explanations generally precede the more difficult metaproofs, but the metaproofs themselves are as simple and nontechnical as I can make them.

I would like to thank those students and colleagues at the University of Tennessee, Knoxville, who helped to shape this book—especially Hilde Nelson, Eddy Falls, and Scott Nixon, who read nearly the entire manuscript, tested numerous exercises, and provided many valuable suggestions, corrections, and clarifications. David Reisman, Molly Finneseth, Annette Mendola, George Davis, John Fitzpatrick, Betsy Postow, Jack Thompson, and John Zavodny also contributed important corrections.
Wadsworth's careful reviewers caught many a mistake, omission, or unclear phrase. For this I thank John Bickle, East Carolina University; Robert L. Causey, University of Texas at Austin; John Clifford, University of Missouri; Glenn J. Ross, Franklin and Marshall College; Catherine Shamey, Santa Monica College; and James Van Evra, University of Waterloo. Their thoughtful suggestions have made this a better book than I could have written alone.

CHAPTER 1

INFORMAL LOGIC

This chapter introduces logic from an intuitive, or informal, point of view. In later chapters, as we examine not specific arguments but argument forms, we will look back at the concepts introduced here from various formal viewpoints. The informal stance, however, is fundamental. It is the milieu out of which the study of logic emerges and to which, ultimately, it must return—on pain of losing its roots and becoming irrelevant to the concerns that produced it.

1.1 WHAT IS LOGIC?

Logic is the study of reasoning. Reasoning is a process of thought, but there exists no uncontroversial method for studying thought. As a result, contemporary logic, which likes to think of itself as founded on hard (though perhaps not empirical) facts, has nothing to say about thought. Instead, logicians study certain excrescences of thought: verbalized bits of reasoning—arguments.

An argument is a sequence of declarative sentences, one of which is intended as a conclusion; the remaining sentences, the premises, are intended to prove or at least provide some evidence for the conclusion. The premises and conclusion express propositions—which may be true or false—as opposed to questions, commands, or exclamations. Nondeclarative sentences may sometimes suggest premises or conclusions, but they never are premises or conclusions. Declarative sentences are not themselves propositions.
Some theorists have held that propositions are assertions made by sentences in particular contexts; others, that they are the meanings of sentences or the thoughts sentences express. But it is generally agreed that between sentences and propositions there is an important difference. The sentence 'I am a woman' uttered by me expresses a different proposition than the same sentence uttered by you. When I utter the sentence, the proposition I assert is false; if you are a woman, when you utter the sentence you assert a true proposition. Even if you are not a woman, the proposition you assert by uttering this sentence is different than the one I assert by uttering it; your proposition is about you, mine about me.

Logicians, however, tend in practice to ignore the differences between sentences and propositions, studying the former as if they were the latter. This practice presupposes that each argument we study is uttered in a fixed context (a given speaker in a given circumstance), since only relative to such a fixed context does each sentence in the argument express a unique proposition. To illustrate, consider the following argument:

All women are mortal.
I am a woman.
∴ I am mortal.

The symbol '∴' means "therefore" and is used to mark the conclusion. The speaker might be you now, or me at age 13, or Queen Victoria reflecting on her imminent death. It doesn't matter who the speaker is, but we do presuppose that there is a single speaker, not several, so that, for example, 'I' in the second premise refers to the same person as 'I' in the conclusion. We also keep fixed some other presuppositions about the context: that, for example, the speaker is consistently using the English language—not some Alice-in-Wonderland tongue in which familiar words have unfamiliar meanings—and that demonstrative words like 'this' or 'that' have clear and unambiguous reference.
Having by fiat frozen these aspects of context, we have obliterated the difference between sentences and propositions and can proceed to treat sentences as if they were the propositions they express. (In this book we shall sometimes use the word 'statement' to designate sentences whose context has thus been frozen.) In this way we shift our focus from such elusive entities as assertions, meanings, or thoughts, to sentences, which can be pinned down on paper and dissected neatly into discrete components.

Contemporary logic thus replaces thoughts, meanings, or acts with symbols—letters, words, phrases, and sentences. Whether that is an illuminating or useful strategy, you will be able to judge for yourself by the time you finish this book. But this much is undeniable: Logicians have learned a great deal about systems of symbols—and much that is astonishing, unexpected, or useful, as we shall see.

Our definition of 'argument' stipulated that an argument's premises must be intended to give evidence for the conclusion. But an argument need not actually give evidence. There are bad arguments as well as good ones. Consider this:

Humans are the only rational beings.
Rationality alone enables a being to make moral judgments.
∴ Only humans are ends-in-themselves.

Now this is an argument, but it's bad. (Of course, no famous Western philosopher would ever really have reasoned this way!) Intuitively, the reason it's bad is that we can't see what the capacity for moral judgment has to do with being an end-in-itself. Still, bad as it is, it's an argument; the author intended the first two propositions (sentences) to be taken as evidence for or proof of the third, and that's all that being an argument requires.

Let's now consider a good one. I'll begin with a claim. The claim is that in certain matters your will is not free. In fact there is one act you cannot initiate no matter how strong your will.
The act is this: to criticize all and only those people who are un-self-critical. For example, consider yourself. Are you going to criticize yourself or not? If you do, then you will criticize someone who is self-critical (namely, you)—and so you're not criticizing only the un-self-critical. On the other hand, if you don't criticize yourself, then you fail to criticize someone who is un-self-critical (namely, you again)—and so you don't criticize all the un-self-critical. So either way you fail.¹

Now consider your thoughts as you read the previous paragraph. (I assume you read it with comprehension; if not, now might be a good time to try again.) When I first made the claim, unless you had read this sort of thing before, you were probably puzzled. You wondered, among other things, what I was up to. At a certain point (or maybe not a certain point—maybe slowly), a light went on and you saw it. The dawning of that light is insight. A good argument, when it works, gives you insight. It enables you to see why the conclusion is true—not "see" in a literal sense, of course, but "in your mind's eye." What was wrong with the bad argument given above was that it didn't yield any insight at all. It puzzled us and offered no resolution to our puzzlement.

Here I am talking about thought (insight, puzzlement, dawning lights, and so on), when I said just a few paragraphs back that we were going to talk about symbol systems. That's because I want to make vivid a certain contrast. There is much to be noticed about the experience—the phenomenology—of argumentation. But contemporary logicians try to explain as much as possible of what makes an argument good or bad without using mentalistic jargon, which they view with suspicion. They prefer to talk about symbols.

The previous argument showed us something about insight, but it's rather flashy for an introductory illustration; let's consider a more mundane and time-worn example:

All men are mortal.
Socrates is a man.
∴ Socrates is mortal.

This is good too, if not so good as our last example.² It could, I suppose, convey insight to a sheltered three-year-old. Its virtues, according to hoary tradition, are these:

1. Its premises are true.
2. Its reasoning is valid.

Now obviously, these virtues don't by themselves add up to a prize-winning argument. There are other things we'd like—such as significance, substance, relevance to some larger context—but the two listed above are virtues. Arguments that lack them are not likely to convey insight into true conclusions. So they are as good a place as any to start if we want to understand what makes an argument good.

Virtue 1, however, is the business of just about everybody but the logician. To tell whether or not a given premise is true (except for logically true or logically false propositions, cases to which we will later return), we must turn to science, conscience, or common sense—not to logic.

¹ The reasoning here is identical to the reasoning of Russell's barber paradox and to the core of the argument by which we will prove the halting problem unsolvable (see Section 10.5).

² The origin of this argument is a mystery to me. It appears in many logic textbooks, going way back in history, so presumably it has a classical source. The obvious source would be Aristotle, since Aristotle invented formal logic, but an Aristotle scholar assures me that this argument is nowhere to be found among the Philosopher's works.

1.2 VALIDITY AND COUNTEREXAMPLES

That leaves us with virtue 2, the one that generally interests logicians. Most logicians have belonged to a school of thought known as the classical tradition. In the first four parts of this book we will consider logic from the classical perspective, though in the fifth we shall step outside of it. To say that an argument is valid is, according to the classical tradition, to say that there is no way for the conclusion not to be true while the premises are true.
We'll sometimes put this in terms of "possible situations": There is no possible situation in which the premises are true but the conclusion isn't.

The Socrates argument is valid, for there is no possible situation in which all men are mortal, Socrates is a man, and Socrates is not mortal; we can't even coherently think such a thing.

The end-in-itself argument is invalid (i.e., not valid), for there is a possible situation in which the premises are true and the conclusion isn't. That is, it is possible that humans are the only rational beings and that rationality alone enables a being to make moral judgments but that humans are not the only ends-in-themselves. One way this is possible is if being an end-in-itself has nothing to do with the ability to make moral judgments, but rather is linked to some more general capacity, such as sentience or the ability to live and flourish. Thus perhaps other critters are also ends-in-themselves even if the argument's premises are true.

A possible situation in which an argument's premises are true and its conclusion is not true is called a counterexample to the argument. We may define validity more briefly simply by saying that a valid argument is one without a counterexample.

When we speak of possible situations, the term 'possible' is to be understood in a very broad sense. To be possible, a situation need not be something we can bring about; it doesn't even have to obey the laws of physics. It just has to be something we can coherently conceive—that is, it has to be thinkable and describable without self-contradiction.

Thus, intuitively,³ to tell whether or not an argument is valid, we try to conceive or imagine a possible situation in which its premises are true and conclusion is untrue. If we succeed (i.e., if we can describe a counterexample), the argument is invalid. If we fail, then either we have not been imaginative enough or the argument is valid.
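The classical definition (no possible situation in which the premises are true and the conclusion untrue) becomes mechanically checkable once "possible situations" are narrowed to assignments of truth values, as the formal chapters will do for propositional logic. The following Python sketch is illustrative only and not from the book; the function name `is_valid` and the encoding of premises as Python functions are my own choices, and the two argument forms it tests are standard textbook forms rather than examples from this chapter.

```python
from itertools import product

def is_valid(premises, conclusion, atoms):
    """Classically valid: no assignment of truth values (a 'possible
    situation', narrowed to truth-functional form) makes every premise
    true while the conclusion is untrue."""
    for values in product([True, False], repeat=len(atoms)):
        situation = dict(zip(atoms, values))
        if all(p(situation) for p in premises) and not conclusion(situation):
            return False  # this assignment is a counterexample
    return True

# Modus ponens (P -> Q, P, therefore Q): the search finds no counterexample.
modus_ponens = is_valid(
    [lambda s: (not s['P']) or s['Q'], lambda s: s['P']],
    lambda s: s['Q'], ['P', 'Q'])

# Affirming the consequent (P -> Q, Q, therefore P): P false, Q true makes
# both premises true and the conclusion false, so the form is invalid.
affirming = is_valid(
    [lambda s: (not s['P']) or s['Q'], lambda s: s['Q']],
    lambda s: s['P'], ['P', 'Q'])

print(modus_ponens, affirming)  # True False
```

Because the search is exhaustive, failure to find a counterexample here really does establish validity of the truth-functional form, which is exactly the guarantee the informal imagination-based test lacks.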
This makes logicians nervous; they'd like to have a test that doesn't rely on human ingenuity; much of this book will be devoted to explaining what they do about this anxiety and how their efforts fare. But most people are not so skittish. We appeal to counterexamples almost unconsciously in everyday life. Consider this mundane argument:

They said on the radio that it's going to be a beautiful day today.
∴ It is going to be beautiful today.

One natural (albeit cynical) reply is, "They could be wrong." This reply demonstrates the invalidity of the argument by describing a counterexample—that is, a possible situation in which the conclusion ('It's going to be a beautiful day today') is untrue even though the premise ('They said so on the radio') is true: namely, the situation in which the forecasters are wrong. A counterexample need not be an actual situation, though it might; it is enough that the situation be conceptually possible. Thus it need not be true that the forecasters are wrong; to see the invalidity of the argument, we need only realize that it is possible they are wrong.

To give a counterexample, then, is merely to tell a kind of story. The story needn't be true, but it must be conceptually coherent. The cynical respondent to our argument above hints at such a story with the remark "They could be wrong." That's enough for casual conversation. But for logical analysis it's useful to be more explicit. A well-stated description of a counterexample should contain three elements:

1. Affirmations of all the argument's premises.
2. A denial of the argument's conclusion.
3. An explanation of how this can be—that is, how the conclusion can still be untrue while the premises are all true.

If we flesh out the cynic's counterexample to make all of these elements explicit, the result might be something like this:

They said on the radio that it's going to be a beautiful day today. But they are wrong.
A cold front is moving in unexpectedly and will bring rain instead of a beautiful day.

All three elements are now present. The first sentence of this "story" affirms the premise. The second denies the conclusion. The third explains how the conclusion could be untrue even though the premise is true.

This is not, of course, the only possible situation that would make the premises but not the conclusion true. I made up the idea of an unexpected cold front more or less arbitrarily. There are other counterexamples as well. Maybe an unexpected warm front will bring rain. Or maybe there will be an unexpected dust storm. Or maybe the radio announcer knew it was going to be an awful day and flat out lied. Each of these scenarios is a counterexample. This is typical; invalid arguments usually have indefinitely many counterexamples, each of which is by itself sufficient to show that the argument is invalid.

Let's consider another example. Is the following argument valid or invalid?

All philosophers are freethinkers.
Al is not a philosopher.
∴ Al is not a freethinker.

To answer, we try to imagine a counterexample. Is there a way for the conclusion not to be true while the premises are true? (To say that the conclusion is not true, of course, is to say that Al is a freethinker.) A moment's thought should reveal that this is quite possible. Here's one counterexample:

All philosophers are freethinkers and Al is not a philosopher, but Al is nevertheless a freethinker, because there are some freethinking bricklayers who are not philosophers, and Al is one of these.

Again all three elements of a well-described counterexample are present. The statement 'All philosophers are freethinkers and Al is not a philosopher' affirms both of the premises.

³ When I say 'intuitively', I mean from an informal point of view. We are still talking about thoughts here, not symbols. This is typical of informal logic. The formal, symbolic approach begins with the next chapter.
The statement 'Al is nevertheless a freethinker' denies the conclusion, and the remainder of the story explains how this can be so. The story is perfectly coherent, and thus it shows us how the conclusion could be untrue even if the premises were true.

Notice again that the counterexample need not be an actual situation. It's just a story, a scenario, a fiction. In fact, it isn't true that all philosophers are freethinkers, and maybe it isn't true that Al (whoever Al is) is a freethinker, either. That doesn't matter; our story still provides a counterexample, and it shows that the argument is invalid, by showing how it could be that the conclusion is untrue while the premises are true. Notice, further, that we needn't have said that Al is a bricklayer; for purposes of the example, he could have been an anarcho-communist or some other species of freethinker—or an unspecified kind of freethinker. The details are flexible; what counts, however we formulate the details, is that our "story" is coherent and that it makes the premises true and the conclusion untrue.

Let's consider another argument:

All philosophers are freethinkers.
Al is a philosopher.
∴ Al is a freethinker.

This has no counterexample. If we affirm the premises, then we cannot without lapsing into incoherence deny the conclusion. If all philosophers are freethinkers and Al is one of the philosophers, then he must be a freethinker. This argument is valid. That, of course, doesn't mean it's a good argument in all respects. On the contrary, some philosophers are dogmatically religious, so the first premise is false, which makes the argument unconvincing. But still the reasoning is valid.

Sometimes what appears to be a counterexample turns out on closer examination not to be. Unless the mistake is trivial (e.g., the story fails to make all the premises true or fails to make the conclusion untrue), the problem is often that the alleged counterexample is subtly incoherent and hence impossible.
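In the simplest cases, incoherence itself can be exposed by brute force: enumerate every combination of the relevant facts and check whether any combination makes all of a story's claims true at once. The Python sketch below is mine, not the book's; it encodes the claims of the "immortal soul" story about Socrates considered next, restricted to facts about Socrates himself (which is all the story needs to get right, since 'All men are mortal' must at least hold of Socrates).

```python
from itertools import product

# Claims of an alleged counterexample to 'All men are mortal; Socrates
# is a man; therefore Socrates is mortal':
#   (1) all men are mortal (checked here only in its instance for Socrates),
#   (2) Socrates is a man,
#   (3) Socrates is not mortal.
coherent_ways = []
for is_man, is_mortal in product([True, False], repeat=2):
    claim1 = (not is_man) or is_mortal   # the instance of 'all men are mortal'
    claim2 = is_man
    claim3 = not is_mortal
    if claim1 and claim2 and claim3:
        coherent_ways.append((is_man, is_mortal))

print(coherent_ways)  # [] -- the story describes no possible situation
```

An empty result means the three claims contradict one another, which is precisely what disqualifies such a story as a genuine counterexample.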
To return to the argument about Socrates, suppose someone said:

The argument is invalid because we can envision a situation in which all men are mortal and Socrates is a man, but Socrates is nevertheless immortal because he has an immortal soul.

This story does seem to make the premises of the argument true and the conclusion false. But is it really intelligible? If having an immortal soul makes one immortal and the man Socrates has an immortal soul, then not all men are mortal. The story is incoherent; it contradicts itself. It is therefore not a genuine counterexample, since a counterexample is a possible situation; that is, its description must be conceptually coherent.

Some additional invalid arguments with accompanying counterexamples are listed below. Keep in mind that invalid arguments generally have many counterexamples, so that the counterexamples presented here are not the only ones. Note also that each counterexample contains all three elements (though sometimes more than one element may be expressed by the same sentence). The three elements, again, are:

1. Affirmations of all the argument's premises.
2. A denial of the argument's conclusion.
3. An explanation of how this can be—that is, how the conclusion can be untrue while the premises are all true.

In each case, the counterexample is a logically coherent story (not an argument) that shows how the conclusion could be untrue while the premises are true, thus proving that the argument is invalid. Notice how each of the counterexamples below performs this function:

Invalid Argument
Sandy is not a man.
∴ Sandy is a woman.

Counterexample
Sandy is neither a man nor a woman but a hamster.

Invalid Argument
If the TV is unplugged, it doesn't work.
The TV is not working.
∴ It's unplugged.

Counterexample
If the TV is unplugged it doesn't work, and it's not working. However, it is plugged in. The reason it's not working is that there's a short in the circuitry.

Invalid Argument
All charged particles have mass.
Neutrons are particles that have mass.
∴ Neutrons are charged particles.

Counterexample
All charged particles have mass, but so do some uncharged particles, including neutrons.

Invalid Argument
The winning ticket is number 540.
Beth holds ticket number 539.
∴ Beth does not hold the winning ticket.

Counterexample
The winning ticket is number 540; Beth is holding both ticket 539 and ticket 540.

Invalid Argument
There is nobody in this room taller than Amy.
Bill is in this room.
∴ Bill is shorter than Amy.

Counterexample
Bill and Amy are the only ones in this room, and they are the same height.

Invalid Argument
Sally does not believe that Eve ate the apple.
∴ Sally believes that Eve did not eat the apple.

Counterexample
Sally has no opinion about the story of Eve. She doesn't believe that Eve ate the apple, but she doesn't disbelieve it either.

Invalid Argument
Some people smoke cigars.
Some people smoke pipes.
∴ Some people smoke both cigars and pipes.

Counterexample
There are pipe-smokers and cigar-smokers, but nobody smokes both pipes and cigars, so the two groups don't have any members in common.

Invalid Argument
Some people smoke cigars.
∴ Some people do not smoke cigars.

Counterexample
There are people, and all of them smoke cigars. (If everybody does, then some people do, and so the premise is true!)

Invalid Argument
We need to raise some money for our club.
Having a bake sale would raise money.
∴ We should have a bake sale.

Counterexample
We need to raise money for the club, and having a bake sale would raise money, but so would other kinds of events, like holding a car wash or a telethon. Some of these alternative fund-raising ideas better suit the needs of the club and the abilities of its members, and so they are what should be done instead of a bake sale.

Invalid Argument
Kate hit me first.
∴ I had to hit her back.

Counterexample
Kate hit the (obviously immature) arguer first.
But the arguer could have turned the other cheek or simply walked away; there was no need to hit back.

Let's take stock. What launched our discussion of counterexamples was talk of validity, and what led us to validity was a look at the two virtues of a good argument, namely:

1. The premises are true.
2. The reasoning is valid.

Logicians sometimes suggest that these two virtues are sufficient for a good argument. I have already expressed doubts about this. But we can see why someone might believe it if we consider the two virtues together. To say that the reasoning is valid is to say that there is no counterexample—that is, there is no way for the conclusion not to be true while the premises are true. Now, if we add virtue 1—namely, that the premises are true—we see that the two virtues together add up to a guarantee of the truth of the conclusion. An argument that has both virtues—true premises and valid reasoning—is said to be sound. Sound reasoning certifies that its conclusion is true.

If that's all we want from reasoning, then virtues 1 and 2 are all we need. In the classical logical tradition, it has been customary to ask for no more. But I think we generally want more. We want insight, significance, cogency . . . well, at least we want relevance. Virtues 1 and 2 don't even give us that—as we shall see in the next section.

Exercise 1.2
Classify the following arguments as valid or invalid. For those that are invalid, describe a counterexample, making sure that your description includes all three elements of a well-described counterexample. Take each argument as it stands; that is, don't alter the problem by, for example, adding premises.

1. No plants are sentient.
All morally considerable things are sentient.
∴ No plants are morally considerable.

2. All mathematical truths are knowable.
All mathematical truths are eternal.
∴ All that is knowable is eternal.

3. Most geniuses have been close to madness.
Blake was a genius.
∴ Blake was close to madness.
4. Most of the sentences in this book are true.
Most of the sentences in this book are about logic.
∴ There are true sentences about logic in this book.

5. A high gasoline tax is the most effective way to reduce the trade deficit.
We need to reduce the trade deficit.
∴ We need a high gasoline tax.

6. Some angels are fallen.
∴ Some angels are not fallen.

7. To know something is to be certain of it.
We cannot be certain of anything.
∴ We cannot know anything.

8. The surface area of China is smaller than the surface area of Russia if and only if . . .

. . . the conclusion untrue. The hope would be that either we find an invalid instance, thus showing the sequent to be invalid, or we fail to find an invalid instance, but as a result of our search become familiar enough with the sequent to see that it is valid.

Consider, for example, the sequent 'P ∨ Q ⊢ P ↔ Q'. To test its validity informally, we consider instances, more or less at random. Suppose we take this instance:

Either it is a skunk or it is a badger.
∴ It is a skunk if and only if it is a badger.

Now it is not difficult to formulate a counterexample. Consider a possible situation in which the animal referred to by the word 'it' is a skunk but not a badger. Then the premise is certainly true, but the conclusion is false. For the conclusion asserts that if it's a skunk it's also a badger, and vice versa, but in the situation we are envisioning (which is perfectly possible) it is a skunk but not a badger.

If we try to find an invalid instance of the sequent 'P → Q, ~Q ⊢ ~P', by contrast, we meet with repeated failure. (This sequent, incidentally, is called modus tollens—i.e., mode of denying, or denying the consequent.) Consider this instance:

If you press the accelerator, the engine speeds up.
It is not the case that the engine speeds up.
Therefore, you are not pressing the accelerator.
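Although the text develops systematic methods only later, the claim that modus tollens has no counterexample can already be checked by brute force over truth values. The following Python sketch (my own illustration, not the book's method) searches every assignment to 'P' and 'Q' for one making both premises true and the conclusion false:

```python
from itertools import product

# Material conditional: P -> Q is false only when P is true and Q false.
def implies(p, q):
    return (not p) or q

# Look for a valuation making the premises of modus tollens
# (P -> Q and ~Q) true while the conclusion (~P) is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and (not q) and not (not p)
]
print(counterexamples)  # -> [] : no such valuation, so the form is valid
```

The empty result reflects exactly the repeated failure described below: every attempted counterexample falsifies a premise or verifies the conclusion.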
We might at first attempt a counterexample along these lines: Maybe the engine is malfunctioning or simply turned off, so that even though you are pressing the accelerator it is not speeding up. This, of course, is a possible situation. But it is not a counterexample, because it is a situation in which the first premise is false. Or maybe the engine is not speeding up, though whenever you press the accelerator it does speed up. But then you are certainly not pressing the accelerator. This, once again, is a possible situation, but it is not a counterexample because it is a situation in which the conclusion is true.

By repeated failures to find a counterexample, both with this instance and with other instances of modus tollens, we might eventually gain confidence in the validity of the form itself and maybe even see why it is valid. This method, however, is intuitive and imprecise. It relies heavily on inventiveness and the powers of imagination. And since for all of us these powers are limited, it does not guarantee a correct answer—or any answer at all. There are better methods, as we shall soon see.

Exercise 2.1.1
Check the following forms for validity informally by attempting to construct an instance that has an obvious counterexample. If you can do so, write out the instance and describe the counterexample that shows it to be invalid. If not, or if you see that the argument is valid, simply write 'valid' for that form.

1. P → Q, ~P ⊢ ~Q
2. P ⊢ P & Q
3. P & Q ⊢ P
4. P ∨ Q ⊢ P
5. P ⊢ P ∨ Q
6. P → Q ⊢ Q → P
7. P → Q ⊢ P → ~~Q
8. P → Q ⊢ Q ↔ P
9. P ↔ Q ⊢ P & Q
10. P, ~P ⊢ Q

Exercise 2.1.2
Given that modus ponens is also called affirming the antecedent and modus tollens is also called denying the consequent, what is the name of the sequent in problem 1 of Exercise 2.1.1?

2.2 FORMALIZATION

In this section we present the syntax (grammar) of the language of propositional logic. Fundamental to an understanding of syntax is the notion of the scope of a logical operator.
The scope of a particular occurrence of an operator consists of that occurrence of the operator itself, together with whatever it is operating on. Consider, for example, the following pair of English sentences:

It is not the case that boron is both a compound and an element.
Boron is a compound, and it is not the case that it is an element.

In the first sentence—which, incidentally, is true—the negation operator applies to the entire conjunction 'boron is both a compound and an element', or, more explicitly, 'boron is a compound and boron is an element'. Thus the scope of the negation operator is the entire sentence. In the second sentence, which is false, the negation operator applies only to the subsentence 'it [boron] is an element'. Its scope is thus only the second conjunct of the second sentence—that is, the subsentence 'it is not the case that it [boron] is an element'.

In representing the forms of these two sentences in the language of propositional logic, we need some conventions for indicating scope. For this purpose, we borrow from algebra the idea of using brackets as punctuation. Using brackets, we can represent the form of the first of the two sentences above as '~(C & E)'. The second is then symbolized as 'C & ~E'.

In algebra, the negative sign '−' is presumed to apply just to the term it immediately prefixes, unless brackets are used to extend its scope. Thus the expression '−3 + 5' represents the number 2, because '−' applies just to the numeral '3'. But in the expression '−(3 + 5)', the '−' applies to '(3 + 5)', so that this expression represents the number −8. We use brackets similarly in logic. The negation sign is presumed to apply to whatever formula it immediately prefixes, unless we extend its scope with brackets.
Brackets are also needed to determine scope when two or more binary operators occur in the same formula. Suppose you receive the following announcement in the mail:

You have won ten thousand dollars and a Caribbean cruise or a dinner for two.

You might well be puzzled, for this announcement is ambiguous. Using 'T' for 'you have won ten thousand dollars', 'C' for 'you have won a Caribbean cruise', and 'D' for 'you have won a dinner for two', we may symbolize the sentence either as 'T & (C ∨ D)' or as '(T & C) ∨ D'. There is a big difference. If the first formula represents what is meant, you have won ten thousand dollars, plus a cruise or a dinner. If (as is most likely) the second formula represents what is meant, then, even if the announcement is true, you probably have won only a dinner.

This sort of multiplicity of meaning is called scope ambiguity. In the first formula, the scope of the conjunction operator is the whole formula and the scope of the disjunction operator is just the second conjunct. In the second formula, the scope of the disjunction operator is the whole formula and the scope of the conjunction operator is just the first disjunct. Without brackets, the scopes of the two operators are indeterminate and, as with the English sentence, it is not clear what is meant.

Because one of the purposes of propositional logic is to clarify thought, its grammatical rules prohibit expressions such as 'T & C ∨ D', which are ambiguous in just the way the contest announcement is, because of the absence of brackets. To prevent such ambiguities, each binary operator in a grammatical formula must be accompanied by a pair of brackets that indicates its scope. There is only one exception: We may omit a pair of brackets that surround everything else in a formula, since brackets in this position are not needed to prevent ambiguity.
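The two readings of the announcement really do come apart, and one can see exactly where by tabulating both. A short Python sketch (the variable names are mine, not the book's):

```python
from itertools import product

# Compare 'T & (C v D)' with '(T & C) v D' on all eight valuations,
# collecting those where the two readings disagree.
differing = [
    (t, c, d)
    for t, c, d in product([True, False], repeat=3)
    if (t and (c or d)) != ((t and c) or d)
]
print(differing)  # the readings disagree exactly when T is false and D is true
```

On those valuations the second reading is true (you have won a dinner) while the first is false (you have won no ten thousand dollars), which is why the bracketing matters to the prizewinner.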
Thus, instead of '(T & (C ∨ D))', which is strictly correct, having a pair of brackets for each of the formula's two binary operators, we may, if we like, write 'T & (C ∨ D)', as we did in the preceding example, dropping the outermost brackets, the ones that indicate the scope of the '&'.

Negation requires no brackets of its own. We can negate any part of the formula '(C ∨ D)', for example, simply by appropriate placement of the negation operator. The possible locations for a single negation operator are as follows: '~(C ∨ D)', '(~C ∨ D)', '(C ∨ ~D)'. The brackets that come with the '∨' (which we have kept here in the second and third formulas, even though we could have omitted them), together with the placement of '~', suffice to define the scopes of both '~' and '∨'. Even when we iterate negations, as in '~~P' ("it is not the case that it is not the case that P"), no brackets are needed. Because '~' applies to the smallest whole formula to its right, the scope of the leftmost occurrence of '~' is the whole formula and the scope of the rightmost occurrence of '~' is '~P'.

All formulas of propositional logic contain sentence letters as their ultimate constituents. Thus sentence letters are called atomic formulas, and, by analogy, formulas consisting of more than just a single sentence letter are called complex, or molecular, formulas.

Recall that the scope of an occurrence of a logical operator is that occurrence of the operator together with all the parts of the formula to which it applies. More precisely, it is the smallest formula containing that occurrence of the operator (which often is only a part of a larger formula). Each molecular formula has one and only one operator whose scope is that entire formula. This operator is called the formula's main operator, and it defines the formula's fundamental form. The main operator of the formula '(P & (Q ∨ R))', for example, is '&'.
The formula as a whole is thus a conjunction, though its second conjunct is a disjunction. In the formula '~(P → (Q ∨ R))' the main operator is '~'. This formula is therefore negative; more specifically, it is the negation of a conditional whose consequent is a disjunction. Recognition of the main operator in a formula is crucial in constructing semantic trees or planning proof strategies, procedures that are discussed in the next two chapters.

The task of representing the forms of arguments in propositional logic is complicated by the fact that there are many ways of expressing the logical operators in natural language. English has, for example, many ways of expressing negation. Usually, of course, instead of saying 'it is not the case that', we simply append 'not' to the sentence's verb. 'It is not the case that I am going' sounds better as 'I am not going'. But we use the more awkward wording to emphasize that from a logical point of view negation is an operation that applies to a whole sentence, not just to a verb.

Prefixes such as 'non-', 'im-', 'in-', 'un-', 'a-', 'ir-', and so on may also express negation. But not always. 'He is incompetent' is arguably synonymous with 'it is not the case that he is competent', but 'gasoline is inflammable' does not mean the same thing as 'it is not the case that gasoline is flammable'! Likewise, 'she uncovered the dough' does not mean the same thing as 'it is not the case that she covered the dough'. Negation is not the only kind of opposition. To determine whether we are dealing with true negation or some other form of opposition, we must ask whether what we are dealing with can be adequately expressed by the phrase 'it is not the case that'. If so, it is negation. If not, it is something else.

Conjunction may be expressed not only by the terms 'and' or 'both . . . and', but also by 'but', 'nevertheless', 'furthermore', 'moreover', 'yet', 'still', and so on.
These terms connote differing nuances of contrast or connection, yet like 'and' they all perform the logical operation of linking two sentences into a compound sentence that affirms them both. Even a semicolon between two sentences may express conjunction in English.

English sentences with compound subjects or predicates are usually treated in propositional logic as conjunctions of two complete sentences. Hence we think of the sentence 'Sal and Jeff were here' as abbreviating 'Sal was here and Jeff was here' and the sentence 'Sal danced and sang' as abbreviating 'Sal danced and Sal sang'.

Often where two sentences are linked by a word that expresses conjunction, we may question whether to treat them as a conjunction or as separate sentences. From a logical point of view it makes little difference. The argument 'He's big and he's mean, so he's dangerous' is equally well rendered into propositional logic as 'B & M ⊢ D' or as 'B, M ⊢ D'. Neither sequent is valid.

Conditionals also have important variants. They can, for example, be presented in reverse order, provided that the antecedent remains attached to the 'if'. The statement 'if it rains, it pours', for example, can also be expressed as 'it pours if it rains'. The form is in each case the same: 'R → P'. In either order, the antecedent is always the clause prefixed by 'if'. There is one exception. Where 'if' is preceded by the term 'only', what it prefixes is the consequent, not the antecedent. The following four sentences, for example, all assert the same (true) conditional proposition:

If you are pregnant, then you are female.
You are female if you are pregnant.
Only if you are female are you pregnant.
You are pregnant only if you are female.

In each case, 'you are pregnant' is the antecedent and 'you are female' the consequent. The form of all four is the same: 'P → F'.
If we reverse antecedents and consequents, we get four sentences of the form 'F → P'. These, too, all affirm the same proposition, but it is a different proposition from that affirmed by the first group of four, a proposition that is (fortunately) not in all cases true.

If you are female, then you are pregnant.
You are pregnant if you are female.
Only if you are pregnant are you female.
You are female only if you are pregnant.

Many people find it difficult to keep the meanings of 'if' and 'only if' distinct. Keep in mind that 'if' always prefixes antecedents and 'only if' always prefixes consequents, and you should have no trouble.

Given these remarks about 'if' and 'only if', it ought to be clear that the biconditional operator 'if and only if' may be understood as a conjunction of two conditionals, one expressed by 'if', the other by 'only if'. Hence 'P ↔ Q' just means '(P → Q) & (Q → P)'. We could therefore dispense with the symbol '↔' and treat all biconditionals as conjunctions of two conditionals in this way. But we retain '↔', partly in deference to tradition, partly because it saves writing.

Apart from the optional addition of 'either', the term 'or' has few variants in English. We may regard it, however, as a component of the important term 'neither . . . nor'. Etymologically, this is a contraction of 'not either . . . or'; it thus expresses negated disjunction. The sentence 'it will neither snow nor rain', for example, may be symbolized as '~(S ∨ R)'. It is also acceptable to symbolize this statement as '~S & ~R', which is logically equivalent to '~(S ∨ R)', though this symbolization has the disadvantage of failing to reflect the English etymology.

The term 'unless' may be thought of as expressing another two-operator combination, a conditional with a negated antecedent. 'We will starve unless we eat' says the same thing as 'if we do not eat, we will starve'. We may thus symbolize the sentence as '~E → S'.
Alternatively, 'unless' may be understood simply as expressing disjunction, in which case it prefixes the first disjunct. So we may also symbolize 'we will starve unless we eat' as 'E ∨ S'. These two symbolizations are equally correct.

Exercise 2.2.1
Formalize each of the sentences below, using the following interpretation scheme:

P — the peasants revolt
Q — the queen hesitates
R — the revolution will succeed
S — the slaves revolt

1. Either the peasants will revolt or the slaves will revolt.
2. Both the peasants and the slaves will revolt.
3. The peasants and the slaves will not both revolt.
4. If the peasants revolt, then the revolution will not succeed.
5. The peasants revolt if and only if they don't fail to revolt.
6. Only if the peasants revolt will the slaves revolt.
7. The revolution will succeed only if the queen hesitates.
8. If the peasants revolt and the queen hesitates, the revolution will succeed.
9. If the peasants revolt, then the revolution will succeed if the queen hesitates.
10. The revolution will not succeed unless the queen hesitates.
11. The peasants will revolt whether or not the queen hesitates.
12. The revolution will succeed if the slaves and the peasants both revolt.
13. If either the peasants or the slaves revolt and the queen hesitates, then the revolution will succeed.
14. If the peasants revolt but the slaves don't, the revolution will not succeed, and if both the peasants and the slaves revolt, the revolution will succeed.
15. If the peasants revolt if and only if the slaves revolt, then neither will revolt.

Exercise 2.2.2
Use premise and conclusion indicators to determine the premises and conclusions of the following arguments, then symbolize them in the formal notation of propositional logic using the sentence letters whose interpretation is specified below. (The forms of all of these arguments, incidentally, are valid in classical logic.)
Sentence Letter   Interpretation
B    Descartes believes that he thinks
E    Descartes exists
K₁   Descartes knows that he thinks
K₂   Descartes knows that he exists
J    Descartes is justified in believing he thinks
T    Descartes thinks

1. If Descartes thinks, then he exists; for he doesn't both think and not exist.
2. If Descartes thinks, then he exists. Hence he does not think, because he does not exist.
3. Descartes is justified in believing that he thinks if he knows that he thinks. But he is not justified in believing that he thinks, so he does not know that he thinks.
4. If Descartes knows that he thinks, then he exists. For if he knows that he thinks, then he thinks; and if he thinks, then he exists.
5. Descartes does not exist. For either he knows that he exists or he doesn't exist; and he doesn't know that he exists.
6. Descartes believes that he thinks. If he does not think, he does not believe that he thinks. Therefore Descartes thinks.
7. If Descartes thinks, then he knows that he exists, and if he knows that he exists, then he exists. Therefore, if Descartes thinks, then he both knows that he exists and really does exist.
8. If Descartes does not exist, then he doesn't think; so if he thinks, it is not the case that he does not exist.
9. Descartes neither exists nor does not exist. Therefore Descartes thinks.
10. Descartes knows that he thinks if and only if (1) he believes that he thinks, (2) he is justified in believing that he thinks, and (3) he does in fact think. Therefore, if Descartes does not think, then he does not know that he thinks.

2.3 FORMATION RULES

The formulas of propositional logic have a grammar, and that grammar (or syntax) may be precisely articulated as formation rules. Formation rules define what counts as a formula by giving general directions for assembling formulas out of simple symbols, or characters. They are the rules of grammar for a formal language.
In order to state the formation rules for propositional logic, we need first to define the character set for propositional logic—that is, the alphabet and punctuation marks from which the formulas of its language are constructed. We stipulate that a character for the language of propositional logic is anything belonging to one of the following four sets:

Sentence letters:    Capital letters from the English alphabet
Numerals:            0 1 2 3 4 5 6 7 8 9
Logical operators:   ~  &  ∨  →  ↔
Brackets:            ( )

The only novelty here is the numerals. These are used to form subscripts for sentence letters when we want to use the same letter for two different sentences and need a means to keep the letters distinct. Moreover, without subscripts we could symbolize no more noncompound sentences than we have capital letters—that is, twenty-six. And though we are unlikely in practice to need more than twenty-six letters at once, a system of logic should not be subject to such arbitrary restrictions.

With these ideas in mind, we are ready to state the formation rules—the rules of grammar for the language of propositional logic; they define the notion of a grammatical formula by telling how to construct such formulas, starting with sentence letters and combining them with the operators and brackets.

Formation Rules for Propositional Logic
1. Any sentence letter, with or without a sequence of numerals as a subscript, is a formula.
2. If Φ is a formula, then so is ~Φ.
3. If Φ and Ψ are formulas, then so are (Φ & Ψ), (Φ ∨ Ψ), (Φ → Ψ), and (Φ ↔ Ψ).

Anything that is not a formula by finitely many applications of these rules is not a formula.

Notice that in stating the formation rules, we use Greek letters (which belong to the metalanguage (see Section 1.5), not to the language of propositional logic). They are variables that stand for formulas of propositional logic. The Greek indicates generality.
For example, 'Φ' and 'Ψ' in rule 3 stand for any formulas, no matter how simple or complex. When they are combined with operators and brackets into a complex expression, this expression stands for any formula obtainable by replacing the Greek letters with formulas. Thus, for example, the expression '(Φ & Ψ)' stands for '(P & Q)', '(~R & S)', '((P ∨ R) & (Q → ~S))', and so on. Use of English letters here would be inappropriate, since they would too easily be confused with individual expressions of the object language.⁵ In contrast to such expressions as '(P & Q)', expressions containing Greek letters, such as '(Φ & Ψ)', are not formulas. Rather, they are metalinguistic devices used for referring to whole classes of formulas.

Repeated (recursive) application of the formation rules enables us to construct a great variety of formulas. So, for example, 'P' and 'Q' are formulas by rule 1. Hence by rule 3, '(P ∨ Q)' is a formula. Now by rule 1 again 'R' is a formula, from which it follows by rule 2 that '~R' is a formula and again by rule 2 that '~~R' is a formula. Hence, since both '(P ∨ Q)' and '~~R' are formulas, by rule 3 '((P ∨ Q) → ~~R)' is a formula. And since this is a formula, by rule 2 again, '~((P ∨ Q) → ~~R)' is also a formula. And so on! In this way we can build up formulas as complex as we like.

⁵ To see this more clearly, suppose that instead of rule 2 we wrote:
2′. If 'P' is a formula, then so is '~P'.
Then the rule would tell us only how to generate this one formula '~P'. It would not tell us how to generate '~~P' or '~Q'. If, by contrast, we put the rule this way:
2″. If P is a formula, then so is ~P
we would be mixing the object language and the metalanguage confusingly; it's not clear what this means. The Greek says exactly what we want while avoiding these problems.

Notice that the only formation rule that introduces brackets is rule 3. This means that the only legitimate function of a pair of brackets is to delineate the scope of some binary operator.
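Because the formation rules are recursive, they translate naturally into a recursive program. The following Python sketch (mine, not the book's) checks strictly bracketed formulas; the convention of dropping outermost brackets is deliberately not implemented, and '∨', '→', '↔' are rendered in ASCII as 'v', '->', '<->':

```python
import re

# Binary operators, longest first so '<->' is not misread as '->'.
BINARY = ("<->", "->", "&", "v")

def is_formula(s):
    # Rule 1: a sentence letter, with or without numeral subscripts.
    if re.fullmatch(r"[A-Z][0-9]*", s):
        return True
    # Rule 2: ~Φ is a formula iff Φ is.
    if s.startswith("~"):
        return is_formula(s[1:])
    # Rule 3: a bracketed binary compound (Φ op Ψ), split at the
    # top-level operator (bracket depth zero).
    if s.startswith("(") and s.endswith(")"):
        body, depth = s[1:-1], 0
        for i, ch in enumerate(body):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif depth == 0:
                for op in BINARY:
                    if body.startswith(op, i):
                        return (is_formula(body[:i])
                                and is_formula(body[i + len(op):]))
    return False
```

For instance, `is_formula("~((PvQ)->~~R)")` is true, retracing the rule applications in the paragraph above, while `is_formula("(P)")` is false, since no formation rule puts brackets around a bare sentence letter.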
In particular, brackets are not used to indicate the scopes of either sentence letters or the negation operator. Thus, for example, none of the following expressions count as formulas:

(P)   ~(P)   (~P)   ~(~P)   (All wrong!)

Exercise 2.3
Some of the following expressions are formulas of propositional logic. Others are not. For those that are, explain how they are built up by the formation rules. For those that aren't, explain why they can't be built up by the formation rules.

1. (P) ∨ (Q)
2. (P & Q)
3. P & Q
4. ~(P ∨ (Q & S))
5. (Φ → Ψ)
6. ((P & Q) ∨ (R & S))
7. . . .
8. P
9. (PP)
10. (P & Q & R)

CHAPTER 3

CLASSICAL PROPOSITIONAL LOGIC: SEMANTICS

3.1 TRUTH CONDITIONS

In this chapter we examine the semantics of classical propositional logic. Semantics is the study of meaning. The logical meaning of an expression is usually understood as its contribution to the truth or falsity of sentences in which it occurs. By rigorously characterizing the meanings (in this sense) of the logical operators, we deepen our understanding of validity and related concepts.

Logicians have traditionally defined meaning in terms of possible truth. To know the meaning of a sentence, they have assumed, is to know which possible situations or circumstances make it true and which make it false. For example, if we wished to check a student's understanding of the sentence 'The government is an oligarchy', we might describe various possible political arrangements, asking each time whether the government described was an oligarchy. The pattern of the student's responses to these scenarios would quickly reveal whether she knows what 'The government is an oligarchy' means.

Or, to take a more sophisticated example, philosophers sometimes debate what is meant by such sentences as 'James knows that God exists'. To clarify their understanding, they ask whether or not the sentence would be true in various possible situations. Suppose, for example, that James has been brought up from earliest childhood to believe in God.
Would that make it true that he knows God exists? Suppose he has had a mystical vision in which it seemed to him that God gave him a message. Would that make it true? Suppose that he has had such a vision and that God really did give him the message. The point of these queries is to clarify the meaning of the sentence 'James knows that God exists'—or, more broadly, to clarify the meaning of the predicate 'knows' in application to religious assertions. And the general assumption of the inquiry is that to know the meaning of a sentence or term is to know which possible situations make that sentence true or that term truly applicable.

This assumption is often expressed by saying that the meaning of a term is its truth conditions. The truth conditions for a term are rules that specify the possible situations in which sentences containing that term are true and the possible situations in which sentences containing that term are false.

In this section we give a truth-conditional semantics for the five logical operators introduced in Chapter 2. That is, we explain their meanings in terms of the possible situations in which sentences containing them are true and the possible situations in which sentences containing them are false.

In doing so, we shall employ the concept of truth value. A truth value is a kind of semantic quantity that characterizes propositions. For now, we assume that there are only two truth values: true, or T, and false, or F. A true proposition has the value T and a false proposition the value F. Moreover, we assume that in each possible situation each proposition has one, and only one, of these truth values. This assumption is called the principle of bivalence. Logics based on the principle of bivalence and the assumption that meaning is truth conditions are called classical. The dominant logics in Western thought have been classical.
Some philosophers have held that classical logic is universally the best form of logic, or even the only true logic. This book dissents from that view. In Part V we shall explore reasons for thinking that the principle of bivalence, though appropriate for some applications of logic, is less appropriate for others. We shall consider truth values other than T and F and the possibility that sentences may have more than one truth value, or none at all. And in Section 16.2, we question even the idea that meaning has anything to do with truth. There we explore a semantics that defines the meanings of terms, not as their truth conditions, but as their assertibility conditions—the conditions under which statements containing these terms are confirmable by adequate evidence. And beyond that we shall glimpse still more radical ways of departing from the classical tradition. Each of these novel semantic assumptions alters our conception of what valid reasoning is. For now, however, we present the semantics of the logical operators in the classical way, as bivalent truth conditions.

We begin with classical logic for two reasons. First, it is highly established in the Western logical tradition—our tradition. Second, it is, from a semantic viewpoint at least, the simplest logic, and it is best to start with what is simple.

Our immediate task, then, is to define the meanings of the five logical operators in terms of their truth conditions. We begin with the conjunction operator. If we conjoin two sentences—say, 'it is Wednesday' and 'it is hot'—we obtain a single sentence ('it is Wednesday and it is hot') that is true if and only if both original sentences were true, and false otherwise. Hence the truth conditions for conjunction may be stated as follows:

The truth value of a conjunction is T in a given situation iff the truth value of each of its conjuncts is T in that situation
and

The truth value of a conjunction is F in a given situation iff one or both of its conjuncts does not have the value T in that situation.

(The term 'iff' is a commonly used abbreviation for 'if and only if'.) To understand these truth conditions is, by the lights of classical logic, to understand what conjunction means.

All of this is well and good if we aim to state the truth conditions for conjunction when applied to statements of natural language. But we have taken a step of abstraction into formal logic. We are no longer concerned primarily with sentences, like 'it's Wednesday and it's hot', but with formulas, like 'W & H'. Simple sentences have been replaced with sentence letters. But a sentence letter, such as 'W', has in itself no meaning and is neither true nor false. Of course we can give it a meaning by associating it with a particular statement of natural language. But this we do differently in different contexts. For one problem 'W' may mean "it's Wednesday," for another "Water is H₂O." So what can it mean to talk about a situation that makes a mere formula like 'W & H' true?

Two things are needed to make talk about possible situations intelligible in formal propositional logic. The first is an interpretation of the sentence letters. Interpretations are given by associating sentence letters with statements of natural language. The interpretation of a sentence letter may vary from problem to problem, but within a given problem we keep the interpretation fixed. Thus we may stipulate, for example, that (for the duration of this example) 'W' means "it's Wednesday" and 'H' means "it's hot." Let us now, in fact, stipulate this.
The second thing we need to make sense of the notion of a possible situation is the concept of a valuation:

DEFINITION A valuation of a formula or set of formulas of propositional logic is an assignment of one and only one of the truth values T and F to each of the sentence letters occurring in that formula or in any formula of that set.

For a formula, such as ‘W & H’, that contains two sentence letters, there are four valuations, as shown in the following table:

W  H
T  T
T  F
F  T
F  F

That is, both ‘W’ and ‘H’ might be true, ‘W’ might be true and ‘H’ false, ‘W’ might be false and ‘H’ true, and both ‘W’ and ‘H’ might be false. Given an interpretation of the sentence letters, each valuation defines a situation. For example, given the interpretation stipulated above, the valuation that assigns T to both ‘W’ and ‘H’ defines a possible situation in which it is both Wednesday and hot. Similarly, the valuation that assigns T to ‘W’ and F to ‘H’ defines a possible situation in which it is Wednesday but not hot, and so on.

Stipulation of the interpretation, however, though obviously essential for applying logic to natural language, is inessential from a purely formal point of view. To state truth conditions for the logical operators, we need only to say how the truth values of sentences containing them depend on the truth values of their components, not what the components themselves have been interpreted to mean. Hence truth conditions for formal logic need concern themselves only with valuations, not with interpretations. A valuation alone is not a possible situation, but merely a pattern of truth values—the empty form, as it were, of a possible situation. It has the advantage, however, of being an entity definable with mathematical precision. If we disregard particular interpretations of sentence letters and think of abstract valuations rather than possible situations, we enter a realm of formal thought where everything is sharply defined and clear.
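The definition can also be checked mechanically: n sentence letters admit exactly 2^n valuations. The following Python sketch (my illustration, not the book's) enumerates them in the same order the table lists them:

```python
from itertools import product

def valuations(letters):
    """Every assignment of exactly one of T, F to each sentence letter."""
    for values in product([True, False], repeat=len(letters)):
        yield dict(zip(letters, values))

# The four valuations of 'W & H', in the order the table lists them:
rows = list(valuations(["W", "H"]))
for v in rows:
    print(v)
```

With two letters the loop yields four dictionaries, one per line of the table; a third letter would double the count to eight.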
The truth conditions for conjunction now look like this:

The truth value of a conjunction is T on a given valuation iff the truth value of each of its conjuncts is T on that valuation

and

The truth value of a conjunction is F on a given valuation iff one or both of its conjuncts does not have the value T on that valuation.

Because these more abstract truth conditions are stated in terms of valuations, they are often referred to as valuation rules. Valuation rules will appear so often from now on that it will be useful to abbreviate them. We shall use the script letter ‘V’ to stand for valuations, and, as in the previous section, Greek capital letters will stand for formulas. Instead of the cumbersome phrase ‘the value assigned to Φ by V’, we shall write ‘V(Φ)’. Thus, to say that V assigns the value T to Φ, we write simply ‘V(Φ) = T’. Using this notation, we may state the valuation rules for conjunction more compactly as follows:

V(Φ & Ψ) = T iff both V(Φ) = T and V(Ψ) = T.
V(Φ & Ψ) = F iff either V(Φ) ≠ T or V(Ψ) ≠ T, or both.¹

¹ We could also state the second rule this way: V(Φ & Ψ) = F iff either V(Φ) = F or V(Ψ) = F, or both. But then a question might arise regarding the truth value of ‘Φ & Ψ’ if somehow Φ or Ψ lacked truth value or had some value other than T or F. Defining the falsity of the conjunction in terms of the untruth, rather than the falsity, of its components makes conjunctions bivalent even if their components are not. Bivalence is thus built into the valuation rules themselves. In fact, throughout this book I consistently define all semantic ideas in terms of truth and untruth, rather than truth and falsehood. This saves a good bit of trouble in the metatheoretic work of Chapter 5 and facilitates a smooth transition to nonclassical logics in Part V.

The same idea may be expressed in tabular form, listing the four possible combinations of truth value for Φ and Ψ on the left and the resulting truth value for Φ & Ψ on the right:

Φ  Ψ | Φ & Ψ
T  T |   T
T  F |   F
F  T |   F
F  F |   F

This is called a truth table. Truth tables are, perhaps, easier to read than valuation rules. But, unlike the rules, they have the disadvantage of not being generalizable to more advanced forms of logic. Rules will prove more useful in the long run, which is why we emphasize them here.

Let’s now examine the truth conditions for the negation operator. If we attach it to a sentence—say, ‘It’s snowing’—we get a negated sentence: for example, ‘It is not the case that it’s snowing’. If the sentence is true, its negation is false. If the sentence is false, its negation is true. This is vividly apparent when negation is iterated. The sentence ‘It is not the case that it is not the case that it’s snowing’, for example, is just an elaborate way of saying ‘It’s snowing’; any situation in which one sentence is true is a situation in which the other is true, and in any situation in which one sentence is false, the other is false as well. Two negations “cancel out,” producing a statement with the same meaning as the original. By the same principle, three negations have the same effect as one, four likewise cancel out, and so on. Negation, then, is simply an operation that inverts truth value. Hence the truth conditions for negation may be stated precisely as follows:

V(~Φ) = T iff V(Φ) ≠ T.
V(~Φ) = F iff V(Φ) = T.

This, according to classical logic, is the meaning of negation. We can also represent these rules in a truth table. Since the negation operator is monadic, applying to a single formula rather than to two, there are only two cases to consider instead of four: the case in which that formula is true and the case in which it is false. The table shows that ~Φ has the value listed in the right column when Φ has the value listed to the left:
Φ | ~Φ
T |  F
F |  T

Often both valuation rules are stated together in a very compact fashion, as follows: V(Φ & Ψ) = T iff both V(Φ) = T and V(Ψ) = T; otherwise, V(Φ & Ψ) = F. (Our formulation says exactly this, but it is more explicit about what ‘otherwise’ means.)

Let’s now consider the truth conditions for ‘or’. ‘Or’ has two meanings. It can mean “either . . . or . . . and possibly both” or “either . . . or . . . and not both.” The first meaning is called inclusive disjunction and the second exclusive disjunction. This ambiguity is unfortunate. Suppose, for example, that on a true-false quiz you find the statement

Four is either an even number or a square number.

What should you answer? Four is both even and square. So, you might argue, it is not either even or square—that is, not just one of these two things—it’s both. In that case you would mark the statement false. On the other hand, you might think that since four is even (and also since it’s square), it’s true that it is even or square. In that case you would mark the statement true. In neither case would you be wrong, and in neither case would you have misunderstood anything. But if you marked the statement false, that would mean you understood the ‘or’ exclusively, and if you marked it true, that would mean you understood it inclusively.

This problem might be less acute if we were speakers of Latin. In Latin there are two words for ‘or’: ‘vel’ and ‘aut’. In most contexts, ‘vel’ more naturally expresses the idea “either . . . or . . . and possibly both” (the inclusive sense), and ‘aut’ tends to mean “either . . . or . . . and not both” (the exclusive sense of ‘or’). In English we sometimes resolve the ambiguity by using the compound term ‘and/or’ for the inclusive sense. But both ‘or’ by itself and ‘either . . . or’ generally admit of both readings. When we use them, we or our listeners may not know exactly what we mean. This situation would be intolerable in a formal logical language. Formal logic aims at precision.
Its operators must have clear and unambiguous meanings. Therefore, when we introduce an operator like ‘∨’ we must stipulate precisely what it means. Logicians usually have found the inclusive sense of ‘or’ more useful, and so, by convention, that is the sense they have given to the operator ‘∨’. In fact, ‘∨’ is just an abbreviation for ‘vel’.

Apart from cases in which both disjuncts are true (the cases on which the inclusive and exclusive senses of ‘or’ disagree), the truth conditions for ‘or’ are clear. If one disjunct is true and the other false (e.g., ‘either the sun is a star or the moon is’), then the whole disjunction is true. And if both disjuncts are false (e.g., ‘either the moon is a star or the earth is’), then the disjunction is false. Hence the valuation rules for the operator ‘∨’ are as follows:

V(Φ ∨ Ψ) = T iff either V(Φ) = T or V(Ψ) = T, or both.
V(Φ ∨ Ψ) = F iff both V(Φ) ≠ T and V(Ψ) ≠ T.

The corresponding truth table is:

Φ  Ψ | Φ ∨ Ψ
T  T |   T
T  F |   T
F  T |   T
F  F |   F

The logical operator ‘∨’, then, accurately symbolizes the English ‘or’ only when ‘or’ is used in the inclusive sense. In spite of this, we need not introduce a special symbol for exclusive disjunction, since ‘P or Q’, where ‘or’ is intended in the exclusive sense, may be symbolized in our notation as ‘(P ∨ Q) & ~(P & Q)’—that is, “either P or Q, but not both P and Q.” In formalizing arguments involving disjunction, we will for the sake of simplicity and consistency treat the disjunctions as inclusive, except when there is strong reason not to. But on those fairly frequent occasions when the meaning of ‘or’ is unclear, we should keep in mind that this policy is essentially arbitrary.

We now turn to the truth conditions for conditional statements. Under what conditions is Φ → Ψ true? Let’s consider the case in which the antecedent Φ is false (we assume nothing about the consequent Ψ). Now, though Φ is false, the conditional invites us to consider what is the case if Φ, hence to suppose Φ true.
This, however, yields a contradiction, from which (as we saw in Section 1.3) any proposition validly follows. Take a specific instance: The statement ‘Napoleon conquered Russia’ is false. Given this, it is (in a certain sense) true, for example, that if Napoleon conquered Russia, then Caesar conquered the universe. Indeed, if Napoleon conquered Russia, then anything you like is true—because the fact is that Napoleon didn’t conquer Russia. Thus it appears that when Φ is false, Φ → Ψ is true, regardless of the truth value of Ψ.

Let us next consider the case in which the consequent Ψ is true. Then, whether or not Φ is true, Ψ is true (trivially). Take a specific instance: The statement ‘Iron is a metal’ is true. Then any conditional containing ‘Iron is a metal’ as its consequent is true. For example, ‘If today is Tuesday, then iron is a metal’ would be true—because whether or not it is Tuesday (i.e., regardless of the truth value of the antecedent), iron is a metal. Thus in general we may infer that Φ → Ψ is true when Ψ is true, regardless of the truth value of Φ.

We have now concluded that Φ → Ψ is true whenever Φ is false or whenever Ψ is true. Together these conclusions account for three of the four truth combinations for Φ and Ψ; that is, Φ → Ψ is true whenever Φ and Ψ are both true, or Φ is false and Ψ is true, or Φ and Ψ are both false. The only remaining case is the one in which Φ is true and Ψ false. But in this case the conditional is clearly false. If, for example, it is Tuesday and the weather is not hot, then the conditional ‘if it is Tuesday, then it is hot’ is obviously false.

To summarize, we have concluded that Φ → Ψ is true if Ψ is true (regardless of the truth value of Φ), Φ → Ψ is also true if Φ is not true (regardless of the truth value of Ψ), and Φ → Ψ is false if Φ is true and Ψ is not true. This covers all possible cases. Hence the truth conditions for ‘→’ are as follows:

V(Φ → Ψ) = T iff either V(Φ) ≠ T or V(Ψ) = T, or both.
V(Φ → Ψ) = F iff both V(Φ) = T and V(Ψ) ≠ T.

The corresponding truth table is:

Φ  Ψ | Φ → Ψ
T  T |   T
T  F |   F
F  T |   T
F  F |   T

The conditional defined by these truth conditions is called the material conditional.

If you were unconvinced by the reasoning that led us to the truth conditions for the material conditional, you are not alone. Many logicians (your author among them) are troubled by this reasoning. Consider, once again, the last line on the truth table, the case in which Φ and Ψ are both false. I argued that the conditional was true in that case, since its antecedent contradicts the facts, and from a contradiction anything follows. Now surely conditionals are sometimes true when their antecedents and consequents are both false. The statement

(A) If you are less than an inch tall, then you are less than a foot tall.

for example, is uncontroversially true, though (taking ‘you’ as referring to you) its antecedent and consequent are both false. But the antecedent and consequent of the following statement are also both false, and yet, unlike statement (A), this statement seems false:

(B) If you have no lungs, then you can breathe with your eyeballs.

If, as these examples suggest, English conditionals are sometimes true and sometimes false when their antecedents and consequents are both false, then the truth value of an English conditional must not be determined solely by the truth values of its components. Something else must figure into the truth conditions.

An operator that forms compounds whose truth value is strictly a function of the truth values of the components is said to be truth-functional. The symbols ‘&’, ‘~’, ‘∨’, and the material conditional as defined by the truth conditions above are truth-functional. But we have seen evidence that suggests that ‘if . . . then’ is not a truth-functional operator and hence is not the material conditional.
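Precisely because ‘&’, ‘~’, ‘∨’, and the material conditional are truth-functional, each can be written as a function of truth values alone. The Python sketch below is my illustration, not the book's notation; the function names are my own:

```python
from itertools import product

# My sketch: the truth-functional operators as functions of truth values alone.
def f_not(p):     return not p            # '~'
def f_and(p, q):  return p and q          # '&'
def f_or(p, q):   return p or q           # inclusive 'v'
def f_cond(p, q): return (not p) or q     # the material conditional '->'

# The material conditional's table: F only on the row where p is T and q is F.
for p, q in product([True, False], repeat=2):
    print(p, q, f_cond(p, q))
```

No such function could be written for the English ‘if . . . then’ as the text reads it, since (A) and (B) share a row of truth values yet seem to differ in truth value.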
Intuitively, what makes statement (A) true is not that its antecedent contradicts the facts, but that it is necessary, given that you are less than an inch tall, that you are also less than a foot tall. Correspondingly, what makes statement (B) false seems to be the lack of just such a necessary connection: It is not necessary, given that you have no lungs, that you can breathe with your eyeballs. This suggests that an English statement of the form ‘if P then Q’ is true if and only if such a necessary connection exists, regardless of the truth values of the components.

The truth conditions for the material conditional take into account only the truth values of the antecedent and consequent, not the presence or lack of such a necessary connection. This leads to anomalies not only in the case in which the antecedent and consequent are both false, but also in the case in which the antecedent is false and the consequent true. In that case a material conditional is true. But consider this statement:

If there are no people, then people exist.

Once again, contrary to the truth conditions for the material conditional, this seems false, and once again, the necessary connection is lacking. This case, in fact, is especially anomalous, since here the antecedent does not merely fail to necessitate the consequent—it actually necessitates the negation of the consequent.

Further anomalies occur in the case in which the antecedent and consequent are both true. Consider this example:

If the Mississippi contains more than a thimbleful of water, then it is the greatest river in North America.

The Mississippi contains considerably more than a thimbleful of water and it is the greatest river in North America, so both the antecedent and consequent are true. If it is a material conditional, it is therefore true. Yet many English speakers would say that this conditional is false.
It seems false, once again, because it is not necessary, given merely that the Mississippi contains more than a thimbleful of water, that it is the greatest river in North America. Thus English conditionals seem to be true only when there is a necessary connection between antecedent and consequent, whereas the truth conditions for material conditionals ignore all such connections, taking into account only the truth values of the antecedent and consequent.

What, then, of the reasoning by which I arrived at the truth conditions for the material conditional in the first place? It is sound—as applied to the material conditional, but not to English conditionals. I claimed, for example, that when the consequent Ψ of Φ → Ψ is true, then whether or not Φ is true, Ψ is true. But this claim tacitly ignores the possibility that the truth or falsity of Φ might necessitate the falsity of Ψ, so that (taking this necessary connection into account) it would be wrong to conclude that Ψ is true whether or not Φ is true. Thus I arrived at the truth conditions for the material conditional by implicitly assuming that such necessary connections do not affect the conditional’s truth value.

A similar assumption underlies my reasoning in the case in which the antecedent is false. I claimed that when Φ is false, the supposition that Φ is true yields a contradiction, from which any proposition validly follows. Thus, given that Φ is false, if Φ then Ψ—that is, Φ → Ψ—is true, for any proposition Ψ. Thus I assumed that what determines the truth value of the conditional is simply the contradiction of its antecedent with the facts, rather than a necessary connection, or lack thereof, between the antecedent and consequent.

It was, therefore, by assuming that such necessary connections do not affect the truth value of the conditional that I arrived at the truth conditions for the material conditional. But this assumption seems false for at least some English conditionals.
The material conditional is therefore not just another way of writing the English ‘if . . . then’.² But then, if our aim is to evaluate arguments, which we normally formulate in English, why bother with the material conditional?

Part of the answer is historical. Beginning with the work of the Scottish philosopher David Hume (1711–1776), many thinkers, among them some of the founders of contemporary logic, have doubted the intelligibility of this idea of necessary connection. As a result, many have found the material conditional (whose truth conditions, though odd, are at least exact) preferable to English conditionals, whose truth conditions seem bound up with the suspect notion of necessity. Indeed, logicians have long dreamed of an ideal language, free of all ambiguity, unclarity, and dubious metaphysics; and the replacement of the English conditional by the material conditional offered hope of progress toward that goal. Early in this century, Bertrand Russell, Ludwig Wittgenstein, and other prominent philosophers held that with the creation of such a language the perennial philosophical problems, which they regarded as linguistic confusions, would simply dissolve. There is indeed much to be said for the replacement of murkier notions by clearer ones, but in the end we may be left wondering whether we have really solved the problems or merely changed the subject.

In any case, these early thinkers had little choice but to embrace the material conditional. They needed some sort of conditional operator, and it was not until midcentury that logicians began to formulate rigorous and illuminating truth conditions involving ideas of necessary connection. Moreover, the material conditional does mimic English conditionals fairly well in many cases. Like English conditionals, it is always false when its antecedent is true and its consequent false; and in the other cases its truth value sometimes agrees and sometimes disagrees with that of English conditionals.
Certainly, it offers the best approximation to English conditionals among truth-functional operators. Moreover, its truth conditions are simple and precise. For these reasons, the material conditional has become the standard conditional of logic and mathematics.

Lately, however, logicians have formulated a variety of alternative truth conditions that seem to reflect more adequately the meanings of English conditionals. We shall consider some of these in later chapters. Unfortunately, none of these alternatives has won universal acclaim as the true meaning of ‘if . . . then’. That is why we still bother with the material conditional.

This having been said, however, it must be admitted that the common textbook practice of symbolizing ‘if . . . then’ in English as the material conditional (a practice in which this textbook too has indulged) is not wholly defensible. The material conditional is at best a rough approximation to ‘if . . . then’, and some patterns of reasoning valid for the one are not valid for the other. We shall be more careful about the difference between the material conditional and English conditionals for the remainder of this chapter. When considering instances of argument forms containing the material conditional, we shall not translate the material conditional back into English as ‘if . . . then’, but instead retain the symbol ‘→’ as a reminder that its meaning lies solely in its truth conditions, not in what we normally mean by ‘if . . . then’.

² That, at least, is my view. Some logicians still insist that English conditionals are material conditionals. They say unabashedly that statements like ‘if you have no lungs, then you can breathe with your eyeballs’ are true. We can see why they say this by understanding what they mean by ‘if . . . then’, but I do not think that that is what the rest of us usually mean by ‘if . . . then’.
It remains to discuss the truth conditions for the biconditional operator ‘↔’, which we have associated with the English expression ‘if and only if’. Since, as we saw in Section 2.2, ‘if’ prefixes antecedents and ‘only if’ prefixes consequents, the statement form Φ if Ψ may be symbolized as Ψ → Φ, and the form Φ only if Ψ as Φ → Ψ. Thus the biconditional, as its name implies, can be understood as a pair of conditionals—more precisely, as a conjunction of two conditionals. As a conjunction, it is true if both conditionals are true, and it is false if either or both are untrue. Now if Φ and Ψ are either both true or both untrue, both conditionals are true (by the valuation rules for ‘→’). But if Φ is true and Ψ untrue, then Φ → Ψ is untrue; and if Φ is untrue and Ψ true, then Ψ → Φ is untrue. In either of these cases, the biconditional is false. Thus the truth conditions for the biconditional are as follows:

V(Φ ↔ Ψ) = T iff either V(Φ) = T and V(Ψ) = T, or V(Φ) ≠ T and V(Ψ) ≠ T.
V(Φ ↔ Ψ) = F iff either V(Φ) = T and V(Ψ) ≠ T, or V(Φ) ≠ T and V(Ψ) = T.

These rules yield the following truth table:

Φ  Ψ | Φ ↔ Ψ
T  T |   T
T  F |   F
F  T |   F
F  F |   T

The biconditional, in other words, is true if the two constituents have the same truth value and false if they differ in truth value. Because the truth conditions for ‘↔’ are just those for a conjunction of two material conditionals, ‘↔’ is often called the material biconditional operator, and where Φ ↔ Ψ is true, Φ and Ψ are called material equivalents. Two formulas, then, are materially equivalent on a valuation if and only if they have the same truth value on that valuation.

The material biconditional shares with the material conditional the oddity of ignoring necessary connections between its components.
If its components are either both true or both untrue, a material biconditional is true, regardless of the existence or lack of existence of necessary or relevant connections between the components. Thus (importing ‘↔’ into English) the following statements, however odd, are both true:

Life evolved on earth ↔ Ronald Reagan was president of the United States.
Grass is purple ↔ grass is colorless.

The first statement is true because both of its components are true, the second because both of its components are false. Obviously, then, ‘↔’ differs from ‘if and only if’, just as ‘→’ differs from ‘if . . . then’. The material biconditional and English biconditionals do agree, however, when one component is true and the other false; here the biconditional itself is surely false.

To summarize: The meanings of the five truth-functional operators of propositional logic are given by the rules in Table 3.1. These rules constitute the complete semantics for classical propositional logic. For each operator there is a rule telling when formulas of which it is the main operator are true and a rule telling when those formulas are false. Together this pair of rules implies that each such formula is false if and only if it is not true. Hence collectively the valuation rules embody the principle of bivalence—the principle that each formula is either true or false, but not both, on all valuations. The rules are numbered 1–5.

TABLE 3.1 Valuation Rules for Propositional Logic

For any formulas Φ and Ψ and any valuation V,

1. V(~Φ) = T iff V(Φ) ≠ T;
   V(~Φ) = F iff V(Φ) = T.
2. V(Φ & Ψ) = T iff both V(Φ) = T and V(Ψ) = T;
   V(Φ & Ψ) = F iff either V(Φ) ≠ T or V(Ψ) ≠ T, or both.
3. V(Φ ∨ Ψ) = T iff either V(Φ) = T or V(Ψ) = T, or both;
   V(Φ ∨ Ψ) = F iff both V(Φ) ≠ T and V(Ψ) ≠ T.
4. V(Φ → Ψ) = T iff either V(Φ) ≠ T or V(Ψ) = T, or both;
   V(Φ → Ψ) = F iff both V(Φ) = T and V(Ψ) ≠ T.
5. V(Φ ↔ Ψ) = T iff either V(Φ) = T and V(Ψ) = T, or V(Φ) ≠ T and V(Ψ) ≠ T;
   V(Φ ↔ Ψ) = F iff either V(Φ) = T and V(Ψ) ≠ T, or V(Φ) ≠ T and V(Ψ) = T.
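The five rules of Table 3.1 can be implemented directly as a recursive evaluator. This Python sketch is my own illustration, not the book's apparatus; in particular, the nested-tuple encoding of formulas is my convention:

```python
# A minimal recursive evaluator mirroring valuation rules 1-5 of Table 3.1.
# Formulas are nested tuples; e.g. ('->', 'P', ('&', 'Q', ('~', 'R')))
# encodes 'P -> (Q & ~R)'. (The tuple encoding is my own, not the book's.)
def V(formula, valuation):
    if isinstance(formula, str):          # a sentence letter: look it up
        return valuation[formula]
    op = formula[0]
    if op == '~':                         # rule 1: true iff negatum is not true
        return not V(formula[1], valuation)
    left = V(formula[1], valuation)
    right = V(formula[2], valuation)
    if op == '&':                         # rule 2
        return left and right
    if op == 'v':                         # rule 3
        return left or right
    if op == '->':                        # rule 4
        return (not left) or right
    if op == '<->':                       # rule 5
        return left == right
    raise ValueError("unknown operator: %r" % op)

# 'P -> (Q & ~R)' on the valuation assigning T to 'P', T to 'Q', F to 'R':
result = V(('->', 'P', ('&', 'Q', ('~', 'R'))),
           {'P': True, 'Q': True, 'R': False})
print(result)
```

Because every clause bottoms out in a Boolean, the evaluator returns exactly one of two values for any formula on any valuation: bivalence, built in just as the rules require.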
We will use this numbering for future reference.

Exercise 3.1
The five operators discussed in this section are a somewhat arbitrary selection from among many possible truth-functional operators, some of them expressible by common words or phrases of English. Invent symbols and formulate truth tables and a valuation rule for binary operators expressible by these English terms (you may find it easier to do the truth tables first):

1. exclusive ‘or’
2. ‘. . . unless . . .’
3. ‘neither . . . nor . . .’
4. ‘not both . . . and . . .’; this is sometimes called the nand operator.

3.2 TRUTH TABLES

The valuation rules tell us what the operators of propositional logic mean. But they do more than that. They also enable us to understand why in some cases one formula must be true if others are true; that is, they enable us to understand why some argument forms are valid—and not merely to understand, but to confirm our understanding by calculation. Because propositional logic makes such calculations possible, it is sometimes called the propositional calculus.

In Chapter 1 we said that an argument is valid iff it has no counterexample—that is, iff there is no possible situation in which its premises are true but its conclusion is untrue. In this chapter we have shifted our attention from specific arguments to argument forms. For forms, too, we may define a notion of counterexample:

DEFINITION A counterexample to a sequent or argument form is a valuation on which its premises are true and its conclusion is not true.

This notion of a counterexample is closely related to the earlier one. Given a counterexample to a sequent, we can always convert it to a counterexample to an argument that is an instance of that sequent by giving an appropriate interpretation to the form’s sentence letters. Consider, for example, the invalid sequent ‘P ∨ Q ⊢ Q’. The valuation V such that V(‘P’) = T and V(‘Q’) = F is a counterexample to this sequent.
For since V(‘P’) = T, by the valuation rule for disjunction V(‘P ∨ Q’) = T. But V(‘Q’) = F. That is, V is a valuation that makes the premise ‘P ∨ Q’ of this sequent true and its conclusion ‘Q’ untrue.

Now any interpretation that correlates ‘P’ with a true statement and ‘Q’ with a false one produces an instance of this sequent that has a counterexample. And, indeed, this counterexample will describe an actual situation. (An actual situation is, of course, a kind of possible situation; anything actual can be coherently described.) For example, suppose we interpret ‘P’ by the true statement ‘People are mammals’ and ‘Q’ by the false statement ‘Quail are mammals’. Then (retaining the ‘∨’ symbol) this interpretation yields the following instance of the sequent:

People are mammals ∨ Quail are mammals.
∴ Quail are mammals.

And in the actual situation the premise is true but the conclusion isn’t.

Conversely, given a counterexample to an instance of a sequent, we can always construct a counterexample to the sequent by assigning to its sentence letters the truth values of the corresponding sentences in the counterexample to the instance. Consider, for example, this argument, which is an instance of the sequent ‘P ⊢ P & Q’:

Bill is a prince.
∴ Bill is a prince & Jill is a queen.

Here is a counterexample to this argument: Bill is a prince, but Jill, the miller’s daughter, is a poor but honest maiden, not a queen. This counterexample makes ‘Bill is a prince’ true and ‘Jill is a queen’ false. We can turn it into a counterexample to the sequent by ignoring our interpretation of the sentence letters and assigning these truth values directly to the corresponding sentence letters themselves. The result is the valuation V such that V(‘P’) = T and V(‘Q’) = F, which is a counterexample to the sequent. In this way we can always convert a counterexample to an instance of a sequent into a counterexample to the sequent itself, and vice versa.
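The conversion between instances and valuations can also be run mechanically: stripping away the interpretation, a counterexample to ‘P ⊢ P & Q’ is just a valuation found by search. A Python sketch of mine, not the book's:

```python
from itertools import product

def premise(v):       # 'P'
    return v["P"]

def conclusion(v):    # 'P & Q'
    return v["P"] and v["Q"]

# Search the four valuations for one making the premise true
# and the conclusion untrue.
counterexamples = []
for p, q in product([True, False], repeat=2):
    v = {"P": p, "Q": q}
    if premise(v) and not conclusion(v):
        counterexamples.append(v)

print(counterexamples)
```

The single valuation found, P true and Q false, is exactly the one the Bill-and-Jill story describes once the interpretation is ignored.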
This realization leads us to a new understanding of the concept of a valid argument form. In Section 2.1, we defined a valid argument form as a form all of whose instances are valid arguments. Thus an argument form is valid iff none of its instances have counterexamples. But we have just seen that for each possible situation that is a counterexample to an instance there is a valuation that is a counterexample to the form, and vice versa. Therefore, to say that no instances of the form have counterexamples is equivalent to saying that the form itself has no counterexamples. Thus we may equally well define validity for an argument form as follows:

DEFINITION A sequent or argument form is valid iff there is no valuation on which its premises are true and its conclusion is not true.

Likewise, since an invalid form is just one that has an instance with a counterexample, and since there is a counterexample to some instance iff the form has a counterexample, we may likewise redefine the concept of invalidity for an argument form:

DEFINITION A sequent or argument form is invalid iff there is at least one valuation on which its premises are true and its conclusion is not true.

We shall rely on these new definitions from now on. Central to both is the concept used by the valuation rules to define truth conditions: the concept of a valuation. Thus these definitions illuminate the relationship between the concepts of validity and invalidity and the valuation rules. The remainder of this chapter shows how to utilize this relationship to develop computational tests for validity and other semantic properties.
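The new definitions are themselves computational: a sequent is valid iff an exhaustive search of valuations turns up no counterexample. The sketch below is my illustration (the encoding of formulas as predicates on valuation dictionaries is my convention, not the book's):

```python
from itertools import product

def is_valid(premises, conclusion, letters):
    """Valid iff NO valuation makes every premise true and the conclusion untrue."""
    for values in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False        # this valuation is a counterexample
    return True

# 'P v Q |- Q' is invalid; 'P & Q |- Q' is valid:
print(is_valid([lambda v: v["P"] or v["Q"]], lambda v: v["Q"], ["P", "Q"]))
print(is_valid([lambda v: v["P"] and v["Q"]], lambda v: v["Q"], ["P", "Q"]))
```

The loop is nothing more than the truth-table test of the next pages, row by row.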
As a first step in this direction we note that, given a valuation, the valuation rules enable us to calculate the truth value of a formula or set of formulas from the truth values assigned to their component sentence letters. For example, given the valuation V such that V(‘P’) = T, V(‘Q’) = T, and V(‘R’) = F, we can calculate the truth value of the formula ‘P → (Q & ~R)’ as follows. Since V(‘R’) ≠ T, by the valuation rule for negation, V(‘~R’) = T. And since both V(‘Q’) = T and V(‘~R’) = T, by the valuation rule for conjunction V(‘Q & ~R’) = T. And, finally, since both V(‘P’) = T and V(‘Q & ~R’) = T, by the valuation rule for the material conditional, V(‘P → (Q & ~R)’) = T.

We may list the results of such calculations for all the valuations of a formula or set of formulas on a truth table. If we do this for the set of formulas that comprises a sequent, the table will display all the possible valuations of the premises and conclusion, each as a single horizontal line. We can then, simply by scanning down the table, check to see if there is a line (i.e., a valuation) on which the premises are true and the conclusion is false. If so, that line represents a counterexample to the sequent and the sequent is invalid. If not, then (since all the valuations of the sequent are displayed on the table) there is no counterexample and the sequent is valid. Here at last is a simple, mathematically rigorous test for validity, one that relies on neither intuition nor imagination!

Let’s try it out. Our example will be a sequent expressing modus ponens, where ‘→’ is now explicitly understood as the material conditional. This sequent contains only two sentence letters, so it has four possible valuations (both ‘P’ and ‘Q’ true, ‘P’ true and ‘Q’ false, ‘P’ false and ‘Q’ true, both ‘P’ and ‘Q’ false), which we list in the two leftmost columns of the table.
Then, beneath each formula of the sequent, we write its truth value on each of those valuations, like this:

P  Q  |  P → Q    P   ⊢   Q
T  T  |    T      T       T
T  F  |    F      T       F
F  T  |    T      F       T
F  F  |    T      F       F

Each horizontal line represents a single valuation. For example, the second line from the bottom represents the valuation V such that V('P') = F and V('Q') = T. To the right, below each formula of the sequent, is listed the truth value of that formula on that valuation. On this valuation, for example, 'P → Q' is true.

Since the truth table is a complete list of valuations, if there is a valuation on which the premises are true and the conclusion is not, it will show up as a line on the table. In this case, there is no such line, that is, no counterexample. (The only valuation on which both premises are true is the first one listed, the valuation V such that V('P') = T and V('Q') = T. But on this valuation the form's conclusion 'Q' is true.) Thus modus ponens is valid for the material conditional.

Affirming the consequent, which is sometimes confused with modus ponens, is, as we saw in Section 2.1, intuitively invalid. Its truth table confirms our intuitions:

P  Q  |  P → Q    Q   ⊢   P
T  T  |    T      T       T
T  F  |    F      F       T
F  T  |    T      T       F
F  F  |    T      F       F

On the third valuation listed in the table the premises 'P → Q' and 'Q' are both true, but the conclusion 'P' is false. This valuation is therefore a counterexample to the sequent, proving it invalid.

We can use the counterexample displayed in the truth table to construct instances of the sequent that are invalid arguments. Since the valuation which makes 'P' false and 'Q' true provides a counterexample, we need merely substitute any sentence that is actually false for 'P' and any sentence that is actually true for 'Q' to obtain an invalid instance. Since these are the truth values these sentences have in the actual situation, a description of the actual situation constitutes a counterexample to that instance. Let 'P', for example, be the false sentence 'Logic is a kind of biology' and 'Q' the true sentence 'Logic is an intellectual discipline'.
Then we obtain this instance:

Logic is a kind of biology → Logic is an intellectual discipline.
Logic is an intellectual discipline.
∴ Logic is a kind of biology.

The premises of this argument are true and its conclusion false in the actual situation.

This technique sometimes yields puzzling instances whose conditional premise, though actually true, seems false if we confuse '→' with 'if . . . then'. But keeping the truth conditions for '→' distinctly in mind resolves the puzzlement.

To obtain the values listed under the formulas in the preceding tables, we just copied them from one of the leftmost columns (if the formula was a sentence letter) or derived them directly from the valuation rules (in the case of the conditional formulas). With more complex formulas, however, we may need to apply the valuation rules successively, a step at a time, to calculate truth values for whole formulas. Consider the sequent '(P & Q) ∨ (~P & ~Q) ⊢ ~P ↔ ~Q'. (To give this some intuitive content, we might interpret 'P' as the statement 'The princess dines' and . . .)

4. Q ⊢ Q → P
5. P → Q, ~P ⊢ ~Q
6. P → Q, ~Q ⊢ ~P
7. P ⊢ Q → P
8. ~Q ⊢ Q → P
9. P ⊢ P ↔ Q
10. ~(P → Q) ⊢ P & ~Q
11. P ∨ Q ⊢ P ↔ Q
12. P & Q ⊢ P → Q
13. P & Q ⊢ Q & P
14. P ∨ Q, P → R, Q → R ⊢ R
15. P ∨ Q, Q ∨ R ⊢ P ∨ R
16. P → Q, Q → P ⊢ P ↔ Q
17. P ⊢ ~(P & ~P)
18. ~(P & Q) ⊢ ~P & ~Q
19. ~(P ∨ ~P) ⊢ Q
20. P → (P → Q) ⊢ P → (P & Q)

Exercise 3.2.2
Use truth tables to determine whether the following formulas are valid, contingent, or inconsistent. Write your answer beside the table.

1. P → ~P
2. P → ~~P
3. P ↔ ~~P
4. P ↔ P
5. (P ∨ Q) ∨ (~P & ~Q)
6. P & ~P
7. P ∨ ~P
8. (P & (P → Q)) → Q
9. (P ∨ Q) ↔ (Q ∨ P)
10. (P → Q) ↔ (~P ∨ Q)

Exercise 3.2.3
Use truth tables to determine whether the following sets of formulas are consistent or inconsistent. Write your answer beside the table.

1. P → Q, Q → ~P
2. P ↔ Q, Q ↔ ~P
3. P ∨ Q, ~P, ~Q
4. P & Q, ~P
5. ~P & Q, P ∨ Q

Exercise 3.2.4
1. Use a truth table to verify that 'P ↔ Q' is logically equivalent both to '(P → Q) & (Q → P)' and to '(P & Q) ∨ (~P & ~Q)'.
2. Use a truth table to verify that 'P & Q' is logically equivalent to '~(~P ∨ ~Q)'.
3. Use a truth table to verify that 'P ∨ Q' is logically equivalent to '~(~P & ~Q)'.
4. Use a truth table to verify that 'Q ∨ P' and '~Q → P', which are both ways of symbolizing 'P unless Q', are equivalent.
5. Find equivalents for the forms Φ → Ψ and Φ ↔ Ψ in terms of '~' and '&', and show that they are logical equivalents by constructing the appropriate truth tables.
6. Find a logical equivalent for Φ ∨ Ψ in terms of '→', and demonstrate the equivalence with a truth table. Do the same thing in terms of '&'.

3.3 SEMANTIC TREES

A semantic tree is a device for displaying all the valuations on which a formula or set of formulas is true. Since classical logic is bivalent, the valuations on which the formula or set of formulas is false are then simply those not displayed. Thus trees do the same job as truth tables. But they do it more efficiently; especially for long problems, a tree generally requires less computation and writing than the corresponding truth table. A truth table for a formula or sequent containing n sentence letters has 2ⁿ lines. For n = 10, for example, 2ⁿ = 1024, a good many more lines than we are likely to want to write. But a tree for a formula or sequent with ten sentence letters (or even more) may fit easily within a page. Moreover, as we shall see in Section 7.4, trees have the advantage of being straightforwardly generalizable to predicate logic, which truth tables are not.

Suppose, for instance, that we want a list of the valuations on which the formula '~P & (Q ∨ R)' is true. To obtain this by the tree method, we write the formula and then begin to break it down into those smaller formulas which, according to the valuation rules, must be true in order to make '~P & (Q ∨ R)' true.
Now '~P & (Q ∨ R)' is a conjunction, and a conjunction is true iff both of its conjuncts are true. So we write '~P & (Q ∨ R)', then check it off (to indicate that it has been analyzed), and write its two conjuncts beneath it, like this:

✓ ~P & (Q ∨ R)
     ~P
     Q ∨ R

A formula which has been checked off is in effect eliminated. We need pay no further attention to it. What remains, then, are the two formulas '~P' and 'Q ∨ R'.

1. ✓ P → (Q & (R ∨ S))          Premise
2. ✓ ~(P → Q)                   Negation of conclusion
3.        P                     2 ~→
4.        ~Q                    2 ~→
5.    ~P      ✓ Q & (R ∨ S)     1 →
6.  X 3,5        Q              5 &
7.               R ∨ S          5 &
8.             X 4,6

Both paths close, so there is no valuation which makes the premises but not the conclusion true. Hence the sequent is valid.

Notice that we closed the right branch without analyzing 'R ∨ S'. This is permissible. Any path may be closed as soon as a formula and its negation both appear on it. Analyzing 'R ∨ S' would have split this path into two new paths, but each of these still would have contained both 'Q' and '~Q' and hence each still would have closed. Closing a path as soon as possible saves work.

It also saves work to apply nonbranching rules first. When I began the tree, I had the choice of analyzing either 'P → (Q & (R ∨ S))' or '~(P → Q)' first. I chose the latter, because it is a negated conditional, and the negated conditional rule does not branch. If I had analyzed 'P → (Q & (R ∨ S))', which is a conditional, first, then I would have had to use the conditional rule, which does branch. Then when I analyzed '~(P → Q)', I would have had to write the results twice, once at the bottom of each open path. Analyzing 'P → (Q & (R ∨ S))' first is not wrong, but it requires more writing, as can be seen by comparing the resulting tree with the previous tree:

1. ✓ P → (Q & (R ∨ S))          Premise
2. ✓ ~(P → Q)                   Negation of conclusion
3.    ~P      ✓ Q & (R ∨ S)     1 →
4.    P          P              2 ~→
5.    ~Q         ~Q             2 ~→
6.  X 3,4        Q              3 &
7.               R ∨ S          3 &
8.             X 5,6

Yet this tree, though more complicated, gives the same answer as the first. There is, then, some flexibility in the order of application of the rules.
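The tree rules illustrated above can also be run mechanically. The following Python sketch is my own construction, not the book's: each call analyzes one unchecked compound formula on a path, a branching rule becomes a pair of alternative recursive calls, and a path closes as soon as it contains a formula and its negation. It covers only '~', '&', '∨' (written 'v'), and '→' (written '->'); the representation and names are invented.

```python
# Formulas as nested tuples: a letter is a string; ('~', f), ('&', f, g),
# ('v', f, g), ('->', f, g) build compounds.

def neg(f):
    return ('~', f)

def open_path(formulas):
    """True iff a finished tree grown from these formulas has an open
    path, i.e. iff some valuation makes them all true."""
    for f in formulas:                     # closure: f and ~f on one path
        if neg(f) in formulas:
            return False
    for f in formulas:
        if isinstance(f, str):
            continue                       # a sentence letter: nothing to do
        rest = [g for g in formulas if g != f]   # check f off
        if f[0] == '&':                    # conjunction rule (no branching)
            return open_path(rest + [f[1], f[2]])
        if f[0] == 'v':                    # disjunction rule (branches)
            return open_path(rest + [f[1]]) or open_path(rest + [f[2]])
        if f[0] == '->':                   # conditional rule (branches)
            return open_path(rest + [neg(f[1])]) or open_path(rest + [f[2]])
        if f[0] == '~':
            g = f[1]
            if isinstance(g, str):
                continue                   # negated letter: a literal
            if g[0] == '~':                # negated negation rule
                return open_path(rest + [g[1]])
            if g[0] == '&':                # negated conjunction (branches)
                return (open_path(rest + [neg(g[1])])
                        or open_path(rest + [neg(g[2])]))
            if g[0] == 'v':                # negated disjunction
                return open_path(rest + [neg(g[1]), neg(g[2])])
            if g[0] == '->':               # negated conditional
                return open_path(rest + [g[1], neg(g[2])])
    return True                            # only literals left: path is open

def tree_valid(premises, conclusion):
    """A sequent is valid iff premises plus negated conclusion close."""
    return not open_path(list(premises) + [neg(conclusion)])

# The worked example: every path of the tree for P -> (Q & (R v S)) and
# ~(P -> Q) closes, so 'P -> (Q & (R v S)) |- P -> Q' is valid.
print(tree_valid([('->', 'P', ('&', 'Q', ('v', 'R', 'S')))], ('->', 'P', 'Q')))
# True
```

Because every rule replaces a formula with strictly smaller ones, the recursion always terminates, mirroring the fact that a tree is always finished after finitely many rule applications.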
But in general it is best where possible to apply nonbranching rules before the branching ones.

Let's next test the sequent 'P ↔ Q ⊢ ~(P ↔ R)' for validity. Once again we list the premises and the negation of the conclusion so that the tree searches for counterexamples. In this case the conclusion is a negation, so its negation is a double negative. Here is the tree:

1. ✓ P ↔ Q                      Premise
2. ✓ ~~(P ↔ R)                  Negation of conclusion
3. ✓ P ↔ R                      2 ~~
4.      P               ~P      1 ↔
5.      Q               ~Q      1 ↔
6.   P      ~P       P      ~P  3 ↔
7.   R      ~R       R      ~R  3 ↔
8.         X 4,6   X 4,6

The leftmost and rightmost branches remain open. The leftmost branch reveals that the premise 'P ↔ Q' is true and the conclusion '~(P ↔ R)' false (because its negation is true) on the valuation on which 'P', 'Q', and 'R' are all true. This valuation, in other words, is a counterexample to the sequent. The rightmost branch reveals that the valuation on which 'P', 'Q', and 'R' are all false is also a counterexample. Thus the sequent is invalid.

If we begin a tree, not with premises and a negated conclusion, but with a single formula or set of formulas, as we did in the first examples of this section, the tree tests this formula or set of formulas for consistency. If all paths close, there is no valuation on which the formula or set of formulas is true, and so that formula or set is inconsistent. If one or more paths remain open after the tree is finished, these represent valuations on which the formula or all members of the set are true, and so the formula or set is consistent.

Trees may also be used to test formulas for validity. The easiest way to do this is to search for valuations on which the formula is not true. If no such valuations exist, then the formula is valid. Thus we begin the tree with the formula's negation. If all paths close, there are no valuations on which its negation is true, so that the original formula is true on all valuations. Consider, for example, the formula '(P → Q) ↔ ~(P & ~Q)'.
When we negate it and do a tree, all paths close:

1. ✓ ~((P → Q) ↔ ~(P & ~Q))                  Negation of formula
2.   ✓ P → Q              ✓ ~(P → Q)         1 ~↔
3.   ✓ ~~(P & ~Q)         ✓ ~(P & ~Q)        1 ~↔
4.   ✓ P & ~Q                                3 ~~
5.     P                    P                4 &   2 ~→
6.     ~Q                   ~Q               4 &   2 ~→
7.   ~P      Q            ~P      ~~Q        2 →   3 ~&
8.  X 5,7   X 6,7        X 5,7   X 6,7

Therefore, since there is no valuation on which '~((P → Q) ↔ ~(P & ~Q))' is true, '(P → Q) ↔ ~(P & ~Q)' is true on all valuations, that is, valid.

Notice that I closed the rightmost path before '~~Q' was fully analyzed. This is a legitimate use of the negation rule. If the path had not closed, however, the tree would not be finished until the negated negation rule was applied to '~~Q'.

Here is a list of some of the ways in which trees may be used to test for various semantic properties:

To determine whether a sequent is valid, construct a tree starting with its premises and the negation of its conclusion. If all paths close, the sequent is valid. If not, it is invalid and the open paths display the counterexamples.

To determine whether a formula or set of formulas is consistent, construct a tree starting with that formula (or set of formulas). If all paths close, that formula (or set of formulas) is inconsistent. If not, it is consistent, and the open paths display the valuations that make the formula (or all members of the set) true.

To determine whether a formula is valid, construct a tree starting with its negation. If all paths close, the formula is valid. If not, then the formula is not valid, and the open paths display the valuations on which it is false.

To determine whether a formula is contingent, construct two trees, one to test it for consistency and one to test it for validity. If the formula is consistent but not valid, then it is contingent.

Constructing trees is just a matter of following the rules, but there are a few common errors to avoid. Keep these in mind:

The rules for constructing trees apply only to whole formulas, not to their parts. Thus, for example, the use of ~~ shown below is not permissible:

1. ✓ P → ~~Q     Given
2.   P → Q       1 ~~ (Wrong!)
Although using ~~ on subformulas does not produce wrong answers, it is never necessary and technically is a violation of the double negation rule. Trying to apply other rules to parts of formulas, however, often does produce wrong answers.

A rule applied to a formula cannot affect paths not containing that formula. Consider, for example, the following incomplete tree:

1. ✓ P ∨ (Q ∨ R)       Given
2.   P     ✓ Q ∨ R     1 ∨

Here the formula 'Q ∨ R' at the end of the right-branching path remains to be analyzed. The next step is to apply the disjunction rule to this formula. In doing so, we split this right-branching path but add nothing to the path at the left, for it does not contain 'Q ∨ R' and is in fact already finished.

The negation rule applies only to formulas on the same path. In the following tree, for example, both 'P' and '~P' appear, but neither path closes because the formulas don't appear on the same path:

1. ✓ P → P     Given
2.   ~P    P   1 →

To summarize: A finished tree for a formula or a set of formulas displays all the valuations on which that formula or all members of that set are true. Thus trees do the same work as truth tables, but in most cases they do it more efficiently. Moreover, as we shall see in later chapters, they may be used for some logics to which truth tables are inapplicable.

Exercise 3.3.1
Redo Exercises 3.2.1, 3.2.2, and 3.2.3 using trees instead of truth tables.

Exercise 3.3.2
1. How might trees be used to prove that two formulas are logically equivalent? Explain.
2. To prove a formula valid using trees, we construct a tree from its negation. Is there a way to prove a formula valid by doing a tree on that formula without negating it? Explain.

3.4 VALUATIONS AND POSSIBLE SITUATIONS

We saw in Section 2.1 that while any instance of a valid form is a valid argument, not every instance of an invalid form is an invalid argument. We noted, for example, that this instance of the invalid sequent affirming the consequent is in fact a valid argument:

If some men are saints, then some saints are men.
Some saints are men.
∴ Some men are saints.

How can this be? The answer lies in the distinction between valuations and possible situations. Suppose we let 'S₁' stand for 'Some men are saints' and 'S₂' for 'Some saints are men'. Then we can represent the form of the argument as 'S₁ → S₂, S₂ ⊢ S₁'. Here is its truth table:

S₁ S₂ | S₁ → S₂   S₂  ⊢  S₁
T  T  |    T      T      T
T  F  |    F      F      T
F  T  |    T      T      F
F  F  |    T      F      F

The valuation on which 'S₁' is false and 'S₂' true is a counterexample to the sequent or argument form. But the corresponding situation, the one in which 'Some men are saints' is false and 'Some saints are men' is true, isn't a counterexample to the argument because it isn't a possible situation. The very idea of a situation in which some men are saints but it is not the case that some saints are men (i.e., no saints are men) is nonsense. Of course we can easily find other interpretations of 'S₁' and 'S₂', and consequently other instances of this form, to which the valuation that makes 'S₁' false and 'S₂' true provides a genuine counterexample, even an actual counterexample. But on this particular interpretation, that valuation corresponds to an impossible situation. (Incidentally, so does the valuation that makes 'S₁' true and 'S₂' false.) An impossible situation, if it even makes sense to talk about such a thing, cannot be a counterexample.

Depending on how we interpret the sentence letters, then, a particular valuation may or may not correspond to a possible situation. For many interpretations, all valuations correspond to possible situations. For example, if we let 'S₁' stand for 'It is sunny' and 'S₂' for 'It is Sunday', every line on the truth table above (every valuation) represents a possible situation, and the valuation on which 'S₁' is false and 'S₂' true represents a counterexample to the argument as well as to the form.
In such cases, the statements corresponding to the sentence letters are said to be logically independent. But where the statements corresponding to the sentence letters logically imply one another or exclude one another, in various combinations, some valuations represent impossible situations.

Such nonindependent statements as 'Some men are saints' and 'Some saints are men' have interrelated semantic structures that are not represented in propositional argument forms in which they are symbolized simply as sentence letters. (In this case, the semantic structures in question are relationships among the logical meanings of the words 'some', 'men', and 'saints'.) In Chapter 6 we shall begin to formalize the semantic structures of such statements, and we shall redefine the notion of a valuation so that it reflects more of these semantic structures and yields a more powerful and precise logic. Later we shall explore ways of creating logics that are more powerful and precise still. But at no point shall our concept of a valuation become so sophisticated that a valuation may never represent an impossible situation; that is to say, at no point do we ever achieve a formal semantics or formal logic that reflects all the logical dependencies inherent in natural language.

Certain consequences of this disparity between valuations and possible situations, between formal and informal logic, will haunt us throughout this book:

An invalid sequent may have valid instances. The reason for this we have already seen. The counterexamples to the sequent may on some interpretations represent impossible situations so that there are no possible situations which make the corresponding argument's premises true while its conclusion is untrue. No argument is valid because of having an invalid form, but an argument may be valid in spite of having an invalid form, because of elements of its semantic structure not represented in the form.
A contingent formula may have valid or inconsistent instances. A contingent formula is true on some valuations and false on others. But on some interpretations either the valuations on which the contingent formula is true or those on which it is false may all correspond to impossible situations. In the former case, the interpretation yields an inconsistent instance. In the latter, provided that at least one of the valuations on which the formula is true corresponds to a possible situation, the interpretation yields a valid instance. Example: 'P & Q' is a contingent formula, but the instance 'Some women are mortal & Nothing is mortal' is an inconsistent statement, and the instance 'Every woman is a woman & Every mortal is a mortal' is a valid statement (logical truth). (Check the truth table of 'P & Q' to see which valuations correspond to impossible situations in each case.)

A consistent formula or set of formulas may have inconsistent instances. That is, though there is a valuation that makes the formula or set of formulas true, there may not be a possible situation that makes a particular instance of that formula or set of formulas true, again because the situation corresponding to that valuation may be impossible. Example: The set consisting of the formulas 'P' and 'Q' is consistent, but if we interpret 'P' as 'Smoking is permitted' and 'Q' as 'Smoking is forbidden', the set of statements for which these letters stand is inconsistent.

All of this sounds discouraging. Nevertheless:

All instances of a valid sequent are valid arguments. A valid sequent has no valuation on which its premises are true but its conclusion is not true. Some valuations may on a particular interpretation correspond to impossible situations. Yet since a valid sequent has no valuations representing situations (possible or impossible) in which the premises are true and the conclusion is false, none of its valuations represents a possible situation that is a counterexample to the instance. Hence, if a sequent is valid, all of its instances must be valid as well. Valid sequents are, in other words, perfectly reliable patterns of inference.

All instances of a valid formula are logical truths. A valid formula is a formula true on all valuations. Even if on a given interpretation some valuations of such a formula do not represent possible situations, the formula is still true on all the others and hence true in all the situations that are possible. Therefore any statement obtained by interpreting a valid formula must be true in all possible situations. That is, it must be a logical truth.

All instances of an inconsistent formula are inconsistent statements. An inconsistent formula is true on no valuations. Hence, even if on a given interpretation some of its valuations represent impossible situations, still the formula is true on none of the remaining valuations which represent possible situations. Therefore any statement obtained by interpreting an inconsistent formula is not true in any possible situation.

Under the same interpretation, logically equivalent formulas have as their instances logically equivalent statements. Logically equivalent formulas are formulas whose truth value is the same on all valuations. Once again, even if a given interpretation rules out some of these valuations as impossible, still the formulas will have the same truth value on the remaining valuations, the ones representing possible situations. Hence any two statements obtained by interpreting them will have the same truth values in all possible situations. That is, they will be equivalent statements.
To summarize: Formal validity (for both formulas and sequents), inconsistency, and equivalence are reliable indicators of their informal counterparts. Formal invalidity, contingency, and consistency are not.

CHAPTER 4
CLASSICAL PROPOSITIONAL LOGIC: INFERENCE

4.1 CHAINS OF INFERENCE

Most people can at best understand arguments that use about three or four premises at once. For more complicated arguments, we generally break the argument down into more digestible chunks. Beginning with one or two or three premises, we draw a subconclusion, which functions as a stopping point on the way to the main conclusion the argument aims to establish. This subconclusion summarizes the contribution of these premises to the argument so that they may henceforth be forgotten. This subconclusion is then combined with a few more premises to draw a further conclusion, and the process is repeated, step by small step, until the final conclusion emerges. The following example illustrates the utility of breaking complex inferences down into smaller ones:

The meeting must be held on Monday, Wednesday, or Friday.
At least four of these five people must be there: Al, Beth, Carla, Dave, and Em.
Em can't come on Monday or Wednesday.
Carla and Dave can't both come on Monday or Friday, though either of them could come alone on those days.
Al can come only on Monday and Friday.
∴ The meeting must be held on Friday.

The argument is difficult to understand all at once, but it becomes easy if analyzed into small steps. For example, from the premises

Em can't come on Monday or Wednesday.

and

Al can come only on Monday and Friday.

we can deduce the subconclusion

Neither Al nor Em can come on Wednesday.

And from this subconclusion together with the premise

At least four of these five people must be there: Al, Beth, Carla, Dave, and Em.

we can further conclude

The meeting can't be held on Wednesday.

In addition, from the premises

Em can't come on Monday or Wednesday.
and

Carla and Dave can't both come on Monday or Friday, though either of them could come alone on those days.

we can conclude

Em and either Carla or Dave can't come on Monday.

Putting this together with the premise

At least four of these five people must be there: Al, Beth, Carla, Dave, and Em.

yields the conclusion

The meeting can't be held on Monday.

Combining this with the previously derived conclusion that the meeting can't be held on Wednesday and with the premise

The meeting must be held on Monday, Wednesday, or Friday.

we get the conclusion

The meeting must be held on Friday.

Thus we analyze a complicated and forbidding inference into a sequence of simple inferences. The result of this process is summarized below in a more compact form, which we shall call a proof. A proof begins with the premises, or assumptions, of the unanalyzed argument, listed on separately numbered lines. We indicate which statements are assumptions by writing an 'A' to the right of each. Each successive conclusion is written on a new numbered line, with the line numbers of the premises (either assumptions or previous conclusions) from which it was deduced listed to the right. Here is our reasoning recorded as a proof:

1. The meeting must be held on Monday, Wednesday, or Friday.   A
2. At least four of these five people must be there: Al, Beth, Carla, Dave, and Em.   A
3. Em can't come on Monday or Wednesday.   A
4. Carla and Dave can't both come on Monday or Friday, though either of them could come alone on those days.   A
5. Al can come only on Monday and Friday.   A
6. Neither Al nor Em can come on Wednesday.   3, 5
7. The meeting can't be held on Wednesday.   2, 6
8. Em and either Carla or Dave can't come on Monday.   3, 4
9. The meeting can't be held on Monday.   2, 8
10. The meeting must be held on Friday.   1, 7, 9

The series of conclusions is listed on lines 6-10. None of these conclusions is drawn from more than three premises. Each inference is plainly valid.
The proof ends when the desired conclusion is reached. In the remainder of this chapter, we explore a more formal version of this proof technique.

Exercise 4.1
Analyze each of the following arguments into simple inferences involving at most three premises each, and write the analyzed argument as a proof. Each inference in this proof should be obviously valid in the informal sense of validity discussed in Chapter 1, but it need not exemplify any prescribed formal rule. (Some simple formal inference rules are introduced in the next section.) There is not just one right answer; each argument may be analyzed in many ways.

1. If the person exists after death, then the person is not a living body.
   The person is not a dead body.
   Any body is either alive or dead.
   The person exists after death.
   ∴ The person is not a body.

2. x is an odd number.
   x + y = 25.
   x > 3.
   30/x is a whole number.
   x < 10.
   ∴ y = 20.

3. You will graduate this semester.
   In order to graduate this semester, you must fulfill the humanities requirement this semester.
   You fulfill the humanities requirement when and only when you have taken and passed either (1) two courses in literature and a single course in either philosophy or art or (2) two courses in philosophy and a single course in either literature or art.
   You have taken and passed one art course but have taken no courses in philosophy or literature.
   You have time to take at most two courses this semester.
   Among the philosophy courses, only one is offered at a time when you can take it.
   ∴ You will take two literature courses this semester.

4.2 SIMPLE FORMAL INFERENCE RULES

In this section we introduce the idea of a proof, not for an argument but for a sequent or argument form. The idea, once again, is to break a complicated or dubious inference down into smaller inferences, each of which has a simple form. In formal proofs we require that these smaller inferences have one of a well-defined set of forms that we already recognize as valid.
In the system of formal logic that we shall adopt there are ten such forms. The most familiar of them is modus ponens (introduced in Section 2.1). To illustrate, let's construct a formal proof to demonstrate the validity of the sequent:

P → (Q → (S → T)), P, P → Q, S ⊢ T

The first step is to write the assumptions in a numbered list, indicating that they are assumptions by writing an 'A' to the right of each:

1. P → (Q → (S → T))   A
2. P                   A
3. P → Q               A
4. S                   A

Then we look for familiar inference patterns among the premises. For example, from premises 2 and 3, we may infer by modus ponens the conclusion 'Q'. So we write this as a conclusion, listing to the right the line numbers of the premises from which it was inferred and the form or rule of inference by which it was inferred:

5. Q                   2, 3 modus ponens

The formula 'P', which is assumed on line 2, is also the antecedent of the conditional assumption on line 1. Though the consequent of this conditional is a complex formula, rather than a single sentence letter, we still recognize here another instance of modus ponens. So we draw the conclusion:

6. Q → (S → T)         1, 2 modus ponens

(We have dropped the unnecessary outer brackets, as usual, and will continue to do so without comment from now on.) Now lines 5 and 6 can be combined to obtain yet another conclusion:

7. S → T               5, 6 modus ponens

And lines 4 and 7 yield the conclusion:

8. T                   4, 7 modus ponens

This is the conclusion we wanted to establish. And now we have succeeded. For by showing that it is possible to get from the assumptions of the sequent to its conclusion by simple steps of valid reasoning, we have shown that the sequent itself is valid.

To see why it is valid, consider a preliminary conclusion C₁, validly drawn from some initial set of premises. Now let new premises be used together with C₁ to validly draw a second conclusion C₂.
Since C₁ was validly drawn, by the definition of validity C₁ is true on any valuation on which the original premises are true. And similarly, since C₂ validly follows from C₁ together with the new premises, C₂ is true on any valuation on which both C₁ and the new premises are true. But since C₁ is true on any valuation on which the original premises are true, C₁ is true on any valuation on which the original premises and also the new premises are true. Hence, since C₂ is true on any valuation on which both C₁ and the new premises are true, C₂ is true on any valuation on which both the initial premises and the new premises are true. That is, the inference from the initial premises together with the new premises to C₂ is valid. Further, if we were to add still more premises and validly draw yet a third conclusion C₃, the same reasoning would show that the inference combining all three sets of premises to the conclusion C₃ is also valid. And so it goes. Thus, by stringing together valid inferences, we prove the validity of the inference whose premises are all the assumptions made along the way and whose conclusion is the final conclusion of the string.

In the rest of this section, we shall show how to break down any valid sequent in propositional logic into a sequence of simple and (more or less) obviously valid patterns of reasoning. Such sequences, as exemplified by lines 1-8 above, are called proofs. Modus ponens is not the only pattern used in proofs. We shall construct proofs in propositional logic by the so-called natural deduction method, which utilizes ten distinct patterns of reasoning, or rules of inference (or inference rules), of which modus ponens is one. (There are many other methods of proof, which use different types and numbers of rules, though for classical logic, at least, they all yield the same results. Some of the alternative methods are discussed in Section 4.5.)

Proofs, of course, are only as credible as their inference rules.
As we introduce each rule, we shall verify its validity using the semantics developed in Chapter 3. This will enable us to see that our proof technique is sound, that is, that if we start with assumptions true on some valuation, we shall always, no matter how many times we apply these rules, arrive at conclusions that are likewise true on that valuation. Thus a proof establishes that there are no counterexamples to the sequent of which it is a proof; it is a third formal method (in addition to truth tables and trees) for showing that a sequent is valid. In Section 5.10 we shall show that the entire system of rules introduced here is not only sound but also complete, that is, capable of providing a proof for every valid sequent of propositional logic.

Our first inference rule, modus ponens, may be stated as follows:

Given any conditional and its antecedent, infer its consequent.

Or using the Greek letters of Section 2.3:

Given Φ → Ψ and Φ, infer Ψ.

Φ and Ψ may be any formulas, simple or complex. For example, in the inference from assumptions 1 and 2 to conclusion 6 in the proof above, Φ is 'P' and Ψ is 'Q → (S → T)'.

Modus ponens is clearly valid, as we can see by examining its truth table. That is, no matter what the truth values of Φ and Ψ may be, it can never happen that Φ → Ψ and Φ are true but Ψ is untrue:

Φ  Ψ  |  Φ → Ψ    Φ   ⊢   Ψ
T  T  |    T      T       T
T  F  |    F      T       F
F  T  |    T      F       T
F  F  |    T      F       F

In addition to modus ponens, we shall introduce nine other rules, for a total of ten: two for each of the five logical operators. For each operator one of the two rules, called an introduction rule, allows us to reason to (introduces) conclusions in which that operator is the main operator. The second rule allows us to reason from premises in which that operator is the main operator; it is known as the operator's elimination rule, because it enables us to break a premise into its components, thus "eliminating" the operator.

Modus ponens is the elimination rule for the conditional.
Given a formula Φ, it allows us to "eliminate" the conditional operator from Φ → Ψ and wind up just with Ψ. In doing proofs, then, we shall call modus ponens conditional elimination, which we abbreviate as ‘→E’. Officially, we state the rule of modus ponens as follows:

Conditional Elimination (→E) Given (Φ → Ψ) and Φ, infer Ψ.

The introduction rule for the conditional has some special features which are best appreciated only after some practice with the other rules. We shall therefore consider it later.

Perhaps the simplest rules are those for conjunction. Indeed, these may seem utterly trivial. Here is the conjunction elimination rule:

Conjunction Elimination (&E) From (Φ & Ψ), infer either Φ or Ψ.

That is, we may "eliminate" a conjunction by inferring one or the other of its conjuncts. (We can, if we like, infer both, but that takes two applications of the rule.) Conjunction elimination is sometimes known as simplification. This rule is obviously valid. The only way Φ & Ψ can be true on a valuation is if both of its conjuncts are true on that valuation. Hence there is no valuation on which Φ & Ψ is true and either of its conjuncts is untrue. The following proof for the sequent ‘R → (P & Q), R ⊢ Q’ exemplifies both &E and →E:

1. R → (P & Q)    A
2. R              A
3. P & Q          1, 2 →E
4. Q              3 &E

As before, we begin by writing the assumptions on numbered lines (lines 1 and 2) and marking them with an ‘A’ to indicate that they are assumptions. ‘→E’ (modus ponens) applied to lines 1 and 2 gets us the conclusion ‘P & Q’ at line 3, and from this by conjunction elimination we obtain the desired conclusion ‘Q’.

Let's now consider the conjunction introduction rule. This rule enables us to infer conclusions whose main operator is a conjunction:

Conjunction Introduction (&I) From Φ and Ψ, infer (Φ & Ψ).

Conjunction introduction is also called conjunction or (more rarely) adjunction. It allows us to join any two previously established formulas together with ‘&’.
If these formulas are true, then by the valuation rule for ‘&’ the resulting conjunction must be true as well, and so clearly the rule is valid. We may illustrate both &E and &I by constructing a proof for the sequent ‘P & Q ⊢ Q & P’. (This sequent is hardly less obviously valid than the rules themselves, but it nicely illustrates their use.)

1. P & Q    A
2. P        1 &E
3. Q        1 &E
4. Q & P    2, 3 &I

Starting with the assumption ‘P & Q’, we break it into its components at lines 2 and 3 by &E, then introduce the desired conclusion by &I (whose purpose, remember, is to create conjunctive conclusions) at line 4. Intuitively, the reasoning is this: Given that the conjunction ‘P & Q’ is true, ‘P’ is true and ‘Q’ is true as well. But then the conjunction ‘Q & P’ is also true.

The order in which the premises are listed is irrelevant to the application of a rule of inference. Thus, even though ‘P’ is listed on line 2 of this proof and ‘Q’ on line 3, we may legitimately infer ‘Q & P’, in which ‘Q’ comes first. Moreover, the use of two different Greek letters in stating a rule does not imply that the formulas designated by those letters must be different. In the &I rule, for example, Φ and Ψ can stand for any formulas without restriction—even for the same formula. The following proof of the trivial but valid sequent ‘P ⊢ P & P’ illustrates this point:

1. P        A
2. P & P    1, 1 &I

Here we apply the rule of &I (from Φ and Ψ, infer (Φ & Ψ)) to a case in which both Φ and Ψ are ‘P’. That is, we infer from ‘P’ and ‘P’ again the conclusion ‘P & P’. Since we have used ‘P’ twice, we list line 1 twice in the annotation. Though odd, this sort of move is quite legitimate, not only for &I but (where applicable) for other rules as well. Given, for example, that the sun is hot, it validly follows that the sun is hot and the sun is hot—though we are not likely to have much use for that conclusion.

The elimination and introduction rules for the biconditional are closely related to those for conjunction.
This is not surprising, since ‘P ↔ Q’ has the same truth conditions as the conjunction ‘(P → Q) & (Q → P)’. Thus, like the conjunction rules, the biconditional rules simply break the complex formula into its conditional components or assemble it from these components:

Biconditional Elimination (↔E) From (Φ ↔ Ψ), infer either (Φ → Ψ) or (Ψ → Φ).

Biconditional Introduction (↔I) From (Φ → Ψ) and (Ψ → Φ), infer (Φ ↔ Ψ).

As with conjunction elimination, the biconditional elimination rule gives us a choice of which of the two components to infer. This rule is used here in a proof of ‘P ↔ Q, P ⊢ P & Q’:

1. P ↔ Q    A
2. P        A
3. P → Q    1 ↔E
4. Q        2, 3 →E
5. P & Q    2, 4 &I

We "eliminate" the biconditional at line 3, obtaining one of its component conditionals, ‘P → Q’. Next we use modus ponens (→E) at line 4 to obtain ‘Q’, one of the conjuncts of our desired conclusion. The other conjunct, ‘P’, was already given as an assumption. Conjunction introduction enables us to combine these conjuncts into our conclusion at line 5.

The following proof of ‘(P → Q) → (Q → P), P → Q ⊢ P ↔ Q’ illustrates the use of the other biconditional rule, biconditional introduction:

1. (P → Q) → (Q → P)    A
2. P → Q                A
3. Q → P                1, 2 →E
4. P ↔ Q                2, 3 ↔I

We next consider the disjunction introduction rule (sometimes called the addition rule):

Disjunction Introduction (∨I) From Φ, infer either (Φ ∨ Ψ) or (Ψ ∨ Φ).

That is, given any formula, we may infer its disjunction (as either first or second disjunct) with any other formula. If, for example, my best friend is Jim, then it is certainly true that either my best friend is Jim or my best friend is Sally. And it is obvious from the valuation rule for ‘∨’ that this pattern is valid in general, for whenever either disjunct of any disjunction is true, the disjunction itself is also true.

The following proof of ‘(P ∨ Q) → R, P ⊢ R ∨ S’ illustrates the use of ∨I:

1. (P ∨ Q) → R    A
2. P              A
3. P ∨ Q          2 ∨I
4. R              1, 3 →E
5.
R ∨ S          4 ∨I

To use the conditional assumption ‘(P ∨ Q) → R’ we must "eliminate" the conditional by →E. But to do this we must first obtain its antecedent, ‘P ∨ Q’. Since we are given ‘P’ as an assumption, we can infer ‘P ∨ Q’ simply by applying ∨I at line 3. This enables us to derive ‘R’ at line 4. The conclusion we want to reach, however, is ‘R ∨ S’. But this can be deduced from ‘R’ by applying ∨I once again, this time to line 4.

The disjunction elimination rule, ∨E, allows us to draw conclusions from disjunctive premises, provided that we have established certain conditionals:

Disjunction Elimination (∨E) From (Φ ∨ Ψ), (Φ → Θ), and (Ψ → Θ), infer Θ.

Disjunction elimination is also known as constructive dilemma. It is valid, as can be seen by inspection of this truth table: on each row on which the three premises are all true, Θ is true as well.

Φ  Ψ  Θ |  Φ ∨ Ψ   Φ → Θ   Ψ → Θ  |  Θ
T  T  T |    T       T       T    |  T
T  T  F |    T       F       F    |  F
T  F  T |    T       T       T    |  T
T  F  F |    T       F       T    |  F
F  T  T |    T       T       T    |  T
F  T  F |    T       T       F    |  F
F  F  T |    F       T       T    |  T
F  F  F |    F       T       T    |  F

Consider, for example, the argument:

ABCD is either a rectangle or a parallelogram.
If ABCD is a rectangle, then it is a quadrilateral.
If ABCD is a parallelogram, then it is a quadrilateral.
∴ ABCD is a quadrilateral.

We may symbolize this argument as ‘R ∨ P, R → Q, P → Q ⊢ Q’. Its proof is a single step of ∨E:

1. R ∨ P    A
2. R → Q    A
3. P → Q    A
4. Q        1, 2, 3 ∨E

Here Φ is ‘R’, Ψ is ‘P’, and Θ is ‘Q’. Notice that since ∨E uses three premises, we must cite three lines to the right when using it. Sometimes the same line is cited twice, as in the proof of ‘P ∨ P, P → Q ⊢ Q’:

1. P ∨ P    A
2. P → Q    A
3. Q        1, 2, 2 ∨E

In this proof, Φ and Ψ are both ‘P’ and Θ is ‘Q’. The first premise is, of course, redundant, but redundancy does not affect validity.

The most interesting uses of ∨E are those in which the conditional premises necessary for proving the conclusion are not given as assumptions but must themselves be proved. This, however, requires the use of the rule →I, which is introduced in the next section.
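Inspecting eight rows by hand is easy enough, but the check can also be delegated to a short exhaustive loop. The sketch below (Python; the helper name `implies` is our own) confirms both disjunction rules: ∨E has no counterexample among the eight valuations of Φ, Ψ, Θ, and ∨I has none among the four valuations of Φ, Ψ.

```python
from itertools import product

def implies(a, b):
    """Truth function for the material conditional."""
    return (not a) or b

# Disjunction elimination: on every valuation making (Phi v Psi),
# (Phi -> Theta), and (Psi -> Theta) all true, Theta is true too.
for phi, psi, theta in product([True, False], repeat=3):
    if (phi or psi) and implies(phi, theta) and implies(psi, theta):
        assert theta

# Disjunction introduction: whenever Phi is true, so is each disjunction.
for phi, psi in product([True, False], repeat=2):
    if phi:
        assert (phi or psi) and (psi or phi)

print("vI and vE: no counterexamples")
```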
The negation elimination rule, which is sometimes called the double negation rule, allows us to "cancel out" double negations when these have the rest of the formula in their scope:

Negation Elimination (~E) From ~~Φ, infer Φ.

This rule, too, is obviously valid. For by the valuation rule for ‘~’, if ~~Φ is true, then ~Φ is false and hence Φ is true. To say, for example, that I am not not tired is the same thing as to say that I am tired. Here is an example of the use of negation elimination, in the proof of ‘P → ~~Q, P ⊢ Q’:

1. P → ~~Q    A
2. P          A
3. ~~Q        1, 2 →E
4. Q          3 ~E

Neither the negation elimination rule nor any of the other rules allow us to operate inside formulas. It is a mistake, for example, to do the proof just illustrated this way:

1. P → ~~Q    A
2. P          A
3. P → Q      1 ~E (wrong!)
4. Q          2, 3 →E

Negation elimination operates only on doubly negated formulas. ‘P → ~~Q’ is a conditional, not a doubly negated formula. We must use conditional elimination to separate ‘~~Q’ (which is a doubly negated formula) from the conditional before negation elimination can be applied.

It is not really invalid to eliminate double negations inside formulas; it's just not a legitimate use of our negation elimination rule. We never need to use it this way, because our elimination rules always enable us to break formulas down (where this may validly be done) so that the double negation sooner or later appears on a line by itself and hence becomes accessible to the negation elimination rule. We could be more liberal, permitting elimination of double negations inside formulas, but only at the expense of complicating some of our metatheoretic work later on. Conservatism now will pay off later.

Finally, we should note that there is no one correct way to prove a sequent. If the sequent is valid, then it will have many different proofs, all of them correct, but varying in the kinds of rules used or in their order of application.
Often, however, there is one simplest proof, more obvious than all the rest. In constructing proofs, good logicians strive for simplicity and elegance and thus make their discipline an art.

Exercise 4.2
Construct proofs for the following sequents:
1. P → Q, Q → R, P ⊢ R
2. P → (Q → R), P, Q ⊢ R
3. P & Q, P → R ⊢ R
4. P → Q, P → R, P ⊢ Q & R
5. (P & Q) → R, P ↔ Q, P ⊢ R
6. P & Q ⊢ Q ∨ R
7. P ⊢ (P ∨ Q) & (P ∨ R)
8. P, ((Q & R) ∨ P) → S ⊢ S
9. P ⊢ P ∨ P
10. P ⊢ (P ∨ P) & (P & P)
11. P → (Q → R), P → (R → Q), P ⊢ Q ↔ R
12. P → Q, (P → Q) → R ⊢ R
13. P ↔ (Q & R), P ⊢ R
14. (P → Q) → (Q → P), P → Q ⊢ Q ↔ P
15. P → ~~Q, P & R ⊢ Q ∨ S
16. P ∨ Q, Q → P, P → P ⊢ P
17. P ∨ Q, Q → ~~R, P → ~~R ⊢ R ∨ S
18. (P & (Q ∨ R)) → S, P, ~~R ⊢ S
19. P → P ⊢ P ↔ P
20. Q ⊢ Q ∨ (~~Q ↔ P)

4.3 HYPOTHETICAL DERIVATIONS

We have now encountered eight of the ten rules. I saved the remaining two until last because they make use of a special mechanism: the hypothetical derivation. A hypothetical derivation is a proof made on the basis of a temporary assumption, or hypothesis, which we do not assert to be true, but only suppose for the sake of argument. Hypothetical reasoning is common and familiar. In planning a vacation, for example, one might reason as follows:

Suppose we stay an extra day at the lake. Then we would get home on Sunday. But then it would be hard to get ready for school on Monday.

Here the arguer is not asserting that she and her audience will stay an extra day at the lake, but is only supposing this to see what follows. The conclusion, that it will be hard to get ready for school on Monday, is likewise not asserted or believed. The point is simply that this conclusion would be true if the hypothetical supposition were true. Her reasoning presupposes two unstated assumptions, used respectively to derive the second and third sentences. These are:

1. If we stay an extra day at the lake, then we get home on Sunday.

and

2. If we get home on Sunday, then it will be hard to get ready for school on Monday.

Using ‘S’ for ‘we stay an extra day at the lake’, ‘H’
for ‘we get home on Sunday’, and ‘M’ for ‘it will be hard to get ready for school on Monday’, we may formalize this reasoning as follows:

1. S → H    A
2. H → M    A
3. | S      H (for →I)
4. | H      1, 3 →E
5. | M      2, 4 →E

(Assumptions 1 and 2 correspond to the implicit statements 1 and 2 above. Statements 3, 4, and 5 represent the first, second, and third sentences of the stated argument, respectively.)

I have done something novel beginning with ‘S’ on line 3, the line that represents the supposition or hypothesis that we stay an extra day at the lake. Instead of labeling ‘S’ as an assumption (‘A’), I have marked it with the notation ‘H (for →I)’. This indicates that ‘S’ is a hypothesis (‘H’), made only for the sake of a conditional introduction (→I) argument and not (like 1 and 2) really assumed and asserted to be true. Moreover, I have drawn a vertical line to the left of ‘S’ extending to all subsequent conclusions derived from ‘S’. This line specifies that the reasoning to its right is hypothetical—that statements 3, 4, and 5 are not genuinely asserted, but only considered for the sake of argument.

This hypothetical reasoning has a purpose. In granting assumptions 1 and 2, we see that we can derive ‘M’ from ‘S’; this means the conditional ‘S → M’ must be true. This conditional, which symbolizes the English sentence ‘if we stay an extra day at the lake, then it will be hard to get ready for school on Monday’, is both the point of the argument and its implicit final conclusion. But this conditional is not deduced directly from our assumptions, nor from any of the statements listed in the argument, either singly or in combination. Rather, we know that ‘S → M’ is true because (given our assumptions) we showed in the hypothetical reasoning (or hypothetical derivation) carried out in lines 3–5 that ‘M’ follows logically from ‘S’. It is this reasoning, not any single statement or set of statements, that shows ‘S → M’ is true.
To indicate this, and to draw the argument's final conclusion, we add a new line to the previous reasoning, as follows:

1. S → H    A
2. H → M    A
3. | S      H (for →I)
4. | H      1, 3 →E
5. | M      2, 4 →E
6. S → M    3–5 →I

The annotation of line 6 indicates that we have drawn the conclusion ‘S → M’ from the hypothetical derivation displayed on lines 3–5. The rule used is the rule of conditional introduction (→I), commonly known as conditional proof. It may be stated as follows:

Conditional Introduction or Conditional Proof (→I) Given a hypothetical derivation of Ψ from Φ, end the derivation and infer (Φ → Ψ).

In our example, Φ is ‘S’ and Ψ is ‘M’. A hypothetical derivation itself begins with a hypothesis, or temporary assumption, and ends when a desired conclusion has been reached. In this case the conclusion of the hypothetical derivation was ‘M’. Its duration is marked by indentation and a vertical line to the left. Since in this problem the point of the hypothetical derivation was to show that ‘M’ followed from ‘S’, the hypothetical derivation (and hence the vertical line) ends with ‘M’. A conclusion inferred from a hypothetical derivation is not part of the hypothetical derivation, and hence the vertical line does not extend to it. The conclusion, ‘S → M’ (‘if we stay an extra day at the lake, then it will be hard to get ready for school on Monday’), is not merely hypothetical; it is something the arguer actually asserts and presumably believes.

Conditional introduction is, of course, the rule that enables us to prove conditional conclusions. We do this by hypothesizing the antecedent of the conditional and reasoning hypothetically until we derive the conditional's consequent. At that point the hypothetical derivation ends. We then apply conditional introduction to our hypothetical derivation to obtain the conditional conclusion. This proof of ‘P ⊢ Q → (P & Q)’ provides another example:

1. P            A
2. | Q          H (for →I)
3. | P & Q      1, 2 &I
4.
Q → (P & Q)    2–3 →I

The sequent's conclusion, ‘Q → (P & Q)’, is a conditional, so after listing the assumption ‘P’ as usual, we hypothesize the antecedent ‘Q’ of this conditional at line 2. A single step of &I at line 3 enables us to derive its consequent, ‘P & Q’, thus completing the hypothetical derivation. We then get the desired conclusion by applying conditional introduction to the hypothetical derivation at line 4.

Conditional introduction is used in proving biconditional conclusions as well as conditional conclusions. But in proving biconditionals we often need to employ it twice, in order to prove each of the two conditionals that comprise the biconditional before we assemble these components into the biconditional conclusion. The following proof of the sequent ‘P & Q ⊢ P ↔ Q’ illustrates this technique:

1. P & Q      A
2. | P        H (for →I)
3. | Q        1 &E
4. P → Q      2–3 →I
5. | Q        H (for →I)
6. | P        1 &E
7. Q → P      5–6 →I
8. P ↔ Q      4, 7 ↔I

Here the conclusion we wish to obtain is ‘P ↔ Q’. The rule for proving biconditional conclusions is ↔I, but to use ↔I to get ‘P ↔ Q’ we must first obtain its "component" conditionals, ‘P → Q’ and ‘Q → P’. We do this in lines 2–4 and 5–7, respectively, by first hypothesizing each conditional's antecedent, next hypothetically deriving its consequent (which in each case involves a simple step of &E from our assumption), and finally applying →I to the resulting hypothetical derivation (at lines 4 and 7, respectively). Having obtained the two component conditionals, we complete the proof with a step of ↔I at line 8.

A step or two of conditional introduction is often used to provide the conditionals needed for drawing conclusions from a disjunctive premise by ∨E. This proof of ‘P ∨ P ⊢ P’ provides an example that is both elegant and instructive:

1. P ∨ P      A
2. | P        H (for →I)
3. P → P      2–2 →I
4. P          1, 3, 3 ∨E

Our assumption is the disjunctive premise ‘P ∨ P’.
The standard rule for drawing conclusions from disjunctive premises is ∨E: From (Φ ∨ Ψ), (Φ → Θ), and (Ψ → Θ), infer Θ. If we take Φ, Ψ, and Θ all to be ‘P’, this becomes: From ‘P ∨ P’, ‘P → P’, and ‘P → P’, infer ‘P’. Thus we see that if we can prove ‘P → P’, we can use it twice with our assumption ‘P ∨ P’ to deduce the desired conclusion ‘P’. But how do we prove ‘P → P’? That's where →I comes in. We hypothesize this conditional's antecedent at line 2 and aim to derive its consequent. The hypothetical derivation is the simplest possible, for its hypothesis and conclusion are the very same statement, ‘P’. There is no need to apply any rules. In hypothesizing ‘P’, we have already in effect concluded ‘P’; the hypothetical derivation ends as soon as it begins at line 2. We then use →I to derive ‘P → P’ at line 3 and ∨E to obtain ‘P’ at line 4.

Let's consider one more example of the use of →I in preparation for a step of ∨E. In this case the sequent to be proved is ‘P ∨ Q, R ⊢ (P & R) ∨ (Q & R)’:

1. P ∨ Q                          A
2. R                              A
3. | P                            H (for →I)
4. | P & R                        2, 3 &I
5. | (P & R) ∨ (Q & R)            4 ∨I
6. P → ((P & R) ∨ (Q & R))        3–5 →I
7. | Q                            H (for →I)
8. | Q & R                        2, 7 &I
9. | (P & R) ∨ (Q & R)            8 ∨I
10. Q → ((P & R) ∨ (Q & R))       7–9 →I
11. (P & R) ∨ (Q & R)             1, 6, 10 ∨E

To use the disjunctive premise ‘P ∨ Q’ to obtain the conclusion ‘(P & R) ∨ (Q & R)’ by ∨E, we need two conditional premises: ‘P → ((P & R) ∨ (Q & R))’ and ‘Q → ((P & R) ∨ (Q & R))’. These are conditionals, so we use →I to prove each, the first in lines 3–6, the second in lines 7–10. Once the two conditionals have been established, a single step of ∨E at line 11 completes the proof.

In proving conditionals whose antecedents contain further conditionals, we sometimes need to make two or more hypothetical suppositions in succession. For example, to prove ‘P ⊢ (Q → R) → (Q → (P & R))’, we hypothesize the conclusion's antecedent ‘Q → R’ and then aim to deduce its consequent ‘Q → (P & R)’.
But this consequent is itself a conditional, so we must introduce a second hypothesis, the second conditional's antecedent, ‘Q’. This enables us to deduce ‘Q → (P & R)’ by →I. And since this is proved under the initial hypothesis ‘Q → R’, a final step of →I yields the conclusion ‘(Q → R) → (Q → (P & R))’. Here is the proof in full:

1. P                              A
2. | Q → R                        H (for →I)
3. | | Q                          H (for →I)
4. | | R                          2, 3 →E
5. | | P & R                      1, 4 &I
6. | Q → (P & R)                  3–5 →I
7. (Q → R) → (Q → (P & R))        2–6 →I

Notice that though the antecedent of ‘(Q → R) → (Q → (P & R))’ is also a conditional, ‘Q → R’, we do not attempt to prove this conditional by hypothesizing ‘Q’ and deriving ‘R’. The antecedent of a conditional conclusion, no matter how complex, typically figures in a proof as a single hypothesis (line 2 in the proof above) and is not itself proved.

Finally, after a hypothetical derivation ends, all the formulas contained within it are "off limits" for the rest of the proof. They may not be used or cited later, because they were never genuinely asserted, but only hypothetically entertained. The following attempted proof of the invalid sequent ‘P, Q → ~P ⊢ P & ~P’ illustrates how violations of this restriction breed trouble. (If you don't see that this sequent is invalid, check it with a truth table.)

1. P           A
2. Q → ~P      A
3. | Q         H (for →I)
4. | ~P        2, 3 →E
5. Q → ~P      3–4 →I
6. P & ~P      1, 4 &I (wrong!)

All rules are used correctly through step 5, though steps 3–5 are redundant, since all they do is prove ‘Q → ~P’, which was already given as an assumption at line 2. Step 6, however, is mistaken, since it uses the formula ‘~P’, which appears in the hypothetical derivation at line 4, after that hypothetical derivation has ended. ‘~P’, however, was never proved; it was merely derived from the supposition of ‘Q’. It cannot be cited after the hypothetical derivation based on ‘Q’ ends at step 4. Violation of this restriction may result in "proofs" of invalid sequents, as it does here.
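The parenthetical advice to check this invalid sequent with a truth table can be carried out mechanically. The Python sketch below (our own encoding, not the book's notation) searches all four valuations for a counterexample to ‘P, Q → ~P ⊢ P & ~P’, a valuation making both premises true and the conclusion untrue.

```python
from itertools import product

def implies(a, b):
    """Truth function for the material conditional."""
    return (not a) or b

# Collect every valuation on which the premises 'P' and 'Q -> ~P' are
# true while the conclusion 'P & ~P' is untrue.
found = [
    {"P": p, "Q": q}
    for p, q in product([True, False], repeat=2)
    if p and implies(q, not p) and not (p and not p)
]
print(found)  # [{'P': True, 'Q': False}] -- the counterexample
```

So the valuation that makes ‘P’ true and ‘Q’ false refutes the sequent, which is why no correct application of the rules can yield the "proof" above.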
These, of course, are not really proofs, since in a proof the rules must be applied correctly. However, any nonhypothetical assumption or nonhypothetical conclusion, and any hypothesis or conclusion within a hypothetical derivation that has not yet ended, may be used to draw further conclusions. So, for example, in the proof of ‘P ⊢ (Q → R) → (Q → (P & R))’, which was given just before the preceding example, it is permissible to use the hypothesis ‘Q → R’ (line 2) at line 4 of the hypothetical derivation that begins with ‘Q’ (line 3), because the hypothetical derivation beginning with ‘Q → R’ has not yet ended. A proof is not complete until all hypothetical derivations have ended. If we were to leave a hypothetical derivation incomplete, then its hypothesis would be an additional assumption in the reasoning; but, being marked with an ‘H’ instead of an ‘A’, it might not be recognized as such.

To summarize: →I is the rule most often used for proving conditional conclusions. To prove a conditional conclusion Φ → Ψ, hypothesize its antecedent Φ and reason hypothetically to its consequent Ψ. Then, citing this entire hypothetical derivation, deduce Φ → Ψ by →I. The conclusion Φ → Ψ does not belong to the hypothetical derivation, so the vertical line that began with Φ does not continue to Φ → Ψ, but ends with Ψ.

It is perhaps not so obvious as with the nonhypothetical rules that →I is valid. To recognize its validity, we must keep in mind that the hypothetical derivation from Φ to Ψ must itself have been constructed using valid rules. This means that if a valuation makes true both the proof's assumptions and Φ, as well as any other hypotheses whose derivations had not ended when Φ was supposed, then it also makes Ψ true. That is, there is no valuation that makes these assumptions and hypotheses true and also makes Φ true but Ψ untrue. In other words, there is no valuation that makes these assumptions and hypotheses true and Φ → Ψ untrue.¹ But this means that the inference from these assumptions or hypotheses to Φ → Ψ is valid. Hence the rule →I, which allows us to conclude Φ → Ψ from these assumptions and hypotheses, is itself valid; it never leads from true premises to an untrue conclusion.

¹ This reasoning appeals implicitly to the valuation rule for the conditional.

We next consider the rule for proving negative propositions: negation introduction, ~I, often known as indirect proof or reductio ad absurdum (reduction to absurdity). Negation introduction is the rule for proving negated conclusions. To prove ~Φ, hypothesize Φ and validly derive from Φ an "absurdity"—that is, a conclusion known to be false. Since the derivation is valid, if Φ and any additional assumptions or hypotheses used in the derivation were true, the derived conclusion would have to be true as well. Therefore, since the derived conclusion is false, either Φ or some other assumption or hypothesis used to derive it must be false. So, if these other assumptions or hypotheses are true, it must be Φ that is false. Hence ~Φ follows from these other assumptions or hypotheses.

But how can we formally ensure that the conclusion we derive from Φ is false? One way is to require that the conclusion be inconsistent. Inconsistencies of the form (Ψ & ~Ψ), for example, fill the bill. Actually, any inconsistency would do, but so as not to unduly complicate our rule, we shall require that the conclusion of the hypothetical derivation always have this one form. This restriction, as we shall see in Section 5.10, does not prevent us from proving any valid sequent. Therefore we will state the negation introduction rule as follows:

Negation Introduction (~I) Given a hypothetical derivation of any formula of the form (Ψ & ~Ψ) from Φ, end the derivation and infer ~Φ.
The following proof of ‘P → Q, ~Q ⊢ ~P’, a sequent expressing modus tollens, uses this rule. Here Φ is ‘P’ and Ψ is ‘Q’:

1. P → Q       A
2. ~Q          A
3. | P         H (for ~I)
4. | Q         1, 3 →E
5. | Q & ~Q    2, 4 &I
6. ~P          3–5 ~I

Having listed the assumptions on lines 1 and 2, we note that the desired conclusion is a negation, ‘~P’. To prove this conclusion by ~I, then, we hypothesize ‘P’ at line 3—not, as before, for →I, but rather for ~I—and try to derive an "absurdity." This is accomplished at line 5, where it is established that, given the assumptions ‘P → Q’ and ‘~Q’, ‘P’ leads to absurdity. Therefore, given these assumptions, ‘P’ must be false, which is what we conclude at line 6 by asserting ‘~P’.

Formal indirect proofs are, of course, not merely formal. They may be used to represent specific natural language arguments. So, for example, if we let ‘P’ stand for ‘A person is defined by her genome’ and ‘Q’ for ‘Identical twins are the same person’, the reasoning represented by this proof is as follows. It is assumed at line 1 that if a person is defined by her genome, then identical twins are the same person, and at line 2 that identical twins are not the same person. The argument aims to show that a person is not defined by her genome (line 6). To prove this, we suppose for the sake of argument at line 3 that a person is defined by her genome. We do not, of course, really assert this; we suppose it only to reduce it to absurdity and so prove its negation. Together with assumption 1, this supposition leads at line 4 to the conclusion that identical twins are the same person. And this conclusion, together with assumption 2, yields the absurd conclusion that identical twins both are and are not the same person. Having shown, given assumptions 1 and 2, that the supposition that a person is defined by her genome leads to absurdity, we conclude on the strength of these assumptions alone that a person is not defined by her genome. This final conclusion is recorded on line 6.
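Like the earlier rules, modus tollens itself can be verified semantically by exhausting the valuations. The Python sketch below (with our own helper name `implies`) confirms that on every valuation making ‘P → Q’ and ‘~Q’ true, ‘~P’ is true as well.

```python
from itertools import product

def implies(a, b):
    """Truth function for the material conditional."""
    return (not a) or b

# Modus tollens: 'P -> Q, ~Q |- ~P' has no counterexample.
for p, q in product([True, False], repeat=2):
    if implies(p, q) and (not q):   # both premises true on this valuation
        assert not p                 # ...so the conclusion '~P' is true

print("modus tollens: no counterexamples")
```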
The following proof of the sequent ‘~(P ∨ Q) ⊢ ~P’ provides another example of the application of ~I. Recall that ‘~(P ∨ Q)’ means "neither P nor Q."

1. ~(P ∨ Q)                 A
2. | P                      H (for ~I)
3. | P ∨ Q                  2 ∨I
4. | (P ∨ Q) & ~(P ∨ Q)     1, 3 &I
5. ~P                       2–4 ~I

With respect to our statement of the negation introduction rule, Φ here is ‘P’ and Ψ is ‘P ∨ Q’. Once again the conclusion to be proved is ‘~P’. So, after listing the assumption, we hypothesize ‘P’ and aim for some contradiction. The trick is to see that we can obtain ‘P ∨ Q’, which contradicts our assumption, ‘~(P ∨ Q)’. The contradiction (absurdity) is reached at line 4 by &I. Having reduced ‘P’ to absurdity, we deduce ‘~P’ at line 5.

Negation introduction may also be used, in combination with negation elimination, to prove unnegated conclusions. To prove an unnegated conclusion Φ, we may hypothesize ~Φ, derive an absurdity, and apply ~I. But since ~I adds a negation sign to the hypothesis that is reduced to absurdity, it enables us to conclude only ~~Φ, not the desired conclusion Φ. However, from ~~Φ we can deduce Φ by negation elimination and so complete the proof. The following proof of ‘~(P & ~Q), P ⊢ Q’ uses this strategy.² In this case the conclusion is ‘Q’; with respect to the formal statement of the ~I rule, Φ is ‘~Q’ and Ψ is ‘P & ~Q’:

1. ~(P & ~Q)                   A
2. P                           A
3. | ~Q                        H (for ~I)
4. | P & ~Q                    2, 3 &I
5. | (P & ~Q) & ~(P & ~Q)      1, 4 &I
6. ~~Q                         3–5 ~I
7. Q                           6 ~E

² To see why this form ought to be valid, recall that ‘~(P & ~Q)’ is equivalent to ‘P → Q’.

Negation introduction is often combined with conditional introduction, as in this proof of the sequent ‘P → Q ⊢ ~Q → ~P’, which expresses the pattern of inference called contraposition:

1. P → Q       A
2. | ~Q        H (for →I)
But ‘~P" is a negation, and ~Lis the rule for proving negations. So, to set up a derivation of *-P’, we hypothesize ‘Pat line 3 for ~I and try to deduce a contradiction. The contradiction is obtained at line 5, which enables us to use ~1 at line 6 to get ‘~P”. Having now derived ‘-P* from‘-Q”, we can deduce ‘-P—+ ~Q’ by 1 at line 7 to complete the proof. Negation introduction is used in a peculiar way in the proof of the principle ex falso quodlibet, the principle expressed by the sequent ‘P, ~P + Q’. (We demon- strated the validity of this sequent using a truth table in Section 3.2,) A A -Q H (for ~1) P&-P 1,2 &1 3-4-1 5-E ‘Q'is an unnegated conclusion, but ~I enables us to prove it nevertheless, To do so. we must reduce “-Q' to absurdity to obtain “-~Q’, from which ‘Q’ follows E, What is genuinely peculiar about this proof is that ‘~Q’ is not used in the derivation of the contradiction ‘P & ~P’. The contradiction comes directly from assumptions 1 and 2. This undermines the notion that it is “-Q° that is being reduced to absurdity, for the absurdity lies in the assumptions, not in -Q. This pattern of reasoning is, however, legitimate in classical logic. Having assumed an absurdity, we can reduce any formula to absurdity: All formulas validly follow Validly—but not relevantly. There is no counterexample to the sequent ‘P, ~P + Q’, but many instances of this sequent are irrelevant. Relevance logicians, who advocate a notion of validity stricter than the classical notion, would reject. step 5 of this proof as invalid. Since the hypothesis “-Q’ was not used in the derivation of the contradiction, they argue, no conclusion concerning ‘“~Q’ can legitimately be drawn, We note their protest here but set it aside. They will get their say in Section 16.3, In the meantime, we will accept such peculiar uses of 1 as valid, ‘We next consider a proof of the sequent ‘P v Q, -P FQ’, which expresses the Pattern of inference called disjunctive syllogism. 
This proof also employs the irrel- evant use of ~1 illustrated in the previous problem. Because this sort of irrelevant move is unavoidable in proofs of disjunctive syllogism, many relevance logicians like disjunctive syllogism no better than they like ex falso quodlibet. 98 CHapteR 4 10. Q=Q 9,91 11. Q 1,8, 10 VE Our first assumption is a disjunction; to use it we need VE. But to use vE with ‘Pv Q to obtain the conclusion ‘Q, we need these two conditionals: ‘P — Q’ and °Q= Q. These we obtain by I, the first in lines 3-8, the second in lines 9-10. To prove ‘P — Q’, we hypothesize its antecedent ‘P” at line 3. We now have hypothesized ‘P” and assumed *-P” so that we can obtain any conclusion we please. ‘We want ‘Q’, the consequent of ‘P + Q’, in order to complete our conditional proof. To get it, we hypothesize ‘~Q’ for reduction to absurdity. As in the previous example, however, we derive the absurdity (at line 5), not from this hypothesis but from previous (and irrelevant) assumptions. Nevertheless, this allows us to con- clude ‘~-Q’ at line 6 by ~1, from which we obtain *Q’ at line ~. The hypothetical derivation at lines 3-7 has thus established ‘P + Q’, a fact we record at line 8. The proof of ‘Q— Q’ at lines 910 is trivial. Having obtained the necessary premises at lines 1, 8, and 10, we finish with a step of vE. Although there are many (indeed, infinitely many!) different proofs for each valid sequent, there is often one way that is the simplest and most direct. Finding that way is a matter of strategy. Often the best strategy for a proof can be “read” directly from the form of the conclusion—that is, from the identity of its main operator, as Table 4.1 indicates. It is common, as we have seen in some of the examples worked earlier, for different strategies to be used successively in different stages of a proof. 
To illustrate how Table 4.1 provides guidance in doing this, let's prove the sequent ‘P ∨ Q ⊢ Q ∨ P’. We begin by noting that the conclusion of this sequent is of the form Φ ∨ Ψ. The first suggestion in the table for conclusions of this form is to use ∨I if either Φ or Ψ (i.e., in this instance ‘Q’ or ‘P’) is present as a premise. But we have neither premise, so this suggestion is inapplicable. We then try the second suggestion, which is applicable if there is a premise of the form Θ ∨ Λ. ‘P ∨ Q’ is such a premise. The table then recommends proving as subconclusions the conditionals Θ → (Φ ∨ Ψ) and Λ → (Φ ∨ Ψ) (i.e., in this case ‘P → (Q ∨ P)’ and ‘Q → (Q ∨ P)’). A subconclusion is simply a conclusion useful for obtaining the main conclusion. It may be, but is not always, the conclusion of a hypothetical derivation.

Now the task is to prove the two subconclusions ‘P → (Q ∨ P)’ and ‘Q → (Q ∨ P)’. These are both of the form Φ → Ψ. So we consult Table 4.1 regarding strategies for proving conclusions of this form. The table recommends in each case
