(John Nolt) Logics
Q ⊢ Q → P
5. P → Q, ~P ⊢ ~Q
6. P → Q, ~Q ⊢ ~P
7. P ⊢ Q → P
8. ~Q ⊢ Q → P
9. P ⊢ P ↔ Q
10. ~(P → Q) ⊢ P & ~Q
11. P v Q ⊢ P ↔ Q
12. P ↔ Q ⊢ P → Q
13. P ↔ Q ⊢ Q ↔ P
14. P v Q, P → R, Q → R ⊢ R
15. P v Q, Q v R ⊢ P v R
16. P → Q, Q → P ⊢ P ↔ Q
17. P ⊢ ~(P & ~P)
18. ~(P & Q) ⊢ ~P & ~Q
19. ~(P v ~P) ⊢ Q
20. P ⊢ (P → Q) → (P & Q)

Exercise 3.2.2
Use truth tables to determine whether the following formulas are valid, contingent, or inconsistent. Write your answer beside the table.
1. P → ~P
2. P → ~~P
3. P ↔ ~~P
4. P ↔ ~P
5. (P v Q) v (~P & ~Q)
6. P & ~P
7. P v ~P
8. (P & (P → Q)) → Q
9. (P v Q) ↔ (Q v P)
10. (P → Q) ↔ (~P v Q)

Exercise 3.2.3
Use truth tables to determine whether the following sets of formulas are consistent or inconsistent. Write your answer beside the table.
1. P → Q, Q → P
2. P ↔ Q, Q v ~P
3. P v Q, ~P, ~Q
4. P & Q, ~P
5. ~P & Q, P v Q

Exercise 3.2.4
1. Use a truth table to verify that 'P ↔ Q' is logically equivalent both to '(P → Q) & (Q → P)' and to '(P & Q) v (~P & ~Q)'.
2. Use a truth table to verify that 'P & Q' is logically equivalent to '~(~P v ~Q)'.
3. Use a truth table to verify that 'P v Q' is logically equivalent to '~(~P & ~Q)'.
4. Use a truth table to verify that 'Q v P' and '~Q → P', which are both ways of symbolizing 'P unless Q', are equivalent.
5. Find equivalents for the forms ~Φ and Φ & Ψ in terms of '↓', and show that they are logical equivalents by constructing the appropriate truth tables.
6. Find a logical equivalent for Φ v Ψ in terms of '→', and demonstrate the equivalence with a truth table. Do the same thing in terms of '&'.

3.3 SEMANTIC TREES

A semantic tree is a device for displaying all the valuations on which a formula or set of formulas is true. Since classical logic is bivalent, the valuations on which the formula or set of formulas is false are then simply those not displayed. Thus trees do the same job as truth tables. But they do it more efficiently; especially for long problems, a tree generally requires less computation and writing than the corresponding truth table. A truth table for a formula or sequent containing n sentence letters has 2ⁿ lines.
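Because a truth table is just an exhaustive enumeration of the 2ⁿ valuations, the test is easy to mechanize. The sketch below is a Python illustration, not part of the text; encoding formulas as functions from a valuation (a dict of sentence letters) to a truth value is my own choice:

```python
# Brute-force truth-table test for a sequent: enumerate all 2**n valuations
# and search for a counterexample (all premises true, conclusion untrue).
from itertools import product

def valid_sequent(letters, premises, conclusion):
    """True iff no valuation makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(letters)):
        valuation = dict(zip(letters, values))   # one line of the truth table
        if all(p(valuation) for p in premises) and not conclusion(valuation):
            return False                         # counterexample found
    return True

# Modus ponens, 'P -> Q, P |- Q': valid.
mp = valid_sequent(['P', 'Q'],
                   [lambda v: (not v['P']) or v['Q'], lambda v: v['P']],
                   lambda v: v['Q'])

# Affirming the consequent, 'P -> Q, Q |- P': invalid.
ac = valid_sequent(['P', 'Q'],
                   [lambda v: (not v['P']) or v['Q'], lambda v: v['Q']],
                   lambda v: v['P'])
print(mp, ac)   # prints: True False
```

The loop visits every line of the table, which is exactly why the method scales poorly as the number of sentence letters grows.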
For n = 10, for example, 2ⁿ = 1024—a good many more lines than we are likely to want to write. But a tree for a formula or sequent with ten sentence letters (or even more) may fit easily within a page. Moreover, as we shall see in Section 7.4, trees have the advantage of being straightforwardly generalizable to predicate logic, which truth tables are not.

Suppose, for instance, that we want a list of the valuations on which the formula '~P & (Q v R)' is true. To obtain this by the tree method, we write the formula and then begin to break it down into those smaller formulas which, according to the valuation rules, must be true in order to make '~P & (Q v R)' true. Now '~P & (Q v R)' is a conjunction, and a conjunction is true iff both of its conjuncts are true. So we write '~P & (Q v R)', then check it off (to indicate that it has been analyzed), and write its two conjuncts beneath it, like this:

    ✓ ~P & (Q v R)
      ~P
      Q v R

A formula which has been checked off is in effect eliminated. We need pay no further attention to it. What remains, then, are the two formulas '~P' and 'Q v R'.

Consider, for example, a tree for the sequent 'P → (Q & (R v S)) ⊢ P → Q':

    1. ✓ P → (Q & (R v S))            Premise
    2. ✓ ~(P → Q)                     Negation of conclusion
    3.      P                         2 ~→
    4.      ~Q                        2 ~→
    5.   ~P         ✓ Q & (R v S)     1 →
    6.   X 3,5      Q                 5 &
    7.              R v S             5 &
    8.              X 4,6

Both paths close, so there is no valuation which makes the premises but not the conclusion true. Hence the sequent is valid.

Notice that we closed the right branch without analyzing 'R v S'. This is permissible. Any path may be closed as soon as a formula and its negation both appear on it. Analyzing 'R v S' would have split this path into two new paths, but each of these still would have contained both 'Q' and '~Q' and hence each still would have closed. Closing a path as soon as possible saves work.

It also saves work to apply nonbranching rules first. When I began the tree, I had the choice of analyzing either 'P → (Q & (R v S))' or '~(P → Q)' first. I chose the latter, because it is a negated conditional, and the negated conditional rule does not branch.
If I had analyzed 'P → (Q & (R v S))', which is a conditional, first, then I would have had to use the conditional rule, which does branch. Then when I analyzed '~(P → Q)', I would have had to write the results twice, once at the bottom of each open path. Analyzing 'P → (Q & (R v S))' first is not wrong, but it requires more writing, as can be seen by comparing the resulting tree with the previous tree:

    1. ✓ P → (Q & (R v S))            Premise
    2. ✓ ~(P → Q)                     Negation of conclusion
    3.   ~P         ✓ Q & (R v S)     1 →
    4.   P          P                 2 ~→
    5.   ~Q         ~Q                2 ~→
    6.   X 3,4      Q                 3 &
    7.              R v S             3 &
    8.              X 5,6

Yet this tree, though more complicated, gives the same answer as the first. There is, then, some flexibility in the order of application of the rules. But in general it is best where possible to apply nonbranching rules before the branching ones.

Let's next test the sequent 'P ↔ Q ⊢ ~(P ↔ R)' for validity. Once again we list the premises and the negation of the conclusion so that the tree searches for counterexamples. In this case the conclusion is a negation, so its negation is a double negative. Here is the tree:

    1. ✓ P ↔ Q                        Premise
    2. ✓ ~~(P ↔ R)                    Negation of conclusion
    3. ✓ P ↔ R                        2 ~~
    4.     P               ~P         1 ↔
    5.     Q               ~Q         1 ↔
    6.   P     ~P        P     ~P     3 ↔
    7.   R     ~R        R     ~R     3 ↔
    8.         X 4,6     X 4,6

The leftmost and rightmost branches remain open. The leftmost branch reveals that the premise 'P ↔ Q' is true and the conclusion '~(P ↔ R)' false (because its negation is true) on the valuation on which 'P', 'Q', and 'R' are all true. This valuation, in other words, is a counterexample to the sequent. The rightmost branch reveals that the valuation on which 'P', 'Q', and 'R' are all false is also a counterexample. Thus the sequent is invalid.

If we begin a tree, not with premises and a negated conclusion, but with a single formula or set of formulas, as we did in the first examples of this section, the tree tests this formula or set of formulas for consistency.
If all paths close, there is no valuation on which the formula or set of formulas is true, and so that formula or set is inconsistent. If one or more paths remain open after the tree is finished, these represent valuations on which the formula or all members of the set are true, and so the formula or set is consistent.

Trees may also be used to test formulas for validity. The easiest way to do this is to search for valuations on which the formula is not true. If no such valuations exist, then the formula is valid. Thus we begin the tree with the formula's negation. If all paths close, there are no valuations on which its negation is true, so that the original formula is true on all valuations. Consider, for example, the formula '(P → Q) ↔ ~(P & ~Q)'. When we negate it and do a tree, all paths close:

    1. ✓ ~((P → Q) ↔ ~(P & ~Q))                       Negation of formula
    2.   ✓ P → Q              ✓ ~(P → Q)              1 ~↔
    3.   ✓ ~~(P & ~Q)         ✓ ~(P & ~Q)             1 ~↔
    4.   ✓ P & ~Q                                     3 ~~
    5.     P                    P                     4 &; 2 ~→
    6.     ~Q                   ~Q                    4 &; 2 ~→
    7.   ~P      Q            ~P      ~~Q             2 →; 3 ~&
    8.   X 5,7   X 6,7        X 5,7   X 6,7

Therefore, since there is no valuation on which '~((P → Q) ↔ ~(P & ~Q))' is true, '(P → Q) ↔ ~(P & ~Q)' is true on all valuations, that is, valid.

Notice that I closed the rightmost path before '~~Q' was fully analyzed. This is a legitimate use of the negation rule. If the path had not closed, however, the tree would not be finished until the negated negation rule was applied to '~~Q'.

Here is a list of some of the ways in which trees may be used to test for various semantic properties:

To determine whether a sequent is valid, construct a tree starting with its premises and the negation of its conclusion. If all paths close, the sequent is valid. If not, it is invalid and the open paths display the counterexamples.

To determine whether a formula or set of formulas is consistent, construct a tree starting with that formula (or set of formulas). If all paths close, that formula (or set of formulas) is inconsistent.
If not, it is consistent, and the open paths display the valuations that make the formula (or all members of the set) true.

To determine whether a formula is valid, construct a tree starting with its negation. If all paths close, the formula is valid. If not, then the formula is not valid, and the open paths display the valuations on which it is false.

To determine whether a formula is contingent, construct two trees, one to test it for consistency and one to test it for validity. If the formula is consistent but not valid, then it is contingent.

Constructing trees is just a matter of following the rules, but there are a few common errors to avoid. Keep these in mind:

The rules for constructing trees apply only to whole formulas, not to their parts. Thus, for example, the use of ~~ shown below is not permissible:

    1. ✓ P → ~~Q       Given
    2.   P → Q         1 ~~  (Wrong!)

Although using ~~ on subformulas does not produce wrong answers, it is never necessary and technically is a violation of the double negation rule. Trying to apply other rules to parts of formulas, however, often does produce wrong answers.

A rule applied to a formula cannot affect paths not containing that formula. Consider, for example, the following incomplete tree:

    1. ✓ P v (Q v R)       Given
    2.   P       Q v R     1 v

Here the formula 'Q v R' at the end of the right-branching path remains to be analyzed. The next step is to apply the disjunction rule to this formula. In doing so, we split this right-branching path but add nothing to the path at the left, for it does not contain 'Q v R' and is in fact already finished.

The negation rule applies only to formulas on the same path. In the following tree, for example, both 'P' and '~P' appear, but neither path closes, because the formulas don't appear on the same path:

    1. ✓ P v ~P       Given
    2.   P      ~P    1 v

To summarize: A finished tree for a formula or a set of formulas displays all the valuations on which that formula or all members of that set are true.
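The tree procedure itself can be sketched as a short program. The following simplified tableau is my own illustration, not part of the text: formulas are nested tuples, only '~', '&', 'v', and '→' are handled (the biconditional is omitted for brevity), and the function returns the open paths, each represented by its set of literals:

```python
# A simplified semantic tree (tableau) for propositional logic.
# Atoms are strings; compounds are tuples: ('~', A), ('&', A, B),
# ('v', A, B), ('->', A, B). This encoding is illustrative only.

def expand(formulas, literals):
    """Return the open branches (each a set of literals); [] if all close."""
    if not formulas:
        return [literals]
    f, rest = formulas[0], formulas[1:]
    if isinstance(f, str):                          # positive literal
        if ('~', f) in literals:
            return []                               # path closes: f and ~f
        return expand(rest, literals | {f})
    op = f[0]
    if op == '~':
        g = f[1]
        if isinstance(g, str):                      # negative literal
            if g in literals:
                return []                           # path closes
            return expand(rest, literals | {('~', g)})
        if g[0] == '~':                             # ~~A: double negation
            return expand([g[1]] + rest, literals)
        if g[0] == '&':                             # ~(A & B): branch ~A | ~B
            return (expand([('~', g[1])] + rest, literals)
                    + expand([('~', g[2])] + rest, literals))
        if g[0] == 'v':                             # ~(A v B): ~A, ~B
            return expand([('~', g[1]), ('~', g[2])] + rest, literals)
        if g[0] == '->':                            # ~(A -> B): A, ~B
            return expand([g[1], ('~', g[2])] + rest, literals)
    if op == '&':                                   # A & B: both conjuncts
        return expand([f[1], f[2]] + rest, literals)
    if op == 'v':                                   # A v B: branch A | B
        return (expand([f[1]] + rest, literals)
                + expand([f[2]] + rest, literals))
    if op == '->':                                  # A -> B: branch ~A | B
        return (expand([('~', f[1])] + rest, literals)
                + expand([f[2]] + rest, literals))

def consistent(formulas):
    return bool(expand(list(formulas), frozenset()))
```

For the section's opening example, `consistent([('&', ('~', 'P'), ('v', 'Q', 'R'))])` is `True`, and the two open branches returned by `expand` correspond to the tree's two open paths. A sequent test is the consistency test applied to the premises plus the negated conclusion: if that set comes out inconsistent, the sequent is valid.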
Thus trees do the same work as truth tables, but in most cases they do it more efficiently. Moreover, as we shall see in later chapters, they may be used for some logics to which truth tables are inapplicable.

Exercise 3.3.1
Redo Exercises 3.2.1, 3.2.2, and 3.2.3 using trees instead of truth tables.

Exercise 3.3.2
1. How might trees be used to prove that two formulas are logically equivalent? Explain.
2. To prove a formula valid using trees, we construct a tree from its negation. Is there a way to prove a formula valid by doing a tree on that formula without negating it? Explain.

3.4 VALUATIONS AND POSSIBLE SITUATIONS

We saw in Section 2.1 that while any instance of a valid form is a valid argument, not every instance of an invalid form is an invalid argument. We noted, for example, that this instance of the invalid sequent affirming the consequent is in fact a valid argument:

    If some men are saints, then some saints are men.
    Some saints are men.
    ∴ Some men are saints.

How can this be? The answer lies in the distinction between valuations and possible situations. Suppose we let 'S₁' stand for 'Some men are saints' and 'S₂' for 'Some saints are men'. Then we can represent the form of the argument as 'S₁ → S₂, S₂ ⊢ S₁'. Here is its truth table:

    S₁  S₂ |  S₁ → S₂   S₂   ⊢  S₁
    T   T  |     T      T       T
    T   F  |     F      F       T
    F   T  |     T      T       F
    F   F  |     T      F       F

The valuation on which 'S₁' is false and 'S₂' true is a counterexample to the sequent or argument form. But the corresponding situation—the one in which 'Some men are saints' is false and 'Some saints are men' is true—isn't a counterexample to the argument, because it isn't a possible situation. The very idea of a situation in which some men are saints but it is not the case that some saints are men (i.e., no saints are men) is nonsense.
Of course we can easily find other interpretations of 'S₁' and 'S₂'—and consequently other instances of this form—to which the valuation that makes 'S₁' false and 'S₂' true provides a genuine counterexample, even an actual counterexample. But on this particular interpretation, that valuation corresponds to an impossible situation. (Incidentally, so does the valuation that makes 'S₁' true and 'S₂' false.) An impossible situation, if it even makes sense to talk about such a thing, cannot be a counterexample.

Depending on how we interpret the sentence letters, then, a particular valuation may or may not correspond to a possible situation. For many interpretations, all valuations correspond to possible situations. For example, if we let 'S₁' stand for 'It is sunny' and 'S₂' for 'It is Sunday', every line on the truth table above (every valuation) represents a possible situation, and the valuation on which 'S₁' is false and 'S₂' true represents a counterexample to the argument as well as to the form. In such cases, the statements corresponding to the sentence letters are said to be logically independent. But where the statements corresponding to the sentence letters logically imply one another or exclude one another, in various combinations, some valuations represent impossible situations.

Such nonindependent statements as 'Some men are saints' and 'Some saints are men' have interrelated semantic structures that are not represented in propositional argument forms in which they are symbolized simply as sentence letters. (In this case, the semantic structures in question are relationships among the logical meanings of the words 'some', 'men', and 'saints'.) In Chapter 6 we shall begin to formalize the semantic structures of such statements, and we shall redefine the notion of a valuation so that it reflects more of these semantic structures and yields a more powerful and precise logic.
Later we shall explore ways of creating logics that are more powerful and precise still. But at no point shall our concept of a valuation become so sophisticated that a valuation may never represent an impossible situation—which is to say that at no point do we ever achieve a formal semantics or formal logic that reflects all the logical dependencies inherent in natural language.

Certain consequences of this disparity between valuations and possible situations, between formal and informal logic, will haunt us throughout this book:

An invalid sequent may have valid instances. The reason for this we have already seen. The counterexamples to the sequent may on some interpretations represent impossible situations, so that there are no possible situations which make the corresponding argument's premises true while its conclusion is untrue. No argument is valid because of having an invalid form, but an argument may be valid in spite of having an invalid form, because of elements of its semantic structure not represented in the form.

A contingent formula may have valid or inconsistent instances. A contingent formula is true on some valuations and false on others. But on some interpretations either the valuations on which the contingent formula is true or those on which it is false may all correspond to impossible situations. In the former case, the interpretation yields an inconsistent instance. In the latter, provided that at least one of the valuations on which the formula is true corresponds to a possible situation, the interpretation yields a valid instance. Example: 'P & Q' is a contingent formula, but the instance 'Some women are mortal & Nothing is mortal' is an inconsistent statement, and the instance 'Every woman is a woman & Every mortal is a mortal' is a valid statement (logical truth). (Check the truth table of 'P & Q' to see which valuations correspond to impossible situations in each case.)
A consistent formula or set of formulas may have inconsistent instances. That is, though there is a valuation that makes the formula or set of formulas true, there may not be a possible situation that makes a particular instance of that formula or set of formulas true, again because the situation corresponding to that valuation may be impossible. Example: The set consisting of the formulas 'P' and 'Q' is consistent, but if we interpret 'P' as 'Smoking is permitted' and 'Q' as 'Smoking is forbidden', the set of statements for which these letters stand is inconsistent.

All of this sounds discouraging. Nevertheless:

All instances of a valid sequent are valid arguments. A valid sequent has no valuation on which its premises are true but its conclusion is not true. Some valuations may on a particular interpretation correspond to impossible situations. Yet since a valid sequent has no valuations representing situations (possible or impossible) in which the premises are true and the conclusion is false, none of its valuations represents a possible situation that is a counterexample to the instance. Hence, if a sequent is valid, all of its instances must be valid as well. Valid sequents are, in other words, perfectly reliable patterns of inference.

All instances of a valid formula are logical truths. A valid formula is a formula true on all valuations. Even if on a given interpretation some valuations of such a formula do not represent possible situations, the formula is still true on all the others and hence true in all the situations that are possible. Therefore any statement obtained by interpreting a valid formula must be true in all possible situations. That is, it must be a logical truth.

All instances of an inconsistent formula are inconsistent statements. An inconsistent formula is true on no valuations.
Hence, even if on a given interpretation some of its valuations represent impossible situations, still the formula is true on none of the remaining valuations which represent possible situations. Therefore any statement obtained by interpreting an inconsistent formula is not true in any possible situation.

Under the same interpretation, logically equivalent formulas have as their instances logically equivalent statements. Logically equivalent formulas are formulas whose truth value is the same on all valuations. Once again, even if a given interpretation rules out some of these valuations as impossible, still the formulas will have the same truth value in the remaining valuations—the ones representing possible situations. Hence any two statements obtained by interpreting them will have the same truth values in all possible situations. That is, they will be equivalent statements.

To summarize: Formal validity (for both formulas and sequents), inconsistency, and equivalence are reliable indicators of their informal counterparts. Formal invalidity, contingency, and consistency are not.

CHAPTER 4
CLASSICAL PROPOSITIONAL LOGIC: INFERENCE

4.1 CHAINS OF INFERENCE

Most people can at best understand arguments that use about three or four premises at once. For more complicated arguments, we generally break the argument down into more digestible chunks. Beginning with one or two or three premises, we draw a subconclusion, which functions as a stopping point on the way to the main conclusion the argument aims to establish. This subconclusion summarizes the contribution of these premises to the argument so that they may henceforth be forgotten. This subconclusion is then combined with a few more premises to draw a further conclusion, and the process is repeated, step by small step, until the final conclusion emerges.
The following example illustrates the utility of breaking complex inferences down into smaller ones:

    The meeting must be held on Monday, Wednesday, or Friday.
    At least four of these five people must be there: Al, Beth, Carla, Dave, and Em.
    Em can't come on Monday or Wednesday.
    Carla and Dave can't both come on Monday or Friday, though either of them could come alone on those days.
    Al can come only on Monday and Friday.
    ∴ The meeting must be held on Friday.

The argument is difficult to understand all at once, but it becomes easy if analyzed into small steps. For example, from the premises

    Em can't come on Monday or Wednesday.

and

    Al can come only on Monday and Friday.

we can deduce the subconclusion

    Neither Al nor Em can come on Wednesday.

And from this subconclusion together with the premise

    At least four of these five people must be there: Al, Beth, Carla, Dave, and Em.

we can further conclude

    The meeting can't be held on Wednesday.

In addition, from the premises

    Em can't come on Monday or Wednesday.

and

    Carla and Dave can't both come on Monday or Friday, though either of them could come alone on those days.

we can conclude

    Em and either Carla or Dave can't come on Monday.

Putting this together with the premise

    At least four of these five people must be there: Al, Beth, Carla, Dave, and Em.

yields the conclusion

    The meeting can't be held on Monday.

Combining this with the previously derived conclusion that the meeting can't be held on Wednesday and with the premise

    The meeting must be held on Monday, Wednesday, or Friday.

we get the conclusion

    The meeting must be held on Friday.

Thus we analyze a complicated and forbidding inference into a sequence of simple inferences. The result of this process is summarized below in a more compact form, which we shall call a proof. A proof begins with the premises, or assumptions, of the unanalyzed argument, listed on separately numbered lines.
We indicate which statements are assumptions by writing an 'A' to the right of each. Each successive conclusion is written on a new numbered line, with the line numbers of the premises (either assumptions or previous conclusions) from which it was deduced listed to the right. Here is our reasoning recorded as a proof:

    1. The meeting must be held on Monday, Wednesday, or Friday.                A
    2. At least four of these five people must be there: Al, Beth, Carla,
       Dave, and Em.                                                           A
    3. Em can't come on Monday or Wednesday.                                   A
    4. Carla and Dave can't both come on Monday or Friday, though either
       of them could come alone on those days.                                 A
    5. Al can come only on Monday and Friday.                                  A
    6. Neither Al nor Em can come on Wednesday.                                3, 5
    7. The meeting can't be held on Wednesday.                                 2, 6
    8. Em and either Carla or Dave can't come on Monday.                       3, 4
    9. The meeting can't be held on Monday.                                    2, 8
    10. The meeting must be held on Friday.                                    1, 7, 9

The series of conclusions is listed on lines 6-10. None of these conclusions is drawn from more than three premises. Each inference is plainly valid. The proof ends when the desired conclusion is reached. In the remainder of this chapter, we explore a more formal version of this proof technique.

Exercise 4.1
Analyze each of the following arguments into simple inferences involving at most three premises each, and write the analyzed argument as a proof. Each inference in this proof should be obviously valid in the informal sense of validity discussed in Chapter 1, but it need not exemplify any prescribed formal rule. (Some simple formal inference rules are introduced in the next section.) There is not just one right answer; each argument may be analyzed in many ways.

1. If the person exists after death, then the person is not a living body.
   The person is not a dead body.
   Any body is either alive or dead.
   The person exists after death.
   ∴ The person is not a body.

2. x is an odd number.
   x + y = 25.
   x > 3.
   30/x is a whole number.
   x < 10.
   ∴ y = 20.

3.
   You will graduate this semester.
   In order to graduate this semester, you must fulfill the humanities requirement this semester.
   You fulfill the humanities requirement when and only when you have taken and passed either (1) two courses in literature and a single course in either philosophy or art or (2) two courses in philosophy and a single course in either literature or art.
   You have taken and passed one art course but have taken no courses in philosophy or literature.
   You have time to take at most two courses this semester.
   Among the philosophy courses, only one is offered at a time when you can take it.
   ∴ You will take two literature courses this semester.

4.2 SIMPLE FORMAL INFERENCE RULES

In this section we introduce the idea of a proof, not for an argument but for a sequent or argument form. The idea, once again, is to break a complicated or dubious inference down into smaller inferences, each of which has a simple form. In formal proofs we require that these smaller inferences have one of a well-defined set of forms that we already recognize as valid. In the system of formal logic that we shall adopt there are ten such forms. The most familiar of them is modus ponens (introduced in Section 2.1). To illustrate, let's construct a formal proof to demonstrate the validity of the sequent:

    P → (Q → (S → T)), P, P → Q, S ⊢ T

The first step is to write the assumptions in a numbered list, indicating that they are assumptions by writing an 'A' to the right of each:

    1. P → (Q → (S → T))    A
    2. P                    A
    3. P → Q                A
    4. S                    A

Then we look for familiar inference patterns among the premises. For example, from premises 2 and 3, we may infer by modus ponens the conclusion 'Q'. So we write this as a conclusion, listing to the right the line numbers of the premises from which it was inferred and the form or rule of inference by which it was inferred:

    5. Q                    2, 3 modus ponens

The formula 'P', which is assumed on line 2, is also the antecedent of the conditional assumption on line 1.
Though the consequent of this conditional is a complex formula, rather than a single sentence letter, we still recognize here another instance of modus ponens. So we draw the conclusion:

    6. Q → (S → T)          1, 2 modus ponens

(We have dropped the unnecessary outer brackets, as usual, and will continue to do so without comment from now on.) Now lines 5 and 6 can be combined to obtain yet another conclusion:

    7. S → T                5, 6 modus ponens

And lines 4 and 7 yield the conclusion:

    8. T                    4, 7 modus ponens

This is the conclusion we wanted to establish. And now we have succeeded. For by showing that it is possible to get from the assumptions of the sequent to its conclusion by simple steps of valid reasoning, we have shown that the sequent itself is valid.

To see why it is valid, consider a preliminary conclusion C₁ validly drawn from some initial set of premises. Now let new premises be used together with C₁ to validly draw a second conclusion C₂. Since C₁ was validly drawn, by the definition of validity C₁ is true on any valuation on which the original premises are true. And similarly, since C₂ validly follows from C₁ together with the new premises, C₂ is true on any valuation on which both C₁ and the new premises are true. But since C₁ is true on any valuation on which the original premises are true, C₁ is true on any valuation on which the original premises and also the new premises are true. Hence, since C₂ is true on any valuation on which both C₁ and the new premises are true, C₂ is true on any valuation on which both the initial premises and the new premises are true. That is, the inference from the initial premises together with the new premises to C₂ is valid. Further, if we were to add still more premises and validly draw yet a third conclusion C₃, the same reasoning would show that the inference combining all three sets of premises to the conclusion C₃ is also valid. And so it goes.
Thus, by stringing together valid inferences, we prove the validity of the inference whose premises are all the assumptions made along the way and whose conclusion is the final conclusion of the string.

In the rest of this section, we shall show how to break down any valid sequent in propositional logic into a sequence of simple and (more or less) obviously valid patterns of reasoning. Such sequences, as exemplified by lines 1-8 above, are called proofs. Modus ponens is not the only pattern used in proofs. We shall construct proofs in propositional logic by the so-called natural deduction method, which utilizes ten distinct patterns of reasoning, or rules of inference (or inference rules), of which modus ponens is one. (There are many other methods of proof, which use different types and numbers of rules, though for classical logic, at least, they all yield the same results. Some of the alternative methods are discussed in Section 4.5.)

Proofs, of course, are only as credible as their inference rules. As we introduce each rule, we shall verify its validity using the semantics developed in Chapter 3. This will enable us to see that our proof technique is sound—that is, that if we start with assumptions true on some valuation, we shall always, no matter how many times we apply these rules, arrive at conclusions that are likewise true on that valuation. Thus a proof establishes that there are no counterexamples to the sequent of which it is a proof; it is a third formal method (in addition to truth tables and trees) for showing that a sequent is valid. In Section 5.10 we shall show that the entire system of rules introduced here is not only sound but also complete—that is, capable of providing a proof for every valid sequent of propositional logic.

Our first inference rule, modus ponens, may be stated as follows: Given any conditional and its antecedent, infer its consequent. Or using the Greek letters of Section 2.3: Given Φ → Ψ and Φ, infer Ψ.
Φ and Ψ may be any formulas, simple or complex. For example, in the inference from assumptions 1 and 2 to conclusion 6 in the proof above, Φ is 'P' and Ψ is 'Q → (S → T)'.

Modus ponens is clearly valid, as we can see by examining its truth table. That is, no matter what the truth values of Φ and Ψ may be, it can never happen that Φ → Ψ and Φ are true but Ψ is untrue:

    Φ  Ψ |  Φ → Ψ   Φ   ⊢  Ψ
    T  T |    T     T      T
    T  F |    F     T      F
    F  T |    T     F      T
    F  F |    T     F      F

In addition to modus ponens, we shall introduce nine other rules, for a total of ten—two for each of the five logical operators. For each operator one of the two rules, called an introduction rule, allows us to reason to (introduces) conclusions in which that operator is the main operator. The second rule allows us to reason from premises in which that operator is the main operator; it is known as the operator's elimination rule, because it enables us to break a premise into its components, thus "eliminating" the operator.

Modus ponens is the elimination rule for the conditional. Given a formula Φ → Ψ together with Φ, it allows us to "eliminate" the conditional operator and wind up just with Ψ. In doing proofs, then, we shall call modus ponens conditional elimination, which we abbreviate as '→E'. Officially, we state the rule of modus ponens as follows:

Conditional Elimination (→E)   Given (Φ → Ψ) and Φ, infer Ψ.

The introduction rule for the conditional has some special features which are best appreciated only after some practice with the other rules. We shall therefore consider it later.

Perhaps the simplest rules are those for conjunction. Indeed, these may seem utterly trivial. Here is the conjunction elimination rule:

Conjunction Elimination (&E)   From (Φ & Ψ), infer either Φ or Ψ.

That is, we may "eliminate" a conjunction by inferring one or the other of its conjuncts. (We can, if we like, infer both, but that takes two applications of the rule.) Conjunction elimination is sometimes known as simplification. This rule is obviously valid.
The only way Φ & Ψ can be true on a valuation is if both of its conjuncts are true on that valuation. Hence there is no valuation on which Φ & Ψ is true and either of its conjuncts is untrue. The following proof for the sequent 'R → (P & Q), R ⊢ Q' exemplifies both &E and →E:

    1. R → (P & Q)    A
    2. R              A
    3. P & Q          1, 2 →E
    4. Q              3 &E

As before, we begin by writing the assumptions on numbered lines (lines 1 and 2) and marking them with an 'A' to indicate that they are assumptions. '→E' (modus ponens) applied to lines 1 and 2 gets us the conclusion 'P & Q' at line 3, and from this by conjunction elimination we obtain the desired conclusion.

Let's now consider the conjunction introduction rule. This rule enables us to infer conclusions whose main operator is a conjunction:

Conjunction Introduction (&I)   From Φ and Ψ, infer (Φ & Ψ).

Conjunction introduction is also called conjunction or (more rarely) adjunction. It allows us to join any two previously established formulas together with '&'. If these formulas are true, then by the valuation rule for '&' the resulting conjunction must be true as well, and so clearly the rule is valid. We may illustrate both &E and &I by constructing a proof for the sequent 'P & Q ⊢ Q & P'. (This sequent is hardly less obviously valid than the rules themselves, but it nicely illustrates their use.)

    1. P & Q    A
    2. P        1 &E
    3. Q        1 &E
    4. Q & P    2, 3 &I

Starting with the assumption 'P & Q', we break it into its components at lines 2 and 3 by &E, then introduce the desired conclusion by &I (whose purpose, remember, is to create conjunctive conclusions) at line 4. Intuitively, the reasoning is this: Given that the conjunction 'P & Q' is true, 'P' is true and 'Q' is true as well. But then the conjunction 'Q & P' is also true.

The order in which the premises are listed is irrelevant to the application of a rule of inference.
Thus, even though 'P' is listed on line 2 of this proof and 'Q' on line 3, we may legitimately infer 'Q & P', in which 'Q' comes first.

Moreover, the use of two different Greek letters in stating a rule does not imply that the formulas designated by those letters must be different. In the &I rule, for example, Φ and Ψ can stand for any formulas without restriction, even for the same formula. The following proof of the trivial but valid sequent 'P ⊢ P & P' illustrates this point:

1. P        A
2. P & P    1, 1 &I

Here we apply the rule of &I (from Φ and Ψ, infer (Φ & Ψ)) to a case in which both Φ and Ψ are 'P'. That is, we infer from 'P' and 'P' again the conclusion 'P & P'. Since we have used 'P' twice, we list line 1 twice in the annotation. Though odd, this sort of move is quite legitimate, not only for &I but (where applicable) for other rules as well. Given, for example, that the sun is hot, it validly follows that the sun is hot and the sun is hot, though we are not likely to have much use for that conclusion.

The elimination and introduction rules for the biconditional are closely related to those for conjunction. This is not surprising, since 'P ↔ Q' has the same truth conditions as the conjunction '(P → Q) & (Q → P)'. Thus, like the conjunction rules, the biconditional rules simply break the complex formula into its conditional components or assemble it from these components:

Biconditional Elimination (↔E)  From (Φ ↔ Ψ), infer either (Φ → Ψ) or (Ψ → Φ).

Biconditional Introduction (↔I)  From (Φ → Ψ) and (Ψ → Φ), infer (Φ ↔ Ψ).

As with conjunction elimination, the biconditional elimination rule gives us a choice of which of the two components to infer. This rule is used here in a proof of 'P ↔ Q, P ⊢ P & Q':

1. P ↔ Q    A
2. P        A
3. P → Q    1 ↔E
4. Q        2, 3 →E
5. P & Q    2, 4 &I

We "eliminate" the biconditional at line 3, obtaining one of its component conditionals, 'P → Q'. Next we use modus ponens (→E) at line 4 to obtain 'Q', one of the conjuncts of our desired conclusion.
The other conjunct, 'P', was already given as an assumption. Conjunction introduction enables us to combine these conjuncts into our conclusion at line 5.

The following proof of '(P → Q) → (Q → P), P → Q ⊢ P ↔ Q' illustrates the use of the other biconditional rule, biconditional introduction:

1. (P → Q) → (Q → P)    A
2. P → Q                A
3. Q → P                1, 2 →E
4. P ↔ Q                2, 3 ↔I

We next consider the disjunction introduction rule (sometimes called the addition rule):

Disjunction Introduction (∨I)  From Φ, infer either (Φ ∨ Ψ) or (Ψ ∨ Φ).

That is, given any formula, we may infer its disjunction (as either first or second disjunct) with any other formula. If, for example, my best friend is Jim, then it is certainly true that either my best friend is Jim or my best friend is Sally. And it is obvious from the valuation rule for '∨' that this pattern is valid in general, for whenever either disjunct of any disjunction is true, the disjunction itself is also true.

The following proof of '(P ∨ Q) → R, P ⊢ R ∨ S' illustrates the use of ∨I:

1. (P ∨ Q) → R    A
2. P              A
3. P ∨ Q          2 ∨I
4. R              1, 3 →E
5. R ∨ S          4 ∨I

To use the conditional assumption '(P ∨ Q) → R' we must "eliminate" the conditional by →E. But to do this we must first obtain its antecedent, 'P ∨ Q'. Since we are given 'P' as an assumption, we can infer 'P ∨ Q' simply by applying ∨I at line 3. This enables us to derive 'R' at line 4. The conclusion we want to reach, however, is 'R ∨ S'. But this can be deduced from 'R' by applying ∨I once again, this time to line 4.

The disjunction elimination rule, ∨E, allows us to draw conclusions from disjunctive premises, provided that we have established certain conditionals:

Disjunction Elimination (∨E)  From (Φ ∨ Ψ), (Φ → Θ), and (Ψ → Θ), infer Θ.

Disjunction elimination is also known as constructive dilemma. It is valid: inspection of the eight-line truth table for (Φ ∨ Ψ), (Φ → Θ), and (Ψ → Θ) shows that there is no valuation on which all three are true while Θ is untrue.
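The original page displays that truth table in full; since its layout does not survive here, the same inspection can be sketched in code (an illustration of ours, not the book's). A search over all eight valuations of Φ, Ψ, and Θ turns up no counterexample to the rule:

```python
# Search for a countervaluation to disjunction elimination (∨E):
# an assignment making Φ ∨ Ψ, Φ → Θ, and Ψ → Θ all true but Θ false.
from itertools import product

def implies(p, q):
    # Valuation rule for the conditional.
    return (not p) or q

counterexamples = [
    (phi, psi, theta)
    for phi, psi, theta in product([True, False], repeat=3)
    if (phi or psi) and implies(phi, theta) and implies(psi, theta) and not theta
]
print(counterexamples)  # [] -- the rule has no counterexample
```

The search comes back empty: if Θ were false, both conditionals would force Φ and Ψ to be false, contradicting the truth of the disjunction.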
Consider, for example, the argument

ABCD is either a rectangle or a parallelogram.
If ABCD is a rectangle, then it is a quadrilateral.
If ABCD is a parallelogram, then it is a quadrilateral.
∴ ABCD is a quadrilateral.

We may symbolize this argument as 'R ∨ P, R → Q, P → Q ⊢ Q'. Its proof is a single step of ∨E:

1. R ∨ P    A
2. R → Q    A
3. P → Q    A
4. Q        1, 2, 3 ∨E

Here Φ is 'R', Ψ is 'P', and Θ is 'Q'. Notice that since ∨E uses three premises, we must cite three lines to the right when using it. Sometimes the same line is cited twice, as in the proof of 'P ∨ P, P → Q ⊢ Q':

1. P ∨ P    A
2. P → Q    A
3. Q        1, 2, 2 ∨E

In this proof, Φ and Ψ are both 'P' and Θ is 'Q'. The first premise is, of course, redundant, but redundancy does not affect validity.

The most interesting uses of ∨E are those in which the conditional premises necessary for proving the conclusion are not given as assumptions but must themselves be proved. This, however, requires the use of the rule →I, which is introduced in the next section.

The negation elimination rule, which is sometimes called the double negation rule, allows us to "cancel out" double negations when these have the rest of the formula in their scope:

Negation Elimination (~E)  From ~~Φ, infer Φ.

This rule, too, is obviously valid. For by the valuation rule for '~', if ~~Φ is true, then ~Φ is false and hence Φ is true. To say, for example, that I am not not tired is the same thing as to say that I am tired. Here is an example of the use of negation elimination, in the proof of 'P → ~~Q, P ⊢ Q':

1. P → ~~Q    A
2. P          A
3. ~~Q        1, 2 →E
4. Q          3 ~E

Neither the negation elimination rule nor any of the other rules allows us to operate inside formulas. It is a mistake, for example, to do the proof just illustrated this way:

1. P → ~~Q    A
2. P          A
3. P → Q      1 ~E (wrong!)
4. Q          2, 3 →E

Negation elimination operates only on doubly negated formulas. 'P → ~~Q' is a conditional, not a doubly negated formula.
We must use conditional elimination to separate '~~Q' (which is a doubly negated formula) from the conditional before negation elimination can be applied.

It is not really invalid to eliminate double negations inside formulas; it's just not a legitimate use of our negation elimination rule. We never need to use it this way, because our elimination rules always enable us to break formulas down (where this may validly be done) so that the double negation sooner or later appears on a line by itself and hence becomes accessible to the negation elimination rule. We could be more liberal, permitting elimination of double negations inside formulas, but only at the expense of complicating some of our metatheoretic work later on. Conservatism now will pay off later.

Finally, we should note that there is no one correct way to prove a sequent. If the sequent is valid, then it will have many different proofs, all of them correct, but varying in the kinds of rules used or in their order of application. Often, however, there is one simplest proof, more obvious than all the rest. In constructing proofs, good logicians strive for simplicity and elegance and thus make their discipline an art.

Exercise 4.2
Construct proofs for the following sequents:
1. P → Q, Q → R, P ⊢ R
2. P → (Q → R), P, Q ⊢ R
3. P & Q, P → R ⊢ R
4. P → Q, P → R, P ⊢ Q & R
5. (P & Q) → R, P ↔ Q, P ⊢ R
6. P & Q ⊢ Q ∨ R
7. P ⊢ (P ∨ Q) & (P ∨ R)
8. P, ((Q & R) ∨ P) → S ⊢ S
9. P ⊢ P ∨ P
10. P ⊢ (P ∨ P) & (P & P)
11. P → (Q → R), P → (R → Q), P ⊢ Q ↔ R
12. P ↔ Q, (P → Q) → R ⊢ R
13. P → (Q & R), P ⊢ R
14. (P → Q) → (Q → P), P → Q ⊢ Q ↔ P
15. P → ~~Q, P & R ⊢ Q ∨ S
16. P ∨ Q, Q → P, P → P ⊢ P
17. P ∨ Q, Q → ~~R, P → ~~R ⊢ R ∨ S
18. (P & (Q ∨ R)) → S, P, ~~R ⊢ S
19. P ↔ P ⊢ P → P
20. Q ⊢ Q ∨ (~~Q ↔ P)

4.3 HYPOTHETICAL DERIVATIONS

We have now encountered eight of the ten rules. I saved the remaining two until last because they make use of a special mechanism: the hypothetical derivation.
A hypothetical derivation is a proof made on the basis of a temporary assumption, or hypothesis, which we do not assert to be true, but only suppose for the sake of argument. Hypothetical reasoning is common and familiar. In planning a vacation, for example, one might reason as follows:

Suppose we stay an extra day at the lake. Then we would get home on Sunday. But then it would be hard to get ready for school on Monday.

Here the arguer is not asserting that she and her audience will stay an extra day at the lake, but is only supposing this to see what follows. The conclusion, that it will be hard to get ready for school on Monday, is likewise not asserted or believed. The point is simply that this conclusion would be true if the hypothetical supposition were true.

Her reasoning presupposes two unstated assumptions, used respectively to derive the second and third sentences. These are

1. If we stay an extra day at the lake, then we get home on Sunday.

and

2. If we get home on Sunday, then it will be hard to get ready for school on Monday.

Using 'S' for 'we stay an extra day at the lake', 'H' for 'we get home on Sunday', and 'M' for 'it will be hard to get ready for school on Monday', we may formalize this reasoning as follows:

1. S → H    A
2. H → M    A
3. | S      H (for →I)
4. | H      1, 3 →E
5. | M      2, 4 →E

(Assumptions 1 and 2 correspond to the implicit statements 1 and 2 above. Statements 3, 4, and 5 represent the first, second, and third sentences of the stated argument, respectively.)

I have done something novel beginning with 'S' on line 3, the line that represents the supposition or hypothesis that we stay an extra day at the lake. Instead of labeling 'S' as an assumption ('A'), I have marked it with the notation 'H (for →I)'. This indicates that 'S' is a hypothesis ('H'), made only for the sake of a conditional introduction (→I) argument and not (like 1 and 2) really assumed and asserted to be true. Moreover, I have drawn a vertical line to the left of 'S',
extending to all subsequent conclusions derived from 'S'. This line specifies that the reasoning to its right is hypothetical: that statements 3, 4, and 5 are not genuinely asserted, but only considered for the sake of argument.

This hypothetical reasoning has a purpose. In granting assumptions 1 and 2, we see that we can derive 'M' from 'S'; this means that the conditional 'S → M' must be true. This conditional, which symbolizes the English sentence 'if we stay an extra day at the lake, then it will be hard to get ready for school on Monday', is both the point of the argument and its implicit final conclusion. But this conditional is not deduced directly from our assumptions, nor from any of the statements listed in the argument, either singly or in combination. Rather, we know that 'S → M' is true because (given our assumptions) we showed in the hypothetical reasoning (or hypothetical derivation) carried out in lines 3-5 that 'M' follows logically from 'S'. It is this reasoning, not any single statement or set of statements, that shows that 'S → M' is true. To indicate this, and to draw the argument's final conclusion, we add a new line to the previous reasoning, as follows:

1. S → H    A
2. H → M    A
3. | S      H (for →I)
4. | H      1, 3 →E
5. | M      2, 4 →E
6. S → M    3-5 →I

The annotation of line 6 indicates that we have drawn the conclusion 'S → M' from the hypothetical derivation displayed on lines 3-5. The rule used is the rule of conditional introduction (→I), commonly known as conditional proof. It may be stated as follows:

Conditional Introduction or Conditional Proof (→I)  Given a hypothetical derivation of Ψ from Φ, end the derivation and infer (Φ → Ψ).

In our example, Φ is 'S' and Ψ is 'M'. A hypothetical derivation itself begins with a hypothesis, or temporary assumption, and ends when a desired conclusion has been reached. In this case the conclusion of the hypothetical derivation was 'M'.
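The semantic fact that licenses the step at line 6 can be confirmed directly. This small check (ours, not the book's) shows that every valuation making 'S → H' and 'H → M' true also makes 'S → M' true, which is exactly why the conditional extracted by →I is safe to assert:

```python
# Every valuation satisfying the assumptions 'S → H' and 'H → M'
# also satisfies the conclusion 'S → M' drawn by →I.
from itertools import product

def implies(p, q):
    # Valuation rule for the conditional.
    return (not p) or q

entailed = all(
    implies(s, m)
    for s, h, m in product([True, False], repeat=3)
    if implies(s, h) and implies(h, m)
)
print(entailed)  # True
```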
Its duration is marked by indentation and a vertical line to the left. Since in this problem the point of the hypothetical derivation was to show that 'M' followed from 'S', the hypothetical derivation (and hence the vertical line) ends with 'M'. A conclusion inferred from a hypothetical derivation is not part of the hypothetical derivation, and hence the vertical line does not extend to it. The conclusion, 'S → M' ('if we stay an extra day at the lake, then it will be hard to get ready for school on Monday'), is not merely hypothetical; it is something the arguer actually asserts and presumably believes.

Conditional introduction is, of course, the rule that enables us to prove conditional conclusions. We do this by hypothesizing the antecedent of the conditional and reasoning hypothetically until we derive the conditional's consequent. At that point the hypothetical derivation ends. We then apply conditional introduction to our hypothetical derivation to obtain the conditional conclusion. This proof of 'P ⊢ Q → (P & Q)' provides another example:

1. P            A
2. | Q          H (for →I)
3. | P & Q      1, 2 &I
4. Q → (P & Q)  2-3 →I

The sequent's conclusion, 'Q → (P & Q)', is a conditional, so after listing the assumption 'P' as usual, we hypothesize the antecedent 'Q' of this conditional at line 2. A single step of &I at line 3 enables us to derive its consequent, 'P & Q', thus completing the hypothetical derivation. We then get the desired conclusion by applying conditional introduction to the hypothetical derivation at line 4.

Conditional introduction is used in proving biconditional conclusions as well as conditional conclusions. But in proving biconditionals we often need to employ it twice in order to prove each of the two conditionals that comprise the biconditional before we assemble these components into the biconditional conclusion.
The following proof of the sequent 'P & Q ⊢ P ↔ Q' illustrates this technique:

1. P & Q    A
2. | P      H (for →I)
3. | Q      1 &E
4. P → Q    2-3 →I
5. | Q      H (for →I)
6. | P      1 &E
7. Q → P    5-6 →I
8. P ↔ Q    4, 7 ↔I

Here the conclusion we wish to obtain is 'P ↔ Q'. The rule for proving biconditional conclusions is ↔I, but to use ↔I to get 'P ↔ Q' we must first obtain its "component" conditionals, 'P → Q' and 'Q → P'. We do this in lines 2-4 and 5-7, respectively, by first hypothesizing each conditional's antecedent, next hypothetically deriving its consequent (which in each case involves a simple step of &E from our assumption), and finally applying →I to the resulting hypothetical derivation (at lines 4 and 7, respectively). Having obtained the two component conditionals, we complete the proof with a step of ↔I at line 8.

A step or two of conditional introduction is often used to provide the conditionals needed for drawing conclusions from a disjunctive premise by ∨E. This proof of 'P ∨ P ⊢ P' provides an example that is both elegant and instructive:

1. P ∨ P    A
2. | P      H (for →I)
3. P → P    2-2 →I
4. P        1, 3, 3 ∨E

Our assumption is the disjunctive premise 'P ∨ P'. The standard rule for drawing conclusions from disjunctive premises is ∨E: From (Φ ∨ Ψ), (Φ → Θ), and (Ψ → Θ), infer Θ. If we take Φ, Ψ, and Θ all to be 'P', this becomes: From 'P ∨ P', 'P → P', and 'P → P', infer 'P'. Thus we see that if we can prove 'P → P', we can use it twice with our assumption 'P ∨ P' to deduce the desired conclusion 'P'. But how do we prove 'P → P'? That's where →I comes in. We hypothesize this conditional's antecedent at line 2 and aim to derive its consequent. The hypothetical derivation is the simplest possible, for its hypothesis and conclusion are the very same statement 'P'. There is no need to apply any rules. In hypothesizing 'P', we have already in effect concluded 'P'; the hypothetical derivation ends as soon as it begins at line 2.
We then use →I to derive 'P → P' at line 3 and ∨E to obtain 'P' at line 4.

Let's consider one more example of the use of →I in preparation for a step of ∨E. In this case the sequent to be proved is 'P ∨ Q, R ⊢ (P & R) ∨ (Q & R)':

1. P ∨ Q                        A
2. R                            A
3. | P                          H (for →I)
4. | P & R                      2, 3 &I
5. | (P & R) ∨ (Q & R)          4 ∨I
6. P → ((P & R) ∨ (Q & R))      3-5 →I
7. | Q                          H (for →I)
8. | Q & R                      2, 7 &I
9. | (P & R) ∨ (Q & R)          8 ∨I
10. Q → ((P & R) ∨ (Q & R))     7-9 →I
11. (P & R) ∨ (Q & R)           1, 6, 10 ∨E

To use the disjunctive premise 'P ∨ Q' to obtain the conclusion '(P & R) ∨ (Q & R)' by ∨E, we need two conditional premises: 'P → ((P & R) ∨ (Q & R))' and 'Q → ((P & R) ∨ (Q & R))'. These are conditionals, so we use →I to prove each, the first in lines 3-6, the second in lines 7-10. Once the two conditionals have been established, a single step of ∨E at line 11 completes the proof.

In proving conditionals whose antecedents contain further conditionals, we sometimes need to make two or more hypothetical suppositions in succession. For example, to prove 'P ⊢ (Q → R) → (Q → (P & R))', we hypothesize the conclusion's antecedent 'Q → R' and then aim to deduce its consequent 'Q → (P & R)'. But this consequent is itself a conditional, so we must introduce a second hypothesis, the second conditional's antecedent, 'Q'. This enables us to deduce 'Q → (P & R)' by →I. And since this is proved under the initial hypothesis 'Q → R', a final step of →I yields the conclusion '(Q → R) → (Q → (P & R))'. Here is the proof in full:

1. P                            A
2. | Q → R                      H (for →I)
3. | | Q                        H (for →I)
4. | | R                        2, 3 →E
5. | | P & R                    1, 4 &I
6. | Q → (P & R)                3-5 →I
7. (Q → R) → (Q → (P & R))      2-6 →I

Notice that though the antecedent of '(Q → R) → (Q → (P & R))' is also a conditional, 'Q → R', we do not attempt to prove this conditional by hypothesizing 'Q' and deriving 'R'.
The antecedent of a conditional conclusion, no matter how complex, typically figures in a proof as a single hypothesis (line 2 in the proof above) and is not itself proved.

Finally, after a hypothetical derivation ends, all the formulas contained within it are "off limits" for the rest of the proof. They may not be used or cited later, because they were never genuinely asserted, but only hypothetically entertained. The following attempted proof of the invalid sequent 'P, Q → ~P ⊢ P & ~P' illustrates how violations of this restriction breed trouble. (If you don't see that this sequent is invalid, check it with a truth table.)

1. P        A
2. Q → ~P   A
3. | Q      H (for →I)
4. | ~P     2, 3 →E
5. Q → ~P   3-4 →I
6. P & ~P   1, 4 &I (Wrong!)

All rules are used correctly through step 5, though steps 3-5 are redundant, since all they do is prove 'Q → ~P', which was already given as an assumption at line 2. Step 6, however, is mistaken, since it uses the formula '~P', which appears in the hypothetical derivation at line 4, after that hypothetical derivation has ended. '~P', however, was never proved; it was merely derived from the supposition of 'Q'. It cannot be cited after the hypothetical derivation based on 'Q' ends at step 4. Violation of this restriction may result in "proofs" of invalid sequents, as it does here. These, of course, are not really proofs, since in a proof the rules must be applied correctly.

However, any nonhypothetical assumption or nonhypothetical conclusion and any hypothesis or conclusion within a hypothetical derivation that has not yet ended may be used to draw further conclusions. So, for example, in the proof of 'P ⊢ (Q → R) → (Q → (P & R))', which was given just before the preceding example, it is permissible to use the hypothesis 'Q → R' (line 2) at line 4 of the hypothetical derivation that begins with 'Q' (line 3), because the hypothetical derivation beginning with 'Q → R' has not yet ended.

A proof is not complete until all hypothetical derivations have ended.
If we were to leave a hypothetical derivation incomplete, then its hypothesis would be an additional assumption in the reasoning; but, being marked with an 'H' instead of an 'A', it might not be recognized as such.

To summarize: →I is the rule most often used for proving conditional conclusions. To prove a conditional conclusion Φ → Ψ, hypothesize its antecedent Φ and reason hypothetically to its consequent Ψ. Then, citing this entire hypothetical derivation, deduce Φ → Ψ by →I. The conclusion Φ → Ψ does not belong to the hypothetical derivation, so the vertical line that began with Φ does not continue to Φ → Ψ, but ends with Ψ.

It is perhaps not so obvious as with the nonhypothetical rules that →I is valid. To recognize its validity, we must keep in mind that the hypothetical derivation from Φ to Ψ must itself have been constructed using valid rules. This means that if a valuation makes true both the proof's assumptions and Φ, as well as any other hypotheses whose derivations had not ended when Φ was supposed, then it also makes Ψ true. That is, there is no valuation that makes these assumptions and hypotheses true and also makes Φ true but Ψ untrue. In other words, there is no valuation that makes these assumptions and hypotheses true and Φ → Ψ untrue.¹ But this means that the inference from these assumptions or hypotheses to Φ → Ψ is valid. Hence the rule →I, which allows us to conclude Φ → Ψ from these assumptions and hypotheses, is itself valid; it never leads from true premises to an untrue conclusion.

¹ This reasoning appeals implicitly to the valuation rule for the conditional.

We next consider the rule for proving negative propositions: negation introduction, ~I, often known as indirect proof or reductio ad absurdum (reduction to absurdity). Negation introduction is the rule for proving negated conclusions. To prove ~Φ, hypothesize Φ and validly derive from Φ an "absurdity": that is, a conclusion known to be false.
Since the derivation is valid, if Φ and any additional assumptions or hypotheses used in the derivation were true, the derived conclusion would have to be true as well. Therefore, since the derived conclusion is false, either Φ or some other assumption or hypothesis used to derive it must be false. So, if these other assumptions or hypotheses are true, it must be Φ that is false. Hence ~Φ follows from these other assumptions or hypotheses.

But how can we formally ensure that the conclusion we derive from Φ is false? One way is to require that the conclusion be inconsistent. Inconsistencies of the form 'Φ & ~Φ', for example, fill the bill. Actually, any inconsistency would do, but so as not to unduly complicate our rule, we shall require that the conclusion of the hypothetical derivation always have this one form. This restriction, as we shall see in Section 5.10, does not prevent us from proving any valid sequent. Therefore we will state the negation introduction rule as follows:

Negation Introduction (~I)  Given a hypothetical derivation of any formula of the form (Ψ & ~Ψ) from Φ, end the derivation and infer ~Φ.

The following proof of 'P → Q, ~Q ⊢ ~P', a sequent expressing modus tollens, uses this rule. Here Φ is 'P' and Ψ is 'Q':

1. P → Q      A
2. ~Q         A
3. | P        H (for ~I)
4. | Q        1, 3 →E
5. | Q & ~Q   2, 4 &I
6. ~P         3-5 ~I

Having listed the assumptions on lines 1 and 2, we note that the desired conclusion is a negation, '~P'. To prove this conclusion by ~I, then, we hypothesize 'P' at line 3 (not, as before, for →I, but rather for ~I) and try to derive an "absurdity." This is accomplished at line 5, where it is established that, given the assumptions 'P → Q' and '~Q', 'P' leads to absurdity. Therefore, given these assumptions, 'P' must be false, which is what we conclude at line 6 by asserting '~P'.

Formal indirect proofs are, of course, not merely formal. They may be used to represent specific natural language arguments.
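As a semantic cross-check on the proof just given (our sketch, not part of the text), one can verify that modus tollens has no countervaluation: no assignment makes 'P → Q' and '~Q' true while '~P' is false.

```python
# Modus tollens: P → Q, ~Q ⊢ ~P. Look for a valuation that makes the
# premises true and the conclusion false; classically there is none.
from itertools import product

def implies(p, q):
    # Valuation rule for the conditional.
    return (not p) or q

counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and (not q) and p   # conclusion ~P false means P true
]
print(counterexamples)  # []
```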
So, for example, if we let 'P' stand for 'A person is defined by her genome' and 'Q' for 'Identical twins are the same person', the reasoning represented by this proof is as follows. It is assumed at line 1 that if a person is defined by her genome, then identical twins are the same person, and at line 2 that identical twins are not the same person. The argument aims to show that a person is not defined by her genome (line 6). To prove this, we suppose for the sake of argument at line 3 that a person is defined by her genome. We do not, of course, really assert this; we suppose it only to reduce it to absurdity and so prove its negation. Together with assumption 1, this supposition leads at line 4 to the conclusion that identical twins are the same person. And this conclusion, together with assumption 2, yields the absurd conclusion that identical twins both are and are not the same person. Having shown, given assumptions 1 and 2, that the supposition that a person is defined by her genome leads to absurdity, we conclude on the strength of these assumptions alone that a person is not defined by her genome. This final conclusion is recorded on line 6.

The following proof of the sequent '~(P ∨ Q) ⊢ ~P' provides another example of the application of ~I. Recall that '~(P ∨ Q)' means "neither P nor Q."

1. ~(P ∨ Q)                A
2. | P                     H (for ~I)
3. | P ∨ Q                 2 ∨I
4. | (P ∨ Q) & ~(P ∨ Q)    1, 3 &I
5. ~P                      2-4 ~I

With respect to our statement of the negation introduction rule, Φ here is 'P' and Ψ is 'P ∨ Q'. Once again the conclusion to be proved is '~P'. So, after listing the assumption, we hypothesize 'P' and aim for some contradiction. The trick is to see that we can obtain 'P ∨ Q', which contradicts our assumption, '~(P ∨ Q)'. The contradiction (absurdity) is reached at line 4 by &I. Having thus reduced 'P' to absurdity, we deduce '~P' at line 5.

Negation introduction may also be used, in combination with negation elimination, to prove unnegated conclusions.
To prove an unnegated conclusion Φ, we may hypothesize ~Φ, derive an absurdity, and apply ~I. But since ~I adds a negation sign to the hypothesis that is reduced to absurdity, it enables us to conclude only ~~Φ, not the desired conclusion Φ. However, from ~~Φ we can deduce Φ by negation elimination and so complete the proof. The following proof of '~(P & ~Q), P ⊢ Q' uses this strategy.² In this case the conclusion is 'Q'; with respect to the formal statement of the ~I rule, Φ is '~Q' and Ψ is 'P & ~Q':

1. ~(P & ~Q)                 A
2. P                         A
3. | ~Q                      H (for ~I)
4. | P & ~Q                  2, 3 &I
5. | (P & ~Q) & ~(P & ~Q)    1, 4 &I
6. ~~Q                       3-5 ~I
7. Q                         6 ~E

² To see why this form ought to be valid, recall that '~(P & ~Q)' is equivalent to 'P → Q'.

Negation introduction is often combined with conditional introduction, as in this proof of the sequent 'P → Q ⊢ ~Q → ~P', which expresses the pattern of inference called contraposition:

1. P → Q        A
2. | ~Q         H (for →I)
3. | | P        H (for ~I)
4. | | Q        1, 3 →E
5. | | Q & ~Q   2, 4 &I
6. | ~P         3-5 ~I
7. ~Q → ~P      2-6 →I

Having written our assumption, we note that the conclusion for which we are aiming, '~Q → ~P', is a conditional. So we hypothesize its antecedent at line 2 for →I, aiming to derive its consequent, '~P'. But '~P' is a negation, and ~I is the rule for proving negations. So, to set up a derivation of '~P', we hypothesize 'P' at line 3 for ~I and try to deduce a contradiction. The contradiction is obtained at line 5, which enables us to use ~I at line 6 to get '~P'. Having now derived '~P' from '~Q', we can deduce '~Q → ~P' by →I at line 7 to complete the proof.

Negation introduction is used in a peculiar way in the proof of the principle ex falso quodlibet, the principle expressed by the sequent 'P, ~P ⊢ Q'. (We demonstrated the validity of this sequent using a truth table in Section 3.2.)

1. P          A
2. ~P         A
3. | ~Q       H (for ~I)
4. | P & ~P   1, 2 &I
5. ~~Q        3-4 ~I
6. Q          5 ~E

'Q' is an unnegated conclusion, but ~I enables us to prove it nevertheless. To do so,
we must reduce '~Q' to absurdity to obtain '~~Q', from which 'Q' follows by ~E. What is genuinely peculiar about this proof is that '~Q' is not used in the derivation of the contradiction 'P & ~P'. The contradiction comes directly from assumptions 1 and 2. This undermines the notion that it is '~Q' that is being reduced to absurdity, for the absurdity lies in the assumptions, not in '~Q'. This pattern of reasoning is, however, legitimate in classical logic. Having assumed an absurdity, we can reduce any formula to absurdity: All formulas validly follow. Validly, but not relevantly. There is no counterexample to the sequent 'P, ~P ⊢ Q', but many instances of this sequent are irrelevant. Relevance logicians, who advocate a notion of validity stricter than the classical notion, would reject step 5 of this proof as invalid. Since the hypothesis '~Q' was not used in the derivation of the contradiction, they argue, no conclusion concerning '~Q' can legitimately be drawn. We note their protest here but set it aside. They will get their say in Section 16.3. In the meantime, we will accept such peculiar uses of ~I as valid.

We next consider a proof of the sequent 'P ∨ Q, ~P ⊢ Q', which expresses the pattern of inference called disjunctive syllogism. This proof also employs the irrelevant use of ~I illustrated in the previous problem. Because this sort of irrelevant move is unavoidable in proofs of disjunctive syllogism, many relevance logicians like disjunctive syllogism no better than they like ex falso quodlibet.

1. P ∨ Q        A
2. ~P           A
3. | P          H (for →I)
4. | | ~Q       H (for ~I)
5. | | P & ~P   2, 3 &I
6. | ~~Q        4-5 ~I
7. | Q          6 ~E
8. P → Q        3-7 →I
9. | Q          H (for →I)
10. Q → Q       9-9 →I
11. Q           1, 8, 10 ∨E

Our first assumption is a disjunction; to use it we need ∨E. But to use ∨E with 'P ∨ Q' to obtain the conclusion 'Q', we need these two conditionals: 'P → Q' and 'Q → Q'. These we obtain by →I, the first in lines 3-8, the second in lines 9-10. To prove 'P → Q', we hypothesize its antecedent 'P' at line 3. We now have hypothesized 'P' and assumed '~P' so that we can obtain any conclusion we please.
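Both classical validity claims can be confirmed mechanically (a sketch of ours, not the book's): exhausting the four valuations of 'P' and 'Q' shows that ex falso quodlibet holds vacuously, since 'P' and '~P' are never jointly true, and that disjunctive syllogism has no countervaluation either.

```python
# Exhaustive checks for ex falso quodlibet (P, ~P ⊢ Q) and
# disjunctive syllogism (P ∨ Q, ~P ⊢ Q).
from itertools import product

# Ex falso holds vacuously: the premise set {P, ~P} is unsatisfiable,
# so all() ranges over an empty generator and returns True.
ex_falso_ok = all(
    q for p, q in product([True, False], repeat=2) if p and not p
)

# Disjunctive syllogism: whenever P ∨ Q and ~P hold, Q holds.
disj_syll_ok = all(
    q for p, q in product([True, False], repeat=2) if (p or q) and not p
)
print(ex_falso_ok, disj_syll_ok)  # True True
```

This is exactly the relevance logicians' complaint in semantic dress: ex falso is "valid" only because its premises can never be jointly true.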
We want 'Q', the consequent of 'P → Q', in order to complete our conditional proof. To get it, we hypothesize '~Q' for reduction to absurdity. As in the previous example, however, we derive the absurdity (at line 5) not from this hypothesis but from previous (and irrelevant) assumptions. Nevertheless, this allows us to conclude '~~Q' at line 6 by ~I, from which we obtain 'Q' at line 7. The hypothetical derivation at lines 3-7 has thus established 'P → Q', a fact we record at line 8. The proof of 'Q → Q' at lines 9-10 is trivial. Having obtained the necessary premises at lines 1, 8, and 10, we finish with a step of ∨E.

Although there are many (indeed, infinitely many!) different proofs for each valid sequent, there is often one way that is the simplest and most direct. Finding that way is a matter of strategy. Often the best strategy for a proof can be "read" directly from the form of the conclusion, that is, from the identity of its main operator, as Table 4.1 indicates. It is common, as we have seen in some of the examples worked earlier, for different strategies to be used successively in different stages of a proof. To illustrate how Table 4.1 provides guidance in doing this, let's prove the sequent 'P ∨ Q ⊢ Q ∨ P'. We begin by noting that the conclusion of this sequent is of the form Φ ∨ Ψ. The first suggestion in the table for conclusions of this form is to use ∨I if either Φ or Ψ (i.e., in this instance 'P' or 'Q') is present as a premise. But we have neither premise, so this suggestion is inapplicable. We then try the second suggestion, which is applicable if there is a premise of the form Θ ∨ Λ. 'P ∨ Q' is such a premise. The table then recommends proving as subconclusions the conditionals Θ → (Φ ∨ Ψ) and Λ → (Φ ∨ Ψ) (i.e., in this case 'P → (Q ∨ P)' and 'Q → (Q ∨ P)'). A subconclusion is simply a conclusion useful for obtaining the main conclusion.
It may be, but is not always, the conclusion of a hypothetical derivation. Now the task is to prove the two subconclusions 'P → (Q ∨ P)' and 'Q → (Q ∨ P)'. These are both of the form Φ → Ψ. So we consult Table 4.1 regarding strategies for proving conclusions of this form. The table recommends in each case