
Gentzen and Jaśkowski Natural Deduction:
Fundamentally Similar but Importantly Different

Allen P. Hazen
Francis Jeffry Pelletier

Studia Logica (2014) 102: 1–40. © Springer 2014.
Special Issue: Gentzen's and Jaśkowski's Heritage: 80 Years of Natural Deduction and Sequent Calculi, edited by Andrzej Indrzejczak.

Abstract. Gentzen's and Jaśkowski's formulations of natural deduction are logically
equivalent in the normal sense of those words. However, Gentzen's formulation more
straightforwardly lends itself both to a normalization theorem and to a theory of “meaning”
for connectives (which leads to a view of semantics called ‘inferentialism’). The present
paper investigates cases where Jaśkowski's formulation seems better suited. These cases
range from the phenomenology and epistemology of proof construction to the ways to
incorporate novel logical connectives into the language. We close with a demonstration of
this latter aspect by considering a Sheffer function for intuitionistic logic.

Keywords: Jaśkowski, Gentzen, Natural deduction, Classical logic, Intuitionistic logic,
Inferential semantics, Generalized natural deduction, Sheffer functions.

1. Historical Remarks and Background

Two banalities that we often hear are (a) that it often happens that two
apparently radically different theories or approaches in some field of study
turn out to share fundamental underlying similarities that were hidden by
the differences in approach; and the inverse of that observation, (b) that it
can happen that two apparently similar approaches or theories can in fact
conceal some important differences when they are applied to areas that the
approaches weren’t originally designed to consider.
As indicated in its title, the present paper is a study of this second
banality as it has occurred in the natural deduction approach to the field of
logic. It seems apposite to pursue this study now, at the 80th anniversary of
the first publications of the natural deduction approach to logic. Astonishing
as it now seems, there were two totally independent strands of research that
each resulted in the publication of two approaches to natural deduction, and

these were published (amazingly) in the same year, with (amazingly!) no
interaction between the two authors nor between the two publishing venues.
The thrust of this paper is that, although it is commonly thought that
the two papers (Jaśkowski, 1934; Gentzen, 1934) were “merely two different
approaches to the same topic”, and that the differences in their approaches
were minor, we find that these differences are in fact quite important in the
further development of natural deduction. This is what we will set out to
demonstrate in the following pages.1

1.1. Jaśkowski on Natural Deduction


According to Jaśkowski (1934), Jan Łukasiewicz had raised the issue in his
1926 seminars that mathematicians do not construct their proofs by means
of an axiomatic theory (the systems of logic that had been developed at
the time) but rather made use of other reasoning methods; especially they
allow themselves to make “arbitrary assumptions” and see where they lead.
Łukasiewicz wondered whether there could be a logical theory that embodied
this insight but which yielded the same set of theorems as the axiomatic sys-
tems then in existence. Again according to Jaśkowski (1934), he (Jaśkowski)
developed such a system and presented it to the First Polish Mathematical
Congress in 1927 at Lvov, and it was mentioned (by title) in their pub-
lished proceedings of 1929. There seem to be no copies of Jaśkowski's
original paper in circulation, and our knowledge of the system derives from
a lengthy footnote in Jaśkowski, 1934. (This is also where he said that it
was presented and an abstract published in the Proceedings. Jan Woleński,
in personal communication, tells us that in his copy of the Proceedings,
Jaśkowski’s work (Jaśkowski, 1929) was reported only by title.) Although
the footnote describes the earlier use of (what we call) a graphical method
to represent these proofs, the main method described in Jaśkowski (1934)
is rather different—what we in earlier works called a bookkeeping method.
Cellucci (1995) recounts Quine's visit to Warsaw in 1933, and his meeting
with Jaśkowski. Perhaps the change in representational method might be
due to a suggestion of Quine (who also used a version of this bookkeeping
method in his own later system, Quine, 1950).
In the present paper we will concentrate on Jaśkowski’s graphical meth-
od, especially as it was later developed and refined by Fitch (1952). We
will call this representational method “Fitch-style”, although it should be
Footnote 1.
Much of the material in the upcoming historical introduction is available in a sequence
of papers by the present authors: Pelletier (1999, 2000); Pelletier and Hazen (2012). Some
related material is surveyed in (Indrzejczak, 2010, Chapt. 2).
kept in mind that it was originally developed by Jaśkowski. (And so we
often call it ‘Jaśkowski-Fitch’). As Jaśkowski originally described his earlier
method, it consisted in drawing boxes or rectangles around portions of a
proof. The restrictions on completion of subproofs (as we now call them)
are enforced by restrictions on how the boxes can be drawn. We would
now say that Jaśkowski’s system had two subproof methods: conditional-
proof (conditional-introduction)2 and reductio ad absurdum (negation-elim-
ination). It also had rules for the direct manipulation of formulas (e.g.,
Modus Ponens).
Proofs in this system look like this3 :

    1.  ((p → q) ∧ (¬r → ¬q))                  Supposition
        2.  p                                  Supposition
        3.  ((p → q) ∧ (¬r → ¬q))              1 Reiterate
        4.  (p → q)                            3 Simplification
        5.  q                                  2, 4 Modus Ponens
        6.  (¬r → ¬q)                          3 Simplification
            7.  ¬r                             Supposition
            8.  (¬r → ¬q)                      6 Reiterate
            9.  ¬q                             7, 8 Modus Ponens
            10. q                              5 Reiterate
        11. r                                  7–10 Reductio ad Absurdum
    12. p → r                                  2–11 Conditionalization
13. (((p → q) ∧ (¬r → ¬q)) → (p → r))          1–12 Conditionalization

Footnote 2.
Obviously, this rule of Conditional-Introduction is closely related to the deduction
theorem, that from the fact that Γ, ϕ ` ψ it follows that Γ ` (ϕ → ψ). The difference
is primarily that Conditional-Introduction is a rule of inference in the object language,
whereas the deduction theorem is a metalinguistic theorem that guarantees that proofs
of one sort could be converted into proofs of the other sort. According to (Kleene, 1967,
p. 39fn33) “The deduction theorem as an informal theorem proved about particular sys-
tems like the propositional calculus and the predicate calculus. . . first appears explicitly in
Herbrand (1930) (and without proof in Herbrand, 1928); and as a general methodological
principle for axiomatic-deductive systems in Tarski (1930). According to (Tarski, 1956,
p. 32fn), it was known and applied by Tarski since 1921.”
Footnote 3.
Jaśkowski had different primitive connectives and rules of inference, but it is clear how
this proof would be represented if he did have ∧ in his language.
In this graphical method, each time an assumption is made it starts a new
portion of the proof which is to be enclosed with a rectangle (a “subproof”).
The first line of this subproof is the assumption. . . in the case of trying to
apply conditional introduction, the assumption will be the antecedent of
the conditional to be proved and the remainder of this subproof will be
an attempt to generate the consequent of that conditional. If this can be
done, then Jaśkowski’s rule conditionalization says that the conditional
can be asserted as proved in the subproof level of the box that surrounds
the one just completed. So the present proof will assume the antecedent,
((p ⊃ q) ∧ (¬r ⊃ ¬q)), thereby starting a subproof trying to generate the
consequent, (p ⊃ r). But this consequent itself has a conditional as main
connective, and so it too should be proved by conditionalization with a
yet-further-embedded subproof that assumes its antecedent, p, and tries to
generate its consequent, r. As it turns out, this subproof calls for a yet
further embedded subproof, which is completed using Jaśkowski’s reductio
ad absurdum.
The main difference between Jaśkowski’s graphical method and Fitch’s is
that Fitch does not completely draw the whole rectangle around the embed-
ded subproof (but only the left side of the rectangle), and he partially un-
derlines the assumption. The same proof displayed above using Jaśkowski’s
graphical method is done like the following in Fitch’s representation (with
a little laxness on identifying the exact rules Fitch employs).

    1   ((p ⊃ q) ∧ (¬r ⊃ ¬q))
        2   p
        3   ((p ⊃ q) ∧ (¬r ⊃ ¬q))              1, Reiteration
        4   (p ⊃ q)                            3, ∧E
        5   q                                  2, 4 ⊃E
        6   (¬r ⊃ ¬q)                          3, ∧E
            7   ¬r
            8   (¬r ⊃ ¬q)                      6, Reiteration
            9   ¬q                             7, 8 ⊃E
            10  q                              5, Reiteration
        11  r                                  7–10, ¬E
    12  (p ⊃ r)                                2–11, ⊃I
13  (((p ⊃ q) ∧ (¬r ⊃ ¬q)) ⊃ (p ⊃ r))          1–12, ⊃I
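The nesting that the boxes (or Fitch's vertical lines) make graphically explicit can also be captured by a simple recursive data structure. The following Python sketch is ours, not part of either paper; the class names and the ASCII rendering of the connectives are illustrative conventions only.

from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Line:
    formula: str
    rule: str

@dataclass
class Subproof:
    # a (sub)proof is a sequence of items: formula-lines or embedded subproofs
    items: List[Union["Line", "Subproof"]] = field(default_factory=list)

def show(proof: Subproof, depth: int = 0) -> None:
    # print each formula indented in proportion to how many subproofs enclose it
    for item in proof.items:
        if isinstance(item, Line):
            print("    " * depth + f"{item.formula:<30} {item.rule}")
        else:
            show(item, depth + 1)

# the example proof above, with -> for the conditional, & for conjunction, ~ for negation
inner = Subproof([Line("~r", "Supposition"),
                  Line("(~r -> ~q)", "Reiterate"),
                  Line("~q", "Modus Ponens"),
                  Line("q", "Reiterate")])
middle = Subproof([Line("p", "Supposition"),
                   Line("((p -> q) & (~r -> ~q))", "Reiterate"),
                   Line("(p -> q)", "Simplification"),
                   Line("q", "Modus Ponens"),
                   Line("(~r -> ~q)", "Simplification"),
                   inner,
                   Line("r", "Reductio ad Absurdum")])
outer = Subproof([Line("((p -> q) & (~r -> ~q))", "Supposition"),
                  middle,
                  Line("(p -> r)", "Conditionalization")])
proof = Subproof([outer,
                  Line("(((p -> q) & (~r -> ~q)) -> (p -> r))", "Conditionalization")])

show(proof)

Calling show(proof) prints the thirteen formulas with an indentation profile that reproduces the nesting of the boxes; nothing in the sketch depends on which of the two graphical conventions is used.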
This Fitch-style of natural deduction proof-representation is pretty well-
known, since it was followed by a large number of elementary textbooks and
is employed by a large number of writers when they display their own proofs.4
We will be comparing this style of natural deduction proof-representation
with another, that of Gentzen.

1.2. Gentzen on Natural Deduction


The method of presenting natural deduction proofs in Gentzen (1934) was by
means of trees. The leaves of a tree were assumed formulas, and each interior
node was the result of applying one of the rules of inference to parent nodes.
The root of the tree is the formula to be proved. The following is a tree
corresponding to the example we have been looking at, although it should
be mentioned that Gentzen’s main rule for indirect proofs first generated ⊥
(“the absurd proposition”) from the two parts of a contradiction, and then
generated the negation of the relevant assumption.5

          1                                      1
((p → q) ∧ (¬r → ¬q))                 ((p → q) ∧ (¬r → ¬q))
--------------------- ∧-E             --------------------- ∧-E
  3                                                            2
 ¬r       (¬r → ¬q)                      (p → q)               p
 ------------------ →-E                  --------------------- →-E
         ¬q                                       q
         ------------------------------------------ ⊥-I
                              ⊥
                             ----- ⊥-E (3)
                              r
                           --------- →-I (2)
                           (p → r)
         ------------------------------------------ →-I (1)
         (((p → q) ∧ (¬r → ¬q)) → (p → r))

The lines indicate a transition from the upper formula(s) to the one just
beneath the line, using the rule of inference indicated on the right edge of
the line. (We might replace these horizontal lines with vertical or splitting
lines to more clearly indicate tree-branches, and label these branches with
the rule of inference responsible, and the result would look even more tree-
like). Gentzen uses the numerals on the leaves as a way to keep track of
subproofs. Here the main antecedent of the conditional to be proved is
entered (twice, since there are two separate things to do with it) with the
numeral ‘1’, the antecedent of the consequent of the main theorem is entered
with numeral ‘2’, and the formula ¬r (to be used in the reductio part of the
proof) is entered with numeral ‘3’. When the relevant “scope changing” rule

Footnote 4.
See Pelletier (1999, 2000) for details of the elementary textbooks that use this method.
Footnote 5.
He also considered a double negation rule, for classical logic.
is applied (indicated by citing the numeral of that branch as part of the
citation of the rule of inference, in parentheses) this numeral gets “crossed
out”, indicating that this subproof is finished.
Gentzen (unlike Jaśkowski) considered both classical logic and intuition-
ist logic, and many have claimed that his characterization of “the proper
form of rules of logic” in terms of introduction and elimination rules for
each of the logical connectives was the key to describing not only what was
“meant” by a logical connective, but also what The One True Logic must
be — namely, a logic in which all connectives obeyed this property. As
shown in Gentzen (1934), it is intuitionist logic that would be The One True
Logic if this criterion were correct. And that one would need to add some
further rules which were not in the introduction-and-elimination (Int-Elim)
paradigm in order to get classical logic.
Although this was described in Gentzen’s work and employed using
Gentzen’s tree-format representation of proofs, it is pretty straightforward
to adopt the same Int-Elim restrictions on the “correct” rules and require
that proofs in the Jaśkowski (or Fitch) graphical representations implement
them. Thus one can straightforwardly have graphical style proofs for intu-
itionist logic as well as for classical logic.

2. Some Heuristic Advantages of Jaśkowski over Gentzen

Pretty clearly, in any of the standard senses of the word, the systems gen-
erated using the Gentzen representation of proofs are equivalent to those
generated by employing the Jaśkowski-Fitch representational method. Their
respective intuitionist logic systems are equivalent, and their respective clas-
sical logic systems are equivalent. The particular styles of representing
proofs do not affect these equivalences, in any of the standard meanings
of this term.6
In this section we will be arguing that despite this “fundamental equiva-
lence” of Gentzen and Jaśkowski-Fitch systems, we think that there are some
important differences lurking; and we turn our attention to describing them.

Footnote 6.
The same can perhaps not be said of the other natural deduction methods, such as
sequent natural deduction (the first description of which is in Gentzen, 1936), which was
popularized by Suppes (1957); Mates (1965); Lemmon (1965), since there are various
differences of a meta-theoretic nature between natural deduction and sequent natural
deduction. And the notion of “generalized natural deduction” that we discuss later in this
paper is also importantly different; we reserve our comments about this method until a
later section.

2.1. Phenomenology and Epistemology

Though the Jaśkowski and Gentzen systems have the same fundamental
mathematical properties, the notion of a subproof in Jaśkowski-inspired sys-
tems is suggestive for phenomenology and epistemology. From a purely
formal standpoint, the Gentzen and Jaśkowski-Fitch presentations of nat-
ural deduction are both workable. If one is interested in theoretical proof
theory—in proving that derivations can be converted into a normal form,
etc.—Gentzen’s trees are perhaps more convenient: a single occurrence of a
formula in a Jaśkowski-Fitch derivation can be cited as a premise for mul-
tiple inferences lower down in the proof, and this leads to complications
which the Gentzen presentation avoids by putting multiple copies of the
premise on different branches of the tree. By the same token, if one engaged
in the construction of many, large, formal derivations—if, for example, one
were trying to rewrite Principia Mathematica, or were trying to construct
a formal verification of a complex computer program—the Jaśkowski-Fitch
“linearization” of natural deduction could provide significant economies: the
increase in the size (number of symbols) of a formal derivation when it is
converted from a linear form to a tree-form can be exponential.
But there are also non-formal standpoints. Logic is a branch of philoso-
phy as well as a branch of mathematics, and one reason for studying formal
systems of logic is the hope that, as simplified models of informal proof, they
will yield insights into the epistemology or the phenomenology of deductive
reasoning. (This may have been a major part of Łukasiewicz's motivation
in posing the question that inspired Jaśkowski. Gentzen, in contrast, was
a member of the Hilbert school, and—though he clearly valued the way
in which his N-systems represent the methodology of informal mathematical
proof—was more interested in formal aspects: his primary hope was that the
normalization, or, with the L-systems, cut-elimination properties of his sys-
tems would lead to consistency proofs for mathematical axiomatic systems.)
Here, it seems to us, the Jaśkowski-Fitch presentation is more perspicuous.7
The nodes in a Gentzen-style natural deduction tree are occupied by for-
mulas. Those at the leaves of the tree are termed hypotheses, and the others
are said to be inferred, by one or another rule, from the formulas standing
immediately above it. Speaking of inference, however, though appropriate
for rules like modus ponens or the conjunction rules, is lunatic for other rules.
In Disjunction Elimination (∨E), a formula γ is inferred from three formulas:
a “major premise” α ∨ β, a “minor premise” γ (derived under the hypothesis

Footnote 7.
The interested reader can consult a fuller discussion in Hazen (1999).
α), and a “minor premise” γ (derived under the hypothesis β): this sounds
like a logically trivial rule of inferring γ from two copies of itself (together
with a redundant major premise)! ∀I sounds even worse: infer ∀xF x from
an instance, F a: this sounds simply invalid. (As does the comparable rule
of □I in a formulation of modal logic: infer □α from α.) Obviously more
is going on: the key logical property of the rules is that in applying them
hypotheses can be “discharged”, and that, with the quantifier rule, there
is a restriction on the form of undischarged hypotheses standing above the
“inference”. But this notion of “discharging” a hypothesis is—if the formal
system is to be thought of as somehow representing some aspects of natu-
ral reasoning—in need of analysis. One can perhaps characterize it roughly
(how many of us have done this when trying to explain natural deduction to
students in elementary courses?) in terms of pretending: parts of a formal
derivation are written under the pretense that we believe a hypothesis and
consist of sentences we would be willing to assert if that pretense were true,
and other parts—those coming after the hypothesis has been “discharged”—
are written honestly, without pretense. But what does this psychological or
theatrical business of pretending have to do with deductive reason?
Things look different in a Jaśkowski-Fitch version of natural deduction.
Formulas standing as conclusions of rules like modus ponens or the Con-
junction rules are still naturally described as inferred, or at least inferable,
from the formulas standing as premises for the applications of these rules:
at least in the main proof (i.e., not in a subproof, not under a subsequently
discharged hypothesis) an application of one of these rules can be thought
of as representing a step in a possible episode of reasoning, in which the
reasoner infers a conclusion from premises which are already believed. In
the other rules, however, the conclusion is not presented as being inferred
(solely) from other formulas at all. In ∨E the conclusion γ is justified by ref-
erence to (in the terminology of Fitch, 1952: is a direct consequence of ) two
subproofs containing γ, one with the hypothesis α, one with the hypothesis
β, along with the formula α ∨ β. In ∀I, the assertion of ∀xF x is justified by
reference, not to the formula F a, but to a subproof (“general with respect
to” the eigenvariable a) containing it.
This, it seems to us, is more faithful to the phenomenology of reasoning.
If one draws a conclusion from two already believed premisses by inferring
it from them, one is thinking about the subject matter of the three proposi-
tions, not about their logical relations. You believe α, you believe α → β. At
some stage, while thinking about their common subject matter, you call both
to conscious consideration at the same time, and at that stage you come to
believe β: that is what inference is.8 In contrast, when drawing a conclusion
in a context where the formalized representation of one’s reasoning would
involve ∨E, one must be aware of logical relations: one cannot think solely
about the subject matter of the propositions involved. You believe α ∨ β.
You don’t (initially) believe γ, but you are aware that the argument from
α to γ is valid, and likewise that the argument from β to γ is valid. Your
awareness of these validities, as well as your (disjunctive) belief about the
subject matter, plays an essential role in your coming to believe γ. If there
is inference, it is not simply inference from premisses you already believe
about the subject matter: it is from the (subject matter relevant) α ∨ β
together with two metalogical premisses, α validly implies γ, and β validly
implies γ, premisses you come to believe not by inference, but by inspection
of deductions from hypotheses you don’t believe.9
There is another issue belonging to the philosophy of language (or to epis-
temology broadly construed) connected with natural deduction: the relation
between logical rules and the meanings of logical operators. Here Gentzen
had something but Jaśkowski apparently nothing to say. Gentzen, 1934,
p. 295 remarks, almost as an aside, that in his N-systems the Introduction
rules can be seen as defining the operators, with the Elimination rules giving

Footnote 8.
This is, of course, an oversimplification. Sometimes, after all, what happens when
you simultaneously consider α (which you believe), α → β (which you also believe) and
β (which you disbelieve) is not that you come to believe β, but rather that . . . your
confidence in one or the other or both of α and α → β is shaken. Modus ponens and
modus tollens are both logical rules, but more is needed to determine which rule will
guide your inference, your change of belief. For further discussion of the complex relation
between logical implication and inference, see Harman (1973).
Footnote 9.
Alonzo Church also saw the difference in the structure of the reasonings represented
by natural deduction and axiomatic formulations of logic, but for some purposes preferred
the axiomatic! Cf. (Church, 1956, pp. 164–165):
The idea of using the deduction theorem as a primitive rule of inference in formu-
lations of the propositional calculus or functional calculus is due independently to
Jaśkowski and Gentzen. Such a primitive rule of inference has a less elementary
character than is otherwise usual [. . . ], and indeed it would not be admissible for
a logistic system according to the definition we actually gave of it [. . . ]. But this
disadvantage may be thought to be partly offset by a certain naturalness of the
method; indeed to take the deduction theorem as a primitive rule is just to recog-
nize formally the usual informal procedure (common especially in mathematical
reasoning) of proving an implication by making an assumption and drawing a
conclusion.
What gives “the deduction theorem” (i.e., →I) its “less elementary character” is precisely
that in it a conclusion is drawn, not from one or a fixed number of formulas, but from a
more complex structure.
consequences valid in virtue of these definitions. Subsequent writers have
expanded on this (e.g., Belnap, 1962; Prawitz, 1979; Dummett, 1978, 1993;
Brandom, 1994, 2000; Peregrin, 2008; Read, 2010), and the idea (which can
be made fairly precise for logical operators and seems temptingly workable
at least in the context of Intuitionistic logic) has been extended into a gen-
eral program in the philosophy of language: Inferentialism, the attempt to
explain the meaning of vocabulary items as determined by, even in some
sense supervenient on, inferential patterns connected with those items. This
aperçu of Gentzen, to which nothing corresponds in Jaśkowski’s paper, de-
pends on a formal feature of his N-systems: each operator has two rules, an
Introduction and an Elimination, governing it, these rules being so related
as to allow derivations to be normalized: converted into derivations in which
nothing is inferred by an Elimination rule from a (major) premise itself in-
ferred by an Introduction rule. Jaśkowski’s aims in formulating his systems
did not require this feature, and, indeed, some of his systems lack it. In
what follows we will see a number of examples of applications of the natural
deduction idea in which Gentzen’s pairing of Introduction and Elimination
rules is not achieved. In this respect these systems can be seen as being in
the tradition of Jaśkowski, but not that of Gentzen. In some cases we will
also find that they are easier to formulate using the Jaśkowski-Fitch tech-
nique of subproofs than they would be to put in Gentzen’s format of trees
of formulas.

2.2. Jaśkowski-Friendly Logical Systems


We offer in this subsection a series of examples of logical systems that, we
feel, are more in keeping with Jaśkowski’s vision than Gentzen’s. We think
of two dimensions along which this is true: the picture in which rules for the
connectives need not (versus must) follow the Int-Elim rules paradigm, and
advantages of the subproof-and-reiteration picture of the development of a
proof (versus proofs as tree structures).

2.2.1. Subminimal Negation


One prominent feature of Gentzen’s work on natural deduction is the sys-
tematic pairing of Introduction and Elimination rules: each logical operator
is governed by one rule of each kind (counting doublets like, e.g., the left
and right versions of disjunction introduction or conjunction elimination as
single rules) which are—in a sense made precise in the proof of the normal-
ization theorem (cf. Prawitz, 1965)—inverses to one another. Reflecting on
this pattern, he made the remark about how the operators could be thought
of as defined by introduction rules, with the elimination rules simply draw-
ing out consequences of the definitions: a remark that has since inspired the
major philosophical project of “inferential semantics.” This pairing of rules
is far less prominent in Jaśkowski’s work: he did not use the terminology
of ‘Introduction’ and ‘Elimination’, and, indeed, was happy to consider sys-
tems with unpaired rules. Particularly in the light of the intense current
philosophical interest in inferential semantics, it is worth noting that much
later work has been done on systems of Jaśkowski’s more relaxed style. Such
systems, it seems fair to say, don’t define the operators governed by the rules
in the sense in which Gentzen-style rules can be said to, but they should not
be scorned on that account! In at least some cases, moreover, the formula-
tion of the rules of these systems seems simpler when when subproofs are
used than it would be in a more Gentzen-ish presentation.
At least in the most familiar formulations, rules for Intuitionistic Nega-
tion don’t seem to define it in a way completely parallel to that in which
Gentzen’s rules for positive connectives define them. Gentzen’s own prefer-
ence, followed by many writers, seems to have been to treat negation as a
defined connective:
¬α =df (α → ⊥)
where ⊥ is a propositional Falsum or Absurdity constant. And, contrasting
strangely with the idea that logical operators are defined by Introduction
rules, there is no Introduction rule for ⊥10 : its meaning is embodied in an
Elimination rule, Ex falso quodlibet or Explosion, providing that any formula
whatever may be inferred from ⊥. But rules can also be given for negation
as a primitive connective:
Negation Introduction: ¬α may be inferred from a subproof in
which, for some formula β, both β and ¬β are derived from the
hypothesis α
Negation Elimination: Any formula whatever may be inferred
from the two premises α and ¬α.
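Before commenting on these rules, a quick illustration (our example, not in the original text): Negation Introduction alone already yields ¬¬α from α, by letting the β of the rule be α itself.

1   α                    premise
    2   ¬α               hypothesis
    3   α                1, Reiteration
4   ¬¬α                  2–3, Negation Introduction (both α and ¬α occur in the subproof)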
Now these rules (which are derivable from the usual rules for implication and
the Explosion rule for ⊥ when negation is defined in terms of implication
and ⊥) don’t quite fit the standard formula for Introduction and Elimi-
nation rules—the mention of negation in describing the subproof required

Footnote 10.
The apparent use of an ⊥ introduction rule in the example proof done in the Gentzen
style in Section 1.2 is in fact a derived rule: the negation there is more properly q → ⊥,
and the rule that is appealed to is in fact →E.
for the Introduction rule is nonstandard—but they have as good a claim
as any Gentzen-ish pair to specify uniquely the meaning of the connective
they govern. Certainly they pass the standard test: if you incorporated two
connectives, ¬1 and ¬2 , into a formal language, governed by copies of these
rules, ¬1 and ¬2 would be interderivable. But this is so only because of
the assumption that absurdity implies everything! (Note that if we had two
propositional constants, ⊥1 and ⊥2 , each governed by an Explosion rule,
they would be interderivable.)
Which leads us to a consideration of a weaker logical system. It is now
standardly called Minimal Logic, after the title of Johansson’s 1936 paper,
in which it was proposed as a response to C.I. Lewis’s worries about the
“paradoxes of implication”: it avoids the (counterintuitive?) principle that a
contradiction implies everything. The same logic (or at least its implication-
negation fragment) had, however, been proposed by Kolmogorov (1925)—as
a formalization of the logic of Brouwerian Intuitionism—over a decade before
Johansson’s paper; and Jaśkowski, in his natural deduction paper, chose
Kolmogorov’s system (rather than the now-standard logic of Heyting) as his
intuitionistic system. (In the terminology of Jaśkowski's paper it is ITD, the
“intuitionistic theory of deduction.”) For this logic, we have the Negation
Introduction rule as stated above. . . but no Negation Elimination rule. This
represents a major departure from Gentzen’s paradigm. It is perhaps not as
often remarked as it should be that, in whatever sense Gentzen’s rules define
the operators they govern, the single negation rule of Minimal Logic does not
define its negation operator. Negation Introduction by itself is not enough
to make ¬1 α and ¬2 α equivalent when both are governed by duplicates
of it. One can have a formal language, in which the positive connectives
are governed by the standard intuitionistic rules, with two distinct, non-
equivalent, negation operators, each governed by the rules of Minimal Logic.
(This can perhaps be seen most clearly by noting that Minimal Logic is what
you get if you define negation in terms of → and ⊥, as above, but do not
assume Explosion—or, indeed, anything else—about what ⊥ means. Two
propositional constants, ⊥1 and ⊥2 , aren’t interderivable if we don’t make
any assumptions about them at all!)
It is perhaps insufficiently appreciated just how radically this diverges
from the Gentzenian paradigm! The single rule governing the negation op-
erator in Minimal Logic is formally the same as the ¬-Introduction rule
of Intuitionistic logic, so perhaps it is assumed that there can be nothing
very novel about the system. A still weaker logic of negation will perhaps
dramatize the point: the system, again coinciding with standard intuition-
istic logic in its treatment of the positive connectives, known as Subminimal
Negation.11 Semantically, this can be thought of as defining negation, not
as implication of some unique “absurd” proposition, but by reference to a
class of absurd propositions, with a negation being counted as true if the
negated formula implies some absurdity. On this interpretation, a number
of principles holding in Intuitionistic and Minimal Logic fail. For example,
(¬α ∧ ¬β) → ¬(α ∨ β): this holds in the stronger logics, since if α and β
both imply “the” absurd, their disjunction implies it, but it is not valid in
Subminimal Logic, since α and β might imply different absurd propositions,
neither of which is implied by their disjunction.
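A toy model may make the failure vivid. The following Python fragment is ours and is only an extensional caricature of the semantics just sketched: propositions are finite sets of states, "implies" is inclusion, and a negation counts as true when the negated proposition is included in some member of a fixed family of "absurd" propositions.

absurd = [{1}, {2}]                 # two different absurd propositions

def implies(a, b):
    return a <= b                   # inclusion plays the role of implication here

def neg_true(a):
    # ~a is true iff a implies some absurd proposition
    return any(implies(a, c) for c in absurd)

alpha, beta = {1}, {2}
print(neg_true(alpha), neg_true(beta))   # True True:  ~alpha and ~beta both hold
print(neg_true(alpha | beta))            # False:      ~(alpha v beta) fails

Here ¬α and ¬β hold while ¬(α ∨ β) does not, exactly because α and β are absorbed by different absurdities.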
Subminimal Logic can be axiomatized over Positive Intuitionistic Logic
by adding a single axiom scheme, the two-negations-added form of contra-
position. Equivalently, a Jaśkowski-Fitch form of natural deduction could
put the rule as:
Contraposition: ¬α may be inferred from ¬β together with a sub-
proof in which β is derived from the hypothesis α.
We have stated the rule in a Jaśkowski-Fitch style. Of course, it could be
given in a Gentzen style, but we find it to be considerably less simple:
ContrapositionG : ¬α may be inferred from the two formulas β and
¬β; occurrences of α as hypotheses above β may be discharged.
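For illustration (our example, not in the paper), the subproof form of the rule licenses the step from ¬α to ¬(α ∧ β), which is valid on the intended reading since any absurdity implied by α is implied by α ∧ β as well:

1   ¬α                   premise
    2   (α ∧ β)          hypothesis
    3   α                2, ∧E
4   ¬(α ∧ β)             1, 2–3, Contraposition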

2.2.2. Modal Logic


The issues arising in connection with negation re-occur in considering nat-
ural deduction formulations of modal logic. The idea that a connective is
somehow defined by its introduction rule (or by its introduction and elimi-
nation rules together) seems thoroughly implausible for modal operators: if
nothing else, the existence of many different interpretations (logical, physi-
cal, temporal, epistemic. . . ) for modal logics such as S4 and S5 implies that

Footnote 11.
Cf. Hazen (1992, 1995); Dunn (1993). A semantic description can be found in (Curry,
1963, pp. 255, 262), where it is proposed that a formalized theory might include, in addition
to its axioms, some set of specified counteraxioms, with the negation of a given formula
being counted as a theorem if one or another of the counteraxioms is derivable from it
in the theory. Curry did not, however, name or specify rules for this sort of negation:
the weakest logic of negation treated in detail is Minimal Logic, obtained by adding a
new Falsum constant that is assumed to follow from any counteraxiom. The logic was
considered, apparently without reference to Curry, around 1990 by I.L. Humberstone, who
conjectured its axiomatization and proposed the proof of its completeness as a problem to
Hazen. Hazen (1992) gives two completeness proofs, one by a suitably modified canonical
model and the other proof-theoretic, obtained by embedding the logic in a quantified
Positive Logic, quantified variables being thought of as ranging over “counteraxioms.”
the meanings of the modal operators are not determined by the logical rules
the simple approach suggested by Gentzen is insufficient.) The demand for
“neatly balanced” introduction and elimination rules, therefore, seems to
lose much of its philosophical force in connection with them. It is perhaps
unsurprising that the first writer on natural deduction systems for modal log-
ics, Fitch, claimed to be inspired by both Gentzen and Jaśkowski; certainly
he was happy to stray from Gentzen’s strict path of Int-Elim balance.12
As has often been noted, the logical behavior of modal operators shows
analogies with that of quantifiers. The rules for a necessity operator (putting
aside questions of what the “squares” of logical, physical, temporal, epis-
temic, deontic, doxastic . . . logics have in common!) will resemble those for
a universal quantifier. We have (subscripted for the Jaśkowski/Fitch systems
and the Gentzen systems)
Necessity IntroductionJ−F : □α can be inferred from a subproof
having no hypothesis, but containing α as a line (with some restric-
tions on what can be "reiterated" into the subproof, see discussion
in the next subsection).
or
Necessity IntroductionG : □α can be inferred from α (with some
restrictions on the undischarged hypotheses allowed above α).
For an elimination rule, we take the principle ab necesse ad esse valet con-
sequentia:
Necessity Elimination: α may be inferred from □α.
So far, so neat. But suppose we generalize from the alethic modal logics
to such weaker ones as deontic logics. Here necessity is thought of as obliga-
tion or requiredness, and we can’t assume that what ought to be is (Adam
and Eve shouldn’t have eaten the apple. . . ). We must (as in Fitch, 1966)
replace the elimination rule with the weaker
Deontic Necessity Elimination: ♦α may be inferred from □α.

Footnote 12.
Starting in the 1960s, many natural deduction systems for many modal logics were
published, largely by logicians based in philosophy departments. For an encyclopedic
survey, see Fitting (1983). A nice classification of approaches to natural deduction in
modal logics can be found in (Indrzejczak, 2010, Chapters 6–10). This book contains
much else that is of interest about natural deduction.
Going further afield, a natural deduction system for the weak modal logic
Possibility is, in the same way, analogous to existential quantification.
Due to the lesser expressive power of the modal language, however, strictly
analogous rules are harder to find, and the obvious ones have (less obvi-
ous!) weaknesses. Possibility Introduction (for alethic modalities) is simple
enough:
(Alethic) Possibility Introduction: ♦α may be inferred from α.
Possibility Elimination is more problematic. The guiding idea is that what-
ever follows logically from a possible proposition must itself be possible, and
this can be embodied in the rule
Possibility EliminationJ−F : ♦β may be inferred from ♦α together
with a subproof (with the same restrictions on reiteration as for Ne-
cessity Introduction subproofs) having α as a hypothesis and β as
a line.
(Note that this is perhaps not strictly speaking an elimination rule, as the
conclusion as well as the premise has to be governed by a possibility operator:
it is analogous to the rule for Subminimal negation.) This pair of rules for
possibility is pleasingly similar to the rules for the existential quantifier13 ,
but fails to yield a number of desirable derivabilities:

i) It does not, in combination with standard classical rules for non-modal
logic, allow the derivation of ¬♦⊥. (This is easy to see: the rules are
validated by an interpretation on which every proposition is considered
possible!)
ii) As a consequence, the combination of these rules, classical rules for
negation, and the definition □α =df ¬♦¬α does not allow us to derive
the rule of Necessity Introduction. (This contrasts with the situation

Footnote 13.
In fact, the Possibility Elimination rule is more nearly parallel, not to the standard
Existential Quantifier Elimination rule, but to a simplified version (mistakenly believed to
be “the” rule by some elementary students): ∃xF (x) may be inferred from ∃yG(y) together
with a subproof (with eigenvariable a) in which F (a) is derived from the hypothesis G(a).
This rule can perhaps be seen as a version of the method of ecthesis from Aristotle’s Prior
Analytics. When categorical sentences are formalized, in the usual way, in First Order
Logic, this rule can replace Existential Quantifier Introduction in the proof of the validity
of valid syllogisms, but outside syllogistic it has weaknesses paralleling those of Possibility
Elimination.
when necessity is taken as primitive: the necessity rules, classical nega-
tion rules, and the definition ♦α =df ¬□¬α yield both possibility rules
as derived rules.)
iii) Similarly, when both modal operators are taken as primitive, the four
rules do not, in combination with classical negation rules, suffice to
prove the equivalences □α ⊣⊢ ¬♦¬α and ♦α ⊣⊢ ¬□¬α.14
iv) Even in a modal logic based on intuitionistic, rather than classical,
logic, it seems desirable to have possibility distributing over disjunction:
♦(α ∨ β) ought to be equivalent to ♦α ∨ ♦β, but the possibility rules
(together with standard disjunction rules) don’t allow us to infer the
latter from the former (as noted in Fitch, 1952, p. 73).

Natural deduction systems of modal logic based on these rules do not,
therefore, have all the nice metatheoretic properties Gentzen found in non-
modal natural deduction. Supplemented by some rule or rules to compensate
for the weaknesses enumerated (Fitch, 1952 adds the equivalences of (iii) as
additional primitive rules), however, they provide a simple and efficient way
to formalize intuitively plausible reasoning: Jaśkowski’s more modest goal
for natural deduction.

2.2.3. Advantages of Subproofs with Restrictions on Reiteration

In the discussion just concluded, we argued that in modal logics we should
not (in general: there's an exception coming up!) expect Introduction and
Elimination rules to pair off as Gentzen’s do, but didn’t give any positive
reason for preferring the Jaśkowski-Fitch subproof-and-reiteration format for
natural deduction rules. This format does, however, have distinct advantages
over Gentzen’s for modal logics. The rule of “Reiteration”, in its simplest
form, permits a formula, occurring as a line of a (sub)proof above a given
(sub)subproof, to be repeated within the given (sub)subproof. This version
of the rule suffices for the formulation of rules for the standard, nonmodal,
operators of propositional logic. More complicated versions of Reiteration
are used elsewhere. For example, the rules for ∀I and ∃E make use of sub-
proofs that are (in Fitch’s terminology) general with respect to a parameter
or Eigenvariable, and for such subproofs Reiteration is restricted: a formula

Footnote 14.
These weaknesses of the possibility elimination rule (and of analogous sequent-calculus
rules) were overlooked by some early writers on natural deduction for modal logics: cf.
Routley (1975).
may be reiterated into a general subproof only if it does not contain the
Eigenvariable.15
A fairly obvious way to do this would be to have a restriction that a
formula may only be reiterated into a strict subproof if it begins with a
necessity operator. (This version, assuming the “alethic” rule of □E, gets
you S4.) The corresponding proviso in a Gentzen-style natural deduction
system would be that all undischarged hypotheses above a □I inference must
begin with □ operators: there isn't too much to choose between them here,
though proof checking might be a bit easier with the Fitch version, since in
the Gentzen version you have to check which hypotheses are discharged. But
a wide variety of modal logics can be given Fitch-style formulations simply by
modifying the restriction on Reiteration into strict subproofs. For S5, allow
formulas beginning with ♦ operators to be reiterated. For T, allow only
formulas starting with a □ operator, but delete the first necessity operator
from the reiterated copy. For B, allow any formula to be reiterated, but
prefix the reiterated copy with a ♦ operator.16
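These restrictions are mechanical enough to state as a small piece of code. The sketch below is ours; it assumes a toy representation of formulas as nested tuples ('box', ...) and ('dia', ...) over atoms, and it reads the S5 clause as allowing ♦-initial formulas in addition to the □-initial ones already allowed for S4.

def reiterate_into_strict(formula, logic):
    # return the copy of the formula as it may appear inside the strict subproof,
    # or None if reiteration is not permitted in the given logic
    boxed = isinstance(formula, tuple) and formula[0] == 'box'
    diamonded = isinstance(formula, tuple) and formula[0] == 'dia'
    if logic == 'S4':
        return formula if boxed else None                 # only box-formulas, unchanged
    if logic == 'S5':
        return formula if (boxed or diamonded) else None  # diamond-formulas as well
    if logic == 'T':
        return formula[1] if boxed else None              # delete the first box
    if logic == 'B':
        return ('dia', formula)                           # any formula, with a diamond prefixed
    raise ValueError("no reiteration policy for " + logic)

print(reiterate_into_strict(('box', 'p'), 'T'))     # 'p'
print(reiterate_into_strict('p', 'B'))              # ('dia', 'p')
print(reiterate_into_strict(('dia', 'p'), 'S5'))    # ('dia', 'p')
print(reiterate_into_strict(('dia', 'p'), 'S4'))    # None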
In at least some cases, the flexibility of Reiteration-with-restrictions
seems to allow more straightforward derivations. In S5, the formula
α → □♦α
is valid. Its proof in a Gentzen-style system, with restrictions on the form of
undischarged hypotheses allowed above □I inferences, however, is . . . round-
about. One cannot simply use →I with the hypothesis α: the restriction
on hypotheses would prevent the derivation of □♦α. One must, instead,
derive two conditionals, α → ♦α and ♦α → □♦α, and then derive the
desired conclusion from them by the non-modal rules supporting transitivity
of implication: the available proof, then, is not a “normal” derivation.17 In

Footnote 15.
The analogous rules (□I and ♦E) in modal logic similarly employ strict subproofs.
Thinking of the modal operators as generalizing over “possible worlds” motivates cor-
responding restrictions on Reiteration into strict subproofs: the restriction on general
subproofs prevents the Eigenvariable from being confused with a name for any particular
object, and we want, similarly, to prevent assertions about the actual world from being
treated as holding about arbitrary worlds.
Footnote 16.
The versions giving T and S4 can be found in Fitch (1952); they and the version for
S5 are stated and proved correct in an Appendix to Hughes and Cresswell (1968). The
version giving B is in Fitting (1983) and Bonevac (1987).
Footnote 17.
Corcoran and Weaver (1969) can be taken as presenting such a Gentzen-style system,
though it is formally a purely metalinguistic study and does not explicitly formulate a
natural deduction system. (Prawitz, 1965, p. 60) contains a compressed discussion of the
relations between natural deduction systems for S5 analogous to that in Corcoran and
Weaver and to the one described in our text.
a Fitch-style system, we can use α as the hypothesis of a →I subproof, infer
♦α by ♦I, and then reiterate this into an inner, strict, subproof to obtain
□♦α by □I. To obtain the same advantage with a Gentzen-style system one
would need a restriction along the lines of

Any path leading up from a □I inference to an undischarged and non-modalized
hypothesis must pass through some modalized formula.
But this is surely less elegant!18
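Written out, the Fitch-style derivation just described looks like this (our rendering, with the doubly indented line forming the strict subproof):

    1   α                hypothesis
    2   ♦α               1, ♦I
        3   ♦α           2, Reiteration (permitted in S5, since the formula begins with ♦)
    4   □♦α              3, □I
5   (α → □♦α)            1–4, →I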

2.2.4. Natural Deduction for Other Intensional Logics


Reiteration with restrictions seems to have similar advantages in connection
with intensional logics other than the modal ones. We do not here survey
more than just one, but it is suggestive of the fact that the linear method of
Jaśkowski-Fitch has advantages over the tree method of Gentzen.
Thomason (1970) presents a natural-deduction formulation of the Stalna-
ker-Thomason logic of conditionals. The standard →E rule for conditionals
—modus ponens—is postulated, along with a →I rule using special sub-
proofs: A conditional α > β may be inferred from a (special) subproof with
α as hypothesis and β as a line. There are four conditions under which a
formula may be reiterated into a special subproof19 ; the basic one (which
would be applicable to a wide range of conditional logics, e.g. logics of con-
ditional obligation) is that γ may appear as a reiterated line of a special

Footnote 18.
In fact, a further liberalization of the restriction on reiteration into strict subproofs
allows a very elegant formulation of propositional S5 . Call a formula fully modalized if
every occurrence of a propositional letter in it is in the scope of some modal operator.
Now allow all and only fully modalized formulas to be reiterated into strict subproofs, and
strengthen the Possibility-elimination rule to
A fully modalized formula β may be inferred from ♦α and a strict subproof with
α as its hypothesis containing β as a line.
This formulation maximizes the formal parallel between modal and quantifier rules; it
also allows, e.g., the derivation of the □ rules from the ♦ and ¬ rules when □ is taken
as a defined operator. It doesn’t quite allow a full normalization theorem: we may still
sometimes have to use ♦I, on a non-fully-modalized formula to allow its reiteration into a
strict subproof and then use ♦E, inside that subproof. It can be shown, however, that only
this limited “abnormality” is required: derivations can be put into a normal form with the
weak subformula property that every formula occurring in them is either (i) a subformula
of the conclusion or of one of the premises, or (ii) the negation of such a formula, or (iii)
the result of prefixing a single possibility operator to a formula of one of the first two sorts.
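The notion of a fully modalized formula is easy to check mechanically. The following Python sketch is ours, over the same toy tuple representation of formulas used earlier ('box', 'dia', and ordinary connectives as tuples, atoms as strings):

def fully_modalized(formula, under_modal=False):
    # True iff every propositional letter occurs within the scope of some modal operator
    if isinstance(formula, str):                          # a propositional letter
        return under_modal
    op, *parts = formula
    if op in ('box', 'dia'):
        return all(fully_modalized(p, True) for p in parts)
    return all(fully_modalized(p, under_modal) for p in parts)

print(fully_modalized(('imp', ('box', 'p'), ('dia', 'q'))))   # True
print(fully_modalized(('imp', 'p', ('dia', 'q'))))            # False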
Footnote 19.
One of them yields the principle of “conditional excluded middle.” Alas, it is needed
for other things as well, so a system for the logic of Lewis, 1973, which does not contain
this principle, cannot be obtained simply by dropping it without replacement.
subproof with hypothesis α if α > γ occurs as an earlier line of the contain-
ing proof. (Another has the effect of guaranteeing that equivalent formulas
are substitutable as antecedents, another that “necessary” formulas may be
reiterated.)
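For illustration (our example): given α > γ as a premise, the basic reiteration condition already yields α > (γ ∨ δ).

1   (α > γ)              premise
    2   α                hypothesis of a special subproof
    3   γ                1, 2, Reiteration (α > γ is an earlier line of the containing proof)
    4   (γ ∨ δ)          3, ∨I
5   (α > (γ ∨ δ))        2–4, >I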
A corresponding Gentzen-style system is doubtless formulable, but we
doubt it would be as simple or intuitive.

2.2.5. Natural Deduction for Free Logic


Jaśkowski first introduces quantifiers binding propositional variables, yield-
ing what he calls the extended theory of deduction (= what Church, 1956
calls the extended propositional calculus ). ∀E (Jaśkowski’s Rule V) allows
any instance of a universal quantification to be inferred from it. His Rule VI
is essentially the same as Gentzen’s formulation of ∀I: an instance, F (q), may
be followed by its universal quantification, ∀qF (q), provided that the variable
q is not one occurring freely in any hypothesis in effect at that point in the
deduction. He does not give rules for the existential quantifier, treating it as
defined. Fitch (1952), however, does give rules for the existential quantifier.
∃I allows, simply, the inference of an existential quantification from one of
its instances, but ∃E involves reasoning from a hypothesis, and so—in what
we have been calling the Jaśkowski-Fitch form of natural deduction—the
erection of a new subproof:
∃E: a formula, α, may be inferred from a pair of items, the existen-
tially quantified formula ∃qF (q) and a subproof having the instance
F (s) as hypothesis and α as an item, provided that the free propo-
sitional variable s (the eigenvariable of the subproof) does not occur
free in α or in any formula from outside the subproof appealed to
within it: any formula, that is, which is reiterated into the subproof.
Given this formulation of the ∃E rule, it was natural for him20 to reformulate
∀I in a way that makes use of a similar subproof:
∀I: a universal quantification ∀qF (q) may be inferred from a categor-
ical (i.e. hypothesis-less) subproof containing its instance F (q) as an
item, provided that the free variable q (the Eigenvariable of the sub-
proof) does not occur free in any formula from outside the subproof

Footnote 20.
Though not strictly forced on him: various elementary textbooks, for example
Bergmann et al. (2008) and others that are displayed in Pelletier and Hazen (2012),
combine a Fitch-like ∃E rule, using a subproof, with a Gentzen or Jaśkowski-like formula-
tion of ∀I, with no subproof. The interderivability of the rules for the two quantifiers, in
classical logic, is certainly easier to see, however, when they are given parallel formulations.
appealed to within it: any formula, that is, which is reiterated into
the subproof.
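For instance (our example, with a little laxness about the exact rule names), the theorem ∀q(q → q) of the extended theory of deduction is obtained from a categorical subproof that is general with respect to q:

    [subproof general with respect to q, no hypothesis]
        1   q            hypothesis
        2   q            1, Repetition
    3   (q → q)          1–2, →I
4   ∀q(q → q)            ∀I, from the general subproof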
In what follows we will consider this, rather than the more Gentzen-like
formulation of Jaśkowski’s original paper, as the Jaśkowski-Fitch rule: we
claim that various modified forms of the quantifier rules are simpler and more
perspicuous when we use it than when we use the Gentzen formulation.21
Having proved basic properties of the system with propositional quan-
tification, Jaśkowski, in the final section of his paper, introduces individual
variables to give a formulation of the calculus of functions (= First Order
Logic). He notes that adopting the rules without change would give “a sys-
tem. . . differing from those of Principia Mathematica and of Hilbert only
in” its different set of well-formed formulas. He then complains that in this
system it would be possible to prove
∀xF (x) → ¬∀x¬F (x),
with the meaning “If for every x, F (x), then for some x, F (x).” But this,
he says, fails in the empty domain (Jaśkowski: “the null field of individu-
als”): “under the supposition that no individual exists in the world, this
proposition is false.” And he thinks it better to have the existence of indi-
viduals settled by non-logical theories rather than written into the rules of
logic itself.22 And so he goes on to give modified rules for what is now called
Footnote 21.
Obiter dictum, an unusual feature of Fitch (1952) is that it treats modal logic before
quantifiers. Were modal logic a more important or interesting part of the logic curriculum,
this might have pedagogical value: students can get used to the idea of special subproofs
with restrictions on reiteration before they have to master the complexities of substitution
for variables.
Footnote 22.
It took a surprisingly long time for the logical community to come to terms with this
issue. As early as 1919, [Russell, 1919, p. 203] remarked that he had come to regard it as a
“defect in logical purity” that the axioms of Principia Mathematica allowed the proof that
at least one individual exists. Most logicians were willing to tolerate the defect: if one is in-
terested in the metamathematics of a formalized theory that requires a nonempty domain,
it is hardly a major worry if the existence of an object can be proven without appeal to the
non-logical axioms! By the early 1950s, however, several logicians turned their attention to
axiomatizing First Order Logic in a way that did not require nonemptiness of the domain.
The best-known effort in this direction is Quine (1954). Quine cites the earlier Church
(1951); Mostowski (1951); Hailperin (1953), but not Jaśkowski’s much earlier formulation,
though Mostowski (1951) does. (The philosophical community outside mathematical logic
was even slower. In 1953 the British philosophical journal Analysis proposed an essay
competition on the topic of whether the logical validity of ∀x(F (x) ∨ ¬F (x)) entails the
existence of at least one individual: the winning responses, Black (1953); Kapp (1953);
Cooper (1953), discuss the distinction between natural language and the formalism of
logic, but show no awareness that the problem can be avoided by making a minor change
in that formalism!)
inclusive First Order Logic: First Order Logic with validity defined as truth
in all domains, empty as well as non-empty.
The revised formulation of ∀I makes use of special subproofs. These sub-
proofs start with a declaration that a certain variable—in effect the Eigen-
variable of the subproof—is to be treated as if it were a constant in that
subproof: in particular, it may be substituted for the universally quantified
variable in an inference by ∀E. ∀I allows a universal quantification, ∀xF x,
to be written, not in, but after a subproof with Eigenvariable a containing
F (a) as a line. In Fitch’s terminology: the universal quantification, rather
than being a direct consequence of a formula, is a direct consequence of the
subproof. Given that formulas containing free variables are of use only as
part of the machinery of quantifier rules, we can simplify the statement of
the rules of the system a bit:

(i) The actual Introduction and Elimination rules for the Universal Quan-
tifier (and also for the Existential Quantifier if we want to include it
as a primitive of the system) are exactly as they are in ordinary, non-
inclusive, First Order Logic, but
(ii) Formulas in which variables occur free may only occur within ∀I (or
∃E) subproofs of which they are eigenvariables, either as lines of these
subproofs or as lines of further subproofs subordinate to them.

The derivation of the suspect ∀xF (x) → ∃xF (x):

    1   ∀xF x                      hypothesis
    2   F (a)                      ∀E
    3   ∃x(F (x))                  ∃I
4   ∀xF (x) → ∃xF (x)              1-3, →I

violates the restriction: the subproof 1-3 is an ordinary subproof, with no
eigenvariable, but contains a formula, 2, in which the variable a occurs free.
It is easy to see that the restricted system is sound for the inclusive interpre-
tation: the only way to get something out of an ∀I subproof is to make an
inference from it by the ∀I rule, and this will yield a formula beginning with
a (perhaps vacuous!) universal quantifier.23 But all universal quantifications

Footnote 23.
There is an annoying technical issue with vacuous quantification and inclusive logic.
If the well-formedness definition permits vacuous quantifiers—the simplest option—then
are trivially true in the empty domain.24 It is, of course, possible to formu-
late a corresponding restriction on Gentzen-style proofs: any path through
the tree leading up from the root to a hypothesis containing a free variable
must pass through a formula inferred by an ∀I (or ∃E) inference having
that variable as eigenvariable. But it seems to us that the formulation—due
essentially to Jaśkowski—in terms of subproofs is more perspicuous (and
perhaps, graphically, makes proofs easier to check).
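For illustration (a sketch of our own, in the notation just described, and using a simple Repetition step), here is a derivation that does respect the restriction; its conclusion is harmless for inclusive logic, being vacuously true in the empty domain:

1 a   ∀I subproof: declaration of eigenvariable a
2 F (a)   hypothesis
3 F (a)   2, Repetition
4 F (a) → F (a)   2-3, →I
5 ∀x(F (x) → F (x))   1-4, ∀I

Every formula in which a occurs free (lines 2-4) lies within the ∀I subproof whose eigenvariable is a, so the restriction is met.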
Note, however, that the restriction applies only to free variables: if the
system is used with a language containing individual constants, it once again
becomes possible to prove the existence of at least one individual, as con-
stants are assumed to denote objects in the domain over which the quantified
variables range. In the late 1950s and 1960s, several logicians25 developed
systems avoiding this defect in logical purity. Versions of First Order Logic
in which constants are allowed not to denote are called free logics (logics,
that is, that are free of existential presuppositions on their constants). As
we have seen, inclusive logic doesn’t have to be free, and free logics do not
always allow the empty domain, but the two modifications to standard First
Order logic seem to be in a similar spirit, and the most natural systems seem
to be those which are both inclusive and free: what are called universally
free logics. Here it seems simplest26 to enrich the language with an exis-
tence predicate: for any term, t, whether constant, variable27 , or a complex
term built up from these by using function symbols28 , E!(t) is interpreted

24 Systems of this sort are simple, and apparently natural, as witness the fact that they have been repeatedly re-invented by different authors: cf., e.g., Wu (1979).
25 A representative few sources—we make no claims of completeness for the list—would include Leonard (1957); Hintikka (1959); Leblanc and Hailperin (1959); Rescher (1959); Lambert (1963).
26 There are alternatives. In First Order Logic with Identity we could define E!(t) as ∃x(t = x).
27 Some authors have treated free variables and individual constants differently, but—at least if the free logic is to be incorporated as part of a modal logic with quantification over contingent existents—it seems better to give them a common treatment, as here.
28 Logicians concerned with metaphysical applications speak of an existence predicate. Those concerned with formalizing a theory of partial functions for use in computer science might prefer to speak of a definedness or even a convergence predicate. The logics can be the same.

as meaning that t denotes some object in the domain of quantification. All the quantifier rules now get minor amendments involving this. ∀E and ∃I
require an extra premiss, stating that the term replacing the quantified vari-
able denotes. ∀I subproofs now have a hypothesis, and ∃E subproofs a
second hypothesis, saying that the eigenvariable denotes. A system of this
sort (as part of a modal system) is presented in Hazen (1990). Our earlier
formulation of inclusive logic can be seen as an abbreviated special case of
this system: if all constants are assumed to denote, existence premises for
constants can be left tacit, and since all eigenvariable-possessing subproofs
have existential hypotheses for their eigenvariables, we can save lines by
leaving them tacit as well.
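To illustrate the amended rules (a toy example of our own, not taken from Hazen 1990), both quantifier steps now cite an existence premiss for the instantiated constant:

1 ∀xF (x)   premise
2 E!(c)   premise
3 F (c)   1, 2, ∀E
4 ∃xF (x)   3, 2, ∃I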
Mainstream mathematical logicians have tended not to feel the need for
free logic, preferring to consider formalizations of mathematical theories in
languages defined so that every term denotes an object in the structure de-
scribed. Still, there are applications, such as modal logic and the theory of
partially computable functions, in which the logical flexibility of free logic is
convenient. There are also contexts in which many-sorted logics are useful,
and the machinery of free logic can be used for them: use multiple “exis-
tence” predicates, each saying that a term denotes an object of one of the
sorts.29 And, though it is surely possible to design a Gentzen-style sys-
tem for universally free logic, it seems to us that the Jaśkowski-Fitch-style
described above is more perspicuous and more convenient in use.30

2.3. Sheffer Stroke Functions in Classical Logic


Sheffer (1913) is usually credited with “discovering” the truth-functional connective now popularly known as nand, and remarked in a footnote there that there was also a function we now call nor. Of course, both these functions had already made their appearances in the work of earlier logicians, but the name ‘Sheffer’ has stuck.
Price (1961) gives these three rules for nand, symbolized as | (their
names subscripted with ‘P’ to indicate Price):
|IP : From a subproof [α] · · · (β|β), infer (α|β)

29 Hailperin’s formulation of inclusive logic, in Hailperin (1953), similarly gains importance as a preliminary to Hailperin (1957).
30 Garson (2006) contains a number of natural deduction systems (and also tableaux methods) for both “normal” modal logics and also free logics, done in a student-oriented manner.

|EP : From the two formulas α and (β|α), infer (β|β)


||EP : From the two formulas α and ((β|β)|α), infer β
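As a quick two-valued sanity check (our own illustration, not part of Price’s presentation), the object-language conditionals corresponding to these three rules are classical tautologies, as the following minimal Python sketch verifies:

from itertools import product

def nand(a, b):
    return not (a and b)

def implies(a, b):
    return (not a) or b

# Object-language conditionals corresponding to Price's three rules:
#   |I_P  : (alpha -> (beta|beta)) -> (alpha|beta)
#   |E_P  : (alpha and (beta|alpha)) -> (beta|beta)
#   ||E_P : (alpha and ((beta|beta)|alpha)) -> beta
checks = {
    "|I_P": lambda a, b: implies(implies(a, nand(b, b)), nand(a, b)),
    "|E_P": lambda a, b: implies(a and nand(b, a), nand(b, b)),
    "||E_P": lambda a, b: implies(a and nand(nand(b, b), a), b),
}

for name, formula in checks.items():
    assert all(formula(a, b) for a, b in product([True, False], repeat=2))
    print(name, "corresponds to a two-valued tautology")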

Price shows that these three rules are independent of one another, and are
“complete” for the classical propositional logic. Note that the third rule
clearly violates the Int-Elim picture. This of course is to be expected, since
we are dealing with classical logic and any complete set of rules will some-
where involve a violation of this ideal. But a consequence of this is that
the goal of having all rules matched as Int-Elim rules cannot be met. The
Jaśkowski-Fitch method, however, has no such difficulties; this seems to be
yet another place where the formalism of Jaśkowski-Fitch is superior to that
of Gentzen.
In the classical propositional logic, where ↑ (nand) and ↓ (nor) were
introduced, either of these functions could be employed as a complete foun-
dation for the logic. But it is easy to see that there can be no pure Int-Elim
rules for either one of them: there will need to be some rule like Price’s ||EP
rule that does not merely eliminate the main occurrence of |. In turn, this
suggests that the picture offered by inferentialism falls short.
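The classical functional completeness of nand can itself be checked mechanically. The following minimal Python sketch (our own illustration) verifies the standard two-valued definitions of ¬, ∧, ∨ and → in terms of the single stroke:

from itertools import product

def nand(a, b):
    return not (a and b)

# Standard classical definitions in terms of nand alone:
#   not a    := a | a
#   a and b  := (a | b) | (a | b)
#   a or b   := (a | a) | (b | b)
#   a -> b   := a | (b | b)
definitions = {
    "not": (lambda a, b: nand(a, a), lambda a, b: not a),
    "and": (lambda a, b: nand(nand(a, b), nand(a, b)), lambda a, b: a and b),
    "or": (lambda a, b: nand(nand(a, a), nand(b, b)), lambda a, b: a or b),
    "->": (lambda a, b: nand(a, nand(b, b)), lambda a, b: (not a) or b),
}

for name, (via_nand, direct) in definitions.items():
    assert all(via_nand(a, b) == direct(a, b)
               for a, b in product([True, False], repeat=2))
    print(name, "is definable from nand")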
Here’s another set of rules in the same vein, again illustrating the issue
that using the usual types of natural deduction rules is going to involve us
in something that violates the inferentialist’s desired form of rules. We start
by defining explicit contradiction, e.c., as a three (or two) member set of
formulas containing, for some α and β, (i) α, (ii) β, and (iii) α|β.31 An e.c.
is derivable from given hypotheses iff all its members are. (We subscript
these rules with ‘1’):

|I1 : From a subproof with two hypotheses, α and β, that contains an e.c., infer (α|β) in the superordinate proof.32
|E1 : From the formula (α|β)|(α|β), both α and β can be inferred.33
||Transfer1 : From the two formulas α and (α|β), infer (β|β)34

As can be easily seen, both the |E1 and ||Transfer1 rules violate the form of
rule that inferentialism requires. Is it at all possible for there to be a set of

31 With, for any α, the two-member set {α, (α|α)} as a special case.
32 In the special case where α = β, this is just the standard Negation Introduction (Reductio) rule.
33 Given the definition of ∧, this is just Conjunction Elimination. For the special case where α = β, this is just Double Negation Elimination.
34 A logically equivalent (in the context of the other rules) version of ||Transfer1, but which may be easier to employ in proofs, would conclude (β|γ) from the same premises.

rules for | that are inferentialism-acceptable? Before answering this question, and before moving to the case of Sheffer strokes in intuitionist logic, we pause for a parenthetical piece of background.

3. Generalized Natural Deduction: Case Studies

3.1. Generalized Natural Deduction


Schroeder-Heister (1984a,b) gave a reformulation of natural deduction rules
where, besides allowing single formulas to be hypotheses of a subproof, state-
ments that some non-logical inference held, or was valid, were also allowed to
be hypotheses. This employs a structural generalization of ordinary natural
deduction: we can have subproofs in which, instead of a formula, a “rule”
is hypothesized. Frederic Fitch, in his 1966 paper on natural deduction rules
for obligation, introduces notational conventions about “columns” that have
much the same effect (see footnote 38 below). Schroeder-Heister calls this
method of representing natural deduction “generalized natural deduction”.
Notationally, the Jaśkowski-Fitch format lends itself to this generalization
more readily than Gentzen’s: Schroeder-Heister’s notation for generalized Gentzen-style nat-
ural deduction is very awkward, and it is hard to avoid the suspicion that
most of the time he worked with the analogous “generalized” Sequent Calcu-
lus (with “higher order” sequents, having sequents as well as formulas in the
antecedent), which he also defines. In contrast, the notation of Fitch (1966)
is easy to read and use in the actual construction of formal derivations.
A simple example of the idea might be the following intuitive argument.
Suppose we are given: β follows from α. We certainly seem justified in
concluding: from α or γ we can conclude either β or γ. Yet the standard
way of expressing this would be the proof

1 (α → β) premise
2 α∨γ premise
3 α hypothesis
4 (α → β) 1, Reiteration
5 β 4,3, → E
6 β∨γ 5, ∨I
7 γ hypothesis
8 β∨γ 7, ∨I
9 β∨γ 2, 3-6, 7-8 ∨E

But this proof supposes that (α → β) is the regimentation of the argument “β follows from α”—and we have been long taught that this is a bad identification! A better regimentation would allow that “β follows from α” is
tification! A better regimentation would allow that “β follows from α” is
an argument (a non-logical one, being assumed by the larger argument), so
that a more appropriate regimentation of the reasoning would be:

1 α (Hypothesized
2 β inference)
3 α∨γ premise
4 α hypothesis (for ∨E)
5 α 1-2, Reiteration of
6 β hypothesized inference
7 β 4, 5-6, “column elimination”
8 β∨γ 7, ∨I
9 γ hypothesis (for ∨E)
10 β∨γ 9, ∨I
11 β∨γ 3, 4-8, 9-10, ∨E
Here we see just what is in the informal presentation. We hypothesize
(lines 1 and 2) some non-logical inference; we correctly infer β ∨ γ from that
inference’s conclusion (lines 7-8); and so we have shown that β ∨ γ follows
from α ∨ γ, given the hypothesized inference.
We will give more details concerning this generalized natural deduction
in our remarks about the issue of definability of connectives in intuitionist
logic, followed by a discussion of the Sheffer stroke, first in classical logic and
then in intuitionist logic.

3.2. Humberstone’s Umlaut Function35


The possibility of characterizing a logical operator in terms of its Introduc-
tion and Elimination rules has made possible a precise formulation of an
interesting question. One of the properties of classical logic that elemen-
tary students are often told about is functional completeness: every possible
35 This section closely follows a discussion that appears in Pelletier and Hazen (2012).

truth-functional connective (of any arity) is explicitly definable in terms of
the standard ones. The question naturally arises of whether there is any
comparable result for intuitionistic logic. But this can’t be addressed until
we have some definite idea of what counts as a possible intuitionistic con-
nective. We now have a proposal: a possible intuitionistic connective is one
that can be added (conservatively) to a formulation of intuitionistic logic by
giving an introduction rule (and an appropriately matched, not too strong
and not too weak elimination rule) for it. Appealing to this concept of a
possible connective, Zucker and Tragesser (1978) prove a kind of functional
completeness theorem. They give a general format for stating introduction
rules, and show that any operator that can be added to intuitionistic logic
by a rule fitting this format can be defined in terms of the usual intuition-
istic operators. Unexpectedly, the converse seems not to hold: there are
operators, explicitly definable from standard intuitionistic ones, which do
not have natural deduction rules of the usual sort. For a simple example,
consider the connective v̈ defined in intuitionistic logic by the equivalence:36
(α v̈ β) =df ((α → β) → β).
(In classical logic, this equivalence is a well-known possible definition for
disjunction, but intuitionistically (α v̈ β) is much weaker than (α ∨ β).) The
introduction and elimination rules for the standard operators of intuition-
istic logic are pure, in the sense that no operator appears in the schematic
presentation of the rule other than the one the rules are for, and v̈ has no
pure introduction and elimination rules.37 (Obviously, a system in which ev-
ery connective has pure rules will have the separation property: in deriving
a conclusion from a set of premisses, no rule for a connective not actually
occurring in the premisses or conclusion need be used.) To get around this
problem, Schroeder-Heister (1984b) uses his generalized natural deduc-
tion: subproofs may have inferences instead of (or in addition to) formulas as
hypotheses.38 In this framework we can have rules of v̈I allowing the infer-

36 This connective was suggested to Allen Hazen by Lloyd Humberstone.
37 Trivially, it has impure rules: an introduction rule allowing (α v̈ β) to be inferred from its definiens and a converse elimination rule.
38 Fitch (1966) had proposed a similar generalization, but used it only for abbreviative purposes. He represents an inference from H1 , · · · , Hn to C by using a notation similar to that for a subproof, but with no intermediate steps between the hypotheses and the conclusion. (Rather than using logically loaded words like rule or inference, he calls such things simply columns.) These diagrams can occur in a proof in any way a formula can: they can be used as hypotheses of subproofs, they may be reiterated, etc. There are two rules for their manipulation: by Column Introduction, an abbreviated column without intermediate steps can be inferred from a real subproof with the same hypotheses and last line, and by Column Elimination the conclusion of a column may be inferred from the column together with its hypothesis or hypotheses. The reader is referred to Fitch’s paper for further discussion and examples.

ence of (α v̈ β) from a subproof in which β is derived on the hypothesis that α ⊢ β is valid, and v̈E allowing β to be inferred from (α v̈ β) and a subproof
in which β is derived on the hypothesis α. Schroeder-Heister proves that
any connective characterized by natural deduction rules of this generalized
sort is explicitly definable in terms of the standard intuitionistic connectives,
and that any connective so definable is characterized by generalized Intro-
duction and Elimination rules of a tightly constrained form, with at most
a controlled bit of impurity (see §3.4 below for details). Schroeder-Heister
(1984a) proves a similar result for intuitionistic logic with quantifiers.
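The contrast can be made concrete (the following is our own illustration, not part of Schroeder-Heister’s or Zucker and Tragesser’s arguments): two-valuedly, (α v̈ β) and (α ∨ β) coincide, but a three-point Kripke model, with a root r below two incomparable points a and b, separates them. A minimal Python sketch:

from itertools import product

# Kripke frame: a root r below two incomparable points a and b.
W = ["r", "a", "b"]
above = {"r": {"r", "a", "b"}, "a": {"a"}, "b": {"b"}}

def imp(X, Y):
    # intuitionistic implication on upward-closed sets of worlds
    return {w for w in W if (above[w] & X) <= Y}

A, B = {"a"}, {"b"}   # upward-closed valuations of alpha and beta

umlaut = imp(imp(A, B), B)   # (alpha -> beta) -> beta
vee = A | B                  # alpha or beta

print(sorted(umlaut))   # ['a', 'b', 'r']: holds everywhere, including the root
print(sorted(vee))      # ['a', 'b']: fails at the root

# Two-valuedly, by contrast, the two coincide:
cimp = lambda x, y: (not x) or y
assert all(cimp(cimp(x, y), y) == (x or y)
           for x, y in product([True, False], repeat=2))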

3.3. Generalized Natural Deduction and the Sheffer Stroke in Classical Logic
We return now to the topic of whether it is possible to give a set of rules
for | that are inferentialism-acceptable. Recalling that e.c. means “explicit
contradiction” as defined earlier (and importantly, will always contain at
least one formula with | as its main connective), we start by considering a
possible set of pure Int-Elim rules for an “ambiguous” (see below) | (we use
the subscript ‘2’ here).

|I2 : Given a subproof having either α or β as its only hypothesis and which contains an e.c., infer α|β.
|E2 : Given an e.c., infer any formula α

It is obvious that the system defined by these rules (and standard general
framework stuff about, e.g., reiteration into subproofs) is consistent, and
that they can be used to add a | connective conservatively to classical or
intuitionistic natural deduction systems: the rules allow normalization (in
the sense of Prawitz, 1965). It should be almost as immediately obvious that
they aren’t complete: the elimination rule demands auxiliary premisses, so
you can’t always apply it to derive things that would then allow you to re-
introduce the | by the introduction rule. Put another way: the rules are
valid both for |1 (defined by (α|1 β) =df ¬(α ∧ β)) and for |2 (defined by
(α|2 β) =df (¬α ∨ ¬β)), and these are not intuitionistically equivalent. As
a result, the system defined by these rules does not characterize a unique
connective, but rather an operator that is ambiguous between (at least) these
two readings. See, for discussion, (Humberstone, 2011, pp. 605–628).
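The two readings can be separated concretely (again our own illustration), using the same kind of three-point Kripke model as for v̈ above: at the root, ¬(α ∧ β) holds while (¬α ∨ ¬β) fails. A minimal Python sketch:

# Kripke frame: a root r below two incomparable points a and b.
W = ["r", "a", "b"]
above = {"r": {"r", "a", "b"}, "a": {"a"}, "b": {"b"}}

def neg(X):
    # intuitionistic negation: true at w iff no point above w is in X
    return {w for w in W if not (above[w] & X)}

A, B = {"a"}, {"b"}   # upward-closed valuations of alpha and beta

reading1 = neg(A & B)        # the |1 reading: not(alpha and beta)
reading2 = neg(A) | neg(B)   # the |2 reading: not-alpha or not-beta

print(sorted(reading1))   # ['a', 'b', 'r']: holds at the root
print(sorted(reading2))   # ['a', 'b']: fails at the root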


Still, the distinction between the two readings is collapsed in classical logic, so there might be a lingering hope that the rules might be, in some sense, sufficient in a classical context. They certainly don’t, all by them-
sense, sufficient in a classical context. They certainly don’t, all by them-
selves, give a complete classical logic of |. After all, they are sound for the
intuitionistic |1 connective, and there are classical principles concerning |
which do not hold for this intuitionistic connective. (For example: if θ is
derivable from α and also derivable from α|α, then θ may be asserted—
a stroke-analogue of an excluded middle rule.) So the hope, if there is one,
must be for using these |-rules in a system which is, by other means, forced
to be classical. Here’s an example: Suppose we add the |, governed by these
rules, to a natural deduction system for the classical logic of, say, ∧ and ¬.
We would then have a complete classical system for ∧, ¬, and |. (Proof: the
system is strong enough to prove the equivalence of (α|β) with ¬(α ∧ β).)
But this result—call it parasitic completeness—isn’t very exciting. It
would be nicer to have a result that didn’t depend on using other connectives.
Well, one way would be to add a third rule for |. There are various natural
deduction systems for | in the literature (two were described above in §2.3),
and they all have more than two rules. For instance, rules similar to these
two, plus another rule. But we think this isn’t the most satisfying way to go:
for one thing, it sacrifices one of the nice features of (Gentzen-style) natural
deduction, the pairing of Int-Elim rules.
Starting with a system that is sound on an intuitionistic interpretation,
there can be several different ways of strengthening it into a classical system,
adding new rules for any of a variety of operators.
Here are five such rules (convention: enclosing a formula in square brackets indicates that it is an assumption, and the dots following an assumption indicate a derivation leading from it to some formula; the rule then allows one to infer the formula given after ‘infer’):

(LEM): from the two subproofs [α] · · · β and [¬α] · · · β, infer β
(¬¬E): from ¬¬α, infer α
(Indr.Pr.): from a subproof [¬α] · · · β, ¬β (one in which both β and ¬β are derived), infer α
(Peirce): from a subproof [α → β] · · · α, infer α
(Contrapose): from a subproof [¬α] · · · ¬β, infer β → α

Gentzen himself employed the Law of the Excluded Middle, LEM, which is
the first of these five rules, although he also mentioned that double-
negation elimination (¬¬E) could also be used. The negation-eliminating
version of a reductio proof (Indr.Pr. above) is a popular addition to the pure Int-Elim
rules in many elementary logic textbooks. Peirce’s Law and the displayed
version of a contraposition law can also yield classical logic. Any of these
rules could easily be (and have been) added to either Gentzen’s or Fitch’s
formulations to describe classical logic.
Maybe one could argue that in some way a formulation that plays with
rules for ¬ is more fundamental than others, but it is hard to see how.
Anyway, in order to get separation, the rule that is added to | can’t in-
volve either → or ¬. Furthermore, in a sequent calculus, classicality can be
achieved without any change to the rules for any connective, by a structural
change: allowing multiple succedent formulas. So we would like to try to
find a way to get classical logic by a rule that doesn’t involve any particular
connective!
Here’s a possibility: For any formulas α, β, and γ: γ may be asserted if it
is both derivable from the formula α and also derivable, with no particular
formula as extra hypothesis, when we allow the (non-logical) inference of β from
α. Or, to put it into a more Jaśkowski-Fitch style of exposition:

Rule B2 39 : γ is a consequence of two subproofs, each containing γ as an item, one with α as hypothesis and the other with no single formula as hypothesis, but within which an additional, non-logical, rule of inferring β from α may be used.

This is a connective-free classicizing rule: added to an intuitionistic system it gives us classical logic. It is analogous to the move, in sequent calculus,
to sequents with multiple succedent formulas in classicizing without postu-
lating anything new about any particular connective. (Schroeder-Heister,
1984b showed that adding a new connective to intuitionistic logic by proper
generalized Int-Elim rules would always yield a definitional extension of intu-
itionistic logic. Rule B2 is a generalized rule which is neither an Introduction
nor an Elimination rule for any connective: we have obtained classical logic
by a non-Schroeder-Heisterian application of the Schroeder-Heister frame-
work.) To illustrate how the rule works, consider the following derivation of
α from ¬¬α, using Rule B2 but only intuitionistically acceptable rules for
the connectives:

39 ‘B’ for Bivalence, or for Boolean perhaps.

1 ¬¬α premise
2 α assume
3 α 2, Repetition
4 α (Hypothesized
5 ¬α inference)
6 α assume
7 α 4–5, Reiteration of
8 ¬α hypothesized inference
9 ¬α 6, 7–8, ⊢E
10 ¬α 6–9, ¬I
11 ¬¬α 1, Reiteration
12 α 10, 11, explosion
13 α 2–3, 4–12, B2
14 (¬¬α → α) 1–13, Conditional proof

A system having the rules |I2 , |E2 and B2 is sound and complete for the
classical logic of nand.40
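A quick semantic sanity check may also be reassuring (our own illustration, and no substitute for the proofs just mentioned): a natural two-valued counterpart of Rule B2, namely ((α → γ) ∧ ((α → β) → γ)) → γ, is a classical tautology, though not an intuitionistically valid one, which is just what one expects of a classicizing rule. In Python:

from itertools import product

imp = lambda x, y: (not x) or y

# A natural two-valued counterpart of Rule B2:
#   ((alpha -> gamma) and ((alpha -> beta) -> gamma)) -> gamma
b2 = lambda a, b, g: imp(imp(a, g) and imp(imp(a, b), g), g)

assert all(b2(a, b, g) for a, b, g in product([True, False], repeat=3))
print("the two-valued counterpart of B2 is a tautology")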
Of course, B2 is not the only way to introduce classical logic in this
manner. We could instead formulate a rule related to Peirce’s Law as follows:

Rule P2 : α is a consequence of a subproof containing α as an item and having no single formula as a hypothesis, but within which an additional, non-logical, rule of inferring β from α may be used.
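For comparison (a sketch of our own, in the same style as the derivation given for B2 above), Rule P2 delivers Peirce’s Law using only intuitionistically acceptable rules for →:

1 (α → β) → α premise
2 α (Hypothesized
3 β inference)
4 α hypothesis
5 α 2–3, Reiteration of
6 β hypothesized inference
7 β 4, 5–6, ⊢E
8 α → β 4–7, →I
9 (α → β) → α 1, Reiteration
10 α 9, 8, →E
11 α 2–10, P2
12 ((α → β) → α) → α 1–11, Conditional proof

Line 11 applies P2 to the subproof 2–10, whose only hypothesis is the hypothesized inference of β from α.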

3.4. Generalized Natural Deduction and Sheffer Strokes in Intuitionistic Logic
Our success in giving pure rules for v̈ might lead us to conjecture that gen-
eralized natural deduction can provide pure introduction and elimination

40 For simplicity of exposition, we forego the proofs. Soundness should be obvious. The completeness proof uses a Henkin construction and involves some subtleties—note that multiple nested applications of B2 allow us to encode truth tables in the derivation.

rules for all (intuitionistic) connectives. This, alas, would be a mistake. (Schroeder-Heister, 1984b instead considers connectives introduced in or-
der, with the conventional {→, ∧, ∨, ⊥} at the head of the list, each con-
nective being characterized by Int-Elim rules in whose schemata only earlier
connectives may appear: he proves that these are precisely the connectives
definable from the conventional ones.) If a connective is definable using only
→, as v̈ is, pure rules are easy to provide: the introduction rules will require
subproofs like those for →I, with hypothesized inferences when the “impli-
cation” to be introduced has another “implication” as antecedent, and the
elimination rules will be similar to modus ponens, but with columns instead
of minor premises if the connective is defined by a conditional with a condi-
tional antecedent. Allowing ∧ as well to occur in the definiens presents no
essential difficulty: for a connective defined by a conjunction, the introduc-
tion rule will require whatever would be required by each conjunct, and the
elimination rule will permit any inference that would be permitted by either
conjunct.
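As a simple illustration of this recipe (our own example), consider α ↔ β, definable as ((α → β) ∧ (β → α)). Neither conjunct has a conditional antecedent, so no hypothesized inferences are needed, and the pure rules come out as follows:

↔I: From two subproofs, one with hypothesis α containing β as a line and one with hypothesis β containing α as a line, infer (α ↔ β).
↔E: From (α ↔ β) and α, infer β; and from (α ↔ β) and β, infer α.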
Both the difficulties encountered in going further, and a further gener-
alization of generalized natural deduction, can be illustrated by using the
“Sheffer-connective” given in Došen (1985) as an example. (In popular us-
age a Sheffer Stroke for a logic is any function that will generate all the
truth functions for the logic. But to also include logics that are not directly
amenable to this “truth conditional” characterization, a different account
can be given by means of what Hendry and Massey (1969) call an “indigenous Sheffer function”. If a function f can define some other set of connectives in a given logical system, and in turn they can define f, in whichever way is appropriate for the logical system in question, then f is said to be an indigenous Sheffer function for the logic defined by the initial set of functions.)
Došen (1985) shows that
F(α, β, γ) =df ((α ∨ β) ↔ (γ ↔ ¬β))41
is an indigenous Sheffer-connective for intuitionistic propositional logic de-
fined by {∨, ∧, →, ¬}. Since a biconditional is equivalent to a conjunction of
conditionals, a suitable introduction rule will allow F(α, β, γ) to be inferred
from a collection of subproofs sufficient to establish the implications from

41 We prefer to use this and similar functions, even though they employ the defined
symbol ↔, because (as reported in Došen, 1985) Kuznetsov (1965) has shown that there
are no indigenous Sheffer functions that have less than five occurrences of variables when
they employ only the symbols {∨, ∧, →, ¬} in their definitions. Using ↔ we need employ
only four occurrences.

left to right and from right to left. Consider first the left to right part of
this. A conditional with a disjunction as antecedent is equivalent to a con-
junction of two conditionals, and the right-hand side is itself equivalent to a
conjunction of conditionals, so, in order to establish
((α ∨ β) → (γ ↔ ¬β))
it would suffice to have four subproofs, which (if we were allowed to use the
negation operator!) could have the forms

Preliminary I-1*: A subproof with α and γ as hypotheses containing ¬β as a line,
Preliminary I-1: A subproof with α and ¬β as hypotheses containing
γ as a line,
Preliminary I-2: A subproof with β and γ as hypotheses containing
¬β as a line.
Preliminary I-2*: A subproof with β and ¬β as hypotheses containing
γ as a line

Two of these, however, can be omitted. A contradiction implies anything, so I-2* would be trivial, and in effect I-2 subsumes I-1*: an I-2 subproof
in effect derives absurdity from β and γ, and I-1* does the same thing with
an extra hypothesis, α.
So, first problem: how does one formulate these without using the ¬
symbol? A solution is possible because the meaning of ¬ can be specified
logically: ¬φ means that φ implies any proposition whatsoever! Where, as
in our Preliminary I-1, a negation is being used as a premise, this means
it can be replaced by a collection of inferences in which whatever we need is
inferred from the negated formula. So we can have

FI-1: A subproof containing γ as a line, and with, as hypotheses, the formula α and some number of inferences of other formulas from β.42

But what sort of ¬-free subproofs, in the other direction, would be equiv-
alent to a subproof in which a negation is derived? In an infinitary logic one
might have a rule requiring an infinite subproof in which every formula what-
soever is derived from the one we want to negate, but we want a rule that can
be used in a real formal system! Here a further generalization of generalized

42 In Fitch’s (1966) terminology: hypothesized “columns,” each having β as a hypothesis
and some other formula as a line.

natural deduction is needed. Recall that, in a system with propositional quantifiers, ¬φ can be defined as ∀p(φ → p). So, let us allow free proposi-
tional variables in our language, and allow subproofs (of the sort that would
be used in the ∀I rule for propositional quantification) general with respect
to a propositional variable: a subproof, that is, into which neither the given
propositional variable nor any formula containing it as a subformula may be
reiterated. Then we may have:
FI-2: A subproof, general with respect to some propositional vari-
able p not occurring in β or γ, with β and γ as hypotheses and p as
a line.43

(Our final FI rule will be: F(α, β, γ) may be inferred from three subproofs,
of the forms FI-1 and FI-2 described here and an FI-3 establishing the right
to left implication. FI-3 is described below, after we discuss the FE rules.)
The elimination rule for a connective defined as a biconditional will have
multiple forms, corresponding to modus ponens for the left to right and the
right to left conditionals. Let us, again, start by considering the left to right
forms. Since a conditional with a disjunctive antecedent is equivalent to a
conjunction of conditionals, we distinguish “subforms” for the two disjuncts
of (α ∨ β). So the first two forms of FE can be taken as
FE-1: from F(α, β, γ), α and a subproof, general with respect to a
propositional variable p not occurring in any of α, β or γ, in which p
is derived from the hypothesis β, to infer γ, and
FE-2: from F(α, β, γ), β and γ to infer any formula whatever.

Right to left, F(α, β, γ) tells us that if γ is equivalent to ¬β we may infer α ∨ β. Avoiding the symbol ∨ in a form of the elimination rule is fairly easy:
we simply take the equivalence of γ and ¬β to license any inference we could
make by disjunction elimination if we had the premise α ∨ β. So our third
form of FE is
FE-3: any formula, φ, may be inferred from F(α, β, γ) together with
four subproofs:
— one, general with respect to a variable p, in which p is derived
from β and γ,

43 One of us recalls seeing a suggestion by Fitch of an alternative introduction rule for
negation: ¬α is a direct consequence of a subproof, general with respect to p, in which the
propositional variable p (not occurring in α) is derived from the hypothesis α. We have
been unable to locate it in his publications.

— one in which γ is derived from some number of hypothesized “columns”, in each of which some formula is inferred from β,
— one in which φ is derived from the hypothesis α, and
— one in which φ is derived from the hypothesis β.

Returning now to the final introduction rule for F: in the subproof needed for the right to left case of the FI rule, it is not a matter of using α∨β, as in FE-3, but of establishing it. Here again we need44 our generalization of
generalized natural deduction. (Prawitz, 1965, p. 67) notes that disjunction
can be defined, in intuitionistic logic with propositional quantification, as
(α ∨ β) =df ∀p((α → p) → ((β → p) → p))45
Making use of this idea, we can specify the third subproof needed for FI:

FI-3: Subproof, general with respect to p, in which a propositional variable p, not occurring in α, β or γ, is derived from
— the hypothesized inference (“column”) from α to p,
— the hypothesized inference from β to p,
— a hypothesized inference of γ from some number of hypothesized
inferences of other formulas from β,
— some number of hypothesized inferences of other formulas from β
and γ.

The system with just the F-connective, governed by these FI and FE rules, is sound and complete for intuitionistic logic. It has the Gentzen-
Prawitz normalizability property: if F(α, β, γ) is derivable by the FI rule,
then anything inferable from it by FE is derivable without making the
detour through the “maximum” formula. The rules specify the meaning of
the connective uniquely: if we have two connectives, F1 and F2 , governed by
“copies” of the same pair of rules, F2 (α, β, γ) is derivable from F1 (α, β, γ).46
(Hint: the three forms of the elimination rule correspond roughly to the three
subproofs needed for the introduction rule, though both F1 E-2 and F1 E-1
are used in constructing the F2 I-1 subproof.)

44 (Schroeder-Heister, 1984b, p. 1296) notes that no set of connective-free rules of ordinary generalized natural deduction can replace the ∨.
45 In the context of classical rather than intuitionistic logic, Russell (1906) states this as an equivalence (though not adopting it as a definition), as his Proposition 7.5.
46 For discussion of the philosophical significance of these properties, see Belnap (1962).

Although some readers will like our “informal” presentation of the rules
for F, certainly other readers would prefer to see a more “formal” presenta-
tion of them. Such readers are directed to Schroeder-Heister (2014), where
a formal version is given in his Section 5.
We note in passing that ⊥ is something of an anomaly: it has an elimi-
nation rule (anything whatever can be inferred from ⊥), but no introduction
rule. We can’t think of any real use for it, but those who love symmetry
can use this further generalization to give one: ⊥ may be inferred from a
categorical subproof, general with respect to a propositional variable p, in
which p occurs as a line.

4. Conclusion

Mathematically, Gentzen’s natural deduction and Jaśkowski’s “suppositional” system are essentially the same thing, and proof theorists find Gentzen’s
presentation more elegant. But, we hope to have convinced you, the Jaś-
kowski-Fitch version has certain advantages, and probably helped some later
logicians discover their modifications and extensions of the natural deduction
method.

Acknowledgements. We gratefully acknowledge the assistance of Andrzej Indrzejczak in carefully proofreading this paper as well as making sev-
eral important suggestions. We also give thanks to Peter Schroeder-Heister
for his comments on our paper. We direct the reader’s attention to his paper
(Schroeder-Heister, 2014) in this special issue for further related aspects of
this topic.

References
[1] Belnap, N. (1962) Tonk, plonk and plink, Analysis 22:130–134.
[2] Bergmann, M., J. Moor, and J. Nelson (2008), The Logic Book, Fifth Edition,
Random House, New York.
[3] Black, M. (1953) Does the logical truth that ∃x(F x ∨ ¬F x) entail that at least one
individual exists?, Analysis 14:1–2.
[4] Bonevac, D. (1987) Deduction, Mayfield Press, Mountain View, CA.
[5] Brandom, R. (1994) Making it Explicit, Harvard University Press, Cambridge, MA.
[6] Brandom, R. (2000) Articulating Reasons, Harvard UP, Cambridge, MA.
[7] Cellucci, C. (1995) On Quine’s approach to natural deduction, in P. Leonardi,
and M. Santambrogio (eds.), On Quine: New Essays, Cambridge UP, Cambridge,
pp. 314–335.

[8] Church, A. (1951) A formulation of the logic of sense and denotation, in P. Henle (ed.), Structure, Method and Meaning: Essays in Honor of H.M. Sheffer, Liberal Arts Press, NY.
[9] Church, A. (1956) Introduction to Mathematical Logic, Princeton UP, Princeton.
[10] Cooper, N. (1953) Does the logical truth that ∃x(F x ∨ ¬F x) entail that at least one
individual exists?, Analysis 14:3–5.
[11] Corcoran, J., and G. Weaver (1969) Logical consequence in modal logic: Natural
deduction in S5, Notre Dame Journal of Formal Logic 10:370–384.
[12] Curry, H. (1963) Foundations of Mathematical Logic, McGraw-Hill, New York.
[13] Došen, K. (1985) An intuitionistic Sheffer function, Notre Dame Journal of Formal
Logic 26:479–482.
[14] Dummett, M. (1978) The philosophical basis of intuitionistic logic, in Truth and
Other Enigmas, Duckworth, London, pp. 215–247.
[15] Dummett, M. (1993) Language and truth, in The Seas of Language, Clarendon,
Oxford, pp. 117–165.
[16] Dunn, J. M. (1993) Star and perp, Philosophical Perspectives: Language and Logic 7:331–358.
[17] Fitch, F. (1952) Symbolic Logic: An Introduction, Ronald Press, NY.
[18] Fitch, F. (1966) Natural deduction rules for obligation, American Philosophical
Quarterly 3:27–28.
[19] Fitting, M. (1983) Proof Methods for Modal and Intuitionistic Logics, Reidel, Dor-
drecht.
[20] Garson, J. (2006) Modal Logic for Philosophers, Cambridge Univ. Press, Cambridge.
[21] Gentzen, G. (1934) Untersuchungen über das logische Schließen, I and II, Math-
ematische Zeitschrift 39:176–210, 405–431. English translation “Investigations into
Logical Deduction”, published in American Philosophical Quarterly 1:288–306, 1964,
and 2:204–218, 1965. Reprinted in M. E. Szabo (ed.) (1969) The Collected Papers
of Gerhard Gentzen, North-Holland, Amsterdam, pp. 68–131. Page references to the
APQ version.
[22] Gentzen, G. (1936) Die Widerspruchsfreiheit der reinen Zahlentheorie, Mathematis-
che Annalen 112:493–565. English translation “The Consistency of Elementary Num-
ber Theory” published in M. E. Szabo (ed.) (1969) The Collected Papers of Gerhard
Gentzen, North-Holland, Amsterdam, pp. 132–213.
[23] Hailperin, T. (1953) Quantification theory and empty individual domains, Journal
of Symbolic Logic 18:197–200.
[24] Hailperin, T. (1957) A theory of restricted quantification, I and II, Journal of
Symbolic Logic 22:19–35 and 113–129. Correction in Journal of Symbolic Logic 25:54–
56, (1960).
[25] Harman, G. (1973) Thought, Princeton UP, Princeton.
[26] Hazen, A. P. (1990) Actuality and quantification, Notre Dame Journal of Formal
Logic 41:498–508.
[27] Hazen, A. P. (1992) Subminimal negation, Tech. rep., University of Melbourne.
University of Melbourne Philosophy Department Preprint 1/92.
[28] Hazen, A. P. (1995) Is even minimal negation constructive?, Analysis 55:105–107.
[29] Hazen, A. P. (1999) Logic and analyticity, European Review of Philosophy 4:79–110.

Special issue on “The Nature of Logic”, A. Varzi (ed.). This special issue is sometimes
characterized as a separate book under that title and editor.
[30] Hendry, H., and G. Massey (1969) On the concepts of Sheffer functions, in K. Lam-
bert (ed.), The Logical Way of Doing Things, Yale UP, New Haven, CT, pp. 279–293.
[31] Herbrand, J. (1928) Sur la théorie de la démonstration, Comptes rendus hebdo-
madaires des séances de l’Académie des Sciences (Paris) 186:1274–1276.
[32] Herbrand, J. (1930) Recherches sur la théorie de la démonstration, Ph.D. thesis,
University of Paris. Reprinted in W. Goldfarb (ed. & trans.) (1971) Logical Writings,
D. Reidel, Dordrecht.
[33] Hintikka, J. (1959) Existential presuppositions and existential commitments, Jour-
nal of Philosophy 56:125–137.
[34] Hughes, G., and M. Cresswell (1968) An Introduction to Modal Logic, Methuen,
London.
[35] Humberstone, L. (2011) The Connectives, MIT Press, Cambridge, MA.
[36] Indrzejczak, A. (2010) Natural Deduction, Hybrid Systems and Modal Logics,
Springer, Berlin.
[37] Jaśkowski, S. (1929) Teoria dedukcji oparta na regulach zalożeniowych (Theory of
deduction based on suppositional rules), in Ksiȩga pamia̧tkowa pierwszego polskiego
zjazdu matematycznego (Proceedings of the First Polish Mathematical Congress),
1927, Polish Mathematical Society, Kraków, p. 36.
[38] Jaśkowski, S. (1934) On the rules of suppositions in formal logic, Studia Logica 1:5–
32. Reprinted in S. McCall (1967) Polish Logic 1920–1939 Oxford UP, pp. 232–258.
[39] Johansson, I. (1936) The minimal calculus, a reduced intuitionistic formalism, Compositio Mathematica 4:119–136. Original title Der Minimalkalkül, ein reduzierter intuitionistischer Formalismus.
[40] Kapp, A. (1953) Does the logical truth that ∃x(F x ∨ ¬F x) entail that at least one
individual exists?, Analysis 14:2–3.
[41] Kleene, S. (1967) Elementary Logic, Wiley, NY.
[42] Kolmogorov, A. (1925) On the principle of the excluded middle, in J. van Heijenoort
(ed.), From Frege to Gödel: A Sourcebook in Mathematical Logic, 1879–1931, Harvard
UP, Cambridge, MA, pp. 414–437. Originally published as “O principe tertium non
datur”, Matematiceškij Sbornik 32:646–667.
[43] Kuznetsov, A. (1965) Analogi ‘shtrikha sheffera’ v konstruktivnoı̆ logike, Doklady
Akademii Nauk SSSR 160:274–277.
[44] Lambert, K. (1963) Existential import revisited, Notre Dame Journal of Formal
Logic 4:288–292.
[45] Leblanc, H. and T. Hailperin (1959) Nondesignating singular terms, Philosophical
Review 68:129–136.
[46] Lemmon, E. J. (1965) Beginning Logic, Nelson, London.
[47] Leonard, H. S. (1957) The logic of existence, Philosophical Studies 7:49–64.
[48] Lewis, D. (1973) Counterfactuals, Blackwell, Oxford.
[49] Mates, B. (1965) Elementary Logic, Oxford UP, NY.
[50] Mostowski, A. (1951) On the rules of proof in the pure functional calculus of the
first order, Journal of Symbolic Logic 16:107–111.

[51] Pelletier, F. J. (1999) A brief history of natural deduction, History and Philosophy
of Logic 20:1–31.
[52] Pelletier, F. J. (2000) A history of natural deduction and elementary logic text-
books, in J. Woods, and B. Brown (eds.), Logical Consequence: Rival Approaches,
Vol. 1, Hermes Science Pubs., Oxford, pp. 105–138.
[53] Pelletier, F. J., and A. P. Hazen (2012) A brief history of natural deduction, in
D. Gabbay, F. J. Pelletier, and J. Woods (eds.), Handbook of the History of Logic;
Vol. 11: A History of Logic’s Central Concepts, Elsevier, Amsterdam, pp. 341–414.
[54] Peregrin, J. (2008) What is the logic of inference?, Studia Logica 88:263–294.
[55] Prawitz, D. (1965) Natural Deduction: A Proof-theoretical Study, Almqvist & Wick-
sell, Stockholm.
[56] Prawitz, D. (1979) Proofs and the meaning and completeness of the logical con-
stants, in J. Hintikka, I. Niiniluoto, and E. Saarinen (eds.), Essays on Mathematical
and Philosophical Logic, Reidel, Dordrecht, pp. 25–40.
[57] Price, R. (1961) The stroke function and natural deduction, Zeitschrift für mathe-
matische Logik und Grundlagen der Mathematik 7:117–123.
[58] Quine, W. V. (1950) Methods of Logic, Henry Holt & Co., New York.
[59] Quine, W. V. (1954) Quantification and the empty domain, Journal of Symbolic
Logic 19:177–179. Reprinted, with correction, in Quine (1995).
[60] Quine, W. V. (1995) Selected Logic Papers: Enlarged Edition, Harvard UP, Cam-
bridge, MA.
[61] Read, S. (2010) General-elimination harmony and the meaning of the logical con-
stants, Journal of Philosophical Logic 39:557–576.
[62] Rescher, N. (1959) On the logic of existence and denotation, Philosophical Review
69:157–180.
[63] Routley, R. (1975) Review of Ohnishi & Matsumoto, Journal of Symbolic Logic
40:466–467.
[64] Russell, B. (1906) The theory of implication, American Journal of Mathematics
28:159–202.
[65] Russell, B. (1919) Introduction to Mathematical Philosophy, Allen and Unwin, Lon-
don.
[66] Schroeder-Heister, P. (1984a) Generalized rules for quantifiers and the complete-
ness of the intuitionistic operators ∧, ∨, ⊃, ⊥, ∀, ∃, in M. Richter, E. Börger, W. Ober-
schelp, B. Schinzel, and W. Thomas (eds.), Computation and Proof Theory: Proceed-
ings of the Logic Colloquium Held in Aachen, July 18–23, 1983, Part II, Springer-
Verlag, Berlin, pp. 399–426. Volume 1104 of Lecture Notes in Mathematics.
[67] Schroeder-Heister, P. (1984b) A natural extension of natural deduction, Journal
of Symbolic Logic 49:1284–1300.
[68] Schroeder-Heister, P. (2014) The calculus of higher-level rules and the founda-
tional approach to proof-theoretic harmony, Studia Logica.
[69] Sheffer, H. (1913) A set of five independent postulates for Boolean algebras, with
application to logical constants, Trans. of the American Mathematical Society 14:481–
488.
[70] Suppes, P. (1957) Introduction to Logic, Van Nostrand/Reinhold Press, Princeton.

[71] Tarski, A. (1930) Über einige fundamentalen Begriffe der Metamathematik, Comptes
rendus des séances de la Société des Sciences et Lettres de Varsovie (Classe III) 23:22–
29. English translation “On Some Fundamental Concepts of Metamathematics” in
Tarski, 1956, pp. 30–37.
[72] Tarski, A. (1956) Logic, Semantics, Metamathematics, Clarendon, Oxford.
[73] Thomason, R. (1970) A Fitch-style formulation of conditional logic, Logique et Anal-
yse 52:397–412.
[74] Wu, K. Johnson (1979) Natural deduction for free logic, Logique et Analyse 88:435–
445.
[75] Zucker, J., and R. Tragesser (1978) The adequacy problem for inferential logic,
Journal of Philosophical Logic 7:501–516.

Allen P. Hazen
Department of Philosophy
University of Alberta
Edmonton, Canada
[email protected]

Francis Jeffry Pelletier
Department of Philosophy
University of Alberta
Edmonton, Canada
[email protected]
