
A Basic Formal Equational Predicate Logic

George Tourlakis

Technical Report CS-1998-09

November 25, 1998

Department of Computer Science


4700 Keele Street, North York, Ontario M3J 1P3, Canada
A Basic Formal Equational Predicate Logic
George Tourlakis, York University
[email protected]
Abstract
We present the details of a formalization of Equational Predicate Logic based on a propo-
sitional version of the Leibniz rule (PSL—Propositional Strong Leibniz), EQN (equanimity)
and TR (transitivity).
All the above rules are “strong”, that is, they are applicable to arbitrary premises (not
just to absolute theorems).
It is shown that a strong no-capture Leibniz (SLCS—Strong Leibniz with “Contextual
Substitution”), and a weak full-capture version (Weak-Leibniz with Uniform Substitution,
or WLUS) are derived rules. “Weak” means that the rule is only applicable on absolutely
deducible premises.
We also derive general rules MON (monotonicity) and AMON (antimonotonicity), which
allow us to “calculate” appropriate conclusions ` C[p \ A] ⇒ C[p \ B] or ` C[p \ A] ⇐ C[p \ B]
from the assumption ` A ⇒ B.

Introduction.
This note builds further on [To], where the logical “calculus” of Equational
(Predicate) Logic outlined in [GS1] was formalized and shown to be sound and
complete.
We propose here a simpler formalization than the one in [To], basing the
proof-apparatus solely on propositional rules of inference—one of which, of
course, is a version of “Leibniz”. This entails an unconstrained Deduction The-
orem (contrast with [To]), which in turn further simplifies the steps of our
reasoning.
While our “foundations” include “just” a propositional version of Leibniz,
we show that there are derived rules valid in the logic, which allow the use of
Leibniz-style substitution within the scope of a quantifier.
We also address one “weakness”—to which David Gries has already called
attention in [Gr]—of the current literature ([DSc, GS1]) on equational or cal-
culational reasoning. That is, while it is customary to mix =-steps (that is,
an application of a conjunctional ≡) and ⇒-steps (that is, an application of a
conjunctional ⇒) in a calculational proof, and while we have well documented
rules to handle the former, yet the latter type of step normally seems to rely on
a compendium of ad hoc rules. We hope to have contributed towards remedy-
ing this state of affairs, as we present a unifying, yet simple and rigorous way
to understand, ascertain validity, and therefore annotate and utilize ⇒-steps,
using the rules monotonicity and antimonotonicity.
We conclude with a section on soundness and completeness of the proposed
Logic.
The term “basic” in the title is meant to convey that we include no more
than what is necessary to lay the foundations. In particular, examples that
illustrate the power of calculational reasoning were left out.

The layout of the paper is as follows: Section 1 introduces the formal lan-
guage. Section 2 the axioms, the rules of inference, and sets the rules of the game
(definitions of the two main types of substitution,† of theorem and of proof).
Section 3 introduces a few metatheorems, including the Deduction Theorem.
The “main lemma” in section 6 is lemma 6.15 that shows the eliminability of
propositional variables. An Appendix argues that all the axioms in [GS1] are
here theorems, but the exposition in the Appendix is no more than a “link” to
[To].

1. Syntax
Equational (first order) logic, like all (first order) logic, is “spoken” in a first
order language, L. L is a triple (V, Term, Wff), where V is the alphabet, i.e.,
the set of basic syntactic objects (symbols) that we use to built “terms” and
“formulas”. We start with a description of V , and then we describe the set of
terms (Term) and the set of formulas (Wff).

Alphabet

1. Object variables. An enumerable set x1 , x2 , . . . . We normally use the metasymbols x, y, z, u, v, w with or without primes to stand for (object) variables.

2. Boolean (or Sentential, or Propositional) variables. An enumerable set v1 , v2 , . . . . We normally use the metasymbols p, q, r with or without primes to stand for Boolean variables.

 These variables will only be used to give a user-friendly notation for the
various versions of the rule Leibniz. 
3. Equality (between “objects”—see 1.3 below for its syntactic role), “≈”.‡

4. Brackets, ( and ).

5. The Boolean or propositional connectives, ≡, ⇒, ∨, ∧, ¬.

6. The quantifier, ∀ (the existential quantifier, ∃, will be taken to be a metasymbol, introduced definitionally).
† There is a significant departure from [To] in the notation for substitution. In [To] there

was just one notation—using the symbol [∗ := ∗∗], where ∗ is a variable and ∗∗ a formula or a
term. This notation would be annotated by surrounding text, which would indicate if capture
of free variables was, or was not, allowed. Here, instead, we ask the notation to fend for its
meaning, using different notations for the capture and no capture versions.
‡ Following the practice in Enderton [En], we use ≈ for formal equality and = for informal

(metamathematical) equality. This will enable us to write in the metatheory, for example,
A = B where A and B are formulas, meaning that the “strings” A and B are identical. A
conflict will arise though, since we will be also using = quasi-formally (in equational style
proofs) as a conjunctional alias of ≡.

7. The special Boolean (propositional) constants true and false.

8. A set of symbols (possibly empty) for constants. We normally use the meta-
symbols a, b, c, d, e, with or without subscripts, to stand for constants unless
we have in mind some alternative “standard” notation in selected areas of
application of the 1st order logic (e.g., ∅, 0, ω, etc.).

9. A set of symbols for predicates or relations (possibly empty) for each possible
“arity” n > 0. We normally use P, Q, R with or without primes to stand for
predicate symbols.

10. Finally, a set of symbols for functions (possibly empty) for each possible
“arity” n > 0. We normally use f, g, h with or without primes to stand for
function symbols.

 1.1 Remark. Any two symbols mentioned in items 1–10 are distinct. More-
over (if they are built from simpler “sub-symbols”, e.g., x1 , x2 , x3 , . . . might
really be x|x, x||x, x|||x, . . . ), none is a substring (or subexpression) of any other. 
1.2 Definition. (Terms) The set of terms, Term, is the ⊆-smallest set of
strings or “expressions” over the alphabet 1–10 with the following two proper-
ties:
Any of the items in 1 or 8 (a, b, c, x, y, z, etc.) are in Term.
If f is a function† of arity n and t1 , t2 , . . . , tn are in Term, then so is the
string f t1 t2 . . . tn . 
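Because Remark 1.1 guarantees that no symbol is a substring of another, terms in this prefix ("Polish") notation are uniquely readable, so membership in Term can be decided by a one-pass recursive scan. A minimal sketch in Python follows; the particular symbols, arities and function names are illustrative assumptions of ours, not part of the paper's language:

```python
# Illustrative signature: a hypothetical arity table and variable/constant sets.
ARITY = {"f": 2, "g": 1}          # hypothetical function symbols
VARS = {"x", "y", "z"}            # hypothetical object variables
CONSTS = {"a", "b"}               # hypothetical constants

def parse_term(tokens, i=0):
    """Parse one term starting at tokens[i]; return (tree, next_index).
    Raises ValueError if no term starts there (Definition 1.2)."""
    t = tokens[i]
    if t in VARS or t in CONSTS:
        return t, i + 1
    if t in ARITY:                 # f t1 ... tn in Polish (prefix) notation
        args, j = [], i + 1
        for _ in range(ARITY[t]):
            arg, j = parse_term(tokens, j)
            args.append(arg)
        return (t, *args), j
    raise ValueError(f"no term starts at token {t!r}")

def is_term(tokens):
    """A token string is a term iff it parses with nothing left over."""
    try:
        _, j = parse_term(tokens)
        return j == len(tokens)
    except (ValueError, IndexError):   # IndexError: input ran out mid-term
        return False
```

For example, `is_term(["f", "x", "g", "a"])` accepts the term f x (g a), while `is_term(["f", "x"])` rejects a truncated input. Unique readability is exactly what makes `parse_term` deterministic: at each position only one production can apply.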

1.3 Definition. (Atomic Formulas) The set of atomic formulas, Af, contains
precisely:

1) The symbols true, false, and every Boolean variable (that is, p, q, . . . ).

2) The strings t ≈ s for every possible choice of terms t, s.

3) The strings P t1 t2 . . . tn for every possible choice of n-ary predicate P (for all choices of n > 0) and all possible choices of terms t1 , t2 , . . . , tn .

† We will omit the qualification “symbol” from terminology such as “function symbol”,

“constant symbol”, “predicate symbol”.



1.4 Definition. (Well-Formed Formulas) The set of well-formed formulas, Wff, is the ⊆-smallest set of strings or "expressions" over the alphabet 1–10 with the following properties:

a) Af ⊆ Wff.

b) If A, B are in Wff, then so are (A ≡ B), (A ⇒ B), (A ∧ B), (A ∨ B).

c) If A is in Wff, then so is (¬A).

d) If A is in Wff and x is any object variable (which may or may not occur (as
a substring) in the formula A), then the string ((∀x)A) is also in Wff.
We say that A is the scope of (∀x).

 1.5 Remark. (1) A, B in the definition are so-called syntactic or meta-variables, used as names for (arbitrary) formulas. In general, we will let the letters
A, B, C, D, E (with or without primes) be names for well-formed formulas, or
just formulas as we often say. The definition of Wff given above is standard. In
particular, it allows formulas such as ((∀x)((∀x)x ≈ 0)) in the interest of making
the formation rules “context free” (in some presentations, formation rule 1.4(d)
requires that x be not already quantified in A).
(2) We introduce a meta-symbol (∃) solely in the metalanguage via the def-
inition “((∃x)A) stands for, or abbreviates, (¬((∀x)(¬A))).”
(3) We often write more explicitly, ((∀x)A[x]) and ((∃x)A[x]) for ((∀x)A)
and ((∃x)A). This is intended to draw attention to the variable x of A, which
has now become “bound”. Of course, notwithstanding the notation A[x] (which
only says that x may occur in A), x might actually not be a substring of A.
In that case, intuitively, ((∀x)A), ((∃x)A) and A “mean” the same thing. This
intuition is actually captured by 3.11.
(4) To minimize the use of brackets in the metanotation we adopt standard
priorities, that is, ∀, ∃, and ¬ have the highest, and then we have (in decreasing
order of priority) ∧, ∨, ⇒, ≡. All associativities are right (this is at variance with
[GS1], but is just another acceptable—and common—convention of how to be
sloppy in the metalanguage, and get away with it).
(5) The language just defined, L, is one-sorted, that is, it has a single sort or
type of object variable. The reasons for this choice are articulated in [To] and
we will not repeat them here. 
A variable that is quantified is bound in the scope of the quantifier. Non
quantified variables are free. The above loose description is tightened below by
induction on formulas.

1.6 Definition. (Free and bound variables) An object variable x occurs free in
a term t or atomic formula A iff it occurs in t or A as a substring.
x occurs free in (¬A) iff it occurs free in A.
x occurs free in (A ◦ B)—where ◦ ∈ {≡, ∨, ∧, ⇒}—iff it occurs free in at
least one of A or B.
x occurs free in ((∀y)A) iff x occurs free in A, and y ≠ x.†
The y in ((∀y)A) is, of course, not free—even if it is so in A—as we have
just concluded in this inductive definition. We say that it is bound in ((∀y)A).
Trivially, terms and atomic formulas have no bound variables. 
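Definition 1.6 is directly executable. The sketch below mirrors the induction clause for clause; the nested-tuple encoding of terms and formulas (e.g. ("forall", "y", ("eq", "x", "y")) for ((∀y)x ≈ y)) is an illustrative assumption of ours, not the paper's notation:

```python
VARS = {"x", "y", "z", "u", "v", "w"}   # illustrative variable names

def term_vars(t):
    """All variables occurring in a term (terms have no bound variables)."""
    if isinstance(t, str):
        return {t} if t in VARS else set()        # constants contribute nothing
    return set().union(*(term_vars(s) for s in t[1:]))  # t = (f, t1, ..., tn)

def free_vars(A):
    """Free variables of a formula, by induction on its structure (Def. 1.6)."""
    if isinstance(A, str):                   # true, false, or a boolean variable
        return set()
    op = A[0]
    if op == "eq":                           # t ≈ s
        return term_vars(A[1]) | term_vars(A[2])
    if op == "pred":                         # P t1 ... tn
        return set().union(set(), *(term_vars(t) for t in A[2:]))
    if op == "not":
        return free_vars(A[1])
    if op in ("equiv", "imp", "and", "or"):  # (A ∘ B)
        return free_vars(A[1]) | free_vars(A[2])
    if op == "forall":                       # ((∀y)A): y becomes bound
        return free_vars(A[2]) - {A[1]}
    raise ValueError(f"unknown node {op!r}")
```

In the same spirit, Definition 1.7 becomes the test `free_vars(A) == set()` for closedness.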

1.7 Definition. (Closed terms and formulas. Sentences. Open formulas) A term or formula is closed iff no free variables occur in it. A closed formula is called a sentence.
called a sentence.
A formula is open iff it contains no quantifiers (thus, it could also be closed!) 

2. Axioms and Rules of Inference


Now that we have our language, L, we will embark on using it to formally
effect “deductions”. These deductions “start” at the “axioms”. Deductions
employ “acceptable” purely syntactic—i.e., based on form, not on substance—
rules that allow us to write a formula down (to deduce it) solely because certain
other formulas that are syntactically related to it were already deduced (i.e.,
written down). These string-manipulation rules are called “rules of inference”.
We describe in this section the axioms and the rules of inference that we will
accept into our logical calculus.
These are chosen so that a logic equivalent to that in [Bou, En] results. The
characterizing feature will be that all primary rules of inference are “proposi-
tional”. This will entail a very simple version of the Deduction Theorem that
is applicable without constraints. We have the choice of taking all tautologies
as schemata in Ax1 below, or restricting the set to just those axiom schemata
given in [GS3].
We will succumb to the temptation of taking the big leap and adopting all
tautologies as axioms. Technically, this is fine, since the set of all tautologies
is “recognizable” (recursive),‡ and all tautologies will have to be deducible at
any rate, no matter how we start-up the logic. Pedagogically it is also fine.
At this point the student presumably knows of the Tautology Theorem of Post (completeness of Propositional calculus)—perhaps even of its proof—and she or he would rather navigate through the challenges of the new calculus unimpeded by a requirement to re-discover (prove) from scratch all the tautologies he or she needs as tools here, all over again. We will need a precise definition of tautologies in our first order language L.

† Recall that = and ≠ are used in the metalanguage as equality and inequality (respectively). In this case, we are comparing the strings x and y.

‡ It is common practice that those axioms of logic that are "common" to all mathematics (the logical ones) form a "recognizable" set.

2.1 Definition. (Prime formulas in Wff) A formula A ∈ Wff is a prime formula iff it is any of
Pri1. Atomic
Pri2. A formula of the form (∀x)A.
Let P denote the set of all prime formulas in our language. Clearly, P contains
each propositional variable v1 , v2 , . . . . 

 That is, a prime formula has no “explicit” propositional connectives (in the case
labeled Pri2 any propositional connectives are hidden inside the scope of (∀x)).
Clearly, A ∈ Wff iff A is a Propositional Calculus formula over P (i.e.,
propositional variables will be all the strings in P − {true, false}). 
2.2 Definition. (Tautologies in Wff) A formula A ∈ Wff is a tautology iff it
is so when viewed as a Propositional Calculus formula over P.
We call the set of all tautologies, as defined here, Taut. The symbol |=Taut A
says A ∈ Taut. 

The following generalizes 2.2 and will also be needed.

2.3 Definition. (Tautologically Implies, for formulas in Wff) Given formulas B ∈ Wff and Ai ∈ Wff, for i = 1, . . . , m. We say that A1 , . . . , Am tautologically imply B, in symbols A1 , . . . , Am |=Taut B, iff (viewing the formulas as
Propositional Calculus formulas over P) every truth assignment to the prime
formulas that makes each of the Ai true, also makes B true. 

 While a definition for an infinite set of premises is possible, we will not need it
here. 
Before presenting the axioms, we introduce some notational conventions re-
garding substitution.

2.4 Definition. (Substitutions) We will have two types of substitution:
Contextual Substitution: A[p := W ] and A[x := t] denote, respectively, the
result of replacing all p by the formula W and all free x by the term t, provided
no variable of W or t was “captured” (by a quantifier) during substitution. If
the proviso is not valid, then the substitution is undefined.
Uniform Substitution: A[p \ W ] and A[x \ t] denote, respectively, the result
of replacing all p by the formula W and all free x by the term t. No restrictions.
The symbols [p := ∗] and [p \ ∗] above, where ∗ is a W or a t, lie in the
metalanguage. These metasymbols have the highest priority. 

 2.5 Remark. (1) An inductive definition (by induction on terms and formulas)
of the string A[x := t] is instructive and is given below:
First off, let us define s[x := t], where s is also a term:
If s = x† , then s[x := t] = t. If s = a (a constant), then s[x := t] = a. If
s = y and y ≠ x (i.e., they are different strings!), then s[x := t] = y.
If s = f r1 . . . rn —where f has arity n and r1 , . . . , rn are terms—then s[x :=
t] = f r1 [x := t]r2 [x := t] . . . rn [x := t].
We turn now to formulas.
If A is true, false or p (a boolean variable), then A[x := t] = A. If A = s ≈ r,
where s and r are terms, then A[x := t] = s[x := t] ≈ r[x := t]. If A = P r1 . . . rn
(P has arity n), then A[x := t] = P r1 [x := t]r2 [x := t] . . . rn [x := t].
If A = (B ◦ C), where ◦ ∈ {≡, ∨, ∧, ⇒}, then A[x := t] = (B[x := t] ◦ C[x :=
t]).
If A = (¬B), then A[x := t] = (¬B[x := t]).
In both cases above, the left hand side is defined just in case the right hand
side is.
Finally, (the “interesting case”): say A = ((∀y)B). If y = x, then (x is not
free in A) A[x := t] = A.
If y ≠ x and B[x := t] is defined, then A[x := t] is defined provided y is not
a substring of t. In that case, A[x := t] = ((∀y)B[x := t]).
(2) Similarly, we define A[p := W ] inductively below (◦ ∈ {≡, ∨, ∧, ⇒} as
before):


A[p := W ] =
    W                            if A = p
    A                            if A is atomic, but not p
    (¬B[p := W ])                if A = (¬B) and B[p := W ] is defined
    (B[p := W ] ◦ C[p := W ])    if A = (B ◦ C) and B[p := W ] and
                                   C[p := W ] are defined
    ((∀y)B[p := W ])             if A = ((∀y)B), provided B[p := W ]
                                   is defined and y is not free in W

The cases for A[x \ t] and A[p \ W ] have exactly the same inductive definitions,
except that we drop the hedging “if defined” throughout, and we also drop the
restriction that y (of (∀y)) be not free in W or t. 
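Remark 2.5 is an algorithm in all but name. The sketch below implements the term case A[x := t] and, with capture_ok=True, the unrestricted A[x \ t]; the nested-tuple encoding is an illustrative assumption of ours, and None plays the role of "undefined". Note that the ∀-clause mirrors the paper's deliberately coarse proviso (y must not occur in t at all):

```python
VARS = {"x", "y", "z", "u", "v", "w"}   # illustrative variable names

def term_vars(t):
    """All variables occurring in a term."""
    if isinstance(t, str):
        return {t} if t in VARS else set()
    return set().union(*(term_vars(s) for s in t[1:]))

def sub_term(s, x, t):
    """s[x := t] for terms: always defined (Remark 2.5(1))."""
    if s == x:
        return t
    if isinstance(s, str):                # a different variable, or a constant
        return s
    return (s[0],) + tuple(sub_term(r, x, t) for r in s[1:])

def sub(A, x, t, capture_ok=False):
    """A[x \\ t] if capture_ok, else A[x := t]; the latter returns None
    exactly when the substitution is undefined (capture)."""
    if isinstance(A, str):                # true, false, boolean variable
        return A
    op = A[0]
    if op == "eq":
        return ("eq", sub_term(A[1], x, t), sub_term(A[2], x, t))
    if op == "pred":
        return (op, A[1]) + tuple(sub_term(r, x, t) for r in A[2:])
    if op == "not":
        B = sub(A[1], x, t, capture_ok)
        return None if B is None else ("not", B)
    if op == "forall":                    # the "interesting case"
        y, B = A[1], A[2]
        if y == x:                        # x is not free in A
            return A
        if not capture_ok and y in term_vars(t):
            return None                   # coarse proviso: y occurs in t
        C = sub(B, x, t, capture_ok)
        return None if C is None else ("forall", y, C)
    B, C = sub(A[1], x, t, capture_ok), sub(A[2], x, t, capture_ok)
    return None if B is None or C is None else (op, B, C)
```

Encoding Remark 2.7's counterexample A = (∀y)¬x ≈ y, the call `sub(A, "x", "y")` is None (undefined: y would be captured), while `sub(A, "x", "y", capture_ok=True)` yields the invalid (∀y)¬y ≈ y.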
2.6 Definition. (Axioms and Axiom schemata) The axioms (schemata) are all
the possible “partial” generalizations‡ of the following (exactly as in [En]§ ):
† For one last time, recall that = is metalogical, and it here denotes equal strings!
‡B is a partial generalization of A iff it is the expression consisting of A, prefixed with zero
or more strings (∀x)—x may or may not occur free in A. Repetitions of the same prefix-string
(∀x) are allowed. The well known “universal closure of A”, that is (∀x1 )(∀x2 ) . . . (∀xn )A—
where x1 , x2 , . . . , xn is the full list of free variables in A—is a special case.
§ Actually, [En] only allows atomic formulas in Ax6 below, and derives Ax6 as a theorem.

We chose this avenue for convenience.



Ax1. All formulas in Taut.

Ax2. (Schema) For every formula A,

(∀x)A ⇒ A[x := t], for any term t.

 By 2.4, the notation imposes a condition on t. The condition is that


during substitution no variable of t (all such are free, of course) was
“captured” by a quantifier. We say that “t must be substitutable in x”.
NB. We often see the above written (in metalinguistic argot) as

(∀x)A[x] ⇒ A[t]

or even
(∀x)A ⇒ A[t]

where the presence of A[x] (or (∀x)A, or (∃x)A) and A[t] in the same
context means that t replaces contextually all x occurrences in A.

Ax3. (Schema) For every formula A and variable x not free in A,

A ⇒ (∀x)A.

Ax4. (Schema) For all formulas A and B,

(∀x)(A ⇒ B) ⇒ ((∀x)A ⇒ (∀x)B)

Ax5. (Schema) For each object variable x, the formula x ≈ x.

Ax6. (Leibniz’s characterization of equality—1st order version. Schema) For any formula A, any object variable x and any term t, the formula

x ≈ t ⇒ (A ≡ A[x := t]).

NB. The above is written usually as

x ≈ t ⇒ (A[x] ≡ A[t])

or even
x ≈ t ⇒ (A ≡ A[t])

as long as we remember that the substitution of t for x must be contextual.



 2.7 Remark. (1) In any formal setting that introduces many sorts explicitly
in the syntax, one will need as many versions of Ax2–Ax6 as there are sorts.
(2) Axioms Ax5–Ax6 characterize equality between “objects”. Adding
these two axioms makes the logical system (explicitly) applicable to mathe-
matical theories such as number theory and set theory. These axioms will be
used to prove the “one point rule” of [GS1] (in the Appendix).
(3) In Ax2 and Ax6 we imposed the condition that t must be "substitutable"
in x by utilizing contextual substitution [x := t].
Here is why:
Take A to stand for (∃y)¬x ≈ y. Then (∀x)A ⇒ A[x \ y] is
    (∀x)(∃y)¬x ≈ y ⇒ (∃y)¬y ≈ y
and x ≈ y ⇒ (A ≡ A[x \ y]) is
    x ≈ y ⇒ ((∃y)¬x ≈ y ≡ (∃y)¬y ≈ y)
neither of which, obviously, is universally valid.
The meta-remedy is to move the quantified variable(s) out of harm’s way,
i.e., rename them so that no quantified variable in A has the same name as any
(free, of course) variable in t.
This renaming is formally correct (i.e., it does not change the meaning of the
formula) as we will see in the “variant” (meta)theorem (3.10). Of course, it is
always possible to effect this renaming since we have countably many variables,
and only finitely many appear free in t and A.
This trivial remedy allows us to render the conditions in Ax2 and Ax6
harmless. Essentially, a t is always “substitutable” (so that we can use [x \ t]
instead of the restrictive [x := t]) after renaming. 
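The renaming remedy of this remark can itself be mechanized: walk the formula and give every offending bound variable a fresh name. A rough Python sketch under the same illustrative tuple encoding used above; the helper names and the z1, z2, . . . naming scheme are assumptions of ours, and it assumes the generated names do not already occur in A:

```python
from itertools import count

def replace_var(A, y, z):
    """Rename occurrences of variable y to z (assumes z is fresh for A)."""
    if A == y:
        return z
    if isinstance(A, tuple):
        if A[0] == "forall" and A[1] == y:
            return A                      # an inner (∀y) rebinds y: stop here
        return tuple(replace_var(B, y, z) for B in A)
    return A

def rename_bound(A, avoid):
    """Rename every bound variable of A lying in `avoid` (e.g. the variables
    of a term t we wish to substitute) to a fresh z1, z2, ... name."""
    names = (f"z{i}" for i in count(1))

    def go(A):
        if not isinstance(A, tuple):
            return A
        if A[0] == "forall":
            y, B = A[1], A[2]
            if y in avoid:
                z = next(n for n in names if n not in avoid)
                y, B = z, replace_var(B, y, z)
            return ("forall", y, go(B))
        return (A[0],) + tuple(go(B) for B in A[1:])

    return go(A)
```

On the remark's example, renaming (∀y)¬x ≈ y away from {y} yields (∀z1)¬x ≈ z1, into which y is now substitutable for x without capture. Countably many variables guarantee a fresh name always exists.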
2.8 Definition. (Rules of Inference) The following three are the rules of in-
ference. These rules are relations on the set Wff and are written traditionally
as “fractions”. We call the “numerator” the premise(s) and the “denominator”
the conclusion.
We say that a rule of inference is applied to the formula(s) in the numerator,
and that it yields (or results in) the formula in the denominator. We emphasize
that the domain of the rules we describe below is the set Wff. That is why we
call the rules “strong” (a “weak” rule applies on a proper subset of Wff only.
That subset is not yet defined(!). No wonder then that we prefer “strong” over
“weak” rules).
Any set S ⊆ Wff is closed under some rule of inference iff whenever the rule
is applied to formulas in S, it also yields formulas in S.
Inf1. (Propositional (Strong) Leibniz, PSL) For any formulas A, B, C and any
propositional variable p (which may or may not occur in C)

    A ≡ B
    ─────────────────────────  (PSL)
    C[p := A] ≡ C[p := B]

on the condition that p is not in the scope of a quantifier.

 Given the condition on p, it makes no difference if in PSL above we used [p \ ∗] instead of [p := ∗]. 
Inf2. (Equanimity, EQN) For any formulas A, B

    A,  A ≡ B
    ─────────  (EQN)
    B

Inf3. (Transitivity of ≡, TR) For any formulas A, B, C

    A ≡ B,  B ≡ C
    ─────────────  (TR)
    A ≡ C

 2.9 Remark. (1) PSL is the primary rule in the propositional calculus frag-
ment of Equational Logic as it is presented in [GS1]. An additional predicate
calculus version is also given there (twin rule 8.12; the second of the two needs
a correction—see the Appendix). The Leibniz rule (or its variants) is at the
heart of equational or calculational reasoning. In standard approaches to logic
it is not a primary rule, rather it appears as the well known “derived rule”
(metatheorem) that if Γ ` A ≡ B† and if we replace one or more occurrences
of the subformula A of a formula D (here D is C[p := A]) by B, to obtain D′
(that is C[p := B]), then Γ ` D ≡ D′. No restriction on p is necessary (as we
prove in section 4). I.e., we show that the above quoted “Leibniz” is a derived
rule in our system.
Shoenfield [Sh] calls this derived rule “the equivalence theorem”.‡
(2) [GS1] use “=” for “≡” in contexts where they want the symbol to act con-
junctionally, rather than associatively, e.g., in successive steps of an equational-
style proof. We will follow this practice as well.
This may create a few confusing moments, as we use = in the metalanguage
as well! 
We next define Γ-theorems, that is, formulas we can prove from the set of
formulas Γ (this Γ may be empty).

2.10 Definition. (Γ-theorems) The set of Γ-theorems, ThmΓ , is the ⊆-smallest subset of Wff that satisfies the following:

Th1. ThmΓ contains as a subset all the axioms defined in 2.6.


† The meaning of the symbol ` is defined in 2.10 below.
‡ The syntactic apparatus in [Sh], but not ours!—see section 4—allows a stronger “Leibniz”. It allows Inf1 with uniform substitution and without the caveat on p! See also [To].

 We call these formulas the logical axioms. 


Th2. Γ ⊆ ThmΓ .

 We call every member of Γ a nonlogical axiom. 


Th3. ThmΓ is closed under each rule Inf1–Inf3.
The (meta)statement A ∈ ThmΓ is traditionally written as Γ ` A, and we say
that A is proved from Γ or that it is a Γ-theorem.
If Γ = ∅, then rather than ∅ ` A we write ` A. We often say in this case
that A is absolutely provable (or provable with no nonlogical axioms).
We often write A, B, . . . , D ` E for {A, B, . . . , D} ` E. 

 2.11 Remark. Now we can spell out what a “weak” rule of inference is: It is
a rule whose domain is restricted to be Thm∅ .
None of Inf1–Inf3 is weak. 
2.12 Definition. (Γ-proofs) A finite sequence A1 , . . . , An of members of Wff
is a Γ-proof iff every Ai , for i = 1, . . . , n, is one of
Pr1. A logical axiom (as in Th1 above).
Pr2. A member of Γ.
Pr3. The result of a rule Inf1–Inf3 applied to (an) appropriate formula(s) Aj
with j < i.
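Definition 2.12 makes "being a Γ-proof" a mechanically checkable property. A skeletal checker in Python (illustrative, not the paper's: formulas are opaque hashable values, equivalences are ("equiv", A, B) tuples, the axiom test is a caller-supplied predicate, and PSL is omitted since it needs the substitution machinery of 2.4):

```python
def eqn(prior, A):
    """Inf2 (EQN): A follows from earlier formulas B and B ≡ A."""
    return any(("equiv", B, A) in prior for B in prior)

def tr(prior, A):
    """Inf3 (TR): A of shape X ≡ Z follows from earlier X ≡ Y and Y ≡ Z."""
    if not (isinstance(A, tuple) and A[0] == "equiv"):
        return False
    X, Z = A[1], A[2]
    return any(isinstance(B, tuple) and B[0] == "equiv" and B[1] == X
               and ("equiv", B[2], Z) in prior
               for B in prior)

def is_proof(seq, gamma, is_axiom, rules=(eqn, tr)):
    """Pr1-Pr3 of Definition 2.12: every formula is a logical axiom, a
    member of Γ, or yielded by a rule applied to strictly earlier formulas."""
    for i, A in enumerate(seq):
        prior = set(seq[:i])
        if not (is_axiom(A) or A in gamma or any(r(prior, A) for r in rules)):
            return False
    return True
```

In the sense of Remark 2.13(1), Γ ` A then amounts to: A occurs in some sequence accepted by is_proof.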


 2.13 Remark. (1) It is a well known result on inductive definitions that Γ ` A is equivalent to “A appears in some Γ-proof”—in the sense of the above
definition—and also equivalent to “A is at the end of some Γ-proof”.
(2) It follows from 2.12 that if each of A1 , . . . , An has a Γ-proof and B has
an {A1 , . . . , An }-proof, then B has a Γ-proof. Indeed, simply concatenate each
of the given Γ-proofs (in any sequence). Append to the right of that sequence
the given {A1 , . . . , An }-proof (that ends with B). Clearly, the entire sequence
is a Γ-proof, and ends with B.
We refer to this phenomenon as the transitivity of `.
(3) If Γ ⊆ ∆ and Γ ` A, then also ∆ ` A as it follows from 2.10 or 2.12. In
particular, ` A implies Γ ` A for any Γ.
(4) The inductive definition of theorems (2.10) allows one to prove properties
of ThmΓ by induction on theorems. The equivalent iterative definition 2.12, via
the concept of proof, allows us a different kind of induction, on the length of
proofs. 

3. Basic metatheorems and theorems


3.1 Metatheorem. (Redundant true) For any formula A and any set of for-
mulas Γ, Γ ` A iff Γ ` A ≡ true.

Proof.

    A
    =    ⟨A ≡ A ≡ true is a tautology, hence a logical axiom⟩
    A ≡ true

So Γ ` A iff Γ ` A ≡ true by EQN.† 

3.2 Metatheorem. (Modus ponens—derived rule) A, A ⇒ B ` B for any formulas A and B.

 From 2.10 it follows that any of the primary rules can be written “linearly”, that
is, premises first, followed by ` —instead of the “fraction-line”—followed by the
conclusion.
We (almost) always use this linear format for derived rules. 
Proof. We have

    A        ⟨nonlogical assumption⟩        (1)

and

    A ⇒ B    ⟨nonlogical assumption⟩        (2)

Thus,

    A ⇒ B
    =    ⟨redundant true via (1); plus PSL⟩
    true ⇒ B
    =    ⟨(true ⇒ B) ≡ B is a tautology⟩
    B

Since A, A ⇒ B ` A ⇒ B, the above “calculation” and EQN yield A, A ⇒ B ` B. 

 Let us call our logic, that is, a language L along with the adopted axioms,
rules of inference and the definition (2.10) of Γ-theorems (an) E-logic (“E” for
Equational).
Let us call En-logic what we obtain by keeping all else the same, but adopting
modus ponens as the only primary rule of inference. This is, essentially, the logic
in [En] (except that [En] allows neither propositional variables nor propositional
constants).
† We take symmetry of ≡ for granted, and leave it unmentioned, due to axiom group Ax1.

We may subscript the symbol ` by an E or En to indicate in which logic we are working. Thus, e.g., Γ `En A means we deduced A from Γ working in the
En-logic.
Why introduce En-logic? Its metatheory is a bit simpler, due to the presence
of just one solitary rule of inference. For this reason we will often work in En-
logic to establish metatheorems, e.g., (below) that we have closure of theorems
under (∀x) (under some reasonable restrictions), Deduction Theorem, etc.
First we need to show that we are not barking up the wrong tree. E-logic
and En-logic are equivalent. 
3.3 Lemma. (The extended tautology theorem) If A1 , . . . , An |=Taut B, then
A1 , . . . , An ` B in either E-logic or En-logic.

Proof. The assumption yields that


|=Taut A1 ⇒ . . . ⇒ An ⇒ B (1)
Thus (the formula in (1) is an axiom of both logics),
A1 , . . . , An ` A1 ⇒ . . . ⇒ An ⇒ B (2)
Applying modus ponens to (2), n times, we deduce B. 

3.4 Metatheorem. For any Γ and any formula A, Γ `En A iff Γ `E A.

Proof. We already have shown that modus ponens is a derived rule of E-logic.
Thus, if Γ `En A, then Γ `E A.
Conversely, since every rule Inf1–Inf3 of E-logic is derived in En-logic, when-
ever Γ `E A we also get Γ `En A.
Now, why are Inf1–Inf3 derived in En-logic?
The reason is that each has the form of a tautological implication,
A1 , . . . , An |=Taut B
for n = 1 (PSL) or n = 2 (EQN, TR).† 

3.5 Metatheorem. (Generalization) For any Γ and any A, if Γ ` A with a restriction, then Γ ` (∀x)A.
The restriction is: There is a Γ-proof of A, such that x does not occur free
in any formula of Γ that was used in the proof.

Proof. The (meta)proof will be by induction on the length of a Γ-proof that deduces A while respecting the restriction on x. In view of 3.4, we do the
(meta)proof about En-logic (see [En]‡ ).
† That is why the rules, in particular, PSL, are called “propositional”.
‡ Actually Enderton requires an unnecessarily strong condition: That x is not free in any
formula in Γ. He does so, presumably, because he offers a proof by induction on Γ-theorems.
Induction on Γ-proofs, as we opted for here, is satisfied with a lesser restriction, imposed on
finitely many formulas of Γ.

The idea is very simple: Let B1 , B2 , . . . , Bn be such a Γ-proof effected in En-logic, where Bn = A. We show (by induction on n), that in the sequence

(∀x)B1 , (∀x)B2 , . . . , (∀x)Bn (1)

every formula is a Γ-theorem of En-logic, hence also of E-logic.


Basis n = 1. If B1 ∈ Γ, then x is not free in B1 . By Ax3, and modus
ponens, B1 ` (∀x)B1 , hence Γ ` (∀x)B1 by transitivity of `.
If B1 is logical, then (∀x)B1 is also logical (partial generalization—recall 2.6).
Thus, again, Γ ` (∀x)B1 (by 2.10 or 2.12).
Assume the claim for n ≤ k (Induction Hypothesis, in short I.H.).
We look at n = k + 1. If Bn is logical or in Γ, we already have seen how
to handle it. Suppose then that Bn is actually there because we had applied
modus ponens, namely, Bi , Bi ⇒ Bn ` Bn , and that Bi ⇒ Bn is the formula
Bj . Of course, i and j are each less than n, hence ≤ k, so the I.H. applies.
Thus,
Γ ` (∀x)Bi (2)
and
Γ ` (∀x)(Bi ⇒ Bn ) (3)
Applying modus ponens—via (2) and (3)—twice to the following instance of
Ax4, (∀x)(Bi ⇒ Bn ) ⇒ (∀x)Bi ⇒ (∀x)Bn , we get Γ ` (∀x)Bn . 

 An important observation flows immediately from the proof of 3.5.


The sequence (1) can be “padded” to be a Γ-proof without using any ad-
ditional Γ-formulas beyond those used to derive A in the first place. Indeed,
inspect the Basis and Induction Steps: They utilized whatever was utilized out
of Γ already, modus ponens and logical axioms, in order to show that each
(∀x)Bi is deducible. 
3.6 Corollary. (“Weak” Generalization) For any formula A, if ` A, then `
(∀x)A.

Proof. Take Γ = ∅ above. 

 We trivially have a derived rule specialization, sort of the converse of generalization. It says that if Γ ` (∀x)A, then Γ ` A. To see that it holds, note
that (∀x)A ⇒ A[x := x] is a logical axiom (it is easy to check that x is always
substitutable in x). By modus ponens, Γ ` A (of course, A[x := x] = A).
In particular, we can state: ` A iff ` (∀x)A.
The qualifier “weak” suggests that there is a strong generalization rule as
well, namely the “rule” A ` (∀x)A.
This rule is not derivable in E-logic. We will see why once we have proved
the Deduction Theorem. 
3. Basic metatheorems and theorems 15

3.7 Corollary. (Substitution of terms) If there is a Γ-proof of A[x1 , . . . , xn ]


so that none of the variables x1 , . . . , xn occurs free in the Γ-formulas used in
the proof, then Γ ` A[t1 , . . . , tn ], for any terms t1 , . . . , tn .

Proof. Of course, the ti must be “substitutable” in the respective variables.


One can comfortably be silent about this in view of the variant theorem (3.10,
below).
We illustrate the proof for n = 2. What makes it interesting is the require-
ment to have simultaneous substitution. To that end we first substitute into x1
and x2 new variables z, w—i.e., not occurring in the ti nor in A (neither free
nor bound).
So let Γ ` A[x1 , x2 ] as restricted in the corollary statement.
The proof is the following sequence.

⟨3.5⟩
(∀x1)A[x1, x2]
⟨Ax2 and modus ponens; x1 := z⟩
A[z, x2]
⟨3.5—see also the remark following 3.5⟩
(∀x2)A[z, x2]
⟨Ax2 and modus ponens; x2 := w⟩
A[z, w]
⟨Now z := t1, w := t2, in any order,
is the same as “simultaneous substitution”⟩
⟨3.5—see also the remark following 3.5⟩
(∀z)A[z, w]
⟨Ax2 and modus ponens; z := t1⟩
A[t1, w]
⟨3.5—see also the remark following 3.5⟩
(∀w)A[t1, w]
⟨Ax2 and modus ponens; w := t2⟩
A[t1, t2]
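The fresh-variable routing in this proof is mechanical, and a tiny script makes the point concrete. The sketch below is only an illustration (the tuple representation of terms and all names, such as `subst` and `subst_sim`, are invented here and are not part of the formal development): it performs simultaneous substitution exactly as in the proof, by first routing each variable through a fresh intermediate, and shows that naive sequential substitution gives a different, wrong result.

```python
def subst(t, x, r):
    """t[x := r]: replace every occurrence of variable x in term t by term r.
    Terms are variables (strings) or applications (f, arg1, ..., argn)."""
    if isinstance(t, str):
        return r if t == x else t
    head, *args = t  # head is a function symbol, never replaced
    return (head, *(subst(a, x, r) for a in args))

def subst_sim(t, pairs):
    """Simultaneous t[x1, ..., xn := t1, ..., tn], done as in the proof of 3.7:
    first route each xi through a fresh variable zi, then substitute the ti."""
    fresh = [f"_z{i}" for i in range(len(pairs))]  # assumed to occur nowhere else
    for (x, _), z in zip(pairs, fresh):
        t = subst(t, x, z)
    for (_, r), z in zip(pairs, fresh):
        t = subst(t, z, r)
    return t

swap = [("x1", "x2"), ("x2", "x1")]
term = ("f", "x1", "x2")
print(subst_sim(term, swap))                       # ('f', 'x2', 'x1')
print(subst(subst(term, "x1", "x2"), "x2", "x1"))  # naive: ('f', 'x1', 'x1')
```

The naive sequential version conflates the two variables; the intermediate fresh variables (the z, w of the proof) are exactly what prevents this.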

3.8 Metatheorem. (The Deduction Theorem) For any formulas A and B and
set of formulas Γ, if Γ, A ` B, then Γ ` A ⇒ B.

 NB. Γ, A means Γ ∪ {A}. A converse of the metatheorem is also true trivially:


That is, Γ ` A ⇒ B implies Γ, A ` B. This follows by modus ponens. 

Proof. The proof is by induction on Γ, A-theorems and, once again, it is carried


out in En-logic.
Basis. Let B be logical or nonlogical. Then Γ ` B (2.10).
Trivially, B |=Taut A ⇒ B, hence, by 3.3 and the transitivity of `, Γ ` A ⇒
B.
If B is the same string as A, then A ⇒ B is a logical axiom (tautology),
hence Γ ` A ⇒ B (2.10).
There is only one induction step.
Modus ponens. Let Γ, A ` C, and Γ, A ` C ⇒ B.
By I.H., Γ ` A ⇒ C and Γ ` A ⇒ C ⇒ B.
Since A ⇒ C, A ⇒ C ⇒ B |=Taut A ⇒ B, we have Γ ` A ⇒ B. 

 3.9 Remark. (1) We now see why our E-logic (equivalently, En-logic) does
not support strong generalization A ` (∀x)A. If it did, then, by the Deduction
Theorem that we have just proved,

` A ⇒ (∀x)A (i)

Even though we have not discussed semantics yet (we do so in section 6), still we
can see intuitively that no self-respecting logic should have the above formula
as an absolute theorem, since it is not an “absolute truth”. For example, over
the natural numbers, N, we have an obviously invalid “special case” of the
schema (i):
x ≈ 0 ⇒ (∀x)x ≈ 0

In some expositions the Deduction Theorem is constrained by requiring that A


be closed ([Sh, To]), or a complicated condition on its variables is given ([Men]).
Which version is right? They both are. If all the primary rules of inference
are “propositional”, as they are here, then the Deduction Theorem is uncon-
strained because we do not have strong generalization. If, on the other hand,
the rules of inference manipulate object variables via quantification (e.g., strong
generalization, or other “stronger” rules are present—see [Men, Sh, To]), then
one cannot avoid constraining the application of the Deduction Theorem, lest
one derive the invalid (i) above.
(2) This divergence of approach in choosing rules of inference has some ad-
ditional repercussions.
One has to be careful in defining the semantic counterpart of `, namely,
|= (see section 6). One wants the two symbols to “track each other” faithfully
(Gödel’s completeness theorem).† 
† In [Men] |= is defined inconsistently with `.

3.10 Metatheorem. (The variant [or, dummy renaming] metatheorem) For


any formula (∀x)A, if z does not occur in it (i.e., is neither free nor bound),
then ` (∀x)A ≡ (∀z)A[x := z].

NB. We often write this (under the stated conditions) as ` (∀x)A[x] ≡ (∀z)A[z].
Proof. Since z is substitutable in x under the stated conditions, A[x := z] is, of
course, defined. Thus, by Ax2 and modus ponens

(∀x)A ` A[x := z]

By 3.5—since z is not free in (∀x)A—we also have

(∀x)A ` (∀z)A[x := z]

By the Deduction Theorem,

` (∀x)A ⇒ (∀z)A[x := z]

Noting that x is not free in (∀z)A[x := z] and is substitutable in z (in A[x :=


z])—indeed, A[x := z][z := x] = A—we can repeat the above argument to
get (∀z)A[x := z] ` A, hence (by 3.5) (∀z)A[x := z] ` (∀x)A, and, finally,
` (∀z)A[x := z] ⇒ (∀x)A. 

 Why is A[x := z][z := x] = A? We can see this by induction on A (recall that


z occurs as neither free nor bound in A).
If A is atomic, the claim is trivial. The claim also clearly “propagates” with
the propositional formation rules.
Consider then the case that A = (∀w)B. Note that w = x is possible under
our assumptions, but w = z is not. If w = x, then A[x := z] = A; in particular,
z is not free in A, hence A[x := z][z := x] = A as well. So let us work with
w ≠ x. By I.H., B[x := z][z := x] = B. Now

A[x := z][z := x] = ((∀w)B)[x := z][z := x]
                  = ((∀w)B[x := z])[z := x]    ⟨see 2.5—w ≠ z⟩
                  = ((∀w)B[x := z][z := x])    ⟨see 2.5—w ≠ x⟩
                  = ((∀w)B)                    ⟨I.H.⟩
                  = A


We conclude this section with a couple of useful metatheorems.

3.11 Lemma. If x is not free in A, then ` A ≡ (∀x)A.

Proof. This trivial fact (Ax2 and Ax3 and tautological implication) is only
stated here to make it “quotable”. 

3.12 Metatheorem. If Γ ` A ⇒ B with a condition on x, then Γ ` A ⇒


(∀x)B.
The condition on x is: x is not free in A, and there is a Γ-proof of A ⇒ B,
such that no Γ-formula of that proof contains x free.

Proof. Apply 3.5 to conclude Γ ` (∀x)(A ⇒ B). Thus,

(∀x)(A ⇒ B)
` ⟨Ax4 and modus ponens⟩
(∀x)A ⇒ (∀x)B
= ⟨PSL and 3.11⟩
A ⇒ (∀x)B

By EQN, Γ ` A ⇒ (∀x)B. 

 By 3.8, for any two formulas A and B, ` and ⇒ are “interchangeable” (strictly
speaking, ` A ⇒ B iff A ` B).
For this reason, assuming that ⇒ is conjunctional when and only
when it is used at the left margin of an annotated proof,† the above
proof could be re-written using ⇒ (the latter notation seems to be preferred in
[GS1, Gr]). The hints have to change though!

(∀x)(A ⇒ B)
⇒ ⟨Ax4⟩
(∀x)A ⇒ (∀x)B
= ⟨PSL and 3.11⟩
A ⇒ (∀x)B (1)

Note that we dropped the justification “modus ponens”.


Of course, in any proof like the following

A1
◦ ⟨Hints⟩
A2
◦ ⟨Hints⟩
A3
◦ ⟨Hints⟩
⋮
◦ ⟨Hints⟩
An
† The “standard” ⇒ is, of course, not conjunctional: E.g., p ⇒ q ⇒ r does not say (p ⇒

q) ∧ (q ⇒ r).

where ◦ ∈ {⇒, =} —both used conjunctionally—it follows by tautological im-


plication that Γ ` A1 ⇒ An , where Γ is the set of (nonlogical) axioms that
made the above “calculation” tick.
Therefore, if moreover Γ ` A1 , then (by modus ponens) we conclude that
Γ ` An .
Taking all this for granted, we normally terminate a calculational proof, such
as the one ending with (1) above, without any additional comment (contrast
with the proof of 3.12). 
3.13 Corollary. If x is not free in A and ` A ⇒ B, then ` A ⇒ (∀x)B.

3.14 Corollary. If x is not free in A, then ` (∀x)(A ⇒ B) ≡ A ⇒ (∀x)B.

Proof. (⇒) This was done in the proof of 3.12.


(⇐) We assume then A ⇒ (∀x)B, and prove (∀x)(A ⇒ B).
Now, A ⇒ (∀x)B ` A ⇒ B by ` (∀x)B ⇒ B and tautological implication.
Since x is not free in A ⇒ (∀x)B, we are done by 3.5 and 3.8. 

3.15 Corollary. Suppose we have a Γ-proof of A ⇒ B, where x does not occur


free in whatever nonlogical axioms were used. Then Γ ` (∀x)A ⇒ (∀x)B.

Proof. By 3.5, Γ ` (∀x)(A ⇒ B). The result follows by Ax4 and modus
ponens. 

3.16 Corollary. The duals of 3.12–3.15 hold.

Proof. A trivial exercise, using the definition of the “text” ((∃x)A), namely,
(¬(∀x)(¬A)). 

4. Derived Leibniz rules


In this section we aim to increase our flexibility in carrying out calculational
proofs, by introducing some derived rules of the type “Leibniz”. Let us start
with a version we do not have (Strong Leibniz with Uniform Substitution—recall
that “strong” means that the premise is an arbitrary formula A ≡ B).

A≡B
(SLUS)
C[p \ A] ≡ C[p \ B]

That SLUS is “invalid” in our Logic follows from 3.4 in [To], “strong general-
ization”, which is a derived rule if SLUS is available. But we have seen that
E-logic does not support strong generalization.

4.1 Metatheorem. (Strong Leibniz with Contextual Substitution—SLCS) The


following is a derived rule:

A≡B
(SLCS)
C[p := A] ≡ C[p := B]

Proof. This was proved in [To] by induction on the formula C.


Basis. C is atomic. If C = p, then C[p := A] = A and C[p := B] = B, so
our conclusion is our hypothesis.
In all other cases C[p := A] ≡ C[p := B] is the tautology C ≡ C.
Induction Step(s) (I.S.).

I.S.1. C = ¬D. By I.H., A ≡ B ` D[p := A] ≡ D[p := B].


Since D[p := A] ≡ D[p := B] |=Taut ¬D[p := A] ≡ ¬D[p := B], we are
done in this case.

I.S.2. C = D ◦ G (◦ ∈ {∧, ∨, ⇒, ≡}). By I.H., A ≡ B ` D[p := A] ≡ D[p := B]


and A ≡ B ` G[p := A] ≡ G[p := B]. Since D[p := A] ≡ D[p :=
B], G[p := A] ≡ G[p := B] |=Taut (D ◦ G)[p := A] ≡ (D ◦ G)[p := B], we
are done once more.

I.S.3. C = (∀x)D. By I.H., A ≡ B ` D[p := A] ≡ D[p := B], hence (Tautology


Theorem) A ≡ B ` D[p := A] ⇒ D[p := B].
Since x is not free in A ≡ B, 3.15 yields A ≡ B ` (∀x)D[p := A] ⇒
(∀x)D[p := B].
Similarly (from A ≡ B ` D[p := A] ⇐ D[p := B]), A ≡ B ` (∀x)D[p :=
A] ⇐ (∀x)D[p := B], hence, one more application of the Tautology
Theorem gives A ≡ B ` (∀x)D[p := A] ≡ (∀x)D[p := B].

4.2 Metatheorem. (Weak Leibniz with Uniform Substitution—WLUS) The


following is a derived rule: If ` A ≡ B then ` C[p \ A] ≡ C[p \ B].

Proof. The proof is as above, with the following differences:


We assume ` A ≡ B throughout. The I.H. then reads that “` D[p \ A] ≡
D[p \ B], for all immediate subformulas, D, of C”.
The induction step I.S.3 now reads: Let C = (∀x)D. By I.H. we have
` D[p \ A] ≡ D[p \ B]. By the Tautology Theorem ` D[p \ A] ⇒ D[p \ B], thus
` (∀x)D[p \ A] ⇒ (∀x)D[p \ B] by 3.15 (Γ = ∅).
Similarly we obtain ` (∀x)D[p \ A] ⇐ (∀x)D[p \ B] and are done by the
Tautology Theorem. 

5. Monotonicity
5.1 Definition. We define a set of strings, the I-Forms and D-Forms, by in-
duction. It is the smallest set of strings over the alphabet V ∪ {∗}, where ∗ is
a new symbol added to the alphabet V , satisfying the following:

Form1. (Basis) ∗ is an I-Form

Form2. If A is any formula and U is an I-Form (respectively, D-Form), then the


following are also I-Forms (respectively, D-Forms): (U ∨ A), (A ∨ U),
(U ∧ A), (A ∧ U), (A ⇒ U), ((∀x)U) and ((∃x)U), while the following
are D-Forms (respectively, I-Forms): (¬U) and (U ⇒ A).

We will just say U is a Form, if we do not wish to spell out its type (I or D). We
will use calligraphic capital letters U, V, W, X , Y to denote Forms. 

5.2 Definition. For any Form U and any formula A or form W, the symbols
U[A] and U[W] mean, respectively, the result of the uniform substitutions U[∗ \
A] and U[∗ \ W]. 

 Our I-Forms and D-Forms—“I” for increasing, and “D” for decreasing—are mo-
tivated by, but are different from,† the Positive and Negative Forms of Schütte
[Schü].
The expected behaviour of the Forms is that they are “monotonic functions”
of the ∗-“variable” in the following sense: We expect that ` A ⇒ B will imply
` U[A] ⇒ U[B] if U is an I-Form, and ` U[A] ⇐ U[B] if it is a D-Form.
Now, ⇒ is “like” ≤ in Boolean algebras, the latter defined by “a ≤ b means
a ∨ b = b” (compare with [GS1], Axiom 3.57 for ⇒). This observation justifies
the terminology “monotonic functions”. 
We now pursue in detail the intentions stated in the above remark.

5.3 Lemma. Every Form contains exactly one occurrence of ∗ as a substring.

Proof. Induction on Forms. The basis is immediate. Moreover, the property we


are asked to prove obviously “propagates” with the formation rules. 

5.4 Lemma. For any Form U and formula A, U[A] is a formula.

Proof. Induction on Forms. The basis is obvious, and clearly the property
propagates with the formation rules. 
† For example, (∗ ∧ A) is an I-Form but not a Positive Form in the sense of [Schü], since

the latter would necessitate, in particular, that (true ∧ A) be a tautology.



5.5 Lemma. No Form has both types I and D.

Proof. Induction on Forms (really using the least principle and proof by contra-
diction). Let U have the least complexity among forms that have both types.
This is not the “basic” Form ∗ as that is declared to have just type I.
Can it be a Form (V ∨ A)? No, for (V ∨ A) inherits its type(s) from V, so V too
must have both types I and D, contradicting the assumption that U was the
least complex schizophrenic Form. We obtain similar contradictions in the case
of all the other formation rules. 

5.6 Lemma. For any Forms U and V, we have the following composition prop-
erties:

(1) If U is an I-Form, then U[V] has the type of V.

(2) If U is a D-Form, then U[V] has the type opposite to that of V.

Proof. We do induction on U to prove (1) and (2) simultaneously. The basis is


obvious, as U = ∗, hence U[V] = V.

Case 1. U = (W ∨ A), for some A ∈ Wff. U[V] = (W[V] ∨ A). U and W have
the same type.
By I.H. and the definition of Forms, the claim follows.
We omit a few similar cases . . .

Case 2. U = (W ⇒ A), for some A ∈ Wff. U[V] = (W[V] ⇒ A). U and W


have opposite types.
By I.H. and the definition of Forms, the claim follows.
We omit a few similar cases . . .

Case 3. U = ((∀x)W). U[V] = ((∀x)W[V]). U and W have the same type.


By I.H. and the definition of Forms, the claim follows.

 5.7 Remark. Thus, if U is obtained by a chain of compositions, it is an I-Form
if the chain contains an even number of D-Forms, and a D-Form otherwise.
For example, if U is a D-Form, then U[A ⇒ ∗] is still a D-Form, but U[∗ ⇒ A]
is an I-Form. 
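The parity rule in this remark can be checked mechanically. The following sketch is illustrative only (the tuple encoding of Forms and the helper names are invented here): it locates the unique occurrence of ∗ (Lemma 5.3) and flips the computed type at each negation and at each antecedent of ⇒, exactly as Definition 5.1 prescribes.

```python
def contains_star(u):
    """True iff the hole '*' occurs in u."""
    if u == '*':
        return True
    return isinstance(u, tuple) and any(contains_star(s) for s in u[1:])

def flip(k):
    return 'D' if k == 'I' else 'I'

def form_type(u):
    """Return 'I' or 'D' for a Form u, following Definition 5.1. Forms are
    nested tuples such as ('imp', 'A', ('forall', 'x', '*')), with exactly
    one occurrence of '*' (Lemma 5.3)."""
    if u == '*':
        return 'I'
    op = u[0]
    if op == 'not':
        return flip(form_type(u[1]))       # (¬U) flips the type
    if op in ('forall', 'exists'):
        return form_type(u[2])             # quantifiers preserve the type
    if op in ('and', 'or'):
        sub = u[1] if contains_star(u[1]) else u[2]
        return form_type(sub)              # ∧, ∨ preserve the type
    if op == 'imp':
        if contains_star(u[1]):
            return flip(form_type(u[1]))   # (U ⇒ A) flips the type
        return form_type(u[2])             # (A ⇒ U) preserves the type
    raise ValueError(op)

# Remark 5.7's examples, taking U to be the D-Form (¬∗):
print(form_type(('not', ('imp', 'A', '*'))))   # 'D'  (U[A ⇒ ∗])
print(form_type(('not', ('imp', '*', 'A'))))   # 'I'  (U[∗ ⇒ A])
```

The computed type is 'I' exactly when the path from the root down to ∗ crosses an even number of type-flipping constructors, which is the parity statement of the remark.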
5.8 Metatheorem. (Monotonicity and Antimonotonicity) Let ` A ⇒ B.
If U is an I-Form, then ` U[A] ⇒ U[B], else (a D-Form) ` U[A] ⇐ U[B].

 We call MON the rule “if ` A ⇒ B and U is an I-Form, then ` U[A] ⇒ U[B]”.
We call AMON the rule “if ` A ⇒ B and U is a D-Form, then ` U[A] ⇐
U[B]”. 
Proof. Induction on U.
Basis. U = ∗, hence we want to prove ` A ⇒ B, which is the same as the
hypothesis.
The induction steps:
Case 1. U = (W ∨ C), for some C ∈ Wff. If W is an I-Form, then (I.H.)
` W[A] ⇒ W[B], hence ` (W[A] ∨ C) ⇒ (W[B] ∨ C) by tautological
implication.
If W is a D-Form, then (I.H.) ` W[A] ⇐ W[B], hence ` (W[A]∨C) ⇐
(W[B] ∨ C) by tautological implication.
Case 2. U = (W ⇒ C), for some C ∈ Wff. If W is an I-Form, then (I.H.) `
W[A] ⇒ W[B], hence ` (W[A] ⇒ C) ⇐ (W[B] ⇒ C) by tautological
implication.
If W is a D-Form, then (I.H.) ` W[A] ⇐ W[B], hence ` (W[A] ⇒
C) ⇒ (W[B] ⇒ C) by tautological implication.
We omit a few similar cases based on tautological implication . . .
Case 3. U = ((∀x)W). If W is an I-Form, then (I.H.) ` W[A] ⇒ W[B].
By 3.15, ` ((∀x)W[A]) ⇒ ((∀x)W[B]).
If W is a D-Form, then (I.H.) ` W[A] ⇐ W[B].
By 3.15, ` ((∀x)W[A]) ⇐ ((∀x)W[B]).
Case 4. U = ((∃x)W). As above, but relying on 3.16 instead.


 MON and AMON are applied after we have eliminated the presence of ≡ from
formulas. 
5.9 Example. We illustrate the use of the rules MON or AMON by revisiting
the calculational proof fragment of 3.12.
(∀x)(A ⇒ B)
⇒ ⟨Ax4⟩
(∀x)A ⇒ (∀x)B
⇒ ⟨AMON—on ∗ ⇒ (∀x)B—and Ax3⟩
A ⇒ (∀x)B

If we are willing to weaken the type of substitution we effect into Forms, we


can strengthen the type of premise in MON and AMON:

5.10 Metatheorem. (“Strong” MON/AMON with contextual substitution)


The following are derived rules:
A ⇒ B ` U[∗ := A] ⇒ U[∗ := B], if U is an I-Form
A ⇒ B ` U[∗ := A] ⇐ U[∗ := B], if U is a D-Form

Proof. The proof is as in 5.8, except that the induction steps under cases 3
and 4 are modified as follows:
Case 3. U = ((∀x)W). If W is an I-Form, then (I.H.)
A ⇒ B ` W[∗ := A] ⇒ W[∗ := B] (i)
We want to argue that
A ⇒ B ` (∀x)W[∗ := A] ⇒ (∀x)W [∗ := B] (ii)
where we have already incorporated ((∀x)W)[∗ := A] = (∀x)(W[∗ :=
A]), etc., and then dropped the unnecessary brackets.
Now, if the substitutions in (ii) are not defined, then there is nothing
to state (let alone prove).
Assuming that they are defined, then A ⇒ B has no free occurrence
of x. By 3.15, (ii) follows from (i).
If W is a D-Form, then we argue as above on the I.H.
A ⇒ B ` W[∗ := A] ⇐ W[∗ := B].

Case 4. U = ((∃x)W). We use here 3.16 instead of 3.15.



 The above is as far as it goes. If we allow uniform substitution as well, to obtain
“Extra strong MON/AMON”, then the rule yields strong generalization, hence
it is not valid in E-logic. The following calculation illustrates this point.
A
= ⟨A ≡ true ⇒ A is a tautology⟩
true ⇒ A
⇒ ⟨“Extra strong MON” and the I-Form (∀x)∗⟩
(∀x)true ⇒ (∀x)A
= ⟨PSL and 3.11⟩
true ⇒ (∀x)A
= ⟨A ≡ true ⇒ A is a tautology⟩
(∀x)A

The above ⇒ (on the left margin) is, of course, ` (see the passage following
3.12). Thus we have just “proved” A ` (∀x)A.


6. Soundness and Completeness of E-logic


We introduce semantics that accurately reflect our syntactic choices (most im-
portantly, to disallow strong generalization). In particular, we will define “log-
ically implies”, |=, so that Γ ` A iff Γ |= A.† Since we have an unconstrained
Deduction Theorem that says “A ` B iff ` A ⇒ B”, we need our semantics to
also say “A |= B iff |= A ⇒ B”.
Thus the semantics defined here will be different from those in [To].
We still want to keep the “English” definition of A |= B identical to that in
[To]: “Every model of A is a model of B”. For that reason the term model will
mean here something else! (See [Schü]; the two definitions of model are identical
if we only deal with sentences).

6.1 Definition. Given a language L = (V, Term, Wff), a structure M =


(M, I) appropriate for L is such that M ≠ ∅ is a set (the “domain”) and I
is a mapping that assigns
(1) to each object variable x of V a unique member xI ∈ M
(2) to each constant a of V a unique member aI ∈ M
(3) to each function f of V —of arity n—a unique (total) function f I : M n → M
(4) to each predicate P of V —of arity n—a unique set P I ⊆ M n
(5) to each propositional variable p of V a unique member pI of the two element
set {t, f} (we understand t as “true” and f as “false”)
(6) moreover we set trueI = t and falseI = f, where the use of “=” here is
metamathematical (equality on {t, f}).


 Item (1) makes the difference between the definition of semantics here and in
[To]. 
6.2 Definition. Given L and a structure M = (M, I) appropriate for L. L(M)
denotes the language obtained from L by adding in V a unique name î for each
object i ∈ M . This amends both sets Term, Wff into Term(M), Wff(M).
Members of the latter sets are called M-terms and M-formulas respectively.
We extend I to the new constants: îI = i for all i ∈ M (where the meta-
mathematical “=” is that on M ). 

 All we have done here is to allow ourselves to do substitutions like [x := i]


formally. Instead, we do [x := î]. One next gives “meaning” to all terms in
L(M). We do not restrict this to just closed terms (as was done in [To]) since
here we are “freezing” object variables anyhow (we instantiate them via I). 
† We will not verify this “strong” Gödel completeness that is based on “compactness”.

6.3 Definition. For terms t in Term(M) we define the symbol tI ∈ M induc-


tively:
(1) If t is any of x (object variable), a (original constant), or î (imported con-
stant), then tI has already been defined.
(2) If t is the string f t1 . . . tn , where f is n-ary, and t1 , . . . , tn are M-terms, we
define tI to be the object (of M ) f I (tI1 , . . . , tIn ).


Finally, we give meaning to all M-formulas, again not restricting attention


to just sentences.

6.4 Definition. For any formula A in Wff(M) we define the symbol AI in-
ductively. In all cases, AI ∈ {t, f}.
(1) If A is any of p or true or false, then AI has already been defined.
(2) If A is the string t ≈ s, where t and s are M-terms, then AI = t iff tI = sI
(again, the last two occurrences of = refer to equality on {t, f} and M
respectively).
(3) If A is the string P t1 . . . tn , where P is an n-ary predicate and the ti are
M-terms, then AI = t iff (tI1 , . . . , tIn ) ∈ P I .
(4) If A is any of ¬B, B ∧ C, B ∨ C, B ⇒ C, B ≡ C, then AI is determined by
the usual truth tables using the values B I and C I .
(5) If A is (∀x)B, then AI = t iff (B[x := î])I = t for all i ∈ M .

Of course, the above also gives meaning to tI and AI for any terms and
formulas over the original language, since Term(M) ⊇ Term and Wff(M) ⊇
Wff. 
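Definitions 6.3 and 6.4 amount to a recursive evaluator, and over a finite domain they can be run directly. The sketch below is illustrative only (the tuple encoding of formulas and all names are invented here); re-binding x to each i ∈ M plays the role of substituting the imported constant î in clause 6.4(5). As a sanity check, it falsifies the formula of Remark 3.9 in a two-element structure with xI = 0.

```python
def t_val(t, M, I):
    """tI, following 6.3: I maps variables/constants to elements of M and
    function symbols to (total) functions on M."""
    if isinstance(t, str):
        return I[t]                        # variable, constant, or imported constant
    f, *args = t
    return I[f](*(t_val(a, M, I) for a in args))

def f_val(A, M, I):
    """AI, following 6.4, for formulas ('eq',t,s), ('pred',P,t1,...,tn),
    ('not',B), ('imp',B,C), ('forall',x,B)."""
    op = A[0]
    if op == 'eq':
        return t_val(A[1], M, I) == t_val(A[2], M, I)
    if op == 'pred':
        return tuple(t_val(t, M, I) for t in A[2:]) in I[A[1]]
    if op == 'not':
        return not f_val(A[1], M, I)
    if op == 'imp':
        return (not f_val(A[1], M, I)) or f_val(A[2], M, I)
    if op == 'forall':
        x, B = A[1], A[2]
        # 6.4(5): (B[x := i-hat])I = t for all i in M; re-binding x in I
        # does the work of the imported constant i-hat
        return all(f_val(B, M, {**I, x: i}) for i in M)
    raise ValueError(op)

M = {0, 1}
I = {'x': 0, 'zero': 0}
bad = ('imp', ('eq', 'x', 'zero'), ('forall', 'x', ('eq', 'x', 'zero')))
print(f_val(bad, M, I))   # False: x ≈ 0 ⇒ (∀x)x ≈ 0 is not valid
```

Since variables receive values from I itself (item (1) of 6.1), a formula with free variables is evaluated in a single “state”, which is exactly what makes A |= B match |= A ⇒ B here.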

 We have “imported” constants from M into L in order to be able to state the


semantics of (∀x)B above in the simple manner we just did (following [Schü]). 
6.5 Definition. Let A ∈ Wff and M be a structure as above.
We say that A is satisfiable in M, or that M is a model of A† —in symbols
|=M A—iff AI = t.
For any set of formulas Γ from Wff, |=M Γ denotes the sentence “M is a
model of Γ”, and means that for all A ∈ Γ, |=M A.
A formula A is universally valid (we often say just valid) iff every structure
appropriate for the language is a model of A. In particular, that says that fixing
M , A is “true in all states” I.
Under these circumstances we simply write |= A. 

† Contrast with the “models” in [To].



6.6 Definition. We say that Γ logically implies A, in symbols Γ |= A, to mean


that every model of Γ is also a model of A. 

 Clearly, in the case of A |= B the above says that, having fixed the domain M ,
every “state” I that makes A true makes B true. Thus, A |= B exactly when
|= A ⇒ B, as earlier promised. 
6.7 Definition. (First order theories) A (first order) theory is a collection of
the following objects:

Theory1. A first order language L = (V, Term, Wff),

Theory2. A set of logical axioms,

Theory3. A set of rules of inference,

Theory4. A set of nonlogical axioms, plus a definition of “deduction” (proof)


and “theorem” (2.12, and 2.10).

We often name the theory by the name of its nonlogical axioms (as in “let Γ be
a theory . . . ”, in which case we write Γ ` A to indicate that A is a Γ-theorem),
but we may also name the theory by other characteristics, e.g., the choice of
language. For example, we may have two theories under discussion, that only
differ in the choice of the language (L vs., say, L0 ). We may call one the theory
T and the other the theory T 0 , in which case we indicate where deductions
take place by writing Γ `T A or Γ `T 0 A as the case may be. Similarly, all
other things might be the same, except that the choice of rules of inference (or
logical axioms) is different. Again, we choose names to reflect these different
choices. We have already used such notation and terminology: E-logic and En-
logic. We now are saying that we may also use the terminology “E-theory” and
“En-theory”.
A pure theory is one with Γ = ∅. 

6.8 Remark. Remarks embedded in the above definition justify the use of the
indefinite article in “A pure theory . . . ”.

6.9 Definition. (Soundness) A pure theory is sound, iff ` A implies |= A, that


is, iff all the theorems of the theory are universally valid. 

Towards the soundness result below we carefully look at two nastily tedious
(but easy) lemmata.

6.10 Lemma. Given a term t, variables x ≠ y, where y is not in t, and a


constant a. Then, for any term s and formula A, s[x := t][y := a] = s[y :=
a][x := t] and A[x := t][y := a] = A[y := a][x := t].

Proof. (Induction on s):

Basis. If s = x, then s[x := t][y := a] = t[y := a] = t (y does not occur in t);
if s = y, it is a; if s = z, where z ∉ {x, y}, it is z; and if s = b (a constant), it
is b. In every case the result is s[y := a][x := t].

For the induction step let s = f r1 . . . rn , where f has arity n. Then

s[x := t][y := a] = f r1 [x := t][y := a] . . . rn [x := t][y := a]


= f r1 [y := a][x := t] . . . rn [y := a][x := t] by I.H.
= s[y := a][x := t]

(Induction on A):

Basis. If A is any of p, true, or false, then A[x := t][y := a] = A =
A[y := a][x := t].
If A = P r1 . . . rn , then

A[x := t][y := a] = P r1 [x := t][y := a] . . . rn [x := t][y := a]
                  = P r1 [y := a][x := t] . . . rn [y := a][x := t]
                  = A[y := a][x := t]

Similarly, if A = r ≈ s, then

A[x := t][y := a] = r[x := t][y := a] ≈ s[x := t][y := a]
                  = r[y := a][x := t] ≈ s[y := a][x := t]
                  = A[y := a][x := t]

The property we are proving, trivially, propagates with boolean connectives.


Let us do the induction step just in the case where A = (∀w)B. If w = x the
result is trivial. Otherwise,

A[x := t][y := a] = ((∀w)B)[x := t][y := a]


= ((∀w)B[x := t][y := a])
= ((∀w)B[y := a][x := t]) by I.H.
= ((∀w)B)[y := a][x := t]
= A[y := a][x := t]
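The commutation property just proved is easy to spot-check by machine. The sketch below is illustrative only (the term encoding and all names are invented here; constants are marked by a 'c_' prefix purely by convention): under the hypotheses of the lemma (x ≠ y, y not occurring in t, a a constant), the two substitution orders agree.

```python
def subst(t, x, r):
    """t[x := r] on terms: variables/constants are strings, applications tuples."""
    if isinstance(t, str):
        return r if t == x else t
    head, *args = t
    return (head, *(subst(a, x, r) for a in args))

# Hypotheses of 6.10: x != y, y does not occur in t, a is a constant.
x, y, a = 'x', 'y', 'c_a'
t = ('f', 'x', 'c_b')          # y does not occur here

for s in ['x', 'y', 'z', 'c_b', ('g', 'x', 'y'), ('g', ('f', 'y', 'x'), 'z')]:
    left = subst(subst(s, x, t), y, a)    # s[x := t][y := a]
    right = subst(subst(s, y, a), x, t)   # s[y := a][x := t]
    assert left == right, (s, left, right)
print("6.10 holds on all sample terms")
```

Dropping either hypothesis breaks the equation: e.g., with t containing y, the left order would substitute a into t while the right order would not.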



6.11 Lemma. Given a structure M = (M, I), a term s, and formula A over
L(M).
Let t be another term over L(M), such that tI = i ∈ M .
Then, (s[x := t])I = (s[x := î])I and (A[x := t])I = (A[x := î])I , in the
latter case on the assumption that A[x := t] is defined.

 This almost says the intuitively expected (but formally incorrect): (A[t])I =
AI [tI ]. 
Proof. (Induction on s):
Basis. s[x := t] = s if s ∈ {y, a, ĵ} (y ≠ x). Hence (s[x := t])I = sI =
(s[x := î])I in this case. If s = x, then x[x := t] = t and x[x := î] = î, and the
claim follows once more.
For the induction step let s = f r1 . . . rn , where f has arity n. Then

(s[x := t])I = f I ((r1 [x := t])I , . . . , (rn [x := t])I )
             = f I ((r1 [x := î])I , . . . , (rn [x := î])I )    by I.H.
             = (s[x := î])I

(Induction on A):
Basis. If A is any of p, true, or false, then A[x := t] = A = A[x := î], and
thus the claim follows in these cases.


If A = P r1 . . . rn , then†

(A[x := t])I = P I ((r1 [x := t])I , . . . , (rn [x := t])I )
             = P I ((r1 [x := î])I , . . . , (rn [x := î])I )
             = (A[x := î])I

Similarly if A = r ≈ s.
The property we are proving, clearly, propagates with boolean connectives.
Let us do the induction step just in the case where A = (∀w)B. If w = x the
result is trivial. Otherwise, we note that—since we assume that t is substitutable

† For a metamathematical relation Q, as usual, Q(a, b, . . . ) = t stands for (a, b, . . . ) ∈ Q.



in x—w does not occur in t, and proceed as follows:


(A[x := t])I = t iff (((∀w)B)[x := t])I = t
             iff ((∀w)B[x := t])I = t
             iff (B[x := t][w := ĵ])I = t for all j ∈ M , by 6.4(5)
             iff (B[w := ĵ][x := t])I = t for all j ∈ M , by 6.10
             iff ((B[w := ĵ])[x := t])I = t for all j ∈ M
             iff ((B[w := ĵ])[x := î])I = t for all j ∈ M , by I.H.
             iff (B[w := ĵ][x := î])I = t for all j ∈ M
             iff (B[x := î][w := ĵ])I = t for all j ∈ M , by 6.10
             iff ((∀w)B[x := î])I = t, by 6.4(5)
             iff (((∀w)B)[x := î])I = t
             iff (A[x := î])I = t

6.12 Metatheorem. Any pure E-theory is sound.

Proof. The pure E-theory over a fixed alphabet L is equivalent to the En-theory
over the same alphabet (3.4). Thus the proof proceeds for an En-theory.
Let ` A. Pick an arbitrary structure M = (M, I) appropriate for L and do
induction on ∅-theorems to show that |=M A.
Basis. A is a logical axiom (see 2.6).
Now, axioms in group Ax1 are tautologies over prime formulas. That is,
regardless of the values P I of prime formulas P in B, if B is a tautology, then
B I = t (see 2.2 and 6.4(4)). By 6.4(5), any (partial) generalization, A, of B
will also come out t under I. Thus, |=M A in this case.
We next show that if A is a partial generalization of (∀x)B ⇒ B[x := t],
then AI = t, from which follows that |=M A. We ask the reader to verify the
satisfiability—in the arbitrary M—of all the remaining axioms.
By 6.4(5), it suffices to prove that

((∀x)B ⇒ B[x := t])I = t (1)

Arguing by contradiction, let

((∀x)B)I = t (2)

but
(B[x := t])I = f (3)
By 6.4(5) and (2), (B[x := î])I = t for all i ∈ M .

By 6.11 and (3), (B[x := î])I = f , for some i ∈ M , contradicting what we


just said one line ago. This proves (1).
Induction step. We show that if |= C and |= C ⇒ A, then |= A.
Indeed, fix an M = (M, I) and show that AI = t. By assumption, C I = t
and C I ⇒ AI = t. Modus ponens and truth tables do the rest. 

 A by-product of soundness is consistency. An (E-)theory Γ is consistent iff


ThmΓ ⊂ Wff (proper subset).
Thus, any pure E-theory is consistent, since, by soundness, false is not prov-
able. 
6.13 Definition. A theory Γ is complete iff Γ |= A implies Γ ` A for any
formula A. 

We show the completeness of a pure E-theory by proving that it extends


conservatively† the E-theory obtained by leaving all else the same but dropping
all propositional variables and constants from the alphabet V . (The latter is
known to be complete [En].)
We employ two technical lemmata:

6.14 Lemma. (Substitution into propositional variables) In any E-theory (En-


theory) Γ, if Γ ` A, with a condition on the proof, then Γ ` A[p := W ] for any
formula W and propositional variable p.
The condition is: The propositional variable p does not occur in any formula
of Γ used in the proof of A.

Proof. Induction on the length n of the Γ-proof of A.


Say that a proof (satisfying the condition) is

B1 , . . . , Bn (1)

where Bn = A.
Basis. n = 1. Suppose that A is a logical axiom. Then A[p := W ] is as well
(by 2.6), thus Γ ` A[p := W ].
Suppose that A is a nonlogical axiom. Then A[p := W ] = A by the condition
on the proof, thus Γ ` A[p := W ].
We may assume that we are working in En-logic. On the induction hypothesis
that the claim is fine for proof-lengths < n, let us address the case of n:
If A (i.e., Bn ) is logical or nonlogical, then we have nothing to add.
† A theory T 0 over the language L0 is a conservative extension of a theory T over the

language L, if, first of all, every theorem of T is a theorem of T 0 , and (the conservative part)
moreover, any theorem of T 0 that is over L—the language of T —is also a theorem of T . That
is, T 0 proves no new theorems in the old language.

So let Bj = (Bi ⇒ A) in (1) above, where i and j are each less than n (i.e.,
the last step of the proof was an application of modus ponens).
By I.H., Γ ` Bi [p := W ] and Γ ` Bi [p := W ] ⇒ A[p := W ]. Thus, by modus
ponens, Γ ` A[p := W ]. 

6.15 Main Lemma. ([To]) Let A be a formula over the language L of sec-
tion 1, and let p be a propositional variable that occurs in A.
Extend the language L by adding P , a new 1-ary predicate symbol.
Then, |= A iff |= A[p := (∀x)P x] and ` A iff ` A[p := (∀x)P x].

Proof. (|=) The only-if is by soundness (substitution 6.14 was used).


For the if-part pick any structure M = (M, I) and prove that AI = t. To
this end, expand M to M′ = (M, I′), where I′ is the same as I, except that it
also gives meaning to P , as follows: If pI = t, set P I′ = M , else set P I′ = ∅.
Clearly, pI′ = ((∀x)P x)I′ .
By assumption, (A[p := (∀x)P x])I′ = t, hence AI = AI′ = t as well.
(`) The only-if is the result of 6.14.
For the if part, let ` A[p := (∀x)P x]. By induction on ∅-theorems we show
that ` A as well.
Basis. A[p := (∀x)P x] is a logical axiom. If it is a partial generalization of
a formula B[p := (∀x)P x] in group Ax1, then B[p := (∀x)P x] is a tautology.
But then so is B, hence (by 3.6) ` A.
Ax5 is not applicable (it cannot contain (∀x)P x).
It is clear that if A[p := (∀x)P x] is a partial generalization of a formula in
any of the groups Ax2–Ax4 or Ax6, then replacing all occurrences of (∀x)P x
in A[p := (∀x)P x] by the propositional variable p results in a formula of the
same form, so A is still an axiom, hence ` A.
Modus Ponens. Let ` B[p := (∀x)P x] and ` B[p := (∀x)P x] ⇒ A[p :=
(∀x)P x]. By I.H., ` B and ` B ⇒ A.
Hence ` A. 

6.16 Corollary. ([To]) Any pure E-theory is complete.

Proof. Fix attention to the pure E-theory over a fixed language L. Let A be a
formula in the language, and let |= A.
Denote by A′ the formula obtained from A by replacing each occurrence of
true (respectively false) by p∨¬p (respectively p∧¬p), where p is a propositional
variable not occurring in A. Let A″ be obtained from A′ by replacing each
propositional variable p, q, . . . in it by (∀x)P x, (∀x)Qx, . . . respectively, where
P, Q, . . . are new predicate symbols (so we expand L by these additions).
Clearly, by 6.15, |= A″. This formula is in the language of [En] (which is the
same as L of section 1, but has no propositional variables or constants). Thus,
by completeness of En-logic/E-logic over such a “restricted” language (proved
in [En]), ` A″, the proof being carried out in the restricted language.

But, trivially, this proof is valid over the language L (same axioms, same
rules), hence also ` A′, by 6.15.
Finally, by SLCS—since ` p ∨ ¬p ≡ true and ` p ∧ ¬p ≡ false—and EQN,
we get ` A. 
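To make the two-step translation concrete: for A = true ∨ q, A′ is (p∨¬p) ∨ q and A″ is ((∀x)Px ∨ ¬(∀x)Px) ∨ (∀x)Qx. The Python sketch below realizes the translation as naive string rewriting on this one sample (an illustration only; genuine substitution is syntax-directed, and plain text replacement suffices only for so simple an example):

```python
# Illustration only: the two-step translation A -> A' -> A'' of the proof
# of Corollary 6.16, realized as naive string rewriting on one sample.
def translate(formula):
    # Step 1 (A -> A'): eliminate the constants true/false via a fresh
    # propositional variable p.
    a1 = formula.replace("true", "(p∨¬p)").replace("false", "(p∧¬p)")
    # Step 2 (A' -> A''): replace each propositional variable by a
    # universally quantified atom over a new predicate symbol.
    a2 = a1.replace("p", "(∀x)Px").replace("q", "(∀x)Qx")
    return a1, a2

a1, a2 = translate("true ∨ q")
# a1 is "(p∨¬p) ∨ q"; a2 is "((∀x)Px∨¬(∀x)Px) ∨ (∀x)Qx"
```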

7. Appendix
The reader is referred to [To], where all the axioms in [GS1], chapters 8 and 9,
were shown to be derivable in the logic of [To]. Practically identical proofs are
available within our E-logic, and they will not be repeated here.
The justification of uses of generalization will have to be more careful in E-logic
(in [To] we could just go ahead with strong generalization). The reader should
be able to provide the right wording in each case, “translating” the proofs in
[To] to the present setting.
We recall that axiom schemata Ax5 and Ax6 are used in the proof of the
“one-point rule” (see [To]).
We will only revisit here axiom (9.5) of [GS1]—which is not an axiom of
our E-logic—and the Leibniz rules (8.12) of [GS1]. Nomenclature and numbers
given in brackets are those in [GS1].
A.1 “Distributivity of ∨ over ∀ (9.5)”. This says that

` (∀x)(A ∨ B) ≡ A ∨ (∀x)B (∨∀)

provided that x is not free in A. A proof of (∨∀) in E-logic follows:

(∀x)(A ∨ B)
=    ⟨WLUS and ` A ∨ B ≡ ¬A ⇒ B⟩
(∀x)(¬A ⇒ B)
=    ⟨3.14⟩
¬A ⇒ (∀x)B
=    ⟨` ¬A ⇒ (∀x)B ≡ A ∨ (∀x)B⟩
A ∨ (∀x)B
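As a sanity check on (∨∀), separate from the formal proof above, the equivalence can be verified semantically by brute force. In the Python sketch below the two-element domain is chosen only for illustration, and the side condition “x not free in A” is modeled by making A a fixed boolean:

```python
from itertools import product

# Brute-force semantic check of (∨∀) over a tiny domain: since x is not
# free in A, the value of A is a fixed boolean a, while B is encoded as
# a unary predicate b : D -> bool (a tuple indexed by elements of D).
D = [0, 1]

def check_or_forall():
    for a in (False, True):
        for b in product((False, True), repeat=len(D)):
            lhs = all(a or b[x] for x in D)   # (∀x)(A ∨ B)
            rhs = a or all(b[x] for x in D)   # A ∨ (∀x)B
            if lhs != rhs:
                return False
    return True
```

The check succeeds on every combination, agreeing with the derivation.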

A.2 The twin rules “Leibniz (8.12)” ([GS1], p. 148) are stated below. The
ones immediately below are the “no-capture” versions, using contextual
substitution:

                    A ≡ B
─────────────────────────────────────────────
(∀x)(C[p := A] ⇒ D) ≡ (∀x)(C[p := B] ⇒ D)

and

                 D ⇒ (A ≡ B)
─────────────────────────────────────────────   (1)
(∀x)(D ⇒ C[p := A]) ≡ (∀x)(D ⇒ C[p := B])

We prove the “weak” “full-capture” versions (2) and (3). It is obvious that
we cannot do any better: full-capture “strong” versions will yield strong
generalization!

` A ≡ B implies ` (∀x)(C[p \ A] ⇒ D) ≡ (∀x)(C[p \ B] ⇒ D) (2)

and

` A ≡ B implies ` (∀x)(D ⇒ C[p \ A]) ≡ (∀x)(D ⇒ C[p \ B]) (3)

Now, implication (2) is an instance of WLUS, where, without loss of generality,
p occurs only in C. So it holds.
(3) has an identical proof. But what happened to the D ⇒-part on the
premise side? It was dropped, because the rule is invalid with it (see also
[To]).
Indeed, take D = x ≈ 0, C = (∀x)p, A = x ≈ 0 and B = true. Then,

|= D ⇒ (A ≡ B)

that is
|= x ≈ 0 ⇒ (x ≈ 0 ≡ true)
but
⊭ (∀x)(D ⇒ C[p \ A]) ≡ (∀x)(D ⇒ C[p \ B])
that is

⊭ (∀x)(x ≈ 0 ⇒ (∀x)x ≈ 0) ≡ (∀x)(x ≈ 0 ⇒ (∀x)true)
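The failure can be confirmed concretely (an illustration, not part of the argument): in a structure whose domain is {0, 1}, with ≈ interpreted as equality and the constant 0 as the element 0, the premise holds while the two sides of the conclusion get distinct truth values. A Python sketch:

```python
# Concrete check of the countermodel: domain D = {0, 1}, with ≈ read as
# equality and the constant 0 read as the element 0.
D = [0, 1]

def implies(p, q):
    return (not p) or q

# The premise x ≈ 0 ⇒ (x ≈ 0 ≡ true) holds for every value of x:
premise = all(implies(x == 0, (x == 0) == True) for x in D)

# LHS of the conclusion: (∀x)(x ≈ 0 ⇒ (∀x)x ≈ 0).
# The inner (∀x)x ≈ 0 is false on D, so the LHS fails at x = 0.
inner = all(y == 0 for y in D)
lhs = all(implies(x == 0, inner) for x in D)

# RHS of the conclusion: (∀x)(x ≈ 0 ⇒ (∀x)true) is trivially true.
rhs = all(implies(x == 0, True) for x in D)

# premise is True, yet lhs is False and rhs is True, so the equivalence
# in the conclusion is false in this structure.
```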

As an aside, (1) is valid if the premise has an absolute proof:

` D ⇒ (A ≡ B)

To prove the conclusion of (1), establish instead

` D ⇒ (C[p := A] ≡ C[p := B]) (4)

To this end, assume D. This yields A ≡ B † (by modus ponens and the
premise of (1)). By SLCS, C[p := A] ≡ C[p := B] follows, and hence so
does (4) (Deduction Theorem).
Now, the formula in (4) yields

` (D ⇒ C[p := A]) ≡ (D ⇒ C[p := B]) (5)

which (Tautology Theorem) yields

` (D ⇒ C[p := A]) ⇒ (D ⇒ C[p := B]) (6)


† Not as an absolute theorem!

and
` (D ⇒ C[p := A]) ⇐ (D ⇒ C[p := B]) (7)
Since both (6) and (7) are absolute theorems, MON—on the I-Form (∀x)∗—and
the Tautology Theorem conclude the argument.
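The propositional step that takes (4) to (5), and from there to (6) and (7), is a tautology and can be confirmed by a truth table; the Python sketch below (an illustration only) enumerates all eight cases:

```python
from itertools import product

# Truth-table check that whenever D ⇒ (X ≡ Y) holds, so does
# (D ⇒ X) ≡ (D ⇒ Y): the propositional step from (4) to (5).
def implies(p, q):
    return (not p) or q

def step_4_to_5_is_tautological():
    for d, x, y in product((False, True), repeat=3):
        # Assume D ⇒ (X ≡ Y); then (D ⇒ X) and (D ⇒ Y) must agree.
        if implies(d, x == y) and implies(d, x) != implies(d, y):
            return False
    return True
```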
Note that we cannot do any better: if (1) is taken literally (“strongly”),
then it yields strong generalization, A ` (∀x)A, which is invalid in E-logic
(take D = B = true, C = p in (1)).

Acknowledgements. I wish to thank my colleague Jonathan Ostroff for the
many illuminating discussions we have had about Equational Logic.

8. Bibliography
[Ba] Barwise, J. “An introduction to first-order logic”, in Handbook of Mathematical Logic (J. Barwise, Ed.), 5–46, Amsterdam: North-Holland Publishing Company, 1978.

[Bou] Bourbaki, N. Éléments de Mathématique; Théorie des Ensembles, Ch. 1, Paris: Hermann, 1966.

[DSc] Dijkstra, E. W., and Scholten, C. S. B. Predicate Calculus and Program Semantics, New York: Springer-Verlag, 1990.

[En] Enderton, H. B. A Mathematical Introduction to Logic, New York: Academic Press, 1972.

[Gr] Gries, D. Foundations for Calculational Logic, in Mathematical Methods in Program Development (Marktoberdorf, 1996), 83–126, NATO Adv. Sci. Inst. Ser. F Comput. Systems Sci., 158, Berlin: Springer-Verlag, 1997.

[GS1] Gries, D. and Schneider, F. B. A Logical Approach to Discrete Math, New York: Springer-Verlag, 1994.

[GS2] Gries, D. and Schneider, F. B. Equational propositional logic, Information Processing Letters, 53, 145–152, 1995.

[GS3] Gries, D. and Schneider, F. B. Formalizations of substitution of equals for equals, Pre-print, Sept. 1998 (personal communication).

[Man] Manin, Yu. I. A Course in Mathematical Logic, New York: Springer-Verlag, 1977.

[Men] Mendelson, E. Introduction to Mathematical Logic, 3rd Edition, Monterey, Calif.: Wadsworth & Brooks, 1987.

[Schü] Schütte, K. Proof Theory, Berlin, Heidelberg, New York: Springer-Verlag, 1977.

[Sh] Shoenfield, J. R. Mathematical Logic, Reading, Mass.: Addison-Wesley, 1967.

[To] Tourlakis, G. On the soundness and completeness of Equational Predicate Logics, Department of Computer Science TR CS-1998-08, York University, November 1998.
