
LogicLectureNotes

These lecture notes by Dr. habil. Iosif Petrakis cover various aspects of mathematical logic, including derivations in minimal, intuitionistic, and classical logic, as well as models and Gödel's incompleteness theorems. The content is structured into chapters that explore foundational concepts, proof theory, model theory, and connections to category theory and type theory. Acknowledgments are given to students and contributors who provided feedback during the course at Ludwig-Maximilians-Universität.


Logic: Lecture Notes

Dr. habil. Iosif Petrakis

Ludwig-Maximilians-Universität, Mathematisches Institut


Wintersemester 20/21
Acknowledgments

These lecture notes, and especially chapters 3 and 4, are based on [19]. In chapters 1 and 2 a
categorical perspective is introduced. I would like to thank the students of LMU who followed
this course, for providing their comments, suggestions and corrections. I especially thank Nils
Köpp for assisting this course.
Contents

1 Derivations in Minimal Logic 3


1.1 Inductive definitions in metatheory . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 First-order languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 Substitutions in terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.6 Substitutions in formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7 The Brouwer-Heyting-Kolmogorov-interpretation . . . . . . . . . . . . . . . . . 13
1.8 Gentzen’s derivations, a first presentation . . . . . . . . . . . . . . . . . . . . . 15
1.9 Gentzen’s derivations, a more formal presentation . . . . . . . . . . . . . . . . . 18
1.10 The preorder category of formulas . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.11 More examples of derivations in minimal logic . . . . . . . . . . . . . . . . . . . 22
1.12 Extension, cut, and the deduction theorem . . . . . . . . . . . . . . . . . . . . 24
1.13 The category of formulas is cartesian closed . . . . . . . . . . . . . . . . . . . . 27
1.14 Functors associated to the main logical symbols . . . . . . . . . . . . . . . . . . 31
1.15 Natural transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.16 Galois connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.17 The quantifiers as adjoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2 Derivations in Intuitionistic and Classical Logic 41


2.1 Derivations in intuitionistic logic . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.2 Derivations in classical logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.3 Monos, epis and subobjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.4 The groupoid category of formulas . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.5 The negative fragment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.6 Weak disjunction and weak existence . . . . . . . . . . . . . . . . . . . . . . . . 54
2.7 Logical operations on functors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.8 Functors on functors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.9 Functors associated to weak “or” and weak “exists” . . . . . . . . . . . . . . . 62
2.10 The Gödel-Gentzen translation . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.11 The Gödel-Gentzen functor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.12 Applications of the Gödel-Gentzen functor . . . . . . . . . . . . . . . . . . . . . 73
2.13 The Gödel-Gentzen translation as a continuous function . . . . . . . . . . . . . 76

3 Models 79
3.1 Trees, fans, and spreads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.2 Fan models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.3 The Tarski-Beth definition of truth in a fan model . . . . . . . . . . . . . . . . 84
3.4 Soundness of minimal logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.5 Countermodels and intuitionistic fan models . . . . . . . . . . . . . . . . . . . . 91
3.6 Completeness of minimal logic . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.7 L-models and classical models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.8 Soundness theorem of classical logic . . . . . . . . . . . . . . . . . . . . . . . . 98
3.9 Completeness of classical logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.10 The compactness theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

4 Gödel’s incompleteness theorems 105


4.1 Elementary functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.2 A non-elementary function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.3 Elementary relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.4 The set of functions E . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.5 Coding finite lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.6 Gödel numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.7 Undefinability of the notion of truth . . . . . . . . . . . . . . . . . . . . . . . . 117
4.8 Representable relations and functions . . . . . . . . . . . . . . . . . . . . . . . 118
4.9 Undefinability of the notion of truth in formal theories . . . . . . . . . . . . . . 119
4.10 Recursive functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.11 Undecidability and incompleteness . . . . . . . . . . . . . . . . . . . . . . . . . 122
Chapter 1

Derivations in Minimal Logic

Mathematical logic (ML), or simply logic, is concerned with the study of formal systems related
to the foundations and practice of mathematics. ML is a very broad field encompassing various
theories, like the following. Proof theory, the main object of study of which is the concept
of (formal) derivation, or (formal) proof (see e.g., [18]). Model theory studies interpretations,
or models, of formal theories (see e.g., [6]). Axiomatic set theory is the formal theory of sets
that underlies most of the standard mathematical practice (see e.g., [12]). It is also called
Zermelo-Fraenkel set theory (ZF). The theory ZFC is ZF together with the axiom of choice.
ML has strong connections to category theory, a theory developed first by Eilenberg and Mac
Lane within homology and homotopy theory (see e.g. [2]). Categorical logic is that part of
category theory connected to logic (see [14]). Computability theory is the theory of computable
functions, or in general of algorithmic objects (see e.g., [17]).
An alternative to the notion of set is the concept of type (or data-type). Type theory,
which has its origins to Russell’s so called ramified theory of types, is evolved in modern times
to Martin-Löf type theory (MLTT), which also has many applications to theoretical computer
science (see [15], [16]). Recently, the late Fields medalist V. Voevodsky revealed unexpected
connections between homotopy theory and logic, developing Homotopy Type Theory (HoTT),
an extension of MLTT with his axiom of univalence and higher inductive types (see [21]).
In this chapter we develop the basics of first-order theories and we study derivations in
minimal logic. Although standard mathematics is done within classical logic, great mathemati-
cians, have developed mathematics within constructive logic. The aforementioned theories
MLTT and HoTT are within intuitionistic logic. There is also constructive set theory (see [1]),
constructive computability theory (see [5]). For basic mathematical theories within construc-
tive logic see [3], [4]. Minimal logic is the most general (constructive) logic that we study
here.

1.1 Inductive definitions in metatheory


In order to define the fundamental concepts of a first-order language, we need a so-called
metatheory M that permits such definitions. This metatheory M is, in principle, a formal
theory the exact description of which is left here open. What we ask from M is to include
some theory of natural numbers, of sets and functions and of rather simple inductively defined
sets. For example, one could take M to be the whole Zermelo-Fraenkel set theory (ZF),
but smaller parts of ZF would also suffice. One could use a constructive theory of sets as a

metatheory M. Next we explain the kind of inductive definitions that must be possible in M.
An inductively defined set, or an inductive set, X is determined by two kinds of rules
(or axioms): the introduction rules, which determine the way the elements of X are formed,
or introduced, and the induction principle IndX for X (or elimination rule for X), which
guarantees that X is the least set satisfying its introduction rules.
Example 1.1.1. The most fundamental example of an inductive set is that of the set of
natural numbers N. Its introduction rules are:

n∈N
0∈N , Succ(n) ∈ N
.

According to these rules, the elements of N are formed by the element 0 and by the primitive,
or given, successor-function Succ : N → N. These rules alone do not determine a unique set; for
example, the rationals Q and the reals R satisfy the same rules. We determine N by postulating
that N is the least set satisfying the above rules. This we do in a “bottom-up” way¹ with
the induction principle for N. If P, Q, R are formulas in our metatheory M, the M-formula
P ⇒ Q ⇒ R is the formula P ⇒ (Q ⇒ R), or (P & Q) ⇒ R, i.e., “if P and if Q, then R”.
The induction principle IndN for N is the following formula (in M): for every formula A(n)
on N in M,
A(0) ⇒ ∀n∈N (A(n) ⇒ A(Succ(n))) ⇒ ∀n∈N (A(n)).
The interpretation of IndN is the following: the hypotheses of IndN say that A satisfies the two
formation rules for N, i.e., A(0) and ∀n∈N (A(n) ⇒ A(Succ(n))). In this case A is a “competitor”
predicate to N. Then, if we view A as the set of all objects such that A(n) holds, the conclusion
of IndN guarantees that N ⊆ A, i.e., ∀n∈N (A(n)). In other words, N is “smaller” than A, and
this is the case for any such A.
Notice that we use the following conventions in M:

∀x∈X φ(x) :⇔ ∀x (x ∈ X ⇒ φ(x)),
∃x∈X φ(x) :⇔ ∃x (x ∈ X & φ(x)).
The induction principle in an inductive definition is the main tool for proving properties of
the defined set. In the case of N, one can prove (exercise) its corresponding recursion theorem
RecN , which determines the way one defines functions on N. According to a simplified version
of it, if X is a set, x0 ∈ X and g : X → X, there exists a unique function f : N → X such that
f(0) = x0,
f(Succ(n)) = g(f(n)); n ∈ N.
To show e.g., the uniqueness of f with the above properties, let h : N → X such that h(0) = x0
and h(Succ(n)) = g(h(n)), for every n ∈ N. Using IndN on A(n) :⇔ (f (n) = h(n)), we get
∀n (A(n)). As an example of a function defined through RecN, let Double : N → N be defined by
Double(0) = 0,
Double(Succ(n)) = Succ(Succ(Double(n)))
i.e., X = N, x0 = 0 and g = Succ ◦ Succ.
¹ If we work inside the set of real numbers R, we can also do this in a “top-down” way, by defining N to be
the intersection of all sets satisfying these introduction rules. Notice that also in this way N is the least subset
of R satisfying its introduction rules.
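Read as a program, RecN is just a higher-order function. The following sketch (Python is used here only for illustration; the names rec_nat and double are not from the notes) computes the unique f determined by x0 and g, and instantiates it to the function Double defined above.

```python
def rec_nat(x0, g):
    """RecN, simplified: the unique f : N -> X with f(0) = x0, f(Succ(n)) = g(f(n))."""
    def f(n):
        value = x0
        for _ in range(n):   # apply g exactly n times
            value = g(value)
        return value
    return f

# Double : N -> N, obtained with X = N, x0 = 0 and g = Succ ∘ Succ.
double = rec_nat(0, lambda m: m + 2)
```

The uniqueness clause of RecN is what justifies naming the result: any h satisfying the same two equations agrees with rec_nat(x0, g) on every n.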

Example 1.1.2. Let A be a non-empty set that we call an alphabet. The set A∗ of words over
A is introduced by the following rules:

                    w ∈ A∗ ,   a ∈ A
nilA∗ ∈ A∗ ,    ───────────────────── .
                      w ⋆ a ∈ A∗

The symbol nilA∗ denotes the empty word, while the word w ⋆ a denotes the concatenation
of the word w and the letter a ∈ A. The induction principle IndA∗ for A∗ is the following: if
P(w) is any formula on A∗ in M, then

P(nilA∗) ⇒ ∀w∈A∗ ∀a∈A (P(w) ⇒ P(w ⋆ a)) ⇒ ∀w∈A∗ (P(w)).

A simplified version of the corresponding recursion theorem RecA∗ is the following: if X is a
set, x0 ∈ X, and if ga : X → X, for every a ∈ A, there is a function f : A∗ → X such that

f(nilA∗) = x0,
f(w ⋆ a) = ga(f(w)); w ∈ A∗, a ∈ A.

As an example of a function defined through RecA∗, if X = A∗, w0 ∈ A∗ and if ga(w) = w ⋆ a,
for every a ∈ A, let the function fw0 : A∗ → A∗ be defined by

fw0(nilA∗) = w0,
fw0(w ⋆ a) = ga(fw0(w)),

i.e., fw0(w) = w0 ⋆ w is the concatenation of the words w0 and w (we use the same symbol for
the concatenation of a word and a symbol and for the concatenation of two words).
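In the same spirit, RecA∗ is a fold over words. The sketch below (illustrative names; words are represented as Python strings) recovers the concatenation function fw0 and, with a different choice of ga, the length function.

```python
def rec_words(x0, g):
    """RecA*, simplified: f(nil) = x0, f(w ⋆ a) = g(a)(f(w)); words as strings."""
    def f(w):
        value = x0
        for a in w:              # peel off the letters of w left to right
            value = g(a)(value)
        return value
    return f

# f_w0(w) = w0 ⋆ w, with X = A* and g_a(v) = v ⋆ a.
def concat_from(w0):
    return rec_words(w0, lambda a: lambda v: v + a)
```

As in the case of N, the recursion theorem guarantees that these are the unique functions satisfying their defining clauses.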

If ZF is our metatheory M, then the proof of the recursion theorem that corresponds to
an inductive definition can be complicated. If as metatheory we use a theory like Martin-
Löf’s type theory MLTT, there is a completely mechanical, hence trivial, way to recover the
corresponding recursion rule from the induction rule of an inductive definition.

1.2 First-order languages


Definition 1.2.1. Let Var = {vn | n ∈ N} be a fixed countably infinite set of variables. We
also denote the elements of Var by x, y, z, etc. Let L = {→, ∧, ∨, ∀, ∃, (, ), , }, where each
element of L is called a logical symbol. A first-order language over Var and L is a pair
L = (Rel, Fun), where Var, L, Rel, Fun are pairwise disjoint sets (for simplicity, we do not
use the more accurate notations Rel_L, Fun_L) such that

Rel = ⋃_{n∈N} Rel(n),

where for every n ∈ N, Rel(n) is a, possibly empty, set of n-ary relation symbols or predicate
symbols. Moreover, Rel(n) ∩ Rel(m) = ∅, for every n ≠ m. A 0-ary relation symbol is called a
propositional symbol. The symbol ⊥ (read falsum) is required as a fixed propositional symbol,
i.e., Rel(0) is inhabited by ⊥. The language will not, unless stated otherwise, contain the
equality symbol =, which is a 2-ary relation symbol. Moreover,
equality symbol =, which is a 2-ary relation symbol. Moreover,
Fun = ⋃_{n∈N} Fun(n),

where for every n ∈ N, Fun(n) is a, possibly empty, set of n-ary function symbols. Moreover,
Fun(n) ∩ Fun(m) = ∅, for every n ≠ m. A 0-ary function symbol is called a constant, and we let

Const = Fun(0).

Clearly, the above definition rests on some theory of sets, and of natural numbers, which,
as we have already said, are presupposed for our metatheory M. The equality symbol used in
Definition 1.2.1 is the equality (of sets, or objects) in M. If our formal language includes one
more fixed countably infinite set of variables VAR = {Vn | n ∈ N}, where Vi is a variable of
another sort, e.g., a set-variable, then one could define the notion of a second-order language
over Var, VAR and L in a similar fashion.
Example 1.2.2. The first-order language of arithmetic is the pair ({⊥, =}, {0, S, +, ·}), which
is written for simplicity as (⊥, =, 0, S, +, ·), where 0 ∈ Const, S ∈ Fun(1), and +, · ∈ Fun(2).
The first-order language of Zermelo-Fraenkel set theory (ZF) is the pair ({⊥, =, ∈}, ∅), which
is written for simplicity as (⊥, =, ∈), where ∈ is in Rel(2).
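A first-order language can be represented concretely as two arity-indexed families of symbol sets. The following sketch (the names REL and FUN are illustrative, not from the notes) encodes the language of arithmetic and checks the disjointness condition Rel(n) ∩ Rel(m) = ∅ for n ≠ m.

```python
# The language of arithmetic: Rel = {⊥, =}, Fun = {0, S, +, ·},
# indexed by arity as in Definition 1.2.1.
REL = {0: {"⊥"}, 2: {"="}}
FUN = {0: {"0"}, 1: {"S"}, 2: {"+", "·"}}

def arities_disjoint(family):
    """Check that the symbol sets of different arities are pairwise disjoint."""
    seen = set()
    for symbols in family.values():
        if seen & symbols:       # a symbol already appeared at another arity
            return False
        seen |= symbols
    return True
```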

1.3 Terms
The set TermL of terms of a first-order language L is inductively defined. For simplicity we
omit the subscript L. N+ denotes the set of strictly positive natural numbers.
Definition 1.3.1. The set Term of terms of a first-order language L is defined by the following
introduction rules:
 x ∈ Var              c ∈ Const
──────────── ,      ────────────── ,
 x ∈ Term             c ∈ Term

 n ∈ N+ ,   t1, . . . , tn ∈ Term,   f ∈ Fun(n)
───────────────────────────────────────────────── ,
            f(t1, . . . , tn) ∈ Term
to which, the following induction principle IndTerm corresponds:

∀x∈Var (P(x)) ⇒
∀c∈Const (P(c)) ⇒
∀n∈N+ ∀t1,...,tn∈Term ∀f∈Fun(n) ((P(t1) & . . . & P(tn)) ⇒ P(f(t1, . . . , tn))) ⇒
∀t∈Term (P(t)),
where P (t) is any formula (in M) on Term.
In words, every variable is a term, every constant is a term, and if t1 , . . . , tn are terms and
f is an n-ary function symbol with n ≥ 1, then f (t1 , . . . , tn ) is a term. If r, s are terms and ◦
is a binary function symbol, we usually write r ◦ s instead of ◦(r, s). E.g.,

0, S(0), S(S(0)), S(0) + S(S(0))



are terms of the language of arithmetic. As in the case of the induction principle for natural
numbers, the induction principle for Term expresses that Term is the least set satisfying its
defining rules. A formula P (t) on Term could be “the number of left parentheses, (, occurring
in t is equal to the number of right parentheses, ), occurring in t”. We need to express this
formula in mathematical terms. For that we need the following recursion theorem for Term.

Proposition 1.3.2 (Recursion theorem for Term (RecTerm )). Let X be a set. If

FVar : Var → X,
FConst : Const → X,
Ff,n : X^n → X,

for every n ∈ N+ and f ∈ Fun(n), are given functions, there is a unique function F : Term → X
such that, for every n ∈ N+, t1, . . . , tn ∈ Term, and f ∈ Fun(n),

F(x) = FVar(x), x ∈ Var,
F(c) = FConst(c), c ∈ Const,
F(f(t1, . . . , tn)) = Ff,n(F(t1), . . . , F(tn)).

Proof. The proof of the existence of F is similar to the corresponding existence-proof in
the recursion theorem for N, and it is an exercise. The uniqueness of F is also shown with
the use of IndTerm. If G : Term → X satisfies the defining properties of F, we show that
∀t∈Term (F(t) = G(t)), by using IndTerm on the formula P(t) :⇔ F(t) = G(t).

Using the recursion theorem for Term one can define e.g., the function Pleft : Term → N
such that Pleft(t) is the number of left parentheses occurring in t ∈ Term. It suffices to define
it on the variables, the constants, and the complex terms f(t1, . . . , tn), supposing that Pleft is
defined on the terms t1, . . . , tn. Namely, we define

Pleft(x) = 0, x ∈ Var,
Pleft(c) = 0, c ∈ Const,
Pleft(f(t1, . . . , tn)) = 1 + ∑_{i=1}^{n} Pleft(ti).

Here we used the recursion theorem for Term with respect to the functions FVar : Var → N,
FConst : Const → N, and Ff,n : N^n → N, where n ∈ N+ and f ∈ Fun(n), defined by the rules:

FVar(x) = 0 = FConst(c),
Ff,n(m1, . . . , mn) = 1 + ∑_{i=1}^{n} mi.

In exactly the same way, one defines the function Pright : Term → N such that Pright (t) is the
number of right parentheses occurring in t ∈ Term. Now we can show the following.

Proposition 1.3.3. ∀t∈Term (Pleft(t) = Pright(t)).

Proof. We apply IndTerm on the formula P (t) :⇔ Pleft (t) = Pright (t). The validity of P (x),
for every x ∈ Var, and P (c), for every c ∈ Const is trivial. If f (t1 , . . . , tn ) is a complex term,
such that P (ti ) holds, for every i ∈ {1, . . . , n}, then by the inductive hypothesis we get
Pleft(f(t1, . . . , tn)) = 1 + ∑_{i=1}^{n} Pleft(ti)
                        = 1 + ∑_{i=1}^{n} Pright(ti)
                        = Pright(f(t1, . . . , tn)).
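The definitions of Pleft and Pright through RecTerm are ordinary structural recursions. The following sketch uses a minimal illustrative encoding of Term (not the notes' official one): a variable or constant is a string, and a complex term f(t1, . . . , tn) is a pair (f, [t1, ..., tn]).

```python
def p_left(t):
    """Number of left parentheses in t, by recursion on Term."""
    if isinstance(t, str):       # variable or constant: no parentheses
        return 0
    _f, args = t                 # complex term f(t1, ..., tn): one '(' plus those of the ti
    return 1 + sum(p_left(u) for u in args)

def p_right(t):
    """Number of right parentheses in t."""
    if isinstance(t, str):
        return 0
    _f, args = t
    return 1 + sum(p_right(u) for u in args)

# S(0) + S(S(0)) in the language of arithmetic:
term = ("+", [("S", ["0"]), ("S", [("S", ["0"])])])
```

Evaluating both functions on `term` illustrates Proposition 1.3.3: every complex term contributes exactly one parenthesis of each kind.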

1.4 Formulas
Definition 1.4.1. The set of formulas Form of a first-order language L is defined by the
following introduction rules:

 n ∈ N,   t1, . . . , tn ∈ Term,   R ∈ Rel(n)
─────────────────────────────────────────────── ,
          R(t1, . . . , tn) ∈ Form

         A, B ∈ Form
──────────────────────────────── ,
 A → B,  A ∧ B,  A ∨ B ∈ Form

   A ∈ Form,   x ∈ Var
─────────────────────────── ,
   ∀x A,  ∃x A ∈ Form
to which, the following induction principle IndForm corresponds:

∀n∈N ∀t1,...,tn∈Term ∀R∈Rel(n) (P(R(t1, . . . , tn))) ⇒
∀A,B∈Form ((P(A) & P(B)) ⇒ (P(A → B) & P(A ∧ B) & P(A ∨ B))) ⇒
∀A∈Form ∀x∈Var (P(A) ⇒ (P(∀x A) & P(∃x A))) ⇒
∀A∈Form (P(A)),
where P (A) is any formula in M on Form. The formulas of the form R(t1 , . . . , tn ) are called
prime formulas, or atomic formulas, or just atoms. If r, s are terms and ∼ is a binary relation
symbol, we also write r ∼ s for the prime formula ∼ (r, s). Since ⊥ ∈ Rel(0) , we get ⊥ ∈ Form.
We call A → B the implication from A to B, A ∧ B the conjunction of A, B, and A ∨ B the
disjunction of A, B. The negation ¬A of a formula A is defined as the formula

¬A = A → ⊥.

As usual, we use the notational convention A → B → C = A → (B → C). The formulas
generated by the prime formulas are called complex, or non-atomic formulas. A formula ∀x A
is called a universal formula, and a formula ∃x A is called an existential formula.

As usual, the induction principle IndForm expresses that Form is the least set satisfying its
introduction rules. Note that IndForm consists of formulas in M, where the same quantifiers and
logical symbols, except for the meta-theoretic implication symbol ⇒ and the metatheoretic
conjunction symbol &, are used. Since the variables occurring in these meta-theoretic formulas

are different from Var, it is easy to understand from the context the difference between the
formulas in Form and the formulas in M. The expression ⊥ → ⊥ is a formula, and so are the
expressions ∀x (⊥ → ⊥) and ∃x (R(x) ∨ S(x)). Notice that we have added two parentheses
(left and right) in both of the last examples, in order to make them easier to read.
could have used the introduction rule
            A, B ∈ Form
──────────────────────────────────── ,
 (A → B),  (A ∧ B),  (A ∨ B) ∈ Form

but then it would be cumbersome to be faithful to it all the time. It is easy to associate to
each formula its formation tree, i.e., the tree of all introduction rules that generate the formula,
which lies at the root of this tree.
“the number of left parentheses occurring in A is equal to the number of right parentheses
occurring in A”. As in the case of terms, we need a recursion theorem for Form to formulate
P (A). Let Prime be the set of all prime formulas i.e., the set

Prime = {R(t1, . . . , tn) | R ∈ Rel(n), t1, . . . , tn ∈ Term, n ∈ N}.

Proposition 1.4.2 (Recursion theorem for Form (RecForm )). Let X be a set. If

FPrime : Prime → X

F→ : X × X → X, F∧ : X × X → X, F∨ : X × X → X,
F∀,x : X → X, F∃,x : X → X,
for every x ∈ Var, are given functions, there is a unique function F : Form → X such that

F (R(t1 , . . . , tn )) = FPrime (R(t1 , . . . , tn )),

F (A → B) = F→ (F (A), F (B)),
F (A ∧ B) = F∧ (F (A), F (B)),
F (A ∨ B) = F∨ (F (A), F (B)),
F (∀x A) = F∀,x (F (A)),
F (∃x A) = F∃,x (F (A)).

Proof. We proceed similarly to the proof of Proposition 1.3.2.

It is a simple exercise to define recursively the functions Pleft : Form → N and
Pright : Form → N, and to show inductively that ∀A∈Form (Pleft(A) = Pright(A)). Next we define
recursively the height |A| ∈ N of a formula A, which represents the “height” of the formation-tree
of A with respect to the introduction rules of the set Form.

Definition 1.4.3. The function height |.| : Form → N is defined recursively by the clauses

|P| = 0, P ∈ Prime,
|A □ B| = max{|A|, |B|} + 1, □ ∈ {→, ∧, ∨},
|△x A| = |A| + 1, △ ∈ {∀, ∃}.

In Definition 1.4.3 we applied RecForm on the following N-valued functions, defined by

FPrime(P) = 0,
F□(m, n) = max{m, n} + 1,
F△,x(m) = m + 1.

Definition 1.4.4. The function length ||.|| : Form → N is defined recursively by the clauses

||P|| = 1, P ∈ Prime,
||A □ B|| = ||A|| + ||B||, □ ∈ {→, ∧, ∨},
||△x A|| = 1 + ||A||, △ ∈ {∀, ∃}.

Proposition 1.4.5. ∀A∈Form (||A|| + 1 ≤ 2^(|A|+1)).

Proof. Exercise.
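The height and length functions, and the bound of Proposition 1.4.5, can be checked on a small formula datatype (an illustrative encoding: a prime formula is a string, a connective node is ("□", A, B) with □ ∈ {"→", "∧", "∨"}, and a quantifier node is ("∀"/"∃", x, A)):

```python
CONNECTIVES = {"→", "∧", "∨"}

def height(A):
    """|A|: height of the formation tree of A."""
    if isinstance(A, str):                      # prime formula
        return 0
    if A[0] in CONNECTIVES:
        _, B, C = A
        return max(height(B), height(C)) + 1
    _, x, B = A                                 # quantifier node
    return height(B) + 1

def length(A):
    """||A||: defined by the clauses of Definition 1.4.4."""
    if isinstance(A, str):
        return 1
    if A[0] in CONNECTIVES:
        _, B, C = A
        return length(B) + length(C)
    _, x, B = A
    return 1 + length(B)

# ∀x (R(x) → S(x)):
A = ("∀", "x", ("→", "R(x)", "S(x)"))
```

For this A we have |A| = 2 and ||A|| = 3, so ||A|| + 1 = 4 ≤ 2^3, in accordance with Proposition 1.4.5.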

1.5 Substitutions in terms


Next we define the set of free variables occurring in a term t, and the set of free variables
occurring in a formula A. As prime formulas are defined by an n-ary relation symbol and n
terms, it is very often the case that in order to define a function on Form, we first need
to define a corresponding function on Term. If X is a set, Pfin(X) denotes the set of finite
subsets of X. If Y, Z ⊆ X, then Y \ Z = {x ∈ X | x ∈ Y & x ∉ Z}.

Definition 1.5.1. Let the function FVTerm : Term → Pfin(Var) be defined by

FVTerm(x) = {x},
FVTerm(c) = ∅,
FVTerm(f(t1, . . . , tn)) = ⋃_{i=1}^{n} FVTerm(ti).

The function FVForm : Form → Pfin(Var) is defined by

FVForm(R) = ∅, R ∈ Rel(0),
FVForm(R(t1, . . . , tn)) = ⋃_{i=1}^{n} FVTerm(ti), R ∈ Rel(n), n ∈ N+,
FVForm(A □ B) = FVForm(A) ∪ FVForm(B),
FVForm(△x A) = FVForm(A) \ {x}.


If FV(A) = ∅, then A is called a sentence, or a closed formula.

According to Definition 1.5.1, a variable y is free in a prime formula A, if it just occurs in A;
it is free in A □ B, if it is free in A or free in B; and it is free in △x A, if it is free in A and
y ≠ x. E.g., the formulas

∀y (R(y) → S(y)), ∀y (R(y) → ∀z S(z))

are sentences, while y is free in the formula

(∀y R(y)) → S(y).

Definition 1.5.2. W(L) is the set of finite lists of symbols from the set Var ∪ L ∪ Rel ∪ Fun.
The set W(L) can be defined inductively as the set [Var ∪ L ∪ Rel ∪ Fun]∗ of words over the
alphabet Var ∪ L ∪ Rel ∪ Fun (see Example 1.1.2).
Clearly, Term, Form ⊊ W(L), as e.g., f R ∧ g(⊥, u8 is a word neither in Term nor in Form.
Definition 1.5.3. If s ∈ Term and x ∈ Var are fixed, the function

Subs/x : Term → W(L),
t ↦ Subs/x(t) = t[x := s],

determines the word generated by substituting x from s in t, and it is defined by the clauses

vn[x := s] = s, if x = vn, and vn[x := s] = vn otherwise,
c[x := s] = c,
f(t1, . . . , tn)[x := s] = f(t1[x := s], . . . , tn[x := s]).
Proposition 1.5.4. If s ∈ Term and x ∈ Var, then ∀t∈Term (t[x := s] ∈ Term).
Proof. It follows trivially by IndTerm .

Proposition 1.5.5. If s ∈ Term and x ∈ Var, then ∀t∈Term (x ∉ FV(t) ⇒ t[x := s] = t).
Proof. We use induction on Term. If t = vi, for some vi ∈ Var, then x ∉ FV(vi) ⇔ x ∉ {vi} ⇔
x ≠ vi, hence vi[x := s] = vi. If t = c, for some c ∈ Const, then x ∉ FV(c) ⇔ x ∉ ∅, which is
always the case. By definition of substitution we get c[x := s] = c. If t = f(t1, . . . , tn), for
some f ∈ Fun(n) and t1, . . . , tn ∈ Term, then x ∉ FV(f(t1, . . . , tn)) ⇔ x ∉ FV(ti), for every
i ∈ {1, . . . , n}. By the inductive hypothesis on t1, . . . , tn we get ti[x := s] = ti, for every
i ∈ {1, . . . , n}. Hence, f(t1, . . . , tn)[x := s] = f(t1[x := s], . . . , tn[x := s]) = f(t1, . . . , tn).
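FVTerm and substitution in terms, together with Proposition 1.5.5, can be sketched with the same illustrative term encoding as before (variables and constants as strings, complex terms as pairs (f, args)); by convention here, the variables are the strings starting with "v".

```python
def fv(t):
    """FV(t): free variables of a term; variables are the strings starting with 'v'."""
    if isinstance(t, str):
        return {t} if t.startswith("v") else set()
    _f, args = t
    return set().union(set(), *(fv(u) for u in args))

def subst(t, x, s):
    """t[x := s], by recursion on Term (Definition 1.5.3)."""
    if isinstance(t, str):
        return s if t == x else t
    f, args = t
    return (f, [subst(u, x, s) for u in args])

# v0 + S(v1) in the language of arithmetic:
t = ("+", ["v0", ("S", ["v1"])])
```

Substituting a variable that does not occur free leaves the term unchanged, which is exactly the content of Proposition 1.5.5.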

1.6 Substitutions in formulas


If we consider the formula ∃y (¬(y = x)), then the possible substitution of x from y would
generate the formula ∃y (¬(y = y)), which cannot be true in any “interpretation” of these
symbols i.e., when y ranges over some collection of objects and = is the equality of the objects
in this collection. Hence, we need to be careful with substitution on semantical, rather than
syntactical, grounds. Note also that x is free in A, and if it is substituted by y, then y is not
free in A (in this case we say that it is bound in A). This is often called a “capture”, and we
want to avoid it.

Definition 1.6.1. Let s ∈ Term, such that FV(s) = {y1 , . . . , ym }, and x ∈ Var. If 2 = {0, 1},
we define a function
Frees,x : Form → 2
that determines when “the variable x is substitutable i.e., it is free to be substituted, from
s in some formula”. Namely, if Frees,x (A) = 1, then x is substitutable from s in A, and if
Frees,x (A) = 0, then x is not substitutable from s in A. From now on, when we define a
function on Form that is based on a function on Term, as in the case of FVForm and FVTerm ,
we omit the subscripts and we understand from the context their domain of definition. The
function Frees,x is defined by

Frees,x(P) = 1, P ∈ Prime,
Frees,x(A □ B) = Frees,x(A) · Frees,x(B),

Frees,x(△y A) =
    0,             if x = y ∨ [x ≠ y & y ∈ {y1, . . . , ym}],
    1,             if x ≠ y & x ∉ FV(A) \ {y},
    Frees,x(A),    if x ≠ y & y ∉ {y1, . . . , ym} & x ∈ FV(A).

According to Definition 1.6.1, x is substitutable from s in a prime formula, since there are
no quantifiers in it that can generate a capture. It is substitutable in A □ B, if it is substitutable
both in A and B. In the case of an ∃-, or ∀-formula, if x is not free in A (which is equivalent
to x ≠ y & x ∉ FV(A) \ {y}), then we set Frees,x(△y A) ≡ 1, since no capture can be
generated.
If we now consider the formula ∃y (¬(y = x)), by Definition 1.6.1 we get

Freey,x(∃y (¬(y = x))) = 0.

If x, y, z are distinct variables, it is easy to see that

Freez,x(R(x)) = 1,
Freez,x(∀z R(x)) = 0,
Freef(x,z),x(∀y S(x, y)) = 1,
Freef(x,z),x(∃z ∀y (S(x, y) → R(x))) = 0.
Definition 1.6.2. If s ∈ Term and x ∈ Var are fixed, the function

Subs/x : Form → W(L),
A ↦ Subs/x(A) = A[x := s],

determines the word generated by substituting x from s in A, and it is defined as follows: if
Frees,x(A) = 0, then A[x := s] = A. If Frees,x(A) = 1, then

R[x := s] = R, R ∈ Rel(0),
R(t1, . . . , tn)[x := s] = R(t1[x := s], . . . , tn[x := s]), R ∈ Rel(n), n ∈ N+,
(A □ B)[x := s] = (A[x := s] □ B[x := s]),
(△y A)[x := s] = △y (A[x := s]).
Often, we write for simplicity A(s) instead of A[x := s].

Note that if Frees,x(A □ B) = 1, then Frees,x(A) = Frees,x(B) = 1.

Proposition 1.6.3. If x ∈ Var and s ∈ Term, then ∀A∈Form (A[x := s] ∈ Form).

Proof. Exercise.

Proposition 1.6.4. If x ∈ Var and s ∈ Term, then ∀A∈Form (x ∉ FV(A) ⇒ A[x := s] = A).

Proof. We use induction on Form. If A = R, for some R ∈ Rel(0), then x ∉ FV(R) ⇔ x ∉ ∅,
which is always the case. Since Frees,x(R) = 1, by definition of substitution we get R[x :=
s] = R. If A = R(t1, . . . , tn), for some R ∈ Rel(n), n ∈ N+, and t1, . . . , tn ∈ Term, then
x ∉ FV(R(t1, . . . , tn)) ⇔ x ∉ ⋃_{i=1}^{n} FV(ti), i.e., x ∉ FV(ti), for every i ∈ {1, . . . , n}.
By Proposition 1.5.5 we get ti[x := s] = ti, for every i ∈ {1, . . . , n}, hence, since
Frees,x(R(t1, . . . , tn)) = 1, we have that

R(t1, . . . , tn)[x := s] = R(t1[x := s], . . . , tn[x := s]) = R(t1, . . . , tn).

If our formula is of the form A □ B, then x ∉ FV(A □ B) ⇔ x ∉ FV(A) ∪ FV(B) ⇔ x ∉
FV(A) and x ∉ FV(B). If Frees,x(A □ B) = 0, then we get immediately what we want. If
Frees,x(A □ B) = 1, then by the inductive hypothesis on A, B we get A[x := s] = A and
B[x := s] = B, hence by Definition 1.6.2 we have that

(A □ B)[x := s] = (A[x := s] □ B[x := s]) = (A □ B).

If our formula is of the form △y A, then x ∉ FV(△y A) ⇔ x ∉ FV(A) \ {y} ⇔ x ∉ FV(A) or
x = y. If x = y, then Frees,x(△y A) = 0, hence (△y A)[x := s] = △y A. If x ∉ FV(A) \ {y} and
x ≠ y, then x ∉ FV(A), and by inductive hypothesis on A we get

(△y A)[x := s] = △y (A[x := s]) = △y A.

If x ∈ FV(A), and x ≠ y & y ∉ {y1, . . . , ym}, the required implication follows trivially.

If ~x = (x1 , . . . , xn ) is a given n-tuple of distinct variables in Var and ~s = (s1 , . . . , sn ) is a


given n-tuple of terms in Term, for some n ∈ N+ , we can define similarly for every formula A
the formula A[~x := ~s] generated by the substitution of xi from si in A, for every i ∈ {1, . . . , n}.
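The capture test Frees,x and the free-variable function on formulas can be sketched with an illustrative formula encoding (an atom is ("atom", R, [t1, ...]), a connective node is (□, A, B), and a quantifier node is ("∀"/"∃", y, A); terms as in the earlier sketches). The clauses of Definition 1.6.1 are read top-down, the first matching case winning.

```python
VARS = {"x", "y", "z"}
CONNECTIVES = {"→", "∧", "∨"}

def fv_term(t):
    """Free variables of a term (variables are the strings in VARS)."""
    if isinstance(t, str):
        return {t} if t in VARS else set()
    _f, args = t
    return set().union(set(), *(fv_term(u) for u in args))

def fv(A):
    """FV(A) for formulas, following Definition 1.5.1."""
    if A[0] == "atom":
        return set().union(set(), *(fv_term(t) for t in A[2]))
    if A[0] in CONNECTIVES:
        return fv(A[1]) | fv(A[2])
    _, y, B = A
    return fv(B) - {y}

def free_for(s, x, A):
    """Frees,x(A): may x be substituted from s in A without capture?"""
    if A[0] == "atom":
        return True                      # no quantifier, no capture
    if A[0] in CONNECTIVES:
        return free_for(s, x, A[1]) and free_for(s, x, A[2])
    _, y, B = A
    if x == y or y in fv_term(s):        # x bound, or y would be captured
        return False
    if x not in fv(B):                   # nothing to substitute
        return True
    return free_for(s, x, B)

# ∃y ¬(y = x), with ¬C encoded as C → ⊥:
F = ("∃", "y", ("→", ("atom", "=", ["y", "x"]), ("atom", "⊥", [])))
```

On F this reproduces Freey,x(∃y (¬(y = x))) = 0, and Freez,x(∀z R(x)) = 0 from the examples above.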

1.7 The Brouwer-Heyting-Kolmogorov-interpretation


The next thing to answer is “what does it mean to prove some A ∈ Form?”. A first informal
answer was given by intuitionists like Brouwer and Heyting, and, independently, by
Kolmogorov. The combination of the proof-interpretation of formulas given by Brouwer,
Heyting, and Kolmogorov is called the Brouwer-Heyting-Kolmogorov-interpretation, or the
BHK-interpretation. Notice that this interpretation presupposes an informal, or primitive,
or unexplained, notion of proof. Moreover, the interpretation of a proof of a prime formula,
other than ⊥, is not addressed in the BHK-interpretation.

Definition 1.7.1 (BHK-interpretation). Let A, B ∈ Form, such that it is understood what it
means “q is a proof (or witness, or evidence) of A” and “r is a proof of B”.
(∧) A proof of A ∧ B is a pair (p0, p1) such that p0 is a proof of A and p1 is a proof of B.
(→) A proof of A → B is a rule r that associates to any proof p of A a proof r(p) of B.

(∨) A proof of A ∨ B is a pair (i, pi ), where if i = 0, then p0 is a proof of A, and if i = 1, then p1 is a proof of B.
(⊥) There is no proof of ⊥.
For the next two rules let A(x) be a formula i.e., FV(A) ⊆ {x}, such that it is understood
what it means “q is a proof of A(x)”.
(∀) A proof of ∀x A(x) is a rule R that associates to any given x a proof Rx of A(x).
(∃) A proof of ∃x A(x) is a pair (x, q), where q is a proof of A(x).
We write p : A to denote that p is a proof of A.

Usually, the BHK-interpretation of a quantified formula requires that x ∈ X, for some given set X. The extension of the BHK-interpretation to formulas ∀x A(x) and ∃x A(x), where FV(A) is larger than some singleton {x}, is obvious. The notions of rule in the clauses (→) and (∀) are unclear, and are taken as primitive. As we have already said, the nature of a proof, or a witness, is also left unexplained. Despite these problems, the BHK-interpretation captures essential elements of the mathematical process of proof. In particular, it captures, informally, the notion of a constructive proof, as the clauses for (∨) and (∃) indicate. A formal version of the BHK-interpretation of Form is a so-called realisability interpretation (see [20]).

Example 1.7.2. Consider the formula D = (A → B → C) → (A → B) → A → C, which, according to our notational convention, is the formula

(A → (B → C)) → ((A → B) → (A → C)).

According to BHK, a proof

p : (A → B → C) → ((A → B) → (A → C))

is a rule that sends a supposed proof q : A → (B → C) to a proof

p(q) : (A → B) → (A → C),

which, in turn, is a rule that sends a proof r : A → B to a proof [p(q)](r) : A → C. This proof is a rule that sends a proof s : A to a proof [[p(q)](r)](s) : C. Hence we need to define the latter proof through our supposed proofs. By definition q(s) : B → C, and hence [q(s)](r(s)) : C. Thus we define

[[p(q)](r)](s) = [q(s)](r(s)).
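Under the Curry-Howard reading, the rule just defined is an executable program: a proof of an implication is a (curried) function. A minimal sketch in Python (the name proof_D is ours, not from the notes):

```python
# BHK-proof of (A -> (B -> C)) -> ((A -> B) -> (A -> C)), read as a
# curried higher-order function: given q : A -> (B -> C) and
# r : A -> B, it sends s : A to [q(s)](r(s)) : C.
def proof_D(q):
    def p_q(r):
        def p_q_r(s):
            return q(s)(r(s))
        return p_q_r
    return p_q
```

With A, B, C read as concrete types, proof_D behaves exactly like the combinator S of combinatory logic.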

Example 1.7.3. Let the formula

E = ∀x (A → B) → A → ∀x B, if x ∉ FV(A).

According to BHK, a proof p : ∀x (A → B) → A → ∀x B is a rule that sends a supposed proof q : ∀x (A → B) to a proof

p(q) : A → ∀x B,

which is a rule that sends a proof r : A to some proof

[p(q)](r) : ∀x B.

The proof q : ∀x (A → B) is understood as a family of proofs

q = (qx : A → B)x ,

and, similarly, the required proof [p(q)](r) : ∀x B is a family of proofs

[p(q)](r) = (([p(q)](r))x : B)x .

We define this family of proofs by the rule

([p(q)](r))x = qx (r).
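Reading a proof of a universal formula as a function that maps each x to a proof of the instance, this example too can be sketched as a small program (a hypothetical illustration; the name proof_E is ours):

```python
# BHK-proof of ∀x (A -> B) -> A -> ∀x B (x not free in A): a proof
# of ∀x C is modelled as a function x |-> (proof of C), so the
# defining rule ([p(q)](r))_x = q_x(r) becomes:
def proof_E(q):                      # q : x |-> (proof of A -> B)
    return lambda r: lambda x: q(x)(r)
```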

Example 1.7.4. A BHK-proof p : A → A is a rule that associates to every q : A a proof of A. Clearly the identity rule p(q) = q is such a proof.

Example 1.7.5. A BHK-proof p∗ : A → ¬¬A is a rule that associates to every q : A a proof p∗ (q) : (A → ⊥) → ⊥. If r : A → ⊥, we need to get a proof [p∗ (q)](r) : ⊥. For that we define [p∗ (q)](r) = r(q).

It is easy to see that there is no straightforward method to find a BHK-proof of the converse implication ¬¬A → A, which is an instance of the so-called double negation elimination principle (DNE). As we will see later in the course, this principle holds only classically. There are, however, some instances of DNE that can be shown constructively.

Example 1.7.6. A BHK-proof p : ¬¬¬A → ¬A is a rule p such that for every q : (¬¬A) → ⊥,
we have that p(q) : A → ⊥. Let p∗ : A → ¬¬A from Example 1.7.5. If r : A, we define
[p(q)](r) = q(p∗ (r)).
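Since ¬A abbreviates A → ⊥, the proofs of Examples 1.7.5 and 1.7.6 are again just plumbing with functions. A Python sketch (all names are ours; to test the plumbing we let a stand-in function play the role of a proof of A → ⊥):

```python
# Example 1.7.5: a proof p* of A -> ¬¬A sends q : A and r : A -> ⊥
# to r(q) : ⊥.
def p_star(q):
    return lambda r: r(q)

# Example 1.7.6: a proof of ¬¬¬A -> ¬A sends q : ¬¬A -> ⊥ and
# r : A to q(p*(r)) : ⊥, since p*(r) : ¬¬A.
def triple_neg(q):
    return lambda r: q(p_star(r))
```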

1.8 Gentzen’s derivations, a first presentation


Another informal proof of D = (A → B → C) → (A → B) → A → C goes as follows: Assume A → B → C. To show (A → B) → A → C, we assume A → B. To show A → C we assume A. We show C by using the third assumption twice: we have B → C by the first assumption, and B by the second assumption. From B → C and B we obtain C. Then we obtain A → C by canceling the assumption on A, and (A → B) → A → C by canceling the second assumption; and the result follows by canceling the first assumption.
Another informal proof of E = ∀x (A → B) → A → ∀x B, where x ∈ / FV(A), goes as follows:
Assume ∀x (A → B). To show A → ∀x B we assume A. To show ∀x B let x be arbitrary;
note that we have not made any assumptions on x. To show B we have A → B by the first
assumption, and hence also B by the second assumption. Hence ∀x B. Hence A → ∀x B,
canceling the second assumption. Hence E, canceling the first assumption.
A characteristic feature of this second kind of informal proofs is that assumptions are
introduced and eliminated again. At any point in time during the proof the free or “open”
assumptions are known, but as the proof progresses, free assumptions may become canceled
or “closed” through what we later call the “introduction rule” for →.
We reserve the word proof for the informal level; a formal representation of a proof will be
called a derivation. An intuitive way to communicate derivations is to view them as labeled
trees each node of which denotes a rule application. The labels of the inner nodes are the

formulas derived as conclusions at those points, and the labels of the leaves are formulas
or terms. The labels of the nodes immediately above a node k are the premises of the rule
application. At the root of the tree we have the conclusion (or end formula) of the whole
derivation. In natural deduction systems one works with assumptions at leaves of the tree;
they can be either open or closed (canceled). Any of these assumptions carries a marker. As markers we use assumption variables, denoted u, v, w, u0 , u1 , . . . . The variables in Var will
now often be called object variables, to distinguish them from assumption variables. If at a
node below an assumption the dependency on this assumption is removed (it becomes closed),
we record this by writing down the assumption variable. Since the same assumption may be
used more than once (this was the case in the first example above), the assumption marked
with u (written u : A) may appear many times. Of course we insist that distinct assumption
formulas must have distinct markers.
An inner node of the tree is understood as the result of passing from premises to the
conclusion of a given rule. The label of the node then contains, in addition to the conclusion,
also the name of the rule. In some cases the rule binds or closes or cancels an assumption
variable u (and hence removes the dependency of all assumptions u : A thus marked). An
application of the ∀-introduction rule similarly binds an object variable x (and hence removes
the dependency on x). In both cases the bound assumption or object variable is added to the
label of the node.
First we have an assumption rule, allowing to write down an arbitrary formula A together
with a marker u:
u: A assumption.
The other rules of natural deduction split into introduction rules (I-rules for short) and
elimination rules (E-rules) for the logical connectives. E.g., for implication → there is an
introduction rule →+ and an elimination rule →− also called modus ponens. The left premise
A → B in →− is called the major (or main) premise, and the right premise A the minor
(or side) premise. Note that with an application of the →+ -rule all assumptions above it
marked with u : A are canceled (which is denoted by putting square brackets around these
assumptions), and the u then gets written alongside. There may of course be other uncanceled
assumptions v : A of the same formula A, which may get canceled at a later stage. We use
symbols like M, N, K, for derivations.

Definition 1.8.1 (A rather simplified presentation of Gentzen's derivations). The tree

  a : A
  ─────── 1A
  A

is a derivation tree of a formula A from the assumption A. We use the assumption variable a : A only for this tree. The introduction and elimination rules for implication are:

  [u : A]
  | M
  B
  ────────── →+ u
  A → B

  | M          | N
  A → B        A
  ──────────────── →−
  B

For the universal quantifier ∀ there is an introduction rule ∀+ (again marked, but now with the bound variable x) and an elimination rule ∀− whose right premise is the term r to be substituted. The rule ∀+ x with conclusion ∀x A is subject to the following (eigen-)variable condition to avoid capture: the derivation M of the premise A must not contain any open assumption having x as a free variable.

  | M
  A
  ─────── ∀+ x
  ∀x A

  | M
  ∀x A      r ∈ Term
  ────────────────── ∀−
  A(r)

For disjunction the introduction and elimination rules are

  | M                 | N
  A                   B
  ───────── ∨+0       ───────── ∨+1
  A ∨ B               A ∨ B

  | M         [u : A]   [v : B]
              | N       | K
  A ∨ B       C         C
  ──────────────────────────── ∨− u, v
  C
For conjunction we have the rules

  | M       | N
  A         B
  ───────────── ∧+
  A ∧ B

  | M         [u : A] [v : B]
              | N
  A ∧ B       C
  ─────────────────────────── ∧− u, v
  C
and for the existential quantifier we have the rules

  r ∈ Term      | M
                A(r)
  ──────────────────── ∃+
  ∃x A

  | M        [u : A]
             | N
  ∃x A       B
  ──────────────────── ∃− x, u (var. cond.)
  B
Similar to ∀+ x the rule ∃− x, u is subject to an (eigen-)variable condition: in the derivation N
the variable x (i) should not occur free in the formula of any open assumption other than u : A,
and (ii) should not occur free in B. Again, in each of the elimination rules ∨− , ∧− and ∃−
the left premise is called major (or main) premise, and the right premise is called the minor
(or side) premise.

Notice that, as in the case of the BHK-interpretation, there is no rule for the derivation
of a prime formula P , other than the trivial unit-rule 1P . It is a nice exercise to check the
compatibility of Gentzen’s rules to the corresponding BHK-proofs. The rule ∨− u, v

  | M         [u : A]   [v : B]
              | N       | K
  A ∨ B       C         C
  ──────────────────────────── ∨− u, v
  C
is understood as follows: given a derivation tree for A ∨ B and derivation trees for C with
assumption variables u : A and v : B, respectively, a derivation tree for C is formed, such
that u : A and v : B are cancelled. Similarly we understand the rules →+ u, ∧− u, v and
∃− x, u. The above definition is a quite complex inductive definition. In order to rewrite it,
we introduce the following notions. Note that the rules of Definition 1.8.1 are used in the
presence of free assumptions in the same way. E.g., next follows a derivation tree for C with
assumption formula G:

  w : G       [u : A]   [v : B]
  | M         | N       | K
  A ∨ B       C         C
  ──────────────────────────── ∨− u, v
  C

We now give derivations of the two example formulas D, E, treated informally above.
Since in many cases the rule used is determined by the conclusion, we suppress in such cases
the name of the rule. Moreover, often we write only a : A, instead of the whole tree that
corresponds to 1A . First we give the derivation of D:

  [u : A → B → C]   [w : A]        [v : A → B]   [w : A]
  ─────────────────────── →−       ──────────────────── →−
         B → C                            B
  ──────────────────────────────────────────── →−
                       C
  ───────── →+ w
  A → C
  ───────────────────── →+ v
  (A → B) → A → C
  ─────────────────────────────────── →+ u
  (A → B → C) → (A → B) → A → C

Next we give the derivation of E:

  [u : ∀x (A → B)]   x ∈ Var
  ────────────────────────── ∀−
         A → B                    [v : A]
  ─────────────────────────────────────── →−
                  B
  ──────── ∀+ x
  ∀x B
  ───────────── →+ v
  A → ∀x B
  ──────────────────────────── →+ u
  ∀x (A → B) → A → ∀x B

Note that the variable condition is satisfied: In the derivation of B the still open assumption
formulas are A and ∀x (A → B); by hypothesis x is not free in A, and by Definition 1.5.1 it is
also not free in ∀x (A → B).

1.9 Gentzen’s derivations, a more formal presentation


Next we present a more formal version of the previous, non-trivial, inductive definition in M.

Definition 1.9.1. Let Avar be a new infinite set of “assumption variables”, and let

Aform = Avar × Form,

where, for every (u, A) ∈ Aform we write u : A. If V is a non-empty finite subset of Aform i.e.,
V = {u1 : A1 , . . . , un : An }, we define
 
Form(V ) = {A ∈ Form | ∃u∈Avar (u : A ∈ V )} = {A1 , . . . , An }.

Definition 1.9.2 (A formal presentation of Gentzen's derivations). We define inductively the set DV (A) of derivations of a formula A with assumption variables in V , where V is a finite subset of Aform. If V = ∅, we write D(A). The following introduction-rules are considered:
(1A ) The tree 1A is an element of D{a : A} (A).

(→+ u) If M ∈ D{u : A} (B), then

  | M
  B
  ────────── →+ u
  A → B

is in D(A → B).

(→− ) If M ∈ DV (A → B) and N ∈ DW (A), then

  | M          | N
  A → B        A
  ──────────────── →−
  B

is in DV ∪W (B).

(∧+ ) If M ∈ DV (A) and N ∈ DW (B), then

  | M       | N
  A         B
  ───────────── ∧+
  A ∧ B

is in DV ∪W (A ∧ B).

(∧− u, w) If M ∈ DV (A ∧ B), N ∈ D{u : A, w : B}∪W (C) and {u : A, w : B} ∩ W = ∅, then

  | M         [u : A] [w : B]
              | N
  A ∧ B       C
  ─────────────────────────── ∧− u, w
  C

is in DV ∪W (C).

(∨+0 ) If M ∈ DV (A), then

  | M
  A
  ───────── ∨+0
  A ∨ B

is in DV (A ∨ B).

(∨+1 ) If N ∈ DW (B), then

  | N
  B
  ───────── ∨+1
  A ∨ B

is in DW (A ∨ B).

(∨− u, w) If M ∈ DV (A ∨ B), N ∈ D{u : A}∪U (C), K ∈ D{w : B}∪W (C) and {u : A} ∩ U = {w : B} ∩ W = ∅, then

  | M         [u : A]   [w : B]
              | N       | K
  A ∨ B       C         C
  ──────────────────────────── ∨− u, w
  C

is in DV ∪U ∪W (C).

(∀+ ) If M ∈ DV (A), x ∈ Var and x ∉ FV(B) for every B ∈ Form(V ), then

  | M
  A
  ─────── ∀+ x
  ∀x A

is in DV (∀x A).

(∀− ) If M ∈ DV (∀x A), t ∈ Term and Freet,x (A) = 1, then

  | M
  ∀x A      t ∈ Term
  ────────────────── ∀−
  A[x := t]

is in DV (A[x := t]).

(∃+ ) If t ∈ Term, x ∈ Var, Freet,x (A) = 1 and M ∈ DV (A[x := t]), then

  t ∈ Term      | M
                A[x := t]
  ─────────────────────── ∃+
  ∃x A

is in DV (∃x A).

(∃− x, u) If M ∈ DV (∃x A), N ∈ D{u : A}∪W (B), W ∩ {u : A} = ∅, x ∉ FV(B), and x ∉ FV(C) for every C ∈ Form(W ), then

  | M        [u : A]
             | N
  ∃x A       B
  ──────────────── ∃− x, u
  B

is in DV ∪W (B).

For simplicity we do not include here the corresponding induction principle³. If V = {u : A}, W = {v : A}, M ∈ DV (B) and N ∈ DW (B), we say that M and N are equal, and we write M = N .

³ Although half of the above introduction rules are called elimination-rules, all of them are introduction rules to the definition of DV (A). They are called elimination-rules because the corresponding logical symbol in L is "eliminated" in the end-formula lying at the root of the rule.

It is easy to see that the above defined equality is an equivalence relation.

Definition 1.9.3. A formula A is called derivable in minimal logic, or simply derivable, written ` A, if there is a derivation of A (without free assumptions) using the natural deduction rules of Definition 1.9.2 i.e.,

` A :⇔ ∃M (M ∈ D(A)).
A formula A is called derivable from assumptions A1 , . . . , An , written

{A1 , . . . , An } ` A, or simpler A1 , . . . , An ` A,

if there is a derivation of A with free assumptions among A1 , . . . , An i.e.,

A1 , . . . , An ` A :⇔ ∃V ⊆fin Aform (Form(V ) ⊆ {A1 , . . . , An } & ∃M (M ∈ DV (A))).

If Γ ⊆ Form, a formula A is called derivable from Γ, written Γ ` A, if A is derivable from finitely many assumptions A1 , . . . , An ∈ Γ.

By definition we have that

A ` A :⇔ ∃V ⊆fin Aform (Form(V ) ⊆ {A} & ∃M (M ∈ DV (A))).

If V = {a : A}, then Form(V ) = {A} and 1A ∈ DV (A). Hence we always have that A ` A.
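The inductive clauses of Definition 1.9.2 can be animated by a small proof-checker. The following Python sketch covers only the →-fragment (the rules 1A, →+ u and →−); the representation and all names are ours, and assumption handling is simplified:

```python
from dataclasses import dataclass

def imp(a, b):                 # the formula a -> b
    return ("->", a, b)

@dataclass
class Assume:                  # the tree 1_A, with assumption u : A
    u: str
    formula: object

@dataclass
class ImpIntro:                # the rule ->+ u: cancels u
    u: str
    body: object

@dataclass
class ImpElim:                 # the rule ->- (modus ponens)
    major: object
    minor: object

def check(d):
    """Return (conclusion, open assumptions as a dict u -> formula)."""
    if isinstance(d, Assume):
        return d.formula, {d.u: d.formula}
    if isinstance(d, ImpIntro):
        concl, open_ = check(d.body)
        a = open_.pop(d.u)     # the cancelled assumption u : A
        return imp(a, concl), open_
    if isinstance(d, ImpElim):
        c_major, o_major = check(d.major)
        c_minor, o_minor = check(d.minor)
        assert c_major[0] == "->" and c_major[1] == c_minor
        return c_major[2], {**o_major, **o_minor}
    raise TypeError("not a derivation")

# The derivation of A -> A from Proposition 1.11.1(i) below:
d_id = ImpIntro("a", Assume("a", "A"))
```

Here check(d_id) returns (("->", "A", "A"), {}): the conclusion is A → A and there are no open assumptions, i.e. ` A → A.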

1.10 The preorder category of formulas


Definition 1.10.1 (Eilenberg, Mac Lane (1945)). A category C is a structure (C0 , C1 , dom, cod, ◦, 1),
where
(i) C0 is the collection of the objects of C,
(ii) C1 is the collection of the arrows of C,
(iii) For every f in C1 , dom(f ), the domain of f , and cod(f ), the codomain of f , are objects
in C0 , and we write f : A → B, where A = dom(f ) and B = cod(f ),
(iv) If f : A → B and g : B → C are arrows of C i.e., dom(g) = cod(f ), there is an arrow
g ◦ f : A → C, which is called the composite of f and g,
(v) For every A in C0 , there is an arrow 1A : A → A, the identity arrow of A,
such that the following conditions are satisfied:
(a) If f : A → B, then f ◦ 1A = f = 1B ◦ f .
(b) If f : A → B, g : B → C and h : C → D, then h ◦ (g ◦ f ) = (h ◦ g) ◦ f .
If A, B are in C0 , we denote by HomC (A, B), or simply by Hom(A, B), if C is clear from the
context, the collection of arrows f in C1 with dom(f ) = A and cod(f ) = B.

Example 1.10.2. The collection of sets and functions between them is the simplest example
of a category, which is denoted by Set.

The objects of a category are not necessarily sets, and hence the arrows are not necessarily
functions. This is exactly the case with the category of formulas Form.

Proposition 1.10.3. The category of formulas Form has objects the formulas in Form and
an arrow from A to B is a derivation of B from an assumption of the form u : A i.e.,

M : A → B :⇔ M ∈ D{u : A} (B).

Proof. (Exercise). One has to define the composition N ◦ M , where N : B → C and M : A → B. As expected, the unit arrow 1A is the trivial derivation of A from the assumption a : A. To prove that Form satisfies properties (a) and (b) of Definition 1.10.1, one needs to use the definition of equality of derivations given in Definition 1.9.2.

Although a formula A is not a set, we have already discussed approaches to logic, like Martin-Löf's type theory MLTT, where a set, or a type, is also a formula. In the BHK-interpretation one can also understand a formula A as the "set" of its proofs p : A. Moreover, an arrow in Form is not a function, but as it is an arrow in this category, it behaves as an "abstract" function with respect to the abstract operation of composition in Form. Recall that the arrow M : A → B in Form is captured in the BHK-interpretation by some rule that behaves like a function! So, in some sense, the category Form captures the "set-character" of a formula and the "function-character" of a proof p : A → B in the BHK-interpretation.
Notice that if L : A → B in Form i.e., L ∈ D{u : A} (B), and if M : A → B in Form i.e.,
M ∈ D{v : A} (B), are arrows in Form, then by the definition of equality of derivations in
Definition 1.9.2 we get L = M . Hence any two arrows A → B in Form are equal, or, in other
words, there is at most one arrow from A to B. An immediate consequence of this fact is that
proofs of equality of arrows in Form become trivial, as two arrows are always equal when
they have the same domain and codomain.

Definition 1.10.4. A category C is called a preorder, or a thin category, if there is at most one arrow f ∈ C1 between objects A and B in C0 .

Recall the following definition.

Definition 1.10.5. A preorder is a pair (I, ≤), where I is a set, and ≤ ⊆ I × I such that:
(i) ∀i∈I (i ≤ i).
(ii) ∀i,j,k∈I (i ≤ j & j ≤ k ⇒ i ≤ k).
If a preorder satisfies the condition
(iii) ∀i,j∈I (i ≤ j & j ≤ i ⇒ i = j),
it is called a partially ordered set, or a poset.

A preorder (I, ≤) becomes a category with objects the elements of I and a unique arrow from i to j if and only if i ≤ j. Conditions (i) and (ii) above ensure that I is a category. Moreover, any thin category generates a preorder.
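As a concrete illustration (our own example, not from the notes), divisibility on a finite set of natural numbers is a preorder, hence a thin category: there is an arrow i → j exactly when i divides j, identities come from reflexivity, and composition from transitivity. A quick Python check:

```python
# The preorder of divisibility on I = {1, ..., 12}, viewed as a thin
# category: hom(i, j) is inhabited iff i divides j.
I = range(1, 13)

def arrow(i, j):               # at most one arrow i -> j
    return j % i == 0

# identities: reflexivity gives the arrow 1_i : i -> i
assert all(arrow(i, i) for i in I)

# composition: transitivity composes i -> j and j -> k to i -> k
assert all(arrow(i, k)
           for i in I for j in I for k in I
           if arrow(i, j) and arrow(j, k))
```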

Definition 1.10.6. In the case of the thin category Form we define

A ≤ B :⇔ ∃M (M : A → B).

Clearly, a poset is also a thin category. Many categorical notions are generalisations of
order-theoretic concepts. In many cases, a category can be seen as a generalised poset, allowing
more arrows between its objects.

1.11 More examples of derivations in minimal logic


Proposition 1.11.1. The following formulas are derivable:
(i) A → A.
(ii) A → ¬¬A.
(iii) (Brouwer) ¬¬¬A → ¬A.

Proof. The derivation for (i) is

  [a : A]
  1A
  A
  ────────── →+ a
  A → A
The derivation for (ii) is

  [u : A → ⊥]   [a : A]
  ───────────────────── →−
  ⊥
  ────────────── →+ u
  (A → ⊥) → ⊥
  ───────────────────── →+ a
  A → (A → ⊥) → ⊥

The derivation for (iii) is an exercise.

Note that double negation elimination i.e., the formula DNEA = ¬¬A → A, is in general not derivable in minimal logic; but this we cannot show yet.

Proposition 1.11.2. The following are derivable.

(i) (A → B) → ¬B → ¬A,
(ii) ¬(A → B) → ¬B,
(iii) ¬¬(A → B) → ¬¬A → ¬¬B,
(iv) (⊥ → B) → (¬¬A → ¬¬B) → ¬¬(A → B),
(v) ¬¬∀x A → ∀x ¬¬A.

Proof. Exercise.

Proposition 1.11.3. We consider the following formulas:

ax∨+0 = A → A ∨ B,
ax∨+1 = B → A ∨ B,
ax∨− = A ∨ B → (A → C) → (B → C) → C,
ax∧+ = A → B → A ∧ B,
ax∧− = A ∧ B → (A → B → C) → C,
ax∃+ = A → ∃x A,
ax∃− = ∃x A → ∀x (A → B) → B (x ∉ FV(B)).

(i) The formulas ax∨+0 , ax∨+1 and ax∨− are equivalent, as axioms, to the rules ∨+0 , ∨+1 and ∨− u, v over minimal logic.
(ii) The formulas ax∧+ and ax∧− are equivalent, as axioms, to the rules ∧+ and ∧− over minimal logic.

(iii) The formulas ax∃+ and ax∃− are equivalent, as axioms, to the rules ∃+ and ∃− x, u over
minimal logic.

Proof. (i) First we show that from the axiom ax∨+0 , a derivation of which is considered the formula itself, and a supposed derivation M of A we get the following derivation of A ∨ B:

                  | M
  A → A ∨ B       A
  ─────────────────── →−
  A ∨ B

Similarly we show that from the formula ax∨+1 and a supposed derivation N of B we get a derivation of A ∨ B. Next we show that from the formula ax∨− and supposed derivations M of A ∨ B, N of C with assumption u : A, and K of C with assumption v : B, we get a derivation of C: applying →− to ax∨− and M gives (A → C) → (B → C) → C; applying →+ u to N gives A → C, and →− then gives (B → C) → C; applying →+ v to K gives B → C, and a final →− gives C.

Conversely, from the rule ∨+0 we get the following derivation of ax∨+0 :

  [a : A]
  1A
  A
  ───────── ∨+0
  A ∨ B
  ────────────── →+ a
  A → A ∨ B
Similarly, from the rule ∨+1 we get a derivation of ax∨+1 . From the elimination rule for disjunction we get the following derivation of ax∨− :

  [u : A ∨ B]      [v : A → C]  [v′ : A]      [w : B → C]  [w′ : B]
  1A∨B             ─────────────────── →−     ─────────────────── →−
  A ∨ B                    C                          C
  ─────────────────────────────────────────────────── ∨− v′, w′
  C
  ──────────────── →+ w
  (B → C) → C
  ───────────────────────── →+ v
  (A → C) → (B → C) → C
  ──────────────────────────────────── →+ u
  A ∨ B → (A → C) → (B → C) → C
(ii) and (iii) are exercises.

A similar result holds for axioms corresponding to the rules ∀+ x and ∀− . Note that in the
above derivation of C

  u : A ∨ B      [v : A → C]  [v′ : A]      [w : B → C]  [w′ : B]
  A ∨ B          C                          C
  ────────────────────────────────────────── ∨− v′, w′
  C
we used the rule ∨− v 0 , w0 in the “extended” way described in Definition 1.9.2, where the
assumption variables u : A ∨ B, v : A → C and w : B → C are still open. Of course, they will
be canceled later in the derivation of ax∨− . The notation B ← A means A → B.

Proposition 1.11.4. The following formulas are derivable:
(i) (A ∧ B → C) ↔ (A → B → C),
(ii) (A → B ∧ C) ↔ (A → B) ∧ (A → C),
(iii) (A ∨ B → C) ↔ (A → C) ∧ (B → C),
(iv) (A → B ∨ C) ← (A → B) ∨ (A → C),
(v) (∀x A → B) ← ∃x (A → B), if x ∉ FV(B),
(vi) (A → ∀x B) ↔ ∀x (A → B), if x ∉ FV(A),
(vii) (∃x A → B) ↔ ∀x (A → B), if x ∉ FV(B),
(viii) (A → ∃x B) ← ∃x (A → B), if x ∉ FV(A).

Proof. (i)-(vii) are exercises. A derivation of the final formula is

                        [w : A → B]   [v : A]
                        ───────────────────── →−
                         x ∈ Var      B
                        ──────────────── ∃+
  [u : ∃x (A → B)]           ∃x B
  ─────────────────────────────── ∃− x, w
  ∃x B
  ───────────── →+ v
  A → ∃x B
  ────────────────────────── →+ u
  ∃x (A → B) → A → ∃x B

The variable condition for ∃− is satisfied since the variable x (i) is not free in the formula A
of the open assumption v : A, and (ii) is not free in ∃x B. Of course, it is not a problem, if it
occurs free in A → B.

1.12 Extension, cut, and the deduction theorem


Next we prove the extension-rule and the cut-rule.

Proposition 1.12.1. If Γ, ∆ ⊆ Form and A, B ∈ Form, the following rules hold:

  Γ ` A      Γ ⊆ ∆
  ──────────────────── ext
  ∆ ` A

  Γ ` A      ∆ ∪ {A} ` B
  ──────────────────────── cut
  Γ ∪ ∆ ` B
Proof. The ext-rule is an immediate consequence of the definition of Γ ` A. Suppose
next that there are C1 , . . . , Cn ∈ Γ and D1 , . . . , Dm ∈ ∆ such that C1 , . . . , Cn ` A and
D1 , . . . , Dm , A ` B. The following is a derivation of B from assumptions in Γ ∪ ∆:

  u1 : D1 . . . um : Dm   [u : A]
  | M                                w1 : C1 . . . wn : Cn
  B                                  | N
  ─────────── →+ u                   A
  A → B
  ──────────────────────────────────── →−
  B

The following rules are special cases of the cut-rule for Γ = ∆ and Γ = ∆ = ∅, respectively.

  Γ ` A      Γ ∪ {A} ` B
  ────────────────────────
  Γ ` B

  ` A      A ` B
  ────────────────
  ` B
From now on, we also denote Γ ` A by the tree

Γ
|M
A
Proposition 1.12.2. Let Γ ⊆ Form and A, B ∈ Form.
(i) Γ ` (A → B) ⇒ (Γ ` A ⇒ Γ ` B).
(ii) (Γ ` A or Γ ` B) ⇒ Γ ` A ∨ B.
(iii) Γ ` (A ∧ B) ⇔ (Γ ` A and Γ ` B).
(iv) Γ ` ∀y A ⇒ Γ ` A(s), for every s ∈ Term such that Frees,y (A) = 1.
(v) If s ∈ Term such that Frees,y (A) = 1 and Γ ` A(s), then Γ ` ∃y A.

Proof. (i) If Γ ` (A → B) and Γ ` A, the following is a derivation of B from Γ:


  Γ             Γ
  | M           | N
  A → B         A
  ──────────────── →−
  B

(ii) If Γ ` A, the following is a derivation of A ∨ B from Γ:


  Γ
  | M
  A
  ───────── ∨+0
  A ∨ B
If Γ ` B, we proceed similarly.
(iii) If Γ ` A ∧ B, the following is a derivation of A from Γ:
  Γ
  | M          [a : A] [v : B]
               1A
  A ∧ B        A
  ───────────────────── ∧− a, v
  A
Notice that in the above derivation of A we used the ext-rule. In order to show Γ ` B, we
proceed similarly. If Γ ` A and Γ ` B, the following is a derivation of A ∧ B from Γ:
  Γ        Γ
  | M      | N
  A        B
  ──────────── ∧+
  A ∧ B

(iv) and (v) If Γ ` ∀y A, the left derivation is a derivation of A(s) from Γ, and if Γ ` A(s), the
right derivation is a derivation of ∃y A from Γ:

  Γ                            Γ
  | M                          | M
  ∀y A      s ∈ Term           s ∈ Term      A(s)
  ────────────────── ∀−        ──────────────── ∃+
  A(s)                         ∃y A

Proposition 1.12.3. Let Γ ⊆ Form and A, B ∈ Form.


(i) (Deduction theorem) Γ ∪ {A} ` B ⇔ Γ ` A → B.
(ii) If for every A1 , . . . , An , An+1 ∈ Form we define

  ⋀_{i=1}^{1} Ai = A1 ,

  ⋀_{i=1}^{n+1} Ai = (⋀_{i=1}^{n} Ai) ∧ An+1 ,

then

  ∀n∈N+ ∀A1 ,...,An ,A∈Form ({A1 , . . . , An } ` A ⇔ ` (⋀_{i=1}^{n} Ai) → A).

Proof. (i) If C1 , . . . , Cn ∈ Γ such that C1 , . . . , Cn , A ` B, then

  u1 : C1 . . . un : Cn   [u : A]
  | M
  B
  ─────────── →+ u
  A → B

is a derivation of A → B from Γ. Conversely, if C1 , . . . , Cn ∈ Γ such that C1 , . . . , Cn ` A → B, the following is a derivation of B from Γ ∪ {A}:

  u1 : C1 . . . un : Cn
  | M                      a : A  1A
  A → B                    A
  ───────────────────────────── →−
  B

(ii) We use induction on N+ . If n = 1, our goal-formula becomes

  ∀A,B∈Form ({A} ` B ⇔ ` A → B),

which follows from (i) for Γ = ∅. Our inductive hypothesis is

  ∀A1 ,...,An ,A∈Form ({A1 , . . . , An } ` A ⇔ ` (⋀_{i=1}^{n} Ai) → A),

and we show

  ∀A1 ,...,An ,An+1 ,A∈Form ({A1 , . . . , An , An+1 } ` A ⇔ ` (⋀_{i=1}^{n+1} Ai) → A).

If we fix A1 , . . . , An , An+1 , A, we have that

  {A1 , . . . , An , An+1 } ` A ⇔ {A1 , . . . , An } ∪ {An+1 } ` A
    ⇔ {A1 , . . . , An } ` An+1 → A               (by (i))
    ⇔ ` (⋀_{i=1}^{n} Ai) → (An+1 → A)             (∗)
    ⇔ ` ((⋀_{i=1}^{n} Ai) ∧ An+1 ) → A            (∗∗)
    = ` (⋀_{i=1}^{n+1} Ai) → A,

where (∗) follows by the inductive hypothesis on A1 , . . . , An and the formula An+1 → A, and
(∗∗) follows by the derivation

` (A → B → C) ↔ (A ∧ B → C)

in Proposition 1.11.4(i), and the corollary of Proposition 1.12.2(i)

` A ↔ B ⇒ (` A ⇔ ` B).
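Under the Curry-Howard reading, part (ii) is the familiar passage between a curried n-argument function (a proof of A1 → · · · → An → A) and a function of one left-nested tuple ((a1 , a2 ), . . . , an ), mirroring ⋀ Ai. A hypothetical Python sketch (the name uncurry_n is ours):

```python
# Turn a curried n-ary function f (a "proof" of A1 -> ... -> An -> A)
# into a function of one left-nested pair ((...(a1, a2)...), an)
# (a "proof" of (A1 ∧ ... ∧ An) -> A), following the nesting of the
# iterated conjunction in the definition above.
def uncurry_n(f, n):
    def g(p):
        args = []
        for _ in range(n - 1):
            p, last = p        # peel off the rightmost conjunct
            args.append(last)
        args.append(p)
        args.reverse()
        result = f
        for a in args:
            result = result(a)
        return result
    return g
```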

1.13 The category of formulas is cartesian closed


Definition 1.13.1. If C is a category, and f : A → B is an arrow in C, then f is called an iso, or an isomorphism, if there is an arrow g : B → A such that g ◦ f = 1A and f ◦ g = 1B . In this case we say that A and B are isomorphic, and we write A ≅ B.

Clearly, the relation of isomorphism in a category satisfies the properties of an equivalence relation, and it is a categorical alternative to the notion of equality. In the category of formulas Form, if A ≤ B and B ≤ A i.e., if there are derivations M : A → B and N : B → A, then A ≅ B, since by the thinness of Form we get N ◦ M = 1A and M ◦ N = 1B .

Remark 1.13.2. If A, B ∈ Form such that ` A and ` B, then A ≅ B.

Proof. Exercise.

Definition 1.13.3. Let > be a fixed formula such that ` >. We call the formula > verum
i.e., true.

Definition 1.13.4. If C is a category, an object T of C is called terminal, if there is a unique arrow f : A → T , for every object A of C. Dually, an object I of C is called initial, if there is a unique arrow g : I → A, for every object A of C.

Notice that the notion of an initial (terminal) object is dual to the notion of a terminal (initial) object i.e., we get the definition of a terminal object by reversing the arrow in the definition of an initial object, and vice versa. One could have named a terminal object a coinitial object and an initial object a coterminal one. This duality between concepts and "coconcepts" occurs very often in category theory.

In the category of sets Set any singleton, like 1 = {0} is terminal, and the empty set
∅ is initial. It is straightforward to show that terminal, or initial objects in a category C
are unique up to isomorphism i.e., any two terminal, or initial objects in a category C are
isomorphic (exercise).

Proposition 1.13.5. The formula > is a terminal object in Form.

Proof. Exercise.

If there is a formula I ∈ Form such that I is an initial element in Form, then this is expected to be the formula ⊥ (can you find a reason for that?). But such a thing cannot be shown now; it has to be "postulated" (see derivations in intuitionistic logic).

Definition 1.13.6. Let C be a category and A, B objects of C. A product of A and B is an object A × B of C together with arrows prA : A × B → A and prB : A × B → B, such that the universal property of the product is satisfied i.e., if C is an object in C and fA : C → A and fB : C → B, there is a unique arrow h = ⟨fA , fB ⟩ : C → A × B, such that the following inner diagrams commute i.e., prA ◦ h = fA and prB ◦ h = fB :

              C
       fA ↙   ↓ h   ↘ fB
  A ←─prA── A × B ──prB─→ B.

A category C has products, if for every pair of objects A, B of C, there is a product A × B in C (for simplicity we avoid mentioning the corresponding projection arrows).
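In Set, discussed next, this universal property is realized by concrete functions; a minimal Python sketch of the projections and of the mediating arrow ⟨fA , fB ⟩ (all names are ours):

```python
# Product in Set: A x B is the set of pairs, with projections pr_A
# and pr_B, and pairing(f_A, f_B) the unique arrow C -> A x B with
# pr_A ∘ <f_A, f_B> = f_A and pr_B ∘ <f_A, f_B> = f_B.
def pr_A(p): return p[0]
def pr_B(p): return p[1]

def pairing(f_A, f_B):
    return lambda c: (f_A(c), f_B(c))
```

For instance, with C = int, fA = abs and fB = str, the two inner triangles commute by construction.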

In Set the product of sets A, B is their cartesian product together with the two projection maps. Next we show that the product A × B in C, if it exists, is unique up to isomorphism i.e., if there is some object D with arrows ϖA : D → A and ϖB : D → B such that the universal property of products is satisfied, then D ≅ A × B. In the universal property for D

              C
       fA ↙   ↓ h   ↘ fB
  A ←─ϖA──    D    ──ϖB─→ B

let C = D, fA = ϖA and fB = ϖB . Since the following inner diagrams also commute

              D
       ϖA ↙   ↓ 1D   ↘ ϖB
  A ←─ϖA──    D    ──ϖB─→ B,

we get h = 1D , as the arrow ⟨ϖA , ϖB ⟩ is unique. Since A × B and D both satisfy the universal property of the product, from the previous remark we get g ◦ h = 1A×B

            A × B
      prA ↙   │ h   ↘ prB
  A ←─ϖA──    D    ──ϖB─→ B
      prA ↖   │ g   ↗ prB
            A × B.

Similarly, from the commutative diagrams

              D
       ϖA ↙   │ g   ↘ ϖB
  A ←─prA── A × B ──prB─→ B
       ϖA ↖   │ h   ↗ ϖB
              D

we get that h ◦ g = 1D , hence A × B ≅ D. The following arrow will be used in Definition 1.13.11.

Definition 1.13.7. If a category C has products, and f : A → B and f ′ : A′ → B ′ are in C1 , then

  f × f ′ = ⟨f ◦ prA , f ′ ◦ prA′ ⟩ : A × A′ → B × B ′

            A × A′
      prA ↙   │ f × f ′   ↘ prA′
      A       │             A′
    f ↓       ▼             ↓ f ′
      B ←─prB── B × B ′ ──prB′─→ B ′

Proposition 1.13.8. If A, B ∈ Form, then A ∧ B is a product of A, B in Form. Consequently, Form has products.

Proof. Exercise. Notice that the commutativity of the corresponding diagrams is trivially satisfied, as Form is a thin category.

Next we define the dual notion to the product of objects in a category. Notice that the
arrows in the universal property of coproduct are reversed with respect to the arrows in the
universal property of the product.

Definition 1.13.9. Let C be a category and A, B objects of C. A coproduct of A and B is an object A + B of C together with arrows iA : A → A + B and iB : B → A + B, such that the universal property of the coproduct is satisfied i.e., if C is an object in C and fA : A → C and fB : B → C, there is a unique arrow h = [fA , fB ] : A + B → C, such that the following inner diagrams commute i.e., h ◦ iA = fA and h ◦ iB = fB :

              C
       fA ↗   ↑ h   ↖ fB
  A ──iA──→ A + B ←──iB── B.

The arrows iA , iB are called coprojections, or injections. A category C has coproducts, if for every pair of objects A, B of C, there is a coproduct A + B in C (for simplicity we avoid mentioning the corresponding coprojection arrows).
In Set the coproduct of sets A, B is their disjoint union

A + B = {(i, x) ∈ {0, 1} × (A ∪ B) | (i = 0 & x ∈ A) or (i = 1 & x ∈ B)}

together with the injections iA : A → A + B, where iA (a) = (0, a), for every a ∈ A, and
iB : B → A + B, where iB (b) = (1, b), for every b ∈ B. A coproduct A + B in C, if it exists, is
unique up to isomorphism (the proof is dual to the proof for the product).
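The Set description above translates directly into code; a small sketch (names ours) of the injections and of the copairing [fA , fB ]:

```python
# Coproduct in Set: tagged pairs, with injections i_A, i_B and
# copairing(f_A, f_B) the unique arrow A + B -> C with
# [f_A, f_B] ∘ i_A = f_A and [f_A, f_B] ∘ i_B = f_B.
def i_A(a): return (0, a)
def i_B(b): return (1, b)

def copairing(f_A, f_B):
    return lambda p: f_A(p[1]) if p[0] == 0 else f_B(p[1])
```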
Proposition 1.13.10. If A, B ∈ Form, then A ∨ B is a coproduct of A, B in Form. Conse-
quently, Form has coproducts.
Proof. Exercise.

Definition 1.13.11. If B, C are objects of a category C with products, an exponential of B and C is an object B → C in C together with an arrow evalB,C : (B → C) × B → C, such that for any object D in C and every arrow f : D × B → C there is a unique arrow f̂ : D → (B → C) such that evalB,C ◦ (f̂ × 1B ) = f :

                     evalB,C
  (B → C) × B ─────────────────→ C
        ↑                      ↗
  f̂ × 1B │                  f
        D × B

where the arrow f̂ × 1B is the one determined in Definition 1.13.7, formed from f̂ and 1B . The arrow f̂ is called the (exponential) transpose of f . A category has exponentials, if for every B, C in C there is an exponential B → C in C.
An exponential B → C of B and C is unique up to isomorphism (exercise). In Set an
exponential of the sets B and C is the set of all functions from B to C i.e.,

C^B = {f ∈ P(B × C) | f : B → C},

together with the function evalB,C : C B × B → C, defined by


evalB,C (f, b) = f (b); f ∈ C B , b ∈ B.
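In Set the exponential and its universal property amount to currying; below is a minimal Python sketch (the names `eval_BC` and `transpose` are ours) of the equation evalB,C ◦ (f̂ × 1B ) = f .

```python
# Exponentials in Set: eval applies a function to an argument, and the
# transpose f̂ of f : D × B → C is its curried form.

def eval_BC(g, b):
    """eval : C^B × B → C, application of a function to an argument."""
    return g(b)

def transpose(f):
    """The transpose f̂ : D → C^B of f : D × B → C (currying)."""
    return lambda d: (lambda b: f(d, b))

f = lambda d, b: d + 2 * b          # a sample arrow f : Z × Z → Z
f_hat = transpose(f)
# the exponential diagram commutes: eval ∘ (f̂ × 1_B) = f, pointwise
assert all(eval_BC(f_hat(d), b) == f(d, b)
           for d in range(5) for b in range(5))
```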
Proposition 1.13.12. If B, C ∈ Form, then B → C is the exponential of B and C in Form.
Consequently, Form has exponentials.
Proof. Exercise.

Definition 1.13.13. A category C is called cartesian closed, if it has a terminal object, products and exponentials.

Clearly, the category of sets Set is cartesian closed.
Corollary 1.13.14. The category Form is cartesian closed.
Proof. It follows from Propositions 1.13.5, 1.13.8, and 1.13.12.

1.14 Functors associated to the main logical symbols


The concept of functor is the natural notion of “map”, or “arrow”, between categories.
Definition 1.14.1. Let C and D be categories. A covariant functor, or simply a functor,
from C to D is a pair F = (F0 , F1 ), where:
(i) F0 maps an object A of C to an object F0 (A) of D,
(ii) F1 maps an arrow f : A → B of C to an arrow F1 (f ) : F0 (A) → F0 (B) of D, such that
(a) For every A in C0 we have that F1 (1A ) = 1F0 (A) i.e., F preserves identity arrows.

(b) If f : A → B and g : B → C, then F1 (g ◦ f ) = F1 (g) ◦ F1 (f ), where for simplicity we use the same symbol for the operation of composition in the categories C and D. In this case we write⁴ F : C → D. A functor C → C is called an endofunctor (on C). Two functors F, G : C → D are equal, if F0 = G0 and F1 = G1 .
⁴In the literature it is often written F (C) and F (f ), instead of F0 (C) and F1 (f ).
A contravariant functor from C to D is a pair F := (F0 , F1 ), where:
(i) F0 maps an object A of C to an object F0 (A) of D,
(ii′) F1 maps an arrow f : A → B of C to an arrow F1 (f ) : F0 (B) → F0 (A) of D, such that
(a) F1 (1A ) = 1F0 (A) , for every A in C0 .
(b′) If f : A → B and g : B → C, then F1 (g ◦ f ) = F1 (f ) ◦ F1 (g) i.e., the following diagram

F0 (C) −→ F0 (B) −→ F0 (A), with arrows F1 (g), F1 (f ) and composite F1 (g ◦ f ),
commutes. In this case we write⁵ F : C op → D, where C op is the opposite category of C i.e., it has the objects of C and an arrow f : A → B in C op is an arrow f : B → A in C. Two contravariant functors F, G : C op → D are equal, if F0 = G0 and F1 = G1 .
Example 1.14.2. If C is a category, the identity functor on C is the pair IdC = (IdC0 , IdC1 ) : C → C, where IdC0 (A) = A, for every A in C0 , and if f : A → B, then IdC1 (f ) = f .

Example 1.14.3. The pair (G0 , G1 ) : Setop → Set, where G0 (X) = F(X) = {φ : X → R}, and if f : X → Y , then G1 (f ) : F(Y ) → F(X) is defined by [G1 (f )](θ) = θ ◦ f , for every θ ∈ F(Y ), is a contravariant functor from Set to Set. If X is a set, then

[G1 (idX )](φ) = φ ◦ idX = φ,

and since φ ∈ F(X) is arbitrary, we conclude that G1 (idX ) = idF(X) = idG0 (X) . If f : X → Y and g : Y → Z, then G1 (f ) : F(Y ) → F(X), G1 (g) : F(Z) → F(Y ) and G1 (g ◦ f ) : F(Z) → F(X). Moreover, if η ∈ F(Z), we have that

[G1 (g ◦ f )](η) = η ◦ (g ◦ f )
= (η ◦ g) ◦ f
= [G1 (f )](η ◦ g)
= [G1 (f )]([G1 (g)](η))
= [G1 (f ) ◦ G1 (g)](η).
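The functor G of Example 1.14.3 can be sketched in Python on finite sets, representing an element of F(X) by a dictionary; that representation, and the signature of `G1` below, are our simplifications.

```python
# The contravariant functor of Example 1.14.3 on finite sets:
# G0 (X) is the set of maps X → R (here: dicts on a finite X),
# and G1 (f ) is precomposition with f.

def G1(f, X):
    """Precomposition: send θ : Y → R to θ ◦ f : X → R."""
    return lambda theta: {x: theta[f(x)] for x in X}

X, Y, Z = {0, 1}, {'a', 'b'}, {True, False}
f = lambda x: 'a' if x == 0 else 'b'      # f : X → Y
g = lambda y: (y == 'a')                  # g : Y → Z
eta = {True: 1.0, False: -1.0}            # some η ∈ F(Z)

# contravariance: G1 (g ◦ f ) = G1 (f ) ◦ G1 (g), checked on η
lhs = G1(lambda x: g(f(x)), X)(eta)
rhs = G1(f, X)(G1(g, Y)(eta))
assert lhs == rhs == {0: 1.0, 1: -1.0}
```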
Example 1.14.4. If C, D, E are categories, F : C → D, and G : D → E are functors, their
composition G ◦ F : C → E, where G ◦ F = (G0 ◦ F0 , G1 ◦ F1 ), is a functor.
Definition 1.14.5. The collection of all categories with arrows the functors between them is
a category, which is called the category of categories, and it is denoted by Cat. The unit arrow
1C is the identity functor IdC , while the composition of functors is defined in Example 1.14.4.
Example 1.14.6. An endofunctor F : Form → Form is a monotone function from (Form, ≤) to itself. The same is the case for any functor between preorders. Recall that if (I, ≤) and (J, ≼) are preorders (see Definition 1.10.5), a function f : I → J is monotone, if

∀i,i′ ∈I (i ≤ i′ ⇒ f (i) ≼ f (i′ )).

Conversely, if f is a monotone function from (Form, ≤) to itself, then f generates an endofunctor on Form, the 0-part of which is f .
⁵It is easy to show that a covariant functor F : C op → D is exactly a contravariant functor from C to D.

Definition 1.14.7. If C, D are categories, the product category C × D has objects pairs (c, d), where c ∈ C0 and d ∈ D0 . An arrow from (c, d) to (c′ , d′ ) is a pair (f, g), where f : c → c′ in C1 and g : d → d′ in D1 . If (f, g) : (c, d) → (c′ , d′ ) and (f ′ , g ′ ) : (c′ , d′ ) → (c′′ , d′′ ), their composition is defined by

(f ′ , g ′ ) ◦ (f, g) = (f ′ ◦ f, g ′ ◦ g).

Moreover, 1(c,d) = (1c , 1d ). The projection functor PrC : C × D → C is the pair (PrC0 , PrC1 ), where PrC0 (c, d) = c, for every object (c, d) of C × D, and PrC1 (f, g) = f , for every arrow (f, g) in C × D. The projection functor PrD : C × D → D is defined similarly.

It is immediate to show that C × D is a category and PrC and PrD are functors. Moreover,
the product category C × D is a product of C and D in Cat.

Definition 1.14.8. Let the following functors:
(i) ⋀ : Form × Form → Form, where ⋀0 (A, B) = A ∧ B, for every object (A, B) in Form × Form, and ⋀1 (M : A → A′ , N : B → B ′ ) : (A ∧ B) → (A′ ∧ B ′ ) is the following derivation of A′ ∧ B ′ from the assumption w : A ∧ B, given derivations M and N : from w we obtain A and B by (∧− ), the derivations M and N yield A′ and B ′ , and (∧+ ) concludes A′ ∧ B ′ .
(ii) ⋁ : Form × Form → Form, where ⋁0 (A, B) = A ∨ B, for every object (A, B) in Form × Form, and ⋁1 (M : A → A′ , N : B → B ′ ) : (A ∨ B) → (A′ ∨ B ′ ) is the following derivation of A′ ∨ B ′ from the assumption w : A ∨ B, given derivations M and N : we apply (∨− u, v) to w and to the two derivations of A′ ∨ B ′ from [u : A] and from [v : B], obtained from M and N by (∨+ ).
(iii) → : Formop × Form → Form, where (→)0 (A, B) = A → B, for every (A, B) in Formop × Form. The definition of (→)1 (M, N ) : (A, B) → (A′ , B ′ ) : (A → B) → (A′ → B ′ ), where M : A′ → A and N : B → B ′ , is an exercise.
(iv) ∀x : Form → Form, where (∀x )0 (A) = ∀x A, for every A ∈ Form, and (∀x )1 (M : A → B) : ∀x A → ∀x B is the following derivation of ∀x B from the assumption w : ∀x A, given derivation M : from w we obtain A(x) by (∀− ) with the variable x, the derivation M , through (→+ u) and (→− ), yields B, and (∀+ x) concludes ∀x B.
(v) ∃x : Form → Form, where (∃x )0 (A) = ∃x A, for every A ∈ Form, and (∃x )1 (M : A → B) : ∃x A → ∃x B is the following derivation of ∃x B from the assumption w : ∃x A, given derivation M : we apply (∃− x, u) to w and to the derivation of ∃x B from [u : A], obtained from M by (∃+ ).
Notice that in the definition of the derivation (∀x )1 (M ), if x ∉ FV(A), then A(x) = A, by Proposition 1.6.4, while if x ∈ FV(A), then A = A(x), trivially. The variable condition in the application of the rule (∀+ x) is satisfied, as in the above derivation of B the only open assumption is w : ∀x A and x ∉ FV(∀x A). In the definition of the derivation (∃x )1 (M ) the variable condition in the application of the rule (∃− x, u) is satisfied, as the variable x is not free in ∃x B, while it can be free in the only open (until then) assumption u : A. The proof that these pairs are functors is immediate, as Form is thin.
There are more functors related to the logical symbols of a first-order language, which are
variations of the functors given in Definition 1.14.8.
Definition 1.14.9. If we fix a formula B, let the functors ⋀B , ⋁B , →B : Form → Form, defined by the rules A ↦ B ∧ A, A ↦ B ∨ A, and A ↦ B → A, respectively. The application of these functors on arrows is defined as in the case of the corresponding functors in Definition 1.14.8.
The functor →B is a special case of the following general functor, although the corre-
sponding proof is more involved.
Proposition 1.14.10. Let C be a cartesian closed category and B in C0 . The rule A ↦ (B → A), where B → A is a fixed exponential of B and A, determines an endofunctor on C.

Proof. Exercise. If f : C → D in C1 , you need to define an arrow (B → C) → (B → D) with the use of f and the universal properties of the exponentials B → C and B → D.
Definition 1.14.11. Let the functors ⋀B , ⋁B : Form → Form, defined by the rules A ↦ A ∧ B and A ↦ A ∨ B, respectively.

Clearly, the two functors determined by ∧ in Definitions 1.14.9 and 1.14.11, and likewise the two functors determined by ∨, are very similar, respectively. This similarity is clarified with the use of the following very important notion of “map”, or arrow, between functors from a category C to a category D.

1.15 Natural transformations


Definition 1.15.1. Let C, D be categories and F = (F0 , F1 ), G = (G0 , G1 ) functors from C to D. A natural transformation from F to G is a family of arrows in D of the form τC : F0 (C) → G0 (C), for C in C0 , such that for every f : C → C ′ in C1 the following diagram commutes i.e., τC ′ ◦ F1 (f ) = G1 (f ) ◦ τC :

F0 (C) −−F1 (f )−−→ F0 (C ′ )
  τC ↓                 ↓ τC ′
G0 (C) −−G1 (f )−−→ G0 (C ′ ).

We denote a natural transformation τ from F to G by τ : F ⇒ G.


Example 1.15.2. Let IdSet = (IdSet0 , IdSet1 ) be the identity functor on Set (Example 1.14.2), and let the functor H = (H0 , H1 ) : Set → Set, defined by

H0 (X) = F(F(X)) = {Φ : F(X) → R},

and if f : X → Y , then H1 (f ) : F(F(X)) → F(F(Y )) is defined by [H1 (f )](Φ) = Φ ◦ G1 (f ), for every Φ ∈ F(F(X)), where G1 is defined in Example 1.14.3. It is straightforward to show that H is a functor (actually, one can avoid this calculation and infer immediately that H : Set → Set through the definition of H0 and the fact that G : Setop → Set – why?). The Gelfand transformation is the following family of arrows in Set:

τ = (τX : X → F(F(X)))X ,
τX (x) = x̂,
x̂ : F(X) → R, x̂(φ) = φ(x); φ ∈ F(X).

The Gelfand transformation τ is a natural transformation from IdSet to H, as for every f : X → Y the naturality square commutes i.e., H1 (f ) ◦ τX = τY ◦ f , since, if θ ∈ F(Y ) and x ∈ X, we have that

[H1 (f )(τX (x))](θ) = [τX (x) ◦ G1 (f )](θ)
= [x̂ ◦ G1 (f )](θ)
= x̂([G1 (f )](θ))
= x̂(θ ◦ f )
= θ(f (x))
= [τY (f (x))](θ)
= [(τY ◦ f )(x)](θ).
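The naturality of the Gelfand transformation can be checked pointwise on finite sets; in the Python sketch below F(X) is again represented by dictionaries, and all helper names are ours.

```python
# Example 1.15.2 on finite sets: τ_X sends x to evaluation-at-x, and
# the naturality equation H1 (f ) ◦ τ_X = τ_Y ◦ f is checked on sample θ.

def tau(x):
    """τ_X (x) = x̂ : F(X) → R, evaluation at x."""
    return lambda phi: phi[x]

def H1(f, X):
    """H1 (f ) sends Φ : F(X) → R to Φ ◦ G1 (f ) : F(Y ) → R."""
    G1f = lambda theta: {x: theta[f(x)] for x in X}   # precomposition
    return lambda Phi: (lambda theta: Phi(G1f(theta)))

X, Y = {0, 1}, {'a', 'b'}
f = lambda x: 'a' if x == 0 else 'b'
thetas = [{'a': 1.0, 'b': 2.0}, {'a': -3.0, 'b': 0.5}]   # some θ ∈ F(Y)

# naturality: H1 (f )(τ_X (x)) and τ_Y (f (x)) agree on every θ
assert all(H1(f, X)(tau(x))(theta) == tau(f(x))(theta)
           for x in X for theta in thetas)
```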

Definition 1.15.3. If C, D are categories the functor category Fun(C, D) has objects the functors from C to D, and if F, G : C → D, an arrow from F to G is a natural transformation from F to G. The identity arrow 1F : F ⇒ F is the family of arrows (1F )C : F0 (C) → F0 (C), where (1F )C = 1F0 (C) , and the corresponding naturality squares trivially commute. If F, G, H : C → D, τ : F ⇒ G and σ : G ⇒ H, the composite arrow σ ◦ τ is defined by

(σ ◦ τ )C = σC ◦ τC : F0 (C) → H0 (C),

for every C in C0 , and, if f : C → C ′ in C1 , the outer naturality square for σ ◦ τ commutes, since

(σ ◦ τ )C ′ ◦ F1 (f ) = (σC ′ ◦ τC ′ ) ◦ F1 (f )
= σC ′ ◦ (τC ′ ◦ F1 (f ))
= σC ′ ◦ (G1 (f ) ◦ τC )
= (σC ′ ◦ G1 (f )) ◦ τC
= (H1 (f ) ◦ σC ) ◦ τC
= H1 (f ) ◦ (σC ◦ τC )
= H1 (f ) ◦ (σ ◦ τ )C .
Example 1.15.4. The two functors determined by ∧ in Definitions 1.14.9 and 1.14.11 are isomorphic in Fun(Form, Form), and also the two functors determined by ∨ are isomorphic in the category Fun(Form, Form) (exercise).

1.16 Galois connections


According to Proposition 1.11.4(i), the formula (A ∧ B → C) ↔ (A → (B → C)) is derivable
in minimal logic. This fact is rephrased as follows:

A ∧ B ≤ C ⇔ A ≤ (B → C).

With the help of the functors ⋀B , →B : Form → Form the last equivalence is rewritten as

(⋀B )0 (A) ≤ C ⇔ A ≤ (→B )0 (C),

which in turn is a special case of the equivalence

f : D × B → C ⇔ f̂ : D → (B → C)

that holds in a category C with exponentials. The last equivalence is understood as follows: if f : D × B → C, there is a unique arrow f̂ : D → (B → C), using the universal property of an exponential B → C of B and C in a category C. Conversely, if g : D → (B → C), there is a unique arrow f : D × B → C such that f̂ = g (exercise).

Definition 1.16.1. If (I, ≤) and (J, ≼) are preorders, a Galois connection, or a Galois correspondence, between them is a pair of monotone functions (f : I → J, g : J → I), such that

∀i∈I ∀j∈J (f (i) ≼ j ⇔ i ≤ g(j)).

In this case we say that g is right adjoint to f , or f is left adjoint to g, and we write f ⊣ g.
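A simple Galois connection, not taken from the notes, lives on the poset (Z, ≤): multiplication by a positive k is left adjoint to floor division by k. The sketch below checks the defining equivalence on a finite range.

```python
# A Galois connection on (Z, ≤): with k > 0,
#   k·i ≤ j  ⇔  i ≤ ⌊j/k⌋,
# so f (i) = k·i is left adjoint to g(j) = ⌊j/k⌋.

k = 3
f = lambda i: k * i          # left adjoint
g = lambda j: j // k         # right adjoint (floor division)

rng = range(-10, 11)
# the defining equivalence ∀i ∀j (f(i) ≤ j ⇔ i ≤ g(j))
assert all((f(i) <= j) == (i <= g(j)) for i in rng for j in rng)
# both maps are monotone
assert all(f(i) <= f(i + 1) and g(i) <= g(i + 1) for i in rng)
```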

Clearly, we have that

⋀B ⊣ →B .

Definition 1.16.2. Let (I, ≤) be a preorder. If i, i′ ∈ I, then i ∼= i′ :⇔ i ≤ i′ & i′ ≤ i, and since this is a special case of Definition 1.13.1, we say then that i and i′ are isomorphic. A closure operator on I is a monotone function Cls : I → I, such that i ≤ Cls(i) and Cls(Cls(i)) ≤ Cls(i), for every i ∈ I. An interior operator on I is a monotone function Int : I → I, such that Int(i) ≤ i and Int(i) ≤ Int(Int(i)), for every i ∈ I. An element i of I is called closed, with respect to the closure operator Cls, if i ∼= Cls(i), and it is called open, with respect to the interior operator Int, if i ∼= Int(i). We denote by Closed(I) the set of closed elements of I, and by Open(I) the set of open elements of I.

As Cls(i) ∼= Cls(Cls(i)) and Int(i) ∼= Int(Int(i)), we get Cls(i) ∈ Closed(I) and Int(i) ∈ Open(I), for every i ∈ I. Notice that if (I, ≤) and (J, ≼) are preorders, and if f : I → J is monotone, then f preserves isomorphism i.e., i ∼= i′ ⇒ f (i) ∼= f (i′ ), for every i, i′ ∈ I, where we use the same symbol for isomorphic elements of J.

Proposition 1.16.3. Let (I, ≤) and (J, ≼) be preorders and (f : I → J, g : J → I) a Galois connection between them.
(i) The composition g ◦ f is a closure operator on I.
(ii) The composition f ◦ g is an interior operator on J.
(iii) The rule i ↦ f (i) determines a function from the closed elements of I with respect to g ◦ f to the open elements of J with respect to f ◦ g.
(iv) The rule j ↦ g(j) determines a function from the open elements of J with respect to f ◦ g to the closed elements of I with respect to g ◦ f .

Proof. Exercise.
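Proposition 1.16.3 can be illustrated, under our own choice of example, by the Galois connection between direct image and preimage along a function u : X → Y on the powerset posets (P(X), ⊆) and (P(Y ), ⊆).

```python
# Direct image f(A) = u[A] is left adjoint to preimage g(B) = u⁻¹(B),
# so g∘f is a closure operator on P(X) and f∘g an interior operator on P(Y).
from itertools import combinations

def powerset(S):
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(S, r)]

X, Y = {0, 1, 2, 3}, {'a', 'b', 'c'}
u = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}.get             # u : X → Y (misses 'c')
f = lambda A: frozenset(u(x) for x in A)             # direct image
g = lambda B: frozenset(x for x in X if u(x) in B)   # preimage

# the Galois-connection equivalence: f(A) ⊆ B ⇔ A ⊆ g(B)
assert all((f(A) <= B) == (A <= g(B))
           for A in powerset(X) for B in powerset(Y))
# g∘f is a closure operator: A ⊆ g(f(A)) and (g∘f)((g∘f)(A)) ⊆ (g∘f)(A)
assert all(A <= g(f(A)) and g(f(g(f(A)))) <= g(f(A)) for A in powerset(X))
# f∘g is an interior operator on P(Y)
assert all(f(g(B)) <= B and f(g(B)) <= f(g(f(g(B)))) for B in powerset(Y))
```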

In a Galois connection the adjoints are unique up to isomorphism.


Corollary 1.16.4. Let (I, ≤) and (J, ≼) be preorders and (f : I → J, g : J → I) a Galois connection between them.
(i) If (f : I → J, g ′ : J → I) is a Galois connection, then g(j) ∼= g ′ (j), for every j ∈ J.
(ii) If (f ′ : I → J, g : J → I) is a Galois connection, then f (i) ∼= f ′ (i), for every i ∈ I.
Proof. Exercise.

The quantifiers can be described as adjoints. First we give a set-interpretation of this fact.
Definition 1.16.5. Let X, Y be sets. If u : X → Y , let u∗ : P(Y ) → P(X), defined by

u∗ (B) = u−1 (B) = {x ∈ X | u(x) ∈ B}; B ∈ P(Y ).

As (P(X), ⊆) and (P(Y ), ⊆) are posets, it is immediate to see that u∗ is a monotone function, hence, according to Example 1.14.6, a contravariant functor.
 
Proposition 1.16.6. Let X, Y be sets, and let the preorders (P(X), ⊆) and (P(X × Y ), ⊆).
(i) The functions ∃XY : P(X × Y ) → P(X) and ∀XY : P(X × Y ) → P(X), defined by

∃XY (C) = {x ∈ X | ∃y∈Y ((x, y) ∈ C)},
∀XY (C) = {x ∈ X | ∀y∈Y ((x, y) ∈ C)},

for every C ⊆ X × Y , respectively, are monotone.
(ii) If πX : X × Y → X is the projection function to X, then

∃XY ⊣ πX∗ & πX∗ ⊣ ∀XY ,

where πX∗ : P(X) → P(X × Y ) is defined according to Definition 1.16.5.

Proof. Exercise.
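On small finite sets the two adjunctions of Proposition 1.16.6 can be verified exhaustively; the enumeration helpers below are ours.

```python
# ∃_XY ⊣ π*_X ⊣ ∀_XY on finite sets, where π*_X (A) = A × Y is the
# preimage of A under the projection π_X : X × Y → X.
from itertools import product, combinations

X, Y = {0, 1}, {'a', 'b'}
XY = set(product(X, Y))

def subsets(S):
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(S, r)]

exists_XY = lambda C: frozenset(x for x in X if any((x, y) in C for y in Y))
forall_XY = lambda C: frozenset(x for x in X if all((x, y) in C for y in Y))
pi_star   = lambda A: frozenset((x, y) for x in A for y in Y)   # π⁻¹(A)

# ∃_XY ⊣ π*_X : ∃_XY(C) ⊆ A ⇔ C ⊆ π*_X(A)
assert all((exists_XY(C) <= A) == (C <= pi_star(A))
           for C in subsets(XY) for A in subsets(X))
# π*_X ⊣ ∀_XY : π*_X(A) ⊆ C ⇔ A ⊆ ∀_XY(C)
assert all((pi_star(A) <= C) == (A <= forall_XY(C))
           for C in subsets(XY) for A in subsets(X))
```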

1.17 The quantifiers as adjoints


In this section we translate Proposition 1.16.6 into minimal logic.
Definition 1.17.1. If C = (C0 , C1 , dom, cod, ◦, 1) is a category, a subcategory D of C is a
subcollection of objects in C and a subcollection of arrows in C, which are closed under the
operations dom, cod, ◦, and 1 of C. In this case we write D ≤ C. If A, B ∈ C0 , let

C1 (A, B) = {f ∈ C1 | dom(f ) = A & cod(f ) = B}.

If D is a subcategory of C, such that for every A, B ∈ D0 we have that D1 (A, B) = C1 (A, B), then D is called a full subcategory of C. A category C is called small, if the collections C0 and C1 are both sets. If one of them is a proper class i.e., a class that is not a set, then C is called large⁶. If for every A, B ∈ C0 the collection C1 (A, B) is a set, then C is called locally small.
⁶In Zermelo-Fraenkel set theory a class is either a set or a proper class. The collection of all sets, or the universe, V is a proper class. This can be shown via the so-called Russell's paradox: if V were a set, then we could define with the scheme of separation the set R = {x ∈ V | x ∉ x}, and then we reach the contradiction R ∈ R ⇔ R ∉ R.

Example 1.17.2. The category Setfin of all finite sets and functions between them is a full
subcategory of Set. The category Set is large, as the collection V of all sets is a proper class,
but it is locally small, since the collection of all functions between two sets is a set. The
category Form is small.
Example 1.17.3. If x1 , . . . , xn ∈ Var, the category Form(x1 , . . . , xn ) with objects the set
Form(x1 , . . . , xn ) of all formulas A, such that FV(A) ⊆ {x1 , . . . , xn }, together with the usual
derivations between them as arrows, is a full subcategory of Form.
Definition 1.17.4. If x, y ∈ Var, let the functors ∃(x, y), ∀(x, y) : Form(x, y) → Form(x), defined by the rules

(∃(x, y))0 (A) = ∃y A & (∀(x, y))0 (A) = ∀y A; A ∈ Form(x, y).

Let also W (x, y) : Form(x) → Form(x, y), where (W (x, y))0 (A) = A, for every A ∈ Form(x).
Next follows the immediate translation of Proposition 1.16.6 into minimal logic.
Theorem 1.17.5. The following adjunctions hold:
(i) ∃(x, y) ⊣ W (x, y).
(ii) W (x, y) ⊣ ∀(x, y).

Proof. Exercise.
Example 1.17.6. The category Formx of all formulas A, such that x ∉ FV(A), together with the usual derivations between them as arrows, is a full subcategory of Form.
Definition 1.17.7. The functors ∃x : Form → Form and ∀x : Form → Form, defined in Definition 1.14.8, can be written as functors of the form ∃x : Form → Formx and ∀x : Form → Formx , since x ∉ FV(∃x A) and x ∉ FV(∀x A). Let the functor Wx : Formx → Form, defined by (Wx )0 (A) = A, for every A ∈ Formx .
Theorem 1.17.8. The following adjunctions hold:
(i) ∃x ⊣ Wx .
(ii) Wx ⊣ ∀x .
Proof. Let A ∈ Form such that x ∉ FV(A), and C ∈ Form.
(i) We show that (∃x )0 (C) ≤ A ⇔ C ≤ (Wx )0 (A) i.e., there is an arrow M : ∃x C → A if and only if there is an arrow N : C → A. Suppose first that M : ∃x C → A. We find a derivation N of A from assumption C, as follows: from the assumption w : C we derive ∃x C by (∃+ ) with the term x, and applying (→+ v) to M and then (→− ) we obtain A. For the converse, we suppose that there is a derivation N of A with assumption C, and we find a derivation M of A with assumption ∃x C, by applying the rule (∃− x, u) to the assumption w : ∃x C and the derivation N of A from [u : C].
The variable condition in (∃− x, u) is satisfied: as A is in Formx , x ∉ FV(A), and the only open assumption in the above derivation N of A is u : C, while x can be free in C.
(ii) We show that (Wx )0 (A) ≤ C ⇔ A ≤ (∀x )0 (C) i.e., there is an arrow M : A → C if and only if there is an arrow N : A → ∀x C. Suppose first that M : A → C. We find a derivation N of ∀x C from assumption A, as follows: applying (→+ v) to M we obtain A → C, hence C by (→− ) from the assumption a : A, and then ∀x C by (∀+ x). The variable condition in (∀+ x) is satisfied: the only open assumption in the derivation of C is A, and by our hypothesis x ∉ FV(A). For the converse, let N be a derivation of ∀x C with assumption A. We find a derivation M of C with assumption A, as follows: applying (→+ u) to N we obtain A → ∀x C, hence ∀x C by (→− ) from the assumption a : A, and then C by (∀− ) with the term x.
Next we give one more variation of the previous theorem.

Definition 1.17.9. Let L be a first-order language and x a fixed variable. Moreover, we suppose that there is a formula B of L, such that x ∈ FV(B), and a fixed derivation K of B in minimal logic. E.g., if R ∈ Rel(1) , we can take B to be the formula R(x) → R(x). Let the functor WxB : Formx → Form, defined by (WxB )0 (A) = A ∧ B(x), for every A ∈ Formx .

Theorem 1.17.10. The following adjunctions hold:
(i) ∃x ⊣ WxB .
(ii) WxB ⊣ ∀x .

Proof. We proceed exactly as in the proof of Theorem 1.17.8.

In accordance with Corollary 1.16.4, if L is a first-order language as described in Theorem 1.17.10, we have that Wx (A) ∼= WxB (A), for every A ∈ Formx .
Chapter 2

Derivations in Intuitionistic and Classical Logic

In this chapter we study derivations in intuitionistic and classical logic. We also explore the relation between minimal, intuitionistic and classical logic.

2.1 Derivations in intuitionistic logic


The intuitionistic derivations are the minimal derivations extended with the rule “ex-falso-
quodlibet” (from falsity everything follows).

Definition 2.1.1. We define inductively the set DiV (A) of intuitionistic derivations of a
formula A with assumption variables in V , where V is a finite subset of Aform (see Defini-
tion 1.9.1). If V = ∅, we write Di (A). The introduction-rules for DiV (A) are the introduction-
rules for DV (A), given in Definition 1.9.2, together with the following rule:
(0A ) The tree with conclusion A obtained from the assumption o : ⊥ by the rule (0A ) is an element of Di{o : ⊥} (A), for every A ∈ Form \ {⊥}.
Unless otherwise stated, a derivation in DiV (A) is denoted by Mi . If V = {u : A}, W = {v : A}, Mi ∈ DiV (B) and Ni ∈ DiW (B), we define Mi = Ni . A formula A is derivable in intuitionistic logic, written ⊢i A, if there is an intuitionistic derivation of A without free assumptions i.e.,

⊢i A :⇔ ∃Mi (Mi ∈ Di (A)).

A formula A is intuitionistically derivable from assumptions A1 , . . . , An , written {A1 , . . . , An } ⊢i A, or A1 , . . . , An ⊢i A, if there is an intuitionistic derivation of A with free assumptions among A1 , . . . , An i.e.,

A1 , . . . , An ⊢i A :⇔ ∃V ⊆fin Aform (Form(V ) ⊆ {A1 , . . . , An } & ∃Mi (Mi ∈ DiV (A))).

If Γ ⊆ Form, a formula A is called intuitionistically derivable from Γ, written Γ ⊢i A, if A is intuitionistically derivable from finitely many assumptions A1 , . . . , An ∈ Γ. The category of intuitionistic formulas Formi has objects the formulas in Form and an arrow from A to B is an intuitionistic derivation of B from an assumption of the form u : A, and we write

Mi : A → B :⇔ Mi ∈ Di{u : A} (B).

The induced preorder and isomorphism of the thin category Formi are given by

A ≤i B :⇔ ∃Mi (Mi : A → B),
A ∼=i B :⇔ A ≤i B & B ≤i A.

If A = ⊥, then 1⊥ is already a minimal derivation of ⊥ from ⊥. This is why we exclude the derivation 0⊥ from the rule (0A ) in Definition 2.1.1. We use the following notation.

Definition 2.1.2. Let 0⊥ = 1⊥ .

If A ∈ Form \ {⊥}, then ⊢i ⊥ → A, as it is shown by the tree that derives A from the cancelled assumption [o : ⊥] by (0A ) and concludes ⊥ → A by (→+ o).
The addition of the rule (0A ) in the inductive definition of DiV (A) has an immediate
consequence to the category of intuitionistic formulas Formi and to the preorder ≤i .

Proposition 2.1.3. The category of intuitionistic formulas Formi has an initial element,
and the preorder ≤i has a minimal element.

Proof. If A ∈ Form \ {⊥}, then 0A ∈ Di{o : ⊥} (A) i.e., 0A : ⊥ → A. If A = ⊥, then 1⊥ : ⊥ → ⊥. The uniqueness of these arrows follows immediately from the thinness of Formi . By Definition 2.1.2, 0A : ⊥ → A i.e., ⊥ ≤i A, for every A ∈ Form i.e., ⊥ is ≤i -minimal.

Proposition 2.1.4. If A ∈ Form and V ⊆fin Aform, then DV (A) ⊆ DiV (A).

Proof. We use induction on DV (A). Let P (M ) :⇔ M ∈ DiV (A), a formula of our metatheory M on DV (A). The cumbersome to write induction principle IndDV (A) gives us that

∀M ∈DV (A) (M ∈ DiV (A)).

E.g., according to the clause of IndDV (A) with respect to the rule (→+ ), if M ∈ D{u : A} (B) such that M ∈ Di{u : A} (B), then the derivation of A → B obtained from M by (→+ u) is in Di (A → B), as we apply the rule (→+ u) of DiV (A) on M . For the rest of the rules of DV (A) we work similarly.

Corollary 2.1.5. Let A, B ∈ Form and Γ ⊆ Form.
(i) If Γ ⊢ A, then Γ ⊢i A.
(ii) The category Form is a subcategory of Formi (is it full?).
(iii) The identity rules A ↦ A and (M : A → B) ↦ (M : A → B) determine the functor Idmi : Form → Formi .
(iv) If A ≤ B, then A ≤i B.
(v) If A ∼= B, then A ∼=i B.

Proof. All cases follow immediately from Proposition 2.1.4.

The category Formi , as the category Form, has a terminal object ⊤, which is ≤-maximal, and hence by Corollary 2.1.5, it is also ≤i -maximal. As there are many ≤i -maximal elements, there are many ≤i -minimal elements, although isomorphic to each other. E.g., A ∧ ¬A ∼=i ⊥, for every A ∈ Form. The inequality ⊥ ≤i A ∧ ¬A follows from 0A∧¬A , while the inequality A ∧ ¬A ≤i ⊥ follows immediately (i.e., by Corollary 2.1.5(iv)) from the minimal inequality A ∧ ¬A ≤ ⊥, as there is a derivation of ⊥ from w : A ∧ ¬A in minimal logic: from w we obtain A and ¬A by (∧− ), and (→− ) yields ⊥.

One could have used a weaker notion of intuitionistic derivability, by not accepting all instances of the ex-falso-quodlibet. One could have defined ⊢i A :⇔ Efq ⊢ A, where Efq is the set of formulas defined next.

Definition 2.1.6. Let Efq be the following set of formulas:

Efq = {∀x1 ,...,xn (⊥ → R(x1 , . . . , xn )) | n ∈ N+ , R ∈ Rel(n) , x1 , . . . , xn ∈ Var}
∪ {⊥ → R | R ∈ Rel(0) \ {⊥}}.

Theorem 2.1.7. ∀A∈Form (Efq ⊢ (⊥ → A)).

Proof. If A = R(t1 , . . . , tn ), where n ∈ N+ , R ∈ Rel(n) and t1 , . . . , tn ∈ Term, a derivation of ⊥ → R(t1 , . . . , tn ) from Efq is obtained by applying (∀− ) n times to the assumption ∀x1 ,...,xn (⊥ → R(x1 , . . . , xn )) ∈ Efq with the terms t1 , . . . , tn . If we suppose that Efq ⊢ (⊥ → A) and Efq ⊢ (⊥ → B) i.e., that there are minimal derivations M, N of ⊥ → A and ⊥ → B from Efq, respectively, we obtain minimal derivations of ⊥ → (A → B), ⊥ → (A ∧ B), ⊥ → (A ∨ B), ⊥ → ∀x A and ⊥ → ∃x A from Efq, respectively: from the assumption [u : ⊥] and M , respectively N , we derive A, respectively B, by (→− ), and then the corresponding introduction rules (→+ ), (∧+ ), (∨+ 0 ), (∀+ x) and (∃+ ), followed by (→+ u), yield the required implications. In the above use of the (∀+ x)-rule the variable condition is satisfied, as x ∉ FV(⊥) = FV(S) = ∅, for every S ∈ Efq.

Proposition 2.1.8. Let the functor EFQ : Form → Form, where EFQ0 (A) = ⊥ → A, for every A ∈ Form (of which already established functor is EFQ a special case?).
(i) EFQ preserves products i.e., EFQ0 (A ∧ B) ∼= EFQ0 (A) ∧ EFQ0 (B), for every A, B ∈ Form.
(ii) EFQ0 (A ∨ B) ≥ EFQ0 (A) ∨ EFQ0 (B), for every A, B ∈ Form.
(iii) EFQ preserves the terminal object ⊤ i.e., EFQ0 (⊤) ∼= ⊤.
(iv) If EFQii : Formi → Formi is also defined by EFQii0 (A) = ⊥ → A, for every A ∈ Form, then EFQii does not preserve the initial element ⊥ i.e., it is not the case that EFQii0 (⊥) ∼=i ⊥.

Proof. Exercise.

Given that there is no minimal derivation of ⊥ → A, for every A ∈ Form, is the rule
EFQim : Formi → Form, defined as above, a functor (exercise)? Note that the extension-rule,
the cut-rule and the deduction theorem for minimal logic (see section 1.12) are easily extended
to intuitionistic logic. Next we describe the functors associated to negation.
Proposition 2.1.9. Let IdForm be the identity functor on Form (see Example 1.14.2) and let ¬ : Form → Form, defined by ¬0 (A) = ¬A, for every A ∈ Form. For every n ∈ N we define

¬^n = IdForm , if n = 0;   ¬^n = ¬, if n = 1;   ¬^n = ¬ ◦ ¬^{n−1} , if n > 1.

(i) ¬^{2n+1} is a contravariant endofunctor, for every n ∈ N.
(ii) ¬^{2n} is a covariant endofunctor, for every n ∈ N.
(iii) The endofunctor ¬^{2n+1} is isomorphic to ¬ in Fun(Form, Form), for every n ∈ N+ .

Proof. Exercise.
If ¬i^n : Formi → Formi is defined similarly, for every n ∈ N, then it also satisfies Proposition 2.1.9(i)-(iii). The corresponding negation endofunctor ¬c^n on the category of classical formulas, defined in the next section, satisfies extra properties. E.g., the endofunctor ¬c^{2n} is isomorphic to ¬c^{2n−2} , and hence it is isomorphic to IdForm , for every n ≥ 2.

2.2 Derivations in classical logic


The classical derivations are the minimal derivations extended with the rule of “double-
negation-elimination”.
Definition 2.2.1. We define inductively the set DcV (A) of classical derivations of a formula A with assumption variables in V , where V is a finite subset of Aform (see Definition 1.9.1). If V = ∅, we write Dc (A). The introduction-rules for DcV (A) are the introduction-rules for DV (A), given in Definition 1.9.2, together with the following rule¹:
(DNEA ) The tree with conclusion A obtained from the assumption u : ¬¬A by the rule DNEA is an element of Dc{u : ¬¬A} (A), for every A ∈ Form \ {⊥}.
Unless otherwise stated, a derivation in DcV (A) is denoted by Mc . If V = {u : A}, W = {v : A}, Mc ∈ DcV (B) and Nc ∈ DcW (B), we define Mc = Nc . A formula A is derivable in classical logic, written ⊢c A, if there is a classical derivation of A without free assumptions i.e.,

⊢c A :⇔ ∃Mc (Mc ∈ Dc (A)).

A formula A is classically derivable from assumptions A1 , . . . , An , written {A1 , . . . , An } ⊢c A, or A1 , . . . , An ⊢c A, if there is a classical derivation of A with free assumptions among A1 , . . . , An i.e.,

A1 , . . . , An ⊢c A :⇔ ∃V ⊆fin Aform (Form(V ) ⊆ {A1 , . . . , An } & ∃Mc (Mc ∈ DcV (A))).

If Γ ⊆ Form, a formula A is called classically derivable from Γ, written Γ ⊢c A, if A is classically derivable from finitely many assumptions A1 , . . . , An ∈ Γ. The category of classical formulas Formc has objects the formulas in Form and an arrow from A to B is a classical derivation of B from an assumption of the form u : A, and we write

Mc : A → B :⇔ Mc ∈ Dc{u : A} (B).

The induced preorder and isomorphism of the thin category Formc are given by

A ≤c B :⇔ ∃Mc (Mc : A → B),
A ∼=c B :⇔ A ≤c B & B ≤c A.
The derivation ¬¬⊥ → ⊥ is not considered in the rule (DNEA ), as a derivation of ⊥ from ¬¬⊥ already exists in minimal logic: from [o : ⊥] and (→+ o) we obtain ⊥ → ⊥, hence ⊥ by (→− ) from the assumption v : (⊥ → ⊥) → ⊥, and (→+ v) yields ((⊥ → ⊥) → ⊥) → ⊥.
¹For simplicity we use the same notation for the tree DNEA and for the formula DNEA = ¬¬A → A. It will always be clear from the context what the notation DNEA refers to.

Definition 2.2.2. We denote the above minimal derivation of ⊥ from ¬¬⊥ by DNE⊥ .

If A ∈ Form \ {⊥}, then `c ¬¬A → A, as it is shown by the following tree:

[v : ¬¬A]
DNEA
A
→+ v .
¬¬A → A

Proposition 2.2.3. If A ∈ Form and V ⊆fin Aform, there is a unique, canonical embedding (·)^c : DiV (A) → DcV (A).

Proof. We use recursion on DiV (A). As the introduction rules of DiV (A) differ from the introduction rules of DcV (A) only with respect to the rule (0A ) ∈ Di{o : ⊥} (A), it suffices to describe the rule (0A )c ∈ Dc{o : ⊥} (A). If A ≠ ⊥, let (0A )c be the following derivation, which is clearly in Dc{o : ⊥} (A): from o : ⊥ and the cancelled assumption [w : ¬A] we obtain ⊥, hence ¬¬A by (→+ w); from the cancelled assumption [u : ¬¬A] and DNEA we obtain ¬¬A → A by (→+ u), and (→− ) yields A. For all the rest of the introduction rules of DiV (A) the embedding Mi ↦ Mic is defined by the identity rule.

Corollary 2.2.4. Let A, B ∈ Form, Γ ⊆ Form, and V ⊆fin Aform.
(i) If Γ ⊢i A, then Γ ⊢c A.
(ii) The rules A ↦ A and (Mi : A → B) ↦ (Mic : A → B) determine the functor Idic : Formi → Formc .
(iii) DV (A) ⊆ DcV (A).
(iv) If A ≤i B, then A ≤c B.
(v) If A ∼=i B, then A ∼=c B.

Proof. All cases follow immediately from Proposition 2.2.3. Notice that for the proof of (iii) the preservation of the unit arrow 1A follows immediately from the definition of the canonical embedding (·)^c . As we use the identity rule in its definition for all introduction rules of DiV (A), other than (0A ), we get (1A )c = 1A .

Combining Corollaries 2.1.5 and 2.2.4, we get

Γ ⊢ A ⇒ Γ ⊢i A ⇒ Γ ⊢c A,

with the corresponding inclusions DV (A) ⊆ DiV (A) ⊆ DcV (A) and functors Idmi : Form → Formi and Idic : Formi → Formc , where Idmc = Idic ◦ Idmi : Form → Formc is given by the identity rules, and

A ≤ B ⇒ A ≤i B ⇒ A ≤c B,
A ∼= B ⇒ A ∼=i B ⇒ A ∼=c B.
Proposition 2.2.5. If A ∈ Form, let PEMA = A ∨ ¬A.
(i) ⊢ ¬¬PEMA .
(ii) ⊢i PEMA → DNEA .
(iii) ⊢ DNEPEMA → PEMA , hence ⊢c PEMA .

Proof. Exercise.

The addition of the rule (DNEA ) in the inductive definition of DcV (A) has the following
consequence to the preorder ≤c . Note that because of the above implications between the
various preorders and congruences, we usually use the subscript i (or none), if an intuitionistic
(or minimal) preorder or congruence holds in Formc .

Corollary 2.2.6. If A ∈ Form, then A is ≤c -pseudo-complemented i.e., there is a unique, up to ≅-isomorphism, B ∈ Form such that A ∧ B ≅c ⊥ and A ∨ B ≅c >.

Proof. Let B = ¬A. Then A ∧ ¬A ≤ ⊥ and ⊥ ≤i A ∧ ¬A. Moreover, A ∨ ¬A ≤ > and we show that > ≤c A ∨ ¬A. Actually, by Proposition 2.2.5(iii) we have that `c A ∨ ¬A. If we suppose that B ∈ Form, such that A ∧ B ≅c ⊥, we can show (exercise) that B ≅ ¬A.

In contrast to intuitionistic derivability, one obtains a weaker notion of classical derivability if only correspondingly fewer instances of the double-negation-elimination principle are considered.

Definition 2.2.7. Let Dne be the following set of formulas:

Dne = {∀x1 ,...,xn (¬¬R(x1 , . . . , xn ) → R(x1 , . . . , xn )) | n ∈ N+ , R ∈ Rel(n) , x1 , . . . , xn ∈ Var} ∪ {¬¬R → R | R ∈ Rel(0) \ {⊥}}.

Let `∗c A ⇔ Dne ` A and Γ `∗c A ⇔ Γ ∪ Dne ` A. We denote a derivation Γ `∗c A by Mc∗ .

Clearly, `∗c A ⇒ `c A, but not conversely. Next we see which part of the rule (DNEA ) is
captured by the weaker classical derivability `∗c . For that we need a lemma and a definition.

Lemma 2.2.8. Let A, B ∈ Form.


(i) ` (¬¬A → A) → (¬¬B → B) → ¬¬(A ∧ B) → A ∧ B.
(ii) ` (¬¬B → B) → ¬¬(A → B) → A → B.
(iii) ` (¬¬A → A) → ¬¬∀x A → A.

Proof. Exercise.

Definition 2.2.9. The formulas Form∗ without ∨, ∃ are defined inductively by the rules:

P ∈ Prime ⇒ P ∈ Form∗ ;    A, B ∈ Form∗ ⇒ (A → B), (A ∧ B) ∈ Form∗ ;    A ∈ Form∗ & x ∈ Var ⇒ ∀x A ∈ Form∗ .

An induction principle and a recursion theorem correspond to this definition of Form∗ .
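The clause "no ∨, ∃" lends itself to a direct recursion on syntax. The sketch below uses a tuple encoding of formulas that is my own illustrative convention (tags 'imp', 'and', 'or', 'all', 'ex'; prime formulas as plain strings, with 'bot' for ⊥), not notation from the notes.

```python
# Sketch: formulas as nested tuples; prime formulas are strings ('bot' stands for ⊥);
# compound formulas are ('imp', A, B), ('and', A, B), ('or', A, B),
# ('all', x, A), ('ex', x, A).  This encoding is an assumption for illustration.

def in_form_star(f):
    """Recursion mirroring the inductive definition: Form* excludes 'or' and 'ex'."""
    if isinstance(f, str):                   # prime formula, including ⊥
        return True
    tag = f[0]
    if tag in ('imp', 'and'):
        return in_form_star(f[1]) and in_form_star(f[2])
    if tag == 'all':
        return in_form_star(f[2])
    return False                             # disjunctions and existentials fail

A = ('all', 'x', ('imp', 'P', 'bot'))        # ∀x ¬P  ∈ Form*
B = ('imp', ('or', 'P', 'Q'), 'bot')         # ¬(P ∨ Q) ∉ Form*
print(in_form_star(A), in_form_star(B))      # prints: True False
```

The recursion theorem mentioned above is exactly what licenses defining such functions by cases on the formula constructors.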


48 CHAPTER 2. DERIVATIONS IN INTUITIONISTIC AND CLASSICAL LOGIC

Theorem 2.2.10. ∀A∈Form∗ (`∗c ¬¬A → A).


Proof. We use induction on Form∗ . If A is atomic we work exactly as in the corresponding
case of the proof of Theorem 2.1.7 i.e.,
∀x1 ,...,xn (¬¬R(x1 , . . . , xn ) → R(x1 , . . . , xn )) t1 ∈ Term
∀−
∀x2 ,...,xn (¬¬R(t1 , x2 . . . , xn ) → R(t1 , x2 . . . , xn )) t2 ∈ Term
∀−
∀x3 ,...,xn (¬¬R(t1 , t2 , x3 . . . , xn ) → R(t1 , t2 , x3 . . . , xn ))
......... .........
∀xn (¬¬R(t1 , . . . , tn−1 , xn ) → R(t1 , . . . , tn−1 , xn )) tn ∈ Term −

¬¬R(t1 , t2 , . . . , tn ) → R(t1 , t2 . . . , tn )
Next we suppose that there are classical derivations of `∗c ¬¬A → A, `∗c ¬¬B → B and we find
classical derivations of `∗c ¬¬(A → B) → A → B, `∗c ¬¬(A ∧ B) → A ∧ B and `∗c ¬¬∀x A →
∀x A. By Lemma 2.2.8(ii) there is a derivation M of (¬¬B → B) → ¬¬(A → B) → A → B,
and the required classical derivation is
|M | Mc∗
(¬¬B → B) → ¬¬(A → B) → A → B ¬¬B → B
→−
¬¬(A → B) → A → B
By Lemma 2.2.8(i) there is a derivation N of C = (¬¬A → A) → (¬¬B → B) → ¬¬(A∧B) →
A ∧ B, and the required classical derivation is
|N | Nc∗
C ¬¬A → A | Mc∗
→−
(¬¬B → B) → ¬¬(A ∧ B) → A ∧ B ¬¬B → B
→−
¬¬(A ∧ B) → A ∧ B
By Lemma 2.2.8(iii) there is a derivation K of D = (¬¬A → A) → ¬¬∀x A → A, and the required classical derivation, where the variable condition is easily seen to be satisfied, is
|K | Nc∗
D ¬¬A → A
¬¬∀x A → A →− [u : ¬¬∀x A]
A →−
+
∀x A ∀ x +
¬¬∀x A → ∀x A → u
The extension-rule, the cut-rule and the deduction theorem for minimal logic (see sec-
tion 1.12) are easily extended to classical logic.

2.3 Monos, epis and subobjects


Categorically speaking, the obvious commutativity of the following diagram in Formc

0A 0¬A

A ¬A
iA i¬A
A ∨ ¬A ∼
=c >

expresses that in Formc the objects A and ¬A are complemented subobjects of >.

Definition 2.3.1. Let C be a category and f : A → B in C1 . The arrow f is called a monic


arrow, or a mono(morphism), and we write f : A ,→ B, if

∀C∈C0 ∀g,h∈C1 (C,A) f ◦ g = f ◦ h ⇒ g = h

f ◦g
g
f
C A B.
h
f ◦h

The arrow f is called an epic arrow, or an epi(morphism), and we write f : A ↠ B, if



∀C∈C0 ∀g,h∈C1 (B,C) g ◦ f = h ◦ f ⇒ g = h

g◦f
g
f
A B C.
h
h◦f

If A ∈ C0 , a subobject of A in C is a pair (B, i^A_B ), where B ∈ C0 and i^A_B : B ,→ A. The category SubC (A) of subobjects of A in C has objects the subobjects of A in C, and an arrow f : (B, i^A_B ) → (C, i^A_C ) is an arrow f : B → C in C1 such that the following diagram commutes:

f
B C

iA
B iA
C
A

If (B, i^A_B ) is an object in SubC (A), its unit arrow is the unit 1B in C1 , since the following
diagram is trivially commutative

1B
B B

iA
B iA
B
A.

If f : (B, i^A_B ) → (C, i^A_C ) and g : (C, i^A_C ) → (D, i^A_D ) in SubC (A), their composition in SubC (A) is the composition g ◦ f in C, as the commutativity of the following inner diagrams implies the commutativity of the following outer diagram

g◦f

f g
B C D
iA
iA
B
C
iA
D
A

i^A_D ◦ (g ◦ f ) = (i^A_D ◦ g) ◦ f = i^A_C ◦ f = i^A_B .

Notice that in the above definition the abstract injectivity (surjectivity) of arrows is expressed without reference to the membership relation ∈ of sets. Moreover, the notion of a subobject is the abstract, categorical version of the notion of subset, and the category of subobjects SubC (A) of A in C is the abstract, categorical version of the set of subsets of a set.

Proposition 2.3.2. Let C be a category, A ∈ C0 and f ∈ C1 .


(i) If f is an iso, then f is a mono and an epi.
(ii) Every arrow in Form (or in Formi , Formc ) is both a mono and an epi.
(iii) The converse to (i) does not hold, in general.
(iv) If f : (B, i^A_B ) → (C, i^A_C ) in SubC (A), then f is a mono.
(v) There is at most one arrow f : (B, i^A_B ) → (C, i^A_C ) in SubC (A) i.e., SubC (A) is thin.
(vi) In the category of sets Set a function f : A → B is a mono if and only if f is an injection,
and f is an epi if and only if f is a surjection.

Proof. (i) Let g : B → A such that g ◦ f = 1A and f ◦ g = 1B . If g 0 , h0 : C → A, such that


f ◦ g 0 = f ◦ h0 , then

g0 1B g 00
f g f
C A B A B D
h0 1A h00

g ◦ (f ◦ g 0 ) = g ◦ (f ◦ h0 ) ⇒ (g ◦ f ) ◦ g 0 = (g ◦ f ) ◦ h0
⇒ 1A ◦ g 0 = 1A ◦ h0
⇒ g 0 = h0 .

If g 00 , h00 : B → C, such that g 00 ◦ f = h00 ◦ f , then

(g 00 ◦ f ) ◦ g = (h00 ◦ f ) ◦ g ⇒ g 00 ◦ (f ◦ g) = h00 ◦ (f ◦ g)
⇒ g 00 ◦ 1B = h00 ◦ 1B
⇒ g 00 = h00 .

(ii) If M : A → B and K, N : C → A in Form such that M ◦ N = M ◦ K


N
M
C A B.
K

the equality N = K follows from the thinness of Form. Similarly, M is an epi.


(iii) By (ii) all arrows in the category Form are monos and epis, but not all arrows in Form
are (going to be) isos (can we show that now?).
(iv) Let g, h : D → B such that f ◦ g = f ◦ h. Then, since i^A_B is a mono, we get
g
f
D B C
h
iA
B iA
C
A

i^A_C ◦ (f ◦ g) = i^A_C ◦ (f ◦ h) ⇒ (i^A_C ◦ f ) ◦ g = (i^A_C ◦ f ) ◦ h
⇒ i^A_B ◦ g = i^A_B ◦ h
⇒ g = h.

(v) Let f, g : B → C such that the following two diagrams formed by the arrows i^A_B , i^A_C

f
B C
g
iA
B iA
C
A

commute. Then the third diagram, formed by the arrows f, g, also commutes, since i^A_B = i^A_C ◦ g = i^A_C ◦ f ⇒ g = f . Case (vi) is an exercise.
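Case (vi) can also be probed by brute force for small finite sets: encode functions from a finite test object as tuples of values and check left-cancellability of f directly. The helper below is my own sketch, not part of the notes.

```python
from itertools import product

def is_mono(f, dom, test_obj):
    """Brute-force the mono condition in finite Set: for all g, h : test_obj -> dom,
    f∘g = f∘h must imply g = h (functions test_obj -> dom encoded as value tuples)."""
    for g in product(dom, repeat=len(test_obj)):
        for h in product(dom, repeat=len(test_obj)):
            if g != h and all(f[a] == f[b] for a, b in zip(g, h)):
                return False
    return True

dom, test_obj = [0, 1], [0, 1, 2]
injective     = {0: 'a', 1: 'b'}
non_injective = {0: 'a', 1: 'a'}
print(is_mono(injective, dom, test_obj), is_mono(non_injective, dom, test_obj))
# prints: True False
```

For the epi half one would dually quantify over functions out of the codomain; such finite checks are only a sanity test, which is why the proposition argues abstractly.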

2.4 The groupoid category of formulas


The rule A 7→ (¬¬A → A) does not define an endofunctor on Form (or on Formi , or on
Formc ). If M : A → B, in order to get a derivation M 0 : (¬¬A → A) → (¬¬B → B) one also
needs, in general, a derivation N : B → A. The situation is similar for the rule A 7→ A ∨ ¬A.
Proposition 2.4.1. Let A, B ∈ Form such that ` A ↔ B. Then ` (¬¬A → A) ⇔ ` (¬¬B →
B), and ` (A ∨ ¬A) ⇔ ` (B ∨ ¬B).
Proof. We prove only the first equivalence, while the second is an exercise. If M is a derivation
of ¬¬A → A, N a derivation of A → B, and K a derivation of B → A, then the following is a
derivation of ¬¬B → B:
|K
B→A [w : B]
[v : ¬A] A
⊥ +
[u : ¬¬B] ¬B → w
|M ⊥ +
|N ¬¬A → A ¬¬A →− v
A→B A →
B +
¬¬B → B → u

The converse implication follows similarly.

The rule A 7→ (¬¬A → A) defines an endofunctor on the various categories of formulas, if


we consider an arrow A → B to be a derivation of the equivalence A ↔ B.
Definition 2.4.2. The groupoid category Formgrp of formulas has objects the formulas and
an arrow A → B is a pair (M : A → B, N : B → A) i.e., (M, N ) is a derivation of A ↔ B,
or of A ∼ = B. In this case we write (M, N ) : A → B. If (M 0 , N 0 ) : B → C, their composition
(M 0 , N 0 ) ◦ (M, N ) : A → C is the pair (M 0 ◦ M, N ◦ N 0 )

M M0
A B C.
N N0

Moreover, 1grp_A = (1A , 1A ), and (M, N ) = (K, L) ⇔ M = K & N = L. Similarly we define the groupoid categories Formgrp_i and Formgrp_c .

It is immediate to show that Formgrp is a (small) category. Moreover, Formgrp is thin;


if (M, N ), (M 0 , N 0 ) : A → B, then M, M 0 : A → B and N, N 0 : B → A, hence by the thinness
of Form we have that M = M 0 and N = N 0 , and then (M, N ) = (M 0 , N 0 ). Notice that in
Formgrp the object > is no longer a terminal object, since if it were, there would be an arrow (M, N ) : ⊥ → >, hence a derivation N : > → ⊥. For the same reason in Formgrp_i the object ⊥ is no longer an initial object. The above construction of the groupoid category Formgrp
from the preorder category Form can be generalised to an arbitrary preorder category.
Definition 2.4.3. A small category C is a groupoid, if every arrow in C1 is an isomorphism.
Corollary 2.4.4. (i) The groupoid category Formgrp is a groupoid.
(ii) If A, B ∈ Form, then A ≅ B ⇔ A ≅grp B.
(iii) If F : Form → Form, the induced endofunctor F grp : Formgrp → Formgrp is defined by

F0grp (A) = F0 (A),

F1grp (M : A → B, N : B → A) : F0 (A) → F0 (B),

F1grp (M, N ) = (F1 (M ) : F0 (A) → F0 (B), F1 (N ) : F0 (B) → F0 (A)),

for every A, B ∈ Form and every arrow (M, N ) : A → B in Formgrp .


Proof. (i) Clearly, Formgrp is small (see Definition 1.17.1). If (M, N ) : A → B, then
(N, M ) : B → A, and hence (N, M ) ◦ (M, N ) = (N ◦ M, N ◦ M ) = (1A , 1A ) = 1grp_A

M N M
A B A B.
N M N

Similarly, (M, N ) ◦ (N, M ) = (M ◦ N, M ◦ N ) = (1B , 1B ) = 1grp_B .

(ii) If A ≅ B i.e., if there are M : A → B and N : B → A, then (M, N ) : A → B in Formgrp , and by (i) A ≅grp B. If A ≅grp B, there is (M, N ) : A → B in Formgrp i.e., A ≅ B.
(iii) By definition of F grp we get

F1grp (1grp_A ) = F1grp (1A , 1A ) = (F1 (1A ), F1 (1A )) = (1F0 (A) , 1F0 (A) ) = 1grp_{F0 (A)} ,

F1grp ((M 0 , N 0 ) ◦ (M, N )) = F1grp (M 0 ◦ M, N ◦ N 0 )
= (F1 (M 0 ◦ M ), F1 (N ◦ N 0 ))
= (F1 (M 0 ) ◦ F1 (M ), F1 (N ) ◦ F1 (N 0 ))
= (F1 (M 0 ), F1 (N 0 )) ◦ (F1 (M ), F1 (N ))
= F1grp (M 0 , N 0 ) ◦ F1grp (M, N ).

Corollary 2.4.5. Let DNE : Formgrp → Formgrp be defined by

DNE0 (A) = ¬¬A → A,

DNE1 (M : A → B, N : B → A) = (M 0 , N 0 ),


M 0 : (¬¬A → A) → (¬¬B → B) & N 0 : (¬¬B → B) → (¬¬A → A),


where the derivations M 0 and N 0 are determined in Proposition 2.4.1.
(i) DNE is an endofunctor on Formgrp .
(ii) If A, B ∈ Form, then DNE0 (A ∧ B) ≥ DNE0 (A) ∧ DNE0 (B).

Proof. Exercise.

Corollary 2.4.6. Let PEM : Formgrp → Formgrp be defined by

PEM0 (A) = A ∨ ¬A,

PEM1 (M : A → B, N : B → A) = (M 0 , N 0 ),


M 0 : (A ∨ ¬A) → (B ∨ ¬B) & N 0 : (B ∨ ¬B) → (A ∨ ¬A),


where the derivations M 0 and N 0 are determined in Proposition 2.4.1.
(i) PEM is an endofunctor on Formgrp .
(ii) If A, B ∈ Form, then PEM0 (A ∨ B) ≤ PEM0 (A) ∨ PEM0 (B).
(iii) If A, B ∈ Form, then PEM0 (A) ∧ PEM0 (B) ≤ PEM0 (A ∧ B).

Proof. Exercise.

2.5 The negative fragment


The question answered in this section is the following: are there formulas A, other than ⊥, for
which it is possible to show that DNEA = ¬¬A → A is derivable in minimal logic?

Definition 2.5.1. The negative formulas Form− of Form, or the negative fragment of Form, are defined by the following inductive rules:

⊥ ∈ Form− ;    P ∈ Prime ⇒ (P → ⊥) ∈ Form− ;    A, B ∈ Form− ⇒ (A ◦ B) ∈ Form− ;    A ∈ Form− & x ∈ Var ⇒ ∀x A ∈ Form− ,

where ◦ ∈ {→, ∧}. To the definition of Form− corresponds the obvious induction principle. The
negative fragment Form− of Form is the corresponding full subcategory of negative formulas.
The negative fragments Form−_i and Form−_c are defined similarly.

It is immediate to show inductively that ¬A ∈ Form− , if A ∈ Form− .
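Membership in the negative fragment is again a direct recursion on syntax. The tuple encoding below (tags 'imp', 'and', 'all'; prime formulas as plain strings, 'bot' for ⊥) is my own illustrative convention, not notation from the notes; note the special clause for ¬P with P prime.

```python
def in_negative(f):
    """Form^-: contains ⊥ and P → ⊥ for prime P, and is closed under →, ∧, ∀;
    a bare prime other than ⊥ is not negative, and ∨, ∃ never occur."""
    if f == 'bot':
        return True
    if isinstance(f, str):                        # a prime R distinct from ⊥
        return False
    tag = f[0]
    if (tag == 'imp' and isinstance(f[1], str)
            and f[1] != 'bot' and f[2] == 'bot'):
        return True                               # the clause P → ⊥, P prime
    if tag in ('imp', 'and'):
        return in_negative(f[1]) and in_negative(f[2])
    if tag == 'all':
        return in_negative(f[2])
    return False

neg = lambda a: ('imp', a, 'bot')                 # ¬A := A → ⊥
print(in_negative(neg('P')), in_negative('P'), in_negative(neg(neg('P'))))
# prints: True False True
```

The last value illustrates the remark above: ¬A is negative whenever A is, here for A = ¬P.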

Proposition 2.5.2. (i) ∀A∈Form− (A ∈ Form∗ ).
(ii) Let R ∈ Rel(n) . If n > 1 and t1 , . . . , tn ∈ Term, then R(t1 , . . . , tn ) ∈ Form∗ \ Form− . If n = 0 and R ≠ ⊥, then R ∈ Form∗ \ Form− .

Proof. (i) By induction on Form− (exercise). (ii) Since R ∈ Prime, we get R ∈ Form∗ . If R ∈ Form− , then R is either ⊥, or of the form P → ⊥, for some P ∈ Prime, or of the form A ◦ B, or of the form ∀x A, for some A, B ∈ Form− . In all these cases we get a contradiction.

Proposition 2.5.3. ∀A∈Form− (` ¬¬A → A).

Proof. By induction on Form− . If A = ⊥, we use ` ¬¬⊥ → ⊥. If A = ¬R~t with R


distinct from ⊥, we must show ¬¬¬R~t → ¬R~t, which is a special case of ` ¬¬¬B → ¬B,
Proposition 1.11.1(iii). Next we suppose that ` ¬¬A → A, ` ¬¬B → B and we show
` ¬¬(A → B) → (A → B). If C = (¬¬B → B) → ¬¬(A → B) → A → B, we use
Lemma 2.2.8(ii) as follows:
|M |N
C ¬¬B → B
¬¬(A → B) → A → B
For the derivation of ` ¬¬(A ∧ B) → (A ∧ B) we use Lemma 2.2.8(i) in a similar manner. If

D = (¬¬A → A) → ¬¬∀x A → A,

for the derivation of ` ¬¬∀x A → ∀x A we use Lemma 2.2.8(iii) as follows:

|M |K
D ¬¬A → A
¬¬∀x A → A u : ¬¬∀x A
A +
∀x A ∀ x +
¬¬∀x A → ∀x A → u

The variable condition is trivially satisfied in the previous use of the rule ∀+ x.

2.6 Weak disjunction and weak existence


One reason for restricting the classical derivation `c to the weak classical derivation `∗c is that
one can replace existential formulas and disjunctions with weak existential formulas and weak
disjunctions, respectively. In this way Theorem 2.2.10 is “enough” for our needs, as through it
we get the double-negation-elimination of Form∗ i.e., of all formulas without ∃ and ∨. Here
we distinguish between two kinds of "exists" and two kinds of "or": the "weak", or classical, ones, and the "strong", or non-classical, ones, with constructive content. In the present context both kinds occur together and hence we must mark the distinction; we do so by writing a tilde above the weak disjunction and existence symbols: ∨˜ , ∃˜ .

Definition 2.6.1. If A, B ∈ Form, let

A ∨˜ B = ¬A → ¬B → ⊥    &    ∃˜x A = ¬∀x ¬A.
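Since ∨˜ and ∃˜ are mere abbreviations, they can be unfolded mechanically. A sketch on a tuple encoding of formulas (my own convention, not the notes': 'imp' and 'all' tags, strings for primes, 'bot' for ⊥):

```python
def neg(a):                 # ¬A  :=  A → ⊥
    return ('imp', a, 'bot')

def weak_or(a, b):          # A ∨˜ B  :=  ¬A → ¬B → ⊥
    return ('imp', neg(a), ('imp', neg(b), 'bot'))

def weak_ex(x, a):          # ∃˜x A  :=  ¬∀x ¬A
    return neg(('all', x, neg(a)))

# Unfolding shows only →, ∀ and ⊥ occur in the results:
print(weak_or('P', 'Q'))
print(weak_ex('x', 'P'))
```

That the unfoldings contain no ∨ and no ∃ is the syntactic content behind Proposition 2.6.2(i) and (ii) below.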


Proposition 2.6.2. Let A, B ∈ Form.
(i) If A, B ∈ Form∗ , then A ∨˜ B ∈ Form∗ and ∃˜x A ∈ Form∗ .
(ii) If A, B ∈ Form− , then A ∨˜ B ∈ Form− and ∃˜x A ∈ Form− .
(iii) ` A ∨ B → A ∨˜ B.
(iv) ` ∃x A → ∃˜x A.
(v) ` A ∨˜ B ↔ ¬¬(A ∨ B).
(vi) ` ∃˜x A ↔ ¬¬∃x A.
(vii) ` A ∨˜ B ↔ ¬(¬A ∧ ¬B).
(viii) ` ¬¬(A ∨˜ B) → A ∨˜ B.
(ix) ` ¬¬(∃˜x A) → ∃˜x A.

Proof. Exercise. For the proof of (viii) and (ix) we only mention the following. By (i) Theorem 2.2.10 implies the classical derivability of the double-negation-elimination of A ∨˜ B and ∃˜x A, if A, B ∈ Form∗ . By (ii) Proposition 2.5.3 implies the derivability of the double-negation-elimination of A ∨˜ B and ∃˜x A, if A, B ∈ Form− . Using Brouwer’s double-negation-elimination of a negated formula (Proposition 1.11.1(iii)), we derive these eliminations in minimal logic.

Proposition 2.6.3. The following formulas are derivable.

(i) (∃˜x A → B) → ∀x (A → B), if x ∉ FV(B).

(ii) (¬¬B → B) → ∀x (A → B) → ∃˜x A → B, if x ∉ FV(B).

(iii) (⊥ → B(c)) → (A → ∃˜x B) → ∃˜x (A → B), if x ∉ FV(A).

(iv) ∃˜x (A → B) → A → ∃˜x B, if x ∉ FV(A).

Proof. The following is a derivation of (i):

[u1 : ∀x ¬A] x
¬A [w : A]

˜x A → B →+ u1
∃ ¬∀x ¬A
B +
A → B → +w
∀ x
∀x (A → B)

The following is a derivation of (ii) without the last →+ -rules:

∀x (A → B) x
A→B [u1 : A]
[u2 : ¬B] B

→+ u1
¬A
¬∀x ¬A ∀x ¬A

→+ u2
¬¬B → B ¬¬B
B

The following is a derivation of (iii) without the last →+ -rules:


∀x ¬(A → B) x [u1 : B]
¬(A → B) A→B

A→∃ ˜x B u2 : A →+ u1
¬B
˜x B
∃ ∀x ¬B
⊥ → B(c) ⊥
∀x ¬(A → B) c B(c)
→+ u2
¬(A → B(c)) A → B(c)

Note that above we used the fact that if x ∈ / FV(A), then A(c) = A (Proposition 1.6.4). The
following is a derivation of (iv) without the last →+ -rules:

∀x ¬B x [u1 : A → B] A
¬B B

→+ u1
¬(A → B)
˜x (A → B) ∀+ x
∃ ∀x ¬(A → B)
⊥ →−

Proposition 2.6.4. The following formulas are derivable.

(i) ∀x (⊥ → A) → (∀x A → B) → ∀x ¬(A → B) → ¬¬A.

(ii) ∀x (¬¬A → A) → (∀x A → B) → ∃˜x (A → B), if x ∉ FV(B).
Proof. If Ax, Ay stand for A(x), A(y), respectively, we get the following derivation M of (i):
∀y Ay x
Ax + ∀x (⊥ → Ax) y u1 : ¬Ax u2 : Ax
∀x Ax → B ∀x Ax ∀ −x ⊥ → Ay ⊥
B → Ay
∀+ y
∀y Ay → B ∀y Ay
∀x ¬(Ax → B) x B
→+ u2
¬(Ax → B) Ax → B

→+ u1 ,
¬¬Ax
where the last →+ -rules are not included. Using this derivation M we obtain
∀x (¬¬Ax → Ax) x |M
¬¬Ax → Ax ¬¬Ax
Ax
∀x Ax → B ∀x Ax
∀x ¬(Ax → B) x B
¬(Ax → B) Ax → B −
⊥ → .

Note that the assumption ∀x (¬¬A → A) in (ii) is used to derive the assumption ∀x (⊥ → A)
in (i), since ` (¬¬A → A) → ⊥ → A (see the proof of Proposition 2.2.3).

Corollary 2.6.5. If R ∈ Rel(1) , then `∗c ∃˜x (R(x) → ∀x R(x)).

Proof. Let A = R(x) and B = ∀x R(x) in Proposition 2.6.4(ii).

The formula ∃˜x (R(x) → ∀x R(x)) is known as the drinker formula, and can be read as "in every non-empty bar there is a person such that, if this person drinks, then everybody drinks". The next proposition on weak disjunction is similar to Proposition 2.6.3.

Proposition 2.6.6. The following formulas are derivable.

(A ∨˜ B → C) → (A → C) ∧ (B → C),
(¬¬C → C) → (A → C) → (B → C) → A ∨˜ B → C,
(⊥ → B) → (A → B ∨˜ C) → (A → B) ∨˜ (A → C),
(A → B) ∨˜ (A → C) → A → B ∨˜ C,
(¬¬C → C) → (A → C) ∨˜ (B → C) → A → B → C,
(⊥ → C) → (A → B → C) → (A → C) ∨˜ (B → C).

Proof. The derivations of the second and the final formula are

A → C u2 : A
u1 : ¬C C B → C u3 : B
⊥ u1 : ¬C C
→+ u2
¬A → ¬B → ⊥ ¬A ⊥
→+ u3
¬B → ⊥ ¬B

→+ u1
¬¬C → C ¬¬C
C

A → B → C u1 : A
B→C u2 : B
C
→+ u1
¬(A → C) A→C
⊥→C ⊥
C
→+ u
¬(B → C) B→C − 2
⊥ →

The weak disjunction and the weak existential quantifier satisfy the same axioms as the
strong variants, if one restricts the conclusion of the elimination axioms to formulas in Form∗ .

Proposition 2.6.7. The following formulas are derivable.

` A → A ∨˜ B,
` B → A ∨˜ B,
`∗c A ∨˜ B → (A → C) → (B → C) → C    (C ∈ Form∗ ),
` A → ∃˜x A,
`∗c ∃˜x A → ∀x (A → B) → B    (x ∉ FV(B), B ∈ Form∗ ).

Proof. The derivation of the last formula is

∀x (A → B) x
A→B u2 : A
u1 : ¬B B

→+ u2
¬A
¬∀x ¬A ∀x ¬A

→+ u
¬¬B → B ¬¬B − 1
B → .

2.7 Logical operations on functors


Next we generalise the composition of functors defined in Example 1.14.4.

Definition 2.7.1. Let C, D, E be categories. If F : C → D and G : D → E are covariant


(contravariant) functors their composition G ◦ F is the pair (G0 ◦ F0 , G1 ◦ F1 ).

Proposition 2.7.2. Let C, D, E be categories.


(i) If F : C → D and G : D op → E, then G ◦ F : C op → E.
(ii) If F : C op → D and G : D → E, then G ◦ F : C op → E.
(iii) If F : C op → D and G : D op → E, then G ◦ F : C → E.
(iv) If F : C → D and G : D → E, then G ◦ F : C → E.

Proof. Exercise.

Corollary 2.7.3. Let F : Form → Form, and ¬ : Formop → Form, defined in Proposition 2.1.9. We define the following endofunctors on Form:

¬n F = F , if n = 0;    ¬n F = ¬ ◦ F , if n = 1;    ¬n F = ¬ ◦ (¬n−1 F ), if n > 1.
(i) ¬2n+1 F is a contravariant endofunctor, for every n ∈ N.
(ii) ¬2n F is a covariant endofunctor, for every n ∈ N.
(iii) The endofunctor ¬2n+1 F is isomorphic to ¬F in Fun(Form, Form), for every n ∈ N.
(iv) If F : Formc → Formc , then ¬2n F is isomorphic to F in Fun(Formc , Formc ), for every n ∈ N.

Proof. It follows immediately from Propositions 2.1.9 and 2.7.2.
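On objects, ¬n F simply iterates the map A ↦ ¬A after F. A small syntactic sketch (tuple encoding of formulas with ¬A encoded as A → ⊥; this encoding is my own convention for illustration, and F is taken to be the identity on objects):

```python
def neg(a):                  # ¬A  :=  A → ⊥ on the tuple encoding
    return ('imp', a, 'bot')

def neg_n(n, a):
    """Object part of ¬^n applied to a formula: A ↦ ¬^n A."""
    for _ in range(n):
        a = neg(a)
    return a

# Even iterations stack an even number of negation layers (the covariant case),
# odd iterations an odd number (the contravariant case).
print(neg_n(2, 'P') == neg(neg('P')), neg_n(3, 'P') == neg(neg_n(2, 'P')))
# prints: True True
```

The variance statement of the corollary is reflected in this iteration: composing two contravariant passes of ¬ yields a covariant functor.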

Proposition 2.7.4. If F, G : Form → Form and B ∈ Form, the following are functors.
(i) F × G : Form → Form × Form, where

(F × G)0 (A) = (F0 (A), G0 (A));    A ∈ Form,

(F × G)1 (M : A → B) : (F0 (A), G0 (A)) → (F0 (B), G0 (B));    M : A → B,


(F × G)1 (M ) = (F1 (M ) : F0 (A) → F0 (B), G1 (M ) : G0 (A) → G0 (B)).
(ii) F ∧ G : Form → Form, where (F ∧ G)0 (A) = F0 (A) ∧ G0 (A), for every A ∈ Form.
(iii) F ∨ G : Form → Form, where (F ∨ G)0 (A) = F0 (A) ∨ G0 (A), for every A ∈ Form.
(iv) B → F : Form → Form, where (B → F )0 (A) = B → F0 (A), for every A ∈ Form.
(v) ∃x F : Form → Form, where (∃x F )0 (A) = ∃x F0 (A), for every A ∈ Form.
(vi) ∀x F : Form → Form, where (∀x F )0 (A) = ∀x F0 (A), for every A ∈ Form.
Proof. (i) If A, B, C ∈ Form, M : A → B and N : B → C, then by the definition of the unit
arrow and composition in the product category Form × Form we get
 
(F × G)1 (1A ) = (F1 (1A ), G1 (1A )) = (1F0 (A) , 1G0 (A) ) = 1(F0 (A),G0 (A)) = 1(F ×G)0 (A) ,

(F × G)1 (N ◦ M ) = (F1 (N ◦ M ), G1 (N ◦ M ))
= (F1 (N ) ◦ F1 (M ), G1 (N ) ◦ G1 (M ))
= (F1 (N ), G1 (N )) ◦ (F1 (M ), G1 (M ))
= (F × G)1 (N ) ◦ (F × G)1 (M ).
(ii)-(vi) These are functors, as compositions of the functors in Definitions 1.14.8 and 1.14.9 with F × G or F , respectively:

F ∧ G = ∧ ◦ (F × G),    F ∨ G = ∨ ◦ (F × G),    B → F = (→B ) ◦ F,    ∃x F = ∃x ◦ F,    ∀x F = ∀x ◦ F.

If F, G : Formop → Form are contravariant functors, all results in Proposition 2.7.4 are
extended accordingly. If F, G : Form × Form → Form, or more generally, F, G : Formn →
Form, where n > 1, the corresponding functors F ∧ G and F ∨ G are defined similarly.

Proposition 2.7.5. Let F : Form × Form → Form be a functor and let G0 : Form × Form → Form be a function, such that ` F0 (A, B) ↔ G0 (A, B), for every A, B ∈ Form. Then G0 generates a functor G : Form × Form → Form.

Proof. If (M, N ) : (A, B) → (A0 , B 0 ) in Form × Form, then we define G1 (M, N ) : G0 (A, B) → G0 (A0 , B 0 ) to be the arrow LA0 ,B 0 ◦ F1 (M, N ) ◦ KA,B

F1 (M, N )
F0 (A, B) F0 (A0 , B 0 )

KA,B LA0 ,B 0

G0 (A, B) G0 (A0 , B 0 ),
G1 (M, N )

where KA,B : G0 (A, B) → F0 (A, B) and LA0 ,B 0 : F0 (A0 , B 0 ) → G0 (A0 , B 0 ) are found by the
hypotheses ` F0 (A, B) ↔ G0 (A, B) and ` F0 (A0 , B 0 ) ↔ G0 (A0 , B 0 ).

A similar result holds, if F : Formn → Form and G0 : Formn → Form, for every n > 0. All
results of this section are extended naturally to functors F, G : C → Form (in Proposition 2.7.4)
and F : C × D → Form (in Proposition 2.7.5), where C and D are categories.

2.8 Functors on functors


To the functors on formulas associated to the logical symbols in Definition 1.14.8 correspond
functors on endofunctors on Form. For simplicity we use the same symbols for them.

Definition 2.8.1. Let the following functors:


(i) ∧ : Fun(Form, Form) × Fun(Form, Form) → Fun(Form, Form), defined by
^
(F, G) = F ∧ G,
0
^
(η, τ ) : (F, G) → (F 0 , G0 ) : F ∧ G ⇒ F 0 ∧ G0 ,

1
^  ^
(η, τ ) = (ηA , τA ) : F0 (A) ∧ G0 (A) → F0 0 (A) ∧ G0 0 (A); A ∈ Form,
1 A 1

ηA : F0 (A) → F0 0 (A), τA : G0 (A) → G0 0 (A), and ∧1 (ηA , τA ) is defined in Definition 1.14.8(i).


(ii) ∨ : Fun(Form, Form) × Fun(Form, Form) → Fun(Form, Form), defined by
_
(F, G) = F ∨ G,
0
_
(η, τ ) : (F, G) → (F 0 , G0 ) : F ∨ G ⇒ F 0 ∨ G0 ,

1
_  _
(η, τ ) = (ηA , τA ) : F0 (A) ∨ G0 (A) → F0 0 (A) ∨ G0 0 (A); A ∈ Form,
1 A 1

ηA : F0 (A) → F0 0 (A), τA : G0 (A) → G0 0 (A), and ∨1 (ηA , τA ) is defined in Definition 1.14.8(ii).
(iii) If B ∈ Form, let →B : Fun(Form, Form) → Fun(Form, Form), defined by
(→B )0 (F ) = B → F,

(→B )1 η : F ⇒ G) : (B → F ) ⇒ (B → G),


(→B )1 (η) A = (→B )1 (ηA ) : (B → F0 (A)) → (B → G0 (A));


 
A ∈ Form,
ηA : F0 (A) → G0 (A), and (→B )1 (ηA ) is defined in Definition 1.14.9.
(iv) ∀x : Fun(Form, Form) → Fun(Form, Form), defined by
 
∀x (F ) = ∀x F,
0
 
∀x

η : F ⇒ G) : ∀x F ⇒ ∀x G,
1
    
∀x η) = ∀x (ηA ) : ∀x F0 (A) → ∀x G0 (A); A ∈ Form,
1 A 1

∀x

where ηA : F0 (A) → G0 (A), and 1
(ηA ) is defined in Definition 1.14.8(iv).
(v) ∃x : Fun(Form, Form) → Fun(Form, Form), defined by
 
∃x (F ) = ∃x F,
0
 
∃x

η : F ⇒ G) : ∃x F ⇒ ∃x G,
1
    
∃x η) = ∃x (ηA ) : ∃x F0 (A) → ∃x G0 (A); A ∈ Form,
1 A 1

∃x

where ηA : F0 (A) → G0 (A), and (ηA ) is defined in Definition 1.14.8(v).
1
Other functors on formulas induce the corresponding functors on functors on formulas in the same way. The preorder on Form also induces a preorder on Fun(Form, Form).

Definition 2.8.2. If F, G ∈ Fun(Form, Form) and η, τ : F ⇒ G, let

F ≤ G ⇔ ∀A∈Form (F0 (A) ≤ G0 (A)),

η = τ ⇔ ∀A∈Form (ηA = τA ).

A witness µ of F ≤ G, in symbols µ : F ≤ G, is a family (µA : F0 (A) → G0 (A))A∈Form .

If µ : F ≤ G, by the thinness of Form we get µ : F ⇒ G, and if τ : F ⇒ G, then τ : F ≤ G.


By the definition of equality between natural transformations F ⇒ G we have that the thinness
of Form implies the thinness of Fun(Form, Form). Moreover, the adjunctions of sections 1.16
and 1.17 are extended to functors on functors on formulas.

Proposition 2.8.3. If B ∈ Form and F, G ∈ Fun(Form, Form), then

(∧B ◦ F ) ≤ G ⇔ F ≤ (→B ◦ G).

Proof. By Definition 2.8.2 we have that

(∧B ◦ F ) ≤ G ⇔ ∀A∈Form ((∧B ◦ F )0 (A) ≤ G0 (A))
⇔ ∀A∈Form ((∧B )0 (F0 (A)) ≤ G0 (A))
⇔ ∀A∈Form (F0 (A) ∧ B ≤ G0 (A))
⇔ ∀A∈Form (F0 (A) ≤ (B → G0 (A)))
⇔ ∀A∈Form (F0 (A) ≤ (→B )0 (G0 (A)))
⇔ ∀A∈Form (F0 (A) ≤ (→B ◦ G)0 (A))
⇔ F ≤ (→B ◦ G).
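The step from F0 (A) ∧ B ≤ G0 (A) to F0 (A) ≤ (B → G0 (A)) is the object-level adjunction between conjunction and implication. As a weak sanity check, one can verify the corresponding equivalence semantically in the two-element Boolean algebra, where ≤ is the order on truth values; this semantic check is my own illustration and is of course much weaker than the proof-theoretic fact.

```python
from itertools import product

def imp(a, b):                 # Boolean interpretation of →
    return (not a) or b

# Check a ∧ b ≤ c  ⇔  a ≤ (b → c) over all truth values
# (in Python, <= on bools is the usual order False < True).
ok = all(((a and b) <= c) == (a <= imp(b, c))
         for a, b, c in product([False, True], repeat=3))
print(ok)  # prints: True
```

The same three-line check, run over any finite Heyting algebra, would witness the residuation law that the pointwise argument above lifts to functors.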

Definition 2.8.4. The functors ∀x and ∃x in Definition 2.8.1 can be seen as functors

∀x , ∃x : Fun(Form, Form) → Fun(Form, Formx ),

as e.g., (∃x F )0 (A) = (∃x ◦ F )0 (A) = ∃x F0 (A) ∈ Formx (see Example 1.17.6). Let the functor Wx : Fun(Form, Formx ) → Fun(Form, Form) be defined by the identity rule F 7→ F , for every F ∈ Fun(Form, Formx ).
Proposition 2.8.5. The following adjunctions hold: ∃x ⊣ Wx and Wx ⊣ ∀x .
Proof. Exercise.

2.9 Functors associated to weak “or” and weak “exists”


Definition 2.9.1. (i) Let the functor ∨˜ : Form × Form → Form, defined by

(∨˜ )0 (A, B) = A ∨˜ B;    A, B ∈ Form,

where (∨˜ )1 (M : A → A0 , N : B → B 0 ) : A ∨˜ B → A0 ∨˜ B 0
is the following derivation
[w : A]
|M [v : B]
A0 [u0 : ¬A0 ] |N
⊥ →− B0 [v 0 : ¬B 0 ]
u : ¬A → ¬B → ⊥ A → ⊥ →+ w
+
⊥ →−
→ +
¬B → ⊥ B → ⊥ →+ v
⊥ →
+ 0
¬B → ⊥ → v + 0
→ u.
¬A0 → ¬B 0 → ⊥
(ii) Let the functor PEM˜ : Form → Form, defined by PEM˜ 0 (A) = A ∨˜ ¬A, for every A ∈ Form.

We can show that ∨˜ is a functor using also Propositions 2.7.2 and 2.7.5 (exercise). The fact that the rule PEM˜ A = A ∨˜ ¬A defines an endofunctor on Form follows from the trivial fact that ` A ∨˜ ¬A, for every A ∈ Form.

Definition 2.9.2. Let the functor ∃˜x : Form → Form, defined by

(∃˜x )0 (A) = ∃˜x A,    where (∃˜x )1 (M : A → B) : ∃˜x A → ∃˜x B

is the following derivation
[w : A]
|M
B +
A→B → w [v : ∀x ¬B] x −
¬B → ¬A ¬B ∀
→ −
¬A +
u : ¬∀x ¬A ∀x ¬A ∀ −x
⊥ →
→ +v
¬∀x ¬B
The variable condition in the above derivation of ¬A is satisfied, as the only open assumption is the formula ∀x ¬B. In the derivation tree above we omit the derivation of ` (A → B) → (¬B → ¬A) and the use of (→− ) in order to derive ¬B → ¬A. We can show that ∃˜x is a functor using also Proposition 2.7.2 (exercise). Next follows the weak analogue to Theorem 1.17.8.
Proposition 2.9.3. The functor ∃˜x : Form → Form can be written as a functor of the form ∃˜x : Form → Formx , where Formx is the subcategory of formulas A with x ∉ FV(A). Let again the functor Wx : Formx → Form, defined by (Wx )0 (A) = A, for every A ∈ Formx . If A ∈ Formx and C ∈ Form, the following hold:
(i) (∃˜x )0 (C) ≤ A ⇒ C ≤ (Wx )0 (A).
(ii) If ` ¬¬A → A, then C ≤ (Wx )0 (A) ⇒ (∃˜x )0 (C) ≤ A.
Proof. (i) If N : ¬∀x ¬C → A, then, since there is a derivation M : ∃x C → ∃˜x C, we get the composition N ◦ M : ∃x C → A, hence by the proof of Theorem 1.17.8(i) there is K : C → A.
(ii) If K : C → A, we get the following derivation of (¬∀x ¬C) → ¬¬A:
[w : C]
|K
A +
C→A → w
¬A → ¬C [v : ¬A]
¬C →−
+
u : ¬∀x ¬C ∀x ¬C ∀ −x
⊥ →
→ + v.
¬¬A
The variable condition in the above derivation of ¬C is satisfied, as x ∈
/ FV(¬A) = FV(A),
where ¬A is the only open assumption.

2.10 The Gödel-Gentzen translation


Definition 2.10.1. The Gödel-Gentzen translation is the unique function

g
: Form → Form

A 7→ Ag ; A ∈ Form

defined by recursion on Form from the following clauses:

⊥g = ⊥,
Rg = ¬¬R, R ∈ Rel(0) \ {⊥},
R(t1 , . . . , tn )g = ¬¬R(t1 , . . . , tn ), R ∈ Rel(n) , n ∈ N+ , t1 , . . . , tn ∈ Term,
(A ◦ B)g = Ag ◦ B g , ◦ ∈ {→, ∧},
(∀x A)g = ∀x Ag ,
(A ∨ B)g = Ag ∨˜ B g = ¬Ag → ¬B g → ⊥,
(∃x A)g = ∃˜x Ag = ¬∀x ¬Ag .

If Γ ⊆ Form, let Γg = {C g | C ∈ Γ}.
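The recursion clauses of the translation carry over directly to code. The tuple encoding below (tags 'imp', 'and', 'or', 'all', 'ex'; prime formulas as plain strings, 'bot' for ⊥) is my own illustrative convention, not notation from the notes. The last assertion also shows that the translation identifies A ∨ B with its weak unfolding ¬A → ¬B → ⊥, which is one way to see that g is not injective (Proposition 2.10.3(iii) below).

```python
def neg(a):                 # ¬A := A → ⊥ on the tuple encoding
    return ('imp', a, 'bot')

def gg(f):
    """Gödel-Gentzen translation, clause by clause as in Definition 2.10.1."""
    if f == 'bot':
        return 'bot'
    if isinstance(f, str):                      # prime R distinct from ⊥: R ↦ ¬¬R
        return neg(neg(f))
    tag = f[0]
    if tag in ('imp', 'and'):                   # (A ◦ B)^g = A^g ◦ B^g
        return (tag, gg(f[1]), gg(f[2]))
    if tag == 'all':                            # (∀x A)^g = ∀x A^g
        return ('all', f[1], gg(f[2]))
    if tag == 'or':                             # (A ∨ B)^g = ¬A^g → ¬B^g → ⊥
        return ('imp', neg(gg(f[1])), ('imp', neg(gg(f[2])), 'bot'))
    if tag == 'ex':                             # (∃x A)^g = ¬∀x ¬A^g
        return neg(('all', f[1], neg(gg(f[2]))))

# (¬A)^g = ¬A^g, as in Corollary 2.10.2(i):
assert gg(neg('P')) == neg(gg('P'))
# g identifies A ∨ B with the unfolding ¬A → ¬B → ⊥:
assert gg(('or', 'P', 'Q')) == gg(('imp', neg('P'), ('imp', neg('Q'), 'bot')))
print(gg(('ex', 'x', 'P')))
```

Running the translation on a few formulas and feeding the results to a Form⁻ membership test is a quick empirical check of Proposition 2.10.3(i).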

Corollary 2.10.2. Let n > 1 and A, A1 , . . . , An ∈ Form.
(i) (¬A)g = ¬Ag .
(ii) (A1 → A2 → . . . → An−1 → An )g = Ag1 → Ag2 → . . . → Agn−1 → Agn .
(iii) (EFQA )g = EFQAg .
(iv) (DNEA )g = DNEAg .
(v) (PEM˜ A )g = PEM˜ Ag .

Proof. The proofs of all cases are immediate.

Proposition 2.10.3. (i) ∀A∈Form (Ag ∈ Form− ).
(ii) Let R ∈ Rel(n) . If n > 1 and t1 , . . . , tn ∈ Term, then R(t1 , . . . , tn ) → ⊥ ∈ Form− \ Formg . If n = 0 and R ≠ ⊥, then R → ⊥ ∈ Form− \ Formg .
(iii) The Gödel-Gentzen translation is not an injection.

Proof. Exercise.

Combining Propositions 2.5.2(ii) and 2.10.3(ii) we get Formg ⊊ Form− ⊊ Form∗ .

Proposition 2.10.4. Let x ∈ Var and s ∈ Term.
(i) ∀A∈Form (Ag ∈ Form∗ ).
(ii) ∀A∈Form (FV(A) = FV(Ag )).
(iii) ∀A∈Form (Frees,x (A) = Frees,x (Ag )).
(iv) ∀A∈Form ((A[x := s])g = Ag [x := s]).

Proof. (i) It follows immediately from Propositions 2.10.3(i) and 2.5.2(i).


(ii) We use induction on Form. Let A ∈ Prime. If A = ⊥, then FV(⊥g ) = FV(⊥) = ∅. If A = R ∈ Rel(0) \ {⊥}, then FV(Rg ) = FV((R → ⊥) → ⊥) = ∅ = FV(R). If A = R(t1 , . . . , tn ), then

FV(R(t1 , . . . , tn )g ) = FV((R(t1 , . . . , tn ) → ⊥) → ⊥) = FV(R(t1 , . . . , tn )).

If ◦ ∈ {→, ∧} and A, B ∈ Form, by the inductive hypotheses we get

FV((A ◦ B)g ) = FV(Ag ◦ B g ) = FV(Ag ) ∪ FV(B g ) = FV(A) ∪ FV(B) = FV(A ◦ B),

FV((∀x A)g ) = FV(∀x Ag ) = FV(Ag ) \ {x} = FV(A) \ {x} = FV(∀x A),

FV((A ∨ B)g ) = FV(¬Ag → ¬B g → ⊥) = FV(Ag ) ∪ FV(B g ) = FV(A) ∪ FV(B) = FV(A ∨ B),

FV((∃x A)g ) = FV([∀x (Ag → ⊥)] → ⊥) = FV(Ag ) \ {x} = FV(A) \ {x} = FV(∃x A).

(iii) By Definition 1.6.1 we get the following equalities. Let A ∈ Prime. If A = ⊥, then
Frees,x (⊥g ) = Frees,x (⊥) = 1. If A = R ∈ Rel(0) \ {⊥}, or if A = R(t1 , . . . , tn ), then

Frees,x (Ag ) = Frees,x ((A → ⊥) → ⊥) = Frees,x (A) · Frees,x (⊥) · Frees,x (⊥) = Frees,x (A).

If ◦ ∈ {→, ∧} and A, B ∈ Form, by the inductive hypotheses we get

Frees,x ((A ◦ B)g ) = Frees,x (Ag ◦ B g ) = Frees,x (Ag ) · Frees,x (B g ) = Frees,x (A) · Frees,x (B) = Frees,x (A ◦ B),

Frees,x ((A ∨ B)g ) = Frees,x (¬Ag → ¬B g → ⊥) = Frees,x (Ag ) · Frees,x (B g ) = Frees,x (A) · Frees,x (B) = Frees,x (A ∨ B),

Frees,x ((∀y A)g ) = Frees,x (∀y Ag ), which is: 0, if x = y or [x ≠ y & y ∈ {y1 , . . . , ym }]; 1, if x ≠ y & x ∉ FV(Ag ) \ {y}; and Frees,x (Ag ), if x ≠ y & y ∉ {y1 , . . . , ym } & x ∈ FV(Ag ). Since FV(Ag ) = FV(A) by (ii) and Frees,x (Ag ) = Frees,x (A) by the inductive hypothesis, this is exactly the case distinction computing Frees,x (∀y A), hence Frees,x ((∀y A)g ) = Frees,x (∀y A).

Frees,x ((∃y A)g ) = Frees,x (¬∀y ¬Ag ) = Frees,x (∀y ¬Ag ), which is: 0, if x = y or [x ≠ y & y ∈ {y1 , . . . , ym }]; 1, if x ≠ y & x ∉ FV(¬Ag ) \ {y}; and Frees,x (¬Ag ), if x ≠ y & y ∉ {y1 , . . . , ym } & x ∈ FV(¬Ag ). Since FV(¬Ag ) = FV(Ag ) = FV(A) and Frees,x (¬Ag ) = Frees,x (Ag ) = Frees,x (A), this equals Frees,x (∃y A).
(iv) Let A ∈ Prime. If A = ⊥, then (⊥[x := s])g = ⊥g = ⊥ = ⊥g [x := s]. If A = R ∈ Rel(0) \ {⊥} and if A = R(t1 , . . . , tn ), then we get, respectively,

(R[x := s])g = Rg = ¬¬R = (¬¬R)[x := s] = Rg [x := s],

(R(t1 , . . . , tn )[x := s])g = R(t1 [x := s], . . . , tn [x := s])g
= ¬¬R(t1 [x := s], . . . , tn [x := s])
= (¬¬R(t1 , . . . , tn ))[x := s]
= R(t1 , . . . , tn )g [x := s].
If Frees,x (A) = 0 or Frees,x (B) = 0, then Frees,x (A ◦ B) = 0, and hence

((A ◦ B)[x := s])g = (A ◦ B)g = Ag ◦ B g = (Ag ◦ B g )[x := s] = (A ◦ B)g [x := s],

since by (iii) we get Frees,x ((A ◦ B)g ) = Frees,x (A ◦ B) = 0. Suppose next that Frees,x (A) =
1 = Frees,x (B), hence Frees,x (Ag ) = 1 = Frees,x (B g ). By the inductive hypotheses we get

((A ◦ B)[x := s])g = (A[x := s] ◦ B[x := s])g
                   = (A[x := s])g ◦ (B[x := s])g
                   = Ag [x := s] ◦ B g [x := s]
                   = (Ag ◦ B g )[x := s]
                   = (A ◦ B)g [x := s].
If Frees,x (A) = 0 or Frees,x (B) = 0, then Frees,x (A ∨ B) = 0, and hence

((A ∨ B)[x := s])g = (A ∨ B)g = (A ∨ B)g [x := s],

since by (iii) we get Frees,x ((A ∨ B)g ) = Frees,x (A ∨ B) = 0. Suppose next that Frees,x (A) =
1 = Frees,x (B), hence Frees,x (Ag ) = 1 = Frees,x (B g ). By the inductive hypotheses we get

((A ∨ B)[x := s])g = (A[x := s] ∨ B[x := s])g
                   = ¬(A[x := s])g → ¬(B[x := s])g → ⊥
                   = ¬Ag [x := s] → ¬B g [x := s] → ⊥
                   = (Ag ∨˜ B g )[x := s]
                   = (A ∨ B)g [x := s].

If Frees,x (∀y A) = 0 = Frees,x ((∀y A)g ) = Frees,x (∀y Ag ), then

((∀y A)[x := s])g = (∀y A)g = (∀y A)g [x := s].

If Frees,x (∀y A) = 1 = Frees,x ((∀y A)g ) = Frees,x (∀y Ag ), then

((∀y A)[x := s])g = (∀y A[x := s])g
                  = ∀y (A[x := s])g
                  = ∀y (Ag [x := s])
                  = (∀y Ag )[x := s]
                  = (∀y A)g [x := s].

If Frees,x (∃y A) = 0 = Frees,x ((∃y A)g ), then

((∃y A)[x := s])g = (∃y A)g = (∃y A)g [x := s].

If Frees,x (∃y A) = 1 = Frees,x ((∃y A)g ) = Frees,x (¬∀y ¬Ag ), then

((∃y A)[x := s])g = (∃y A[x := s])g
                  = ¬∀y ¬(A[x := s])g
                  = ¬∀y ¬(Ag [x := s])
                  = ¬∀y ((¬Ag )[x := s])
                  = (¬∀y ¬Ag )[x := s]
                  = (∃y A)g [x := s].

Because of Proposition 2.5.2(i), the Gödel-Gentzen translation is also called the negative
translation. Since Ag ∈ Form∗ , by Theorem 2.2.10 we get that `∗c ¬¬Ag → Ag . For the formulas
in Form∗ ∩ Formg , however, one gets the minimal derivability of their double-negation elimination.

Corollary 2.10.5. ∀A∈Form (` ¬¬Ag → Ag ).

Proof. Immediate from Propositions 2.5.2(i) and 2.5.3.

Proposition 2.10.6. (i) ∀A∈Form (`c A ↔ Ag ).

(ii) ∀A∈Form∗ (`∗c A ↔ Ag ).

Proof. Exercise.
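The clauses of the Gödel-Gentzen translation are plain structural recursion, so they can be animated mechanically. The following is a minimal Python sketch; the tuple encoding of formulas is a hypothetical choice of ours for illustration, not part of the notes. It also checks on examples that the translation lands in the negative fragment (Proposition 2.5.2(i)):

```python
# Hypothetical tuple encoding of formulas, for illustration only:
# ("bot",), ("rel", name, args), ("imp", A, B), ("and", A, B),
# ("or", A, B), ("all", x, A), ("ex", x, A).

def neg(a):
    return ("imp", a, ("bot",))            # ¬A abbreviates A → ⊥

def g(a):
    """The Gödel-Gentzen translation A ↦ A^g, clause by clause."""
    tag = a[0]
    if tag == "bot":
        return a                           # ⊥^g = ⊥
    if tag == "rel":
        return neg(neg(a))                 # R(t)^g = ¬¬R(t)
    if tag in ("imp", "and"):
        return (tag, g(a[1]), g(a[2]))     # (A ◦ B)^g = A^g ◦ B^g
    if tag == "or":                        # (A ∨ B)^g = ¬A^g → ¬B^g → ⊥
        return ("imp", neg(g(a[1])), ("imp", neg(g(a[2])), ("bot",)))
    if tag == "all":
        return ("all", a[1], g(a[2]))      # (∀x A)^g = ∀x A^g
    if tag == "ex":
        return neg(("all", a[1], neg(g(a[2]))))   # (∃x A)^g = ¬∀x ¬A^g
    raise ValueError(tag)

def negative(a):
    """Membership in the negative fragment: no ∨, no ∃, atoms only negated."""
    tag = a[0]
    if tag == "bot":
        return True
    if tag == "imp" and a[1][0] == "imp" and a[1][1][0] == "rel" \
            and a[1][2] == ("bot",) and a[2] == ("bot",):
        return True                        # ¬¬R counts as a negated atom
    if tag in ("imp", "and"):
        return negative(a[1]) and negative(a[2])
    if tag == "all":
        return negative(a[2])
    return False                           # unnegated atoms, ∨ and ∃ fail

P, Q = ("rel", "P", ("x",)), ("rel", "Q", ())
print(negative(g(("or", P, ("ex", "x", Q)))))   # True
```

The recursion mirrors Definition 2.10.1 exactly; running it on formulas with ∨ and ∃ makes the elimination of the positive connectives visible.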

2.11 The Gödel-Gentzen functor


In this section we show that the Gödel-Gentzen translation generates a functor Formc → Form
i.e., not only formulas are translated into formulas in the negative fragment, but also classical
derivations are translated into minimal derivations. The first step in the proof of the functorial
character of the Gödel-Gentzen translation is the existence of the following mapping.

Proposition 2.11.1. There is a function dne : Form− → DV (A) such that dne(A) : ¬¬A → A,
for every A ∈ Form− .

Proof. We use recursion on Form− and we rewrite accordingly the proof of Proposition 2.5.3
(the details are left as an exercise).

For simplicity we use the same symbol as for the Gödel-Gentzen translation to denote the
function that translates classical derivations into minimal ones.

Theorem 2.11.2. There is a function g : DcV (A) → DV g (Ag ), where

DcV (A) 3 Mc 7→ Mcg ∈ DV g (Ag ),

V g = {ug : C g | u : C ∈ V }.

Proof. By recursion2 on DcV (A) we map a classical derivation Mc in DcV (A) to a minimal
derivation Mcg in DV g (Ag ) by mapping each introduction-rule of DcV (A) to an element of
DV g (Ag ).

(DNEA )   u : ¬¬A
          ―――――――― DNEA        7→    dne(Ag ) : ¬¬Ag → Ag ,
              A

where, as Ag ∈ Form− and dne : Form− → DV (A), we get dne(Ag ) ∈ D{ug : ¬¬Ag =(¬¬A)g } (Ag ).

(1A )     a : A
          ――――― 1A             7→    ag : Ag
            A                        ―――――――― 1Ag
                                        Ag

(→+ ) If we consider the following left, classical derivation Mc and if we suppose that Ncg is
already defined i.e.,

[u : A] u1 : C1 . . . un : Cn
ug : Ag u1 g : C1 g . . . un g : Cn g
| Nc and | Ncg
B + Bg
A→B → u

we define as Mcg the following minimal derivation

[ug : Ag ] u1 g : C1 g . . . un g : Cn g
| Ncg
Bg + g
A → Bg → u
g

as Ag → B g = (A → B)g .
(→− ) If we consider the following classical derivation Mc

u1 : C1 . . . un : Cn v1 : D1 . . . vm : Dm
| Nc | Kc
A→B A
B →−
2
Actually, what we need do here, as in the proof of Proposition 2.11.1 is the following: first we define by
recursion a function in the set of trees of formulas, and then by induction we prove that the value of this
function is a minimal derivation of Ag from assumptions V g . For simplicity, here we perform the two steps
simultaneously.

and if we suppose that Ncg and Kcg have been defined i.e.,
u1 g : C1 g . . . un g : Cn g v1 g : D1 g . . . vm g : Dm g
| Ncg and | Kcg
(A → B)g Ag
we define Mcg to be the following minimal derivation
u1 g : C1 g . . . un g : Cn g v1 g : D1 g . . . vm g : Dm g
| Ncg | Kcg
Ag → B g Ag
Bg →−

(∀+ ) If we consider the following left, classical derivation Mc , and if we suppose that the
minimal derivation Ncg is already defined i.e.,
u1 : C1 . . . un : Cn
u1 g : C1 g . . . un g : Cn g
| Nc and | Ncg
A + Ag
∀x A ∀ x
we define Mcg to be the following minimal derivation
u1 g : C1 g . . . un g : Cn g
| Ncg
Ag +
∀x Ag ∀ x
where the variable condition x ∉ FV(C1g ) & . . . & x ∉ FV(Cng ) is satisfied, since by Propo-
sition 2.10.4(ii) FV(Ci ) = FV(Ci g ), for every i ∈ {1, . . . , n}, and the variable condition
x ∉ FV(C1 ) & . . . & x ∉ FV(Cn ) is satisfied in Nc .
(∀− ) If we consider the following left, classical derivation Mc , and if we suppose that Ncg is
already defined i.e.,
u1 : C1 . . . un : Cn
u1 g : C1 g . . . un g : Cn g
| Nc
and | Ncg
∀x A r ∈ Term −
∀ (∀x A)g = ∀x Ag
A(r)
we define Mcg to be the following minimal derivation
u1 g : C1 g . . . un g : Cn g
| Ncg
∀x Ag r ∈ Term −

A (r) = A(r)g
g

where by Proposition 2.10.4(iv) we get the required equality Ag (r) = A(r)g .


(∧+ ) If we consider the following classical derivation Mc

u1 : C1 . . . un : Cn      v1 : D1 . . . vm : Dm
| Nc                       | Kc
A    B +
A∧B ∧

and if we suppose that the following minimal derivations Ncg and Kcg are already defined i.e.,

u1 g : C1 g . . . un g : Cn g v1 g : D1 g . . . vm g : Dm g
| Ncg and | Kcg
Ag Bg
we define Mcg to be the following minimal derivation

u1 g : C1 g . . . un g : Cn g v1 g : D1 g . . . vm g : Dm g
| Ncg | Kcg
Ag Bg +
g g g ∧
A ∧ B = (A ∧ B)

(∧− ) If we consider the following classical derivation Mc

u1 : C1 . . . un : Cn [u : A] [v : B]
| Nc | Kc
A∧B C −
∧ u, v
C
and if we suppose that the minimal derivations Ncg and Kcg are already defined i.e.,

u1 g : C1 g . . . un g : Cn g ug : Ag v g : B g
| Ncg and | Kcg
(A ∧ B)g Cg

we define Mcg to be the following minimal derivation

u1 g : C1 g . . . un g : Cn g [ug : Ag ] [v g : B g ]
| Ncg | Kcg
Ag ∧ B g Cg − g g
∧ u ,v
Cg
(∨+
0 ) If we consider the following, left classical derivation Mc , and if we suppose that the
minimal derivation Ncg is already defined i.e.,

u1 : C1 . . . un : Cn
u1 g : C1 g . . . un g : Cn g
| Nc and | Ncg
A ∨+ Ag
A∨B 0
we define Mcg to be the following minimal derivation of (A ∨ B)g = ¬Ag → ¬B g → ⊥

u1 g : C1 g . . . un g : Cn g
| Ncg
u : Ag → ⊥ Ag
⊥ →−
+ g
¬B g → ⊥ → v : +¬B
¬Ag → ¬B g → ⊥ → u
For the rule ∨+
1 we proceed similarly.

(∨− ) For simplicity we use the notation w : Γ for w1 : C1 . . . wn : Cn , w0 : ∆ for w1 0 : D1 . . . wm 0 : Dm ,
and w00 : E for w1 00 : E1 . . . wk 00 : Ek . Let the following classical derivation Mc

w: Γ [u : A] w0 : ∆ [v : B] w00 : E
| Nc | Kc | Lc
A∨B C C −
∨ u, v.
C

We suppose that the minimal derivations Ncg , Kcg and Lgc are already defined i.e.,

w g : Γg ug : Ag w0 g : ∆g v g : B g w00 g : E g
| Ncg | Kcg | Lgc
(A ∨ B)g Cg Cg.

By Proposition 2.6.6 there is a minimal derivation of the formula

˜ B → C.
D(A, B, C) = (¬¬C → C) → (A → C) → (B → C) → A ∨

Hence we can define a function f : Form3 → DV (A) with

f (A, B, C) ∈ D(D(A, B, C)); (A, B, C) ∈ Form3 .

By Proposition 2.11.1 there is a minimal derivation dne(C g ) : ¬¬C g → C g . If

D0 (Ag , B g , C g ) = (Ag → C g ) → (B g → C g ) → Ag ∨
˜ Bg → C g ,

D00 (Ag , B g , C g ) = (B g → C g ) → Ag ∨
˜ Bg → C g ,

we define Mcg to be the following derivation of C g from assumptions Γg , ∆g and E g


u0 g : ¬¬C g
| dne(C g ) [ug : Ag ]w0 g ∈∆g
| f (Ag ,B g ,C g ) (C g + 0g | Kcg [v g : B g ]w00 g ∈E g
D(Ag ,B g ,C g ) ¬¬C g → C g → u Cg | Lgc
D0 (Ag , B g , C g ) A → Cg
g
Cg w g : Γg
D00 (Ag , B g , C g ) B → Cg
g | Ncg
Ag ∨
˜ Bg → C g Ag ∨˜ Bg
Cg

In the above definition of Mcg we write all intermediate derivations as values of appropriate
functions, in order to be compatible to the formulation of the recursion theorem for DcV (A)
that we employ in the proof.
(∃+ ) If we consider the following left, classical derivation Mc , and if we suppose that the
minimal derivation Ncg is already defined i.e.,

u1 : C1 . . . un : Cn
u1 g : C1 g . . . un g : Cn g
| Nc
and | Ncg
r ∈ Term A(r) +
∃ A(r)g = Ag (r)
∃x A

we define Mcg to be the following minimal derivation of (∃x A)g = ∀x (Ag → ⊥) → ⊥


u1 g : C1 g . . . un g : Cn g
[u : ∀x (Ag → ⊥)] r ∈ Term − | Ncg

Ag (r) → ⊥ Ag (r)
⊥ →−
→ u.+
∀x (Ag → ⊥) → ⊥
(∃− ) Again for simplicity we use the notation w : Γ for w1 : C1 . . . wn : Cn , and w0 : ∆ for
w1 0 : D1 . . . wm 0 : Dm . Let the following classical derivation Mc

w: Γ [u : A] w0 : ∆
| Nc | Kc
∃x A B −
∃ x, u
B
with x ∈ / FV(B). Suppose that the derivations Ncg and Kcg are defined i.e.,
/ FV(∆) and x ∈
w g : Γg ug : Ag w0 g : ∆g
| Ncg and | Kcg
˜ x Ag
∃ Bg
/ FV(B g ) = FV(B), by Proposition 2.6.3(ii) there is a derivation Λ of
As x ∈
˜x Ag → ∀x (Ag → B g ) → B g .
Ex (Ag , B g ) = (¬¬B g → B g ) → ∃

Hence we can define a function gx : Form → Formx → D(A) with

gx (A, B) ∈ D(Ex (A, B)); A ∈ Form & B ∈ Formx ,

as x must not be free in B. By Proposition 2.11.1 let the derivation dne(B g ) : ¬¬B g → B g . If

E 0 x (Ag , B g ) = ∃
˜x Ag → ∀x (Ag → B g ) → B g ,

E 00 x (Ag , B g ) = ∀x (Ag → B g ) → B g ,
we define Mcg to be the following minimal derivation of B g
u0 g : ¬¬B g
| dne(B g ) [ug : Ag ] w0 g : ∆g
| gx (Ag , B g ) (B g w: Γg
+ 0g | Kcg
Ex (Ag , B g ) ¬¬B g → B g → u | Ncg
Bg + g
A → B g → +u
E 0 x (Ag , B g ) ˜ x Ag
∃ g
→− ∀ x
E 00 x (Ag , B g ) ∀x (Ag → B g )
Bg →−

from assumptions Γg and ∆g . Note that the variable condition is satisfied in the above use of
∀+ x, since x ∈
/ FV(∆g ) = FV(∆).

Definition 2.11.3. The Gödel-Gentzen functor GG : Formc → Form is defined by GG0 (A) =
Ag , for every A ∈ Form, and for every arrow Mc : A → B in Formc let

GG1 (Mc : A → B) = Mcg : Ag → B g .



The fact that GG is a functor follows from the equalities

GG1 (1A ) = 1gA = 1Ag = 1GG0 (A) ,

GG1 (N ◦ M ) = (N ◦ M )g = N g ◦ M g = GG1 (N ) ◦ GG1 (M ),


where the equality (N ◦ M )g = N g ◦ M g follows immediately by the thinness of Form.

2.12 Applications of the Gödel-Gentzen functor


Corollary 2.12.1. Let Γ ⊆ Form and A ∈ Form.
(i) Γ `c A ⇒ Γg ` Ag .
(ii) Γ ` A ⇒ Γg ` Ag .
Proof. (i) Let C1 , . . . , Cn ∈ Γ such that C1 , . . . , Cn `c A i.e., there is a classical derivation Mc
in Dc{u1 : C1 ,...,un : Cn } (A). By Theorem 2.11.2 the derivation Mcg is in D{u1g : C1g ,...,ugn : Cng } (Ag ) i.e.,
C1g , . . . , Cng ` Ag , hence Γg ` Ag .
(ii) It follows immediately from (i) and the fact that Γ ` A ⇒ Γ `c A.

Proposition 2.12.2. (i) GG0 (A ∧ B) ∼= GG0 (A) ∧ GG0 (B).
(ii) GG0 (>) ∼= >.
(iii) The Gödel-Gentzen translation defines a functor GGgrp : Formcgrp → Formgrp such that

∀A∈Form ∃B∈Form (B g = GGgrp0 (B) ∼=c A).

Proof. Exercise.

Definition 2.12.3. (i) A logic, minimal, intuitionistic, or classical, is consistent, if there is


no derivation of ⊥ within it.
(ii) A logic, minimal, intuitionistic, or classical, is inconsistent, if there is a derivation of ⊥
within it.
(iii) A pair of logics (`, `c ), (`, `i ), or (`c , `i ), is a pair of equiconsistent logics, if the
consistency of one logic of the pair is equivalent to the consistency of the other.
At the moment, we cannot prove the consistency of the logics studied. What we can show
though, is that all possible pairs of logics studied here are pairs of equiconsistent logics.
Corollary 2.12.4. (i) If minimal logic is consistent, then classical logic and intuitionistic
logic are consistent.
(ii) If classical logic is consistent, then minimal logic and intuitionistic logic are consistent.
(iii) If intuitionistic logic is consistent, then minimal logic and classical logic are consistent.
(iv) The pairs (`, `c ), (`, `i ), and (`c , `i ) are pairs of equiconsistent logics.
Proof. (i) If in Corollary 2.12.1 we set Γ = ∅ and A = ⊥, we get

(∗) `c ⊥ ⇒ ` ⊥g = ⊥.

Suppose that there is a derivation `c ⊥. Then there is a derivation ` ⊥, which contradicts


our hypothesis. Hence, there is no `c ⊥. We have already shown the implications

(∗∗) ` ⊥ ⇒ `i ⊥ ⇒ `c ⊥.

By (∗∗) and (∗), `i ⊥ ⇒ `c ⊥ ⇒ ` ⊥; hence, if there is a derivation `i ⊥, there is a derivation ` ⊥.
(ii) It follows immediately from (∗∗).
(iii) The consistency of minimal logic follows from (∗∗), and the consistency of classical logic
follows from (∗) and (∗∗).
(iv) It follows immediately from (i)-(iii).

Definition 2.12.5. The height |M | of a derivation M is the maximum length of a branch


in M , where if B is a branch of M , then its length is the number of its nodes minus 1.
One can define (exercise) accordingly the functions |.| : DV (A) → N, |.| : DiV (A) → N and
|.| : DcV (A) → N, where for simplicity we use the same symbol for all of them.

E.g., for the following derivation tree M

∀y (⊥ → Ay) y u1 : ¬Ax u2 : Ax
⊥ → Ay ⊥
Ay
∀x Ax → B ∀y Ay
∀x ¬(Ax → B) x B
→+ u2
¬(Ax → B) Ax → B

→+ u1
¬¬Ax

we have that |M | = 7, since the length of its longest branch

{¬¬Ax, ⊥, Ax → B, B, ∀y Ay, Ay, ⊥, Ax}

is 8 − 1 = 7. Clearly, |MA | = 1, and |M | ≥ 2, for all other elements M of D.

Corollary 2.12.6. ∀Mc ∈DcV (A) (|Mcg | ≥ |Mc |).

Proof. By induction on DcV (A) and inspection of the proof of Theorem 2.11.2.

Proposition 2.12.7. There are functions g0c , g1c : Form → DcV (A) such that

g0c (A) : Ag → A & g1c (A) : A → Ag ; A ∈ Form.

Proof. By recursion on Form and by inspection of the proof of Proposition 2.10.6(i).

The next theorem is the converse to Theorem 2.11.2.

Theorem 2.12.8. There is a function c : DV g (Ag ) → DcV (A), where

DV g (Ag ) 3 M g 7→ (M g )c ∈ DcV (A).

Proof. We map each minimal derivation M g of Ag from assumptions C1g , . . . , Cng

u1 g : C1 g . . . un g : Cn g
| Mg
Ag

to the following classical derivation (M g )c of A from assumptions C1 , . . . , Cn :

[u1 g : C1 g ] . . . [un g : Cn g ]
| Mg un : Cn
Ag | g1c (Cn )
→ + ug
g
C n → Ag n
Cn g u1 : C1
u0 g : Ag Ag →−
| g0c (A) ... | g1c (C1 )
... →+ ug1
A C1 g → Ag C1 g
Ag → A → u
+ 0g
Ag →−
A →−

Notice that the above function is not defined by recursion (why is this not possible?).
Moreover, if M g : Ag → B g , then (M g )c : A → B. Despite this “functorial” behaviour of
the function c , we cannot define a functor Form → Formc having c as its 1-part (why?).
Consequently, the following compositions are defined:

DcV (A) →g DV g (Ag ) →c DcV (A),    Mc 7→ (Mc )g 7→ [(Mc )g ]c ,

DV g (Ag ) →c DcV (A) →g DV g (Ag ),    M g 7→ (M g )c 7→ [(M g )c ]g .

Corollary 2.12.9. If Γ ⊆ Form and A ∈ Form, then Γg ` Ag ⇒ Γ `c A.

Proof. We proceed as in the proof of Corollary 2.12.1.

The following translation is a variation of the Gödel-Gentzen translation.

Definition 2.12.10. The Kolmogorov translation is the unique mapping

k : Form → Form,    A 7→ Ak ,

defined by recursion on Form through the following clauses:

⊥k = ⊥,
Rk = ¬¬R, R ∈ Rel(0) \ {⊥},
(R(t1 , . . . , tn ))k = ¬¬R(t1 , . . . , tn ), R ∈ Rel(n) , n ∈ N+ , t1 , . . . , tn ∈ Term,
(A □ B)k = ¬¬(Ak □ B k ), □ ∈ {→, ∧, ∨},
(∆x A)k = ¬¬(∆x Ak ), ∆ ∈ {∀, ∃}.

If Γ ⊆ Form, let Γk = {C k | C ∈ Γ}.
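Like the Gödel-Gentzen translation, the Kolmogorov translation is structural recursion and can be animated mechanically; it simply prefixes ¬¬ to every subformula other than ⊥. A minimal Python sketch, with a hypothetical tuple encoding of formulas chosen by us only for illustration:

```python
# Hypothetical tuple encoding of formulas, for illustration only:
# ("bot",), ("rel", name, args), ("imp"/"and"/"or", A, B), ("all"/"ex", x, A).

def neg(a):
    return ("imp", a, ("bot",))            # ¬A abbreviates A → ⊥

def k(a):
    """Kolmogorov translation: ¬¬ in front of every subformula except ⊥."""
    tag = a[0]
    if tag == "bot":
        return a
    if tag == "rel":
        return neg(neg(a))
    if tag in ("imp", "and", "or"):
        return neg(neg((tag, k(a[1]), k(a[2]))))
    if tag in ("all", "ex"):
        return neg(neg((tag, a[1], k(a[2]))))
    raise ValueError(tag)

P, Q = ("rel", "P", ()), ("rel", "Q", ())
# (P ∨ Q)^k = ¬¬(¬¬P ∨ ¬¬Q) still contains ∨, illustrating that
# Form^k is not contained in Form* (Proposition 2.12.11(i)).
print(k(("or", P, Q)) == neg(neg(("or", k(P), k(Q)))))   # True
```

Note that, unlike g, the translation k keeps ∨ and ∃ in the formula, only guarded by double negations.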



Proposition 2.12.11. (i) Formk ⊈ Form∗ .

(ii) ∀A∈Form (` Ag ↔ Ak ).

(iii) ∀A∈Form (` ¬¬Ak → Ak ).

(iv) The set of formulas for which there is a minimal derivation of their double-negation-
elimination is not included in Form− .
Proof. (i) Clearly (A ∨ B)k , (∃x A)k ∈/ Form∗ .
(ii) and (iii) are exercises.
(iv) It follows from (i) and (iii) and the fact that Form− ⊂ Form∗ .

Corollary 2.12.12. (i) The Kolmogorov translation defines a functor K : Formc → Form

K0 (A) = Ak ,

K1 (Mc : A → B) : Ak → B k .
(ii) If Γ ⊆ Form and A ∈ Form, then Γ `c A ⇒ Γk ` Ak .
(iii) If Γ ⊆ Form and A ∈ Form, then Γ ` A ⇒ Γk ` Ak .
Proof. Exercise.

The Gödel-Gentzen translation was introduced by Gödel in [10], and independently by
Gentzen in [8]. The Kolmogorov translation was introduced even earlier in [13], but it was
known neither to Gödel nor to Gentzen.

2.13 The Gödel-Gentzen translation as a continuous function


Definition 2.13.1. If A ∈ Form, we define the set

OA = {C ∈ Form | A ` C} = {C ∈ Form | ` A → C}.

Lemma 2.13.2. Let A, C ∈ Form.


(i) A ∈ OA .
(ii) C ∈ OA ⇔ OC ⊆ OA .
(iii) ` A ↔ C ⇔ OC = OA .
Proof. (i) Since ` A → A, we get A ∈ OA .
(ii) Let D ∈ OC i.e., ` C → D. Since by hypothesis we also have that ` A → C, then by the
cut-rule of Proposition 1.12.1 for Γ = ∆ = {A}
{A} ` C, {A} ∪ {C} ` D
cut
{A} ` D
we get ` A → D i.e., D ∈ OA . If OC ⊆ OA , then by (i) we get C ∈ OC , hence C ∈ OA .
(iii) By (ii) OC = OA is equivalent to C ∈ OA and A ∈ OC , hence to ` A ↔ C.

Proposition 2.13.3. The collection of sets

B = {OA | A ∈ Form} ∪ {∅, Form}

is a basis for a topology T (B) on Form.



Proof. For this it suffices to show3 that if A, B, C ∈ Form are such that C ∈ OA ∩ OB , there is
some D ∈ Form such that
C ∈ OD ⊆ OA ∩ OB .
The hypothesis C ∈ OA ∩ OB means that A ` C and B ` C, hence by Lemma 2.13.2(ii) we get
OC ⊆ OA and OC ⊆ OB . Hence C ∈ OC ⊆ OA ∩ OB , and we may take D = C.
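Minimal derivability is not decidable by a finite table, but the basis condition in this proof can still be checked mechanically on a toy stand-in for `. Everything below is our illustration, not part of the notes: we replace ` by classical tautological consequence over two propositional atoms and verify that every C in OA ∩ OB satisfies C ∈ OC ⊆ OA ∩ OB, exactly as in the proof:

```python
from itertools import product

ATOMS = ("p", "q")

def valuations():
    for bits in product((False, True), repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def evaluate(a, v):
    """a is ('atom', name), ('imp', A, B) or ('and', A, B)."""
    tag = a[0]
    if tag == "atom":
        return v[a[1]]
    if tag == "imp":
        return (not evaluate(a[1], v)) or evaluate(a[2], v)
    if tag == "and":
        return evaluate(a[1], v) and evaluate(a[2], v)
    raise ValueError(tag)

def entails(a, c):
    """Toy stand-in for A ` C: classical tautological consequence."""
    return all(evaluate(c, v) for v in valuations() if evaluate(a, v))

def O(a, stock):
    """O_A restricted to a finite stock of formulas."""
    return frozenset(c for c in stock if entails(a, c))

p, q = ("atom", "p"), ("atom", "q")
stock = (p, q, ("and", p, q), ("imp", p, q), ("imp", q, p))
A, B = ("and", p, q), p
inter = O(A, stock) & O(B, stock)
# basis condition of Proposition 2.13.3, witnessed with D = C:
ok = all(entails(C, C) and O(C, stock) <= inter for C in inter)
print(ok)                                  # True
```

The same check runs for any finite stock of formulas; only the consequence relation is a simplifying assumption.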

We denote the resulting topological space as F = (Form, T (B)). It is easy to see that this
space does not behave well with respect to the separation properties. E.g., it is not T1 , since
A ∧ A is in the complement Form \ {A} of {A}, which is not open; if there was some C ∈ Form
such that A ∧ A ∈ OC ⊆ Form \ {A}, then OA∧A ⊆ OC ⊆ Form \ {A}, but A ∈ OA∧A and
A∈/ Form \ {A}.

Proposition 2.13.4. The Gödel-Gentzen translation g : Form → Form and the Kolmogorov
translation k : Form → Form are continuous functions from F to F.

Proof. We prove the continuity of the Gödel-Gentzen translation and, because of Corol-
lary 2.12.12(ii), the proof of the continuity of the Kolmogorov translation is similar.
By definition, a function f : X → Y between two topological spaces X, Y is continuous, if
the inverse image f −1 (O) of every open set O in Y is open in X. If B is a basis for Y , it is
easy to see that f is continuous if and only if the inverse image f −1 (B) of every basic open set
B in B is open in X. Clearly, g−1 (Form) = Form ∈ T (B) and g−1 (∅) = ∅ ∈ T (B). If A ∈ Form,
g−1 (OA ) = {B ∈ Form | B g ∈ OA } = {B ∈ Form | A ` B g }.

Let B ∈ g−1 (OA ) i.e., A ` B g . We show that

B ∈ OB ⊆ g−1 (OA ),

hence the set g−1 (OA ) is open, as the union of the open sets OB , for every B ∈ g−1 (OA ). The
membership B ∈ OB follows from Lemma 2.13.2(i). Next we fix some C ∈ OB i.e., ` B → C,
and we show that C ∈ g−1 (OA ) i.e., A ` C g . By Corollary 2.12.1(ii) we get

` B → C ⇒ ` Bg → C g ,

hence the following derivation tree


A
| Mg |N
Bg → C g Bg
Cg →−

is a derivation A ` C g .

3
Here we use the fact that if a collection B of subsets of some set X satisfies the property: “for every x ∈ X
and Bi , Bj ∈ B with x ∈ Bi ∩ Bj , there is some Bk ∈ B such that x ∈ Bk ⊆ Bi ∩ Bj ”, then B is a basis for
some topology T (B) on X. This topology T (B) is unique and the smallest topology on X that includes B
(see [7], Theorem 3.2).
Chapter 3

Models

It is an obvious question to ask whether the logical rules we have been considering suffice i.e.,
whether we have forgotten some necessary rules. To answer this question we first have to fix
the meaning of a formula i.e., provide a semantics for the syntax developed in the previous
chapters. This will be done here by means of fan models. Using this concept of a model we
will prove soundness and completeness.

3.1 Trees, fans, and spreads


Definition 3.1.1. Let X be an inhabited set i.e., a set with a given element (such a set is a
non-empty set in a strong, positive sense). We define

X n = { {∅} , n = 0,
      { F({0, . . . , n − 1}, X) , n > 0,

where F({0, . . . , n − 1}, X) denotes the set of functions u from {0, . . . , n − 1} to X. Such a
function is also understood as an n-tuple of elements of X i.e.,
 
u = u(0), u(1), . . . , u(n − 1) = u0 , u1 , . . . , un−1 ,

and we call u a node of elements of X, or a node of X <N , where

X <N = ⋃n∈N X n .

We also use the symbol ⟨⟩ for the empty node. The length |u| of a node of X <N is defined by

|u| = { 0 , u = ∅,
      { n , u ∈ X n & n > 0.

If u, w ∈ X <N , the relation “u is a (strict) initial segment of w” is defined by

u ≺ w ⇔ |u| < |w| & ∀i∈{0,...,|u|−1} (ui = wi ).

If u ≺ w, then w is a (proper, or strict) successor of u. The relation u ≼ w ⇔ u ≺ w or u = w
is a partial order. If u, w ∈ X <N \ {∅}, their concatenation u ∗ w is the node

u ∗ w = (u0 , . . . , u|u|−1 , w0 , . . . , w|w|−1 ).



If one of them is the empty node, then their concatenation is the other node. A sequence of
elements of X is an element α ∈ X N = F(N, X), and if n ∈ N, the n-th initial part ᾱ(n) of α
is defined by

ᾱ(n) = { ∅ , n = 0,
       { (α0 , α1 , . . . , αn−1 ) , n > 0.
A tree T on X is a subset of X <N , which is closed under initial segments i.e.,

∀u,w∈X <N (w ∈ T & u ≺ w ⇒ u ∈ T ).

An element of T is called a node of T . An infinite path of T is a sequence α of elements of X


i.e., α ∈ X N such that
∀n∈N (ᾱ(n) ∈ T ).
The body [T ] of T is the set of its infinite paths. If u ∈ T , the set Succ(u) of immediate
successor nodes of u is defined by

Succ(u) = {w ∈ T | u ≺ w & |w| = |u| + 1}.

A tree T is (in)finite, if T is an (in)finite set. A tree T is called well-founded, if it has no
infinite path. A tree T is called finitely branching, or a fan, if Succ(u) is a finite set, for every
u ∈ T . Otherwise, T is called infinitely branching.
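The finite-sequence machinery of Definition 3.1.1 is easy to model concretely. Below is a minimal Python sketch (the names and the encoding are ours, not from the notes): nodes are tuples, a finite tree is a prefix-closed set of tuples, sequences are functions on the natural numbers:

```python
def is_initial_segment(u, w):
    """u ≺ w: u is a strict initial segment of w (nodes are tuples)."""
    return len(u) < len(w) and w[:len(u)] == u

def is_tree(t):
    """A (finite) tree on X: a set of tuples closed under initial segments."""
    return all(u[:n] in t for u in t for n in range(len(u)))

def successors(t, u):
    """Succ(u): the set of immediate successor nodes of u in t."""
    return {w for w in t if is_initial_segment(u, w) and len(w) == len(u) + 1}

def initial_part(alpha, n):
    """The n-th initial part ᾱ(n) of a sequence α : N → X."""
    return tuple(alpha(i) for i in range(n))

# The first three levels of the Cantor tree 2^{<N}:
cantor3 = {(a1, a2, a3)[:n] for a1 in (0, 1) for a2 in (0, 1)
           for a3 in (0, 1) for n in range(4)}
alpha = lambda i: i % 2                    # the sequence 0, 1, 0, 1, ...
print(is_tree(cantor3))                    # True
print(initial_part(alpha, 3) in cantor3)   # True
```

Only finite truncations of trees can be represented this way, of course; infinite paths are handled through their initial parts ᾱ(n).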
Example 3.1.2. The set X <N is a tree on X. Its body [X <N ] is the set X N .

Example 3.1.3. If X = N, the tree N<N on N is called the Baire tree. Its body [N<N ] is
the set NN , which is called the Baire space. Clearly, the Baire tree is infinitely branching.

Example 3.1.4. If X = 2 = {0, 1}, the tree 2<N on 2 is called the Cantor tree. Its body
[2<N ] is the set 2N , which is called the Cantor space. Clearly, the Cantor tree is a fan.
Trivially, ∅ ≺ u, for every u ∈ X <N \ {∅}, while a tree T on X is inhabited if and only if
∅ ∈ T . Notice that a node of a tree may have more than one immediate successors, but it has
always a unique immediate predecessor (defined in the obvious way). If T is a finite tree on
X, then, trivially, T is well-founded, but the converse is not true.
Proposition 3.1.5. (i) There is a well-founded, infinite tree.
(ii) An infinite fan F has an infinite path.
Proof. The proof of (i) is an exercise. The proof of (ii) uses classical logic. If u ∈ F , let u be
“good”, if there are infinitely many nodes w ∈ F with u ≺ w. Let u be “bad”, if u is not good.
What we want follows from the observation that if all immediate successor nodes of u are bad,
then u is also bad. The completion of the proof is an exercise.

Definition 3.1.6. A binary relation R ⊆ X × X on X has an infinite descending chain, if


there is α ∈ X N such that ∀n∈N (αn+1 R αn ) i.e.,

. . . α3 R α2 R α1 R α0 ,

and ≺ is called well-founded, if it has no infinite descending chain.


Clearly, <N is a well-founded relation on N, while <Z is not a well-founded relation on Z.
If T is a well-founded tree on X, the relation w R u ⇔ u ≺ w is well-founded relation on T .

Proposition 3.1.7 (Well-founded induction). If ≺ is a well-founded relation on X, then


 

∀x∈X ∀y∈X (y ≺ x ⇒ P (y)) ⇒ P (x) ⇒ ∀x∈X (P (x)).

Proof. Suppose that there is some x ∈ X such that ¬P (x). This implies the (classical)
existence of some x1 ≺ x such that ¬P (x1 ). By repeating this step (and using some form of
the axiom of choice), we get that ≺ has an infinitely descending chain, which contradicts our
hypothesis.

Definition 3.1.8. Let T be a tree on some inhabited set X. A leaf of T is a node of T without
proper successors (equivalently, without immediate successors). We denote by Leaf(T ) the set
of leaves of T . We call T a spread, if Leaf(T ) = ∅, or equivalently, if every node of T has an
immediate successor1 . A subtree T 0 of T , in symbols T 0 ≤ T , is a subset T 0 of T which is also
a tree on X. A branch A of T is a linearly ordered subtree of T i.e.,

∀u,w∈A (u  w or w  u).

A finite path of T is a finite branch of T . A bar B of a spread S on X is some B ⊆ S, such


that every infinite path of S “hits” the bar B i.e.,

∀α∈[S] ∃n∈N ᾱ(n) ∈ B .

If ᾱ(n) ∈ B, we say that the infinite path α hits the bar B at the node ᾱ(n). A bar B of S is
called uniform, if there is a uniform bound on the length of the initial part of an infinite path
that hits B i.e.,

∃n∈N ∀α∈[S] ∃m≤n ᾱ(m) ∈ B .

Clearly, a(n infinite) path is an infinite branch.

Example 3.1.9. The Baire and the Cantor tree are spreads, and for every n ∈ N the sets

Bn = {u ∈ 2<N | |u| = n}

are uniform bars of 2<N . Note that B0 = {∅} is a uniform bar of every spread.
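For a finite truncation of the Cantor tree one can check the bar Bn mechanically; an actual infinite path cannot be enumerated, so the sketch below (ours, for illustration) checks that every node of the maximal level passes through B2, with the uniform bound n = 2:

```python
from itertools import product

N = 3
# all binary nodes of length ≤ N: the Cantor tree truncated at level N
tree = {bits[:k] for bits in product((0, 1), repeat=N) for k in range(N + 1)}
B2 = {u for u in tree if len(u) == 2}      # the bar B_2

# every length-N node hits B_2 at its initial part of length 2,
# i.e. the bound on the hitting level is uniform
hits = all(bits[:2] in B2 for bits in product((0, 1), repeat=N))
print(hits)                                # True
```

This is exactly the uniformity that Proposition 3.1.11(ii) asserts for every bar of a fan which is a spread.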

Proposition 3.1.10. A tree T on X is a subtree of a spread S on X.

Proof. Since X is inhabited by some x0 , we define

S = T ∪ ⋃u∈Leaf(T ) u(x0 ),    u(x0 ) = {u ∗ (x0 , x0 , . . . , x0 ) | n ∈ N+ },

where u ∗ (x0 , x0 , . . . , x0 ) is the concatenation of u and the constant node (x0 , x0 , . . . , x0 ) of
length n. It is immediate to see that S is a spread having T as a subtree.
1
Clearly, the body of a spread is always non-empty.

Proposition 3.1.11. Let F be a fan on an inhabited set X, and G a fan and a spread on X.
(i) If all branches of F are finite, then F has a branch of maximal length.
(ii) If B is a bar of G, then B is uniform.
Proof. The proof of (i) rests on Proposition 3.1.5(ii). For the proof of both (i) and (ii) we use
classical reasoning.

Proposition 3.1.12. Let X, Y be inhabited sets, and let F be a fan on X and G a fan on
Y such that F, G are spreads.
(i) If u ∈ F , and if B(u) = {α ∈ [F ] | u ≺ α}, where u ≺ α ⇔ ∃n∈N (ᾱ(n) = u), then the
family {B(u) | u ∈ F } ∪ {∅} is a basis for a topology TF on [F ].
(ii) Let φ : F → G satisfying the following properties:

∀u,w∈F (u  w ⇒ φ(u)  φ(w)),



∀α∈[F ] lim |φ(ᾱ(n))| = +∞ .
n→+∞

Then, the function [φ] : [F ] → [G], defined by

[φ](α) = ⋁n∈N φ(ᾱ(n)),

where u ∨ w = sup {u, w}, is continuous with respect to the topologies TF and TG .
Proof. Exercise.

3.2 Fan models


For the rest L is a countable formal language i.e., the sets Rel, Fun are countable.
Definition 3.2.1. Consider the following sets

n = { ∅ , n = 0,
    { {0, . . . , n − 1} , n > 0.

If D is an inhabited set, the set Dn = F(n, D) of all functions f : n → D can be identified
with the product set Dn . Moreover, we define

Rel(n) (D) = P(Dn ),

Rel(D) = ⋃n∈N Rel(n) (D),

Fun(n) (D) = F(Dn , D),

Fun(D) = ⋃n∈N Fun(n) (D).

If n > 0, an element of Rel(n) (D) is a relation on D of arity n, and an element of Fun(n) (D)
is a function f : Dn → D. Since D0 = {∅}, we get Rel(0) (D) = P({∅}) = {∅, {∅}} = 2. The
value 0 = ∅ represents falsity, and the value 1 = {∅} represents truth. Moreover, the set
Fun(0) (D) = F({∅}, D) can be identified with D.

Definition 3.2.2. A fan model of L is a structure M = (D, F, X, i, j) satisfying the following


clauses:
(i) D, X are inhabited sets. We may also use the notation |M| for D.
(ii) F is a fan on X.
(iii) i : Fun → Fun(D) such that for every n ∈ N

in = i|Fun(n) : Fun(n) → Fun(n) (D).

(iv) j : Rel × F → Rel(D) such that for every n ∈ N

jn = j|Rel(n) ×F : Rel(n) × F → Rel(n) (D),

and for every R ∈ Rel the following monotonicity condition is satisfied:



∀u,w∈F (u ≼ w ⇒ j(R, u) ⊆ j(R, w)).

We also write
RM (d⃗, u) ⇔ d⃗ ∈ j(R, u),

where d⃗ = (d1 , . . . , dn ) and j(R, u) ∈ Rel(n) (D).
where d~ = (d1 , . . . , dn ) and j(R, u) ∈ Rel(n) (D).

From the above definition of a fan model we notice the following:

• If n = 0 and f ∈ Fun(0) = Const, we have that i(f ) ∈ Fun(0) (D) i.e.,

i(f ) ∈ D .

• If n > 0 and f ∈ Fun(n) , we have that i(f ) ∈ Fun(n) (D) i.e.,

i(f ) : Dn → D .

• If n = 0, u ∈ F and R ∈ Rel(0) , we have that j(R, u) ∈ Rel(0) (D) i.e.,

j(R, u) ∈ 2,

hence j(R, u) is either true or false.


• We set no special requirement on the value j(⊥, u) ∈ 2, as minimal logic places no
particular constraints on falsum ⊥.
• If n > 0, u ∈ F and R ∈ Rel(n) , we have that j(R, u) ∈ Rel(n) (D) i.e.,

j(R, u) is an n-ary relation on D.

If M = (D, F, X, i, j) is a fan model of L, we can give the following interpretations:

∗ A node u ∈ F is interpreted as a “possible world”, and its length |u| is its “level”.
∗ The relation u ≺ w is interpreted as: “the possible world w is a possible future of the
possible world u”.

∗ If R ∈ Rel(0) , the monotonicity condition of jR : F → 2, defined by the rule

u 7→ jR (u) = j(R, u),

for every u ∈ F , is interpreted as: “if R is true at u, it is true at w”, since, if j(R, u) = ∅,
then we always have that j(R, u) ⊆ j(R, w), while if j(R, u) = {∅}, the monotonicity
j(R, u) ⊆ j(R, w), implies that j(R, w) = {∅} too.

The next fact explains why no generality is lost if the fan in a fan model is a spread.

Proposition 3.2.3. If M = (D, F, X, i, j) is a fan model of L, there is a fan model M∗ =
(D, S, X, i, j∗ ) of L such that S is a spread on X.

Proof. If x0 ∈ X, we consider S to be the spread of Proposition 3.1.10 on X. We then define

j∗ (R, u ∗ (x0 , x0 , . . . , x0 )) = j(R, u);    n ∈ N, u ∈ Leaf(F ),

j∗ (R, u) = j(R, u);    u ∉ Leaf(F ),

where (x0 , x0 , . . . , x0 ) is the constant node of length n. By case distinction on the nodes of S
it is straightforward to show that j∗ satisfies the monotonicity condition, and hence
M∗ = (D, S, X, i, j∗ ) is a fan model of L.

3.3 The Tarski-Beth definition of truth in a fan model


The first step in the assignment of a mathematical meaning to a formula of L, in the sense of
Tarski and Beth, is to associate an element of an inhabited set to every variable of L.

Definition 3.3.1. If D is a set inhabited by d0 , a variable assignment in D is a map

η : Var → D.

We denote by [x1 7→ d1 , . . . , xn 7→ dn ] the variable assignment defined by



di , x = xi ∈ {x1 , . . . , xn }
[x1 7→ d1 , . . . , xn 7→ dn ](x) =
d0 , x ∈
/ {x1 , . . . , xn }.

It might be that di = dj , for some i, j ∈ {1, . . . , n}. If η ∈ F(Var, D) and d ∈ D, let ηxd be the
variable assignment in D defined by η and d as follows2 :
ηxd (y) = { η(y) , if y ≠ x,
          { d , if y = x.
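The update η ↦ ηxd is the usual function-patching operation, and it is the only operation on assignments that the forcing clauses for ∀ and ∃ need. A minimal Python sketch (the encoding of assignments as functions and the default inhabitant d0 are our illustration):

```python
def update(eta, x, d):
    """Return η_x^d: the assignment sending x to d and any other y to η(y)."""
    return lambda y: d if y == x else eta(y)

d0 = 0                                     # the given inhabitant of D, as default
eta = lambda y: {"x": 1, "y": 2}.get(y, d0)
eta2 = update(eta, "x", 7)
print(eta("x"), eta2("x"), eta2("y"))      # 1 7 2
```

Note that the original η is untouched; repeated updates can be stacked, matching the nesting of quantifiers in a formula.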

The next step is to associate an element of D to every term of L. This we can do with the
use of an assignment routine and a fixed fan model of L.
2
If we use classical logic in our metatheory, then the use of this instance of the principle of the excluded
middle, x = y or ¬(x = y), is legitimate. If we use constructive logic though, we need to equip the set of
variables Var of L with a decidable equality i.e., an equality satisfying such a disjunction.

Definition 3.3.2. Let M = (D, F, X, i, j) be a fan model of L, and let η be a variable


assignment in D. The term assignment in D generated by M and η is the function

ηM : Term → D,

defined by recursion on Term through the following clauses:

ηM (x) = η(x),
ηM (c) = i(c),
ηM (f (t1 , . . . , tn )) = i(f )(ηM (t1 ), . . . , ηM (tn )),

for every x ∈ Var, c ∈ Const, f ∈ Fun(n) , t1 , . . . , tn ∈ Term and n ∈ N+ . We often write

tM [η] = ηM (t),

and when M is fixed, we may even use the same symbol η(t) for ηM (t). If t⃗ ∈ Term<N , let

ηM (t⃗) = { ∅ , if t⃗ = ∅,
         { (ηM (t0 ), . . . , ηM (t|t⃗|−1 )) , if t⃗ = (t0 , . . . , t|t⃗|−1 ).
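The clauses of Definition 3.3.2 are again structural recursion, now on terms. A minimal Python sketch, with a hypothetical term encoding ("var", x), ("const", c), ("fun", f, t1, ..., tn) and a toy interpretation i over D = N (all of it our illustration, not from the notes):

```python
def term_value(t, i, eta):
    """η^M(t): the term assignment generated by the interpretation i and η."""
    tag = t[0]
    if tag == "var":
        return eta(t[1])                   # η^M(x) = η(x)
    if tag == "const":
        return i[t[1]]                     # η^M(c) = i(c) ∈ D
    if tag == "fun":                       # η^M(f(t1,...,tn)) = i(f)(η^M(t1),...)
        f, args = t[1], t[2:]
        return i[f](*(term_value(s, i, eta) for s in args))
    raise ValueError(tag)

# A toy model with D = N, one constant and one binary function symbol:
i = {"zero": 0, "add": lambda a, b: a + b}
eta = lambda x: {"x": 3, "y": 4}.get(x, 0)
t = ("fun", "add", ("var", "x"), ("fun", "add", ("var", "y"), ("const", "zero")))
print(term_value(t, i, eta))               # 7
```

In a fan model the functions i(f) do not depend on the node u; only the relations j(R, u) do, which is why term values need no node argument.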

Now we are ready to formulate the Tarski-Beth definition of truth of a formula of L in a
fan model of L. In the rest of this chapter we use the following notation for some formula φ of
our metalanguage (the language of our metatheory):

∀u0 n u φ :⇔ ∀u0 u (|u0 | = |u| + n ⇒ φ).




Definition 3.3.3 (Tarski, Beth). Let M = (D, F, X, i, j) be a fan model of L, such that F is
a spread. We define inductively the relation
“the formula A is true in M at the node u under the variable assignment η”, or “u forces A
under η in M”,
in symbols M, u A[η] (or simply u A[η]),
by the following rules3 :
 
u (R(~t))[η] ⇔ ∃n∈N ∀u0 n u RM (~tM [η], u0 ),
u (A ∨ B)[η] ⇔ ∃n∈N ∀u0 n u (u0 A[η] or u0 B[η]),
u (∃x A)[η] ⇔ ∃n∈N ∀u0 n u ∃d∈D (u0 A[ηxd ]),
u (A → B)[η] ⇔ ∀u0 u (u0 A[η] ⇒ u0 B[η]),
u (A ∧ B)[η] ⇔ u A[η] & u B[η],
u (∀x A)[η] ⇔ ∀d∈D (u A[ηxd ]).

If A1 , . . . , An ∈ Form, we also use the notation

u {A1 , . . . , An }[η] :⇔ u A1 [η] & . . . & u An [η].


³ These rules, which are written as equivalences, are pairs of inductive rules in the usual sense, i.e., in the
first rule of the pair the formula on the left is the numerator and the formula on the right is the denominator,
while in the second rule of the pair it is the other way around.
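On a finite tree whose leaves are read as repeating forever, these clauses become mechanically checkable: under the monotonicity condition, the existential level quantifier collapses to "at every leaf below u". The following Python sketch (a toy encoding of our own, with formulas as tuples and nodes as tuples of digits, not part of the notes) implements the clauses for atoms, ∧, ∨ and →, and illustrates the characteristic Beth phenomenon that a disjunction can be forced at a node although neither disjunct is:

```python
def leaves(u, children):
    """All leaves of the finite tree below (and including) u."""
    succ = children.get(u, [])
    return [u] if not succ else [w for v in succ for w in leaves(v, children)]

def nodes_below(u, children):
    """All nodes extending u, including u itself."""
    return [u] + [w for v in children.get(u, []) for w in nodes_below(v, children)]

def forces(u, A, j, children):
    """Beth forcing on a finite tree; leaves repeat forever, so a bar
    'at some level above u' amounts to 'at every leaf below u'."""
    op = A[0]
    if op == "atom":   # some level all of whose nodes carry the atom
        return all(A[1] in j[w] for w in leaves(u, children))
    if op == "and":
        return forces(u, A[1], j, children) and forces(u, A[2], j, children)
    if op == "or":     # every branch eventually decides the disjunction
        return all(forces(w, A[1], j, children) or forces(w, A[2], j, children)
                   for w in leaves(u, children))
    if op == "imp":    # at every future node, forcing A yields forcing B
        return all(not forces(w, A[1], j, children) or forces(w, A[2], j, children)
                   for w in nodes_below(u, children))

# Root () with two leaves; P holds on the left leaf, Q on the right (j is monotone)
children = {(): [(0,), (1,)]}
j = {(): set(), (0,): {"P"}, (1,): {"Q"}}
P, Q = ("atom", "P"), ("atom", "Q")

assert not forces((), P, j, children)          # no uniform level for P alone
assert not forces((), Q, j, children)
assert forces((), ("or", P, Q), j, children)   # yet P ∨ Q is forced at the root
```

This is exactly the difference from classical two-valued semantics: the root forces P ∨ Q because every branch eventually settles one disjunct, without a single disjunct being forced uniformly.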

In this definition, the logical connectives →, ∧, ∨, ∀, ∃ on the left hand side are part of the
object language L, whereas the same connectives on the right hand side are to be understood
in the usual sense: they belong to the metalanguage. It should always be clear from the context
whether a formula is part of the object language or the metalanguage. Regarding the Tarski-Beth
definition of truth, we make the following remarks.

• If R ∈ Rel(0) and ~t = ∅, then by the first clause of the definition


 
u R[η] ⇔ ∃n∈N ∀u0 n u RM (∅M [η], u0 )
       ⇔ ∃n∈N ∀u0 n u RM (∅, u0 )
       ⇔ ∃n∈N ∀u0 n u (∅ ∈ j(R, u0 ))
       ⇔ ∃n∈N ∀u0 n u (j(R, u0 ) = 1).

If R ∈ Rel(k) , for some k > 0, then

u (R(~t))[η] ⇔ ∃n∈N ∀u0 n u RM (~tM [η], u0 )
             ⇔ ∃n∈N ∀u0 n u (~tM [η] ∈ j(R, u0 ))
             ⇔ ∃n∈N ∀u0 n u ((ηM (t0 ), . . . , ηM (t|~t|−1 )) ∈ j(R, u0 )).

Hence, R (or R(~t)) is true in M at u under η if and only if there is a level of possible
worlds in F such that R is true in M at all possible future worlds of u of that level
under η. If u (R(~t))[η], and if

SF (u) = {w ∈ F | u  w} ∪ {w ∈ F | w  u},

then
BF (u) = {w ∈ SF (u) | RM (~tM [η], w)}
is a uniform bar of the spread subfan SF (u) of F , with |u| + n as a uniform bound.

[Figure: the fan F grows upwards from its root hi; the node u lies at level |u|, and the futures u0 of u at level |u| + n form the uniform bar.]

• The formula A ∨ B is true in M at u under η if and only if for every possible future u0
of u of level |u| + n either A is true at u0 or B is true at u0 , for some n ∈ N.
• The formula A → B is true in M at u under η if and only if for every possible future u0
of u if A is true in M at u0 under η, then B is true in M at u0 under η.

• The formula ∀x A is true in M at u under η if and only if the formula A is true in M at


u under ηxd , for every d ∈ D. E.g., let A = R(x), where R ∈ Rel(1) . If d ∈ D, then

u (R(x))[ηxd ] ⇔ ∃n∈N ∀u0 n u (ηxd (x) ∈ j(R, u0 ))
               ⇔ ∃n∈N ∀u0 n u (d ∈ j(R, u0 )),

i.e., there is a level of future worlds of u such that d ∈ j(R, u0 ), and this is the case for
every d ∈ D. As any possible interpretation d of x is in all j(R, u0 ), for some level of
possible future worlds of u, it is natural to define then that ∀x R(x) is true in M under η.
The use of ηxd in the definition of u (∃x A)[η] and u (∀x A)[η] reflects that no capture
occurs when we infer u (∃x A)[η] and u (∀x A)[η] from ∃n∈N ∀u0 n u ∃d∈D (u0 A[ηxd ])
and ∀d∈D (u A[ηxd ]), respectively.
Proposition 3.3.4 (Extension). Let A ∈ Form, let M = (D, F, X, i, j) be a fan model of L,
where F is a spread, let η be a variable assignment in D, and u, w ∈ F . Then

u  w & u A[η] ⇒ w A[η].

Proof. Exercise.

Proposition 3.3.5. Let M ≡ (D, F, X, i, j) be a fan model of L, where F is a spread, η a
variable assignment in D, and A, B ∈ Form.
(i) The set
JAKM,η = {α ∈ [F ] | ∃n∈N (ᾱ(n) A[η])}
is open in TF , where the topology TF on [F ] is defined in Proposition 3.1.12.
(ii) The following hold:
JA ∧ BKM,η = JAKM,η ∩ JBKM,η ,
JA ∨ BKM,η = JAKM,η ∪ JBKM,η .
Proof. Exercise.

The next proposition is a kind of converse to Proposition 3.3.4. According to it, in order to
infer the truth of A at some node u from the truth of A in the possible future u0 of u, we need
to know that A is true at all future-nodes of u0 of some level above (or equal to) the level of u0 .
Proposition 3.3.6 (Covering). If A ∈ Form, M = (D, F, X, i, j) is a fan model of L, where F
is a spread, and η is a variable assignment in D, then

∃n∈N ∀u0 n u (u0 A[η]) ⇒ u A[η].


 

Proof. By induction on Form. Case R(~s). Assume ∀u0 n u (u0 (R(~s))[η]). Since F is a fan,
there are finitely many nodes u0 such that u0 n u. Let their set be N = {u1 , . . . , ul }. By
definition we have that for each uk ∈ N

∃nk ∈N ∀wk nk uk (RM (~sM [η], wk )).

Let m = max{n1 , . . . , nl }. Then we have that

∀w n+m u (RM (~sM [η], w)),


hence by the corresponding clause of the Tarski-Beth definition we get u R(~s)[η] . For
this we argue as follows. If w n+m u, then w  wk nk uk , for some k ∈ {1, . . . , l}. Since
by hypothesis ηM (~s) ∈ j(R, wk ), by the monotonicity of j we get ηM (~s) ∈ j(R, w), i.e.,
RM (~sM [η], w). The cases A ∨ B and ∃x A are handled similarly.
Case A → B. Let N = {u1 , . . . , ul } be the set of all u0  u with |u0 | = |u| + n such that
u0 (A → B)[η]. We show that

∀wu (w A[η] ⇒ w B[η]).

Let w  u and w A[η]. We must show w B[η]. If |w| ≥ |u| + n, then w  uk , for some
k ∈ {1, . . . , l}. Hence, by the hypothesis on uk and the definition of uk (A → B)[η], we get
w B[η]. If |u| ≤ |w| < |u| + n, then by Proposition 3.3.4 for the set N 0 of all elements uj of
N that extend w we have that each uj A[η]. Hence, we also have that uj B[η]. But N 0 is
the set of all successors of w with length |w| + m, where m = |u| + n − |w|. By the induction
hypothesis on the formula B, we get the required w B[η]. The cases A ∧ B and ∀x A are
straightforward to show.

3.4 Soundness of minimal logic


Lemma 3.4.1 (Coincidence). Let M = (D, F, X, i, j) be a fan model of L, t ∈ Term, A ∈ Form,
and η, ξ variable assignments in D.
(i) If η(x) = ξ(x) for all x ∈ FV(t), then ηM (t) = ξM (t).
(ii) If η(x) = ξ(x) for all x ∈ FV(A), then M, u A[η] if and only if M, u A[ξ].

Proof. By induction on Term and Form, respectively. The details are left to the reader.

Lemma 3.4.2 (Substitution). Let M = (D, F, X, i, j) be a fan model of L, t, r(x) ∈ Term,
A(x) ∈ Form with Freet,x (A) = 1, and η a variable assignment in D. Let ξ = ηxd , where d = ηM (t).
(i) ηM (r(t)) = ξM (r(x)).
(ii) M, u A(t)[η] if and only if M, u A(x)[ξ].

Proof. By induction on Term and Form, respectively. The details are left to the reader.

The next theorem expresses that minimal derivations are sound with respect to the Tarski-Beth
notion of truth of a formula in a fan model, i.e., they respect truth in a fan model.

Theorem 3.4.3 (Soundness of minimal logic). Let M = (D, F, X, i, j) be a fan model of L,


u ∈ F and η a variable assignment in D. If M ∈ DV (A) such that u {C1 , . . . , Cn }[η], where
{C1 , . . . , Cn } = Form(V ), then u A[η].

Proof. We fix M and we prove by induction on derivations the formula

∀M ∈DV (A) ∀η∈F(Var,D) ∀u∈F (u Form(V )[η] ⇒ u A[η]).

Case 1A . The validity of u A[η] ⇒ u A[η] is immediate.



Case →+ . Let the derivation


[A], C1 , . . . , Cn
|N
B +
A→B →
and suppose u {C1 , . . . , Cn }[η]. We show u (A → B)[η] ⇔ ∀u0 u (u0 A[η] ⇒ u0 B[η])
under the inductive hypothesis on N :

IH(N ) : ∀η ∀w (w {A, C1 , . . . , Cn }[η] ⇒ w B[η]).

We fix u0 such that u0  u and we suppose u0 A[η]. By Extension (Proposition 3.3.4) we get
u0 {C1 , . . . , Cn }[η], hence u0 {A, C1 , . . . , Cn }[η]. Hence, by IH(N ) we get u0 B[η].
Case (→− ). Let the derivation
C1 , . . . , C n D1 , . . . , D m
|N |K
A→B A
B →−

and suppose u {C1 , . . . , Cn , D1 , . . . , Dm }[η]. We show u B[η] under the inductive


hypotheses on N and K:

IH(N ) : ∀η ∀w (w {C1 , . . . , Cn }[η] ⇒ w (A → B)[η]),

IH(K) : ∀η ∀w (w {D1 , . . . , Dm }[η] ⇒ w A[η]).


By IH(N ) we have that u (A → B)[η], and by IH(K) we get u A[η], hence u B[η].
Case (∀+ ) Let the derivation
C1 , . . . , C n
|N
A +
∀x A ∀ x
with the variable condition x ∉ FV(C1 ) & . . . & x ∉ FV(Cn ), and suppose u {C1 , . . . , Cn }[η].
We show u (∀x A)[η] ⇔ ∀d∈D (u A[ηxd ]) under the inductive hypothesis on N :

IH(N ) : ∀η ∀w (w {C1 , . . . , Cn }[η] ⇒ w A[η]).

Let d ∈ D. By the variable condition we get η|FV(Ci ) = (ηxd )|FV(Ci ) , for every i ∈ {1, . . . , n},
hence by Coincidence (Lemma 3.4.1) we conclude that u {C1 , . . . , Cn }[ηxd ]. By IH(N ) on ηxd
and u we get u A[ηxd ].
Case (∀− ). Let the derivation
C1 , . . . , C n
|N
∀x A r ∈ Term −

A(r)

and suppose u {C1 , . . . , Cn }[η]. We show u A(r)[η] under the inductive hypotheses on N :

IH(N ) : ∀η ∀w (w {C1 , . . . , Cn }[η] ⇒ w (∀x A)[η]).



Applying IH(N ) on u we get ∀d∈D (u A[ηxd ]). If we consider d = ηM (r), we get u A[ηxd ],
and by Substitution (Lemma 3.4.2) we conclude that u A(r)[η].
Case (∧+ ) and Case (∧− ) are straightforward.
Case (∨+0 ) and Case (∨+1 ) are straightforward.
Case (∨− ). Let the derivation

C1 , . . . , C n [A], D1 , . . . , Dm [B], E1 , . . . , El
|N |K |L
A∨B C C −
C ∨

and suppose u {C1 , . . . , Cn , D1 , . . . , Dm , E1 , . . . , El }[η]. We show u C[η] under the


inductive hypotheses on N , K and L:

IH(N ) : ∀η ∀w (w {C1 , . . . , Cn }[η] ⇒ w (A ∨ B)[η]),

IH(K) : ∀η ∀w (w {A, D1 , . . . , Dm }[η] ⇒ w C[η]),


IH(L) : ∀η ∀w (w {B, E1 , . . . , El }[η] ⇒ w C[η]).
By IH(N ) we get u (A ∨ B)[η] ⇔ ∃n ∀u0 n u (u0 A[η] or u0 B[η]). By Covering
(Proposition 3.3.6) it suffices to show for this n ∈ N:

∀u0 n u (u0 C[η]).

We fix u0 such that u0 n u. If u0 A[η], then by Extension and IH(K) we get u0 C[η]. If
u0 B[η], then by Extension and IH(L) we get u0 C[η].
Case (∃+ ) is straightforward.
Case (∃− ). Let the derivation

C1 , . . . , C n [A], D1 , . . . , Dm
|N |K
∃x A B −
B ∃ x

with the variable condition x ∉ FV(D1 ) & . . . & x ∉ FV(Dm ), and x ∉ FV(B), and suppose
u {C1 , . . . , Cn , D1 , . . . , Dm }[η]. We show u B[η] under the inductive hypotheses on N, K:

IH(N ) : ∀η ∀w (w {C1 , . . . , Cn }[η] ⇒ w (∃x A)[η]),

IH(K) : ∀η ∀w (w {A, D1 , . . . , Dm }[η] ⇒ w B[η]).


By IH(N ) we get that u (∃x A)[η] ⇔ ∃n ∀u0 n u ∃d∈D (u0 A[ηxd ]). By Covering it suffices to
show for this n ∈ N:
∀u0 n u (u0 B[η]).
We fix u0 such that u0 n u, and let d ∈ D such that u0 A[ηxd ]. Since by the variable condition
we get η|F V (Di ) = (ηxd )|F V (Di ) , and since by Extension u0 {D1 , . . . , Dm }[η], by Coincidence
we get u0 {A, D1 , . . . , Dm }[ηxd ]. By IH(K) on ηxd and u0 we get u0 B[ηxd ]. Since by the
variable condition we get η|F V (B) = (ηxd )|F V (B) , we conclude that u0 B[η].

Corollary 3.4.4. Let Γ ∪ {A} ⊆ Form such that Γ ` A. If M = (D, F, X, i, j) is a fan model
of L, u ∈ F and η is a variable assignment in D, the following hold:
(i) M, u Γ[η] ⇒ M, u A[η].
(ii) If Γ = ∅, then u A[η], and JAKM,η = [F ].
(iii) The set
JuKM,η = {A ∈ Form | u A[η]}
is open in the topology T (B) on Form, defined in Proposition 2.13.3.
Proof. Exercise.

3.5 Countermodels and intuitionistic fan models


The main application of the soundness theorem is its use in the proof of underivability results.
Definition 3.5.1. A countermodel to some derivation Γ ` A is a triple (M, η, u), where
M = (D, F, X, i, j) is a fan model of L, η is a variable assignment in D, and u ∈ F such that

M, u Γ[η] and M, u 6 A[η].

By Corollary 3.4.4 of the soundness theorem, if (M, η, u) is a countermodel to the derivation
Γ ` A, we can conclude that Γ 6` A: if there were such a derivation, we would have
M, u Γ[η] ⇒ M, u A[η], which contradicts the existence of a countermodel.
Example 3.5.2 (Consistency of minimal logic, or minimal underivability of falsum). Suppose
that there is a derivation ` ⊥. By Corollary 3.4.4(ii), if u = hi = ∅, we have that

hi ⊥[η] ⇔ ∃n ∀u∈F (|u| = n ⇒ j(⊥, u) = 1).

Next to every node of the following fan we write all propositions forced at that node (the nodes
where falsum is forced are considered to be extended, and at every extension-node falsum is
also forced).
[Figure: a binary fan; at every level the left node forces ⊥ and the right node forces nothing, and the tree continues upwards.]

This is a fan model because monotonicity holds trivially. Clearly, the above condition is not
satisfied, hence ` is consistent. As minimal, intuitionistic, and classical logic are equiconsistent
(Corollary 2.12.4), we conclude that intuitionistic and classical logic are also consistent.
Example 3.5.3 (Minimal underivability of Ex falsum). A countermodel to the derivation
` ⊥ → R, where R ∈ Rel(0) \ {⊥}, is constructed as follows: take F = {x0 }<N , D any
inhabited set, and define j(⊥, ∅) = 1, and j(R, ∅) = 0.
[Figure: the single-branch fan {x0 }<N , with ⊥ forced at every node and R forced at none.]

By the monotonicity condition we get j(⊥, u) = 1, for every u ∈ F . Moreover, we get j(R, u) = 0,
for every u ∈ F : if there were some u ∈ F \ {∅} such that j(R, u) = 1, then, since u is the only
node u0 ∈ F such that u0 |u0 | ∅, by Covering we would get j(R, ∅) = 1 too. We show that
∅ 6 (⊥ → R)[η], where η is arbitrary. Suppose that ∅ (⊥ → R)[η] ⇔ ∀u (u ⊥[η] ⇒ u R[η]).
For every u ∈ F though, we have that u ⊥[η] and u 6 R[η], a contradiction.
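The failure of ∅ (⊥ → R) can also be checked on a finite truncation of the one-branch fan with a few lines of Python (a toy sketch with an encoding of our own; on a single branch, an atom is forced at a node exactly when it holds at the final node, since the leaf is read as repeated forever):

```python
# Nodes of a finite truncation of the one-branch fan {x0}^<N
nodes = [(), (0,), (0, 0)]
leaf = (0, 0)

falsum = {(): True, (0,): True, (0, 0): True}     # j(⊥, u) = 1 everywhere
R      = {(): False, (0,): False, (0, 0): False}  # j(R, u) = 0 everywhere

def forced(u, atom):
    """On a single branch, u forces an atom iff the atom holds at the leaf."""
    return atom[leaf]

# ∅ forces ⊥ → R iff every node forcing ⊥ also forces R
root_forces_efq = all(not forced(w, falsum) or forced(w, R) for w in nodes)

assert forced((), falsum)     # every node forces ⊥
assert not forced((), R)      # no node forces R
assert not root_forces_efq    # hence ∅ does not force ⊥ → R
```

The computation mirrors the argument above: ⊥ is forced everywhere, R nowhere, so the implication clause fails already at the root.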
Definition 3.5.4. An intuitionistic fan model of a countable first-order language L is a fan
model Mi = (D, F, X, i, j) of L such that

∀u∈F (j(⊥, u) = 0).
It is easy to see that if Mi is an intuitionistic fan model, then
Mi , u (⊥ → A)[η],
for every A ∈ Form, u ∈ F and assignment η in D. Notice that an intuitionistic fan model
provides an immediate proof that `i is consistent, hence by Corollary 2.12.4 we get the
consistency of ` and `c once more. Notice that the fan model used in Example 3.5.2 is not
intuitionistic.
Lemma 3.5.5. A fan model M = (D, F, X, i, j) of L, where F is a spread, is intuitionistic if
and only if
∀η ∀u∈F (u 6 ⊥[η]).
Proof. Exercise.
Proposition 3.5.6. Let Mi = (D, F, X, i, j) be an intuitionistic fan model of L, η a variable
assignment, u ∈ F and A ∈ Form.
(i) u (¬A)[η] ⇔ ∀u0 u (u0 6 A[η]).
(ii) u (¬¬A)[η] ⇔ ∀u0 u ¬∀u00 u0 (u00 6 A[η]).
Proof. Exercise.
Definition 3.5.7. An intuitionistic countermodel to some derivation Γ `i A is a triple
(Mi , η, u), where Mi = (D, F, X, i, j) is an intuitionistic fan model, η is a variable assignment
in D, and u ∈ F such that Mi , u Γ[η] and Mi , u 6 A[η].
Since the soundness theorem of intuitionistic logic follows immediately from the soundness
theorem of minimal logic, we can use it to conclude an intuitionistic underivability Γ 6`i A
from an intuitionistic countermodel to Γ `i A.
Example 3.5.8 (Intuitionistic underivability of DNE). We give an intuitionistic countermodel
to the derivation `i ¬¬P → P . We describe the desired fan model by means of a diagram
below. Next to every node we write all propositions forced at that node (again the nodes
where P is forced are considered to be extended, and at every extension-node P is also forced).
[Figure: a binary fan; at every level the left node forces P and the right node forces nothing, and the tree continues upwards.]

This is a fan model because monotonicity clearly holds. Observe also that j(⊥, u) = 0, for every
node u i.e., it is an intuitionistic fan model, and moreover ∅ 6 P [η]. Using Proposition 3.5.6(ii),
it is easily seen that ∅ (¬¬P )[η]. Thus ∅ 6 (¬¬P → P )[η], and hence 6`i (¬¬P → P ).

3.6 Completeness of minimal logic


Theorem 3.6.1 (Completeness of minimal logic). Let Γ ∪ {A} ⊆ Form. The following are
equivalent.
(i) Γ ` A.
(ii) Γ A, i.e., for all fan models M, assignments η in |M| and nodes u in the fan of M

M, u Γ[η] ⇒ M, u A[η].

Proof. (Harvey Friedman) Soundness of minimal logic already gives “(i) implies (ii)”. The
main idea in the proof of the other direction is the construction of a fan model M over the
Cantor tree 2<N with domain D the set Term of all terms of the underlying language such
that the following property holds:

Γ ` B ⇔ M, ∅ B[idVar ].

We assume here that Γ ∪ {A} is a set of closed formulas. In order to define M, we will need an
enumeration A0 , A1 , A2 , . . . of the formulas of the underlying language L (assumed countable),
in which every formula occurs infinitely often. We also fix an enumeration x0 , x1 , . . . of distinct
variables. Since Γ is countable, it can be written Γ = ⋃n Γn with finite sets Γn such that Γn ⊆ Γn+1 .
With every node u ∈ 2<N , we associate a finite set ∆u of formulas and a set Vu of variables,
by induction on the length of u. We write ∆ `n B to mean that there is a derivation of height
≤ n of B from ∆.
Let ∆∅ = ∅ and V∅ = ∅. Take a node u such that |u| = n and suppose that ∆u , Vu are
already defined. We define ∆u∗0 , Vu∗0 and ∆u∗1 , Vu∗1 as follows:
Case 0. FV(An ) 6⊆ Vu . Then let

∆u∗0 = ∆u∗1 = ∆u and Vu∗0 = Vu∗1 = Vu .

Case 1. FV(An ) ⊆ Vu and Γn , ∆u 6`n An . Let

∆u∗0 = ∆u and ∆u∗1 = ∆u ∪ {An },


Vu∗0 = Vu∗1 = Vu .

Case 2. FV(An ) ⊆ Vu and Γn , ∆u `n An = A0n ∨ A00n . Let

∆u∗0 = ∆u ∪ {An , A0n } and ∆u∗1 = ∆u ∪ {An , A00n },


Vu∗0 = Vu∗1 = Vu .

Case 3. FV(An ) ⊆ Vu and Γn , ∆u `n An = ∃x A0n (x). Let

∆u∗0 = ∆u∗1 = ∆u ∪ {An , A0n (xi )} and Vu∗0 = Vu∗1 = Vu ∪ {xi },

where xi is the first variable not in Vu .
Case 4. FV(An ) ⊆ Vu and Γn , ∆u `n An , with An neither a disjunction nor an existentially
quantified formula. Let

∆u∗0 = ∆u∗1 = ∆u ∪ {An } and Vu∗0 = Vu∗1 = Vu .

The following remarks (R1)-(R3) are clear.



(R1) ∆u , Vu are finite sets.


(R2) FV(∆u ) ⊆ Vu .
(R3) u  w ⇒ ∆u ⊆ ∆w and Vu ⊆ Vw .
(R4) ∀xi ∈Var ∃m ∀u∈2<N (|u| = m ⇒ xi ∈ Vu ).
Remark (R4) is shown as follows. Let ` ∃x (⊥ → ⊥) be a derivation of height m0 . Suppose
that for every xj with j < i, there is some mj such that ∀u∈2<N (|u| = mj ⇒ xj ∈ Vu ). Let
n ≥ max{m0 , m1 , . . . , mi−1 } such that An = ∃x (⊥ → ⊥) (this n can be found, as the formula
∃x (⊥ → ⊥) occurs infinitely often in the fixed enumeration of formulas). Since n ≥ m0 , if
|u| = n, then Γn , ∆u `n ∃x (⊥ → ⊥). By the definition of n and (R3) we get that x0 , . . . , xi−1 ∈ Vu .
If xi ∈ Vu , then xi ∈ Vu∗j , with j ∈ 2. If xi ∉ Vu , then, since FV(∃x (⊥ → ⊥)) = ∅ ⊆ Vu , by
Case 3 we have that xi ∈ Vu∗j , since xi is the first variable in the fixed enumeration of Var
that does not occur in Vu . Hence mi = n + 1 satisfies the required property.
We also have the following:
∀u0 n u (Γ, ∆u0 ` B) ⇒ Γ, ∆u ` B, provided FV(B) ⊆ Vu . (3.1)
It is sufficient to show that, for FV(B) ⊆ Vu ,
(Γ, ∆u∗0 ` B) ∧ (Γ, ∆u∗1 ` B) ⇒ (Γ, ∆u ` B).
In cases 0, 1 and 4, this is obvious. For case 2, the claim follows immediately from the
rule ∨− . In case 3, we have FV(An ) ⊆ Vu and Γn , ∆u `n An = ∃x A0n (x). Assume
Γ, ∆u ∪ {An , A0n (xi )} ` B with xi ∉ Vu and FV(B) ⊆ Vu . Then xi ∉ FV(∆u ∪ {An , B}),
hence Γ, ∆u ∪ {An } ` B by ∃− , and therefore Γ, ∆u ` B.
Next, we show
Γ, ∆u ` B ⇒ ∃n ∀u0 n u (B ∈ ∆u0 ), provided FV(B) ⊆ Vu . (3.2)
Choose n ≥ |u| such that B = An and Γn , ∆u `n An . For all u0  u, if |u0 | = n + 1 then
An ∈ ∆u0 (we work as above for Cases 2-4).
Using the sets ∆u we define the fan model M = (Term, 2<N , 2, i, j) as follows. If f ∈ Fun(n) ,
then i(f ) : Termn → Term is defined by
i(f )(t1 , . . . , tn ) = f (t1 , . . . , tn ).
Obviously, tM [idVar ] = t for all t ∈ Term. If R ∈ Rel(n) , then j(R, u) ⊆ Termn is defined by
j(R, u) = {(t1 , . . . , tn ) ∈ Termn | R(t1 , . . . , tn ) ∈ ∆u }.
Hence, if R ∈ Rel(0) , j(R, u) = 0, for every u ∈ 2<N . We write u B for M, u B[idVar ],
and we show:
CLAIM. Γ, ∆u ` B ⇔ u B, provided FV(B) ⊆ Vu .
The proof is by induction on the well-founded relation C /∗ B, “C is a proper Gentzen
subformula4 of B” (see Proposition 3.1.7). I.e., if

P (B) :⇔ ∀u (FV(B) ⊆ Vu ⇒ (Γ, ∆u ` B ⇔ u B)),
⁴ The relation “a formula B is a Gentzen subformula of the formula A”, written B / A, is defined inductively
by the rules: A / A; if B 2 C / A, then B / A and C / A (2 ∈ {→, ∧, ∨}); and if 4x B / A and Frees,x (B) = 1,
then B(s) / A (4 ∈ {∃, ∀}).

we show by induction on Form that

∀B∈Form (∀C/∗ B (P (C)) ⇒ P (B)),

and we conclude that ∀B∈Form (P (B)).


Case R~s. Assume FV(R~s ) ⊆ Vu . The following are equivalent:

Γ, ∆u ` R~s,
∃n ∀u0 n u (R~s ∈ ∆u0 ) by (3.2) and (3.1),
∃n ∀u0 n u RM (~s, u0 ) by definition of M,
u R~s by the definition of forcing, since tM [idVar ] = t.

Case B ∨ C. Assume FV(B ∨ C) ⊆ Vu . For the implication (⇒) let Γ, ∆u ` B ∨ C. Choose


an n ≥ |u| such that Γn , ∆u `n An = B ∨ C. Then, for all u0  u such that |u0 | = n,

∆u0 ∗0 = ∆u0 ∪ {B ∨ C, B} and ∆u0 ∗1 = ∆u0 ∪ {B ∨ C, C},

and therefore by hypothesis on B and C

u0 ∗ 0 B and u0 ∗ 1 C.

Then by definition we have u B ∨ C. For the reverse implication (⇐) we argue as follows:

u B ∨ C,
∃n ∀u0 n u (u0 B ∨ u0 C),
∃n ∀u0 n u ((Γ, ∆u0 ` B) ∨ (Γ, ∆u0 ` C)) by hypothesis on B, C,
∃n ∀u0 n u (Γ, ∆u0 ` B ∨ C),
Γ, ∆u ` B ∨ C by (3.1).

Case B ∧ C. This is easy.


Case B → C. Assume FV(B → C) ⊆ Vu . For (⇒) let Γ, ∆u ` B → C. We must show
u B → C, i.e.,
∀u0 u (u0 B → u0 C).
Let u0  u be such that u0 B. By hypothesis on B, it follows that Γ, ∆u0 ` B. Hence
Γ, ∆u0 ` C follows by assumption. Then again by hypothesis on C we get u0 C.
For (⇐) let u B → C, i.e., ∀u0 u (u0 B → u0 C). We show that Γ, ∆u ` B → C
using (3.1). Choose n ≥ |u| such that B = An . For all u0 m u with m = n − |u| we show that
Γ, ∆u0 ` B → C.
If Γn , ∆u0 `n An , then u0 B by induction hypothesis, and u0 C by assumption. Hence
Γ, ∆u0 ` C again by hypothesis on C and thus Γ, ∆u0 ` B → C.
If Γn , ∆u0 6`n An , then by definition ∆u0 ∗1 = ∆u0 ∪ {B}. Hence Γ, ∆u0 ∗1 ` B, and thus
u0 ∗ 1 B by hypothesis on B. Now u0 ∗ 1 C by assumption, and finally Γ, ∆u0 ∗1 ` C by
hypothesis on C. From ∆u0 ∗1 = ∆u0 ∪ {B} it follows that Γ, ∆u0 ` B → C.
Case ∀x B(x). Assume FV(∀x B(x)) ⊆ Vu . For (⇒) let Γ, ∆u ` ∀x B(x). Fix a term t.
Then Γ, ∆u ` B(t). Choose n ≥ |u| such that FV(B(t)) ⊆ Vu0 for all u0 with |u0 | = n. Then
∀u0 m u (Γ, ∆u0 ` B(t)) with m = n − |u|, hence ∀u0 m u (u0 B(t)) by hypothesis on B(t),
hence u B(t) by the covering lemma. This holds for every term t, hence u ∀x B(x).

For (⇐) assume u ∀x B(x). Pick n such that Am = ∃x (⊥ → ⊥), where m = |u| + n.
Then at height m we put some xi into the variable sets: for u0 n u we have xi ∉ Vu0 but
xi ∈ Vu0 ∗j . Clearly u0 ∗ j B(xi ), hence Γ, ∆u0 ∗j ` B(xi ) by hypothesis on B(xi ), hence
(since at this height we consider the trivial formula ∃x (⊥ → ⊥)) also Γ, ∆u0 ` B(xi ). Since
xi ∉ Vu0 we obtain Γ, ∆u0 ` ∀x B(x). This holds for all u0 n u, hence Γ, ∆u ` ∀x B(x) by
(3.1).
Case ∃x B(x). Assume FV(∃x B(x)) ⊆ Vu . For (⇒) let Γ, ∆u ` ∃x B(x). Choose an n ≥ |u|
such that Γn , ∆u `n An = ∃x B(x). Then, for all u0  u with |u0 | = n

∆u0 ∗0 = ∆u0 ∗1 = ∆u0 ∪ {∃x B(x), B(xi )}

where xi ∉ Vu0 . Hence by hypothesis on B(xi ) (applicable since FV(B(xi )) ⊆ Vu0 ∗j )

u0 ∗ 0 B(xi ) and u0 ∗ 1 B(xi ).

It follows by definition that u ∃x B(x).


For (⇐) assume u ∃x B(x). Then ∀u0 n u ∃t∈Term (u0 B(x)[(idVar )tx ]) for some n, hence
∀u0 n u ∃t∈Term (u0 B(t)). For each of the finitely many u0 n u pick an m such that
∀u00 m u0 (FV(B(t)) ⊆ Vu00 ). Let m0 be the maximum of all these m. Then

∀u00 m0 +n u ∃t∈Term ((u00 B(t)) ∧ FV(B(t)) ⊆ Vu00 ).

The hypothesis on B(t) yields

∀u00 m0 +n u ∃t∈Term (Γ, ∆u00 ` B(t)),
∀u00 m0 +n u (Γ, ∆u00 ` ∃x B(x)),
Γ, ∆u ` ∃x B(x) by (3.1),

and this completes the proof of the claim.


Now we finish the proof of the completeness theorem by showing that (ii) implies (i).
We apply (ii) to the fan model M constructed above from Γ, the empty node ∅ and the
assignment η = idVar . Then M, ∅ Γ[idVar ] by the claim (since each formula in Γ is derivable
from Γ). Hence M, ∅ A[idVar ] by (ii), and therefore Γ ` A by the claim again.

Completeness of intuitionistic logic follows as a corollary.

Corollary 3.6.2 (Completeness of intuitionistic logic). Let Γ ∪ {A} ⊆ Form. The following
are equivalent:
(i) Γ `i A.
(ii) Γ, Efq A, i.e., for all intuitionistic fan models Mi , assignments η in |Mi | and nodes u
in the fan of Mi
Mi , u Γ[η] ⇒ Mi , u A[η].

Proof. It follows immediately from Theorem 3.6.1.



3.7 L-models and classical models


For the rest of this section, fix a countable formal language L; we do not mention the
dependence on L in the notation. Since we deal with classical logic, we only consider formulas
built without ∨ and ∃, i.e., formulas in Form∗ (see Definition 2.2.9). We define the notion of an
L-model, and what the value of a term and the meaning of a formula in an L-model should be.

Definition 3.7.1. An L-model is a structure M = (D, i, j), where


(i) D is an inhabited set.
(ii) For every n-ary function symbol f , i assigns to f a map i(f ) : Dn → D.
(iii) For every n-ary relation symbol R, j assigns to R an n-ary relation on D, i.e., a subset of
Dn . In case n = 0, j(R) is either true or false. We require that j(⊥) is false, i.e., j(⊥) = 0.
We may write |M| for the carrier set D of M and f M , RM for the interpretations i(f ), j(R)
of the function and relation symbols. Assignments η and their extensions on Term are defined
as in Section 3.2. We also write tM [η] for ηM (t).

Definition 3.7.2 (Validity). For every L-model M = (D, i, j), assignment η in D and
formula A ∈ Form∗ we define the relation “A is valid in M under the assignment η”, in
symbols M |= A[η], inductively, with respect only to formulas without ∨ and ∃, as follows:

M |= R[η] ⇔ j(R) = 1; R ∈ Rel(0) ,
M |= (R~s )[η] ⇔ RM (~sM [η]); R ∈ Rel(n) , n > 0,
M |= (A → B)[η] ⇔ ((M |= A[η]) ⇒ (M |= B[η])),
M |= (A ∧ B)[η] ⇔ ((M |= A[η]) & (M |= B[η])),
M |= (∀x A)[η] ⇔ ∀d∈D (M |= A[ηxd ]).

Since j(⊥) is false, we have M 6|= ⊥[η].
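Over a finite domain D this definition is directly executable. A Python sketch with a toy encoding of our own (terms are just variables, and a relation symbol is interpreted as a Boolean-valued function), for the →, ∧, ∀ fragment:

```python
def models(M, A, eta):
    """Tarski validity in a classical model M = (D, j), for the →, ∧, ∀
    fragment; eta is a dict assigning domain elements to variables."""
    D, j = M
    op = A[0]
    if op == "atom":                      # A = ("atom", R, x1, ..., xn)
        return j[A[1]](*(eta[x] for x in A[2:]))
    if op == "and":
        return models(M, A[1], eta) and models(M, A[2], eta)
    if op == "imp":
        return not models(M, A[1], eta) or models(M, A[2], eta)
    if op == "forall":                    # A = ("forall", x, body)
        return all(models(M, A[2], {**eta, A[1]: d}) for d in D)

# D = {0, 1, 2}, with R interpreted as <=
M = ({0, 1, 2}, {"R": lambda a, b: a <= b})
Rxx = ("forall", "x", ("atom", "R", "x", "x"))
Rxy = ("forall", "x", ("forall", "y", ("atom", "R", "x", "y")))

assert models(M, Rxx, {})        # ∀x R(x, x) is valid: <= is reflexive
assert not models(M, Rxy, {})    # ∀x ∀y R(x, y) fails, e.g. at x = 1, y = 0
```

In contrast to the Beth forcing of Section 3.3, the ∀ clause here quantifies over the domain once, with no tree of possible worlds involved.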

Lemma 3.7.3 (Coincidence). Let M = (D, i, j) be an L-model, t a term, A ∈ Form∗ , and


η, ξ assignments in D.
(i) If η(x) = ξ(x) for all x ∈ FV(t), then η(t) = ξ(t).
(ii) If η(x) = ξ(x) for all x ∈ FV(A), then M |= A[η] if and only if M |= A[ξ].

Proof. By induction on Term and on Form∗ .

Lemma 3.7.4 (Substitution). Let M = (D, i, j) be an L-model, t, r(x) terms, A(x) ∈ Form∗ ,
and η an assignment in D. Let ξ = ηxd , where d = η(t).
(i) η(r(t)) = ξ(r(x)).
(ii) M |= A(t)[η] if and only if M |= A(x)[ξ].

Proof. By induction on Term and on Form∗ .

Definition 3.7.5. An L-model Mc = (D, i, j) is called classical, if for every A ∈ Form∗ , and
every assignment η in D we have that

Mc |= (¬¬A)[η] ⇒ Mc |= A[η].

If only the weaker classical derivability `∗c is considered, then for the constructive proof of the
completeness theorem of classical logic it suffices to assume for Mc that
¬¬RMc (d~ ) ⇒ RMc (d~ )
for all relation symbols R and all d~ ∈ D|d~| . If classical logic is used in our metatheory, then
every L-model is classical. To show this, we suppose that Mc |= (¬¬A)[η], and we show that
Mc |= A[η] by showing ¬¬(Mc |= A[η]). For that, suppose ¬(Mc |= A[η]). Then we get

Mc |= (¬A)[η] ⇔ Mc |= (A → ⊥)[η] ⇔ (Mc |= A[η] ⇒ Mc |= ⊥[η]),
as the premiss in the last implication is false by our second hypothesis. By our first hypothesis
Mc |= (¬¬A)[η] ⇔ Mc |= (¬A → ⊥)[η] ⇔ (Mc |= (¬A)[η] ⇒ Mc |= ⊥[η]),
and since the premiss in the last implication holds, we get Mc |= ⊥[η], which contradicts
¬(Mc |= ⊥[η]). Hence we showed that ¬¬(Mc |= A[η]), and with DNE we get Mc |= A[η].
Moreover, one can show constructively (exercise) that
Mc |= (¬A)[η] ⇔ ¬(Mc |= A[η]).

3.8 Soundness theorem of classical logic


Theorem 3.8.1 (Soundness of classical logic). Let Mc = (D, i, j) be a classical model of L
and η a variable assignment in D. Let Dc,−V (A) be the set of classical derivations without the
rules for ∨ and ∃. If M ∈ Dc,−V (A) such that Mc |= {C1 , . . . , Cn }[η], where {C1 , . . . , Cn } =
Form(V ), then Mc |= A[η].
Proof. We fix Mc and we prove by induction on derivations the formula

∀M ∈Dc,−V (A) ∀η∈F(Var,D) (Mc |= Form(V )[η] ⇒ Mc |= A[η]).

Case DNEA . It follows immediately from the classicality of Mc .


Case 1A . The validity of Mc |= A[η] ⇒ Mc |= A[η] is immediate.
Case →+ . Let the derivation
[A], C1 , . . . , Cn
|N
B +
A→B →
and suppose Mc |= {C1 , . . . , Cn }[η]. We show Mc |= (A → B)[η] ⇔ (Mc |= A[η] ⇒ Mc |= B[η])
under the inductive hypothesis on N :
IH(N ) : ∀η (Mc |= {A, C1 , . . . , Cn }[η] ⇒ Mc |= B[η]).
If Mc |= A[η], then Mc |= {A, C1 , . . . , Cn }[η], hence by IH(N ) we get Mc |= B[η].
Case (→− ). Let the derivation
C1 , . . . , C n D1 , . . . , D m
|N |K
A→B A
B →−

and suppose Mc |= {C1 , . . . , Cn , D1 , . . . , Dm }[η]. We show Mc |= B[η] under the inductive


hypotheses on N and K:

IH(N ) : ∀η (Mc |= {C1 , . . . , Cn }[η] ⇒ Mc |= (A → B)[η]),

IH(K) : ∀η (Mc |= {D1 , . . . , Dm }[η] ⇒ Mc |= A[η]).


By IH(N ) Mc |= (A → B)[η], and by IH(K) we get Mc |= A[η], hence Mc |= B[η].
Case (∀+ ) Let the derivation
C1 , . . . , C n
|N
A +
∀x A ∀ x
with the variable condition x ∉ FV(C1 ) & . . . & x ∉ FV(Cn ), and suppose Mc |= {C1 , . . . , Cn }[η].
We show Mc |= (∀x A)[η] ⇔ ∀d∈D (Mc |= A[ηxd ]) under the inductive hypothesis on N :

IH(N ) : ∀η (Mc |= {C1 , . . . , Cn }[η] ⇒ Mc |= A[η]).

Let d ∈ D. By the variable condition η|FV(Ci ) = (ηxd )|FV(Ci ) , for every i ∈ {1, . . . , n}, hence by
Coincidence we conclude that Mc |= {C1 , . . . , Cn }[ηxd ]. By IH(N ) on ηxd we get Mc |= A[ηxd ].
Case (∀− ). Let the derivation

C1 , . . . , C n
|N
∀x A r ∈ Term −

A(r)

and let Mc |= {C1 , . . . , Cn }[η]. We show Mc |= A(r)[η] under the inductive hypotheses on N :

IH(N ) : ∀η (Mc |= {C1 , . . . , Cn }[η] ⇒ Mc |= (∀x A)[η]).


By IH(N ) we have that ∀d∈D (Mc |= A[ηxd ]). If we consider d = ηM (r), we get Mc |= A[ηxd ],
and by Substitution we conclude that Mc |= A(r)[η].
Case (∧+ ) and Case (∧− ) are straightforward.

Corollary 3.8.2. Let Γ ∪ {A} ⊆ Form∗ such that Γ `c A. If Mc = (D, i, j) is a classical


model of L, and η is a variable assignment in D, the following hold:
(i) Mc |= Γ[η] ⇒ Mc |= A[η].
(ii) If Γ = ∅, then Mc |= A[η].

Proof. Exercise.

3.9 Completeness of classical logic


Theorem 3.9.1 (Completeness of classical logic). Let Γ ∪ {A} ⊆ Form∗ . Assume that

Γ |= A

i.e., for all classical models Mc and assignments η in |Mc | we have that

Mc |= Γ[η] ⇒ Mc |= A[η].

Then “there must exist” a derivation of A from Γ ∪ Dne, in other words,

¬¬(Γ ∪ Dne ` A) ⇔ ¬¬(Γ `∗c A).




Proof. (Ulrich Berger, with constructive logic) The proof is based on the proof of completeness
of minimal logic. According to it, a contradiction is derived from the assumption Γ ∪
Dne 6` A. By the completeness theorem for minimal logic, there must be a fan model
M = (Term, 2<N , 2, i, j) and a node u0 such that u0 Γ, Dne and u0 6 A. The details of the
proof are found in [19].

Since in the above proof the carrier set of the classical model in question is the countable
set Term, the following holds immediately.

Remark 3.9.2. The hypothesis Γ |= A of the completeness theorem can be replaced by

Γ |=ℵ0 A

i.e., “for all classical models Mc with a countable carrier set |Mc |, for all assignments η,
Mc |= Γ[η] ⇒ Mc |= A[η]”.

Definition 3.9.3. We call a classical model Mc with a countable carrier set |Mc | a countable
(classical) model. Similarly, a finite model Mc is a model with a finite carrier set |Mc |. In
general, the cardinality of a classical model Mc is the cardinality of its carrier set |Mc |.

Corollary 3.8.2(i) of the soundness theorem for classical logic can take the form

Γ `c A ⇒ Γ |= A,

while the completeness theorem can be written as the implication

Γ |= A ⇒ ¬¬(Γ `∗c A).

As the implication
Γ `∗c A ⇒ Γ `c A
implies constructively the implication

¬¬(Γ `∗c A) ⇒ ¬¬(Γ `c A),

we get with constructive logic the implication

Γ |= A ⇒ ¬¬(Γ `c A),

hence with classical logic we get the implication

Γ |= A ⇒ Γ `c A

i.e., the converse of the implication that expresses the soundness theorem for classical logic.

3.10 The compactness theorem


Definition 3.10.1. A set of formulas Γ (included in Form∗ ) is consistent, if Γ 6`c ⊥, and it is
satisfiable, if there is (in the weak sense) a classical model Mc and an assignment η in |Mc |
such that Mc |= Γ[η]. I.e.,
Γ is consistent ⇔ ¬(Γ `c ⊥),

Γ is satisfiable ⇔ ¬¬ ∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η]) .

Notice that we use the equivalence between ∃˜x A and ¬¬∃x A in the above formulation
of satisfiability (Proposition 2.6.2(vi)). The consistency of Γ is a so-called syntactical notion,
while satisfiability of Γ is a so-called semantical one. As classical logic is consistent, the empty
set ∅ is consistent.

Corollary 3.10.2. If Γ ⊆ Form∗ , then Γ is consistent if and only if Γ is satisfiable.

Proof. (with constructive logic) We show only that if Γ is consistent, then Γ is satisfiable,
and the converse implication is an exercise. Assume Γ 6`c ⊥, and also assume that Γ is not
satisfiable i.e., 
¬¬¬ ∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η]) .
By Brouwer’s theorem we get

¬ ∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η]) ,

which implies constructively



∀Mc ∀η∈F(Var,|Mc |) (Mc 6|= Γ[η]) .

Hence, for every classical model Mc and every assignment η : Var → |Mc | we have that

Mc |= Γ[η] ⇒ Mc |= ⊥[η].

By the completeness theorem for classical logic there must be a derivation Γ `c ⊥ i.e.,

¬¬(Γ `c ⊥).

This, together with the assumption ¬(Γ `c ⊥), leads to a contradiction. Hence, we showed

¬¬¬¬ ∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η]) .

By Brouwer’s theorem again we get



¬¬ ∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η])

i.e., Γ is satisfiable.

Of course, the above proof is considerably simplified, if classical logic is used. Among the
many corollaries of the completeness theorem, the compactness and the Löwenheim-Skolem
theorems stand out as particularly important. Although their classical proofs are much
simpler, we also show these theorems constructively.

Corollary 3.10.3 (Compactness theorem). Let Γ ⊆ Form∗ . If every finite subset of Γ is
satisfiable, then Γ is satisfiable.

Proof. (with constructive logic) Assume that Γ is not satisfiable i.e.,



¬¬¬ ∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η]) .

Working as in the proof of Corollary 3.10.2, by the completeness theorem for classical logic
there must be a derivation Γ `c ⊥ i.e., ¬¬(Γ `c ⊥). As

Γ `c ⊥ ⇒ ∃Γ0 ⊆fin Γ (Γ0 `c ⊥),

we get

¬¬(Γ `c ⊥) ⇒ ¬¬∃Γ0 ⊆fin Γ (Γ0 `c ⊥).

By definition

Γ0 is satisfiable ⇔ ¬¬∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ0 [η]).

The following implication holds:

¬¬∃Γ0 ⊆fin Γ (Γ0 `c ⊥) ⇒ ¬∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ0 [η]),

which contradicts the satisfiability of Γ0 . To show that implication, suppose

Q = ∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ0 [η]).

For that classical model Mc and assignment η the following implication holds:

∃Γ0 ⊆fin Γ (Γ0 `c ⊥) ⇒ Mc |= ⊥[η],

as by the soundness theorem for classical logic

Mc |= Γ0 [η] ⇒ Mc |= ⊥[η],

and by our hypothesis Q we have that Mc |= Γ0 [η]. Hence we get



¬¬∃Γ0 ⊆fin Γ (Γ0 `c ⊥) ⇒ ¬¬(Mc |= ⊥[η]),

which contradicts ¬(Mc |= ⊥[η]). Hence we showed



¬¬¬¬ ∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η]) .

By Brouwer’s theorem again we get



¬¬ ∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η])

i.e., Γ is satisfiable.

Corollary 3.10.4 (Löwenheim, Skolem). Let Γ ⊆ Form∗ in a countable language L. If Γ is
satisfiable, then Γ is satisfiable in a countable classical model.

Proof. The proof with classical logic is straightforward. The constructive proof is an exercise.

Hence, however large a model of a satisfiable Γ may be, we can always find a small model,
i.e., a countable one. In the converse direction, one can show with compactness
that if there are arbitrarily large finite models of Γ, then there is also an infinite model of Γ.
Before showing this result we interpolate some related notions and facts on equality in L.
Definition 3.10.5. Let the underlying language L contain a binary relation symbol ≈ i.e.,
≈ ∈ Rel(2) . The set EqL of L-equality axioms consists of (the universal closures of)
(Eq1 ) x ≈ x,
(Eq2 ) x ≈ y → y ≈ x,
(Eq3 ) x ≈ y & y ≈ z → x ≈ z,
(Eq4 ) x1 ≈ y1 ∧ . . . ∧ xn ≈ yn → f (x1 , . . . , xn ) ≈ f (y1 , . . . , yn ),
(Eq5 ) x1 ≈ y1 ∧ . . . ∧ xn ≈ yn ∧ R(x1 , . . . , xn ) → R(y1 , . . . , yn ),
for all n-ary function symbols f , for all relation symbols R of L, and n ∈ N.
Note that the equality axioms are formulas of L. If f is a 0-ary function symbol, then Eq4
has as special case the axiom c ≈ c. Notice that this equality is the given “internal” equality of
L and must not be confused with the “external” equality x = y, which is the metatheoretical
equality of the set Var. Consequently, if t, s ∈ Term, we get the following formula of L:
t ≈ s.
Lemma 3.10.6 (Equality). Let r, s, t ∈ Term and A ∈ Form∗ .
(i) EqL ` t ≈ s → r(t) ≈ r(s).
(ii) If Freet,x (A) = Frees,x (A), then EqL ` t ≈ s → (A(t) ↔ A(s)).
Proof. (i) By induction on Term we prove the following formula:

∀r∈Term (EqL ` t ≈ s → r(t) ≈ r(s)).

(ii) By induction on Form∗ we prove the following formula:

∀A∈Form∗ (EqL ` t ≈ s → (A(t) ↔ A(s))).

Note that the expressions


t ≈ s → r(t) ≈ r(s),
t ≈ s → (A(t) ↔ A(s))
are formulas of L. An L-model M satisfies the equality axioms if and only if ≈M is a
congruence relation (i.e., an equivalence relation compatible with the functions and relations
of M).
Proposition 3.10.7. Let L be a countable language with equality ≈, and let Γ ⊆ Form∗ .
(i) If for every n ∈ N there are m ∈ N with m > n, a classical model Mc with cardinality m,
and an assignment η in |Mc | such that Mc |= Γ[η], then there are an infinite classical model
Nc and an assignment θ in |Nc | such that Nc |= Γ[θ].
(ii) If every classical model Mc with an assignment η in |Mc | such that Mc |= Γ[η] has finite
cardinality, then there is m ∈ N that bounds these cardinalities.
(iii) There is no set Γ that is modeled exactly by the finite classical models.

Proof. (with classical logic) (i) Let C = {cn | n ∈ N} be a set of constants such that cn 6= cm ,
for every n 6= m. Let also the new countable language

L′ = L ∪ C.

We extend the equality of L to L′ by keeping for simplicity the same symbol ≈. Let Γ′ be the
following set of formulas in L′ :

Γ′ = Γ ∪ {¬(cn ≈ cm ) | n, m ∈ N & n 6= m}.

If Γ′0 is a finite subset of Γ′ , it is of the form

Γ′0 = Γ0 ∪ Σ0 ,

where Γ0 is a finite subset of Γ, and

Σ0 = {¬(cn1 ≈ cm1 ), . . . , ¬(cnk ≈ cmk )},

for some k ∈ N. Clearly, we can find a finite model Mc with cardinality m > 2k and an
assignment η in |Mc |, such that Mc |= Γ[η], hence Mc |= Γ0 [η]. We can extend η to some η′
such that all constants occurring in Σ0 are assigned to pairwise distinct elements of |Mc |
under η′ (clearly, the equality on the carrier set is a congruence). Hence Mc |= Γ′0 [η′ ]. By the
compactness theorem for the countable language L′ there is a classical model Nc and an
assignment θ in |Nc | such that Nc |= Γ′ [θ]. Consequently, Nc is infinite, and clearly Nc |= Γ[θ].
(ii) and (iii) follow immediately from (i).

With classical logic one can also show that the compactness theorem implies the complete-
ness theorem for classical logic (exercise).
Chapter 4

Gödel’s incompleteness theorems

4.1 Elementary functions


The elementary functions are those number-theoretic functions that can be defined explicitly
by compositional terms built up from variables and the constants 0, 1 by repeated applications
of addition +, modified subtraction −· , bounded sums and bounded products.

Definition 4.1.1. The set of elementary functions of type Nk → N, where k ≥ 1, is defined
inductively by the following rules:

(Elem1 ) 0¹ ∈ Elem(1) and 1¹ ∈ Elem(1) ,
where 0¹ is the constant function 0 on N and 1¹ is the constant function 1 on N.

(Elem2 ) If k ∈ N+ and i ∈ {1, . . . , k}, then prki ∈ Elem(k) ,
where the projection function prki is defined by prki (x1 , . . . , xk ) = xi .

(Elem3 ) + ∈ Elem(2) ,
where +(x, y) = x + y is the addition of natural numbers.

(Elem4 ) −· ∈ Elem(2) ,
where the modified subtraction x −· y is defined by

x −· y = x − y, if x ≥ y, and x −· y = 0, otherwise.

(Elem5 ) If n, k ∈ N+ , f ∈ Elem(n) and f1 , . . . , fn ∈ Elem(k) , then f ◦ (f1 , . . . , fn ) ∈ Elem(k) ,
where the composite function f ◦ (f1 , . . . , fn ) is defined by

(f ◦ (f1 , . . . , fn ))(x1 , . . . , xk ) = f (f1 (x1 , . . . , xk ), . . . , fn (x1 , . . . , xk )).

(Elem6 ) If r ∈ N and f ∈ Elem(r+1) , then Σf ∈ Elem(r+1) ,
where

(Σf )(x1 , . . . , xr , y) = Σz<y f (x1 , . . . , xr , z), and (Σf )(x1 , . . . , xr , 0) = 0.

(Elem7 ) If r ∈ N and f ∈ Elem(r+1) , then Πf ∈ Elem(r+1) ,
where

(Πf )(x1 , . . . , xr , y) = Πz<y f (x1 , . . . , xr , z), and (Πf )(x1 , . . . , xr , 0) = 1.

We also define

Elem = ⋃k≥1 Elem(k) .

The function Σf is the bounded sum of f , and the function Πf is the bounded product of f .
By omitting bounded products, one obtains the so-called subelementary functions.
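The inductive clauses (Elem1 )–(Elem7 ) can be mirrored as a small Python sketch (ours, not part of the notes): number-theoretic functions become Python functions on natural-number arguments, and the bounded sum and bounded product become higher-order operators. Multiplication and the factorial then arise exactly as in the proof of Proposition 4.1.2((vii), (viii)) below; all names are our own.

```python
def monus(x, y):
    """Modified subtraction x -. y: x - y if x >= y, else 0."""
    return x - y if x >= y else 0

def proj(k, i):
    """Projection pr^k_i (1-based index i) on k arguments."""
    return lambda *xs: xs[i - 1]

def compose(f, *gs):
    """(f o (g1,...,gn))(xs) = f(g1(xs), ..., gn(xs))."""
    return lambda *xs: f(*(g(*xs) for g in gs))

def bounded_sum(f):
    """(Sigma f)(xs, y) = sum of f(xs, z) over z < y (0 for y = 0)."""
    return lambda *args: sum(f(*args[:-1], z) for z in range(args[-1]))

def bounded_prod(f):
    """(Pi f)(xs, y) = product of f(xs, z) over z < y (1 for y = 0)."""
    def g(*args):
        p = 1
        for z in range(args[-1]):
            p *= f(*args[:-1], z)
        return p
    return g

# Multiplication as a bounded sum of the projection pr^2_1:
mult = bounded_sum(proj(2, 1))
# Factorial as a bounded product of the successor function:
fact = bounded_prod(lambda y: y + 1)
```

For example, `mult(3, 4)` sums `3` four times, and `fact(5)` multiplies `1, 2, 3, 4, 5`.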

Proposition 4.1.2. The following functions are elementary:

(i) 0ᵏ : Nk → N, defined by 0ᵏ (x1 , . . . , xk ) = 0.
(ii) 1ᵏ : Nk → N, defined by 1ᵏ (x1 , . . . , xk ) = 1.
(iii) The identity function idN : N → N.
(iv) The maximum function max2 : N2 → N, where max2 (x, y) = max{x, y}.
(v) The successor function Succ : N → N, where Succ(x) = x + 1.
(vi) The predecessor function Pred : N → N, where Pred(x) = x − 1, if x ≥ 1, and Pred(0) = 0.
(vii) The product function · : N2 → N, where ·(x, y) = x · y.
(viii) The factorial function ! : N → N, where !(x) = x!.
(ix) The exponential function exp2 : N2 → N, where exp2 (x, y) = x^y .

Proof. We show only (vii) and (viii), and the rest is an exercise. We have that

·(x, y) = x · y = Σz<y pr21 (x, z) = Σz<y x,

!(x) = x! = Πy<x Succ(y) = Πy<x (y + 1).

Proposition 4.1.3. Let k ∈ N+ and n, r ∈ N.

(i) If f, g ∈ Elem(k) , then f + g ∈ Elem(k) , f −· g ∈ Elem(k) , and f · g ∈ Elem(k) .
(ii) The constant function nᵏ : Nk → N, defined by nᵏ (x1 , . . . , xk ) = n, is in Elem(k) .
(iii) The function x 7→ xm , where m ∈ N, is in Elem(1) .
(iv) A polynomial on N is in Elem(1) .
(v) The elementary functions are closed under “definition by cases” i.e., if h, g0 , g1 ∈ Elem(k) ,
“the case-distinction function Case(g0 , g1 ; h) of g0 and g1 with respect to h” is in Elem(k) , where

Case(g0 , g1 ; h)(x1 , . . . , xk ) = g0 (x1 , . . . , xk ), if h(x1 , . . . , xk ) = 0, and g1 (x1 , . . . , xk ), otherwise.

(vi) The elementary functions are closed under “bounded minimisation” i.e., if f ∈ Elem(r+1) ,
then µf ∈ Elem(r+1) , where

(µf )(x1 , . . . , xr , y) = µz<y (f (x1 , . . . , xr , z) = 0),

and µz<y (f (x1 , . . . , xr , z) = 0) denotes the least z < y such that f (x1 , . . . , xr , z) = 0. If
there is no z < y such that f (x1 , . . . , xr , z) = 0, then (µf )(x1 , . . . , xr , y) = y.
Proof. Case (iv) can be shown with the use of modified subtraction, and cases (v) and (vi)
with the use of modified subtraction and the bounded sum. Hence, not only the elementary,
but in fact the subelementary functions are closed under bounded minimisation. The rest is
an exercise.
 
Furthermore, we define µz≤y (f (x1 , . . . , xr , z) = 0) as µz<y+1 (f (x1 , . . . , xr , z) = 0).
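Bounded minimisation with the convention of (vi) — the value is the bound y when no witness exists below it — can be sketched in Python as follows; the combinator name and the square-root search are hypothetical examples of ours.

```python
def bounded_min(f):
    """(mu f)(x1,...,xr,y): least z < y with f(x1,...,xr,z) = 0, and y if none exists."""
    def g(*args):
        *xs, y = args
        for z in range(y):
            if f(*xs, z) == 0:
                return z
        return y  # convention: the bound itself when there is no witness
    return g

# Hypothetical example: the least z < y with z*z >= x, encoded by a 0/1-valued
# function where 0 means "witness found".
f_sq = lambda x, z: 1 if z * z < x else 0
search = bounded_min(f_sq)
```

Here `search(10, 10)` returns 4, the least z with z² ≥ 10, while `search(10, 3)` returns the bound 3, since no witness lies below it.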

4.2 A non-elementary function


The existence of non-elementary functions is easily justified on cardinality grounds; the set
Elem is countable, while the set

F (N) = ⋃k≥1 F(Nk , N)

has the cardinality of the set of real numbers. Next we show how to find a non-elementary
function, which is defined explicitly by some rule.
Definition 4.2.1. If k ∈ N, the function 2k : N → N is defined by

20 (m) = m; m ∈ N,
2k+1 (m) = 2^(2k (m)) ; m ∈ N.

If m ∈ N, then

21 (m) = 2^(20 (m)) = 2^m ,
22 (m) = 2^(21 (m)) = 2^(2^m) ,
23 (m) = 2^(22 (m)) = 2^(2^(2^m)) ,
. . .
2k+1 (m) = 2^(2k (m)) = 2^(2^(· · ·^(2^m))),

where there are (k + 1)-many 2’s in the above tower of powers.
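The iterated exponential 2k (m) of Definition 4.2.1 is easy to transcribe; a minimal Python sketch of ours, which also exhibits the very rapid growth along the diagonal m = 1:

```python
def two(k, m):
    """2_k(m): 2_0(m) = m and 2_{k+1}(m) = 2 ** 2_k(m), i.e. apply m -> 2**m k times."""
    for _ in range(k):
        m = 2 ** m
    return m
```

For example, `two(2, 3)` is 2^(2^3) = 256, and the values `two(n, 1)` for n = 0, 1, 2, 3, 4 are 1, 2, 4, 16, 65536.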

Lemma 4.2.2. For every elementary function f : Ns → N there is k ∈ N such that for all
(x1 , . . . , xs ) ∈ Ns we have that

f (x1 , . . . , xs ) < 2k (max{x1 , . . . , xs }).
Proof. By the induction principle that corresponds to the definition of elementary functions
of arity k. If f = 0¹, then 0¹(n) = 0 < 2^n = 21 (n). If f = 1¹, then 1¹(n) = 1 < 2^(2^n) = 22 (n).
If s ∈ N+ , and 1 ≤ i ≤ s, then

prsi (x1 , . . . , xs ) = xi
≤ max{x1 , . . . , xs }
< 2^max{x1 ,...,xs }
= 21 (max{x1 , . . . , xs }).

For the rest of the calculations we use the following inequalities:

(∗) n^n < (2^n)^n ≤ 2^(2^n) , for every n > 3,
(∗∗) 2 · 2^n ≤ 2^(2^n) ,

where for (∗) note that n < 2^n implies n^n < (2^n)^n . The inequality (2^n)^n ≤ 2^(2^n) is
shown by induction on n > 3, while to show (∗∗), we verify the cases n = 0, . . . , n = 3, and
for n > 3 we have that 2n < n^n , and we use (∗). Hence,

x + y ≤ 2 max{x, y}
< 2 · 2^max{x,y}
≤ 2^(2^max{x,y}) by (∗∗)
= 22 (max{x, y}),

x −· y ≤ max{x, y} < 2^max{x,y} = 21 (max{x, y}).

Let f1 , . . . , fn ∈ Elem(s) and f ∈ Elem(n) such that

f1 (x1 , . . . , xs ) < 2k1 (max{x1 , . . . , xs }),
............
fn (x1 , . . . , xs ) < 2kn (max{x1 , . . . , xs }),
f (y1 , . . . , yn ) < 2k (max{y1 , . . . , yn }),

for some k1 , . . . , kn , k ∈ N. If l = max{k1 , . . . , kn , k}, then

(f ◦ (f1 , . . . , fn ))(x1 , . . . , xs ) = f (f1 (x1 , . . . , xs ), . . . , fn (x1 , . . . , xs ))
< 2k (max{f1 (x1 , . . . , xs ), . . . , fn (x1 , . . . , xs )})
≤ 2k (max{2k1 (max{x1 , . . . , xs }), . . . , 2kn (max{x1 , . . . , xs })})
≤ 2k (2l (max{x1 , . . . , xs }))
≤ 2l (2l (max{x1 , . . . , xs }))
= 22l (max{x1 , . . . , xs }).

Next we suppose that

f (x1 , . . . , xr , y) < 2k (max{x1 , . . . , xr , y}),

for some k ∈ N. As

max{x1 , . . . , xr , 0}, . . . , max{x1 , . . . , xr , y − 1} ≤ max{x1 , . . . , xr , y},

and as

y ≤ 2k (y) ≤ 2k (max{x1 , . . . , xr , y}),

for every k ∈ N, we have that

(Σf )(x1 , . . . , xr , y) = Σz<y f (x1 , . . . , xr , z)
< Σz<y 2k (max{x1 , . . . , xr , z})
≤ Σz<y 2k (max{x1 , . . . , xr , y})
= y · 2k (max{x1 , . . . , xr , y})
≤ (2k (max{x1 , . . . , xr , y}))²
< 2k+2 (max{x1 , . . . , xr , y}),

as, if m > 1, we have that

2n (m)² ≤ 2n (m)^m < 2n (m)^(2n (m)) < 2^(2^(2n (m))) = 2n+2 (m), by (∗).

Similarly,

(Πf )(x1 , . . . , xr , y) = Πz<y f (x1 , . . . , xr , z)
< Πz<y 2k (max{x1 , . . . , xr , z})
≤ Πz<y 2k (max{x1 , . . . , xr , y})
= (2k (max{x1 , . . . , xr , y}))^y
≤ (2k (max{x1 , . . . , xr , y}))^(2k (max{x1 ,...,xr ,y}))
< 2k+2 (max{x1 , . . . , xr , y}),

as 2n (m)^(2n (m)) < 2^(2^(2n (m))) = 2n+2 (m), by (∗).

By Lemma 4.2.2 we can explicitly define a non-elementary function.

Corollary 4.2.3. The function f : N → N, defined by f (n) = 2n (1), for every n ∈ N, is not
elementary.

Proof. Exercise.

4.3 Elementary relations


Definition 4.3.1. A relation R ⊆ Nk is called elementary if its characteristic function

χR (x1 , . . . , xk ) = 1, if (x1 , . . . , xk ) ∈ R, and χR (x1 , . . . , xk ) = 0, otherwise,

is elementary.

Example 4.3.2. The equality = on N and the inequality < on N are elementary, since their
characteristic functions can be described as follows:

χ< (n, m) = 1 −· (1 −· (m −· n)),

χ= (n, m) = 1 −· (χ< (n, m) + χ< (m, n)).

Notice that the above writing is a simplification of the following formulation:

χ< (n, m) = 1² (n, m) −· (1² (n, m) −· (pr22 (n, m) −· pr21 (n, m))).

Furthermore, if R ⊆ N^(s+1) is elementary, then so is the function

f (~n, m) = µk<m R(~n, k)
= µk<m (χR (~n, k) = 1)
= µk<m ((1^(s+1) −· χR )(~n, k) = 0),

as

(1^(s+1) −· χR )(~n, k) = 0, if (~n, k) ∈ R, and (1^(s+1) −· χR )(~n, k) = 1, if (~n, k) ∈/ R.
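The characteristic functions of Example 4.3.2 can be transcribed literally into Python, with modified subtraction as the only helper (a sketch of ours):

```python
def monus(x, y):
    """Modified subtraction x -. y."""
    return x - y if x >= y else 0

def chi_lt(n, m):
    """chi_<(n, m) = 1 -. (1 -. (m -. n)): 1 iff n < m."""
    return monus(1, monus(1, monus(m, n)))

def chi_eq(n, m):
    """chi_=(n, m) = 1 -. (chi_<(n, m) + chi_<(m, n)): 1 iff n = m."""
    return monus(1, chi_lt(n, m) + chi_lt(m, n))
```

Note how `chi_lt` works: m −· n is positive exactly when n < m, and the double monus against 1 collapses any positive value to 1.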
Next we show that the elementary relations are closed under applications of propositional
connectives and bounded quantifiers.

Lemma 4.3.3. Let R, S ⊆ Nk and T ⊆ Nk+1 be elementary. The following relations

¬R = Nk \ R, R & S = R ∩ S, R ∨ S = R ∪ S, R ⇒ S = (Nk \ R) ∪ S,

A(~x, y) = ∀z<y T (~x, z) = ∀z (z < y ⇒ T (~x, z)),

E(~x, y) = ∃z<y T (~x, z) = ∃z (z < y & T (~x, z)),

are also elementary.

Proof. The following equalities hold:

χ¬R = 1ᵏ −· χR , χR & S = χR · χS ,

χA (~x, y) = Πz<y χT (~x, z),

and the result for the rest of the relations follows from their reduction to these, e.g.,

E(~x, y) ⇔ ¬∀z<y ¬T (~x, z).

Example 4.3.4. The above closure properties enable us to show that many “natural” functions
and relations of number theory are elementary. E.g., the floor of a positive rational, defined
as a function on pairs of naturals, and the “remainder function” mod : N2 → N, where
mod(n, m) = n mod m is the remainder of the division of n by m, are elementary, as

⌊n/m⌋ = µk<n (n < (k + 1)m),

n mod m = n −· ⌊n/m⌋ · m.

The unary relation Prime and the enumeration-function of primes are also elementary, since

Prime(n) ⇔ 1 < n & ¬∃m<n (1 < m & n mod m = 0),

pn = µm<2^(2^n) (Prime(m) & n = Σi<m χPrime (i)).

The values p0 , p1 , p2 , . . . form the enumeration of primes in increasing order. The inequality

pn ≤ 2^(2^n)

for the n-th prime pn can be proved by induction on n: for n = 0 this is clear by our convention
in Proposition 4.1.3(vi), and for n ≥ 1 we obtain

pn ≤ p0 p1 · · · pn−1 + 1 ≤ 2^(2^0) · 2^(2^1) · · · 2^(2^(n−1)) + 1 = 2^(2^n − 1) + 1 < 2^(2^n).
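These definitions can be sketched in Python, implementing µ with the convention of Proposition 4.1.3(vi): the value is the bound when no witness exists. Note that for n = 0 this convention itself delivers p0 = 2, since no prime lies below the bound 2^(2^0) = 2. Function names are ours.

```python
def floordiv(n, m):
    """floor(n/m) = mu_{k<n} (n < (k+1)*m); value n (the bound) if no witness."""
    for k in range(n):
        if n < (k + 1) * m:
            return k
    return n

def mod(n, m):
    """n mod m = n -. floor(n/m)*m."""
    return n - floordiv(n, m) * m

def is_prime(n):
    """Prime(n) <=> 1 < n and there is no d < n with 1 < d and n mod d = 0."""
    return 1 < n and not any(1 < d and mod(n, d) == 0 for d in range(n))

def nth_prime(n):
    """p_n = mu_{m < 2^(2^n)} (Prime(m) and n = #{primes below m})."""
    bound = 2 ** (2 ** n)
    count = 0  # number of primes seen so far, i.e. below the current m
    for m in range(bound):
        if is_prime(m):
            if count == n:
                return m
            count += 1
    return bound  # the mu convention; for n = 0 this correctly yields p_0 = 2
```

The searches terminate long before the enormous bound 2^(2^n) in practice, since p_n is found early.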

4.4 The set of functions E


We define the set of functions E that is going to be equal to the set of elementary functions
Elem. This alternative characterisation of Elem is useful for showing that Elem is closed under
limited recursion, via the closure of E under limited recursion.
Definition 4.4.1. The set E consists of those number theoretic functions that can be defined
from the initial functions: constant 0, successor Succ, projections, addition +, modified
subtraction, multiplication, and exponentiation 2^x , by applications of composition and bounded
minimisation.
Corollary 4.4.2. (i) Every function in E is elementary.
(ii) The characteristic functions of the equality and “less than” relations are in E.
(iii) A relation R ⊆ Nk is an E-relation, if its characteristic function is in E. The E-relations
are closed under propositional connectives and bounded quantifiers.
Proof. (i) By induction on E. All initial functions in E are elementary. The exponentiation-
map x 7→ 2^x is shown to be in Elem(1) similarly to the proof for exp2 (Proposition 4.1.2).
Moreover, the elementary functions are closed under composition and bounded minimisation.
(ii) It follows immediately by the writing of their characteristic functions in Example 4.3.2,
and by the fact that 1¹ = Succ ◦ 0¹ ∈ E.
(iii) As the closure under bounded products is not mentioned in the definition of E, we write
the characteristic function of
 
A(~x, y) = ∀z<y T (~x, z) = ∀z z < y ⇒ T (~x, z) ,

as follows:

χA = χ= ◦ (pr^(k+1)_(k+1) , f ),

f (~x, y) = µz<y (χT (~x, z) = 0).

As

χA (~x, y) = 1 ⇔ (χ= ◦ (pr^(k+1)_(k+1) , f ))(~x, y) = 1
⇔ pr^(k+1)_(k+1) (~x, y) = µz<y (χT (~x, z) = 0)
⇔ y = µz<y (χT (~x, z) = 0),

by our convention in Proposition 4.1.3(vi) we have that χT (~x, z) = 1, for every z < y.

Lemma 4.4.3. There are pairing functions π, π1 , π2 in E with the following properties:
(i) π maps N × N bijectively onto N.
(ii) π(a, b) + b + 2 ≤ (a + b + 1)2 , for a + b ≥ 1, hence π(a, b) < (a + b + 1)2 .
(iii) π1 (c), π2 (c) ≤ c.
(iv) π(π1 (c), π2 (c)) = c.
(v) π1 (π(a, b)) = a.
(vi) π2 (π(a, b)) = b.
Proof. We enumerate the pairs of natural numbers
.. .. .. .. ..
. . . . .
(0, 3)(1, 3)(2, 3)(3, 3) . . .
(0, 2)(1, 2)(2, 2)(3, 2) . . .
(0, 1)(1, 1)(2, 1)(3, 1) . . .
(0, 0)(1, 0)(2, 0)(3, 0) . . .
as follows:
..
.
6 ...
3 7 ...
1 4 8 ...
0 2 5 9 ...
I.e., if ∆n are the diagonals:
∆0 = (0, 0),
∆1 = (0, 1)(1, 0),
∆2 = (0, 2)(1, 1)(2, 0),
∆3 = (0, 3)(1, 2)(2, 1)(3, 0),
etc., then the above enumeration enumerates the pairs of all diagonals following the route
∆0 → ∆1 → ∆2 → ∆3 → . . . .
We remark the following:

• If (a, b) ∈ ∆n , then a + b = n.
• The number of pairs in ∆n is n + 1.
• The number π(a, b) associated to the pair (a, b) counts the number of pairs from (0, 0),
the first pair in the enumeration, until reaching (a, b) in the diagonal ∆a+b and having
gone through the previous diagonals

∆0 → ∆1 → ∆2 → ∆3 → . . . → ∆a+b−1 .

As ∆0 has 1 element, ∆1 has 2 elements, . . . , ∆a+b−1 has a + b elements, we get

π(a, b) = [1 + 2 + · · · + (a + b)] + a
= (a + b)(a + b + 1)/2 + a
= (Σi≤a+b i) + a.

The second equality above shows that π is in E (the justification of this is an exercise), while
the third equality shows that π is subelementary. Clearly π : N × N → N is bijective. Moreover,
a, b ≤ π(a, b) and in case π(a, b) 6= 0 also a < π(a, b). Let

π1 (c) = µx≤c ∃y≤c (π(x, y) = c),


π2 (c) = µy≤c ∃x≤c (π(x, y) = c).

As π is in E, we also have that π1 and π2 are in E. Moreover, by their definition, and since
π is subelementary, we also have that π1 and π2 are subelementary. Clearly, πi (c) ≤ c for
i ∈ {1, 2} and
π1 (π(a, b)) = a, π2 (π(a, b)) = b, π(π1 (c), π2 (c)) = c.

For π(a, b) we have the estimate

π(a, b) + b + 2 ≤ (a + b + 1)² for a + b ≥ 1.

This follows with n = a + b from

n(n + 1)/2 + n + 2 ≤ (n + 1)² for n ≥ 1,

which is equivalent to n(n + 1) + 2(n + 1) ≤ 2((n + 1)² − 1), and hence to (n + 2)(n + 1) ≤
2n(n + 2), which holds for n ≥ 1.
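The pairing function π and its projections π1 , π2 can be sketched as follows. Instead of the bounded searches µx≤c , µy≤c of the proof, the inverse is computed in closed form with an integer square root — a shortcut of ours, convenient for the β-function below.

```python
from math import isqrt

def pair(a, b):
    """pi(a, b) = (1 + 2 + ... + (a+b)) + a, the diagonal enumeration of N x N."""
    return (a + b) * (a + b + 1) // 2 + a

def unpair(c):
    """(pi_1(c), pi_2(c)): invert pi in closed form via an integer square root."""
    s = (isqrt(8 * c + 1) - 1) // 2   # index of the diagonal containing c
    a = c - s * (s + 1) // 2          # position along that diagonal
    return a, s - a
```

For example, `pair(2, 1)` is 8, matching the enumeration grid in the proof, and `unpair` recovers the pair from the code.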

Theorem 4.4.4 (Gödel’s β-function). There is in E a function β with the following property:
For every sequence a0 , . . . , an−1 < b of numbers less than b we can find a number

c ≤ 4 · 4^(n(b+n+1)^4) ,

such that β(c, i) = ai for all i < n.



Proof. Let

a = π(b, n) and d = Πi<n (1 + π(ai , i)a!).

From a! and d we can, for each given i < n, reconstruct the number ai as the unique x < b
such that

1 + π(x, i)a! | d.

For clearly ai is such an x, and if some x < b were to satisfy the same condition, then
because π(x, i) < a and the numbers 1 + ka! are relatively prime for k ≤ a, we would have
π(x, i) = π(aj , j) for some j < n. Hence x = aj and i = j, thus x = ai . Therefore

ai = µx<b ∃z<d ((1 + π(x, i)a!)z = d).

We can now define Gödel’s β-function as

β(c, i) = µx<π1 (c) ∃z<π2 (c) ((1 + π(x, i) · π1 (c)) · z = π2 (c)).

Clearly, β is in E. Furthermore, with c = π(a!, d) we see that β(c, i) = ai . One can then
estimate the given bound on c, using π(b, n) < (b + n + 1)² (exercise).

The above definition of β shows that it is a subelementary function.
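Gödel’s β-function and the coding step of the proof can be sketched on top of the pairing function. The names `beta` and `encode` are ours; the `div > 1` guard reflects the requirement z < π2 (c) in the bounded existential, and `encode` follows the proof literally: c = π(a!, d) with a = π(b, n).

```python
from math import factorial, isqrt

def pair(a, b):
    return (a + b) * (a + b + 1) // 2 + a

def unpair(c):
    s = (isqrt(8 * c + 1) - 1) // 2
    a = c - s * (s + 1) // 2
    return a, s - a

def beta(c, i):
    """beta(c, i): least x with (1 + pi(x, i) * pi1(c)) properly dividing pi2(c)."""
    p1, p2 = unpair(c)
    for x in range(p1):
        div = 1 + pair(x, i) * p1
        if div > 1 and p2 % div == 0:  # div > 1 encodes the condition z < pi2(c)
            return x
    return p1

def encode(seq, b):
    """Following the proof: a = pi(b, n), d = prod_{i<n}(1 + pi(a_i, i)*a!), c = pi(a!, d)."""
    n = len(seq)
    afact = factorial(pair(b, n))
    d = 1
    for i, ai in enumerate(seq):
        d *= 1 + pair(ai, i) * afact
    return pair(afact, d)
```

Although c is astronomically large (Python’s big integers absorb this), decoding is fast: the search in `beta` succeeds at x = ai < b.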

Theorem 4.4.5. The set of functions E is closed under limited recursion. Thus if g, h, k are
given functions in E and f is defined from them according to the schema

f (~m, 0) = g(~m),
f (~m, n + 1) = h(n, f (~m, n), ~m),
f (~m, n) ≤ k(~m, n),

then f is in E.

Proof. Let f be defined from g, h and k in E, by limited recursion as above. Using Gödel’s
β-function as in the last theorem we can find for any given ~m, n a number c such that
β(c, i) = f (~m, i) for all i ≤ n. Let R(~m, n, c) be the relation

β(c, 0) = g(~m) & ∀i<n (β(c, i + 1) = h(i, β(c, i), ~m)),

and note that its characteristic function is in E. It is clear, by induction, that if R(~m, n, c)
holds, then β(c, i) = f (~m, i), for all i ≤ n. Therefore we can define f explicitly by the equation

f (~m, n) = β(µc R(~m, n, c), n).

The function f is in E, if µc can be bounded by some function in E. However, the theorem
on Gödel’s β-function gives a bound 4 · 4^((n+1)(b+n+2)^4) , where in this case b can be taken
as the maximum of k(~m, i) for i ≤ n. But this can be defined in E as k(~m, i0 ), where
i0 = µi≤n ∀j≤n (k(~m, j) ≤ k(~m, i)). Hence µc can be bounded by a function in E.

Note that it is only in the previous proof that the exponential function is required, namely in
providing a bound for µc .

Corollary 4.4.6. The set of functions E is equal to the set Elem of elementary functions.

Proof. It is sufficient to show that E is closed under bounded sums and bounded products.
Suppose, for instance, that f is defined from g in E by bounded summation: f (~m, n) =
Σi<n g(~m, i). Then f can be defined by limited recursion, as follows:

f (~m, 0) = 0,
f (~m, n + 1) = f (~m, n) + g(~m, n),
f (~m, n) ≤ n · maxi<n g(~m, i),

and the functions (including the bound) from which it is defined are in E (why?). Thus f is
in E by the theorem. If f is defined by a bounded product, we proceed similarly.
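The limited recursion schema can be written as a Python combinator, and the bounded-sum instance above recovered from it. This is a sketch under our naming; the `assert` checks the side condition f (~m, n) ≤ k(~m, n), and g0 is a hypothetical example function.

```python
def limited_rec(g, h, k):
    """Limited recursion: f(m,0) = g(m), f(m,n+1) = h(n, f(m,n), m), with f(m,n) <= k(m,n)."""
    def f(m, n):
        v = g(m)
        for i in range(n):
            v = h(i, v, m)
        assert v <= k(m, n), "bound of the limited recursion violated"
        return v
    return f

# Bounded summation of g0(m, i) = m + i, recovered by limited recursion as in
# the proof above (g0 and the name bsum are ours):
g0 = lambda m, i: m + i
bsum = limited_rec(
    lambda m: 0,                         # f(m, 0) = 0
    lambda n, prev, m: prev + g0(m, n),  # f(m, n+1) = f(m, n) + g0(m, n)
    lambda m, n: n * max((g0(m, i) for i in range(n)), default=0),  # the bound
)
```

Here `bsum(2, 3)` computes 2 + 3 + 4 = 9, agreeing with the direct sum.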

4.5 Coding finite lists


Computation on lists is a practical necessity, and because we are basing everything here on
the single data type N, we must develop some means of coding finite lists or sequences of
natural numbers into N itself. There are various ways to do this, and we shall adopt one of
the most traditional, based on the pairing functions π, π1 , π2 .
Definition 4.5.1. The empty sequence ∅ is coded by the number 0, and a sequence n0 , n1 , . . . ,
nk−1 is coded by the sequence number

⟨n0 , n1 , . . . , nk−1 ⟩ = π′ (. . . π′ (π′ (0, n0 ), n1 ), . . . , nk−1 ),

with π′ (a, b) = π(a, b) + 1, thus recursively,

⟨⟩ = 0,
⟨n0 , n1 , . . . , nk ⟩ = π′ (⟨n0 , n1 , . . . , nk−1 ⟩, nk ).

Because of the surjectivity of π, every number a can be decoded uniquely as a sequence number
a = ⟨n0 , n1 , . . . , nk−1 ⟩. If a is greater than zero,

hd(a) = π2 (a −· 1)

is the head, i.e., the rightmost element, and

tl(a) = π1 (a −· 1)

is the tail of the list. The k-th iterate of tl is denoted tl(k) , and since tl(a) is less than or equal
to a, tl(k) (a) is elementarily definable by limited recursion. Thus we can define elementarily
the length and decoding functions:

lh(a) = µk≤a (tl(k) (a) = 0),

(a)i = hd(tl(lh(a) −· (i+1)) (a)).

We shall write (a)i,j for ((a)i )j and (a)i,j,k for (((a)i )j )k . If a = ⟨n0 , n1 , . . . , nk−1 ⟩, it is easy
to show that

lh(a) = k and (a)i = ni , for each i < k.

Furthermore (a)i = 0 when i ≥ lh(a). This elementary coding machinery will be used at
various crucial points in the following. Note that the functions lh(·) and (a)i are subelementary,
and so is ⟨n0 , n1 , . . . , nk−1 ⟩ for each fixed k.
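The sequence coding of Definition 4.5.1 can be sketched as follows; everything follows the definitions, except that the closed-form unpairing via an integer square root is our shortcut, and the function names are ours.

```python
from math import isqrt

def pair(a, b):
    return (a + b) * (a + b + 1) // 2 + a

def unpair(c):
    s = (isqrt(8 * c + 1) - 1) // 2
    a = c - s * (s + 1) // 2
    return a, s - a

def code(ns):
    """<n0, ..., n_{k-1}>, built recursively with pi'(a, b) = pi(a, b) + 1."""
    a = 0
    for n in ns:
        a = pair(a, n) + 1
    return a

def hd(a):
    """Head (rightmost element): pi_2(a -. 1)."""
    return unpair(a - 1)[1] if a > 0 else 0

def tl(a):
    """Tail: pi_1(a -. 1)."""
    return unpair(a - 1)[0] if a > 0 else 0

def lh(a):
    """Length: least k with tl^(k)(a) = 0."""
    k = 0
    while a > 0:
        a, k = tl(a), k + 1
    return k

def at(a, i):
    """(a)_i = hd(tl^(lh(a) -. (i+1))(a))."""
    for _ in range(max(lh(a) - (i + 1), 0)):
        a = tl(a)
    return hd(a)
```

For instance, `code([3, 0, 7])` yields a single number from which `lh` and `at` recover the length 3 and the entries 3, 0, 7.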

Lemma 4.5.2 (Estimate for sequence numbers). If ⟨n, . . . , n⟩ has k entries, then

(n + 1)^k ≤ ⟨n, . . . , n⟩ < (n + 1)^(2^k) .

Proof. We prove a slightly strengthened form of the second estimate, for ⟨n, . . . , n⟩ with k
entries:

⟨n, . . . , n⟩ + n + 1 ≤ (n + 1)^(2^k) ,

by induction on k. For k = 0 the claim is clear. In the step k 7→ k + 1 we have

⟨n, . . . , n⟩ + n + 1 = π(⟨n, . . . , n⟩, n) + n + 2 (with k + 1 and k entries, respectively)
≤ (⟨n, . . . , n⟩ + n + 1)² by Lemma 4.4.3
≤ ((n + 1)^(2^k))² = (n + 1)^(2^(k+1)) by induction hypothesis.

For the first estimate the base case is clear, and in the step we have

⟨n, . . . , n⟩ = π(⟨n, . . . , n⟩, n) + 1 (with k + 1 and k entries, respectively)
≥ ⟨n, . . . , n⟩ · (n + 1) (as π(a, n) + 1 ≥ a(n + 1))
≥ (n + 1)^(k+1) by induction hypothesis.

4.6 Gödel numbers


Definition 4.6.1. Let L be a countable first-order language. Assume that we have injectively
assigned to every n-ary relation symbol R a symbol number sn(R) of the form ⟨1, n, i⟩, and to
every n-ary function symbol f a symbol number sn(f ) of the form ⟨2, n, j⟩. Call L elementarily
presented, if the set SymbL of all these symbol numbers is elementary.
In what follows we shall always assume that the languages L considered are elementarily
presented. In particular this applies to every language with finitely many relation and function
symbols.
Definition 4.6.2 (Gödel numbering). Let sn(Var) = ⟨0⟩. For every L-term r we define
recursively its Gödel number ⌜r⌝ by

⌜xi ⌝ = ⟨sn(Var), i⟩,
⌜f (r1 . . . rn )⌝ = ⟨sn(f ), ⌜r1 ⌝, . . . , ⌜rn ⌝⟩.

Assign numbers to the logical symbols by sn(→) = ⟨3, 0⟩ and sn(∀) = ⟨3, 1⟩. For simplicity we
leave out the logical connectives ∧, ∨ and ∃ here; they could be treated similarly. We define
for every L-formula A its Gödel number ⌜A⌝ by

⌜R(r1 . . . rn )⌝ = ⟨sn(R), ⌜r1 ⌝, . . . , ⌜rn ⌝⟩,
⌜A → B⌝ = ⟨sn(→), ⌜A⌝, ⌜B⌝⟩,
⌜∀xi A⌝ = ⟨sn(∀), i, ⌜A⌝⟩.

Assume that 0 is a constant and Succ is a unary function symbol in L. For every a ∈ N the
numeral ā ∈ TermL is defined by 0̄ = 0, and the numeral of n + 1 is Succ(n̄).
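A toy version of this Gödel numbering for terms, built on the sequence numbers of Section 4.5, may clarify the definitions. The concrete indices j chosen below for the symbols 0 and Succ are ours, as are all function names.

```python
def pair(a, b):
    return (a + b) * (a + b + 1) // 2 + a

def code(ns):
    """Sequence number <n0, ..., n_{k-1}> of Section 4.5."""
    a = 0
    for n in ns:
        a = pair(a, n) + 1
    return a

sn_var = code([0])                 # sn(Var) = <0>

def sn_fun(n, j):
    """sn(f) = <2, n, j> for the j-th n-ary function symbol (the index j is our choice)."""
    return code([2, n, j])

def gn_var(i):
    """Godel number of the variable x_i: <sn(Var), i>."""
    return code([sn_var, i])

def gn_app(f_sym, args):
    """Godel number of f(r1 ... rn): <sn(f), gn(r1), ..., gn(rn)>."""
    return code([f_sym] + list(args))

ZERO, SUCC = sn_fun(0, 0), sn_fun(1, 0)  # symbol numbers for 0 and Succ

def gn_numeral(a):
    """Godel number of the numeral of a, built with 0 and Succ."""
    t = gn_app(ZERO, [])
    for _ in range(a):
        t = gn_app(SUCC, [t])
    return t
```

Since the sequence coding is injective, distinct terms receive distinct Gödel numbers, which the test below spot-checks.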

Proposition 4.6.3. There is an elementary function s such that for every formula C = C(z)
with z = x0 ,

s(⌜C⌝, k) = ⌜C(k̄)⌝.

Proof. The proof requires a lot of preparation, and it is omitted. Lemma 4.5.2 is necessary
for the proof.

4.7 Undefinability of the notion of truth


Definition 4.7.1. Let M be an L-structure. A relation R ⊆ |M|n is called L-definable in
M, or simply definable in M, if there is an L-formula A(x1 , . . . , xn ) such that

R = {(a1 , . . . , an ) ∈ |M|n | M |= A(x1 , . . . , xn )[x1 7→ a1 , . . . , xn 7→ an ]}.

We assume in this section that |M| = N, 0 is a constant in L and Succ is a unary function
symbol in L with 0M = 0 and SuccM (a) = a + 1.
Recall that for every a ∈ N the numeral ā ∈ TermL is defined by 0̄ = 0, and the numeral of
n + 1 is Succ(n̄). Observe that in this case the definability of R ⊆ Nn by A(x1 , . . . , xn ) is
equivalent to

R = {(a1 , . . . , an ) ∈ Nn | M |= A(ā1 , . . . , ān )}.


Definition 4.7.2. Let L be an elementarily presented language. We assume in this section
that every elementary relation is definable in M. A set S of formulas is called definable in
M, if

⌜S⌝ = {⌜A⌝ | A ∈ S}

is definable in M.

We shall show that already from these assumptions it follows that the notion of truth for
M, more precisely the set

Th(M) = {A ∈ Form | FV(A) = ∅ & M |= A}

of all closed formulas valid in M, is undefinable in M.

Lemma 4.7.3 (Semantical fixed point lemma). If every elementary relation is definable in
M, then for every L-formula B(z) we can find a closed L-formula A such that

M |= A if and only if M |= B(⌜A⌝).

Proof. Let s be the elementary function, mentioned above, satisfying for every formula
C = C(z) with z = x0 ,

s(⌜C⌝, k) = ⌜C(k̄)⌝.

Then in particular

s(⌜C⌝, ⌜C⌝) = ⌜C(⌜C⌝)⌝.

By assumption the graph Gs of s is definable in M, by As (x1 , x2 , x3 ) say. Let

C(z) = ∀x (As (z, z, x) → B(x)), A = C(⌜C⌝),

and therefore

A = ∀x (As (⌜C⌝, ⌜C⌝, x) → B(x)).

Hence M |= A if and only if ∀d∈N (d = ⌜C(⌜C⌝)⌝ ⇒ M |= B(d)), which is the same as
M |= B(⌜A⌝).

Theorem 4.7.4 (Tarski’s undefinability theorem). Assume that every elementary relation is
definable in M. Then Th(M) is undefinable in M.

Proof. Assume that ⌜Th(M)⌝ is definable by BW (z). Then for all closed formulas A,

M |= A if and only if M |= BW (⌜A⌝).

Now consider the formula ¬BW (z) and choose by the fixed point lemma a closed L-formula A
such that

M |= A if and only if M |= ¬BW (⌜A⌝).

This contradicts the equivalence above.

4.8 Representable relations and functions


Here we generalize the arguments of the previous section. There we have made essential use
of the notion of truth in a structure M, i.e., of the relation M |= A. The set of all closed
formulas A such that M |= A has been called the theory of M, denoted Th(M). Now, instead
of Th(M), we shall start more generally from an arbitrary theory T .

Definition 4.8.1. Let L be a countable first-order language with equality, and let L be the
set of all closed L-formulas. For every set Γ of formulas let L(Γ) be the set of all function
and relation symbols occurring in Γ. An axiom system Γ is a set of closed formulas such that
EqL(Γ) ⊆ Γ. A model of an axiom system Γ is an L-model M such that L(Γ) ⊆ L and M |= Γ.
For sets Γ of closed formulas we write

ModL (Γ) = {M | M is an L-model & M |= Γ ∪ EqL }.

Clearly Γ is satisfiable if and only if Γ has an L-model. A theory T is an axiom system closed
under `c , that is, EqL(T ) ⊆ T and

T = {A ∈ L(T ) | T `c A}.

A theory T is called complete, if for every formula A ∈ L(T ), T `c A or T `c ¬A.

For every L-model M satisfying the equality axioms the set Th(M) of all closed L-formulas
A such that M |= A is a theory. We consider the question as to whether in T there is a
notion of truth (in the form of a truth formula B(z)), such that B(z) means that z is true.
A consequence is that we have to explain all notions used without referring to semantical
concepts at all.

1. z ranges over closed formulas (or sentences) A, or more precisely over their Gödel
numbers ⌜A⌝.
2. “A true” is to be replaced by T ` A.
3. “C equivalent to D” is to be replaced by T ` C ↔ D.

Hence the question now is whether there is a truth formula B(z) such that

T ` A ↔ B(⌜A⌝),

for all sentences A. The result will be that this is impossible, under rather weak assumptions
on the theory T . Technically, the issue will be to replace the notion of definability by the
notion of representability within a formal theory. We begin with a discussion of this notion.
In this section we assume that L is an elementarily presented language with 0, Succ and = in
L, and T is an L-theory containing the equality axioms EqL .

Definition 4.8.2. A relation R ⊆ Nn is representable in T if there is a formula A(x1 , . . . , xn )


such that

T ` A(a1 , . . . , an ) if (a1 , . . . , an ) ∈ R,
T ` ¬A(a1 , . . . , an ) if (a1 , . . . , an ) ∉ R.

A function f : Nn → N is called representable in T if there is a formula A(x1 , . . . , xn , y)


representing the graph Gf ⊆ Nn+1 of f , i.e., such that

T ` A(a1 , . . . , an , f (a1 , . . . , an )), (4.1)


T ` ¬A(a1 , . . . , an , c) if c ≠ f (a1 , . . . , an ) (4.2)

and such that in addition

T ` A(a1 , . . . , an , y) ∧ A(a1 , . . . , an , z) → y=z, for all a1 , . . . , an ∈ N. (4.3)

If T ` b ≠ c for b < c, condition (4.2) follows from (4.1) and (4.3).
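A standard concrete instance, sketched here under the assumption that + belongs to L and that T proves the usual numeral computations (as, e.g., Robinson's arithmetic does; we write \overline{a} for the numeral of a):

```latex
A(x_1, x_2, y) := (x_1 + x_2 = y) \text{ represents addition:}
% (4.1): T \vdash \overline{a_1} + \overline{a_2} = \overline{a_1 + a_2},
%        by computing the sum with the recursion equations for + in T;
% (4.3): T \vdash A(\overline{a_1}, \overline{a_2}, y) \wedge
%        A(\overline{a_1}, \overline{a_2}, z) \to y = z,
%        by the equality axioms; and
% (4.2): follows from (4.1) and (4.3), by the remark above, since
%        T \vdash \overline{b} \neq \overline{c} \text{ for } b < c.
```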

Lemma 4.8.3. If the characteristic function χR of a relation R ⊆ Nn is representable in T ,


then so is the relation R itself.

Proof. For simplicity assume n = 1. Let A(x, y) be a formula representing χR . We show


that A(x, 1) represents the relation R. Assume a ∈ R. Then χR (a) = 1, hence (a, 1) ∈ GχR ,
hence T ` A(a, 1). Conversely, assume a ∉ R. Then χR (a) = 0, hence (a, 1) ∉ GχR , hence
T ` ¬A(a, 1).

4.9 Undefinability of the notion of truth in formal theories


Lemma 4.9.1 (Fixed point lemma). Assume that all elementary functions are representable
in T . Then for every formula B(z) we can find a closed formula A such that

T ` A ↔ B(pAq).

Proof. The proof is similar to the proof of the semantical fixed point lemma. Let s be the
elementary function introduced there and As (x1 , x2 , x3 ) a formula representing s in T . Let

C(z) = ∀x (As (z, z, x) → B(x)), A = C(pCq),

and therefore
A = ∀x (As (pCq, pCq, x) → B(x)).

Because of s(pCq, pCq) = pC(pCq)q = pAq we can prove in T

As (pCq, pCq, x) ↔ x = pAq,

hence by definition of A also

A ↔ ∀x (x = pAq → B(x))

and therefore
A ↔ B(pAq).

If T = Th(M), we obtain the semantical fixed point lemma above as a special case.
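The diagonal trick in this proof can be mimicked with strings. Below is a toy sketch (not from the notes, all names illustrative): "formulas" are Python strings, the Gödel number of a formula is its repr() quotation, and subst plays the role of the elementary function s, so that subst of C with its own quotation yields the code of C applied to its quotation.

```python
# Toy model of the fixed point construction. "Formulas" are strings, quotation
# (Goedel numbering) is repr(), and subst mimics the elementary function s.

def subst(c, n):
    """Replace the free variable z in c by the quotation of n."""
    return c.replace("z", repr(n))

B = "B({})"                    # an arbitrary "truth formula" B(z), as a template
C = B.format("subst(z, z)")    # C(z) := B(s(z, z))
A = subst(C, C)                # A   := C applied to its own quotation

# A literally reads B(subst('...C...', '...C...')), and evaluating that inner
# term reproduces A itself -- so A "says" B of its own quotation:
assert eval(A[len("B("):-1]) == A
```

Running the block succeeds silently; the final assertion is exactly the self-reference A ↔ B(pAq), transposed to the level of strings.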

Theorem 4.9.2. Let T be a consistent theory such that all elementary functions are repre-
sentable in T . Then there cannot exist a formula B(z) defining the notion of truth, i.e., such
that for all closed formulas A
T ` A ↔ B(pAq).

Proof. Assume we would have such a B(z). Consider the formula ¬B(z) and choose by the
fixed point lemma a closed formula A such that

T ` A ↔ ¬B(pAq).

For this A we obtain T ` A ↔ ¬A, contradicting the consistency of T .

If T = Th(M), Tarski’s undefinability theorem is a special case of the previous theorem.

4.10 Recursive functions


Definition 4.10.1. A relation R of arity r is said to be Σ01 -definable, if there is an elementary
relation E, say of arity r + l, such that for all ~n = n1 , . . . , nr ,

R(~n ) ⇔ ∃k1 . . . ∃kl E(~n, k1 , . . . , kl ).

A partial function ϕ is said to be Σ01 -definable, if its graph

{(~n, m) | ϕ(~n) is defined and ϕ(~n) = m}

is Σ01 -definable.

To say that a non-empty relation R is Σ01 -definable, or recursively enumerable, is equivalent


to saying that the set of all sequences h~ni satisfying R can be enumerated (possibly with
repetitions) by some elementary function f : N → N. Such relations are called elementarily
enumerable. For, choose any fixed sequence ha1 , . . . , ar i satisfying R and define

f (m) = h(m)1 , . . . , (m)r i if E((m)1 , . . . , (m)r+l ), and f (m) = ha1 , . . . , ar i otherwise.

Conversely, if R is elementarily enumerated by f , then

R(~n ) ⇔ ∃m (f (m) = h~ni)

is a Σ01 -definition of R.
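The two directions above can be made concrete in a small sketch (hypothetical names, with r = l = 1 and the Cantor pairing function standing in for the notes' sequence coding). Taking the sample elementary relation E(n, k), meaning k·k = n, the set R of perfect squares is enumerated exactly as in the definition of f:

```python
def decode(m):
    """Inverse of the Cantor pairing function: decode(pair(a, b)) == (a, b)."""
    w = 0
    while (w + 1) * (w + 2) // 2 <= m:
        w += 1
    b = m - w * (w + 1) // 2
    return w - b, b

def E(n, k):
    """A sample elementary relation: k witnesses that n is a perfect square."""
    return k * k == n

def f(m):
    """Enumerates R = {n | exists k. E(n, k)} with repetitions; 0 is the
    fixed element a_1 of R used in the 'otherwise' branch."""
    n, k = decode(m)
    return n if E(n, k) else 0

# f lists exactly the perfect squares below 10 (with many repetitions of 0):
assert {f(m) for m in range(100)} == {0, 1, 4, 9}
```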

Definition 4.10.2. The µ-recursive, or simply recursive functions are those partial functions
which can be defined from the initial functions: constant 0, successor S, projections onto the
i-th coordinate, addition +, modified subtraction −· and multiplication ·, by applications of
composition and unbounded minimisation. The latter is the scheme

if f ∈ Rec(r+1) , then µy f ∈ Rec(r) ,

where
(µy f )(x1 , . . . , xr ) = µy (f (x1 , . . . , xr , y) = 0)

that is, the least number y such that f (x1 , . . . , xr , y′) is defined for every y′ ≤ y and
f (x1 , . . . , xr , y) = 0.

Note that it is through unbounded minimisation that partial functions may arise.
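As a sketch of the scheme (with a Python callable standing in for f; names illustrative, not from the notes), unbounded minimisation is a search without an a-priori bound, and it is exactly this search that may fail to terminate:

```python
def mu(f, *xs):
    """(mu_y f)(xs): least y with f(*xs, y) == 0; diverges if there is none."""
    y = 0
    while f(*xs, y) != 0:
        y += 1
    return y

# Example: the integer square root, obtained as the least y with (y + 1)^2 > x.
# This particular function is total, although mu in general need not be.
def isqrt(x):
    return mu(lambda x, y: 0 if (y + 1) ** 2 > x else 1, x)

assert [isqrt(x) for x in (0, 1, 10, 16)] == [0, 1, 3, 4]
```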

Lemma 4.10.3. Every elementary function is µ-recursive.

Proof. By removing the bounds on µ one obtains µ-recursive definitions of the pairing functions
π, π1 , π2 and of Gödel’s β-function. Then by removing all mention of bounds one sees that
the µ-recursive functions are closed under unlimited primitive recursive definitions of the form:

f (~m, 0) = g(~m),
f (~m, n + 1) = h(n, f (~m, n)).

Thus one can µ-recursively define bounded sums and bounded products, and hence all
elementary functions.

The converse of the previous lemma does not hold (why?). Call a relation R recursive, if
its total characteristic function is recursive. One can show that a relation R is recursive if and
only if both R and its complement are recursively enumerable.
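One direction of the last claim can be sketched as follows (hypothetical names): if both R and its complement are enumerated by total functions, then membership in R is decided by dovetailing through the two enumerations, since every n must eventually appear in one of them:

```python
from itertools import count

def decide(enum_R, enum_coR, n):
    """Decide n in R, given enumerations of R and of its complement."""
    for m in count():          # dovetail: check stage m of both enumerations
        if enum_R(m) == n:
            return True
        if enum_coR(m) == n:
            return False

# Example: the even numbers, enumerated by m -> 2m, and their complement,
# enumerated by m -> 2m + 1; decide recovers the characteristic function.
assert decide(lambda m: 2 * m, lambda m: 2 * m + 1, 8) is True
assert decide(lambda m: 2 * m, lambda m: 2 * m + 1, 7) is False
```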

4.11 Undecidability and incompleteness


Consider a consistent formal theory T with the property that all recursive functions are
representable in T . This is a very weak assumption, as it is satisfied whenever the theory
allows the development of a certain minimum of arithmetic. We shall show that such a theory
is necessarily undecidable. Then we prove Gödel’s first incompleteness theorem, saying that
every axiomatised such theory must be incomplete.

Definition 4.11.1. In this section let L be an elementarily presented language with 0, Succ,
= in L and T a theory containing the equality axioms EqL . A set S of formulas is called
recursive (recursively enumerable), if

pSq = {pAq | A ∈ S}

is recursive (recursively enumerable).

Theorem 4.11.2 (Undecidability). Assume that T is a consistent theory such that all recursive
functions are representable in T . Then T is not recursive.

Proof. Assume that T is recursive. Then the characteristic function of pT q is recursive, hence
by assumption representable in T , and so by Lemma 4.8.3 there is a formula B(z) representing
pT q in T . Choose by the fixed point lemma a closed formula A such that

T ` A ↔ ¬B(pAq).

We shall prove (∗) T 6` A and (∗∗) T ` A; this is the desired contradiction.


Proof of (∗). Assume T ` A. Then A ∈ T , hence pAq ∈ pT q, hence T ` B(pAq) (because
B(z) represents the set pT q in T ). By the choice of A it follows that T ` ¬B(pAq), which
contradicts the consistency of T .
Proof of (∗∗). By (∗) we know T 6` A. Therefore A ∉ T , hence pAq ∉ pT q and therefore
T ` ¬B(pAq). By the choice of A it follows that T ` A.

We now aim at Gödel’s first incompleteness theorem.

Definition 4.11.3. A theory T is consistent, if ⊥ ∉ T ; otherwise T is inconsistent. Recall
that a theory T is complete, if for every closed formula A ∈ L we have A ∈ T or ¬A ∈ T .

Theorem 4.11.4 (Gödel’s First Incompleteness Theorem). Assume that T is a recursively


enumerable consistent theory with the property that all recursive functions are representable in
T . Then T is incomplete.

Proof. Let T be such a theory, and assume that T is complete. Clearly, the set F = {pAq |
A ∈ L} is elementary. Since T is complete, we have

a ∉ pT q ↔ a ∉ F ∨ ¬̇a ∈ pT q,

with ¬̇a = hsn(→), a, sn(⊥)i. Hence the complement of pT q is recursively enumerable as well,
which means that pT q is recursive. Now the claim follows from the undecidability theorem
above.

There are very simple theories with the property that all recursive functions are repre-
sentable in them; an example is a finitely axiomatised arithmetical theory Q due to Robinson.
One can sharpen the Incompleteness Theorem, as one can produce a formula A such that
neither A nor ¬A is provable. The original idea for this sharpening is due to Rosser. Gödel’s
original first incompleteness theorem provided such an A under the assumption that the
theory satisfied a stronger condition than mere consistency, namely ω-consistency. Rosser
then improved Gödel’s result by showing, with a somewhat more complicated formula, that
consistency is all that is required.
A theory T in an elementarily presented language L is axiomatised, if it is given by a
recursively enumerable axiom system AxT . One can show that the set AxT is elementary.
According to the theorem of Gödel-Rosser, for every axiomatised consistent theory T satisfying
certain weak assumptions, there is an undecidable sentence A meaning “for every proof of me
there is a shorter proof of my negation”. Because A is unprovable, it is clearly true. Gödel’s
Second Incompleteness Theorem provides a particularly interesting alternative to A, namely
a formula ConT expressing the consistency of T . Again it turns out to be unprovable and
therefore true. The proof of this theorem in a sharpened form is due to Löb (see [19], section
3.6.2).

Theorem 4.11.5 (Gödel’s Second Incompleteness Theorem). Let T be an axiomatised consistent
extension of Robinson’s Q, satisfying certain derivability conditions. Then T 6` ConT .
Bibliography

[1] P. Aczel, M. Rathjen: Constructive Set Theory, book draft, 2010.

[2] S. Awodey: Category Theory, Oxford University Press, 2010.

[3] E. Bishop: Foundations of Constructive Analysis, McGraw-Hill, 1967.

[4] E. Bishop and D. S. Bridges: Constructive Analysis, Grundlehren der Math. Wis-
senschaften 279, Springer-Verlag, Heidelberg-Berlin-New York, 1985.

[5] D. S. Bridges and F. Richman: Varieties of Constructive Mathematics, Cambridge


University Press, 1987.

[6] C. C. Chang, H. J. Keisler: Model Theory, Dover, 2012.

[7] J. Dugundji: Topology, Allyn and Bacon, 1966.

[8] G. Gentzen: Über das Verhältnis zwischen intuitionistischer und klassischer Arithmetik,
galley proof, Mathematische Annalen (received 15th March 1933). First published in
English translation in The Collected Papers of Gerhard Gentzen, M. E. Szabo (editor),
53-67, North-Holland, Amsterdam.

[9] G. Gentzen: Untersuchungen über das logische Schließen I, II, Mathematische Zeitschrift,
39, 1935, 176-210, 405-431.

[10] K. Gödel: Zur intuitionistischen Arithmetik und Zahlentheorie, in: Ergebnisse eines
mathematischen Kolloquiums, Heft 4 (for 1931-1932, appeared in 1933), 34-38. Translated
into English in The Undecidable, M. Davis (editor), 75-81, under the title “On intuitionistic
arithmetic and number theory”. For corrections of the translation see the review in J.S.L.
31 (1966), 484-494.

[11] R. Goldblatt: Topoi, The Categorial Analysis of Logic, Dover, 2006.

[12] T. Jech: Set theory, Springer, 2002.

[13] A. N. Kolmogorov: On the principle of the excluded middle (in Russian), Mat. Sb. 32,
1925, 646-667.

[14] J. Lambek, P. J. Scott: Introduction to higher order categorical logic, Cambridge University
Press, 1986.

[15] P. Martin-Löf: An intuitionistic theory of types: predicative part, in H. E. Rose and J. C.


Shepherdson (Eds.) Logic Colloquium’73, pp.73-118, North-Holland, 1975.

[16] P. Martin-Löf: Intuitionistic type theory: Notes by Giovanni Sambin on a series of lectures
given in Padua, June 1980, Napoli: Bibliopolis, 1984.

[17] H. Rogers: Theory of Recursive Functions and Effective Computability, McGraw-Hill,


1967.

[18] H. Schwichtenberg, A. Troelstra: Basic Proof Theory, Cambridge University Press 1996.

[19] H. Schwichtenberg, S. Wainer: Proofs and Computations, Cambridge University Press


2012.

[20] T. Streicher: Realizability, Lecture Notes, TU Darmstadt, 2018.

[21] The Univalent Foundations Program: Homotopy Type Theory: Univalent Foundations of
Mathematics, Institute for Advanced Study, Princeton, 2013.
