Logic Lecture Notes
These lecture notes, and especially chapters 3 and 4, are based on [19]. In chapters 1 and 2 a
categorical perspective is introduced. I would like to thank the students of LMU who followed
this course, for providing their comments, suggestions and corrections. I especially thank Nils
Köpp for assisting this course.
Contents
3 Models 79
3.1 Trees, fans, and spreads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.2 Fan models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.3 The Tarski-Beth definition of truth in a fan model . . . . . . . . . . . . . . . . 84
3.4 Soundness of minimal logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.5 Countermodels and intuitionistic fan models . . . . . . . . . . . . . . . . . . . . 91
3.6 Completeness of minimal logic . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.7 L-models and classical models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.8 Soundness theorem of classical logic . . . . . . . . . . . . . . . . . . . . . . . . 98
3.9 Completeness of classical logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.10 The compactness theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Mathematical logic (ML), or simply logic, is concerned with the study of formal systems related
to the foundations and practice of mathematics. ML is a very broad field encompassing various
theories, like the following. Proof theory, the main object of study of which is the concept
of (formal) derivation, or (formal) proof (see e.g., [18]). Model theory studies interpretations,
or models, of formal theories (see e.g., [6]). Axiomatic set theory is the formal theory of sets
that underlies most of the standard mathematical practice (see e.g., [12]). It is also called
Zermelo-Fraenkel set theory (ZF). The theory ZFC is ZF together with the axiom of choice.
ML has strong connections to category theory, a theory developed first by Eilenberg and Mac
Lane within homology and homotopy theory (see e.g. [2]). Categorical logic is that part of
category theory connected to logic (see [14]). Computability theory is the theory of computable
functions, or in general of algorithmic objects (see e.g., [17]).
An alternative to the notion of set is the concept of type (or data-type). Type theory,
which has its origins in Russell's so-called ramified theory of types, has evolved in modern times
into Martin-Löf type theory (MLTT), which also has many applications to theoretical computer
science (see [15], [16]). Recently, the late Fields medalist V. Voevodsky revealed unexpected
connections between homotopy theory and logic, developing Homotopy Type Theory (HoTT),
an extension of MLTT with his axiom of univalence and higher inductive types (see [21]).
In this chapter we develop the basics of first-order theories and we study derivations in
minimal logic. Although standard mathematics is done within classical logic, great mathematicians have developed mathematics within constructive logic. The aforementioned theories
MLTT and HoTT are within intuitionistic logic. There are also constructive set theory (see [1])
and constructive computability theory (see [5]). For basic mathematical theories within constructive logic see [3], [4]. Minimal logic is the most general (constructive) logic that we study
here.
metatheory M. Next we explain the kind of inductive definitions that must be possible in M.
An inductively defined set, or an inductive set, X is determined by two kinds of rules
(or axioms); the introduction rules, which determine the way the elements of X are formed,
or introduced, and the induction principle IndX for X (or elimination rule for X) which
guarantees that X is the least set satisfying its introduction rules.
Example 1.1.1. The most fundamental example of an inductive set is that of the set of
natural numbers N. Its introduction rules are:
              n ∈ N
 0 ∈ N ,   ───────────── .
           Succ(n) ∈ N
According to these rules, the elements of N are formed by the element 0 and by the primitive,
or given successor-function Succ : N → N. These rules alone do not determine a unique set; for
example the rationals Q and the reals R satisfy the same rules. We determine N by postulating
that N is the least set satisfying the above rules. This we do in a "bottom-up" way¹ with
the induction principle for N. If P, Q, R are formulas in our metatheory M, the M-formula
P ⇒ Q ⇒ R is the formula P ⇒ (Q ⇒ R), which is equivalent to (P & Q) ⇒ R i.e., "if P and if Q, then R".
The induction principle IndN for N is the following formula (in M): for every formula A(n)
on N in M,
A(0) ⇒ ∀n∈N (A(n) ⇒ A(Succ(n))) ⇒ ∀n∈N (A(n)).
The interpretation of IndN is the following: the hypotheses of IndN say that A satisfies the two
introduction rules for N i.e., A(0) and ∀n∈N (A(n) ⇒ A(Succ(n))). In this case A is a "competitor"
predicate to N. Then, if we view A as the set of all objects such that A(n) holds, the conclusion
of IndN guarantees that N ⊆ A, i.e., ∀n∈N (A(n)). In other words, N is “smaller” than A, and
this is the case for any such A.
Notice that we use the following conventions in M:
∀x∈X φ(x) :⇔ ∀x (x ∈ X ⇒ φ(x)),
∃x∈X φ(x) :⇔ ∃x (x ∈ X & φ(x)).
The induction principle in an inductive definition is the main tool for proving properties of
the defined set. In the case of N, one can prove (exercise) its corresponding recursion theorem
RecN , which determines the way one defines functions on N. According to a simplified version
of it, if X is a set, x0 ∈ X and g : X → X, there exists a unique function f : N → X such that
f (0) = x0 ,
f (Succ(n)) = g(f (n)); n ∈ N.
To show e.g., the uniqueness of f with the above properties, let h : N → X such that h(0) = x0
and h(Succ(n)) = g(h(n)), for every n ∈ N. Using IndN on A(n) :⇔ (f (n) = h(n)), we get
∀n (A(n)). As an example of a function defined through RecN , let Double : N → N defined by
Double(0) = 0,
Double(Succ(n)) = Succ(Succ(Double(n)))
i.e., X = N, x0 = 0 and g = Succ ◦ Succ.
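In a programming metatheory this simplified recursion theorem is literally a program. A minimal Python sketch (the names rec_nat and double are ours, not the text's):

```python
def rec_nat(x0, g):
    """RecN (simplified): the unique f : N -> X with f(0) = x0, f(Succ(n)) = g(f(n))."""
    def f(n):
        value = x0
        for _ in range(n):   # unfold Succ exactly n times
            value = g(value)
        return value
    return f

# Double : N -> N, i.e. X = N, x0 = 0 and g = Succ o Succ
double = rec_nat(0, lambda m: m + 2)
```

Here double(5) computes 10 by applying Succ ∘ Succ five times to 0.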
¹ If we work inside the set of real numbers R, we can also do this in a "top-down" way, by defining N to be
the intersection of all subsets of R satisfying these introduction rules. Notice that also in this way N is the least
subset of R satisfying its introduction rules.
1.2. FIRST-ORDER LANGUAGES 5
Example 1.1.2. Let A be a non-empty set that we call alphabet. The set A∗ of words over
A is introduced by the following rules
                   w ∈ A∗ ,  a ∈ A
 nilA∗ ∈ A∗ ,     ───────────────── .
                     w ⋆ a ∈ A∗
The symbol nilA∗ denotes the empty word, while w ⋆ a denotes the concatenation
of the word w and the letter a ∈ A. The induction principle IndA∗ for A∗ is the following: if
P (w) is any formula on A∗ in M, then

P (nilA∗ ) ⇒ ∀w∈A∗ ∀a∈A (P (w) ⇒ P (w ⋆ a)) ⇒ ∀w∈A∗ (P (w)).
The corresponding recursion theorem RecA∗ states that if X is a set, x0 ∈ X, and ga : X → X
is a given function for every a ∈ A, there is a unique function f : A∗ → X such that

f (nilA∗ ) = x0 ,
f (w ⋆ a) = ga (f (w)); w ∈ A∗ , a ∈ A.

As an example of a function defined through RecA∗ , if X = A∗ , w0 ∈ A∗ and if ga (w) = w ⋆ a,
for every a ∈ A, let the function fw0 : A∗ → A∗ be defined by

fw0 (nilA∗ ) = w0 ,
fw0 (w ⋆ a) = fw0 (w) ⋆ a; w ∈ A∗ , a ∈ A.
If ZF is our metatheory M, then the proof of the recursion theorem that corresponds to
an inductive definition can be complicated. If as metatheory we use a theory like Martin-
Löf’s type theory MLTT, there is a completely mechanical, hence trivial, way to recover the
corresponding recursion rule from the induction rule of an inductive definition.
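The recursion principle RecA∗ is likewise mechanical: it is a left fold over words. A Python sketch, with strings modelling A∗ (rec_words and f_w0 are illustrative names, not from the text):

```python
def rec_words(x0, g):
    """RecA*: the unique f with f(nil) = x0 and f(w * a) = g(a, f(w)).
    Words are modelled as Python strings, letters as characters."""
    def f(word):
        value = x0
        for a in word:        # w * a appends on the right, so fold left to right
            value = g(a, value)
        return value
    return f

# f_w0 with g_a(w) = w * a and starting word w0: it prepends the fixed word w0
f_w0 = rec_words("pre", lambda a, w: w + a)
```

For instance, f_w0("fix") rebuilds the input letter by letter on top of "pre", giving "prefix".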
where for every n ∈ N, Rel(n) is a, possibly empty, set of n-ary relation symbols or predicate
symbols². Moreover, Rel(n) ∩ Rel(m) = ∅, for every n ≠ m. A 0-ary relation symbol is called a
propositional symbol. The symbol ⊥ (read falsum) is required as a fixed propositional symbol

² For simplicity, we do not use the more accurate notations RelL , FunL .
6 CHAPTER 1. DERIVATIONS IN MINIMAL LOGIC
i.e., Rel(0) is inhabited by ⊥. The language will not, unless stated otherwise, contain the
equality symbol =, which is a 2-ary relation symbol. Moreover,
Fun = ⋃_{n∈N} Fun(n) ,

where for every n ∈ N, Fun(n) is a, possibly empty, set of n-ary function symbols. Moreover,
Fun(n) ∩ Fun(m) = ∅, for every n ≠ m. A 0-ary function symbol is called a constant, and we let
Const = Fun(0) .
Clearly, the above definition rests on some theory of sets, and of natural numbers, which,
as we have already said, are presupposed for our metatheory M. The equality symbol used in
Definition 1.2.1 is the equality (of sets, or objects) in M. If our formal language includes one
more fixed countably infinite set of variables VAR = {Vn | n ∈ N}, where Vi is a variable of
another sort, e.g., a set-variable, then one could define the notion of a second-order language
over Var, VAR and L in a similar fashion.
Example 1.2.2. The first-order language of arithmetic is the pair ({⊥, =}, {0, S, +, ·}), which
is written for simplicity as (⊥, =, 0, S, +, ·), where 0 ∈ Const, S ∈ Fun(1) , and +, · ∈ Fun(2) .
The first-order language of Zermelo-Fraenkel set theory (ZF) is the pair ({⊥, =, ∈}, ∅), which
is written for simplicity as (⊥, =, ∈), where ∈ is in Rel(2) .
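A first-order language as in Example 1.2.2 can be recorded as arity-indexed families of symbols. A hypothetical Python sketch (the class Language and its field names are ours, not the text's):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Language:
    """Arity-indexed relation and function symbols of a first-order language."""
    rel: dict  # arity n -> set of n-ary relation symbols; rel[0] must contain falsum
    fun: dict  # arity n -> set of n-ary function symbols; fun[0] are the constants

    def __post_init__(self):
        # ⊥ is required as a fixed propositional symbol
        assert "⊥" in self.rel.get(0, set()), "falsum must be a propositional symbol"

# the language of arithmetic (⊥, =, 0, S, +, ·)
arith = Language(rel={0: {"⊥"}, 2: {"="}},
                 fun={0: {"0"}, 1: {"S"}, 2: {"+", "·"}})
```

The disjointness requirement Rel(n) ∩ Rel(m) = ∅ holds automatically here, since each symbol is stored under a single arity key.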
1.3 Terms
The set TermL of terms of a first-order language L is inductively defined. For simplicity we
omit the subscript L. N+ denotes the set of strictly positive natural numbers.
Definition 1.3.1. The set Term of terms of a first-order language L is defined by the following
introduction rules:
  x ∈ Var            c ∈ Const
 ──────────  ,      ──────────  ,
  x ∈ Term           c ∈ Term

  n ∈ N+ ,  t1 , . . . , tn ∈ Term,  f ∈ Fun(n)
 ───────────────────────────────────────────────
           f (t1 , . . . , tn ) ∈ Term
to which, the following induction principle IndTerm corresponds:
∀x∈Var (P (x)) ⇒
∀c∈Const (P (c)) ⇒
∀n∈N+ ∀t1 ,...,tn ∈Term ∀f ∈Fun(n) ((P (t1 ) & . . . & P (tn )) ⇒ P (f (t1 , . . . , tn ))) ⇒
∀t∈Term (P (t)),
where P (t) is any formula (in M) on Term.
In words, every variable is a term, every constant is a term, and if t1 , . . . , tn are terms and
f is an n-ary function symbol with n ≥ 1, then f (t1 , . . . , tn ) is a term. If r, s are terms and ◦
is a binary function symbol, we usually write r ◦ s instead of ◦(r, s). E.g., S(0) and x + S(0)
are terms of the language of arithmetic. As in the case of the induction principle for natural
numbers, the induction principle for Term expresses that Term is the least set satisfying its
defining rules. A formula P (t) on Term could be “the number of left parentheses, (, occurring
in t is equal to the number of right parentheses, ), occurring in t”. We need to express this
formula in mathematical terms. For that we need the following recursion theorem for Term.
Proposition 1.3.2 (Recursion theorem for Term (RecTerm )). Let X be a set. If
FVar : Var → X,
FConst : Const → X,
Ff,n : X n → X,
for every n ∈ N+ and f ∈ Fun(n) , are given functions, there is a unique function F : Term → X
such that, for every x ∈ Var, c ∈ Const, n ∈ N+ , t1 , . . . , tn ∈ Term, and f ∈ Fun(n) ,

F (x) = FVar (x),
F (c) = FConst (c),
F (f (t1 , . . . , tn )) = Ff,n (F (t1 ), . . . , F (tn )).
Using the recursion theorem for Term one can define e.g., the function Pleft : Term → N
such that Pleft (t) is the number of left parentheses occurring in t ∈ Term. It suffices to define
it on the variables, the constants, and the complex terms f (t1 , . . . , tn ), supposing that Pleft is
defined on the terms t1 , . . . , tn . Namely, we define

Pleft (ui ) = 0,
Pleft (c) = 0,
Pleft (f (t1 , . . . , tn )) = 1 + Σ_{i=1}^{n} Pleft (ti ).

Here we used the recursion theorem for Term with respect to the functions FVar : Var → N,
FConst : Const → N, and Ff,n : Nn → N, where n ∈ N+ and f ∈ Fun(n) , defined by the rules:

FVar (ui ) = 0,
FConst (c) = 0,
Ff,n (k1 , . . . , kn ) = 1 + Σ_{i=1}^{n} ki .
In exactly the same way, one defines the function Pright : Term → N such that Pright (t) is the
number of right parentheses occurring in t ∈ Term. Now we can show the following.
Proposition 1.3.3. ∀t∈Term (Pleft (t) = Pright (t)).
Proof. We apply IndTerm on the formula P (t) :⇔ Pleft (t) = Pright (t). The validity of P (x),
for every x ∈ Var, and P (c), for every c ∈ Const is trivial. If f (t1 , . . . , tn ) is a complex term,
such that P (ti ) holds, for every i ∈ {1, . . . , n}, then by the inductive hypothesis we get
Pleft (f (t1 , . . . , tn )) = 1 + Σ_{i=1}^{n} Pleft (ti )
                            = 1 + Σ_{i=1}^{n} Pright (ti )
                            = Pright (f (t1 , . . . , tn )).
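Proposition 1.3.3 can also be checked mechanically once Term and RecTerm are coded. A Python sketch (the datatype and all names are ours, not the text's):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class App:            # f(t1, ..., tn) with n >= 1
    f: str
    args: tuple

def rec_term(F_var, F_const, F_app):
    """RecTerm: the unique F determined by the three given functions."""
    def F(t):
        if isinstance(t, Var):
            return F_var(t)
        if isinstance(t, Const):
            return F_const(t)
        return F_app(t.f, [F(s) for s in t.args])
    return F

# Pleft and Pright as in the text: writing f(t1,...,tn) adds one '(' and one ')'
p_left  = rec_term(lambda x: 0, lambda c: 0, lambda f, ks: 1 + sum(ks))
p_right = rec_term(lambda x: 0, lambda c: 0, lambda f, ks: 1 + sum(ks))

t = App("+", (App("S", (Const("0"),)), Var("x")))   # the term S(0) + x
```

On the sample term, p_left(t) and p_right(t) both evaluate to 2, as Proposition 1.3.3 predicts.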
1.4 Formulas
Definition 1.4.1. The set of formulas Form of a first-order language L is defined by the
following introduction rules:
  n ∈ N,  t1 , . . . , tn ∈ Term,  R ∈ Rel(n)
 ─────────────────────────────────────────────  ,
           R(t1 , . . . , tn ) ∈ Form

          A, B ∈ Form
 ─────────────────────────────  ,
  A → B, A ∧ B, A ∨ B ∈ Form

    A ∈ Form,  x ∈ Var
 ────────────────────────  ,
   ∀x A, ∃x A ∈ Form
to which, the following induction principle IndForm corresponds: for every formula P (A) on
Form in M,

∀n∈N ∀t1 ,...,tn ∈Term ∀R∈Rel(n) (P (R(t1 , . . . , tn ))) ⇒
∀A,B∈Form ((P (A) & P (B)) ⇒ P (A → B) & P (A ∧ B) & P (A ∨ B)) ⇒
∀A∈Form ∀x∈Var (P (A) ⇒ P (∀x A) & P (∃x A)) ⇒
∀A∈Form (P (A)).

We use the abbreviation ¬A = A → ⊥.
As usual, the induction principle IndForm expresses that Form is the least set satisfying its
introduction rules. Note that IndForm consists of formulas in M, in which the same quantifiers
and logical symbols are used, except for the meta-theoretic implication symbol ⇒ and the
meta-theoretic conjunction symbol &. Since the variables occurring in these meta-theoretic formulas
are different from Var, it is easy to understand from the context the difference between the
formulas in Form and the formulas in M. The expression ⊥ → ⊥ is a formula, and also the
expressions ∀x (⊥ → ⊥) and ∃x (R(x) ∨ S(x)). Notice that we have added two parentheses
(left and right) in both last examples, in order to make them clear to read. Alternatively, one
could have used the introduction rule
          A, B ∈ Form
 ───────────────────────────────────  ,
 (A → B), (A ∧ B), (A ∨ B) ∈ Form
but then it would be cumbersome to be faithful to it all the time. It is easy to associate to
each formula its formation tree i.e., the tree of all introduction rules that generate the formula,
which lies in the root of this tree. A metatheoretic formula P (A) on Form could express e.g.,
“the number of left parentheses occurring in A is equal to the number of right parentheses
occurring in A”. As in the case of terms, we need a recursion theorem for Form to formulate
P (A). Let Prime be the set of all prime formulas i.e., the set

Prime = {R(t1 , . . . , tn ) | n ∈ N, t1 , . . . , tn ∈ Term, R ∈ Rel(n) }.
Proposition 1.4.2 (Recursion theorem for Form (RecForm )). Let X be a set. If
FPrime : Prime → X
F→ : X × X → X, F∧ : X × X → X, F∨ : X × X → X,
F∀,x : X → X, F∃,x : X → X,
for every x ∈ Var, are given functions, there is a unique function F : Form → X such that

F (P ) = FPrime (P ), P ∈ Prime,
F (A → B) = F→ (F (A), F (B)),
F (A ∧ B) = F∧ (F (A), F (B)),
F (A ∨ B) = F∨ (F (A), F (B)),
F (∀x A) = F∀,x (F (A)),
F (∃x A) = F∃,x (F (A)).
The function depth |.| : Form → N is defined recursively by the clauses

|P | = 0, P ∈ Prime,
|A □ B| = max{|A|, |B|} + 1, □ ∈ {→, ∧, ∨},
|△x A| = |A| + 1, △ ∈ {∀, ∃}.
Here we used RecForm with X = N and the functions

FPrime (P ) = 0,
F□ (m, n) = max{m, n} + 1, □ ∈ {→, ∧, ∨},
F△,x (m) = m + 1, △ ∈ {∀, ∃}.
Definition 1.4.4. The function length ||.|| : Form → N is defined recursively by the clauses
||P || = 1, P ∈ Prime,
||A □ B|| = ||A|| + ||B|| + 1, □ ∈ {→, ∧, ∨},
||△x A|| = ||A|| + 1, △ ∈ {∀, ∃}.
Proof. Exercise.
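The depth and length functions are instances of RecForm and can be sketched directly in Python (the datatype is ours; the length clauses for composite formulas, partly elided above, are assumed to add 1 per connective or quantifier):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prime:           # a prime formula R(t1,...,tn), kept opaque here
    text: str

@dataclass(frozen=True)
class Bin:             # A □ B for □ in {→, ∧, ∨}
    op: str
    left: object
    right: object

@dataclass(frozen=True)
class Quant:           # △x A for △ in {∀, ∃}
    q: str
    var: str
    body: object

def depth(A):
    """|P| = 0, |A □ B| = max(|A|, |B|) + 1, |△x A| = |A| + 1."""
    if isinstance(A, Prime):
        return 0
    if isinstance(A, Bin):
        return max(depth(A.left), depth(A.right)) + 1
    return depth(A.body) + 1

def length(A):
    """||P|| = 1; each connective/quantifier is assumed to add 1."""
    if isinstance(A, Prime):
        return 1
    if isinstance(A, Bin):
        return length(A.left) + length(A.right) + 1
    return length(A.body) + 1

# the formula ∀x (R(x) → ⊥), i.e. ∀x ¬R(x)
A = Quant("∀", "x", Bin("→", Prime("R(x)"), Prime("⊥")))
```

For this formula, depth(A) = 2 and length(A) = 4.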
Definition 1.5.1. Let the function FVTerm : Term → P fin (Var) be defined by

FVTerm (x) = {x}, x ∈ Var,
FVTerm (c) = ∅,
FVTerm (f (t1 , . . . , tn )) = ⋃_{i=1}^{n} FVTerm (ti ),

and the function FVForm : Form → P fin (Var) by

FVForm (R(t1 , . . . , tn )) = ⋃_{i=1}^{n} FVTerm (ti ), R ∈ Rel(n) , n ∈ N+ ,
FVForm (R) = ∅, R ∈ Rel(0) ,
FVForm (A □ B) = FVForm (A) ∪ FVForm (B), □ ∈ {→, ∧, ∨},
FVForm (△x A) = FVForm (A) \ {x}, △ ∈ {∀, ∃}.
Definition 1.5.2. W(L) is the set of finite lists of symbols from the set Var ∪ L ∪ Rel ∪ Fun.
The set W(L) can be defined inductively as the set [Var ∪ L ∪ Rel ∪ Fun]∗ of words over the
alphabet Var ∪ L ∪ Rel ∪ Fun (see Example 1.1.2).
Clearly, Term, Form ⊊ W(L), as e.g., f R ∧ g(⊥, u8 is a word neither in Term nor in Form.
Definition 1.5.3. If s ∈ Term and x ∈ Var are fixed, the function ·[x := s] : Term → Term
is defined by

x[x := s] = s,
y[x := s] = y, y ∈ Var, y ≠ x,
c[x := s] = c,
f (t1 , . . . , tn )[x := s] = f (t1 [x := s], . . . , tn [x := s]).
Proposition 1.5.4. If s ∈ Term and x ∈ Var, then ∀t∈Term (t[x := s] ∈ Term).
Proof. It follows trivially by IndTerm .
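Definition 1.5.3 and Proposition 1.5.4 translate directly into a program. A Python sketch (the illustrative Term datatype is repeated so the fragment stays self-contained):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class App:            # f(t1, ..., tn)
    f: str
    args: tuple

def subst(t, x, s):
    """t[x := s]: replace every occurrence of the variable x in t by the term s."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Const):
        return t
    return App(t.f, tuple(subst(u, x, s) for u in t.args))

# (x + S(x))[x := 0] = 0 + S(0)
t = App("+", (Var("x"), App("S", (Var("x"),))))
r = subst(t, "x", Const("0"))
```

The result r is again a value of the Term datatype, mirroring Proposition 1.5.4.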
Definition 1.6.1. Let s ∈ Term, such that FV(s) = {y1 , . . . , ym }, and x ∈ Var. If 2 = {0, 1},
we define a function
Frees,x : Form → 2
that determines when “the variable x is substitutable i.e., it is free to be substituted, from
s in some formula”. Namely, if Frees,x (A) = 1, then x is substitutable from s in A, and if
Frees,x (A) = 0, then x is not substitutable from s in A. From now on, when we define a
function on Form that is based on a function on Term, as in the case of FVForm and FVTerm ,
we omit the subscripts and we understand from the context their domain of definition. The
function Frees,x is defined by

Frees,x (P ) = 1; P ∈ Prime,
Frees,x (A □ B) = Frees,x (A) · Frees,x (B); □ ∈ {→, ∧, ∨},
Frees,x (△y A) = 1, if x ∉ FV(△y A),
Frees,x (△y A) = 0, if x ∈ FV(△y A) & y ∈ {y1 , . . . , ym },
Frees,x (△y A) = Frees,x (A), if x ∈ FV(△y A) & y ∉ {y1 , . . . , ym }; △ ∈ {∀, ∃}.
According to Definition 1.6.1, x is substitutable from s in a prime formula, since there are
no quantifiers in it that can generate a capture. It is substitutable in A □ B, if it is substitutable
both in A and in B. In the case of an ∃- or ∀-formula △y A, if x is not free in △y A (which is
equivalent to x = y, or x ∉ FV(A)), then we set Frees,x (△y A) = 1, since no capture can be
generated.
If we then consider the formula ∃y (¬(y = x)), by Definition 1.6.1 we get

Freey,x (∃y (¬(y = x))) = 0.

Similarly, we get

Freez,x (R(x)) = 1,
Freez,x (∀z R(x)) = 0,
Freef (x,z),x (∀y S(x, y)) = 1, if x ≠ y, y ≠ z,
Freef (x,z),x (∃z ∀y (S(x, y) → R(x))) = 0.
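The capture test can be sketched in Python: fv plays FVForm and free plays Frees,x, taking FV(s) as a set. All names are ours, and prime formulas are modelled as carrying their free variables directly:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prime:          # a prime formula, recorded via the free variables of its terms
    fv: frozenset

@dataclass(frozen=True)
class Bin:            # A □ B
    left: object
    right: object

@dataclass(frozen=True)
class Quant:          # △y A
    var: str
    body: object

def fv(A):
    """FVForm as in Definition 1.5.1."""
    if isinstance(A, Prime):
        return A.fv
    if isinstance(A, Bin):
        return fv(A.left) | fv(A.right)
    return fv(A.body) - {A.var}

def free(fv_s, x, A):
    """Free_{s,x}(A): is x substitutable by s (with FV(s) = fv_s) in A?"""
    if isinstance(A, Prime):
        return True                      # no quantifier, no capture
    if isinstance(A, Bin):
        return free(fv_s, x, A.left) and free(fv_s, x, A.right)
    if x not in fv(A):                   # x not free in △y A: nothing to substitute
        return True
    # x is free in the body and x != A.var; a capture happens iff y occurs in s
    return A.var not in fv_s and free(fv_s, x, A.body)

# Free_{y,x}(∃y ¬(y = x)) = 0: the substituted y would be captured
A = Quant("y", Prime(frozenset({"x", "y"})))
```

Calling free(frozenset({"y"}), "x", A) returns False, matching the example above, while a term with a fresh variable, e.g. free(frozenset({"z"}), "x", A), returns True.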
Definition 1.6.2. If s ∈ Term and x ∈ Var are fixed, the function ·[x := s] : Form → Form
is defined by

R[x := s] = R, R ∈ Rel(0) ,
R(t1 , . . . , tn )[x := s] = R(t1 [x := s], . . . , tn [x := s]); R ∈ Rel(n) , n ∈ N+ ,
(A □ B)[x := s] = A[x := s] □ B[x := s]; □ ∈ {→, ∧, ∨},
(△y A)[x := s] = △y A, if x ∉ FV(△y A),
(△y A)[x := s] = △y (A[x := s]), if x ∈ FV(△y A); △ ∈ {∀, ∃}.
Proof. Exercise.
If x ∈ FV(A), and x ≠ y and y ∉ {y1 , . . . , ym }, the required implication follows trivially.
p : (A → B → C) → ((A → B) → (A → C))
p(q) : (A → B) → (A → C),
E = ∀x (A → B) → A → ∀x B, if x ∉ FV(A).
[p(q)](r) : ∀x B.
1.8. GENTZEN’S DERIVATIONS, A FIRST PRESENTATION 15
It is easy to see that there is no straightforward method to find a BHK-proof of the converse
implication ¬¬A → A, which is an instance of the so-called double negation elimination
principle (DNE). As we will see later in the course, this principle holds only classically. There
are some instances of DNE though, that can be shown constructively.
Example 1.7.6. A BHK-proof p : ¬¬¬A → ¬A is a rule p such that for every q : (¬¬A) → ⊥,
we have that p(q) : A → ⊥. Let p∗ : A → ¬¬A from Example 1.7.5. If r : A, we define
[p(q)](r) = q(p∗ (r)).
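Read as programs, the BHK-rules of Examples 1.7.5 and 1.7.6 are one-liners. In this Python sketch, ⊥ is simulated by a token value, and na is an assumed stand-in refutation of A, used purely to run the rules:

```python
# p_star : A -> ¬¬A, sending r : A to the rule q ↦ q(r)
p_star = lambda r: (lambda q: q(r))

# p : ¬¬¬A -> ¬A, with [p(q)](r) = q(p_star(r)) as in Example 1.7.6
p = lambda q: (lambda r: q(p_star(r)))

# to run the rules: simulate ⊥ by a token and assume a refutation na : ¬A
na = lambda a: ("⊥", a)        # illustrative stand-in, not a real proof
q = lambda dd: dd(na)          # q : ¬¬A -> ⊥
result = p(q)(5)               # evaluates to the ⊥-token ("⊥", 5)
```

The composition type-checks exactly as in the text: r : A gives p_star(r) : ¬¬A, and q maps that to ⊥.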
formulas derived as conclusions at those points, and the labels of the leaves are formulas
or terms. The labels of the nodes immediately above a node k are the premises of the rule
application. At the root of the tree we have the conclusion (or end formula) of the whole
derivation. In natural deduction systems one works with assumptions at leaves of the tree;
they can be either open or closed (canceled). Any of these assumptions carries a marker. As
markers we use assumption variables denoted u, v, w, u0 , u1 , . . . . The variables in Var will
now often be called object variables, to distinguish them from assumption variables. If at a
node below an assumption the dependency on this assumption is removed (it becomes closed),
we record this by writing down the assumption variable. Since the same assumption may be
used more than once (this was the case in the first example above), the assumption marked
with u (written u : A) may appear many times. Of course we insist that distinct assumption
formulas must have distinct markers.
An inner node of the tree is understood as the result of passing from premises to the
conclusion of a given rule. The label of the node then contains, in addition to the conclusion,
also the name of the rule. In some cases the rule binds or closes or cancels an assumption
variable u (and hence removes the dependency of all assumptions u : A thus marked). An
application of the ∀-introduction rule similarly binds an object variable x (and hence removes
the dependency on x). In both cases the bound assumption or object variable is added to the
label of the node.
First we have an assumption rule, allowing to write down an arbitrary formula A together
with a marker u:
u: A assumption.
The other rules of natural deduction split into introduction rules (I-rules for short) and
elimination rules (E-rules) for the logical connectives. E.g., for implication → there is an
introduction rule →+ and an elimination rule →− also called modus ponens. The left premise
A → B in →− is called the major (or main) premise, and the right premise A the minor
(or side) premise. Note that with an application of the →+ -rule all assumptions above it
marked with u : A are canceled (which is denoted by putting square brackets around these
assumptions), and the u then gets written alongside. There may of course be other uncanceled
assumptions v : A of the same formula A, which may get canceled at a later stage. We use
symbols like M, N, K, for derivations.
 a : A
 ─────── 1A
   A

is a derivation tree of a formula A from the assumption A. We use the assumption variable a : A
only for this tree. The introduction and elimination rules for implication are:
 [u : A]
   | M                  | M       | N
    B                  A → B       A
 ──────── →+ u        ─────────────── →−
  A → B                     B
For the universal quantifier ∀ there is an introduction rule ∀+ (again marked, but now
with the bound variable x) and an elimination rule ∀− whose right premise is the term r to
be substituted. The rule ∀+ x with conclusion ∀x A is subject to the following (eigen-)variable
condition to avoid capture: the derivation M of the premise A must not contain any open
assumption having x as a free variable.
  | M                    | M
   A                    ∀x A    r ∈ Term
 ────── ∀+ x           ────────────────── ∀−
 ∀x A                        A(r)
                                        [u : A]   [v : B]
  | M            | N             | M      | N       | K
   A              B             A ∨ B      C         C
 ────── ∨+0     ────── ∨+1     ─────────────────────── ∨− u, v
 A ∨ B          A ∨ B                     C
For conjunction we have the rules
                              [u : A]  [v : B]
  | M     | N             | M      | N
   A       B             A ∧ B      C
 ─────────── ∧+          ─────────────── ∧− u, v
   A ∧ B                       C
and for the existential quantifier we have the rules
                                   [u : A]
      | M                     | M    | N
 r ∈ Term   A(r)             ∃x A     B
 ──────────────── ∃+         ──────────── ∃− x, u (var.cond.)
      ∃x A                        B
Similar to ∀+ x the rule ∃− x, u is subject to an (eigen-)variable condition: in the derivation N
the variable x (i) should not occur free in the formula of any open assumption other than u : A,
and (ii) should not occur free in B. Again, in each of the elimination rules ∨− , ∧− and ∃−
the left premise is called major (or main) premise, and the right premise is called the minor
(or side) premise.
Notice that, as in the case of the BHK-interpretation, there is no rule for the derivation
of a prime formula P , other than the trivial unit-rule 1P . It is a nice exercise to check the
compatibility of Gentzen’s rules to the corresponding BHK-proofs. The rule ∨− u, v
  [u : A]   [v : B]
    | M   | N   | K
   A ∨ B   C     C
 ───────────────────── ∨− u, v
          C
is understood as follows: given a derivation tree for A ∨ B and derivation trees for C with
assumption variables u : A and v : B, respectively, a derivation tree for C is formed, such
that u : A and v : B are cancelled. Similarly we understand the rules →+ u, ∧− u, v and
∃− x, u. The above definition is a quite complex inductive definition. In order to rewrite it,
we introduce the following notions. Note that the rules of Definition 1.8.1 are used in the
presence of free assumptions in the same way. E.g., next follows a derivation tree for C with
assumption formula G:
  w : G   [u : A]   [v : B]
    | M     | N       | K
   A ∨ B     C         C
 ──────────────────────── ∨− u, v
            C
We now give derivations of the two example formulas D, E, treated informally above.
Since in many cases the rule used is determined by the conclusion, we suppress in such cases
the name of the rule. Moreover, often we write only a : A, instead of the whole tree that
corresponds to 1A . First we give the derivation of D:
 [u : A → B → C]   [w : A]       [v : A → B]   [w : A]
 ───────────────────────── →−    ──────────────────── →−
          B → C                            B
 ────────────────────────────────────────── →−
                     C
                ─────────── →+ w
                  A → C
          ───────────────────── →+ v
          (A → B) → A → C
 ────────────────────────────────── →+ u
 (A → B → C) → (A → B) → A → C
 [u : ∀x (A → B)]   x ∈ Var
 ──────────────────────────── ∀−
           A → B                 [v : A]
 ─────────────────────────────────── →−
                  B
             ──────── ∀+ x
               ∀x B
        ───────────────── →+ v
          A → ∀x B
 ──────────────────────────── →+ u
 ∀x (A → B) → A → ∀x B
Note that the variable condition is satisfied: In the derivation of B the still open assumption
formulas are A and ∀x (A → B); by hypothesis x is not free in A, and by Definition 1.5.1 it is
also not free in ∀x (A → B).
Definition 1.9.1. Let Avar be a new infinite set of "assumption variables", and let
Aform = Avar × Form,
where, for every (u, A) ∈ Aform we write u : A. If V is a non-empty finite subset of Aform i.e.,
V = {u1 : A1 , . . . , un : An }, we define

Form(V ) = {A ∈ Form | ∃u∈Avar (u : A ∈ V )} = {A1 , . . . , An }.
(→+ u)   If M ∈ D{u : A} (B), then

   | M
    B
 ─────── →+ u        ∈ D(A → B).
  A → B

(→− )   If M ∈ DV (A → B) and N ∈ DW (A), then

   | M       | N
  A → B       A
 ─────────────── →−        ∈ DV ∪W (B).
        B

(∧+ )   If M ∈ DV (A) and N ∈ DW (B), then

   | M     | N
    A       B
 ───────────── ∧+        ∈ DV ∪W (A ∧ B).
     A ∧ B

(∨+0 )   If M ∈ DV (A), then

   | M
    A
 ─────── ∨+0        ∈ DV (A ∨ B).
  A ∨ B

(∨+1 )   If N ∈ DW (B), then

   | N
    B
 ─────── ∨+1        ∈ DW (A ∨ B).
  A ∨ B
{A1 , . . . , An } ⊢ A, or simpler A1 , . . . , An ⊢ A,
If V = {a : A}, then Form(V ) = {A} and 1A ∈ DV (A). Hence we always have that A ⊢ A.
Example 1.10.2. The collection of sets and functions between them is the simplest example
of a category, which is denoted by Set.
The objects of a category are not necessarily sets, and hence the arrows are not necessarily
functions. This is exactly the case with the category of formulas Form.
1.10. THE PREORDER CATEGORY OF FORMULAS 21
Proposition 1.10.3. The category of formulas Form has objects the formulas in Form and
an arrow from A to B is a derivation of B from an assumption of the form u : A i.e.,
M : A → B :⇔ M ∈ D{u : A} (B).
Although a formula A is not a set, we have already discussed approaches to logic, like
Martin-Löf's type theory MLTT, where a set, or a type, is also a formula. In the BHK-interpretation
one can also understand a formula A as the "set" of its proofs p : A. Moreover, an arrow in
Form is not a function, but as it is an arrow in this category, it behaves as an "abstract"
function with respect to the abstract operation of composition in Form. Recall that the
arrow M : A → B in Form is captured in BHK-interpretation by some rule that behaves like
a function! So, in some sense, the category Form captures the “set-character” of a formula
and the “function-character” of a proof p : A → B in BHK-interpretation.
Notice that if L : A → B in Form i.e., L ∈ D{u : A} (B), and if M : A → B in Form i.e.,
M ∈ D{v : A} (B), are arrows in Form, then by the definition of equality of derivations in
Definition 1.9.2 we get L = M . Hence any two arrows A → B in Form are equal, or, in other
words, there is at most one arrow from A to B. An immediate consequence of this fact is that
proofs of equality of arrows in Form become trivial, as two arrows are always equal when
they have the same domain and codomain.
Definition 1.10.5. A preorder is a pair (I, ≼), where I is a set and ≼ ⊆ I × I such that:
(i) ∀i∈I (i ≼ i).
(ii) ∀i,j,k∈I (i ≼ j & j ≼ k ⇒ i ≼ k).
If a preorder satisfies the condition
(iii) ∀i,j∈I (i ≼ j & j ≼ i ⇒ i = j),
it is called a partially ordered set, or a poset.
A preorder (I, ≼) becomes a category with objects the elements of I and a unique arrow
from i to j if and only if i ≼ j. Conditions (i) and (ii) above ensure that I is a category.
A category in which there is at most one arrow between any two objects is called thin; so
every preorder is a thin category, and, moreover, any thin category generates a preorder.
Clearly, a poset is also a thin category. Many categorical notions are generalisations of
order-theoretic concepts. In many cases, a category can be seen as a generalised poset, allowing
more arrows between its objects.
 [u : A → ⊥]   [a : A]
 ───────────────────── →−
           ⊥
 ────────────── →+ u
 (A → ⊥) → ⊥
 ──────────────────── →+ a
 A → (A → ⊥) → ⊥
Note that double negation elimination i.e., the formula DNEA = ¬¬A → A, is in general
not derivable in minimal logic; but we cannot show this yet.
(i) (A → B) → ¬B → ¬A,
(ii) ¬(A → B) → ¬B,
(iii) ¬¬(A → B) → ¬¬A → ¬¬B,
(iv) (⊥ → B) → (¬¬A → ¬¬B) → ¬¬(A → B),
(v) ¬¬∀x A → ∀x ¬¬A.
Proof. Exercise.
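Clauses (i) and (iii) above have direct BHK-style witnesses, written as λ-terms. A Python sketch (the names contrapos and dn_map are ours; the trailing sample values, with ⊥ simulated by a token, only exercise the rules):

```python
# (i)  (A → B) → ¬B → ¬A
contrapos = lambda f: (lambda nb: (lambda a: nb(f(a))))

# (iii)  ¬¬(A → B) → ¬¬A → ¬¬B
dn_map = lambda nnf: (lambda nna: (lambda nb: nnf(lambda f: nna(lambda a: nb(f(a))))))

# exercising (i): with f = str : A → B and a simulated refutation of B
bottom = lambda b: ("⊥", b)
out = contrapos(str)(bottom)(3)     # the ⊥-token ("⊥", "3")
```

The typing is purely mechanical: in (i), a : A gives f(a) : B, which nb : ¬B maps to ⊥.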
ax∨+0 = A → A ∨ B,
ax∨+1 = B → A ∨ B,
ax∨− = A ∨ B → (A → C) → (B → C) → C,
ax∧+ = A → B → A ∧ B,
ax∧− = A ∧ B → (A → B → C) → C,
ax∃+ = A → ∃x A,
ax∃− = ∃x A → ∀x (A → B) → B (x ∉ FV(B)).
(iii) The formulas ax∃+ and ax∃− are equivalent, as axioms, to the rules ∃+ and ∃− x, u over
minimal logic.
Proof. (i) First we show that from the axiom ax∨+0 , whose derivation is the axiom itself,
and a supposed derivation M of A we get the following derivation of A ∨ B:

                | M
 A → A ∨ B       A
 ────────────────── →−
       A ∨ B
Similarly we show that from the formula ax∨+1 and a supposed derivation N of B we get a
derivation of A ∨ B. Next we show that from the formula ax∨− and supposed derivations M
of A ∨ B, N of C with assumption A, and K of C with assumption B we get the following
derivation of C:

               | M             [u : A]
 ax∨−         A ∨ B              | N
 ─────────────────── →−           C
 (A → C) → (B → C) → C        ─────── →+ u
                               A → C
 ──────────────────────────────────── →−          [v : B]
           (B → C) → C                              | K
                                                     C
                                                 ─────── →+ v
                                                  B → C
 ──────────────────────────────────────────────────────── →−
                             C
Conversely, from the rule ∨+0 we get the following derivation of ax∨+0 :

 [a : A]
 ─────── 1A
    A
 ─────── ∨+0
  A ∨ B
 ──────────── →+ a
 A → A ∨ B
Similarly, from the rule ∨+1 we get a derivation of ax∨+1 . From the elimination rule for
disjunction we get the following derivation of ax∨− :
 [u : A ∨ B]
 ─────────── 1A∨B     [v : A → C]  [v′ : A]      [w : B → C]  [w′ : B]
                      ──────────────────── →−    ──────────────────── →−
    A ∨ B                      C                          C
 ───────────────────────────────────────────────────────── ∨− v′, w′
                               C
                     ──────────────── →+ w
                      (B → C) → C
               ──────────────────────── →+ v
               (A → C) → (B → C) → C
 ──────────────────────────────────────── →+ u
 A ∨ B → (A → C) → (B → C) → C
(ii) and (iii) are exercises.
A similar result holds for axioms corresponding to the rules ∀+ x and ∀− . Note that in the
above derivation of C

 u : A ∨ B    [v : A → C]  [v′ : A]      [w : B → C]  [w′ : B]
              ──────────────────── →−    ──────────────────── →−
   A ∨ B               C                          C
 ─────────────────────────────────────────────────── ∨− v′, w′
                       C
we used the rule ∨− v′, w′ in the "extended" way described in Definition 1.9.2, where the
assumption variables u : A ∨ B, v : A → C and w : B → C are still open. Of course, they will
be canceled later in the derivation of ax∨− . The notation B ← A means A → B.
(i) (A ∧ B → C) ↔ (A → B → C),
(ii) (A → B ∧ C) ↔ (A → B) ∧ (A → C),
(iii) (A ∨ B → C) ↔ (A → C) ∧ (B → C),
(iv) (A → B ∨ C) ← (A → B) ∨ (A → C),
(v) (∀x A → B) ← ∃x (A → B), if x ∉ FV(B),
(vi) (A → ∀x B) ↔ ∀x (A → B), if x ∉ FV(A),
(vii) (∃x A → B) ↔ ∀x (A → B), if x ∉ FV(B),
(viii) (A → ∃x B) ← ∃x (A → B), if x ∉ FV(A).
                     [w : A → B]   [v : A]
                     ───────────────────── →−
                              B
                     ───────────────────── ∃+ (with r = x, x ∈ Var)
 [u : ∃x (A → B)]           ∃x B
 ─────────────────────────────────────── ∃− x, w
                ∃x B
 ───────────────────────── →+ v
        A → ∃x B
 ─────────────────────────────── →+ u
 ∃x (A → B) → A → ∃x B
The variable condition for ∃− is satisfied since the variable x (i) is not free in the formula A
of the open assumption v : A, and (ii) is not free in ∃x B. Of course, it is not a problem, if it
occurs free in A → B.
 Γ ⊢ A    Γ ⊆ ∆
 ─────────────── ext
     ∆ ⊢ A

 Γ ⊢ A    ∆ ∪ {A} ⊢ B
 ───────────────────── cut
      Γ ∪ ∆ ⊢ B
Proof. The ext-rule is an immediate consequence of the definition of Γ ⊢ A. Suppose
next that there are C1 , . . . , Cn ∈ Γ and D1 , . . . , Dm ∈ ∆ such that C1 , . . . , Cn ⊢ A and
D1 , . . . , Dm , A ⊢ B. The following is a derivation of B from assumptions in Γ ∪ ∆:

 u1 : D1 . . . um : Dm   [u : A]
            | M                      w1 : C1 . . . wn : Cn
             B                                 | N
 ──────────── →+ u                              A
    A → B
 ─────────────────────────────────────────────── →−
                         B
1.12. EXTENSION, CUT, AND THE DEDUCTION THEOREM 25
The following rules are special cases of the cut-rule for Γ = ∆ and Γ = ∆ = ∅, respectively:

 Γ ⊢ A    Γ ∪ {A} ⊢ B               ⊢ A    A ⊢ B
 ───────────────────── ,            ───────────── .
         Γ ⊢ B                          ⊢ B

From now on, we also denote Γ ⊢ A by the tree

 Γ
 | M
 A
Proposition 1.12.2. Let Γ ⊆ Form and A, B ∈ Form.
(i) Γ ⊢ (A → B) ⇒ (Γ ⊢ A ⇒ Γ ⊢ B).
(ii) (Γ ⊢ A or Γ ⊢ B) ⇒ Γ ⊢ A ∨ B.
(iii) Γ ⊢ (A ∧ B) ⇔ (Γ ⊢ A and Γ ⊢ B).
(iv) Γ ⊢ ∀y A ⇒ Γ ⊢ A(s), for every s ∈ Term such that Frees,y (A) = 1.
(v) If s ∈ Term such that Frees,y (A) = 1 and Γ ⊢ A(s), then Γ ⊢ ∃y A.
(iv) and (v): If Γ ⊢ ∀y A, the left derivation below is a derivation of A(s) from Γ, and if
Γ ⊢ A(s), the right derivation is a derivation of ∃y A from Γ:
    Γ                                     Γ
    | M                                   | M
   ∀y A    s ∈ Term            s ∈ Term   A(s)
 ──────────────────── ∀−      ──────────────────── ∃+
         A(s)                       ∃y A
If we define

 ⋀_{i=1}^{1} Ai = A1 ,

 ⋀_{i=1}^{n+1} Ai = (⋀_{i=1}^{n} Ai ) ∧ An+1 ,

then

 ∀n∈N+ ∀A1 ,...,An ,A∈Form ({A1 , . . . , An } ⊢ A ⇔ ⊢ ⋀_{i=1}^{n} Ai → A).
 u1 : C1 . . . un : Cn   [u : A]
            | M
             B
 ────────────── →+ u
     A → B

is a derivation of A → B from Γ. Conversely, if C1 , . . . , Cn ∈ Γ such that C1 , . . . , Cn ⊢ A → B,
the following is a derivation of B from Γ ∪ {A}:

 u1 : C1 . . . un : Cn
            | M               a : A
                             ─────── 1A
         A → B                  A
 ────────────────────────────────── →−
                  B
and we show

 ∀A1 ,...,An ,An+1 ,A∈Form ({A1 , . . . , An , An+1 } ⊢ A ⇔ ⊢ ⋀_{i=1}^{n+1} Ai → A).
1.13. THE CATEGORY OF FORMULAS IS CARTESIAN CLOSED 27
where (∗) follows by the inductive hypothesis on A1 , . . . , An and the formula An+1 → A, and
(∗∗) follows by the derivation
⊢ (A → B → C) ↔ (A ∧ B → C),
⊢ A ↔ B ⇒ (⊢ A ⇔ ⊢ B).
Proof. Exercise.
Definition 1.13.3. Let > be a fixed formula such that ` >. We call the formula > verum
i.e., true.
Notice that the notion of an initial (terminal) object is dual to the notion of a terminal
(initial) object i.e., we get the definition of a terminal object by reversing the arrow in the
definition of an initial object, and vice versa. One could have named a terminal object a
coinitial object and an initial object a coterminal one. This duality between concepts and
"coconcepts" occurs very often in category theory.
In the category of sets Set any singleton, like 1 = {0} is terminal, and the empty set
∅ is initial. It is straightforward to show that terminal, or initial objects in a category C
are unique up to isomorphism i.e., any two terminal, or initial objects in a category C are
isomorphic (exercise).
Proof. Exercise.
If there is a formula I ∈ Form such that I is an initial element in Form, then this is
expected to be the formula ⊥ (can you find a reason for that?). But such a thing cannot be
shown yet; it has to be "postulated" (see derivations in intuitionistic logic).
[Diagram: for all f_A : C → A and f_B : C → B there is a unique h : C → A × B with pr_A ◦ h = f_A and pr_B ◦ h = f_B.]
In Set the product of sets A, B is their cartesian product together with the two projection maps. Next we show that the product A × B in C, if it exists, is unique up to isomorphism, i.e., if there is some object D with arrows ϖ_A : D → A and ϖ_B : D → B such that the universal property of products is satisfied, then D ≅ A × B. In the universal property for D let C := D, f_A := ϖ_A and f_B := ϖ_B:

[Diagram: the unique arrow h : D → D with ϖ_A ◦ h = ϖ_A and ϖ_B ◦ h = ϖ_B.]

Since 1_D also makes this diagram commute, we get h = 1_D, i.e., the arrow ⟨ϖ_A, ϖ_B⟩ is unique. Since A × B and D both satisfy the universal property of the product, from the previous remark we get g ◦ h = 1_{A×B}
1.13. THE CATEGORY OF FORMULAS IS CARTESIAN CLOSED 29
[Diagram: g : D → A × B with pr_A ◦ g = ϖ_A and pr_B ◦ g = ϖ_B, and h : A × B → D with ϖ_A ◦ h = pr_A and ϖ_B ◦ h = pr_B,]

and similarly h ◦ g = 1_D, hence D ≅ A × B.
[Diagram: given f : A → B and f′ : A′ → B′, the arrow f × f′ : A × A′ → B × B′ satisfies pr_B ◦ (f × f′) = f ◦ pr_A and pr_{B′} ◦ (f × f′) = f′ ◦ pr_{A′}.]
Proof. Exercise. Notice that the commutativity of the corresponding diagrams is trivially
satisfied, as Form is a thin category.
Next we define the dual notion to the product of objects in a category. Notice that the
arrows in the universal property of coproduct are reversed with respect to the arrows in the
universal property of the product.
[Diagram: for all f_A : A → C and f_B : B → C there is a unique h : A + B → C with h ◦ i_A = f_A and h ◦ i_B = f_B.]
The arrows i_A, i_B are called coprojections, or injections. A category C has coproducts, if for all objects A, B of C there is a coproduct A + B in C (for simplicity we do not mention the corresponding coprojection arrows).
In Set the coproduct of sets A, B is their disjoint union A + B = ({0} × A) ∪ ({1} × B), together with the injections i_A : A → A + B, where i_A(a) = (0, a), for every a ∈ A, and i_B : B → A + B, where i_B(b) = (1, b), for every b ∈ B. A coproduct A + B in C, if it exists, is unique up to isomorphism (the proof is dual to the proof for the product).
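The universal property of the disjoint union can be checked directly on small finite sets. A Python sketch (the sets `A`, `B` and the arrows `f_A`, `f_B` are arbitrary illustrative choices):

```python
# Coproduct of sets A, B in Set: the disjoint union with tagged injections.
A = {"x", "y"}
B = {"x", "z"}

coproduct = {(0, a) for a in A} | {(1, b) for b in B}
i_A = lambda a: (0, a)
i_B = lambda b: (1, b)

# Universal property: given f_A : A -> C and f_B : B -> C, the unique
# h : A + B -> C with h ∘ i_A = f_A and h ∘ i_B = f_B is defined by cases.
f_A = lambda a: len(a)          # some arrows into C = int
f_B = lambda b: 10 * len(b)

def h(p):
    tag, v = p
    return f_A(v) if tag == 0 else f_B(v)

assert all(h(i_A(a)) == f_A(a) for a in A)
assert all(h(i_B(b)) == f_B(b) for b in B)
assert len(coproduct) == len(A) + len(B)   # the shared element "x" is not collapsed
```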
Proposition 1.13.10. If A, B ∈ Form, then A ∨ B is a coproduct of A, B in Form. Conse-
quently, Form has coproducts.
Proof. Exercise.
[Diagram: the evaluation arrow eval_{B,C} : (B → C) × B → C; for every f : D × B → C there is a unique f̂ : D → (B → C) such that eval_{B,C} ◦ (f̂ × 1_B) = f, where f̂ × 1_B : D × B → (B → C) × B.]
The arrow fb is called the (exponential) transpose of f . A category has exponentials, if for
every B, C in C there is an exponential B → C in C.
An exponential B → C of B and C is unique up to isomorphism (exercise). In Set an
exponential of the sets B and C is the set of all functions from B to C, i.e.,

C^B = {f ∈ P(B × C) | f : B → C},
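In Set the transpose f̂ is ordinary currying. A small Python sketch (the concrete `f` is an invented example):

```python
# In Set the exponential of B and C is the set of functions B -> C, with
# eval(g, b) = g(b).  The transpose of f : D x B -> C is
# f_hat : D -> (B -> C), f_hat(d)(b) = f(d, b), and eval ∘ (f_hat × 1_B) = f.
D, B = range(3), range(4)

def f(d, b):
    return d * 10 + b

def eval_BC(g, b):
    return g(b)

def f_hat(d):
    return lambda b: f(d, b)

assert all(eval_BC(f_hat(d), b) == f(d, b) for d in D for b in B)
```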
1.14. FUNCTORS ASSOCIATED TO THE MAIN LOGICAL SYMBOLS 31
[Diagram: the triangle with edges F₁(f) : F₀(A) → F₀(B), F₁(g) : F₀(B) → F₀(C), and F₁(g ◦ f) : F₀(A) → F₀(C)]
commutes, where for simplicity we use the same symbol for the operation of composition in
the categories C and D. In this case we write4 F : C → D. A functor C → C is called an
endofunctor (on C). Two functors F, G : C → D are equal, if F0 = G0 and F1 = G1 .
A contravariant functor from C to D is a pair F := (F0 , F1 ), where:
(i) F0 maps an object A of C to an object F0 (A) of D,
(ii′) F₁ maps an arrow f : A → B of C to an arrow F₁(f) : F₀(B) → F₀(A) of D, such that
(a) F₁(1_A) = 1_{F₀(A)}, for every A in C₀.
(b′) If f : A → B and g : B → C, then F₁(g ◦ f) = F₁(f) ◦ F₁(g), i.e., the following diagram
⁴In the literature it is often written F(C) and F(f), instead of F₀(C) and F₁(f).
[Diagram: F₁(g) : F₀(C) → F₀(B) and F₁(f) : F₀(B) → F₀(A), with composite F₁(g ◦ f) = F₁(f) ◦ F₁(g) : F₀(C) → F₀(A).]
Example 1.14.3. The pair (G0 , G1 ) : Setop → Set, where G0 (X) = F(X) = {φ : X → R},
and if f : X → Y , then G1 (f ) : F(Y ) → F(X) is defined by [G1 (f )](θ) = θ ◦ f
[Diagram: for f : X → Y and θ ∈ F(Y), the composite θ ◦ f : X → R.]
for every θ ∈ F(Y ), is a contravariant functor from Set to Set. If X is a set, then
[G1 (idX )](φ) = φ ◦ idX = φ
and since φ ∈ F(X) is arbitrary, we conclude that G1 (idX ) = idF(X) = idG0 (X) . If f : X → Y
and g : Y → Z, then G1 (f ) : F(Y ) → F(X), G1 (g) : F(Z) → F(Y ) and G1 (g ◦ f ) : F(Z) →
F(X). Moreover, if η ∈ F(Z), we have that
[G₁(g ◦ f)](η) = η ◦ (g ◦ f)
             = (η ◦ g) ◦ f
             = [G₁(f)](η ◦ g)
             = [G₁(f)]([G₁(g)](η))
             = [G₁(f) ◦ G₁(g)](η).
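The contravariance law G₁(g ◦ f) = G₁(f) ◦ G₁(g) can be checked pointwise. A Python sketch (the finite sets and the maps `f`, `g`, `eta` are invented test data):

```python
# The contravariant functor X ↦ F(X) = functions X -> R, with G1(f)(θ) = θ ∘ f
# (precomposition), reverses composition: G1(g ∘ f) = G1(f) ∘ G1(g).
X, Y, Z = range(3), range(4), range(5)
f = lambda x: x + 1          # f : X -> Y
g = lambda y: y % 5          # g : Y -> Z
eta = lambda z: 2.5 * z      # eta ∈ F(Z), i.e. eta : Z -> R

G1 = lambda u: (lambda theta: (lambda x: theta(u(x))))  # precomposition with u

lhs = G1(lambda x: g(f(x)))(eta)   # G1(g ∘ f)(eta)
rhs = G1(f)(G1(g)(eta))            # (G1(f) ∘ G1(g))(eta)
assert all(lhs(x) == rhs(x) for x in X)
```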
Example 1.14.4. If C, D, E are categories, F : C → D, and G : D → E are functors, their
composition G ◦ F : C → E, where G ◦ F = (G0 ◦ F0 , G1 ◦ F1 ), is a functor.
Definition 1.14.5. The collection of all categories with arrows the functors between them is
a category, which is called the category of categories, and it is denoted by Cat. The unit arrow
1C is the identity functor IdC , while the composition of functors is defined in Example 1.14.4.
Example 1.14.6. An endofunctor F : Form → Form is a monotone function from (Form, ≤)
to itself. The same is the case for any functor between preorders. Recall that if (I, ≤) and
(J, ≼) are preorders (see Definition 1.10.5), a function f : I → J is monotone, if

∀_{i,i′∈I} ( i ≤ i′ ⇒ f(i) ≼ f(i′) ).
Definition 1.14.7. If C, D are categories, the product category C × D has objects pairs (c, d), where c ∈ C₀ and d ∈ D₀. An arrow from (c, d) to (c′, d′) is a pair (f, g), where f : c → c′ in C₁ and g : d → d′ in D₁. If (f, g) : (c, d) → (c′, d′) and (f′, g′) : (c′, d′) → (c″, d″), their composition is defined by

(f′, g′) ◦ (f, g) = (f′ ◦ f, g′ ◦ g).

Moreover, 1_{(c,d)} = (1_c, 1_d). The projection functor Pr^C : C × D → C is the pair (Pr^C₀, Pr^C₁), where Pr^C₀(c, d) = c, for every object (c, d) of C × D, and Pr^C₁(f, g) = f, for every arrow (f, g) in C × D. The projection functor Pr^D : C × D → D is defined similarly.
It is immediate to show that C × D is a category and PrC and PrD are functors. Moreover,
the product category C × D is a product of C and D in Cat.
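Composition and units in C × D are componentwise, which a short Python sketch makes concrete (the arrows are invented numeric functions standing in for abstract arrows):

```python
# In the product category C × D arrows are pairs and everything is componentwise:
# (f', g') ∘ (f, g) = (f' ∘ f, g' ∘ g) and 1_(c,d) = (1_c, 1_d).
compose_pair = lambda fg2, fg1: (lambda x: fg2[0](fg1[0](x)),
                                 lambda y: fg2[1](fg1[1](y)))

f, g = (lambda x: x + 1), (lambda y: y * 2)       # (f, g)  : (c, d)  -> (c', d')
f2, g2 = (lambda x: x * x), (lambda y: y - 3)     # (f', g'): (c', d') -> (c'', d'')

h = compose_pair((f2, g2), (f, g))
assert h[0](4) == (4 + 1) ** 2    # first component:  f' ∘ f
assert h[1](4) == 4 * 2 - 3       # second component: g' ∘ g
```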
(i) ⋀ : Form × Form → Form, where ⋀₀(A, B) = A ∧ B, and ⋀₁(M : A → A′, N : B → B′) : (A ∧ B) → (A′ ∧ B′) is the following derivation of A′ ∧ B′ from the assumption w : A ∧ B:

[Derivation: from w : A ∧ B infer A and B; the derivations M and N yield A′ and B′; conclude A′ ∧ B′ by ∧⁺, discharging u : A and v : B by ∧⁻ u, v.]
(ii) ⋁ : Form × Form → Form, where ⋁₀(A, B) = A ∨ B, for every object (A, B) in Form × Form, and ⋁₁(M : A → A′, N : B → B′) : (A ∨ B) → (A′ ∨ B′) is the following derivation of A′ ∨ B′ from the assumption w : A ∨ B, given derivations M and N:

[Derivation: from w : A ∨ B by ∨⁻, discharging u : A and v : B: in the first case M yields A′ and ∨⁺₀ gives A′ ∨ B′; in the second case N yields B′ and ∨⁺₁ gives A′ ∨ B′.]
(iii) → : Form^op × Form → Form, where (→)₀(A, B) = A → B, for every (A, B) in Form^op × Form. The definition of (→)₁(M, N) : (A, B) → (A′, B′) : (A → B) → (A′ → B′), where M : A′ → A and N : B → B′, is an exercise.
(iv) ∀x : Form → Form, where (∀x)₀(A) = ∀x A, for every A ∈ Form, and (∀x)₁(M : A → B) : ∀x A → ∀x B is the following derivation:

[Derivation: from w : ∀x A infer A(x) by ∀⁻ (x ∈ Var); discharging u : A in M yields A → B by →⁺ u; →⁻ gives B, and ∀⁺ x gives ∀x B.]

(v) ∃x : Form → Form, where (∃x)₀(A) = ∃x A, for every A ∈ Form, and (∃x)₁(M : A → B) : ∃x A → ∃x B is defined dually, using the rules ∃⁻ and ∃⁺.
[Diagram: the naturality square F₀(C) → F₀(C′) over G₀(C) → G₀(C′), with vertical arrows τ_C, τ_{C′} and bottom arrow G₁(f).]
1.15. NATURAL TRANSFORMATIONS 35
[Diagram: for f : X → Y, the map G₁(f) : F(Y) → F(X) and Φ : F(X) → R, with composite Φ ◦ G₁(f) : F(Y) → R.]

τ_X(x) = x̂,  where x̂ : F(X) → R is given by x̂(φ) = φ(x), for every φ ∈ F(X).
The Gelfand transformation τ is a natural transformation from IdSet to H, as for every
f : X → Y the following diagram commutes
[Diagram: the square with top arrow f : X → Y, vertical arrows τ_X, τ_Y, and bottom arrow H₁(f) : H₀(X) → H₀(Y).]
Definition 1.15.3. If C, D are categories the functor category Fun(C, D) has objects the
functors from C to D, and if F, G : C → D, an arrow from F to G is a natural transformation
from F to G. The identity arrow 1F : F ⇒ F is the family of arrows (1F )C : F0 (C) → F0 (C),
where (1F )C = 1F0 (C) , and the following diagram trivially commutes
[Diagram: the square with horizontal arrows F₁(f) : F₀(C) → F₀(C′) and vertical identity arrows (1_F)_C, (1_F)_{C′}.]
(σ ◦ τ)_C = σ_C ◦ τ_C : F₀(C) → H₀(C),

[Diagram: the two stacked naturality squares for τ : F ⇒ G and σ : G ⇒ H at f : C → C′, with outer vertical arrows (σ ◦ τ)_C, (σ ◦ τ)_{C′},]

since

(σ ◦ τ)_{C′} ◦ F₁(f) = (σ_{C′} ◦ τ_{C′}) ◦ F₁(f)
                    = σ_{C′} ◦ (τ_{C′} ◦ F₁(f))
                    = σ_{C′} ◦ (G₁(f) ◦ τ_C)
                    = (σ_{C′} ◦ G₁(f)) ◦ τ_C
                    = (H₁(f) ◦ σ_C) ◦ τ_C
                    = H₁(f) ◦ (σ_C ◦ τ_C)
                    = H₁(f) ◦ (σ ◦ τ)_C.
Example 1.15.4. The functors A ↦ A ∧ B and A ↦ B ∧ A are isomorphic in Fun(Form, Form), and also the functors A ↦ A ∨ B and A ↦ B ∨ A are isomorphic in the category Fun(Form, Form) (exercise).
A ∧ B ≤ C ⇔ A ≤ (B → C).
1.16. GALOIS CONNECTIONS 37
With the help of the functors ⋀_B, →_B : Form → Form the last equivalence is rewritten as

(⋀_B)₀(A) ≤ C ⇔ A ≤ (→_B)₀(C),

in complete analogy to the correspondence

f : D × B → C ⇔ f̂ : D → (B → C)
that holds in a category C with exponentials. The last equivalence is understood as follows:
if f : D × B → C, there is a unique arrow fb: D → (B → C), using the universal property of
an exponential B → C of B and C in a category C. Conversely, if g : D → (B → C), there is
a unique arrow f : D × B → C such that fb = g (exercise).
Definition 1.16.1. If (I, ≤) and (J, ≼) are preorders, a Galois connection, or a Galois correspondence, between them is a pair of monotone functions (f : I → J, g : J → I), such that

∀_{i∈I} ∀_{j∈J} ( f(i) ≼ j ⇔ i ≤ g(j) ).

In this case we say that g is right adjoint to f, or f is left adjoint to g, and we write f ⊣ g.
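A classical concrete instance: for fixed k > 0, multiplication by k on (N, ≤) is left adjoint to integer division by k. A Python check (the search bound 50 is arbitrary):

```python
# A Galois connection between (N, ≤) and (N, ≤): for fixed k > 0,
# f(n) = n * k is left adjoint to g(m) = m // k, since n * k ≤ m  ⇔  n ≤ m // k.
k = 3
f = lambda n: n * k
g = lambda m: m // k

N = range(50)
assert all((f(n) <= m) == (n <= g(m)) for n in N for m in N)
# both maps are monotone, as the definition requires
assert all(f(n) <= f(n + 1) and g(n) <= g(n + 1) for n in range(49))
```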
As Cls(i) ≅ Cls(Cls(i)) and Int(i) ≅ Int(Int(i)), we get Cls(i) ∈ Closed(I) and Int(i) ∈ Open(I), for every i ∈ I. Notice that if (I, ≤) and (J, ≼) are preorders, and if f : I → J is monotone, then f preserves isomorphism, i.e., i ≅ i′ ⇒ f(i) ≅ f(i′), for every i, i′ ∈ I, where we use the same symbol for isomorphic elements of J.
Proof. Exercise.
The quantifiers can be described as adjoints. First we give a set-interpretation of this fact.
Definition 1.16.5. Let X, Y be sets. If u : X → Y, let u* : P(Y) → P(X) be defined by u*(T) = {x ∈ X | u(x) ∈ T}, for every T ∈ P(Y).
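The displayed clause is cut off in these notes; the standard set-interpretation, assumed in the sketch below, is that u* is the preimage map:

```python
# The preimage map u* : P(Y) -> P(X) along u : X -> Y,
# u*(T) = {x ∈ X | u(x) ∈ T}; it is monotone with respect to ⊆.
X = {0, 1, 2, 3}
u = lambda x: x % 2            # u : X -> Y with Y = {0, 1}

def u_star(T):
    return {x for x in X if u(x) in T}

assert u_star({0}) == {0, 2}
assert u_star({0, 1}) == X
assert u_star(set()) == set()
```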
If D is a subcategory of C, such that for every A, B ∈ D0 we have that D1 (A, B) = C1 (A, B),
then D is called a full subcategory of C. A category C is called small, if the collections C0
and C1 are both sets. If one of them is a proper class i.e., a class that is not a set, then C is
called large6 . If for every A, B ∈ C0 the collection C1 (A, B) is a set, then C is called locally
small.
⁶In Zermelo-Fraenkel set theory a class is either a set or a proper class. The collection of all sets, or the universe, V is a proper class. That can be shown via the so-called Russell paradox: if V were a set, then we could define with the scheme of separation the set R = {x ∈ V | x ∉ x}, and then we reach the contradiction R ∈ R ⇔ R ∉ R.
1.17. THE QUANTIFIERS AS ADJOINTS 39
Example 1.17.2. The category Setfin of all finite sets and functions between them is a full
subcategory of Set. The category Set is large, as the collection V of all sets is a proper class,
but it is locally small, since the collection of all functions between two sets is a set. The
category Form is small.
Example 1.17.3. If x1 , . . . , xn ∈ Var, the category Form(x1 , . . . , xn ) with objects the set
Form(x1 , . . . , xn ) of all formulas A, such that FV(A) ⊆ {x1 , . . . , xn }, together with the usual
derivations between them as arrows, is a full subcategory of Form.
Definition 1.17.4. If x, y ∈ Var, let the functors ∃(x, y), ∀(x, y) : Form(x, y) → Form(x) be defined by the rules

∃(x, y)₀(A) = ∃y A   &   ∀(x, y)₀(A) = ∀y A,   A ∈ Form(x, y).

Let also W(x, y) : Form(x) → Form(x, y), where W(x, y)₀(A) = A, for every A ∈ Form(x).
Next follows the immediate translation of Proposition 1.16.6 into minimal logic.
Theorem 1.17.5. The following adjunctions hold:
(i) ∃(x, y) a W (x, y).
(ii) W (x, y) a ∀(x, y).
Proof. Exercise.
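The set-interpretation of these adjunctions, with the direct image, the preimage, and the dual image along a function u : X → Y, can be verified exhaustively on small sets. A Python sketch (the map `u` and the sets are invented test data; the three operations are the standard set-theoretic ones):

```python
# Set-interpretation sketch of the adjunctions ∃ ⊣ W ⊣ ∀: along u : X -> Y the
# direct image exists_u is left adjoint to the preimage u_star, which is left
# adjoint to the dual image forall_u, on the powerset preorders (P(X), ⊆), (P(Y), ⊆).
from itertools import chain, combinations

X, Y = {0, 1, 2, 3}, {0, 1}
u = lambda x: x % 2

def powerset(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

exists_u = lambda S: {u(x) for x in S}                              # ∃_u
u_star = lambda T: {x for x in X if u(x) in T}                      # u*
forall_u = lambda S: {y for y in Y                                  # ∀_u
                      if all(u(x) != y or x in S for x in X)}

for S in powerset(X):
    for T in powerset(Y):
        assert (exists_u(S) <= T) == (S <= u_star(T))   # ∃_u ⊣ u*
        assert (u_star(T) <= S) == (T <= forall_u(S))   # u* ⊣ ∀_u
print("adjunctions verified")
```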
Example 1.17.6. The category Formx of all formulas A, such that x ∈ / FV(A), together
with the usual derivations between them as arrows, is a full subcategory of Form.
Definition 1.17.7. The functors ∃x : Form → Form and ∀x : Form → Form, defined in Definition 1.14.8, can be written as functors of the form ∃x : Form → Formx and ∀x : Form → Formx, since x ∉ FV(∃x A) and x ∉ FV(∀x A). Let the functor Wx : Formx → Form be defined by (Wx)₀(A) = A, for every A ∈ Formx.
Theorem 1.17.8. The following adjunctions hold:
(i) ∃x a Wx .
(ii) Wx a ∀x .
Proof. Let A ∈ Form such that x ∉ FV(A), and C ∈ Form.
(i) We show that (∃x)₀(C) ≤ A ⇔ C ≤ (Wx)₀(A), i.e., there is an arrow M : ∃x C → A if and only if there is an arrow N : C → A. Suppose first that M : ∃x C → A. We find a derivation N of A from the assumption C, as follows:
[Derivation N: from w : C infer ∃x C by ∃⁺ (x ∈ Term); discharging v : ∃x C in M yields ∃x C → A by →⁺ v; conclude A by →⁻.]
For the converse, we suppose that there is a derivation N of A with assumption C, and we
find a derivation M of A with assumption ∃x C, as follows:
[Derivation M: from w : ∃x C and the derivation N of A from u : C, conclude A by ∃⁻ x, u (the variable condition holds, as x ∉ FV(A)).]
(ii) Let M be a derivation of C with assumption A. We find a derivation of ∀x C with assumption A, as follows:

[Derivation: discharging v : A in M yields A → C by →⁺ v; applying it to a : A by →⁻ gives C; conclude ∀x C by ∀⁺ x.]
The variable condition in ∀+ x is satisfied: the only open assumption in the derivation of C is
A, and by our hypothesis x ∈ / FV(A). For the converse, let N be a derivation of ∀x C with
assumption A. We find a derivation M of C with assumption A, as follows:
[Derivation M: discharging u : A in N yields A → ∀x C by →⁺ u; applying it to a : A by →⁻ gives ∀x C; with x ∈ Term conclude C by ∀⁻.]
Next we give one more variation of the previous theorem.
In this chapter we study derivations in intuitionistic and classical logic. We also explore the relation between minimal, intuitionistic and classical logic.
Definition 2.1.1. We define inductively the set DiV (A) of intuitionistic derivations of a
formula A with assumption variables in V , where V is a finite subset of Aform (see Defini-
tion 1.9.1). If V = ∅, we write Di (A). The introduction-rules for DiV (A) are the introduction-
rules for DV (A), given in Definition 1.9.2, together with the following rule:
(0A) The following tree

o : ⊥
------ 0A
  A

is an element of Di{o : ⊥}(A), for every A ∈ Form \ {⊥}.
Unless otherwise stated, a derivation in DiV (A) is denoted by Mi . If V = {u : A}, W = {v : A},
Mi ∈ DiV (B) and Ni ∈ DiW (B), we define Mi = Ni . A formula A is derivable in intuitionistic
logic, written ⊢i A, if there is an intuitionistic derivation of A without free assumptions, i.e., if Di(A) is inhabited. The category of intuitionistic formulas Formi has objects the formulas in Form, and an arrow from A to B is
an intuitionistic derivation of B from an assumption of the form u : A, and we write
Mi : A → B :⇔ Mi ∈ Di{u : A} (B).
The induced preorder and isomorphism of the thin category Formi are given by
A ≤i B :⇔ ∃Mi (Mi : A → B),
A ≅i B :⇔ A ≤i B & B ≤i A.
[o : ⊥]
-------- 0A
    A
--------- →⁺ o
 ⊥ → A
The addition of the rule (0A) in the inductive definition of DiV(A) has an immediate consequence for the category of intuitionistic formulas Formi and for the preorder ≤i.
Proposition 2.1.3. The category of intuitionistic formulas Formi has an initial element,
and the preorder ≤i has a minimal element.
Proposition 2.1.4. If A ∈ Form and V ⊆fin Aform, then DV (A) ⊆ DiV (A).
Proof. We use induction on DV(A). Let P(M) :⇔ M ∈ DiV(A), a formula of our metatheory on DV(A). The cumbersome-to-write induction principle Ind_{DV(A)} gives us the required inclusion. E.g., according to the clause of Ind_{DV(A)} with respect to the rule (→⁺), if M ∈ D{u : A}(B) is such that M ∈ Di{u : A}(B), then the tree obtained from M with conclusion A → B is in Di(A → B), as we apply the rule (→⁺ u) of the inductive definition of DiV(A → B).
The category Formi, as the category Form, has a terminal object ⊤, which is ≤-maximal, and hence, by Corollary 2.1.5, also ≤i-maximal. As there are many ≤i-maximal elements,
there are many ≤i-minimal elements, although isomorphic to each other. E.g., A ∧ ¬A ≅i ⊥, for every A ∈ Form. The inequality ⊥ ≤i A ∧ ¬A follows from 0_{A∧¬A}, while the inequality A ∧ ¬A ≤i ⊥ follows immediately (i.e., by Corollary 2.1.5(iv)) from the minimal inequality A ∧ ¬A ≤ ⊥, as the following tree is a derivation of ⊥ from A ∧ ¬A in minimal logic:

[Derivation: from w : A ∧ ¬A infer A and ¬A by ∧⁻ a, v, and conclude ⊥ by →⁻.]
One could have used a weaker notion of intuitionistic derivability, by not accepting all
instances of the ex-falso-quodlibet. One could have defined `i A :⇔ Efq ` A, where Efq is the
set of formulas defined next.
Definition 2.1.6. Let Efq be the following set of formulas:
Efq = {∀x1 ,...,xn (⊥ → R(x1 , . . . , xn )) | n ∈ N+ , R ∈ Rel(n) , x1 , . . . , xn ∈ Var}
∪ {⊥ → R | R ∈ Rel(0) \ {⊥}}.
Theorem 2.1.7. ∀A∈Form Efq ` (⊥ → A) .
Proof. If A = R(t₁, . . . , tₙ), where n ∈ N⁺, R ∈ Rel(n) and t₁, . . . , tₙ ∈ Term, the following is a derivation of ⊥ → R(t₁, . . . , tₙ) from Efq:
∀_{x₁,...,xₙ}(⊥ → R(x₁, . . . , xₙ))    t₁ ∈ Term
--------------------------------------------------- ∀⁻
∀_{x₂,...,xₙ}(⊥ → R(t₁, x₂, . . . , xₙ))    t₂ ∈ Term
--------------------------------------------------- ∀⁻
∀_{x₃,...,xₙ}(⊥ → R(t₁, t₂, x₃, . . . , xₙ))
          .........          .........
∀_{xₙ}(⊥ → R(t₁, . . . , tₙ₋₁, xₙ))    tₙ ∈ Term
--------------------------------------------------- ∀⁻
⊥ → R(t₁, t₂, . . . , tₙ)
If we suppose that Efq ` (⊥ → A) and Efq ` (⊥ → B) i.e., that there are minimal derivations
M, N of ⊥ → A and ⊥ → B from Efq, respectively, the following are minimal derivations of
⊥ → A → B, ⊥ → A ∨ B, ⊥ → A ∧ B, ⊥ → ∀x A and ⊥ → ∃x A from Efq, respectively:
[Derivation: from N we get ⊥ → B; applying it to v : ⊥ by →⁻ gives B; →⁺ u (with u : A not used) gives A → B, and →⁺ v gives ⊥ → (A → B).]
[Derivation: from M, N and u : ⊥ we get A and B by →⁻; ∧⁺ gives A ∧ B, and →⁺ u gives ⊥ → (A ∧ B).]
44 CHAPTER 2. DERIVATIONS IN INTUITIONISTIC AND CLASSICAL LOGIC
[Derivations: from M and u : ⊥ we get A by →⁻; then ∨⁺₀ gives A ∨ B and →⁺ u gives ⊥ → (A ∨ B); similarly, ∀⁺ x gives ∀x A and →⁺ u gives ⊥ → ∀x A.]
[Derivation: from M and u : ⊥ we get A by →⁻; with x ∈ Var, ∃⁺ gives ∃x A, and →⁺ u gives ⊥ → ∃x A.]
In the above use of the ∀⁺ x-rule the variable condition is satisfied, as x ∉ FV(⊥) = FV(S) = ∅, for every S ∈ Efq.
Proposition 2.1.8. Let the functor EFQ : Form → Form, where EFQ₀(A) = ⊥ → A, for every A ∈ Form (of which already established functor is EFQ a special case?).
(i) EFQ preserves products, i.e., EFQ₀(A ∧ B) ≅ EFQ₀(A) ∧ EFQ₀(B), for every A, B ∈ Form.
(ii) EFQ₀(A ∨ B) ≥ EFQ₀(A) ∨ EFQ₀(B), for every A, B ∈ Form.
(iii) EFQ preserves the terminal object ⊤, i.e., EFQ₀(⊤) ≅ ⊤.
(iv) If EFQii : Formi → Formi is also defined by EFQii₀(A) = ⊥ → A, for every A ∈ Form, then EFQii does not preserve the initial element ⊥, i.e., it is not the case that EFQii₀(⊥) ≅i ⊥.
Proof. Exercise.
Given that there is no minimal derivation of ⊥ → A, for every A ∈ Form, is the rule
EFQim : Formi → Form, defined as above, a functor (exercise)? Note that the extension-rule,
the cut-rule and the deduction theorem for minimal logic (see section 1.12) are easily extended
to intuitionistic logic. Next we describe the functors associated to negation.
Proposition 2.1.9. Let IdForm be the identity functor on Form (see Example 1.14.2) and let
¬ : Form → Form, defined by ¬0 (A) = ¬A, for every A ∈ Form. For every n ∈ N we define
        IdForm       , n = 0
  ¬ⁿ =  ¬            , n = 1
        ¬ ◦ ¬ⁿ⁻¹     , n > 1.

(i) ¬²ⁿ⁺¹ is a contravariant endofunctor, for every n ∈ N.
(ii) ¬²ⁿ is a covariant endofunctor, for every n ∈ N.
(iii) The endofunctor ¬²ⁿ⁺¹ is isomorphic to ¬ in Fun(Form, Form), for every n ∈ N⁺.
Proof. Exercise.
If ¬ᵢⁿ : Formi → Formi is defined similarly, for every n ∈ N, then it also satisfies Proposition 2.1.9(i)-(iii). The corresponding negation endofunctor ¬c on the category of classical formulas, defined in the next section, satisfies extra properties. E.g., the endofunctor ¬c²ⁿ is isomorphic to ¬c²ⁿ⁻², and hence it is isomorphic to IdForm, for every n ≥ 2.
2.2. DERIVATIONS IN CLASSICAL LOGIC 45
Mc : A → B :⇔ Mc ∈ Dc{u : A} (B).
The induced preorder and isomorphism of the thin category Formc are given by
A ≤c B :⇔ ∃Mc (Mc : A → B),
A ≅c B :⇔ A ≤c B & B ≤c A.
The formula ¬¬⊥ → ⊥ is not considered in the rule (DNEA)¹, as a derivation of ⊥ from ¬¬⊥ already exists in minimal logic:

[Derivation: from o : ⊥, →⁺ o gives ⊥ → ⊥; applying v : (⊥ → ⊥) → ⊥ to it by →⁻ gives ⊥; →⁺ v then gives ((⊥ → ⊥) → ⊥) → ⊥, i.e., ¬¬⊥ → ⊥.]
¹For simplicity we use the same notation for the tree DNEA and for the formula DNEA = ¬¬A → A. It will always be clear from the context what the notation DNEA refers to.
Definition 2.2.2. We denote the above minimal derivation of ⊥ from ¬¬⊥ by DNE⊥ .
[v : ¬¬A]
  DNEA
    A
----------- →⁺ v
 ¬¬A → A
Proposition 2.2.3. If A ∈ Form and V ⊆fin Aform, there is a unique, canonical embedding (·)^c : DiV(A) → DcV(A).
Proof. We use recursion on DiV (A). As the introduction rules of DiV (A) differ from the
introduction rules of DcV (A) only with respect to the rule (0A ) ∈ Di{o : ⊥} (A), it suffices to
describe the rule (0A )c ∈ Dc{o : ⊥} (A). If A 6= ⊥, let the following derivation
[Derivation: from o : ⊥, the rule →⁺ w (discharging the unused assumption w : ¬A) yields ¬¬A; the tree DNEA with →⁺ u yields ¬¬A → A; conclude A by →⁻.]
be the derivation (0A)^c, which is clearly in Dc{o : ⊥}(A). For all the remaining introduction rules of DiV(A) the embedding Mi ↦ Mi^c is defined by the identity rule.
Proof. All cases follow immediately from Proposition 2.2.3. Notice that for the proof of (iii)
the preservation of the unit arrow 1A follows immediately from the definition of the canonical
embedding c . As we use the identity rule in its definition for all introduction rules of DiV (A),
other than (0A ), we get (1A )c = 1A .
[Diagram: the triangle DV(A) → DiV(A) → DcV(A), with the embeddings given by identity rules and by (·)^c.]

[Diagram: the triangle of functors Idmi : Form → Formi, Idic : Formi → Formc, and Idmc : Form → Formc,]
where Idmc = Idic ◦ Idmi : Form → Formc is given by the identity rules, and
A ≤ B ⇒ A ≤i B ⇒ A ≤c B,
A∼
=B⇒A∼
=i B ⇒ A ∼
=c B.
Proposition 2.2.5. If A ∈ Form, let PEMA = A ∨ ¬A.
(i) ` ¬¬PEMA .
(ii) `i PEMA → DNEA .
(iii) ` DNEPEMA → PEMA , hence `c PEMA .
Proof. Exercise.
The addition of the rule (DNEA) in the inductive definition of DcV(A) has the following consequence for the preorder ≤c. Note that because of the above implications between the various preorders and congruences, we usually use the subscript i (or none), if an intuitionistic (or minimal) preorder or congruence holds in Formc.
Let `∗c A ⇔ Dne ` A and Γ `∗c A ⇔ Γ ∪ Dne ` A. We denote a derivation Γ `∗c A by Mc∗ .
Clearly, `∗c A ⇒ `c A, but not conversely. Next we see which part of the rule (DNEA ) is
captured by the weaker classical derivability `∗c . For that we need a lemma and a definition.
Proof. Exercise.
Definition 2.2.9. The formulas Form* without ∨, ∃ are defined inductively by the rules:

P ∈ Prime        A, B ∈ Form*                    A ∈ Form*, x ∈ Var
------------ ,   -------------------------- ,    -------------------
P ∈ Form*        (A → B), (A ∧ B) ∈ Form*        ∀x A ∈ Form*
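The inductive clauses translate directly into a recursive membership test. A Python sketch (the tuple encoding of formulas is an ad hoc choice, not the notes' official syntax):

```python
# A sketch of the inductive definition of Form*: formulas (here a tiny AST of
# tuples) belong to Form* exactly when they contain no ∨ and no ∃.
def in_form_star(A):
    op = A[0]
    if op == "prime":                     # prime formulas, including ⊥
        return True
    if op in ("->", "and"):
        return in_form_star(A[1]) and in_form_star(A[2])
    if op == "forall":                    # ("forall", x, body)
        return in_form_star(A[2])
    return False                          # "or" / "exists" are excluded

P, Q = ("prime", "P"), ("prime", "Q")
assert in_form_star(("forall", "x", ("->", P, ("and", P, Q))))
assert not in_form_star(("->", P, ("or", P, Q)))
assert not in_form_star(("exists", "x", P))
```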
The pair of coprojections i_A : A → A ∨ ¬A and i_{¬A} : ¬A → A ∨ ¬A, together with the isomorphism A ∨ ¬A ≅c ⊤,
2.3. MONOS, EPIS AND SUBOBJECTS 49
expresses that in Formc the objects A and ¬A are complemented subobjects of ⊤.
[Diagram: arrows g, h : C → A and f : A → B, with the composites f ◦ g, f ◦ h : C → B.]
[Diagram: arrows f : A → B and g, h : B → C, with the composites g ◦ f, h ◦ f : A → C.]
[Diagram: an arrow f : B → C with i^A_C ◦ f = i^A_B, i.e., a commutative triangle over A.]
If (B, i^A_B) is an object in SubC(A), its unit arrow is the unit 1_B in C₁, since the following diagram is trivially commutative:

[Diagram: 1_B : B → B with i^A_B ◦ 1_B = i^A_B.]
If f : (B, i^A_B) → (C, i^A_C) and g : (C, i^A_C) → (D, i^A_D) in SubC(A), their composition in SubC(A) is the composition g ◦ f in C, as the commutativity of the following inner diagrams implies the commutativity of the following outer diagram:
[Diagram: f : B → C and g : C → D over A,]

i^A_D ◦ (g ◦ f) = (i^A_D ◦ g) ◦ f = i^A_C ◦ f = i^A_B.
Notice that in the above definition the abstract injectivity (surjectivity) of arrows is expressed without reference to the membership relation ∈ of sets. Moreover, the notion of a subobject is the abstract, categorical version of the notion of subset, and the category of subobjects SubC(A) of A in C is the abstract, categorical version of the set of subsets of a set.
[Diagrams: g′, h′ : C → A with f : A → B and g : B → A such that g ◦ f = 1_A; and g″, h″ : B → D with f : A → B and g : B → A such that f ◦ g = 1_B.]

g ◦ (f ◦ g′) = g ◦ (f ◦ h′) ⇒ (g ◦ f) ◦ g′ = (g ◦ f) ◦ h′
                            ⇒ 1_A ◦ g′ = 1_A ◦ h′
                            ⇒ g′ = h′.

(g″ ◦ f) ◦ g = (h″ ◦ f) ◦ g ⇒ g″ ◦ (f ◦ g) = h″ ◦ (f ◦ g)
                            ⇒ g″ ◦ 1_B = h″ ◦ 1_B
                            ⇒ g″ = h″.
i^A_C ◦ (f ◦ g) = i^A_C ◦ (f ◦ h) ⇒ (i^A_C ◦ f) ◦ g = (i^A_C ◦ f) ◦ h
                                  ⇒ i^A_B ◦ g = i^A_B ◦ h
                                  ⇒ g = h.

(v) Let f, g : B → C such that the following two diagrams formed by the arrows i^A_B, i^A_C
[Diagram: f, g : B → C over A, with i^A_B and i^A_C,]

commute. Then the third diagram, formed by the arrows f, g, also commutes, since i^A_B = i^A_C ◦ g = i^A_C ◦ f ⇒ g = f. Case (vi) is an exercise.
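In Set this kind of cancellation argument is easy to test by brute force: an injective function is left-cancellable, i.e., a mono. A Python sketch (the finite sets and the map `f` are invented test data):

```python
# In Set, monos are exactly the injective functions: if f is injective,
# then f ∘ g = f ∘ h implies g = h.  Brute-force check over all g, h : B -> A.
from itertools import product

A, C = [0, 1, 2], [0, 1, 2, 3]
B = [0, 1]
f = lambda a: a + 1                     # an injective map A -> C

for g_vals, h_vals in product(product(A, repeat=len(B)), repeat=2):
    g = lambda b, v=g_vals: v[b]        # g : B -> A, tabulated by g_vals
    h = lambda b, v=h_vals: v[b]        # h : B -> A, tabulated by h_vals
    if all(f(g(b)) == f(h(b)) for b in B):
        assert g_vals == h_vals         # left cancellation forces g = h
print("injective => mono verified")
```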
[Diagram: arrows (M, N) : A → B and (M′, N′) : B → C in Form^grp.]

Moreover, 1grp_A = (1_A, 1_A), and (M, N) = (K, L) ⇔ M = K & N = L. Similarly we define the groupoid categories Formgrp_i and Formgrp_c.

Fgrp₁(M : A → B, N : B → A) : F₀(A) → F₀(B),
Fgrp₁(M, N) = (F₁(M) : F₀(A) → F₀(B), F₁(N) : F₀(B) → F₀(A)),

[Diagram: the pair (M, N) between A and B, and its image under F between F₀(A) and F₀(B).]

Fgrp₁(1grp_A) = Fgrp₁(1_A, 1_A) = (F₁(1_A), F₁(1_A)) = (1_{F₀(A)}, 1_{F₀(A)}) = 1grp_{F₀(A)},
2.5. THE NEGATIVE FRAGMENT 53
= (F₁(M′ ◦ M), F₁(N ◦ N′))
= (F₁(M′) ◦ F₁(M), F₁(N) ◦ F₁(N′))
= (F₁(M′), F₁(N′)) ◦ (F₁(M), F₁(N))
DNE1 M : A → B, N : B → A = (M 0 , N 0 ),
Proof. Exercise.
PEM1 M : A → B, N : B → A = (M 0 , N 0 ),
Proof. Exercise.
Definition 2.5.1. The negative formulas Form⁻ of Form, or the negative fragment of Form, are defined by the following inductive rules:

⊥ ∈ Form⁻,    P ∈ Prime ⇒ ¬P ∈ Form⁻,    A, B ∈ Form⁻ ⇒ A ◦ B ∈ Form⁻,    A ∈ Form⁻ ⇒ ∀x A ∈ Form⁻,

where ◦ ∈ {→, ∧}. To the definition of Form⁻ corresponds the obvious induction principle. The negative fragment Form⁻ of Form is the corresponding full subcategory of negative formulas. The negative fragments Form⁻_i and Form⁻_c are defined similarly.
(ii) Let R ∈ Rel(n). If n > 1 and t₁, . . . , tₙ ∈ Term, then R(t₁, . . . , tₙ) ∈ Form* \ Form⁻. If n = 0 and R ≠ ⊥, then R ∈ Form* \ Form⁻.
Proof. (i) By induction on Form (exercise). (ii) Since R ∈ Prime, we get R ∈ Form*. If R ∈ Form⁻, then R is either ⊥, or of the form P → ⊥, for some P ∈ Prime, or of the form A ◦ B, or of the form ∀x A, for some A, B ∈ Form⁻. In all these cases we get a contradiction.
Proposition 2.5.3. ∀A∈Form− ` ¬¬A → A .
D = (¬¬A → A) → ¬¬∀x A → A,

[Derivation: from M : D and K : ¬¬A → A, →⁻ gives ¬¬∀x A → A; applying it to u : ¬¬∀x A gives A; ∀⁺ x gives ∀x A, and →⁺ u gives ¬¬∀x A → ∀x A.]
The variable condition is trivially satisfied in the previous use of the rule ∀+ x.
A ∨̃ B = ¬A → ¬B → ⊥   &   ∃̃x A = ¬∀x ¬A.
2.6. WEAK DISJUNCTION AND WEAK EXISTENCE 55
Proof. Exercise. For the proof of (x) and (xi) we only mention the following. By (i) Theorem 2.2.10 implies the classical derivability of the double-negation-elimination of A ∨̃ B and ∃̃x A, if A, B ∈ Form*. By (ii) Proposition 2.5.3 implies the derivability of the double-negation-elimination of A ∨̃ B and ∃̃x A, if A, B ∈ Form⁻. Using Brouwer's double-negation-elimination of a negated formula (Proposition 1.11.1(iii)), we derive these eliminations in minimal logic.
(i) (∃̃x A → B) → ∀x (A → B), if x ∉ FV(B).
(ii) (¬¬B → B) → ∀x (A → B) → ∃̃x A → B, if x ∉ FV(B).
(iii) (⊥ → B(c)) → (A → ∃̃x B) → ∃̃x (A → B), if x ∉ FV(A).
(iv) ∃̃x (A → B) → A → ∃̃x B, if x ∉ FV(A).
[Derivation: from u₁ : ∀x ¬A we get ¬A, and with w : A we get ⊥; →⁺ u₁ gives ¬∀x ¬A = ∃̃x A; applying ∃̃x A → B gives B; →⁺ w gives A → B, and ∀⁺ x gives ∀x (A → B).]
[Derivation: from ∀x (A → B) and u₁ : A we get B, then ⊥ with u₂ : ¬B; →⁺ u₁ gives ¬A, hence ∀x ¬A by ∀⁺ x, and with ¬∀x ¬A we get ⊥; →⁺ u₂ gives ¬¬B, and ¬¬B → B yields B.]
[Derivation: from ∀x ¬B we get ¬B, and from u₁ : A → B and A we get B, hence ⊥ by →⁻; →⁺ u₁ gives ¬(A → B), and ∀⁺ x gives ∀x ¬(A → B); with ∃̃x (A → B) = ¬∀x ¬(A → B), →⁻ gives ⊥.]
(ii) ∀x (¬¬A → A) → (∀x A → B) → ∃̃x (A → B), if x ∉ FV(B).
Proof. If Ax, Ay stand for A(x), A(y), respectively, we get the following derivation M of (i):
[Derivation M: using ∀y Ay, ∀x (⊥ → Ax), u₁ : ¬Ax, u₂ : Ax, ∀x Ax → B and ∀x ¬(Ax → B), one derives ⊥ from u₁, and →⁺ u₁ yields ¬¬Ax,]
where the last →+ -rules are not included. Using this derivation M we obtain
[Derivation: from ∀x (¬¬Ax → Ax) and M we get Ax by →⁻; ∀⁺ x gives ∀x Ax, hence B from ∀x Ax → B; from ∀x ¬(Ax → B) we get ¬(Ax → B), and →⁻ gives ⊥.]
Note that the assumption ∀x (¬¬A → A) in (ii) is used to derive the assumption ∀x (⊥ → A)
in (i), since ` (¬¬A → A) → ⊥ → A (see the proof of Proposition 2.2.3).
Corollary 2.6.5. If R ∈ Rel(1), then ⊢*c ∃̃x (R(x) → ∀x R(x)).
The formula ∃̃x (R(x) → ∀x R(x)) is known as the drinker formula, and can be read as "in every non-empty bar there is a person such that, if this person drinks, then everybody drinks".
The next proposition on weak disjunction is similar to Proposition 2.6.3.
(A ∨̃ B → C) → (A → C) ∧ (B → C),
(¬¬C → C) → (A → C) → (B → C) → A ∨̃ B → C,
(⊥ → B) → (A → B ∨̃ C) → (A → B) ∨̃ (A → C),
(A → B) ∨̃ (A → C) → A → B ∨̃ C,
(¬¬C → C) → (A → C) ∨̃ (B → C) → A → B → C,
(⊥ → C) → (A → B → C) → (A → C) ∨̃ (B → C).
Proof. The derivations of the second and the final formula are

[Derivation: from A → C and u₂ : A we get C, then ⊥ with u₁ : ¬C, and →⁺ u₂ gives ¬A; similarly from B → C and u₃ : B, →⁺ u₃ gives ¬B; hence A ∨̃ B = ¬A → ¬B → ⊥ yields ⊥ by →⁻ twice; →⁺ u₁ gives ¬¬C, and ¬¬C → C yields C.]

[Derivation: from A → B → C, u₁ : A and u₂ : B we get C; →⁺ u₁ gives A → C, hence ⊥ with ¬(A → C), and ⊥ → C yields C; →⁺ u₂ gives B → C, and ¬(B → C) yields ⊥ by →⁻.]
The weak disjunction and the weak existential quantifier satisfy the same axioms as the strong variants, if one restricts the conclusion of the elimination axioms to formulas in Form*:

⊢ A → A ∨̃ B,
⊢ B → A ∨̃ B,
⊢*c A ∨̃ B → (A → C) → (B → C) → C   (C ∈ Form*),
⊢ A → ∃̃x A,
⊢*c ∃̃x A → ∀x (A → B) → B   (x ∉ FV(B), B ∈ Form*).
[Derivation: from ∀x (A → B) and u₂ : A we get B, then ⊥ with u₁ : ¬B; →⁺ u₂ gives ¬A, hence ∀x ¬A, and with ¬∀x ¬A = ∃̃x A we get ⊥; →⁺ u₁ gives ¬¬B, and ¬¬B → B yields B.]
Proof. Exercise.
Corollary 2.7.3. Let F : Form → Form, and ¬ : Form^op → Form, defined in Proposition 2.1.9. We define the following endofunctors on Form:

         F             , n = 0
  ¬ⁿF =  ¬ ◦ F         , n = 1
         ¬ ◦ ¬ⁿ⁻¹F     , n > 1.

(i) ¬²ⁿ⁺¹F is a contravariant endofunctor, for every n ∈ N.
(ii) ¬²ⁿF is a covariant endofunctor, for every n ∈ N.
(iii) The endofunctor ¬²ⁿ⁺¹F is isomorphic to ¬F in Fun(Form, Form), for every n ∈ N.
(iv) If F : Formc → Formc, then ¬²ⁿF is isomorphic to F in Fun(Formc, Formc), for every n ∈ N.
2.7. LOGICAL OPERATIONS ON FUNCTORS 59
Proposition 2.7.4. If F, G : Form → Form and B ∈ Form, the following are functors.
(i) F × G : Form → Form × Form, where (F × G)₀(A) = (F₀(A), G₀(A)), for every A ∈ Form, and (F × G)₁(M : A → B) : (F₀(A), G₀(A)) → (F₀(B), G₀(B)) is given by

(F × G)₁(M) = (F₁(M) : F₀(A) → F₀(B), G₁(M) : G₀(A) → G₀(B)).

(ii) F ∧ G : Form → Form, where (F ∧ G)₀(A) = F₀(A) ∧ G₀(A), for every A ∈ Form.
(iii) F ∨ G : Form → Form, where (F ∨ G)₀(A) = F₀(A) ∨ G₀(A), for every A ∈ Form.
(iv) B → F : Form → Form, where (B → F)₀(A) = B → F₀(A), for every A ∈ Form.
(v) ∃x F : Form → Form, where (∃x F)₀(A) = ∃x F₀(A), for every A ∈ Form.
(vi) ∀x F : Form → Form, where (∀x F)₀(A) = ∀x F₀(A), for every A ∈ Form.
Proof. (i) If A, B, C ∈ Form, M : A → B and N : B → C, then by the definition of the unit arrow and composition in the product category Form × Form we get

(F × G)₁(1_A) = (F₁(1_A), G₁(1_A)) = (1_{F₀(A)}, 1_{G₀(A)}) = 1_{(F₀(A),G₀(A))} = 1_{(F×G)₀(A)},

(F × G)₁(N ◦ M) = (F₁(N ◦ M), G₁(N ◦ M))
               = (F₁(N) ◦ F₁(M), G₁(N) ◦ G₁(M))
               = (F₁(N), G₁(N)) ◦ (F₁(M), G₁(M))
               = (F × G)₁(N) ◦ (F × G)₁(M).
(ii)-(vi) These are functors, as compositions of the functors in Definitions 1.14.8 and 1.14.9 with F × G or F, respectively: F ∧ G = ⋀ ◦ (F × G), F ∨ G = ⋁ ◦ (F × G), B → F = (→_B) ◦ F, ∃x F = ∃x ◦ F, and ∀x F = ∀x ◦ F, as shown by the corresponding commutative triangles through Form × Form and Form.
If F, G : Formop → Form are contravariant functors, all results in Proposition 2.7.4 are
extended accordingly. If F, G : Form × Form → Form, or more generally, F, G : Formn →
Form, where n > 1, the corresponding functors F ∧ G and F ∨ G are defined similarly.
Proposition 2.7.5. Let F : Form × Form → Form be a functor and G₀ : Form × Form → Form a function, such that ⊢ F₀(A, B) ↔ G₀(A, B), for every A, B ∈ Form. Then G₀ generates a functor G : Form × Form → Form.
Proof. If (M, N) : (A, B) → (A′, B′) in Form × Form, then we define G₁(M, N) : G₀(A, B) → G₀(A′, B′) to be the arrow L_{A′,B′} ◦ F₁(M, N) ◦ K_{A,B}

[Diagram: the square with top arrow F₁(M, N) : F₀(A, B) → F₀(A′, B′), vertical arrows K_{A,B}, L_{A′,B′}, and bottom arrow G₁(M, N),]

where K_{A,B} : G₀(A, B) → F₀(A, B) and L_{A′,B′} : F₀(A′, B′) → G₀(A′, B′) are found by the hypotheses ⊢ F₀(A, B) ↔ G₀(A, B) and ⊢ F₀(A′, B′) ↔ G₀(A′, B′).
A similar result holds, if F : Formn → Form and G0 : Formn → Form, for every n > 0. All
results of this section are extended naturally to functors F, G : C → Form (in Proposition 2.7.4)
and F : C × D → Form (in Proposition 2.7.5), where C and D are categories.
(→_B)₁(η : F ⇒ G) : (B → F) ⇒ (B → G),

and ∀x : Fun(Form, Form) → Fun(Form, Form), with [(∀x)₁(η)]_A = (∀x)₁(η_A), where η_A : F₀(A) → G₀(A), and (∀x)₁(η_A) is defined in Definition 1.14.8(iv).
(iv) ∃x : Fun(Form, Form) → Fun(Form, Form), defined by

(∃x)₀(F) = ∃x F,
(∃x)₁(η : F ⇒ G) : ∃x F ⇒ ∃x G,
[(∃x)₁(η)]_A = (∃x)₁(η_A) : ∃x F₀(A) → ∃x G₀(A),   A ∈ Form,

where η_A : F₀(A) → G₀(A), and (∃x)₁(η_A) is defined in Definition 1.14.8(v).
Other functors on formulas, like ⋀_B and ⋁_B, induce corresponding functors on functors on formulas. The preorder on Form also induces a preorder on Fun(Form, Form).
⋀_B ◦ F ≤ G ⇔ F ≤ →_B ◦ G.
Definition 2.8.4. The functors ∀x and ∃x in Definition 2.8.1 can be seen as functors
We can show that P̃EM is a functor using also Propositions 2.7.2 and 2.7.5 (exercise). The fact that the rule P̃EM_A = A ∨̃ ¬A defines an endofunctor on Form follows from the trivial fact that ⊢ A ∨̃ ¬A, for every A ∈ Form.
The Gödel-Gentzen translation is the rule

(·)^g : Form → Form,    A ↦ A^g,   A ∈ Form,

defined by the clauses

⊥^g = ⊥,
R^g = ¬¬R,   R ∈ Rel(0) \ {⊥},
(R(t₁, . . . , tₙ))^g = ¬¬R(t₁, . . . , tₙ),   R ∈ Rel(n), n ∈ N⁺, t₁, . . . , tₙ ∈ Term,
(A ◦ B)^g = A^g ◦ B^g,   ◦ ∈ {→, ∧},
(∀x A)^g = ∀x A^g,
(A ∨ B)^g = A^g ∨̃ B^g = ¬A^g → ¬B^g → ⊥,
(∃x A)^g = ∃̃x A^g = ¬∀x ¬A^g.
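The clauses of the translation are directly implementable. A Python sketch on an ad hoc tuple encoding of formulas (an assumption of this sketch, not the notes' official syntax; `neg` abbreviates A → ⊥):

```python
# A sketch of the Gödel-Gentzen translation: prime formulas other than ⊥ are
# double-negated, ∨ and ∃ are replaced by their weak versions
# ¬A → ¬B → ⊥ and ¬∀x ¬A, and the rest is translated componentwise.
BOT = ("prime", "bot")
neg = lambda A: ("->", A, BOT)          # ¬A abbreviates A → ⊥

def g(A):
    op = A[0]
    if A == BOT:
        return BOT
    if op == "prime":
        return neg(neg(A))
    if op in ("->", "and"):
        return (op, g(A[1]), g(A[2]))
    if op == "forall":
        return ("forall", A[1], g(A[2]))
    if op == "or":                      # (A ∨ B)^g = ¬A^g → ¬B^g → ⊥
        return ("->", neg(g(A[1])), ("->", neg(g(A[2])), BOT))
    if op == "exists":                  # (∃x A)^g = ¬∀x ¬A^g
        return neg(("forall", A[1], neg(g(A[2]))))

P, Q = ("prime", "P"), ("prime", "Q")
assert g(("or", P, Q)) == ("->", neg(neg(neg(P))), ("->", neg(neg(neg(Q))), BOT))
assert g(("forall", "x", P)) == ("forall", "x", neg(neg(P)))
assert g(BOT) == BOT
```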
(ii) Let R ∈ Rel(n). If n > 1 and t₁, . . . , tₙ ∈ Term, then R(t₁, . . . , tₙ) → ⊥ ∈ Form⁻ \ Formg. If n = 0 and R ≠ ⊥, then R → ⊥ ∈ Form⁻ \ Formg.
(iii) The Gödel-Gentzen translation is not an injection.
Proof. Exercise.
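To make the recursive clauses of the translation concrete, here is a small Python sketch of A ↦ Ag on a toy formula syntax; the AST encoding and constructor names are our own illustrative choices, not the notes':

```python
from dataclasses import dataclass

# Minimal formula AST: 'bot', 'rel', '->', '&', 'v', 'all', 'ex'.
@dataclass(frozen=True)
class F:
    op: str
    args: tuple = ()

BOT = F('bot')
def Rel(name, *ts): return F('rel', (name, ts))
def Imp(a, b): return F('->', (a, b))
def And(a, b): return F('&', (a, b))
def Or(a, b):  return F('v', (a, b))
def All(x, a): return F('all', (x, a))
def Ex(x, a):  return F('ex', (x, a))
def Neg(a):    return Imp(a, BOT)          # ¬A := A → ⊥

def g(a: F) -> F:
    """Gödel-Gentzen translation A ↦ Ag."""
    if a.op == 'bot':
        return a                            # ⊥g = ⊥
    if a.op == 'rel':
        return Neg(Neg(a))                  # prime formulas get double-negated
    if a.op in ('->', '&'):
        return F(a.op, (g(a.args[0]), g(a.args[1])))
    if a.op == 'all':
        x, b = a.args
        return All(x, g(b))                 # (∀x A)g = ∀x Ag
    if a.op == 'v':
        # (A ∨ B)g = ¬Ag → ¬Bg → ⊥
        return Imp(Neg(g(a.args[0])), Imp(Neg(g(a.args[1])), BOT))
    if a.op == 'ex':
        x, b = a.args
        return Neg(All(x, Neg(g(b))))       # (∃x A)g = ¬∀x ¬Ag
    raise ValueError(a.op)
```

Since the clauses for ∨ and ∃ unfold into →, ∀ and ⊥ only, the image of g is visibly inside Form∗, in accordance with the remarks above.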
FV((∃x A)g) = FV((∀x (Ag → ⊥)) → ⊥) = FV(Ag) \ {x} = FV(A) \ {x} = FV(∃x A).
(iii) By Definition 1.6.1 we get the following equalities. Let A ∈ Prime. If A = ⊥, then
Frees,x (⊥g ) = Frees,x (⊥) = 1. If A = R ∈ Rel(0) \ {⊥}, or if A = R(t1 , . . . , tn ), then
Frees,x (Ag ) = Frees,x ((A → ⊥) → ⊥) = Frees,x (A) · Frees,x (⊥) · Frees,x (⊥) = Frees,x (A).
Frees,x((∃y A)g) = Frees,x(∀y ¬Ag)
= 0, if x = y or [x ≠ y & y ∈ {y1, . . . , ym}];
  1, if x ≠ y & x ∉ FV(¬Ag) \ {y};
  Frees,x(¬Ag), if x ≠ y & y ∉ {y1, . . . , ym} & x ∈ FV(¬Ag)
= 0, if x = y or [x ≠ y & y ∈ {y1, . . . , ym}];
  1, if x ≠ y & x ∉ FV(Ag) \ {y};
  Frees,x(Ag), if x ≠ y & y ∉ {y1, . . . , ym} & x ∈ FV(Ag)
= 0, if x = y or [x ≠ y & y ∈ {y1, . . . , ym}];
  1, if x ≠ y & x ∉ FV(A) \ {y};
  Frees,x(A), if x ≠ y & y ∉ {y1, . . . , ym} & x ∈ FV(A)
= Frees,x(∃y A).
(iv) Let A ∈ Prime. If A = ⊥, then (⊥[x := s])g = ⊥g = ⊥ = ⊥g[x := s]. If A = R ∈ Rel(0) \ {⊥} and if A = R(t1, . . . , tn), then we get, respectively,
(R[x := s])g = Rg = ¬¬R = (¬¬R)[x := s] = Rg[x := s],
(R(t1, . . . , tn)[x := s])g = (R(t1[x := s], . . . , tn[x := s]))g
= ¬¬R(t1[x := s], . . . , tn[x := s])
= (¬¬R(t1, . . . , tn))[x := s]
= (R(t1, . . . , tn))g[x := s].
If Frees,x(A) = 0 or Frees,x(B) = 0, then Frees,x(A ◦ B) = 0, and hence
((A ◦ B)[x := s])g = (A ◦ B)g = Ag ◦ Bg = (Ag ◦ Bg)[x := s] = (A ◦ B)g[x := s],
since by (iii) we get Frees,x((A ◦ B)g) = Frees,x(A ◦ B) = 0. Suppose next that Frees,x(A) = 1 = Frees,x(B), hence Frees,x(Ag) = 1 = Frees,x(Bg). By the inductive hypotheses we get
((A ◦ B)[x := s])g = (A[x := s] ◦ B[x := s])g
= (A[x := s])g ◦ (B[x := s])g
= Ag[x := s] ◦ Bg[x := s]
= (Ag ◦ Bg)[x := s]
= (A ◦ B)g[x := s].
If Frees,x(A) = 0 or Frees,x(B) = 0, then Frees,x(A ∨ B) = 0, and hence
((A ∨ B)[x := s])g = (A ∨ B)g = (A ∨ B)g[x := s],
since by (iii) we get Frees,x((A ∨ B)g) = Frees,x(A ∨ B) = 0. Suppose next that Frees,x(A) = 1 = Frees,x(B), hence Frees,x(Ag) = 1 = Frees,x(Bg). By the inductive hypotheses we get
((A ∨ B)[x := s])g = (A[x := s] ∨ B[x := s])g
= ¬(A[x := s])g → ¬(B[x := s])g → ⊥
= ¬Ag[x := s] → ¬Bg[x := s] → ⊥
= (Ag ∨̃ Bg)[x := s]
= (A ∨ B)g[x := s].
2.11. THE GÖDEL-GENTZEN FUNCTOR 67
If Frees,x(∀y A) = 0, then
((∀y A)[x := s])g = (∀y A)g = (∀y A)g[x := s].
If Frees,x(∀y A) = 1, then
((∀y A)[x := s])g = (∀y (A[x := s]))g
= ∀y (A[x := s])g
= ∀y (Ag[x := s])
= (∀y Ag)[x := s]
= (∀y A)g[x := s].
If Frees,x(∃y A) = 0, then
((∃y A)[x := s])g = (∃y A)g = (∃y A)g[x := s].
If Frees,x(∃y A) = 1, then
((∃y A)[x := s])g = (∃y (A[x := s]))g
= ¬∀y ¬(A[x := s])g
= ¬∀y ¬(Ag[x := s])
= ¬∀y ((¬Ag)[x := s])
= (¬∀y ¬Ag)[x := s]
= (∃y A)g[x := s].
Because of Proposition 2.5.2(i), the Gödel-Gentzen translation is also called the negative translation. Since Ag ∈ Form∗, by Theorem 2.2.10 we get that ⊢∗c ¬¬Ag → Ag. For the formulas in Form∗ ∩ Formg, though, one gets the minimal derivability of their double-negation elimination.
Proof. Exercise.
Proposition 2.11.1. There is a function dne : Form− → DV (A) such that dne(A) : ¬¬A → A,
for every A ∈ Form− .
68 CHAPTER 2. DERIVATIONS IN INTUITIONISTIC AND CLASSICAL LOGIC
Proof. We use recursion on Form− and we rewrite accordingly the proof of Proposition 2.5.3 (the details are an exercise).
For simplicity we use the symbol of the Gödel-Gentzen translation also for the function that translates classical derivations into minimal ones.
V g = {ug : C g | u : C ∈ V }.
Proof. By recursion2 on DcV (A) we map a classical derivation Mc in DcV (A) to a minimal
derivation Mcg in DV g (Ag ) by mapping each introduction-rule of DcV (A) to an element of
DV g (Ag ).
where, since Ag ∈ Form− and dne : Form− → DV (A), we get dne(Ag) ∈ D{ug : ¬¬Ag =(¬¬A)g}(Ag).
(1A) The one-node derivation a : A of A from the assumption a : A is mapped to the one-node derivation ag : Ag of Ag from the assumption ag : Ag.
(→+ ) If we consider the following left, classical derivation Mc and if we suppose that Ncg is
already defined i.e.,
[u : A] u1 : C1 . . . un : Cn
ug : Ag u1 g : C1 g . . . un g : Cn g
| Nc and | Ncg
B + Bg
A→B → u
[ug : Ag ] u1 g : C1 g . . . un g : Cn g
| Ncg
Bg + g
A → Bg → u
g
as Ag → B g = (A → B)g .
(→− ) If we consider the following classical derivation Mc
u1 : C1 . . . un : Cn v1 : D1 . . . vm : Dm
| Nc | Kc
A→B A
B →−
2
Actually, what we need do here, as in the proof of Proposition 2.11.1 is the following: first we define by
recursion a function in the set of trees of formulas, and then by induction we prove that the value of this
function is a minimal derivation of Ag from assumptions V g . For simplicity, here we perform the two steps
simultaneously.
and if we suppose that Ncg and Kcg have been defined i.e.,
u1 g : C1 g . . . un g : Cn g v1 g : D1 g . . . vm g : Dm g
| Ncg and | Kcg
(A → B)g Ag
we define Mcg to be the following minimal derivation
u1 g : C1 g . . . un g : Cn g v1 g : D1 g . . . vm g : Dm g
| Ncg | Kcg
Ag → B g Ag
Bg →−
(∀+ ) If we consider the following left, classical derivation Mc , and if we suppose that the
minimal derivation Ncg is already defined i.e.,
u1 : C1 . . . un : Cn
u1 g : C1 g . . . un g : Cn g
| Nc and | Ncg
A + Ag
∀x A ∀ x
we define Mcg to be the following minimal derivation
u1 g : C1 g . . . un g : Cn g
| Ncg
Ag +
∀x Ag ∀ x
where the variable condition x ∉ FV(C1g) & . . . & x ∉ FV(Cng) is satisfied, since by Proposition 2.10.4(ii) FV(Ci) = FV(Cig), for every i ∈ {1, . . . , n}, and the variable condition x ∉ FV(C1) & . . . & x ∉ FV(Cn) is satisfied in Nc.
(∀− ) If we consider the following left, classical derivation Mc , and if we suppose that Ncg is
already defined i.e.,
u1 : C1 . . . un : Cn
u1 g : C1 g . . . un g : Cn g
| Nc
and | Ncg
∀x A r ∈ Term −
∀ (∀x A)g = ∀x Ag
A(r)
we define Mcg to be the following minimal derivation
u1 g : C1 g . . . un g : Cn g
| Ncg
∀x Ag r ∈ Term −
∀
A (r) = A(r)g
g
and if we suppose that the following minimal derivations Ncg and Kcg are already defined i.e.,
u1 g : C1 g . . . un g : Cn g v1 g : D1 g . . . vm g : Dm g
| Ncg and | Kcg
Ag Bg
we define Mcg to be the following minimal derivation
u1 g : C1 g . . . un g : Cn g v1 g : D1 g . . . vm g : Dm g
| Ncg | Kcg
Ag Bg +
g g g ∧
A ∧ B = (A ∧ B)
u1 : C1 . . . un : Cn [u : A] [v : B]
| Nc | Kc
A∧B C −
∧ u, v
C
and if we suppose that the minimal derivations Ncg and Kcg are already defined i.e.,
u1 g : C1 g . . . un g : Cn g ug : Ag v g : B g
| Ncg and | Kcg
(A ∧ B)g Cg
u1 g : C1 g . . . un g : Cn g [ug : Ag ] [v g : B g ]
| Ncg | Kcg
Ag ∧ B g Cg − g g
∧ u ,v
Cg
(∨+0) If we consider the following left, classical derivation Mc, and if we suppose that the minimal derivation Ncg is already defined i.e.,
u1 : C1 . . . un : Cn
u1 g : C1 g . . . un g : Cn g
| Nc and | Ncg
A ∨+ Ag
A∨B 0
we define Mcg to be the following minimal derivation of (A ∨ B)g = ¬Ag → ¬B g → ⊥
u1 g : C1 g . . . un g : Cn g
| Ncg
u : Ag → ⊥ Ag
⊥ →−
+ g
¬B g → ⊥ → v : +¬B
¬Ag → ¬B g → ⊥ → u
For the rule ∨+1 we proceed similarly.
w: Γ [u : A] w0 : ∆ [v : B] w00 : E
| Nc | Kc | Lc
A∨B C C −
∨ u, v.
C
We suppose that the minimal derivations Ncg , Kcg and Lgc are already defined i.e.,
w g : Γg ug : Ag w0 g : ∆g v g : B g w00 g : E g
| Ncg | Kcg | Lgc
(A ∨ B)g Cg Cg.
˜ B → C.
D(A, B, C) = (¬¬C → C) → (A → C) → (B → C) → A ∨
D0 (Ag , B g , C g ) = (Ag → C g ) → (B g → C g ) → Ag ∨
˜ Bg → C g ,
D00 (Ag , B g , C g ) = (B g → C g ) → Ag ∨
˜ Bg → C g ,
In the above definition of Mcg we write all intermediate derivations as values of appropriate
functions, in order to be compatible to the formulation of the recursion theorem for DcV (A)
that we employ in the proof.
(∃+ ) If we consider the following left, classical derivation Mc , and if we suppose that the
minimal derivation Ncg is already defined i.e.,
u1 : C1 . . . un : Cn
u1 g : C1 g . . . un g : Cn g
| Nc
and | Ncg
r ∈ Term A(r) +
∃ A(r)g = Ag (r)
∃x A
w: Γ [u : A] w0 : ∆
| Nc | Kc
∃x A B −
∃ x, u
B
with x ∉ FV(∆) and x ∉ FV(B). Suppose that the derivations Ncg and Kcg are defined i.e.,
w g : Γg ug : Ag w0 g : ∆g
| Ncg and | Kcg
˜ x Ag
∃ Bg
As x ∉ FV(Bg) = FV(B), by Proposition 2.6.3(ii) there is a derivation Λ of
Ex(Ag, Bg) = (¬¬Bg → Bg) → ∃̃x Ag → ∀x (Ag → Bg) → Bg,
as x must not be free in B. By Proposition 2.11.1 let the derivation dne(B g ) : ¬¬B g → B g . If
E′x(Ag, Bg) = ∃̃x Ag → ∀x (Ag → Bg) → Bg,
E′′x(Ag, Bg) = ∀x (Ag → Bg) → Bg,
we define Mcg to be the following minimal derivation of B g
obtained as follows: applying →− to Λ : Ex(Ag, Bg) and dne(Bg) : ¬¬Bg → Bg yields a derivation of E′x(Ag, Bg); a further →− with the derivation Ncg of ∃̃x Ag yields a derivation of E′′x(Ag, Bg); from [ug : Ag], w′g : ∆g and Kcg we derive Bg, hence by (→+ ug) and (∀+ x) we get a derivation of ∀x (Ag → Bg); a final →− yields Bg
from assumptions Γg and ∆g . Note that the variable condition is satisfied in the above use of
∀+ x, since x ∈
/ FV(∆g ) = FV(∆).
Definition 2.11.3. The Gödel-Gentzen functor GG : Formc → Form is defined by GG0(A) = Ag, for every A ∈ Form, and for every arrow Mc : A → B in Formc let GG1(Mc) = (Mc)g : Ag → Bg, the minimal derivation provided by Theorem 2.11.2.
Proof. Exercise.
(∗) `c ⊥ ⇒ ` ⊥g = ⊥.
(∗∗) ` ⊥ ⇒ `i ⊥ ⇒ `c ⊥.
∀y (⊥ → Ay) y u1 : ¬Ax u2 : Ax
⊥ → Ay ⊥
Ay
∀x Ax → B ∀y Ay
∀x ¬(Ax → B) x B
→+ u2
¬(Ax → B) Ax → B
⊥
→+ u1
¬¬Ax
Proof. By induction on DcV (A) and inspection of the proof of Theorem 2.11.2.
Proposition 2.12.7. There are functions g0c , g1c : Form → DcV (A) such that
u1 g : C1 g . . . un g : Cn g
| Mg
Ag
2.12. APPLICATIONS OF THE GÖDEL-GENTZEN FUNCTOR 75
[u1 g : C1 g ] . . . [un g : Cn g ]
| Mg un : Cn
Ag | g1c (Cn )
→ + ug
g
C n → Ag n
Cn g u1 : C1
u0 g : Ag Ag →−
| g0c (A) ... | g1c (C1 )
... →+ ug1
A C1 g → Ag C1 g
Ag → A → u
+ 0g
Ag →−
A →−
Notice that the above function is not defined by recursion (why is this not possible?). Moreover, if Mg : Ag → Bg, then (Mg)c : A → B. Despite this “functorial” behaviour of the function (·)c, we cannot define a functor Form → Formc having (·)c as its 1-part (why?).
Consequently, the following compositions are defined:
c ◦ g : DcV(A) → DVg(Ag) → DcV(A),   Mc ↦ (Mc)g ↦ [(Mc)g]c,
g ◦ c : DVg(Ag) → DcV(A) → DVg(Ag),   Mg ↦ (Mg)c ↦ [(Mg)c]g.
k : Form → Form,
A ↦ Ak; A ∈ Form,
defined by recursion on Form through the following clauses:
⊥k = ⊥,
Rk = ¬¬R, R ∈ Rel(0) \ {⊥},
(R(t1, . . . , tn))k = ¬¬R(t1, . . . , tn), R ∈ Rel(n), n ∈ N+, t1, . . . , tn ∈ Term,
(A □ B)k = ¬¬(Ak □ Bk), □ ∈ {→, ∧, ∨},
(∆x A)k = ¬¬(∆x Ak), ∆ ∈ {∀, ∃}.
(iv) The set of formulas for which there is a minimal derivation of their double-negation-
elimination is not included in Form− .
Proof. (i) Clearly (A ∨ B)k, (∃x A)k ∉ Form∗.
(ii) and (iii) are exercises.
(iv) It follows from (i) and (iii) and the fact that Form− ⊂ Form∗ .
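As with the Gödel-Gentzen translation, the clauses for A ↦ Ak can be sketched in Python; the nested-tuple encoding of formulas below is our own illustrative choice:

```python
# Formulas encoded as nested tuples: ('bot',), ('rel', name),
# ('->', A, B), ('&', A, B), ('v', A, B), ('all', x, A), ('ex', x, A).
BOT = ('bot',)

def neg(a):
    """¬A := A → ⊥."""
    return ('->', a, BOT)

def k(a):
    """Kolmogorov translation: double-negate every subformula except ⊥."""
    op = a[0]
    if op == 'bot':
        return a                                   # ⊥k = ⊥
    if op == 'rel':
        return neg(neg(a))                         # Rk = ¬¬R
    if op in ('->', '&', 'v'):                     # (A □ B)k = ¬¬(Ak □ Bk)
        return neg(neg((op, k(a[1]), k(a[2]))))
    if op in ('all', 'ex'):                        # (∆x A)k = ¬¬(∆x Ak)
        return neg(neg((op, a[1], k(a[2]))))
    raise ValueError(op)
```

In contrast to g, the translation k leaves ∨ and ∃ in place and wraps every compound subformula in a double negation, which is why (A ∨ B)k and (∃x A)k fall outside Form∗.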
Corollary 2.12.12. (i) The Kolmogorov translation defines a functor K : Formc → Form
K0 (A) = Ak ,
K1 (Mc : A → B) : Ak → B k .
(ii) If Γ ⊆ Form and A ∈ Form, then Γ `c A ⇒ Γk ` Ak .
(iii) If Γ ⊆ Form and A ∈ Form, then Γ ` A ⇒ Γk ` Ak .
Proof. Exercise.
The Gödel-Gentzen translation was introduced by Gödel in [10], and independently by Gentzen in [8]. The Kolmogorov translation was introduced even earlier in [13], but it was known neither to Gödel nor to Gentzen.
Proof. For this it suffices to show3 that if A, B, C ∈ Form such that C ∈ OA ∩ OB , there is
some D ∈ Form such that
C ∈ OD ⊆ OA ∩ OB .
The hypothesis C ∈ OA ∩ OB implies that A ⊢ C and B ⊢ C, hence by Lemma 2.13.2(ii) we get OC ⊆ OA and OC ⊆ OB. Hence C ∈ OC ⊆ OA ∩ OB, so we may take D = C.
We denote the resulting topological space as F = (Form, T (B)). It is easy to see that this
space does not behave well with respect to the separation properties. E.g., it is not T1 , since
A ∧ A is in the complement Form \ {A} of {A}, which is not open; if there was some C ∈ Form
such that A ∧ A ∈ OC ⊆ Form \ {A}, then OA∧A ⊆ OC ⊆ Form \ {A}, but A ∈ OA∧A and
A∈/ Form \ {A}.
Proposition 2.13.4. The Gödel-Gentzen translation g : Form → Form and the Kolmogorov
translation k : Form → Form are continuous functions from F to F.
Proof. We prove the continuity of the Gödel-Gentzen translation and, because of Corol-
lary 2.12.12(ii), the proof of the continuity of the Kolmogorov translation is similar.
By definition, a function f : X → Y between two topological spaces X, Y is continuous, if
the inverse image f −1 (O) of every open set O in Y is open in X. If B is a basis for Y , it is
easy to see that f is continuous if and only if the inverse image f −1 (B) of every basic open set
B in B is open in X. Clearly, g−1(Form) = Form ∈ T(B) and g−1(∅) = ∅ ∈ T(B). If A ∈ Form,
g−1(OA) = {B ∈ Form | Bg ∈ OA} = {B ∈ Form | A ⊢ Bg}.
If B ∈ g−1(OA), we show that
B ∈ OB ⊆ g−1(OA),
hence the set g−1(OA) is open, as the union of the open sets OB, for every B ∈ g−1(OA). The membership B ∈ OB follows from Lemma 2.13.2(i). Next we fix some C ∈ OB i.e., ⊢ B → C, and we show that C ∈ g−1(OA) i.e., A ⊢ Cg. By Corollary 2.12.1(ii) we get
⊢ B → C ⇒ ⊢ Bg → Cg,
and since A ⊢ Bg, combining the two derivations yields a derivation A ⊢ Cg.
3
Here we use the fact that if a collection B of subsets of some set X satisfies the property: “for every x ∈ X
and Bi , Bj ∈ B with x ∈ Bi ∩ Bj , there is some Bk ∈ B such that x ∈ Bk ⊆ Bi ∩ Bj ”, then B is a basis for
some topology T (B) on X. This topology T (B) is unique and the smallest topology on X that includes B
(see [7], Theorem 3.2).
Chapter 3
Models
It is an obvious question to ask whether the logical rules we have been considering suffice i.e.,
whether we have forgotten some necessary rules. To answer this question we first have to fix
the meaning of a formula i.e., provide a semantics for the syntax developed in the previous
chapters. This will be done here by means of fan models. Using this concept of a model we
will prove soundness and completeness.
where F({0, . . . , n − 1}, X) denotes the set of functions u from {0, . . . , n − 1} to X. Such a
function is also understood as an n-tuple of elements of X i.e.,
u = u(0), u(1), . . . , u(n − 1) = u0 , u1 , . . . , un−1 ,
We also use the symbol ⟨⟩ for the empty node. The length |u| of a node u of X<N is defined by |u| = 0, if u = ∅, and |u| = n, if u ∈ Xn and n > 0.
If one of them is the empty node, then their concatenation is the other node. A sequence of
elements of X is an element α ∈ X N = F(N, X), and if n ∈ N, the n-th initial part ᾱ(n) of α
is defined by
ᾱ(n) = ∅, if n = 0, and ᾱ(n) = (α0, α1, . . . , αn−1), if n > 0.
A tree T on X is a subset of X <N , which is closed under initial segments i.e.,
Example 3.1.3. If X = N, the tree N<N on N is called the Baire tree. Its body [N<N] is the set NN, which is called the Baire space. Clearly, the Baire tree is infinitely branching.
Example 3.1.4. If X = 2 = {0, 1}, the tree 2<N on 2 is called the Cantor tree. Its body [2<N] is the set 2N, which is called the Cantor space. Clearly, the Cantor tree is a fan.
Trivially, ∅ ≺ u, for every u ∈ X<N \ {∅}, while a tree T on X is inhabited if and only if ∅ ∈ T. Notice that a node of a tree may have more than one immediate successor, but it always has a unique immediate predecessor (defined in the obvious way). If T is a finite tree on X, then, trivially, T is well-founded, but the converse is not true.
Proposition 3.1.5. (i) There is a well-founded, infinite tree.
(ii) An infinite fan F has an infinite path.
Proof. The proof of (i) is an exercise. The proof of (ii) uses classical logic. If u ∈ F , let u be
“good”, if there are infinitely many nodes w ∈ F with u ≺ w. Let u be “bad”, if u is not good.
What we want follows from the observation that if all immediate successor nodes of u are bad,
then u is also bad. The completion of the proof is an exercise.
. . . α3 R α2 R α1 R α0 ,
Proof. Suppose that there is some x ∈ X such that ¬P (x). This implies the (classical)
existence of some x1 ≺ x such that ¬P (x1 ). By repeating this step (and using some form of
the axiom of choice), we get that ≺ has an infinitely descending chain, which contradicts our
hypothesis.
Definition 3.1.8. Let T be a tree on some inhabited set X. A leaf of T is a node of T without
proper successors (equivalently, without immediate successors). We denote by Leaf(T ) the set
of leaves of T . We call T a spread, if Leaf(T ) = ∅, or equivalently, if every node of T has an
immediate successor1 . A subtree T 0 of T , in symbols T 0 ≤ T , is a subset T 0 of T which is also
a tree on X. A branch A of T is a linearly ordered subtree of T i.e.,
∀u,w∈A (u ⪯ w or w ⪯ u).
If ᾱ(n) ∈ B, we say that the infinite path α hits the bar B at the node ᾱ(n). A bar B of S is
called uniform, if there is a uniform bound on the length of the initial part of an infinite path
that hits B i.e.,
∃n∈N ∀α∈[S] ∃m≤n ᾱ(m) ∈ B .
Example 3.1.9. The Baire and the Cantor tree are spreads, and for every n ∈ N the sets
Bn = {u ∈ 2<N | |u| = n}
are uniform bars of 2<N . Note that B0 = {∅} is a uniform bar of every spread.
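The notions of tree, leaf, and bar can be experimented with on finite truncations of the Cantor tree. The following Python sketch, with our own encoding of nodes as tuples, checks prefix-closure, computes leaves, and verifies that B2 bars every maximal node of a depth-3 truncation (the full tree 2<N is of course infinite, so this is only an illustration):

```python
from itertools import product

# A tree on X is a prefix-closed set of nodes (tuples over X).
def is_tree(T):
    return all(u[:i] in T for u in T for i in range(len(u)))

def leaves(T):
    # A leaf has no immediate successor inside T.
    return {u for u in T if not any(w[:len(u)] == u and len(w) == len(u) + 1
                                    for w in T)}

def cantor(d):
    """Truncation of the Cantor tree 2^<N up to depth d."""
    return {t for n in range(d + 1) for t in product((0, 1), repeat=n)}

T = cantor(3)
assert is_tree(T)
# Every finite tree has leaves, so no finite tree is a spread.
assert leaves(T) == {u for u in T if len(u) == 3}
# B_n = {u : |u| = n} is hit by every maximal node of the truncation.
B2 = {u for u in T if len(u) == 2}
assert all(any(u[:n] in B2 for n in range(len(u) + 1))
           for u in T if len(u) == 3)
```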
u(x0) = {u ∗ (x0, x0, . . . , x0) | n ∈ N+}, where the node concatenated to u consists of n occurrences of x0.
Proposition 3.1.11. Let F be a fan on an inhabited set X, and G a fan and a spread on X.
(i) If all branches of F are finite, then F has a branch of maximal length.
(ii) If B is a bar of G, then B is uniform.
Proof. The proof of (i) rests on Proposition 3.1.5(ii). For the proof of both (i) and (ii) we use
classical reasoning.
Proposition 3.1.12. Let X, Y be inhabited sets, and let F be a fan on X and G a fan on Y such that F, G are spreads.
(i) If u ∈ F , and if B(u) = {α ∈ [F ] | u ≺ α}, where u ≺ α ⇔ ∃n∈N (ᾱ(n) = u), then the
family {B(u) | u ∈ F } ∪ {∅} is a basis for a topology TF on [F ].
(ii) Let φ : F → G satisfying the following properties:
where u ∨ w = sup {u, w}, is continuous with respect to the topologies TF and TG .
Proof. Exercise.
If n > 0, an element of Rel(n) (D) is a relation on D of arity n, and an element of Fun(n) (D)
is a function f : Dn → D. Since D0 = {∅}, we get Rel(0) (D) = P({∅}) = {∅, {∅}} = 2. The
value 0 = ∅ represents falsity, and the value 1 = {∅} represents truth. Moreover, the set
Fun(0) (D) = F({∅}, D) can be identified with D.
3.2. FAN MODELS 83
We also write
RM(d~, u) ⇔ d~ ∈ j(R, u).
Recall that i(f) ∈ D for f ∈ Fun(0), i(f) : Dn → D for f ∈ Fun(n) with n > 0, and j(R, u) ∈ 2 for R ∈ Rel(0).
∗ A node u ∈ F is interpreted as a “possible world”, and its length |u| is its “level”.
∗ The relation u ≺ w is interpreted as: “the possible world w is a possible future of the
possible world u”.
for every u ∈ F , is interpreted as: “if R is true at u, it is true at w”, since, if j(R, u) = ∅,
then we always have that j(R, u) ⊆ j(R, w), while if j(R, u) = {∅}, the monotonicity
j(R, u) ⊆ j(R, w), implies that j(R, w) = {∅} too.
The next fact explains why no generality is lost if the fan in a fan model is a spread.
η : Var → D.
It might be that di = dj , for some i, j ∈ {1, . . . , n}. If η ∈ F(Var, D) and d ∈ D, let ηxd be the
variable assignment in D defined by η and d as follows2 :
(
η(y) , if y 6= x,
ηxd (y) =
d , if y = x.
The next step is to associate an element of D to every term of L. This we can do with the
use of an assignment routine and a fixed fan model of L.
2
If we use classical logic in our metatheory, then the use of this instance of the principle of the excluded
middle, x = y or ¬(x = y), is legitimate. If we use constructive logic though, we need to equip the set of
variables Var of L with a decidable equality i.e., an equality satisfying such a disjunction.
3.3. THE TARSKI-BETH DEFINITION OF TRUTH IN A FAN MODEL 85
ηM : Term → D,
ηM (x) = η(x),
ηM (c) = i(c),
ηM (f (t1 , . . . , tn )) = i(f )(ηM (t1 ), . . . , ηM (tn )),
tM [η] = ηM (t),
and when M is fixed, we may even use the same symbol η(t) for ηM (t). If ~t ∈ Term<N , let
(
∅ , if ~t = ∅,
ηM (~t) =
(ηM (t0 ), . . . , ηM (t|~t|−1 )) , if ~t = (t0 , . . . , t|~t|−1 ).
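The recursive clauses for ηM can be sketched in Python for a toy signature; the interpretation i and the assignment η used below are our own illustrative choices, not part of the notes:

```python
def eval_term(t, i, eta):
    """η_M : Term → D. A term t is either a variable name (str)
    or a pair (symbol, arg_terms) for a constant/function symbol."""
    if isinstance(t, str):
        return eta[t]                                # η_M(x) = η(x)
    sym, args = t
    if not args:
        return i[sym]                                # η_M(c) = i(c)
    vals = [eval_term(s, i, eta) for s in args]      # η_M(f(t1,...,tn))
    return i[sym](*vals)                             #   = i(f)(η_M(t1),...)

# Illustrative model: D = int, a constant c and a binary f interpreted as +.
i = {'c': 7, 'f': lambda a, b: a + b}
eta = {'x': 3}
assert eval_term(('f', (('c', ()), 'x')), i, eta) == 10
```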
Definition 3.3.3 (Tarski, Beth). Let M = (D, F, X, i, j) be a fan model of L, such that F is
a spread. We define inductively the relation
“the formula A is true in M at the node u under the variable assignment η”, or “u forces A
under η in M”,
in symbols
M, u ⊩ A[η], or simpler u ⊩ A[η],
by the following rules3 :
u ⊩ (R(~t))[η] ⇔ ∃n∈N ∀u′⪰n u RM(~tM[η], u′),
In this definition, the logical connectives →, ∧, ∨, ∀, ∃ on the left hand side are part of the
object language L, whereas the same connectives on the right hand side are to be understood
in the usual sense: they belong to the metalanguage. It should always be clear from the context
whether a formula is part of the object language or the metalanguage. Regarding the Tarski-Beth definition of truth, we make the following remarks.
Hence, R (or R(~t)) is true in M at u under η if and only if there is a level of possible
worlds in F such that R is true in M at all possible future worlds of u of that level
under η. If u (R(~t))[η], and if
SF(u) = {w ∈ F | u ⪯ w} ∪ {w ∈ F | w ⪯ u},
then
BF(u) = {w ∈ SF(u) | RM(~tM[η], w)}
is a uniform bar of the spread subfan SF (u) of F , with |u| + n as a uniform bound.
[Figure: the fan F with root ⟨⟩; the node u lies at level |u|, and the nodes u′ considered lie at level |u| + n.]
• The formula A ∨ B is true in M at u under η if and only if for every possible future u0
of u of level |u| + n either A is true at u0 or B is true at u0 , for some n ∈ N.
• The formula A → B is true in M at u under η if and only if for every possible future u0
of u if A is true in M at u0 under η, then B is true in M at u0 under η.
i.e., there is a level of possible future worlds of u such that d ∈ j(R, u′), and this is the case for every d ∈ D. As any possible interpretation d of x is in all j(R, u′), for some level of possible future worlds of u, it is natural then to define that ∀x R(x) is true in M under η.
The use of ηxa in the definition of u (∃x A)[η] and u (∀x A)[η] reflects that no capture
occurs when we infer u (∃x A)[η] and u (∀x A)[η] from ∃n∈N ∀u0 n u ∃d∈D (u0 A[ηxd ])
and ∀d∈D (u A[ηxd ]), respectively.
Proposition 3.3.4 (Extension). Let A ∈ Form, M = (D, F, X, i, j) be a fan model of L such that F is a spread, η a variable assignment in D, and u, w ∈ F. Then
Proof. Exercise.
Proof. By induction on Form. Case R(~s). Assume ∀u′⪰n u (u′ ⊩ (R(~s))[η]). Since F is a fan, there are finitely many nodes u′ such that u′ ⪰n u. Let their set be N = {u1, . . . , ul}. By definition we have that for each uk ∈ N
hence by the corresponding clause of the Tarski-Beth definition we get u ⊩ R(~s)[η]. For this we argue as follows. If w ⪰n+m u, then w ⪰ wk ⪰nk uk, for some k ∈ {1, . . . , l}. Since, by hypothesis, ηM(~s) ∈ j(R, wk), by the monotonicity of j(R, ·) we get ηM(~s) ∈ j(R, w) ⇔ RM(~sM[η], w). The cases A ∨ B and ∃x A are handled similarly.
Case A → B. Let N = {u1 , . . . , ul } be the set of all u0 u with |u0 | = |u| + n such that
u0 (A → B)[η]. We show that
Let w u and w A[η]. We must show w B[η]. If |w| ≥ |u| + n, then w uk , for some
k ∈ {1, . . . , l}. Hence, by the hypothesis on uk and the definition of uk (A → B)[η], we get
w B[η]. If |u| ≤ |w| < |u| + n, then by Proposition 3.3.4 for the set N 0 of all elements uj of
N that extend w we have that each uj A[η]. Hence, we also have that uj B[η]. But N 0 is
the set of all successors of w with length |w| + m, where m = |u| + n − |w|. By the induction
hypothesis on the formula B, we get the required w B[η]. The cases A ∧ B and ∀x A are
straightforward to show.
Proof. By induction on Term and Form, respectively. The details are left to the reader.
Proof. By induction on Term and Form, respectively. The details are left to the reader.
The next theorem expresses that minimal derivations are sound with respect to the Tarski-Beth notion of truth of a formula in a fan model i.e., they respect truth in a fan model.
We fix u0 such that u0 u and we suppose u0 A[η]. By Extension (Proposition 3.3.4) we get
u0 {C1 , . . . , Cn }[η], hence u0 {A, C1 , . . . , Cn }[η]. Hence, by IH(N ) we get u0 B[η].
Case (→− ). Let the derivation
C1 , . . . , C n D1 , . . . , D m
|N |K
A→B A
B →−
Let d ∈ D. By the variable condition we get η|FV(Ci ) = (ηxd )|FV(Ci ) , for every i ∈ {1, . . . , n},
hence by Coincidence (Lemma 3.4.1) we conclude that u {C1 , . . . , Cn }[ηxd ]. By IH(N ) on ηxd
and u we get u A[ηxd ].
Case (∀− ). Let the derivation
C1 , . . . , C n
|N
∀x A r ∈ Term −
∀
A(r)
and suppose u {C1 , . . . , Cn }[η]. We show u A(r)[η] under the inductive hypotheses on N :
Applying IH(N) on u we get ∀d∈D (u ⊩ A[ηxd]). If we consider d = ηM(r), we get u ⊩ A[ηxηM(r)], and by Substitution (Lemma 3.4.2) we conclude that u ⊩ A(r)[η].
Case (∧+ ) and Case (∧− ) are straightforward.
Case (∨+0) and Case (∨+1) are straightforward.
Case (∨− ). Let the derivation
C1 , . . . , C n [A], D1 , . . . , Dm [B], E1 , . . . , El
|N |K |L
A∨B C C −
C ∨
We fix u0 such that u0 n u. If u0 A[η], then by Extension and IH(K) we get u0 C[η]. If
u0 B[η], then by Extension and IH(L) we get u0 C[η].
Case (∃+ ) is straightforward.
Case (∃− ). Let the derivation
C1 , . . . , C n [A], D1 , . . . , Dm
|N |K
∃x A B −
B ∃ x
Corollary 3.4.4. Let Γ ∪ {A} ⊆ Form such that Γ ` A. If M = (D, F, X, i, j) is a fan model
of L, u ∈ F and η is a variable assignment in D, the following hold:
(i) M, u Γ[η] ⇒ M, u A[η].
(ii) If Γ = ∅, then u A[η], and JAKM,η = [F ].
(iii) The set
JuKM,η = {A ∈ Form | u A[η]}
is open in the topology T (B) on Form, defined in Proposition 2.13.3.
Proof. Exercise.
Next to every node of the following fan we write all propositions forced at that node (the nodes where falsum is forced are considered to be extended, and at every extension-node falsum is also forced).
[Diagram: a binary fan with ⊥ forced at the nodes marked •⊥.]
This is a fan model because monotonicity holds trivially. Clearly, the above condition is not
satisfied, hence ` is consistent. As minimal, intuitionistic, and classical logic are equiconsistent
(Corollary 2.12.4), we conclude that intuitionistic and classical logic are also consistent.
Example 3.5.3 (Minimal underivability of Ex falsum). A countermodel to the derivation
` ⊥ → R, where R ∈ Rel(0) \ {⊥}, is constructed as follows: take F = {x0 }<N , D any
inhabited set, and define j(⊥, ∅) = 1, and j(R, ∅) = 0.
[Diagram: the fan {x0}<N, a single infinite branch, with ⊥ forced at every node.]
By extension we get j(⊥, u) = 1, for every u ∈ F. Moreover, we get j(R, u) = 0, for every u ∈ F; if there was some u ∈ F \ {∅} such that j(R, u) = 1, then, since u is the only node u′ ∈ F with u′ ⪰|u| ∅, by Covering we would get j(R, ∅) = 1 too. We show that ∅ ⊮ (⊥ → R)[η], where η is arbitrary. Suppose that ∅ ⊩ (⊥ → R)[η] ⇔ ∀u (u ⊩ ⊥[η] ⇒ u ⊩ R[η]). For every u ∈ F though, we have that u ⊩ ⊥[η] and u ⊮ R[η].
Definition 3.5.4. An intuitionistic fan model of a countable first-order language L is a fan
model Mi = (D, F, X, i, j) of L such that
∀u∈F j(⊥, u) = 0 .
It is easy to see that if Mi is an intuitionistic fan model, then
Mi, u ⊩ (⊥ → A)[η],
for every A ∈ Form, u ∈ F and assignment η in D. Notice that an intuitionistic fan model provides an immediate proof that ⊢i is consistent, hence by Corollary 2.12.4 we get the consistency of ⊢, ⊢c once more. Notice that the fan model used in Example 3.5.2 is not intuitionistic.
Lemma 3.5.5. A fan model M = (D, F, X, i, j) of L, where F is a spread, is intuitionistic if
and only if
∀η ∀u∈F (u ⊮ ⊥[η]).
Proof. Exercise.
Proposition 3.5.6. Let Mi = (D, F, X, i, j) be an intuitionistic fan model of L, η a variable
assignment, u ∈ F and A ∈ Form.
(i) u ⊩ (¬A)[η] ⇔ ∀u′⪰u (u′ ⊮ A[η]).
(ii) u ⊩ (¬¬A)[η] ⇔ ∀u′⪰u ¬∀u′′⪰u′ (u′′ ⊮ A[η]).
Proof. Exercise.
Definition 3.5.7. An intuitionistic countermodel to some derivation Γ `i A is a triple
(Mi , η, u), where Mi = (D, F, X, i, j) is an intuitionistic fan model, η is a variable assignment
in D, and u ∈ F such that Mi, u ⊩ Γ[η] and Mi, u ⊮ A[η].
Since the soundness theorem of intuitionistic logic follows immediately from the soundness
theorem of minimal logic, we can use it to conclude an intuitionistic underivability Γ 6`i A
from an intuitionistic countermodel to Γ `i A.
Example 3.5.8 (Intuitionistic underivability of DNE). We give an intuitionistic countermodel
to the derivation `i ¬¬P → P . We describe the desired fan model by means of a diagram
below. Next to every node we write all propositions forced at that node (again the nodes
where P is forced are considered to be extended, and at every extension-node P is also forced).
[Diagram: a binary fan with P forced at the nodes marked •P; P is not forced at the root.]
This is a fan model because monotonicity clearly holds. Observe also that j(⊥, u) = 0, for every node u i.e., it is an intuitionistic fan model, and moreover ∅ ⊮ P[η]. Using Proposition 3.5.6(ii), it is easily seen that ∅ ⊩ (¬¬P)[η]. Thus ∅ ⊮ (¬¬P → P)[η], and hence ⊬i ¬¬P → P.
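One plausible encoding of a model of this kind takes j(P, u) = 1 exactly when the node u (a tuple of bits) contains a 1, so that P is never forced along the all-zero branch; this encoding is our own reading of the diagram. The Python sketch below checks the robust facts up to a fixed depth D; the truncation only approximates the quantification over the infinite fan, so it is a sanity check rather than a full forcing computation:

```python
from itertools import product

D = 5  # depth bound truncating the infinite binary fan

def exts(u, n):
    """All extensions of the node u by exactly n bits."""
    return [u + w for w in product((0, 1), repeat=n)]

def forces_P(u):
    # u ⊩ P ⇔ ∃n ∀u' ⪰_n u : j(P, u') = 1; here this holds iff 1 ∈ u,
    # since the all-zero extension never satisfies j(P, ·) = 1.
    return any(all(1 in w for w in exts(u, n))
               for n in range(D - len(u) + 1))

def forces_notP(u):
    # u ⊩ ¬P ⇔ ∀u' ⪰ u : u' ⊮ P (cf. Proposition 3.5.6(i), since the
    # model is intuitionistic), checked here up to depth D.
    return all(not forces_P(w)
               for n in range(D - len(u) + 1) for w in exts(u, n))

assert not forces_P(())      # the root does not force P
assert forces_P((1,))        # any node containing a 1 forces P
assert not forces_notP(())   # the root does not force ¬P
```

Since every node has an extension containing a 1, no node forces ¬P, and hence by Proposition 3.5.6(i) the root forces ¬¬P in the full model, while it does not force P.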
3.6. COMPLETENESS OF MINIMAL LOGIC 93
M, u Γ[η] ⇒ M, u A[η].
Proof. (Harvey Friedman) Soundness of minimal logic already gives “(i) implies (ii)”. The
main idea in the proof of the other direction is the construction of a fan model M over the
Cantor tree 2<N with domain D the set Term of all terms of the underlying language such
that the following property holds:
Γ ⊢ B ⇔ M, ∅ ⊩ B[idVar].
We assume here that Γ ∪ {A} is a set of closed formulas. In order to define M, we will need an
enumeration A0 , A1 , A2 , . . . of the underlying language L (assumed countable), in which every
formula occurs infinitely often. We also fix an enumeration x0, x1, . . . of distinct variables. Since Γ is countable it can be written Γ = ⋃n Γn with finite sets Γn such that Γn ⊆ Γn+1.
With every node u ∈ 2<N , we associate a finite set ∆u of formulas and a set Vu of variables,
by induction on the length of u. We write ∆ `n B to mean that there is a derivation of height
≤ n of B from ∆.
Let ∆∅ = ∅ and V∅ = ∅. Take a node u such that |u| = n and suppose that ∆u , Vu are
already defined. We define ∆u∗0 , Vu∗0 and ∆u∗1 , Vu∗1 as follows:
Case 0. FV(An ) 6⊆ Vu . Then let
Γ, ∆u ⊢ R~s,
∃n ∀u′⪰n u (R~s ∈ ∆u′) by (3.2) and (3.1),
∃n ∀u′⪰n u RM(~s, u′) by definition of M,
u ⊩ R~s by definition of ⊩, since tM[idVar] = t.
u′ ∗ 0 ⊩ B and u′ ∗ 1 ⊩ C.
Then by definition we have u ⊩ B ∨ C. For the reverse implication (⇐) we argue as follows:
u ⊩ B ∨ C,
∃n ∀u′⪰n u (u′ ⊩ B ∨ u′ ⊩ C),
∃n ∀u′⪰n u ((Γ, ∆u′ ⊢ B) ∨ (Γ, ∆u′ ⊢ C)) by the hypothesis on B, C,
∃n ∀u′⪰n u (Γ, ∆u′ ⊢ B ∨ C),
Γ, ∆u ⊢ B ∨ C by (3.1).
For (⇐) assume u ⊩ ∀x B(x). Pick u′ ⪰n u such that Am = ∃x (⊥ → ⊥), for m = |u| + n. Then at height m we put some xi into the variable sets: for u′ ⪰n u we have xi ∉ Vu′ but xi ∈ Vu′∗j. Clearly u′ ∗ j ⊩ B(xi), hence Γ, ∆u′∗j ⊢ B(xi) by the hypothesis on B(xi), hence (since at this height we consider the trivial formula ∃x (⊥ → ⊥)) also Γ, ∆u′ ⊢ B(xi). Since xi ∉ Vu′ we obtain Γ, ∆u′ ⊢ ∀x B(x). This holds for all u′ ⪰n u, hence Γ, ∆u ⊢ ∀x B(x) by (3.1).
Case ∃x B(x). Assume FV(∃x B(x)) ⊆ Vu. For (⇒) let Γ, ∆u ⊢ ∃x B(x). Choose an n ≥ |u| such that Γn, ∆u ⊢n An = ∃x B(x). Then, for all u′ ⪰ u with |u′| = n,
where xi ∉ Vu′. Hence by the hypothesis on B(xi) (applicable since FV(B(xi)) ⊆ Vu′∗j)
Corollary 3.6.2 (Completeness of intuitionistic logic). Let Γ ∪ {A} ⊆ Form. The following
are equivalent:
(i) Γ `i A.
(ii) Γ, Efq A, i.e., for all intuitionistic fan models Mi , assignments η in |Mi | and nodes u
in the fan of Mi
Mi , u Γ[η] ⇒ Mi , u A[η].
Definition 3.7.2 (Validity). For every L-model M = (D, i, j), assignment η in D and formula A ∈ Form∗ we define the relation “A is valid in M under the assignment η”, in symbols M |= A[η], inductively, with respect only to formulas without ∨ and ∃, as follows:
Lemma 3.7.4 (Substitution). Let M = (D, i, j) be an L-model, t, r(x) terms, A(x) ∈ Form∗, and η an assignment in D.
(i) η(r(t)) = ηxη(t)(r(x)).
(ii) M |= A(t)[η] if and only if M |= A(x)[ηxη(t)].
Definition 3.7.5. An L-model Mc = (D, i, j) is called classical, if for every A ∈ Form∗ , and
every assignment η in D we have that
Mc |= (¬¬A)[η] ⇒ Mc |= A[η].
If only the weaker classical derivability ⊢∗c is considered, then for the constructive proof of the completeness theorem of classical logic it suffices to assume for Mc that
¬¬RMc(d~) ⇒ RMc(d~)
for all relation symbols R and all d~ ∈ D|d~|. If classical logic is used in our metatheory, then
every L-model is classical. To show this, we suppose that Mc |= (¬¬A)[η], and we show that
Mc |= A[η] by showing ¬¬(Mc |= A[η]). For that, suppose ¬(Mc |= A[η]). Then we get
Mc |= (¬A)[η] ⇔ Mc |= (A → ⊥)[η] ⇔ (Mc |= A[η] ⇒ Mc |= ⊥[η]),
as the premiss in the last implication is false by our second hypothesis. By our first hypothesis
Mc |= (¬¬A)[η] ⇔ Mc |= (¬A → ⊥)[η] ⇔ (Mc |= (¬A)[η] ⇒ Mc |= ⊥[η]),
and since the premiss in the last implication holds, we get Mc |= ⊥[η], which contradicts
¬(Mc |= ⊥[η]), hence we showed that ¬¬(Mc |= A[η]). With DNE we get Mc |= A[η].
Moreover, one can show constructively (exercise) that
Mc |= (¬A)[η] ⇔ ¬(Mc |= A[η]).
Let d ∈ D. By the variable condition η|FV(Ci) = (η_x^d)|FV(Ci), for every i ∈ {1, . . . , n}, hence by
Coincidence we conclude that Mc |= {C1, . . . , Cn}[η_x^d]. By IH(N) on η_x^d we get Mc |= A[η_x^d].
Case (∀−). Let the derivation be
C1, . . . , Cn
    | N
  ∀x A        r ∈ Term
─────────────────────── ∀−
        A(r)
and let Mc |= {C1 , . . . , Cn }[η]. We show Mc |= A(r)[η] under the inductive hypotheses on N :
Proof. Exercise.
Γ |= A,
i.e., for all classical models Mc and assignments η in |Mc | we have that
Mc |= Γ[η] ⇒ Mc |= A[η].
Proof. (Ulrich Berger, with constructive logic) The proof is based on the proof of completeness
of minimal logic. According to it, a contradiction is derived from the assumption Γ ∪
Dne ⊬ A. By the completeness theorem for minimal logic, there must be a fan model
M = (Term, 2^{<N}, 2, i, j) and a node u0 such that u0 ⊩ Γ, Dne and u0 ⊮ A. The details of the
proof are found in [19].
Since in the above proof the carrier set of the classical model in question is the countable
set Term, the following holds immediately.
Γ |=ℵ0 A
i.e., “for all classical models Mc with a countable carrier set |Mc |, for all assignments η,
Mc |= Γ[η] ⇒ Mc |= A[η]”.
Definition 3.9.3. We call a classical model Mc with a countable carrier set |Mc | a countable
(classical) model. Similarly, a finite model Mc is a model with a finite carrier set |Mc |. In
general, the cardinality of a classical model Mc is the cardinality of its carrier set |Mc |.
Corollary 3.8.2(i) of the soundness theorem for classical logic can take the form
Γ `c A ⇒ Γ |= A,
As the implication
Γ `∗c A ⇒ Γ `c A
implies constructively the implication
Γ |= A ⇒ ¬¬(Γ `c A),
with classical logic in the metatheory we also obtain
Γ |= A ⇒ Γ `c A,
i.e., the converse of the implication that expresses the soundness theorem for classical logic.
Notice that we use the equivalence between ∃̃x A and ¬¬∃x A in the above formulation
of satisfiability (Proposition 2.6.2(vi)). The consistency of Γ is a so-called syntactical notion,
while the satisfiability of Γ is a so-called semantical one. As classical logic is consistent, the empty
set ∅ is consistent.
Proof. (with constructive logic) We show only that if Γ is consistent, then Γ is satisfiable,
and the converse implication is an exercise. Assume Γ ⊬c ⊥, and also assume that Γ is not
satisfiable, i.e.,
¬¬¬(∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η])).
By Brouwer’s theorem we get
¬(∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η])).
Hence, for every classical model Mc and every assignment η : Var → |Mc | we have that
Mc |= Γ[η] ⇒ Mc |= ⊥[η].
By the completeness theorem for classical logic there must be a derivation Γ `c ⊥, i.e.,
¬¬(Γ `c ⊥).
This, together with the assumption ¬(Γ `c ⊥), leads to a contradiction. Hence, we showed
¬¬¬¬(∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ[η])),
i.e., Γ is satisfiable.
Of course, the above proof is considerably simplified if classical logic is used. Among the
many corollaries of the completeness theorem, the compactness and Löwenheim-Skolem
theorems stand out as particularly important. Although their classical proofs are
much simpler, we also show these theorems constructively.
Working as in the proof of Corollary 3.10.2, by the completeness theorem for classical logic
there must be a derivation Γ `c ⊥, i.e., ¬¬(Γ `c ⊥). As
Γ `c ⊥ ⇒ ∃Γ0 ⊆fin Γ (Γ0 `c ⊥),
we get
¬¬(Γ `c ⊥) ⇒ ¬¬∃Γ0 ⊆fin Γ (Γ0 `c ⊥).
By definition
Γ0 is satisfiable ⇔ ¬¬(∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ0 [η])).
The following implication holds:
¬¬∃Γ0 ⊆fin Γ (Γ0 `c ⊥) ⇒ ¬(∃Mc ∃η∈F(Var,|Mc |) (Mc |= Γ0 [η])),
For that classical model Mc and assignment η the following implication holds:
∃Γ0 ⊆fin Γ (Γ0 `c ⊥) ⇒ Mc |= ⊥[η],
Mc |= Γ0 [η] ⇒ Mc |= ⊥[η],
i.e., Γ is satisfiable.
Proof. The proof with classical logic is straightforward. The constructive proof is an exercise.
Hence, however large a model of a satisfiable Γ may be, we can always find a small model,
i.e., a countable one. In the spirit of the converse direction, one can show with compactness
that if there are arbitrarily large finite models of Γ, then there is also an infinite model of Γ.
Before showing this result we first present some related notions and facts on equality in L.
Definition 3.10.5. Let the underlying language L contain a binary relation symbol ≈ i.e.,
≈ ∈ Rel(2) . The set EqL of L-equality axioms consists of (the universal closures of)
(Eq1 ) x ≈ x,
(Eq2 ) x ≈ y → y ≈ x,
(Eq3 ) x ≈ y ∧ y ≈ z → x ≈ z,
(Eq4 ) x1 ≈ y1 ∧ . . . ∧ xn ≈ yn → f (x1 , . . . , xn ) ≈ f (y1 , . . . , yn ),
(Eq5 ) x1 ≈ y1 ∧ . . . ∧ xn ≈ yn ∧ R(x1 , . . . , xn ) → R(y1 , . . . , yn ),
for all n-ary function symbols f , for all relation symbols R of L, and n ∈ N.
Note that the equality axioms are formulas of L. If f is a 0-ary function symbol, then Eq4
has as special case the axiom c ≈ c. Notice that this equality is the given “internal” equality of
L and must not be confused with the “external” equality x = y, which is the metatheoretical
equality of the set Var. Consequently, if t, s ∈ Term, we get the following formula of L:
t ≈ s.
Lemma 3.10.6 (Equality). Let r, s, t ∈ Term and A ∈ Form∗ .
(i) EqL ` t ≈ s → r(t) ≈ r(s).
(ii) If Freet,x (A) = Frees,x (A), then EqL ` t ≈ s → (A(t) ↔ A(s)).
Proof. (i) By induction on Term we prove the following formula:
∀r∈Term (EqL ` t ≈ s → r(t) ≈ r(s)).
Proof. (with classical logic) (i) Let C = {cn | n ∈ N} be a set of constants such that cn ≠ cm,
for every n ≠ m. Consider also the new countable language
L′ = L ∪ C.
We extend the equality of L to L′ by keeping for simplicity the same symbol ≈. Consider the
following set Γ0 of formulas in L′:
Γ′0 = Γ0 ∪ Σ0 ,
for some k ∈ N. Clearly, we can find a finite model Mc with cardinality m and an assignment η
in |Mc |, such that Mc |= Γ[η], hence Mc |= Γ0 [η], and m > 2k. We can extend η to some
η′ such that all constants occurring in Σ0 are assigned to pairwise distinct elements of |Mc |
under η′ (clearly, the equality on the carrier set is a congruence). Hence Mc |= Γ′0 [η′]. By the
compactness theorem for the countable language L′ there is a classical model Nc and an
assignment θ in |Nc | such that Nc |= Γ′0 [θ]. Consequently, Nc is infinite, and clearly Nc |= Γ[θ].
(ii) and (iii) follow immediately from (i).
With classical logic one can also show that the compactness theorem implies the complete-
ness theorem for classical logic (exercise).
Chapter 4
Gödel’s incompleteness theorems
Definition 4.1.1. The set of elementary functions of type N^k → N, where k ≥ 1, is defined
inductively by the following rules:
(Elem1 ) 0^1 ∈ Elem(1) and 1^1 ∈ Elem(1), where 0^1 is the constant function 0 on N and 1^1 is the constant function 1 on N.
(Elem2 ) pr^k_i ∈ Elem(k), for k ∈ N+ and i ∈ {1, . . . , k}, where the projection function pr^k_i is defined by pr^k_i(x1, . . . , xk) = xi.
(Elem3 ) + ∈ Elem(2), where +(x, y) = x + y is the addition of natural numbers.
(Elem4 ) −· ∈ Elem(2), where the modified subtraction −·(x, y) = x −· y is defined by
x −· y = x − y, if x ≥ y, and x −· y = 0, otherwise.
(Elem5 ) If n, k ∈ N+, f ∈ Elem(n) and f1, . . . , fn ∈ Elem(k), then f ◦ (f1, . . . , fn) ∈ Elem(k), where the composite function f ◦ (f1, . . . , fn) is defined by
(f ◦ (f1, . . . , fn))(x1, . . . , xk) = f(f1(x1, . . . , xk), . . . , fn(x1, . . . , xk)).
(Elem6 ) If r ∈ N and f ∈ Elem(r+1), then Σf ∈ Elem(r+1), where
Σf(x1, . . . , xr, y) = Σ_{z<y} f(x1, . . . , xr, z)
and Σf(x1, . . . , xr, 0) = 0.
(Elem7 ) If r ∈ N and f ∈ Elem(r+1), then Πf ∈ Elem(r+1), where
Πf(x1, . . . , xr, y) = Π_{z<y} f(x1, . . . , xr, z)
and Πf(x1, . . . , xr, 0) = 1.
We also define
Elem = ⋃_{k≥1} Elem(k).
The function Σf is the bounded sum of f, and the function Πf is the bounded product of f.
By omitting bounded products, one obtains the so-called subelementary functions.
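The closure rules above can be sketched as higher-order combinators, e.g. in Python; the helper names (`proj`, `monus`, `compose`, `bounded_sum`, `bounded_product`) are illustrative, not from the text:

```python
# A sketch of the closure rules (Elem2), (Elem4)-(Elem7) as Python combinators.

def proj(k, i):
    """(Elem2): projection pr^k_i, returning the i-th of k arguments (1-based)."""
    return lambda *xs: xs[i - 1]

def monus(x, y):
    """(Elem4): modified subtraction x -. y."""
    return x - y if x >= y else 0

def compose(f, *gs):
    """(Elem5): the composite f o (g1, ..., gn)."""
    return lambda *xs: f(*(g(*xs) for g in gs))

def bounded_sum(f):
    """(Elem6): (Sigma f)(x1, ..., xr, y) = sum of f(x1, ..., xr, z) for z < y."""
    return lambda *args: sum(f(*args[:-1], z) for z in range(args[-1]))

def bounded_product(f):
    """(Elem7): (Pi f)(x1, ..., xr, y) = product of f(x1, ..., xr, z) for z < y."""
    def g(*args):
        p = 1
        for z in range(args[-1]):
            p *= f(*args[:-1], z)
        return p
    return g

# Multiplication as a bounded sum of a projection:
mult = bounded_sum(proj(2, 1))    # mult(x, y) = sum_{z<y} x = x * y
assert mult(4, 5) == 20
assert monus(3, 7) == 0
```

This is only a computational sketch; the point of the definition is of course the induction principle it generates, not the executability of the rules.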
Proof. We show only (vii) and (viii), and the rest is an exercise. We have that
·(x, y) = x · y = Σ_{z<y} pr^2_1(x, z) = Σ_{z<y} x,
!(x) = x! = Π_{y<x} Succ(y) = Π_{y<x} (y + 1).
(vi) The elementary functions are closed under “bounded minimisation”, i.e., if f ∈ Elem(r+1),
then µf ∈ Elem(r+1), where
(µf)(x1, . . . , xr, y) = µz<y (f(x1, . . . , xr, z) = 0)
denotes the least z < y such that f(x1, . . . , xr, z) = 0. If there is no z < y such that
f(x1, . . . , xr, z) = 0, then (µf)(x1, . . . , xr, y) = y.
Proof. Case (iv) can be shown with the use of modified subtraction, and case (v) with the use
of modified subtraction and the bounded sum. Hence, not only the elementary, but in fact the
subelementary functions are closed under bounded minimization. The rest is an exercise.
Furthermore, we define µz≤y (f(x1, . . . , xr, z) = 0) as µz<y+1 (f(x1, . . . , xr, z) = 0).
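Bounded minimisation can be sketched directly, following the convention that the bound y is returned when no witness exists (the helper names are illustrative):

```python
def monus(x, y):
    """Modified subtraction x -. y."""
    return x - y if x >= y else 0

def bounded_mu(f):
    """(mu f)(x1, ..., xr, y): least z < y with f(x1, ..., xr, z) == 0, and y if none."""
    def g(*args):
        *xs, y = args
        for z in range(y):
            if f(*xs, z) == 0:
                return z
        return y
    return g

# f(n, z) = n -. z is zero exactly when z >= n:
g = bounded_mu(monus)
assert g(3, 10) == 3      # least z < 10 with 3 -. z == 0
assert g(12, 10) == 10    # no witness below the bound, so the bound is returned
```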
has the cardinality of the set of real numbers. Next we show how to find a non-elementary
function, which is defined explicitly by some rule.
Definition 4.2.1. If k ∈ N, the function 2_k : N → N is defined by
2_0(m) = m; m ∈ N,
2_{k+1}(m) = 2^{2_k(m)}; m ∈ N.
If m ∈ N, then
2_1(m) = 2^{2_0(m)} = 2^m,
2_2(m) = 2^{2_1(m)} = 2^{2^m},
2_3(m) = 2^{2_2(m)} = 2^{2^{2^m}},
2_{k+1}(m) = 2^{2_k(m)} = 2^{2^{···^{2^m}}},
where there are (k + 1)-many 2’s in the above tower of powers.
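The tower functions are immediate to compute for small arguments (a sketch; `tower(k, m)` stands for 2_k(m)):

```python
def tower(k, m):
    """2_k(m): 2_0(m) = m and 2_{k+1}(m) = 2 ** 2_k(m)."""
    for _ in range(k):
        m = 2 ** m
    return m

assert tower(0, 5) == 5
assert tower(1, 3) == 2 ** 3          # 8
assert tower(2, 2) == 2 ** (2 ** 2)   # 16
assert tower(3, 1) == 16              # 2^(2^(2^1))
```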
Lemma 4.2.2. For every elementary function f : N^s → N there is a k ∈ N such that for all
(x1, . . . , xs) ∈ N^s we have that
f(x1, . . . , xs) < 2_k(max{x1, . . . , xs}).
Proof. By the induction principle that corresponds to the definition of elementary functions
of arity k. If f = 0^1, then 0^1(n) = 0 < 2^n = 2_1(n). If f = 1^1, then 1^1(n) = 1 < 2^{2^n} = 2_2(n).
If s ∈ N+ and 1 ≤ i ≤ s, then
pr^s_i(x1, . . . , xs) = xi
≤ max{x1, . . . , xs}
< 2^{max{x1,...,xs}}
= 2_1(max{x1, . . . , xs}).
For the rest of the calculations we use the following inequalities:
n < 2^n ⇒ n^n < (2^n)^n,
(∗) n^n < (2^n)^n ≤ 2^{2^n}, for every n > 3,
(∗∗) 2n < 2^{2^n}.
The inequality (2^n)^n ≤ 2^{2^n} is shown by induction on n > 3, while to show (∗∗), we verify the cases
n = 0, . . . , n = 3, and for n > 3 we have that 2n < n^n, and we use (∗). Hence,
x + y ≤ 2 max{x, y}
< 2 · 2^{max{x,y}}
< 2^{2^{max{x,y}}}   (by (∗∗))
= 2_2(max{x, y}),
x −· y ≤ max{x, y} < 2^{max{x,y}} = 2_1(max{x, y}).
Let f1, . . . , fn ∈ Elem(s) and f ∈ Elem(n) be such that
f1(x1, . . . , xs) < 2_{k1}(max{x1, . . . , xs}),
. . .
fn(x1, . . . , xs) < 2_{kn}(max{x1, . . . , xs}),
f(y1, . . . , yn) < 2_k(max{y1, . . . , yn}),
for some k1, . . . , kn, k ∈ N. If
l = max{k1, . . . , kn, k},
then
(f ◦ (f1, . . . , fn))(x1, . . . , xs) = f(f1(x1, . . . , xs), . . . , fn(x1, . . . , xs))
< 2_k(max{f1(x1, . . . , xs), . . . , fn(x1, . . . , xs)})
≤ 2_k(max{2_{k1}(max{x1, . . . , xs}), . . . , 2_{kn}(max{x1, . . . , xs})})
≤ 2_k(2_l(max{x1, . . . , xs}))
≤ 2_l(2_l(max{x1, . . . , xs}))
= 2_{2l}(max{x1, . . . , xs}).
for some k ∈ N. As
f(x1, . . . , xr, z) < 2_k(max{x1, . . . , xr, z}),
and as
y ≤ 2_k(y) ≤ 2_k(max{x1, . . . , xr, y}),
for every k ∈ N, we have that
Σf(x1, . . . , xr, y) = Σ_{z<y} f(x1, . . . , xr, z)
< Σ_{z<y} 2_k(max{x1, . . . , xr, z})
≤ Σ_{z<y} 2_k(max{x1, . . . , xr, y})
= y · 2_k(max{x1, . . . , xr, y})
≤ (2_k(max{x1, . . . , xr, y}))^2
< 2_{k+2}(max{x1, . . . , xr, y}).
Similarly,
Πf(x1, . . . , xr, y) = Π_{z<y} f(x1, . . . , xr, z)
< Π_{z<y} 2_k(max{x1, . . . , xr, z})
≤ Π_{z<y} 2_k(max{x1, . . . , xr, y})
= (2_k(max{x1, . . . , xr, y}))^y
≤ (2_k(max{x1, . . . , xr, y}))^{2_k(max{x1,...,xr,y})}
< 2_{k+2}(max{x1, . . . , xr, y}),
as 2_n(m)^{2_n(m)} < 2^{2^{2_n(m)}} = 2_{n+2}(m), by (∗).
is elementary.
Example 4.3.2. The equality = on N and the inequality < on N are elementary, since their
characteristic functions can be described as follows:
χ<(n, m) = 1 −· (1 −· (m −· n)).
The complement of an elementary relation R ⊆ N^{s+1} is elementary as well, as
(1 −· χR)(n⃗, k) = 1 −· χR(n⃗, k), which is 0 if (n⃗, k) ∈ R and 1 if (n⃗, k) ∉ R.
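The formula for χ< can be checked directly (a sketch; the helper names are illustrative):

```python
def monus(x, y):
    """Modified subtraction x -. y."""
    return x - y if x >= y else 0

def chi_lt(n, m):
    """chi_<(n, m) = 1 -. (1 -. (m -. n)): equals 1 iff n < m, else 0."""
    return monus(1, monus(1, monus(m, n)))

assert chi_lt(2, 5) == 1
assert chi_lt(5, 2) == 0
assert chi_lt(3, 3) == 0
# The complement via 1 -. chi_R:
assert monus(1, chi_lt(2, 5)) == 0
```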
Next we show that the elementary relations are closed under applications of propositional
connectives and bounded quantifiers.
and the result for the remaining relations follows from their reducibility to them, e.g.,
E(x⃗, y) ⇔ ¬∀z<y ¬T(x⃗, z).
Example 4.3.4. The above closure properties enable us to show that many “natural” functions
and relations of number theory are elementary. E.g., the floor of a positive rational, defined
as a function on pairs of naturals, and the “remainder function” mod : N² → N, where
mod(n, m) = n mod m is the remainder of the division of n by m, are elementary, as
⌊n/m⌋ = µk<n (n < (k + 1)m),
n mod m = n −· ⌊n/m⌋ · m.
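These two definitions can be checked computationally (a sketch; the helper names are illustrative):

```python
def monus(x, y):
    """Modified subtraction x -. y."""
    return x - y if x >= y else 0

def floor_div(n, m):
    """floor(n/m) as mu_{k<n}(n < (k+1)m); by convention, n when no such k < n exists."""
    for k in range(n):
        if n < (k + 1) * m:
            return k
    return n

def mod(n, m):
    """n mod m = n -. floor(n/m) * m."""
    return monus(n, floor_div(n, m) * m)

assert floor_div(17, 5) == 17 // 5
assert mod(17, 5) == 17 % 5
```

Note that when no k < n satisfies the condition (e.g. m = 1), the bounded-minimisation convention returns the bound n itself, which is exactly ⌊n/1⌋, so the definition is correct in that case too.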
The unary relation Prime and the enumeration-function of primes are also elementary, since
The values p0, p1, p2, . . . form the enumeration of primes in increasing order. The inequality
pn ≤ 2^{2^n}
for the n-th prime pn can be proved by induction on n: for n = 0 this is clear by our convention
in Proposition 4.1.3(vi), and for n ≥ 1 we obtain
pn ≤ p0·p1 · · · pn−1 + 1 ≤ 2^{2^0}·2^{2^1} · · · 2^{2^{n−1}} + 1 = 2^{2^n − 1} + 1 < 2^{2^n}.
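The bound is easy to check empirically for the first few primes (a sketch; `primes` is an illustrative helper, not from the text):

```python
def primes(count):
    """The first `count` primes, by trial division."""
    ps, n = [], 2
    while len(ps) < count:
        if all(n % p != 0 for p in ps):
            ps.append(n)
        n += 1
    return ps

ps = primes(6)
assert ps == [2, 3, 5, 7, 11, 13]
# p_n <= 2^(2^n) for these n:
assert all(p <= 2 ** (2 ** n) for n, p in enumerate(ps))
```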
as follows:
χA = χ= ◦ (pr^{k+1}_{k+1}, f),
f(x⃗, y) = µz<y (χT(x⃗, z) = 0).
As
χA(x⃗, y) = 1 ⇔ (χ= ◦ (pr^{k+1}_{k+1}, f))(x⃗, y) = 1
⇔ pr^{k+1}_{k+1}(x⃗, y) = µz<y (χT(x⃗, z) = 0)
⇔ y = µz<y (χT(x⃗, z) = 0),
by our convention in Proposition 4.1.3(vi) we have that χT(x⃗, z) = 1, for every z < y.
Lemma 4.4.3. There are pairing functions π, π1 , π2 in E with the following properties:
(i) π maps N × N bijectively onto N.
(ii) π(a, b) + b + 2 ≤ (a + b + 1)2 , for a + b ≥ 1, hence π(a, b) < (a + b + 1)2 .
(iii) π1 (c), π2 (c) ≤ c.
(iv) π(π1 (c), π2 (c)) = c.
(v) π1 (π(a, b)) = a.
(vi) π2 (π(a, b)) = b.
Proof. We enumerate the pairs of natural numbers
.. .. .. .. ..
. . . . .
(0, 3)(1, 3)(2, 3)(3, 3) . . .
(0, 2)(1, 2)(2, 2)(3, 2) . . .
(0, 1)(1, 1)(2, 1)(3, 1) . . .
(0, 0)(1, 0)(2, 0)(3, 0) . . .
as follows:
..
.
6 ...
3 7 ...
1 4 8 ...
0 2 5 9 ...
I.e., if ∆n are the diagonals:
∆0 = (0, 0),
∆1 = (0, 1)(1, 0),
∆2 = (0, 2)(1, 1)(2, 0),
∆3 = (0, 3)(1, 2)(2, 1)(3, 0),
etc., then the above enumeration enumerates the pairs of all diagonals following the route
∆0 → ∆1 → ∆2 → ∆3 → . . . .
We remark the following:
• If (a, b) ∈ ∆n , then a + b = n.
• The number of pairs in ∆n is n + 1.
• The number π(a, b) associated to the pair (a, b) counts the number of pairs from (0, 0),
the first pair in the enumeration, until reaching (a, b) in the diagonal ∆a+b and having
gone through the previous diagonals
∆0 → ∆1 → ∆2 → ∆3 → . . . → ∆a+b−1 .
π(a, b) = [1 + 2 + · · · + (a + b)] + a
= ½(a + b)(a + b + 1) + a
= Σ_{i≤a+b} i + a.
The second equality above shows that π is in E (the justification of this is an exercise), while
the third equality shows that π is subelementary. Clearly π : N × N → N is bijective. Moreover,
a, b ≤ π(a, b) and in case π(a, b) ≠ 0 also a < π(a, b). Let
As π is in E, we also have that π1 and π2 are in E. Moreover, by their definition, and since
π is subelementary, we also have that π1 and π2 are subelementary. Clearly, πi (c) ≤ c for
i ∈ {1, 2} and
π1 (π(a, b)) = a, π2 (π(a, b)) = b, π(π1 (c), π2 (c)) = c.
½ n(n + 1) + n + 2 ≤ (n + 1)², for n ≥ 1,
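The pairing function and its inverses can be sketched directly from the diagonal enumeration (the names `pair` and `unpair` are illustrative):

```python
def pair(a, b):
    """pi(a, b) = (1 + 2 + ... + (a + b)) + a: position in the diagonal enumeration."""
    n = a + b
    return n * (n + 1) // 2 + a

def unpair(c):
    """(pi1(c), pi2(c)): locate the diagonal of c, then the offset within it."""
    n = 0
    while (n + 1) * (n + 2) // 2 <= c:
        n += 1
    a = c - n * (n + 1) // 2
    return a, n - a

# The start of the enumeration, as in the picture above:
assert [pair(0, 0), pair(0, 1), pair(1, 0), pair(0, 2)] == [0, 1, 2, 3]
assert all(unpair(pair(a, b)) == (a, b) for a in range(20) for b in range(20))
assert all(pair(*unpair(c)) == c for c in range(400))
```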
Theorem 4.4.4 (Gödel’s β-function). There is in E a function β with the following property:
For every sequence a0, . . . , an−1 of numbers less than b we can find a number
c ≤ 4 · 4^{n(b+n+1)^4},
Proof. Let
a = π(b, n) and d = Π_{i<n} (1 + π(ai, i)·a!).
From a! and d we can, for each given i < n, reconstruct the number ai as the unique x < b
such that
1 + π(x, i)·a! | d.
For clearly ai is such an x, and if some x < b were to satisfy the same condition, then
because π(x, i) < a and the numbers 1 + ka! are relatively prime for k ≤ a, we would have
π(x, i) = π(aj , j) for some j < n. Hence x = aj and i = j, thus x = ai . Therefore
Clearly, β is in E. Furthermore with c = π(a!, d) we see that β(c, i) = ai . One can then
estimate the given bound on c, using π(b, n) < (b + n + 1)2 (exercise).
Theorem 4.4.5. The set of functions E is closed under limited recursion. Thus if g, h, k are
given functions in E and f is defined from them according to the schema
f(m⃗, 0) = g(m⃗),
f(m⃗, n + 1) = h(n, f(m⃗, n), m⃗),
f(m⃗, n) ≤ k(m⃗, n),
then f is in E.
Proof. Let f be defined from g, h and k in E, by limited recursion as above. Using Gödel’s
β-function as in the last theorem we can find for any given m⃗, n a number c such that
β(c, i) = f(m⃗, i) for all i ≤ n. Let R(m⃗, n, c) be the relation
and note that its characteristic function is in E. It is clear, by induction, that if R(m⃗, n, c)
holds then β(c, i) = f(m⃗, i), for all i ≤ n. Therefore we can define f explicitly by the equation
f(m⃗, n) = β(µc R(m⃗, n, c), n).
Note that it is only in the previous proof that the exponential function is required, in
providing a bound for µc.
Corollary 4.4.6. The set of functions E is equal to the set Elem of elementary functions.
Proof. It is sufficient to show that E is closed under bounded sums and bounded products.
Suppose, for instance, that f is defined from g in E by bounded summation: f(m⃗, n) =
Σ_{i<n} g(m⃗, i). Then f can be defined by limited recursion, as follows:
f(m⃗, 0) = 0,
f(m⃗, n + 1) = f(m⃗, n) + g(m⃗, n),
f(m⃗, n) ≤ n · max_{i<n} g(m⃗, i),
and the functions (including the bound) from which it is defined are in E (why?). Thus f is in
E by the theorem. If f is defined by bounded product, we proceed similarly.
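The limited-recursion scheme of Theorem 4.4.5 can be sketched as follows; the helper `limited_rec` is illustrative, and the bound k plays no computational role here, so it is only checked, not used:

```python
def limited_rec(g0, h):
    """f(m, 0) = g0(m); f(m, n + 1) = h(n, f(m, n), m)."""
    def f(m, n):
        acc = g0(m)
        for i in range(n):
            acc = h(i, acc, m)
        return acc
    return f

g = lambda m, i: (m + i) ** 2                       # some function in E
# Bounded sum of g, defined by the recursion of the proof above:
f = limited_rec(lambda m: 0, lambda n, prev, m: prev + g(m, n))

assert f(2, 4) == sum(g(2, i) for i in range(4))
assert f(2, 4) <= 4 * max(g(2, i) for i in range(4))   # the limiting bound
```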
Assume that 0 is a constant and Succ is a unary function symbol in L. For every a ∈ N the
numeral ā ∈ TermL is defined by 0̄ = 0 and (n + 1)‾ = Succ(n̄).
Proposition 4.6.3. There is an elementary function s such that for every formula C = C(z)
with z = x0 ,
s(pCq, k) = pC(k)q;
Proof. The proof requires a lot of preparation, and it is omitted. Lemma 4.5.2 is necessary to
the proof.
We assume in this section that |M| = N, 0 is a constant in L and Succ is a unary function
symbol in L with 0^M = 0 and Succ^M(a) = a + 1.
Recall that for every a ∈ N the numeral ā ∈ TermL is defined by 0̄ = 0 and (n + 1)‾ = Succ(n̄).
Observe that in this case the definability of R ⊆ Nn by A(x1 , . . . , xn ) is equivalent to
R = {(a1, . . . , an) ∈ N^n | M |= A(ā1, . . . , ān)}.
We shall show that already from these assumptions it follows that the notion of truth for
M, more precisely the set
Lemma 4.7.3 (Semantical fixed point lemma). If every elementary relation is definable in
M, then for every L-formula B(z) we can find a closed L-formula A such that
Proof. Let s be the elementary function satisfying for every formula C = C(z) with z = x0 ,
s(pCq, k) = pC(k)q
and therefore
A = ∀x (As (pCq, pCq, x) → B(x)).
Hence M |= A if and only if ∀d∈N (d = pC(pCq)q ⇒ M |= B(d)), which is the same as
M |= B(pAq).
Theorem 4.7.4 (Tarski’s undefinability theorem). Assume that every elementary relation is
definable in M. Then Th(M) is undefinable in M.
Proof. Assume that pTh(M)q is definable by BW (z). Then for all closed formulas A
Now consider the formula ¬BW (z) and choose by the fixed point lemma a closed L-formula A
such that
M |= A if and only if M |= ¬BW (pAq).
This contradicts the equivalence above.
Definition 4.8.1. Let L be a countable first order language with equality, and let L be the
set of all closed L-formulas. For every set Γ of formulas let L(Γ) be the set of all function
and relation symbols occurring in Γ. An axiom system Γ is a set of closed formulas such that
EqL(Γ) ⊆ Γ. A model of an axiom system Γ is an L-model M such that L(Γ) ⊆ L and M |= Γ.
For sets Γ of closed formulas we write
Clearly Γ is satisfiable if and only if Γ has an L-model. A theory T is an axiom system closed
under `c , that is, EqL(T ) ⊆ T and
T = {A ∈ L(T ) | T `c A}.
For every L-model M satisfying the equality axioms the set Th(M) of all closed L-formulas
A such that M |= A is a theory. We consider the question as to whether in T there is a
notion of truth (in the form of a truth formula B(z)), such that B(z) means that z is true.
As a consequence, we have to explain all notions used without referring to semantical
concepts at all.
1. z ranges over closed formulas (or sentences) A, or more precisely over their Gödel
numbers pAq.
2. A true is to be replaced by T ` A.
3. C equivalent to D is to be replaced by T ` C ↔ D.
Hence the question now is whether there is a truth formula B(z) such that
T ` A ↔ B(pAq),
for all sentences A. The result will be that this is impossible, under rather weak assumptions
on the theory T . Technically, the issue will be to replace the notion of definability by the
notion of representability within a formal theory. We begin with a discussion of this notion.
In this section we assume that L is an elementarily presented language with 0, Succ and = in
L, and T is an L-theory containing the equality axioms EqL .
T ` A(ā1, . . . , ān) if (a1, . . . , an) ∈ R,
T ` ¬A(ā1, . . . , ān) if (a1, . . . , an) ∉ R.
T ` A ↔ B(pAq).
Proof. The proof is similar to the proof of the semantical fixed point lemma. Let s be the
elementary function introduced there and As (x1 , x2 , x3 ) a formula representing s in T . Let
and therefore
A = ∀x (As (pCq, pCq, x) → B(x)).
Then
T ` A ↔ ∀x (x = pAq → B(x)),
and therefore
T ` A ↔ B(pAq).
If T = Th(M), we obtain the semantical fixed point lemma above as a special case.
Theorem 4.9.2. Let T be a consistent theory such that all elementary functions are repre-
sentable in T . Then there cannot exist a formula B(z) defining the notion of truth, i.e., such
that for all closed formulas A
T ` A ↔ B(pAq).
Proof. Assume we had such a B(z). Consider the formula ¬B(z) and choose by the
fixed point lemma a closed formula A such that
T ` A ↔ ¬B(pAq).
is Σ^0_1-definable.
is a Σ^0_1-definition of R.
Definition 4.10.2. The µ-recursive, or simply recursive functions are those partial functions
which can be defined from the initial functions: constant 0, successor S, projections onto the
i-th coordinate, addition +, modified subtraction −· and multiplication ·, by applications of
composition and unbounded minimisation. The latter is the scheme: if f ∈ Rec(r+1), then
µy f ∈ Rec(r), where
(µy f)(x1, . . . , xr) = µy (f(x1, . . . , xr, y) = 0),
that is, the least number y such that f(x1, . . . , xr, y′) is defined for every y′ ≤ y and
f(x1, . . . , xr, y) = 0.
Note that it is through unbounded minimisation that partial functions may arise.
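The scheme can be sketched as follows; the search has no bound, so it simply diverges when no witness exists, which is exactly how partiality arises (helper names are illustrative):

```python
def monus(x, y):
    """Modified subtraction x -. y."""
    return x - y if x >= y else 0

def mu(f):
    """Unbounded minimisation: least y with f(x..., y) == 0; diverges if there is none."""
    def g(*xs):
        y = 0
        while f(*xs, y) != 0:
            y += 1
        return y
    return g

# Integer square root: least y with (y + 1)^2 > x, i.e. (x + 1) -. (y + 1)^2 == 0.
isqrt = mu(lambda x, y: monus(x + 1, (y + 1) ** 2))
assert [isqrt(x) for x in [0, 1, 3, 4, 15, 16]] == [0, 1, 1, 2, 3, 4]
# mu(lambda y: 1)() would loop forever: the resulting function is partial.
```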
Proof. By removing the bounds on µ one obtains µ-recursive definitions of the pairing functions
π, π1 , π2 and of Gödel’s β-function. Then by removing all mention of bounds one sees that
the µ-recursive functions are closed under unlimited primitive recursive definitions of the form:
f(m⃗, 0) = g(m⃗),
f(m⃗, n + 1) = h(n, f(m⃗, n)).
Thus one can µ-recursively define bounded sums and bounded products, and hence all
elementary functions.
The converse of the previous lemma does not hold (why?). Call a relation R recursive, if
its total characteristic function is recursive. One can show that a relation R is recursive if and
only if both R and its complement are recursively enumerable.
Definition 4.11.1. In this section let L be an elementarily presented language with 0, Succ,
= in L and T a theory containing the equality axioms EqL . A set S of formulas is called
recursive (recursively enumerable), if
pSq = {pAq | A ∈ S}
Theorem 4.11.2 (Undecidability). Assume that T is a consistent theory such that all recursive
functions are representable in T . Then T is not recursive.
Proof. Assume that T is recursive. By assumption there exists a formula B(z) representing
pT q in T . Choose by the fixed point lemma a closed formula A such that
T ` A ↔ ¬B(pAq).
Proof. Let T be such a theory, which is supposed to be complete. Clearly, the set F = {pAq |
A ∈ L} is elementary. Since T is complete, we have
a∈
/ pT q ↔ a ∈
/ F ∨ ¬a
˙ ∈ pT q
with ¬a
˙ = hsn(→), a, sn(⊥)i. Hence the complement of pT q is recursively enumerable as well,
which means that pT q is recursive. Now the claim follows from the undecidability theorem
above.
There are very simple theories with the property that all recursive functions are repre-
sentable in them; an example is a finitely axiomatised arithmetical theory Q due to Robinson.
One can sharpen the Incompleteness Theorem, as one can produce a formula A such that
neither A nor ¬A is provable. The original idea for this sharpening is due to Rosser. Gödel’s
original first incompleteness theorem provided such an A under the assumption that the
theory satisfied a stronger condition than mere consistency, namely ω-consistency. Rosser
then improved Gödel’s result by showing, with a somewhat more complicated formula, that
consistency is all that is required.
A theory T in an elementarily presented language L is axiomatised, if it is given by a
recursively enumerable axiom system AxT . One can show that T then also has an elementary axiom system.
According to the theorem of Gödel-Rosser, for every axiomatised consistent theory T satisfying
certain weak assumptions, there is an undecidable sentence A meaning “for every proof of me
there is a shorter proof of my negation”. Because A is unprovable, it is clearly true. Gödel’s
Second Incompleteness Theorem provides a particularly interesting alternative to A, namely
a formula ConT expressing the consistency of T . Again it turns out to be unprovable and
therefore true. The proof of this theorem in a sharpened form is due to Löb (see [19], section
3.6.2).
[4] E. Bishop and D. S. Bridges: Constructive Analysis, Grundlehren der Math. Wis-
senschaften 279, Springer-Verlag, Heidelberg-Berlin-New York, 1985.
[8] G. Gentzen: Über das Verhältnis zwischen intuitionistischer und klassischer Arithmetik,
Galley proof, Mathematische Annalen (received 15th March 1933). First published in
English translation in The collected papers of Gerhard Gentzen, M.E. Szabo (editor),
53-67, Amsterdam (North-Holland).
[9] G. Gentzen: Untersuchungen über das logische Schließen I, II, Mathematische Zeitschrift,
39, 1935, 176-210, 405-431.
[13] A. N. Kolmogorov: On the principle of the excluded middle (in Russian), Mat. Sb., 32,
1925, 646-667.
[14] J. Lambek, P. J. Scott: Introduction to higher order categorical logic, Cambridge University
Press, 1986.
[16] P. Martin-Löf: Intuitionistic type theory: Notes by Giovanni Sambin on a series of lectures
given in Padua, June 1980, Napoli: Bibliopolis, 1984.
[18] H. Schwichtenberg, A. Troelstra: Basic Proof Theory, Cambridge University Press 1996.
[21] The Univalent Foundations Program: Homotopy Type Theory: Univalent Foundations of
Mathematics, Institute for Advanced Study, Princeton, 2013.