
DEDUCTION IN MATCHING LOGIC

adam fiedler

Master’s Thesis
Faculty of Informatics
Masaryk University

May 2022
Adam Fiedler: Deduction in Matching Logic, Master’s Thesis.
© May 2022
DECLARATION

Hereby I declare that this thesis is my original authorial work, which I have worked out on my own. All sources, references, and literature used or excerpted during the elaboration of this work are properly cited and listed in complete reference to the due source.

Brno, May 2022

Adam Fiedler

advisor: doc. Mgr. Jan Obdržálek, PhD.

consultant: Xiaohong Chen


“All good (constructive) logic must have an operational side.”
— Jean-Yves Girard [15, p. 21]

ACKNOWLEDGMENTS

First, I would like to thank my advisor, doc. Mgr. Jan Obdržálek, PhD.,
for his continuous guidance and patience with me. My complicated
writing style remains a problem. However, this thesis would be much
more difficult to understand without his help. Whenever I thought
that something could not be explained more clearly, he proved me
wrong. He gave me so many helpful writing suggestions that I have
the feeling he read the thesis in more detail than I did.
Most warm-hearted thanks go to Xiaohong Chen for all the inspiring
discussions we had throughout the year, his brilliant permutation idea,
and his willingness to read all my drafts. Xiaohong taught me much
about matching logic and research. His passion for science never fails
to astonish me. I would also like to thank the rest of the FSL laboratory
at the UIUC for taking me as their own and for their amazing work.
Last but not least, I would like to thank my family and Juliana for
their understanding during these difficult months. Writing is a
great ordeal when other things have to be put aside. It is important to
remember the support of those around the writer.

ABSTRACT

Matching logic (ML) is a logic designed for reasoning about programs by means of operational semantics. We investigate the foundations
of matching logic and its proof systems suited for formal verification.
We focus on System H, which is complete w.r.t. most matching logic
theories used in practice. A problem open for several years is whether
System H is complete w.r.t. all theories. In this thesis, we identify a
tractable if-and-only-if condition for completeness of System H and
exploit it to find new classes of complete theories. While solving the
completeness problem, we review some existing results and answer
related questions on expressiveness, consistency, and (un)satisfiability.
For example, we show a detailed embedding of first-order logic in
matching logic, prove the well-known compactness property for ML,
and present a new technique of constructing canonical models for
matching logic theories with equality. We also borrow some notions
from first-order logic and study their properties in matching logic.

KEYWORDS

matching logic, first-order logic, proof systems, completeness, deduction, compactness, conservative extensions, consistency, satisfiability,
canonical models

CONTENTS

1 introduction
1.1 Mathematical conventions
1.2 A brief review of first-order logic
2 matching logic
2.1 Syntax
2.2 Semantics
2.3 Syntactic sugar
2.4 Entailment
2.5 Equality and definedness
2.6 Equality extensions
3 connections with first-order logic
3.1 Embedding ML in first-order logic with equality
3.2 Embedding first-order logic in ML
4 two proof systems for matching logic
4.1 System P
4.2 System H
4.2.1 Frame reasoning
4.2.2 Equivalence as a congruence
4.2.3 Deduction property
4.2.4 Local completeness
5 is system h complete?
5.1 An if-and-only-if condition for completeness
5.2 Reduction to finite theories
5.3 Theories without symbols are H-complete
5.4 Consistency, satisfiability, and compactness
5.5 Negation-complete theories
5.6 Open leads
6 canonical models for equality extensions
6.1 Local consistency
6.2 Canonical models
6.3 New results
7 conclusion

bibliography

1 INTRODUCTION
Matching logic (ML) [8, 11, 27] is a logic designed for reasoning about
programs by means of operational semantics. We can define the op-
erational semantics of a programming language as a matching logic
theory and then derive operational behaviors in this logical theory [23].
The goal is to have a single source of truth in the form of operational
semantics and use it unchanged to generate the entire toolkit (e.g.,
compiler, debugger, verifier, or state-space explorer) for the given pro-
gramming language automatically [27, p. 3]. This is because we would
like to verify programs with a minimal trust base that is consistent for
all stages of the development process.
The current consensus is that operational semantics is too low-level
to be used for practical formal verification [27, p. 3]. Many state-of-
the-art formal methods thus rely on alternative semantics, various
translations, or “ad hoc” techniques. Even if these methods are proved
to be correct (and the proofs are themselves correct), each indirection
creates a possibility where things can go wrong. ML was born in an
effort to overcome the obstacles associated with operational semantics
and use it directly: as a single point of reference. Operational semantics
is usually easy to understand, scales well, and can be debugged
and tested because it is executable [27, p. 3]. The K framework [28],
based on matching logic, is proof that using operational semantics
for both execution and verification is feasible. In a nutshell, K takes
operational semantics of a language L as input and generates an
interpreter and verifier for L as output. The list of programming
languages successfully defined in K includes C [19], Java [6], and
JavaScript [24]. The K verifier was able to verify several complex heap-
manipulating programs in [29] at a performance level comparable
with verifiers crafted for a specific language [27, p. 4].
ML has grown considerably since its introduction as a variant of first-
order logic in [27]. It turned out to be well-suited not only for defining
operational semantics but also for capturing other logics, which is
another strong argument for using ML. ML can conveniently express
and unify many popular logics as ML theories. The basic matching
logic introduced in [27] is expressive enough to capture first-order logic
and the well-known modal logic S5 [1, 3, 22]. In [11], matching logic
was extended with the least fixpoint µ-binder. This allowed capturing
many other logics such as first-order logic with least fixpoints, modal
µ-logic [21], dynamic logic [17], as well as various temporal logics
such as linear temporal logic [25] or reachability logic [26]. Another
variant called applicative matching logic (AML) was introduced in [9]


to capture, e.g., the λ-calculus [9]. Matching logic can be seen as a


logic unifying all of these different logics and as the driving force
behind the development of K.
Unlike axiomatic semantics that have to be tailored for each pro-
gramming language on its own, ML has a single proof system called
System H [11], which is language-independent. Given an operational
semantics of a language, H can be used to derive the properties of
programs written in this language. H is complete w.r.t. most matching
logic theories used in practice, i.e., for most “useful” theories Γ we
have

Γ |= ϕ implies Γ ⊢H ϕ.

What these theories have in common is that they can define equality
“=”. The question of whether H is complete w.r.t. all theories, even for
those without equality, has been open for several years [11, p. 1].

contributions. This thesis is a modest follow-up of [10, 11, 27] investigating the foundations of matching logic in order to answer the question of whether System H is complete. We take ML in its basic form (without the µ-binder) and ponder questions on the intersection of semantics and provability. The most notable of our contributions include:

(1) We present a full proof of an embedding of first-order logic in matching logic that is stronger than the original embedding presented in [27]. Namely, for every first-order S-theory Φ, we construct a matching logic theory ΓS such that Φ |=FOL ϕ iff Φ ∪ ΓS |=ML ϕ.

(2) We define a new concept called equality extensions and exploit it to identify a tractable if-and-only-if condition for completeness of H.

(3) We reduce the problem of completeness to the problem of whether H is complete w.r.t. all finite theories.

(4) We prove that H is complete w.r.t. some particular classes of theories even without equality.

(5) We prove the well-known compactness theorem for ML.

(6) We borrow the notions of consistency and negation-complete theories from first-order logic and show that their ML counterparts behave similarly.

(7) We develop a new technique of constructing canonical models for theories with equality, based on the work in [10].

In the process of dealing with the completeness problem, we also answer some related questions regarding expressiveness, (un)satisfiability, and consistency.

structure of the thesis. First, we familiarize ourselves with matching logic in Chapter 2, especially with concepts important for
our aims. We discuss how matching logic relates to first-order logic in
Chapter 3 and show that they have the same level of expressiveness.
Chapter 4 reviews a proof system for matching logic called System
H (and briefly also its predecessor System P ). In Chapter 5, we in-
vestigate completeness of H, formulate an if-and-only-if condition
for completeness, solve completeness w.r.t. some classes of theories
even without equality, and solve related problems. Finally, the last
Chapter 6 deals with canonical models in ML: we briefly overview
existing results and show a new technique of constructing canonical
models for theories with equality.

1.1 mathematical conventions

Let us declare a series of conventions that are followed throughout the thesis. Every block of mathematical text (definitions, examples, lemmas, theorems, etc.) ends with a black square “∎”, not just proofs; this way it is clear where a block ends even if it consists of multiple paragraphs. The symbol N stands for natural numbers, Z for integers. We always assume 0 ∈ N. When it is necessary to drop 0, we write N+ = N \ {0}.
Lowercase letters from the beginning of the English alphabet such as
a, b, c and m, n denote elements of a set (natural numbers, the carrier set
of a model, . . . ). The letters i, j are reserved for indices we are currently
considering in a list such as a1 , . . . , ai , . . . , an for some n ∈ N. Note that
the list a1 , . . . , an can be empty when n = 0. On the other hand, letters
from the end of the English alphabet x, y, z or x1 , . . . , xn are strictly
used for variables. Lowercase Greek letters ψ, ϕ, ξ, γ, δ always denote
formulas. Because we want to stay consistent with other matching
logic publications, the letter ρ is reserved for valuations and σ is
a placeholder formal symbol (similarly to how P is often used in
first-order logic). If we use an “enclosing” formal symbol such as
the ceiling symbol ⌈·⌉, by ⌈ϕ⌉ we mean the application ⌈·⌉(ϕ). We often use an underline to distinguish some formal symbols from their corresponding meta-object, e.g., the underlined symbol 0 and the number 0 ∈ N.
If we use any stand-alone uppercase letter, we likely refer to a
set of some kind. Uppercase Greek letters such as Ψ, Φ, Ξ, Γ, ∆ are
specifically used for sets of formulas. Γ will stand for sets of matching
logic formulas, and Φ for a set of FOL formulas. As usual, for a set A
we write P( A) to mean the powerset of A, and by

An = A × · · · × A    (n times)

we mean the n-ary Cartesian power (note that A0 = {∅}). By | A| we


denote the cardinality of A. We assume all functions to be total, unless

stated otherwise. If f : A → B, then f | A0 for A0 ⊆ A is the restriction


of f to A0 defined as f | A0 = f ∩ ( A0 × B). A function composition
g ◦ f : A → C of functions f : A → B and g : B → C is defined as
( g ◦ f )( x ) = g( f ( x )). We also differentiate between equality “=” and
meta-level equality “≡”. For example, 2 · 21 = 42 is different from
∃ x. x ≡ ∃y. y, which means we consider α-equivalent formulas as the
same. In ambiguous situations we also explicitly write a := b instead
of a = b to highlight that we mean a is defined as b.
Finally, a note on special font styles. The calligraphic font styles
P , H or D are strictly used for Hilbert-style proof systems. The Fraktur
A, B, M is always used for structures, the letter I = (A, v) for FOL inter-
pretations (where v is an FOL valuation of variables). This is relatively
standard in publications on logic [3, 4, 13, 14] and it helps to avoid
defining redundant notation. We specifically follow the convention set
by [13]: if A is a structure, the corresponding letter A always denotes
the carrier set of the structure and a ∈ A its element.

1.2 a brief review of first-order logic

Because we will often be discussing connections with first-order logic,


let us informally recap basic definitions and our notation. First-order
logic formulas are formal expressions over an alphabet consisting of a
signature, logical connectives, quantifiers, and auxiliary symbols such
as “(”, “)”, or “,”. An FOL signature is a triple S = (Var, Pred, Func),
where Pred ∩ Func = ∅ and:

• Var is a countably infinite set of variables.

• Pred = ⋃n∈N+ Predn, where each Predn is countable and we have Predi ∩ Predj = ∅ for i ≠ j. If P ∈ Predn, we call P an n-ary predicate symbol.

• Func = ⋃n∈N Funcn, where each Funcn is countable and we have Funci ∩ Funcj = ∅ for i ≠ j. If f ∈ Funcn, we call f an n-ary function symbol.

Given a signature S = (Var, Pred, Func), we define a grammar of


FOL S-terms t and S-formulas ϕ as follows:

t ::= x ∈ Var | f (t1 , . . . , tn ) if f ∈ Funcn


ϕ ::= P(t1 , . . . , tn ) if P ∈ Predn | ϕ1 ∧ ϕ2 | ¬ ϕ | ∃ x. ϕ if x ∈ Var

Semantics of S-terms and S-formulas are given by interpretations


(A, v), where v : Var → A is a valuation of variables and A = ( A, (·)A )
is an S-structure, consisting of

• a non-empty carrier set A consisting of elements,

• a mapping (·)A such that



– fA : An → A for each function symbol f ∈ Funcn, and

– PA ⊆ An for each n-ary predicate symbol P ∈ Predn.

Given an interpretation (A, v), an S-term t points to an element


vA (t) and an S-formula ϕ is either true in (A, v) ((A, v) |=FOL ϕ) or
false in (A, v) ((A, v) ⊭FOL ϕ). We write A |=FOL ϕ iff (A, v) |=FOL ϕ
for all valuations v : Var → A. The definitions of vA (t) and |=FOL are
standard due to Tarski [30]. They can be found (with varying notation),
e.g., in [13] or [14].
2 MATCHING LOGIC
Matching logic (ML)1 is a variant of first-order logic (FOL) that does
not differentiate between predicate symbols and function symbols [27].
Every FOL formula is an ML formula but not vice versa. For exam-
ple, we can write an ML formula that says something about triples
(n1 , n2 , n3 ) such that n1 + n2 = n3 as follows:

ϕSUM ≡ ∃x ∃y ∃z. ⟨x, ⟨y, z⟩⟩ ∧ x + y = z,

where the subpattern ⟨x, ⟨y, z⟩⟩ describes the structure and x + y = z the logical constraint.

Notice that ϕSUM is not an FOL formula; the reason is that ⟨·, ·⟩ is
neither a predicate symbol nor a function symbol. In ML, there is no
distinction between predicates (formulas) and terms.
Formulas of ML are called patterns. A pattern is interpreted in ML
as a set of model elements that “match” this pattern, similar to pattern
matching in functional programming languages such as Haskell. To
illustrate this, suppose we want to define a Haskell function f only for
triples ( x, y, z) of integers such that x + y = z. We could define f with
guarded pattern matching as follows.
f :: (Integer, Integer, Integer) -> ...
f (x, y, z) | x + y == z = ...

The domain of the partial function f exactly matches the pattern ϕSUM
defined in the first paragraph if we interpret symbols of ϕSUM in a
reasonable manner. This means we can define an ML model N of
integers where ϕSUM is a pattern that N interprets as the set

{(n1, n2, n3) ∈ Z3 | n1 + n2 = n3} = dom(f).

Take N as a black box for now. ML models are different from FOL
models because formal symbols are interpreted in ML models as maps
from elements to sets of model elements, not to model elements. On
an abstract level, our model N interprets ⟨·, ·⟩ as a tuple constructor

⟨n1, n2⟩ ↦N {n1, {n1, n2}}.

In this sense, ⟨x, ⟨y, z⟩⟩ matches triples (n1, n2, n3) ∈ Z3, while the pattern x + y = z filters triples (n1, n2, n3) ∈ Z3 such that n1 + n2 = n3.
Note that the pattern x + y = z is interpreted as a set of elements
as any other pattern. We discuss how equalities technically work in

1 We use the acronym ML throughout the thesis to mean matching logic, not the Meta
Language due to Milner et al.


Figure 2.1: The pattern ϕ matches no elements in the model M1, a single element in M2, and all elements of the set M4 ⊂ M3 in both M3 and M4. All elements of M4 match ϕ, thus ϕ is valid in M4.

Section 2.5. Finally, the connective ∧ is interpreted as intersection ∩ of sets of elements matching ⟨x, ⟨y, z⟩⟩ and of those matching x + y = z.
Interpretations of ML symbols applied to a set of elements distribute
over all contained elements. Let us give an example. If our model N
also interprets symbols even and prime as

even ↦N {n ∈ N | n is even},
prime ↦N {p ∈ N | p is prime},

then the pattern ⟨prime, ⟨prime, even⟩⟩ distributes ⟨·, ·⟩ over all elements
matched by even and prime in the corresponding positions, i.e., it is
interpreted by N as

{( p1 , p2 , n) ∈ Z3 | p1 , p2 prime and n even}.


Patterns can be meaningfully composed together exactly because
they represent sets of model elements. This helps us avoid duplicities
and redundant pattern definitions. For example, the pattern

ϕGB ≡ ϕSUM ∧ ⟨prime, ⟨prime, even⟩⟩

matches some triple in N for every even number greater than 2 iff
Goldbach’s conjecture is true. We did not need to change anything
about the pattern ϕSUM from the first paragraph. ML semantics make
structural reasoning very compact and composable (modular). That
is why ML is well-suited for operational semantics of programming
languages and formal verification using these semantics, which was
one of the motivations behind introducing ML [27]. Readers interested
in how to use ML for formal verification are referred to [29] for details.
The pattern ϕGB defined above also illustrates the dual character
of patterns; patterns can specify sets of model elements as well as
specify models among other models (Figure 2.1). This is different from
FOL formulas. Consider some closed FOL formulas ψ1 , ψ2 ; then the
notation

{ψ1} ⊭FOL ψ2,

says that there is an FOL model of the FOL theory {ψ1 } where ψ2
does not hold. Here ψ1 “specifies” FOL models we are considering,
ψ2 is an untrue statement about those models. FOL formulas express
properties of models by referring to model elements with terms, where
properties are either true or false. On the other hand, the notation

{ψ1} ⊭ML ψ2,

says that there is an ML model of {ψ1 } where ψ2 does not match


all elements of the model. As we shall see in Section 2.2, if a pattern matches all elements of a model, the pattern is called valid in the model (patterns have a dual character). This is how we use patterns as both terms referring to sets
of model elements and formulas that are either valid (matching all
elements) or not valid (not matching all elements).

2.1 syntax

Formulas of matching logic are called patterns. Analogously to other


logics, patterns (formulas) are formal expressions that contain vari-
ables and formal symbols given by a signature.

Definition 2.1.1 (Signature). A matching logic signature (or simply signature) is a pair (Var, Σ), where

• Var is a countably infinite set of variables,

• Σ = {Σn }n∈N is a set of pairwise disjoint sets of symbols, where


each Σn contains countably many symbols.

If σ ∈ Σn , then σ is called an n-ary symbol. 

Notice that a signature does not differentiate between predicate or


function symbols. A signature only declares what variables and formal
symbols we use and of what arity each formal symbol is. Patterns
(ML formulas) are then built with these variables and formal symbols
applied to other patterns. We can also combine patterns by standard
logical connectives.

Definition 2.1.2 (Pattern). Let (Var, Σ) be an ML signature. Then a


(Var, Σ)-pattern (or simply pattern) ϕ is any expression generated by
the grammar

ϕ ::= x ∈ Var | ¬ϕ | ϕ1 ∧ ϕ2 | ∃x. ϕ if x ∈ Var | σ(ϕ1, . . . , ϕn) if σ ∈ Σn.

The set of all (Var, Σ)-patterns is denoted Pattern(Var,Σ) . We say that


a pattern ϕ is in the signature (Var, Σ) iff ϕ ∈ Pattern(Var,Σ) . 

Note that x ∈ Var is both a variable and a pattern. Unless stated


otherwise, we assume Var to be letters from the end of the English

alphabet such as x, y, z. That is why we usually drop this conven-


tional Var in the signature (Var, Σ). The set of all Σ-patterns with
conventional variables is denoted simply PatternΣ .
By σ ∈ Σ we slightly abuse notation to mean that σ ∈ Σi for some i ∈ N; see Section 1.1 for other conventions. We also write, e.g., Σ = {λ, σ(·), σ′(·, ·)} to mean Σ = {Σ0, Σ1, Σ2, . . .} where Σ0 = {λ}, Σ1 = {σ}, Σ2 = {σ′} and Σn = ∅ for n > 2. Whenever λ ∈ Σ0, we call λ a constant (symbol). In patterns
we write constants as λ instead of λ().
Even though precedence parentheses “(” and “)” are not part of ML
syntax, we use them to explicitly show pattern structure. For example,
we need to distinguish ¬ ϕ ∧ ψ from ¬( ϕ ∧ ψ). Negation ¬ always
binds more tightly than other connectives, i.e.,

¬ ϕ ∧ ψ ≡ (¬ ϕ) ∧ ψ.

If Γfin is a finite set of patterns, we also make our lives slightly easier and write ⋀Γfin to mean the pattern ⋀γ∈Γfin γ.

bound variables. The scope of “∃” goes as far as possible to the


right unless it is limited by parentheses. For example,

∃x. ψ → ((∃y. ϕ) → z) ≡ ∃x. (ψ → ((∃y. ϕ) → z)).

Analogously to FOL, “∃” is a binder. Therefore bound variables, free


variables, capture-avoiding substitution and α-renaming are defined ac-
cordingly (see, e.g., [27, p. 7]). By FV( ϕ) we denote the set of free
variables of a pattern ϕ. When FV( ϕ) = ∅, we say that ϕ is closed. We
consider α-equivalent patterns to be the same, i.e., ϕ ≡ ϕ′ if ϕ, ϕ′ are
α-equivalent. Given a pattern ϕ ∈ PatternΣ , then

ϕ( x1 , . . . , xn ) means FV( ϕ) ⊆ { x1 , . . . , xn }.

We write ϕ[ψ/x ] to mean capture-avoiding substitution2 , i.e., the


result of substituting ψ for every free occurrence of x in ϕ with implicit
α-renaming that prevents variable capture. As usual, the notation
ϕ[ψ1 /x1 , ψ2 /x2 , . . . , ψn /xn ] for distinct xi (1 ≤ i ≤ n) means simultane-
ous capture-avoiding substitution.

remark on sorts. Unlike the canonical paper on ML [27], we


only use single-sorted signatures. When we use a result from many-
sorted ML, this is justified because we can simply assume that we only
have one sort s. In fact, the single-sorted variant of ML we use in this
thesis is as expressive as the original many-sorted ML. An example of
how to define sorts in ML can be found in [8]. This is why sorts are
often dropped in recent matching logic publications.

2 Note that ψ in ϕ[ψ/x ] can be any pattern as there is technically no difference between
predicates and terms in matching logic.

2.2 semantics

Matching logic semantics is similar to pattern matching from functional


programming languages such as Haskell. Intuitively speaking, a pat-
tern is interpreted in ML models as a set of those model elements that
match this pattern (hence the name matching logic) [11, 27]. For exam-
ple, (2, 3) matches the pattern h x, 3i in a model of N2 if we interpret
x := 2. If we do not care about the value of x, we can consider the
pattern ∃ x. h x, 3i that matches all elements of the set {(n, 3) | n ∈ N}
in a model of N2 .
Let us give an intuition of semantics for each case. The pattern x
(for any x ∈ Var) matches exactly one element given by a valuation
(Definition 2.2.2); ϕ1 ∧ ϕ2 matches elements matching both ϕ1 and ϕ2 ;
¬ ϕ matches elements not matching ϕ; ∃ x. ϕ matches every element
matching ϕ for some valuation of x. Finally, the pattern σ( ϕ1 , . . . , ϕn )
matches elements determined by the interpretation of the symbol σ
given by a model.

Definition 2.2.1 (Model). Let Σ be a signature. A matching logic Σ-


model (or simply model) is a pair M = ( M, {σM }σ∈Σ ) consisting
of

• a non-empty carrier set M (often called the domain),

• a function σM : Mn → P( M ) for every n-ary symbol σ ∈ Σn


(called interpretation of σ).

Every m ∈ M is called an element of M (or simply model element). 

Even though technically M0 = {∅}, we interpret λM : M0 → P(M) simply as an element λM ∈ P(M). We often overload σM and mean the extension of σM to sets, i.e., σM : P(M)n → P(M) where

σM(A1, . . . , An) = ⋃a1∈A1,...,an∈An σM(a1, . . . , an).

We also use notation in the following manner

M : M = { m1 , m2 , . . . }, σ M ( m ) = A

for m ∈ M, A ⊆ M to mean M = ({m1 , m2 , . . .}, {. . . , σM , . . .}) where


σM (m) = A to avoid unnecessary details.
Pattern matching in matching logic is formalized using the notion
of valuation. For a given pattern, a valuation returns a set of model
elements that match the given pattern.

Definition 2.2.2 (Valuation). Let M be a Σ-model. A function ρ :


Var → M is called an M-valuation. Given an M-valuation ρ, we
define pattern valuation ρM : PatternΣ → P( M) for all x ∈ Var, all
ϕ ∈ PatternΣ and all σ ∈ Σ inductively as follows:

Figure 2.2: Matching the pattern ϕ ≡ (1 ∨ 2 ∨ 3) ∧ ¬x with ρ(x) = 2 in a model of natural numbers (Example 2.2.1). Arrows depict sM; the dotted arrow depicts ρM(s(ϕ)) = sM({1, 3}) = {2, 4}.

• ρM ( x ) = {ρ( x )},

• ρM (¬ ϕ) = M \ ρM ( ϕ),

• ρM ( ϕ1 ∧ ϕ2 ) = ρM ( ϕ1 ) ∩ ρM ( ϕ2 ),

• ρM(∃x. ϕ) = ⋃m∈M ρ[m/x]M(ϕ),

• ρM(σ(ϕ1, . . . , ϕn)) = σM(ρM(ϕ1), . . . , ρM(ϕn)) if σ ∈ Σn,

where ρ[m/x](x) = m and ρ[m/x](y) = ρ(y) for all y ≠ x. We say that ϕ evaluates to A (with ρ) if ρM(ϕ) = A. We say that a matches ϕ or ϕ matches a (with x := m) if a ∈ ρ[m/x]M(ϕ) for some ρ.
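To make Definition 2.2.2 concrete, here is a minimal executable sketch of pattern valuation over a finite model, written in Haskell (the language already used for the pattern-matching analogy earlier in this chapter). Everything in it (the Pattern and Model types, the functions eval and update) is our own illustration and not part of the thesis; it only covers finite carriers, where the unions and intersections of Definition 2.2.2 can be computed directly.

import qualified Data.Set as Set
import Data.Set (Set)

-- Patterns over a signature of named symbols (cf. Definition 2.1.2).
data Pattern
  = V String              -- variable x
  | Not Pattern            -- ¬ϕ
  | And Pattern Pattern    -- ϕ1 ∧ ϕ2
  | Exists String Pattern  -- ∃x. ϕ
  | Sym String [Pattern]   -- σ(ϕ1, ..., ϕn)

-- A finite model: a carrier set plus, for every symbol, a map from
-- tuples of elements to sets of elements (cf. Definition 2.2.1).
data Model a = Model
  { carrier :: Set a
  , interp  :: String -> [a] -> Set a
  }

type Valuation a = String -> a

-- Pattern valuation ρM (Definition 2.2.2).
eval :: Ord a => Model a -> Valuation a -> Pattern -> Set a
eval _ rho (V x)        = Set.singleton (rho x)
eval m rho (Not p)      = carrier m `Set.difference` eval m rho p
eval m rho (And p q)    = eval m rho p `Set.intersection` eval m rho q
eval m rho (Exists x p) =
  Set.unions [ eval m (update rho x a) p | a <- Set.toList (carrier m) ]
eval m rho (Sym s ps)   =
  -- the interpretation distributes over all tuples of matched elements
  Set.unions [ interp m s tuple
             | tuple <- sequence (map (Set.toList . eval m rho) ps) ]

-- ρ[a/x]: update the valuation at x.
update :: Valuation a -> String -> a -> Valuation a
update rho x a y = if y == x then a else rho y

On a finite restriction of the successor model from Example 2.2.1 below, eval reproduces, for instance, the computation ρM((1 ∨ 2 ∨ 3) ∧ ¬x) = {1, 3} for ρ(x) = 2, once the numerals and ∨ are desugared into the core constructors above.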
Notice that M-valuations are maps from variables to elements of M.
Similarly to other logics, we then extend M-valuations to valuations of
patterns (formulas), which also depend on interpretations of symbols
in the model M. Logical connectives correspond to basic operations
over sets, symbols are arbitrary maps given by models from elements
to sets of elements. This is illustrated by the following example.
Example 2.2.1 (Natural numbers). Consider the {0, s(·)}-model (de-
picted in Figure 2.2) defined as

M : M = N, 0M = {0}, sM (n) = {n + 1}.


Let us also define the expected syntactic sugar n ≡ s(s(. . . s(0) . . .)) (n applications of s) for all n ∈ N+ and ϕ1 ∨ ϕ2 ≡ ¬(¬ϕ1 ∧ ¬ϕ2).
It is easy to see for every M-valuation ρ that the pattern n matches
the corresponding natural number n, i.e., ρM (n) = {n}. Disjunctions
evaluate to unions of natural numbers, conjunctions evaluate to inter-
sections of matched numbers. For example, if ρ( x ) = 2 then

ρM ((1 ∨ 2 ∨ 3) ∧ ¬ x ) = {1, 3}
 
(1 ∨ 2 ∨ 3) ∧ ¬ x matches 1 and 3 for x := 2 .

The interpretation sM(X) applied to a subset of natural numbers X ⊆ N distributes sM over all elements of X, e.g.,

sM({n1, . . . , nk}) = ⋃1≤i≤k sM(ni) = {n1 + 1, . . . , nk + 1}.


Of course, M-valuations have no effect on valuations of closed
patterns, i.e., closed patterns match the same elements no matter how
we interpret variables. We use this argument several times in the thesis,
so it is useful to have it stated properly. This intuition is a corollary of
the following proposition.3

Proposition 2.2.1 ([27]). Let M be a Σ-model and ϕ ∈ PatternΣ. For all M-valuations ρ1, ρ2 we have that ρ1|FV(ϕ) = ρ2|FV(ϕ) implies ρ1M(ϕ) = ρ2M(ϕ).

3 Note that for every closed pattern ϕ we have ρ|FV(ϕ) = ρ|∅ = ∅.
As a concluding remark, we sometimes need to work with models
restricted to smaller signatures. These are important when proving
results related to patterns that contain only some subset of symbols in
a given signature.

Definition 2.2.3 (Model restriction). Let M = (M, I) be a Σ′-model and Σ be some signature such that Σ ⊆ Σ′. We define the model restriction of M to Σ as M|Σ = (M, I ∩ {σM | σ ∈ Σ}).
On patterns in the restricted signature, model restrictions behave
the same as the original model. This argument is important in our
results, therefore we state it explicitly in the following proposition.

Proposition 2.2.2. Let M be a Σ′-model and Σ ⊆ Σ′ be some signature. For every ϕ ∈ PatternΣ and every M-valuation ρ we have ρM(ϕ) = ρM|Σ(ϕ).

2.3 syntactic sugar

Example 2.2.1 already introduced the sugar ϕ1 ∨ ϕ2 ≡ ¬(¬ϕ1 ∧ ¬ϕ2). Seeing how semantics work in ML, we can now define other expected
syntactic sugar for the logical connectives such as “→”, “∀ x. ϕ”, etc.
Every time we write a pattern with syntactic sugar, we mean the
desugared pattern.

ϕ1 ∨ ϕ2 ≡ ¬(¬ϕ1 ∧ ¬ϕ2)            ∀x. ϕ ≡ ¬∃x. ¬ϕ
ϕ1 → ϕ2 ≡ ¬ϕ1 ∨ ϕ2                ⊤ ≡ (∃x. x) ∨ ¬(∃x. x)
ϕ1 ↔ ϕ2 ≡ (ϕ1 → ϕ2) ∧ (ϕ2 → ϕ1)   ⊥ ≡ ¬⊤
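For instance, fully desugaring an implication down to the primitive connectives of Definition 2.1.2 (a small expansion of our own) gives

ϕ1 → ϕ2 ≡ ¬ϕ1 ∨ ϕ2 ≡ ¬(¬¬ϕ1 ∧ ¬ϕ2).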

Notice that ⊤ is a closed Σ-pattern with Σ = ∅ and if we replace ∃x. x in ⊤ with a propositional variable p, then we get a propositional tautology p ∨ ¬p. It is easy to see how ⊤ ≡ (∃x. x) ∨ ¬(∃x. x) matches all elements of any model M. Let ∃x. x evaluate to a set A ⊆ M; then ⊤ evaluates to A ∪ (M \ A) = M. In fact, propositional tautologies
always match all elements of a model:
Proposition 2.3.1 ([27]). Let ψ be a propositional tautology containing
only variables p1 , . . . , pn and let ϕ1 , . . . , ϕn ∈ PatternΣ for some sig-
nature (Var, Σ). Then for every Σ-model M and every M-valuation ρ
we get that ρM (ψ[ ϕ1 /p1 , . . . , ϕn /pn ]) = M. 
It is not hard to derive with basic set theory that the rest of the
syntactic sugar works as expected too:
Proposition 2.3.2 ([27]). Let M be a (Var, Σ)-model and ρ any M-
valuation. The following propositions hold for all x ∈ Var and all
ϕ ∈ PatternΣ :
• ρM(⊤) = M and ρM(⊥) = ∅,

• ρM(ϕ1 ∨ ϕ2) = ρM(ϕ1) ∪ ρM(ϕ2),

• ρM(ϕ1 → ϕ2) = (M \ ρM(ϕ1)) ∪ ρM(ϕ2) = M \ (ρM(ϕ1) \ ρM(ϕ2)),

• ρM(ϕ1 ↔ ϕ2) = M \ (ρM(ϕ1) △ ρM(ϕ2)),

• ρM(∀x. ϕ) = ⋂m∈M ρ[m/x]M(ϕ),

where “△” is set symmetric difference.


Notice that we defined the sugar ∀ x. ϕ. We will extend this a little
further and write ∀ ϕ to mean the universal closure of ϕ, i.e.,

∀ ϕ ( x1 , . . . , x n ) ≡ ∀ x1 . . . ∀ x n . ϕ ( x1 , . . . , x n ).

This notation applied to sets of patterns will mean ∀Γ ≡ {∀γ | γ ∈ Γ}.

2.4 entailment

We are now ready to define the relation |=ML . Throughout the thesis,
we usually drop ML in |=ML and simply write |=. Because ML does not
distinguish between formulas and terms, patterns play a dual role. We
have seen that each pattern is interpreted as a set of elements given by
the pattern valuations ρM (Definition 2.2.2). Here we learn to think of
patterns in their second role, the role that formulas play in FOL, i.e., patterns can
specify properties of models. A pattern is called valid in an ML model
if the pattern matches all elements of the model, regardless of how we
interpret variables.
Definition 2.4.1 (Validity). Let M be a Σ-model. We say that a pattern
ϕ ∈ PatternΣ is valid in M, denoted M |= ϕ, iff ρM ( ϕ) = M for every
M-valuation ρ. 

We can also ask about validity of patterns w.r.t. sets of patterns,


which we call theories if we assume a fixed signature. Matching logic
theories play a role similar to FOL theories in that they specify models.
A Σ-model M is called a model of a Σ-theory Γ if all patterns of Γ are
valid in M. Each pattern of a theory is called an axiom (of the theory).

Definition 2.4.2 (Theory). Let Σ be a signature. Any set Γ ⊆ PatternΣ


is called a Σ-theory (or simply theory when Σ is known from context).
A model of Γ is any Σ-model M such that M |= γ for all γ ∈ Γ, in
which case we write M |= Γ. We say ϕ is valid in Γ iff M |= ϕ for every
model M of Γ, in which case we write Γ |= ϕ. Finally, the pattern ϕ is
simply valid if ∅ |= ϕ. 

Similar to FOL, adding universal quantification preserves validity in


ML. That is why universal closure ∀ ϕ we defined in Section 2.3 makes
good sense. Often we can suppose that a pattern is closed without
loss of generality.

Lemma 2.4.1 ([10]). M |= ϕ iff M |= ∀ x. ϕ. 

Corollary 2.4.1. Γ |= ϕ iff Γ |= ∀ ϕ. 

Notice that finite theories can be understood as a single pattern, i.e.,


a conjunction of patterns. This follows straight from the definition of
the relation |=.

Proposition 2.4.1. Let Γfin be a finite Σ-theory. Then M |= Γfin iff M |= ⋀Γfin.


We call a theory Γ satisfiable iff there exists a model M such that


M |= Γ. Because finite theories can be thought of as patterns (Proposi-
tion 2.4.1), in every model they evaluate to a set of elements. It might
be surprising that unsatisfiable patterns (finite theories) do not always
evaluate to ∅. A trivial example can be found by looking at the pattern
¬ x:

Example 2.4.1. Consider the theory {¬x}. Given any model M with |M| > 1, for every M-valuation ρ it is easy to see that

∅ ⊂ ρM(¬x) = M \ {ρ(x)} ⊂ M.

(A result stronger than Example 2.4.1 is covered in Chapter 5.)

A special class of patterns with nice semantics is called predicate


patterns. The naming is not a coincidence; predicate patterns are always
either true/valid (match all elements) or false (match no elements):

Definition 2.4.3 (Predicate pattern). Let M be a Σ-model. A pattern


ψ ∈ PatternΣ is called an M-predicate (predicate in M) if

for every M-valuation ρ, either ρM (ψ) = M or ρM (ψ) = ∅.



If ψ is an M-predicate for every model M of a Σ-theory Γ, then ψ is


called a Γ-predicate (predicate in Γ). Finally, if ψ is an ∅-predicate
(predicate in the empty theory ∅), then ψ is simply called a predicate
pattern. 

M-predicates have many of the properties that we would expect


from FOL formulas. Namely, for any closed M-predicate ψ we have

M |= ψ iff M ⊭ ¬ψ.

The direction (⇐) does not hold for arbitrary closed patterns. For example, in the Σ-model M : M = {0, 1}, λM = {0} we have both M ⊭ ¬λ and M ⊭ λ, where λ ∈ Σ0 is a closed pattern.
We can also prove that M-predicates are preserved under all stan-
dard connectives:

Proposition 2.4.2 ([27]). Let ψ1 , ψ2 be M-predicates. Then ¬ψ1 , ψ1 ∧ ψ2 ,


∃ x. ψ1 , ∀ x. ψ1 are all M-predicates. 

Corollary 2.4.2. Let ϕ be a closed Γ-predicate. Then Γ |= ϕ iff Γ ∪


{¬ ϕ} |= ⊥.

Proof.

• (⇒) By contraposition. Let Γ ∪ {¬ϕ} ⊭ ⊥; thus there is some model M of Γ ∪ {¬ϕ} such that M ⊭ ⊥. In this model M, for every M-valuation ρ we have ρM(¬ϕ) = M by definition of |=. But then ρM(ϕ) = ∅: thus M |= Γ and ρM(ϕ) ≠ M, i.e., Γ ⊭ ϕ.

• (⇐) By contraposition. Let Γ ⊭ ϕ; thus for some model M of Γ we have M ⊭ ϕ. This means ρM(ϕ) = ∅ for all M-valuations ρ because ϕ is a closed Γ-predicate. But then ρM(¬ϕ) = M for all M-valuations. By definition of |=, M |= Γ ∪ {¬ϕ} and obviously M ⊭ ⊥ (since ρM(⊥) = ∅). Thus Γ ∪ {¬ϕ} ⊭ ⊥.

One other important property to consider about Γ-predicates is that


they are predicates also in all supersets of the original theory. This
is simply because all models of the supersets are also models of the
original theory.

Proposition 2.4.3. Let ϕ be a Γ-predicate and Γ ⊆ Γ+ . Then ϕ is also a


Γ+ -predicate.

Proof. Immediate from M |= Γ+ implying M |= Γ. 



remark. Implications ϕ1 → ϕ2 are not predicate patterns for all


ϕ1 , ϕ2 , which has consequences that might be surprising. Let us as-
sume that ϕ1 , ϕ2 are some closed FOL formulas. In any FOL structure
A, one is used to working with

A |=FOL ϕ1 → ϕ2 iff A ⊭FOL ϕ1 or A |=FOL ϕ2.

This is fine because entailment in FOL is two-valued, i.e., ϕ1 is either


true or false in A. We have A |=FOL ¬ϕ1 iff A ⊭FOL ϕ1.
On the other hand, the ML implication ϕ1 → ϕ2 ≡ ¬ ϕ1 ∨ ϕ2 is
interpreted as set union, where ¬ ϕ1 can evaluate to any set of model
elements. Recall that M 6|= ϕ1 iff ϕ1 does not match all elements. Thus Our basic intuition
for some models M with ∅ ⊂ ρM ( ϕ1 ) we have ρM (¬ ϕ1 ) ⊂ M, i.e., about implication
can mislead us.
M 6|= ϕ1 does not imply M |= ¬ ϕ1 ∨ ϕ2 !

The implication ϕ1 → ϕ2 can evaluate to any set of model elements


because it is just a union of matched elements.
A better intuition for “→” is given by pattern matching. An element
matches ϕ1 → ϕ2 iff it does not match ϕ1 or matches ϕ2 . In other
words, if an element matches ϕ1 and ϕ1 → ϕ2 , then it must also match
ϕ2 . This is an instance of modus ponens. Only if for every model
element m holds that m matches ϕ1 implies m matches ϕ2 , the pattern
ϕ1 → ϕ2 is valid. A valid ML implication corresponds to subsumption.

Proposition 2.4.4 ([27]). Let M be a Σ-model. Then M |= ϕ1 → ϕ2 iff


for every M-valuation ρ we have ρM ( ϕ1 ) ⊆ ρM ( ϕ2 ). 
Thus the ML implication says something different from first-order impli-
cation. An analogue of first-order implication is discussed when we
talk about the deduction property (Section 4.2.3).
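To make this concrete, here is a small worked computation of ours in the natural-numbers model M from Example 2.2.1, using the numeral sugar n:

ρM(1 → 2) = ρM(¬1 ∨ 2) = (N \ {1}) ∪ {2} = N \ {1}.

So 1 → 2 matches every number except 1: it is not valid in M (the element 1 does not match it), yet it is far from matching nothing. By Proposition 2.4.4, it would be valid only if ρM(1) ⊆ ρM(2), i.e., only if {1} ⊆ {2}, which fails.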

2.5 equality and definedness

We are looking for a pattern ϕ1 = ϕ2 that is valid in any model iff ϕ1


and ϕ2 match the same model elements. Formally, for such a pattern
we would have

M |= ϕ1 = ϕ2 iff for every M-valuation ρ we have ρM(ϕ1) = ρM(ϕ2).

Is this possible without extending the syntax of ML? Note that FOL
with equality is an extension of FOL in both syntax and semantics.
There is indeed no FOL formula EQ(t1 , t2 ) that is true in any given
FOL interpretation iff the terms t1 , t2 point to the same element of the
FOL model. An easy way to prove this is using the Löwenheim-Skolem
theorem (LST):

Theorem 2.5.1 (“Upward” Löwenheim-Skolem [18]). If a countable set of formulas without equality has a model, then it has models of arbitrarily large cardinality.

Corollary 2.5.1. Every satisfiable set of countably many FOL formulas without equality has an infinite model.

Note that the version of LST we use here is for FOL without equality, and is rather trivial. The intuition is that, given a model of a theory of FOL without equality, the model is non-empty and one can always add dummy elements that “behave” the same as some chosen original element of the model. For more details see, e.g., [18, p. 227].
Corollary 2.5.2. There is no FOL formula capturing equality of terms.

Proof. Suppose for a contradiction that there is some FOL formula


EQ(t1 , t2 ) such that for every FOL interpretation (A, v)

(A, v) |=FOL EQ(t1 , t2 ) iff vA (t1 ) = vA (t2 ).

Then the theory {∀x∀y. EQ(x, y)} is countable and satisfiable but has no infinite model, which contradicts Löwenheim-Skolem (Corollary 2.5.1).

ML is expressive enough to capture equality of two patterns with-


out any extensions to the logic itself. Because of Proposition 2.4.4,
one might think that ↔ does the job. Indeed, a corollary of Proposi-
tion 2.4.4 is the following:
Proposition 2.5.1 ([27]). Let M be a Σ-model. Then M |= ϕ1 ↔ ϕ2 iff
for every M-valuation ρ we have ρM ( ϕ1 ) = ρM ( ϕ2 ). 
Proposition 2.5.1 answers our question mentioned in the introduc-
tion of this section. Unfortunately, there are two problems with consid-
ering “↔” as equality. The first problem is that “↔” does not work as
pattern equality when nested inside binders. A nice counterexample
was provided in [27, p. 23]:
Example 2.5.1 ([27]). Suppose we want to model a unary symbol f as
an FOL term (function). This means we want a pattern ψ such that for
any model

M |= ψ iff | f M (m)| = 1 for every m ∈ M.

If “↔” is pattern equality, the pattern ψ ≡ ∃y. f ( x ) ↔ y should


precisely capture this. However, consider the model

M : M = {0, 1}, f M (0) = ∅, f M (1) = {0, 1}.

Then for every M-valuation ρ we have

ρM(∃y. f(x) ↔ y) = ⋃m∈M M \ (fM(ρ(x)) △ {m})
                 = (M \ {0}) ∪ (M \ {1})
                 = M.

This shows M |= ∃y. f(x) ↔ y and clearly |fM(0)| ≠ 1.



There is also a second reason why ↔ cannot serve as equality: ϕ1 ↔ ϕ2 is not a predicate pattern. In particular, ρM(ϕ1) ≠ ρM(ϕ2) does not imply ρM(ϕ1 ↔ ϕ2) = ∅. This is the actual cause of our bug in
Example 2.5.1. The pattern f ( x ) ↔ y matches some element for each
valuation of y in the counterexample model M; the union of these
elements contains every element in M.
However, notice that ϕ1 ↔ ϕ2 is not that “far” from pattern equality. We only have to force ϕ1 ↔ ϕ2 to evaluate to the empty set if ρM(ϕ1) ≠ ρM(ϕ2). To do this, it turns out we only need to define a unary symbol ⌈·⌉ that behaves as a ceiling function: ⌈ϕ⌉ matches all elements iff ϕ matches at least one element. Then ¬⌈¬(ϕ1 ↔ ϕ2)⌉ yields exactly what we need. To see why, we can do an intuitive case analysis as follows.

• ϕ1 ↔ ϕ2 is valid: ¬⌈¬(ϕ1 ↔ ϕ2)⌉ is valid because the negation ¬(ϕ1 ↔ ϕ2) matches no element.

• ϕ1 ↔ ϕ2 is not valid: then the negation ¬(ϕ1 ↔ ϕ2) matches at least one element. But then, by the intended behaviour of ⌈·⌉, the pattern ⌈¬(ϕ1 ↔ ϕ2)⌉ matches all elements, so ¬⌈¬(ϕ1 ↔ ϕ2)⌉ evaluates to ∅.

How can we define ⌈·⌉ to behave this way? It is not that difficult; we only need ⌈·⌉M(m) = M for all m ∈ M! How do we specify models with such a definition of ⌈·⌉? We consider theories containing the axiom ⌈x⌉, which enforces the symbol ⌈·⌉ to behave this way:

Definition 2.5.1 (Definedness). Let Γ be a Σ-theory such that σ(x) ∈ Γ for some unary σ ∈ Σ1.⁴ Then σ(x) is called a definedness axiom. For σ = ⌈·⌉ we call ⌈x⌉:

(Definedness)   ⌈x⌉.

For this particular symbol ⌈·⌉ we also define totality “⌊·⌋”, equality “=”, membership “∈”, and set containment⁵ “⊆” as derived constructs:

⌊ϕ⌋ ≡ ¬⌈¬ϕ⌉          ϕ1 = ϕ2 ≡ ⌊ϕ1 ↔ ϕ2⌋
x ∈ ϕ ≡ ⌈x ∧ ϕ⌉       ϕ1 ⊆ ϕ2 ≡ ⌊ϕ1 → ϕ2⌋

To avoid writing too many parentheses, let us assume that all of


the constructs in Definition 2.5.1 have the same precedence as symbol
application, i.e., they bind more tightly than binary connectives. For
example,

¬ ϕ1 = ϕ2 ∧ ( x ∈ ¬ψ → ϕ) ≡ ((¬ ϕ1 ) = ϕ2 ) ∧ (( x ∈ (¬ψ)) → ϕ).

⁴ Note that this is the same as having ∀x. σ(x) ∈ Γ.
⁵ Recall that → works as subsumption only in models where it is valid (Section 2.4)!

The constructs from Definition 2.5.1 have their expected meaning. Specifically, in models M satisfying the axiom ⌈x⌉ these constructs are M-predicates that decide whether a pattern ϕ matches at least one element (⌈ϕ⌉); matches all elements (⌊ϕ⌋); whether two patterns ϕ1, ϕ2 match the same elements (ϕ1 = ϕ2); etc. This is formally stated in the next proposition.

Proposition 2.5.2 ([27]). Let M be a Σ-model such that M |= ⌈x⌉. Then the following properties hold for every M-valuation ρ:

ρM(⌈ϕ⌉) = M if ρM(ϕ) ≠ ∅, and ∅ otherwise;

ρM(⌊ϕ⌋) = M if ρM(ϕ) = M, and ∅ otherwise;

ρM(ϕ1 = ϕ2) = M if ρM(ϕ1) = ρM(ϕ2), and ∅ otherwise;

ρM(x ∈ ϕ) = M if ρ(x) ∈ ρM(ϕ), and ∅ otherwise;

ρM(ϕ1 ⊆ ϕ2) = M if ρM(ϕ1) ⊆ ρM(ϕ2), and ∅ otherwise.
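As a small worked illustration of ours, extend the natural-numbers model of Example 2.2.1 with an interpretation ⌈·⌉M(m) = N for all m ∈ N, so that M |= ⌈x⌉. Then, for any valuation ρ:

ρM(⌈2⌉) = ⌈·⌉M({2}) = N,        since ρM(2) = {2} ≠ ∅;
ρM(1 = 2) = ρM(⌊1 ↔ 2⌋) = ∅,    since ρM(1) = {1} ≠ {2} = ρM(2);
ρM(2 = s(1)) = N,               since ρM(s(1)) = sM({1}) = {2} = ρM(2).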

There is also another property of (Definedness) to consider. Because


M-predicates are either true (evaluate to M) or false (evaluate to ∅)
in the model M, it is easy to see that (Definedness) and totality are
identities on M-predicates:

Proposition 2.5.3 ([27]). Let M |= ⌈x⌉ and ψ be an M-predicate. Then for every M-valuation ρ we have ρM(⌊ψ⌋) = ρM(ψ) = ρM(⌈ψ⌉).

Intuitively we can see that we fix Example 2.5.1 if we use ∃y. ϕ = y instead of ∃y. ϕ ↔ y. How do we know that “=” meets our expectations about equality in other cases as well? One way to convince ourselves is to show that “=” behaves at least as well as equality in FOL with equality. How do we do this? We show that the equality axioms for FOL hold in ML.

Proposition 2.5.4 ([27]). Equality axioms for FOL are valid in ML theories containing ⌈x⌉, i.e.:

(1) {⌈x⌉} |= ϕ = ϕ (equality introduction),

(2) {⌈x⌉} |= (ϕ1 = ϕ2 ∧ ϕ[ϕ1/x]) → ϕ[ϕ2/x] (equality elimination).



Note that the equality elimination axiom is actually stronger in ML because the compared patterns ϕ1, ϕ2 do not have to be terms (in the FOL sense). We should also mention the axiom

(⋀1≤i≤n ϕi = ϕi′) → σ(ϕ1, . . . , ϕn) = σ(ϕ1′, . . . , ϕn′);

this axiom is valid in {⌈x⌉}; however, we cite a stronger result in Section 4.2.2. All remaining doubts about “=” should hopefully be settled in Chapter 3.

2.6 equality extensions

As we have seen in Section 2.5, pattern equality ϕ1 = ϕ2 can be defined within an ML theory without any extensions to ML, only assuming one unary symbol ⌈·⌉ and the pattern (Definedness) ≡ ⌈x⌉ as an axiom. We often consider theories containing (Definedness) because the axiom is special for many reasons. This axiom will follow us throughout the whole thesis.
Instead of mentioning whether a theory contains (Definedness), we
will introduce a new concept called an equality extension. The equality
extension of a theory Γ is Γ extended with a definedness axiom such
that the added definedness axiom uses a fresh unary (definedness)
symbol. We formalize equality extensions in the following definition.

Definition 2.6.1 (Equality extension). Let Γ be a Σ-theory. Then we call the Σ=-theory Γ= ≡ Γ ∪ {⌈x⌉∗} an equality extension of Γ, where ⌈·⌉∗ ∉ Σ is a fresh unary symbol and Σ= = Σ ∪ {⌈·⌉∗}. For each signature Σ we assume some determined fresh symbol, i.e., for every pair of Σ-theories Γ, Γ′ we assume Γ=, Γ′= are Σ=-theories such that Γ= \ Γ = Γ′= \ Γ′.

We must be careful. Observe that (·)= is a notation that is nondeterministic. There are infinitely many equality extensions of the same theory. To use the notation Γ= for a Σ-theory Γ, we have to agree on the meta-level that the fresh symbol is determined for Σ. This is w.l.o.g. because (·)= is really only a shorthand. Instead of Γ=, Γ′= we could always write: “Let Γ, Γ′ be Σ-theories and ⌈·⌉∗ ∉ Σ be some fresh unary symbol. Consider the theories Γ ∪ {⌈x⌉∗}, Γ′ ∪ {⌈x⌉∗}.”

We always have Γ ⊂ Γ= by construction, and the added axiom ⌈x⌉∗ is the only pattern in Γ= with the fresh symbol ⌈·⌉∗. Obviously we have a guarantee that Γ= contains a definedness axiom (with a fresh symbol), whereas Γ may or may not contain a definedness axiom (regardless of the used symbol). For example, the set

{⌈x⌉}= ≡ {⌈x⌉, ⌈x⌉∗}

is an equality extension of {⌈x⌉}. The fresh axiom is added even though {⌈x⌉} already contains (Definedness).

Equality extensions are not an ad hoc concept: they make the presentation of some existing results more compact and clear. Most importantly, they are crucial in our search for a complete proof system for matching logic (see Chapter 5). In Definition 2.6.1 we used ⌈·⌉∗ instead of ⌈·⌉ on purpose to make it more explicit that ⌈·⌉∗ is fresh. However, for simplicity we always assume w.l.o.g. that ⌈x⌉ is the added (fresh) definedness axiom in Γ= unless stated otherwise, i.e., ⌈x⌉ ∈ Γ= and ⌈x⌉ ∉ Γ.
3 CONNECTIONS WITH FIRST-ORDER LOGIC
We mentioned in the introduction that ML is an FOL variant. This is not an
unjustified claim. In this chapter, we study how ML relates to FOL (and
vice versa). We can translate patterns to first-order logic formulas and
back while preserving the intended semantics. To prevent confusion,
we will explicitly write |=ML for ML semantics and |=FOL for standard
FOL semantics due to Tarski [30].

3.1 embedding ml in first-order logic with equality

We will show a sketch of the argument that every ML theory can be


captured by predicate logic with equality, while preserving the origi-
nal semantics. Predicate logic is a first-order logic fragment without
function (or constant) symbols, i.e., first-order logic over relational
symbol sets. It is known that FOL (even with equality) can be trans-
lated to a predicate logic with equality. The main idea is to consider
the graph of a function instead of the function itself [13, p. 116].
For ML, we too can provide a direct translation to predicate logic
with equality; there is indeed a good intuition why no function sym-
bols are needed. The semantics of matching logic suggests that we
can understand matching logic symbols as relations. For example,
consider an ML model M with a single n-ary symbol σ. We can define
an n + 1-ary relation Rσ such that

(m1 , . . . , mn , m) ∈ Rσ iff m ∈ σM (m1 , . . . , mn ).

Then M can be translated to an FOL structure A with a single predicate


symbol σ interpreted as the relation Rσ , i.e., σA := Rσ ! Let us turn
our intuition into a method of translating patterns to FOL formulas.
As [27] shows, we can define a translation of a pattern ϕ to an FOL
formula t( ϕ, m) that says: the pattern ϕ matches the element m.

Definition 3.1.1 ([27]). Let ϕ be a Σ-pattern. Then we define the trans-


lation t( ϕ) = ∀m. t( ϕ, m), where t( ϕ, m) is a function defined induc-
tively as follows:

• t( x, m) = ( x = m),

• t(¬ ϕ, m) = ¬t( ϕ, m),

• t ( ϕ1 ∧ ϕ2 , m ) = t ( ϕ1 , m ) ∧ t ( ϕ2 , m ),

• t(∃ x. ϕ, m) = ∃ x. t( ϕ, m),


• t(σ(ϕ1, . . . , ϕn), m) = ∃x1 . . . ∃xn. Pσ(x1, . . . , xn, m) ∧ ⋀1≤i≤n t(ϕi, xi).

We further extend t to sets as t(Γ) = {t(γ) | γ ∈ Γ}. 
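As a small worked instance of ours, take a signature with a single unary symbol σ and translate the pattern σ(x) ∧ ¬y:

t(σ(x) ∧ ¬y, m) = t(σ(x), m) ∧ ¬t(y, m)
                = (∃x1. Pσ(x1, m) ∧ x = x1) ∧ ¬(y = m),

so t(σ(x) ∧ ¬y) = ∀m. (∃x1. Pσ(x1, m) ∧ x = x1) ∧ ¬(y = m). Read back through the relation Rσ, this says that every element m lies in the interpretation of σ applied to the value of x and differs from the value of y, i.e., every element matches σ(x) ∧ ¬y, which is exactly validity of the pattern.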


Theorem 3.1.1 ([27]). Let Γ be a Σ-theory. Then for every Σ-pattern ϕ
we have Γ |=ML ϕ iff t(Γ) |=FOL t( ϕ). 
Note that we can use this translation to prove that our definition of
“=” in ML directly translates to “=” as defined by predicate logic with
equality [27, p. 51]. This can be seen as another piece of evidence that
the ML meta-operator “=” does what it is meant to do.

3.2 embedding first-order logic in ml

First-order logic can be captured as an ML theory ΓFOL , i.e., a theory


for which we intuitively have

|=FOL ϕ iff ΓFOL |=ML ϕ

for all FOL formulas ϕ. This was proved for many-sorted matching
logic in [27, p. 38] constructing ΓFOL with at least two sorts. Since we
use a single-sorted variant of ML, we have to show a slightly different
method sketched in [8, p. 6]. To mitigate reinventing the wheel, we
include a full proof of a stronger result that for every FOL S-theory Φ
there is an ML theory ΓS such that

Φ |=FOL ϕ iff Φ ∪ ΓS |=ML ϕ

for every FOL S-formula ϕ. This was proved neither in [27] nor in [8]. Our construction also has nice properties that were not explicitly shown in [8].
Given an FOL signature S = (Var, Func, Pred) with variables Var,
function symbols Func = {Func0 , Func1 , . . .} and predicate symbols
Pred = {Pred1 , Pred2 , . . .}, we can notice that FOL structures corre-
spond to a special class of matching logic models if we allow FOL
domains to be sets. Namely every FOL structure can be translated to
an ML model as follows:

Definition 3.2.1. Let S = (Var, Func, Pred) be an FOL signature and


A be an FOL S-structure. Then we define an ML ΣS -model M corre-
sponding to A as

• M = A,

• fM(m1, . . . , mn) = {fA(m1, . . . , mn)} for all f ∈ Funcn,

• PM(m1, . . . , mn) = M if (m1, . . . , mn) ∈ PA, and ∅ otherwise, for all P ∈ Predn.
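For instance (a toy example of ours), take the S-structure A with carrier A = {0, 1}, a unary function f with fA(m) = m, and a unary predicate P with PA = {1}. Definition 3.2.1 yields the ML model M with M = {0, 1}, fM(0) = {0}, fM(1) = {1}, PM(1) = {0, 1} = M, and PM(0) = ∅.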


Given an FOL signature S, the idea is to construct a theory ΓS


of which every model corresponds to an FOL structure (and vice
versa). The one-to-one correspondence is also why we choose PM
in Definition 3.2.1 this way. How do we construct ΓS ? We want to
force ML semantics to behave as FOL semantics. Recall that an ML
symbol σ ∈ Σ has relational semantics (Section 3.1). If we assume that
(Definedness) is at our disposal, we can easily enforce a symbol σ to
be functional (a term) with the following axiom [27]:

(Function) ∃ y . σ ( x1 , . . . , x n ) = y

For the axiom (Function) we can show a lemma that confirms our
intuition:

Lemma 3.2.1 ([27]). Let M be a Σ-model and ϕ ∈ PatternΣ be a pattern such that y ∉ FV(ϕ). Then M |=ML ∃y. ϕ = y iff for every M-valuation ρ there exists exactly one element m ∈ M such that ρM(ϕ) = {m}. (Now our function axiom finally works.)

To embed FOL syntax in ML, we define a matching logic signature (Var, ΣS) where ΣS = {Func0} ∪ {Σi | i ∈ N+, Σi = Funci ∪ Predi}. Notice that trivially every FOL S-formula is a ΣS-pattern. We can define first-order logic in the signature S as an ML (ΣS ∪ {⌈·⌉})-theory ΓS containing the (Function) axioms for all f ∈ Func and axioms that enforce predicate symbols P ∈ Pred to be ΓS-predicates, i.e.,

ΓS = {⌈x⌉}
   ∪ {∃y. f(x1, . . . , xn) = y | f ∈ Funcn}n∈N
   ∪ {P(x1, . . . , xn) = ⊥ ∨ P(x1, . . . , xn) = ⊤ | P ∈ Predn}n∈N+.
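For a concrete (hypothetical) FOL signature S with Func = {zero/0, succ/1} and Pred = {lt/2}, the construction yields

ΓS = { ⌈x⌉,
       ∃y. zero = y,
       ∃y. succ(x1) = y,
       lt(x1, x2) = ⊥ ∨ lt(x1, x2) = ⊤ }.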

Note that we w.l.o.g. assume that ⌈·⌉ ∉ Func ∪ Pred. The following
lemma shows that the axioms ∃y. f ( x1 , . . . , xn ) = y suffice to enforce
that all terms are singletons in a model of ΓS , not just simple function
applications:

Lemma 3.2.2 (Singleton terms [27]). Let M be a Σ-model such that M |=ML ⌈x⌉ and M |=ML ∃y. f(x1, . . . , xn) = y for all f ∈ Funcn and all n ∈ N. For every FOL term t in the signature (Var, Func, Pred) we have that y ∉ FV(t) implies M |=ML ∃y. t = y.

More importantly, we can further show that any FOL S-formula ϕ is


a ΓS -predicate:

Lemma 3.2.3 (ΓS -predicates). Let S = (Var, Func, Pred) be an FOL


signature. Then every FOL S-formula ϕ is a ΓS -predicate.

Proof. By structural induction on S-formula ϕ. The only non-trivial


case is the base case.

• ϕ ≡ P(t1 , . . . , tn ): Let M be any model of ΓS and ρ be any


M-valuation. By Lemma 3.2.2 M |=ML ∃y. ti = y for all 1 ≤
i ≤ n. But then by definition ρM (∃y. ti = y) = M, which is by
Lemma 3.2.1 iff there exists mi ∈ M such that ρM (ti ) = {mi } for
all 1 ≤ i ≤ n.
We want ρM ( P(t1 , . . . , tn )) = ∅ or ρM ( P(t1 , . . . , tn )) = M. Con-
sider some M-valuation ρ x such that

ρ x ( x1 ) = m1 , . . . , ρ x ( x n ) = m n .

Then we have

ρM(P(t1, . . . , tn)) = PM(ρM(t1), . . . , ρM(tn))
                     = PM(ρxM(x1), . . . , ρxM(xn))
                     = ρxM(P(x1, . . . , xn)).

But ρxM(P(x1, . . . , xn)) = ∅ or ρxM(P(x1, . . . , xn)) = M by construction. Because ρM(P(t1, . . . , tn)) = ρxM(P(x1, . . . , xn)), we have what we wanted.

• Step. By IH and Proposition 2.4.2.

We can further argue that for S-formulas, ΓS exactly captures stan-


dard FOL semantics |=FOL due to Tarski [30]. This can be proved by
showing that for every model of ΓS , there is some corresponding FOL
S-structure that behaves the same on S-formulas:

Definition 3.2.2. Let S be an FOL signature and M be a model of ΓS .


We define a function M ↦ A, where A is an S-structure defined as
follows:

• A = M,

• f A (m1 , . . . , mn ) = m where f M (m1 , . . . , mn ) = {m}.

• (m1, . . . , mn) ∈ PA if PM(m1, . . . , mn) = M, and (m1, . . . , mn) ∉ PA if PM(m1, . . . , mn) = ∅, where P ≠ ⌈·⌉.

Note that the function M 7→ A is well-defined because of Lemma 3.2.2


and Lemma 3.2.3. Simply choose a valuation ρ that assigns ρ( xi ) =
mi ; then f ( x1 , . . . , xn ) must match exactly one element, i.e., there exists
m ∈ M such that ρM ( f ( x1 , . . . , xn )) = {m} = f M (m1 , . . . , mn ). There
can only be one such m because ρM ( f ( x1 , . . . , xn )) is a single, uniquely
determined set. Analogously for the predicate symbols (note that always
M 6= ∅). We immediately notice that the function M 7→ A from
Definition 3.2.2 maps ML models to FOL structures that behave the
same on FOL formulas:

Theorem 3.2.1 (Isomorphism with FOL). Let S be an FOL signature.


There exists a bijection h : {M | M |=ML ΓS } → {A | A |=FOL ∅} such
that
• for every FOL S-term t and valuation v : Var → A it holds that
{vh(M) (t)} = ρM (t) for the M-valuation ρ := v,
• for every FOL S-formula ϕ holds h(M) |=FOL ϕ iff M |=ML ϕ.
Proof. Let us consider the function
h : {M | M |=ML ΓS } → {A | A |=FOL ∅}
given by Definition 3.2.2 and prove h is a bijection. Of course, h is
injective by construction. For surjectivity, assume for a contradiction
that there exists some FOL structure A that is not in the image of h. Use
Definition 3.2.1 to construct an ML model M from A and extend M with
the (fresh) symbol d·e such that d·eM (m) = M for all m ∈ M. Obviously
M |=ML ΓS and h(M) = A, a contradiction.
Consider any pair (M, h(M)) and set A = h(M). Let us now prove
the first requirement on h by structural induction over S-terms.
• t ≡ x: {vA (t)} = {v( x )} = {ρ( x )} = ρM ( x ).

• t ≡ f (t1 , . . . , tn ): by IH {vA (ti )} = ρM (ti ) for every 1 ≤ i ≤ n.


Thus
{vA ( f (t1 , . . . , tn ))} = { f A (vA (t1 ), . . . , vA (tn ))}
= f M (vA (t1 ), . . . , vA (tn ))            (definition of h)
= f M ({vA (t1 )}, . . . , {vA (tn )})
= f M (ρM (t1 ), . . . , ρM (tn ))            (IH)
= ρM ( f (t1 , . . . , tn ))

Let us now prove the second requirement on h. We prove a stronger


claim by structural induction on every S-formula ϕ that for every FOL
interpretation (A, v) we have
(A, v) |=FOL ϕ iff ρM ( ϕ) = M for ρ := v.
The only non-trivial cases are the base case and negation.

• ϕ ≡ P ( t1 , . . . , t n ).
(A, v) |=FOL P(t1 , . . . , tn ) iff (vA (t1 ), . . . , vA (tn )) ∈ PA
iff PM (vA (t1 ), . . . , vA (tn )) = M         (definition of h)
iff PM ({vA (t1 )}, . . . , {vA (tn )}) = M
iff PM (ρM (t1 ), . . . , ρM (tn )) = M        (1)
iff ρM ( P(t1 , . . . , tn )) = M
where (1) follows from the first requirement on h.

• ϕ ≡ ¬ψ. Then (A, v) |=FOL ¬ψ iff (A, v) 6|=FOL ψ iff ρM (ψ) 6= M


for ρ := v. Consider that M |=ML ΓS , thus ψ is an M-predicate.
But then this is iff ρM (ψ) = ∅ iff ρM (¬ψ) = M.

• ϕ ≡ ϕ1 ∧ ϕ2 . Then (A, v) |=FOL ϕ1 ∧ ϕ2 iff (A, v) |=FOL ϕ1 and
(A, v) |=FOL ϕ2 iff ρM ( ϕ1 ) = M and ρM ( ϕ2 ) = M iff ρM ( ϕ1 ∧
ϕ2 ) = M.

• ϕ ≡ ∃ x. ψ. Then (A, v) |=FOL ∃ x. ψ iff (A, v[m/x ]) |=FOL ψ
for some m ∈ M iff ρ[m/x ]M (ψ) = M for some m ∈ M iff
ρM (∃ x. ψ) = ⋃m∈ M ρ[m/x ]M (ψ) = M; the last step uses that ψ
is a ΓS -predicate (Lemma 3.2.3), so every ρ[m/x ]M (ψ) is either
∅ or M. Note that we can use the IH here because the statement
is proved for all FOL interpretations (A, v).

This yields what we set out to prove: A |=FOL ϕ iff (A, v) |=FOL ϕ
for every valuation v : Var → A, which by the claim just proved is iff
ρM ( ϕ) = M for every M-valuation ρ : Var → M, iff M |=ML ϕ. The
middle equivalence also uses the fact that obviously {v | v : Var →
A} = {ρ | ρ : Var → M} because A = M by construction.
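To make the correspondence tangible, here is a minimal executable sketch (our own
illustration, not part of the thesis): a hypothetical two-element model satisfying ΓS
for a signature with one unary function symbol f and one unary predicate symbol P
(the definedness symbol is interpreted as the full carrier and omitted below), together
with the FOL structure produced by Definition 3.2.2, checked to agree on the atomic
formula P( f ( x )).

import itertools

# Carrier of the hypothetical ML model M.
M = {0, 1}

def f_M(m):                 # functional symbol: every argument yields a singleton
    return {(m + 1) % 2}

def P_M(m):                 # predicate symbol: always the empty set or all of M
    return set(M) if m == 0 else set()

def ext(sym, arg_sets):     # pointwise extension of a symbol to sets of elements
    out = set()
    for args in itertools.product(*arg_sets):
        out |= sym(*args)
    return out

# The FOL structure A given by the mapping M |-> A of Definition 3.2.2.
f_A = {m: next(iter(f_M(m))) for m in M}     # the unique element of f_M(m)
P_A = {m for m in M if P_M(m) == M}          # elements on which P_M is total

# Agreement on P(f(x)) for every value of the variable x (cf. Theorem 3.2.1).
for x in M:
    ml_value = ext(P_M, [ext(f_M, [{x}])])   # evaluates to the empty set or M
    fol_value = f_A[x] in P_A                # ordinary FOL truth value
    assert (ml_value == M) == fol_value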

Of course, ΓS is satisfiable given any FOL signature S. We prove


a stronger result that ΓS is satisfiable even if it is in union with a
satisfiable FOL theory.

Lemma 3.2.4. Let S = (Var, Func, Pred) be an FOL signature. Then


for every FOL S-theory Φ we have that Φ is satisfiable in an FOL
structure iff Φ ∪ ΓS is satisfiable in an ML model.

Proof.
(⇒) Let A |=FOL Φ. Take the corresponding ML model M from
Theorem 3.2.1, which yields that M |=ML Φ. We also have M |=ML ΓS
by construction, i.e., M |=ML Φ ∪ ΓS .
(⇐) Let M |=ML Φ ∪ ΓS . Take the corresponding FOL structure A
from Theorem 3.2.1. Then A |=FOL Φ. 

Now we are ready to prove the main result of this section.

Theorem 3.2.2. Let S be an FOL signature and Φ some S-theory. Then


for every S-formula ϕ we have Φ |=FOL ϕ iff Φ ∪ ΓS |=ML ϕ.

Proof. It suffices to prove Φ |=FOL ∀ ϕ iff Φ ∪ ΓS |=ML ∀ ϕ by Lemma 2.4.1.


Let ⊥FOL be some FOL contradiction. Because we know that ϕ is a
ΓS -predicate, by Corollary 2.4.2 it further suffices to prove

Φ ∪ {¬∀ ϕ} |=FOL ⊥FOL iff ΓS ∪ Φ ∪ {¬∀ ϕ} |=ML ⊥.

This is immediate from Lemma 3.2.4 because Φ ∪ {¬∀ ϕ} is an FOL
S-theory: Φ ∪ {¬∀ ϕ} |=FOL ⊥FOL iff Φ ∪ {¬∀ ϕ} is not satisfiable iff
ΓS ∪ (Φ ∪ {¬∀ ϕ}) is not satisfiable iff ΓS ∪ Φ ∪ {¬∀ ϕ} |=ML ⊥.
4 TWO PROOF SYSTEMS FOR MATCHING LOGIC
We have defined in Section 2.2 a notion of truth and now we would
like a tool that can mechanically verify truth, i.e., a proof system for
matching logic. Chapter 2 already pointed out that ML fulfills intuitive
preconditions for an elegant proof system. Propositional tautologies
are valid (Proposition 2.3.1) and semantics of ϕ1 → ϕ2 agree with
modus ponens (Proposition 2.4.4). That means a proof system of
matching logic can (and should) include a complete proof system for
propositional logic. Since ML is a variant of FOL (Chapter 3), why not
take a proof system of FOL as well?
This chapter covers two existing proof systems for ML. Section 4.1
covers a system called System P [27], which is based on the intuition
of taking a complete FOL proof system and turning it into a proof
system for matching logic. We shall see that System P stumbles when
dealing with term substitutions (∀ x. ϕ) → ϕ[t/x ]. The workaround
is painful; P directly depends on (Definedness) and the particular
symbol d·e, which means we cannot verify truth using P in all theories.
Section 4.2 covers a system called System H [11], which is a successful
attempt at a proof system that does not depend on any fixed formal
symbols. What is more, H has numerous interesting properties that
make it a very strong and flexible system for matching logic. Most
importantly, H is well-suited for intended applications of ML.

4.1 system p

A Hilbert-style proof system for ML has been known since the intro-
duction of ML [27, p. 53]. Because ML is very close to FOL, the idea
was to take a complete proof system for FOL and make it work for
matching logic. However, there is a catch; we cannot use the following
axiom for term substitutions:

(∀ x. ϕ) → ϕ[t/x ].

Term substitutions are problematic in matching logic exactly because


terms are not syntactically distinguished from formulas. We cannot
allow (∀ x. ϕ) → ϕ[ψ/x ] for any pattern ψ because ψ can match any
number of elements, not just one (as variables do). Take the coun-
terexample pointed out by [27, p. 52], where we pick the pattern
ϕ ≡ (∃y. x = y). Consider ψ ≡ ¬ x, then for every Σ-model M with
| M| 6= 2 we have

M 6|= (∀ x. ∃y. x = y) → (∃y. (¬ x ) = y).
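The counterexample is easy to check mechanically. Below is a minimal sketch (our
own, not from [27]) that evaluates both sides on a three-element carrier, with equality
unfolded as ϕ1 = ϕ2 ≡ ¬d¬( ϕ1 ↔ ϕ2 )e and the definedness symbol interpreted in
the standard way.

M = {0, 1, 2}

def defined(S):                      # |S| evaluates to M iff S is nonempty
    return set(M) if S else set()

def eq(S1, S2):                      # phi1 = phi2, via the symmetric difference
    return set(M) - defined(S1 ^ S2) # (which plays the role of not(phi1 <-> phi2))

def implies(S1, S2):                 # phi1 -> phi2 as (not phi1) \/ phi2
    return (set(M) - S1) | S2

# Left-hand side: forall x. exists y. x = y (evaluates to M).
lhs = set(M)
for x in M:
    lhs &= set().union(*(eq({x}, {y}) for y in M))

# Right-hand side: exists y. (not x) = y, under the valuation x := 0.
rhs = set().union(*(eq(M - {0}, {y}) for y in M))

print(implies(lhs, rhs) == M)        # False: the implication is not valid in M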


(PT) ϕ, if ϕ is a propositional tautology over patterns
(MP) from ϕ1 and ϕ1 → ϕ2 , infer ϕ2
(∀) (∀ x. ϕ1 → ϕ2 ) → ( ϕ1 → ∀ x. ϕ2 ), if x ∉ FV( ϕ1 )
(FunSub) ((∃y. ψ = y) ∧ ∀ x. ϕ) → ϕ[ψ/x ], if y ∉ FV(ψ)
(Gen) from ϕ, infer ∀ x. ϕ
(EqIntro) ϕ = ϕ
(EqElim) ( ϕ1 = ϕ2 ∧ ψ[ ϕ1 /x ]) → ψ[ ϕ2 /x ]
――――――――――――――――――――――――――――――――――――
(MemIntro) from ϕ, infer ∀ x. x ∈ ϕ, if x ∉ FV( ϕ)
(MemElim) from ∀ x. x ∈ ϕ, infer ϕ, if x ∉ FV( ϕ)
(MemVar) ( x ∈ y) = ( x = y)
(Mem¬ ) ( x ∈ ¬ ϕ) = ¬( x ∈ ϕ)
(Mem∧ ) ( x ∈ ϕ1 ∧ ϕ2 ) = ( x ∈ ϕ1 ) ∧ ( x ∈ ϕ2 )
(Mem∃ ) ( x ∈ ∃y. ϕ) = ∃y. ( x ∈ ϕ), where x and y are distinct
(MemSym) ( x ∈ Cσ [ ϕ]) = ∃y. (y ∈ ϕ) ∧ ( x ∈ Cσ [y]), if y ∉ FV(Cσ [ ϕ])

Figure 4.1: System P

Intuitively, the workaround is to allow substituting ψ for which we


can prove {d x e} |= ∃y. ψ = y, i.e., where ψ behaves as a functional pat-
tern. This type of substitution is called functional substitution (FunSub)
and can be shown to be valid, i.e.,

{d x e} |= ((∃y. ψ = y) ∧ ∀ x. ϕ) → ϕ[ψ/x ] if y ∈
/ FV(ψ).

Building upon this idea, [27] presents System P that is sound and
complete for theories containing (Definedness). This system can be
divided into two groups of rules, as outlined by the separator in
Figure 4.1. The first group is a proof system for predicate logic1 with
equality with a workaround for term substitutions. The second group
consists of technical rules for “∈” that were needed to show completeness
using a translation “backwards” from the translation in Section 3.1. The
symbol Cσ occurring in the last rule will be explained in Section 4.2.
For the rest of the details, we recommend interested readers to see [27,
p. 54].
System P certainly serves its purpose and is educational by showing
how matching logic proofs relate to FOL proofs; the proof of complete-
ness goes by reduction to a complete FOL proof system. Unfortunately,
the inspiration for P is also the very reason why P makes sense only

1 Predicate logic is first-order logic without functional symbols.



for theories containing (Definedness).2 System P directly contains


the predefined constructs we derived with d·e in Section 2.5.

Theorem 4.1.1 ([27]). Let Γ be a Σ-theory containing (Definedness).


Then for every ϕ ∈ PatternΣ we have Γ |= ϕ iff Γ `P ϕ. 

Observe that symbols such as “=” or “∈” are mere syntactic sugar
over the fixed symbol d·e, e.g., ϕ1 = ϕ2 ≡ ¬d¬( ϕ1 ↔ ϕ2 )e. What if Γ
uses the symbol d·e for something other than (Definedness), e.g., for
¬∀ x. d x e? System P does not make sense for these theories.
We defined the so-called equality extensions in Definition 2.6.1, which
add a definedness axiom with a fresh symbol that can be different
from d·e. Why cannot System P use “=” that is the same sugar,

ϕ1 = ϕ2 ≡ ¬d¬( ϕ1 ↔ ϕ2 )e,

only built over such a fresh definedness symbol in place of d·e?

We can define a class of proof systems that are the same as P , except
they use any definedness symbol we choose:

Definition 4.1.1 (System PΣ ). Let Σ be some symbol set. Then by PΣ


we mean System P where “=” and “∈” use the fresh definedness
symbol not in Σ that we assume to be determined. 

Whatever fresh definedness symbol we agree to use for the notation
Γ= given that Γ is a Σ-theory, we can take the same fresh definedness
symbol, plug it into P , call it PΣ , and from Theorem 4.1.1 we trivially
know that PΣ will be sound and complete w.r.t. the theory Γ= ([27]
does not consider equality extensions, but this result follows trivially
from the original results):
original results.
we have Γ= |= ϕ iff Γ= `PΣ ϕ.

Proof. We know that there is a definedness axiom in Γ= , w.l.o.g. Γ= \


Γ = {d x e}. This axiom uses the same (fresh) symbol as the one
used in PΣ by construction. Thus the proof is analogous to that of
Theorem 4.1.1. 

If we take P(·) as a family of proof systems and agree on some


determined fresh symbols for each Σ, we could intuitively say that
P(·) are sound and complete w.r.t. equality extensions Γ= , even if Γ
uses the “original” symbol d·e for something else than (Definedness).
Each System PΣ makes sense for the equality extension of a theory in
the signature Σ.
However, we are starting to see that the situation is not ideal. We do
not want to redefine P if we want to use d·e for something else. Even
if we forbid d·e a different meaning, Theorem 4.1.1 does not mean
that P is complete. Not all theories contain (Definedness). Even for
a theory as simple as {∀ x. x }, we do not know whether {∀ x. x } |= ϕ
implies {∀ x. x } ` ϕ for all ϕ (we solve this instance in Section 5.1).

2 System P actually makes sense for every theory Γ such that Γ ` d x e.



One could suggest that we simply add d·e to the ML syntax with
the expected semantics and add d x e as an axiom to P . However, this
is contrary to the principles of ML. ML tries to build upon as small a
core as possible because it is meant to be flexible and simple so that it
is trustworthy.
We would naturally like to find a complete proof system for all
theories or find a fundamental reason why such a system cannot
exist. A counterexample would be interesting as matching logic can be
easily embedded in FOL (Chapter 3) and for FOL we have a complete
proof system. Apart from practical motivations, there is also a strictly
theoretical one: the connection of ML to modal logic (see, e.g., [11]). If
we find a proof system for ML without (Definedness), we might find
an alternative proof system for several modal logics, which could be
defined as ML theories. This would be another strong argument for
ML as a logic unifying other logics.

4.2 system h

We have seen that System P (Section 4.1) makes sense only for theories
containing (Definedness). Moreover, P is not well-suited for the
intentions behind ML, which is discussed already in [27, pp. 53, 57].
The second group of rules in P are more technical than practical; they
axiomatize working with the membership constructs “∈”. Instead, we
would like to axiomatize something more fundamental for matching
logic and derive all of the technical rules as lemmas.

(PT) ϕ, if ϕ is a propositional tautology over patterns
(MP) from ϕ1 and ϕ1 → ϕ2 , infer ϕ2
――――――――――――――――――――――――――――――――――――
(∀) (∀ x. ϕ1 → ϕ2 ) → ( ϕ1 → ∀ x. ϕ2 ), if x ∉ FV( ϕ1 )
(Sub) (∀ x. ϕ) → ϕ[y/x ]
(Gen) from ϕ, infer ∀ x. ϕ
――――――――――――――――――――――――――――――――――――
(Propagation⊥ ) Cσ [⊥] → ⊥
(Propagation∨ ) Cσ [ ϕ1 ∨ ϕ2 ] → (Cσ [ ϕ1 ] ∨ Cσ [ ϕ2 ])
(Propagation∃ ) Cσ [∃ x. ϕ] → ∃ x. Cσ [ ϕ], if x ∉ FV(Cσ [∃ x. ϕ])
(Framing) from ϕ1 → ϕ2 , infer Cσ [ ϕ1 ] → Cσ [ ϕ2 ]
where Cσ is a single symbol context.
――――――――――――――――――――――――――――――――――――
(Ex) ∃ x. x
(Singleton) ¬(C1 [ x ∧ ϕ] ∧ C2 [ x ∧ ¬ ϕ])
where C1 , C2 are nested symbol contexts.

Figure 4.2: System H



[10, 11] introduces System H (Figure 4.2)3 ; a Hilbert-style proof


system for matching logic that does not depend on any formal symbols.
Throughout the thesis, we drop H in `H and write just ` instead. Let
Γ be a Σ-theory. Then Γ ` ϕ for ϕ ∈ PatternΣ means there exists
a Hilbert-style proof of ϕ in H with additional axioms from Γ. Of
course, formulas of proof rules in H are over the same signature as Γ,
i.e., Σ. In particular, System H does not contain any fixed predefined
symbols. Instead, it works with so-called contexts (the notation Cσ ).
Contexts are patterns with a distinguished free variable □ that
serves as a placeholder. It is called a context because the placeholder
can be substituted for any pattern without implicit α-renaming. For
example, if

C1 ≡ σ(>) ∧ ∀ x. □,

then we write C1 [ x ] to mean σ(>) ∧ ∀ x. x. Intuitively, a context C is
a symbol context iff the “path” to the free variable □ contains only
symbols and no logical connectives. The context

C2 ≡ σ1 (σ1 (>), x ∧ ¬y, σ2 (σ3 (□), λ)),

is a symbol context because the path σ1 , σ2 , σ3 to □ contains only
symbols. On the other hand, σ(□) ∧ λ is not a symbol context; the
path to □ is ∧, σ. Contexts are formally defined inductively in the next
definition.
definition.

Definition 4.2.1 (Context). A pattern C is called a context if C contains a


distinguished (occurring only once) free variable □. Then C [ ϕ] means
the result of replacing □ with ϕ without implicit α-renaming.4 Given
σ ∈ Σn , a single symbol context Cσ is a context of the form

Cσ ≡ σ( ϕ1 , . . . , ϕi−1 , □, ϕi+1 , . . . , ϕn )

where ϕ1 , . . . , ϕn are patterns not containing □. A nested symbol context
is defined inductively as follows.

• The variable □ is a nested symbol context.

• If Cσ is a single symbol context and C is a nested symbol context,


then Cσ [C ] is a nested symbol context.

A nested symbol context is sometimes simply called a symbol context.
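The definition suggests a simple mechanical test. The following minimal sketch (our
own encoding, not from [10, 11]) represents patterns as nested tuples and decides
whether a context is a nested symbol context by inspecting the path from the root to
the placeholder □.

# Encoding: symbol applications as ("sym", name, args...), connectives as
# ("and", a, b), ("not", a), ("exists", x, a); variables are strings and the
# placeholder (the hole) is the string "BOX".

def path_to_box(pat):
    """Return the list of node kinds on the path to BOX, or None if absent."""
    if pat == "BOX":
        return []
    if isinstance(pat, tuple):
        head = pat[0]
        for child in pat[1:]:
            sub = path_to_box(child)
            if sub is not None:
                return [head] + sub
    return None

def is_symbol_context(ctx):
    path = path_to_box(ctx)
    return path is not None and all(step == "sym" for step in path)

# C2 from the text: sigma1(sigma1(T), x /\ ~y, sigma2(sigma3(BOX), lambda))
C2 = ("sym", "sigma1", ("sym", "sigma1", "top"),
      ("and", "x", ("not", "y")),
      ("sym", "sigma2", ("sym", "sigma3", "BOX"), "lam"))
print(is_symbol_context(C2))                                       # True
print(is_symbol_context(("and", ("sym", "sigma", "BOX"), "lam")))  # False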




3 We work with the “∀-version” of H, which was actually used in [10].


4 Free variables in ϕ may be bound in C [ ϕ], which is different from capture-avoiding
substitution.

proof rules. System H is divided into four groups of rules as


shown by the separators (Figure 4.2).

(1) The first group is a proof system for propositional logic, which
can be included because of Proposition 2.3.1. Given a tautology of
propositional logic such as p ∨ ¬ p, (PT) says that replacing each
propositional variable pi with a pattern ϕi in this propositional
tautology yields an axiom. If we wish to avoid the meta-rule (PT)
that adds infinitely many rules, we can simply replace it with
any sound and complete system for propositional logic (a small
derivation illustrating (PT), (Gen), and (Sub) follows after this list).

(2) The second group is a first-order proof system. A notable dif-


ference is that (Sub) is weaker than term substitution in FOL,
as it only allows substituting variables for variables. Notice
that except for term substitution, all axiom schemes and proof
rules of a complete proof system for FOL (without equality) are
included in H.

(3) The third group allows us to do frame reasoning, which is moti-


vated by formal verification of data structures. We discuss these
rules in Section 4.2.1.

(4) The fourth group consists of technical axioms.
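To illustrate the first two groups concretely, here is a small derivation of our own
(assuming some unary symbol σ ∈ Σ and distinct variables x, y):

1. σ( x ) → σ( x ) ∨ ⊥ (PT), instance of p → p ∨ q
2. ∀ x. (σ( x ) → σ( x ) ∨ ⊥) 1. (Gen)
3. (∀ x. (σ( x ) → σ( x ) ∨ ⊥)) → (σ(y) → σ(y) ∨ ⊥) (Sub)
4. σ(y) → σ(y) ∨ ⊥ 2., 3. (MP)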

Even though symbol contexts are an interesting concept, they are


not special in the proof-theoretic sense. [10] gives a conventional proof
that H is sound. Unlike P , System H is sound for all theories even if
they do not contain (Definedness). This is important because it makes
sense to use it for any theory (unlike P ).

Theorem 4.2.1 (Soundness [10]). For every Σ-theory Γ and every pat-
tern ϕ ∈ PatternΣ we have that Γ ` ϕ implies Γ |= ϕ. 

closed patterns. Notice that (Sub) and (Gen) mean that we can
focus only on closed patterns in the next discussion without loss of
generality. Intuitively this is because we can always close a pattern
with the corresponding universal quantifiers and vice versa. This is
formally stated in the next theorem.

Proposition 4.2.1. Let Γ be a Σ-theory. Then for every ϕ ∈ PatternΣ


we have Γ ` ϕ iff ∀Γ ` ∀ ϕ.

Proof.
(⇒) Let Γ ` ϕ. By definition of ` there is a finite Σ-theory Γ0 ⊆ Γ
such that Γ0 ` ϕ. The following is the proof of ∀Γ ` ∀ ϕ.

(1) Introduce each axiom from ∀Γ0 ⊆ ∀Γ.

(2) For each introduced axiom ∀γ ∈ ∀Γ0 where γ ∈ Γ0 , repeatedly


apply (Sub) on ∀γ to get γ.

(3) Apply the proof of Γ0 ` ϕ.

(4) Finitely many times apply (Gen) to get ∀ ϕ.

(⇐) Symmetrically using (Gen) on free variables for each γ ∈ Γ. 

conditional completeness. Much like System P , System H


is complete w.r.t. theories containing (Definedness). This is because
(MP) and (Gen) are in both P and H while for every axiom scheme α
of P we have {d x e} ` α [10]. Of course, the situation is the same if we
consider P defined with a different (fresh) definedness symbol,
e.g., PΣ defined for equality extensions we assume determined for Σ.
That is why soundness and “conditional” completeness of H can be
expressed for any equality extension as in Proposition 4.2.2.

Theorem 4.2.2 ([10]). Let Γ be a Σ-theory containing (Definedness).


Then for every ϕ ∈ PatternΣ we have Γ |= ϕ iff Γ ` ϕ. 

Proposition 4.2.2. Let Γ be a Σ-theory. Then for every ϕ ∈ PatternΣ=


we have Γ= |= ϕ iff Γ= ` ϕ. 

For now it seems that H does not give us much in comparison with
P . On the contrary, let us go through several important properties
that System H provably enjoys. Besides getting rid of the symbol d·e
in our proof system, we have gained an easy way to reason in contexts
(Section 4.2.1, Section 4.2.2), an analogue of the deduction property
(Section 4.2.3), or even the so-called local completeness (Section 4.2.4).

4.2.1 Frame reasoning

H allows us to do the so-called frame reasoning. Intuitively said, frame


reasoning is reasoning inside structures from the “outside”. We do
local calculations independently of the used data structure, which
can then be “framed” into a data structure represented by a symbol
context [27, p. 16]. To illustrate this, one can consider a simple example
in a model of natural numbers such as5

(N, +, ∗) |= 5 ∗ 5 → 25.

Suppose we want to do the same reasoning in a data structure such


as a pair written as h·, ·i. For simplicity, let us assume that pairs exist
in (N, +, ∗). The above property that holds “locally” can be “framed”
into h·, ·i without any additional axioms, i.e.,

(N, +, ∗) |= h5 ∗ 5, 2i → h25, 2i.

5 Recall that the pattern intuitively says: every element that matches 5 ∗ 5 also matches
25.

We do not have to axiomatize this for every data structure separately.


This is because matching logic symbols (and symbol contexts) are
monotonic: semantically,
ρM (ψ) ⊆ ρM ( ϕ) implies ρM (Cσ [ψ]) ⊆ ρM (Cσ [ ϕ]),
and (Framing) lets us exploit this monotonicity in proofs.
Pairs here are only used for simplicity; patterns can express much
more complicated data structures such as heaps or maps [27]. The
point is that we can do this kind of reasoning in System H:
Lemma 4.2.1 (Sound Frame Reasoning [10]). Let Γ be a Σ-theory such
that σ ∈ Σn . If Γ ` ϕi → ϕi0 for all 1 ≤ i ≤ n, then we also have
Γ ` σ( ϕ1 , . . . , ϕn ) → σ( ϕ10 , . . . , ϕ0n ). 
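For instance (our own illustration, continuing the pair example with a binary symbol
h·, ·i ∈ Σ2 and the axiom 5 ∗ 5 → 25 assumed to be in Γ):

1. 5 ∗ 5 → 25 axiom in Γ
2. 2 → 2 (PT)
3. h5 ∗ 5, 2i → h25, 2i 1., 2. Lemma 4.2.1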
Notice that the rules (Framing), (Propagation⊥ ), (Propagation∃ ),
and finally the rule (Propagation∨ ) are defined only for single-symbol
contexts. The following lemmas say that we can generalize frame
reasoning rules to nested symbol contexts.
Lemma 4.2.2 (Framing through Symbol Contexts [10]). For any nested
symbol context C we have that Γ ` ϕ → ϕ0 implies Γ ` C [ ϕ] →
C [ ϕ 0 ]. 
Lemma 4.2.3 (Propagation through Symbol Contexts [10]). Let Σ be a
signature and ϕ, ϕ1 , ϕ2 ∈ PatternΣ . For any nested symbol context C
we have
(1) ` C [⊥] ↔ ⊥.
(2) ` C [ ϕ1 ∨ ϕ2 ] ↔ (C [ ϕ1 ] ∨ C [ ϕ2 ]).
(3) ` C [∃ x. ϕ] ↔ ∃ x. C [ ϕ].

Corollary 4.2.1 ([10]). Let Γ be any Σ-theory and ϕ, ϕ1 , ϕ2 ∈ PatternΣ .
Then the following propositions hold:
(1) Γ ` C [⊥] iff Γ ` ⊥,
(2) Γ ` C [ ϕ1 ∨ ϕ2 ] iff Γ ` C [ ϕ1 ] ∨ C [ ϕ2 ],
(3) Γ ` C [∃ x. ϕ] iff Γ ` ∃ x. C [ ϕ].

There is one last property left to be covered when it comes to symbol
contexts. In modal logic, there is a concept of the so-called duals. For
example, the well-known diamond symbol ♦ is dual to the modal
logic symbol □ because

□ ϕ ≡ ¬♦¬ ϕ.
If we have σ( ϕ1 , . . . , ϕn ), then we call ¬σ(¬ ϕ1 , . . . , ¬ ϕn ) its dual. In
this sense, the totality construct b ϕc ≡ ¬d¬ ϕe is dual to d ϕe. We can
show that ML duals have much in common with modal logic duals,
for example [10, p. 3]:

Lemma 4.2.4 ([10]). Let Γ be a Σ-theory and Cσ be any symbol context


in Σ. Then Γ ` ϕ implies Γ ` ¬Cσ [¬ ϕ]. 

Note that ` ϕ implies ` ¬♦(¬ ϕ) is a special case of Lemma 4.2.4


known as the normal modal logic generalization rule [4, p. 33].

4.2.2 Equivalence as a congruence

In the previous section, we saw how rules with single-symbol contexts


generalize to nested symbol contexts. Here we show that ↔ can be
seen as a congruence. One can take any context C (not just a symbol
context) and replace a pattern it contains with any other equivalent
pattern while preserving validity.

Lemma 4.2.5 ([10]). Let C be any context (not just a nested symbol
context). Then Γ ` ϕ1 ↔ ϕ2 implies Γ ` C [ ϕ1 ] ↔ C [ ϕ2 ]. 
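For instance (our own example): taking the context C ≡ ∀ x. □ ∧ λ, which is not a
symbol context, Lemma 4.2.5 turns Γ ` ϕ1 ↔ ϕ2 into Γ ` (∀ x. ϕ1 ∧ λ) ↔ (∀ x. ϕ2 ∧ λ).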

This answers the hanging question about equality from the end of
Section 2.5. Lemma 4.2.5 is stronger than the axiom
( ⋀1≤i≤n ϕi = ϕi′ ) → σ( ϕ1 , . . . , ϕn ) = σ( ϕ1′ , . . . , ϕn′ )

because by soundness of H we can replace equivalent patterns in any


contexts, not just symbol contexts. This also applies to equal patterns
because we have {d x e} ` ϕ1 ↔ ϕ2 iff ρM ( ϕ1 ) = ρM ( ϕ2 ) for all models
M with M |= d x e and M-valuations ρ iff {d x e} ` ϕ1 = ϕ2 .
Lemma 4.2.5 also allows us to unwrap Γ-predicates from totality
because totality (and definedness) is an identity on Γ-predicates:

Proposition 4.2.3 (Totality canceling). Let ψ be a Γ-predicate. Then (1)


Γ= ` ψ ↔ bψc and (2) Γ= ` C [ψ] ↔ C [bψc] for any context C.

Proof. (1) holds because totality is an identity on Γ-predicates (Propo-


sition 2.5.3), i.e., Γ= |= ψ ↔ bψc. (2) follows from (1) and Lemma 4.2.5.


4.2.3 Deduction property

The deduction property is one of the major properties of some Hilbert-


style proof systems. Let D be any Hilbert-style proof system with
provability denoted `D (not necessarily a system for ML). Then we say
D has the deduction property if for every closed formula ψ,

Γ ∪ {ψ} `D ϕ iff Γ `D ψ → ϕ.

The deduction property is very useful and tends to make logical


arguments shorter. However, it is important to note that the deduction
property has various variants, especially in modal logic that raises

problems for deduction [3, p. 94]. This variant we just defined is


usually considered in FOL. It is natural to ask if the deduction property
holds for H. For some premises ψ, we have Γ ∪ {ψ} ` ϕ iff Γ ` ψ → ϕ:

Example 4.2.1 (Explosion principle). It is easy to see that for any


pattern ϕ we have Γ ∪ {⊥} ` ϕ iff Γ ` ⊥ → ϕ.

• (⇒) Let Γ ∪ {⊥} ` ϕ. Because ` ⊥ → ϕ is an instance of (PT),


trivially Γ ` ⊥ → ϕ.

• (⇐) Let Γ ` ⊥ → ϕ. The proof of Γ ∪ {⊥} ` ϕ is as follows: ⊥,


⊥ → ϕ (PT), apply (MP) on the first two.

Notice that for Γ := ∅ we get the explosion principle: for all ϕ we


have {⊥} ` ϕ. 

However, recall that symbols are interpreted in matching logic


as maps to sets of model elements. The deduction property cannot
hold for any sound and reasonably strong6 proof system for match-
ing logic such as H. There is an easy counterexample even without
(Definedness):

Proposition 4.2.4. The deduction property does not hold for H.

Proof. By a counterexample. Here is the proof for {∃ x. σ( x ), λ} ` σ(λ):

1. λ
2. λ → (∃ x. x → λ) (PT)
3. ∃ x. x → λ 1., 2. (MP)
4. σ(∃ x. x ) → σ(λ) 3. (Framing)
5. σ(∃ x. x ) ↔ ∃ x. σ( x ) Lemma 4.2.3
6. ∃ x. σ( x )
7. (∃ x. σ( x )) → σ(λ) 4., 5. Lemma 4.2.5
8. σ(λ) 6., 7. (MP)

Now we show that ∃ x. σ( x ) 6` λ → σ(λ). Assume for a contradiction


∃ x. σ( x ) ` λ → σ(λ). Soundness of H implies ∃ x. σ( x ) |= ¬λ ∨ σ(λ).
However, consider the model

M : M := {0, 1}, λM := {1}, σM (0) := {0, 1}, σM (1) := ∅.

We have M |= ∃ x. σ( x ) but M 6|= ¬λ ∨ σ(λ), contradiction. 
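The countermodel can be verified mechanically; here is a minimal sketch (our own,
not part of the thesis) that evaluates both patterns in the model M above, extending
σ pointwise to sets of elements.

M = {0, 1}
lam = {1}
sigma = {0: {0, 1}, 1: set()}

def sigma_ext(S):                        # pointwise extension of sigma to sets
    out = set()
    for m in S:
        out |= sigma[m]
    return out

# exists x. sigma(x): the union over all elements substituted for x.
ex = set()
for m in M:
    ex |= sigma_ext({m})
print(ex == M)                           # True:  M |= exists x. sigma(x)

# lam -> sigma(lam), i.e. (not lam) \/ sigma(lam).
print(((M - lam) | sigma_ext(lam)) == M) # False: M does not satisfy it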

This shows that the deduction property is semantically too strong


to hold in ML. We could already see hints in Proposition 2.4.4, which
says M |= ψ → ϕ iff for any M-valuation ρ we have ρM (ψ) ⊆ ρM ( ϕ).
Such a behavior is different from how implication normally behaves
6 The deduction property holds for a complete system of propositional logic, which is
a sound proof system for matching logic.

in logics such as FOL. Just consider models where ψ is not valid: we


have already discussed that there is no guarantee ψ → ϕ is valid in
those models.
To have any hope for an analogue of the deduction property from
FOL, we have to define a pattern that semantically captures the idea
M |= ψ implies M |= ϕ. This is only possible if we can enforce that the
premise of the implication is either valid in M or does not match any
element in M. Observe that this can precisely be done by putting ψ
into the totality context. We already know that bψc is an M-predicate
for any model M with M |= d x e. What is more, we can easily show
that bψc → ϕ does what we would expect from an implication (the
pattern bψc → ϕ exactly corresponds to our intuition about implication):
Proposition 4.2.5 ([27]). Let M be a Σ-model such that M |= d x e. Then
M |= bψc → ϕ iff M |= ψ implies M |= ϕ. 
Using the construction we have just showed in Proposition 4.2.5, we
can have a weaker version of the deduction property [11, p. 4]:
Theorem 4.2.3 (“Weak” deduction property of H [11]). Let Γ be a
Σ-theory containing (Definedness). For every closed Σ-pattern ψ we
have Γ ∪ {ψ} ` ϕ iff Γ ` bψc → ϕ. 
Proposition 4.2.6. Let Γ be a Σ-theory and d·e be the fresh symbol in
Γ= . For every closed ψ we have Γ= ∪ {ψ} ` ϕ iff Γ= ` bψc → ϕ.
(Equality extensions are not considered by [11], but again, Proposition 4.2.6
trivially follows from their results.)

The deduction property of H tells us something fundamental about
matching logic. That is, closed Γ-predicates should exactly be those
patterns for which the conventional deduction property (without
premises in totality contexts) is sound. To prove this, let us first for-
premises in totality contexts) is sound. To prove this, let us first for-
mally define this class of patterns.
Definition 4.2.2 (Γ-deduct). Let Γ be a Σ-theory and ψ ∈ PatternΣ .
Then ψ is called a Γ-deduct if ψ is closed and for all ϕ ∈ PatternΣ
holds Γ ∪ {ψ} ` ϕ iff Γ ` ψ → ϕ. 
For example, ⊥ is a Γ-deduct for any theory Γ (Example 4.2.1).
In theories Γ containing (Definedness), closed Γ-predicates exactly
correspond to Γ-deducts (and vice versa):
Theorem 4.2.4. Let Γ be a Σ-theory containing (Definedness) and
ψ ∈ PatternΣ some closed pattern. Then ψ is a Γ-predicate iff ψ is a
Γ-deduct.

Proof. Let ϕ ∈ PatternΣ .


(⇒) Let ψ be a Γ-predicate. We want to prove ψ is a Γ-deduct. Let
Γ ∪ {ψ} ` ϕ. By weak deduction property we have Γ ` bψc → ϕ.
Because ψ is a Γ-predicate, by totality canceling (Proposition 4.2.3)
Γ ` ψ → ϕ. The other implication Γ ` ψ → ϕ implies Γ ∪ {ψ} ` ϕ is
trivial by (MP).
(⇐) By contraposition. If ψ is not a Γ-predicate, there is some model
M |= Γ and M-valuation ρ such that ∅ ⊂ ρM (ψ) ⊂ M. We show that

ψ is not a Γ-deduct by counterexample. Obviously Γ ∪ {ψ} ` bψc


and M witnesses that Γ 6|= ψ → bψc, hence Γ 6` ψ → bψc by soundness
of H (Theorem 4.2.1).

Corollary 4.2.2. Let Γ be a Σ-theory and ψ ∈ PatternΣ= some closed


pattern. Then ψ is a Γ= -predicate iff ψ is a Γ= -deduct.

Proof. Γ= contains a definedness axiom with some fresh symbol. Re-


peat the proof of Theorem 4.2.4 with this symbol. 

In Chapter 5 we shall prove that completeness of H implies that


Γ-predicates are Γ-deducts even if Γ does not contain (Definedness).

4.2.4 Local completeness

Besides the fact that H is complete w.r.t. theories containing the axiom
(Definedness), H is complete for empty theories (Theorem 4.2.5). The
proof draws inspiration from [5] and is rather technical, combining
techniques from both hybrid modal logic and first-order logic [10,
p. 4]. We provide the result here only for reference.

Theorem 4.2.5 (Local completeness [10]). Let ϕ be a pattern. Then


∅ |= ϕ implies ∅ ` ϕ. 
Notice that completeness of H w.r.t. the empty theory holds with-
out any further conditions. In particular, we do not need the axiom
(Definedness), of course. If we were considering FOL instead of ML,
local completeness would be enough to prove completeness! In FOL,
local completeness is as strong as (global) completeness:

Theorem 4.2.6. Let D be a sound Hilbert-style proof system for FOL


with modus ponens. If for all valid FOL formulas ϕ we have ∅ `D ϕ,
then D is a complete proof system for FOL (Φ |= ϕ implies Φ ` ϕ).

Proof. Let ∅ `D ϕ for all formulas ϕ such that ∅ |=FOL ϕ. We want


to show Φ |= ϕ implies Φ ` ϕ for every FOL theory Φ. It is enough
to show that every consistent theory Φ has a model (Henkin’s re-
duction [20]), i.e., Φ 6`D ⊥FOL implies Φ 6|=FOL ⊥FOL for some FOL
contradiction ⊥FOL .
Let Φ be an FOL theory such that Φ 6`D ⊥FOL . The compactness
property of FOL yields Φ 6|=FOL ⊥FOL iff Φfin 6|=FOL ⊥FOL for every
finite subset Φfin ⊆ Φ. We can w.l.o.g. assume that Φ contains only
closed formulas. Thus by FOL semantics this is iff

∅ 6|=FOL ⋀ Φfin → ⊥FOL for every finite subset Φfin ⊆ Φ.

Let Φfin ⊆ Φ be any finite subset of Φ. By definition of `D we get
Φfin 6`D ⊥FOL . But this means ∅ 6`D ⋀ Φfin → ⊥FOL (because assuming
∅ `D ⋀ Φfin → ⊥FOL leads to an obvious contradiction). Finally, local
completeness of D yields ∅ 6|=FOL ⋀ Φfin → ⊥FOL . Since Φfin was
arbitrary, the displayed equivalence gives Φ 6|=FOL ⊥FOL , i.e., Φ has a
model.

Theorem 4.2.6 shows that it is enough to show local completeness


of an FOL proof system to show that this system is (globally) complete.
In fact, Gödel’s original proof of completeness proves local com-
pleteness of an FOL proof system [16]. In ML, however, Γ ∪ {ψ} |= ⊥
does not generally imply Γ |= ψ → ⊥. Recall that this property holds
only for closed Γ-predicates (Corollary 2.4.2). Even if we chose ψ to
be a Γ-predicate, we cannot repeat the same trick as in Theorem 4.2.6
because removing assumptions from a finite theory adds models, i.e.,
cancels some predicates of the original theory.
5 IS SYSTEM H COMPLETE?
We have learned in Section 4.2 that H is complete w.r.t. theories
containing (Definedness), or more generally, w.r.t. equality extensions.
However, H is sound for any ML theory. In this section we investigate
if System H is complete w.r.t. any given theory, i.e., if

Γ |= ϕ implies Γ ` ϕ

holds for all Σ-theories Γ, even if Γ does not contain (Definedness).


Recall that ML is embeddable in predicate logic with equality (Sec-
tion 3.1), which has a complete proof system. That is why it was
conjectured in [27, p. 57] that ML should also admit a complete proof
system, which does not rely on any predefined symbols such as d·e.
System H does not contain any predefined symbols and we have
seen indications that H is quite a strong proof system. Among other
things, H is a locally complete proof system for ML (Theorem 4.2.5),
i.e., it is complete for the empty theory Γ = ∅. On the other hand, we
need predicate logic with equality to contain ML. Local deduction ` ϕ
seems “weaker” than global deduction Γ ` ϕ. This has been explained
by our discussions about local completeness in Section 4.2.4 and the
deduction property in Section 4.2.3, where we have seen that

{ψ} ` ϕ does not imply ∅ ` ψ → ϕ.

We shall see that the deduction property plays a central role in the
question whether H is complete.
The structure of this chapter is as follows. We try to follow Henkin’s
method of proving completeness for FOL [20] to see exactly where it
fails for ML: the deduction property. This will lead us to an alternative
characterization of completeness, which is the main result of this
thesis. We notice we can formalize the notion that (Definedness)
“makes” H complete. It turns out H is complete if and only if every
equality extension is a conservative extension. It is difficult to show that
an extension is conservative without a complete proof system and the
deduction property. However, this result will allow us to reduce the
problem of completeness to finite theories and prove some instances
of completeness. By an instance of completeness we mean that H is
complete w.r.t. some given class of theories:

Definition 5.0.1 (H-complete theory). Let Γ be a Σ-theory. We say that


H is complete w.r.t. Γ (or simply that Γ is H-complete) iff for every
ϕ ∈ PatternΣ we have that Γ |= ϕ implies Γ ` ϕ. 


For example, all theories containing (Definedness) are H-complete


in this sense. If we find that H is not complete because of some-
thing fundamental about ML (and not just because of some missing
rules), we would at least like to characterize H-complete theories and
prove results about them. That means, we would like an if-and-only-if
condition that tells us whether a theory is H-complete.
Let us recap how we proceed in the following list.

(1) We show where Henkin’s method fails when applied to ML. Then
we find an alternative characterization of H-complete theories:
we reduce the completeness problem of H to proving that every
equality extension is a conservative extension (Section 5.1).

(2) We show that we can reduce completeness of H to completeness


of H w.r.t. finite theories (Section 5.2).

(3) We prove some instances of completeness. In particular, we show


that H is complete w.r.t. the fragment of ML without symbols
(Section 5.3). If a theory does not contain any symbols, we will
show that this theory is H-complete.

(4) We study the notion of H-consistency. We show that H-consistent


theories have similar properties to consistent theories in FOL.
In particular, we use this to prove the well-known compactness
property for ML (Section 5.4).

(5) We study the notion of negation-complete theories in ML (Sec-


tion 5.5).

(6) We conclude the chapter with open leads to proving other in-
stances of completeness.

5.1 an if-and-only-if condition for completeness

Henkin showed that instead of solving completeness of an FOL proof


system D , we can show that every D -consistent theory has a model [20].
This allowed to construct the so-called canonical model for a given (con-
sistent) theory, which yielded a technique to prove completeness in
different variants for many logics [5, 12, 13]. Since ML is a variant of
FOL (Chapter 3), why not try Henkin’s method?

Theorem 5.1.1 (Henkin’s reduction [20]). Let D be a sound Hilbert-style
proof system for FOL with the deduction property that has at least
modus ponens and such that `D (¬ ϕ → ⊥FOL ) → ϕ for some FOL
contradiction ⊥FOL . Then the following two statements are equivalent:

(1) D is a complete proof system (Φ |= ϕ implies Φ ` ϕ).

(2) Every D -consistent theory has a model.



Proof.
(⇒) Let Φ be a D -consistent theory, i.e., Φ 6`D ⊥FOL . Then by
completeness of D we get Φ 6|=FOL ⊥FOL . By definition of |=FOL this
means that there exists a model A of Φ such that A 6|=FOL ⊥FOL , i.e., Φ
is satisfiable.
(⇐) We want to prove that D is complete, i.e., Φ 6`D ϕ implies
Φ 6|=FOL ϕ. Let Φ 6`D ϕ.

• First we show that Φ ∪ {¬ ϕ} is satisfiable. Assume for a con-


tradiction that Φ ∪ {¬ ϕ} is unsatisfiable. (2) yields that every
D -consistent theory is satisfiable, which is the same as saying
every unsatisfiable theory is D -inconsistent. Thus (2) yields
Φ ∪ {¬ ϕ} `D ⊥FOL . The deduction property yields

Φ `D ¬ ϕ → ⊥FOL .

But now we can use `D (¬ ϕ → ⊥FOL ) → ϕ and modus ponens


to get Φ `D ϕ, contradiction.

• Because Φ ∪ {¬ ϕ} is satisfiable, there exists a model A of Φ ∪


{¬ ϕ}, i.e., A |=FOL Φ ∪ {¬ ϕ}. By definition of |=FOL we imme-
diately get that A |=FOL Φ and A |= ¬ ϕ. However, A |=FOL ¬ ϕ
implies A 6|=FOL ϕ. Thus A |= Φ and A 6|= ϕ, i.e., Φ 6|=FOL ϕ.

Besides a few technicalities, the main idea in the proof of Theo-


rem 5.1.1 is applying the deduction property. Theorem 5.1.1 can be
seen as a reduction because it reduces the problem of completeness to
the problem of showing that every consistent theory has a model. If
we want to use this for ML, we get immediately stuck because we do
not have the deduction property (Section 4.2.3). Can we too “reduce”
the completeness problem of H to something more tractable?
In Chapter 4 we have learned that H is complete w.r.t. theories con-
taining (Definedness). Intuitively said, (Definedness) “makes” each
theory H-complete. Is it not a way how to characterize H-complete
theories? We do not need to restrict ourselves to (Definedness): all
equality extensions are H-complete (Proposition 4.2.2). Equality exten-
sions are very simple because we only add a single axiom with a fresh
symbol. We have learned in Proposition 2.2.2 that models restricted to
a smaller signature evaluate patterns in the smaller signature to the
same set as the original models do. If we have a Σ ∪ {d·e}-model M
with M |= d x e, then for every M-valuation ρ holds ρM ( ϕ) = ρM|Σ ( ϕ)
for ϕ ∈ PatternΣ . This concept should sound familiar. An equality
extension is indeed a model-theoretic conservative extension:

Definition 5.1.1. Let Γ be a Σ-theory and Γ+ be a Σ+ -theory such that


Σ ⊆ Σ+ . We say that Γ+ is a model-theoretic conservative extension of Γ if
for every ϕ ∈ PatternΣ we have Γ+ |= ϕ iff Γ |= ϕ. 

Lemma 5.1.1 (Extension by definedness). Let Γ be a Σ-theory. Then


Γ= is a model-theoretic conservative extension of Γ, i.e., for every
Σ-pattern ϕ we have Γ= |= ϕ iff Γ |= ϕ.

Proof. This is immediate from Proposition 2.2.2 but we will show a


full proof. Let d·e be w.l.o.g. the fresh definedness symbol in Γ= .
(⇒) By contraposition. Let Γ 6|= ϕ. By definition of |= we have a
Σ-model M = ( M, I ) of Γ such that M 6|= ϕ. Consider the model
M+ = ( M, I ∪ {d·eM+ }) where d·eM+ (m) = M for all m ∈ M. But
then obviously M+ |= Γ ∪ {d x e} and M+ 6|= ϕ because d·e ∈ / Σ. This
means Γ= 6|= ϕ.
(⇐) By contraposition. Let Γ= 6|= ϕ. By definition of |= there exists
a model M of Γ= such that M 6|= ϕ. Consider that also M|Σ |= Γ
because Γ ⊂ Γ= and Γ is a Σ-theory. Because ϕ ∈ PatternΣ , we also
have M|Σ 6|= ϕ. But then M|Σ |= Γ and M|Σ 6|= ϕ, i.e., Γ 6|= ϕ. 

We can exploit Lemma 5.1.1 to reduce completeness of H to proving


that every equality extension is a proof-theoretic conservative extension
in H. There is indeed a good intuition behind because H is com-
plete w.r.t. equality extensions (Proposition 4.2.2). We start with the
definition of proof-theoretic extensions.

Definition 5.1.2 (Extension). Let Γ be a Σ-theory and Γ0 be a Σ0 -theory.


If Σ ⊆ Σ0 and for every ϕ ∈ PatternΣ we have Γ ` ϕ implies Γ0 ` ϕ,
then Γ0 is called a (proof-theoretic) extension of Γ. 

Note that Γ= is a proof-theoretic extension of Γ in the sense of


Definition 5.1.2 trivially because Γ ⊂ Γ= . Conservative extensions also
satisfy the other direction Γ0 ` ϕ implies Γ ` ϕ for ϕ ∈ PatternΣ , i.e.,
the original theory can prove everything in the original signature that
the extension can:

Definition 5.1.3 (Conservative extension). Let Γ be a Σ-theory and Γ0


be a Σ0 -theory. If Γ0 is an extension of Γ and for every ϕ ∈ PatternΣ
we have Γ0 ` ϕ implies Γ ` ϕ, then Γ0 is called a (proof-theoretic)
conservative extension of Γ. 

We will always drop the adjective proof-theoretic when we refer


to proof-theoretic extensions or proof-theoretic conservative exten-
sions. Because we know that H is complete w.r.t. equality extensions
(Proposition 4.2.2), it is natural to ask how equality extensions relate
to completeness of H. We know that Γ |= ϕ iff Γ= |= ϕ for ϕ in the sig-
nature of Γ (Lemma 5.1.1). But we also know that Γ= |= ϕ iff Γ= ` ϕ!
The problem whether Γ is H-complete can be reduced to showing that
Γ= is a conservative extension of Γ. If we prove this for every Γ, then
H is complete:

Theorem 5.1.2. Let Γ be a Σ-theory. The following two statements are


equivalent:

(1) For every ϕ ∈ PatternΣ we have Γ |= ϕ implies Γ ` ϕ,


(Γ is H-complete)

(2) For every ϕ ∈ PatternΣ we have Γ= ` ϕ implies Γ ` ϕ.


(Γ= is a conservative extension of Γ)

Proof.
(⇒) Let ϕ ∈ PatternΣ . We show the contraposition of (2). If Γ 6`
ϕ, by (1) we have Γ 6|= ϕ. By the extension by definedness lemma
(Lemma 5.1.1) we have Γ= 6|= ϕ. By soundness of H we finally get
Γ= 6` ϕ.
(⇐) Let ϕ ∈ PatternΣ . We show the contraposition of (1). If Γ 6` ϕ,
by (2) we have Γ= 6` ϕ. By completeness of H w.r.t. Γ= , we have
Γ= 6|= ϕ. By extension by definedness lemma (Lemma 5.1.1) we finally
get Γ 6|= ϕ. 

Corollary 5.1.1 (Characterization of completeness). The following two


statements are equivalent:

(1) H is complete, i.e., every Σ-theory Γ is H-complete.

(2) For every Σ-theory Γ we have that Γ= is a conservative extension
of Γ.

(Here is why equality extensions are actually useful.)

We can delve even deeper. The question of completeness of H is


really an instance of the problem whether extensions by definition1 [13,
p. 126] are conservative extensions in H: in FOL, an extension by
definition of a fresh n-ary predicate symbol P is extending a given
theory with a so-called definition ϕ of P

∀ x1 . . . ∀ xn . P( x1 , . . . , xn ) ↔ ϕ( x1 , . . . , xn )

where ϕ is in the original signature. Extensions by definition are


trivially conservative in FOL because FOL has a complete proof system.
Notice that equality extensions are extensions by definition in this
sense because d x e really says ∀ x. d x e ↔ >. In ML, it is possible to show
that extensions by definition mean almost the same thing as in FOL
and thus they should be conservative; semantically, this makes sense
as we have already seen. If H is complete, extensions by definition
in ML are conservative. We are in a situation where we would like to
show the other direction; this is difficult.
However, all is not lost. Notice that Theorem 5.1.2 allows us to prove
completeness of H on a case-by-case basis. Our if-and-only-if condition
for H-complete theories gives a straightforward manner of solving
completeness of H w.r.t. some theories Γ. For example, H is complete
w.r.t. predicate patterns (∅-predicates). The idea is analogous to
1 Mind the difference between a definition and a definedness axiom.

the one used in FOL for reducing completeness to local completeness


(Theorem 4.2.6), i.e., we remove patterns from our assumptions “to
the front”:

Theorem 5.1.3. Let ψ ∈ PatternΣ be a ∅-predicate. Then {ψ} is H-


complete.

Proof. We will prove an equivalent statement that {ψ}= is a conserva-


tive extension of {ψ} (Theorem 5.1.2), i.e., for every ϕ ∈ PatternΣ we
have {ψ}= ` ϕ implies {ψ} ` ϕ.
Let {ψ}= ` ϕ, which is the same as {d x e} ∪ {ψ} ` ϕ. Recall that ψ
is an ∅-predicate, thus it is also a {d x e}-predicate by Lemma 5.1.1. But
then ψ is a {d x e}-deduct by Theorem 4.2.4, thus {d x e} ` ψ → ϕ. By
soundness we get {d x e} |= ψ → ϕ. But then extension by definedness
lemma (Lemma 5.1.1) yields |= ψ → ϕ because obviously ψ → ϕ ∈
PatternΣ . By local completeness of H we get ` ψ → ϕ. The proof of
{ψ} ` ϕ is introducing ψ, applying ` ψ → ϕ and applying (MP) on ψ
and ψ → ϕ. 

Notice that Theorem 5.1.3 solves the question whether {∀ x. x } is


H-complete mentioned in Section 4.1 because χ ≡ ∀ x. x is a predicate
pattern! The pattern χ evaluates to ⋂m∈ M {m} in every model M.
Therefore in single-element models the pattern χ is trivially valid; in
other models χ matches no element because the intersection is empty.
This intuition can be extended further to all closed patterns without
symbols, which is the main idea behind solving completeness of H
w.r.t. the fragment without symbols (Section 5.3).
An attentive reader might have also noticed that our if-and-only-if
condition for completeness (Theorem 5.1.2) not only leads to a potential
proof of completeness of H. It also means that H-complete theories
admit a stronger version of the weak deduction property:

Proposition 5.1.1. Let Γ be an H-complete Σ-theory. Then for every


ϕ ∈ PatternΣ we have Γ ∪ {ψ} ` ϕ iff Γ= ` bψc → ϕ.

Proof. Γ is H-complete implies that Γ= is a conservative extension


of Γ. But then for every ϕ ∈ PatternΣ we have Γ ∪ {ψ} ` ϕ iff
Γ= ∪ {ψ} ` ϕ iff Γ= ` bψc → ϕ, where the last equivalence is the
weak deduction property. 

This leads us back to the discussion in Section 4.2.3 about Γ-deducts,


i.e., patterns for which the conventional deduction property holds.
The deduction property and our if-and-only-if condition for the com-
pleteness of H using conservative extensions (Theorem 5.1.2) implies
that Γ-predicates are Γ-deducts for any H-complete theory Γ.

Theorem 5.1.4. Let Γ be an H-complete theory. Then every closed


Γ-predicate is a Γ-deduct.

Proof. Let Γ be a Σ-theory and ψ ∈ PatternΣ any closed Γ-predicate.



• We want to show Γ ∪ {ψ} ` ϕ implies Γ ` ψ → ϕ. Let Γ ∪ {ψ} `


ϕ for some ϕ ∈ PatternΣ . By Proposition 5.1.1 we have Γ= `
bψc → ϕ. Observe that ψ must also be a Γ= -predicate (Proposi-
tion 2.4.3) and thus totality canceling (Proposition 4.2.3) yields
Γ= ` ψ → ϕ. But Theorem 5.1.2 says that Γ= is a conservative ex-
tension of Γ. Therefore Γ ` ψ → ϕ because ψ → ϕ ∈ PatternΣ .

• We want to show Γ ` ψ → ϕ implies Γ ∪ {ψ} ` ϕ. Let Γ ` ψ → ϕ.


Then trivially Γ ∪ {ψ} ` ψ → ϕ. The proof of Γ ∪ {ψ} ` ϕ is ψ,
applying ` ψ → ϕ, and applying (MP) on ψ and ψ → ϕ.

Interestingly enough, the question whether deducts are predicates


even in H-complete theories is non-trivial; we do not know anything
about contained symbols. We leave the general case as a conjecture.

Conjecture 5.1.1. Let Γ be an H-complete theory. Then every Γ-deduct


is a Γ-predicate.

5.2 reduction to finite theories

Our if-and-only-if condition for completeness of H has an immediate


consequence. The condition reduces completeness, a problem of both
semantic and syntactic nature, to a strictly syntactic problem. Provabil-
ity means there is a finite Hilbert-style proof that uses a finite number
of axioms. If we prove the condition given by Theorem 5.1.2 for all
finite theories, we will thus prove the condition for all theories. This is
formalized in the following theorem.

Theorem 5.2.1 (Finite theories). The following two statements are


equivalent.

(1) For every finite Σ-theory Γ and every ϕ ∈ PatternΣ we have


Γ= ` ϕ implies Γ ` ϕ
(every finite equality extension is a conservative extension).

(2) For every Σ-theory Γ and every ϕ ∈ PatternΣ we have Γ= ` ϕ


implies Γ ` ϕ
(every equality extension is a conservative extension).

Proof.
(⇒) Let Γ= ` ϕ and w.l.o.g assume Γ= \ Γ = {d x e} where d·e ∈ / Σ.
By definition of ` there is some finite theory Γ0 ⊆ Γ= such that Γ0 ` ϕ.
Note that Γ0 is not necessarily a Σ-theory because Γ0 can contain
d x e. That is why we instead consider the Σ-theory Γ00 = Γ0 \ {d x e}. By
our assumption from Definition 2.6.1 about the fixed fresh symbol for
Σ we have Γ0 ⊆ Γ00= ⊆ Γ= . But then obviously Γ00= ` ϕ. Because Γ00 is

finite, we can apply (1) to get Γ00 ` ϕ. But then trivially Γ ` ϕ because
Γ00 ⊆ Γ.
(⇐) Trivial. 

Note that in the proof of Theorem 5.2.1, we have to consider Γ00


instead of Γ0 because Γ0 might contain the fresh definedness axiom, i.e.,
we might have Γ0 6⊆ Γ. That is why (1) applied on Γ0= to get Γ0 would
not help us to prove the conclusion Γ ` ϕ. Because of our if-and-only-if
characterization of completeness, Theorem 5.2.1 means that we only
have to prove completeness of H for finite theories:

Corollary 5.2.1. The following two statements are equivalent.

• Every finite Σ-theory is H-complete.

• Every Σ-theory is H-complete.

5.3 theories without symbols are h-complete

Now we are ready to show that H is complete for Σ-theories Γ with


Σ = ∅. We can show this by proving something stronger. The frag-
ment of ML without symbols is very weak. In fact, we will show that
closed patterns without symbols are predicate patterns (∅-predicate)!
This will immediately yield completeness for matching logic without
symbols because we have learned that

(1) {ψ} is H-complete if ψ is an ∅-predicate (Theorem 5.1.3), and

(2) every finite ∅-theory is H-complete implies every ∅-theory is


H-complete (Theorem 5.2.1).
The main insight is the following: if we do not have symbols, we do
not have a way to distinguish between individual model elements.
That is why we introduce some theory on permutations over model
elements. Let π : M → M be a permutation (bijection) of model
elements from M. Notice that for any M-valuation ρ, the composition
π ◦ ρ is again an M-valuation. For any X ⊆ M, we can naturally
extend permutations of elements to sets of model elements as follows:

π ( X ) = { π ( x ) | x ∈ X }.

When we apply these permutations to pattern evaluations, we can show


that they have several interesting properties:

Lemma 5.3.1. Let M be a Σ-model, π : M → M any permutation and


ρ any M-valuation. Then the following properties hold:

(1) π (ρM ( ϕ1 ) ∩ ρM ( ϕ2 )) = π (ρM ( ϕ1 )) ∩ π (ρM ( ϕ2 ))

(2) π ( M \ ρM ( ϕ)) = M \ π (ρM ( ϕ)).



(3) π ( ⋃m∈ M ρ[m/x ]M ( ϕ)) = ⋃m∈ M π (ρ[m/x ]M ( ϕ)).

(4) π ◦ ρ[m/x ] = (π ◦ ρ)[π (m)/x ].

Proof. Let us prove each property on its own.

(1) We are proving the following equality

{π (m) | m ∈ ρM ( ϕ1 ) ∩ ρM ( ϕ2 )} = {π (m) | m ∈ ρM ( ϕ1 )} ∩ {π (m) | m ∈ ρM ( ϕ2 )}.

(⊆) is trivial, for (⊇) consider m that is in both sets. We know


then there exists a ∈ ρM ( ϕ1 ), b ∈ ρM ( ϕ2 ) such that π ( a) = m
and π (b) = m. By injectivity of π we get a = b ∈ ρM ( ϕ1 ) ∩
ρM ( ϕ2 ).

(2) We prove both subsumptions.


• (⊆) Let m ∈ M \ π (ρM ( ϕ)). Because π is surjective, there
exists a ∈ M such that π ( a) = m. Assume for a contradic-
/ M \ ρM ( ϕ). Then a ∈ ρM ( ϕ), which means
tion that a ∈
m ∈ π (ρM ( ϕ)) and thus m ∈/ M \ π (ρM ( ϕ)), contradiction.
• (⊇) By contraposition. Let m ∈ / M \ π (ρM ( ϕ)). Thus m ∈
π (ρM ( ϕ)). Assume for a contradiction that m ∈ π ( M \
ρM ( ϕ)). Thus m ∈ π (ρM ( ϕ)) ∩ π ( M \ ρM ( ϕ)), which by
(1) means m ∈ π (ρM ( ϕ) ∩ ( M \ ρM ( ϕ))) = π (∅) = ∅,
contradiction.

(3) π (b) ∈ π ( ⋃m∈ M ρ[m/x ]M ( ϕ)) iff b ∈ ⋃m∈ M ρ[m/x ]M ( ϕ)
iff π (b) ∈ ⋃m∈ M π (ρ[m/x ]M ( ϕ)).

(4) We consider two cases:


• y = x: (π ◦ ρ[m/x ])(y) = π (m) = (π ◦ ρ)[π (m)/x ](y)
• y 6= x: (π ◦ ρ[m/x ])(y) = π (ρ(y)) = (π ◦ ρ)(y) = (π ◦
ρ)[π (m)/x ](y).

Recall that given an M-valuation ρ and a permutation π, π ◦ ρ is also


an M-valuation. When we do not have formal symbols in the signature,
the following lemma intuitively says that (π ◦ ρ)M ( ϕ) matches the
same elements as π applied to the set ρM ( ϕ).

Lemma 5.3.2. Let M be a ∅-model. Then for every ∅-pattern ϕ, every


M-valuation ρ and every permutation π : M → M we have

π (ρM ( ϕ)) = (π ◦ ρ)M ( ϕ).

Proof. By structural induction on ϕ.



• ϕ ≡ x. By definition

π (ρM ( x )) = {π ( a) | a ∈ ρM ( x )} = {(π ◦ ρ)( x )} = (π ◦ ρ)M ( x ).

• ϕ ≡ ¬ ϕ1 . Consider the following.


π (ρM ( ϕ)) = π ( M \ ρM ( ϕ1 ))
= M \ π (ρM ( ϕ1 ))        (1)
= M \ (π ◦ ρ)M ( ϕ1 )      (IH)
= (π ◦ ρ)M ( ϕ)

Equality (1) is given by Lemma 5.3.1.

• ϕ ≡ ϕ1 ∧ ϕ2 . Consider the following.

π (ρM ( ϕ)) = π (ρM ( ϕ1 ) ∩ ρM ( ϕ2 ))
= π (ρM ( ϕ1 )) ∩ π (ρM ( ϕ2 ))          (1)
= (π ◦ ρ)M ( ϕ1 ) ∩ (π ◦ ρ)M ( ϕ2 )      (IH)
= (π ◦ ρ)M ( ϕ1 ∧ ϕ2 )                   (2)

(1) is given by Lemma 5.3.1. (2) is by definition of pattern inter-


pretation because π ◦ ρ is an M-valuation.

• ϕ ≡ ∃ x. ϕ1 . Consider the following.

π (ρM (∃ x. ϕ1 )) = π ( ⋃m∈ M ρ[m/x ]M ( ϕ1 ))
= ⋃m∈ M π (ρ[m/x ]M ( ϕ1 ))           (1)
= ⋃m∈ M (π ◦ ρ[m/x ])M ( ϕ1 )         (IH)
= ⋃m∈ M (π ◦ ρ)[π (m)/x ]M ( ϕ1 )     (2)
= ⋃m′ ∈ M (π ◦ ρ)[m′ /x ]M ( ϕ1 )      (3)
= (π ◦ ρ)M (∃ x. ϕ1 )                  (4)

Equalities (1) and (2) are given by Lemma 5.3.1. The IH can be used
because we are proving the claim for every M-valuation, including
ρ[m/x ]. Equality (3) holds because π is surjective. Equality (4)
is again by definition of pattern interpretation (π ◦ ρ is an M-
valuation).

Lemma 5.3.2 yields a direct way to prove that closed ∅-


patterns are predicates:

Theorem 5.3.1. Let ψ be a closed ∅-pattern. Then ψ is an ∅-predicate


(predicate pattern).

Proof. We want to prove that for every ∅-model M we have ρM (ψ) =


∅ or ρM (ψ) = M. Let M be an ∅-model. Because ψ is closed, we can
fix some M-valuation ρ. By Lemma 5.3.2 we have

π (ρM (ψ)) = (π ◦ ρ)M (ψ)

for every permutation π : M → M. Because ψ is closed, further


π (ρM (ψ)) = (π ◦ ρ)M (ψ) = ρM (ψ) (Proposition 2.2.1).
Let A := ρM (ψ). This means that π ( A) = A for every permutation
π : M → M. Consider that this is only possible if A = ∅ or A = M.
Assume for a contradiction that ∅ ⊂ A ⊂ M. Choose a ∈ A, b ∈ / A and
consider the permutation π ( a) := b, π (b) := a. But then obviously
π ( A) 6= A, which is a contradiction. 
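The theorem is easy to test experimentally. The following minimal sketch (our own,
not part of the thesis) evaluates a few closed symbol-free patterns on a three-element
carrier and confirms that each of them evaluates to the empty set or to the whole
carrier, as Theorem 5.3.1 states.

M = {0, 1, 2}

def ev(pat, rho):                        # evaluate a symbol-free pattern under rho
    kind = pat[0]
    if kind == "var":
        return {rho[pat[1]]}
    if kind == "not":
        return M - ev(pat[1], rho)
    if kind == "and":
        return ev(pat[1], rho) & ev(pat[2], rho)
    if kind == "exists":
        out = set()
        for m in M:
            out |= ev(pat[2], {**rho, pat[1]: m})
        return out
    raise ValueError(kind)

def forall(x, body):                     # forall x. body := not exists x. not body
    return ("not", ("exists", x, ("not", body)))

closed_patterns = [
    ("exists", "x", ("var", "x")),                          # exists x. x
    forall("x", ("var", "x")),                              # forall x. x
    forall("x", ("exists", "y",                             # forall x. exists y. x /\ y
                 ("and", ("var", "x"), ("var", "y")))),
]

for pat in closed_patterns:
    value = ev(pat, {})
    assert value == set() or value == M                     # empty or the whole carrier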

Now because we can consider only finite theories, by completeness


of H w.r.t. predicates (Theorem 5.1.3) we have the following.

Theorem 5.3.2. Every ∅-theory Γ is H-complete, i.e., H is complete


w.r.t. the fragment of ML without symbols.

Proof. By Theorem 5.2.1 it suffices to prove that every finite ∅-theory


Γ is H-complete.
Let Γ be a finite ∅-theory. Γ can obviously be expressed as a con-
junction γ ≡ ⋀ Γ. By our if-and-only-if condition for H-completeness
we just need to prove for every ϕ ∈ Pattern∅ that {γ}= ` ϕ implies
{γ} ` ϕ.
Let {γ}= ` ϕ for some ϕ ∈ Pattern∅ . This is iff {∀γ}= ` ∀ ϕ by
Proposition 4.2.1. Notice that ∀γ is a closed ∅-pattern and thus ∀γ is
an ∅-predicate by Theorem 5.3.1. But H is complete w.r.t. predicate
patterns (Theorem 5.1.3), i.e., {∀γ}= ` ∀ ϕ implies {∀γ} ` ∀ ϕ. Thus
{∀γ} ` ∀ ϕ. Again by Proposition 4.2.1 this implies {γ} ` ϕ. 

Thus any Σ-theory Γ with Σ = ∅ is H-complete. This is important


because now when we talk about completeness of H, we can always
assume w.l.o.g. that |Σ| > 0. This is more similar to the situation in
FOL, where we always have at least one predicate symbol P (or “=” in
the case of FOL with equality). As a concluding remark, Theorem 5.3.2
also answers the hanging question about Γ-deducts when Γ is an
∅-theory:

Proposition 5.3.1. Let Γ be an ∅-theory. Then every Γ-deduct is a


Γ-predicate.

Proof. This is immediate from Theorem 5.3.1 because Γ-deducts are


by definition closed ∅-patterns. 

5.4 consistency, satisfiability, and compactness

In this section we study consistency of ML theories. For any sufficiently


strong proof system of matching logic such as H, we can have the
same definition of consistency as FOL, which has mostly the same
properties. An H-consistent theory means that you cannot derive the
contradiction ⊥ from this theory in H. For example, the empty theory
∅ is consistent because H is sound (Theorem 4.2.1). (Consistency
depends on the chosen proof system, not the chosen logic.)
Definition 5.4.1 (Consistent theory). Let Γ be a Σ-theory. Then Γ is
called H-consistent (or simply consistent) iff Γ 6` ⊥. 
We often drop the H in H-consistency throughout the thesis. An inconsistent Σ-theory can prove every Σ-pattern (explosion principle), which is a consequence of what we have seen in Example 4.2.1. Therefore we can easily show that the notion of (in)consistency admits the same equivalent characterizations as in most sound and complete proof systems for FOL.
Lemma 5.4.1. Let Γ be a Σ-theory. The following three are equivalent.

(1) Γ ` ⊥

(2) For every ϕ ∈ PatternΣ it holds that Γ ` ϕ.

(3) There exists a closed pattern ϕ ∈ PatternΣ such that Γ ` ϕ and Γ ` ¬ ϕ.

Proof. We show all the implications.

• (1⇒2) Let Γ ` ⊥. The proof of Γ ` ϕ is as follows. Apply the proof of Γ ` ⊥, notice that ` ⊥ → ϕ is an instance of (PT)2 , and apply (MP) on ⊥ and ⊥ → ϕ.

• (2⇒3) Assume Γ ` ϕ for every pattern ϕ. Choose ϕ1 ≡ ∃ x. x and ϕ2 ≡ ¬(∃ x. x ). Thus Γ ` ∃ x. x and Γ ` ¬(∃ x. x ) by assumption.

• (3⇒1) Let Γ ` ϕ and Γ ` ¬ ϕ for some ϕ. The proof of Γ ` ⊥ is as follows. Apply the proofs of Γ ` ϕ and Γ ` ¬ ϕ. Apply ` ϕ → (¬ ϕ → ⊥), which is an instance of (PT). Apply (MP) twice, first on ϕ and ϕ → (¬ ϕ → ⊥), then on ¬ ϕ and ¬ ϕ → ⊥.
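For illustration, the derivation behind (1⇒2) can be spelled out in the same numbered style we use for {ξ } ` ⊥ in Theorem 5.4.3 below; here ϕ is an arbitrary Σ-pattern:

1. ⊥ (proof of Γ ` ⊥)
2. ⊥ → ϕ (PT)
3. ϕ 1., 2. (MP)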
How can we be certain that this is an “appropriate” definition of consistency for ML? We should naturally aim for a definition of consistency under which unsatisfiable theories are inconsistent. Recall that we have seen in Example 2.4.1 that the unsatisfiable pattern ¬ x does not evaluate to ∅ (as ⊥ does). This is fine because {¬ x } ` ∀ x. ¬ x by (Gen) and we have ` ∃ x. x by (Ex), which is the same as ` ¬(∀ x. ¬ x ); hence {¬ x } ` ⊥ by Lemma 5.4.1, i.e., {¬ x } is inconsistent, as desired. Moreover, for theories with (Definedness) we can show that consistency exactly corresponds with satisfiability.

2 This is the reason why we defined ⊤ as a propositional tautology, even though one can find a shorter definition such as ∃ x. x.
Theorem 5.4.1. Let Γ be a Σ-theory. Then Γ= is satisfiable iff Γ= is consistent.

Proof. We prove both implications.
(⇒) Let M be a model of Γ= . Assume for a contradiction that Γ= is inconsistent, i.e., Γ= ` ⊥. Then by soundness of H we have Γ= |= ⊥. By definition of |= we have M |= ⊥, which means ρM (⊥) = M for all M-valuations ρ, a contradiction with ρM (⊥) = ∅.
(⇐) Let Γ= 6` ⊥. By completeness of H w.r.t. Γ= we have Γ= 6|= ⊥, which yields that there exists a model M of Γ= such that M 6|= ⊥, i.e., Γ= is satisfiable.

Observe that Theorem 5.4.1 and our extension-by-definedness lemma (Lemma 5.1.1) give a direct way to prove the well-known compactness property for ML! The compactness property for ML has not been explicitly proved before. What is more, here we show that it holds for any theory, even one that does not contain (Definedness).

Theorem 5.4.2 (Compactness property). Let Γ be any Σ-theory. Then Γ is satisfiable iff every finite subset Γfin ⊆ Γ is satisfiable.

Proof. Lemma 5.1.1 yields that Γ is satisfiable iff Γ= is satisfiable. Theorem 5.4.1 yields that Γ= is satisfiable iff Γ= 6` ⊥. By definition of `, we have Γ= 6` ⊥ iff Γ′ 6` ⊥ for every finite subset Γ′ ⊆ Γ= . Again by Theorem 5.4.1, this holds iff every finite subset Γ′ ⊆ Γ= is satisfiable, which in turn holds iff every finite subset Γ′′ ⊆ Γ is satisfiable.

It should not be surprising that the compactness property holds for ML after what we have seen in Section 3.1. However, it is still nice to have a direct and clear proof for our intuition. Does this mean that every H-consistent theory has a model, even a theory without (Definedness)? Unfortunately, for such theories we still have only the converse implication, which is proved by soundness of H.

Proposition 5.4.1. Let Γ be a Σ-theory. If Γ is satisfiable, then Γ is consistent.
The other implication is usually proved by defining a canonical model for the so-called negation-complete theories, which are problematic in matching logic (see Section 5.5). It is not at all intuitive that consistency implies satisfiability in ML, or equivalently, that unsatisfiability implies inconsistency. One reason is that unsatisfiable patterns do not have to evaluate to ∅ (Example 2.4.1). In fact, even closed unsatisfiable patterns are not restricted to a few semantically equivalent edge cases; they can have complicated semantics. The following theorem is a new result confirming that closed unsatisfiable patterns can evaluate to any strict subset of a model M, not just to ∅, M \ {m} for some m ∈ M, etc. The difficulty is that ML symbols can be self-referential, i.e., a symbol can be applied to patterns containing that very symbol.

Theorem 5.4.3. Let Σ be any signature such that σ ∈ Σ1 . There exists a closed unsatisfiable pattern ξ ∈ PatternΣ such that for every Σ-model M and set X ⊂ M, there exists a model M′ with ρM′ (ξ ) = X for every M-valuation ρ. (For instance, ξ can match all numbers in R \ Q and still not be valid in any model!)

Proof. Consider the pattern ξ = σ(¬σ(⊤)). First we prove that {ξ } is unsatisfiable by showing that {ξ } is inconsistent (Proposition 5.4.1). Here is the proof for {ξ } ` ⊥:

1. σ(¬σ(⊤))
2. ¬σ(⊤) → ⊤ (PT)
3. σ(¬σ(⊤)) → σ(⊤) 2. (Framing)
4. σ(⊤) 1., 3. (MP)
5. ¬σ(¬σ(⊤)) 4. (Lemma 4.2.4)
6. ⊥ 1., 5. (Lemma 5.4.1)

The rest is straightforward. Let M be some Σ-model. Choose any X ⊂ M and consider the model M′ = (M, I ∩ {σM′ }) where we define σM′ (m) = X for all m ∈ M. Note that M \ X ≠ ∅. For every M-valuation ρ we get

ρM′ (ξ ) = σM′ (M \ ⋃_{b ∈ M} σM′ (b)) = σM′ (M \ X ) = X.
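To make the construction concrete, here is a small instance (the carrier and the set X below are our own choice for illustration). Take M = {1, 2, 3}, X = {1, 2}, and σM′ (m) = {1, 2} for every m ∈ M. Then for any M-valuation ρ,

ρM′ (σ(⊤)) = σM′ (1) ∪ σM′ (2) ∪ σM′ (3) = {1, 2},
ρM′ (¬σ(⊤)) = {3},
ρM′ (ξ ) = σM′ (3) = {1, 2} = X,

so ξ is matched exactly by 1 and 2 in M′, even though {ξ } has no model at all.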
The consequence of Theorem 5.4.3 is, for example, that we cannot simply claim that |= ψ → ⊥ for a closed unsatisfiable pattern ψ. In proof-theoretical terms, this rules out the deduction property even if we restrict the conclusion to ⊥, i.e.,

ψ ` ⊥ implies ` ψ → ⊥.

Simply choose ψ ≡ σ(¬σ(⊤)) (recall that {σ(¬σ(⊤))} ` ⊥ but by soundness 6` σ(¬σ(⊤)) → ⊥). Notice that this is exactly the kind of reasoning we needed for Henkin’s reduction (Theorem 5.1.1), where we used only this particular instance of the deduction property!
However, there is still some hope. Consistency implies satisfiability is equivalent to something that is much more intuitive, namely that adding a definition of a fresh symbol does not break consistency.

Theorem 5.4.4. The following two properties are equivalent:

(1) Γ 6` ⊥ implies Γ 6|= ⊥ (consistency implies satisfiability),

(2) Γ 6` ⊥ implies Γ= 6` ⊥ (adding a definition does not break consistency).

Proof. We have Γ 6|= ⊥ iff Γ= 6|= ⊥ (Lemma 5.1.1) and Γ= 6|= ⊥ iff Γ= 6` ⊥ (Proposition 4.2.2).

Thus we leave “consistency implies satisfiability” as a conjecture.

Conjecture 5.4.1. Γ 6` ⊥ implies Γ is satisfiable (Γ 6|= ⊥).

Note that consistency implies satisfiability alone might not be enough for proving that H is complete (unlike in FOL and FOL proof systems). Consistency implies satisfiability overlaps with completeness only if every predicate is a deduct, as the following theorem shows. The proof of the theorem uses the observation that we can focus on Γ-predicates when dealing with completeness, together with the conventional trick from Henkin’s reduction (Theorem 5.1.1):

Theorem 5.4.5 (Henkin’s reduction in ML). Let us assume for any theory Γ that every closed Γ-predicate is a Γ-deduct. Then the following two statements are equivalent:

(1) H is complete.

(2) Every H-consistent theory is satisfiable.

Proof. (⇒) Let Γ 6` ⊥. Because H is complete, we automatically get Γ 6|= ⊥. By definition of |= there is a model M of Γ such that M 6|= ⊥, i.e., Γ is satisfiable.
(⇐) Notice that it is enough to prove completeness of H for closed Γ-predicates, i.e., the following two are trivially equivalent:

(a) H is complete,

(b) for every Σ-theory Γ and every Γ-predicate ϕ, Γ |= ϕ implies Γ ` ϕ.

This is because if Γ |= ϕ, then trivially ϕ is a Γ-predicate (ϕ is valid in Γ). Let us prove (1) by proving the contraposition of (b), i.e., let Γ be a Σ-theory, let ϕ be a Γ-predicate, and assume Γ 6` ϕ. First we show that Γ ∪ {¬ ϕ} is satisfiable. Assume Γ ∪ {¬ ϕ} is unsatisfiable. Then by the contraposition of (2) we get Γ ∪ {¬ ϕ} ` ⊥. Because ϕ is a Γ-predicate, ¬ ϕ is also a Γ-predicate by Proposition 2.4.2. But then our extra assumption says that ¬ ϕ is a Γ-deduct. Thus by definition of Γ-deducts we get Γ ` ¬ ϕ → ⊥. By (PT) we have Γ ` (¬ ϕ → ⊥) → ϕ, thus obviously Γ ` ϕ, a contradiction. Thus Γ ∪ {¬ ϕ} is satisfiable, i.e., there is some model M |= Γ ∪ {¬ ϕ}. This means that M |= Γ and M 6|= ϕ, i.e., Γ 6|= ϕ.

The trouble with Henkin’s reduction is that even if we had it, we would probably need to deal with negation-complete theories. Negation-complete theories are problematic in ML. This is discussed in the next section.
5.5 negation-complete theories

One of the widely known techniques in completeness proofs is the construction of a canonical model for any given consistent theory. In FOL, this is done by extending the given (consistent) theory to a (witnessed) negation-complete theory that can derive any closed formula or its negation [13, p. 78]:

Definition 5.5.1 (Negation-complete Σ-theory). A Σ-theory Γ is called negation-complete if Γ is consistent and for any closed pattern ψ, either Γ ` ψ or Γ ` ¬ψ holds.
In fact, maximally consistent FOL theories are negation-complete
FOL theories, which can be easily proved using the deduction property.
Since ML is a variant of FOL (Section 3.1), it is a natural question to ask
whether such a class of theories actually exists in ML. There are indeed
theories that can be extended to negation-complete ML theories:

Theorem 5.5.1. Let Ξ+ be a maximally consistent set in some signature Σ containing the axioms {∀ x. x, ⌈ x ⌉}. Then Ξ+ is negation-complete.

Proof. First we show that some such Ξ+ exists. The theory Ξ = {∀ x. x, ⌈ x ⌉} is consistent because it is satisfiable (Proposition 5.4.1). For example, consider the model M with M = {0} and ⌈·⌉M (m) = M for all m ∈ M; clearly we have M |= Ξ. Now extend Ξ to some maximally H-consistent set Ξ+ in the conventional manner; the set Ξ+ must exist for the same reasons as in FOL (System H is a Hilbert-style proof system).
Let us show that for every closed pattern ϕ ∈ PatternΣ , either Ξ+ ` ϕ or Ξ+ ` ¬ ϕ. Assume for a contradiction that both Ξ+ 6` ϕ and Ξ+ 6` ¬ ϕ. By maximality this means Ξ+ ∪ { ϕ} ` ⊥. Observe that ϕ is a Ξ+ -predicate because ∀ x. x ∈ Ξ+ , i.e., for every M |= Ξ+ we have | M | = 1. Because ⌈ x ⌉ ∈ Ξ+ , by Theorem 4.2.4 the pattern ϕ is a Ξ+ -deduct. Thus by definition of Ξ+ -deducts we get Ξ+ ` ϕ → ⊥, i.e., Ξ+ ` ¬ ϕ because ` (ϕ → ⊥) → ¬ ϕ by (PT), a contradiction.
However, there is a very simple example showing that not all ML theories can be extended to a negation-complete theory, even if we consider theory extensions instead of supersets (Definition 5.1.2):

Example 5.5.1. Consider the Σ-theory Γ = {⌈ x ⌉, ⌈λ⌉, ⌈¬λ⌉} where Σ = {λ}. Clearly Γ is satisfiable; consider

M : M = {0, 1}, λM = {0}, ⌈·⌉M (m) = M for every m ∈ M,

which is indeed a model of Γ. Moreover, Γ is consistent (Theorem 5.4.1). Assume for a contradiction that there is some negation-complete extension Γ+ of Γ. By definition Γ+ is consistent, and because Γ+ ` ⌈ x ⌉, there is also some model M+ of Γ+ (Theorem 5.4.1). By negation-completeness of Γ+ we get that Γ+ ` λ or Γ+ ` ¬λ because λ is closed. Thus soundness of H yields either M+ |= λ or M+ |= ¬λ, which is a contradiction because we have both M+ |= ⌈λ⌉ and M+ |= ⌈¬λ⌉ (∅ ⊂ ρM+ (λ) ⊂ M+ for every M+ -valuation ρ).
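For completeness, here is the routine check that M is a model of Γ, using the pointwise extension of ⌈·⌉M to sets of elements (as in the proof of Theorem 5.4.3). For every M-valuation ρ,

ρM (⌈ x ⌉) = ⌈·⌉M (ρ( x )) = M,
ρM (⌈λ⌉) = ⌈·⌉M ({0}) = ⌈·⌉M (0) = M,
ρM (⌈¬λ⌉) = ⌈·⌉M ( M \ {0}) = ⌈·⌉M (1) = M,

so all three axioms of Γ evaluate to the whole carrier.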
Notice that Example 5.5.1 applies to any sound proof system for ML. In fact, this shows a remarkable difference from FOL: theories of models Th(M) are generally not negation-complete for any sound proof system of matching logic.

Definition 5.5.2 (Theory of a model M). Let M be a Σ-model. Then

Th(M) = { ϕ ∈ PatternΣ | M |= ϕ}

is called the theory of M.
Theories of models are obviously satisfiable because M |= Th(M) by definition. Thus they are also consistent by Proposition 5.4.1. However, theories of models need not be negation-complete. If we take the model M from Example 5.5.1, then we have both Th(M) 6` ¬λ and Th(M) 6` λ. How can this be? The explanation is that λ is not an M-predicate. That is why neither λ nor ¬λ is valid in M, which excludes both from Th(M).
We would like to find an alternative definition of negation-complete theories with properties similar to those of negation-complete theories in FOL, guaranteeing that

(C1) every consistent theory can be extended to this kind of negation-complete theory, and

(C2) theories of models are negation-complete in this new sense.

To achieve this, we clearly need the deduction property and patterns that are either true or false. Since we conjecture that Γ-deducts are exactly Γ-predicates (and vice versa), the following definition should meet both of our goals.
Definition 5.5.3 (Weakly negation-complete theory). Let Γ be a Σ-
theory. If Γ is consistent and for every Γ-deduct ϕ we have Γ ` ϕ or
Γ ` ¬ ϕ, then Γ is called weakly negation-complete. 
Indeed, our definition of weakly negation-complete theories satisfies
the condition (C1) because we can use the deduction property:
Theorem 5.5.2. Maximally consistent theories are weakly negation-complete.

Proof. Let Γ+ be a maximally consistent theory. Assume for a contradiction that Γ+ is not weakly negation-complete. Thus there is some Γ+ -deduct ψ such that Γ+ 6` ψ and Γ+ 6` ¬ψ. Maximality implies that Γ+ ∪ {ψ} ` ⊥. But then by definition of Γ+ -deducts Γ+ ` ψ → ⊥. Consider that ` (ψ → ⊥) → ¬ψ by (PT). Thus Γ+ ` ¬ψ, a contradiction.
Corollary 5.5.1. Every consistent theory Γ can be extended to a weakly negation-complete theory.

As for the condition (C2), the situation is a little more complicated. Even though it is trivial to see that Th(M) is an H-complete theory, we do not know for every Th(M) whether Th(M)-deducts are Th(M)-predicates (Conjecture 5.1.1). We can prove the condition (C2) only for models that satisfy (Definedness):

Theorem 5.5.3. Let M be a Σ-model with M |= ⌈ x ⌉. Then Th(M) is weakly negation-complete.

Proof. Let us prove that for every Th(M)-deduct ψ, either Th(M) ` ψ or Th(M) ` ¬ψ. We will show a stronger claim: for every Th(M)-deduct ψ, either ψ ∈ Th(M) or ¬ψ ∈ Th(M).
Because (Definedness) ∈ Th(M), we know that every Th(M)-deduct is a closed Th(M)-predicate by Theorem 4.2.4. Thus it suffices to show that for every closed Th(M)-predicate ψ we have ψ ∈ Th(M) or ¬ψ ∈ Th(M). This is obvious because ψ is an M-predicate by M |= Th(M), and for every closed M-predicate ψ we have M |= ψ or M |= ¬ψ.

For models that do not satisfy (Definedness) we leave the condition (C2) as a conjecture, which can be proved using the same idea as Theorem 5.5.3 if every Γ-deduct is a Γ-predicate (Conjecture 5.1.1):

Conjecture 5.5.1. Th(M) is weakly negation-complete for every Σ-model M.

5.6 open leads

We have seen in Theorem 5.4.5 that we can use Henkin’s reduction for ML if we learn more about deducts. So far, this is what we know about them:
[Figure: a diagram of the four statements “ψ is a Γ-predicate”, “ψ is a Γ-deduct”, “ψ is a Γ= -predicate”, and “ψ is a Γ= -deduct”, connected by the solid arrows (1) and (2) and by dotted arrows; arrow (1) carries the label “if Γ is H-complete”.]

Figure 5.1: Our current knowledge about Γ-deducts assuming Γ is a Σ-theory and ψ ∈ PatternΣ is closed. (1) and (2) are equivalent arrows, which are proved to hold if Γ is H-complete (Theorem 5.1.4). The dotted arrows are also equivalent and represent Conjecture 5.1.1.
So far, we have shown that the arrow (1) holds only for H-complete theories. If we proved the arrow (1) or (2) for every theory Γ, Theorem 5.4.5 says that completeness of H would be equivalent to the claim that every H-consistent theory has a model. This would greatly simplify things. Until then, it is unclear whether this direction has more to offer.
There are two more directions which could lead to new results.
First, recall that a theory Γ is H-complete iff Γ= is a conservative
extension of Γ (Theorem 5.1.2). Conservative extensions are tricky to
work with directly but maybe there are some techniques that we have
not tried. Second, we still have not looked at constructing some form
of a canonical model.
We mentioned that we cannot copy the construction of canonical
models from FOL because we do not have negation-complete theories.
That is why we have to look for inspiration elsewhere, namely in
modal logic. Modal logic has much in common with matching logic
and offers two insights into completeness of H:

(1) Some modal logics are known to be incomplete [2, 7]. If H is complete and ML can capture these logics, would it be a contradiction? This depends very much on what kind of capturing we would have and requires further research.

(2) Modal logic has a great deal of techniques for constructing canonical models that are different from the classical FOL construction. We might learn from these techniques how to construct a canonical model for ML.

The next chapter deals with how we can approach (2) and where it
could potentially lead.
6 CANONICAL MODELS FOR EQUALITY EXTENSIONS
In Chapter 5 we looked at ways to approach completeness of H using conservative extensions. Now we would like to turn to canonical models. Canonical models, in various forms, are a widely known technique for proving completeness of proof systems (see e.g. [14] or [3]). Recall that in FOL, canonical models are built using negation-complete theories. In Section 5.5 we saw that these are problematic in ML. We have to look elsewhere for inspiration, namely to modal logic [5].
In this chapter we show a new technique for constructing canonical models for equality extensions that builds upon the theory developed in [10], which used canonical models from [5] to show local completeness of H. Our contribution extends their construction as follows. For any given consistent equality extension Γ= , we show a construction of a model MΓ= such that

Γ= ` ϕ iff MΓ= |= ϕ.

In FOL this is sometimes called Henkin’s theorem [13, p. 78].


Observe that if we could construct a canonical model MΓ for any
consistent theory Γ, proving completeness of H would be straightfor-
ward. This is illustrated by the following example.

Example 6.0.1. We want to show that Γ |= ϕ implies Γ ` ϕ by contraposition. Let Γ 6` ϕ; in particular, Γ is consistent. If we can construct the canonical model MΓ , then by construction MΓ |= Γ and also MΓ 6|= ϕ, but then by definition of |= we have Γ 6|= ϕ.
We will see how we can do this for equality extensions Γ= . Our construction is thus an alternative proof of the fact that equality extensions are H-complete. It also shows exactly what role (Definedness) semantically plays in the question of completeness, which could help trigger new developments in solving this problem. This is how we proceed:

(1) We give an overview of the existing theory from [10].

(2) We present a new way to use this theory to construct MΓ= in the final Section 6.3.

6.1 local consistency

Even if we had an analogue of negation-complete theories (Section 5.5), it is not exactly clear how we could use them to construct a canonical model. The reason is that ML is not two-valued, which is problematic. Suppose we have some theory Γ and pattern ϕ such that Γ 6` ϕ. This does not tell us much about ϕ even if H is complete! Notice that Γ 6|= ϕ only means that ϕ does not match every element for some valuation. We do not know what exactly ϕ matches. The end of Section 5.4 hints, in the proof of Theorem 5.4.5, that we can restrict ourselves to proving completeness for Γ-predicates. However, Γ-predicates are hard to work with because they depend on the theory Γ. For these reasons we are forced to look for inspiration for canonical models elsewhere, namely in modal logic [5]. The main idea is to switch from (general) provability ` to the so-called local provability.

Definition 6.1.1 (Local provability [10]). Let Γ be a Σ-theory and ϕ ∈ PatternΣ . We say ϕ is locally provable from Γ, denoted Γ ⊩ ϕ, if there is some finite subset Γfin ⊆ Γ such that ` ⋀ Γfin → ϕ.
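As a simple illustration (ϕ and ψ stand for arbitrary Σ-patterns): {ϕ, ϕ → ψ} ⊩ ψ, because ` (ϕ ∧ (ϕ → ψ)) → ψ is an instance of (PT) and {ϕ, ϕ → ψ} is already a finite subset of itself. In particular, Γ ⊩ γ for every γ ∈ Γ, witnessed by the finite subset {γ} and the tautology ` γ → γ.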
There is a very intuitive correspondence between global and local deduction. Local deduction implies global deduction simply because we can repeatedly apply (MP).

Lemma 6.1.1 (Global and Local Deduction). If Γ ⊩ ϕ, then Γ ` ϕ.

Proof. Let Γ ⊩ ϕ. By definition there is some finite subset Γfin ⊆ Γ such that ` ⋀ Γfin → ϕ. Notice that ` ((ϕ1 ∧ ϕ2 ) → ϕ) ↔ (ϕ1 → (ϕ2 → ϕ)) by (PT). But then Γfin ` ϕ by repeated application of (MP). Because Γfin ⊆ Γ, we get Γ ` ϕ.

Corollary 6.1.1. If Γ 6` ⊥, then Γ 6⊩ ⊥.

Corollary 6.1.1 already suggests a very natural definition of local consistency, i.e., Γ 6⊩ ⊥. Instead of talking about maximally consistent sets, we shall talk about maximally locally consistent sets.

Definition 6.1.2 (Locally consistent set [10]). A set Γ ⊆ PatternΣ is called locally consistent iff Γ 6⊩ ⊥. Furthermore, if for every ϕ ∈ PatternΣ such that ϕ ∉ Γ we have Γ ∪ { ϕ} locally inconsistent, we say that Γ is a maximally locally consistent set (MCS)1 .

MCS’s have several useful properties for building a canonical model.


Most importantly, they are closed under negation, conjunction, and
ML implication. This is possible because local provability is less strict
than provability.

Lemma 6.1.2 (MCS properties [10]). Let Γ be an MCS in (Σ, Var). The
following properties hold.

(1) ϕ ∈ Γ iff Γ ϕ;

(2) ¬ ϕ ∈ Γ iff ϕ ∈
/ Γ;

(3) ϕ1 ∧ ϕ2 ∈ Γ iff ϕ1 ∈ Γ and ϕ2 ∈ Γ;


1 We use MCS instead of MLCS because we want to follow the naming in [10].
6.2 canonical models 65

(4) ϕ1 ∨ ϕ1 ∈ Γ iff ϕ1 ∈ Γ or ϕ2 ∈ Γ;

(5) ϕ1 , ϕ1 → ϕ2 ∈ Γ implies ϕ2 ∈ Γ.

(6) ∀ x. ϕ ∈ Γ implies ϕ[y/x ] ∈ Γ for all y ∈ Var.

MCS properties allow the kind of reasoning we needed in Henkin’s reduction (Theorem 5.1.1), especially when it comes to the deduction property:

Proposition 6.1.1. Γ ⊩ ϕ iff Γ ∪ {¬ ϕ} ⊩ ⊥.

Proof. (⇒) Let Γ ⊩ ϕ. Then there is some finite Γfin ⊆ Γ such that ` ⋀ Γfin → ϕ. This is the same as ` ⋀ Γfin → (¬ ϕ → ⊥), and by propositional reasoning ` (⋀ Γfin ) ∧ ¬ ϕ → ⊥. However, Γfin ∪ {¬ ϕ} is a finite subset of Γ ∪ {¬ ϕ}, and thus Γ ∪ {¬ ϕ} ⊩ ⊥ by definition of ⊩.
(⇐) This is symmetric to (⇒).

Similarly to Henkin’s technique for constructing canonical models, we consider witnessed sets, which add the so-called witnesses. If ∃ x. ϕ ∈ Γ for some MCS Γ, we do not know whether ϕ[y/x ] ∈ Γ for some y ∈ Var. This is different from property (6) in Lemma 6.1.2, which intuitively holds simply because of property (5) and the rule (Sub) in H. That is why we consider witnessed MCS’s:

Definition 6.1.3 (Witnessed MCS [10]). Let Γ be an MCS. We say that Γ is a witnessed MCS if for every ∃ x. ϕ ∈ Γ we have (∃ x. ϕ) → ϕ[y/x ] ∈ Γ for some y ∈ Var.
The reason why we can focus on witnessed MCS’s is that any MCS can be extended to a witnessed MCS (Lemma 6.1.3). Notice that in the following lemma we need to explicitly mention the variables we are using, because we have to extend the variable set for technical reasons.

Lemma 6.1.3 (Extension [10]). Let Γ be a locally consistent set in (Σ, Var). Then there exists a witnessed MCS Γ+ in (Σ, Var+ ) such that Var ⊆ Var+ and Γ ⊆ Γ+ .

6.2 canonical models

Now that we finally know what witnessed MCS’s are and what properties they have, we can define canonical models:

Definition 6.2.1 (Canonical model [10]). Let (Σ, Var) be a signature. A canonical (Σ, Var)-model K is a Σ-model defined as

• K = {∆ | ∆ is a witnessed MCS in (Σ, Var)},

• ∆ ∈ σK (∆1 , . . . , ∆n ) iff σ( ϕ1 , . . . , ϕn ) ∈ ∆ for all ϕi ∈ ∆i where 1 ≤ i ≤ n.
Note that canonical models are well-defined because witnessed MCS’s exist by Lemma 6.1.3. This is admittedly a very complex formalism. To show ∆ ∈ σK (∆1 , . . . , ∆n ), we have to check for all patterns ϕi ∈ ∆i whether σ( ϕ1 , . . . , ϕn ) ∈ ∆. However, there is an easier and stronger description of how the symbol interpretations behave. Rather than considering all patterns in each ∆i , the following lemma says that σ( ϕ1 , . . . , ϕn ) ∈ ∆ already implies that corresponding ∆i ’s exist for each 1 ≤ i ≤ n. What is more, each ϕi will be contained in ∆i :

Lemma 6.2.1 (Existence [10]). Let ∆ be a witnessed MCS and K the canonical model in the corresponding signature. If σ( ϕ1 , . . . , ϕn ) ∈ ∆, then there exist witnessed MCS’s ∆1 , . . . , ∆n ∈ K such that ϕ1 ∈ ∆1 , . . . , ϕn ∈ ∆n and ∆ ∈ σK (∆1 , . . . , ∆n ).
Notice that the canonical model does not depend in any way on a
theory Γ. That is where the so-called Γ-generated model steps in:

Definition 6.2.2 (Generated model [10]). Let Γ be a witnessed MCS in (Σ, Var) and K be the canonical (Σ, Var)-model. A Γ-generated model M is a Σ-model such that M is the smallest set satisfying

• Γ ∈ M,

• if ∆ ∈ M and there exist witnessed MCS’s ∆1 , . . . , ∆n ∈ K such that ∆ ∈ σK (∆1 , . . . , ∆n ) for some σ ∈ Σn , then ∆1 , . . . , ∆n ∈ M,

and for every σ ∈ Σn we have σM (∆1 , . . . , ∆n ) := M ∩ σK (∆1 , . . . , ∆n ).
This construction is again non-trivial, and this time we will devote much more space to describing its properties. One can imagine the Γ-generated model as a countable “cut” through the canonical model (Figure 6.1). The cut is determined by Γ only: notice that Γ itself is a witnessed MCS, which serves as the basis of the (inductive) construction. Then we make “transitions” to other witnessed MCS’s simply by choosing those witnessed MCS’s that we formally need as arguments for our symbols.
The cut is indeed countable. For each set added to the generated model, there is a so-called generating path from the basis Γ to this set:

Definition 6.2.3 (Generating path). Let Γ be a witnessed MCS in (Var, Σ), K the canonical model in (Var, Σ) and M some Γ-generated model. A generating path relation P ⊆ M × Σ∗ is defined inductively as follows:

• P(Γ, ε).

• If P(∆, π ) and there exist ∆1 , . . . , ∆n ∈ K such that ∆ ∈ σK (∆1 , . . . , ∆n ) for some σ ∈ Σn , then also P(∆i , πσ) for all 1 ≤ i ≤ n.
[Figure: a sketched graph whose nodes are witnessed MCS’s Γ, ∆1 , ∆2 , . . . , ∆10 , some of them marked with representative variables x, x1 , x2 , x3 , x7 and reached via generating paths π1 , π2 .]

Figure 6.1: A simplified sketch of the carrier set in some Γ-generated model M. Each circle is a witnessed MCS in the canonical model K. Dashed circles are witnessed MCS’s from K that are not included in M. Regular circles are witnessed MCS’s included in M by definition of Γ-generated models, i.e., each one of these MCS’s is added by some generating path π and is represented by some unique variable (Lemma 6.2.3).

We say that π is a generating path of a witnessed MCS ∆ ∈ M if P(∆, π ).
Notice that the relation P exactly copies the definition of a generated model (Definition 6.2.2). That is why, obviously, for every set ∆ ∈ M there is at least one generating path π of ∆, i.e., P(∆, π ). In particular, for Γ we have P(Γ, ε) by construction. Given a generating path π, we define a path context. Path contexts are “dummy” symbol contexts (Definition 4.2.1) such that the path to □ is exactly a generating path:

Definition 6.2.4 (Path context). A path context Cπ for a generating path π is defined inductively as follows.

• Cε ≡ □ is a path context,

• Cπσ = Cπ [σ(⊤, . . . , □, . . . , ⊤)] for a path context Cπ and σ ∈ Σn , where σ is applied to n arguments, one of which is □ and the rest are ⊤.
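For example, assuming purely for illustration that σ ∈ Σ2 and δ ∈ Σ1 , one possible path context for the generating path σδ is

Cσδ = Cσ [δ(□)] = σ(⊤, δ(□)),

obtained by choosing the second argument position of σ for □ and then plugging δ(□) into it.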


Path contexts are interesting because of the fact that they are con-
tained in the basis Γ of the Γ-generated model. This means that rep-
resentatives of every set we add to the generated model are already
given at the start of the construction:

Lemma 6.2.2 ([10]). Let Γ be a witnessed MCS, let M be the Γ-generated


model and consider some ∆ ∈ M. If ∆ has a generating path π, then
Cπ [ ϕ] ∈ Γ for any pattern ϕ ∈ ∆. 
This has several immediate consequences. For example, each witnessed MCS in the generated model contains unique variables. In other words, a single variable cannot be in two different MCS’s:
Lemma 6.2.3 (Singleton variables [10]). Let Γ be a witnessed MCS in
(Σ, Var) and consider the Γ-generated model M. For every x ∈ Var
and ∆1 , ∆2 ∈ M we have that x ∈ ∆1 ∩ ∆2 implies ∆1 = ∆2 . 
This means that each set contained in a generated model is represented not only by a generating path, but also by a unique variable:

Lemma 6.2.4. Let Γ be a witnessed MCS in (Σ, Var) and consider the Γ-generated model M. For every ∆ ∈ M there is at least one variable y ∈ Var such that y ∈ ∆.

Proof. Every ∆ ∈ M must contain a pattern α-equivalent to ∃ x. x, because ` ∃ x. x by definition of H and ∆ is an MCS. Since ∆ is also witnessed, we have (∃ x. x ) → x [y/x ] ∈ ∆ for some y ∈ Var, but then by MCS properties y ∈ ∆.

Corollary 6.2.1 (Representatives). Let Γ be a witnessed MCS in (Var, Σ) and M be the Γ-generated model. Then for every ∆ ∈ M there is at least one variable y ∈ Var such that y ∈ ∆, and y belongs to no other MCS in M.
What are Γ-generated models good for? We would like them to connect syntax with semantics. In modal logic, this is often called a truth lemma (see, e.g., [4, p. 201]). Ideally, we would like to find a generated model M and an M-valuation ρ for the generated model such that the following holds: for every witnessed MCS ∆ ∈ M and every ϕ ∈ PatternΣ we have

ϕ ∈ ∆ iff ∆ ∈ ρM ( ϕ).

Is this possible? There is a fundamental problem right at the core. Obviously we need a valuation ρ such that x ∈ ∆ iff ρ( x ) = ∆. Lemma 6.2.3 guarantees that such a valuation would be a function. However, we have no guarantee that this function would be total. In other words, we do not know whether each variable is covered by some witnessed MCS in the generated model:
Definition 6.2.5 (M-covered variable). Let Γ be a witnessed MCS and M the Γ-generated model. Then the variable x ∈ Var is called M-covered (or simply covered) iff x ∈ ∆ for some witnessed MCS ∆ ∈ M.
That is why we have to come up with yet another definition that “completes” the generated models by filling the holes for those variables that might not be covered. We call these completed models.

Definition 6.2.6 (Completed model). Let Γ be a witnessed MCS in (Var, Σ) and M be the Γ-generated model. Let ∗ ∉ M be some fresh element. A Γ-completed model is any model M∗ such that:
• M∗ = M ∪ {∗} if some x ∈ Var is not M-covered, and M∗ = M otherwise;

• σM∗ (m1 , . . . , mn ) =
  - ∅ if mi = ∗ for some 1 ≤ i ≤ n,
  - σM (m1 , . . . , mn ) ∪ {∗} if mi = Γ for some 1 ≤ i ≤ n and ∗ ∈ M∗ ,
  - σM (m1 , . . . , mn ) otherwise.
Notice that if a Γ-generated model M has all variables covered, then
M is a (unique) Γ-completed model. We only add ∗ if some variable
is not M-covered, in which case there can be multiple completed
models (depending on the chosen ∗). This will be very important
in our construction. Once we have completed models, we can easily
define the so-called completed valuation, for which we already gave the
intuition:

Definition 6.2.7 (Completed valuation). Let Γ be a witnessed MCS and M∗ a Γ-completed model. Then the completed valuation of M∗ is an M∗ -valuation ρ defined as:

ρ( x ) = ∆ if x ∈ ∆ for some ∆ ∈ M∗ , and ρ( x ) = ∗ otherwise.
Completed valuations are well-defined because

• by Lemma 6.2.3, if x ∈ ∆1 and x ∈ ∆2 , then ∆1 = ∆2 , and

• ρ( x ) = ∗ iff x is not covered, in which case ∗ is indeed an element of the completed model.

Γ-completed models connect syntax and semantics, i.e., the so-called truth lemma holds for Γ-completed models and completed valuations:

Lemma 6.2.5 (Truth Lemma [10]). Let Γ be a witnessed MCS in (Var, Σ).
Consider a Γ-completed model M and the corresponding completed
valuation ρ.
For every witnessed MCS ∆ ∈ M and every ϕ ∈ PatternΣ we have

ϕ ∈ ∆ iff ∆ ∈ ρM ( ϕ).


This lemma is the main idea behind proving local completeness of H, which we discussed in Section 4.2.4. We do not present the proof here; interested readers are referred to [10] for details.
6.3 new results

We want to use the theory developed in this chapter for constructing a canonical model of a given consistent theory Γ. In this section we show how to do it for equality extensions. There is only one crucial observation to be made. For simplicity, assume that some witnessed MCS Γ contains (Definedness) (the same argument then holds for an equality extension). Then we can show that the Γ-generated model will have all variables covered! That is, the Γ-generated model will also be a Γ-completed model:

Lemma 6.3.1 (Variable Covering). Let Γ be a witnessed MCS in (Var, Σ) and consider the Γ-generated model M. If ∀ x. ⌈ x ⌉ ∈ Γ, then for every x ∈ Var we have x ∈ ∆ for some ∆ ∈ M (x is M-covered).

Proof. Let y ∈ Var be any variable. We have ` (∀ x. ⌈ x ⌉) → ⌈ x ⌉[y/x ] by (Sub). By MCS properties ⌈y⌉ ∈ Γ.
But then by the Existence Lemma (Lemma 6.2.1) there exists some ∆1 ∈ K such that y ∈ ∆1 and Γ ∈ ⌈·⌉K (∆1 ). Since Γ ∈ M and there exists ∆1 ∈ K such that Γ ∈ ⌈·⌉K (∆1 ), by construction of the Γ-generated model we also have ∆1 ∈ M. This means y ∈ ∆1 for some ∆1 ∈ M, i.e., y is M-covered.

Corollary 6.3.1. Let Γ be a witnessed MCS. If ∀ x. ⌈ x ⌉ ∈ Γ, then a Γ-generated model is also a Γ-completed model.
Most importantly, Corollary 6.3.1 says that a Γ-completed valuation
ρ always returns a witnessed MCS (and never ∗). But now we can
easily prove for theories Γ containing the axiom (Definedness) that
there is a generated model M satisfying Henkin’s theorem:

Theorem 6.3.1. Let Γ be a consistent Σ-theory containing ∀ x. ⌈ x ⌉. Then there exists a witnessed MCS Γ+ ⊇ Γ such that a Γ+ -completed model M satisfies:

Γ ` ϕ iff M |= ϕ.

Proof. Let Γ be a (globally) consistent Σ-theory. By Lemma 4.2.4 we have Γ ` ¬C [¬ψ] for any nested symbol context C in the signature of Γ and any ψ such that Γ ` ψ. But then

Γ′ = Γ ∪ {¬C [¬ψ] ∈ PatternΣ | Γ ` ψ}

is (globally) consistent because it is obviously a conservative extension of Γ. Also Γ′′ = Γ′ ∪ {¬ ϕ | Γ 6` ϕ} is locally consistent by Proposition 6.1.1.
Extend Γ′′ to a witnessed MCS Γ+ and consider any Γ+ -completed model M. We know that ∀ x. ⌈ x ⌉ ∈ Γ ⊆ Γ+ , but then by the covering lemma (Lemma 6.3.1), M is also a generated model, i.e., ∗ ∉ M. Now take the completed M-valuation ρ. We will show that Γ ` ψ implies M |= ψ and that Γ 6` ϕ implies M 6|= ϕ. We can assume w.l.o.g. that ψ and ϕ are closed (Lemma 2.4.1).

(1) Let Γ ` ψ; we show M |= ψ. Because ψ is closed, we just need to show that ∆ ∈ ρM (ψ) for every ∆ ∈ M. By the Truth Lemma (Lemma 6.2.5), ∆ ∈ ρM (ψ) iff ψ ∈ ∆. Assume for a contradiction that there is some ∆ ∈ M such that ψ ∉ ∆. By MCS properties thus ¬ψ ∈ ∆. Recall that ∆ has at least one generating path π, and by Lemma 6.2.2 Cπ [¬ψ] ∈ Γ+ . However, by construction we have ¬Cπ [¬ψ] ∈ Γ+ because Cπ [¬ψ] ∈ PatternΣ . By MCS properties Cπ [¬ψ] ∉ Γ+ , a contradiction.

(2) Let Γ 6` ϕ. Then we can show M 6|= ϕ, e.g., by showing Γ+ ∉ ρM ( ϕ) (because this would mean ρM ( ϕ) ≠ M). By the Truth Lemma (Lemma 6.2.5) this holds iff ϕ ∉ Γ+ . By construction ¬ ϕ ∈ Γ′′ ⊆ Γ+ , and thus by MCS properties ϕ ∉ Γ+ . Therefore ρM ( ϕ) ≠ M and M 6|= ϕ by definition.

Corollary 6.3.2. For every consistent equality extension Γ= there is a model MΓ= such that Γ= ` ϕ iff MΓ= |= ϕ.

Proof. Use the same technique as presented in Theorem 6.3.1, replacing (Definedness) with the corresponding fresh symbol.
7 CONCLUSION
We set out to answer whether System H is complete as a proof system
w.r.t. all ML theories. Even though we did not give a final answer,
we gave a summary of results useful for answering this question,
found a very natural characterization of completeness, and exploited
it to find new classes of H-complete theories. Along the way, we
also proved related results, such as the compactness property. We
have also extended the existing theory with new concepts, such as equality extensions, Γ-deducts, weakly negation-complete theories, and H-consistency, and showed that they mostly behave as expected. Regardless of the final answer to completeness of H, the presented results will remain relevant and should (hopefully) shed some new light on this question.

what did not work. We have seen that ML is an FOL variant in Chapter 3; in particular, these two logics have the same level of expressiveness. There are many connections, and some of them are useful for results in ML. At the end of Section 5.4, we saw that the two-valued Γ-predicates (resembling FOL semantics) are actually the only patterns “relevant” for completeness of H. However, it currently seems that investigating completeness of H from an FOL perspective has reached a dead end. Unless we learn more about the counterparts of Γ-predicates, i.e., Γ-deducts, it is hard to imagine how to make FOL techniques work for completeness of H. For example, in Section 5.5 we presented an analogue of negation-complete theories that had seemed promising; in the end we could not find out much more, even under the assumption that H is complete.

what worked. We looked away from FOL and tried to prove many
things directly in ML. First, the intuition that (Definedness) “makes”
H complete led to a more tractable if-and-only-if condition for com-
pleteness of H: System H is complete iff every equality extension is a
conservative extension. To the best of our knowledge, we have not seen
such an approach in completeness proofs. What is more, this approach
allowed us to focus on finite theories and prove completeness on a
case-by-case basis. Second, the modal logic idea for canonical models
led to a constructive proof that H is complete w.r.t. equality extensions.
We now know what canonical models for equality extensions look
like, no matter what these theories with equality specify. This is a new
result, which did not follow directly from the results presented in [10].

future work. Our characterization of completeness depends on conservative extensions. Conservative extensions are difficult to handle without a complete proof system. Unless we find alternative techniques to work with conservative extensions, it seems that looking for inspiration in modal logic is the most promising direction for this problem. We saw that we could take canonical models in modal logic and turn them into canonical models of ML equality extensions. It might be possible to take this construction even further. For example, the definition of completed models could be tweaked to allow stronger versions of the truth lemma. On the other hand, there are also hints suggesting incompleteness of H. Some modal logics are known to be incomplete (see, e.g., [2] or [12]). If we can define them as matching logic theories, what would it lead to? This requires further research.
BIBLIOGRAPHY

[1] Oskar Becker. Zur Logik der Modalitäten. Max Niemeyer Verlag,
1930.
[2] Johan F. A. K. van Benthem. “Two simple incomplete modal logics.” In: Theoria 44.1 (Feb. 11, 2008), pp. 25–37. issn: 00405825, 17552567. doi: 10.1111/j.1755-2567.1978.tb00830.x. url: https://onlinelibrary.wiley.com/doi/10.1111/j.1755-2567.1978.tb00830.x (visited on 05/10/2022).
[3] Patrick Blackburn, Johan F. A. K. van Benthem, and Frank
Wolter, eds. Handbook of Modal Logic. 1st edition. Amsterdam
Boston: Elsevier Science, Dec. 25, 2006. 1260 pp. isbn: 978-0-444-
51690-9.
[4] Patrick Blackburn, Maarten de Rijke, and Yde Venema. Modal
Logic. Cambridge: Cambridge University Press, Sept. 30, 2002.
578 pp. isbn: 978-0-521-52714-9.
[5] Patrick Blackburn and Miroslava Tzakova. “Hybrid completeness.” In: Logic Journal of the IGPL 6.4 (July 1998), pp. 625–650. issn: 1367-0751. doi: 10.1093/jigpal/6.4.625. url: https://academic.oup.com/jigpal/article-pdf/6/4/625/1878635/060625.pdf.
[6] Denis Bogdanas and Grigore Roşu. “K-Java: A Complete Seman-
tics of Java.” In: Proceedings of the 42nd Annual ACM SIGPLAN-
SIGACT Symposium on Principles of Programming Languages. POPL
’15: The 42nd Annual ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages. Mumbai India: ACM,
Jan. 14, 2015, pp. 445–456. isbn: 978-1-4503-3300-9. doi: 10.1145/
2676726 . 2676982. url: https : / / dl . acm . org / doi / 10 . 1145 /
2676726.2676982 (visited on 05/01/2022).
[7] George Boolos and Giovanni Sambin. “An incomplete system of
modal logic.” In: Journal of Philosophical Logic 14.4 (Nov. 1, 1985),
pp. 351–358. issn: 1573-0433. doi: 10 . 1007 / BF00649480. url:
https://doi.org/10.1007/BF00649480 (visited on 05/10/2022).
[8] Xiaohong Chen, Dorel Lucanu, and Grigore Roşu. “Match-
ing logic explained.” In: Journal of Logical and Algebraic Meth-
ods in Programming 120 (2021), p. 100638. issn: 2352-2208. doi:
https : / / doi . org / 10 . 1016 / j . jlamp . 2021 . 100638. url:
https : / / www . sciencedirect . com / science / article / pii /
S2352220821000018.
[9] Xiaohong Chen and Grigore Roşu. “Applicative matching logic.”
In: (July 31, 2019). url: https://www.ideals.illinois.edu/
handle/2142/104616 (visited on 05/09/2022).

[10] Xiaohong Chen and Grigore Roşu. Matching mu-Logic (Technical Report). University of Illinois at Urbana-Champaign, Tech. Rep. 2019. url: http://hdl.handle.net/2142/102281 (visited on 09/24/2021).
[11] Xiaohong Chen and Grigore Roşu. “Matching mu-Logic.” In:
Proceedings of the 34th Annual ACM/IEEE Symposium on Logic in
Computer Science (LICS’19). ACM/IEEE, June 2019, pp. 1–13. doi:
https://doi.org/10.1109/LICS.2019.8785675.

[12] Max J. Cresswell and George E. Hughes. A New Introduction to


Modal Logic. 1st edition. London ; New York: Routledge, Sept. 12,
1996. 432 pp. isbn: 978-0-415-12600-7.
[13] Heinz-Dieter Ebbinghaus, Jörg Flum, and Wolfgang Thomas.
Mathematical Logic, 2nd Edition. 2nd edition. New York: Springer,
June 10, 1994. 301 pp. isbn: 978-0-387-94258-2.
[14] Herbert B. Enderton. A Mathematical Introduction to Logic. 2nd
edition. San Diego: Academic Press, Jan. 5, 2001. 336 pp. isbn:
978-0-12-238452-3.
[15] Jean-Yves Girard. Proofs and types. Cambridge tracts in theo-
retical computer science 7. Cambridge [England] ; New York:
Cambridge University Press, 1989. 176 pp. isbn: 978-0-521-37181-
0.
[16] Kurt Gödel. “Die Vollständigkeit der Axiome des logischen
Funktionenkalküls.” In: Monatshefte für Mathematik und Physik
37.1 (Dec. 1930), pp. 349–360. issn: 0026-9255, 1436-5081. doi:
10 . 1007 / BF01696781. url: http : / / link . springer . com / 10 .
1007/BF01696781 (visited on 04/27/2022).

[17] David Harel, Dexter Kozen, and Jerzy Tiuryn. Dynamic logic.
Foundations of computing series. Cambridge, Mass. London:
MIT Press, 2000. 459 pp. isbn: 978-0-262-52766-8 978-0-262-08289-
1.
[18] John Harrison. Handbook of Practical Logic and Automated Reason-
ing. 1st edition. Cambridge ; New York: Cambridge University
Press, Apr. 13, 2009. 702 pp. isbn: 978-0-521-89957-4.
[19] Chris Hathhorn, Chucky Ellison, and Grigore Roşu. “Defining
the undefinedness of C.” In: ACM SIGPLAN Notices 50.6 (Aug. 7,
2015), pp. 336–345. issn: 0362-1340, 1558-1160. doi: 10.1145/
2813885 . 2737979. url: https : / / dl . acm . org / doi / 10 . 1145 /
2813885.2737979 (visited on 05/01/2022).

[20] Leon Henkin. “The completeness of the first-order functional


calculus.” In: Journal of Symbolic Logic 14.3 (Sept. 1949), pp. 159–
166. issn: 0022-4812, 1943-5886. doi: 10 . 2307 / 2267044. url:
https : / / www . cambridge . org / core / product / identifier /
S0022481200105675/type/journal_article (visited on 03/17/2022).
[21] Dexter Kozen. “Results on the propositional mu-calculus.” In:


Theoretical Computer Science 27.3 (1983), pp. 333–354. issn: 03043975.
doi: 10.1016/0304-3975(82)90125-6. url: https://linkinghub.
elsevier . com / retrieve / pii / 0304397582901256 (visited on
05/01/2022).
[22] Saul A. Kripke. “A completeness theorem in modal logic.” In:
Journal of Symbolic Logic 24.1 (Mar. 1959), pp. 1–14. issn: 0022-
4812, 1943-5886. doi: 10 . 2307 / 2964568. url: https : / / www .
cambridge.org/core/product/identifier/S0022481200125058/
type/journal_article (visited on 05/09/2022).

[23] Matching Logic. Matching Logic. url: http://www.matching-


logic.org/ (visited on 05/09/2022).

[24] Daejun Park, Andrei Stefănescu, and Grigore Roşu. “KJS: a


complete formal semantics of JavaScript.” In: Proceedings of the
36th ACM SIGPLAN Conference on Programming Language Design
and Implementation. PLDI ’15: ACM SIGPLAN Conference on
Programming Language Design and Implementation. Portland
OR USA: ACM, June 3, 2015, pp. 346–356. isbn: 978-1-4503-3468-
6. doi: 10.1145/2737924.2737991. url: https://dl.acm.org/
doi/10.1145/2737924.2737991 (visited on 05/01/2022).

[25] Amir Pnueli. “The temporal logic of programs.” In: 18th Annual
Symposium on Foundations of Computer Science (sfcs 1977). 18th
Annual Symposium on Foundations of Computer Science (sfcs
1977). Providence, RI, USA: IEEE, Sept. 1977, pp. 46–57. doi:
10.1109/SFCS.1977.32. url: http://ieeexplore.ieee.org/
document/4567924/ (visited on 05/01/2022).

[26] Grigore Rosu, Andrei Stefanescu, Stefan Ciobâcá, and Brandon


M. Moore. “One-Path Reachability Logic.” In: 2013 28th Annual
ACM/IEEE Symposium on Logic in Computer Science. 2013 28th
Annual ACM/IEEE Symposium on Logic in Computer Science.
ISSN: 1043-6871. June 2013, pp. 358–367. doi: 10.1109/LICS.
2013.42.

[27] Grigore Roşu. “Matching logic.” In: Logical Methods in Computer


Science 13.4 (Dec. 2017), pp. 1–61. doi: http://arxiv.org/abs/
1705.06312.

[28] Grigore Roşu and Traian Florin Şerbănuţă. “K Overview and


SIMPLE Case Study.” In: Electronic Notes in Theoretical Computer
Science 304 (June 2014), pp. 3–56. issn: 15710661. doi: 10.1016/j.
entcs.2014.05.002. url: https://linkinghub.elsevier.com/
retrieve/pii/S1571066114000383 (visited on 05/01/2022).

[29] Andrei Stefănescu, Daejun Park, Shijiao Yuwen, Yilong Li, and
Grigore Roşu. “Semantics-based program verifiers for all lan-
guages.” In: ACM SIGPLAN Notices 51.10 (Dec. 5, 2016), pp. 74–
91. issn: 0362-1340, 1558-1160. doi: 10.1145/3022671.2984027.
url: https : / / dl . acm . org / doi / 10 . 1145 / 3022671 . 2984027


(visited on 04/26/2022).
[30] Alfred Tarski. The concept of truth in formalized languages. New
York, NY: Springer Berlin Heidelberg, 2016. isbn: 978-3-319-
32614-6.
