Differential Linear Logic
Differential linear logic enriches linear logic with additional logical rules for the
exponential connectives, dual to the usual rules of dereliction, weakening and
contraction. We present a proof-net syntax for differential linear logic and a categorical
axiomatization of its denotational models. We also introduce a simple categorical
condition on these models under which a general antiderivative operation becomes
available. Last we briefly describe the model of sets and relations and give a more
detailed account of the model of finiteness spaces and linear and continuous functions.
Introduction
Extending Linear Logic (LL) with differential constructs has been considered by Girard
at a very early stage of the design of this system. This option appears at various
places in the conclusion of (Gir86), entitled Two years of linear logic: selection from the
garbage collector. In Section V.2 The quantitative attempt of that conclusion, the idea
of a syntactic Taylor expansion is explicitly mentioned as a syntactic counterpart of the
quantitative semantics of the λ-calculus (Gir88). However it is contemplated there as a
reduction process rather than as a transformation on terms. In Section V.5 The exponentials,
the idea of reducing λ-calculus substitution to a more elementary linear operation
explicitly viewed as differentiation is presented as one of the basic intuitions behind the
exponential of LL. The connection of this idea with Krivine’s Machine (Kri85; Kri07) and
its linear head reduction mechanism (DR99) is explicitly mentioned. In this mechanism,
first considered by De Bruijn and called mini-reduction in (DB87), it is only the head
occurrence of a variable which is substituted during reduction. This restriction is very
meaningful in LL: the head occurrence is the only occurrence of a variable in a term
which is linear.
LL is based on singling out, among all proofs, those which are linear with respect to
their hypotheses. The word linear has here two deeply related meanings.
— An algebraic meaning: a linear morphism is a function which preserves sums, linear
combinations, joins, unions (depending on the context). In most denotational models
of LL, linear proofs are interpreted as functions which are linear in that sense.
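A concrete illustration, anticipating the model of sets and relations described at the end of the paper: a relation R ⊆ X × Y induces a map on subsets of X, and this map preserves arbitrary unions and the empty set, which is exactly linearity when sums are unions. A minimal Python sketch (the relation R below is a hypothetical example chosen for illustration):

```python
# A relation R ⊆ X × Y acts on subsets of X; this action preserves
# unions and the empty set, i.e. it is "linear" when sums are unions.

def apply_relation(R, S):
    """Direct image of the subset S under the relation R."""
    return {y for (x, y) in R if x in S}

# A small hypothetical relation, for illustration only.
R = {(0, 'a'), (0, 'b'), (1, 'b'), (2, 'c')}

S, T = {0}, {1, 2}
lhs = apply_relation(R, S | T)                     # image of the union
rhs = apply_relation(R, S) | apply_relation(R, T)  # union of the images
print(lhs == rhs)                 # linearity: True
print(apply_relation(R, set()))   # the empty set is preserved: set()
```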
Thomas Ehrhard 2
and erasable. This fact could not be observed in LL because promotion is the only rule of
LL which allows one to introduce the “!” modality: without promotion, it is impossible
to build a proof object that can be cut on a contraction or a weakening rule.
In Differential LL (DiLL), there are two new rules to introduce the “!” modality: coweakening
and codereliction. The first of these rules allows one to introduce an empty proof
of type !A and the second one allows one to turn a proof of type A into a proof of type
!A, without making it duplicable in sharp contrast with the promotion rule. The last new
rule, called cocontraction, allows one to merge two proofs of type !A for creating a new
proof of type !A. This latter rule is similar to the tensor rule of ordinary LL with the
difference that the two proofs glued together by a cocontraction must have the same type
and cannot be distinguished anymore deterministically, whereas the two proofs glued by
a tensor can be separated again by cutting the resulting proof against a par rule. These
new rules are called costructural rules to stress the symmetry with the usual structural
rules of LL.
DiLL has therefore a finite fragment which contains the standard “?” rules (weakening,
contraction and dereliction) as well as the new “!” ones (coweakening, cocontraction and
codereliction), but not the promotion rule. Cut elimination in this system generates
sums of proofs, and therefore it is natural to endow proofs with a vector space (or
module) structure over a field (or more generally over a semiring1). This fragment has
the following pleasant properties:
— It enjoys strong normalization, even in the untyped case, as long as one considers only
proof-nets which satisfy a correctness criterion similar to the standard Danos-Regnier
criterion for multiplicative LL (MLL).
— In this fragment, all proofs are linear combinations of “simple proofs” which do not
contain linear combinations: this is possible because all the syntactic constructions
of this fragment are multilinear. So proofs are similar to polynomials or power series,
simple proofs playing the role of monomials in this algebraic analogy which is strongly
suggested by the denotational models of DiLL.
Moreover, it is possible to transform any instance of the promotion rule (which is applied
to a sub-proof π) into an infinite linear combination of proofs containing copies of π: this
is the Taylor expansion of promotion. This operation can be applied hereditarily to all
instances of the promotion rule in a proof, giving rise to an infinite linear combination
of promotion-free DiLL simple proofs with positive rational coefficients.
1 This general setting allows us to cover also “qualitative” situations where sums of proofs are lubs in
a poset.
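As a heuristic only (it is not part of the syntax), the analytic counterpart of this expansion is the classical Taylor formula of an analytic map at 0:

```latex
f(x) \;=\; \sum_{n=0}^{\infty} \frac{1}{n!}\, f^{(n)}(0)(x,\dots,x)
```

where the n-linear terms f⁽ⁿ⁾(0) play the role of the promotion-free simple proofs, and the coefficients 1/n! account for the positive rational coefficients mentioned above.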
One important step in our presentation of the categorical setting for interpreting differential
LL is the notion of an exponential structure. It is the categorical counterpart of
the finitary fragment of DiLL, that is, the fragment DiLL0 where the promotion rule is
not required to hold.
An exponential structure consists of a preadditive2 ∗-autonomous category L together
with an operation which maps any object X of L to an object !X of L equipped with
a structure of ⊗-bialgebra (representing the structural and costructural rules) as well as
a “dereliction” morphism in L(!X, X) and a “codereliction” morphism in L(X, !X). The
important point here is that the operation X ↦ !X is not assumed to be functorial (it
has nevertheless to be a functor on isomorphisms). Using this simple structure, we define
in particular morphisms ∂̄_X ∈ L(!X ⊗ X, !X) and ∂_X ∈ L(!X, !X ⊗ X).
An element of L(!X, Y ) can be considered as a non-linear morphism from X to Y (some
kind of generalized polynomial, or analytical function), but these morphisms cannot be
composed. It is nevertheless possible to define a notion of polynomial morphism of this kind,
and these polynomial morphisms can be composed, giving rise to a category which is
cartesian if L is cartesian.
By composition with ∂̄_X ∈ L(!X ⊗ X, !X), any element f of L(!X, Y) can be differentiated,
giving rise to an element f′ of L(!X ⊗ X, Y) that we consider as its derivative3.
This operation can be performed again, giving rise to f″ ∈ L(!X ⊗ X ⊗ X, Y) and, assuming
that cocontraction is commutative, this morphism is symmetric in its two last
linear parameters (a property usually known as the Schwarz Lemma).
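In the analytic reading, this symmetry is the classical Clairaut–Schwarz property of second derivatives: for a twice differentiable map f,

```latex
f''(x)(u,v) \;=\; f''(x)(v,u),
```

the familiar equality of mixed partial derivatives.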
In this general context, a very natural question arises. Given a morphism g ∈ L(!X ⊗
X, Y) whose derivative g′ ∈ L(!X ⊗ X ⊗ X, Y) is symmetric, can one always find a
morphism f ∈ L(!X, Y) such that g = f′? Inspired by the usual proof of the Poincaré
Lemma, we show that such an antiderivative is always available as soon as the natural
morphism Id_{!X} + (∂̄_X ∂_X) ∈ L(!X, !X) is an isomorphism for each object X. We explain
2 This means that the monoidal category is enriched over commutative monoids. Actually, we assume
more generally that it is enriched over the category of k-modules, where k is a given semi-ring.
3 Or differential, or Jacobian: by monoidal closedness, f′ can be seen as an element of L(!X, X ⊸ Y)
where X ⊸ Y is the object of morphisms from X to Y in L, that is, of linear morphisms from X to
Y, and the operation f ↦ f′ satisfies all the ordinary properties of differentiation.
how this property is related to a particular case of integration by parts. We also describe
briefly a syntactic version of antiderivatives in a promotion-free differential λ-calculus.
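The classical computation that inspires this antiderivative can be checked numerically: for a 1-form g on ℝ² whose Jacobian is symmetric (the hypothesis of the Poincaré Lemma), the formula f(x) = ∫₀¹ g(tx) · x dt produces an antiderivative of g. A sketch under these assumptions, with a hypothetical g chosen only for the check:

```python
# Poincaré-Lemma-style antiderivative: if the 1-form g has a symmetric
# Jacobian, then f(x) = ∫_0^1 g(t·x)·x dt satisfies f' = g.
# The g below is hypothetical, chosen only to illustrate the computation.

def g(x, y):
    # g = dF for F(x, y) = x²y + xy², so its Jacobian is symmetric.
    return (2*x*y + y*y, x*x + 2*x*y)

def antiderivative(x, y, steps=20000):
    # Midpoint-rule evaluation of ∫_0^1 g(t·(x, y)) · (x, y) dt.
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        gx, gy = g(t * x, t * y)
        total += (gx * x + gy * y) * h
    return total

def gradient(f, x, y, eps=1e-4):
    # Central finite differences.
    return ((f(x + eps, y) - f(x - eps, y)) / (2 * eps),
            (f(x, y + eps) - f(x, y - eps)) / (2 * eps))

fx, fy = gradient(antiderivative, 1.0, 2.0)
print(fx, fy)   # close to g(1, 2) = (8, 5)
```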
To interpret the whole of DiLL, including the promotion rule, one has to assume that !
is an endofunctor on L and that this functor is endowed with a structure of comonad and
a monoidal structure; all these data have to satisfy some coherence conditions wrt. the
exponential structure. These conditions are essential to prove that the interpretation of
proof-nets is invariant under the various reduction rules, among which the most complicated
one is an LL version of the usual chain rule of calculus. Our main references
here are the work of Bierman (Bie95), Melliès (Mel09) and, for the commutations involving
costructural logical rules, our concrete models (Ehr05; Ehr02), the categorical
setting developed by Blute, Cockett and Seely (BCS06) and, very importantly, the work
of Fiore (Fio07).
One major a priori methodological principle applied in this paper is to stick to Classical
Linear Logic, meaning in particular that the categorical models we consider are
∗-autonomous categories. This is justified by the fact that most of the concrete models we
have considered so far satisfy this hypothesis (with the noticeable exception of (BET12))
and it is only in this setting that the new symmetries introduced by the differential and
costructural rules appear clearly. A lot of the material presented in this paper could probably
be carried over to a more general intuitionistic Linear Logic setting.
Some aspects of DiLL are only alluded to in this presentation, the most significant
one being certainly the Taylor expansion formula and its connection with linear head
reduction. On this topic, we refer to (ER08; ER06; Ehr10).
Notations
In this paper, a set of coefficients is needed, which has to be a commutative semi-ring.
This set will be denoted as k. In Section 4.3, k will be assumed to be a field but this
assumption is not needed before that section.
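Both ℕ and the Boolean semiring qualify; in the latter, sums are least upper bounds, which covers the qualitative situations mentioned in the introduction. A minimal sketch of these two choices of k (class names are ours):

```python
# Two commutative semirings of coefficients: the natural numbers, and
# the Boolean semiring where 1 + 1 = 1 (sums are least upper bounds).

class BoolSemiring:
    zero, one = False, True
    @staticmethod
    def add(a, b): return a or b     # join
    @staticmethod
    def mul(a, b): return a and b    # meet

class NatSemiring:
    zero, one = 0, 1
    @staticmethod
    def add(a, b): return a + b
    @staticmethod
    def mul(a, b): return a * b

# In the qualitative case, adding a proof to itself does not change it:
print(BoolSemiring.add(BoolSemiring.one, BoolSemiring.one))  # True (1 + 1 = 1)
print(NatSemiring.add(1, 1))                                 # 2
```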
1.1.2. LL types. Let A be a set of type atoms ranged over by α, β, . . . , together with an
involution α ↦ ᾱ such that ᾱ ≠ α. Types are defined as follows.
— if α ∈ A then α is a type;
— if A and B are types then A ⊗ B and A`B are types;
— if A is a type then !A and ?A are types.
The linear negation A⊥ of a type A is given by the following inductive definition: α⊥ = ᾱ;
(A ⊗ B)⊥ = A⊥ ` B⊥; (A ` B)⊥ = A⊥ ⊗ B⊥; (!A)⊥ = ?A⊥ and (?A)⊥ = !A⊥.
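The involutive character of this definition is mechanical enough to be checked by a short program. A sketch in Python (the constructor names are ours, not part of the syntax above):

```python
# LL types with an involutive linear negation, following the inductive
# definition above. Atoms come in dual pairs (α, ᾱ).

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str
    bar: bool = False     # ᾱ is Atom(name, True)

@dataclass(frozen=True)
class Tensor:
    left: object
    right: object

@dataclass(frozen=True)
class Par:
    left: object
    right: object

@dataclass(frozen=True)
class OfCourse:           # !A
    body: object

@dataclass(frozen=True)
class WhyNot:             # ?A
    body: object

def neg(t):
    """Linear negation: α⊥ = ᾱ, (A⊗B)⊥ = A⊥ ` B⊥, (!A)⊥ = ?A⊥, etc."""
    if isinstance(t, Atom):     return Atom(t.name, not t.bar)
    if isinstance(t, Tensor):   return Par(neg(t.left), neg(t.right))
    if isinstance(t, Par):      return Tensor(neg(t.left), neg(t.right))
    if isinstance(t, OfCourse): return WhyNot(neg(t.body))
    if isinstance(t, WhyNot):   return OfCourse(neg(t.body))

A = Tensor(Atom('a'), OfCourse(Atom('b')))
print(neg(neg(A)) == A)   # negation is involutive: True
```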
An MLL type is a type built using only the ⊗ and ` constructions4 .
1.2.1. Typing rules. We first explain how to type MLL proof trees. The corresponding
typing judgments are of the form Φ ⊢₀ t : A, where Φ is a typing context, t is a proof tree
and A is a formula.
The rules are

    ──────────────── axiom
    Φ, x : A ⊢₀ x : A
4 We do not consider the multiplicative constants 1 and ⊥ because they are not essential for our purpose.
    Φ ⊢₀ s : A    Φ ⊢₀ t : B          Φ ⊢₀ s : A    Φ ⊢₀ t : B
    ─────────────────────────         ─────────────────────────
    Φ ⊢₀ s ⊗ t : A ⊗ B                Φ ⊢₀ s ` t : A ` B
Given a cut c = ⟨s | s′⟩ and a typing context Φ, one writes Φ ⊢₀ c if there is a type A
such that Φ ⊢₀ s : A and Φ ⊢₀ s′ : A⊥.
Last, given a simple proof-structure p = (c⃗ ; s⃗) with s⃗ = (s₁, . . . , sₙ) and c⃗ = (c₁, . . . , c_k),
a sequence Γ = (A₁, . . . , A_l) of formulas and a typing context Φ, one writes Φ ⊢₀ p : Γ if
l = n, Φ ⊢₀ sᵢ : Aᵢ for 1 ≤ i ≤ n and Φ ⊢₀ cᵢ for 1 ≤ i ≤ k.
[Figure 1: a simple proof-structure p = (⟨s₁ | s′₁⟩, . . . , ⟨sₙ | s′ₙ⟩ ; t₁, . . . , t_k), drawn with a box of axiom links connecting the free variables x₁, . . . , x_l. Figure 2: the same proof-structure represented as a gray box with ports 1, . . . , k on its border.]
    c ↝_cut (d⃗ ; t⃗)
    ─────────────────────────────────── context
    (c, b⃗ ; s⃗) ↝_cut (d⃗, b⃗ ; s⃗, t⃗)

    x ∉ V(s)
    ─────────────────────────────────── ax-cut
    (⟨x | s⟩, c⃗ ; t⃗) ↝_cut (c⃗ ; t⃗)[s/x]
For applying the latter rule (see Figure 3), we assume that x ∉ V(s). Without this restriction,
we would reduce the cyclic proof-structure (⟨x | x⟩ ; ) to ( ; ) and erase the cycle,
which is certainly not acceptable from a semantic viewpoint. For instance, in a model
of proof-structures based on finite-dimensional vector spaces, the semantics of (⟨x | x⟩ ; )
would be the dimension of the space interpreting the type of x (trace of the identity).
Remark : We provide some pictures to help understand the reduction rules on proof
structures. In these pictures, logical proof-net constructors (such as tensor, par etc.) are
represented as white triangles labeled by the corresponding symbol – they correspond to
the cells of interaction nets or to the links of proof-nets – and subtrees are represented
as gray triangles.
Wires represent the edges of a proof tree. We also represent axioms and cuts as wires
(drawn as simple arcs in the figures). In Figure 3, we indicate the variables
associated with the axiom, but in the next pictures, this information will be kept implicit.
Figure 1 represents the simple proof-structure

    p = (⟨s₁ | s′₁⟩, . . . , ⟨sₙ | s′ₙ⟩ ; t₁, . . . , t_k)

with free variables x₁, . . . , x_l. The box named axiom links contains axioms connecting
variables occurring in the trees s₁, . . . , sₙ, s′₁, . . . , s′ₙ, t₁, . . . , t_k. When we do not want to
be specific about its content, we represent such a simple proof-structure as in Figure 2
by a gray box with indices 1, . . . , k on its border for locating the roots of the trees of p.
The same kind of notation will be used also for proof-structures which are not necessarily
simple, see the beginning of Paragraph 1.4.3 for this notion.
[Figure 3: the ax-cut reduction: the cut ⟨x | s⟩ is eliminated by substituting the tree s for the variable x. Figure 4: the multiplicative reduction: a cut between s₁ ` s₂ and t₁ ⊗ t₂ reduces to the two cuts ⟨s₁ | t₁⟩ and ⟨s₂ | t₂⟩.]
1.4. DiLL0
This is the promotion-free fragment of differential LL. In DiLL0 , one extends the signature
of MLL with new constructors:
— Σ₀ = {w, w̄}, called respectively weakening and coweakening.
— Σ₁ = {d, d̄}, called respectively dereliction and codereliction.
— Σ₂ = {`, ⊗, c, c̄}, the two new constructors being called respectively contraction and
cocontraction.
— Σn = ∅ for n > 2.
1.4.1. Typing rules. The typing rules for the first four constructors are similar to those
of MLL.
    ───────────              ───────────
    Φ ⊢₀ w : ?A              Φ ⊢₀ w̄ : !A

    Φ ⊢₀ t : A               Φ ⊢₀ t : A
    ──────────────           ──────────────
    Φ ⊢₀ d(t) : ?A           Φ ⊢₀ d̄(t) : !A

The two last rules require the subtrees to have the same type.

    Φ ⊢₀ s₁ : ?A    Φ ⊢₀ s₂ : ?A        Φ ⊢₀ s₁ : !A    Φ ⊢₀ s₂ : !A
    ────────────────────────────        ────────────────────────────
    Φ ⊢₀ c(s₁, s₂) : ?A                 Φ ⊢₀ c̄(s₁, s₂) : !A
    Φ ⊢ (c⃗ ; s⃗, s) : Γ, A
    ──────────────────────────── co-dereliction
    Φ ⊢ (c⃗ ; s⃗, d̄(s)) : Γ, !A

    Φ ⊢ (c⃗ ; s⃗, s₁, s₂) : Γ, ?A, ?A
    ──────────────────────────────── contraction
    Φ ⊢ (c⃗ ; s⃗, c(s₁, s₂)) : Γ, ?A

    Φ ⊢ (c⃗ ; s⃗, s) : Γ, !A    Φ ⊢ (d⃗ ; t⃗, t) : ∆, !A
    ──────────────────────────────────────────────── co-contraction
    Φ ⊢ (c⃗, d⃗ ; s⃗, t⃗, c̄(s, t)) : Γ, ∆, !A
1.4.3. Reduction rules. To describe the reduction rules associated with these new constructions,
we need to introduce formal sums (or more generally k-linear combinations)
of simple proof-structures, called proof-structures in the sequel and denoted with capital
letters P, Q, . . . Such an extension by linearity of the syntax was already present
in (ER03).
The empty linear combination 0 is a particular proof-structure which plays an important
role. Being linear combinations, proof-structures can themselves be linearly combined.
The typing rule for linear combinations is

    Φ ⊢ pᵢ : Γ and μᵢ ∈ k for all i ∈ {1, . . . , n}
    ──────────────────────────────────────────────── sum
    Φ ⊢ Σⁿᵢ₌₁ μᵢ pᵢ : Γ
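The module structure on proof-structures can be pictured as finite formal combinations with coefficients in k, where coefficients that become 0 are dropped (so that 0 is the empty combination). A minimal sketch with k = ℤ and opaque labels standing for simple proof-structures (names are ours):

```python
# Finite formal linear combinations of "simple proof-structures"
# (any hashable objects) with coefficients in a semiring, here int.

class Combination:
    def __init__(self, terms=None):
        # terms: dict mapping simple structures to non-zero coefficients
        self.terms = {p: c for p, c in (terms or {}).items() if c != 0}

    def __add__(self, other):
        out = dict(self.terms)
        for p, c in other.terms.items():
            out[p] = out.get(p, 0) + c
        return Combination(out)

    def scale(self, mu):
        return Combination({p: mu * c for p, c in self.terms.items()})

    def __eq__(self, other):
        return self.terms == other.terms

ZERO = Combination()            # the empty combination 0
P = Combination({'p1': 1, 'p2': 2})
Q = Combination({'p2': -2, 'p3': 1})
print((P + Q).terms)            # {'p1': 1, 'p3': 1}: cancelled coefficients vanish
print((P + ZERO) == P)          # 0 is neutral: True
```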
[Figures: the reduction rules for cuts between a structural (“?”) cell and a costructural (“!”) cell in DiLL₀.]
In the last reduction rule, the four variables that we introduce are pairwise distinct and
fresh. Up to α-conversion, the choice of these variables is not relevant.
The contextual rule must be extended, in order to take sums into account.

    c ↝_cut P
    ─────────────────────────────────────────────────── context
    (c, b⃗ ; s⃗) ↝_cut Σ_{p=(c⃗ ; t⃗)} P_p · (c⃗, b⃗ ; s⃗, t⃗)
[Figures: the reductions producing sums: a codereliction cut against a contraction, and dually a dereliction cut against a cocontraction, each reduce to a sum of two simple proof-structures; a contraction cut against a cocontraction reduces to a proof-structure with two cocontraction and two contraction cells.]
possible proof-structures p, but there are only finitely many p such that P_p ≠ 0, so that
this sum is actually finite. A particular case of this rule is: if c ↝_cut 0 then (c, b⃗ ; s⃗) ↝_cut 0.
1.5. Promotion
Let p = (c⃗ ; s⃗) be a simple proof-structure. The width of p is the number of elements
of the sequence s⃗.
By definition, a proof-structure of width n is a finite linear combination of simple
proof-structures of width n.
Observe that 0 is a proof-structure of width n for all n.
Let P be a proof-structure5 of width n + 1. We introduce a new constructor6 called
promotion box, of arity n:

    P^{!(n)} ∈ Σₙ.
The presence of n in the notation is useful only in the case where P = 0, so it can
most often be omitted. The use of a not necessarily simple proof-structure P in this
construction is crucial: promotion is not a linear construction, and it is actually the only
non-linear construction of (differential) LL.
So if t₁, . . . , tₙ are trees, P^{!(n)}(t₁, . . . , tₙ) is a tree. Pictorially, this tree will typically
be represented as in Figure 11. A simple net p appearing in P is typically of the form
(c⃗ ; s⃗) and its width is n + 1, so that s⃗ = (s₁, . . . , sₙ, s). The indices 1, . . . , n and
5 To be completely precise, we should also provide a typing environment for the free variables of P ; this
can be implemented by equipping each variable with a type.
6 The definitions of the syntax of proof trees and of the signature Σ are mutually recursive when
promotion is taken into account.
[Figure 11: the tree P^{!(n)}(t₁, . . . , tₙ): the trees t₁, . . . , tₙ are plugged into the ports 1, . . . , n of the box P, whose main port • is the premise of a “!” cell.]
• which appear on the gray rectangle representing P stand for the roots of these trees
s1 , . . . , sn and s.
1.5.2. Logical rule. The logical rule associated with this construction is the following.
    Φ ⊢ P : ?A₁⊥, . . . , ?Aₙ⊥, B        Φ ⊢ (c⃗ᵢ ; t⃗ᵢ, tᵢ) : Γᵢ, !Aᵢ   (i = 1, . . . , n)
    ──────────────────────────────────────────────────────────────── promotion
    Φ ⊢ (c⃗₁, . . . , c⃗ₙ ; t⃗₁, . . . , t⃗ₙ, P^{!(n)}(t₁, . . . , tₙ)) : Γ₁, . . . , Γₙ, !B
Remark : This promotion rule is of course highly debatable. We choose this presentation
because it is compatible with our tree-based presentation of proof-structures.
1.5.3. Cut elimination rules. The basic reductions associated with promotion are as follows.

    ⟨P^{!(n)}(t₁, . . . , tₙ) | w⟩ ↝_cut (⟨t₁ | w⟩, . . . , ⟨tₙ | w⟩ ; )    see Figure 12,

    ⟨P^{!(n)}(t₁, . . . , tₙ) | d(s)⟩ ↝_cut Σ_{p=(c⃗ ; s⃗, s′)} P_p · (c⃗, ⟨s₁ | t₁⟩, . . . , ⟨sₙ | tₙ⟩, ⟨s′ | s⟩ ; )
1.5.4. Commutative reductions. There are also auxiliary reduction rules sometimes called
commutative reductions which do not deal with cuts — at least in the formalization of
nets we present here.
[Figure 12: reduction of a cut between a promotion box and a weakening: each tree tᵢ gets cut against a weakening. Figure 13: reduction of a cut between a promotion box and a dereliction.]
Remark: In Figures 15 and 17, for graphical reasons, we do not follow exactly the notations
used in the text. For instance in Figure 15, the correspondence with the notations of (1)
is given by v₁ = t₁, . . . , v_{i−1} = t_{i−1}, u₁ = tᵢ, . . . , u_k = t_{k+i−1}, vᵢ = t_{k+i}, . . . , vₙ = t_{k+n}.
Remark: Figure 15 is actually slightly incorrect, as the connections between the “auxiliary
ports” of the cocontraction rule within the promotion box of the right-hand proof-structure
and the main ports of the trees u₁, . . . , u_k are represented as vertical lines,
whereas they involve axioms (corresponding to the pairs (xᵢ, x̄ᵢ) for i = 1, . . . , k in the
formula above). The same kind of slight incorrectness occurs in Figure 17.
[Figure 14: reduction of a cut between a promotion box and a contraction c(s₁, s₂): the box P is duplicated, and each copy is cut against one premise of the contraction.]
The three last commutative reductions deal with the interaction between a promotion
and the costructural rules.
Interaction between a promotion and a coweakening, see Figure 16, where

    R = Σ_{p=(c⃗ ; s⃗, s)} P_p · (c⃗, ⟨sᵢ | w̄⟩ ; s₁, . . . , s_{i−1}, s_{i+1}, . . . , s_{n+1}, s).
    P^{!(n+1)}(t₁, . . . , t_{i−1}, c̄(tᵢ, t_{i+1}), t_{i+2}, . . . , t_{n+2}) ↝_com ( ; R^{!(n+2)}(t₁, . . . , t_{n+2}))    (2)

where

    R = Σ_{p=(c⃗ ; s⃗, s)} P_p · (c⃗, ⟨sᵢ | c(x, y)⟩ ; s₁, . . . , s_{i−1}, x, y, s_{i+1}, . . . , s_{n+1}, s).
    ⟨c(x₁, s₁) | t₁⟩, . . . , ⟨c(x_{n+1}, s_{n+1}) | t_{n+1}⟩ (the i-th cut ⟨c(xᵢ, sᵢ) | tᵢ⟩ being omitted) ;
    a₁, . . . , a_{i−1}, a_{i+1}, . . . , aₙ.
We also have to explain how these commutative reductions can be used in arbitrary
[Figures 15 and 16: commutative reductions pushing a cocontraction (Figure 15) and a coweakening (Figure 16) on an auxiliary port of a promotion box inside the box.]
contexts. We deal first with the case where such a reduction occurs under a constructor
symbol ϕ ∈ Σ_{n+1}.

    t ↝_com P
    ───────────────────────────────────────────────
    ϕ(u⃗, t, v⃗) ↝_com Σ_{p=(c⃗ ; w)} P_p · (c⃗ ; ϕ(u⃗, w, v⃗))
Next we deal with the case where t occurs in outermost position in a proof-structure.
There are actually two possibilities.
    t ↝_com P
    ───────────────────────────────────────────────
    (c⃗ ; u⃗, t, v⃗) ↝_com Σ_{p=(d⃗ ; w)} P_p · (c⃗, d⃗ ; u⃗, w, v⃗)
    t ↝_com P
    ───────────────────────────────────────────────────
    (⟨t | t′⟩, c⃗ ; t⃗) ↝_com Σ_{p=(d⃗ ; w)} P_p · (c⃗, d⃗, ⟨w | t′⟩ ; t⃗)
We use ↝ for the union of the reduction relations ↝_cut and ↝_com.
This formalization of nets enjoys a subject reduction property.
The proof is a rather long case analysis. We need to consider possible extensions of Φ
because of the fresh variables which are introduced by several reduction rules.
[Figure 17: commutative reduction pushing a cocontraction of promotion boxes on an auxiliary port of a promotion box inside the box.]
As it is defined, our reduction relation does not allow us to perform the reduction within
boxes. To this end, one should add the following rule.
    P ↝ Q
    ──────────────────────────────────────────
    P^{!(n)}(t₁, . . . , tₙ) ↝ Q^{!(n)}(t₁, . . . , tₙ)
It is then possible to prove basic properties such as confluence and normalization7. For
these topics, we refer mainly to the work of Pagani (Pag09), Tranquilli (PT09; Tra09;
PT11) and Gimenez (Gim11). We also refer to Vaux (Vau09) for the link between the
algebraic properties of k and the properties of ↝, in a simpler λ-calculus setting.
Of course these proofs should be adapted to our presentation of proof structures. This
has not been done yet but we are confident that it should not lead to difficulties.
7 For confluence, one needs to introduce an equivalence relation on proof-structures which expresses
typically that contraction is associative, see (Tra09). For normalization, some conditions have to be
satisfied by k; typically, it holds if one assumes that k = N but difficulties arise if k has additive
inverses.
    L(⟨⟩) = 0
    L(∗) = 1
    L(⟨τ₁, τ₂⟩) = L(τ₁) + L(τ₂).
Let Tn be the set of trees τ such that L(τ ) = n. This set is infinite for all n.
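These trees and the leaf-count L are straightforward to implement; the encoding below (⟨⟩ as the empty tuple, ∗ as a token) is ours. Padding a tree with ⟨⟩ leaves L unchanged, which is why each Tₙ is infinite:

```python
# Parenthesization trees τ and their number of leaves L(τ):
# L(⟨⟩) = 0, L(∗) = 1, L(⟨τ1, τ2⟩) = L(τ1) + L(τ2).

EMPTY = ()        # ⟨⟩
STAR = '*'        # ∗

def leaves(tau):
    if tau == EMPTY:
        return 0
    if tau == STAR:
        return 1
    t1, t2 = tau
    return leaves(t1) + leaves(t2)

# Distinct trees with the same number of leaves:
print(leaves((STAR, STAR)))            # 2
print(leaves(((STAR, STAR), EMPTY)))   # 2
print(leaves((STAR, (STAR, STAR))))    # 3
```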
Let τ ∈ Tₙ. Then we define in an obvious way a functor ⊗_τ : Lⁿ → L. On objects, it is
defined as follows:

    ⊗_{⟨⟩} = 1
    ⊗_∗(X) = X
    ⊗_{⟨τ₁,τ₂⟩}(X₁, . . . , X_{L(τ₁)}, Y₁, . . . , Y_{L(τ₂)}) = (⊗_{τ₁}(X⃗)) ⊗ (⊗_{τ₂}(Y⃗)).
The coherence commutation diagrams (which include the Mac Lane pentagon) allow one
indeed to prove that all the possible definitions of an isomorphism ⊗_{τ₁}(X⃗) → ⊗_{τ₂}(X⃗)
using these basic ingredients give rise to the same result. This is the Mac Lane coherence
Theorem for monoidal categories. In particular the following properties will be quite
useful:
    ⊗^τ_{τ,X⃗} = Id_{⊗_τ(X⃗)}    and    ⊗^{τ₃}_{τ₂,X⃗} ⊗^{τ₂}_{τ₁,X⃗} = ⊗^{τ₃}_{τ₁,X⃗}.    (3)
We shall often omit the indexing sequence X⃗ when using these natural isomorphisms,
writing ⊗^σ_τ instead of ⊗^σ_{τ,X⃗}.
    ⊗̂_{ϕ,τ} ⊗^τ_{σ,X⃗} = ⊗^τ_{σ,ϕ̂(X⃗)} ⊗̂_{ϕ,σ}    (4)

(a commuting square whose horizontal arrows are the symmetry isomorphisms ⊗̂_{ϕ,σ} and
⊗̂_{ϕ,τ}, and whose vertical arrows are the coherence isomorphisms ⊗^τ_σ).
This is a consequence of the Mac Lane coherence Theorem for symmetric monoidal categories.
Let f ∈ L(U ⊗ X, Y). We have η f ∈ L(U ⊗ X, Y^{⊥⊥}), hence cur⊥⁻¹(η f) ∈ L((U ⊗ X) ⊗
Y^⊥, ⊥), so cur⊥⁻¹(η f) ⊗^{⟨∗,⟨∗,∗⟩⟩}_{⟨⟨∗,∗⟩,∗⟩} ∈ L(U ⊗ (X ⊗ Y^⊥), ⊥) and we can define a linear
curryfication of f as

    cur(f) = cur⊥(cur⊥⁻¹(η f) ⊗^{⟨∗,⟨∗,∗⟩⟩}_{⟨⟨∗,∗⟩,∗⟩}) ∈ L(U, X ⊸ Y).
One can prove then that the following equations hold, showing that the symmetric
monoidal category L is closed:

    ev (cur(f) ⊗ X) = f
    cur(f) g = cur(f (g ⊗ X))
    cur(ev) = Id

where g ∈ L(V, U).
It follows as usual that cur is a bijection from L(U ⊗ X, Y) to L(U, X ⊸ Y).
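Forgetting the linear structure, cur and its inverse are ordinary currying of two-argument functions; a set-theoretic sketch (the function f is a hypothetical example):

```python
# The bijection cur : L(U ⊗ X, Y) → L(U, X ⊸ Y), in its familiar
# set-theoretic shadow: currying a two-argument function.

def cur(f):
    return lambda u: (lambda x: f(u, x))

def uncur(g):
    return lambda u, x: g(u)(x)

def f(u, x):          # a hypothetical two-argument map
    return 10 * u + x

g = cur(f)
print(g(3)(4))                # 34, i.e. f(3, 4)
print(uncur(cur(f))(3, 4))    # round trip: 34
```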
We set X ` Y = (X^⊥ ⊗ Y^⊥)^⊥ = X^⊥ ⊸ Y; this operation is the cotensor product,
also called par in linear logic. Using the above properties one shows that this operation
is a functor L² → L which defines another symmetric monoidal structure on L. The
operation X ↦ X^⊥ is an equivalence of symmetric monoidal categories from (L^op, ⊗) to
(L, `).
2.3.3. MLL vector constructions. Let X ∈ L. We define ax ∈ L⃗(X^⊥, X) by setting

    ax_τ = `^τ_{⟨∗,∗⟩} cur(η_X (⊗^{⟨⟨⟩,∗⟩}_∗)⁻¹) ∈ L(1, X^{⊥⊥} ⊸ X) = L(1, X^⊥ ` X)

for all τ ∈ T₂.
8 If we see ⊥ as the object of scalars, which is compatible with the intuition that X ⊸ ⊥ is the dual of
X, that is, the “space of linear forms on X”, then this monoid structure is an internal multiplication
law on scalars.
since
One checks easily that this definition does not depend on the choice of σ and τ, using
Equations (3) and (5), and one can check that

    ⊗(u, v) ∈ L⃗(X₁, . . . , Xₙ, Y₁, . . . , Y_p, X ⊗ Y).
Let u ∈ L⃗(X₁, . . . , Xₙ) and ϕ ∈ Sₙ. Given σ ∈ Tₙ, we have u_σ ∈ L(1, `_σ(X₁, . . . , Xₙ))
and hence `̂_{ϕ,σ} u_σ ∈ L(1, `_σ(X_{ϕ(1)}, . . . , X_{ϕ(n)})). Given θ ∈ Tₙ, we set therefore

    sym(ϕ, u)_θ = `^θ_σ `̂_{ϕ,σ} u_σ

defining an element sym(ϕ, u) of L⃗(X_{ϕ(1)}, . . . , X_{ϕ(n)}) which does not depend on the
choice of σ. Indeed, let τ ∈ Tₙ; we know that u_σ = `^σ_τ u_τ and hence sym(ϕ, u)_θ =
`^θ_σ `̂_{ϕ,σ} `^σ_τ u_τ = `^θ_σ `^σ_τ `̂_{ϕ,τ} u_τ = `^θ_τ `̂_{ϕ,τ} u_τ, using Diagram (4).
Let u ∈ L⃗(X₁, . . . , Xₙ, X^⊥) and v ∈ L⃗(Y₁, . . . , Y_p, X). We have

    ⊗(u, v) ∈ L⃗(X₁, . . . , Xₙ, Y₁, . . . , Y_p, X^⊥ ⊗ X)

so that, for each θ ∈ T_{n+p}, a composition of u and v along X can be defined. As usual,
this definition does not depend on the choice of σ and τ.
2.3.4. Interpreting MLL derivations. We start with a valuation which, with each α ∈
A, associates [α] ∈ L in such a way that [ᾱ] = [α]^⊥. We extend this valuation to an
interpretation of all MLL types as objects of L in the obvious manner, so that we have a
De Morgan iso dm_A ∈ L([A^⊥], [A]^⊥) defined inductively as follows.
We set first dm_α = Id_{[α]^⊥}. We have dm_A ∈ L([A^⊥], [A]^⊥) and dm_B ∈ L([B^⊥], [B]^⊥),
therefore dm_A ` dm_B ∈ L([(A ⊗ B)^⊥], [A]^⊥ ` [B]^⊥). We have

    [A]^⊥ ` [B]^⊥ = ([A]^{⊥⊥} ⊗ [B]^{⊥⊥})^⊥

by definition of `, and remember that η_{[A]} ∈ L([A], [A]^{⊥⊥}), and so we set

    dm_{A⊗B} = (η_{[A]} ⊗ η_{[B]})^⊥ (dm_A ` dm_B).

We have [(A ` B)^⊥] = [A^⊥] ⊗ [B^⊥], so dm_A ⊗ dm_B ∈ L([(A ` B)^⊥], [A]^⊥ ⊗ [B]^⊥). By
definition we have [A ` B]^⊥ = ([A]^⊥ ⊗ [B]^⊥)^{⊥⊥}. So we set

    dm_{A`B} = η_{[A]^⊥⊗[B]^⊥} (dm_A ⊗ dm_B).
Given a sequence Γ = (A1 , . . . , An ) of types, we denote as [Γ] the sequence of objects
([A1 ], . . . , [An ]).
Given a derivation π of a logical judgment Φ ⊢ p : Γ, we define now [π] ∈ L⃗([Γ]) by
induction on the structure of π.
Assume first that Γ = (A^⊥, A), p = ( ; x, x) and that π is the axiom

    ──────────── axiom
    Φ ⊢ p : Γ

We have ax_{[A]} ∈ L(1, [A]^⊥ ` [A]) and dm_A ∈ L([A^⊥], [A]^⊥), so that we can set [π]
accordingly, and we have [π] ∈ L⃗([Γ]) as required.
Assume next that Γ = (∆, A ` B), that p = (c⃗ ; s⃗, s ` t) and that π is the following
derivation, where λ is the derivation of the premise:

    Φ ⊢ (c⃗ ; s⃗, s, t) : ∆, A, B
    ──────────────────────────── `-rule
    Φ ⊢ (c⃗ ; s⃗, s ` t) : ∆, A ` B

then by inductive hypothesis we have [λ] ∈ L⃗([∆, A, B]) and hence we set

    [π] = `([λ]) ∈ L⃗([∆, A ` B]).
Assume now that Γ = (∆, Λ, A ⊗ B), that p = (c⃗, d⃗ ; s⃗, t⃗, s ⊗ t) and that π is the
following derivation, where λ is the derivation of the left premise and ρ is the derivation
of the right premise:

    Φ ⊢ (c⃗ ; s⃗, s) : ∆, A        Φ ⊢ (d⃗ ; t⃗, t) : Λ, B
    ──────────────────────────────────────────────── ⊗-rule
    Φ ⊢ (c⃗, d⃗ ; s⃗, t⃗, s ⊗ t) : ∆, Λ, A ⊗ B

then by inductive hypothesis we have [λ] ∈ L⃗([∆, A]) and [ρ] ∈ L⃗([Λ, B]) and hence we
set

    [π] = ⊗([λ], [ρ]) ∈ L⃗([∆, Λ, A ⊗ B]).
Assume that ϕ ∈ Sₙ, Γ = (A_{ϕ(1)}, . . . , A_{ϕ(n)}), p = (c⃗ ; s_{ϕ(1)}, . . . , s_{ϕ(n)}) and that π is
the following derivation, where λ is the derivation of the premise:

    Φ ⊢ (c⃗ ; s₁, . . . , sₙ) : A₁, . . . , Aₙ
    ──────────────────────────────────── permutation rule
    Φ ⊢ p : Γ

By inductive hypothesis we have [λ] ∈ L⃗([A₁], . . . , [Aₙ]) and we set [π] = sym(ϕ, [λ]).
The first main property of this interpretation of derivations is that they only depend
on the underlying nets.
Let (Xᵢ)ⁿᵢ₌₁ be a family of objects of L. The set L⃗(X₁, . . . , Xₙ) canonically inherits a
k-module structure.
[Diagram: compatibility between contraction and cocontraction (the bialgebra law), using the symmetry isomorphism ⊗̂_{ϕ,⟨⟨∗,∗⟩,⟨∗,∗⟩⟩} to exchange the middle factors of (!X ⊗ !X) ⊗ (!X ⊗ !X).]

The interaction between (co)dereliction and the other (co)structural morphisms is
expressed by the following commutations (stated here as equations, up to the unit
isomorphisms):

    w_X d̄_X = 0        c_X d̄_X = d̄_X ⊗ w̄_X + w̄_X ⊗ d̄_X
    d_X w̄_X = 0        d_X c̄_X = d_X ⊗ w_X + w_X ⊗ d_X

and

    d_X d̄_X = Id_X.
2.5.1. The why not modality. We define ?X = (!(X^⊥))^⊥ and we extend this operation
to a functor L_iso → L_iso in the same way (using the contravariant functoriality of ( )^⊥).
We define

    w′_X = w_{X^⊥}^⊥ : ⊥ → ?X

and we set

    c′_X = c_{X^⊥}^⊥ (η_{!(X^⊥)} ⊗ η_{!(X^⊥)})^⊥ ∈ L(?X ` ?X, ?X).
    [λ]_τ ` w′_{[A]} ∈ L(1 ` ⊥, `_{⟨τ,∗⟩}([Γ, ?A]))

so that we can set

    [π]_θ = `^θ_{⟨τ,∗⟩} ([λ]_τ ` w′_{[A]}) `^{⟨∗,⟨⟩⟩}_∗

for any θ ∈ Tₙ. The fact that the family [π] defined in that way does not depend on the
choice of τ results from the fact that [λ] ∈ L⃗([Γ]).
Assume that Γ = (∆, ?A), p = (c⃗ ; s⃗, c(t₁, t₂)) and that π is the following derivation,
denoting with λ the derivation of the premise:

    ⊢ (c⃗ ; s⃗, t₁, t₂) : ∆, ?A, ?A
    ────────────────────────────── contraction
    ⊢ (c⃗ ; s⃗, c(t₁, t₂)) : ∆, ?A

We have c′_{[A]} ∈ L([?A] ` [?A], [?A]). By inductive hypothesis [λ] ∈ L⃗([∆, ?A, ?A]). Let
τ ∈ Tₙ where n is the length of ∆. We have

    [λ]_{⟨τ,⟨∗,∗⟩⟩} ∈ L(1, (`_τ([∆])) ` ([?A] ` [?A]))

and hence, given θ ∈ T_{n+1}, we set

    [π]_θ = `^θ_{⟨τ,∗⟩} ((`_τ([∆])) ` c′_{[A]}) [λ]_{⟨τ,⟨∗,∗⟩⟩}

defining in that way [π] ∈ L⃗([∆, ?A]).
Assume that Γ = (!A), p = ( ; w̄) and that π is the following derivation

    ──────────────── co-weakening
    ⊢ ( ; w̄) : !A

then, for θ ∈ T₁ we set [π]_θ = `^θ_∗([!A]) w̄_{[A]}, defining in that way an element [π] of L⃗([!A]).
Assume that Γ = (∆, Λ, !A), p = (c⃗, d⃗ ; s⃗, t⃗, c̄(u, v)) and that π is the following
derivation

    Φ ⊢ (c⃗ ; s⃗, u) : ∆, !A        Φ ⊢ (d⃗ ; t⃗, v) : Λ, !A
    ──────────────────────────────────────────────── co-contraction
    Φ ⊢ (c⃗, d⃗ ; s⃗, t⃗, c̄(u, v)) : ∆, Λ, !A

and we denote with λ and ρ the derivations of the two premises. By inductive hypothesis,
we have [λ] ∈ L⃗([∆], [!A]) and [ρ] ∈ L⃗([Λ], [!A]). We have ⊗([λ], [ρ]) ∈ L⃗([∆], [Λ], [!A] ⊗
[!A]).
Let m be the length of ∆ and n be the length of Λ. Let τ ∈ T_{m+n}; we have ⊗([λ], [ρ])_{⟨τ,∗⟩} ∈
L(1, (`_τ([∆], [Λ])) ` ([!A] ⊗ [!A])). Hence, given θ ∈ T_{m+n+1}, we set

    [π]_θ = `^θ_{⟨τ,∗⟩} ((`_τ([∆], [Λ])) ` c̄_{[A]}) ⊗([λ], [ρ])_{⟨τ,∗⟩}

so that [π] ∈ L⃗([∆], [Λ], [!A]), and this definition does not depend on the choice of τ.
Assume that Γ = (∆, ?A), p = (c⃗ ; s⃗, d(s)) and that π is the following derivation

    Φ ⊢ (c⃗ ; s⃗, s) : ∆, A
    ─────────────────────────── dereliction
    Φ ⊢ (c⃗ ; s⃗, d(s)) : ∆, ?A

Let λ be the derivation of the premise, so that [λ] ∈ L⃗([∆], [A]). We have d′_{[A]} : [A] → [?A].
Let n be the length of ∆, let τ ∈ Tₙ and let θ ∈ T_{n+1}. We set

    [π]_θ = `^θ_{⟨τ,∗⟩} ((`_τ([∆])) ` d′_{[A]}) [λ]_{⟨τ,∗⟩}

and we define in that way an element [π] of L⃗([∆], [?A]) which does not depend on the
choice of τ.
Assume that Γ = (∆, !A), p = (c⃗ ; s⃗, d̄(s)) and that π is the following derivation

    Φ ⊢ (c⃗ ; s⃗, s) : ∆, A
    ─────────────────────────── co-dereliction
    Φ ⊢ (c⃗ ; s⃗, d̄(s)) : ∆, !A

Let λ be the derivation of the premise, so that [λ] ∈ L⃗([∆], [A]). We have d̄_{[A]} : [A] → [!A].
Let n be the length of ∆, let τ ∈ Tₙ and let θ ∈ T_{n+1}. We set

    [π]_θ = `^θ_{⟨τ,∗⟩} ((`_τ([∆])) ` d̄_{[A]}) [λ]_{⟨τ,∗⟩}

and we define in that way an element [π] of L⃗([∆], [!A]) which does not depend on the
choice of τ.
Last assume that p = Σⁿᵢ₌₁ μᵢ pᵢ, that π is the following derivation

    Φ ⊢ pᵢ : Γ    ∀i ∈ {1, . . . , n}
    ──────────────────────────────── sum
    Φ ⊢ Σⁿᵢ₌₁ μᵢ pᵢ : Γ

and that λᵢ is the derivation of the i-th premise in this derivation. Then by inductive
hypothesis we have [λᵢ] ∈ L⃗([Γ]) and we set of course

    [π] = Σⁿᵢ₌₁ μᵢ [λᵢ].
One can prove for this extended interpretation the same results as for the MLL frag-
ment.
— For each f ∈ L(X, Y ) we are given a morphism !f ∈ L(!X, !Y ) and the correspondence
f 7→ !f is functorial. This mapping f 7→ !f extends the action of ! on isomorphisms.
— The morphisms dX , dX , wX , wX , cX and cX are natural with respect to this extended
functor.
— There is a natural transformation p_X : !X → !!X which turns (!, d, p) into a
comonad.
— There is a morphism μ⁰ : 1 → !1 and a natural transformation9 μ²_{X,Y} : !X ⊗ !Y →
!(X ⊗ Y) which satisfy the following commutations, stated here as equations:

    μ²_{1,X} (μ⁰ ⊗ !X) ⊗^{⟨⟨⟩,∗⟩}_∗ = !⊗^{⟨⟨⟩,∗⟩}_∗ ∈ L(!X, !(1 ⊗ X))    (6)

    μ²_{X,1} (!X ⊗ μ⁰) ⊗^{⟨∗,⟨⟩⟩}_∗ = !⊗^{⟨∗,⟨⟩⟩}_∗ ∈ L(!X, !(X ⊗ 1))    (7)

    μ²_{X,Y⊗Z} (!X ⊗ μ²_{Y,Z}) ⊗^{⟨∗,⟨∗,∗⟩⟩}_{⟨⟨∗,∗⟩,∗⟩} = !⊗^{⟨∗,⟨∗,∗⟩⟩}_{⟨⟨∗,∗⟩,∗⟩} μ²_{X⊗Y,Z} (μ²_{X,Y} ⊗ !Z)    (8)

    μ²_{Y,X} σ_{!X,!Y} = !σ_{X,Y} μ²_{X,Y}    (9)
— The following diagrams commute
µ2X,Y µ0
!X ⊗ !Y !(X ⊗ Y ) 1 !1
dX⊗Y d1
dX ⊗ dY 1
X ⊗Y 1
9 These morphisms are not required to be isos, whence the adjective “lax” for the monoidal structure.
Differential Linear Logic 31
µ2X,Y µ0
!X ⊗ !Y !(X ⊗ Y ) 1 !1
pX ⊗ pY pX⊗Y µ0 p1
µ2!X,!Y !µ2X,Y !µ0
!!X ⊗ !!Y !(!X ⊗ !Y ) !!(X ⊗ Y ) !1 !!1
When these conditions hold, one says that (µ0 , µ2 ) is a lax symmetric monoidal structure
on the comonad (!, d, p).
2.6.1. Monoidality and structural morphisms. This monoidal structure must also be compatible with the structural constructions, in the sense that the following equations hold:

    w_{X⊗Y} μ²_{X,Y} = ⊗^{⟨⟨⟩,⟨⟩⟩}_{⟨⟩} (w_X ⊗ w_Y)        w_1 μ⁰ = Id_1
    c_{X⊗Y} μ²_{X,Y} = (μ²_{X,Y} ⊗ μ²_{X,Y}) ⊗̂_{φ,σ} (c_X ⊗ c_Y)        c_1 μ⁰ = (μ⁰ ⊗ μ⁰) ⊗^{⟨⟩}_{⟨⟨⟩,⟨⟩⟩}

where ⊗̂_{φ,σ} : (!X ⊗ !X) ⊗ (!Y ⊗ !Y) → (!X ⊗ !Y) ⊗ (!X ⊗ !Y) is the evident symmetry isomorphism.
2.6.2. Monoidality and costructural morphisms. We need the following diagrams to commute in order to validate the reduction rules of DiLL (up to the indicated isomorphisms):

    μ²_{X,Y} (w̄_X ⊗ !Y) = w̄_{X⊗Y} ⊗^{⟨⟨⟩,⟨⟩⟩}_{⟨⟩} (1 ⊗ w_Y) : 1 ⊗ !Y → !(X ⊗ Y)
    μ²_{X,Y} (c̄_X ⊗ !Y) = c̄_{X⊗Y} (μ²_{X,Y} ⊗ μ²_{X,Y}) ⊗̂_{φ,σ} ((!X ⊗ !X) ⊗ c_Y) : (!X ⊗ !X) ⊗ !Y → !(X ⊗ Y)

where ⊗̂_{φ,σ} : (!X ⊗ !X) ⊗ (!Y ⊗ !Y) → (!X ⊗ !Y) ⊗ (!X ⊗ !Y) is the evident shuffle. The interaction of monoidality with codereliction reads:

    μ²_{X,Y} (d̄_X ⊗ !Y) = d̄_{X⊗Y} (X ⊗ d_Y) : X ⊗ !Y → !(X ⊗ Y)
2.6.3. Digging and structural morphisms. Digging is compatible with the structural morphisms:

    w_{!X} p_X = w_X        c_{!X} p_X = (p_X ⊗ p_X) c_X
2.6.4. Digging and costructural morphisms. It is not required that p_X be a monoid morphism from (!X, w̄_X, c̄_X) to (!!X, w̄_{!X}, c̄_{!X}), but the following diagrams must commute:

    p_X w̄_X = (!w̄_X) μ⁰        p_X c̄_X = (!c̄_X) μ²_{!X,!X} (p_X ⊗ p_X)
In the same spirit, we need a last diagram to commute, which describes the interaction between codereliction and digging:

    p_X d̄_X = (!c̄_X) μ²_{!X,!X} (p_X ⊗ d̄_{!X}) (w̄_X ⊗ d̄_X) ⊗^∗_{⟨⟨⟩,∗⟩}

where ⊗^∗_{⟨⟨⟩,∗⟩} : X → 1 ⊗ X is the unitor.
2.6.5. Preadditive structure and functorial exponential. Our last requirement justifies the term “exponential” since it expresses that sums are turned into products by this functorial operation:

    !0 = w̄_X w_X : !X → !X        !(f + g) = c̄_Y (!f ⊗ !g) c_X : !X → !Y
Using this structure, the comonad (! , d, p) can be equipped with a lax symmetric monoidal
structure (µ0 , µ2 ). Again, our main reference for these notions and constructions is (Mel09).
In this setting, the structural natural transformations wX and cX can be defined and it
is well known that the Kleisli category L! of the comonad ! is cartesian closed.
If we require the category L to be preadditive in the sense of Section 2.4, it is easy to see that > is also an initial object and that & is also a coproduct. Using this fact, the costructural natural transformations w̄_X and c̄_X can also be defined.
To describe a model of DiLL in this setting, one has to require these Seely monoidality
isomorphisms to satisfy some commutations with the d natural transformation.
Here, we prefer a description which does not use cartesian products because it is closer
to the basic constructions of the syntax of proof-structures and makes the presentation
of the semantics conceptually simpler and more canonical, to our taste at least.
2.6.6. Generalized monoidality, contraction and digging. Just as the monoidal structure of a monoidal category, the monoidal structure of ! can be parameterized by monoidal trees. Let n ∈ ℕ and let τ ∈ T_n. Given a family of objects X⃗ = (X₁, . . . , X_n) of L, we define μ^τ_{X⃗} : ⊗^τ(!X⃗) → !⊗^τ(X⃗) by induction on τ as follows:

    μ^⟨⟩ = μ⁰
    μ^∗_X = Id_{!X}
    μ^{⟨σ,τ⟩}_{X⃗,Y⃗} = μ²_{⊗^σ(X⃗),⊗^τ(Y⃗)} (μ^σ_{X⃗} ⊗ μ^τ_{Y⃗}).
Given σ, τ ∈ T_n and φ ∈ S_n, one can prove that the following diagrams commute:

    μ^τ_{X⃗} ⊗^σ_τ(!X⃗) = (!⊗^σ_τ(X⃗)) μ^σ_{X⃗}        μ^σ_{φ̂(X⃗)} ⊗̂_{φ,σ}(!X⃗) = (!⊗̂_{φ,σ}(X⃗)) μ^σ_{X⃗}

Thomas Ehrhard 34

    w_{⊗^σ(X⃗)} μ^σ_{X⃗} = ⊗^{σ0}_{⟨⟩} ⊗^σ(w_{X⃗})        d_{⊗^σ(X⃗)} μ^σ_{X⃗} = ⊗^σ(d_{X⃗})

where ⊗^σ(w_{X⃗}) : ⊗^σ(!X⃗) → ⊗^σ 1⃗ = ⊗^{σ0}, 1⃗ is the sequence (1, . . . , 1) (n elements) and σ0 = σ[⟨⟩/∗] ∈ T_0 (the tree obtained from σ by replacing each occurrence of ∗ by ⟨⟩).
Before stating the next commutation, we define a generalized form of contraction c^σ_{X⃗} : ⊗^σ(!X⃗) → ⊗^{⟨σ,σ⟩}(!X⃗, !X⃗) as the following composition of morphisms:

    ⊗^σ(!X⃗) --⊗^σ(c_{X⃗})--> ⊗^{σ2}(!X⃗′) --⊗̂_{φ,σ2}--> ⊗^{⟨σ,σ⟩}(!X⃗, !X⃗)

where X⃗′ = (X₁, X₁, X₂, X₂, . . . , X_n, X_n), σ2 = σ[⟨∗,∗⟩/∗] and φ ∈ S_{2n} is defined by φ(2i+1) = i+1 and φ(2i+2) = i+n+1 for i ∈ {0, . . . , n−1}. With these notations, one can prove that

    c_{⊗^σ(X⃗)} μ^σ_{X⃗} = (μ^σ_{X⃗} ⊗ μ^σ_{X⃗}) c^σ_{X⃗}

using the identification ⊗^{⟨σ,σ⟩}(!X⃗, !X⃗) = (⊗^σ(!X⃗)) ⊗ (⊗^σ(!X⃗)).
We also define a generalized version of digging p^σ_{X⃗} : ⊗^σ(!X⃗) → !⊗^σ(!X⃗) as the following composition of morphisms:

    ⊗^σ(!X⃗) --⊗^σ(p_{X⃗})--> ⊗^σ(!!X⃗) --μ^σ_{!X⃗}--> !⊗^σ(!X⃗)

We have p^⟨⟩ = μ⁰, p^∗_X = p_X, and observe that the following generalizations of the comonad laws hold. The two commutations involving digging and dereliction generalize to:

    d_{⊗^σ(!X⃗)} p^σ_{X⃗} = Id_{⊗^σ(!X⃗)}        (!⊗^σ(d_{X⃗})) p^σ_{X⃗} = μ^σ_{X⃗}

The square diagram involving digging generalizes as follows. Let Y⃗ = (Y₁, . . . , Y_m) be a family of objects of L.

[Diagram (10): the generalization of the comonad square to the morphisms p^{⟨σ,τ⟩}_{X⃗,Y⃗}.]
2.6.7. Generalized promotion and structural constructions. Let f : ⊗^σ(!X⃗) → Y. We define the generalized promotion f^! : ⊗^σ(!X⃗) → !Y by f^! = !f p^σ_{X⃗}. Using the commutations of Section 2.6.6, one can prove that this construction obeys the following commutations:

    w_Y f^! = ⊗^{σ0}_{⟨⟩} ⊗^σ(w_{X⃗})        c_Y f^! = (f^! ⊗ f^!) c^σ_{X⃗}
The next two diagrams deal with the interaction between generalized promotion and dereliction (resp. digging):

    d_Y f^! = f

    p^{⟨∗,τ⟩}_{Y,Y⃗} (f^! ⊗ (⊗^τ(!Y⃗))) = !(f^! ⊗ (⊗^τ(!Y⃗))) p^{⟨σ,τ⟩}_{X⃗,Y⃗} : (⊗^σ(!X⃗)) ⊗ (⊗^τ(!Y⃗)) → !(!Y ⊗ (⊗^τ(!Y⃗)))
The second diagram follows easily from (10) and allows one to prove the following property. Let f : ⊗^σ(!X⃗) → Y and g : !Y ⊗ (⊗^τ(!Y⃗)) → Z, so that f^! : ⊗^σ(!X⃗) → !Y and g^! : !Y ⊗ (⊗^τ(!Y⃗)) → !Z; one has

    g^! (f^! ⊗ (⊗^τ(!Y⃗))) = (g (f^! ⊗ (⊗^τ(!Y⃗))))^! : (⊗^σ(!X⃗)) ⊗ (⊗^τ(!Y⃗)) → !Z

Remark: We actually need a more general version of this property, where f^! is not necessarily in leftmost position in the ⊗ tree. It is also easy to obtain, but the notations are heavier. We use the same kind of convention in the sequel, but remember that the corresponding properties are easy to generalize.
2.6.8. Generalized promotion and costructural constructions. Let f : !X ⊗ (⊗^σ(!X⃗)) → Y. Observe that f (w̄_X ⊗ (⊗^σ(!X⃗))) ⊗^σ_{⟨⟨⟩,σ⟩}(!X⃗) : ⊗^σ(!X⃗) → Y. The following equation holds:

    f^! (w̄_X ⊗ (⊗^σ(!X⃗))) ⊗^σ_{⟨⟨⟩,σ⟩}(!X⃗) = (f (w̄_X ⊗ (⊗^σ(!X⃗))) ⊗^σ_{⟨⟨⟩,σ⟩}(!X⃗))^!

Similarly, we have f (c̄_X ⊗ (⊗^σ(!X⃗))) : (!X ⊗ !X) ⊗ (⊗^σ(!X⃗)) → Y and the following equation holds:

    f^! (c̄_X ⊗ (⊗^σ(!X⃗))) = (f (c̄_X ⊗ (⊗^σ(!X⃗))))^!

This results from the commutations of Sections 2.6.2 and 2.6.4.
2.6.9. Generalized promotion and codereliction (also known as the chain rule). Let f : !X ⊗ (⊗^σ(!X⃗)) → Y. We set

    f₀ = f (w̄_X ⊗ (⊗^σ(!X⃗))) ⊗^σ_{⟨⟨⟩,σ⟩} : ⊗^σ(!X⃗) → Y

Then we have

    f^! (d̄_X ⊗ (⊗^σ(!X⃗))) = c̄_Y (d̄_Y ⊗ !Y) (f ⊗ f₀^!) ⊗^{⟨∗,⟨∗,∗⟩⟩}_{⟨⟨∗,∗⟩,∗⟩} (d̄_X ⊗ c^σ_{X⃗}) : X ⊗ (⊗^σ(!X⃗)) → !Y
2.6.10. Interpreting DiLL derivations. For the sake of readability, we assume here that the De Morgan isomorphisms (see 2.3.4) are identities, so that [A^⊥] = [A]^⊥ for each formula A. The general definition of the semantics can be obtained by inserting De Morgan isomorphisms at the correct places in the forthcoming expressions.
Let P be a net of arity n + 1 and let p_i = (c⃗_i; t⃗_i, t_i) for i = 1, . . . , n. Consider the following derivation π, where we denote as λ, ρ₁, . . . , ρ_n the derivations of the premises.

    Φ ⊢ P : ?A₁^⊥, . . . , ?A_n^⊥, B    Φ ⊢ p₁ : Γ₁, !A₁  · · ·  Φ ⊢ p_n : Γ_n, !A_n
    ───────────────────────────────────────────────────────────────── !(n)
    Φ ⊢ (c⃗₁, . . . , c⃗_n; t⃗₁, . . . , t⃗_n, P(t₁, . . . , t_n)) : Γ₁, . . . , Γ_n, !B
By inductive hypothesis, we have [λ] ∈ L̃((![A₁])^⊥, . . . , (![A_n])^⊥, [B]) so that, picking an element σ of T_n, we have

    [λ]_{⟨σ,∗⟩} ∈ L(1, `^σ((![A₁])^⊥, . . . , (![A_n])^⊥) ` [B]) = L(1, ⊗^σ(![A₁], . . . , ![A_n]) ⊸ [B])

and hence

    (cur⁻¹([λ]_{⟨σ,∗⟩}) ⊗^σ_{⟨⟨⟩,σ⟩})^! ∈ L(⊗^σ(![A₁], . . . , ![A_n]), ![B]).
For i = 1, . . . , n, we have [ρ_i] ∈ L̃([Γ_i], ![A_i]). Let l_i be the length of Γ_i, and let us choose τ_i ∈ T_{l_i}. We have [ρ_i]_{⟨τ_i,∗⟩} ∈ L(1, `^{τ_i}([Γ_i]) ` ![A_i]) and hence, setting

    r_i = cur⁻¹([ρ_i]_{⟨τ_i,∗⟩}) ⊗^{τ_i}_{⟨⟨⟩,τ_i⟩} ∈ L(⊗^{τ_i}([Γ_i])^⊥, ![A_i])

we have ⊗^σ(r⃗) ∈ L(⊗^θ([∆]^⊥), ⊗^σ(![A₁], . . . , ![A_n])) where

    ∆ = Γ₁, . . . , Γ_n
    θ = σ(τ₁, . . . , τ_n)
where σ(τ₁, . . . , τ_k) (for σ ∈ T_k and τ_i ∈ T_{n_i} for i = 1, . . . , k) is the element of T_{n₁+···+n_k} defined inductively by

    ⟨⟩() = ⟨⟩
    ∗(τ) = τ
    ⟨σ, σ′⟩(τ₁, . . . , τ_n) = ⟨σ(τ₁, . . . , τ_k), σ′(τ_{k+1}, . . . , τ_n)⟩  where σ ∈ T_k, σ′ ∈ T_{n−k}
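The grafting operation σ(τ₁, . . . , τ_k) can be sketched concretely. In the snippet below, a minimal illustration and not the paper's notation, a monoidal tree is encoded as "*" for a leaf, () for ⟨⟩, and a pair (s, t) for ⟨s, t⟩; grafting replaces the i-th leaf of σ by τ_i.

```python
def leaves(t):
    """Number of * leaves, i.e. the n such that t belongs to T_n."""
    if t == "*":
        return 1
    if t == ():
        return 0
    s, u = t
    return leaves(s) + leaves(u)

def graft(sigma, taus):
    """Compute sigma(tau_1, ..., tau_n): replace the i-th leaf by taus[i]."""
    assert len(taus) == leaves(sigma)
    if sigma == "*":
        return taus[0]          # *(tau) = tau
    if sigma == ():
        return ()               # <>() = <>
    s, u = sigma
    k = leaves(s)               # the first k trees go to the left subtree
    return (graft(s, taus[:k]), graft(u, taus[k:]))

# Example: sigma = <*, <*, *>> in T_3, grafted with three trees of T_0, T_1, T_2.
sigma = ("*", ("*", "*"))
theta = graft(sigma, [(), "*", ("*", "*")])
```

As expected, the result belongs to T_{n₁+···+n_k}: its number of leaves is the sum of those of the grafted trees.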
We have therefore

    (cur⁻¹([λ]_{⟨σ,∗⟩}) ⊗^σ_{⟨⟨⟩,σ⟩})^! ⊗^σ(r⃗) ∈ L(⊗^θ([∆]^⊥), ![B])

We set

    [π]_θ = cur((cur⁻¹([λ]_{⟨σ,∗⟩}) ⊗^σ_{⟨⟨⟩,σ⟩})^! ⊗^σ(r⃗) ⊗^θ_{⟨⟨⟩,θ⟩}) ∈ L(1, `^{⟨θ,∗⟩}([∆, !B]))

and this gives us a definition of [π] ∈ L̃([∆, !B]) which does not depend on the choice of θ.
Theorem 7. Let π and π′ be derivations of Φ ⊢ p : Γ. Then [π] = [π′].
Again, we set [p] = [π] where π is a derivation of Φ ⊢ p : Γ.
Theorem 8. Assume that Φ ⊢ p : Γ, Φ ⊢ p′ : Γ and that p ⇝ p′. Then [p] = [p′].
The proofs of these results are tedious inductions, using the commutations described
in paragraphs 2.6.7, 2.6.8 and 2.6.9.
2.7. The differential λ-calculus. This is a term calculus corresponding to differential LL. We record here briefly our original syntax of (ER03), simplified by Vaux in (Vau05)¹⁰.
A simple term is either
— a variable x,
— or an ordinary application (M) R where M is a simple term and R is a term,
— or an abstraction λx M where x is a variable and M is a simple term,
— or a differential application DM · N where M and N are simple terms.
A term is a finite linear combination of simple terms, with coefficients in k. Substitution of a term R for a variable x in a simple term M, denoted as M[R/x], is defined as usual, whereas differential (or linear) substitution of a simple term for a variable in another simple term, denoted as ∂M/∂x · N, is defined as follows:

    ∂y/∂x · t = t if x = y, and 0 otherwise
    ∂(λy M)/∂x · N = λy (∂M/∂x · N)
    ∂(DM · N)/∂x · P = D(∂M/∂x · P) · N + DM · (∂N/∂x · P)
    ∂((M) R)/∂x · N = (∂M/∂x · N) R + (DM · (∂R/∂x · N)) R
All constructions are linear, except for ordinary application, which is not linear in the argument. This means that when we write e.g. (M₁ + M₂) R, what we actually intend is (M₁) R + (M₂) R. Similarly, substitution M[R/x] is linear in M and not in R, whereas differential substitution ∂M/∂x · N is linear in both M and N. There are two reduction rules:

    (λx M) R →_β M[R/x]
    D(λx M) · N →_{βd} λx (∂M/∂x · N)

which have of course to be closed under arbitrary contexts. The resulting calculus can be proved to be Church-Rosser using fairly standard techniques (Tait–Martin-Löf), to have good normalization properties in the typed case etc., see (ER03; Vau05). To be more precise, Church-Rosser holds only up to the least congruence on terms which identifies D(DM · N₁) · N₂ and D(DM · N₂) · N₁, a syntactic version of the Schwarz Lemma: terms are always considered up to this congruence, called below symmetry of derivatives.
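The four clauses defining differential substitution can be sketched directly; the snippet below is an illustrative encoding (tags "var", "app", "lam", "dapp" and the sum-as-list representation are our own choices, not the paper's), with formal sums represented as lists of simple terms, all coefficients being 1.

```python
def dsubst(m, x, n):
    """Return dM/dx . N as a list of simple terms (a formal sum).
    Assumes the usual alpha-convention: bound variables differ from x
    and do not occur free in n."""
    tag = m[0]
    if tag == "var":
        return [n] if m[1] == x else []            # dy/dx . t = t if x = y, else 0
    if tag == "lam":                               # d(lam y M)/dx . N = lam y (dM/dx . N)
        y, body = m[1], m[2]
        return [("lam", y, b) for b in dsubst(body, x, n)]
    if tag == "dapp":                              # Leibniz on D M . N
        dm, dn = m[1], m[2]
        return ([("dapp", b, dn) for b in dsubst(dm, x, n)]
                + [("dapp", dm, b) for b in dsubst(dn, x, n)])
    if tag == "app":                               # (M) R: linear in M, not in R
        fun, arg = m[1], m[2]
        return ([("app", b, arg) for b in dsubst(fun, x, n)]
                + [("app", ("dapp", fun, b), arg) for b in dsubst(arg, x, n)])
    raise ValueError(tag)

# d((x) x)/dx . u = (u) x + (Dx . u) x, illustrating the application clause.
ex = dsubst(("app", ("var", "x"), ("var", "x")), "x", ("var", "u"))
```

The example shows the two summands produced by the application rule: one where the head occurrence of x is replaced, and one where the argument is differentiated.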
2.7.1. Resource calculus. Differential application can be iterated: given simple terms M, N₁, . . . , N_n, we define Dⁿ M · (N₁, . . . , N_n) = D(· · · DM · N₁ · · · ) · N_n; the order on the terms N₁, . . . , N_n does not matter, by symmetry of derivatives. The (general) resource calculus is another syntax for the differential λ-calculus, in which the combination (Dⁿ M · (N₁, . . . , N_n)) R is considered as one single operation, denoted e.g. as M [N₁, . . . , N_n, R^∞].

¹⁰ Alternative syntaxes have been proposed, which are formally closer to Boudol’s calculus with multiplicities or with resources and are therefore often called resource λ-calculi.
2.7.2. The finite resource calculus. If, in the resource calculus above, one restricts one’s attention to the terms where all applications are of the form

    M [N₁, . . . , N_n, 0^∞]

which corresponds to the differential term (D(· · · DM · N₁ · · · ) · N_n) 0, then one gets a calculus which is stable under reduction and where all terms are strongly normalizing. This calculus, called the finite resource calculus, can be presented as follows.
— Any variable x is a simple term.
— If x is a variable and s is a simple term then λx s is a simple term.
— If s is a simple term and S is a finite multiset (also called bunch in the sequel) of simple terms then ⟨s⟩ S is a simple term. Intuitively, this term stands for the application s[s₁, . . . , s_n, 0^∞] of the resource calculus, where [s₁, . . . , s_n] = S.
A term is a (possibly infinite¹¹) linear combination of simple terms. This syntax is extended from simple terms to general terms by linearity. For instance, the term ⟨x⟩ [y + z, y + z] stands for ⟨x⟩ [y, y] + 2⟨x⟩ [y, z] + ⟨x⟩ [z, z].
In the finite resource calculus, it is natural to perform several βd-reductions in one step, and one gets

    ⟨λx s⟩ S →_{βd} Σ_{f∈S_n} s[s_{f(1)}/x₁, . . . , s_{f(n)}/x_n]  if deg_x s = n, and 0 otherwise

where n = deg_x s is the number of occurrences of x in s (which is a simple term), x₁, . . . , x_n are the occurrences of x in s and the multiset S is [s₁, . . . , s_n].
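This multi-step βd rule can be sketched as follows, under illustrative encoding choices of our own ("var", "lam" and "bag" tags, with a bunch represented as a list): the redex reduces to the sum over all ways of distributing the bunch over the occurrences of x, and to the empty sum when degrees mismatch.

```python
from itertools import permutations

def degree(s, x):
    """deg_x s: the number of occurrences of x in the simple term s."""
    tag = s[0]
    if tag == "var":
        return 1 if s[1] == x else 0
    if tag == "lam":
        return 0 if s[1] == x else degree(s[2], x)
    return degree(s[1], x) + sum(degree(u, x) for u in s[2])   # <t> S

def subst_occurrences(s, x, reps):
    """Replace the occurrences of x in s, left to right, consuming reps."""
    tag = s[0]
    if tag == "var":
        return reps.pop(0) if s[1] == x else s
    if tag == "lam":
        if s[1] == x:
            return s
        return ("lam", s[1], subst_occurrences(s[2], x, reps))
    head = subst_occurrences(s[1], x, reps)
    return ("bag", head, [subst_occurrences(u, x, reps) for u in s[2]])

def beta_d(lam, bag):
    """Reduce <lambda x s> S: one summand per way of assigning the bunch
    elements to the occurrences of x; 0 (the empty list) on a mismatch."""
    _, x, s = lam
    if degree(s, x) != len(bag):
        return []
    return [subst_occurrences(s, x, list(p)) for p in permutations(bag)]

# <lambda x <x>[x]> [y, z]  reduces to  <y>[z] + <z>[y]
out = beta_d(("lam", "x", ("bag", ("var", "x"), [("var", "x")])), [("var", "y"), ("var", "z")])
```

Since the two occurrences of x are distinct, the two assignments of the bunch [y, z] give two distinct summands, matching the sum over S_2 in the rule.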
Again, this calculus enjoys confluence, and also strong normalization (even in the
untyped case). It can be used for hereditarily Taylor expanding λ-terms as explained
in (ER08; ER06; Ehr10).
Taylor expansion consists in hereditarily replacing, in a differential λ-term, any ordinary application (M) N by the infinite sum

    Σ_{n=0}^∞ (1/n!) (Dⁿ M · (N, . . . , N)) 0.

¹¹ When considering infinite linear combinations, one has to deal with the possibility of unbounded coefficients appearing during the reduction. One option is to accept infinite coefficients, but it is also possible to prevent this phenomenon from occurring by topological means, as explained in (Ehr10).
    x* = x
    (λx M)* = λx (M*)
    (M [N₁, . . . , N_n, R^∞])* = Σ_{p=0}^∞ (1/p!) ⟨M*⟩ [N₁*, . . . , N_n*, R*, . . . , R*]   (p copies of R*)

so that the Taylor expansion of a resource term is a generally infinite linear combination of finite resource terms. In the definition above, we use the extension by linearity of the syntax of finite resource terms to arbitrary (possibly infinite) linear combinations. The coefficients belong to the considered semiring k, where division by positive natural numbers must be possible.
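A truncated version of this expansion for pure λ-terms can be sketched in code: each application (M) N becomes the sum over n of (1/n!) ⟨M*⟩[N*, . . . , N*], with bags cut off at a fixed size. The encoding below is illustrative (our own tags, exact rationals for the coefficients in k), and for simplicity it only tracks the coefficient correctly when each sub-expansion contributes bags built from a single support term, which suffices for variables as arguments.

```python
from fractions import Fraction
from math import factorial

def taylor(term, cutoff):
    """Finitely truncated Taylor expansion: dict {resource_term: coefficient}."""
    tag = term[0]
    if tag == "var":
        return {term: Fraction(1)}
    if tag == "lam":
        return {("lam", term[1], t): c
                for t, c in taylor(term[2], cutoff).items()}
    # application (M) N: sum over bag sizes k of (1/k!) <M*>[N*, ..., N*]
    out = {}
    for m, cm in taylor(term[1], cutoff).items():
        for n, cn in taylor(term[2], cutoff).items():
            for k in range(cutoff + 1):
                t = ("bag", m, (n,) * k)          # k copies of n in the bag
                coef = cm * cn**k / factorial(k)
                out[t] = out.get(t, Fraction(0)) + coef
    return out

# Expansion of (x) y with bags of size at most 2:
exp = taylor(("app", ("var", "x"), ("var", "y")), 2)
```

The coefficients 1, 1, 1/2 of the three summands are the 1/k! of the definition above.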
In (ER08; ER06) we studied the behavior of this expansion with respect to differential
β-reduction in the case where the expanded terms come from the λ-calculus (that is, do
not contain differential applications; this is the uniform case), and we exhibited tight
connections between this operation and Krivine’s machine, an implementation of linear
head reduction.
There is a simple translation from resource terms (or differential terms) to DiLL proof-
nets. When restricted to the finite resource calculus, this translation ranges in DiLL0 .
This translation extends Girard’s Translation from the λ-calculus to LL proof-nets.
We define ∂̄_X ∈ L(!X ⊗ X, !X) as the composition

    !X ⊗ X --!X ⊗ d̄_X--> !X ⊗ !X --c̄_X--> !X

More generally, we define ∂̄ⁿ_X ∈ L(!X ⊗ X^{⊗n}, !X):

    ∂̄⁰_X = Id_{!X}
    ∂̄^{n+1}_X = ∂̄_X (∂̄ⁿ_X ⊗ X)

Last we set

    d̄ⁿ_X = ∂̄ⁿ_X (w̄_X ⊗ X^{⊗n}) ∈ L(X^{⊗n}, !X).

We define dually ∂_X ∈ L(!X, !X ⊗ X) as ∂_X = (!X ⊗ d_X) c_X and ∂ⁿ_X ∈ L(!X, !X ⊗ X^{⊗n}) by ∂⁰_X = Id_{!X} and ∂^{n+1}_X = (∂ⁿ_X ⊗ X) ∂_X. And we set

    dⁿ_X = (w_X ⊗ X^{⊗n}) ∂ⁿ_X ∈ L(!X, X^{⊗n}).

Observe that we have d̄¹_X = d̄_X and d¹_X = d_X.
Consider now some f ∈ L(!X, Y ), to be intuitively seen as a non linear map from X to
Y (if ! were assumed to be a comonad as in Section 2.6, then f would be a morphism in
the Kleisli category L! ). Such a morphism will sometimes be called a “regular function”
in the sequel, but keep in mind that it is not even a function in general. For instance,
if X and Y are vector spaces (with a topological structure, in the infinite dimensional
case), such a regular function could typically be a smooth or an analytic function.
With these notations, f wX ∈ L(1, Y ) should be understood as the point of Y obtained
by applying the regular function f to 0. Similarly, f cX ∈ L(!X ⊗ !X, Y ) should be
understood as the regular function g : X × X → Y defined by g(x1 , x2 ) = f (x1 + x2 ).
Dually, given y ∈ L(1, Y ) considered as a point of Y , then y wX ∈ L(!X, Y ) should
be understood as the constant regular function which takes y as unique value. If g ∈
(!X ⊗ !X, Y ), to be considered as a regular function with two parameters X × X → Y ,
then f = g cX ∈ L(!X, Y ) should be understood as the regular function X → Y given by
f (x) = g(x, x). Given g ∈ L(X, Y ), to be considered as a linear function from X to Y ,
f = g dX ∈ L(!X, Y ) is g, considered now as a regular function from X to Y .
The basic idea of DiLL is that f′ = f ∂̄_X ∈ L(!X ⊗ X, Y) (that is, f′ ∈ L(!X, X ⊸ Y), up to linear curryfication) represents the derivative of f.
Remember indeed that if f : X → Y is a smooth function from a vector space X to
a vector space Y , the derivative of f is a function f 0 from X to the space X ( Y of
(continuous) linear functions from X to Y : f 0 maps x ∈ X to a linear map f 0 (x) : X → Y ,
the differential (or Jacobian) of f at point x, which maps u ∈ X to f 0 (x) · u.
In particular, f d̄_X = f ∂̄_X (w̄_X ⊗ X) ∈ L(X, Y) corresponds to f′(0), the differential of f at 0.
More generally, f ∂̄ⁿ_X ∈ L(!X ⊗ X^{⊗n}, Y) represents the nth derivative of f, which is a regular function from X to the space of n-linear functions from Xⁿ to Y (these n-linear functions are actually symmetric, a property called “Schwarz Lemma” which is axiomatized here by the commutativity of the algebra structure of !X).
The axioms of an exponential structure express that this categorical definition of differentiation satisfies the usual laws of differentiation. Let us give a simple example. Consider f ∈ L(!X ⊗ !X, Y), to be seen as a regular function f(x₁, x₂) depending on two parameters x₁ and x₂ in X. Remember that g = f c_X ∈ L(!X, Y) represents the map depending on one parameter x in X given by g(x) = f(x, x).
Using the axioms of exponential structures, one checks easily that

    g′(x) · u = f′₁(x, x) · u + f′₂(x, x) · u

where we use f′_i for the ith partial derivative of f and “·” for the linear application of the differential. This is Leibniz law.
Similarly, the easily proven equation w_X ∂̄_X = 0 expresses that the derivative of a constant map is equal to 0.
It is a nice exercise to interpret similarly the dual equations.
We say that the exponential structure is Taylor if, for all f₁, f₂ ∈ L(!X, Y),

    f₁ ∂̄_X = f₂ ∂̄_X ⇒ f₁ + (f₂ w̄_X w_X) = f₂ + (f₁ w̄_X w_X)

This condition does not seem to be derivable from the other axioms of exponential structures. The converse implication is easy to prove.
Remark: There is a dual condition which reads as follows: if f₁, f₂ : Y → !X, then

    ∂_X f₁ = ∂_X f₂ ⇒ f₁ + (w̄_X w_X f₂) = f₂ + (w̄_X w_X f₁)
    Tⁿ_X ∂̄_X = ∂̄_X (T^{n−1}_X ⊗ Id_X).
Proposition 10. Assume that L is Taylor and that each homset L(X, Y) is a cancellative monoid. Let n ∈ ℕ and let f₁, f₂ : !X → Y. If f₁ ∂̄^{n+1}_X = f₂ ∂̄^{n+1}_X then f₁ + (f₂ Tⁿ_X) = f₂ + (f₁ Tⁿ_X).
In particular, if f ∂̄^{n+1}_X = 0 (that is, the (n+1)-th derivative of f is uniformly equal to 0), then f = f Tⁿ_X, meaning that f is equal to its Taylor expansion of rank n.
that is

    cur(f₁ ∂̄_X) + cur(f₂ ∂̄_X (Tⁿ_X ⊗ Id_X)) = cur(f₂ ∂̄_X) + cur(f₁ ∂̄_X (Tⁿ_X ⊗ Id_X))

and hence

    f₁ + f₂ T^{n+1}_X + (f₂ + f₁ T^{n+1}_X) w̄_X w_X = f₂ + f₁ T^{n+1}_X + (f₁ + f₂ T^{n+1}_X) w̄_X w_X

so that, by cancellativity,

    f₁ + f₂ T^{n+1}_X = f₂ + f₁ T^{n+1}_X

as required. 2
3.1.1. The category of polynomials. We say that f ∈ L(!X, Y) is polynomial if there exists n ∈ ℕ such that f ∂̄^{n+1}_X = 0, and we call degree of f the least such n. The morphism d_X ∈ L(!X, X) is polynomial of degree 1.
Let f ∈ L(!X, Y) and g ∈ L(!Y, Z) be polynomial of degree m and n respectively.
3.1.2. Weak functoriality and the category of polynomials. We do not require this operation X ↦ !X to be functorial, but some weak form of functoriality can be derived from the above categorical axioms. Let f ∈ L(X, Y). By induction on n, we define a family of morphisms fⁿ : !X → !Y as follows: f⁰ = w̄_Y w_X and

    f^{n+1} = ∂̄_Y (fⁿ ⊗ f) ∂_X.

Proof. Simple calculation using the diagram commutations which define an exponential structure. 2
So for each n ∈ ℕ we can define !_n f = Σ_{q=0}^n (1/q!) f^q : !X → !Y, and we have !_n g !_n f = !_n(g f). So f ↦ !_n f is a quasifunctor, but not a functor, as it does not map Id_X to Id_{!X}, but to an idempotent morphism ρⁿ_X : !X → !X.
In some concrete models, this sequence (!_n f)_{n∈ℕ} can be said to be convergent, in a sense which depends of course on the model. The limit is then denoted as !f and the operation defined in that way turns out often to be a true functor, defining a functorial exponential in the sense of Section 2.6.
    J_X = Id_{!X} + (∂̄_X ∂_X) : !X → !X

    ∂_X ∂̄_X = Id_{!X⊗X} + ψ_X.

Proof. Since I_X = (Id_{!X} + (∂̄_X ∂_X))⁻¹, we have I_X ⊗ Id_X = φ⁻¹ where φ = Id_{!X⊗X} + ((∂̄_X ∂_X) ⊗ Id_X) by functoriality of ⊗. To prove (11), it suffices therefore to prove that φ commutes with ψ_X. For this, it suffices to show that (∂̄_X ∂_X) ⊗ Id_X commutes with ψ_X. We have

    ((∂̄_X ∂_X) ⊗ Id_X) ψ_X = ((∂̄_X ∂_X ∂̄_X) ⊗ Id_X) σ₂₃ (∂_X ⊗ Id_X)

but remember that ∂_X ∂̄_X = Id_{!X⊗X} + ψ_X, and hence

    ((∂̄_X ∂_X) ⊗ Id_X) ψ_X
        = ψ_X + (∂̄_X ⊗ Id_X) ((∂̄_X ∂_X) ⊗ Id_X ⊗ Id_X) σ₂₃ (∂_X ⊗ Id_X)
        = ψ_X + (∂̄_X ⊗ Id_X) ((∂̄_X ∂_X) ⊗ σ) (∂_X ⊗ Id_X)

A similar and completely symmetric computation, using this time the cocommutativity of the bialgebra !X, leads to the same expression for ψ_X ((∂̄_X ∂_X) ⊗ Id_X), whence the required commutation.
We can now prove a completely categorical version of the following proposition, which is the key step in the usual proof of Poincaré’s Lemma.
Then we have

    g ∂̄_X = f (I_X ⊗ Id_X) ∂_X ∂̄_X
          = f (I_X ⊗ Id_X) (Id_{!X⊗X} + ψ_X)
          = f (I_X ⊗ Id_X) + f (I_X ⊗ Id_X) ψ_X
          = f (I_X ⊗ Id_X) + f ψ_X (I_X ⊗ Id_X)

So we get
3.2.1. Comments. Let us give some intuition about our axiom that J_X has an inverse. Given f : !X → Y seen as a “regular function” from X to Y, we explain why the morphism f I_X : !X → Y should be understood as representing the regular function g defined by

    g(x) = ∫₀¹ f(tx) dt

assuming of course that this integral makes sense. With this interpretation, g ∂̄_X : !X ⊗ X → Y represents the differential Dg of g, a regular function X × X → Y which maps (x, y) to Dg(x) · y and is linear in y. Then, applying the ordinary rules of differential calculus, and the fact that differentiation commutes with integration, we get

    Dg(x) · y = ∫₀¹ t (Df(tx) · y) dt.
which is exactly its definition, in the standard proof of Poincaré’s Lemma. The proof of
Proposition 13 is a rephrasing of the standard proof, which uses an integration by parts.
3.2.2. The fundamental theorem of calculus. This is the statement according to which one can use antiderivatives for computing integrals: if f, g : ℝ → ℝ are such that g′ = f, then ∫_a^b f(t) dt = g(b) − g(a). In the present setting, it boils down to a simple categorical equation.
Proposition 14. Let L be an exponential structure which has antiderivatives and is Taylor. Then

    ∂̄_X (I_X ⊗ Id_X) ∂_X + w̄_X w_X = Id_{!X}.
We extend this operation by linearity to all elements u ∈ k⟨∆⟩, that is we set I_x(u) = Σ_{t∈∆} u_t I_x(t).
For d ∈ ℕ, let ∆_x^{(d)} = {t ∈ ∆ | deg_x t = d} be the set of all simple resource terms of degree d in x. The elements of k⟨∆_x^{(d)}⟩ are said to be homogeneous of degree d in x. With these notations, we can write

    I_x(u) = Σ_{d=0}^∞ (1/(d+1)) Σ_{t∈∆_x^{(d)}} u_t t

Intuitively, I_x(u) stands for the integral ∫₀¹ u(τx) dτ, which is the basic ingredient in the proof above of Poincaré’s Lemma.
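The operator I_x simply rescales each homogeneous component of degree d by 1/(d+1) (since ∫₀¹ τ^d dτ = 1/(d+1)). A minimal sketch, in which resource terms are opaque keys paired with their degree in x (an illustrative simplification of deg_x) and coefficients are exact rationals:

```python
from fractions import Fraction

def integrate(u):
    """I_x on a formal combination u: dict {(term, degree_in_x): coefficient}.
    Each term of degree d in x is rescaled by 1/(d+1)."""
    return {td: c * Fraction(1, td[1] + 1) for td, c in u.items()}

# A combination with components of degree 0, 1 and 2 in x:
u = {("t0", 0): Fraction(3), ("t1", 1): Fraction(2), ("t2", 2): Fraction(1)}
v = integrate(u)
```

On this example the degree-1 component gets coefficient 2 · 1/2 = 1 and the degree-2 component gets 1/3, as in the displayed formula.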
Let u ∈ k⟨∆⟩ which is linear in the variable h, in other words u ∈ k⟨∆_h^{(1)}⟩. Let h′ be a variable which does not occur free in u; we assume that

    ∂u/∂x · h′ = (∂u[h′/h]/∂x) · h

which is our symmetry hypothesis on u. In other words, for any d ∈ ℕ, we have

    Σ_{t∈∆_x^{(d)}} u_t (∂t/∂x · h′) = Σ_{t∈∆_x^{(d)}} u_t (∂t[h′/h]/∂x · h).   (14)
= u[h′/h]
4. Concrete models
We want now to give concrete examples of categorical models of DiLL.

[Coherence diagram relating p_X ⊗ p_Y, m²_{!X,!Y}, p_{X&Y}, !⟨!π₁, !π₂⟩ and !!(X & Y).]

and μ²_{X,Y} is the composition

    !X ⊗ !Y --m²_{X,Y}--> !(X & Y) --p_{X&Y}--> !!(X & Y) --!(m²_{X,Y})⁻¹--> !(!X ⊗ !Y) --!(d_X ⊗ d_Y)--> !(X ⊗ Y)

The bialgebraic structure of !X presented in Section 2.5 is also related to this Seely monoidal structure.
For the coalgebraic part, let ∆_X ∈ L(X, X & X) be the diagonal morphism associated with the cartesian product of X with itself. Then we have

    c_X = (m²_{X,X})⁻¹ !∆_X : !X → !X ⊗ !X

Similarly we set

    w_X = (m⁰)⁻¹ !τ_X

where τ_X : X → > is the unique morphism to the terminal object. The algebraic part satisfies similar conditions, using the codiagonal a_X : X & X → X and the morphism τ′_X : > → X.
where pr1 and pr2 are the two projections of the cartesian product in the category Set
of sets and functions (the ordinary cartesian product “×”).
Observe that an isomorphism in Rel is a relation which is a bijection.
The symmetric monoidal structure is given by the tensor product X ⊗ Y = X × Y, the unit 1 being an arbitrary singleton. The neutrality, associativity and symmetry isomorphisms are defined as the obvious corresponding bijections (for instance, the symmetry isomorphism σ_{X,Y} ∈ Rel(X ⊗ Y, Y ⊗ X) is given by σ(a, b) = (b, a)). This symmetric monoidal category is closed, with linear function space given by X ⊸ Y = X × Y, the natural bijection between Rel(Z ⊗ X, Y) and Rel(Z, X ⊸ Y) being induced by the associativity isomorphism of the cartesian product. Last, one takes for ⊥ an arbitrary singleton, and this turns Rel into a ∗-autonomous category. One denotes as ∗ the unique element of 1 and ⊥.
This category is additive, with cartesian product X₁ & X₂ of X₁ and X₂ defined as {1}×X₁ ∪ {2}×X₂ with projections π_i = {((i, a), a) | a ∈ X_i} (for i = 1, 2), and terminal object > = ∅. Then the commutative monoid structure on homsets Rel(X, Y) is defined by 0 = ∅ and f + g = f ∪ g, and the action of k on morphisms is defined by 0 f = 0 and 1 f = f (there are no other possibilities).
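The categorical operations of Rel described above can be sketched directly on finite sets of pairs; the function names below are our own illustrative choices.

```python
def compose(g, f):
    """Relational composition g after f: {(a, c) | (a,b) in f, (b,c) in g}."""
    return {(a, c) for (a, b) in f for (b2, c) in g if b == b2}

def identity(xs):
    """The identity relation on the set xs."""
    return {(a, a) for a in xs}

def tensor(f, g):
    """f (x) g, acting componentwise on pairs."""
    return {((a, c), (b, d)) for (a, b) in f for (c, d) in g}

X = {0, 1}
f = {(0, "a"), (1, "a")}       # a morphism from X to the singleton-ish {"a"}
g = {(0, "b")}
h = f | g                      # sum of morphisms is union; 0 is the empty set
```

Composition with the empty relation gives the empty relation, matching the fact that 0 is an absorbing element for composition in this model.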
Rel is also a Seely category (see Section 4.1), for the comonad ! defined on objects by !X = M_fin(X), the set of all finite multisets of elements of X.
4.2.1. Antiderivatives. This exponential structure is bicommutative and can easily be seen to be Taylor in the sense of Section 3.1. Moreover, it has antiderivatives in the sense of Section 3.2, simply because the morphism J_X = Id_{!X} + (∂̄_X ∂_X) : !X → !X coincides here with the identity. Indeed ∂_X = {(l + [a], (l, a)) | l ∈ !X and a ∈ X}, ∂̄_X = {((l, a), l + [a]) | l ∈ !X and a ∈ X} and therefore ∂̄_X ∂_X = {(l, l) | l ∈ !X and #l > 0}.
Concretely, saying that a morphism f ∈ Rel(!X ⊗ X, Y) satisfies the symmetry condition of Proposition 13 simply means that, given m ∈ M_fin(X), a, a′ ∈ X and b ∈ Y, one has ((m + [a], a′), b) ∈ f ⇔ ((m + [a′], a), b) ∈ f. In that case, the antiderivative g ∈ Rel(!X, Y) given by that proposition is simply

    g = {(m + [a], b) | ((m, a), b) ∈ f}.
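These relational formulas can be checked mechanically on a finite fragment; below, multisets are encoded as sorted tuples and !X is enumerated up to size 2 for X = {0, 1} (all encoding choices are illustrative).

```python
X = [0, 1]

def add(l, a):
    """The multiset l + [a], with multisets as sorted tuples."""
    return tuple(sorted(l + (a,)))

# Finite fragment of !X: multisets of size at most 2.
bangX = [()] + [(a,) for a in X] + [add((a,), b) for a in X for b in X if b >= a]

partial = {(add(l, a), (l, a)) for l in bangX for a in X}       # the relation for d/dx
copartial = {((l, a), add(l, a)) for l in bangX for a in X}     # its "bar" dual

# Relational composition: on this fragment it is {(l, l) | l nonempty}.
comp = {(u, v) for (u, m) in partial for (m2, v) in copartial if m == m2}

def antiderivative(f):
    """g = {(m + [a], b) | ((m, a), b) in f}, for f satisfying the symmetry
    condition of Proposition 13."""
    return {(add(m, a), b) for ((m, a), b) in f}
```

On this fragment every pair of comp is diagonal with nonempty left component, so adding the identity (union, the sum of this model) gives back the identity, which is the statement J_X = Id.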
Proof. Let L be a generating filter for the topology of E. First, let x ∈ U and let V ∈ L be such that V ⊆ U (such a V exists because U is a neighborhood of 0); then we have x + V ⊆ U since U is a linear subspace, and hence U is open. Next let x ∈ E \ U. If y ∈ U ∩ (x + U) then we have y − x ∈ U and hence x ∈ U since y ∈ U and U is a linear subspace: contradiction. Therefore U ∩ (x + U) = ∅, so the complement of U is a union of open sets of the shape x + U, and hence U is closed. 2
4.4.2. Linear boundedness. Let E be an ltvs and let U be an open linear subspace of
E. Let πU : E → E/U be the canonical projection. This map is of course linear, and
its kernel is U which is a neighborhood of 0. This means that, endowing E/U with the
discrete topology, πU is continuous. Hence the quotient topology on E/U is the discrete
topology.
We say that a subspace B of E is linearly bounded if πU (B) is finite dimensional, for
every linear open subspace U of E. In other words, for any linear open subspace U , there
is a finite dimensional subspace A of E such that B ⊆ U + A.
Proposition 16. Any finite dimensional subspace of an ltvs E is linearly bounded. Let
B1 and B2 be subspaces of E. If B1 ⊆ B2 and B2 is linearly bounded, so is B1 . If B1
and B2 are linearly bounded, so is B1 + B2 .
Given linearly bounded subspaces B₁, . . . , B_n of E₁, . . . , E_n and an open linear subspace V of F, we define

    Ann(B₁, . . . , B_n, V) = {f ∈ (E₁, . . . , E_n) ⊸ F | f(B₁ × · · · × B_n) ⊆ V}.

This is a linear subspace of (E₁, . . . , E_n) ⊸ F and by Proposition 16 these subspaces form a filter which defines a linear topology on (E₁, . . . , E_n) ⊸ F, and this topology is Hausdorff. Indeed, if f ∈ (E₁, . . . , E_n) ⊸ F is ≠ 0, then take x_i ∈ E_i such that f(x₁, . . . , x_n) ≠ 0. Since F is Hausdorff, there is a linear neighborhood V of 0 in F such that f(x₁, . . . , x_n) ∉ V. Let B_i = k x_i; this is a linearly bounded subspace of E_i and f(B₁ × · · · × B_n) ⊄ V.
In the case n = 1 (and E = E1 ), the corresponding maps f : E → F are simply called
linear, and they are continuous. The corresponding function space is denoted as E ( F .
If F = k, the corresponding maps are called (multi)linear (hypo)continuous forms. If furthermore n = 1, the corresponding function space is denoted as E′ and is called the topological dual of E.
Proposition 17. Let f : E1 × · · · × En → F be multilinear and hypocontinuous and
let Bi ⊆ Ei be linearly bounded subspaces for i = 1, . . . , n. Then f (B1 × · · · × Bn ) is a
linearly bounded subspace of F .
{0} (because ∀a ∈ |X| {a} ∈ F(X)^⊥), and therefore this filter defines a Hausdorff linear topology on k⟨X⟩, that we call the canonical topology of k⟨X⟩.
Proposition 18. For any finiteness space X, the ltvs k⟨X⟩ is Cauchy-complete.
Proof. Let (x(d))_{d∈D} be a Cauchy net in k⟨X⟩. Let a ∈ |X|. By taking u′ = {a} in the definition of a Cauchy net, we see that there exist x_a ∈ k and d_a ∈ D such that ∀e ≥ d_a  x(e)_a = x_a. In that way we have defined x = (x_a)_{a∈|X|} ∈ k^{|X|}.
We prove first that

    ∀u′ ∈ F(X)^⊥ ∃d ∈ D ∀e ≥ d ∀a ∈ u′  x(e)_a = x_a.   (17)

Let u′ ∈ F(X)^⊥. Let d₀ ∈ D be such that x(e) − x(d₀) ∈ V_X(u′) for all e ≥ d₀. Let a ∈ u′ and let d_a ≥ d₀ be such that x(e)_a = x_a for all e ≥ d_a. Let e ≥ d₀. Let e′ ≥ e, d_a. We have x_a = x(e′)_a since e′ ≥ d_a, and x(e′)_a = x(e)_a since e, e′ ≥ d₀ and a ∈ u′. It follows that ∀a ∈ u′  x_a = x(e)_a.
From this we deduce now that x ∈ k⟨X⟩. Let u′ ∈ F(X)^⊥. Let d ∈ D be such that ∀e ≥ d ∀a ∈ u′  x(e)_a = x_a. Then supp(x) ∩ u′ = supp(x(d)) ∩ u′ is finite, so supp(x) ∈ F(X)^{⊥⊥} = F(X), that is x ∈ k⟨X⟩.
Now Condition (17) expresses exactly that lim_{d∈D} x(d) = x, and hence the net (x(d))_{d∈D} converges. 2
A natural question is whether the ltvs k⟨X⟩, which is Hausdorff, is always metrizable. We provide a necessary and sufficient condition under which this is the case.

¹² This would coincide with the categorical notion of isomorphism if we were using morphisms which are defined as relations. With linear continuous maps (between the associated ltvs’s) as morphisms, the present notion of isomorphism is a particular case of the standard categorical one: we can have more linear homeomorphisms from k⟨X⟩ to k⟨Y⟩ than those which are generated by such finiteness-preserving bijections between webs.
Proposition 19. Let X be a finiteness space. The ltvs khXi is metrizable iff there exists
a sequence (u0n )n∈N of elements of F(X)⊥ which is monotone (n ≤ m ⇒ u0n ⊆ u0m ) and
such that ∀u0 ∈ F(X)⊥ ∃n ∈ N u0 ⊆ u0n .
Proof. Let first (u0n )n∈N be a sequence of elements of F(X)⊥ which satisfies the condition stated above. Given x, y ∈ khXi, we define
d(x, y) = 0 if x = y, and d(x, y) = 2^{−n} if x ≠ y, where n is the least integer such that u0n ∩ supp(x − y) ≠ ∅ .
Indeed, if x ≠ y, then supp(x − y) ≠ ∅ and hence, taking a ∈ supp(x − y), we can find n ∈ N such that {a} ⊆ u0n . This function d is easily seen to be an ultrametric distance (that is, d(x, z) ≤ max(d(x, y), d(y, z))) and it generates the canonical topology of khXi.
Indeed we have
d(x, y) < 2^{−n} iff x − y ∈ V X (u0n )
(indeed, d(x, y) < 2^{−n} means that the least m such that x − y ∉ V X (u0m ) satisfies m > n) and hence B_{2^{−n}} = V X (u0n ), where Bε is the open ball centered at 0 and of radius ε.
Conversely, assume that khXi is metrizable and let d be a distance defining the canonical topology of khXi. For each n ∈ N, B_{2^{−n}} is a neighborhood of 0 and hence there exists v0n ∈ F(X)⊥ such that V X (v0n ) ⊆ B_{2^{−n}} . Let u0n = v00 ∪ · · · ∪ v0n ∈ F(X)⊥ . Then V X (u0n ) ⊆ V X (v0n ) ⊆ B_{2^{−n}} . Now let u0 ∈ F(X)⊥ ; then V X (u0 ) is a neighborhood of 0 and hence there exists n such that B_{2^{−n}} ⊆ V X (u0 ), which implies V X (u0n ) ⊆ V X (u0 ) and hence u0 ⊆ u0n . □
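The ultrametric built in this proof can be made concrete. The following Python sketch is ours, not the paper's: vectors are encoded as finitely supported dicts over Q, and the monotone sequence (u0n) is the hypothetical demo choice u(n) = {0, …, n}.

```python
from fractions import Fraction

def sub(x, y):
    # pointwise difference of two finitely supported vectors
    keys = set(x) | set(y)
    return {a: x.get(a, 0) - y.get(a, 0) for a in keys}

def supp(x):
    # support: the indices carrying a nonzero coefficient
    return {a for a, c in x.items() if c != 0}

def dist(x, y, u):
    # d(x, y) = 2^(-n), n least with u(n) meeting supp(x - y); 0 if x = y
    s = supp(sub(x, y))
    if not s:
        return Fraction(0)
    n = 0
    while not (u(n) & s):
        n += 1
    return Fraction(1, 2 ** n)

# demo web |X| = N with the monotone exhaustion u_n = {0, ..., n}
u = lambda n: set(range(n + 1))
x = {0: 1, 5: 2}
y = {0: 1, 5: 3}
print(dist(x, y, u))  # supp(x - y) = {5} is first met at n = 5, so d = 1/32
```

One can check on such examples that d satisfies the ultrametric inequality d(x, z) ≤ max(d(x, y), d(y, z)).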
It follows that there are non-metrizable ltvs's associated with finiteness spaces. We give in Proposition 20 an example of this situation which arises in the semantics of LL, using exponential constructions that will be introduced in Section 4.5.3.
Proposition 20. The ltvs kh!?1i is not metrizable.
Proof. Let X = !?1, so that |X| = Mfin (N) and a subset u of |X| belongs to F(X) iff ∃n ∈ N u ⊆ Mfin ({0, . . . , n}). The proof is a typical Cantor diagonal reasoning. We assume towards a contradiction that khXi is metrizable, that is, by Proposition 19, that there is a monotone sequence (u0n )n∈N of elements of F(X)⊥ such that ∀u0 ∈ F(X)⊥ ∃n ∈ N u0 ⊆ u0n . Let n ∈ N; writing p[n] for the multiset in which n occurs with multiplicity p, we have {p[n] | p ∈ N} ⊆ Mfin ({0, . . . , n}), so that {p[n] | p ∈ N} ∈ F(X) and hence u0n ∩ {p[n] | p ∈ N} is finite. Therefore we can find a function f : N → N such
that ∀n ∈ N f (n)[n] ∉ u0n . Let u0 = {f (n)[n] | n ∈ N}. Then u0 ∈ F(X)⊥ since, for any n ∈ N, u0 ∩ Mfin ({0, . . . , n}) = {f (i)[i] | i ∈ [0, n]} is finite. But for all n ∈ N we have f (n)[n] ∈ u0 \ u0n and so u0 ⊈ u0n . □
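The diagonal construction of f can be illustrated as follows. The encodings are our own demo assumptions: the multiset p[n] is represented by the pair (p, n), and the particular monotone sequence u0n is a made-up finite example.

```python
def diagonal_f(u_prime):
    # u_prime(n) is a finite set of multisets; the multiset "p[n]"
    # (p copies of n) is encoded as the pair (p, n).
    # We pick f(n) with f(n)[n] not in u_prime(n), which is possible because
    # u_prime(n) meets the "column" {p[n] | p in N} in a finite set.
    def f(n):
        p = 0
        while (p, n) in u_prime(n):
            p += 1
        return p
    return f

# demo: u'_n = {(p, m) | p <= n, m <= n}, a monotone sequence of finite sets
u_prime = lambda n: {(p, m) for p in range(n + 1) for m in range(n + 1)}
f = diagonal_f(u_prime)
assert all((f(n), n) not in u_prime(n) for n in range(10))
print([f(n) for n in range(5)])  # [1, 2, 3, 4, 5]: each f(n) escapes u'_n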
Proposition 21. A linear subspace B of khXi is linearly bounded iff there exists u ∈
F(X) such that B ⊆ BX (u).
Proof. Assume that B is linearly bounded. Let u = ⋃x∈B supp(x), so that B ⊆ BX (u); we prove that u ∈ F(X). Let u0 ∈ F(X)⊥ . Let A be a finite dimensional subspace of khXi such that B ⊆ V X (u0 ) + A. Let A0 be a finite generating subset of A and let w = ⋃y∈A0 supp(y) ∈ F(X). Then x ∈ A ⇒ supp(x) ⊆ w (that is, A ⊆ BX (w)). Let x ∈ B; we write x = x1 + x2 where x1 ∈ V X (u0 ) and x2 ∈ A. We have supp(x) ⊆ supp(x1 ) ∪ supp(x2 ) and hence u0 ∩ supp(x) ⊆ (u0 ∩ supp(x1 )) ∪ (u0 ∩ supp(x2 )) ⊆ u0 ∩ w since u0 ∩ supp(x1 ) = ∅. Since this holds for all x ∈ B, we have u0 ∩ u ⊆ u0 ∩ w, so u0 ∩ u is finite and hence u ∈ F(X). □
Proposition 22. The ltvs khXi is locally linearly bounded iff there exist u ∈ F(X) and
u0 ∈ F(X)⊥ such that u ∪ u0 = |X|.
Let X and Y be finiteness spaces. Let X ( Y be the finiteness space such that
|X ( Y | = |X| × |Y | and
F(X ( Y ) = {u × v 0 | u ∈ F(X) and v 0 ∈ F(Y ⊥ )}⊥
= {w ⊆ |X| × |Y | | ∀u ∈ F(X) ∀v 0 ∈ F(Y )⊥ w ∩ (u × v 0 ) is finite}
Let w ∈ F(X ( Y ), u ∈ F(X) and v0 ∈ F(Y )⊥ . It follows from (16) that wu ∈ F(Y ) and
that w⊥ v 0 ∈ F(X)⊥ .
Let M ∈ khX ( Y i. If x ∈ khXi and b ∈ |Y |, then supp(M )⊥ {b} ∈ F(X)⊥ and hence the sum ∑a∈|X| Ma,b xa is finite. Therefore we can define M x ∈ k|Y | by
M x = (∑a∈|X| Ma,b xa )b∈|Y | .
Since supp(M x) ⊆ supp(M ) supp(x), we have M x ∈ khY i and hence the function fun(M ) defined by fun(M )(x) = M x is a linear map khXi → khY i. Moreover, fun(M ) is continuous. Indeed, for any v0 ∈ F(Y )⊥ we have V X (supp(M )⊥ v0 ) ⊆ fun(M )^{−1} (V Y (v0 )) and hence fun(M )^{−1} (V Y (v0 )) is open since supp(M )⊥ v0 ∈ F(X)⊥ .
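The application x ↦ M x can be sketched concretely. Representing M and x as finitely supported dicts is a demo simplification of ours: in the model, supports need only satisfy the finiteness conditions, which is what makes each coordinate sum finite.

```python
def fun(M, x):
    # M: dict {(a, b): coefficient}; x: dict {a: coefficient}.
    # Returns M x = (sum_a M[a, b] * x_a)_b, dropping zero coordinates.
    y = {}
    for (a, b), c in M.items():
        if a in x:
            y[b] = y.get(b, 0) + c * x[a]
    return {b: c for b, c in y.items() if c != 0}

# hypothetical demo matrix and vector
M = {(0, 'u'): 2, (1, 'u'): -1, (1, 'v'): 3}
x = {0: 1, 1: 2}
print(fun(M, x))  # {'v': 6}: the 'u' coordinate 2*1 + (-1)*2 cancels
```

Linearity of fun(M) is immediate from the coordinatewise definition.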
Given finiteness spaces Z1 , Z2 , we define immediately the finiteness space Z1 ⊗ Z2 as
Z1 ⊗ Z2 = (Z1 ( Z2⊥ )⊥ , so that |Z1 ⊗ Z2 | = |Z1 | × |Z2 |. One of the most pleasant
features of the theory of finiteness spaces is the following property (see (Ehr05)) which
has been considerably generalized in (TV10).
Proposition 23. Let w ⊆ |Z1 | × |Z2 |. One has w ∈ F(Z1 ⊗ Z2 ) iff pri (w) ∈ F(Zi ) for
i = 1, 2.
Coming back to linear function spaces, this means in particular that, given w ⊆ |X| ×
|Y |, one has w ∈ F(X ( Y )⊥ iff there are u ∈ F(X) and v 0 ∈ F(Y )⊥ such that w ⊆ u×v 0 ,
from which we derive a simple characterization of the topology of linear function spaces.
Proposition 24. The function FunX,Y : M 7→ fun(M ) is a linear homeomorphism from
khX ( Y i to khXi ( khY i, equipped with the topology of uniform convergence on
linearly bounded subspaces.
Proof. The proof that FunX,Y is a linear isomorphism can be found in (Ehr05). We prove that this linear isomorphism is a homeomorphism. Let B ⊆ khXi be a linearly bounded subspace and let V ⊆ khY i be an open subspace. Let u ∈ F(X) be such that B ⊆ BX (u)
and let v 0 ∈ F(Y )⊥ be such that V Y (v 0 ) ⊆ V . Then u × v 0 ∈ F(X ( Y )⊥ and hence
V X(Y (u × v 0 ) ⊆ khX ( Y i is an open subspace. Let M ∈ V X(Y (u × v 0 ), x ∈ B and
b ∈ v 0 , we have (M x)b = 0 since supp(x) ⊆ u, which shows that FunX,Y (M )(B) ⊆ V and
hence FunX,Y is continuous.
Let now W ⊆ khX ( Y i be an open subspace. Let w ∈ F(X ( Y )⊥ be such that
V X(Y (w) ⊆ W . By Proposition 23, there are u ∈ F(X) and v 0 ∈ F(Y )⊥ such that
w ⊆ u × v 0 , and hence V X(Y (u × v 0 ) ⊆ W . Then, given M ∈ V X(Y (u × v 0 ), we have
FunX,Y (M )(BX (u)) ⊆ V Y (v 0 ), which shows that FunX,Y (W ) is an open linear subspace
of khXi ( khY i.
We have seen that FunX,Y is a continuous and open bijection and hence it is a homeomorphism. □
The tensor product X ⊗ Y defined above is characterized by a standard universal
property: it classifies the hypocontinuous bilinear maps.
Proposition 25. Let Z be a finiteness space and let f : khXi × khY i → khZi be bilinear and hypocontinuous. There exists exactly one continuous linear map f˜ : khX ⊗ Y i → khZi such that f = f˜ ◦ τ , where τ : khXi × khY i → khX ⊗ Y i is the canonical bilinear map (x, y) 7→ x ⊗ y.
Proof. We define a matrix M ∈ k|X|×|Y |×|Z| by Ma,b,c = f (ea , eb )c and we show first
that supp(M ) ∈ F(X ⊗ Y ( Z).
So let u ∈ F(X), v ∈ F(Y ) and w0 ∈ F(Z)⊥ ; we must show that supp(M ) ∩ (u × v × w0 ) is finite. By hypocontinuity of f , let v0 ∈ F(Y )⊥ and u0 ∈ F(X)⊥ be such that f (V X (u0 ) × BY (v)) ⊆ V Z (w0 ) and f (BX (u) × V Y (v0 )) ⊆ V Z (w0 ). Let (a, b, c) ∈ supp(M ) ∩ (u × v × w0 ), so that f (ea , eb )c ≠ 0 and hence f (ea , eb ) ∉ V Z (w0 ). But we know that a ∈ u and b ∈ v, that is ea ∈ BX (u) and eb ∈ BY (v). It follows that ea ∉ V X (u0 ), that is a ∈ u0 , and similarly b ∈ v0 .
Since BX (u) and BY (v) are linearly bounded, so is f (BX (u) × BY (v)) by Proposition 17 and hence there exists w ∈ F(Z) such that f (BX (u) × BY (v)) ⊆ BZ (w), so that c ∈ w. Therefore
supp(M ) ∩ (u × v × w0 ) ⊆ (u ∩ u0 ) × (v ∩ v0 ) × (w ∩ w0 )
which is a finite set, and hence supp(M ) ∈ F(X ⊗ Y ( Z), so that f˜ = fun(M ) is a continuous linear map khX ⊗ Y i → khZi. For x ∈ khXi and y ∈ khY i we have f (x, y) = f˜(x ⊗ y) by continuity of f˜ .
Uniqueness of the continuous linear map f˜ results from the fact that necessarily f˜(ea ⊗ eb ) = f (ea , eb ). □
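The factorization f = f˜ ◦ τ can be illustrated on finitely supported vectors. The dict encodings and the sample coefficient family M are demo assumptions of ours, not constructions of the paper.

```python
def tensor(x, y):
    # tau(x, y) = x (x) y, a vector on the web |X| x |Y|
    return {(a, b): xa * yb
            for a, xa in x.items() for b, yb in y.items() if xa * yb != 0}

def f_tilde(M, t):
    # the linear map on khX (x) Yi induced by the coefficients M[(a, b, c)]
    z = {}
    for (a, b, c), coef in M.items():
        if (a, b) in t:
            z[c] = z.get(c, 0) + coef * t[(a, b)]
    return {c: v for c, v in z.items() if v != 0}

def f(M, x, y):
    # the bilinear map computed directly: (sum_{a,b} M[a,b,c] x_a y_b)_c
    z = {}
    for (a, b, c), coef in M.items():
        v = coef * x.get(a, 0) * y.get(b, 0)
        if v:
            z[c] = z.get(c, 0) + v
    return {c: v for c, v in z.items() if v != 0}

M = {(0, 0, 'c'): 1, (0, 1, 'c'): 2}
x, y = {0: 3}, {0: 1, 1: 1}
assert f(M, x, y) == f_tilde(M, tensor(x, y))  # f = f~ o tau on this example
print(f(M, x, y))  # {'c': 9}
```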
Then one proves easily that the category Fink equipped with this tensor product (whose neutral object is 1, which obviously satisfies kh1i = k) is ∗-autonomous, the object of morphisms from X to Y being X ( Y and the dualizing object being ⊥ = 1 (indeed, the finiteness spaces X ( ⊥ and X ⊥ are obviously strongly isomorphic).
This category is preadditive in the sense of Section 2.4 since homsets Fink (X, Y )
have an obvious structure of k-vector space which is compatible with all the categorical
operations introduced so far.
Countable products and coproducts are available as well. Let (Xi )i∈I be a countable family of finiteness spaces. The finiteness space X = &i∈I Xi is given by |X| = ⋃i∈I {i} × |Xi | and F(X) = {w ⊆ |X| | ∀i ∈ I wi ∈ F(Xi )} where wi = {a ∈ |Xi | | (i, a) ∈ w}. It is easy to check that
F(X)⊥ = {w0 ⊆ |X| | ∀i ∈ I (w0 )i ∈ F(Xi )⊥ and (w0 )i = ∅ for almost all i}
and it follows that F(X)⊥⊥ = F(X). It is clear that kh&i∈I Xi i = ∏i∈I khXi i up to a straightforward strong isomorphism and that &i∈I Xi , together with the projections πj : kh&i∈I Xi i → khXj i defined in the obvious way, is the cartesian product of the Xi 's.
Thanks to ∗-autonomy, the coproduct of the Xi 's is given by ⊕i∈I Xi = (&i∈I Xi⊥ )⊥ and kh⊕i∈I Xi i ⊆ ∏i∈I khXi i is the space of all families (xi )i∈I of vectors such that xi = 0 for almost all i ∈ I. Of course, the canonical linear topology on kh&i∈I Xi i is the product topology, but the canonical topology on kh⊕i∈I Xi i is much finer: it is generated by all products ∏i∈I Vi where Vi is a linear neighborhood of 0 in khXi i.
For finite families of objects, products and coproducts coincide.
Let X be a finiteness space. We define !X by |!X| = Mfin (|X|) and
F(!X) = {A ⊆ |!X| | ⋃m∈A supp(m) ∈ F(X)}
and it can be proved that indeed F(!X) = F(!X)⊥⊥ (again, see (TV10) for more general
results of this kind).
Given x ∈ khXi and m ∈ |!X|, we set
x^m = ∏a∈|X| xa^{m(a)} ∈ k
(a finite product, since supp(m) is finite).
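The monomial x^m can be computed directly. Encoding the multiset m as a Counter and the vector x as a dict over Q is a demo assumption of ours.

```python
from collections import Counter
from fractions import Fraction

def monomial(x, m):
    # x^m = prod_a x_a^{m(a)}, a finite product since m is a finite multiset
    r = Fraction(1)
    for a, k in m.items():
        r *= Fraction(x.get(a, 0)) ** k
    return r

x = {0: Fraction(1, 2), 1: 3}
m = Counter({0: 2, 1: 1})   # the multiset [0, 0, 1]
print(monomial(x, m))       # (1/2)^2 * 3 = 3/4
```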
Let M ∈ kh!X ( Y i; it is not hard to see that one defines a map Fun(M ) : khXi → khY i by setting
Fun(M )(x) = (∑m∈|!X| Mm,b x^m )b∈|Y | ;
all these sums are indeed finite, see (Ehr05) for the details. When the field k is infinite,
the map M 7→ Fun(M ) is injective.
In (Ehr05), it is also proven that ! is a functor. Given M ∈ khX ( Y i one defines !M ∈ kh!X ( !Y i by setting, for m ∈ |!X| and p ∈ |!Y |,
(!M )m,p = ∑r∈L(m,p) [r] M^r
where
L(m, p) = {r ∈ Mfin (|X| × |Y |) | ∀a ∈ |X| ∑b∈|Y | r(a, b) = m(a) and ∀b ∈ |Y | ∑a∈|X| r(a, b) = p(b)}
We check that (wX )∗,m = δm,[] and that (cX )(p,q),m = (p+q choose p) δm,p+q where
(m choose p) = ∏a∈|X| m(a)! / (p(a)! (m(a) − p(a))!) ∈ N+
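This multiset binomial coefficient factors into ordinary binomial coefficients, one per point of the web, as in the following sketch (the Counter encoding of multisets is our demo assumption).

```python
from collections import Counter
from math import comb, prod

def multiset_binom(m, p):
    # (m choose p) = prod_a m(a)! / (p(a)! (m(a) - p(a))!); zero unless p <= m
    if any(p[a] > m[a] for a in p):
        return 0
    return prod(comb(m[a], p[a]) for a in m)

m = Counter({0: 3, 1: 1})    # the multiset [0, 0, 0, 1]
p = Counter({0: 1})          # the multiset [0]
print(multiset_binom(m, p))  # C(3,1) * C(1,0) = 3
```

In the contraction matrix above this coefficient is used with m = p + q, where it is always a positive integer.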
4.5.4. An intrinsic presentation of function spaces. We have seen that a morphism from
X to Y of the linear category Fink can be seen both as an element of khX ( Y i and
as a continuous linear function from khXi to khY i.
A morphism from X to Y in the Kleisli category Fink! is an element of kh!X ( Y i. Given M ∈ kh!X ( Y i, we have seen that we can define a function Fun(M ) : khXi → khY i by
Fun(M )(x) = M x! = (∑m∈|!X| Mm,b x^m )b∈|Y | .
Theorem 26. Assume that k is infinite. For any finiteness spaces X and Y , the ltvs
kh!X ( Y i is linearly homeomorphic to Anak (khXi, khY i).
Footnote 13: A completion of an ltvs E is a pair (Ẽ, h) where Ẽ is a complete ltvs and h : E → Ẽ is a linear and continuous map such that, for any complete ltvs F and any linear continuous map f : E → F , there is a unique linear and continuous map f˜ : Ẽ → F such that f˜ ◦ h = f . Using standard techniques, one can prove that any ltvs admits a completion, which is unique up to unique isomorphism.
Remember that, using contraction and dereliction, we have defined in Section 3 the
morphism dnX ∈ kh!X ( X ⊗ · · · ⊗ Xi. Then we have N = M dnX ∈ kh!X ( Y i, and it
is easy to see that
Fun(N )(x) = h(x, . . . , x) .
In that way, we see that any polynomial map from khXi to khY i is an element of
kh!X ( Y i; we have an inclusion Polk (khXi, khY i) ⊆ kh!X ( Y i. Actually, the exponential structure of Fink is Taylor and this notion of polynomial map coincides with the general notion of Section 3.1.
Conversely, let (m, b) ∈ |!X ( Y | with m = [a1 , . . . , an ]. The map f : khXi^n → k defined by f (x(1), . . . , x(n)) = x(1)a1 · · · x(n)an is multilinear and hypocontinuous. Hence the map x 7→ f (x, . . . , x) eb from khXi to khY i is a polynomial map. Therefore we have k^(|!X(Y |) ⊆ Polk (khXi, khY i) (given a set I, remember that k^(I) is the k-vector space generated by I, that is, the space of all families (ai )i∈I of elements of k such that ai = 0 for almost all i).
Hence Polk (khXi, khY i) is a dense subspace of kh!X ( Y i. To show that kh!X ( Y i
is the completion of Polk (khXi, khY i) it suffices to show that the above defined linear
topology on that space (uniform convergence on all linearly bounded subspaces) is the
restriction of the topology of kh!X ( Y i.
Let B ⊆ khXi be a linearly bounded subspace and let V ⊆ khY i be linear open.
Let v 0 ∈ F(Y )⊥ be such that V Y (v 0 ) ⊆ V . By Proposition 21, supp(B) ∈ F(X), so
Mfin (supp(B)) ∈ F(!X). Let
M ∈ V !X(Y (Mfin (supp(B)) × v 0 ) ⊆ kh!X ( Y i ,
then Fun(M )(x)b = 0 for each x ∈ B and b ∈ v0 . So we have
V !X(Y (Mfin (supp(B)) × v 0 ) ∩ Polk (khXi, khY i) ⊆ Ann(B, V ) .
Conversely let U ∈ F(!X) and v0 ∈ F(Y )⊥ ; then we have u = ⋃m∈U supp(m) ∈ F(X) and hence the subspace B ⊆ khXi of all vectors which vanish outside u is linearly
bounded. Let M ∈ kh!X ( Y i be such that the map Fun(M ) : khXi → khY i is polynomial and belongs to Ann(B, V Y (v0 )). Then for any m = [a1 , . . . , an ] ∈ Mfin (u) and b ∈ v0 we have Mm,b = 0, because this scalar is the coefficient of the monomial ξ1^{m(a1 )} · · · ξn^{m(an )} in the polynomial P ∈ k[ξ1 , . . . , ξn ] such that P (z1 , . . . , zn ) = Fun(M )(x)b where x ∈ khXi is such that xa = zi if a = ai and xa = 0 if a ∉ supp(m), and P = 0 because Fun(M )(B) ⊆ V Y (v0 ) by assumption (we also use the fact that k is infinite). Hence M ∈ V !X(Y (U × v0 ) and we have shown that
Ann(B, V Y (v0 )) ⊆ V !X(Y (U × v0 ) ∩ Polk (khXi, khY i) ,
showing that this latter set is a neighborhood of 0 in the space of polynomials. □
The Taylor formula proved in (Ehr05) for the morphisms of this Kleisli category shows that actually any morphism is the sum of a converging series whose n-th term is a homogeneous polynomial of degree n.
As an example, take E = k[ξ] ≅ kh!1 ( 1i. The corresponding topology on E is the discrete topology.
4.5.5. Antiderivatives. Just as that of Rel, the Fink exponential structure is Taylor in the sense of Section 3.1. Moreover, if k is of characteristic 0 (meaning that ∀n ∈ N n · 1 = 0 ⇒ n = 0) it has antiderivatives in the sense of Section 3.2, because the morphism JX = Id!X + (∂̄X ◦ ∂X ) : !X → !X satisfies (JX )p,q = (#p + 1) δp,q for all p, q ∈ |!X| and hence is an isomorphism whose inverse IX is given by (IX )p,q = (1/(#p + 1)) δp,q .
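On kh!1 ( 1i ≅ k[ξ], where #p is the degree of a monomial, JX and IX act on coefficients by multiplying and dividing by #p + 1. A minimal sketch over Q (a characteristic-0 field, as the condition requires; the coefficient-list encoding is our demo assumption):

```python
from fractions import Fraction

def J(coeffs):
    # J scales the coefficient of degree n (= #p) by n + 1
    return [c * (n + 1) for n, c in enumerate(coeffs)]

def I(coeffs):
    # inverse of J, available in characteristic 0: divide by #p + 1
    return [c / Fraction(n + 1) for n, c in enumerate(coeffs)]

f = [Fraction(1), Fraction(2), Fraction(3)]  # 1 + 2x + 3x^2
assert J(I(f)) == f and I(J(f)) == f
print(I(f))  # [Fraction(1, 1), Fraction(1, 1), Fraction(1, 1)]
```

The division by #p + 1 is exactly the scaling that appears in the usual antiderivative of a power series, which is why characteristic 0 is needed for IX to exist.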
Acknowledgment
Part of the work reported in this article has been supported by the French-Chinese project
ANR-11-IS02-0002 and NSFC 61161130530 Locali.
References
Samson Abramsky. Computational interpretations of linear logic. Theoretical Computer Science,
111:3–57, 1993.
Antonio Bucciarelli, Alberto Carraro, Thomas Ehrhard, and Giulio Manzonetto. Full Abstrac-
tion for Resource Calculus with Tests. In Marc Bezem, editor, CSL, volume 12 of LIPIcs.
Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2011.
Gérard Boudol, Pierre-Louis Curien, and Carolina Lavatelli. A semantics for lambda calculi
with resource. Mathematical Structures in Computer Science, 9(4):437–482, 1999.
Richard Blute, Robin Cockett, and Robert Seely. Differential categories. Mathematical Struc-
tures in Computer Science, 16(6):1049–1083, 2006.
Richard Blute, Thomas Ehrhard, and Christine Tasson. A convenient differential category.
Cahiers de Topologie et Géométrie Différentielle Catégoriques, 53, 2012.
Gavin Bierman. What is a categorical model of intuitionistic linear logic? In Mariangiola Dezani-
Ciancaglini and Gordon D. Plotkin, editors, Proceedings of the second Typed Lambda-Calculi
and Applications conference, volume 902 of Lecture Notes in Computer Science, pages 73–93.
Springer-Verlag, 1995.
Pierre-Louis Curien, editor. Typed Lambda Calculi and Applications, 9th International Confer-
ence, TLCA 2009, Brasilia, Brazil, July 1-3, 2009. Proceedings, volume 5608 of Lecture Notes
in Computer Science. Springer, 2009.
N.G. De Bruijn. Generalizing Automath by means of a lambda-typed lambda calculus. In D.W.
Kueker, E.G.K. Lopez-Escobar, and C.H. Smith, editors, Mathematical Logic and Theoretical
Computer Science, Lecture Notes in Pure and Applied Mathematics, pages 71–92. Marcel
Dekker, 1987. Reprinted in: Selected papers on Automath, Studies in Logic, volume 133,
pages 313-337, North-Holland, 1994.
Vincent Danos and Laurent Regnier. Reversible, irreversible and optimal lambda-machines.
Theoretical Computer Science, 227(1-2):273–291, 1999.
Thomas Ehrhard. On Köthe sequence spaces and linear logic. Mathematical Structures in
Computer Science, 12:579–623, 2002.
Thomas Ehrhard. Finiteness spaces. Mathematical Structures in Computer Science, 15(4):615–
646, 2005.
Thomas Ehrhard. A finiteness structure on resource terms. In LICS, pages 402–410. IEEE
Computer Society, 2010.
Thomas Ehrhard. The Scott model of Linear Logic is the extensional collapse of its relational
model. Theoretical Computer Science, 424:20–45, 2012.
Thomas Ehrhard. A new correctness criterion for MLL proof nets. In Thomas A. Henzinger
and Dale Miller, editors, Joint Meeting of the Twenty-Third EACSL Annual Conference on
Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on
Logic in Computer Science (LICS), CSL-LICS ’14, Vienna, Austria, July 14 - 18, 2014,
page 38. ACM, 2014.
Thomas Ehrhard and Laurent Regnier. The differential lambda-calculus. Theoretical Computer
Science, 309(1-3):1–41, 2003.
Thomas Ehrhard and Laurent Regnier. Böhm trees, Krivine machine and the Taylor expansion
of ordinary lambda-terms. In Arnold Beckmann, Ulrich Berger, Benedikt Löwe, and John V.
Tucker, editors, Logical Approaches to Computational Barriers, volume 3988 of Lecture Notes
in Computer Science, pages 186–197. Springer-Verlag, 2006. Long version available on http://www.pps.univ-paris-diderot.fr/~ehrhard/.
Thomas Ehrhard and Laurent Regnier. Uniformity and the Taylor expansion of ordinary lambda-
terms. Theoretical Computer Science, 403(2-3):347–372, 2008.
Marcelo P. Fiore. Differential structure in models of multiplicative biadditive intuitionistic
linear logic. In Simona Ronchi Della Rocca, editor, TLCA, volume 4583 of Lecture Notes in
Computer Science, pages 163–177. Springer, 2007.
Maribel Fernández and Ian Mackie. A Calculus for Interaction Nets. In Gopalan Nadathur,
editor, PPDP, volume 1702 of Lecture Notes in Computer Science, pages 170–187. Springer-
Verlag, 1999.
Stéphane Gimenez. Realizability proof for normalization of full differential linear logic. In
C.-H. Luke Ong, editor, TLCA, volume 6690 of Lecture Notes in Computer Science, pages
107–122. Springer-Verlag, 2011.
Jean-Yves Girard. The system F of variable types, fifteen years later. Theoretical Computer
Science, 45:159–192, 1986.
Jean-Yves Girard. Linear logic. Theoretical Computer Science, 50:1–102, 1987.
Jean-Yves Girard. Normal functors, power series and the λ-calculus. Annals of Pure and Applied
Logic, 37:129–177, 1988.
Michael Huth. Linear Domains and Linear Maps. In Stephen D. Brookes, Michael G. Main,
Austin Melton, Michael W. Mislove, and David A. Schmidt, editors, MFPS, volume 802 of
Lecture Notes in Computer Science, pages 438–453. Springer-Verlag, 1993.
Jean-Louis Krivine. Un interpréteur du lambda-calcul. Unpublished note, 1985.
Jean-Louis Krivine. A call-by-name lambda-calculus machine. Higher-Order and Symbolic Com-
putation, 20(3):199–207, 2007.
Saunders Mac Lane. Categories for the Working Mathematician, volume 5 of Graduate Texts in
Mathematics. Springer-Verlag, 1971.
Paul-André Melliès. Categorical semantics of linear logic. Panoramas et Synthèses, 27, 2009.
Ian Mackie and Shinya Sato. A Calculus for Interaction Nets Based on the Linear Chemical
Abstract Machine. Electronic Notes in Theoretical Computer Science, 192(3):59–70, 2008.
Michele Pagani. The cut-elimination theorem for differential nets with promotion. In Curien
(Cur09), pages 219–233.
Michele Pagani and Paolo Tranquilli. Parallel Reduction in Resource Lambda-Calculus. In
Zhenjiang Hu, editor, APLAS, volume 5904 of Lecture Notes in Computer Science, pages
226–242. Springer, 2009.
Michele Pagani and Paolo Tranquilli. The Conservation Theorem for Differential Nets. Mathe-
matical Structures in Computer Science, 2011. To appear.
Christian Retoré. Handsome proof-nets: perfect matchings and cographs. Theoretical Computer
Science, 294(3):473–488, 2003.
Christine Tasson. Algebraic totality, towards completeness. In Curien (Cur09), pages 325–340.
Christine Tasson. Sémantiques et syntaxes vectorielles de la logique linéaire. Thèse de doctorat,
Université Paris Diderot – Paris 7, 2009.
Paolo Tranquilli. Confluence of pure differential nets with promotion. In Erich Grädel and
Reinhard Kahle, editors, CSL, volume 5771 of Lecture Notes in Computer Science, pages
500–514. Springer-Verlag, 2009.
Christine Tasson and Lionel Vaux. Transport of finiteness structures and applications. Mathe-
matical Structures in Computer Science, 2010. To appear.
Lionel Vaux. The differential lambda-mu calculus. Theoretical Computer Science, 379(1-2):166–
209, 2005.
Lionel Vaux. The algebraic lambda-calculus. Mathematical Structures in Computer Science,
19(5):1029–1059, 2009.
Glynn Winskel. Linearity and non linearity in distributed computation. In Thomas Ehrhard,
Jean-Yves Girard, Paul Ruet, and Philip Scott, editors, Linear Logic in Computer Science, vol-
ume 316 of London Mathematical Society Lecture Notes Series. Cambridge University Press,
2004.