The Essence of Event-Driven Programming
Abstract
Event-driven programming is based on a natural abstraction: an event is a computation that can eventually return a value. This paper exploits the intuition relating events and time by drawing a Curry-Howard correspondence between a functional event-driven programming language and a linear-time temporal logic. In this logic, the eventually proposition ♦A describes the type of events, and Girard's linear logic describes the effectful and concurrent nature of the programs. The correspondence reveals many interesting insights into the nature of event-driven programming, including a generalization of selective choice for synchronizing events, and an implementation in terms of callbacks where ♦A is just ¬□¬A.
1 Introduction
Event-driven programming is a popular approach to functional concurrency in which events,
also known as “futures,” “deferred values,” or “lightweight threads,” execute concurrently
with each other and eventually produce a result. The abstraction of the event has been
extremely successful in producing lightweight, extensible, and efficient concurrent programs.
As a result, a wide range of programming languages and libraries use the event-driven
paradigm to describe everything from message-passing concurrency [24] and lightweight
threads [26] to graphical user interfaces [11] and I/O [13, 21, 25].
Although these systems vary considerably in the details of their implementations and APIs, they share a common, basic structure. They provide: (1) a monadic abstraction of events, (2) an operator for synchronizing events, (3) an implementation in terms of callbacks and an underlying event loop, and (4) primitive sources of events.
In this paper we distill event-driven programming to its essence, demonstrating how to derive these components from first principles. Starting from a logical basis and building up the minimal machinery needed to explain the four points above, we proceed as follows:
Pure events. In Section 2 we present a language of pure events, whose monadic type ♦A of events comes from temporal logic, and whose linear typing discipline characterizes the fact that events are effectful and execute concurrently. This linear and monadic logic serves as the basic scaffolding for event-driven computations.
Synchronization refers to the ability to execute two events concurrently and record which one happens first. In Section 3 we extend the type system of pure events to include a synchronization operator choose in the style of Concurrent ML [24]. We observe that, logically, choose corresponds to the linear-time axiom of temporal logic:

choose : ♦A ⊗ ♦B ⊸ ♦((A ⊗ ♦B) ⊕ (♦A ⊗ B))
Callbacks, continuations, and the event loop. In the event-driven paradigm, events are implemented using callbacks that interact with an underlying event loop. For the language of pure events, we define in Section 4 a time-aware continuation-passing style (CPS) translation based on the property of temporal logic that ♦A is equivalent to ¬□¬A, where negation ¬ is the type of first-class continuations, and □ is the "always" operator from temporal logic.
In addition to the logic of pure events, the implementation should take into account the
extralogical sources of concurrency that interact with the event loop. Throughout this paper
we use a range of these concurrency primitives, including nondeterministic events, timeouts,
user input, and channels, and argue that the choice of primitive is orthogonal to the logical
structure of events.
In Section 5 we extend the CPS translation to account for these axiomatic sources of
concurrency. We model the event loop in the answer type of the CPS translation [7] and
show how to instantiate the answer type for a concrete choice of event primitives.
The essence of events. In this paper we argue that the logical interpretation of events is
a unifying idea behind the vast array of real-world event-driven languages and libraries. We
complete the story in Section 6 by comparing techniques used in practice with the approaches
developed in this paper based on the essence of event-driven programming.
¹ Other presentations of temporal logic include next (◦), always (□), and until (U) operators, and do not necessarily assume that just because a proposition is true now, it will always be true.
J. Paykin, N. R. Krishnaswami, and S. Zdancewic
The “eventually” modality ♦A is defined by two rules. The first says that if a proposition
is true now, it is also true later. The second rule says that if A is true later, and if A now
proves that some B is true later, then B itself is also true later. Through the Curry-Howard
correspondence, these proofs correspond to typing rules for return and bind, respectively.
∆ ⊢ e : A
──────────────────
∆ ⊢ return e : ♦A

∆1 ⊢ e1 : ♦A    ∆2, x : A ⊢ e2 : ♦B
────────────────────────────────────
∆1, ∆2 ⊢ bind x = e1 in e2 : ♦B
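The return/bind discipline is the familiar promise interface from mainstream event-driven code. As a rough analogue (not the paper's linear language), here is a JavaScript sketch in which Promise.resolve plays the role of return and then plays the role of bind:

```javascript
// return : A -> ♦A, approximated by a resolved promise.
const ret = (a) => Promise.resolve(a);

// bind x = e1 in e2, approximated by sequencing with .then.
const bind = (e1, f) => e1.then(f);

// An event that eventually yields 2, extended to eventually yield 3.
const ev = bind(ret(2), (x) => ret(x + 1));
ev.then((v) => console.log(v)); // prints 3
```

The analogy is loose: promises are neither linear nor pure, but the monadic structure is the same.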
² Other event-driven languages solve this problem in different ways without using linear types. Implementations in strict functional languages like Async [21] ensure that every event normalizes immediately to an asynchronous primitive, which somewhat defeats the purpose of a strict evaluation order. In CML [24], all events evaluate strictly and synchronously unless wrapped in a thunk called guard.
Γ; · ⊢ e : A
───────────────────── ⌈−⌉-I
Γ ⊢ suspend e : ⌈A⌉

Γ ⊢ t : ⌈A⌉
───────────────────── ⌈−⌉-E
Γ; · ⊢ force t : A

Γ ⊢ t : τ
───────────────────── ⌊−⌋-I
Γ; · ⊢ ⌊t⌋ : ⌊τ⌋

Γ; ∆1 ⊢ e1 : ⌊τ⌋    Γ, x : τ; ∆2 ⊢ e2 : B
────────────────────────────────────────── ⌊−⌋-E
Γ; ∆1, ∆2 ⊢ let ⌊x⌋ = e1 in e2 : B
Figure 2 Typing rules for moving between the linear and unrestricted fragments.
on any linear assumptions. On the other hand, a persistent type τ can always be treated as a linear type, written ⌊τ⌋. Figure 1 shows the syntax of types, (non-linear) terms, and (linear) expressions.
The typing rules for terms and expressions are mostly standard, but Figure 2 shows how to move between the linear and non-linear fragments via the typing rules for ⌈A⌉ and ⌊τ⌋. A linear expression e can be suspended to a persistent term suspend e, and can be unsuspended using force. In the other direction, a non-linear term t can be used linearly by wrapping it in the floor operator ⌊t⌋, and unpacked with the pattern-matching let ⌊x⌋ = e1 in e2.
The remaining typing rules are shown in Appendix A.
³ These rules make sense operationally, but not necessarily as part of the Curry-Howard correspondence, because as an equational theory they relate return(in1(e1, return e2)) and return(in2(return e1, e2)), which are certainly not equal. This stems from the fact that choose is not a pure logical axiom; it relates multiple connectives in a complicated way and is hence neither an introduction nor an elimination rule.
τ ::= · · · | Chan A

new : 1 ⊸ ⌊Chan A⌋
send : ⌊Chan A⌋ ⊸ A ⊸ ♦1
receive : ⌊Chan A⌋ ⊸ ♦A
spawn : ♦1 ⊸ 1
choose (eA, eB) =
  let ⌊win⌋ : Chan (1 ⊕ 1) = new () in
  let ⌊cA⌋ : Chan A = new () in
  let ⌊cB⌋ : Chan B = new () in
  spawn (bind a = eA in
         spawn (send ⌊win⌋ (in1 ()));
         send ⌊cA⌋ a);
  spawn (bind b = eB in
         spawn (send ⌊win⌋ (in2 ()));
         send ⌊cB⌋ b);
  bind z = receive ⌊win⌋ in
  case z of
  | in1 () -> bind a = receive ⌊cA⌋ in return (in1 (a, receive ⌊cB⌋))
  | in2 () -> bind b = receive ⌊cB⌋ in return (in2 (receive ⌊cA⌋, b))
  end
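In promise-based JavaScript the same synchronization pattern can be sketched with Promise.race, tagging the winner and returning the still-pending loser alongside it (a non-linear analogue, with our own tag names):

```javascript
// choose : ♦A ⊗ ♦B ⊸ ♦((A ⊗ ♦B) ⊕ (♦A ⊗ B))
// Race both events; the winner's value is paired with the still-pending loser.
const choose = (eA, eB) =>
  Promise.race([
    eA.then((a) => ({ tag: "in1", val: [a, eB] })),
    eB.then((b) => ({ tag: "in2", val: [eA, b] })),
  ]);

const sleep = (ms, v) => new Promise((res) => setTimeout(() => res(v), ms));

choose(sleep(10, "fast"), sleep(50, "slow")).then((r) => {
  console.log(r.tag);    // "in1": the first event won
  console.log(r.val[0]); // "fast"
});
```

Note that the result type is exactly the one-hole-context shape discussed below: the winner's value together with the other, still-future, event.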
chooseList : List(♦A) ⊸ ♦(List(♦A) ⊗ A ⊗ List(♦A))
where the input list is partitioned into the prefix and suffix of the first event to return a value. Unfortunately it is not possible to derive chooseList, or even a version over triples of events, from the binary version of choose. Like choose itself, we need to implement
chooseList using channels or some other concurrency primitive. By itself this solution is
ad-hoc and unsatisfactory.
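For intuition, a list version is easy to write directly in promise-based JavaScript, even though it is not derivable from the binary choose in the linear language. This sketch tags each event with its index, races them, and splits the input around the winner:

```javascript
// chooseList: race index-tagged events; the winning index i splits the
// input list into a prefix and suffix of still-pending events.
const chooseList = (events) =>
  Promise.race(events.map((e, i) => e.then((v) => [i, v]))).then(([i, v]) => ({
    prefix: events.slice(0, i),
    value: v,
    suffix: events.slice(i + 1),
  }));

const sleep = (ms, v) => new Promise((res) => setTimeout(() => res(v), ms));

chooseList([sleep(30, "x"), sleep(5, "y"), sleep(40, "z")]).then((r) => {
  console.log(r.value, r.prefix.length, r.suffix.length); // y 1 1
});
```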
In the remainder of this section we describe a way to build up synchronization operators
on arbitrary finite containers of events by induction on the type of the container. We do
this by exploiting a uniform pattern on the structure of these primitives, inspired by Conor McBride's 2001 observation that the derivative of a regular type is the type of its one-hole contexts [19]. We explore a variation on his idea and show that the derivative of a type with respect to time is the type of its synchronization operator.

∂♦ ♦A = A
∂♦ 1 = 0
∂♦ 0 = 0
∂♦ (A ⊗ B) = (∂♦ A ⊗ B) ⊕ (A ⊗ ∂♦ B)
∂♦ (A ⊕ B) = ∂♦ A ⊕ ∂♦ B
∂♦ (A ⊸ B) = 0
∂♦ ⌊τ⌋ = 0

Figure 5 The derivative ∂♦ A of a type with respect to time.
Derivatives with respect to time. We define the instant of an event to be the time at
which it returns a value. An event itself can be thought of as a context with a hole for time
that is filled in by its instant. For example, the event return n consists of the context [ ]n,
where the hole [ ] is filled in by its instant “now.”
The semantics of synchronization say that the instant of choose(e1 , e2 ) is either the
instant of e1 or the instant of e2 , depending on which occurs first. The context containing
that hole has one of two shapes. If e1 returns a value n before e2 does, then the context
will have the form ([ ]n, e2), of type A ⊗ ♦B. If e2 returns a value first, the context will
have the type ♦A ⊗ B. Thus the return type of choose, (A ⊗ ♦B) ⊕ (♦A ⊗ B), describes the
possible shapes of its context with a hole for time.
McBride's partial derivative operation, written ∂X A, records the possible shapes of a one-hole context of A with a hole for the type X.⁴ This intuition extends from finite
containers to recursive data types like lists. For example, the one-hole contexts of the type
List A consist of a one-hole context of A, along with the prefix and suffix lists surrounding
the element with the hole. That is, the derivative of List A is List A ⊗ ∂X A ⊗ List A, which
is reminiscent of the return type of chooseList.
In Figure 5 we define the syntactic operation ∂♦ A on types that describes the derivative with respect to time.⁵ The derivative of an event type ♦A is A itself, leaving the time at
which the event occurred as the hole.
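The defining equations of Figure 5 can be transcribed directly as a recursion over type syntax trees. A sketch in JavaScript (the tag names and constructors here are our own encoding, not the paper's):

```javascript
// Constructors for a small fragment of the type syntax.
const dia = (arg) => ({ tag: "dia", arg });             // ♦A
const tensor = (left, right) => ({ tag: "tensor", left, right }); // A ⊗ B
const oplus = (left, right) => ({ tag: "oplus", left, right });   // A ⊕ B
const zero = { tag: "zero" };                            // 0

// ∂♦, following the equations of Figure 5.
const deriv = (ty) => {
  switch (ty.tag) {
    case "dia":    return ty.arg;                        // ∂♦ ♦A = A
    case "tensor": return oplus(tensor(deriv(ty.left), ty.right),
                                tensor(ty.left, deriv(ty.right))); // product rule
    case "oplus":  return oplus(deriv(ty.left), deriv(ty.right));  // sum rule
    default:       return zero;  // 1, 0, A ⊸ B, ⌊τ⌋ all have derivative 0
  }
};

// ∂♦(♦A ⊗ ♦B) = (A ⊗ ♦B) ⊕ (♦A ⊗ B), the return type of choose.
const A = { tag: "base", name: "A" }, B = { tag: "base", name: "B" };
console.log(JSON.stringify(deriv(tensor(dia(A), dia(B)))));
```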
The general choose operator decomposes a type into two parts: its instant (designated
by the ♦ prefix) at which synchronization will occur, and a context with a hole for time.
This gives us a pattern for synchronizing events across arbitrary finite containers. McBride’s
treatment of recursive types provides a way to extend synchronization to arbitrary recursive
containers such as lists, but we leave the details to future work.
⁴ The syntax is inspired by the fact that this operation obeys the product and sum rules from calculus. For example, if we write X² for X × X then ∂X X² ≅ 2 × X = X + X.
⁵ The one-hole context interpretation of derivatives does not extend to higher-order types, so we make the simplification that the derivative of all higher-order types is 0.
The above lemma is enough to prove preservation in the extended system. The side condition on chooseA that ∂♦ A ≇ 0 ensures that progress also holds by ruling out ill-formed terms like choose(λx.e).
What does the type of a callback have to do with temporal logic? The callback being registered in the event loop will not be invoked immediately, but at some point in the future, once the user has pressed a key. The type of the callback itself should then reflect the fact that it will be available in the future. We write □A to denote the fact that A will be true at every point in the future, and so the type of onKeyPress should be written □¬Key ⊸ Answer, that is, ¬□¬Key.
This last step follows from the isomorphism ¬□¬A ≅ ♦A, so we conclude: the act of registering a callback that reacts to a key press is the same as an event that eventually returns the key that was pressed.
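The same reading appears in everyday callback code. In this hypothetical JavaScript sketch, onKeyPress and fireKey are stand-ins for a real input API; registering a callback (the □¬Key side) is exactly what builds the event (the ♦Key side):

```javascript
// A toy callback-registration interface standing in for a real input API.
const listeners = [];
const onKeyPress = (cb) => listeners.push(cb);        // register a callback: □¬Key
const fireKey = (key) => listeners.forEach((cb) => cb(key));

// ♦Key as ¬□¬Key: supplying the callback yields the event.
const keyEvent = new Promise((resolve) => onKeyPress(resolve));

keyEvent.then((k) => console.log("pressed:", k));
fireKey("x"); // simulate the user pressing a key
```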
Γ; ∆, x : A ⊢ e : Answer
───────────────────────── ¬-I
Γ; ∆ ⊢ λx.e : ¬A

Γ; ∆1 ⊢ e1 : ¬A    Γ; ∆2 ⊢ e2 : A
─────────────────────────────────── ¬-E
Γ; ∆1, ∆2 ⊢ e1 e2 : Answer

Γ; □∆ ⊢ e : A
─────────────────── □-I
Γ; □∆ ⊢ box e : □A

Γ; ∆ ⊢ e : □A
─────────────────── □-E
Γ; ∆ ⊢ unbox e : A

Figure 6 Typing rules for continuations ¬A and the "always" modality □A.
The behavior of □A can be described using two simple rules. First, if A is provable using only hypotheses that are always true, then □A is true. On the other hand, a proposition □A that is always true is also true now.
To describe continuations, we add the negation type ¬A and remove all other linear arrows. Because only linear expressions need to undergo a CPS translation, the non-linear arrow types are unchanged. We denote the linear propositions of the resulting adjoint tensor logic [20] as A, and the non-linear propositions as τ.

τ ::= Unit | Void | τ × τ | τ + τ | τ → τ | ⌈A⌉
A ::= 1 | 0 | A ⊗ A | A ⊕ A | ¬A | □A | ⌊τ⌋
The syntax of terms and expressions is almost identical to that of the event-based language. The negation type ¬A is introduced with a λ-abstraction and is eliminated with an application. The monadic bind and return operators are replaced by the comonadic box and unbox operators, as shown in Figure 6. We assume a call-by-value operational semantics for both terms and expressions.
Throughout this paper we have used a range of primitive sources of concurrency: nondeterminism in the event nondet e, timeouts of the form onTimeout e, synchronous channels that create events send and receive, and the GUI operation onKeyPress.
The following section sketches a way to implement these primitive sources of concurrency
as part of the CPS translation. The trick is to integrate the concurrent actions of these
primitives with the answer type of the continuation, as described by Claessen [7] in the poor
man’s concurrency monad.
Actions and the answer type. For the pure fragment of the eventually monad (consisting
only of return and bind), the answer type of the continuation is invisible. What we write
as ¬A is in fact A ( Answer for some fixed type Answer. Claessen observed that this makes
the answer type of the continuation the perfect place to hide the presence of effects.
We call these effects actions following Claessen. An action can be thought of as a
distinct thread of computation [18], the primitive thread operations being Halt and Fork.
These threads are also stateful, operating over an event queue monad, which we write
EventQueue A. The Atom action can execute an arbitrary monadic operation over the event
queue. Using Haskell-like notation for the algebraic datatype of actions, we have:
data Action =
| Halt : Action
| Fork : Action -o Action -o Action
| Atom : EventQueue Action -o Action
Since actions represent threads of computation, they are executed by a scheduler of type
List Action ( EventQueue Action that schedules a list of actions inside the event queue
monad. For example, the following is a simple round robin scheduler:
eventLoop [] = return ()
eventLoop (Halt :: ls) = eventLoop ls
eventLoop (Fork a1 a2 :: ls) = eventLoop (ls ++ [a1,a2])
eventLoop (Atom mA :: ls) = bind a = mA in eventLoop (ls++[a])
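A direct, non-linear transcription of this scheduler runs in JavaScript. Here Atom is simplified to an arbitrary thunk returning the next action (the event-queue state effects are elided):

```javascript
// Actions in the style of Claessen's poor man's concurrency monad.
const Halt = { tag: "halt" };
const Fork = (a1, a2) => ({ tag: "fork", a1, a2 });
const Atom = (step) => ({ tag: "atom", step }); // step: () => Action

// Round-robin scheduler: Halt drops a thread, Fork enqueues two,
// Atom performs one step and requeues the continuation.
const eventLoop = (queue) => {
  while (queue.length > 0) {
    const a = queue.shift();
    if (a.tag === "fork") queue.push(a.a1, a.a2);
    else if (a.tag === "atom") queue.push(a.step());
    // halt: nothing to requeue
  }
};

// Two threads interleave their steps fairly.
const log = [];
const thread = (name, n) =>
  n === 0 ? Halt : Atom(() => { log.push(name + n); return thread(name, n - 1); });
eventLoop([Fork(thread("a", 2), thread("b", 2))]);
console.log(log.join(" ")); // a2 b2 a1 b1
```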
Events and actions interact via a top-level run operator that converts a unit-valued event to an action. Its type is ¬⟦♦1⟧, or ⟦♦1⟧ ⊸ Action.
Spawn. Using actions we can encode the event-level concurrency primitives that have been used throughout this paper. For example, spawn, of type ♦1 ⊸ 1, is implemented as a member of its CPS-converted type ⟦♦1 ⊸ 1⟧ = ¬(⟦♦1⟧ ⊗ ¬⟦1⟧) that takes in an event (of type ⟦♦1⟧) and a continuation (of type ¬⟦1⟧) and produces a Fork action that runs the event and calls the continuation in parallel. The definition is found in Appendix D, and we can check that run⟦return (spawn e)⟧ evaluates to Fork (box ⟦e⟧) Halt.
Linear Channels. We conclude this section with a sketch of how to implement synchronous
message-passing in the style of CML, which we used to encode choose in Section 3. Other
sources of concurrency can be implemented in a similar way.
In order to represent channels, the event queue underlying the action type should be
stateful over a linear heterogeneous store. Linear references are indexed by some non-linear
identifiers, which we write Id A.
A channel is a reference to a linear cell that consists of either: (a) a list of messages to
be sent, along with the event handlers to be triggered after rendezvous, or (b) a list of event
handlers waiting for messages. The types of these two possible elements are written SendElt
and RecvElt, respectively.
When a message of type A is sent over the channel, we examine the current state of the cell. If the cell contains a list of messages to be sent, the new message will be added to the end of the list. If the cell contains any callbacks waiting for messages, the first callback will be applied to the incoming message and stored as an action. This behavior is governed by the function attachSend of type ⌊Chan A⌋ ⊸ SendElt A ⊸ EventQueue Action.
attachSend c (a, k0) = updateId c (fun s =>
  case s of
  | in1 ls      -> (in1 (ls ++ [(a, k0)]), Halt)
  | in2 []      -> (in1 [(a, k0)], Halt)
  | in2 (k::ks) -> (in2 ks, Fork ((unbox k) a) ((unbox k0) [()]))
  end)
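For intuition, the same rendezvous protocol can be sketched with promises in JavaScript. Here newChan, send, and receive are our own simplified stand-ins, without the linear store: a channel cell holds either a queue of pending sends or a queue of pending receivers, mirroring SendElt and RecvElt:

```javascript
// A channel is either a queue of pending sends or of pending receivers.
const newChan = () => ({ sends: [], recvs: [] });

// send: rendezvous with a waiting receiver, or queue the message along
// with the sender's continuation (the ♦1 completion).
const send = (c, msg) =>
  new Promise((done) => {
    if (c.recvs.length > 0) { c.recvs.shift()(msg); done(); }
    else c.sends.push({ msg, done });
  });

// receive: take a queued message (waking its sender), or wait.
const receive = (c) =>
  new Promise((resolve) => {
    if (c.sends.length > 0) { const s = c.sends.shift(); resolve(s.msg); s.done(); }
    else c.recvs.push(resolve);
  });

const c = newChan();
receive(c).then((m) => console.log("got", m));
send(c, 41);
```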
The event-level send operator has the type ⌊Chan A⌋ ⊸ A ⊸ ♦1, so its implementation has type ⟦⌊Chan A⌋ ⊸ A ⊸ ♦1⟧. Then ⟦send⟧ is a continuation that takes in a channel, a message of type ⟦A⟧, and a continuation of type ⟦♦1⟧, and produces an Atom performing the monadic computation attachSend.
The interpretation of receive is governed by a similar protocol attachReceive of type ⌊Chan A⌋ ⊸ RecvElt A ⊸ EventQueue Action, the details of which are given in Appendix D.
6 Discussion
In this paper, a linear and temporal logic is the guiding principle in the design of a core
language for events. But the connection between logic and programming is only significant
if it has a basis in existing event-driven languages, which vary in the ways they embody
the event-driven paradigm. To understand these variations and the design decisions of this
paper, we revisit the four features of event-driven programming discussed in the introduction:
the monadic abstraction of the event, the synchronization operator, the implementation in
terms of callbacks, and the primitive sources of events.
Layers of abstraction for the monadic event. The language of pure events presented in
Section 2 has two parts: a type of events (♦A) and a monadic interface for interacting with
them. The monadic structure is explicit in many existing languages, including CML’s ’a
event type [24], Async’s ’a Deferred.t [21], Lwt’s type for light-weight threads [26], and
Scala's Future[A] [13].⁶ Other languages don't have an explicit event type, but require
programmers to work directly in CPS, including Python’s Twisted library [16], JavaScript’s
Node.js [25], or Ruby’s EventMachine [6]. Still others have the event abstraction but not in
a monadic style, like Racket’s synchronizable events [1] or Go’s goroutines [2].
⁶ Although all of these abstractions are monads, their interfaces are not standard. CML has a return operator alwaysEvt but in lieu of bind it has a functorial wrap along with a sync operator of type 'a event -> 'a that synchronously executes an event. Async and Lwt both use the standard return and bind terminology, but Async has an impure peek operation that polls whether or not an event has completed, and Lwt has the ability to explicitly put threads to sleep and wake them up again.
Synchronization. Selective choice as described in Section 3 is less universal than the monadic bind, but has proved useful in CML, Async, Lwt, and Racket, where the default choice operator acts on lists, not pairs, and has type List(♦A) → ♦A.⁷ In this paper, by considering a linear choose operator over pairs instead of lists, we are able to draw a connection with the linear-time axiom of temporal logic, and abstract away from the type of pairs to derive a synchronization operator not only for lists, but for any container data structure.
Where do events come from? Despite the similarities between event-driven languages,
programming in one versus another may feel very different depending on the intended ap-
plication domain. CML feels most natural for describing message-passing concurrency due to
its built-in channels and spawn operator. Async describes shared-state concurrency, where
its Ivar data structure is a one-shot kind of shared state. Promises in Scala are more focused
on long-running computations like I/O. However, these different concurrency abstractions can be implemented in terms of one another, as in eXene's implementation of GUIs in CML [11], or Scala's async libraries [3]. We argue that the choice of concurrency primitive is orthogonal to the design of events themselves, and that the techniques presented in this paper are applicable to a wide range of primitives.
What about FRP? The event-driven paradigm described in this paper is closely connected
to functional reactive programming (FRP), which targets many of the same domains. In
the FRP model, the input to a program is modeled as a time-varying value, or a stream.
FRP programs can be thought of as stream transformers, or as programs that react to the
current state of the system. Recently, FRP’s connection with linear-time temporal logic [15]
was discovered, which in fact prompted us to search for similar connections to event-based
programming.
In typical FRP systems, the type □A denotes the type of time-varying values, as opposed to our interpretation of □A as an expression that is available now or at any time in the future. FRP programs model events ♦A coinductively as να. A ∨ ◦α, where ◦ is the "next" modality. Unfortunately, this forces an implementation based on polling, which means programs continuously check whether an event has resolved yet.⁹ In the event-driven interpretation, the type of events ♦A is interpreted as a continuation ¬□¬A, and the structure of the event loop avoids polling.
⁷ In CML, choose aborts the events that are not chosen by means of its negative acknowledgment mechanism, but Panangaden and Reppy [22] show that this feature is encodable.
⁸ CML and Lwt both use synchronous implementation strategies. Notice that the question of synchronous versus asynchronous events is orthogonal to the choice of synchronous versus asynchronous channels.
⁹ Modern FRP languages work hard to avoid these time and space leaks, either by restricting the expressivity of programs [8] or by mixing ideas from event-driven programming with FRP [9].
Conclusion The Curry-Howard correspondence reveals many interesting insights into the
nature of event-driven programs. Synchronization via selective choice can be thought of as
the linear-time axiom from temporal logic, and can be generalized to arbitrary container data
structures. The standard implementation using callbacks can be explained in a temporal
way by interpreting ♦A as ¬□¬A, and primitive sources of concurrency are implemented
using a clever choice of answer type. The result is a top-to-bottom formulation of the essence
of events: computations that eventually return a value.
References
1 Events (Racket documentation). Website. URL: docs.racket-lang.org/reference/sync.html.
2 The Go programming language. Website. URL: www.golang.org/.
3 scala-async. GitHub repository. URL: github.com/scala/async.
4 P.N. Benton. A mixed linear and non-linear logic: Proofs, terms and models. In Computer Science
Logic. 1995.
5 Luís Caires and Frank Pfenning. Session types as intuitionistic linear propositions. In CONCUR.
2010.
6 Francis Cianfrocca. About EventMachine. Website. URL: www.rubydoc.info/gems/eventmachine.
7 Koen Claessen. A poor man’s concurrency monad. Journal of Functional Programming, 9:313–323,
1999.
8 Antony Courtney and Conal Elliott. Genuinely functional user interfaces. In Haskell Workshop, 2001.
9 Evan Czaplicki and Stephen Chong. Asynchronous functional reactive programming for GUIs. In
PLDI, 2013.
10 Olivier Danvy and Andrzej Filinski. Representing control: a study of the CPS transformation. Mathematical Structures in Computer Science, 2:361–391, 1992.
11 Emden R. Gansner and John H. Reppy. A multi-threaded higher-order user interface toolkit. In User
Interface Software, volume 1 of Software Trends. 1993.
12 Jean-Yves Girard. Linear logic. Theoretical Computer Science, 50(1):1–101, 1987.
13 Philipp Haller, Aleksandar Prokopec, Heather Miller, Viktor Klang, Roland Kuhn, and Vojin Jovanovic. Futures and Promises (Scala documentation). Website, 2013. URL: http://docs.scala-lang.org/overviews/core/futures.
14 Dana Harrington. Uniqueness logic. Theoretical Computer Science, 354(1):24–41, 2006. Algebraic Methods in Language Processing.
15 Alan Jeffrey. LTL types FRP: Linear-time temporal logic propositions as types, proofs as functional
reactive programs. In PLPV, 2012.
16 Ken Kinder. Event-driven programming with Twisted and Python. Linux Journal, March 2005.
17 Neel Krishnaswami and Nick Benton. A semantic model for graphical user interfaces. In ICFP, 2011.
18 Peng Li and Steve Zdancewic. Combining events and threads for scalable network services: Implementation and evaluation of monadic, application-level concurrency primitives. In PLDI, 2007.
19 Conor McBride. The derivative of a regular type is its type of one-hole contexts. 2001.
20 Paul-André Melliès and Nicolas Tabareau. Resource modalities in tensor logic. Annals of Pure and
Applied Logic, 161(5):632–653, 2010.
21 Yaron Minsky, Anil Madhavapeddy, and Jason Hickey. Real World OCaml. O’Reilly Media, 2013.
22 Prakash Panangaden and John Reppy. ML with Concurrency: Design, Analysis, Implementation,
and Application, chapter The Essence of Concurrent ML, pages 5–29. 1997.
23 Frank Pfenning and Dennis Griffith. Polarized substructural session types. In FoSSaCS, 2015.
24 John H. Reppy. Concurrent Programming in ML. Cambridge University Press, 1999.
25 S. Tilkov and S. Vinoski. Node.js: Using JavaScript to build high-performance network programs.
IEEE Internet Computing, 14(6):80–83, Nov 2010.
26 Jérôme Vouillon. Lwt: A cooperative thread library. In ML Workshop, 2008.
27 Philip Wadler. Propositions as sessions. ICFP, 2012.
A Event-based language
─────────────────── var
Γ, x : τ ⊢ x : τ

─────────────────── Unit-I
Γ ⊢ ( ) : Unit

Γ ⊢ t : Void
──────────────────── Void-E
Γ ⊢ case t of () : σ

Γ ⊢ t1 : τ1    Γ ⊢ t2 : τ2
─────────────────────────── ×-I
Γ ⊢ (t1, t2) : τ1 × τ2

Γ ⊢ t : τ1 × τ2
──────────────── ×-E
Γ ⊢ πi t : τi

Γ ⊢ t : τi
───────────────────── +-I
Γ ⊢ ini t : τ1 + τ2

Γ ⊢ t : τ1 + τ2    Γ, x1 : τ1 ⊢ t1 : σ    Γ, x2 : τ2 ⊢ t2 : σ
────────────────────────────────────────────────────────────── +-E
Γ ⊢ case t of (in1 x1 → t1 | in2 x2 → t2) : σ

Γ, x : τ ⊢ t : σ
──────────────────── →-I
Γ ⊢ λx.t : τ → σ

Γ ⊢ t1 : τ → σ    Γ ⊢ t2 : τ
───────────────────────────── →-E
Γ ⊢ t1 t2 : σ

──────────────── var
Γ; x : A ⊢ x : A

Γ; ∆ ⊢ e : 0
──────────────────────── 0-E
Γ; ∆ ⊢ case e of () : B

─────────────── 1-I
Γ; · ⊢ ( ) : 1

Γ; ∆1 ⊢ e1 : 1    Γ; ∆2 ⊢ e2 : B
─────────────────────────────────── 1-E
Γ; ∆1, ∆2 ⊢ let () = e1 in e2 : B

Γ; ∆1 ⊢ e1 : A1    Γ; ∆2 ⊢ e2 : A2
──────────────────────────────────── ⊗-I
Γ; ∆1, ∆2 ⊢ (e1, e2) : A1 ⊗ A2

Γ; ∆1 ⊢ e1 : A1 ⊗ A2    Γ; ∆2, x1 : A1, x2 : A2 ⊢ e2 : B
───────────────────────────────────────────────────────── ⊗-E
Γ; ∆1, ∆2 ⊢ let (x1, x2) = e1 in e2 : B

Γ; ∆ ⊢ e : Ai
──────────────────────── ⊕-I
Γ; ∆ ⊢ ini e : A1 ⊕ A2

Γ; ∆1 ⊢ e : A1 ⊕ A2    Γ; ∆2, x1 : A1 ⊢ e1 : B    Γ; ∆2, x2 : A2 ⊢ e2 : B
────────────────────────────────────────────────────────────────────────── ⊕-E
Γ; ∆1, ∆2 ⊢ case e of (in1 x1 → e1 | in2 x2 → e2) : B

Γ; ∆, x : A ⊢ e : B
──────────────────────── ⊸-I
Γ; ∆ ⊢ λx.e : A ⊸ B

Γ; ∆1 ⊢ e1 : A ⊸ B    Γ; ∆2 ⊢ e2 : A
────────────────────────────────────── ⊸-E
Γ; ∆1, ∆2 ⊢ e1 e2 : B

Γ; ∆ ⊢ e : A
───────────────────── ♦-I
Γ; ∆ ⊢ return e : ♦A

Γ; ∆1 ⊢ e : ♦A    Γ; ∆2, x : A ⊢ e′ : ♦B
───────────────────────────────────────── ♦-E
Γ; ∆1, ∆2 ⊢ bind x = e in e′ : ♦B

Γ; · ⊢ e : A
───────────────────── ⌈−⌉-I
Γ ⊢ suspend e : ⌈A⌉

Γ ⊢ t : ⌈A⌉
───────────────────── ⌈−⌉-E
Γ; · ⊢ force t : A

Γ ⊢ t : τ
───────────────────── ⌊−⌋-I
Γ; · ⊢ ⌊t⌋ : ⌊τ⌋

Γ; ∆1 ⊢ e1 : ⌊τ⌋    Γ, x : τ; ∆2 ⊢ e2 : B
────────────────────────────────────────── ⌊−⌋-E
Γ; ∆1, ∆2 ⊢ let ⌊x⌋ = e1 in e2 : B
B Operational Semantics
The normal forms of terms and expressions, respectively, are denoted v and n.

E^P ::= ([ ], t) | (v, [ ]) | πi [ ]
      | ini [ ] | case [ ] of (in1 x1 → t1 | in2 x2 → t2)
      | [ ] t | v [ ] | force [ ] | ⌊[ ]⌋

t ⟶ t′
──────────────────
E^P⟦t⟧ ⟶ E^P⟦t′⟧

E^L ::= let () = [ ] in e | let (x1, x2) = [ ] in e
      | case [ ] of (in1 x1 → e1 | in2 x2 → e2)
      | [ ] e | bind x = [ ] in e | let ⌊x⌋ = [ ] in e

e ⟶ e′
──────────────────
E^L⟦e⟧ ⟶ E^L⟦e′⟧
C CPS translation
t ::= x | ( ) | case t of ()
    | (t1, t2) | πi t
    | ini t | case t of (in1 x1 → t1 | in2 x2 → t2)
    | λx.t | t1 t2 | suspend e

e ::= x | ( ) | let () = e1 in e2 | case e of ()
    | (e1, e2) | let (x1, x2) = e1 in e2
    | ini e | case e of (in1 x1 → e1 | in2 x2 → e2)
    | λx.e | e1 e2
    | box e | unbox e
    | force t | ⌊t⌋ | let ⌊x⌋ = t in e
⟦Unit⟧ = Unit
⟦Void⟧ = Void
⟦τ1 × τ2⟧ = ⟦τ1⟧ × ⟦τ2⟧
⟦τ1 + τ2⟧ = ⟦τ1⟧ + ⟦τ2⟧
⟦τ1 → τ2⟧ = ⟦τ1⟧ → ⟦τ2⟧
⟦⌈A⌉⟧ = ⌈⟦A⟧⌉

⟦1⟧ = ¬¬1
⟦0⟧ = ¬¬0
⟦A1 ⊗ A2⟧ = ¬¬(⟦A1⟧ ⊗ ⟦A2⟧)
⟦A1 ⊕ A2⟧ = ¬¬(⟦A1⟧ ⊕ ⟦A2⟧)
⟦A1 ⊸ A2⟧ = ¬(⟦A1⟧ ⊗ ¬⟦A2⟧)
⟦♦A⟧ = ¬□¬□⟦A⟧
⟦⌊τ⌋⟧ = ¬¬⌊⟦τ⟧⌋
⟦x⟧ = unbox x
⟦λx.e⟧ = λ(x, k). k ⟦e⟧
⟦e1 e2⟧ = λk. ⟦e1⟧ (box ⟦e2⟧, λz. z k)
⟦return e⟧ = λk. (unbox k) (box ⟦e⟧)
⟦bind x = e1 in e2⟧ = λk. ⟦e1⟧ (box (λx. ⟦e2⟧ k))
Sketch of Theorem 5. Following Danvy and Filinski [10], we consider a one-pass CPS translation in which the administrative redexes (those introduced only by the CPS translation) are treated as meta-operations on terms. These meta-operations are written with an overline. The administrative redexes are introduced uniformly in the CPS translation of terms, with the exception of the λ-abstraction rule, which requires an extra η-expansion.
[x] = unbox x
[λx.e] = λ(y, z). (λx. z @ [e]) y
[e1 e2] = λk. [e1] @ (box [e2], λz. z @ k)
[return e] = λk. (unbox k) (box [e])
[bind x = e1 in e2] = λk. [e1] @ (box (λx. [e2] @ k))
With this one-pass CPS translation we can prove the theorem directly.
spawn : ♦1 ⊸ 1
⟦spawn⟧ : ⟦♦1 ⊸ 1⟧ = ¬(⟦♦1⟧ ⊗ ¬⟦1⟧)
⟦spawn⟧ = λ(x, k). Fork (run (unbox x)) (k ⟦( )⟧)

new : 1 ⊸ ⌊Chan A⌋
⟦new⟧ : ¬(⟦1⟧ ⊗ ¬⟦⌊Chan A⌋⟧)
⟦new⟧ = λ(u : ¬¬1, k : ¬⌊Chan ⟦A⟧⌋). (unbox u) (λ(). Atom (bind i = newId (in1 []) in k i))

send : ⌊Chan A⌋ ⊸ A ⊸ ♦1
⟦send⟧ : ¬(⟦⌊Chan A⌋⟧ ⊗ ⟦A⟧ ⊗ ¬⟦♦1⟧)
⟦send⟧ = λ(c : ¬¬⌊Chan ⟦A⟧⌋, a : □⟦A⟧, k : ⟦♦1⟧).
           (unbox c) (λi. k (λ(k0 : ¬□⟦1⟧). Atom (attachSend i (a, k0))))

receive : ⌊Chan A⌋ ⊸ ♦A
⟦receive⟧ : ¬(⟦⌊Chan A⌋⟧ ⊗ ¬⟦♦A⟧)
⟦receive⟧ = λ(c : ¬¬⌊Chan ⟦A⟧⌋, k : ⟦♦A⟧).
              (unbox c) (λi. k (λ(k0 : ¬□⟦A⟧). Atom (attachReceive i k0)))