
The Essence of Event-Driven Programming

Jennifer Paykin¹, Neelakantan R. Krishnaswami², and Steve Zdancewic³

1 University of Pennsylvania, Philadelphia, USA
  [email protected]
2 University of Birmingham, Birmingham, United Kingdom
  [email protected]
3 University of Pennsylvania, Philadelphia, USA
  [email protected]

Abstract
Event-driven programming is based on a natural abstraction: an event is a computation that can eventually return a value. This paper exploits the intuition relating events and time by drawing a Curry-Howard correspondence between a functional event-driven programming language and a linear-time temporal logic. In this logic, the eventually proposition ♦A describes the type of events, and Girard’s linear logic describes the effectful and concurrent nature of the programs. The correspondence reveals many interesting insights into the nature of event-driven programming, including a generalization of selective choice for synchronizing events, and an implementation in terms of callbacks where ♦A is just ¬□¬A.

Digital Object Identifier 10.4230/LIPIcs...

1 Introduction
Event-driven programming is a popular approach to functional concurrency in which events,
also known as “futures,” “deferred values,” or “lightweight threads,” execute concurrently
with each other and eventually produce a result. The abstraction of the event has been
extremely successful in producing lightweight, extensible, and efficient concurrent programs.
As a result, a wide range of programming languages and libraries use the event-driven
paradigm to describe everything from message-passing concurrency [24] and lightweight
threads [26] to graphical user interfaces [11] and I/O [13, 21, 25].
Although these systems vary considerably in the details of their implementations and
APIs, they share a common, basic structure. They provide:

the abstraction of the event, often given explicitly as a monad;
the ability to synchronize across events;
a continuation-passing style implementation in terms of callbacks; and
a source or sources of primitive events.

In this paper we distill event-driven programming to its essence, demonstrating how to derive
these components from first principles. Starting from a logical basis and building up the
minimal machinery needed to explain the four points above, we proceed as follows:

The logic of events. In Section 2 we identify a Curry-Howard correspondence between events that eventually return a value and the ♦ (“eventually”) modality from temporal logic.
We define a core language of pure events where the monad from the event-driven abstraction
is identified as ♦. We formulate the type system in the setting of Girard’s linear logic to

characterize the fact that events are effectful and execute concurrently. This linear and
monadic logic serves as the basic scaffolding for event-driven computations.
Synchronization refers to the ability to execute two events concurrently and record which
one happens first. In Section 3 we extend the type system of pure events to include a
synchronization operator choose in the style of Concurrent ML [24]. We observe that,
logically, choose corresponds to the linear-time axiom of temporal logic:

♦A ∧ ♦B → ♦((A ∧ ♦B) ∨ (♦A ∧ B)) “eventually, A happens before B or B happens before A”

This logical interpretation of choose suggests a natural generalization in terms of McBride’s derivatives of types [19], which we develop into a new technique for synchronizing events
across arbitrary container data structures.

Callbacks, continuations, and the event loop. In the event-driven paradigm, events are
implemented using callbacks that interact with an underlying event loop. For the language of
pure events, we define in Section 4 a time-aware continuation-passing style (CPS) translation
based on the property of temporal logic that ♦A is equivalent to ¬□¬A, where negation ¬ is the type of first-class continuations, and □ is the “always” operator from temporal logic.
In addition to the logic of pure events, the implementation should take into account the
extralogical sources of concurrency that interact with the event loop. Throughout this paper
we use a range of these concurrency primitives, including nondeterministic events, timeouts,
user input, and channels, and argue that the choice of primitive is orthogonal to the logical
structure of events.
In Section 5 we extend the CPS translation to account for these axiomatic sources of
concurrency. We model the event loop in the answer type of the CPS translation [7] and
show how to instantiate the answer type for a concrete choice of event primitives.

The essence of events. In this paper we argue that the logical interpretation of events is
a unifying idea behind the vast array of real-world event-driven languages and libraries. We
complete the story in Section 6 by comparing techniques used in practice with the approaches
developed in this paper based on the essence of event-driven programming.

2 The Curry-Howard Connection


The important abstraction of event-driven programming is based on the relationship between
events and time: an event is a computation that can eventually return a value. This abstraction takes the form of a monad, which means that there is a simple interface for interacting
with events:
return e is a trivial event that immediately returns the value e.
bind x = e1 in e2 is an event that first waits for e1 to return a value, binds that value to
x, and then continues as the event e2 .
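As a rough illustration of this interface outside the paper’s calculus, the same two operations can be sketched in Haskell on top of the async library, with Async standing in for the event type (the names ret and bind below are illustrative, not part of any existing API):

import Control.Concurrent.Async (Async, async, wait)

-- A sketch of the two-operation event interface, with Async as the event type.
-- `ret` builds a trivial, already-resolved event; `bind` waits for the first
-- event and then continues as the second.
ret :: a -> IO (Async a)
ret x = async (pure x)

bind :: Async a -> (a -> IO (Async b)) -> IO (Async b)
bind e1 k = async $ do
  x  <- wait e1   -- wait for e1 to return a value, bind it to x
  e2 <- k x       -- continue as the event e2
  wait e2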
In this paper we go even further by identifying the particular monad for events with the
“eventually” operator from temporal logic. Consider a simple intuitionistic logic for time in
which a proposition is either true now or true later. A proposition that is true now is also
true at every point in the future, but a proposition that is true later may not necessarily be
true now.1 A proposition that is true later is denoted ♦A, and pronounced “eventually A.”

¹ Other presentations of temporal logic include next (◦), always (□), and until (U) operators, and do not necessarily assume that just because a proposition is true now, it will always be true.

The “eventually” modality ♦A is defined by two rules. The first says that if a proposition
is true now, it is also true later. The second rule says that if A is true later, and if A now
proves that some B is true later, then B itself is also true later. Through the Curry-Howard
correspondence, these proofs correspond to typing rules for return and bind, respectively.

  ∆ ⊢ e : A
  ───────────────────
  ∆ ⊢ return e : ♦A

  ∆1 ⊢ e1 : ♦A    ∆2, x : A ⊢ e2 : ♦B
  ─────────────────────────────────────
  ∆1, ∆2 ⊢ bind x = e1 in e2 : ♦B

2.1 A logic for effects: linear logic


Consider the event nondet e that returns e at some nondeterministic point in the future.
The program foo = (let x = nondet e in in1 (x, x)) has state in1 (undefined, undefined) for
some nondeterministic amount of time before simultaneously stepping to in1 (return e, return e).
In particular, foo is not equivalent to the substituted form in1 (nondet e, nondet e), which
at some point in the future may have its first component be return e and its second component be undefined. On the other hand, the computation should not be blocking; the expression case foo of (in1 y → True | in2 z → False), which tests whether foo is a left or a right injection, should resolve immediately to True.
Events like foo are inherently effectful, because the state of an event changes as computation progresses. In order to describe events in a purely logical way, the Curry-Howard
correspondence has to take these effectful relationships into account.
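For intuition, the same distinction can be observed in an ordinary concurrent setting. The following Haskell sketch (an analogy using the async and random libraries, not the calculus of this paper) contrasts sharing one nondeterministic event with duplicating the computation that produces it:

import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (Async, async)
import System.Random (randomRIO)

-- A "nondeterministic event": resolves to x after a random delay.
nondet :: a -> IO (Async a)
nondet x = async (do d <- randomRIO (0, 100000); threadDelay d; pure x)

-- Sharing one event: both components resolve at the same instant.
shared :: IO (Async Int, Async Int)
shared = do e <- nondet 42; pure (e, e)

-- Substituting the event for each occurrence yields two independent events
-- that may resolve at different times -- not equivalent to `shared`.
duplicated :: IO (Async Int, Async Int)
duplicated = do e1 <- nondet 42; e2 <- nondet 42; pure (e1, e2)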
Linear logic [12] is one approach for typing effectful programs that has been successful for
concurrency in the settings of session types [5, 27] and uniqueness types [14]. Linear-use (or
“one shot”) data structures show up in many event-driven languages, for example in the form
of Ivars in Async [21], Futures in Scala [13], and widgets in GUI programming [17]. Linear
types are so useful in concurrent programming because they disallow aliasing, meaning
that there is no shared state between processes. By using linear variables we can define
an equational operational semantics that restores the Curry-Howard correspondence with
logic.2

2.2 A logic without effects: the unrestricted fragment


The linear and temporal type system will account for effects and events, but event-driven
programs also include ordinary computations that don’t require the event or linear-space abstractions. For these we should be able to write terms in a regular programming language—in
this case, the (non-linear) simply-typed λ-calculus.
Benton et al. [4] introduced a way to combine linear and non-linear logics that has since
been called adjoint logic [23] after the categorical structure. In an adjoint logic, we write
non-linear types τ to distinguish them from linear types A, and non-linear typing contexts
Γ to distinguish them from linear typing contexts ∆. The non-linear typing judgment has
the form Γ ⊢ t : τ, while the linear one has the form Γ; ∆ ⊢ e : A. That is, the linear typing
judgment allows unrestricted access to the non-linear variables in Γ, but only linear-space
access to the variables in ∆.
Adjoint logic also describes ways in which the linear and non-linear types relate to each
other. A linear type A can be embedded into a persistent type ⌈A⌉ when A does not rely

² Other event-driven languages solve this problem in different ways without using linear types. Implementations in strict functional languages like Async [21] ensure that every event normalizes immediately to an asynchronous primitive, which somewhat defeats the purpose of a strict evaluation order. In CML [24], all events evaluate strictly and synchronously unless wrapped in a thunk called guard.

τ ::= Unit | Void | τ × τ | τ + τ | τ → τ | ⌈A⌉
A ::= 1 | 0 | A ⊗ A | A ⊕ A | A ⊸ A | ♦A | ⌊τ⌋

t ::= x | ( ) | case t of ( ) | (t1, t2) | πi t | ini t | case t of (in1 x1 → t1 | in2 x2 → t2)
    | λx.t | t1 t2 | suspend e
e ::= x | ( ) | let ( ) = e1 in e2 | case e of ( )
    | (e1, e2) | let (x1, x2) = e1 in e2 | ini e | case e of (in1 x1 → e1 | in2 x2 → e2)
    | λx.e | e1 e2 | return e | bind x = e1 in e2 | force t | ⌊t⌋ | let ⌊x⌋ = e1 in e2

Figure 1 Syntax of linear and non-linear types, terms, and expressions.

  Γ; · ⊢ e : A                                Γ ⊢ t : ⌈A⌉
  ────────────────────── ⌈−⌉-I                ────────────────── ⌈−⌉-E
  Γ ⊢ suspend e : ⌈A⌉                         Γ; · ⊢ force t : A

  Γ ⊢ t : τ                                   Γ; ∆1 ⊢ e1 : ⌊τ⌋    Γ, x : τ; ∆2 ⊢ e2 : B
  ────────────────────── ⌊−⌋-I                ───────────────────────────────────────── ⌊−⌋-E
  Γ; · ⊢ ⌊t⌋ : ⌊τ⌋                            Γ; ∆1, ∆2 ⊢ let ⌊x⌋ = e1 in e2 : B

Figure 2 Typing rules for moving between the linear and unrestricted fragments.

on any linear assumptions. On the other hand, a persistent type τ can always be treated
as a linear type, written ⌊τ⌋. Figure 1 shows the syntax of types, (non-linear) terms, and
(linear) expressions.
The typing rules for terms and expressions are mostly standard, but Figure 2 shows how
to move between the linear and non-linear fragments via the typing rules for ⌈A⌉ and ⌊τ⌋. A linear expression e can be suspended to a persistent term suspend e, and can be
unsuspended using force. On the other hand, a non-linear term can be used linearly in this
type system by applying a floor operator, and unpacked in the same way.
The remaining typing rules are shown in Appendix A.

2.3 Completing the Curry-Howard connection


We have shown how the Curry-Howard correspondence relates the propositions of temporal
linear logic to types and the proofs of such propositions to event-driven programs. In the
remainder of this section we show how the operational semantics of event-based programs
relates to the equational theory of proofs.
Events use a call-by-name evaluation strategy, because events themselves are not expected to normalize right away. Consider the nondeterministic event nondet e introduced in
the beginning of this section. In the expression let x = nondet e in in1 x, the variable x
occurs linearly in the rest of the expression and so it is safe to substitute the unresolved
event nondet e for x.
On the other hand, terms have a call-by-value evaluation strategy, which is safe because
a suspended expression suspend e is a value in the term language.
A selection of the operational semantics for expressions is shown in Figure 3. Evaluation
occurs under contexts, which have the form E^P for contexts with holes for non-linear terms, and E^L for contexts with holes for linear expressions. A full description of the operational

  bind x = return e1 in e2  ↦  e2{e1/x}
  force (suspend e)  ↦  e
  let ⌊x⌋ = ⌊v⌋ in e  ↦  e{v/x}

  E^P ::= · · · | force [ ] | ⌊[ ]⌋
  E^L ::= · · · | bind x = [ ] in e | let ⌊x⌋ = [ ] in e

  t ↦ t′  ⟹  E^P[t] ↦ E^P[t′]
  e ↦ e′  ⟹  E^L[e] ↦ E^L[e′]

Figure 3 Operational semantics of terms and expressions.

semantics can be found in Appendix B.

▶ Theorem 1 (Preservation). If ⊢ t : τ and t ↦ t′ then ⊢ t′ : τ. If ⊢ e : A and e ↦ e′ then ⊢ e′ : A.

▶ Theorem 2 (Progress). If ⊢ t : τ then either t is a value or t can take a step. If ⊢ e : A then either e is in weak head normal form or e can take a step.

3 Synchronization and Linear Time


In the previous section we described a simple logic for pure events that is not particularly
expressive. Although it can express sequential events using return and bind, it cannot
express concurrent events running in parallel. As an example, consider a timeout event
onTimeout n that returns a unit value after the amount of time specified by n. With this
operator we can limit the execution of another event e by running onTimeout n and e in
parallel and keeping track of which one occurs first. We denote the operation that runs two
events in parallel as choose and give it the type ♦A ⊗ ♦B ⊸ ♦(A ⊗ ♦B ⊕ ♦A ⊗ B). The
choose operator is a kind of selective choice operator as seen in CML and Async.
timeout e |n| = bind z = choose (e, onTimeout |n|) in
                case z of
                | in1 (a, t)  -> let () = drop t in return (Some a)
                | in2 (e, ()) -> let () = drop e in return None
                end
Here drop, of type ♦A ⊸ 1, is an operation that explicitly aborts an event.
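For comparison, mainstream event libraries expose a similar pattern directly. The following Haskell sketch (using the async library, an assumption about the reader’s environment rather than this paper’s calculus; timeoutAfter is an illustrative name) plays the role of timeout, except that race cancels the losing computation instead of returning it the way choose does:

import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (race)

-- Run an action against a timer and keep whichever finishes first.
-- `race` is the analogue of choose here, but it aborts the loser for us,
-- so no explicit `drop` is needed.
timeoutAfter :: Int -> IO a -> IO (Maybe a)
timeoutAfter micros act = do
  r <- race act (threadDelay micros)
  pure (either Just (const Nothing) r)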
The type of choose can be interpreted as a property of temporal logic: if both A and
B will be true at some point in the future, then either A will come before B, or B will
come before A. This axiom distinguishes a linear-time temporal logic from a branching-time
temporal logic, in which two different futures could occur in different timelines.
The logical interpretation of choose reveals that it is somehow fundamental to the event-driven interpretation. In fact, we can think of choose as part of the operational semantics
of events.3 We extend the evaluation contexts so that the two events being synchronized on
can evaluate concurrently. Once one of the events resolves to a value, the synchronization
operator can take a β-reduction step.

  E^L ::= · · · | choose([ ], e) | choose(e, [ ])

  choose(return e1, e2)  ↦  return(in1 (e1, e2))
  choose(e1, return e2)  ↦  return(in2 (e1, e2))

³ These rules make sense operationally, but not necessarily as part of the Curry-Howard correspondence, because as an equational theory they relate return(in1 (e1, return e2)) and return(in2 (return e1, e2)), which are certainly not equal. This stems from the fact that choose is not a pure logical axiom; it relates multiple connectives in a complicated way and is hence neither an introduction nor an elimination rule.

τ ::= · · · | Chan A

new     : 1 ⊸ ⌊Chan A⌋
send    : A ⊸ ⌊Chan A⌋ ⊸ ♦1
receive : ⌊Chan A⌋ ⊸ ♦A
spawn   : ♦1 ⊸ 1

choose (eA, eB) =
  let |win| : Chan (1⊕1) = new () in
  let |cA|  : Chan A     = new () in
  let |cB|  : Chan B     = new () in
  spawn (bind a = eA in
         spawn (send |win| (in1 ()));
         send |cA| a);
  spawn (bind b = eB in
         spawn (send |win| (in2 ()));
         send |cB| b);
  bind z = receive |win| in
  case z of
  | in1 () -> bind a = receive |cA| in
              return (in1 (a, receive |cB|))
  | in2 () -> bind b = receive |cB| in
              return (in2 (receive |cA|, b))
  end

Figure 4 Signature of channels and the channel-based implementation of choose.

3.1 Implementing choose with channels


As we hinted above, the choose operator cannot be implemented in the language of pure
events. Logically this is because choose corresponds to the linear-time axiom from temporal
logic, so it is not derivable from the other operators. However, as we describe here, we
can implement choose in terms of primitive sources of concurrency: in this case, linear
synchronous channels and the spawn operator.
The spawn operator takes an event with return type 1 and executes it asynchronously.
A synchronous channel is a way to communicate linear information between processes. Although the data transmitted across channels is linear, the reference to the channel itself need
not be, so Chan A is added as a non-linear type τ to our type system. Following Reppy [24],
send and receive are synchronous in that they do not return a value until a send is matched
with a receive. The signature of linear message-passing is summarized in Figure 4.
The implementation of choose, also shown in Figure 4, uses three channels. The first,
called win, tracks which event of eA or eB occurs first. Based on that, choose will wait for
either eA or eB explicitly, through the intermediate channels cA and cB. Then choose will
return either in1 or in2 along with a reference to the intermediate event, the receive event
on the unresolved channel.
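A direct transcription of this three-channel protocol into Haskell gives a sense of how little machinery is needed; the sketch below uses MVars and forkIO as stand-ins for the paper’s linear synchronous channels and spawn (an approximation: MVar writes do not rendezvous with reads the way CML sends do):

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)

-- The protocol of Figure 4: `win` records which event finished first,
-- while `cA`/`cB` carry the results themselves.
choose :: IO a -> IO b -> IO (Either (a, MVar b) (MVar a, b))
choose eA eB = do
  win <- newEmptyMVar
  cA  <- newEmptyMVar
  cB  <- newEmptyMVar
  _ <- forkIO (do a <- eA; putMVar win (Left ());  putMVar cA a)
  _ <- forkIO (do b <- eB; putMVar win (Right ()); putMVar cB b)
  z <- takeMVar win
  case z of
    Left  () -> do a <- takeMVar cA; pure (Left (a, cB))
    Right () -> do b <- takeMVar cB; pure (Right (cA, b))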

3.2 Synchronizing more than pairs


For practical programming problems, we often need to synchronize more than two events at
a time. For example, the choose operators in CML and Async operate over lists instead of
pairs. In the linear type system of this paper, the type of such an operator would be

chooseList : List(♦A) ⊸ ♦(List(♦A) ⊗ A ⊗ List(♦A))

where the input list is partitioned into the prefix and suffix of the first event to return a
value. Unfortunately it is not possible to derive chooseList, or even a version over triples
of events, from the binary version of choose. Like choose itself we need to implement
chooseList using channels or some other concurrency primitive. By itself this solution is
ad-hoc and unsatisfactory.
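As a point of reference, the list version can be written directly against an existing event abstraction; here is a sketch over Haskell’s Async type, where waitAny plays the role of the missing primitive (an analogy, not a derivation inside the pure language):

import Control.Concurrent.Async (Async, waitAny)

-- Wait for the first of a (non-empty) list of events to resolve, returning
-- its value together with the still-pending prefix and suffix.
chooseList :: [Async a] -> IO ([Async a], a, [Async a])
chooseList evs = do
  (winner, val) <- waitAny evs
  let (prefix, rest) = break (== winner) evs
  pure (prefix, val, drop 1 rest)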
In the remainder of this section we describe a way to build up synchronization operators
on arbitrary finite containers of events by induction on the type of the container. We do
this by exploiting a uniform pattern on the structure of these primitives, inspired by Conor

∂♦ (♦A) = A
∂♦ 1 = 0        ∂♦ (A ⊗ B) = (∂♦ A ⊗ B) ⊕ (A ⊗ ∂♦ B)        ∂♦ (A ⊸ B) = 0
∂♦ 0 = 0        ∂♦ (A ⊕ B) = ∂♦ A ⊕ ∂♦ B                     ∂♦ ⌊τ⌋ = 0

E^L ::= · · · | choose E^♦
E^♦ ::= [ ] | ini E^♦ | (E^♦, e) | (e, E^♦)

choose E^♦[return e]  ↦  return(fill E^♦ e)

fill [ ] e = e
fill (ini E^♦) e = ini (fill E^♦ e)
fill (E^♦, e′) e = in1 (fill E^♦ e, e′)
fill (e′, E^♦) e = in2 (e′, fill E^♦ e)

Figure 5 Derivatives with respect to time.

McBride’s 2001 observation that the derivative of a regular type is the type of its one-hole
contexts [19]. We explore a variation on his idea and show that the derivative of a type with
respect to time is the type of its synchronization operator.

Derivatives with respect to time. We define the instant of an event to be the time at
which it returns a value. An event itself can be thought of as a context with a hole for time
that is filled in by its instant. For example, the event return n consists of the context [ ]n,
where the hole [ ] is filled in by its instant “now.”
The semantics of synchronization say that the instant of choose(e1 , e2 ) is either the
instant of e1 or the instant of e2 , depending on which occurs first. The context containing
that hole has one of two shapes. If e1 returns a value n before e2 does, then the context
will have the form ([ ]n, e2 ), of type A ⊗ ♦B. If e2 returns a value first, the context will
have the type ♦A ⊗ B. Thus the return type of choose, (A ⊗ ♦B) ⊕ (♦A ⊗ B), describes the
possible shapes of its context with a hole for time.
McBride’s partial derivative operation, written ∂X A, records the possible shapes of a
one-hole context of A with a hole for the type X .4 This intuition extends from finite
containers to recursive data types like lists. For example, the one-hole contexts of the type
List A consist of a one-hole context of A, along with the prefix and suffix lists surrounding
the element with the hole. That is, the derivative of List A is List A ⊗ ∂X A ⊗ List A, which
is reminiscent of the return type of chooseList.
In Figure 5 we define the syntactic operation ∂♦ A on types that describes the derivative
with respect to time.5 The derivative of an event type ♦A is A itself, leaving the time at
which the event occurred as the hole.
The general choose operator decomposes a type into two parts: its instant (designated
by the ♦ prefix) at which synchronization will occur, and a context with a hole for time.

chooseA : A ⊸ ♦(∂♦ A)        (when ∂♦ A is not degenerate, i.e. ∂♦ A ≇ 0)

This gives us a pattern for synchronizing events across arbitrary finite containers. McBride’s
treatment of recursive types provides a way to extend synchronization to arbitrary recursive
containers such as lists, but we leave the details to future work.
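To make the pattern concrete, here is a sketch of the instance for triples in Haskell: the result type is exactly the one-hole-context (derivative) shape, and nested race supplies the synchronization (assuming the async library; cancelling a wait does not cancel the underlying Async, so the losing events remain pending):

import Control.Concurrent.Async (Async, race, wait)

-- The derivative of a triple of events: one component has resolved,
-- the other two are still pending.
data DTriple a b c
  = First  a (Async b) (Async c)
  | Second (Async a) b (Async c)
  | Third  (Async a) (Async b) c

choose3 :: Async a -> Async b -> Async c -> IO (DTriple a b c)
choose3 ea eb ec = do
  r <- race (wait ea) (race (wait eb) (wait ec))
  pure (case r of
          Left a          -> First a eb ec
          Right (Left b)  -> Second ea b ec
          Right (Right c) -> Third ea eb c)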

⁴ The syntax is inspired by the fact that this operation obeys the product and sum rules from calculus. For example, if we write X² for X × X then ∂X X² ≅ 2 × X = X + X.
⁵ The one-hole context interpretation of derivatives does not extend to higher-order types, so we make the simplification that the derivative of all higher-order types is 0.

Operational semantics. The operational behavior of chooseA is also shown in Figure 5. The evaluation contexts E^L are extended with choose E^♦, where E^♦ is a special kind of context for expressions. We write Γ; ∆ ⊢ A :> E^♦ : B to mean that E^♦ has a hole for expressions of type A.
The β-rule applies when a component of any context returns a value. In this case the synchronization operator must produce an expression of type ∂♦ A. We define an operation fill E^♦ e that constructs the element of the derivative from the context and the component filling in the hole.

▶ Lemma 3. If Γ; ∆1 ⊢ ♦A :> E^♦ : B and Γ; ∆2 ⊢ e : A, then Γ; ∆1, ∆2 ⊢ fill E^♦ e : ∂♦ B.

The above lemma is enough to prove preservation in the extended system. The side
condition on chooseA that ∂♦ A ≇ 0 ensures that progress also holds by ruling out ill-formed terms like choose(λx.e).

4 Logic and Callbacks


The interpretation of events as the eventually type from temporal logic has led to some
interesting insights about their behavior, including the meaning of synchronization. In the
following sections we discuss how the standard implementation of events as callbacks can
also be given a logical interpretation.
Callbacks, which are also called continuations and event handlers, are functions that
accept some input but do not return a value. Rather, the return type of a callback is
invisible, so it can be represented as A → Answer for any type Answer. We could instead
write its type as ¬A, making the answer type opaque to the programmer.
Consider an implementation of a GUI library using callbacks and an event loop. The
library provides a way to register handlers in the event loop that get triggered once the user
performs some external action, such as a key press.

onKeyPress : (Char → Answer) → Answer or onKeyPress : ¬¬Char

What does the type of a callback have to do with temporal logic? The callback being
registered in the event loop will not be invoked immediately, but at some point in the
future, once the user has pressed a key. The type of the callback itself should then reflect
the fact that it will be available in the future. We write □A to denote the fact that A will
be true at every point in the future, and so the type of onKeyPress should be written

onKeyPress : □(Char → Answer) → Answer    or    onKeyPress : ¬□¬Char    or    onKeyPress : ♦Char

This last step follows from the isomorphism ¬□¬A ≅ ♦A, so we conclude: the act of
registering a callback that reacts to a key press is the same as an event that eventually
returns the key that was pressed.
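This reading is easy to act out in code; the sketch below fixes the answer type to IO () and erases the □ modality (Haskell has no temporal types), with onKeyPress standing in as a hypothetical GUI primitive:

-- An event is exactly a callback registrar: given a continuation for A,
-- it registers it and returns to the event loop (answer type IO ()).
newtype Event a = Event { runEvent :: (a -> IO ()) -> IO () }

-- Hypothetical library primitive that installs a key-press handler.
onKeyPress :: (Char -> IO ()) -> IO ()
onKeyPress _ = pure ()   -- stub; a real GUI library would register the handler

-- Registering a key-press callback *is* an event of the pressed key.
keyEvent :: Event Char
keyEvent = Event onKeyPress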

4.1 An alternate temporal logic


While the event-based perspective on programming focused on a temporal logic of the eventually type ♦A, the callback-based perspective focuses on the “always” or “global” type, written □A. Under this logic, a proposition that is true now (denoted Aⁿ) is not necessarily true in the future. The modifier □Aⁿ indicates that Aⁿ is true both now and at every point in the future. However, since the temporal logic is still linear in space, the proposition □Aⁿ is only available until it gets used.

  Γ; ∆, x : Aⁿ ⊢ e : Answer               Γ; ∆1 ⊢ e1 : ¬Aⁿ    Γ; ∆2 ⊢ e2 : Aⁿ
  ───────────────────────────             ─────────────────────────────────────
  Γ; ∆ ⊢ λx.e : ¬Aⁿ                       Γ; ∆1, ∆2 ⊢ e1 e2 : Answer

  Γ; □∆ ⊢ e : A                           Γ; ∆ ⊢ e : □A
  ──────────────────                      ──────────────────
  Γ; □∆ ⊢ box e : □A                      Γ; ∆ ⊢ unbox e : A

Figure 6 Typing rules for tensor logic.

The behavior of □Aⁿ can be described using two simple rules. First, if Aⁿ is provable using only hypotheses that are always true, then Aⁿ is always true. On the other hand, a proposition Aⁿ that is always true is also true now.
To describe continuations, we add the negation type ¬Aⁿ and remove all other linear arrows. Because only linear expressions need to undergo a CPS translation, the non-linear arrow types are unchanged. We denote the linear propositions in the resulting adjoint tensor logic [20] as Aⁿ, and non-linear propositions as τ.

τ  ::= Unit | Void | τ × τ | τ + τ | τ → τ | ⌈Aⁿ⌉
Aⁿ ::= 1 | 0 | Aⁿ ⊗ Aⁿ | Aⁿ ⊕ Aⁿ | ¬Aⁿ | □Aⁿ | ⌊τ⌋

The syntax of terms and expressions is almost identical to that of the event-based lan-
guage. The negation type ¬An is introduced with a λ-abstraction and is eliminated with an
application. The monadic bind and return operators are replaced by the comonadic box
and unbox operators, as shown in Figure 6. We assume a call-by-value operational semantics
for both terms and expressions.

4.2 A temporal CPS translation


The callback-based implementation of events is based on a time-sensitive CPS transformation on types, written ⟦A⟧. The CPS translation is time-sensitive in that it keeps track of when hypotheses are available. In particular, while expressions in the event-based language are available at any point in the future, expressions in the callback-based language expire. For example, the translation of the eventually type encodes the fact that the return value can be used at any point in the future: ⟦♦A⟧ = ¬□¬⟦A⟧. Similarly, a function might use its argument not right away, but at some point in the future, so the type translation of linear functions is ⟦A1 ⊸ A2⟧ = ¬(⟦A1⟧ ⊗ ¬⟦A2⟧). The full details of the CPS translation on types are shown in Appendix C.
As expected, the translation on (non-linear) terms is the identity, while the translation
on expressions is guided by the type translation in a mostly standard way. We describe a
few cases in Appendix C and have the following soundness theorem:
▶ Theorem 4. If Γ ⊢ t : τ then ⟦Γ⟧ ⊢ ⟦t⟧ : ⟦τ⟧. If Γ; ∆ ⊢ e : A then ⟦Γ⟧; ⟦∆⟧ ⊢ ⟦e⟧ : ⟦A⟧.
In addition, the translation respects the operational semantics of the source language.
▶ Theorem 5. If e ↦ e′ then ⟦e⟧ →* ⟦e′⟧, modulo the administrative redexes introduced by the CPS translation [10].
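To give a concrete feel for the term translation, here is a Haskell sketch of the images of return and bind under the CPS reading, again with □ erased and the answer type fixed to IO () (so this checks the plumbing, not the temporal or linear discipline):

-- Events in CPS with an opaque answer type.
newtype Event a = Event { runEvent :: (a -> IO ()) -> IO () }

returnE :: a -> Event a
returnE x = Event (\k -> k x)               -- pass the value to the waiting callback

bindE :: Event a -> (a -> Event b) -> Event b
bindE (Event e) f =
  Event (\k -> e (\x -> runEvent (f x) k))  -- chain the callbacks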

5 Concurrency Sources and the Event Loop


The CPS translation in the previous section only describes the language of pure events, and
not the additional sources of concurrency that are the backbone of event-driven program-
ming. So far in this paper we have considered a number of different sources: nondeterminism

in the event nondet e, timeouts of the form onTimeout e, synchronous channels that create
events send and receive, and the GUI operation onKeyPress.
The following section sketches a way to implement these primitive sources of concurrency
as part of the CPS translation. The trick is to integrate the concurrent actions of these
primitives with the answer type of the continuation, as described by Claessen [7] in the poor
man’s concurrency monad.

Actions and the answer type. For the pure fragment of the eventually monad (consisting
only of return and bind), the answer type of the continuation is invisible. What we write
as ¬A is in fact A ⊸ Answer for some fixed type Answer. Claessen observed that this makes
the answer type of the continuation the perfect place to hide the presence of effects.
We call these effects actions following Claessen. An action can be thought of as a
distinct thread of computation [18], the primitive thread operations being Halt and Fork.
These threads are also stateful, operating over an event queue monad, which we write
EventQueue A. The Atom action can execute an arbitrary monadic operation over the event
queue. Using Haskell-like notation for the algebraic datatype of actions, we have:
data Action =
| Halt : Action
| Fork : Action -o Action -o Action
| Atom : EventQueue Action -o Action
Since actions represent threads of computation, they are executed by a scheduler of type
List Action ⊸ EventQueue Action that schedules a list of actions inside the event queue
monad. For example, the following is a simple round robin scheduler:
eventLoop [] = return ()
eventLoop (Halt :: ls) = eventLoop ls
eventLoop (Fork a1 a2 :: ls) = eventLoop (ls ++ [a1,a2])
eventLoop (Atom mA :: ls) = bind a = mA in eventLoop (ls++[a])
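The same structure is directly runnable in Haskell; in the sketch below the event-queue monad is approximated by IO, which is enough to exercise the scheduler (e.g. eventLoop [Fork Halt Halt]):

-- Claessen-style actions: threads built from Halt, Fork, and atomic steps.
data Action
  = Halt
  | Fork Action Action
  | Atom (IO Action)

-- Round-robin scheduler: run each action, pushing continuations to the back.
eventLoop :: [Action] -> IO ()
eventLoop []                = pure ()
eventLoop (Halt       : ls) = eventLoop ls
eventLoop (Fork a1 a2 : ls) = eventLoop (ls ++ [a1, a2])
eventLoop (Atom m     : ls) = do a <- m; eventLoop (ls ++ [a])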
Events and actions interact via a top-level run operator that converts a unit-valued event
to an action. Its type is ¬⟦♦1⟧ or ⟦♦1⟧ ⊸ Action.

run = λ(k : ¬□¬⟦1⟧). k (box (λ(x : ⟦1⟧). (unbox x) (λ(). Halt)))

As a sanity check, observe that run ⟦return v⟧ evaluates to ⟦v⟧ (λ(). Halt).

Spawn. Using actions we can encode the event-level concurrency primitives that have been used throughout this paper. For example, spawn, of type ♦1 ⊸ 1, is implemented as a member of its CPS-converted type ⟦♦1 ⊸ 1⟧ = ¬(⟦♦1⟧ ⊗ ¬⟦1⟧) that takes in an event (of type ⟦♦1⟧) and a continuation (of type ¬⟦1⟧) and produces a Fork action that runs the event and calls the continuation in parallel. The definition is found in Appendix D, and we can check that run ⟦return (spawn e)⟧ evaluates to Fork (box ⟦e⟧) Halt.
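In the same Haskell approximation, and reusing the Action type from the scheduler sketch above, the CPS-encoded event type, run, and spawn come out as follows (spawnE is an illustrative name):

-- Events in CPS, with Action as the answer type.
newtype Event a = Event { runEvent :: (a -> Action) -> Action }

run :: Event () -> Action
run (Event e) = e (\() -> Halt)

-- spawn: fork an action for the event and call the continuation immediately.
spawnE :: Event () -> Event ()
spawnE e = Event (\k -> Fork (run e) (k ()))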

Linear Channels. We conclude this section with a sketch of how to implement synchronous
message-passing in the style of CML, which we used to encode choose in Section 3. Other
sources of concurrency can be implemented in a similar way.
In order to represent channels, the event queue underlying the action type should be
stateful over a linear heterogeneous store. Linear references are indexed by some non-linear
identifiers, which we write Id A.

τ  ::= · · · | Id A
Aⁿ ::= · · · | EventQueue Aⁿ

newId    : Aⁿ ⊸ EventQueue ⌊Id Aⁿ⌋
updateId : ⌊Id Aⁿ⌋ ⊸ (Aⁿ ⊸ Aⁿ ⊗ Bⁿ) ⊸ EventQueue Bⁿ

A channel is a reference to a linear cell that consists of either: (a) a list of messages to
be sent, along with the event handlers to be triggered after rendezvous, or (b) a list of event
handlers waiting for messages. The types of these two possible elements are written SendElt
and RecvElt, respectively.

SendElt Aⁿ = □Aⁿ ⊗ □¬⟦1⟧          RecvElt Aⁿ = □¬□Aⁿ

Chan Aⁿ = Id (List (SendElt Aⁿ) ⊕ List (RecvElt Aⁿ))

When a message of type □Aⁿ is sent over the channel, we examine the current state of the
cell. If the cell contains a list of messages to be sent, the new message will be added to the
end of the list. If the cell contains any callbacks waiting for messages, the callback will be
applied to the incoming message and stored as an action. This behavior is governed by the
function attachSend of type ⌊Chan A⌋ ⊸ SendElt A ⊸ EventQueue Action.
attachSend c (a, k0) = updateId c (fun s =>
  case s of
  | in1 ls       -> (in1 (ls ++ [(a, k0)]), Halt)
  | in2 []       -> (in1 [a], Halt)
  | in2 (k::ks)  -> (in2 ks, Fork ((unbox k) a) ((unbox k0) [()]))
  end)
The event-level send operator has the type ⌊Chan A⌋ ⊸ A ⊸ ♦1, so its implementation has type ⟦⌊Chan A⌋ ⊸ A ⊸ ♦1⟧. Then ⟦send⟧ is a continuation that takes in a channel, a message of type ⟦A⟧, and a continuation of type ⟦♦1⟧, and produces an Atom performing the monadic computation attachSend.
The interpretation of receive is governed by a similar protocol attachReceive of type ⌊Chan A⌋ ⊸ RecvElt A ⊸ EventQueue Action, the details of which are given in Appendix D.
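In the same spirit, the channel cell and attachSend can be sketched in Haskell, with the linear store approximated by an IORef and the boxed callbacks by plain functions into Action (reusing Action from the scheduler sketch above; the type and field names are illustrative):

import Data.IORef (IORef, atomicModifyIORef')

type SendElt a = (a, () -> Action)   -- pending message plus the sender's continuation
type RecvElt a = a -> Action         -- a receiver waiting for a message
type Chan a    = IORef (Either [SendElt a] [RecvElt a])

-- Either enqueue the send, or rendezvous with a waiting receiver.
attachSend :: Chan a -> SendElt a -> IO Action
attachSend c (a, k0) = atomicModifyIORef' c step
  where
    step (Left sends)     = (Left (sends ++ [(a, k0)]), Halt)
    step (Right [])       = (Left [(a, k0)], Halt)   -- no receivers yet: start a send queue
    step (Right (k : ks)) = (Right ks, Fork (k a) (k0 ()))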

6 Discussion
In this paper, a linear and temporal logic is the guiding principle in the design of a core
language for events. But the connection between logic and programming is only significant
if it has a basis in existing event-driven languages, which vary in the ways they embody
the event-driven paradigm. To understand these variations and the design decisions of this
paper, we revisit the four features of event-driven programming discussed in the introduction:
the monadic abstraction of the event, the synchronization operator, the implementation in
terms of callbacks, and the primitive sources of events.

Layers of abstraction for the monadic event. The language of pure events presented in
Section 2 has two parts: a type of events (♦A) and a monadic interface for interacting with
them. The monadic structure is explicit in many existing languages, including CML’s ’a
event type [24], Async’s ’a Deferred.t [21], Lwt’s type for light-weight threads [26], and
Scala’s Future[A] [13].6 Other languages don’t have an explicit event type, but require
programmers to work directly in CPS, including Python’s Twisted library [16], JavaScript’s
Node.js [25], or Ruby’s EventMachine [6]. Still others have the event abstraction but not in
a monadic style, like Racket’s synchronizable events [1] or Go’s goroutines [2].

⁶ Although all of these abstractions are monads, their interfaces are not standard. CML has a return operator alwaysEvt but in lieu of bind it has a functorial wrap along with a sync operator of type 'a event -> 'a that synchronously executes an event. Async and Lwt both use the standard return and bind terminology but Async has an impure peek operation that polls whether or not an event has completed, and Lwt has the ability to explicitly put threads to sleep and wake them up again.

Synchronization. Selective choice as described in Section 3 is less universal than the monadic bind, but has proved useful in CML, Async, Lwt, and Racket, where the default choice operator acts on lists, not pairs, and has type List(♦A) → ♦A.7 In this paper, by considering a linear choose operator over pairs instead of lists, we are able to draw a connection
with the linear-time axiom of temporal logic, and abstract away from the type of pairs to
derive a synchronization operator not only for lists, but for any container data structure.

Implementation: synchronous or asynchronous. In Section 4 we give an implementation of events based on a CPS translation that is independent of the event loop. As a consequence,
return and bind are necessarily synchronous, which means that an event, once running, does
not yield control by default. Only once an explicitly asynchronous operation like choose,
spawn, or receive is called does control shift back to the scheduler.8 In the asynchronous
style, used by Async and Scala, the bind operation bind x = e1 in e2 registers the event e1
in the event queue so that it executes asynchronously by default. To encode synchrony in
an asynchronous system requires a special case, such as Scala’s blocking constructs [13], but
the other direction is trivial using spawn [22].

Where do events come from? Despite the similarities between event-driven languages,
programming in one versus another may feel very different depending on the intended application domain. CML feels most natural for describing message-passing concurrency due to
its built-in channels and spawn operator. Async describes shared-state concurrency, where
its Ivar data structure is a one-shot kind of shared state. Promises in Scala are more focused
on long-running computations like I/O. However, these different concurrency abstractions
can be implemented in one another, such as eXene’s implementations of GUIs in CML [9],
or Scala’s async libraries [3]. We argue that the choice of concurrency primitive is ortho-
gonal to the design of events themselves, and that the techniques presented in this paper
are applicable to a wide range of primitives.

What about FRP? The event-driven paradigm described in this paper is closely connected
to functional reactive programming (FRP), which targets many of the same domains. In
the FRP model, the input to a program is modeled as a time-varying value, or a stream.
FRP programs can be thought of as stream transformers, or as programs that react to the
current state of the system. Recently, FRP’s connection with linear-time temporal logic [15]
was discovered, which in fact prompted us to search for similar connections to event-based
programming.
In typical FRP systems, the type □A denotes the type of time-varying values, as opposed
to our interpretation as an expression that is available now or at any time in the future.
FRP programs model events A coinductively as να. A ∨ ◦α, where ◦ is the “next” modality. Unfortunately, this forces an implementation based on polling, which means programs
continuously check whether an event has resolved yet.9 In the event-driven interpretation,
the type of events ♦A is interpreted as a continuation ¬□¬A and the structure of the event
loop avoids polling.

⁷ In CML, choose aborts the events that are not chosen by means of its negative acknowledgment mechanism, but Panangaden and Reppy [22] show that this feature is encodable.
⁸ CML and Lwt both use synchronous implementation strategies. Notice that the question of synchronous versus asynchronous events is orthogonal to the choice of synchronous versus asynchronous channels.
⁹ Modern FRP languages work hard to avoid these time and space leaks, either by restricting the expressivity of programs [8] or by mixing ideas from event-driven programming with FRP [9].

Conclusion The Curry-Howard correspondence reveals many interesting insights into the
nature of event-driven programs. Synchronization via selective choice can be thought of as
the linear-time axiom from temporal logic, and can be generalized to arbitrary container data
structures. The standard implementation using callbacks can be explained in a temporal
way by interpreting ♦A as ¬□¬A, and primitive sources of concurrency are implemented
using a clever choice of answer type. The result is a top-to-bottom formulation of the essence
of events: computations that eventually return a value.

References
1 Events (Racket documentation). Website. URL: docs.racket-lang.org/reference/sync.html.
2 The Go programming language. Website. URL: www.golang.org/.
3 scala-async. GitHub repository. URL: github.com/scala/async.
4 P.N. Benton. A mixed linear and non-linear logic: Proofs, terms and models. In Computer Science
Logic. 1995.
5 Luís Caires and Frank Pfenning. Session types as intuitionistic linear propositions. In CONCUR.
2010.
6 Francis Cianfrocca. About EventMachine. Website. URL: www.rubydoc.info/gems/eventmachine.
7 Koen Claessen. A poor man’s concurrency monad. Journal of Functional Programming, 9:313–323,
1999.
8 Antony Courtney and Conal Elliott. Genuinely functional user interfaces. In Haskell Workshop, 2001.
9 Evan Czaplicki and Stephen Chong. Asynchronous functional reactive programming for GUIs. In
PLDI, 2013.
10 Olivier Danvy and Andrzej Filinski. Representing control: a study of the CPS transformation. Mathematical Structures in Computer Science, 2:361–391, 12 1992.
11 Emden R. Gansner and John H. Reppy. A multi-threaded higher-order user interface toolkit. In User
Interface Software, volume 1 of Software Trends. 1993.
12 Jean-Yves Girard. Linear logic. Theoretical Computer Science, 50(1):1–101, 1987.
13 Philipp Haller, Aleksandar Prokopec, Heather Miller, Viktor Klang, Roland Kuhn, and Vojin Jovanovic. Futures and Promises (Scala documentation). Website, 2013. URL: docs.scala-lang.org/overviews/core/futures.
14 Dana Harrington. Uniqueness logic. Theoretical Computer Science, 354(1):24 – 41, 2006. Algebraic
Methods in Language Processing.
15 Alan Jeffrey. LTL types FRP: Linear-time temporal logic propositions as types, proofs as functional
reactive programs. In PLPV, 2012.
16 Ken Kinder. Event-driven programming with Twisted and Python. Linux Journal, March 2005.
17 Neel Krishnaswami and Nick Benton. A semantic model for graphical user interfaces. In ICFP, 2011.
18 Peng Li and Steve Zdancewic. Combining events and threads for scalable network services: Implementation and evaluation of monadic, application-level concurrency primitives. In PLDI, 2007.
19 Conor McBride. The derivative of a regular type is its type of one-hole contexts. 2001.
20 Paul-André Melliès and Nicolas Tabareau. Resource modalities in tensor logic. Annals of Pure and
Applied Logic, 161(5):632–653, 2010.
21 Yaron Minsky, Anil Madhavapeddy, and Jason Hickey. Real World OCaml. O’Reilly Media, 2013.
22 Prakash Panangaden and John Reppy. ML with Concurrency: Design, Analysis, Implementation,
and Application, chapter The Essence of Concurrent ML, pages 5–29. 1997.
23 Frank Pfenning and Dennis Griffith. Polarized substructural session types. In FoSSaCS. 2015.
24 John H. Reppy. Concurrent Programming in ML. Cambridge University Press, 1999.
25 S. Tilkov and S. Vinoski. Node.js: Using JavaScript to build high-performance network programs.
IEEE Internet Computing, 14(6):80–83, Nov 2010.
26 Jérôme Vouillon. Lwt: A cooperative thread library. In ML Workshop, 2008.
27 Philip Wadler. Propositions as sessions. ICFP, 2012.

A Event-based language

τ ::= Unit | Void | τ × τ | τ + τ | τ → τ | ⌈A⌉
A ::= 1 | 0 | A ⊗ A | A ⊕ A | A ⊸ A | ♦A | ⌊τ⌋

t ::= x | ( ) | case t of ( ) | (t1, t2) | πi t | ini t | case t of (in1 x1 → t1 | in2 x2 → t2)
    | λx.t | t1 t2 | suspend e
e ::= x | ( ) | let ( ) = e1 in e2 | case e of ( )
    | (e1, e2) | let (x1, x2) = e1 in e2 | ini e | case e of (in1 x1 → e1 | in2 x2 → e2)
    | λx.e | e1 e2 | return e | bind x = e1 in e2 | force t | ⌊t⌋ | let ⌊x⌋ = e1 in e2

(var)     Γ, x : τ ⊢ x : τ
(Unit-I)  Γ ⊢ ( ) : Unit
(Void-E)  Γ ⊢ t : Void  ⟹  Γ ⊢ case t of ( ) : σ
(×-I)     Γ ⊢ t1 : τ1    Γ ⊢ t2 : τ2  ⟹  Γ ⊢ (t1, t2) : τ1 × τ2
(×-E)     Γ ⊢ t : τ1 × τ2  ⟹  Γ ⊢ πi t : τi
(+-I)     Γ ⊢ t : τi  ⟹  Γ ⊢ ini t : τ1 + τ2
(+-E)     Γ ⊢ t : τ1 + τ2    Γ, x1 : τ1 ⊢ t1 : σ    Γ, x2 : τ2 ⊢ t2 : σ  ⟹  Γ ⊢ case t of (in1 x1 → t1 | in2 x2 → t2) : σ
(→-I)     Γ, x : τ ⊢ t : σ  ⟹  Γ ⊢ λx.t : τ → σ
(→-E)     Γ ⊢ t1 : τ → σ    Γ ⊢ t2 : τ  ⟹  Γ ⊢ t1 t2 : σ

(var)     Γ; x : A ⊢ x : A
(0-E)     Γ; ∆ ⊢ e : 0  ⟹  Γ; ∆ ⊢ case e of ( ) : B
(1-I)     Γ; · ⊢ ( ) : 1
(1-E)     Γ; ∆1 ⊢ e1 : 1    Γ; ∆2 ⊢ e2 : B  ⟹  Γ; ∆1, ∆2 ⊢ let ( ) = e1 in e2 : B
(⊗-I)     Γ; ∆1 ⊢ e1 : A1    Γ; ∆2 ⊢ e2 : A2  ⟹  Γ; ∆1, ∆2 ⊢ (e1, e2) : A1 ⊗ A2
(⊗-E)     Γ; ∆1 ⊢ e1 : A1 ⊗ A2    Γ; ∆2, x1 : A1, x2 : A2 ⊢ e2 : B  ⟹  Γ; ∆1, ∆2 ⊢ let (x1, x2) = e1 in e2 : B
(⊕-I)     Γ; ∆ ⊢ e : Ai  ⟹  Γ; ∆ ⊢ ini e : A1 ⊕ A2
(⊕-E)     Γ; ∆1 ⊢ e : A1 ⊕ A2    Γ; ∆2, x1 : A1 ⊢ e1 : B    Γ; ∆2, x2 : A2 ⊢ e2 : B  ⟹  Γ; ∆1, ∆2 ⊢ case e of (in1 x1 → e1 | in2 x2 → e2) : B
(⊸-I)     Γ; ∆, x : A ⊢ e : B  ⟹  Γ; ∆ ⊢ λx.e : A ⊸ B
(⊸-E)     Γ; ∆1 ⊢ e1 : A ⊸ B    Γ; ∆2 ⊢ e2 : A  ⟹  Γ; ∆1, ∆2 ⊢ e1 e2 : B
(♦-I)     Γ; ∆ ⊢ e : A  ⟹  Γ; ∆ ⊢ return e : ♦A
(♦-E)     Γ; ∆1 ⊢ e : ♦A    Γ; ∆, x : A ⊢ e′ : ♦B  ⟹  Γ; ∆1, ∆ ⊢ bind x = e in e′ : ♦B
(⌈−⌉-I)   Γ; · ⊢ e : A  ⟹  Γ ⊢ suspend e : ⌈A⌉
(⌈−⌉-E)   Γ ⊢ t : ⌈A⌉  ⟹  Γ; · ⊢ force t : A
(⌊−⌋-I)   Γ ⊢ t : τ  ⟹  Γ; · ⊢ ⌊t⌋ : ⌊τ⌋
(⌊−⌋-E)   Γ; ∆1 ⊢ e1 : ⌊τ⌋    Γ, x : τ; ∆2 ⊢ e2 : B  ⟹  Γ; ∆1, ∆2 ⊢ let ⌊x⌋ = e1 in e2 : B

B Operational Semantics
The normal forms of terms and expressions, respectively, are denoted v and n.

v ::= ( ) | (v1, v2) | ini v | λx.t | suspend e
n ::= ( ) | (e1, e2) | ini e | λx.e | return e | ⌊v⌋

πi (v1, v2)  ↦  vi
(λx.t1) v2  ↦  t1{v2/x}
case ini v of (in1 x1 → t1 | in2 x2 → t2)  ↦  ti{v/xi}

let ( ) = ( ) in e  ↦  e
let (x1, x2) = (e1, e2) in e  ↦  e{e1/x1, e2/x2}
(λx.e1) e2  ↦  e1{e2/x}
bind x = return e1 in e2  ↦  e2{e1/x}
force (suspend e)  ↦  e
let ⌊x⌋ = ⌊v⌋ in e  ↦  e{v/x}
case ini e of (in1 x1 → e1 | in2 x2 → e2)  ↦  ei{e/xi}

E^P ::= ([ ], t) | (v, [ ]) | πi [ ]
      | ini [ ] | case [ ] of (in1 x1 → t1 | in2 x2 → t2)
      | [ ] t | v [ ] | force [ ] | ⌊[ ]⌋

E^L ::= let ( ) = [ ] in e | let (x1, x2) = [ ] in e
      | case [ ] of (in1 x1 → e1 | in2 x2 → e2)
      | [ ] e | bind x = [ ] in e | let ⌊x⌋ = [ ] in e

t ↦ t′  ⟹  E^P[t] ↦ E^P[t′]
e ↦ e′  ⟹  E^L[e] ↦ E^L[e′]

C CPS translation

t ::= x | ( ) | case t of ( )
    | (t1, t2) | πi t
    | ini t | case t of (in1 x1 → t1 | in2 x2 → t2)
    | λx.t | t1 t2 | suspend e
e ::= x | ( ) | let ( ) = e1 in e2 | case e of ( )
    | (e1, e2) | let (x1, x2) = e1 in e2
    | ini e | case e of (in1 x1 → e1 | in2 x2 → e2)
    | λx.e | e1 e2
    | box e | unbox e
    | force t | ⌊t⌋ | let ⌊x⌋ = t in e

⟦1⟧ = ¬¬1                                ⟦Unit⟧ = Unit
⟦0⟧ = ¬¬0                                ⟦Void⟧ = Void
⟦A1 ⊗ A2⟧ = ¬¬(⟦A1⟧ ⊗ ⟦A2⟧)              ⟦τ1 × τ2⟧ = ⟦τ1⟧ × ⟦τ2⟧
⟦A1 ⊕ A2⟧ = ¬¬(⟦A1⟧ ⊕ ⟦A2⟧)              ⟦τ1 + τ2⟧ = ⟦τ1⟧ + ⟦τ2⟧
⟦A1 ⊸ A2⟧ = ¬(⟦A1⟧ ⊗ ¬⟦A2⟧)              ⟦τ1 → τ2⟧ = ⟦τ1⟧ → ⟦τ2⟧
⟦♦A⟧ = ¬□¬⟦A⟧                            ⟦⌈A⌉⟧ = ⌈⟦A⟧⌉
⟦⌊τ⌋⟧ = ¬¬⌊⟦τ⟧⌋

⟦x⟧ = unbox x
⟦return e⟧ = λk. (unbox k) (box ⟦e⟧)
⟦λx.e⟧ = λ(x, k). k ⟦e⟧
⟦bind x = e1 in e2⟧ = λk. ⟦e1⟧ (box (λx. ⟦e2⟧ k))
⟦e1 e2⟧ = λk. ⟦e1⟧ (box ⟦e2⟧, λz. z k)

Sketch of Theorem 5. Following Danvy and Filinski [10], we could easily consider a one-pass CPS translation where the administrative redexes—those introduced only by the CPS translation—are treated as meta-operations on terms. These meta-operations are written with an overline, and follow the pattern

(λx.e) @ v = e{v/x}        unbox(box e) = e.

The administrative redexes are introduced uniformly in the CPS translation of terms, with the exception of the λ abstraction rule, which requires an extra η-expansion.

[x] = unbox x
[λx.e] = λ(y, z).(λx.z @ [e])y
[e1 e2 ] = λk.[e1 ] @ (box[e2 ], λz.z @ k)
[return e] = λk.(unbox k)(box[e])
[bind x = e1 in e2 ] = λk.[e1 ] @ (box(λx.[e2 ] @ k))

With this one-pass CPS translation we can prove the theorem directly. ◀

D CPS Implementation of Event Primitives

spawn : ♦1 ⊸ 1
⟦spawn⟧ : ⟦♦1 ⊸ 1⟧ = ¬(⟦♦1⟧ ⊗ ¬⟦1⟧)
        = λ(x, k). Fork (run (unbox x)) (k ⟦( )⟧)

new : 1 ⊸ ⌊Chan A⌋
⟦new⟧ : ¬(⟦1⟧ ⊗ ¬⟦⌊Chan A⌋⟧)
      = λ(u : □¬¬1, k : ¬⌊Chan ⟦A⟧⌋). (unbox u) (λ(). Atom (bind i = newId (inl [ ]) in k i))

send : ⌊Chan A⌋ ⊸ A ⊸ ♦1
⟦send⟧ : ¬(⟦⌊Chan A⌋⟧ ⊗ ⟦A⟧ ⊗ ¬⟦♦1⟧)
       = λ(c : □¬¬⌊Chan ⟦A⟧⌋, a : □⟦A⟧, k : ⟦♦1⟧).
           (unbox c) (λi. k (λ(k0 : □¬⟦1⟧). Atom (attachSend i k0)))

receive : ⌊Chan A⌋ ⊸ ♦A
⟦receive⟧ : ¬(⟦⌊Chan A⌋⟧ ⊗ ¬⟦♦A⟧)
          = λ(c : □¬¬⌊Chan ⟦A⟧⌋, k : ⟦♦A⟧).
              (unbox c) (λi. k (λ(k0 : □¬⟦A⟧). Atom (attachReceive i k0)))
