Dependently Typed Programming in Agda

Ulf Norell, Chalmers University, Gothenburg
James Chapman, Institute of Cybernetics, Tallinn
1 Introduction
2 Agda Basics
The rest of your program goes inside the top-level module. Let us
start by defining some simple datatypes and functions.
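For example, if the file is called AgdaBasics.agda (an illustrative name; the module name has to match the file name), it would start with:

module AgdaBasics where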
Similar to languages like Haskell and ML, a key concept in Agda is pattern
matching over algebraic datatypes. With the introduction of dependent
types pattern matching becomes even more powerful as we shall see in
Section 2.4 and Section 3. But for now, let us start with simply typed
functions and datatypes.
Datatypes are introduced by a data declaration, giving the name and
type of the datatype as well as the constructors and their types. For
instance, here is the type of booleans
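A definition along the following lines will do (a sketch of the standard declaration):

data Bool : Set where
  true  : Bool
  false : Bool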
The type of Bool is Set, the type of small types. Functions over
Bool can be defined by pattern matching in a way familiar to Haskell
programmers:
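For instance, negation can be defined by cases, and, for the discussion of termination below, we also sketch natural numbers with addition (standard definitions; the fixity declaration is only there so that _+_ and the _*_ defined later can be mixed in expressions):

not : Bool -> Bool
not true  = false
not false = true

data Nat : Set where
  zero : Nat
  suc  : Nat -> Nat

infixl 40 _+_
_+_ : Nat -> Nat -> Nat
zero  + m = m
suc n + m = suc (n + m)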
In the same way as functions are not allowed to crash, they must also
be terminating. To guarantee termination recursive calls have to be made
on structurally smaller arguments. In this case _+_ passes the termination
checker since the first argument is getting smaller in the recursive call
(n < suc n). Let us define multiplication while we are at it.
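A definition in the same style might be (a sketch; the fixity declaration lets it combine with _+_ on the right hand side):

infixl 60 _*_
_*_ : Nat -> Nat -> Nat
zero  * m = zero
suc n * m = m + n * m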
There are some new and interesting bits in the type of if_then_else_.
For now, it is sufficient to think about {A : Set} -> as declaring a poly-
morphic function over a type A. More on this in Sections 2.2 and 2.3.
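For reference, if_then_else_ can be defined like this (a sketch):

if_then_else_ : {A : Set} -> Bool -> A -> A -> A
if true  then x else y = x
if false then x else y = y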
Just as in Haskell and ML datatypes can be parameterised by other
types. The type of lists of elements of an arbitrary type is defined by
infixr 40 _::_
data List (A : Set) : Set where
[] : List A
_::_ : A -> List A -> List A
Again, note that Agda is quite liberal about what is a valid name.
Both [] and _::_ are accepted as sensible names. In fact, Agda names
can contain arbitrary non-whitespace Unicode characters, with a few ex-
ceptions, such as parentheses and curly braces. So, if we really wanted
(which we don't) we could define the list type as
data _⋆ (α : Set) : Set where
  ε : α ⋆
  _◁_ : α -> α ⋆ -> α ⋆
This liberal policy of names means that being generous with whites-
pace becomes important. For instance, not:Bool->Bool would not be a
valid type signature for the not function, since it is in fact a valid name.
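The examples below use a polymorphic identity function identity, which takes its type argument explicitly, and a variant id, which takes it implicitly. Sketches of the two definitions:

identity : (A : Set) -> A -> A
identity A x = x

id : {A : Set} -> A -> A
id x = x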
zero' : Nat
zero' = identity Nat zero
true' : Bool
true' = id true
Note that the type argument is implicit both when the function is
applied and when it is defined.
There are no restrictions on what arguments can be made implicit,
nor are there any guarantees that an implicit argument can be inferred
by the type checker. For instance, we could be silly and make the second
argument of the identity function implicit as well:
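A sketch of such a definition:

silly : {A : Set}{x : A} -> A
silly {_}{x} = x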
false' : Bool
false' = silly {x = false}
Clearly, there is no way the type checker could figure out what the
second argument to silly should be. To provide an implicit argument
explicitly you use the implicit application syntax f {v}, which gives v as
the left-most implicit argument to f, or as shown in the example above,
f {x = v}, which gives v as the implicit argument called x. The name of
an implicit argument is obtained from the type declaration.
Conversely, if you want the type checker to fill in a term which needs
to be given explicitly you can replace it by an underscore. For instance,
one : Nat
one = identity _ (suc zero)
It is important to note that the type checker will not do any kind of
search in order to fill in implicit arguments. It will only look at the typing
constraints and perform unification5.
Even so, a lot can be inferred automatically. For instance, we can
define the fully dependent function composition. (Warning: the following
type is not for the faint of heart!)
5
Miller pattern unification to be precise.
_∘_ : {A : Set}{B : A -> Set}{C : (x : A) -> B x -> Set}
      (f : {x : A}(y : B x) -> C x y)(g : (x : A) -> B x)
      (x : A) -> C x (g x)
(f ∘ g) x = f (g x)
The type checker can figure out the type arguments A, B, and C, when
we use _∘_.
We have seen how to define simply typed datatypes and functions,
and how to use dependent types and implicit arguments to represent
polymorphic functions. Let us conclude this part by defining some familiar
functions.
The rule for when an argument should be dotted is: if there is a unique
type correct value for the argument it should be dotted.
In the example above, the terms under the dots were valid patterns,
but in general they can be arbitrary terms. For instance, we can define
the image of a function as follows:
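A sketch of such a datatype (the mixfix name Image_∋_ is one possible choice):

data Image_∋_ {A B : Set}(f : A -> B) : B -> Set where
  im : (x : A) -> Image f ∋ f x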
Here we state that the only way to construct an element in the image
of f is to pick an argument x and apply f to x. Now if we know that a
particular y is in the image of f we can compute the inverse of f on y:
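A sketch of the inverse, using a dotted pattern for the argument forced by the im constructor:

inv : {A B : Set}(f : A -> B)(y : B) -> Image f ∋ y -> A
inv f .(f x) (im x) = x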
Here fzero is smaller than suc n for any n and if i is smaller than n
then fsuc i is smaller than suc n. Note that there is no way of construct-
ing a number smaller than zero. When there are no possible constructor
patterns for a given argument you can pattern match on it with the ab-
surd pattern ():
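For reference, here is a sketch of Fin together with a safe lookup on length-indexed vectors; Vec is the usual vector type, and the empty-vector case of _!_ uses the absurd pattern, since Fin zero has no elements:

data Fin : Nat -> Set where
  fzero : {n : Nat} -> Fin (suc n)
  fsuc  : {n : Nat} -> Fin n -> Fin (suc n)

data Vec (A : Set) : Nat -> Set where
  []   : Vec A zero
  _::_ : {n : Nat} -> A -> Vec A n -> Vec A (suc n)

_!_ : {n : Nat}{A : Set} -> Vec A n -> Fin n -> A
[]        ! ()
(x :: xs) ! fzero    = x
(x :: xs) ! (fsuc i) = xs ! i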
The types ensure that there is no danger of indexing outside the list.
This is reflected in the case of the empty list where there are no possible
values for the index.
The _!_ function turns a list into a function from indices to elements.
We can also go the other way, constructing a list given a function from
indices to elements:
tabulate : {n : Nat}{A : Set} -> (Fin n -> A) -> Vec A n
tabulate {zero} f = []
tabulate {suc n} f = f fzero :: tabulate (f ∘ fsuc)
trivial : True
trivial = _
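The definitions this relies on would look roughly as follows: False is a datatype with no constructors, True a record type with no fields (which is why trivial can be given as just an underscore), and isTrue maps a boolean to one of them:

data False : Set where
record True : Set where

isTrue : Bool -> Set
isTrue true  = True
isTrue false = False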
Now, isTrue b is the type of proofs that b equals true. Using this
technique we can define the safe list lookup function in a different way,
working on simply typed lists and numbers.
_<_ : Nat -> Nat -> Bool
_ < zero = false
zero < suc n = true
suc m < suc n = m < n
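With _<_ in place, a sketch of the lookup function: the third argument is a proof that the index is in bounds, and in the empty-list case that proof type is False, so the absurd pattern applies (length is the usual list length):

length : {A : Set} -> List A -> Nat
length []        = zero
length (x :: xs) = suc (length xs)

lookup : {A : Set}(xs : List A)(n : Nat) -> isTrue (n < length xs) -> A
lookup []        n       ()
lookup (x :: xs) zero    p = x
lookup (x :: xs) (suc n) p = lookup xs n p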
The equations for min following the with abstraction have an extra
argument, separated from the original arguments by a vertical bar, cor-
responding to the value of the expression x < y. You can abstract over
multiple expressions at the same time, separating them by vertical bars
and you can nest with abstractions. In the left hand side, with abstracted
arguments should be separated by vertical bars.
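For reference, a sketch of min defined with a with abstraction:

min : Nat -> Nat -> Nat
min x y with x < y
min x y | true  = x
min x y | false = y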
In this case pattern matching on x < y doesn't tell us anything inter-
esting about the arguments of min, so repeating the left hand sides is a
bit tedious. When this is the case you can replace the left hand side with
...:
filter : {A : Set} -> (A -> Bool) -> List A -> List A
filter p [] = []
filter p (x :: xs) with p x
... | true = x :: filter p xs
... | false = filter p xs
Two natural numbers are different if one is zero and the other suc of
something, or if both are successors but their predecessors are different.
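Datatypes capturing this would look roughly as follows (assuming a propositional equality _==_ with constructor refl, as used by equal? below):

data _≠_ : Nat -> Nat -> Set where
  z≠s : {n : Nat} -> zero ≠ suc n
  s≠z : {n : Nat} -> suc n ≠ zero
  s≠s : {n m : Nat} -> n ≠ m -> suc n ≠ suc m

data Equal? (n m : Nat) : Set where
  eq  : n == m -> Equal? n m
  neq : n ≠ m -> Equal? n m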
Now we can define the function equal? to check if two numbers are equal:
equal? : (n m : Nat) -> Equal? n m
equal? zero zero = eq refl
equal? zero (suc m) = neq z≠s
equal? (suc n) zero = neq s≠z
equal? (suc n) (suc m) with equal? n m
equal? (suc n) (suc .n) | eq refl = eq refl
equal? (suc n) (suc m) | neq p = neq (s≠s p)
Note that in the case where both numbers are successors we learn
something by pattern matching on the proof that the predecessors are
equal. We will see more examples of this kind of informative datatypes in
Section 3.1.
When you abstract over an expression using with, that expression
is abstracted from the entire context. This means that if the expression
occurs in the type of an argument to the function or in the result type,
this occurrence will be replaced by the with-argument on the left hand
side. For example, suppose we want to prove something about the filter
function. That the only thing it does is throwing away some elements of
its argument, say. We can define what it means for one list to be a sublist
of another list:
infix 20 _⊆_
data _⊆_ {A : Set} : List A -> List A -> Set where
  stop : [] ⊆ []
  drop : forall {xs y ys} -> xs ⊆ ys -> xs ⊆ y :: ys
  keep : forall {x xs ys} -> xs ⊆ ys -> x :: xs ⊆ x :: ys
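The lemma we are after states that filter p xs is a sublist of xs. A sketch of its statement and the empty-list case, which is proved by stop:

lem-filter : {A : Set}(p : A -> Bool)(xs : List A) -> filter p xs ⊆ xs
lem-filter p [] = stop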
The interesting case is the _::_ case. Let us walk through it slowly:
-- lem-filter p (x :: xs) = ?
At this point the goal that we have to prove is
-- (filter p (x :: xs) | p x) ⊆ x :: xs
In the goal filter has been applied to its with abstracted argument p x
and will not reduce any further. Now, when we abstract over p x it will
be abstracted from the goal type so we get
-- lem-filter p (x :: xs) with p x
-- ... | px = ?
Now, when we pattern match on px the call to filter will reduce and
we get
-- lem-filter p (x :: xs) with p x
-- ... | true = ? {- x :: filter p xs ⊆ x :: xs -}
-- ... | false = ? {- filter p xs ⊆ x :: xs -}
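These two goals are proved by keep (lem-filter p xs) and drop (lem-filter p xs) respectively, so the finished clause might read:

lem-filter p (x :: xs) with p x
... | true  = keep (lem-filter p xs)
... | false = drop (lem-filter p xs)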
2.7 Modules
The module system in Agda is primarily used to manage name spaces.
In a dependently typed setting you could imagine having modules as first
class objects that could be passed around and created on the fly, but in
Agda this is not the case.
We have already seen that each file must define a single top-level
module containing all the declarations in the file. These declarations can
in turn be modules.
module M where
data Maybe (A : Set) : Set where
nothing : Maybe A
just : A -> Maybe A
By default all names declared in a module are visible from the outside.
If you want to hide parts of a module you can declare it private:
module A where
private
internal : Nat
internal = zero
To access public names from another module you can qualify the name
by the name of the module.
mapMaybe1 : {A B : Set} -> (A -> B) -> M.Maybe A -> M.Maybe B
mapMaybe1 f M.nothing = M.nothing
mapMaybe1 f (M.just x) = M.just (f x)
open M
When opening a module you can control which names are brought into
scope with the using, hiding, and renaming keywords. For instance, to
open the Maybe module without exposing the maybe function, and using
different names for the type and the constructors we can say
open M hiding (maybe)
renaming (Maybe to _option; nothing to none; just to some)
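Modules can also be parameterised. A sketch of a parameterised sorting module and an instantiation to natural numbers (consistent with the discussion below):

module Sort (A : Set)(_<_ : A -> A -> Bool) where
  insert : A -> List A -> List A
  insert y [] = y :: []
  insert y (x :: xs) with x < y
  ... | true  = x :: insert y xs
  ... | false = y :: x :: xs

  sort : List A -> List A
  sort []        = []
  sort (x :: xs) = insert x (sort xs)

module SortNat = Sort Nat _<_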
This creates a new module SortNat with functions insert and sort.
sort2 : List Nat -> List Nat
sort2 = SortNat.sort
Often you want to instantiate a module and open the result, in which
case you can simply write
open Sort Nat _<_ renaming (insert to insertNat; sort to sortNat)
Now the Lists module will contain insert and sort as well as the
minimum function.
8
But not by other modules.
Importing modules from other files Agda programs can be split over
multiple files. To use definitions from a module defined in another file the
module has to be imported. Modules are imported by their names, so if
you have a module A.B.C in a file /some/local/path/A/B/C.agda it is
imported with the statement import A.B.C. In order for the system to
find the file, /some/local/path must be in Agda's search path9.
I have a file Logic.agda in the same directory as these notes, defining
logical conjunction and disjunction. To import it we say
import Logic using (_∧_; _∨_)
Note that you can use the same namespace control keywords as when
opening modules. Importing a module does not automatically open it
(like when you say import qualified in Haskell). You can either open
it separately with an open statement, or use the short form open import
Logic.
Splitting a program over several files will improve type checking per-
formance, since when you are making changes the type checker only has
to type check the files that are influenced by the changes.
2.8 Records
We have seen a record type already, namely the record type with no fields
which was used to model the true proposition. Now let us look at record
types with fields. A record type is declared much like a datatype where
the fields are indicated by the field keyword. For instance
record Point : Set where
field x : Nat
y : Nat
This declares a record type Point with two natural number fields x and
y. To construct an element of Point you write
mkPoint : Nat -> Nat -> Point
mkPoint a b = record{ x = a; y = b }
To allow projection of the fields from a record, each record type comes
with a module of the same name. This module is parameterised by an
element of the record type and contains projection functions for the fields.
In the point example we get a module
-- module Point (p : Point) where
-- x : Nat
-- y : Nat
9
The search path can be set from emacs by executing M-x customize-group agda2.
This module can be used as it is or instantiated to a particular record.
getX : Point -> Nat
getX = Point.x
At the moment you cannot pattern match on records, but this will
hopefully be possible in a later version of Agda.
It is possible to add your own functions to the module of a record by
including them in the record declaration after the fields.
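For example, in the following illustrative record the function diameter is declared after the field and becomes part of the record's module, so it can be used as CircleData.diameter c for a c : CircleData:

record CircleData : Set where
  field radius : Nat
  -- defined after the field, so it is added to the module CircleData
  diameter : Nat
  diameter = radius + radius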
2.9 Exercises
Instead of defining the sublist relation we can define the type of sub-
lists of a given list as follows:
infixr 30 _::_
data SubList {A : Set} : List A -> Set where
[] : SubList []
_::_ : forall x {xs} -> SubList xs -> SubList (x :: xs)
skip : forall {x xs} -> SubList xs -> SubList (x :: xs)
(b) Define a function to extract the list corresponding to a sublist.
forget : {A : Set}{xs : List A} -> SubList xs -> List A
forget s = {! !}
(d) Give an alternative definition of filter which satisfies the sublist prop-
erty by construction.
filter : {A : Set} -> (A -> Bool) -> (xs : List A) -> SubList xs
filter p xs = {! !}
3 Programming Techniques
In this section we will describe and exemplify a couple of programming
techniques which are made available in dependently typed languages:
views and universe constructions.
3.1 Views
As we have seen pattern matching in Agda can reveal information not
only about the term being matched but also about terms occurring in the
type of this term. For instance, matching a proof of x == y against the
refl constructor we (and the type checker) will learn that x and y are
the same.
We can exploit this, and design datatypes whose sole purpose is to
tell us something interesting about their indices. We call such a datatype a
view [5]. To use the view we define a view function, computing an element
of the view for arbitrary indices.
This section on views is defined in the file Views.lagda so here is the
top-level module declaration:
module Views where
Natural number parity Let us start with an example. We all know
that any natural number n can be written in the form 2k or 2k + 1
for some k. Here is a view datatype expressing that. We use the natural
numbers defined in the summer school library [7] module Data.Nat.
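A sketch of the datatype and the corresponding view function (relying on the library definitions of _+_ and _*_ so that the index equalities hold by computation):

data Parity : Nat -> Set where
  even : (k : Nat) -> Parity (k * 2)
  odd  : (k : Nat) -> Parity (1 + k * 2)

parity : (n : Nat) -> Parity n
parity zero = even zero
parity (suc n) with parity n
parity (suc .(k * 2))     | even k = odd k
parity (suc .(1 + k * 2)) | odd k  = even (suc k)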
In the suc n case we use the view recursively to find out the parity
of n. If n = k * 2 then suc n = 1 + k * 2 and if n = 1 + k * 2 then
suc n = suc k * 2.
In effect, this view gives us the ability to pattern match on a natu-
ral number with the patterns k * 2 and 1 + k * 2. Using this ability,
defining the function that divides a natural number by two is more or less
trivial:
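A sketch:

half : Nat -> Nat
half n with parity n
half .(k * 2)     | even k = k
half .(1 + k * 2) | odd k  = k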
Note that k is bound in the pattern for the view, not in the dotted
pattern for the natural number.
Finding an element in a list Let us turn our attention to lists. First
some imports: we will use the definitions of lists and booleans from the
summer school library [7].
open import Data.Function
open import Data.List
open import Data.Bool
Using the All datatype we could prove the second part of the correct-
ness of the filter function, namely that all the elements of the result
of filter satisfy the predicate: All (satisfies p) (filter p xs).
This is left as an exercise. Instead, let us define some interesting views on
lists.
Given a decidable predicate on the elements of a list, we can either
find an element in the list that satisfies the predicate, or else all elements
satisfy the negation of the predicate. Here is the corresponding view
datatype:
data Find {A : Set}(p : A -> Bool) : List A -> Set where
found : (xs : List A)(y : A) -> satisfies p y -> (ys : List A) ->
Find p (xs ++ y :: ys)
not-found : forall {xs} -> All (satisfies (not p)) xs ->
Find p xs
Indexing into a list In Sections 2.4 and 2.5 we saw two ways of
safely indexing into a list. In both cases the type system guaranteed that
the index didn't point outside the list. However, sometimes we have no
control over the value of the index and it might well be that it is pointing
outside the list. One solution in this case would be to wrap the result of
the lookup function in a maybe type, but maybe types don't really tell
you anything very interesting and we can do a lot better. First let us
define the type of proofs that an element x is in a list xs.
data _∈_ {A : Set}(x : A) : List A -> Set where
  hd : forall {xs} -> x ∈ x :: xs
  tl : forall {y xs} -> x ∈ xs -> x ∈ y :: xs
The first element of a list is a member of the list, and any element
of the tail of a list is also an element of the entire list. Given a proof
of x ∈ xs we can compute the index at which x occurs in xs simply by
counting the number of tls in the proof.
index : forall {A}{x : A}{xs} -> x ∈ xs -> Nat
index hd = zero
index (tl p) = suc (index p)
In the variable case we turn the proof into a natural number using the
index function.
Now we are ready to define the view of a raw term as either being the
erasure of a well-typed term or not. Again, we don't provide any justifi-
cation for giving a negative result. Since we are doing type inference the
type is not a parameter of the view but computed by the view function.
data Infer (Γ : Cxt) : Raw -> Set where
  ok : (τ : Type)(t : Term Γ τ) -> Infer Γ (erase t)
  bad : {e : Raw} -> Infer Γ e
The view function is the type inference function taking a raw term
and computing an element of the Infer view.
infer : (Γ : Cxt)(e : Raw) -> Infer Γ e
Let us walk through the three cases: variable, application, and lambda
abstraction.
In the variable case we need to take care of the fact that the raw
variable might be out of scope. We can use the lookup function _!_ we
defined above for that. When the variable is in scope the lookup function
provides us with the type of the variable and the proof that it is in scope.
infer Γ (e1 $ e2)
  with infer Γ e1
infer Γ (e1 $ e2)          | bad      = bad
infer Γ (.(erase t1) $ e2) | ok ı t1  = bad
infer Γ (.(erase t1) $ e2) | ok (σ ⇒ τ) t1
  with infer Γ e2
infer Γ (.(erase t1) $ e2)          | ok (σ ⇒ τ) t1 | bad = bad
infer Γ (.(erase t1) $ .(erase t2)) | ok (σ ⇒ τ) t1 | ok σ' t2
  with σ =?= σ'
infer Γ (.(erase t1) $ .(erase t2))
  | ok (σ ⇒ τ) t1 | ok .σ t2 | yes = ok τ (t1 $ t2)
infer Γ (.(erase t1) $ .(erase t2))
  | ok (σ ⇒ τ) t1 | ok σ' t2 | no  = bad
The application case is the bulkiest simply because there are a lot of
things we need to check: that the two terms are type correct, that the first
term has a function type and that the type of the second term matches
the argument type of the first term. This is all done by pattern matching
on recursive calls to the infer view and the type equality view.
Finally, the lambda case is very simple. If the body of the lambda is
type correct in the extended context, then the lambda is well-typed with
the corresponding function type.
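A sketch of that case:

infer Γ (lam σ e) with infer (σ :: Γ) e
infer Γ (lam σ .(erase t)) | ok τ t = ok (σ ⇒ τ) (lam σ t)
infer Γ (lam σ e)          | bad    = bad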
Without much effort we have defined a type checker for simply typed
λ-calculus that not only is guaranteed to compute well-typed terms, but
also guarantees that the erasure of the well-typed term is the term you
started with.
3.2 Universes
The second programming technique we will look at, which is not available
in non-dependently typed languages, is universe construction. First the
module header.
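As in the previous section, something like:

module Universes where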
infix 30 not_
infixr 25 _and_
postulate _div_ : Nat -> (m : Nat){p : isTrue (nonZero m)} -> Nat
three = 16 div 5
Here the proof obligation isTrue (nonZero 5) will reduce to True and
be solved automatically by the type checker. Note that if you tell the type
checker that you have defined the type of natural numbers, you are allowed
to use natural number literals like 16 and 5. This has been done in the
library.
infixr 50 _|+|_ _⊕_
infixr 60 _|x|_ _×_
Unfortunately, this definition does not pass the termination checker since
the recursive call to fold is passed to the higher order function map
and the termination checker cannot see that map isn't applying it to bad
things.
To make fold pass the termination checker we can fuse map and fold
into a single function mapFold F G φ x = map F (fold G φ) x defined
recursively over x. We need to keep two copies of the functor since fold
is always called on the same functor, whereas map is defined by taking its
functor argument apart.
mapFold : forall {X} F G -> ([ G ] X -> X) -> [ F ] (μ G) -> [ F ] X
mapFold |Id| G φ < x > = φ (mapFold G G φ x)
mapFold (|K| A) G φ c = c
mapFold (F1 |+| F2) G φ (inl x) = inl (mapFold F1 G φ x)
mapFold (F1 |+| F2) G φ (inr y) = inr (mapFold F2 G φ y)
mapFold (F1 |x| F2) G φ (x , y) = mapFold F1 G φ x , mapFold F2 G φ y
There is a lot more fun to be had here, but let us make do with a
couple of examples. Both natural numbers and lists are examples of least
fixed points of polynomial functors:
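For instance, natural numbers might be coded like this (a sketch, assuming the functor codes |K|, |Id| and _|+|_, the one-element type True, and the least fixed point operator μ introduced with the universe):

NatF : Functor
NatF = |K| True |+| |Id|

NAT : Set
NAT = μ NatF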
Z : NAT
Z = < inl _ >
Universes for overloading At the moment, Agda does not have a class
system like the one in Haskell. However, a limited form of overloading can
be achieved using universes. The idea is simple: if you know in advance at
which types you want to overload a function, you can construct a universe
for these types and define the overloaded function by pattern matching
on a code.
A simple example: suppose we want to overload equality for some of
our standard types. We start by defining our universe:
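One possible universe, covering booleans, natural numbers and lists (a sketch; the decoding function El maps each code to the type it stands for):

data Type : Set where
  bool : Type
  nat  : Type
  list : Type -> Type

El : Type -> Set
El bool     = Bool
El nat      = Nat
El (list a) = List (El a)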
infix 30 _==_
_==_ : {a : Type} -> El a -> El a -> Bool
example1 : isTrue (2 + 2 == 4)
example1 = _
3.3 Exercises
(b) Now use the view to compute the difference between two numbers
difference : Nat -> Nat -> Nat
difference n m = {! !}
(b) Define a type of ill-typed terms and change infer to return such a
term upon failure. Look to the definition of infer for clues to the
constructors of BadTerm.
data BadTerm (Γ : Cxt) : Set where
-- ...
infixr 30 _::_
data All {A : Set}(P : A -> Set) : List A -> Set where
[] : All P []
_::_ : forall {x xs} -> P x -> All P xs -> All P (x :: xs)
We proved that filter computes a sublist of its input. Now let's finish
the job.
(b) Below is the proof that all elements of filter p xs satisfy p. Doing
this without any auxiliary lemmas involves some rather subtle use of
with-abstraction.
Figure out what is going on by replaying the construction of the pro-
gram and looking at the goal and context in each step.
(c) Finally prove filter complete, by proving that all elements of the
original list satisfying the predicate are present in the result.
mutual
data Schema : Set where
tag : Tag -> List Child -> Schema
printChildren : {kids : List Child} -> All Element kids -> String
printChildren xs = {! !}
4 Compiling Agda programs
This section deals with the topic of getting Agda programs to interact
with the real world. Type checking Agda programs requires evaluating
arbitrary terms, and as long as all terms are pure and normalizing this is
not a problem, but what happens when we introduce side effects? Clearly,
we don't want side effects to happen at compile time. Another question is
what primitives the language should provide for constructing side-effecting
programs. In Agda, these problems are solved by allowing arbitrary
Haskell functions to be imported as axioms. At compile time, these im-
ported functions have no reduction behaviour, only at run time is the
Haskell function executed.
The first argument to COMPILED TYPE is the name of the Agda type
and the second is the corresponding Haskell type.
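For example, binding an Agda IO type to Haskell's IO might look like this (a sketch; in source code the pragma is written COMPILED_TYPE):

postulate IO : Set -> Set
{-# COMPILED_TYPE IO IO #-}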
postulate
putStrLn : String -> IO Unit
Just as for compiled types the first argument to COMPILED is the name
of the Agda function and the second argument is the Haskell code it should
compile to. The compiler checks that the given code has the Haskell type
corresponding to the type of the Agda function.
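For the putStrLn postulate above, the binding might look like this (a sketch):

{-# COMPILED putStrLn putStrLn #-}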
main : IO Unit
main = putStrLn "Hello world!"
To compile the program simply call the command-line tool with the
--compile (or -c) flag. The compiler will compile your Agda program
and any Agda modules it imports to Haskell modules and call the Haskell
compiler to generate an executable binary.
4.6 Exercises
Exercise 4.1. Turn the type checker for λ-calculus from Section 3.1 into a
complete program that can read a file containing a raw λ-term and print
its type if it is well-typed and an error message otherwise. To simplify
things you can write the parser in Haskell and import it into the Agda
program.
5 Further reading
More information on the Agda language and how to obtain the code for
these notes can be found on the Agda wiki [2]. If you have any Agda
related questions feel free to ask on the Agda mailing list [1].
My thesis [6] contains more of the technical and theoretical details
behind Agda, as well as some programming examples. To learn more
about dependently typed programming in Agda you can read The power
of Pi by Oury and Swierstra [8]. For dependently typed programming in
general try The view from the left by McBride and McKinna [5] and Why
dependent types matter by Altenkirch, McBride and McKinna [3].
References
1. The Agda mailing list, 2012. https://lists.chalmers.se/mailman/listinfo/agda.
2. The Agda wiki, 2012. http://www.cs.chalmers.se/~ulfn/Agda.
3. T. Altenkirch, C. McBride, and J. McKinna. Why dependent types matter.
Manuscript, available online, April 2005.
4. P. Martin-Löf. Intuitionistic Type Theory. Bibliopolis, Napoli, 1984.
5. C. McBride and J. McKinna. The view from the left. Journal of Functional Pro-
gramming, 14(1):69–111, January 2004.
6. U. Norell. Towards a practical programming language based on dependent type the-
ory. PhD thesis, Department of Computer Science and Engineering, Chalmers
University of Technology, SE-412 96 Göteborg, Sweden, September 2007.
7. U. Norell and J. Chapman. Dependently Typed Programming in Agda (source
code), 2012. http://www.cse.chalmers.se/~ulfn/darcs/AFP08/LectureNotes/.
8. N. Oury and W. Swierstra. The power of pi. In Proceedings of the 13th ACM
SIGPLAN international conference on Functional programming, ICFP '08, pages
39–50, New York, NY, USA, 2008. ACM.