Category Theory for Programmers
Scala Edition

Contains code snippets in Haskell and Scala

Bartosz Milewski

Version v1.1-rc1-1-gc920207
November 25, 2018
Contents

Preface

Part One
2.7 Challenges
4 Kleisli Categories
4.1 The Writer Category
4.2 Writer in Haskell
4.3 Kleisli Categories
4.4 Challenge
6.3 Sum Types
6.4 Algebra of Types
6.5 Challenges
7 Functors
7.1 Functors in Programming
7.1.1 The Maybe Functor
7.1.2 Equational Reasoning
7.1.3 Optional
7.1.4 Typeclasses
7.1.5 Functor in C++
7.1.6 The List Functor
7.1.7 The Reader Functor
7.2 Functors as Containers
7.3 Functor Composition
7.4 Challenges
8 Functoriality
8.1 Bifunctors
8.2 Product and Coproduct Bifunctors
8.3 Functorial Algebraic Data Types
8.4 Functors in C++
8.5 The Writer Functor
8.6 Covariant and Contravariant Functors
8.7 Profunctors
8.8 The Hom-Functor
8.9 Challenges
9.2 Currying
9.3 Exponentials
9.4 Cartesian Closed Categories
9.5 Exponentials and Algebraic Data Types
9.5.1 Zeroth Power
9.5.2 Powers of One
9.5.3 First Power
9.5.4 Exponentials of Sums
9.5.5 Exponentials of Exponentials
9.5.6 Exponentials over Products
9.6 Curry-Howard Isomorphism
9.7 Bibliography
12.4 Continuity
12.5 Challenges
17.1 Functors
17.2 Commuting Diagrams
17.3 Natural Transformations
17.4 Natural Isomorphisms
17.5 Hom-Sets
17.6 Hom-Set Isomorphisms
17.7 Asymmetry of Hom-Sets
17.8 Challenges
18 Adjunctions
18.1 Adjunction and Unit/Counit Pair
18.2 Adjunctions and Hom-Sets
18.3 Product from Adjunction
18.4 Exponential from Adjunction
18.5 Challenges
21.2.3 Read-Only State
21.2.4 Write-Only State
21.2.5 State
21.2.6 Exceptions
21.2.7 Continuations
21.2.8 Interactive Input
21.2.9 Interactive Output
21.3 Conclusion
23 Comonads
23.1 Programming with Comonads
23.2 The Product Comonad
23.3 Dissecting the Composition
23.4 The Stream Comonad
23.5 Comonad Categorically
23.6 The Store Comonad
23.7 Challenges
24 F-Algebras
24.1 Recursion
24.2 Category of F-Algebras
24.3 Natural Numbers
24.4 Catamorphisms
24.5 Folds
24.6 Coalgebras
24.7 Challenges
28.3 Enriched Category
28.4 Preorders
28.5 Metric Spaces
28.6 Enriched Functors
28.7 Self Enrichment
28.8 Relation to 𝟐-Categories
29 Topoi
29.1 Subobject Classifier
29.2 Topos
29.3 Topoi and Logic
29.4 Challenges

Appendices
Index
Acknowledgments
Colophon
A note from the editor

This is the Scala edition of the book: throughout the text, the original Haskell snippets are accompanied by Scala translations. For example, the Haskell definition of a Monoid typeclass:

class Monoid m where
  mappend :: m -> m -> m
  mempty :: m

corresponds to the Scala trait:

trait Monoid[M] {
  def combine(x: M, y: M): M
  def empty: M
}
In addition, some Scala snippets make use of the Kind Projector compiler plugin (https://github.com/non/kind-projector) to support nicer syntax for partially-applied types.
Preface
For some time now I’ve been floating the idea of writing a
book about category theory that would be targeted at pro-
grammers. Mind you, not computer scientists but program-
mers — engineers rather than scientists. I know this sounds
crazy and I am properly scared. I can’t deny that there is a
huge gap between science and engineering because I have
worked on both sides of the divide. But I’ve always felt a
very strong compulsion to explain things. I have tremen-
dous admiration for Richard Feynman who was the master
of simple explanations. I know I’m no Feynman, but I will
try my best. I’m starting by publishing this preface — which
is supposed to motivate the reader to learn category theory
— in hopes of starting a discussion and soliciting feedback.
My optimism is based on several observations. First, category the-
ory is a treasure trove of extremely useful programming ideas. Haskell
programmers have been tapping this resource for a long time, and the
ideas are slowly percolating into other languages, but this process is too
slow. We need to speed it up.
Second, there are many different kinds of math, and they appeal to
different audiences. You might be allergic to calculus or algebra, but it
doesn’t mean you won’t enjoy category theory. I would go as far as to
argue that category theory is the kind of math that is particularly well
suited for the minds of programmers. That’s because category theory
— rather than dealing with particulars — deals with structure. It deals
with the kind of structure that makes programs composable.
Composition is at the very root of category theory — it’s part of
the definition of the category itself. And I will argue strongly that com-
position is the essence of programming. We’ve been composing things
forever, long before some great engineer came up with the idea of a sub-
routine. Some time ago the principles of structured programming rev-
olutionized programming because they made blocks of code compos-
able. Then came object oriented programming, which is all about com-
posing objects. Functional programming is not only about composing
functions and algebraic data structures — it makes concurrency com-
posable — something that’s virtually impossible with other program-
ming paradigms.
Third, I have a secret weapon, a butcher’s knife, with which I will
butcher math to make it more palatable to programmers. When you’re a
professional mathematician, you have to be very careful to get all your
assumptions straight, qualify every statement properly, and construct
all your proofs rigorously. This makes mathematical papers and books
extremely hard to read for an outsider. I’m a physicist by training, and in
physics we made amazing advances using informal reasoning. Mathe-
maticians laughed at the Dirac delta function, which was made up on the
spot by the great physicist P. A. M. Dirac to solve some differential equa-
tions. They stopped laughing when they discovered a completely new
branch of calculus called distribution theory that formalized Dirac’s in-
sights.
Of course when using hand-waving arguments you run the risk of
saying something blatantly wrong, so I will try to make sure that there
is solid mathematical theory behind informal arguments in this book. I
do have a worn-out copy of Saunders Mac Lane’s Category Theory for
the Working Mathematician on my nightstand.
Since this is category theory for programmers I will illustrate all ma-
jor concepts using computer code. You are probably aware that func-
tional languages are closer to math than the more popular imperative
languages. They also offer more abstracting power. So a natural temp-
tation would be to say: You must learn Haskell before the bounty of cat-
egory theory becomes available to you. But that would imply that cate-
gory theory has no application outside of functional programming and
that’s simply not true. So I will provide a lot of C++ examples. Granted,
you’ll have to overcome some ugly syntax, the patterns might not stand
out from the background of verbosity, and you might be forced to do
some copy and paste in lieu of higher abstraction, but that’s just the lot
of a C++ programmer.
But you’re not off the hook as far as Haskell is concerned. You don’t
have to become a Haskell programmer, but you need it as a language
for sketching and documenting ideas to be implemented in C++. That’s
exactly how I got started with Haskell. I found its terse syntax and pow-
erful type system a great help in understanding and implementing C++
templates, data structures, and algorithms. But since I can’t expect the
readers to already know Haskell, I will introduce it slowly and explain
everything as I go.
If you’re an experienced programmer, you might be asking yourself:
I’ve been coding for so long without worrying about category theory
or functional methods, so what’s changed? Surely you can’t help but
notice that there’s been a steady stream of new functional features in-
vading imperative languages. Even Java, the bastion of object-oriented
programming, let the lambdas in. C++ has recently been evolving at a
frantic pace — a new standard every few years — trying to catch up with
the changing world. All this activity is in preparation for a disruptive
change or, as we physicists call it, a phase transition. If you keep heat-
ing water, it will eventually start boiling. We are now in the position of
a frog that must decide if it should continue swimming in increasingly
hot water, or start looking for some alternatives.
One of the forces that are driving the big change is the multicore
revolution. The prevailing programming paradigm, object oriented pro-
gramming, doesn’t buy you anything in the realm of concurrency and
parallelism, and instead encourages dangerous and buggy design. Data
hiding, the basic premise of object orientation, when combined with
sharing and mutation, becomes a recipe for data races. The idea of com-
bining a mutex with the data it protects is nice but, unfortunately, locks
don’t compose, and lock hiding makes deadlocks more likely and harder
to debug.
But even in the absence of concurrency, the growing complexity
of software systems is testing the limits of scalability of the imperative
paradigm. To put it simply, side effects are getting out of hand. Granted,
functions that have side effects are often convenient and easy to write.
Their effects can in principle be encoded in their names and in the com-
ments. A function called SetPassword or WriteFile is obviously mutat-
ing some state and generating side effects, and we are used to dealing
with that. It’s only when we start composing functions that have side
effects on top of other functions that have side effects, and so on, that
things start getting hairy. It’s not that side effects are inherently bad —
it’s the fact that they are hidden from view that makes them impossi-
ble to manage at larger scales. Side effects don’t scale, and imperative
programming is all about side effects.
Changes in hardware and the growing complexity of software are
forcing us to rethink the foundations of programming. Just like the
builders of Europe’s great gothic cathedrals we’ve been honing our craft
to the limits of material and structure. There is an unfinished gothic
cathedral in Beauvais (http://en.wikipedia.org/wiki/Beauvais_Cathedral), France, that stands witness to this deeply human
struggle with limitations. It was intended to beat all previous records of
height and lightness, but it suffered a series of collapses. Ad hoc mea-
sures like iron rods and wooden supports keep it from disintegrating,
but obviously a lot of things went wrong. From a modern perspective, it's a miracle that so many gothic structures were completed successfully.

[Figure: Ad hoc measures preventing the Beauvais cathedral from collapsing.]
Part One
1
Category: The Essence of Composition
A category is an embarrassingly simple concept. A category consists of objects and arrows that go between them. The essence of a category is composition — or, if you prefer, the essence of composition is a category.

[Figure: In a category, if there is an arrow going from 𝐴 to 𝐵 and an arrow going from 𝐵 to 𝐶, then there must also be a direct arrow from 𝐴 to 𝐶 that is their composition. This diagram is not a full category because it's missing identity morphisms (see later).]
1.1 Arrows as Functions

Arrows, also called morphisms, compose. Think of arrows as functions: a function 𝑓 takes an argument of type 𝐴 and returns a 𝐵, and another function 𝑔 takes a 𝐵 and returns a 𝐶. You can compose them by passing the result of 𝑓 to 𝑔. You have just
defined a new function that takes an 𝐴 and returns a 𝐶.
In math, such composition is denoted by a small circle between
functions: 𝑔 ∘ 𝑓 . Notice the right to left order of composition. For some
people this is confusing. You may be familiar with the pipe notation in
Unix, as in:

lsof | grep Chrome

or the chevron >> in F#, which both go from left to right. But in math-
ematics and in Haskell functions compose right to left. It helps if you
read 𝑔 ∘ 𝑓 as “g after f.”
Let’s make this even more explicit by writing some C code. We have
one function f that takes an argument of type A and returns a value of
type B:
B f(A a);
and another:

C g(B b);

Their composition is:
C g_after_f(A a)
{
return g(f(a));
}
Here are the corresponding declarations in Haskell and Scala:

f :: A -> B
val f: A => B
Similarly:
g :: B -> C
val g: B => C
Their composition is:

g . f
g compose f
Once you see how simple things are in Haskell, the inability to express
straightforward functional concepts in C++ is a little embarrassing. In
fact, Haskell will let you use Unicode characters so you can write com-
position as:
g ◦ f
So here’s the first Haskell lesson: Double colon means “has the type of…”
A function type is created by inserting an arrow between two types.
You compose two functions by inserting a period between them (or a
Unicode circle).
1.2 Properties of Composition

There are two extremely important properties that the composition in any category must satisfy. First, composition is associative: if you have three composable morphisms, you don't need parentheses to compose them. In (pseudo) Haskell:

f :: A -> B
g :: B -> C
h :: C -> D

h . (g . f) == (h . g) . f == h . g . f

and in (pseudo) Scala:

val f: A => B
val g: B => C
val h: C => D

h compose (g compose f) == (h compose g) compose f
Second, for every object 𝐴 there is an arrow that is a unit of composition: the identity arrow, which loops from the object back to itself. Being a unit of composition means that, when composed with any arrow that starts or ends at 𝐴, it gives back the same arrow. In C++ we could define the identity function as a template:

template<class T> T id(T x) { return x; }
Of course, in C++ nothing is that simple, because you have to take into
account not only what you’re passing but also how (that is, by value, by
reference, by const reference, by move, and so on).
In Haskell, the identity function is part of the standard library (called
Prelude). Here’s its declaration and definition:
id :: a -> a
id x = x
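For comparison, a Scala counterpart might look like this (a sketch only — Scala's standard library already provides an equivalent identity in Predef):

def identity[A](x: A): A = x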
This concludes our second Haskell lesson.
The identity conditions can be written (again, in pseudo-Haskell) as:
f . id == f
id . f == f
and in pseudo-Scala:

f compose identity[A] == f
(identity[B] _) compose f == f
You might be asking yourself the question: Why would anyone bother
with the identity function — a function that does nothing? Then again,
why do we bother with the number zero? Zero is a symbol for nothing.
Ancient Romans had a number system without a zero and they were
able to build excellent roads and aqueducts, some of which survive to
this day.
Neutral values like zero or id are extremely useful when working
with symbolic variables. That’s why Romans were not very good at al-
gebra, whereas the Arabs and the Persians, who were familiar with the
concept of zero, were. So the identity function becomes very handy as
an argument to, or a return from, a higher-order function. Higher order
functions are what make symbolic manipulation of functions possible.
They are the algebra of functions.
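As a small illustration (my own example, not from the text), here is identity handed to a higher-order function in Scala:

// flatMap expects a function; identity expresses "no transformation",
// so this simply flattens the nested list to List(1, 2, 3).
List(List(1, 2), List(3)).flatMap(identity)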
To summarize: A category consists of objects and arrows (morphisms).
Arrows can be composed, and the composition is associative. Every ob-
ject has an identity arrow that serves as a unit under composition.
1.3 Composition is the Essence of Programming

Functional programmers have a peculiar way of approaching problems. They start by asking very Zen-like questions. For instance, when designing an interactive program, they would ask: What is interaction?
When implementing Conway’s Game of Life, they would probably pon-
der about the meaning of life. In this spirit, I’m going to ask: What is
programming? At the most basic level, programming is about telling
the computer what to do. “Take the contents of memory address x and
add it to the contents of the register EAX.” But even when we program
in assembly, the instructions we give the computer are an expression of
something more meaningful. We are solving a non-trivial problem (if it
were trivial, we wouldn’t need the help of the computer). And how do
we solve problems? We decompose bigger problems into smaller prob-
lems. If the smaller problems are still too big, we decompose them fur-
ther, and so on. Finally, we write code that solves all the small problems.
And then comes the essence of programming: we compose those pieces
of code to create solutions to larger problems. Decomposition wouldn’t
make sense if we weren’t able to put the pieces back together.
This process of hierarchical decomposition and recomposition is not
imposed on us by computers. It reflects the limitations of the human
mind. Our brains can only deal with a small number of concepts at a
time. One of the most cited papers in psychology, The Magical Number Seven, Plus or Minus Two (http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two), postulated that we can only keep 7 ± 2
“chunks” of information in our minds. The details of our understand-
ing of the human short-term memory might be changing, but we know
for sure that it’s limited. The bottom line is that we are unable to deal
with the soup of objects or the spaghetti of code. We need structure not
because well-structured programs are pleasant to look at, but because
otherwise our brains can’t process them efficiently. We often describe
some piece of code as elegant or beautiful, but what we really mean is
that it’s easy to process by our limited human minds. Elegant code cre-
ates chunks that are just the right size and come in just the right number
for our mental digestive system to assimilate them.
So what are the right chunks for the composition of programs? Their
surface area has to increase slower than their volume. (I like this anal-
ogy because of the intuition that the surface area of a geometric object
grows with the square of its size — slower than the volume, which grows
with the cube of its size.) The surface area is the information we need
in order to compose chunks. The volume is the information we need in
order to implement them. The idea is that, once a chunk is implemented,
we can forget about the details of its implementation and concentrate
on how it interacts with other chunks. In object-oriented programming,
the surface is the class declaration of the object, or its abstract interface.
In functional programming, it’s the declaration of a function. (I’m sim-
plifying things a bit, but that’s the gist of it.)
Category theory is extreme in the sense that it actively discourages
us from looking inside the objects. An object in category theory is an
abstract nebulous entity. All you can ever know about it is how it relates
to other objects — how it connects with them using arrows. This is how
internet search engines rank web sites by analyzing incoming and out-
going links (except when they cheat). In object-oriented programming,
an idealized object is only visible through its abstract interface (pure
surface, no volume), with methods playing the role of arrows. The mo-
ment you have to dig into the implementation of the object in order to
understand how to compose it with other objects, you’ve lost the ad-
vantages of your programming paradigm.
1.4 Challenges
1. Implement, as best as you can, the identity function in your fa-
vorite language (or the second favorite, if your favorite language
happens to be Haskell).
2. Implement the composition function in your favorite language. It
takes two functions as arguments and returns a function that is
their composition.
3. Write a program that tries to test that your composition function
respects identity.
4. Is the world-wide web a category in any sense? Are links mor-
phisms?
5. Is Facebook a category, with people as objects and friendships as
morphisms?
6. When is a directed graph a category?
2
Types and Functions
The category of types and functions plays an important role in programming, so let's talk about what types are and why we need them.

2.1 Who Needs Types?

Imagine millions of monkeys at computer keyboards happily hitting random keys, producing programs, compiling, and running them. With machine language, any combination of bytes produced by monkeys would be accepted and run. But with higher-level languages, we appreciate the fact that a compiler is able to detect lexical and grammatical errors. Lots of monkeys will go without bananas, but the remaining programs will have a better chance of being useful. Type checking provides yet another barrier against nonsensical programs. Moreover,
whereas in a dynamically typed language, type mismatches would be
discovered at runtime, in strongly typed statically checked languages
type mismatches are discovered at compile time, eliminating lots of in-
correct programs before they have a chance to run.
So the question is, do we want to make monkeys happy, or do we
want to produce correct programs?
The usual goal in the typing monkeys thought experiment is the pro-
duction of the complete works of Shakespeare. Having a spell checker
and a grammar checker in the loop would drastically increase the odds.
The analog of a type checker would go even further by making sure
that, once Romeo is declared a human being, he doesn’t sprout leaves
or trap photons in his powerful gravitational field.
13
the source object of the next arrow. In programming we pass the re-
sults of one function to another. The program will not work if the tar-
get function is not able to correctly interpret the data produced by the
source function. The two ends must fit for the composition to work. The
stronger the type system of the language, the better this match can be
described and mechanically verified.
The only serious argument I hear against strong static type checking
is that it might eliminate some programs that are semantically correct.
In practice, this happens extremely rarely and, in any case, every lan-
guage provides some kind of a backdoor to bypass the type system when
that’s really necessary. Even Haskell has unsafeCoerce. But such de-
vices should be used judiciously. Franz Kafka’s character, Gregor Samsa,
breaks the type system when he metamorphoses into a giant bug, and
we all know how it ends.
Another argument I hear a lot is that dealing with types imposes too
much burden on the programmer. I could sympathize with this senti-
ment after having to write a few declarations of iterators in C++ myself,
except that there is a technology called type inference that lets the com-
piler deduce most of the types from the context in which they are used.
In C++, you can now declare a variable auto and let the compiler figure
out its type.
In Haskell, except on rare occasions, type annotations are purely
optional. Programmers tend to use them anyway, because they can tell a
lot about the semantics of code, and they make compilation errors easier
to understand. It’s a common practice in Haskell to start a project by
designing the types. Later, type annotations drive the implementation
and become compiler-enforced comments.
Strong static typing is often used as an excuse for not testing the
code. You may sometimes hear Haskell programmers saying, “If it com-
piles, it must be correct.” Of course, there is no guarantee that a type-
correct program is correct in the sense of producing the right output.
The result of this cavalier attitude is that in several studies Haskell didn’t
come as strongly ahead of the pack in code quality as one would expect.
It seems that, in the commercial setting, the pressure to fix bugs is ap-
plied only up to a certain quality level, which has everything to do with
the economics of software development and the tolerance of the end
user, and very little to do with the programming language or method-
ology. A better criterion would be to measure how many projects fall
behind schedule or are delivered with drastically reduced functionality.
As for the argument that unit testing can replace strong typing,
consider the common refactoring practice in strongly typed languages:
changing the type of an argument of a particular function. In a strongly
typed language, it’s enough to modify the declaration of that function
and then fix all the build breaks. In a weakly typed language, the fact
that a function now expects different data cannot be propagated to call
sites. Unit testing may catch some of the mismatches, but testing is al-
most always a probabilistic rather than a deterministic process. Testing
is a poor substitute for proof.
2.3 What Are Types?

The simplest intuition for types is that they are sets of values. For instance, we can declare x to be an Integer:

x :: Integer
val x: BigInt

and x then denotes an element of the set of integers.
There are, however, calculations that involve recursion, and those might never terminate. We
can’t just ban non-terminating functions from Haskell because distin-
guishing between terminating and non-terminating functions is unde-
cidable — the famous halting problem. That’s why computer scientists
came up with a brilliant idea, or a major hack, depending on your point
of view, to extend every type by one more special value called the bot-
tom and denoted by _|_, or Unicode ⊥. This “value” corresponds to a
non-terminating computation. So a function declared as:

f :: Bool -> Bool
may return True, False, or _|_; the latter meaning that it would never
terminate.
Interestingly, once you accept the bottom as part of the type system,
it is convenient to treat every runtime error as a bottom, and even allow
functions to return the bottom explicitly. The latter is usually done using
the expression undefined, as in:
f :: Bool -> Bool
f = undefined
(The paper Fast and Loose Reasoning is Morally Correct provides justification for ignoring bottoms in most contexts.)
There are formal tools for describing the semantics of a language
but, because of their complexity, they are mostly used with simplified
academic languages, not real-life programming behemoths. One such
tool called operational semantics describes the mechanics of program
execution. It defines a formalized idealized interpreter. The semantics of
industrial languages, such as C++, is usually described using informal
operational reasoning, often in terms of an “abstract machine.”
The problem is that it’s very hard to prove things about programs
using operational semantics. To show a property of a program you es-
sentially have to “run it” through the idealized interpreter.
It doesn’t matter that programmers never perform formal proofs of
correctness. We always “think” that we write correct programs. Nobody
sits at the keyboard saying, “Oh, I’ll just throw a few lines of code and
see what happens.” We think that the code we write will perform certain
actions that will produce desired results. We are usually quite surprised
when it doesn’t. That means we do reason about programs we write, and
we usually do it by running an interpreter in our heads. It’s just really
hard to keep track of all the variables. Computers are good at running
programs — humans are not! If we were, we wouldn’t need computers.
But there is an alternative. It’s called denotational semantics and it’s
based on math. In denotational semantics every programming construct
is given its mathematical interpretation. With that, if you want to prove
a property of a program, you just prove a mathematical theorem. You
might think that theorem proving is hard, but the fact is that we humans
have been building up mathematical methods for thousands of years, so
there is a wealth of accumulated knowledge to tap into. Also, as com-
pared to the kind of theorems that professional mathematicians prove,
the problems that we encounter in programming are usually quite sim-
ple, if not trivial.
Consider the definition of a factorial function in Haskell, which is a language quite amenable to denotational semantics:

fact n = product [1..n]

The expression [1..n] is a list of integers from 1 to n, and product multiplies all the elements of a list. Compare that with the C implementation:
int fact(int n) {
int i;
int result = 1;
for (i = 2; i <= n; ++i)
result *= i;
return result;
}
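For comparison, here is a Scala version in the same spirit as the Haskell one-liner (a sketch; it uses the standard library's product on a range):

def fact(n: Int): Int = (1 to n).product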
For a long time it seemed that constructs with side effects, such as input/output, had no good denotational treatment. The breakthrough came from category theory: Eugenio Moggi discovered that computational effects can be mapped to monads. This turned out to be an important observation that not only gave denotational semantics a new lease on life and made pure functional
programs more usable, but also shed new light on traditional program-
ming. I’ll talk about monads later, when we develop more categorical
tools.
One of the important advantages of having a mathematical model
for programming is that it’s possible to perform formal proofs of cor-
rectness of software. This might not seem so important when you’re
writing consumer software, but there are areas of programming where
the price of failure may be exorbitant, or where human life is at stake.
But even when writing web applications for the health system, you may
appreciate the thought that functions and algorithms from the Haskell
standard library come with proofs of correctness.
In programming languages, functions that always produce the same
result given the same input and have no side effects are called pure func-
tions. In a pure functional language like Haskell all functions are pure.
Because of that, it’s easier to give these languages denotational seman-
tics and model them using category theory. As for other languages, it’s
always possible to restrict yourself to a pure subset, or reason about side
effects separately. Later we’ll see how monads let us model all kinds of
effects using only pure functions. So we really don’t lose anything by
restricting ourselves to mathematical functions.
2.6 Examples of Types

Once you realize that types are sets, you can think of some rather exotic examples. For instance, what's the type corresponding to an empty set? In Haskell it's called Void — a type that's not inhabited by any values. You can define a function taking Void, but you can never call it. Curiously, its return type is completely unconstrained:

absurd :: Void -> a

(Remember, a is a type variable that can stand for any type.) The name is not coincidental. There is a deeper interpretation of types and functions
in terms of logic called the Curry-Howard isomorphism. The type Void
represents falsity, and the type of the function absurd corresponds to
the statement that from falsity follows anything, as in the Latin adage
“ex falso sequitur quodlibet.”
Next is the type that corresponds to a singleton set. It’s a type that
has only one possible value. This value just “is.” You might not immedi-
ately recognise it as such, but that is the C++ void. Think of functions
from and to this type. A function from void can always be called. If it’s a
pure function, it will always return the same result. Here’s an example
of such a function:

int f44() { return 44; }
You might think of this function as taking “nothing”, but as we’ve just
seen, a function that takes “nothing” can never be called because there is
no value representing “nothing.” So what does this function take? Con-
ceptually, it takes a dummy value of which there is only one instance
ever, so we don’t have to mention it explicitly. In Haskell, however,
there is a symbol for this value: an empty pair of parentheses, (). So,
by a funny coincidence (or is it a coincidence?), the call to a function of
void looks the same in C++ and in Haskell. Also, because of Haskell's
love of terseness, the same symbol () is used for the type, the construc-
tor, and the only value corresponding to a singleton set. So here’s this
function in Haskell:

f44 :: () -> Integer
f44 () = 44
The first line declares that f44 takes the type (), pronounced “unit,” into
the type Integer. The second line defines f44 by pattern matching the
only constructor for unit, namely (), and producing the number 44. You
call this function by providing the unit value ():
f44 ()
Notice that every function of unit is equivalent to picking a single element from the function's target type (here, the Integer 44). There are also functions going the other way — from any type to the unit type — for example:

fInt :: Integer -> ()
fInt x = ()

You give it any integer, and it gives you back a unit. In the spirit of
terseness, Haskell lets you use the wildcard pattern, the underscore, for
an argument that is discarded. This way you don’t have to invent a name
for it. So the above can be rewritten as:

fInt :: Integer -> ()
fInt _ = ()
Notice that the implementation of this function not only doesn’t depend
on the value passed to it, but it doesn’t even depend on the type of the
argument.
Functions that can be implemented with the same formula for any
type are called parametrically polymorphic. You can implement a whole
family of such functions with one equation using a type parameter in-
stead of a concrete type. What should we call a polymorphic function
from any type to unit type? Of course we’ll call it unit:
unit :: a -> ()
unit _ = ()
template<class T>
void unit(T) {}
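A Scala counterpart might be (a sketch; Scala's Unit plays the role of Haskell's ()):

def unit[A](x: A): Unit = ()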
Next in the typology of types is a two-element set. In C++ it's called bool, and in Haskell, predictably, Bool. The difference is that in C++ bool is a built-in type, whereas in Haskell it can be defined as follows:

data Bool = True | False
(The way to read this definition is that Bool is either True or False.) In
principle, one should also be able to define a Boolean type in C++ as an
enumeration:
enum bool {
true,
false
};
but C++ enum is secretly an integer. The C++11 “enum class” could have
been used instead, but then you would have to qualify its values with
the class name, as in bool::true and bool::false, not to mention having
to include the appropriate header in every file that uses it.
Pure functions from Bool just pick two values from the target type,
one corresponding to True and another to False.
Functions to Bool are called predicates. For instance, the Haskell li-
brary Data.Char is full of predicates like isAlpha or isDigit. In C++ there
is a similar library that defines, among others, isalpha and isdigit, but
these return an int rather than a Boolean. The actual predicates are de-
fined in std::ctype and have the form ctype::is(alpha, c),
ctype::is(digit, c), etc.
2.7 Challenges
1. Define a higher-order function (or a function object) memoize in
your favorite language. This function takes a pure function f as
an argument and returns a function that behaves almost exactly
like f, except that it only calls the original function once for every
argument, stores the result internally, and subsequently returns
this stored result every time it’s called with the same argument.
You can tell the memoized function from the original by watch-
ing its performance. For instance, try to memoize a function that
takes a long time to evaluate. You’ll have to wait for the result
the first time you call it, but on subsequent calls, with the same
argument, you should get the result immediately.
2. Try to memoize a function from your standard library that you
normally use to produce random numbers. Does it work?
3. Most random number generators can be initialized with a seed.
Implement a function that takes a seed, calls the random number
generator with that seed, and returns the result. Memoize that
function. Does it work?
4. Which of these C++ functions are pure? Try to memoize them
and observe what happens when you call them multiple times:
memoized and not.
(a) The factorial function from the example in the text.
(b) std::getchar()
(c) bool f() {
std::cout << "Hello!" << std::endl;
return true;
}
(d) int f(int x) {
static int y = 0;
y += x;
return y;
}
5. How many different functions are there from Bool to Bool? Can
you implement them all?
6. Draw a picture of a category whose only objects are the types
Void, () (unit), and Bool; with arrows corresponding to all possible
functions between these types. Label the arrows with the names
of the functions.
3
Categories Great and Small

You can get real appreciation for categories by studying a variety of examples. Categories come in all shapes and sizes and often pop up in unexpected places. We'll start with something really simple.
3.1 No Objects
The most trivial category is one with zero objects and, consequently,
zero morphisms. It’s a very sad category by itself, but it may be impor-
tant in the context of other categories, for instance, in the category of
all categories (yes, there is one). If you think that an empty set makes
sense, then why not an empty category?
3.2 Simple Graphs
You can build categories just by connecting objects with arrows. You can
imagine starting with any directed graph and making it into a category
by simply adding more arrows. First, add an identity arrow at each node.
Then, for any two arrows such that the end of one coincides with the
beginning of the other (in other words, any two composable arrows),
add a new arrow to serve as their composition. Every time you add a
new arrow, you have to also consider its composition with any other
arrow (except for the identity arrows) and itself. You usually end up
with infinitely many arrows, but that’s okay.
Another way of looking at this process is that you’re creating a cat-
egory, which has an object for every node in the graph, and all possible
chains of composable graph edges as morphisms. (You may even con-
sider identity morphisms as special cases of chains of length zero.)
Such a category is called a free category generated by a given graph.
It’s an example of a free construction, a process of completing a given
structure by extending it with a minimum number of items to satisfy its
laws (here, the laws of a category). We’ll see more examples of it in the
future.
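To make the construction concrete, here is a small Scala sketch (my own encoding, with made-up names): a morphism of the free category is a chain of composable edges, the identity is the empty chain, and composition is concatenation of chains — which makes associativity automatic:

case class Edge(from: String, to: String)

// A morphism: a (possibly empty) chain of composable edges.
case class Path(from: String, to: String, edges: List[Edge])

def id(node: String): Path = Path(node, node, Nil)

// g after f: concatenate the chains, provided the ends match.
def compose(g: Path, f: Path): Path = {
  require(f.to == g.from, "morphisms are not composable")
  Path(f.from, g.to, f.edges ++ g.edges)
}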
3.3 Orders
And now for something completely different! A category where a mor-
phism is a relation between objects: the relation of being less than or
equal. Let’s check if it indeed is a category. Do we have identity mor-
phisms? Every object is less than or equal to itself: check! Do we have
composition? If 𝑎 ⩽ 𝑏 and 𝑏 ⩽ 𝑐 then 𝑎 ⩽ 𝑐: check! Is composition as-
sociative? Check! A set with a relation like this is called a preorder, so a
preorder is indeed a category.
You can also have a stronger relation, that satisfies an additional
condition that, if 𝑎 ⩽ 𝑏 and 𝑏 ⩽ 𝑎 then 𝑎 must be the same as 𝑏. That’s
called a partial order.
Finally, you can impose the condition that any two objects are in a
relation with each other, one way or another; and that gives you a linear
order or total order.
Let’s characterize these ordered sets as categories. A preorder is a
category where there is at most one morphism going from any object 𝑎
to any object 𝑏. Another name for such a category is “thin.” A preorder
is a thin category.
A set of morphisms from object 𝑎 to object 𝑏 in a category 𝐂 is called
a hom-set and is written as 𝐂(𝑎, 𝑏) (or, sometimes, Hom𝐂 (𝑎, 𝑏)). So ev-
ery hom-set in a preorder is either empty or a singleton. That includes
the hom-set 𝐂(𝑎, 𝑎), the set of morphisms from 𝑎 to 𝑎, which must be a
singleton, containing only the identity, in any preorder. You may, how-
ever, have cycles in a preorder. Cycles are forbidden in a partial order.
It’s very important to be able to recognize preorders, partial orders,
and total orders because of sorting. Sorting algorithms, such as quick-
sort, bubble sort, merge sort, etc., can only work correctly on total or-
ders. Partial orders can be sorted using topological sort.
3.4 Monoid as Set

Monoid is an embarrassingly simple but amazingly powerful concept. It's the concept behind basic arithmetic: both addition and multiplication form monoids. Monoids are ubiquitous in programming. They show up as strings, lists, foldable data structures, futures in concurrent
programming, events in functional reactive programming, and so on.
Traditionally, a monoid is defined as a set with a binary operation.
All that’s required from this operation is that it’s associative, and that
there is one special element that behaves like a unit with respect to it.
For instance, natural numbers with zero form a monoid under ad-
dition. Associativity means that:
(𝑎 + 𝑏) + 𝑐 = 𝑎 + (𝑏 + 𝑐)

(In other words, we can skip parentheses when adding numbers.) The neutral element is zero, because:

0 + 𝑎 = 𝑎

and

𝑎 + 0 = 𝑎
The second equation is redundant, because addition is commutative (𝑎+
𝑏 = 𝑏 + 𝑎), but commutativity is not part of the definition of a monoid.
For instance, string concatenation is not commutative and yet it forms
a monoid. The neutral element for string concatenation, by the way, is
an empty string, which can be attached to either side of a string without
changing it.
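As a quick sanity check (mine, not from the text), the monoid laws for string concatenation can be verified directly in Scala:

assert((("a" + "b") + "c") == ("a" + ("b" + "c"))) // associativity
assert(("" + "abc") == "abc")                      // left unit
assert(("abc" + "") == "abc")                      // right unit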
In Haskell we can define a type class for monoids — a type for which
there is a neutral element called mempty and a binary operation called
mappend:
class Monoid m where
  mempty :: m
  mappend :: m -> m -> m
trait Monoid[M] {
def mempty: M
def mappend(m1: M, m2: M): M
}
As an example, let's declare String to be a monoid by providing the implementations of mempty and mappend:

instance Monoid String where
  mempty = ""
  mappend = (++)
object Monoid {
  implicit def stringMonoid: Monoid[String] = new Monoid[String] {
    def mempty: String = ""
    def mappend(m1: String, m2: String): String = m1 + m2
  }
}
Notice that the Haskell instance defines mappend in point-free notation:

mappend = (++)

It is equivalent to the more verbose, point-wise definition:

mappend s1 s2 = (++) s1 s2
In C++ we can express a similar idea using the (proposed) syntax for concepts:

template<class T>
T mempty = delete;
template<class T>
T mappend(T, T) = delete;
template<class M>
concept bool Monoid = requires (M m) {
{ mempty<M> } -> M;
{ mappend(m, m) } -> M;
};
The keyword delete means that there is no default value defined: It
will have to be specified on a case-by-case basis. Similarly, there is no
default for mappend.
The concept Monoid is a predicate (hence the bool type) that tests
whether there exist appropriate definitions of mempty and mappend for a
given type M.
An instantiation of the Monoid concept can be accomplished by pro-
viding appropriate specializations and overloads:
template<>
std::string mempty<std::string> = {""};

std::string mappend(std::string s1, std::string s2) {
    return s1 + s2;
}
3.5 Monoid as Category

That was the "familiar" definition of the monoid in terms of elements of a set. But in category theory we try to get away from sets and their elements, and instead talk about objects and morphisms. So let's change perspective and think of the application of the binary operator as "shifting" the set. For instance, adding 5 shifts every natural number by 5. In general, for any number 𝑛 there is the function of adding 𝑛 — the "adder" of 𝑛. The composition of the adder of 5 with the adder of 7 is the adder of 12: the composition of adders can be made equivalent to the rules of addition. That's
good too: we can replace addition with function composition.
But wait, there’s more: There is also the adder for the neutral ele-
ment, zero. Adding zero doesn’t move things around, so it’s the identity
function in the set of natural numbers.
Instead of giving you the traditional rules of addition, I could as well
give you the rules of composing adders, without any loss of informa-
tion. Notice that the composition of adders is associative, because the
composition of functions is associative; and we have the zero adder cor-
responding to the identity function.
An astute reader might have noticed that the mapping from integers
to adders follows from the second interpretation of the type signature
of mappend as m -> (m -> m). It tells us that mappend maps an element of
a monoid set to a function acting on that set.
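Here is a minimal Scala sketch of that mapping (the names are mine): each number becomes an adder, composing adders mirrors adding numbers, and the adder of zero behaves like the identity function:

def adder(n: Int): Int => Int = m => m + n

// Composing the adder of 2 with the adder of 3 yields the adder of 5.
val add5 = adder(2) compose adder(3)
assert(add5(10) == adder(5)(10))
// The adder of zero doesn't move anything: it's the identity.
assert(adder(0)(42) == identity(42))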
Now I want you to forget that you are
dealing with the set of natural numbers
and just think of it as a single object, a blob
with a bunch of morphisms — the adders. A
monoid is a single object category. In fact
the name monoid comes from Greek mono,
which means single. Every monoid can be
described as a single object category with
a set of morphisms that follow appropriate
rules of composition.
String concatenation is an interesting
case, because we have a choice of defining right appenders and left ap-
penders (or prependers, if you will). The composition tables of the two
models are a mirror reverse of each other. You can easily convince yourself of this equivalence.

[Figure: Monoid hom-set seen as morphisms and as points in a set.]
In general, morphisms between two objects don't have to form a set — there are categories where those collections are larger than sets. A category in which morphisms between any two ob-
jects form a set is called locally small. As promised, I will be mostly
ignoring such subtleties, but I thought I should mention them for the
record.
A lot of interesting phenomena in category theory have their root
in the fact that elements of a hom-set can be seen both as morphisms,
which follow the rules of composition, and as points in a set. Here, com-
position of morphisms in 𝐌 translates into monoidal product in the set
𝐌(𝑚, 𝑚).
3.6 Challenges
1. Generate a free category from:
(a) A graph with one node and no edges
(b) A graph with one node and one (directed) edge (hint: this
edge can be composed with itself)
(c) A graph with two nodes and a single arrow between them
(d) A graph with a single node and 26 arrows marked with the
letters of the alphabet: a, b, c … z.
2. What kind of order is this?
(a) A set of sets with the inclusion relation: 𝐴 is included in 𝐵
if every element of 𝐴 is also an element of 𝐵.
(b) C++ types with the following subtyping relation: T1 is a sub-
type of T2 if a pointer to T1 can be passed to a function that
expects a pointer to T2 without triggering a compilation er-
ror.
3. Considering that Bool is a set of two values True and False, show
that it forms two (set-theoretical) monoids with respect to, re-
spectively, operator && (AND) and || (OR).
4. Represent the Bool monoid with the AND operator as a category:
List the morphisms and their rules of composition.
5. Represent addition modulo 3 as a monoid category.
4
Kleisli Categories

We've seen how to model types and pure functions as a category. There is also a way to model side effects, or non-pure functions, in category theory. Let's look at one example: functions that log or trace their execution — something that, in an imperative language, is likely to be implemented by mutating some global state, as in:

string logger;
bool negate(bool b) {
logger += "Not so! ";
return !b;
}
You know that this is not a pure function, because its memoized version
would fail to produce a log. This function has side effects.
In modern programming, we try to stay away from global muta-
ble state as much as possible — if only because of the complications of
concurrency. And you would never put code like this in a library.
Fortunately for us, it’s possible to make this function pure. You just
have to pass the log explicitly, in and out. Let’s do that by adding a
string argument, and pairing regular output with a string that contains
the updated log:

pair<bool, string> negate(bool b, string logger) {
    return make_pair(!b, logger + "Not so! ");
}
This function is pure: it has no side effects, it returns the same pair every
time it’s called with the same arguments, and it can be memoized if
necessary. However, considering the cumulative nature of the log, you’d
have to memoize all possible histories that can lead to a given call. There
would be a separate memo entry for:
negate(true, "It was the day of")

and

negate(true, "It was the night of")
and so on.
It’s also not a very good interface for a library function. The callers
are free to ignore the string in the return type, so that’s not a huge
burden; but they are forced to pass a string as input, which might be
inconvenient.
Is there a way to do the same thing less intrusively? Is there a way
to separate concerns? In this simple example, the main purpose of the
function negate is to turn one Boolean into another. The logging is sec-
ondary. Granted, the message that is logged is specific to the function,
but the task of aggregating the messages into one continuous log is a
separate concern. We still want the function to produce a string, but
we’d like to unburden it from producing a log. So here’s the compro-
mise solution:

pair<bool, string> negate(bool b) {
    return make_pair(!b, "Not so! ");
}
The idea is that the log will be aggregated between function calls.
To see how this can be done, let’s switch to a slightly more realistic
example. We have one function from string to string that turns lower
case characters to upper case:
string toUpper(string s) {
string result;
int (*toupperp)(int) = &toupper; // toupper is overloaded
transform(begin(s), end(s), back_inserter(result), toupperp);
return result;
}

and another that splits a string into a vector of strings, breaking it on whitespace boundaries:

vector<string> toWords(string s) {
return words(s);
}
The actual splitting is done in the auxiliary function words:

vector<string> words(string s) {
vector<string> result{""};
for (auto i = begin(s); i != end(s); ++i)
{
if (isspace(*i))
result.push_back("");
else
result.back() += *i;
}
return result;
}
We want to modify the functions toUpper and toWords so that they piggyback a message string on top of their regular return values. We will "embellish" the return values of these functions in a generic way, by defining a template Writer that encapsulates a pair whose first component is a value of arbitrary type A and whose second component is a string:

template<class A>
using Writer = pair<A, string>;

Here are the embellished functions:

Writer<string> toUpper(string s) {
string result;
int (*toupperp)(int) = &toupper;
transform(begin(s), end(s), back_inserter(result), toupperp);
return make_pair(result, "toUpper ");
}
Writer<vector<string>> toWords(string s) {
return make_pair(words(s), "toWords ");
}

We want to compose these two functions into another embellished function that uppercases a string and splits it into words, all the while producing a log of those actions:

Writer<vector<string>> process(string s) {
auto p1 = toUpper(s);
auto p2 = toWords(p1.first);
return make_pair(p2.first, p1.second + p2.second);
}

We have accomplished our goal: the aggregation of the log is no longer the concern of the individual functions. They produce their own messages, which are then, externally, concatenated into a larger log. Now imagine a whole program written in this style. It's a nightmare of repetitive, error-prone code. But we are programmers — we know how to deal with repetitive code: we abstract it! This is, however, not your run of the mill abstraction; we have to abstract function composition itself.
4.1 The Writer Category
The idea of embellishing the return types of a bunch of functions in
order to piggyback some additional functionality turns out to be very
fruitful. We’ll see many more examples of it. The starting point is our
regular category of types and functions. We’ll leave the types as objects,
but redefine our morphisms to be the embellished functions.
For instance, suppose that we want to embellish the function isEven
that goes from int to bool. We turn it into a morphism that is rep-
resented by an embellished function. The important point is that this
morphism is still considered an arrow between the objects int and bool,
even though the embellished function returns a pair:
pair<bool, string> isEven(int n) {
    return make_pair(n % 2 == 0, "isEven ");
}

By the laws of a category, we should be able to compose this morphism with another morphism that goes from the object bool to whatever — in particular, with our earlier embellished negate. Obviously, we cannot compose these two embellished functions the way we compose regular functions. Their composition has to do a little more work: pass the value along and concatenate the logs:

pair<bool, string> isOdd(int n) {
    pair<bool, string> p1 = isEven(n);
    pair<bool, string> p2 = negate(p1.first);
    return make_pair(p2.first, p1.second + p2.second);
}
So here’s the recipe for the composition of two morphisms in this new
category we are constructing:

1. Execute the embellished function corresponding to the first morphism.
2. Extract the first component of the result pair and pass it to the embellished function corresponding to the second morphism.
3. Concatenate the second component (the string) of the first result and the second component (the string) of the second result.
4. Return a new pair combining the first component of the final result with the concatenated string.

We can abstract this composition as a higher-order function in C++ — a template parameterized by three types, corresponding to three objects in our category, that takes two composable embellished functions and returns a third:

template<class A, class B, class C>
function<Writer<C>(A)> compose(function<Writer<B>(A)> m1,
                               function<Writer<C>(B)> m2)
{
    return [m1, m2](A x) {
        auto p1 = m1(x);
        auto p2 = m2(p1.first);
        return make_pair(p2.first, p1.second + p2.second);
    };
}
Now we can go back to our earlier example and implement the compo-
sition of toUpper and toWords using this new template:
Writer<vector<string>> process(string s) {
    return compose<string, string, vector<string>>(toUpper, toWords)(s);
}
There is still a lot of noise with the passing of types to the compose tem-
plate. This can be avoided as long as you have a C++14-compliant com-
piler that supports generalized lambda functions with return type de-
duction (credit for this code goes to Eric Niebler):

auto const compose = [](auto m1, auto m2) {
    return [m1, m2](auto x) {
        auto p1 = m1(x);
        auto p2 = m2(p1.first);
        return make_pair(p2.first, p1.second + p2.second);
    };
};
Writer<vector<string>> process(string s) {
return compose(toUpper, toWords)(s);
}
But we are not finished yet. We have defined composition in our new
category, but what are the identity morphisms? These are not our reg-
ular identity functions! They have to be morphisms from type A back
to type A, which means they are embellished functions of the form:
Writer<A> identity(A);
They have to behave like units with respect to composition. If you look
at our definition of composition, you’ll see that an identity morphism
should pass its argument without change, and only contribute an empty
string to the log:

template<class A>
Writer<A> identity(A x) {
    return make_pair(x, "");
}
You can easily convince yourself that the category we have just defined
is indeed a legitimate category. In particular, our composition is trivially
associative. If you follow what’s happening with the first component of
each pair, it’s just a regular function composition, which is associative.
The second components are being concatenated, and concatenation is
also associative.
An astute reader may notice that it would be easy to generalize this
construction to any monoid, not just the string monoid. We would use
mappend inside compose and mempty inside identity (in place of + and
""). There really is no reason to limit ourselves to logging just strings.
A good library writer should be able to identify the bare minimum of
constraints that make the library work — here the logging library’s only
requirement is that the log have monoidal properties.
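Here is what that generalization might look like in Scala — a sketch that assumes the Monoid trait shown earlier, with mempty and mappend:

def compose[A, B, C, L](m1: A => (B, L), m2: B => (C, L))
                       (implicit mon: Monoid[L]): A => (C, L) =
  a => {
    val (b, log1) = m1(a)
    val (c, log2) = m2(b)
    (c, mon.mappend(log1, log2))
  }

def identity[A, L](a: A)(implicit mon: Monoid[L]): (A, L) =
  (a, mon.mempty)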
4.2 Writer in Haskell

The same thing is a little terser in Haskell. We start with the Writer type — first in Haskell, then in Scala:

type Writer a = (a, String)
type Writer[A] = (A, String)
Here I’m just defining a type alias, an equivalent of a typedef (or using)
in C++. The type Writer is parameterized by a type variable a and is
equivalent to a pair of a and String. The syntax for pairs is minimal:
just two items in parentheses, separated by a comma.
Our morphisms are functions from an arbitrary type to some Writer
type:
a -> Writer b
A => Writer[B]
It’s a function of two arguments, each being a function on its own, and
returning a function. The first argument is of the type (a -> Writer b),
the second is (b -> Writer c), and the result is (a -> Writer c).
Here’s the definition of this infix operator — the two arguments m1
and m2 appearing on either side of the fishy symbol:
m1 >=> m2 = \x ->
let (y, s1) = m1 x
(z, s2) = m2 y
in (z, s1 ++ s2)
object kleisli {
//allows us to use >=> as an infix operator
implicit class KleisliOps[A, B](m1: A => Writer[B]) {
def >=>[C](m2: B => Writer[C]): A => Writer[C] =
x => {
val (y, s1) = m1(x)
val (z, s2) = m2(y)
(z, s1 + s2)
}
}
}
Finally, we need identity morphisms for our category: functions that take a value and embellish it with an empty string. In Haskell such a function is called return:

return :: a -> Writer a
return x = (x, "")
Now we can translate our earlier example. The embellished functions become:

upCase :: String -> Writer String
upCase s = (map toUpper s, "upCase ")

toWords :: String -> Writer [String]
toWords s = (words s, "toWords ")

The function map corresponds to the C++ transform. It applies the char-
acter function toUpper to the string s. The auxiliary function words is
defined in the standard Prelude library.
Finally, the composition of the two functions is accomplished with
the help of the fish operator:
process :: String -> Writer [String]
process = upCase >=> toWords
4.3 Kleisli Categories

The category we have just defined is an example of a Kleisli category — a category based on a monad. The objects of a Kleisli category are the types of the underlying programming language; morphisms from type 𝐴 to type 𝐵 are functions that go from 𝐴 to a type derived from 𝐵 using a particular embellishment. Each Kleisli category defines its own way of composing such morphisms. That composition does more than just pass the output of one function to the input of another.
We have one more degree of freedom to play with: the composition it-
self. It turns out that this is exactly the degree of freedom which makes
it possible to give simple denotational semantics to programs that in
imperative languages are traditionally implemented using side effects.
4.4 Challenge
A function that is not defined for all possible values of its argument is
called a partial function. It’s not really a function in the mathematical
sense, so it doesn’t fit the standard categorical mold. It can, however, be
represented by a function that returns an embellished type optional:

template<class A> class optional {
    bool _isValid;
    A _value;
public:
    optional()    : _isValid(false) {}
    optional(A v) : _isValid(true), _value(v) {}
    bool isValid() const { return _isValid; }
    A value() const { return _value; }
};

As an example, here's the implementation of the embellished function safe_root:

optional<double> safe_root(double x) {
if (x >= 0) return optional<double>{sqrt(x)};
else return optional<double>{};
}
1. Construct the Kleisli category for partial functions (define com-
position and identity).
2. Implement the embellished function safe_reciprocal that returns
a valid reciprocal of its argument, if it’s different from zero.
3. Compose safe_root and safe_reciprocal to implement
safe_root_reciprocal that calculates sqrt(1/x) whenever possi-
ble.
5
Products and Coproducts
There is a common construction in category theory called the universal construction: defining objects in terms of their relationships with other objects. One way of doing this is to pick a pattern — a particular shape constructed from objects and morphisms — and look for all its occurrences in the category. Then we rank those occurrences and pick what could be considered the best fit.

This process is reminiscent of the way we do web searches. A query
is like a pattern. A very general query will give you large recall: lots of
hits. Some may be relevant, others not. To eliminate irrelevant hits, you
refine your query. That increases its precision. Finally, the search engine
will rank the hits and, hopefully, the one result that you’re interested in
will be at the top of the list.

5.1 Initial Object

The simplest shape is a single object. To rank the candidates, we'll say that object 𝑎 is "more initial" than object 𝑏 if there is an arrow going from 𝑎 to 𝑏, and we'll define:

The initial object is the object that has one and only one
morphism going to any object in the category.
However, even that doesn’t guarantee the uniqueness of the initial ob-
ject (if one exists). But it guarantees the next best thing: uniqueness up
to isomorphism. Isomorphisms are very important in category theory, so
I’ll talk about them shortly. For now, let’s just agree that uniqueness up
to isomorphism justifies the use of “the” in the definition of the initial
object.
Here are some examples: The initial object in a partially ordered
set (often called a poset) is its least element. Some posets don’t have an
initial object — like the set of all integers, positive and negative, with
less-than-or-equal relation for morphisms.
In the category of sets and functions, the initial object is the empty
set. Remember, an empty set corresponds to the Haskell type Void (there
is no corresponding type in C++) and the unique polymorphic function
from Void to any other type is called absurd:
absurd :: Void -> a
It’s this family of morphisms that makes Void the initial object in the
category of types.
5.2 Terminal Object
Let’s continue with the single-object pattern, but let’s change the way
we rank the objects. We’ll say that object 𝑎 is “more terminal” than ob-
ject 𝑏 if there is a morphism going from 𝑏 to 𝑎 (notice the reversal of
direction). We’ll be looking for an object that’s more terminal than any
other object in the category. Again, we will insist on uniqueness:
The terminal object is the object with one and only one
morphism coming to it from any object in the category.
In the category of sets, the terminal object is a singleton. In programming terms: there is one and only one pure function from any type to the unit type:

unit :: a -> ()
unit _ = ()
Notice that the uniqueness condition is crucial here, because there are other sets (in fact, all of them except the empty set) that have incoming morphisms from every set. For instance, there is a Boolean-valued function (a predicate) defined for every type:

yes :: a -> Bool
yes _ = True

But Bool is not a terminal object. There is at least one more Bool-valued
function from every type:
no :: a -> Bool
no _ = False
5.3 Duality
You can’t help but to notice the symmetry between the way we defined
the initial object and the terminal object. The only difference between
the two was the direction of morphisms. It turns out that for any cate-
gory 𝐂 we can define the opposite category 𝐂𝑜𝑝 just by reversing all the
arrows. The opposite category automatically satisfies all the require-
ments of a category, as long as we simultaneously redefine composi-
tion. If original morphisms 𝑓 ∷ 𝑎 → 𝑏 and 𝑔 ∷ 𝑏 → 𝑐 composed to
ℎ ∷ 𝑎 → 𝑐 with ℎ = 𝑔 ∘ 𝑓 , then the reversed morphisms 𝑓 𝑜𝑝 ∷ 𝑏 → 𝑎
and 𝑔 𝑜𝑝 ∷ 𝑐 → 𝑏 will compose to ℎ𝑜𝑝 ∷ 𝑐 → 𝑎 with ℎ𝑜𝑝 = 𝑓 𝑜𝑝 ∘ 𝑔 𝑜𝑝 .
And reversing the identity arrows is a (pun alert!) no-op.
Duality is a very important property of categories because it doubles
the productivity of every mathematician working in category theory.
For every construction you come up with, there is its opposite; and for
every theorem you prove, you get one for free. The constructions in the
opposite category are often prefixed with “co”, so you have products
and coproducts, monads and comonads, cones and cocones, limits and
colimits, and so on. There are no cocomonads though, because reversing
the arrows twice gets us back to the original state.
It follows then that a terminal object is the initial object in the op-
posite category.
5.4 Isomorphisms
As programmers, we are well aware that defining equality is a nontriv-
ial task. What does it mean for two objects to be equal? Do they have to
occupy the same location in memory (pointer equality)? Or is it enough
that the values of all their components are equal? Are two complex
numbers equal if one is expressed as the real and imaginary part, and
the other as modulus and angle? You’d think that mathematicians would
have figured out the meaning of equality, but they haven’t. They have
the same problem of multiple competing definitions for equality. There
is the propositional equality, intensional equality, extensional equality,
and equality as a path in homotopy type theory. And then there are the
weaker notions of isomorphism, and even weaker of equivalence.
The intuition is that isomorphic objects look the same — they have
the same shape. It means that every part of one object corresponds to
some part of another object in a one-to-one mapping. As far as our in-
struments can tell, the two objects are a perfect copy of each other.
Mathematically it means that there is a mapping from object 𝑎 to ob-
ject 𝑏, and there is a mapping from object 𝑏 back to object 𝑎, and they
are the inverse of each other. In category theory we replace mappings
with morphisms. An isomorphism is an invertible morphism; or a pair
of morphisms, one being the inverse of the other.
We understand the inverse in terms of composition and identity:
Morphism 𝑔 is the inverse of morphism 𝑓 if their composition is the
identity morphism. These are actually two equations because there are
two ways of composing two morphisms:
f . g = id
g . f = id
f compose g == identity _
g compose f == identity _
When I said that the initial (terminal) object was unique up to isomor-
phism, I meant that any two initial (terminal) objects are isomorphic.
That’s actually easy to see. Let’s suppose that we have two initial ob-
jects 𝑖1 and 𝑖2 . Since 𝑖1 is initial, there is a unique morphism 𝑓 from 𝑖1
to 𝑖2 . By the same token, since 𝑖2 is initial, there is a unique morphism
𝑔 from 𝑖2 to 𝑖1 . What’s the composition of these two morphisms?
The composition 𝑔 ∘ 𝑓 must be a morphism from 𝑖1 to 𝑖1. But 𝑖1 is initial, so there can only be one morphism from 𝑖1 to 𝑖1. Since we are in a category, we know that there is an identity morphism from 𝑖1 to 𝑖1, and since there is room for only one, that must be it. Therefore 𝑔 ∘ 𝑓 is the identity. By the same argument, 𝑓 ∘ 𝑔 is the identity on 𝑖2. This proves that 𝑓 and 𝑔 are the inverse of each other, and therefore any two initial objects are isomorphic.
5.5 Products
The next universal construction is that of a product. We know what a
Cartesian product of two sets is: it’s a set of pairs. But what’s the pattern
that connects the product set with its constituent sets? If we can figure
that out, we’ll be able to generalize it to other categories.
All we can say is that there are two functions, the projections, from
the product to each of the constituents. In Haskell, these two functions
are called fst and snd and they pick, respectively, the first and the sec-
ond component of a pair:
fst (x, _) = x
snd (_, y) = y
Equipped with this seemingly very limited knowledge, let’s try to define
a pattern of objects and morphisms in the category of sets that will lead
us to the construction of a product of two sets, 𝑎 and 𝑏. This pattern
consists of an object 𝑐 and two morphisms 𝑝 and 𝑞 connecting it to 𝑎
and 𝑏, respectively:
p :: c -> a
q :: c -> b
def p: C => A
def q: C => B
All 𝑐s that fit this pattern will be considered candidates for the product.
There may be lots of them.
For instance, let’s pick, as our constituents, two Haskell types, Int and
Bool, and get a sampling of candidates for their product.
Here’s one: Int. Can Int be considered a candidate for the product
of Int and Bool? Yes, it can — and here are its projections:
p :: Int -> Int
p x = x

q :: Int -> Bool
q _ = True

Here's another candidate: the triple (Int, Int, Bool). Its projections are:
p :: (Int, Int, Bool) -> Int
p (x, _, _) = x

q :: (Int, Int, Bool) -> Bool
q (_, _, b) = b
You may have noticed that while our first candidate was too small — it
only covered the Int dimension of the product; the second was too big
— it spuriously duplicated the Int dimension.
But we haven’t explored yet the other part of the universal con-
struction: the ranking. We want to be able to compare two instances
of our pattern. We want to compare one candidate object 𝑐 and its two
projections 𝑝 and 𝑞 with another candidate object 𝑐 ′ and its two projec-
tions 𝑝 ′ and 𝑞 ′ . We would like to say that 𝑐 is “better” than 𝑐 ′ if there
is a morphism 𝑚 from 𝑐 ′ to 𝑐 — but that’s too weak. We also want its
projections to be “better,” or “more universal,” than the projections of
𝑐 ′ . What it means is that the projections 𝑝 ′ and 𝑞 ′ can be reconstructed
from 𝑝 and 𝑞 using 𝑚:
p' = p . m
q' = q . m
p1 == p compose m
q1 == q compose m
Another way of looking at these equations is that 𝑚 factorizes 𝑝 ′ and 𝑞 ′ .
Just pretend that these equations are in natural numbers, and the dot is
multiplication: 𝑚 is a common factor shared by 𝑝 ′ and 𝑞 ′ .
Just to build some intuitions, let me show you that the pair (Int, Bool)
with the two canonical projections, fst and snd is indeed better than the
two candidates I presented before.
For the first candidate, Int, the morphism m is:

m :: Int -> (Int, Bool)
m x = (x, True)

Indeed, its two projections can be reconstructed from fst and snd:
p x = fst (m x) = x
q x = snd (m x) = True
The m for the second candidate, the triple, is similarly uniquely determined:

m :: (Int, Int, Bool) -> (Int, Bool)
m (x, _, b) = (x, b)
We were able to show that (Int, Bool) is better than either of the two
candidates. Let’s see why the opposite is not true. Could we find some
m' that would help us reconstruct fst and snd from p and q?
fst = p . m'
snd = q . m'
fst == p compose m1
snd == q compose m1
In our first example, q always returned True and we know that there are
pairs whose second component is False. We can’t reconstruct snd from
q.
The second example is different: we retain enough information after
running either p or q, but there is more than one way to factorize fst
and snd. Because both p and q ignore the second component of the triple,
our m' can put anything in it. We can have:
m' (x, b) = (x, x, b)

or

m' (x, b) = (x, 42, b)
and so on.
Putting it all together, given any type c with two projections p and q,
there is a unique m from c to the Cartesian product (a, b) that factorizes
them. In fact, it just combines p and q into a pair.
m :: c -> (a, b)
m x = (p x, q x)
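Here's a sketch of the same factorizing morphism in Scala:

def m[A, B, C](p: C => A, q: C => B): C => (A, B) =
  c => (p(c), q(c))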
That makes the Cartesian product (a, b) our best match, which means
that this universal construction works in the category of sets. It picks
the product of any two sets.
Now let’s forget about sets and define a product of two objects in
any category using the same universal construction. Such a product
doesn’t always exist, but when it does, it is unique up to a unique iso-
morphism.
5.6 Coproduct
Like every construction in category theory, the product has a dual,
which is called the coproduct. When we reverse the arrows in the prod-
uct pattern, we end up with an object 𝑐 equipped with two injections, i
and j: morphisms from 𝑎 and 𝑏 to 𝑐.
i :: a -> c
j :: b -> c
def i: A => C
def j: B => C
The ranking is also inverted: object 𝑐 is "better" than object 𝑐′, equipped with the injections 𝑖′ and 𝑗′, if there is a morphism 𝑚 from 𝑐 to 𝑐′ that factorizes the injections:

i' = m . i
j' = m . j
i1 == m compose i
j1 == m compose j
The “best” such object, one with a unique morphism connecting it to
any other pattern, is called a coproduct and, if it exists, is unique up to
unique isomorphism.
In the category of sets, the coproduct is the disjoint union of two sets.
An element of the disjoint union of 𝑎 and 𝑏 is either an element of 𝑎 or
an element of 𝑏. If the two sets overlap, the disjoint union contains two
copies of the common part. You can think of an element of a disjoint
union as being tagged with an identifier that specifies its origin.
For a programmer, it’s easier to understand a coproduct in terms of
types: it’s a tagged union of two types. C++ supports unions, but they
are not tagged. It means that in your program you have to somehow
keep track which member of the union is valid. To create a tagged union,
you have to define a tag — an enumeration — and combine it with the
union. For instance, a tagged union of an int and a
char const * could be implemented as:
struct Contact {
    enum { isPhone, isEmail } tag;
    union { int phoneNum; char const * emailAddr; };
};

Contact PhoneNum(int n) {
    Contact c;
    c.tag = isPhone;
    c.phoneNum = n;
    return c;
}
In Haskell, you can combine any data types into a tagged union by separating data constructors with vertical bars:

data Contact = PhoneNum Int | EmailAddr String

Here's an example of its usage:

helpdesk :: Contact
helpdesk = PhoneNum 2222222
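A Scala sketch of the same tagged union, as a sealed family of case classes (the names mirror the Haskell example):

sealed trait Contact
final case class PhoneNum(num: Int) extends Contact
final case class EmailAddr(addr: String) extends Contact

val helpdesk: Contact = PhoneNum(2222222)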
The most general sum of two types in Haskell, the canonical coproduct, is built into the Prelude as Either:
data Either a b = Left a | Right b
5.7 Asymmetry
We’ve seen two sets of dual definitions: The definition of a terminal ob-
ject can be obtained from the definition of the initial object by reversing
the direction of arrows; in a similar way, the definition of the coprod-
uct can be obtained from that of the product. Yet in the category of sets
the initial object is very different from the final object, and coproduct
is very different from product. We’ll see later that product behaves like
multiplication, with the terminal object playing the role of one; whereas
coproduct behaves more like the sum, with the initial object playing the
role of zero. In particular, for finite sets, the size of the product is the
product of the sizes of individual sets, and the size of the coproduct is
the sum of the sizes.
This shows that the category of sets is not symmetric with respect
to the inversion of arrows.
Notice that while the empty set has a unique morphism to any set
(the absurd function), it has no morphisms coming back. The singleton
set has a unique morphism coming to it from any set, but it also has
outgoing morphisms to every set (except for the empty one). As we’ve
seen before, these outgoing morphisms from the terminal object play a
very important role of picking elements of other sets (the empty set has
no elements, so there’s nothing to pick).
It’s the relationship of the singleton set to the product that sets it
apart from the coproduct. Consider using the singleton set, represented
by the unit type (), as yet another — vastly inferior — candidate for the
product pattern. Equip it with two projections p and q: functions from
the singleton to each of the constituent sets. Each selects a concrete
element from either set. Because the product is universal, there is also a
(unique) morphism m from our candidate, the singleton, to the product.
This morphism selects an element from the product set — it selects a
concrete pair. It also factorizes the two projections:
p = fst . m
q = snd . m
p == fst compose m
q == snd compose m
When acting on the singleton value (), the only element of the singleton
set, these two equations become:
p () = fst (m ())
q () = snd (m ())
p(()) == fst(m(()))
q(()) == snd(m(()))
A function must be defined for every element of its domain set (in
programming, we call it a total function), but it doesn’t have to cover
the whole codomain. We’ve seen some extreme cases of it: functions
from a singleton set — functions that select just a single element in
the codomain. (Actually, functions from an empty set are the real ex-
tremes.) When the size of the domain is much smaller than the size of
the codomain, we often think of such functions as embedding the do-
main in the codomain. For instance, we can think of a function from
a singleton set as embedding its single element in the codomain. I call
them embedding functions, but mathematicians prefer to give a name to
the opposite: functions that tightly fill their codomains are called sur-
jective or onto.
The other source of asymmetry is that functions are allowed to map
many elements of the domain set into one element of the codomain.
They can collapse them. The extreme cases are functions that map whole
sets into a singleton. You’ve seen the polymorphic unit function that
does just that. The collapsing can only be compounded by composi-
tion. A composition of two collapsing functions is even more collaps-
ing than the individual functions. Mathematicians have a name for non-
collapsing functions: they call them injective or one-to-one.
Of course there are some functions that are neither embedding nor
collapsing. They are called bijections and they are truly symmetric, be-
cause they are invertible. In the category of sets, an isomorphism is the
same as a bijection.
5.8 Challenges
1. Show that the terminal object is unique up to unique isomor-
phism.
2. What is a product of two objects in a poset? Hint: Use the univer-
sal construction.
3. What is a coproduct of two objects in a poset?
4. Implement the equivalent of Haskell Either as a generic type in
your favorite language (other than Haskell).
5. Show that Either is a “better” coproduct than int equipped with
two injections:
int i(int n) { return n; }
int j(bool b) { return b ? 0: 1; }
5.9 Bibliography
1. The Catsters, Products and Coproducts1 video.
1 https://www.youtube.com/watch?v=upCSDIO9pjc
6
Simple Algebraic Data Types
We've seen two basic ways of combining types: using a product and a coproduct. It turns out that a lot of data structures in everyday programming can be built using just these two mechanisms.
6.1 Product Types
The canonical implementation of a product of two types in a program-
ming language is a pair. In Haskell, a pair is a primitive type construc-
tor; in C++ it’s a relatively complex template defined in the Standard
Library.
Pairs are not strictly commutative: a pair (Int, Bool) cannot be sub-
stituted for a pair (Bool, Int), even though they carry the same infor-
mation. They are, however, commutative up to isomorphism — the iso-
morphism being given by the swap function (which is its own inverse):

swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)
You can think of the two pairs as simply using a different format for
storing the same data. It’s just like big endian vs. little endian.
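A Scala sketch of the same function:

def swap[A, B](p: (A, B)): (B, A) = (p._2, p._1)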
You can combine an arbitrary number of types into a product by
nesting pairs inside pairs, but there is an easier way: nested pairs are
equivalent to tuples. It’s the consequence of the fact that different ways
of nesting pairs are isomorphic. If you want to combine three types in
a product, a, b, and c, in this order, you can do it in two ways:
((a, b), c)
((A, B), C)
or

(a, (b, c))

(A, (B, C))
These types are different — you can’t pass one to a function that expects
the other — but their elements are in one-to-one correspondence. There
is a function that maps one to another:
def alpha[A, B, C]: (((A, B), C)) => ((A, (B, C))) = {
case ((x, y), z) => (x, (y, z))
}
def alphaInv[A, B, C]: ((A, (B, C))) => (((A, B), C)) = {
case (x, (y, z)) => ((x, y), z)
}
If you interpret the product of types as multiplication, these isomorphisms are reminiscent of the associativity law for a monoid:

(𝑎 ∗ 𝑏) ∗ 𝑐 = 𝑎 ∗ (𝑏 ∗ 𝑐)
Except that, in the monoid case, the two ways of composing products
were equal, whereas here they are only equal “up to isomorphism.”
If we can live with isomorphisms, and don’t insist on strict equality,
we can go even further and show that the unit type, (), is the unit of the
product the same way 1 is the unit of multiplication. Indeed, the pairing
of a value of some type a with a unit doesn’t add any information. The
type:
(a, ())
(A, Unit)
is isomorphic to a itself. Here's the isomorphism:
def rho[A]: ((A, Unit)) => A = {
case (x, ()) => x
}

def rhoInv[A]: A => ((A, Unit)) = {
  x => (x, ())
}
data Pair a b = P a b
Here, Pair a b is the name of the type parameterized by two other types, a and b; P is the name of the data constructor. You define a pair type by passing two types to the Pair type constructor. You construct a pair value by passing two values of appropriate types to the
constructor P. For instance, let’s define a value stmt as a pair of String
and Bool:

stmt :: Pair String Bool
stmt = P "This statement is" False
The first line is the type declaration. It uses the type constructor Pair,
with String and Bool replacing a and the b in the generic definition
of Pair. The second line defines the actual value by passing a concrete
string and a concrete Boolean to the data constructor P. Type construc-
tors are used to construct types; data constructors, to construct values.
Since the name spaces for type and data constructors are separate
in Haskell, you will often see the same name used for both, as in:

data Pair a b = Pair a b
And if you squint hard enough, you may even view the built-in pair
type as a variation on this kind of declaration, where the name Pair is
replaced with the binary operator (,). In fact you can use (,) just like
any other named constructor and create pairs using prefix notation:
stmt = (,) "This statement is" False
val stmt = ("This statement is", false)
Instead of using generic pairs or tuples, you can also define specific named product types, as in:

data Stmt = Stmt String Bool

which is just a product of String and Bool, but it's given its own name
and constructor. The advantage of this style of declaration is that you
may define many types that have the same content but different mean-
ing and functionality, and which cannot be substituted for each other.
Programming with tuples and multi-argument constructors can get
messy and error prone — keeping track of which component represents
what. It’s often preferable to give names to components. A product type
with named fields is called a record in Haskell, and a struct in C.
6.2 Records
Let’s have a look at a simple example. We want to describe chemi-
cal elements by combining two strings, name and symbol; and an in-
teger, the atomic number; into one data structure. We can use a tu-
ple (String, String, Int) and remember which component represents
what. We would extract components by pattern matching, as in this
function that checks if the symbol of the element is the prefix of its
name (as in He being the prefix of Helium):
startsWithSymbol :: (String, String, Int) -> Bool
startsWithSymbol (name, symbol, _) = isPrefixOf symbol name
This code is error prone, and is hard to read and maintain. It’s much
better to define a record:
data Element = Element { name :: String
                       , symbol :: String
                       , atomicNumber :: Int }

case class Element(name: String, symbol: String, atomicNumber: Int)

Here's how a tuple can be converted to the record:
val tupleToElem: ((String, String, Int)) => Element = {
case (n, s, a) => Element(n, s, a)
}
Notice that the names of record fields also serve as functions to access
these fields. For instance, atomicNumber e retrieves the atomicNumber field
from e. We use atomicNumber as a function of the type:

atomicNumber :: Element -> Int
With the record syntax for Element, our function startsWithSymbol be-
comes more readable:
startsWithSymbol :: Element -> Bool
startsWithSymbol e = isPrefixOf (symbol e) (name e)
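Using the Scala Element case class defined above, a sketch of the same function:

def startsWithSymbol(e: Element): Boolean =
  e.name.startsWith(e.symbol)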
We could even use the Haskell trick of turning the function isPrefixOf
into an infix operator by surrounding it with backquotes, and make it
read almost like a sentence:
startsWithSymbol e = symbol e `isPrefixOf` name e
6.3 Sum Types

Just as the product in the category of sets gives rise to product types, the coproduct gives rise to sum types. The canonical implementation of a sum type in Haskell is:

data Either a b = Left a | Right b

And just as you can nest pairs, you can nest Eithers, or define sums of more than two types directly, as in:

data OneOfThree a b c = Sinistral a | Medial b | Dextral c

or in Scala:
sealed trait OneOfThree[A, B, C]
final case class Sinistral[A](v: A) extends OneOfThree[A,
↪ Nothing, Nothing]
final case class Medial[B](v: B) extends OneOfThree[Nothing,
↪ B, Nothing]
final case class Dextral[C](v: C) extends
↪ OneOfThree[Nothing, Nothing, C]
and so on.
It turns out that 𝐒𝐞𝐭 is also a (symmetric) monoidal category with re-
spect to coproduct. The role of the binary operation is played by the dis-
joint sum, and the role of the unit element is played by the initial object.
In terms of types, we have Either as the monoidal operator and Void,
the uninhabited type, as its neutral element. You can think of Either as
plus, and Void as zero. Indeed, adding Void to a sum type doesn’t change
its content. For instance:
Either a Void
Either[A, Nothing]
is isomorphic to a: there is no way to construct a Right version of it, because Void has no values. Simple sum types whose constructors take no arguments are just enumerations. For instance, the equivalent of the Haskell:
data Color = Red | Green | Blue
is the C++:
enum { Red, Green, Blue };

A more interesting sum type is Maybe, rendered in Scala as Option:
sealed trait Option[+A]
case object None extends Option[Nothing]
case class Some[A](a: A) extends Option[A]
The Maybe type is a sum of two types. You can see this if you separate
the two constructors into individual types. The first one would look like
this:

data NothingType = Nothing
It’s an enumeration with one value called Nothing. In other words, it’s
a singleton, which is equivalent to the unit type (). The second part:

data JustType a = Just a

is just an encapsulation of the value of type a.
More complex sum types are often faked in C++ using pointers. A pointer
can be either null, or point to a value of specific type. For instance, a
Haskell list type, which can be defined as a (recursive) sum type:
data List a = Nil | Cons a (List a)
can be translated to C++ using the null pointer trick to implement the
empty list:
template<class A>
class List {
Node<A> * _head;
public:
List() : _head(nullptr) {} // Nil
List(A a, List<A> l) // Cons
: _head(new Node<A>(a, l))
{}
};
Notice that the two Haskell constructors Nil and Cons are translated into
two overloaded List constructors with analogous arguments (none, for
Nil; and a value and a list for Cons). The List class doesn’t need a tag
to distinguish between the two components of the sum type. Instead it
uses the special nullptr value for _head to encode Nil.
The main difference, though, between Haskell and C++ types is that
Haskell data structures are immutable. If you create an object using one
particular constructor, the object will forever remember which con-
structor was used and what arguments were passed to it. So a Maybe
object that was created as Just "energy" will never turn into Nothing.
Similarly, an empty list will forever be empty, and a list of three ele-
ments will always have the same three elements.
It’s this immutability that makes construction reversible. Given an
object, you can always disassemble it down to parts that were used in
its construction. This deconstruction is done with pattern matching and
it reuses constructors as patterns. Constructor arguments, if any, are
replaced with variables (or other patterns).
The List data type has two constructors, so the deconstruction of
an arbitrary List uses two patterns corresponding to those constructors.
One matches the empty Nil list, and the other a Cons-constructed list.
For instance, here's the definition of a simple function on Lists:

maybeTail :: List a -> Maybe (List a)
maybeTail Nil = Nothing
maybeTail (Cons _ t) = Just t
The first part of the definition of maybeTail uses the Nil constructor as
pattern and returns Nothing. The second part uses the Cons constructor
as pattern. It replaces the first constructor argument with a wildcard, be-
cause we are not interested in it. The second argument to Cons is bound
to the variable t (I will call these things variables even though, strictly
speaking, they never vary: once bound to an expression, a variable never
changes). The return value is Just t. Now, depending on how your List
was created, it will match one of the clauses. If it was created using Cons,
the two arguments that were passed to it will be retrieved (and the first
discarded).
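A Scala sketch of the same function, using the built-in List and Option instead of our hand-rolled constructors:

def maybeTail[A](l: List[A]): Option[List[A]] = l match {
  case Nil    => None
  case _ :: t => Some(t)
}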
Even more elaborate sum types are implemented in C++ using poly-
morphic class hierarchies. A family of classes with a common ancestor
may be understood as one variant type, in which the vtable serves as a
hidden tag. What in Haskell would be done by pattern matching on the
constructor, and by calling specialized code, in C++ is accomplished by
dispatching a call to a virtual function based on the vtable pointer.
You will rarely see union used as a sum type in C++ because of
severe limitations on what can go into a union. You can’t even put a
std::string into a union because it has a copy constructor.
6.4 Algebra of Types

We've seen that product types behave like multiplication, with () playing the role of one, and that sum types behave like addition, with Void playing the role of zero.
Another thing that links addition and multiplication is the distribu-
tive property:
𝑎 × (𝑏 + 𝑐) = 𝑎 × 𝑏 + 𝑎 × 𝑐
Does it also hold for product and sum types? Yes, it does — up to iso-
morphisms, as usual. The left hand side corresponds to the type:
(a, Either b c)

and the right hand side to the type:

Either (a, b) (a, c)

Here's the function that converts one to the other:

prodToSum :: (a, Either b c) -> Either (a, b) (a, c)
prodToSum (x, e) =
    case e of
        Left y -> Left (x, y)
        Right z -> Right (x, z)

and here's one that goes the other way:

sumToProd :: Either (a, b) (a, c) -> (a, Either b c)
sumToProd e =
    case e of
        Left (x, y) -> (x, Left y)
        Right (x, z) -> (x, Right z)

If you evaluate, say, prodToSum applied to the pair (2, Left "Hi!"),
the e in case e of will be equal to Left "Hi!". It will match the pattern
Left y, substituting "Hi!" for y. Since the x has already been matched
to 2, the result of the case of clause, and the whole function, will be
Left (2, "Hi!"), as expected.
I’m not going to prove that these two functions are the inverse of
each other, but if you think about it, they must be! They are just trivially
re-packing the contents of the two data structures. It’s the same data,
only different format.
Mathematicians have a name for two such intertwined monoids: it’s
called a semiring. It’s not a full ring, because we can’t define subtraction
of types. That’s why a semiring is sometimes called a rig, which is a
pun on “ring without an n” (negative). But barring that, we can get a
lot of mileage from translating statements about, say, natural numbers,
which form a rig, to statements about types. Here’s a translation table
with some entries of interest:
Numbers      Types
0            Void
1            ()
𝑎 + 𝑏        Either a b = Left a | Right b
𝑎 × 𝑏        (a, b) or Pair a b = Pair a b
2 = 1 + 1    data Bool = True | False
1 + 𝑎        data Maybe = Nothing | Just a
Let's apply this analogy to the List type we defined earlier, rendered in Scala as:

sealed trait List[+A]
case object Nil extends List[Nothing]
final case class Cons[A](h: A, t: List[A]) extends List[A]
Replacing List a with x, Nil with 1, and Cons a (List a) with the product a * x, we get the type equation:

x = 1 + a * x

We can try to solve it by repeatedly substituting the right hand side for x:
x = 1 + a*x
x = 1 + a*(1 + a*x) = 1 + a + a*a*x
x = 1 + a + a*a*(1 + a*x) = 1 + a + a*a + a*a*a*x
...
x = 1 + a + a*a + a*a*a + a*a*a*a...
We end up with an infinite sum of products (tuples): a list is either empty, 1; or a singleton, a; or a pair, a*a; or a triple, a*a*a; and so on, which is exactly what a list is.

There is more to this correspondence. Types can also be interpreted logically, with an inhabited type corresponding to a true proposition. A value of the type Either a b contains
either a value of type a or a value of type b, so it’s enough if one of
them is inhabited. Logical and and or also form a semiring, and it too
can be mapped into type theory:
Logic      Types
false      Void
true       ()
𝑎 || 𝑏     Either a b = Left a | Right b
𝑎 && 𝑏     (a, b)
This analogy goes deeper, and is the basis of the Curry-Howard isomor-
phism between logic and type theory. We’ll come back to it when we
talk about function types.
6.5 Challenges
1. Show the isomorphism between Maybe a and Either () a.
2. Here’s a sum type defined in Haskell:
data Shape = Circle Float
| Rect Float Float
When we want to define a function that acts on a Shape, we do it by pattern matching on the two constructors. Implement a function area that calculates the area of a Shape.
3. Continuing with Shape: we can easily add a new function circ that calculates the circumference of a Shape, without touching the definition of Shape itself:
circ :: Shape -> Float
circ (Circle r) = 2.0 * pi * r
circ (Rect d h) = 2.0 * (d + h)
7
Functors
At the risk of sounding like a broken record, I will say this about
functors: A functor is a very simple but powerful idea. Category
theory is just full of those simple but powerful ideas. A functor is a
mapping between categories. Given two categories, 𝐂 and 𝐃, a functor
𝐹 maps objects in 𝐂 to objects in 𝐃 — it’s a function on objects. If 𝑎 is
an object in 𝐂, we’ll write its image in 𝐃 as 𝐹 𝑎 (no parentheses). But a
category is not just objects — it’s objects and morphisms that connect
them. A functor also maps morphisms — it’s a function on morphisms.
But it doesn’t map morphisms willy-nilly — it preserves connections.
So if a morphism 𝑓 in 𝐂 connects object 𝑎 to object 𝑏,
𝑓 ∷ 𝑎 → 𝑏
the image of 𝑓 in 𝐃, 𝐹 𝑓 , will connect the image of 𝑎 to the image of 𝑏:
𝐹𝑓 ∷ 𝐹𝑎 → 𝐹𝑏
(This is a mixture of mathematical and
Haskell notation that hopefully makes sense by
now. I won’t use parentheses when applying
functors to objects or morphisms.)
As you can see, a functor preserves the
structure of a category: what’s connected in one
category will be connected in the other cate-
gory. But there’s something more to the structure of a category: there’s
also the composition of morphisms. If ℎ is a composition of 𝑓 and 𝑔:
ℎ = 𝑔 ∘ 𝑓

we want its image under 𝐹 to be the composition of the images of 𝑓 and 𝑔:

𝐹ℎ = 𝐹𝑔 ∘ 𝐹𝑓

Finally, we want all identity morphisms in 𝐂 to be mapped to identity morphisms in 𝐃:

𝐹 id𝑎 = id𝐹 𝑎
Here, id𝑎 is the identity at the object 𝑎, and id𝐹 𝑎
the identity at 𝐹 𝑎. Note that these conditions
make functors much more restrictive than regu-
lar functions. Functors must preserve the struc-
ture of a category. If you picture a category as a
collection of objects held together by a network
of morphisms, a functor is not allowed to intro-
duce any tears into this fabric. It may smash ob-
jects together, it may glue multiple morphisms
into one, but it may never break things apart.
This no-tearing constraint is similar to the continuity condition you
might know from calculus. In this sense functors are “continuous” (al-
though there exists an even more restrictive notion of continuity for
functors). Just like functions, functors may do both collapsing and em-
bedding. The embedding aspect is more prominent when the source
category is much smaller than the target category. In the extreme, the
source can be the trivial singleton category — a category with one object
and one morphism (the identity). A functor from the singleton category
to any other category simply selects an object in that category. This is
fully analogous to the property of morphisms from singleton sets select-
ing elements in target sets. The maximally collapsing functor is called
the constant functor Δ𝑐 . It maps every object in the source category to
one selected object 𝑐 in the target category. It also maps every mor-
phism in the source category to the identity morphism id𝑐 . It acts like a
black hole, compacting everything into one singularity. We’ll see more
of this functor when we discuss limits and colimits.
7.1 Functors in Programming
Let’s get down to earth and talk about programming. We have our cat-
egory of types and functions. We can talk about functors that map this
category into itself — such functors are called endofunctors. So what’s
an endofunctor in the category of types? First of all, it maps types to
types. We’ve seen examples of such mappings, maybe without realizing
that they were just that. I’m talking about definitions of types that were
parameterized by other types. Let's see a few examples.

7.1.1 The Maybe Functor

The simplest example is the definition of Maybe:

data Maybe a = Nothing | Just a
Here’s an important subtlety: Maybe itself is not a type, it’s a type con-
structor. You have to give it a type argument, like Int or Bool, in order
to turn it into a type. Maybe without any argument represents a function
on types. But can we turn Maybe into a functor? (From now on, when I
speak of functors in the context of programming, I will almost always
mean endofunctors.) A functor is not only a mapping of objects (here,
types) but also a mapping of morphisms (here, functions). For any func-
tion from a to b:
f :: a -> b
val f: A => B

we would like to produce a function from Maybe a to Maybe b. We'll call the new function f'. It must consider the two cases corresponding to the two constructors of Maybe:

f' :: Maybe a -> Maybe b
f' Nothing = Nothing
f' (Just x) = Just (f x)
(By the way, in Haskell you can use apostrophes in variable names,
which is very handy in cases like these.) In Haskell, we implement the
morphism-mapping part of a functor as a higher order function called
fmap. In the case of Maybe, it has the following signature:
fmap :: (a -> b) -> Maybe a -> Maybe b
We often say that fmap lifts a function. The lifted function acts on Maybe
values. As usual, because of currying, this signature may be interpreted
in two ways: as a function of one argument — which itself is a function
(a -> b) — returning a function (Maybe a -> Maybe b); or as a function
of two arguments returning Maybe b. Here's the implementation of fmap for Maybe:

fmap _ Nothing = Nothing
fmap f (Just x) = Just (f x)
To show that the type constructor Maybe together with the function fmap
form a functor, we have to prove that fmap preserves identity and com-
position. These are called “the functor laws,” but they simply ensure the
preservation of the structure of the category.
7.1.2 Equational Reasoning

To prove the functor laws, I will use equational reasoning, a popular proof technique in Haskell. It takes advantage of the fact that Haskell functions are defined as equalities: the left hand side equals the right hand side, so you can always substitute one for the other. Consider the identity function:

id x = x
def identity[A](x: A) = x
fmap id = id
fmap(identity) == identity
There are two cases to consider: Nothing and Just. Here’s the first case
(I’m using Haskell pseudo-code to transform the left hand side to the
right hand side):
fmap id Nothing
= { definition of fmap }
Nothing
= { definition of id }
id Nothing
Notice that in the last step I used the definition of id backwards. I re-
placed the expression Nothing with id Nothing. In practice, you carry
out such proofs by “burning the candle at both ends,” until you hit the
same expression in the middle — here it was Nothing. The second case
is also easy:
fmap id (Just x)
= { definition of fmap }
Just (id x)
= { definition of id }
Just x
= { definition of id }
id (Just x)
The second functor law is the preservation of composition:

fmap (g . f) = fmap g . fmap f
fmap(g compose f) == fmap(g) compose fmap(f)
fmap (g . f) Nothing
= { definition of fmap }
Nothing
= { definition of fmap }
fmap g Nothing
= { definition of fmap }
fmap g (fmap f Nothing)
fmap (g . f) (Just x)
= { definition of fmap }
Just ((g . f) x)
= { definition of composition }
Just (g (f x))
= { definition of fmap }
fmap g (Just (f x))
= { definition of fmap }
fmap g (fmap f (Just x))
= { definition of composition }
(fmap g . fmap f) (Just x)
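As a quick sanity check of both laws in Scala (a spot check on one value, not a proof), using the built-in Option and its map:

val f: Int => Int = _ + 1
val g: Int => Int = _ * 2

assert(Option(3).map(identity) == identity(Option(3)))
assert(Option(3).map(g compose f) == Option(3).map(f).map(g))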
It’s worth stressing that equational reasoning doesn’t work for C++
style “functions” with side effects. Consider this code:
int square(int x) {
return x * x;
}
int counter() {
static int c = 0;
return c++;
}
double y = square(counter());

Using equational reasoning, you might be tempted to replace it with:

double y = counter() * counter();

This is definitely not a valid transformation, and it will not produce the
same result. Despite that, the C++ compiler will try to use equational
reasoning if you implement square as a macro, with disastrous results.
7.1.3 Optional
Functors are easily expressed in Haskell, but they can be defined in any
language that supports generic programming and higher-order func-
tions. Let’s consider the C++ analog of Maybe, the template type optional.
Here’s a sketch of the implementation (the actual implementation is
much more complex, dealing with various ways the argument may be
passed, with copy semantics, and with the resource management issues
characteristic of C++):
template<class T>
class optional {
    bool _isValid; // the tag
    T _v;
public:
    optional() : _isValid(false) {}           // Nothing
    optional(T x) : _isValid(true), _v(x) {}  // Just
    bool isValid() const { return _isValid; }
    T val() const { return _v; }
};
This template provides one part of the definition of a functor: the map-
ping of types. It maps any type T to a new type optional<T>. Let’s define
its action on functions:
template<class A, class B>
std::function<optional<B>(optional<A>)> fmap(std::function<B(A)> f)
{
    return [f](optional<A> opt) {
        if (!opt.isValid())
            return optional<B>{};
        else
            return optional<B>{ f(opt.val()) };
    };
}

This is the curried version. It raises some design questions: Should fmap be a method of optional or a free function? Should it be a
curried or an uncurried free template function? Can the C++ compiler
correctly infer the missing types, or should they be specified explicitly?
Consider a situation where the input function f takes an int to a bool.
How will the compiler figure out the type of g:
auto g = fmap(f);
especially if, in the future, there are multiple functors overloading fmap?
(We’ll see more functors soon.)
7.1.4 Typeclasses
So how does Haskell deal with abstracting the functor? It uses the type-
class mechanism. A typeclass defines a family of types that support a
common interface. For instance, the class of objects that support equal-
ity is defined as follows:
class Eq a where
(==) :: a -> a -> Bool
trait Eq[A]{
def ===(x: A, y: A): Boolean
}
This definition states that type a is of the class Eq if it supports the op-
erator (==) that takes two arguments of type a and returns a Bool. If
you want to tell Haskell that a particular type is Eq, you have to de-
clare it an instance of this class and provide the implementation of (==).
For example, given the definition of a 2D Point (a product type of two
Floats):
data Point = Pt Float Float

here's how we declare Point an instance of Eq:

instance Eq Point where
    (Pt x y) == (Pt x' y') = x == x' && y == y'
Here I used the operator (==) (the one I’m defining) in the infix posi-
tion between the two patterns (Pt x y) and (Pt x' y'). The body of
the function follows the single equal sign. Once Point is declared an in-
stance of Eq, you can directly compare points for equality. Notice that,
unlike in C++ or Java, you don’t have to specify the Eq class (or interface)
when defining Point — you can do it later in client code. Typeclasses
are also Haskell’s only mechanism for overloading functions (and op-
erators). We will need that for overloading fmap for different functors.
There is one complication, though: a functor is not defined as a type but
as a mapping of types, a type constructor. We need a typeclass that’s
not a family of types, as was the case with Eq, but a family of type con-
structors. Fortunately a Haskell typeclass works with type constructors
as well as with types. So here’s the definition of the Functor class:
class Functor f where
fmap :: (a -> b) -> f a -> f b
trait Functor[F[_]] {
def fmap[A, B](f: A => B)(fa: F[A]): F[B]
}
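Using the Scala trait above, here's a sketch of a Functor instance for Option:

implicit val optionFunctor: Functor[Option] = new Functor[Option] {
  def fmap[A, B](f: A => B)(fa: Option[A]): Option[B] = fa match {
    case None    => None
    case Some(x) => Some(f(x))
  }
}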
By the way, the Functor class, as well as its instance definitions for a lot
of simple data types, including Maybe, are part of the standard Prelude
library.
7.1.5 Functor in C++
Can we try the same approach in C++? A type constructor corresponds
to a template class, like optional, so by analogy, we would parameterize
fmap with a template template parameter F. This is the syntax for it:

template<template<class> class F, class A, class B>
F<B> fmap(std::function<B(A)>, F<A>);

We would like to specialize this template for particular functors, but C++ doesn't support partial specialization of function templates. In practice, we fall back on overloading fmap for each functor, as we did for optional.
This definition works, but only because the second argument of fmap
selects the overload. It totally ignores the more generic definition of
fmap.
7.1.6 The List Functor
To get some intuition as to the role of functors in programming, we need
to look at more examples. Any type that is parameterized by another
type is a candidate for a functor. Generic containers are parameterized
by the type of the elements they store, so let’s look at a very simple
container, the list:

data List a = Nil | Cons a (List a)
We have the type constructor List, which is a mapping from any type
a to the type List a. To show that List is a functor we have to define
the lifting of functions: Given a function a -> b define a function
List a -> List b:
fmap :: (a -> b) -> List a -> List b

How do we implement
this in practice, given that a (non-empty) list is defined as the Cons of a
head and a tail? We apply f to the head and apply the lifted (fmapped) f
to the tail. This is a recursive definition, because we are defining lifted
f in terms of lifted f:

fmap f (Cons x t) = Cons (f x) (fmap f t)
Notice that, on the right hand side, fmap f is applied to a list that’s
shorter than the list for which we are defining it — it’s applied to its tail.
We recurse towards shorter and shorter lists, so we are bound to even-
tually reach the empty list, or Nil. But as we’ve decided earlier, fmap f
acting on Nil returns Nil, thus terminating the recursion. To get the fi-
nal result, we combine the new head (f x) with the new tail (fmap f t)
using the Cons constructor. Putting it all together, here’s the instance
declaration for the list functor:

instance Functor List where
    fmap _ Nil = Nil
    fmap f (Cons x t) = Cons (f x) (fmap f t)

and its Scala counterpart:

implicit val listFunctor = new Functor[List] {
  def fmap[A, B](f: A => B)(fa: List[A]): List[B] =
    fa match {
      case Nil        => Nil
      case Cons(x, t) => Cons(f(x), fmap(f)(t))
    }
}
If you are more comfortable with C++, consider the case of a std::vector,
which could be considered the most generic C++ container. The im-
plementation of fmap for std::vector is just a thin encapsulation of
std::transform:

template<class A, class B>
std::vector<B> fmap(std::function<B(A)> f, std::vector<A> v)
{
    std::vector<B> w;
    std::transform( std::begin(v)
                  , std::end(v)
                  , std::back_inserter(w)
                  , f);
    return w;
}

You can use it, for instance, to square the elements of a vector of numbers:
std::vector<int> v{ 1, 2, 3, 4 };
auto w = fmap([](int i) { return i*i; }, v);
std::copy( std::begin(w)
, std::end(w)
, std::ostream_iterator<int>(std::cout, ", "));
7.1.7 The Reader Functor
Now that you might have developed some intuitions — for instance,
functors being some kind of containers — let me show you an example
which at first sight looks very different. Consider a mapping of type a
to the type of a function returning a. We haven’t really talked about
function types in depth — the full categorical treatment is coming —
but we have some understanding of those as programmers. In Haskell,
a function type is constructed using the arrow type constructor (->)
which takes two types: the argument type and the result type. You’ve
already seen it in infix form, a -> b, but it can equally well be used in
prefix form, when parenthesized:
(->) a b
Function1[A, B]
// or
A => B
Just like with regular functions, type functions of more than one argu-
ment can be partially applied. So when we provide just one type argu-
ment to the arrow, it still expects another one. That’s why:
(->) a
is a type constructor that still needs one more type to produce a complete function type. As it stands, it defines a whole family of type constructors
parameterized by a. Let’s see if this is also a family of functors. Deal-
ing with two type parameters can get a bit confusing, so let’s do some
renaming. Let’s call the argument type r and the result type a, in line
with our previous functor definitions. So our type constructor takes any
type a and maps it into the type r -> a. To show that it’s a functor, we
want to lift a function a -> b to a function that takes r -> a and returns
r -> b. These are the types that are formed using the type constructor
(->) r acting on, respectively, a and b. Here’s the type signature of fmap
applied to this case:
fmap :: (a -> b) -> (r -> a) -> (r -> b)

The solution is simply function composition:

instance Functor ((->) r) where
    fmap f g = f . g

It just works! If you like terse notation, this definition can be reduced
further by noticing that composition can be rewritten in prefix form:
fmap f g = (.) f g
and the arguments can be omitted to yield a direct equality of two func-
tions:
fmap = (.)
This combination of the type constructor (->) r with the above imple-
mentation of fmap is called the reader functor.
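A minimal Scala sketch of the reader functor's action on functions, for a fixed environment type R (the name fmapReader is just for illustration):

def fmapReader[R, A, B](f: A => B)(g: R => A): R => B =
  f compose g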
7.2 Functors as Containers

We've seen that, in programming, some functors can be thought of as containers of values. A container doesn't have to physically store its contents, though. In Haskell, you can even define infinite data structures, like the list of all natural numbers:
nats :: [Integer]
nats = [1..]
In the first line, a pair of square brackets is Haskell’s built-in type con-
structor for lists. In the second line, square brackets are used to create a
list literal. Obviously, an infinite list like this cannot be stored in mem-
ory. The compiler implements it as a function that generates Integers
on demand. Haskell effectively blurs the distinction between data and
code. A list could be considered a function, and a function could be
considered a table that maps arguments to results. The latter can even
be practical if the domain of the function is finite and not too large. It
would not be practical, however, to implement strlen as table lookup,
because there are infinitely many different strings. As programmers, we
don’t like infinities, but in category theory you learn to eat infinities for
breakfast. Whether it’s a set of all strings or a collection of all possi-
ble states of the Universe, past, present, and future — we can deal with
it! So I like to think of the functor object (an object of the type gen-
erated by an endofunctor) as containing a value or values of the type
over which it is parameterized, even if these values are not physically
present there. One example of a functor is a C++ std::future, which
may at some point contain a value, but it’s not guaranteed it will; and
if you want to access it, you may block waiting for another thread to
finish execution. Another example is a Haskell IO object, which may
contain user input, or the future versions of our Universe with “Hello
World!” displayed on the monitor. According to this interpretation, a
functor object is something that may contain a value or values of the
type it’s parameterized upon. Or it may contain a recipe for generating
those values. We are not at all concerned about being able to access the
values — that’s totally optional, and outside of the scope of the functor.
All we are interested in is to be able to manipulate those values using
functions. If the values can be accessed, then we should be able to see
the results of this manipulation. If they can’t, then all we care about
is that the manipulations compose correctly and that the manipulation
with an identity function doesn’t change anything. Just to show you
how much we don’t care about being able to access the values inside
a functor object, here’s a type constructor that ignores completely its
argument a:

data Const c a = Const c
The Const type constructor takes two types, c and a. Just like we did
with the arrow constructor, we are going to partially apply it to create
a functor. The data constructor (also called Const) takes just one value
of type c. It has no dependence on a. The type of fmap for this type
constructor is:
fmap :: (a -> b) -> Const c a -> Const c b
Because Const ignores its second type argument, the implementation of fmap is free to ignore its function argument: there's nothing for the function to act upon.
instance Functor (Const c) where
fmap _ (Const v) = Const v
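A Scala sketch of the same functor (fmapConst is an illustrative name, standing in for a full Functor instance):

case class Const[C, A](v: C)

def fmapConst[C, A, B](f: A => B)(c: Const[C, A]): Const[C, B] =
  Const(c.v)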
This might be a little clearer in C++ (I never thought I would utter those
words!), where there is a stronger distinction between type arguments
— which are compile-time — and values, which are run-time:

template<class C, class A>
struct Const {
    Const(C v) : _v(v) {}
    C _v;
};
Despite its weirdness, the Const functor plays an important role in many
constructions. In category theory, it’s a special case of the Δ𝑐 functor
I mentioned earlier — the endo-functor case of a black hole. We’ll be
seeing more of it in the future.
7.3 Functor Composition
It’s not hard to convince yourself that functors between categories com-
pose, just like functions between sets compose. A composition of two
functors, when acting on objects, is just the composition of their re-
spective object mappings; and similarly when acting on morphisms. Af-
ter jumping through two functors, identity morphisms end up as iden-
tity morphisms, and compositions of morphisms finish up as composi-
tions of morphisms. There’s really nothing much to it. In particular, it’s
easy to compose endofunctors. Remember the function maybeTail? I’ll
rewrite it using Haskell's built in implementation of lists:

maybeTail :: [a] -> Maybe [a]
maybeTail [] = Nothing
maybeTail (x : xs) = Just xs
(The empty list constructor that we used to call Nil is replaced with the
empty pair of square brackets []. The Cons constructor is replaced with
the infix operator : (colon).) The result of maybeTail is of a type that’s
a composition of two functors, Maybe and [], acting on a. Each of these
functors is equipped with its own version of fmap, but what if we want
to apply some function f to the contents of the composite: a Maybe list?
We have to break through two layers of functors. We can use fmap to
break through the outer Maybe. But we can’t just send f inside Maybe
because f doesn’t work on lists. We have to send (fmap f) to operate on
the inner list. For instance, let’s see how we can square the elements of
a Maybe list of integers:
square x = x * x

mis :: Maybe [Int]
mis = Just [1, 2, 3]

mis2 = fmap (fmap square) mis
The compiler, after analyzing the types, will figure out that, for the outer
fmap, it should use the implementation from the Maybe instance, and for
the inner one, the list functor implementation. It may not be immedi-
ately obvious that the above code may be rewritten as:
mis2 = (fmap . fmap) square mis
fmapO.compose(fmapL)
But remember that fmap may be considered a function of just one argu-
ment:

fmap :: (a -> b) -> (f a -> f b)
In our case, the second fmap in (fmap . fmap) takes as its argument:

square :: Int -> Int

and returns a function of the type:

[Int] -> [Int]
The first fmap then takes that function and returns a function:

Maybe [Int] -> Maybe [Int]
Finally, that function is applied to mis. So the composition of two func-
tors is a functor whose fmap is the composition of the corresponding
fmaps. Going back to category theory: It’s pretty obvious that functor
composition is associative (the mapping of objects is associative, and the
mapping of morphisms is associative). And there is also a trivial identity
functor in every category: it maps every object to itself, and every mor-
phism to itself. So functors have all the same properties as morphisms
in some category. But what category would that be? It would have to be
a category in which objects are categories and morphisms are functors.
It’s a category of categories. But a category of all categories would have
to include itself, and we would get into the same kinds of paradoxes that
made the set of all sets impossible. There is, however, a category of all
small categories called 𝐂𝐚𝐭 (which is big, so it can’t be a member of it-
self). A small category is one in which objects form a set, as opposed to
something larger than a set. Mind you, in category theory, even an in-
finite uncountable set is considered “small.” I thought I’d mention these
things because I find it pretty amazing that we can recognize the same
structures repeating themselves at many levels of abstraction. We’ll see
later that functors form categories as well.
7.4 Challenges
1. Can we turn the Maybe type constructor into a functor by defining:
fmap _ _ = Nothing
which ignores both of its arguments? (Hint: Check the functor laws.)
2. Prove the functor laws for the reader functor. Hint: it's really simple.
3. Implement the reader functor in your second favorite language
(the first being Haskell, of course).
4. Prove the functor laws for the list functor. Assume that the laws
are true for the tail part of the list you’re applying it to (in other
words, use induction).
8
Functoriality
Now that you know what a functor is, and have seen a few exam-
ples, let’s see how we can build larger functors from smaller ones.
In particular it’s interesting to see which type constructors (which cor-
respond to mappings between objects in a category) can be extended to
functors (which include mappings between morphisms).
8.1 Bifunctors
Since functors are morphisms in 𝐂𝐚𝐭 (the category of categories), a lot
of intuitions about morphisms — and functions in particular — apply to
functors as well. For instance, just like you can have a function of two
arguments, you can have a functor of two arguments, or a bifunctor. On
objects, a bifunctor maps every pair of objects, one from category 𝐂,
and one from category 𝐃, to an object in category 𝐄. Notice that this is
just saying that it’s a mapping from a Cartesian product of categories
𝐂 × 𝐃 to 𝐄.
That’s pretty straightforward. But
functoriality means that a bifunc-
tor has to map morphisms as well.
This time, though, it must map a pair
of morphisms, one from 𝐂 and one
from 𝐃, to a morphism in 𝐄.
Again, a pair of morphisms is just a single morphism in the product category 𝐂 × 𝐃. We define a
morphism in a Cartesian product of
categories as a pair of morphisms which goes from one pair of objects
to another pair of objects. These pairs of morphisms can be composed
in the obvious way:
(𝑓, 𝑔) ∘ (𝑓′, 𝑔′) = (𝑓 ∘ 𝑓′, 𝑔 ∘ 𝑔′)
The composition is associative and it has an identity — a pair of iden-
tity morphisms (id, id). So a Cartesian product of categories is indeed a
category.
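With morphisms modeled as Scala functions, this componentwise composition can be sketched as (composePairs is an illustrative name):

def composePairs[A1, B1, C1, A2, B2, C2](
  f: B1 => C1, g: B2 => C2
)(f1: A1 => B1, g1: A2 => B2): (A1 => C1, A2 => C2) =
  (f compose f1, g compose g1)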
But an easier way to think about bifunctors is that they are func-
tors in both arguments. So instead of translating functorial laws — as-
sociativity and identity preservation — from functors to bifunctors, it’s
enough to check them separately for each argument. If you have a map-
ping from a pair of categories to a third category, and you prove that it is
functorial in each argument separately (i.e., keeping the other argument
constant), then the mapping is automatically a bifunctor. By functorial
I mean that it acts on morphisms like an honest functor.
Let’s define a bifunctor in Haskell. In this case all three categories
are the same: the category of Haskell types. A bifunctor is a type con-
structor that takes two type arguments. Here’s the definition of the
Bifunctor typeclass taken directly from the library Control.Bifunctor:

class Bifunctor f where
    bimap :: (a -> c) -> (b -> d) -> f a b -> f c d
    bimap g h = first g . second h
    first :: (a -> c) -> f a b -> f c b
    first g = bimap g id
    second :: (b -> d) -> f a b -> f a d
    second = bimap id
The type variable f represents the bifunctor. You can see that in all type
signatures it’s always applied to two type arguments. The first type sig-
nature defines bimap: a mapping of two functions at once. The result
is a lifted function, (f a b -> f c d), operating on types generated by
the bifunctor’s type constructor. There is a default implementation of
bimap in terms of first and second, which shows that it’s enough to
have functoriality in each argument separately to be able to define a bi-
functor. (This is not true in general in category theory, because the two
maps may not commute: first g . second h might not be the same as
second h . first g.)
The two other type signatures, first and second, are the two fmaps wit-
nessing the functoriality of f in the first and the second argument, re-
spectively.
When declaring an instance of Bifunctor, you have a choice of ei-
ther implementing bimap and accepting the defaults for first and second,
or implementing both first and second and accepting the default for
bimap (of course, you may implement all three of them, but then it’s up
to you to make sure they are related to each other in this manner).
8.2 Product and Coproduct Bifunctors

An important example of a bifunctor is the categorical product. In Haskell, it's the pair constructor, and its Bifunctor instance is:

instance Bifunctor (,) where
    bimap f g (x, y) = (f x, g y)

There isn't much choice: bimap simply applies the first function to the
first component, and the second function to the second component of a
pair. The code pretty much writes itself, given the types:
bimap :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
def bimap[A, B, C, D](f: A => C)(g: B => D): ((A, B)) => (C,
↪ D)
The action of the bifunctor here is to make pairs of types, for instance:
(,) a b = (a, b)
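A Scala sketch of bimap for pairs, matching the signature above (bimapPair is an illustrative name):

def bimapPair[A, B, C, D](f: A => C)(g: B => D): ((A, B)) => (C, D) = {
  case (x, y) => (f(x), g(y))
}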
By analogy, the Either type constructor is also a bifunctor:

instance Bifunctor Either where
    bimap f _ (Left x) = Left (f x)
    bimap _ g (Right y) = Right (g y)

Earlier we've seen that 𝐒𝐞𝐭 is a monoidal category with respect to Cartesian product, with the singleton set as a unit. It is also
a monoidal category with respect to disjoint union, with the empty set
as a unit. What I haven’t mentioned is that one of the requirements for
a monoidal category is that the binary operator be a bifunctor. This is
a very important requirement — we want the monoidal product to be
compatible with the structure of the category, which is defined by mor-
phisms. We are now one step closer to the full definition of a monoidal
category (we still need to learn about naturality, before we can get
there).
8.3 Functorial Algebraic Data Types

Parameterized algebraic data types are built from simpler functorial building blocks. The two simplest ones are the Const functor, which ignores its second type argument, and the identity functor:
data Identity a = Identity a
type Id[A] = A
You can think of Identity as the simplest possible container that always
stores just one (immutable) value of type a.
Everything else in algebraic data structures is constructed from these
two primitives using products and sums.
With this new knowledge, let’s have a fresh look at the Maybe type
constructor:
data Maybe a = Nothing | Just a
It’s a sum of two types, and we now know that the sum is functorial. The
first part, Nothing can be represented as a Const () acting on a (the first
type parameter of Const is set to unit — later we’ll see more interesting
uses of Const). The second part is just a different name for the identity
functor. We could have defined Maybe, up to isomorphism, as:
type Maybe a = Either (Const () a) (Identity a)
This definition is a composition: the outer Either is a bifunctor whose two arguments are produced by functors (Const () and Identity) acting on a. In general, a composition of a bifunctor bf with two functors fu and gu is again a bifunctor. In Haskell:

newtype BiComp bf fu gu a b = BiComp (bf (fu a) (gu b))
If you’re getting a little lost, try applying BiComp to Either, Const (),
Identity, a, and b, in this order. You will recover our bare-bone version
of Maybe b (a is ignored).
The new data type BiComp is a bifunctor in a and b, but only if bf is
itself a Bifunctor and fu and gu are Functors. The compiler must know
that there will be a definition of bimap available for bf, and definitions
of fmap for fu and gu. In Haskell, this is expressed as a precondition in
the instance declaration: a set of class constraints followed by a double
arrow:
instance (Bifunctor bf, Functor fu, Functor gu) =>
        Bifunctor (BiComp bf fu gu) where
    bimap f1 f2 (BiComp x) = BiComp ((bimap (fmap f1) (fmap f2)) x)

The type of x in this implementation is:
bf (fu a) (gu b)
BF[FU[A], GU[B]]
which is quite a mouthful. The outer bimap breaks through the outer bf
layer, and the two fmaps dig under fu and gu, respectively. If the types
of f1 and f2 are:
f1 :: a -> a'
f2 :: b -> b'
then the final result is of the type bf (fu a') (gu b'):
def bimap: (FU[A] => FU[A1]) =>
(GU[B] => GU[B1]) =>
BiComp[BF, FU, GU, A, B] =>
BiComp[BF, FU, GU, A1, B1]
If you like jigsaw puzzles, these kinds of type manipulations can provide
hours of entertainment.
So it turns out that we didn’t have to prove that Maybe was a functor
— this fact followed from the way it was constructed as a sum of two
functorial primitives.
A perceptive reader might ask the question: If the derivation of the
Functor instance for algebraic data types is so mechanical, can’t it be
automated and performed by the compiler? Indeed, it can, and it is. You
need to enable a particular Haskell extension by including this line at
the top of your source file:
{-# LANGUAGE DeriveFunctor #-}

and then adding deriving Functor to your data declaration:

data Maybe a = Nothing | Just a deriving Functor

The corresponding fmap will be implemented for you.
8.4 Functors in C++
If you are a C++ programmer, you obviously are on your own as far as
implementing functors goes. However, you should be able to recognize
some types of algebraic data structures in C++. If such a data structure is
made into a generic template, you should be able to quickly implement
fmap for it.
Let’s have a look at a tree data structure, which we would define in
Haskell as a recursive sum type:
data Tree a = Leaf a | Node (Tree a) (Tree a)
    deriving Functor

The corresponding C++ version uses a class hierarchy:
template<class T>
struct Tree {
virtual ~Tree() {};
};
template<class T>
struct Leaf : public Tree<T> {
T _label;
Leaf(T l) : _label(l) {}
};
template<class T>
struct Node : public Tree<T> {
Tree<T> * _left;
Tree<T> * _right;
Node(Tree<T> * l, Tree<T> * r) : _left(l), _right(r) {}
};
In the implementation of fmap, we use dynamic_cast in place of pattern matching to discover which subclass we're dealing with:

template<class A, class B>
Tree<B> * fmap(std::function<B(A)> f, Tree<A> * t)
{
    Leaf<A> * pl = dynamic_cast<Leaf<A>*>(t);
    if (pl)
return new Leaf<B>(f (pl->_label));
Node<A> * pn = dynamic_cast<Node<A>*>(t);
if (pn)
return new Node<B>( fmap<A>(f, pn->_left)
, fmap<A>(f, pn->_right));
return nullptr;
}
8.5 The Writer Functor

I promised that we would come back to the Kleisli category described earlier. Morphisms in that category were functions returning the Writer embellishment:
type Writer a = (a, String)
object kleisli {
//allows us to use >=> as an infix operator
implicit class KleisliOps[A, B](m1: A => Writer[B]) {
def >=>[C](m2: B => Writer[C]): A => Writer[C] =
x => {
val (y, s1) = m1(x)
val (z, s2) = m2(y)
(z, s1 + s2)
}
}
}

The identity morphism in the Kleisli category, which I called return (pure in Scala), embellishes a value with an empty log:

def pure[A](x: A): Writer[A] = (x, "")
It turns out that, if you look at the types of these two functions long
enough (and I mean, long enough), you can find a way to combine them
to produce a function with the right type signature to serve as fmap. Like
this:
import kleisli._
def fmap[A, B](f: A => B): Writer[A] => Writer[B] =
identity[Writer[A]] _ >=> (x => pure(f(x)))
Here, the fish operator combines two functions: one of them is the fa-
miliar id, and the other is a lambda that applies return to the result
of acting with f on the lambda’s argument. The hardest part to wrap
your brain around is probably the use of id. Isn’t the argument to the
fish operator supposed to be a function that takes a “normal” type and
returns an embellished type? Well, not really. Nobody says that a in
a -> Writer b must be a “normal” type. It’s a type variable, so it can be
anything, in particular it can be an embellished type, like Writer b.
So id will take Writer a and turn it into Writer a. The fish operator
will fish out the value of a and pass it as x to the lambda. There, f will
turn it into a b and return will embellish it, making it Writer b. Putting
it all together, we end up with a function that takes Writer a and returns
Writer b, exactly what fmap is supposed to produce.
Notice that this argument is very general: you can replace Writer
with any type constructor. As long as it supports a fish operator and
return, you can define fmap as well. So the embellishment in the Kleisli
category is always a functor. (Not every functor, though, gives rise to a
Kleisli category.)
You might wonder if the fmap we have just defined is the same fmap
the compiler would have derived for us with deriving Functor. Interest-
ingly enough, it is. This is due to the way Haskell implements polymor-
phic functions. It’s called parametric polymorphism, and it’s a source of
so called theorems for free. One of those theorems says that, if there is an
implementation of fmap for a given type constructor, one that preserves
identity, then it must be unique.
8.6 Covariant and Contravariant Functors

Now that we have some experience with the writer functor, let's go back to the reader functor, which was based on the partially applied function-arrow type constructor:
type Reader r a = r -> a
But just like the pair type constructor, or the Either type constructor,
the function type constructor takes two type arguments. The pair and
Either were functorial in both arguments — they were bifunctors. Is the
function constructor a bifunctor too?
Let’s try to make it functorial in the first argument. We’ll start with a
type synonym — it’s just like the Reader but with the arguments flipped:
type Op r a = a -> r
This time we fix the return type, r, and vary the argument type, a. Let’s
see if we can somehow match the types in order to implement fmap,
which would have the following type signature:
fmap :: (a -> b) -> (a -> r) -> (b -> r)
With the two functions at our disposal, f :: a -> b and g :: a -> r, there is no way to construct a function b -> r. It can't be done. It could be done, though, if f went in the opposite direction. A type constructor that can lift a function going the opposite way is called a contravariant functor:

class Contravariant f where
    contramap :: (b -> a) -> f a -> f b
Notice that a contravariant functor is just a regular functor from the
opposite category. The regular functors, by the way — the kind we’ve
been studying thus far — are called covariant functors.
trait Contravariant[F[_]] {
def contramap[A, B](f: B => A)(fa: F[A]): F[B]
}
The instance for Op lifts the flipped function by precomposition:

instance Contravariant (Op r) where
    contramap f g = g . f

In Scala (with type Op[R, A] = A => R):
implicit def opContravariant[R] = new Contravariant[Op[R, ?]] {
def contramap[A, B](f: B => A)(g: Op[R, A]): Op[R, B] =
g compose f
}
Notice that the function f inserts itself before (that is, to the right of) the
contents of Op — the function g.
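As a usage sketch in Scala, precomposition adapts the input side of a fixed-result function (isEven and strIsEven are illustrative names):

type Op[R, A] = A => R

val isEven: Op[Boolean, Int] = _ % 2 == 0
val strIsEven: Op[Boolean, String] = isEven compose ((s: String) => s.length)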
The definition of contramap for Op may be made even terser, if you
notice that it’s just the function composition operator with the argu-
ments flipped. There is a special function for flipping arguments, called
flip:
flip :: (a -> b -> c) -> (b -> a -> c)
flip f y x = f x y

With it, we get:

contramap = flip (.)
8.7 Profunctors
We’ve seen that the function-arrow operator is contravariant in its first
argument and covariant in the second. Is there a name for such a beast?
It turns out that, if the target category is 𝐒𝐞𝐭, such a beast is called a
profunctor. Because a contravariant functor is equivalent to a covariant
functor from the opposite category, a profunctor is defined as:
𝐂ᵒᵖ × 𝐃 → 𝐒𝐞𝐭
Since, to first approximation, Haskell types are sets, we apply the name
Profunctor to a type constructor p of two arguments, which is contra-
functorial in the first argument and functorial in the second. Here’s the
appropriate typeclass taken from the Data.Profunctor library:
class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d
  dimap f g = lmap f . rmap g
  lmap :: (a -> b) -> p b c -> p a c
  lmap f = dimap f id
  rmap :: (b -> c) -> p a b -> p a c
  rmap = dimap id

trait Profunctor[F[_, _]] {
  def bimap[A, B, C, D]: (A => B) => (C => D) => F[B, C] => F[A, D]

  def lmap[A, B, C]: (A => B) => F[B, C] => F[A, C] =
    f => bimap[A, B, C, C](f)(identity[C])
def rmap[A, B, C]: (B => C) =>
F[A, B] => F[A, C] =
bimap[A, A, B, C](identity[A])
}
All three functions come with default implementations. Just like with
Bifunctor, when declaring an instance of Profunctor, you have a choice
of either implementing dimap and accepting the defaults for lmap and
rmap, or implementing both lmap and rmap and accepting the default for
dimap.
implicit val function1Profunctor = new Profunctor[Function1]
↪ {
override def bimap[A, B, C, D]
: (A => B) => (C => D) => (B => C)
=> (A => D) = ab => cd => bc =>
cd compose bc compose ab
}
Profunctors have their application in the Haskell lens library. We’ll see
them again when we talk about ends and coends.
8.8 The Hom-Functor

The above examples are reflections of a general statement: the mapping that takes a pair of objects 𝑎 and 𝑏 and assigns to it the set of morphisms between them, the hom-set 𝐂(𝑎, 𝑏), is a functor from the product category 𝐂ᵒᵖ × 𝐂 to 𝐒𝐞𝐭. Consider its action on a pair of morphisms:

𝑓 ∷ 𝑎′ → 𝑎
𝑔 ∷ 𝑏 → 𝑏′
The lifting of this pair must be a morphism (a function) from the set
𝐂(𝑎, 𝑏) to the set 𝐂(𝑎′ , 𝑏 ′ ). Just pick any element ℎ of 𝐂(𝑎, 𝑏) (it’s a
morphism from 𝑎 to 𝑏) and assign to it:
𝑔∘ℎ∘𝑓
8.9 Challenges
1. Show that the data type:
data Pair a b = Pair a b
is a bifunctor. For additional credit implement all three methods of Bifunctor and use equational reasoning to show that these definitions are compatible with the default implementations whenever they can be applied.
2. Show the isomorphism between the standard definition of Maybe and this desugaring:
type Maybe' a = Either (Const () a) (Identity a)
Hint: Define two mappings between the two implementations.
3. Let's try another data structure. I call it a PreList because it's a precursor to a List. It replaces recursion with a type parameter b:
data PreList a b = Nil | Cons a b
You could recover our earlier definition of a List by recursively
applying PreList to itself (we’ll see how it’s done when we talk
about fixed points).
Show that PreList is an instance of Bifunctor.
4. Show that the following data types define bifunctors in a and b:
data K2 c a b = K2 c
data Fst a b = Fst a
data Snd a b = Snd b
For additional credit, check your solutions against Conor McBride's paper Clowns to the Left of Me, Jokers to the Right1.
1 http://strictlypositive.org/CJ.pdf
9 Function Types

So far I've been glossing over the meaning of function types. A func-
tion type is different from other types.
Take Integer, for instance: It’s just a set of
integers. Bool is a two element set. But a func-
tion type 𝑎 → 𝑏 is more than that: it’s a set
of morphisms between objects 𝑎 and 𝑏. A set of
morphisms between two objects in any category
is called a hom-set. It just so happens that in the
category 𝐒𝐞𝐭 every hom-set is itself an object in
the same category —because it is, after all, a set.
Hom-set in 𝐒𝐞𝐭 is just a set

The same is not true of other categories, where hom-sets are external to a category. They are even called external hom-sets.
It's the self-referential nature of the category 𝐒𝐞𝐭 that makes function types special. But there is a way, at least in some categories, to construct objects that represent hom-sets. Such objects are called internal hom-sets.

Hom-set in category 𝐂 is an external set
9.1 Universal Construction

Let's use a universal construction to define a function type. We will need a pattern that involves three objects: the function type that we are constructing, the argument type, and the result type.
The obvious pattern that connects these three types is called func-
tion application or evaluation. Given a candidate for a function type,
let’s call it 𝑧 (notice that, if we are not in the category 𝐒𝐞𝐭, this is just
an object like any other object), and the argument type 𝑎 (an object),
the application maps this pair to the result type 𝑏 (an object). We have
three objects, two of them fixed (the ones representing the argument
type and the result type).
We also have the application, which is a mapping. How do we incor-
porate this mapping into our pattern? If we were allowed to look inside
objects, we could pair a function 𝑓 (an element of 𝑧) with an argument
𝑥 (an element of 𝑎) and map it to 𝑓 𝑥 (the application of 𝑓 to 𝑥, which is
an element of 𝑏).
In Set we can pick a function 𝑓 from a set of functions 𝑧 and we can pick an argument 𝑥 from the
set (type) 𝑎. We get an element 𝑓 𝑥 in the set (type) 𝑏.
But instead of dealing with individual pairs, we can as well talk about the whole product of the function type 𝑧 and the argument type 𝑎. The product 𝑧 × 𝑎 is an object, and we can pick, as our application
morphism, an arrow 𝑔 from that object to 𝑏. In 𝐒𝐞𝐭, 𝑔 would be the
function that maps every pair (𝑓 , 𝑥) to 𝑓 𝑥.
So that’s the pattern: a product of two objects 𝑧 and 𝑎 connected to
another object 𝑏 by a morphism 𝑔.
A pattern of objects and morphisms that is the starting point of the universal construction
Is this pattern specific enough to single out the function type using
a universal construction? Not in every category. But in the categories of
interest to us it is. And another question: Would it be possible to define a
function object without first defining a product? There are categories in
which there is no product, or there isn’t a product for all pairs of objects.
The answer is no: there is no function type, if there is no product type.
We’ll come back to this later when we talk about exponentials.
Let’s review the universal construction. We start with a pattern of
objects and morphisms. That’s our imprecise query, and it usually yields
lots and lots of hits. In particular, in 𝐒𝐞𝐭, pretty much everything is con-
nected to everything. We can take any object 𝑧, form its product with
𝑎, and there’s going to be a function from it to 𝑏 (except when 𝑏 is an
empty set).
That’s when we apply our secret weapon: ranking. This is usually
done by requiring that there be a unique mapping between candidate
objects — a mapping that somehow factorizes our construction. In our
case, we’ll decree that 𝑧 together with the morphism 𝑔 from 𝑧 × 𝑎 to 𝑏
is better than some other 𝑧 ′ with its own application 𝑔 ′ , if and only if
there is a unique mapping ℎ from 𝑧 ′ to 𝑧 such that the application of
𝑔 ′ factors through the application of 𝑔. (Hint: Read this sentence while
looking at the picture.)
Now here’s the tricky part, and the main reason I postponed this
particular universal construction till now. Given the morphism
ℎ ∷ 𝑧 ′ → 𝑧, we want to close the diagram that has both 𝑧 ′ and 𝑧 crossed
with 𝑎. What we really need, given the mapping ℎ from 𝑧 ′ to 𝑧, is a
mapping from 𝑧 ′ × 𝑎 to 𝑧 × 𝑎. And now, after discussing the functoriality
of the product, we know how to do it. Because the product itself is a
functor (more precisely an endo-bi-functor), it’s possible to lift pairs of
morphisms. In other words, we can define not only products of objects
but also products of morphisms.
Since we are not touching the second component of the product 𝑧′ × 𝑎, we will lift the pair of morphisms (ℎ, id), where id is an identity on 𝑎.
So, here’s how we can factor one application, 𝑔, out of another ap-
plication 𝑔 ′ :
𝑔 ′ = 𝑔 ∘ (ℎ × id)
The key here is the action of the product on morphisms.
The third part of the universal construction is selecting the object
that is universally the best. Let’s call this object 𝑎 ⇒ 𝑏 (think of this
as a symbolic name for one object, not to be confused with a Haskell
typeclass constraint — I’ll discuss different ways of naming it later). This
object comes with its own application — a morphism from (𝑎 ⇒ 𝑏) × 𝑎
to 𝑏 — which we will call 𝑒𝑣𝑎𝑙. The object 𝑎 ⇒ 𝑏 is the best if any other
candidate for a function object can be uniquely mapped to it in such a
way that its application morphism 𝑔 factorizes through 𝑒𝑣𝑎𝑙. This object
is better than any other object according to our ranking.
The definition of the universal function object. This is the same diagram as above, but now the
object 𝑎 ⇒ 𝑏 is universal.
Formally:
A function object from 𝑎 to 𝑏 is an object 𝑎 ⇒ 𝑏 together with the
morphism
𝑒𝑣𝑎𝑙 ∷ ((𝑎 ⇒ 𝑏) × 𝑎) → 𝑏
such that for any other object 𝑧 with a morphism

𝑔 ∷ 𝑧 × 𝑎 → 𝑏

there is a unique morphism

ℎ ∷ 𝑧 → (𝑎 ⇒ 𝑏)

that factors 𝑔 through 𝑒𝑣𝑎𝑙:

𝑔 = 𝑒𝑣𝑎𝑙 ∘ (ℎ × id)
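In 𝐒𝐞𝐭 (and in Scala, taking pairs as products), 𝑒𝑣𝑎𝑙 is just function application, and the factorizing morphism is currying. A minimal sketch, with the names eval and h chosen for illustration:

// eval applies the first component (a function) to the second (its argument).
def eval[A, B](p: (A => B, A)): B = p._1(p._2)

// Any g: (Z, A) => B factors through eval; the unique h is g.curried,
// so that g(z, a) == eval((h(g)(z), a)) for all z and a.
def h[Z, A, B](g: (Z, A) => B): Z => (A => B) = g.curried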
9.2 Currying
Let’s have a second look at all the candidates for the function object.
This time, however, let’s think of the morphism 𝑔 as a function of two
variables, 𝑧 and 𝑎.
𝑔 ∷ 𝑧 × 𝑎 → 𝑏
Being a morphism from a product comes as close as it gets to being a
function of two variables. In particular, in 𝐒𝐞𝐭, 𝑔 is a function from pairs
of values, one from the set 𝑧 and one from the set 𝑎.
On the other hand, the universal property tells us that for each such
𝑔 there is a unique morphism ℎ that maps 𝑧 to a function object 𝑎 ⇒ 𝑏.
ℎ ∷ 𝑧 → (𝑎 ⇒ 𝑏)
In 𝐒𝐞𝐭, this just means that ℎ is a function that takes one variable of
type 𝑧 and returns a function from 𝑎 to 𝑏. That makes ℎ a higher order
function. Therefore the universal construction establishes a one-to-one
correspondence between functions of two variables and functions of
one variable returning functions. This correspondence is called curry-
ing, and ℎ is called the curried version of 𝑔.
This correspondence is one-to-one, because given any 𝑔 there is a
unique ℎ, and given any ℎ you can always recreate the two-argument
function 𝑔 using the formula:
𝑔 = 𝑒𝑣𝑎𝑙 ∘ (ℎ × id)
In Haskell (and Scala), a function returning a function:

a -> (b -> c)

A => (B => C)

is considered equivalent to a function of two variables. Because the function arrow is right-associative, the parentheses may be dropped:

a -> b -> c

A => B => C
These two definitions are equivalent, and either can be partially applied to just one argument, producing a one-argument function. Alternatively, a function of two variables can be defined as a function taking a pair — the uncurried form:

(a, b) -> c

(A, B) => C
It’s trivial to convert between the two representations, and the two
(higher-order) functions that do it are called, unsurprisingly, curry and
uncurry:
curry :: ((a, b) -> c) -> (a -> b -> c)
curry f a b = f (a, b)

and

uncurry :: (a -> b -> c) -> ((a, b) -> c)
uncurry f (a, b) = f a b
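In Scala, the same two conversions can be sketched as follows (the standard library also provides them as Function2#curried and Function.uncurried):

def curry[A, B, C](f: (A, B) => C): A => B => C =
  a => b => f(a, b)

def uncurry[A, B, C](f: A => B => C): (A, B) => C =
  (a, b) => f(a)(b)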
Notice that curry is the factorizer for the universal construction of the
function object. This is especially apparent if it’s rewritten in this form:
def factorizer[A, B, C](g: (A, B) => C): A => (B => C) =
a => (b => g(a, b))
9.3 Exponentials
In mathematical literature, the function object, or the internal hom-
object between two objects 𝑎 and 𝑏, is often called the exponential and
denoted by 𝑏 𝑎 . Notice that the argument type is in the exponent. This
notation might seem strange at first, but it makes perfect sense if you
think of the relationship between functions and products. We’ve already
seen that we have to use the product in the universal construction of the
internal hom-object, but the connection goes deeper than that.
This is best seen when you consider functions between finite types
— types that have a finite number of values, like Bool, Char, or even Int
or Double. Such functions, at least in principle, can be fully memoized
or turned into data structures to be looked up. And this is the essence of
the equivalence between functions, which are morphisms, and function
types, which are objects.
For instance a (pure) function from Bool is completely specified by
a pair of values: one corresponding to False, and one corresponding to
True. The set of all possible functions from Bool to, say, Int is the set of
all pairs of Ints. This is the same as the product Int × Int or, being a
little creative with notation, Int².
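Here's a minimal Scala sketch of that equivalence; the helper names tabulate and index are mine, not from the text:

// A pure function from Boolean is fully determined by two values...
def tabulate[A](f: Boolean => A): (A, A) = (f(false), f(true))

// ...and any pair of values determines such a function.
def index[A](p: (A, A)): Boolean => A = b => if (b) p._2 else p._1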
For another example, let’s look at the C++ type char, which contains
256 values (Haskell Char is larger, because Haskell uses Unicode). There
are several functions in the C part of the C++ Standard Library that are
usually implemented using lookups. Functions like isupper or isspace
are implemented using tables, which are equivalent to tuples of 256
Boolean values. A tuple is a product type, so we are dealing with prod-
ucts of 256 Booleans: bool × bool × bool × ... × bool. We know from
arithmetics that an iterated product defines a power. If you “multiply”
bool by itself 256 (or char) times, you get bool to the power of char, or boolᶜʰᵃʳ.

How many values are there in the type defined as 256-tuples of bool? Exactly 2²⁵⁶. This is also the number of different functions from char to bool, each function corresponding to a unique 256-tuple. You can similarly calculate that the number of functions from bool to char is 256²,
and so on. The exponential notation for function types makes perfect
sense in these cases.
We probably wouldn’t want to fully memoize a function from int
or double. But the equivalence between functions and data types, if
not always practical, is there. There are also infinite types, for instance
lists, strings, or trees. Eager memoization of functions from those types
would require infinite storage. But Haskell is a lazy language, so the
boundary between lazily evaluated (infinite) data structures and func-
tions is fuzzy. This function vs. data duality explains the identification of
Haskell’s function type with the categorical exponential object — which
corresponds more to our idea of data.
9.4 Cartesian Closed Categories

A category in which both the product and the exponential exist for any pair of objects, and which has a terminal object, is called a Cartesian closed category. 𝐒𝐞𝐭 is one example of such a category.
If you consider an exponential as an iterated product (possibly infinitely
many times), then you can think of a Cartesian closed category as one
supporting products of an arbitrary arity. In particular, the terminal ob-
ject can be thought of as a product of zero objects — or the zero-th power
of an object.
What’s interesting about Cartesian closed categories from the per-
spective of computer science is that they provide models for the simply
typed lambda calculus, which forms the basis of all typed programming
languages.
The terminal object and the product have their duals: the initial ob-
ject and the coproduct. A Cartesian closed category that also supports
those two, and in which product can be distributed over coproduct
𝑎 × (𝑏 + 𝑐) = 𝑎 × 𝑏 + 𝑎 × 𝑐
(𝑏 + 𝑐) × 𝑎 = 𝑏 × 𝑎 + 𝑐 × 𝑎
is called a bicartesian closed category. We’ll see in the next section that
bicartesian closed categories, of which 𝐒𝐞𝐭 is a prime example, have
some interesting properties.
9.5 Exponentials and Algebraic Data Types

The interpretation of function types as exponentials fits very well into the scheme of algebraic data types. It turns out that all the basic identities from high-school algebra relating numbers zero and one, sums, products, and exponentials hold pretty much unchanged in any bicartesian closed category for, respectively, initial and terminal objects, coproducts, products, and exponentials. We don't yet have the tools to prove them all, but I'll go through them to build up intuition.
9.5.1 Zeroth Power
𝑎⁰ = 1
In the categorical interpretation, we replace 0 with the initial object, 1
with the final object, and equality with isomorphism. The exponential is
the internal hom-object. This particular exponential represents the set
of morphisms going from the initial object to an arbitrary object 𝑎. By
the definition of the initial object, there is exactly one such morphism,
so the hom-set 𝐂(0, 𝑎) is a singleton set. A singleton set is the terminal
object in 𝐒𝐞𝐭, so this identity trivially works in 𝐒𝐞𝐭. What we are saying
is that it works in any bicartesian closed category.
In Haskell, we replace 0 with Void; 1 with the unit type (); and the
exponential with function type. The claim is that the set of functions
from Void to any type a is equivalent to the unit type — which is a
singleton. In other words, there is only one function Void -> a. We’ve
seen this function before: it’s called absurd.
This is a little bit tricky, for two reasons. One is that in Haskell we
don’t really have uninhabited types — every type contains the “result of
a never ending calculation,” or the bottom. The second reason is that all
implementations of absurd are equivalent because, no matter what they
do, nobody can ever execute them. There is no value that can be passed
to absurd. (And if you manage to pass it a never ending calculation, it
will never return!)
9.5.2 Powers of One

1ᵃ = 1

In 𝐒𝐞𝐭, this restates that there is just one function from any set to the singleton set. In general, the internal hom-object from 𝑎 to the terminal
object is isomorphic to the terminal object itself.
In Haskell, there is only one function from any type a to unit. We’ve
seen this function before — it’s called unit. You can also think of it as
the function const partially applied to ().
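In Scala, with Unit in the role of (), this is a one-line sketch:

// The unique (pure) function from any type to Unit.
def unit[A](a: A): Unit = ()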
9.5.3 First Power

𝑎¹ = 𝑎

This is a restatement of the observation that morphisms from the terminal object can be used to pick "elements" of the object 𝑎; the set of such morphisms is isomorphic to the object itself.

9.5.4 Exponentials of Sums

𝑎ᵇ⁺ᶜ = 𝑎ᵇ × 𝑎ᶜ

Categorically, this says that the exponential from a coproduct of two objects is isomorphic to a product of two exponentials. In Haskell, it expresses the fact that a function from a sum type is equivalent to a pair of functions, one per case — which is exactly how we define such functions, by case analysis:

f :: Either Int Double -> String
f (Left n) = if n < 0 then "Negative int" else "Positive int"
f (Right x) = if x < 0.0 then "Negative double" else "Positive double"

9.5.5 Exponentials of Exponentials

(𝑏ᵃ)ᶜ = 𝑏^(𝑎×𝑐)

This is just a way of expressing currying purely in terms of exponential objects: a function returning a function is equivalent to a function from a product.
9.5.6 Exponentials over Products
(𝑎 × 𝑏)ᶜ = 𝑎ᶜ × 𝑏ᶜ
In Haskell: A function returning a pair is equivalent to a pair of func-
tions, each producing one element of the pair.
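A minimal Scala sketch of this isomorphism (the helper names fork and unfork are assumptions, not from the text):

// From a pair-returning function to a pair of functions...
def unfork[A, B, C](h: C => (A, B)): (C => A, C => B) =
  (c => h(c)._1, c => h(c)._2)

// ...and back; the two directions are mutually inverse.
def fork[A, B, C](f: C => A, g: C => B): C => (A, B) =
  c => (f(c), g(c))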
It’s pretty incredible how those simple high-school algebraic iden-
tities can be lifted to category theory and have practical application in
functional programming.
9.6 Curry-Howard Isomorphism

I have already mentioned the correspondence between logic and lambda calculus, the Curry-Howard isomorphism: types are propositions, and a proposition is true if its corresponding type is inhabited. As an illustration, let's revisit the evaluation morphism. In Haskell:
eval :: ((a -> b), a) -> b
𝑒𝑣𝑎𝑙 ∷ (𝑎 ⇒ 𝑏) × 𝑎 → 𝑏
which defines the function type 𝑎 ⇒ 𝑏 (or the exponential object 𝑏 𝑎 ).
Let’s translate this signature to a logical predicate using the Curry-
Howard isomorphism:
((𝑎 ⇒ 𝑏) ∧ 𝑎) ⇒ 𝑏
Here’s how you can read this statement: If it’s true that 𝑏 follows from
𝑎, and 𝑎 is true, then 𝑏 must be true. This makes perfect intuitive sense
and has been known since antiquity as modus ponens. We can prove this
theorem by implementing the function:
eval :: ((a -> b), a) -> b
eval (f, x) = f x

It takes a pair consisting of a function f and an argument x, and produces a result of type b by simply applying the function f to x. By implementing this func-
tion I have just shown that the type ((a -> b), a) -> b is inhabited.
Therefore modus ponens is true in our logic.
How about a predicate that is blatantly false? For instance: if 𝑎 or 𝑏
is true then 𝑎 must be true.
𝑎∨𝑏 ⇒𝑎
This is obviously wrong because you can choose an 𝑎 that is false and a
𝑏 that is true, and that’s a counter-example.
Mapping this predicate into a function signature using the Curry-
Howard isomorphism, we get:
Either a b -> a
Either[A, B] => A
Try as you may, you can’t implement this function — you can’t produce
a value of type a if you are called with the Right value. (Remember, we
are talking about pure functions.)
Finally, we come to the meaning of the absurd function:
𝑓 𝑎𝑙𝑠𝑒 ⇒ 𝑎
Anything follows from falsehood (ex falso quodlibet). Here's one possible proof (implementation) of this statement (function) in Haskell:

absurd (Void a) = absurd a

where Void is defined as:

newtype Void = Void Void
9.7 Bibliography
1. Ralph Hinze, Daniel W. H. James, Reason Isomorphically!1 . This
paper contains proofs of all those high-school algebraic identities
in category theory that I mentioned in this chapter.
1 http://www.cs.ox.ac.uk/ralf.hinze/publications/WGP10.pdf
10 Natural Transformations

We talked about functors as mappings between categories that preserve their structure. A functor embeds one category in another; it may collapse things, but it never breaks connections. There may be many functors between two given categories, and they are sometimes very different. One may collapse the whole source category
into one object, another may map every object to a different object and
every morphism to a different morphism. The same blueprint may be re-
alized in many different ways. Natural transformations help us compare
these realizations. They are mappings of functors — special mappings
that preserve their functorial nature.
Consider two functors 𝐹 and 𝐺 between categories 𝐂 and 𝐃. If you
focus on just one object 𝑎 in 𝐂, it is mapped to two objects: 𝐹 𝑎 and 𝐺𝑎.
A mapping of functors should therefore map 𝐹 𝑎 to 𝐺𝑎.
𝛼𝑎 ∷ 𝐹 𝑎 → 𝐺𝑎
Keep in mind that 𝑎 is an object in 𝐂 while 𝛼𝑎 is a morphism in 𝐃.
If, for some 𝑎, there is no morphism between 𝐹 𝑎 and 𝐺𝑎 in 𝐃, there
can be no natural transformation between 𝐹 and 𝐺.
Of course that’s only half of the story, because functors not only
map objects, they map morphisms as well. So what does a natural trans-
formation do with those mappings? It turns out that the mapping of
morphisms is fixed — under any natural transformation between 𝐹 and
𝐺, 𝐹 𝑓 must be transformed into 𝐺 𝑓 . What’s more, the mapping of mor-
phisms by the two functors drastically restricts the choices we have in
defining a natural transformation that’s compatible with it. Consider
a morphism 𝑓 between two objects 𝑎 and 𝑏 in 𝐂. It’s mapped to two
morphisms, 𝐹 𝑓 and 𝐺𝑓 in 𝐃:
𝐹𝑓 ∷ 𝐹𝑎 → 𝐹𝑏
𝐺𝑓 ∷ 𝐺𝑎 → 𝐺𝑏
𝛼𝑎 ∷ 𝐹 𝑎 → 𝐺𝑎
𝛼𝑏 ∷ 𝐹 𝑏 → 𝐺𝑏
Now we have two ways of getting from 𝐹 𝑎 to 𝐺𝑏. To make sure that
they are equal, we must impose the naturality condition that holds for
any 𝑓 :
𝐺𝑓 ∘ 𝛼𝑎 = 𝛼𝑏 ∘ 𝐹 𝑓
The naturality condition is a pretty stringent requirement. For instance,
if the morphism 𝐹 𝑓 is invertible, naturality determines 𝛼𝑏 in terms of
𝛼𝑎 . It transports 𝛼𝑎 along 𝑓 :
𝛼𝑏 = (𝐺𝑓 ) ∘ 𝛼𝑎 ∘ (𝐹 𝑓 )−1
If there is more than one invertible morphism between two objects, all
these transports have to agree. In general, though, morphisms are not
invertible; but you can see that the existence of natural transforma-
tions between two functors is far from guaranteed. So the scarcity or
the abundance of functors that are related by natural transformations
may tell you a lot about the structure of categories between which they
operate. We’ll see some examples of that when we talk about limits and
the Yoneda lemma.
Looking at a natural transformation component-wise, one may say
that it maps objects to morphisms. Because of the naturality condition,
one may also say that it maps morphisms to commuting squares — there
is one commuting naturality square in 𝐃 for every morphism in 𝐂.
10.1 Polymorphic Functions

Let's see how natural transformations work in programming, where our category is the category of types, and functors are type constructors. Given two functors F and G and a type a, F maps a to F a and G maps it to 𝐺𝑎. The component of a natural transformation alpha at a is
a function from 𝐹 𝑎 to 𝐺𝑎. In pseudo-Haskell:
alpha_a :: F a -> G a

A natural transformation in Haskell is a polymorphic function defined uniformly for all types a:

alpha :: forall a . F a -> G a

The forall a is optional (spelling it out requires the ExplicitForAll language extension), so it's usually written as:

alpha :: F a -> G a
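In Scala, where this kind of implicit universal quantification isn't available, a natural transformation can be encoded as a value with a polymorphic apply method. A sketch — the trait name ~> is a common community convention, not from the text:

// A natural transformation: one polymorphic function, a component per type A.
trait ~>[F[_], G[_]] {
  def apply[A](fa: F[A]): G[A]
}

// Example: the head-extracting transformation from List to Option.
val headNat: List ~> Option = new (List ~> Option) {
  def apply[A](fa: List[A]): Option[A] = fa.headOption
}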
C++, on the other hand, supports by default ad hoc polymorphism,
which means that a template doesn’t have to be well-defined for all
types. Whether a template will work for a given type is decided at in-
stantiation time, where a concrete type is substituted for the type pa-
rameter. Type checking is deferred, which unfortunately often leads to
incomprehensible error messages.
In C++, there is also a mechanism for function overloading and tem-
plate specialization, which allows different definitions of the same func-
tion for different types. In Haskell this functionality is provided by type
classes and type families.
Haskell’s parametric polymorphism has an unexpected consequence:
any polymorphic function of the type:
alpha :: F a -> G a

where F and G are functors, automatically satisfies the naturality condition. Here it is in categorical notation (𝑓 being a morphism between 𝑎 and 𝑏):

𝐺𝑓 ∘ 𝛼𝑎 = 𝛼𝑏 ∘ 𝐹𝑓
In Haskell, the action of a functor G on a morphism f is implemented
using fmap. I’ll first write it in pseudo-Haskell, with explicit type anno-
tations:

fmap_G f . alpha_a = alpha_b . fmap_F f

Because of type inference, these annotations are not necessary, and the
following equation holds:
fmap f . alpha = alpha . fmap f
This is still not real Haskell — function equality is not expressible in code
— but it’s an identity that can be used by the programmer in equational
reasoning; or by the compiler, to implement optimizations.
The reason why the naturality condition is automatic in Haskell has
to do with “theorems for free.” Parametric polymorphism, which is used
to define natural transformations in Haskell, imposes very strong limi-
tations on the implementation — one formula for all types. These limi-
tations translate into equational theorems about such functions. In the
case of functions that transform functors, free theorems are the natu-
rality conditions.1
One way of thinking about functors in Haskell that I mentioned
earlier is to consider them generalized containers. We can continue this
analogy and consider natural transformations to be recipes for repack-
aging the contents of one container into another container. We are not
touching the items themselves: we don’t modify them, and we don’t cre-
ate new ones. We are just copying (some of) them, sometimes multiple
times, into a new container.
The naturality condition becomes the statement that it doesn’t mat-
ter whether we modify the items first, through the application of fmap,
and repackage later; or repackage first, and then modify the items in
the new container, with its own implementation of fmap. These two ac-
tions, repackaging and fmapping, are orthogonal. “One moves the eggs,
the other boils them.”
Let’s see a few examples of natural transformations in Haskell. The
first is between the list functor, and the Maybe functor. It returns the head
of the list, but only if the list is non-empty:
1 You may read more about free theorems in my blog post Parametricity: Money for Nothing and Theorems for Free.
safeHead :: [a] -> Maybe a
safeHead [] = Nothing
safeHead (x:xs) = Just x
Let's verify the naturality condition for safeHead. For the empty list, both sides reduce to None:

safeHead(fmap(f)(List.empty)) == safeHead(List.empty) == None

Here fmap for the list functor is defined as:

fmap f [] = []
fmap f (x:xs) = f x : fmap f xs

and fmap for Maybe as:

fmap f Nothing = Nothing
fmap f (Just x) = Just (f x)
An interesting case is when one of the functors is the trivial Const func-
tor. A natural transformation from or to a Const functor looks just like a
function that’s either polymorphic in its return type or in its argument
type.
For instance, length can be thought of as a natural transformation
from the list functor to the Const Int functor:
length :: [a] -> Const Int a
length [] = Const 0
length (x:xs) = Const (1 + unConst (length xs))

Here, unConst is used to peel off the Const constructor:
def unConst[C, A]: Const[C, A] => C = {
case Const(x) => x
}
Another common functor that we’ve seen already, and which will play
an important role in the Yoneda lemma, is the Reader functor. I will
rewrite its definition as a newtype:
newtype Reader e a = Reader (e -> a)
case class Reader[E, A](run: E => A)
Consider natural transformations from Reader () to the Maybe functor:

alpha :: Reader () a -> Maybe a

There are only two of them. One ignores its argument and always returns Nothing:

dumb (Reader _) = Nothing

and the other applies the reader function to the unit value and wraps the result:

obvious (Reader g) = Just (g ())

(The only thing you can do with g is to apply it to the unit value ().)
And, indeed, as predicted by the Yoneda lemma, these correspond to
the two elements of the Maybe () type, which are Nothing and Just ().
We’ll come back to the Yoneda lemma later — this was just a little teaser.
10.2 Beyond Naturality

A parametrically polymorphic function between two functors (including the edge case of the Const functor) is always a natural transformation. Since all standard algebraic data types are functors, any polymorphic function between such types is a natural transformation.

However, function types are not covariant in the argument type.
They are contravariant. Of course contravariant functors are equivalent
to covariant functors from the opposite category. Polymorphic func-
tions between two contravariant functors are still natural transforma-
tions in the categorical sense, except that they work on functors from
the opposite category to Haskell types.
You might remember the example of a contravariant functor we’ve
looked at before:
newtype Op r a = Op (a -> r)
For instance, here's a polymorphic function from Op Bool to Op String, in Scala:
def predToStr[A]: Op[Boolean, A] => Op[String, A] = {
case Op(f) =>
Op(x => if (f(x)) "T" else "F")
}
But since the two functors are not covariant, this is not a natural transformation in Hask. However, because they are both contravariant, they satisfy the "opposite" naturality condition:

contramap f . predToStr = predToStr . contramap f

Notice that the function f must go in the opposite direction than what you'd use with fmap, because of the signature of contramap:

contramap :: (b -> a) -> (f a -> f b)

Are there any type constructors that are not functors, whether covariant
or contravariant? Here’s one example:
a -> a
A => A
This is not a functor because the same type a is used both in the negative
(contravariant) and positive (covariant) position. You can’t implement
fmap or contramap for this type. Therefore a function of the signature:
(a -> a) -> f a
where f is an arbitrary functor, cannot be a natural transformation.

10.3 Functor Category

Now that we have mappings between functors — natural transformations — it's only natural to ask whether functors themselves form a category. And indeed they do: for any pair of categories 𝐂 and 𝐃, the functors between them are the objects, and natural transformations are the morphisms. To define composition, suppose 𝛼 is a natural transformation from functor 𝐹 to functor 𝐺 (both going from 𝐂 to 𝐃); its component at 𝑎 is a morphism 𝛼𝑎 ∷ 𝐹𝑎 → 𝐺𝑎.
We’d like to compose 𝛼 with 𝛽, which is a natural transformation from
functor 𝐺 to 𝐻 . The component of 𝛽 at 𝑎 is a morphism:
𝛽𝑎 ∷ 𝐺𝑎 → 𝐻 𝑎
The composition has components that are compositions of the components:

(𝛽 ⋅ 𝛼)𝑎 = 𝛽𝑎 ∘ 𝛼𝑎

One (long) look at a diagram convinces us that the result of this composition is indeed a natural transformation from 𝐹 to 𝐻:

𝐻𝑓 ∘ (𝛽 ⋅ 𝛼)𝑎 = (𝛽 ⋅ 𝛼)𝑏 ∘ 𝐹𝑓
Composition of natural transformations is associative, because their
components, which are regular morphisms, are associative with respect
to their composition.
Finally, for each functor F there is an identity natural transformation
1𝐹 whose components are the identity morphisms:
id𝐹 𝑎 ∷ 𝐹 𝑎 → 𝐹 𝑎
So, indeed, functors form a category.
A word about notation. Following Saunders Mac Lane I use the dot
for the kind of natural transformation composition I have just described.
The problem is that there are two ways of composing natural transfor-
mations. This one is called the vertical composition, because the func-
tors are usually stacked up vertically in the diagrams that describe it.
Vertical composition is important in defining the functor category. I’ll
explain horizontal composition shortly.
The functor category between categories 𝐂 and 𝐃 is written as 𝐅𝐮𝐧(𝐂, 𝐃),
or [𝐂, 𝐃], or sometimes as 𝐃𝐂 . This last notation suggests that a functor
category itself might be considered a function object (an exponential)
in some other category. Is this indeed the case?
Let’s have a look at the hierarchy of abstractions that we’ve been
building so far. We started with a category, which is a collection of ob-
jects and morphisms. Categories themselves (or, strictly speaking small
categories, whose objects form sets) are themselves objects in a higher-
level category 𝐂𝐚𝐭. Morphisms in that category are functors. A Hom-set
in 𝐂𝐚𝐭 is a set of functors. For instance 𝐂𝐚𝐭(𝐂, 𝐃) is a set of functors be-
tween two categories 𝐂 and 𝐃.
As you may remember, in order to construct an exponential, we
need to first define a product. In 𝐂𝐚𝐭, this turns out to be relatively easy,
because small categories are sets of objects, and we know how to define
Cartesian products of sets. So an object in a product category 𝐂 × 𝐃 is
just a pair of objects, (𝑐, 𝑑), one from 𝐂 and one from 𝐃. Similarly, a
morphism between two such pairs, (𝑐, 𝑑) and (𝑐 ′ , 𝑑 ′ ), is a pair of mor-
phisms, (𝑓 , 𝑔), where 𝑓 ∷ 𝑐 → 𝑐 ′ and 𝑔 ∷ 𝑑 → 𝑑 ′ . These pairs of
morphisms compose component-wise, and there is always an identity
pair that is just a pair of identity morphisms. To make the long story
short, 𝐂𝐚𝐭 is a full-blown Cartesian closed category in which there is an
exponential object 𝐃𝐂 for any pair of categories. And by “object” in 𝐂𝐚𝐭
I mean a category, so 𝐃𝐂 is a category, which we can identify with the
functor category between 𝐂 and 𝐃.
10.4 2-Categories
With that out of the way, let’s have a closer look at 𝐂𝐚𝐭. By definition,
any Hom-set in 𝐂𝐚𝐭 is a set of functors. But, as we have seen, functors
between two objects have a richer structure than just a set. They form
a category, with natural transformations acting as morphisms. Since
functors are considered morphisms in 𝐂𝐚𝐭, natural transformations are
morphisms between morphisms.
This richer structure is an example of a 𝟐-category, a generalization
of a category where, besides objects and morphisms (which might be
called 1-morphisms in this context), there are also 2-morphisms, which
are morphisms between morphisms.
In the case of 𝐂𝐚𝐭 seen as a 𝟐-category we have:
• Objects: (small) categories
• 1-morphisms: Functors between categories
• 2-morphisms: Natural transformations between functors.
Let's consider, along with a functor 𝐹 from 𝐂 to 𝐃 and a functor 𝐺 from 𝐃 to 𝐄, a second pair of parallel functors, 𝐹′ ∷ 𝐂 → 𝐃 and 𝐺′ ∷ 𝐃 → 𝐄, together with natural transformations 𝛼 ∷ 𝐹 → 𝐹′ and 𝛽 ∷ 𝐺 → 𝐺′.
Notice that we cannot apply vertical composition to this pair, because
the target of 𝛼 is different from the source of 𝛽. In fact they are members
of two different functor categories: 𝐃𝐂 and 𝐄𝐃 . We can, however, apply
composition to the functors 𝐹 ′ and 𝐺 ′ , because the target of 𝐹 ′ is the
source of 𝐺 ′ — it’s the category 𝐃. What’s the relation between the
functors 𝐺 ′ ∘ 𝐹 ′ and 𝐺 ∘ 𝐹 ?
Having 𝛼 and 𝛽 at our disposal, can we define a natural transforma-
tion from 𝐺 ∘ 𝐹 to 𝐺 ′ ∘ 𝐹 ′ ? Let me sketch the construction.
As usual, we start with an object 𝑎 in 𝐂. Its image splits into two ob-
jects in 𝐃: 𝐹 𝑎 and 𝐹 ′ 𝑎. There is also a morphism, a component of 𝛼,
connecting these two objects:
𝛼𝑎 ∷ 𝐹 𝑎 → 𝐹 ′ 𝑎
When going from 𝐃 to 𝐄, these two objects split further into four ob-
jects: 𝐺(𝐹 𝑎), 𝐺 ′ (𝐹 𝑎), 𝐺(𝐹 ′ 𝑎), 𝐺 ′ (𝐹 ′ 𝑎). We also have four morphisms
forming a square. Two of these morphisms are the components of the
natural transformation 𝛽:
𝛽𝐹 𝑎 ∷ 𝐺(𝐹 𝑎) → 𝐺 ′ (𝐹 𝑎)
𝛽𝐹 ′ 𝑎 ∷ 𝐺(𝐹 ′ 𝑎) → 𝐺 ′ (𝐹 ′ 𝑎)
The other two are the images of 𝛼𝑎 under the two functors (functors map morphisms):

𝐺𝛼𝑎 ∷ 𝐺(𝐹𝑎) → 𝐺(𝐹′𝑎)
𝐺′𝛼𝑎 ∷ 𝐺′(𝐹𝑎) → 𝐺′(𝐹′𝑎)
That’s a lot of morphisms. Our goal is to find a morphism that goes from
𝐺(𝐹 𝑎) to 𝐺 ′ (𝐹 ′ 𝑎), a candidate for the component of a natural transfor-
mation connecting the two functors 𝐺 ∘ 𝐹 and 𝐺 ′ ∘ 𝐹 ′ . In fact there’s not
one but two paths we can take from 𝐺(𝐹 𝑎) to 𝐺 ′ (𝐹 ′ 𝑎):
𝐺 ′ 𝛼𝑎 ∘ 𝛽𝐹 𝑎
𝛽𝐹 ′ 𝑎 ∘ 𝐺𝛼𝑎
Luckily for us, they are equal, because the square we have formed turns
out to be the naturality square for 𝛽.
We have just defined a component of a natural transformation from
𝐺 ∘ 𝐹 to 𝐺 ′ ∘ 𝐹 ′ . The proof of naturality for this transformation is pretty
straightforward, provided you have enough patience.
We call this natural transformation the horizontal composition of 𝛼
and 𝛽:
𝛽 ∘ 𝛼 ∷ 𝐺 ∘ 𝐹 → 𝐺′ ∘ 𝐹 ′
Again, following Mac Lane I use the small circle for horizontal compo-
sition, although you may also encounter star in its place.
Here’s a categorical rule of thumb: Every time you have composi-
tion, you should look for a category. We have vertical composition of
natural transformations, and it’s part of the functor category. But what
about the horizontal composition? What category does that live in?
The way to figure this out is to look at 𝐂𝐚𝐭 sideways. Look at natural
transformations not as arrows between functors but as arrows between
categories. A natural transformation sits between two categories, the
ones that are connected by the functors it transforms. We can think of
it as connecting these two categories. Horizontal composition then composes natural transformations along the direction of functor composition, whereas vertical composition works within a single functor category. The two compositions are related by the interchange law:

(𝛽′ ⋅ 𝛼′) ∘ (𝛽 ⋅ 𝛼) = (𝛽′ ∘ 𝛽) ⋅ (𝛼′ ∘ 𝛼)
I will quote Saunders Mac Lane here: The reader may enjoy writing
down the evident diagrams needed to prove this fact.
There is one more piece of notation that might come in handy in the
future. In this new sideways interpretation of 𝐂𝐚𝐭 there are two ways of
getting from object to object: using a functor or using a natural trans-
formation. We can, however, re-interpret the functor arrow as a special
kind of natural transformation: the identity natural transformation act-
ing on this functor. So you'll often see this notation:

𝐹 ∘ 𝛼

in which 𝐹 stands not for the functor itself but for its identity natural transformation 1𝐹 (and similarly 𝛼 ∘ 𝐹 on the other side).
10.5 Conclusion
This concludes the first part of the book. We’ve learned the basic vo-
cabulary of category theory. You may think of objects and categories as
nouns; and morphisms, functors, and natural transformations as verbs.
Morphisms connect objects, functors connect categories, natural trans-
formations connect functors.
But we’ve also seen that, what appears as an action at one level of
abstraction, becomes an object at the next level. A set of morphisms
turns into a function object. As an object, it can be a source or a target
of another morphism. That’s the idea behind higher order functions.
A functor maps objects to objects, so we can use it as a type con-
structor, or a parametric type. A functor also maps morphisms, so it is
a higher order function — fmap. There are some simple functors, like
Const, product, and coproduct, that can be used to generate a large va-
riety of algebraic data types. Function types are also functorial, both
covariant and contravariant, and can be used to extend algebraic data
types.
Functors may be looked upon as objects in the functor category.
As such, they become sources and targets of morphisms: natural trans-
formations. A natural transformation is a special type of polymorphic
function.
10.6 Challenges
1. Define a natural transformation from the Maybe functor to the list
functor. Prove the naturality condition for it.
2. Define at least two different natural transformations between Reader ()
and the list functor. How many different lists of () are there?
3. Continue the previous exercise with Reader Bool and Maybe.
4. Show that horizontal composition of natural transformations sat-
isfies the naturality condition (hint: use components). It’s a good
exercise in diagram chasing.
5. Write a short essay about how you may enjoy writing down the
evident diagrams needed to prove the interchange law.
6. Create a few test cases for the opposite naturality condition of
transformations between different Op functors. Here’s one choice:
op :: Op Bool Int
op = Op (\x -> x > 0)
and
f :: String -> Int
f x = read x
Part Two
11 Declarative Programming

In the first part of the book I argued that both category theory and
programming are about composability. In programming, you keep de-
composing a problem until you reach the level of detail that you can
deal with, solve each subproblem in turn, and re-compose the solutions
bottom-up. There are, roughly speaking, two ways of doing it: by telling
the computer what to do, or by telling it how to do it. One is called
declarative and the other imperative.
You can see this even at the most basic level. Composition itself may
be defined declaratively; as in, h is a composite of g after f:
h = g . f
val h = g compose f
or imperatively; as in, call f first, remember the result of that call, then
call g with the result:
h x = let y = f x
in g y
val h = x => {
val y = f(x)
g(y)
}
There are two forms of express-
ing most laws of physics. One uses
local, or infinitesimal, considera-
tions. We look at the state of a sys-
tem around a small neighborhood,
and predict how it will evolve within
the next instant of time. This is
usually expressed using differential
equations that have to be integrated,
or summed up, over a period of time.
Notice how this approach resembles imperative thinking: we reach
the final solution by following a sequence of small steps, each depending
on the result of the previous one. In fact, computer simulations of physi-
cal systems are routinely implemented by turning differential equations
into difference equations and iterating them. This is how spaceships are
animated in the asteroids game. At each time step, the position of a
spaceship is changed by adding a small increment, which is calculated
by multiplying its velocity by the time delta. The velocity, in turn, is
changed by a small increment proportional to acceleration, which is
given by force divided by mass.
These are the direct encodings of the differential equations corre-
sponding to Newton’s laws of motion:
𝐹 = 𝑚 𝑑𝑣/𝑑𝑡

𝑣 = 𝑑𝑥/𝑑𝑡
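Here is a minimal Scala sketch of the difference-equation update described above — one Euler integration step for a one-dimensional spaceship; all the names are hypothetical:

// One Euler step: discretized versions of dx/dt = v and F = m dv/dt.
final case class Ship(x: Double, v: Double)

def step(s: Ship, force: Double, mass: Double, dt: Double): Ship =
  Ship(
    x = s.x + s.v * dt,            // position advances by velocity * dt
    v = s.v + (force / mass) * dt  // velocity advances by acceleration * dt
  )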
Similar methods may be applied to more complex problems, like the
propagation of electromagnetic fields using Maxwell’s equations, or even
the behavior of quarks and gluons inside a proton using lattice QCD (quantum chromodynamics).
This local thinking combined with discretization of space and time
that is encouraged by the use of digital computers found its extreme ex-
pression in the heroic attempt by Stephen Wolfram to reduce the com-
plexity of the whole universe to a system of cellular automata.
The other approach is global. We look at the initial and the final
state of the system, and calculate a trajectory that connects them by
minimizing a certain functional. The simplest example is the Fermat’s
principle of least time. It states that light rays propagate along paths
that minimize their flight time. In particular, in the absence of reflect-
ing or refracting objects, a light ray from point 𝐴 to point 𝐵 will take
the shortest path, which is a straight line. But light propagates slower
in dense (transparent) materials, like water or glass. So if you pick the
starting point in the air, and the ending point under water, it’s more ad-
vantageous for light to travel longer in the air and then take a shortcut
through water. The path of minimum time makes the ray refract at the
boundary of air and water, resulting in Snell’s law of refraction:
𝑠𝑖𝑛(𝜃₁)/𝑠𝑖𝑛(𝜃₂) = 𝑣₁/𝑣₂
where 𝑣1 is the speed of light in the air and 𝑣2 is the speed of light in
the water.
All of classical mechanics can be derived from the principle of least
action. The action can be calculated for any trajectory by integrating
the Lagrangian, which is the difference between kinetic and potential
energy (notice: it’s the difference, not the sum — the sum would be the
total energy). When you fire a mortar to hit a given target, the projectile
will first go up, where the potential energy due to gravity is higher, and
spend some time there racking up negative contribution to the action.
It will also slow down at the top of the parabola, to minimize kinetic
energy. Then it will speed up to go quickly through the area of low
potential energy.
In quantum mechanics, too, the problem is formulated in terms of initial state and final state. The Feynman
path integral between those states is used to calculate the probability of
transition.
This global, declarative style has also found its way into programming, for instance in functional reactive programming (FRP). Instead of implementing separate handlers for every possible user action, all having access to some shared mutable state, FRP treats external events as an infinite list, and applies a series of transformations to it. Conceptually, the list of all our
future actions is there, available as the input data to our program. From
a program’s perspective there’s no difference between the list of digits
of 𝜋, a list of pseudo-random numbers, or a list of mouse positions com-
ing through computer hardware. In each case, if you want to get the 𝑛th
item, you have to first go through the first 𝑛 − 1 items. When applied to
temporal events, we call this property causality.
So what does it have to do with category theory? I will argue that
category theory encourages a global approach and therefore supports
declarative programming. First of all, unlike calculus, it has no built-in
notion of distance, or neighborhood, or time. All we have is abstract ob-
jects and abstract connections between them. If you can get from 𝐴 to 𝐵
through a series of steps, you can also get there in one leap. Moreover,
the major tool of category theory is the universal construction, which is
the epitome of a global approach. We’ve seen it in action, for instance,
in the definition of the categorical product. It was done by specifying its
properties — a very declarative approach. It’s an object equipped with
two projections, and it’s the best such object — it optimizes a certain
property: the property of factorizing the projections of other such ob-
jects.
Compare this with Fermat’s principle of minimum time, or the prin-
ciple of least action.
Conversely, contrast this with the traditional definition of a Carte-
sian product, which is much more imperative. You describe how to cre-
ate an element of the product by picking one element from one set and
another element from another set. It’s a recipe for creating a pair. And
there’s another for disassembling a pair.
In almost every programming language, including functional lan-
guages like Haskell, product types, coproduct types, and function types
are built in, rather than being defined by universal constructions; al-
though there have been attempts at creating categorical programming
languages (see, e.g., Tatsuya Hagino’s thesis1 ).
Whether used directly or not, categorical definitions justify pre-
existing programming constructs, and give rise to new ones. Most im-
portantly, category theory provides a meta-language for reasoning about
computer programs at a declarative level. It also encourages reasoning
about problem specification before it is cast into code.
1 http://web.sfc.keio.ac.jp/~hagino/thesis.pdf
12 Limits and Colimits

The categorical product was defined by first selecting a pattern of two objects in the category 𝐂. We can formalize this selection by turning this pattern into a category — a very simple category, but a category
nevertheless. It’s a category that we’ll call 𝟐. It contains just two objects,
1 and 2, and no morphisms other than the two obligatory identities. Now
we can rephrase the selection of two objects in 𝐂 as the act of defining a
functor 𝐷 from the category 𝟐 to 𝐂. A functor maps objects to objects, so
its image is just two objects (or it could be one, if the functor collapses
objects, which is fine too). It also maps morphisms — in this case it
simply maps identity morphisms to identity morphisms.
There is another, trivial way of selecting objects in 𝐂: the constant functor Δ𝑐 from 𝟐 to 𝐂, which maps both objects of 𝟐 to a single object 𝑐 in 𝐂, and both identity morphisms to the identity morphism on 𝑐.
Now we have two functors, Δ𝑐 and 𝐷 going between 𝟐 and 𝐂 so it’s
only natural to ask about natural transformations between them. Since
there are only two objects in 𝟐, a natural transformation will have two
components. Object 1 in 𝟐 is mapped to 𝑐 by Δ𝑐 and to 𝑎 by 𝐷. So the
component of a natural transformation between Δ𝑐 and 𝐷 at 1 is a mor-
phism from 𝑐 to 𝑎. We can call it 𝑝. Similarly, the second component
is a morphism 𝑞 from 𝑐 to 𝑏 — the image of the object 2 in 𝟐 under 𝐷.
But these are exactly like the two projections we used in our original
definition of the product. So instead of talking about selecting objects
and projections, we can just talk about picking functors and natural
transformations. It so happens that in this simple case the naturality
condition for our transformation is trivially satisfied, because there are
no morphisms (other than the identities) in 𝟐.
We call such a transformation a cone, because the image of Δ is the apex of a
cone/pyramid whose sides are formed by the components of the natural
transformation. The image of 𝐷 forms the base of the cone.
In general, to build a cone, we start with a category 𝐈 that defines
the pattern. It’s a small, often finite category. We pick a functor 𝐷 from
𝐈 to 𝐂 and call it (or its image) a diagram. We pick some 𝑐 in 𝐂 as the
apex of our cone. We use it to define the constant functor Δ𝑐 from 𝐈 to
𝐂. A natural transformation from Δ𝑐 to 𝐷 is then our cone. For a finite
𝐈 it’s just a bunch of morphisms connecting 𝑐 to the diagram: the image
of 𝐈 under 𝐷.
Naturality requires that all triangles (the walls of the pyramid) in this
diagram commute. Indeed, take any morphism 𝑓 in 𝐈. The functor 𝐷
maps it to a morphism 𝐷𝑓 in 𝐂, a morphism that forms the base of some
triangle. The constant functor Δ𝑐 maps 𝑓 to the identity morphism on
𝑐. Δ squishes the two ends of the morphism into one object, and the
naturality square becomes a commuting triangle. The two arms of this
triangle are the components of the natural transformation.
So that’s one cone. What we are interested in is the universal cone —
just like we picked a universal object for our definition of a product.
There are many ways to go about it. For instance, we may define a
category of cones based on a given functor 𝐷. Objects in that category
are cones. Not every object 𝑐 in 𝐂 can be an apex of a cone, though,
because there may be no natural transformation between Δ𝑐 and 𝐷.
To make it a category, we also have to define morphisms between
cones. These would be fully determined by morphisms between their
apexes. But not just any morphism will do. Remember that, in our con-
struction of the product, we imposed the condition that the morphisms
between candidate objects (the apexes) must be common factors for the
projections. For instance:
p' = p . m
q' = q . m
val p1 = p compose m
val q1 = q compose m
This condition translates, in the general case, to the condition that the
triangles whose one side is the factorizing morphism all commute.
The commuting triangle connecting two cones, with the factorizing morphism ℎ (here, the lower
cone is the universal one, with Lim𝐷 as its apex)
A cone with this universal property — a unique factorizing morphism connecting every other cone to it — is called the limit of the diagram 𝐷, Lim𝐷 (in the literature, you'll often see a left
arrow pointing towards 𝐼 under the Lim sign). Often, as a shorthand,
we call the apex of this cone the limit (or the limit object).
The intuition is that the limit embodies the properties of the whole
diagram in a single object. For instance, the limit of our two-object di-
agram is the product of two objects. The product (together with the
two projections) contains the information about both objects. And be-
ing universal means that it has no extraneous junk.
12.1 Limit as a Natural Isomorphism

Let's look at the limit from a different angle. For every object 𝑐 we have a mapping that assigns, to each cone with apex 𝑐, the unique factorizing morphism from 𝑐 to Lim𝐷 — one that satisfies the particular commutativity condition. Does that sound
like defining a natural transformation? It most certainly does!
But what are the functors that are related by this transformation?
One functor is the mapping of 𝑐 to the set 𝐂(𝑐, Lim𝐷). It’s a func-
tor from 𝐂 to 𝐒𝐞𝐭 — it maps objects to sets. In fact it’s a contravariant
functor. Here’s how we define its action on morphisms: Let’s take a
morphism 𝑓 from 𝑐 ′ to 𝑐:
𝑓 ∷ 𝑐′ → 𝑐
Our functor maps 𝑐 ′ to the set 𝐂(𝑐 ′ , Lim𝐷). To define the action of this
functor on 𝑓 (in other words, to lift 𝑓 ), we have to define the corre-
sponding mapping between 𝐂(𝑐, Lim𝐷) and 𝐂(𝑐 ′ , Lim𝐷). So let’s pick
one element 𝑢 of 𝐂(𝑐, Lim𝐷) and see if we can map it to some element
of 𝐂(𝑐 ′ , Lim𝐷). An element of a hom-set is a morphism, so we have:
𝑢 ∷ 𝑐 → Lim𝐷
𝑢 ∘ 𝑓 ∷ 𝑐′ → Lim𝐷
contramap :: (c' -> c) -> (c -> LimD) -> (c' -> LimD)
contramap f u = u . f
Notice the inversion in the order of 𝑐 and 𝑐 ′ characteristic of a con-
travariant functor.
The second functor maps 𝑐 to the set of natural transformations 𝑁𝑎𝑡(Δ𝑐, 𝐷). To see that it too is a contravariant functor, consider again a morphism:

𝑓 ∷ 𝑐′ → 𝑐

Its lifting must be a mapping of natural transformations:

𝑁𝑎𝑡(Δ𝑐, 𝐷) → 𝑁𝑎𝑡(Δ𝑐′, 𝐷)

A natural transformation that is a member of 𝑁𝑎𝑡(Δ𝑐, 𝐷) has components:

𝛼𝑎 ∷ Δ𝑐𝑎 → 𝐷𝑎
or, using the definition of the constant functor Δ,
𝛼𝑎 ∷ 𝑐 → 𝐷𝑎
𝛽𝑎 ∷ 𝑐 ′ → 𝐷𝑎
We can easily get the latter (𝛽𝑎 ) from the former (𝛼𝑎 ) by precomposing
it with 𝑓 :
𝛽𝑎 = 𝛼𝑎 ∘ 𝑓
It’s relatively easy to show that those components indeed add up to a
natural transformation.
Given our morphism 𝑓 , we have thus built a mapping between two nat-
ural transformations, component-wise. This mapping defines contramap
for the functor:
𝑐 → 𝑁 𝑎𝑡(Δ𝑐 , 𝐷)
What I have just done is to show you that we have two (contravariant)
functors from 𝐂 to 𝐒𝐞𝐭. I haven’t made any assumptions — these functors
always exist.
Incidentally, the first of these functors plays an important role in
category theory, and we’ll see it again when we talk about Yoneda’s
lemma. There is a name for contravariant functors from any category
𝐂 to 𝐒𝐞𝐭: they are called “presheaves”. This one is called a representable
presheaf. The second functor is also a presheaf.
Now that we have two functors, we can talk about natural transfor-
mations between them. So without further ado, here’s the conclusion:
A functor 𝐷 from 𝐈 to 𝐂 has a limit Lim𝐷 if and only if there is a natural
isomorphism between the two functors I have just defined:
𝐂(𝑐, Lim𝐷) ≃ 𝑁 𝑎𝑡(Δ𝑐 , 𝐷)
Let me remind you what a natural isomorphism is. It’s a natural trans-
formation whose every component is an isomorphism, that is to say an
invertible morphism.
I’m not going to go through the proof of this statement. The proce-
dure is pretty straightforward if not tedious. When dealing with natu-
ral transformations, you usually focus on components, which are mor-
phisms. In this case, since the target of both functors is 𝐒𝐞𝐭, the com-
ponents of the natural isomorphism will be functions. These are higher
order functions, because they go from the hom-set to the set of natu-
ral transformations. Again, you can analyze a function by considering
what it does to its argument: here the argument will be a morphism — a
member of 𝐂(𝑐, Lim𝐷) — and the result will be a natural transformation
— a member of 𝑁 𝑎𝑡(Δ𝑐 , 𝐷), or what we have called a cone. This natural
transformation, in turn, has its own components, which are morphisms.
So it’s morphisms all the way down, and if you can keep track of them,
you can prove the statement.
The most important result is that the naturality condition for this
isomorphism is exactly the commutativity condition for the mapping
of cones.
As a preview of coming attractions, let me mention that the set
𝑁 𝑎𝑡(Δ𝑐 , 𝐷) can be thought of as a hom-set in the functor category; so
our natural isomorphism relates two hom-sets, which points at an even
more general relationship called an adjunction.
12.2 Examples of Limits

We've seen that the categorical product is the limit of a diagram generated by the simple two-object category 𝟐. An even more interesting limit, the equalizer, is generated by a two-object category with two parallel morphisms going between the objects (plus, as always, the identity morphisms). This category selects a diagram of two objects, 𝑎 and 𝑏, and two morphisms:

f :: a -> b
g :: a -> b

val f : A => B
val g : A => B
To build a cone over this diagram, we have to add the apex 𝑐 and two projections:
p :: c -> a
q :: c -> b
val p : C => A
val q : C => B
Both triangles must commute:

q = f . p
q = g . p

q == f compose p
q == g compose p
Together, these conditions say that 𝑝 must equalize 𝑓 and 𝑔:

f . p = g . p

f compose p == g compose p
The way to think about it is that, if we restrict our attention to 𝐒𝐞𝐭, the
image of the function 𝑝 selects a subset of 𝑎. When restricted to this
subset, the functions 𝑓 and 𝑔 are equal.
For instance, take 𝑎 to be the two-dimensional plane parameterized
by coordinates 𝑥 and 𝑦. Take 𝑏 to be the real line, and take:
f (x, y) = 2 * y + x
g (x, y) = y - x
def f(x: Double, y: Double) = 2 * y + x
def g(x: Double, y: Double) = y - x
The equalizer for these two functions is the set of real numbers (the
apex, 𝑐) and the function:
p t = (t, (-2) * t)
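A quick Scala sanity check, reusing f and g as defined above (this p is a rendition of the Haskell one):

def p(t: Double): (Double, Double) = (t, -2.0 * t)

// Both composites agree: f(t, -2t) = 2*(-2t) + t = -3t = g(t, -2t).
assert(List(0.0, 1.0, 2.5).forall { t =>
  val (x, y) = p(t)
  f(x, y) == g(x, y)
})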
There are other cones — any set 𝑐′ with a function 𝑝′ satisfying

f . p' = g . p'

f compose p1 == g compose p1
but they all uniquely factor out through 𝑝. For instance, we can take the
singleton set () as 𝑐 ′ and the function:
p' () = (0, 0)
It’s a good cone, because 𝑓 (0, 0) = 𝑔(0, 0). But it’s not universal, because
of the unique factorization through ℎ:
p' = p . h
val p1 = p compose h
with
h () = 0
def h(u: Unit): Double = 0
An equalizer can thus be used to solve equations of the type 𝑓 𝑥 = 𝑔 𝑥.
But it’s much more general, because it’s defined in terms of objects and
morphisms rather than algebraically.
An even more general idea of solving an equation is embodied in
another limit — the pullback. Here, we still have two morphisms that
we want to equate, but this time their domains are different. We start
with a three-object category of the shape: 1 → 2 ← 3. The diagram
corresponding to this category consists of three objects, 𝑎, 𝑏, and 𝑐, and
two morphisms:
f :: a -> b
g :: c -> b
val f : A => B
val g : C => B
A cone over this diagram consists of an apex 𝑑 and three morphisms:
p :: d -> a
q :: d -> c
r :: d -> b
val p : D => A
val q : D => C
val r : D => B
The commutativity conditions determine 𝑟 completely (𝑟 = 𝑓 ∘ 𝑝 = 𝑔 ∘ 𝑞), so we may drop it and only require that:

g . q = f . p

g compose q == f compose p
A pullback is the universal cone of this shape.
Again, if you narrow your focus down to sets, you can think of the object
𝑑 as consisting of pairs of elements from 𝑎 and 𝑐 for which 𝑓 acting on
the first component is equal to 𝑔 acting on the second component. If
this is still too general, consider the special case in which 𝑔 is a constant
function, say 𝑔 _ = 1.23 (assuming that 𝑏 is a set of real numbers). Then
you are really solving the equation:
f x = 1.23
f(x) == 1.23
In this case, the choice of 𝑐 is irrelevant (as long as it’s not an empty
set), so we can take it to be a singleton set. The set 𝑎 could, for instance,
be the set of three-dimensional vectors, and 𝑓 the vector length. Then
the pullback is the set of pairs (𝑣, ()), where 𝑣 is a vector of length 1.23
(a solution to the equation √(𝑥² + 𝑦² + 𝑧²) = 1.23), and () is the dummy
element of the singleton set.
But pullbacks have more general applications, also in programming.
For instance, consider C++ classes as a category in which morphisms are arrows that connect subclasses to superclasses. We'll consider inheri-
tance a transitive property, so if C inherits from B and B inherits from A
then we’ll say that C inherits from A (after all, you can pass a pointer to C
where a pointer to A is expected). Also, we’ll assume that C inherits from
C, so we have the identity arrow for every class. This way subclassing is
aligned with subtyping. C++ also supports multiple inheritance, so you
can construct a diamond inheritance diagram with two classes B and C
inheriting from A, and a fourth class D multiply inheriting from B and
C. Normally, D would get two copies of A, which is rarely desirable; but
you can use virtual inheritance to have just one copy of A in D.
What would it mean to have D be a pullback in this diagram? It
would mean that any class E that multiply inherits from B and C is also
a subclass of D. This is not directly expressible in C++, where subtyping
is nominal (the C++ compiler wouldn’t infer this kind of class relation-
ship — it would require “duck typing”). But we could go outside of the
subtyping relationship and instead ask whether a cast from E to D would
be safe or not. This cast would be safe if D were the bare-bone combina-
tion of B and C, with no additional data and no overriding of methods.
And, of course, there would be no pullback if there is a name conflict
between some methods of B and C.
There’s also a more advanced use of a pullback in type inference. There
is often a need to unify types of two expressions. For instance, suppose
that the compiler wants to infer the type of a function:
twice f x = f (f x)

In the absence of any type annotations, the compiler starts by assigning fresh type variables to all the terms involved:

f :: t0
x :: t1
f x :: t2
f (f x) :: t3
It will also come up with a set of constraints resulting from the rules of
function application:
t0 = t1 -> t2 -- because f is applied to x
t0 = t2 -> t3 -- because f is applied to (f x)
These constraints have to be unified by finding a substitution for the type variables that satisfies them all. One such substitution is:

t1 = t2 = t3 = Int

which would give twice the type:

twice :: (Int -> Int) -> Int -> Int
but, obviously, it's not the most general one. The most general substitution is obtained using a pullback. I won't go into the details, because they are beyond the scope of this book, but you can convince yourself that the result should be:

twice :: (t -> t) -> t -> t

with t a free type variable.
12.3 Colimits
Just like all constructions in category theory, limits have their dual im-
age in opposite categories. When you invert the direction of all arrows
in a cone, you get a co-cone, and the universal one of those is called a
colimit. Notice that the inversion also affects the factorizing morphism,
which now flows from the universal co-cone to any other co-cone.
A typical example of a colimit is the coproduct, which corresponds to the same two-object diagram 𝟐 that gave us the product.
Cocone with a factorizing morphism ℎ connecting two apexes.
Both the product and the coproduct embody the essence of a pair of
objects, each in a different way.
Just like the terminal object was a limit, so the initial object is a
colimit corresponding to the diagram based on an empty category.
The dual of the pullback is called the pushout. It’s based on a diagram
called a span, generated by the category 1 ← 2 → 3.
12.4 Continuity
I said previously that functors come close to the idea of continuous map-
pings of categories, in the sense that they never break existing connec-
tions (morphisms). The actual definition of a continuous functor 𝐹 from
a category 𝐂 to 𝐂′ includes the requirement that the functor preserve
limits. Every diagram 𝐷 in 𝐂 can be mapped to a diagram 𝐹 ∘ 𝐷 in 𝐂′ by
simply composing two functors. The continuity condition for 𝐹 states
that, if the diagram 𝐷 has a limit Lim𝐷, then the diagram 𝐹 ∘ 𝐷 also has
a limit, and it is equal to 𝐹 (Lim𝐷).
An important example of a continuous functor is the hom-functor 𝐂(𝑎, 𝑏), which is contravariant in its first argument and covariant in the second.
When its second argument is fixed, the hom-set functor (which becomes
the representable presheaf) maps colimits in 𝐂 to limits in 𝐒𝐞𝐭; and when
its first argument is fixed, it maps limits to limits.
In Haskell, a hom-functor is the mapping of any two types to a func-
tion type, so it’s just a parameterized function type. When we fix the
second parameter, let’s say to String, we get the contravariant functor:
type ToString[A] = A => String

Its contravariance is captured by the Contravariant typeclass:

trait Contravariant[F[_]] {
  def contramap[A, B](fa: F[A])(f: B => A): F[B]
}

Continuity means that when ToString is applied to a colimit, for instance the coproduct Either[B, C], it turns it into a limit — in this case, a product of two function types:

ToString[Either[B, C]] ~ (B => String, C => String)
I know what you’re thinking: You don’t need category theory to figure
these things out. And you’re right! Still, I find it amazing that such re-
sults can be derived from first principles with no recourse to bits and
bytes, processor architectures, compiler technologies, or even lambda
calculus.
If you’re curious where the names “limit” and “continuity” come
from, they are a generalization of the corresponding notions from cal-
culus. In calculus limits and continuity are defined in terms of open
neighborhoods. Open sets, which define topology, form a category (a
poset).
12.5 Challenges
1. How would you describe a pushout in the category of C++ classes?
2. Show that the limit of the identity functor Id ∷ 𝐂 → 𝐂 is the
initial object.
3. Subsets of a given set form a category. A morphism in that cate-
gory is defined to be an arrow connecting two sets if the first is
the subset of the second. What is a pullback of two sets in such
a category? What’s a pushout? What are the initial and terminal
objects?
4. Can you guess what a coequalizer is?
5. Show that, in a category with a terminal object, a pullback to-
wards the terminal object is a product.
6. Similarly, show that a pushout from an initial object (if one exists)
is the coproduct.
13 Free Monoids

Monoids are an important concept in both category theory and in programming. Just as we can generate a free category from a graph of primitive arrows, we can generate a monoid "for free" from a set of generators. Here's the idea: we start with an arbitrary set, form all possible pairs of elements, and call
them new elements. Then we’ll pair these new elements with all possible
elements, and so on. This is a chain reaction — we’ll keep adding new
elements forever. The result, an infinite set, will be almost a monoid.
But a monoid also needs a unit element and the law of associativity. No
problem, we can add a special unit element and identify some of the
pairs — just enough to support the unit and associativity laws.
Let’s see how this works in a simple example. Let’s start with a set of
two elements, {𝑎, 𝑏}. We’ll call them the generators of the free monoid.
First, we’ll add a special element 𝑒 to serve as the unit. Next we’ll add all
the pairs of elements and call them “products”. The product of 𝑎 and 𝑏
will be the pair (𝑎, 𝑏). The product of 𝑏 and 𝑎 will be the pair (𝑏, 𝑎), the
product of 𝑎 with 𝑎 will be (𝑎, 𝑎), the product of 𝑏 with 𝑏 will be (𝑏, 𝑏).
We can also form pairs with 𝑒, like (𝑎, 𝑒), (𝑒, 𝑏), etc., but we’ll identify
them with 𝑎, 𝑏, etc. So in this round we’ll only add (𝑎, 𝑎), (𝑎, 𝑏) and (𝑏, 𝑎)
and (𝑏, 𝑏), and end up with the set {𝑒, 𝑎, 𝑏, (𝑎, 𝑎), (𝑎, 𝑏), (𝑏, 𝑎), (𝑏, 𝑏)}.
In the next round we’ll keep adding elements like: (𝑎, (𝑎, 𝑏)), ((𝑎, 𝑏), 𝑎),
etc. At this point we’ll have to make sure that associativity holds, so
we’ll identify (𝑎, (𝑏, 𝑎)) with ((𝑎, 𝑏), 𝑎), etc. In other words, we won’t
be needing internal parentheses.
You can guess what the final result of this process will be: we’ll cre-
ate all possible lists of 𝑎s and 𝑏s. In fact, if we represent 𝑒 as an empty
list, we can see that our “multiplication” is nothing but list concatena-
tion.
This kind of construction, in which you keep generating all pos-
sible combinations of elements, and perform the minimum number of
identifications — just enough to uphold the laws — is called a free con-
struction. What we have just done is to construct a free monoid from
the set of generators {𝑎, 𝑏}.
trait Monoid[M] {
def mempty: M
def mappend(m1: M, m2: M): M
}
This just says that every Monoid must have a neutral element, which is
called mempty, and a binary function (multiplication) called mappend. The
unit and associativity laws cannot be expressed in Haskell and must be
verified by the programmer every time a monoid is instantiated.
The fact that a list of any type forms a monoid is described by this
instance definition:
object Monoid {
  implicit def listMonoid[A]: Monoid[List[A]] = new Monoid[List[A]] {
    def mempty: List[A] = List()
    def mappend(m1: List[A], m2: List[A]): List[A] = m1 ++ m2
  }
}
It states that an empty list [] is the unit element, and list concatenation
(++) is the binary operation.
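Since the compiler can't enforce the laws, we can at least spot-check them; a usage sketch:

val m = Monoid.listMonoid[Int]
assert(m.mappend(m.mempty, List(1, 2)) == List(1, 2))   // left unit
assert(m.mappend(List(1, 2), m.mempty) == List(1, 2))   // right unit
assert(m.mappend(m.mappend(List(1), List(2)), List(3)) ==
  m.mappend(List(1), m.mappend(List(2), List(3))))      // associativity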
As we have seen, a list of type a corresponds to a free monoid with
the set a serving as generators. The set of natural numbers with mul-
tiplication is not a free monoid, because we identify lots of products.
Compare for instance:
2 * 3 = 6
[2] ++ [3] = [2, 3] -- not the same as [6]
2 * 3 == 6
List(2) ++ List(3) == List(2, 3)
That was easy, but the question is, can we perform this free construction
in category theory, where we are not allowed to look inside objects?
We’ll use our workhorse: the universal construction.
The second interesting question is, can any monoid be obtained
from some free monoid by identifying more than the minimum number
of elements required by the laws? I’ll show you that this follows directly
from the universal construction.
A monoid homomorphism must preserve the monoidal structure: it maps the unit to the unit, and multiplication to multiplication:
h (a * b) = h a * h b
h(a * b) == h(a) * h(b)
Consider, for instance, a homomorphism from the free monoid of integer lists to the multiplicative monoid of integers, one that maps [2] to 2 and [3] to 3. It must map the concatenation [2] ++ [3] to the product, because under such a homomorphism concatenation becomes multiplication:
2 * 3 = 6
2 * 3 == 6
Now let’s forget about the internal structure of individual monoids, and
only look at them as objects with corresponding morphisms. You get a
category 𝐌𝐨𝐧 of monoids.
Okay, maybe before we forget about internal structure, let us notice
an important property. Every object of 𝐌𝐨𝐧 can be trivially mapped to
a set. It’s just the set of its elements. This set is called the underlying set.
In fact, not only can we map objects of 𝐌𝐨𝐧 to sets, but we can also map
morphisms of 𝐌𝐨𝐧 (homomorphisms) to functions. Again, this seems
sort of trivial, but it will become useful soon. This mapping of objects
and morphisms from 𝐌𝐨𝐧 to 𝐒𝐞𝐭 is in fact a functor. Since this functor
“forgets” the monoidal structure — once we are inside a plain set, we
no longer distinguish the unit element or care about multiplication —
it’s called a forgetful functor. Forgetful functors come up regularly in
category theory.
We now have two different views of 𝐌𝐨𝐧. We can treat it just like
any other category with objects and morphisms. In that view, we don’t
see the internal structure of monoids. All we can say about a particular
object in 𝐌𝐨𝐧 is that it connects to itself and to other objects through
morphisms. The “multiplication” table of morphisms — the composition
rules — is derived from the other view: monoids-as-sets. By going to
category theory we haven’t lost this view completely — we can still
access it through our forgetful functor.
To apply the universal construction, we need to define a special
property that would let us search through the category of monoids and
pick the best candidate for a free monoid. But a free monoid is defined
by its generators. Different choices of generators produce different free
monoids (a list of Bool is not the same as a list of Int). Our construction
must start with a set of generators. So we’re back to sets!
That’s where the forgetful functor comes into play. We can use it to
X-ray our monoids. We can identify the generators in the X-ray images
of those blobs. Here’s how it works:
We start with a set of generators, 𝑥. That’s a set in 𝐒𝐞𝐭.
The pattern we are going to match consists of a monoid 𝑚 — an
object of 𝐌𝐨𝐧 — and a function 𝑝 in 𝐒𝐞𝐭:
p :: x -> U m
where 𝑈 is our forgetful functor from 𝐌𝐨𝐧 to 𝐒𝐞𝐭. This is a weird het-
erogeneous pattern — half in 𝐌𝐨𝐧 and half in 𝐒𝐞𝐭.
The idea is that the function 𝑝 will identify the set of generators
inside the X-ray image of 𝑚. It doesn’t matter that functions may be
lousy at identifying points inside sets (they may collapse them). It will
all be sorted out by the universal construction, which will pick the best
representative of this pattern.
Now consider a second candidate: a monoid 𝑛 together with a function
q :: x -> U n
We say that 𝑚 (with its function 𝑝) is a better candidate if there is a monoid morphism from 𝑚 to 𝑛:
h :: m -> n
val h: M => N
whose image under the forgetful functor 𝑈 (call it uh), composed with 𝑝, reproduces 𝑞:
q = U h . p
val q = uh compose p
This ranking may be used to find the best candidate — the free monoid.
Here's the definition: we call 𝑚 (together with the function 𝑝) the free monoid generated by 𝑥 if and only if, for any other candidate (𝑛, 𝑞), there is a unique morphism ℎ that satisfies this factorization. Notice that such an ℎ may still collapse multiple elements of 𝑈 𝑚 into a single element of 𝑈 𝑛. This collapse corresponds to identifying some elements
of the free monoid. Therefore any monoid with generators 𝑥 can be
obtained from the free monoid based on 𝑥 by identifying some of the
elements. The free monoid is the one where only the bare minimum of
identifications have been made.
We’ll come back to free monoids when we talk about adjunctions.
13.3 Challenges
1. You might think (as I did, originally) that the requirement that a
homomorphism of monoids preserve the unit is redundant. After
all, we know that for all 𝑎
h a * h e = h (a * e) = h a
14
Representable Functors
It's about time we had a little talk about sets. Mathematicians have
a love/hate relationship with set theory. It’s the assembly language of
mathematics — at least it used to be. Category theory tries to step away
from set theory, to some extent. For instance, it’s a known fact that the
set of all sets doesn’t exist, but the category of all sets, 𝐒𝐞𝐭, does. So that’s
good. On the other hand, we assume that morphisms between any two
objects in a category form a set. We even called it a hom-set. To be
fair, there is a branch of category theory where morphisms don’t form
sets. Instead they are objects in another category. Those categories that
use hom-objects rather than hom-sets are called enriched categories. In
what follows, though, we’ll stick to categories with good old-fashioned
hom-sets.
A set is the closest thing to a featureless blob you can get outside of
categorical objects. A set has elements, but you can’t say much about
these elements. If you have a finite set, you can count the elements. You
can kind of count the elements of an infinite set using cardinal numbers.
The set of natural numbers, for instance, is smaller than the set of real
numbers, even though both are infinite. But, maybe surprisingly, the set
of rational numbers is the same size as the set of natural numbers.
Other than that, all the information about sets can be encoded in
functions between them — especially the invertible ones called isomor-
phisms. For all intents and purposes isomorphic sets are identical. Be-
fore I summon the wrath of foundational mathematicians, let me explain
that the distinction between equality and isomorphism is of fundamen-
tal importance. In fact it is one of the main concerns of the latest branch
of mathematics, the Homotopy Type Theory (HoTT). I’m mentioning
HoTT because it’s a pure mathematical theory that takes inspiration
from computation, and one of its main proponents, Vladimir Voevod-
sky, had a major epiphany while studying the Coq theorem prover. The
interaction between mathematics and programming goes both ways.
The important lesson about sets is that it’s okay to compare sets of
unlike elements. For instance, we can say that a given set of natural
transformations is isomorphic to some set of morphisms, because a set
is just a set. Isomorphism in this case just means that for every natural
transformation from one set there is a unique morphism from the other
set and vice versa. They can be paired against each other. You can’t com-
pare apples with oranges, if they are objects from different categories,
but you can compare sets of apples against sets of oranges. Often trans-
forming a categorical problem into a set-theoretical problem gives us
the necessary insight or even lets us prove valuable theorems.
14.1 The Hom Functor
Every category comes equipped with a canonical family of mappings to
𝐒𝐞𝐭. Those mappings are in fact functors, so they preserve the structure
of the category. Let’s build one such mapping.
Let’s fix one object 𝑎 in 𝐂 and pick another object 𝑥 also in 𝐂. The
hom-set 𝐂(𝑎, 𝑥) is a set, an object in 𝐒𝐞𝐭. When we vary 𝑥, keeping 𝑎
fixed, 𝐂(𝑎, 𝑥) will also vary in 𝐒𝐞𝐭. Thus we have a mapping from 𝑥 to
𝐒𝐞𝐭.
Given a morphism 𝑓 ∷ 𝑥 → 𝑦 and a member ℎ of 𝐂(𝑎, 𝑥), their composition:
𝑓 ∘ ℎ ∷ 𝑎 → 𝑦
is a morphism going from 𝑎 to 𝑦. It is therefore a member of 𝐂(𝑎, 𝑦).
We have just found our function from 𝐂(𝑎, 𝑥) to 𝐂(𝑎, 𝑦), which can
serve as the image of 𝑓 . If there is no danger of confusion, we’ll write
this lifted function as: 𝐂(𝑎, 𝑓 ) and its action on a morphism ℎ as:
𝐂(𝑎, 𝑓 )ℎ = 𝑓 ∘ ℎ
Since this construction works in any category, it must also work in the
category of Haskell types. In Haskell, the hom-functor is better known
as the Reader functor:
type Reader[A, X] = A => X

implicit def readerFunctor[A] = new Functor[Reader[A, ?]] {
  def fmap[X, B](f: X => B)(h: Reader[A, X]): Reader[A, B] =
    f compose h
}
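A quick usage sketch (assuming the Functor trait from the chapter on functors is in scope; the example itself is ours):

val h: Reader[String, Int] = _.length
val g: Reader[String, Int] =
  readerFunctor[String].fmap((n: Int) => n * 2)(h)
// g("Hello") == 10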
Now let’s consider what happens if, instead of fixing the source of the
hom-set, we fix the target. In other words, we’re asking the question
if the mapping 𝐂(−, 𝑎) is also a functor. It is, but instead of being co-
variant, it’s contravariant. That’s because the same kind of matching
of morphisms end to end results in postcomposition by 𝑓 ; rather than
precomposition, as was the case with 𝐂(𝑎, −).
We have already seen this contravariant functor in Haskell. We called
it Op:
type Op a x = x -> a
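In Scala, using the same kind-projector placeholder we used for Reader, Op and its instance can be sketched as (the instance itself is illustrative):

type Op[A, X] = X => A

implicit def opContravariant[A]: Contravariant[Op[A, ?]] =
  new Contravariant[Op[A, ?]] {
    // Precompose with f: to produce an A from a B, go through X first.
    def contramap[X, B](fa: Op[A, X])(f: B => X): Op[A, B] =
      fa compose f
  }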
Finally, if we let both objects vary, we get a profunctor 𝐂(−, =), which is
contravariant in the first argument and covariant in the second (to un-
derline the fact that the two arguments may vary independently, we use
a double dash as the second placeholder). We have seen this profunctor
before, when we talked about functoriality:
def lmap[A, B, C](f: C => A)(ab: A => B): C => B =
  f andThen ab

def rmap[A, B, C](f: B => C)(ab: A => B): A => C =
  f compose ab
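The two mappings combine into the profunctor's dimap; a sketch consistent with the signatures above:

def dimap[A, B, C, D](f: C => A)(g: B => D)(ab: A => B): C => D =
  f andThen ab andThen g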
The important lesson is that this observation holds in any category: the
mapping of objects to hom-sets is functorial. Since contravariance is
equivalent to a mapping from the opposite category, we can state this
fact succinctly as:
𝐂(−, =) ∷ 𝐂𝑜𝑝 × 𝐂 → 𝐒𝐞𝐭
14.2 Representable Functors
We’ve seen that, for every choice of an object 𝑎 in 𝐂, we get a functor
from 𝐂 to 𝐒𝐞𝐭. This kind of structure-preserving mapping to 𝐒𝐞𝐭 is often
called a representation. We are representing objects and morphisms of
𝐂 as sets and functions in 𝐒𝐞𝐭.
The functor 𝐂(𝑎, −) itself is sometimes called representable. More
generally, any functor 𝐹 that is naturally isomorphic to the hom-functor,
for some choice of 𝑎, is called representable. Such a functor must neces-
sarily be 𝐒𝐞𝐭-valued, since 𝐂(𝑎, −) is.
I said before that we often think of isomorphic sets as identical.
More generally, we think of isomorphic objects in a category as iden-
tical. That’s because objects have no structure other than their relation
to other objects (and themselves) through morphisms.
For instance, we’ve previously talked about the category of monoids,
𝐌𝐨𝐧, that was initially modeled with sets. But we were careful to pick
as morphisms only those functions that preserved the monoidal struc-
ture of those sets. So if two objects in 𝐌𝐨𝐧 are isomorphic, meaning
there is an invertible morphism between them, they have exactly the
same structure. If we peeked at the sets and functions that they were
based upon, we’d see that the unit element of one monoid was mapped
to the unit element of another, and that a product of two elements was
mapped to the product of their mappings.
The same reasoning can be applied to functors. Functors between
two categories form a category in which natural transformations play
the role of morphisms. So two functors are isomorphic, and can be
thought of as identical, if there is an invertible natural transformation
between them.
Let’s analyze the definition of the representable functor from this
perspective. For 𝐹 to be representable we require that: There be an object
𝑎 in 𝐂; one natural transformation α from 𝐂(𝑎, −) to 𝐹 ; another natural
transformation, β, in the opposite direction; and that their composition
be the identity natural transformation.
Let’s look at the component of α at some object 𝑥. It’s a function in
𝐒𝐞𝐭:
𝛼𝑥 ∷ 𝐂(𝑎, 𝑥) → 𝐹 𝑥
The naturality condition for this transformation tells us that, for any
morphism 𝑓 from 𝑥 to 𝑦, the following diagram commutes:
𝐹 𝑓 ∘ 𝛼𝑥 = 𝛼𝑦 ∘ 𝐂(𝑎, 𝑓 )
fmap f . alpha = alpha . fmap f
We will see later that a natural transformation from 𝐂(𝑎, −) to any 𝐒𝐞𝐭-
valued functor always exists (Yoneda’s lemma) but it is not necessarily
invertible.
Let me give you an example in Haskell with the list functor and Int
as a. Here’s a natural transformation that does the job:
alpha :: forall x. (Int -> x) -> [x]
alpha h = map h [12]
def alpha: (Int => ?) ~> List = new ~>[Int => ?, List] {
def apply[A](fa: Int => A) =
List(12).map(fa)
}
I have arbitrarily picked the number 12 and created a singleton list with
it. I can then fmap the function h over this list and get a list of the type
returned by h. (There are actually as many such transformations as there
are lists of integers.)
The naturality condition is equivalent to the composability of map
(the list version of fmap):
fmap(f)(fmap(h)(List(12))) ==
fmap(f compose h)(List(12))
To show that alpha is an isomorphism, we would have to produce its inverse — a natural transformation going the other way:
def beta: List ~> (Int => ?)
You might think of retrieving an x from the list, e.g., using head, but
that won’t work for an empty list. Notice that there is no choice for the
type a (in place of Int) that would work here. So the list functor is not
representable.
Remember when we talked about Haskell (endo-) functors being
a little like containers? In the same vein we can think of representable
functors as containers for storing memoized results of function calls (the
members of hom-sets in Haskell are just functions). The representing
object, the type 𝑎 in 𝐂(𝑎, −), is thought of as the key type, with which
we can access the tabulated values of a function. The transformation
we called alpha is called tabulate, and its inverse, beta, is called index.
Here’s a (slightly simplified) Representable class definition:
class Representable f where
type Rep f :: *
tabulate :: (Rep f -> x) -> f x
index :: f x -> Rep f -> x
trait Representable[F[_]] {
  type Rep
  def tabulate[X](f: Rep => X): F[X]
  def index[X]: F[X] => Rep => X
}
Notice that the representing type, our 𝑎, which is called Rep f here, is
part of the definition of Representable. The star just means that Rep f
is a type (as opposed to a type constructor, or other more exotic kinds).
Infinite lists, or streams, which cannot be empty, are representable.
Here's the Scala instance:
case class Stream[X](h: () => X, t: () => Stream[X])

implicit def streamRepresentable = new Representable[Stream] {
  type Rep = Int
  def tabulate[X](f: Rep => X): Stream[X] =
    Stream(() => f(0),
      () => tabulate(f compose ((n: Int) => n + 1)))
def index[X]
: Stream[X] => Rep => X = {
case Stream(b, bs) => n =>
if (n == 0) b()
else index(bs())(n - 1)
}
}
are involved. On the other hand, the two representations are often im-
plemented differently and may have different performance characteris-
tics. Memoization is used as a performance enhancement and may lead
to substantially reduced run times. Being able to generate different rep-
resentations of the same underlying computation is very valuable in
practice. So, surprisingly, even though it’s not concerned with perfor-
mance at all, category theory provides ample opportunities to explore
alternative implementations that have practical value.
14.3 Challenges
1. Show that the hom-functors map identity morphisms in 𝐂 to cor-
responding identity functions in 𝐒𝐞𝐭.
2. Show that Maybe is not representable.
3. Is the Reader functor representable?
4. Using Stream representation, memoize a function that squares its
argument.
5. Show that tabulate and index for Stream are indeed the inverse
of each other. (Hint: use induction.)
6. The functor:
data Pair a = Pair a a
is representable. Can you guess the type that represents it? Im-
plement tabulate and index.
14.4 Bibliography
1. The Catsters video about representable functors1 .
1 https://www.youtube.com/watch?v=4QgjKUzyrhM
15
The Yoneda Lemma
functor. The Yoneda lemma tells us that all 𝐒𝐞𝐭-valued functors can be
obtained from hom-functors through natural transformations, and it ex-
plicitly enumerates all such transformations.
When I talked about natural transformations, I mentioned that the
naturality condition can be quite restrictive. When you define a compo-
nent of a natural transformation at one object, naturality may be strong
enough to “transport” this component to another object that is con-
nected to it through a morphism. The more arrows between objects in
the source and the target categories there are, the more constraints you
have for transporting the components of natural transformations. 𝐒𝐞𝐭
happens to be a very arrow-rich category.
The Yoneda lemma tells us that a natural transformation between
a hom-functor and any other functor 𝐹 is completely determined by
specifying the value of its single component at just one point! The rest
of the natural transformation just follows from naturality conditions.
So let’s review the naturality condition between the two functors
involved in the Yoneda lemma. The first functor is the hom-functor. It
maps any object 𝑥 in 𝐂 to the set of morphisms 𝐂(𝑎, 𝑥) — for 𝑎 a fixed
object in 𝐂. We’ve also seen that it maps any morphism 𝑓 from 𝑥 → 𝑦
to 𝐂(𝑎, 𝑓 ).
The second functor is an arbitrary 𝐒𝐞𝐭-valued functor 𝐹 .
Let’s call the natural transformation between these two functors 𝛼.
Because we are operating in 𝐒𝐞𝐭, the components of the natural trans-
formation, like 𝛼𝑥 or 𝛼𝑦 , are just regular functions between sets:
𝛼𝑥 ∷ 𝐂(𝑎, 𝑥) → 𝐹 𝑥
𝛼𝑦 ∷ 𝐂(𝑎, 𝑦) → 𝐹 𝑦
And because these are just functions, we can look at their values at
specific points. But what’s a point in the set 𝐂(𝑎, 𝑥)? Here’s the key
observation: Every point in the set 𝐂(𝑎, 𝑥) is also a morphism ℎ from 𝑎
to 𝑥.
So the naturality square for 𝛼:
𝛼𝑦 ∘ 𝐂(𝑎, 𝑓 ) = 𝐹 𝑓 ∘ 𝛼𝑥
You might recall from the previous section that the action of the hom-
functor 𝐂(𝑎, −) on a morphism 𝑓 was defined as precomposition:
𝐂(𝑎, 𝑓 )ℎ = 𝑓 ∘ ℎ
Let's see what happens when we pick 𝑥 equal to 𝑎.
In that case ℎ becomes a morphism from 𝑎 to 𝑎. We know that there is
at least one such morphism, ℎ = id𝑎 . Let’s plug it in:
𝛼𝑦 𝑓 = (𝐹 𝑓 )(𝛼𝑎 id𝑎 )
Notice what has just happened: The left hand side is the action of 𝛼𝑦
on an arbitrary element 𝑓 of 𝐂(𝑎, 𝑦). And it is totally determined by the
single value of 𝛼𝑎 at id𝑎 . We can pick any such value and it will generate
a natural transformation. Since the values of 𝛼𝑎 are in the set 𝐹 𝑎, any
point in 𝐹 𝑎 will define some 𝛼.
Conversely, given any natural transformation 𝛼 from 𝐂(𝑎, −) to 𝐹 ,
you can evaluate it at id𝑎 to get a point in 𝐹 𝑎.
We have just proven the Yoneda lemma: natural transformations from 𝐂(𝑎, −) to 𝐹 are in one-to-one correspondence with the elements of 𝐹 𝑎;
in other words,
𝐍𝐚𝐭(𝐂(𝑎, −), 𝐹 ) ≅ 𝐹 𝑎
Or, if we use the notation [𝐂, 𝐒𝐞𝐭] for the functor category between 𝐂 and
𝐒𝐞𝐭, the set of natural transformation is just a hom-set in that category,
and we can write:
[𝐂, 𝐒𝐞𝐭](𝐂(𝑎, −), 𝐹 ) ≅ 𝐹 𝑎
I’ll explain later how this correspondence is in fact a natural isomor-
phism.
Now let’s try to get some intuition about this result. The most amaz-
ing thing is that the whole natural transformation crystallizes from just
one nucleation site: the value we assign to it at id𝑎 . It spreads from that
point following the naturality condition. It floods the image of 𝐂 in 𝐒𝐞𝐭.
So let’s first consider what the image of 𝐂 is under 𝐂(𝑎, −).
Let’s start with the image of 𝑎 itself. Under the hom-functor 𝐂(𝑎, −),
𝑎 is mapped to the set 𝐂(𝑎, 𝑎). Under the functor 𝐹 , on the other hand, it
is mapped to the set 𝐹 𝑎. The component of the natural transformation 𝛼𝑎
is some function from 𝐂(𝑎, 𝑎) to 𝐹 𝑎. Let’s focus on just one point in the
set 𝐂(𝑎, 𝑎), the point corresponding to the morphism id𝑎 . To emphasize
the fact that it’s just a point in a set, let’s call it 𝑝. The component 𝛼𝑎
should map 𝑝 to some point 𝑞 in 𝐹 𝑎. I’ll show you that any choice of 𝑞
leads to a unique natural transformation.
The first claim is that the choice of one point 𝑞 uniquely determines the
rest of the function 𝛼𝑎 . Indeed, let’s pick any other point, 𝑝 ′ in 𝐂(𝑎, 𝑎),
corresponding to some morphism 𝑔 from 𝑎 to 𝑎. And here’s where the
magic of the Yoneda lemma happens: 𝑔 can be viewed as a point 𝑝 ′ in
the set 𝐂(𝑎, 𝑎). At the same time, it selects two functions between sets.
Indeed, under the hom-functor, the morphism 𝑔 is mapped to a function
𝐂(𝑎, 𝑔); and under 𝐹 it’s mapped to 𝐹 𝑔.
Now let’s consider the action of 𝐂(𝑎, 𝑔) on our original 𝑝 which, as you
remember, corresponds to id𝑎 . It is defined as precomposition, 𝑔 ∘ id𝑎 ,
which is equal to 𝑔, which corresponds to our point 𝑝 ′ . So the morphism
𝑔 is mapped to a function that, when acting on 𝑝, produces 𝑝 ′ , which is
𝑔. We have come full circle!
Now consider the action of 𝐹 𝑔 on 𝑞. It is some 𝑞 ′ , a point in 𝐹 𝑎. To
complete the naturality square, 𝑝 ′ must be mapped to 𝑞 ′ under 𝛼𝑎 . We
picked an arbitrary 𝑝 ′ (an arbitrary 𝑔) and derived its mapping under
𝛼𝑎 . The function 𝛼𝑎 is thus completely determined.
The second claim is that 𝛼𝑥 is uniquely determined for any object 𝑥
in 𝐂 that is connected to 𝑎. The reasoning is analogous, except that now
we have two more sets, 𝐂(𝑎, 𝑥) and 𝐹 𝑥, and the morphism 𝑔 from 𝑎 to
𝑥 is mapped, under the hom-functor, to:
𝐂(𝑎, 𝑔) ∷ 𝐂(𝑎, 𝑎) → 𝐂(𝑎, 𝑥)
and, under the functor 𝐹, to:
𝐹 𝑔 ∷ 𝐹 𝑎 → 𝐹 𝑥
Again, 𝐂(𝑎, 𝑔) acting on our 𝑝 is given by the precomposition: 𝑔 ∘ id𝑎 ,
which corresponds to a point 𝑝 ′ in 𝐂(𝑎, 𝑥). Naturality determines the
value of 𝛼𝑥 acting on 𝑝 ′ to be:
𝑞 ′ = (𝐹 𝑔)𝑞
functors are the representable ones. They are either the hom-functors or
the functors that are naturally isomorphic to hom-functors. Any other
functor 𝐹 is obtained from a hom-functor through a lossy transforma-
tion. Such a transformation may not only lose information, but it may
also cover only a small part of the image of the functor 𝐹 in 𝐒𝐞𝐭.
The Yoneda lemma tells us that the reader functor can be naturally
mapped to any other functor.
A natural transformation is a polymorphic function. So given a func-
tor F, we have a mapping to it from the reader functor:
alpha :: forall x . (a -> x) -> F x
The Yoneda lemma tells us that such polymorphic functions are in one-to-one correspondence with the elements of F a:
forall x . (a -> x) -> F x ≅ F a
The right hand side of this identity is what we would normally consider
a data structure. Remember the interpretation of functors as generalized
containers? F a is a container of a. But the left hand side is a polymor-
phic function that takes a function as an argument. The Yoneda lemma
tells us that the two representations are equivalent — they contain the
same information.
Another way of saying this is: Give me a polymorphic function of
the type:
alpha :: forall x . (a -> x) -> F x
and I’ll produce a container of a. The trick is the one we used in the proof
of the Yoneda lemma: we call this function with id to get an element of
F a:
alpha id :: F a
alpha(identity): F[A]
Conversely, give me a value:
fa :: F a
and I'll produce a polymorphic function:
alpha h = fmap h fa
alpha(h) == fmap(h)(fa)
of the correct type. You can easily go back and forth between the two
representations.
The advantage of having multiple representations is that one might
be easier to compose than the other, or that one might be more efficient
in some applications than the other.
The simplest illustration of this principle is the code transformation
that is often used in compiler construction: the continuation passing
style or cps. It’s the simplest application of the Yoneda lemma to the
identity functor. Replacing F with identity produces:
forall r . (a -> r) -> r ≅ a
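In Scala, we can sketch the two directions of this isomorphism; the Yo wrapper and the names toCps and fromCps are our own choosing:

// Yo[A] stands in for the polymorphic type forall r. (A => r) => r.
trait Yo[A] { def apply[R](k: A => R): R }

def toCps[A](a: A): Yo[A] =
  new Yo[A] { def apply[R](k: A => R): R = k(a) }

def fromCps[A](y: Yo[A]): A =
  y(identity)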
15.2 Co-Yoneda
As usual, we get a bonus construction by inverting the direction of ar-
rows. The Yoneda lemma can be applied to the opposite category 𝐂𝑜𝑝 to
give us a mapping between contravariant functors.
Equivalently, we can derive the co-Yoneda lemma by fixing the tar-
get object of our hom-functors instead of the source. We get the con-
travariant hom-functor from 𝐂 to 𝐒𝐞𝐭: 𝐂(−, 𝑎). The contravariant version
of the Yoneda lemma establishes one-to-one correspondence between
natural transformations from this functor to any other contravariant
functor 𝐹 and the elements of the set 𝐹 𝑎:
𝐍𝐚𝐭(𝐂(−, 𝑎), 𝐹 ) ≅ 𝐹 𝑎
Notice that in some literature it’s the contravariant version that’s called
the Yoneda lemma.
15.3 Challenges
1. Show that the two functions phi and psi that form the Yoneda
isomorphism in Haskell are inverses of each other.
phi :: (forall x . (a -> x) -> F x) -> F a
phi alpha = alpha id
psi :: F a -> (forall x . (a -> x) -> F x)
psi fa h = fmap h fa
15.4 Bibliography
1. Catsters1 video.
1 https://www.youtube.com/watch?v=TLMxHB19khE
16
Yoneda Embedding
We've seen that, having fixed an object 𝑎 in 𝐂, the mapping of a variable object 𝑥 to the hom-set 𝐂(𝑎, 𝑥) is a functor from 𝐂 to 𝐒𝐞𝐭:
𝑥 → 𝐂(𝑎, 𝑥)
(The codomain is 𝐒𝐞𝐭 because the hom-set 𝐂(𝑎, 𝑥) is a set.) We call this
mapping a hom-functor — we have previously defined its action on mor-
phisms as well.
Now let’s vary 𝑎 in this mapping. We get a new mapping that assigns
the hom-functor 𝐂(𝑎, −) to any 𝑎.
𝑎 → 𝐂(𝑎, −)
This mapping takes objects of 𝐂 to functors — objects in the functor
category from 𝐂 to 𝐒𝐞𝐭. You may also recall that hom-functors are the
prototypical representable functors.
Every time we have a mapping of objects between two categories,
it’s natural to ask if such a mapping is also a functor. In other words
whether we can lift a morphism from one category to a morphism in
the other category. A morphism in 𝐂 is just an element of 𝐂(𝑎, 𝑏), but a
morphism in the functor category [𝐂, 𝐒𝐞𝐭] is a natural transformation. So
we are looking for a mapping of morphisms to natural transformations.
Let’s see if we can find a natural transformation corresponding to
a morphism 𝑓 ∷ 𝑎 → 𝑏. First, lets see what 𝑎 and 𝑏 are mapped to.
They are mapped to two functors: 𝐂(𝑎, −) and 𝐂(𝑏, −). We need a natural
transformation between those two functors.
And here's the trick: we use the Yoneda lemma:
[𝐂, 𝐒𝐞𝐭](𝐂(𝑎, −), 𝐹 ) ≅ 𝐹 𝑎
and replace the generic 𝐹 with the hom-functor 𝐂(𝑏, −). We get:
[𝐂, 𝐒𝐞𝐭](𝐂(𝑎, −), 𝐂(𝑏, −)) ≅ 𝐂(𝑏, 𝑎)
This is exactly the natural transformation between the two hom-functors
we were looking for, but with a little twist: We have a mapping between
a natural transformation and a morphism — an element of 𝐂(𝑏, 𝑎) — that
goes in the “wrong” direction. But that’s okay; it only means that the
functor we are looking at is contravariant.
Actually, we’ve got even more than we bargained for. The mapping from
𝐂 to [𝐂, 𝐒𝐞𝐭] is not only a contravariant functor — it is a fully faith-
ful functor. Fullness and faithfulness are properties of functors that de-
scribe how they map hom-sets.
A faithful functor is injective on hom-sets, meaning that it maps
distinct morphisms to distinct morphisms. In other words, it doesn’t
coalesce them.
A full functor is surjective on hom-sets, meaning that it maps one
hom-set onto the other hom-set, fully covering the latter.
A fully faithful functor 𝐹 is a bijection on hom-sets — a one to one
matching of all elements of both sets. For every pair of objects 𝑎 and
𝑏 in the source category 𝐂 there is a bijection between 𝐂(𝑎, 𝑏) and
𝐃(𝐹 𝑎, 𝐹 𝑏), where 𝐃 is the target category of 𝐹 (in our case, the func-
tor category, [𝐂, 𝐒𝐞𝐭]). Notice that this doesn’t mean that 𝐹 is a bijection
on objects. There may be objects in 𝐃 that are not in the image of 𝐹 , and
we can’t say anything about hom-sets for those objects.
The contravariant functor we have just described:
𝑎 → 𝐂(𝑎, −)
is called the Yoneda embedding. It embeds the opposite category 𝐂𝑜𝑝 in the functor category [𝐂, 𝐒𝐞𝐭].
16.2 Application to Haskell
In Haskell, the Yoneda embedding can be represented as the isomor-
phism between natural transformations amongst reader functors on the
one hand, and functions (going in the opposite direction) on the other
hand:
forall x. (a -> x) -> (b -> x) ≅ b -> a
In Scala, given a function btoa: B => A, we can construct the natural transformation fromY between the corresponding reader functors:
def fromY[A, B](btoa: B => A): (A => ?) ~> (B => ?) =
  new ~>[A => ?, B => ?] {
    def apply[X](f: A => X): B => X =
      b => f(btoa(b))
  }
Conversely, given such a transformation, we can recover btoa by applying it to the identity:
fromY id :: b -> a
fromY(identity): B => A
This establishes the bijection between functions of the type fromY and
btoa.
An alternative way of looking at this isomorphism is that it’s a cps
encoding of a function from b to a. The argument a -> x is a contin-
uation (the handler). The result is a function from b to x which, when
called with a value of type b, will execute the continuation precomposed
with the function being encoded.
The Yoneda embedding also explains some of the alternative repre-
sentations of data structures in Haskell. In particular, it provides a very
useful representation1 of lenses from the Control.Lens library.
embedding/
16.3 Preorder Example
A preorder is a set with an ordering relation between its elements that's traditionally
written as ⩽ (less than or equal). The “pre” in preorder is there because
we’re only requiring the relation to be transitive and reflexive but not
necessarily antisymmetric (so it’s possible to have cycles).
A set with the preorder relation gives rise to a category. The objects
are the elements of this set. A morphism from object 𝑎 to 𝑏 either doesn’t
exist, if the objects cannot be compared or if it’s not true that 𝑎 ⩽ 𝑏; or
it exists if 𝑎 ⩽ 𝑏, and it points from 𝑎 to 𝑏. There is never more than one
morphism from one object to another. Therefore any hom-set in such a
category is either an empty set or a one-element set. Such a category is
called thin.
It’s easy to convince yourself that this construction is indeed a cate-
gory: The arrows are composable because, if 𝑎 ⩽ 𝑏 and 𝑏 ⩽ 𝑐 then 𝑎 ⩽ 𝑐;
and the composition is associative. We also have the identity arrows
because every element is (less than or) equal to itself (reflexivity of the
underlying relation).
We can now apply the co-Yoneda embedding to a preorder category.
In particular, we’re interested in its action on morphisms:
[𝐂, 𝐒𝐞𝐭](𝐂(−, 𝑎), 𝐂(−, 𝑏)) ≅ 𝐂(𝑎, 𝑏)
The hom-set on the right hand side is non-empty if and only if 𝑎 ⩽ 𝑏
— in which case it’s a one-element set. Consequently, if 𝑎 ⩽ 𝑏, there
exists a single natural transformation on the left. Otherwise there is no
natural transformation.
So what’s a natural transformation between hom-functors in a pre-
order? It should be a family of functions between sets 𝐂(−, 𝑎) and 𝐂(−, 𝑏).
In a preorder, each of these sets can either be empty or a singleton. Let’s
see what kind of functions are there at our disposal.
There is a function from an empty set to itself (the identity acting on
an empty set), a function absurd from an empty set to a singleton set (it
does nothing, since it only needs to be defined for elements of an empty
set, of which there are none), and a function from a singleton to itself
(the identity acting on a one-element set). The only combination that is
forbidden is the mapping from a singleton to an empty set (what would
the value of such a function be when acting on the single element?).
So our natural transformation will never connect a singleton hom-
set to an empty hom-set. In other words, if 𝑥 ⩽ 𝑎 (singleton hom-set
𝐂(𝑥, 𝑎)) then 𝐂(𝑥, 𝑏) cannot be empty. A non-empty 𝐂(𝑥, 𝑏) means that
𝑥 is less or equal to 𝑏. So the existence of the natural transformation in
question requires that, for every 𝑥, if 𝑥 ⩽ 𝑎 then 𝑥 ⩽ 𝑏.
for all 𝑥, 𝑥 ⩽ 𝑎 ⇒ 𝑥 ⩽ 𝑏
On the other hand, co-Yoneda tells us that the existence of this natural
transformation is equivalent to 𝐂(𝑎, 𝑏) being non-empty, or to 𝑎 ⩽ 𝑏.
Together, we get:
𝑎 ⩽ 𝑏 if and only if, for every 𝑥, 𝑥 ⩽ 𝑎 ⇒ 𝑥 ⩽ 𝑏
16.4 Naturality
The Yoneda lemma establishes the isomorphism between the set of nat-
ural transformations and an object in 𝐒𝐞𝐭. Natural transformations are
morphisms in the functor category [𝐂, 𝐒𝐞𝐭]. The set of natural transfor-
mation between any two functors is a hom-set in that category. The
Yoneda lemma is the isomorphism:
[𝐂, 𝐒𝐞𝐭](𝐂(𝑎, −), 𝐹 ) ≅ 𝐹 𝑎
This isomorphism turns out to be natural both in 𝑎 and in 𝐹; its naturality squares are built from composites such as Φ𝑏 ∘ (𝐹 𝑓 ).
I’m not going to prove the naturality of the whole isomorphism — after
you’ve established what the functors are, the proof is pretty mechanical.
It follows from the fact that our isomorphism is built up from functors
and natural transformations. There is simply no way for it to go wrong.
16.5 Challenges
1. Express the co-Yoneda embedding in Haskell.
2. Show that the bijection we established between fromY and btoa is
an isomorphism (the two mappings are the inverse of each other).
3. Work out the Yoneda embedding for a monoid. What functor cor-
responds to the monoid’s single object? What natural transforma-
tions correspond to monoid morphisms?
4. What is the application of the covariant Yoneda embedding to
preorders? (Question suggested by Gershom Bazerman.)
5. Yoneda embedding can be used to embed an arbitrary functor cat-
egory [𝐂, 𝐃] in the functor category [[𝐂, 𝐃], 𝐒𝐞𝐭]. Figure out how
it works on morphisms (which in this case are natural transfor-
mations).
Part Three
17
It’s All About Morphisms
If I haven't convinced you yet that category theory is all about mor-
phisms then I haven’t done my job properly. Since the next topic is
adjunctions, which are defined in terms of isomorphisms of hom-sets, it
makes sense to review our intuitions about the building blocks of hom-
sets. Also, you’ll see that adjunctions provide a more general language
to describe a lot of constructions we’ve studied before, so it might help
to review them too.
17.1 Functors
To begin with, you should really think of functors as mappings of mor-
phisms — the view that’s emphasized in the Haskell definition of the
Functor typeclass, which revolves around fmap. Of course, functors also
map objects — the endpoints of morphisms — otherwise we wouldn’t be
able to talk about preserving composition. Objects tell us which pairs of
morphisms are composable. The target of one morphism must be equal
to the source of the other — if they are to be composed. So if we want
the composition of morphisms to be mapped to the composition of lifted
morphisms, the mapping of their endpoints is pretty much determined.
17.3 Natural Transformations
In general, natural transformations are very convenient whenever we
need a mapping from morphisms to commuting squares. Two opposing
sides of a naturality square are the mappings of some morphism 𝑓 under
two functors 𝐹 and 𝐺. The other sides are the components of the natural
transformation (which are also morphisms).
We've seen examples of this orthogonality in Haskell. There the action of a functor modifies the content of a container without changing its shape, while a natural transformation repackages the untouched contents into a different container. The order of these operations doesn't matter.
We’ve seen the cones in the definition of a limit replaced by natural
transformations. Naturality ensures that the sides of every cone com-
mute. Still, a limit is defined in terms of mappings between cones. These
mappings must also satisfy commutativity conditions. (For instance, the
triangles in the definition of the product must commute.)
These conditions, too, may be replaced by naturality. You may recall
that the universal cone, or the limit, is defined as a natural transforma-
tion between the (contravariant) hom-functor:
𝐹 ∷ 𝑐 → 𝐂(𝑐, Lim𝐷)
and the functor that maps 𝑐 to the set of cones over 𝐷 with 𝑐 as their apex:
𝐺 ∷ 𝑐 → 𝐍𝐚𝐭(Δ𝑐 , 𝐷)
Here, Δ𝑐 is the constant functor, and 𝐷 is the functor that defines the
diagram in 𝐂. Both functors 𝐹 and 𝐺 have well defined actions on mor-
phisms in 𝐂. It so happens that this particular natural transformation
between 𝐹 and 𝐺 is an isomorphism.
17.4 Natural Isomorphisms
A natural isomorphism — which is a natural transformation whose ev-
ery component is reversible — is category theory’s way of saying that
“two things are the same.” A component of such a transformation must
be an isomorphism between objects — a morphism that has the inverse.
If you visualize functor images as sheets, a natural isomorphism is a
one-to-one invertible mapping between those sheets.
17.5 Hom-Sets
But what are morphisms? They do have more structure than objects:
unlike objects, morphisms have two ends. But if you fix the source and
the target objects, the morphisms between the two form a boring set
(at least for locally small categories). We can give elements of this set
names like 𝑓 or 𝑔, to distinguish one from another — but what is it,
really, that makes them different?
The essential difference between morphisms in a given hom-set lies
in the way they compose with other morphisms (from abutting hom-
sets). If there is a morphism ℎ whose composition (either pre- or post-)
with 𝑓 is different than that with 𝑔, for instance:
ℎ∘𝑓 ≠ℎ∘𝑔
then we can directly “observe” the difference between 𝑓 and 𝑔. But even
if the difference is not directly observable, we might use functors to
zoom in on the hom-set. A functor 𝐹 may map the two morphisms to
distinct morphisms:
𝐹𝑓 ≠ 𝐹𝑔
in a richer category, where the abutting hom-sets provide more resolu-
tion, e.g.,
ℎ′ ∘ 𝐹 𝑓 ≠ ℎ′ ∘ 𝐹 𝑔
where ℎ′ is not in the image of 𝐹 .
The definition of a limit is also a natural isomorphism between hom-
sets (the second one, again, in the functor category):
𝐂(𝑐, Lim𝐷) ≅ 𝐍𝐚𝐭(Δ𝑐 , 𝐷)
17.8 Challenges
1. Consider some degenerate cases of a naturality condition and
draw the appropriate diagrams. For instance, what happens if ei-
ther functor 𝐹 or 𝐺 map both objects 𝑎 and 𝑏 (the ends of
𝑓 ∷ 𝑎 → 𝑏) to the same object, e.g., 𝐹 𝑎 = 𝐹 𝑏 or 𝐺𝑎 = 𝐺𝑏? (Notice
that you get a cone or a co-cone this way.) Then consider cases
where either 𝐹 𝑎 = 𝐺𝑎 or 𝐹 𝑏 = 𝐺𝑏. Finally, what if you start with
a morphism that loops on itself — 𝑓 ∷ 𝑎 → 𝑎?
18
Adjunctions
The round trip of an invertible function composed with its inverse leaves you in the same spot, no matter on which side you start. In the case of
pairs, this isomorphism is called swap:
swap :: (a, b) -> (b, a)
swap (a, b) = (b, a)

def swap[A, B](p: (A, B)): (B, A) = (p._2, p._1)
But here’s the tricky part: What does it mean for two functors to be
equal? What do we mean by this equality:
𝑅 ∘ 𝐿 = 𝐼𝐃
or this one:
𝐿 ∘ 𝑅 = 𝐼𝐂
It would be reasonable to define functor equality in terms of equality
of objects. Two functors, when acting on equal objects, should produce
equal objects. But we don’t, in general, have the notion of object equality
in an arbitrary category. It’s just not part of the definition. (Going deeper
into this rabbit hole of “what equality really is,” we would end up in
Homotopy Type Theory.)
You might argue that functors are morphisms in the category of
categories, so they should be equality-comparable. And indeed, as long
as we are talking about small categories, where objects form a set, we
can indeed use the equality of elements of a set to equality-compare
objects.
But, remember, 𝐂𝐚𝐭 is really a 𝟐-category. Hom-sets in a 𝟐-category
have additional structure — there are 2-morphisms acting between 1-
morphisms. In 𝐂𝐚𝐭, 1-morphisms are functors, and 2-morphisms are
natural transformations. So it’s more natural (can’t avoid this pun!) to
consider natural isomorphisms as substitutes for equality when talking
about functors.
So, instead of isomorphism of categories, it makes sense to consider
a more general notion of equivalence. Two categories 𝐂 and 𝐃 are equiv-
alent if we can find two functors going back and forth between them,
whose composition (either way) is naturally isomorphic to the identity
functor. In other words, there is a two-way natural transformation be-
tween the composition 𝑅 ∘ 𝐿 and the identity functor 𝐼𝐃 , and another
between 𝐿 ∘ 𝑅 and the identity functor 𝐼𝐂 .
Adjunction is even weaker than equivalence, because it doesn’t re-
quire that the composition of the two functors be isomorphic to the
identity functor. Instead it stipulates the existence of a one way nat-
ural transformation from 𝐼𝐃 to 𝑅 ∘ 𝐿, and another from 𝐿 ∘ 𝑅 to 𝐼𝐂 . Here
are the signatures of these two natural transformations:
𝜂 ∷ 𝐼𝐃 → 𝑅 ∘ 𝐿
𝜀 ∷ 𝐿 ∘ 𝑅 → 𝐼𝐂
What we don't have, in general, are natural transformations going in the opposite directions:
𝑅 ∘ 𝐿 → 𝐼𝐃 (not necessarily)
𝐼𝐂 → 𝐿 ∘ 𝑅 (not necessarily)
The transformation 𝜂 is called the unit, and 𝜀 the counit of the adjunction.
Because of this asymmetry, the functor 𝐿 is called the left adjoint to the
functor 𝑅, while the functor 𝑅 is the right adjoint to 𝐿. (Of course, left
and right make sense only if you draw your diagrams one particular
way.)
The compact notation for the adjunction is:
𝐿⊣𝑅
To better understand the adjunction, let’s analyze the unit and the counit
in more detail.
Let’s start with the unit. It’s a natural transformation, so it’s a family of
morphisms. Given an object 𝑑 in 𝐃, the component of 𝜂 is a morphism
between 𝐼 𝑑, which is equal to 𝑑, and (𝑅 ∘ 𝐿)𝑑; which, in the picture, is
called 𝑑 ′ :
𝜂𝑑 ∷ 𝑑 → (𝑅 ∘ 𝐿)𝑑
Notice that the composition 𝑅 ∘ 𝐿 is an endofunctor in 𝐃.
This equation tells us that we can pick any object 𝑑 in 𝐃 as our
starting point, and use the round trip functor 𝑅 ∘ 𝐿 to pick our target
object 𝑑 ′ . Then we shoot an arrow — the morphism 𝜂𝑑 — to our target.
By the same token, the component of the counit ε can be described as:
𝜀𝑐 ∷ (𝐿 ∘ 𝑅)𝑐 → 𝑐
We can combine the unit and the counit into two composite transformations:
𝐿 = 𝐿 ∘ 𝐼𝐃 → 𝐿 ∘ 𝑅 ∘ 𝐿 → 𝐼𝐂 ∘ 𝐿 = 𝐿
𝑅 = 𝐼𝐃 ∘ 𝑅 → 𝑅 ∘ 𝐿 ∘ 𝑅 → 𝑅 ∘ 𝐼𝐂 = 𝑅
These are called triangular identities because they make the following
diagrams commute:
𝐿 --𝐿∘𝜂--> 𝐿 ∘ 𝑅 ∘ 𝐿 --𝜀∘𝐿--> 𝐿
𝑅 --𝜂∘𝑅--> 𝑅 ∘ 𝐿 ∘ 𝑅 --𝑅∘𝜀--> 𝑅
(the third side of each triangle is the identity natural transformation)
These are diagrams in the functor category: the arrows are natural trans-
formations, and their composition is the horizontal composition of nat-
ural transformations. In components, these identities become:
𝜀𝐿𝑑 ∘ 𝐿 𝜂𝑑 = id𝐿𝑑
𝑅 𝜀𝑐 ∘ 𝜂𝑅𝑐 = id𝑅𝑐
We often see unit and counit in Haskell under different names. Unit is
known as return (or pure, in the definition of Applicative):
return :: d -> m d
The counit goes under the name extract, as part of the definition of a comonad:
extract :: w c -> c
If you think of an endofunctor as a container, the unit (or return)
is a polymorphic function that creates a default box around a value of
arbitrary type. The counit (or extract) does the reverse: it retrieves or
produces a single value from a container.
We’ll see later that every pair of adjoint functors defines a monad
and a comonad. Conversely, every monad or comonad may be factorized
into a pair of adjoint functors — this factorization is not unique, though.
In Haskell, we use monads a lot, but only rarely factorize them into
pairs of adjoint functors, primarily because those functors would nor-
mally take us out of Hask.
We can however define adjunctions of endofunctors in Haskell. Here’s
part of the definition taken from Data.Functor.Adjunction:
class (Functor f, Representable u) =>
      Adjunction f u | f -> u, u -> f where
  unit :: a -> u (f a)
  counit :: f (u a) -> a
Additional conditions, after the vertical bar, specify functional de-
pendencies. For instance, f -> u means that u is determined by f (the
relation between f and u is a function, here on type constructors). Con-
versely, u -> f means that, if we know u, then f is uniquely determined.
I’ll explain in a moment why, in Haskell, we can impose the condi-
tion that the right adjoint u be a representable functor.
18.2 Adjunctions and Hom-Sets
There is an equivalent definition of the adjunction in terms of natural isomorphisms of hom-sets. As we'll see soon, if the product of any pair of objects exists in a category, it can also be defined through an adjunction. We start with an object 𝑑 in 𝐃 and map it to 𝐂 using 𝐿. The two objects 𝐿𝑑 and 𝑐 define a hom-set:
𝐂(𝐿𝑑, 𝑐)
Similarly, we can map the target object 𝑐 using 𝑅. Now we have two
objects in 𝐃, 𝑑 and 𝑅𝑐. They, too, define a hom-set:
𝐃(𝑑, 𝑅𝑐)
We say that 𝐿 is left adjoint to 𝑅 if and only if there is an isomorphism of hom-sets:
𝐂(𝐿𝑑, 𝑐) ≅ 𝐃(𝑑, 𝑅𝑐)
that is natural both in 𝑑 and 𝑐. Naturality means that the source 𝑑 can be
varied smoothly across 𝐃; and the target 𝑐, across 𝐂. More precisely, we
have a natural transformation 𝜑 between the following two (covariant)
functors from 𝐂 to 𝐒𝐞𝐭. Here’s the action of these functors on objects:
𝑐 → 𝐂(𝐿𝑑, 𝑐)
𝑐 → 𝐃(𝑑, 𝑅𝑐)
Similarly, there is a natural transformation 𝜓 between the following two (contravariant) functors from 𝐃 to 𝐒𝐞𝐭:
𝑑 → 𝐂(𝐿𝑑, 𝑐)
𝑑 → 𝐃(𝑑, 𝑅𝑐)
Since this isomorphism works for any object 𝑐, it must also work for
𝑐 = 𝐿𝑑:
𝐂(𝐿𝑑, 𝐿𝑑) ≅ 𝐃(𝑑, (𝑅 ∘ 𝐿)𝑑)
We know that the left hand side must contain at least one morphism,
the identity. The natural transformation will map this morphism to an
element of 𝐃(𝑑, (𝑅 ∘𝐿)𝑑) or, inserting the identity functor 𝐼 , a morphism
in:
𝐃(𝐼 𝑑, (𝑅 ∘ 𝐿)𝑑)
We get a family of morphisms parameterized by 𝑑. They form a nat-
ural transformation between the functor 𝐼 and the functor 𝑅 ∘ 𝐿 (the
naturality condition is easy to verify). This is exactly our unit, 𝜂.
Conversely, starting from the existence of the unit and counit, we
can define the transformations between hom-sets. For instance, let’s
pick an arbitrary morphism 𝑓 in the hom-set 𝐂(𝐿𝑑, 𝑐). We want to de-
fine a 𝜑 that, acting on 𝑓 , produces a morphism in 𝐃(𝑑, 𝑅𝑐).
There isn’t really much choice. One thing we can try is to lift 𝑓 using
𝑅. That will produce a morphism 𝑅𝑓 from 𝑅(𝐿𝑑) to 𝑅𝑐 — a morphism
that’s an element of 𝐃((𝑅 ∘ 𝐿)𝑑, 𝑅𝑐).
What we need for a component of 𝜑, is a morphism from 𝑑 to 𝑅𝑐.
That’s not a problem, since we can use a component of 𝜂𝑑 to get from 𝑑
to (𝑅 ∘ 𝐿)𝑑. We get:
𝜑𝑓 = 𝑅𝑓 ∘ 𝜂𝑑
The other direction is analogous, and so is the derivation of 𝜓 .
Going back to the Haskell definition of Adjunction, the natural trans-
formations 𝜑 and 𝜓 are replaced by polymorphic (in a and b) functions
leftAdjunct and rightAdjunct, respectively. The functors 𝐿 and 𝑅 are
called f and u:
class (Functor f, Representable u) =>
Adjunction f u | f -> u, u -> f where
leftAdjunct :: (f a -> b) -> (a -> u b)
rightAdjunct :: (a -> u b) -> (f a -> b)
trait Adjunction[F[_], U[_]] {
  def leftAdjunct[A, B](a: A)(f: F[A] => B): U[B]
  def rightAdjunct[A, B](fa: F[A])(f: A => U[B]): B
}
In this formulation, unit and counit can be recovered by applying the two adjuncts to identities:
unit = leftAdjunct id
counit = rightAdjunct id
and, conversely, the adjuncts can be defined in terms of unit and counit:
leftAdjunct f = fmap f . unit
rightAdjunct f = counit . fmap f
def leftAdjunct[A, B]
(a: A)(f: F[A] => B): U[B] =
U.map(unit(a))(f)
def rightAdjunct[A, B]
(a: F[A])(f: A => U[B]): B =
counit(F.map(a)(f))
It’s very instructive to follow the translation from the categorical de-
scription of the adjunction to Haskell code. I highly encourage this as
an exercise.
We are now ready to explain why, in Haskell, the right adjoint is
automatically a representable functor. The reason for this is that, to the
first approximation, we can treat the category of Haskell types as the
category of sets.
When the right category 𝐃 is 𝐒𝐞𝐭, the right adjoint 𝑅 is a functor
from 𝐂 to 𝐒𝐞𝐭. Such a functor is representable if we can find an object
𝑟𝑒𝑝 in 𝐂 such that the hom-functor 𝐂(𝑟𝑒𝑝, _) is naturally isomorphic to
𝑅. It turns out that, if 𝑅 is the right adjoint of some functor 𝐿 from 𝐒𝐞𝐭
to 𝐂, such an object always exists — it’s the image of the singleton set
() under 𝐿:
𝑟𝑒𝑝 = 𝐿()
Indeed, the adjunction tells us that the following two hom-sets are nat-
urally isomorphic:
𝐂(𝐿(), 𝑐) ≅ 𝐒𝐞𝐭((), 𝑅𝑐)
For a given 𝑐, the right hand side is the set of functions from the sin-
gleton set () to 𝑅𝑐. We’ve seen earlier that each such function picks one
element from the set 𝑅 𝑐. The set of such functions is isomorphic to the
set 𝑅𝑐. So we have:
𝐂(𝐿(), −) ≅ 𝑅
which shows that 𝑅 is indeed representable.
18.3 Product from Adjunction
We have previously defined the product of two objects by a universal construction.
More precisely, the product of two objects 𝑎 and 𝑏 is the object (𝑎 ×
𝑏) (or (a, b) in the Haskell notation) equipped with two morphisms
fst and snd such that, for any other candidate 𝑐 equipped with two
morphisms 𝑝 ∷ 𝑐 → 𝑎 and 𝑞 ∷ 𝑐 → 𝑏, there exists a unique morphism
𝑚 ∷ 𝑐 → (𝑎, 𝑏) that factorizes 𝑝 and 𝑞 through fst and snd.
As we’ve seen earlier, in Haskell, we can implement a factorizer
that generates this morphism from the two projections:
def factorizer[A, B, C](p: C => A)(q: C => B): C => (A, B) =
  x => (p(x), q(x))
fst . factorizer p q = p
snd . factorizer p q = q
factorizer(p)(q).andThen(_._1) == p
factorizer(p)(q).andThen(_._2) == q
Let me remind you what a product category is. Take two arbitrary
categories 𝐂 and 𝐃. The objects in the product category 𝐂 × 𝐃 are pairs
of objects, one from 𝐂 and one from 𝐃. The morphisms are pairs of
morphisms, one from 𝐂 and one from 𝐃.
To define a product in some category 𝐂, we should start with the
product category 𝐂×𝐂. Pairs of morphism from 𝐂 are single morphisms
in the product category 𝐂 × 𝐂.
Let’s now look at the factorizer as a mapping of hom-sets. The
first hom-set is in the product category 𝐂 × 𝐂, and the second is in 𝐂. A
general morphism in 𝐂 × 𝐂 would be a pair of morphisms ⟨𝑓 , 𝑔⟩:
𝑓 ∷ 𝑐′ → 𝑎
𝑔 ∷ 𝑐″ → 𝑏
These morphisms, in general, start from two different objects. For the product, though, we are interested in morphisms that share a common source 𝑐, so we use the diagonal functor Δ, which duplicates an object:
Δ 𝑐 = ⟨𝑐, 𝑐⟩
The hom-set of interest is then:
(𝐂 × 𝐂)(Δ 𝑐, ⟨𝑎, 𝑏⟩)
It’s a hom-set in the product category. Its elements are pairs of mor-
phisms that we recognize as the arguments to our factorizer:
(𝑐 → 𝑎) → (𝑐 → 𝑏) …
The right-hand side hom-set lives in 𝐂, and it goes between the source
object 𝑐 and the result of some functor 𝑅 acting on the target object in
𝐂 × 𝐂. That’s the functor that maps the pair ⟨𝑎, 𝑏⟩ to our product object,
𝑎 × 𝑏. We recognize this element of the hom-set as the result of the
factorizer:
… → (𝑐 → (𝑎, 𝑏))
We still don’t have a full adjunction. For that we first need our factorizer
to be invertible — we are building an isomorphism between hom-sets.
The inverse of the factorizer should start from a morphism 𝑚 — a mor-
phism from some object 𝑐 to the product object 𝑎 × 𝑏. In other words, 𝑚
should be an element of:
𝐂(𝑐, 𝑎 × 𝑏)
The inverse factorizer should map 𝑚 to a morphism ⟨𝑝, 𝑞⟩ in 𝐂 × 𝐂 that
goes from ⟨𝑐, 𝑐⟩ to ⟨𝑎, 𝑏⟩; in other words, a morphism that’s an element
of:
(𝐂 × 𝐂)(Δ 𝑐, ⟨𝑎, 𝑏⟩)
If that mapping exists, we conclude that there exists the right adjoint to
the diagonal functor. That functor defines a product.
In Haskell, we can always construct the inverse of the factorizer
by composing m with, respectively, fst and snd.
p = fst . m
q = snd . m
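In Scala, the same inverse can be sketched as (assuming m: C => (A, B)):

val p: C => A = m andThen (_._1)
val q: C => B = m andThen (_._2)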
To complete the proof of the equivalence of the two ways of defining
a product we also need to show that the mapping between hom-sets is
natural in 𝑎, 𝑏, and 𝑐. I will leave this as an exercise for the dedicated
reader.
To summarize what we have done: A categorical product may be
defined globally as the right adjoint of the diagonal functor:
(𝐂 × 𝐂)(Δ 𝑐, ⟨𝑎, 𝑏⟩) ≅ 𝐂(𝑐, 𝑎 × 𝑏)
Here, 𝑎×𝑏 is the result of the action of our right adjoint functor 𝑃𝑟𝑜𝑑𝑢𝑐𝑡
on the pair ⟨𝑎, 𝑏⟩. Notice that any functor from 𝐂 × 𝐂 is a bifunctor,
so 𝑃𝑟𝑜𝑑𝑢𝑐𝑡 is a bifunctor. In Haskell, the 𝑃𝑟𝑜𝑑𝑢𝑐𝑡 bifunctor is written
simply as (,). You can apply it to two types and get their product type,
for instance:
(,) Int Bool ≅ (Int, Bool)

18.4 Exponential from Adjunction
The exponential object, or the function object 𝑎 ⇒ 𝑏, can also be defined through an adjunction.
In this case, we are dealing with objects in the same category, so the
two adjoint functors are endofunctors. The left (endo-)functor 𝐿, when
acting on object 𝑧, produces 𝑧 × 𝑎. It’s a functor that corresponds to
taking a product with some fixed 𝑎.
The right (endo-)functor 𝑅, when acting on 𝑏 produces the function
object 𝑎 ⇒ 𝑏 (or 𝑏 𝑎 ). Again, 𝑎 is fixed. The adjunction between these
two functors is often written as:
− × 𝑎 ⊣ (−)𝑎
Notice that the 𝑒𝑣𝑎𝑙 morphism1 is nothing else but the counit of this
adjunction:
(𝑎 ⇒ 𝑏) × 𝑎 → 𝑏
where:
(𝑎 ⇒ 𝑏) × 𝑎 = (𝐿 ∘ 𝑅)𝑏
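For this adjunction, the hom-set isomorphism is just currying and uncurrying, and the counit is function application. A sketch in Scala, for a fixed type A (the names mirror the Adjunction trait above):

def leftAdjunct[Z, A, B](f: ((Z, A)) => B): Z => (A => B) =
  z => a => f((z, a))

def rightAdjunct[Z, A, B](f: Z => (A => B)): ((Z, A)) => B =
  { case (z, a) => f(z)(a) }

// The counit is eval: apply the function to the argument.
def eval[A, B](p: (A => B, A)): B = p._1(p._2)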
I have previously mentioned that a universal construction defines a
unique object, up to isomorphism. That’s why we have “the” product
1 See ch.9 on universal construction.
and “the” exponential. This property translates to adjunctions as well:
if a functor has an adjoint, this adjoint is unique up to isomorphism.
18.5 Challenges
1. Derive the naturality square for 𝜓 , the transformation between
the two (contravariant) functors:
𝑎 → 𝐂(𝐿𝑎, 𝑏)
𝑎 → 𝐃(𝑎, 𝑅𝑏)
19
Free/Forgetful Adjunctions
I like to think of 𝐌𝐨𝐧 as having split personality. On the one hand,
it’s a bunch of sets with multiplication and unit elements. On the other
hand, it’s a category with featureless objects whose only structure is
encoded in morphisms that go between them. Every set-function that
preserves multiplication and unit gives rise to a morphism in 𝐌𝐨𝐧.
Things to keep in mind:
• There may be many monoids that map to the same set, and
• There are fewer (or at most as many as) monoid morphisms than
there are functions between their underlying sets.
Monoids 𝑚1 and 𝑚2 have the same underlying set. There are more functions between the underlying sets of 𝑚2 and 𝑚3 than there are morphisms between them.
The functor 𝐹 that’s the left adjoint to the forgetful functor 𝑈 is the
free functor that builds free monoids from their generator sets. The ad-
junction follows from the free monoid universal construction we’ve dis-
cussed before.1
1 See ch.13 on free monoids.
In terms of hom-sets, we can write this adjunction as:
𝐌𝐨𝐧(𝐹 𝑥, 𝑚) ≅ 𝐒𝐞𝐭(𝑥, 𝑈 𝑚)
This adjunction shows that the embedding of 𝑥, which is given by a function
from 𝐒𝐞𝐭(𝑥, 𝑈 𝑚) on the right, uniquely determines the embedding of
monoids on the left, and vice versa.
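For lists, the two directions of this isomorphism can be sketched in Scala (reusing the Monoid trait from the chapter on free monoids; the names mirror the Adjunction trait):

// A monoid morphism out of the free monoid restricts to the generators.
def leftAdjunct[X, M](h: List[X] => M): X => M =
  x => h(List(x))

// A function on generators extends uniquely to a monoid morphism.
def rightAdjunct[X, M](p: X => M)(implicit m: Monoid[M]): List[X] => M =
  xs => xs.map(p).foldLeft(m.mempty)(m.mappend)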
In Haskell, the list data structure is a free monoid (with some caveats:
see Dan Doel’s blog post2 ). A list type [a] is a free monoid with the type
a representing the set of generators. For instance, the type [Char] con-
tains the unit element — the empty list [] — and the singletons like
['a'], ['b'] — the generators of the free monoid. The rest is generated
by applying the “product.” Here, the product of two lists simply appends
one to another. Appending is associative and unital (that is, there is a
neutral element — here, the empty list). A free monoid generated by
Char is nothing but the set of all strings of characters from Char. It’s
called String in Haskell:
type String = [Char]
Another interesting free monoid is generated by a singleton set: it's the monoid of natural numbers with addition, which we can model as lists of units, [()], with the isomorphism given by:
toNat :: [()] -> Int
toNat = length
toInt :: Int -> [()]
toInt n = replicate n ()
For simplicity I used the type Int rather than Natural, but the idea is the
same. The function replicate creates a list of length n pre-filled with a
given value — here, the unit.
I tend to think of morphisms in an arbitrary category as being lossy
too. It’s just a mental model, but it’s a useful one, especially when think-
ing of adjunctions — in particular those in which one of the categories
is 𝐒𝐞𝐭.
Formally, we can only speak of morphisms that are invertible (iso-
morphisms) or non-invertible. It’s that latter kind that may be thought
of as lossy. There is also a notion of mono- and epi- morphisms that
generalize the idea of injective (non-collapsing) and surjective (cover-
ing the whole codomain) functions, but it’s possible to have a morphism
that is both mono and epi, and which is still non-invertible.
In the Free ⊣ Forgetful adjunction, we have the more constrained
category 𝐂 on the left, and a less constrained category 𝐃 on the right.
Morphisms in 𝐂 are “fewer” because they have to preserve some addi-
tional structure. In the case of 𝐌𝐨𝐧, they have to preserve multiplication
and unit. Morphisms in 𝐃 don’t have to preserve as much structure, so
there are “more” of them.
When we apply a forgetful functor 𝑈 to an object 𝑐 in 𝐂, we think
of it as revealing the “internal structure” of 𝑐. In fact, if 𝐃 is 𝐒𝐞𝐭 we
think of 𝑈 as defining the internal structure of 𝑐 — its underlying set.
(In an arbitrary category, we can’t talk about the internals of an object
other than through its connections to other objects, but here we are just
hand-waving.)
If we map two objects 𝑐 ′ and 𝑐 using 𝑈 , we expect that, in gen-
eral, the mapping of the hom-set 𝐂(𝑐 ′ , 𝑐) will cover only a subset of
𝐃(𝑈 𝑐 ′ , 𝑈 𝑐). That’s because morphisms in 𝐂(𝑐 ′ , 𝑐) have to preserve the
additional structure, whereas the ones in 𝐃(𝑈 𝑐 ′ , 𝑈 𝑐) don’t.
But since an adjunction is defined as an isomorphism of particular hom-
sets, we have to be very picky with our selection of 𝑐 ′ . In the adjunction,
𝑐 ′ is picked not from just anywhere in 𝐂, but from the (presumably
smaller) image of the free functor 𝐹 :
𝐂(𝐹 𝑑, 𝑐) ≅ 𝐃(𝑑, 𝑈 𝑐)
The image of 𝐹 must therefore consist of objects that have lots of mor-
phisms going to an arbitrary 𝑐. In fact, there has to be as many structure-
preserving morphisms from 𝐹 𝑑 to 𝑐 as there are non-structure preserv-
ing morphisms from 𝑑 to 𝑈 𝑐. It means that the image of 𝐹 must con-
sist of essentially structure-free objects (so that there is no structure
to preserve by morphisms). Such “structure-free” objects are called free
objects.
In the monoid example, a free monoid has no structure other than what’s
generated by unit and associativity laws. Other than that, all multipli-
cations produce brand new elements.
In a free monoid, 2 ∗ 3 is not 6 — it’s a new element [2, 3]. Since there
is no identification of [2, 3] and 6, a morphism from this free monoid
to any other monoid 𝑚 is allowed to map them separately. But it’s also
okay for it to map both [2, 3] and 6 (their product) to the same element
of 𝑚. Or to identify [2, 3] and 5 (their sum) in an additive monoid, and
so on. Different identifications give you different monoids.
This leads to another interesting intuition: Free monoids, instead
of performing the monoidal operation, accumulate the arguments that
were passed to it. Instead of multiplying 2 and 3 they remember 2 and
3 in a list. The advantage of this scheme is that we don’t have to specify
what monoidal operation we will use. We can keep accumulating argu-
ments, and only at the end apply an operator to the result. And it’s then
that we can choose which operator to apply. We can add the numbers, or
multiply them, or perform addition modulo 2, and so on. A free monoid
separates the creation of an expression from its evaluation. We’ll see
this idea again when we talk about algebras.
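A tiny Scala illustration of this separation (the numbers are arbitrary):

val accumulated = List(2, 3, 4)                // the free-monoid value: arguments remembered, not combined
val asProduct = accumulated.foldLeft(1)(_ * _) // interpret with multiplication: 24
val asSum = accumulated.foldLeft(0)(_ + _)     // interpret with addition: 9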
This intuition generalizes to other, more elaborate free construc-
tions. For instance, we can accumulate whole expression trees before
evaluating them. The advantage of this approach is that we can trans-
form such trees to make the evaluation faster or less memory consum-
ing. This is, for instance, done in implementing matrix calculus, where
eager evaluation would lead to lots of allocations of temporary arrays
to store intermediate results.
19.2 Challenges
1. Consider a free monoid built from a singleton set as its generator.
Show that there is a one-to-one correspondence between mor-
phisms from this free monoid to any monoid 𝑚, and functions
from the singleton set to the underlying set of 𝑚.
20
Monads: Programmer’s Definition
• sealing ducts
• fixing CO2 scrubbers on board Apollo 13
• wart treatment
• fixing Apple’s iPhone 4 dropped call issue
• making a prom dress
• building a suspension bridge
Now imagine that you didn’t know what duct tape was and you were
trying to figure it out based on this list. Good luck!
So I’d like to add one more item to the collection of “the monad is
like…” clichés: The monad is like duct tape. Its applications are widely
diverse, but its principle is very simple: it glues things together. More
precisely, it composes things.
This partially explains the difficulties a lot of programmers, especially those coming from an imperative background, have with understanding the monad. The problem is that we are not used to thinking
of programming in terms of function composition. This is understand-
able. We often give names to intermediate values rather than pass them
directly from function to function. We also inline short segments of
glue code rather than abstract them into helper functions. Here’s an
imperative-style implementation of the vector-length function in C:
#include <math.h>

double vlen(double * v) {
double d = 0.0;
int n;
for (n = 0; n < 3; ++n)
d += v[n] * v[n];
return sqrt(d);
}
Compare this with the (stylized) Haskell version that makes function
composition explicit:
vlen = sqrt . sum . fmap (flip (^) 2)

import math._
// Scala rendering (reconstructed; the extracted text lost the body):
val vlen: Seq[Double] => Double =
  ((v: Seq[Double]) => v.map(pow(_, 2)).sum) andThen (sqrt(_))
(Here, to make things even more cryptic, I partially applied the expo-
nentiation operator (^) by setting its second argument to 2.)
I’m not arguing that Haskell’s point-free style is always better, just
that function composition is at the bottom of everything we do in pro-
gramming. And even though we are effectively composing functions,
Haskell does go to great lengths to provide imperative-style syntax called
the do notation for monadic composition. We’ll see its use later. But first,
let me explain why we need monadic composition in the first place.
20.1 The Kleisli Category
We have previously arrived at the Writer monad by embellishing regular functions: we paired their return values with a log, a value taken from some monoid. Such an embellishment is expressed by a functor:

newtype Writer w a = Writer (a, w)

The embellished functions, or Kleisli arrows, are functions of the type:

a -> Writer w b

A => Writer[W, B]
A Kleisli arrow from 𝑎 to 𝑏 is implemented as a function returning 𝑚 𝑏 in the original category 𝐂. It’s important to keep in mind that we treat a Kleisli arrow in 𝐊 as a morphism between 𝑎 and 𝑏, and not between 𝑎 and 𝑚 𝑏.
In our example, 𝑚 was specialized to Writer w, for some fixed monoid
w.
Kleisli arrows form a category only if we can define proper compo-
sition for them. If there is a composition, which is associative and has
an identity arrow for every object, then the functor 𝑚 is called a monad,
and the resulting category is called the Kleisli category.
In Haskell, Kleisli composition is defined using the fish operator >=>,
and the identity arrow is a polymorphic function called return. Here’s
the definition of a monad using Kleisli composition:
trait Monad[M[_]] {
  def >=>[A, B, C](m1: A => M[B], m2: B => M[C]): A => M[C]
  def pure[A](a: A): M[A]
}
Keep in mind that there are many equivalent ways of defining a monad,
and that this is not the primary one in the Haskell ecosystem. I like it
for its conceptual simplicity and the intuition it provides, but there are
other definitions that are more convenient when programming. We’ll
talk about them momentarily.
In this formulation, monad laws are very easy to express. They cannot be enforced in Haskell, but they can be used for equational reasoning. They are simply the standard composition laws for the Kleisli category:
(f >=> g) >=> h = f >=> (g >=> h) -- associativity
return >=> f = f -- left unit
f >=> return = f -- right unit
This kind of a definition also expresses what a monad really is: it’s a
way of composing embellished functions. It’s not about side effects or
state. It’s about composition. As we’ll see later, embellished functions
may be used to express a variety of effects or state, but that’s not what
the monad is for. The monad is the sticky duct tape that ties one end of
an embellished function to the other end of an embellished function.
Going back to our Writer example: The logging functions (the Kleisli
arrows for the Writer functor) form a category because Writer is a monad:
implicit def writerMonad[W: Monoid] =
  new Monad[Writer[W, ?]] { // uses the kind-projector plugin
    def >=>[A, B, C](m1: A => Writer[W, B], m2: B => Writer[W, C])
        : A => Writer[W, C] = a => {
      val Writer((b, w1)) = m1(a)
      val Writer((c, w2)) = m2(b)
      Writer(c, Monoid[W].combine(w1, w2))
    }
    def pure[A](a: A) =
      Writer(a, Monoid[W].empty)
  }
object kleisliSyntax {
//allows us to use >=> as an infix operator
implicit class MonadOps[M[_], A, B]
(m1: A => M[B]) {
def >=>[C](m2: B => M[C])
(implicit m: Monad[M]): A => M[C] = {
m.>=>(m1, m2)
}
}
}
Monad laws for Writer w are satisfied as long as monoid laws for w are satisfied (they can’t be enforced in Haskell either).
There’s a useful Kleisli arrow defined for the Writer monad called tell. Its sole purpose is to add its argument to the log:

tell :: w -> Writer w ()
tell s = Writer ((), s)
20.2 Fish Anatomy
When implementing the fish operator for different monads you quickly
realize that a lot of code is repeated and can be easily factored out. To
begin with, the Kleisli composition of two functions must return a func-
tion, so its implementation may as well start with a lambda taking an
argument of type a:
def >=>[A, B, C]
(f: A => M[B], g: B => M[C]) =
a => {...}
The only thing we can do with this argument is to pass it to f:

def >=>[A, B, C]
    (f: A => M[B], g: B => M[C]) =
  a => {
    val mb = f(a)
    ...
  }
At this point we have a value mb of type M[B], and the only sensible thing left is to combine it with g. This combination of unwrapping and function application is the bind operator:

(>>=) :: m a -> (a -> m b) -> m b

For every monad, instead of defining the fish operator, we may instead define bind. In fact, the standard Haskell definition of a monad uses bind:
class Monad m where
(>>=) :: m a -> (a -> m b) -> m b
return :: a -> m a
trait Monad[M[_]] {
  def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B]
  def pure[A](a: A): M[A]
}
object bindSyntax {
  // allows us to use flatMap as an infix operator
  implicit class MonadOps[A, W: Monoid](wa: Writer[W, A]) {
    def flatMap[B](f: A => Writer[W, B])
        : Writer[W, B] = wa match {
      case Writer((a, w1)) =>
        val Writer((b, w2)) = f(a)
        Writer(b, Monoid[W].combine(w1, w2))
    }
  }
}
There is a third, equivalent option: keep fmap as a separate operation (monads are functors, after all) and define a function that flattens a doubly embellished value. It’s called join:

join :: m (m a) -> m a
trait Monad[M[_]] extends Functor[M] {
  def flatten[A](mma: M[M[A]]): M[A]
  def pure[A](a: A): M[A]
}
20.3 The do Notation
One way of writing code using monads is to work with Kleisli arrows
— composing them using the fish operator. This mode of programming
is the generalization of the point-free style. Point-free code is compact
and often quite elegant. In general, though, it can be hard to under-
stand, bordering on cryptic. That’s why most programmers prefer to
give names to function arguments and intermediate values.
When dealing with monads it means favoring the bind operator over the fish operator. Bind takes a monadic value and returns a monadic value. The programmer may choose to give names to those values. But
that’s hardly an improvement. What we really want is to pretend that
we are dealing with regular values, not the monadic containers that en-
capsulate them. That’s how imperative code works — side effects, such
as updating a global log, are mostly hidden from view. And that’s what
the do notation emulates in Haskell.
You might be wondering then, why use monads at all? If we want
to make side effects invisible, why not stick to an imperative language?
The answer is that the monad gives us much better control over side
effects. For instance, the log in the Writer monad is passed from func-
tion to function and is never exposed globally. There is no possibility of
garbling the log or creating a data race. Also, monadic code is clearly
demarcated and cordoned off from the rest of the program.
The do notation is just syntactic sugar for monadic composition. On
the surface, it looks a lot like imperative code, but it translates directly
to a sequence of binds and lambda expressions.
For instance, take the example we used previously to illustrate the
composition of Kleisli arrows in the Writer monad. Using our current
definitions, it could be rewritten as:
process :: String -> Writer String [String]
process = upCase >=> toWords
This function turns all characters in the input string to upper case and
splits it into words, all the while producing a log of its actions.
In the do notation it would look like this:
process s = do
upStr <- upCase s
toWords upStr
Recall that upCase produces its result embellished with a log:

upCase :: String -> Writer String String
upCase s = Writer (map toUpper s, "upCase ")

The do block desugars into a bind and a lambda:

process s =
  upCase s >>= \upStr ->
    toWords upStr
Inside toWords, the call to tell logs the string "toWords ", followed by the call to return with the result of splitting the string upStr using words. Notice that words is a regular function working on strings. Inlining the call to tell, the whole computation can be written as:

process s = do
  upStr <- upCase s
  tell "toWords "
  return (words upStr)
Here, each line in the do block introduces a new nested bind in the
desugared code:
process s =
upCase s >>= \upStr ->
tell "toWords " >>= \() ->
return (words upStr)
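For comparison, here’s a Scala sketch of the same pipeline written with flatMap (my own example; it assumes the Writer machinery and bindSyntax defined earlier, plus a Monoid[String] instance in scope):

def upCase(s: String): Writer[String, String] =
  Writer(s.toUpperCase, "upCase ")

import bindSyntax._
def process(s: String): Writer[String, List[String]] =
  upCase(s).flatMap(upStr =>
    Writer(upStr.split("\\s+").toList, "toWords "))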
def words: String => List[String] =
_.split("\\s+").toList
object moreSyntax {
  // allows us to use >> as an infix operator
  implicit class MoreOps[A, W: Monoid](m: Writer[W, A])
      extends bindSyntax.MonadOps[A, W](m) {
    def >>[B](k: Writer[W, B]): Writer[W, B] =
      m.flatMap(_ => k)
  }
}
Since the result of tell is ignored, the bind that follows it can be replaced by the >> operator:

process s =
upCase s >>= \upStr ->
tell "toWords " >>
return (words upStr)
In general, do blocks consist of lines (or sub-blocks) that either use the
left arrow to introduce new names that are then available in the rest of
the code, or are executed purely for side-effects. Bind operators are im-
plicit between the lines of code. Incidentally, it is possible, in Haskell, to
replace the formatting in the do blocks with braces and semicolons. This
provides the justification for describing the monad as a way of overload-
ing the semicolon.
Notice that the nesting of lambdas and bind operators when desug-
aring the do notation has the effect of influencing the execution of the
rest of the do block based on the result of each line. This property can be
used to introduce complex control structures, for instance to simulate
exceptions.
Interestingly, the equivalent of the do notation has found its appli-
cation in imperative languages, C++ in particular. I’m talking about re-
sumable functions or coroutines. It’s not a secret that C++ futures form
a monad1 . It’s an example of the continuation monad, which we’ll dis-
1 https://bartoszmilewski.com/2014/02/26/c17-i-see-a-monad-in-your-future/
cuss shortly. The problem with continuations is that they are very hard
to compose. In Haskell, we use the do notation to turn the spaghetti
of “my handler will call your handler” into something that looks very
much like sequential code. Resumable functions make the same trans-
formation possible in C++. And the same mechanism can be applied to
turn the spaghetti of nested loops2 into list comprehensions or “gener-
ators,” which are essentially the do notation for the list monad. With-
out the unifying abstraction of the monad, each of these problems is
typically addressed by providing custom extensions to the language. In
Haskell, this is all dealt with through libraries.
2 https://bartoszmilewski.com/2014/04/21/getting-lazy-with-c/
21
Monads and Effects
Here is just a small sample of the problems that are traditionally solved by abandoning the purity of functions:
• Partiality: Computations that may not terminate
• Nondeterminism: Computations that may return many results
• Side effects: Computations that access/modify state
– Read-only state, or the environment
– Write-only state, or a log
– Read/write state
• Exceptions: Partial functions that may fail
• Continuations: Ability to save state of the program and then re-
store it on demand
• Interactive Input
• Interactive Output
What really is mind-blowing is that all these problems may be solved using the same clever trick: turning to embellished functions. Of course, the embellishment will be totally different in each case.
You have to realize that, at this stage, there is no requirement that
the embellishment be monadic. It’s only when we insist on composition
— being able to decompose a single embellished function into smaller
embellished functions — that we need a monad. Again, since each of the
embellishments is different, monadic composition will be implemented
differently, but the overall pattern is the same. It’s a very simple pattern:
composition that is associative and equipped with identity.
The next section is heavy on Haskell examples. Feel free to skim or
even skip it if you’re eager to get back to category theory or if you’re
already familiar with Haskell’s implementation of monads.
21.2 The Solution
First, let’s analyze the way we used the Writer monad. We started with a pure function that, for a given input, produced a certain output. We replaced this function with another function that embellished the original output by pairing it with a string. That was our solution to the logging problem.
We couldn’t stop there because, in general, we don’t want to deal
with monolithic solutions. We needed to be able to decompose one log-
producing function into smaller log-producing functions. It’s the com-
position of those smaller functions that led us to the concept of a monad.
What’s really amazing is that the same pattern of embellishing the
function return types works for a large variety of problems that nor-
mally would require abandoning purity. Let’s go through our list and
identify the embellishment that applies to each problem in turn.
21.2.1 Partiality
We modify the return type of every function that may not terminate by
turning it into a “lifted” type — a type that contains all values of the
original type plus the special “bottom” value ⊥. For instance, the Bool
type, as a set, would contain two elements: True and False. The lifted
Bool contains three elements. Functions that return the lifted Bool may
produce True or False, or execute forever.
The funny thing is that, in a lazy language like Haskell, a never-
ending function may actually return a value, and this value may be
passed to the next function. We call this special value the bottom. As
long as this value is not explicitly needed (for instance, to be pattern
matched, or produced as output), it may be passed around without stalling
the execution of the program. Because every Haskell function may be
potentially non-terminating, all types in Haskell are assumed to be lifted.
This is why we often talk about the category Hask of Haskell (lifted)
types and functions rather than the simpler 𝐒𝐞𝐭. It is not clear, though,
that Hask is a real category (see this Andrej Bauer post2 ).
21.2.2 Nondeterminism
If a function can return many different results, it may as well return
them all at once. Semantically, a non-deterministic function is equiva-
lent to a function that returns a list of results. This makes a lot of sense
in a lazy garbage-collected language. For instance, if all you need is one
value, you can just take the head of the list, and the tail will never be
evaluated. If you need a random value, use a random number generator
to pick the n-th element of the list. Laziness even allows you to return
an infinite list of results.
In the list monad — Haskell’s implementation of nondeterministic
computations — join is implemented as concat. Remember that join is
supposed to flatten a container of containers — concat concatenates a
list of lists into a single list. return creates a singleton list:
2 http://math.andrej.com/2016/08/06/hask-is-not-a-category/
351
The bind operator for the list monad is given by the general formula, fmap followed by join, which, in this case, gives:

xs >>= k = concat (fmap k xs)
You may think of k as the body of a loop: bind combines fmap, which applies the body of the loop to each element of the list, with join, which flattens the results. The do notation in the list monad can be used to replace complex nested loops.
My favorite example is the program that generates Pythagorean
triples — triples of positive integers that can form sides of right trian-
gles.
triples = do
z <- [1..]
x <- [1..z]
y <- [x..z]
guard (x^2 + y^2 == z^2)
return (x, y, z)
The first line tells us that z gets an element from an infinite list of posi-
tive numbers [1..]. Then x gets an element from the (finite) list [1..z]
of numbers between 1 and z. Finally y gets an element from the list of
numbers between x and z. We have three numbers 1 ⩽ 𝑥 ⩽ 𝑦 ⩽ 𝑧 at our
disposal. The function guard takes a Bool expression and returns a list
of units:
guard :: Bool -> [()]
guard True = [()]
guard False = []
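In Scala, an analogous guard for the List monad could look like this (a small sketch of mine):

def guard(b: Boolean): List[Unit] =
  if (b) List(()) else List()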
An empty list produced by guard short-circuits the rest of the computation for that particular combination of values. In Scala, the same program can be written as a for comprehension:

def triples = for {
z <- Stream.from(1)
x <- 1 to z
y <- x to z
if x * x + y * y == z * z
} yield (x, y, z)
This is just further syntactic sugar for the list monad (strictly speaking,
MonadPlus).
You might see similar constructs in other functional or imperative
languages under the guise of generators and coroutines.
21.2.3 Read-Only State
A function that has read-only access to some external environment can be turned into a pure function that takes that environment as an additional argument. The embellishment is captured by the Reader functor:

newtype Reader e a = Reader (e -> a)

together with the helper function:

runReader :: Reader e a -> e -> a
runReader (Reader f) e = f e
To implement bind for the Reader monad, we create an action that, given the environment e, first runs ra to extract a value a:

ra >>= k = Reader (\e ->
  let a = runReader ra e
  in ...)

We can then pass the a to the continuation k to get a new action rb:
To implement return we create an action that ignores the environment
and returns the unchanged value.
Putting it all together, after a few simplifications, we get the follow-
ing definition:
instance Monad (Reader e) where
  ra >>= k = Reader (\e -> runReader (k (runReader ra e)) e)
  return x = Reader (\e -> x)
21.2.4 Write-Only State
This is our old friend the Writer functor, with the embellishment being a pair containing a monoid value:

newtype Writer w a = Writer (a, w)

and the helper function:

runWriter :: Writer w a -> (a, w)
runWriter (Writer (a, w)) = (a, w)
21.2.5 State
Functions that have read/write access to state combine the embellish-
ments of the Reader and the Writer. You may think of them as pure
functions that take the state as an extra argument and produce a pair
value/state as a result: (a, s) -> (b, s). After currying, we get them
into the form of Kleisli arrows a -> (s -> (b, s)), with the embellish-
ment abstracted in the State functor:

newtype State s a = State (s -> (a, s))

case class State[S, A](run: S => (A, S))
Different initial states may not only produce different results, but also
different final states.
The implementation of bind for the State monad is very similar to
that of the Reader monad, except that care has to be taken to pass the
correct state at each step:
sa >>= k = State (\s -> let (a, s') = runState sa s
sb = k a
in runState sb s')
There are also two helper Kleisli arrows that may be used to manipulate
the state. One of them retrieves the state for inspection:
get :: State s s
get = State (\s -> (s, s))

The other one modifies it by replacing the state:

put :: s -> State s ()
put s' = State (\s -> ((), s'))
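Here’s a self-contained Scala sketch of the same machinery (my own example; it re-declares a minimal State with flatMap just for illustration):

final case class State[S, A](run: S => (A, S)) {
  def flatMap[B](k: A => State[S, B]): State[S, B] =
    State { s =>
      val (a, s1) = run(s)   // run this action, thread the new state
      k(a).run(s1)           // feed the result and the state to the next action
    }
}

def get[S]: State[S, S] = State(s => (s, s))
def put[S](s1: S): State[S, Unit] = State(_ => ((), s1))

// increment a counter stored in the state, returning the old value
val tick: State[Int, Int] =
  get[Int].flatMap(n => put(n + 1).flatMap(_ => State(s => (n, s))))

// tick.run(41) == (41, 42)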
21.2.6 Exceptions
An imperative function that throws an exception is really a partial func-
tion — it’s a function that’s not defined for some values of its argu-
ments. The simplest implementation of exceptions in terms of pure to-
tal functions uses the Maybe functor. A partial function is extended to a
total function that returns Just a whenever it makes sense, and Nothing
when it doesn’t. If we want to also return some information about the
cause of the failure, we can use the Either functor instead (with the first
type fixed, for instance, to String).
Here’s the Monad instance for Maybe:
implicit val optionMonad = new Monad[Option] {
  def flatMap[A, B](ma: Option[A])(k: A => Option[B]): Option[B] =
    ma match {
      case None => None
      case Some(a) => k(a)
    }
  def pure[A](a: A): Option[A] = Some(a)
}
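A quick usage sketch (my example): chaining two partial computations, where flatMap short-circuits on None:

def safeDiv(x: Int, y: Int): Option[Int] =
  if (y == 0) None else Some(x / y)

def safeRoot(x: Int): Option[Double] =
  if (x < 0) None else Some(math.sqrt(x.toDouble))

val r: Option[Double] =
  safeDiv(16, 4).flatMap(safeRoot)   // Some(2.0)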
21.2.7 Continuations
It’s the “Don’t call us, we’ll call you!” situation you may experience after
a job interview. Instead of getting a direct answer, you are supposed to
provide a handler, a function to be called with the result. This style of
programming is especially useful when the result is not known at the
time of the call because, for instance, it’s being evaluated by another
thread or delivered from a remote web site. A Kleisli arrow in this case
returns a function that accepts a handler, which represents “the rest of
the computation”:

newtype Cont r a = Cont ((a -> r) -> r)

case class Cont[R, A](run: (A => R) => R)
The handler a -> r, when it’s eventually called, produces the result of
type r, and this result is returned at the end. A continuation is parameterized by the result type. (In practice, this is often some kind of status
indicator.)
There is also a helper function for executing the action returned by
the Kleisli arrow. It takes the handler and passes it to the continuation:

runCont :: Cont r a -> (a -> r) -> r
runCont (Cont k) h = k h

Composing continuations is the trickiest part. Here is the type of bind, written out in full:
def flatMap[A, B]
: ((A => R) => R) =>
(A => (B => R) => R) =>
((B => R) => R)
Our goal is to create a function that takes the handler (b -> r) and
produces the result r. So that’s our starting point:
ka >>= kab = Cont (\hb -> ...)
Inside the lambda, we want to call the function ka with the appropriate
handler that represents the rest of the computation. We’ll implement
this handler as a lambda:

ka >>= kab = Cont (\hb -> runCont ka (\a -> ...))
In this case, the rest of the computation involves first calling kab with
a, and then passing hb to the resulting action kb:
runCont(ka) { a =>
val kb = kab(a)
runCont(kb)(hb)
}
As you can see, continuations are composed inside out. The final handler
hb is called from the innermost layer of the computation. Here’s the full
instance:
instance Monad (Cont r) where
  ka >>= kab = Cont (\hb -> runCont ka (\a -> runCont (kab a) hb))
  return a = Cont (\ha -> ha a)
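To see it in action, here’s a small Scala sketch of a continuation-passing value and its bind (my own example):

final case class Cont[R, A](run: (A => R) => R) {
  def flatMap[B](kab: A => Cont[R, B]): Cont[R, B] =
    Cont(hb => run(a => kab(a).run(hb)))
}

def pure[R, A](a: A): Cont[R, A] = Cont(ha => ha(a))

// a computation that "returns" 3, then doubles it
val c: Cont[String, Int] = pure[String, Int](3).flatMap(n => pure(n * 2))
// supplying the final handler triggers the whole pipeline
val out: String = c.run(n => "result: " + n)   // "result: 6"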
21.2.8 Interactive Input
The trick for interactive input is to return the yet-unknown value inside an opaque container. The box is defined using the special built-in IO functor. In our example, getChar could be declared as a Kleisli arrow:

getChar :: () -> IO Char
main :: IO ()

Conceptually, though, you may treat it as a Kleisli arrow taking the unit as input:

main :: () -> IO ()
From that perspective, a Haskell program is just one big Kleisli arrow
in the IO monad. You can compose it from smaller Kleisli arrows using
monadic composition. It’s up to the runtime system to do something
with the resulting IO object (also called IO action).
Notice that the arrow itself is a pure function — it’s pure functions
all the way down. The dirty work is relegated to the system. When it
finally executes the IO action returned from main, it does all kinds of
nasty things like reading user input, modifying files, printing obnox-
ious messages, formatting a disk, and so on. The Haskell program never
dirties its hands (well, except when it calls unsafePerformIO, but that’s
a different story).
Of course, because Haskell is lazy, main returns almost immediately,
and the dirty work begins right away. It’s during the execution of the IO
action that the results of pure computations are requested and evaluated
on demand. So, in reality, the execution of a program is an interleaving
of pure (Haskell) and dirty (system) code.
There is an alternative interpretation of the IO monad that is even
more bizarre but makes perfect sense as a mathematical model. It treats
the whole Universe as an object in a program. Notice that, conceptually,
the imperative model treats the Universe as an external global object,
so procedures that perform I/O have side effects by virtue of interact-
ing with that object. They can both read and modify the state of the
Universe.
We already know how to deal with state in functional programming
— we use the state monad. Unlike simple state, however, the state of
the Universe cannot be easily described using standard data structures.
But we don’t have to, as long as we never directly interact with it. It’s
enough that we assume that there exists a type RealWorld and, by some
miracle of cosmic engineering, the runtime is able to provide an object
of this type. An IO action is just a function:
type IO a = RealWorld -> (a, RealWorld)
However, >=> and return for the IO monad have to be built into the
language.
21.2.9 Interactive Output
The same IO monad is used to encapsulate interactive output. For instance, putStr takes a String and produces an IO action:

putStr :: String -> IO ()
main :: IO ()
main = do
putStr "Hello "
putStr "World!"
In this interpretation, the actions are sequenced by threading the Universe through them: the action that prints “World!” receives, as input, the Universe in which
“Hello ” is already on the screen. It outputs a new Universe, with “Hello
World!” on the screen.
21.3 Conclusion
Of course I have just scratched the surface of monadic programming.
Monads not only accomplish, with pure functions, what normally is
done with side effects in imperative programming, but they also do
it with a high degree of control and type safety. They are not with-
out drawbacks, though. The major complaint about monads is that they
don’t easily compose with each other. Granted, you can combine most
of the basic monads using the monad transformer library. It’s relatively
easy to create a monad stack that combines, say, state with exceptions,
but there is no formula for stacking arbitrary monads together.
22
Monads Categorically
Algebras are based on the ability to form expressions. Here is an example of an expression, a polynomial in the variable 𝑥:

𝑥² + 2𝑥 + 1

Such an expression is naturally represented as a tree, with operators in the nodes and variables and constants in the leaves.
Trees are containers so, more generally, an expression is a container for
storing variables. In category theory, we represent containers as endo-
functors. If we assign the type 𝑎 to the variable 𝑥, our expression will
have the type 𝑚 𝑎, where 𝑚 is an endofunctor that builds expression
trees. (Nontrivial branching expressions are usually created using re-
cursively defined endofunctors.)
What’s the most common operation that can be performed on an expression? It’s substitution: replacing variables with expressions. For instance, in our example, we could replace 𝑥 with 𝑦 − 1 to get:
(𝑦 − 1)² + 2(𝑦 − 1) + 1
If you want to perform a substitution in an expression of type 𝑚 𝑎, using a function 𝑎 → 𝑚 𝑏 that replaces each variable with a new expression, you end up with exactly the signature of monadic bind:

𝑚 𝑎 → (𝑎 → 𝑚 𝑏) → 𝑚 𝑏
In category theory, these operations go by different names: 𝜇 for join and 𝜂 for return. Both join and return are polymorphic functions, so we can guess that they correspond to natural transformations.
Therefore, in category theory, a monad is defined as an endofunctor
𝑇 equipped with a pair of natural transformations 𝜇 and 𝜂.
𝜇 is a natural transformation from the square of the functor 𝑇 2 back
to 𝑇 . The square is simply the functor composed with itself, 𝑇 ∘ 𝑇 (we
can only do this kind of squaring for endofunctors).
𝜇 ∷ 𝑇2 → 𝑇
𝜂 ∷ 𝐼 → 𝑇

Here, 𝐼 is the identity endofunctor. The component of 𝜂 at an object 𝑎 is the morphism:

𝜂𝑎 ∷ 𝑎 → 𝑇 𝑎
Kleisli composition can be expressed in terms of these natural transformations. Given two Kleisli arrows:

𝑓 ∷ 𝑎 → 𝑇 𝑏
𝑔 ∷ 𝑏 → 𝑇 𝑐

their composition first lifts 𝑔, then flattens the doubly embellished result:

(f >=> g) ==
  (flatten compose fmap[B, T[C]](g) compose f)

or, in components:

f >=> g = \a -> join (fmap g (f a))
Things are a little tricky because we are composing natural trans-
formations and functors. So a little refresher on horizontal composition
is in order. For instance, 𝑇 3 can be seen as a composition of 𝑇 after 𝑇 2 .
We can apply to it the horizontal composition of two natural transfor-
mations:
𝐼𝑇 ∘ 𝜇

and get 𝑇 ∘ 𝑇 , which can be further reduced to 𝑇 by applying 𝜇. Here 𝐼𝑇 is the identity natural transformation from 𝑇 to 𝑇 . You will often see the notation for this type of horizontal composition 𝐼𝑇 ∘ 𝜇 shortened to 𝑇 ∘ 𝜇. This notation is unambiguous because it makes no sense to compose a functor with a natural transformation, therefore 𝑇 must mean 𝐼𝑇 in this context.
We can also draw the diagram in the (endo-) functor category [𝐂, 𝐂]:
Alternatively, we can treat 𝑇 3 as the composition of 𝑇 2 ∘ 𝑇 and apply
𝜇 ∘ 𝑇 to it. The result is also 𝑇 ∘ 𝑇 which, again, can be reduced to 𝑇
using 𝜇. We require that the two paths produce the same result:

𝜇 ∘ (𝑇 ∘ 𝜇) = 𝜇 ∘ (𝜇 ∘ 𝑇 )

Similarly, two unit laws relate 𝜇 and 𝜂:

𝜇 ∘ (𝑇 ∘ 𝜂) = id𝑇 = 𝜇 ∘ (𝜂 ∘ 𝑇 )

You can convince yourself that these laws guarantee that the composition of Kleisli arrows indeed satisfies the laws of a category.
The similarities between a monad and a monoid are striking. We have multiplication 𝜇, unit 𝜂, associativity, and unit laws. But our definition of a monoid is too narrow to describe a monad as a monoid. So let’s generalize the notion of a monoid.
class Monoid m where
  mappend :: m -> m -> m
  mempty :: m

trait Monoid[M] {
def combine(x: M, y: M): M
def empty: M
}
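For instance (my example, using the trait above), strings form a monoid under concatenation:

implicit val stringMonoid: Monoid[String] = new Monoid[String] {
  def combine(x: String, y: String): String = x + y
  def empty: String = ""
}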
The binary operation mappend must be associative and unital (i.e., mul-
tiplication by the unit mempty is a no-op).
Notice that, in Haskell, the definition of mappend is curried. It can be
interpreted as mapping every element of m to a function:
mappend :: m -> (m -> m)
We can also uncurry mappend, turning it into a function of a pair:

mu :: (m, m) -> m

In this form, it readily generalizes to the categorical product:

𝜇 ∷ 𝑚 × 𝑚 → 𝑚

Similarly, mempty can be replaced by a function from the unit type:

eta :: () -> m

which generalizes to a morphism from the terminal object 𝑡:

𝜂 ∷ 𝑡 → 𝑚

This lets us pick the unit “element” without having to talk about elements.
Unlike in our previous definition of a monoid as a single-object cat-
egory, monoidal laws here are not automatically satisfied — we have to
impose them. But in order to formulate them we have to establish the
monoidal structure of the underlying categorical product itself. Let’s
recall how monoidal structure works in Haskell first.
We start with associativity. In Haskell, the corresponding equational
law is:
mu (x, mu (y, z)) = mu (mu (x, y), z)
In point-free notation, the two sides of this law become:

mu compose bimap(identity)(mu)
mu compose bimap(mu)(identity)

mu . bimap id mu = mu . bimap mu id

mu.compose(bimap(identity)(mu)) ==
  mu.compose(bimap(mu)(identity))

Strictly speaking, though, the two sides act on differently nested pairs: the left on 𝑚 × (𝑚 × 𝑚) and the right on (𝑚 × 𝑚) × 𝑚.
On the other hand, the two nestings of pairs are isomorphic. There is an
invertible function called the associator that converts between them:
def alpha[A, B, C]: (((A, B), C)) => ((A, (B, C))) = {
case ((x, y), z) => (x, (y, z))
}
With the help of the associator, we can write the point-free associativity
law for mu:
mu.compose(
bimap(identity)(mu) compose alpha) ==
mu.compose(bimap(mu)(identity))
We can apply a similar trick to unit laws which, in the new notation,
take the form:
mu (eta (), x) = x
mu (x, eta ()) = x

mu(eta(), x) == x
mu(x, eta()) == x

Once again, the types on the two sides don’t quite match, and we need two isomorphisms to mediate:

mu.compose(bimap(eta)(identity[M]))(((), x)) ==
  lambda(((), x))

mu.compose(bimap(identity[M])(eta))((x, ())) ==
  rho((x, ()))
The isomorphisms lambda and rho are called the left and right unitor,
respectively. They witness the fact that the unit () is the identity of the
Cartesian product up to isomorphism:
lambda :: ((), a) -> a
lambda ((), x) = x

rho :: (a, ()) -> a
rho (x, ()) = x

The point-free versions of the unit laws are then:
mu.compose(bimap(eta)(identity[M])) == lambda
mu.compose(bimap(identity[M])(eta)) == rho
We have formulated point-free monoidal laws for mu and eta using the
fact that the underlying Cartesian product itself acts like a monoidal
multiplication in the category of types. Keep in mind though that the
associativity and unit laws for the Cartesian product are valid only up
to isomorphism.
It turns out that these laws can be generalized to any category with
products and a terminal object. Categorical products are indeed asso-
ciative up to isomorphism and the terminal object is the unit, also up
to isomorphism. The associator and the two unitors are natural isomor-
phisms. The laws can be represented by commuting diagrams.
Notice that, because the product is a bifunctor, it can lift a pair of mor-
phisms — in Haskell this was done using bimap.
We could stop here and say that we can define a monoid on top of
any category with categorical products and a terminal object. As long
as we can pick an object 𝑚 and two morphisms 𝜇 and 𝜂 that satisfy
monoidal laws, we have a monoid. But we can do better than that. We
don’t need a full-blown categorical product to formulate the laws for 𝜇
and 𝜂. Recall that a product is defined through a universal construction
that uses projections. We haven’t used any projections in our formula-
tion of monoidal laws.
A bifunctor that behaves like a product without being a product is
called a tensor product, often denoted by the infix operator ⊗. A defini-
tion of a tensor product in general is a bit tricky, but we won’t worry
about it. We’ll just list its properties — the most important being asso-
ciativity up to isomorphism.
Similarly, we don’t need the object 𝑡 to be terminal. We never used
its terminal property — namely, the existence of a unique morphism
from any object to it. What we require is that it works well in concert
with the tensor product. Which means that we want it to be the unit of
the tensor product, again, up to isomorphism. Let’s put it all together:
A monoidal category is a category 𝐂 equipped with a bifunctor
called the tensor product:
⊗∷𝐂×𝐂→𝐂
and a distinct object 𝑖 called the unit object, together with three natural
isomorphisms called, respectively, the associator and the left and right
unitors:
𝛼𝑎𝑏𝑐 ∷ (𝑎 ⊗ 𝑏) ⊗ 𝑐 → 𝑎 ⊗ (𝑏 ⊗ 𝑐)
𝜆𝑎 ∷ 𝑖 ⊗ 𝑎 → 𝑎
𝜌𝑎 ∷ 𝑎 ⊗ 𝑖 → 𝑎
22.2 Monoid in a Monoidal Category
A monoid in a monoidal category is an object 𝑚 together with two morphisms:
𝜇 ∷𝑚⊗𝑚 →𝑚
𝜂∷𝑖→𝑚
These morphisms have to satisfy associativity and unit laws, which can
be expressed in terms of the following commuting diagrams:
Notice that it’s essential that the tensor product be a bifunctor because
we need to lift pairs of morphisms to form products such as 𝜇 ⊗ id or
𝜂 ⊗ id. These diagrams are just a straightforward generalization of our
previous results for categorical products.
22.3 Monads as Monoids
Monoidal structure turns out to be exactly what we need to describe a monad. Recall the signature of bimap, which lifts a pair of morphisms through the tensor product:

(𝑎 → 𝑎′) → (𝑏 → 𝑏′) → (𝑎 ⊗ 𝑏 → 𝑎′ ⊗ 𝑏′)

If you replace objects by endofunctors, arrows by natural transformations, and tensor products by composition, you get:
(𝐹 → 𝐹 ′ ) → (𝐺 → 𝐺 ′ ) → (𝐹 ∘ 𝐺 → 𝐹 ′ ∘ 𝐺 ′ )
We also have at our disposal the identity endofunctor 𝐼 , which can serve
as the identity for endofunctor composition — our new tensor prod-
uct. Moreover, functor composition is associative. In fact associativity
and unit laws are strict — there’s no need for the associator or the two
unitors. So endofunctors form a strict monoidal category with functor
composition as tensor product.
What’s a monoid in this category? It’s an object — that is an endo-
functor 𝑇 ; and two morphisms — that is natural transformations:
𝜇 ∷𝑇 ∘𝑇 →𝑇
𝜂∷𝐼 →𝑇
These must satisfy the monoidal laws, and those are exactly the monad laws we’ve seen before. Now you understand the famous quote from Saunders Mac Lane:

    All told, a monad in X is just a monoid in the category of endofunctors of X.
22.4 Monads from Adjunctions
An adjunction1, 𝐿 ⊣ 𝑅, is a pair of functors going back and forth be-
tween two categories 𝐂 and 𝐃. There are two ways of composing them
giving rise to two endofunctors, 𝑅 ∘ 𝐿 and 𝐿 ∘ 𝑅. As per an adjunction,
these endofunctors are related to identity functors through two natural
transformations called unit and counit:
𝜂 ∷ 𝐼𝐃 → 𝑅 ∘ 𝐿
𝜀 ∷ 𝐿 ∘ 𝑅 → 𝐼𝐂
Immediately we see that the unit of an adjunction looks just like the unit
of a monad. It turns out that the endofunctor 𝑅 ∘ 𝐿 is indeed a monad.
All we need is to define the appropriate μ to go with the η. That’s a
natural transformation between the square of our endofunctor and the
endofunctor itself or, in terms of the adjoint functors:
𝑅∘𝐿∘𝑅∘𝐿→𝑅∘𝐿
And, indeed, we can use the counit to collapse the 𝐿 ∘ 𝑅 in the middle.
The exact formula for 𝜇 is given by the horizontal composition:
𝜇 =𝑅∘𝜀∘𝐿
Monadic laws follow from the identities satisfied by the unit and counit
of the adjunction and the interchange law.
We don’t see a lot of monads derived from adjunctions in Haskell,
because an adjunction usually involves two categories. However, the
1 See ch.18 on adjunctions.
definition of the exponential, or the function object, is an exception. Here are the two endofunctors that form this adjunction:

𝐿 𝑧 = 𝑧 × 𝑠
𝑅 𝑏 = 𝑠 ⇒ 𝑏

You may recognize their composition as the familiar state monad:

𝑅 (𝐿 𝑧) = 𝑠 ⇒ (𝑧 × 𝑠)
Let’s also translate the adjunction to Haskell. The left functor is the
product functor:
newtype Prod s a = Prod (a, s)

case class Prod[S, A](run: (A, S))

and the right functor is the reader functor:

newtype Reader s a = Reader (s -> a)

case class Reader[S, A](run: S => A)
instance Adjunction (Prod s) (Reader s) where
counit (Prod (Reader f, s)) = f s
unit a = Reader (\s -> Prod (a, s))
You can easily convince yourself that the composition of the reader
functor after the product functor is indeed equivalent to the state func-
tor:
newtype State s a = State (s -> (a, s))
runState :: State s a -> s -> (a, s)
runState (State f) s = f s
The 𝜇 of this monad, join for State, comes from the general formula:

𝜇 = 𝑅 ∘ 𝜀 ∘ 𝐿
In other words, we need to sneak the counit 𝜀 across one level of the
reader functor. We can’t just call fmap directly, because the compiler
would pick the one for the State functor, rather than the Reader functor.
But recall that fmap for the reader functor is just left function composi-
tion. So we’ll use function composition directly.
We have to first peel off the data constructor State to expose the
function inside the State functor. This is done using runState:
Then we left-compose it with the counit, which is defined by uncurry runState.
Finally, we clothe it back in the State data constructor:
join :: State s (State s a) -> State s a
join (State ssa) = State (uncurry runState . ssa)
23
Comonads
We’ve seen that the monad lets us compose embellished functions, Kleisli arrows of the type:

a -> m b
A => M[B]
Dually, a comonad lets us compose functions that take embellished arguments, co-Kleisli arrows of the type:

w a -> b
W[A] => B
The analog of the fish operator for co-Kleisli arrows is defined as:
def =>=[A, B, C](w1: W[A] => B)(w2: W[B] => C): W[A] => C
We also need the dual of return, a way of extracting a value from the embellished argument:

extract :: w a -> a
This is the dual of return. We also have to impose the laws of associa-
tivity as well as left- and right-identity. Putting it all together, we could
define a comonad in Haskell as:
trait Comonad[W[_]] {
  def =>=[A, B, C](w1: W[A] => B)(w2: W[B] => C): W[A] => C
  def extract[A](wa: W[A]): A
}
23.2 The Product Comonad
Remember the Reader monad? We introduced it to deal with functions that take an environment as an additional argument. Such a function has the type:

(a, e) -> b

We curried it to obtain a Kleisli arrow:

a -> (e -> b)

A => (E => B)
But notice that these functions already have the form of co-Kleisli ar-
rows. Let’s massage their arguments into the more convenient functor
form:

data Product e a = P e a
  deriving Functor

final case class Product[E, A](run: (E, A))
We can easily define the composition operator by making the same en-
vironment available to the arrows that we are composing:
def =>=[E, A, B, C]
  : (Product[E, A] => B) =>
    (Product[E, B] => C) =>
    (Product[E, A] => C) = f => g => {
case Product((e, a)) =>
val b = f(Product((e, a)))
val c = g(Product((e, b)))
c
}
The extract function for this comonad simply ignores the environment:

extract :: Product e a -> a
extract (P e a) = a
23.3 Dissecting the Composition
Continuing the process of dualization, we could go ahead and dualize
monadic bind and join. Alternatively, we can repeat the process we used
with monads, where we studied the anatomy of the fish operator. This
approach seems more enlightening.
The starting point is the realization that the composition operator
must produce a co-Kleisli arrow that takes w a and produces a c. The
only way to produce a c is to apply the second function to an argument
of the type w b:
def =>=[W[_], A, B, C]
: (W[A] => B) =>
(W[B] => C) =>
(W[A] => C) = f => g => {
g
...
}
We are missing a way of converting w a to w b using f. A function that performs this lifting of a co-Kleisli arrow is called extend:

def extend[W[_], A, B]: (W[A] => B) => W[A] => W[B]
def =>=[W[_], A, B, C]
: (W[A] => B) =>
(W[B] => C) =>
(W[A] => C) = f => g => {
g compose extend(f)
}
Can we next dissect extend? You might be tempted to say, why not just
apply the function w a -> b to the argument w a, but then you quickly
realize that you’d have no way of converting the resulting b to w b. Re-
member, the comonad provides no means of lifting values. At this point,
in the analogous construction for monads, we used fmap. The only way
we could use fmap here would be if we had something of the type w (w a)
at our disposal. If only we could turn w a into w (w a). Conveniently, that would be exactly the dual of join. We call it duplicate:
duplicate :: w a -> w (w a)
So, just like with the definitions of the monad, we have three equiv-
alent definitions of the comonad: using co-Kleisli arrows, extend, or
duplicate. Here’s the Haskell definition taken directly from the Control.Comonad library:
class Functor w => Comonad w where
extract :: w a -> a
duplicate :: w a -> w (w a)
duplicate = extend id
extend :: (w a -> b) -> w a -> w b
extend f = fmap f . duplicate
trait Comonad[W[_]] extends Functor[W] {
  def extract[A](wa: W[A]): A
  def duplicate[A](wa: W[A]): W[W[A]]
  def extend[A, B]
      (f: W[A] => B)(wa: W[A]): W[B] =
    (fmap(f) _ compose duplicate)(wa)
}
Think of duplicate as producing a whole container of containers, where each of the inner containers is focused on a different a inside w a. In the game of life, you would get a grid of grids, each cell of the outer grid containing an inner grid that’s focused on a different cell.
Now look at extend. It takes a co-Kleisli arrow and a comonadic
container w a filled with as. It applies the computation to all of these
as, replacing them with bs. The result is a comonadic container filled
with bs. extend does it by shifting the focus from one a to another and
applying the co-Kleisli arrow to each of them in turn. In the game of life,
the co-Kleisli arrow would calculate the new state of the current cell. To
do that, it would look at its context — presumably its nearest neighbors.
The default implementation of extend illustrates this process. First we
call duplicate to produce all possible foci and then we apply f to each
of them.
23.4 The Stream Comonad
An example of a comonad that fits the focusing intuition well is the infinite stream:

data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons a as) = Cons (f a) (fmap f as)
The focus of a stream is its first element, so extract returns the head:

extract (Cons a _) = a
def duplicateS[A](wa: Stream[A]): Stream[Stream[A]] =
  wa match {
    case s @ Stream(a, as) =>
      Stream(() => s, () => duplicateS(as()))
  }
The first element is the original stream, the second element is the tail of
the original stream, the third element is its tail, and so on, ad infinitum.
Here’s the complete instance:
instance Comonad Stream where
  extract (Cons a _) = a
  duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

Notice that duplicate seems to create all the shifted streams in one fell swoop. Haskell’s laziness makes this possible and even desir-
able. Of course, to make a Stream practical, we would also implement
the analog of advance:

advance :: Stream a -> Stream a
advance (Cons _ as) = as

To process the stream we need a function that sums its first n elements:

sumS :: Num a => Int -> Stream a -> a
sumS n (Cons a as) = if n <= 0 then 0 else a + sumS (n - 1) as

Here’s the function that calculates the average of the first n elements of the stream:
average :: Fractional a => Int -> Stream a -> a
average n stm = (sumS n stm) / (fromIntegral n)
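A Scala rendering of these stream helpers (a sketch of mine, using the thunked Stream shape from the duplicateS snippet above):

final case class Stream[A](h: () => A, t: () => Stream[A])

def sumS(n: Int)(s: Stream[Double]): Double =
  if (n <= 0) 0.0 else s.h() + sumS(n - 1)(s.t())

def average(n: Int)(s: Stream[Double]): Double =
  sumS(n)(s) / n

// an infinite stream of consecutive doubles
def from(x: Double): Stream[Double] =
  Stream(() => x, () => from(x + 1.0))

// average(3)(from(1.0)) == 2.0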
23.5 Comonad Categorically
In category theory, a comonad is obtained by reversing the arrows in the definition of a monad. The natural transformations, 𝜂 and 𝜇, that define the monad are simply reversed for the comonad:
𝜀∷𝑇 →𝐼
𝛿 ∷ 𝑇 → 𝑇2
Just as a monad arises from an adjunction as 𝑅 ∘ 𝐿, the composition taken in the other order, 𝐿 ∘ 𝑅, is a comonad. The counit of the adjunction:

𝜀 ∷ 𝐿 ∘ 𝑅 → 𝐼
is indeed the same ε that we see in the definition of the comonad — or,
in components, as Haskell’s extract. We can also use the unit of the
adjunction:
𝜂∷𝐼 →𝑅∘𝐿
to insert an 𝑅 ∘𝐿 in the middle of 𝐿∘𝑅 and produce 𝐿∘𝑅 ∘𝐿∘𝑅. Making 𝑇 2
from 𝑇 defines the 𝛿, and that completes the definition of the comonad.
We’ve also seen that the monad is a monoid. The dual of this state-
ment would require the use of a comonoid, so what’s a comonoid? The
original definition of a monoid as a single-object category doesn’t du-
alize to anything interesting. When you reverse the direction of all en-
domorphisms, you get another monoid. Recall, however, that in our ap-
proach to a monad, we used a more general definition of a monoid as
an object in a monoidal category. The construction was based on two
morphisms:
𝜇 ∷𝑚⊗𝑚 →𝑚
𝜂∷𝑖→𝑚
Reversing the arrows gives the two operations of a comonoid:

𝛿 ∷ 𝑚 → 𝑚 ⊗ 𝑚
𝜀∷𝑚→𝑖
trait Comonoid[M] {
def split(x: M): (M, M)
def destroy(x: M): Unit
}
What would a comonoid instance for a Haskell type look like? destroy has no choice but to ignore its argument:

destroy _ = ()

split, on the other hand, is just a pair of functions:

split x = (f x, g x)
Now consider comonoid laws that are dual to the monoid unit laws.
lambda . bimap destroy id . split = id
rho . bimap id destroy . split = id
Here, lambda and rho are the left and right unitors, respectively (see the
definition of monoidal categories). Plugging in the definitions, we get:
lambda(bimap(destroy)(identity[M])(split(x))) ==
lambda(bimap(destroy)(identity[M])(
(f(x), g(x)))) ==
lambda(((), g(x))) ==
g(x)
which proves that g = id. Similarly, the second law expands to f = id.
In conclusion:
split x = (x, x)
which shows that in Haskell (and, in general, in the category 𝐒𝐞𝐭) every
object is a trivial comonoid.
Fortunately there are other more interesting monoidal categories
in which to define comonoids. One of them is the category of endo-
functors. And it turns out that, just like the monad is a monoid in the
category of endofunctors,
The comonad is a comonoid in the category of endofunc-
tors.
23.6 The Store Comonad
Another important example is the comonad dual to the state monad. It’s called the Store comonad and it arises from the same adjunction between the product and the exponential, with the two functors composed in the opposite order:

data Store s a = Store (s -> a) s
The counit of the adjunction, at the component 𝑎, is function application:

𝜀𝑎 ∷ ((𝑠 ⇒ 𝑎) × 𝑠) → 𝑎

In Haskell, it’s called extract:

extract (Store f s) = f s
The unit of the adjunction:

def unit[S, A](a: A): Reader[S, Prod[S, A]] =
Reader(s => Prod((a, s)))
A partially applied Store constructor will come in handy:

object Store {
def apply[S, A](run: S => A): S => Store[S, A] =
s => new Store(run, s)
}
Comultiplication, duplicate, is given by the unit sandwiched between the two functors:

𝛿 ∷ 𝐿 ∘ 𝑅 → 𝐿 ∘ 𝑅 ∘ 𝐿 ∘ 𝑅
𝛿 = 𝐿 ∘ 𝜂 ∘ 𝑅
(Remember that, in the formula for 𝛿, 𝐿 and 𝑅 stand for identity natural
transformations whose components are identity morphisms.)
Here’s the complete definition of the Store comonad:
instance Comonad (Store s) where
extract (Store f s) = f s
duplicate (Store f s) = Store (Store f) s
implicit def storeComonad[S] =
  new Comonad[Store[S, ?]] { // kind-projector syntax
    def extract[A](wa: Store[S, A]): A = wa match {
      case Store(f, s) => f(s)
    }
    override def duplicate[A](wa: Store[S, A])
        : Store[S, Store[S, A]] = wa match {
      case Store(f, s) => Store(Store(f), s)
    }
  }
In general, you can convince yourself that when extract acts on the
duplicated Store it produces the original Store (in fact, the identity law
for the comonad states that extract . duplicate = id).
The Store comonad plays an important role as the theoretical basis
for the Lens library. Conceptually, the Store s a comonad encapsulates
the idea of “focusing” (like a lens) on a particular substructure of the
data type a using the type s as an index. In particular, a function of the
type:
a -> Store s a
A => Store[S, A]
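For intuition, here’s a tiny self-contained sketch (my example, with its own simplified Store) of a lens-like coalgebra that focuses on a single field of a record:

final case class Store[S, A](run: S => A, s: S)

final case class Person(name: String, age: Int)

// focus on the age field: the index type is Int
def ageLens(p: Person): Store[Int, Person] =
  Store(newAge => p.copy(age = newAge), p.age)

val p = Person("Ada", 36)
val st = ageLens(p)
// reading back the current index reproduces the original record
val same: Person = st.run(st.s)        // Person("Ada", 36)
val older: Person = st.run(st.s + 1)   // Person("Ada", 37)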
23.7 Challenges
1. Implement the Conway’s Game of Life using the Store comonad.
Hint: What type do you pick for s?
24
F-Algebras
Let’s start with the definition of a monoid as a set 𝑚 equipped with a pair of functions:

𝜇 ∷ 𝑚 × 𝑚 → 𝑚
𝜂∷1→𝑚
Here, 1 is the terminal object in 𝐒𝐞𝐭 — the singleton set. The first func-
tion defines multiplication (it takes a pair of elements and returns their
product), the second selects the unit element from 𝑚. Not every choice
of two functions with these signatures results in a monoid. For that
we need to impose additional conditions: associativity and unit laws.
But let’s forget about that for a moment and just consider “potential
monoids.” A pair of functions is an element of a Cartesian product of
two sets of functions. We know that these sets may be represented as exponential objects:

𝜇 ∈ 𝑚^(𝑚×𝑚)
𝜂 ∈ 𝑚^1

The pair of functions is a single element of the Cartesian product:

𝑚^(𝑚×𝑚) × 𝑚^1

which, by the standard exponential laws, is isomorphic to:

𝑚^(𝑚×𝑚+1)

The + sign stands for the coproduct in 𝐒𝐞𝐭. We have just replaced a pair of functions with a single function — an element of the set:

𝑚 × 𝑚 + 1 → 𝑚

Nothing in this construction is specific to monoids. Other algebraic structures correspond to other choices of operations. For instance, to describe a group we would add a unary inverse, leading to triples of functions:

𝑚 × 𝑚 → 𝑚
𝑚 → 𝑚
1 → 𝑚
As before, we can combine all these triples into one set of functions:
𝑚×𝑚+𝑚+1→𝑚
We can hide all these sums and products inside a functor. For the monoid, that functor is 𝐹 𝑚 = 𝑚 × 𝑚 + 1, and the single evaluating function has the type 𝐹 𝑚 → 𝑚. This leads to the definition of an F-algebra: given an endofunctor 𝐹 , an F-algebra is a pair consisting of an object 𝑎 and a morphism:

𝐹 𝑎 → 𝑎
The object is often called the carrier, an underlying object or, in the
context of programming, the carrier type. The morphism is often called
the evaluation function or the structure map. Think of the functor 𝐹 as
forming expressions and the morphism as evaluating them.
Here’s the Haskell definition of an F-algebra:
type Algebra f a = f a -> a

type Algebra[F[_], A] = F[A] => A

For instance, the operations of a ring (addition, multiplication, negation, and the constants zero and one) are captured by the following functor:
sealed trait RingF[+A]
case object RZero extends RingF[Nothing]
case object ROne extends RingF[Nothing]
final case class RAdd[A](m: A, n: A) extends RingF[A]
final case class RMul[A](m: A, n: A) extends RingF[A]
final case class RNeg[A](n: A) extends RingF[A]
We can pick, for instance, Int as the carrier type and define a simple evaluator as the algebra:

def evalZ: RingF[Int] => Int = {
  case RZero => 0
  case ROne => 1
  case RAdd(m, n) => m + n
  case RMul(m, n) => m * n
  case RNeg(n) => -n
}

There are more F-algebras based on the same functor RingF. For instance, polynomials form a ring and so do square matrices.
As you can see, the role of the functor is to generate expressions that
can be evaluated using the evaluator of the algebra. So far we’ve only
seen very simple expressions. We are often interested in more elaborate
expressions that can be defined using recursion.
24.1 Recursion
One way to generate arbitrary expression trees is to replace the variable
a inside the functor definition with recursion. For instance, an arbitrary
expression in a ring is generated by this tree-like data structure:

sealed trait Expr
case object RZero extends Expr
case object ROne extends Expr
final case class RAdd(e1: Expr, e2: Expr) extends Expr
final case class RMul(e1: Expr, e2: Expr) extends Expr
final case class RNeg(e: Expr) extends Expr
We can replace the original ring evaluator with its recursive version:
def evalZ: Expr => Int = {
  case RZero => 0
  case ROne => 1
case RAdd(e1, e2) => evalZ(e1) + evalZ(e2)
case RMul(e1, e2) => evalZ(e1) * evalZ(e2)
case RNeg(e) => -evalZ(e)
}
This is still not very practical, since we are forced to represent all inte-
gers as sums of ones, but it will do in a pinch.
But how can we describe expression trees using the language of F-
algebras? We have to somehow formalize the process of replacing the
free type variable in the definition of our functor, recursively, with the
result of the replacement. Imagine doing this in steps. First, define a
depth-one tree as:

type RingF1[A] = RingF[RingF[A]]

We are filling the holes in the definition of RingF with depth-zero trees generated by RingF a. Depth-2 trees are similarly obtained as:
type RingF2[A] = RingF[RingF1[A]]
𝐹 𝑖𝑥 𝑓 = 𝑓 (𝐹 𝑖𝑥 𝑓 )
In Haskell, the fixed point is encoded as a recursive data type:

newtype Fix f = Fix (f (Fix f))

Historically, the constructor used to be called In:

newtype Fix f = In (f (Fix f))

but I’ll stick with the accepted notation. The constructor Fix (or In, if
you prefer) can be seen as a function:
Fix :: f (Fix f) -> Fix f
object Fix {
def apply[F[_]](f: F[Fix[F]]): Fix[F] = new Fix(f)
}
There is also a function that peels off one level of functor application:
unFix :: Fix f -> f (Fix f)
unFix (Fix x) = x
The two functions are the inverse of each other. We’ll use these func-
tions later.
24.2 Category of F-Algebras
F-algebras based on a given endofunctor 𝐹 form their own category. The objects in that category are algebras — pairs consisting of a carrier object 𝑎 and a morphism 𝐹 𝑎 → 𝑎, both from the original category 𝐂.
To complete the picture, we have to define morphisms in the cate-
gory of F-algebras. A morphism must map one algebra (𝑎, 𝑓 ) to another
algebra (𝑏, 𝑔). We’ll define it as a morphism 𝑚 that maps the carriers
— it goes from 𝑎 to 𝑏 in the original category. Not any morphism will
do: we want it to be compatible with the two evaluators. (We call such a
structure-preserving morphism a homomorphism.) Here’s how you de-
fine a homomorphism of F-algebras. First, notice that we can lift 𝑚 to
the mapping:
𝐹 𝑚∷𝐹 𝑎→𝐹 𝑏
we can then follow it with 𝑔 to get to 𝑏. Equivalently, we can use 𝑓 to
go from 𝐹 𝑎 to 𝑎 and then follow it with 𝑚. We want the two paths to
be equal:
𝑔∘𝐹 𝑚 =𝑚∘𝑓
It’s easy to convince yourself that this is indeed a category (hint: iden-
tity morphisms from 𝐂 work just fine, and a composition of homomor-
phisms is a homomorphism).
An initial object in the category of F-algebras, if it exists, is called
the initial algebra. Let’s call the carrier of this initial algebra 𝑖 and its
evaluator 𝑗 ∷ 𝐹 𝑖 → 𝑖. It turns out that 𝑗, the evaluator of the initial
algebra, is an isomorphism. This result is known as Lambek’s theorem.
The proof relies on the definition of the initial object, which requires
that there be a unique homomorphism 𝑚 from it to any other F-algebra.
Since 𝑚 is a homomorphism, the following diagram must commute:
To make use of initiality, consider the algebra whose carrier is 𝐹 𝑖 and whose evaluator is the lifted 𝑗:

𝐹 𝑗 ∷ 𝐹 (𝐹 𝑖) → 𝐹 𝑖

Initiality gives us a unique homomorphism 𝑚 from (𝑖, 𝑗) to this algebra, and the homomorphism condition reads 𝑚 ∘ 𝑗 = 𝐹 𝑗 ∘ 𝐹 𝑚.
But we also have a trivially commuting diagram (both paths are the same!) in which 𝑗 appears as a homomorphism from (𝐹 𝑖, 𝐹 𝑗) back to (𝑖, 𝑗). The composite 𝑗 ∘ 𝑚 is then a homomorphism from the initial algebra to itself, so by uniqueness it must be the identity. It follows that 𝑚 ∘ 𝑗 = 𝐹 𝑗 ∘ 𝐹 𝑚 = 𝐹 (𝑗 ∘ 𝑚) = id as well, which establishes the
isomorphism between 𝐹 𝑖 and 𝑖:
𝐹 𝑖≅𝑖
But that is just saying that 𝑖 is a fixed point of 𝐹 . That’s the formal proof
behind the original hand-waving argument.
Back to Haskell: We recognize 𝑖 as our Fix f, 𝑗 as our constructor
Fix, and its inverse as unFix. The isomorphism in Lambek’s theorem
tells us that, in order to get the initial algebra, we take the functor 𝑓
and replace its argument 𝑎 with Fix f. We also see why the fixed point
does not depend on 𝑎.
24.3 Natural Numbers
Natural numbers provide another example. They can be defined by a pair of morphisms:

𝑧𝑒𝑟𝑜 ∷ 1 → 𝑁
𝑠𝑢𝑐𝑐 ∷ 𝑁 → 𝑁
The first one picks the zero, and the second one maps all numbers to
their successors. As before, we can combine the two into one:
1+𝑁 →𝑁
The left hand side defines a functor which, in Haskell, can be written
like this:
sealed trait NatF[+A]
case object ZeroF extends NatF[Nothing]
final case class SuccF[A](a: A) extends NatF[A]
The fixed point of this functor (the initial algebra that it generates) can
be encoded in Haskell as:
data Nat = Zero | Succ Nat
24.4 Catamorphisms
Let’s rewrite the initiality condition using Haskell notation. We call
the initial algebra Fix f. Its evaluator is the constructor Fix. There is
a unique morphism m from the initial algebra to any other algebra over
the same functor. Let’s pick an algebra whose carrier is a and the eval-
uator is alg.
By the way, notice what m is: It’s an evaluator for the fixed point, an
evaluator for the whole recursive expression tree. Let’s find a general
way of implementing it.
Lambek’s theorem tells us that the constructor Fix is an isomor-
phism. We called its inverse unFix. We can therefore flip one arrow in
this diagram to get:
m = alg . fmap m . unFix
Since we can do this for any algebra alg, it makes sense to define
a higher order function that takes the algebra as a parameter and gives
us the function we called m. This higher order function is called a catamorphism:

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix
Let’s see an example of that. Take the functor that defines natural numbers:

data NatF a = ZeroF | SuccF a

Let’s pick (Int, Int) as the carrier type and define our algebra as:
def fib: NatF[(Int, Int)] => (Int, Int) = {
case ZeroF => (1, 1)
case SuccF((m, n)) => (n, m + n)
}
You can easily convince yourself that the catamorphism for this algebra,
cata fib, calculates Fibonacci numbers.
In general, an algebra for NatF defines a recurrence relation: the
value of the current element in terms of the previous element. A cata-
morphism then evaluates the n-th element of that sequence.
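Here’s a compact, self-contained Scala sketch of cata specialized to NatF, together with the Fibonacci algebra (my own example; toNat, which builds the fixed point from an Int, is a helper of mine, and the earlier definitions are repeated for self-containment):

sealed trait NatF[+A]
case object ZeroF extends NatF[Nothing]
final case class SuccF[A](a: A) extends NatF[A]

final case class Fix[F[_]](unfix: F[Fix[F]])

def fmapNat[A, B](f: A => B): NatF[A] => NatF[B] = {
  case ZeroF => ZeroF
  case SuccF(a) => SuccF(f(a))
}

// the catamorphism, specialized to NatF
def cataNat[A](alg: NatF[A] => A)(n: Fix[NatF]): A =
  alg(fmapNat(cataNat(alg))(n.unfix))

// encode an Int as a fixed point of NatF
def toNat(i: Int): Fix[NatF] =
  if (i <= 0) Fix[NatF](ZeroF) else Fix[NatF](SuccF(toNat(i - 1)))

def fib: NatF[(Int, Int)] => (Int, Int) = {
  case ZeroF => (1, 1)
  case SuccF((m, n)) => (n, m + n)
}

// cataNat(fib)(toNat(5))._1 == 8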
24.5 Folds
A list of e is the initial algebra of the following functor:

data ListF e a = NilF | ConsF e a

Indeed, replacing the variable a with the result of recursion, which we’ll call List e, we get:
data List e = Nil | Cons e (List e)

sealed trait List[+E]
case object Nil extends List[Nothing]
final case class Cons[E](h: E, t: List[E]) extends List[E]
An algebra for a list functor picks a particular carrier type and defines a
function that does pattern matching on the two constructors. Its value
for NilF tells us how to evaluate an empty list, and its value for ConsF
tells us how to combine the current element with the previously accu-
mulated value.
For instance, here’s an algebra that can be used to calculate the
length of a list (the carrier type is Int):
lenAlg :: ListF e Int -> Int
lenAlg (ConsF e n) = n + 1
lenAlg NilF = 0

In practice, we’d use foldr, which takes the two components of the algebra as two separate arguments:
def length[E](l: List[E]): Int =
l.foldRight(0)((e, n) => n + 1)
The two arguments to foldr are exactly the two components of the al-
gebra.
Let’s try another example:
sumAlg :: ListF Double Double -> Double
sumAlg (ConsF e s) = e + s
sumAlg NilF = 0.0
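The Scala counterpart (my rendering) passes the same two components to foldRight:

def sum(l: List[Double]): Double =
  l.foldRight(0.0)((e, s) => e + s)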
24.6 Coalgebras
As usual, we have a dual construction of an F-coalgebra, where the direction of the morphism is reversed:
𝑎→𝐹 𝑎
Coalgebras for a given functor also form a category, with homomor-
phisms preserving the coalgebraic structure. The terminal object (𝑡, 𝑢)
in that category is called the terminal (or final) coalgebra. For every
other algebra (𝑎, 𝑓 ) there is a unique homomorphism 𝑚 that makes the
following diagram commute:
Dually to the catamorphism, an anamorphism uses a coalgebra to unfold a seed into the fixed point:

ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

def ana[F[_], A](coalg: A => F[A])
    (implicit F: Functor[F]): A => Fix[F] =
  (Fix.apply[F] _).compose(
    F.fmap(ana(coalg)) _ compose coalg)
data StreamF e a = StreamF e a
  deriving Functor

A coalgebra for StreamF e is a function that takes the seed of type a and
produces a pair (StreamF is a fancy name for a pair) consisting of an
element and the next seed.
You can easily generate simple examples of coalgebras that produce
infinite sequences, like the list of squares, or reciprocals.
A more interesting example is a coalgebra that produces a list of
primes. The trick is to use an infinite list as a carrier. Our starting seed
will be the list [2..]. The next seed will be the tail of this list with all
multiples of 2 removed. It’s a list of odd numbers starting with 3. In the
next step, we’ll take the tail of this list and remove all multiples of 3, and
so on. You might recognize the makings of the sieve of Eratosthenes.
This coalgebra is implemented by the following function:
era :: [Int] -> StreamF Int [Int]
era (p : ns) = StreamF p (filter (notdiv p) ns)
  where notdiv p n = n `mod` p /= 0

The anamorphism applied to this coalgebra, with the seed [2..], produces the stream of primes:

primes = ana era [2..]

To extract the elements, we can convert the resulting fixed point to a list using a catamorphism:
toListC :: Fix (StreamF e) -> [e]
toListC = cata al
where al :: StreamF e [e] -> [e]
al (StreamF e a) = e : a
An interesting application of coalgebras is the lens, which combines a getter and a setter for a field of type s inside a data structure of type a:

set :: a -> s -> a
get :: a -> s
Here, a is usually some product data type with a field of type s. The
getter retrieves the value of that field and the setter replaces this field
with a new value. These two functions can be combined into one:
a -> (s, s -> a)
A => Store[S, A]
Notice that this is not a simple algebraic functor constructed from sums
of products. It involves an exponential 𝑎𝑠 .
A lens is a coalgebra for this functor with the carrier type a. We’ve
seen before that Store s is also a comonad. It turns out that a well-
behaved lens corresponds to a coalgebra that is compatible with the
comonad structure. We’ll talk about this in the next section.
24.7 Challenges
1. Implement the evaluation function for a ring of polynomials of
one variable. You can represent a polynomial as a list of coef-
ficients in front of powers of 𝑥. For instance, 4𝑥 2 − 1 would be
represented as (starting with the zero’th power) [-1, 0, 4].
2. Generalize the previous construction to polynomials of many in-
dependent variables, like 𝑥 2 𝑦 − 3𝑦 3 𝑧.
3. Implement the algebra for the ring of 2 × 2 matrices.
4. Define a coalgebra whose anamorphism produces a list of squares
of natural numbers.
5. Use unfoldr to generate a list of the first 𝑛 primes.
25
Algebras for Monads
If we interpret endofunctors as ways of defining expressions, algebras let us evaluate them. Given a monad 𝑚, with its unit and multiplication, the components of
these transformations at 𝑎 are:
𝜂𝑎 ∷ 𝑎 → 𝑚 𝑎
𝜇𝑎 ∷ 𝑚 (𝑚 𝑎) → 𝑚 𝑎

We can also define an algebra for the same endofunctor, with some carrier 𝑎 and the evaluator:

𝑎𝑙𝑔 ∷ 𝑚 𝑎 → 𝑎
The first thing to notice is that the algebra goes in the opposite direction to 𝜂𝑎 . The intuition is that 𝜂𝑎 creates a trivial expression from a value
of type 𝑎. The first coherence condition that makes the algebra compat-
ible with the monad ensures that evaluating this expression using the
algebra whose carrier is 𝑎 gives us back the original value:
𝑎𝑙𝑔 ∘ 𝜂𝑎 = id𝑎
The second condition arises from the fact that there are two ways of
evaluating the doubly nested expression 𝑚 (𝑚 𝑎). We can first apply 𝜇𝑎
to flatten the expression, and then use the evaluator of the algebra; or
we can apply the lifted evaluator to evaluate the inner expressions, and
then apply the evaluator to the result. We’d like the two strategies to be
equivalent:
𝑎𝑙𝑔 ∘ 𝜇𝑎 = 𝑎𝑙𝑔 ∘ 𝑚 𝑎𝑙𝑔
Here, m alg is the morphism resulting from lifting 𝑎𝑙𝑔 using the functor
𝑚. The following commuting diagrams describe the two conditions (I
replaced 𝑚 with 𝑇 in anticipation of what follows):
      𝜂𝑎                         𝑇 𝑎𝑙𝑔
  𝑎 ────→ 𝑇 𝑎         𝑇 (𝑇 𝑎) ────→ 𝑇 𝑎
   ╲        │             𝜇𝑎 │        │ 𝑎𝑙𝑔
  id𝑎╲      │ 𝑎𝑙𝑔            ↓        ↓
      ↘     ↓              𝑇 𝑎 ────→  𝑎
            𝑎                    𝑎𝑙𝑔
alg . return = id
alg . join = alg . fmap alg
For the list monad, for example, an algebra is determined by a binary function f and a value z, combined using foldr. Acting on a single-element list, it gives:

foldr f z [x] = x `f` z
where the action of f is written in the infix notation. The algebra is com-
patible with the monad if the following coherence condition is satisfied
for every x:
x `f` z = x
f(x)(z) == x
25.1 T-algebras
Since mathematicians prefer to call their monads 𝑇 , they call algebras
compatible with them T-algebras. T-algebras for a given monad 𝑇 in
a category 𝐂 form a category called the Eilenberg-Moore category, of-
ten denoted by 𝐂𝑇 . Morphisms in that category are homomorphisms of
algebras. These are the same homomorphisms we’ve seen defined for
F-algebras.
A T-algebra is a pair consisting of a carrier object and an evaluator,
(𝑎, 𝑓 ). There is an obvious forgetful functor 𝑈 𝑇 from 𝐂𝑇 to 𝐂, which
maps (𝑎, 𝑓 ) to 𝑎. It also maps a homomorphism of T-algebras to a cor-
responding morphism between carrier objects in 𝐂. You may remember
from our discussion of adjunctions that the left adjoint to a forgetful
functor is called a free functor.
The left adjoint to 𝑈 𝑇 is called 𝐹 𝑇 . It maps an object 𝑎 in 𝐂 to a free
algebra in 𝐂𝑇 . The carrier of this free algebra is 𝑇 𝑎. Its evaluator is a
morphism from 𝑇 (𝑇 𝑎) back to 𝑇 𝑎. Since 𝑇 is a monad, we can use the
monadic 𝜇𝑎 (join in Haskell) as the evaluator.
We still have to show that this is a T-algebra. For that, two coherence
conditions must be satisfied:
𝑎𝑙𝑔 ∘ 𝜂𝑇 𝑎 = id𝑇 𝑎
𝑎𝑙𝑔 ∘ 𝜇𝑇 𝑎 = 𝑎𝑙𝑔 ∘ 𝑇 𝑎𝑙𝑔
But these are just monadic laws, if you plug in 𝜇 for the algebra.
As you may recall, every adjunction defines a monad. It turns out
that the adjunction between 𝐹 𝑇 and 𝑈 𝑇 defines the very monad 𝑇 that
was used in the construction of the Eilenberg-Moore category. Since we
can perform this construction for every monad, we conclude that every
monad can be generated from an adjunction. Later I’ll show you that
there is another adjunction that generates the same monad.
Here’s the plan: First I’ll show you that 𝐹 𝑇 is indeed the left adjoint
of 𝑈 𝑇 . I’ll do it by defining the unit and the counit of this adjunction and
proving that the corresponding triangular identities are satisfied. Then
I’ll show you that the monad generated by this adjunction is indeed our
original monad.
The unit of the adjunction is the natural transformation:
𝜂 ∷ 𝐼 → 𝑈𝑇 ∘ 𝐹𝑇
Let’s calculate the 𝑎 component of this transformation. The identity
functor gives us 𝑎. The free functor produces the free algebra (𝑇 𝑎, 𝜇𝑎 ),
and the forgetful functor reduces it to 𝑇 𝑎. Altogether we get a mapping
from 𝑎 to 𝑇 𝑎. We’ll simply use the unit of the monad 𝑇 as the unit of
this adjunction.
Let’s look at the counit:
𝜀 ∷ 𝐹𝑇 ∘ 𝑈𝑇 → 𝐼
Let’s calculate its component at some T-algebra (𝑎, 𝑓 ). The forgetful
functor forgets the 𝑓 , and the free functor produces the pair (𝑇 𝑎, 𝜇𝑎 ). So
in order to define the component of the counit 𝜀 at (𝑎, 𝑓 ), we need the
right morphism in the Eilenberg-Moore category, or a homomorphism
of T-algebras:
(𝑇 𝑎, 𝜇𝑎 ) → (𝑎, 𝑓 )
Such a homomorphism should map the carrier 𝑇 𝑎 to 𝑎. Let’s just res-
urrect the forgotten evaluator 𝑓 . This time we’ll use it as a homomor-
phism of T-algebras. Indeed, the same commuting diagram that makes
𝑓 a T-algebra may be re-interpreted to show that it’s a homomorphism
of T-algebras:
            𝑇 𝑓
  𝑇 (𝑇 𝑎) ────→ 𝑇 𝑎
     𝜇𝑎 │         │ 𝑓
        ↓         ↓
      𝑇 𝑎 ────→   𝑎
            𝑓
We have thus defined the component of the counit natural transforma-
tion 𝜀 at (𝑎, 𝑓 ) (an object in the category of T-algebras) to be 𝑓 .
To complete the adjunction we also need to show that the unit and
the counit satisfy triangular identities. These are:
       𝑇 𝜂𝑎                     𝜂𝑎
  𝑇 𝑎 ────→ 𝑇 (𝑇 𝑎)       𝑎 ────→ 𝑇 𝑎
     ╲        │ 𝜇𝑎          ╲       │ 𝑓
    id ╲      ↓            id ╲     ↓
        ↘   𝑇 𝑎                ↘    𝑎
The first one holds because of the unit law for the monad 𝑇 . The second
is just the law of the T-algebra (𝑎, 𝑓 ).
We have established that the two functors form an adjunction:
𝐹𝑇 ⊣ 𝑈𝑇
The monad generated by this adjunction is the composition:

𝑈 𝑇 ∘ 𝐹 𝑇

which, on objects, acts just like our original 𝑇 . Its multiplication is given by the general formula:

𝜇 = 𝑅 ∘ 𝜀 ∘ 𝐿
This is a horizontal composition of three natural transformations, two
of them being identity natural transformations mapping, respectively,
𝐿 to 𝐿 and 𝑅 to 𝑅. The one in the middle, the counit, is a natural trans-
formation whose component at an algebra (𝑎, 𝑓 ) is 𝑓 .
Let’s calculate the component 𝜇𝑎 . We first horizontally compose 𝜀
after 𝐹 𝑇 , which results in the component of 𝜀 at 𝐹 𝑇 𝑎. Since 𝐹 𝑇 takes 𝑎
to the algebra (𝑇 𝑎, 𝜇𝑎 ), and 𝜀 picks the evaluator, we end up with 𝜇𝑎 .
Horizontal composition on the left with 𝑈 𝑇 doesn’t change anything,
since the action of 𝑈 𝑇 on morphisms is trivial. So, indeed, the 𝜇 obtained
from the adjunction is the same as the 𝜇 of the original monad 𝑇 .
25.2 The Kleisli Category

We've seen the Kleisli category before: it's a category built on top of 𝐂 and a monad 𝑇 . Its objects are the objects of 𝐂, and its morphisms from 𝑎 to 𝑏 correspond to Kleisli arrows 𝑎 → 𝑇 𝑏 in 𝐂. It, too, gives rise to an adjunction that generates the monad 𝑇 . Consider a pair of composable morphisms in the Kleisli category:

𝑓𝐊 ∷ 𝑎 → 𝑏
𝑔𝐊 ∷ 𝑏 → 𝑐

They correspond to the Kleisli arrows in 𝐂:

𝑓 ∷𝑎→𝑇 𝑏
𝑔∷𝑏→𝑇 𝑐

We define the composition:

ℎ𝐊 = 𝑔𝐊 ∘ 𝑓𝐊

as a Kleisli arrow in 𝐂:

ℎ∷𝑎→𝑇 𝑐
ℎ = 𝜇 ∘ (𝑇 𝑔) ∘ 𝑓

In Haskell:

h = join . fmap g . f

and in Scala:

val h = flatten compose fmap(g) compose f

There is a functor 𝐹 from 𝐂 to the Kleisli category that acts as identity on objects. On morphisms, it maps 𝑓 ∷ 𝑎 → 𝑏 to the Kleisli arrow:

𝜂∘𝑓

In Haskell:

return . f

and in Scala:

unit compose f
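A concrete instance of the composition formula, using a toy example of my own for the Maybe monad:

import Control.Monad (join)

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- h = join . fmap g . f, specialized to g = safeSqrt, f = safeRecip
hM :: Double -> Maybe Double
hM = join . fmap safeSqrt . safeRecip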
We can also define a functor 𝐺 from 𝐂𝑇 back to 𝐂. It takes an object 𝑎
from the Kleisli category and maps it to an object 𝑇 𝑎 in 𝐂. Its action on
a morphism 𝑓𝐊 corresponding to a Kleisli arrow:
𝑓 ∷𝑎→𝑇 𝑏
is a morphism in 𝐂:
𝑇 𝑎→𝑇 𝑏
given by first lifting 𝑓 and then applying 𝜇:
𝜇𝑇 𝑏 ∘ 𝑇 𝑓
G fT = join . fmap f
It's easy to convince yourself that 𝐹 and 𝐺 form an adjunction:

𝐹 ⊣𝐺

and that the composite 𝐺 ∘ 𝐹 reproduces the original monad 𝑇 .
25.3 Coalgebras for Comonads
Analogous constructions can be done for any comonad 𝑊 . We can de-
fine a category of coalgebras that are compatible with a comonad. They
make the following diagrams commute:
𝜀𝑎 ∘ 𝑐𝑜𝑎 = id𝑎
𝑊 𝑐𝑜𝑎 ∘ 𝑐𝑜𝑎 = 𝛿𝑎 ∘ 𝑐𝑜𝑎

where 𝑐𝑜𝑎 ∷ 𝑎 → 𝑊 𝑎 is the coalgebra, 𝜀 is the counit (extract), and 𝛿 is the comultiplication (duplicate) of the comonad.
25.4 Lenses
Let’s go back to our discussion of lenses. A lens can be written as a
coalgebra:
𝑐𝑜𝑎𝑙𝑔 ∷ 𝑎 → 𝑆𝑡𝑜𝑟𝑒 𝑠 𝑎

for the functor 𝑆𝑡𝑜𝑟𝑒 𝑠:

data Store s a = Store (s -> a) s

Equivalently, a lens may be given as a pair of functions:

𝑠𝑒𝑡 ∷ 𝑎 → 𝑠 → 𝑎
𝑔𝑒𝑡 ∷ 𝑎 → 𝑠

with the coalgebra:

coalg a = Store (set a) (get a)

We've seen before that 𝑆𝑡𝑜𝑟𝑒 𝑠 is also a comonad:
instance Comonad (Store s) where
extract (Store f s) = f s
duplicate (Store f s) = Store (Store f) s
The question is: Under what conditions is a lens a coalgebra for this
comonad? The first coherence condition:
𝜀𝑎 ∘ 𝑐𝑜𝑎𝑙𝑔 = id𝑎
translates to:
𝑠𝑒𝑡 𝑎 (𝑔𝑒𝑡 𝑎) = 𝑎
This is the lens law that expresses the fact that if you set a field of the
structure 𝑎 to its previous value, nothing changes.
The second condition:

fmap coalg . coalg = duplicate . coalg
requires a little more work. First, recall the definition of fmap for the
Store functor:
fmap g (Store f s) = Store (g . f) s

Acting with duplicate on coalg a produces (in Scala notation):

Store(Store(set(a)), get(a))

while acting with fmap(coalg) produces:

Store(coalg compose set(a), get(a))

For these two expressions to be equal, the two functions under Store
must be equal when acting on an arbitrary s:
coalg(set(a)(s)) == Store(set(a))(s)

Expanding coalg, we get:

Store(set(set(a)(s)))(get(set(a)(s))) ==
Store(set(a))(s)

This is equivalent to the two remaining lens laws. The first one:

set(set(a)(s)) == set(a)
tells us that setting the value of a field twice is the same as setting it
once. The second law:
get (set a s) = s
get(set(a)(s)) == s
tells us that getting a value of a field that was set to 𝑠 gives 𝑠 back.
In other words, a well-behaved lens is indeed a comonad coalgebra
for the Store functor.
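For a concrete example (my own, not from the text), here is the lens focusing on the first component of a pair, packaged as a Store coalgebra; all three conditions are easy to check by hand:

-- Store repeated here for self-containment
data Store s a = Store (s -> a) s

coalgFst :: (x, y) -> Store x (x, y)
coalgFst (x, y) = Store (\x' -> (x', y)) x
-- set (x, y) x' = (x', y)   and   get (x, y) = x
-- extract (coalgFst p) == p gives the get-set law;
-- set-set and get-set follow just as directly.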
25.5 Challenges
1. What is the action of the free functor 𝐹 ∷ 𝐂 → 𝐂𝑇 on morphisms?
Hint: use the naturality condition for monadic 𝜇.
2. Define the adjunction:
𝑈𝑊 ⊣ 𝐹𝑊
26
Ends and Coends
In general, any 𝐒𝐞𝐭-valued functor of two arguments may be interpreted as establishing a
relation between objects in a category. A relation may also involve two
different categories C and D. A functor, which describes such a relation,
has the following signature and is called a profunctor:
𝑝 ∷ 𝐃𝑜𝑝 × 𝐂 → 𝐒𝐞𝐭
It is customary to denote such a profunctor by a slashed arrow:

𝐂↛𝐃
Given a morphism 𝑓 ∷ 𝑎 → 𝑏, the two diagonal elements 𝑝 𝑎 𝑎 and 𝑝 𝑏 𝑏 can both be mapped to the off-diagonal element 𝑝 𝑎 𝑏. We can lift the pair ⟨𝑓 , id𝑏 ⟩:

dimap f id (p b b) :: p a b
dimap(f)(identity[B])(pbb): P[A, B]

or the pair ⟨id𝑎 , 𝑓 ⟩:

dimap id f (p a a) :: p a b
dimap(identity[A])(f)(paa): P[A, B]
Such a transformation is called a dinatural transformation, provided it
satisfies the commuting conditions that reflect the two ways we can
connect diagonal elements to non-diagonal ones. A dinatural transfor-
mation between two profunctors 𝑝 and 𝑞, which are members of the
functor category [𝐂𝑜𝑝 × 𝐂, 𝐒𝐞𝐭], is a family of morphisms:
𝛼𝑎 ∷ 𝑝 𝑎 𝑎 → 𝑞 𝑎 𝑎
Notice that this is strictly weaker than the naturality condition. If 𝛼 were
a natural transformation in [𝐂𝑜𝑝 × 𝐂, 𝐒𝐞𝐭], the above diagram could be
constructed from two naturality squares and one functoriality condition
(profunctor 𝑞 preserving composition):
Notice that a component of a natural transformation 𝛼 in [𝐂𝑜𝑝 × 𝐂, 𝐒𝐞𝐭]
is indexed by a pair of objects 𝛼𝑎𝑏 . A dinatural transformation, on the
other hand, is indexed by one object, since it only maps diagonal ele-
ments of the respective profunctors.
26.2 Ends
We are now ready to advance from “algebra” to what could be consid-
ered the “calculus” of category theory. The calculus of ends (and coends)
borrows ideas and even some notation from traditional calculus. In par-
ticular, the coend may be understood as an infinite sum or an integral,
whereas the end is similar to an infinite product. There is even some-
thing that resembles the Dirac delta function.
An end is a generalization of a limit, with the functor replaced by a
profunctor. Instead of a cone, we have a wedge. The base of a wedge is
formed by diagonal elements of a profunctor 𝑝. The apex of the wedge is
an object (here, a set, since we are considering 𝐒𝐞𝐭-valued profunctors),
and the sides are a family of functions mapping the apex to the sets in
the base. You may think of this family as one polymorphic function —
a function that’s polymorphic in its return type:
𝛼 ∷ ∀𝑎 . 𝑎𝑝𝑒𝑥 → 𝑝 𝑎 𝑎
Unlike in cones, within a wedge we don’t have any functions that would
connect vertices of the base. However, as we’ve seen earlier, given any
morphism 𝑓 ∷ 𝑎 → 𝑏 in 𝐂, we can connect both 𝑝 𝑎 𝑎 and 𝑝 𝑏 𝑏 to
the common set 𝑝 𝑎 𝑏. We therefore insist that the following diagram
commute:
𝑝 id𝑎 𝑓 ∘ 𝛼𝑎 = 𝑝 𝑓 id𝑏 ∘ 𝛼𝑏
We can now proceed with the universal construction and define the end
of 𝑝 as the universal wedge — a set 𝑒 together with a family of functions
𝜋 such that for any other wedge with the apex 𝑎 and a family 𝛼 there is
a unique function ℎ ∷ 𝑎 → 𝑒 that makes all triangles commute:
𝜋𝑎 ∘ ℎ = 𝛼𝑎
The symbol for the end is the integral sign, with the “integration vari-
able” in the subscript position:
∫𝑐 𝑝 𝑐 𝑐

Components of the family 𝜋 are called projection maps for the end:

𝜋𝑎 ∷ ∫𝑐 𝑝 𝑐 𝑐 → 𝑝 𝑎 𝑎
Note that if 𝐂 is a discrete category (no morphisms other than the iden-
tities) the end is just a global product of all diagonal entries of 𝑝 across
the whole category 𝐂. Later I’ll show you that, in the more general case,
there is a relationship between the end and this product through an
equalizer.
In Haskell, the end formula translates directly to the universal quan-
tifier:
forall a. p a a
The wedge condition is satisfied automatically due to parametricity¹. For any function 𝑓 ∷ 𝑎 → 𝑏, it says that the two liftings agree when composed with the polymorphic projection pi that extracts a diagonal component:

dimap f id . pi = dimap id f . pi
1 https://bartoszmilewski.com/2017/04/11/profunctor-parametricity/
Both sides of this equation are functions of the type:

Profunctor p => (forall c. p c c) -> p a b
The dinaturality hexagon shrinks down to the wedge diamond once we
realize that Δ𝑐 lifts all morphisms to one identity function.
Ends can also be defined for target categories other than 𝐒𝐞𝐭, but
here we’ll only consider 𝐒𝐞𝐭-valued profunctors and their ends.
type ProdP p = forall a b. (a -> b) -> p a b
These functions have different types. However, we can unify their types,
if we form one big product type, gathering together all diagonal ele-
ments of p:

newtype DiagProd p = DiagProd (forall a. p a a)

The functions lambda and rho induce two mappings from this product
type:

lambda :: Profunctor p => DiagProd p -> ProdP p
lambda (DiagProd p) = \f -> dimap f id p

rho :: Profunctor p => DiagProd p -> ProdP p
rho (DiagProd p) = \f -> dimap id f p
The end of p is the equalizer of these two functions. Remember that the
equalizer picks the largest subset on which two functions are equal. In
this case it picks the subset of the product of all diagonal elements for
which the wedge diagrams commute.
The most important example of an end is the set of natural transformations between two functors 𝐹 and 𝐺, which may be expressed as:

∫𝑐 𝐂(𝐹 𝑐, 𝐺 𝑐)

In Haskell, this end translates to the familiar polymorphic function type:

forall a. f a -> g a

and in Scala:

F ~> G
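For example (my illustration, with 𝑝 𝑎 𝑏 = [𝑎] → Maybe 𝑏), the polymorphic function safeHead is a single element of such an end:

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x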
Let’s just pick one element from the set ∫𝑐 𝐂(𝐹 𝑐, 𝐺 𝑐). The two projec-
tions will map this element to two components of a particular transfor-
mation, let’s call them:
𝜏𝑎 ∷ 𝐹 𝑎 → 𝐺 𝑎
𝜏𝑏 ∷ 𝐹 𝑏 → 𝐺 𝑏
In the left branch, we lift a pair of morphisms ⟨id𝑎 , 𝐺 𝑓 ⟩ using the hom-
functor. You may recall that such lifting is implemented as simultaneous
pre- and post-composition. When acting on 𝜏𝑎 the lifted pair gives us:
𝐺 𝑓 ∘ 𝜏𝑎 ∘ id𝑎

In the right branch, we lift the pair ⟨𝐹 𝑓 , id𝑏 ⟩, which, acting on 𝜏𝑏 , gives:

id𝑏 ∘ 𝜏𝑏 ∘ 𝐹 𝑓

Equating the two branches reproduces exactly the naturality condition for 𝜏.
26.5 Coends
As expected, the dual to an end is called a coend. It is constructed from
a dual to a wedge called a cowedge (pronounced co-wedge, not cow-
edge).
The symbol for a coend is the integral sign with the “integration variable” in the superscript position:

∫^𝑐 𝑝 𝑐 𝑐
Just like the end is related to a product, the coend is related to a co-
product, or a sum (in this respect, it resembles an integral, which is a
limit of a sum). Rather than having projections, we have injections go-
ing from the diagonal elements of the profunctor down to the coend. If
it weren’t for the cowedge conditions, we could say that the coend of
the profunctor 𝑝 is either 𝑝 𝑎 𝑎, or 𝑝 𝑏 𝑏, or 𝑝 𝑐 𝑐, and so on. Or we could
say that there exists such an 𝑎 for which the coend is just the set 𝑝 𝑎 𝑎.
The universal quantifier that we used in the definition of the end turns
into an existential quantifier for the coend.
This is why, in pseudo-Haskell, we would define the coend as:
exists a. p a a
The elements to be identified are gathered, together with the identifying morphisms, in one sum type:

data SumP p = forall a b. SumP (b -> a) (p a b)

There are two ways of evaluating this sum type, by lifting the function
using dimap and applying it to the profunctor 𝑝:
lambda, rho :: Profunctor p => SumP p -> DiagSum p
lambda (SumP f pab) = DiagSum (dimap f id pab)
rho (SumP f pab) = DiagSum (dimap id f pab)
Both produce values of the same
type DiagSum p, the sum of all the diagonal elements of p:

data DiagSum p = forall a. DiagSum (p a a)

In the coend, these two values are identified, making
the cowedge condition automatically satisfied.
The process of identification of related elements in a set is formally
known as taking a quotient. To define a quotient we need an equivalence
relation ∼, a relation that is reflexive, symmetric, and transitive:
𝑎∼𝑎
if 𝑎 ∼ 𝑏 then 𝑏 ∼ 𝑎
if 𝑎 ∼ 𝑏 and 𝑏 ∼ 𝑐 then 𝑎 ∼ 𝑐
Such a relation splits the set into equivalence classes. Each class con-
sists of elements that are related to each other. We form a quotient set
by picking one representative from each class. A classic example is the
definition of rational numbers as pairs of whole numbers with the fol-
lowing equivalence relation:
(𝑎, 𝑏) ∼ (𝑐, 𝑑) if and only if 𝑎 × 𝑑 = 𝑏 × 𝑐

In the case of the coend, the equivalence relation identifies the two liftings of every element along every morphism. A function that takes the resulting existential type as an argument is equivalent to a polymorphic function:

(exists x. p x x) -> c ≅ forall x. p x x -> c

26.6 Ninja Yoneda Lemma

The set of natural transformations that appears in the Yoneda lemma may be encoded using an end:

∫𝑧 𝐒𝐞𝐭(𝐂(𝑎, 𝑧), 𝐹 𝑧) ≅ 𝐹 𝑎

There is also a dual formula in terms of a coend:

∫^𝑧 𝐂(𝑎, 𝑧) × 𝐹 𝑧 ≅ 𝐹 𝑎
This identity is strongly reminiscent of the formula for the Dirac delta
function (a function 𝛿(𝑎 −𝑧), or rather a distribution, that has an infinite
peak at 𝑎 = 𝑧). Here, the hom-functor plays the role of the delta function.
Together these two identities are sometimes called the Ninja Yoneda
lemma.
To prove the second formula, we will use the consequence of the
Yoneda embedding, which states that two objects are isomorphic if and
only if their hom-functors are isomorphic. In other words 𝑎 ≅ 𝑏 if and
only if there is a natural transformation of the type:

[𝐂, 𝐒𝐞𝐭](𝐂(𝑎, −), 𝐂(𝑏, −))

that is an isomorphism.
We start by inserting the left-hand side of the identity we want to
prove inside a hom-functor that’s going to some arbitrary object 𝑐:
𝐒𝐞𝐭(∫^𝑧 𝐂(𝑎, 𝑧) × 𝐹 𝑧, 𝑐)
Using the continuity argument, we can replace the coend with the end:
∫𝑧 𝐒𝐞𝐭(𝐂(𝑎, 𝑧) × 𝐹 𝑧, 𝑐)
We can now take advantage of the adjunction between the product and
the exponential:
∫𝑧 𝐒𝐞𝐭(𝐂(𝑎, 𝑧), 𝑐^(𝐹 𝑧))
We can “perform the integration” by using the Yoneda lemma to get:
𝑐^(𝐹 𝑎)

This is just the hom-set:

𝐒𝐞𝐭(𝐹 𝑎, 𝑐)

The isomorphism is natural in 𝑐, so, by the Yoneda embedding, the objects under the two hom-functors are themselves isomorphic, which completes the proof.
26.7 Profunctor Composition
Let’s explore further the idea that a profunctor describes a relation —
more precisely, a proof-relevant relation, meaning that the set 𝑝 𝑎 𝑏
represents the set of proofs that 𝑎 is related to 𝑏. If we have two relations
𝑝 and 𝑞 we can try to compose them. We’ll say that 𝑎 is related to 𝑏
through the composition of 𝑞 after 𝑝 if there exists an intermediary object
𝑐 such that both 𝑞 𝑏 𝑐 and 𝑝 𝑐 𝑎 are non-empty. The proofs of this new
relation are all pairs of proofs of individual relations. Therefore, with
the understanding that the existential quantifier corresponds to a coend,
and the Cartesian product of two sets corresponds to “pairs of proofs,”
we can define composition of profunctors using the following formula:
(𝑞 ∘ 𝑝) 𝑎 𝑏 = ∫^𝑐 𝑝 𝑐 𝑎 × 𝑞 𝑏 𝑐
In pseudo-Haskell:

exists c. (q a c, p c b)
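In real Haskell the existential becomes a data declaration. This is a sketch of mine, mirroring the Procompose type from the profunctors library:

{-# LANGUAGE ExistentialQuantification #-}

data Procompose q p a b = forall c. Procompose (q a c) (p c b)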
27
Kan Extensions
The new addition is the category 𝟏 that contains a single object (and
a single identity morphism). There is only one possible functor 𝐾 from
𝐈 to this category. It maps all objects to the only object in 𝟏, and all
morphisms to the identity morphism. Any functor 𝐹 from 𝟏 to 𝐂 picks
a potential apex for our cone.
We can now define a universal property that picks the “best” such func-
tor 𝐹 . This 𝐹 will map 𝟏 to the object that is the limit of 𝐷 in 𝐂, and the
natural transformation 𝜀 from 𝐹 ∘ 𝐾 to 𝐷 will provide the corresponding
projections. This universal functor is called the right Kan extension of
𝐷 along 𝐾 and is denoted by Ran𝐾 𝐷.
Let’s formulate the universal property. Suppose we have another
cone — that is another functor 𝐹 ′ together with a natural transformation
𝜀 ′ from 𝐹 ′ ∘ 𝐾 to 𝐷.
If the right Kan extension exists, there must be a unique natural transformation 𝜎 from 𝐹 ′ to it that factorizes 𝜀 ′ :

𝜀 ′ = 𝜀 . (𝜎 ∘ 𝐾 )
In components, when acting on an object 𝑖 in 𝐈, we get:
𝜀𝑖′ = 𝜀𝑖 ∘ 𝜎𝐾 𝑖
In our case, 𝜎 has only one component corresponding to the single ob-
ject of 𝟏. So, indeed, this is the unique morphism from the apex of the
cone defined by 𝐹 ′ to the apex of the universal cone defined by Ran𝐾 𝐷.
The commuting conditions are exactly the ones required by the defini-
tion of a limit.
But, importantly, we are free to replace the trivial category 𝟏 with
an arbitrary category 𝐀, and the definition of the right Kan extension
remains valid.

27.1 Right Kan Extension

The right Kan extension of the functor 𝐷 ∷ 𝐈 → 𝐂 along the functor 𝐾 ∷ 𝐈 → 𝐀 is a functor 𝐹 = Ran𝐾 𝐷 ∷ 𝐀 → 𝐂 equipped with a natural transformation

𝜀 ∷𝐹 ∘𝐾 →𝐷

such that, for any other functor 𝐹 ′ ∷ 𝐀 → 𝐂 and a natural transformation

𝜀′ ∷ 𝐹 ′ ∘ 𝐾 → 𝐷

there is a unique natural transformation

𝜎 ∷ 𝐹′ → 𝐹
that factorizes 𝜀 ′ :
𝜀 ′ = 𝜀 . (𝜎 ∘ 𝐾 )
This is quite a mouthful, but it can be visualized in this nice diagram:
An interesting way of looking at this is to notice that, in a sense, the
Kan extension acts like the inverse of “functor multiplication.” Some
authors go as far as to use the notation 𝐷/𝐾 for Ran𝐾 𝐷. Indeed, in this
notation, the definition of 𝜀, which is also called the counit of the right
Kan extension, looks like simple cancellation:
𝜀 ∷ 𝐷/𝐾 ∘ 𝐾 → 𝐷
Of course, the embedding picture breaks down when the functor 𝐾 is
not injective on objects or not faithful on hom-sets, as in the example
of the limit. In that case, the Kan extension tries its best to extrapolate
the lost information.
Furthermore, if we choose the category 𝐈 to be the same as 𝐂, we can
substitute the identity functor 𝐼𝐂 for 𝐷. We get the following identity:

[𝐀, 𝐂](𝐹 ′ , Ran𝐾 𝐼𝐂 ) ≅ [𝐂, 𝐂](𝐹 ′ ∘ 𝐾, 𝐼𝐂 )

We can now choose 𝐹 ′ to be the same as Ran𝐾 𝐼𝐂 . In that case the right
hand side contains the identity natural transformation and, correspond-
ing to it, the left hand side gives us the following natural transformation:
𝜀 ∷ Ran𝐾 𝐼𝐂 ∘ 𝐾 → 𝐼𝐂
This looks very much like the counit of the adjunction:

Ran𝐾 𝐼𝐂 ⊣ 𝐾
Indeed, the right Kan extension of the identity functor along a functor
𝐾 can be used to calculate the left adjoint of 𝐾 . For that, one more con-
dition is necessary: the right Kan extension must be preserved by the
functor 𝐾 . The preservation of the extension means that, if we calculate
the Kan extension of the functor precomposed with 𝐾 , we should get
the same result as precomposing the original Kan extension with 𝐾 . In
our case, this condition simplifies to:
𝐾 ∘ Ran𝐾 𝐼𝐂 ≅ Ran𝐾 𝐾
or, in the “division” notation:

𝐾 ∘ 𝐼 /𝐾 ≅ 𝐾 /𝐾
27.2 Left Kan Extension

There is a dual construction, the left Kan extension Lan𝐾 𝐷, obtained by replacing the cone with a cocone built from a functor 𝐹 ∷ 𝐀 → 𝐂.
The sides of the cocone, the injections, are components of a natural
transformation 𝜂 from 𝐷 to 𝐹 ∘ 𝐾 .
The colimit is the universal cocone. So for any other functor 𝐹 ′ and a
natural transformation
𝜂′ ∷ 𝐷 → 𝐹 ′ ∘ 𝐾
there is a unique natural transformation:

𝜎 ∷ Lan𝐾 𝐷 → 𝐹 ′

such that:
𝜂′ = (𝜎 ∘ 𝐾 ) . 𝜂
This is illustrated in the following diagram:
The natural transformation:
𝜂 ∷ 𝐷 → Lan𝐾 𝐷 ∘ 𝐾

is called the unit of the left Kan extension. The uniqueness of the factorizing 𝜎 in:

𝜂′ = (𝜎 ∘ 𝐾 ) . 𝜂

gives rise to the adjunction:

[𝐀, 𝐂](Lan𝐾 𝐷, 𝐹 ′ ) ≅ [𝐈, 𝐂](𝐷, 𝐹 ′ ∘ 𝐾 )
In other words, the left Kan extension is the left adjoint, and the right
Kan extension is the right adjoint, of precomposition with 𝐾 .
Just like the right Kan extension of the identity functor could be used
to calculate the left adjoint of 𝐾 , the left Kan extension of the identity
functor turns out to be the right adjoint of 𝐾 (with 𝜂 being the unit of
the adjunction):
𝐾 ⊣ Lan𝐾 𝐼𝐂
Combining the two results, we get:
Ran𝐾 𝐼𝐂 ⊣ 𝐾 ⊣ Lan𝐾 𝐼𝐂
Let’s revisit the idea that a Kan extension can be used to extend the
action of a functor outside of its original domain. Suppose that 𝐾 em-
beds 𝐈 inside 𝐀. Functor 𝐷 maps 𝐈 to 𝐒𝐞𝐭. We could just say that for any
object 𝑎 in the image of 𝐾 , that is 𝑎 = 𝐾 𝑖, the extended functor maps 𝑎
to 𝐷 𝑖. The problem is, what to do with those objects in 𝐀 that are out-
side of the image of 𝐾 ? The idea is that every such object is potentially
connected through lots of morphisms to every object in the image of 𝐾 .
A functor must preserve these morphisms. The totality of morphisms
from an object 𝑎 to the image of 𝐾 is characterized by the hom-functor:
𝐀(𝑎, 𝐾 −)
This functor may be rewritten as a precomposition with 𝐾 :

𝐀(𝑎, 𝐾 −) = 𝐀(𝑎, −) ∘ 𝐾
Let's see what happens when we replace 𝐹 ′ with the hom functor:

[𝐈, 𝐒𝐞𝐭](𝐀(𝑎, −) ∘ 𝐾, 𝐷) ≅ [𝐀, 𝐒𝐞𝐭](𝐀(𝑎, −), Ran𝐾 𝐷)

and then inline the composition:

[𝐈, 𝐒𝐞𝐭](𝐀(𝑎, 𝐾 −), 𝐷) ≅ [𝐀, 𝐒𝐞𝐭](𝐀(𝑎, −), Ran𝐾 𝐷)

The right hand side can be reduced using the Yoneda lemma:

[𝐈, 𝐒𝐞𝐭](𝐀(𝑎, 𝐾 −), 𝐷) ≅ Ran𝐾 𝐷 𝑎

We can now rewrite the set of natural transformations as the end to get
this very convenient formula for the right Kan extension:

Ran𝐾 𝐷 𝑎 ≅ ∫𝑖 𝐒𝐞𝐭(𝐀(𝑎, 𝐾 𝑖), 𝐷 𝑖)

There is a dual formula for the left Kan extension in terms of a coend:

Lan𝐾 𝐷 𝑎 = ∫^𝑖 𝐀(𝐾 𝑖, 𝑎) × 𝐷 𝑖

To see that this is the case, we'll show that this is indeed the left adjoint
to functor composition:

[𝐀, 𝐒𝐞𝐭](Lan𝐾 𝐷, 𝐹 ′ ) ≅ [𝐈, 𝐒𝐞𝐭](𝐷, 𝐹 ′ ∘ 𝐾 )

We'll plug the coend formula into the left hand side:

∫𝑎 𝐒𝐞𝐭(∫^𝑖 𝐀(𝐾 𝑖, 𝑎) × 𝐷 𝑖, 𝐹 ′ 𝑎)
Using the continuity of the hom-functor, we can replace the coend with
the end:
∫𝑎 ∫𝑖 𝐒𝐞𝐭(𝐀(𝐾 𝑖, 𝑎) × 𝐷 𝑖, 𝐹 ′ 𝑎)
We can use the product-exponential adjunction:
∫𝑎 ∫𝑖 𝐒𝐞𝐭(𝐀(𝐾 𝑖, 𝑎), (𝐹 ′ 𝑎)^(𝐷 𝑖))
The exponential object is isomorphic to the corresponding hom-set:

∫𝑎 ∫𝑖 𝐒𝐞𝐭(𝐀(𝐾 𝑖, 𝑎), 𝐒𝐞𝐭(𝐷 𝑖, 𝐹 ′ 𝑎))
There is a theorem called the Fubini theorem that allows us to swap the
two ends:
∫𝑖 ∫𝑎 𝐒𝐞𝐭(𝐀(𝐾 𝑖, 𝑎), 𝐒𝐞𝐭(𝐷 𝑖, 𝐹 ′ 𝑎))
The inner end represents the set of natural transformations between
two functors, so we can use the Yoneda lemma:
∫𝑖 𝐒𝐞𝐭(𝐷 𝑖, 𝐹 ′ (𝐾 𝑖))
This is indeed the set of natural transformations that forms the right
hand side of the adjunction we set out to prove:
[𝐈, 𝐒𝐞𝐭](𝐷, 𝐹 ′ ∘ 𝐾 )
These kinds of calculations using ends, coends, and the Yoneda lemma
are pretty typical for the “calculus” of ends.
27.5 Kan Extensions in Haskell
The end/coend formulas for Kan extensions can be easily translated to
Haskell. Let's start with the right extension:

Ran𝐾 𝐷 𝑎 ≅ ∫𝑖 𝐒𝐞𝐭(𝐀(𝑎, 𝐾 𝑖), 𝐷 𝑖)

We replace the end with the universal quantifier, and hom-sets with
function types:

newtype Ran k d a = Ran (forall i. (a -> k i) -> d i)
Looking at this definition, it’s clear that Ran must contain a value of
type a to which the function can be applied, and a natural transfor-
mation between the two functors k and d. For instance, suppose that
k is the tree functor, and d is the list functor, and you were given a
Ran Tree [] String. If you pass it a function:
f :: String -> Tree Int
you’ll get back a list of Int, and so on. The right Kan extension will use
your function to produce a tree and then repackage it into a list. For
instance, you may pass it a parser that generates a parsing tree from a
string, and you’ll get a list that corresponds to the depth-first traversal
of this tree.
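Here's a toy rendering of that scenario (my own example, using the Ran newtype defined above; the "parser" just builds a two-leaf tree, and toList provides the depth-first traversal):

{-# LANGUAGE RankNTypes #-}

data Tree a = Leaf a | Node (Tree a) (Tree a)

toList :: Tree a -> [a]
toList (Leaf a)   = [a]
toList (Node l r) = toList l ++ toList r

-- a Ran Tree [] String value that flattens whatever tree you build from it
ranOf :: String -> Ran Tree [] String
ranOf s = Ran (\f -> toList (f s))

-- passing a String -> Tree Int yields the depth-first [Int]
example :: [Int]
example = case ranOf "abc" of
  Ran r -> r (\s -> Node (Leaf (length s)) (Leaf 0))   -- [3, 0]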
The right Kan extension can be used to calculate the left adjoint of a
given functor by replacing the functor d with the identity functor. This
leads to the left adjoint of a functor k being represented by the set of
polymorphic functions of the type:
forall i. (a -> k i) -> i

In Scala, the identity functor may be written as a type alias:

type Id[I] = I
For instance, the left adjoint to the forgetful functor from monoids is given by polymorphic functions over all monoids:
type Lst a = forall i. Monoid i => (a -> i) -> i
trait Lst[A] {
  type aTo[X] = A => X
  def apply[I: Monoid](fa: aTo[I]): Id[I]
}

A list of as gives rise to such a value (a sketch; foldMap is assumed to combine the mapped elements using the implicit Monoid[I]):

def lst[A](as: List[A]): Lst[A] = new Lst[A] {
  def apply[I: Monoid](fa: aTo[I]): Id[I] =
    foldMap(as)(fa)
}
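A Haskell counterpart (my sketch) makes the free-monoid reading explicit: Lst a and [a] are interconvertible:

{-# LANGUAGE RankNTypes #-}

type Lst a = forall i. Monoid i => (a -> i) -> i

toLst :: [a] -> Lst a
toLst as = \f -> foldMap f as

fromLst :: Lst a -> [a]
fromLst l = l (\a -> [a])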
The formula for the left Kan extension involves a coend, so it translates to an existential quantifier:

data Lan k d a = forall i. Lan (k i -> a) (d i)

Such a data structure packages a function k i -> a together with a container d i of unspecified 𝑖s. To use it to produce an 𝑎, you have to convert the 𝑑 𝑖 to a 𝑘 𝑖, that
is, repack it into the container defined by the functor k using a natural
transformation, and call the function to obtain the a. For instance, if d
is a tree, and k is a list, you can serialize the tree, call the function with
the resulting list, and obtain an a.
The left Kan extension can be used to calculate the right adjoint of
a functor. We know that the right adjoint of the product functor is the
exponential, so let's try to implement it using the Kan extension:

type Exp a b = Lan ((,) a) I b

with I the identity functor. In Scala (using the kind-projector ? syntax):
def toExp[A, B]: (A => B) => Exp[A, B] = f =>
  new Lan[(A, ?), I, B] {
    def fk[L](ki: (A, L)): B =
      f.compose(fst[A])(ki)
  }
def fromExp[A, B]: Exp[A, B] => (A => B) =
lan => a => lan.fk((a, lan.di))
𝐽 , if it exists, is then a functor from 𝐂 to 𝐂:
Lan𝐽 𝐹 𝑎 = ∫^𝑖 𝐂(𝐽 𝑖, 𝑎) × 𝐹 𝑖
In Haskell, this coend translates into an existential data type:

data FreeF f a = forall i. FreeF (i -> a) (f i)

or, in Scala, with the existential index as a type member:

trait FreeF[F[_], A] {
  type I
  def h: I => A
  def fi: F[I]
}
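The Functor instance (a sketch, matching the Haskell declaration above) shows concretely what "recording the function" means:

instance Functor (FreeF f) where
  -- lifting g just composes it with the stored function;
  -- the payload (f i) is never touched
  fmap g (FreeF h fi) = FreeF (g . h) fi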
As you can see, the free functor fakes the lifting of a function by record-
ing both the function and its argument. It accumulates the lifted func-
tions by recording their composition. Functor rules are automatically
satisfied. This construction was used in a paper Freer Monads, More
Extensible Effects1 .
Alternatively, we can use the right Kan extension for the same pur-
pose.
1 http://okmij.org/ftp/Haskell/extensible/more.pdf
28
Enriched Categories
A category is small if its objects form a set. But we know that there
are things larger than sets. Famously, a set of all sets cannot be
formed within the standard set theory (the Zermelo-Fraenkel theory,
optionally augmented with the Axiom of Choice). So a category of all
sets must be large. There are mathematical tricks like Grothendieck uni-
verses that can be used to define collections that go beyond sets. These
tricks let us talk about large categories.
A category is locally small if morphisms between any two objects
form a set. If they don’t form a set, we have to rethink a few definitions.
In particular, what does it mean to compose morphisms if we can’t even
pick them from a set? The solution is to bootstrap ourselves by replacing
hom-sets, which are objects in 𝐒𝐞𝐭, with objects from some other cate-
gory 𝐕. The difference is that, in general, objects don’t have elements, so
we are no longer allowed to talk about individual morphisms. We have
to define all properties of an enriched category in terms of operations
that can be performed on hom-objects as a whole. In order to do that,
the category that provides hom-objects must have additional structure
— it must be a monoidal category. If we call this monoidal category 𝐕,
we can talk about a category 𝐂 enriched over 𝐕.
Besides size reasons, we might be interested in generalizing hom-sets
to something that has more structure than mere sets. For instance, a tra-
ditional category doesn’t have the notion of a distance between objects.
Two objects are either connected by morphisms or not. All objects that
are connected to a given object are its neighbors. Unlike in real life, in
a category a friend of a friend of a friend is as close to me as my bosom
buddy. In a suitably enriched category, we can define distances between
objects.
There is one more very practical reason to get some experience with
enriched categories, and that’s because a very useful online source of
categorical knowledge, the nLab1 , is written mostly in terms of enriched
categories.
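1 https://ncatlab.org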
to a morphism from 𝐂(𝑎, 𝑐). In other words it's a mapping:

𝐂(𝑏, 𝑐) × 𝐂(𝑎, 𝑏) → 𝐂(𝑎, 𝑐)
This is a function between sets — one of them being the Cartesian prod-
uct of two hom-sets. This formula can be easily generalized by replacing
Cartesian product with something more general. A categorical product
would work, but we can go even further and use a completely general
tensor product.
Next come the identity morphisms. Instead of picking individual
elements from hom-sets, we can define them using functions from the
singleton set 𝟏:
𝑗𝑎 ∷ 𝟏 → 𝐂(𝑎, 𝑎)
Again, we could replace the singleton set with the terminal object, but
we can go even further by replacing it with the unit 𝑖 of the tensor
product.
As you can see, objects taken from some monoidal category 𝐕 are
good candidates for hom-set replacement.
28.2 Monoidal Category

A monoidal category defines a tensor product that is a bifunctor:

⊗ ∷𝐕×𝐕→𝐕

We want the tensor product to be associative, but it's enough to satisfy associativity up to natural isomorphism, the associator. Its components are:

𝛼𝑎𝑏𝑐 ∷ (𝑎 ⊗ 𝑏) ⊗ 𝑐 → 𝑎 ⊗ (𝑏 ⊗ 𝑐)
It must be natural in all three arguments.
A monoidal category must also define a special unit object 𝑖 that
serves as the unit of the tensor product; again, up to natural isomor-
phism. The two isomorphisms are called, respectively, the left and the
right unitor, and their components are:
𝜆𝑎 ∷ 𝑖 ⊗ 𝑎 → 𝑎
𝜌𝑎 ∷ 𝑎 ⊗ 𝑖 → 𝑎
The coherence of the associator is expressed by the pentagon identity: the two ways of reassociating ((𝑎 ⊗ 𝑏) ⊗ 𝑐) ⊗ 𝑑 all the way to 𝑎 ⊗ (𝑏 ⊗ (𝑐 ⊗ 𝑑)), one through (𝑎 ⊗ (𝑏 ⊗ 𝑐)) ⊗ 𝑑 and 𝑎 ⊗ ((𝑏 ⊗ 𝑐) ⊗ 𝑑), the other through (𝑎 ⊗ 𝑏) ⊗ (𝑐 ⊗ 𝑑), must coincide. The unitors are related to the associator by the triangle identity:

(id𝑎 ⊗ 𝜆𝑏 ) ∘ 𝛼𝑎𝑖𝑏 = 𝜌𝑎 ⊗ id𝑏

A monoidal category is symmetric if there is a natural isomorphism with components:
𝛾𝑎𝑏 ∷ 𝑎 ⊗ 𝑏 → 𝑏 ⊗ 𝑎
whose “square is one”:

𝛾𝑏𝑎 ∘ 𝛾𝑎𝑏 = id𝑎⊗𝑏

In a closed monoidal category, the functor (− ⊗ 𝑎) has a right adjoint, the internal hom:

𝐕(𝑏 ⊗ 𝑎, 𝑐) ≅ 𝐕(𝑏, [𝑎, 𝑐])
Following G. M. Kelly2 , I’m using the notation [𝑏, 𝑐] for the internal
hom. The counit of this adjunction is the natural transformation whose
components are called evaluation morphisms:
𝜀𝑎𝑏 ∷ ([𝑎, 𝑏] ⊗ 𝑎) → 𝑏
Notice that, if the tensor product is not symmetric, we may define an-
other internal hom, denoted by [[𝑎, 𝑐]], using the following adjunction:

𝐕(𝑎 ⊗ 𝑏, 𝑐) ≅ 𝐕(𝑏, [[𝑎, 𝑐]])
28.3 Enriched Category
A category 𝐂 enriched over a monoidal category 𝐕 replaces hom-sets
with hom-objects. To every pair of objects 𝑎 and 𝑏 in 𝐂 we associate
an object 𝐂(𝑎, 𝑏) in 𝐕. We use the same notation for hom-objects as
we used for hom-sets, with the understanding that they don’t contain
morphisms. On the other hand, 𝐕 is a regular (non-enriched) category
with hom-sets and morphisms. So we are not entirely rid of sets — we
just swept them under the rug.
Since we cannot talk about individual morphisms in 𝐂, composition
of morphisms is replaced by a family of morphisms in 𝐕:
∘ ∷ 𝐂(𝑏, 𝑐) ⊗ 𝐂(𝑎, 𝑏) → 𝐂(𝑎, 𝑐)

Similarly, identity morphisms are replaced by a family of morphisms in 𝐕:

𝑗𝑎 ∷ 𝑖 → 𝐂(𝑎, 𝑎)
Associativity of composition is defined in terms of the associator in 𝐕:
Both paths from (𝐂(𝑐, 𝑑) ⊗ 𝐂(𝑏, 𝑐)) ⊗ 𝐂(𝑎, 𝑏) to 𝐂(𝑎, 𝑑) must agree, whether we compose twice starting on the left, or reassociate with 𝛼 and compose twice starting on the right:

∘ ∘ (∘ ⊗ id) = ∘ ∘ (id ⊗ ∘) ∘ 𝛼

Unit laws are expressed in terms of the unitors:

∘ ∘ (id ⊗ 𝑗𝑎 ) = 𝜌 (as morphisms 𝐂(𝑎, 𝑏) ⊗ 𝑖 → 𝐂(𝑎, 𝑏))
∘ ∘ (𝑗𝑏 ⊗ id) = 𝜆 (as morphisms 𝑖 ⊗ 𝐂(𝑎, 𝑏) → 𝐂(𝑎, 𝑏))
28.4 Preorders
A preorder is defined as a thin category, one in which every hom-set
is either empty or a singleton. We interpret a non-empty set 𝐂(𝑎, 𝑏) as
the proof that 𝑎 is less than or equal to 𝑏. Such a category can be inter-
preted as enriched over a very simple monoidal category that contains
just two objects, 0 and 1 (sometimes called 𝐹 𝑎𝑙𝑠𝑒 and 𝑇 𝑟𝑢𝑒). Besides the
mandatory identity morphisms, this category has a single morphism
going from 0 to 1, let’s call it 0 → 1. A simple monoidal structure can
be established in it, with the tensor product modeling the simple arith-
metic of 0 and 1 (i.e., the only non-zero product is 1 ⊗ 1). The identity
object in this category is 1. This is a strict monoidal category, that is,
the associator and the unitors are identity morphisms.
Since in a preorder the hom-set is either empty or a singleton, we
can easily replace it with a hom-object from our tiny category. The en-
riched preorder 𝐂 has a hom-object 𝐂(𝑎, 𝑏) for any pair of objects 𝑎 and
𝑏. If 𝑎 is less than or equal to 𝑏, this object is 1; otherwise it’s 0.
Let’s have a look at composition. The tensor product of any two
objects is 0, unless both of them are 1, in which case it’s 1. If it’s 0, then
we have two options for the composition morphism: it could be either
id0 or 0 → 1. But if it’s 1, then the only option is id1 . Translating this
back to relations, this says that if 𝑎 ⩽ 𝑏 and 𝑏 ⩽ 𝑐 then 𝑎 ⩽ 𝑐, which is
exactly the transitivity law we need.
What about the identity? It’s a morphism from 1 to 𝐂(𝑎, 𝑎). There is
only one morphism going from 1, and that’s the identity id1 , so 𝐂(𝑎, 𝑎)
must be 1. It means that 𝑎 ⩽ 𝑎, which is the reflexivity law for a pre-
order. So both transitivity and reflexivity are automatically enforced, if
we implement a preorder as an enriched category.
28.5 Metric Spaces
An interesting example is due to William Lawvere3 . He noticed that
metric spaces can be defined using enriched categories. A metric space
defines a distance between any two objects. This distance is a non-
negative real number. It’s convenient to include infinity as a possible
value. If the distance is infinite, there is no way of getting from the
starting object to the target object.
There are some obvious properties that have to be satisfied by dis-
tances. One of them is that the distance from an object to itself must be
zero. The other is the triangle inequality: the direct distance is no larger
than the sum of distances with intermediate stops. We don’t require the
distance to be symmetric, which might seem weird at first but, as Law-
vere explained, you can imagine that in one direction you’re walking
uphill, while in the other you’re going downhill. In any case, symmetry
may be imposed later as an additional constraint.
So how can a metric space be cast into a categorical language? We
have to construct a category in which hom-objects are distances. Mind
you, distances are not morphisms but hom-objects. How can a hom-
object be a number? Only if we can construct a monoidal category 𝐕
in which these numbers are objects. Non-negative real numbers (plus
infinity) form a total order, so they can be treated as a thin category. A
morphism between two such numbers 𝑥 and 𝑦 exists if and only if 𝑥 ⩾ 𝑦
(note: this is the opposite direction to the one traditionally used in the
definition of a preorder). The monoidal structure is given by addition,
with zero serving as the unit object. In other words, the tensor product
of two numbers is their sum.
3 http://www.tac.mta.ca/tac/reprints/articles/1/tr1.pdf
A metric space is a category enriched over such a monoidal cate-
gory. A hom-object 𝐂(𝑎, 𝑏) from object 𝑎 to 𝑏 is a non-negative (possi-
bly infinite) number that we will call the distance from 𝑎 to 𝑏. Let’s see
what we get for identity and composition in such a category.
By our definitions, a morphism from the tensorial unit, which is the
number zero, to a hom-object 𝐂(𝑎, 𝑎) is the relation:
0 ⩾ 𝐂(𝑎, 𝑎)
Since distances are never negative, this relation can be satisfied only if 𝐂(𝑎, 𝑎) = 0: the distance from any object to itself is zero. Composition, being a morphism in 𝐕:

𝐂(𝑏, 𝑐) ⊗ 𝐂(𝑎, 𝑏) → 𝐂(𝑎, 𝑐)

translates, by the definitions of the tensor product and the ordering, to:

𝐂(𝑏, 𝑐) + 𝐂(𝑎, 𝑏) ⩾ 𝐂(𝑎, 𝑐)

This is the triangle inequality.

28.6 Enriched Functors

The definition of a functor involves the mapping of morphisms. In the enriched setting, we don't have the luxury of picking individual morphisms; instead, we deal with hom-objects, which are objects of 𝐕.
We can then use morphisms in 𝐕 to map the hom-objects between two
enriched categories.
An enriched functor 𝐹 between two categories 𝐂 and 𝐃, besides map-
ping objects to objects, also assigns, to every pair of objects in 𝐂, a mor-
phism in 𝐕:
𝐹𝑎𝑏 ∷ 𝐂(𝑎, 𝑏) → 𝐃(𝐹 𝑎, 𝐹 𝑏)
A functor is a structure-preserving mapping. For regular functors it
meant preserving composition and identity. In the enriched setting, the
preservation of composition means that the following diagram com-
mutes:
𝐹𝑎𝑐 ∘ (∘) = (∘) ∘ (𝐹𝑏𝑐 ⊗ 𝐹𝑎𝑏 )

as morphisms 𝐂(𝑏, 𝑐) ⊗ 𝐂(𝑎, 𝑏) → 𝐃(𝐹 𝑎, 𝐹 𝑐). The preservation of identity becomes:

𝐹𝑎𝑎 ∘ 𝑗𝑎 = 𝑗𝐹 𝑎

as morphisms 𝑖 → 𝐃(𝐹 𝑎, 𝐹 𝑎).
28.7 Self Enrichment

A closed symmetric monoidal category may be self-enriched by replacing hom-sets with internal homs. To make this
work, we have to define the composition law for internal homs. In other
words, we have to implement a morphism with the following signature:

∘ ∷ [𝑏, 𝑐] ⊗ [𝑎, 𝑏] → [𝑎, 𝑐]

This is not much different from any other programming task, except
that, in category theory, we usually use point free implementations. We
start by specifying the set whose element it's supposed to be. In this
case, it's a member of the hom-set:

𝐕(([𝑏, 𝑐] ⊗ [𝑎, 𝑏]) ⊗ 𝑎, 𝑐)
I just used the adjunction that defined the internal hom [𝑎, 𝑐]. If we can
build a morphism in this new set, the adjunction will point us at the
morphism in the original set, which we can then use as composition.
We construct this morphism by composing several morphisms that are
at our disposal. To begin with, we can use the associator 𝛼[𝑏,𝑐] [𝑎,𝑏] 𝑎 to
reassociate the expression on the left:
[𝑏, 𝑐] ⊗ ([𝑎, 𝑏] ⊗ 𝑎)

We can then use id[𝑏,𝑐] ⊗ 𝜀𝑎𝑏 to evaluate the inner pair, getting [𝑏, 𝑐] ⊗ 𝑏.
And use the counit 𝜀𝑏𝑐 again to get to 𝑐. We have thus constructed a
morphism:
𝜀𝑏𝑐 . (id[𝑏,𝑐] ⊗ 𝜀𝑎𝑏 ) . 𝛼[𝑏,𝑐][𝑎,𝑏]𝑎
that is an element of the hom-set:

𝐕(([𝑏, 𝑐] ⊗ [𝑎, 𝑏]) ⊗ 𝑎, 𝑐)
The adjunction will give us the composition law we were looking for.
Similarly, the identity:
𝑗𝑎 ∷ 𝑖 → [𝑎, 𝑎]

corresponds, through the adjunction, to an element of the hom-set:

𝐕(𝑖 ⊗ 𝑎, 𝑎)
We know that this hom-set contains the left identity 𝜆𝑎 . We can define
𝑗𝑎 as its image under the adjunction.
A practical example of self-enrichment is the category 𝐒𝐞𝐭 that serves
as the prototype for types in programming languages. We’ve seen be-
fore that it’s a closed monoidal category with respect to Cartesian prod-
uct. In 𝐒𝐞𝐭, the hom-set between any two sets is itself a set, so it’s an
object in 𝐒𝐞𝐭. We know that it’s isomorphic to the exponential set, so
the external and the internal homs are equivalent. Now we also know
that, through self-enrichment, we can use the exponential set as the
hom-object and express composition in terms of Cartesian products of
exponential objects.
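At the programming level (a sketch of mine, with Haskell types standing in for 𝐒𝐞𝐭), the internal-hom composition we just constructed point-free is plain function composition, uncurried:

compose :: (b -> c, a -> b) -> (a -> c)
compose (g, f) = g . f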
28.8 Relation to 𝟐-Categories

I talked about 𝟐-categories in the context of 𝐂𝐚𝐭, the category of categories. The morphisms between categories are functors, but there
is an additional structure: natural transformations between functors.
In a 𝟐-category, the objects are often called zero-cells; morphisms, 1-
cells; and morphisms between morphisms, 2-cells. In 𝐂𝐚𝐭 the 0-cells are
categories, 1-cells are functors, and 2-cells are natural transformations.
But notice that functors between two categories form a category
too; so, in 𝐂𝐚𝐭, we really have a hom-category rather than a hom-set. It
turns out that, just like 𝐒𝐞𝐭 can be treated as a category enriched over
𝐒𝐞𝐭, 𝐂𝐚𝐭 can be treated as a category enriched over 𝐂𝐚𝐭. Even more gen-
erally, just like every category can be treated as enriched over 𝐒𝐞𝐭, every
𝟐-category can be considered enriched over 𝐂𝐚𝐭.
29
Topoi
Set theory is, in a sense, the assembly language of mathematics: it's tied
to a particular architecture. If you want to run your math on different
architectures, you have to use more general tools.
One possibility is to use spaces in place of sets. Spaces come with
more structure, and may be defined without recourse to sets. One thing
usually associated with spaces is topology, which is necessary to define
things like continuity. And the conventional approach to topology is,
you guessed it, through set theory. In particular, the notion of a subset
is central to topology. Not surprisingly, category theorists generalized
this idea to categories other than 𝐒𝐞𝐭. The type of category that has just
the right properties to serve as a replacement for set theory is called a
topos (plural: topoi), and it provides, among other things, a generalized
notion of a subset.
29.1 Subobject Classifier

Let's start by trying to express the idea of a subset using functions rather than elements. Any injective function into 𝑏 defines a subset of 𝑏; two such injections may define the same subset if they differ only by an isomorphism of their
domains. More precisely, we say that two injective functions:
𝑓 ∷𝑎→𝑏
𝑓 ′ ∷ 𝑎′ → 𝑏

are equivalent if there is an invertible function ℎ between their domains:

ℎ ∷ 𝑎 → 𝑎′
such that:
𝑓 = 𝑓′ . ℎ
Such a family of equivalent injections defines a subset of 𝑏.
This definition can be generalized to an arbitrary category if we replace injective functions with monomorphisms. A monomorphism 𝑚 from 𝑎 to 𝑏 is defined by the property that, for any pair of morphisms:

𝑔∷𝑐→𝑎
𝑔′ ∷ 𝑐 → 𝑎
such that:
𝑚 . 𝑔 = 𝑚 . 𝑔′
it must be that 𝑔 = 𝑔 ′ .
In 𝐒𝐞𝐭, a subset of 𝑏 may equivalently be described by its characteristic function 𝜒 ∷ 𝑏 → Ω, where Ω is a two-element set of truth values: 𝜒 answers, for every element of 𝑏, whether it belongs to the subset.
It remains to specify what it means to designate an element of Ω as
“true.” We can use the standard trick: use a function from a singleton
set to Ω. We’ll call this function 𝑡𝑟𝑢𝑒:
𝑡𝑟𝑢𝑒 ∷ 1 → Ω
These definitions can be combined in such a way that they not only
define what a subobject is, but also define the special object Ω without
talking about elements. The idea is that we want the morphism 𝑡𝑟𝑢𝑒 to
represent a “generic” subobject. In 𝐒𝐞𝐭, it picks a single-element subset
from a two-element set Ω. This is as generic as it gets. It’s clearly a
proper subset, because Ω has one more element that’s not in that subset.
In a more general setting, we define 𝑡𝑟𝑢𝑒 to be a monomorphism
from the terminal object to the classifying object Ω. But we have to define
the classifying object. We need a universal property that links this object
to the characteristic function. It turns out that, in 𝐒𝐞𝐭, the pullback of
𝑡𝑟𝑢𝑒 along the characteristic function 𝜒 defines both the subset 𝑎 and
the injective function that embeds it in 𝑏. Here’s the pullback diagram:
[The pullback square: 𝑢𝑛𝑖𝑡 ∷ 𝑎 → 1 and 𝑓 ∷ 𝑎 → 𝑏 on one side, 𝑡𝑟𝑢𝑒 ∷ 1 → Ω and 𝜒 ∷ 𝑏 → Ω on the other.]
Let’s analyze this diagram. The pullback equation is:
𝑡𝑟𝑢𝑒 . 𝑢𝑛𝑖𝑡 = 𝜒 . 𝑓
The pullback of 𝑡𝑟𝑢𝑒 along 𝜒 is an object (here, the set 𝑎)
equipped with two morphisms 𝑓 and 𝑢𝑛𝑖𝑡 (the latter is uniquely deter-
mined by the definition of the terminal object), that make the diagram
commute.
Here we are solving a different system of equations. We are solving
for Ω and 𝑡𝑟𝑢𝑒 while varying both 𝑎 and 𝑏. For a given 𝑎 and 𝑏 there
may or may not be a monomorphism 𝑓 ∷ 𝑎 → 𝑏. But if there is one,
we want it to be a pullback of some 𝜒 . Moreover, we want this 𝜒 to be
uniquely determined by 𝑓 .
We can’t say that there is a one-to-one correspondence between
monomorphisms 𝑓 and characteristic functions 𝜒 , because a pullback
is only unique up to isomorphism. But remember our earlier definition
of a subset as a family of equivalent injections. We can generalize it by
defining a subobject of 𝑏 as a family of equivalent monomorphisms to
𝑏. This family of monomorphisms is in one-to-one correspondence with
the family of equivalent pullbacks of our diagram.
We can thus define a set of subobjects of 𝑏, 𝑆𝑢𝑏(𝑏), as a family of
monomorphisms, and see that it is isomorphic to the set of morphisms
from 𝑏 to Ω:
𝑆𝑢𝑏(𝑏) ≅ 𝐂(𝑏, Ω)
This happens to be a natural isomorphism of two functors. In other
words, 𝑆𝑢𝑏(−) is a representable (contravariant) functor whose repre-
sentation is the object Ω.
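In 𝐒𝐞𝐭 this is easy to see directly. Here's a sketch of mine, with a subset of a finite carrier given as a list of its elements:

type Omega = Bool

true :: () -> Omega
true () = True

-- the isomorphism Sub(b) ≅ Set(b, Omega): a subset becomes
-- its characteristic function
chi :: Eq b => [b] -> (b -> Omega)
chi sub = \x -> x `elem` sub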
29.2 Topos
A topos is a category that:
1. Is cartesian closed: it has all products, the terminal object, and
exponentials (defined as right adjoints to products),
2. Has limits for all finite diagrams,
3. Has a subobject classifier Ω.
This set of properties makes a topos a shoo-in for 𝐒𝐞𝐭 in most appli-
cations. It also has additional properties that follow from its definition.
For instance, a topos has all finite colimits, including the initial object.
It would be tempting to define the subobject classifier as a coproduct
(sum) of two copies of the terminal object (that's what it is in 𝐒𝐞𝐭), but
we want to be more general than that. Topoi in which this is true are
called Boolean.
29.3 Topoi and Logic

In set theory, a characteristic function may be interpreted as defining a property of the elements of a set: a predicate that is either true or false for every element. Classical logic obeys the law of excluded middle, tertium
non datur (there is no third option). But the only way we can know
whether something is true or false is if we can prove or disprove it. A
proof is a process, a computation — and we know that computations
take time and resources. In some cases, they may never terminate. It
doesn’t make sense to claim that a statement is true if we cannot prove
it in finite amount of time. A topos with its more nuanced truth object
provides a more general framework for modeling interesting logics.
29.4 Challenges
1. Show that the function 𝑓 that is the pullback of 𝑡𝑟𝑢𝑒 along the
characteristic function must be injective.
30
Lawvere Theories
a special element of the set. If we want to talk about groups, we add a
unary operator that takes an element and returns its inverse. There are
corresponding left and right inverse laws to go with it. A ring defines
two binary operators plus some more laws. And so on.
The big picture is that an algebra is defined by a set of 𝑛-ary oper-
ations for various values of 𝑛, and a set of equational identities. These
identities are all universally quantified. The associativity equation must
be satisfied for all possible combinations of three elements, and so on.
Incidentally, this eliminates fields from consideration, for the sim-
ple reason that zero (unit with respect to addition) has no inverse with
respect to multiplication. The inverse law for a field can’t be universally
quantified.
This definition of a universal algebra can be extended to categories
other than 𝐒𝐞𝐭, if we replace operations (functions) with morphisms. In-
stead of a set, we select an object 𝑎 (called a generic object). A unary
operation is just an endomorphism of 𝑎. But what about other arities
(arity is the number of arguments for a given operation)? A binary op-
eration (arity 2) can be defined as a morphism from the product 𝑎 × 𝑎
back to 𝑎. A general 𝑛-ary operation is a morphism from the 𝑛th power
of 𝑎 to 𝑎:
𝛼𝑛 ∷ 𝑎𝑛 → 𝑎
A nullary operation is a morphism from the terminal object (the zeroth
power of 𝑎). So all we need in order to define any algebra is a category
whose objects are powers of one special object 𝑎. The specific algebra is
encoded in the hom-sets of this category. This is a Lawvere theory in a
nutshell.
The derivation of Lawvere theories goes through many steps, so
here’s the roadmap:
1. Category of finite sets 𝐅𝐢𝐧𝐒𝐞𝐭.
2. Its skeleton 𝐅.
3. Its opposite 𝐅𝑜𝑝 .
4. Lawvere theory 𝐋: an object in the category 𝐋𝐚𝐰.
5. Model 𝑀 of a Lawvere category: an object in the category
𝐌𝐨𝐝(𝐋𝐚𝐰, 𝐒𝐞𝐭).
30.2 Lawvere Theories

All objects in 𝐅𝐢𝐧𝐒𝐞𝐭 can be
generated from the singleton set using coproducts (treating the empty
set as a special case of a nullary coproduct). For instance, a two-element
set is a sum of two singletons, 2 = 1 + 1, as expressed in Haskell:
type Two = Either () ()
However, even though it’s natural to think that there’s only one empty
set, there may be many distinct singleton sets. In particular, the set 1 + ∅
is different from the set ∅ + 1, and different from 1 — even though they
are all isomorphic. The coproduct in the category of sets is associative
only up to isomorphism, not on the nose. We can remedy that situation by building a category that identifies
all isomorphic sets. Such a category is called a skeleton. In other words,
the backbone of any Lawvere theory is the skeleton 𝐅 of 𝐅𝐢𝐧𝐒𝐞𝐭. The ob-
jects in this category can be identified with natural numbers (including
zero) that correspond to the element count in 𝐅𝐢𝐧𝐒𝐞𝐭. Coproduct plays
the role of addition. Morphisms in 𝐅 correspond to functions between
finite sets. For instance, there is a unique morphism from ∅ to 𝑛 (empty
set being the initial object), no morphisms from 𝑛 to ∅ (except ∅ → ∅),
𝑛 morphisms from 1 to 𝑛 (the injections), one morphism from 𝑛 to 1, and
so on. Here, 𝑛 denotes an object in 𝐅 corresponding to all 𝑛-element sets
in 𝐅𝐢𝐧𝐒𝐞𝐭 that have been identified through isomorphisms.
Using the category 𝐅 we can formally define a Lawvere theory as a
category 𝐋 equipped with a special functor
𝐼𝐋 ∶∶ 𝐅𝑜𝑝 → 𝐋
This functor must be a bijection on objects and it must preserve finite
products (products in 𝐅𝑜𝑝 are the same as coproducts in 𝐅):
𝐼𝐋 (𝑚 × 𝑛) = 𝐼𝐋 𝑚 × 𝐼𝐋 𝑛
You may sometimes see this functor characterized as identity-on-objects,
which means that the objects in 𝐅 and 𝐋 are the same. We will therefore
use the same names for them — we’ll denote them by natural numbers.
Keep in mind though that objects in 𝐅 are not the same as sets (they are
classes of isomorphic sets).
The hom-sets in 𝐋 are, in general, richer than those in 𝐅𝑜𝑝 . They may
contain morphisms other than the ones corresponding to functions in
𝐅𝐢𝐧𝐒𝐞𝐭 (the latter are sometimes called basic product operations). Equa-
tional laws of a Lawvere theory are encoded in those morphisms.
The key observation is that the singleton set 1 in 𝐅 is mapped to
some object that we also call 1 in 𝐋, and all the other objects in 𝐋 are
automatically powers of this object. For instance, the two-element set 2
in 𝐅 is the coproduct 1 + 1, so it must be mapped to a product 1 × 1 (or
12 ) in 𝐋. In this sense, the category 𝐅 behaves like the logarithm of 𝐋.
Among morphisms in 𝐋 we have those transferred by the functor 𝐼𝐋
from 𝐅. They play a structural role in 𝐋. In particular coproduct injec-
tions 𝑖𝑘 become product projections 𝑝𝑘 . A useful intuition is to imagine
the projection:
𝑝𝑘 ∷ 1𝑛 → 1
as the prototype for a function of 𝑛 variables that ignores all but the 𝑘 th
variable. Conversely, constant morphisms 𝑛 → 1 in 𝐅 become diagonal
morphisms 1 → 1𝑛 in 𝐋. They correspond to duplication of variables.
The interesting morphisms in 𝐋 are the ones that define 𝑛-ary op-
erations other than projections. It’s those morphisms that distinguish
one Lawvere theory from another. These are the multiplications, the
additions, the selections of unit elements, and so on, that define the al-
gebra. But to make 𝐋 a full category, we also need compound operations
𝑛 → 𝑚 (or, equivalently, 1𝑛 → 1𝑚 ). Because of the simple structure
of the category, they turn out to be products of simpler morphisms of
the type 𝑛 → 1. This is a generalization of the statement that a func-
tion that returns a product is a product of functions (or, as we’ve seen
earlier, that the hom-functor is continuous).
Lawvere theory 𝐋 is based on 𝐅𝑜𝑝 , from which it inherits the “boring” morphisms that define the
products. It adds the “interesting” morphisms that describe the 𝑛-ary operations (dotted arrows).
At this point it would be very helpful to present a non-trivial exam-
ple of a Lawvere theory, but it would be hard to explain it without first
understanding what models are.
30.3 Models of Lawvere Theories

A model of a Lawvere theory is a product-preserving functor:

𝑀 ∷ 𝐋 → 𝐒𝐞𝐭

Strictly speaking, it's enough for products to be preserved up to isomorphism:

𝑀 (𝑎 × 𝑏) ≅ 𝑀 𝑎 × 𝑀 𝑏

A model is fully determined by its value at the generic object, the set 𝑎 = 𝑀 1. In particular,
binary operations from 𝐋 are mapped to functions:
𝑎×𝑎 →𝑎
As with any functor, it’s possible that multiple morphisms in 𝐋 are col-
lapsed to the same function in 𝐒𝐞𝐭.
Incidentally, the fact that all laws are universally quantified equal-
ities means that every Lawvere theory has a trivial model: a constant
functor mapping all objects to the singleton set, and all morphisms to
the identity function on it.
A general morphism in 𝐋 of the form 𝑚 → 𝑛 is mapped to a func-
tion:
𝑎𝑚 → 𝑎𝑛
If we have two different models, 𝑀 and 𝑁 , a natural transformation
between them is a family of functions indexed by 𝑛:
𝜇𝑛 ∷ 𝑀 𝑛 → 𝑁 𝑛
or, equivalently:
𝜇𝑛 ∷ 𝑎𝑛 → 𝑏 𝑛
where 𝑏 = 𝑁 1.
Notice that the naturality condition guarantees the preservation of
𝑛-ary operations:
𝑁 𝑓 ∘ 𝜇𝑛 = 𝜇 1 ∘ 𝑀 𝑓
where 𝑓 ∷ 𝑛 → 1 is an 𝑛-ary operation in 𝐋.
The functors that define models form a category of models, 𝐌𝐨𝐝(𝐋, 𝐒𝐞𝐭),
with natural transformations as morphisms.
Consider a model for the trivial Lawvere category 𝐅𝑜𝑝 . Such a model
is completely determined by its value at 1, 𝑀 1. Since 𝑀 1 can be any
set, there are as many of these models as there are sets in 𝐒𝐞𝐭. More-
over, every morphism in 𝐌𝐨𝐝(𝐅𝑜𝑝 , 𝐒𝐞𝐭) (a natural transformation be-
tween functors 𝑀 and 𝑁 ) is uniquely determined by its component at
𝑀 1. Conversely, every function 𝑀 1 → 𝑁 1 induces a natural trans-
formation between the two models 𝑀 and 𝑁 . Therefore 𝐌𝐨𝐝(𝐅𝑜𝑝 , 𝐒𝐞𝐭)
is equivalent to 𝐒𝐞𝐭.
30.4 The Theory of Monoids

The simplest nontrivial example of a Lawvere theory describes the structure of monoids. The monoid operations are encoded in the hom-sets 𝐋𝐌𝐨𝐧 (𝑛, 1) of the theory 𝐋𝐌𝐨𝐧 ; in particular, the binary operation comes from 𝐋𝐌𝐨𝐧 (2, 1).
The question is: how many functions of two arguments can one im-
plement using only the monoidal operator. Let’s call the two arguments
𝑎 and 𝑏. There is one function that ignores both arguments and returns
the monoidal unit. Then there are two projections that return 𝑎 and 𝑏,
respectively. They are followed by functions that return 𝑎𝑏, 𝑏𝑎, 𝑎𝑎, 𝑏𝑏,
𝑎𝑎𝑏, and so on… In fact there are as many such functions of two argu-
ments as there are elements in the free monoid with generators 𝑎 and
𝑏. Notice that 𝐋𝐌𝐨𝐧 (2, 1) must contain all those morphisms because one
of the models is the free monoid. In a free monoid they correspond to
distinct functions. Other models may collapse multiple morphisms in
𝐋𝐌𝐨𝐧 (2, 1) down to a single function, but not the free monoid.
If we denote the free monoid with 𝑛 generators 𝑛∗ , we may iden-
tify the hom-set 𝐋(2, 1) with the hom-set 𝐌𝐨𝐧(1∗ , 2∗ ) in 𝐌𝐨𝐧, the cate-
gory of monoids. In general, we pick 𝐋𝐌𝐨𝐧 (𝑚, 𝑛) to be 𝐌𝐨𝐧(𝑛∗ , 𝑚∗ ). In
other words, the category 𝐋𝐌𝐨𝐧 is the opposite of the category of free
monoids.
The category of models of the Lawvere theory for monoids,
𝐌𝐨𝐝(𝐋𝐌𝐨𝐧 , 𝐒𝐞𝐭), is equivalent to the category of all monoids, 𝐌𝐨𝐧.
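Concretely (a sketch of mine), a model with carrier a interprets the generic binary and nullary operations of the theory as the methods of a Monoid instance:

evalOp2 :: Monoid a => (a, a) -> a
evalOp2 (x, y) = x <> y

evalOp0 :: Monoid a => () -> a
evalOp0 () = mempty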
30.5 Lawvere Theories and Monads

The category of models 𝐌𝐨𝐝(𝐋, 𝐒𝐞𝐭) comes equipped with a forgetful functor 𝑈 that maps a model 𝑀 to its underlying set, 𝑀 1.
Another way of deriving 𝑈 is by exploiting the fact that 𝐅𝑜𝑝 is the
initial object in 𝐋𝐚𝐰. It means that, for any Lawvere theory 𝐋, there is a
unique functor 𝐅𝑜𝑝 → 𝐋. This functor induces the opposite functor on
models (since models are functors from theories to sets):
𝐌𝐨𝐝(𝐋, 𝐒𝐞𝐭) → 𝐌𝐨𝐝(𝐅𝑜𝑝 , 𝐒𝐞𝐭)
But, as we discussed, the category of models of 𝐅𝑜𝑝 is equivalent to 𝐒𝐞𝐭,
so we get the forgetful functor:
𝑈 ∷ 𝐌𝐨𝐝(𝐋, 𝐒𝐞𝐭) → 𝐒𝐞𝐭
It can be shown that so defined 𝑈 always has a left adjoint, the free
functor 𝐹 .
This is easily seen for finite sets. The free functor 𝐹 produces free
algebras. A free algebra is a particular model in 𝐌𝐨𝐝(𝐋, 𝐒𝐞𝐭) that is gen-
erated from a finite set of generators 𝑛. We can implement 𝐹 as the
representable functor:
𝐋(𝑛, −) ∷ 𝐋 → 𝐒𝐞𝐭
To show that it’s indeed free, all we have to do is to prove that it’s a left
adjoint to the forgetful functor:
𝐌𝐨𝐝(𝐋(𝑛, −), 𝑀) ≅ 𝐒𝐞𝐭(𝑛, 𝑈 (𝑀))
Let’s simplify the right hand side:
𝐒𝐞𝐭(𝑛, 𝑈 (𝑀)) ≅ 𝐒𝐞𝐭(𝑛, 𝑀 1) ≅ (𝑀 1)𝑛 ≅ 𝑀 𝑛
(I used the fact that a set of morphisms is isomorphic to the exponential
which, in this case, is just the iterated product.) The adjunction is the
result of the Yoneda lemma:
[𝐋, 𝐒𝐞𝐭](𝐋(𝑛, −), 𝑀) ≅ 𝑀 𝑛
Together, the forgetful and the free functor define a monad 𝑇 = 𝑈 ∘ 𝐹
on 𝐒𝐞𝐭. Thus every Lawvere theory generates a monad.
It turns out that the category of algebras for this monad is equivalent
to the category of models.
You may recall that monad algebras define ways to evaluate expres-
sions that are formed using monads. A Lawvere theory defines n-ary
operations that can be used to generate expressions. Models provide
means to evaluate these expressions.
The connection between monads and Lawvere theories doesn’t go
both ways, though. Only finitary monads lead to Lawvere theories. A
finitary monad is based on a finitary functor. A finitary functor on 𝐒𝐞𝐭
is fully determined by its action on finite sets. Its action on an arbitrary
set 𝑎 can be evaluated using the following coend:
𝐹 𝑎 = ∫^𝑛 𝑎𝑛 × (𝐹 𝑛)
Conversely, given any finitary monad 𝑇 on 𝐒𝐞𝐭, we can construct a Law-
vere theory. We start by constructing a Kleisli category for 𝑇 . As you
may remember, a morphism in a Kleisli category from 𝑎 to 𝑏 is given by
a morphism in the underlying category:
𝑎→𝑇 𝑏
Restricted to finite sets (objects 𝑚 and 𝑛), such a morphism becomes:

𝑚→𝑇 𝑛
The category opposite to this Kleisli category, 𝐊𝐥𝑇^𝑜𝑝 , restricted to finite
sets, is the Lawvere theory in question. In particular, the hom-set 𝐋(𝑛, 1)
that describes n-ary operations in 𝐋 is given by the hom-set 𝐊𝐥𝑇 (1, 𝑛).
It turns out that most monads that we encounter in programming
are finitary, with the notable exception of the continuation monad. It is
possible to extend the notion of Lawvere theory beyond finitary opera-
tions.
30.6 Monads as Coends

Let's look at the coend formula in more detail. The monad generated by a Lawvere theory 𝐋 can be written as:

𝑇 𝑎 = ∫^𝑛 𝑎𝑛 × 𝐋(𝑛, 1)

The coend is taken over the profunctor:

𝑃 𝑛 𝑚 = 𝑎𝑛 × 𝐋(𝑚, 1)

This profunctor is contravariant in the first argument, 𝑛; the contravariance comes from the power 𝑎𝑛 . Consider a function
𝑓 ∷ 𝑚 → 𝑛. Such a mapping describes a selection of 𝑚 elements from
an 𝑛-element set (repetitions are allowed). It can be lifted to the mapping
of powers of 𝑎, namely (notice the direction):
𝑎𝑛 → 𝑎𝑚
For instance, the lifting of the unique function 𝑚 → 1 is the diagonal map 𝑎 → 𝑎𝑚 :

𝜆𝑥 → (𝑥, 𝑥, … , 𝑥) (𝑚 components)
You might notice that it’s not immediately obvious that the profunc-
tor in question is covariant in the second argument. The hom-functor
𝐋(𝑚, 1) is actually contravariant in 𝑚. However, we are taking the co-
end not in the category 𝐋 but in the category 𝐅. The coend variable 𝑛
goes over finite sets (or the skeletons of such). The category 𝐋 contains
the opposite of 𝐅, so a morphism 𝑚 → 𝑛 in 𝐅 is a member of 𝐋(𝑛, 𝑚) in
𝐋 (the embedding is given by the functor 𝐼𝐋 ).
Let’s check the functoriality of 𝐋(𝑚, 1) as a functor from 𝐅 to 𝐒𝐞𝐭.
We want to lift a function 𝑓 ∷ 𝑚 → 𝑛, so our goal is to implement a
function from 𝐋(𝑚, 1) to 𝐋(𝑛, 1). Corresponding to the function 𝑓 there
is a morphism in 𝐋 from 𝑛 to 𝑚 (notice the direction). Precomposing
this morphism with 𝐋(𝑚, 1) gives us a subset of 𝐋(𝑛, 1).
In the coend, elements related by the liftings are identified. Given an element of 𝑎𝑛 × 𝐋(𝑚, 1) and a morphism 𝑓 ∷ 𝑚 → 𝑛, we can lift 𝑓 in either the
first or the second component of the product. The two results are then
identified.
𝑎𝑛 × 𝐋(𝑚, 1)

can be lifted with ⟨𝑓 , id⟩ to 𝑎𝑚 × 𝐋(𝑚, 1), or with ⟨id, 𝑓 ⟩ to 𝑎𝑛 × 𝐋(𝑛, 1); the coend imposes:

𝑎𝑚 × 𝐋(𝑚, 1) ∼ 𝑎𝑛 × 𝐋(𝑛, 1)

for any 𝑓 ∷ 𝑚 → 𝑛.
Let’s see how this works in the simplest case of the Lawvere the-
ory, the 𝐅𝑜𝑝 itself. In such a theory, every 𝐋(𝑛, 1) can be reached from
𝐋(1, 1). This is because 𝐋(1, 1) is a singleton containing just the iden-
tity morphism, and 𝐋(𝑛, 1) only contains morphisms corresponding to
injections 1 → 𝑛 in 𝐅, which are basic morphisms. Therefore all the
addends in the coproduct are equivalent and we get:
𝑇 𝑎 = 𝑎 × 𝐋(1, 1) = 𝑎
30.7 Lawvere Theory of Side Effects

Since there is such a strong connection between Lawvere theories and monads, it's natural to ask what theories correspond to the monads that model side effects. The simplest example is the theory of exceptions, which extends 𝐅𝑜𝑝 with a single nullary operation:

raise :: () -> a
We can recover the Maybe monad using the coend formula. Let’s consider
what the addition of the nullary operation does to the hom-sets 𝐋(𝑛, 1).
Besides creating a new 𝐋(0, 1) (which is absent from 𝐅𝑜𝑝 ), it also adds
new morphisms to 𝐋(𝑛, 1). These are the results of composing morphism
of the type 𝑛 → 0 with our 0 → 1. Such contributions are all identified
with 𝑎0 × 𝐋(0, 1) in the coend formula, because they can be obtained
from:
𝑎𝑛 × 𝐋(0, 1)
by lifting 0 → 𝑛 in two different ways.
𝑎𝑛 × 𝐋(0, 1)

can be lifted with ⟨𝑓 , id⟩ to 𝑎0 × 𝐋(0, 1), or with ⟨id, 𝑓 ⟩ to 𝑎𝑛 × 𝐋(𝑛, 1), for any 𝑓 ∷ 0 → 𝑛; the two results are identified:

𝑎0 × 𝐋(0, 1) ∼ 𝑎𝑛 × 𝐋(𝑛, 1)
The coend formula thus reduces to two terms:

𝑇 𝑎 = 𝑎0 + 𝑎1 = 1 + 𝑎

which is just Maybe a. In Scala:
type Option[A] = Either[Unit, A]
Notice that this Lawvere theory only supports the raising of exceptions,
not their handling.
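In the Maybe model, the sole generator is interpreted in the obvious way (a sketch):

raise :: () -> Maybe a
raise () = Nothing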
30.8 Challenges
1. Enumerate all morphisms between 2 and 3 in 𝐅 (the skeleton of
𝐅𝐢𝐧𝐒𝐞𝐭).
2. Show that the category of models for the Lawvere theory of monoids
is equivalent to the category of monad algebras for the list monad.
3. The Lawvere theory of monoids generates the list monad. Show
that its binary operations can be generated using the correspond-
ing Kleisli arrows.
4. 𝐅𝐢𝐧𝐒𝐞𝐭 is a subcategory of 𝐒𝐞𝐭 and there is a functor that embeds
it in 𝐒𝐞𝐭. Any functor on 𝐒𝐞𝐭 can be restricted to 𝐅𝐢𝐧𝐒𝐞𝐭. Show that
a finitary functor is the left Kan extension of its own restriction.
30.9 Further Reading
1. Functorial Semantics of Algebraic Theories1 , F. William Lawvere
2. Notions of computation determine monads2 , Gordon Plotkin and
John Power
1 http://www.tac.mta.ca/tac/reprints/articles/5/tr5.pdf
2 http://homepages.inf.ed.ac.uk/gdp/publications/Comp_Eff_Monads.pdf
31
Monads, Monoids, and Categories
31.1 Bicategories
One of the most difficult aspects of category theory is the constant
switching of perspectives. Take the category of sets, for instance. We
are used to defining sets in terms of elements. An empty set has no el-
ements. A singleton set has one element. A Cartesian product of two
sets is a set of pairs, and so on. But when talking about the category
𝐒𝐞𝐭 I asked you to forget about the contents of sets and instead con-
centrate on morphisms (arrows) between them. You were allowed, from
time to time, to peek under the covers to see what a particular universal
construction in 𝐒𝐞𝐭 described in terms of elements. The terminal object
turned out to be a set with one element, and so on. But these were just
sanity checks.
A functor is defined as a mapping of categories. It’s natural to con-
sider a mapping as a morphism in a category. A functor turned out to be
a morphism in the category of categories (small categories, if we want
to avoid questions about size). By treating a functor as an arrow, we for-
feit the information about its action on the internals of a category (its
objects and morphisms), just like we forfeit the information about the
action of a function on elements of a set when we treat it as an arrow
in 𝐒𝐞𝐭. But functors between any two categories also form a category.
This time you are asked to consider something that was an arrow in one
category to be an object in another. In a functor category functors are
objects and natural transformations are morphisms. We have discovered
that the same thing can be an arrow in one category and an object in
another. The naive view of objects as nouns and arrows as verbs doesn’t
hold.
Instead of switching between two views, we can try to merge them
into one. This is how we get the concept of a 𝟐-category, in which ob-
jects are called 0-cells, morphisms are 1-cells, and morphisms between
morphisms are 2-cells.
Let’s see what this means in our canonical example of a 𝟐-category
𝐂𝐚𝐭. The hom-category 𝐂𝐚𝐭(𝑎, 𝑎) is the category of endofunctors on 𝑎.
Endofunctor composition plays the role of a tensor product in it. The
identity functor is the unit with respect to this product. We’ve seen be-
fore that endofunctors form a monoidal category (we used this fact in
the definition of a monad), but now we see that this is a more general
phenomenon: endo-1-cells in any 𝟐-category form a monoidal category.
We’ll come back to it later when we generalize monads.
You might recall that, in a general monoidal category, we did not in-
sist on the monoid laws being satisfied on the nose. It was often enough
for the unit laws and the associativity laws to be satisfied up to isomor-
phism. In a 𝟐-category, monoidal laws in 𝐂(𝑎, 𝑎) follow from composi-
tion laws for 1-cells. These laws are strict, so we will always get a strict
monoidal category. It is, however, possible to relax these laws as well.
We can say, for instance, that a composition of the identity 1-cell id𝑎
with another 1-cell, 𝑓 ∷ 𝑎 → 𝑏, is isomorphic, rather than equal, to 𝑓 .
Isomorphism of 1-cells is defined using 2-cells. In other words, there is
a 2-cell:
𝜌 ∷ 𝑓 ∘ id𝑎 → 𝑓
that has an inverse.
We can do the same for the left identity and associativity laws. This kind
of relaxed 𝟐-category is called a bicategory (there are some additional
coherency laws, which I will omit here).
As expected, endo-1-cells in a bicategory form a general monoidal
category with non-strict laws.
An interesting example of a bicategory is the category of spans. A
span between two objects 𝑎 and 𝑏 is an object 𝑥 and a pair of morphisms:
𝑓 ∷𝑥→𝑎
𝑔∷𝑥→𝑏
The composition of two spans, one from 𝑎 to 𝑏 with apex 𝑥 and legs 𝑓 and 𝑔, and another from 𝑏 to 𝑐 with apex 𝑦 and legs 𝑓 ′ and 𝑔 ′ , would be a third span, with some apex 𝑧. The most
natural choice for it is the pullback of 𝑔 along 𝑓 ′ . Remember that a
pullback is the object 𝑧 together with two morphisms:
ℎ∷𝑧→𝑥
ℎ′ ∷ 𝑧 → 𝑦
such that:
𝑔 ∘ ℎ = 𝑓 ′ ∘ ℎ′
which is universal among all such objects.
For now, let’s concentrate on spans over the category of sets. In that
case, the pullback is just a set of pairs (𝑝, 𝑞) from the Cartesian product
𝑥 × 𝑦 such that:
𝑔 𝑝 = 𝑓′ 𝑞
A morphism between two spans that share the same endpoints is de-
fined as a morphism ℎ between their apices, such that the appropriate
triangles commute.
A 2-cell in 𝐒𝐩𝐚𝐧.
To summarize, in the bicategory 𝐒𝐩𝐚𝐧: 0-cells are sets, 1-cells are spans,
2-cells are span morphisms. An identity 1-cell is a degenerate span in
which all three objects are the same, and the two morphisms are iden-
tities.
We’ve seen another example of a bicategory before: the bicategory
𝐏𝐫𝐨𝐟 of profunctors, where 0-cells are categories, 1-cells are profunctors,
and 2-cells are natural transformations. The composition of profunctors
was given by a coend.
31.2 Monads
By now you should be pretty familiar with the definition of a monad
as a monoid in the category of endofunctors. Let’s revisit this defini-
tion with the new understanding that the category of endofunctors is
just one small hom-category of endo-1-cells in the bicategory 𝐂𝐚𝐭. We
know it’s a monoidal category: the tensor product comes from the com-
position of endofunctors. A monoid is defined as an object in a monoidal
category — here it will be an endofunctor 𝑇 — together with two mor-
phisms. Morphisms between endofunctors are natural transformations.
One morphism maps the monoidal unit — the identity endofunctor —
to 𝑇 :
𝜂∷𝐼 →𝑇
The second morphism maps the tensor product of 𝑇 ⊗𝑇 to 𝑇 . The tensor
product is given by endofunctor composition, so we get:
𝜇 ∷𝑇 ∘𝑇 →𝑇

Notice that nothing in this definition depends on the specifics of 𝐂𝐚𝐭. In an arbitrary bicategory, we can pick a 0-cell 𝑎 and an endo-1-cell 𝑇 ∷ 𝑎 → 𝑎, and define a monad on it as a pair of 2-cells satisfying the monoid laws:

𝜂∷𝐼 →𝑇
𝜇 ∷𝑇 ∘𝑇 →𝑇
That’s a much more general definition of a monad using only 0-cells,
1-cells, and 2-cells. It reduces to the usual monad when applied to the
bicategory 𝐂𝐚𝐭. But let’s see what happens in other bicategories.
Let’s construct a monad in 𝐒𝐩𝐚𝐧. We pick a 0-cell, which is a set that,
for reasons that will become clear soon, I will call 𝑂𝑏. Next, we pick an
endo-1-cell: a span from 𝑂𝑏 back to 𝑂𝑏. It has a set at the apex, which
I will call 𝐴𝑟, equipped with two functions:
𝑑𝑜𝑚 ∷ 𝐴𝑟 → 𝑂𝑏
𝑐𝑜𝑑 ∷ 𝐴𝑟 → 𝑂𝑏
Let’s call the elements of the set 𝐴𝑟 “arrows.” If I also tell you to call the
elements of 𝑂𝑏 “objects,” you might get a hint where this is leading to.
The two functions 𝑑𝑜𝑚 and 𝑐𝑜𝑑 assign the domain and the codomain
to an “arrow.”
To make our span into a monad, we need two 2-cells, 𝜂 and 𝜇. The
monoidal unit, in this case, is the trivial span from 𝑂𝑏 to 𝑂𝑏 with the
apex at 𝑂𝑏 and two identity functions. The 2-cell 𝜂 is a function between
the apices 𝑂𝑏 and 𝐴𝑟. In other words, 𝜂 assigns an “arrow” to every
“object.” A 2-cell in 𝐒𝐩𝐚𝐧 must satisfy commutation conditions — in this
case:
𝑑𝑜𝑚 ∘ 𝜂 = id
𝑐𝑜𝑑 ∘ 𝜂 = id
In components, this becomes:
𝑑𝑜𝑚 (𝜂 𝑜𝑏) = 𝑜𝑏 = 𝑐𝑜𝑑 (𝜂 𝑜𝑏)
In other words, 𝜂 assigns to every “object” its identity “arrow.” The
second 2-cell, 𝜇, acts on the composition of the span 𝐴𝑟 with itself.
That composition is a pullback, so its elements are pairs of “arrows”
(𝑎₁, 𝑎₂) satisfying 𝑐𝑜𝑑 𝑎₁ = 𝑑𝑜𝑚 𝑎₂ — composable pairs. The 2-cell 𝜇
maps such a pair to a single “arrow” 𝑎₃ with 𝑑𝑜𝑚 𝑎₃ = 𝑑𝑜𝑚 𝑎₁ and
𝑐𝑜𝑑 𝑎₃ = 𝑐𝑜𝑑 𝑎₂. In other words, 𝜇 defines the composition of arrows,
and the monad laws for 𝜂 and 𝜇 are exactly the identity and associativity
laws for this composition.
So, all told, a category is just a monad in the bicategory of spans.
What is amazing about this result is that it puts categories on the
same footing as other algebraic structures like monads and monoids.
There is nothing special about being a category. It’s just two sets and
four functions. In fact, we don’t even need a separate set for objects,
because objects can be identified with identity arrows (they are in one-
to-one correspondence). So it’s really just a set and a few functions.
Considering the pivotal role that category theory plays in all of mathe-
matics, this is a very humbling realization.
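Here is that data — two sets and four functions — packaged as a Scala sketch. The names are my own, and the laws live in comments rather than in the types:

// The data of a monad in 𝐒𝐩𝐚𝐧, i.e. a small category.
final case class SmallCategory[Ob, Ar](
  dom:  Ar => Ob,        // source of an arrow
  cod:  Ar => Ob,        // target of an arrow
  id:   Ob => Ar,        // 𝜂: the identity arrow on an object
  comp: (Ar, Ar) => Ar   // 𝜇: composition of a composable pair
)
// Laws (from the monad laws in 𝐒𝐩𝐚𝐧):
//   dom(id(o)) == o == cod(id(o))
//   comp(a1, a2) is only defined when cod(a1) == dom(a2)
//   comp(id(dom(a)), a) == a == comp(a, id(cod(a)))
//   comp(comp(a1, a2), a3) == comp(a1, comp(a2, a3))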
31.3 Challenges
1. Derive unit and associativity laws for the tensor product defined
as composition of endo-1-cells in a bicategory.
2. Check that monad laws for a monad in 𝐒𝐩𝐚𝐧 correspond to iden-
tity and associativity laws in the resulting category.
3. Show that a monad in 𝐏𝐫𝐨𝐟 is an identity-on-objects functor.
4. What’s a monad algebra for a monad in 𝐒𝐩𝐚𝐧?
31.4 Bibliography
1. Paweł Sobociński’s blog: https://graphicallinearalgebra.net/2017/04/16/a-monoid-is-a-category-a-category-is-a-monad-a-monad-is-a-monoid/
Index
fixed point, 423
forgetful functor, 250, 321
free category, 30
free functor, 321
free monoid, 246
function application, 162
function composition, 45
functorial, 134
hom-set, 31
homomorphism, 425
homomorphisms, 248
horizontal composition, 204
identity, 6
initial algebra, 425
injective, 78, 283
instance, 115
internal, 161
isomorphism, 300, 327
Lawvere theory, 527
left adjoint, 303
lifted, 293
linear order, 31
objects, 2
one way, 303
one-to-one, 78
onto, 78
operational semantics, 19
opposite category, 61
parametric polymorphism, 150, 186
partial order, 31
point-free, 35
points, 35
poset, 58
predicate, 522
predicates, 26
preorder, 31
profunctor, 155
proof-relevant relation, 458
pure functions, 22
representable, 260, 308
representable presheaf, 228
representation, 260
rig, 99
right adjoint, 318
ring, 99
total, 78
total order, 31
type inference, 14
up to isomorphism, 58
variant, 74
Acknowledgments
I’d like to thank Edward Kmett and Gershom Bazerman for checking
my math and logic, and André van Meulebrouck, who has been volun-
teering his editing help throughout this series of posts.
I’d like to thank Andrew Sutton for rewriting my C++ monoid concept
code according to his and Bjarne Stroustrup’s latest proposal.
I’m grateful to Eric Niebler for reading the draft and providing the clever
implementation of compose that uses advanced features of C++14 to
drive type inference. I was able to cut the whole section of old-fashioned
template magic that did the same thing using type traits. Good
riddance! I’m also grateful to Gershom Bazerman for useful comments
that helped me clarify some important points.
Colophon
This edition was produced from the original blog posts with the following tools and sources:
https://hmemcpy.com
https://mercury.postlight.com/web-parser/
https://www.crummy.com/software/BeautifulSoup/
https://pandoc.org/
https://github.com/hugomaiavieira/pygments-style-github
https://github.com/typelevel/CT_from_Programmers.scala
Copyleft notice
This book is free as in free software: https://www.gnu.org/philosophy/free-sw.en.html