Functional Programming Strategies in Scala with Cats

Noel Welsh

DRAFT, June 2024
Portions of this work are based on Scala with Cats, by Dave Pereira‐Gurnell
and Noel Welsh. Scala with Cats is licensed under CC BY‐SA 3.0.
Preface
Some twenty years ago I started my first job in the UK. This job involved a
commute by train, giving me about an hour a day to read without distraction.
Around about the same time I first heard about Structure and Interpretation
of Computer Programs, referred to as the “wizard book” and spoken of in
reverential terms. It sounded like just the thing for a recent graduate
looking to become a better developer. I purchased a copy and spent the
journey reading it, doing most of the exercises in my head. Structure and
Interpretation of Computer Programs was already an old book at this time, and
its programming style was archaic. However its core concepts were timeless
and it’s fair to say it absolutely blew my mind, putting me on a path I’m still on
today.
Another notable stop on this path occurred some ten years ago when Dave and
I started writing Scala with Cats. In Scala with Cats we attempted to explain the
core type classes found in the Cats library, and their use in building software.
I’m proud of the book we wrote together, but time and experience showed
that type classes are only a small piece of the puzzle of building software in a
functional programming style. We needed a much wider scope if we were to
show people how to effectively build software with all the tools that functional
programming provides. Still, writing a book is a lot of work, and we were busy
with other projects, so Scala with Cats remained largely untouched for many
years.
Around 2020 I got the itch to return to Scala with Cats. My initial plan was
simply to update the book for Scala 3. Dave was busy with other projects so
I decided to go it alone. As the writing got underway I realized I really wanted
to cover the additional topics I thought were missing. If Scala with Cats was
a good book, I wanted to aim to write a great book; one that would contain
almost everything I had learned about building software. The title Scala with
Cats no longer fit the content, and hence I adopted a new name for what is
largely a new book. The result, Functional Programming Strategies in Scala with
Cats, is what you are reading now. I hope you find it useful, and I hope that
just maybe some young developer will find this book inspiring the same way
I found Structure and Interpretation of Computer Programs inspiring all those
years ago.
The aims of this book are two‐fold: to introduce monads, functors, and other
functional programming patterns as a way to structure program design, and to
explain how these concepts are implemented in Cats.
In this book we aim to show the concepts in a number of different ways, to help
you build a mental model of how they work and where they are appropriate.
We have extended case studies, a simple graphical notation, many smaller
examples, and of course the mathematical definitions. Between them we hope
you’ll find something that works for you.
Versions
This book is written for Scala 3.3.4 and Cats 2.10.0. Here is a minimal
build.sbt containing the relevant dependencies and settings:

scalaVersion := "3.3.4"

libraryDependencies += "org.typelevel" %% "cats-core" % "2.10.0"
Template Projects

Working through the examples is easiest if you use a template project. You
can create one using sbt:

sbt new underscoreio/cats-seed.g8

This will generate a sandbox project with Cats as a dependency. See the
generated README.md for instructions on how to run the sample code and/or
start an interactive Scala console.

If you prefer a more batteries-included starting point, you can use Typelevel's
sbt-catalysts template:

sbt new typelevel/sbt-catalysts.g8

This will generate a project with a suite of library dependencies and compiler
plugins, together with templates for unit tests and documentation. See the
project pages for catalysts and sbt-catalysts for more information.
Typographical Conventions

This book contains a lot of technical information and program code. We use
the following typographical conventions to reduce ambiguity and highlight
important concepts:

New terms and phrases are introduced in italics. After their initial introduction
they are written in normal roman font.
Terms from program code, filenames, and file contents, are written in monospace
font. Note that we do not distinguish between singular and plural forms. For
example, we might write String or Strings to refer to java.lang.String.
Source Code
Most code passes through mdoc to ensure it compiles. mdoc uses the Scala
console behind the scenes, so we sometimes show console‐style output as
comments:
"Hello Cats!".toUpperCase
// res0: String = "HELLO CATS!"
License
This work is licensed under CC BY‐SA 4.0. To view a copy of this license, visit
http://creativecommons.org/licenses/by‐sa/4.0/
Portions of this work are based on Scala with Cats by Dave Pereira‐Gurnell and
Noel Welsh, which is licensed under CC BY‐SA 3.0.
Chapter 1

Functional Programming Strategies
A lot of effort has been spent on expanding those who are taught programming.
However the actual programming—the bit that produces the code that is the
whole point of the endeavour—is still largely treated as magic. It doesn’t have
to be that way.
Functional programmers love fancy words for simple ideas, so it’s no surprise
I’m drawn to metacognitive programming strategies. Let’s unpack that phrase
to see what it means. Metacognition means thinking about thinking. A lot of
research has shown the benefits of metacognition in learning, and that it is an
important part of developing expertise. Metacognition is not just one thing—
it’s not sufficient to just tell someone to think about their thinking. Rather we
should expect metacognition to be a collection of different strategies, some
of which are general and some of which are domain specific. From this we
get the idea of metacognitive programming strategies—explicitly naming and
describing different thinking strategies that proficient programmers use.
Once we have named a useful abstraction we can get the programmer to ask
themselves “would this
abstraction solve this problem?” This is essentially what the design patterns
community did, also back in the nineties, but there is an important difference.
The academic FP community strongly values formal models, which means that
the building blocks of FP have a precision that design patterns lack. However
there is more to process than categorizing the output. There is also the actual
process of how the code comes to be. Code doesn’t usually spring fully
formed from our keyboard, and in the iterative refinement of code we also
find structure. Here the academic FP community has less to say, but there is
a strong folklore of techniques such as “type driven development”.
Over the last ten or so years of programming and teaching programming I’ve
collected a wide range of strategies. Some come from others (for example,
How to Design Programs and its many offshoots remain very influential for
me) and some I’ve found myself. Ultimately I don’t think anything here is new;
rather my contribution is in collecting and presenting these strategies as one
coherent whole.
Let’s start thinking about thinking about programming, with a model that
describes three different levels that we can use to think about code. The levels,
from highest to lowest, are paradigm, theory, and craft. Each level provides
guidance for the ones below.
The paradigm level is the most general, providing the broad principles that
guide our thinking. There are usually many possible solutions for any
programming problem, and we can use the principles in the paradigm to decide
which approach to take. For example, if we’re a functional
programmer we can consider how easily we can reason about a particular
implementation, or how composable it is. Without the paradigm we have no
basis for making a choice.
The theory level translates the broad principles of the paradigm to specific,
well-defined techniques that apply to many languages within the paradigm. We are
still, however, at a level above the code. Design patterns are an example in
the object‐oriented world. Algebraic data types are an example in functional
programming. Most languages that are in the functional programming
paradigm, such as Haskell and OCaml, support algebraic data types, as do
many languages that straddle multiple paradigms, such as Rust, Scala, and
Swift.
At the craft level we get to actual code, and the language specific nuance that
goes into it. An example in Scala is the implementation of algebraic data types
in terms of sealed trait and final case class in Scala 2, or enum in Scala 3.
There are many concerns at this level that are important for writing idiomatic
code, such as placing constructors on companion objects in Scala, that are not
relevant at the higher levels.
In the next section I’ll describe the functional programming paradigm. The
remainder of this book is primarily concerned with theory and craft. The
theory is language agnostic but the craft is firmly in the world of Scala. Before
we move on to the functional programming paradigm there are two points I
want to emphasize:
2. The three level organization is just a tool for thought. In the real world
it is more complicated.
Composition means building big things out of smaller things. Numbers are
compositional. We can take any number and add one, giving us a new number.
Lego is also compositional. We compose Lego by sticking it together. In the
particular sense we’re using composition we also require the original elements
we combine don’t change in any way when they are composed. When we
create 2 by adding 1 and 1 we get a new result that doesn’t change what 1
means.
1.2.1.1 Types
Types are not strictly part of functional programming but statically typed FP is
the most popular form of FP and sufficiently important to warrant a mention.
Types help compilers generate efficient code but types in FP are as much
for the programmer as they are the compiler. Types express properties of
programs, and the type checker automatically ensures that these properties
hold. They can tell us, for example, what a function accepts and what it returns,
or that a value is optional. We can also use types to express our beliefs about
a program and the type checker will tell us if those beliefs are correct. For
example, we can use types to tell the compiler we do not expect an error at a
particular point in our code and the type checker will let us know if this is the
case. In this way types are another tool for reasoning about code.
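For example, here is one way to sum the elements of a list using internal
mutation. (This is a sketch consistent with the footnote below, which refers to
sum and total.)

def sum(ints: List[Int]): Int = {
  // total is mutated internally, but the mutation
  // cannot be observed outside sum
  var total = 0
  ints.foreach(x => total = total + x)
  total
}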
In this case we can reason about our code despite the mutation, but the
Scala compiler cannot determine that this is ok. Scala allows mutation but it’s
up to us to use it appropriately. A more expressive type system, perhaps with
features like Rust’s, would be able to tell that sum doesn’t allow mutation to
be observed by other parts of the system¹. Another approach, which is the
one taken by Haskell, is to disallow all mutation and thus guarantee it cannot
cause problems.

¹The example I gave is fairly simple. A compiler that used escape analysis could recognize
that no reference to total is possible outside sum and hence sum is pure (or referentially
transparent). Escape analysis is a well studied technique. In the general case the problem
is a lot harder. We’d often like to know that a value is only referenced once at various points
in our program, and hence we can mutate that value without changes being observable in
other parts of the program. This might be used, for example, to pass an accumulator through
various processing stages. To do this requires a programming language with what is called
a substructural type system. Rust has such a system, with affine types. Linear types are in
development for Haskell.
Iterators are one example of a Scala feature that breaks local reasoning: each
call to next changes hidden state, so zipping two independent iterators gives a
different result to zipping one iterator with itself.

val it = Iterator(1, 2, 3, 4)
val it2 = Iterator(1, 2, 3, 4)
it.zip(it2).next()
// res0: Tuple2[Int, Int] = (1, 1)

val it3 = Iterator(1, 2, 3, 4)
it3.zip(it3).next()
// res1: Tuple2[Int, Int] = (1, 2)
I have described local reasoning and composition but have not discussed their
benefits. Why are they desirable? The answer is that they make efficient
use of knowledge. Let me expand on this.
In a sense local reasoning and composition are two sides of the same coin.
Local reasoning is compositional; composition allows local reasoning. Both
make code easier to understand.
In the corners of the Internet I frequent the common refrain is that static
typing has negligible effect on productivity. I decided to look into this and
was surprised that the majority of the results I found support the claim that
static typing increases productivity. For example, the literature review in this
dissertation (section 2.3, p16–19) shows a majority of results in favour of
static typing, in particular the most recent studies. However the majority
of these studies are very small and use relatively inexperienced developers—
which is noted in the review by Dan Luu that I linked. My belief is that
functional programming comes into its own on larger systems. Furthermore,
programming languages, like all tools, require proficiency to use effectively.
I’m not convinced very junior developers have sufficient skill to demonstrate
a significant difference between languages.
This doesn’t mean we’ll all be using Haskell in five years. More likely we’ll
see something like the shift to object‐oriented programming of the nineties:
Smalltalk was the paradigmatic example of OO, but it was more familiar
languages like C++ and Java that brought OO to the mainstream. In the case
of FP this probably means languages like Scala, Swift, Kotlin, or Rust, and
mainstream languages like Javascript and Java continuing to adopt more FP
features.
I’ve given my opinion on functional programming—that the real goals are local
reasoning and composition, and programming practices like immutability are in
service of these. Other people may disagree with this definition, and that’s ok.
Words are defined by the community that uses them, and meanings change
over time.
Firstly, I find that FP is most valuable in the large. For a small system it is
possible to keep all the details in our head. It’s when a program becomes too
large for anyone to understand all of it that local reasoning really shows its
value. This is not to say that FP should not be used for small projects, but
rather that if you are, say, switching from an imperative style of programming
you shouldn’t expect to see the benefit when working on toy projects.
Finally, reasoning is not the only way to understand code. It’s valuable to
appreciate the limitations of reasoning, to know the other methods for gaining
understanding, and to use a variety of strategies depending on the situation.
Part I

Foundations
In this first part of the book we’re building the foundational strategies on which
the rest of the book will build and elaborate. In Chapter 2 we look at algebraic
data types, which are our main way of modelling data. We turn to codata in
Chapter 3, which is the opposite, or dual, of algebraic data. Type classes are
the focus of Chapter 4, while fundamentals of interpreters are discussed in
Chapter 5. These four strategies all describe code artifacts. For example, we
can label part of code as an algebraic data type or a type class. We’ll also
see strategies that help us write code but don’t necessarily end up directly
reflected in it, such as following the types.
Chapter 2

Algebraic Data Types
This chapter has our first example of a programming strategy: algebraic data
types. Any data we can describe using logical ands and logical ors is an
algebraic data type. Once we recognize an algebraic data type we get three
things for free:

• the Scala representation of the data;
• a structural recursion skeleton to transform the algebraic data type into
any other type; and
• a structural corecursion skeleton to create the algebraic data type from
any other type.
We’ll start with some examples of data, from which we’ll extract the common
structure that motivates algebraic data types. We will then look at their
representation in Scala 2 and Scala 3. Next we’ll turn to structural recursion
for transforming algebraic data types, followed by structural corecursion for
constructing them. We’ll finish by looking at the algebra of algebraic data
types, which is interesting but not essential.
Let’s start with some examples of data from a few different domains. These
are simplified descriptions, but they are all representative of real applications:

• a user on a social network, who has a screen name, an email address, a
password, and a role; and
• an action in a two-dimensional graphics program, which is a straight line,
a Bezier curve, or a move.
What is common between all the examples above is that the individual
elements—the atoms, if you like—are connected by either a logical and or a
logical or. For example, a user is a screen name and an email address and
a password and a role. A 2D action is a straight line or a Bezier curve or a
move. This is the core of algebraic data types: an algebraic data type is data
that is combined using logical ands or logical ors. Conversely, whenever we
can describe data in terms of logical ands and logical ors we have an algebraic
data type.
Algebraic data types are closed worlds, which means they cannot be extended
after they have been defined. In practical terms this means we have to modify
the source code where we define the algebraic data type if we want to add or
remove elements.
2.2 Algebraic Data Types in Scala

Now we know what algebraic data types are, we will turn to their
representation in Scala. The important point here is that the translation
to Scala is entirely determined by the structure of the data; no thinking is
required! This means the work is in finding the structure of the data that best
represents the problem at hand. Work out the structure of the data and the
code directly follows from it.
As algebraic data types are defined in terms of logical ands and logical ors, to
represent algebraic data types in Scala we must know how to represent these
two concepts. Scala 3 simplifies the representation of algebraic data types
compared to Scala 2, so we’ll look at each language version separately.
I’m assuming that you’re familiar with the language features we use to
represent algebraic data types in Scala, so I won’t be going over them.
Not everyone makes their case classes final, but they should. A non-final
case class can still be extended by a class, which breaks the closed world
criterion for algebraic data types.

In Scala 3, if A is B or C, the representation is
enum A {
case B
case C
}
For data that combines sums and products, such as:

• A is B or C; and
• B is D and E; and
• C is F and G
the representation is
enum A {
case B(d: D, e: E)
case C(f: F, g: G)
}
In other words we don’t write final case class inside an enum. You also can’t
nest enum inside enum. Nested logical ors can be rewritten into a single logical or
containing only logical ands (known as disjunctive normal form) so this is not
a limitation in practice. However the Scala 2 representation is still available in
Scala 3 should you want more expressivity.
A logical and (product type) has the same representation in Scala 2 as in Scala
3. If we define a product type A is B and C, the representation in Scala 2 is
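final case class A(b: B, c: C)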
Firstly, instead of using a sealed abstract class you can use a sealed trait.
There isn’t much practical difference between the two. When teaching
beginners I’ll often use sealed trait to avoid having to introduce abstract
class. I believe sealed abstract class has slightly better performance and Java
interoperability, but I haven’t tested this. I also think sealed abstract class is
closer, semantically, to the meaning of a sum type.
For extra style points we can extend Product with Serializable from sealed
abstract class. Compare the reported types below with and without this little
addition.
Let’s first see the code without extending Product and Serializable.
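Here is a sketch, using an illustrative Shape type (the names here are
assumptions):

sealed abstract class Shape
final case class Circle(radius: Double) extends Shape
final case class Square(side: Double) extends Shape

// In Scala 2 the inferred type drags in Product and Serializable:
// List(Circle(1.0), Square(1.0)): List[Shape with Product with Serializable]

Now the same sketch extending Product with Serializable:

sealed abstract class Shape2 extends Product with Serializable
final case class Circle2(radius: Double) extends Shape2
final case class Square2(side: Double) extends Shape2

// Now the inferred type is the one we'd expect:
// List(Circle2(1.0), Square2(1.0)): List[Shape2]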
You’ll only see this in Scala 2. Scala 3 has the concept of transparent traits,
which aren’t reported in inferred types, so you’ll see the same output in Scala
3 no matter whether you add Product and Serializable or not.
Finally, we can use a case object instead of a case class when we’re defining
some type that holds no data. For example, reading from a text stream, such
as a terminal, can return a character or the end‐of‐file. We can model this as
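A sketch in the Scala 2 style. (The names Result and Character are
assumptions; Eof comes from the text below.)

sealed abstract class Result extends Product with Serializable
final case class Character(value: Char) extends Result
case object Eof extends Result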
As the end-of-file indicator Eof has no associated data we use a case object.
There is no need to mark the case object as final, as objects cannot be
extended.
2.2.3 Examples
Let’s make the discussion above more concrete with some examples.
2.2.3.1 Users

In Scala 3 we can model the role as

enum Role {
case Normal
case Moderator
case Administrator
}
In Scala 2 we write
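sealed abstract class Role extends Product with Serializable
case object Normal extends Role
case object Moderator extends Role
case object Administrator extends Role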
The cases within a role don’t hold any data, so we used a case object in the
Scala 2 code.
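The User itself is a product type. A sketch consistent with the description
earlier in the chapter (the field names are assumptions):

final case class User(
  screenName: String,
  emailAddress: String,
  password: String,
  role: Role
)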
I’ve used String to represent most of the data within a User, but in real code
we might want to define distinct types for each field.
2.2.3.2 Paths
enum Action {
case Line(end: Point)
case Curve(cp1: Point, cp2: Point, end: Point)
case Move(end: Point)
}
We’ve seen that the Scala 3 representation of algebraic data types, using enum
, is more compact than the Scala 2 representation. However the Scala 2
representation is still available. Should you ever use the Scala 2 representation
in Scala 3? There are a few cases where you may want to:
• Scala 3 doesn’t currently support nested enums (enums within enums).
This may change in the future, but right now it can be more convenient
to use the Scala 2 representation to express this without having to
convert to disjunctive normal form.
• Scala 2’s representation can express things that are almost, but not
quite, algebraic data types. For example, if you define a method on
an enum you must be able to define it for all the members of the enum.
Sometimes you want a case of an enum to have methods that are only
defined for that case. To implement this you’ll need to use the Scala 2
representation instead.
Exercise: Tree

To gain a bit of practice defining algebraic data types, code the following
description in Scala (your choice of version, or do both).

A Tree with elements of type A is:
• a Leaf with a value of type A; or
• a Node with a left and a right child, which are both Trees with elements
of type A.
2.3 Structural Recursion

I’m assuming you’re familiar with pattern matching in Scala, so I’ll only
talk about how to implement structural recursion using pattern matching.
Remember there are two kinds of algebraic data types: sum types (logical ors)
and product types (logical ands). We have corresponding rules for structural
recursion implemented using pattern matching:
1. For each branch in a sum type we have a distinct case in the pattern
match; and
2. Each case corresponds to a product type with the pattern written in the
usual way.
Let’s see this in code, using an example ADT that includes both sum and
product types:
• A is B or C; and
• B is D and E; and
• C is F and G
enum A {
case B(d: D, e: E)
case C(f: F, g: G)
}
Following the rules above means a structural recursion would look like
anA match {
case B(d, e) => ???
case C(f, g) => ???
}
The ??? bits are problem specific, and we cannot give a general solution for
them. However we’ll soon see strategies to help create them.
A list with elements of type A is either: the empty list; or a pair containing a
head of type A and a tail that is itself a list of A.
This is exactly the definition of List in the standard library. Notice it’s an
algebraic data type as it consists of sums and products. It is also recursive:
in the pair case the tail is itself a list.
We can directly translate this to code, using the strategy for algebraic data
types we saw previously. In Scala 3 we write
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
}
Let’s implement map for MyList. We start with the method skeleton specifying
just the name and types.
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
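  def map[B](f: A => B): MyList[B] =
    ???
}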
Our first step is to recognize that map can be written using a structural recursion.
MyList is an algebraic data type, map is transforming this algebraic data type,
and therefore structural recursion is applicable. We now apply the structural
recursion strategy, giving us
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
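  def map[B](f: A => B): MyList[B] =
    this match {
      case Empty() => ???
      case Pair(head, tail) => ???
    }
}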
I forgot the recursion rule! The data is recursive in the tail of Pair, so map is
recursive there as well.
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
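  def map[B](f: A => B): MyList[B] =
    this match {
      case Empty() => ???
      case Pair(head, tail) => ??? // recursive call: tail.map(f)
    }
}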
I left the ??? to indicate that we haven’t finished with that case.
Now we can move on to the problem specific parts. Here we have three
strategies to help us:

1. reasoning independently by case;
2. assuming the recursion is correct; and
3. following the types.
The first two are specific to structural recursion, while the final one is a general
strategy we can use in many situations. Let’s briefly discuss each and then see
how they apply to our example.
The first strategy is relatively simple: when we consider the problem specific
code on the right hand side of a pattern matching case, we can ignore the code
in any other pattern match cases. So, for example, when considering the case
for Empty above we don’t need to worry about the case for Pair, and vice versa.
The next strategy is a little bit more complicated, and has to do with recursion.
Remember that the structural recursion strategy tells us where to place any
recursive calls. This means we don’t have to think through the recursion.
Instead we assume the recursive call will correctly compute what it claims, and
only consider how to further process the result of the recursion. The result is
guaranteed to be correct so long as we get the non‐recursive parts correct.
In the example above we have the recursion tail.map(f). We can assume this
correctly computes map on the tail of the list, and we only need to think about
what we should do with the remaining data: the head and the result of the
recursive call.
Our final strategy is following the types. It can be used in many situations, not
just structural recursion, so I consider it a separate strategy. The core idea is
to use the information in the types to restrict the possible implementations.
We can look at the types of inputs and outputs to help us.
Now let’s use these strategies to finish the implementation of map. We start
with
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
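  def map[B](f: A => B): MyList[B] =
    this match {
      case Empty() => ???
      case Pair(head, tail) => ??? // recursive call: tail.map(f)
    }
}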
Our first strategy is to consider the cases independently. Let’s start with
the Empty case. There is no recursive call here, so reasoning about recursion
doesn’t come into play. Let’s instead use the types. There is no input here
other than the Empty case we have already matched, so we cannot use the
input types to further restrict the code. Let’s instead consider the output type.
We’re trying to create a MyList[B]. There are only two ways to create a MyList
[B]: an Empty or a Pair. To create a Pair we need a head of type B, which we
don’t have. So we can only use Empty. This is the only possible code we can write.
The types are sufficiently restrictive that we cannot write incorrect code for
this case.
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
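  def map[B](f: A => B): MyList[B] =
    this match {
      case Empty() => Empty()
      case Pair(head, tail) => ??? // recursive call: tail.map(f)
    }
}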
Now let’s move to the Pair case. We can apply both the structural recursion
reasoning strategy and following the types. Let’s use each in turn.
We could return just Empty, matching the case we’ve already written. This has
the correct type but we might expect it is not the correct answer because it
does not use the result of the recursion, head, or f in any way.
We could return just tail.map(f). This has the correct type but we might
expect it is not correct because we don’t use head or f in any way.
We can call f on head, producing a value of type B, and then combine this value
and the result of the recursive call using Pair to produce a MyList[B]. This is
the correct solution.
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
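  def map[B](f: A => B): MyList[B] =
    this match {
      case Empty() => Empty()
      case Pair(head, tail) => Pair(f(head), tail.map(f))
    }
}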
If you’ve followed this example you’ve hopefully seen how we can use the three
strategies to systematically find the correct implementation. Notice how we
interleaved the recursion strategy and following the types to guide us to a
solution for the Pair case. Also note how following the types alone gave us
three possible implementations for the Pair case. In this code, and as is usually
the case, the solution was the implementation that used all of the available
inputs.
Remember that algebraic data types are a closed world: they cannot be
extended once defined. The Scala compiler can use this to check that we
handle all possible cases in a pattern match, so long as we write the pattern
match in a way the compiler can work with. This is known as exhaustivity
checking.
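For example, suppose we have a type representing CSS lengths. A definition
consistent with the code below is:

enum CssLength {
  case Em(value: Double)
  case Rem(value: Double)
  case Pt(value: Double)
}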
import CssLength.*
CssLength.Em(2.0) match {
case Em(value) => value
case Rem(value) => value
}
// -- [E029] Pattern Match Exhaustivity Warning: ----------------------------------
// 1 |CssLength.Em(2.0) match {
// |^^^^^^^^^^^^^^^^^
// |match may not be exhaustive.
// |
// |It would fail on pattern case: CssLength.Pt(_)
// |
// | longer explanation available when compiling with `-explain`
The alternative is to implement structural recursion using dynamic dispatch,
the object-oriented style. This means:

1. defining an abstract method at the root of the algebraic data type; and
2. implementing that method at each leaf (case) of the algebraic data type.
Let’s see it in the MyList example we just looked at. Our first step is to rewrite
the definition of MyList to the Scala 2 style.
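A sketch of the rewrite, with map implemented using dynamic dispatch:

sealed abstract class MyList[A] extends Product with Serializable {
  def map[B](f: A => B): MyList[B]
}
final case class Empty[A]() extends MyList[A] {
  def map[B](f: A => B): MyList[B] =
    Empty()
}
final case class Pair[A](head: A, tail: MyList[A]) extends MyList[A] {
  def map[B](f: A => B): MyList[B] =
    Pair(f(head), tail.map(f))
}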
We can use exactly the same strategies we used in the pattern matching
case to create this code. The implementation technique is different but the
underlying concept is the same.
enum Tree[A] {
case Leaf(value: A)
case Node(left: Tree[A], right: Tree[A])
}
Let’s get some practice with structural recursion and write some methods for
Tree. Implement
• size, which returns the number of values (Leafs) stored in the Tree;
• contains, which returns true if the Tree contains a given element of type
A, and false otherwise; and
• map, which creates a Tree[B] given a function A => B
Structural recursion can be abstracted into a fold method for any given
algebraic data type. Let’s see how this is done using the example of MyList.
Recall the definition of
MyList is
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
}
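Applying the structural recursion skeleton, including the recursion rule, gives
us the starting point for fold.

def fold[B]: B =
  this match {
    case Empty() => ???
    case Pair(head, tail) => ??? // recursive call: tail.fold
  }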
To complete fold we add method parameters for the problem specific (???)
parts. In the case for Empty, we need a value of type B (notice that I’m following
the types here).
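def fold[B](empty: B): B =
  this match {
    case Empty() => empty
    case Pair(head, tail) => ??? // recursive call: tail.fold(empty)
  }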
For the Pair case, we have the head of type A and the recursion producing a
value of type B. This means we need a function to combine these two values.
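def foldRight[B](empty: B)(f: (A, B) => B): B =
  this match {
    case Empty() => empty
    case Pair(head, tail) => f(head, tail.foldRight(empty)(f))
  }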
This is foldRight (and I’ve renamed the method to indicate this). You might
have noticed there is another valid solution. Both empty and the recursion
produce values of type B. If we follow the types we can come up with
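def foldLeft[B](empty: B)(f: (B, A) => B): B =
  this match {
    case Empty() => empty
    case Pair(head, tail) => tail.foldLeft(f(empty, head))(f)
  }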
which is foldLeft, the tail‐recursive variant of fold for a list. (We’ll talk about
tail‐recursion in a later chapter.)
We can follow the same process for any algebraic data type to create its folds.
The rules, applied to MyList, give us:
• two cases, and hence two parameters to fold (other than the parameter
that is the list itself);
• Empty is a constructor with no arguments and hence we use a parameter
of type B; and
• Pair is a constructor with one parameter of type A and one recursive
parameter, and hence the corresponding function has type (A, B)=> B.
Implement a fold for Tree defined earlier. There are several different ways to
traverse a tree (pre‐order, post‐order, and in‐order). Just choose whichever
seems easiest.
Prove to yourself that you can replace structural recursion with calls to fold,
by redefining size, contains, and map for Tree using only fold.
2.4 Structural Corecursion

Structural corecursion is the dual of structural recursion. Where structural
recursion applies when the input of a method is an algebraic data type,
structural corecursion applies when the output of a method is an algebraic
data type, which is always built from sum and product types.

Recall the definition of MyList. In Scala 3 we write
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
}
Let’s return to map, starting once more from the method skeleton.

enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
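  def map[B](f: A => B): MyList[B] =
    ???
}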
The output of this method is a MyList, which is an algebraic data type. Since
we need to construct a MyList we can use structural corecursion. The
structural corecursion strategy says we write down all the constructors and
then consider the conditions that will cause us to call each constructor. So our
starting point is to just write down the two constructors, and put in dummy
conditions.
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
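  def map[B](f: A => B): MyList[B] =
    if ??? then Empty()
    else Pair(???, ???)
}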
We can also apply the recursion rule: where the data is recursive so is the
method.
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
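  def map[B](f: A => B): MyList[B] =
    if ??? then Empty()
    else Pair(???, ???.map(f)) // the method recurses where the data is recursive
}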
To complete the left‐hand side we can use the strategies we’ve already seen:
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
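  def map[B](f: A => B): MyList[B] =
    this match {
      // the pattern match supplies both the conditions and the values
      case Empty() => Empty()
      case Pair(head, tail) => Pair(f(head), tail.map(f))
    }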
}
Just as we could abstract structural recursion as a fold, for any given algebraic
data type we can abstract structural corecursion as an unfold. Unfolds are
much less commonly used than folds, but they are still a nice tool to have.
Let’s work through the process of deriving unfold, using MyList as our example
again.
enum MyList[A] {
case Empty()
case Pair(head: A, tail: MyList[A])
}
Our starting point is writing the skeleton for unfold. It’s a little bit unusual in
that I’ve added a parameter seed. This is the information we use to create an
element. We’ll need this, but we cannot derive it from our strategies, so I’ve
added it in here as a starting assumption.
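def unfold[A, B](seed: A): MyList[B] =
  ???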
Now we start using our strategies to fill in the missing pieces. I’m using the
corecursion skeleton and I’ve applied the recursion rule immediately in the
code below, to save a bit of time in the derivation.
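// The condition must come from a parameter, a function stop: A => Boolean.
def unfold[A, B](seed: A, stop: A => Boolean): MyList[B] =
  if stop(seed) then MyList.Empty()
  else MyList.Pair(???, unfold(???, stop))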
Now we need to handle the case for Pair. We have a value of type A (seed), so
to create the head element of Pair we can ask for a function A => B
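def unfold[A, B](seed: A, stop: A => Boolean, f: A => B): MyList[B] =
  if stop(seed) then MyList.Empty()
  else MyList.Pair(f(seed), unfold(???, stop, f))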
Finally we need to update the current value of seed to the next value. That’s
a function A => A.
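def unfold[A, B](seed: A, stop: A => Boolean, f: A => B, next: A => A): MyList[B] =
  if stop(seed) then MyList.Empty()
  else MyList.Pair(f(seed), unfold(next(seed), stop, f, next))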
At this point we’re done. Let’s see that unfold is useful by declaring some other
methods in terms of it. We’re going to declare map, which we’ve already seen
is a structural corecursion, using unfold. We will also define fill and iterate,
which are methods that construct lists and correspond to the methods with
the same names on List in the Scala standard library.
To make this easier to work with I’m going to declare unfold as a method on
the MyList companion object. I have made a slight tweak to the definition to
make type inference work a bit better. In Scala, types inferred for one method
parameter cannot be used for other method parameters in the same parameter
list. However, types inferred for one method parameter list can be used in
subsequent lists. Separating the function parameters from the seed parameter
means that the value inferred for A from seed can be used for inference of the
function parameters’ input parameters.
I have also declared some destructor methods, which are methods that take
apart an algebraic data type. For MyList these are head, tail, and the predicate
isEmpty. We’ll talk more about these a bit later.
enum MyList[A] {
case Empty()
case Pair(_head: A, _tail: MyList[A])
def head: A =
this match {
case Pair(head, _) => head
}
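  def tail: MyList[A] =
    this match {
      case Pair(_, tail) => tail
    }

  def isEmpty: Boolean =
    this match {
      case Empty() => true
      case Pair(_, _) => false
    }
}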
Now let’s define the constructors fill and iterate, and map, in terms of unfold.
I think the constructors are a bit simpler, so I’ll do those first.
object MyList {
  def unfold[A, B](seed: A)(stop: A => Boolean, f: A => B, next: A => A): MyList[B] =
    if stop(seed) then MyList.Empty()
    else MyList.Pair(f(seed), unfold(next(seed))(stop, f, next))
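  def fill[A](n: Int)(elem: => A): MyList[A] =
    ???

  def iterate[A](start: A, len: Int)(f: A => A): MyList[A] =
    ???
}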
Here I’ve just added the method skeletons, which are taken straight from the
List documentation. To implement these methods we can use one of two
strategies:

• reasoning as we would about an imperative loop, using loop variants and
invariants; or
• reasoning using structural recursion over the natural numbers.
You might have noticed that the parameters to unfold are almost exactly those
you need to create a for‐loop in a language like Java. A classic for‐loop, of the
for(i = 0; i < n; i++) kind, has four components:

1. the initial value of the loop counter (i = 0), which corresponds to seed;
2. the condition that determines when the loop ends (i < n), which
corresponds to stop;
3. the statement that advances the counter (i++), which corresponds to
next; and
4. the body of the loop, which does something with the counter,
corresponding to f.
Loop variants and invariants are the standard way of reasoning about
imperative loops. I’m not going to describe them here, as you have probably
already learned how to reason about loops (though perhaps not using these
terms). Instead I’m going to discuss the second reasoning strategy, which
relates writing unfold to something we’ve already discussed: structural
recursion.
Our first step is to note that natural numbers (the integers 0 and larger) are
conceptually algebraic data types even though the implementation in Scala—
using Int—is not. A natural number is either:
• zero; or
• 1 + a natural number.
It’s the simplest possible algebraic data type that is both a sum and a product
type.
Once we see this, we can use the reasoning tools for structural recursion for
creating the parameters to unfold. Let’s show how this works with fill. The n
parameter tells us how many elements there are in the List we’re creating. The
elem parameter creates those elements, and is called once for each element. So
our starting point is to consider this as a structural recursion over the natural
numbers. We can take n as seed, and stop as the function x => x == 0. These
are the standard conditions for a structural recursion over the natural numbers.
What about next? Well, the definition of natural numbers tells us we should
subtract one in the recursive case, so next becomes x => x - 1. We only need
f, and that comes from the definition of how fill is supposed to work. We
create the value from elem, so f is just _ => elem.
object MyList {
  def unfold[A, B](seed: A)(stop: A => Boolean, f: A => B, next: A => A): MyList[B] =
    if stop(seed) then MyList.Empty()
    else MyList.Pair(f(seed), unfold(next(seed))(stop, f, next))
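  def fill[A](n: Int)(elem: => A): MyList[A] =
    unfold(n)(_ == 0, _ => elem, _ - 1)
}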
List.fill(5)(1)
// res6: List[Int] = List(1, 1, 1, 1, 1)
MyList.fill(5)(1)
// res7: MyList[Int] = MyList(1, 1, 1, 1, 1)
To see that elem, a call-by-name parameter, is evaluated once for each element,
we can use a side effect.

var counter = 0
def getAndInc(): Int = {
val temp = counter
counter = counter + 1
temp
}
List.fill(5)(getAndInc())
// res8: List[Int] = List(0, 1, 2, 3, 4)
counter = 0
MyList.fill(5)(getAndInc())
// res10: MyList[Int] = MyList(0, 1, 2, 3, 4)
Exercise: Iterate
Implement iterate using the same reasoning as we did for fill. This is slightly
more complex than fill as we need to keep two bits of information: the value
of the counter and the value of type A.
Exercise: Map
Once you’ve completed iterate, try to implement map in terms of unfold. You’ll
need to use the destructors to implement it.
The destructors are another part of the duality between structural recursion
and corecursion: structural recursion works with the constructors that build up
an algebraic data type, while the destructors, as we used in map above, are how
we take one apart.
One last thing before we leave unfold. If we look at the usual definition of
unfold we’ll probably find the following definition.
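It looks something like this, with a single function returning an Option taking
the place of stop, f, and next:

def unfold[A, B](seed: A)(f: A => Option[(B, A)]): MyList[B] =
  f(seed) match {
    case None => MyList.Empty()
    case Some((value, nextSeed)) => MyList.Pair(value, unfold(nextSeed)(f))
  }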
This is equivalent to the definition we used, but a bit more compact in terms
of the interface it presents. We used a more explicit definition that makes the
structure of the method clearer.
2.5 The Algebra of Algebraic Data Types

In the mathematical presentation of algebraic data types there are two
operations, + and ×. You’ll perhaps guess that these correspond to sum and
product types respectively, and you’d be absolutely correct. What about the
properties of these operations? Well, they are similar to what we know from
basic algebra:

• + and × each have an identity: 0 for + and 1 for ×; and
• × distributes over +: a × (b + c) = (a × b) + (a × c).
Remember that algebraic data types work with types, so the operations + and
× take types as parameters. So Int × String is equivalent to

final case class IntAndString(int: Int, string: String)

or just the tuple (Int, String), while Int + String is equivalent to

enum IntOrString {
  case IsInt(int: Int)
  case IsString(string: String)
}

or just Either[Int, String].
Exercise: Identities
Can you work out which Scala type corresponds to the identity 1 for product
types?
What about the Scala type corresponding to the identity 0 for sum types?
What about the distribution law? This allows us to manipulate algebraic data
types to form equivalent, but perhaps more useful, representations. Consider
this example of a user data type.
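A sketch of the starting point: a person has a name and a role, where the role
is a user or a moderator.

final case class Person(name: String, role: Role)

enum Role {
  case User
  case Moderator
}

In algebraic terms this is String × (User + Moderator). Distribution rewrites it
as (String × User) + (String × Moderator), which in Scala is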
enum Person {
case User(name: String)
case Moderator(name: String)
}
Is this representation more useful? I can’t say without the context of where
the data is being used. However I can say that knowing this manipulation is
possible, and correct, is useful.
There is a lot more that could be said about algebraic data types, but at this
point I feel we’re really getting into the weeds. I’ll finish up with a few pointers
to other interesting facts, some of which are picked up in the references at the
end of the chapter.
2.6 Conclusions
We have covered a lot of material in this chapter. Let’s recap the key points.
Algebraic data types allow us to express data types by combining existing data
types with logical and and logical or. A logical and constructs a product type
while a logical or constructs a sum type. Algebraic data types are the main
way to represent data in Scala.
Following the types is a very general strategy that can be used in many other
situations.
Structural corecursion gives us a skeleton for creating any given algebraic data
type from any other type. Structural corecursion can be abstracted into an
unfold method. When reasoning about structural corecursion we can reason
as we would for an imperative loop, or, if the input is an algebraic data type,
use the principles for reasoning about structural recursion.
I’m not aware of any approachable yet thorough treatment of either algebraic
data types or structural recursion. Both seem to have become assumed
background of any researcher in the field of programming languages, and
relatively recent work is caked in layers of mathematics and obtuse notation
that I find difficult reading. The infamous Functional Programming with Bananas,
Lenses, Envelopes and Barbed Wire [Meijer et al. 1991] is an example of such
work. I suspect the core ideas of both date back to at least the emergence of
computability theory in the 1930s, well before any digital computers existed.
However, algebraic data types and structural recursion don’t
seem to have been fully developed, along with pattern matching, until NPL in
1977. NPL was quickly followed by the more influential language Hope, which
spread the concept to other programming languages.
The Derivative of a Regular Type is its Type of One‐Hole Contexts [McBride 2001]
describes the derivative of algebraic data types.
Chapter 3
Objects as Codata
In this chapter we will look at codata, the dual of algebraic data types.
Algebraic data types focus on how things are constructed. Codata, in
contrast, focuses on how things are used. We define codata by specifying the
operations that can be performed on the type. This is very similar to the use
of interfaces in object‐oriented programming, and this is the first reason that
we are interested in codata: codata puts object‐oriented programming into a
coherent conceptual framework with the other strategies we are discussing.
We’ll begin our exploration of codata by more precisely defining it and seeing
some examples. We’ll then talk about representing codata in Scala, and the
relationship to object‐oriented programming. Once we can create codata,
we’ll see how to work with it using structural recursion and corecursion, using
an example of an infinite structure. Next we will look at transforming algebraic
data to codata, and vice versa. We will finish by examining differences in
extensibility.
3.1 Data and Codata

Data describes what things are, while codata describes what things can do.
enum Bool {
case True
case False
}
The definition tells us there are two ways to construct an element of type Bool.
Furthermore, if we have such an element we can tell exactly which case it is,
by using a pattern match for example. Similarly, if the instances themselves
hold data, as in List for example, we can always extract all the data within
them. Again, we can use pattern matching to achieve this.
trait Set[A] {
/** Construct a new set containing all elements in this set and the
given element */
def insert(elt: A): Set[A]
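  /** Construct the union of this and that set */
  def union(that: Set[A]): Set[A]

  /** True if this set contains the given element
      (contains is an assumed destructor, used in the examples below) */
  def contains(elt: A): Boolean
}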
This definition does not tell us anything about the internal representation of
the elements in the set. It could use a hash table, a tree, or something more
exotic. It does, however, tell us what we can do with the set. We know we
can take the union but not the intersection, for example.
If you come from the object‐oriented world you might recognize the
description of codata above as programming to an interface. In some ways
codata is just taking concepts from the object‐oriented world and presenting
them in a way that is consistent with the rest of the functional programming
paradigm. However, this does not mean adopting all the features of object‐
oriented programming. We won’t use state, which is difficult to reason
about. We also won’t use implementation inheritance either, for the same
reason. In our subset of object‐oriented programming we’ll either be defining
interfaces (which may have default implementations of some methods) or final
classes that implement those interfaces. Interestingly, this subset of object‐
oriented programming is often recommended by advocates of object‐oriented
programming.
Let’s now be a little more precise in our definition of codata, which will make
the duality between data and codata clearer. Remember the definition of data:
it is defined in terms of sums (logical ors) and products (logical ands). We
can transform any data into a sum of products. Each product in the sum is
a constructor, and the product itself is the parameters that the constructor
accepts. Finally, we can think of constructors as functions which take some
arbitrary input and produce an element of data. Our end point is a sum of
functions from arbitrary input to data.
For Set this gives us a product of three functions:

• insert: (Set[A], A) => Set[A]
• union: (Set[A], Set[A]) => Set[A]
• contains: (Set[A], A) => Boolean

Notice that the first parameter of each function is the type we are defining,
Set[A].
This is only half the story for codata. We also need to actually implement the
interface we’ve just defined. There are three approaches we can use:

1. a final subclass, when we want to name the implementation;
2. an anonymous subclass, when we don’t need a name; or
3. an object, when there is only a single instance.
Some examples are in order. Here’s a simple example of Set, which uses a List
to hold the elements in the set.
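A sketch of one such implementation (the details here are illustrative):

final class ListSet[A](elements: List[A]) extends Set[A] {
  def insert(elt: A): Set[A] =
    ListSet(elt :: elements)

  def union(that: Set[A]): Set[A] =
    elements.foldLeft(that) { (set, elt) => set.insert(elt) }

  def contains(elt: A): Boolean =
    elements.contains(elt)
}

object ListSet {
  def empty[A]: Set[A] = ListSet(List.empty)
}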
This uses the first implementation approach, a final subclass. Where would
we use an anonymous subclass? They are most useful when implementing
methods that return our codata type. Let’s take union as an example. It returns
our codata type, Set, and we could implement it as shown below.
trait Set[A] {
  def insert(elt: A): Set[A]
  def contains(elt: A): Boolean

  def union(that: Set[A]): Set[A] = {
    // a sketch: the union is itself codata, delegating to the two underlying sets
    val self = this
    new Set[A] {
      def insert(elt: A): Set[A] =
        self.union(that.insert(elt))

      def contains(elt: A): Boolean =
        self.contains(elt) || that.contains(elt)
    }
  }
}
This uses an anonymous subclass to implement union on the Set trait, and
hence defines the method for all subclasses. I haven’t made the method final
so that subclasses can override it with a more efficient implementation. This
does open up the danger of implementation inheritance. This is an example
of where theory and craft diverge. In theory we never want implementation
inheritance, but in practice it can be useful as an optimization.
Once again we could make this a final method. In this case it’s probably more
justified as it’s difficult to imagine a more efficient implementation.
Data and codata are both realized in Scala as variations of the same language
features of classes and objects. This means we can define types that have
properties of both data and codata. We have actually already done this. When
we define data we must define names for the fields within the data, thus
defining destructors. This is the same in most languages, which don’t make
a hard distinction between data and codata.
Part of the appeal, I think, of classes and objects is that they can express so
many conceptually different abstractions with the same language constructs.
This gives them a surface appearance of simplicity; it seems we need to learn
only one abstraction to solve a huge number of coding problems. However
this apparent simplicity hides real complexity, as this variety of uses forces us
to reverse engineer the conceptual intention from the code.
3.3 Structural Recursion and Corecursion for Codata

In this section we’ll build a library for streams, also known as lazy lists. These
are the codata equivalent of lists. Whereas a list must have a finite length,
streams have an infinite length. We’ll use this example to explore structural
recursion and structural corecursion as applied to codata.
Let’s start by reviewing structural recursion and corecursion. The key idea is to
use the input or output type, respectively, to drive the process of writing the
method. We’ve already seen how this works with data, where we emphasized
structural recursion. With codata it’s more often the case that structural
corecursion is used. The steps for using structural corecursion are:

1. write the skeleton: construct a new instance of the codata type, with a
stub for each method; and
2. fill in the method bodies.
It’s important that any computation takes place within the methods, and so
only runs when the methods are called. Once we start creating streams the
importance of this will become clear.
Our first step is to define our stream type. As this is codata, it is defined in
terms of its destructors. The destructors that define a Stream of elements of
type A are:

• a head of type A; and
• a tail of type Stream[A].

Note these are almost the destructors of List. We haven’t defined isEmpty as a
destructor because our streams never end and thus this method would always
return false. (A lot of real implementations, such as the LazyList in the Scala
standard library, do define such a method which allows them to represent
finite and infinite lists in the same structure. We’re not doing this for simplicity
and because we want to work with codata in its purest form.)
trait Stream[A] {
def head: A
def tail: Stream[A]
}
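Let’s see an example, which we’ll call ones: an infinite stream of ones. We start
with the skeleton.

val ones: Stream[Int] = new Stream[Int] {
  def head: Int = ???
  def tail: Stream[Int] = ???
}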
Here I’ve used the anonymous subclass approach, so I can just write all the
code in one place.
The next step is to fill in the method bodies. The first method, head, is trivial.
The answer is 1 by definition.
val ones: Stream[Int] = new Stream[Int] {
  def head: Int = 1
  def tail: Stream[Int] = ???
}
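To implement tail we could try nesting another Stream:

val ones: Stream[Int] = new Stream[Int] {
  def head: Int = 1
  def tail: Stream[Int] =
    new Stream[Int] {
      def head: Int = 1
      def tail: Stream[Int] = ???
    }
}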
This approach doesn’t seem like it’s going to work. We’ll have to write this out
an infinite number of times to correctly implement the method, which might
be a problem.
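The solution is to refer to ones itself:

val ones: Stream[Int] = new Stream[Int] {
  def head: Int = 1
  def tail: Stream[Int] = ones
}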
You might be alarmed to see the circular reference to ones in tail. This works
because it is within a method, and so is only evaluated when that method is
called. This delaying of evaluation is what allows us to represent an infinite
number of elements, as we only ever evaluate a finite portion of them. This is
a core difference from data, which is fully evaluated when it is constructed.
Let’s check that our definition of ones does indeed work. We can’t extract all
the elements from an infinite Stream (at least, not in finite time) so in general
we’ll have to resort to checking a finite sequence of elements.
ones.head
// res0: Int = 1
ones.tail.head
// res1: Int = 1
ones.tail.tail.head
// res2: Int = 1
This all looks correct. We’ll often want to check our implementation in this
way, so let’s implement a method, take, to make this easier.
trait Stream[A] {
def head: A
def tail: Stream[A]
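  def take(count: Int): List[A] =
    if count == 0 then Nil
    else head :: tail.take(count - 1)
}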
ones.take(5)
// res4: List[Int] = List(1, 1, 1, 1, 1)
For our next task we’ll implement map. Implementing a method on Stream
allows us to see both structural recursion and corecursion for codata in action.
As usual we begin by writing out the method skeleton.
trait Stream[A] {
def head: A
def tail: Stream[A]
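  def map[B](f: A => B): Stream[B] =
    ???
}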
trait Stream[A] {
def head: A
def tail: Stream[A]
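  // Structural recursion on the input gives us the destructors head and
  // tail to work with, and the recursion tail.map(f), but we still need
  // somewhere to put the results.
  def map[B](f: A => B): Stream[B] =
    ???
}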
To make progress we can follow the types or use structural corecursion. Let’s
choose corecursion to see another example of it in use.
trait Stream[A] {
  def head: A
  def tail: Stream[A]

  def map[B](f: A => B): Stream[B] =
    new Stream[B] {
      def head: B = ???
      def tail: Stream[B] = ???
    }
}
trait Stream[A] {
  def head: A
  def tail: Stream[A]

  def map[B](f: A => B): Stream[B] = {
    val self = this
    new Stream[B] {
      def head: B = f(self.head)
      def tail: Stream[B] = self.tail.map(f)
    }
  }
}
There are two important points. Firstly, notice how I gave the name self to
this. This is so I can access the value inside the new Stream we are creating,
where this would be bound to this new Stream. Next, notice that we access
self.head and self.tail inside the methods on the new Stream. This maintains
the correct semantics of only performing computation when it has been asked
for. If we performed the computation outside of the methods then we would
do it too early, which in some cases can lead to an infinite loop.
We can also define unfold for Stream. The method skeleton is

trait Stream[A] {
def head: A
def tail: Stream[A]
}
object Stream {
def unfold[A, B](seed: A): Stream[B] =
???
}
Applying the structural corecursion skeleton gives us

trait Stream[A] {
def head: A
def tail: Stream[A]
}
object Stream {
def unfold[A, B](seed: A): Stream[B] =
new Stream[B]{
def head: B = ???
def tail: Stream[B] = ???
}
}
Now we can follow the types, adding parameters as we need them. This gives
us the complete method shown below.
trait Stream[A] {
def head: A
def tail: Stream[A]
}
object Stream {
def unfold[A, B](seed: A, f: A => B, next: A => A): Stream[B] =
new Stream[B]{
def head: B =
f(seed)
def tail: Stream[B] =
unfold(next(seed), f, next)
}
}
We can use this to implement some interesting streams. Here’s a stream that
alternates between 1 and -1.
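One way to define it is with a Boolean seed that flips on each step:

val alternating = Stream.unfold(
  true,
  x => if x then 1 else -1,
  x => !x
)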
alternating.take(5)
// res11: List[Int] = List(1, -1, 1, -1, 1)
It’s time for you to get some practice with structural recursion and structural
corecursion using codata. Implement filter, zip, and scanLeft on Stream. They
have the same semantics as the same methods on List, and the signatures
shown below.
trait Stream[A] {
  def head: A
  def tail: Stream[A]

  def filter(pred: A => Boolean): Stream[A] =
    ???

  def zip[B](that: Stream[B]): Stream[(A, B)] =
    ???

  def scanLeft[B](zero: B)(f: (B, A) => B): Stream[B] =
    ???
}
We can do some neat things with the methods defined above. For example,
here is the stream of natural numbers.
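We can define it in terms of itself, using map:

val naturals: Stream[Int] =
  new Stream[Int] {
    def head: Int = 0
    def tail: Stream[Int] = naturals.map(_ + 1)
  }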
naturals.take(5)
// res15: List[Int] = List(0, 1, 2, 3, 4)
This might be confusing. If so, spend a bit of time thinking about it. It really
does work!
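The same trick, starting the stream at one, gives us the positive naturals.

val naturals: Stream[Int] =
  new Stream[Int] {
    def head: Int = 1
    def tail: Stream[Int] = naturals.map(_ + 1)
  }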
naturals.take(5)
// res17: List[Int] = List(1, 2, 3, 4, 5)
You may have noticed that our implementation recomputes values, possibly many
times. A good example is the implementation of filter. This recalculates the
head and tail on each call, which could be a very expensive operation.
def filter(pred: A => Boolean): Stream[A] = {
  val self = this
  new Stream[A] {
    def head: A = {
      def loop(stream: Stream[A]): A =
        if pred(stream.head) then stream.head
        else loop(stream.tail)

      loop(self)
    }

    def tail: Stream[A] = {
      def loop(stream: Stream[A]): Stream[A] =
        if pred(stream.head) then stream.tail.filter(pred)
        else loop(stream.tail)

      loop(self)
    }
  }
}
We know that delaying the computation until the method is called is important,
because that is how we can handle infinite and self‐referential data. However
we don’t need to redo this computation on successive calls. We can instead
cache the result from the first call and use that next time. Scala makes
this easy with lazy val, which is a val that is not computed until its first
call. Additionally, Scala’s use of the uniform access principle means we can
implement a method with no parameters using a lazy val. Here’s a quick
example demonstrating it in use.
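Here is a sketch consistent with the output below, implementing the
destructors with lazy vals:

val twos: Stream[Int] = new Stream[Int] {
  lazy val head: Int = 2
  lazy val tail: Stream[Int] = twos
}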
twos.take(5)
// res18: List[Int] = List(2, 2, 2, 2, 2)
We get the same result whether we use a method or a lazy val, because
we are assuming that we are only dealing with pure computations that have
no dependency on state that might change. In this case a lazy val simply
consumes additional space to save on time.
We can illustrate the difference between call by name and call by need if we
use an impure computation. For example, we can define a stream of random
numbers. Random number generators depend on some internal state.
Here’s the call by name implementation, using the methods we have already
defined.
import scala.util.Random
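// head is a def inside unfold, so a new random number
// is generated every time it is evaluated
val randoms: Stream[Double] =
  Stream.unfold(Random, r => r.nextDouble(), r => r)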
Notice that we get different results each time we take a section of the Stream.
We would expect these results to be the same.
randoms.take(5)
// res19: List[Double] = List(
// 0.992356225587004,
// 0.5782412616663237,
// 0.2226030245370484,
// 0.35727986544485835,
// 0.31298531150309006
// )
randoms.take(5)
// res20: List[Double] = List(
// 0.7752616583143408,
// 0.4751181675682493,
// 0.17156767265471862,
// 0.27135659877759877,
// 0.7619242761867147
// )
Now let’s define the same stream in a call by need style, using lazy val.
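// head is computed once and cached; tail loops back to this stream
val randomsByNeed: Stream[Double] =
  new Stream[Double] {
    lazy val head: Double = Random.nextDouble()
    lazy val tail: Stream[Double] = randomsByNeed
  }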
This time we get the same result when we take a section, and each number is
the same.
randomsByNeed.take(5)
// res21: List[Double] = List(
// 0.9016738887598141,
// 0.9016738887598141,
// 0.9016738887598141,
// 0.9016738887598141,
// 0.9016738887598141
// )
randomsByNeed.take(5)
// res22: List[Double] = List(
// 0.9016738887598141,
// 0.9016738887598141,
// 0.9016738887598141,
// 0.9016738887598141,
// 0.9016738887598141
// )
If we wanted a stream that had a different random number for each element
but those numbers were constant, we could redefine unfold to use call by
need.
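A sketch of unfoldByNeed, mirroring unfold but caching with lazy vals:

def unfoldByNeed[A, B](seed: A, f: A => B, next: A => A): Stream[B] =
  new Stream[B] {
    lazy val head: B = f(seed)
    lazy val tail: Stream[B] = unfoldByNeed(next(seed), f, next)
  }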
val randomsByNeed2 =
unfoldByNeed(Random, r => r.nextDouble(), r => r)
randomsByNeed2.take(5)
// res23: List[Double] = List(
// 0.050299168953211404,
// 0.3965737383904735,
// 0.4145424713147482,
// 0.1843136303656039,
// 0.3245530902085244
// )
randomsByNeed2.take(5)
// res24: List[Double] = List(
// 0.050299168953211404,
// 0.3965737383904735,
// 0.4145424713147482,
// 0.1843136303656039,
// 0.3245530902085244
// )
These subtleties are one of the reasons that functional programmers try to
avoid using state as far as possible.
3.4 Relating Data and Codata

In this section we’ll explore the relationship between data and codata, and in
particular converting one to the other. We’ll look at it in two ways: firstly a
very surface‐level relationship between the two, and then a deep connection
via fold.
Remember that data is a sum of products, where the products are constructors
and we can view constructors as functions. So we can view data as a sum
of functions. Meanwhile, codata is a product of functions. We can easily
make a direct correspondence between the functions‐as‐constructors and the
functions in codata. What about the difference between the sum and the
product that remains? Well, when we have a product of functions we only call
one at any point in our code. So the logical or is in the choice of function to
call.
Let’s see how this works with a familiar example of data, List. As an algebraic
data type we can define
enum List[A] {
case Pair(head: A, tail: List[A])
case Empty()
}
while as codata we represent the same constructors as a product of functions:

trait List[A] {
def pair(head: A, tail: List[A]): List[A]
def empty: List[A]
}
The other way to view the relationship is a connection via fold. We’ve already
learned how to derive the fold for any algebraic data type. For Bool, defined
as
enum Bool {
  case True
  case False
}

the fold method is

enum Bool {
  case True
  case False

  def fold[A](t: A)(f: A): A =
    this match {
      case True => t
      case False => f
    }
}
We know that fold is universal: we can write any other method in terms of
it. It therefore provides a universal destructor and is the key to treating data
as codata. In this case the fold is something we use all the time, except we
usually call it if.
Here’s the codata version of Bool, with fold renamed to if. (Note that Scala
allows us to define methods with the same name as key words, in this case if,
but we have to surround them in backticks to use them.)
trait Bool {
def `if`[A](t: A)(f: A): A
}
Let’s see this in use by defining and in terms of if, and then creating some
examples. First the definition of and.
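A sketch of the definitions; True and False are values, as Bool has no
constructor arguments:

val True = new Bool {
  def `if`[A](t: A)(f: A): A = t
}

val False = new Bool {
  def `if`[A](t: A)(f: A): A = f
}

def and(l: Bool, r: Bool): Bool =
  new Bool {
    // If l is true the answer is r's choice; otherwise it is false.
    def `if`[A](t: A)(f: A): A = l.`if`(r.`if`(t)(f))(f)
  }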
Now the examples. This is simple enough that we can try the entire truth
table.
and(True, True).`if`("yes")("no")
// res1: String = "yes"
and(True, False).`if`("yes")("no")
// res2: String = "no"
and(False, True).`if`("yes")("no")
// res3: String = "no"
and(False, False).`if`("yes")("no")
// res4: String = "no"
Test your understanding of Bool by implementing or and not in the same way
we implemented and above.
Notice that, once again, computation only happens on demand. In this case,
nothing happens until if is actually called. Until that point we’re just building
up a representation of what we want to happen. This again points to how
codata can handle infinite data, by only computing the finite amount required
by the actual computation.
This example suggests a general strategy for converting any data to codata:
1. On the interface (trait) defining the codata, define a method with the
same signature as fold.
2. Define an implementation of the interface for each product case in the
data. The data’s constructor arguments become constructor arguments
on the codata classes. If there are no constructor arguments, as in Bool,
we can define values instead of classes.
3. Each implementation implements the case of fold that it corresponds
to.
Let’s apply this to a slightly more complex example: List. We’ll start by
defining it as data and implementing fold. I’ve chosen to implement foldRight
but foldLeft would be just as good.
enum List[A] {
  case Pair(head: A, tail: List[A])
  case Empty()

  def foldRight[B](empty: B)(f: (A, B) => B): B =
    this match {
      case Pair(head, tail) => f(head, tail.foldRight(empty)(f))
      case Empty() => empty
    }
}
Now let’s implement it as codata. We start by defining the interface with the
fold method. In this case I’m calling it foldRight as it’s going to exactly mirror
the foldRight we just defined.
trait List[A] {
def foldRight[B](empty: B)(f: (A, B) => B): B
}
Now we define the implementations. There is one for Pair and one for Empty,
which are the two cases in the data definition of List. Notice that in this case
the classes have constructor arguments, which correspond to the constructor
arguments on the corresponding product types.
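A sketch of the two implementations:

final class Pair[A](head: A, tail: List[A]) extends List[A] {
  def foldRight[B](empty: B)(f: (A, B) => B): B =
    f(head, tail.foldRight(empty)(f))
}

final class Empty[A]() extends List[A] {
  def foldRight[B](empty: B)(f: (A, B) => B): B =
    empty
}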
This code is almost the same as the dynamic dispatch implementation, which
again shows the relationship between codata and object‐oriented code.
The starting point is creating a type alias List, which defines a list as a fold.
This uses a polymorphic function type, which is new in Scala 3.
Now we can define Pair and Empty as functions. The first parameter list is the
constructor arguments, and the second parameter list is the parameters for
foldRight.
Finally, let’s see an example to show it working. We will first define the list
containing 1, 2, 3. Due to a restriction in polymorphic function types, I have to
add the useless empty parameter.
Now we can compute the sum and product of the elements in this list.
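Here's a sketch of the whole construction; the details below are one plausible
encoding of the description above, not necessarily the original code:

type List[A] = [B] => (B, (A, B) => B) => B

def Pair[A](head: A, tail: List[A]): List[A] =
  [B] => (empty: B, f: (A, B) => B) => f(head, tail(empty, f))

// The useless empty parameter list mentioned above.
def Empty[A](): List[A] =
  [B] => (empty: B, f: (A, B) => B) => empty

val list: List[Int] = Pair(1, Pair(2, Pair(3, Empty())))

val sum = list(0, (a, b) => a + b)     // 6
val product = list(1, (a, b) => a * b) // 6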
It works!
The purpose of this little demonstration is to show that functions are just
objects (in the codata sense) with a single method. Scala makes this apparent,
as functions are objects with an apply method.
We’ve seen that data can be translated to codata. The reverse is also possible:
we simply tabulate the results of each possible method call. In other words,
the data representation is memoisation, a lookup table, or a cache.
3.5 Data and Codata Extensibility
Although we can convert data to codata and vice versa, there are good reasons
to choose one over the other. We’ve already seen one reason: with codata
we can represent infinite structures. In this next section we’ll see another
difference: the extensibility that data and codata permit.
We have seen that codata can represent types with an infinite number of
elements, such as Stream. This is one expressive difference from data, which
must always be finite. We’ll now look at another, which is the type of
extensibility we get from data and from codata. Together these gives use
guidelines to choose between the two.
Firstly, let’s define extensibility. It means the ability to add new features
without modifying existing code. (If we allow modification of existing code
then any extension becomes trivial.) In particular there are two dimensions
along which we can extend code: adding new functions or adding new
elements. We will see that data and codata have orthogonal extensibility: it’s
easy to add new functions to data but adding new elements is impossible
without modifying existing code, while adding new elements to codata is
straightforward but adding new functions is not.
Let’s start with a concrete example of both data and codata. For data we’ll use
the familiar List type.
enum List[A] {
case Empty()
case Pair(head: A, tail: List[A])
}
For codata we'll use the Set type we have seen in earlier examples.

trait Set[A] {
  def contains(elt: A): Boolean
  def insert(elt: A): Set[A]
  def union(that: Set[A]): Set[A]
}
We know there are lots of methods we can define on List. The standard
library is full of them! We also know that any method we care to write can
be written using structural recursion. Finally, we can write these methods
without modifying existing code.
import List.*
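For example, here is a hypothetical sum, written as a structural recursion
without touching the definition of List:

def sum(list: List[Int]): Int =
  list match {
    case Empty() => 0
    case Pair(head, tail) => head + sum(tail)
  }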
What about adding new elements to data? Perhaps we want to add a special
case to optimize single‐element lists. This is impossible without changing
existing code. By definition, we cannot add a new element to an enum without
changing the enum. Adding such a new element would break all existing pattern
matches, and so require they all change. So in summary we can add new
functions to data, but not new elements.
Now let’s look at codata. This has the opposite extensibility; duality strikes
again! In the codata case we can easily add new elements. We simply
implement the trait that defines the codata interface. We saw this when
we defined, for example, ListSet.
So in summary we can add new elements to codata, but not new functions. If
we tabulate this we clearly see that data and codata have orthogonal
extensibility.

         Add new function   Add new element
Data     Yes                No
Codata   No                 Yes
You might wonder if we can have both forms of extensibility. Achieving this is
called the expression problem. There are various ways to solve the expression
problem, and we’ll see one that works particularly well in Scala in a later
chapter.
3.6 Exercise: Sets

In this extended exercise we'll explore the Set interface we have already used
in several examples, reproduced below.

trait Set[A] {
  def contains(elt: A): Boolean
  def insert(elt: A): Set[A]
  def union(that: Set[A]): Set[A]
}
We also saw a simple implementation, storing the elements in the set in a List.
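The implementation might look something like this sketch:

final class ListSet[A](elements: List[A]) extends Set[A] {
  def contains(elt: A): Boolean =
    elements.contains(elt)

  def insert(elt: A): Set[A] =
    ListSet(elt :: elements)

  def union(that: Set[A]): Set[A] =
    elements.foldLeft(that)((set, elt) => set.insert(elt))
}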
The implementation for union is a bit unsatisfactory; it doesn't use any of
our strategies for writing code. We can implement both union and insert in a
generic way that works for all sets (in other words, is implemented on the Set
trait) and uses the strategies we’ve seen in this chapter. Go ahead and do this.
Your next challenge is to implement Evens, the set of all even integers, which
we’ll represent as a Set[Int]. This is an infinite set; we cannot directly
enumerate all the elements in this set. (We actually could enumerate all the
even elements that are 32‐bit Ints, but we don’t want to as this would use
excessive amounts of space.)
3.7 Conclusions
In this chapter we’ve explored codata, the dual of data. Codata is defined by
its interface—what we can do with it—as opposed to data, which is defined by
what it is. More formally, codata is a product of destructors, where destructors
are functions from the codata type (and, optionally, some other inputs) to
some type. By avoiding the elements of object‐oriented programming that
make it hard to reason about—state and implementation inheritance—codata
brings elements of object‐oriented programming that accord with the other
functional programming strategies. In Scala we define codata as a trait, and
implement it as a final class, anonymous subclass, or an object.
We saw that data is connected to codata via fold: any data can instead be
implemented as codata with a single destructor that is the fold for that data.
The reverse is also possible: we can enumerate all potential pairs of inputs and
outputs of destructors to represent codata as data. However this does not mean that
data and codata are equivalent. We have seen many examples of codata
representing infinite structures, such as sets of all even numbers and streams
of all natural numbers. We have also seen that data and codata offer different
forms of extensibility: data makes it easy to add new functions, but adding
new elements requires changing existing code, while it is easy to add new
elements to codata but we must change existing code to add new functions.
Contextual Abstraction
All but the simplest programs depend on the context in which they run.
The number of available CPU cores is an example of context provided by
the computer, which a program might adapt to by changing how work is
distributed. Other forms of context include configuration read from files and
environment variables, and (we'll see a lot of this later) values created at
compile-time, such as serialization formats, in response to the type of some
method parameters.
Scala is one of the few languages that provides features for contextual
abstraction, known as implicits in Scala 2 or given instances in Scala 3. In
Scala these features are intimately related to types; types are used to select
between different available given instances and drive construction of given
instances at compile‐time.
Most Scala programmers are less confident with the features for contextual
abstraction than with other parts of the language, and they are often entirely
novel to programmers coming from other languages. Hence this chapter will
start by reviewing the abstractions formerly known as implicits: given instances
and using clauses. We will then look at one of their major uses, type classes¹.
Type classes allow us to extend existing types with new functionality, without
using traditional inheritance, and without altering the original source code.
¹The word “class” doesn’t strictly mean class in the Scala or Java sense.
Type classes are the core of Cats, which we will be exploring in the next part
of this book.
In section we’ll go through the main Scala language features for contextual
abstraction. Once we have a firm understanding of the mechanics of
contextual abstraction we’ll move on to their use.
The language features for contextual abstraction have changed name from
Scala 2 to Scala 3, but they work in largely the same way. In the table below
I show the Scala 3 features, and their Scala 2 equivalents. If you use Scala 2
you’ll find that most of the code works simply by replacing given with implicit
val and using with implicit.
Scala 3          Scala 2
given instance   implicit value
using clause     implicit parameter
We’ll start with using clauses. A using clause is a method parameter list that
starts with the using keyword. We use the term context parameters for the
parameters in a using clause.
The using keyword applies to all parameters in the list, so in add below both x
and y are context parameters.
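A sketch of such a method:

def add(using x: Int, y: Int): Int = x + y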
We can have normal parameter lists, and multiple using clauses, in the same
method.
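For example, double and addAll below are sketches consistent with the calls
that follow:

def double(using x: Int): Int = x * 2

def addAll(x: Int)(using y: Int)(using z: Int): Int =
  x + y + z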
double(using 1)
// res0: Int = 2
add(using 1, 2)
// res1: Int = 3
addAll(1)(using 2)(using 3)
// res2: Int = 6
However this is not the typical way to pass parameters. In fact we don't usually
explicitly pass parameters to using clauses at all. We usually use given instances
instead, so let's turn to them.
A given instance is a value that is defined with the given keyword. Here’s a
simple example.
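given theMagicNumber: Int = 3

We can use it like a normal value: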
theMagicNumber * 2
However, it’s more common to use them with a using clause. When we call a
method that has a using clause, and we do not explicitly supply values for the
context parameters, the compiler will look for given instances of the required
type in the given scope.
For example, we defined double above with a single Int context parameter.
The given instance we just defined, theMagicNumber, also has type Int. So if we
call double without providing any value for the context parameter the compiler
will provide the value theMagicNumber for us.
double
// res4: Int = 6
The same given instance will be used for multiple parameters in a using clause
with the same type, as in add defined above.
add
// res5: Int = 6
The above are the most important points for using clauses and given instances.
We’ll now turn to some of the details of their semantics.
Given instances are usually not explicitly passed to using clauses. Their whole
reason for existence is to get the compiler to do this for us. This could make
code hard to understand, so we need to be very clear about which given
instances are candidates to be supplied to a using clause. In this section we’ll
look at the given scope, which is all the places that the compiler will look for
given instances, and the special syntax for importing given instances.
The first rule we should know about the given scope is that it starts at the call
site, where the method with a using clause is called, not at the definition site
where the method is defined. This means the following code does not compile,
because the given instance is not in scope at the call site, even though it is in
scope at the definition site.
object A {
given a: Int = 1
def whichInt(using int: Int): Int = int
}
A.whichInt
// error:
// No given instance of type Int was found for parameter int of method
//   whichInt in object A
// A.whichInt
// ^^^^^^^^
The second rule, which we have been relying on in all our examples so far,
is that the given scope includes the lexical scope at the call site. The lexical
scope is where we usually look up the values associated with names (like the
names of method parameters or val declarations). This means the following
code works, as a is defined in a scope that includes the call site.
object A {
given a: Int = 1
object B {
C.whichInt
}
object C {
def whichInt(using int: Int): Int = int
}
}
However, if there are multiple given instances in the same scope the compiler
will not arbitrarily choose one. Instead it fails with an error telling us the choice
is ambiguous.
object A {
  given a: Int = 1
  given b: Int = 2

  def whichInt(using int: Int): Int = int

  whichInt
}
// error:
// Ambiguous given instances: both given instance a in object A and
// given instance b in object A match type Int of parameter int of
// method whichInt in object A
We can import given instances from other scopes, just like we can import
normal declarations, but we must explicitly say we want to import given
instances. The following code does not work because we have not explicitly
imported the given instances.
object A {
  given a: Int = 1

  def whichInt(using int: Int): Int = int
}
object B {
  import A.*

  whichInt
}
// error:
// No given instance of type Int was found for parameter int of method
//   whichInt in object A
//
// Note: given instance a in object A was not considered because it
//   was not imported with `import given`.
// whichInt
// ^
To fix this we import the given instance explicitly:

object B {
  import A.{given, *}

  whichInt
}
One final wrinkle: the given scope includes the companion objects of any type
involved in the type of the using clause. This is best illustrated with an example.
We’ll start by defining a type Sound that represents the sound made by its type
variable A, and a method soundOf to access that sound.
trait Sound[A] {
def sound: String
}
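The soundOf method simply summons the instance; its signature is inferred
from the calls below:

def soundOf[A](using sound: Sound[A]): String =
  sound.sound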
Now we’ll define some given instances. Notice that they are defined on the
relevant companion objects.
trait Cat
object Cat {
given catSound: Sound[Cat] =
new Sound[Cat]{
def sound: String = "meow"
}
}
trait Dog
object Dog {
given dogSound: Sound[Dog] =
new Sound[Dog]{
def sound: String = "woof"
}
}
When we call soundOf we don’t have to explicitly bring the instances into scope.
They are automatically in the given scope by virtue of being defined on the
companion objects of the types we use (Cat and Dog). If we had defined these
instances on the Sound companion object they would also be in the given scope;
when looking for a Sound[A] both the companion objects of Sound and A are in
scope.
soundOf[Cat]
// res12: String = "meow"
soundOf[Dog]
// res13: String = "woof"
Notice that given instance selection is based entirely on types. We don’t even
pass any values to soundOf! This means given instances are easiest to use
when there is only one instance for each type. In this case we can just put
the instances on a relevant companion object and everything works out.
However, this is not always possible (though it’s often an indication of a bad
design if it is not). For cases where we need multiple instances for a type, we
can use the instance priority rules to select between them. We’ll look at the
three most important rules below.
The first rule is that explicitly passing an instance takes priority over everything
else.
given a: Int = 1
def whichInt(using int: Int): Int = int
whichInt(using 2)
// res15: Int = 2
The second rule is that instances in the lexical scope take priority over
instances in a companion object.
trait Sound[A] {
def sound: String
}
trait Cat
object Cat {
given catSound: Sound[Cat] =
new Sound[Cat]{
def sound: String = "meow"
}
}
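In the lexical scope we define another instance (named purr here for
illustration):

given purr: Sound[Cat] =
  new Sound[Cat]{
    def sound: String = "purr"
  }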
soundOf[Cat]
// res17: String = "purr"
The final rule is that instances in a closer lexical scope take preference over
those further away.
{
given growl: Sound[Cat] =
new Sound[Cat]{
def sound: String = "growl"
}
{
given mew: Sound[Cat] =
new Sound[Cat]{
def sound: String = "mew"
}
soundOf[Cat]
}
}
We’re now seen most of the details of how given instances and using clauses
work. This is a craft level explanation, and it naturally leads to the question:
where would use these tools? This is what we’ll address next, where we look
at type classes and their implementation in Scala.
Let’s now look at how type classes are implemented. There are three
important components to a type class: the type class itself, which defines an
interface, type class instances, which implement the type class for particular
types, and the methods that use type classes. The table below shows the
language features that correspond to each component.
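As a running example we'll use a simplified JSON serialization library. The
Json representation below is a sketch consistent with the output shown later
in this section:

// A very simple JSON abstract syntax tree.
sealed trait Json
final case class JsObject(get: Map[String, Json]) extends Json
final case class JsString(get: String) extends Json
final case class JsNumber(get: Double) extends Json
case object JsNull extends Json

// The "serialize to JSON" behaviour is encoded in this trait: our type class.
trait JsonWriter[A] {
  def write(value: A): Json
}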
JsonWriter is our type class in this example, with Json and its subtypes
providing supporting code. When we come to implement instances of
JsonWriter, the type parameter A will be the concrete type of data we are
writing.
The instances of a type class provide implementations of the type class for
specific types we care about, which can include types from the Scala standard
library and types from our domain model.
object JsonWriterInstances {
given stringWriter: JsonWriter[String] =
new JsonWriter[String] {
def write(value: String): Json =
JsString(value)
}
// etc...
}
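The Person definition and its instance might look like this sketch:

final case class Person(name: String, email: String)

given JsonWriter[Person] with
  def write(value: Person): Json =
    JsObject(
      Map(
        "name" -> JsString(value.name),
        "email" -> JsString(value.email)
      )
    )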
In this example we define two type class instances of JsonWriter, one for
String and one for Person. The definition for String uses the syntax we saw
in the previous section. The definition for Person uses two bits of syntax
that are new in Scala 3. Firstly, writing given JsonWriter[Person] creates an
anonymous given instance. We declare just the type and don’t need to name
the instance. This is fine because we don’t usually need to refer to given
instances by name. The second bit of syntax is the use of with to implement a
trait directly without having to write out new JsonWriter[Person] and so on.
A type class use is any functionality that requires a type class instance to work.
In Scala this means any method that accepts instances of the type class as part
of a using clause.
We’re going to look at two patterns of type class usage, which we call interface
objects and interface syntax. You’ll find these in Cats and other libraries.
The simplest way of creating an interface that uses a type class is to place
methods in a singleton object:
object Json {
def toJson[A](value: A)(using w: JsonWriter[A]): Json =
w.write(value)
}
To use this object, we import any type class instances we care about and call
the relevant method:
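import JsonWriterInstances.given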
Json.toJson(Person("Dave", "[email protected]"))
// res1: Json = JsObject(
// get = Map(
// "name" -> JsString(get = "Dave"),
// "email" -> JsString(get = "[email protected]")
// )
// )
The compiler spots that we’ve called the toJson method without providing
the given instances. It tries to fix this by searching for given instances of the
relevant types and inserting them at the call site.
Alternatively, we can define the interface as extension methods² on the types
we want to extend:

object JsonSyntax {
extension [A](value: A) {
def toJson(using w: JsonWriter[A]): Json =
w.write(value)
}
}
We use interface syntax by importing it alongside the instances for the types
we need:
²You may occasionally see extension methods referred to as “type enrichment” or “pimping”.
These are older terms that we don’t use anymore.
import JsonWriterInstances.given
import JsonSyntax.*
Person("Dave", "[email protected]").toJson
// res2: Json = JsObject(
// get = Map(
// "name" -> JsString(get = "Dave"),
// "email" -> JsString(get = "[email protected]")
// )
// )
In Scala 3 we can instead define the extension method directly on the type
class trait:

trait JsonWriter[A] {
extension (value: A) def toJson: Json
}
object JsonWriter {
given stringWriter: JsonWriter[String] =
new JsonWriter[String] {
extension (value: String)
def toJson: Json = JsString(value)
}
// etc...
}
"A string".toJson
// error:
// value toJson is not a member of String
// "A string".toJson
// ^^^^^^^^^^^^^^^^^
This means that users will have to explicitly import at least the instances
for the built‐in types (for which we cannot modify the companion
objects).
import JsonWriter.given
"A string".toJson
// res5: Json = JsString(get = "A string")
The Scala standard library provides a generic type class interface called summon.
Its definition is very simple:
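inline def summon[A](using value: A): A =
  value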
We can use summon to summon any value in the given scope. We provide the
type we want and summon does the rest:
summon[JsonWriter[String]]
// res6: JsonWriter[String] = repl.MdocSession$MdocApp3$JsonWriter$$anon$7@2840ad29
So far we’ve seen type classes as a way to get the compiler to pass values
to methods. This is nice but it does seem like we’ve introduced a lot of new
concepts for a small gain. The real power of type classes lies in the compiler’s
ability to combine given instances to construct new given instances. This is
known as type class composition.
As an example, consider defining JsonWriter instances for Option. We would
need one for every type in our application:

given optionIntWriter: JsonWriter[Option[Int]] =
  ???

given optionPersonWriter: JsonWriter[Option[Person]] =
  ???

// and so on...
However, this approach clearly doesn’t scale. We end up requiring two given
instances for every type A in our application: one for A and one for Option[A].
Fortunately, we can abstract the code for handling Option[A] into a common
constructor based on the instance for A:
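A sketch of such a constructor, using the Json encoding above:

def optionWriter[A](writer: JsonWriter[A]): JsonWriter[Option[A]] =
  new JsonWriter[Option[A]] {
    def write(option: Option[A]): Json =
      option match {
        case Some(aValue) => writer.write(aValue)
        case None         => JsNull
      }
  }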
Here is the same code written out using a parameterized given instance:
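given optionWriter[A](using writer: JsonWriter[A]): JsonWriter[Option[A]] with
  def write(option: Option[A]): Json =
    option match {
      case Some(aValue) => writer.write(aValue)
      case None         => JsNull
    }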
Json.toJson(Option("A string"))
In this way, given instance resolution becomes a search through the space of
possible combinations of given instance, to find a combination that creates a
type class instance of the correct overall type.
In Scala 2 we can achieve the same effect with an implicit method with
implicit parameters. Here’s the Scala 2 equivalent of optionWriter above.
Make sure you make the method’s parameter implicit! If you don’t, you’ll
end up defining an implicit conversion. Implicit conversion is an older
programming pattern that is frowned upon in modern Scala code. Fortunately,
the compiler will warn you should you do this.
We’ve have now seen the mechanics of type classes: they are a specific
arrangement of trait, given instances, and using clauses. This is a very craft‐
level explanation. Let’s now raise the level of the explanation with three
different views of type classes.
The first view goes back to Chapter 3, where we looked at codata. The type class
itself—the trait—is an example of codata with the usual advantages of codata
(we can easily add implementations) and disadvantages (we cannot easily
change the interface). Given instances and using clauses add the ability to
choose the codata implementation based on the type of the context parameter
and the instances in the given scope, and to compose instances from smaller
components.
Raising the level of abstraction again, we can say that type classes allow us to
implement functionality (the type class instance) separately from the type to
which it applies, so that the implementation only needs to be defined at the
point of the use—the call site—not at the point of declaration.
Raising the level again, we can say type classes allow us to implement ad‐hoc
polymorphism. I find it easiest to understand ad‐hoc polymorphism in contrast
to parametric polymorphism. Parametric polymorphism is what we get with
type parameters, also known as generic types. It allows us to treat all types
in a uniform way. For example, the following function calculates the length of
any list of an arbitrary type A.
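A sketch of such a function:

def length[A](list: List[A]): Int =
  list match {
    case Nil => 0
    case _ :: tail => 1 + length(tail)
  }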
import scala.math.Numeric
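Ad-hoc polymorphism, in contrast, allows us to define functionality only for
the types that support it. For example, here is a sketch of a sum that works
for any type with a Numeric instance:

def sum[A](list: List[A])(using numeric: Numeric[A]): A =
  list.foldLeft(numeric.zero)(numeric.plus)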
4.5 Exercise: Display Library

Scala provides a toString method to let us convert any value to a String. This
method comes with a few disadvantages: it is implemented for every type in the
language, many implementations are of limited use, and we cannot opt in to
the behaviour for only some types. Let's define a Display type class to work
around these problems:

1. Define a type class Display[A] containing a single method display. display
should accept a value of type A and return a String.
2. Create instances of Display for String and Int on the Display companion
object.
3. Define an object Display with two generic interface methods: display, which
accepts a value of type A and a Display[A] and returns a String; and print,
which accepts the same parameters and prints the value to the console.
The code above forms a general purpose printing library that we can use in
multiple applications. Let’s define an “application” now that uses the library.
First we’ll define a data type to represent a well‐known type of furry animal:
Next we’ll create an implementation of Display for Cat that returns content in
the following format:
Finally, use the type class on the console or in a short demo app: create a Cat
and print it to the console:
// Define a cat:
val cat = Cat(/* ... */)
Let’s make our printing library easier to use by adding extension methods for
its functionality:
3. Use the extension methods to print the example Cat you created in the
previous exercise.
In this section we’ll discuss how variance interacts with type class instance
selection. Variance is one of the darker corners of Scala’s type system, so we
start by reviewing it before moving on to its interaction with type classes.
4.6.1 Variance
Variance applies to any type constructor, which is the F in a type F[A]. So,
for example, List, Option, and JsonWriter are all type constructors. A type
constructor must have at least one type parameter, and may have more. So
Either, with two type parameters, is also a type constructor.
Variance concerns the subtyping relationship between types F[A] and F[B],
given a subtyping relationship between A and B. If B is a subtype of A, is F[B]
a subtype of F[A]? The answer depends on the variance of F. There are three
cases: covariance, contravariance, and invariance.
4.6.2 Covariance
Covariance means that the type F[B] is a subtype of the type F[A] if B is a
subtype of A. This is useful for modelling many types, including collections
like List and Option:
trait List[+A]
trait Option[+A]
Generally speaking, covariance is used for outputs: data that we can later get
out of a container type such as List, or otherwise returned by some method.
4.6.3 Contravariance
Confusingly, contravariance means that the type F[B] is a subtype of F[A] if A
is a subtype of B. We declare a contravariant type constructor with a - in the
type parameter:

trait F[-A]

Contravariance is useful for modelling inputs, such as the value consumed by
our JsonWriter type class:

trait JsonWriter[-A] {
def write(value: A): Json
}
Let’s unpack this a bit further. Remember that variance is all about the ability
to substitute one value for another. Consider a scenario where we have two
values, one of type Shape and one of type Circle, and two JsonWriters, one for
Shape and one for Circle:
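A sketch of the setup (implementations elided with ???):

sealed trait Shape
final case class Circle(radius: Double) extends Shape

val shape: Shape = ???
val circle: Circle = ???

val shapeWriter: JsonWriter[Shape] = ???
val circleWriter: JsonWriter[Circle] = ???

def format[A](value: A, writer: JsonWriter[A]): Json =
  writer.write(value)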
Now ask yourself the question: “Which combinations of value and writer can
I pass to format?” We can write a Circle with either writer because all Circles
are Shapes. Conversely, we can’t write a Shape with circleWriter because not
all Shapes are Circles.
4.6.4 Invariance
Invariance is the easiest situation to describe. It’s what we get when we don’t
write a + or - in a type constructor:
trait F[A]
This means the types F[A] and F[B] are never subtypes of one another, no
matter what the relationship between A and B. This is the default semantics
for Scala type constructors.
When the compiler searches for a given instance it looks for one matching the
type or subtype. Thus we can use variance annotations to control type class
instance selection to some extent.
There are two issues that tend to arise. Let's imagine we have an algebraic
data type like:

enum A {
  case B
  case C
}

The first issue is whether an instance defined for the supertype A is selected
when we ask for an instance for the subtype B or C. The second is whether an
instance for a subtype is selected in preference to an instance for a supertype.
It turns out we can't have both at once. The three choices give us behaviour
as follows:

Type class variance             Invariant   Covariant   Contravariant
Supertype instance used?        No          No          Yes
More specific type preferred?   No          Yes         No
Let’s see some examples, using the following types to show the subtyping
relationship.
trait Animal
trait Cat extends Animal
trait DomesticShorthair extends Cat
Now we’ll define three different type classes for the three types of variance,
and define an instance of each for the Cat type.
trait Inv[A] {
def result: String
}
object Inv {
  given Inv[Cat] with
    def result = "Invariant"

  def apply[A](using instance: Inv[A]): String =
    instance.result
}
trait Co[+A] {
def result: String
}
object Co {
  given Co[Cat] with
    def result = "Covariant"

  def apply[A](using instance: Co[A]): String =
    instance.result
}
trait Contra[-A] {
def result: String
}
object Contra {
  given Contra[Cat] with
    def result = "Contravariant"

  def apply[A](using instance: Contra[A]): String =
    instance.result
}
Now the cases that work, all of which select the Cat instance. For the invariant
case we must ask for exactly the Cat type. For the covariant case we can ask
for a supertype of Cat. For contravariance we can ask for a subtype of Cat.
Inv[Cat]
// res1: String = "Invariant"
Co[Animal]
// res2: String = "Covariant"
Co[Cat]
// res3: String = "Covariant"
Contra[DomesticShorthair]
// res4: String = "Contravariant"
Contra[Cat]
// res5: String = "Contravariant"
Now cases that fail. With invariance any type that is not Cat will fail. So the
supertype fails:
Inv[Animal]
// error:
// No given instance of type MdocApp0.this.Inv[MdocApp0.this.Animal]
//   was found for parameter instance of method apply in object Inv
The subtype fails too.

Inv[DomesticShorthair]
// error:
// No given instance of type MdocApp0.this.Inv[MdocApp0.this.
//   DomesticShorthair] was found for parameter instance of method
//   apply in object Inv
Covariance fails for any subtype of the type for which the instance is declared.
Co[DomesticShorthair]
// error:
// No given instance of type MdocApp0.this.Co[MdocApp0.this.
//   DomesticShorthair] was found for parameter instance of method
//   apply in object Co
Contravariance fails for any supertype of the type for which the instance is
declared.
Contra[Animal]
// error:
// No given instance of type MdocApp0.this.Contra[MdocApp0.this.
//   Animal] was found for parameter instance of method apply in
//   object Contra
It’s clear there is no perfect system. The most choice is to use invariant type
classes. This allows us to specify more specific instances for subtypes if we
want. It does mean that if we have, for example, a value of type Some[Int], our
type class instance for Option will not be used. We can solve this problem with
a type annotation like Some(1): Option[Int] or by using “smart constructors”
like the Option.apply, Option.empty, some, and none methods we saw in Section
6.3.3.
4.7 Conclusions
In this chapter we took a first look at type classes. We saw the components
that make up a type class: the type class itself, a trait defining an interface;
type class instances, which are given instances implementing that trait for
particular types; and type class uses, which are methods with using clauses
accepting instances.
We saw that type classes can be composed from components using type class
composition. This is one form of metaprogramming in Scala, where we can
get the compiler to do work for us based on our program’s types.
We can view type classes as marrying codata with tools to select and compose
implementations based on type. We can also view type classes as shifting
implementation from the definition site to the call site. Finally, we can see
type classes as a mechanism for ad‐hoc polymorphism, allowing us to define
common functionality for otherwise unrelated types.
Type classes were first described in Kaes [1988] and Wadler and Blott [1989].
Oliveira et al. [2010] details the encoding of type classes in Scala 2, and
compares Scala’s and Haskell’s approach to type classes. Note that type
classes are not restricted to Haskell and Scala. For examples, Rust’s traits are
essentially type classes.
Reified Interpreters

Consider the following example, which defines a route in a hypothetical HTTP
routing library:
val route =
Route(
Request.get(Path.root / "user" / Param.int),
Response.ok(Entity.text)
).handle(userId => s"You asked for the user ${userId.toString}")
This defines a route, which matches GET requests for the path "/user/<int>",
and responds with an Ok containing text. This kind of routing library is an
example of the interpreter strategy, the subject of this chapter.
We’ll start with a basic implementation strategy that uses algebraic data types
and structural recursion. We’ll then look at transformations to turn our
interpreter into a version that avoids using the stack and hence avoids the
possibility of stack overflow.
We’ll start this case study by briefly describing the usual task for regular
expressions—matching text—and then take a more theoretical view. We’ll
then move on to implementation.
In Scala we can create a regular expression by calling the r method on String.
Here's a regular expression that matches exactly the string "Scala".
We can see that it matches only "Scala" and fails if we give it a shorter or
longer input.
regexp.matches("Scala")
// res0: Boolean = true
regexp.matches("Sca")
// res1: Boolean = false
regexp.matches("Scalaland")
// res2: Boolean = false
There are some characters that have a special meaning within the String
describing a regular expression. For example, the character * matches the
preceding character zero or more times.
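For example, "Scala*" matches "Scal" followed by zero or more of the character
a (the pattern here is an assumption consistent with the results below):

val regexp = "Scala*".r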
regexp.matches("Scal")
// res4: Boolean = true
regexp.matches("Scala")
// res5: Boolean = true
regexp.matches("Scalaaaa")
// res6: Boolean = true
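We can use parentheses to group a sequence of characters; the pattern below
is again an assumption consistent with the results that follow:

val regexp = "Scala(la)*".r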
regexp.matches("Scala")
// res8: Boolean = true
regexp.matches("Scalalalala")
// res9: Boolean = true
regexp.matches("Sca")
// res10: Boolean = false
regexp.matches("Scalal")
// res11: Boolean = false
regexp.matches("Scalaland")
// res12: Boolean = false
That’s all I’m going to say about Scala’s built‐in regular expressions. If you’d like
to learn more there are many resources online. The JDK documentation is one
example, which describes all the features available in the JVM implementation
of regular expressions.
More abstractly, a regular expression is one of the following:

1. the empty regular expression, which matches nothing;
2. a string, which matches exactly that string;
3. the concatenation of two regular expressions, which matches the first
followed by the second;
4. the union of two regular expressions, which matches if either matches; and
5. the repetition of a regular expression (the Kleene star), which matches
zero or more occurrences of the underlying expression.

This kind of description may seem very abstract if you're not used to it. It
is very useful for our purposes because it defines a minimal API that we can
easily implement. Let’s walk through the description and see how each part
relates to code.
The empty regular expression defines a constructor with type () => Regexp,
which we can simplify to a value of type Regexp. In Scala we put constructors
on the companion object, so this tells us we need
object Regexp {
val empty: Regexp =
???
}
The second part tells us we need another constructor, this one with type
String => Regexp.
object Regexp {
  val empty: Regexp =
    ???

  def apply(string: String): Regexp =
    ???
}
The other three components all take a regular expression and produce a
regular expression. In Scala these will become methods on the Regexp type.
Let’s model this as a trait for now, and define these methods.
Concatenation we'll call ++, echoing the method on Scala's sequences.

trait Regexp {
def ++(that: Regexp): Regexp
}
Union we'll call orElse.

trait Regexp {
def ++(that: Regexp): Regexp
def orElse(that: Regexp): Regexp
}
Repetition we’ll call repeat, and define an alias * that matches how this
operation is written in conventional regular expressions.
trait Regexp {
def ++(that: Regexp): Regexp
def orElse(that: Regexp): Regexp
def repeat: Regexp
def `*`: Regexp = this.repeat
}
We’re missing one thing: a method to actually match our regular expression
against some input. Let’s call this method matches.
trait Regexp {
  def ++(that: Regexp): Regexp
  def orElse(that: Regexp): Regexp
  def repeat: Regexp
  def `*`: Regexp = this.repeat

  def matches(input: String): Boolean
}
This completes our API. Now we can turn to implementation. We’re going to
represent Regexp as an algebraic data type, and each method that returns a
Regexp will return an instance of this algebraic data type. What should be the
elements that make up the algebraic data type? There will be one element for
each method, and the constructor arguments will be exactly the parameters
passed to the method including the hidden this parameter for methods on the
trait.
enum Regexp {
  def ++(that: Regexp): Regexp =
    Append(this, that)

  def orElse(that: Regexp): Regexp =
    OrElse(this, that)

  def repeat: Regexp =
    Repeat(this)

  def `*`: Regexp = this.repeat

  case Append(left: Regexp, right: Regexp)
  case OrElse(first: Regexp, second: Regexp)
  case Repeat(source: Regexp)
  case Apply(string: String)
  case Empty
}

object Regexp {
  val empty: Regexp = Empty

  def apply(string: String): Regexp =
    Apply(string)
}
Now we can apply the usual strategies to complete the implementation. Let’s
reason independently by case, starting with the case for Empty. This case is
trivial as it always fails to match, so we just return false.
Let’s move on to the Append case. This should match if the left regular
expression matches the start of the input, and the right regular expression
matches starting where the left regular expression stopped. This has
uncovered a hidden requirement: we need to keep an index into the input
that tells us where we should start matching from. Using a nested method is
the easiest way to keep around additional information that we need. Here I’ve
created a nested method that returns an Option[Int]. The Int is the new index
to use, and we return an Option to indicate if the regular expression matched
or not.
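Here is a sketch of the complete matches implementation, defined inside the
Regexp enum; the details may differ from the original but follow the
description above:

def matches(input: String): Boolean = {
  def loop(regexp: Regexp, idx: Int): Option[Int] =
    regexp match {
      case Append(left, right) =>
        loop(left, idx).flatMap(i => loop(right, i))

      case OrElse(first, second) =>
        loop(first, idx).orElse(loop(second, idx))

      case Repeat(source) =>
        loop(source, idx)
          .flatMap(i => loop(regexp, i))
          .orElse(Some(idx))

      case Apply(string) =>
        Option.when(input.startsWith(string, idx))(idx + string.size)

      case Empty =>
        None
    }

  // A successful match must consume all the input.
  loop(this, 0).map(_ == input.size).getOrElse(false)
}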
The implementation for Repeat is a little tricky, so I’ll walk through the code.
The first line (loop(source, index)) is seeing if the source regular expression
matches. If it does we loop again, but on regexp (which is Repeat(source)), not
source. This is because we want to repeat an indefinite number of times. If we
looped on source we would only try twice. Remember that failing to match is
still a success; repeat matches zero or more times. This condition is handled
by the orElse clause.
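Here's the example regular expression from earlier, rewritten with our library
(the definition is an assumption consistent with the results below):

val regexp = Regexp("Sca") ++ Regexp("la") ++ Regexp("la").repeat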
regexp.matches("Scala")
// res14: Boolean = true
regexp.matches("Scalalalala")
// res15: Boolean = true
regexp.matches("Sca")
// res16: Boolean = false
regexp.matches("Scalal")
// res17: Boolean = false
regexp.matches("Scalaland")
// res18: Boolean = false
Success! At this point we could add many extensions to our library. For
example, regular expressions usually have a method (by convention denoted
+) that matches one or more times, and one that matches zero or once (usually
denoted ?). These are both conveniences we can build on our existing API.
However, our goal at the moment is to fully understand interpreters and
the implementation technique we’ve used here. So in the next section we’ll
discuss these in detail.
Before we do, note one difference from Scala's built-in regular expressions:
our orElse does not backtrack. Once the first alternative has matched, that
choice is never revisited, even if the overall match later fails.

val r1 = "(z|zxy)ab".r
val r2 = Regexp("z").orElse(Regexp("zxy")) ++ Regexp("ab")
r1.matches("zxyab")
// res19: Boolean = true
r2.matches("zxyab")
// res20: Boolean = false
5.2 Interpreters and Reification

All uses of the interpreter strategy have a particular structure to their methods.
There are three different kinds of methods:

1. constructors, which create programs from other types;
2. combinators, which build larger programs out of smaller programs; and
3. destructors, also known as interpreters, which run programs to produce
values or effects.
Once we’ve defined the Program algebraic data type, the interpreter becomes
a structural recursion on Program.
Exercise: Arithmetic
Now it’s your turn to practice using reification. Your task is to implement an
interpreter for arithmetic expressions. An expression is:

1. a literal number;
2. the addition of two expressions;
3. the subtraction of two expressions;
4. the multiplication of two expressions; or
5. the division of two expressions.

Add methods +, - and so on that make your system a bit nicer to use. Then
write some expressions and show that it works as expected.
5.3 Tail Recursive Interpreters

Structural recursion, as we have written it, uses the stack. This is not often a
problem, but particularly deep recursions can lead to the stack running out of
space. The solution is to write the interpreter in a tail recursive style. A tail
recursive program does not need to use any stack space, and so is sometimes
known as stack safe. Any program can be turned into an equivalent tail
recursive version, which therefore cannot run out of stack space.
Let’s start by seeing the problem. In Scala we can create a repeated String
using the * method.
"a" * 4
// res0: String = "aaaa"
Regexp("a").repeat.matches("a" * 4)
// res1: Boolean = true
However, if we make the input very long the interpreter will fail with a stack
overflow exception.
Regexp("a").repeat.matches("a" * 20000)
// java.lang.StackOverflowError
This is because the interpreter calls loop for each instance of a repeat, without
returning. However, all is not lost. We can rewrite the interpreter in a way
that consumes a fixed amount of stack space, and therefore match input that
is as large as we like.
Our starting point is tail calls. A tail call is a method call that does not take
any additional stack space. Only method calls that are in tail position are
candidates to be turned into tail calls. Even then, runtime limitations mean
that not all calls in tail position will be converted to tail calls.
A method call in tail position is a call that immediately returns the value
returned by the call. Let’s see an example. Below are two versions of a method
to calculate the sum of the integers from 0 to count.
def isntTailRecursive(count: Int): Int =
  count match {
    case 0 => 0
    case n => n + isntTailRecursive(n - 1)
  }

def isTailRecursive(count: Int): Int = {
  def loop(count: Int, accum: Int): Int =
    count match {
      case 0 => accum
      case n => loop(n - 1, accum + n)
    }

  loop(count, 0)
}

The call to isntTailRecursive in

case n => n + isntTailRecursive(n - 1)
is not in tail position, because the value returned by the call is then used in the
addition. However, the call to loop in

case n => loop(n - 1, accum + n)

is in tail position because the value returned by the call to loop is itself
immediately returned. Similarly, the call to loop in

loop(count, 0)

is also in tail position. However, a call in tail position only becomes a tail
call if the compiler converts it, and Scala only converts calls from a method to
itself. So the call

case n => loop(n - 1, accum + n)

is converted to a tail call, because loop is calling itself. However, the call

loop(count, 0)

is not converted to a tail call, because the call is from isTailRecursive to loop.
This will not cause issues with stack consumption, however, because this call
only happens once.
Scala supports three different platforms: the JVM, Javascript via Scala.js,
and native code via Scala Native. Each platform provides what is known
as a runtime, which is code that supports our Scala code when it is
running. The garbage collector, for example, is part of the runtime.
At the time of writing none of Scala’s runtimes support full tail calls.
However, there is reason to think this may change in the future. Project
Loom should eventually add support for tail calls to the JVM. Scala
Native is likely to support tail calls soon, as part of other work to
implement continuations. Tail calls have been part of the Javascript
specification for a long time, but most Javascript runtimes do not implement
them.
We can ask the Scala compiler to check that all self calls are in tail position by
adding the @tailrec annotation to a method. The code will fail to compile if
any calls from the method to itself are not in tail position.
import scala.annotation.tailrec
@tailrec
def isntTailRecursive(count: Int): Int =
count match {
case 0 => 0
case n => n + isntTailRecursive(n - 1)
}
// error:
// Cannot rewrite recursive call: it is not in tail position
// case n => n + isntTailRecursive(n - 1)
// ^^^^^^^^^^^^^^^^^^^^^^^^
We can check the tail recursive version is truly tail recursive by passing it a
very large input. The non‐tail recursive version crashes.
isntTailRecursive(100000)
// java.lang.StackOverflowError
isTailRecursive(100000)
// res4: Int = 705082704
Now that we know about tail calls, how do we convert the regular
expression interpreter to use them? Any program can be converted to an
equivalent program with all calls in tail position. This conversion is known as
continuation‐passing style or CPS for short. Our first step to understanding
CPS is to understand continuations.
Let's return to the Append case in our regular expression interpreter:

case Append(left, right) =>
  loop(left, idx).flatMap(i => loop(right, i))
What happens next when we call loop(left, idx)? Let’s give the name result
to the value returned by the call to loop. The answer is we run result.flatMap
(i => loop(right, i)). We can represent this as a function, to which we pass
result:
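(result: Option[Int]) => result.flatMap(i => loop(right, i))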
As is often the case, there is a distinction between the concept and the
representation. The concept of continuations always exists in code. A
continuation means “what happens next”. In other words, it is the program’s
control flow. There is always some concept of control flow, even if it is just
“the program halts”. We can represent continuations as functions in code. This
transforms the abstract concept of continuations into concrete values in our
program, and hence reifies them.
With continuations reified as functions, we can transform our code so that,
instead of returning a value, it calls the continuation with the value. This is
another example of duality, in this case between returning a value and calling
a continuation.
Let’s see how this works. We’ll start with a simple example written in the
normal style, also known as direct style.
(1 + 2) * 3
// res5: Int = 9
To rewrite this in CPS style we need to create replacements for + and * with
the extra continuation parameter.
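A sketch of these replacements:

type Continuation = Int => Int

def add(x: Int, y: Int, k: Continuation): Int =
  k(x + y)

def mul(x: Int, y: Int, k: Continuation): Int =
  k(x * y)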
Now we can rewrite our example in CPS. (1 + 2) becomes add(1, 2, k), but
what is k, the continuation? What we do next is multiply the result by 3.
Thus the continuation is a => mul(a, 3, k2). What is the next continuation,
k2? Here the program finishes, so we just return the value with the identity
continuation b => b. Put it all together and we get
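add(1, 2, a => mul(a, 3, b => b))
// 9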
Notice that every continuation call is in tail position in the CPS code. This
means that code written in CPS can potentially consume no stack space.
Now we can return to the interpreter loop for Regexp. We are going to CPS it,
so we need to add an extra parameter for the continuation. In this case the
continuation accepts and returns the result type of loop: Option[Int].

type Continuation = Option[Int] => Option[Int]
def loop(
regexp: Regexp,
idx: Int,
cont: Continuation
): Option[Int] =
regexp match {
case Append(left, right) =>
val k: Continuation = _ match {
case None => cont(None)
case Some(i) => loop(right, i, cont)
}
loop(left, idx, k)
      case Apply(string) =>
        cont(Option.when(input.startsWith(string, idx))(idx + string.size))
    }
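The remaining cases, elided above, follow the same pattern; here is a sketch:

case OrElse(first, second) =>
  val k: Continuation = _ match {
    case None => loop(second, idx, cont)
    case some => cont(some)
  }
  loop(first, idx, k)

case Repeat(source) =>
  val k: Continuation = _ match {
    case None => cont(Some(idx))
    case Some(i) => loop(regexp, i, cont)
  }
  loop(source, idx, k)

case Empty =>
  cont(None)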
Every call in this interpreter loop is in tail position. However Scala cannot
convert these to tail calls because the calls go from loop to a continuation
and vice versa. To make the interpreter fully stack safe we need to add
trampolining.
5.3.4 Trampolining
Earlier we said that CPS utilizes the duality between function calls and returns:
instead of returning a value we call a function with a value. This allows us to
transform our code so it only has calls in tail positions. However, we still have
a problem with stack safety. Scala’s runtimes don’t support full tail calls, so
calls from a continuation to loop or from loop to a continuation will use a stack
frame. We can use this same duality to avoid using the stack by, instead of
making a call, returning a value that reifies the call we want to make. This idea
is the core of trampolining. Let’s see it in action, which will help clear up what
exactly this all means.
Our first step is to reify all the method calls made by the interpreter loop and
the continuations. There are three cases: calls to loop, calls to a continuation,
and, to avoid an infinite loop, the case when we’re done.
enum Call {
case Loop(regexp: Regexp, index: Int, continuation: Continuation)
case Continue(index: Option[Int], continuation: Continuation)
case Done(index: Option[Int])
}
Now we update loop to return instances of Call instead of making the calls
directly.
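A sketch of the updated loop; continuations now also return Call:

type Continuation = Option[Int] => Call

def loop(regexp: Regexp, idx: Int, cont: Continuation): Call =
  regexp match {
    case Append(left, right) =>
      val k: Continuation = _ match {
        case None => Call.Continue(None, cont)
        case Some(i) => Call.Loop(right, i, cont)
      }
      Call.Loop(left, idx, k)

    // ...the other cases are updated in the same way...
  }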
This gives us an interpreter loop that returns values instead of making calls,
and so does not consume stack space. However, we need to actually make
these calls at some point, and doing this is the job of the trampoline. The
trampoline is simply a tail recursive loop that makes calls until it reaches Done.
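A sketch of the trampoline:

def trampoline(next: Call): Option[Int] =
  next match {
    case Call.Loop(regexp, index, continuation) =>
      trampoline(loop(regexp, index, continuation))
    case Call.Continue(index, continuation) =>
      trampoline(continuation(index))
    case Call.Done(index) =>
      index
  }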
Now every call has a corresponding return, so the stack usage is limited. Our
interpreter can handle input of any size, up to the limits of available memory.
With these changes in place, our earlier stack-overflowing example now runs
successfully:

Regexp("a").repeat.matches("a" * 20000)
// true
Doing a full CPS conversion and trampoline can be quite involved. Some
methods can be made tail recursive without so large a change. Remember the
isntTailRecursive and isTailRecursive examples we looked at earlier? The
tail recursive version doesn't seem to involve the complexity of CPS. How can
we relate this to what we've just learned, and when can we avoid the work of
CPS and trampolining?
Let’s use substitution to show how the stack is used by each method, for a
small value of count.
isntTailRecursive(2)
// expands to
(2 match {
case 0 => 0
case n => n + isntTailRecursive(n - 1)
})
// expands to
(2 + isntTailRecursive(1))
// expands to
(2 + (1 match {
case 0 => 0
case n => n + isntTailRecursive(n - 1)
}))
// expands to
(2 + (1 + isntTailRecursive(0)))
// expands to
(2 + (1 + (0 match {
case 0 => 0
case n => n + isntTailRecursive(n - 1)
})))
// expands to
(2 + (1 + (0)))
// expands to
3
Here each set of brackets indicates a new method call and hence a stack frame
allocation.
isTailRecursive(2)
// expands to
(loop(2, 0))
// expands to
(2 match {
case 0 => 0
case n => loop(n - 1, 0 + n)
})
// expands to
(loop(1, 2))
// call to loop is a tail call, so no stack frame is allocated
// expands to
(1 match {
case 0 => 2
case n => loop(n - 1, 2 + n)
})
// expands to
(loop(0, 3))
// call to loop is a tail call, so no stack frame is allocated
// expands to
(0 match {
case 0 => 3
case n => loop(n - 1, 3 + n)
})
// expands to
(3)
// expands to
3
This doesn’t explain, though, how we come to realize that addition is the
correct operation to use. The second criteria is that we don’t need any
memory beyond the partial result calculated from the data we’ve already seen.
Some implications of this are that we can stop at any time and have a usable
result, and that we are only applying a single operation to the data. This is
not the case in the regular expression example. For example, we have the
following code in the Append case:
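loop(left, idx).flatMap(i => loop(right, i))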
To compute the result for the Append we need to compute and combine results
from both left and right. So when we have computed the result for right we
need to remember both the result from left and that we’re combining the two
results using the rule for Append rather than, say, OrElse. It’s remembering this
that is exactly what the continuation does, and what stops us from using the
easy method we saw when summing the elements of a list.
You might be wondering how we handle tree‐shaped data with this technique.
One consequence of an associative operation is that we can transform any
sequence of operations into a list-shaped sequence. If, for example, we have
an expression tree that suggests we should call operations in the order
(1 + 2) + (3 + 4) (where I'm using + to indicate the operation) we can rewrite it
to (((1 + 2) + 3) + 4) via associativity. So we can transform our tree into a list
and then apply the recipe above.
5.4 Conclusions
In this chapter we’ve discussed why we might want to build interpreters, and
seen techniques for building them. To recap, the core of the interpreter
strategy is a separation between description and action. The description is
the program, and the interpreter is the action that carries out the program.
This separation allows for composition of programs, and managing effects
by delaying them until the time the program is run. We sometimes call this
structure an algebra, with constructors and combinators defining programs and
destructors defining interpreters. Although the name of the strategy focuses
on the interpreter, the design of the program is just as important, as it is the
user interface through which the programmer interacts with the system.
Stack‐safe interpreters are important in many situations, but the code is harder
to read than the basic structural recursion. In some contexts a basic interpreter
may be just fine. It’s unlikely to run out of stack space when evaluating
a straightforward expression tree, as in the arithmetic example. The depth
of such a tree grows logarithmically with the number of elements, so only
extremely large trees will have sufficient depth that stack safety becomes
relevant. However, in the regular expression example the stack consumption
is determined not by the depth of the regular expression tree, but by the length
of the input being matched. In this situation stack safety is more important.
There may still be other constraints that allow a simpler implementation. For
example, we may know the library will only be used in situations where inputs
are guaranteed to be small. As always, only use coding techniques where they
make sense.
Type Classes
Using Cats
In this Chapter we’ll learn how to use the Cats library. Cats provides two main
things: type classes and their instances, and some useful data structures. Our
focus will mostly be on the type classes, though we will touch on the data
structures where appropriate.
The easiest, and recommended, way to use Cats is to add the following
imports:
import cats.*
import cats.syntax.all.*
The first import adds all the type classes (and makes their instances available,
as they are found in the companion objects). The second import adds the
syntax helpers, which make the type classes easier to work with. Note we
don’t need to import cats.{*, given} as, at the time of writing, Cats is written
in Scala 2 style (using implicits) and these are imported by the wildcard
import.
If we want to use Cats' data structures, we also need

import cats.data.*
Let’s now see how we work with Cats, using cats.Show as an example.
Show is Cats’ equivalent of the Display type class we defined in Section 4.5.
It provides a mechanism for producing developer‐friendly console output
without using toString. Here’s an abbreviated definition:
package cats
trait Show[A] {
def show(value: A): String
}
The easiest way to use Show is with the wildcard import above. However, we
can also import Show directly from the cats package:
import cats.Show
The companion object of every Cats type class has an apply method that
locates an instance for any type we specify:
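val showInt = Show.apply[Int]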
showInt.show(42)
// res0: String = "42"
The syntax imported from cats.syntax.all adds a show extension method:

42.show
// res1: String = "42"
If, for some reason, we wanted just the syntax for show, we could import
cats.syntax.show.
We can define an instance of Show simply by implementing the trait for a given
type:
import java.util.Date
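This instance is a sketch that produces the output shown below:

given dateShow: Show[Date] with
  def show(date: Date): String =
    s"${date.getTime}ms since the epoch."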
new Date().show
// res2: String = "1744285799966ms since the epoch."
object Show {
  // Convert a function to a `Show` instance:
  def show[A](f: A => String): Show[A] =
    ???

  // Create a `Show` instance from a `toString` method:
  def fromToString[A]: Show[A] =
    ???
}
These allow us to quickly construct instances with less ceremony than defining
them from scratch:
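val dateShow: Show[Date] =
  Show.show(date => s"${date.getTime}ms since the epoch.")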
As you can see, the code using construction methods is much terser than the
code without. Many type classes in Cats provide helper methods like these
for constructing instances, either from scratch or by transforming existing
instances for other types.
Re‐implement the Cat application from Section 4.5.1 using Show instead of
Display.
Then use the type class on the console or in a short demo app: create a Cat
and print it to the console:
// Define a cat:
val cat = Cat(/* ... */)
6.3 Example: Eq
We will finish off this chapter by looking at another useful type class: cats.
Eq. Eq is designed to support type‐safe equality and address annoyances using
Scala’s built‐in == operator.
Almost every Scala developer has written code like this before:
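List(1, 2, 3).map(Option(_)).filter(item => item == 1)
// List()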
Ok, many of you won’t have made such a simple mistake as this, but the
principle is sound. The predicate in the filter clause always returns false
because it is comparing an Int to an Option[Int].
package cats
trait Eq[A] {
def eqv(a: A, b: A): Boolean
// other concrete methods based on eqv...
}
import cats.*
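We can obtain an instance with the apply method on the companion object:

val eqInt = Eq[Int]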
eqInt.eqv(123, 123)
// res1: Boolean = true
eqInt.eqv(123, 234)
// res2: Boolean = false
eqInt.eqv(123, "234")
// error:
// Found: ("234" : String)
// Required: Int
// eqInt.eqv(123, "234")
// ^^^^^
We can also import the interface syntax in cats.syntax.eq to use the === and
=!= methods:
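import cats.syntax.eq.*

123 === 123
// true

123 =!= 234
// true

However, the comparison below does not compile:

Some(1) === None
// error: no Eq instance found for Some[Int] (message abbreviated)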
We have received an error here because the types don’t quite match up. We
have Eq instances in scope for Int and Option[Int] but the values we are
comparing are of type Some[Int]. To fix the issue we have to re‐type the
arguments as Option[Int]:
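(Some(1): Option[Int]) === (None: Option[Int])
// false

Alternatively, we can use the Option.apply and Option.empty smart
constructors:

Option(1) === Option.empty[Int]
// false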
We can define our own instances of Eq using the Eq.instance method, which
accepts a function of type (A, A)=> Boolean and returns an Eq[A]:
import java.util.Date
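A sketch comparing dates by their timestamps:

given dateEq: Eq[Date] =
  Eq.instance[Date] { (date1, date2) =>
    date1.getTime === date2.getTime
  }

val x = new Date() // now
val y = new Date() // a bit later than now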
x === x
// res12: Boolean = true
x === y
// res13: Boolean = true
Use this to compare the following pairs of objects for equality and inequality:
Monoids and Semigroups

In this chapter we explore our first type classes, monoid and semigroup. These
allow us to add or combine values. There are instances for Ints, Strings, Lists
, Options, and many more. Let’s start by looking at a few simple types and
operations to see what common principles we can extract.
Addition of Ints is a binary operation that is closed, meaning that adding two
Ints always produces another Int:
2 + 1
// res0: Int = 3
There is also the identity element 0, with the property that a + 0 == 0 + a == a
for any Int a:

2 + 0
// res1: Int = 2
0 + 2
// res2: Int = 2
There are also other properties of addition. For instance, it doesn't matter
how we group additions because we always get the same result. This is the
property known as associativity:
(1 + 2) + 3
// res3: Int = 6
1 + (2 + 3)
// res4: Int = 6
The same properties also apply to multiplication, provided we use 1 as the identity instead of 0:
1 * 3
// res5: Int = 3
3 * 1
// res6: Int = 3
(1 * 2) * 3
// res7: Int = 6
1 * (2 * 3)
// res8: Int = 6
We can also add Strings, using string concatenation as our binary operator:
"One" ++ "two"
// res9: String = "Onetwo"
"" ++ "Hello"
// res10: String = "Hello"
"Hello" ++ ""
// res11: String = "Hello"
Note that we used ++ above instead of the more usual + to suggest a parallel
with sequences. We can do the same with other types of sequence, using
concatenation as the binary operator and the empty sequence as our identity.
These properties—closure, associativity, and identity—define a monoid. Formally, a monoid for a type A is:

• an operation combine with type (A, A) => A;
• an element empty of type A.

This definition translates nicely into Scala code. Here is a simplified version of the definition from Cats:
trait Monoid[A] {
def combine(x: A, y: A): A
def empty: A
}
In addition to providing the combine and empty operations, monoids must formally obey several laws: combine must be associative and empty must be an identity element:

def associativeLaw[A](x: A, y: A, z: A)
(using m: Monoid[A]): Boolean = {
m.combine(x, m.combine(y, z)) ==
m.combine(m.combine(x, y), z)
}
def identityLaw[A](x: A)
(using m: Monoid[A]): Boolean = {
(m.combine(x, m.empty) == x) &&
(m.combine(m.empty, x) == x)
}
Integer subtraction, for example, is not a monoid because subtraction is not associative:

(1 - 2) - 3
// res14: Int = -4
1 - (2 - 3)
// res15: Int = 2
In practice we only need to think about laws when we are writing our own
Monoid instances. Unlawful instances are dangerous because they can yield
unpredictable results when used with the rest of Cats’ machinery. Most of
the time we can rely on the instances provided by Cats and assume the library
authors know what they’re doing.
A semigroup is just the combine part of a monoid, without the empty part. While
many semigroups are also monoids, there are some data types for which we
cannot define an empty element. For example, we have just seen that sequence
concatenation and integer addition are monoids. However, if we restrict
ourselves to non‐empty sequences and positive integers, we are no longer
able to define a sensible empty element. Cats has a NonEmptyList data type that
has an implementation of Semigroup but no implementation of Monoid.
A more accurate (though still simplified) definition of Cats' Monoid is therefore:

trait Semigroup[A] {
  def combine(x: A, y: A): A
}

trait Monoid[A] extends Semigroup[A] {
  def empty: A
}
We’ll see this kind of inheritance often when discussing type classes. It
provides modularity and allows us to re‐use behaviour. If we define a Monoid
for a type A, we get a Semigroup for free. Similarly, if a method requires a
parameter of type Semigroup[B], we can pass a Monoid[B] instead.
We’ve seen a few examples of monoids but there are plenty more to be found.
Consider Boolean. How many monoids can you define for this type? For each
monoid, define the combine and empty operations and convince yourself that
the monoid laws hold. Use the following definitions as a starting point:
trait Semigroup[A] {
  def combine(x: A, y: A): A
}

trait Monoid[A] extends Semigroup[A] {
  def empty: A
}

object Monoid {
  def apply[A](implicit monoid: Monoid[A]) =
    monoid
}
Now we’ve seen what monoids are, let’s look at their implementation in Cats.
Once again we’ll look at the three main aspects of the implementation: the
type class, the instances, and the interface.
The type classes are cats.Monoid and cats.Semigroup. We can import them individually:

import cats.Monoid
import cats.Semigroup

or just import everything from the cats package:

import cats.*
Cats Kernel?

Cats Kernel is a subproject of Cats providing a small set of type classes for libraries that don't require the full Cats toolbox. While these core type classes are technically defined in the cats.kernel package, they are all aliased to the cats package so we rarely need to be aware of the distinction.

The Cats Kernel type classes covered in this book are Eq, Semigroup, and Monoid. All the other type classes we cover are part of the main Cats project and are defined directly in the cats package.
Monoid follows the standard Cats pattern for the user interface: the companion object has an apply method that returns the type class instance for a particular type. For example, if we want the monoid instance for String, and we have the correct given instances in scope, we can write the following:
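Monoid[String].combine("Hi ", "there")
// res0: String = "Hi there"

Monoid[String].empty
// res1: String = ""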
The standard type class instances for Monoid are all found on the appropriate
companion objects, and so are automatically in the given scope with no further
imports required.
Cats provides syntax for the combine method in the form of the |+| operator. Because combine technically comes from Semigroup, we access the syntax by importing from cats.syntax.semigroup, or simply from the catch-all syntax import:

import cats.syntax.all.*
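val intResult = 1 |+| 2 |+| Monoid[Int].empty
// intResult: Int = 3

val stringResult = "Hi " |+| "there" |+| Monoid[String].empty
// stringResult: String = "Hi there"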
The cutting edge SuperAdder v3.5a-32 is the world's first choice for adding together numbers. The main function in the program has signature def add(items: List[Int]): Int. In a tragic accident this code is deleted! Rewrite the method and save the day!
Well done! SuperAdder's market share continues to grow, and now there is demand for additional functionality. People now want to add List[Option[Int]]. Change add so this is possible. The SuperAdder code base is of the highest quality, so make sure there is no code duplication!
SuperAdder is entering the POS (point‐of‐sale, not the other POS) market.
Now we want to add up Orders:
We need to release this code really soon so we can’t make any modifications
to add. Make it so!
In big data applications like Spark and Flink we distribute data analysis over
many machines, giving fault tolerance and scalability. This means each
machine will return results over a portion of the data, and we must then
combine these results to get our final result. In the vast majority of cases
this can be viewed as a monoid.
If we want to calculate how many total visitors a web site has received, that
means calculating an Int on each portion of the data. We know the monoid
instance of Int is addition, which is the right way to combine partial results.
If we want to find out how many unique visitors a website has received, that’s
equivalent to building a Set[User] on each portion of the data. We know the
monoid instance for Set is the set union, which is the right way to combine
partial results.
If we want to calculate 99% and 95% response times from our server logs, we
can use a data structure called a QTree for which there is a monoid.
Hopefully you get the idea. Almost every analysis that we might want to do
over a large data set is a monoid, and therefore we can build an expressive
and powerful analytics system around this idea. This is exactly what Twitter’s
Algebird and Summingbird projects have done. We explore this idea further
in the map‐reduce case study in Section 18.
In distributed systems, different machines may end up with different views of the same data: for example, one machine may receive an update that another does not. We would like to reconcile these different views, so that every machine eventually holds the same data if no more updates arrive. A particular class of data types supports this reconciliation. These data types are called conflict-free replicated data types (CRDTs). The key operation is the ability to merge two data instances, with a result that captures all the information in both instances. This operation relies on having a monoid instance. We explore this idea further in the CRDT case study.
The two examples above are cases where monoids inform the entire system
architecture. There are also many cases where having a monoid around makes
it easier to write a small code fragment. We’ll see lots of examples in the
remainder of this book.
7.5 Summary
We hit a big milestone in this chapter—we covered our first type classes with fancy functional programming names:

• a Semigroup represents an addition or combination operation;
• a Monoid extends a Semigroup by adding an identity or "zero" element.
We can use Semigroups and Monoids by importing two things: the type classes
themselves, and the semigroup syntax to give us the |+| operator:
import cats.Monoid
import cats.syntax.semigroup.* // for |+|
With the correct instances in scope, we can set about adding anything we
want:
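"Scala" |+| " with " |+| "Cats"
// res0: String = "Scala with Cats"

Option(1) |+| Option(2)
// res1: Option[Int] = Some(value = 3)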
We can also write generic code that works with any type for which we have
an instance of Monoid:
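Here is a definition of addAll consistent with the calls below:

def addAll[A](values: List[A])(using monoid: Monoid[A]): A =
  values.foldRight(monoid.empty)(_ |+| _)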
addAll(List(1, 2, 3))
// res4: Int = 6
addAll(List(None, Some(1), Some(2)))
// res5: Option[Int] = Some(value = 3)
Chapter 8

Functors
Informally, a functor is anything with a map method. You probably know lots
of types that have this: Option, List, and Either, to name a few.
Figure 8.1: Type chart: mapping over List, Option, and Either
We typically first encounter map when iterating over Lists: we transform each element, but the number and order of elements stays the same. Similarly, when we map over an Option, we transform the contents but leave the Some or None context unchanged. The same principle applies to Either with its Left and Right contexts. This general notion of transformation, along with the common pattern of type signatures shown in Figure 8.1, is what connects the behaviour of map across different data types.
Because map leaves the structure of the context unchanged, we can call it
repeatedly to sequence multiple computations on the contents of an initial
data structure:
List(1, 2, 3).
map(n => n + 1).
map(n => n * 2).
map(n => s"${n}!")
// res1: List[String] = List("4!", "6!", "8!")
The map methods of List, Option, and Either apply functions eagerly. However,
the idea of sequencing computations is more general than this. Let’s
investigate the behaviour of some other functors that apply the pattern in
different ways.
Futures
When we work with a Future we have no guarantees about its internal state.
The wrapped computation may be ongoing, complete, or rejected. If the
Future is complete, our mapping function can be called immediately. If not,
some underlying thread pool queues the function call and comes back to it
later. We don’t know when our functions will be called, but we do know what
order they will be called in. In this way, Future provides the same sequencing
behaviour seen in List, Option, and Either:
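import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.*

val future: Future[String] =
  Future(123).
    map(n => n + 1).
    map(n => n * 2).
    map(n => s"${n}!")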
Await.result(future, 1.second)
// res2: String = "248!"
import scala.util.Random
val future1 = {
  // Initialize Random with a fixed seed:
  val r = new Random(0L)

  // nextInt has the side-effect of moving to
  // the next random number in the sequence:
  val x = Future(r.nextInt())

  for {
    a <- x
    b <- x
  } yield (a, b)
}
val future2 = {
val r = new Random(0L)
for {
a <- Future(r.nextInt())
b <- Future(r.nextInt())
} yield (a, b)
}
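Awaiting the two futures shows the difference (results reconstructed using the fixed seed above):

val result1 = Await.result(future1, 1.second)
// result1: (Int, Int) = (-1155484576, -1155484576)

val result2 = Await.result(future2, 1.second)
// result2: (Int, Int) = (-1155484576, -723955400)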
Ideally we would like result1 and result2 to contain the same value.
However, the computation for future1 calls nextInt once and the
computation for future2 calls it twice. Because nextInt returns a
different result every time we get a different result in each case.
This illustrates that Future isn't referentially transparent: a Future starts executing as soon as it is created, so it conflates the definition of a program with how the program should run. For more information see this excellent Reddit answer by Rob Norris.
When we look at Cats Effect we’ll see that the IO type solves these
problems.
Functions (?!)
It turns out that single argument functions are also functors. To see this we
have to tweak the types a little. A function A => B has two type parameters:
the parameter type A and the result type B. To coerce them to the correct
shape we can fix the parameter type and let the result type vary:
If we alias X => A as MyFunc[A], we see the same pattern of types we saw with
the other examples in this chapter. We also see this in Figure 8.3:
import cats.syntax.functor.* // for map

val func =
((x: Int) => x.toDouble).
map(x => x + 1).
map(x => x * 2).
map(x => s"${x}!")
func(123)
// res5: String = "248.0!"
Partial Unification

For the above examples to work in versions of Scala before 2.13 we need to add the following compiler option to build.sbt:

scalacOptions += "-Ypartial-unification"

otherwise we'll get a compiler error:

func1.map(func2)
// <console>: error: value map is not a member of Int => Double
//        func1.map(func2)
//              ^
Formally, a functor is a type F[A] with an operation map with type (A => B) => F[B]. Here is a simplified version of the definition from Cats:

package cats
trait Functor[F[_]] {
def map[A, B](fa: F[A])(f: A => B): F[B]
}
If you haven't seen syntax like F[_] before, it's time to take a brief detour to discuss type constructors and higher kinded types.
Functor Laws

Functors guarantee the same semantics whether we sequence many small operations one by one, or combine them into a larger function before mapping. To ensure this is the case, the following laws must hold.

Identity: calling map with the identity function is the same as doing nothing:

fa.map(a => a) == fa

Composition: mapping with two functions f and g is the same as mapping with f and then mapping with g:

fa.map(g(f(_))) == fa.map(f).map(g)
Kinds are like types for types. They describe the number of “holes” in a
type. We distinguish between regular types that have no holes and “type
constructors” that have holes we can fill to produce types.
For example, List is a type constructor with one hole. We fill that hole by
specifying a parameter to produce a regular type like List[Int] or List[A]. The
trick is not to confuse type constructors with generic types. List is a type
constructor, List[A] is a type:
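List    // type constructor, takes one parameter
List[A] // type, produced by applying a type parameter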
There’s a close analogy here with functions and values. Functions are “value
constructors”—they produce values when we supply parameters:
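math.abs    // function, takes one parameter
math.abs(x) // value, produced by applying a value parameter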
In Scala we declare type constructors using underscores; this specifies how many "holes" the constructor has. To use the constructor we then refer to its name alone:

// Declare F using underscores:
def myMethod[F[_]] = {
  // Reference F without underscores:
  val functor = Functor.apply[F]

  // ...
}
Armed with this knowledge of type constructors, we can see that the Cats
definition of Functor allows us to create instances for any single‐parameter
type constructor, such as List, Option, Future, or a type alias such as MyFunc.
Language Feature Imports

In versions of Scala before 2.13.1 we had to "enable" the higher kinded type language feature, either with an import:

import scala.language.higherKinds

or by adding the following to scalacOptions in build.sbt:

scalacOptions += "-language:higherKinds"
Let’s look at the implementation of functors in Cats. We’ll examine the same
aspects we did for monoids: the type class, the instances, and the syntax.
The functor type class is cats.Functor. We obtain instances using the standard Functor.apply method on the companion object. As usual, default instances are found on companion objects and do not have to be explicitly imported:
import cats.*
import cats.syntax.all.*
Functor provides a method called lift, which converts a function of type A =>
B to one that operates over a functor and has type F[A] => F[B]:
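val func1 = (x: Int) => x + 1

val liftedFunc = Functor[Option].lift(func1)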
liftedFunc(Option(1))
// res1: Option[Int] = Some(value = 2)
The as method is the other method you are likely to use. It replaces the value inside the Functor with the given value:
val list1 = List(1, 2, 3)

Functor[List].as(list1, "As")
// res2: List[String] = List("As", "As", "As")
The main method provided by the syntax for Functor is map. It’s difficult to
demonstrate this with Options and Lists as they have their own built‐in map
methods and the Scala compiler will always prefer a built‐in method over an
extension method. We’ll work around this with two examples.
First let’s look at mapping over functions. Scala’s Function1 type doesn’t have
a map method (it’s called andThen instead) so there are no naming conflicts:
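import cats.syntax.functor.* // for map

val func1 = (a: Int) => a + 1
val func2 = (a: Int) => a * 2
val func3 = (a: Int) => s"${a}!"
val func4 = func1.map(func2).map(func3)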
func4(123)
// res3: String = "248!"
Let’s look at another example. This time we’ll abstract over functors so we’re
not working with any particular concrete type. We can write a method that
applies an equation to a number no matter what functor context it’s in:
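def doMath[F[_]](start: F[Int])(using functor: Functor[F]): F[Int] =
  start.map(n => n + 1 * 2)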
doMath(Option(20))
// res4: Option[Int] = Some(value = 22)
doMath(List(1, 2, 3))
// res5: List[Int] = List(3, 4, 5)
To illustrate how this works, let’s take a look at the definition of the map method
in cats.syntax.functor. Here’s a simplified version of the code:
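implicit class FunctorOps[F[_], A](src: F[A]) {
  def map[B](func: A => B)
      (implicit functor: Functor[F]): F[B] =
    functor.map(src)(func)
}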
The compiler can use this extension method to insert a map method wherever
no built‐in map is available:
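foo.map(value => value + 1)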
Assuming foo has no built‐in map method, the compiler detects the potential
error and wraps the expression in a FunctorOps to fix the code:
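new FunctorOps(foo).map(value => value + 1)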
List(1, 2, 3).as("As")
// res7: List[String] = List("As", "As", "As")
We can define a functor simply by defining its map method. Here's an example of a Functor for Option, even though such a thing already exists in cats.instances. The implementation is trivial—we simply call Option's map method:
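given optionFunctor: Functor[Option] with
  def map[A, B](value: Option[A])(func: A => B): Option[B] =
    value.map(func)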
Sometimes we need to inject dependencies into our instances. For example, a custom Functor for Future would need to account for the implicit ExecutionContext parameter on future.map. Given a hypothetical futureFunctor constructor that takes an ExecutionContext, the compiler expands a summons recursively:

// We write this:
Functor[Future]

// The compiler expands to this first:
Functor[Future](futureFunctor)

// And then to this:
Functor[Future](futureFunctor(executionContext))
Write a Functor for the following binary tree data type. Verify that the code
works as expected on instances of Branch and Leaf:
8.6 Contravariant and Invariant Functors

As we have seen, we can think of Functor's map method as "appending" a transformation to a chain. We're now going to look at two other type classes, one representing prepending operations to a chain, and one representing building a bidirectional chain of operations. These are called contravariant and invariant functors respectively.
The first of our type classes, the contravariant functor, provides an operation
called contramap that represents “prepending” an operation to a chain. The
general type signature is shown in Figure 8.5.
The contramap method only makes sense for data types that represent
transformations. For example, we can’t define contramap for an Option because
there is no way of feeding a value in an Option[B] backwards through a function
A => B. However, we can define contramap for the Display type class we
discussed in Section 4.5:
trait Display[A] {
def display(value: A): String
}
The contramap method accepts a function func of type B => A and creates a new Display[B]:
trait Display[A] {
  def display(value: A): String

  def contramap[B](func: B => A): Display[B] =
    ???
}
Implement the contramap method for Display above. Start with the following
code template and replace the ??? with a working method body:
trait Display[A] {
  def display(value: A): String

  def contramap[B](func: B => A): Display[B] =
    ???
}

def display[A](value: A)(using p: Display[A]): String =
  p.display(value)
If you get stuck, think about the types. You need to turn value, which is of
type B, into a String. What functions and methods do you have available and
in what order do they need to be combined?
For testing purposes, let’s define some instances of Display for String and
Boolean:
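given stringDisplay: Display[String] with
  def display(value: String): String =
    s"'${value}'"

given booleanDisplay: Display[Boolean] with
  def display(value: Boolean): String =
    if value then "yes" else "no"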
display("hello")
// res2: String = "'hello'"
display(true)
// res3: String = "yes"
Now define an instance of Display for the following Box case class. This is an example of type class composition as described in Section 4.3:

final case class Box[A](value: A)
Rather than writing out the complete definition from scratch (new Display[Box
] etc…), create your instance from an existing instance using contramap.
display(Box("hello world"))
// res4: String = "'hello world'"
display(Box(true))
// res5: String = "yes"
If we don’t have a Display for the type inside the Box, calls to display should
fail to compile:
display(Box(123))
// error:
// No given instance of type repl.MdocSession.MdocApp1.Display[repl.MdocSession.MdocApp1.Box[Int]] was found for parameter p of method display in object MdocApp1.
// I found:
//
//     repl.MdocSession.MdocApp1.boxDisplay[A](
//       /* missing */summon[repl.MdocSession.MdocApp1.Display[A]])
//
// But no implicit values were found that match type repl.MdocSession.MdocApp1.Display[A].
// display(Box(123))
// ^
Invariant functors implement a method called imap that is informally equivalent to a combination of map and contramap. The most intuitive examples of this are type classes that represent encoding and decoding to and from some data type, such as Circe's Codec and Play JSON's Format. We can build our own Codec by enhancing Display to support encoding and decoding to/from a String:
trait Codec[A] {
def encode(value: A): String
def decode(value: String): A
def imap[B](dec: A => B, enc: B => A): Codec[B] = ???
}
The type chart for imap is shown in Figure 8.6. If we have a Codec[A] and a pair
of functions A => B and B => A, the imap method creates a Codec[B]:
Figure 8.6: Type chart: the imap method
We can construct many useful Codecs for other types by building off of
stringCodec using imap:
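Here is a Codec[String] to build on, together with a pair of helper methods and some derived codecs. This sketch assumes the imap method from the exercise above has been implemented:

def encode[A](value: A)(using c: Codec[A]): String =
  c.encode(value)

def decode[A](value: String)(using c: Codec[A]): A =
  c.decode(value)

given stringCodec: Codec[String] with
  def encode(value: String): String = value
  def decode(value: String): String = value

given intCodec: Codec[Int] =
  stringCodec.imap(_.toInt, _.toString)

given doubleCodec: Codec[Double] =
  stringCodec.imap(_.toDouble, _.toString)

given boxCodec[A](using c: Codec[A]): Codec[Box[A]] =
  c.imap[Box[A]](Box(_), _.value)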
Note that the decode method of our Codec type class doesn’t account for
failures. If we want to model more sophisticated relationships we can
move beyond functors to look at lenses and optics.
Optics are beyond the scope of this book. However, Julien Truffaut’s
library Monocle provides a great starting point for further investigation.
encode(123.4)
// res11: String = "123.4"
decode[Double]("123.4")
// res12: Double = 123.4
encode(Box(123.4))
// res13: String = "123.4"
decode[Box[Double]]("123.4")
// res14: Box[Double] = Box(value = 123.4)
Finally, invariant functors capture the case where we can convert from F[A] to F[B] via a function A => B and vice versa via a function B => A. Let's look at the implementation in Cats, provided by the cats.Contravariant and cats.Invariant type classes. Here is a simplified version of the code:
trait Contravariant[F[_]] {
def contramap[A, B](fa: F[A])(f: B => A): F[B]
}
trait Invariant[F[_]] {
def imap[A, B](fa: F[A])(f: A => B)(g: B => A): F[B]
}
import cats.*
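We can summon instances of Contravariant using the Contravariant.apply method. Here we derive a Show[Symbol] from Cats' Show[String]:

val showString = Show[String]

val showSymbol = Contravariant[Show].
  contramap(showString)((sym: Symbol) => s"'${sym.name}")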
showSymbol.show(Symbol("dave"))
// res1: String = "'dave"
More conveniently, we can use cats.syntax.contravariant, which provides a contramap extension method:

import cats.syntax.contravariant.* // for contramap

showString
  .contramap[Symbol](sym => s"'${sym.name}")
  .show(Symbol("dave"))
// res2: String = "'dave"
Among other types, Cats provides an instance of Invariant for Monoid. This is
a little different from the Codec example we introduced in Section 8.6.2. If you
recall, this is what Monoid looks like:
package cats
trait Monoid[A] {
def empty: A
def combine(x: A, y: A): A
}
Imagine we want to produce a Monoid for Scala's Symbol type. Cats doesn't provide a Monoid for Symbol but it does provide a Monoid for a similar type: String. We can write our new Monoid with an empty method that relies on the empty String, and a combine method that works as follows:

• accept two Symbols as parameters;
• convert the Symbols to Strings;
• combine the Strings using the Monoid for String;
• convert the result back to a Symbol.
We can implement combine using imap, passing functions of type String => Symbol and Symbol => String as parameters. Here's the code, written out using the imap extension method provided by cats.syntax.invariant:
import cats.*
import cats.syntax.invariant.* // for imap
import cats.syntax.semigroup.* // for |+|
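given symbolMonoid: Monoid[Symbol] =
  Monoid[String].imap(Symbol.apply)(_.name)

Symbol("a") |+| Symbol("few") |+| Symbol("words")
// res: Symbol = 'afewwords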
Monoid[Symbol].empty
// res3: Symbol = '
8.8 Aside: Partial Unification

In Section 8.5 we saw a functor instance for Function1 in action:

import cats.*
import cats.syntax.functor.* // for map
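val func1 = (x: Int) => x.toDouble
val func2 = (y: Double) => y * 2

val func3 = func1.map(func2)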
Function1 has two type parameters (the function argument and the result
type):
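trait Function1[-T1, +R] {
  def apply(arg: T1): R
}

whereas Functor expects a type constructor with a single parameter: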
trait Functor[F[_]] {
def map[A, B](fa: F[A])(func: A => B): F[B]
}
The compiler has to fix one of the two parameters of Function1 to create a
type constructor of the correct kind to pass to Functor. It has two options to
choose from:
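type F[A] = Int => A    // fix the parameter type
type F[A] = A => Double // fix the return type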
We know that the former of these is the correct choice. However, the compiler doesn't understand what the code means. Instead it relies on a simple rule, implementing what is called "partial unification".
The partial unification in the Scala compiler works by fixing type parameters
from left to right. In the above example, the compiler fixes the Int in Int =>
Double and looks for a Functor for functions of type Int => ?:
This left-to-right elimination works for a wide variety of common scenarios, including Functors for types such as Either:

val either: Either[String, Int] = Right(123)

either.map(_ + 1)
// res0: Either[String, Int] = Right(value = 124)
Partial unification is the default behaviour in Scala 2.13 and later. In earlier versions of Scala we need to add the -Ypartial-unification compiler flag to build.sbt:

scalacOptions += "-Ypartial-unification"
There are situations where left‐to‐right elimination is not the correct choice.
One example is the Or type in Scalactic, which is a conventionally left‐biased
equivalent of Either:
The problem here is that the Contravariant for Function1 fixes the return type
and leaves the parameter type varying, requiring the compiler to eliminate
type parameters from right to left, as shown below and in Figure 8.7:
The compiler fails simply because of its left‐to‐right bias. We can prove this
by creating a type alias that flips the parameters on Function1:
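type <=[B, A] = A => B

type F[A] = Double <= A

Rewriting the function type in this way reverses the order of the type parameters, so the compiler's left-to-right elimination finds the instance it needs.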
8.9 Summary

Functors represent sequencing behaviours. We covered three types of functor in this chapter:

• Regular covariant Functors, with their map method, represent the ability
to apply functions to a value in some context. Successive calls to map
apply these functions in sequence, each accepting the result of its
predecessor as a parameter.
Regular Functors are by far the most common of these type classes, but even
then it is rare to use them on their own. Functors form a foundational building
block of several more interesting abstractions that we use all the time. In
the following chapters we will look at two of these abstractions: monads and
applicative functors.
The Contravariant and Invariant type classes are less widely applicable but are still useful for building data types that represent transformations. We will revisit them when we discuss the Semigroupal type class in Chapter 11.
Chapter 9
Monads
Monads are one of the most common abstractions in Scala. Many Scala programmers quickly become intuitively familiar with monads, even if they don't know them by name.
In this chapter we will take a deep dive into monads. We will start by
motivating them with a few examples. We’ll proceed to their formal definition,
and see how we can create a concrete type as a type class. We’ll then look at
their implementation in Cats. Finally, we’ll tour some interesting monads that
you may not have seen, providing introductions and examples of their use.
9.1 What is a Monad?

This is the question that has been posed in a thousand blog posts, with explanations and analogies involving concepts as diverse as cats, Mexican
explanations and analogies involving concepts as diverse as cats, Mexican
food, space suits full of toxic waste, and monoids in the category of
endofunctors (whatever that means). We’re going to solve the problem of
explaining monads once and for all by stating very simply:

A monad is a mechanism for sequencing computations.
That was easy! Problem solved, right? But then again, last chapter we said
functors were a mechanism for exactly the same thing. Ok, maybe we need
some more discussion…
Option allows us to sequence computations that may or may not return values.
Here are some examples:
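def parseInt(str: String): Option[Int] =
  scala.util.Try(str.toInt).toOption

def divide(a: Int, b: Int): Option[Int] =
  if (b == 0) None else Some(a / b)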
Each of these methods may “fail” by returning None. The flatMap method allows
us to ignore this when we sequence operations:
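def stringDivideBy(aStr: String, bStr: String): Option[Int] =
  parseInt(aStr).flatMap { aNum =>
    parseInt(bStr).flatMap { bNum =>
      divide(aNum, bNum)
    }
  }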
At each step, flatMap chooses whether to call our function, and our function
generates the next computation in the sequence. This is shown in Figure 9.1.
stringDivideBy("6", "2")
// res0: Option[Int] = Some(value = 3)
stringDivideBy("6", "0")
// res1: Option[Int] = None
stringDivideBy("6", "foo")
// res2: Option[Int] = None
Every monad is also a functor (see below for proof), so we can rely on both
flatMap and map to sequence computations that do and don’t introduce a new
monad. Plus, if we have both flatMap and map we can use for comprehensions
to clarify the sequencing behaviour:
for {
x <- (1 to 3).toList
y <- (4 to 5).toList
} yield (x, y)
// res5: List[Tuple2[Int, Int]] = List(
// (1, 4),
// (1, 5),
// (2, 4),
// (2, 5),
// (3, 4),
// (3, 5)
// )
However, there is another mental model we can apply that highlights the
monadic behaviour of List. If we think of Lists as sets of intermediate results,
flatMap becomes a construct that calculates permutations and combinations.
For example, in the for comprehension above there are three possible values
of x and two possible values of y. This means there are six possible values of
(x, y). flatMap is generating these combinations from our code, which states
the sequence of operations:
• get x
• get y
• create a tuple (x, y)
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
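Here we sequence two hypothetical long-running computations (stubbed with ??? as in the original example):

def doSomethingLongRunning: Future[Int] = ???
def doSomethingElseLongRunning: Future[Int] = ???

def doSomethingVeryLongRunning: Future[Int] =
  for {
    result1 <- doSomethingLongRunning
    result2 <- doSomethingElseLongRunning
  } yield result1 + result2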
Again, we specify the code to run at each step, and flatMap takes care of all
the horrifying underlying complexities of thread pools and schedulers.
If you’ve made extensive use of Future, you’ll know that the code above is
running each operation in sequence. This becomes clearer if we expand out
the for comprehension to show the nested calls to flatMap:
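def doSomethingVeryLongRunning: Future[Int] =
  doSomethingLongRunning.flatMap { result1 =>
    doSomethingElseLongRunning.map { result2 =>
      result1 + result2
    }
  }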
Each Future in our sequence is created by a function that receives the result from a previous Future. In other words, each step in our computation can only start once the previous step is finished. This is borne out by the type chart for flatMap in Figure 9.2, which shows the function parameter of type A => Future[B].
We can run futures in parallel, of course, but that is another story and shall be
told another time. Monads are all about sequencing.
While we have only talked about flatMap above, monadic behaviour is formally
captured in two operations:

• pure, of type A => F[A];
• flatMap¹, of type (F[A], A => F[B]) => F[B].
¹In the programming literature and Haskell, pure is referred to as point or return and
flatMap is referred to as bind or >>=. This is purely a difference in terminology. We’ll use the
term flatMap for compatibility with Cats and the Scala standard library.
trait Monad[F[_]] {
  def pure[A](value: A): F[A]

  def flatMap[A, B](value: F[A])(func: A => F[B]): F[B]
}
Monad Laws

pure and flatMap must obey a set of laws that allow us to sequence operations freely without unintended glitches and side-effects.

Left identity: calling pure and transforming the result with func is the same as calling func:

pure(a).flatMap(func) == func(a)

Right identity: passing pure to flatMap is the same as doing nothing:

m.flatMap(pure) == m

Associativity: flatMapping over two functions f and g is the same as flatMapping over f and then flatMapping over g:

m.flatMap(f).flatMap(g) == m.flatMap(x => f(x).flatMap(g))
Every monad is also a functor. We can define map in the same way for every
monad using the existing methods, flatMap and pure:
trait Monad[F[_]] {
  def pure[A](a: A): F[A]

  def flatMap[A, B](value: F[A])(func: A => F[B]): F[B]

  def map[A, B](value: F[A])(func: A => B): F[B] =
    ???
}
It’s time to give monads our standard Cats treatment. As usual we’ll look at
the type class, instances, and syntax.
The monad type class is cats.Monad. Monad extends two other type classes:
FlatMap, which provides the flatMap method, and Applicative, which provides
pure. Applicative also extends Functor, which gives every Monad a map method
as we saw in the exercise above. We’ll discuss Applicatives in Chapter 11.
Here are some examples using pure and flatMap, and map directly:
import cats.Monad
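val opt1 = Monad[Option].pure(3)
// opt1: Option[Int] = Some(value = 3)

val opt2 = Monad[Option].flatMap(opt1)(a => Some(a + 2))
// opt2: Option[Int] = Some(value = 5)

val opt3 = Monad[Option].map(opt2)(a => 100 * a)
// opt3: Option[Int] = Some(value = 500)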
Monad provides many other methods, including all of the methods from Functor.
See the scaladoc for more information.
Cats provides instances for all the monads in the standard library (Option, List,
Vector and so on). Cats also provides a Monad for Future. Unlike the methods
on the Future class itself, the pure and flatMap methods on the monad can’t
accept implicit ExecutionContext parameters (because the parameters aren’t
part of the definitions in the Monad trait). To work around this, Cats requires us
to have an ExecutionContext in scope when we summon a Monad for Future:
import scala.concurrent.*
import scala.concurrent.duration.*
val fm = Monad[Future]
// error:
// No given instance of type cats.Monad[scala.concurrent.Future] was found for parameter instance of method apply in object Monad.
// I found:
//
//     cats.Invariant.catsInstancesForFuture(
//       /* missing */summon[scala.concurrent.ExecutionContext])
//
// But no implicit values were found that match type scala.concurrent.ExecutionContext.
// val fm = Monad[Future]
//          ^
Bringing the ExecutionContext into scope fixes the implicit resolution required
to summon the instance:
import scala.concurrent.ExecutionContext.Implicits.global
val fm = Monad[Future]
// fm: Monad[[T >: Nothing <: Any] => Future[T]] = cats.instances.FutureInstances$$anon$1@7ea45f8e
The Monad instance uses the captured ExecutionContext for subsequent calls to
pure and flatMap:
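val future = fm.flatMap(fm.pure(1))(x => fm.pure(x + 2))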
Await.result(future, 1.second)
// res1: Int = 3
In addition to the above, Cats provides a host of new monads that we don’t
have in the standard library. We’ll familiarise ourselves with some of these in
a moment.
We can use pure to construct instances of a monad, via the syntax from cats.syntax.applicative (also included in cats.syntax.all). We'll often need to specify the type parameter to disambiguate the particular instance we want.
1.pure[Option]
// res2: Option[Int] = Some(value = 1)
1.pure[List]
// res3: List[Int] = List(1)
It’s difficult to demonstrate the flatMap and map methods directly on Scala
monads like Option and List, because they define their own explicit versions
of those methods. Instead we’ll write a generic function that performs a
calculation on parameters that come wrapped in a monad of the user’s choice:
import cats.Monad
import cats.syntax.functor.* // for map
import cats.syntax.flatMap.* // for flatMap
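def sumSquare[F[_]: Monad](a: F[Int], b: F[Int]): F[Int] =
  a.flatMap(x => b.map(y => x*x + y*y))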
sumSquare(Option(3), Option(4))
// res4: Option[Int] = Some(value = 25)
sumSquare(List(1, 2, 3), List(4, 5))
// res5: List[Int] = List(17, 26, 20, 29, 25, 34)
We can rewrite this code using for comprehensions. The compiler will “do the
right thing” by rewriting our comprehension in terms of flatMap and map and
inserting the correct conversions to use our Monad:
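def sumSquare[F[_]: Monad](a: F[Int], b: F[Int]): F[Int] =
  for {
    x <- a
    y <- b
  } yield x*x + y*y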
sumSquare(Option(3), Option(4))
// res7: Option[Int] = Some(value = 25)
sumSquare(List(1, 2, 3), List(4, 5))
// res8: List[Int] = List(17, 26, 20, 29, 25, 34)
This method works well on Options and Lists but we can’t call it passing in
plain values:
sumSquare(3, 4)
// error:
// Found: (3 : Int)
// Required: ([_] =>> Any)[Int]
// Note that implicit conversions were not tried because the result of
an implicit conversion
// must be more specific than ([_] =>> Any)[Int]
// sumSquare(3, 4)
// ^
// error:
// Found: (4 : Int)
9.3 The Identity Monad

It would be useful if we could use sumSquare with parameters that are either in a monad or not in a monad at all. Fortunately, Cats provides the Id type to bridge the gap:
import cats.Id
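sumSquare(3 : Id[Int], 4 : Id[Int])
// res: Id[Int] = 25

Id allows us to call our monadic method using plain values. But how does it work? Here is the definition of Id: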
package cats
type Id[A] = A
Id is actually a type alias that turns an atomic type into a single‐parameter type
constructor. We can cast any value of any type to a corresponding Id:
"Dave" : Id[String]
// res2: String = "Dave"
123 : Id[Int]
// res3: Int = 123
List(1, 2, 3) : Id[List[Int]]
// res4: List[Int] = List(1, 2, 3)
Cats provides instances of various type classes for Id, including Functor and
Monad. These let us call map, flatMap, and pure on plain values:
val a = Monad[Id].pure(3)
// a: Int = 3
val b = Monad[Id].flatMap(a)(_ + 1)
// b: Int = 4
for {
x <- a
y <- b
} yield x + y
// res5: Int = 7
Implement pure, map, and flatMap for Id! What interesting discoveries do you
uncover about the implementation?
9.4 Either
Let’s look at another useful monad: the Either type from the Scala standard
library. In Scala 2.11 and earlier, many people didn’t consider Either a monad
because it didn’t have map and flatMap methods. In Scala 2.12, however, Either
became right biased.
In Scala 2.11, Either had no default map or flatMap method. This made the
Scala 2.11 version of Either inconvenient to use in for comprehensions. We
had to insert calls to .right in every generator clause:
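val either1: Either[String, Int] = Right(10)
val either2: Either[String, Int] = Right(32)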
for {
a <- either1.right
b <- either2.right
} yield a + b
In Scala 2.12, Either was redesigned. The modern Either makes the decision
that the right side represents the success case and thus supports map and
flatMap directly. This makes for comprehensions much more pleasant:
for {
a <- either1
b <- either2
} yield a + b
// res1: Either[String, Int] = Right(value = 42)
Cats back‐ports this behaviour to Scala 2.11 via the cats.syntax.either import,
allowing us to use right‐biased Either in all supported versions of Scala. In
Scala 2.12+ we can either omit this import or leave it in place without breaking
anything:
for {
a <- either1
b <- either2
} yield a + b
In addition to creating instances of Left and Right directly, we can also import
the asLeft and asRight extension methods from cats.syntax.either:
val a = 3.asRight[String]
// a: Either[String, Int] = Right(value = 3)
val b = 4.asRight[String]
// b: Either[String, Int] = Right(value = 4)
for {
x <- a
y <- b
} yield x*x + y*y
// res3: Either[String, Int] = Right(value = 25)
These smart constructors have an advantage over Left.apply and Right.apply: they return results typed as Either. That helps avoid two common type inference problems: the compiler inferring an accumulator as Right instead of Either, and inferring an unspecified left type parameter as Nothing, which makes a Left branch like the one below fail to type check:

// Left("Negative. Stopping!")
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^

Switching to asRight avoids both of these problems. asRight has a return type of Either, and allows us to completely specify the type with only one type parameter:
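def countPositive(nums: List[Int]) =
  nums.foldLeft(0.asRight[String]) { (accumulator, num) =>
    if (num > 0) accumulator.map(_ + 1)
    else Left("Negative. Stopping!")
  }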
countPositive(List(1, 2, 3))
// res5: Either[String, Int] = Right(value = 3)
countPositive(List(1, -2, 3))
// res6: Either[String, Int] = Left(value = "Negative. Stopping!")
cats.syntax.either also adds some useful extension methods to the Either companion object. The catchOnly and catchNonFatal methods are great for capturing Exceptions as instances of Either:

Either.catchOnly[NumberFormatException]("foo".toInt)
// res7: Either[NumberFormatException, Int] = Left(
// value = java.lang.NumberFormatException: For input string: "foo"
// )
Either.catchNonFatal(sys.error("Badness"))
// res8: Either[Throwable, Nothing] = Left(
//   value = java.lang.RuntimeException: Badness
// )
There are also methods for creating an Either from other data types:
Either.fromTry(scala.util.Try("foo".toInt))
// res9: Either[Throwable, Int] = Left(
// value = java.lang.NumberFormatException: For input string: "foo"
// )
Either.fromOption[String, Int](None, "Badness")
// res10: Either[String, Int] = Left(value = "Badness")
cats.syntax.either also adds a number of useful methods for instances of Either. We can use orElse and getOrElse to extract values from the right side or return a default:
import cats.syntax.either.*
"Error".asLeft[Int].getOrElse(0)
// res11: Int = 0
"Error".asLeft[Int].orElse(2.asRight[String])
// res12: Either[String, Int] = Right(value = 2)
The ensure method allows us to check whether the right‐hand value satisfies
a predicate:
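-1.asRight[String].ensure("Must be non-negative!")(_ > 0)
// res: Either[String, Int] = Left(value = "Must be non-negative!")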
The recover and recoverWith methods provide similar error handling to their
namesakes on Future:
"error".asLeft[Int].recover {
case _: String => -1
}
// res14: Either[String, Int] = Right(value = -1)
"error".asLeft[Int].recoverWith {
case _: String => Right(-1)
}
// res15: Either[String, Int] = Right(value = -1)
"foo".asLeft[Int].leftMap(_.reverse)
// res16: Either[String, Int] = Left(value = "oof")
6.asRight[String].bimap(_.reverse, _ * 7)
// res17: Either[String, Int] = Right(value = 42)
"bar".asLeft[Int].bimap(_.reverse, _ * 7)
// res18: Either[String, Int] = Left(value = "rab")
The swap method lets us exchange left for right:

123.asRight[String]
// res19: Either[String, Int] = Right(value = 123)
123.asRight[String].swap
// res20: Either[Int, String] = Left(value = 123)
Either is typically used to implement fail-fast error handling. We sequence computations using flatMap; if one computation fails, the remaining computations are not run:

for {
  a <- 1.asRight[String]
  b <- 0.asRight[String]
  c <- if (b == 0) "DIV0".asLeft[Int]
       else (a / b).asRight[String]
} yield c * 100
// res: Either[String, Int] = Left(value = "DIV0")
When using Either for error handling, we need to determine what type we
want to use to represent errors. We could use Throwable for this:
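type Result[A] = Either[Throwable, A]

This gives us similar semantics to scala.util.Try. The problem, however, is that Throwable is an extremely broad type, and we get little indication of what specific errors can occur. Another approach is to define an algebraic data type to represent the errors that may occur in our program: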
enum LoginError {
case UserNotFound(username: String)
case UnexpectedError
}
import LoginError.*
def handleError(error: LoginError): Unit =
  error match {
    case UserNotFound(u) =>
      println(s"User not found: $u")

    case UnexpectedError =>
      println("Unexpected error")
  }
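For the fold examples below, assume a User class and results like these (reconstructed to match the printed output):

case class User(username: String, password: String)

val result1: Either[LoginError, User] =
  Right(User("dave", "passw0rd"))

val result2: Either[LoginError, User] =
  Left(UserNotFound("dave"))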
result1.fold(handleError, println)
// User(dave,passw0rd)
result2.fold(handleError, println)
// User not found: dave
Is the error handling strategy in the previous examples well suited for all
purposes? What other features might we want from error handling?
Cats provides an additional type class called MonadError that abstracts over
Either‐like data types that are used for error handling. MonadError provides
extra operations for raising and handling errors.
You won’t need to use MonadError unless you need to abstract over error
handling monads. For example, you can use MonadError to abstract over
Future and Try, or over Either and EitherT (which we will meet in Chapter
10).
If you don’t need this kind of abstraction right now, feel free to skip
onwards to Section 9.6.
Here is a simplified version of the definition of MonadError:

package cats

trait MonadError[F[_], E] extends Monad[F] {
  // Lift an error into the `F` context:
  def raiseError[A](e: E): F[A]

  // Handle an error, potentially recovering from it:
  def handleErrorWith[A](fa: F[A])(f: E => F[A]): F[A]

  // Handle all errors, recovering from them:
  def handleError[A](fa: F[A])(f: E => A): F[A]

  // Test an instance of `F`, failing if the predicate is not satisfied:
  def ensure[A](fa: F[A])(e: E)(f: A => Boolean): F[A]
}

MonadError is defined in terms of another type class, ApplicativeError, which provides raiseError and the error handling methods; MonadError adds the Monad operations on top. We'll learn more about Applicatives in Chapter 11.

Here is an example instantiating MonadError for Either:

import cats.MonadError
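type ErrorOr[A] = Either[String, A]

val monadError = MonadError[ErrorOr, String]

val success = monadError.pure(42)
// success: ErrorOr[Int] = Right(value = 42)

val failure = monadError.raiseError("Badness")
// failure: ErrorOr[Nothing] = Left(value = "Badness")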
monadError.handleErrorWith(failure) {
case "Badness" =>
monadError.pure("It's ok")
case _ =>
monadError.raiseError("It's not ok")
}
// res0: Either[String, String] = Right(value = "It's ok")
monadError.handleError(failure) {
case "Badness" => 42
case _ => -1
}
// res1: Either[String, Int] = Right(value = 42)
There is also syntax for raiseError and handleErrorWith, provided by cats.syntax.applicativeError. Here we recreate failure and recover from it using the extension methods:

import cats.syntax.applicativeError.*

val failure = "Badness".raiseError[ErrorOr, Int]

failure.handleErrorWith {
  case "Badness" =>
    256.pure
  case _ =>
    ("It's not ok").raiseError
}
// res4: Either[String, Int] = Right(value = 256)
success.ensure("Number too low!")(_ > 1000)
// res5: Either[String, Int] = Left(value = "Number too low!")
There are other useful variants of these methods. See the source of cats.MonadError and cats.ApplicativeError for more information.
Cats provides instances of MonadError for numerous data types including Either, Future, and Try. The instance for Either is customisable to any error type, whereas the instances for Future and Try always represent errors as Throwables:

import scala.util.Try

val exn: Throwable =
  new RuntimeException("It's all gone wrong")

exn.raiseError[Try, Int]
// res6: Try[Int] = Failure(
// exception = java.lang.RuntimeException: It's all gone wrong
// )
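The validateAdult method below succeeds if its parameter is 18 or over and raises an IllegalArgumentException otherwise. Here is a definition consistent with the results shown:

def validateAdult[F[_]](age: Int)(using me: MonadError[F, Throwable]): F[Int] =
  if (age >= 18) age.pure[F]
  else new IllegalArgumentException("Age must be greater than or equal to 18").raiseError[F, Int]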
validateAdult[Try](18)
// res7: Try[Int] = Success(value = 18)
validateAdult[Try](8)
// res8: Try[Int] = Failure(
//   exception = java.lang.IllegalArgumentException: Age must be greater than or equal to 18
// )
type ExceptionOr[A] = Either[Throwable, A]
validateAdult[ExceptionOr](-1)
// res9: Either[Throwable, Int] = Left(
//   value = java.lang.IllegalArgumentException: Age must be greater than or equal to 18
// )
9.6 The Eval Monad

cats.Eval is a monad that allows us to abstract over different models of evaluation. We typically talk of two such models: eager and lazy, also called call-by-value and call-by-name respectively. Eval also allows a result to be memoized, which gives us call-by-need evaluation.

Eval is also stack-safe, which means we can use it in very deep recursions without blowing up the stack.
What do these terms for models of evaluation mean? Let’s see some examples.
Let’s first look at Scala vals. We can see the evaluation model using a
computation with a visible side‐effect. In the following example, the code
to compute the value of x happens at place where it is defined rather than on
access. Accessing x recalls the stored value without re‐running the code.
val x = {
println("Computing X")
math.random()
}
// Computing X
// x: Double = 0.1628645952859491
x // first access
// res0: Double = 0.1628645952859491

x // second access
// res1: Double = 0.1628645952859491
Let’s look at an example using a def. The code to compute y below is not run
until we use it, and is re‐run on every access:
def y = {
println("Computing Y")
math.random()
}
y // first access
// Computing Y
// res2: Double = 0.5861049079273605
y // second access
// Computing Y
// res3: Double = 0.662151707600181
Last but not least, lazy vals are an example of call‐by‐need evaluation. The
code to compute z below is not run until we use it for the first time (lazy). The
result is then cached and re‐used on subsequent accesses (memoized):
lazy val z = {
println("Computing Z")
math.random()
}
z // first access
// Computing Z
// res4: Double = 0.07459313972153514
z // second access
// res5: Double = 0.07459313972153514
Eval has three subtypes: Now, Always, and Later. They correspond to call‐by‐
value, call‐by‐name, and call‐by‐need respectively. We construct these with
three constructor methods, which create instances of the three classes and
return them typed as Eval:
import cats.Eval
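The definitions below are reconstructed to match the outputs that follow:

val now = Eval.now(math.random() + 1000)
val later = Eval.later(math.random() + 2000)
val always = Eval.always(math.random() + 3000)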
now.value
// res6: Double = 1000.2706901626503
always.value
// res7: Double = 3000.1577141271696
later.value
// res8: Double = 2000.5200067719463
Each type of Eval calculates its result using one of the evaluation models
defined above. Eval.now captures a value right now. Its semantics are similar
to a val—eager and memoized:
val x = Eval.now{
println("Computing X")
math.random()
}
// Computing X
// x: Eval[Double] = Now(value = 0.8328933027409376)
Eval.always captures a lazy computation, similar to a def:

val y = Eval.always{
println("Computing Y")
math.random()
}
// y: Eval[Double] = cats.Always@5cc5b4bd
y.value // first access
// Computing Y
// res13: Double = 0.7015752494374264
And, finally, Eval.later captures a lazy, memoized computation, similar to a lazy val:

val z = Eval.later{
println("Computing Z")
math.random()
}
// z: Eval[Double] = cats.Later@36614d67
Like all monads, Eval's map and flatMap methods add computations to a chain.
In this case, however, the chain is stored explicitly as a list of functions. The
functions aren’t run until we call Eval's value method to request a result:
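val greeting = Eval
  .always { println("Step 1"); "Hello" }
  .map { str => println("Step 2"); s"$str world" }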
greeting.value
// Step 1
// Step 2
// res16: String = "Hello world"
Note that, while the semantics of the originating Eval instances are maintained,
mapping functions are always called lazily on demand (def semantics):
One useful property of Eval is that its map and flatMap methods are trampolined.
This means we can nest calls to map and flatMap arbitrarily without consuming
stack frames. We call this property “stack safety”.
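Let's see this in practice with a naively recursive factorial, which overflows the stack for large inputs:

def factorial(n: BigInt): BigInt =
  if (n == 1) n else n * factorial(n - 1)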
factorial(50000)
// java.lang.StackOverflowError
// ...
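We can try to make the method stack-safe by rewriting it in terms of Eval's map:

def factorial(n: BigInt): Eval[BigInt] =
  if (n == 1) Eval.now(n)
  else factorial(n - 1).map(_ * n)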
factorial(50000).value
// java.lang.StackOverflowError
// ...
Oops! That didn't work—our stack still blew up! This is because we're still making all the recursive calls to factorial before we start working with Eval's map method. We can work around this using Eval.defer, which takes an
existing instance of Eval and defers its evaluation. The defer method is
trampolined like map and flatMap, so we can use it as a quick way to make
an existing operation stack safe:
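def factorial(n: BigInt): Eval[BigInt] =
  if (n == 1) Eval.now(n)
  else Eval.defer(factorial(n - 1).map(_ * n))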
factorial(50000).value
// res: A very big value
Eval is a useful tool to enforce stack safety when working on very large
computations and data structures. However, we must bear in mind that
trampolining is not free. It avoids consuming stack by creating a chain of
function objects on the heap. There are still limits on how deeply we can
nest computations, but they are bounded by the size of the heap rather than
the stack.
9.7 The Writer Monad

cats.data.Writer is a monad that lets us carry a log along with a computation. We can use it to record messages, errors, or additional data about a computation, and extract the log alongside the final result.

Writer is the first data type we've seen from the cats.data package. This package provides instances of various type classes that produce useful semantics. Other examples from cats.data include the monad transformers that we will see in the next chapter, and the Validated type we will encounter in Chapter 11.
import cats.data.Writer
import cats.instances.vector._ // for Monoid
Writer(Vector(
"It was the best of times",
"it was the worst of times"
), 1859)
// res0: WriterT[Id, Vector[String], Int] = WriterT(
// run = (Vector("It was the best of times", "it was the worst of
times"), 1859)
// )
Notice that the type reported on the console is actually WriterT[Id, Vector[String], Int] instead of Writer[Vector[String], Int] as we might expect. In the spirit of code reuse, Cats implements Writer in terms of another type, WriterT. WriterT is an example of a new concept called a monad transformer, which we will cover in the next chapter.
Let’s try to ignore this detail for now. Writer is a type alias for WriterT, so we
can read types like WriterT[Id, W, A] as Writer[W, A]:
For convenience, Cats provides a way of creating Writers specifying only the
log or the result. If we only have a result we can use the standard pure syntax.
To do this we must have a Monoid[W] in scope so Cats knows how to produce
an empty log:
import cats.syntax.applicative.* // for pure

type Logged[A] = Writer[Vector[String], A]

123.pure[Logged]
// res1: WriterT[Id, Vector[String], Int] = WriterT(run = (Vector(), 123))
If we have a log and no result we can create a Writer[Unit] using the tell
syntax from cats.syntax.writer:
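import cats.syntax.writer.* // for tell

Vector("msg1", "msg2", "msg3").tell
// res: WriterT[Id, Vector[String], Unit] = WriterT(run = (Vector("msg1", "msg2", "msg3"), ()))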
If we have both a result and a log, we can either use Writer.apply or we can
use the writer syntax from cats.syntax.writer:
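val a = Writer(Vector("msg1", "msg2", "msg3"), 123)

val b = 123.writer(Vector("msg1", "msg2", "msg3"))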
We can extract the result and log from a Writer using the value and written
methods respectively:
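val aResult: Int = a.value
// aResult: Int = 123

val aLog: Vector[String] = a.written
// aLog: Vector[String] = Vector("msg1", "msg2", "msg3")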
We can extract both values at the same time using the run method:
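val (log, result) = b.run
// log: Vector[String] = Vector("msg1", "msg2", "msg3")
// result: Int = 123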
The log in a Writer is preserved when we map or flatMap over it. flatMap appends the logs from the source Writer and the result of the user's sequencing function. For this reason it's good practice to use a log type that has efficient append and concatenate operations, such as a Vector:
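val writer1 = for {
  a <- 10.pure[Logged]
  _ <- Vector("a", "b", "c").tell
  b <- 32.writer(Vector("x", "y", "z"))
} yield a + b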
writer1.run
// res3: Tuple2[Vector[String], Int] = (
// Vector("a", "b", "c", "x", "y", "z"),
// 42
// )
In addition to transforming the result with map and flatMap, we can transform
the log in a Writer with the mapWritten method:
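val writer2 = writer1.mapWritten(_.map(_.toUpperCase))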
writer2.run
// res4: Tuple2[Vector[String], Int] = (
// Vector("A", "B", "C", "X", "Y", "Z"),
// 42
// )
We can transform both log and result simultaneously using bimap or mapBoth.
bimap takes two function parameters, one for the log and one for the result.
mapBoth takes a single function that accepts two parameters:
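val writer3 = writer1.bimap(
  log => log.map(_.toUpperCase),
  res => res * 100
)

val writer4 = writer1.mapBoth { (log, res) =>
  val log2 = log.map(_ + "!")
  val res2 = res * 1000
  (log2, res2)
}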
writer3.run
// res5: Tuple2[Vector[String], Int] = (
// Vector("A", "B", "C", "X", "Y", "Z"),
// 4200
// )
writer4.run
// res6: Tuple2[Vector[String], Int] = (
// Vector("a!", "b!", "c!", "x!", "y!", "z!"),
// 42000
// )
Finally, we can clear the log with the reset method and swap log and result
with the swap method:
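val writer5 = writer1.reset

val writer6 = writer1.swap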
writer5.run
// res7: Tuple2[Vector[String], Int] = (Vector(), 42)
writer6.run
// res8: Tuple2[Int, Vector[String]] = (
// 42,
// Vector("a", "b", "c", "x", "y", "z")
// )
The factorial function below computes a factorial and prints out the
intermediate steps as it runs. The slowly helper function ensures this takes
a while to run, even on the very small examples below:
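def slowly[A](body: => A): A =
  try body finally Thread.sleep(100)

def factorial(n: Int): Int = {
  val ans = slowly(if (n == 0) 1 else n * factorial(n - 1))
  println(s"fact $n $ans")
  ans
}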
factorial(5)
// fact 0 1
// fact 1 1
// fact 2 2
// fact 3 6
// fact 4 24
// fact 5 120
// res9: Int = 120
import scala.concurrent._
import scala.concurrent.ExecutionContext.Implicits._
import scala.concurrent.duration._
Await.result(Future.sequence(Vector(
Future(factorial(5)),
Future(factorial(5))
)), 5.seconds)
// fact 0 1
// fact 0 1
// fact 1 1
// fact 1 1
// fact 2 2
// fact 2 2
// fact 3 6
// fact 3 6
// fact 4 24
// fact 4 24
// fact 5 120
// fact 5 120
// res: scala.collection.immutable.Vector[Int] =
// Vector(120, 120)
9.8 The Reader Monad

cats.data.Reader is a monad that allows us to sequence operations that depend on some input. Instances of Reader wrap up functions of one argument, providing us with useful methods for composing them.

One common use for Readers is dependency injection: if we have a number of operations that all depend on some external configuration, we can chain them together using a Reader to produce one large operation that accepts the configuration as a parameter and runs our program in the order specified.

We can create a Reader[A, B] from a function A => B using the Reader.apply constructor:

import cats.data.Reader
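final case class Cat(name: String, favoriteFood: String)

val catName: Reader[Cat, String] =
  Reader(cat => cat.name)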
We can extract the function again using the Reader's run method and call it
using apply as usual:
catName.run(Cat("Garfield", "lasagne"))
// res1: String = "Garfield"
So far so simple, but what advantage do Readers give us over the raw
functions?
The power of Readers comes from their map and flatMap methods, which
represent different kinds of function composition. We typically create a set
of Readers that accept the same type of configuration, combine them with map
and flatMap, and then call run to inject the config at the end.
The map method simply extends the computation in the Reader by passing its
result through a function:
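val greetKitty: Reader[Cat, String] =
  catName.map(name => s"Hello $name")

The flatMap method is more interesting. It allows us to combine readers that depend on the same input type:

val feedKitty: Reader[Cat, String] =
  Reader(cat => s"Have a nice bowl of ${cat.favoriteFood}")

val greetAndFeed: Reader[Cat, String] =
  for {
    greet <- greetKitty
    feed  <- feedKitty
  } yield s"$greet. $feed."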
greetAndFeed(Cat("Garfield", "lasagne"))
// res3: String = "Hello Garfield. Have a nice bowl of lasagne."
greetAndFeed(Cat("Heathcliff", "junk food"))
// res4: String = "Hello Heathcliff. Have a nice bowl of junk food."
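The classic use of Readers is to build programs that accept a configuration as a parameter. Let's ground this in an example of a simple login system. Our configuration will consist of two databases: a list of valid users and a list of their passwords:

final case class Db(
  usernames: Map[Int, String],
  passwords: Map[String, String]
)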
Start by creating a type alias DbReader for a Reader that consumes a Db as input.
This will make the rest of our code shorter.
Now create methods that generate DbReaders to look up the username for
an Int user ID, and look up the password for a String username. The type
signatures should be as follows:
def findUsername(userId: Int): DbReader[Option[String]] =
  ???

def checkPassword(
    username: String,
    password: String): DbReader[Boolean] =
  ???
Finally create a checkLogin method to check the password for a given user ID.
The type signature should be as follows:
def checkLogin(
userId: Int,
password: String): DbReader[Boolean] =
???
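To test checkLogin we need some example data:

val users = Map(
  1 -> "dade",
  2 -> "kate",
  3 -> "margo"
)

val passwords = Map(
  "dade"  -> "zerocool",
  "kate"  -> "acidburn",
  "margo" -> "secret"
)

val db = Db(users, passwords)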
checkLogin(1, "zerocool").run(db)
// res7: Boolean = true
checkLogin(4, "davinci").run(db)
// res8: Boolean = false
Readers provide a tool for doing dependency injection. We write steps of our
program as instances of Reader, chain them together with map and flatMap, and
build a function that accepts the dependency as input.
By representing the steps of our program as Readers we can test them as easily
as pure functions, plus we gain access to the map and flatMap combinators.
Kleisli Arrows
You may have noticed from console output that Reader is implemented
in terms of another type called Kleisli. Kleisli arrows provide a more
general form of Reader that generalise over the type constructor of the result type. We will encounter Kleislis again in Chapter 10.
9.9 The State Monad

cats.data.State allows us to pass additional state around as part of a computation. We define State instances representing atomic state operations and thread them together using map and flatMap. In this way we can model mutable state in a purely functional way, without using actual mutation.

Boiled down to their simplest form, instances of State[S, A] represent functions of type S => (S, A), where S is the type of the state and A is the type of the result:

import cats.data.State
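val a = State[Int, String] { state =>
  (state, s"The state is $state")
}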
We can “run” our monad by supplying an initial state. State provides three
methods—run, runS, and runA—that return different combinations of state and
result. Each method returns an instance of Eval, which State uses to maintain
stack safety. We call the value method as usual to extract the actual result:
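// Get the state and the result:
val (state, result) = a.run(10).value
// state: Int = 10
// result: String = "The state is 10"

// Get the state, ignore the result:
val justTheState = a.runS(10).value
// justTheState: Int = 10

// Get the result, ignore the state:
val justTheResult = a.runA(10).value
// justTheResult: String = "The state is 10"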
As we’ve seen with Reader and Writer, the power of the State monad comes
from combining instances. The map and flatMap methods thread the state
from one instance to another. Each individual instance represents an atomic
state transformation, and their combination represents a complete sequence
of changes:
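val step1 = State[Int, String] { num =>
  val ans = num + 1
  (ans, s"Result of step1: $ans")
}

val step2 = State[Int, String] { num =>
  val ans = num * 2
  (ans, s"Result of step2: $ans")
}

val both = for {
  a <- step1
  b <- step2
} yield (a, b)

val (state, result) = both.run(20).value
// state: Int = 42
// result: (String, String) = ("Result of step1: 21", "Result of step2: 42")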
As you can see, in this example the final state is the result of applying both
transformations in sequence. State is threaded from step to step even though
we don’t interact with it in the for comprehension.
The general model for using the State monad is to represent each step of a computation as an instance and compose the steps using the standard monad operators. Cats provides several convenience constructors for creating primitive steps:

• get extracts the state as the result;
• set updates the state and returns unit as the result;
• pure ignores the state and returns a supplied result;
• inspect extracts the state via a transformation function;
• modify updates the state using an update function.
import cats.data.State
import State._
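val getDemo = State.get[Int]
getDemo.run(10).value
// res: (Int, Int) = (10, 10)

val setDemo = State.set[Int](30)
setDemo.run(10).value
// res: (Int, Unit) = (30, ())

val pureDemo = State.pure[Int, String]("Result")
pureDemo.run(10).value
// res: (Int, String) = (10, "Result")

val inspectDemo = State.inspect[Int, String](x => s"${x}!")
inspectDemo.run(10).value
// res: (Int, String) = (10, "10!")

val modifyDemo = State.modify[Int](_ + 1)
modifyDemo.run(10).value
// res: (Int, Unit) = (11, ())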
In case you haven’t heard of post‐order expressions before (don’t worry if you
haven’t), they are a mathematical notation where we write the operator after
its operands. So, for example, instead of writing 1 + 2 we would write:
1 2 +
Although post‐order expressions are difficult for humans to read, they are easy
to evaluate in code. All we need to do is traverse the symbols from left to right,
carrying a stack of operands with us as we go:
• when we see a number, we push it onto the stack;
• when we see an operator, we pop two operands off the stack, operate on them, and push the result in their place.
Let’s write an interpreter for these expressions. We can parse each symbol
into a State instance representing a transformation on the stack and an
intermediate result. The State instances can be threaded together using
flatMap to produce an interpreter for any sequence of symbols.
Start by writing a function evalOne that parses a single symbol into an instance
of State. Use the code below as a template. Don’t worry about error handling
for now—if the stack is in the wrong configuration, it’s OK to throw an
exception.
import cats.data.State

type CalcState[A] = State[List[Int], A]

def evalOne(sym: String): CalcState[Int] =
  ???
If this seems difficult, think about the basic form of the State instances you’re
returning. Each instance represents a functional transformation from a stack
to a pair of a stack and a result. You can ignore any wider context and focus
on just that one step:
Feel free to write your Stack instances in this form or as sequences of the
convenience constructors we saw above.
evalOne("42").runA(Nil).value
// res10: Int = 42
We can represent more complex programs using evalOne, map, and flatMap.
Note that most of the work is happening on the stack, so we ignore the results
of the intermediate steps for evalOne("1") and evalOne("2"):
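val program = for {
  _   <- evalOne("1")
  _   <- evalOne("2")
  ans <- evalOne("+")
} yield ans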
program.runA(Nil).value
// res11: Int = 3
Generalise this example by writing an evalAll method that computes the result of a List[String]. Use evalOne to process each symbol, and thread the resulting State monads together using flatMap:

def evalAll(input: List[String]): CalcState[Int] =
  ???

We can use evalAll to conveniently evaluate multi-stage expressions:

val multistageProgram = evalAll(List("1", "2", "+", "3", "*"))

multistageProgram.runA(Nil).value
// res13: Int = 9
Because evalOne and evalAll both return instances of State, we can thread
these results together using flatMap. evalOne produces a simple stack
transformation and evalAll produces a complex one, but they’re both pure
functions and we can use them in any order as many times as we like:
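val biggerProgram = for {
  _   <- evalAll(List("1", "2", "+"))
  _   <- evalAll(List("3", "4", "+"))
  ans <- evalOne("*")
} yield ans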
biggerProgram.runA(Nil).value
// res14: Int = 21
9.10 Defining Custom Monads

We can define a Monad for a custom type by providing implementations of three methods: flatMap, pure, and a method we haven't seen yet called tailRecM. Here is an implementation of Monad for Option as an example:

import cats.Monad
import scala.annotation.tailrec

val optionMonad = new Monad[Option] {
  def flatMap[A, B](opt: Option[A])(fn: A => Option[B]): Option[B] =
    opt.flatMap(fn)

  def pure[A](opt: A): Option[A] =
    Some(opt)

  @tailrec
  def tailRecM[A, B](a: A)(fn: A => Option[Either[A, B]]): Option[B] =
  {
fn(a) match {
case None => None
case Some(Left(a1)) => tailRecM(a1)(fn)
case Some(Right(b)) => Some(b)
}
}
}
tailRecM is an optimisation used in Cats to limit the amount of stack space consumed by nested calls to flatMap. The technique comes from a 2015 paper by PureScript creator Phil Freeman. The method should recursively call itself until the result of fn returns a Right.
To motivate its use let’s use the following example: Suppose we want to write
a method that calls a function until the function indicates it should stop. The
function will return a monad instance because, as we know, monads represent
sequencing and many monads have some notion of stopping.
import cats.instances.option._
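Here is a sketch of such a method, based on the book's retry example. The Option monad's notion of stopping is returning None:

import cats.syntax.flatMap.* // for flatMap

def retry[F[_]: Monad, A](start: A)(f: A => F[A]): F[A] =
  f(start).flatMap(a => retry(a)(f))

retry(100)(a => if (a == 0) None else Some(a - 1))
// res: Option[Int] = None

This works, but it isn't stack-safe: it blows the stack for large inputs. We can rewrite it in terms of tailRecM, mapping every intermediate result to a Left so that iteration continues until f signals a stop:

def retryTailRecM[F[_]: Monad, A](start: A)(f: A => F[A]): F[A] =
  Monad[F].tailRecM(start) { a =>
    f(a).map(a2 => Left(a2))
  }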
It's important to note that we have to explicitly call tailRecM. There isn't a code transformation that will convert non-tail-recursive code into tail-recursive code that uses tailRecM. However, there are several utilities provided by the Monad type class that make these kinds of methods easier to write. For example, we can rewrite retry in terms of iterateWhileM and we don't have to explicitly call tailRecM.
Let's write a Monad for our Tree data type from last chapter. Here's the type again, together with smart constructors to simplify instance creation (a sketch; the exact ADT encoding follows the Functor exercise earlier):

enum Tree[A] {
  case Branch(left: Tree[A], right: Tree[A])
  case Leaf(value: A)
}

import Tree.*

def branch[A](left: Tree[A], right: Tree[A]): Tree[A] =
  Branch(left, right)

def leaf[A](value: A): Tree[A] =
  Leaf(value)
Verify that the code works on instances of Branch and Leaf, and that the Monad
provides Functor‐like behaviour for free.
Also verify that having a Monad in scope allows us to use for comprehensions,
despite the fact that we haven’t directly implemented flatMap or map on Tree.
Don’t feel you have to make tailRecM tail‐recursive. Doing so is quite difficult.
We’ve included both tail‐recursive and non‐tail‐recursive implementations in
the solutions so you can check your work.
9.11 Summary
In this chapter we’ve seen monads up‐close. We saw that flatMap can
be viewed as an operator for sequencing computations, dictating the order
in which operations must happen. From this viewpoint, Option represents
a computation that can fail without an error message, Either represents
computations that can fail with a message, List represents multiple possible
results, and Future represents a computation that may produce a value at some
point in the future.
We’ve also seen some of the custom types and data structures that Cats
provides, including Id, Reader, Writer, and State. These cover a wide range
of use cases.
Finally, in the unlikely event that we have to implement a custom monad, we've learned about defining our own instance using tailRecM. tailRecM is an odd wrinkle that is a concession to building a functional programming library that is stack-safe by default. We don't need to understand tailRecM to understand monads, but having it around gives us benefits for which we can be grateful when writing monadic code.
Chapter 10
Monad Transformers
Monads are like burritos: once you acquire a taste, you'll find yourself returning to them again and again. This is not without issues. Just as burritos can bloat the waistline, monads can bloat the code base through nested for-comprehensions.
Imagine we are interacting with a database. We want to look up a user record. The user may or may not be present, so we return an Option[User]. Our communication with the database could fail for many reasons, so this result is wrapped up in an Either, giving us a final result of Either[Error, Option[User]]. To use this value we must nest flatMap calls (or equivalently, for-comprehensions):
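A sketch of the shape of the problem (names illustrative):

case class User(name: String)

def lookupUser(id: Long): Either[String, Option[User]] = ???

def lookupUserName(id: Long): Either[String, Option[String]] =
  for {
    optUser <- lookupUser(id)
  } yield {
    for { user <- optUser } yield user.name
  }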
A question arises. Given two arbitrary monads, can we combine them in some
way to make a single monad? That is, do monads compose? We can try to
write the code but we soon hit problems:
import cats.syntax.applicative.* // for pure

// Hypothetically assuming two arbitrary monads M1 and M2:
def compose[M1[_]: Monad, M2[_]: Monad] = {
  type Composed[A] = M1[M2[A]]

  new Monad[Composed] {
    def pure[A](a: A): Composed[A] =
      a.pure[M2].pure[M1]

    def flatMap[A, B](fa: Composed[A])(f: A => Composed[B]): Composed[B] =
      // Problem! How do we write flatMap?
      ???
  }
}

It is impossible to write a general definition of flatMap without knowing something about M1 or M2. However, if we do know something about one or the other monad, we can typically complete the code. For example, if we fix M2 to be Option, a definition of flatMap comes to light:

def flatMap[A, B](fa: Composed[A])(f: A => Composed[B]): Composed[B] =
  fa.flatMap(_.fold[Composed[B]](None.pure[M1])(f))
Notice that the definition above makes use of None—an Option‐specific concept
that doesn’t appear in the general Monad interface. We need this extra detail
to combine Option with other monads. Similarly, there are things about other
monads that help us write composed flatMap methods for them. This is the
idea behind monad transformers: Cats defines transformers for a variety of
monads, each providing the extra knowledge we need to compose that monad
with others. Let’s look at some examples.
Cats provides transformers for many monads, each named with a T suffix:
EitherT composes Either with other monads, OptionT composes Option, and
so on.
Here’s an example that uses OptionT to compose List and Option. We can
use OptionT[List, A], aliased to ListOption[A] for convenience, to transform
a List[Option[A]] into a single monad:
import cats.data.OptionT

type ListOption[A] = OptionT[List, A]
Note how we build ListOption from the inside out: we pass List, the type
of the outer monad, as a parameter to OptionT, the transformer for the inner
monad.
The map and flatMap methods combine the corresponding methods of List and
Option into single operations:
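import cats.syntax.applicative.* // for pure

val result1: ListOption[Int] = OptionT(List(Option(10)))

val result2: ListOption[Int] = 32.pure[ListOption]

result1.flatMap { (x: Int) =>
  result2.map { (y: Int) =>
    x + y
  }
}
// res: OptionT[List, Int] = OptionT(value = List(Some(value = 42)))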
This is the basis of all monad transformers. The combined map and flatMap
methods allow us to use both component monads without having to
recursively unpack and repack values at each stage in the computation. Now
let’s look at the API in more depth.
Complexity of Imports
The imports in the code samples above hint at how everything bolts
together.
By convention, in Cats a monad Foo will have a transformer class called FooT.
In fact, many monads in Cats are defined by combining a monad transformer with the Id monad. Concretely, some of the available instances are:

• cats.data.OptionT for Option;
• cats.data.EitherT for Either;
• cats.data.ReaderT for Reader;
• cats.data.WriterT for Writer;
• cats.data.StateT for State;
• cats.data.IdT for the Id monad.
Kleisli Arrows
We can now reveal that Kleisli and ReaderT are, in fact, the same thing!
ReaderT is actually a type alias for Kleisli. Hence, we were creating
Readers last chapter and seeing Kleislis on the console.
All of these monad transformers follow the same convention. The transformer
itself represents the inner monad in a stack, while the first type parameter
specifies the outer monad. The remaining type parameters are the types we’ve
used to form the corresponding monads.
For example, our ListOption type above is an alias for OptionT[List, A] but the result is effectively a List[Option[A]]. In other words, we build monad stacks from the inside out:

type ListOption[A] = OptionT[List, A]
Many monads and all transformers have at least two type parameters, so we
often have to define type aliases for intermediate stages.
For example, suppose we want to wrap Either around Option. Option is the
innermost type so we want to use the OptionT monad transformer. We need
to use Either as the first type parameter. However, Either itself has two type
parameters and monads only have one. We need a type alias to convert the
type constructor to the correct shape:
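type ErrorOr[A] = Either[String, A]

type ErrorOrOption[A] = OptionT[ErrorOr, A]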
ErrorOrOption is a monad, just like ListOption. We can use pure, map, and
flatMap as usual to create and transform instances:
val a = 10.pure[ErrorOrOption]
// a: OptionT[ErrorOr, Int] = OptionT(value = Right(value = Some(value = 10)))
val b = 32.pure[ErrorOrOption]
Things become even more confusing when we want to stack three or more
monads.
For example, let’s create a Future of an Either of Option. Once again we build
this from the inside out with an OptionT of an EitherT of Future. However, we
can’t define this in one line because EitherT has three type parameters:
This time we create an alias for EitherT that fixes Future and Error and allows
A to vary:
import scala.concurrent.Future
import cats.data.{EitherT, OptionT}
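type FutureEither[A] = EitherT[Future, String, A]

type FutureEitherOption[A] = OptionT[FutureEither, A]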
Our mammoth stack now composes three monads and our map and flatMap
methods cut through three layers of abstraction:
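import cats.instances.future._ // for Monad
import scala.concurrent.ExecutionContext.Implicits.global

val futureEitherOr: FutureEitherOption[Int] =
  for {
    a <- 10.pure[FutureEitherOption]
    b <- 32.pure[FutureEitherOption]
  } yield a + b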
Kind Projector
Kind Projector can’t simplify all type declarations down to a single line,
but it can reduce the number of intermediate type definitions needed
to keep our code readable.
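For example, Kind Projector lets us write a partially applied type constructor inline (a sketch assuming the kind‑projector * syntax):

import cats.instances.option._ // for Monad

123.pure[EitherT[Option, String, *]]
// res: EitherT[Option, String, Int] = EitherT(value = Some(value = Right(value = 123)))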
As we saw above, we can create transformed monad stacks using the relevant
monad transformer’s apply method or the usual pure syntax¹:
¹Cats provides an instance of MonadError for EitherT, allowing us to create instances
using raiseError as well as pure.
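// Create using apply:
val errorStack1 = OptionT[ErrorOr, Int](Right(Some(10)))

// Create using pure:
val errorStack2 = 32.pure[ErrorOrOption]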
Each call to value unpacks a single monad transformer. We may need more
than one call to completely unpack a large stack. For example, to Await the
FutureEitherOption stack above, we need to call value twice:
futureEitherOr
// res6: OptionT[FutureEither, Int] = OptionT(
// value = EitherT(value = Future(Success(Right(Some(42)))))
// )
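// Unpacking the stack one transformer at a time:
val intermediate = futureEitherOr.value
// intermediate: EitherT[Future, String, Option[Int]]

val stack = intermediate.value
// stack: Future[Either[String, Option[Int]]]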
Await.result(stack, 1.second)
// res7: Either[String, Option[Int]] = Right(value = Some(value = 42))
Many monads in Cats are defined using the corresponding transformer and
the Id monad. This is reassuring as it confirms that the APIs for monads and
transformers are identical. Reader, Writer, and State are all defined in this way:
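Eliding some detail, the definitions look like this:

type Reader[E, A] = ReaderT[Id, E, A] // ReaderT is itself an alias for Kleisli
type Writer[W, A] = WriterT[Id, W, A]
type State[S, A] = StateT[Id, S, A]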
Monad transformers can be tricky to use at scale: each application fuses monads together in a predefined way, and different parts of a large code base may want different combinations. We can cope with this in multiple ways. One approach involves creating a single "super stack" and sticking to it throughout our code base. This works
if the code is simple and largely uniform in nature. For example, in a web
application, we could decide that all request handlers are asynchronous and
all can fail with the same set of HTTP error codes. We could design a custom ADT representing the errors and use a fusion of Future and Either everywhere in our code:
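A sketch of what that might look like (the error ADT here is illustrative):

sealed abstract class HttpError
final case class NotFound(item: String) extends HttpError
final case class BadRequest(msg: String) extends HttpError
// etc...

type FutureEither[A] = EitherT[Future, HttpError, A]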
The “super stack” approach starts to fail in larger, more heterogeneous code
bases where different stacks make sense in different contexts. Another design
pattern that makes more sense in these contexts uses monad transformers as
local “glue code”. We expose untransformed stacks at module boundaries,
transform them to operate on them locally, and untransform them before
passing them on. This allows each module of code to make its own decisions
about which transformers to use:
import cats.data.{OptionT, Writer}

type Logged[A] = Writer[List[String], A]

// Methods at module boundaries return untransformed stacks:
def parseNumber(str: String): Logged[Option[Int]] =
  util.Try(str.toInt).toOption match {
    case Some(num) => Writer(List(s"Read $str"), Some(num))
    case None      => Writer(List(s"Failed on $str"), None)
  }

// Consumers use monad transformers locally to simplify composition,
// and call value to untransform before passing the result on:
def addAll(a: String, b: String, c: String): Logged[Option[Int]] = {
  val result = for {
    x <- OptionT(parseNumber(a))
    y <- OptionT(parseNumber(b))
    z <- OptionT(parseNumber(c))
  } yield x + y + z

  result.value
}
10.4 Exercise: Monads: Transform and Roll Out
Optimus Prime is getting tired of the nested for comprehensions in his neural
matrix. Help him by rewriting Response using a monad transformer.
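The omitted listing defines Response as a Future of an Either, roughly type Response[A] = Future[Either[String, A]]. Rewritten with a monad transformer it becomes:

type Response[A] = EitherT[Future, String, A]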
²It is a well known fact that Autobot neural nets are implemented in Scala. Decepticon
brains are, of course, dynamically typed.
Now test the code by implementing getPowerLevel to retrieve data from a set
of imaginary allies. Here’s the data we’ll use:
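val powerLevels = Map(
  "Jazz"      -> 6,
  "Bumblebee" -> 8,
  "Hot Rod"   -> 10
)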
Two autobots can perform a special move if their combined power level is
greater than 15. Write a second method, canSpecialMove, that accepts the
names of two allies and checks whether a special move is possible. If either
ally is unavailable, fail with an appropriate error message:
Finally, write a method tacticalReport that takes two ally names and prints a
message saying whether they can perform a special move:
tacticalReport("Jazz", "Bumblebee")
// res13: String = "Jazz and Bumblebee need a recharge."
tacticalReport("Bumblebee", "Hot Rod")
// res14: String = "Bumblebee and Hot Rod are ready to roll out!"
tacticalReport("Jazz", "Ironhide")
// res15: String = "Comms error: Ironhide unreachable"
10.5 Summary
The type signatures of monad transformers are written from the inside out, so
an EitherT[Option, String, A] is a wrapper for an Option[Either[String, A]].
It is often useful to use type aliases when writing transformer types for deeply
nested monads.
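Chapter 11

Semigroupal and Applicative

In previous chapters we saw how monads let us sequence computations, each step depending on the one before. Sometimes that is too strong a requirement. Consider validating several inputs with a parseInt method (a sketch of the omitted definition):

def parseInt(str: String): Either[String, Int] =
  str.toIntOption.toRight(s"Couldn't read $str")

A for‐comprehension over Either fails fast, stopping at the first error: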
for {
a <- parseInt("a")
b <- parseInt("b")
c <- parseInt("c")
} yield (a + b + c)
// res0: Either[String, Int] = Left(value = "Couldn't read a")
The calls to parseInt above are independent of one another, but map and flatMap can't exploit this. We need a weaker construct—one that doesn't guarantee sequencing—to achieve the result we want. In this chapter we will look at three type classes that support this pattern:

• Semigroupal, which encompasses the notion of composing pairs of contexts;
• Parallel, which converts types with a Monad instance to a related type with an Applicative instance; and
• Applicative, which extends Semigroupal and Functor and provides a way of applying functions to parameters within a context.
11.1 Semigroupal
cats.Semigroupal is a type class that allows us to combine contexts: given an F[A] and an F[B], a Semigroupal[F] can combine them to form an F[(A, B)]. Its definition in Cats is:

trait Semigroupal[F[_]] {
  def product[A, B](fa: F[A], fb: F[B]): F[(A, B)]
}
import cats.Semigroupal
import cats.instances.option._ // for Semigroupal
Semigroupal[Option].product(Some(123), Some("abc"))
// res1: Option[Tuple2[Int, String]] = Some(value = (123, "abc"))
If both parameters are instances of Some, we end up with a tuple of the values
within. If either parameter evaluates to None, the entire result is None:
¹It is also the winner of Underscore’s 2017 award for the most difficult functional
programming term to work into a coherent English sentence.
Semigroupal[Option].product(None, Some("abc"))
// res2: Option[Tuple2[Nothing, String]] = None
Semigroupal[Option].product(Some(123), None)
// res3: Option[Tuple2[Int, Nothing]] = None
The methods map2 through map22 apply a user‐specified function to the values
inside 2 to 22 contexts:
Semigroupal.map2(Option(1), Option.empty[Int])(_ + _)
// res7: Option[Int] = None
There are also methods contramap2 through contramap22 and imap2 through imap22, which require instances of Contravariant and Invariant respectively.
There is only one law for Semigroupal: the product method must be associative.
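In code, the results must be equal up to how we nest the tuples:

product(a, product(b, c)) == product(product(a, b), c)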
11.2 Apply Syntax
Cats provides a convenient apply syntax that provides a shorthand for the
methods described above. We import the syntax from cats.syntax.apply.
Here’s an example:
The tupled method is implicitly added to the tuple of Options. It uses the
Semigroupal for Option to zip the values inside the Options, creating a single
Option of a tuple:
(Option(123), Option("abc")).tupled
// res8: Option[Tuple2[Int, String]] = Some(value = (123, "abc"))
We can use the same trick on tuples of up to 22 values. Cats defines a separate
tupled method for each arity:
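(Option(1), Option(2), Option(3)).tupled
// res9: Option[Tuple3[Int, Int, Int]] = Some(value = (1, 2, 3))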
In addition to tupled, Cats’ apply syntax provides a method called mapN that
accepts an implicit Functor and a function of the correct arity to combine the
values.
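The example below builds a Cat from three Options (Cat matches the output shown):

final case class Cat(name: String, born: Int, color: String)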
(
Option("Garfield"),
Option(1978),
Option("Orange & black")
).mapN(Cat.apply)
// res10: Option[Cat] = Some(
//   value = Cat(name = "Garfield", born = 1978, color = "Orange & black")
// )
Internally mapN uses the Semigroupal to extract the values from the Option and
the Functor to apply the values to the function.
It’s nice to see that this syntax is type checked. If we supply a function that
accepts the wrong number or types of parameters, we get a compile error:
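Assuming a function of the wrong shape for this tuple:

val add: (Int, Int) => Int = (a, b) => a + b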
(Option("cats"), Option(true)).mapN(add)
// error:
// ':' expected, but '(' found
Apply syntax also has contramapN and imapN methods that accept Contravariant and Invariant functors (Section 8.6). For example, we can combine Monoids using Invariant:
import cats.Monoid
import cats.instances.int._ // for Monoid
import cats.instances.invariant._ // for Semigroupal
import cats.instances.list._ // for Monoid
import cats.instances.string._ // for Monoid
import cats.syntax.apply._ // for imapN
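A sketch of the example (reconstructed; Cat here uses the fields from the Future example in Section 11.3):

final case class Cat(name: String, yearOfBirth: Int, favoriteFoods: List[String])

val tupleToCat: (String, Int, List[String]) => Cat =
  Cat.apply

val catToTuple: Cat => (String, Int, List[String]) =
  cat => (cat.name, cat.yearOfBirth, cat.favoriteFoods)

implicit val catMonoid: Monoid[Cat] =
  (Monoid[String], Monoid[Int], Monoid[List[String]]).imapN(tupleToCat)(catToTuple)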
Our Monoid allows us to create “empty” Cats, and add Cats together using the
syntax from Chapter 7:
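import cats.syntax.semigroup._ // for |+|

val garfield = Cat("Garfield", 1978, List("Lasagne"))
val heathcliff = Cat("Heathcliff", 1988, List("Junk Food"))

garfield |+| heathcliff
// res: Cat = Cat("GarfieldHeathcliff", 3966, List("Lasagne", "Junk Food"))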
11.3 Semigroupal Applied to Different Types

Semigroupal doesn't always provide the behaviour we might expect, particularly for types that also have a Monad instance. Let's look at some examples, starting with Future.

Future
import cats.Semigroupal
import cats.instances.future._ // for Semigroupal
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
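val futurePair = Semigroupal[Future].product(Future("Hello"), Future(123))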
Await.result(futurePair, 1.second)
// res0: Tuple2[String, Int] = ("Hello", 123)
The two Futures start executing the moment we create them, so they are
already calculating results by the time we call product. We can use apply syntax
to zip fixed numbers of Futures:
val futureCat = (
Future("Garfield"),
Future(1978),
Future(List("Lasagne"))
).mapN(Cat.apply)
Await.result(futureCat, 1.second)
// res1: Cat = Cat(
// name = "Garfield",
// yearOfBirth = 1978,
// favoriteFoods = List("Lasagne")
// )
List
import cats.Semigroupal
import cats.instances.list._ // for Semigroupal
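We might expect product applied to Lists to zip them; instead we get the Cartesian product:

Semigroupal[List].product(List(1, 2), List(3, 4))
// res2: List[Tuple2[Int, Int]] = List((1, 3), (1, 4), (2, 3), (2, 4))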
Either
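Similarly, we might expect product applied to Either to accumulate errors; instead it fails fast, just like flatMap. The example below uses this alias:

import cats.instances.either._ // for Semigroupal

type ErrorOr[A] = Either[Vector[String], A]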
Semigroupal[ErrorOr].product(
Left(Vector("Error 1")),
Left(Vector("Error 2"))
)
// res3: Either[Vector[String], Tuple2[Nothing, Nothing]] = Left(
// value = Vector("Error 1")
// )
In this example product sees the first failure and stops, even though it is
possible to examine the second parameter and see that it is also a failure.
The reason for the surprising results for List and Either is that they are both
monads. If we have a monad we can implement product as follows.
import cats.Monad
import cats.syntax.functor._ // for map
import cats.syntax.flatMap._ // for flatmap
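// product in terms of flatMap and map:
def product[F[_]: Monad, A, B](fa: F[A], fb: F[B]): F[(A, B)] =
  fa.flatMap(a => fb.map(b => (a, b)))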
Even our results for Future are a trick of the light. flatMap provides sequential
ordering, so product provides the same. The parallel execution we observe
occurs because our constituent Futures start running before we call product.
This is equivalent to the classic create‐then‐flatMap pattern:
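// Create the Futures first...
val a = Future("Future 1")
val b = Future("Future 2")

// ...then combine them: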
for {
x <- a
y <- b
} yield (x, y)
So why bother with Semigroupal at all? The answer is that we can create useful
data types that have instances of Semigroupal (and Applicative) but not Monad.
This frees us to implement product in different ways. We’ll examine this further
in a moment when we look at an alternative data type for error handling.
Why does product for List produce the Cartesian product? We saw an
example above. Here it is again.
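Semigroupal[List].product(List(1, 2), List(3, 4))
// List((1, 3), (1, 4), (2, 3), (2, 4))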
11.4 Parallel
In the previous section we saw that when we call product on a type that has a Monad instance we get sequential semantics. This makes sense from the point‐
of‐view of keeping consistency with implementations of product in terms of
flatMap and map. However it’s not always what we want. The Parallel type
class, and its associated syntax, allows us to access alternate semantics for
certain monads.
We’ve seen how the product method on Either stops at the first error.
import cats.Semigroupal
import cats.instances.either._ // for Semigroupal
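type ErrorOr[A] = Either[Vector[String], A]

val error1: ErrorOr[Int] = Left(Vector("Error 1"))
val error2: ErrorOr[Int] = Left(Vector("Error 2"))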
Semigroupal[ErrorOr].product(error1, error2)
// res0: Either[Vector[String], Tuple2[Int, Int]] = Left(
// value = Vector("Error 1")
// )
(error1, error2).tupled
// res1: Either[Vector[String], Tuple2[Int, Int]] = Left(
// value = Vector("Error 1")
// )
To collect all the errors we simply replace tupled with its “parallel” version
called parTupled.
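import cats.syntax.parallel._ // for parTupled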
(error1, error2).parTupled
// res2: Either[Vector[String], Tuple2[Int, Int]] = Left(
// value = Vector("Error 1", "Error 2")
// )
Notice that both errors are returned! This behaviour is not special to using
Vector as the error type. Any type that has a Semigroup instance will work. For
example, here we use List instead.
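import cats.instances.list._ // for Semigroup on List

type ErrorOrList[A] = Either[List[String], A]

val errStr1: ErrorOrList[Int] = Left(List("error 1"))
val errStr2: ErrorOrList[Int] = Left(List("error 2"))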
(errStr1, errStr2).parTupled
// res3: Either[List[String], Tuple2[Int, Int]] = Left(
// value = List("error 1", "error 2")
// )
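There is also a parMapN, the parallel version of mapN. Assuming:

val success1: ErrorOr[Int] = Right(1)
val success2: ErrorOr[Int] = Right(2)
val addTwo = (x: Int, y: Int) => x + y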
(error1, error2).parMapN(addTwo)
// Left(value = Vector("Error 1", "Error 2"))

(success1, success2).parMapN(addTwo)
// res4: Either[Vector[String], Int] = Right(value = 3)
Let’s dig into how Parallel works. The definition below is the core of Parallel.
trait Parallel[M[_]] {
  type F[_]

  def applicative: Applicative[F]
  def monad: Monad[M]
  def parallel: ~>[M, F]
}
This tells us if there is a Parallel instance for some type constructor M then:

• there must be a Monad instance for M;
• there is a related type constructor F that has an Applicative instance; and
• we can convert M to F.
We haven’t seen ~> before. It’s a type alias for FunctionK and is what performs
the conversion from M to F. A normal function A => B converts values of type
A to values of type B. Remember that M and F are not types; they are type
constructors. A FunctionK M ~> F is a function from a value with type M[A] to
a value with type F[A]. Let’s see a quick example by defining a FunctionK that
converts an Option to a List.
import cats.arrow.FunctionK
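val optionToList: FunctionK[Option, List] =
  new FunctionK[Option, List] {
    def apply[A](fa: Option[A]): List[A] =
      fa.toList
  }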
optionToList(Some(1))
// res5: List[Int] = List(1)
optionToList(None)
// res6: List[Nothing] = List()
So in summary, Parallel allows us to take a type that has a monad instance and
convert it to some related type that instead has an applicative (or semigroupal)
instance. This related type will have some useful alternate semantics. We’ve
seen the case above where the related applicative for Either allows for
accumulation of errors instead of fail‐fast semantics.
Now we’ve seen Parallel it’s time to finally learn about Applicative.
Does List have a Parallel instance? If so, what does the Parallel instance
do?
11.5 Apply and Applicative

Cats models applicatives using two type classes. The first, cats.Apply, extends
Semigroupal and Functor and adds an ap method that applies a parameter to a
function within a context. The second, cats.Applicative, extends Apply and
adds the pure method introduced in Chapter 9. Here’s a simplified definition
in code:
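trait Apply[F[_]] extends Semigroupal[F] with Functor[F] {
  def ap[A, B](ff: F[A => B])(fa: F[A]): F[B]

  def product[A, B](fa: F[A], fb: F[B]): F[(A, B)] =
    ap(map(fa)(a => (b: B) => (a, b)))(fb)
}

trait Applicative[F[_]] extends Apply[F] {
  def pure[A](a: A): F[A]
}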
Applicative also introduces the pure method. This is the same pure we saw in
Monad. It constructs a new applicative instance from an unwrapped value. In
this sense, Applicative is related to Apply as Monoid is related to Semigroup.
With the introduction of Apply and Applicative, we can zoom out and see
a whole family of type classes that concern themselves with sequencing
computations in different ways. Figure 11.1 shows the relationship between
the type classes covered in this book³.
• every monad is also an applicative;
• every applicative is also a semigroupal;
• and so on.
Because of the lawful nature of the relationships between the type classes,
the inheritance relationships are constant across all instances of a type class.
Apply defines product in terms of ap and map; Monad defines product, ap, and map,
in terms of pure and flatMap.
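To illustrate, imagine two hypothetical data types (a sketch; the names are placeholders): Foo has a Monad instance, while Bar has only an Applicative instance.

trait Foo[A]
object Foo {
  implicit def fooMonad: cats.Monad[Foo] = ???
}

trait Bar[A]
object Bar {
  implicit def barApplicative: cats.Applicative[Bar] = ???
}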
What can we say about these two data types without knowing more about
their implementation?
We know strictly more about Foo than Bar: Monad is a subtype of Applicative, so
we can guarantee properties of Foo (namely flatMap) that we cannot guarantee
with Bar. Conversely, we know that Bar may have a wider range of behaviours
than Foo. It has fewer laws to obey (no flatMap), so it can implement behaviours
that Foo cannot.
This demonstrates the classic trade‐off of power (in the mathematical sense)
versus constraint. The more constraints we place on a data type, the more
guarantees we have about its behaviour, but the fewer behaviours we can
model.
Monads happen to be a sweet spot in this trade‐off. They are flexible enough
to model a wide range of behaviours and restrictive enough to give strong
guarantees about those behaviours. However, there are situations where
monads aren’t the right tool for the job. Sometimes we want Thai food, and
burritos just won’t satisfy.
11.6 Summary
While monads and functors are the most widely used sequencing data types
we’ve covered in this book, semigroupals and applicatives are the most general.
These type classes provide a generic mechanism to combine values and apply
functions within a context, from which we can fashion monads and a variety
of other combinators.
That completes the theoretical agenda for this book. The next chapter covers Traverse and Foldable, two powerful type classes for converting between data types. After that we'll look at several case studies that bring together all of the concepts from Part I.
Chapter 12

Foldable and Traverse

In this chapter we'll look at two type classes that capture iteration over collections:

• Foldable abstracts the familiar foldLeft and foldRight operations;
• Traverse is a higher‐level abstraction that uses Applicatives to iterate with less pain than folding.
We’ll start by looking at Foldable, and then examine cases where folding
becomes complex and Traverse becomes convenient.
12.1 Foldable
The Foldable type class captures the foldLeft and foldRight methods we're used to in sequences like Lists, Vectors, and LazyLists. Using Foldable, we can
write generic folds that work with a variety of sequence types. We can also
invent new sequences and plug them into our code. Foldable gives us great
use cases for Monoids and the Eval monad.
Let’s start with a quick recap of the general concept of folding. We supply an
accumulator value and a binary function to combine it with each item in the
sequence:
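def show[A](list: List[A]): String =
  list.foldLeft("nil")((accum, item) => s"$item then $accum")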
show(Nil)
// res0: String = "nil"
show(List(1, 2, 3))
// res1: String = "3 then 2 then 1 then nil"
The foldLeft method works recursively down the sequence. Our binary
function is called repeatedly for each item, the result of each call becoming
the accumulator for the next. When we reach the end of the sequence, the
final accumulator becomes our final result.
Depending on the operation we're performing, the order in which we fold may be important. Because of this there are two standard variants of fold:

• foldLeft traverses from "left" to "right" (start to finish);
• foldRight traverses from "right" to "left" (finish to start).
List(1, 2, 3).foldLeft(0)(_ + _)
// res2: Int = 6
List(1, 2, 3).foldRight(0)(_ + _)
// res3: Int = 6
(Figure: folding List(1, 2, 3) with the + operator and accumulator 0. foldLeft associates to the left, computing ((0 + 1) + 2) + 3; foldRight associates to the right, computing 1 + (2 + (3 + 0)). Both give 6.)
List(1, 2, 3).foldLeft(0)(_ - _)
// res4: Int = -6
List(1, 2, 3).foldRight(0)(_ - _)
// res5: Int = 2
Try using foldLeft and foldRight with an empty list as the accumulator and ::
as the binary operator. What results do you get in each case?
foldLeft and foldRight are very general methods. We can use them to
implement many of the other high‐level sequence operations we know. Prove
this to yourself by implementing substitutes for List's map, flatMap, filter, and
sum methods in terms of foldRight.
Cats’ Foldable abstracts foldLeft and foldRight into a type class. Instances
of Foldable define these two methods and inherit a host of derived methods.
Cats provides out‐of‐the‐box instances of Foldable for a handful of Scala data
types: List, Vector, LazyList, and Option.
import cats.Foldable
import cats.instances.list._ // for Foldable

val ints = List(1, 2, 3)

Foldable[List].foldLeft(ints, 0)(_ + _)
// res0: Int = 6
Other sequences like Vector and LazyList work in the same way. Here is an example using Option, which is treated like a sequence of zero or one elements:

val maybeInt = Option(123)

Foldable[Option].foldLeft(maybeInt, 10)(_ * _)
// res1: Int = 1230
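Foldable defines foldRight differently, in terms of the Eval monad:

def foldRight[A, B](fa: F[A], lb: Eval[B])(f: (A, Eval[B]) => Eval[B]): Eval[B]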
Using Eval means folding is always stack safe, even when the collection’s
default definition of foldRight is not. For example, the default implementation
of foldRight for LazyList is not stack safe. The longer the lazy list, the larger
the stack requirements for the fold. A sufficiently large lazy list will trigger a
StackOverflowError:
import cats.Eval
import cats.Foldable

def bigData = (1 to 100000).to(LazyList)

bigData.foldRight(0L)(_ + _)
// java.lang.StackOverflowError ...
Using Foldable forces us to use stack safe operations, which fixes the overflow
exception:
import cats.instances.lazyList._ // for Foldable

val eval: Eval[Long] =
  Foldable[LazyList].foldRight(bigData, Eval.now(0L)) { (num, eval) =>
    eval.map(_ + num)
  }

eval.value
// res3: Long = 5000050000L
Stack safety isn’t typically an issue when using the standard library. The
most commonly used collection types, such as List and Vector, provide
stack safe implementations of foldRight:
(1 to 100000).toList.foldRight(0L)(_ + _)
// res: Long = 5000050000L

(1 to 100000).toVector.foldRight(0L)(_ + _)
// res4: Long = 5000050000L
Foldable provides many useful methods defined on top of foldLeft, mirroring familiar methods from the standard library: find, exists, forall, toList, isEmpty, nonEmpty, and so on:

Foldable[Option].nonEmpty(Option(42))
// res5: Boolean = true
Foldable[List].find(List(1, 2, 3))(_ % 2 == 0)
// res6: Option[Int] = Some(value = 2)
In addition to these familiar methods, Cats provides two methods that make
use of Monoids:
• combineAll (and its alias fold) combines all elements in the sequence using their Monoid;
• foldMap maps a user‐supplied function over the sequence and combines the results using a Monoid.
Foldable[List].combineAll(List(1, 2, 3))
// res7: Int = 6
Foldable[List].foldMap(List(1, 2, 3))(_.toString)
// res8: String = "123"
Every method in Foldable is available in syntax form via cats.syntax.foldable:

import cats.syntax.foldable._ // for combineAll and foldMap

List(1, 2, 3).combineAll
// res11: Int = 6
List(1, 2, 3).foldMap(_.toString)
// res12: String = "123"
Remember that Scala will only use an instance of Foldable if the method
isn’t explicitly available on the receiver. For example, the following code
will use the version of foldLeft defined on List:
List(1, 2, 3).foldLeft(0)(_ + _)
// res13: Int = 6
12.2 Traverse
foldLeft and foldRight are flexible iteration methods but they require us to do
a lot of work to define accumulators and combinator functions. The Traverse
type class is a higher level tool that leverages Applicatives to provide a more
convenient, more lawful, pattern for iteration.
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
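For example (reconstructed to match the uptimes shown below):

val hostnames = List(
  "alpha.example.com",
  "beta.example.com",
  "gamma.demo.com"
)

def getUptime(hostname: String): Future[Int] =
  Future(hostname.length * 60) // just for demonstration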
Now, suppose we want to poll all of the hosts and collect all of their uptimes.
We can’t simply map over hostnames because the result—a List[Future[Int]]—
would contain more than one Future. We need to reduce the results to a single
Future to get something we can block on. Let’s start by doing this manually
using a fold:
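val allUptimes: Future[List[Int]] =
  hostnames.foldLeft(Future(List.empty[Int])) { (accum, host) =>
    val uptime = getUptime(host)
    for {
      accum <- accum
      uptime <- uptime
    } yield accum :+ uptime
  }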
Await.result(allUptimes, 1.second)
// res0: List[Int] = List(1020, 960, 840)
Intuitively, we iterate over hostnames, call func for each item, and combine the
results into a list. This sounds simple, but the code is fairly unwieldy because
of the need to create and combine Futures at every iteration. We can improve
on things greatly using Future.traverse, which is tailor‐made for this pattern:
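val allUptimes: Future[List[Int]] =
  Future.traverse(hostnames)(getUptime)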
Await.result(allUptimes, 1.second)
// res2: List[Int] = List(1020, 960, 840)
This is much clearer and more concise—let’s see how it works. If we ignore
distractions like CanBuildFrom and ExecutionContext, the implementation of
Future.traverse in the standard library looks like this:
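def traverse[A, B](values: List[A])(func: A => Future[B]): Future[List[B]] =
  values.foldLeft(Future(List.empty[B])) { (accum, host) =>
    val item = func(host)
    for {
      accum <- accum
      item <- item
    } yield accum :+ item
  }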
The standard library also provides Future.sequence, for when we start with a List[Future[B]] and don't need to provide an identity function:

object Future {
  def sequence[B](futures: List[Future[B]]): Future[List[B]] =
    traverse(futures)(identity)

  // etc...
}
Cats’ Traverse type class generalises these patterns to work with any type of
Applicative: Future, Option, Validated, and so on. We’ll approach Traverse
in the next sections in two steps: first we’ll generalise over the Applicative,
then we’ll generalise over the sequence type. We’ll end up with an extremely
valuable tool that trivialises many operations involving sequences and other
data types.
Our accumulator, Future(List.empty[Int]), is equivalent to Applicative.pure:
import cats.Applicative
import cats.instances.future._ // for Applicative
import cats.syntax.applicative._ // for pure
List.empty[Int].pure[Future]
def oldCombine(
accum : Future[List[Int]],
host : String
): Future[List[Int]] = {
val uptime = getUptime(host)
for {
accum <- accum
uptime <- uptime
} yield accum :+ uptime
}
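Our combining function is equivalent to combining with apply syntax, and generalising over the Applicative gives us listTraverse and listSequence (a sketch following this reasoning):

import cats.syntax.apply._ // for mapN

def newCombine(accum: Future[List[Int]], host: String): Future[List[Int]] =
  (accum, getUptime(host)).mapN(_ :+ _)

def listTraverse[F[_]: Applicative, A, B](list: List[A])(func: A => F[B]): F[List[B]] =
  list.foldLeft(List.empty[B].pure[F]) { (accum, item) =>
    (accum, func(item)).mapN(_ :+ _)
  }

def listSequence[F[_]: Applicative, B](list: List[F[B]]): F[List[B]] =
  listTraverse(list)(identity)

val totalUptime = listTraverse(hostnames)(getUptime)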
Await.result(totalUptime, 1.second)
// res5: List[Int] = List(1020, 960, 840)
or we can use it with other Applicative data types as shown in the following
exercises.
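Here's an example that uses Options (the draft's definition is omitted; this reconstruction follows the same pattern):

import cats.instances.option._ // for Applicative

def process(inputs: List[Int]): Option[List[Int]] =
  listTraverse(inputs)(n => if (n % 2 == 0) Some(n) else None)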
What is the return type of this method? What does it produce for the following
inputs?
process(List(2, 4, 6))
process(List(1, 2, 3))
import cats.data.Validated
import cats.instances.list._ // for Monoid
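The Validated version accumulates errors (a reconstructed sketch):

type ErrorsOr[A] = Validated[List[String], A]

def process(inputs: List[Int]): ErrorsOr[List[Int]] =
  listTraverse(inputs) { n =>
    if (n % 2 == 0) Validated.valid(n)
    else Validated.invalid(List(s"$n is not even"))
  }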
process(List(2, 4, 6))
process(List(1, 2, 3))
Our listTraverse and listSequence methods work with any type of Applicative, but they only work with one type of sequence: List. We can generalise over different sequence types using a type class, which brings us to Cats' Traverse. Here's the abbreviated definition:
package cats

trait Traverse[F[_]] {
  def traverse[G[_]: Applicative, A, B]
      (inputs: F[A])(func: A => G[B]): G[F[B]]

  def sequence[G[_]: Applicative, B]
      (inputs: F[G[B]]): G[F[B]] =
    traverse(inputs)(identity)
}
Cats provides instances of Traverse for List, Vector, LazyList, Option, Either, and a variety of other types. We can summon instances as usual using Traverse.apply and use the traverse and sequence methods as described in the previous section:
import cats.Traverse
import cats.instances.future._ // for Applicative
import cats.instances.list._ // for Traverse
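val totalUptime = Traverse[List].traverse(hostnames)(getUptime)

val numbers = List(Future(1), Future(2), Future(3))
val numbers2 = Traverse[List].sequence(numbers)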
Await.result(totalUptime, 1.second)
// res0: List[Int] = List(1020, 960, 840)
Await.result(numbers2, 1.second)
// res1: List[Int] = List(1, 2, 3)
There are also syntax versions of the methods, imported via cats.syntax.traverse:
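import cats.syntax.traverse._ // for sequence and traverse

val numbers3 = hostnames.traverse(getUptime)
val numbers4 = numbers.sequence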
Await.result(numbers3, 1.second)
// res2: List[Int] = List(1020, 960, 840)
Await.result(numbers4, 1.second)
// res3: List[Int] = List(1, 2, 3)
As you can see, this is much more compact and readable than the foldLeft
code we started with earlier this chapter!
12.3 Summary
In this chapter we were introduced to Foldable and Traverse, two type classes
for iterating over sequences.
The real power comes from Traverse, which abstracts and generalises the
traverse and sequence methods we know from Future. Using these methods
we can turn an F[G[A]] into a G[F[A]] for any F with an instance of Traverse
and any G with an instance of Applicative. In terms of the reduction we get in
lines of code, Traverse is one of the most powerful patterns in this book. We
can reduce folds of many lines down to a single foo.traverse.
…and with that, we’ve finished all of the theory in this book. There’s plenty
more to come, though, as we put everything we’ve learned into practice in a
series of in‐depth case studies in Part II!
Part III
Interpreters
Chapter 13
Indexed Types
In this chapter we look at indexed types. Both data and codata can be indexed.
An indexed type consists of a type constructor, a type like F[_], along with a set of types that can fill in the type parameters of the constructor. Let's say the set
of types is Int, String, and Option[A]. Then, for a type constructor F we can
construct an indexed type from the set F[Int], F[String], and F[Option[A]]. As
the name suggests, the indices act as indexes into this set of types.
This is a very abstract definition and doesn’t help us understand how indexed
types are useful. We’ll see a lot of details and examples in this chapter, but
let’s start with a more useful high‐level overview. When we’re treating a type
as indexed data we create elements from our set of types. We can think of
this as providing a proof that some type parameter, A in the example above, is
equal to some other type. When we’re treating a type as indexed codata we
require the user to supply such a proof before we allow a method to be called.
As you might expect, indexed data and indexed codata are duals.
Indexed data are more usually known as generalized algebraic data types.
Indexed codata are sometimes known as typestate. Both can make use of
what is known as phantom types. Indeed, an early name for indexed data was
first‐class phantom types.
Another way to look at this duality is in terms of type equalities, which are proofs or guarantees that
a particular type parameter is equal to a particular concrete type. When we
work with indexed codata we require the user supplies us with these type
equalities. When we work with indexed data we discover these type equalities
as we destructure the data.
Phantom types are a basic building block of indexed types, so we’ll start with
an example of them. A phantom type is simply a type parameter that doesn’t
correspond to any value. In the example below, the type parameter A is a
phantom type, because there is no value of type A, while B is not because
there is a value of that type.
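For example (an illustrative definition):

final case class PhantomExample[A, B](value: B)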
Using phantom types we can annotate measurements with their units, which
in turn can prevent us ever using incompatible units. Let’s work with just
length, which is sufficient to show the idea. We’ll start by defining a length
type with a phantom type recording the unit, and a method that allows us to
add together lengths.
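A sketch of that definition:

final case class Length[Unit](value: Double) {
  def +(that: Length[Unit]): Length[Unit] =
    Length(value + that.value)
}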
We’ll need to define a few unit types to use this, and some Lengths using these
units.
trait Metres
trait Feet
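val threeMetres = Length[Metres](3)
val threeFeetAndRising = Length[Feet](3)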
Now we can add Lengths together if they have the same unit.
threeMetres + threeMetres
// res0: Length[Metres] = Length(value = 6.0)
However if we try to add Lengths with different units the code will not compile.
threeMetres + threeFeetAndRising
// error:
// Found: (repl.MdocSession.MdocApp.threeFeetAndRising :
// repl.MdocSession.MdocApp.Length[repl.MdocSession.MdocApp.Feet])
// Required: repl.MdocSession.MdocApp.Length[repl.MdocSession.MdocApp.Metres]
// threeMetres + threeFeetAndRising
// ^^^^^^^^^^^^^^^^^^
There is one big problem with phantom types on their own: there is no way
to use the information stored in the phantom type in further processing. For
example, force times length gives torque (with the SI unit of newton‐metres).
However we cannot define a * method on Length that can only be called if the Unit is Metres using just the tool of phantom types. Similarly, we cannot define, say, a toString method that uses the Unit type to appropriately print the result. Solving these problems leads us to indexed codata.
We’ll solve all these problems in due course, but before we move on let’s
see another, more complex, example of phantom types to give you a better
understanding of their power.
Our next example will represent a subset of HTML, the language used to write
web pages, in a typesafe way. An example of HTML is below.
<!DOCTYPE html>
<html>
<head><title>Our Amazing Web Page</title></head>
<body>
<h1>This Is Our Amazing Web Page</h1>
<p>Please be in awe of its <strong>amazingness</strong></p>
</body>
</html>
In HTML the content of the page is marked up with tags, like <h1>, that give
it meaning. For example, <h1> means a heading at level one, and <p> means
a paragraph. An opening tag is closed by a corresponding closing tag such as
</h1> for <h1> and </p> for <p>.
There are rules that control where tags are allowed. The complete set of tags, and their associated rules, is very complex. We'll use the following, much simplified, rules:
We’re missing one thing: we need to be able to end our recursion in text
content. In fact we have two different kinds of tags: those that can contain
text content in addition to other tags (we’ll call these content tags) and those
that cannot (which we will call structural tags.)
13.2 Indexed Codata

The basic idea of indexed codata is to prevent methods being called unless
certain conditions, encoded in types, are met. More precisely, methods are
guarded by type equalities that callers must prove they satisfy to call a method.
The contextual abstraction features, given instances and using clauses, are
used to implement this in Scala.
We’ll start our exploration of indexed codata with a very simple example. We
are going to define a switch that can only be turned on when it is off, and off
when it is on. Since this is codata, we start with an interface.
trait Switch {
def on: Switch
def off: Switch
}
There are no constraints on this interface as defined; we can turn any switch
on, even if it is already on, and vice versa. The first step to implement such a
constraint is to add a type parameter, which will hold the state of the Switch.
trait Switch[A] {
def on: Switch[A]
def off: Switch[A]
}
Implementing these constraints has two parts. The first is defining types to
represent on and off.
trait On
trait Off
The second step is to add the constraints to the relevant methods on Switch.
Here is how we do it.
trait Switch[A] {
def on(using ev: A =:= Off): Switch[On]
def off(using ev: A =:= On): Switch[Off]
}
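A minimal implementation (reconstructed to match the output below):

final case class SimpleSwitch[A]() extends Switch[A] {
  def on(using ev: A =:= Off): Switch[On] = SimpleSwitch()
  def off(using ev: A =:= On): Switch[Off] = SimpleSwitch()
}

object SimpleSwitch {
  val on: Switch[On] = SimpleSwitch[On]()
  val off: Switch[Off] = SimpleSwitch[Off]()
}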
SimpleSwitch.on.off
// res2: Switch[Off] = SimpleSwitch()
SimpleSwitch.off.on
// res3: Switch[On] = SimpleSwitch()
SimpleSwitch.on.on
// error:
// Cannot prove that MdocApp1.this.On =:= MdocApp1.this.Off.
The constraint is made of two parts: using clauses, which we learned about in Chapter 4, and the A =:= B construction, which is new. =:= represents a type equality.
If a given instance A =:= B exists, then the type A is equal to the type B. (Note
we can write this with the more familiar prefix notation =:=[A, B] if we prefer.)
We never create these instances ourselves. Instead the compiler creates them
for us. In the on method, we are asking the compiler to construct an instance
A =:= Off, which can only be done if A is Off. This in turn means we can only
call the method when the Switch is Off. This is the core idea of indexed codata:
we reflect states as types, and restrict method calls to a subset of states.
Exercise: Torque
In Section 13.1 we saw how we could use phantom types to represent units.
We also ran into a limitation: we had no way to inspect the phantom types
and hence make decisions based on them. Now, with indexed codata, we can do that: define a Torque type, and a * method on Length that multiplies a Length in metres by a Force in newtons to give a Torque in newton‐metres.
An API protocol defines the order in which methods must be called. The
protocol in the case of Switch is that we can only call off after calling on and
vice versa. This protocol is a simple finite state machine, and is illustrated in Figure 13.1. Many common types have similar protocols. For example, files
can only be read once they are opened and cannot be read once they have
been closed.
(Figure 13.1: the protocol for Switch as a finite state machine, with states On and Off and transitions on and off.)
We can implement such protocols with a single type parameter that represents the state, as we did with Switch.
We can also use multiple type parameters if that makes for a more convenient
representation.
Let’s see an example using multiple type parameters. We’re going to build an
API that represents a very limited subset of HTML, the language the defines
web pages. An example of HTML is below.
<!DOCTYPE html>
<html>
<head><title>Our Amazing Web Page</title></head>
<body>
<h1>This Is Our Amazing Web Page</h1>
<p>Please be in awe of its <strong>amazingness</strong></p>
</body>
</html>
There are several rules for valid HTML¹. We’re going to focus on the following:
¹The HTML specification allows for very lenient parsing of HTML. For example, if we don't define the head tag it will usually be inferred. However we aren't going to allow that kind of leniency in our API.
1. Within the html tag there can only be a head and a body tag, in that order.
2. Within the head tag there must be exactly one title, and there can be
any other number of allowed tags (of which we’re only going to model
link).
3. Within the body there can be any number of allowed tags (of which we
are only going to model h1 and p).
As the code is fairly repetitive I will just present all the code and then discuss
the important parts. Here’s the implementation.
def title(
text: String
)(using S =:= InHead, T =:= WithoutTitle): Html[InHead, WithTitle] =
Html(head :+ s"<title>$text</title>", this.body)
// Interpreter ------------------------------------------
s"\n<html>\n$h\n$b\n</html>"
}
}
object Html {
  val empty: Html[Empty, WithoutTitle] = Html(Vector.empty, Vector.empty)
}
The key point is that we factor the state into two components. StructureState
represents where in the overall structure we are (inside the head, inside the
body, or inside neither). TitleState represents the state when defining the
elements inside the head, specifically whether we have a title element or
not. We could certainly represent this with one state type variable, but I
find the factored representation both easier to work with and easier for other
developers to understand. We can implement more complex protocols, such as those that can be represented by context‐free or even context‐sensitive grammars, using the same technique.
Html.empty.head
.link("stylesheet", "styles.css")
.title("Our Amazing Webpage")
.body
.h1("Where Amazing Exists")
.p("Right here")
.toString
// res6: String = """
// <html>
// <head>
// <link rel="stylesheet" href="styles.css"/>
// <title>Our Amazing Webpage</title>
// </head>
// <body>
// <h1>Where Amazing Exists</h1>
// <p>Right here</p>
// </body>
// </html>"""
Html.empty.head
.link("stylesheet", "styles.css")
.body
.h1("This Shouldn't Work")
// error:
// Cannot prove that MdocApp2.this.WithoutTitle =:= MdocApp2.this.WithTitle.
These error messages are not great. We’ll address this in Chapter 16.
I don’t particularly like the HTML API we developed above, as the flat method
call structure doesn’t match the nesting in the HTML structure we’re creating.
I would prefer to write the following.
Html.empty
.head(_.title("Our Amazing Webpage"))
.body(_.h1("Where Amazing Happens").p("Right here"))
.toString
We still require the head is specified before the body, but now the nesting of
the method calls matches the nesting of the structure. Notice we’re still using
a Church‐encoded representation.
Can you think of how to implement this? You’ll need to use indexed codata,
and perhaps a bit of inspiration. This is a very open ended question, so don’t
worry if you struggle with it!
Indexed codata is all about equality constraints: proofs that some type parameter is equal to some type. However we can go beyond equality constraints with contextual abstraction. We can use <:< for evidence of a subtyping relationship, and NotGiven for evidence that no given instance exists (with which we can, for example, test that types are not equal). Beyond that, we can view any given instance as evidence.
Let’s return to our example of length, force, and torque to see how this is useful.
In the exercise where we defined torque as force times length, we fixed the
computation to have SI units. The example code is below. This is a reasonable
thing to do, as other units are insane, but there are a lot of insane people out
there.
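A sketch of that code (the names beyond Length and Metres are assumptions):

trait Newtons
trait NewtonMetres

final case class Force[Unit](value: Double)
final case class Torque[Unit](value: Double)

// Multiplication fixed to SI units:
extension (length: Length[Metres])
  def *(force: Force[Newtons]): Torque[NewtonMetres] =
    Torque(length.value * force.value)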
To accommodate other unit types we can create given instances that represent
the results of operations of interest. In this case we want to represent the
result of multiplying a length unit by the force unit. In code we can write the
following.
// Weird units
trait Feet
trait Pounds
trait PoundsFeet
// An instance exists if A * B = C
trait Multiply[A, B, C]
object Multiply {
given Multiply[Metres, Newtons, NewtonMetres] = new Multiply {}
given Multiply[Feet, Pounds, PoundsFeet] = new Multiply {}
}
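A sketch of * rewritten to use Multiply as evidence (the draft also defines a corresponding * on Force):

extension [A](length: Length[A])
  def *[B, C](force: Force[B])(using Multiply[A, B, C]): Torque[C] =
    Torque(length.value * force.value)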
Length[Metres](3) * Force[Newtons](4)
// res11: Torque[NewtonMetres] = Torque(value = 12.0)
Note that’s it hard to think of Multiply as a type class, as it does not provide
any methods. Viewing it as evidence, however, does make sense.
Exercise: Commutativity

In the example above we defined a Multiply type class to represent that metres times newtons gives newton metres. Multiplication is commutative: if A × B = C, then B × A = C. However we have not represented this, and if we try newtons times metres, as in the example below, the code will fail.
Force[Newtons](3) * Length[Metres](4)
// error:
// No given instance of type MdocApp4.this.Multiply[MdocApp4.this.Newtons,
//   MdocApp4.this.Metres, Any] was found for parameter x$2 of method *
//   in class Force
// Force[Newtons](3) * Length[Metres](4)
// ^
13.3 Indexed Data

The key idea of indexed data is to encode type equalities in data. When
we come to inspect the data (usually, via structural recursion) we discover
these equalities, which in turn limit what values we can produce. Notice,
again, the duality with codata. Indexed codata limits methods we can call.
Indexed data limits values we can produce. Also, remember that indexed data
is often known as generalized algebraic data types. We are using the simpler
term indexed data to emphasise the relationship to indexed codata, and also
because it’s much easier to type!
Let’s see a simple, and classic, example: evaluating a small language. Our
language will have basic arithmetic as well as conditionals, which is just enough
to make it interesting. We’ll start with the version without indexed data.
enum Expr {
  // The cases below Literal are a reconstructed sketch
  case Literal(value: Double)
  case Add(left: Expr, right: Expr)
  case Lt(left: Expr, right: Expr)
  case If(condition: Expr, thenExpr: Expr, elseExpr: Expr)
}
13.4 Conclusions
The earliest reference I’ve found to phantom types is [Leijen and Meijer 2000].
We’ve seen the duality between data and codata in many places, starting
with Chapter 3. This chapter will begin by applying that duality to build an
interpreter using codata, which contrasts with the data approach we saw in
Section 5.2. This will illustrate the technique and give us a concrete example
to discuss its shortcoming. In particular we’ll see that extensibility is limited, a
problem we first encountered in Section 3.5.
In this section we’ll explore codata interpreters, using a DSL for terminal
interaction as a case study. The terminal is familiar to most programmers, and
terminal applications are common for developer focused tools. Most terminal
features are controlled by writing so‐called escape codes to the terminal.
However, applications benefit from higher‐level abstractions, motivating
textual user interface (TUI) libraries that present a more ergonomic interface¹.
Our library will showcase codata interpreters, monads, and the central role of
designing for composition and reasoning.
The modern terminal is an accretion of features that started with the VT‐100 in 1978 and continues to this day. Most terminal features are accessed by reading and writing ANSI escape codes, which are sequences of characters starting with the escape character. We will work only with
escape codes that change the text style. This allows us to produce interesting
output, and raises all the design issues we want to address, but keeps the
system simple. The ideas here are extended to a more complete system in the
Terminus library.
The code below is written so that, with a single change, it can be pasted into a file and run with any recent version of Scala with just scala <filename>. The required change is to add the @main annotation before the method go. That is, change

def go(): Unit =

to

@main def go(): Unit =

(This is due to a limitation of the software that compiles the code in the book.)
¹If you’re interested in TUI libraries you might like to look at the brilliantly named ratatui for
Rust, brick for Haskell, or Textual for Python.
The examples should work with any terminal from the last 40 odd years. If
you’re on Windows you can use Windows Terminal, WSL, or another terminal
that runs on Windows such as WezTerm.
We will start by writing color codes straight to the terminal. This will introduce
us to controlling the terminal, and show the problems of using ANSI escape
codes directly. Here’s our starting point:
Try running the above code (e.g. add the @main annotation to go, save it to a file
ColorCodes.scala and run scala ColorCodes.scala.) You should see text in the
normal style for your terminal, followed by text colored red, and then some
more text in the normal style. The change in color is controlled by writing
escape codes. These are strings starting with ESC (which is the character '\
u001b') followed by '['. This is the value of csiString (where CSI stands for
Control Sequence Introducer). The CSI is followed by a string indicating the
text style to use, and ended with an "m". The string "\u001b[31m" tells the terminal
to set the text foreground color to red, and the string "\u001b[0m" tells the
terminal to reset all text styling to the default.
Escape codes are simple for the terminal to process but lack useful structure
for the programmer generating them. The code above shows one potential
problem: we must remember to reset the color when we finish a run of styled
text. This problem is no different to that of remembering to free manually
allocated memory, and the long history of memory safety problems in C
programs show us that we cannot expect to do this reliably. Luckily, we’re
unlikely to crash our program if we forget an escape code!
To solve this problem we might decide to write functions like printRed below,
which prints a colored string and resets the styling afterwards.
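A sketch:

def printRed(output: String): Unit = {
  print(redCode)
  print(output)
  print(resetCode)
}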
Changing color is the not the only way that we can style terminal output. We
can also, for example, turn text bold. Continuing the above design gives us
the following.
def printBold(output: String): Unit = {
  print(boldCode)
  print(output)
  print(resetCode)
}
This works, but what if we want text that is both red and bold? We cannot
express this with our current design, without creating methods for every
possible combination of styles. Concretely this means methods like
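These are hypothetical names, shown only to illustrate the blow‑up:

def printRedBold(output: String): Unit = ???
def printRedUnderline(output: String): Unit = ???
// ...and so on, one method per combination of styles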
This is not feasible to implement for all possible combinations of styles. The
root problem is that our design is not compositional: there is no way to build
a combination of styles from smaller pieces.
To solve the problem above we need printRed and printBold to accept not
a String to print but a program to run. We don’t need to know what these
programs do; we just need a way to run them. Then the combinators printRed
, printBold, and so on, can also return programs. These programs will set
the style appropriately before running their program parameter, and reset it
after the parameter program has finished running. By accepting and returning
programs the combinators have the property of closure, meaning that the type of the input (a program) is the same as the type of the output. Closure in turn makes composition possible.
// A reconstructed sketch of the omitted listing. The types follow the
// discussion below: print is a constructor, printRed and printBold are
// combinators, and run is the interpreter.
type Program[A] = () => A

def print(output: String): Program[Unit] =
  () => Console.print(output)

def printRed[A](program: Program[A]): Program[A] =
  () => {
    Console.print(redCode)
    val result = program()
    Console.print(resetCode)
    result
  }

def printBold[A](program: Program[A]): Program[A] =
  () => {
    Console.print(boldCode)
    val result = program()
    Console.print(resetCode)
    result
  }

def run[A](program: Program[A]): A = program()
Notice that we have the usual structure for an algebra, which we first met in Section 5.2.1:

• constructors, such as print, which create programs;
• combinators, such as printRed and printBold, which build programs from programs; and
• interpreters, here run, which run programs.
This code works, for the example we have chosen, but there are two issues:
composition and ergonomics. That we have a problem with composition is
perhaps surprising, as that’s the problem we set out to solve. We have made
the system compositional in some aspects, but there are still ways in which it
does not work correctly. For example, take the following code:
run(printBold(() => {
run(print("This should be bold, "))
run(printBold(print("as should this ")))
run(print("and this.\n"))
}))
We might expect all three prints to be bold, but the inner call to printBold resets the bold styling when it finishes, which means the surrounding call to printBold has no effect on the statements that follow it.
The issue with ergonomics is that this code is tedious and error‐prone to write.
We have to pepper calls to run in just the right places, and even in these small
examples I found myself making mistakes. This is actually another failing of
composition, because we don’t have methods to combine together programs.
For example, we don’t have methods to say that the program above is the
sequential composition of three sub‐programs.
We can solve the first problem by keeping track of the state of the terminal.
If printBold is called within a state that is already printing bold it should do
nothing, otherwise it should update the state to indicate bold styling has been
turned on. This means the type of programs changes from () => A to Terminal
=> (Terminal, A), where Terminal holds the current state of the terminal.
To solve the second problem we’re looking for a way to sequentially compose
programs. Remember programs have type Terminal => (Terminal, A) and
pass around the state in Terminal. When you hear the phrase “sequentially
compose”, or see that type, your monad sense might start tingling. You are
correct: this is an instance of the state monad, which we first met in Section
9.9.
import cats.data.State
type Program[A] = State[Terminal, A]
assuming some suitable definition of Terminal. Let’s accept this definition for
now, and focus on defining Terminal.
Terminal has two pieces of state: the current bold setting and the current
color. (The real terminal has much more state, but these are representative
and modelling additional state does not introduce any new concepts.) The
bold setting could simply be a toggle that is either on or off, but when we come
to the implementation it will be easier to work with a counter that records the
depth of the nesting. The current color must be a stack. We can nest color
changes, and the color should change back to the surrounding color when a
nested level exits. Concretely, we should be able to write code like
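A sketch of Terminal, with the convenience methods used in the listing below:

final case class Terminal(bold: Int, colors: List[String]) {
  def pushColor(code: String): Terminal = copy(colors = code :: colors)
  def popColor: Terminal = copy(colors = colors.tail)
  def peekColor: Option[String] = colors.headOption
}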
where we use List to represent the stack of color codes. (We could also use
a mutable stack, as working with the state monad ensures the state will be
threaded through our program.) I’ve also defined some convenience methods
to simplify working with the state.
With this in place we can write the rest of the code, which is shown below.
Compared to the previous code I’ve shortened a few method names and
abstracted the escape codes. Remember this code can be directly executed
by scala. Just copy it into a file (e.g. Terminal.scala), add the @main annotation
to go, and run scala Terminal.scala.
import cats.data.State
import cats.syntax.all.*
object AnsiCodes {
val csiString: String = "\u001b["
}
a <- program
_ <- State.modify[Terminal] { terminal =>
val newTerminal = terminal.popColor
newTerminal.peekColor match {
case None => Console.print(AnsiCodes.reset)
case Some(c) => Console.print(c)
}
newTerminal
}
} yield a
Program.run(program)
}
Having defined the structure of Terminal, the majority of the remaining code
manipulates the Terminal state. Most of the methods on Program have a
common structure that specifies a state change before and after the main
program runs.
Keen readers will recall that data makes it easy to add new interpreters but
hard to add new operations, while codata makes it easy to add new operations
but hard to add new interpreters. We see that in action here. For example, it’s
trivial to add a new color combinator by defining a method like the below.
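For example, something like the following, assuming a general color combinator and an AnsiCodes.magenta escape code:

def magenta[A](program: Program[A]): Program[A] =
  color(AnsiCodes.magenta)(program)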
Another advantage of codata is that we can mix in arbitrary other Scala code.
For example, we can use map like shown below.
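For example (Program.print is an assumed constructor name; because Program[A] is State[Terminal, A], map comes for free):

Program.print("Hello").map(_ => 42)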
Using the native representation of programs (i.e. functions) gives us the entire
Scala language for free. In a data representation we have to reify every kind
of expression we wish to support. There is a downside to this as well: we get
Scala semantics whether we like them or not. A codata representation would
not be appropriate if we wanted to make an exotic language that worked in a
different way.
We could factor the interpreter in different ways, and it would still be a codata
interpreter. For example, we could put a method to write to the terminal on
the Terminal type. This would give us a bit more flexibility as changing the
implementation of Terminal could, say, write to a network socket or a terminal
embedded in a browser. We still have the limitation that we cannot create
truly different interpretations, such as serializing programs to disk, with the
codata approach. We’ll address this limitation in the next section where we
look at tagless final.
We’ll now explore tagless final, an extension to the basic codata interpreter.
In the terminal DSL case study we used an ad‐hoc process to produce the
DSL, fixing problems as we uncovered them. In this section we will be more
systematic, illustrating how we can apply strategies to derive code. This will
in turn make it clearer how we can derive tagless final for the basic codata
interpreter.
We’ll start by being explicit about the role of the different types in the codata
interpreter. Following Section 5.2.1, remember there are three different kinds
of methods in an algebra:
There is a single constructor, print, with type String => Program[Unit]. All
of the methods that change the output style, such as bold, red, and blue, are
combinators with the type Program[A] => Program[A]. Finally, there is a single
interpreter, function application, with type Program[A] => A.
We’ll start with a data interpreter, convert it to a codata interpreter, and then
apply tagless final. Here’s our program type, defined using an algebraic data
type. We don’t need to explicitly define constructors as they come as part of
the ADT.
enum Expr {
  case Literal(value: Double)
  case Add(l: Expr, r: Expr)
  case Sub(l: Expr, r: Expr)
  case Mul(l: Expr, r: Expr)
  case Div(l: Expr, r: Expr)
}
We will now define two interpreters, one that evaluates Expr to a Double
and one that prints them to String. They are implemented using structural
recursion.
object EvalInterpreter {
import Expr.*
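  // The structural recursion was omitted here; a reconstructed sketch:
  def eval(expr: Expr): Double =
    expr match {
      case Literal(v) => v
      case Add(l, r) => eval(l) + eval(r)
      case Sub(l, r) => eval(l) - eval(r)
      case Mul(l, r) => eval(l) * eval(r)
      case Div(l, r) => eval(l) / eval(r)
    }
}

object PrintInterpreter {
  import Expr.*

  def print(expr: Expr): String =
    expr match {
      case Literal(v) => v.toString
      case Add(l, r) => s"(${print(l)} + ${print(r)})"
      case Sub(l, r) => s"(${print(l)} - ${print(r)})"
      case Mul(l, r) => s"(${print(l)} * ${print(r)})"
      case Div(l, r) => s"(${print(l)} / ${print(r)})"
    }
}

val onePlusTwo = Expr.Add(Expr.Literal(1.0), Expr.Literal(2.0))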
EvalInterpreter.eval(onePlusTwo)
// res0: Double = 3.0
PrintInterpreter.print(onePlusTwo)
// res1: String = "(1.0 + 2.0)"
We have the usual trade‐off for data: we can easily add more interpreters, but it's hard to add new operations.
Let’s now convert this to codata. The interpreters become methods on the
Expr type.
trait Expr {
def eval: Double
def print: String
}
trait Expr {
def eval: Double
def print: String
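}

// The constructors (reconstructed sketch):
object Expr {
  def literal(value: Double): Expr =
    new Expr {
      def eval: Double = value
      def print: String = value.toString
    }

  def add(l: Expr, r: Expr): Expr =
    new Expr {
      def eval: Double = l.eval + r.eval
      def print: String = s"(${l.print} + ${r.print})"
    }

  // ...sub, mul, and div follow the same pattern
}

val onePlusTwo = Expr.add(Expr.literal(1.0), Expr.literal(2.0))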
onePlusTwo.eval
// res4: Double = 3.0
onePlusTwo.print
// res5: String = "(1.0 + 2.0)"
For the example we have just seen we could define a program algebra as
follows:
trait Arithmetic[Expr] {
  def literal(value: Double): Expr
  def +(l: Expr, r: Expr): Expr
  def -(l: Expr, r: Expr): Expr
  def *(l: Expr, r: Expr): Expr
  def /(l: Expr, r: Expr): Expr
}
Now we can create a program. Here’s the same example we saw above, but
written in tagless final style.
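A sketch:

def onePlusTwo[Expr](arithmetic: Arithmetic[Expr]): Expr =
  arithmetic.+(arithmetic.literal(1.0), arithmetic.literal(2.0))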
Notice the distinction between a program and the program type: a program
creates a value of the program type, but a program is not itself of the program
type. In tagless final a program is a function from program algebras to the
program type.
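For example, interpreting to Double (DoubleArithmetic is a reconstructed sketch):

object DoubleArithmetic extends Arithmetic[Double] {
  def literal(value: Double): Double = value
  def +(l: Double, r: Double): Double = l + r
  def -(l: Double, r: Double): Double = l - r
  def *(l: Double, r: Double): Double = l * r
  def /(l: Double, r: Double): Double = l / r
}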
onePlusTwo(DoubleArithmetic)
// res7: Double = 3.0
Tagless final gives us both forms of extensibility. We can add a new interpreter.
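For example, a pretty‑printing interpreter (a reconstructed sketch matching the output below):

object PrintArithmetic extends Arithmetic[String] {
  def literal(value: Double): String = value.toString
  def +(l: String, r: String): String = s"($l + $r)"
  def -(l: String, r: String): String = s"($l - $r)"
  def *(l: String, r: String): String = s"($l * $r)"
  def /(l: String, r: String): String = s"($l / $r)"
}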
onePlusTwo(PrintArithmetic)
// res8: String = "(1.0 + 2.0)"
We can also extend the language with new operations, by defining a new algebra:

trait Trigonometry[Expr] {
def sin(expr: Expr): Expr
}
def sinOnePlusTwo[Expr](
arithmetic: Arithmetic[Expr],
trigonometry: Trigonometry[Expr]
): Expr =
trigonometry.sin(onePlusTwo(arithmetic))
Notice that we are using composition here; the program sinOnePlusTwo reuses
onePlusTwo.
In this example the program type is the same as the type we interpret to. We
can use Double as the program type when we want to interpret to Double, and
likewise with String. This is usually not the case. It’s just a coincidence of using
arithmetic that we don’t need any additional information to calculate the final
result, and hence the program type and interpreter result type are the same.
There is quite a high notational overhead of tagless final, compared to the data
and codata interpreters. We’ll address this later, and end up with an encoding
of tagless final in Scala that looks like ordinary code. First, however, we’ll
introduce a more compelling example: cross‐platform user interfaces.
14.3 Algebraic User Interfaces

Cross‐platform user interface libraries allow us to describe an interface once and render it on many platforms, such as web, desktop, and mobile. We will build such a library here, but our ambitions are a bit reduced: we will create a terminal backend but leave other backends up to your inspiration and perspiration.
Broadly speaking, there are two kinds of user interfaces. When operating,
say, a digital musical instrument, we require a continuous stream of values
from the user interface. In contrast, when working with a form we only
require the values once, when the form is submitted. Modelling a continuous
stream of values is certainly doable (see functional reactive programming) but
it adds inessential complexity. Therefore we will stick with the simpler kind of
interface where the user submits values once.
Constructors will define the atomic units of user interface for our library. The
granularity we use here trades off expressivity and convenience. At the very
lowest level we could work with vertex buffers and the like, which would
make our library a general purpose graphics library. This gives us the ultimate
flexibility but is far too low level for this case study. At a higher level we
might think of atomic units as user interface elements like labels, buttons, text
inputs, and so on. This is the level at which HTML operates. At this level
we still usually require multiple elements to construct a complete control. For
example, in HTML what is conceptually a single form field will often consist
of separate DOM elements for the label, the input control, and the control
to show validation errors, plus some Javascript to add interactivity. We will
go even higher level. Our atomic elements will specify the kind of user input we want, such as a choice between a number of elements, and leave it up
to the interpreter to decide how to render this using the platform’s available
controls. For example, we could render a one‐of‐many control using either
radio buttons or a dropdown, or choose between the two depending on the
number of choices. We’ll also add labels, and optional validation rules, to each
element. Let’s model two such elements, to illustrate the idea.
trait Controls[Ui[_]] {
  def textInput(
    label: String,
    placeholder: String,
    validation: Validation[String] = succeed
  ): Ui[String]

  def choice[A](label: String, options: Seq[(String, A)]): Ui[A]
}
• textInput, which creates a text input where the user can enter any text that passes the validation rule; and
• choice, which gives the user a choice of one of the given items.
Notice how our modelling decisions restrict our expressivity. For example,
textInput has a placeholder, which is displayed before the user enters
input, but does not have a default value. By reducing expressivity we gain
convenience. If the user’s requirements fit our model it is very easy to create
controls. Also notice that we don’t have any way to control the appearance of
controls. This is deliberate; we are pushing that concern into the interpreters.
These controls generate an element of the program type Ui. Each particular
interpreter, corresponding to a backend, will choose a concrete type for Ui
corresponding to the needs of the user interface toolkit it is working with.
These two constructors are enough to illustrate the idea, so we will move on to
combinators. In the context of user interfaces the most common combinators
will specify the layout of elements. As with the constructors there are a
number of possible designs: we could allow a lot of precision in layout, as
CSS does for HTML, or we could provide a few pre‐defined layouts, or we
could even push layout into the interpreter. In keeping with our design for
the constructors, and with the need to keep things simple, we will go with
a very high‐level design. Our single combinator, and, only specifies that two
elements should occur together. It leaves it up to the interpreter how this
should be rendered on the screen.
trait Layout[Ui[_]] {
def and[A, B](first: Ui[A], second: Ui[B]): Ui[(A, B)]
}
You might have noticed that and is another name for product from Semigroupal, which we encountered in Section 11.1. It has exactly the same signature,
apart from the name, and it represents the same concept as applied to user
interfaces.
At this point we have defined two program algebras, Controls and Layout,
and shown examples of both constructors and combinators. The next step
is to create an interpreter. Here we are going to create an extremely simple
interpreter to illustrate the idea and to allow us to show how to write programs
using our algebras. More fully featured interpreters are certainly possible, but
they don’t introduce any new concepts and take considerably more code.
Our interpreter will use the Console IO features of the standard library to
interact with the user.
import cats.syntax.all.*
import scala.io.StdIn
import scala.util.Try
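The definition of the program type for the terminal interpreter was elided in this excerpt. A minimal definition consistent with the implementations that follow is a thunk:

type Program[A] = () => A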
def textInput(
  label: String,
  placeholder: String,
  validation: Validation[String] = succeed
): Program[String] =
  () => {
    def loop(): String = {
      println(s"$label (e.g. $placeholder):")
      val input = StdIn.readLine
      validation(input).fold(
        msg => {
          println(msg)
          loop()
        },
        value => value
      )
    }
    loop()
  }

def choice[A](
  label: String,
  options: Seq[(String, A)]
): Program[A] =
  () => {
    def loop(): A = {
      // The start of this method was elided in the source; printing the
      // label and a numbered menu of the options is a reconstruction.
      println(label)
      options.zipWithIndex.foreach { case ((name, _), idx) =>
        println(s"$idx: $name")
      }
      Try(StdIn.readInt).fold(
        _ => {
          println("Please enter a valid number.")
          loop()
        },
        idx => {
          if idx >= 0 && idx < options.size then options(idx)(1)
          else {
            println("Please enter a valid number.")
            loop()
          }
        }
      )
    }
    loop()
  }
def quiz[Ui[_]](
controls: Controls[Ui],
layout: Layout[Ui]
): Ui[(String, Int)] =
layout.and(
controls.textInput("What is your name?", "John Doe"),
controls.choice(
"Tagless final is the greatest thing ever",
Seq(
"Strongly disagree" -> 1,
"Disagree" -> 2,
"Neutral" -> 3,
"Agree" -> 4,
"Strongly agree" -> 5
)
)
)
We have a basic example working, but it is not very nice to work with. The
way in which we write code in tagless final style is very convoluted compared
to normal code. In the next section we’ll see a different encoding of tagless
final that gives the user a much better experience.
def quiz[Ui[_]](
controls: Controls[Ui],
layout: Layout[Ui]
): Ui[(String, Int)] =
layout.and(name(controls), rating(controls))
This style of code quickly becomes tedious to write. The method signatures
are quite involved, and passing the program algebras from method to method
is annoying busy work.
object Controls {
  def apply[Ui[_]](using controls: Controls[Ui]): Controls[Ui] =
    controls
}

object Layout {
  def apply[Ui[_]](using layout: Layout[Ui]): Layout[Ui] =
    layout
}
This is the encoding of tagless final that is common in the Scala community, but there is still a lot of notational overhead for the developer who has to write this code. We can use Scala language features to reduce the overhead of writing code in a tagless final style to the point where it is as simple as standard code.
This is quite involved, but each step is relatively simple. Let's see how it works. Our first step is to create a base type for algebras. We could use a trait with a type parameter, like

trait Algebra[Ui[_]]

but we will instead make Ui a type member, which makes it easier to combine algebras using intersection types as we do below.

trait Algebra {
  type Ui[_]
}
At this point we've made sufficient changes that our example program is meaningfully changed from our starting point.
Our next step is to define a type for programs. Programs are conceptually
functions from an algebra to a program type, so we can define such a type.
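The definition of Program was elided in this excerpt. A minimal sketch consistent with the code below, assuming the algebras have been redefined to extend Algebra and use its Ui type member, is:

trait Program[Alg <: Algebra, A] {
  def apply(alg: Alg): alg.Ui[A]
}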
val quiz =
new Program[Controls & Layout, (String, Int)] {
def apply(alg: Controls & Layout) =
alg.and(
alg.textInput("What is your name?", "John Doe"),
alg.choice(
"Tagless final is the greatest thing ever",
Seq(
"Strongly disagree" -> 1,
"Disagree" -> 2,
"Neutral" -> 3,
"Agree" -> 4,
"Strongly agree" -> 5
)
)
)
}
Programs are now values instead of methods. Notice that the first type parameter of Program declares all the program algebras the program requires. It's still quite
involved to write this code, though we can simplify it a bit by using the single
abstract method technique, which means a trait with a single abstract method
(like Program) can be implemented with a function.
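For example, the quiz program above could then be written as a function literal (a sketch, using the Program trait sketched earlier):

val quiz: Program[Controls & Layout, (String, Int)] =
  alg =>
    alg.and(
      alg.textInput("What is your name?", "John Doe"),
      alg.choice(
        "Tagless final is the greatest thing ever",
        Seq(
          "Strongly disagree" -> 1,
          "Disagree" -> 2,
          "Neutral" -> 3,
          "Agree" -> 4,
          "Strongly agree" -> 5
        )
      )
    )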
Programs‐as‐values is the key that unlocks the next two improvements. The
first is to define constructors as methods on companion objects.
object Controls {
def textInput(
label: String,
placeholder: String,
validation: Validation[String] = succeed
): Program[Controls, String] =
alg => alg.textInput(label, placeholder, validation)
def choice[A](
label: String,
options: Seq[(String, A)]
): Program[Controls, A] =
alg => alg.choice(label, options)
}
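The second improvement is a combinator, and, defined as an extension method on Program. Its listing was elided in this excerpt; a sketch consistent with the description that follows might be:

extension [Alg <: Algebra, A](first: Program[Alg, A]) {
  def and[Alg2 <: Algebra, B](
    second: Program[Alg2, B]
  ): Program[Alg & Alg2 & Layout, (A, B)] =
    // The result requires everything both programs require, plus Layout.
    alg => alg.and(first(alg), second(alg))
}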
Pay particular attention to how the types are defined for this extension
method. We define the extension on a Program requiring algebras Alg. The
parameter to the and method is a Program requiring algebras Alg2. The result
requires algebras Alg & Alg2 & Layout, which is the union of the algebras
required by the two programs and the Layout algebra. In this way the
combinators build up the algebras required for the program.
val quiz =
Controls
.textInput("What is your name?", "John Doe")
.and(
Controls.choice(
"Tagless final is the greatest thing ever",
Seq(
"Strongly disagree" -> 1,
"Disagree" -> 2,
"Neutral" -> 3,
"Agree" -> 4,
"Strongly agree" -> 5
)
)
)
which looks just like normal code. The type of quiz shows that type inference
has correctly inferred all the needed program algebras.
quiz
// res0: Program[Controls & Layout, Tuple2[String, Int]] = repl.MdocSession$MdocApp$$Lambda$21327/0x00000008052e6840@2b263a89
This encoding requires more work from the library developer. However this is a one-off cost, and the result is that library users write much simpler code. For most applications of tagless final I think this is an appropriate trade-off.
14.5 Conclusions
The error message does tell us the problem, but it exposes a lot of the internal machinery that the user is not normally exposed to, and hence they'll probably find it confusing.
From the library author’s point of view, it is a lot more work to create tagless
final code. It can also be difficult to onboard new developers to this code, as
the techniques are not familiar to most.
As always, the applicability of tagless final comes down to the context in which
it is used. In cases where the extensibility is truly justified it is a powerful tool.
In other cases it just introduces unwarranted complexity.
The term “expression problem” was first introduced in an email by Phil Wadler
[Wadler 1998], but there are much earlier sources that discuss the same issue.
One example is [Cook 1990]. Tagless final was first introduced in [Carette et
al. 2009] and expanded on in [Kiselyov 2012]. It is just one of many solutions
that have been proposed to the expression problem. I’m no expert on the
wider field of solutions to the expression problem, but of the papers I've read the ones I'd like to highlight are object algebras [Oliveira and Cook 2012] and
data types à la carte [Swierstra 2008]. Object algebras are, in all essentials,
the same as tagless final. They were developed in object‐oriented languages
rather than functional programming languages, making an interesting case of
convergent evolution in two distinct, but connected, fields of research. The
object algebras paper is also a good read for a more formal, if brief, discussion
of the theory behind the concepts we’ve been dealing with. Data types à la
carte is a data, rather than codata, approach to the expression problem, and so
makes an interesting contrast to tagless final. I find tagless final much simpler,
so we have not explored data types à la carte in this book. Another noteworthy paper is [Gibbons and Wu 2014], which discusses the duality between data and codata and its implications for embedded domain specific languages.
Chapter 15
Optimizing Interpreters and Compilers
Our starting point is the basic reified interpreter we developed in the previous
chapter. This is the simplest code and therefore the easiest to work with.
enum Regexp {
def ++(that: Regexp): Regexp =
Append(this, that)
We want to explicitly represent the regular expression that matches the empty
string, as it plays an important part in the algorithms that follow. This is simple
to do: we just reify it and adjust the constructors as necessary. I’ve called this
case “epsilon”, which matches the terminology used in the literature.
enum Regexp {
// ...
case Epsilon
}
object Regexp {
  // ...
  val epsilon: Regexp = Epsilon
}
I think this code is reasonably straightforward, except perhaps for the cases
for OrElse and Append. The case for OrElse is trying to match both regular
expressions simultaneously, which gets around the problem in our earlier
implementation. The definition of nullable ensures we match if either side
matches. The case for Append is attempting to match the left side if it is still
looking for characters; otherwise it is attempting to match the right side.
regexp.matches("Scala")
// res1: Boolean = true
regexp.matches("Scalalalala")
// res2: Boolean = true
regexp.matches("Sca")
// res3: Boolean = false
regexp.matches("Scalal")
// res4: Boolean = false
Regexp("cat").orElse(Regexp("cats")).matches("cats")
// res5: Boolean = true
This is a nice result for a very simple algorithm. However there is a problem.
You might notice that regular expression matching can become very slow. In
fact we can run out of heap space trying a simple match like
Regexp("cats").repeat.matches("catscatscatscats")
// java.lang.OutOfMemoryError: Java heap space
This happens because the derivative of the regular expression can grow very
large. Look at this example, after only a few derivatives.
Regexp("cats").repeat.derivative('c').derivative('a').derivative('t')
// res6: Regexp = OrElse(OrElse(Append(Apply(s),Repeat(Apply(cats))),
Append(Empty,Append(Empty,Repeat(Apply(cats))))),OrElse(Append(
Empty,Append(Empty,Repeat(Apply(cats)))),Append(Empty,OrElse(
Append(Empty,Repeat(Apply(cats))),Append(Empty,Append(Empty,
Repeat(Apply(cats))))))))
The root cause is that the derivative rules for Append, OrElse, and Repeat can
produce a regular expression that is larger than the input. However this output
often contains redundant information. In the example above there are multiple occurrences of Append(Empty, ...), which is equivalent to just Empty. The fix is to simplify such expressions as we construct them, which is exactly what the constructors shown below do.
With this small change in place, our regular expressions stay at a reasonable size for any input.
Regexp("cats").repeat.derivative('c').derivative('a').derivative('t')
// res8: Regexp = Append(Apply(s),Repeat(Apply(cats)))
enum Regexp {
def ++(that: Regexp): Regexp = {
(this, that) match {
case (Epsilon, re2) => re2
case (re1, Epsilon) => re1
case (Empty, _) => Empty
case (_, Empty) => Empty
case _ => Append(this, that)
}
}
Notice that our implementation is tail recursive.
In this section we’ve also seen the power of rewrites. Regular expression
matching using derivatives works solely by rewriting the regular expression.
We also used rewriting to simplify the regular expressions, avoiding the
explosion in size that derivatives can cause. Viewed abstractly, the type of these rewrites is Program => Program, so we might think they are combinators.
However the implementation uses structural recursion and they serve the role
of interpreters. Rewrites are the one place where the types alone can lead us
astray.
I hope you find regular expression derivatives interesting and a bit surprising. I
certainly did when I first read about them. There is a deeper point here, which
runs throughout the book: most problems have already been solved and we
can save a lot of time if we can just find those solutions. I elevate this idea to the status of a strategy, which I call read the literature for reasons that will
soon be clear. Most developers read the occasional blog post and might attend
a conference from time to time. Many fewer, I think, read academic papers.
This is unfortunate. Part of the fault is with the academics: they write in a style
that is hard to read without some practice. However I think many developers
think the academic literature is irrelevant. One of the goals of this book is to
show the relevance of academic work, which is why each chapter conclusion
sketches the development of its main ideas with links to relevant papers.
We’ll start with the CPSed regular expression interpreter (not using
derivatives), shown below.
enum Regexp {
def ++(that: Regexp): Regexp =
Append(this, that)
loop(source, idx, k)
To reify the continuations we can apply the same recipe as before: we create
a case for each place in which we construct a continuation. In our interpreter
loop this is for Append, OrElse, and Repeat. We also construct a continuation
using the identity function when we first call loop, which represents the
continuation to call when the loop has finished. This gives us four cases.
enum Continuation {
case AppendK
case OrElseK
case RepeatK
case DoneK
}
What data does each case need to hold? Let's look at the structure of the cases within the CPS interpreter. The case for Append is typical.
The continuation k refers to the Regexp right, the method loop, and the
continuation cont. Our reification should reflect this by holding the same data.
If we consider all the cases we end up with the following definition. Notice
that I implemented an apply method so we can still call these continuations
like a function.
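The definition itself was elided in this excerpt. A sketch of its shape, with assumed field names (the apply method mentioned above is omitted), might be:

enum Continuation {
  // Each case holds the data its closure captured in the CPS interpreter,
  // plus the next continuation, forming a linked list: the reified stack.
  case AppendK(right: Regexp, next: Continuation)
  case OrElseK(second: Regexp, index: Int, next: Continuation)
  case RepeatK(regexp: Regexp, index: Int, next: Continuation)
  case DoneK
}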
Now we can rewrite the interpreter loop using the Continuation type.
The point of this construction is that we’ve reified the stack: it’s now explicitly
represented as the next field in each Continuation. The stack is a last‐in
first‐out (LIFO) data structure: the last element we add to the stack is the
first element we use. (This is exactly the same as efficient use of a List.)
We construct continuations by adding elements to the front of the existing
continuation, which is exactly how we construct lists or stacks. We use
continuations from front‐to‐back; in other words in last‐in first‐out (LIFO)
order. This is the correct access pattern to use a list efficiently, and also the
access pattern that defines a stack. Reifying the continuations as data has
reified the stack. In the next section we’ll use this fact to build a compiler that
targets a stack machine.
We've reified continuations and seen they contain a stack structure: each continuation contains a reference to the next continuation, and continuations are constructed in a last-in first-out order. We'll now, once again, reify this
structure. This time we’ll create an explicit stack, giving rise to a stack‐based
virtual machine to run our code. We’ll also introduce a compiler, transforming
our code into a sequence of operations that run on this virtual machine.
We’ll then look at optimizing our virtual machine. As this code involves
benchmarking, there is an accompanying repository that contains benchmarks
you can run on your own computer.
Stack machines are also very common virtual machines. The Java Virtual
Machine is a stack machine, as are the .Net and WASM virtual machines. They
are easy to implement, and to write compilers for. We’ve already seen how
easy it is to implement an interpreter so why should we care about stack
machines, or virtual machines in general? The usual answer is performance.
Implementing a virtual machine opens up opportunities for optimizations that
are difficult to implement in interpreters. Virtual machines also give us a lot of
flexibility. It’s simple to trace or otherwise inspect the execution of a virtual
machine, which makes debugging easier. They are easy to port to different
platforms and languages. Virtual machines are often very compact, as is the
code they run. This makes them suitable for embedded devices. Our focus will
be on performance. Although we won’t go down the rabbit‐hole of compiler
and virtual machine optimizations, which would easily take up an entire book,
we’ll at least tip‐toe to the edge and peek down.
15.3.2 Compilation
Compilation, in this setting, means translating the reified constructors and combinators into the instruction set for our virtual machine. The virtual machine itself is an interpreter for its instruction set. Computation always bottoms out in interpretation: a hardware CPU is nothing but an interpreter for its machine code.
Notice there are two notions of program here, and two corresponding
instruction sets: there is the program the structurally recursive interpreter
executes, with an instruction set consisting of reified constructors and
combinators, and there is the program we compile this into for the stack
machine using the stack machine’s instruction set. We will call these the
interpreter program and instruction set, and stack machine program and
instruction set respectively.
We'll use the arithmetic interpreter as our example. Recall its definition:

enum Expression {
  def +(that: Expression): Expression = Addition(this, that)
  def *(that: Expression): Expression = Multiplication(this, that)
  def -(that: Expression): Expression = Subtraction(this, that)
  def /(that: Expression): Expression = Division(this, that)

  case Literal(value: Double)
  case Addition(left: Expression, right: Expression)
  case Subtraction(left: Expression, right: Expression)
  case Multiplication(left: Expression, right: Expression)
  case Division(left: Expression, right: Expression)
}
The only constructor is Literal. We reify it as an instruction in the stack machine's instruction set, and shorten the name from Literal to Lit to make it clearer which instruction set we are using.
enum Op {
case Lit(value: Double)
}
The other instructions are all combinators. They also all only contain values of
type Expression, and hence in the stack machine the corresponding values will
be found on the stack. This gives us the complete stack machine instruction
set.
enum Op {
case Lit(value: Double)
case Add
case Sub
case Mul
case Div
}
This completes the first step of the process. The second step is to implement the compiler. The secret to compiling for a stack machine is to transform instructions into reverse polish notation (RPN). In RPN operations follow their operands. So, instead of writing 1 + 2 we write 1 2 +. This is exactly the order in which a stack machine works. To evaluate 1 + 2 we first push 1 onto the stack, then push 2, and finally pop both these values, perform the addition, and push the result back to the stack. RPN also does not need nesting. To represent 1 + (2 + 3) in RPN we simply write 1 2 3 + +. Doing away with brackets means that stack machine programs can be represented as a linear sequence of instructions, not a tree. Concretely, we can use List[Op].
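The compiler itself was elided in this excerpt. It is a structural recursion over Expression that emits operands before operators; a minimal sketch:

def compile(expr: Expression): List[Op] = {
  import Expression.*
  expr match {
    case Literal(value)              => List(Op.Lit(value))
    case Addition(left, right)       => compile(left) ++ compile(right) ++ List(Op.Add)
    case Subtraction(left, right)    => compile(left) ++ compile(right) ++ List(Op.Sub)
    case Multiplication(left, right) => compile(left) ++ compile(right) ++ List(Op.Mul)
    case Division(left, right)       => compile(left) ++ compile(right) ++ List(Op.Div)
  }
}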
We now are left to implement the stack machine. We’ll start by sketching out
the interface for the stack machine.
In this design the program is fixed for a given StackMachine instance, but we can run the program multiple times.

final case class StackMachine(program: List[Op]) {
  def eval: Double =
    ???
}
Now we can define the main stack machine loop. It takes as parameters the
program and the stack, and is a structural recursion over the program.
loop(List.empty, program)
}
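The body of the loop was elided in this excerpt; a minimal sketch, using a List[Double] as the stack (names are assumed):

def eval: Double = {
  def loop(stack: List[Double], program: List[Op]): Double =
    program match {
      case Nil                   => stack.head
      case Op.Lit(value) :: rest => loop(value :: stack, rest)
      case Op.Add :: rest        => binary(stack, rest, _ + _)
      case Op.Sub :: rest        => binary(stack, rest, _ - _)
      case Op.Mul :: rest        => binary(stack, rest, _ * _)
      case Op.Div :: rest        => binary(stack, rest, _ / _)
    }

  // Pop two operands, apply f with the left operand first, push the result.
  def binary(stack: List[Double], rest: List[Op], f: (Double, Double) => Double): Double =
    stack match {
      case right :: left :: tail => loop(f(left, right) :: tail, rest)
      case _                     => sys.error("stack underflow")
    }

  loop(List.empty, program)
}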
I’ve implemented a simple benchmark for this code (see the repository) and it’s
roughly five times slower than the interpreter we started with. Clearly some
optimization is needed.
One of the reasons for using the interpreter strategy is to isolate effects, such
as state or input and output. An interpreter can be effectful without impacting
the ability to reason about or compose the programs the interpreter runs.
Sometimes the effects are the entire point of the interpreter as the program
may describe effectful actions, such as parsing network data or drawing on a
screen, which the interpreter then carries out. Sometimes effects may just be
optimizations, which is how we are going to use them in our arithmetic stack
machine.
There are many inefficiencies in the stack machine we have just created. A
List is a poor choice of data structure for both the stack and program. We
can avoid a lot of pointer chasing and memory allocation by using a fixed size
Array. The program never changes in size, and we can simply allocate a large
enough stack that resizing it becomes very unlikely. We can also avoid the
indirection of pushing and popping and operate directly on the stack array.
loop(0, 0)
}
}
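For reference, here is a sketch of what the array-based machine might look like. The original listing is elided above; the class name and stack size are assumptions.

final case class ArrayStackMachine(program: Array[Op]) {
  // A preallocated stack; 256 slots is an arbitrary choice for this sketch.
  private val stack: Array[Double] = Array.ofDim[Double](256)

  def eval: Double = {
    // pc is the program counter; sp points at the next free stack slot.
    def loop(pc: Int, sp: Int): Double =
      if pc == program.length then stack(sp - 1)
      else
        program(pc) match {
          case Op.Lit(value) =>
            stack(sp) = value
            loop(pc + 1, sp + 1)
          case Op.Add =>
            stack(sp - 2) = stack(sp - 2) + stack(sp - 1)
            loop(pc + 1, sp - 1)
          case Op.Sub =>
            stack(sp - 2) = stack(sp - 2) - stack(sp - 1)
            loop(pc + 1, sp - 1)
          case Op.Mul =>
            stack(sp - 2) = stack(sp - 2) * stack(sp - 1)
            loop(pc + 1, sp - 1)
          case Op.Div =>
            stack(sp - 2) = stack(sp - 2) / stack(sp - 1)
            loop(pc + 1, sp - 1)
        }

    loop(0, 0)
  }
}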
The above optimization is, to me, the most obvious and straightforward to
implement. In this section we’ll attempt to go further, by looking at some of
the optimizations described in the literature. We’ll see that there is not always
a straight path to faster code.
The benchmark I used is the simple recursive Fibonacci. Calculating the nth
Fibonacci number produces a large expression for a modest choice of n. I used
a value of 25, and the expression has over one million elements. Notably the
expressions only involve addition, and the only literals in use are zero and one.
This limits the applicability of the optimizations to a wider range of inputs,
but the intention is not to produce an optimized interpreter for this specific
case but rather to discuss possible optimizations and issues that arise when
attempting to optimize an interpreter in general.
We’ll look at four different optimizations, which all use the optimized stack
machine above as their base:
• Byte code replaces the Op algebraic data type with a single byte. The
hope here is that the smaller representation will lead to better cache
utilization, and possibly a faster match expression, and therefore a faster
overall interpreter. In this representation literals are also stored in a
separate array of Doubles. More on this later.
• Stack caching stores the top of the stack in a variable, which we hope
will be allocated to a register and therefore be extremely fast to access.
The remainder of the stack is stored in an array as above. Stack caching
involves more work when pushing values on to the stack, as we must
copy the value from the top into the array, but less work when popping
values off the stack. The hope is that the savings will outweigh the
costs.
Below are the benchmark results obtained on an AMD Ryzen 5 3600 and an Apple M1, both running JDK 21. Results are shown in operations per second.
The Baseline interpreter is the one using structural recursion. The Stack
interpreter uses a List to represent the stack and program. The Optimized
Stack represents the stack and program as arrays. The other interpreters build
on the Optimized Stack interpreter and add the optimizations described above.
The All interpreter has all the optimizations.
There are a few lessons to take from this. The most important, in my
opinion, is that performance is not compositional. The result of applying two optimizations is not simply the sum of applying the optimizations individually.
You can see that most of the optimizations on their own make little or no
change to performance relative to the Optimized Stack interpreter. Taken
together, however, they make a significant improvement.
Finally, differences between platforms are also significant. It's hard to know how much this is due to differences in the computer's architecture, and how
much is down to differences in the JVM. Either way, be aware of which
platform or platforms you expect the majority of users to run on, and don’t
naively assume performance on one platform will directly translate to another.
15.5 Conclusions
Let's now talk about instruction dispatch, which is an area we did not consider
for optimization. Instruction dispatch is the process by which the interpreter
chooses the code to run for a given interpreter instruction. The Structure and
Performance of Efficient Interpreters argues that instruction dispatch makes
up a major portion of an interpreter’s execution time. The approach we used is
generally called switch dispatch in the literature. There are several alternatives, which we have not explored here.
Stack machines are not the only virtual machine used for implementing
interpreters. Register machines are the most common alternative. The
Lua virtual machine, for example, is a register machine. Virtual Machine
Showdown: Stack Versus Registers compares the two and concludes that
register machines are faster. However they are more complex to implement.
Case Studies

Chapter 16
Creating Usable Code
Chapter 17
Case Study: Testing Asynchronous Code
We’ll start with a straightforward case study: how to simplify unit tests for
asynchronous code by making them synchronous.
Let’s return to the example from Chapter 12 where we’re measuring the
uptime on a set of servers. We’ll flesh out the code into a more complete
structure. There will be two components. The first is an UptimeClient that
polls remote servers for their uptime:
import scala.concurrent.Future
trait UptimeClient {
def getUptime(hostname: String): Future[Int]
}
We’ll also have an UptimeService that maintains a list of servers and allows the
user to poll them for their total uptime:
import scala.concurrent.ExecutionContext.Implicits.global
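The definition of UptimeService was elided in this excerpt; a sketch consistent with the test code below:

import cats.syntax.all.* // for traverse

class UptimeService(client: UptimeClient) {
  def getTotalUptime(hostnames: List[String]): Future[Int] =
    hostnames.traverse(client.getUptime).map(_.sum)
}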
Now, suppose we’re writing unit tests for UptimeService. We want to test its
ability to sum values, regardless of where it is getting them from. Here’s an
example:
def testTotalUptime() = {
val hosts = Map("host1" -> 10, "host2" -> 6)
val client = new TestUptimeClient(hosts)
val service = new UptimeService(client)
val actual = service.getTotalUptime(hosts.keys.toList)
val expected = hosts.values.sum
assert(actual == expected)
}
// error:
// Values of types scala.concurrent.Future[Int] and Int cannot be
compared with == or !=
// assert(actual == expected)
// ^^^^^^^^^^^^^^^^^^
The code doesn’t compile because we’ve made a classic error¹. We forgot that
our application code is asynchronous. Our actual result is of type Future[Int]
and our expected result is of type Int. We can’t compare them directly!
¹Technically this is a warning not an error. It has been promoted to an error in our case
because we’re using the -Xfatal-warnings flag on scalac.
There are a couple of ways to solve this problem. We could alter our test code to accommodate the asynchrony. However, there is another alternative. Let's make our service code synchronous so our test works
without modification!
The question is: what result type should we give to the abstract method in
UptimeClient? We need to abstract over Future[Int] and Int:
trait UptimeClient {
def getUptime(hostname: String): ???
}
At first this may seem difficult. We want to retain the Int part from each type
but “throw away” the Future part in the test code. Fortunately, Cats provides a
solution in terms of the identity type, Id, that we discussed way back in Section
9.3. Id allows us to “wrap” types in a type constructor without changing their
meaning:
package cats
type Id[A] = A
• write out the method signature for getUptime in each case to verify that
it compiles.
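A sketch of the abstracted client and its two specialisations (one way to complete the exercise):

import cats.Id
import scala.concurrent.Future

trait UptimeClient[F[_]] {
  def getUptime(hostname: String): F[Int]
}

trait RealUptimeClient extends UptimeClient[Future] {
  def getUptime(hostname: String): Future[Int]
}

trait TestUptimeClient extends UptimeClient[Id] {
  def getUptime(hostname: String): Id[Int]
}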
You should now be able to flesh your definition of TestUptimeClient out into a
full class based on a Map[String, Int] as before.
The problem here is that traverse only works on sequences of values that
have an Applicative. In our original code we were traversing a List[Future
[Int]]. There is an applicative for Future so that was fine. In this version
we are traversing a List[F[Int]]. We need to prove to the compiler that F
has an Applicative. Do this by adding an implicit constructor parameter to
UptimeService.
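A sketch of the resulting abstracted service and the test client:

import cats.{Applicative, Id}
import cats.syntax.all.* // for traverse and map

class UptimeService[F[_]: Applicative](client: UptimeClient[F]) {
  def getTotalUptime(hostnames: List[String]): F[Int] =
    hostnames.traverse(client.getUptime).map(_.sum)
}

class TestUptimeClient(hosts: Map[String, Int]) extends UptimeClient[Id] {
  def getUptime(hostname: String): Id[Int] =
    hosts.getOrElse(hostname, 0)
}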
Finally, let’s turn our attention to our unit tests. Our test code now works as
intended without any modification. We create an instance of TestUptimeClient
and wrap it in an UptimeService. This effectively binds F to Id, allowing the
rest of the code to operate synchronously without worrying about monads or
applicatives:
def testTotalUptime() = {
val hosts = Map("host1" -> 10, "host2" -> 6)
val client = new TestUptimeClient(hosts)
val service = new UptimeService(client)
val actual = service.getTotalUptime(hosts.keys.toList)
val expected = hosts.values.sum
assert(actual == expected)
}
testTotalUptime()
17.3 Summary
This case study provides an example of how Cats can help us abstract over
different computational scenarios. We used the Applicative type class to
abstract over asynchronous and synchronous code. Leaning on a functional
abstraction allows us to specify the sequence of computations we want to
perform without worrying about the details of the implementation.
We used Applicative in this case study because it was the least powerful type
class that did what we needed. If we had required flatMap, we could have
swapped out Applicative for Monad. If we had needed to abstract over different
sequence types, we could have used Traverse. There are also type classes like
ApplicativeError and MonadError that help model failures as well as successful
computations.
Let’s move on now to a more complex case study where type classes will help
us produce something more interesting: a map‐reduce‐style framework for
parallel processing.
Chapter 18
Case Study: Map-Reduce
If you have used Hadoop or otherwise worked in “big data” you will have
heard of MapReduce, which is a programming model for doing parallel data
processing across clusters of machines (aka “nodes”). As the name suggests,
the model is built around a map phase, which is the same map function we know
from Scala and the Functor type class, and a reduce phase, which we usually
call fold¹ in Scala.
Recall the general signature for map: it applies a function A => B to an F[A], returning an F[B].
¹In Hadoop there is also a shuffle phase that we will ignore here.
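In signature form, the two phases look like this (a sketch; foldLeft is specialised to List for concreteness):

def map[F[_], A, B](fa: F[A])(f: A => B): F[B] = ???
def foldLeft[A, B](as: List[A])(zero: B)(combine: (B, A) => B): B = ???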
What about fold? We can implement this step with an instance of Foldable. Not every functor also has an instance of Foldable, but we can implement a map-reduce system on top of any data type that has both of these type classes.
Our reduction step becomes a foldLeft over the results of the distributed map.
By distributing the reduce step we lose control over the order of traversal.
Our overall reduction may not be entirely left‐to‐right—we may reduce left‐
to‐right across several subsequences and then combine the results. To ensure
correctness we need a reduction operation that is associative.
In summary, our parallel fold will yield the correct results if:
• we require the reducer function to be associative;
• we seed the computation with the identity of this function.
What does this pattern sound like? That’s right, we’ve come full circle back
to Monoid, the first type class we discussed in this book. We are not the
first to recognise the importance of monoids. The monoid design pattern for
map‐reduce jobs is at the core of recent big data systems such as Twitter’s
Summingbird.
We saw foldMap briefly back when we covered Foldable. It is one of the derived
operations that sits on top of foldLeft and foldRight. However, rather than
use Foldable, we will re‐implement foldMap here ourselves as it will provide
useful insight into the structure of map‐reduce.
Start by writing out the signature of foldMap. It should accept the following parameters:
• a sequence of type Vector[A];
• a function of type A => B, where there is a Monoid for B.
You will have to add implicit parameters or context bounds to complete the
type signature.
Now implement the body of foldMap. Use the flow chart in Figure 18.3 as a
guide to the steps required:
2. the map step;
3. the fold/reduce step;
4. the final result.
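One possible implementation (a sketch, folding left with the monoid):

import cats.Monoid
import cats.syntax.semigroup.* // for |+|

def foldMap[A, B: Monoid](values: Vector[A])(f: A => B): B =
  values.foldLeft(Monoid[B].empty)(_ |+| f(_))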
foldMap(Vector(1, 2, 3))(identity)
// res1: Int = 6
We already know a fair amount about the monadic nature of Futures. Let’s take
a moment for a quick recap, and to describe how Scala futures are scheduled
behind the scenes.
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
Some combinators create new Futures that schedule work based on the
results of other Futures. The map and flatMap methods, for example, schedule
computations that run as soon as their input values are computed and a CPU
is available:
We can also convert a List[Future[A]] to a Future[List[A]] using Future.sequence, or an instance of Traverse:
import scala.concurrent._
import scala.concurrent.duration._
There are also Monad and Monoid implementations for Future available from cats.instances.future:
Monad[Future].pure(42)
Monoid[Future[Int]].combine(Future(1), Future(2))
Now we’ve refreshed our memory of Futures, let’s look at how we can divide
work into batches. We can query the number of available CPUs on our
machine using an API call from the Java standard library:
Runtime.getRuntime.availableProcessors
// res11: Int = 4
(1 to 10).toList.grouped(3).toList
// res12: List[List[Int]] = List(
// List(1, 2, 3),
// List(4, 5, 6),
// List(7, 8, 9),
// List(10)
// )
Use the techniques described above to split the work into batches, one batch
per CPU. Process each batch in a parallel thread. Refer back to Figure 18.4 if
you need to review the overall algorithm.
For bonus points, process the batches for each CPU using your
implementation of foldMap from above.
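A sketch of a solution, assuming the foldMap defined above:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import cats.Monoid
import cats.syntax.semigroup.* // for |+|

def parallelFoldMap[A, B: Monoid](values: Vector[A])(f: A => B): Future[B] = {
  // One batch per available CPU (at least one element per batch).
  val numCores  = Runtime.getRuntime.availableProcessors
  val groupSize = ((1.0 * values.size / numCores).ceil.toInt) max 1

  // Map phase: process each batch in its own Future.
  val batches: Vector[Future[B]] =
    values
      .grouped(groupSize)
      .toVector
      .map(batch => Future(foldMap(batch)(f)))

  // Reduce phase: combine the per-batch results with the Monoid.
  Future.sequence(batches).map { results =>
    results.foldLeft(Monoid[B].empty)(_ |+| _)
  }
}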
18.4 Summary

Chapter 19
Case Study: Data Validation

In this case study we will build a library for validation. What do we mean by
validation? Almost all programs must check their input meets certain criteria.
Usernames must not be blank, email addresses must be valid, and so on. This
type of validation often occurs in web forms, but it could be performed on
configuration files, on web service responses, and any other case where we
have to deal with data that we can’t guarantee is correct. Authentication, for
example, is just a specialised form of validation.
We want to build a library that performs these checks. What design goals
should we have? For inspiration, let’s look at some examples of the types of
checks we want to perform:
• A bid in an auction must apply to one or more items and have a positive
value.
• An email address must contain a single @ sign. Split the string at the @.
The string to the left must not be empty. The string to the right must
be at least three characters long and contain a dot.
These goals assume we’re checking a single piece of data. We will also need to
combine checks across multiple pieces of data. For a login form, for example,
we’ll need to combine the check results for the username and the password.
This will turn out to be quite a small component of the library, so the majority
of our time will focus on checking a single data item.
Let’s start at the bottom, checking individual pieces of data. Before we start
coding let’s try to develop a feel for what we’ll be building. We can use a
graphical notation to help us. We’ll go through our goals one by one.
Our first goal requires us to associate useful error messages with a check
failure. The output of a check could be either the value being checked, if it
passed the check, or some kind of error message. We can abstractly represent
this as a value in a context, where the context is the possibility of an error
message as shown in Figure 19.1.
Is applicative combination the right way to combine two checks? Not really. With applicative combination, both checks are applied to the same value and result in a tuple with the value repeated. What we want feels more
like a monoid as shown in Figure 19.4. We can define a sensible identity—a
check that always passes—and two binary combination operators—and and or:
We'll probably be using and and or about equally often with our validation library.
Monoids also feel like a good mechanism for accumulating error messages. If
we store messages as a List or NonEmptyList, we can even use a pre‐existing
monoid from inside Cats.
In addition to checking data, we also have the goal of transforming it. This
seems like it should be a map or a flatMap depending on whether the transform
can fail or not, so it seems we also want checks to be a monad as shown in
Figure 19.5.
We’ve now broken down our library into familiar abstractions and are in a good
position to begin development.
Our design revolves around a Check, which we said was a function from a value
to a value in a context. As soon as you see this description you should think
of something like
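type Check[A] = A => Either[ErrorMessage, A]

where ErrorMessage stands for some error type we have yet to pin down (a sketch; the original listing was elided here).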
We could attempt to build some kind of ErrorMessage type that holds all
the information we can think of. However, we can’t predict the user’s
requirements. Instead let’s let the user specify what they want. We can do
this by adding a second type parameter to Check:
trait Check[E, A] {
def apply(value: A): Either[E, A]
// other methods...
}
Type classes allow us to unify disparate data types with a common interface.
This doesn’t seem like what we’re trying to do here. That leaves us with an
algebraic data type. Let’s keep that thought in mind as we explore the design
a bit further.
Let’s add some combinator methods to Check, starting with and. This method
combines two checks into one, succeeding only if both checks succeed. Think
about implementing this method now. You should hit some problems. Read
on when you do!
trait Check[E, A] {
def and(that: Check[E, A]): Check[E, A] =
???
// other methods...
}
The problem is: what do you do when both checks fail? The correct thing to
do is to return both errors, but we don’t currently have any way to combine
Es. We need a type class that abstracts over the concept of “accumulating”
errors, as shown in Figure 19.6. What type class do we know that looks like this? What method or operator should we use to implement the ? operation?
There is another semantic issue that will come up quite quickly: should and short-circuit if the first check fails? What do you think the most useful behaviour is?
Use this knowledge to implement and. Make sure you end up with the
behaviour you expect!
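Here is a sketch of and at the level of plain functions, accumulating errors with a Semigroup when both checks fail. The names and shape are assumptions, not the book's final design.

import cats.Semigroup
import cats.syntax.semigroup.* // for |+|

def and[E: Semigroup, A](
  left: A => Either[E, A],
  right: A => Either[E, A]
): A => Either[E, A] =
  value =>
    (left(value), right(value)) match {
      case (Left(e1), Left(e2)) => Left(e1 |+| e2) // both fail: combine errors
      case (Left(e), _)         => Left(e)
      case (_, Left(e))         => Left(e)
      case _                    => Right(value)    // both pass: return the input
    }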
Strictly speaking, Either[E, A] is the wrong abstraction for the output of our
check. Why is this the case? What other data type could we use instead?
Switch your implementation over to this new data type.
With and and or we can implement many of the checks we'll want in practice.
However, we still have a few more methods to add. We’ll turn to map and
related methods next.
When we map over a check, what type do we assign to the result? It can’t be
A and it can't be B. We are at an impasse.
However, splitting our input and output types raises another issue. Up until
now we have operated under the assumption that a Check always returns its
input when successful. We used this in and and or to ignore the output of the
left and right rules and simply return the original input on success:
// etc...
}
In our new formulation we can’t return Right(a) because its type is Either[
E, A] not Either[E, B]. We’re forced to make an arbitrary choice between
returning Right(result1) and Right(result2). The same is true of the or
method. From this we can derive two things:
19.4.1 Predicates
We can make progress by pulling apart the concept of a predicate, which can
be combined using logical operations such as and and or, and the concept of
a check, which can transform data.
What we have called Check so far we will call Predicate. For Predicate we can state the following identity law, encoding the notion that a predicate always returns its input if it succeeds: for a predicate p and value a, if p(a) succeeds then the value it returns is a itself.
import cats.Semigroup
import cats.data.Validated
import cats.syntax.semigroup._ // for |+|
import cats.syntax.apply._ // for mapN
import cats.data.Validated._ // for Valid and Invalid
19.4.2 Checks
What about flatMap? The semantics are a bit unclear here. The method is
simple enough to declare but it’s not so obvious what it means or how we
should implement apply. The general shape of flatMap is shown in Figure 19.7.
How do we relate F in the figure to Check in our code? Check has three type
variables while F only has one.
To unify the types we need to fix two of the type parameters. The idiomatic
choices are the error type E and the input type A. This gives us the relationships
shown in Figure 19.8. In other words, the semantics of applying a FlatMap are:
• return to the original input of type A and apply it to the chosen check to
generate the final result of type F[C].
This is quite an odd method. We can implement it, but it is hard to find a use
for it. Go ahead and implement flatMap for Check, and then we’ll see a more
generally useful method.
We can write a more useful combinator that chains together two Checks. The
output of the first check is connected to the input of the second. This is
analogous to function composition using andThen:
trait Check[E, A, B] {
def andThen[C](that: Check[E, B, C]): Check[E, A, C]
}
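At the level of plain functions, using Either as the effect, andThen is Kleisli-style composition; a sketch:

def andThen[E, A, B, C](
  first: A => Either[E, B],
  second: B => Either[E, C]
): A => Either[E, C] =
  a => first(a).flatMap(second)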
19.4.3 Recap
We now have two algebraic data types, Predicate and Check, and a host of
combinators with their associated case class implementations. Look at the
following solution for a complete definition of each ADT.
There are a lot of ways this library could be cleaned up. However, let’s
implement some examples to prove to ourselves that our library really does
work, and then we’ll turn to improving it.
• An email address must contain an @ sign. Split the string at the @. The
string to the left must not be empty. The string to the right must be at
least three characters long and contain a dot.
19.5 Kleislis
We’ll finish off this case study by cleaning up the implementation of Check. A
justifiable criticism of our approach is that we’ve written a lot of code to do
very little. A Predicate is essentially a function A => Validated[E, A], and a
Check is basically a wrapper that lets us compose these functions.
• We lift some value into a monad (by using pure, for example). This is a
function with type A => F[A].
We can illustrate this as shown in Figure 19.9. We can also write out this
example using the monad API as follows:
Recall that Check is, in the abstract, allowing us to compose functions of type
A => F[B]. We can write the above in terms of andThen as:
The result is a (wrapped) function aToC of type A => F[C] that we can
subsequently apply to a value of type A.
We have achieved the same thing as the example method without having to
reference an argument of type A. The andThen method on Check is analogous to function composition, but composes functions of type A => F[B] instead of A => B.
The abstract concept of composing functions of type A => F[B] has a name: a
Kleisli.
Cats contains a data type cats.data.Kleisli that wraps a function just as Check
does. Kleisli has all the methods of Check plus some additional ones. If
Kleisli seems familiar to you, then congratulations. You’ve seen through its
disguise and recognised it as another concept from earlier in the book: Kleisli
is just another name for ReaderT.
import cats.data.Kleisli
import cats.instances.list._ // for Monad
These steps each transform an input Int into an output of type List[Int]:
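The definitions of the steps were elided in this excerpt; here are sketches consistent with the pipeline output shown below:

val step1: Kleisli[List, Int, Int] =
  Kleisli(x => List(x + 1, x - 1))

val step2: Kleisli[List, Int, Int] =
  Kleisli(x => List(x, -x))

val step3: Kleisli[List, Int, Int] =
  Kleisli(x => List(x * 2, x / 2))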
We can combine the steps into a single pipeline that combines the underlying
Lists using flatMap:
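A matching pipeline (again a sketch):

val pipeline: Kleisli[List, Int, Int] =
  step1.andThen(step2).andThen(step3)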
The result is a function that consumes a single Int and returns eight outputs,
each produced by a different combination of transformations from step1, step2
, and step3:
pipeline.run(20)
// res0: List[Int] = List(42, 10, -42, -10, 38, 9, -38, -9)
The only notable difference between Kleisli and Check in terms of API is that
Kleisli renames our apply method to run.
Add a method to Predicate called run that returns a function of the correct
type. Leave the rest of the code in Predicate the same.
Now rewrite our username and email validation example in terms of Kleisli
and Predicate. Here are few tips in case you get stuck:
First, remember that the run method on Predicate takes an implicit parameter.
If you call aPredicate.run(a) it will try to pass the implicit parameter explicitly.
If you want to create a function from a Predicate and immediately apply that function, use aPredicate.run.apply(a).
Second, type inference can be tricky in this exercise. We found that the
following definitions helped us to write code with fewer type declarations.
We have now written our code entirely in terms of Kleisli and Predicate,
completely removing Check. This is a good first step to simplifying our library.
There’s still plenty more to do, but we have a sophisticated building block from
Cats to work with. We’ll leave further improvements up to the reader.
19.6 Summary
This case study has been an exercise in removing rather than building
abstractions. We started with a fairly complex Check type. Once we realised
we were conflating two concepts, we separated out Predicate leaving us with
something that could be implemented with Kleisli.
Chapter 20
Case Study: CRDTs

In this case study we will explore Commutative Replicated Data Types (CRDTs),
a family of data structures that can be used to reconcile eventually consistent
data.
One approach is to build a system that is consistent, meaning that all machines
have the same view of data. For example, if a user changes their password
then all machines that store a copy of that password must accept the change
before we consider the operation to have completed successfully.
Consistent systems are easy to work with but they have their disadvantages.
They tend to have high latency because a single change can result in many
messages being sent between machines. They also tend to have relatively low
uptime because outages can cut communications between machines creating
a network partition. When there is a network partition, a consistent system
may refuse further updates to prevent inconsistencies across machines.
(Figure 20.1 shows machines A and B each starting with a count of 0; Figure 20.2 shows both machines holding a count of 5 after reconciliation.)
Now imagine we receive some web traffic. Our load balancer distributes five
incoming requests to A and B, A serving three visitors and B two. The machines
have inconsistent views of the system state that they need to reconcile to
achieve consistency. One reconciliation strategy with simple counters is to
exchange counts and add them as shown in Figure 20.2.
So far so good, but things will start to fall apart shortly. Suppose A serves
a single visitor, which means we’ve seen six visitors in total. The machines
attempt to reconcile state again using addition leading to the answer shown
in Figure 20.3.
This is clearly wrong! The problem is that simple counters don’t give us
enough information about the history of interactions between the machines.
Fortunately we don’t need to store the complete history to get the correct
answer—just a summary of it. Let’s look at how the GCounter solves this
problem.
(Figure 20.3 shows the problem: A serves one more request, giving counts of 6 and 5, and adding on reconciliation yields 11 on both machines, an incorrect result. Figure 20.4 shows the GCounter starting state: each machine stores a counter per machine, A:0 and B:0 on both.)
20.2.2 GCounters
The first clever idea in the GCounter is to have each machine storing a separate
counter for every machine it knows about (including itself). In the previous
example we had two machines, A and B. In this situation both machines would
store a counter for A and a counter for B as shown in Figure 20.4.
The rule with GCounters is that a given machine is only allowed to increment
its own counter. If A serves three visitors and B serves two visitors the counters
look as shown in Figure 20.5.
When two machines reconcile their counters the rule is to take the largest
value stored for each machine. In our example, the result of the first merge
will be as shown in Figure 20.6.
(Figure 20.5 shows the counters after the incoming requests: machine A stores A:3 B:0 and machine B stores A:0 B:2. Figure 20.6 shows the merge, taking the maximum per machine, after which both store A:3 B:2. Figure 20.7 shows a further request at A followed by another merge, after which both machines store A:4 B:2.)
20.3 Generalisation
You can probably guess that there’s a monoid in here somewhere, but let’s look
in more detail at the properties we’re relying on.
As a refresher, in Chapter 7 we saw that monoids must satisfy two laws. The binary operation + must be associative:

(a + b) + c == a + (b + c)

and the identity element 0 must satisfy:

0 + a == a + 0 == a

The merge operation additionally relies on max being idempotent:

a max a == a
Method      Identity   Commutative   Associative   Idempotent
increment   Y          N             Y             N
merge       Y          Y             Y             Y
total       Y          Y             Y             N
Since increment and total both use the same binary operation (addition) it's usual to require the same commutative monoid for both.
20.3.1 Implementation
Cats provides a type class for both Monoid and CommutativeMonoid, but doesn’t
provide one for bounded semilattice¹. That’s why we’re going to implement
our own BoundedSemiLattice type class.
import cats.kernel.CommutativeMonoid
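The definition of the type class was elided in this excerpt; a minimal sketch consistent with the description:

trait BoundedSemiLattice[A] extends CommutativeMonoid[A] {
  // combine must additionally be idempotent: combine(a, a) == a
  def combine(a1: A, a2: A): A
  def empty: A
}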
Implement BoundedSemiLattice type class instances for Ints and for Sets. The
instance for Int will technically only hold for non‐negative numbers, but you
don’t need to model non‐negativity explicitly in the types.
When you implement this, look for opportunities to use methods and syntax
on Monoid to simplify your implementation. This is a good example of how type
class abstractions work at multiple levels in our code. We’re using monoids to
design a large component—our CRDTs—but they are also useful in the small,
simplifying our code and making it shorter and clearer.
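One possible pair of instances (a sketch, using the given syntax seen elsewhere in the book):

object BoundedSemiLattice {
  given intInstance: BoundedSemiLattice[Int] with {
    def combine(a1: Int, a2: Int): Int = a1 max a2
    def empty: Int = 0 // identity for max over non-negative numbers
  }

  given setInstance[A]: BoundedSemiLattice[Set[A]] with {
    def combine(a1: Set[A], a2: Set[A]): Set[A] = a1.union(a2)
    def empty: Set[A] = Set.empty[A]
  }
}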
We’ve created a generic GCounter that works with any value that has
instances of BoundedSemiLattice and CommutativeMonoid. However we’re still
tied to a particular representation of the map from machine IDs to values.
There is no need to have this restriction, and indeed it can be useful to abstract
away from it. There are many key‐value stores that we want to work with, from
a simple Map to a relational database.
trait GCounter[F[_,_], K, V] {
  def increment(f: F[K, V])(k: K, v: V)
    (implicit m: CommutativeMonoid[V]): F[K, V]

  def merge(f1: F[K, V], f2: F[K, V])
    (implicit b: BoundedSemiLattice[V]): F[K, V]

  def total(f: F[K, V])
    (implicit m: CommutativeMonoid[V]): V
}

object GCounter {
  def apply[F[_,_], K, V]
    (implicit counter: GCounter[F, K, V]) =
    counter
}
Try defining an instance of this type class for Map. You should be able to
reuse your code from the case class version of GCounter with some minor
modifications.
The implementation strategy for the type class instance is a bit unsatisfying.
Although the structure of the implementation will be the same for most
instances we define, we won’t get any code reuse.
One solution is to capture the idea of a key‐value store within a type class,
and then generate GCounter instances for any type that has a KeyValueStore
instance. Here’s the code for such a type class:
trait KeyValueStore[F[_,_]] {
  def put[K, V](f: F[K, V])(k: K, v: V): F[K, V]
  def get[K, V](f: F[K, V])(k: K): Option[V]
  def values[K, V](f: F[K, V]): List[V]
}
With our type class in place we can implement syntax to enhance data types
for which we have instances:
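The syntax was elided in this excerpt; a sketch of the sort of enrichment intended:

implicit class KvsOps[F[_,_], K, V](f: F[K, V]) {
  def put(key: K, value: V)(implicit kvs: KeyValueStore[F]): F[K, V] =
    kvs.put(f)(key, value)

  def get(key: K)(implicit kvs: KeyValueStore[F]): Option[V] =
    kvs.get(f)(key)

  def values(implicit kvs: KeyValueStore[F]): List[V] =
    kvs.values(f)
}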
Now we can generate GCounter instances for any data type that has instances
of KeyValueStore and CommutativeMonoid using an implicit def:
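The implicit def itself was elided here. A sketch of its shape, assuming the KeyValueStore operations above and a CommutativeMonoid for the map type F[K, V] (as Cats provides for Map):

import cats.syntax.semigroup.* // for |+|

implicit def gcounterInstance[F[_,_], K, V](
  implicit kvs: KeyValueStore[F],
  km: CommutativeMonoid[F[K, V]]
): GCounter[F, K, V] =
  new GCounter[F, K, V] {
    def increment(f: F[K, V])(key: K, value: V)(
      implicit m: CommutativeMonoid[V]
    ): F[K, V] = {
      // Add value to this machine's current count.
      val total = kvs.get(f)(key).getOrElse(m.empty) |+| value
      kvs.put(f)(key, total)
    }

    def merge(f1: F[K, V], f2: F[K, V])(
      implicit b: BoundedSemiLattice[V]
    ): F[K, V] =
      f1 |+| f2 // relies on the monoid for F[K, V] combining per key

    def total(f: F[K, V])(
      implicit m: CommutativeMonoid[V]
    ): V =
      m.combineAll(kvs.values(f))
  }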
The complete code for this case study is quite long, but most of it is boilerplate setting up syntax for operations on the type class. We can cut down on this using libraries and compiler plugins such as Simulacrum and Kind Projector.
20.6 Summary
In this case study we’ve seen how we can use type classes to model a simple
CRDT, the GCounter, in Scala. Our implementation gives us a lot of flexibility
and code reuse: we aren’t tied to the data type we “count”, nor to the data
type that maps machine IDs to counters.
The focus in this case study has been on using the tools that Scala provides,
not on exploring CRDTs. There are many other CRDTs, some of which operate
in a similar manner to the GCounter, and some of which have very different
implementations. A fairly recent survey gives a good overview of many of the
basic CRDTs. However this is an active area of research and we encourage you to read the recent publications in the field if CRDTs and eventual consistency interest you.
Part V
Appendices
Appendix A
Solutions for: Algebraic Data Types
A.1 Tree
We can directly translate this binary tree into Scala. Here’s the Scala 3 version.
enum Tree[A] {
case Leaf(value: A)
case Node(left: Tree[A], right: Tree[A])
}
I chose to use pattern matching to implement these methods. I’m using the
Scala 3 encoding so I have no choice.
enum Tree[A] {
case Leaf(value: A)
case Node(left: Tree[A], right: Tree[A])
Now these methods all transform an algebraic data type so I can implement
them using structural recursion. I write down the structural recursion skeleton
for Tree, remembering to apply the recursion rule.
enum Tree[A] {
case Leaf(value: A)
case Node(left: Tree[A], right: Tree[A])
Now I can use the other reasoning techniques to complete the method
declarations. Let’s work through size.
Now I can use the rule for reasoning about recursion: I assume the recursive
calls successfully compute the size of the left and right children. What is the
size then of the combined tree? It must be the sum of the size of the children.
With this, I’m done.
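In code, a sketch of just the size method:

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])

  def size: Int =
    this match {
      case Leaf(_)           => 1
      case Node(left, right) => left.size + right.size
    }
}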
I can use the same process to work through the other two methods, giving me
the complete solution shown below.
enum Tree[A] {
case Leaf(value: A)
case Node(left: Tree[A], right: Tree[A])
enum Tree[A] {
case Leaf(value: A)
case Node(left: Tree[A], right: Tree[A])
def fold[B]: B =
???
}
enum Tree[A] {
case Leaf(value: A)
case Node(left: Tree[A], right: Tree[A])
def fold[B]: B =
this match {
case Leaf(value) => ???
case Node(left, right) => left.fold ??? right.fold
}
}
Now I follow the types to add the method parameters. For the Leaf case we
want a function of type A => B.
enum Tree[A] {
case Leaf(value: A)
case Node(left: Tree[A], right: Tree[A])
For the Node case we want a function that combines the two recursive results,
and therefore has type (B, B)=> B.
enum Tree[A] {
case Leaf(value: A)
case Node(left: Tree[A], right: Tree[A])
enum Tree[A] {
case Leaf(value: A)
case Node(left: Tree[A], right: Tree[A])
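Putting the pieces together, a sketch of the completed fold:

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])

  def fold[B](leaf: A => B)(node: (B, B) => B): B =
    this match {
      case Leaf(value)       => leaf(value)
      case Node(left, right) => node(left.fold(leaf)(node), right.fold(leaf)(node))
    }
}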
A.5 Iterate
object MyList {
  def unfold[A, B](seed: A)(stop: A => Boolean, f: A => B, next: A => A): MyList[B] =
    if stop(seed) then MyList.Empty()
    else MyList.Pair(f(seed), unfold(next(seed))(stop, f, next))
}
A.6 Map
A.7 Identities
It’s Unit, because adding Unit to any product doesn’t add any more information.
So, Int contains exactly as much information as Int × Unit (written as the tuple (Int, Unit) in Scala).
It’s Nothing, following the same reasoning as products: a case of Nothing adds
no further information (and we cannot even create a value with this type.)
Appendix B
Solutions for: Objects as Codata

For all of these methods I found that structural corecursion was the most
natural way to tackle them. You could start with structural recursion, though.
You might be worried about the inefficiency of filter. That’s something we’ll
discuss a bit later.
trait Stream[A] {
def head: A
def tail: Stream[A]
loop(self)
}
loop(self)
}
}
}
or(True, True).`if`("yes")("no")
// res5: String = "yes"
or(True, False).`if`("yes")("no")
// res6: String = "yes"
or(False, True).`if`("yes")("no")
// res7: String = "yes"
or(False, False).`if`("yes")("no")
// res8: String = "no"
not(True).`if`("yes")("no")
// res9: String = "no"
not(False).`if`("yes")("no")
// res10: String = "yes"
B.3 Sets
trait Set[A] {
  // ...
  def union(that: Set[A]): Set[A] =
    UnionSet(this, that)
}
It turns out, perhaps surprisingly, that this works. Let’s define a few sets using
Evens and ListSet.
evensAndOne.contains(1)
// res1: Boolean = true
evensAndOthers.contains(1)
// res2: Boolean = true
evensAndOne.contains(2)
// res3: Boolean = true
evensAndOthers.contains(2)
// res4: Boolean = true
evensAndOne.contains(3)
// res5: Boolean = false
evensAndOthers.contains(3)
// res6: Boolean = true
odds.contains(1)
// res7: Boolean = true
odds.contains(2)
// res8: Boolean = false
odds.contains(3)
// res9: Boolean = true
Taking the union of even and odd integers gives us a set that contains all
integers.
integers.contains(1)
// res10: Boolean = true
integers.contains(2)
// res11: Boolean = true
integers.contains(3)
// res12: Boolean = true
Appendix C
Solutions for: Contextual Abstraction

These steps define the three main components of our type class. First we
define Display—the type class itself:
trait Display[A] {
def display(value: A): String
}
Then we define some default instances of Display and package them in the
Display companion object:
object Display {
  given stringDisplay: Display[String] with {
    def display(input: String) = input
  }

  given intDisplay: Display[Int] with {
    def display(input: Int) = input.toString
  }
}
This is a standard use of the type class pattern. First we define a custom data type for our application:
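final case class Cat(name: String, age: Int, color: String)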
Then we define type class instances for the types we care about. These either
go into the companion object of Cat or a separate object to act as a namespace:
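given catDisplay: Display[Cat] with {
  def display(cat: Cat) =
    s"${cat.name} is a ${cat.age} year-old ${cat.color} cat."
}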
Finally, we use the type class by bringing the relevant instances into scope and using the interface object or interface syntax. If we define the instances in companion objects, Scala brings them into scope for us automatically. Otherwise we use an import to access them:
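val cat = Cat("Garfield", 41, "ginger and black")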
Display.print(cat)
// Garfield is a 41 year-old ginger and black cat.
object DisplaySyntax {
extension [A](value: A)(using p: Display[A]) {
def display: String = p.display(value)
def print: Unit = Display.print(value)
}
}
import DisplaySyntax.*
import java.util.Date
new Date().print
// error:
// value print is not a member of java.util.Date.
// An extension method was tried, but could not be fully constructed:
//
//     repl.MdocSession.MdocApp1.DisplaySyntax.print[java.util.Date](
//       new java.util.Date())(
//       /* missing */summon[repl.MdocSession.MdocApp1.Display[java.util.Date]])
//
//   failed with:
//
//     No given instance of type repl.MdocSession.MdocApp1.Display[java.util.Date]
//     was found for parameter p of method print in object DisplaySyntax
// new Date().print
// ^^^^^^^^^^^^^^^^
D.1 Arithmetic
The trick here is to recognize how the textual description relates to code, and
to apply reification correctly.
enum Expression {
case Literal(value: Double)
case Addition(left: Expression, right: Expression)
case Subtraction(left: Expression, right: Expression)
case Multiplication(left: Expression, right: Expression)
case Division(left: Expression, right: Expression)
}
object Expression {
def apply(value: Double): Expression =
Literal(value)
}
Next we implement eval as a structural recursion, with one case for each variant of the ADT:

enum Expression {
  case Literal(value: Double)
  case Addition(left: Expression, right: Expression)
  case Subtraction(left: Expression, right: Expression)
  case Multiplication(left: Expression, right: Expression)
  case Division(left: Expression, right: Expression)

  def eval: Double =
    this match {
      case Literal(value)              => value
      case Addition(left, right)       => left.eval + right.eval
      case Subtraction(left, right)    => left.eval - right.eval
      case Multiplication(left, right) => left.eval * right.eval
      case Division(left, right)       => left.eval / right.eval
    }
}
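To test it we need a value of type Expression. Any expression evaluating to 42.0 will do; a plain literal is the simplest (the exercise's own example expression is more elaborate):

val fortyTwo = Expression(42.0)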
fortyTwo.eval
// res2: Double = 42.0
Evaluation as written above is not tail recursive. The first step towards a tail recursive interpreter is to rewrite eval in continuation-passing style, passing each recursive call an explicit continuation:

enum Expression {
  case Literal(value: Double)
  case Addition(left: Expression, right: Expression)
  case Subtraction(left: Expression, right: Expression)
  case Multiplication(left: Expression, right: Expression)
  case Division(left: Expression, right: Expression)

  def eval: Double = {
    def loop(expr: Expression, cont: Double => Double): Double =
      expr match {
        case Literal(value) => cont(value)
        case Addition(l, r) =>
          loop(l, a => loop(r, b => cont(a + b)))
        case Subtraction(l, r) =>
          loop(l, a => loop(r, b => cont(a - b)))
        case Multiplication(l, r) =>
          loop(l, a => loop(r, b => cont(a * b)))
        case Division(l, r) =>
          loop(l, a => loop(r, b => cont(a / b)))
      }

    loop(this, identity)
  }
}
The process to produce this code is very similar to the regular expression
example. We just identify all the different types of calls (which are the same
as the regular expression example) and reify them.
type Continuation = Double => Call

enum Call {
  case Continue(value: Double, k: Continuation)
  case Loop(expr: Expression, k: Continuation)
  case Done(result: Double)
}
Reifying the calls gives us an interpreter whose loop is properly tail recursive:

import scala.annotation.tailrec

enum Expression {
  case Literal(value: Double)
  case Addition(left: Expression, right: Expression)
  case Subtraction(left: Expression, right: Expression)
  case Multiplication(left: Expression, right: Expression)
  case Division(left: Expression, right: Expression)

  def eval: Double = {
    @tailrec
    def loop(call: Call): Double =
      call match {
        case Call.Continue(value, k) => loop(k(value))
        case Call.Loop(expr, k) =>
          expr match {
            case Literal(value) => loop(Call.Continue(value, k))
            case Addition(l, r) =>
              loop(Call.Loop(l, a => Call.Loop(r, b => Call.Continue(a + b, k))))
            case Subtraction(l, r) =>
              loop(Call.Loop(l, a => Call.Loop(r, b => Call.Continue(a - b, k))))
            case Multiplication(l, r) =>
              loop(Call.Loop(l, a => Call.Loop(r, b => Call.Continue(a * b, k))))
            case Division(l, r) =>
              loop(Call.Loop(l, a => Call.Loop(r, b => Call.Continue(a / b, k))))
          }
        case Call.Done(result) => result
      }

    loop(Call.Loop(this, x => Call.Done(x)))
  }
}
import cats.*
import cats.syntax.all.*
Finally, we use the Show interface syntax to print our instance of Cat:
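Assuming the Show[Cat] instance from the exercise is in scope, this is a one-liner:

println(cat.show)
// Garfield is a 41 year-old ginger and black cat.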
First we need our Cats imports. In this exercise we'll be using the Eq type class and the Eq interface syntax, so we start by importing those.
import cats.*
import cats.syntax.all.*
We bring the Eq instances for Int and String into scope for the implementation
of Eq[Cat]:
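Eq.instance is a convenient way to build the instance from a function:

given catEqual: Eq[Cat] = Eq.instance[Cat] { (cat1, cat2) =>
  (cat1.name === cat2.name) &&
  (cat1.age === cat2.age) &&
  (cat1.color === cat2.color)
}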
There are at least four monoids for Boolean! First, we have and with operator
&& and identity true:
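Using the Monoid definition from the chapter (cats.Monoid works equally well):

implicit val booleanAndMonoid: Monoid[Boolean] =
  new Monoid[Boolean] {
    def combine(a: Boolean, b: Boolean) = a && b
    def empty = true
  }

Second, we have or with operator || and identity false:

implicit val booleanOrMonoid: Monoid[Boolean] =
  new Monoid[Boolean] {
    def combine(a: Boolean, b: Boolean) = a || b
    def empty = false
  }

Third, we have exclusive or, also with identity false:

implicit val booleanXorMonoid: Monoid[Boolean] =
  new Monoid[Boolean] {
    def combine(a: Boolean, b: Boolean) = (a && !b) || (!a && b)
    def empty = false
  }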
Finally, we have exclusive nor (the negation of exclusive or) with identity true:
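implicit val booleanXnorMonoid: Monoid[Boolean] =
  new Monoid[Boolean] {
    def combine(a: Boolean, b: Boolean) = (!a || b) && (a || !b)
    def empty = true
  }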
Showing that the identity law holds in each case is straightforward. Similarly, the associativity of combine can be shown by enumerating the cases.
Set intersection forms a semigroup, but doesn’t form a monoid because it has
no identity element:
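implicit def setIntersectionSemigroup[A]: Semigroup[Set[A]] =
  new Semigroup[Set[A]] {
    def combine(a: Set[A], b: Set[A]) = a.intersect(b)
  }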
Set complement and set difference are not associative, so they cannot be
considered for either monoids or semigroups. However, symmetric difference
(the union less the intersection) does form a monoid with the empty set:
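implicit def symDiffMonoid[A]: Monoid[Set[A]] =
  new Monoid[Set[A]] {
    def combine(a: Set[A], b: Set[A]): Set[A] =
      a.union(b).diff(a.intersect(b))
    def empty: Set[A] = Set.empty
  }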
We can alternatively write the fold using Monoids, although there’s not a
compelling use case for this yet:
import cats.Monoid
import cats.syntax.all.*
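def add(items: List[Int]): Int =
  items.foldLeft(Monoid[Int].empty)(_ |+| _)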
Now there is a use case for Monoids. We need a single method that adds Ints
and instances of Option[Int]. We can write this as a generic method that
accepts an implicit Monoid as a parameter:
import cats.Monoid
import cats.syntax.all.*
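def add[A](items: List[A])(implicit monoid: Monoid[A]): A =
  items.foldLeft(monoid.empty)(_ |+| _)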
We can optionally use Scala’s context bound syntax to write the same code in
a shorter way:
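def add[A: Monoid](items: List[A]): A =
  items.foldLeft(Monoid[A].empty)(_ |+| _)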
We can use this code to add values of type Int and Option[Int] as requested:
add(List(1, 2, 3))
// res9: Int = 6
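add(List(Some(1), None, Some(2), None, Some(3)))
// res: Option[Int] = Some(value = 6)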
Note that if we try to add a list consisting entirely of Some values, we get a
compile error:
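add(List(Some(1), Some(2), Some(3)))
// error: the compiler cannot find a Monoid instance for Some[Int]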
This happens because the inferred type of the list is List[Some[Int]], while
Cats will only generate a Monoid for Option[Int]. We’ll see how to get around
this in a moment.
The semantics are similar to writing a Functor for List. We recurse over the
data structure, applying the function to every Leaf we find. The functor laws
intuitively require us to retain the same structure with the same pattern of
Branch and Leaf nodes:
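import cats.Functor

implicit val treeFunctor: Functor[Tree] =
  new Functor[Tree] {
    def map[A, B](tree: Tree[A])(func: A => B): Tree[B] =
      tree match {
        case Branch(left, right) =>
          Branch(map(left)(func), map(right)(func))
        case Leaf(value) =>
          Leaf(func(value))
      }
  }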
Branch(Leaf(10), Leaf(20)).map(_ * 2)
// error:
// value map is not a member of repl.MdocSession.MdocApp0.Branch[Int]
// Branch(Leaf(10), Leaf(20)).map(_ * 2)
// ^
Oops! This falls foul of the same invariance problem we discussed in Section
4.6.1. The compiler can find a Functor instance for Tree but not for Branch or
Leaf. Let’s add some smart constructors to compensate:
object Tree {
  def branch[A](left: Tree[A], right: Tree[A]): Tree[A] =
    Branch(left, right)

  def leaf[A](value: A): Tree[A] =
    Leaf(value)
}
Tree.leaf(100).map(_ * 2)
// res9: Tree[Int] = Leaf(value = 200)
Tree.branch(Tree.leaf(10), Tree.leaf(20)).map(_ * 2)
// res10: Tree[Int] = Branch(left = Leaf(value = 20), right = Leaf(value = 40))
Here’s a working implementation. We call func to turn the B into an A and then
use our original Display to turn the A into a String. In a small show of sleight
of hand we use a self alias to distinguish the outer and inner Displays:
trait Display[A] { self =>
  def display(value: A): String

  def contramap[B](func: B => A): Display[B] =
    new Display[B] {
      def display(value: B): String =
        self.display(func(value))
    }
}
To make the instance generic across all types of Box, we base it on the Display
for the type inside the Box. We can either write out the complete definition
by hand:
given boxDisplay[A](
using p: Display[A]
): Display[Box[A]] with {
def display(box: Box[A]): String =
p.display(box.value)
}
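Or we can write it in one line using the contramap method we defined above:

given boxDisplay[A](using p: Display[A]): Display[Box[A]] =
  p.contramap[Box[A]](_.value)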
We need a generic Codec for Box[A] for any given A. We create this by calling
imap on a Codec[A], which we bring into scope using an implicit parameter:
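Codec and its imap method are as defined in the chapter:

given boxCodec[A](using c: Codec[A]): Codec[Box[A]] =
  c.imap[Box[A]](Box(_), _.value)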
At first glance this seems tricky, but if we follow the types we’ll see there’s only
one solution. We are passed a value of type F[A]. Given the tools available
there’s only one thing we can do: call flatMap:
trait Monad[F[_]] {
  def pure[A](value: A): F[A]

  def flatMap[A, B](value: F[A])(func: A => F[B]): F[B]

  def map[A, B](value: F[A])(func: A => B): F[B] =
    flatMap(value)(a => ???)
}
We need a function of type A => F[B] as the second parameter. We have two
function building blocks available: the func parameter of type A => B and the
pure function of type A => F[A]. Combining these gives us our result:
trait Monad[F[_]] {
  def pure[A](value: A): F[A]

  def flatMap[A, B](value: F[A])(func: A => F[B]): F[B]

  def map[A, B](value: F[A])(func: A => B): F[B] =
    flatMap(value)(a => pure(func(a)))
}
import cats.Id
Now let’s look at each method in turn. The pure operation creates an Id[A]
from an A. But A and Id[A] are the same type! All we have to do is return the
initial value:
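def pure[A](value: A): Id[A] =
  value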
pure(123)
// res7: Int = 123
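map and flatMap both strip the (purely conceptual) Id wrapper and apply the function directly:

def map[A, B](initial: Id[A])(func: A => B): Id[B] =
  func(initial)

def flatMap[A, B](initial: Id[A])(func: A => Id[B]): Id[B] =
  func(initial)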
map(123)(_ * 2)
// res8: Int = 246
The final punch line is that, once we strip away the Id type constructors,
flatMap and map are actually identical:
flatMap(123)(_ * 2)
// res9: Int = 246
This ties in with our understanding of functors and monads as sequencing type
classes. Each type class allows us to sequence operations ignoring some kind
of complication. In the case of Id there is no complication, making map and
flatMap the same thing.
Notice that we haven’t had to write type annotations in the method bodies
above. The compiler is able to interpret values of type A as Id[A] and vice
versa by the context in which they are used.
The only restriction we’ve seen to this is that Scala cannot unify types and type
constructors when searching for given instances. Hence our need to re‐type
Int as Id[Int] in the call to sumSquare at the opening of this section:
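sumSquare is the method defined at the start of the section:

sumSquare(3 : Id[Int], 4 : Id[Int])
// res: Id[Int] = 25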
This is an open question. It’s also kind of a trick question—the answer depends
on the semantics we’re looking for. Some points to ponder:
• In a number of cases, we want to collect all the errors, not just the first
one we encountered. A typical example is validating a web form. It’s a
far better experience to report all errors to the user when they submit
a form than to report them one at a time.
H.4 Abstracting
We can solve this using pure and raiseError. Note the use of type parameters
to these methods, to aid type inference.
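import cats.MonadError
import cats.syntax.applicative._      // for pure
import cats.syntax.applicativeError._ // for raiseError

def validateAdult[F[_]](age: Int)(implicit me: MonadError[F, Throwable]): F[Int] =
  if (age >= 18) age.pure[F]
  else new IllegalArgumentException("Age must be greater than or equal to 18")
    .raiseError[F, Int]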
The easiest way to fix this is to introduce a helper method called foldRightEval
. This is essentially our original method with every occurrence of B replaced
with Eval[B], and a call to Eval.defer to protect the recursive call:
import cats.Eval
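def foldRightEval[A, B](as: List[A], acc: Eval[B])(fn: (A, Eval[B]) => Eval[B]): Eval[B] =
  as match {
    case head :: tail =>
      Eval.defer(fn(head, foldRightEval(tail, acc)(fn)))
    case Nil =>
      acc
  }

Our redefined foldRight simply delegates to foldRightEval:

def foldRight[A, B](as: List[A], acc: B)(fn: (A, B) => B): B =
  foldRightEval(as, Eval.now(acc)) { (a, b) =>
    b.map(fn(a, _))
  }.value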
We’ll start by defining a type alias for Writer so we can use it with pure syntax:
import cats.data.Writer
import cats.instances.vector._
import cats.syntax.applicative._ // for pure
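type Logged[A] = Writer[Vector[String], A]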
42.pure[Logged]
// res11: WriterT[Id, Vector[String], Int] = WriterT(run = (Vector(), 42))
Vector("Message").tell
// res12: WriterT[Id, Vector[String], Unit] = WriterT(
// run = (Vector("Message"), ())
// )
Finally, we’ll import the Semigroup instance for Vector. We need this to map and
flatMap over Logged:
41.pure[Logged].map(_ + 1)
// res13: WriterT[Id, Vector[String], Int] = WriterT(run = (Vector(), 42))
When we call factorial, we now have to run the return value to extract the
log and our factorial:
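Here factorial is the instrumented definition from the exercise:

val (log, result) = factorial(5).run
// log: Vector[String] = Vector("fact 0 1", "fact 1 1", "fact 2 2", "fact 3 6", "fact 4 24", "fact 5 120")
// result: Int = 120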
Await.result(Future.sequence(Vector(
Future(factorial(5)),
Future(factorial(5))
)).map(_.map(_.written)), 5.seconds)
// res: Vector[cats.Id[Vector[String]]] = Vector(
//   Vector(fact 0 1, fact 1 1, fact 2 2, fact 3 6, fact 4 24, fact 5 120),
//   Vector(fact 0 1, fact 1 1, fact 2 2, fact 3 6, fact 4 24, fact 5 120)
// )
Our type alias fixes the Db type but leaves the result type flexible:
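import cats.data.Reader

type DbReader[A] = Reader[Db, A]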
Remember: the idea is to leave injecting the configuration until last. This
means setting up functions that accept the config as a parameter and check it
against the concrete user info we have been given:
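def findUsername(userId: Int): DbReader[Option[String]] =
  Reader(db => db.usernames.get(userId))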
def checkPassword(
username: String,
password: String): DbReader[Boolean] =
Reader(db => db.passwords.get(username).contains(password))
def checkLogin(
userId: Int,
password: String): DbReader[Boolean] =
for {
username <- findUsername(userId)
passwordOk <- username.map { username =>
checkPassword(username, password)
}.getOrElse {
false.pure[DbReader]
}
} yield passwordOk
The stack operation required is different for operators and operands. For
clarity we’ll implement evalOne in terms of two helper functions, one for each
case:
Let’s look at operand first. All we have to do is push a number onto the stack.
We also return the operand as an intermediate result:
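Here CalcState is the alias from the exercise, type CalcState[A] = State[List[Int], A]:

def operand(num: Int): CalcState[Int] =
  State[List[Int], Int] { stack =>
    (num :: stack, num)
  }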
The operator function is a little more complex. We have to pop two operands off the stack (the second operand is at the top of the stack) and push the result in their place. The code can fail if the stack doesn't have enough operands on it, but the exercise description allows us to throw an exception in this case:

def operator(func: (Int, Int) => Int): CalcState[Int] =
  State[List[Int], Int] {
    case b :: a :: tail =>
      val ans = func(a, b)
      (ans :: tail, ans)

    case _ =>
      sys.error("Fail!")
  }
We’ve done all the hard work now. All we need to do is split the input into
terms and call runA and value to unpack the result:
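Here evalAll is the method from the previous part of the exercise:

def evalInput(input: String): Int =
  evalAll(input.split(" ").toList).runA(Nil).value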
evalInput("1 2 + 3 4 + *")
// res15: Int = 21
The code for flatMap is similar to the code for map. Again, we recurse down
the structure and use the results from func to build a new Tree.
The code for tailRecM is fairly complex regardless of whether we make it tail‐
recursive or not.
import cats.Monad
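implicit val treeMonad: Monad[Tree] = new Monad[Tree] {
  def pure[A](value: A): Tree[A] =
    Leaf(value)

  def flatMap[A, B](tree: Tree[A])(func: A => Tree[B]): Tree[B] =
    tree match {
      case Branch(left, right) =>
        Branch(flatMap(left)(func), flatMap(right)(func))
      case Leaf(value) =>
        func(value)
    }

  def tailRecM[A, B](a: A)(func: A => Tree[Either[A, B]]): Tree[B] =
    flatMap(func(a)) {
      case Left(value)  => tailRecM(value)(func)
      case Right(value) => Leaf(value)
    }
}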
The solution above is perfectly fine for this exercise. Its only downside is that
Cats cannot make guarantees about stack safety.
import cats.Monad
import scala.annotation.tailrec

implicit val treeMonad: Monad[Tree] = new Monad[Tree] {
  def pure[A](value: A): Tree[A] =
    Leaf(value)

  def flatMap[A, B](tree: Tree[A])(func: A => Tree[B]): Tree[B] =
    tree match {
      case Branch(left, right) =>
        Branch(flatMap(left)(func), flatMap(right)(func))
      case Leaf(value) =>
        func(value)
    }

  def tailRecM[A, B](arg: A)(func: A => Tree[Either[A, B]]): Tree[B] = {
    @tailrec
    def loop(
        open: List[Tree[Either[A, B]]],
        closed: List[Option[Tree[B]]]): List[Tree[B]] =
      open match {
        case Branch(l, r) :: next =>
          loop(l :: r :: next, None :: closed)
        case Leaf(Left(value)) :: next =>
          loop(func(value) :: next, closed)
        case Leaf(Right(value)) :: next =>
          loop(next, Some(pure(value)) :: closed)
        case Nil =>
          closed.foldLeft(Nil: List[Tree[B]]) { (acc, maybeTree) =>
            maybeTree.map(_ :: acc).getOrElse {
              acc match {
                case left :: right :: tail => Branch(left, right) :: tail
                case _                     => sys.error("unreachable")
              }
            }
          }
      }

    loop(List(func(arg)), Nil).head
  }
}
branch(leaf(100), leaf(200)).
flatMap(x => branch(leaf(x - 1), leaf(x + 1)))
// res5: Tree[Int] = Branch(
//   left = Branch(left = Leaf(value = 99), right = Leaf(value = 101)),
//   right = Branch(left = Leaf(value = 199), right = Leaf(value = 201))
// )
for {
a <- branch(leaf(100), leaf(200))
b <- branch(leaf(a - 10), leaf(a + 10))
c <- branch(leaf(b - 1), leaf(b + 1))
} yield c
// res6: Tree[Int] = Branch(
//   left = Branch(
//     left = Branch(left = Leaf(value = 89), right = Leaf(value = 91)),
//     right = Branch(left = Leaf(value = 109), right = Leaf(value = 111))
//   ),
//   right = Branch(
//     left = Branch(left = Leaf(value = 189), right = Leaf(value = 191)),
//     right = Branch(left = Leaf(value = 209), right = Leaf(value = 211))
//   )
// )
The monad for Option provides fail-fast semantics. The monad for List provides concatenation semantics. What are the semantics of flatMap for a binary tree? Every node in the tree has the potential to be replaced with a whole subtree produced by the function we supply, giving a kind of "growing" or "feathering" behaviour, analogous to list concatenation along two axes.
import cats.data.EitherT
import scala.concurrent.Future
We request the power level from each ally and use map and flatMap to combine
the results:
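A sketch, using the powerLevels data from the exercise and assuming an implicit ExecutionContext:

import scala.concurrent.ExecutionContext.Implicits.global

val powerLevels = Map("Jazz" -> 6, "Bumblebee" -> 8, "Hot Rod" -> 10)

type Response[A] = EitherT[Future, String, A]

def getPowerLevel(ally: String): Response[Int] =
  powerLevels.get(ally) match {
    case Some(avg) => EitherT.right(Future(avg))
    case None      => EitherT.left(Future(s"$ally unreachable"))
  }

def canSpecialMove(ally1: String, ally2: String): Response[Boolean] =
  for {
    power1 <- getPowerLevel(ally1)
    power2 <- getPowerLevel(ally2)
  } yield (power1 + power2) > 15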
We use the value method to unpack the monad stack and Await and fold to
unpack the Future and Either:
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
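def tacticalReport(ally1: String, ally2: String): String = {
  val stack = canSpecialMove(ally1, ally2).value

  Await.result(stack, 1.second) match {
    case Left(msg)    => s"Comms error: $msg"
    case Right(true)  => s"$ally1 and $ally2 are ready to roll out!"
    case Right(false) => s"$ally1 and $ally2 need a recharge."
  }
}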
This exercise is checking that you understood the definition of product in terms
of flatMap and map.
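import cats.Monad
import cats.syntax.flatMap._ // for flatMap
import cats.syntax.functor._ // for map

def product[F[_]: Monad, A, B](fa: F[A], fb: F[B]): F[(A, B)] =
  fa.flatMap(a => fb.map(b => (a, b)))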
The semantics of flatMap are what give rise to the behaviour for List and
Either:
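product(List(1, 2), List(3, 4))
// res: List[(Int, Int)] = List((1, 3), (1, 4), (2, 3), (2, 4))

type ErrorOr[A] = Either[Vector[String], A]

product[ErrorOr, Int, Int](Left(Vector("Error 1")), Left(Vector("Error 2")))
// res: ErrorOr[(Int, Int)] = Left(value = Vector("Error 1"))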
List does have a Parallel instance, and it zips the Lists instead of creating the cartesian product.
import cats.instances.list._
Folding right to left copies the list, leaving the order intact:
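List(1, 2, 3).foldRight(List.empty[Int])(_ :: _)
// res: List[Int] = List(1, 2, 3)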
Note that we have to carefully specify the type of the accumulator to avoid a
type error. We use List.empty[Int] to avoid inferring the accumulator type as
Nil.type or List[Nothing]:
List(1, 2, 3).foldRight(Nil)(_ :: _)
// error:
// Found: List[Int]
// Required: scala.collection.immutable.Nil.type
// List(1, 2, 3).foldRight(Nil)(_ :: _)
// ^^^^^^
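Here are sketches of map and filter written in terms of foldRight:

def map[A, B](list: List[A])(func: A => B): List[B] =
  list.foldRight(List.empty[B]) { (item, accum) =>
    func(item) :: accum
  }

def filter[A](list: List[A])(func: A => Boolean): List[A] =
  list.foldRight(List.empty[A]) { (item, accum) =>
    if (func(item)) item :: accum else accum
  }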
map(List(1, 2, 3))(_ * 2)
// res9: List[Int] = List(2, 4, 6)
filter(List(1, 2, 3))(_ % 2 == 1)
// res11: List[Int] = List(1, 3)
import scala.math.Numeric
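def sumWithNumeric[A](list: List[A])(implicit numeric: Numeric[A]): A =
  list.foldRight(numeric.zero)(numeric.plus)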
sumWithNumeric(List(1, 2, 3))
// res12: Int = 6
and one using cats.Monoid (which is more appropriate to the content of this
book):
import cats.Monoid
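def sumWithMonoid[A](list: List[A])(implicit monoid: Monoid[A]): A =
  list.foldRight(monoid.empty)(monoid.combine)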
sumWithMonoid(List(1, 2, 3))
// res13: Int = 6
With three items in the input list, we end up with combinations of three Ints:
one from the first item, one from the second, and one from the third:
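For example, with a hypothetical input (listTraverse is the method from the section):

listTraverse(List(1, 2, 3))(n => Vector(n, n * 10))
// res: Vector[List[Int]] = Vector(
//   List(1, 2, 3), List(1, 2, 30), List(1, 20, 3), List(1, 20, 30),
//   List(10, 2, 3), List(10, 2, 30), List(10, 20, 3), List(10, 20, 30)
// )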
The arguments to listTraverse are of types List[Int] and Int => Option[Int], so the return type is Option[List[Int]]. Again, Option is a monad, so
the semigroupal combine function follows from flatMap. The semantics are
therefore fail‐fast error handling: if all inputs are even, we get a list of outputs.
Otherwise we get None:
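def process(inputs: List[Int]): Option[List[Int]] =
  listTraverse(inputs)(n => if (n % 2 == 0) Some(n) else None)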
process(List(2, 4, 6))
// res12: Option[List[Int]] = Some(value = List(2, 4, 6))
process(List(1, 2, 3))
// res13: Option[List[Int]] = None
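The Validated version accumulates errors instead:

import cats.data.Validated

type ErrorsOr[A] = Validated[List[String], A]

def process(inputs: List[Int]): ErrorsOr[List[Int]] =
  listTraverse(inputs) { n =>
    if (n % 2 == 0) Validated.valid(n)
    else Validated.invalid(List(s"$n is not even"))
  }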
process(List(2, 4, 6))
// res17: Validated[List[String], List[Int]] = Valid(a = List(2, 4, 6))
process(List(1, 2, 3))
// res18: Validated[List[String], List[Int]] = Invalid(
// e = List("1 is not even", "3 is not even")
// )
L.1 Torque
Defining Force, Torque, and the unit types is just repeating the pattern we saw
in the example code.
trait Newtons
trait NewtonMetres
To define the * method on Force we need constraints that Force's Unit type is Newtons, and Length's Unit type is Metres. These are both type equalities, so we can express them with =:=.
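A sketch of Force and Torque, assuming the Length and Metres definitions from the chapter's example code:

final case class Force[U](value: Double) {
  def *[L](length: Length[L])(using U =:= Newtons, L =:= Metres): Torque[NewtonMetres] =
    Torque(value * length.value)
}

final case class Torque[U](value: Double)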
Here’s how I implemented it. The structure is very similar to the original
implementation, but where we factored the state into type parameters I also
factored the implementation into types. Notice how we use Head and Body
to accumulate the set of tags that make up the head and body respectively.
We still need to use indexed codata in some places, but we can avoid it in others. For example, the head method simply requires a function of type Head[WithoutTitle] => Head[WithTitle].
object Html {
  val empty: Html[NeedsHead] = Html(Head.empty, Body.empty)
}
As always, we should show that it works. Here's the output from the motivating example.
Html.empty
.head(_.title("Our Amazing Webpage"))
.body(_.h1("Where Amazing Happens").p("Right here"))
.toString()
// res9: String = """
// <html>
// <head>
// <title>Our Amazing Webpage</title>
// </head>
// <body>
// <h1>Where Amazing Happens</h1>
// <p>Right here</p>
// </body>
// </html>"""
L.3 Commutativity
// An instance exists if A * B = C
trait Multiply[A, B, C]

object Multiply {
  given Multiply[Metres, Newtons, NewtonMetres] =
    new Multiply[Metres, Newtons, NewtonMetres] {}

  // A * B == B * A
  given commutative[A, B, C](using Multiply[A, B, C]): Multiply[B, A, C] =
    new Multiply[B, A, C] {}
}
Force[Newtons](3) * Length[Metres](4)
// res15: Torque[NewtonMetres] = Torque(value = 12.0)
import cats.Id
trait UptimeClient[F[_]] {
def getUptime(hostname: String): F[Int]
}
Note that, because Id[A] is just a simple alias for A, we don’t need to refer to
the type in TestUptimeClient as Id[Int]—we can simply write Int instead:
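class TestUptimeClient(hosts: Map[String, Int]) extends UptimeClient[Id] {
  def getUptime(hostname: String): Int =
    hosts.getOrElse(hostname, 0)
}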
import cats.Applicative
import cats.syntax.functor._ // for map
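import cats.syntax.traverse._ // for traverse

class UptimeService[F[_]: Applicative](client: UptimeClient[F]) {
  def getTotalUptime(hostnames: List[String]): F[Int] =
    hostnames.traverse(client.getUptime).map(_.sum)
}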
import cats.Monoid
We have to modify the type signature to accept a Monoid for B. With that
change we can use the Monoid empty and |+| syntax as described in Section
7.3.3:
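import cats.syntax.semigroup._ // for |+|

def foldMap[A, B: Monoid](values: Vector[A])(func: A => B): B =
  values.map(func).foldLeft(Monoid[B].empty)(_ |+| _)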
import cats.Monoid
import cats.syntax.semigroup._ // for |+|
Here is an annotated solution that splits out each map and fold into a separate
line of code:
def parallelFoldMap[A, B: Monoid](values: Vector[A])(func: A => B): Future[B] = {
  // Calculate the number of items to pass to each CPU:
  val numCores  = Runtime.getRuntime.availableProcessors
  val groupSize = (1.0 * values.size / numCores).ceil.toInt

  // Create one group for each CPU:
  val groups: Iterator[Vector[A]] =
    values.grouped(groupSize)

  // Create a future to foldMap each group:
  val futures: Iterator[Future[B]] =
    groups map { group =>
      Future {
        group.foldLeft(Monoid[B].empty)(_ |+| func(_))
      }
    }

  // foldMap over the groups to calculate a final result:
  Future.sequence(futures) map { iterable =>
    iterable.foldLeft(Monoid[B].empty)(_ |+| _)
  }
}

val result: Future[Int] =
  parallelFoldMap((1 to 1000000).toVector)(identity)
Await.result(result, 1.second)
// res14: Int = 1784293664
We can re‐use our definition of foldMap for a more concise solution. Note
that the local maps and reduces in steps 3 and 4 of Figure 18.4 are actually
equivalent to a single call to foldMap, shortening the entire algorithm as follows:
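def parallelFoldMap[A, B: Monoid](values: Vector[A])(func: A => B): Future[B] = {
  val numCores  = Runtime.getRuntime.availableProcessors
  val groupSize = (1.0 * values.size / numCores).ceil.toInt

  val groups  = values.grouped(groupSize)
  val futures = groups.map(group => Future(foldMap(group)(func)))

  Future.sequence(futures) map { iterable =>
    iterable.foldLeft(Monoid[B].empty)(_ |+| _)
  }
}

val result: Future[Int] =
  parallelFoldMap((1 to 1000000).toVector)(identity)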
Await.result(result, 1.second)
// res16: Int = 1784293664
import cats.Monoid
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import cats.syntax.all.* // for traverse, foldMap, and combineAll

def parallelFoldMap[A, B: Monoid](values: Vector[A])(func: A => B): Future[B] = {
  val numCores  = Runtime.getRuntime.availableProcessors
  val groupSize = (1.0 * values.size / numCores).ceil.toInt

  values
    .grouped(groupSize)
    .toVector
    .traverse(group => Future(group.toVector.foldMap(func)))
    .map(_.combineAll)
}
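val future: Future[Int] =
  parallelFoldMap((1 to 1000).toVector)(_ * 1000)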
Await.result(future, 1.second)
// res18: Int = 500500000
We need a Semigroup for E. Then we can combine values of E using the combine
method or its associated |+| syntax:
import cats.Semigroup
import cats.instances.list._ // for Semigroup
import cats.syntax.semigroup._ // for |+|
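val semigroup = Semigroup[List[String]]

// Combination using the combine method:
semigroup.combine(List("Badness"), List("More badness"))
// res: List[String] = List("Badness", "More badness")

// Combination using Semigroup syntax:
List("Oh noes") |+| List("Fail happened")
// res: List[String] = List("Oh noes", "Fail happened")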
Note we don’t need a full Monoid because we don’t need the identity element.
We want to report all the errors we can, so we should prefer not short‐
circuiting whenever possible.
In the case of the and method, the two checks we’re combining are
independent of one another. We can always run both rules and combine any
errors we see.
In the first we represent checks as functions. The Check data type becomes a
simple wrapper for a function that provides our library of combinator methods.
For the sake of disambiguation, we’ll call this implementation CheckF:
import cats.Semigroup
import cats.syntax.either._ // for asLeft and asRight
import cats.syntax.semigroup._ // for |+|
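final case class CheckF[E, A](func: A => Either[E, A]) {
  def apply(a: A): Either[E, A] =
    func(a)

  def and(that: CheckF[E, A])(implicit s: Semigroup[E]): CheckF[E, A] =
    CheckF { a =>
      (this(a), that(a)) match {
        case (Left(e1), Left(e2)) => (e1 |+| e2).asLeft
        case (Left(e), Right(_))  => e.asLeft
        case (Right(_), Left(e))  => e.asLeft
        case (Right(_), Right(_)) => a.asRight
      }
    }
}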
Let’s test the behaviour we get. First we’ll setup some checks:
check(5)
// res5: Either[List[String], Int] = Left(value = List("Must be < -2"))
check(0)
// res6: Either[List[String], Int] = Left(
// value = List("Must be > 2", "Must be < -2")
// )
What happens if we try to create checks that fail with a type that we can’t
accumulate? For example, there is no Semigroup instance for Nothing. What
happens if we create instances of CheckF[Nothing, A]?
We can create checks just fine, but when we come to combine them we get the error we might expect:
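val a: CheckF[Nothing, Int] =
  CheckF(v => v.asRight)

val b: CheckF[Nothing, Int] =
  CheckF(v => v.asRight)

a and b
// error: no Semigroup instance for Nothing can be found

For comparison, here is the ADT implementation the discussion below refers to:

sealed trait Check[E, A] {
  import Check._

  def and(that: Check[E, A]): Check[E, A] =
    And(this, that)

  def apply(a: A)(implicit s: Semigroup[E]): Either[E, A] =
    this match {
      case Pure(func) => func(a)
      case And(left, right) =>
        (left(a), right(a)) match {
          case (Left(e1), Left(e2)) => (e1 |+| e2).asLeft
          case (Left(e), Right(_))  => e.asLeft
          case (Right(_), Left(e))  => e.asLeft
          case (Right(_), Right(_)) => a.asRight
        }
    }
}

object Check {
  final case class And[E, A](
    left: Check[E, A],
    right: Check[E, A]) extends Check[E, A]

  final case class Pure[E, A](
    func: A => Either[E, A]) extends Check[E, A]
}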
While the ADT implementation is more verbose than the function wrapper implementation, it has the advantage of cleanly separating the structure of the computation (the ADT instance we create) from the process that gives it meaning (the apply method). From here we have a number of options: we can inspect and refactor checks after creation, move the apply "interpreter" out into its own module, or implement alternative interpreters that render checks to other formats.
Because of its flexibility, we will use the ADT implementation for the rest of
this case study.
The implementation of apply for And is using the pattern for applicative
functors. Either has an Applicative instance, but it doesn’t have the semantics
we want. It fails fast instead of accumulating errors.
import cats.Semigroup
import cats.data.Validated
import cats.syntax.apply._ // for mapN
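sealed trait Check[E, A] {
  import Check._

  def and(that: Check[E, A]): Check[E, A] =
    And(this, that)

  def apply(a: A)(implicit s: Semigroup[E]): Validated[E, A] =
    this match {
      case Pure(func) => func(a)
      case And(left, right) =>
        (left(a), right(a)).mapN((_, _) => a)
    }
}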
This reuses the same technique for and. We have to do a bit more work in
the apply method. Note that it’s OK to short‐circuit in this case because the
choice of rules is implicit in the semantics of “or”.
import cats.Semigroup
import cats.data.Validated
import cats.syntax.semigroup._ // for |+|
import cats.syntax.apply._ // for mapN
import cats.data.Validated._ // for Valid and Invalid
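sealed trait Check[E, A] {
  import Check._

  def and(that: Check[E, A]): Check[E, A] =
    And(this, that)

  def or(that: Check[E, A]): Check[E, A] =
    Or(this, that)

  def apply(a: A)(implicit s: Semigroup[E]): Validated[E, A] =
    this match {
      case Pure(func) => func(a)
      case And(left, right) =>
        (left(a), right(a)).mapN((_, _) => a)
      case Or(left, right) =>
        left(a) match {
          case Valid(_) => Valid(a)
          case Invalid(e1) =>
            right(a) match {
              case Valid(_)    => Valid(a)
              case Invalid(e2) => Invalid(e1 |+| e2)
            }
        }
    }
}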
object Check {
  final case class And[E, A](
    left: Check[E, A],
    right: Check[E, A]) extends Check[E, A]

  final case class Or[E, A](
    left: Check[E, A],
    right: Check[E, A]) extends Check[E, A]

  final case class Pure[E, A](
    func: A => Validated[E, A]) extends Check[E, A]
}
O.6 Checks
If you follow the same strategy as Predicate you should be able to create code
similar to the below:
import cats.Semigroup
import cats.data.Validated
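sealed trait Check[E, A, B] {
  import Check._

  def apply(in: A)(implicit s: Semigroup[E]): Validated[E, B]

  def map[C](f: B => C): Check[E, A, C] =
    Map[E, A, B, C](this, f)
}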
object Check {
  final case class Map[E, A, B, C](
    check: Check[E, A, B],
    func: B => C) extends Check[E, A, C] {

    def apply(in: A)(implicit s: Semigroup[E]): Validated[E, C] =
      check(in).map(func)
  }

  // other cases...
}
It’s the same implementation strategy as before with one wrinkle: Validated
doesn’t have a flatMap method. To implement flatMap we must momentarily
switch to Either and then switch back to Validated. The withEither method
on Validated does exactly this. From here we can just follow the types to
implement apply.
import cats.Semigroup
import cats.data.Validated
sealed trait Check[E, A, B] {
  import Check._

  def apply(in: A)(implicit s: Semigroup[E]): Validated[E, B]

  def flatMap[C](f: B => Check[E, A, C]): Check[E, A, C] =
    FlatMap[E, A, B, C](this, f)

  // other methods...
}

object Check {
  final case class FlatMap[E, A, B, C](
    check: Check[E, A, B],
    func: B => Check[E, A, C]) extends Check[E, A, C] {

    def apply(a: A)(implicit s: Semigroup[E]): Validated[E, C] =
      check(a).withEither(_.flatMap(b => func(b)(a).toEither))
  }

  // other cases...
}
O.9 Recap
Here’s our final implementaton, including some tidying and repackaging of the
code:
import cats.Semigroup
import cats.data.Validated
import cats.data.Validated._ // for Valid and Invalid
import cats.syntax.semigroup._ // for |+|
import cats.syntax.apply._ // for mapN
import cats.syntax.validated._ // for valid and invalid
sealed trait Predicate[E, A] {
  import Predicate._
  import Validated._

  def and(that: Predicate[E, A]): Predicate[E, A] =
    And(this, that)

  def or(that: Predicate[E, A]): Predicate[E, A] =
    Or(this, that)

  def apply(a: A)(implicit s: Semigroup[E]): Validated[E, A] =
    this match {
      case Pure(func) => func(a)
      case And(left, right) =>
        (left(a), right(a)).mapN((_, _) => a)
      case Or(left, right) =>
        left(a) match {
          case Valid(_) => Valid(a)
          case Invalid(e1) =>
            right(a) match {
              case Valid(_)    => Valid(a)
              case Invalid(e2) => Invalid(e1 |+| e2)
            }
        }
    }
}

object Predicate {
  final case class And[E, A](
    left: Predicate[E, A],
    right: Predicate[E, A]) extends Predicate[E, A]

  final case class Or[E, A](
    left: Predicate[E, A],
    right: Predicate[E, A]) extends Predicate[E, A]

  final case class Pure[E, A](
    func: A => Validated[E, A]) extends Predicate[E, A]

  def apply[E, A](f: A => Validated[E, A]): Predicate[E, A] =
    Pure(f)

  def lift[E, A](err: E, fn: A => Boolean): Predicate[E, A] =
    Pure(a => if (fn(a)) a.valid else err.invalid)
}
import cats.Semigroup
import cats.data.Validated
import cats.syntax.apply._ // for mapN
import cats.syntax.validated._ // for valid and invalid
sealed trait Check[E, A, B] {
  import Check._

  def apply(a: A)(implicit s: Semigroup[E]): Validated[E, B]

  def map[C](f: B => C): Check[E, A, C] =
    Map[E, A, B, C](this, f)

  def flatMap[C](f: B => Check[E, A, C]): Check[E, A, C] =
    FlatMap[E, A, B, C](this, f)

  def andThen[C](next: Check[E, B, C]): Check[E, A, C] =
    AndThen[E, A, B, C](this, next)
}

object Check {
  final case class Map[E, A, B, C](
    check: Check[E, A, B],
    func: B => C) extends Check[E, A, C] {

    def apply(a: A)(implicit s: Semigroup[E]): Validated[E, C] =
      check(a) map func
  }

  final case class FlatMap[E, A, B, C](
    check: Check[E, A, B],
    func: B => Check[E, A, C]) extends Check[E, A, C] {

    def apply(a: A)(implicit s: Semigroup[E]): Validated[E, C] =
      check(a).withEither(_.flatMap(b => func(b)(a).toEither))
  }

  final case class AndThen[E, A, B, C](
    check: Check[E, A, B],
    next: Check[E, B, C]) extends Check[E, A, C] {

    def apply(a: A)(implicit s: Semigroup[E]): Validated[E, C] =
      check(a).withEither(_.flatMap(b => next(b).toEither))
  }

  final case class Pure[E, A, B](
    func: A => Validated[E, B]) extends Check[E, A, B] {

    def apply(a: A)(implicit s: Semigroup[E]): Validated[E, B] =
      func(a)
  }

  final case class PurePredicate[E, A](
    pred: Predicate[E, A]) extends Check[E, A, A] {

    def apply(a: A)(implicit s: Semigroup[E]): Validated[E, A] =
      pred(a)
  }

  def apply[E, A](pred: Predicate[E, A]): Check[E, A, A] =
    PurePredicate(pred)

  def apply[E, A, B](func: A => Validated[E, B]): Check[E, A, B] =
    Pure(func)
}
Here’s our reference solution. Implementing this required more thought than
we expected. Switching between Check and Predicate at appropriate places
felt a bit like guesswork till we got the rule into our heads that Predicate
doesn’t transform its input. With this rule in mind things went fairly smoothly.
In later sections we’ll make some changes that make the library easier to use.
val splitEmail: Check[Errors, String, (String, String)] =
  Check(_.split('@') match {
    case Array(name, domain) =>
      (name, domain).validNel[String]
    case _ =>
      "Must contain a single @ character".
        invalidNel[(String, String)]
  })
Finally, here’s a check for a User that depends on checkUsername and checkEmail:
def createUser(
username: String,
email: String): Validated[Errors, User] =
(checkUsername(username), checkEmail(email)).mapN(User.apply)
createUser("Noel", "[email protected]")
// res5: Validated[Errors, User] = Valid(
// a = User(username = "Noel", email = "[email protected]")
// )
createUser("", "[email protected]@io")
// res6: Validated[Errors, User] = Invalid(
// e = NonEmptyList(
// head = "Must be longer than 3 characters",
// tail = List("Must contain a single @ character")
// )
// )
One distinct disadvantage of our example is that it doesn’t tell us where the
errors came from. We can either achieve that through judicious manipulation
of error messages, or we can modify our library to track error locations as well
as messages. Tracking error locations is outside the scope of this case study,
so we’ll leave this as an exercise to the reader.
O.11 Kleislis
Here’s an abbreviated definition of run. Like apply, the method must accept
an implicit Semigroup:
import cats.Semigroup
import cats.data.Validated

sealed trait Predicate[E, A] {
  def run(implicit s: Semigroup[E]): A => Either[E, A] =
    (a: A) => this(a).toEither

  // other methods...
}
Here is the preamble we suggested in the main text of the case study:
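import cats.data.{Kleisli, NonEmptyList}

type Errors = NonEmptyList[String]

def error(s: String): NonEmptyList[String] =
  NonEmptyList(s, Nil)

type Result[A] = Either[Errors, A]

type Check[A, B] = Kleisli[Result, A, B]

// Create a check from a function:
def check[A, B](func: A => Result[B]): Check[A, B] =
  Kleisli(func)

// Create a check from a Predicate:
def checkPred[A](pred: Predicate[Errors, A]): Check[A, A] =
  Kleisli[Result, A, A](pred.run)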
Our username and email examples are slightly different in that we make use
of check() and checkPred() in different situations:
val checkUsername: Check[String, String] =
  checkPred(longerThan(3) and alphanumeric) // predicates from earlier in the case study

val splitEmail: Check[String, (String, String)] =
  check(_.split('@') match {
    case Array(name, domain) =>
      Right((name, domain))
    case _ =>
      Left(error("Must contain a single @ character"))
  })
Finally, we can see that our createUser example works as expected using
Kleisli:
def createUser(
username: String,
email: String): Either[Errors, User] = (
checkUsername.run(username),
checkEmail.run(email)
).mapN(User.apply)
createUser("Noel", "[email protected]")
// res2: Either[Errors, User] = Right(
// value = User(username = "Noel", email = "[email protected]")
// )
createUser("", "[email protected]@io")
// res3: Either[Errors, User] = Left(
// value = NonEmptyList(head = "Must be longer than 3 characters",
tail = List())
// )
Hopefully the description above was clear enough that you can get to an
implementation like the one below.
Implementing the instance for Set provides good practice with implicit
methods.
import cats.kernel.CommutativeMonoid

object wrapper {
  trait BoundedSemiLattice[A] extends CommutativeMonoid[A] {
    def combine(a1: A, a2: A): A
    def empty: A
  }

  object BoundedSemiLattice {
    implicit val intInstance: BoundedSemiLattice[Int] =
      new BoundedSemiLattice[Int] {
        def combine(a1: Int, a2: Int): Int =
          a1 max a2

        val empty: Int = 0
      }

    implicit def setInstance[A]: BoundedSemiLattice[Set[A]] =
      new BoundedSemiLattice[Set[A]] {
        def combine(a1: Set[A], a2: Set[A]): Set[A] =
          a1 union a2

        val empty: Set[A] =
          Set.empty[A]
      }
  }
}
Here’s a working implementation. Note the use of |+| in the definition of merge,
which significantly simplifies the process of merging and maximising counters:
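import cats.kernel.CommutativeMonoid
import cats.syntax.semigroup._ // for |+|
import cats.syntax.foldable._  // for combineAll

final case class GCounter[A](counters: Map[String, A]) {
  def increment(machine: String, amount: A)(implicit m: CommutativeMonoid[A]): GCounter[A] = {
    val value = amount |+| counters.getOrElse(machine, m.empty)
    GCounter(counters + (machine -> value))
  }

  def merge(that: GCounter[A])(implicit b: BoundedSemiLattice[A]): GCounter[A] =
    GCounter(this.counters |+| that.counters)

  def total(implicit m: CommutativeMonoid[A]): A =
    this.counters.values.toList.combineAll
}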
Here’s the complete code for the instance. Write this definition in the
companion object for GCounter to place it in global implicit scope:
implicit def mapGCounterInstance[K, V]: GCounter[Map, K, V] =
  new GCounter[Map, K, V] {
    def increment(map: Map[K, V])(key: K, value: V)
        (implicit m: CommutativeMonoid[V]): Map[K, V] = {
      val total = map.getOrElse(key, m.empty) |+| value
      map + (key -> total)
    }

    def merge(f1: Map[K, V], f2: Map[K, V])
        (implicit b: BoundedSemiLattice[V]): Map[K, V] =
      f1 |+| f2

    def total(f: Map[K, V])(implicit m: CommutativeMonoid[V]): V =
      f.values.toList.combineAll
  }
Here’s the code for the instance. Write the definition in the companion object
for KeyValueStore to place it in global implicit scope:
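Here KeyValueStore is the type class from the case study:

implicit val mapKeyValueStoreInstance: KeyValueStore[Map] =
  new KeyValueStore[Map] {
    def put[K, V](f: Map[K, V])(k: K, v: V): Map[K, V] =
      f + (k -> v)

    def get[K, V](f: Map[K, V])(k: K): Option[V] =
      f.get(k)

    override def getOrElse[K, V](f: Map[K, V])(k: K, default: V): V =
      f.getOrElse(k, default)

    def values[K, V](f: Map[K, V]): List[V] =
      f.values.toList
  }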
Acknowledgements
No book is an island. This book wouldn't exist without its predecessor, Scala
with Cats, and everyone involved in creating that book implicitly played some
part in this book’s creation. See below for that book’s acknowledgements, but
in particular I want to highlight my coauthor, Dave “Lord of Types” Pereira‐
Gurnell, without whom that book would not exist and hence neither would
this one. Thanks Dave!
Thanks also to Adam Rosien, who gave me low‐key encouragement and put
up with my bullshit. Also my wife and children, who put up with even more
of my bullshit, and gave me the space to finish this project. The members
of ScalaBridge London and attendees at various training courses acted as
experimental subjects for a lot of the material here. Thank you for being
willing test subjects; you greatly helped improve the content. Thanks to the
members of the PLT research group who inspired me directly back in the day,
and continue to provide inspiration from afar. Finally, thanks to the following
who sponsored my work or contributed with corrections and suggestions:
We’d like to thank our colleagues at Inner Product and Underscore, our friends
at Typelevel, and everyone who helped contribute to this book. Special thanks
to Jenny Clements for her fantastic artwork and Richard Dallaway for his proof
reading expertise. Here is an alphabetical list of contributors:
Backers
We’d also like to extend very special thanks to our backers—fine people who
helped fund the development of the book by buying a copy before we released
it as open source. This book wouldn’t exist without you:
Carette, J., Kiselyov, O., and Shan, C. 2009. Finally tagless, partially
evaluated: Tagless staged interpreters for simpler typed languages. Journal
of Functional Programming 19, 5, 509–543.
Downen, P., Sullivan, Z., Ariola, Z.M., and Peyton Jones, S. 2019. Codata
in action. European symposium on programming, Springer International
Publishing, Cham, 119–146.
Felleisen, M., Findler, R.B., Flatt, M., and Krishnamurthi, S. 2018. How to design
programs, second edition: An introduction to programming and computing.
The MIT Press.
Kiselyov, O. 2012. Typed tagless final interpreters. In: J. Gibbons, ed., Generic
and indexed programming: International spring school, SSGIP 2010, Oxford,
UK, March 22–26, 2010, revised lectures. Springer Berlin Heidelberg, Berlin,
Heidelberg, 130–174.
Křikava, F., Miller, H., and Vitek, J. 2019. Scala implicits are everywhere:
A large-scale study of the use of Scala implicits in the wild. Proc. ACM
Program. Lang. 3, OOPSLA.
Lewis, J.R., Launchbury, J., Meijer, E., and Shields, M.B. 2000. Implicit
parameters: Dynamic scoping with static types. Proceedings of the 27th
ACM SIGPLAN‐SIGACT symposium on principles of programming languages,
Association for Computing Machinery, 108–118.
Odersky, M., Blanvillain, O., Liu, F., Biboudis, A., Miller, H., and Stucki, S. 2017.
Simplicitly: Foundations and applications of implicit function types. Proc.
ACM Program. Lang. 2, POPL.
Oliveira, B.C.d.S. and Cook, W.R. 2012. Extensibility for the masses: Practical
extensibility with object algebras. Proceedings of the 26th european
conference on object‐oriented programming, Springer‐Verlag, 2–27.
Oliveira, B.C.d.S., Moors, A., and Odersky, M. 2010. Type classes as objects
and implicits. Proceedings of the ACM international conference on object
oriented programming systems languages and applications, Association for
Computing Machinery, 341–360.
Thibodeau, D., Cave, A., and Pientka, B. 2016. Indexed codata types. SIGPLAN
Not. 51, 9, 351–363.
Wadler, P. and Blott, S. 1989. How to make ad‐hoc polymorphism less ad hoc.
Proceedings of the 16th ACM SIGPLAN‐SIGACT symposium on principles of
programming languages, Association for Computing Machinery, 60–76.
Wadler, P., Taha, W., and MacQueen, D. 1998. How to add laziness to a strict
language without even being odd. SML’98, the SML workshop.