Haskell Made Easy

Contents

About
Chapter 1: Getting started with Haskell Language
Section 1.1: Getting started
Section 1.2: Hello, World!
Section 1.3: Factorial
Section 1.4: Fibonacci, Using Lazy Evaluation
Section 1.5: Primes
Section 1.6: Declaring Values
Chapter 2: Overloaded Literals
Section 2.1: Strings
Section 2.2: Floating Numeral
Section 2.3: Integer Numeral
Section 2.4: List Literals
Chapter 3: Foldable
Section 3.1: Definition of Foldable
Section 3.2: An instance of Foldable for a binary tree
Section 3.3: Counting the elements of a Foldable structure
Section 3.4: Folding a structure in reverse
Section 3.5: Flattening a Foldable structure into a list
Section 3.6: Performing a side-effect for each element of a Foldable
structure
Section 3.7: Flattening a Foldable structure into a Monoid
Section 3.8: Checking if a Foldable structure is empty
Chapter 4: Traversable
Section 4.1: Definition of Traversable
Section 4.2: Traversing a structure in reverse
Section 4.3: An instance of Traversable for a binary tree
Section 4.4: Traversable structures as shapes with contents
Section 4.5: Instantiating Functor and Foldable for a Traversable structure
Section 4.6: Transforming a Traversable structure with the aid of an
accumulating parameter
Section 4.7: Transposing a list of lists
Chapter 5: Lens
Section 5.1: Lenses for records
Section 5.2: Manipulating tuples with Lens
Section 5.3: Lens and Prism
Section 5.4: Stateful Lenses
Section 5.5: Lenses compose
Section 5.6: Writing a lens without Template Haskell
Section 5.7: Fields with makeFields
Section 5.8: Classy Lenses
Section 5.9: Traversals
Chapter 6: QuickCheck
Section 6.1: Declaring a property
Section 6.2: Randomly generating data for custom types
Section 6.3: Using implication (==>) to check properties with
preconditions
Section 6.4: Checking a single property
Section 6.5: Checking all the properties in a file
Section 6.6: Limiting the size of test data
Chapter 7: Common GHC Language Extensions
Section 7.1: RankNTypes
Section 7.2: OverloadedStrings
Section 7.3: BinaryLiterals
Section 7.4: ExistentialQuantification
Section 7.5: LambdaCase
Section 7.6: FunctionalDependencies
Section 7.7: FlexibleInstances
Section 7.8: GADTs
Section 7.9: TupleSections
Section 7.10: OverloadedLists
Section 7.11: MultiParamTypeClasses
Section 7.12: UnicodeSyntax
Section 7.13: PatternSynonyms
Section 7.14: ScopedTypeVariables
Section 7.15: RecordWildCards
Chapter 8: Free Monads
Section 8.1: Free monads split monadic computations into data structures
and interpreters
Section 8.2: The Freer monad
Section 8.3: How do foldFree and iterM work?
Section 8.4: Free Monads are like fixed points
Chapter 9: Type Classes
Section 9.1: Eq
Section 9.2: Monoid
Section 9.3: Ord
Section 9.4: Num
Section 9.5: Maybe and the Functor Class
Section 9.6: Type class inheritance: Ord type class
Chapter 10: IO
Section 10.1: Getting the 'a' "out of" 'IO a'
Section 10.2: IO defines your program's `main` action
Section 10.3: Checking for end-of-file conditions
Section 10.4: Reading all contents of standard input into a string
Section 10.5: Role and Purpose of IO
Section 10.6: Writing to stdout
Section 10.7: Reading words from an entire file
Section 10.8: Reading a line from standard input
Section 10.9: Reading from `stdin`
Section 10.10: Parsing and constructing an object from standard input
Section 10.11: Reading from file handles
Chapter 11: Record Syntax
Section 11.1: Basic Syntax
Section 11.2: Defining a data type with field labels
Section 11.3: RecordWildCards
Section 11.4: Copying Records while Changing Field Values
Section 11.5: Records with newtype
Chapter 12: Partial Application
Section 12.1: Sections
Section 12.2: Partially Applied Adding Function
Section 12.3: Returning a Partially Applied Function
Chapter 13: Monoid
Section 13.1: An instance of Monoid for lists
Section 13.2: Collapsing a list of Monoids into a single value
Section 13.3: Numeric Monoids
Section 13.4: An instance of Monoid for ()
Chapter 14: Category Theory
Section 14.1: Category theory as a system for organizing abstraction
Section 14.2: Haskell types as a category
Section 14.3: Definition of a Category
Section 14.4: Coproduct of types in Hask
Section 14.5: Product of types in Hask
Section 14.6: Haskell Applicative in terms of Category Theory
Chapter 15: Lists
Section 15.1: List basics
Section 15.2: Processing lists
Section 15.3: Ranges
Section 15.4: List Literals
Section 15.5: List Concatenation
Section 15.6: Accessing elements in lists
Section 15.7: Basic Functions on Lists
Section 15.8: Transforming with `map`
Section 15.9: Filtering with `filter`
Section 15.10: foldr
Section 15.11: Zipping and Unzipping Lists
Section 15.12: foldl
Chapter 16: Sorting Algorithms
Section 16.1: Insertion Sort
Section 16.2: Permutation Sort
Section 16.3: Merge Sort
Section 16.4: Quicksort
Section 16.5: Bubble sort
Section 16.6: Selection sort
Chapter 17: Type Families
Section 17.1: Datatype Families
Section 17.2: Type Synonym Families
Section 17.3: Injectivity
Chapter 18: Monads
Section 18.1: Definition of Monad
Section 18.2: No general way to extract value from a monadic
computation
Section 18.3: Monad as a Subclass of Applicative
Section 18.4: The Maybe monad
Section 18.5: IO monad
Section 18.6: List Monad
Section 18.7: do-notation
Chapter 19: Stack
Section 19.1: Profiling with Stack
Section 19.2: Structure
Section 19.3: Build and Run a Stack Project
Section 19.4: Viewing dependencies
Section 19.5: Stack install
Section 19.6: Installing Stack
Section 19.7: Creating a simple project
Section 19.8: Stackage Packages and changing the LTS (resolver) version
Chapter 20: Generalized Algebraic Data Types
Section 20.1: Basic Usage
Chapter 21: Recursion Schemes
Section 21.1: Fixed points
Section 21.2: Primitive recursion
Section 21.3: Primitive corecursion
Section 21.4: Folding up a structure one layer at a time
Section 21.5: Unfolding a structure one layer at a time
Section 21.6: Unfolding and then folding, fused
Chapter 22: Data.Text
Section 22.1: Text Literals
Section 22.2: Checking if a Text is a substring of another Text
Section 22.3: Stripping whitespace
Section 22.4: Indexing Text
Section 22.5: Splitting Text Values
Section 22.6: Encoding and Decoding Text
Chapter 23: Using GHCi
Section 23.1: Breakpoints with GHCi
Section 23.2: Quitting GHCi
Section 23.3: Reloading an already loaded file
Section 23.4: Starting GHCi
Section 23.5: Changing the GHCi default prompt
Section 23.6: The GHCi configuration file
Section 23.7: Loading a file
Section 23.8: Multi-line statements
Chapter 24: Strictness
Section 24.1: Bang Patterns
Section 24.2: Lazy patterns
Section 24.3: Normal forms
Section 24.4: Strict fields
Chapter 25: Syntax in Functions
Section 25.1: Pattern Matching
Section 25.2: Using where and guards
Section 25.3: Guards
Chapter 26: Functor
Section 26.1: Class Definition of Functor and Laws
Section 26.2: Replacing all elements of a Functor with a single value
Section 26.3: Common instances of Functor
Section 26.4: Deriving Functor
Section 26.5: Polynomial functors
Section 26.6: Functors in Category Theory
Chapter 27: Testing with Tasty
Section 27.1: SmallCheck, QuickCheck and HUnit
Chapter 28: Creating Custom Data Types
Section 28.1: Creating a data type with value constructor parameters
Section 28.2: Creating a data type with type parameters
Section 28.3: Creating a simple data type
Section 28.4: Custom data type with record parameters
Chapter 29: Reactive-banana
Section 29.1: Injecting external events into the library
Section 29.2: Event type
Section 29.3: Actuating EventNetworks
Section 29.4: Behavior type
Chapter 30: Optimization
Section 30.1: Compiling your Program for Profiling
Section 30.2: Cost Centers
Chapter 31: Concurrency
Section 31.1: Spawning Threads with `forkIO`
Section 31.2: Communicating between Threads with `MVar`
Section 31.3: Atomic Blocks with Software Transactional Memory
Chapter 32: Function composition
Section 32.1: Right-to-left composition
Section 32.2: Composition with binary function
Section 32.3: Left-to-right composition
Chapter 33: Databases
Section 33.1: Postgres
Chapter 34: Data.Aeson - JSON in Haskell
Section 34.1: Smart Encoding and Decoding using Generics
Section 34.2: A quick way to generate a Data.Aeson.Value
Section 34.3: Optional Fields
Chapter 35: Higher-order functions
Section 35.1: Basics of Higher Order Functions
Section 35.2: Lambda Expressions
Section 35.3: Currying
Chapter 36: Containers - Data.Map
Section 36.1: Importing the Module
Section 36.2: Monoid instance
Section 36.3: Constructing
Section 36.4: Checking If Empty
Section 36.5: Finding Values
Section 36.6: Inserting Elements
Section 36.7: Deleting Elements
Chapter 37: Fixity declarations
Section 37.1: Associativity
Section 37.2: Binding precedence
Section 37.3: Example declarations
Chapter 38: Web Development
Section 38.1: Servant
Section 38.2: Yesod
Chapter 39: Vectors
Section 39.1: The Data.Vector Module
Section 39.2: Filtering a Vector
Section 39.3: Mapping (`map`) and Reducing (`fold`) a Vector
Section 39.4: Working on Multiple Vectors
Chapter 40: Cabal
Section 40.1: Working with sandboxes
Section 40.2: Install packages
Chapter 41: Type algebra
Section 41.1: Addition and multiplication
Section 41.2: Functions
Section 41.3: Natural numbers in type algebra
Section 41.4: Recursive types
Section 41.5: Derivatives
Chapter 42: Arrows
Section 42.1: Function compositions with multiple channels
Chapter 43: Typed holes
Section 43.1: Syntax of typed holes
Section 43.2: Semantics of typed holes
Section 43.3: Using typed holes to define a class instance
Chapter 44: Rewrite rules (GHC)
Section 44.1: Using rewrite rules on overloaded functions
Chapter 45: Date and Time
Section 45.1: Finding Today's Date
Section 45.2: Adding, Subtracting and Comparing Days
Chapter 46: List Comprehensions
Section 46.1: Basic List Comprehensions
Section 46.2: Do Notation
Section 46.3: Patterns in Generator Expressions
Section 46.4: Guards
Section 46.5: Parallel Comprehensions
Section 46.6: Local Bindings
Section 46.7: Nested Generators
Chapter 47: Streaming IO
Section 47.1: Streaming IO
Chapter 48: Google Protocol Buers
Section 48.1: Creating, building and using a simple .proto file
Chapter 49: Template Haskell & QuasiQuotes
Section 49.1: Syntax of Template Haskell and Quasiquotes
Section 49.2: The Q type
Section 49.3: An n-arity curry
Chapter 50: Phantom types
Section 50.1: Use Case for Phantom Types: Currencies
Chapter 51: Modules
Section 51.1: Defining Your Own Module
Section 51.2: Exporting Constructors
Section 51.3: Importing Specific Members of a Module
Section 51.4: Hiding Imports
Section 51.5: Qualifying Imports
Section 51.6: Hierarchical module names
Chapter 52: Tuples (Pairs, Triples, ...)
Section 52.1: Extract tuple components
Section 52.2: Strictness of matching a tuple
Section 52.3: Construct tuple values
Section 52.4: Write tuple types
Section 52.5: Pattern Match on Tuples
Section 52.6: Apply a binary function to a tuple (uncurrying)
Section 52.7: Apply a tuple function to two arguments (currying)
Section 52.8: Swap pair components
Chapter 53: Graphics with Gloss
Section 53.1: Installing Gloss
Section 53.2: Getting something on the screen
Chapter 54: State Monad
Section 54.1: Numbering the nodes of a tree with a counter
Chapter 55: Pipes
Section 55.1: Producers
Section 55.2: Connecting Pipes
Section 55.3: Pipes
Section 55.4: Running Pipes with runEffect
Section 55.5: Consumers
Section 55.6: The Proxy monad transformer
Section 55.7: Combining Pipes and Network communication
Chapter 56: Infix operators
Section 56.1: Prelude
Section 56.2: Finding information about infix operators
Section 56.3: Custom operators
Chapter 57: Parallelism
Section 57.1: The Eval Monad
Section 57.2: rpar
Section 57.3: rseq
Chapter 58: Parsing HTML with taggy-lens and lens
Section 58.1: Filtering elements from the tree
Section 58.2: Extract the text contents from a div with a particular id
Chapter 59: Foreign Function Interface
Section 59.1: Calling C from Haskell
Section 59.2: Passing Haskell functions as callbacks to C code
Chapter 60: Gtk3
Section 60.1: Hello World in Gtk
Chapter 61: Monad Transformers
Section 61.1: A monadic counter
Chapter 62: Bifunctor
Section 62.1: Definition of Bifunctor
Section 62.2: Common instances of Bifunctor
Section 62.3: first and second
Chapter 63: Proxies
Section 63.1: Using Proxy
Section 63.2: The "polymorphic proxy" idiom
Section 63.3: Proxy is like ()
Chapter 64: Applicative Functor
Section 64.1: Alternative definition
Section 64.2: Common instances of Applicative
Chapter 65: Common monads as free monads
Section 65.1: Free Empty ~~ Identity
Section 65.2: Free Identity ~~ (Nat,) ~~ Writer Nat
Section 65.3: Free Maybe ~~ MaybeT (Writer Nat)
Section 65.4: Free (Writer w) ~~ Writer [w]
Section 65.5: Free (Const c) ~~ Either c
Section 65.6: Free (Reader x) ~~ Reader (Stream x)
Chapter 66: Common functors as the base of cofree comonads
Section 66.1: Cofree Empty ~~ Empty
Section 66.2: Cofree (Const c) ~~ Writer c
Section 66.3: Cofree Identity ~~ Stream
Section 66.4: Cofree Maybe ~~ NonEmpty
Section 66.5: Cofree (Writer w) ~~ WriterT w Stream
Section 66.6: Cofree (Either e) ~~ NonEmptyT (Writer e)
Section 66.7: Cofree (Reader x) ~~ Moore x
Chapter 67: Arithmetic
Section 67.1: Basic examples
Section 67.2: `Could not deduce (Fractional Int) ...`
Section 67.3: Function examples
Chapter 68: Role
Section 68.1: Nominal Role
Section 68.2: Representational Role
Section 68.3: Phantom Role
Chapter 69: Arbitrary-rank polymorphism with RankNTypes
Section 69.1: RankNTypes
Chapter 70: GHCJS
Section 70.1: Running "Hello World!" with Node.js
Chapter 71: XML
Section 71.1: Encoding a record using the `xml` library
Chapter 72: Reader / ReaderT
Section 72.1: Simple demonstration
Chapter 73: Function call syntax
Section 73.1: Partial application - Part 1
Section 73.2: Partial application - Part 2
Section 73.3: Parentheses in a basic function call
Section 73.4: Parentheses in embedded function calls
Chapter 74: Logging
Section 74.1: Logging with hslogger
Chapter 75: Attoparsec
Section 75.1: Combinators
Section 75.2: Bitmap - Parsing Binary Data
Chapter 76: zipWithM
Section 76.1: Calculating sales prices
Chapter 77: Profunctor
Section 77.1: (->) Profunctor
Chapter 78: Type Application
Section 78.1: Avoiding type annotations
Section 78.2: Type applications in other languages
Section 78.3: Order of parameters
Section 78.4: Interaction with ambiguous types
Chapter 1: Getting started with
Haskell Language
Section 1.1: Getting started
Online REPL
The easiest way to get started writing Haskell is probably by going to the
Haskell website or Try Haskell and use the online REPL (read-eval-print-
loop) on the home page. The online REPL supports most basic functionality
and even some IO. There is also a basic tutorial available which can be
started by typing the command help. An ideal tool to start learning the basics
of Haskell and try out some stuff.
GHC(i)
For programmers that are ready to engage a little bit more, there is GHCi, an
interactive environment that comes with the Glorious/Glasgow Haskell
Compiler. The GHC can be installed separately, but that is only a compiler.
In order to be able to install new libraries, tools like Cabal and Stack must be
installed as well. If you are running a Unix-like operating system, the easiest
installation is to install Stack using:
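The Stack project documents a one-line installer script for this purpose:

curl -sSL https://get.haskellstack.org/ | sh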

This installs GHC isolated from the rest of your system, so it is easy to
remove. All commands must be preceded by stack though. Another simple
approach is to install a Haskell Platform. The platform exists in two flavours:
1. The minimal distribution contains only GHC (to compile) and
Cabal/Stack (to install and build packages)
2. The full distribution additionally contains tools for project development,
profiling and coverage analysis. Also an additional set of widely-used
packages is included.
These platforms can be installed by downloading an installer and following
the instructions or by using your distribution's package manager (note that
this version is not guaranteed to be up-to-date):
Once installed, it should be possible to start GHCi by invoking the ghci
command anywhere in the terminal. If the installation went well, the console
should look something like

Prelude>

possibly with some more information on what libraries have been loaded before
the >. Now the console has become a Haskell REPL and you can execute Haskell
code as with the online REPL. In order to quit this interactive environment,
one can type :q or :quit. For more information on what commands are available
in GHCi, type :? as indicated in the starting screen.
Because writing the same things again and again on a single line is not always
that practical, it might be a good idea to write the Haskell code in files.
These files normally have .hs for an extension and can be loaded into the REPL
by using :l or :load.
As mentioned earlier, GHCi is a part of the GHC, which is actually a
compiler. This compiler can be used to transform a .hs file with Haskell code
into a running program. Because a .hs file can contain a lot of functions, a main
function must be defined in the file. This will be the starting point for the
program. The file test.hs can be compiled with the
command
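Assuming GHC is on your PATH, that command is simply:

ghc test.hs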

this will create object files and an executable if there were no errors and the
main function was defined correctly.

More advanced tools


1. It has already been mentioned earlier as a package manager, but stack can
be a useful tool for Haskell development in completely different ways.
Once installed, it is capable of
installing (multiple versions of) GHC
project creation and scaffolding
dependency management
building and testing projects
benchmarking
2. IHaskell is a Haskell kernel for IPython and allows one to combine (runnable)
code with markdown and mathematical notation.
Section 1.2: Hello, World!
A basic "Hello, World!" program in Haskell can be expressed concisely in
just one or two lines:
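A standard version of that program:

main :: IO ()
main = putStrLn "Hello, World!"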

The first line is an optional type annotation, indicating that main is a value
of type IO (), representing an I/O action which "computes" a value of type ()
(read "unit"; the empty tuple conveying no information) besides performing
some side effects on the outside world (here, printing a string at the
terminal). This type annotation is usually omitted for main because it is its
only possible type.
Put this into a helloworld.hs file and compile it using a Haskell compiler, such as
GHC:
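For example:

ghc helloworld.hs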

Executing the compiled file will result in the output "Hello, World!" being
printed to the screen:

Alternatively, runhaskell or runghc make it possible to run the program in


interpreted mode without having to compile it:

The interactive REPL can also be used instead of compiling. It comes


shipped with most Haskell environments, such as ghci which comes with the
GHC compiler:
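For example (the exact prompt may differ between GHC versions):

ghci> putStrLn "Hello, World!"
Hello, World!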

Alternatively, load scripts into ghci from a file using load (or :l):

:reload (or :r) reloads everything in ghci:


Explanation:
This first line is a type signature, declaring the type of main:
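main :: IO ()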

Values of type IO () describe actions which can interact with the outside
world.
Because Haskell has a fully-fledged Hindley-Milner type system which allows
for automatic type inference, type signatures are technically optional: if you
simply omit the main :: IO (), the compiler will be able to infer the type on
its own by analyzing the definition of main. However, it is very much
considered bad style not to write type signatures for top-level definitions.
The reasons include:
Type signatures in Haskell are a very helpful piece of documentation
because the type system is so expressive that you often can see what sort
of thing a function is good for simply by looking at its type. This
“ documentation ” can be conveniently accessed with tools like GHCi.
And unlike normal documentation, the compiler's type checker will
make sure it actually matches the function definition!
Type signatures keep bugs local. If you make a mistake in a definition
without providing its type signature, the compiler may not immediately
report an error but instead simply infer a nonsensical type for it, with
which it actually typechecks. You may then get a cryptic error message
when using that value. With a signature, the compiler is very good at
spotting bugs right where they happen.
This second line does the actual work:
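main = putStrLn "Hello, World!"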

If you come from an imperative language, it may be helpful to note that this
definition can also be written as:
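A sketch of that imperative-style spelling, with explicit braces and semicolons:

main :: IO ()
main = do { putStrLn "Hello, World!"
          ; return () }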
Or equivalently (Haskell has layout-based parsing; but beware mixing tabs
and spaces inconsistently which will confuse this mechanism):
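The layout-based equivalent:

main :: IO ()
main = do
    putStrLn "Hello, World!"
    return ()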

Each line in a do block represents some monadic (here, I/O) computation, so


that the whole do block represents the overall action comprised of these sub-
steps by combining them in a manner specific to the given monad (for I/O
this means just executing them one after another).
The do syntax is itself syntactic sugar for monads, like IO here, and return is a
no-op action producing its argument without performing any side effects or
additional computations which might be part of a particular monad
definition.
The above is the same as defining main = putStrLn "Hello, World!", because the
value putStrLn "Hello, World!" already has the type IO (). Viewed as a
“statement”, putStrLn "Hello, World!" can be seen as a complete program, and
you simply define main to refer to this program.
You can look up the signature of putStrLn online:
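Its standard Prelude type is:

putStrLn :: String -> IO ()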

putStrLn is a function that takes a string as its argument and outputs an
I/O action (i.e. a value representing a program that the runtime can execute).
The runtime always executes the action named main, so we simply need to define
it as equal to putStrLn "Hello, World!".
Section 1.3: Factorial
The factorial function is a Haskell "Hello World!" (and for functional
programming generally) in the sense that it succinctly demonstrates basic
principles of the language.
Variation 1
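A definition matching the explanation below:

fac :: (Integral a) => a -> a
fac n = product [1..n]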

Live demo
Integral is the class of integral number types. Examples include Int and
Integer.
fac :: (Integral a) => a -> a says that fac is a function that takes an a and
returns an a.
(Integral a) => places a constraint on the type a to be in said class.
product is a function that accumulates all numbers in a list by multiplying
them together.
[1..n] is special notation which desugars to enumFromTo 1 n, and is the range
of numbers 1 ≤ x ≤ n.
Variation 2
fac :: (Integral a) => a -> a
fac 0 = 1
fac n = n * fac (n - 1)

Live demo
This variation uses pattern matching to split the function definition into
separate cases. The first definition is invoked if the argument is 0 (sometimes
called the stop condition) and the second definition otherwise (the order of
definitions is significant). It also exemplifies recursion as fac refers to itself.
It is worth noting that, due to rewrite rules, both versions of fac will compile
to identical machine code when using GHC with optimizations activated.
So, in terms of efficiency, the two would be equivalent.
Section 1.4: Fibonacci, Using Lazy
Evaluation
Lazy evaluation means Haskell will evaluate only list items whose values are
needed.
The basic recursive definition is:
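A direct transcription of that definition (a sketch following the text):

fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)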

If evaluated directly, it will be very slow. But, imagine we have a list that
records all the results,

Then

    ┌────────────────────────────────────────┐
    │ f(0) : f(1) : f(2) : .....             │
    └────────────────────────────────────────┘
                        +
    ┌────────────────────────────────────────┐
    │ f(1) : f(2) : f(3) : .....             │
    └────────────────────────────────────────┘
    -> 0 : 1 : (the element-wise sums of the two lists above)

This is coded as:


Or even as
GHCi> let fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
GHCi> take 10 fibs
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

zipWith makes a list by applying a given binary function to corresponding
elements of the two lists given to it, so zipWith (+) [x1, x2, ...] [y1, y2, ...]
is equal to [x1 + y1, x2 + y2, ...]. Another way of writing fibs is with the
scanl function:

scanl f z0 [x1, x2, ...] is equal to [z0, z1, z2, ...] where z1 = f z0 x1;
z2 = f z1 x2; and so on.
scanl builds the list of partial results that foldl would produce, working from
left to right along the input list. That is:
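fibs can then be written with scanl (the standard formulation):

fibs :: [Integer]
fibs = 0 : scanl (+) 1 fibs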
Thanks to lazy evaluation, both functions define infinite lists without
computing them out entirely. That is, we can write a fib function, retrieving
the nth element of the unbounded Fibonacci sequence:
Section 1.5: Primes
A few most salient variants:
Below 100

Unlimited
Sieve of Eratosthenes, using data-ordlist package:

Traditional
(a sub-optimal trial division sieve)
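The classic formulation usually shown under this heading:

primes = sieve [2..]
    where sieve (p:xs) = p : sieve [x | x <- xs, x `rem` p /= 0]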

Optimal trial division

ps = 2 : [n | n <- [3..], all ((> 0).rem n) $ takeWhile ((<= n).(^2)) ps]
-- = 2 : [n | n <- [3..], foldr (\p r-> p*p > n || (rem n p > 0 && r)) True ps]

Transitional
From trial division to sieve of Eratosthenes:

The Shortest Code

nubBy (((>1).).gcd) [2..]     -- i.e., nubBy (\a b -> gcd a b > 1) [2..]

nubBy is also from Data.List, like (\\).
Section 1.6: Declaring Values
We can declare a series of expressions in the REPL like this:
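For example (the names and values here are purely illustrative):

Prelude> let x = 5
Prelude> let y = 2 * x + 3
Prelude> let result = x + y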

To declare the same values in a file we write the following:
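The same illustrative declarations in a file:

module Main where

x = 5
y = 2 * x + 3
result = x + y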

Module names are capitalized, unlike variable names.


Chapter 2: Overloaded Literals
Section 2.1: Strings
The type of the literal
Without any extensions, the type of a string literal – i.e., something between
double quotes – is just a string, aka list of characters:
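In GHCi:

Prelude> :t "foo"
"foo" :: [Char]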

However, when the OverloadedStrings extension is enabled, string literals become


polymorphic, similar to number literals:
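With the extension enabled (the printed type varies slightly between GHC
versions):

Prelude> :set -XOverloadedStrings
Prelude> :t "foo"
"foo" :: Data.String.IsString t => t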

This allows us to define values of string-like types without the need for any
explicit conversions. In essence, the OverloadedStrings extension just wraps
every string literal in the generic fromString conversion function, so if the
context demands e.g. the more efficient Text instead of String, you don't need
to worry about that yourself.
Using string literals
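A sketch of such usage (the module names are real, the values illustrative):

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text as T
import qualified Data.ByteString.Char8 as B

t :: T.Text
t = "Hello, Text!"

b :: B.ByteString
b = "Hello, ByteString!"

s :: String
s = "Hello, String!"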

Notice how we were able to construct values of Text and ByteString in the same
way we construct ordinary String (or [Char]) values, rather than using each
type's pack function to encode the string explicitly.
For more information on the OverloadedStrings language extension, see the
extension documentation.
Section 2.2: Floating Numeral
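Without an annotation, a floating literal is polymorphic:

Prelude> :t 1.0
1.0 :: Fractional a => a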

Choosing a concrete type with annotations


You can specify the type with a type annotation. The only requirement is that
the type must have a Fractional instance.

if not the compiler will complain


Section 2.3: Integer Numeral

choosing a concrete type with annotations


You can specify the type as long as the target type is Num with an annotation:

if not the compiler will complain


Prelude> 1 :: String
Section 2.4: List Literals
GHC's OverloadedLists extension allows you to construct list-like data
structures with the list literal syntax.
This allows you to construct a Data.Map like this:
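A sketch (keys and values are illustrative):

{-# LANGUAGE OverloadedLists #-}

import Data.Map (Map)

m :: Map Int String
m = [(1, "one"), (2, "two")]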

Instead of this (note the use of the extra M.fromList):
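import qualified Data.Map as M

m :: M.Map Int String
m = M.fromList [(1, "one"), (2, "two")]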


Chapter 3: Foldable
Foldable is the class of types t :: * -> * which admit a folding operation. A
fold aggregates the elements of a structure in a well-defined order, using a
combining function.
Section 3.1: Definition of Foldable
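The class, abridged to the two methods that form its minimal definition:

class Foldable (t :: * -> *) where
    foldMap :: Monoid m => (a -> m) -> t a -> m
    foldr   :: (a -> b -> b) -> b -> t a -> b
    {-# MINIMAL foldMap | foldr #-}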

Intuitively (though not technically), Foldable structures are containers of


elements a which allow access to their elements in a well-defined order.
The foldMap operation maps each element of the container to a Monoid and
collapses them using the Monoid structure.
Section 3.2: An instance of Foldable
for a binary tree
To instantiate Foldable you need to provide a definition for at least foldMap or
foldr.
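A sketch of such an instance; the Tree type here is an assumption (the usual
binary-tree example):

import Data.Monoid ((<>))

data Tree a = Leaf
            | Node (Tree a) a (Tree a)

instance Foldable Tree where
    foldMap f Leaf         = mempty
    foldMap f (Node l x r) = foldMap f l <> f x <> foldMap f r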

This implementation performs an in-order traversal of the tree.

The DeriveFoldable extension allows GHC to generate Foldable instances based on


the structure of the type. We can vary the order of the machine-written
traversal by adjusting the layout of the Node constructor.
Section 3.3: Counting the elements of a
Foldable structure
length counts the occurrences of elements a in a foldable structure t a.

length is defined as being equivalent to:

Note that this return type Int restricts the operations that can be performed
on values obtained by calls to the length function. fromIntegral is a useful
function that allows us to deal with this problem.
Section 3.4: Folding a structure in
reverse
Any fold can be run in the opposite direction with the help of the Dual
monoid, which flips an existing monoid so that aggregation goes backwards.

When the underlying monoid of a foldMap call is flipped with Dual, the fold
runs backwards; the following Reverse type is defined in Data.Functor.Reverse:
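This mirrors the definition in base:

import Data.Monoid (Dual (..))

newtype Reverse t a = Reverse { getReverse :: t a }

instance Foldable t => Foldable (Reverse t) where
    foldMap f = getDual . foldMap (Dual . f) . getReverse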

We can use this machinery to write a terse reverse for lists:
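For example (named reverse' here to avoid clashing with the Prelude):

import Data.Foldable (toList)
import Data.Functor.Reverse (Reverse (..))

reverse' :: [a] -> [a]
reverse' = toList . Reverse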


Section 3.5: Flattening a Foldable
structure into a list
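The function in question is toList, which is equivalent to:

toList :: Foldable t => t a -> [a]
toList = foldr (:) []

GHCi> toList (Just 1)
[1]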
Section 3.6: Performing a side-eect
for each element of a Foldable
structure
traverse_ executes an Applicative action for every element in a Foldable
structure. It ignores the action's result, keeping only the side-effects. (For
a version which doesn't discard results, use Traversable.)

for_ is traverse_ with the arguments flipped. It resembles a foreach loop in an
imperative language.

sequenceA_ collapses a Foldable full of Applicative actions into a single
action, ignoring the result.

traverse_ is defined as being equivalent to:

traverse_ :: (Foldable t, Applicative f) => (a -> f b) -> t a -> f ()
traverse_ f = foldr (\x action -> f x *> action) (pure ())

sequenceA_ is defined as:
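sequenceA_ :: (Foldable t, Applicative f) => t (f a) -> f ()
sequenceA_ = foldr (*>) (pure ())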
Moreover, when the Foldable is also a Functor, traverse_ and sequenceA_ have the
following relationship:
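traverse_ f = sequenceA_ . fmap f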
Section 3.7: Flattening a Foldable
structure into a Monoid
foldMap maps each element of the Foldable structure to a Monoid, and then
combines them into a single value.
foldMap and foldr can be defined in terms of one another, which means that
instances of Foldable need only give a definition for one of them.

Example usage with the Product monoid:
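GHCi> import Data.Monoid
GHCi> getProduct (foldMap Product [1, 2, 3, 4])
24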


Section 3.8: Checking if a Foldable
structure is empty
null returns True if there are no elements a in a foldable structure t a, and
False if there is one or more. Structures for which null is True have a length
of 0.
Chapter 4: Traversable
The Traversable class generalises the function formerly known as
mapM :: Monad m => (a -> m b) -> [a] -> m [b] to work with Applicative effects
over structures other than lists.
Section 4.1: Definition of
Traversable
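The class, abridged:

class (Functor t, Foldable t) => Traversable t where
    traverse  :: Applicative f => (a -> f b) -> t a -> f (t b)
    sequenceA :: Applicative f => t (f a) -> f (t a)
    {-# MINIMAL traverse | sequenceA #-}
    -- (mapM and sequence are also members, as monadic synonyms of the above)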

Traversable structures t are finitary containers of elements a which can be
operated on with an effectful "visitor" operation. The visitor function
f :: a -> f b performs a side-effect on each element of the structure and
traverse composes those side-effects using Applicative. Another way of looking
at it is that sequenceA says Traversable structures commute with Applicatives.
Section 4.2: Traversing a structure in
reverse
A traversal can be run in the opposite direction with the help of the Backwards
applicative functor, which flips an existing applicative so that composed
effects take place in reversed order.

Backwards can be put to use in a "reversed traverse". When the underlying


applicative of a traverse call is flipped with Backwards, the resulting effect
happens in reverse order.

The Reverse newtype is found under Data.Functor.Reverse.


Section 4.3: An instance of Traversable
for a binary tree
Implementations of traverse usually look like an implementation of fmap lifted
into an Applicative context.
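A sketch, reusing the binary Tree type assumed in the Foldable chapter:

{-# LANGUAGE DeriveFunctor, DeriveFoldable #-}

data Tree a = Leaf
            | Node (Tree a) a (Tree a)
    deriving (Functor, Foldable)

instance Traversable Tree where
    traverse f Leaf         = pure Leaf
    traverse f (Node l x r) = Node <$> traverse f l <*> f x <*> traverse f r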

This implementation performs an in-order traversal of the tree.

The DeriveTraversable extension allows GHC to generate Traversable instances


based on the structure of the type. We can vary the order of the machine-
written traversal by adjusting the layout of the Node constructor.
Section 4.4: Traversable structures as
shapes with contents
If a type t is Traversable then values of t a can be split into two pieces: their
"shape" and their "contents":

where the "contents" are the same as what you'd "visit" using a Foldable
instance.
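A type capturing that split, consistent with the break function below:

data Traversed t a = Traversed { shape :: t (), contents :: [a] }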
Going one direction, from t a to Traversed t a doesn't require anything but
Functor and Foldable:

break :: (Functor t, Foldable t) => t a -> Traversed t a
break ta = Traversed (fmap (const ()) ta) (toList ta)

The Traversable laws require that break . recombine and recombine . break are
both identity. Notably, this means that there are exactly the right number of
elements in contents to fill shape completely with no left-overs.

Traversed t is Traversable itself. The implementation of traverse works by
visiting the elements using the list's instance of Traversable and then
reattaching the inert shape to the result.
Section 4.5: Instantiating Functor and
Foldable for a Traversable structure

Every Traversable structure can be made a Foldable Functor using the fmapDefault
and foldMapDefault functions found in Data.Traversable.

fmapDefault is defined by running traverse in the Identity applicative functor.

foldMapDefault is defined using the Const applicative functor, which ignores
its parameter while accumulating a monoidal value.
Section 4.6: Transforming a
Traversable structure with the aid of
an accumulating parameter
The two mapAccum functions combine the operations of folding and mapping.
mapAccumL, mapAccumR :: Traversable t
                     => (a -> b -> (a, c))  -- a folding function which produces a new
                                            -- mapped element 'c' and a new accumulator
                                            -- value 'a'
                     -> a                   -- a seed value
                     -> t b                 -- a Traversable structure
                     -> (a, t c)            -- final accumulator value and mapped structure

These functions generalise fmap in that they allow the mapped values to
depend on what has happened earlier in the fold. They generalise foldl/foldr in
that they map the structure in place as well as reducing it to a value.
For example, tails can be implemented using mapAccumR and its sister inits can
be implemented using mapAccumL.
mapAccumL is implemented by traversing in the State applicative functor.
Section 4.7: Transposing a list of lists
Noting that zip transposes a tuple of lists into a list of tuples,

and the similarity between the types of transpose and sequenceA,


-- transpose exchanges the inner list with the outer list
transpose :: [[a]] -> [[a]]

-- sequenceA exchanges the inner Applicative with the outer Traversable
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)

the idea is to use []'s Traversable and Applicative structure to deploy
sequenceA as a sort of n-ary zip, zipping all the inner lists together
pointwise. []'s default "prioritised choice" Applicative instance is not
appropriate for our use – we need a "zippy" Applicative. For this we use the
ZipList newtype, found in Control.Applicative.

Now we get transpose for free, by traversing in the ZipList Applicative.
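import Control.Applicative (ZipList (..))

transpose :: [[a]] -> [[a]]
transpose = getZipList . traverse ZipList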


Chapter 5: Lens
Lens is a library for Haskell that provides lenses, isomorphisms, folds,
traversals, getters and setters, which exposes a uniform interface for querying
and manipulating arbitrary structures, not unlike Java's accessor and mutator
concepts.
Section 5.1: Lenses for records
Simple record
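A sketch of the record, with field values chosen to match the results shown
below:

{-# LANGUAGE TemplateHaskell #-}

import Control.Lens

data Point = Point { _x :: Float, _y :: Float } deriving Show
makeLenses ''Point

p :: Point
p = Point 5.0 6.0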

set x 10 p    -- returns Point { _x = 10.0, _y = 6.0 }
p & x +~ 1    -- returns Point { _x = 6.0, _y = 6.0 }
Managing records with repeating field names

data Person = Person { _personName :: String }


makeFields ''Person

Creates a type class HasName, lens name for Person, and makes Person an instance
of HasName. Subsequent records will be added to the class as well:

The Template Haskell extension is required for makeFields to work.


Technically, it's entirely possible to create the lenses made this way via
other means, e.g. by hand.
Section 5.2: Manipulating tuples with
Lens
Getting

Setting

Modifying

both is a Traversal that targets both elements of a tuple:
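GHCi sketches of these operations (the values are illustrative):

GHCi> (1, 2) ^. _1            -- getting
1
GHCi> (1, 2) & _2 .~ "two"    -- setting
(1,"two")
GHCi> (1, 2) & _1 +~ 10       -- modifying
(11,2)
GHCi> (1, 2) & both *~ 2      -- both
(2,4)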
Section 5.3: Lens and Prism
A Lens' s a means that you can always find an a within any s. A Prism' s a
means that you can sometimes find that s actually is just an a, but sometimes
it's something else.
To be more clear, we have _1 :: Lens' (a, b) a because any tuple always has a
first element. We have _Just :: Prism' (Maybe a) a because a Maybe a is
sometimes actually an a value wrapped in Just but sometimes it's Nothing. With
this intuition, some standard combinators can be interpreted in parallel to
one another:

view    :: Lens' s a  -> (s -> a)          -- "gets" the a out of the s
set     :: Lens' s a  -> (a -> s -> s)     -- "sets" the a slot in s
review  :: Prism' s a -> (a -> s)          -- "realizes" that an a could be an s
preview :: Prism' s a -> (s -> Maybe a)    -- "attempts" to turn an s into an a

Another way to think about it is that a value of type Lens' s a demonstrates
that s has the same structure as (r, a) for some unknown r. On the other hand,
Prism' s a demonstrates that s has the same structure as Either r a for some
r. We can write those four functions above with this knowledge.
Section 5.4: Stateful Lenses
Lens operators have useful variants that operate in stateful contexts. They are
obtained by replacing ~ with = in the operator name.

Getting rid of & chains


If lens-ful operations need to be chained, it often looks like this:
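For instance, with tuple lenses standing in for the record lenses of a real
program:

change :: (Int, String) -> (Int, String)
change s = s & _1 %~ (+ 1)
             & _2 .~ "done"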

This works thanks to the associativity of &. The stateful version is clearer,
though.
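The same updates written statefully, using the MonadState-based operators from
lens:

import Control.Monad.State

change' :: State (Int, String) ()
change' = do
    _1 %= (+ 1)
    _2 .= "done"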

If lensX is actually id, the whole operation can of course be executed directly
by just lifting it with modify.

Imperative code with structured state

Assuming this example state:

We can write code that resembles classic imperative languages, while still
allowing us to use benefits of Haskell:
Section 5.5: Lenses compose
If you have an f :: Lens' a b and a g :: Lens' b c then f . g is a Lens' a c
gotten by following f first and then g. Notably:
Lenses compose as functions (really they just are functions)
If you think of the view functionality of Lens, it seems like data flows
"left to right" — this might feel backwards to your normal intuition for
function composition. On the other hand, it ought to feel natural if you
think of . notation like how it happens in OO languages.
More than just composing Lens with Lens, (.) can be used to compose nearly
any "Lens-like" type together. It's not always easy to see what the result is
since the type becomes tougher to follow, but you can use the lens chart to
figure it out. The composition x . y has the type of the least-upper-bound of
the types of both x and y in that chart.
Section 5.6: Writing a lens without
Template Haskell
To demystify Template Haskell, suppose you have
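presumably a record along these lines:

data Example a = Example { _foo :: Int, _bar :: a }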

then
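that is, the Template Haskell splice

makeLenses ''Example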

produces (more or less)

There's nothing particularly magical going on, though. You can write these
yourself:
foo :: Lens' (Example a) Int
-- i.e. :: Functor f => (Int -> f Int) -> (Example a -> f (Example a))  -- expand the alias
foo wrap (Example foo bar) = fmap (\newFoo -> Example newFoo bar) (wrap foo)

bar :: Lens (Example a) (Example b) a b
-- i.e. :: Functor f => (a -> f b) -> (Example a -> f (Example b))  -- expand the alias
bar wrap (Example foo bar) = fmap (\newBar -> Example foo newBar) (wrap bar)

Essentially, you want to "visit" your lens' "focus" with the wrap function and
then rebuild the "entire" type.
Section 5.7: Fields with makeFields
(This example copied from this StackOverflow answer)
Let's say you have a number of different data types that all ought to have a
lens with the same name, in this case capacity. The makeFields splice will
create a class that accomplishes this without namespace conflicts.
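A sketch consistent with the description below (the exact field types are
assumptions):

{-# LANGUAGE FlexibleInstances, FunctionalDependencies,
             MultiParamTypeClasses, TemplateHaskell #-}
module Foo where

import Control.Lens

data Foo = Foo { fooCapacity :: Int } deriving (Eq, Show)
makeFields ''Foo

data Bar = Bar { barCapacity :: Double } deriving (Eq, Show)
makeFields ''Bar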

Then in ghci:
So what it's actually done is declared a class HasCapacity s a, where capacity is a
Lens' from s to a (a is fixed once s is known). It figured out the name
"capacity" by stripping off the (lowercased) name of the data type from the
field; I find it pleasant not to have to use an underscore on either the field
name or the lens name, since sometimes record syntax is actually what you
want. You can use makeFieldsWith and the various lensRules to have some
different options for calculating the lens names.
In case it helps, using ghci -ddump-splices Foo.hs:
So the first splice made the class HasCapacity and added an instance for Foo; the
second used the existing class and made an instance for Bar.
This also works if you import the HasCapacity class from another module;
makeFields can add more instances to the existing class and spread your types
out across multiple modules. But if you use it again in another module where
you haven't imported the class, it'll make a new class (with the same name),
and you'll have two separate overloaded capacity lenses that are not
compatible.
Section 5.8: Classy Lenses
In addition to the standard makeLenses function for generating Lenses,
Control.Lens.TH also offers the makeClassy function. makeClassy has the same type
and works in essentially the same way as makeLenses, with one key difference.
In addition to generating the standard lenses and traversals, if the type has no
arguments, it will also create a class describing all the datatypes which
possess the type as a field. For example

will create
Section 5.9: Traversals
A Traversal' s a shows that s has 0-to-many as inside of it.

Any type t which is Traversable automatically has traverse :: Traversal' (t a) a.
We can use a Traversal to set or map over all of these a values
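For example:

GHCi> set traverse 1 [1, 2, 3]
[1,1,1]
GHCi> over traverse (+ 1) [1, 2, 3]
[2,3,4]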

An f :: Lens' s a says there's exactly one a inside of s. A g :: Prism' a b
says there are either 0 or 1 bs in a. Composing f . g gives us a Traversal' s b
because following f and then g shows how there are 0-to-1 bs in s.
Chapter 6: QuickCheck
Section 6.1: Declaring a property
At its simplest, a property is a function which returns a Bool.

prop_reverseDoesNotChangeLength xs = length (reverse xs) == length xs

A property declares a high-level invariant of a program. The QuickCheck test


runner will evaluate the function with 100 random inputs and check that the
result is always True.
By convention, functions that are properties have names which start
with prop_.
Section 6.2: Randomly generating
data for custom types
The Arbitrary class is for types that can be randomly generated by QuickCheck.
The minimal implementation of Arbitrary is the arbitrary method, which runs in
the Gen monad to produce a random value.
Here is an instance of Arbitrary for the following datatype of non-empty lists.
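A sketch (the NonEmpty type and its shape are assumptions for illustration):

import Test.QuickCheck

data NonEmpty a = NonEmpty a [a]
    deriving Show

instance Arbitrary a => Arbitrary (NonEmpty a) where
    arbitrary = NonEmpty <$> arbitrary <*> arbitrary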
Section 6.3: Using implication (==>) to
check properties with preconditions

If you want to check that a property holds given that a precondition holds,
you can use the ==> operator. Note that if it's very unlikely for arbitrary
inputs to match the precondition, QuickCheck can give up early.

prop_overlySpecific x y = x == 0 ==> x * y == 0
Section 6.4: Checking a single property
The quickCheck function tests a property on 100 random inputs.
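For example, with the property from Section 6.1:

GHCi> quickCheck prop_reverseDoesNotChangeLength
+++ OK, passed 100 tests.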

If a property fails for some input, quickCheck prints out a counterexample.


Section 6.5: Checking all the properties
in a file
quickCheckAll is a Template Haskell helper which finds all the definitions in
the current file whose name begins with prop_ and tests them.
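A sketch of the usual setup:

{-# LANGUAGE TemplateHaskell #-}
import Test.QuickCheck

prop_reverseReverse :: [Int] -> Bool
prop_reverseReverse xs = reverse (reverse xs) == xs

return []

main :: IO Bool
main = $quickCheckAll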

Note that the return [] line is required. It makes definitions textually above
that line visible to Template Haskell.
Section 6.6: Limiting the size of test
data
It can be difficult to test functions with poor asymptotic complexity using
quickcheck as the random inputs are not usually size bounded. By adding an
upper bound on the size of the input we can still test these expensive
functions.

By using quickCheckWith with a modified version of stdArgs we can limit the size
of the inputs to be at most 10. In this case, as we are generating lists, this
means we generate lists of up to size 10. Our permutations function doesn't
take too long to run for these short lists but we can still be reasonably
confident that our definition is correct.
Chapter 7: Common GHC
Language Extensions
Section 7.1: RankNTypes
Imagine the following situation:
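A sketch of that situation, using the names referred to below:

foo :: (a -> String) -> String -> Int -> IO ()
foo show' string int = do
    putStrLn (show' string)
    putStrLn (show' int)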

Here, we want to pass in a function that converts a value into a String, apply
that function to both a string parameter and an int parameter, and print them
both. In my mind, there is no reason this should fail! We have a function that
works on both types of the parameters we're passing in.
Unfortunately, this won't type check! GHC infers the a type based off of its
first occurrence in the function body. That is, as soon as we hit the first
use of show' on string, GHC will infer that

show' :: String -> String

since string is a String. It will proceed to blow up while trying to evaluate
show' int.

RankNTypes lets you instead write the type signature as follows, quantifying
over all functions that satisfy the show' type:

foo :: (forall a. Show a => (a -> String)) -> String -> Int -> IO ()

This is rank 2 polymorphism: we are asserting that the show' function must
work for all a's within our function, and the previous implementation now
works.
The RankNTypes extension allows arbitrary nesting of forall ... blocks in type
signatures. In other words, it allows rank N polymorphism.
Section 7.2: OverloadedStrings
Normally, string literals in Haskell have a type of String (which is a type
alias for [Char]). While this isn't a problem for smaller, educational
programs, real-world applications often require more efficient storage such as
Text or ByteString.

OverloadedStrings simply changes the type of string literals to
IsString a => a, allowing them to be directly passed to functions expecting
such a type.


Many libraries implement this interface for their string-like types, including
Data.Text and Data.ByteString, which both provide certain time and space
advantages over [Char].
There are also some unique uses of OverloadedStrings like those from the
Postgresql-simple library which allows SQL queries to be written in double
quotes like a normal string, but provides protections against improper
concatenation, a notorious source of SQL injection attacks.
To create an instance of the IsString class you need to implement the
fromString function. Example † :

† This example courtesy of Lyndon Maydwell (sordina on


GitHub) found here.
Section 7.3: BinaryLiterals
Standard Haskell allows you to write integer literals in decimal (without any
prefix), hexadecimal (preceded by 0x or 0X), and octal (preceded by 0o or 0O).
The BinaryLiterals extension adds the option of binary (preceded by 0b or 0B).
Section 7.4: ExistentialQuantification
This is a type system extension that allows types that are existentially
quantified, or, in other words, have type variables that only get instantiated at
runtime † .
A value of existential type is similar to an abstract-base-class reference in
OO languages: you don't know the exact type it contains, but you can constrain
the class of types.
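The running example below uses a type S that wraps any showable value:

{-# LANGUAGE ExistentialQuantification #-}

data S = forall a. Show a => S a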

or equivalently, with GADT syntax:
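data S where
    S :: Show a => a -> S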

Existential types open the door to things like almost-heterogeneous containers:


as said above, there can actually be various types in an S value, but all of
them can be shown, hence you can also do

Now we can create a collection of such objects:

Which also allows us to use the polymorphic behaviour:
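For instance (the contained values are chosen for illustration):

xs :: [S]
xs = [S 1, S "hello", S (Just 'x')]

GHCi> map (\(S a) -> show a) xs
["1","\"hello\"","Just 'x'"]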

Existentials can be very powerful, but note that they are actually not
necessary very often in Haskell. In the example above, all you can actually
do with the Show instance is show (duh!) the values, i.e. create a string
representation. The entire S type therefore contains exactly as much
information as the string you get when showing it. Therefore, it is usually
better to simply store that string right away, especially since Haskell is lazy
and therefore the string will at first only be an unevaluated thunk anyway.
On the other hand, existentials cause some unique problems. For instance, the
way the type information is “ hidden ” in an existential. If you pattern-
match on an S value, you will have the contained type in scope (more
precisely, its Show instance), but this information can never escape its scope,
which therefore becomes a bit of a “ secret society ” : the compiler doesn't
let anything escape the scope except values whose type is already known
from the outside.
This can lead to strange errors like Couldn't match type ‘a0’ with ‘()’ ‘a0’ is untouchable.

† Contrast this with ordinary parametric polymorphism, which is generally


resolved at compile time (allowing full type erasure).

Existential types are different from Rank-N types – these extensions are,
roughly speaking, dual to each other: to actually use values of an existential
type, you need a (possibly constrained-) polymorphic function, like show in
the example. A polymorphic function is universally quantified, i.e. it works
for any type in a given class, whereas existential quantification means it
works for some particular type which is a priori unknown. If you have a
polymorphic function, that's sufficient, however to pass polymorphic
functions as such as arguments, you need {-# LANGUAGE Rank2Types #-}:
Section 7.5: LambdaCase
A syntactic extension that lets you write \case in place of \arg -> case arg of.
Consider the following function definition (all three versions discussed here
are sketched together at the end of this section):

If you want to avoid repeating the function name, you might write something
like:

Using the LambdaCase extension, you can write that as a function


expression, without having to name the argument:
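A sketch of the three forms described above, using a small example function
(the concrete function is an assumption):

{-# LANGUAGE LambdaCase #-}

-- repeating the function name:
describe :: Int -> String
describe 0 = "zero"
describe 1 = "one"
describe _ = "many"

-- avoiding the repetition with an explicit case:
describe2 :: Int -> String
describe2 n = case n of
    0 -> "zero"
    1 -> "one"
    _ -> "many"

-- with LambdaCase, no argument name is needed:
describe3 :: Int -> String
describe3 = \case
    0 -> "zero"
    1 -> "one"
    _ -> "many"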
Section 7.6: FunctionalDependencies
If you have a multi-parameter type-class with arguments a, b, c, and x, this
extension lets you express that the type x can be uniquely identified from a,
b, and c:

When declaring an instance of such class, it will be checked against all other
instances to make sure that the functional dependency holds, that is, no other
instance with same a b c but different x exists.
You can specify multiple dependencies in a comma-separated list:

For example in MTL we can see:

Now, if you have a value of type MonadReader a ((->) Foo) => a, the compiler
can infer that a ~ Foo, since the second argument completely determines the
first, and will simplify the type accordingly.
The SomeClass class can be thought of as a function of the arguments a b c that
results in x. Such classes can be used to do computations in the typesystem.
Section 7.7: FlexibleInstances
Regular instances require:
All instance types must be of the form (T
a1 ... an) where a1 ... an are *distinct type
variables*, and each type variable appears
at most once in the instance head.
That means that, for example, while you can create an instance for [a], you
can't create an instance specifically for [String].
Section 7.8: GADTs
Conventional algebraic datatypes are parametric in their type variables. For
example, if we define an ADT like
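something along these lines (the exact constructor set is an assumption,
chosen to match the discussion that follows):

data Expr a
    = IntLit Int
    | BoolLit Bool
    | Add (Expr a) (Expr a)
    | Not (Expr a)
    | If (Expr Bool) (Expr a) (Expr a)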

with the hope that this will statically rule out non-well-typed conditionals,
this will not behave as expected, since the type of IntLit :: Int -> Expr a is
universally quantified: for any choice of a, it produces a value of type
Expr a. In particular, for a ~ Bool we have IntLit :: Int -> Expr Bool,
allowing us to construct something like If (IntLit 1) e1 e2, which is exactly
what the type of the If constructor was trying to rule out.
Generalised Algebraic Data Types allows us to control the resulting type of a
data constructor so that they are not merely parametric. We can rewrite our
Expr type as a GADT like this:
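With the constructor set assumed above:

{-# LANGUAGE GADTs #-}

data Expr a where
    IntLit  :: Int -> Expr Int
    BoolLit :: Bool -> Expr Bool
    Add     :: Expr Int -> Expr Int -> Expr Int
    Not     :: Expr Bool -> Expr Bool
    If      :: Expr Bool -> Expr a -> Expr a -> Expr a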

Here, the type of the constructor IntLit is Int -> Expr Int, and so
IntLit 1 :: Expr Bool will not typecheck.
Pattern matching on a GADT value causes refinement of the type of the term
returned. For example, it is possible to write an evaluator for Expr a like this:
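A sketch, for the constructors assumed above:

eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (Not b)     = not (eval b)
eval (If c t e)  = if eval c then eval t else eval e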

Note that we are able to use (+) in the above definitions because when e.g.
IntLit x is pattern matched, we also learn that a ~ Int (and likewise for not
and if_then_else_, where a ~ Bool).
Section 7.9: TupleSections
A syntactic extension that allows applying the tuple constructor (which is an
operator) in a section way:

N-tuples
It also works for tuples with arity greater than two

Mapping
This can be useful in other places where sections are used:

The above example without this extension would look like this:
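A few sketches covering the cases above, ending with the un-extended spelling:

{-# LANGUAGE TupleSections #-}

pairWithFive :: a -> (a, Int)
pairWithFive = (, 5)

triple :: b -> (Int, b, String)
triple = (1, , "a")

labelled :: [(String, String)]
labelled = map (, "initial value") ["a", "b", "c"]

-- without the extension, the last one reads:
-- labelled = map (\x -> (x, "initial value")) ["a", "b", "c"]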
Section 7.10: OverloadedLists
added in GHC 7.8.
OverloadedLists, similar to OverloadedStrings, allows list literals to be
desugared as follows:

This comes in handy when dealing with types such as Set, Vector and Map.

The IsList class in GHC.Exts is intended to be used with this extension.


IsList is equipped with one type function, Item, and three functions,
fromList :: [Item l] -> l, toList :: l -> [Item l], and
fromListN :: Int -> [Item l] -> l, where fromListN is optional.
Typical implementations are:

Examples taken from OverloadedLists – GHC.
Section 7.11:
MultiParamTypeClasses
It's a very common extension that allows type classes with multiple type
parameters. You can think of MPTC as a relation between types.

The order of parameters matters.


MPTCs can sometimes be replaced with type
families.
Section 7.12:
UnicodeSyntax
An extension that allows you to use Unicode characters in lieu of certain
built-in operators and names.
ASCII     Unicode   Use(s)
::        ∷         has type
->        →         function types, lambdas, case branches, etc.
=>        ⇒         class constraints
forall    ∀         explicit polymorphism
<-        ←         do notation
*         ★         the type (or kind) of types (e.g., Int :: ★)
>-        ⤚         proc notation for Arrows
-<        ⤙         proc notation for Arrows
>>-       ⤜         proc notation for Arrows
-<<       ⤛         proc notation for Arrows

For example:

would become

Note that the * vs. ★ example is slightly different: since * isn't reserved, ★
also works the same way as * for multiplication, or any other function named
(*), and vice-versa. For example:
Section 7.13: PatternSynonyms
Pattern synonyms are abstractions of patterns similar to how functions are
abstractions of expressions.
For this example, let's look at the interface Data.Sequence exposes, and let's see
how it can be improved with pattern synonyms. The Seq type is a data type
that, internally, uses a complicated representation to achieve good asymptotic
complexity for various operations, most notably both O(1) (un)consing and
(un)snocing.
But this representation is unwieldy and some of its invariants cannot be
expressed in Haskell's type system. Because of this, the Seq type is exposed to
users as an abstract type, along with invariant-preserving accessor and
constructor functions, among them:

But using this interface can be a bit cumbersome:

We can use view patterns to clean it up somewhat:

Using the PatternSynonyms language extension, we can give an even nicer


interface by allowing pattern matching to pretend that we have a cons- or a
snoc-list:
Section 7.14: ScopedTypeVariables
ScopedTypeVariables let you refer to universally quantified types inside of a
declaration. To be more explicit:

The important thing is that we can use a, b and c to instruct the compiler in
subexpressions of the declaration (the tuple in the where clause and the first a
in the final result). In practice, ScopedTypeVariables assist in writing complex
functions as a sum of parts, allowing the programmer to add type signatures
to intermediate values that don't have concrete types.
Section 7.15: RecordWildCards
See RecordWildCards
Chapter 8: Free Monads
Section 8.1: Free monads split monadic
computations into data structures and
interpreters
For instance, a computation involving commands to read and write from the
prompt:
First we describe the "commands" of our computation as a Functor data type
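A sketch (the constructor names are assumptions; only the overall shape
matters):

{-# LANGUAGE DeriveFunctor #-}

data TeletypeF next
    = PrintLine String next
    | ReadLine (String -> next)
    deriving Functor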

Then we use Free to create the "Free Monad over TeletypeF" and build some
basic operations.
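Continuing the sketch:

import Control.Monad.Free (Free, liftF)

type Teletype = Free TeletypeF

printLine :: String -> Teletype ()
printLine str = liftF (PrintLine str ())

readLine :: Teletype String
readLine = liftF (ReadLine id)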

Since Free f is a Monad whenever f is a Functor, we can use the standard Monad
combinators (including do notation) to build Teletype computations.
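For example:

echo :: Teletype ()
echo = do
    line <- readLine
    printLine line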

Finally, we write an "interpreter" turning Teletype a values into something we
know how to work with, like IO a:
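A sketch of such an interpreter, for the constructors assumed above:

interpretTeletype :: Teletype a -> IO a
interpretTeletype (Pure x) = return x
interpretTeletype (Free (PrintLine str next)) = putStrLn str >> interpretTeletype next
interpretTeletype (Free (ReadLine f)) = getLine >>= interpretTeletype . f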

Which we can use to "run" the Teletype a computation in IO


Section 8.2: The Freer monad
There's an alternative formulation of the free monad called the Freer (or
Prompt, or Operational) monad. The Freer monad doesn't require a Functor
instance for its underlying instruction set, and it has a more recognisably list-
like structure than the standard free monad.
The Freer monad represents programs as a sequence of atomic instructions
belonging to the instruction set i :: * -> *. Each instruction uses its
parameter to declare its return type. For example, the set of base
instructions for the State monad are as follows:
data StateI s a where
    Get :: StateI s s        -- the Get instruction returns a value of type 's'
    Put :: s -> StateI s ()  -- the Put instruction contains an 's' as an argument and returns ()

Sequencing these instructions takes place with the :>>= constructor. :>>= takes
a single instruction returning an a and prepends it to the rest of the program,
piping its return value into the continuation. In other words, given an
instruction returning an a, and a function to turn an a into a program returning
a b, :>>= will produce a program returning a b.
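A sketch of how Freer might be declared, using GADT syntax, mirroring the description above:

{-# LANGUAGE GADTs #-}

data Freer i b where
    Return :: b -> Freer i b
    (:>>=) :: i a -> (a -> Freer i b) -> Freer i b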

Note that a is existentially quantified in the :>>= constructor. The only way for
an interpreter to learn what a is is by pattern matching on the GADT i.
Because CoYoneda i is a Functor for any i, Freer is a Monad for any i, even if i isn't
a Functor.

Interpreters can be built for Freer by mapping instructions to some handler


monad.

foldFreer :: Monad m => (forall x. i x -> m x) -> Freer i a -> m a
foldFreer eta (Return x) = return x
foldFreer eta (i :>>= f) = eta i >>= (foldFreer eta . f)

For example, we can interpret the Freer (StateI s) monad using the regular State s monad as a handler:
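A sketch of such a handler, reusing foldFreer (the names runStateI and runFreerState are illustrative):

import Control.Monad.State (State, get, put)

runStateI :: StateI s a -> State s a
runStateI Get     = get
runStateI (Put s) = put s

runFreerState :: Freer (StateI s) a -> State s a
runFreerState = foldFreer runStateI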
Section 8.3: How do foldFree and
iterM work?
There are some functions to help tear down Free computations by interpreting them into another monad m: iterM :: (Functor f, Monad m) => (f (m a) -> m a) -> (Free f a -> m a) and foldFree :: Monad m => (forall x. f x -> m x) -> (Free f a -> m a). What are they doing?
First let's see what it would take to interpret a Teletype a computation into IO manually. We can see Free f a as being defined

The Pure case is easy:

Now, how to interpret a Teletype computation that was built with the Free constructor? We'd like to arrive at a value of type IO a by examining teletypeF :: TeletypeF (Teletype a). To start with, we'll write a function runIO :: TeletypeF a -> IO a which maps a single layer of the free monad to an IO action:

Now we can use runIO to fill in the rest of interpretTeletype. Recall that teletypeF :: TeletypeF (Teletype a) is a layer of the TeletypeF functor which contains the rest of the Free computation. We'll use runIO to interpret the outermost layer (so we have runIO teletypeF :: IO (Teletype a)) and then use the IO monad's >>= combinator to interpret the returned Teletype a.

interpretTeletype :: Teletype a -> IO a
interpretTeletype (Pure x)         = return x
interpretTeletype (Free teletypeF) = runIO teletypeF >>= interpretTeletype
The definition of foldFree is just that of interpretTeletype, except that the runIO
function has been factored out. As a result, foldFree works independently of
any particular base functor and of any target monad.

foldFree has a rank-2 type: eta is a natural transformation. We could have given foldFree a type of Monad m => (f (Free f a) -> m (Free f a)) -> Free f a -> m a, but that gives eta the liberty of inspecting the Free computation inside the f layer. Giving foldFree this more restrictive type ensures that eta can only process a single layer at a time.
iterM does give the folding function the ability to examine the
subcomputation. The (monadic) result of the previous iteration is available to
the next, inside f's parameter. iterM is analogous to a paramorphism whereas
foldFree is like a catamorphism.
Section 8.4: Free Monads are like fixed
points
Compare the definition of Free to that of Fix:
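A side-by-side sketch of the two definitions:

data Free f a = Return a
              | Free (f (Free f a))
                -- (the free package spells the first constructor Pure)

newtype Fix f = Fix (f (Fix f))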

In particular, compare the type of the Free constructor with the type of the Fix
constructor. Free layers up a functor just like Fix, except that Free has an
additional Return a case.
Chapter 9: Type Classes
Typeclasses in Haskell are a means of defining the behaviour associated with
a type separately from that type's definition. Whereas, say, in Java, you'd
define the behaviour as part of the type's definition -- i.e. in an interface,
abstract class or concrete class -- Haskell keeps these two things separate.
There are a number of typeclasses already defined in Haskell's base
package. The relationship between these is illustrated in the Remarks
section below.
Section 9.1: Eq
All basic datatypes (like Int, String, [a]) from Prelude except for functions and IO have instances of Eq. If a type instantiates Eq it means that we know how to compare two values for value or structural equality.

Required methods
(==) :: Eq a => a -> a -> Bool or (/=) :: Eq a => a -> a -> Bool (if only one is implemented, the other defaults to the negation of the defined one)
Defines
(==) :: Eq a => a -> a -> Bool
(/=) :: Eq a => a -> a -> Bool

Direct superclasses
None
Notable subclasses
Ord
Section 9.2: Monoid
Types instantiating Monoid include lists, numbers, and functions with Monoid return values, among others. To instantiate Monoid a type must support an associative binary operation (mappend or (<>)) which combines its values, and have a special "zero" value (mempty) such that combining a value with it does not change that value:

Intuitively, Monoid types are "list-like" in that they support appending values
together. Alternatively, Monoid types can be thought of as sequences of values
for which we care about the order but not the grouping. For instance, a binary
tree is a Monoid, but using the Monoid operations we cannot witness its
branching structure, only a traversal of its values (see Foldable and Traversable).
Required methods
mempty :: Monoid m => m
mappend :: Monoid m => m -> m -> m

Direct superclasses
None
Section 9.3: Ord
Types instantiating Ord include, e.g., Int, String, and [a] (for types a where there's an Ord a instance). If a type instantiates Ord it means that we know a "natural" ordering of values of that type. Note, there are often many possible choices of the "natural" ordering of a type and Ord forces us to favor one.
Ord provides the standard (<=), (<), (>=), (>) operators but interestingly defines them all using a custom algebraic data type, Ordering.

Required methods
compare :: Ord a => a -> a -> Ordering or (<=) :: Ord a => a -> a -> Bool (the standard's default compare method uses (<=) in its implementation)
Defines
compare :: Ord a => a -> a -> Ordering
(<=) :: Ord a => a -> a -> Bool
(<) :: Ord a => a -> a -> Bool
(>=) :: Ord a => a -> a -> Bool
(>) :: Ord a => a -> a -> Bool
min :: Ord a => a -> a -> a
max :: Ord a => a -> a -> a

Direct superclasses
Eq
Section
9.4: Num
The most general class for number types, more precisely for rings, i.e.
numbers that can be added and subtracted and multiplied in the usual sense,
but not necessarily divided.
This class contains both integral types (Int, Integer, Word32 etc.) and fractional
types (Double, Rational, also complex numbers etc.). In case of finite types, the
semantics are generally understood as modular arithmetic, i.e. with over- and
underflow † .
Note that the rules for the numerical classes are much less strictly obeyed
than the monad or monoid laws, or those for equality comparison. In
particular, floating-point numbers generally obey laws only in a approximate
sense.
The methods

fromInteger :: Num a => Integer -> a. Converts an integer to the general number type (wrapping around the range, if necessary). Haskell number literals can be understood as a monomorphic Integer literal with the general conversion around it, so you can use the literal 5 in both an Int context and a Complex Double setting.

(+) :: Num a => a -> a -> a. Standard addition, generally understood as associative and commutative, i.e. x + (y + z) ≡ (x + y) + z and x + y ≡ y + x.

(*) :: Num a => a -> a -> a. Standard multiplication, also generally understood as associative; for the most common instances, multiplication is also commutative, but this is definitely not a requirement.

negate :: Num a => a -> a. The full name of the unary negation operator. -1 is syntactic sugar for negate 1.

abs :: Num a => a -> a. The absolute value. For real types it's clear what non-negative means: you always have abs a >= 0. Complex etc. types don't have a well-defined ordering, however the result of abs should always lie in the real subset ‡ (i.e. give a number that could also be written as a single number literal without negation).

signum :: Num a => a -> a. The sign function, according to the name, yields only -1 or 1, depending on the sign of the argument. Actually, that's only true for nonzero real numbers; in general signum is better understood as the normalising function:

Note that section 6.4.4 of the Haskell 2010 Report explicitly requires this
last equality to hold for any valid Num instance.

Some libraries, notably linear and hmatrix, have a much laxer understanding
of what the Num class is for: they treat it just as a way to overload the
arithmetic operators. While this is pretty straightforward for + and -, it
already becomes troublesome with * and more so with the other methods. For
instance, should * mean matrix multiplication or element-wise
multiplication?
It is arguably a bad idea to define such non-number instances; please consider
dedicated classes such as VectorSpace.
† In particular, the "negatives" of unsigned types are wrapped around to large positive values, e.g. (-4 :: Word32) == 4294967292.
‡ This is widely not fulfilled: vector types do not have a real
subset. The controversial Num-instances for such types generally define
abs and signum element-wise, which mathematically speaking doesn't
really make sense.
Section 9.5: Maybe and the Functor
Class
In Haskell, data types can have arguments just like functions. Take the Maybe
type for example.
Maybe is a very useful type which allows us to represent the idea of failure, or
the possiblity thereof. In other words, if there is a possibility that a
computation will fail, we use the Maybe type there. Maybe acts kind of like a
wrapper for other types, giving them additional functionality.
Its actual declaration is fairly simple.
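As in the Prelude:

data Maybe a = Nothing | Just a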

What this tells us is that a Maybe comes in two forms, a Just, which represents success, and a Nothing, which represents failure. Just takes one argument which determines the type of the Maybe, and Nothing takes none. For example, the value Just "foo" will have type Maybe String, which is a string type wrapped with the additional Maybe functionality. The value Nothing has type Maybe a where a can be any type.
This idea of wrapping types to give them additional functionality is a very
useful one, and is applicable to more than just Maybe. Other examples include
the Either, IO and list types, each providing different functionality. However,
there are some actions and abilities which are common to all of these wrapper
types. The most notable of these is the ability to modify the encapsulated
value.
It is common to think of these kinds of types as boxes which can have values
placed in them. Different boxes hold different values and do different things,
but none are useful without being able to access the contents within.
To encapsulate this idea, Haskell comes with a standard typeclass, named
Functor. It is defined as follows.
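The class, as in the Prelude:

class Functor f where
    fmap :: (a -> b) -> f a -> f b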

As can be seen, the class has a single function, fmap, of two arguments. The
first argument is a function from one type, a, to another, b. The second
argument is a functor (wrapper type) containing a value of type a. It returns a
functor (wrapper type) containing a value of type b.
In simple terms, fmap takes a function and applies it to the value inside of a
functor. It is the only function necessary for a type to be a member of the
Functor class, but it is extremely useful. Functions operating on functors that
have more specific applications can be found in the Applicative and Monad
typeclasses.
Section 9.6: Type class inheritance:
Ord type class
Haskell supports a notion of class extension. For example, the class Ord
inherits all of the operations in Eq, but in addition has a compare function that
returns an Ordering between values. Ord may also contain the common order
comparison operators, as well as a min method and a max method.
The => notation has the same meaning as it does in a function signature and requires type a to implement Eq, as in:

All of the methods following compare can be derived from it in a number of


ways:

Types that instantiate Ord must implement at least either the compare method or the (<=) method themselves, which builds up the directed inheritance lattice.
Chapter 10: IO
Section 10.1: Getting the 'a' "out of" 'IO a'
A common question is "I have a value of IO a, but I want to do something to that a value: how do I get access to it?" How can one operate on data that comes from the outside world (for example, incrementing a number typed by the user)?
The point is that if you use a pure function on data obtained impurely, then the result is still impure. It depends on what the user did! A value of type IO a stands for a "side-effecting computation resulting in a value of type a" which can only be run by (a) composing it into main and (b) compiling and executing your program. For that reason, there is no way within the pure Haskell world to "get the a out".
Instead, we want to build a new computation, a new IO value, which makes
use of the a value at runtime. This is another way of composing IO values and
so again we can use do-notation:
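A sketch of what such a composition might look like (myComputation and getMessage here are assumed example definitions):

myComputation :: IO Int
myComputation = fmap length getLine       -- e.g. a number obtained from the outside world

getMessage :: Int -> String               -- a pure function
getMessage n = "The number was " ++ show n

newComputation :: IO String
newComputation = do
    x <- myComputation        -- at runtime, x is the Int the action produced
    return (getMessage x)     -- apply the pure function to that result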

Here we're using a pure function (getMessage) to turn an Int into a String, but
we're using do notation to make it be applied to the result of an IO
computation myComputation when (after) that computation runs. The result is a
bigger IO computation, newComputation. This technique of using pure functions
in an impure context is called lifting.
Section 10.2: IO defines your program's `main` action
To make a Haskell program executable you must provide a file with a main function of type IO ():
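For example (a minimal Main.hs):

main :: IO ()
main = putStrLn "Hello world!"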

When Haskell is compiled it examines the IO data here and turns it into an executable. When we run this program it will print Hello world!.
If you have values of type IO a other than main they won't do anything.

Compiling this program and running it will have the same effect as the last
example. The code in other is ignored.
In order to make the code in other have runtime effects you have to compose it
into main. No IO values not eventually composed into main will have any
runtime effect. To compose two IO values sequentially you can use
do-notation:

When you compile and run this program it outputs

Note that the order of operations is described by how other was composed into
main and not the definition order.
Section 10.3: Checking for end-of-file
conditions
A bit counter-intuitive to the way most other languages' standard I/O libraries
do it, Haskell's isEOF does not require you to perform a read operation before
checking for an EOF condition; the runtime will do it for you.

Input:

Output:
Section 10.4: Reading all contents of
standard input into a string

Input:

Output:

Note: This program will actually print parts of the output before all of the
input has been fully read in. This means that, if, for example, you use
getContents over a 50MiB file, Haskell's lazy evaluation and garbage
collector will ensure that only the parts of the file that are currently needed
(read: indispensable for further execution) will be loaded into memory.
Thus, the 50MiB file won't be loaded into memory at once.
Section 10.5: Role and Purpose of IO
Haskell is a pure language, meaning that expressions cannot have side
effects. A side effect is anything that the expression or function does other
than produce a value, for example, modify a global counter or print to
standard output.
In Haskell, side-effectful computations (specifically, those which can have an effect on the real world) are modelled using IO. Strictly speaking, IO is a type constructor, taking a type and producing a type. For example, IO Int is the type of an I/O computation producing an Int value. The IO type is abstract, and the interface provided for IO ensures that certain illegal values (that is, functions with non-sensical types) cannot exist, by ensuring that all builtin functions which perform IO have a return type enclosed in IO.
When a Haskell program is run, the computation represented by the value named main, whose type can be IO x for any type x, is executed.
Manipulating IO values
There are many functions in the standard library providing typical IO actions
that a general purpose programming language should perform, such as
reading and writing to file handles. General IO actions are created and
combined primarily with two functions:

This function (typically called bind) takes an IO action and a function which
returns an IO action, and produces the IO action which is the result of applying
the function to the value produced by the first IO action.

This function takes any value (i.e., a pure value) and returns the IO
computation which does no IO and produces the given value. In other words,
it is a no-op I/O action.
There are additional general functions which are often used, but all can be written in terms of the two above. For example, (>>) :: IO a -> IO b -> IO b is similar to (>>=) but the result of the first action is ignored.
A simple program greeting the user using these functions:
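A sketch of such a program (the exact prompt text is illustrative):

main :: IO ()
main =
    putStrLn "What is your name?" >>
    getLine >>= \name ->
    putStrLn ("Hello, " ++ name ++ "!")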

This program also uses putStrLn :: String -> IO () and getLine :: IO String.
Note: the types of certain functions above are actually more general than those types given (namely >>=, >> and return).
IO semantics
The IO type in Haskell has very similar semantics to that of imperative
programming languages. For example, when one writes s1 ; s2 in an
imperative language to indicate executing statement s1, then statement s2, one
can write s1 >> s2 to model the same thing in Haskell.
However, the semantics of IO diverge slightly from what would be expected coming from an imperative background. The return function does not interrupt control flow - it has no effect on the program if another IO action is run in sequence. For example, return () >> putStrLn "boom" correctly prints "boom" to standard output.
The formal semantics of IO can be given in terms of simple equalities involving the functions in the previous section:
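The equalities are the usual monad laws, here written for IO:

return x >>= f   =  f x                       -- left identity
m >>= return     =  m                         -- right identity
(m >>= f) >>= g  =  m >>= (\x -> f x >>= g)   -- composition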

These laws are typically referred to as left identity, right identity, and
composition, respectively. They can be stated more naturally in terms of the
function (>=>), as follows:
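The function in question is presumably Kleisli composition, (>=>) from Control.Monad; stated with it, the laws read:

(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
f >=> g = \x -> f x >>= g

return >=> f     =  f                   -- left identity
f >=> return     =  f                   -- right identity
(f >=> g) >=> h  =  f >=> (g >=> h)     -- composition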

Lazy IO
Functions performing I/O computations are typically strict, meaning that all preceding actions in a sequence of actions must be completed before the next action is begun. Typically this is useful and expected behaviour: putStrLn "X" >> putStrLn "Y" should print "XY". However, certain library functions perform I/O lazily, meaning that the I/O actions required to produce the value are only performed when the value is actually consumed. Examples of such functions are getContents and readFile. Lazy I/O can drastically reduce the performance of a Haskell program, so when using library functions, care should be taken to note which functions are lazy.
IO and do notation
Haskell provides a simpler method of combining different IO values into
larger IO values. This special syntax is known as do notation* and is simply
syntactic sugar for usages of the >>=, >> and return functions.
The program in the previous section can be written in two different ways using do notation, the first being layout-sensitive and the second being layout-insensitive:

All three programs are exactly equivalent.


*Note that do notation is also applicable to a broader class of type
constructors called monads.
Section 10.6: Writing to stdout
Per the Haskell 2010 Language Specification the following are standard IO
functions available in Prelude, so no imports are required to use them.
putChar :: Char -> IO () - writes a char to stdout

putStr :: String -> IO () - writes a String to stdout

putStrLn :: String -> IO () - writes a String to stdout and adds a new line

Recall that you can instantiate Show for your own types using deriving:

-- In ex.hs
data Person = Person { name :: String } deriving Show

main = print (Person "Alex")  -- Person is an instance of `Show`, thanks to `deriving`

Loading & running in GHCi:


Section 10.7: Reading words from an
entire file
In Haskell, it often makes sense not to bother with file handles at all, but
simply read or write an entire file straight from disk to memory † , and do all
the partitioning/processing of the text with the pure string data structure. This
avoids mixing IO and program logic, which can greatly help avoiding bugs.
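A sketch (the file name words.txt is hypothetical):

main :: IO ()
main = do
    content <- readFile "words.txt"     -- read the whole file
    mapM_ putStrLn (words content)      -- print each word on its own line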

then the program will output

Here, mapM_ went through the list of all words in the file, and printed each of
them to a separate line with putStrLn.

† If you think this is wasteful on memory, you have a point. Actually,


Haskell's laziness can often avoid that the entire file needs to reside in
memory simultaneously... but beware, this kind of lazy IO causes its own set
of problems. For performance-critical applications, it often makes sense to
enforce the entire file to be read at once, strictly; you can do this with the
Data.Text version of readFile.
Section 10.8: Reading a line from
standard input

Input:

Output:
Section 10.9: Reading from `stdin`
As-per the Haskell 2010 Language Specification, the following are standard
IO functions available in Prelude, so no imports are required to use them.

read :: Read a => String -> a - convert a String to a value

Other, less common functions are:


getContents :: IO String - returns all user input as a single string, which is read lazily as it is needed
interact :: (String -> String) -> IO () - takes a function of type String -> String as its argument. The entire input from the standard input device is passed to this function as its argument
Section 10.10: Parsing and
constructing an object from standard
input

Input:

Output:
Section 10.11: Reading from file
handles
Like in several other parts of the I/O library, functions that implicitly use a standard stream have a counterpart in System.IO that performs the same job, but with an extra parameter at the left, of type Handle, that represents the stream being handled. For instance, getLine :: IO String has a counterpart hGetLine :: Handle -> IO String.

Input:

Output:
Chapter 11: Record Syntax
Section 11.1: Basic Syntax
Records are an extension of sum algebraic data type that allow fields to be
named:
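A sketch matching the field names used later in this section (the field types are assumed from the example values):

data RecordType = RecordType
    { aString :: String
    , aNumber :: Int
    , isTrue  :: Bool
    }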

The field names can then be used to get the named field out of the record

Records can be pattern matched against

Notice that not all fields need be named


Records are created by naming their fields, but can also be created as
ordinary sum types (often useful when the number of fields is small and not
likely to change)

If a record is created without a named field, the compiler will issue a


warning, and the resulting value will be

A field of a record can be updated by setting its value. Unmentioned fields do


not change.
> let r = RecordType {aString = "Foobar", aNumber = 42, isTrue = True}
> let r' = r {aNumber = 117}
-- r' == RecordType {aString = "Foobar", aNumber = 117, isTrue = True}

It is often useful to create lenses for complicated record types.


Section 11.2: Defining a data type with
field labels
It is possible to define a data type with field labels.

This definition differs from a normal record definition as it also defines


*record accessors which can be used to access parts of a data type.
In this example, two record accessors are defined, age and name, which allow
us to access the age and name fields respectively.

Record accessors are just Haskell functions which are automatically


generated by the compiler. As such, they are used like ordinary Haskell
functions.
By naming fields, we can also use the field labels in a number of other
contexts in order to make our code more readable.
Pattern Matching

We can bind the value located at the position of the relevant field label whilst
pattern matching to a new value (in this case x) which can be used on the
RHS of a definition.

The NamedFieldPuns extension instead allows us to just specify the field label we
want to match upon, this name is then shadowed on the RHS of a definition
so referring to name refers to the value rather than the record accessor.

When matching using RecordWildCards, all field labels are brought into scope.
(In this specific example, name and age)
This extension is slightly controversial as it is not clear how values are
brought into scope if you are not sure of the definition of Person.
Record Updates

There is also special syntax for updating data types with field labels.
Section 11.3: RecordWildCards

The pattern Client{..} brings into scope all the fields of the constructor Client, and is equivalent to the pattern

Client{ firstName = firstName, lastName = lastName, clientID = clientID }

It can also be combined with other field matchers like so:

This is equivalent to

Client{ firstName = "Joe", lastName = lastName, clientID = clientID }


Section 11.4: Copying Records while
Changing Field Values
Suppose you have this type:

data Person = Person { name :: String, age :: Int } deriving (Show, Eq)

and two values:

a new value of type Person can be created by copying from alex, specifying
which values to change:

The values of alex and anotherAlex will now be:


Section 11.5: Records with newtype
Record syntax can be used with newtype with the restriction that there is
exactly one constructor with exactly one field. The benefit here is the
automatic creation of a function to unwrap the newtype. These fields are
often named starting with run for monads, get for monoids, and un for other
types.

-- a fancy string that wants to avoid concatenation with ordinary strings

It is important to note that the record syntax is typically never used to form values; the field name is used strictly for unwrapping.
Chapter 12: Partial Application
Section 12.1: Sections
Sectioning is a concise way to partially apply arguments to infix operators.
For example, if we want to write a function which adds "ing" to the end of a
word we can use a section to succinctly define a function.

Notice how we have partially applied the second argument. Normally, we can
only partially apply the arguments in the specified order.
We can also use left sectioning to partially apply the first argument.

We could equivalently write this using normal prefix partial application:

A Note on Subtraction
Beginners often incorrectly section negation.

This does not work as -1 is parsed as the literal -1 rather than the sectioned
operator - applied to 1. The subtract function exists to circumvent this issue.
Section 12.2: Partially Applied Adding
Function
We can use partial application to "lock" the first argument. After applying
one argument we are left with a function which expects one more argument
before returning the result.

We can then use addOne in order to add one to an Int.


Section 12.3: Returning a Partially
Applied Function
Returning partially applied functions is one technique to write concise code.

In this example (+x) is a partially applied function. Notice that the second
parameter to the add function does not need to be specified in the function
definition.
The result of calling add 5 2 is 7.
Chapter 13: Monoid
Section 13.1: An instance of Monoid
for lists
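A sketch of the instance (in modern base the appending operation actually lives in the Semigroup instance, with mappend defaulting to (<>)):

instance Monoid [a] where
    mempty  = []
    mappend = (++)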
Section 13.2: Collapsing a list of
Monoids into a single value
mconcat :: [a] -> a is another method of the Monoid typeclass:

Its default definition is mconcat = foldr mappend mempty.


Section 13.3: Numeric Monoids
Numbers are monoidal in two ways: addition with 0 as the unit, and
multiplication with 1 as the unit. Both are equally valid and useful in
different circumstances. So rather than choose a preferred instance for
numbers, there are two newtypes, Sum and Product to tag them for the different
functionality.

This effectively allows for the developer to choose which functionality to use
by wrapping the value in the appropriate newtype.
Section 13.4: An instance of Monoid
for ()
() is a Monoid. Since there is only one value of type (), there's only one thing mempty and mappend could do:
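Roughly (again, modern base routes the combining operation through Semigroup):

instance Monoid () where
    mempty      = ()
    mappend _ _ = ()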
Chapter 14: Category Theory
Section 14.1: Category theory as a
system for organizing abstraction
Category theory is a modern mathematical theory and a branch of abstract
algebra focused on the nature of connectedness and relation. It is useful for
giving solid foundations and common language to many highly reusable
programming abstractions. Haskell uses Category theory as inspiration for
some of the core typeclasses available in both the standard library and several
popular third-party libraries.
An example
The Functor typeclass says that if a type F instantiates Functor (for which we write Functor F) then we have a generic operation

fmap :: (a -> b) -> F a -> F b

which lets us "map" over F. The standard (but imperfect) intuition is that F a is a container full of values of type a and fmap lets us apply a transformation to each of these contained elements. An example is Maybe

instance Functor Maybe where
  fmap f Nothing  = Nothing    -- if there are no values contained, do nothing
  fmap f (Just a) = Just (f a) -- else, apply our transformation

Given this intuition, a common question is "why not call Functor something
obvious like Mappable?".
A hint of Category Theory
The reason is that Functor fits into a set of common structures in Category
theory and therefore by calling Functor "Functor" we can see how it connects
to this deeper body of knowledge.
In particular, Category Theory is highly concerned with the idea of arrows from one place to another. In Haskell, the most important set of arrows are the function arrows a -> b. A common thing to study in Category Theory is how one set of arrows relates to another set. In particular, for any type constructor F, the set of arrows of the shape F a -> F b are also interesting.
So a Functor is any F such that there is a connection between normal Haskell arrows a -> b and the F-specific arrows F a -> F b. The connection is defined by fmap and we also recognize a few laws which must hold

All of these laws arise naturally from the Category Theoretic


interpretation of Functor and would not be as obviously necessary if we
only thought of Functor as relating to "mapping over elements".
Section 14.2: Haskell types as a
category
Definition of the category
The Haskell types along with functions between types form (almost †) a category. We have an identity morphism (function) id :: a -> a for every object (type) a; and composition of morphisms (.) :: (b -> c) -> (a -> b) -> (a -> c), which obey category laws:

We usually call this category Hask.


Isomorphisms
In category theory, we have an isomorphism when we have a morphism
which has an inverse, in other words, there is a morphism which can be
composed with it in order to create the identity. In Hask this amounts to have
a pair of morphisms f,g such that:

If we find a pair of such morphisms between two types, we call them


isomorphic to one another.
An example of two isomorphic types would be ((), a) and a for some a. We can construct the two morphisms:

And we can check that f . g == id == g . f.


Functors
A functor, in category theory, goes from a category to another, mapping
objects and morphisms. We are working only on one category, the category
Hask of Haskell types, so we are going to see only functors from Hask to
Hask, those functors, whose origin and destination category are the same, are
called endofunctors. Our endofunctors will be the polymorphic types taking
a type and returning another:

To obey the categorical functor laws (preserve identities and composition) is


equivalent to obey the Haskell functor laws:

So, we have, for example, that [], Maybe or ((->) r) are functors in Hask.
Monads
A monad in category theory is a monoid on the category of endofunctors. This category has endofunctors (F :: * -> *) as objects and natural transformations (transformations forall a. F a -> G a between them) as morphisms.
A monoid object can be defined on a monoidal category, and is a type having
two morphisms:

We can translate this roughly to the category of Hask endofunctors as:

And, to obey the monad laws is equivalent to obey the categorical monoid
object laws.
† In fact, the class of all types along with the class of functions between types
do not strictly form a category in Haskell, due to the existence of undefined.
Typically this is remedied by simply defining the objects of the Hask
category as types without bottom values, which excludes non-terminating
functions and infinite values (codata). For a detailed discussion of this topic,
see here.
Section 14.3: Definition of a Category
A category C consists of:
A collection of objects called Obj(C);
A collection (called Hom(C)) of morphisms between those objects. If a and b are in Obj(C), then a morphism f in Hom(C) is typically denoted f : a -> b, and the collection of all morphisms between a and b is denoted hom(a, b);
A special morphism called the identity morphism - for every a : Obj(C) there exists a morphism id : a -> a;
A composition operator (.), taking two morphisms f : a -> b, g : b -> c and producing a morphism a -> c

which obey the following laws:
For all f : a -> x, g : x -> b, then id . f = f and g . id = g

For all f : a -> b, g : b -> c and h : c -> d, then h . (g . f) = (h . g) . f

In other words, composition with the identity morphism (on either the left or
right) does not change the other morphism, and composition is associative.
In Haskell, the Category is defined as a typeclass in Control.Category:

In this case, cat :: k -> k -> * objectifies the morphism relation - there exists a morphism cat a b if and only if cat a b is inhabited (i.e. has a value). a, b and c are all in Obj(C). Obj(C) itself is represented by the kind k - for example, when k ~ *, as is typically the case, objects are types.
The canonical example of a Category in Haskell is the function category:
Another common example is the Category of Kleisli arrows for a Monad:
Section 14.4: Coproduct of types in
Hask
Intuition
The categorical coproduct of two types A and B should contain the minimal information necessary to contain inside an instance of type A or type B. We can see now that the intuitive coproduct of two types should be Either a b. Other candidates, such as Either a (b, Bool), would contain a part of unnecessary information, and they wouldn't be minimal.
The formal definition is derived from the categorical definition of coproduct.
Categorical coproducts
A categorical coproduct is the dual notion of a categorical product. It is
obtained directly by reversing all the arrows in the definition of the product.
The coproduct of two objects X,Y is another object Z with two inclusions:
i_1: X → Z and i_2: Y → Z; such that any other two morphisms from X
and Y to another object decompose uniquely through those inclusions. In
other words, if there are two morphisms f ₁ : X → W and f ₂ : Y → W,
exists a unique morphism g : Z → W such that g ○ i ₁ = f ₁ and g ○ i ₂ =
f₂
Coproducts in Hask
The translation into the Hask category is similar to the translation of the
product:

The coproduct type of two types A and B in Hask is Either a b or any other type isomorphic to it:
Section 14.5: Product of types in Hask
Categorical products
In category theory, the product of two objects X, Y is another object Z with
two projections: π₁ : Z → X and π₂ : Z → Y; such that any other two
morphisms from another object decompose uniquely through those
projections. In other words, if there exist f ₁ : W → X and f ₂ : W → Y,
exists a unique morphism g : W → Z such that π₁ ○ g = f ₁ and π₂ ○ g
=f₂.
Products in Hask
This translates into the Hask category of Haskell types as follows, Z is
product of A, B when:

The product type of two types A, B which follows the law stated above is the tuple of the two types, (A, B), and the two projections are fst and snd. We can check that it follows the above rule: if we have two functions f1 :: W -> A and f2 :: W -> B, we can decompose them uniquely as follows:

And we can check that the decomposition is correct:

Uniqueness up to isomorphism
The choice of (A, B) as the product of A and B is not unique. Another logical and equivalent choice would have been (B, A): we could have chosen it as the product instead, and we could even find a decomposition function like the above also following the rules:

This is because the product is not unique, but unique up to isomorphism. Every two products of A and B do not have to be equal, but they should be isomorphic. As an example, the two different products we have just defined, (A, B) and (B, A), are isomorphic:

Uniqueness of the decomposition


It is important to remark that the decomposition function must also be unique. There are types which follow all the rules required to be the product, but for which the decomposition is not unique. As an example, we can try to use (A, (B, Bool)) with projections fst and fst . snd as a product of A and B:

We can check that it does work:

But the problem here is that we could have written another decomposition, namely:

decompose3' :: (W -> A) -> (W -> B) -> (W -> (A,(B,Bool)))
decompose3' f1 f2 = (\x -> (f1 x, (f2 x, False)))

And, as the decomposition is not unique, (A, (B, Bool)) is not the product of A and B in Hask.
Section 14.6: Haskell Applicative in
terms of Category Theory
A Haskell Functor allows one to map any type a (an object of Hask) to a type F a and also map a function a -> b (a morphism of Hask) to a function with type F a -> F b. This corresponds to the Category Theory definition in the sense that the functor preserves basic category structure.
A monoidal category is a category that has some additional structure:
A tensor product (see Product
of types in Hask) A tensor unit
(unit object)
Taking a pair as our product, this definition can be translated to Haskell in
the following way:

The Applicative class is equivalent to this Monoidal one and thus can be
implemented in terms of it:
Chapter 15: Lists
Section 15.1: List basics
The type constructor for lists in the Haskell Prelude is []. The type declaration
for a list holding values of type Int is written as follows:

Lists in Haskell are homogeneous sequences, which is to say that all elements
must be of the same type. Unlike tuples, list type is not affected by length:

Lists are constructed using two constructors:


[] constructs an empty list.
(:),
pronounced "cons", prepends elements to a list. Consing x (a value
of type a) onto xs (a list of values of the same type a) creates a new list,
whose head (the first element) is x, and tail (the rest of the elements) is xs.
We can define simple lists as follows:

Note that (++), which can be used to build lists, is defined recursively in terms of (:) and [].
Section 15.2: Processing lists
To process lists, we can simply pattern match on the constructors of the list
type:
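A sketch (the function name listSum is illustrative):

listSum :: [Int] -> Int
listSum []       = 0                 -- the empty-list constructor
listSum (x : xs) = x + listSum xs    -- the cons constructor: head x, tail xs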

We can match more values by specifying a more elaborate pattern:

Note that in the above example, we had to provide a more exhaustive pattern
match to handle cases where an odd length list is given as an argument.
The Haskell Prelude defines many built-ins for handling lists, like map, filter,
etc.. Where possible, you should use these instead of writing your own
recursive functions.
Section 15.3: Ranges
Creating a list from 1 to 10 is simple using range notation:

To specify a step, add a comma and the next element after the start element:

Note that Haskell always takes the step as the arithmetic difference between
terms, and that you cannot specify more than the first two elements and the
upper bound:

To generate a range in descending order, always specify the negative step:

Because Haskell is non-strict, the elements of the list are evaluated only if they are needed, which allows us to use infinite lists. [1..] is an infinite list starting from 1. This list can be bound to a variable or passed as a function argument:

take 5 [1..] -- returns [1,2,3,4,5] even though [1..] is infinite

Be careful when using ranges with floating-point values, because it accepts


spill-overs up to half-delta, to fend off rounding issues:

Ranges work not just with numbers but with any type that implements Enum
typeclass. Given some enumerable variables a, b, c, the range syntax is
equivalent to calling these Enum methods:

For example, with Bool it's


Notice the space after False, to prevent this from being parsed as a module name qualification (i.e. False.. would be parsed as . from a module False).
Section 15.4: List Literals
Section 15.5: List Concatenation
Section 15.6: Accessing elements in lists
Access the nth element of a list (zero-based):

Note that !! is a partial function, so certain inputs produce errors:

There's also Data.List.genericIndex, an overloaded version of !!, which accepts any


Integral value as the index.

When implemented as singly-linked lists, these operations take O(n) time. If


you frequently access elements by index, it's probably better to use Data.Vector
(from the vector package) or other data structures.
Section 15.7: Basic Functions on Lists
Section 15.8: Transforming with `map`
Often we wish to convert, or transform the contents of a collection (a list, or
something traversable). In Haskell we use map:
Section 15.9: Filtering with `filter`
Given a list:

we can filter a list with a predicate using filter :: (a -> Bool) -> [a] -> [a]:

Of course it's not just about numbers:


Section 15.10: foldr
This is how the right fold is implemented:

The right fold, foldr, associates to the right. That is:

The reason is that foldr is evaluated like this (look at the inductive step of
foldr):

foldr (+) 0 [1, 2, 3]                  -- foldr (+) 0 [1,2,3]
(+) 1 (foldr (+) 0 [2, 3])             -- 1 + foldr (+) 0 [2,3]
(+) 1 ((+) 2 (foldr (+) 0 [3]))        -- 1 + (2 + foldr (+) 0 [3])
(+) 1 ((+) 2 ((+) 3 (foldr (+) 0 []))) -- 1 + (2 + (3 + foldr (+) 0 []))
(+) 1 ((+) 2 ((+) 3 0))                -- 1 + (2 + (3 + 0))

The last line is equivalent to 1 + (2 + (3 + 0)), because ((+) 3 0) is the same as (3 + 0).
Section 15.11: Zipping and
Unzipping Lists
zip takes two lists and returns a list of corresponding pairs:

Zipping two lists with a function:

Unzipping a list:
Section 15.12: foldl
This is how the left fold is implemented. Notice how the order of the
arguments in the step function is flipped compared to foldr (the right fold):

The left fold, foldl, associates to the left. That is:

The reason is that foldl is evaluated like this (look at foldl's inductive step):

foldl (+) 0 [1, 2, 3]                  -- foldl (+) 0 [1,2,3]
foldl (+) ((+) 0 1) [2, 3]             -- foldl (+) (0 + 1) [2,3]
foldl (+) ((+) ((+) 0 1) 2) [3]        -- foldl (+) ((0 + 1) + 2) [3]
foldl (+) ((+) ((+) ((+) 0 1) 2) 3) [] -- foldl (+) (((0 + 1) + 2) + 3) []
((+) ((+) ((+) 0 1) 2) 3)              -- (((0 + 1) + 2) + 3)

The last line is equivalent to ((0 + 1) + 2) + 3. This is because (f a b) is the same as (a `f` b) in general, and so ((+) 0 1) is the same as (0 + 1) in particular.
Chapter 16: Sorting Algorithms
Section 16.1: Insertion Sort
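A sketch of insertion sort (the names are illustrative; the original version may differ in detail):

insertSorted :: Ord a => a -> [a] -> [a]
insertSorted x [] = [x]
insertSorted x (y:ys)
    | x <= y    = x : y : ys
    | otherwise = y : insertSorted x ys

insertionSort :: Ord a => [a] -> [a]
insertionSort = foldr insertSorted []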

Example use:

Result:
Section 16.2: Permutation Sort
Also known as bogosort.

Extremely inefficient (on today's


computers).
Section 16.3: Merge
Sort
Ordered merging of two

ordered lists Preserving the

duplicates:

Top-down version:

firstHalf  xs = let { n = length xs } in take (div n 2) xs
secondHalf xs = let { n = length xs } in drop (div n 2) xs

It is defined this way for clarity, not for efficiency.


Example use:

Result:

Bottom-up version:
Section 16.4: Quicksort
Section 16.5: Bubble sort
Section 16.6: Selection sort
Selection sort selects the minimum element, repeatedly, until the list is
empty.
Chapter 17: Type Families
Section 17.1: Datatype Families
Data families can be used to build datatypes that have different
implementations based on their type arguments.

In the above declaration, Nil :: List Char and UnitList :: Int -> List ().
Associated data families
Data families can also be associated with typeclasses. This is often useful for
types with “ helper objects ” , which are required for generic typeclass
methods but need to contain different information depending on the concrete
instance. For instance, indexing locations in a list just requires a single
number, whereas in a tree you need a number to indicate the path at each
node:
Section 17.2: Type Synonym Families
Type synonym families are just type-level functions: they associate
parameter types with result types. These come in three different varieties.
Closed type-synonym families
These work much like ordinary value-level Haskell functions: you specify
some clauses, mapping certain types to others:
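A sketch of a closed type family (the family name Element and its clauses are illustrative):

{-# LANGUAGE TypeFamilies #-}

type family Element c where
    Element [a]       = a
    Element (Maybe a) = a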

Open type-synonym families


These work more like typeclass instances: anybody can add more clauses in
other modules.

Class-associated type synonyms


An open type family can also be combined with an actual class. This is
usually done when, like with associated data families, some class method
needs additional helper objects, and these helper objects can be different for
different instances but may possibly also shared. A good example is
VectorSpace class:
Note how in the first two instances, the implementation of Scalar is the same.
This would not be possible with an associated data family: data families are
injective, type-synonym families aren't.
While non-injectivity opens up some possibilities like the above, it also
makes type inference more difficult. For instance, the following will not
typecheck:

In this case, the compiler can't know what instance to use, because the argument to bar is itself just a polymorphic Num literal. And the type function Bar can't be resolved in "inverse direction", precisely because it's not injective † and hence not invertible (there could be more than one type with Bar a = String).

† With only these two instances, it is actually injective, but the compiler can't
know somebody won't add more instances later on and thereby break the
behaviour.
Section 17.3: Injectivity
Type Families are not necessarily injective. Therefore, we cannot infer the
parameter from an application. For example, in servant, given a type Server a we
cannot infer the type a. To solve this problem, we can use Proxy. For example, in servant, the serve function has type ... Proxy a -> Server a -> .... We can infer a from Proxy a because Proxy is defined by data, which is injective.
Chapter 18: Monads
A monad is a data type of composable actions. Monad is the class of type constructors whose values represent such actions. Perhaps IO is the most recognizable one: a value of IO a is a "recipe for retrieving an a value from the real world".
We say a type constructor m (such as [] or Maybe) forms a monad if there is an instance Monad m satisfying certain laws about composition of actions. We can then reason about m a as an "action whose result has type a".
Section 18.1: Definition of Monad
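The core of the class (modern base also has Applicative as a superclass and provides (>>)):

class Monad m where
    return :: a -> m a
    (>>=)  :: m a -> (a -> m b) -> m b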

The most important function for dealing with monads is the bind operator
>>=:

>>= sequences two actions together by piping the result from the first
action to the second.
The other function defined by Monad is:

Its name is unfortunate: this return has nothing to do with the return keyword
found in imperative programming languages.
return x is the trivial action yielding x as its result. (It is trivial in the
following sense:)

return x >>= f ≡ f x  -- "left identity" monad law
x >>= return   ≡ x    -- "right identity" monad law
Section 18.2: No general way to extract
value from a monadic computation
You can wrap values into actions and pipe the result of one computation into
another:

However, the definition of a Monad doesn't guarantee the existence of a function of type Monad m => m a -> a.
That means there is, in general, no way to extract a value from a monadic computation (i.e. "unwrap" it). This is the case for many instances:

Specifically, there is no function IO a -> a, which often confuses beginners; see this example.
Section 18.3: Monad as a Subclass of
Applicative
As of GHC 7.10, Applicative is a superclass of Monad (i.e., every type which is a Monad must also be an Applicative). All the methods of Applicative (pure, <*>) can be implemented in terms of methods of Monad (return, >>=).
It is obvious that pure and return serve equivalent purposes, so pure = return. The definition for <*> is relatively clear too:

mf <*> mx = do { f <- mf; x <- mx; return (f x) }
       -- = mf >>= (\f -> mx >>= (\x -> return (f x)))
       -- = [r | f <- mf, x <- mx, r <- return (f x)]   -- with MonadComprehensions
       -- = [f x | f <- mf, x <- mx]

This function is defined as ap in the standard libraries.


Thus if you have already defined an instance of Monad for a type, you
effectively can get an instance of Applicative for it "for free" by defining

As with the monad laws, these equivalencies are not enforced, but
developers should ensure that they are always upheld.
Section 18.4: The Maybe monad
Maybe is used to represent possibly empty values - similar to null in other
languages. Usually it is used as the output type of functions that can fail in
some way.
Consider the following function:
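A sketch consistent with the description below:

halve :: Int -> Maybe Int
halve x
    | even x    = Just (x `div` 2)
    | otherwise = Nothing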

Think of halve as an action, depending on an Int, that tries to halve the integer,
failing if it is odd.
How do we halve an integer three times?

takeOneEighth :: Int -> Maybe Int                -- (after you read the 'do' sub-section:)
takeOneEighth x =
  case halve x of                                -- do {
    Nothing      -> Nothing
    Just oneHalf ->                              --   oneHalf    <- halve x
      case halve oneHalf of
        Nothing         -> Nothing
        Just oneQuarter ->                       --   oneQuarter <- halve oneHalf
          case halve oneQuarter of
            Nothing        -> Nothing
            Just oneEighth ->                    --   oneEighth  <- halve oneQuarter
              Just oneEighth                     --   return oneEighth }

takeOneEighth :: Int -> Maybe Int
takeOneEighth x = halve x >>= halve >>= halve
       -- or,
       -- return x >>= halve >>= halve >>= halve             -- which is parsed as
       -- (((return x) >>= halve) >>= halve) >>= halve       -- which can also be written as
       -- (halve =<<) . (halve =<<) . (halve =<<) $ return x -- or, equivalently, as
       -- halve <=< halve <=< halve $ x

Kleisli composition <=< is defined as (g <=< f) x = g =<< f x, or equivalently as (f >=> g) x = f x >>= g. With it the above definition becomes just

takeOneEighth :: Int -> Maybe Int
takeOneEighth = halve <=< halve <=< halve    -- infixr 1 <=<
       -- or, equivalently,
       --     = halve >=> halve >=> halve    -- infixr 1 >=>

There are three monad laws that should be obeyed by every monad, that is
every type which is an instance of the Monad typeclass:

where m is a monad, f has type a -> m b, and g has type b -> m c.
Or equivalently, using the >=> Kleisli composition operator defined above:
1. return >=> g = g -- do { y <- return x ; g y } == g x
2. f >=> return = f -- do { y <- f x ; return y } == f x
3. (f >=> g) >=> h = f >=> (g >=> h) -- do { z <- do { y <- f x; g y } ; h z }

-- == do { y <- f x ; do { z <- g y; h z } }

Obeying these laws makes it a lot easier to reason about the monad, because
it guarantees that using monadic functions and composing them behaves in a
reasonable way, similar to other monads.
Let's check if the Maybe monad obeys the three monad laws.
1. The left identity law - return x >>= f = f x

2. The right identity law - m >>= return = m

3. The associativity law - (m >>= f) >>= g = m >>= (\x -> f x >>= g)


Section 18.5: IO monad
There is no way to get a value of type a out of an expression of type IO a and there shouldn't be. This is actually a large part of why monads are used to model IO.
An expression of type IO a can be thought of as representing an action that can interact with the real world and, if executed, would result in something of type a. For example, getLine :: IO String from the prelude doesn't mean that underneath getLine there is some specific string that I can extract - it means that getLine represents the action of getting a line from standard input.
Not surprisingly, main :: IO (), since a Haskell program does represent a computation/action that interacts with the real world.
The things you can do to expressions of type IO a because IO is a monad:

Sequence two actions using (>>) to produce a new action that executes the first action, discards whatever value it produced, and then executes the second action.

Sometimes you don't want to discard the value that was produced in the first action - you'd actually like it to be fed into a second action. For that, we have >>=. For IO, it has type (>>=) :: IO a -> (a -> IO b) -> IO b.

Take a normal value and convert it into an action which just immediately returns the value you gave it (this is return). This function is less obviously useful until you start using do notation.

More from the Haskell Wiki on the IO


monad here.
Section 18.6: List Monad
The lists form a monad. They have a monad instantiation equivalent to this one:

instance Monad [] where
  return x = [x]
  xs >>= f = concat (map f xs)


We can use them to emulate non-determinism in our computations. When we use xs >>= f, the function f :: a -> [b] is mapped over the list xs, obtaining a list of lists of results of each application of f over each element of xs, and all the lists of results are then concatenated into one list of all the results. As an example, we compute a sum of two non-deterministic numbers using do-notation, the sum being represented by a list of sums of all pairs of integers from two lists, each list representing all possible values of a non-deterministic number:

Or equivalently, using liftM2 from Control.Monad:

we obtain:
Section 18.7: do-notation
do-notation is syntactic sugar for monads. Here are the rules:

do { x <- mx; do { y <- my; ... } }   is equivalent to   do { x <- mx; y <- my; ... }

do { let a = b; ... }                 is equivalent to   let a = b in do { ... }

do { m; e }                           is equivalent to   m >> (do { e })

do { x <- m; e }                      is equivalent to   m >>= (\x -> do { e })

do { m }                              is equivalent to   m
For example, these definitions are equivalent:
Chapter 19: Stack
Section 19.1: Profiling with Stack
Configure profiling for a project via stack. First build the project with the --profile flag:

GHC flags (like -prof) are not required in the cabal file for this to work. stack will automatically turn on profiling for both the library and executables in the project. The next time an executable runs in the project, the usual +RTS flags can be used:
Section 19.2: Structure
File structure
A simple project has the following files included in it:

In the folder src there is a file named Main.hs. This is the "starting point" of the
helloworld project. By default Main.hs contains a simple "Hello, World!"
program.
Main.hs

main = putStrLn "hello world"

Running the program


Make sure you are in the directory helloworld and run:
Section 19.3: Build and Run a Stack
Project
In this example our project name is "helloworld" which was created with stack
new helloworld simple

First we have to build the project with stack build and then we can run it with
Section 19.4: Viewing dependencies
To find out what packages your project directly depends on, you can simply
use this command:

This way you can find out what version of your dependencies were actually pulled down by stack.
Haskell projects frequently find themselves pulling in a lot of libraries
indirectly, and sometimes these external dependencies cause problems that
you need to track down. If you find yourself with a rogue external
dependency that you'd like to identify, you can grep through the entire
dependency graph and identify which of your dependencies is ultimately
pulling in the undesired package:

stack dot prints out a dependency graph in text form that can be searched. It can also be viewed:

You can also set the depth of the dependency graph if you want:
Section 19.5: Stack install
By running the command

Stack will copy an executable file to the folder


Section 19.6: Installing Stack
Mac OSX
Using Homebrew:
Section 19.7: Creating a simple project
To create a project called helloworld run:

This will create a directory called helloworld with the files necessary for a Stack
project.
Section 19.8: Stackage Packages and
changing the LTS (resolver) version
Stackage is a repository for Haskell packages. We can add these packages to
a stack project.
Adding lens to a project.
In a stack project, there is a file called stack.yaml. In stack.yaml there is a segment
that looks like:

Stackage keeps a list of packages for every revision of lts. In our case we want the list of packages for lts-6.8. To find these packages visit:

https://www.stackage.org/lts-6.8 # if a different version is used, change 6.8 to the correct resolver


number.

Looking through the packages, there is a Lens-4.13.


We can now add the lens package by modifying the build-depends section of helloworld.cabal:

to:

Obviously, if we want to change to a newer LTS (after it's released), we just change the resolver number, e.g.:

With the next stack build Stack will use the LTS 6.9 version and hence
download some new dependencies.
Chapter 20: Generalized Algebraic
Data Types
Section 20.1: Basic Usage
When the GADTs extension is enabled, besides regular data declarations, you
can also declare generalized algebraic datatypes as follows:

A GADT declaration lists the types of all constructors a datatype has,


explicitly. Unlike regular datatype declarations, the type of a constructor can
be any N-ary (including nullary) function that ultimately results in the
datatype applied to some arguments.
In this case we've declared that the type DataType has three constructors:
Constr1, Constr2 and Constr3.

The Constr1 constructor is no different from one declared with a regular data declaration:

data DataType a = Constr1 Int a (Foo a) | ...
Constr2, however, requires that a has an instance of Show, and so when using the
constructor the instance would need to exist. On the other hand, when
pattern-matching on it, the fact that a is an instance of Show comes into scope,
so you can write:

Note that the Show a constraint doesn't appear in the type of the function, and is only visible in the code to the right of ->.
Constr3 has type DataType Int, which means that whenever a value of type DataType a is a Constr3, it is known that a ~ Int. This information, too, can be recovered with a pattern match.
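A minimal sketch of such a declaration, consistent with the constructors described above (Foo is just a placeholder type here, since the original code block is not preserved):

    {-# LANGUAGE GADTs #-}

    data Foo a = Foo a          -- placeholder; the original leaves Foo unspecified

    data DataType a where
        Constr1 :: Int -> a -> Foo a -> DataType a
        Constr2 :: Show a => a -> DataType a
        Constr3 :: DataType Int

    -- pattern-matching on Constr2 brings the Show a constraint into scope
    showConstr2 :: DataType a -> String
    showConstr2 (Constr2 x) = show x
    showConstr2 _           = "not a Constr2"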
Chapter 21: Recursion Schemes
Section 21.1: Fixed points
Fix takes a "template" type and ties the recursive knot, layering the template
like a lasagne.

Inside a Fix f we find a layer of the template f. To fill in f's parameter, Fix f
plugs in itself. So when you look inside the template f you find a recursive
occurrence of Fix f.
Here is how a typical recursive datatype can be translated into our framework
of templates and fixed points. We remove recursive occurrences of the type
and mark their positions using the r parameter.
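A minimal sketch, assuming the usual definitions (the ListF name is illustrative):

    {-# LANGUAGE DeriveFunctor #-}

    newtype Fix f = Fix { unFix :: f (Fix f) }

    -- a list "template": the recursive position is marked with the r parameter
    data ListF a r = NilF | ConsF a r
        deriving Functor

    type List a = Fix (ListF a)

    nil :: List a
    nil = Fix NilF

    cons :: a -> List a -> List a
    cons x xs = Fix (ConsF x xs)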
Section 21.2: Primitive recursion
Paramorphisms model primitive recursion. At each iteration of the fold, the
folding function receives the subtree for further processing.

The Prelude's tails can be modelled as a paramorphism.


Section 21.3: Primitive corecursion
Apomorphisms model primitive corecursion. At each iteration of the unfold,
the unfolding function may return either a new seed or a whole subtree.

Note that apo and para are dual. The arrows in the type are flipped; the tuple
in para is dual to the Either in apo, and the implementations are mirror images
of each other.
Section 21.4: Folding up a structure
one layer at a time
Catamorphisms, or folds, model primitive recursion. cata tears down a fixpoint
layer by layer, using an algebra function (or folding function) to process each
layer. cata requires a Functor instance for the template type f.
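Reusing the Fix and ListF sketch from Section 21.1 above, a cata and a small fold might look like this (assumed, not taken from the original):

    cata :: Functor f => (f a -> a) -> Fix f -> a
    cata alg = alg . fmap (cata alg) . unFix

    -- summing the elements of a list built from the ListF template
    sumList :: Fix (ListF Int) -> Int
    sumList = cata alg
      where
        alg NilF        = 0
        alg (ConsF x s) = x + s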
Section 21.5: Unfolding a structure one
layer at a time
Anamorphisms, or unfolds, model primitive corecursion. ana builds up a
fixpoint layer by layer, using a coalgebra function (or unfolding function) to
produce each new layer. ana requires a Functor instance for the template type f.

Note that ana and cata are dual. The types and implementations are mirror
images of one another.
Section 21.6: Unfolding and then
folding, fused
It's common to structure a program as building up a data structure and then
collapsing it to a single value. This is called a hylomorphism or refold. It's
possible to elide the intermediate structure altogether for improved
efficiency.

hylo :: Functor f => (a -> f a) -> (f b -> b) -> a -> b
hylo f g = g . fmap (hylo f g) . f                              -- no mention of Fix!

Derivation:

hylo f g = cata g . ana f
         = g . fmap (cata g) . unFix . Fix . fmap (ana f) . f   -- definition of cata and ana
         = g . fmap (cata g) . fmap (ana f) . f                 -- unFix . Fix = id
         = g . fmap (cata g . ana f) . f                        -- functor law
         = g . fmap (hylo f g) . f                              -- definition of hylo
Chapter 22: Data.Text
Section 22.1: Text Literals
The OverloadedStrings language extension allows the use of normal string literals
to stand for Text values.
Section 22.2: Checking if a Text is a
substring of another Text

isInfixOf :: Text -> Text -> Bool checks whether a Text is contained anywhere
within another Text.

isPrefixOf :: Text -> Text -> Bool checks whether a Text appears at the beginning
of another Text.

isSuffixOf :: Text -> Text -> Bool checks whether a Text appears at the end of
another Text.
Section 22.3: Stripping whitespace

strip removes whitespace from the start and end of a Text value.

stripStart removes whitespace only from the start.

stripEnd removes whitespace only from the end.

filter can be used to remove whitespace, or other characters, from the middle.
Section 22.4: Indexing Text

Characters at specific indices can be returned by the index function.

The findIndex function takes a predicate of type (Char -> Bool) and a Text, and returns the index of the first character satisfying the predicate, or Nothing if no such character occurs.

The count function returns the number of times a query Text occurs within
another Text.
Section 22.5: Splitting Text Values

splitOn breaks a Text up into a list of Texts on occurrences of a substring.

splitOn is the inverse of intercalate.

split breaks a Text value into chunks on characters that satisfy a Boolean predicate.
Section 22.6: Encoding and Decoding
Text
Encoding and decoding functions for a variety of Unicode encodings can be
found in the Data.Text.Encoding module.

Note that decodeUtf8 will throw an exception on invalid input. If you want to handle invalid UTF-8 yourself, use decodeUtf8With:

ghci> decodeUtf8With (\errorDescription input -> Nothing) messyOutsideData


Chapter 23: Using GHCi
Section 23.1: Breakpoints with GHCi
GHCi supports imperative-style breakpoints out of the box with interpreted code (code that's been :loaded).
With the following program:

loaded into GHCi:

We can now set breakpoints using line numbers:

and GHCi will stop at the relevant line when we run the function:

It might be confusing where we are in the program, so we can use :list to clarify:

We can print variables, and continue execution too:


Section 23.2: Quitting GHCi
You can quit GHCi simply with :q or :quit.

Alternatively, the shortcut CTRL + D (Cmd + D on OSX) has the same effect as :q.
Section 23.3: Reloading an already loaded file
If you have loaded a file into GHCi (e.g. using :l filename.hs) and you have changed the file in an editor outside of GHCi, you must reload the file with :r or :reload in order to make use of the changes; hence you don't need to type the filename again.
Section 23.4: Starting GHCi
Type ghci at a shell prompt to start GHCi.
Section 23.5: Changing the GHCi
default prompt
By default, GHCi's prompt shows all the modules you have loaded into your interactive session. If you have many modules loaded this can get long:

The :set prompt command changes the prompt for this interactive session.

To change the prompt permanently, add :set prompt "foo> " to the GHCi config file.
Section 23.6: The GHCi
configuration file
GHCi uses a configuration file in ~/.ghci. A configuration file consists of a sequence of commands which GHCi will execute on startup.
Section 23.7: Loading a file
Section 23.8: Multi-line statements
The :{ instruction begins multi-line mode and :} ends it. In multi-line mode
GHCi will interpret newlines as semicolons, not as the end of an instruction.
Chapter 24: Strictness
Section 24.1: Bang Patterns
Patterns annotated with a bang (!) are evaluated strictly instead of lazily.

In this example, x and z will both be evaluated to weak head normal form
before returning the list. It's equivalent to:

Bang patterns are enabled using the BangPatterns language extension.
Section 24.2: Lazy patterns
Lazy, or irrefutable, patterns (denoted with the syntax ~pat) are patterns that always match, without even looking at the matched value. This means lazy
patterns will match even bottom values. However, subsequent uses of
variables bound in sub-patterns of an irrefutable pattern will force the pattern
matching to occur, evaluating to bottom unless the match succeeds.
The following function is lazy in its argument:

and so we get

The following function is written with a lazy pattern but is in fact using the
pattern's variable which forces the match, so will fail for Left arguments:
Here act1 works on inputs that parse to any list of strings, whereas in act2 the putStrLn s1 needs the value of s1, which forces the pattern matching for [s1, s2], so it works only for lists of exactly two strings:
Section 24.3: Normal forms
This example provides a brief overview - for a more in-depth explanation of
normal forms and examples, see this question.
Reduced normal form
The reduced normal form (or just normal form, when the context is clear) of an expression is the result of evaluating all reducible subexpressions in the given expression. Due to the non-strict semantics of Haskell (typically called laziness), a subexpression is not reducible if it is under a binder (i.e. a lambda abstraction \x -> ..). The normal form of an expression has the property that if it exists, it is unique.
In other words, it does not matter (in terms of denotational semantics) in
which order you reduce subexpressions. However, the key to writing
performant Haskell programs is often ensuring that the right expression is
evaluated at the right time, i.e. understanding the operational semantics.
An expression whose normal form is itself is said to be in normal form.
Some expressions, e.g. let x = 1:x in x, have no normal form, but are still productive. The example expression still has a value, if one admits infinite values, which here is the list [1,1, ...]. Other expressions, such as let y = 1+y in y, have no value, or their value is undefined.

Weak head normal form


The RNF corresponds to fully evaluating an expression - likewise, the weak head normal form (WHNF) corresponds to evaluating to the head of the expression. The head of an expression e is fully evaluated if e is an application Con e1 e2 .. en and Con is a constructor; or an abstraction \x -> e1; or a partial application f e1 e2 .. en, where partial application means f takes more than n arguments (or equivalently, the type of e is a function type). In any case, the subexpressions e1..en can be evaluated or unevaluated for the expression to be in WHNF - they can even be undefined.
The evaluation semantics of Haskell can be described in terms of the WHNF
- to evaluate an expression e, first evaluate it to WHNF, then recursively
evaluate all of its subexpressions from left to right.
The primitive seq function is used to evaluate an expression to WHNF. seq x y is denotationally equal to y (the value of seq x y is precisely the value of y); furthermore x is evaluated to WHNF when y is evaluated to WHNF. An expression can also be evaluated to WHNF with a bang pattern (enabled by the -XBangPatterns extension), whose syntax is as follows:

In which x will be evaluated to WHNF when f is evaluated, while y is not (necessarily) evaluated. A bang pattern can also appear in a constructor, e.g.

in which case the constructor Con is said to be strict in the B field, which
means the B field is evaluated to WHNF when the constructor is applied to
sufficient (here, two) arguments.
Section 24.4: Strict fields
In a data declaration, prefixing a type with a bang (!) makes the field a strict
field. When the data constructor is applied, those fields will be evaluated to
weak head normal form, so the data in the fields is guaranteed to always be in
weak head normal form.
Strict fields can be used in both record and non-record types:
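A minimal sketch of both forms (type and field names are illustrative):

    data Point = Point { pointX :: !Double, pointY :: !Double }   -- record syntax
    data Pair  = Pair !Int Int                                     -- non-record: only the first field is strict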
Chapter 25: Syntax in Functions
Section 25.1: Pattern Matching
Haskell supports pattern matching expressions in both function definition and
through case statements.
A case statement is much like a switch in other languages, except it supports
all of Haskell's types.
Let's start simple:

Or, we could define our function like an equation which would be pattern
matching, just without using a case statement:

Pattern matching can also be used on lists:

Actually, pattern matching can be used on the constructors of any data type. E.g. the constructor for lists is (:) and for tuples (,).
Section 25.2: Using where and guards
Given this function:

annualSalaryCalc :: (RealFloat a) => a -> a -> String
annualSalaryCalc hourlyRate weekHoursOfWork
  | hourlyRate * (weekHoursOfWork * 52) <= 40000  = "Poor child, try to get another job"
  | hourlyRate * (weekHoursOfWork * 52) <= 120000 = "Money, Money, Money!"
  | hourlyRate * (weekHoursOfWork * 52) <= 200000 = "Ri¢hie Ri¢h"
  | otherwise = "Hello Elon Musk!"
We can use where to avoid the repetition and make our code more readable.
See the alternative function below, using where:
annualSalaryCalc' :: (RealFloat a) => a -> a -> String
annualSalaryCalc' hourlyRate weekHoursOfWork
  | annualSalary <= smallSalary  = "Poor child, try to get another job"
  | annualSalary <= mediumSalary = "Money, Money, Money!"
  | annualSalary <= highSalary   = "Ri¢hie Ri¢h"
  | otherwise = "Hello Elon Musk!"
  where
    annualSalary = hourlyRate * (weekHoursOfWork * 52)
    (smallSalary, mediumSalary, highSalary) = (40000, 120000, 200000)

As observed, we used where at the end of the function body, eliminating the repetition of the calculation (hourlyRate * (weekHoursOfWork * 52)), and we also used where to organize the salary ranges.
The naming of common sub-expressions can also be achieved with let
expressions, but only the where syntax makes it possible for guards to
refer to those named sub-expressions.
Section 25.3: Guards
A function can be defined using guards, which can be thought of classifying
behaviour according to input.
Take the following function definition:

absolute :: Int -> Int    -- definition restricted to Ints for simplicity
absolute n = if (n < 0) then (-n) else n

We can rearrange it using guards:

In this context otherwise is a meaningful alias for True, so it should always be


the last guard.
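The original rearranged code block is not preserved; a sketch of the guarded version would be:

    absolute :: Int -> Int
    absolute n
      | n < 0     = -n
      | otherwise = n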
Chapter 26: Functor
Functor is the class of types f :: * -> * which can be covariantly mapped over. Mapping a function over a data structure applies the function to all the elements of the structure without changing the structure itself.
Section 26.1: Class Definition of
Functor and Laws

One way of looking at it is that fmap lifts a function of values into a function
of values in a context f.
A correct instance of Functor should satisfy the functor laws, though these are
not enforced by the compiler:

There's a commonly-used infix alias for fmap, called <$>.
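For reference, a minimal sketch of the class and its laws, written as comments since the compiler does not check them (this mirrors the Prelude definition):

    class Functor f where
        fmap :: (a -> b) -> f a -> f b

    -- Identity:    fmap id      == id
    -- Composition: fmap (g . h) == fmap g . fmap h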


Section 26.2: Replacing all elements of
a Functor with a single value
The Data.Functor module contains two combinators, <$ and $>, which ignore all of the values contained in a functor, replacing them all with a single constant value.
Section 26.3: Common instances of
Functor
Maybe
Maybe is a Functor containing a possibly-absent value:

Maybe's instance of Functor applies a function to a value wrapped in a Just. If


the computation has previously failed (so the Maybe value is a Nothing), then
there's no value to apply the function to, so fmap is a no-op.

We can check the functor laws for this instance using equational reasoning.
For the identity law,

For the composition law,

Lists
Lists' instance of Functor applies the function to every value in the list in
place.

This example shows that fmap generalises map. map only operates on lists,
whereas fmap works on an arbitrary Functor.
The identity law can be shown to hold by induction:

and similarly, the composition law:

Functions
Not every Functor looks like a container. Functions' instance of Functor applies
a function to the return value of another function.

Once more checking the identity law:


and the composition law:
Section 26.4: Deriving Functor
The DeriveFunctor language extension allows GHC to generate instances of
Functor automatically.
Section 26.5: Polynomial functors
There's a useful set of type combinators for building big Functors out of
smaller ones. These are instructive as example instances of Functor, and
they're also useful as a technique for generic programming, because they can
be used to represent a large class of common functors.
The identity functor
The identity functor simply wraps up its argument. It's a type-level
implementation of the I combinator from SKI calculus.

I can be found, under the name Identity, in the Data.Functor.Identity module.


The constant functor
The constant functor ignores its second argument, containing only a constant
value. It's a type-level analogue of const, the K combinator from SKI calculus.

Note that K c a doesn't contain any a-values (K () is isomorphic to Proxy). This means that K's implementation of fmap doesn't do any mapping at all!

K is otherwise known as Const, from Data.Functor.Const.


The remaining functors in this example combine smaller functors into bigger
ones.
Functor products
The functor product takes a pair of functors and packs them up. It's analogous to a tuple, except that while (,) :: * -> * -> * operates on types, (:*:) :: (* -> *) -> (* -> *) -> (* -> *) operates on functors.

This type can be found, under the name Product, in the Data.Functor.Product module.

Functor coproducts


Just like :*: is analogous to (,), :+: is the functor-level analogue of Either.

:+: can be found under the name Sum, in the Data.Functor.Sum module.


Functor composition
Finally, :.: works like a type-level (.), taking the output of one functor and
plumbing it into the input of another.

Polynomial functors for generic programming


I, K, :*:, :+:
and :.: can be thought of as a kit of building blocks for a certain
class of simple datatypes. The kit becomes especially powerful when you
combine it with fixed points because datatypes built with these combinators
are automatically instances of Functor. You use the kit to build a template
type, marking recursive points using I, and then plug it into Fix to get a type
that can be used with the standard zoo of recursion schemes.
Name               As a datatype                                    Using the functor kit
Pairs of values    data Pair a = Pair a a                           type Pair = I :*: I
Two-by-two grids   type Grid a = Pair (Pair a)                      type Grid = Pair :.: Pair
Natural numbers    data Nat = Zero | Succ Nat                       type Nat = Fix (K () :+: I)
Lists              data List a = Nil | Cons a (List a)              type List a = Fix (K () :+: K a :*: I)
Binary trees       data Tree a = Leaf | Node (Tree a) a (Tree a)    type Tree a = Fix (K () :+: I :*: K a :*: I)
Rose trees         data Rose a = Rose a (List (Rose a))             type Rose a = Fix (K a :*: (List :.: I))
This "kit" approach to designing datatypes is the idea behind generic programming libraries such as generics-sop.
The idea is to write generic operations using a kit like the one presented
above, and then use a type class to convert arbitrary datatypes to and from
their generic representation:
Section 26.6: Functors in Category
Theory
A Functor is defined in category theory as a structure-preserving map (a
'homomorphism') between categories. Specifically, (all) objects are mapped
to objects, and (all) arrows are mapped to arrows, such that the category laws
are preserved.
The category in which objects are Haskell types and morphisms are Haskell
functions is called Hask. So a functor from Hask to Hask would consist of a
mapping of types to types and a mapping from functions to functions.
The relationship that this category theoretic concept bears to the Haskell programming construct Functor is rather direct. The mapping from types to types takes the form of a type f :: * -> *, and the mapping from functions to functions takes the form of a function fmap :: (a -> b) -> (f a -> f b). Putting those together in a class,
fmap is an operation that takes a function :: a -> b and maps it to another function (a type of morphism) :: f a -> f b. It is assumed (but left to the programmer to ensure) that instances of Functor are indeed mathematical functors, preserving Hask's categorical structure:

fmap lifts a function :: a -> b into a subcategory of Hask in a way that preserves both the existence of any identity arrows, and the associativity of composition.
The Functor class only encodes endofunctors on Hask. But in mathematics,
functors can map between arbitrary categories. A more faithful encoding of
this concept would look like this:
The standard Functor class is a special case of this class in which the source
and target categories are both Hask.
For example,
Chapter 27: Testing with Tasty
Section 27.1: SmallCheck, QuickCheck
and HUnit

Install packages:

Run with cabal:


Chapter 28: Creating Custom Data
Types
Section 28.1: Creating a data type with
value constructor parameters
Value constructors are functions that return a value of a data type. Because of
this, just like any other function, they can take one or more parameters:

Let's check the type of the Bar value constructor.

prints

which proves that Bar is indeed a function.
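The original snippets are not preserved; a sketch of what such a check might look like (the concrete fields are illustrative):

    data Foo = Bar Int Bool

    -- ghci> :t Bar
    -- Bar :: Int -> Bool -> Foo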


Section 28.2: Creating a data type with
type parameters
Type constructors can take one or more type parameters:

Type parameters in Haskell must begin with a lowercase letter. Our custom
data type is not a real type yet. In order to create values of our type, we must
substitute all type parameters with actual types. Because a and b can be of any
type, our value constructors are polymorphic functions.
Creating variables of our custom type

let x = Bar "Hello" 10    -- x :: Foo [Char] Integer
let y = Biz "Goodbye" 6.0 -- y :: Fractional b => Foo [Char] b
let z = Biz True False    -- z :: Foo Bool Bool
Section 28.3: Creating a simple data
type
The easiest way to create a custom data type in Haskell is to use the data
keyword:

The name of the type is specified between data and =, and is called a type
constructor. After = we specify all value constructors of our data type,
delimited by the | sign. There is a rule in Haskell that all type and value
constructors must begin with a capital letter. The above declaration can be
read as follows:

Define a type called Foo, which has two possible


values: Bar and Biz.
Creating variables of our custom type

The above statement creates a variable named x of type Foo. Let's verify this
by checking its type.

prints
Section 28.4: Custom data type with
record parameters
Assume we want to create a data type Person, which has a first and last name,
an age, a phone number, a street, a zip code and a town.
We could write

If we want now to get the phone number, we need to make a function

Well, this is no fun. We can do better using parameters:

data Person' = Person' { firstName :: String
                       , lastName  :: String
                       , age       :: Int
                       , phone     :: Int
                       , street    :: String
                       , code      :: String
                       , town      :: String }

Now we get the function phone where

We can now do whatever we want, eg:

We can also bind the phone number by Pattern Matching:

For easy use of the parameters see RecordWildCards


Chapter 29: Reactive-banana
Section 29.1: Injecting external events
into the library
This example is not tied to any concrete GUI toolkit, like reactive-banana-wx
does, for instance. Instead it shows how to inject arbitrary IO actions into FRP
machinery.
The Control.Event.Handler module provides a newAddHandler function which creates a pair of AddHandler a and a -> IO () values. The former is used by reactive-banana itself to obtain an Event a value, while the latter is a plain function that is used to trigger the corresponding event.
import Data.Char (toUpper)
import Control.Event.Handler
import Reactive.Banana

main = do
    (inputHandler, inputFire) <- newAddHandler

In our case the a parameter of the handler is of type String, but the code that lets the compiler infer that will be written later.
Now we define the EventNetwork that describes our FRP-driven system. This is done using the compile function:

main = do
    (inputHandler, inputFire) <- newAddHandler
    compile $ do
        inputEvent <- fromAddHandler inputHandler

The fromAddHandler function transforms an AddHandler a value into an Event a, which is covered in the next example. Finally, we launch our "event loop", which would fire events on user input:

main = do
    (inputHandler, inputFire) <- newAddHandler
    compile $ do
        ...
    forever $ do
        input <- getLine
        inputFire input
Section 29.2: Event type
In reactive-banana the Event type represents a stream of some events in time.
An Event is similar to an analog impulse signal in the sense that it is not
continuous in time. As a result, Event is an instance of the Functor typeclass
only. You can't combine two Events together because they may fire at different
times. You can do something with an Event's [current] value and react to it
with some IO action.
Transformations on an Event's value are done using fmap:

main = do
    (inputHandler, inputFire) <- newAddHandler
    compile $ do
        inputEvent <- fromAddHandler inputHandler
        -- turn all characters in the signal to upper case
        let inputEvent' = fmap (map toUpper) inputEvent

Reacting to an Event is done the same way. First you fmap it with an action of type (a -> IO ()) and then pass it to the reactimate function:

main = do
    (inputHandler, inputFire) <- newAddHandler
    compile $ do
        inputEvent <- fromAddHandler inputHandler
        -- turn all characters in the signal to upper case
        let inputEvent' = fmap (map toUpper) inputEvent
        let inputEventReaction = fmap putStrLn inputEvent'  -- this has type `Event (IO ())`
        reactimate inputEventReaction

Now whenever inputFire "something" is called, "SOMETHING" would be printed.
Section 29.3: Actuating EventNetworks
EventNetworks returned by compile must be actuated before reactimated events have an effect.
Section 29.4: Behavior type
To represent continuous signals, reactive-banana features the Behavior a type. Unlike Event, a Behavior is an Applicative, which lets you combine n Behaviors using an n-ary pure function (using <$> and <*>).

To obtain a Behavior a from an Event a there is the accumE function:

accumE takes the Behavior's initial value and an Event containing a function that would set it to the new value.

As with Events, you can use fmap to work with the current Behavior's value, but you can also combine them with (<*>).
main = do
    (inputHandler, inputFire) <- newAddHandler
    compile $ do
        ...
        inputBehavior  <- accumE "" $ fmap (\oldValue newValue -> newValue) inputEvent
        inputBehavior' <- accumE "" $ fmap (\oldValue newValue -> newValue) inputEvent
        let constantTrueBehavior = (==) <$> inputBehavior <*> inputBehavior'

To react to Behavior changes there is a changes function:

main = do
    (inputHandler, inputFire) <- newAddHandler
    compile $ do
        ...
        inputBehavior  <- accumE "" $ fmap (\oldValue newValue -> newValue) inputEvent
        inputBehavior' <- accumE "" $ fmap (\oldValue newValue -> newValue) inputEvent
        let constantTrueBehavior = (==) <$> inputBehavior <*> inputBehavior'
        inputChanged <- changes inputBehavior

The only thing that should be noted is that changes returns Event (Future a) instead of Event a. Because of this, reactimate' should be used instead of reactimate. The rationale behind this can be obtained from the documentation.
Chapter 30: Optimization
Section 30.1: Compiling your Program
for Profiling
The GHC compiler has mature support for compiling with profiling
annotations.
Using the -prof and -fprof-auto flags when compiling will add support to your binary for profiling flags for use at runtime.
Suppose we have this program:

Compiled it like so:

Then ran it with runtime system options for profiling:
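The original program and commands are not preserved; as an illustrative sketch consistent with the report below (a naive fib), the workflow might be:

    -- main.hs
    fib :: Int -> Int
    fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = print (fib 30)

    ghc -prof -fprof-auto -rtsopts main.hs
    ./main +RTS -p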

We will see a main.prof file created post execution (once the program has
exited), and this will give us all sorts of profiling information such as cost
centers which gives us a breakdown of the cost associated with running the
various parts of the code:
Wed Oct 12 16:14 2011 Time and Allocation Profiling Report (Final)

   Main +RTS -p -RTS

total time  =        0.68 secs   (34 ticks @ 20 ms)
total alloc = 204,677,844 bytes  (excludes profiling overheads)

COST CENTRE  MODULE  %time %alloc
fib          Main    100.0  100.0

                                              individual      inherited
COST CENTRE  MODULE                 no. entries  %time %alloc  %time %alloc
MAIN         MAIN                   102       0    0.0    0.0  100.0  100.0
 CAF         GHC.IO.Handle.FD       128       0    0.0    0.0    0.0    0.0
 CAF         GHC.IO.Encoding.Iconv  120       0    0.0    0.0    0.0    0.0
 CAF         GHC.Conc.Signal        110       0    0.0    0.0    0.0    0.0
 CAF         Main                   108       0    0.0    0.0  100.0  100.0
  main       Main                   204       1    0.0    0.0  100.0  100.0
   fib       Main                   205 2692537  100.0  100.0  100.0  100.0
Section 30.2: Cost Centers
Cost centers are annotations on a Haskell program which can be added automatically by the GHC compiler -- using -fprof-auto -- or by a programmer using {-# SCC "name" #-} <expression>, where "name" is any name you wish and <expression> is any valid Haskell expression:

Compiling with -prof -fprof-auto and running with the +RTS -p runtime options, e.g. ghc -prof -fprof-auto -rtsopts Main.hs && ./Main +RTS -p, would produce Main.prof once the program has exited.
Chapter 31: Concurrency
Section 31.1: Spawning Threads with
`forkIO`
Haskell supports many forms of concurrency and the most obvious being
forking a thread using forkIO.
The function forkIO :: IO () -> IO ThreadId takes an IO action and returns its ThreadId; meanwhile the action will be run in the background.
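The original snippet is not preserved; a minimal sketch with two illustrative actions:

    import Control.Concurrent (forkIO, threadDelay)

    main :: IO ()
    main = do
        _ <- forkIO (threadDelay 1000000 >> putStrLn "first (slow) action done")
        _ <- forkIO (putStrLn "second (fast) action done")
        threadDelay 2000000   -- keep the main thread alive long enough to see both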

Both actions will run in the background, and the second is almost guaranteed to finish before the first!
Section 31.2: Communicating between
Threads with `MVar`
It is very easy to pass information between threads using the MVar a type and
its accompanying functions in Control.Concurrent:
newEmptyMVar :: IO (MVar a)          -- creates a new MVar a
newMVar      :: a -> IO (MVar a)     -- creates a new MVar with the given value
takeMVar     :: MVar a -> IO a       -- retrieves the value from the given MVar, or blocks until one is available
putMVar      :: MVar a -> a -> IO () -- puts the given value in the MVar, or blocks until it's empty

Let's sum the numbers from 1 to 100 million in a thread and wait on the
result:
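The original code block is not preserved; a minimal sketch might be:

    import Control.Concurrent

    main :: IO ()
    main = do
        result <- newEmptyMVar
        _ <- forkIO $ putMVar result (sum [1 .. 100000000 :: Integer])
        total <- takeMVar result   -- blocks until the forked thread has put the sum
        print total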

A more complex demonstration might be to take user input and sum in the
background while waiting for more input:

As stated earlier, if you call takeMVar and the MVar is empty, it blocks until
another thread puts something into the
MVar, which could result in a Dining Philosophers Problem. The same thing
happens with putMVar: if it's full, it'll block 'til it's empty!
Take the following function:

What could happen is that:


1. Thread 1 reads ma and blocks ma
2. Thread 2 reads mb and thus blocks mb
Now Thread 1 cannot read mb as Thread 2 has blocked it, and Thread 2
cannot read ma as Thread 1 has blocked it. A classic deadlock!
Section 31.3: Atomic Blocks with
Software Transactional Memory
Another powerful & mature concurrency tool in Haskell is Software
Transactional Memory, which allows for multiple threads to write to a single
variable of type TVar a in an atomic manner.
TVar a is the main type associated with the STM monad and stands for transactional variable. They're used much like MVar but within the STM monad through the following functions:
atomically :: STM a -> IO a

Perform a series of STM actions atomically.


readTVar :: TVar a -> STM a

Read the TVar's value, e.g.:

writeTVar :: TVar a -> a -> STM ()

Write a value to the given TVar.
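As a minimal sketch of these functions (an illustrative counter, not the wiki example mentioned below):

    import Control.Concurrent.STM

    increment :: TVar Int -> IO ()
    increment counter = atomically $ do
        n <- readTVar counter
        writeTVar counter (n + 1)

    main :: IO ()
    main = do
        counter <- newTVarIO 0
        increment counter
        readTVarIO counter >>= print   -- 1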

This example is taken from the Haskell Wiki:


Chapter 32: Function composition
Section 32.1: Right-to-left composition
(.) lets us compose two functions, feeding output of one as an input to the
other:

For example, if we want to square the successor of an input number, we can


write

There is also (<<<), which is an alias to (.). So,
Section 32.2: Composition with binary
function
The regular composition works for unary functions. In the case of binary, we
can define

Thus, by eta-contraction, (f .: g) = ((f .) . g), and furthermore (.:) = ((.) . (.)), a semi-famous definition.


Examples:
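The original examples are not preserved; a minimal sketch (the usage shown in the comment is illustrative):

    (.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
    (.:) = (.) . (.)

    -- e.g. the length of the result of zipping two lists:
    -- (length .: zip) [1,2,3] "ab"  ==  2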
Section 32.3: Left-to-right composition
Control.Category defines (>>>), which, when specialized to functions, is
Example:
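An illustrative example (assumed, since the original is not preserved):

    import Control.Category ((>>>))

    addThenDouble :: Int -> Int
    addThenDouble = (+ 1) >>> (* 2)   -- same as (* 2) . (+ 1)

    -- addThenDouble 3 == 8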
Chapter 33: Databases
Section 33.1: Postgres
Postgresql-simple is a mid-level Haskell library for communicating with a
PostgreSQL backend database. It is very simple to use and provides a type-
safe API for reading/writing to a DB.
Running a simple query is as easy as:

Parameter substitution
PostgreSQL-Simple supports parameter substitution for safe parameterised
queries using query:

Executing inserts or updates


You can run inserts/update SQL queries using execute:
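The original snippets are not preserved; the following sketch shows the general shape of query and execute (the connection string, table and column names are hypothetical):

    {-# LANGUAGE OverloadedStrings #-}
    import Database.PostgreSQL.Simple

    main :: IO ()
    main = do
        conn <- connectPostgreSQL "host=localhost dbname=testdb"   -- illustrative connection string
        -- parameterised query with `query`
        rows <- query conn "SELECT name FROM users WHERE age > ?" (Only (21 :: Int))
        mapM_ (\(Only name) -> putStrLn name) (rows :: [Only String])
        -- insert with `execute`
        n <- execute conn "INSERT INTO users (name, age) VALUES (?, ?)" ("Alice" :: String, 30 :: Int)
        print n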
Chapter 34: Data.Aeson - JSON in
Haskell
Section 34.1: Smart Encoding and
Decoding using Generics
The easiest and quickest way to encode a Haskell data type to JSON with
Aeson is using generics.

First let us create a data type Person:

In order to use the encode and decode function from the Data.Aeson package we
need to make Person an instance of ToJSON and FromJSON. Since we derive
Generic for Person, we can create empty instances for these classes. The default
definitions of the methods are defined in terms of the methods provided by
the Generic type class.

Done! In order to improve the encoding speed we can slightly change the
ToJSON instance:

Now we can use the encode function to convert Person to a (lazy) Bytestring:

encodeNewPerson :: Text -> Text -> Int -> ByteString
encodeNewPerson first last age = encode $ Person first last age
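Putting the pieces of this section together, a complete sketch might look like this (the Person fields follow the encodeNewPerson signature above; the rest is an assumed reconstruction):

    {-# LANGUAGE DeriveGeneric, OverloadedStrings #-}
    import Data.Aeson
    import Data.ByteString.Lazy (ByteString)
    import qualified Data.ByteString.Lazy.Char8 as BL
    import Data.Text (Text)
    import GHC.Generics (Generic)

    data Person = Person
        { firstName :: Text
        , lastName  :: Text
        , age       :: Int
        } deriving (Show, Generic)

    instance ToJSON Person where
        toEncoding = genericToEncoding defaultOptions   -- the "slightly changed", faster instance
    instance FromJSON Person

    encodeNewPerson :: Text -> Text -> Int -> ByteString
    encodeNewPerson first last age = encode $ Person first last age

    main :: IO ()
    main = BL.putStrLn (encodeNewPerson "Jane" "Doe" 42)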
Section 34.2: A quick way to generate a
Data.Aeson.Value
Section 34.3: Optional Fields
Sometimes, we want some fields in the JSON string to be optional. For
example,

This can be achieved by


Chapter 35: Higher-order functions
Section 35.1: Basics of Higher Order
Functions
Review Partial Application before proceeding.
In Haskell, a function that can take other functions as arguments or return
functions is called a higher-order function.
The following are all higher-order functions:
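The original list is not preserved; standard Prelude examples (signatures specialised to lists) include:

    map    :: (a -> b) -> [a] -> [b]
    filter :: (a -> Bool) -> [a] -> [a]
    foldr  :: (a -> b -> b) -> b -> [a] -> b
    (.)    :: (b -> c) -> (a -> b) -> a -> c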

These are particularly useful in that they allow us to create new functions on
top of the ones we already have, by passing functions as arguments to other
functions. Hence the name, higher-order functions.
Consider:

This ability to easily create functions (like e.g. by partial application as used
here) is one of the features that makes functional programming particularly
powerful and allows us to derive short, elegant solutions that would
otherwise take dozens of lines in other languages. For example, the following
function gives us the number of aligned elements in two lists.
Section 35.2: Lambda Expressions
Lambda expressions are similar to anonymous functions in other languages.
Lambda expressions are open formulas which also specify variables which
are to be bound. Evaluation (finding the value of a function call) is then
achieved by substituting the bound variables in the lambda expression's body,
with the user supplied arguments. Put simply, lambda expressions allow us to
express functions by way of variable binding and substitution.
Lambda expressions look like

Within a lambda expression, the variables on the left-hand side of the arrow
are considered bound in the righthand side, i.e. the function's body.
Consider the mathematical function

As a Haskell definition it is

which means that the function f is equivalent to the lambda expression \x -> x^2.
Consider the parameter of the higher-order function map, that is a function of type a -> b. In case it is used only once in a call to map and nowhere else in the program, it is convenient to specify it as a lambda expression instead of naming such a throwaway function. Written as a lambda expression,

x holds a value of type a, ...x... is a Haskell expression that refers to the


variable x, and y holds a value of type b. So, for example, we could write the
following
Section 35.3: Currying
In Haskell, all functions are considered curried: that is, all functions in
Haskell take just one argument.
Let's take the function div:
div :: Int -> Int -> Int
If we call this function with 6 and 2 we unsurprisingly get 3:

Prelude> div 6 2
3
However, this doesn't quite behave in the way we might think. First div 6 is evaluated and returns a function of type Int -> Int. This resulting function is then applied to the value 2, which yields 3.
When we look at the type signature of a function, we can shift our thinking
from "takes two arguments of type Int" to "takes one Int and returns a
function that takes an Int". This is reaffirmed if we consider that arrows in
the type notation associate to the right, so div can in fact be read thus: div ::
Int -> (Int -> Int)
In general, most programmers can ignore this behaviour at least while they're
learning the language. From a theoretical point of view, "formal proofs are
easier when all functions are treated uniformly (one argument in, one result
out)."
Chapter 36: Containers - Data.Map
Section 36.1: Importing the Module
The Data.Map module in the containers package provides a Map structure that has
both strict and lazy implementations.

When using Data.Map, one usually imports it qualified to avoid clashes with functions already defined in Prelude:

import qualified Data.Map as Map


So we'd then prepend Map function calls with Map., e.g.
Section 36.2: Monoid instance
Map k v provides a Monoid instance with the following semantics:

mempty is the empty Map, i.e. the same as Map.empty
m1 <> m2 is the left-biased union of m1 and m2, i.e. if any key is present both in m1 and m2, then the value from m1 is picked for m1 <> m2. This operation is also available outside the Monoid instance as Map.union.
Section 36.3: Constructing
We can create a Map from a list of tuples like this:

Map.fromList [("Alex", 31), ("Bob", 22)]

A Map can also be constructed with a single value:

> Map.singleton "Alex" 31
fromList [("Alex",31)]
There is also the empty function.
empty :: Map k a
Data.Map also supports typical set operations such as union, difference and
intersection.
Section 36.4: Checking If Empty
We use the null function to check if a given Map is empty:

> Map.null $ Map.fromList [("Alex", 31), ("Bob", 22)]
False
> Map.null $ Map.empty
True
Section 36.5: Finding Values
There are many querying operations on maps.
member :: Ord k => k -> Map k a -> Bool yields True if the key of type k is in Map k a:

> Map.member "Alex" $ Map.singleton "Alex" 31
True
> Map.member "Jenny" $ Map.empty
False

notMember is similar:

> Map.notMember "Alex" $ Map.singleton "Alex" 31
False
> Map.notMember "Jenny" $ Map.empty
True

You can also use findWithDefault :: Ord k => a -> k -> Map k a -> a to yield a default value if the key isn't present:

Map.findWithDefault 'x' 1 (fromList [(5,'a'), (3,'b')]) == 'x'
Map.findWithDefault 'x' 5 (fromList [(5,'a'), (3,'b')]) == 'a'
Section 36.6: Inserting Elements
Inserting elements is simple:
> let m = Map.singleton "Alex" 31
fromList [("Alex",31)]
> Map.insert "Bob" 99 m
fromList [("Alex",31),("Bob",99)]
Section 36.7: Deleting Elements
> let m = Map.fromList [("Alex", 31), ("Bob", 99)]
fromList [("Alex",31),("Bob",99)]
> Map.delete "Bob" m
fromList [("Alex",31)]
Chapter 37: Fixity declarations
Declaration component    Meaning
infixr                   the operator is right-associative
infixl                   the operator is left-associative
infix                    the operator is non-associative
optional digit           binding precedence of the operator (range 0...9, default 9)
op1, ..., opn            operators
Section 37.1: Associativity
infixl vs infixr vs infix describe on which side the parens will be grouped. For example, consider the following fixity declarations (in base)

The infix declaration tells us that == cannot be used without us including parentheses, which means that True == False == True is a syntax error. On the other hand, True == (False == True) or (True == False) == True are fine.

Operators without an explicit fixity declaration are infixl 9.
Section 37.2: Binding
precedence
The number that follows the associativity information describes in what order
the operators are applied. It must always be between 0 and 9 inclusive. This is
commonly referred to as how tightly the operator binds. For example,
consider the following fixity declarations (in base)

In short, the higher the number, the closer the operator will "pull" the parens
on either side of it.
Remarks
Function application always binds higher than operators, so f x `op` g y must be interpreted as (f x) `op` (g y), no matter what the operator `op` and its fixity declaration are.

If the binding precedence is omitted in a fixity declaration (for example we have infixl *!?) the default is 9.
Section 37.3: Example declarations
infixr 5 ++

infixl 4 <*>, <*, *>, <**>

infixl 8 `shift`, `rotate`, `shiftL`, `shiftR`, `rotateL`, `rotateR`

infix 4 ==, /=, <, <=, >=, >

infix ??
Chapter 38: Web Development
Section 38.1: Servant
Servant is a library for declaring APIs at the type-level and then:

write servers (this part of servant can be considered a web framework),
obtain client functions (in Haskell),
generate client functions for other programming languages,
generate documentation for your web applications
and more...

Servant has a concise yet powerful API. A simple API can be


written in very few lines of code:

Now we can declare our API:

type UserAPI = "users" :> QueryParam "sortby" SortBy :> Get '[JSON] [User]

which states that we wish to expose /users to GET requests with a query param sortby of type SortBy and return JSON of type User in the response.

Now we can define our handler:


And the main method which listens on port 8081 and serves our user API:

Note, Stack has a template for generating basic APIs in Servant, which is useful for getting up and running very quickly.
Section 38.2: Yesod
A Yesod project can be created with stack new using the following templates:

yesod-minimal. Simplest Yesod scaffold possible.
yesod-mongo. Uses MongoDB as DB engine.
yesod-mysql. Uses MySQL as DB engine.
yesod-postgres. Uses PostgreSQL as DB engine.
yesod-postgres-fay. Uses PostgreSQL as DB engine. Uses the Fay language for the front-end.
yesod-simple. Recommended template to use, if you don't need a database.
yesod-sqlite. Uses SQLite as DB engine.
The yesod-bin package provides the yesod executable, which can be used to run a development server. Note that you can also run your application directly, so the yesod tool is optional.

Application.hs contains code that dispatches requests between handlers. It also sets up database and logging settings, if you used them.

Foundation.hs defines the App type, which can be seen as an environment for all handlers. Being in the HandlerT monad, you can get this value using the getYesod function.

Import.hs is a module that just re-exports commonly used stuff.

Model.hs contains Template Haskell that generates code and data types used for DB interaction. Present only if you are using a DB.

config/models is where you define your DB schema. Used by Model.hs.

config/routes defines the URIs of the Web application. For each HTTP method of a route, you'd need to create a handler named {method}{RouteR}.

The static/ directory contains the site's static resources. These get compiled into the binary by the Settings/StaticFiles.hs module.

The templates/ directory contains Shakespeare templates that are used when serving requests.

Finally, the Handler/ directory contains modules that define handlers for routes.
Each handler is a HandlerT monad action based on IO. You can inspect request
parameters, its body and other information, make queries to the DB with
runDB, perform arbitrary IO and return various types of content to the user. To
serve HTML, defaultLayout function is used that allows neat composition of
shakespearian templates.
Chapter 39: Vectors
Section 39.1: The Data.Vector Module
The Data.Vector module provided by the vector package is a high-performance library for working with arrays.
Once you've imported Data.Vector, it's easy to start using a Vector:

You can even have a multi-dimensional array:
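The original snippets are not preserved; a minimal sketch with illustrative values:

    import qualified Data.Vector as V

    v :: V.Vector Int
    v = V.fromList [0 .. 11]

    grid :: V.Vector (V.Vector Int)
    grid = V.fromList [V.fromList [1, 2], V.fromList [3, 4]]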


Section 39.2: Filtering a Vector
Filter odd elements:
Section 39.3: Mapping (`map`) and
Reducing (`fold`) a Vector
Vectors can be map'd, fold'd, filter'd, and zip'd:

Prelude Data.Vector> Data.Vector.map (^2) y
fromList [0,1,4,9,16,25,36,49,64,81,100,121] :: Data.Vector.Vector

Reduce to a single value:


Section 39.4: Working on Multiple
Vectors
Zip two arrays into an array of pairs:
Chapter 40: Cabal
Section 40.1: Working with sandboxes
A Haskell project can either use the system wide packages or use a sandbox.
A sandbox is an isolated package database and can prevent dependency
conflicts, e. g. if multiple Haskell projects use different versions of a
package.
To initialize a sandbox for a Haskell package go to its directory and run:

Now packages can be installed by simply running cabal install.


Listing packages in a sandbox:

Deleting a sandbox:

Add local dependency:


Section 40.2: Install packages
To install a new package, e.g. aeson:
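A hedged sketch of the commands (classic, non-project cabal usage):

    cabal update
    cabal install aeson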
Chapter 41: Type algebra
Section 41.1: Addition and
multiplication
The addition and multiplication have equivalents in this type algebra. They
correspond to the tagged unions and product types.

We can see how the number of inhabitants of every type corresponds to the
operations of the algebra.
Equivalently, we can use Either and (,) as type constructors for the addition
and the multiplication. They are isomorphic to our previously defined types:

The expected results of addition and multiplication are followed by the type
algebra up to isomorphism. For example, we can see an isomorphism
between 1 + 2, 2 + 1 and 3; as 1 + 2 = 3 = 2 + 1.

Rules of addition and multiplication


The common rules of commutativity, associativity and distributivity are valid
because there are trivial isomorphisms between the following types:
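For instance, commutativity of addition corresponds to an isomorphism between Either a b and Either b a; a sketch (function names are illustrative):

    commute :: Either a b -> Either b a
    commute (Left a)  = Right a
    commute (Right b) = Left b

    -- commute . commute == id, witnessing the isomorphism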
Section 41.2: Functions
Functions a -> b can be seen as exponentials in our algebra. As we can see, if we take a type a with n instances and a type b with m instances, the type a -> b will have m to the power of n instances.

As an example, Bool -> Bool is isomorphic to (Bool, Bool), as 2² = 2*2.
Section 41.3: Natural numbers in type
algebra
We can draw a connection between the Haskell types and the natural
numbers. This connection can be made assigning to every type the number of
inhabitants it has.
Finite union types
For finite types, it suffices to see that we can assign a natural number to every type, based on the number of constructors. For example:

would be 3. And the Bool type would be 2.

Uniqueness up to isomorphism
We have seen that multiple types would correspond to a single number, but in
this case, they would be isomorphic. This is to say that there would be a pair
of morphisms f and g, whose composition would be the identity, connecting
the two types.

In this case, we would say that the types are isomorphic. We will consider
two types equal in our algebra as long as they are isomorphic.
For example, two different representations of the number two are trivially isomorphic, because we can see that

bitValue . booleanBit == id == booleanBit . bitValue
One and Zero
The representation of the number 1 is obviously a type with only one
constructor. In Haskell, this type is canonically the type (), called Unit. Every
other type with only one constructor is isomorphic to ().
And our representation of 0 will be a type without constructors. This is the
Void type in Haskell, as defined in Data.Void. This would be equivalent to an uninhabited type, without data constructors:
Section 41.4: Recursive types
Lists
Lists can be defined as:

If we translate this into our type algebra, we get

List(a) = 1 + a * List(a)
But we can now substitute List(a) again in this expression multiple times, in
order to get:

List(a) = 1 + a + a*a + a*a*a + a*a*a*a + ...


This makes sense if we see a list as a type that can contain no values, as in []; or one value of type a, as in [x]; or two values of type a, as in [x,y]; and so on. The theoretical definition of List that we should get from there would be:

Trees
We can do the same thing with binary trees, for example. If we define them
as:

We get the expression:


Tree(a) = 1 + a * Tree(a) * Tree(a)
And if we make the same substitutions again and again, we would obtain the
following sequence:

Tree(a) = 1 + a + 2 (a*a) + 5 (a*a*a) + 14 (a*a*a*a) + ...


The coefficients we get here correspond to the Catalan numbers
sequence, and the n-th catalan number is precisely the number of
possible binary trees with n nodes.
Section 41.5: Derivatives
The derivative of a type is the type of its one-hole contexts. This is the type that we would get if we make a type variable disappear in every possible point and sum the results. As an example, we can take the triple type (a,a,a), and derive it, obtaining

This is coherent with our usual definition of derivation, as:

d/da (a*a*a) = 3*a*a


More on this topic can be read on this article.
Chapter 42: Arrows
Section 42.1: Function compositions
with multiple channels
Arrow is, vaguely speaking, the class of morphisms that compose like functions, with both serial composition and "parallel composition". While it is most interesting as a generalisation of functions, the Arrow (->) instance itself is already quite useful. For instance, the following function:

can also be written with arrow combinators:

spaceAround x = partition (>x) >>> minimum *** maximum >>> uncurry (-)

This kind of composition can best be visualised with a diagram:
Here,

The >>> operator is just a flipped version of the ordinary . composition operator (there's also a <<< version that composes right-to-left). It pipes the data from one processing step to the next.

The out-going ╱ ╲ indicate the data flow is split up into two "channels". In terms of Haskell types, this is realised with tuples:

partition (>x) splits up the flow into two [Double] channels, whereas uncurry (-) merges two Double channels.

*** is the parallel† composition operator. It lets maximum and minimum operate independently on different channels of the data. For functions, the signature of this operator is

†At least in the Hask category (i.e. in the Arrow (->) instance), f *** g does not actually compute f and g in parallel as in, on different threads. This would theoretically be possible, though.
Chapter 43: Typed holes
Section 43.1: Syntax of typed holes
A typed hole is a single underscore (_) or a valid Haskell identifier which is not in scope, in an expression context. Before the existence of typed holes, both of these things would trigger an error, so the new syntax does not interfere with any old syntax.
Controlling behaviour of typed holes
The default behaviour of typed holes is to produce a compile-time error when
encountering a typed hole. However, there are several flags to fine-tune their
behaviour. These flags are summarized as follows (GHC trac):
By default GHC has typed holes enabled, and produces a compile error when it encounters a typed hole.

When -fdefer-type-errors or -fdefer-typed-holes is enabled, hole errors are converted to warnings and result in runtime errors when evaluated.

The warning flag -fwarn-typed-holes is on by default. Without -fdefer-type-errors or -fdefer-typed-holes this flag is a no-op, since typed holes are an error under these conditions. If either of the defer flags is enabled (converting typed hole errors into warnings), the -fno-warn-typed-holes flag disables the warnings. This means compilation silently succeeds and evaluating a hole will produce a runtime error.
Section 43.2: Semantics of typed holes
The value of a typed hole can simply be said to be undefined, although a typed hole triggers a compile-time error, so it is not strictly necessary to assign it a value. However, a typed hole (when enabled) produces a compile
time error (or warning with deferred type errors) which states the name of the
typed hole, its inferred most general type, and the types of any local
bindings. For example:

Note that in the case of typed holes in expressions entered into the GHCi repl (as above), the type of the entered expression is also reported (here of type [a] -> Int).
Section 43.3: Using typed holes to
define a class instance
Typed holes can make it easier to define functions, through an interactive
process.
Say you want to define a class instance Foo Bar (for your custom Bar type, in
order to use it with some polymorphic library function that requires a Foo
instance). You would now traditionally look up the documentation of Foo,
figure out which methods you need to define, scrutinise their types etc. –
but with typed holes, you can actually skip that!
First just define a dummy instance:

The compiler will now complain

Ok, so we need to define foom for Bar. But what is that even supposed to be?
Again we're too lazy to look in the documentation, and just ask the compiler:

Here we've used a typed hole as a simple “ documentation query ” . The


compiler outputs

Note how the compiler has already filled the class type variable with the
concrete type Bar that we want to instantiate it for. This can make the
signature a lot easier to understand than the polymorphic one found in the
class documentation, especially if you're dealing with a more complicated
method of e.g. a multi-parameter type class.
But what the hell is Gronk? At this point, it is probably a good idea to ask
Hayoo. However we may still get away without that: as a blind guess, we
assume that this is not only a type constructor but also the single value
constructor, i.e. it can be used as a function that will somehow produce a
Gronk a value. So we try

If we're lucky, Gronk is actually a value, and the compiler will now say

Ok, that's ugly – at first just note that Gronk has two arguments, so we can
refine our attempt:

And this now is pretty clear:

You can now further progress by e.g. deconstructing the bar value (the
components will then show up, with types, in the Relevant bindings section).
Often, it is at some point completely obvious what the correct definition will
be, because you you see all avaliable arguments and the types fit together like
a jigsaw puzzle. Or alternatively, you may see that the definition is
impossible and why.
All of this works best in an editor with interactive compilation, e.g. Emacs
with haskell-mode. You can then use typed holes much like mouse-over
value queries in an IDE for an interpreted dynamic imperative language, but
without all the limitations.
Chapter 44: Rewrite rules (GHC)
Section 44.1: Using rewrite rules on
overloaded functions
In this question, @Viclib asked about using rewrite rules to exploit typeclass
laws to eliminate some overloaded function calls:

This is a somewhat tricky use case for GHC's rewrite rules mechanism, because overloaded functions are rewritten into their specific instance methods by rules that are implicitly created behind the scenes by GHC (so something like an overloaded fromList used at the Seq type would be rewritten into the Seq-specific fromList, etc.).
However, by first rewriting toList and fromList into non-inlined non-typeclass
methods, we can protect them from premature rewriting, and preserve them
until the rule for the composition can fire:
Chapter 45: Date and Time
Section 45.1: Finding Today's Date
Current date and time can be found with getCurrentTime:

Alternatively, a Day can be constructed directly with fromGregorian:


Section 45.2: Adding, Subtracting and
Comparing Days
Given a Day, we can perform simple arithmetic and comparisons, such as
adding:

Subtract:

and even find the difference:

note that the order matters:
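The original snippets are not preserved; a combined sketch of the functions mentioned in this chapter might be:

    import Data.Time

    main :: IO ()
    main = do
        now <- getCurrentTime                 -- the current UTCTime
        print now
        let day  = fromGregorian 1984 11 17   -- constructs a Day
            day' = addDays 7 day
        print (diffDays day' day)             -- 7
        print (diffDays day day')             -- -7: the order matters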


Chapter 46: List Comprehensions
Section 46.1: Basic List
Comprehensions
Haskell has list comprehensions, which are a lot like set comprehensions in
math and similar implementations in imperative languages such as Python
and JavaScript. At their most basic, list comprehensions take the following
form.

For example

Functions can be directly applied to x as well:

This is equivalent to:

Example:
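An illustrative example combining a generator, a guard and a function applied to x:

    doubledOdds = [ 2 * x | x <- [1 .. 10], odd x ]   -- [2,6,10,14,18]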
Section 46.2: Do Notation
Any list comprehension can be correspondingly coded with list monad's do
notation.
[f x | x <- xs]              f <$> xs     do { x <- xs ; return (f x) }
[f x | f <- fs, x <- xs]     fs <*> xs    do { f <- fs ; x <- xs ; return (f x) }
[y | x <- xs, y <- f x]      f =<< xs     do { x <- xs ; y <- f x ; return y }

The guards can be handled using Control.Monad.guard:

[x | x <- xs, even x]                     do { x <- xs ; guard (even x) ; return x }


Section 46.3: Patterns in Generator
Expressions
However, x in the generator expression is not just a variable, but can be any
pattern. In cases of pattern mismatch the generated element is skipped over,
and processing of the list continues with the next element, thus acting like a
filter:

A generator with a variable x in its pattern creates new scope containing all
the expressions on its right, where x is defined to be the generated element.
This means that guards can be coded as
Section 46.4: Guards
Another feature of list comprehensions is guards, which also act as filters.
Guards are Boolean expressions and appear on the right side of the bar in a
list comprehension.
Their most basic use is

Any variable used in a guard must appear on its left in the comprehension, or
otherwise be in scope. So,

[ f x | x <- list, pred1 x y, pred2 x]   -- `y` must be defined in an outer scope

which is equivalent to

(the >>= operator is infixl 1, i.e. it associates (is parenthesized) to the left).
Examples:
Section 46.5: Parallel Comprehensions
With the ParallelListComp (parallel list comprehensions) language extension,

is equivalent to

Example:
Section 46.6: Local Bindings
List comprehensions can introduce local bindings for variables to hold some
interim values:

Same effect can be achieved with a trick,

The let in list comprehensions is recursive, as usual. But generator bindings


are not, which enables shadowing:
Section 46.7: Nested Generators
List comprehensions can also draw elements from multiple lists, in which
case the result will be the list of every possible combination of the two
elements, as if the two lists were processed in the nested fashion. For
example,
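A small sketch of such a comprehension:

    [ (x, y) | x <- [1, 2], y <- "ab" ]
    -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]   -- every combination; the rightmost generator varies fastest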
Chapter 47: Streaming IO
Section 47.1: Streaming IO
io-streams is a stream-based library that focuses on the Stream abstraction, but for IO. It exposes two types:

InputStream:a read-only smart handle


OutputStream: a write-only smart handle

We can create a stream with makeInputStream :: IO (Maybe a) -> IO (InputStream a).

Reading from a stream is performed using read :: InputStream a -> IO (Maybe a), where Nothing denotes an EOF:
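A minimal sketch, assuming the io-streams package's System.IO.Streams module:

    import qualified System.IO.Streams as Streams

    main :: IO ()
    main = do
      is <- Streams.fromList [1, 2, 3 :: Int]   -- build an InputStream Int
      Streams.read is >>= print                 -- Just 1
      Streams.read is >>= print                 -- Just 2
      -- once the stream is exhausted, read returns Nothing (EOF)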
Chapter 48: Google Protocol
Buers
Section 48.1: Creating, building and
using a simple .proto file

After saving we can now create the Haskell files which we can use in our
project by running

$HOME/.local/bin/hprotoc --proto_path=. --haskell_out=. person.proto

We should get an output similar to this:

hprotoc will create a new folder Protocol in the current directory with Person.hs, which we can simply import into our Haskell project:

As a next step, if using Stack, add the required packages to build-depends: and the generated module to exposed-modules: in your .cabal file.


If we now get an incoming message from a stream, the message will have the type ByteString.
In order to transform the ByteString (which obviously should contain encoded "Person" data) into our Haskell data type, we need to call the function messageGet, which we import by

which enables us to create a value of type Person using:
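A sketch of how the decoding step could look (the generated module name Protocol.Person is an assumption based on the hprotoc output described above):

    import qualified Data.ByteString.Lazy as BL
    import Text.ProtocolBuffers (messageGet)
    import Protocol.Person (Person)                  -- generated by hprotoc

    decodePerson :: BL.ByteString -> Either String Person
    decodePerson bytes = fst <$> messageGet bytes    -- messageGet also returns the leftover input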


Chapter 49: Template Haskell &
QuasiQuotes
Section 49.1: Syntax of Template
Haskell and Quasiquotes
Template Haskell is enabled by the -XTemplateHaskell GHC extension. This extension enables all the syntactic features further detailed in this section.
The full details on Template Haskell are given by the user guide.
Splices
A splice is a new syntactic entity enabled by Template Haskell, written as $(...), where ... is some expression.
There must not be a space between $ and the first character of the expression; Template Haskell overrides the parsing of the $ operator - e.g. f$g is normally parsed as ($) f g, whereas with Template Haskell enabled it is parsed as a splice.
When a splice appears at the top level, the $ may be omitted. In this case, the spliced expression is the entire line.
A splice represents code which is run at compile time to produce a Haskell AST, and that AST is compiled as Haskell code and inserted into the program.
Splices can appear in place of: expressions, patterns, types, and top-level declarations. The type of the spliced expression is, in each case respectively, Q Exp, Q Pat, Q Type, Q [Dec]. Note that declaration splices may only appear at the top level, whereas the others may be inside other expressions, patterns, or types, respectively.
Expression quotations (note: not a QuasiQuotation)
An expression quotation is a new syntactic entity written as one of:

    [e| .. |] or [| .. |]  -  .. is an expression and the quotation has type Q Exp;
    [p| .. |]              -  .. is a pattern and the quotation has type Q Pat;
    [t| .. |]              -  .. is a type and the quotation has type Q Type;
    [d| .. |]              -  .. is a list of declarations and the quotation has type Q [Dec].
An expression quotation takes a compile-time program and produces the AST represented by that program.
The use of a value in a quotation (e.g. \x -> [| x |]) without a splice corresponds to syntactic sugar for \x -> [| $(lift x) |], where lift comes from the class

    class Lift t where
      lift :: t -> Q Exp
      default lift :: Data t => t -> Q Exp

Typed splices and quotations
Typed splices are similar to the previously mentioned (untyped) splices, and are written as $$(..), where .. is an expression.
If e has type Q (TExp a), then $$e has type a.
Typed quotations take the form [|| .. ||], where .. is an expression of type a; the resulting quotation has type Q (TExp a).
Typed expressions can be converted to untyped ones: unType :: TExp a -> Exp.
QuasiQuotes
QuasiQuotes generalize expression quotations - previously, the parser
used by the expression quotation is one of a fixed set (e,p,t,d), but
QuasiQuotes allow a custom parser to be defined and used to produce
code at compile time. Quasi-quotations can appear in all the same
contexts as regular quotations.
A quasi-quotation is written as [iden| ... |], where iden is an identifier of type Language.Haskell.TH.Quote.QuasiQuoter.
A QuasiQuoter is simply composed of four parsers, one for each of the different contexts in which quotations can appear:

    data QuasiQuoter = QuasiQuoter { quoteExp  :: String -> Q Exp
                                   , quotePat  :: String -> Q Pat
                                   , quoteType :: String -> Q Type
                                   , quoteDec  :: String -> Q [Dec] }

Names
Haskell identifiers are represented by the type
Language.Haskell.TH.Syntax.Name. Names form the leaves of abstract syntax
trees representing Haskell programs in Template Haskell.
An identifier which is currently in scope may be turned into a name with either 'e or ''T. In the first case, e is interpreted in the expression scope, while in the second case T is in the type scope (recalling that types and value constructors may share a name without ambiguity in Haskell).
Section 49.2: The Q type
The Q :: * -> * type constructor defined in Language.Haskell.TH.Syntax is an abstract type representing computations which have access to the compile-time environment of the module in which the computation is run. The Q type also handles variable substitution, called name capture by TH (and discussed here). All splices have type Q X for some X.
The compile-time environment includes:

    - in-scope identifiers and information about said identifiers: types of functions, types and source data types of constructors, full specification of type declarations (classes, type families)
    - the location in the source code (line, column, module, package) where the splice occurs
    - fixities of functions (GHC 7.10)
    - enabled GHC extensions (GHC 8.0)
The Q type also has the ability to generate fresh names, with the function newName :: String -> Q Name. Note that the name is not bound anywhere implicitly, so the user must bind it themselves, and so making sure the resulting use of the name is well-scoped is the responsibility of the user.
Q has instances for Functor, Applicative and Monad, and this is the main interface for manipulating Q values, along with the combinators provided in Language.Haskell.TH.Lib, which defines a helper function for every constructor of the TH AST, of the form:

Note that ExpQ, TypeQ, DecsQ and PatQ are synonyms for the AST types which are typically stored inside the Q type.
The TH library provides a function runQ :: Quasi m => Q a -> m a, and there is a Quasi instance for IO, so runQ :: Q a -> IO a is available and it would seem that the Q type is just a fancy IO. However, the IO action produced in this way does not have access to any compile-time environment - this is only available in the actual Q type. Such IO actions will fail at runtime if trying to access said environment.
Section 49.3: An n-arity curry
The familiar

function can be generalized to tuples of arbitrary arity, for example:

However, writing such functions for tuples of arity 2 to (e.g.) 20 by hand


would be tedious (and ignoring the fact that the presence of 20 tuples in your
program almost certainly signal design issues which should be fixed with
records).
We can use Template Haskell to produce such curryN functions for arbitrary n:

The curryN function takes a natural number, and produces the curry function of
that arity, as a Haskell AST.
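A sketch of such a curryN (helper names are illustrative; note that the TupE constructor takes [Maybe Exp] instead of [Exp] in template-haskell >= 2.16):

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Monad (replicateM)
    import Language.Haskell.TH

    curryN :: Int -> Q Exp
    curryN n = do
      f  <- newName "f"
      xs <- replicateM n (newName "x")
      let args = map VarP (f : xs)          -- the pattern f x1 x2 .. xn
          ntup = TupE (map VarE xs)         -- the tuple (x1, x2, .., xn)
      return $ LamE args (AppE (VarE f) ntup)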

First we produce fresh variables (with newName) for each of the arguments of the function - one for the input function, and one for each of the arguments to said function.

The expression args represents the pattern f x1 x2 .. xn. Note that a pattern is a separate syntactic entity - we could take this same pattern and place it in a lambda, or a function binding, or even the LHS of a let binding (which would be an error).

The function must build the argument tuple from the sequence of arguments,
which is what we've done here. Note the distinction between pattern variables
(VarP) and expression variables (VarE).
Finally, the value which we produce is the AST \f x1 x2 .. xn -> f (x1, x2, .., xn).
We could have also written this function using quotations and 'lifted'
constructors:
Note that quotations must be syntactically valid, so [| \ $(..) -> .. |] is invalid, because there is no way in regular Haskell to declare a 'list' of patterns - the above is interpreted as \ var -> .., and the spliced expression is expected to have type PatQ, i.e. a single pattern, not a list of patterns.
Finally, we can load this TH function in GHCi:

This example is adapted primarily from here.


Chapter 50: Phantom types
Section 50.1: Use Case for Phantom
Types: Currencies
Phantom types are useful for dealing with data, that has identical
representations but isn't logically of the same type.
A good example is dealing with currencies. If you work with currencies you absolutely never want to, e.g., add two amounts of different currencies. What would the resulting currency of 5.32 € + 2.94 $ be? It's not defined and there is no good reason to do this.
A solution to this could look something like this:
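A minimal sketch (the tag types EUR and USD carry no values; they exist only at the type level):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}

    data EUR
    data USD

    newtype Amount currency = Amount Double
      deriving (Show, Eq, Ord, Num)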

The GeneralisedNewtypeDeriving extension allows us to derive Num for the Amount type. GHC reuses Double's Num instance.
Now if you represent Euro amounts with e.g. (5.0 :: Amount EUR) you have solved the problem of keeping double amounts separate at the type level without introducing overhead. Stuff like (1.13 :: Amount EUR) + (5.30 :: Amount USD) will result in a type error and require you to deal with currency conversion appropriately.
More comprehensive documentation can be found in the haskell wiki article
Chapter 51: Modules
Section 51.1: Defining Your Own
Module
If we have a file called Business.hs, we can define a Business module that can be
import-ed, like so:

A deeper hierarchy is of course possible; see the Hierarchical


module names example.
Section 51.2: Exporting
Constructors
To export the type and all its constructors, one must use the following syntax:

So, for the following top-level definitions in a file called People.hs:

This module declaration at the top:

would only export Person and its constructors Friend and Foe.
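A sketch of how People.hs could look (the bodies here are illustrative):

    -- People.hs
    module People (Person(..)) where   -- Person(..) exports the type and all its constructors

    data Person = Friend String | Foe

    isFoe :: Person -> Bool            -- defined, but not exported by this export list
    isFoe Foe = True
    isFoe _   = False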
If the export list following the module keyword is omitted, all of the names
bound at the top level of the module would be exported:

would export Person, its constructors, and the isFoe function.


Section 51.3: Importing Specific
Members of a Module
Haskell supports importing a subset of items from a module.

would only import map from Data.Stream, and calls to this function would
require D.:

otherwise the compiler will try to use Prelude's map function.


Section 51.4: Hiding Imports
Prelude often defines functions whose names are used elsewhere. Not hiding
such imports (or using qualified imports where clashes occur) will cause
compilation errors.
Data.Stream defines functions named map, head and tail, which normally clash with those defined in Prelude. We can hide those imports from Prelude using hiding:
import Data.Stream -- everything from Data.Stream

import Prelude hiding (map, head, tail, scan, foldl, foldr, filter, dropWhile, take) -- etc

In reality, it would require too much code to hide Prelude clashes like this,
so you would in fact use a qualified import of Data.Stream instead.
Section 51.5: Qualifying Imports
When multiple modules define the same functions by name, the compiler will
complain. In such cases (or to improve readability), we can use a qualified
import:

Now we can prevent ambiguity compiler errors when we use map, which is
defined in Prelude and Data.Stream:

It is also possible to import a module with only the clashing names being qualified, via import Data.Text as T, which allows one to have Text instead of T.Text etc.
Section 51.6: Hierarchical module
names
The names of modules follow the filesystem's hierarchical structure. With the
following file layout:

the module headers would look like this:

Note that:

    - the module name is based on the path of the file declaring the module
    - folders may share a name with a module, which gives a naturally hierarchical naming structure to modules
Chapter 52: Tuples (Pairs, Triples,
...)
Section 52.1: Extract tuple components
Use the fst and snd functions (from Prelude or Data.Tuple) to extract the first and
second component of pairs.

Or use pattern matching.

Pattern matching also works for tuples with more than two components.

Haskell does not provide standard functions like fst or snd for tuples with
more than two components. The tuple library on Hackage provides such
functions in the Data.Tuple.Select module.
Section 52.2: Strictness of matching a
tuple
The pattern (p1, p2) is strict in the outermost tuple constructor, which can lead to unexpected strictness behaviour. For example, the following expression diverges (using Data.Function.fix):

since the match on (x, y) is strict in the tuple constructor. However, the following expression, using an irrefutable pattern, evaluates to (1, 2) as expected:
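A sketch of both expressions:

    import Data.Function (fix)

    diverges :: (Int, Int)
    diverges = fix (\(x, y) -> (1, 2))    -- matching (x, y) forces the argument: infinite loop

    fine :: (Int, Int)
    fine = fix (\ ~(x, y) -> (1, 2))      -- irrefutable pattern: evaluates to (1, 2)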
Section 52.3: Construct tuple values
Use parentheses and commas to create tuples. Use one comma to create a
pair.

Use more commas to create tuples with more components.

Note that it is also possible to declare tuples in their unsugared form.

Tuples can contain values of different types.

Tuples can contain complex values such as lists or more tuples.


Section 52.4: Write tuple types
Use parentheses and commas to write tuple types. Use one comma to write a
pair type.

Use more commas to write tuple types with more components.

Tuples can contain values of different types.

Tuples can contain complex values such as lists or more tuples.


Section 52.5: Pattern Match on Tuples
Pattern matching on tuples uses the tuple constructors. To match a pair for
example, we'd use the (,) constructor:

We use more commas to match tuples with more components:

Tuple patterns can contain complex patterns such as list patterns or more
tuple patterns.
Section 52.6: Apply a binary function
to a tuple (uncurrying)
Use the uncurry function (from Prelude or Data.Tuple) to convert a binary function
to a function on tuples.
Section 52.7: Apply a tuple function to
two arguments (currying)
Use the curry function (from Prelude or Data.Tuple) to convert a function that
takes tuples to a function that takes two arguments.
Section 52.8: Swap pair components
Use swap (from Data.Tuple) to swap the components of a pair.

Or use pattern matching.


Chapter 53: Graphics with Gloss
Section 53.1: Installing Gloss
Gloss is easily installed using the Cabal tool. Having installed Cabal, one can
run cabal install gloss to install Gloss.
Alternatively the package can be built from source, by downloading the
source from Hackage or GitHub, and doing the following:
1. Enter the gloss/gloss-rendering/ directory and do cabal install
2. Enter the gloss/gloss/ directory and once more do cabal install
Section 53.2: Getting
something on the screen
In Gloss, one can use the display function to create very simple static graphics.
To use this one needs to first import Graphics.Gloss. Then in the code there should be the following:

window is of type Display which can be constructed in two ways:

Here the last argument (0,0) in InWindow marks the location of the top left
corner.
For versions older than 1.11: In older versions of Gloss, FullScreen takes another argument, which is meant to be the size of the frame that gets drawn on, which in turn gets stretched to fullscreen-size, for example: FullScreen (1024, 768).

background is of type Color. It defines the background color, so it's as simple as:

Then we get to the drawing itself. Drawings can be very complex. How to specify these will be covered elsewhere (one can refer to this for the moment), but it can be as simple as the following circle with a radius of 80:

Summarizing example
As more or less stated in the documentation on Hackage, getting something
on the screen is as easy as:
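A minimal sketch of such a program:

    import Graphics.Gloss

    main :: IO ()
    main = display window background drawing
      where
        window     = InWindow "Hello" (400, 300) (0, 0)   -- title, size, top-left position
        background = white
        drawing    = circle 80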
Chapter 54: State Monad
State monads are a kind of monad that carry a state that might change during
each computation run in the monad. Implementations are usually of the form
State s a which represents a computation that carries and potentially modifies a
state of type s and produces a result of type a, but the term "state monad" may
generally refer to any monad which carries a state. The mtl and transformers packages provide general implementations of state monads.
Section 54.1: Numbering the nodes of a
tree with a counter
We have a tree data type like this:

And we wish to write a function that assigns a number to each node of the
tree, from an incrementing counter:

The long way


First we'll do it the long way around, since it illustrates the State monad's low-
level mechanics quite nicely.

Refactoring Split out the counter into a postIncrement action


The bit where we are getting the current counter and then putting counter + 1
can be split off into a postIncrement action, similar to what many C-style
languages provide:

Split out the tree walk into a higher-order function


The tree walk logic can be split out into its own function, like this:

With this and the postIncrement function we can rewrite tagStep:

Use the Traversable class


The mapTreeM solution above can be easily rewritten into an instance of the
Traversable class:

Note that this required us to use Applicative (the <*> operator) instead of Monad. With that, we can now write tag like a pro:

Note that this works for any Traversable type, not just our Tree type!
Getting rid of the Traversable boilerplate
GHC has a DeriveTraversable extension that eliminates the need for writing the
instance above:
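A sketch of the derived version, assuming the same Tree shape as above:

    {-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

    data Tree a = Leaf | Node (Tree a) a (Tree a)
      deriving (Show, Functor, Foldable, Traversable)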
Chapter 55: Pipes
Section 55.1: Producers
A Producer is some monadic action that can yield values for downstream
consumption:

For example:

    naturals :: Monad m => Producer Int m ()
    naturals = each [1..]   -- each is a utility function exported by Pipes

We can of course have Producers that are functions of other values too:
Section 55.2: Connecting Pipes
Use >-> to connect Producers, Consumers and Pipes to compose larger Pipe
functions.

The Producer, Consumer, Pipe, and Effect types are all defined in terms of the general Proxy type. Therefore >-> can be used for a variety of purposes. The type yielded by the left argument must match the type consumed by the right argument:

    (>->) :: Monad m => Producer b m r -> Consumer b m r -> Effect     m r
    (>->) :: Monad m => Producer b m r -> Pipe b c m r   -> Producer c m r
    (>->) :: Monad m => Pipe a b m r   -> Consumer b m r -> Consumer a m r
    (>->) :: Monad m => Pipe a b m r   -> Pipe b c m r   -> Pipe a c   m r
Section 55.3: Pipes
Pipes can both await and yield.

This Pipe awaits an Int and converts it to a String:
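For instance, a sketch of such a Pipe:

    import Pipes
    import Control.Monad (forever)

    intToStr :: Monad m => Pipe Int String m r
    intToStr = forever $ await >>= yield . show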


Section 55.4: Running Pipes with
runEect
We use runEffect to run our Pipe:

Note that runEffect requires an Effect, which is a self-contained Proxy with no inputs or outputs:

(where X is the empty type, also known as Void).
Section 55.5:
Consumers
A Consumer can only await values from upstream.

For example:
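A minimal sketch of a Consumer:

    import Pipes
    import Control.Monad (forever)

    printer :: Show a => Consumer a IO r
    printer = forever $ do
      x <- await          -- a Consumer can only await values from upstream
      lift (print x)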
Section 55.6: The Proxy monad
transformer
pipes's
core data type is the Proxy monad transformer. Pipe, Producer, Consumer and
so on are defined in terms of Proxy.
Since Proxy is a monad transformer, definitions of Pipes take the form of
monadic scripts which await and yield values, additionally performing effects
from the base monad m.
Section 55.7: Combining Pipes and
Network communication
Pipes supports simple binary communication between a client and a server
In this example:
1. a client connects and sends a FirstMessage
2. the server receives it and answers DoSomething 0
3. the client receives it and answers DoNothing
4. steps 2 and 3 are repeated indefinitely
The command data type exchanged over the network:

Here, the server waits for a client to connect:


The client connects thus:
Chapter 56: Infix operators
Section 56.1: Prelude
Logical
&& is logical AND, || is logical OR.
== is equality, /= non-equality, < / <= lesser and > / >= greater operators.
Arithmetic operators
The numerical operators +, - and / behave largely as you'd expect. (Division works only on fractional numbers to avoid rounding issues - integer division must be done with quot or div). More unusual are Haskell's three exponentiation operators:

^ takes a base of any number type to a non-negative, integral power. This works simply by (fast) iterated multiplication.

^^ raises a fractional-number base to any integral power, which may also be negative (so it works for exact types such as Rational).

** implements real-number exponentiation. This works for very general arguments, but is more computationally expensive than ^ or ^^, and generally incurs small floating-point errors.

Lists
There are two concatenation operators: ++, which joins two lists, and : (pronounced cons), which prepends a single element before a list. The : operator is actually a constructor and can thus also be used to pattern-match ("inverse construct") a list.

!! is an indexing operator.
Note that indexing lists is inefficient (complexity O(n) instead of O(1) for arrays or O(log n) for maps); it's generally preferred in Haskell to deconstruct lists by folding or pattern matching instead of indexing.

$ is the function application operator; it is mostly used to avoid parentheses. It also has a strict version $!, which forces the argument to be evaluated before applying the function.

>> sequences two monadic actions, discarding the result of the first; for example, an action that writes to a file followed by >> putStrLn "Done." will write to the file, then print a message to the screen.

>>= does the same, while also accepting an argument to be passed from the first action to the following. readLn >>= \x -> print (x^2) will wait for the user to input a number, then output the square of that number to the screen.
Section 56.2: Finding information
about infix operators
Because infixes are so common in Haskell, you will regularly need to look up
their signature etc.. Fortunately, this is just as easy as for any other function:
The Haskell search engines Hayoo and Hoogle can be used for infix
operators, like for anything else that's defined in some library.
In GHCi or IHaskell, you can use the :i and :t (info and type) directives to
learn the basic properties of an operator. For example,

This tells me that ^^ binds more tightly than +, both take numerical types
as their elements, but ^^ requires the exponent to be integral and the base
to be fractional.
The less verbose :t requires the operator in parentheses, like
Section 56.3: Custom operators
In Haskell, you can define any infix operator you like. For example, I could
define the list-enveloping operator as

You should always give such operators a fixity declaration, like

(which would mean >+< binds as tightly as ++ and : do).
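A sketch of how such a definition could look (the behaviour of >+< here is an assumption):

    (>+<) :: [a] -> [a] -> [a]
    env >+< xs = env ++ xs ++ env     -- "envelop" a list with another

    infixr 5 >+<                      -- binds as tightly as ++ and :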


Chapter 57: Parallelism
Type/Function - Detail

    data Eval a - Eval is a Monad that makes it easier to define parallel strategies

    type Strategy a = a -> Eval a - a function that embodies a parallel evaluation strategy. The function traverses (parts of) its argument, evaluating subexpressions in parallel or in sequence

    rpar :: Strategy a - sparks its argument (for evaluation in parallel)

    rseq :: Strategy a - evaluates its argument to weak head normal form

    force :: NFData a => a -> a - evaluates the entire structure of its argument, reducing it to normal form, before returning the argument itself. It is provided by the Control.DeepSeq module
Section 57.1: The Eval Monad
Parallelism in Haskell can be expressed using the Eval Monad from
Control.Parallel.Strategies, using the rpar and rseq functions (among others).
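A minimal sketch (the expensive computations are placeholders):

    import Control.Parallel.Strategies

    slowSum :: Integer -> Integer
    slowSum n = sum [1 .. n]

    main :: IO ()
    main = do
      let (a, b) = runEval $ do
            a <- rpar (slowSum 50000000)   -- sparked for parallel evaluation
            b <- rpar (slowSum 60000000)
            return (a, b)
      -- runEval returns immediately; a and b are still being computed here
      print (a + b)                        -- this forces both results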

Running main above will execute and "return" immediately, while the two
values, a and b are computed in the background through rpar.
Note: ensure you compile with -threaded for parallel execution to occur.
Section 57.2: rpar
rpar :: Strategy a executes the given strategy (recall: type Strategy a = a -> Eval a) in parallel:

Running this will demonstrate the concurrent behaviour:


Section 57.3: rseq
We can use rseq :: Strategy a to force an argument to Weak Head Normal Form:

This subtly changes the semantics of the rpar example; whereas the latter
would return immediately whilst computing the values in the background,
this example will wait until a can be evaluated to WHNF.
Chapter 58: Parsing HTML with
taggy-lens and lens
Section 58.1: Filtering elements from
the tree
id="article" Find div with and strip out all the inner script tags.

Contribution based upon @duplode's SO answer


Section 58.2: Extract the text contents
from a div with a particular id
Taggy-lens allows us to use lenses to parse and inspect HTML documents.
Chapter 59: Foreign Function
Interface
Section 59.1: Calling C from Haskell
For performance reasons, or due to the existence of mature C libraries, you
may want to call C code from a Haskell program. Here is a simple example
of how you can pass data to a C library and get an answer back. foo.c:

Foo.hs:

The unsafe keyword generates a more efficient call than 'safe', but requires that
the C code never makes a callback to the Haskell system. Since foo is
completely in C and will never call Haskell, we can use unsafe.

We also need to instruct cabal to compile and link in the C source. foo.cabal:

Then you can run:


Section 59.2: Passing Haskell functions
as callbacks to C code
It is very common for C functions to accept pointers to other functions as
arguments. Most popular example is setting an action to be executed when a
button is clicked in some GUI toolkit library. It is possible to pass Haskell
functions as C callbacks.
To call this C function:

void event_callback_add (Object *obj, Object_Event_Cb func, const void *data)

we first import it to Haskell code:


foreign import ccall "header.h event_callback_add"

callbackAdd :: Ptr () -> FunPtr Callback -> Ptr () -> IO ()

Now looking at how Object_Event_Cb is defined in C header, define what Callback


is in Haskell:

Finally, create a special function that wraps a Haskell function of type Callback into a pointer FunPtr Callback:

Now we can register callback with C code:

It is important to free allocated FunPtr once you unregister the callback:


Chapter 60: Gtk3
Section 60.1: Hello World in Gtk
This example show how one may create a simple "Hello World" in Gtk3,
setting up a window and button widgets. The sample code will also
demonstrate how to set different attributes and actions on the widgets.
Chapter 61: Monad Transformers
Section 61.1: A monadic counter
An example on how to compose the reader, writer, and state monad using
monad transformers. The source code can be found in this repository
We want to implement a counter, that increments its value by a given
constant.
We start by defining some types, and functions:
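For example, the counter could be defined as follows (the names MkCounter, cValue and inc match the ones used later in this section):

    newtype Counter = MkCounter { cValue :: Int }
      deriving Show

    -- increment the counter by n
    inc :: Counter -> Int -> Counter
    inc (MkCounter c) n = MkCounter (c + n)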

Assume we want to carry out the following computation using the counter:
    - set the counter to 0
    - set the increment constant to 3
    - increment the counter 3 times
    - set the increment constant to 5
    - increment the counter 2 times
The state monad provides abstractions for passing state around. We can make
use of the state monad, and define our increment function as a state
transformer.

This already enables us to express a computation in a more clear and succinct


way:
But we still have to pass the increment constant at each invocation. We
would like to avoid this.
Adding an environment
The reader monad provides a convenient way to pass an environment around.
This monad is used in functional programming to perform what in the OO
world is known as dependency injection.
In its simplest version, the reader monad requires two types:

    - the type of the value being read (i.e. our environment, r below),
    - the value returned by the reader monad (a below).

    Reader r a
However, we need to make use of the state monad as well. Thus, we need to
use the ReaderT transformer:

Using ReaderT, we can define our counter with environment and state as
follows:

We define an incR function that takes the increment constant from the
environment (using ask), and to define our increment function in terms of our
CounterS monad we make use of the lift function (which belongs to the monad
transformer class).

Using the reader monad we can define our computation as follows:


The requirements changed: we need logging!
Now assume that we want to add logging to our computation, so that we can
see the evolution of our counter in time.
We also have a monad to perform this task, the writer monad. As with the reader monad, since we are composing them, we need to make use of the writer monad transformer:

Here w represents the type of the output to accumulate (which has to be a monoid, which allows us to accumulate this value), m is the inner monad, and a the type of the computation.

We can then define our counter with logging, environment, and state as
follows:

And making use of lift we can define the version of the increment function
which logs the value of the counter after each increment:
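A sketch of how the full stack could be assembled (mtl assumed; the type names follow the prose above):

    import Control.Monad.Reader
    import Control.Monad.State
    import Control.Monad.Writer

    type CounterS  = State Counter
    type CounterRS = ReaderT Int CounterS
    type CounterWS = WriterT [Int] CounterRS

    incS :: Int -> CounterS ()
    incS n = modify (\c -> inc c n)

    incR :: CounterRS ()
    incR = ask >>= lift . incS

    incW :: CounterWS ()
    incW = lift incR >> get >>= tell . (: []) . cValue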

Now the computation that contains logging can be written as follows:

Doing everything in one go


This example intended to show monad transformers at work. However,
we can achieve the same effect by composing all the aspects
(environment, state, and logging) in a single increment operation. To
do this we make use of type-constraints:

inc' :: (MonadReader Int m, MonadState Counter m, MonadWriter [Int] m) => m ()


inc' = ask >>= modify . (flip inc) >> get >>= tell . (:[]) . cValue

Here we arrive at a solution that will work for any monad that satisfies the constraints above. The computation function is defined thus, with type:

    mComputation' :: (MonadReader Int m, MonadState Counter m, MonadWriter [Int] m) => m ()

since in its body we make use of inc'.


We could run this computation, in the ghci REPL for instance, as follows:

runState ( runReaderT ( runWriterT mComputation' ) 15 ) (MkCounter 0)


Chapter 62: Bifunctor
Section 62.1: Definition of Bifunctor
Bifunctor is the class of types f :: * -> * -> * with two type parameters, both of which can be covariantly mapped over simultaneously.

bimap can be thought of as applying a pair of fmap operations to a datatype.


A correct instance of Bifunctor for a type f must satisfy the bifunctor laws,
which are analogous to the functor laws:

The Bifunctor class is found in the Data.Bifunctor module. For GHC versions >= 7.10, this module is bundled with the compiler; for earlier versions you need to install the bifunctors package.
Section 62.2: Common instances of
Bifunctor
Two-element tuples
(,) is an example of a type that has a Bifunctor instance.

bimap takes a pair of functions and applies them to the tuple's respective components.

Either's instance of Bifunctor selects one of the two functions to apply depending on whether the value is Left or Right.
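For example:

    import Data.Bifunctor

    bimap (+ 1) show (1, 2)      -- (2,"2")
    bimap (+ 1) show (Left 1)    -- Left 2
    bimap (+ 1) show (Right 2)   -- Right "2"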
Section 62.3: first and second
If mapping covariantly over only the first argument, or only the second
argument, is desired, then first or second ought to be used (in lieu of bimap).

For example,
Chapter 63: Proxies
Section 63.1: Using Proxy
The Proxy :: k -> * type, found in Data.Proxy, is used when you need to give the compiler some type information - e.g., to pick a type class instance - which is nonetheless irrelevant at runtime.

Functions which use a Proxy typically use ScopedTypeVariables to pick a type class

instance based on the a type. For example, the classic example of an

ambiguous function,

which results in a type error because the elaborator doesn't know which
instance of Show or Read to use, can be resolved using Proxy:

When calling a function with Proxy, you need to use a type annotation to
declare which a you meant.
Section 63.2: The "polymorphic
proxy" idiom
Since Proxy contains no runtime information, there is never a need to pattern-
match on the Proxy constructor. So a common idiom is to abstract over the
Proxy datatype using a type variable.

    showread :: forall proxy a. (Show a, Read a) => proxy a -> String -> String
    showread _ = (show :: a -> String) . read

Now, if you happen to have an f a in scope for some f, you don't need to write out Proxy :: Proxy a when calling f.
Section 63.3: Proxy is like ()
Since Proxy contains no runtime information, you can always write a natural transformation f a -> Proxy a for any f.

This is just like how any given value can always be erased to ():

Technically, Proxy is the terminal object in the category of functors, just like ()
is the terminal object in the category of values.
Chapter 64: Applicative Functor
Applicative is the class of types f :: * -> * which allows lifted function application over a structure, where the function is also embedded in that structure.
Section 64.1: Alternative definition
Since every Applicative Functor is a Functor, fmap can always be used on it;
thus the essence of Applicative is the pairing of carried contents, as well as
the ability to create it:

    class Functor f => PairingFunctor f where
      funit :: f ()                     -- create a context, carrying nothing of import
      fpair :: (f a, f b) -> f (a, b)   -- collapse a pair of contexts into a pair-carrying context

This class is isomorphic to Applicative.

Conversely,
Section 64.2: Common instances of
Applicative
Maybe
Maybe is an applicative functor containing a possibly-absent value.

pure lifts the given value into Maybe by applying Just to it. The (<*>) function applies a function wrapped in a Maybe to a value in a Maybe. If both the function and the value are present (constructed with Just), the function is applied to the value and the wrapped result is returned. If either is missing, the computation can't proceed and Nothing is returned instead.
Lists

One way for lists to fit the type signature (<*>) :: [a -> b] -> [a] -> [b] is to take the two lists' Cartesian product, pairing up each element of the first list with each element of the second one:

This is usually interpreted as emulating nondeterminism, with a list of values


standing for a nondeterministic value whose possible values range over that
list; so a combination of two nondeterministic values ranges over all possible
combinations of the values in the two lists:

There's a class of Applicatives which "zip" their two inputs together. One simple
example is that of infinite streams:

Stream's Applicative
instance applies a stream of functions to a stream of
arguments point-wise, pairing up the values in the two streams by position.
pure returns a constant stream – an infinite list of a single fixed value:

Lists too admit a "zippy" Applicative instance, for which there exists the ZipList
newtype:

Since zip trims its result according to the shortest input, the only implementation of pure that satisfies the Applicative laws is one which returns an infinite list:

    pure a = ZipList (repeat a)   -- i.e. ZipList (fix (a:)) = ZipList [a,a,a,a,...]
For example:

The two possibilities remind us of the outer and the inner product, similar to multiplying an n x 1 matrix with a 1 x m one in the first case, getting the n x m matrix as a result (but flattened); or multiplying a 1-row and a 1-column matrix (but without the summing up) in the second case.
Functions

When specialised to functions ((->) r), the type signatures of pure and <*> match those of the K and S combinators, respectively:

pure must be const, and <*> takes a pair of functions and applies them each to a fixed argument, applying the two results:

Functions are the prototypical "zippy" applicative. For example, since infinite streams are isomorphic to ((->) Nat), representing streams in a higher-order way produces the zippy Applicative instance automatically.
Chapter 65: Common monads as
free monads
Section 65.1: Free Empty ~~ Identity
Given

we have

which is isomorphic to
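A sketch of the definitions involved (Free as usually defined, and Empty with no constructors):

    data Free f a = Pure a | Free (f (Free f a))

    data Empty a          -- no constructors

    -- There is no value of type Empty (Free Empty a), so every value of
    -- Free Empty a is built with Pure, which is exactly Identity a:
    fromFree :: Free Empty a -> a
    fromFree (Pure a) = a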
Section 65.2: Free Identity ~~ (Nat,) ~~
Writer Nat
Given

we have

which is isomorphic to

or equivalently (Nat, a) (if you promise to evaluate the fst element first), aka Writer Nat a, with
Section 65.3: Free Maybe ~~ MaybeT
(Writer Nat)
Given

we have

which is equivalent to

or equivalently (Nat, Maybe a) (if you promise to evaluate the fst element first), aka MaybeT (Writer Nat) a, with
Section 65.4: Free (Writer w) ~~
Writer [w]
Given

we have

which is isomorphic to

or, equivalently (if you promise to evaluate the log first), Writer [w] a.
Section 65.5: Free (Const c)
~~ Either c
Given

we have

which is isomorphic to
Section 65.6: Free (Reader x) ~~
Reader (Stream x)
Given

we have

which is isomorphic to

or equivalently Stream x -> a, with


Chapter 66: Common functors as
the base of cofree comonads
Section 66.1: Cofree Empty ~~ Empty
Given

we have
Section 66.2: Cofree (Const c) ~~
Writer c
Given

we have

which is isomorphic to
Section 66.3: Cofree Identity ~~
Stream
Given

we have

which is isomorphic to
Section 66.4: Cofree Maybe ~~
NonEmpty
Given

we have

which is isomorphic to
Section 66.5: Cofree (Writer w) ~~
WriterT w Stream
Given

we have

which is equivalent to

which can properly be written as WriterT w Stream with


Section 66.6: Cofree (Either e) ~~
NonEmptyT (Writer e)
Given

we have

which is isomorphic to

or, if you promise to only evaluate the log after the complete result, NonEmptyT (Writer e) a, with
Section 66.7: Cofree (Reader x) ~~
Moore x
Given

we have

which is isomorphic to

aka Moore machine.


Chapter 67: Arithmetic
The numeric typeclass hierarchy
Num sits at the root of the numeric typeclass hierarchy. Its characteristic
operations and some common instances are shown below (the ones loaded by
default with Prelude plus those of Data.Complex):

We have already seen the Fractional class, which requires Num and introduces
the notions of "division" (/) and reciprocal of a number:

The Real class models the real numbers. It requires Num and Ord, therefore it models an ordered numerical field. As a counterexample, Complex numbers are not an ordered field (i.e. they do not possess a natural ordering relationship):

Floating (which implies Fractional) represents constants and operations that may not have a finite decimal expansion.
Caution: while expressions such as sqrt . negate :: Floating a => a -> a are perfectly valid, they might return NaN ("not-a-number"), which may not be an intended behaviour. In such cases, we might want to work over the Complex field (shown later).
In Haskell, all expressions (which includes numerical constants and functions
operating on those) have a decidable type. At compile time, the type-checker
infers the type of an expression from the types of the elementary functions
that compose it. Since data is immutable by default, there are no "type
casting" operations, but there are functions that copy data and generalize or
specialize the types within reason.
Section 67.1: Basic examples

In the examples above, the type-checker infers a type-class rather than a


concrete type for the two constants. In Haskell, the Num class is the most
general numerical one (since it encompasses integers and reals), but pi must
belong to a more specialized class, since it has a nonzero fractional part.

The concrete types above were inferred by GHC. More general types, like list0 :: Num a => [a], would have worked, but would have also been harder to preserve (e.g. if one consed a Double onto a list of Nums), due to the caveats shown above.
Section 67.2: `Could not deduce
(Fractional Int) ...`
The error message in the title is a common beginner mistake. Let's see how it
arises and how to fix it.
Suppose we need to compute the average value of a list of numbers; the
following declaration would seem to do it, but it wouldn't compile:

The problem is with the division function: its signature is (/) :: Fractional a => a -> a -> a, but in the case above the denominator (given by length :: Foldable t => t a -> Int) is of type Int (and Int does not belong to the Fractional class), hence the error message.
We can fix the error message with fromIntegral :: (Num b, Integral a) => a -> b. One can see that this function accepts values of any Integral type and returns corresponding ones in the Num class:

    averageOfList' :: (Foldable t, Fractional a) => t a -> a
    averageOfList' ll = sum ll / fromIntegral (length ll)
Section 67.3: Function examples
What's the type of (+) ?

What's the type of sqrt ?

What's the type of sqrt . fromIntegral?
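In GHCi (exact type-variable names may differ between GHC versions):

    ghci> :t (+)
    (+) :: Num a => a -> a -> a
    ghci> :t sqrt
    sqrt :: Floating a => a -> a
    ghci> :t sqrt . fromIntegral
    sqrt . fromIntegral :: (Integral a, Floating c) => a -> c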


Chapter 68: Role
The TypeFamilies language extension allows the programmer to define type-
level functions. What distinguishes type functions from non-GADT type
constructors is that parameters of type functions can be non-parametric
whereas parameters of type constructors are always parametric. This
distinction is important to the correctness of the GeneralizedNewtypeDeriving extension. To explicate this distinction, roles are introduced in Haskell.
Section 68.1: Nominal Role
Haskell Wiki has an example of a non-parametric parameter of a type
function:

Here x is non-parametric because to determine the outcome of applying Inspect


to a type argument, the type function must inspect x.
In this case, the role of x is nominal. We can declare the role explicitly with
the RoleAnnotations extension:
Section 68.2: Representational Role
An example of a parametric parameter of a type function:

Here x is parametric because to determine the outcome of applying DoNotInspect


to a type argument, the type function do not need to inspect x.
In this case, the role of x is representational. We can declare the role
explicitly with the RoleAnnotations extension:
Section 68.3: Phantom Role
A phantom type parameter has a phantom role. Phantom roles cannot be
declared explicitly.
Chapter 69: Arbitrary-rank
polymorphism with RankNTypes
GHC's type system supports arbitrary-rank explicit universal quantification in types through the use of the Rank2Types and RankNTypes language extensions.
Section 69.1: RankNTypes
StackOverflow forces me to have one example. If this topic is approved, we
should move this example here.
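In the meantime, here is a small sketch of a rank-2 type:

    {-# LANGUAGE RankNTypes #-}

    -- The caller must supply a function that works for *every* type a.
    applyToBoth :: (forall a. a -> a) -> (Int, String) -> (Int, String)
    applyToBoth f (n, s) = (f n, f s)

    main :: IO ()
    main = print (applyToBoth id (1, "one"))   -- (1,"one")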
Chapter 70: GHCJS
GHCJS is a Haskell to JavaScript compiler that uses the
GHC API.
Section 70.1: Running "Hello
World!" with Node.js
ghcjs can be invoked with the same command line arguments as ghc. The generated programs can be run directly from the shell with Node.js and SpiderMonkey jsshell, for example:
Chapter 71: XML
Encoding and decoding of XML documents.
Section 71.1: Encoding a record using
the `xml` library
Chapter 72: Reader / ReaderT
Reader provides functionality to pass a value along to each function. A
helpful guide with some diagrams can be found here:
http://adit.io/posts/2013-06-10-three-useful-monads.html
Section 72.1: Simple demonstration
A key part of the Reader monad is the ask function (https://hackage.haskell.org/package/mtl-2.2.1/docs/Control-Monad-Reader.html#v:ask), which is used here for illustrative purposes:
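A minimal sketch using ask (assuming mtl's Control.Monad.Reader):

    import Control.Monad.Reader

    greet :: Reader String String
    greet = do
      name <- ask                       -- ask returns the shared environment
      return ("Hello, " ++ name)

    main :: IO ()
    main = putStrLn (runReader greet "world")   -- prints: Hello, world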

The above will print out:


Chapter 73: Function call syntax
Haskell's function call syntax, explained with comparisons to C-style
languages where applicable. This is aimed at people who are coming to
Haskell from a background in C-style languages.
Section 73.1: Partial application - Part
1
In Haskell, functions can be partially applied; we can think of all functions as
taking a single argument, and returning a modified function for which that
argument is constant. To illustrate this, we can bracket functions as follows:
(((plus) 1) 2)

Here, the function (plus) is applied to 1, yielding the function ((plus) 1), which is applied to 2, yielding the function (((plus) 1) 2). Because plus 1 2 is a function which takes no arguments, you can consider it a plain value; however in Haskell, there is little distinction between functions and values.
To go into more detail: the function plus is a function that adds its arguments. The function plus 1 is a function that adds 1 to its argument. The function plus 1 2 is a function that adds 1 to 2, which is always the value 3.
Section 73.2: Partial
application - Part 2
As another example, we have the function map, which takes a function and a
list of values, and applies the function to each value of the list:

Let's say we want to increment each value in a list. You may decide to define
your own function, which adds one to its argument, and map that function
over your list

but if you have another look at addOne's definition, with parentheses added for emphasis:

The function addOne, when applied to any value x, is the same as the partially applied function plus 1 applied to x. This means the functions addOne and plus 1 are identical, and we can avoid defining a new function by just replacing addOne with plus 1 as a subexpression (remembering to use parentheses to isolate plus 1):
Section 73.3: Parentheses in a basic
function call
For a C-style function call, e.g.

plus(a, b); // Parentheses surrounding only the arguments, comma separated

Then the equivalent Haskell code will be


(plus a b) -- Parentheses surrounding the function and the arguments, no commas

In Haskell, parentheses are not explicitly required for function application,


and are only used to disambiguate expressions, like in mathematics; so in
cases where the brackets surround all the text in the expression, the
parentheses are actually not needed, and the following is also equivalent:

It is important to remember that while in C-style languages, the


function
Section 73.4: Parentheses in
embedded function calls
In the previous example, we didn't end up needing the parentheses, because
they did not affect the meaning of the statement. However, they are often
necessary in more complex expression, like the one below. In C:

In Haskell this becomes:

Note, that this is not equivalent to:

One might think that because the compiler knows that take is a function, it
would be able to know that you want to apply it to the arguments b and c, and
pass its result to plus.
However, in Haskell, functions often take other functions as arguments, and
little actual distinction is made between functions and other values; and so
the compiler cannot assume your intention simply because take is a function.
And so, the last example is analogous to the following C function call:
Chapter 74: Logging
Logging in Haskell is achieved usually through functions in the IO monad,
and so is limited to non-pure functions or "IO actions".
There are several ways to log information in a Haskell program: from
putStrLn (or print), to libraries such as hslogger or through Debug.Trace.
Section 74.1: Logging with hslogger
The hslogger module provides a similar API to Python's logging framework, and
supports hierarchically named loggers, levels and redirection to handles
outside of stdout and stderr.
By default, all messages of level WARNING and above are sent to stderr and all
other log levels are ignored.

We can set the level of a logger by its name using updateGlobalLogger:
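A minimal sketch (the logger names are illustrative):

    import System.Log.Logger

    main :: IO ()
    main = do
      warningM "MyProgram.Module" "printed to stderr by default"
      infoM    "MyProgram.Module" "ignored by default (below WARNING)"

      updateGlobalLogger "MyProgram.Module" (setLevel INFO)
      infoM    "MyProgram.Module" "now printed"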

Each Logger has a name, and they are arranged hierarchically, so MyProgram is
a parent of MyParent.Module.
Chapter 75: Attoparsec
Type - Detail

    Parser i a  - The core type for representing a parser. i is the string type, e.g. ByteString.
    IResult i r - The result of a parse, with Fail i [String] String, Partial (i -> IResult i r) and Done i r as constructors.

Attoparsec is a parsing combinator library that is "aimed particularly at


dealing efficiently with network protocols and complicated text/binary file
formats".
Attoparsec offers not only speed and efficiency, but backtracking and
incremental input.
Its API closely mirrors that of another parser combinator library, Parsec.
There are submodules for compatibility with ByteString, Text and Char8. Use of
the OverloadedStrings language extension is recommended.
Section 75.1: Combinators
Parsing input is best achieved through larger parser functions that are
composed of smaller, single purpose ones.
Let's say we wished to parse the following text which represents working
hours:

Monday: 0800 1600.


We could split these into two "tokens": the day name -- "Monday" -- and a
time portion "0800" to "1600".
To parse a day name, we could write the following:

    data Day = Day String

    day :: Parser Day
    day = do
      name <- takeWhile1 (/= ':')
      skipMany1 (char ':')
      skipSpace
      return $ Day name

To parse the time portion we could write:

    data TimePortion = TimePortion String String

    time = do
      start <- takeWhile1 isDigit
      skipSpace
      end <- takeWhile1 isDigit
      return $ TimePortion start end

Now we have two parsers for our individual parts of the text, we can combine these in a "larger" parser to read an entire day's working hours:

    data WorkPeriod = WorkPeriod Day TimePortion

    work = do
      d <- day
      t <- time
      return $ WorkPeriod d t

and then run the parser:

    parseOnly work "Monday: 0800 1600"
Section 75.2: Bitmap -
Parsing Binary Data
Attoparsec makes parsing binary data trivial. Assuming these definitions:

    import Data.Attoparsec.ByteString (Parser, eitherResult, parse, take)
    import Data.Binary.Get (getWord32le, runGet)
    import Data.ByteString (ByteString, readFile)

We can parse the header from a bitmap file easily. Here, we have 4 parser
functions that represent the header section from a bitmap file:
Firstly, the DIB section can be read by taking the first 2 bytes

Similarly, the size of the bitmap, the reserved sections and the pixel offset
can be read easily too:

which can then be combined into a larger parser function for the entire
header:
Chapter 76: zipWithM
zipWithM is to zipWith as mapM is to map: it lets you combine two lists using a monadic function. It comes from the module Control.Monad.
Section 76.1: Calculating sales prices
Suppose you want to see if a certain set of sales prices makes sense for a
store.
The items originally cost $5, so you don't want to accept the sale if the sales
price is less for any of them, but you do want to know what the new price is
otherwise.
Calculating one price is easy: you calculate the sales price, and return Nothing if you don't get a profit:

    calculateOne :: Double -> Double -> Maybe Double
    calculateOne price percent =
        let newPrice = price * (percent / 100)
        in  if newPrice < 5 then Nothing else Just newPrice

To calculate it for the entire sale, zipWithM makes it really simple:

    calculateAllPrices :: [Double] -> [Double] -> Maybe [Double]
    calculateAllPrices prices percents = zipWithM calculateOne prices percents

This will return Nothing if any of the sales prices are below $5.
Chapter 77: Profunctor
Profunctor is a typeclass provided by the profunctors package in Data.Profunctor.
See the "Remarks" section for a full
explanation.
Section 77.1: (->)
Profunctor
(->) is a simple example of a profunctor: the left argument is the input to a
function, and the right argument is the same as the reader functor instance.
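For functions, dimap pre post f is post . f . pre, for example:

    import Data.Profunctor

    halveShow :: Int -> String
    halveShow = dimap (* 2) show (`div` 4)

    -- halveShow 6 == "3"   (pre-process with (* 2), apply (`div` 4), post-process with show)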
Chapter 78: Type Application
TypeApplications are an alternative to type annotations when the compiler struggles to infer types for a given expression.
This series of examples will explain the purpose of the TypeApplications
extension and how to use it
Don't forget to enable the extension by placing {-# LANGUAGE TypeApplications #-}
at the top of your source file.
Section 78.1: Avoiding type
annotations
We use type annotations to avoid ambiguity. Type applications can be used
for the same purpose. For example

This code has an ambiguity error. We know that a has a Num instance, and in
order to print it we know it needs a
Show instance. This could work if a was, for example, an Int, so to fix the error
we can add a type annotation

Another solution using type applications would look like this

To understand what this means we need to look at the type signature of print.

The function takes one parameter of type a, but another way to look at it is
that it actually takes two parameters. The first one is a type parameter, the
second one is a value whose type is the first parameter.
The main difference between value parameters and the type parameters is
that the latter ones are implicitly provided to functions when we call them.
Who provides them? The type inference algorithm! What TypeApplications let us
do is give those type parameters explicitly. This is especially useful when the
type inference can't determine the correct type.
So to break down the above example
Section 78.2: Type applications in
other languages
If you're familiar with languages like Java, C# or C++ and the concept of
generics/templates then this comparison might be useful for you.
Say we have a generic function in C#

DoNothing ( 5.0fthis
To call DoNothing < float
function >( 5.0f
with a float
we can do ) or if we want to be
explicit we can say ). That part inside of the
angle brackets is the type application.
In Haskell it's the same, except that the type parameters are not only implicit
at call sites but also at definition sites.

This can also be made explicit using either ScopedTypeVariables, Rank2Types or


RankNTypes extensions like this.

doNothing or doNothing @ Float 5.0Then at


5.0
the call
site we can again either write
Section 78.3: Order
of parameters
The problem with type arguments being implicit becomes obvious once we
have more than one. Which order do they come in?

Does writing const @Int mean a is equal to Int, or is it b? In case we explicitly state the type parameters using a forall, like const :: forall a b. a -> b -> a, then the order is as written: a, then b.
If we don't, then the order of variables is from left to right. The first variable
to be mentioned is the first type parameter, the second is the second type
parameter and so on.
What if we want to specify the second type variable, but not the first? We can
use a wildcard for the first variable like this

The type of this expression is


Section 78.4: Interaction with
ambiguous types
Say you're introducing a class of types that have a size in bytes.

The problem is that the size should be constant for every value of that type. We don't actually want the sizeOf function to depend on a, but only on its type.
Without type applications, the best solution we had was the Proxy type defined
like this

The purpose of this type is to carry type information, but no value


information. Then our class could look like this

Now you might be wondering, why not drop the first argument altogether? The type of our function would then just be sizeOf :: Int or, to be more precise (because it is a method of a class), sizeOf :: SizeOf a => Int, or to be even more explicit, sizeOf :: forall a. SizeOf a => Int.
The problem is type inference. If I write sizeOf somewhere, the inference algorithm only knows that I expect an Int. It has no idea what type I want to substitute for a. Because of this, the definition gets rejected by the compiler unless you have the {-# LANGUAGE AllowAmbiguousTypes #-} extension enabled. In that case the definition compiles, it just can't be used anywhere without an ambiguity error.
Luckily, the introduction of type applications saves the day! Now we can write sizeOf @Int, explicitly saying that a is Int. Type applications allow us to provide a type parameter, even if it doesn't appear in the actual parameters of the function!
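A sketch putting it together (the instance sizes are illustrative):

    {-# LANGUAGE AllowAmbiguousTypes, TypeApplications #-}

    class SizeOf a where
      sizeOf :: Int                -- no value argument: the size depends only on the type

    instance SizeOf Int  where sizeOf = 8
    instance SizeOf Bool where sizeOf = 1

    total :: Int
    total = sizeOf @Int + sizeOf @Bool   -- 9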
