Haskell Made Easy
About
Chapter 1: Getting started with Haskell Language
Section 1.1: Getting started
Section 1.2: Hello, World!
Section 1.3: Factorial
Section 1.4: Fibonacci, Using Lazy Evaluation
Section 1.5: Primes
Section 1.6: Declaring Values
Chapter 2: Overloaded Literals
Section 2.1: Strings
Section 2.2: Floating Numeral
Section 2.3: Integer Numeral
Section 2.4: List Literals
Chapter 3: Foldable
Section 3.1: Definition of Foldable
Section 3.2: An instance of Foldable for a binary tree
Section 3.3: Counting the elements of a Foldable structure
Section 3.4: Folding a structure in reverse
Section 3.5: Flattening a Foldable structure into a list
Section 3.6: Performing a side-effect for each element of a Foldable
structure
Section 3.7: Flattening a Foldable structure into a Monoid
Section 3.8: Checking if a Foldable structure is empty
Chapter 4: Traversable
Section 4.1: Definition of Traversable
Section 4.2: Traversing a structure in reverse
Section 4.3: An instance of Traversable for a binary tree
Section 4.4: Traversable structures as shapes with contents
Section 4.5: Instantiating Functor and Foldable for a Traversable structure
Section 4.6: Transforming a Traversable structure with the aid of an
accumulating parameter
Section 4.7: Transposing a list of lists
Chapter 5: Lens
Section 5.1: Lenses for records
Section 5.2: Manipulating tuples with Lens
Section 5.3: Lens and Prism
Section 5.4: Stateful Lenses
Section 5.5: Lenses compose
Section 5.6: Writing a lens without Template Haskell
Section 5.7: Fields with makeFields
Section 5.8: Classy Lenses
Section 5.9: Traversals
Chapter 6: QuickCheck
Section 6.1: Declaring a property
Section 6.2: Randomly generating data for custom types
Section 6.3: Using implication (==>) to check properties with
preconditions
Section 6.4: Checking a single property
Section 6.5: Checking all the properties in a file
Section 6.6: Limiting the size of test data
Chapter 7: Common GHC Language Extensions
Section 7.1: RankNTypes
Section 7.2: OverloadedStrings
Section 7.3: BinaryLiterals
Section 7.4: ExistentialQuantification
Section 7.5: LambdaCase
Section 7.6: FunctionalDependencies
Section 7.7: FlexibleInstances
Section 7.8: GADTs
Section 7.9: TupleSections
Section 7.10: OverloadedLists
Section 7.11: MultiParamTypeClasses
Section 7.12: UnicodeSyntax
Section 7.13: PatternSynonyms
Section 7.14: ScopedTypeVariables
Section 7.15: RecordWildCards
Chapter 8: Free Monads
Section 8.1: Free monads split monadic computations into data structures
and interpreters
Section 8.2: The Freer monad
Section 8.3: How do foldFree and iterM work?
Section 8.4: Free Monads are like fixed points
Chapter 9: Type Classes
Section 9.1: Eq
Section 9.2: Monoid
Section 9.3: Ord
Section 9.4: Num
Section 9.5: Maybe and the Functor Class
Section 9.6: Type class inheritance: Ord type class
Chapter 10: IO
Section 10.1: Getting the 'a' "out of" 'IO a'
Section 10.2: IO defines your program's `main` action
Section 10.3: Checking for end-of-file conditions
Section 10.4: Reading all contents of standard input into a string
Section 10.5: Role and Purpose of IO
Section 10.6: Writing to stdout
Section 10.7: Reading words from an entire file
Section 10.8: Reading a line from standard input
Section 10.9: Reading from `stdin`
Section 10.10: Parsing and constructing an object from standard input
Section 10.11: Reading from file handles
Chapter 11: Record Syntax
Section 11.1: Basic Syntax
Section 11.2: Defining a data type with field labels
Section 11.3: RecordWildCards
Section 11.4: Copying Records while Changing Field Values
Section 11.5: Records with newtype
Chapter 12: Partial Application
Section 12.1: Sections
Section 12.2: Partially Applied Adding Function
Section 12.3: Returning a Partially Applied Function
Chapter 13: Monoid
Section 13.1: An instance of Monoid for lists
Section 13.2: Collapsing a list of Monoids into a single value
Section 13.3: Numeric Monoids
Section 13.4: An instance of Monoid for ()
Chapter 14: Category Theory
Section 14.1: Category theory as a system for organizing abstraction
Section 14.2: Haskell types as a category
Section 14.3: Definition of a Category
Section 14.4: Coproduct of types in Hask
Section 14.5: Product of types in Hask
Section 14.6: Haskell Applicative in terms of Category Theory
Chapter 15: Lists
Section 15.1: List basics
Section 15.2: Processing lists
Section 15.3: Ranges
Section 15.4: List Literals
Section 15.5: List Concatenation
Section 15.6: Accessing elements in lists
Section 15.7: Basic Functions on Lists
Section 15.8: Transforming with `map`
Section 15.9: Filtering with `filter`
Section 15.10: foldr
Section 15.11: Zipping and Unzipping Lists
Section 15.12: foldl
Chapter 16: Sorting Algorithms
Section 16.1: Insertion Sort
Section 16.2: Permutation Sort
Section 16.3: Merge Sort
Section 16.4: Quicksort
Section 16.5: Bubble sort
Section 16.6: Selection sort
Chapter 17: Type Families
Section 17.1: Datatype Families
Section 17.2: Type Synonym Families
Section 17.3: Injectivity
Chapter 18: Monads
Section 18.1: Definition of Monad
Section 18.2: No general way to extract value from a monadic
computation
Section 18.3: Monad as a Subclass of Applicative
Section 18.4: The Maybe monad
Section 18.5: IO monad
Section 18.6: List Monad
Section 18.7: do-notation
Chapter 19: Stack
Section 19.1: Profiling with Stack
Section 19.2: Structure
Section 19.3: Build and Run a Stack Project
Section 19.4: Viewing dependencies
Section 19.5: Stack install
Section 19.6: Installing Stack
Section 19.7: Creating a simple project
Section 19.8: Stackage Packages and changing the LTS (resolver) version
Chapter 20: Generalized Algebraic Data Types
Section 20.1: Basic Usage
Chapter 21: Recursion Schemes
Section 21.1: Fixed points
Section 21.2: Primitive recursion
Section 21.3: Primitive corecursion
Section 21.4: Folding up a structure one layer at a time
Section 21.5: Unfolding a structure one layer at a time
Section 21.6: Unfolding and then folding, fused
Chapter 22: Data.Text
Section 22.1: Text Literals
Section 22.2: Checking if a Text is a substring of another Text
Section 22.3: Stripping whitespace
Section 22.4: Indexing Text
Section 22.5: Splitting Text Values
Section 22.6: Encoding and Decoding Text
Chapter 23: Using GHCi
Section 23.1: Breakpoints with GHCi
Section 23.2: Quitting GHCi
Section 23.3: Reloading an already loaded file
Section 23.4: Starting GHCi
Section 23.5: Changing the GHCi default prompt
Section 23.6: The GHCi configuration file
Section 23.7: Loading a file
Section 23.8: Multi-line statements
Chapter 24: Strictness
Section 24.1: Bang Patterns
Section 24.2: Lazy patterns
Section 24.3: Normal forms
Section 24.4: Strict fields
Chapter 25: Syntax in Functions
Section 25.1: Pattern Matching
Section 25.2: Using where and guards
Section 25.3: Guards
Chapter 26: Functor
Section 26.1: Class Definition of Functor and Laws
Section 26.2: Replacing all elements of a Functor with a single value
Section 26.3: Common instances of Functor
Section 26.4: Deriving Functor
Section 26.5: Polynomial functors
Section 26.6: Functors in Category Theory
Chapter 27: Testing with Tasty
Section 27.1: SmallCheck, QuickCheck and HUnit
Chapter 28: Creating Custom Data Types
Section 28.1: Creating a data type with value constructor parameters
Section 28.2: Creating a data type with type parameters
Section 28.3: Creating a simple data type
Section 28.4: Custom data type with record parameters
Chapter 29: Reactive-banana
Section 29.1: Injecting external events into the library
Section 29.2: Event type
Section 29.3: Actuating EventNetworks
Section 29.4: Behavior type
Chapter 30: Optimization
Section 30.1: Compiling your Program for Profiling
Section 30.2: Cost Centers
Chapter 31: Concurrency
Section 31.1: Spawning Threads with `forkIO`
Section 31.2: Communicating between Threads with `MVar`
Section 31.3: Atomic Blocks with Software Transactional Memory
Chapter 32: Function composition
Section 32.1: Right-to-left composition
Section 32.2: Composition with binary function
Section 32.3: Left-to-right composition
Chapter 33: Databases
Section 33.1: Postgres
Chapter 34: Data.Aeson - JSON in Haskell
Section 34.1: Smart Encoding and Decoding using Generics
Section 34.2: A quick way to generate a Data.Aeson.Value
Section 34.3: Optional Fields
Chapter 35: Higher-order functions
Section 35.1: Basics of Higher Order Functions
Section 35.2: Lambda Expressions
Section 35.3: Currying
Chapter 36: Containers - Data.Map
Section 36.1: Importing the Module
Section 36.2: Monoid instance
Section 36.3: Constructing
Section 36.4: Checking If Empty
Section 36.5: Finding Values
Section 36.6: Inserting Elements
Section 36.7: Deleting Elements
Chapter 37: Fixity declarations
Section 37.1: Associativity
Section 37.2: Binding precedence
Section 37.3: Example declarations
Chapter 38: Web Development
Section 38.1: Servant
Section 38.2: Yesod
Chapter 39: Vectors
Section 39.1: The Data.Vector Module
Section 39.2: Filtering a Vector
Section 39.3: Mapping (`map`) and Reducing (`fold`) a Vector
Section 39.4: Working on Multiple Vectors
Chapter 40: Cabal
Section 40.1: Working with sandboxes
Section 40.2: Install packages
Chapter 41: Type algebra
Section 41.1: Addition and multiplication
Section 41.2: Functions
Section 41.3: Natural numbers in type algebra
Section 41.4: Recursive types
Section 41.5: Derivatives
Chapter 42: Arrows
Section 42.1: Function compositions with multiple channels
Chapter 43: Typed holes
Section 43.1: Syntax of typed holes
Section 43.2: Semantics of typed holes
Section 43.3: Using typed holes to define a class instance
Chapter 44: Rewrite rules (GHC)
Section 44.1: Using rewrite rules on overloaded functions
Chapter 45: Date and Time
Section 45.1: Finding Today's Date
Section 45.2: Adding, Subtracting and Comparing Days
Chapter 46: List Comprehensions
Section 46.1: Basic List Comprehensions
Section 46.2: Do Notation
Section 46.3: Patterns in Generator Expressions
Section 46.4: Guards
Section 46.5: Parallel Comprehensions
Section 46.6: Local Bindings
Section 46.7: Nested Generators
Chapter 47: Streaming IO
Section 47.1: Streaming IO
Chapter 48: Google Protocol Buffers
Section 48.1: Creating, building and using a simple .proto file
Chapter 49: Template Haskell & QuasiQuotes
Section 49.1: Syntax of Template Haskell and Quasiquotes
Section 49.2: The Q type
Section 49.3: An n-arity curry
Chapter 50: Phantom types
Section 50.1: Use Case for Phantom Types: Currencies
Chapter 51: Modules
Section 51.1: Defining Your Own Module
Section 51.2: Exporting Constructors
Section 51.3: Importing Specific Members of a Module
Section 51.4: Hiding Imports
Section 51.5: Qualifying Imports
Section 51.6: Hierarchical module names
Chapter 52: Tuples (Pairs, Triples, ...)
Section 52.1: Extract tuple components
Section 52.2: Strictness of matching a tuple
Section 52.3: Construct tuple values
Section 52.4: Write tuple types
Section 52.5: Pattern Match on Tuples
Section 52.6: Apply a binary function to a tuple (uncurrying)
Section 52.7: Apply a tuple function to two arguments (currying)
Section 52.8: Swap pair components
Chapter 53: Graphics with Gloss
Section 53.1: Installing Gloss
Section 53.2: Getting something on the screen
Chapter 54: State Monad
Section 54.1: Numbering the nodes of a tree with a counter
Chapter 55: Pipes
Section 55.1: Producers
Section 55.2: Connecting Pipes
Section 55.3: Pipes
Section 55.4: Running Pipes with runEffect
Section 55.5: Consumers
Section 55.6: The Proxy monad transformer
Section 55.7: Combining Pipes and Network communication
Chapter 56: Infix operators
Section 56.1: Prelude
Section 56.2: Finding information about infix operators
Section 56.3: Custom operators
Chapter 57: Parallelism
Section 57.1: The Eval Monad
Section 57.2: rpar
Section 57.3: rseq
Chapter 58: Parsing HTML with taggy-lens and lens
Section 58.1: Filtering elements from the tree
Section 58.2: Extract the text contents from a div with a particular id
Chapter 59: Foreign Function Interface
Section 59.1: Calling C from Haskell
Section 59.2: Passing Haskell functions as callbacks to C code
Chapter 60: Gtk3
Section 60.1: Hello World in Gtk
Chapter 61: Monad Transformers
Section 61.1: A monadic counter
Chapter 62: Bifunctor
Section 62.1: Definition of Bifunctor
Section 62.2: Common instances of Bifunctor
Section 62.3: first and second
Chapter 63: Proxies
Section 63.1: Using Proxy
Section 63.2: The "polymorphic proxy" idiom
Section 63.3: Proxy is like ()
Chapter 64: Applicative Functor
Section 64.1: Alternative definition
Section 64.2: Common instances of Applicative
Chapter 65: Common monads as free monads
Section 65.1: Free Empty ~~ Identity
Section 65.2: Free Identity ~~ (Nat,) ~~ Writer Nat
Section 65.3: Free Maybe ~~ MaybeT (Writer Nat)
Section 65.4: Free (Writer w) ~~ Writer [w]
Section 65.5: Free (Const c) ~~ Either c
Section 65.6: Free (Reader x) ~~ Reader (Stream x)
Chapter 66: Common functors as the base of cofree comonads
Section 66.1: Cofree Empty ~~ Empty
Section 66.2: Cofree (Const c) ~~ Writer c
Section 66.3: Cofree Identity ~~ Stream
Section 66.4: Cofree Maybe ~~ NonEmpty
Section 66.5: Cofree (Writer w) ~~ WriterT w Stream
Section 66.6: Cofree (Either e) ~~ NonEmptyT (Writer e)
Section 66.7: Cofree (Reader x) ~~ Moore x
Chapter 67: Arithmetic
Section 67.1: Basic examples
Section 67.2: `Could not deduce (Fractional Int) ...`
Section 67.3: Function examples
Chapter 68: Role
Section 68.1: Nominal Role
Section 68.2: Representational Role
Section 68.3: Phantom Role
Chapter 69: Arbitrary-rank polymorphism with RankNTypes
Section 69.1: RankNTypes
Chapter 70: GHCJS
Section 70.1: Running "Hello World!" with Node.js
Chapter 71: XML
Section 71.1: Encoding a record using the `xml` library
Chapter 72: Reader / ReaderT
Section 72.1: Simple demonstration
Chapter 73: Function call syntax
Section 73.1: Partial application - Part 1
Section 73.2: Partial application - Part 2
Section 73.3: Parentheses in a basic function call
Section 73.4: Parentheses in embedded function calls
Chapter 74: Logging
Section 74.1: Logging with hslogger
Chapter 75: Attoparsec
Section 75.1: Combinators
Section 75.2: Bitmap - Parsing Binary Data
Chapter 76: zipWithM
Section 76.1: Calculating sales prices
Chapter 77: Profunctor
Section 77.1: (->) Profunctor
Chapter 78: Type Application
Section 78.1: Avoiding type annotations
Section 78.2: Type applications in other languages
Section 78.3: Order of parameters
Section 78.4: Interaction with ambiguous types
Chapter 1: Getting started with
Haskell Language
Section 1.1: Getting started
Online REPL
The easiest way to get started writing Haskell is probably by going to the
Haskell website or Try Haskell and use the online REPL (read-eval-print-
loop) on the home page. The online REPL supports most basic functionality
and even some IO. There is also a basic tutorial available, which can be
started by typing the command help. It is an ideal tool for learning the basics
of Haskell and trying things out.
GHC(i)
For programmers that are ready to engage a little bit more, there is GHCi, an
interactive environment that comes with the Glorious/Glasgow Haskell
Compiler. The GHC can be installed separately, but that is only a compiler.
In order to be able to install new libraries, tools like Cabal and Stack must be
installed as well. If you are running a Unix-like operating system, the easiest
installation is to install Stack using:
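The installer command itself is not preserved here; the standard way to run Stack's install script is:

    curl -sSL https://get.haskellstack.org/ | sh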
This installs GHC isolated from the rest of your system, so it is easy to
remove. All commands must be preceded by stack though. Another simple
approach is to install a Haskell Platform. The platform exists in two flavours:
1. The minimal distribution contains only GHC (to compile) and
Cabal/Stack (to install and build packages)
2. The full distribution additionally contains tools for project development,
profiling and coverage analysis. Also an additional set of widely-used
packages is included.
These platforms can be installed by downloading an installer and following
the instructions or by using your distribution's package manager (note that
this version is not guaranteed to be up-to-date):
Once installed, it should be possible to start GHCi by invoking the ghci
command anywhere in the terminal. If the installation went well, the console
should look something like
this will create object files and an executable if there were no errors and the
main function was defined correctly.
The first line is an optional type annotation, indicating that main is a value
of type IO (), representing an I/O action which "computes" a value of type ()
(read "unit"; the empty tuple conveying no information) besides performing
some side effects on the outside world (here, printing a string at the
terminal). This type annotation is usually omitted for main because it is its
only possible type.
Put this into a helloworld.hs file and compile it using a Haskell compiler, such as
GHC:
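Neither the listing nor the compile command survived extraction; the program in question is the classic one-liner:

    main :: IO ()
    main = putStrLn "Hello, World!"

Compiling it with, for example, ghc helloworld.hs produces the executable.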
Executing the compiled file will result in the output "Hello, World!" being
printed to the screen:
Alternatively, load scripts into ghci from a file using :load (or :l):
Values of type IO () describe actions which can interact with the outside
world.
Because Haskell has a fully-fledged Hindley-Milner type system
which allows for automatic type inference, type signatures are technically
optional: if you simply omit the main :: IO () line, the compiler will be able to infer the type
on its own by analyzing the definition of main. However, it is very much
considered bad style not to write type signatures for top-level definitions. The
reasons include:
Type signatures in Haskell are a very helpful piece of documentation
because the type system is so expressive that you often can see what sort
of thing a function is good for simply by looking at its type. This
“documentation” can be conveniently accessed with tools like GHCi.
And unlike normal documentation, the compiler's type checker will
make sure it actually matches the function definition!
Type signatures keep bugs local. If you make a mistake in a definition
without providing its type signature, the compiler may not immediately
report an error but instead simply infer a nonsensical type for it, with
which it actually typechecks. You may then get a cryptic error message
when using that value. With a signature, the compiler is very good at
spotting bugs right where they happen.
This second line does the actual work:
If you come from an imperative language, it may be helpful to note that this
definition can also be written as:
Or equivalently (Haskell has layout-based parsing; but beware mixing tabs
and spaces inconsistently which will confuse this mechanism):
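Sketches of those equivalent forms (assuming the putStrLn program above):

    main :: IO ()
    main = do
      putStrLn "Hello, World!"

    -- or, layout-insensitively, with explicit braces and semicolons:
    -- main = do { putStrLn "Hello, World!" }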
Section 1.3: Factorial
Integral is the class of integral number types. Examples include Int and
Integer.
The factorial function can be given the type fac :: Integral a => a -> a.
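The two listings did not survive extraction; the usual pair of definitions (named fac and fac' here so both can coexist) looks like this:

    fac :: Integral a => a -> a
    fac n = product [1..n]

    fac' :: Integral a => a -> a
    fac' 0 = 1                    -- stop condition
    fac' n = n * fac' (n - 1)     -- recursive case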
This variation uses pattern matching to split the function definition into
separate cases. The first definition is invoked if the argument is 0 (sometimes
called the stop condition) and the second definition otherwise (the order of
definitions is significant). It also exemplifies recursion as fac refers to itself.
It is worth noting that, due to rewrite rules, both versions of fac will compile
to identical machine code when using GHC with optimizations activated.
So, in terms of efficiency, the two would be equivalent.
Section 1.4: Fibonacci, Using Lazy
Evaluation
Lazy evaluation means Haskell will evaluate only list items whose values are
needed.
The basic recursive definition is:
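That definition, which the original listing presumably showed, is:

    fib :: Integer -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n - 1) + fib (n - 2)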
If evaluated directly, it will be very slow. But, imagine we have a list that
records all the results,
Then the list can be defined in terms of itself: after the initial 0 and 1, each
element is the sum of the corresponding elements of the list and of its own tail.
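The self-referential list the diagram illustrated is the well-known definition:

    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)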
scanl builds the list of partial results that foldl would produce, working from
left to right along the input list. That is, scanl f z0 [x1, x2, ...] is equal
to [z0, z1, z2, ...] where z1 = f z0 x1; z2 = f z1 x2; and so on.
Thanks to lazy evaluation, both functions define infinite lists without
computing them out entirely. That is, we can write a fib function, retrieving
the nth element of the unbounded Fibonacci sequence:
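Reusing the fibs list above, such a function is simply an index into it (this supersedes the naive fib shown earlier):

    fib :: Int -> Integer
    fib n = fibs !! n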
Section 1.5: Primes
A few of the most salient variants:
Below 100
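The listing is missing; a simple trial-division version consistent with this heading might be:

    primesTo100 :: [Int]
    primesTo100 = [n | n <- [2..100], all ((> 0) . rem n) [2 .. n - 1]]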
Unlimited
Sieve of Eratosthenes, using data-ordlist package:
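One well-known formulation, assuming the data-ordlist package's Data.List.Ordered module (the original listing is not preserved):

    import Data.List.Ordered (minus, unionAll)

    primes :: [Int]
    primes = 2 : minus [3..] (unionAll [[p*p, p*p+p ..] | p <- primes])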
Traditional
(a sub-optimal trial division sieve)
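The classic formulation that matches this description (a sketch, not necessarily the original listing) is:

    primes :: [Int]
    primes = sieve [2..]
      where sieve (p:xs) = p : sieve [x | x <- xs, rem x p /= 0]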
    -- ps = 2 : [n | n <- [3..], foldr (\p r -> p*p > n || (rem n p > 0 && r)) True ps]
Transitional
From trial division to sieve of Eratosthenes:
The Shortest Code:

    nubBy (((>1).).gcd) [2..]    -- i.e., nubBy (\a b -> gcd a b > 1) [2..]

nubBy is also from Data.List.
Section 1.6: Declaring Values
We can declare a series of expressions in the REPL like this:
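A small, illustrative GHCi session (not the original transcript) might look like:

    Prelude> let x = 5
    Prelude> let y = 2 * x + 3
    Prelude> y
    13
    Prelude> x + y
    18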
Chapter 2: Overloaded Literals
Section 2.1: Strings
The OverloadedStrings extension allows us to define values of string-like types without the need for any
explicit conversions. In essence, the OverloadedStrings extension just wraps
every string literal in the generic fromString conversion function, so if the
context demands e.g. the more efficient Text instead of String, you don't need
to worry about that yourself.
Using string literals
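A minimal sketch of what the missing listing presumably showed:

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Text (Text)
    import Data.ByteString (ByteString)

    a :: Text
    a = "some text"

    b :: ByteString
    b = "some bytes"

    c :: String
    c = "an ordinary String"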
Notice how we were able to construct values of Text and ByteString in the
same way we construct ordinary String (or [Char]) values, rather than using each
type's pack function to encode the string explicitly.
For more information on the OverloadedStrings language extension, see the
extension documentation.
Section 2.2: Floating Numeral
Note that this return type Int restricts the operations that can be performed
on values obtained by calls to the length function. fromIntegral is a useful
function that allows us to deal with this problem.
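For instance, a hypothetical average function cannot divide a Double by an Int directly, but fromIntegral bridges the gap:

    average :: [Double] -> Double
    average xs = sum xs / fromIntegral (length xs)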
Section 3.4: Folding a structure in
reverse
Any fold can be run in the opposite direction with the help of the Dual
monoid, which flips an existing monoid so that aggregation goes backwards.
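A small sketch of the idea, using the list monoid:

    import Data.Monoid (Dual(..))

    reverseList :: [a] -> [a]
    reverseList = getDual . foldMap (Dual . (: []))
    -- reverseList [1,2,3] == [3,2,1]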
    traverse_ f = foldr (\x action -> f x *> action) (pure ())

sequenceA_ is defined as:
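In base it is, roughly:

    sequenceA_ = foldr (*>) (pure ())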
Moreover, when the Foldable is also a Functor, traverse_ and sequenceA_ have the
following relationship:
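Presumably the relationship referred to is:

    traverse_ f = sequenceA_ . fmap f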
Section 3.7: Flattening a Foldable
structure into a Monoid
foldMap maps each element of the Foldable structure to a Monoid, and then
combines them into a single value.
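For example, summing a list by mapping each element into the Sum monoid:

    import Data.Monoid (Sum(..))

    total :: Int
    total = getSum (foldMap Sum [1, 2, 3, 4])   -- 10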
foldMap and foldr can be defined in terms of one another, which means that
instances of Foldable need only give a definition for one of them.
Section 4.1: Definition of
Traversable
where the "contents" are the same as what you'd "visit" using a Foldable
instance.
Going one direction, from t a to Traversed t a, doesn't require anything but Functor
and Foldable.
The Traversable laws require that break . recombine and recombine . break are both
identity. Notably, this means that there are exactly the right number of elements
in contents to fill shape completely, with no left-overs.
Traversed t is Traversable itself. The implementation of traverse works by visiting
the elements using the list's instance of Traversable and then reattaching the
inert shape to the result.
Section 4.5: Instantiating Functor and
Foldable for a Traversable structure
Every Traversable structure can be made a Foldable Functor using the fmapDefault
and foldMapDefault functions found in Data.Traversable.
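A self-contained sketch of the pattern, assuming a simple binary Tree type:

    import Data.Traversable (fmapDefault, foldMapDefault)

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    instance Traversable Tree where
      traverse _ Leaf         = pure Leaf
      traverse f (Node l x r) = Node <$> traverse f l <*> f x <*> traverse f r

    -- Functor and Foldable are then recovered from traverse:
    instance Functor Tree  where fmap    = fmapDefault
    instance Foldable Tree where foldMap = foldMapDefault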
Section 4.6: Transforming a Traversable structure with the aid of an accumulating parameter

    mapAccumL, mapAccumR :: Traversable t => (a -> b -> (a, c)) -> a -> t b -> (a, t c)

The first argument is a folding function which takes the accumulator and an element
b and produces a new accumulator value a along with a new mapped element c; the
final result is the final accumulator value and the mapped structure.
These functions generalise fmap in that they allow the mapped values to
depend on what has happened earlier in the fold. They generalise foldl/foldr in
that they map the structure in place as well as reducing it to a value.
For example, tails can be implemented using mapAccumR and its sister inits can
be implemented using mapAccumL.
mapAccumL is implemented by traversing in the State applicative functor.
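A small illustration of mapAccumL carrying a running sum through a list:

    import Data.Traversable (mapAccumL)

    runningSums :: (Int, [Int])
    runningSums = mapAccumL (\acc x -> (acc + x, acc + x)) 0 [1, 2, 3]
    -- (6, [1, 3, 6])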
Section 4.7: Transposing a list of lists
Noting that zip transposes a tuple of lists into a list of tuples,
    sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
the idea is to use []'s Traversable and Applicative structure to deploy sequenceA as a
sort of n-ary zip, zipping all the inner lists together pointwise.
[]'s
default "prioritised choice" Applicative instance is not appropriate for our
use - we need a "zippy" Applicative. For this we use the ZipList newtype, found
in Control.Applicative.
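A sketch of the resulting transposition:

    import Control.Applicative (ZipList(..))

    transpose' :: [[a]] -> [[a]]
    transpose' = getZipList . sequenceA . map ZipList
    -- transpose' [[1,2,3],[4,5,6]] == [[1,4],[2,5],[3,6]]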
Creates a type class HasName, lens name for Person, and makes Person an instance
of HasName. Subsequent records will be added to the class as well:
Setting
Modifying
both Traversal
Section 5.3: Lens and Prism
A Lens' s a means that you can always find an a within any s. A Prism' s a means
that you can sometimes find that s actually just is an a, but sometimes it's
something else.
We have _1 :: Lens' (a, b) a because any tuple always has a first element. To be
more clear, we have _Just :: Prism' (Maybe a) a because a Maybe a is sometimes
actually an a value wrapped in Just, but sometimes it's Nothing. With this intuition,
some standard combinators can be interpreted in parallel to one another:
    view    :: Lens' s a  -> (s -> a)          -- "gets" the a out of the s
    set     :: Lens' s a  -> (a -> s -> s)     -- "sets" the a slot in s
    review  :: Prism' s a -> (a -> s)          -- "realizes" that an a could be an s
    preview :: Prism' s a -> (s -> Maybe a)    -- "attempts" to turn an s into an a
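For instance, with the tuple lens _1 and the Maybe prism _Just from Control.Lens (an illustrative sketch, not from the original text):

    import Control.Lens

    ex1 = view _1 (1, "one")                     -- 1
    ex2 = set  _1 10 (1, "one")                  -- (10, "one")
    ex3 = preview _Just (Just 'a')               -- Just 'a'
    ex4 = preview _Just (Nothing :: Maybe Char)  -- Nothing
    ex5 = review _Just 'a'                       -- Just 'a'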
Another way to think about it is that a value of type Lens' s a
demonstrates that s has the same structure as (r, a) for some unknown r. On
the other hand, Prism' s a demonstrates that s has the same structure as Either r a for
some r. The four functions above can all be written using just this knowledge.
Section 5.4: Stateful Lenses
Lens operators have useful variants that operate in stateful contexts. They are
obtained by replacing ~ with = in the operator name.
This works thanks to the associativity of &. The stateful version is clearer,
though.
We can write code that resembles classic imperative languages, while still
allowing us to use benefits of Haskell:
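A small sketch, assuming a hypothetical Point record with generated lenses x and y:

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens
    import Control.Monad.State

    data Point = Point { _x :: Int, _y :: Int } deriving Show
    makeLenses ''Point

    move :: State Point ()
    move = do
      x .= 10       -- stateful counterpart of .~ (set)
      y %= (+ 1)    -- stateful counterpart of %~ (modify)

    -- execState move (Point 0 0) == Point {_x = 10, _y = 1}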
Section 5.5: Lenses compose
If you have f :: Lens' a b and g :: Lens' b c then f . g is a Lens' a c, gotten by
following f first and then g. Notably:
Lenses compose as functions (really they just are functions)
If you think of the view functionality of Lens, it seems like data flows
"left to right" — this might feel backwards to your normal intuition for
function composition. On the other hand, it ought to feel natural if you
think of . notation the way it is used in OO languages.
More than just composing Lens with Lens, (.) can be used to compose nearly
any "Lens-like" type together. It's not always easy to see what the result is
since the type becomes tougher to follow, but you can use the lens chart to
figure it out. The composition x . y has the type of the least-upper-bound of
the types of both x and y in that chart.
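A quick illustration of plain Lens-with-Lens composition using the tuple lenses:

    import Control.Lens

    inner :: String
    inner = view (_1 . _2) ((1, "hello"), True)   -- "hello"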
Section 5.6: Writing a lens without
Template Haskell
To demystify Template Haskell, suppose you have
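The record itself is not preserved here; judging by the generated lens types below, it was presumably something like:

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    data Example a = Example { _foo :: Int, _bar :: a }
    makeLenses ''Example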
then makeLenses generates the lenses foo and bar with the types shown below.
There's nothing particularly magical going on, though. You can write these
yourself:
    foo :: Lens' (Example a) Int
    -- i.e. :: Functor f => (Int -> f Int) -> (Example a -> f (Example a)), expanding the alias
    foo wrap (Example foo bar) = fmap (\newFoo -> Example newFoo bar) (wrap foo)

    bar :: Lens (Example a) (Example b) a b
    -- i.e. :: Functor f => (a -> f b) -> (Example a -> f (Example b)), expanding the alias
    bar wrap (Example foo bar) = fmap (\newBar -> Example foo newBar) (wrap bar)
Essentially, you want to "visit" your lens' "focus" with the wrap function and
then rebuild the "entire" type.
Section 5.7: Fields with makeFields
(This example copied from this StackOverflow answer)
Let's say you have a number of different data types that all ought to have a
lens with the same name, in this case capacity. The makeFields splice will create a
class that accomplishes this without namespace conflicts.
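The data types from the referenced answer are not preserved; they presumably resembled:

    {-# LANGUAGE FlexibleInstances, FunctionalDependencies, TemplateHaskell #-}
    import Control.Lens

    data Foo = Foo { fooCapacity :: Int } deriving Show
    data Bar = Bar { barCapacity :: Double } deriving Show
    makeFields ''Foo
    makeFields ''Bar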
Then in ghci:
So what it's actually done is declared a class HasCapacity s a, where capacity is a
Lens' from s to a (a is fixed once s is known). It figured out the name
"capacity" by stripping off the (lowercased) name of the data type from the
field; I find it pleasant not to have to use an underscore on either the field
name or the lens name, since sometimes record syntax is actually what you
want. You can use makeFieldsWith and the various lensRules to have some
different options for calculating the lens names.
In case it helps, using ghci -ddump-splices Foo.hs:
So the first splice made the class HasCapacity and added an instance for Foo; the
second used the existing class and made an instance for Bar.
This also works if you import the HasCapacity class from another module;
makeFields can add more instances to the existing class and spread your types
out across multiple modules. But if you use it again in another module where
you haven't imported the class, it'll make a new class (with the same name),
and you'll have two separate overloaded capacity lenses that are not
compatible.
Section 5.8: Classy Lenses
In addition to the standard makeLenses function for generating Lenses,
Control.Lens.TH also offers the makeClassy function. makeClassy has the same type
and works in essentially the same way as makeLenses, with one key difference.
In addition to generating the standard lenses and traversals, if the type has no
arguments, it will also create a class describing all the datatypes which
possess the type as a field. For example
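(the example type is not preserved; a plausible stand-in, with hypothetical fields, is)

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    data Foo = Foo { _fooX :: Int, _fooY :: Int }
    makeClassy ''Foo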
will create
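roughly the following (shown as a sketch in comments, since the exact generated code depends on the lens version):

    -- class HasFoo t where
    --   foo  :: Lens' t Foo
    --   fooX :: Lens' t Int
    --   fooY :: Lens' t Int
    -- instance HasFoo Foo where foo = id; ...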
Section 5.9: Traversals
A Traversal' s a shows that s has 0-to-many as inside of it.
Any type t which is Traversable automatically has traverse :: Traversal (t a) a.
We can use a Traversal to set or map over all of these a values.
A f :: Lens' s a says there's exactly one a inside of s. A g :: Prism' a b says
there are either 0 or 1 bs in a.
Composing f . g gives us a Traversal' s b, because following f and then g shows
how there are 0-to-1 bs in s.
Chapter 6: QuickCheck
Section 6.1: Declaring a property
At its simplest, a property is a function which returns a Bool.
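For example, a property stating that reversing a list twice gives back the original list:

    prop_reverseInvolutive :: [Int] -> Bool
    prop_reverseInvolutive xs = reverse (reverse xs) == xs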
If you want to check that a property holds given that a precondition holds,
you can use the ==> operator. Note that if it's very unlikely for arbitrary inputs
to match the precondition, QuickCheck can give up early.

    prop_overlySpecific x y = x == 0 ==> x * y == 0
Section 6.4: Checking a single property
The quickCheck function tests a property on 100 random inputs.
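A minimal invocation, reusing the property from above:

    import Test.QuickCheck

    prop_reverseInvolutive :: [Int] -> Bool
    prop_reverseInvolutive xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseInvolutive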
Note that the return [] line is required. It makes definitions textually above
that line visible to Template Haskell.
Section 6.6: Limiting the size of test
data
It can be difficult to test functions with poor asymptotic complexity using
QuickCheck, as the random inputs are not usually size bounded. By adding an
upper bound on the size of the input we can still test these expensive
functions.
By using quickCheckWith with a modified version of stdArgs we can limit the size
of the inputs to be at most 10. In this case, as we are generating lists, this
means we generate lists of up to size 10. Our permutations function doesn't
take too long to run for these short lists but we can still be reasonably
confident that our definition is correct.
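A sketch of the idea using the library's own permutations (the original test is not preserved):

    import Data.List (permutations)
    import Test.QuickCheck

    -- permutations is factorially expensive, so keep the inputs small.
    prop_permCount :: [Int] -> Bool
    prop_permCount xs = length (permutations xs) == product [1 .. length xs]

    main :: IO ()
    main = quickCheckWith stdArgs { maxSize = 10 } prop_permCount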
Chapter 7: Common GHC
Language Extensions
Section 7.1: RankNTypes
Imagine the following situation:
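A reconstruction of the rejected first attempt (the names are assumptions; the point is that it does not compile):

    -- This will NOT type check:
    foo :: (a -> String) -> String -> Int -> IO ()
    foo show' str int = do
      putStrLn (show' str)
      putStrLn (show' int)   -- error: a has already been unified with String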
Here, we want to pass in a function that converts a value into a String, apply
that function to both a string parameter and and int parameter and print them
both. In my mind, there is no reason this should fail! We have a function that
works on both types of the parameters we're passing in.
Unfortunately, this won't type check! GHC infers the type a based on its
first occurrence in the function body. That is, as soon as we hit the first use of
show', a is fixed to String, so applying show' to the Int argument no longer typechecks.
RankNTypes lets you instead write the type signature as follows,
quantifying over all functions that satisfy the show' type:

    foo :: (forall a. Show a => (a -> String)) -> String -> Int -> IO ()
This is rank 2 polymorphism: we are asserting that the show' function must
work for all as within our function, and the previous implementation now
works.
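A sketch of the accepted version under that signature:

    {-# LANGUAGE RankNTypes #-}

    foo :: (forall a. Show a => a -> String) -> String -> Int -> IO ()
    foo show' str int = do
      putStrLn (show' str)
      putStrLn (show' int)

    -- e.g. foo show "hello" 3 prints both arguments via show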
The RankNTypes extension allows arbitrary nesting of forall ... blocks in type
signatures. In other words, it allows rank-N polymorphism.
Section 7.2: OverloadedStrings
Normally, string literals in Haskell have a type of String (which is a type
alias for [Char]). While this isn't a problem for smaller, educational programs,
real-world applications often require more efficient storage such as Text or
ByteString.
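The following paragraphs concern ExistentialQuantification and refer to an example type S that is not shown here; it was presumably defined along these lines:

    {-# LANGUAGE ExistentialQuantification #-}

    -- S wraps any value that has a Show instance.
    data S = forall a. Show a => S a

    showAll :: [S] -> [String]
    showAll xs = [show x | S x <- xs]

    -- showAll [S 1, S "hello", S True] == ["1","\"hello\"","True"]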
Existentials can be very powerful, but note that they are actually not
necessary very often in Haskell. In the example above, all you can actually
do with the Show instance is show (duh!) the values, i.e. create a string
representation. The entire S type therefore contains exactly as much
information as the string you get when showing it. Therefore, it is usually
better to simply store that string right away, especially since Haskell is lazy
and therefore the string will at first only be an unevaluated thunk anyway.
On the other hand, existentials cause some unique problems. For instance,
consider the way the type information is “hidden” in an existential. If you pattern-
match on an S value, you will have the contained type in scope (more
precisely, its Show instance), but this information can never escape its scope,
which therefore becomes a bit of a “secret society”: the compiler doesn't
let anything escape the scope except values whose type is already known
from the outside.
This can lead to strange errors like Couldn't match type ‘a0’ with ‘()’ ‘a0’ is untouchable.
Existential types are different from Rank-N types – these extensions are,
roughly speaking, dual to each other: to actually use values of an existential
type, you need a (possibly constrained-) polymorphic function, like show in
the example. A polymorphic function is universally quantified, i.e. it works
for any type in a given class, whereas existential quantification means it
works for some particular type which is a priori unknown. If you have a
polymorphic function, that's sufficient, however to pass polymorphic
functions as such as arguments, you need {-# LANGUAGE Rank2Types #-}:
Section 7.5: LambdaCase
A syntactic extension that lets you write \case in place of \arg -> case arg of.
Consider the following function definition:
If you want to avoid repeating the function name, you might write something
like:
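A small illustration of the extension (the original function is not preserved, so this one is invented):

    {-# LANGUAGE LambdaCase #-}

    dayKind :: Int -> String
    dayKind = \case
      0 -> "Sunday"
      6 -> "Saturday"
      _ -> "a weekday"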
When declaring an instance of such class, it will be checked against all other
instances to make sure that the functional dependency holds, that is, no other
instance with same a b c but different x exists.
You can specify multiple dependencies in a comma-separated list:
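A minimal sketch of a class with a functional dependency (names are illustrative):

    {-# LANGUAGE FunctionalDependencies #-}

    -- the result type c is uniquely determined by the argument types a and b
    class Add a b c | a b -> c where
      add :: a -> b -> c

    instance Add Int Int Int where
      add = (+)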
Although one might declare a constructor IntLit :: Int -> Expr a with the hope
that this will statically rule out non-well-typed conditionals, this will not behave
as expected, since the type of IntLit is universally quantified: for any choice of a,
it produces a value of type Expr a. In particular, for a ~ Bool, we have
IntLit :: Int -> Expr Bool, allowing us to construct something like
If (IntLit 1) e1 e2, which is exactly what the type of the If constructor was trying
to rule out.
Generalised Algebraic Data Types allow us to control the resulting type of a
data constructor so that they are not merely parametric. We can rewrite our
Expr type as a GADT like this:
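Presumably something along these lines (the exact constructors in the original are not preserved):

    {-# LANGUAGE GADTs #-}

    data Expr a where
      IntLit  :: Int -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int -> Expr Int -> Expr Int
      Not     :: Expr Bool -> Expr Bool
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a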
Here, the type of the constructor IntLit is Int -> Expr Int, and so
IntLit 1 :: Expr Bool will not typecheck.
Pattern matching on a GADT value causes refinement of the type of the term
returned. For example, it is possible to write an evaluator for Expr a like this:
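A sketch of such an evaluator, matching the constructors assumed above:

    eval :: Expr a -> a
    eval (IntLit n)  = n
    eval (BoolLit b) = b
    eval (Add x y)   = eval x + eval y
    eval (Not b)     = not (eval b)
    eval (If c t e)  = if eval c then eval t else eval e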
Note that we are able to use (+) in the above definitions because when e.g.
IntLit x is pattern matched, we also learn that a ~ Int (and likewise for not and
if_then_else_ when a ~ Bool).
Section 7.9: TupleSections
A syntactic extension that allows applying the tuple constructor (which is an
operator) as a section:
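For example (a sketch; the original listings are not preserved):

    {-# LANGUAGE TupleSections #-}

    pairWithTrue :: a -> (a, Bool)
    pairWithTrue = (, True)             -- shorthand for \x -> (x, True)

    triple :: b -> (Int, b, Char)
    triple = (1, , 'c')                 -- works for larger tuples too

    tagged :: [(Int, Bool)]
    tagged = map (, True) [1, 2, 3]     -- [(1,True),(2,True),(3,True)]
    -- without the extension: map (\x -> (x, True)) [1, 2, 3]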
N-tuples
It also works for tuples with arity greater than two
Mapping
This can be useful in other places where sections are used:
The above example without this extension would look like this:
Section 7.10: OverloadedLists
added in GHC 7.8.
OverloadedLists, similar to OverloadedStrings, allows list literals to be
desugared as follows:
This comes in handy when dealing with types such as Set, Vector and Map.
For example:
would become
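A sketch of what the desugaring buys you (the container types' IsList instances supply fromList):

    {-# LANGUAGE OverloadedLists #-}
    import qualified Data.Map as Map
    import qualified Data.Set as Set

    m :: Map.Map Char Int
    m = [('a', 1), ('b', 2)]     -- instead of Map.fromList [('a',1),('b',2)]

    s :: Set.Set Int
    s = [1, 2, 3]                -- instead of Set.fromList [1,2,3]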
Note that the * vs. ★ example is slightly different: since * isn't reserved, ★
also works the same way as * for multiplication, or any other function named
(*), and vice-versa. For example:
Section 7.13: PatternSynonyms
Pattern synonyms are abstractions of patterns similar to how functions are
abstractions of expressions.
For this example, let's look at the interface Data.Sequence exposes, and let's see
how it can be improved with pattern synonyms. The Seq type is a data type
that, internally, uses a complicated representation to achieve good asymptotic
complexity for various operations, most notably both O(1) (un)consing and
(un)snocing.
But this representation is unwieldy and some of its invariants cannot be
expressed in Haskell's type system. Because of this, the Seq type is exposed to
users as an abstract type, along with invariant-preserving accessor and
constructor functions, among them:
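A hypothetical bidirectional pattern synonym giving Seq a list-like "cons" interface (a sketch of the idea, not the article's exact definitions):

    {-# LANGUAGE PatternSynonyms, ViewPatterns #-}
    import Data.Sequence (Seq, ViewL(..), viewl, (<|))

    -- Cons x xs :: Seq a, matching and building the front of a sequence
    pattern Cons x xs <- (viewl -> x :< xs)
      where Cons x xs = x <| xs

    firstElem :: Seq a -> Maybe a
    firstElem (Cons x _) = Just x
    firstElem _          = Nothing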
The important thing is that we can use a, b and c to instruct the compiler in
subexpressions of the declaration (the tuple in the where clause and the first a
in the final result). In practice, ScopedTypeVariables assist in writing complex
functions as a sum of parts, allowing the programmer to add type signatures
to intermediate values that don't have concrete types.
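A small sketch of the mechanism (not the original example):

    {-# LANGUAGE ScopedTypeVariables #-}

    mkPair :: forall a. a -> (a, a)
    mkPair v = (copy, copy)
      where
        -- the a from the signature is in scope here thanks to the forall
        copy :: a
        copy = v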
Section 7.15: RecordWildCards
See RecordWildCards
Chapter 8: Free Monads
Section 8.1: Free monads split monadic
computations into data structures and
interpreters
For instance, a computation involving commands to read and write from the
prompt:
First we describe the "commands" of our computation as a Functor data type
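One common way to set this up, assuming the free package's Control.Monad.Free:

    {-# LANGUAGE DeriveFunctor #-}
    import Control.Monad.Free (Free, liftF)

    data TeletypeF next
      = PrintLine String next
      | ReadLine (String -> next)
      deriving Functor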
Then we use Free to create the "Free Monad over TeletypeF" and build some
basic operations.
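A sketch of those operations, together with a small program built from them:

    type Teletype = Free TeletypeF

    printLine :: String -> Teletype ()
    printLine s = liftF (PrintLine s ())

    readLine :: Teletype String
    readLine = liftF (ReadLine id)

    echo :: Teletype ()
    echo = do
      line <- readLine
      printLine line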
Since Free f is a Monad whenever f is a Functor, we can use the standard Monad
combinators (including do notation) to build Teletype computations.
Put :: s -> StateI s () -- the Put instruction contains an 's' as an argument and returns ()
Sequencing these instructions takes place with the :>>= constructor. :>>= takes
a single instruction returning an a and prepends it to the rest of the program,
piping its return value into the continuation. In other words, given an
instruction returning an a, and a function to turn an a into a program returning
a b, :>>= will produce a program returning a b.
Note that a is existentially quantified in the :>>= constructor. The only way for
an interpreter to learn what a is is by pattern matching on the GADT i.
Because CoYoneda i is a Functor for any i, Freer is a Monad for any i, even if i isn't
a Functor.
Now, how to interpret a Teletype computation that was built with the Free
constructor? We'd like to arrive at a
value of type IO a by examining teletypeF :: TeletypeF (Teletype a). To start with,
we'll write a function runIO :: TeletypeF a -> IO a which maps
a single layer of the free monad to an IO action:
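A sketch of runIO, matching the TeletypeF constructors assumed earlier:

    runIO :: TeletypeF a -> IO a
    runIO (PrintLine msg next) = putStrLn msg >> return next
    runIO (ReadLine cont)      = fmap cont getLine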
Now we can use runIO to fill in the rest of interpretTeletype. Recall that
teletypeF :: TeletypeF (Teletype a) is a layer of the TeletypeF functor which contains
the rest of the Free computation. We'll use runIO to interpret the outermost layer
(so we have runIO teletypeF :: IO (Teletype a)) and then use the IO monad's >>=
combinator to interpret the returned Teletype a.
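Which gives, roughly:

    import Control.Monad.Free (Free(..))

    interpretTeletype :: Teletype a -> IO a
    interpretTeletype (Pure x)        = return x
    interpretTeletype (Free teletypeF) = runIO teletypeF >>= interpretTeletype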
In particular, compare the type of the Free constructor with the type of the Fix
constructor. Free layers up a functor just like Fix, except that Free has an
additional Return a case.
Chapter 9: Type Classes
Typeclasses in Haskell are a means of defining the behaviour associated with
a type separately from that type's definition. Whereas, say, in Java, you'd
define the behaviour as part of the type's definition -- i.e. in an interface,
abstract class or concrete class -- Haskell keeps these two things separate.
There are a number of typeclasses already defined in Haskell's base
package. The relationship between these is illustrated in the Remarks
section below.
Section 9.1: Eq
All basic datatypes (like Int, String, and [a] for Eq a) from Prelude except for
functions and IO have instances of Eq. If a type instantiates Eq it means that we
know how to compare two values for value or structural equality.
Required methods
    (==) :: Eq a => a -> a -> Bool or (/=) :: Eq a => a -> a -> Bool
(if only one is implemented, the other defaults to the negation of the defined one)
Defines
    (==) :: Eq a => a -> a -> Bool
    (/=) :: Eq a => a -> a -> Bool
Direct superclasses
None
Notable subclasses
Ord
Section 9.2: Monoid
Types instantiating Monoid include lists, numbers, and functions with Monoid
return values, among others. To instantiate Monoid a type must support an
associative binary operation (mappend or (<>)) which combines its values, and
have a special "zero" value (mempty) such that combining a value with it does
not change that value:

    x <> mempty == x
    mempty <> x == x
Intuitively, Monoid types are "list-like" in that they support appending values
together. Alternatively, Monoid types can be thought of as sequences of values
for which we care about the order but not the grouping. For instance, a binary
tree is a Monoid, but using the Monoid operations we cannot witness its
branching structure, only a traversal of its values (see Foldable and Traversable).
Required methods

    mempty  :: Monoid a => a
    mappend :: Monoid a => a -> a -> a
Direct superclasses
None
Section 9.3: Ord
Types instantiating Ord include, e.g., Int, String, and [a] (for types a where
there's an Ord a instance). If a type instantiates Ord it means that we know a
“natural” ordering of values of that type. Note, there are often many
possible choices of the “natural” ordering of a type and Ord forces us to
favor one.
Ord provides the standard (<=), (<), (>=), (>) operators, but interestingly defines
them all using a custom algebraic data type:

    data Ordering = LT | EQ | GT
Required methods
    compare :: Ord a => a -> a -> Ordering or (<=) :: Ord a => a -> a -> Bool
(the default compare method uses (<=) in its implementation)
Defines
    compare :: Ord a => a -> a -> Ordering
    (<=) :: Ord a => a -> a -> Bool
    (<)  :: Ord a => a -> a -> Bool
    (>=) :: Ord a => a -> a -> Bool
    (>)  :: Ord a => a -> a -> Bool
    min  :: Ord a => a -> a -> a
    max  :: Ord a => a -> a -> a
Direct superclasses
Eq
Section 9.4: Num
The most general class for number types, more precisely for rings, i.e.
numbers that can be added and subtracted and multiplied in the usual sense,
but not necessarily divided.
This class contains both integral types (Int, Integer, Word32 etc.) and fractional
types (Double, Rational, also complex numbers etc.). In case of finite types, the
semantics are generally understood as modular arithmetic, i.e. with over- and
underflow † .
Note that the rules for the numerical classes are much less strictly obeyed
than the monad or monoid laws, or those for equality comparison. In
particular, floating-point numbers generally obey laws only in an approximate
sense.
The methods
fromInteger :: Num a => Integer -> a. Converts an integer to the general number
type (wrapping around the range, if necessary). Haskell number literals can be
understood as a monomorphic Integer literal with the general conversion around it,
so you can use the literal 5 in both an Int context and a Complex Double setting.
(+) :: Num a => a -> a -> a. Standard addition, generally understood as
associative and commutative, i.e. a + (b + c) == (a + b) + c and a + b == b + a.
For the most common instances, multiplication ((*)) is also commutative, but
this is definitely not a requirement.
negate :: Num a => a -> a
. The full name of the unary negation operator. -1 is
syntactic sugar for negate 1.
abs :: Num a => a -> a. The absolute-value function. For real types it's clear what
non-negative means: you always have abs a >= 0.
Complex etc. types don't have a well-defined ordering, however the
result of abs should always lie in the real subset‡ (i.e. give a number that
could also be written as a single number literal without negation).
signum :: Num a => a -> a . The sign
function, according to the name, yields only -1 or 1, depending on the
sign of the argument. Actually, that's only true for nonzero real numbers;
in general signum is better understood as the normalising function, satisfying abs x * signum x == x.
Note that section 6.4.4 of the Haskell 2010 Report explicitly requires this
last equality to hold for any valid Num instance.
Some libraries, notably linear and hmatrix, have a much laxer understanding
of what the Num class is for: they treat it just as a way to overload the
arithmetic operators. While this is pretty straightforward for + and -, it
already becomes troublesome with * and more so with the other methods. For
instance, should * mean matrix multiplication or element-wise
multiplication?
It is arguably a bad idea to define such non-number instances; please consider
dedicated classes such as VectorSpace.
† In particular, the “negatives” of unsigned types are wrapped around
to large positive values, e.g. (-4 :: Word32) == 4294967292.
‡ This is widely not fulfilled: vector types do not have a real
subset. The controversial Num-instances for such types generally define
abs and signum element-wise, which mathematically speaking doesn't
really make sense.
Section 9.5: Maybe and the Functor
Class
In Haskell, data types can have arguments just like functions. Take the Maybe
type for example.
Maybe is a very useful type which allows us to represent the idea of failure, or
the possiblity thereof. In other words, if there is a possibility that a
computation will fail, we use the Maybe type there. Maybe acts kind of like a
wrapper for other types, giving them additional functionality.
Its actual declaration is fairly simple.
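For reference, that declaration is:

    data Maybe a = Just a | Nothing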
What this tells us is that a Maybe comes in two forms: a Just, which represents
success, and a Nothing, which represents failure. Just takes one argument, which
determines the type of the Maybe, and Nothing takes none. For
example, the value Just "foo" will have type Maybe String, which is the String
type wrapped with the additional Maybe functionality. The value Nothing has type
Maybe a, where a can be any type.
This idea of wrapping types to give them additional functionality is a very
useful one, and is applicable to more than just Maybe. Other examples include
the Either, IO and list types, each providing different functionality. However,
there are some actions and abilities which are common to all of these wrapper
types. The most notable of these is the ability to modify the encapsulated
value.
It is common to think of these kinds of types as boxes which can have values
placed in them. Different boxes hold different values and do different things,
but none are useful without being able to access the contents within.
To encapsulate this idea, Haskell comes with a standard typeclass, named
Functor. It is defined as follows.
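That definition is:

    class Functor f where
      fmap :: (a -> b) -> f a -> f b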
As can be seen, the class has a single function, fmap, of two arguments. The
first argument is a function from one type, a, to another, b. The second
argument is a functor (wrapper type) containing a value of type a. It returns a
functor (wrapper type) containing a value of type b.
In simple terms, fmap takes a function and applies it to the value inside of a
functor. It is the only function necessary for a type to be a member of the
Functor class, but it is extremely useful. Functions operating on functors that
have more specific applications can be found in the Applicative and Monad
typeclasses.
Section 9.6: Type class inheritance:
Ord type class
Haskell supports a notion of class extension. For example, the class Ord
inherits all of the operations in Eq, but in addition has a compare function that
returns an Ordering between values. Ord may also contain the common order
comparison operators, as well as a min method and a max method.
The => notation has the same meaning as it does in a function signature and
requires type a to implement Eq in order to implement Ord.
Types that instantiate Ord must implement at least either the
compare method or the (<=) method themselves, which builds up the directed
inheritance lattice.
Chapter 10: IO
Section 10.1: Getting the 'a' "out of"
'IO a'
A common question is "I have a value of IO a, but I want to do something to
that a value: how do I get access to it?" How can one operate on data that
comes from the outside world (for example, incrementing a number typed by
the user)?
The point is that if you use a pure function on data obtained impurely,
then the result is still impure. It depends on what the user did! A value of
type IO a stands for a "side-effecting computation resulting in a value of type a"
which can only be run by (a) composing it into main and (b) compiling and
executing your program. For that reason, there is no way within the pure Haskell
world to "get the a out".
Instead, we want to build a new computation, a new IO value, which makes
use of the a value at runtime. This is another way of composing IO values and
so again we can use do-notation:
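A sketch using the names mentioned below (myComputation here is just a stand-in for some IO Int, e.g. readLn):

    getMessage :: Int -> String
    getMessage n = "The number was " ++ show n

    myComputation :: IO Int
    myComputation = return 42

    newComputation :: IO ()
    newComputation = do
      n <- myComputation        -- use the Int produced at runtime
      putStrLn (getMessage n)   -- apply the pure function to it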
Here we're using a pure function (getMessage) to turn an Int into a String, but
we're using do notation to make it be applied to the result of an IO
computation myComputation when (after) that computation runs. The result is a
bigger IO computation, newComputation. This technique of using pure functions
in an impure context is called lifting.
Section 10.2: IO defines your
program's `main` action
To make a Haskell program executable you must provide a file with a main
function of type IO ().
Hello world
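The listing is missing; it was presumably just:

    main :: IO ()
    main = putStrLn "Hello world!"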
When Haskell is compiled it examines the IO data here and turns it
into an executable. When we run this program it will print Hello world!.
If you have values of type IO a other than main, they won't do anything.
Compiling this program and running it will have the same effect as the last
example. The code in other is ignored.
In order to make the code in other have runtime effects you have to compose it
into main. IO values that are not eventually composed into main will have no
runtime effect. To compose two IO values sequentially you can use
do notation:
Note that the order of operations is described by how other was composed into
main and not the definition order.
Section 10.3: Checking for end-of-file
conditions
A bit counter-intuitive to the way most other languages' standard I/O libraries
do it, Haskell's isEOF does not require you to perform a read operation before
checking for an EOF condition; the runtime will do it for you.
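A sketch of the usual loop (the original listing is not preserved):

    import System.IO (isEOF)
    import Control.Monad (unless)

    main :: IO ()
    main = do
      end <- isEOF
      unless end $ do
        line <- getLine
        putStrLn line
        main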
Input:
Output:
Section 10.4: Reading all contents of
standard input into a string
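A sketch consistent with the note below (the exact listing is not preserved):

    import Data.Char (toUpper)

    main :: IO ()
    main = do
      contents <- getContents          -- lazily reads all of standard input
      putStrLn (map toUpper contents)  -- output appears as input is consumed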
Input:
Output:
Note: This program will actually print parts of the output before all of the
input has been fully read in. This means that, if, for example, you use
getContents over a 50MiB file, Haskell's lazy evaluation and garbage
collector will ensure that only the parts of the file that are currently needed
(read: indispensable for further execution) will be loaded into memory.
Thus, the 50MiB file won't be loaded into memory at once.
Section 10.5: Role and Purpose of IO
Haskell is a pure language, meaning that expressions cannot have side
effects. A side effect is anything that the expression or function does other
than produce a value, for example, modify a global counter or print to
standard output.
In Haskell, side-effectful computations (specifically, those which can
have an effect on the real world) are modelled using IO. Strictly speaking, IO
is a type constructor, taking a type and producing a type. For example, IO Int is the
type of an I/O computation producing an Int value. The IO type is abstract,
and the interface provided for IO ensures that certain illegal values (that is,
functions with non-sensical types) cannot exist, by ensuring that all builtin
functions which perform IO have a return type enclosed in IO.
When a Haskell program is run, the computation represented by the Haskell
value named main, whose type can be IO x for any type x, is executed.
Manipulating IO values
There are many functions in the standard library providing typical IO actions
that a general purpose programming language should perform, such as
reading and writing to file handles. General IO actions are created and
combined primarily with two functions:
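Specialised to IO, their types are:

    (>>=)  :: IO a -> (a -> IO b) -> IO b
    return :: a -> IO a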
This function (typically called bind) takes an IO action and a function which
returns an IO action, and produces the IO action which is the result of applying
the function to the value produced by the first IO action.
This function takes any value (i.e., a pure value) and returns the IO
computation which does no IO and produces the given value. In other words,
it is a no-op I/O action.
There are additional general functions which are often used, but all can be
written in terms of the two above. For example, (>>) :: IO a -> IO b -> IO b is
similar to (>>=), but the result of the first action is ignored.
A simple program greeting the user using these functions:
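A sketch of such a greeting program, written with >>= and >> directly:

    main :: IO ()
    main =
      putStrLn "What is your name?" >>
      getLine >>= \name ->
      putStrLn ("Hello, " ++ name ++ "!")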
This program also uses putStrLn :: String -> IO () and getLine :: IO String.
Note: the types of certain functions above are actually more general than
those types given (namely >>=, >> and return).
IO semantics
The IO type in Haskell has very similar semantics to that of imperative
programming languages. For example, when one writes s1 ; s2 in an
imperative language to indicate executing statement s1, then statement s2, one
can write s1 >> s2 to model the same thing in Haskell.
However, the semantics of IO diverge slightly from what would be expected
coming from an imperative background. The return function does not interrupt
control flow; it has no effect on the program if another IO action is run in
sequence. For example, return () >> putStrLn "boom" correctly prints "boom"
to standard output.
The formal semantics of IO can be given in terms of simple equalities involving
the functions in the previous section:
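Those equalities are the standard monad laws, specialised to IO:

    return x >>= f    =  f x                       -- left identity
    m >>= return      =  m                         -- right identity
    (m >>= f) >>= g   =  m >>= (\x -> f x >>= g)   -- composition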
These laws are typically referred to as left identity, right identity, and
composition, respectively. They can be stated more naturally in terms of the
function
as follows:
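That function is Kleisli composition, (>=>) from Control.Monad, in terms of which the laws read:

    (f >=> g) x      =  f x >>= g

    return >=> f     =  f
    f >=> return     =  f
    (f >=> g) >=> h  =  f >=> (g >=> h)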
Lazy IO
Functions performing I/O computations are
typically strict, meaning that all preceding actions in a sequence of actions
must be completed before the next action is begun. Typically this is useful
and expected behaviour: putStrLn "X" >> putStrLn "Y" should print "XY".
However, certain library functions perform I/O lazily, meaning
functions perform I/O lazily, meaning
that the I/O actions required to produce the value are only performed when
the value is actually consumed. Examples of such functions are getContents and
readFile. Lazy I/O can drastically reduce the performance of a Haskell
program, so when using library functions, care should be taken to note which
functions are lazy.
IO and do notation
Haskell provides a simpler method of combining different IO values into
larger IO values. This special syntax is known as do notation and is simply
syntactic sugar for usages of the >>=, >> and return functions.
The program in the previous section can be written in two different ways
using do notation, the first being layout-sensitive and the second being
layout-insensitive:
putStrLn :: String -> IO () - writes a String to stdout and adds a new line
Recall that you can instantiate Show for your own types using deriving:
Here, mapM_ went through the list of all words in the file, and printed each of
them to a separate line with putStrLn.
Input:
Output:
Section 10.9: Reading from `stdin`
As-per the Haskell 2010 Language Specification, the following are standard
IO functions available in Prelude, so no imports are required to use them.
Input:
Output:
Section 10.11: Reading from file
handles
Like in several other parts of the I/O library, functions that implicitly use a
standard stream have a counterpart in System.IO that performs the same job, but
with an extra parameter at the left, of type Handle, that represents the stream
being handled. For instance, getLine :: IO String has a counterpart
hGetLine :: Handle -> IO String.
Input:
Output:
Chapter 11: Record Syntax
Section 11.1: Basic Syntax
Records are an extension of the algebraic data type syntax that allows fields to be
named:
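The record used through the rest of this section is not preserved; judging by the field names referenced below, it was presumably:

    data Person = Person { name :: String, age :: Int } deriving Show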
The field names can then be used to get the named field out of the record
We can bind the value located at the position of the relevant field label whilst
pattern matching to a new value (in this case x) which can be used on the
RHS of a definition.
The NamedFieldPuns extension instead allows us to just specify the field label we
want to match upon, this name is then shadowed on the RHS of a definition
so referring to name refers to the value rather than the record accessor.
When matching using RecordWildCards, all field labels are brought into scope.
(In this specific example, name and age)
This extension is slightly controversial as it is not clear how values are
brought into scope if you are not sure of the definition of Person.
Record Updates
There is also special syntax for updating data types with field labels.
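For example, copying a record while changing one field:

    birthday :: Person -> Person
    birthday p = p { age = age p + 1 }   -- same Person, with only age changed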
Section 11.3: RecordWildCards
The pattern Client {..} brings into scope all the fields of the constructor Client;
it is equivalent to a pattern that matches every field and binds each one to a
variable of the same name.
a new value of type Person can be created by copying from alex, specifying
which values to change:
It is important to note that the record syntax is typically never used to form
values and the field name is used strictly for unwrapping
Chapter 12: Partial Application
Section 12.1: Sections
Sectioning is a concise way to partially apply arguments to infix operators.
For example, if we want to write a function which adds "ing" to the end of a
word we can use a section to succinctly define a function.
Notice how we have partially applied the second argument. Normally, we can
only partially apply the arguments in the specified order.
We can also use left sectioning to partially apply the first argument.
A Note on Subtraction
Beginners often incorrectly section negation.
This does not work as -1 is parsed as the literal -1 rather than the sectioned
operator - applied to 1. The subtract function exists to circumvent this issue.
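A brief illustration of the pitfall and the fix (this example is mine, not from the original):

-- map (-1) [1, 2, 3]        -- type error: -1 is the literal negative one, not a section
map (subtract 1) [1, 2, 3]   -- [0,1,2]
map (+ (-1)) [1, 2, 3]       -- [0,1,2], an alternative spelling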
Section 12.2: Partially Applied Adding
Function
We can use partial application to "lock" the first argument. After applying
one argument we are left with a function which expects one more argument
before returning the result.
In this example (+x) is a partially applied function. Notice that the second
parameter to the add function does not need to be specified in the function
definition.
The result of calling add 5 2 is 7.
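A minimal sketch of such a definition (the names are illustrative):

add :: Int -> Int -> Int
add x = (+ x)        -- the second parameter is not mentioned

addFive :: Int -> Int
addFive = add 5      -- "lock" the first argument

-- add 5 2   == 7
-- addFive 2 == 7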
Chapter 13: Monoid
Section 13.1: An instance of Monoid
for lists
Section 13.2: Collapsing a list of
Monoids into a single value
mconcat :: Monoid a => [a] -> a is another method of the Monoid typeclass:
This effectively allows for the developer to choose which functionality to use
by wrapping the value in the appropriate newtype.
Section 13.4: An instance of Monoid
for ()
() is a Monoid. Since there is only one value of type (), there's only one thing mempty and mappend could do:
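As a sketch, the instance amounts to the following (modern base routes mappend through Semigroup, but the result is the same):

instance Monoid () where
  mempty        = ()
  mappend () () = ()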
Chapter 14: Category Theory
Section 14.1: Category theory as a
system for organizing abstraction
Category theory is a modern mathematical theory and a branch of abstract
algebra focused on the nature of connectedness and relation. It is useful for
giving solid foundations and common language to many highly reusable
programming abstractions. Haskell uses Category theory as inspiration for
some of the core typeclasses available in both the standard library and several
popular third-party libraries.
An example
The Functor typeclass says that if a type F instantiates Functor (for which we write Functor F) then we have a generic operation
fmap :: (a -> b) -> F a -> F b
which lets us "map" over F. The standard (but imperfect) intuition is that F a is a container full of values of type a and fmap lets us apply a transformation to each of these contained elements. An example is Maybe
Given this intuition, a common question is "why not call Functor something
obvious like Mappable?".
A hint of Category Theory
The reason is that Functor fits into a set of common structures in Category
theory and therefore by calling Functor "Functor" we can see how it connects
to this deeper body of knowledge.
In particular, Category Theory is highly concerned with the idea of arrows from one place to another. In Haskell, the most important set of arrows are the function arrows a -> b. A common thing to study in Category Theory is how one set of arrows relates to another set. In particular, for any type constructor F, the set of arrows of the shape F a -> F b is also interesting.
So a Functor is any F such that there is a connection between the normal Haskell arrows a -> b and the F-specific arrows F a -> F b. The connection is defined by fmap, and we also recognize a few laws which must hold.
So we have, for example, that [], Maybe and ((->) r) are functors in Hask.
Monads
A monad in category theory is a monoid in the category of endofunctors. This category has endofunctors (F :: * -> *) as objects and natural transformations (transformations forall a . F a -> G a between them) as morphisms.
A monoid object can be defined on a monoidal category, and is a type having two morphisms:
And, to obey the monad laws is equivalent to obeying the categorical monoid object laws.
† In fact, the class of all types along with the class of functions between types do not strictly form a category in Haskell, due to the existence of undefined.
Typically this is remedied by simply defining the objects of the Hask
category as types without bottom values, which excludes non-terminating
functions and infinite values (codata). For a detailed discussion of this topic,
see here.
Section 14.3: Definition of a Category
A category C consists of:
A collection of objects, called Obj(C);
A collection (called Hom(C)) of morphisms between those objects. If a and b are in Obj(C), then a morphism f in Hom(C) is typically denoted f : a -> b, and the collection of all morphisms between a and b is denoted hom(a,b);
A special morphism called the identity morphism - for every a : Obj(C) there exists a morphism id : a -> a;
A composition operator (.), taking two morphisms f : a -> b, g : b -> c and producing a morphism a -> c,
which obey the following laws:
For all f : a -> x and g : x -> b, id . f = f and g . id = g;
For all f : a -> b, g : b -> c and h : c -> d, h . (g . f) = (h . g) . f.
In other words, composition with the identity morphism (on either the left or right) does not change the other morphism, and composition is associative.
In Haskell, the Category is defined as a typeclass in Control.Category:
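For reference, a sketch of that class, whose id and (.) generalise the Prelude versions:

class Category cat where
  id  :: cat a a                        -- identity morphism
  (.) :: cat b c -> cat a b -> cat a c  -- composition of morphisms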
The coproduct of two types A and B in Hask is Either a b, or any other type isomorphic to it:
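A small sketch of the two injections and the universal eliminator:

inject1 :: a -> Either a b
inject1 = Left

inject2 :: b -> Either a b
inject2 = Right

-- either :: (a -> c) -> (b -> c) -> Either a b -> c
-- factors any pair of functions a -> c and b -> c through Either a b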
Section 14.5: Product of types in Hask
Categorical products
In category theory, the product of two objects X, Y is another object Z with
two projections: π₁ : Z → X and π₂ : Z → Y; such that any other two
morphisms from another object decompose uniquely through those
projections. In other words, if there exist f ₁ : W → X and f ₂ : W → Y,
exists a unique morphism g : W → Z such that π₁ ○ g = f ₁ and π₂ ○ g
=f₂.
Products in Hask
This translates into the Hask category of Haskell types as follows: Z is the product of A, B when:
The product type of two types A, B which follows the law stated above is the tuple of the two types, (A,B), and the two projections are fst and snd. We can check that it follows the above rule: if we have two functions f1 :: W -> A and f2 :: W -> B, we can decompose them uniquely as follows:
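A sketch of such a decomposition (the name decompose is only illustrative):

decompose :: (w -> a) -> (w -> b) -> (w -> (a, b))
decompose f1 f2 = \w -> (f1 w, f2 w)

-- fst . decompose f1 f2  ==  f1
-- snd . decompose f1 f2  ==  f2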
Uniqueness up to isomorphism
The choice of (A,B) as the product of A and B is not unique. Another logical and equivalent choice would have been (B,A), and we could find a decomposition function like the above also following the rules:
This is because the product is not unique, but unique up to isomorphism. Every two products of A and B do not have to be equal, but they should be isomorphic. As an example, the two different products we have just defined, (A,B) and (B,A), are isomorphic:
But the problem here is that we could have written another decomposition,
namely:
The Applicative class is equivalent to this Monoidal one and thus can be
implemented in terms of it:
Chapter 15: Lists
Section 15.1: List basics
The type constructor for lists in the Haskell Prelude is []. The type declaration
for a list holding values of type Int is written as follows:
Lists in Haskell are homogeneous sequences, which is to say that all elements
must be of the same type. Unlike tuples, list type is not affected by length:
Note that (++), which can be used to build lists, is defined recursively in terms of (:) and [].
Section 15.2: Processing lists
To process lists, we can simply pattern match on the constructors of the list
type:
Note that in the above example, we had to provide a more exhaustive pattern
match to handle cases where an odd length list is given as an argument.
The Haskell Prelude defines many built-ins for handling lists, like map, filter,
etc.. Where possible, you should use these instead of writing your own
recursive functions.
Section 15.3: Ranges
Creating a list from 1 to 10 is simple using range notation:
To specify a step, add a comma and the next element after the start element:
Note that Haskell always takes the step as the arithmetic difference between
terms, and that you cannot specify more than the first two elements and the
upper bound:
Because Haskell is non-strict, the elements of the list are evaluated only if they are needed, which allows us to use infinite lists. [1..] is an infinite list starting from 1. This list can be bound to a variable or passed as a function argument:
take 5 [1..]   -- returns [1,2,3,4,5] even though [1..] is infinite
Ranges work not just with numbers but with any type that implements Enum
typeclass. Given some enumerable variables a, b, c, the range syntax is
equivalent to calling these Enum methods:
We can filter a list with a predicate using filter :: (a -> Bool) -> [a] -> [a]:
The reason is that foldr is evaluated like this (look at the inductive step of
foldr):
foldr (+) 0 [1, 2, 3]                  -- foldr (+) 0 [1,2,3]
(+) 1 (foldr (+) 0 [2, 3])             -- 1 + foldr (+) 0 [2,3]
(+) 1 ((+) 2 (foldr (+) 0 [3])) -- 1 + (2 + foldr (+) 0 [3])
(+) 1 ((+) 2 ((+) 3 (foldr (+) 0 []))) -- 1 + (2 + (3 + foldr (+) 0 []))
(+) 1 ((+) 2 ((+) 3 0)) -- 1 + (2 + (3 + 0 ))
Unzipping a list:
Section 15.12: foldl
This is how the left fold is implemented. Notice how the order of the
arguments in the step function is flipped compared to foldr (the right fold):
The reason is that foldl is evaluated like this (look at foldl's inductive step):
The last line is equivalent to ((0 + 1) + 2) + 3. This is because (f a b) is the same as (a `f` b) in general, and so ((+) 0 1) is the same as (0 + 1) in particular.
Chapter 16: Sorting Algorithms
Section 16.1: Insertion Sort
Example use:
Result:
Section 16.2: Permutation Sort
Also known as bogosort. The version shown also works for lists containing duplicates:
Top-down version:
Result:
Bottom-up version:
Section 16.4: Quicksort
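A sketch of the classic (non in-place) Haskell quicksort, for reference:

quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (x:xs) = quicksort smaller ++ [x] ++ quicksort larger
  where smaller = [a | a <- xs, a <  x]
        larger  = [a | a <- xs, a >= x]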
Section 16.5: Bubble sort
Section 16.6: Selection sort
Selection sort selects the minimum element, repeatedly, until the list is
empty.
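A hedged sketch of that idea:

import Data.List (delete)

selectionSort :: Ord a => [a] -> [a]
selectionSort [] = []
selectionSort xs = m : selectionSort (delete m xs)   -- remove one occurrence of the minimum
  where m = minimum xs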
Chapter 17: Type Families
Section 17.1: Datatype Families
Data families can be used to build datatypes that have different
implementations based on their type arguments.
In the above declaration, Nil :: List Char and UnitList :: Int -> List ().
Associated data families
Data families can also be associated with typeclasses. This is often useful for
types with “ helper objects ” , which are required for generic typeclass
methods but need to contain different information depending on the concrete
instance. For instance, indexing locations in a list just requires a single
number, whereas in a tree you need a number to indicate the path at each
node:
Section 17.2: Type Synonym Families
Type synonym families are just type-level functions: they associate
parameter types with result types. These come in three different varieties.
Closed type-synonym families
These work much like ordinary value-level Haskell functions: you specify
some clauses, mapping certain types to others:
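A minimal sketch of the syntax (the family name and instances are illustrative, not from the original text):

{-# LANGUAGE TypeFamilies #-}

type family Flip a where
  Flip Bool = Int
  Flip Int  = Bool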
In this case, the compiler can't know which instance to use, because the argument to bar is itself just a polymorphic Num literal. And the type function Bar can't be resolved in the “inverse direction”, precisely because it's not injective† and hence not invertible (there could be more than one type a with Bar a = String).
† With only these two instances, it is actually injective, but the compiler can't
know somebody won't add more instances later on and thereby break the
behaviour.
Section 17.3: Injectivity
Type Families are not necessarily injective. Therefore, we cannot infer the
parameter from an application. For example, in servant, given a type Server a we
cannot infer the type a. To solve this problem, we can use Proxy. For
... Proxy a -> Server a -> ... example, in servant, the serve function has type . We
can infer a from Proxy a because Proxy is defined by data which is injective.
Chapter 18: Monads
A monad is a data type of composable actions. Monad is the class of type constructors whose values represent such actions. Perhaps IO is the most recognizable one: a value of IO a is a "recipe for retrieving an a value from the real world".
We say a type constructor m (such as [] or Maybe) forms a monad if there is an instance Monad m satisfying certain laws about the composition of actions. We can then reason about m a as an "action whose result has type a".
Section 18.1: Definition of Monad
The most important function for dealing with monads is the bind operator
>>=:
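Its signature, for reference:

(>>=) :: Monad m => m a -> (a -> m b) -> m b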
>>= sequences two actions together by piping the result from the first
action to the second.
The other function defined by Monad is:
Its name is unfortunate: this return has nothing to do with the return keyword
found in imperative programming languages.
return x is the trivial action yielding x as its result. (It is trivial in the
following sense:)
However, the definition of a Monad doesn’t guarantee the existence of a function of type Monad m => m a -> a.
That means there is, in general, no way to extract a value from a computation (i.e. “unwrap” it). This is the case for many instances:
Specifically, there is no function IO a -> a, which often confuses beginners; see this example.
Section 18.3: Monad as a Subclass of
Applicative
As of GHC 7.10, Applicative is a superclass of Monad (i.e., every type which is a Monad must also be an Applicative). All the methods of Applicative (pure, <*>) can be implemented in terms of the methods of Monad (return, >>=).
It is obvious that pure and return serve equivalent purposes, so pure = return. The definition for <*> is relatively clear too:
mf <*> mx = do { f <- mf; x <- mx; return (f x) }
-- = mf >>= (\f -> mx >>= (\x -> return (f x)))
As with the monad laws, these equivalences are not enforced, but developers should ensure that they are always upheld.
Section 18.4: The Maybe monad
Maybe is used to represent possibly empty values - similar to null in other languages. Usually it is used as the output type of functions that can fail in some way.
Consider the following function:
Think of halve as an action, depending on an Int, that tries to halve the integer,
failing if it is odd.
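A plausible definition, consistent with that description (a sketch only):

halve :: Int -> Maybe Int
halve x | even x    = Just (x `div` 2)
        | otherwise = Nothing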
How do we halve an integer three times?
takeOneEighth :: Int -> Maybe Int                -- (after you read the 'do' sub-section:)
takeOneEighth x =
  case halve x of                                -- do {
    Nothing      -> Nothing
    Just oneHalf ->                              --   oneHalf    <- halve x
      case halve oneHalf of
        Nothing         -> Nothing
        Just oneQuarter ->                       --   oneQuarter <- halve oneHalf
          case halve oneQuarter of
            Nothing        -> Nothing            --   oneEighth  <- halve oneQuarter
            Just oneEighth ->
              Just oneEighth                     --   return oneEighth }

takeOneEighth :: Int -> Maybe Int
takeOneEighth x = halve x >>= halve >>= halve             -- or,
-- return x >>= halve >>= halve >>= halve                 -- which is parsed as
-- (((return x) >>= halve) >>= halve) >>= halve           -- which can also be written as
-- (halve =<<) . (halve =<<) . (halve =<<) $ return x     -- or, equivalently, as
-- halve <=< halve <=< halve $ x
There are three monad laws that should be obeyed by every monad, that is
every type which is an instance of the Monad typeclass:
-- == do { y <- f x ; do { z <- g y; h z } }
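Stated explicitly, a standard formulation of the three laws (the do-notation reading of the third matches the comment above):

-- 1. left identity:   return x >>= f   =  f x
-- 2. right identity:  m >>= return     =  m
-- 3. associativity:   (m >>= f) >>= g  =  m >>= (\x -> f x >>= g)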
Obeying these laws makes it a lot easier to reason about the monad, because
it guarantees that using monadic functions and composing them behaves in a
reasonable way, similar to other monads.
Let's check if the Maybe monad obeys the three monad laws.
1. The left identity law: return x >>= f = f x
Take a normal value and convert it into an action which just immediately
returns the value you gave it. This function is less obviously useful until
you start using do notation.
we obtain:
Section 18.7: do-notation
do-notation is syntactic sugar for monads. Here are the rules:
do { x <- mx; y <- my; ... }   is equivalent to   do { x <- mx; do { y <- my; ... } }
do { let a = b; ... }          is equivalent to   let a = b in do { ... }
do { m; e }                    is equivalent to   m >> (do { e })
do { x <- m; e }               is equivalent to   m >>= (\x -> do { e })
do { m }                       is equivalent to   m
For example, these definitions are equivalent:
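A small sketch of two equivalent definitions (illustrative only):

greet :: IO ()
greet = do
  name <- getLine
  putStrLn ("Hello, " ++ name)

greet' :: IO ()
greet' = getLine >>= \name -> putStrLn ("Hello, " ++ name)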
Chapter 19: Stack
Section 19.1: Profiling with Stack
Configure profiling for a project via stack. First build the project with the --profile flag:
GHC profiling flags (like -prof) are not required in the cabal file for this to work; stack will automatically turn on profiling for both the library and executables in the project. The next time an executable in the project is run, the usual +RTS flags can be used:
Section 19.2: Structure
File structure
A simple project has the following files included in it:
In the folder src there is a file named Main.hs. This is the "starting point" of the
helloworld project. By default Main.hs contains a simple "Hello, World!"
program.
Main.hs
First we have to build the project with stack build, and then we can run it with stack exec:
Section 19.4: Viewing dependencies
To find out what packages your project directly depends on, you can simply
use this command:
This way you can find out which versions of your dependencies were actually pulled down by stack.
Haskell projects frequently find themselves pulling in a lot of libraries
indirectly, and sometimes these external dependencies cause problems that
you need to track down. If you find yourself with a rogue external
dependency that you'd like to identify, you can grep through the entire
dependency graph and identify which of your dependencies is ultimately
pulling in the undesired package:
stack dot prints out a dependency graph in text form that can be searched. It can also be viewed:
You can also set the depth of the dependency graph if you want:
Section 19.5: Stack install
Running the command will create a directory called helloworld with the files necessary for a Stack project.
Section 19.8: Stackage Packages and
changing the LTS (resolver) version
Stackage is a repository for Haskell packages. We can add these packages to
a stack project.
Adding lens to a project.
In a stack project, there is a file called stack.yaml. In stack.yaml there is a segment
that looks like:
to:
With the next stack build Stack will use the LTS 6.9 version and hence
download some new dependencies.
Chapter 20: Generalized Algebraic
Data Types
Section 20.1: Basic Usage
When the GADTs extension is enabled, besides regular data declarations, you
can also declare generalized algebraic datatypes as follows:
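A sketch of what such a declaration could look like; the constructor names are illustrative, but the Show-constrained constructor and the Int-specialised constructor match the discussion below:

{-# LANGUAGE GADTs #-}

data DataType a where
  Constr1 :: a -> DataType a
  Constr2 :: Show a => a -> DataType a   -- carries a Show dictionary
  Constr3 :: DataType Int                -- fixes the type parameter to Int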
Note that the Show a constraint doesn't appear in the type of the function, and is only visible in the code to the right of ->.
Constr3 has type DataType Int, which means that whenever a value of type DataType a is a Constr3, it is known that a ~ Int. This information, too, can be recovered with a pattern match.
Chapter 21: Recursion Schemes
Section 21.1: Fixed points
Fix takes a "template" type and ties the recursive knot, layering the template like a lasagne.
Inside a Fix f we find a layer of the template f. To fill in f's parameter, Fix f
plugs in itself. So when you look inside the template f you find a recursive
occurrence of Fix f.
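A sketch of the standard definition:

newtype Fix f = Fix { unFix :: f (Fix f) }   -- each layer of f wraps another Fix f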
Here is how a typical recursive datatype can be translated into our framework
of templates and fixed points. We remove recursive occurrences of the type
and mark their positions using the r parameter.
Section 21.2: Primitive recursion
Paramorphisms model primitive recursion. At each iteration of the fold, the
folding function receives the subtree for further processing.
Note that apo and para are dual. The arrows in the type are flipped; the tuple
in para is dual to the Either in apo, and the implementations are mirror images
of each other.
Section 21.4: Folding up a structure
one layer at a time
Catamorphisms, or folds, model primitive recursion. cata tears down a fixpoint
layer by layer, using an algebra function (or folding function) to process each
layer. cata requires a Functor instance for the template type f.
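A sketch of the usual definition, consistent with the unFix used in the fusion derivation later on:

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix   -- unwrap one layer, fold the children, apply the algebra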
Section 21.5: Unfolding a structure one
layer at a time
Anamorphisms, or unfolds, model primitive corecursion. ana builds up a
fixpoint layer by layer, using a coalgebra function (or unfolding function) to
produce each new layer. ana requires a Functor instance for the template type f.
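And its mirror image, as a sketch:

ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg   -- produce one layer, unfold the children, wrap it up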
Note that ana and cata are dual. The types and implementations are mirror
images of one another.
Section 21.6: Unfolding and then
folding, fused
It's common to structure a program as building up a data structure and then
collapsing it to a single value. This is called a hylomorphism or refold. It's
possible to elide the intermediate structure altogether for improved
efficiency.
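A sketch of the fused definition, keeping the argument order used in the derivation below (coalgebra f first, algebra g second):

hylo :: Functor t => (a -> t a) -> (t b -> b) -> a -> b
hylo f g = g . fmap (hylo f g) . f   -- equivalent to cata g . ana f, but never builds the Fix value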
cata g . ana f
= g . fmap (cata g) . unFix . Fix . fmap (ana f) . f -- definition of cata and ana
= g . fmap (cata g) . fmap (ana f) . f -- unFix . Fix = id
= g . fmap (cata g . ana f) . f -- Functor law
= g . fmap (hylo f g) . f -- definition of hylo
Chapter 22: Data.Text
Section 22.1: Text Literals
The OverloadedStrings language extension allows the use of normal string literals
to stand for Text values.
Section 22.2: Checking if a Text is a
substring of another Text
isInfixOf :: Text -> Text -> Bool checks whether a Text is contained anywhere
within another Text.
isPrefixOf :: Text -> Text -> Bool checks whether a Text appears at the beginning
of another Text.
isSuffixOf :: Text -> Text -> Bool checks whether a Text appears at the end of
another Text.
Section 22.3: Stripping whitespace
strip removes whitespace from the start and end of a Text value.
filter can be used to remove whitespace, or other characters, from the middle.
Section 22.4: Indexing Text
The count function returns the number of times a query Text occurs within
another Text.
Section 22.5: Splitting Text Values
Note that decodeUtf8 will throw an exception on invalid input. If you want to handle invalid input without exceptions, use decodeUtf8' (which returns an Either) instead.
and GHCi will stop at the relevant line when we run the function:
The :set prompt command changes the prompt for this interactive session.
In this example, x and z will both be evaluated to weak head normal form
before returning the list. It's equivalent to:
and so we get
The following function is written with a lazy pattern but is in fact using the
pattern's variable which forces the match, so will fail for Left arguments:
Here act1 works on inputs that parse to any list of strings, whereas in act2 the putStrLn s1 needs the value of s1, which forces the pattern match on the two-element list [s1, s2], so it works only for lists of exactly two strings:
Section 24.3: Normal forms
This example provides a brief overview - for a more in-depth explanation of
normal forms and examples, see this question.
Reduced normal form
The reduced normal form (or just normal form, when the context is clear) of an expression is the result of evaluating all reducible subexpressions in the given expression. Due to the non-strict semantics of Haskell (typically called laziness), a subexpression is not reducible if it is under a binder (i.e. a lambda abstraction, \x -> ..). The normal form of an expression has the property that if it exists, it is unique.
In other words, it does not matter (in terms of denotational semantics) in
which order you reduce subexpressions. However, the key to writing
performant Haskell programs is often ensuring that the right expression is
evaluated at the right time, i.e. understanding the operational semantics.
An expression whose normal form is itself is said to be in normal form.
Some expressions, e.g. let x = 1:x in x, have no normal form, but are still productive. The example expression still has a value, if one admits infinite values, which here is the list [1,1, ... ]. Other expressions, such as let y = 1+y in y, have no value, or their value is undefined.
in which case the constructor Con is said to be strict in the B field, which
means the B field is evaluated to WHNF when the constructor is applied to
sufficient (here, two) arguments.
Section 24.4: Strict fields
In a data declaration, prefixing a type with a bang (!) makes the field a strict
field. When the data constructor is applied, those fields will be evaluated to
weak head normal form, so the data in the fields is guaranteed to always be in
weak head normal form.
Strict fields can be used in both record and non-record types:
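A brief sketch of the syntax (the type and field names are illustrative):

data Con = Con Int !Int   -- the second field is strict

data Rec = Rec { lazyField :: Int, strictField :: !Int }   -- record syntax works too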
Chapter 25: Syntax in Functions
Section 25.1: Pattern Matching
Haskell supports pattern matching expressions in both function definition and
through case statements.
A case statement is much like a switch in other languages, except it supports
all of Haskell's types.
Let's start simple:
Or, we could define our function like an equation which would be pattern
matching, just without using a case statement:
Actually, pattern matching can be used on the constructors of any data type: e.g. the constructor for lists is (:) and for pairs (,).
Section 25.2: Using where and guards
Given this function:
As observed, we used where at the end of the function body, eliminating the repetition of the calculation (hourlyRate * weekHoursOfWork * 52), and we also used where to organize the salary range.
The naming of common sub-expressions can also be achieved with let
expressions, but only the where syntax makes it possible for guards to
refer to those named sub-expressions.
Section 25.3: Guards
A function can be defined using guards, which can be thought of classifying
behaviour according to input.
Take the following function definition:
One way of looking at it is that fmap lifts a function of values into a function
of values in a context f.
A correct instance of Functor should satisfy the functor laws, though these are
not enforced by the compiler:
We can check the functor laws for this instance using equational reasoning.
For the identity law,
Lists
Lists' instance of Functor applies the function to every value in the list in
place.
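For reference, the instance in base is simply:

instance Functor [] where
  fmap = map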
This example shows that fmap generalises map. map only operates on lists,
whereas fmap works on an arbitrary Functor.
The identity law can be shown to hold by induction:
Functions
Not every Functor looks like a container. Functions' instance of Functor applies
a function to the return value of another function.
fmap lifts a function a -> b into a subcategory of Hask in a way that preserves both the existence of any identity arrows, and the associativity of composition.
The Functor class only encodes endofunctors on Hask. But in mathematics,
functors can map between arbitrary categories. A more faithful encoding of
this concept would look like this:
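One common sketch of such an encoding (only a sketch; the class name CFunctor is illustrative):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
import Control.Category (Category)

class (Category c, Category d) => CFunctor c d f | f -> c d where
  cfmap :: c a b -> d (f a) (f b)   -- map a morphism of the source category to one of the target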
The standard Functor class is a special case of this class in which the source
and target categories are both Hask.
For example,
Chapter 27: Testing with Tasty
Section 27.1: SmallCheck, QuickCheck
and HUnit
Install packages:
prints
Type parameters in Haskell must begin with a lowercase letter. Our custom
data type is not a real type yet. In order to create values of our type, we must
substitute all type parameters with actual types. Because a and b can be of any
type, our value constructors are polymorphic functions.
Creating variables of our custom type
The name of the type is specified between data and =, and is called a type
constructor. After = we specify all value constructors of our data type,
delimited by the | sign. There is a rule in Haskell that all type and value
constructors must begin with a capital letter. The above declaration can be
read as follows:
The above statement creates a variable named x of type Foo. Let's verify this
by checking its type.
prints
Section 28.4: Custom data type with
record parameters
Assume we want to create a data type Person, which has a first and last name,
an age, a phone number, a street, a zip code and a town.
We could write
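A sketch of such a record (the exact field names are assumptions for illustration):

data Person = Person
  { firstName   :: String
  , lastName    :: String
  , age         :: Int
  , phoneNumber :: String
  , street      :: String
  , zipCode     :: String
  , town        :: String
  } deriving Show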
example. Finally, we launch our "event loop", that would fire events on user
input:
main = do
  (inputHandler, inputFire) <- newAddHandler
  compile $ do ...
  forever $ do
    input <- getLine
    inputFire input
Section 29.2: Event type
In reactive-banana the Event type represents a stream of some events in time.
An Event is similar to an analog impulse signal in the sense that it is not
continuous in time. As a result, Event is an instance of the Functor typeclass
only. You can't combine two Events together because they may fire at different
times. You can do something with an Event's [current] value and react to it
with some IO action.
Transformations on Events value are done using fmap:
main = do
  (inputHandler, inputFire) <- newAddHandler
  compile $ do
    inputEvent <- fromAddHandler inputHandler
    -- turn all characters in the signal to upper case
    let inputEvent' = fmap (map toUpper) inputEvent
Reacting to an Event is done the same way. First you fmap it with an action of type a -> IO () and then pass it to the reactimate function:
main = do
  (inputHandler, inputFire) <- newAddHandler
  compile $ do
    inputEvent <- fromAddHandler inputHandler
    -- turn all characters in the signal to upper case
    let inputEvent' = fmap (map toUpper) inputEvent
    let inputEventReaction = fmap putStrLn inputEvent'   -- this has type `Event (IO ())`
    reactimate inputEventReaction
inputFire "something" Now whenever is called, "SOMETHING" would be printed.
Section 29.3: Actuating EventNetworks
EventNetworks returned by compile must be actuated before reactimated events have an effect.
Section 29.4: Behavior type
To represent continuous signals, reactive-banana features the Behavior a type. Unlike Event, a Behavior is an Applicative, which lets you combine n Behaviors using an n-ary pure function (using <$> and <*>).
inputBehavior' <- accumE "" $ fmap (\oldValue newValue -> newValue) inputEvent
The only thing that should be noted is that changes returns Event (Future a) instead of Event a. Because of this, reactimate' should be used instead of reactimate. The rationale behind this can be obtained from the documentation.
Chapter 30: Optimization
Section 30.1: Compiling your Program
for Profiling
The GHC compiler has mature support for compiling with profiling
annotations.
Using the -prof and -fprof-auto flags when compiling will add support to your binary for profiling flags for use at runtime.
Suppose we have this program:
We will see a main.prof file created post execution (once the program has
exited), and this will give us all sorts of profiling information such as cost
centers which gives us a breakdown of the cost associated with running the
various parts of the code:
Wed Oct 12 16:14 2011 Time and Allocation Profiling Report (Final)
Compiling with -prof and -fprof-auto and running with +RTS -p, e.g. ghc -prof -fprof-auto -rtsopts Main.hs && ./Main +RTS -p, would produce Main.prof once the program has exited.
Chapter 31: Concurrency
Section 31.1: Spawning Threads with
`forkIO`
Haskell supports many forms of concurrency, the most obvious being forking a thread using forkIO.
The function forkIO :: IO () -> IO ThreadId takes an IO action and returns its ThreadId; meanwhile, the action will be run in the background.
Both actions will run in the background, and the second is almost guaranteed
to finish before the last!
Section 31.2: Communicating between
Threads with `MVar`
It is very easy to pass information between threads using the MVar a type and
its accompanying functions in Control.Concurrent:
newEmptyMVar :: IO (MVar a)
newMVar      :: a -> IO (MVar a)
takeMVar     :: MVar a -> IO a
putMVar      :: MVar a -> a -> IO ()
Let's sum the numbers from 1 to 100 million in a thread and wait on the
result:
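A hedged sketch of that idea:

import Control.Concurrent

main :: IO ()
main = do
  result <- newEmptyMVar
  _ <- forkIO $ putMVar result (sum [1 .. 100000000 :: Integer])  -- compute in a background thread
  total <- takeMVar result   -- blocks until the worker thread has put the sum
  print total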
A more complex demonstration might be to take user input and sum in the
background while waiting for more input:
As stated earlier, if you call takeMVar and the MVar is empty, it blocks until
another thread puts something into the
MVar, which could result in a Dining Philosophers Problem. The same thing
happens with putMVar: if it's full, it'll block 'til it's empty!
Take the following function:
Example:
Chapter 33: Databases
Section 33.1: Postgres
Postgresql-simple is a mid-level Haskell library for communicating with a
PostgreSQL backend database. It is very simple to use and provides a type-
safe API for reading/writing to a DB.
Running a simple query is as easy as:
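A minimal sketch of what that can look like (the connection string, table and row type are assumptions for illustration):

{-# LANGUAGE OverloadedStrings #-}
import Database.PostgreSQL.Simple

main :: IO ()
main = do
  conn <- connectPostgreSQL "host=localhost dbname=testdb"
  rows <- query_ conn "SELECT name, age FROM users" :: IO [(String, Int)]
  mapM_ print rows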
Parameter substitution
PostgreSQL-Simple supports parameter substitution for safe parameterised
queries using query:
In order to use the encode and decode function from the Data.Aeson package we
need to make Person an instance of ToJSON and FromJSON. Since we derive
Generic for Person, we can create empty instances for these classes. The default
definitions of the methods are defined in terms of the methods provided by
the Generic type class.
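A sketch of what those empty instances look like (the Person fields here are assumed for illustration):

{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics (Generic)
import Data.Aeson (ToJSON, FromJSON)

data Person = Person { name :: String, age :: Int } deriving (Show, Generic)

instance ToJSON Person    -- methods fall back to the Generic-based defaults
instance FromJSON Person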
Done! In order to improve the encoding speed we can slightly change the
ToJSON instance:
Now we can use the encode function to convert Person to a (lazy) Bytestring:
These are particularly useful in that they allow us to create new functions on
top of the ones we already have, by passing functions as arguments to other
functions. Hence the name, higher-order functions.
Consider:
This ability to easily create functions (like e.g. by partial application as used
here) is one of the features that makes functional programming particularly
powerful and allows us to derive short, elegant solutions that would
otherwise take dozens of lines in other languages. For example, the following
function gives us the number of aligned elements in two lists.
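One possible one-liner in that spirit (a sketch, not necessarily the definition used in the original):

countAligned :: Eq a => [a] -> [a] -> Int
countAligned xs ys = length (filter id (zipWith (==) xs ys))   -- count positions where the lists agree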
Section 35.2: Lambda Expressions
Lambda expressions are similar to anonymous functions in other languages.
Lambda expressions are open formulas which also specify variables which
are to be bound. Evaluation (finding the value of a function call) is then
achieved by substituting the bound variables in the lambda expression's body,
with the user supplied arguments. Put simply, lambda expressions allow us to
express functions by way of variable binding and substitution.
Lambda expressions look like
Within a lambda expression, the variables on the left-hand side of the arrow
are considered bound in the righthand side, i.e. the function's body.
Consider the mathematical function
As a Haskell definition it is
which means that the function f is equivalent to the lambda expression \x -> x^2.
Consider the parameter of the higher-order function map, that is, a function of type a -> b. In case it is used only once in a call to map and nowhere else in the program, it is convenient to specify it as a lambda expression instead of naming such a throwaway function. Written as a lambda expression,
When using Data.Map, one usually imports it qualified to avoid name clashes with Prelude functions of similar names:
> Map.notMember "Alex" $ Map.singleton "Alex" 31
False
> Map.notMember "Jenny" $ Map.empty
True
You can also use findWithDefault :: Ord k => a -> k -> Map k a -> a to yield a default value if the key isn't present:
Map.findWithDefault 'x' 1 (fromList [(5,'a'), (3,'b')]) == 'x'
Map.findWithDefault 'x' 5 (fromList [(5,'a'), (3,'b')]) == 'a'
Section 36.6: Inserting Elements
Inserting elements is simple:
> let m = Map.singleton "Alex" 31
fromList [("Alex",31)]
> Map.insert "Bob" 99 m
fromList [("Alex",31),("Bob",99)]
Section 36.7: Deleting Elements
> let m = Map.fromList [("Alex", 31), ("Bob", 99)]
fromList [("Alex",31),("Bob",99)]
> Map.delete "Bob" m
fromList [("Alex",31)]
Chapter 37: Fixity declarations
Declaration component   Meaning
infixr                  the operator is right-associative
infixl                  the operator is left-associative
infix                   the operator is non-associative
optional digit          binding precedence of the operator (range 0...9, default 9)
op1, ..., opn           operators
Section 37.1: Associativity
infixl vs infixr vs infix describe on which sides the parens will be grouped. For example, consider the following fixity declarations (in base):
The infix tells us that == cannot be chained without parentheses, which means that True == False == True is a syntax error. On the other hand, True == (False == True) and (True == False) == True are fine.
Operators without an explicit fixity declaration are infixl 9.
Section 37.2: Binding
precedence
The number that follows the associativity information describes in what order
the operators are applied. It must always be between 0 and 9 inclusive. This is
commonly referred to as how tightly the operator binds. For example,
consider the following fixity declarations (in base)
In short, the higher the number, the closer the operator will "pull" the parens
on either side of it.
Remarks
Function application always binds more tightly than any operator, so f x `op` g y must be interpreted as (f x) `op` (g y), no matter what the operator `op` and its fixity declaration are.
If the binding precedence is omitted in a fixity declaration (for example, infixl *!?), the default is 9.
Section 37.3: Example declarations
infixr 5 ++
infix ??
Chapter 38: Web Development
Section 38.1: Servant
Servant is a library for declaring APIs at the type-level and then:
which states that we wish to expose /users to GET requests with a query param sortby of type SortBy, and return JSON of type User in the response.
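A sketch of what such an API type can look like in servant (the SortBy and User definitions are assumed for illustration):

{-# LANGUAGE DataKinds, TypeOperators #-}
import Servant.API

data SortBy = Age | Name
data User   = User { name :: String, age :: Int }

type UserAPI = "users" :> QueryParam "sortby" SortBy :> Get '[JSON] [User]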
Note, Stack has a template for generating basic APIs in Servant, which is useful for getting up and running very quickly.
Section 38.2: Yesod
A Yesod project can be created with stack new using the following templates:
yesod-minimal       Simplest Yesod scaffold possible.
yesod-mongo         Uses MongoDB as DB engine.
yesod-mysql         Uses MySQL as DB engine.
yesod-postgres      Uses PostgreSQL as DB engine.
yesod-postgres-fay  Uses PostgreSQL as DB engine. Uses the Fay language for the front-end.
yesod-simple        Recommended template to use if you don't need a database.
yesod-sqlite        Uses SQLite as DB engine.
The yesod-bin package provides the yesod executable, which can be used to run a development server. Note that you can also run your application directly, so the yesod tool is optional.
Application.hs contains code that dispatches requests between handlers. It also sets up database and logging settings, if you used them.
Foundation.hs defines the App type, which can be seen as an environment for all handlers. Being in the HandlerT monad, you can get this value using the getYesod function.
Import.hs is a module that just re-exports commonly used stuff.
Model.hs contains Template Haskell that generates code and data types used for DB interaction. Present only if you are using a DB.
config/models is where you define your DB schema. Used by Model.hs.
config/routes defines the URIs of the Web application. For each HTTP method of a route, you need to create a handler named {method}{RouteR}.
The static/ directory contains the site's static resources. These get compiled into the binary by the Settings/StaticFiles.hs module.
The templates/ directory contains Shakespeare templates that are used when serving requests.
Finally, the Handler/ directory contains modules that define handlers for routes. Each handler is a HandlerT monad action based on IO. You can inspect request parameters, the request body and other information, make queries to the DB with runDB, perform arbitrary IO and return various types of content to the user. To serve HTML, the defaultLayout function is used, which allows neat composition of shakespearean templates.
Chapter 39: Vectors
Section 39.1: The Data.Vector Module
The Data.Vector module provided by the vector package is a high performance library for working with arrays.
Once you've imported Data.Vector, it's easy to start using a Vector:
Deleting a sandbox:
We can see how the number of inhabitants of every type corresponds to the
operations of the algebra.
Equivalently, we can use Either and (,) as type constructors for the addition
and the multiplication. They are isomorphic to our previously defined types:
The expected results of addition and multiplication are followed by the type
algebra up to isomorphism. For example, we can see an isomorphism
between 1 + 2, 2 + 1 and 3; as 1 + 2 = 3 = 2 + 1.
Uniqueness up to isomorphism
We have seen that multiple types would correspond to a single number, but in
this case, they would be isomorphic. This is to say that there would be a pair
of morphisms f and g, whose composition would be the identity, connecting
the two types.
In this case, we would say that the types are isomorphic. We will consider
two types equal in our algebra as long as they are isomorphic.
For example, two different representations of the number two are trivially isomorphic:
List(a) = 1 + a * List(a)
But we can now substitute List(a) again in this expression multiple times, in
order to get:
Trees
We can do the same thing with binary trees, for example. If we define them
as:
x = partition (>x) >>> minimum *** maximum >>> uncurry (-)
This diagram:
Here,
The >>> operator is just a flipped version of the ordinary . composition
operator (there's also a <<< version that composes right-to-left). It pipes
the data from one processing step to the next.
the out-going ╱ ╲ indicate the data flow is split up in two “ channels ” .
In terms of Haskell types, this is realised with tuples:
†At least in the Hask category (i.e. in the Arrow (->) instance), f *** g does not actually compute f and g in parallel as in, on different threads. This would theoretically be possible, though.
Chapter 43: Typed holes
Section 43.1: Syntax of typed holes
A typed hole is a single underscore (_) or a valid Haskell identifier which is
not in scope, in an expression context. Before the existence of typed holes,
both of these things would trigger an error, so the new syntax does not
interfere with any old syntax.
Controlling behaviour of typed holes
The default behaviour of typed holes is to produce a compile-time error when
encountering a typed hole. However, there are several flags to fine-tune their
behaviour. These flags are summarized as follows (GHC trac):
By default GHC has typed holes enabled, and -fwarn-typed-holes is on by default. Without -fdefer-type-errors or -fdefer-typed-holes, encountering a typed hole produces a compile error; with one of those flags the error is deferred, and -fno-warn-typed-holes suppresses the resulting warning.
Note that in the case of typed holes in expressions entered into the GHCi REPL (as above), the type of the entered expression is also reported (here of type [a] -> Int).
Section 43.3: Using typed holes to
define a class instance
Typed holes can make it easier to define functions, through an interactive
process.
Say you want to define a class instance Foo Bar (for your custom Bar type, in
order to use it with some polymorphic library function that requires a Foo
instance). You would now traditionally look up the documentation of Foo,
figure out which methods you need to define, scrutinise their types etc. –
but with typed holes, you can actually skip that!
First just define a dummy instance:
Ok, so we need to define foom for Bar. But what is that even supposed to be?
Again we're too lazy to look in the documentation, and just ask the compiler:
Note how the compiler has already filled the class type variable with the
concrete type Bar that we want to instantiate it for. This can make the
signature a lot easier to understand than the polymorphic one found in the
class documentation, especially if you're dealing with a more complicated
method of e.g. a multi-parameter type class.
But what the hell is Gronk? At this point, it is probably a good idea to ask
Hayoo. However we may still get away without that: as a blind guess, we
assume that this is not only a type constructor but also the single value
constructor, i.e. it can be used as a function that will somehow produce a
Gronk a value. So we try
If we're lucky, Gronk is actually a value, and the compiler will now say
Ok, that's ugly – at first just note that Gronk has two arguments, so we can
refine our attempt:
You can now further progress by e.g. deconstructing the bar value (the
components will then show up, with types, in the Relevant bindings section).
Often, it is at some point completely obvious what the correct definition will be, because you see all available arguments and the types fit together like a jigsaw puzzle. Or, alternatively, you may see that the definition is impossible, and why.
All of this works best in an editor with interactive compilation, e.g. Emacs
with haskell-mode. You can then use typed holes much like mouse-over
value queries in an IDE for an interpreted dynamic imperative language, but
without all the limitations.
Chapter 44: Rewrite rules (GHC)
Section 44.1: Using rewrite rules on
overloaded functions
In this question, @Viclib asked about using rewrite rules to exploit typeclass
laws to eliminate some overloaded function calls:
This is a somewhat tricky use case for GHC's rewrite rules mechanism, because overloaded functions are rewritten into their specific instance methods (so, for instance, a use of toList at type Seq a -> [a] would be rewritten into Seq's own toList) by rules that are implicitly created behind the scenes by GHC.
However, by first rewriting toList and fromList into non-inlined non-typeclass
methods, we can protect them from premature rewriting, and preserve them
until the rule for the composition can fire:
Chapter 45: Date and Time
Section 45.1: Finding Today's Date
Current date and time can be found with getCurrentTime:
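A brief sketch:

import Data.Time.Clock (getCurrentTime)

main :: IO ()
main = do
  now <- getCurrentTime   -- a UTCTime value, e.g. 2016-07-01 12:00:00 UTC
  print now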
Subtract:
For example
Example:
Section 46.2: Do Notation
Any list comprehension can be correspondingly coded with list monad's do
notation.
For example, [f x | x <- xs], f <$> xs and do { x <- xs; return (f x) } are all equivalent.
A generator with a variable x in its pattern creates new scope containing all
the expressions on its right, where x is defined to be the generated element.
This means that guards can be coded as
Section 46.4: Guards
Another feature of list comprehensions is guards, which also act as filters.
Guards are Boolean expressions and appear on the right side of the bar in a
list comprehension.
Their most basic use is
Any variable used in a guard must appear on its left in the comprehension, or
otherwise be in scope. So,
[ f x | x <- list, pred1 x y, pred2 x]   -- `y` must be defined in outer scope
which is equivalent to
(the >>= operator is infixl 1, i.e. it associates (is parenthesized) to the left).
Examples:
Section 46.5: Parallel Comprehensions
With Parallel List Comprehensions language extension,
is equivalent to
Example:
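A hedged illustration of the zip-like behaviour (this particular example is mine, not from the original):

{-# LANGUAGE ParallelListComp #-}

pairs :: [(Int, Char)]
pairs = [ (x, y) | x <- [1, 2, 3] | y <- "abc" ]   -- behaves like zip: [(1,'a'),(2,'b'),(3,'c')]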
Section 46.6: Local Bindings
List comprehensions can introduce local bindings for variables to hold some
interim values:
After saving, we can now create the Haskell files which we can use in our project by running hprotoc. This will create a new folder Protocol in the current directory with Person.hs, which we can simply import into our Haskell project:
Note that ExpQ, TypeQ, DecsQ and PatQ are synonyms for the AST types which
are typically stored inside the Q type.
The TH library provides a function runQ :: Quasi m => Q a -> m a, and there is a Quasi IO instance, so in effect we have runQ :: Q a -> IO a; it would therefore seem that the Q type is just a fancy IO. However, the use of runQ produces an IO action which does not have access to any compile-time environment - this is only available in the actual Q type. Such IO actions will fail at runtime if trying to access said environment.
Section 49.3: An n-arity curry
The familiar
The curryN function takes a natural number, and produces the curry function of
that arity, as a Haskell AST.
First we produce fresh type variables for each of the arguments of the function - one for the input function, and one for each of the arguments to said function.
The expression args represents the pattern f x1 x2 .. xn. Note that a pattern is a separate syntactic entity - we could take this same pattern and place it in a
lambda, or a function binding, or even the LHS of a let binding (which would
be an error).
The function must build the argument tuple from the sequence of arguments,
which is what we've done here. Note the distinction between pattern variables
(VarP) and expression variables (VarE).
Finally, the value which we produce is the AST \f x1 x2 .. xn -> f (x1, x2, .., xn).
We could have also written this function using quotations and 'lifted'
constructors:
Note that quotations must be syntactically valid, so an incomplete fragment such as \var -> .. cannot be quoted on its own.
would only export Person and its constructors Friend and Foe.
If the export list following the module keyword is omitted, all of the names
bound at the top level of the module would be exported:
would only import map from Data.Stream, and calls to this function would require the D. prefix:
import Prelude hiding (map, head, tail, scan, foldl, foldr, filter, dropWhile, take) -- etc
In reality, it would require too much code to hide Prelude clashes like this,
so you would in fact use a qualified import of Data.Stream instead.
Section 51.5: Qualifying Imports
When multiple modules define the same functions by name, the compiler will
complain. In such cases (or to improve readability), we can use a qualified
import:
Now we can prevent ambiguity compiler errors when we use map, which is
defined in Prelude and Data.Stream:
It is also possible to import a module with only the clashing names being qualified, via import Data.Text as T, which allows one to have Text instead of T.Text etc.
Section 51.6: Hierarchical module
names
The names of modules follow the filesystem's hierarchical structure. With the
following file layout:
Note that:
the module name is based on the path of the file declaring the module
Folders may share a name with a module, which gives a naturally
hierarchical naming structure to modules
Chapter 52: Tuples (Pairs, Triples,
...)
Section 52.1: Extract tuple components
Use the fst and snd functions (from Prelude or Data.Tuple) to extract the first and
second component of pairs.
Pattern matching also works for tuples with more than two components.
Haskell does not provide standard functions like fst or snd for tuples with
more than two components. The tuple library on Hackage provides such
functions in the Data.Tuple.Select module.
Section 52.2: Strictness of matching a
tuple
The pattern (p1, p2) is strict in the outermost tuple constructor, which can lead to unexpected strictness behaviour. For example, the following expression diverges (using Data.Function.fix):
since the match on (x, y) is strict in the tuple constructor. However, the following expression, using an irrefutable pattern, evaluates to (1, 2) as expected:
Section 52.3: Construct tuple values
Use parentheses and commas to create tuples. Use one comma to create a
pair.
Note that it is also possible to declare tuples using (,), their unsugared form.
Tuple patterns can contain complex patterns such as list patterns or more
tuple patterns.
Section 52.6: Apply a binary function
to a tuple (uncurrying)
Use the uncurry function (from Prelude or Data.Tuple) to convert a binary function
to a function on tuples.
Section 52.7: Apply a tuple function to
two arguments (currying)
Use the curry function (from Prelude or Data.Tuple) to convert a function that
takes tuples to a function that takes two arguments.
Section 52.8: Swap pair components
Use swap (from Data.Tuple) to swap the components of a pair.
Here the last argument (0,0) in InWindow marks the location of the top left
corner.
For versions older than 1.11: in older versions of Gloss, FullScreen takes another argument which is meant to be the size of the frame that gets drawn on, which in turn gets stretched to fullscreen size, for example: FullScreen (1024, 768).
background is of type Color. It defines the background color, so it's as simple as:
Then we get to the drawing itself. Drawings can be very complex. How to
specify these will be covered elsewhere ([one can refer to this for the
moment][1]), but it can be as simple as the following circle with a radius of
80:
Summarizing example
As more or less stated in the documentation on Hackage, getting something
on the screen is as easy as:
Chapter 54: State Monad
State monads are a kind of monad that carry a state that might change during
each computation run in the monad. Implementations are usually of the form
State s a which represents a computation that carries and potentially modifies a
state of type s and produces a result of type a, but the term "state monad" may
generally refer to any monad which carries a state. The mtl and transformers packages provide general implementations of state monads.
Section 54.1: Numbering the nodes of a
tree with a counter
We have a tree data type like this:
And we wish to write a function that assigns a number to each node of the
tree, from an incrementing counter:
Note that this required us to use Applicative (the <*> operator) instead of Monad. With that, now we can write tag like a pro:
Note that this works for any Traversable type, not just our Tree type!
Getting rid of the Traversable boilerplate
GHC has a DeriveTraversable extension that eliminates the need for writing the
instance above:
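A sketch of what that looks like, assuming a binary tree type like the one used in this chapter:

{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Show, Functor, Foldable, Traversable)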
Chapter 55: Pipes
Section 55.1: Producers
A Producer is some monadic action that can yield values for downstream
consumption:
For example:
naturals :: Monad m => Producer Int m ()
values too:
Section 55.2: Connecting Pipes
Use >-> to connect Producers, Consumers and Pipes to compose larger Pipe
functions.
For example:
Section 55.6: The Proxy monad
transformer
pipes's core data type is the Proxy monad transformer. Pipe, Producer, Consumer and so on are defined in terms of Proxy.
Since Proxy is a monad transformer, definitions of Pipes take the form of
monadic scripts which await and yield values, additionally performing effects
from the base monad m.
Section 55.7: Combining Pipes and
Network communication
Pipes supports simple binary communication between a client and a server
In this example:
1. a client connects and sends a FirstMessage
2. the server receives it and answers DoSomething 0
3. the client receives it and answers DoNothing
4. steps 2 and 3 are repeated indefinitely
The command data type exchanged over the network:
Rational).
Lists
There are two concatenation operators:
: (pronounced cons) prepends a single argument before a list. This
operator is actually a constructor and can thus also be used to pattern
match (“inverse construct”) a list.
!! is an indexing operator.
Note that indexing lists is inefficient (complexity O(n) instead of O(1) for arrays or O(log n) for maps); it's generally preferred in Haskell to deconstruct lists by folding or pattern matching instead of indexing.
This tells me that ^^ binds more tightly than +, both take numerical types
as their elements, but ^^ requires the exponent to be integral and the base
to be fractional.
The less verbose :t requires the operator in parentheses, like
Section 56.3: Custom operators
In Haskell, you can define any infix operator you like. For example, I could
define the list-enveloping operator as
Running main above will execute and "return" immediately, while the two
values, a and b are computed in the background through rpar.
Note: ensure you compile with -threaded for parallel execution to occur.
Section 57.2: rpar
rpar :: Strategy a executes the given strategy (recall: type Strategy a = a -> Eval a) in parallel:
This subtly changes the semantics of the rpar example; whereas the latter
would return immediately whilst computing the values in the background,
this example will wait until a can be evaluated to WHNF.
Chapter 58: Parsing HTML with
taggy-lens and lens
Section 58.1: Filtering elements from
the tree
id="article" Find div with and strip out all the inner script tags.
Foo.hs:
The unsafe keyword generates a more efficient call than 'safe', but requires that
the C code never makes a callback to the Haskell system. Since foo is
completely in C and will never call Haskell, we can use unsafe.
Finally, create a special function that would wrap Haskell function of type
Callback into a pointer FunPtr Callback:
Assume we want to carry out the following computation using the counter:
set the counter to 0
set the increment constant to 3
increment the counter 3 times
set the increment constant to 5
increment the counter 2 times
The state monad provides abstractions for passing state around. We can make
use of the state monad, and define our increment function as a state
transformer.
We could use Reader r a for the environment alone; however, we need to make use of the state monad as well. Thus, we need to use the ReaderT transformer:
Using ReaderT, we can define our counter with environment and state as
follows:
We define an incR function that takes the increment constant from the
environment (using ask), and to define our increment function in terms of our
CounterS monad we make use of the lift function (which belongs to the monad
transformer class).
We can then define our counter with logging, environment, and state as
follows:
And making use of lift we can define the version of the increment function
which logs the value of the counter after each increment:
Here we arrive at a solution that will work for any monad that satisfies the
The Bifunctor class is found in the Data.Bifunctor module. For GHC 7.10 and later, this module is bundled with the compiler; for earlier versions you need to install the bifunctors package.
Section 62.2: Common instances of
Bifunctor
Two-element tuples
(,) is an example of a type that has a Bifunctor instance.
Either's instance of Bifunctor selects one of the two functions to apply depending
on whether the value is Left or Right.
Section 62.3: first and second
If mapping covariantly over only the first argument, or only the second
argument, is desired, then first or second ought to be used (in lieu of bimap).
For example,
Chapter 63: Proxies
Section 63.1: Using Proxy
The Proxy :: k -> * type, found in Data.Proxy, is used when you need to give the compiler some type information - e.g., to pick a type class instance - which is nonetheless irrelevant at runtime.
Functions which use a Proxy typically use ScopedTypeVariables to pick a type class
instance based on the Proxy's type parameter. For example, an ambiguous function
like show . read, which results in a type error because the elaborator doesn't
know which instance of Show or Read to use, can be resolved using Proxy:
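A sketch of the Proxy-based version (essentially the same definition used again later in this chapter):

    {-# LANGUAGE ScopedTypeVariables #-}
    import Data.Proxy

    showread :: forall a. (Show a, Read a) => Proxy a -> String -> String
    showread _ = (show :: a -> String) . read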
When calling a function with Proxy, you need to use a type annotation to
declare which a you meant.
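For example, assuming the showread sketch above:

    ghci> showread (Proxy :: Proxy Int) "3"
    "3"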
Section 63.2: The "polymorphic proxy" idiom
Since Proxy contains no runtime information, there is never a need to pattern-
match on the Proxy constructor. So a common idiom is to abstract over the
Proxy datatype using a type variable.
showread :: forall proxy a. (Show a, Read a) => proxy a -> String -> String
showread _ = (show :: a -> String) . read
Now, if you happen to have an f a in scope for some f, you don't need to write
out Proxy :: Proxy a when calling showread.
Section 63.3: Proxy is like ()
Since Proxy contains no runtime information, you can always write a natural
transformation f a -> Proxy a for any f.
This is just like how any given value can always be erased to ():
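A sketch of both erasures:

    import Data.Proxy

    -- Forget the contents, keep only the type index.
    proxy :: f a -> Proxy a
    proxy _ = Proxy

    -- Forget the value entirely.
    unit :: a -> ()
    unit _ = ()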
Technically, Proxy is the terminal object in the category of functors, just like ()
is the terminal object in the category of values.
Chapter 64: Applicative Functor
Applicative is the class of types f :: * -> * which allows lifted function
application over a structure where the function is also embedded in that
structure.
Section 64.1: Alternative definition
Since every Applicative Functor is a Functor, fmap can always be used on it;
thus the essence of Applicative is the pairing of carried contents, as well as
the ability to create it:
fpair :: (f a, f b) -> f (a, b) -- collapse a pair of contexts into a pair-carrying context

This pairing operation, together with fmap, is enough to recover <*>.
Conversely, fpair can be defined in terms of pure and <*>:
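A sketch of the two directions (fpair is the name used above; apFromPair is an illustrative helper):

    fpair :: Applicative f => (f a, f b) -> f (a, b)
    fpair (fa, fb) = (,) <$> fa <*> fb

    -- Recovering <*> from a pairing operation and fmap.
    apFromPair :: Functor f
               => ((f (a -> b), f a) -> f (a -> b, a))
               -> f (a -> b) -> f a -> f b
    apFromPair pair fs xs = fmap (\(f, x) -> f x) (pair (fs, xs))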
Section 64.2: Common instances of Applicative
Maybe
Maybe is an applicative functor containing a possibly-absent value.
pure lifts the given value into Maybe by applying Just to it. The (<*>) function
applies a function wrapped in a Maybe to a value in a Maybe. If both the
function and the value are present (constructed with Just), the function is
applied to the value and the wrapped result is returned. If either is missing,
the computation can't proceed and Nothing is returned instead.
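For example (GHCi):

    ghci> Just (+1) <*> Just 2
    Just 3
    ghci> Just (+1) <*> Nothing
    Nothing
    ghci> (Nothing :: Maybe (Int -> Int)) <*> Just 2
    Nothing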
Lists
(<*>) :: [a -> b] -> [a] -> [b]
One way for lists to fit the type signature is to take
the two lists' Cartesian product, pairing up each element of the first list with
each element of the second one:
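For example, each function is paired with each argument:

    ghci> [(+1), (*2)] <*> [10, 20]
    [11,21,20,40]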
There's a class of Applicatives which "zip" their two inputs together. One simple
example is that of infinite streams:
Stream's Applicative
instance applies a stream of functions to a stream of
arguments point-wise, pairing up the values in the two streams by position.
pure returns a constant stream – an infinite list of a single fixed value:
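A sketch of such a type and instance (the Stream name follows the text; the definitions below are assumptions consistent with it):

    data Stream a = Cons a (Stream a)

    instance Functor Stream where
      fmap f (Cons x xs) = Cons (f x) (fmap f xs)

    instance Applicative Stream where
      pure x = Cons x (pure x)                          -- a constant stream
      Cons f fs <*> Cons x xs = Cons (f x) (fs <*> xs)  -- point-wise application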
Lists too admit a "zippy" Applicative instance, for which there exists the ZipList
newtype:
Since zip trims its result according to the shortest input, the only pure for
ZipList that satisfies the Applicative laws is the one that returns an infinite
constant list, [a,a,a,a,.... For example:
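A GHCi sketch (the excess element of the longer list is dropped):

    ghci> import Control.Applicative
    ghci> getZipList (ZipList [(+1), (*2)] <*> ZipList [10, 20, 30])
    [11,40]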
The two possibilities remind us of the outer and the inner product: the first
is like multiplying an n x 1 column matrix by a 1 x m row matrix, getting the
n x m matrix as a result (but flattened); the second is like multiplying a
1 x n row matrix by an n x 1 column matrix (but without the summing up).
Functions
When specialised to functions ((->) r), the type signatures of pure and <*>
match those of the K and S combinators, respectively:
pure must be const, and <*> takes a pair of functions and applies them each to a
fixed argument, applying the two results:
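A sketch of the specialised signatures and definitions (k and s are illustrative names):

    k :: a -> (r -> a)
    k = const                      -- pure for (->) r

    s :: (r -> a -> b) -> (r -> a) -> (r -> b)
    s f g x = f x (g x)            -- (<*>) for (->) r

    -- ghci> ((+) <*> (*2)) 5      -- 5 + (5 * 2)
    -- 15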
Functions are the prototypical "zippy" applicative. For example, since
infinite streams are isomorphic to (->) Nat, representing streams in a
higher-order way produces the zippy Applicative instance automatically.
Chapter 65: Common monads as free monads
Section 65.1: Free Empty ~~ Identity
Given
we have
which is isomorphic to
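A sketch of the reasoning, assuming the standard definition of Free and an empty functor:

    data Free f a = Pure a | Free (f (Free f a))

    data Empty a        -- no constructors, hence no values

    -- Free Empty a = Pure a | Free (Empty (Free Empty a))
    -- The Free case is impossible, so only Pure a remains, i.e.
    newtype Identity a = Identity a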
Section 65.2: Free Identity ~~ (Nat,) ~~ Writer Nat
Given
we have
which is isomorphic to
(Nat, a), or equivalently (if you promise to evaluate the fst element first), aka
Writer Nat a, with
Section 65.3: Free Maybe ~~ MaybeT (Writer Nat)
Given
we have
which is equivalent to
we have
which is isomorphic to
we have
which is isomorphic to
Section 65.6: Free (Reader x) ~~ Reader (Stream x)
Given
we have
which is isomorphic to
we have
Section 66.2: Cofree (Const c) ~~ Writer c
Given
we have
which is isomorphic to
Section 66.3: Cofree Identity ~~ Stream
Given
we have
which is isomorphic to
Section 66.4: Cofree Maybe ~~ NonEmpty
Given
we have
which is isomorphic to
Section 66.5: Cofree (Writer w) ~~ WriterT w Stream
Given
we have
which is equivalent to
we have
which is isomorphic to
NonEmptyT (Writer e) a
or, if you promise to only evaluate the log after the
complete result, with
Section 66.7: Cofree (Reader x) ~~ Moore x
Given
we have
which is isomorphic to
We have already seen the Fractional class, which requires Num and introduces
the notions of "division" (/) and reciprocal of a number:
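Its methods, as defined in the Prelude:

    class Num a => Fractional a where
      (/)          :: a -> a -> a
      recip        :: a -> a
      fromRational :: Rational -> a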
The Real class models ... the real numbers. It requires Num and Ord; therefore it
models an ordered numerical field. As a counterexample, Complex numbers
are not an ordered field (i.e. they do not possess a natural ordering
relationship):
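Its single method converts to Rational, as defined in the Prelude:

    class (Num a, Ord a) => Real a where
      toRational :: a -> Rational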
Floating (which implies Fractional) represents constants and operations that
may not have a finite decimal expansion.
Caution: while expressions such as sqrt . negate :: Floating a => a -> a are
perfectly valid, they might return NaN ("not-a-number"), which may not be an
intended behaviour. In such cases, we might want to work over the Complex
field (shown later).
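For example (GHCi):

    ghci> sqrt (-1) :: Double
    NaN
    ghci> import Data.Complex
    ghci> sqrt ((-1) :: Complex Double)
    0.0 :+ 1.0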
In Haskell, all expressions (which includes numerical constants and functions
operating on those) have a decidable type. At compile time, the type-checker
infers the type of an expression from the types of the elementary functions
that compose it. Since data is immutable by default, there are no "type
casting" operations, but there are functions that copy data and generalize or
specialize the types within reason.
Section 67.1: Basic examples
The concrete types above were inferred by GHC. More general types like
list0 :: Num a => [a] would have worked, but would have also been harder to
preserve (e.g. if one consed a Double onto a list of Nums), due to the caveats
shown above.
Section 67.2: `Could not deduce (Fractional Int) ...`
The error message in the title is a common beginner mistake. Let's see how it
arises and how to fix it.
Suppose we need to compute the average value of a list of numbers; the
following declaration would seem to do it, but it wouldn't compile:
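A sketch of the problematic declaration and one possible fix (the names are assumptions, and the error wording varies by GHC version):

    -- Rejected: (/) needs a Fractional type, but sum xs and length xs are Ints.
    -- averageBad :: [Int] -> Int
    -- averageBad xs = sum xs / length xs
    --   error: Could not deduce (Fractional Int) arising from a use of '/'

    -- One fix: convert to a Fractional type explicitly.
    average :: [Int] -> Double
    average xs = fromIntegral (sum xs) / fromIntegral (length xs)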
Here, the function (plus) is applied to 1, yielding the function (plus 1), which
is applied to 2, yielding the function ((plus 1) 2). Because plus 1 2 is a
function which takes no arguments, you can consider it a plain value; however
in Haskell, there is little distinction between functions and values.
To go into more detail, the function plus is a function that adds its arguments.
The function plus 1 is a function that adds 1 to its argument.
The function plus 1 2 is a function that adds 1 to 2, which is
always the value 3.
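A sketch, with an assumed definition of plus:

    plus :: Int -> Int -> Int
    plus x y = x + y

    three :: Int
    three = (plus 1) 2   -- the same as plus 1 2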
Section 73.2: Partial application - Part 2
As another example, we have the function map, which takes a function and a
list of values, and applies the function to each value of the list:
Let's say we want to increment each value in a list. You may decide to define
your own function, which adds one to its argument, and map that function
over your list
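A sketch (addOne is the assumed helper the text describes, using plus from the previous sketch):

    addOne :: Int -> Int
    addOne x = plus 1 x

    incremented :: [Int]
    incremented = map addOne [1, 2, 3]   -- [2,3,4]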
but if you have another look at addOne's definition, with parentheses added for
emphasis:
The function addOne, when applied to any value x, is the same as
the partially applied function plus 1 applied to x. This means the functions addOne
and plus 1 are identical, and we can avoid defining a new function by just
replacing addOne with plus 1, remembering to use parentheses to isolate plus 1
as a subexpression:
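A sketch of the result:

    incremented' :: [Int]
    incremented' = map (plus 1) [1, 2, 3]   -- [2,3,4]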
Section 73.3: Parentheses in a basic function call
For a C-style function call, e.g.
One might think that because the compiler knows that take is a function, it
would be able to know that you want to apply it to the arguments b and c, and
pass its result to plus.
However, in Haskell, functions often take other functions as arguments, and
little actual distinction is made between functions and other values; and so
the compiler cannot assume your intention simply because take is a function.
And so, the last example is analogous to the following C function call:
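A sketch of the contrast, with the analogous C call spelled out in a comment (sum stands in for the original example's function, an illustrative assumption):

    -- Parentheses make the inner call a single argument:
    good :: Int
    good = sum (take 2 [10, 20, 30])   -- take 2 [10,20,30] == [10,20]; result 30

    -- Writing `sum take 2 [10, 20, 30]` instead would apply sum to three
    -- arguments (the function take, the number 2, and the list), a type
    -- error -- analogous to the C call sum(take, 2, list).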
Chapter 74: Logging
Logging in Haskell is usually achieved through functions in the IO monad,
and so is limited to non-pure functions or "IO actions".
There are several ways to log information in a Haskell program: from
putStrLn (or print), to libraries such as hslogger or through Debug.Trace.
Section 74.1: Logging with hslogger
The hslogger module provides a similar API to Python's logging framework, and
supports hierarchically named loggers, levels and redirection to handles
outside of stdout and stderr.
By default, all messages of level WARNING and above are sent to stderr and all
other log levels are ignored.
Each Logger has a name, and they are arranged hierarchically, so MyProgram is
a parent of MyProgram.Module.
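A minimal sketch using hslogger's System.Log.Logger (the logger names are illustrative):

    import System.Log.Logger (Priority (DEBUG), infoM, setLevel,
                              updateGlobalLogger, warningM)

    main :: IO ()
    main = do
      -- Lower the threshold for our logger so INFO messages are shown too.
      updateGlobalLogger "MyProgram" (setLevel DEBUG)
      infoM    "MyProgram.Module" "an informational message"
      warningM "MyProgram.Module" "a warning (sent to stderr by default)"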
Chapter 75: Attoparsec
Type            Detail
Parser i a      The core type for representing a parser. i is the string type, e.g. ByteString.
IResult i r     The result of a parse, with Fail i [String] String, Partial (i -> IResult i r) and Done i r as constructors.
We can parse the header from a bitmap file easily. Here, we have 4 parser
functions that represent the header section from a bitmap file:
Firstly, the DIB section can be read by taking the first 2 bytes
Similarly, the size of the bitmap, the reserved sections and the pixel offset
can be read easily too:
which can then be combined into a larger parser function for the entire
header:
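A hedged sketch of such parsers (the field names, header layout comments, and the little-endian helper are assumptions; string and take are provided by attoparsec):

    {-# LANGUAGE OverloadedStrings #-}
    import           Data.Attoparsec.ByteString (Parser, string, take)
    import           Data.Bits                  (shiftL, (.|.))
    import qualified Data.ByteString            as BS
    import           Data.Word                  (Word32)
    import           Prelude                    hiding (take)

    -- Read a 4-byte little-endian word.
    anyWord32le :: Parser Word32
    anyWord32le = do
      bytes <- take 4
      pure (foldr (\b acc -> (acc `shiftL` 8) .|. fromIntegral b) 0 (BS.unpack bytes))

    data BitmapHeader = BitmapHeader
      { bmpSize   :: Word32
      , bmpOffset :: Word32
      } deriving Show

    bitmapHeader :: Parser BitmapHeader
    bitmapHeader = do
      _      <- string "BM"   -- the 2-byte magic
      size   <- anyWord32le   -- bitmap size
      _      <- take 4        -- reserved
      offset <- anyWord32le   -- pixel data offset
      pure (BitmapHeader size offset)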
Chapter 76: zipWithM
zipWithM is to zipWith as mapM is to map: it lets you combine two lists using a
monadic function.
From the module Control.Monad:
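Its signature (in recent versions of base):

    zipWithM :: Applicative m => (a -> b -> m c) -> [a] -> [b] -> m [c]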
Section 76.1: Calculating sales prices
Suppose you want to see if a certain set of sales prices makes sense for a
store.
The items originally cost $5, so you don't want to accept the sale if the sales
price is less for any of them, but you do want to know what the new price is
otherwise.
Calculating one price is easy: you calculate the sales price, and return Nothing
if you don't get a profit:
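A hedged sketch of the two steps the text describes (the function names and the exact pricing formula are assumptions):

    import Control.Monad (zipWithM)

    cost :: Double
    cost = 5

    -- One item: apply the discount; reject the price if it drops below cost.
    salePrice :: Double -> Double -> Maybe Double
    salePrice price discount
      | discounted >= cost = Just discounted
      | otherwise          = Nothing
      where discounted = price * (1 - discount)

    -- All items at once: Nothing if any single price is rejected.
    checkSale :: [Double] -> [Double] -> Maybe [Double]
    checkSale = zipWithM salePrice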
This will return Nothing if any of the sales prices are below $5.
Chapter 77: Profunctor
Profunctor is a typeclass provided by the profunctors package in Data.Profunctor.
See the "Remarks" section for a full
explanation.
Section 77.1: (->) Profunctor
(->) is a simple example of a profunctor: the left argument is the input to a
function, and the right argument is the same as the reader functor instance.
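For example, dimap pre-composes on the input and post-composes on the output (a GHCi sketch):

    ghci> import Data.Profunctor
    ghci> let f = dimap (+1) (*2) id :: Int -> Int   -- \x -> (x + 1) * 2
    ghci> f 3
    8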
Chapter 78: Type Application
TypeApplications are an alternative to type annotations when the compiler
struggles to infer types for a given expression.
This series of examples will explain the purpose of the TypeApplications
extension and how to use it.
Don't forget to enable the extension by placing {-# LANGUAGE TypeApplications #-}
at the top of your source file.
Section 78.1: Avoiding type annotations
We use type annotations to avoid ambiguity. Type applications can be used
for the same purpose. For example
This code has an ambiguity error. We know that a has a Num instance, and in
order to print it we know it needs a
Show instance. This could work if a was, for example, an Int, so to fix the error
we can add a type annotation
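A sketch of the kind of code involved (the read-based example is an assumption, not the original listing):

    {-# LANGUAGE TypeApplications #-}

    -- main = print (read "5")          -- ambiguous: which Read/Show type?
    -- main = print (read "5" :: Int)   -- fixed with a type annotation
    main :: IO ()
    main = print (read @Int "5")        -- fixed with a type application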
To understand what this means we need to look at the type signature of print.
The function takes one parameter of type a, but another way to look at it is
that it actually takes two parameters. The first one is a type parameter, the
second one is a value whose type is the first parameter.
The main difference between value parameters and the type parameters is
that the latter ones are implicitly provided to functions when we call them.
Who provides them? The type inference algorithm! What TypeApplications let us
do is give those type parameters explicitly. This is especially useful when the
type inference can't determine the correct type.
So to break down the above example
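A sketch of the breakdown, continuing the read example above:

    -- print      :: forall a. Show a => a -> IO ()
    -- print @Int ::                      Int -> IO ()   -- type parameter given explicitly
    main :: IO ()
    main = print @Int (read @Int "5")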
Section 78.2: Type applications in other languages
If you're familiar with languages like Java, C# or C++ and the concept of
generics/templates then this comparison might be useful for you.
Say we have a generic function in C#
To call this function with a float we can do DoNothing(5.0f), or if we want to
be explicit we can say DoNothing<float>(5.0f). That part inside of the angle
brackets is the type application.
In Haskell it's the same, except that the type parameters are not only implicit
at call sites but also at definition sites.
The problem is that the size should be constant for every value of that type.
We don't actually want the sizeOf function to depend on the value of a, but
only on its type. Without type applications, the best solution we had was the
Proxy type, defined like this:
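A sketch of that workaround (the SizeOf class follows the text; Proxy is as in Data.Proxy):

    data Proxy a = Proxy

    class SizeOf a where
      sizeOf :: Proxy a -> Int   -- the argument exists only to pin down a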
Now you might be wondering, why not drop the first argument altogether?
The type of our function would then just be sizeOf :: Int or, to be more
precise (because it is a method of a class), sizeOf :: SizeOf a => Int,
or to be even more explicit, sizeOf :: forall a. SizeOf a => Int.
The problem is type inference. If I write sizeOf somewhere, the inference
algorithm only knows that I expect an Int. It has no idea what type I want to
substitute for a. Because of this, the definition gets rejected by the compiler
unless you have the {-# LANGUAGE AllowAmbiguousTypes #-} extension enabled. In
that case the definition compiles; it just can't be used anywhere without an
ambiguity error.
Luckily, the introduction of type applications saves the day! Now
we can write sizeOf @Int, explicitly saying that a is Int. Type applications
allow us to provide a type parameter, even if it doesn't appear in the actual
parameters of the function!
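A sketch tying it together (the instances and their sizes are purely illustrative):

    {-# LANGUAGE AllowAmbiguousTypes, TypeApplications #-}

    class SizeOf a where
      sizeOf :: Int              -- no value parameter mentions a

    instance SizeOf Int  where sizeOf = 8
    instance SizeOf Char where sizeOf = 4

    main :: IO ()
    main = print (sizeOf @Int, sizeOf @Char)   -- (8,4)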