Lecture15 - Lambda Calculus II
Wayne Snyder
Computer Science Department
Boston University
Intuitively: change the bound variable x and every occurrence of an x corresponding to this
binding to a new variable (your choice, but make sure it doesn't conflict with any other
variables, either bound or free).
( "x. E ) F →* E[x := F]
where the term ( "x. E ) has undergone alpha-conversion as necessary to prevent free variable
capture when making the substitution of F for x in E.
Examples:
( "x. ( "y. x ("x . x ( y x ) ) ) ) y → 3 ( "x. ( "y′. x ("x . x ( y' x ) ) ) ) y → * ( "y'. y ("x . x (y' x ))))
A term that contains no redexes is in normal form; the goal of beta-reduction is to reach a
normal form. But this may not be possible! Beta-reductions may not terminate:
( λx. x x ) ( λx. x x ) → ( λx. x x ) ( λx. x x ) → .....
(1) Which one to reduce first? In general, what is our overall strategy for choosing redexes?
(2) Does it matter which strategy we use? What are the consequences of choosing a strategy?
Reduction Strategies
(A) Normal or Leftmost Order: "The leftmost, outermost redex is always reduced first. That is,
..... the arguments are substituted into the body of an abstraction before the arguments are
reduced." (Wikipedia)
(B) Applicative or Strict Order: "The rightmost, innermost redex is always reduced first.
Intuitively this means a function's arguments are always reduced before the function itself.
Applicative order always attempts to apply functions to normal forms, even when this is not
possible." (Wikipedia)
Issue (2): What are the consequences of choosing one strategy over the other?
(i) If there is any reduction sequence which terminates in a normal form, Normal Order will
find one (which is why it is called "normal" order, since it finds normal forms).
(ii) Applicative Order may not terminate, even when there does exist some terminating
sequence. Example:
("x. y) (("x. x x) ("x. x x)) → ) ("x. y) (("x. x x) ("x. x x)) → ) ..... -- applicative order
(iii) Beta-reduction is confluent, so when normal forms exist, they are unique:
if E →* F and E →* G, then there is some H with F →* H and G →* H.
Lambda Calculus: Properties of Beta-Reduction
Punchline: Normal Order will find a unique normal form when one exists; Applicative Order
may not terminate, even when a normal form exists, but if it does, then that normal form is
unique.
Evaluation Order in Programming Languages
(\x -> x * 3) (5 + 2)
= (\x -> x * 3) 7        -- the argument (5 + 2) is evaluated first
= 7 * 3
= 21
Most languages use applicative/strict evaluation for function calls, so for
example in Python we would have the following sequence of events:
def ohNo(x):
    return ohNo(x)

def test(x):
    if x > 0:
        return "Positive!"
    else:
        return ohNo(x)

def cond(A, B, C):
    if A:
        return B
    else:
        return C
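A runnable sketch of the consequence: under Python's strict (applicative) evaluation, every argument is evaluated before the call, so passing `ohNo(0)` diverges even when that branch is never selected. Wrapping the branches in lambdas simulates normal-order evaluation (the `condLazy` name is my own, not from the lecture):

```python
import sys
sys.setrecursionlimit(1000)   # keep the inevitable failure quick

def ohNo(x):
    return ohNo(x)            # never terminates under its own power

def cond(A, B, C):
    if A:
        return B
    else:
        return C

# Strict evaluation: both branches are evaluated before cond's body runs,
# so this fails even though the ohNo branch is never selected.
try:
    cond(True, "Positive!", ohNo(0))
except RecursionError:
    print("ohNo(0) diverged before cond ever ran")

# Simulated normal order: pass unevaluated lambdas, force only the chosen one.
def condLazy(A, B, C):
    return B() if A else C()

print(condLazy(True, lambda: "Positive!", lambda: ohNo(0)))  # prints Positive!
```

Python raises `RecursionError` instead of looping forever, but the point is the same: the diverging argument is reached before `cond` can discard it.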
times3 x = x * 3
plus2 y = y + 2

times3 (plus2 5)
= (((\y -> y + 2) 5) * 3)
= ((5 + 2) * 3)
= (7 * 3)
= 21
square3 x = x * x * 3
plus2 y = y + 2

square3 (plus2 5)
= (((\y -> y + 2) 5) * ((\y -> y + 2) 5) * 3)
= ((5 + 2) * ((\y -> y + 2) 5) * 3)
= (7 * ((\y -> y + 2) 5) * 3)
= (7 * (5 + 2) * 3)
= (7 * 7 * 3)
= (49 * 3) = 147

Note that the unevaluated argument is copied into the body, so it is evaluated twice!
Lazy evaluation fixes this by creating a temporary variable bound to the
expression (called a "thunk") which is then only evaluated once:
square3 x = x * x * 3
plus2 y = y + 2

square3 (plus2 5)
= (thunk * thunk * 3)   where thunk = ((\y -> y + 2) 5)
= (thunk * thunk * 3)   where thunk = (5 + 2)
= (thunk * thunk * 3)   where thunk = 7
= (7 * thunk * 3)       where thunk = 7
= (7 * 7 * 3) = (49 * 3) = 147

This is the programming language version of "memoizing": the thunk is evaluated at most once, and its value is shared by every use.
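One way to sketch such a memoized thunk is as a small Python class (the `Thunk` name and this design are my own illustration, not how Haskell's runtime is actually written):

```python
class Thunk:
    """A delayed computation whose result is cached after the first force."""
    def __init__(self, fn):
        self.fn = fn
        self.done = False
        self.value = None

    def force(self):
        if not self.done:
            self.value = self.fn()
            self.done = True
            self.fn = None          # the closure is no longer needed
        return self.value

calls = []
def plus2(y):
    calls.append(y)                 # record each real evaluation
    return y + 2

# square3 (plus2 5) with a shared thunk for the argument:
thunk = Thunk(lambda: plus2(5))
result = thunk.force() * thunk.force() * 3
print(result, len(calls))           # 147 1 -- plus2 ran only once
```

The first `force` does the work and caches `7`; the second just returns the cached value, matching the reduction sequence above.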
Recall: Let and Where Expressions in Haskell
let and where expressions allow you to create local variables and avoid having to
write lots of helper functions.
The parameters in a lambda expression are local variables which only have meaning
inside the body of the lambda expression:
cylinder r h =
    let pi = 3.1415
        sideArea = 2 * pi * r * h
        topArea = pi * r^2
    in  sideArea + 2 * topArea

cylinder r h =
    let sideArea = 2 * pi * r * h
        pi = 3.1415
        topArea = pi * r^2
    in  sideArea + 2 * topArea

cylinder r h =
    let sideArea = 2 * pi * r * h
        topArea = pi * r^2
        pi = 3.1415
    in  sideArea + 2 * topArea

All these do exactly the same thing! This is another example of how Haskell follows mathematical practice, not imperative programming.
Let and where in detail: how are bindings evaluated?
When we think of bindings as simultaneous equations, we see how Haskell
interprets equations in let and where:
x = 2 * z                         x = 10
y = 4        is equivalent to     y = 4
z = y + 1                         z = 5

test = let x = 2 * z
           y = 4
           z = y + 1
       in (x, y, z)

test2 = (x, y, z) where
    x = 2 * z
    y = 4
    z = y + 1

Main> :r
[1 of 1] Compiling Main ( Main.hs, interpreted )
Ok, one module loaded.
Main> test
(10,4,5)
Main> test2
(10,4,5)
The same thing is true of equations in your code:
Main> :r
[1 of 1] Compiling Main ( Main.hs, interpreted )
Ok, one module loaded.
Main> x
10
Main> y
4
Main> z
5
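By contrast, a strict, sequential language executes bindings in order rather than solving them as simultaneous equations. A small illustration of the difference in Python:

```python
# Using z before it is bound is an error in Python:
try:
    x = 2 * z
except NameError as e:
    print("NameError:", e)

# To get the Haskell result we must order the equations dependency-first:
y = 4
z = y + 1
x = 2 * z
print((x, y, z))    # (10, 4, 5)
```

In Haskell the order of the three equations is irrelevant; in Python it determines whether the program runs at all.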
This leads to the following behavior with sets of bindings that have no solution as a
set of simultaneous equations:
Main> x = x + 1
Main> "NOOOO, DONT DO IT!!!!!"
"NOOOO, DONT DO IT!!!!!"
Main> x
..... (GHCi never returns)

Because evaluation is lazy, demanding x unfolds the equation forever:

x
(x + 1)
((x + 1) + 1)
(((x + 1) + 1) + 1)
.....

If strict evaluation were being used, then x + 1 would be evaluated first, and x would be reported as unbound (since the binding to x has not yet been made).
On the other hand, when the equations do have a solution, demanding just one of the bound variables works fine:

test = let x = 2 * z
y = 4
z = y + 1
in x