
LISP Tutorial 1: Basic LISP Programming
LISP Expressions
When you start up the Common LISP environment, you should see a prompt, which means that LISP is
waiting for you to enter a LISP expression. In the environment I am using, it looks like the following:

USER(1):

The Common LISP environment follows the algorithm below when interacting with users:

loop
  read in an expression from the console;
  evaluate the expression;
  print the result of evaluation to the console;
end loop.

Common LISP reads in an expression, evaluates it, and then prints out the result. For example, if you want
to compute the value of (2 * cos(0) * (4 + 6)), you type in:

USER(1): (* 2 (cos 0) (+ 4 6))

Common LISP replies:

20.0

before prompting you to enter the next expression. Several things are worth noting:

LISP expressions are composed of forms. The most common LISP form is function application. LISP
represents a function call f(x) as (f x). For example, cos(0) is written as (cos 0).
LISP expressions are case-insensitive. It makes no difference whether we type (cos 0) or (COS 0).
"*" is the name of the multiplication function, which returns the product of its arguments. Similarly, "+" is the name of the addition function that returns the sum of its arguments.
Some functions, like "+" and "*", could take an arbitrary number of arguments. In our example, "*"
took three arguments. It could as well take 2 arguments, as in "(* 2 3)", or 4 arguments, as in "(* 2
3 4 5)".
In general, a function application form looks like (function argument1 argument2 ... argumentn).
As in many programming languages (e.g. C/C++), LISP evaluates function calls in applicative order,
which means that all the argument forms are evaluated before the function is invoked. That is to say,
the argument forms (cos 0) and (+ 4 6) are respectively evaluated to the values 1 and 10 before
they are passed as arguments to the * function. Some other forms, like the conditionals we will see
later, are not evaluated in applicative order.
Numeric values like 4 and 6 are called self-evaluating forms: they evaluate to themselves. To evaluate
(+ 4 6) in applicative order, the forms 4 and 6 are respectively evaluated to the values 4 and 6 before
they are passed as arguments to the + function.

Complex arithmetic expressions can be constructed from built-in functions like the following:

Numeric Functions Meaning


(+ x1 x2 ... xn) The sum of x1, x2, ..., xn
(* x1 x2 ... xn) The product of x1, x2, ..., xn
(- x y) Subtract y from x
(/ x y) Divide x by y
(rem x y) The remainder of dividing x by y
(abs x) The absolute value of x
(max x1 x2 ... xn) The maximum of x1, x2, ..., xn
(min x1 x2 ... xn) The minimum of x1, x2, ..., xn
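
For instance, a few of these functions in action (the prompt numbers below are only illustrative):

USER(2): (max 3 7 2)
7
USER(3): (rem 7 2)
1
USER(4): (abs -4)
4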

Common LISP has a rich set of pre-defined numerical functions. For a complete coverage, consult Chapter
12 of the book, Common LISP, The Language (2nd Edition) (CLTL2) by Guy Steele. In general, we will not
be able to cover all aspects of Common LISP in this tutorial. Adventurous readers should consult CLTL2
frequently for more in-depth explanation of various features of the language.

Exercise: Look up pages 376-378 of CLTL2 and find out what the functions floor and ceiling are for.
Then, find out the subtle difference between mod and rem.
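
As a hint, the two functions agree on non-negative arguments but differ when the first argument is negative; a transcript such as the following (prompt numbers illustrative) shows the difference:

USER(5): (rem -7 2)
-1
USER(6): (mod -7 2)
1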

Defining Functions
Evaluating expressions is not very interesting. We would like to build expression abstractions that could
be reused in the future. For example, we could type in the following:

USER(2): (defun double (x) (* x 2))

DOUBLE

In the above, we define a function named double, which returns two times the value of its input argument
x. We can then test-drive the function as below:

USER(3): (double 3)

6
USER(4): (double 7)
14

Editing, Loading and Compiling LISP Programs


Most of the functions we would like to write are going to be a lot longer than the double function. When
working with complex programs, it is usually desirable to edit the program with an editor, fully debug the
code, and then compile it for faster performance. Use your favorite text editor (mine is emacs) to key in the
following function definition:

;;; testing.lisp
;;; by Philip Fong
;;;
;;; Introductory comments are preceded by ";;;"
;;; Function headers are preceded by ";;"
;;; Inline comments are introduced by ";"
;;;

;;
;; Triple the value of a number
;;

(defun triple (X)
  "Compute three times X."  ; Inline comments can
  (* 3 X))                  ; be placed here.

;;
;; Negate the sign of a number
;;

(defun negate (X)
  "Negate the value of X."  ; This is a documentation string.
  (- X))

Save the above in the file testing.lisp. Now load the definition into the LISP environment by typing:

USER(5): (load "testing.lisp")


; Loading ./testing.lisp
T

Let us try to see if they are working properly.

USER(6): (triple 2)
6
USER(7): (negate 3)

-3

When functions are fully debugged, we can also compile them into binaries:

USER(8): (compile-file "testing.lisp")

Depending on whether your code is well-formed, and what system you are using, some compilation
messages will be generated. The compiled code can be loaded into the LISP environment later by using
the following:

USER(9): (load "testing")

; Fast loading ./testing.fasl

Control Structures: Recursions and Conditionals


Now that we are equipped with all the tools for developing LISP programs, let us venture into something
more interesting. Consider the definition of factorials:
n! = 1 if n = 1
n! = n * (n - 1)! if n > 1
We can implement a function to compute factorials using recursion:

(defun factorial (N)
  "Compute the factorial of N."
  (if (= N 1)
      1
      (* N (factorial (- N 1)))))
The if form checks if N is one, and returns one if that is the case, or else returns N * (N - 1)!. Several
points are worth noting:

The condition expression (= N 1) is a relational expression. It returns boolean values T or NIL. In


fact, LISP treats NIL as false, and everything else as true. Other relational operators include the
following:

Relational Operators Meaning


(= x y) x is equal to y
(/= x y) x is not equal to y
(< x y) x is less than y
(> x y) x is greater than y
(<= x y) x is no greater than y
(>= x y) x is no less than y

The if form is not a strict function (strict functions evaluate their arguments in applicative order).
Instead, the if form evaluates the condition (= N 1) before further evaluating the other two
arguments. If the condition evaluates to true, then only the second argument is evaluated, and its
value is returned as the value of the if form. Otherwise, the third argument is evaluated, and its value
is returned. Forms that are not strict functions are called special forms.
The function is recursive. The definition of factorial involves invocation of itself. Recursion is, for
now, our only mechanism for producing looping behavior. Specifically, the kind of recursion we are
looking at is called linear recursion, in which the function may make at most one recursive call from
any level of invocation.

To better understand the last point, we can make use of the debugging facility trace (do not compile your
code if you want to use trace):

USER(11): (trace factorial)

(FACTORIAL)
USER(12): (factorial 4)

0: (FACTORIAL 4)

1: (FACTORIAL 3)
2: (FACTORIAL 2)

3: (FACTORIAL 1)

3: returned 1
2: returned 2

1: returned 6

0: returned 24
24

Tracing factorial allows us to examine the recursive invocation of the function. As you can see, at most
one recursive call is made from each level of invocation.

Exercise: The N'th triangular number is defined to be 1 + 2 + 3 + ... + N. Alternatively, we could give a
recursive definition of triangular number as follows:
T(n) = 1 if n = 1
T(n) = n + T(n-1) if n > 1
Use the recursive definition to help you implement a linearly recursive function (triangular N) that
returns the N'th triangular number. Enter your function definition into a text file. Then load it into LISP.
Trace the execution of (triangular 6).
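
One possible solution sketch (try writing your own version first), following the same linearly recursive pattern as factorial:

(defun triangular (N)
  "Compute the N'th triangular number."
  (if (= N 1)
      1
      (+ N (triangular (- N 1)))))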

Exercise: Write down a recursive definition of BE (assuming that both B and E are non-negative integers).
Then implement a linearly recursive function (power B E) that computes BE. Enter your function
definition into a text file. Then load it into LISP. Trace the execution of (power 2 6).
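
Again, one possible sketch, assuming the recursive definition B^0 = 1 and B^E = B * B^(E-1) for E > 0:

(defun power (B E)
  "Raise B to the power E."
  (if (= E 0)
      1
      (* B (power B (- E 1)))))
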
Multiple Recursions
Recall the definition of Fibonacci numbers:
Fib(n) = 1 for n = 0 or n = 1
Fib(n) = Fib(n-1) + Fib(n-2) for n > 1
This definition can be directly translated to the following LISP code:

(defun fibonacci (N)
  "Compute the N'th Fibonacci number."
  (if (or (zerop N) (= N 1))
      1
      (+ (fibonacci (- N 1)) (fibonacci (- N 2)))))

Again, several observations can be made. First, the function call (zerop N) tests if N is zero. It is merely a
shorthand for (= N 0). As such, zerop returns either T or NIL. We call such a boolean function a
predicate, as indicated by the suffix p. Some other built-in shorthands and predicates are the following:

Shorthand Meaning
(1+ x) (+ x 1)
(1- x) (- x 1)
(zerop x) (= x 0)
(plusp x) (> x 0)
(minusp x) (< x 0)
(evenp x) (= (rem x 2) 0)
(oddp x) (/= (rem x 2) 0)

Second, the or form is a logical operator. Like if, or is not a strict function. It evaluates its arguments
from left to right, returning non-NIL immediately if it encounters an argument that evaluates to non-NIL. It
evaluates to NIL if all tests fail. For example, in the expression (or t (= 1 1)), the second argument (=
1 1) will not be evaluated. Similar logical connectives are listed below:

Logical Operators Meaning


(or x1 x2 ... xn) Logical or
(and x1 x2 ... xn) Logical and
(not x) Logical negation

Third, the function definition contains two self references. It first recursively evaluates (fibonacci (- N
1)) to compute Fib(N-1), then evaluates (fibonacci (- N 2)) to obtain Fib(N-2), and lastly return their
sum. This kind of recursive definition is called double recursion (more generally, multiple recursion).
Tracing the function yields the following:

USER(20): (fibonacci 3)

0: (FIBONACCI 3)
1: (FIBONACCI 2)

2: (FIBONACCI 1)
2: returned 1

2: (FIBONACCI 0)

2: returned 1

1: returned 2

1: (FIBONACCI 1)

1: returned 1

0: returned 3
3
Note that in each level, there could be up to two recursive invocations.

Exercise: The Binomial Coefficient B(n, r) is the coefficient of the term x^r in the binomial expansion of
(1 + x)^n. For example, B(4, 2) = 6 because (1+x)^4 = 1 + 4x + 6x^2 + 4x^3 + x^4. The Binomial Coefficient can
be computed using the Pascal Triangle formula:
B(n, r) = 1 if r = 0 or r = n
B(n, r) = B(n-1, r-1) + B(n-1, r) otherwise
Implement a doubly recursive function (binomial N R) that computes the binomial coefficient B(N, R).
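
One possible solution sketch, translating the Pascal Triangle formula directly into a doubly recursive function:

(defun binomial (N R)
  "Compute the binomial coefficient B(N, R)."
  (if (or (= R 0) (= R N))
      1
      (+ (binomial (- N 1) (- R 1))
         (binomial (- N 1) R))))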

Some beginners might find nested function calls like the following very difficult to understand:

(+ (fibonacci (- N 1)) (fibonacci (- N 2)))

To make such expressions easier to write and comprehend, one could define local name bindings to
represent intermediate results:

(let
  ((F1 (fibonacci (- N 1)))
   (F2 (fibonacci (- N 2))))
  (+ F1 F2))

The let special form above defines two local variables, F1 and F2, which bind to Fib(N-1) and Fib(N-2)
respectively. Under these local bindings, let evaluates (+ F1 F2). The fibonacci function can thus be
rewritten as follows:

(defun fibonacci (N)
  "Compute the N'th Fibonacci number."
  (if (or (zerop N) (= N 1))
      1
      (let
        ((F1 (fibonacci (- N 1)))
         (F2 (fibonacci (- N 2))))
        (+ F1 F2))))

Notice that let creates all bindings in parallel. That is, both (fibonacci (- N 1)) and (fibonacci (- N
2)) are evaluated first, and then they are bound to F1 and F2. This means that the following LISP code will
not work:

(let
  ((x 1)
   (y (* x 2)))
  (+ x y))

LISP will attempt to evaluate the right hand sides first before the bindings are established. So, the
expression (* x 2) is evaluated before the binding of x is available. To perform sequential binding, use
the let* form instead:

(let*
  ((x 1)
   (y (* x 2)))
  (+ x y))

LISP will bind 1 to x, then evaluate (* x 2) before the value is bound to y.

Lists
Numeric values are not the only type of data LISP supports. LISP is designed for symbolic computing. The
fundamental LISP data structure for supporting symbolic manipulation is the list. In fact, LISP stands for
"LISt Processing."

Lists are containers that support sequential traversal. A list is also a recursive data structure: its definition
is recursive. As such, most of its traversal algorithms are recursive functions. In order to better
understand a recursive abstract data type and prepare oneself to develop recursive operations on the data
type, one should present the data type in terms of its constructors, selectors and recognizers.

Constructors are forms that create new instances of a data type (possibly out of some simpler
components). A list is obtained by evaluating one of the following constructors:

1. nil: Evaluating nil creates an empty list;


2. (cons x L): Given a LISP object x and a list L, evaluating (cons x L) creates a list containing x
followed by the elements in L.

Notice that the above definition is inherently recursive. For example, to construct a list containing 1
followed by 2, we could type in the expression:

USER(21): (cons 1 (cons 2 nil))


(1 2)

LISP replies by printing (1 2), which is a more readable representation of a list containing 1 followed by
2. To understand why the above works, notice that nil is a list (an empty one), and thus (cons 2 nil) is
also a list (a list containing 2 followed by nothing). Applying the second constructor again, we see that
(cons 1 (cons 2 nil)) is also a list (a list containing 1 followed by 2 followed by nothing).

Typing cons expressions could be tedious. If we already know all the elements in a list, we could enter our
list as list literals. For example, to enter a list containing all prime numbers less than 20, we could type in
the following expression:

USER(22): (quote (2 3 5 7 11 13 17 19))


(2 3 5 7 11 13 17 19)

Notice that we have quoted the list using the quote special form. This is necessary because, without the
quote, LISP would interpret the expression (2 3 5 7 11 13 17 19) as a function call to a function with
name "2" and arguments 3, 5, ..., 19. The quote is just a syntactic device that instructs LISP not to evaluate
the form in applicative order, but rather treat it as a literal. Since quoting is used frequently in LISP
programs, there is a shorthand for quote:

USER(23): '(2 3 5 7 11 13 17 19)

(2 3 5 7 11 13 17 19)

The quote symbol ' is nothing but a syntactic shorthand for (quote ...).

The second ingredient of an abstract data type are its selectors. Given a composite object constructed out
of several components, a selector form returns one of its components. Specifically, suppose a list L1 is
constructed by evaluating (cons x L2), where x is a LISP object and L2 is a list. Then, the selector forms
(first L1) and (rest L1) evaluate to x and L2 respectively, as the following examples illustrate:

USER(24): (first '(2 4 8))

2

USER(25): (rest '(2 4 8))

(4 8)

USER(26): (first (rest '(2 4 8)))


4

USER(27): (rest (rest '(2 4 8)))

(8)

USER(28): (rest (rest (rest '(2 4 8))))

NIL

Finally, we look at recognizers, expressions that test how an object is constructed. Corresponding to each
constructor of a data type is a recognizer. In the case of list, they are null for nil and consp for cons.
Given a list L, (null L) returns t iff L is nil, and (consp L) returns t iff L is constructed from cons.

USER(29): (null nil)

T
USER(30): (null '(1 2 3))

NIL

USER(31): (consp nil)

NIL

USER(32): (consp '(1 2 3))

T


Notice that, since lists have only two constructors, the recognizers are complementary. Therefore, we
usually need only one of them. In our following discussion, we use only null.

Structural Recursion with Lists


As we have promised, understanding how the constructors, selectors and recognizers of lists work helps
us to develop recursive functions that traverse a list. Let us begin with an example. The LISP built-in
function list-length counts the number of elements in a list. For example,

USER(33): (list-length '(2 3 5 7 11 13 17 19))


8

Let us try to see how such a function can be implemented recursively. A given list L is created by either
one of the two constructors, namely nil or a cons:

Case 1: L is nil.
The length of an empty list is zero.
Case 2: L is constructed by cons.
Then L is composed of two parts, namely, (first L) and (rest L). In such case, the length of L can
be obtained inductively by adding 1 to the length of (rest L).

Formally, we could implement our own version of list-length as follows:

(defun recursive-list-length (L)
  "A recursive implementation of list-length."
  (if (null L)
      0
      (1+ (recursive-list-length (rest L)))))

Here, we use the recognizer null to differentiate how L is constructed. In case L is nil, we return 0 as its
length. Otherwise, L is a cons, and we return 1 plus the length of (rest L). Recall that (1+ n) is simply a
shorthand for (+ n 1).

Again, it is instructive to use the trace facilities to examine the unfolding of recursive invocations:

USER(40): (trace recursive-list-length)


(RECURSIVE-LIST-LENGTH)

USER(41): (recursive-list-length '(2 3 5 7 11 13 17 19))

0: (RECURSIVE-LIST-LENGTH (2 3 5 7 11 13 17 19))

1: (RECURSIVE-LIST-LENGTH (3 5 7 11 13 17 19))

2: (RECURSIVE-LIST-LENGTH (5 7 11 13 17 19))

3: (RECURSIVE-LIST-LENGTH (7 11 13 17 19))
4: (RECURSIVE-LIST-LENGTH (11 13 17 19))

5: (RECURSIVE-LIST-LENGTH (13 17 19))

6: (RECURSIVE-LIST-LENGTH (17 19))

7: (RECURSIVE-LIST-LENGTH (19))

8: (RECURSIVE-LIST-LENGTH NIL)

8: returned 0

7: returned 1
6: returned 2

5: returned 3

4: returned 4

3: returned 5

2: returned 6

1: returned 7
0: returned 8

The kind of recursion we see here is called structural recursion. Its standard pattern is as follows. To
process an instance X of a recursive data type:

1. Use the recognizers to determine how X is created (i.e. which constructor creates it). In our example,
we use null to decide if a list is created by nil or cons.
2. For instances that are atomic (i.e. those created by constructors with no components), return a trivial
value. For example, in the case when a list is nil, we return zero as its length.
3. If the instance is composite, then use the selectors to extract its components. In our example, we use
first and rest to extract the two components of a nonempty list.
4. Following that, we apply recursion on one or more components of X. For instance, we recursively
invoked recursive-list-length on (rest L).
5. Finally, we use either the constructors or some other functions to combine the result of the recursive
calls, yielding the value of the function. In the case of recursive-list-length, we return one plus
the result of the recursive call.

Exercise: Implement a linearly recursive function (sum L) which computes the sum of all numbers in a
list L. Compare your solution with the standard pattern of structural recursion.
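
One possible solution sketch, which you can compare against your own attempt, returns 0 for the empty list and otherwise adds the first element to the sum of the rest:

(defun sum (L)
  "Compute the sum of all numbers in list L."
  (if (null L)
      0
      (+ (first L) (sum (rest L)))))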

Sometimes, long traces like the one for list-length may be difficult to read on a terminal screen. Common
LISP allows you to capture screen I/O into a file so that you can, for example, produce a hard copy for
more comfortable reading. To capture the trace of executing (recursive-list-length '(2 3 5 7 11 13
17 19)), we use the dribble command:

USER(42): (dribble "output.txt")

dribbling to file "output.txt"

NIL

USER(43): (recursive-list-length '(2 3 5 7 11 13 17 19))


0: (RECURSIVE-LIST-LENGTH (2 3 5 7 11 13 17 19))

1: (RECURSIVE-LIST-LENGTH (3 5 7 11 13 17 19))

2: (RECURSIVE-LIST-LENGTH (5 7 11 13 17 19))

3: (RECURSIVE-LIST-LENGTH (7 11 13 17 19))

4: (RECURSIVE-LIST-LENGTH (11 13 17 19))

5: (RECURSIVE-LIST-LENGTH (13 17 19))


6: (RECURSIVE-LIST-LENGTH (17 19))

7: (RECURSIVE-LIST-LENGTH (19))

8: (RECURSIVE-LIST-LENGTH NIL)

8: returned 0

7: returned 1

6: returned 2

5: returned 3
4: returned 4

3: returned 5

2: returned 6

1: returned 7

0: returned 8

8
USER(44): (dribble)

The form (dribble "output.txt") instructs Common LISP to begin capturing all terminal I/O into a file
called output.txt. The trailing (dribble) form instructs Common LISP to stop I/O capturing, and closes
the file output.txt. If we examine output.txt, we will see the following:

dribbling to file "output.txt"

NIL

USER(43): (recursive-list-length '(2 3 5 7 11 13 17 19))

0: (RECURSIVE-LIST-LENGTH (2 3 5 7 11 13 17 19))

1: (RECURSIVE-LIST-LENGTH (3 5 7 11 13 17 19))

2: (RECURSIVE-LIST-LENGTH (5 7 11 13 17 19))

3: (RECURSIVE-LIST-LENGTH (7 11 13 17 19))

4: (RECURSIVE-LIST-LENGTH (11 13 17 19))


5: (RECURSIVE-LIST-LENGTH (13 17 19))

6: (RECURSIVE-LIST-LENGTH (17 19))

7: (RECURSIVE-LIST-LENGTH (19))

8: (RECURSIVE-LIST-LENGTH NIL)

8: returned 0

7: returned 1

6: returned 2

5: returned 3

4: returned 4
3: returned 5

2: returned 6

1: returned 7

0: returned 8

8
USER(44): (dribble)

Symbols
The lists we have seen so far are lists of numbers. Another data type of LISP is symbols. A symbol is simply
a sequence of characters:

USER(45): 'a ; LISP is case-insensitive.

A

USER(46): 'A ; 'a and 'A evaluate to the same symbol.

A

USER(47): 'apple2 ; Both alphanumeric characters ...

APPLE2

USER(48): 'an-apple ; ... and symbolic characters are allowed.

AN-APPLE

USER(49): t ; Our familiar t is also a symbol.

T

USER(50): 't ; In addition, quoting is redundant for t.

T

USER(51): nil ; Our familiar nil is also a symbol.

NIL

USER(52): 'nil ; Again, it is self-evaluating.

NIL

With symbols, we can build more interesting lists:

USER(53): '(how are you today ?) ; A list of symbols.

(HOW ARE YOU TODAY ?)

USER(54): '(1 + 2 * x) ; A list of symbols and numbers.

(1 + 2 * X)

USER(55): '(pair (2 3)) ; A list containing 'pair and '(2 3).

(PAIR (2 3))

Notice that the list (pair (2 3)) has length 2:

USER(56): (recursive-list-length '(pair (2 3)))

2

Notice also the result of applying accessors:

USER(57): (first '(pair (2 3)))

PAIR

USER(58): (rest '(pair (2 3)))


((2 3))

Lists containing other lists as members are difficult to understand for beginners. Make sure you
understand the above example.

Example: nth
LISP defines a function (nth N L) that returns the N'th member of list L (assuming that the elements are
numbered from zero onwards):

USER(59): (nth 0 '(a b c d))

A

USER(60): (nth 2 '(a b c d))

C

We could implement our own version of nth by linear recursion. Given N and L, either L is nil or it is
constructed by cons.

Case 1: L is nil.
Accessing the N'th element is an undefined operation, and our implementation should arbitrarily
return nil to indicate this.
Case 2: L is constructed by a cons.
Then L has two components: (first L) and (rest L). There are two subcases: either N = 0 or N >
0:
Case 2.1: N = 0.
The zeroth element of L is simply (first L).
Case 2.2: N > 0.
The N'th member of L is exactly the (N-1)'th member of (rest L).

The following code implements our algorithm:

(defun list-nth (N L)
  "Return the N'th member of a list L."
  (if (null L)
      nil
      (if (zerop N)
          (first L)
          (list-nth (1- N) (rest L)))))

Recall that (1- N) is merely a shorthand for (- N 1). Notice that both our implementation and its
correctness argument closely follow the standard pattern of structural recursion. Tracing the execution of
the function, we get:

USER(61): (list-nth 2 '(a b c d))


0: (LIST-NTH 2 (A B C D))

1: (LIST-NTH 1 (B C D))

2: (LIST-NTH 0 (C D))

2: returned C

1: returned C

0: returned C

C
Exercise: LISP has a built-in function (last L) that returns the last cons structure in a given list L.

USER(62): (last '(a b c d))

(D)

USER(63): (last '(1 2 3))

(3)

Implement your own version of last using linear recursion. You may assume that (last nil) returns nil.
Compare your implementation with the standard pattern of structural recursion.
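
One possible solution sketch (named list-last here to avoid clashing with the built-in last):

(defun list-last (L)
  "Return the last cons structure of list L, or nil if L is empty."
  (if (null L)
      nil
      (if (null (rest L))
          L
          (list-last (rest L)))))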

Notice that we have a standard if-then-else-if structure in our implementation of list-nth. Such logic can
alternatively be implemented using the cond special form.

(defun list-nth (n L)
  "Return the n'th member of a list L."
  (cond
    ((null L) nil)
    ((zerop n) (first L))
    (t (list-nth (1- n) (rest L)))))

The cond form above is evaluated as follows. The condition (null L) is evaluated first. If the result is true,
then nil is returned. Otherwise, the condition (zerop n) is evaluated. If the condition holds, then the
value of (first L) is returned. In case neither of the conditions holds, the value of (list-nth (1- n)
(rest L)) is returned.

Exercise: Survey CLTL2 section 7.6 (pages 156-161) and find out what other conditional special forms are
available in Common LISP. Do you know when the special forms when and unless should be used instead
of if?
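
As a hint, when and unless are convenient when only one branch of a conditional is needed; for example (prompt numbers illustrative):

USER(62): (when (> 3 2) 'yes)
YES
USER(63): (unless (> 3 2) 'no)
NIL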

Example: member
LISP defines a function (member E L) that returns non-NIL if E is a member of L.

USER(64): (member 'b '(perhaps today is a good day to die)) ; test fails

NIL

USER(65): (member 'a '(perhaps today is a good day to die)) ; returns non-NIL

(A GOOD DAY TO DIE)

We implement our own recursive version as follows:

(defun list-member (E L)
  "Test if E is a member of L."
  (cond
    ((null L) nil)
    ((eq E (first L)) t)
    (t (list-member E (rest L)))))

The correctness of the above implementation is easy to justify. The list L is either constructed by nil or by
a call to cons:

Case 1: L is nil.
L is empty, and there is no way E is in L.
Case 2: L is constructed by cons
Then it has two components: (first L) and (rest L). There are two cases, either (first L) is E
itself, or it is not.
Case 2.1: E equals (first L).
This means that E is a member of L.
Case 2.2: E does not equal (first L).
Then E is a member of L iff E is a member of (rest L).

Tracing the execution of list-member, we get the following:

USER(70): (list-member 'a '(perhaps today is a good day to die))

0: (LIST-MEMBER A (PERHAPS TODAY IS A GOOD DAY TO DIE))

1: (LIST-MEMBER A (TODAY IS A GOOD DAY TO DIE))


2: (LIST-MEMBER A (IS A GOOD DAY TO DIE))

3: (LIST-MEMBER A (A GOOD DAY TO DIE))

3: returned T

2: returned T

1: returned T

0: returned T

In the implementation of list-member, the function call (eq x y) tests if two symbols are the same. In
fact, the semantics of this test determines what we mean by a member:

USER(71): (list-member '(a b) '((a a) (a b) (a c)))

0: (LIST-MEMBER (A B) ((A A) (A B) (A C)))

1: (LIST-MEMBER (A B) ((A B) (A C)))

2: (LIST-MEMBER (A B) ((A C)))

3: (LIST-MEMBER (A B) NIL)

3: returned NIL

2: returned NIL

1: returned NIL

0: returned NIL

NIL

In the example above, we would have expected a result of t. However, since '(a b) does not eq another
copy of '(a b) (they are not the same symbol), list-member returns nil. If we want to account for list
equivalence, we could have used the LISP built-in function equal instead of eq. Common LISP defines the
following set of predicates for testing equality:

(= x y) True if x and y evaluate to the same number.


(eq x y) True if x and y evaluate to the same symbol.
(eql x y) True if x and y are either = or eq.
(equal x y) True if x and y are eql or if they evaluate to the same list.
(equalp x y) To be discussed in Tutorial 4.

Exercise: What would be the behavior of list-member if we replace eq by =? By eql? By equal?

Example: append
LISP defines a function append that appends one list to another:
USER(72): (append '(a b c) '(c d e))

(A B C C D E)

We implement a recursive version of append. Suppose we are given two lists L1 and L2. L1 is either nil or
constructed by cons.

Case 1: L1 is nil.
Appending L2 to L1 simply results in L2.
Case 2: L1 is composed of two parts: (first L1) and (rest L1). If we know the result of appending
L2 to (rest L1), then we can take this result, insert (first L1) to the front, and we then have the
list we want.

Formally, we define the following function:

(defun list-append (L1 L2)
  "Append L2 to L1."
  (if (null L1)
      L2
      (cons (first L1) (list-append (rest L1) L2))))

An execution trace is the following:

USER(73): (list-append '(a b c) '(c d e))

0: (LIST-APPEND (A B C) (C D E))

1: (LIST-APPEND (B C) (C D E))

2: (LIST-APPEND (C) (C D E))


3: (LIST-APPEND NIL (C D E))

3: returned (C D E)

2: returned (C C D E)

1: returned (B C C D E)

0: returned (A B C C D E)

(A B C C D E)

Exercise: LISP defines a function (butlast L) that returns a list containing the same elements in L
except for the last one. Implement your own version of butlast using linear recursion. You may assume
that (butlast nil) returns nil.
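
One possible solution sketch (named list-butlast here to avoid clashing with the built-in butlast):

(defun list-butlast (L)
  "Return a list containing the elements of L except the last one."
  (cond
    ((null L) nil)
    ((null (rest L)) nil)
    (t (cons (first L) (list-butlast (rest L))))))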

Using Lists as Sets


Formally, lists are ordered sequences. They differ from sets in two ways:

1. Sets are unordered, but lists are ordered. (a b c) and (c b a) are two different lists.
2. An element either belongs to a set or it does not. There is no notion of multiple occurrences. Yet, a list
may contain multiple occurrences of the same element. (a b b c) and (a b c) are two different lists.

However, one may use lists to approximate sets, although the performance of such an implementation is not
the greatest.

We have already seen how we can use the built-in function member to test set membership. LISP also
defines functions like (intersection L1 L2), (union L1 L2) and (set-difference L1 L2) for boolean
operations on sets. In fact, these functions are not difficult to implement. Consider the following
implementation of set intersection:
(defun list-intersection (L1 L2)
  "Return a list containing elements belonging to both L1 and L2."
  (cond
    ((null L1) nil)
    ((member (first L1) L2)
     (cons (first L1) (list-intersection (rest L1) L2)))
    (t (list-intersection (rest L1) L2))))

The correctness of the implementation is easy to see. L1 is either an empty set (nil) or it is not:

Case 1: L1 is an empty set.
Then its intersection with L2 is obviously empty.
Case 2: L1 is not empty.
L1 has both a first component and a rest component. There are two cases: either (first L1) is a
member of L2 or it is not.
Case 2.1: (first L1) is a member of L2.
(first L1) belongs to both L1 and L2, and thus belongs to their intersection. Therefore, the
intersection of L1 and L2 is simply (first L1) plus the intersection of (rest L1) and L2.
Case 2.2: (first L1) is not a member of L2.
Since (first L1) does not belong to L2, it does not belong to the intersection of L1 and L2. As a
result, the intersection of L1 and L2 is exactly the intersection of (rest L1) and L2.

A trace of executing the function is given below:

USER(80): (trace list-intersection)


(LIST-INTERSECTION)

USER(81): (list-intersection '(1 3 5 7) '(1 2 3 4))

0: (LIST-INTERSECTION (1 3 5 7) (1 2 3 4))

1: (LIST-INTERSECTION (3 5 7) (1 2 3 4))

2: (LIST-INTERSECTION (5 7) (1 2 3 4))

3: (LIST-INTERSECTION (7) (1 2 3 4))

4: (LIST-INTERSECTION NIL (1 2 3 4))

4: returned NIL

3: returned NIL

2: returned NIL

1: returned (3)

0: returned (1 3)

(1 3)

Exercise: Give a linearly recursive implementation of union and difference.
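
Possible solution sketches, modeled after list-intersection (the names list-union and list-difference are chosen here to avoid clashing with the built-in functions):

(defun list-union (L1 L2)
  "Return a list containing the elements that belong to L1 or L2."
  (cond
    ((null L1) L2)
    ((member (first L1) L2) (list-union (rest L1) L2))
    (t (cons (first L1) (list-union (rest L1) L2)))))

(defun list-difference (L1 L2)
  "Return a list containing the elements of L1 that do not belong to L2."
  (cond
    ((null L1) nil)
    ((member (first L1) L2) (list-difference (rest L1) L2))
    (t (cons (first L1) (list-difference (rest L1) L2)))))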


LISP Tutorial 2: Advanced Functional Programming in LISP
Auxiliary Functions and Accumulator Variables
We define the reversal of a list L to be a list containing exactly the elements of L in reversed order. The
Common LISP built-in function (reverse L) returns the reversal of L:

USER(1): (reverse '(1 2 3 4))

(4 3 2 1)

USER(2): (reverse '(1 (a b) (c d) 4))

(4 (C D) (A B) 1)

USER(3): (reverse nil)


NIL

Implementing a version of reverse is not difficult, but finding an efficient implementation is not as trivial.
To review what we learned in the last tutorial, let us begin with a naive implementation of reverse:

(defun slow-list-reverse (L)
  "Create a new list containing the elements of L in reversed order."
  (if (null L)
      nil
      (list-append (slow-list-reverse (rest L))
                   (list (first L)))))

Notice that this linearly recursive implementation calls the function list-append we implemented in
the last tutorial. Notice also the new function list, which returns a list containing its arguments. For
example, (list 1 2) returns the list (1 2). In general, (list x1 x2 ... xn) is just a shorthand for
(cons x1 (cons x2 ... (cons xn nil) ... )). So, the expression (list (first L)) in the listing
above returns a singleton list containing the first element of L.

So, why does (slow-list-reverse L) return the reversal of L? The list L is constructed either by nil or
by cons:

Case 1: L is nil.
The reversal of L is simply nil.
Case 2: L is constructed from a call to cons.
Then L has two components: (first L) and (rest L). If we append (first L) to the end of the
reversal of (rest L), then we obtain the reversal of L. Of course, we could make use of list-append
to do this. However, list-append expects two list arguments, so we need to construct a singleton list
containing (first L) before we pass it as a second argument to list-append.

Let us trace the execution of the function to see how the recursive calls unfold:

USER(3): (trace slow-list-reverse)

(SLOW-LIST-REVERSE)
USER(4): (slow-list-reverse '(1 2 3 4))

0: (SLOW-LIST-REVERSE (1 2 3 4))
1: (SLOW-LIST-REVERSE (2 3 4))

2: (SLOW-LIST-REVERSE (3 4))

3: (SLOW-LIST-REVERSE (4))
4: (SLOW-LIST-REVERSE NIL)

4: returned NIL

3: returned (4)

2: returned (4 3)
1: returned (4 3 2)

0: returned (4 3 2 1)

(4 3 2 1)

Everything looks fine, until we trace also the unfolding of list-append:

USER(9): (trace list-append)

(LIST-APPEND)
USER(10): (slow-list-reverse '(1 2 3 4))
0: (SLOW-LIST-REVERSE (1 2 3 4))

1: (SLOW-LIST-REVERSE (2 3 4))
2: (SLOW-LIST-REVERSE (3 4))

3: (SLOW-LIST-REVERSE (4))
4: (SLOW-LIST-REVERSE NIL)
4: returned NIL

4: (LIST-APPEND NIL (4))


4: returned (4)
3: returned (4)

3: (LIST-APPEND (4) (3))


4: (LIST-APPEND NIL (3))

4: returned (3)
3: returned (4 3)
2: returned (4 3)

2: (LIST-APPEND (4 3) (2))
3: (LIST-APPEND (3) (2))
4: (LIST-APPEND NIL (2))

4: returned (2)
3: returned (3 2)

2: returned (4 3 2)
1: returned (4 3 2)
1: (LIST-APPEND (4 3 2) (1))

2: (LIST-APPEND (3 2) (1))
3: (LIST-APPEND (2) (1))

4: (LIST-APPEND NIL (1))


4: returned (1)
3: returned (2 1)

2: returned (3 2 1)
1: returned (4 3 2 1)
0: returned (4 3 2 1)

(4 3 2 1)

What we see here is revealing: given a list of N elements, slow-list-reverse makes O(N) recursive calls,
with each level of recursion involving a call to the linear-time function list-append. The result is that
slow-list-reverse is an O(N^2) function.
We can in fact build a much more efficient version of reverse using auxiliary functions and accumulator
variables:

(defun list-reverse (L)
  "Create a new list containing the elements of L in reversed order."
  (list-reverse-aux L nil))

(defun list-reverse-aux (L A)
  "Append list A to the reversal of list L."
  (if (null L)
      A
      (list-reverse-aux (rest L) (cons (first L) A))))

The function list-reverse-aux is an auxiliary function (or a helper function). It does not perform any
useful function by itself, but the driver function list-reverse could use it as a tool when building a
reversal. Specifically, (list-reverse-aux L A) returns a new list obtained by appending list A to the
reversal of list L. By passing nil as A to list-reverse-aux, the driver function list-reverse obtains the
reversal of L.

Let us articulate why (list-reverse-aux L A) correctly appends A to the reversal of list L. Again, we
know that either L is nil or it is constructed by cons:

Case 1: L is nil.
The reversal of L is simply nil. The result of appending A to the end of an empty list is simply A itself.
Case 2: L is constructed by cons.
Now L is composed of two parts: (first L) and (rest L). Observe that (first L) is the last
element in the reversal of L. If we are to append A to the end of the reversal of L, then (first L) will
come immediately before the elements of A. Observing the above, we recognize that we obtain the
desired result by recursively appending (cons (first L) A) to the reversal of (rest L).

Tracing both list-reverse and list-reverse-aux, we get the following:

USER(17): (trace list-reverse list-reverse-aux)


(LIST-REVERSE LIST-REVERSE-AUX)

USER(18): (list-reverse '(1 2 3 4))

0: (LIST-REVERSE (1 2 3 4))
1: (LIST-REVERSE-AUX (1 2 3 4) NIL)

2: (LIST-REVERSE-AUX (2 3 4) (1))

3: (LIST-REVERSE-AUX (3 4) (2 1))
4: (LIST-REVERSE-AUX (4) (3 2 1))

5: (LIST-REVERSE-AUX NIL (4 3 2 1))

5: returned (4 3 2 1)

4: returned (4 3 2 1)
3: returned (4 3 2 1)

2: returned (4 3 2 1)

1: returned (4 3 2 1)
0: returned (4 3 2 1)

(4 3 2 1)

For each recursive call to list-reverse-aux, notice how the first element of L is "peeled off", and is then
"accumulated" in A. Because of this observation, we call the variable A an accumulator variable.

Factorial Revisited
To better understand how auxiliary functions and accumulator variables are used, let us revisit the
problem of computing factorials. The following is an alternative implementation of the factorial function:

(defun fast-factorial (N)
  "A tail-recursive version of factorial."
  (fast-factorial-aux N 1))

(defun fast-factorial-aux (N A)
  "Multiply A by the factorial of N."
  (if (= N 1)
      A
      (fast-factorial-aux (- N 1) (* N A))))

Let us defer the explanation of why the function is named "fast-factorial", and treat it as just another
way to implement factorial. Notice the structural similarity between this pair of functions and those for
computing list reversal. The auxiliary function (fast-factorial-aux N A) computes the product of A and
the N'th factorial. The driver function computes N! by calling fast-factorial-aux with A set to 1. Now,
the correctness of the auxiliary function (i.e. that (fast-factorial-aux N A) indeed returns the product
of N! and A) can be established as follows. N is either one or larger than one.

Case 1: N = 1
The product of A and 1! is simply A * 1! = A.
Case 2: N > 1
Since N! = N * (N-1)!, we then have N! * A = (N-1)! * (N * A), thus justifying our implementation.

Tracing both fast-factorial and fast-factorial-aux, we get the following:

USER(3): (trace fast-factorial fast-factorial-aux)

(FAST-FACTORIAL-AUX FAST-FACTORIAL)

USER(4): (fast-factorial 4)
0: (FAST-FACTORIAL 4)

1: (FAST-FACTORIAL-AUX 4 1)

2: (FAST-FACTORIAL-AUX 3 4)
3: (FAST-FACTORIAL-AUX 2 12)

4: (FAST-FACTORIAL-AUX 1 24)

4: returned 24
3: returned 24

2: returned 24

1: returned 24
0: returned 24

24

If we compare the structure of fast-factorial with list-reverse, we notice certain patterns underlying
the use of accumulator variables in auxiliary functions:

1. An auxiliary function generalizes the functionality of the driver function by promising to compute the
function of interest and also combine the result with the value of the accumulator variable. In the case
of list-reverse-aux, our original interest was computing list reversals, but the auxiliary
function computes a more general concept, namely, that of appending an auxiliary list to some list
reversal. In the case of fast-factorial-aux, our original interest was computing factorials, but the
auxiliary function computes a more general value, namely, the product of some auxiliary number with
a factorial.
2. At each level of recursion, the auxiliary function reduces the problem into a smaller subproblem, and
accumulating intermediate results in the accumulator variable. In the case of list-reverse-aux,
recursion is applied to the sublist (rest L), while (first L) is cons'ed with A. In the case of fast-
factorial, recursion is applied to (N - 1)!, while N is multiplied with A.
3. The driver function initiates the recursion by providing an initial value for the auxiliary variable. In
the case of computing list reversals, list-reverse initializes A to nil. In the case of computing
factorials, fast-factorial initializes A to 1.

Now that you understand how fast-factorial works, we explain where the adjective "fast" comes from ...

Tail Recursions
Recursive functions are usually easier to reason about. Notice how we articulate the correctness of
recursive functions in this and the previous tutorial. However, some naive programmers complain that
recursive functions are slow when compared to their iterative counterparts. For example, consider the
original implementation of factorial we saw in the previous tutorial:

(defun factorial (N)
  "Compute the factorial of N."
  (if (= N 1)
      1
      (* N (factorial (- N 1)))))

It is fair to point out that, as recursion unfolds, stack frames will have to be set up, function arguments
will have to be pushed into the stack, so on and so forth, resulting in unnecessary runtime overhead not
experienced by the iterative counterpart of the above factorial function:

int factorial(int N) {
  int A = 1;
  while (N != 1) {
    A = A * N;
    N = N - 1;
  }
  return A;
}

Because of this and other excuses, programmers conclude that they could write off recursive
implementations ...

Modern compilers for functional programming languages usually implement tail-recursive call
optimizations which automatically translate a certain kind of linear recursion into efficient iterations. A
linear recursive function is tail-recursive if the result of each recursive call is returned right away as the
value of the function. Let's examine the implementation of fast-factorial again:

(defun fast-factorial (N)
  "A tail-recursive version of factorial."
  (fast-factorial-aux N 1))

(defun fast-factorial-aux (N A)
  "Multiply A by the factorial of N."
  (if (= N 1)
      A
      (fast-factorial-aux (- N 1) (* N A))))
Notice that, in fast-factorial-aux, there is no work left to be done after the recursive call (fast-
factorial-aux (- N 1) (* N A)). Consequently, the compiler will not create a new stack frame or push
arguments, but instead simply bind (- N 1) to N and (* N A) to A, and jump to the beginning of the
function. Such optimization effectively renders fast-factorial as efficient as its iterative counterpart.
Notice also the striking structural similarity between the two.

When you implement linearly recursive functions, you are encouraged to restructure them as tail recursions
after you have fully debugged your implementation. Doing so allows the compiler to optimize away stack
management code. However, you should do so only after you get the prototype function correctly
implemented. Notice that the technique of accumulator variables can be used even when we are not
transforming code to tail recursions. For some problems, the use of accumulator variables offers the most
natural solutions.

Exercise: Recall that the N'th triangular number is defined to be 1 + 2 + 3 + ... + N. Give a tail-recursive
implementation of the function (fast-triangular N) which returns the N'th triangular number.

Exercise: Give a tail-recursive implementation of the function (fast-power B E) that raises B to the
power E (assuming that both B and E are non-negative integers).

Exercise: Give a tail-recursive implementation of the function (fast-list-length L), which returns the
length of a given list L.
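
As an illustration of the pattern, one possible sketch for the last exercise uses an auxiliary function with a counter as the accumulator variable (try the other two exercises on your own):

(defun fast-list-length (L)
  "A tail-recursive version of list-length."
  (fast-list-length-aux L 0))

(defun fast-list-length-aux (L A)
  "Add the length of list L to the accumulator A."
  (if (null L)
      A
      (fast-list-length-aux (rest L) (1+ A))))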

Functions as First-Class Objects


A data type is first-class in a programming language when you can pass instances of the data type as
function arguments or return them as function values. We are used to treating numeric and Boolean
values as first-class data types in languages like Pascal and C. However, we might not be familiar with the
notion that functions can be treated as first-class objects, that is, functions can be passed as function
arguments and returned as function values. This unique feature of Common LISP and other functional
languages makes it easy to build very powerful abstractions. In the remainder of this tutorial, we will look
at what passing functional arguments buys us. In the fourth tutorial, when we talk about imperative
programming in LISP, we will return to the topic of returning functional values.

Frequently, we need to apply a transformation multiple times on the same data object. For example, we
could define the following transformation:

(defun double (x)
  "Multiply X by 2."
  (* 2 x))

We could compute 2^4 by applying the double transformation 4 times on 1:

USER(10): (double (double (double (double 1))))

16

Not only is this clumsy, but it also fails to express the very idea that the same transformation is applied
multiple times. It would be nice if we can repeat applying a given transformation F on X for N times by
simply writing (repeat-transformation F N X). The following function does just that:

(defun repeat-transformation (F N X)
  "Repeat applying function F on object X for N times."
  (if (zerop N)
      X
      (repeat-transformation F (1- N) (funcall F X))))

The definition follows the standard tail recursive pattern. Notice the form (funcall F X). Given a
function F and objects X1 X2 ... Xn, the form (funcall F X1 X2 ... Xn) invokes the function F with
arguments X1, X2, ..., Xn. The variable N is a counter keeping track of the remaining number of times we
need to apply function F to the accumulator variable X.

To pass the function double as an argument to repeat-transformation, we need to annotate the
function name double by a closure constructor, as in the following:

USER(11): (repeat-transformation (function double) 4 1)

16

There is nothing magical going on; the closure constructor is just syntax for telling Common LISP that
what follows is a function rather than a local variable name. Had we not included the annotation, Common
LISP would treat the name double as a variable name, and then report an error since the name double is not
defined.

To see how the evaluation arrives at the result 16, we could, as usual, trace the execution:

USER(12): (trace repeat-transformation)

REPEAT-TRANSFORMATION

USER(13): (repeat-transformation #'double 4 1)

0: (REPEAT-TRANSFORMATION # 4 1)

1: (REPEAT-TRANSFORMATION # 3 2)

2: (REPEAT-TRANSFORMATION # 2 4)
3: (REPEAT-TRANSFORMATION # 1 8)

4: (REPEAT-TRANSFORMATION # 0 16)

4: returned 16

3: returned 16

2: returned 16

1: returned 16

0: returned 16
16

Higher-Order Functions
Notice that exponentiation is not the only use of the repeat-transformation function. Let's say we want
to build a list containing 10 occurrences of the symbol blah. We can do so with the help of repeat-
transformation:

USER(30): (defun prepend-blah (L) (cons 'blah L))

PREPEND-BLAH
USER(31): (repeat-transformation (function prepend-blah) 10 nil)

(BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH)

Suppose we want to fetch the 7'th element of the list (a b c d e f g h i j). Of course, we could use
the built in function seventh to do the job, but for the fun of it, we could also achieve what we want in the
following way:
USER(32): (first (repeat-transformation (function rest) 6 '(a b c d e f g h i j)))

G

Basically, we apply rest six times before applying first to get the seventh element. In fact, we could have
defined the function list-nth (see previous tutorial) in the following way:

(defun list-nth (N L)
  (first (repeat-transformation (function rest) N L)))

(Recall that list-nth numbers the members of a list from zero onwards.)

As you can see, functions that accept other functions as arguments are very powerful abstractions. You
can encapsulate generic algorithms in such a function, and parameterize their behavior by passing in
different function arguments. We call a function that has functional parameters (or returns a function as its
value) a higher-order function.

One last point before we move on. The closure constructor function is used very often when working with
higher-order functions. Common LISP therefore provides another equivalent syntax to reduce typing. When
we want Common LISP to interpret a name F as a function, instead of typing (function F), we can also
type the shorthand #'F. The prefix #' is nothing but an alternative syntax for the closure constructor. For
example, we could enter the following:

USER(33): (repeat-transformation #'double 4 1)

16

USER(34): (repeat-transformation #'prepend-blah 10 nil)


(BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH)

USER(35): (first (repeat-transformation #'rest 6 '(a b c d e f g h i j)))

G

Lambda Expressions
Some of the functions, like prepend-blah for example, serve no other purpose but to instantiate the
generic algorithm repeat-transformation. It would be tedious if we had to define such a function globally
using defun before we pass it into repeat-transformation. Fortunately, LISP provides a mechanism to
help us define functions "in place":

USER(36): (repeat-transformation #'(lambda (L) (cons 'blah L)) 10 nil)

(BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH)

The first argument (lambda (L) (cons 'blah L)) is a lambda expression. It designates an anonymous
function (nameless function) with one parameter L, and it returns as a function value (cons 'blah L). We
prefix the lambda expression with the closure constructor #' since we want Common LISP to interpret the
argument as a function rather than a call to a function named lambda.

Similarly, we could have computed powers as follows:

USER(36): (repeat-transformation #'(lambda (x) (* 2 x)) 4 1)

16
Exercise: Define a function (apply-func-list L X) so that, given a list L of functions and an object X,
apply-func-list applies the functions in L to X in reversed order. For example, the following expression

(apply-func-list (list #'double #'list-length #'rest) '(1 2 3))

is equivalent to

(double (list-length (rest '(1 2 3))))
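
One possible solution sketch, which recursively applies the rest of the function list before applying the first function:

(defun apply-func-list (L X)
  "Apply the functions in list L to X, in reversed order."
  (if (null L)
      X
      (funcall (first L) (apply-func-list (rest L) X))))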

Exercise: Use apply-func-list to compute the following:

1. 10 times the fourth element of the list (10 20 30 40 50),


2. the third element of the second element in the list ((1 2) (3 4 5) (6)),
3. the difference between 10 and the length of (a b c d e f),
4. a list containing a list containing the symbol blah.

Iterating Through a List


We have seen how we could iterate through a list using linear recursion. We have also seen how such
recursion can be optimized if structured in a tail-recursive form. However, many of the list processing
functions look strikingly similar. Consider the following examples:

(defun double-list-elements (L)
  "Given a list L of numbers, return a list containing the elements of L multiplied by 2."
  (if (null L)
      nil
      (cons (double (first L)) (double-list-elements (rest L)))))

(defun reverse-list-elements (L)
  "Given a list L of lists, return a list containing the reversal of L's members."
  (if (null L)
      nil
      (cons (reverse (first L)) (reverse-list-elements (rest L)))))

We could come up with infinitely many more examples of this sort. Having to write a new function
every time we want to iterate through a list is both time-consuming and error-prone. The solution is to
capture the generic logic of list iteration in higher-order functions, and specialize their behaviors by
passing in functional arguments.

If we look at the definitions of double-list-elements and reverse-list-elements, we see that they


apply a certain function to the first element of their input, then recursively invoke themselves to process
the rest of the input list, and lastly combine the result of the two function calls using cons. We could
capture this logic into the following function:

(defun mapfirst (F L)
  "Apply function F to every element of list L, and return a list containing the results."
  (if (null L)
      nil
      (cons (funcall F (first L)) (mapfirst F (rest L)))))


The functions double-list-elements and reverse-list-elements can be replaced by the following:

USER(18): (mapfirst #'double '(1 2 3 4))

(2 4 6 8)

USER(19): (mapfirst #'reverse '((1 2 3) (a b c) (4 5 6) (d e f)))

((3 2 1) (C B A) (6 5 4) (F E D))

Of course, you could also pass lambda abstractions as arguments:

USER(20): (mapfirst #'(lambda (x) (* x x)) '(1 2 3 4))

(1 4 9 16)

In fact, the higher-order function is so useful that Common LISP defines a function mapcar that does
exactly what mapfirst is intended for:

USER(22): (mapcar #'butlast '((1 2 3) (a b c) (4 5 6) (d e f)))

((1 2) (A B) (4 5) (D E))

The reason why it is called mapcar is that the function first was called car in some older dialects of LISP
(and rest was called cdr in those dialects; Common LISP still supports car and cdr but we strongly advise
you to stick with the more readable first and rest). We suggest you consider using mapcar whenever
you are tempted to write your own list-iterating functions.

The function mapcar is an example of a generic iterator, which captures the generic logic of iterating
through a list. If we look at what we do the most when we iterate through a list, we find that the following
kinds of iterations occur most frequently in our LISP programs:

1. Transformation iteration: transforming a list by systematically applying a monadic function to the
elements of the list.
2. Search iteration: searching for a list member that satisfies a given condition.
3. Filter iteration: screening out all members that do not satisfy a given condition.

As we have already seen, mapcar implements the generic algorithm for performing transformation
iteration. In the following, we will look at the analogues of mapcar for the remaining iteration categories.

Search Iteration
Let us begin by writing a function that returns an even element in a list of numbers:

(defun find-even (L)
  "Given a list L of numbers, return the leftmost even member."
  (if (null L)
      nil
      (if (evenp (first L))
          (first L)
          (find-even (rest L)))))


Exercise: Implement a function that, when given a list L of lists, returns a non-empty member of L.

We notice that the essential logic of searching can be extracted out into the following definition:

(defun list-find-if (P L)
  "Find the leftmost element of list L that satisfies predicate P."
  (if (null L)
      nil
      (if (funcall P (first L))
          (first L)
          (list-find-if P (rest L)))))

The function list-find-if examines the elements of L one by one, and returns the first one that satisfies
predicate P. The function can be used for locating even or non-nil members in a list:

USER(34): (list-find-if #'evenp '(1 3 5 8 11 12))

8

USER(35): (list-find-if #'(lambda (X) (not (null X))) '(nil nil (1 2 3) (4 5)))

(1 2 3)

Common LISP defines a built-in function find-if which is a more general version of list-find-if. It can
be used just like list-find-if:

USER(37): (find-if #'evenp '(1 3 5 8 11 12))

8

USER(38): (find-if #'(lambda (X) (not (null X))) '(nil nil (1 2 3) (4 5)))

(1 2 3)

Exercise: Use find-if to define a function that searches among a list of lists for a member that has
length at least 3.

Exercise: Use find-if to define a function that searches among a list of lists for a member that contains
an even number of elements.

Exercise: Use find-if to define a function that searches among a list of numbers for a member that is
divisible by three.
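
For instance, one possible solution sketch for the last of these exercises:

(defun find-divisible-by-three (L)
  "Return the leftmost member of L that is divisible by three."
  (find-if #'(lambda (x) (zerop (rem x 3))) L))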

Filter Iteration
Given a list of lists, suppose we want to screen out all the member lists with length less than three. We
could do so by the following function:

(defun remove-short-lists (L)
  "Remove all members of L that have length less than three."
  (if (null L)
      nil
      (if (< (list-length (first L)) 3)
          (remove-short-lists (rest L))
          (cons (first L) (remove-short-lists (rest L))))))

To articulate the correctness of this implementation, consider the following. The list L is either nil or
constructed by cons.

Case 1: L is nil.
Removing short lists from an empty list simply results in an empty list.
Case 2: L is constructed by cons.
L has two components: (first L) and (rest L). We have two cases: either (first L) has fewer
than 3 members or it has at least 3 members.
Case 2.1: (first L) has fewer than three elements.
Since (first L) is short, and will not appear in the result of removing short lists from L, the
latter is equivalent to the result of removing short lists from (rest L).
Case 2.2: (first L) has at least three elements.
Since (first L) is not short, and will appear in the result of removing short lists from L, the
latter is equivalent to adding (first L) to the result of removing short lists from (rest L).

A typical execution trace is the following:

USER(17): (remove-short-lists '((1 2 3) (1 2) nil (1 2 3 4)))

0: (REMOVE-SHORT-LISTS ((1 2 3) (1 2) NIL (1 2 3 4)))

1: (REMOVE-SHORT-LISTS ((1 2) NIL (1 2 3 4)))

2: (REMOVE-SHORT-LISTS (NIL (1 2 3 4)))

3: (REMOVE-SHORT-LISTS ((1 2 3 4)))

4: (REMOVE-SHORT-LISTS NIL)

4: returned NIL

3: returned ((1 2 3 4))

2: returned ((1 2 3 4))

1: returned ((1 2 3 4))

0: returned ((1 2 3) (1 2 3 4))

((1 2 3) (1 2 3 4))

Alternatively, we could have removed short lists using Common LISP's built-in function remove-if:

USER(19): (remove-if #'(lambda (X) (< (list-length X) 3)) '((1 2 3) (1 2) nil (1 2 3 4)))

((1 2 3) (1 2 3 4))

The function (remove-if P L) constructs a new version of list L that contains only members not satisfying
predicate P. For example, we can remove all even members from the list (3 6 8 9 10 13 15 18) by the
following:

USER(21): (remove-if #'(lambda (X) (zerop (rem x 2))) '(3 6 8 9 10 13 15 18))

(3 9 13 15)

Without remove-if, we would end up having to implement a function like the following:

(defun remove-even (L)

"Remove all members of L that is an even number."


(if (null L)

nil

(if (zerop (rem (first L) 2))

(remove-even (rest L))

(cons (first L) (remove-even (rest L))))))

Exercise: Demonstrate the correctness of remove-even using arguments you have seen in this tutorial.

Exercise: Observe the recurring pattern in remove-short-lists and remove-even, and implement your
own version of remove-if.
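One possible sketch for the exercise above, mirroring the structure of list-find-if (the name
list-remove-if is ours, chosen so as not to clash with the built-in remove-if):

(defun list-remove-if (P L)
  "Return a copy of list L without the members satisfying predicate P."
  (if (null L)
      nil
      (if (funcall P (first L))
          (list-remove-if P (rest L))
          (cons (first L) (list-remove-if P (rest L))))))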

We could actually implement list-intersection using remove-if and lambda abstraction:

(defun list-intersection (L1 L2)

"Compute the intersection of lists L1 and L2."

(remove-if #'(lambda (X) (not (member X L2))) L1))

In the definition above, the lambda abstraction evaluates to a predicate that returns true if its argument is
not a member of L2. Therefore, the remove-if expression removes all elements of L1 that are not members
of L2. This precisely gives us the intersection of L1 and L2.

Exercise: Use remove-if and lambda abstractions to implement list-difference.

Exercise: Look up the functionality of remove-if-not in CLTL2. Re-implement list-intersection using
remove-if-not and lambda abstraction.
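For reference, one way the re-implementation might look, using the fact that remove-if-not keeps exactly
those members that satisfy the predicate:

(defun list-intersection (L1 L2)
  "Compute the intersection of lists L1 and L2."
  (remove-if-not #'(lambda (X) (member X L2)) L1))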

Functions Returning Multiple Values


The functions we have seen so far return a single value, but some LISP functions return two or more
values. For example, if two arguments number and divisor are passed to the floor function, it returns two
values, the quotient q and the remainder r so that number = divisor * q + r.

USER(11): (floor 17 5)

3
2

USER(12): (floor -17 5)

-4
3

Regular binding constructs like let and let* only allow us to catch the first returned value of a multiple-
valued function, as the following example illustrates:

USER(13): (let ((x (floor 17 5))) x)

3

One can use the multiple-value-bind to receive the results of a multiple-valued function:
USER(14): (multiple-value-bind (x y) (floor 17 5)

(+ x y))

5

In the above expression, (x y) is the list of variables to be bound to the returned values, and (floor 17 5) is the
expression generating multiple results. Binding the two values of (floor 17 5) to x and y, LISP then
evaluates the expression (+ x y).

One may also write LISP functions that return multiple values:

(defun order (a b)

"Return two values: (min a b) and (max a b)."

(if (>= a b)

(values b a)

(values a b)))

The values special form returns its arguments as multiple values.
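An illustrative interaction with order (the prompt numbers here are ours): at the top level the two values
are printed on separate lines, and multiple-value-bind can be used to receive them:

USER(15): (order 7 3)
3
7
USER(16): (multiple-value-bind (small big) (order 7 3)
            (list small big))
(3 7)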

For more information about LISP constructs for handling multiple values, consult section 7.10 of CLTL2.

Exercise: Implement a tail-recursive function (list-min-max L) that returns two values: the minimum
and maximum members of a list L of numbers.
LISP Tutorial Lecture 3: Data
Abstraction
Binary Trees
Suppose we want to create a new kind of recursive data type, our familiar binary trees. The first thing we
have to do is to define the data type in terms of its constructors, selectors and recognizers. In the case of
binary trees, we have the following:

1. Constructors: We have two kinds of binary trees, leaves and nodes. Accordingly, we need a
constructor for each kind:
(make-bin-tree-leaf E): A leaf is a composite object with one component, the element E.
(make-bin-tree-node E B1 B2): A node consists of three components, an element E, a left
subtree B1 and a right subtree B2. Each of B1 and B2 is a binary tree.
Notice that the definition of binary tree is inherently recursive (as in the case of nodes). Larger binary
trees can be composed from smaller ones.
2. Selectors: We need to define a selector for each component of each kind of binary tree.
(bin-tree-leaf-element L): Retrieve the element of a leaf L.
(bin-tree-node-element N): Retrieve the element of a node N.
(bin-tree-node-left N): Retrieve the left subtree of a node N.
(bin-tree-node-right N): Retrieve the right subtree of a node N.
3. Recognizers: We define one recognizer for each kind of binary tree.
(bin-tree-leaf-p B): Test if a given binary tree B is a leaf.
(bin-tree-node-p B): Test if a given binary tree B is a node.

Notice that we have not written a line of code yet, and still we are able to write down the function
signatures of all the constructors, selectors and recognizers. The process is more or less mechanical:

1. Define a constructor for each variant of the recursive data type. The parameters for a constructor
define the components of a composite object.
2. For each parameter of each constructor, define a selector to retrieve the corresponding component.
3. For each constructor, define a corresponding recognizer.

The next question is how we are to represent a binary tree as a LISP object. Of course, a list is the first
thing that comes to our mind:

We represent a leaf with element E by a singleton list containing E (i.e. (list E)).
A node with element E, left subtree B1 and right subtree B2 is represented as a list containing the
three components (i.e. (list E B1 B2)).

Fixing the representation, we can thus implement the recursive data type functions:

;;
;; Binary Trees

;;

;;
;; Constructors for binary trees

;;

(defun make-bin-tree-leaf (E)


"Create a leaf."

(list E))

(defun make-bin-tree-node (E B1 B2)


"Create a node with element K, left subtree B1 and right subtree B2."
(list E B1 B2))

;;
;; Selectors for binary trees

;;

(defun bin-tree-leaf-element (L)


"Retrieve the element of a leaf L."

(first L))

(defun bin-tree-node-element (N)

"Retrieve the element of a node N."


(first N))

(defun bin-tree-node-left (N)


"Retrieve the left subtree of a node N."

(second N))

(defun bin-tree-node-right (N)

"Retrieve the right subtree of a node N."


(third N))

;;
;; Recognizers for binary trees

;;

(defun bin-tree-leaf-p (B)

"Test if binary tree B is a leaf."


(and (listp B) (= (list-length B) 1)))

(defun bin-tree-node-p (B)


"Test if binary tree B is a node."

(and (listp B) (= (list-length B) 3)))

The representation scheme works out like the following:

USER(5): (make-bin-tree-node '*


(make-bin-tree-node '+

(make-bin-tree-leaf 2)
(make-bin-tree-leaf 3))
(make-bin-tree-node '-

(make-bin-tree-leaf 7)
(make-bin-tree-leaf 8)))
(* (+ (2) (3)) (- (7) (8)))

The expression above is a binary tree node with element * and two subtrees. The left subtree is itself a
binary tree node with + as its element and leaves as its subtrees. The right subtree is also a binary tree
node with - as its element and leaves as its subtrees. All the leaves are decorated by numeric components.

         *
        / \
       /   \
      +     -
     / \   / \
    2   3 7   8

Searching Binary Trees


As discussed in previous tutorials, having recursive data structures defined in the way we did streamlines
the process of formulating structural recursions. We review this concept in the following examples.

Suppose we treat binary trees as containers. An expression E is a member of a binary tree B if:

1. B is a leaf and its element is E.


2. B is a node and either its element is E or E is a member of one of its subtrees.

For example, the definition asserts that the members of (* (+ (2) (3)) (- (7) (8))) are *, +, 2, 3, -, 7
and 8. Such a definition can be directly implemented by our recursive data type functions:

(defun bin-tree-member-p (B E)
"Test if E is an element in binary tree B."
(if (bin-tree-leaf-p B)

(equal E (bin-tree-leaf-element B))

(or (equal E (bin-tree-node-element B))


(bin-tree-member-p (bin-tree-node-left B) E)

(bin-tree-member-p (bin-tree-node-right B) E))))

The function can be made more readable by using the let form:

(defun bin-tree-member-p (B E)
"Test if E is an element in binary tree B."

(if (bin-tree-leaf-p B)

(equal E (bin-tree-leaf-element B))

(let
((elmt (bin-tree-node-element B))

(left (bin-tree-node-left B))

(right (bin-tree-node-right B)))


(or (equal E elmt)

(bin-tree-member-p left E)

(bin-tree-member-p right E)))))

Tracing the execution of bin-tree-member-p, we get:

USER(14): (trace bin-tree-member-p)

(BIN-TREE-MEMBER-P)

USER(15): (bin-tree-member-p '(+ (* (2) (3)) (- (7) (8))) 7)


0: (BIN-TREE-MEMBER-P (+ (* (2) (3)) (- (7) (8))) 7)

1: (BIN-TREE-MEMBER-P (* (2) (3)) 7)

2: (BIN-TREE-MEMBER-P (2) 7)
2: returned NIL
2: (BIN-TREE-MEMBER-P (3) 7)

2: returned NIL

1: returned NIL

1: (BIN-TREE-MEMBER-P (- (7) (8)) 7)


2: (BIN-TREE-MEMBER-P (7) 7)

2: returned T

1: returned T
0: returned T

T

Exercise: Let size(B) be the number of members in a binary tree B. Give a recursive definition of size(B),
and then implement a LISP function (bin-tree-size B) that returns size(B).
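One possible definition, following the same structural recursion used by bin-tree-member-p (the name
bin-tree-size is taken from the exercise statement):

(defun bin-tree-size (B)
  "Return the number of members in binary tree B."
  (if (bin-tree-leaf-p B)
      1
      (+ 1
         (bin-tree-size (bin-tree-node-left B))
         (bin-tree-size (bin-tree-node-right B)))))

For example, (bin-tree-size '(* (+ (2) (3)) (- (7) (8)))) would return 7.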

Traversing Binary Trees


Let us write a function that will reverse a tree in the sense that the left and right subtrees of every node
are swapped:

(defun bin-tree-reverse (B)

"Reverse binary tree B."

(if (bin-tree-leaf-p B)
B

(let

((elmt (bin-tree-node-element B))


(left (bin-tree-node-left B))

(right (bin-tree-node-right B)))

(make-bin-tree-node elmt
(bin-tree-reverse right)

(bin-tree-reverse left)))))

The correctness of the above implementation can be articulated as follows. Given a binary tree B, either
the binary tree is a leaf or it is a node:

Case 1: B is a leaf.
Then the reversal of B is simply B itself.
Case 2: B is a node.
Then B has three components, namely, an element elmt, a left subtree left and a right subtree
right. The reversal of B is a node with element elmt, left subtree the reversal of right, and right
subtree the reversal of left.

The following shows us how the recursion unfolds:

USER(21): (trace bin-tree-reverse)

(BIN-TREE-REVERSE)

USER(22): (bin-tree-reverse '(* (+ (2) (3)) (- (7) (8))))


0: (BIN-TREE-REVERSE (* (+ (2) (3)) (- (7) (8))))

1: (BIN-TREE-REVERSE (- (7) (8)))

2: (BIN-TREE-REVERSE (8))
2: returned (8)

2: (BIN-TREE-REVERSE (7))
2: returned (7)

1: returned (- (8) (7))

1: (BIN-TREE-REVERSE (+ (2) (3)))


2: (BIN-TREE-REVERSE (3))

2: returned (3)

2: (BIN-TREE-REVERSE (2))
2: returned (2)

1: returned (+ (3) (2))

0: returned (* (- (8) (7)) (+ (3) (2)))

(* (- (8) (7)) (+ (3) (2)))

The resulting expression represents the following tree:

         *
        / \
       /   \
      -     +
     / \   / \
    8   7 3   2

Let us implement a function that will extract the members of a given binary tree, and put them into a list
in preorder.

(defun bin-tree-preorder (B)

"Create a list containing keys of B in preorder."

(if (bin-tree-leaf-p B)
(list (bin-tree-leaf-element B))

(let

((elmt (bin-tree-node-element B))

(left (bin-tree-node-left B))


(right (bin-tree-node-right B)))

(cons elmt

(append (bin-tree-preorder left)


(bin-tree-preorder right))))))

Tracing the execution of the function, we obtain the following:

USER(13): (trace bin-tree-preorder)

(BIN-TREE-PREORDER)
USER(14): (bin-tree-preorder '(* (+ (2) (3)) (- (7) (8))))

0: (BIN-TREE-PREORDER (* (+ (2) (3)) (- (7) (8))))

1: (BIN-TREE-PREORDER (+ (2) (3)))

2: (BIN-TREE-PREORDER (2))

2: returned (2)
2: (BIN-TREE-PREORDER (3))

2: returned (3)

1: returned (+ 2 3)

1: (BIN-TREE-PREORDER (- (7) (8)))

2: (BIN-TREE-PREORDER (7))
2: returned (7)
2: (BIN-TREE-PREORDER (8))

2: returned (8)

1: returned (- 7 8)

0: returned (* + 2 3 - 7 8)

(* + 2 3 - 7 8)

As we have discussed before, the append call in the code above is a source of inefficiency that can be
optimized away:

(defun fast-bin-tree-preorder (B)

"A tail-recursive version of bin-tree-preorder."

(preorder-aux B nil))

(defun preorder-aux (B A)

"Append A to the end of the list containing elements of B in preorder."


(if (bin-tree-leaf-p B)

(cons (bin-tree-leaf-element B) A)

(let

((elmt (bin-tree-node-element B))

(left (bin-tree-node-left B))

(right (bin-tree-node-right B)))


(cons elmt

(preorder-aux left

(preorder-aux right A))))))

An execution trace of the implementation is the following:

USER(15): (trace fast-bin-tree-preorder preorder-aux)

(PREORDER-AUX FAST-BIN-TREE-PREORDER)

USER(16): (fast-bin-tree-preorder '(* (+ (2) (3)) (- (7) (8))))

0: (FAST-BIN-TREE-PREORDER (* (+ (2) (3)) (- (7) (8))))

1: (PREORDER-AUX (* (+ (2) (3)) (- (7) (8))) NIL)

2: (PREORDER-AUX (- (7) (8)) NIL)

3: (PREORDER-AUX (8) NIL)


3: returned (8)

3: (PREORDER-AUX (7) (8))

3: returned (7 8)

2: returned (- 7 8)

2: (PREORDER-AUX (+ (2) (3)) (- 7 8))

3: (PREORDER-AUX (3) (- 7 8))


3: returned (3 - 7 8)

3: (PREORDER-AUX (2) (3 - 7 8))

3: returned (2 3 - 7 8)

2: returned (+ 2 3 - 7 8)

1: returned (* + 2 3 - 7 8)

0: returned (* + 2 3 - 7 8)

(* + 2 3 - 7 8)
Exercise: Implement a function that will create a list containing members of a given binary tree in
postorder. Implement also a tail-recursive version of the same function.

Exercise: Repeat the last exercise with inorder.
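As a hint for the inorder case, one possible (non-tail-recursive) sketch looks like the following; the name
bin-tree-inorder is ours:

(defun bin-tree-inorder (B)
  "Create a list containing the members of B in inorder."
  (if (bin-tree-leaf-p B)
      (list (bin-tree-leaf-element B))
      (append (bin-tree-inorder (bin-tree-node-left B))
              (cons (bin-tree-node-element B)
                    (bin-tree-inorder (bin-tree-node-right B))))))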

Abstract Data Types


Abstract data types are blackboxes. They are defined in terms of their external interfaces, and not their
implementation. For example, a set abstraction offers the following operations:

(make-empty-set) creates an empty set.


(set-insert S E) returns a set containing all members of set S plus an additional member E.
(set-remove S E) returns a set containing all members of set S except for E.
(set-member-p S E) returns true if E is a member of set S.
(set-empty-p S) returns true if set S is empty.

To implement an abstract data type, we need to decide on a representation. Let us represent a set by a list
with no repeated members.

(defun make-empty-set ()

"Creates an empty set."

nil)

(defun set-insert (S E)

"Return a set containing all the members of set S plus the element E."
(adjoin E S :test #'equal))

(defun set-remove (S E)

"Return a set containing all the members of set S except for element E."

(remove E S :test #'equal))

(defun set-member-p (S E)

"Return non-NIL if set S contains element E."

(member E S :test #'equal))

(defun set-empty-p (S)

"Return true if set S is empty."


(null S))
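An illustrative interaction with the set abstraction (prompt numbers are ours); note that inserting an
element that is already present leaves the set unchanged:

USER(20): (set-insert (set-insert (set-insert (make-empty-set) 1) 2) 1)
(2 1)
USER(21): (set-member-p '(2 1) 1)
(1)
USER(22): (set-empty-p (set-remove '(2 1) 2))
NIL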

Exercise: Look up the definition of adjoin, remove and member from CLTL2. In particular, find out how
the :test keyword is used to specify the equality test function to be used by the three functions. What will
happen if we omit the :test keyword and the subsequent #'equal when invoking the three functions?
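As a hint, the default test used by these functions is eql, which compares object identity rather than
structure; two separately written lists are therefore typically not considered the same member:

USER(23): (member '(1 2) '((1 2) (3 4)))
NIL
USER(24): (member '(1 2) '((1 2) (3 4)) :test #'equal)
((1 2) (3 4))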

Notice that we have implemented an abstract data type (sets) using a more fundamental recursive data
structure (lists) with additional computational constraints (no repetition) imposed by the interface
functions.

Binary Search Trees


Another way of implementing the same set abstraction is to use the more efficient binary search tree
(BST). Binary search trees are basically binary trees with the following additional computational
constraints:

All the members in the left subtree of a tree node are no greater than the element of the node.
All the members in the right subtree of a tree node are greater than the element of the node.
All the leaf members are distinct.

Again, we are implementing an abstract data type (sets) by a more fundamental recursive data structure
(binary trees) with additional computational constraints. In particular, we use the leaves of a binary tree
to store the members of a set, and the tree nodes for providing indexing information that improves search
performance. For example, a BST representing the set {1 2 3 4} would look like:

         2
        / \
       /   \
      1     3
     / \   / \
    1   2 3   4

An empty BST is represented by NIL, while a nonempty BST is represented by a binary tree. We begin
with the constructor and recognizer for empty BST.

(defun make-empty-BST ()

"Create an empty BST."

nil)

(defun BST-empty-p (B)

"Check if BST B is empty."

(null B))

Given the additional computational constraints, membership test can be implemented as follows:

(defun BST-member-p (B E)

"Check if E is a member of BST B."

(if (BST-empty-p B)

nil

(BST-nonempty-member-p B E)))

(defun BST-nonempty-member-p (B E)

"Check if E is a member of nonempty BST B."

(if (bin-tree-leaf-p B)

(= E (bin-tree-leaf-element B))

(if (<= E (bin-tree-node-element B))


(BST-nonempty-member-p (bin-tree-node-left B) E)

(BST-nonempty-member-p (bin-tree-node-right B) E))))

Notice that we handle the degenerate case of searching an empty BST separately, and apply the well-
known recursive search algorithm only on nonempty BST.

USER(16): (trace BST-member-p BST-nonempty-member-p)

(BST-NONEMPTY-MEMBER-P BST-MEMBER-P)

USER(17): (BST-member-p '(2 (1 (1) (2)) (3 (3) (4))) 3)


0: (BST-MEMBER-P (2 (1 (1) (2)) (3 (3) (4))) 3)
1: (BST-NONEMPTY-MEMBER-P (2 (1 (1) (2)) (3 (3) (4))) 3)

2: (BST-NONEMPTY-MEMBER-P (3 (3) (4)) 3)

3: (BST-NONEMPTY-MEMBER-P (3) 3)
3: returned T

2: returned T

1: returned T

0: returned T

T

Insertion is handled by the following family of functions:

(defun BST-insert (B E)

"Insert E into BST B."

(if (BST-empty-p B)

(make-bin-tree-leaf E)
(BST-nonempty-insert B E)))

(defun BST-nonempty-insert (B E)

"Insert E into nonempty BST B."

(if (bin-tree-leaf-p B)

(BST-leaf-insert B E)
(let ((elmt (bin-tree-node-element B))

(left (bin-tree-node-left B))

(right (bin-tree-node-right B)))

(if (<= E (bin-tree-node-element B))

(make-bin-tree-node elmt

(BST-nonempty-insert (bin-tree-node-left B) E)

right)
(make-bin-tree-node elmt

left

(BST-nonempty-insert (bin-tree-node-right B) E))))))

(defun BST-leaf-insert (L E)

"Insert element E to a BST with only one leaf."


(let ((elmt (bin-tree-leaf-element L)))
    (if (= E elmt)
        L
        (if (< E elmt)
            (make-bin-tree-node E
                                (make-bin-tree-leaf E)
                                (make-bin-tree-leaf elmt))
            (make-bin-tree-node elmt
                                (make-bin-tree-leaf elmt)
                                (make-bin-tree-leaf E))))))

As before, recursive insertion to nonempty BST is handled outside of the general entry point of BST
insertion. Traversing down the index nodes, the recursive algorithm eventually arrives at a leaf. In case
the element is not already in the tree, the leaf is turned into a node with leaf subtrees holding the inserted
element and the element of the original leaf. For example, if we insert 2.5 into the tree represented by (2
(1 (1) (2)) (3 (3) (4))), the effect is the following:
         2                              2
        / \                            / \
       /   \                          /   \
      1     3           ==>          1     3
     / \   / \                      / \   / \
    1   2 3   4                    1   2 2.5  4
                                        / \
                                     2.5   3

A trace of the insertion operation is given below:

USER(22): (trace BST-insert BST-nonempty-insert BST-leaf-insert)

(BST-LEAF-INSERT BST-NONEMPTY-INSERT BST-INSERT)

USER(23): (BST-insert '(2 (1 (1) (2)) (3 (3) (4))) 2.5)

0: (BST-INSERT (2 (1 (1) (2)) (3 (3) (4))) 2.5)

1: (BST-NONEMPTY-INSERT (2 (1 (1) (2)) (3 (3) (4))) 2.5)


2: (BST-NONEMPTY-INSERT (3 (3) (4)) 2.5)

3: (BST-NONEMPTY-INSERT (3) 2.5)

4: (BST-LEAF-INSERT (3) 2.5)

4: returned (2.5 (2.5) (3))

3: returned (2.5 (2.5) (3))

2: returned (3 (2.5 (2.5) (3)) (4))


1: returned (2 (1 (1) (2)) (3 (2.5 (2.5) (3)) (4)))

0: returned (2 (1 (1) (2)) (3 (2.5 (2.5) (3)) (4)))

(2 (1 (1) (2)) (3 (2.5 (2.5) (3)) (4)))

Removal of elements is handled by the following family of functions:

(defun BST-remove (B E)

"Remove E from BST B."

(if (BST-empty-p B)
B

(if (bin-tree-leaf-p B)

(BST-leaf-remove B E)

(BST-node-remove B E))))

(defun BST-leaf-remove (L E)

"Remove E from BST leaf L."

(if (= E (bin-tree-leaf-element L))

(make-empty-BST)

L))

(defun BST-node-remove (N E)

"Remove E from BST node N."


(let

((elmt (bin-tree-node-element N))

(left (bin-tree-node-left N))


(right (bin-tree-node-right N)))

(if (<= E elmt)

(if (bin-tree-leaf-p left)

(if (= E (bin-tree-leaf-element left))

right
N)

(make-bin-tree-node elmt (BST-node-remove left E) right))

(if (bin-tree-leaf-p right)

(if (= E (bin-tree-leaf-element right))

left

N)

(make-bin-tree-node elmt left (BST-node-remove right E))))))

This time, removal from empty BST's and BST's with a single leaf are both degenerate cases. The
recursive removal algorithm deals with BST nodes. Traversing down the index nodes, the recursive
algorithm searches for the parent node of the leaf to be removed. In case it is found, the sibling of the leaf
to be removed replaces its parent node. For example, the effect of removing 3 from the BST represented
by (2 (1 (1) (2)) (3 (3) (4))) is depicted as follows:

         2                              2
        / \                            / \
       /   \                          /   \
      1     3           ==>          1     4
     / \   / \                      / \
    1   2 3   4                    1   2

A trace of the deletion operation is given below:

USER(4): (trace BST-remove BST-node-remove)

(BST-NODE-REMOVE BST-REMOVE)

USER(5): (BST-remove '(2 (1 (1) (2)) (3 (3) (4))) 3)

0: (BST-REMOVE (2 (1 (1) (2)) (3 (3) (4))) 3)

1: (BST-NODE-REMOVE (2 (1 (1) (2)) (3 (3) (4))) 3)

2: (BST-NODE-REMOVE (3 (3) (4)) 3)

2: returned (4)

1: returned (2 (1 (1) (2)) (4))

0: returned (2 (1 (1) (2)) (4))

(2 (1 (1) (2)) (4))

Exercise: A set can be implemented as a sorted list, which is a list storing distinct members in ascending
order. Implement the sorted list abstraction.

Polynomials
We demonstrate how one can perform symbolic computation using LISP. To begin with, we define a new
recursive data type for polynomials, which is defined recursively as follows:

If num is a number, then (make-constant num) is a polynomial;


If sym is a symbol, then (make-variable sym) is a polynomial;
If poly1 and poly2 are polynomials, then the following are also polynomials:
(make-sum poly1 poly2)
(make-product poly1 poly2)
If poly is a polynomial and num is a number, then (make-power poly num) is a polynomial.

One can represent polynomials in the most standard way:

;;

;; Constructors for polynomials

;;

(defun make-constant (num)

num)

(defun make-variable (sym)

sym)

(defun make-sum (poly1 poly2)


(list '+ poly1 poly2))

(defun make-product (poly1 poly2)

(list '* poly1 poly2))

(defun make-power (poly num)

(list '** poly num))

For example, (make-power (make-sum (make-variable 'x) (make-constant 1)) 2) is represented by
the LISP form (** (+ x 1) 2), which denotes the polynomial (x + 1)^2 in our usual notation.

We then define a recognizer for each constructor:

;;

;; Recognizers for polynomials

;;

(defun constant-p (poly)


(numberp poly))

(defun variable-p (poly)

(symbolp poly))

(defun sum-p (poly)

(and (listp poly) (eq (first poly) '+)))

(defun product-p (poly)

(and (listp poly) (eq (first poly) '*)))

(defun power-p (poly)

(and (listp poly) (eq (first poly) '**)))

We then need to define selectors for the composite polynomials. We define a selector for each component
of each composite constructor.
;;

;; Selectors for polynomials

;;

(defun constant-numeric (const)

const)

(defun variable-symbol (var)

var)

(defun sum-arg1 (sum)

(second sum))

(defun sum-arg2 (sum)

(third sum))

(defun product-arg1 (prod)


(second prod))

(defun product-arg2 (prod)

(third prod))

(defun power-base (pow)

(second pow))

(defun power-exponent (pow)

(third pow))
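An illustrative interaction with these definitions (prompt numbers are ours):

USER(9): (make-product (make-variable 'x) (make-sum (make-variable 'x) (make-constant 1)))
(* X (+ X 1))
USER(10): (product-p '(* x (+ x 1)))
T
USER(11): (product-arg2 '(* x (+ x 1)))
(+ X 1)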

One may ask why we define so many trivial-looking functions for carrying out the same task (sum-arg1 and
product-arg1 have exactly the same implementation). The reason is that we may end up changing the
representation in the future, and there is no guarantee that sums and products will be represented
similarly in the future. Also, programs written like this tend to be self-commenting.

Now that we have a completely defined polynomial data type, let us do something interesting with it. Let
us define a function that carries out symbolic differentiation. In particular, we want a function (d poly x)
which returns the derivative of polynomial poly with respect to variable x. Let us review our first-year
differential calculus:

The derivative (dC / dx) of a constant C is zero.


The derivative (dy/dx) of a variable y is 1 if x = y. Otherwise, we leave the derivative unevaluated.
We represent unevaluated derivatives using the following functions

;;

;; Unevaluated derivative

;;

(defun make-derivative (poly x)


(list 'd poly x))

(defun derivative-p (poly)

(and (listp poly) (eq (first poly) 'd)))

The derivative (d(F+G)/dx) of a sum (F+G) is (dF/dx) + (dG/dx).


The derivative (d(F*G)/dx) of a product (F*G) is F*(dG/dx) + G*(dF/dx).
The derivative (d(F^N)/dx) of a power F^N is N * F^(N-1) * (dF/dx).

The above calculus can be encoded in LISP as follows:

;;

;; Differentiation function

;;

(defun d (poly x)

(cond

((constant-p poly) 0)

((variable-p poly)

(if (equal poly x)

1

(make-derivative poly x)))

((sum-p poly)

(make-sum (d (sum-arg1 poly) x)

(d (sum-arg2 poly) x)))

((product-p poly)

(make-sum (make-product (product-arg1 poly)

(d (product-arg2 poly) x))


(make-product (product-arg2 poly)

(d (product-arg1 poly) x))))

((power-p poly)

(make-product (make-product (power-exponent poly)

(make-power (power-base poly)

(1- (power-exponent poly))))

(d (power-base poly) x)))))

Test driving the differentiation function we get:

USER(11): (d '(+ x y) 'x)

(+ 1 (D Y X))
USER(12): (d '(* (+ x 1) (+ x 1)) 'x)

(+ (* (+ X 1) (+ 1 0)) (* (+ X 1) (+ 1 0)))

USER(13): (d '(** (+ x 1) 2) 'x)

(* (* 2 (** (+ X 1) 1)) (+ 1 0))

The result is correct but very clumsy. We would like to simplify the result a bit using the following
rewriting rules:

E + 0 = E
0 + E = E
E * 0 = 0
0 * E = 0
E * 1 = E
1 * E = E
E^0 = 1
E^1 = E
This can be done by defining a simplification framework, in which we can implement such rules:

;;
;; Simplification function

;;

(defun simplify (poly)

"Simplify polynomial POLY."

(cond

((constant-p poly) poly)

((variable-p poly) poly)

((sum-p poly)

(let ((arg1 (simplify (sum-arg1 poly)))

(arg2 (simplify (sum-arg2 poly))))

(make-simplified-sum arg1 arg2)))

((product-p poly)
(let ((arg1 (simplify (product-arg1 poly)))

(arg2 (simplify (product-arg2 poly))))

(make-simplified-product arg1 arg2)))

((power-p poly)

(let ((base (simplify (power-base poly)))

(exponent (simplify (power-exponent poly))))

(make-simplified-power base exponent)))

((derivative-p poly) poly)))

The simplify function decomposes a composite polynomial into its components, applies simplification
recursively to the components, and then invokes the type-specific simplification rules (i.e. make-
simplified-sum, make-simplified-product, make-simplified-power) based on the type of the
polynomial being processed.

The simplification rules are encoded in LISP as follows:

(defun make-simplified-sum (arg1 arg2)

"Given simplified polynomials ARG1 and ARG2, construct a simplified sum of ARG1 and ARG2."

(cond

((and (constant-p arg1) (zerop arg1)) arg2)

((and (constant-p arg2) (zerop arg2)) arg1)

(t (make-sum arg1 arg2))))

(defun make-simplified-product (arg1 arg2)


"Given simplified polynomials ARG1 and ARG2, construct a simplified product of ARG1 and ARG2."

(cond

((and (constant-p arg1) (zerop arg1)) (make-constant 0))

((and (constant-p arg2) (zerop arg2)) (make-constant 0))

((and (constant-p arg1) (= arg1 1)) arg2)

((and (constant-p arg2) (= arg2 1)) arg1)

(t (make-product arg1 arg2))))

(defun make-simplified-power (base exponent)

"Given simplified polynomials BASE and EXPONENT, construct a simplified power with base BASE and exponent EXPONENT."

(cond
((and (constant-p exponent) (= exponent 1)) base)

((and (constant-p exponent) (zerop exponent)) (make-constant 1))

(t (make-power base exponent))))

Let us see how all these pay off:

USER(14): (simplify (d '(* (+ x 1) (+ x 1)) 'x))

(+ (+ X 1) (+ X 1))

USER(15): (simplify (d '(** (+ x 1) 2) 'x))

(* 2 (+ X 1))

Comparing to the original results we saw before, this is a lot more reasonable.

Exercise: Extend the symbolic polynomial framework in the following ways:

Define a new type of polynomial --- difference. If poly1 and poly2 are polynomials, then (make-
difference poly1 poly2) is also a polynomial. Implement the constructor, recognizer and selectors
for this type of polynomial.
The derivative (d(F-G)/dx) of a difference (F-G) is (dF/dx) - (dG/dx). Extend the differentiation function
to incorporate this.
Implement the following simplification rule:
E-0=E

Exercise: Extend the symbolic polynomial framework in the following ways:

Define a new type of polynomial --- negation. If poly is a polynomial, then (make-negation poly) is
also a polynomial. Implement the constructor, recognizer and selectors for this type of polynomial.
The derivative (d(-F)/dx) of a negation -F is -(dF/dx). Extend the differentiation function to incorporate
this.
Implement the following simplification rules:
-0 = 0
-(-E) = E

Exercise: The simplification rules we have seen so far share a common feature: the right hand sides do
not involve any new polynomial constructor. For example, -(-E) is simply E. However, some of the most
useful simplification rules are those involving constructors on the right hand sides:

0 - E = -E
E1 + (-E2) = E1 - E2
(-E1) + E2 = E2 - E1
E1 - (-E2) = E1 + E2
E * (-1) = -E
(-1) * E = -E

Within the type-specific simplification functions, if we naively apply the regular constructors to build the
expressions on the right hand sides, then we run into the risk of constructing polynomials that are not
fully simplified. For example, -x and -1 are both fully simplified, but if we now construct their product (-1)
* (-x), the last simplification rule above says that we can rewrite the product into -(-x), which needs further
simplification. One naive solution is to blindly apply full simplification to the newly constructed
polynomials, but this is obviously an overkill. What then is an efficient and yet correct implementation of
the above simplification rules?

Exercise: If all the components of a composite polynomial are constants, then we can actually perform
further simplification. For example, (+ 1 1) should be simplified to 2. Extend the simplification
framework to incorporate this.

Tower of Hanoi
The Tower of Hanoi problem is a classical toy problem in Artificial Intelligence: There are N disks D1, D2,
..., Dn, of graduated sizes and three pegs 1, 2, and 3. Initially all the disks are stacked on peg 1, with D1,
the smallest, on top and Dn, the largest, at the bottom. The problem is to transfer the stack to peg 3 given
that only one disk can be moved at a time and that no disk may be placed on top of a smaller one. [Pearl
1984]

We call peg 1 the "from" peg, peg 3 the "to" peg. Peg 2 is actually a buffer to facilitate movement of
disks, and we call it an "auxiliary" peg. We can move N disks from the "from" peg to the "to" peg using the
following recursive scheme.

1. Ignoring the largest disk at the "from" peg, treat the remaining disks as a Tower of Hanoi problem
with N-1 disks. Recursively move the top N-1 disks from the "from" peg to the "auxiliary" peg, using
the "to" peg as a buffer.
2. Now that the N-1 smaller disks are in the "auxiliary" peg, we move the largest disk to the "to" peg.
3. Ignoring the largest disk again, treat the remaining disks as a Tower of Hanoi problem with N-1 disks.
Recursively move the N-1 disks from the "auxiliary" peg to the "to" peg, using the "from" peg as a
buffer.

To code this solution in LISP, we need to define some data structure. First, we represent a disk by a
number, so that Di is represented by i. Second, we represent a stack of disks by a tower, which is nothing
but a list of numbers, with the first element representing the top disk. We define the usual constructors
and selectors for the tower data type.

;;

;; A tower is a list of numbers

;;

(defun make-empty-tower ()

"Create tower with no disk."

nil)

(defun tower-push (tower disk)

"Create tower by stacking DISK on top of TOWER."

(cons disk tower))

(defun tower-top (tower)

"Get the top disk of TOWER."

(first tower))

(defun tower-pop (tower)

"Remove the top disk of TOWER."

(rest tower))

Third, we define the hanoi data type to represent a Tower of Hanoi configuration. In particular, a hanoi
configuration is a list of three towers. The elementary constructors and selectors are given below:

;;

;; Hanoi configuration

;;
(defun make-hanoi (from-tower aux-tower to-tower)

"Create a Hanoi configuration from three towers."

(list from-tower aux-tower to-tower))

(defun hanoi-tower (hanoi i)

"Select the I'th tower of a Hanoi construction."

(nth (1- i) hanoi))

Working with towers within a Hanoi configuration is tedious. We therefore define some shortcut to
capture recurring operations:

;;
;; Utilities

;;

(defun hanoi-tower-update (hanoi i tower)

"Replace the I'th tower in the HANOI configuration by tower TOWER."

(cond

((= i 1) (make-hanoi tower (second hanoi) (third hanoi)))

((= i 2) (make-hanoi (first hanoi) tower (third hanoi)))

((= i 3) (make-hanoi (first hanoi) (second hanoi) tower))))

(defun hanoi-tower-top (hanoi i)

"Return the top disk of the I'th tower in the HANOI configuration."

(tower-top (hanoi-tower hanoi i)))

(defun hanoi-tower-pop (hanoi i)

"Pop the top disk of the I'th tower in the HANOI configuration."

(hanoi-tower-update hanoi i (tower-pop (hanoi-tower hanoi i))))

(defun hanoi-tower-push (hanoi i disk)

"Push DISK into the I'th tower of the HANOI configuration."

(hanoi-tower-update hanoi i (tower-push (hanoi-tower hanoi i) disk)))

The fundamental operator we can perform on a Hanoi configuration is to move a top disk from one peg to
another:

;;

;; Operator: move top disk from one tower to another

;;

(defun move-disk (from to hanoi)

"Move the top disk from peg FROM to peg TO in configuration HANOI."

(let

((disk (hanoi-tower-top hanoi from))

(intermediate-hanoi (hanoi-tower-pop hanoi from)))

(hanoi-tower-push intermediate-hanoi to disk)))
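An illustrative application of the operator (the prompt number is ours), starting from the initial
configuration with three disks on peg 1:

USER(40): (move-disk 1 3 '((1 2 3) nil nil))
((2 3) NIL (1))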


We are now ready to capture the logic of our recursive solution into the following code:

;;

;; Subgoal: moving a tower from one peg to another

;;

(defun move-tower (N from aux to hanoi)

"In the HANOI configuration, move the top N disks from peg FROM to peg TO using peg AUX as an auxiliary peg."

(if (= N 1)

(move-disk from to hanoi)

(move-tower (- N 1) aux from to

(move-disk from to

(move-tower (- N 1) from to aux hanoi)))))

We use the driver function solve-hanoi to start up the recursion:

;;

;; Driver function

;;

(defun solve-hanoi (N)

"Solve the Tower of Hanoi problem."

(move-tower N 1 2 3 (make-hanoi (make-complete-tower N) nil nil)))

(defun make-complete-tower (N)

"Create a tower of N disks."


(make-complete-tower-aux N (make-empty-tower)))

(defun make-complete-tower-aux (N A)

"Push a complete tower of N disks on top of tower A."

(if (zerop N)

A

(make-complete-tower-aux (1- N) (tower-push A N))))

To solve a Tower of Hanoi problem with 3 disks, we call (solve-hanoi 3):

USER(50): (solve-hanoi 3)

(NIL NIL (1 2 3))

All we get back is the final configuration, which is not as interesting as knowing the sequence of moves
taken by the algorithm. So we trace usage of the move-disk operator:

USER(51): (trace move-disk)

(MOVE-DISK)

USER(52): (solve-hanoi 3)

0: (MOVE-DISK 1 3 ((1 2 3) NIL NIL))

0: returned ((2 3) NIL (1))


0: (MOVE-DISK 1 2 ((2 3) NIL (1)))

0: returned ((3) (2) (1))

0: (MOVE-DISK 3 2 ((3) (2) (1)))

0: returned ((3) (1 2) NIL)

0: (MOVE-DISK 1 3 ((3) (1 2) NIL))

0: returned (NIL (1 2) (3))

0: (MOVE-DISK 2 1 (NIL (1 2) (3)))

0: returned ((1) (2) (3))

0: (MOVE-DISK 2 3 ((1) (2) (3)))

0: returned ((1) NIL (2 3))


0: (MOVE-DISK 1 3 ((1) NIL (2 3)))

0: returned (NIL NIL (1 2 3))

(NIL NIL (1 2 3))

From the trace we can actually read off the sequence of operator applications necessary for one to achieve
the solution configuration. This is good, but not good enough. We want to know why each move is being
taken. So we trace also the high-level subgoals:

USER(53): (trace move-tower)

(MOVE-TOWER)
USER(54): (solve-hanoi 3)

0: (MOVE-TOWER 3 1 2 3 ((1 2 3) NIL NIL))


1: (MOVE-TOWER 2 1 3 2 ((1 2 3) NIL NIL))

2: (MOVE-TOWER 1 1 2 3 ((1 2 3) NIL NIL))


3: (MOVE-DISK 1 3 ((1 2 3) NIL NIL))

3: returned ((2 3) NIL (1))


2: returned ((2 3) NIL (1))

2: (MOVE-DISK 1 2 ((2 3) NIL (1)))


2: returned ((3) (2) (1))
2: (MOVE-TOWER 1 3 1 2 ((3) (2) (1)))

3: (MOVE-DISK 3 2 ((3) (2) (1)))


3: returned ((3) (1 2) NIL)

2: returned ((3) (1 2) NIL)


1: returned ((3) (1 2) NIL)

1: (MOVE-DISK 1 3 ((3) (1 2) NIL))


1: returned (NIL (1 2) (3))

1: (MOVE-TOWER 2 2 1 3 (NIL (1 2) (3)))


2: (MOVE-TOWER 1 2 3 1 (NIL (1 2) (3)))

3: (MOVE-DISK 2 1 (NIL (1 2) (3)))


3: returned ((1) (2) (3))

2: returned ((1) (2) (3))


2: (MOVE-DISK 2 3 ((1) (2) (3)))
2: returned ((1) NIL (2 3))

2: (MOVE-TOWER 1 1 2 3 ((1) NIL (2 3)))


3: (MOVE-DISK 1 3 ((1) NIL (2 3)))

3: returned (NIL NIL (1 2 3))


2: returned (NIL NIL (1 2 3))

1: returned (NIL NIL (1 2 3))


0: returned (NIL NIL (1 2 3))

(NIL NIL (1 2 3))

The trace gives us information as to what subgoals each operator application is trying to establish. For
example, the top level subgoals are the following:

0: (MOVE-TOWER 3 1 2 3 ((1 2 3) NIL NIL))


1: (MOVE-TOWER 2 1 3 2 ((1 2 3) NIL NIL))

...
1: returned ((3) (1 2) NIL)

1: (MOVE-DISK 1 3 ((3) (1 2) NIL))


1: returned (NIL (1 2) (3))

1: (MOVE-TOWER 2 2 1 3 (NIL (1 2) (3)))


...

1: returned (NIL NIL (1 2 3))


0: returned (NIL NIL (1 2 3))

They translate directly to the following: In order to move a tower of 3 disks from peg 1 to peg 3 using peg
2 as a buffer (i.e. (MOVE-TOWER 3 1 2 3 ((1 2 3) NIL NIL))) we do the following:

1. "1: (MOVE-TOWER 2 1 3 2 ((1 2 3) NIL NIL))"


Move a tower of 2 disks from peg 1 to peg 2 using peg 3 as a buffer. The result of the move is the
following:
"1: returned ((3) (1 2) NIL)"
2. "1: (MOVE-DISK 1 3 ((3) (1 2) NIL))"
Move a top disk from peg 1 to peg 3. The result of this move is:
"1: returned (NIL (1 2) (3))"
3. "1: (MOVE-TOWER 2 2 1 3 (NIL (1 2) (3)))"
Move a tower of 2 disks from peg 2 to peg 3 using peg 1 as a buffer, yielding the following
configuration:
"1: returned (NIL NIL (1 2 3))"
LISP Tutorial Lecture 4: Imperative
Programming
Imperative vs Functional Programming
The fragment of LISP we have learned so far is purely functional. We program by defining mathematical
functions. Programming this way has the benefit of referential transparency, which means that an
experession has the same value and the same behavior, no matter where it is located, and how many times
it is invoked. For example, the following function always returns 4 when an input 2 is supplied:

(defun double (x) (* 2 x))

Such programs are easy to develop and debug. The only effect of
calling a function is the returned value. Functions become a lot
harder to understand and debug when we allow them to produce
side effects, that is, affecting subsequent computation in ways
other than returning a value. This style of programming, in
which side effect is not only permissible but is also the primary
means by which we program, is called imperative programming.
This is not something new, but is in fact the very kind of
programming habits that you have acquired since you learned
your first programming language (e.g. C/C++, Pascal, Java).
Because of this, we intentionally leave the topic of imperative
programming to the end of this series of tutorials. We do so
since you already know so much about imperative
programming, and since, in my taste, teaching it before
teaching functional programming would only reinforce some of
the bad habits programmers usually have, and thereby luring
you to write C code with LISP, which, of course, makes it harder
to finish your assignment on time.
Variables
Imperative programming is made possible by the notion of program state. The behavior of an expression
depends on the exact state a program is in. In turn, the state of a program is defined by, guess what,
variables.

Suppose we want to create a global counter to keep track of the times some computational events occur.
Common LISP allows us to define global variables as follows.

USER(7): (defparameter *counter* 0 "Counter to keep track of ....")

*COUNTER*

USER(8): (setf *counter* (1+ *counter*))

1

USER(9): *counter*
1

USER(10): (setf *counter* (1+ *counter*))

2

USER(11): *counter*
2

We use the defparameter special form to define a global variable *counter*, with zero as its initial value.
We also decorate the declaration by a documentation string. We then use the setf special form to assign
new values to the variable. The setf special form returns the new value of the variable. Lastly, evaluating
the variable gives its current value. Notice that *counter* is evaluated to two different values at different
point of time. This is something you will never see if you work with purely functional programs.

Suppose we actually want this counter to reset to zero when it reaches a user defined threshold. We would
encapsulate this service into the following definitions:

(defparameter *counter* 0

"Counter to keep track of ....")

(defparameter *threshold* 4
"Counter will be reset to zero when its value reaches this threshold.")

(defun counter-inc ()

"Increment global counter by one."


(if (>= (1+ *counter*) *threshold*)
(setf *counter* 0)

(setf *counter* (1+ *counter*))))

(defun counter-value ()

"Return current value of global counter."


*counter*)

(defun counter-threshold ()
"Return current threshold of global counter."

*threshold*)

USER(24): (counter-value)
0

USER(25): (counter-inc)
1
USER(26): (counter-inc)

2
USER(27): (counter-inc)
3

USER(28): (counter-threshold)
4

USER(29): (counter-inc)
0

Sequencing
With the possibility of state alteration, we also need a new kind of programming construct --- sequencing.
This allows us to perform multiple state-altering operations, one after another. For example, we might
want to introduce another function to our global counter abstraction: a function to modify the value of
*threshold* on the fly:

(defun counter-set-threshold (th)

"Set counter threshold to TH."


(setf *threshold* th)
(if (>= *counter* *threshold*)

(setf *counter* 0))


*threshold*)

The function sets the global value of *threshold* to a new value, and then checks if the current counter
value has exceeded the new threshold. If the check succeeds, then the counter is reset to zero. Lastly, the
function evaluates and returns the last expression, which is simply the new value of *threshold*. Notice
that in this example the body of a function is no longer one expression, but a sequence of expressions.
They are evaluated in order, and the value of the last expression is returned as a value of the sequence.
The effect is demonstrated as in the following trace:

USER(2): (counter-value)
0

USER(3): (counter-inc)
1

USER(4): (counter-inc)
2

USER(5): (counter-inc)
3

USER(6): (counter-inc)
0

USER(7): (counter-inc)
1

USER(8): (counter-inc)
2

USER(9): (counter-set-threshold 2)
2

USER(10): (counter-value)
0

In fact, forms like let, let*, when and unless all admit sequences as their bodies. The cond form allows
sequencing in the body of each alternative. The exception is if, due to its syntax. However, you can
always get around by introducing a let form in the alternatives or by using other conditional forms.

Before we move on, notice that we can rewrite the counter-inc function as follows:

(defun counter-inc ()
"Increment global counter by one."

(if (>= (1+ *counter*) *threshold*)

(setf *counter* 0)
(incf *counter*)))
The (incf x) form is simply a shorthand for (setf x (+ x 1)). In general, the following are useful
shorthands when developing imperative programs:

Shorthand Meaning
(incf x delta) (setf x (+ x delta))
(incf x) (incf x 1)
(decf x delta) (setf x (- x delta))
(decf x) (decf x 1)
(push e x) (setf x (cons e x))
(pop x) (let ((e (first x))) (setf x (rest x)) e)
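An illustrative interaction with push and pop on a global variable (prompt numbers are ours):

USER(12): (defparameter *stack* nil "A global stack.")
*STACK*
USER(13): (push 'a *stack*)
(A)
USER(14): (push 'b *stack*)
(B A)
USER(15): (pop *stack*)
B
USER(16): *stack*
(A)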

Exercise: Create a global stack abstraction, with interface functions for performing pushing and poping.

Closures
The above method for defining an abstraction is not good enough. For one, the global variables are not
encapsulated. There is nothing to forbid a careless user from changing the value of *counter* in ways that
violate the invariants of the counter abstraction. To enforce encapsulation, we make use of local variables and
closures.

As one would expect, the names introduced by a let form turn out not to be bindings to some
immutable values. The names refer to local variables. Such variables follow typical lexical scoping rules,
so that LISP always looks up the innermost variable definition when a variable name is to be evaluated.

What is more interesting is that, due to the ability to return functions as values, the local variable has a
life span longer than the expression defining it. Consider the following example:

USER(20): (setf inc (let ((counter 0))

#'(lambda ()

(incf counter))))
#<Interpreted Function ...>

USER(21): (funcall inc)
1

USER(22): (funcall inc)
2

USER(23): (funcall inc)
3
We assign a value to the global variable inc. That value is obtained by first defining local variable counter
using let, and then within the lexical scope of the local variable, a lambda expression is evaluated,
thereby creating an anonymous function, which is returned as a value to be assigned to inc. The most
interesting part is in the body of that lambda expression --- it increments the value of the local variable
counter! When the lambda expression is returned, the local variable persists, and is accessible only
through the anonymous function.

The thing to remember from this example is that, in other kinds of languages like Pascal and C/C++, the
lexical scope of a local variable somehow coincides with its life span. After execution passes beyond the
boundary of a lexical scope, all the local variables defined within it cease to exist. This is not true in
languages supporting higher-order functions, in particular, expressions that return functions as values.
However, that is not to say that lexical scoping does not hold in such languages. On the contrary, lexical
scoping is enforced strictly, and therefore the only place from which you can alter the value of counter is,
as every true believer of lexical scoping would agree, within the lexical scope of the variable --- the lambda
expression. As a result, the counter state is effectively encapsulated. The only way to modify it is by going
through the anonymous function stored in inc. The technical term for this thing stored in inc, a thing
that captures at the same time both the definition of a function and the variables referenced within the
function body, is a function closure.
What if we want to define multiple interface functions for the encapsulated counter? Simple, just return all
of them:

USER(34): (setf list-of-funcs (let ((counter 0))

(list #'(lambda ()

(incf counter))
#'(lambda ()

(setf counter 0)))))

(#<Interpreted Function ...>
 #<Interpreted Function ...>)

USER(35): (setf inc (first list-of-funcs))
#<Interpreted Function ...>

USER(36): (setf reset (second list-of-funcs))
#<Interpreted Function ...>

USER(37): (funcall inc)
1

USER(38): (funcall inc)
2

USER(39): (funcall inc)
3

USER(40): (funcall reset)
0

USER(41): (funcall inc)
1
Exercise: Define an encapsulated global stack.

Poor-Man's Object
Having only one instance of this encapsulated counter abstraction is not good enough. Imagine cases
when we need multiple instances of such counters, each having its own state. Well, the answer is simple:
we just evaluate the function-returning let form every time we need a new instance of counter. Since a
new local variable will be created every time we evaluate the let form, the function closure that is
returned each time will be associated with a different incarnation of the local variable counter. To
automate this process, of course we define a constructor for such closures:

(defun make-counter ()

"Create a new instance of a counter object."


(let ((counter 0))

(list #'(lambda ()

(incf counter))

#'(lambda ()

(setf counter 0)))))

Notice how this allows us to maintain independently encapsulated states:

USER(44): (setf c1 (make-counter))

(#<Interpreted Function ...>
 #<Interpreted Function ...>)

USER(44): (setf c2 (make-counter))
(#<Interpreted Function ...>
 #<Interpreted Function ...>)

USER(45): (funcall (first c1))
1

USER(46): (funcall (first c1))
2

USER(47): (funcall (second c1))
0

USER(48): (funcall (first c2))
1

USER(49): (funcall (first c2))
2

USER(50): (funcall (first c2))
3

USER(51): (funcall (first c1))
1

USER(52): (funcall (first c1))
2

USER(53): (funcall (second c2))
0

USER(54): (funcall (first c1))
3
To make invoking counter interface functions easier, we can define the following shorthands:

(defun counter-increment (counter)

"Increment counter."

(funcall (first counter)))

(defun counter-reset (counter)

"Reset counter."

(funcall (second counter)))

And now we have an all-rounded counter abstraction.

USER(56): (counter-increment c1)

4
USER(57): (counter-reset c1)
0
The moral of this story is that function closures are encapsulated states. They are a poor-man's version of
objects, which, after all, are nothing but encapsulated states. (Yes, I am aware that Common LISP has a
Common LISP Object System (CLOS), and there is no point of using closures in this manner if all we want
is simply object orientation. But No, I want you to understand what higher-order functions buy you, and
how they serve as a building block for other advanced programming language constructs. Lastly, I don't
want you to spend time struggling to learn yet another object system.)

Exercise: Implement a constructor for your encapsulated stack abstraction. Define appropriate
shorthands for convenient invocation of interface functions.
Iterative Constructs
To round off this tutorial, we discuss something that you know so much about --- iterations. We start with
something very simple:

USER(1): (dotimes (i 10) (print i))

0
1
2
3
4
5
6
7
8
9
NIL

The (dotimes (i n) body) form executes body N times. In addition, it defines a local variable i, which
receives an integer value ranging from 0 to n-1. The body of the form could be a sequence of expressions.
The form returns NIL. The (print x) form prints the LISP object x to the console.

A similar construct is defined for iterating through a list:

USER(2): (dolist (i '(a b c d e)) (print i))

A
B
C
D
E
NIL

The most general looping construct is the do form:

(defun fibonacci (n)

"Compute the N'th Fibonacci number."

(do ((f1 0 f2)


(f2 1 (+ f1 f2))

(i 0 (1+ i)))

((= i n) f2)

; empty body

))

The first list following the do keyword is a list of local variables and their initializations and update forms.
Each member of this list has the format (var init-form update-form). Within this loop are defined
three local variables f1, f2 and i. The three initialization forms 0, 1 and 0 are evaluated first, and then
assigned to the three locals simultaneously. In each subsequent iteration, the three update forms f2, (+
f1 f2) and (1+ i) will be evaluated first, and then the values are assigned to the three local variables
simultaneously. The second list following the do keyword (i.e. ((= i n) f2)) has the general format of
(exit-condition return-form). The exit condition (= i n) is checked after the initialization is done, or
after the update is performed in subsequent iterations. If the test succeeds, then the return form is
evaluated, and its value is returned. Otherwise, the body of the do form, which in this case is empty, will
be executed with the updated value of the local variables.
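With this definition, the sequence of values returned for n = 0, 1, 2, 3, 4, 5 is 1, 1, 2, 3, 5, 8. An
illustrative interaction (prompt numbers are ours):

USER(3): (fibonacci 5)
8
USER(4): (fibonacci 0)
1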

Indefinite looping can be achieved using the (loop body) form. One may exit from such a loop using the
return-from form:

(defun fib (n)

"Compute the N'th Fibonacci number."

(let ((f1 0)

(f2 1)

(i 0))

(loop
(if (= i n)

(return-from fib f2))

; empty body

(psetf f1 f2

f2 (+ f1 f2)

i (1+ i)))))

The fib function has the same meaning as the fibonacci function coded in terms of do. The psetf is a
variation of setf that implements "parallel" assignment.
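A small illustration of the difference between setf and psetf (prompt numbers are ours): with setf the
assignments take effect one after another, while with psetf the right hand sides are all evaluated before
any assignment happens:

USER(5): (let ((a 1) (b 2)) (setf a b b a) (list a b))
(2 2)
USER(6): (let ((a 1) (b 2)) (psetf a b b a) (list a b))
(2 1)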
