Newsgroups: comp.lang.scheme
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!MathWorks.Com!europa.eng.gtefsd.com!newsxfer.itd.umich.edu!gumby!yale!yale!yale.edu!newsserver.jvnc.net!nntpserver.pppl.gov!princeton!nimaster.princeton.edu!blume
From: blume@beth.cs.princeton.edu (Matthias Blume)
Subject: Re: parsing lambda calculus
In-Reply-To: vanmeule@netcom15.netcom.com's message of Sun, 11 Sep 1994 20:42:28 GMT
Message-ID: <BLUME.94Sep11174715@beth.cs.princeton.edu>
Originator: news@nimaster
Sender: news@Princeton.EDU (USENET News System)
Nntp-Posting-Host: beth.cs.princeton.edu
Organization: Princeton University
References: <VANMEULE.94Sep10053951@netcom7.netcom.com>
	<GREGS.94Sep10093703@delphi.ccs.neu.edu>
	<VANMEULE.94Sep11134228@netcom15.netcom.com>
Date: Sun, 11 Sep 1994 21:47:15 GMT
Lines: 72

In article <VANMEULE.94Sep11134228@netcom15.netcom.com> vanmeule@netcom15.netcom.com (Andre van Meulebrouck) writes:

   In article <GREGS.94Sep10093703@delphi.ccs.neu.edu>
   gregs@delphi.ccs.neu.edu (Gregory T Sullivan) writes:

   [. . .]

   Thanks for the response--much appreciated.

   >2 notes: (1) the convention is that the body of a lambda extends as far as
   >possible, so, in your example, any reduction will take place within the
   >outermost Lf.  

   Hmm.  Well, I got the same result you got, but I had never heard of
   the "convention" you mention. The word convention makes me nervous.
   I'd like to think things are more rigorous than to be based on a
   convention.  I'd like to think the parsing of lambda calculus is a
   well defined art with parsing rules just like Boolean algebra has
   (i.e. AND binds tighter than OR).

   Maybe it's just a matter of how you wish to think about things, but I
   think using the rule: "application binds tighter than abstraction"
   [Gunter 92: Semantics of Programming Languages] subsumes the
   convention you cited.

I agree with you: it is not a matter of convention how to do the
reductions, but a matter of convention how to write the expressions.
There is no fixed standard for the latter, and different authors use
different notations.  The one I like best goes as follows:

Let \ stand for ``lambda''.  Furthermore, let V* be a countably
infinite set of ``variables''.  The following production rules define
the grammar of lambda expressions:

start symbol: E

(i)	E => x		where x in V*			(variable)
(ii)	E => \x.E	where x in V*			(abstraction)
(iii)	E => (E E)					(combination)

This is sufficient, but for convenience you can add the relaxation
rules:

	E => (L)
	L => E E
	L => L E

and the identity:

	(E1 E2 ... En) == (( ... (E1 E2) ...) En)

With this syntax there is exactly one parse tree for every
lambda expression, so there is no ambiguity.  (Note that no
parentheses are involved in the rule for abstractions.)
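To see how directly the grammar above turns into a parser, here is a
minimal recursive-descent sketch (in Python rather than Scheme, purely
for illustration; all names are my own invention).  Rule (ii) recurses
on E with no closing delimiter, which is exactly why an abstraction
body extends as far to the right as possible, and the relaxation rules
make (E1 E2 ... En) associate to the left:

```python
def tokenize(s):
    # Split into the tokens '\', '.', '(', ')' and variable names.
    tokens, i = [], 0
    while i < len(s):
        c = s[i]
        if c.isspace():
            i += 1
        elif c in '\\.()':
            tokens.append(c)
            i += 1
        else:
            j = i
            while j < len(s) and (s[j].isalnum() or s[j] == '_'):
                j += 1
            tokens.append(s[i:j])
            i = j
    return tokens

def parse(tokens):
    expr, rest = parse_expr(tokens)
    assert not rest, "trailing tokens"
    return expr

def parse_expr(ts):
    if ts[0] == '\\':                 # rule (ii): \x.E -- no parens,
        x, dot = ts[1], ts[2]         # so the body runs to the end
        assert dot == '.'
        body, rest = parse_expr(ts[3:])
        return ('lam', x, body), rest
    if ts[0] == '(':                  # rule (iii) plus the relaxation:
        e, rest = parse_expr(ts[1:])  # (E1 E2 ... En) groups leftward
        while rest and rest[0] != ')':
            e2, rest = parse_expr(rest)
            e = ('app', e, e2)
        return e, rest[1:]            # drop ')'
    return ('var', ts[0]), ts[1:]     # rule (i): variable

parse(tokenize('\\f.\\x.(f (f x))'))
# => ('lam', 'f', ('lam', 'x',
#      ('app', ('var', 'f'), ('app', ('var', 'f'), ('var', 'x')))))
```

Because each rule starts with a distinct token, the parser never has a
choice to make, which is the uniqueness of the parse tree in action.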

The problem of which beta-redex to choose while doing reductions is
an entirely separate issue.  It is known that leftmost-outermost
reduction (i.e. ``normal order'' evaluation) always arrives at a
normal form, provided one exists.  Normal forms are uniquely
determined up to alpha-conversion (renaming of bound variables).
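The classic witness for this is (\x.y) applied to the diverging term
Omega = (\x.(x x))(\x.(x x)): normal order discards the argument and
halts, while reducing the argument first loops forever.  Here is a
small sketch (Python, names mine) of a leftmost-outermost reducer on
the tuple representation ('var', x) / ('lam', x, body) / ('app', f, a).
Substitution is deliberately naive -- no alpha-renaming -- which is
safe for the closed example term below, where no capture can occur:

```python
def subst(e, x, v):
    # Replace free occurrences of x in e by v (naive: assumes no
    # variable capture, which holds for the example term here).
    tag = e[0]
    if tag == 'var':
        return v if e[1] == x else e
    if tag == 'lam':
        return e if e[1] == x else ('lam', e[1], subst(e[2], x, v))
    return ('app', subst(e[1], x, v), subst(e[2], x, v))

def step(e):
    # One leftmost-outermost step; returns None at normal form.
    if e[0] == 'app':
        f, a = e[1], e[2]
        if f[0] == 'lam':                  # contract the outermost redex
            return subst(f[2], f[1], a)    # *before* touching the argument
        s = step(f)
        if s is not None:
            return ('app', s, a)
        s = step(a)
        if s is not None:
            return ('app', f, s)
    elif e[0] == 'lam':
        s = step(e[2])
        if s is not None:
            return ('lam', e[1], s)
    return None

def normalize(e, fuel=1000):
    while fuel > 0:
        n = step(e)
        if n is None:
            return e
        e, fuel = n, fuel - 1
    raise RuntimeError('no normal form found within fuel limit')

omega = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))),
                ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))))
term = ('app', ('lam', 'x', ('var', 'y')), omega)
normalize(term)   # => ('var', 'y')
```

An applicative-order reducer would recurse into omega first and
exhaust its fuel instead of ever returning.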

This property might lead some of us to conclude that normal order
evaluation is some kind of convention.  Interestingly, it is quite
the opposite: most real programming languages use call-by-value,
which is strongly related to ``applicative order evaluation''
(innermost reduction: arguments are reduced before the function is
applied).  Only a few very-high-level modern functional languages
use lazy (normal order) evaluation.

BTW, Scheme uses applicative order evaluation (call-by-value).
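The practical consequence of call-by-value is easy to demonstrate
(sketched in Python; the effect is the same in Scheme): the argument
expression is evaluated even when the function never uses its value.

```python
# Record each time the argument expression is actually evaluated.
calls = []

def arg():
    calls.append('evaluated')
    return 42

def const_y(x):
    # Ignores its argument entirely, like (\x.y).
    return 'y'

const_y(arg())            # under call-by-value, arg() runs anyway
assert calls == ['evaluated']
```

Under normal order (or Scheme's `delay`/`force`), the ignored
argument would never be computed at all.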

--
-Matthias
