Newsgroups: comp.lang.functional,comp.lang.scheme
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsfeed.cit.cornell.edu!newsstand.cit.cornell.edu!news.kei.com!newsfeed.internetmci.com!news.msfc.nasa.gov!elroy.jpl.nasa.gov!swrinde!cs.utexas.edu!uwm.edu!vixen.cso.uiuc.edu!usenet.ucs.indiana.edu!news.cs.indiana.edu!jeschke@cs.indiana.edu
From: "Eric Jeschke" <jeschke@cs.indiana.edu>
Subject: Re: Speed of FP languages, laziness, "delays"
Message-ID: <199511101703.MAA01632@piano.cs.indiana.edu>
Organization: Computer Science, Indiana University
References: <45roal$45a@roar.cs.utexas.edu> <466ug9$2fvu@news-s01.ny.us.ibm.net> <46ckqs$8ej@jive.cs.utexas.edu> <1995Oct24.114456.27640@news.cs.indiana.edu> <hbaker-2510951004460001@10.0.2.15>
Date: Fri, 10 Nov 1995 12:03:40 -0500 (EST)
Lines: 111
Xref: glinda.oz.cs.cmu.edu comp.lang.functional:6660 comp.lang.scheme:14276

hbaker@netcom.com (Henry Baker) writes:

:In article <1995Oct24.114456.27640@news.cs.indiana.edu>, "Eric Jeschke"
:<jeschke@cs.indiana.edu> wrote:

:> A side note about the laziness issue: I tend to think that it is
:> underappreciated, especially with regard to the expressiveness that it
:> lends to programs, by the general programming populace.  I don't think
:> you can fully appreciate its nuances until you have programmed one or
:> more large applications in a lazy language.  The pedagogical examples
:> presented in most programming courses (e.g. streams) give only the
:> faintest glimmer of the possibilities, and even then the explicit
:> manipulation required negates much of these programming benefits.
:> Current popular VHLLs owe much of their popularity due to the fact that
:> there is not as great a conceptual leap between programming imperative
:> styles in different languages.

:We're all ears.  How about some concrete examples??  If you've got a catalog
:of really good examples of why laziness is important, then this should be
:worth a good conference/journal article.

:Thanks in advance.

[Sorry for the late reply--I don't keep up with this group daily]

OK.  Here are a few examples of small things that I have noticed after
coding in a lazy language for a while.  My guess is that most readers
of these groups will think: "Oh, I can do that in <insert favorite
strict symbolic language> by using <insert explicit delayed evaluation
trick>."  Of course you can; I'm not arguing that.  The crux of the
point I was trying to make is that when you combine these and other
artifacts that arise easily and naturally under a lazy language, you
end up with a programming style that is distinctly different in
character, sometimes subtly so, but definitely noticeable.  I agree
that this would make an interesting journal article.  Perhaps if I had
a few more examples...

A similar argument can be made for other paradigms; e.g. a distinct
programming style develops when using a truly object-oriented language,
as opposed to coding in an OO style in a non-OO language.  However, I
don't know enough about these to make a good analogy here.

Caveat: these examples arise from programming in Daisy, a sort of lazy 
counterpart to Scheme.  Some of these examples may not apply to other
lazy languages that have quite different attributes (e.g. strong typing).
I have tried to pick only things that are primarily linked to laziness
and not just a functional style.  This list is hardly comprehensive;
if you have another good example please jump in and tack it on.

[1] It is routine to de-structure a function argument before knowing
    the actual structure passed, because the binding is delayed.
    This occurs frequently in mapping functions where in strict languages
    you'd test for a nil argument right away before doing anything else.
    This results in multiple conditionals, separated by _lets_, where in
    a lazy language you generally end up with fewer conditionals--
    often one local binding followed by one conditional.

    in Scheme
    --------------------------
    (define foo (lambda (l)
       (if (null? l) ...
           (let ((x1 (caar l))  (y1 (cadar l))
                 (x2 (caadr l)) (y2 (cadadr l)))
                (if (< x1 0) ...
                   (let ...

    in Daisy
    --------------------------
    foo = \l.
        let:[ [[x1 y1] [x2 y2]] l
             if:[ nil?:l  ...
                  neg?:x1 ...
                ]
            ]

    [ In a pattern-matching language it's even shorter: ]
    foo  [] = ...
    foo  [[x1 y1] [x2 y2]] = ...

[2] I often create local bindings to large or (potentially) erroneous
    computations before I decide whether to use them.  For example, I
    might create a binding to the recursion of a function over the tail
    of its argument so that I can conveniently refer to it later in
    the body in multiple places with a single identifier, rather than
    explicitly expressing the same recursion over and over.
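    To make [2] concrete for readers without a lazy language at hand,
    here is a rough Python sketch using an explicit thunk (a zero-argument
    lambda) for what a lazy language binds implicitly; the function name
    is invented for illustration:

    ```python
    def sum_until_negative(xs):
        """Sum a list, stopping at the first negative element.

        The recursion over the tail is bound once, up front, as a thunk.
        Because it is delayed, it costs nothing in the branch that never
        uses it; in a lazy language the same binding needs no lambda."""
        if not xs:
            return 0
        rest = lambda: sum_until_negative(xs[1:])  # delayed recursion
        if xs[0] < 0:
            return 0           # `rest` is never forced on this branch
        return xs[0] + rest()  # forced only here
    ```

    In Daisy the binding to the recursion would simply be a `let`, with
    the delay supplied by the language rather than written by hand.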

[3] A similar idea, in a different application, is "data continuations".
    I frequently pass data continuations around between functions--
    the data continuation is the result of the rest of the computation.
    This allows me to avoid costly functional appends and instead
    just cons data on to the front of the result.
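    A rough Python sketch of the data-continuation idea, representing a
    lazy list as either None or a pair of (head, thunk-of-tail); all
    names here are invented for illustration:

    ```python
    # A lazy cons cell: (head, thunk producing the tail).
    NIL = None

    def cons(h, tail_thunk):
        return (h, tail_thunk)

    def to_list(lz):
        """Force a lazy list into an ordinary Python list."""
        out = []
        while lz is not None:
            h, t = lz
            out.append(h)
            lz = t()
        return out

    def flatten(tree, k=lambda: NIL):
        """Flatten a nested list, passing `k`, the data continuation:
        a thunk producing the rest of the overall result.  Each leaf is
        consed onto the front; no append is ever performed."""
        if not isinstance(tree, list):   # a leaf
            return cons(tree, k)
        result = k
        # Fold from the right so elements emerge in order; each step
        # delays flattening a subtree onto the continuation built so far.
        for sub in reversed(tree):
            result = (lambda s, r: lambda: flatten(s, r))(sub, result)
        return result()
    ```

    Here `flatten` never copies a partial result; it only ever conses
    onto the (suspended) remainder, which is the point of [3].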

[4] Prevalent use of streams.  Essentially the inverse of [3]--one can
    create a binding in the tail of a list to a future computation,
    which makes expressing streams trivial.  Combined with transparent
    stream coercion, streams are commonplace in lazy languages and rarely
    used in others, except for special "I/O streams".
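    Python generators give a fair approximation of streams (though
    without the transparent coercion a lazy language provides); a
    minimal sketch, with the usual textbook examples:

    ```python
    from itertools import islice

    def naturals(n=0):
        """An infinite stream of naturals: the suspended remainder of
        the generator plays the role of the lazy tail."""
        while True:
            yield n
            n += 1

    def fibs():
        """The classic stream example: the Fibonacci numbers."""
        a, b = 0, 1
        while True:
            yield a
            a, b = b, a + b
    ```

    Consumers pull only as much of the stream as they need, e.g.
    list(islice(fibs(), 8)); in a lazy language the same effect falls
    out of ordinary list construction.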

Note that these are all just subtle variations on the tacit assumption
of laziness.

To respond to Baker's original question, I think that a Scheme that
provides explicit delays and _implicit_ forces would provide much of
the power needed to express these kinds of things semi-conveniently.
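The flavor of explicit delay with implicit force can be sketched in a
few lines of Python; the class and helper names are invented here, with
Scheme's delay/force as the model:

```python
class Delay:
    """An explicit delay: wraps a thunk, memoizes on first force
    (like a Scheme promise)."""
    _UNSET = object()

    def __init__(self, thunk):
        self._thunk = thunk
        self._value = Delay._UNSET

    def force(self):
        if self._value is Delay._UNSET:
            self._value = self._thunk()
        return self._value

def strict(x):
    """The implicit half: primitives call this on their arguments,
    so call sites never need to write `force` themselves."""
    return x.force() if isinstance(x, Delay) else x

def add(a, b):
    # A "primitive" that forces implicitly; delayed and ordinary
    # values mix freely at the call site.
    return strict(a) + strict(b)
```

With forcing pushed into the primitives, delayed values become
interchangeable with ordinary ones, which is most of what the examples
above rely on.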

For the curious, what was the motivation for the original question?
How many working Schemes actually provide implicit force?


-- 
Eric Jeschke                      |          Indiana University
jeschke@cs.indiana.edu            |     Computer Science Department
