Newsgroups: misc.education,comp.lang.misc,comp.lang.logo,comp.lang.lisp,comp.lang.scheme
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!travelers.mail.cornell.edu!news.kei.com!simtel!news.sprintlink.net!mv!moreira.mv.com!alberto
From: alberto@moreira.mv.com (Alberto C Moreira)
Subject: Re: Compiler abstractions [was: Wanted: programming language for 9 yr old]
Message-ID: <alberto.460.00013CCC@moreira.mv.com>
Nntp-Posting-Host: moreira.mv.com
Sender: usenet@mv.mv.com (Paul Hurley)
Organization: MV Communications, Inc.
Date: Fri, 6 Oct 1995 05:14:14 GMT
References: <43v5qb$sb8@blackice.winternet.com> <44c8fh$6m3@onramp.arc.nasa.gov> <BLUME.95Oct4105052@atomic.cs.princeton.edu>
X-Newsreader: Trumpet for Windows [Version 1.0 Rev A]
Lines: 83
Xref: glinda.oz.cs.cmu.edu comp.lang.misc:23324 comp.lang.logo:2159 comp.lang.lisp:19365 comp.lang.scheme:13919

In article <BLUME.95Oct4105052@atomic.cs.princeton.edu> blume@atomic.cs.princeton.edu (Matthias Blume) writes:

>It is a good idea to read one another's articles to the end before
>spouting off with ones counter arguments.  I was not talking about
>floating point here.  Moreover, I said _myself_ that floating point
>and loss of precision _are_ more complicated an issue.

     My point was that knowledge of hardware-level representation is a
     good idea; the integer example touches on it, but floating point
     gets much closer to the point I was advocating.

>There is no excuse on today's hardware for not raising an exception on
>integer overflow, because almost all processors implement this in
>hardware and don't slow down the common case of no overflow.

      I have mixed feelings about this. If I depend on the machine's
      interrupt-handling speed, I'm not too sure I want it to waste
      time interrupting. On the average RISC processor a test for
      overflow is pipelined and takes 0 cycles; an interrupt takes
      a number of them, plus two context switches.
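      Such an explicit test can be sketched in C (the helper name
      add_checked is just mine for illustration); the branch is
      predictable and almost never taken, so the no-overflow case
      costs essentially nothing:

```c
#include <limits.h>

/* Signed addition with an explicit overflow test instead of a trap.
   The branch is almost never taken, so on a pipelined processor the
   common (no-overflow) case costs essentially nothing. */
int add_checked(int a, int b, int *sum)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return 0;               /* would overflow; no interrupt needed */
    *sum = a + b;
    return 1;
}
```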

>Did I say anything to the contrary?  My point is that such things are
>also understandable if you don't know the details of IEEE standards.
>If I remember correctly, then we are talking about teaching
>programming to nine-year olds, and I hope you are not seriously
>suggesting to assault 9-year olds with such a document.

     At this point, the subject has evolved far beyond 9-year-olds!

>It can be explained quite simply why floating point computation can
>and will loose precision.  Everybody learns about long division in
>school, and this pretty much suffices to talk about the basics.

    True enough. But that's not the point I was trying to make. Rather,
    in any sort of real programming it's not enough to know that floating
    point loses precision; one must also know how to minimize that loss.
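    One standard technique for minimizing that loss (my example, not
    anything mentioned above) is compensated summation a la Kahan,
    which carries the rounding error of each addition along in a
    second variable:

```c
#include <stddef.h>

/* Kahan compensated summation: the variable c accumulates the
   rounding error lost in each addition and feeds it back in on the
   next term, keeping the total error near one ulp instead of
   letting it grow with the number of terms. */
double kahan_sum(const double *x, size_t n)
{
    double sum = 0.0, c = 0.0;
    for (size_t i = 0; i < n; i++) {
        double y = x[i] - c;        /* correct the incoming term */
        double t = sum + y;
        c = (t - sum) - y;          /* what the addition just lost */
        sum = t;
    }
    return sum;
}
```

    Summing ten copies of 0.1 naively in doubles gives
    0.9999999999999999; the compensated version returns 1.0.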

>The latter is actually trivial and follows directly from the
>limitations of the binary (or decimal, or ...) representation of
>numbers.  You do _not_ need to mention hardware in order to explain
>it.

    Except that the flow is reversed: we represent numbers the way we do
    only because of the limitations imposed by the hardware and the desire
    for a representation that's standard across a fair number of hardware
    platforms. Yes, you can generalize away from the low level, but a bit
    is still a bit, and mantissas and exponents only have so many bits;
    whether a bit is a hardware transistor or a 0-1 digit in base 2 is
    immaterial, we're still dealing with limitations enforced by today's
    hardware.

>IMO, gate logic belongs to EE.  The underlying theory of boolean
>algebra, of automata, of formal languages, of computability, etc, are
>indeed math and as such well worth knowing for any computer scientist.
>But they can be taught without any references to transitors, voltage,
>current, wires, etc.

    I don't agree. This is the age of HDL and VHDL. An algorithm can be
    programmed in ML and be called software, or it can be programmed
    in HDL and be called hardware. A state machine works the same way
    whether it's in a lexer or inside a chip. Gates and logic can be taught
    without mention of current, wires and transistors; that is indeed EE.
    (By the way, you're talking to one.) But when the Windows 95 BitBlt
    standard requires the implementation of all 256 logical functions of
    3 binary inputs at the bit level, there isn't much difference between
    that and a piece of hardware; in many cases I know of, the algorithm
    ends up split so that a portion of it goes to the hardware
    (typically the 2-input side) while the rest goes to software. Still,
    there are chips where it's all done in hardware.
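    To make that concrete: all 256 ternary raster operations can be
    driven straight from the 8-bit rop code, exactly as a hardware
    multiplexer would do it. A sketch, assuming the usual index order
    of pattern as the high bit, then source, then destination:

```c
#include <stdint.h>

/* Apply one of the 256 ternary raster ops, bit by bit.  For each bit
   position, the three input bits (pattern, source, destination) form
   a 3-bit index that selects one bit of the 8-bit rop code -- the
   software equivalent of an 8-to-1 hardware multiplexer. */
uint32_t rop3(uint8_t rop, uint32_t pat, uint32_t src, uint32_t dst)
{
    uint32_t out = 0;
    for (int i = 0; i < 32; i++) {
        unsigned idx = (((pat >> i) & 1u) << 2)
                     | (((src >> i) & 1u) << 1)
                     |  ((dst >> i) & 1u);
        out |= (uint32_t)((rop >> idx) & 1u) << i;
    }
    return out;
}
```

    With this index convention, rop 0xCC copies the source and 0xF0
    copies the pattern, matching the familiar Windows SRCCOPY and
    PATCOPY codes.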

>We are still talking about 9-year olds, not CS majors!

    You've probably heard Seymour Papert and his collaborators talk about
    how much programming they managed to teach to 9-year-olds and even
    younger children. But you're right, by now the debate has gone far
    overboard!


                                                       _alberto_




