Newsgroups: misc.education,comp.lang.misc,comp.lang.logo,comp.lang.lisp,comp.lang.scheme
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!travelers.mail.cornell.edu!news.kei.com!news.mathworks.com!uunet!in1.uu.net!news.sprintlink.net!mv!moreira.mv.com!alberto
From: alberto@moreira.mv.com (Alberto C Moreira)
Subject: Re: Compiler abstractions [was: Wanted: programming language for 9 yr old]
Message-ID: <alberto.459.0016DB1C@moreira.mv.com>
Nntp-Posting-Host: moreira.mv.com
Sender: usenet@mv.mv.com (Paul Hurley)
Organization: MV Communications, Inc.
Date: Thu, 5 Oct 1995 02:51:12 GMT
References: <44c8fh$6m3@onramp.arc.nasa.gov> <BLUME.95Oct3135251@atomic.cs.princeton.edu> <alberto.458.00098B19@moreira.mv.com> <44uaf7$krp@camelot.ccs.neu.edu>
X-Newsreader: Trumpet for Windows [Version 1.0 Rev A]
Lines: 95
Xref: glinda.oz.cs.cmu.edu comp.lang.misc:23300 comp.lang.logo:2145 comp.lang.lisp:19339 comp.lang.scheme:13894

In article <44uaf7$krp@camelot.ccs.neu.edu> will@ccs.neu.edu (William D Clinger) writes:

     Gosh, it's hard to disagree with somebody of the stature of Will Clinger. 

     Yet I'll try; I believe that sometimes it is necessary to leave the safe
     grounds of abstraction and dig into specifics.

>alberto@moreira.mv.com (Alberto C Moreira) writes:
>>  Numerical inaccuracy is a direct consequence of hardware considerations,
>>  embodied in the IEEE Floating Point standard.

>No.  Numerical inaccuracy is a direct consequence of finite
>representations, which are fundamental to computer science and
>apply equally to hardware and software.

      I believe we're saying more or less the same thing; one major reason
      we use finite representations is our hardware limitations.
      If our hardware could represent numbers to a precision much larger than 
      the highest precision needed by any application, finite representation   
      wouldn't hurt numerical accuracy.
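
      A concrete illustration (my own Python sketch, not part of the original
      exchange; Python floats are IEEE 754 doubles with a 53-bit significand):

```python
# With only 53 significand bits, a double cannot distinguish
# 2**53 from 2**53 + 1: adding 1 is lost to rounding.
big = 2.0 ** 53
print(big + 1.0 == big)   # True: the +1 vanishes
print(big + 2.0 == big)   # False: +2 flips the last representable bit
```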

      Also, today the difference between hardware and software is very blurred.
      If I write it in ML I call it software; if I write the same thing in an 
      HDL I call it hardware. In the company I work for, one of the thorniest 
      decisions is whether some high-level graphics function will be coded
      into the hardware or left for the device driver programmers to handle.

      I also believe that the programming abstraction is a real number, yet 
      when programming applications it is unavoidable that the actual 
      representation be taken into consideration, especially when 
      we hit the boundaries of that representation.
 
      The limits placed on structure (a*2^b, a*16^b) 
      and precision (both a and b are bounded within an interval) 
      must be taken into account in the abstraction itself; in a sense, 
      programming languages don't offer real numbers but an approximation 
      thereof, consisting of two bounded values and a few other bits and
      pieces. While it may be convenient to forget this and use the 
      abstraction as if we had real numbers, many problems will get close 
      enough to our precision or structural boundaries that those must 
      be taken into account.
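
      Both kinds of boundary are easy to exhibit (again a Python sketch of
      mine, using IEEE 754 doubles): the bounded significand makes familiar
      decimals inexact, and the bounded exponent makes the range finite.

```python
import sys

# Precision boundary: 0.1, 0.2, and 0.3 are all inexact in binary,
# so the "real number" identity 0.1 + 0.2 == 0.3 fails.
print(0.1 + 0.2 == 0.3)        # False

# Structural boundary: the exponent field is bounded, so doubling
# the largest finite double overflows to infinity.
print(sys.float_info.max * 2)  # inf
```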

>As asserted by the original post, it is important that the programmer
>understand the abstractions provided by a programming language and its
>implementation.  The programmer doesn't need to understand the
>implementation itself.

      You're very right that the programmer must understand the abstractions
      provided by the programming language. Yet I'm not sure he/she doesn't
      need to understand representation details as well, because they play a 
      significant role when accuracy is required at boundary conditions. 

>The IEEE Standard for Binary Floating-Point Arithmetic is an excellent
>example of this principle.  A few years ago I used this abstraction
>to develop and analyze the first efficient algorithm for converting
>decimal representations of floating point numbers into their closest
>binary floating-point approximations.  I also implemented much of the
>IEEE floating-point abstraction as part of a portable implementation
>of my algorithm.  So I understand the IEEE floating-point standard far
>better than most programmers, and better than most other people who
>write compilers.

       I believe you. 

>But I haven't the least idea how floating point is implemented by the
>68882, PowerPC, SPARC, MIPS, and Alpha hardware that I use.  Furthermore
>I am convinced that the time it would take me to understand those
>hardware implementations (if indeed their details were available to me,
>which they probably aren't) would be better spent learning how various
>programming languages and compilers screw up the IEEE abstraction in
>the model they present to the programmer.

       Again, if the kind of computing I do doesn't bang against the 
       walls of the representation, I'll probably never need to know 
       what's going on under the hood. But if I need to squeeze out that 
       extra bit of precision, or if I must give stability to an unstable 
       computation, then exact knowledge of the representation may give 
       me that extra edge. 
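
       A classic example of using representation knowledge to stabilize a
       computation is compensated (Kahan) summation; this Python sketch of
       mine exploits the fact that the rounding error of each addition is
       itself exactly representable and can be carried forward.

```python
def kahan_sum(xs):
    """Compensated summation: recovers low-order bits lost to rounding."""
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in xs:
        y = x - c            # apply the correction from the previous step
        t = total + y        # big + small: low bits of y are lost here...
        c = (t - total) - y  # ...but this recovers exactly what was lost
        total = t
    return total

# 1.0 followed by a million terms of 1e-16: each tiny term is below
# half an ulp of 1.0, so naive summation drops every one of them.
xs = [1.0] + [1e-16] * 10**6
print(sum(xs))        # 1.0: the naive sum never moves
print(kahan_sum(xs))  # close to the true sum, 1.0000000001
```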

>In short:  Programmers need to understand the abstractions that are
>presented by their high level languages and compilers, not the
>abstraction that is presented by the hardware, and certainly not
>the implementation of an abstraction in hardware.

       I would think a programmer should know all of the above. 
       But, again, I'm talking as a professional programmer rather
       than as a computer scientist.

                   
                                                           _alberto_
