Newsgroups: misc.education,comp.lang.misc,comp.lang.logo,comp.lang.lisp,comp.lang.scheme
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech!howland.reston.ans.net!news.sprintlink.net!mv!moreira.mv.com!alberto
From: alberto@moreira.mv.com (Alberto C Moreira)
Subject: Re: Compiler abstractions [was: Wanted: programming language for 9 yr old]
Message-ID: <alberto.458.00098B19@moreira.mv.com>
Nntp-Posting-Host: moreira.mv.com
Sender: usenet@mv.mv.com (Paul Hurley)
Organization: MV Communications, Inc.
Date: Wed, 4 Oct 1995 13:32:32 GMT
References: <43v5qb$sb8@blackice.winternet.com> <44c8fh$6m3@onramp.arc.nasa.gov> <BLUME.95Oct3135251@atomic.cs.princeton.edu>
X-Newsreader: Trumpet for Windows [Version 1.0 Rev A]
Lines: 60
Xref: glinda.oz.cs.cmu.edu comp.lang.misc:23282 comp.lang.logo:2138 comp.lang.lisp:19326 comp.lang.scheme:13876

In article <BLUME.95Oct3135251@atomic.cs.princeton.edu> blume@atomic.cs.princeton.edu (Matthias Blume) writes:

>This is only true in brain-dead languages.  In a sane language
>30000+30000 should either yield 60000 or _raise an exception_.  Both
>concepts (60000 and exceptions) do not require the understanding of
>how a hardware adder works.  Both concepts can be explained entirely
>within the framework of the language.

       If we were to raise an exception every time a floating point operation 
       loses precision, scientific programming would be an infeasible
       proposition. The handling of numerical instability is one of the great
       - and open - problems in scientific programming, and I don't think
       even a language that's not "brain dead" - as you put it - can do the
       job properly. 

>The accuracy problem for floating point numbers is trickier, but it
>also doesn't require you to understand hardware.  Hardware
>considerations can serve as an explanation of _why_ inaccuracies
>actually occur in practice, but they are not the only possible
>explanation.

       Numerical inaccuracy is a direct consequence of hardware considerations,
       embodied in the IEEE 754 floating point standard. It helps a lot to
       know the standard, how it's implemented, and why bits are dropped;
       and it is just as important to know the theory and practice of
       scientific programming to understand how to handle these problems
       effectively.

> Furthermore, such explanation is not really necessary.
>Or do we also have to understand the reasons why the electromagnetic
>force is so much stronger than gravitation in order to understand
>electronics in order to understand integrated circuits in order to
>understand hardware adders in order to understand contemporary
>computers ... in order to understand computing?!

       It is important to understand adders in order to be able to do 
       programming. Basic gate logic and state-machine understanding
       is considered fundamental in many a CS course, and not without
       reason. It is a fact today that more and more software is disappearing
       into hardware, while chip design is more and more software design.
       Denying that basic hardware knowledge is fundamental to a CS
       major is, in my opinion, not a good idea in today's world.

>After all, there are purely logical calculi (e.g. Turing machines,
>RAMs, lambda-calculus, denotational semantics, ...), which are
>adequate to explain the properties of computing and of programming
>languages without any need to mention actual hardware.

       Both are important. You need state machines to understand the 
       principles, but then you need a hardware course to know how things
       really work in practice; otherwise your understanding of the
       principles is incomplete and can - and many times does - fail
       in actual practice.

       I believe that CS courses should teach a fair amount more hardware
       than they do today.


                                                         _alberto_

