Newsgroups: comp.lang.lisp
From: cyber_surfer@wildcard.demon.co.uk (Cyber Surfer)
Path: cantaloupe.srv.cs.cmu.edu!fs7.ece.cmu.edu!europa.eng.gtefsd.com!howland.reston.ans.net!news.sprintlink.net!demon!wildcard.demon.co.uk!cyber_surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
References: <CwJ82u.7nL@csfb1.fir.fbc.com> <Cwsvrs.Dtv@cogsci.ed.ac.uk> <780822729snz@wildcard.demon.co.uk> <Cx3zo5.5sE@cogsci.ed.ac.uk>
Organization: The Wildcard Killer Butterfly Breeding Ground
Reply-To: cyber_surfer@wildcard.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.27
Lines: 105
Date: Tue, 4 Oct 1994 20:12:46 +0000
Message-ID: <781301566snz@wildcard.demon.co.uk>
Sender: usenet@demon.co.uk

In article <Cx3zo5.5sE@cogsci.ed.ac.uk> jeff@aiai.ed.ac.uk "Jeff Dalton" writes:

> There are 32 bits in a longword at some location.  They can be
> interpreted as an int, as a float, as a pointer, etc.  The type
> of the var that corresponds to that location says what they are.

Assuming a compiler with 32bit longs, of course. On many such
compilers, an int would then be only 16bit. You're also assuming
that a long will be the same size as a pointer. I never make
explicit assumptions like that in my C code, as they will be
wrong on some machines.

> That's all I meant anyway.  Again I meant in C, not CL.  Or,
> rather, in the model used to understand C.

In one model used to understand C. Sadly, I have to use more than
one model, as some CPUs are rather odd. The Intel x86 family, for
example. ;-) I usually use "real mode", but when I can, I prefer
to use the 32bit protected mode. The sizes of pointers and ints
differ, depending on the CPU mode chosen for the program.

CL hides all of this from the programmer. An implementation might
reveal some of it, but I've yet to use a Lisp that does that. If
CL code run by CLISP can distinguish between 32bit "flat" pointers
and 32bit far segmented pointers, I've yet to find a way to do it,
and I'd be very surprised if it could use both.

> The ANSI doc is on the net, somewhere.  I think Barmar said where
> a short while back.

Yes, but 3 MB is too big for me. What file format is it in, anyway?
I can only just handle postscript, now that I have Ghostscript on
my machine, but I wouldn't use it for reading a large document.
Steele is my only realistic CL reference for now.

> That's interesting.  In my experience, you could typically redim the
> array but only within its original total size.  So only the address
> calculation, not the size, was dynamic.

I don't remember that feature, but if I understand you correctly,
it's still only a variation of static allocation.

> For some reason, a number of languages (C, some Basics, some Lisps,
> ...) treat arrays in a more "static" way than similar types (structs
> in C, strings in Basic).

With Basic, it really depends on the compiler. There are so many
dialects that I've never seen two compatible systems. With CL, I
might distinguish between the language and the compiler, but with
Basic, I could only do that with ANSI Basic and an implementation.
I've yet to ever use an implementation of ANSI Basic, so I can't
comment on it.

> I find it difficult to judge how much difference it makes to learn
> assembler.  I did learn an assembly language fairly early on, and
> I'm not sure how difficult things would have been if I hadn't.

I wouldn't want anyone to begin to learn programming at that
level. I'm currently reading Roger Bailey's Hope tutorial, and
the idea of teaching programming with Hope is one that appeals
to me a lot. It might not have features like arrays, but that
might actually _help_ the teaching process, rather than hinder it.

> But I'd be surprised if novices typically learned such things
> at / near the start.

The point where I learned about the CPU was when I had to.
I was beginning to outgrow Basic, and the "Basic" model was
certainly holding me back. It was difficult to imagine what
the machine was doing.

It could be that this was because of the OS rather than the
language, as there was a clear distinction between the two. Most
of the things I wanted to do were buried deep in the 12K ROM
of the "operating system" & interpreter; when I eventually
progressed to a disk operating system, I needed to make system
calls to the OS. That was a little harder than merely accessing
the memory mapped devices!

Curiously, I could do all of that in Cambridge Lisp, on my 3rd
machine. Not only that, but the code performed as fast as if
I'd written it in C. That Lisp had no trouble interfacing to
the OS, but then, it had a decent FFI, while the Basic on my
first machine didn't. In fact, I'm not sure that Basic had an
FFI at all. I vaguely recall POKEing machine code into strings
and comments, and then calling it thru some bizarre interface.
It worked, but it was hard to see why. I expect it was also hard
to see why the code didn't work when it sometimes failed, as
code does when you're still developing it.

The last Basic I used had a very good FFI. It couldn't be better.
I've seen a similar FFI in Scheme, and using the same method of
linking to the C code. I was also impressed by the FFI in Actor,
when I was using it. Apart from the way a function would be
declared to the FFI, all 3 language systems use an identical
interface. The C code wouldn't be able to determine which one
was calling it. That allows the programmer to use a very simple
model for the FFI.

Of course, it helps if all the C functions use a "standard" way
of using the stack, which is the case for this platform (Windows).
That's just a convention, but everyone uses it. Those that don't
use it will practically get ignored, as fewer people can then use
their code. That's a _lot_ of pressure to conform.
-- 
http://cyber.sfgate.com/examiner/people/surfer.html
