Newsgroups: comp.lang.scheme
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!news.mathworks.com!hunter.premier.net!netnews.worldnet.att.net!ix.netcom.com!netcom.net.uk!netcom.com!hbaker
From: hbaker@netcom.com (Henry Baker)
Subject: Re: Real-time garbage collection in Scheme?
Content-Type: text/plain; charset=ISO-8859-1
Message-ID: <hbaker-0708960515490001@10.0.2.1>
Sender: hbaker@netcom21.netcom.com
Content-Transfer-Encoding: 8bit
Organization: nil
X-Newsreader: Yet Another NewsWatcher 2.2.0
References: <31FFC897.76B8@cnmat.berkeley.edu> <31FFD0C5.17C9@ccm.hf.intel.com> <4tqj38$r35@roar.cs.utexas.edu> <hbaker-0408960727110001@10.0.2.15> <4u80nn$jb2@roar.cs.utexas.edu>
Mime-Version: 1.0
Date: Wed, 7 Aug 1996 13:15:49 GMT
Lines: 34

In article <4u80nn$jb2@roar.cs.utexas.edu>, wilson@cs.utexas.edu (Paul
Wilson) wrote:
> In article <hbaker-0408960727110001@10.0.2.15>,
> Henry Baker <hbaker@netcom.com> wrote:
> >In
> >any case, since the definition of 'hard' 'real-time' is 'a priori bounded
> >latency', and _not_ throughput, the Baker-style incremental GC's _are_
> >'hard real-time' GC's.
> 
> I have to disagree.  The only relevant definition of real-time for GC's
> has to be one that bears on real-time programs.  I agree that the
> "established" definition of "real-time" for GC's has always been
> the one about bounded individual delays.  But the "true" definition
> of real-time has to be one that real-time programmers would buy,
> and real-time programmers are interested in guaranteed throughput
> at the timescales relevant to applications, not just in bounded
> latencies for individual operations.

Your comment is valid, but it doesn't change the definition of
'hard real-time' found in the CS literature, which is the 'a priori
bounded latency' definition.  If you would like to
define a _new_ term, then be my guest, but please don't confuse people
by trying to change agreed-upon definitions of existing terms after the fact.
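To make the 'a priori bounded latency' claim concrete: the point of a Baker-style read barrier is that every primitive pointer load does at most a bounded amount of collector work (evacuating one object), so the worst-case pause per operation is known in advance. Here is a minimal C sketch of that idea -- not code from Baker's paper or any other in the thread; the layout, names, and omitted details (alignment, bounds checks, scanning) are all hypothetical:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical object layout, for illustration only. */
typedef struct Obj {
    struct Obj *forward;   /* forwarding address once evacuated, else NULL */
    size_t      size;      /* payload size in bytes                        */
} Obj;

static char *tospace_free;               /* allocation pointer in tospace */
static char *fromspace_lo, *fromspace_hi;

static int in_fromspace(Obj *p) {
    return (char *)p >= fromspace_lo && (char *)p < fromspace_hi;
}

/* Evacuate one object: work proportional to the object's size,
 * performed at most once per object.  This is the bounded unit of
 * collector work that a single mutator operation can incur. */
static Obj *evacuate(Obj *p) {
    if (p->forward) return p->forward;   /* already copied */
    size_t n = sizeof(Obj) + p->size;
    Obj *copy = (Obj *)tospace_free;     /* alignment handling omitted */
    tospace_free += n;
    memcpy(copy, p, n);
    copy->forward = NULL;
    p->forward = copy;                   /* leave forwarding address */
    return copy;
}

/* Baker read barrier: every pointer load checks whether the target is
 * still in fromspace and, if so, copies it first.  The mutator never
 * sees a fromspace object, and each load does O(object size) work in
 * the worst case -- the a priori latency bound. */
Obj *read_barrier(Obj *p) {
    if (p && in_fromspace(p)) return evacuate(p);
    return p;
}
```

Note that the bound is per operation, which is exactly the distinction at issue: it says nothing by itself about throughput over an application-relevant time window.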

> >At the cost of an additional pointer/object and an additional indirection,
> >the Brooks 'optimization' can decouple copying so that it may be scheduled
> >a bit more smoothly.
> 
> Right.  But then I'd say that this is not a Baker-style copy collector.
> It's really more like a Dijkstra-style write-barrier-based tracer,
> with an extra read barrier to support relocation of objects.  Very
> different algorithm.

Well, Brooks's own paper doesn't agree with that assessment.
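For readers who haven't seen it, the Brooks variant replaces Baker's conditional read barrier with an unconditional one-word indirection: every object carries a pointer that refers to the object itself until it is copied, and then to the tospace copy. Since the mutator always takes that one extra hop, the collector is free to schedule the actual copying as it pleases, which is the decoupling mentioned above. A minimal C sketch -- again, not code from Brooks's paper; the layout and names are made up, and the write barrier and scanning machinery are reduced to the bare idea:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical layout: the first word is the Brooks indirection. */
typedef struct BObj {
    struct BObj *indirect;  /* self until copied, then the tospace copy */
    size_t       size;      /* number of pointer fields                 */
    struct BObj *fields[];
} BObj;

/* Unconditional indirection on every access: one extra load, no
 * fromspace test.  The mutator never holds a direct pointer past this
 * hop, so the collector may evacuate objects on any schedule. */
static BObj *deref(BObj *p) {
    return p->indirect;
}

BObj *read_field(BObj *p, size_t i) {
    return deref(p)->fields[i];
}

void write_field(BObj *p, size_t i, BObj *v) {
    deref(p)->fields[i] = v;    /* writes land in the current copy */
}

/* The collector, at a time of its own choosing, evacuates an object
 * and swings the indirection pointer. */
BObj *brooks_copy(BObj *p, char **tospace_free) {
    p = deref(p);
    size_t n = sizeof(BObj) + p->size * sizeof(BObj *);
    BObj *copy = (BObj *)*tospace_free;  /* alignment handling omitted */
    *tospace_free += n;
    memcpy(copy, p, n);
    copy->indirect = copy;               /* copy forwards to itself */
    p->indirect = copy;                  /* original now forwards   */
    return copy;
}
```

The cost is the extra word per object and the extra load on every access; the benefit is that reads no longer trigger copying, which is why one can argue (as Wilson does) that the result behaves more like a write-barrier tracer with relocation support than like Baker's original scheme.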
