Newsgroups: comp.lang.scheme
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uknet!liv!news
From: bruce@liverpool.ac.uk
Subject: Re: Mixing languages
In-Reply-To: "Daniel C. Wang"'s message of Tue,  9 May 1995 18:44:30 -0400
Message-ID: <BRUCE.95May12121506@iasc3.scm.liv.ac.uk>
Sender: news@liverpool.ac.uk (News System)
Nntp-Posting-Host: iasc3.scm.liv.ac.uk
Organization: IASC, University of Liverpool
References: <LORD.95May7224548@x1.cygnus.com> <3onfoa$nb6@nyheter.chalmers.se>
	<sjfz1Cy00bkM1YSeNs@andrew.cmu.edu>
Date: Fri, 12 May 1995 11:15:06 GMT
Lines: 21

>>>>> "Daniel" == Daniel C Wang <dw3u+@andrew.cmu.edu> writes:

> What I'd like to see is a performance comparison of a bytecoded VM
> based on a virtual RISC architecture which tries to use the
> hardware registers to their maximum potential, compared to native
> code. I'm sure that such a VM based on a virtual RISC machine would
> outperform a stack-based VM, and probably make interpreted code look
> like a much more viable solution when compared to compiled native
> code.

I guess the question is at what point you map the virtual stack
machine onto a register model.  When you're interpreting code, my
guess is that keeping the stack model costs no major performance, but
when you're compiling it's surely necessary to go to registers!  How
easy is register allocation on Scheme code vs register allocation on
virtual stack machine code?  (For that matter, how machine-independent
are you going to make the virtual RISC machine?)
-- 
Bruce                   Institute of Advanced Scientific Computation
bruce@liverpool.ac.uk   University of Liverpool