Newsgroups: comp.robotics
Path: brunix!sgiblab!swrinde!elroy.jpl.nasa.gov!usc!howland.reston.ans.net!torn!watserv2.uwaterloo.ca!reda
From: reda@watnow.uwaterloo.ca (Reda Ezzat FAYEK)
Subject: Re: Distance to Mars
Message-ID: <C8Gwn6.1EG@watserv2.uwaterloo.ca>
Keywords: virtual reality, mars rover, speed of light
Sender: news@watserv2.uwaterloo.ca
Organization: University of Waterloo
References: <1v5t56INNlkm@flop.ENGR.ORST.EDU> <1993Jun10.145007.12268@mksol.dseg.ti.com> <130777@netnews.upenn.edu>
Date: Fri, 11 Jun 1993 17:30:41 GMT
Lines: 24

In article <130777@netnews.upenn.edu> stein@thumb.cis.upenn.edu (Matthew Stein) writes:
>In article <1993Jun10.145007.12268@mksol.dseg.ti.com>, strohm@mksol.dseg.ti.com (john r strohm) writes:
>> 
>
>autonomy at the remote site, but not full autonomy.  Full autonomy may be
>the best solution, but I don't think we're quite there yet.  Anyone want
>to correct me on this point?

Maybe not yet, but there is certainly enough work, and enough results, to give
the impression that it's not all that far off.

Several components are being studied in several places. The one I'm planning to
attack, for instance, is navigation in outdoor unstructured terrain. This is
not really like what the Ambler does at CMU, but more like finding a suitable
representation (other than the DEM, the digital elevation map) at a more
symbolic level to be used by a navigation system. 3D vision imposes itself as
the source of raw information, but it has several drawbacks (related to the
huge amount of information to process). This, however, can be circumvented by
finding methods to "draw the focus of attention" of the vision system to
entities and regions that are more interesting at one time than another. This
is a challenge. Hey, I didn't say I finished my Ph.D. -- I'm only starting.
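To make the focus-of-attention idea concrete, here is a minimal sketch (my own
illustration, not code from any actual rover system): score fixed windows of a
DEM by local roughness (elevation variance) and hand only the highest-scoring
regions to the expensive 3D-vision processing. The function names and the toy
map are assumptions for the example.

```python
# Hedged sketch: focus of attention over a DEM by local roughness.
# Flat terrain scores near zero; rocky patches score high and get
# selected for further (expensive) 3D-vision processing.

def window_variance(dem, r0, c0, size):
    """Elevation variance of the size x size window at (r0, c0)."""
    cells = [dem[r][c]
             for r in range(r0, r0 + size)
             for c in range(c0, c0 + size)]
    mean = sum(cells) / len(cells)
    return sum((v - mean) ** 2 for v in cells) / len(cells)

def focus_of_attention(dem, size=2, top_k=2):
    """Return the top_k roughest windows as (variance, row, col) tuples."""
    rows, cols = len(dem), len(dem[0])
    scores = [(window_variance(dem, r, c, size), r, c)
              for r in range(0, rows - size + 1, size)
              for c in range(0, cols - size + 1, size)]
    scores.sort(reverse=True)
    return scores[:top_k]

# Toy 4x4 DEM: flat ground except a rocky patch in the lower-right corner.
dem = [[1.0, 1.0, 1.0, 1.0],
       [1.0, 1.0, 1.0, 1.0],
       [1.0, 1.0, 5.0, 9.0],
       [1.0, 1.0, 7.0, 3.0]]

regions = focus_of_attention(dem, size=2, top_k=1)
print(regions)  # the rough lower-right window ranks first: [(5.0, 2, 2)]
```

In a real system the score would of course be something richer than raw
variance, but the point is the same: filter first, then spend the vision
cycles only where the terrain warrants it.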

Reda

