Newsgroups: comp.robotics
Path: brunix!cat.cis.Brown.EDU!agate!ihnp4.ucsd.edu!swrinde!news.dell.com!tadpole.com!uunet!hobbes!earth.armory.com!rstevew
From: rstevew@armory.com (Richard Steven Walz)
Subject: Re: 100 Billion Nuerons
Organization: The Armory
Date: Thu, 14 Jul 1994 15:16:37 GMT
Message-ID: <Csxrrs.74n@armory.com>
References: <wmillsCsrLH9.Dp0@netcom.com> <ZByDPc2w165w@sfrsa.com>
Sender: news@armory.com (Usenet News)
Nntp-Posting-Host: deeptht.armory.com
Lines: 103

In article <ZByDPc2w165w@sfrsa.com>, bsmall <bsmall@sfrsa.com> wrote:
>wmills@netcom.com (William J. Mills) writes:
>> The granularity needed would seem to be fairly small but is also time 
>> dependent right?  The counter could increment based on other neurons and 
>> decrement back to 0 over time.  The counter basically being a 'saturation 
>> of neurotransmitter' which is re-absorbed from the synapse.  Is neuron 
>> firing based on the sum of all of the synapses?  If so then the synapses 
>> might be simple while the whole neuron obeys a slightly more smooth 
>> behavior with a few more bits of representation for the neuron itself.
>
>I'm not sure. When I originally thought of this kind of system
>I assumed I would need a mechanism to bleed off the counters.
>What a pain though because you would need to touch every cell
>in the system periodically. Another way is to somehow time stamp
>the cell's counters, and when a new stimulus comes into the cell,
>you ask if the old counter value is too old, and make a decision
>based on the count and how old the count is.
> 
>I'm thinking now that you really don't need to bleed off the 
>counters at all. So what if a seldom-visited cell occasionally 
>goes off. In the grand scheme of things, this once in a great 
>while shouldn't affect the outcome of the thinking process. It's
>the cells that are firing a lot that really matter.
> 
>Is neuron firing based on the sum of all the synapses? I think 
>so despite the earlier talk in this discussion about quantum 
>physics, etc. There are definitely a lot of things going on, but
>we need to concentrate on the major player(s). One interesting 
>neural tissue dilemma is the inhibitory synapse. What on earth does
>this do in our brains or in bee brains? Do we need inhibitors to
>create a stable system?
> 
>Brad Smallridge
>bsmall@sfrsa.com
-------------------------
Of course you do. Haven't you ever heard of an inverted input to a sensor
system or controller?? And as for counters, free-running timer/counters as
a reference for how old data is can be used without concern. The weighting
in the neural net will iron out any persistent inaccuracies. It may not be
the most elegant way, but it can work. Even so, it is sloppy. Neural nets
are sloppy, and the only reason that we play with them is that they feel
like magic! They seem to program themselves, when actually that's the
wasteful way WE evolved!!!
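The timestamped-counter scheme discussed above can be sketched in a few
lines of code. This is only an illustration of the mechanism, not a claim
about real neurons: all names, constants, and the linear decay rule are
my own inventions. Each cell keeps a raw count plus the free-running clock
value at its last update, ages itself lazily when a new stimulus arrives
(so nothing has to sweep every cell periodically), and accepts negative
weights to model inhibitory synapses.

```python
# Sketch of the "timestamped counter" neuron discussed above.
# Constants and the decay rule are illustrative guesses.

DECAY_PER_TICK = 0.5      # counter units bled off per elapsed clock tick
THRESHOLD = 10.0          # firing threshold

class CounterNeuron:
    def __init__(self):
        self.count = 0.0      # accumulated "neurotransmitter" level
        self.last_tick = 0    # free-running clock value at last update

    def stimulate(self, weight, now):
        """Apply one weighted input at clock value `now`.

        The counter is aged lazily: decay for the elapsed ticks is
        subtracted only when a new stimulus arrives, so seldom-visited
        cells are never touched.  `weight` may be negative, modelling
        an inhibitory synapse.  Returns True if the neuron fires
        (and resets) on this stimulus.
        """
        elapsed = now - self.last_tick
        self.count = max(0.0, self.count - DECAY_PER_TICK * elapsed)
        self.last_tick = now
        self.count = max(0.0, self.count + weight)
        if self.count >= THRESHOLD:
            self.count = 0.0  # reset after firing
            return True
        return False

n = CounterNeuron()
n.stimulate(6.0, now=0)         # excitatory input; below threshold
n.stimulate(-3.0, now=1)        # inhibitory input pushes it back down
fired = n.stimulate(9.0, now=2) # fired is True: 2.0 + 9.0 >= 10.0
```

Note that the free-running clock `now` is exactly the timer/counter
reference mentioned above: nobody sweeps the cells, and the weighting in
the net is left to iron out the sloppiness of the linear decay.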

I think this whole thing, creating an awareness, etc., will be a very
straightforward process once we get the separate multiprocessed functions
nailed down. If all the neural connections had ACTUALLY been INTENTIONALLY
implemented by an overseer/designer/programmer in the brain to accomplish
awareness, then I would agree that the connections would likely have to be
emulated accurately and in toto to arrive at awareness. But not only does
that view lack robustness for when a person gets ill, or is smacked upside
the head a pretty good one, or has variations in neurochemicals that day
because of diet or habits, it is also foolish to imagine, since there IS
NO overseer/designer/programmer in the brain that implements only the
needed synaptic connections. The huge number of synapses and their weighted
effects in the neuron is best described as an evolutionary fudge-factor
factory, with NO elimination of synapses that are not actually needed for
a process, and instead an overwhelming number of spaghetti-wired
counterfunctions wired to counteract stupid functions!!! My guess is that
over 95% of the wiring in your head is of that sort: it has been, and is
being, wired at random, with evolution as referee and judge, and NOT a
judge which selects for the most elegant and simply wired processes, but
one that settles for huge ugly black boxes that worked better than the
poor fool who died out!!!
Awareness will probably be discovered and killed accidentally a million
times by experimenters before such a stinking slow process CAN be
recognized by us as actual awareness, and then actively enhanced and sped
up to near the speed of ours by eliminating interfering processes! And I
don't mean awareness as broad as ours. How many of us have had a rat or
mouse show us that it was aware at a near-human level? They don't have our
number of neurons, and they are spaghetti-wired at cross-purposes as well!
I can easily imagine building an awareness we can communicate with, using
FAR fewer "neuronal" elements, and it will still be able to make sense of
things in ways we can empathize with and recognize as awareness. It might
be slow in responding for a while, but it might be a good idea to attempt
modeling of brain functions by iteration on the latest and fastest
computers, in a simple half-designed/half-evolutionary iterative attempt
to define the simplest awareness we can recognize. It needn't be really
smart, or understand terribly deeply, or do complex vision or hearing
processing, which is an enormous overhead in earth's animal brains. All it
needs is an environment it can discuss with us, and within which we can
recognize the tell-tale functions!

Awareness can fill many niches. For all we know, we have already had
programs running which were technically aware, or had a fair-sized piece
of that process mastered! WE SIMPLY COULDN'T BELIEVE that it might be that
simple, nor were we looking for it! Even if we only construct something
which knows of the existence of 20 nouns and 6 verbs, and can be goal
directed, that is a start, and from it can follow larger awarenesses,
defined as having the function of being able to talk ABOUT itself! However
simply! I don't think this route has been given enough imagination.

Once expanded and filled out, a collection of functions which believes it
exists and talks about itself can be implemented in hardware so that it
can learn to be even more. I do not posit that awareness MUST be a
learning awareness, but that is an addable feature, given hardware and
processing and a handle for it, as well as a handle-maker function for new
things. I have seen self-replicating code, and that's cute and biological,
but why don't psychologists and neuroscientists probe THIS concept in
code?? I may be wandering around a bit here, but is anyone bothering to DO
that? I have NOT yet seen it, either in the lay media or reported in
journals, beyond the separate emulation of sensory preprocessors. Tell me
if you know of anyone trying this. I really think we may be overlooking
the chance that our much vaunted awareness is not really very uncommon.
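The half-designed/half-evolutionary iteration I'm describing can be
sketched in a few lines. Everything here is a toy of my own invention:
the structure (a linear policy steering a point toward a goal in a 1-D
world) is the "designed" half, and random mutation with selection as the
only judge is the "evolutionary" half. No overseer ever specifies the
weights; whatever works better than its predecessor survives.

```python
import random

# Toy "half-designed / half-evolutionary" loop.  The policy shape is
# designed; the weights are left to mutation and selection.  The task,
# constants, and policy are illustrative inventions only.

random.seed(0)

def fitness(weights, target=5.0, steps=20):
    """Score a policy: start at 0, move by w0 + w1*(target - pos)
    each step; higher score means ending closer to the target."""
    pos = 0.0
    for _ in range(steps):
        pos += weights[0] + weights[1] * (target - pos)
    return -abs(target - pos)

best = [0.0, 0.0]
best_score = fitness(best)
for _ in range(500):                          # evolution as the referee
    trial = [w + random.gauss(0, 0.1) for w in best]
    score = fitness(trial)
    if score > best_score:                    # keep whatever worked better
        best, best_score = trial, score
```

After a few hundred generations of this, the evolved policy homes in on
the target, and the surviving weights are exactly the kind of unjudged
fudge factors described above: nothing selected them for elegance, only
for beating the previous poor fool.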
Thanks for listening.
-Steve Walz   rstevew@armory.com 

