Cgs401 Lec4
• One of the dominant ideas in cognitive science has been Jerry Fodor’s
Modularity of Mind (1983).
• However, the idea of modularity was preceded by a similar idea from Franz Joseph Gall (1758 – 1828), namely phrenology.
• Fodor agreed that Gall was basically correct to think of the mind as being made up of semi-autonomous cognitive faculties.
• However, for Fodor, the more important idea from Gall was that of vertical cognitive faculties, i.e., domain-specific cognitive faculties that carry out very specific types of information-processing tasks. For instance, they might be specialized for analyzing shapes, for recognizing specific objects, or for detecting aspects of objects such as color, motion, and depth.
• Interestingly, Fodor thought of these systems as informationally encapsulated, i.e., they can call upon only a very limited range of information in carrying out their processing tasks.
• Each vertical cognitive faculty has its own database of information relevant to the task it is performing, and it can use only the information in that database. Fodor referred to these cognitive faculties as cognitive modules.
Characteristics of Modular Processing
• Informational encapsulation: Modular processing remains unaffected by what is going on elsewhere in the mind. Modular systems cannot be infiltrated by background knowledge and expectations, or by information in the databases associated with other modules.
• Mandatory application: Cognitive modules respond automatically to stimuli of the appropriate kind, rather than being under any executive control. It is evidence that certain types of visual processing are modular that we cannot help but perceive visual illusions, even when we know them to be illusions.
• Some of these modules are very close to the sensory periphery, with little information processing intervening between the sense organs and the module.
Central Processing
• Fodor is also emphatic that there have to be psychological processes that cut across cognitive
domains. He stresses the distinction between what cognitive systems compute and what the
organism believes.
• The representations processed within the cognitive modules are not the only kind of
representations in cognitive systems. More specifically, there should be other information
processing systems that can evaluate and correct the output of these cognitive modules.
• Central processing is Quinean, i.e., it aims at knowledge properties that are defined over the propositional attitude system as a whole.
• Fodor sees each organism’s belief system as analogous to a scientific theory. It needs to be evaluated as a whole for its consistency and coherence: we cannot consider how accurate or well confirmed individual beliefs are in isolation, because how we evaluate individual beliefs cannot be separated from how we think about the other elements of the system in which they are embedded.
• But the cognitive modules cover only a very small portion of cognition, and a large number of processes are handled by what Fodor refers to as central processing.
Modularity & Cognitive Science
• Fodor proposes that, “the more global (i.e., isotropic) a cognitive system is, the
less anybody understands it. Very global processes, like analogical reasoning,
aren’t understood at all.”
• Fodor justifies the claim by arguing, for example, that the traditional AI project of developing a general model of intelligent problem solving had come to a dead end and that relatively little serious work was any longer being done on building an intelligent machine.
• According to Fodor, central processing is Quinean and isotropic; its job is not, for example, to construct a single representation of the environment or to parse a difficult sentence.
• Its tasks are of a very different kind: anything that the system knows might potentially be relevant to solving them. The information processing they involve cannot be informationally encapsulated, and it often depends upon working out what is and what is not consistent with one’s general beliefs about how people behave or how the world works.
• Now, why should this be so difficult for cognitive science to explain?
• In order to appreciate the difficulty, we need to think about Fodor’s ideas about the language of thought.
• For Fodor, a sentence in the language of thought is a physical structure, and its syntactic properties are intrinsic, physical properties of that structure.
• Further, the idea that syntactic properties are intrinsic, physical properties of
sentences in the language of thought is at the heart of Fodor’s solution to
the problem of causation by content.
• Fodor proposes to solve the problem by arguing that the intrinsic physical
properties of sentences move in tandem with their non-intrinsic semantic
properties – in an analogous way to how transformations of the physical
shapes of symbols in a logical proof move in tandem with the interpretation
of those symbols.
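• For illustration (a standard logic example, not one given in these notes): the rule of modus ponens licenses the transition

  $$\frac{P \rightarrow Q \qquad P}{Q}$$

  purely on the basis of the shapes of the symbol strings, and yet, under any interpretation on which the premises are true, the conclusion is guaranteed to be true as well. Fodor’s claim is that transitions between sentences in the language of thought track their semantic properties in this same way.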
• The basic problem is that context insensitivity goes hand in hand with informational
encapsulation. Yet, the information processing associated with propositional attitude
psychology is a paradigm example of processing that is not informationally
encapsulated.
• Beliefs reached by inference to the best explanation are not entailed by the evidence
on which they are based.
• There is no way of deducing the belief from the evidence.
• For instance, suppose I believe that my friend is at home because her car is parked outside. It is perfectly possible that she has left home without her car.
• The conclusion that she is at home is, at best, the ‘best’ guess given the evidence.
• What all these considerations have in common is that they depend upon global properties of the belief system, i.e., upon a form of context sensitivity.
• The properties driving these transitions must therefore be context sensitive, given the characteristics of central processing that Fodor highlights in drawing the distinction between modular and non-modular processing.
• When we are dealing with sentences in the language of thought
corresponding to beliefs and other propositional attitudes, we have
transitions between sentences in the language of thought that are context
sensitive.
• But then this means that we cannot apply our model of information
processing to them, since that model only applies when we have purely
syntactic transitions.
• If Fodor is right, then this is obviously very bad news for cognitive science.
But there are several ways of trying to avoid his argument.
• One possible strategy is to reject the idea that there are completely domain-general forms of information processing that can potentially draw upon any type of information.
• It is this way of thinking about central processing that causes all the
difficulties, since it brings into play the global properties of belief systems that
cannot be understood in a physical or syntactic way. But perhaps it is wrong.
• Another alternative is that there is no real difference in kind between modular and central processing. Maybe there is no central processing of the kind Fodor talks about, because all processing is modular.
• This is referred to as the massive modularity hypothesis.
The massive modularity hypothesis
• We saw in the previous section that individuals are rather poor at elementary logical reasoning tasks, although their performance improves drastically when the tasks are reinterpreted to involve a particular type of conditional, the deontic conditional (see the sketch below).
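• A rough sketch of the underlying logical point (the card-selection setup below is the standard Wason-task illustration, assumed here rather than taken from these notes, and the card contents are hypothetical):

```python
# Which items must be checked to test a rule of the form "if P then Q"?
# Only those whose visible side either makes P true (the hidden side might
# falsify Q) or makes Q false (the hidden side might make P true).

def cards_to_check(cards, shows_p, shows_not_q):
    """Return the cards that could falsify 'if P then Q'."""
    return [c for c in cards if shows_p(c) or shows_not_q(c)]

# Abstract version: "if a card has a vowel on one side, it has an even number on the other."
print(cards_to_check(
    ["E", "K", "4", "7"],
    shows_p=lambda c: c in "AEIOU",                          # visible vowel
    shows_not_q=lambda c: c.isdigit() and int(c) % 2 == 1,   # visible odd number
))  # ['E', '7'] -- the combination people typically miss

# Deontic version: "if someone is drinking beer, they must be over 18."
print(cards_to_check(
    ["beer", "coke", "25 years", "16 years"],
    shows_p=lambda c: c == "beer",                                         # drinking beer
    shows_not_q=lambda c: c.endswith("years") and int(c.split()[0]) < 18,  # under 18
))  # ['beer', '16 years'] -- the version people find easy
```

• The logic is identical in both cases; only the deontic (cheater-detection) framing reliably elicits the correct choices.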
• The evolutionary psychologists Leda Cosmides and John Tooby came up with a
striking and imaginative explanation for the fact that humans tend to be very good at
reasoning involving deontic conditionals – and much better than they are in
reasoning involving ordinary, non – deontic conditionals.
• According to Cosmides and Tooby, when people solve problems with deontic conditionals they are using a specialized module for monitoring social exchanges and detecting cheaters. They propose an ingenious explanation for why there should be such a thing as a cheater detection module.
• The explanation is evolutionary; and they argue that the presence of some sort of
cheater detection module is a very natural corollary of one very plausible
explanation for the emergence of cooperative behaviour in evolution.
• The idea is that cooperative behaviour evolved through people applying strategies such as TIT FOR TAT in situations that have the structure of a prisoner’s dilemma.
• We need to be very good at detecting cheaters in order to apply the TIT FOR TAT algorithm, because the algorithm essentially instructs us to cooperate with anyone who did not cheat on the last occasion we encountered them (see the sketch below).
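• A minimal sketch of the strategy in Python (illustrative only; the function name and move labels are hypothetical, not from the lecture):

```python
# TIT FOR TAT: cooperate on the first encounter, then simply copy whatever the
# partner did on the previous encounter. The only bookkeeping the strategy needs
# is a record of who cheated last time -- which is why, on Cosmides and Tooby's
# account, applying it puts a premium on a cheater-detection mechanism.

def tit_for_tat(partner_history):
    """Decide this round's move given the partner's past moves."""
    if not partner_history:          # first encounter: give the benefit of the doubt
        return "cooperate"
    return "cooperate" if partner_history[-1] == "cooperate" else "defect"

print(tit_for_tat([]))                                    # cooperate
print(tit_for_tat(["cooperate", "cooperate", "defect"]))  # defect (partner cheated last time)
```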
• According to Cosmides & Tooby, this created pressure for the evolutionary
selection of a cognitive module specialized for detecting cheaters.
• The cheater detection module gives massive modularity theorists a model for
thinking about how the mind as a whole may be organized.
• They hold that the human mind is a collection of specialized modules, each of
which evolved to solve a very specific set of problems that were confronted by
our early ancestors – by hunter gatherers in the Pleistocene period. These
modules are referred to as Darwinian Modules.
• A suggestive example of domain-specific processing: prosopagnosia (impaired face recognition) is often connected to injury to a specific brain area, the part of the fusiform gyrus known as the fusiform face area. Many cognitive scientists believe that there is a specialized face recognition system located in the fusiform face area.
• The main theoretical arguments come from Cosmides & Tooby, who gave two arguments for thinking that there is nothing more to the mind than a collection of specialized subsystems. These are:
• The argument from error.
• The argument from statistics and learning.
• Both arguments have an evolutionary flavor. The basic assumptions are that
the human mind is the product of evolution, and that evolution works by
natural selection.
• In this spirit, the two arguments set out to show that evolution could not
have selected a domain – general mental architecture. No domain – general,
central processing system of the type that Fodor envisages could have been
selected, because no such processing system could have solved the type of
adaptive problems that fixed the evolution of the human mind.
The argument from error
• Learning requires feedback and negative feedback is often easier to come by
than positive feedback.
• But how do we know when we have got things wrong, and so work out that we need to try something different?
• In some cases, there are obvious error signals – pain, hunger, for example. But
such straightforward error signals won’t work for most of what goes on in
central processing.
• We need more abstract criteria for success and failure. These criteria will
determine whether or not a particular behaviour promotes fitness, and so
whether or not it will be selected.
• But, Cosmides & Tooby argue, these fitness criteria are domain – specific, not
domain – general. What counts as fit behaviour varies from domain to domain.
• They give the example of how one treats one’s family members. For instance, it may not be fitness-promoting to have sex with close family members, but it is fitness-promoting to help family members in many other circumstances.
• Instead, Cosmides & Tooby say that there must be a distinct cognitive mechanism for every domain that has a different definition of what counts as a successful outcome.
The argument from statistics and learning
• The argument from statistics and learning focuses on problems in how domain
– general cognitive systems can discover what fitness consists in.
• All that they have access to is what can be inferred from perceptual processes
by general cognitive mechanisms.
• The problem is that the world has what Cosmides and Tooby describe as a
“statistically recurrent domain – specific structure”.
• Certain features hold with great regularity in some domains, but not in others. These are not the sort of things that a general-purpose cognitive mechanism could be expected to learn.
• The example they give is the equation for kin selection proposed by the
evolutionary biologist W. D. Hamilton.
• The problem of kin selection is the problem of explaining why certain
organisms often pursue strategies that promote the reproductive success of
their relatives, at the cost of their own reproductive success.
• This type of self – sacrificing behaviour seems, on the face of it, to fly in
the face of the theory of natural selection, since the self-sacrificing strategy
seems to diminish the organism’s fitness.
• This problem is a special case of the more general problem of explaining the
evolution of cooperation – a problem that evolutionary psychologists have
also explored from a rather different perspective in the context of the
prisoner’s dilemma.
• Hamilton’s basic idea is that there are certain circumstances in which it can make good fitness-promoting sense for an individual to sacrifice herself for another individual.
• From an evolutionary point of view, fitness-promoting actions are ones that promote the spread of the agent’s genes. And, Hamilton argued, there are circumstances where an act of self-sacrifice will help the individual’s own genes to spread, and thereby spread the kin selection gene.
• In particular, two conditions need to hold:
• Condition 1: The individual benefitting from the sacrifice must be related to the self-sacrificer.
• Condition 2: The individual benefitting from the sacrifice must share the gene that promotes kin selection.
• In English, therefore, Hamilton’s kin selection equation says that kin selection genes will spread when the reproductive benefit to the recipient of the sacrifice, discounted by the recipient’s degree of relatedness to the self-sacrificer, exceeds the reproductive cost to the self-sacrificer.
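• In symbols, this is the standard form of Hamilton’s rule (the lecture gives only the English statement): kin selection genes will spread when

  $$ rB > C $$

  where $r$ is the recipient’s degree of relatedness to the self-sacrificer, $B$ is the reproductive benefit to the recipient, and $C$ is the reproductive cost to the self-sacrificer.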
• Now, should this make us believe in the massive modularity hypothesis?
• Cosmides & Tooby think that massive modularity is the only way of solving a
fundamental problem raised by Hamilton’s theory of kin selection.
• The problem has to do with how an organism learns to behave according to the
kin selection equation.
• Simply looking at a relative will not tell the organism how much to help that
relative. Nor will she be able to evaluate the consequences of helping or not
helping.
• The consequences will not be apparent until long after the moment of decision.
The kin selection equation exploits statistical relationships that completely
outstrip the experience of any individual.
• According to Cosmides & Tooby, then, no domain-general learning mechanism could ever pick up on the statistical generalizations that underwrite Hamilton’s kin selection law.
• So how could the kin selection law get embedded in the population?
• The only way that this could occur, they think, is for natural selection to have
selected a special – purpose kin selection module that has the kin selection law
built into it.
Evaluating the arguments for massive modularity
• Now, we have seen that Darwinian modules are very different from
Fodorean modules.
• This is not very surprising, since Darwinian modules were brought into play
in order to explain types of information processing that could plainly not be
carried out by Fodorean modules.
• Now let's see the six key features of Fodorean modules:
• Domain specificity;
• Informational encapsulation;
• Mandatory application;
• Speed;
• Fixed neural architecture;
• Specific breakdown patterns.
• Of these six features, only the first seems to clearly apply to Darwinian
modules. The second applies only in a limited sense.
• If one is deciding whether or not to help a relative, there are many things
that might come into play besides the calculations that might be carried out
in a Darwinian kin – selection module – even though those calculations
themselves might be informationally encapsulated.
• One would need to make a complex cost-benefit analysis where the costs and benefits can take many forms, and this information might not be available in any proprietary database.
• Darwinian modules do not seem to be mandatory – it is unlikely that the kin
selection module will be activated every time that one encounters a relative.
• Neither of the two arguments has anything to say about neural architecture or breakdown patterns (nor does the example of the cheater detection module).
• There may be a sense in which Darwinian modules are fast, but speed is a rather fuzzy notion here: we cannot apply any measure of computational complexity without some understanding of the algorithms that Darwinian modules might be running.
• Given these fundamental differences between Darwinian modules and Fodorean modules, it is natural to ask why Darwinian modules should be counted as modules in the first place.
• A natural criticism of the arguments that we have looked at is that they seem
perfectly compatible with a much weaker conclusion.
• Both the argument from error and the argument from statistics and learning are compatible with the idea that human beings are born with certain innate bodies of domain-specific knowledge.
• When one is thinking about the organization of the mind, however, the distinction between cognitive modules and innate bodies of domain-specific knowledge is fundamentally important.
• When we formulate the massive modularity hypothesis in terms of cognitive
modules it is a bold and provocative doctrine about mental architecture. It says
that there is no such thing as a domain – general information processing
mechanism and that the mind is nothing over and above a collection of
independent and quasi-autonomous cognitive sub-systems.
• The idea that we are born with innate bodies of knowledge dedicated to certain domains is not really a claim about the architecture of cognition. It is, in any case, an assumption that cognitive scientists have proposed in various areas such as numerical competence, intuitive mechanics, and syntax.
• The distinctive feature of the massive modularity hypothesis, as developed by Cosmides
& Tooby, is its denial that there is any such thing as domain – general central processing.
• But this is a claim about informational processing and mental architecture. If we think
instead in terms of innate bodies of knowledge then the only way one can think of
reformulating the denial is as the claim that there are no domain – general learning
mechanisms.
• On the face of it, there do appear to be obvious counter-examples to this claim. For instance, don’t classical & instrumental conditioning count as learning mechanisms?
• And they do seem to be domain-general. Cosmides & Tooby cannot deny that classical & instrumental conditioning are possible.
• The most that can plausibly be said is that we know things that we could not have learnt by applying domain-general learning mechanisms.
• So, the case for the massive modularity thesis is compatible with a much weaker and less controversial conclusion.
• But, then, how do we reject the stronger version of the massive modularity thesis as an account of the mind’s information-processing architecture?
• Two possible arguments can be made. Both propose that there cannot be a completely modular cognitive system, and so that the massive modularity thesis must be untenable.
• The first, Fodor’s critique, starts off from the obvious fact that any modular system, whether Darwinian or Fodorean, takes only a limited range of inputs. So a question that anyone proposing a modular cognitive capacity has to answer is how that limited range of inputs is selected.
• The inputs must be selected by some kind of filtering mechanism: these filters ensure, for example, that only information about light intensity feeds into the earliest stages of visual processing.
• But Darwinian modules and Fodorean modules operate on fundamentally different types of input. Inputs into the cheater-detection module, for example, must be representations of social exchanges, and these are not delivered directly by the sensory transducers. There has to be some sort of filtering operation that will discriminate all and only the social exchanges.
• But, on the other hand, since the filtering process is itself modular, it must have a limited range of inputs. It is itself domain-specific, working to discriminate the social exchanges from a slightly broader class of inputs, perhaps a set of inputs whose members have in common the fact that they all involve more than one person.
• So, the same question arises again. How is this set of inputs generated?
• Presumably a further stage of processing will be required to do this filtering. It follows from the massive modularity hypothesis that this processing must itself be modular.
• According to Fodor, a similar line of argument will apply to all other Darwinian modules. The massive modularity hypothesis collapses, because it turns out that massive modularity itself requires domain-general processing.
• Fodor’s argument is bottom-up: it analyzes the inputs into Darwinian modules. There is also room for a broadly parallel line of argument that is top-down, directed at the outputs of Darwinian modules.
• It is very likely that some situations will fall under the scope of more than one
module. So, for example, something might be a social exchange when looked at
from one point of view, but a potentially dangerous situation when looked at
from another.
• Suppose this is situation A. Under the first description, A would be an input for the cheater detection module, while under the second description A might be relevant to, say, the kin-selection module. In this sort of case, one might reasonably think that a representation of A will be processed by both modules in parallel.
• However, this will often create a processing problem. The outputs of the
relevant modules will need to be reconciled if, for example, the kin
selection module ‘recommends’ one course of action and the cheater –
detection module another.
• The cognitive system will have to come to a stable view, prioritizing one output over the other. This will require further processing, and the principles of reasoning used in that processing cannot be domain-specific, because they need to be applicable to both of the relevant domains, and indeed to any other domain that might potentially be relevant (a schematic sketch follows).
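• A toy illustration of this point (purely schematic; the module names, their outputs, and the priority scheme are hypothetical, not anything proposed by Fodor or by Cosmides and Tooby):

```python
# Two hypothetical modules process the same situation in parallel and
# 'recommend' incompatible actions. Some further process has to settle on one,
# and that process cannot be tied to either module's domain, since it must be
# able to weigh outputs from any pair of modules.

def cheater_detection_module(situation):
    return "confront the cheater"        # hypothetical module output

def kin_selection_module(situation):
    return "help the relative"           # hypothetical module output

def reconcile(recommendations, priorities):
    """Domain-general arbitration: rank conflicting outputs by a
    cross-domain priority ordering."""
    return max(recommendations, key=lambda r: priorities.get(r, 0))

situation = {"involves_relative": True, "involves_exchange": True}
outputs = [cheater_detection_module(situation), kin_selection_module(situation)]
print(reconcile(outputs, {"help the relative": 2, "confront the cheater": 1}))
# -> 'help the relative'
```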
• The general thought here is really rather straightforward. According to the massive modularity hypothesis, the mind is a complex structure of superimposed Darwinian modules that have evolved at different times to deal with different problems.
• Given the complexities of human existence and human social interactions, there will have to be a considerable number of such modules. And given those very same complexities, it seems highly unlikely that every situation to which the organism needs to react will map cleanly onto one and only one Darwinian module.
• It is far more likely that in many situations a range of modules will be brought to
bear. Something far closer to what is standardly understood as central processing
will be required to reconcile conflicting outputs from those Darwinian modules.
This central processing will have to be domain – general.
• So, what can we take home from this?
• On the one hand, the strongest version of the hypothesis seems much stronger
than is required to do justice to the two arguments for massive modularity that
we considered.
• The argument from error and the argument from statistics and learning
certainly fall short of establishing a picture of the mind as composed solely of
domain – specific and quasi – autonomous cognitive sub – systems.
• At best these arguments show that there must be some domain – specific
modules – which is a long way short of the controversial claim that there
cannot be any domain – general processing.
• On the other hand, however, even if one rejects the massive modularity
hypothesis in its strongest form, it still makes some very important points
about the organization of the mind.
• In particular, it makes a case for thinking that the mind might be at least
partially organized in terms of cognitive subsystems or modules that are
domain-specific without having all the characteristics of full-fledged Fodorean
modules.
• Cognitive scientists have taken this idea very seriously, and we will explore it in more detail as we go further.