Unit 4 Models of Information Processing: Structure
4.0 INTRODUCTION
Cognition as a psychological area of study goes far beyond simply taking in
and retrieving information. Neisser (1967), one of the most influential researchers
in cognition, defined it as the study of how people encode, structure, store, retrieve,
use, or otherwise learn knowledge. The information processing approach to human
cognition remains very popular in the field of psychology.
What Waugh and Norman (1965) did that William James, who first distinguished
primary from secondary memory, never attempted was to quantify the properties
of primary memory. This short-term storage system was taken to have very limited
capacity, so that loss of information from it was postulated to occur not as a
simple function of time but, once the storage capacity was exhausted, by
displacement of old items by new ones. Primary memory (PM) can be conceptualised
as a storage compartment much like a vertical file: incoming information is stored
in a slot or, if all the slots are filled, displaces an item occupying one of the slots.
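
The vertical-file analogy maps directly onto a familiar data structure. The
following Python sketch is our own minimal illustration of the displacement
account; the class name and the capacity value are illustrative choices, not
Waugh and Norman's:

    from collections import deque

    # Primary memory as a fixed-capacity set of slots: once every slot is
    # full, each new item displaces the oldest one. The capacity of seven
    # is illustrative; Waugh and Norman did not commit to a single number.
    class PrimaryMemory:
        def __init__(self, capacity=7):
            self.slots = deque(maxlen=capacity)  # deque drops the oldest item when full

        def attend(self, item):
            displaced = self.slots[0] if len(self.slots) == self.slots.maxlen else None
            self.slots.append(item)  # newest item enters; oldest is displaced if full
            return displaced         # the forgotten item, if any

    pm = PrimaryMemory(capacity=3)
    for digit in [4, 9, 1, 7]:
        lost = pm.attend(digit)
        print(f"stored {digit}, displaced {lost}, PM now {list(pm.slots)}")

A bounded queue captures the key property of the model: nothing is lost until
the slots are full, after which every new arrival costs an old item.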
[Figure: a stimulus enters primary memory; rehearsal retains items there and transfers them to secondary memory; unrehearsed items are forgotten.]
Fig. 1.4.1: Model of primary and secondary memory (Adapted from Waugh and Norman, 1965)
Waugh and Norman traced the fate of items in PM by using lists of sixteen digits
that were read to subjects at a rate of either one digit per second or four digits
per second. The purpose of presenting digits every second or every quarter second
was to determine whether forgetting in PM was a function of decay (presumed to
be due to the passage of time) or of interference.
If forgetting was a function of decay, then less recall could be expected with the
slower rate (one digit per second); if forgetting was a function of interference in
PM, then no difference in recall could be expected according to the presentation
rate. The same amount of information is presented at both presentation rates,
which, by Waugh and Norman's logic, holds the amount of interference constant
while the slower rate allows more time for decay to occur.
It might be argued that even at one item per second, subjects would allow extra-
experimental information to enter their PM, but later experimentation (Norman,
1966), in which presentation rates varied from one to ten digits for a given period,
yielded data consistent with the rate of forgetting expected from the original model.
The rate of forgetting at the two presentation rates was similar; interference thus
seems to be a greater factor than decay in forgetting from PM.
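
The logic of the probe experiment can be expressed as a toy simulation. The
sketch below assumes a pure displacement model with all-or-none recall, a
deliberate simplification of the graded forgetting Waugh and Norman actually
observed. Presentation rate never enters the computation, which is exactly the
interference prediction:

    from collections import deque

    def probe_recalled(intervening_items, capacity=7):
        # Pure displacement model: a probed item survives in PM only if
        # fewer than `capacity` later items have entered. Presentation
        # rate (and hence elapsed time) does not appear in the computation
        # at all, which is the point: on this account, forgetting depends
        # on interference, not on time.
        pm = deque(maxlen=capacity)
        pm.append("probe")
        for i in range(intervening_items):
            pm.append(f"item-{i}")
        return "probe" in pm

    for n in (3, 6, 9, 12):
        print(n, probe_recalled(n))  # True while n is below capacity, then False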
Waugh and Norman's system makes good sense. PM holds verbal information
and is available for verbatim recall; this is true in our ordinary conversation. We
can recall the last part of a sentence we have just heard with complete accuracy,
even if we were barely paying attention to what was said. However, to recall the
same information some time later is impossible unless we rehearse it, thereby
making it available in secondary memory (SM).
Fig. 1.4.2: A stage model of memory (Adapted from Atkinson and Shiffrin, 1969)
In the Atkinson-Shiffrin model, memory starts with a sensory input from the
environment. This input is held for a very brief time – several seconds at most –
in a sensory register associated with the sensory channels (vision, hearing, touch,
and so forth). The trace lasts only about half a second for visual stimuli (Sperling,
1960), and about 4 or 5 seconds for auditory stimuli (Darwin et al., 1972). The
quick transfer of new information to the next stage of processing is of critical importance,
and sensory memory acts as a portal for all information that is to become part of
memory. There are many ways to ensure transfer and many methods for facilitating
that transfer. To this end, attention and automaticity are the two major influences
on sensory memory, and much work has been done to understand the impact of
each on information processing.
Information that is attended to and recognised in the sensory register may be
passed on to the second stage of information processing, i.e. short-term memory
(STM) or working memory, where it is held for perhaps 20 or 30 seconds. This
stage is often viewed as active or conscious memory because it is the part of
memory that is being actively processed while new information is being taken
in. Some of the information reaching short-term memory is processed by being
rehearsed – that is, by having attention focused on it, perhaps by being repeated
over and over (maintenance rehearsal), or perhaps by being processed in some
other way that will link it up with other information already stored in memory
(elaborative rehearsal). Generally 7 ± 2 units can be processed at any given time
in STM (Miller, 1956).
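
The stages described so far can be summarised in a schematic sketch. The
function below is an illustrative simplification using the durations and capacity
cited above; it is not Atkinson and Shiffrin's formal model:

    # Schematic trace of the stage model: sensory register -> short-term
    # store -> long-term store. The all-or-none transfer rules are a
    # simplification for illustration.
    SENSORY_DURATION = {"visual": 0.5, "auditory": 4.5}  # seconds (Sperling, 1960; Darwin et al., 1972)
    STM_DURATION = 25    # roughly 20 to 30 seconds without rehearsal
    STM_CAPACITY = 7     # about 7 +/- 2 units (Miller, 1956)

    def process(stimulus, modality, attended, rehearsed):
        trace = [f"{stimulus}: sensory register ({SENSORY_DURATION[modality]} s, {modality})"]
        if not attended:
            return trace + ["lost from the sensory register"]
        trace.append(f"short-term memory (~{STM_DURATION} s, capacity ~{STM_CAPACITY} units)")
        if rehearsed:
            trace.append("transferred to long-term memory")
        else:
            trace.append("lost from short-term memory")
        return trace

    for line in process("phone number", "auditory", attended=True, rehearsed=False):
        print(line)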
Tulving (1972) was the first to distinguish between episodic and semantic
memory. “Episodic memories are those which give a subject the sense of
remembering the actual situation, or event” (Eliasmith, 2001). Episodic memory’s
store is centered on personal experience and specific events. It is entirely
circumstantial and it is not generally used for the processing of new information
except as a sort of backdrop. Semantic memory, in contrast, deals with general,
abstract information and can be recalled independently of how it was learned. It
is semantic memory that is the central focus of most current study because it
houses the concepts, strategies and other structures that are typically used for
encoding new information. Most researchers now combine these two in a broader
category labelled declarative memory.
Craik and Lockhart (1972) proposed an alternative to such stage models: the
levels-of-processing view, on which retention depends on how deeply information
is processed, from shallow sensory analysis to deep semantic analysis. The
significant issue, in Craik and Lockhart's view, is that we are capable of
perceiving at meaningful levels before we analyse information at a more primitive
level. Thus, levels of processing are more a “spread” of processing, with highly
familiar, meaningful stimuli more likely to be processed at a deeper level than
less meaningful stimuli.
That we can perceive at a deeper level before analysing at a shallow level casts
grave doubts on the original levels-of-processing formulation. Perhaps we are
dealing simply with different types of processing, with the types not following
any constant sequence. If all types are equally accessible to the incoming stimulus,
then the notion of levels could be replaced by a system that drops the notion of
levels or depth but retains some of Craik and Lockhart's ideas about rehearsal
and about the formation of memory traces.
A model that is closer to their original idea is shown in Figure 1.4.3. This figure
depicts the memory activation involved in proofreading a passage as contrasted
with that involved in reading the same passage for the gist of the material.
Proofreading, that is, looking at the surface of the passage, involves elaborate
shallow processing, or “maintenance rehearsal” (material held in memory without
elaboration), and minimal semantic processing. Reading for gist, that is, trying
to get the essential points, involves minimal shallow processing but elaborate
semantic processing. Another example of the former kind of memory activity
would be a typist who concentrates on responding to letter sequences but has
very little understanding of the material being typed.
As a result of some studies (Craik & Watkins, 1973; Lockhart, Craik, &
Jacoby, 1975), the idea that stimuli are always processed through an unvarying
sequence of stages was abandoned, while the general principle that some sensory
processing must precede semantic analysis was retained.
[Figure: two reading tasks contrasted. Proofreading engages maintenance rehearsal at shallow levels of processing; reading for gist engages elaborative rehearsal at deeper levels.]
Fig. 1.4.3: Memory activation in two kinds of reading (Adapted from Solso, 2006)
One clear difference between the boxes-in-the-head theory (Waugh and Norman,
and Atkinson and Shiffrin) and the levels-of-processing theory (Craik and
Lockhart) is their respective notions concerning rehearsal. In the former, rehearsal,
or repetition, of information in STM serves the function of transferring it to a
longer-lasting memory store; in the latter, rehearsal is conceptualised as either
maintaining information at one level of analysis or elaborating information by
processing it to a deeper level. The first type, maintenance rehearsal, will not
lead to better retention.
Craik and Tulving (1975) tested the idea that words that are deeply processed
should be recalled better than those that are less so. They did this by having
subjects simply rate words as to their structural, phonemic, or semantic aspects.
Craik and Tulving measured both the time to make a decision and recognition of
the rated words. The data obtained are interpreted as showing that (1) deeper
processing takes longer to accomplish and (2) recognition of encoded words
increases as a function of the level to which they are processed, with those words
engaging semantic aspects better recognised than those engaging only the
phonological or structural aspects. Using slightly different tasks, D’Agostino,
O’Neill, and Paivio (1977); Klein and Saltz (1976); and Schulman (1974) obtained
similar results.
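
The design of the Craik and Tulving paradigm is easy to sketch. The example
questions below paraphrase the kinds of structural, phonemic, and semantic
judgments used; the exact wording varied across their experiments:

    # The three orienting tasks of the Craik and Tulving (1975) paradigm,
    # ordered from shallowest to deepest level of processing. The prediction
    # is that words encoded by the deeper questions will later be
    # recognised best.
    LEVELS = [
        ("structural", "Is the word printed in capital letters?"),
        ("phonemic",   "Does the word rhyme with 'train'?"),
        ("semantic",   "Would the word fit the sentence 'He met a ___ in the street'?"),
    ]

    def make_trial(word, depth):
        level, question = LEVELS[depth]
        return {"word": word, "level": level, "question": question}

    print(make_trial("FRIEND", 2))  # a semantic (deepest) trial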
In a related study that added a self-reference rating condition, it was assumed,
as in the Craik and Tulving study, that words more deeply coded during rating
should be recalled better than those words with shallow coding. After the subjects
rated the words, they were asked to free-recall as many of the rated words as
possible. Recall was poorest for words rated structurally and ascended through
those rated phonemically and semantically. Self-referenced words were recalled
best.
Essentially, the parallel distributed processing (PDP) model is neurally inspired,
concerned with the kind of processing mechanism that is the human mind. Is it a
type of von Neumann computer – a “Johnniac” – in which information is processed
in sequential steps? Alternatively, might the human mind process information in
a massively distributed, mutually interactive parallel system in which various
activities are carried out simultaneously through excitation and/or inhibition of
neural cells? PDPers opt for the latter explanation.
“These [PDP] models assume that information processing takes place through
the interactions of a large number of simple processing elements called units,
each sending excitatory and inhibitory signals to other units” (McClelland,
Rumelhart, & Hinton, 1986). These units may stand for possible guesses about
letters in a string of words or notes on a score. In other situations, the units may
stand for possible goals and actions, such as reading a particular letter or playing
a specific note. Proponents suggest that PDP models are concerned with the
description of the internal structure of larger units of cognitive activity, such as
reading, perceiving, processing sentences, and so on.
The connectionist (or PDP) model attempts to describe memory from the even
finer-grained analysis of processing units, which resemble neurons. Furthermore,
the connectionist model is based on the development of laws that govern the
representation of knowledge in memory. One additional feature of the PDP model
of memory is that it is not just a model of memory; it is also a model for action
and the representation of knowledge.
A fundamental assumption of the PDP model is that mental processes take place
through a system of highly interconnected units, which take on activation values
and communicate with other units. Units are simple processing elements that
stand for possible hypotheses about the nature of things, such as letters in a
display, the rules that govern syntax, and goals or actions (for example, the goal
of typing a letter on a keyboard or playing a note on the piano). Units can be
compared to atoms, in that both are building blocks for more complete structures
and combine with others of their kind to form larger networks. A neuron in the
brain is a type of unit that combines with other neurons in a parallel processing
mode to form larger systems.
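
A single unit of this kind is easy to state in code. The sketch below uses a
logistic squashing function, which is one common choice in PDP models rather
than the definition of all of them; the particular activation values and weights
are invented:

    import math

    def unit_activation(inputs, weights, bias=0.0):
        # One PDP-style unit: sum the weighted signals arriving from other
        # units (positive weights are excitatory, negative weights
        # inhibitory), then squash the net input into an activation value.
        net = sum(a * w for a, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-net))  # activation between 0 and 1

    sender_activations = [0.9, 0.6, 0.8]   # three sending units
    weights = [1.5, 0.7, -2.0]             # two excitatory links, one inhibitory
    print(unit_activation(sender_activations, weights))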
Units are organised into modules, much as atoms are organised into molecules.
The number of units per module ranges from thousands to millions. Each module
receives information from other modules and, after processing, passes information
on to other modules. In this model, information is received, permeates the system,
and leaves traces behind when it has passed through. These traces consist of
changes in the strength (sometimes called the weight) of the connections between
individual units in the model.
A memory trace, such as a friend’s name, may be distributed over many different
connections. The storage of information (for example, friend’s name) is thought
to be content addressable—that is, we can access the information in memory on
the basis of its attributes. You can recall your friend’s name if I show you a
picture of him, tell you where he lives, or describe what he does. All of these
attributes may be used to access the name in memory. Of course, some cues are
better than others.
Additional information (for example, the man with the beard, the left-handed
player, the guy with red tennis shorts, the dude with the rocketlike serve, the
chap with the Boston terrier, and so forth) may easily focus the search. You can
imagine how very narrow the search would be if all of these attributes were
associated with only one person: the man you play tennis with has a beard, is
left-handed, wears red tennis shorts, has a hot serve, and has a Boston terrier.
In real life, each of these attributes may be associated with more than one person.
You may know several people who have a hot serve or have a beard. If that is the
case, it is possible to recall names other than the intended one. However, if the
categories are specific and mutually exclusive, retrieval is likely to be accurate.
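
Content-addressable retrieval can be illustrated with a toy lookup in which
names are recalled from attribute cues. The people and attributes below are
invented for illustration:

    memory = {
        "Ravi": {"beard", "left-handed", "red shorts", "hot serve", "Boston terrier"},
        "Sam":  {"beard", "hot serve"},
        "Neel": {"left-handed", "red shorts"},
    }

    def recall(cues):
        # Return every stored name whose attribute set contains all cues.
        return [name for name, attrs in memory.items() if cues <= attrs]

    print(recall({"beard"}))                 # ambiguous cue: two candidates
    print(recall({"beard", "left-handed"}))  # conjoint cues narrow recall to one name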
How can a PDP modular concept of memory keep these interfering components
from running into each other?
Consider a boy who, over time, sees many different dogs. The rationale offered
by the connectionist model for prototype formation in this case is that each time
the boy sees a dog, a visual pattern of activation is produced over several of the
units in a module, while the name of the dog produces a more reduced pattern of
activation. The combined activation of all exemplar dogs sums to a prototype
dog, which may be the stable memory representation. Thus the model, which is
more detailed than presented here, seems to account for this form of memory
quite nicely.
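
The summation idea behind the prototype can be shown in a few lines. In the
sketch below, each invented list is the activation pattern one exemplar dog
produces over a module's four units, and the prototype is simply their average:

    # Prototype formation as combined activation over exemplars: averaging
    # the patterns produced by individual dogs yields a stable central
    # tendency. The activation values are invented for illustration.
    exemplars = [
        [0.9, 0.1, 0.8, 0.2],  # activation pattern for dog 1
        [0.8, 0.2, 0.9, 0.1],  # dog 2
        [0.7, 0.3, 0.7, 0.3],  # dog 3
    ]

    prototype = [sum(unit_values) / len(exemplars) for unit_values in zip(*exemplars)]
    print(prototype)  # the central tendency over all exemplar dogs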
The connectionist model of memory has won many disciples in the past few
years. Its popularity is due in part to its elegant mathematical models, its
relationship to neural networks, and its flexibility in accounting for diverse forms
of memories.
References
http://chiron.valdosta.edu/whuitt/col/cogsys/infoproc.html
http://www.well.com/user/smalin/miller.html