Human Associative Memory
Book Review
WAYNE A. WICKELGREN
In the first place, I liked the title; human conceptual memory is associative, and it is
nice to see that assertion right up front in the title of this important book. I have always
considered the issue of whether any memory system was associative or nonassociative
to be fundamental. The first memory paper I ever wrote (Wickelgren, 1965) presented
a definition of one type of associative memory and one type of nonassociative memory
and asserted that the evidence favored the assumption that verbal short-term memory
is associative. Since then, about half a dozen subsequent papers of mine were also
concerned with this issue and related issues concerned with the associative character of
verbal long-term memory (Wickelgren, 1972) and the nonassociative character of
visual sensory memory (Wickelgren & Whitman, 1970).
Partly because the general notion of associative memory has been around since
Aristotle, the most common reaction to my work on this topic has been largely “ho
hum.” Also, the decade of the 60’s was not a good time for definitions, arguments, and
evidence favorable to the assumption that human conceptual memory is associative.
Computer models and transformational generative grammar were “in,” and, for
completely illogical reasons, associative memory was “out.” The notion that memory
is associative suffered from “guilt by association.” Nearly universal acceptance of the
hypothesis of associative memory throughout the early history of psychology led to the
ridiculous notion during the 60’s that the assumption of associative memory was
directly linked to a noncognitive, operationist, crassly empirical, S-R approach to
psychology. As a consequence, a lot of psycholinguists and cognitive memory
researchers threw out the baby with the bath water and went off developing theories
of the human mind that either ignored memory altogether or worked with some
* This work was supported by Grant 3-0097 from the National Institute of Education and by
Contract F44620-73-C-0056 from the Advanced Research Projects Agency, D.O.D., to Ray
Hyman.
† New York: John Wiley and Sons, 1973.
Copyright © 1976 by Academic Press, Inc.
All rights of reproduction in any form reserved.
primitive and incorrect nonassociative conception of it. Research of the last few years
on semantic memory, particularly as exemplified by the Anderson and Bower book,
has gone a long way toward rectifying this grievous error.
In the second place, I liked the overview that Anderson and Bower provided to the
different aspects of understanding semantic memory. As I see it, a complete theory of
semantic memory must solve seven theoretical subproblems: the structure of semantic
memory, recognition, acquisition, storage, retrieval, inference, and generation. The
structure of semantic memory refers to the nature of the memory code, including its
associative vs. nonassociative nature, the particular types of associations and elements
(nodes), how our knowledge of the world is represented in semantic memory, etc.
Recognition refers to the process by which the familiar portion of new stimulus input is
identified and mapped onto already existing representation in semantic memory,
including its dynamics. Acquisition refers to the process by which the new (previously
uncoded) portion of input is encoded into semantic memory, including its dynamics.
Storage refers to the nature and dynamics of the changes that take place in semantic
memory over the retention interval between acquisition and usage (retrieval).
Retrieval, inference, and generation could all be subsumed under the rubric of
“usage,” but the theoretical problems involved in each seem sufficiently different at
the present time to warrant separate treatment. Accordingly, I use the term “retrieval”
in the same way that Anderson and Bower use the term “fact retrieval” to refer to
elementary question answering (which differs from Anderson and Bower’s use of the
term “question-answering”). Elementary question answering has two principal
components: answering yes-no questions and answering wh-questions (who, what,
when, where, and possibly why, and how). This subdivision corresponds approximately
to the distinction in traditional memory research between recognition and recall
(approximately, but not exactly). “Retrieval” will be used to refer to elementary
question answering for material directly stored in semantic memory. By contrast,
inference will refer to elementary question answering that requires some combination
of separately stored memory traces in semantic memory (e.g., if Leibniz was a person
and people have four-chambered hearts, then Leibniz had a four-chambered heart).
Generation refers to spoken or written verbal production (principally speech produc-
tion). This is distinguishable from elementary retrieval and inference by virtue of the
production of long, syntactically complex utterances.
The processes are not totally independent. For example, recognition is usually
involved in acquisition and necessarily in usage. The last six subproblems refer to
processes for which one wants ultimately to specify both the basic nature of the process
and its (temporal) dynamics. The structure of semantic memory has no dynamic aspect
and is perhaps the most basic subproblem to solve, since it may be deeply involved in
the solution to most, if not all, of the other theoretical subproblems. However, studying
the structure of semantic memory typically requires at least some minimal assumptions
concerning most of the six processes. Despite the likelihood of at least a modest degree
SUBPROBLEMS OF SEMANTIC MEMORY 245
of interaction between each of these seven subproblems, some such analysis is essential
to the solution of any complex scientific problem. This seems like a good one.
One of the strengths of the Anderson and Bower book is the presentation of a sub-
problem analysis which seems to me to be equivalent to the one just specified, or
nearly so. Furthermore, Anderson and Bower present preliminary examples of a
theoretical solution to each of the first five subproblems and at least discuss the problem
of an adequate accounting for inference in some detail, although their theory of
semantic memory has very limited inferential capacity. Anderson and Bower mention
the problem of speech generation, but present no example theoretical solution.
Anderson and Bower generally also discuss a number of empirical findings (their own
and others) that confirm or reject various aspects of their theory. In the scope of both
their theoretical approach to semantic memory and the empirical testing thereof, the
Anderson and Bower theory (HAM, for Human Associative Memory) has no published
rival. It should be noted that this does not imply that Anderson and Bower’s theoretical
solution to any subproblem area is superior to that of Norman and Rumelhart (1975),
Schank (1973), Winograd (1972), etc., to name just a few outstanding recent alternative
formulations. The ultimate worth of different theories, particularly in the most basic
area of the structure of semantic memory, will take some time to determine. I certainly
am not able to draw any definite global conclusion on this matter at the present.
Before proceeding to a critique of Anderson and Bower’s theory, I wish to provide
a brief overview of the book’s contents and evaluate its suitability for courses. The
book is divided into two parts. The first five chapters are an historical introduction to
semantic memory including various versions of associationism, Gestalt and recon-
structive notions of memory (such as they are), computer simulation or artificial
intelligence models of semantic memory, and linguistic contributions to the under-
standing of syntax and semantics. This introductory review occupies the first 135 pages
and substantially enhances the value of the book as a text for a semantic memory
seminar or an advanced course.
The remaining three quarters of the book presents Anderson and Bower’s theory
with chapters having some rough correspondence to the subproblem analysis previously
described, though this could have been improved to give a more exact correspondence.
In addition, Anderson and Bower extend their theory to the analysis of the syntactically
unstructured materials used in traditional verbal learning experiments. Finally, at
various stages in the book, Anderson and Bower present some ideas concerning the
propositional representation of image memory. This is very stimulating, whether
or not you agree with their point of view.
Doug Hintzman, Ray Hyman, and I used the Anderson and Bower book as the sole
text for a one quarter graduate seminar on semantic memory, and in my opinion it was
an unqualified success. However, many students, particularly those with little or no
background in linguistics, psycholinguistics, or artificial intelligence, complained that
the book was too difficult to understand. Personally, I think that is a matter of
expectation. Students who are familiar with courses and seminars that are focused on
experiments will find a course that is primarily oriented to the understanding of a
relatively complex and precisely specified theory to be conceptually difficult. If students
expect such material to be difficult, to require substantial time for understanding and
to require asking a lot of questions, then a course or seminar based on this book
can be completely successful. Despite complaints from a number of students, the
semantic memory seminar we held based on this book has had an enormous impact
at the University of Oregon, certainly greater than any course or seminar I have ever
been a part of before.
In my opinion, qualitative and quantitative mathematical psychology have reached
the point of development where we should insist that the vast majority of all students
in “experimental” psychology develop some conceptual (mathematical) sophistication
to accompany their experimental and statistical sophistication. Since semantic memory
is currently “hot,” systematic exposure to this book provides an excellent vehicle for
increasing the conceptual and theoretical training of graduate students in cognitive
psychology. I do not think that the proper way to introduce the area of semantic
memory is by prior study of linguistics or computer science, since these areas provide
less motivation to the student in the form of the psychological relevance of the
conceptual material being learned. Finally, prior study of empirical psycholinguistics
is of only minor value in understanding the theoretical conceptions important in
semantic memory and simply lengthens the time to acquire the necessary sophistication.
Associative Memory
Propositional Nodes
Besides having a multiplicity of labeled links, Anderson and Bower have taken
another important step away from traditional associative memory by assuming that
learning consists not of the formation or facilitation of connections between already
existing nodes (ideas, concepts), but rather of the formation of “vertical” associations
between existing nodes for lower-level concepts, phrases, or propositions and higher-level nodes
representing phrases and propositions. For example, to encode the phrase “touch
debutante,” Anderson and Bower do not assume the formation or facilitation of a
single link between the concept “touch” and the concept “debutante.” Rather, they
assume that a new node is formed standing for the predicate “touch debutante” to
which both “touch” and “debutante” are linked in both forward and backward
directions. Similarly, the proposition “hippie touch debutante” is encoded in semantic
memory, not by direct associations between each of the pairs of concepts, but rather
by a hierarchy of “vertical” associations: (a) “touch” and “debutante” to the predicate
node “touch debutante” and (b) the predicate node and the subject “hippie” to the
propositional node “hippie touch debutante.”
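This hierarchical encoding can be sketched in a few lines of illustrative Python. The node class, the link names, and the `bind` helper are my own, not HAM's; the point is only that every phrase and proposition gets a new node, with associations running both downward to constituents and backward upward.

```python
# Illustrative sketch of "vertical" associative encoding (names are mine, not HAM's).
# Each phrase or proposition gets its own new node; links run both downward
# (node -> constituents) and upward (constituent -> node).

class Node:
    def __init__(self, label):
        self.label = label
        self.down = []   # links to constituent nodes
        self.up = []     # backward links to higher-level nodes

def bind(parent, children):
    """Create forward and backward vertical associations."""
    for child in children:
        parent.down.append(child)
        child.up.append(parent)

# Concept nodes assumed to exist already.
hippie, touch, debutante = Node("hippie"), Node("touch"), Node("debutante")

# Encoding "hippie touch debutante":
# (a) a new predicate node binds "touch" and "debutante" ...
predicate = Node("touch debutante")
bind(predicate, [touch, debutante])

# (b) ... and a new propositional node binds the subject to the predicate.
proposition = Node("hippie touch debutante")
bind(proposition, [hippie, predicate])

# No direct "horizontal" link joins hippie to debutante; they are connected
# only through the higher-level nodes.
assert predicate in debutante.up and proposition in hippie.up
```

Note that the only route from “hippie” to “debutante” runs upward through the propositional node and back down through the predicate node.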
To me this assumption of the existence of phrasal and propositional nodes was the
single most important idea in the book. Anderson and Bower’s theory does not quite
assume that phrasal and propositional nodes are equivalent to more elementary concept
nodes (and I consider this to be a defect of the theory). However, Anderson and
Bower’s theory deviates from the traditional assumption that associations are
“horizontal” (directly connecting the component ideas of a proposition) more than the
theories of either Norman and Rumelhart (1975), or Schank (1974).
The assumption that associative memory requires the capability of adding new nodes
by means of these vertical associations is an idea that seems very attractive and
necessary to me. For example, I have argued elsewhere (Wickelgren, 1969) in favor
of the assumption that associative memory can add new nodes or elements to stand
for new concepts or ideas as they are learned. Horizontal associations between the
attributes of a concept seem completely insufficient to explain our capacity for
representation of concepts. Also, I once used a somewhat similar notion of vertical
FIG. 1A. Anderson and Bower’s representation of a conjunctive compound proposition.
S = Subject, P = Predicate, R = Relation (verb), O = Object.
FIG. 1B. Independent propositional representation of a compound proposition.
Between-Propositional Analysis
One of the many useful distinctions made in Anderson and Bower’s book is the
distinction concerning the between-propositional level of analysis and the within-
propositional level of analysis. The distinction has some analogy to that between
propositional and predicate calculus, though Anderson and Bower’s within-proposi-
tional constituent structure is far more detailed than the constituent structure of
predicate calculus.
Basically, Anderson and Bower’s theory of the between-propositional level of
analysis simply states that compound and complex sentences have as their constituents
the propositions represented by the various component clauses of the sentence.
I guess everyone in linguistics and psycholinguistics agrees with this, and, in
Chapter 9, Anderson and Bower present some evidence supporting the psychological
reality of this between-propositional level of analysis in the form of positive transfer
for learning new sentences that contain previously learned complete subject + verb +
object propositions.
That is to say, a sentence such as “John bit a red apple” would be represented by two
constituent propositions: “John bit an apple” and “the apple is red.” Although
Anderson and Bower appear to assume this deep structure analysis of adjectives, they
do not do experiments using adjectives and appear to have largely ignored the issue.
I do not remember the exact experiments that have been performed on this question,
but it is my impression that there is precious little psychological evidence that supports
the separate propositional representation of adjectives. Anderson and Bower might
handle adjective and noun phrases in a manner similar to their treatment of definite
descriptions, but the success of such treatment cannot presently be evaluated.
Within-Propositional Representation
Being verbs. A minor elegant feature in the Anderson and Bower theory is the fact
that verbs of being (is, are, was, were, etc.) are not represented in the same way as
other verbs, but rather are considered to be represented by an undifferentiated predicate
node as shown in Figure 2. The difference in the representation of a sentence with
a verb of being compared to one with a transitive verb is shown in Fig. 1. It should be
noted that the representations shown in Figures 1 and 2 are simplified by deletion of
terminal quantifiers from the representation assumed by HAM.
FIG. 2. Representations in HAM of “A canary is a bird” and “A canary is biting the man
in the park.” C = Context, F = Fact, L = Location, T = Time, S = Subject, P = Predicate,
R = Relation (Verb), and O = Object.
Structural vs. temporal contiguity. The sad state of affairs regarding within-
propositional representation is nowhere more convincingly demonstrated than in
pages 319-329 where physical (temporal) contiguity sometimes provides better
prediction than HAM of what concepts are most closely associated to other concepts
within propositions. Only a fool could think that the physically adjacent words in a
sentence constitute the principal structure of semantic memory. Hence, I interpret the
failure of HAM’s propositional contiguity by comparison to temporal contiguity to
reflect the miserably inadequate state of our understanding concerning within-proposi-
tional representation. This is especially so considering that, once again, the analysis
of between-propositional structure was quite superior to physical contiguity in
predicting what is most closely associated to what in sentence recall.
dog. Hence, if your dog, Rover, bit a particular postman there would be a node
representing Rover which was associated via the set membership link to the concept
dog, and there would also be the same type of link between the node representing the
particular postman that he bit and the general concept of postman. Finally, the same
type of link would exist between the node representing the particular act of biting that
your dog performed and the general concept of biting.
The representation of “Rover bit all postmen” is as shown in Figure 3, where the
particular node for Rover is linked via the set membership association (E) to the
concept of dog and all postmen is linked via the “generic” association for universal
quantification (V) to postman, and the particular act of biting that Rover (or dogs ?) do
(does) to postmen (or all people or no restriction ?) is linked by the generic association
to a more general concept of dog-biting (?), which in turn is linked by the subset
association (C) to the general concept of biting. Of course this sentence is a bit absurd
since it asserts that Rover is currently biting or has bitten or will bite all postmen that
exist. However, I wanted to represent all three types of quantifiers in HAM using a
single diagram. I believe I did this.
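The three labeled links can be rendered as tagged edges in a small graph. The list-of-triples representation below is my own toy illustration, not Anderson and Bower's implementation; it only shows how distinct link labels (member, subset, generic) keep the three kinds of quantification apart at retrieval time.

```python
# Toy rendering of HAM's three quantifier links as labeled edges.
# The triple-store graph is my own illustration, not Anderson and Bower's code.
links = []  # (source, label, target)

def link(src, label, dst):
    links.append((src, label, dst))

# "Rover bit all postmen":
link("Rover", "member", "dog")              # Rover is an element of the set of dogs
link("all postmen", "generic", "postman")   # universal quantification over postmen
link("this biting", "generic", "dog-biting")
link("dog-biting", "subset", "biting")      # dog-biting is a subset of acts of biting

def targets(src, label):
    """Follow all links with a given label out of a node."""
    return [d for (s, l, d) in links if s == src and l == label]

assert targets("Rover", "member") == ["dog"]
assert targets("dog-biting", "subset") == ["biting"]
assert targets("Rover", "subset") == []     # the labels keep the relations distinct
```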
FIG. 3. Representation of the three types of quantifiers in HAM: member (∈), subset (⊂),
and generic (∀, meaning “all” or “for all”).
I am not fond of the procedure of labeling the links. One alternative is that “Rover is
a dog” could be represented by a simple subject-predicate construction. A similar
solution could be used for all subsets of dogs. However, there are important subtleties
here that Anderson and Bower discuss (somewhat opaquely). For example, the Terrier
subset of dogs is not itself a dog. Each member of the subset is a dog, but the subset is
not a dog. The relation between individual elements and subsets of these elements
must be represented. One of the major contributions of the Anderson and Bower
book is to draw attention to this important problem and to suggest one possible
solution.
Concepts and words. Anderson and Bower also draw explicit attention to the
distinction between the concept (idea) and the word representing the concept, that is
between dog and “dog.” The word representing the concept is linked to the general
concept node via the word-idea association. One might try to dispense with unitary
word representatives and consider the representative of a word to be the set of its
phonetic (segmental) constituents. However, I suspect that a unitary (chunk or
concept) representation of words is also necessary.
Types and tokens. Finally, linking terminal constituents of propositional trees via
quantifier associations to more general concepts, in my opinion, exhibits a superb use
of the type-token distinction. Instead of pretending that we create multiple tokens
willy-nilly every time the same concept appears in a new sentence, Anderson and Bower
assume that tokens are introduced only to represent (sometimes subtle) differences in
meaning for the same type. Hence, when the term “the dog” is used in one sentence, it
may refer to Rover, and, in another sentence, it may refer to a different dog. Obviously,
we should represent these by two different concepts. Anderson and Bower introduce
nodes to represent these two quite properly different concepts, linking each of them via
the set membership relation (quantifier) to the more general concept of dog. If both
of these dogs have names, then each individual dog concept would be linked to its
own word representative. However, many concepts have no name of their own and
require disambiguation by an appropriate context for unique identification. The fact
that these concepts do not have single words associated to them does not mean that
the concepts should not have distinct nodes representing them. Anderson and Bower
quite properly decide that they should. Once again, one must be impressed by the deep
sophistication that Anderson and Bower demonstrate in understanding the problems of
adequate representation for semantic memory.
I do not personally believe that every individual act of biting should be represented
by a new node (token), and it is not clear what Anderson and Bower believe regarding
such matters. It seems to me that, at some point, representation in semantic memory
must ignore subtle differences in favor of completely equivalent encoding of similar
concepts in different propositions, allowing the propositional context to further
distinguish the concept in this context.
Verbal Learning
Anderson and Bower also assert that traditional verbal learning material is encoded
by somewhat degenerate propositions. This seems like a reasonable extension of their
theory to such material, and recent work on the efficacy of various mnemonic devices
that in most instances amount to embedding pairs of verbal items into verbal or
imaginal “propositions” is certainly consistent with this general point of view. Similarly,
the importance of serial position concepts and grouping concepts for the organization
of linear orders, such as in serial list learning, also argues for some more abstract
propositional structure underlying this type of learning as well. In the case of
syntactically unstructured material, such as is typically used in verbal learning
experiments, it is not clear that much is added by a propositional analysis, other than
the distinctly important fact that all learning is subsumed under a common rubric.

Anderson and Bower’s belief that the extension of their theory to verbal learning
materials was necessary to generate interest and experimental testing of their theoretical
notions seems to me to be wrong. The interest in Anderson and Bower’s book will
surely be primarily from cognitive psychologists, psycholinguists, linguists, and
computer scientists interested in theories of semantic memory, not verbal learning
researchers.

Also, I fail to see how two obviously brilliant people such as John Anderson and
Gordon Bower can waste their time studying a hopelessly uncontrolled procedure
such as free-recall. Basic research theories of such uncontrolled and complex processes
are a scientific absurdity. Since free-recall also has little practical significance, I consider
FRAN a waste of time.
RECOGNITION
ACQUISITION
Anderson and Bower have little to say concerning acquisition that was not already
said in conjunction with the recognition process. This occurs for several reasons.
First, Anderson and Bower have no theory of the dynamics of parsing or acquisition.
They do have a theory of the dynamics of MATCHing, which will be discussed
briefly in the Retrieval section.
Acquisition of new nodes and links in semantic memory is accomplished in HAM
by the MATCH process which simply grafts all the unMATCHed portions of the input
tree onto the MATCHed portions following the rule of achieving the highest degree
of node economy.
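The grafting idea can be sketched as follows. This is a drastic simplification under my own assumptions (HAM's actual MATCH operates over labeled binary trees, not bare label lookups); it shows only the economy principle: matched portions of the input tree are reused, and only the unmatched remainder is grafted in as new nodes.

```python
# Simplified sketch of acquisition by grafting: portions of the input tree that
# MATCH existing memory are reused; unmatched portions are attached as new nodes.
# Trees are (label, children) tuples; this representation is mine, not HAM's.

memory = {}  # label -> node already in semantic memory

def acquire(tree):
    """Return the memory node for `tree`, reusing MATCHed nodes for economy."""
    label, children = tree
    if label in memory:                              # MATCHed portion: reuse it
        return memory[label]
    node = (label, [acquire(c) for c in children])   # graft the unmatched portion
    memory[label] = node
    return node

# Memory already contains the concept "dog".
acquire(("dog", []))
before = len(memory)
acquire(("dog bites postman", [("dog", []), ("bites postman", [("postman", [])])]))

# Only the three genuinely new nodes were added; "dog" was reused, not duplicated.
assert len(memory) - before == 3
```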
However, as mentioned previously, HAM’s MATCH process is a bit overzealous
in its efforts to create higher-level node economy in that it creates compound proposi-
tional nodes. This produces the problem of independent contradiction of each proposi-
tion. Although Anderson and Bower did not acknowledge this problem, they did
note that their MATCH process can lead to “multiplication of meanings” (discussed
on pp. 243-246). As an example of this, consider that in HAM a propositional node
may join subject-l with predicate-l, later join subject-l to predicate-2, and then later
join subject-2 to predicate-2. When this happens, one also gets subject-2 joined to
predicate-l, which is not a valid inference. To solve this problem, Anderson and
Bower created another process, IDENTIFY, which inhibits the MATCH process
whenever it would produce (unwanted) multiplication of meanings. IDENTIFY is,
however, a very ad hoc process which does not solve the problem of independent
contradiction.
STORAGE
HAM has four different types of memory. First, there is the sensory buffer that
contains up to seven words. Second, there is a push-down store that indicates where
HAM is in the parsing process. Third, there is a working memory that holds the
tree structures generated during parsing for use during the MATCH and
IDENTIFY processes. Finally, there is the long-term semantic memory. This is an
excessive number of different memories for my taste. In my opinion, the reasons for this
large number derive primarily from the separation of parsing, matching, and acquisi-
tion, top-down parsing, and not enough parallel processing.
Although Anderson and Bower devote an entire chapter to the topic of interference and
forgetting, it is clear that Anderson and Bower’s primary efforts were directed to
storage. Associations in HAM are “all or none,” although the ordering of associations
with identical link labels on the GET lists at each node does provide an unusual kind
of gradation. The only storage process to be found in HAM is that new associations
with the same link label at a node “bury” old associations in the recency-ordered GET
list. Retrieval at a given node involves serial search in both recognition and recall, and
Anderson and Bower assume that retrieval stops after some randomly determined
period of time. Hence, HAM has an interference mechanism that is similarity dependent
in a reasonable way.
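The burying mechanism amounts to a recency-ordered list searched serially under a deadline. A sketch, with the caveat that the cutoff distribution and all numerical values are my own arbitrary choices, not HAM's:

```python
import random

# Sketch of a recency-ordered GET list: new associations with the same link
# label are pushed onto the front, "burying" older ones. Retrieval searches
# serially and gives up after a randomly determined number of steps.
# (The deadline distribution below is an arbitrary choice of mine, not HAM's.)

class GetList:
    def __init__(self):
        self.items = []              # most recent first

    def store(self, target):
        self.items.insert(0, target)

    def retrieve(self, target, rng):
        budget = rng.randint(1, 4)   # random search deadline (arbitrary range)
        for i, item in enumerate(self.items):
            if i >= budget:
                return False         # search stopped before reaching the target
            if item == target:
                return True
        return False

rng = random.Random(0)
gl = GetList()
gl.store("old fact")
for k in range(5):
    gl.store(f"new fact {k}")        # each store buries "old fact" deeper

# The buried association now sits beyond every possible deadline, so it is
# never retrieved, while the most recent association is always found.
hits = sum(gl.retrieve("old fact", rng) for _ in range(1000))
assert hits == 0
assert gl.retrieve("new fact 4", rng) is True
```

Interference here is similarity dependent in exactly the sense of the text: only associations sharing the same GET list (same node, same link label) compete for the search budget.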
HAM does make one unique prediction concerning storage, which is that interference
obtained from repeating the same word in different sentences should be obtained
only if the concept denoted by that word has the same relation to the other concepts
presented in the sentence. For example, HAM expects interference if the same concept
is used as the subject (or the agent) in two sentences, but does not expect any inter-
ference if the same concept is used as the subject in one sentence and the object in
another sentence (or the agent in one sentence and the object in the other sentence).
HAM predicts both relation-specific negative transfer and retroactive interference.
Anderson and Bower obtain the effect in transfer, but not in retroactive interference.
The prediction is obviously “up in the air.” I was no more successful than Anderson
and Bower in figuring out why this should have happened.
The most obvious flaw in HAM’s storage dynamics is that the ordering of the GET
lists is by recency only. However, I doubt that accounting for frequency effects would
pose insuperable difficulties for HAM. Overall, it appears that Anderson and Bower
had no desire to devote much of their theoretical effort to precise formulation of
storage processes, being content to postulate some process that was a crude first
approximation which they hoped would not greatly affect their evaluation of the parts
of the theory into which they put more effort. This is a reasonable and necessary way
to go about attacking a complex problem.
RETRIEVAL
INFERENCE
Anderson and Bower admit that HAM is weak in inferential capacity. However,
largely in Chapter 10, Anderson and Bower list and discuss a large number of semantic
inferences that people are capable of making.
he was referred to directly by name or by a description. The same holds for synonyms.
It is quite desirable that HAM has this inferential capacity, but it is also clear from the
results on pages 248-251 that subjects are capable of some differential encoding
for sentences with names as opposed to sentences with definite descriptions. There is
no elegant accounting for this difference in HAM, though Anderson and Bower often
make noises to the effect that differences like this could be accounted for by means of a
certain degree of auxiliary propositional encoding.
Negation
Anderson and Bower discuss the representation of negation and inferences involving
negations. HAM encodes propositional negation by embedding a negated proposition
P as a predicate in a superordinate proposition, “It is false that P.”

This is okay for saying no to P in fact retrieval, but how, in general, does one make
the inference that Q is false because it is contradictory to stored information? Perhaps
this should be considered a different problem.

Another nice feature of HAM is that it includes encoding of the presuppositions of
negation. Anderson and Bower provide the example of “It wasn’t in the park that the
hippie touched the debutante.” HAM’s encoding of this is equivalent to “At some
place the hippie touched the debutante and it is false that the place was the park.” One
presupposes that the episode occurred and then denies that its location was in the
park. Beside its intuitive plausibility, there is some direct psychological evidence for
the distinction between given information (the presuppositions) and the new informa-
tion in a variety of sentence types (Haviland and Clark, 1974).
Verification vs. recognition memory. Recognition memory experiments and verifica-
tion experiments appear similar in that both involve two alternative answers that at
first glance might appear to be equivalent. Anderson and Bower implicitly regard them
as equivalent, probably because their verification experiments were essentially recogni-
tion memory experiments. However, in general, verification (judging truth or falsity)
is not equivalent to recognition (judging occurrence), though there is probably a close
relation. In a verification experiment, one might present two contradictory sentences
with subjects instructed to go by the more recent sentence in determining truth. In
that case, the answer “true” and the answer “yes” (sentence has occurred) would
have different logical determination. It is even less appropriate to identify “false”
with “no,” since “false” means that the probe is contradictory to stored information,
while “no” means something equivalent to “not stored in memory.” Consideration of
this issue is the sort of thing that leads one to speculate about the utility of a three-
valued logic where a proposition can be “true,” “false,” or “undetermined.”
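The contrast can be made concrete with a toy verifier that returns a third value when memory neither confirms nor contradicts the probe. This is entirely my own illustration; the stored facts are arbitrary examples.

```python
# Toy contrast between recognition ("was this stored?") and three-valued
# verification ("true", "false", or "undetermined"). Entirely illustrative.

stored = {("hippie", "touched", "debutante")}
negated = {("hippie", "touched", "policeman")}   # stored as "it is false that P"

def recognize(probe):
    """Occurrence judgment: yes iff the probe itself was stored."""
    return probe in stored

def verify(probe):
    """Truth judgment against all stored information."""
    if probe in stored:
        return "true"
    if probe in negated:
        return "false"           # contradicted by stored information
    return "undetermined"        # neither stored nor contradicted

probe = ("hippie", "touched", "policeman")
assert recognize(probe) is False                   # "no": not stored as such
assert verify(probe) == "false"                    # "false": contradicted
assert verify(("hippie", "kissed", "debutante")) == "undetermined"
```

The second and third assertions exhibit exactly the asymmetry in the text: “no” and “false” come apart, and some probes are neither.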
Opposites
Consider opposites both on binary scales, such as “in-out,” “open-closed,” “same-
different,” etc., where one term implies the negation of the other and as ends of
Set Inclusion
An example of such inferences is: “All pets are bothersome, Fido is a pet, therefore
Fido is bothersome.” As Anderson and Bower discuss, such set inclusion inferences
could be programmed into HAM, but they appear to be rather awkward to achieve. By
contrast, Quillian’s (1968, 1969) much more limited model of semantic memory
achieves set inclusion inferences quite simply and automatically. In my opinion, this
deficiency of HAM is a critical one to focus on in modifying the theory.
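Quillian-style set inclusion amounts to following superset links upward until a property is found. A sketch under my own assumptions (the `isa` table and properties are a toy example, not Quillian's actual network):

```python
# Sketch of Quillian-style set-inclusion inference: a property holds of an
# instance if it is attached anywhere along the chain of superset links.
# The relations below are a toy example, not Quillian's actual network.

isa = {"Fido": "pet", "pet": "animal"}        # superset links
properties = {"pet": {"bothersome"}}          # properties attached to sets

def has_property(node, prop):
    """Climb the isa chain, checking each level for the property."""
    while node is not None:
        if prop in properties.get(node, set()):
            return True
        node = isa.get(node)                  # move one superset link upward
    return False

# "All pets are bothersome; Fido is a pet; therefore Fido is bothersome."
assert has_property("Fido", "bothersome")
assert not has_property("Fido", "feathered")
```

The inference falls out of ordinary link traversal with no special rule, which is the simplicity the review credits to Quillian's model.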
Disjunction (Or)
For example, “If John drove Mary home, then John or Keith drove Mary home.”
This does not seem like a very useful example of disjunctive inference, but human
beings clearly have this capability.
SUBPROBLEMS OF SEMANTIC MEMORY 263
Expansion
This is the important class of inferences most cogently identified by Schank (1972) and perhaps best described in the paper by Schank and Rieger (1973). My favorite example of such inferences is as follows: From "John fell overboard," we are capable of inferring (perhaps incorrectly, but that is no matter) that "John was in a boat; John is now in the water; John is wet; John is swimming or otherwise struggling in the water," etc.
Schank handles such inferences by means of encoding, at the time of input, an extended meaning for each proposition in terms of a primitive set of atomic concepts. I think Anderson and Bower are correct to disagree with Schank's approach to this problem. Anderson and Bower claim that many of these inferences are not made at the time of input, but only at the time of retrieval. They even cite some verification latency data in support of their prediction, though I think such experiments should be done using speed-accuracy tradeoff methodology. In any event, my intuition agrees with Anderson and Bower that many inferences are made at the time of retrieval. However, HAM has no mechanism for achieving such inferences. Since human beings make these expansions (relational and predicate inferences) easily and frequently in understanding written or spoken language, an explanation for this inferential capacity is another primary unsolved problem.
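Retrieval-time expansion of the sort Anderson and Bower favor might be pictured like this (a speculative sketch with an invented rule format, not their mechanism): the proposition is stored bare at input, and inference rules fire only when it is probed.

```python
# Hypothetical retrieval-time expansion: the input proposition is stored
# unexpanded; inference rules are applied only when the proposition is queried.

memory = {("John", "fell", "overboard")}  # stored at input, no expansion yet

# Each rule maps a matched pattern to plausible (defeasible) inferences.
# "?x" is a variable bound during matching.
rules = [
    (("?x", "fell", "overboard"),
     [("?x", "was-in", "boat"), ("?x", "is-in", "water"), ("?x", "is", "wet")]),
]

def expand(prop):
    """Apply all matching rules to one proposition at retrieval time."""
    inferred = []
    for pattern, consequences in rules:
        bindings = {}
        # Match term by term, binding variables consistently as we go.
        if all(p == q or (p.startswith("?") and bindings.setdefault(p, q) == q)
               for p, q in zip(pattern, prop)):
            for c in consequences:
                inferred.append(tuple(bindings.get(t, t) for t in c))
    return inferred

print(expand(("John", "fell", "overboard")))
# [('John', 'was-in', 'boat'), ('John', 'is-in', 'water'), ('John', 'is', 'wet')]
```

On Schank's alternative, this expansion would instead run once at encoding time and its output would be stored; the two schemes make different predictions about retrieval latency, which is why the speed-accuracy tradeoff methodology matters.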
Transitive Relations (Linear Spatial Orders)
These inferences are of the form: "A is greater than B, and B is greater than C; therefore, A is greater than C." HAM can encode such linear orderings in an economical way by storing the propositions "A is greater than B" and "B is greater than C." HAM then can use a reasonably simple, but ad hoc, inference rule for the class of transitive relations.
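The storage scheme and inference rule can be sketched as follows (a hypothetical illustration, not HAM's actual representation): only adjacent pairs are stored, and remote pairs are verified by chaining through the stored links.

```python
# Hypothetical sketch of economical storage plus a transitivity rule:
# store only adjacent "greater than" facts; verify remote pairs by chaining.

greater = {("A", "B"), ("B", "C"), ("C", "D")}

def verify_greater(x, y, facts):
    """Return the number of stored links chained to verify x > y,
    or None if no chain exists.  More links = longer predicted latency."""
    frontier, steps = {x}, 0
    while frontier:
        steps += 1
        # Follow every stored link out of the current frontier.
        frontier = {b for (a, b) in facts if a in frontier}
        if y in frontier:
            return steps
    return None

print(verify_greater("A", "B", greater))  # 1 link: stored directly
print(verify_greater("A", "D", greater))  # 3 links: remote pair needs a chain
```

By this rule a remote pair costs more chained links than an adjacent pair, which is exactly the latency prediction that the distance-effect data discussed next run against.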
One of the problems of this theory is that it makes the prediction that verification of a proposition such as "An elephant is bigger than a termite" would be slower than verification of a proposition such as "An elephant is bigger than a tiger." The proposition "elephant is bigger than tiger" is presumed to be stored directly, but the proposition "elephant is bigger than termite" must be verified by applying the rule to a chain of transitive relations. Anderson and Bower discuss some evidence collected by Potts (1972) which is basically contradictory to this prediction and attempt to explain away Potts' results. The paradigms used by Potts (1972, 1973) may not yield evidence clearly contradictory to the prediction that decisions concerning close pairs are made more quickly than decisions concerning remote pairs in a linear ordering. However, the results of Moyer (1973) suggest that there is a class of linear ordering inferences which possesses the property that decisions are faster for remote pairs than for close pairs. Such results are incompatible either with Anderson and Bower's assumptions regarding the representation of linear orderings or with their inference and retrieval assumptions.
GENERATION
¹ Recent research by Deese (1975) indicates that most of the spoken speech of educated adults
consists of grammatical sentences, contrary to earlier intuitive assertions. Thus, performance
appears to be much closer to capacity than previously supposed.
It has always been an attractive hypothesis that the same syntactic knowledge used in
speech production is also used in speech perception. Personally, I find the control
ordering characteristic of augmented transition networks to be far more natural for
speech generation than for speech recognition and reading. In any event, augmented
transition networks are quite elegant and basically consistent with an associative
memory structure.
IMAGERY
Anderson and Bower use the ideas of Winston (1970) that images are represented
not as analog pictures, but rather as sets of propositions that describe various aspects
of the image. The propositional representation of images is a very exciting idea which
currently seems open to extensive theoretical development and ultimately experimental
testing. The analog picture notion has produced some interesting experiments but has
so far defied precise theoretical statement (Pylyshyn, 1973).
Anderson and Bower specifically reject the dual encoding position that there is both
a verbal semantic memory and a spatial image memory, and instead opt for a single
unified modality of semantic memory in which the construction of images constitutes
only a greater “unpacking” of visual detail from higher order nodes in semantic
memory. This is an interesting hypothesis, but I do not see how it will wash with all
the evidence from split brain studies, lateralized brain damage, temporary immobiliza-
tion of one hemisphere, and all the other evidence indicating that verbal memory is a
separate modality from spatial image memory.
There are several different issues here. First, there is the question of whether the
representation of images is propositional or analog-pictorial. Second, even if image
memory is propositional like semantic memory, there is the question of whether the
propositional structures are equivalent in semantic and image memory. Finally, there
is the issue of whether these are two anatomically and psychologically separate
representation modalities in which there might be dual representation.
One possibility is to consider that image memory is propositional, but that the nature
of the propositional structure for the two modalities and/or the nature of the inference
processes that accompany these structures are quite different. The nature of this
difference should probably be that semantic memory has greater flexibility for repre-
sentation of various types of information, including spatial information, but that
image memory is particularly well suited for encoding, retrieving, and deriving
inferences about spatial relations.
I have a strong hypothesis concerning what the difference might be. Semantic
memory has an extensive degree of hierarchical structure for the storage of propositions
that are embedded in other propositions. Furthermore, within-propositional encoding
probably has some hierarchical structure. In general, this sort of complex embedding
capability is quite essential to representation of human knowledge.
However, perhaps image representation has no propositional embedding and uses
a one-level symmetric relational grammar. Thus, in image memory a scene is encoded as a set of relations with arguments, R_i(..., x_j, ...), where the arguments must be atomic elements, not propositions containing relations. There is clearly a massive degree of
interconnection of various elements of the scene, since each object or part of an object
has multiple relationships to a variety of other elements. However, it appears to me
that scene description does not require embedding. I would appreciate a clear counterexample.
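The proposed one-level grammar can be sketched as follows (my illustration; the relations and scene are invented). The encoding is a flat set of relations R_i(..., x_j, ...) whose arguments are required to be atomic:

```python
# Sketch of a one-level symmetric relational grammar: a scene is a flat
# set of relations over atomic elements only, with no embedded propositions.

scene = {
    ("on",      ("cup", "table")),
    ("left-of", ("lamp", "cup")),
    ("part-of", ("handle", "cup")),
}

def is_one_level(encoding):
    """Check the no-embedding constraint: every argument must be atomic
    (a string here), never itself a relation/proposition."""
    return all(isinstance(arg, str)
               for _, args in encoding for arg in args)

# An embedded proposition ("John believes the cup is on the table") violates
# the constraint: one argument is itself a relation.
embedded = {("believes", ("John", ("on", ("cup", "table"))))}

print(is_one_level(scene))     # True: legal image encoding under the hypothesis
print(is_one_level(embedded))  # False: would require semantic memory instead
```

The hypothesis, then, is that anything failing this flatness check must be carried by verbal semantic memory, while image memory handles the densely interconnected but unembedded remainder.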
If no counterexample is forthcoming, this is an attractive idea for the difference
between semantic memory and image memory, since a representation system that
avoided embedding would appear to facilitate information retrieval. Even if there is a
need for a certain degree of embedding for adequate scene description, it is conceivable
that such embedding is accomplished by the verbal semantic memory with the rest of
the information being stored in image memory. This begins to slip into a position
similar to that held by Anderson and Bower, but preserves the claim that there is
something basically different about image memory from semantic memory, besides the
amount of visual detail.
For the moment it seems like the propositional approach to representation of image
memory is more productive than the analog-picture approach. However, Anderson and
Bower admit that there are certain phenomena, such as Stromeyer's (1970) eidetiker and the Shepard and Metzler (1971) mental rotation phenomena, that appear on the
surface to be more consistent with an analog picture approach. My guess is that we
will soon have a number of theories of image memory, with at least a few having
some analog-picture characteristics, even if the highest level image representations are
propositional.
CONCLUSION
REFERENCES
DEESE, J. Thought into speech: statistical studies of the distribution of certain features in
spontaneous speech. Paper presented at the Psychonomic Society Meeting, November, 1975.
ESTES, W. K. An associative basis for coding and organization in memory. In A. W. Melton &
E. Martin (Eds.), Coding processes in human memory. Washington, D.C.: Winston, 1972.
Pp. 161-190.
FILLMORE, C. J. The case for case. In E. Bach & R. T. Harms (Eds.), Universals in linguistic theory. New York: Holt, Rinehart, & Winston, 1968. Pp. 1-88.
HAVILAND, S. E., & CLARK, H. H. What’s new? Acquiring new information as a process in
comprehension. Journal of Verbal Learning and Verbal Behavior, 1974, 13, 512-521.
KAPLAN, R. M. On process models for sentence analysis. In D. A. Norman & D. E. Rumelhart (Eds.), Explorations in cognition. San Francisco: W. H. Freeman, 1975. Pp. 197-223.
MILLER, G. A. The magical number seven, plus or minus two: Some limits on our capacity for
processing information. Psychological Review, 1956, 63, 81-97.
MOYER, R. S. Comparing objects in memory: Evidence suggesting an internal psychophysics.
Perception and Psychophysics, 1973, 13, 180-184.
NORMAN, D. A., RUMELHART, D. E., & the LNR Research Group. Explorations in cognition.
San Francisco: W. H. Freeman, 1975.
PACHELLA, R. G. The interpretation of reaction time in information processing research. In
B. Kantowitz (Ed.), Human information processing: tutorials in performance and cognition.
Hillsdale, N. J.: Lawrence Erlbaum, 1974. Pp. 41-82.
POTTS, G. R. Information and processing strategies used in the encoding of linear ordering.
Journal of Verbal Learning and Verbal Behavior, 1972, 11, 727-740.
POTTS, G. R. Memory for redundant information. Memory and Cognition, 1973, 1, 467-470.
PYLYSHYN, Z. W. What the mind's eye tells the mind's brain: A critique of mental imagery. Psychological Bulletin, 1973, 80, 1-24.
QUILLIAN, M. R. Semantic memory. In M. Minsky (Ed.), Semantic information processing.
Cambridge, Mass.: M.I.T. Press, 1968.
QUILLIAN, M. R. The teachable language comprehender: A simulation program and theory of
language. Communications of the Association for Computing Machinery, 1969, 12, 459-476.
REED, A. V. Speed-accuracy tradeoff in recognition memory. Science, 1973, 181, 574-576.
REED, A. V. The time-course of recognition in human memory. Ph.D. dissertation, University of Oregon, August 1974.
RUMELHART, D. E., LINDSAY, P. H., & NORMAN, D. A. A process model for long-term memory.
In E. Tulving & W. Donaldson (Eds.), Organization of memory. New York: Academic Press,
1972. Pp. 197-246.
SCHANK, R. C. Conceptual dependency: A theory of natural-language understanding. Cognitive
Psychology, 1972, 3, 552-631.
SCHANK, R. C. Identification of conceptualizations underlying natural language. In R. C. Schank
& K. M. Colby, Computer models of thought and language. San Francisco: W. H. Freeman,
1973. Pp. 187-247.
SCHANK, R. C., & RIEGER, C. J. Inference and the computer understanding of natural language.
Stanford Artificial Intelligence Memo AIM-197, May, 1973.
SHEPARD, R. N., & METZLER, J. Mental rotation of three-dimensional objects. Science, 1971,
171, 701-703.
STEVENS, A. L., & RUMELHART, D. E. Errors in reading: An analysis using an augmented transition network model of grammar. In D. A. Norman & D. E. Rumelhart (Eds.), Explorations in cognition. San Francisco: W. H. Freeman, 1975. Pp. 224-263.