
COGNITIVE PSYCHOLOGY 6, 293-323 (1974)

Toward a Theory of Automatic Information Processing in Reading¹

DAVID LABERGE AND S. JAY SAMUELS

University of Minnesota

A model of information processing in reading is described in which visual information is transformed through a series of processing stages involving visual, phonological and episodic memory systems until it is finally comprehended in the semantic system. The processing which occurs at each stage is assumed to be learned and the degree of this learning is evaluated with respect to two criteria: accuracy and automaticity. At the accuracy level of performance, attention is assumed to be necessary for processing; at the automatic level it is not. Experimental procedures are described which attempt to measure the degree of automaticity achieved in perceptual and associative learning tasks. Factors which may influence the development of automaticity in reading are discussed.

Among the many skills in the repertoire of the average adult, reading
is probably one of the most complex. The journey taken by words from
their written form on the page to the eventual activation of their meaning
involves several stages of information processing. For the fluent reader,
this processing takes a very short time, only a fraction of a second. The
acquisition of the reading skill takes years, and there are many who do
not succeed in becoming fluent readers, even though they may have
quickly and easily mastered the skill of understanding speech.
During the execution of a complex skill, it is necessary to coordinate
many component processes within a very short period of time. If each
component process requires attention, performance of the complex skill
will be impossible, because the capacity of attention will be exceeded.
But if enough of the components and their coordinations can be processed
automatically, then the load on attention will be within tolerable limits
and the skill can be successfully performed. Therefore, one of the prime
issues in the study of a complex skill such as reading is to determine how
the processing of component subskills becomes automatic.

¹ This research was supported by a grant (HD-06730-01) to the authors from the National Institute of Child Health and Human Development, and in part by the Center for Research in Human Learning through National Science Foundation Grant GB-17590. Reprints may be requested from David LaBerge, Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455.
Copyright © 1974 by Academic Press, Inc. All rights of reproduction in any form reserved.
Our purpose in this paper is to present a model of the reading process which describes the main stages involved in transforming written patterns
into meanings and relates the attention mechanism to processing at each
of these stages. In addition, we will test the model against some experi-
mental findings which indicate that the role of attention changes during
advanced states of perceptual and associative learning.
This paper is divided into four sections in which we ( 1) briefly sum-
marize the current views of the attention mechanism in information
processing, (2) set forth a theory of automaticity in reading and evaluate
it against some data, (3) discuss factors which may influence the develop-
ment of automaticity, and (4) discuss some implications of the model
for research in reading instruction.

ATTENTION MECHANISMS IN INFORMATION PROCESSING


In view of the fact that the present model places heavy emphasis on
the role of attention in the component processes of reading, it may be
well to review briefly the way the concept has been used by researchers
in the recent past.
The properties of attention most frequently treated by investigators are
selectivity and capacity limitation. Posner and Boies (1971) list alertness
as a third component of attention, but this property has been investigated
mostly in vigilance tasks and less often in the sorts of information process-
ing tasks related closely to reading. The property of attention which has
generated the most theoretical controversy is its limited capacity. When
early dichotic listening experiments indicated that subjects select one ear
at a time for processing messages, Broadbent (1958) proposed a theory
which assumed that a filter is located close to the sensory surface. This
filter allows messages from only one ear at a time to get through. How-
ever, later experiments indicated that well-learned significant signals such
as one’s own name (Moray, 1959) managed to get processed by the un-
attended ear. This led Treisman (1964) to modify the Broadbent theory
and allow the filter to attenuate the signal instead of blocking it com-
pletely. In this way, significant, well-learned items could be processed
by the unattended ear. Deutsch and Deutsch (1963), on the other hand,
described a theory which rejected the placement of a selective filter prior
to the analysis or decoding of stimuli and instead placed the selection
mechanism ,at a point much later in the system, after the “importance”
or “pertinence” (Norman, 1968) of the stimulus has been determined.
In the present model, it is assumed that all well-learned stimuli are
processed upon presentation into an internal representation, or code,
regardless of where attention is directed at the time. In this regard, the
model is similar to the models of Deutsch and Deutsch and of Norman.
However, the present theory proposes in addition that attention can selectively activate codes at any level of the system, not only at the
deeper levels of meaning, but also at visual and auditory levels nearer
the sensory surfaces. The number of existing codes of any kind that can
be activated by attention at a given moment is sharply limited, probably
to one. But the number of codes which can be simultaneously activated
by outside stimuli independent of attention is assumed to be large, perhaps
unlimited. In short, it is assumed that we can only attend to one thing
at a time, but we may be able to process many things at a time so long
as no more than one requires attention.
It is this capability of automatic processing which we consider critical
for the successful operation of multicomponent, complex skills such as
reading. As visual words are processed through many stages en route to
meaningfulness, each stage is processed automatically. In addition, the
transitions from stage to stage must be automatic as well. Sometimes a
stage may begin processing before an earlier one finishes its processing.
Examples of these interrelations between stages of processing are treated
in the research of Sternberg ( 1969) and Clark and Chase ( 1972). In the
skill of basketball, ball-handling by the experienced player is regarded
as automatic. But ball-handling consists of subskills such as dribbling,
passing, and catching, so each of these must be automatic and the
transitions between them must be automatic as well. Therefore, when
one describes a skill at the macrolevel as being automatic, it follows that
the subskills at the microlevel and their interrelations must also be
automatic.
Our criterion for deciding when a skill or subskill is automatic is that
it can complete its processing while attention is directed elsewhere. It
is especially important in such tests that one take careful account of all
attention shifts. On many occasions, people appear to be giving attention
to two or more things at the same time, when, in fact, they are shifting
attention rapidly between the tasks. An example is the cocktail party
phenomenon in which a person may appear to be following two con-
versations at the same time, but in reality he is alternating his attention.
The way we attempt to manage this problem in the laboratory is to test
automaticity with procedures that control the momentary attention state
of the subject ( LaBerge, Van Gelder, & Yellott, 1970). Typically, we
present a cue just prior to the stimulus the subject is to identify, and this
induces a state of preparation for that particular stimulus. Most of the
time he receives the expected stimulus, but occasionally he receives
instead a test stimulus unrelated to the cue. The response to the un-
expected test stimulus requires that the subject switch his attention as
well as process the test stimulus. If the processing of the test stimulus
requires attention, then the response latency will include both time for
stimulus processing and time for attention switching. If, however, the
stimulus processing does not require attention (i.e., it is automatic), then
the response latency will not include stimulus processing time, assuming
that the stimulus processing is completed by the time attention is
switched.
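To make the logic of this test concrete, the latency prediction can be written as a simple decomposition. The sketch below is ours, not part of the original analysis, and all component names and durations are hypothetical placeholders.

```python
# Illustrative sketch (hypothetical values): how response latency to an
# unexpected test stimulus decomposes under the two hypotheses.

SWITCH_TIME = 80      # ms, hypothetical attention-switching time
PROCESS_TIME = 50     # ms, hypothetical stimulus-processing time
RESPONSE_TIME = 300   # ms, hypothetical residual response component

def predicted_latency(automatic: bool) -> int:
    """Latency to an unexpected test stimulus.

    If processing is automatic, it is assumed to finish during the
    attention switch, so it adds no time; otherwise it adds serially.
    """
    if automatic:
        return SWITCH_TIME + RESPONSE_TIME
    return SWITCH_TIME + PROCESS_TIME + RESPONSE_TIME

print(predicted_latency(automatic=True))   # 380 ms
print(predicted_latency(automatic=False))  # 430 ms
```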

MODEL OF AUTOMATICITY IN READING


With these considerations of attention in mind, we turn to a description
of a model of automatic processing in reading. This model is based on
the assumption that the transformation of written stimuli into meanings
involves a sequence of stages of information processing (Posner et al.,
1972). Although the overall model has many stages and alternative routes
of information processing, we hope that the way it is put together will
permit us to isolate small portions of the model at a time for experimental
tests without doing violence to the model regarded as a whole. Our
strategy here is to capture the basic principles of automaticity in
perceptual and associative processing with simple examples drawn from
initial processing stages of reading and then indicate how these examples
generalize to more complex stages of reading.
We shall consider first the learning, or construction, of visual codes in
reading, which includes the perception of letters, spelling patterns, words,
and word groups. After presenting some relevant data, we will attempt
to detail the rest of the model, showing how the visual stage fits into the
larger picture. Then we will describe an experiment which attempts to
demonstrate the acquisition of automaticity in the kind of associative
learning utilized in the decoding and simple comprehension of words.

Model of Grapheme Learning


Now let us consider in some detail the learning of a perceptual code.
It is assumed that incoming information from the page is first analyzed
by detectors which are specialized in processing features such as lines,
angles, intersections, curvature, openness, etc., as well as relational
features such as left, right, up, down, etc. (Rumelhart, 1970). For present
purposes, it is not necessary that we stipulate the exact mapping of these
features onto properties of physical stimuli. In fact, to do so would
emphasize a punctate view of the feature detectors, whereas we wish to
provide for the possibility, following Gibson (1969), that relational
aspects may be important in this kind of learning. However, it should be
pointed out that it is sometimes difficult to define relational properties in
a clear way.
In Fig. 1 we present an abbreviated sketch of the role of visual perception in the reading model.

FIG. 1. Model of visual memory showing two states of perceptual coding of visual patterns. Arrows from the attention center (A) to solid-dot codes denote a two-way flow of excitation: attention can activate these codes and be activated (attracted) by them. Attention can activate open-dot codes but cannot be activated (attracted) by them.

In this schematic drawing, graphemic information enters from the left and is analyzed by feature detectors,
which in turn feed into letter codes. These codes activate spelling-pattern
codes, which then feed into word codes, and word codes may sometimes
give rise to word-group codes. Some features activate spelling patterns
and words directly; these features detect characteristics
such as word shape and spelling-pattern shape. This hierarchical coding
scheme draws heavily on the notions of Gibson ( 1971), Bower ( 1972),
Johnson ( 1972), and others, but particularly on the model described by
Estes (1972).
The role of the attention center of Fig. 1 is assumed to be critical early
in the learning of a graphemic code, but expendable in later states of
learning. Arrows from the attention center indicate a two-way flow of
information between the center and each visual code in long-term
memory. Every visual code in long-term memory is represented by an open or a filled circle. The open circles indicate codes that are activated only
with the assistance of attention. The filled circles indicate codes which
may be activated without attention. The lines leading from the visual
codes to attention represent the flow of information that occurs when
a code has been activated or “triggered” by stimuli. The lines leading
from the attention center to visual codes represent the activation of these
units by attention. When a stimulus occurs and activates a code, a signal
is sent to the attention center, which can “attract” attention to that code
unit in the form of additional activation. Only the well-learned, filled-
circle codes can attract attention. If attention is directed elsewhere at
the time a visual code is activated by external stimulation, attention will
not shift its activation to that visual code, unless the stimulus is intense
or unless the code automatically activates autonomic responses or other
systems which mediate the “importance” (Deutsch & Deutsch, 1963) or
“pertinence” (Norman, 1968) of that code to the attention center.
The many double arrows emanating from the attention center, therefore,
indicate potential lines of information flow to every well-learned code in
visual long-term memory. At any given moment, however, the attention
center activates only one code. This characteristic of the model represents
the limited-capacity property of attention.
As conceptualized here, attentional activation may have three different
effects on information processing. First of all, it can assist in the con-
struction of a new code by activating subordinate input codes. For ex-
ample, in Fig. 1, successive activation of features f3 and f4 is necessary to synthesize letter code l2. Secondly, activation of a code prior to the
presentation of its corresponding stimulus is assumed to increase the rate
of processing when that stimulus is presented (LaBerge et al., 1970).
Finally, activation of a code can arouse other codes to which it has been
associated, as will be described later in connection with Fig. 3.
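A minimal sketch of the two kinds of links just described may help fix ideas; the node names, the single-attended-code restriction, and the activation rule below are our illustrative assumptions rather than the authors' formal model.

```python
# Illustrative sketch of Fig. 1's mechanics (hypothetical node names).
# Solid links fire without attention; dashed links need attention.

links = {
    # (source, target): needs_attention?
    ("f1", "l1"): False,   # familiar letter: features feed l1 automatically
    ("f2", "l1"): False,
    ("f3", "l2"): True,    # unfamiliar letter: features need attention to form l2
    ("f4", "l2"): True,
    ("l1", "sp1"): False,  # letters feed a spelling pattern, then a word
    ("sp1", "w1"): False,
}

def activate(stimulated, attended=None):
    """Spread activation from stimulated codes; at most one code may be attended."""
    active = set(stimulated)
    changed = True
    while changed:
        changed = False
        for (src, dst), needs_attention in links.items():
            if src in active and dst not in active:
                if not needs_attention or dst == attended:
                    active.add(dst)
                    changed = True
    return active

# Familiar pattern: letter, spelling-pattern, and word codes arise automatically.
print(sorted(activate({"f1", "f2"})))                 # ['f1', 'f2', 'l1', 'sp1', 'w1']
# Unfamiliar pattern: l2 arises only when attention is directed to it.
print(sorted(activate({"f3", "f4"})))                 # ['f3', 'f4']
print(sorted(activate({"f3", "f4"}, attended="l2")))  # ['f3', 'f4', 'l2']
```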
Some of the most common patterns we learn to recognize are letters,
which are represented in Fig. 1 as l1, l2, etc. We assume that the first
stage of this learning requires the selection of the subset of appropriate
features from the larger set of features which are activated by incoming
physical stimuli. For example, assume that a child is learning to discrimi-
nate the letters t and h. The length of the vertical line is not relevant to
the discrimination of these two letters. Instead he must note the short
horizontal cross of the t and the concave loop in the h. These are the
distinctive features of these letters when considered against each other.
In the model, we represent the selection of features of a given letter by
the lines leading from particular features to a particular letter. In this
example, the length of the vertical line is an irrelevant feature, but when
these two letters are compared with other letters, for example the letter n,
the length of the vertical line becomes a relevant feature. We assume
that this kind of adjustment in feature selection continues as the rest of
the alphabet is presented to the child. One feature that seems to be ir-
relevant for all letters is thickness of line. It would appear, then, that
many features are selected for a given letter, and in many cases letters
share features in common. Therefore, the relationship between the feature and letter code levels shown in Fig. 1 is somewhat simplified for economy
of illustration. Figure 1 shows only two lines leading from features into
a given letter; typically a letter is coded from more than two feature
detectors, and a given feature may feed into more than two letters.
As a first stage of perceptual learning, selection of relevant features is
similar to the initial stages of concept learning tasks (e.g., Trabasso &
Bower, 1968) in which the subject searches for the relevant dimension
before he selects a response. The rate of learning to select the appropriate
features of a pattern may be quite slow the first time a child is given
letters to discriminate. However, after a child has experienced several
discrimination tasks, he may develop strategies of visual search which
permit him to move through this first stage of perceptual learning at an
increasingly rapid rate.
In the second stage of perceptual learning, as conceived here, the sub-
ject must construct a letter code from the relevant features, a process
which requires attention. By rapid scanning of the individual feature
detectors, perhaps along with some application of Gestalt principles of
organization (e.g., proximity and similarity), a higher-order unit is
formed. If the pattern has too many features, organization into a unit
code might not be manageable. This would mean that when the letter
is itself organized into a superordinate code, it would consist of several
components, instead of one unit. However, when the features do permit
organization into a unitary code, this code is of a short-term nature at
first and is quickly lost when the eye shifts to other patterns, or when
attention activates another visual code. But every time the subject
organizes the features into that particular letter code, some trace of this
organization between features and letter code is laid down. The dashed
lines in Fig. 1 between features and a letter represent the early state of
establishment of these traces, and the solid lines represent later states of
trace consolidation.
In the early trials of learning we assume that attention activation must
be added to external stimulation of feature detectors to produce
organization of the letters into a unit. In the later trials, we assume that
features can feed into letter codes without attentional activation, in other
words, that the stimulus can be processed into a letter code automatically.
For example, contrast the perception of the familiar letter a with the unfamiliar Greek letter γ. Let us assume that the visual stimulus, a, first activates the feature detectors f1 and f2. These features automatically activate the letter code l1, which corresponds to the letter a. When the Greek letter, γ, is presented, assume that features f3 and f4 are activated. However, these features do not excite the letter code l2 by themselves.
Nevertheless, when this unfamiliar letter is presented, the feature detectors f3 and f4 induce the attention center to switch attention activation to themselves, because they are already linked to the attention center (by learning or heredity). The resultant scanning or successive activation of these features by attention produces sufficient additional activation to organize and arouse the letter code l2. Were the subject to activate and not organize the features, then the Greek letter would not be perceived as a unit but merely as a set of features, f3 and f4.
If the subject is induced to organize the separate features into a unit when it is presented to him, the lines linking features f3 and f4 with l2 will presumably become strengthened, until after many such experiences they eventually become as strong as those lines linking features f1 and f2 with letter code l1, which represents a highly familiar pattern such as the letter a. When this is accomplished, the Greek letter, γ, can be perceived as a unit without requiring attention to the scanning of its component features.
When this unitization becomes automatic, there is nothing to prevent
the subject from exercising the option of attending to the features of the
Greek letter, much as he can choose to pay attention to the curved lines
in the familiar letter a if he chooses. This optional attentional activation
at either the feature level or the letter level is implied in the model by
the placement of dots at both feature detectors and letter codes.
At this point it would appear appropriate to mention that there may
exist another stage of learning located between feature selection and
the unitizing stages of perceptual learning. Quite possibly the subject may
learn to scan features more rapidly with practice, and eventually the
scanning itself may become automatic. Experiments which measure the
learning of patterns by shortened reaction time in matching tasks or by
increased accuracy in tachistoscopic exposures would then be revealing
the learning of a scanning path for features and not necessarily the gradual
formation of a unit. One way to test for feature scanning as opposed to
unit formation might be to estimate how much short-term memory
capacity is taken up when a letter is presented. For example, to identify
each letter by a series of feature tests may require about four or five
binary decisions (Smith, 1971), implying that each letter is represented by as many components. Even if a subject stores only three or four letters visually in short-term memory, this means he would have to be storing 12 to 20 feature chunks, which seems unlikely given the limits set by Miller
(1956) at seven plus or minus two.
One might be tempted to move the argument up one level in the visual
hierarchy and maintain that letters are the visual units normally coded
in reading. This would leave the formation of higher-order units to other
systems such as the phonological system. This position is close to the one
taken by Gough ( 1972), who makes a strong attempt to reconcile letter-
by-letter visual scanning with the apparently high rates of word process-
ing by fluent readers.
One could even move the argument up another level to consider spelling
patterns as the typical units of visual perception in reading, a position
preferred by Gibson ( 1971), although she maintains that these units must
eventually be reorganized into still higher-order units.
The critical point being made here is that automaticity in processing
graphemic material may not necessarily mean that unitizing has taken
place. Scanning pathways may have been learned to the degree that they
can be run off automatically and rapidly, whatever the size of the visual
code unit involved. The present model as depicted in Fig. 1 adapts itself
to the view that a letter may be a cluster of discrete features which are
scanned automatically. One simply equates the symbol l2 with the term (f3, f4), to indicate that the code at the letter level is a cluster of feature
units. The interpretation of automaticity is the same. For the dashed lines
linking features with letters, the features cannot be adequately scanned
without the services of attention; for the solid lines, the features are
scanned automatically. For present purposes of exposition, however, we
find it more convenient to refer to letters, spelling patterns, and words as
unit codes, but we hope that the reader will keep in mind that there is
an alternative view of what it is that is being automatized in perceptual
learning of this kind.
Before extending the model to other stages of processing, such as
sounding letters, spelling patterns, and words or comprehension of words,
we will describe briefly an experiment which attempted to measure
automaticity of perception and use it as an indicator of amount of per-
ceptual learning of a graphemic pattern.
Indicators of automatic perceptual processing. One way to test recog-
nition of a letter is to present two letters simultaneously and ask the
subject to indicate if they match or not (Posner & Mitchell, 1967). In
order to determine whether a person can automatically recognize a letter
pattern, we must present a pair of patterns at a moment when he is not
expecting them. The way this was done in a recent study (LaBerge,
1973b) was to induce the subject to expect a letter, e.g., the letter a, by
presenting it first as a cue in a successive matching task. If the letter
which followed the cue was also an a, he was to press a button, otherwise
not. Occasionally, following the single letter cue a, the subject was given
a pair of letters other than the letter expected, e.g., the stimulus (b b).
If these letters matched, he was to press the button, otherwise not, regard-
less of what the cue was on that trial. In terms of Fig. 1, the state of the
subject at the moment he expects the letter a may be represented by the attention arrow activating l1, which we shall assume corresponds to the letter code a. The perceptual analyzers are primed to process the stimulus a and when it occurs, the speed of recognition should be increased. However, when a pair of different letters, (b b), is presented, corresponding to l4, for example, the attention arrow leading to l4 now becomes activated and the arrow to l1 deactivated. The time required for this change is often referred to as switching time and may be as large as 80 msec in some cases (LaBerge, 1973a). The important prediction by the model is that familiar letters corresponding to l4 will have already processed their features into the letter code l4 by the time attention is switched from l1 to l4, whereas unfamiliar letters such as l5 will not have achieved this capability of processing their features into l5 before attention is switched to l5.
Therefore our indicator of automaticity in the perception of a letter
pattern is the extra time it takes to perceptually process a letter once
attention has been shifted to that letter. If this time is negligible, then we
conclude that the letter code is activated automatically from external
stimulation of its features before attention was switched to it. If this
time is substantial, then we conclude that attention was needed to
synthesize the features into the letter code. Of course we do not have
direct means of assessing the amount of attention time involved in
perceptual coding. However, we do have good estimates of the total
time it takes to code and match highly familiar letters in these tasks,
which for adults we assume must be quite automatic by now. Using
familiar letters as controls, we can measure the differences between match
latencies for unfamiliar and familiar letters and use this as an estimate
of the extra attention time required to process unfamiliar letters. Then,
as further training is given, we may note the convergence of the latencies
of the unfamiliar to the familiar letters and use this as an indicator of
perceptual learning.
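To make this measurement logic concrete, the sketch below computes the unfamiliar-minus-familiar latency difference as an automaticity index; the latency values are invented for illustration (only the initial 48-msec difference echoes the result reported below).

```python
# Hypothetical mean match latencies (ms) across days of practice.
familiar = [520, 518, 521, 519, 520]     # already automatic: flat baseline
unfamiliar = [568, 550, 538, 528, 523]   # converges toward the baseline

# Extra attention time attributed to non-automatic perceptual coding.
extra_attention = [u - f for u, f in zip(unfamiliar, familiar)]
print(extra_attention)  # [48, 32, 17, 9, 3] -> convergence indicates perceptual learning
```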
In the experiment by LaBerge (1973b), the unfamiliar letters were four letter-like characters not found in the alphabet, and the familiar letters used as controls were [b d p q]. Other groups of letter patterns, e.g., [a g n s], were used as cues to focus the subject's attention at the moment a pair of test letters was given according to the procedure just described. Referring to Fig. 1, a letter from the familiar test group could be represented by a letter code such as l4, and a letter from the unfamiliar test group by l5. A letter from the group [a g n s] could be represented by l1, which represents the momentary
focus of attention. We expected that latencies of unfamiliar test letter
matches would be longer than the familiar letter matches at first, but
we expected that the amount of the difference would decrease with
practice if perceptual learning were taking place.
FIG. 2. Mean latency and percent errors of matching responses to unfamiliar and
familiar letter patterns.

The results from the 16 college-age subjects are shown in Fig. 2. The
initial difference in latency between unfamiliar and familiar letters was
48 msec and the difference clearly decreased over the next four days. In
terms of the model in Fig. 1, we would say that the dashed lines between the features and l5 were strengthened over days and approached the automatic level of learning of the lines connecting the features with l4.
The finding that unfamiliar letters improved with practice more than
did familiar letters offers support for the hypothesis that something is
being learned about the unfamiliar letters over the days of training.
Evidence that subjects are learning automatic processing of the unfamiliar
letters is supported by a special testing condition presented to another
group of 16 subjects. In this condition, the familiar and unfamiliar pat-
terns were presented both as cues and target stimuli so that we could
assess the time taken to detect the letter when the subject expected that
letter. In terms of the model in Fig. 1, we assume for the unfamiliar let-
ter that the attention arrow to l5 is activated at the time the letter is presented. Similarly, when a familiar letter, l4, is cued, the attention arrow is focused on l4 in preparation for that letter to be presented.
A comparison of latencies of these successive matches showed that the
time to make an unfamiliar match equals the time to make a familiar
match. This means that under conditions when the subject is attending
to these letters, differences between perceptual learning of letters are
not revealed. Only under conditions when the subject is attending else-
where at the moment when the test letter is presented do these differences
emerge.
Taken together, the data from these two conditions strongly suggest
that what is being learned over days is a perceptual process that operates
without attention, namely an automatic perceptual process. Whether the
process be a unitizing one or a quick scanning of features, or perhaps something else, is not decided by these data. The main conclusion is
that what is being improved with practice is automaticity.
Apparently, acquiring automaticity is a slow process in contrast to the
relatively quick rate of acquiring accuracy in paired associate learning
(Estes, 1970). For a five-year-old, one suspects that achievement of
automatic recognition of the 26 letters of the alphabet may indeed be a
slow learning process, assuming that the child is no better than the col-
lege adult at this task. It may be that the child can learn to distinguish
letters with accuracy with relatively few exposures, but it is costing him
a considerable amount of attention to do it. Apparently, considerably
greater amounts of exposure to the graphemes are necessary before the
child can carry out letter recognition automatically, a feat he must learn
to do if he is to acquire new skills involving combinations of these letters.
For other studies which support the hypothesis that visual processing
may occur without attention, the reader is referred to Eriksen and Spencer
(1969), Posner and Boies (1971), Egeth, Jonides, and Wall (1972), and
Shiffrin and Gardner (1972). An experiment by LaBerge, Samuels, and
Petersen (1973) treats perceptual learning of unfamiliar letter-like pat-
terns which are more complex than the ones described here with similar
results.

THEORETICAL RELATIONSHIPS BETWEEN VISUAL AND PHONOLOGICAL SYSTEMS

We turn now to a consideration of other processing systems which presumably operate on the inputs from visual codes. Of course the model
we are describing step-by-step is not considered to be complete at this
time. Rather we expect that it will have to be modified a good deal as
the appropriate experimental tests are made. However, we hope that the
present model will help clarify some of the locations of our ignorance and
point the way to the kinds of experimental and theoretical operations
most likely to remove that ignorance.
In Fig. 3 we describe the structure of the phonological memory system
and the more important lines of associative activation, both direct and
indirect, leading from the visual codes. Evidence that recognition of
visually presented words typically involves phonological recoding is given
by Rubenstein et al. ( 1971) and Wicklund and Katz ( 1970). A model
of the articulatory response system is also briefly sketched to represent
direct links between phonological memory and the overt articulation of
words. The structures in the visual memory system are abbreviated in
Fig. 3 for economy of exposition.
FIG. 3. Representation of associative links between codes in visual memory (VM), phonological memory (PM), episodic memory (EM), and the response system (RS). Attention is momentarily focused on a code in visual memory.

The phonological memory system is assumed to contain units closely related to acoustic and articulatory inputs. If we were to represent these systems separately, we would be strongly tempted to construct the
acoustic system in a fashion similar to the visual system, with features,
phonemes, syllables, and words structured in a hierarchy. The structure
of the articulatory system is roughly suggested within the response system
of Fig. 3, in the form of a hierarchy of response output nodes arranged in
a mirror image of the hierarchies for the sensory systems. For example,
to respond with a word, one gives attention to r(w1) which then automatically feeds into the syllabic units r(s1) and r(s2), and perhaps from
these into phonemes. For present purposes, we feel that we can trace the
flow of information from the visual system to the phonological system
without making specific assumptions about the precise relationships
between the acoustic and articulatory systems. Following Gibson ( 1971),
therefore, we lump these under the general heading of the phonological
system.
The input to the phonological memory system is assumed to come from
at least six sources: units in visual memory, response memory, semantic
memory, and episodic memory, as well as from auditory stimulation and
articulatory response feedback. Of course, additional activation can be
provided from the attention center to any well-learned unit in phono-
logical memory as is the case for units in visual memory. The sources of
input of main interest here are the codes of the visual system. Associations
between visual codes and their phonological counterparts are indicated
by the lines drawn between the visual and phonological systems. Solid
lines denote automatic associations; dashed lines denote associations that
require additional activation by attention to generate an association. For
example, a visually coded word v(w1), e.g., "basket," automatically activates its phonological associate, p(w1), e.g., /basket/, while another visual word code v(w2), e.g., "capstan," requires additional activation by attention before it can activate its phonological associate p(w2), e.g., /capstan/. Another way that the phonological code p(w1) can be activated by the visual system is by way of the component spelling patterns sp1 ("bas") and sp2 ("ket"), which may be associated with the phonological units p(sp1) (/bas/) and p(sp2) (/ket/). Once p(sp1) and p(sp2) are activated, they in turn activate and organize a blend into the phonological word unit, p(w1) (/basket/). This blend is accom-
plished by the two connecting lines presumably learned to automaticity
by a great deal of practice in hearing and speaking these two syllabic
components in the context of the word unit.
Thus we have specified two different locations in which the unitizing
of a word might take place, one in the visual system and the other in the
phonological system. For the experienced reader, the particular location
used is optional. If he is reading easy material at a fast pace, he may
select as visual units words or even word groups; if he is reading difficult
material at a slow pace, he may select spelling patterns and unitize these
into word units at the phonological level.
Exactly how these options are executed is a matter for speculation at
present. Our best estimate of the role of the attention activator during
fast reading is one in which no attention is given to the visual system,
and the highest visual unit available is the one that automatically activates
its corresponding phonological code. For slow reading, we suspect that
the attention arrow is directed to the visual system where smaller units
are given added activation, resulting in the activation of smaller phono-
logical units. These phonological units then are blended automatically
into larger phonological units.
The dashed lines in Fig. 3 leading to the episodic memory system
represent an indirect way that a visual code may activate a phonological
code. This memory system, labeled “episodic” by Tulving (1972), is
closely related to the temporal-contextual-information store of Shiffrin
and Geisler (1973). It contains codes of temporal and physical events
which can be organized with visual and phonological codes into a super-
ordinate code, indicated here by c1. These codes represent associations
that are in the very earliest stages of learning. The dashed lines connected
with the episodic code represent the fact that attention is required to
activate the code. With further practice, direct lines may be formed
between visual and phonological codes, for example the line joining v(w2) with p(w2). This link is represented by a dashed line to indicate
that additional activation by attention still is necessary for the association
to take place. The solid lines joining visual and phonological codes, of
course, represent well-learned associations that occur without attentional
activation. Of course, all three types of associations, episodic, nonauto-
matic direct, and automatic direct, are assumed to be at the accuracy
level.
The initial association between a new visual pattern and its phono-
logical response is considered to be a fast learning process (Estes, 1970).
It may not occur on the first trial, but when it does occur, it appears to
happen in an all-or-none manner. For this state of learning, progress is
indicated customarily by percent correct or percent errors. When the
subject has achieved a criterion of accurate performance, the visual code
still requires attention whenever retrieval occurs through the episodic
memory code or through a direct dashed-line connection, even if the
perceptual coding of the visual stimulus itself is automatic. Further train-
ing beyond the accuracy criterion must be provided if the association is
to occur without attention, represented by the solid lines. The letter-
naming experiment soon to be described will serve as an illustrative ex-
ample of the associative learning this model is intended to represent.
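As a rough illustration of these three states, the sketch below models a single visual-to-phonological association that passes from episodic mediation to a direct but attention-demanding link and finally to an automatic link; the state labels, thresholds, and practice rule are our own assumptions, not the authors'.

```python
# Illustrative sketch: three states of a visual-to-phonological association.
EPISODIC, DIRECT_ATTENTION, AUTOMATIC = "episodic", "direct+attention", "automatic"

class Association:
    def __init__(self):
        self.state = EPISODIC          # earliest learning: mediated by an episodic code
        self.practice = 0

    def retrieve(self, attention_available: bool) -> bool:
        """Return True if the phonological code is activated on this occasion."""
        if self.state == AUTOMATIC:
            return True                # no attention needed
        return attention_available     # episodic and direct links still need attention

    def practise(self, direct_threshold=5, automatic_threshold=50):
        """Each attended retrieval strengthens the link; thresholds are arbitrary."""
        self.practice += 1
        if self.practice >= automatic_threshold:
            self.state = AUTOMATIC
        elif self.practice >= direct_threshold:
            self.state = DIRECT_ATTENTION

assoc = Association()
for _ in range(60):
    assoc.practise()
print(assoc.state, assoc.retrieve(attention_available=False))  # automatic True
```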

THEORETICAL RELATIONSHIPS BETWEEN VISUAL, PHONOLOGICAL, AND SEMANTIC SYSTEMS

Once a visual word code makes contact with the phonological word
code in reading, we assume that the meaning of the word can be elicited
by means of a direct associative connection between the phonological
unit, p(w1), and the semantic meaning unit, m(w1), as shown in Fig. 4.
Most of the connections between phonological word codes and semantic
meaning codes have already been learned to automaticity through ex-
tensive experience with spoken communications. In fact, authors of
children’s books purposely select vocabularies in which words meet this
condition. This takes the attention off the processing of meaning and
frees it for decoding. However, for a child in the process of learning
meanings of words, we assume that the linkage between a heard word
and its meaning may be coded first in episodic memory. This is represented in Fig. 4 by the organization of p(w3) and m(w3) and events e1 and e2 into the episodic code c2.

FIG. 4. Representation of three states of associative learning between codes in visual memory (VM), phonological memory (PM), semantic memory (SM), and episodic memory (EM). Attention is momentarily focused on a code in episodic memory.

Additional exposures to a word along with activations of its meaning would begin to form a direct link between the phonological unit and its meaning, represented by the dashed line between p(w2) and m(w2). At these two states of learning, attention is needed to activate the association of a heard word into its meaning, but with enough practice, a word should elicit its meaning automatically, as illustrated by the solid line joining p(w1) with m(w1).
At this point we may mention that the association between the phono-
logical form of a word and its meaning may go in the other direction, so
that activation of a meaning unit could automatically excite a phono-
logical unit. However, we are not prepared to specify in any detail how
this is done. We simply wish to indicate that generating speech by
activation of semantic structures also appears to be automatic, at least in
the general sense in which we are using the term here.
We should note the possibility in the model that a visual word code
may be associated directly with a semantic meaning code (Bower, 1970;
Kolers, 1970). That is, a unit, v(w1), may activate its meaning, m(w1),
without mediation through the phonological system. The fact that we
can quickly recognize the difference in the meaning of such homonyms
as “two” versus “too” seems to illustrate this assumption.
Indicators of automatic associative processing. The way we are currently measuring the role of attention in associative processing is similar in principle to the method already described in this paper for testing
automaticity of perceptual recognition. Here again, latency serves as the
critical indicator, since we are interested in learning trends after the
accuracy criterion has been met. The fact that response latency of a
paired associate decreases considerably after accuracy has been achieved
has been well-established by Millward (1964), Suppes et al. (1966), Judd and Glaser (1969), and Hall (1972).
In a test of automatic associative processing used by LaBerge &
Samuels (1973), the subject is asked to name the letter he sees. In order
to strictly control the subject’s attention at the moment the letter appears,
we give him another task to perform and then insert the letter at a
moment when he does not expect it. Eight subjects observed pairs of
common words presented successively and pressed a button when the
second word matched the first word. Conditions were arranged so that
the first word of each pair prepared the subject’s attention for the fol-
lowing word. Occasionally, instead of presenting the same word for a
match, we presented a letter and asked the subject to name it aloud into
a microphone which activated a voice key. Since he expected to see a
particular word at that moment, we could test how much of the letter-
naming process was carried out before attention was shifted to the
letter. Of course, this test of automaticity required a control condition
with letters whose names were already at the automatic level of associ-
ative learning.
The two sets of letters were the same as the ones used in the perceptual
learning study. The familiar set was [b d p q] and the unfamiliar set was the same four letter-like characters not found in the alphabet. The names for the familiar set were bee, dee, pea, and cue, and
the names for the unfamiliar set were one, two, four, and five. The over-
all latency of naming a letter pattern presumably includes perceptual
coding time, association time, and residual response time. However, we
were interested in differences only in association time. After determining
that the residual-response component was equal for the familiar and un-
familiar letters, we had only to equate the familiar and unfamiliar letters
with respect to perceptual coding time. To do this we gave the subjects
preliminary training on perceptual matching of the unfamiliar patterns
until they were recognized as fast as familiar patterns. These matching
tests were given under automatic test conditions already described. When
the criterion had been met on matching tests, the subjects were given a
card for about two minutes on which were drawn the new patterns along
with their corresponding names. They then began a series of daily tests
of naming these new letters along with tests in which familiar letters were
named. After each day’s test block, intensive training blocks were given
in which a trial consisted of a small circle as a cue followed by one of the eight letters, which the subject named. Figure 5 shows the results of this experiment over 20 days of testing and training.

FIG. 5. Mean latency and percent errors of naming responses to unfamiliar and familiar letter patterns.
It is evident that the latency difference of a naming response to the
new and old letters is quite large at first and converges over days of
training. All eight subjects showed convergence. This convergence con-
tinues when accuracy appears to be stationary. Additional tests conducted
under conditions in which the subject was attending to the naming oper-
ation prior to the stimulus onset showed no difference in latencies of
familiar and unfamiliar letters. In view of these findings, we believe that
Fig. 5 provides an indication of the gradual learning of automatic naming
associations.
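The subtraction logic behind this procedure can be sketched as follows; the component durations are hypothetical and serve only to show how, once perceptual coding and residual response time are equated, any remaining naming-latency difference is attributed to association time.

```python
# Hypothetical latency components (ms) for naming a letter under the
# automaticity test, after perceptual coding has been equated by pretraining.
perceptual_coding = 100   # equal for familiar and unfamiliar after pretraining
residual_response = 250   # measured to be equal for both sets

association_familiar = 80     # automatic association
association_unfamiliar = 230  # still needs attention early in training

naming_familiar = perceptual_coding + association_familiar + residual_response
naming_unfamiliar = perceptual_coding + association_unfamiliar + residual_response

# With the other components equated, the naming-latency difference
# estimates the extra association (attention) time.
print(naming_unfamiliar - naming_familiar)  # 150 ms
```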
In Fig. 6 is shown a model of three levels of associative learning for
this experiment. Any one of the familiar letters may be designated as l4 and its name by n(l4); any of the unfamiliar letters may be designated by l5 and its name by n(l5). At first the subject may associate the unfamiliar letters with their names by some mnemonic strategy, rule, or image. This state of learning is represented by the lines which connect with the episodic code. Later, as learning progresses, a direct link may be formed. This is indicated by the type of line joining l5 with n(l5). Dashed lines, of course, indicate that attention must be focused on the letter to provide the additional activation needed to complete the association and excite the phonological name unit. The solid line joining the familiar letter l4 with its name n(l4) represents an automatic association, which allows the stimulus activation of l4 to then excite n(l4) while attention is directed elsewhere.
The results shown in Fig. 5 are consistent with this model of automatic association.

FIG. 6. Representation of three states of associative learning between codes in visual memory (VM), phonological memory (PM), and episodic memory (EM). Attention is momentarily focused on a code in visual memory. Key: e = temporal-spatial event code; c = episodic code; f = feature detector; l = letter code; n = letter-name code; filled dot = code activated without attention; open dot = code activated only with attention; A = momentary focus of attention; solid line = information flow without attention; dashed line = information flow only with attention.

However, at the end of 20 days of practice, the college subjects did not name unfamiliar letters as fast as they named the familiar
letters, a finding which leads us to conclude that the subjects were still
using some degree of attention to make the association. Apparently, it
would take a great many more days to bring letter naming of these
rather simple letters to the level of automaticity already achieved by the
familiar letters. We are tempted to generalize to classroom routines in
elementary schools in which letter naming is directly taught and tested
only up to the accuracy level. A child may be quite accurate in naming
or sounding the letters of the alphabet, but we may not know how much
attention it costs him to do it. This kind of information could be helpful
in predicting how easily he can manage new learning skills which build
on associations he has already learned.
We agree that higher-order reading skills are based more on sounding
letters than naming them. Had we instructed the subjects to sound the
letters instead of name them, we would regard the expected convergence
of latency of the unfamiliar sounds to the latency of the familiar sounds
as indicating the gradual development of automaticity in sounding letters.
In this case, we would designate the sound of a visual letter code l5 in Fig. 6 as p(l5) instead of n(l5), etc. We assume that the three states of learning to associate a name with a letter would generalize to the case of learning to sound that letter and to sounding spelling patterns and words as well.
Turning to the association of word sounds with word meanings illus-
trated in Fig. 4, it is possible to perform learning experiments using
indicators of automaticity of associating meanings in much the same way
as we did for associating names. The only major difference in procedure
is that instead of asking the subject to name a letter, we ask him to press
a button if the word is a member of a particular category of meaning
( Meyer, 1970).
General model of automaticity in reading. In Fig. 7 all the memory
systems relevant to this theory of reading are shown together. We may
use this sketch to trace some of the many alternative routes that a visually
presented word could take as it proceeds toward its goal of activating
meaning codes. A given route is defined here not only in terms of the
particular systemic code encountered along the way, but also in terms of
whether or not attention adds its activation to any of these codes.

FIG. 7. Representation of some of the many possible ways a visually presented word may be processed into meaning. The four major stages of processing shown here are visual memory (VM), phonological memory (PM), episodic memory (EM), and semantic memory (SM). Attention is momentarily focused on comprehension in SM, involving organization of meaning codes of two word groups.

A few of the possible optional processing routes may be described as follows:

Option 1: The graphemic stimulus is automatically coded into a visual word code v(w1), which automatically activates the meaning code m(w1).
An example is “bear” or “bare” or any very common word which is not
processed by Option 2.
Option 2: The graphemic stimulus is automatically coded into a visual
word code v(w2), which automatically activates the phonological code p(w2). This code then automatically excites the meaning code m(w2).
An example is any very common word which is not processed by Option 1.
Option 3: The graphemic stimulus is automatically coded into the
visual word-group code v(wg1). This code automatically activates the phonological word-group code p(wg1), which in turn activates automatically the meaning code of the word group m(wg1). An example
might be the words “beef stew” or “apres ski.”
Option 4: The graphemic stimulus is automatically coded into two
spelling patterns sp4 and sp5. These units activate the phonological codes p(sp4) and p(sp5). These two codes are blended with attention into the phonological word code p(w4), which activates with attention the episodic code c1. This code is then activated by attention to excite the meaning code m(w4). An example might be "Skylab," for those who
have had few experiences with the word.
Option 5: The graphemic stimulus is coded with attention into the
visual word code v(w5). Attention activates this code to excite the episodic code c2. When attention is shifted to c2, it generates the meaning code m(w5). An example is the name of a character in a Russian
novel which is too long to pronounce easily.
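A compact way to summarize the five routes is to list, for each option, the sequence of codes traversed and to flag the steps that require attention; the notation below simply mirrors the options above, and treating attention demand as a step count is our own simplification.

```python
# Illustrative summary of the five processing routes ("A:" marks a step needing attention).
routes = {
    1: ["v(w1)", "m(w1)"],
    2: ["v(w2)", "p(w2)", "m(w2)"],
    3: ["v(wg1)", "p(wg1)", "m(wg1)"],
    4: ["sp4+sp5", "p(sp4)+p(sp5)", "A:p(w4)", "A:c1", "A:m(w4)"],
    5: ["A:v(w5)", "A:c2", "m(w5)"],
}

def attention_demands(route):
    """Count the steps along a route that require attentional activation."""
    return sum(step.startswith("A:") for step in routes[route])

for option in routes:
    print(option, attention_demands(option))
# Options 1-3 place no demand on attention during decoding (fluent reading);
# Options 4 and 5 require attention at several steps (effortful reading).
```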
An act of comprehension is illustrated in Fig. 7 by the focusing of at-
tention on the organization of two word groups, one of which, m(wg1), has been automatically grouped, and the other, m(wg2), has required attention to be grouped. We assume that m(w2), m(w3), m(w4), and m(w5) can be organized to make sense to the subject only if he can manage to shift his attention activation quickly among these meaning codes to keep them simultaneously active. We are assuming that the process of organizing is promoted by fast scanning at the semantic level in much the same way that fast scanning of feature detectors promotes unitizing of features into new letter patterns at the visual level.
Options 1 and 2 illustrate what many consider the goal of fluent reading:
the reader can maintain his attention continuously on the meaning units
of semantic memory, while the decoding from visual to semantic systems
proceeds automatically. The rest of the examples serve to emphasize that
the reader often has the option of several different ways of processing a
given word. When he encounters a word he does not understand, his
attention may be shifted to the phonological level to read out the sound
for attempts at retrieval from episodic memory. At other times he may shift his attention to the visual level and attempt to associate spelling pat-
terns with phonological units, which are then blended into a word which
makes contact with meaning. When the decoding and comprehension
processes are automatic, reading appears to be “easy.” When they require
attention to complete their operations, reading seems to be “difficult.”
One could say that every time a word code requires attention we are
made aware of that aspect of the reading process. For example, when we
encounter a word that does not make sense, we may speak it and thereby
are momentarily aware of the sound of the words we are reading. Or if
the word does not sound right to us, we may examine its spelling patterns,
thereby becoming aware of its visual aspects. However, when reading
is flowing at its best, for example in reading a mystery novel in which the
vocabulary is very familiar, we can go along for many minutes imagin-
ing ourselves with the detective walking the streets of London, and
apparently we have not given a bit of attention to any of the decoding
processes that have been transforming marks on the page into the deeper
systems of comprehension.

DEVELOPMENT OF AUTOMATICITY
Throughout this paper we have stressed the importance of automaticity
in performance of fluent reading. Now we turn to a consideration of
ways to train reading subskills to automatic levels. Unfortunately, very
little systematic research has been directed specifically to this advanced
stage of learning. Reviews of studies of automatic activity (Keele, 1968;
Welford, lQ68; Posner & Keele, lQ69) deal mostly with automatic motor
tasks and, to our knowledge, there are no studies which systematically
compare training methods which facilitate the acquisition of automaticity
of verbal skills. Therefore, our remarks here will be speculative, although
we are currently putting forth efforts in the laboratory to shed light on
this problem.
First of all, we would agree with most practitioners involved in skill
learning that practice leads to automaticity. For example, recognizing
letters of the alphabet apparently becomes automatic by successive ex-
posures (see Fig. 2). Sounding spelling patterns apparently becomes
automatic by repetition of the visual and articulatory sequences. Even
the meaning of a visual word would seem to achieve automatic retrieval
through successive repetitions. Edmond Huey in 1908 emphasized the
role of repetitions in the development of automaticity when he wrote,
“To perceive an entirely new word or other combination of strokes
requires considerable time, close attention and is likely to be imperfectly
done, just as when we attempt some new combination of movements,
some new trick in the gymnasium, or a new serve at tennis. In either case,
repetition progressively frees the mind from attention to details, makes
facile the total act, shortens the time, and reduces the extent to which
consciousness must concern itself with the process” (Huey, 1908, p. 104).
In the case of perceptual learning, repetitions would seem to provide
more than the consolidation of perceptions to the point where they can
be run off quite quickly and automatically. Another thing that can happen
during these repetitions is that the material can be reorganized into
higher-order units even before the lower-order units have achieved a
high level of automaticity. For example, when the child reads text in
which the same vocabulary is used over and over again, the repetitions
will certainly make more automatic the perceptions of each word unit,
but if he stays at the word level he will not realize his potential reading
speed. If, however, he begins to organize some of the words into short
groups or phrases as he reads, then further repetitions can strengthen
these units as well as word units. In this way he can break through the
upper limit of word-by-word reading and apply the benefits of further
repetitions to automatization of larger units. Apparently this sort of
higher-order chunking progresses as the child gains more experience in
reading. For example, Taylor et al. (1960) found that 1st grade children
made as many as two fixations per word whereas 12th graders made one
fixation for about every two words.
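Put in terms of words per fixation, these norms imply roughly a fourfold growth in the size of the visual unit across the grades. The short fragment below is only an illustrative calculation based on the figures just cited; the variable names are ours, not part of the original report:

    # Words per fixation implied by the Taylor et al. (1960) norms cited above.
    first_grade_words_per_fixation = 1 / 2    # about two fixations per word
    twelfth_grade_words_per_fixation = 2 / 1  # about one fixation per two words

    # Relative growth in the size of the visual unit: 4.0, i.e., fourfold.
    print(twelfth_grade_words_per_fixation / first_grade_words_per_fixation)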
Reorganization into larger units requires attention, according to the
model. We do not know specifically how to train a child to organize
codes into higher units although some speed-reading methods make
claims that sheer pressure for speed forces the person out of the word-
by-word reading into larger units. Nevertheless, we feel reasonably sure
that considerable application of attention is necessary if the reorganiz-
ation into higher-order units is to take place. When a person does not
pay attention to what he is practicing, he rules out opportunities for form-
ing higher units because he simply processes through codes that are
already laid down.
What may be critical in the determination of upper limits of word-
group units is the number of word meanings that the subject can compre-
hend in one chunk in his semantic memory. Units at the semantic level
may determine chunk size at the phonological level which, in turn, may
influence how attention is distributed over visual codes. Stated more
generally, this hypothesis says that the limiting size of the chunk at early
levels is influenced by the existing chunk size at deeper levels. If this
hypothesis holds up under experimental test, it would imply that the
teaching of higher-order units for the reader should progress from deeper
levels to sensory levels, rather than the reverse.
We have suggested that during the development of automaticity the
person either may attempt to reorganize smaller units (e.g., words) into
larger units (e.g., word phrases) or he may simply stay at the word
unit level. It stands to reason that he has more confidence in his per-
formance at the lower unit level where he has had the most practice.
Whenever he attempts to reorganize word codes into larger units, he
may temporarily slow down and perhaps make more errors. Therefore, to
encourage chunking, we may have to relax the demand for accuracy. In
general, teachers who stress accuracy too strongly may discourage
children from developing sophisticated strategies of word recognition
(Archwamety & Samuels, 1973). Thus, a child who has performed
successfully at one level of processing may be reluctant to leave it and
move to higher levels which could eventually improve reading speed.
In the case of association learning, automaticity presumably develops
by the sheer temporal contiguity of the two codes. For example, sound-
ing the word “dog” as it is visually presented with no attention to
organization of the stimulus and the response may be sufficient for attain-
ment of automaticity. Presumably, proficient readers continue to increase
their speed on word-naming tests by sheer practice without special at-
tention to organizing or reorganizing associations.
However, at the initial stages of association, well before automaticity
begins to develop, organizational processes are probably involved during
a repetition (Mandler, 1967; Tulving, 1962). While it is possible to form
an initial associative link directly (rote learning), most likely the subject
organizes the stimulus and response together with an event or with a
rule and stores this code in episodic memory, as shown in Fig. 3. Later,
when the stimulus is presented, attention activates the stimulus code to
excite the episodic code. This episodic code now “attracts” attention to
activate it further and the code generates its subordinate codes, including
the response code. This way of recalling a response requires attention
activation and takes a relatively large amount of time. With further
repetitions, the stimulus code should begin to short-circuit the episodic
code and form a new direct link with the response code. At this point,
the stimulus code may require some attention to activate the response
code. However, the route through episodic memory remains as an option,
and subjects probably use it as a check on the response obtained by the
new direct route. With enough practice, of course, activation of the
stimulus code excites the response code without attentional assistance.
We expect that the rate of growth of automaticity will depend upon
a number of other factors. Two of these, namely distribution of learning
and presentation of feedback, have been studied extensively in verbal
learning and motor learning experiments. For both motor skill and verbal
learning, it has generally been found that distributed practice is better
than massed practice, although the optimal interval seems to be a matter
of minutes, not days. Massed practice appears to be more favorable when
one deals with meaningful material. If we can assume that organizational
processing requires massed practice, then we would be inclined to predict
that massed practice would be more beneficial for acquiring automatic
perceptual processing where organization of codes into larger chunks
seems critical. However, automatic associating of sounds or names with
these perceptual chunks should involve little if any organization and
therefore should profit more from distributed practice.
Since the important growth of automaticity takes place after the sub-
ject has achieved accuracy, overt feedback for correct and incorrect
responses may be redundant because at this stage of learning the subject
knows when he is correct or not (Adams & Bray, 1970). However, there
is another type of feedback which may affect the rate of automaticity
learning. While learning proceeds toward the automatic level, it might
be appropriate to inform the subject of the time it took to execute his
response. In fact, the research we have described on acquisition of
automaticity routinely informs the subject of his response speed after
each trial as well as at the end of a block of trials. Of course, latency
feedback for a response to a particular stimulus will not be meaningful
by itself; it must be related to some criterion baseline. For example, the
time it takes to identify a new word should be compared to the time it
takes to identify a word that is already at the automatic level. Thus, the
critical metric is the difference between the two latencies. In practice,
what we do is present a few old and well-learned patterns along with
the new material we wish the subject to learn. At the end of a series of
these trials, we show the subject the two latencies. Another way to present
feedback is to give it after each response. When his response is faster
than the mean on the previous block, he is given a light or sound to
indicate a fast response. Aside from the incentive value of knowing how
his response speed compares with a criterion, the feedback may influence
the way he distributes attention before and immediately after stimulus
presentation. This may in turn influence how he organizes the perceptual
aspects of reading. As for the purely associative operations, we would
suspect that latency feedback would be effective mainly in assuring that
the subject continues to respond fast enough to maintain optimal temporal
contiguity between stimulus and response codes.
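The latency-feedback procedure just described lends itself to a brief sketch in code. The fragment below is illustrative only; the function names, variable names, and sample latencies are hypothetical rather than taken from the experiments discussed here. It computes the critical metric, the difference between the mean latency to new items and the mean latency to well-learned items, and it signals a "fast" trial whenever a response beats the mean of the previous block.

    from statistics import mean

    def automaticity_gap(new_latencies, old_latencies):
        # Critical metric: mean latency to newly learned items minus mean
        # latency to items already at the automatic level (same units, e.g., ms).
        return mean(new_latencies) - mean(old_latencies)

    def fast_response_signal(latency, previous_block_latencies):
        # Per-trial feedback: True (a light or tone) when the current response
        # is faster than the mean of the previous block of trials.
        return latency < mean(previous_block_latencies)

    # Hypothetical latencies (ms) from one block mixing old and new items.
    new_items = [720, 680, 650, 640]
    old_items = [430, 425, 440, 435]

    print(automaticity_gap(new_items, old_items))    # 240.0 ms still to close
    print(fast_response_signal(630, new_items))      # True: beats the prior block mean

On such a scheme, a gap shrinking toward zero across sessions would indicate that the new material is approaching the automatic level already reached by the well-learned patterns.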

IMPLICATIONS OF THE MODEL FOR RESEARCH IN READING INSTRUCTION

The model which has been presented here may have several helpful
features for the researcher concerned with reading. It provides explan-
atory power by clarifying a number of phenomena which have puzzled
educators for some time, and it suggests directions for pedagogical
improvement.
One of the current questions in reading is whether it should be con-
sidered as a wholistic process or as a cluster of subskills. In support of
the subskill view, Guthrie (1973) found that correlations among sub-
skills were high for a group of good readers but low for poor readers,
suggesting that they differ in the way they organize component skills.
Jeffrey and Samuels (1966) found that children who were taught all of
the subskills necessary for decoding words were able to do so without
any guidance from a teacher, whereas children who were not taught a
particular component were unable to decode.
From the point of view of a mature reader, however, the process ap-
pears to be a unitary one. In fact, it is customarily referred to by one
label, namely “reading.” When a teacher observes a bright child learn-
ing to read, he may see the child slowly attaining one skill. However,
when the same teacher is confronted with a slow learner, he may observe
the child slowly learning many skills. This comes about because the child
often must be given extensive training on each of a variety of tasks, such
as letter discrimination, letter-sound training, blending, etc. In this man-
ner, a teacher becomes aware of the fact that letter recognition can be
considered a skill itself, to be taught like we teach object-naming, for
example, naming birds.
The fluent reader has presumably mastered each of the subskills at the
automatic level. Even more important, he has made their integration
automatic as well. What this implies is that he no longer clearly sees the
dividing lines separating these skills under the demands of his day-to-day
reading. In effect, this means that he is no longer aware of the component
nature of the subskills as he was required to be when he was a beginning
reader, learning skills one-by-one. Therefore, if you should ask a typically
fluent reader how he perceives his reading process, he is likely to tell
you that he views it as a wholistic one.
It seems from our consideration of this model that all readers must
go through similar stages of learning to read but do so at different rates.
The slower the rate of learning to read, the more the person becomes
aware of these component stages. One of the hallmarks of the reader who
learned the subskills rapidly is that he was least aware of them at the
time, and therefore now has little memory of them as separable subskills.
On the basis of this model, therefore, we view reading acquisition as a
series of skills, regardless of how it appears to the fluent reader. Pedagog-
ically, we favor the approach which singles out these skills for testing
and training and then attempts to sequence them in appropriate ways.
In consideration of each stage, for example, learning to sound letter
patterns, it would appear that there are two criteria of achievement:
accuracy and automaticity. During the achievement of accuracy, we as-
sume the student should have his attention focused on the task at hand
to code the association between the visual letters and their sounds in
episodic memory, or to establish direct associations (cf. Fig. 6). Once he
has learned letter-sound correspondences, he may or may not be ready
to attack the next stage, namely to “blend” these sounds into syllables or
words. To ascertain his readiness to move ahead, we must consider a
further criterion, namely automaticity. If a good deal of attention is
required for him to be accurate in sounding letter patterns, then “blend-
ing” will be more difficult to perform owing to the total number of things
he must attend to and hold in short-term memory.
In practice, the letter-sound processing need not be fully automatic
for him to make progress towards blending, since even the slowest learner
has sufficient short-term memory capacity to store a few sounds while
he works at blending them. Of course, the less time that his attention must
be allocated to the letter-sound processing, the more time he can devote
to the blending operation and the faster the progress in learning to blend.
We could say that the child who either has a small short-term memory
capacity or who has not yet developed the letter-sound skill to the auto-
matic level has too many things to which he must switch his attention in
order to carry out the operation of blending. This means that he will for-
get information crucial to the blending process, and therefore he is more
likely to suffer unsuccessful experiences with this task. In short, accuracy
is not a sufficient criterion for readiness to advance to skills which build
on the subskills at hand. One should take into account the amount of
attention required by these subskills as part of the readiness criterion.
COMPREHENSION
We turn now to consider the way the model can be used to clarify some
of the comprehension processes and to point to certain pedagogical
consequences. In its present simple form, the model does not spell out
higher-order linguistic operations such as parsing, predictive processing,
and contextual effects on comprehension. If initial tasks of the model
are successful, it is hoped that it can be elaborated to represent more
complex semantic operations such as these. For present purposes, we find
it convenient to separate comprehension from word meaning. By word
meaning we refer to the semantic referent of a spoken or written word,
morpheme, or groups of words that denote a meaningful unit. By com-
prehension, on the other hand, we refer to the organization of these word
meanings. To do this, the meaning units presumably are scanned one-by-
one by attention and organized as a coherent whole. The momentary
act of comprehension is represented in Fig. 7 by the focus of attention
on the coding of m(wg1) with m(wg2). If a subject maintains attention
solely on single-meaning codes, this would constitute a rather low form
of comprehension, much like viewing characters in a play one-by-one
and ignoring their interactions. On the other hand, for high-level com-
prehension of passages, attention must be directed to organizing these
meaning codes, and presumably this is where effort enters into reading
just as it does in understanding difficult spoken sentences.
So long as word meanings are automatically processed, the focus of
attention remains at the semantic level and does not need to be switched
to the visual system for decoding, nor to the phonological level for
retrieving the semantic meanings. On the other hand, attention could be
focused on the decoding of visual words into their phonological form
and spoken aloud without any attention to comprehension. In fact, this
has been frequently observed with some beginning readers, and goes
by the label of “word calling.”
Another phenomenon which the model may clarify is reading for
meaning, but without recall for what has just been read. The model
indicates that meanings of familiar words and word groups may be
activated automatically, leaving attention free to wander to other matters,
perhaps to recent personal episodes. If the reader gives little attention
to organizing meanings into new codes for storage, it is not surprising
that he later finds he cannot recall what he has been reading.
The complexity of the comprehension operation appears to be as
enormous as that of thinking in general. When a person is comprehending
a sentence, he quite often adds his own associations to the particular
organized pattern of meanings. In addition, the ways in which he might
organize the meaning units from semantic memory may be influenced
by strategies whose programs of operation are themselves stored in
semantic memory. We assume that the act of adding material from one’s
own experiences to what one is reading is represented by switching to
other codes in semantic and episodic memory. When this occurs, the item
in semantic memory is used as a retrieval cue to access an association or
strategy. The finished organizational product presumably is then stored
in episodic or semantic memory. When this is successfully done, we say
the person can remember what he has read.
REFERENCES
ADAMS, J. A., & BRAY, N. W. A closed-loop theory of paired-associate verbal learning.
Psychological Review, 1970, 77, 385-405.
ARCHWAMETY, T., & SAMUELS, S. J. A mastery based experimental program for
teaching mentally retarded children word recognition and reading comprehension
skills through use of hypothesis/test procedures. Research Report #50, Research,
Development and Demonstration Center in Education of Handicapped Children,
Minneapolis, Minnesota, 1973.
BOWER, G. H. A selective review of organizational factors in memory. In E. Tulving
& W. Donaldson (Eds.), Organization of memory. London: Academic Press,
1972.
BOWER, T. G. R. Reading by eye. In H. Levin & J. Williams (Eds.), Basic studies in
reading. New York: Basic Books, 1970.
BROADBENT, D. E. Perception and communication. London: Pergamon Press, 1958.
CLARK, H. H., & CHASE, W. G. On the process of comparing sentences against pictures.
Cognitive Psychology, 1972, 3, 472-517.
DEUTSCH, J. A., & DEUTSCH, D. Attention: Some theoretical considerations. Psycho-
logical Review, 1963, 70, 80-90.
EGETH, H., JONIDES, J., & WALL, S. Parallel processing of multielement displays.
Cognitive Psychology, 1972, 3, 674-698.
ERIKSEN, C. W., & SPENCER, T. Rate of information processing in visual perception:
Some results and methodological considerations. Journal of Experimental Psy-
chology Monographs, 1969, 79(2, Pt. 2), 1-16.
ESTES, W. K. Learning theory and mental development. New York: Academic Press,
1970.
ESTES, W. K. An associative basis for coding and organization in memory. In A. W.
Melton & E. Martin (Eds.), Coding processes in human memory. New York:
Wiley, 1972. Pp. 161-190.
GIBSON, E. J. Principles of perceptual learning and development. New York: Appleton-
Century-Crofts, 1969.
GIBSON, E. J. Perceptual learning and the theory of word perception. Cognitive
Psychology, 1971, 2, 351-368.
GOUGH, P. B. One second of reading. In J. F. Kavanaugh and I. G. Mattingly (Eds.),
Language by ear and by eye. Cambridge: M.I.T. Press, 1972.
GUTHRIE, J. T. Models of reading and reading disability. The Journal of Educational
Psychology, in press.
HALL, R. F., & WENDEROTH, P. M. Effects of number of responses and recall strategies
on parameter values of a paired-associate learning model. Journal of Verbal
Learning and Verbal Behavior, 1972, 11, 29-37.
HUEY, E. B. The psychology and pedagogy of reading. New York: Macmillan, 1908.
JEFFREY, W. E., & SAMUELS, S. J. The effect of method of reading training on initial
learning and transfer. Journal of Verbal Learning and Verbal Behavior, 1966,
57, 159-163.
JOHNSON, N. F. Organizations and the concept of a memory code. In A. W. Melton &
E. Martin (Eds.), Coding processes in human memory. New York: Wiley, 1972.
JUDD, W. A., & GLASER, R. Response latency as a function of training method,
information level, acquisition, and overlearning. Journal of Educational Psychology
Monograph, 1969, 60, No. 4, Part 2.
KEELE, S. W. Movement control in skilled motor performance. Psychological Bulletin,
1968, 70, 387-403.
KOLERS, P. A. Three stages of reading. In H. Levin and J. Williams (Eds.), Basic
studies in reading. New York: Basic Books, 1970. Pp. 90-118.
LABERGE, D. Identification of the time to switch attention: A test of a serial and
parallel model of attention. In S. Kornblum (Ed.), Attention and performance
IV. New York: Academic Press, 1973a.
LABERGE, D. Attention and the measurement of perceptual learning. Memory and
Cognition, 1973b, 1, 268-276.
LABERGE, D., & SAMUELS, S. J. On the automaticity of naming artificial letters.
Technical Report #7, Minnesota Reading Research Project, University of
Minnesota, 1973.
LABERGE, D., SAMUELS, S. J., & PETERSEN, R. Perceptual learning of artificial letters.
Technical Report #6, Minnesota Reading Research Project, University of
Minnesota, 1973.
LABERGE, D., VAN GELDER, P., & YELLOTT, J. I. A cueing technique in choice
reaction time. Perception and Psychophysics, 1970, 7, 57-62.
MANDLER, G. Organization and memory. In K. W. Spence and J. T. Spence (Eds.),
The psychology of learning and motivation. Vol. 1. New York: Academic Press,
1967.
MEYER, D. E. On the representation and retrieval of stored semantic information.
Cognitive Psychology, 1970, 1, 242-300.
MILLER, G. A. The magical number seven plus or minus two: Some limits on our
capacity for processing information. Psychological Review, 1956, 63, 81-97.
MILLWARD, R. Latency in a modified paired-associate learning experiment. Journal
of Verbal Learning and Verbal Behavior, 1964, 3, 309-316.
MORAY, N. Attention in dichotic listening: Affective cues and the influence of
instructions. Quarterly Journal of Experimental Psychology, 1959, 11, 56-60.
NORMAN, D. A. Toward a theory of memory and attention. Psychological Review,
1968, 75, 522-536.
POSNER, M. I., & BOIES, S. J. Components of attention. Psychological Review, 1971,
78, 391-408.
POSNER, M. I., BOIES, S. J., EICHELMAN, W. H., & TAYLOR, R. L. Retention of visual
and name codes of single letters. Journal of Experimental Psychology Monograph,
1969, 79, No. 1, Part 2.
POSNER, M. I., LEWIS, J. L., & CONRAD, C. Component processes in reading: A per-
formance analysis. In J. F. Kavanaugh & I. G. Mattingly (Eds.), Language by ear
and by eye. Cambridge: M.I.T. Press, 1972.
POSNER, M. I., & MITCHELL, R. A chronometric analysis of classification. Psycho-
logical Review, 1967, 74, 392-409.
POSNER, M. I., & KEELE, S. W. Attention demands of movements. Proceedings of the
seventeenth congress of applied psychology, Amsterdam: Zeitlinger, 1969.
RUBENSTEIN, H., LEWIS, S. S., & RUBENSTEIN, M. A. Evidence for phonemic recoding
in visual word recognition. Journal of Verbal Learning and Verbal Behavior, 1971,
10, 645-647.
RUMELHART, D. E. A multicomponent theory of the perception of briefly exposed
visual displays. Journal of Mathematical Psychology, 1970, 7, 191-218.
SHIFFRIN, R. M., & GARDNER, G. T. Visual processing capacity and attentional control.
Journal of Experimental Psychology, 1972, 93, 72-82.
SHIFFRIN, R. M., & GEISLER, W. S. Visual recognition in a theory of information
processing. In R. L. Solso (Ed.), Contemporary issues in cognitive psychology:
The Loyola Symposium. New York: Wiley, 1973.
SMITH, F. Understanding reading. New York: Holt, Rinehart & Winston, 1971.
STERNBERG, S. The discovery of processing stages: Extensions of Donders' method.
In Koster (Ed.), Attention and performance, Vol. 2. Amsterdam: North Holland
Publishing Co., 1969.
SUPPES, P., GROEN, G., & SCHLAG-REY, M. A model for response latency in paired-
associate learning. Journal of Mathematical Psychology, 1966, 3, 99-128.
TAYLOR, S. E., FRACKENPOHL, H., & PATTEE, J. L. Grade level norms for the com-
ponents of the fundamental reading skill. Bulletin #3. Huntington, New York:
Educational Development Laboratories, 1960.
TRABASSO, T. R., & BOWER, G. H. Attention in learning: Theory and research. New
York: Wiley, 1968.
TREISMAN, A. Selective attention in man. British Medical Bulletin, 1964, 20, 12-16.
TULVING, E. Subjective organization in free recall of “unrelated words.” Psychological
Review, 1962, 69, 344-354.
TULVING, E. Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.),
Organization of memory. New York: Academic Press, 1972.
WELFORD, A. T. Fundamentals of skill. London: Methuen, 1968.
WICKLUND, D. A., & KATZ, L. Short term retention and recognition of words by
children aged seven and ten. Visual Information Processing, Progress Report No.
2, University of Connecticut, 1970.

(Accepted November 27, 1973)
