Working Memory
Alan Baddeley
Summary
Working memory refers to the system or systems that are assumed to be
necessary in order to keep things in mind while performing complex tasks such as
reasoning, comprehension and learning. Over the last 30 years, the concept of
working memory has been increasingly widely used, extending from its origin in
cognitive psychology to many areas of cognitive science and neuroscience, and
has been applied in areas ranging from education, through psychiatry, to
paleoanthropology.
Main Text
The term working memory was coined in 1960 by Miller, Galanter and Pribram in
their classic book ‘Plans and the Structure of Behaviour’, used in 1968 by Atkinson
and Shiffrin in an influential paper [1], and adopted as the title for a multicomponent
model by Baddeley and Hitch [2]. It is this use of the term that will concern the rest
of the discussion. It is important to note, however, that the term working memory
was adopted independently by Olton [3] in connection with the performance of
animals, typically rats, in a multi-arm radial maze in which each arm was baited:
the animals were given several trials per day and needed to remember which arm
had already been visited on that day, in order to maximise reward. Within the
human context, this would be regarded as a long-term memory task, demanding
more than the brief limited capacity system that is assumed to comprise human
working memory. Olton's concept is still sometimes used in studies based on
animals, although primate studies typically use the term in the same way as it is
used in studies on humans.
This concept of working memory evolved from that of short-term memory, the
temporary storage of small amounts of material over brief periods of time. It
became a topic of major interest during the 1960s, linked to an information-
processing approach to psychology, in which the digital computer served as a
theoretical basis for the development of psychological theories. This approach
became known as cognitive psychology and has, in one form or another, become
increasingly widely applied in subsequent years.
A very fruitful development stemmed from applying the concepts and methods of
cognitive psychology to patients with brain impairment, an approach termed
cognitive neuropsychology. While most brain-damaged patients typically suffer
from a number of deficits, cases occasionally occur in which a single isolated
cognitive function is impaired, while other functions are preserved, allowing
theories to be tested directly. A very influential case was that of H.M. [4], who,
following bilateral hippocampal surgery to relieve intractable epilepsy, became
densely amnesic and unable to form ongoing memories. He could, however,
perform normally on short-term memory tasks such as hearing and repeating back
a telephone number. This dissociation between impaired long-term and preserved
short-term memory also proved applicable to patients whose pure amnesia
resulted from a number of different aetiologies. It was also shown to extend to a
range of other tasks that were assumed to distinguish between long-term and
short-term memory. Other patients, with damage to the left temporo-parietal cortex
rather than the hippocampus, were reported to have exactly the opposite pattern of
deficits, suggesting preserved long-term but impaired short-term memory.
This led to a conceptualisation of memory as comprising a succession of storage
systems in which information flows from the environment, into a series of
temporary sensory buffers, which are essentially part of perceptual processes,
before being passed on to a limited capacity short-term memory store, which then
feeds long-term memory. In the most influential of these models, Atkinson and
Shiffrin [1] proposed that this short-term system acts as a working memory,
controlling the flow of information into and out of long-term memory, and playing a
crucial role in learning and in cognition more widely.
This model, however, ran into two problems. The first concerned the assumption
that the mere maintenance of material in short-term memory would guarantee long-
term learning. This proved incorrect, with degree of learning depending much more
on the nature of the processing. Hence, processing a word in terms of its
perceptual appearance or spoken sound is much less effective for subsequent
learning than encoding the material on the basis of its meaning or its emotional
tone [5]. The second problem came from the study of patients with a very specific
short-term memory deficit as described by Shallice and Warrington [6]. According
to the Atkinson and Shiffrin [1] model, in the absence of an adequate short-term
memory, information should be rapidly lost and hence such patients should not be
able to learn. Furthermore, if this system did indeed function as a working memory,
patients with impaired short-term memory should be seriously cognitively
handicapped. In fact, these patients appeared to show normal long-term learning
and to live intact lives, one as a secretary, another as a taxi driver and a third
running a shop.
Baddeley and Hitch [2] attempted to tackle this paradox by studying the effect of
disrupting short-term memory on the capacity of normal people to perform complex
tasks such as reasoning, comprehending and learning. We did so by combining
such tasks with a concurrent activity that was assumed to depend on short-term
memory, namely remembering and repeating back sequences of digits such as a
telephone number. As the length of the sequence increases, the remaining
available capacity of short-term memory should be reduced, and performance on
the concurrent cognitive task progressively disrupted. We found that there was
indeed a consistent effect, with speed of performance declining with sequence
length, but impairment was far from catastrophic even with long digit sequences,
and error rate was low and unchanged. In order to explain our results, we
abandoned the unitary model, proposing instead a three-component model
illustrated in Figure 1. This assumes an attentional control system, the central
executive, aided by two short-term storage systems, one for visual material, the
visuo-spatial sketchpad, and one for verbal-acoustic material, the phonological
loop. We assumed that short-term memory patients had damage to the loop, and
that in simulating such patients using concurrent digit memory, we had
systematically loaded up the loop, at the same time as placing only a modest load
on the rest of working memory.
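To make this division of labour concrete, here is a purely illustrative sketch, not anything proposed in the original papers: it renders the three components as an attentional controller routing material to two limited-capacity stores. Every class, method and parameter name below is an assumption invented for the example.

```python
# An illustrative toy rendering of the 1974 three-component structure:
# a central executive (attentional control) coordinating two stores.
# All names here are invented for the sketch, not the model's vocabulary.
from dataclasses import dataclass, field


@dataclass
class PhonologicalLoop:
    items: list = field(default_factory=list)  # verbal-acoustic material


@dataclass
class VisuoSpatialSketchpad:
    items: list = field(default_factory=list)  # visual-spatial material


@dataclass
class CentralExecutive:
    """Attentional controller: routes incoming material to the right store."""
    loop: PhonologicalLoop = field(default_factory=PhonologicalLoop)
    sketchpad: VisuoSpatialSketchpad = field(default_factory=VisuoSpatialSketchpad)

    def attend(self, item: str, modality: str) -> None:
        store = self.loop if modality == "verbal" else self.sketchpad
        store.items.append(item)


wm = CentralExecutive()
wm.attend("eight-digit sequence", "verbal")  # a concurrent digit load fills the loop,
wm.attend("shape of the figure", "visual")   # leaving the other components free
```

On this toy reading, the dual-task logic above amounts to filling the loop with digits while placing only a modest load on the executive and sketchpad.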
Our model differed from that of Atkinson and Shiffrin [1] in a number of ways. Most
obviously, we replaced the concept of a single system with one comprising at least
three separable, but interacting, subsystems. Secondly, this form of interaction
abandoned the assumption of a series of successive stages for a model capable of
parallel processing across the subsystems. This is methodologically very
important, as it rules out the use of methods in which the first few seconds are
assumed to reflect pure short-term or working memory and later measures to
reflect pure long-term memory, an assumption that is still made in far too many studies. Long-
term memory may well influence performance at every stage, meaning that other
methods must be used to separate the two or more memory components that are
likely to influence early performance.
The multicomponent model was offered as a broad theoretical framework that, if
successful, would allow more detailed modelling of the three components. In
choosing the term working memory, we aimed to stress that its role went beyond
simple storage, allowing it to play an important role in cognition more generally,
hopefully providing a framework and a set of techniques that could be applied
practically to the wide range of activities for which working memory might be
important.
The model has subsequently proved both useful and resilient, but has had to be
supplemented by a fourth component, the episodic buffer (Figure 2). This is
episodic in that it is capable of holding multidimensional episodes or chunks, which
may combine visual and auditory information, possibly also with smell and taste. It
is a buffer in that it provides a temporary store in which the various components of
working memory, each based on a different coding system, can interact through
participation in a multidimensional code, and can interface with information from
perception and long-term memory. The episodic buffer is assumed to have a
limited capacity of about four chunks or episodes, and to be accessible through
conscious awareness [7]. In its initial form, the buffer was assumed to play an
active and attentionally-demanding role in binding together information from
different sources, but further investigation suggests that it serves as a passive
store rather than an active processor [8].
Figure 2. A later development of the multicomponent model.
It includes links to long-term memory and a fourth component, the episodic buffer,
which is accessible to conscious awareness.
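As a loose illustration only, the buffer's assumed behaviour can be sketched as a passive, capacity-limited store of multidimensional chunks. The class name, feature labels and displacement rule below are assumptions made for the example; only the limit of about four chunks comes from the text [7].

```python
# A minimal, hypothetical sketch of the episodic buffer as a passive,
# capacity-limited store of multidimensional chunks.
from collections import deque

BUFFER_CAPACITY = 4  # the assumed limit of about four chunks or episodes [7]


class EpisodicBuffer:
    """Passive store: adding a fifth chunk simply displaces the oldest."""

    def __init__(self) -> None:
        self.chunks: deque = deque(maxlen=BUFFER_CAPACITY)

    def bind(self, **features: str) -> None:
        # A chunk may combine codes from different subsystems,
        # e.g. visual, phonological and long-term semantic information.
        self.chunks.append(dict(features))


buffer = EpisodicBuffer()
buffer.bind(visual="red door", phonological="'door'", semantic="entrance")
print(len(buffer.chunks))  # never exceeds 4
```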
This broad theoretical framework has proved durable and has been widely used
within both basic and applied psychology and in neuroscience more
generally [7, 9, 10]. One reason for the survival of this broad framework is its
simplicity, allowing more detailed theoretical development within the model, without
the need for constant change. At this more detailed level, there have been many
theoretical developments, some mainly behavioural, others based on detailed
mathematical modelling while yet other approaches have been more
neurobiologically oriented [9, 10]. Rather than attempting to review this wide area, I
will describe two topics that are relatively close to the multicomponent model and
that have seen development, controversy and practical application. One concerns
the concept of a phonological loop and the other involves the study of individual
differences in working memory capacity as a means of investigating the basis of
executive control.
We proposed in our initial theorising that the phonological loop has two major
features. The first is a store in which speech-like memory traces are registered and
will spontaneously fade within about two seconds. The second is a process
whereby such traces can be refreshed by verbal or subvocal rehearsal, an activity
that takes place in real time. Blocking rehearsal by requiring the continuous
utterance of an irrelevant sound — for example, repeatedly saying the word ‘the’ —
will prevent a visual stimulus, such as a printed letter or word, from being
transformed and stored as a phonologically encoded item. Evidence for reliance on a
speech-like memory store comes from the phonological similarity effect. Immediate
recall of a sequence of words is grossly impaired when they are similar in sound.
Hence the sequence Map, Cat, Cap, Mat, Can is harder to recall immediately than
Pit, Day, Cow, Tub, Pen. Similarity of meaning on the other hand, as in the
sequence Huge, Wide, Long, Big, Tall, has little effect on immediate recall. But
when list length is increased to ten words, and several learning trials are allowed,
forcing reliance on long-term memory, the pattern reverses, with meaning
becoming the crucial factor [2].
Evidence for the importance of subvocal rehearsal comes from the word length
effect, whereby immediate recall declines as the length of the words to be
remembered increases [11]. The longer it takes to say the words in the sequence,
the more the forgetting that will occur. Uttering a sequence of irrelevant sounds
stops you rehearsing the words and hence abolishes the word length effect.
Evidence from the study of neuropsychological patients fits the model well [12], as
illustrated in Figure 3.
Figure 3. A more detailed formulation of the phonological loop model based on
both behavioural and neuropsychological evidence [12].
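On this account, memory span falls out of a race between trace decay and subvocal rehearsal, and a back-of-envelope sketch reproduces the word length effect. The two-second decay window is taken from the text above; the per-word articulation times and the simple ratio rule are assumptions made for the illustration.

```python
# A back-of-envelope sketch of the trace-decay-plus-rehearsal account.
# The ~2 s decay window comes from the model described above; the
# articulation times below are illustrative assumptions.

DECAY_WINDOW_S = 2.0  # a trace fades unless refreshed within about 2 s


def predicted_span(articulation_time_s: float) -> float:
    """Items maintainable if each must be re-articulated before it fades."""
    return DECAY_WINDOW_S / articulation_time_s


# Shorter words are rehearsed faster, so more survive: the word length effect.
for label, secs in [("short words (~0.3 s each)", 0.3),
                    ("long words (~0.7 s each)", 0.7)]:
    print(f"{label}: predicted span of about {predicted_span(secs):.1f} items")
```

The ratio rule is crude, but it captures the core claim: span is roughly what can be re-articulated within the two-second life of a trace.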
An attempt to investigate the possible evolutionary significance of the phonological
loop led to the hypothesis that it facilitates the new phonological learning that is
necessary for learning to produce new words. Patients with phonological loop
impairment can learn meaningful material at a normal rate, but have great difficulty
in learning foreign-language vocabulary. Further evidence for the language-learning
function of the loop comes from a range of other sources. Children with
specific language impairment typically have poor short-term memory, and are
slower in acquiring new vocabulary. In the case of normal children, the size and rate
of increase of vocabulary during the early years are influenced by phonological loop
capacity [13].
Aspects of this evidence remain controversial. More specifically, it is by no means
generally accepted that forgetting within the short-term store reflects a fading
memory trace, rather than some form of interference from other material. It is also
the case that older children can typically cope with a phonological-loop deficit
without major language acquisition problems, so long as they have good executive
processing. These are issues of both theoretical and practical importance, but do
not present a major challenge to the overall concept of a phonological loop. Other
critics argue that the phonological loop should be regarded simply as part of the
language processing system, rather than as a supplementary store.
However, while the loop has almost certainly evolved from mechanisms for speech
perception and production, the fact that patients with grossly impaired phonological
short-term memory may have normal speech perception and production argues for
a separate system, albeit one strongly linked to language processing [12].
My second example is linked to the concept of a central executive, a term that
refers to the system whereby working memory is controlled, leaving open the
question of whether this involves a single unitary controller, or as seems more
likely, an emergent alliance of executive processes. An influential approach to this
issue uses correlational methods in which differences between individuals on
specific working memory tasks, typically referred to as working memory span, are
linked to more general measures of cognition such as prose comprehension or
academic performance. The classic initial study in this area [14] required college
students to read out a sequence of unrelated sentences, and then recall the last
word of each sentence. People can usually manage only three or four sentences;
this comprises their working memory span. Surprisingly, this simple measure
proved to correlate well with the prose comprehension component of a widely used
college aptitude test. This finding has been replicated many times, and extended to
many other ways of measuring working memory span, provided they require
simultaneous storage and processing. Such span measures have been shown to
predict performance on many cognitive tasks, ranging from rate of learning
programming skills to performance on the type of reasoning task used in
intelligence testing [15].
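To make the procedure concrete, here is a hypothetical sketch of how such a reading-span measure might be scored. The sentences, the strict in-order scoring rule and the function name are assumptions invented for the example; the feature taken from the text is the combination of processing (reading the sentences) with storage (retaining each final word).

```python
# A hypothetical sketch of scoring a reading-span task of the kind
# described above. Materials and scoring rule are assumptions.

def reading_span(sentence_sets, recalled_sets):
    """Largest set size at which every final word was recalled in order."""
    best = 0
    for sentences, recalled in zip(sentence_sets, recalled_sets):
        targets = [s.rstrip(".").split()[-1].lower() for s in sentences]
        if [w.lower() for w in recalled] == targets:
            best = max(best, len(sentences))
    return best


trials = [
    ["The taxi turned down the narrow lane.", "She poured the tea into a cup."],
    ["He read the letter twice.", "The dog barked at the gate.",
     "Rain fell steadily all night."],
]
recalls = [["lane", "cup"], ["twice", "gate", "morning"]]  # third word wrong
print(reading_span(trials, recalls))  # prints 2: the two-sentence set succeeded
```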
Gathercole and colleagues [16] have used variants of the working memory span
measure as part of a broader working memory battery that is proving useful in
education, where it is able to identify children at risk of subsequent academic
difficulties, enabling teachers to anticipate future problems and provide help. But
despite the success of the individual difference-based approach to working
memory, there is no widely accepted explanation of precisely what processes
underpin the predictive capacity of these complex working memory tasks. Engle
and Kane [15] emphasise the role of inhibition in suppressing interference from
irrelevant information, while others stress the capacity to divide or switch
attention [17], or to update and maintain information [18].
This is clearly an important and lively area that can readily be fitted into the broader
multicomponent model of working memory, although not all investigators would
necessarily choose to do so. Some prefer an alternative framework proposed by
Cowan [19], whose influential embedded processes model is illustrated in Figure 4.
Cowan defines working memory as “cognitive processes that retain information in
an unusually accessible state”. Activation occurs in long-term memory, is
temporary, and fades unless maintained by verbal rehearsal or continued attention.
Cowan [19] emphasises the focus of attention, which he suggests has a limit of
about four items or chunks.