Clinical Neuropsychology and Technology
What's New and How We Can Use It

Thomas D. Parsons
University of North Texas, Denton, TX, USA

Springer International Publishing, 2016
Preface
This book reviews currently available technologies that may be useful for
neuropsychologists. In addition to enhanced technologies for administration and
data capture, there is emphasis on the need for information technologies that
can link outcome data to neuroinformatics and collaborative knowledgebases.
I understand that this book is a rather ambitious first account of advances in
technology for neuropsychological assessment. It is important to note that neu-
ropsychologists need not view these advanced technologies as necessary replace-
ments for current batteries. Instead, it is hoped that the tools described herein will provide neuropsychologists with additional options that can be used judiciously alongside current batteries.
I wish to acknowledge the amazing people who helped me in making this book
possible.
First, I wish to acknowledge my colleagues at the University of Southern California and the University of North Texas. While at the University of Southern
California’s Institute for Creative Technologies, I had the great opportunity of
working with Galen Buckwalter, Skip Rizzo, and Patrick Kenny. Their passion for
virtual reality and neuropsychology shaped my development and provided an impetus for my desire to update the tools currently used for neuropsychological
assessment. At the University of North Texas, I have had a number of interesting
interactions with my colleagues in Psychology. Additionally, I have benefitted from
collaborative work with Ian Parberry in Computer Science and Lin Lin in Learning
Technologies.
I also wish to acknowledge the first intellects who shaped my thinking. First is Umberto Eco: in addition to Eco’s semiotics (e.g., the semiological guerrilla), his
creation of the Abulafia computer in Foucault’s Pendulum will always be my
favorite technology for generating Aristotelian metaphors. I am also indebted to
Jorge Luis Borges for his discussions of libraries, labyrinths, time, and infinity.
Next, there is Ludwig Wittgenstein’s brilliant early work in the Tractatus
Logico-Philosophicus, and his later work in the Philosophical Investigations,
wherein he discarded much of what he argued in the Tractatus!
There are also a number of students and postdoctoral fellows who both inspired
me and diligently assisted with the research for the manuscript. Two graduate
students stand out as exemplars of “why I do what I do” each day: Christopher
Courtney from the University of Southern California and Timothy McMahan from
the University of North Texas.
Finally, I must thank my best friend Valerie Parsons. Her encouragement and
support have been invaluable. Also, our children, Tommy and Sophie: I am proud
to have a bearcat and a bugaboo that are already fascinated by the brain and what it
means to be a person. My family inspires and heals me. Dostoyevsky was correct
that “The soul is healed by being with children.”
Contents

Part I Introduction

1 Introduction
  1 Sternberg’s Call for Advances in Technology for Assessment of Intelligence
  2 Dodrill’s Call for Advances in Technology for Neuropsychological Assessment
  3 From Lesion Localization to Assessment of Everyday Functioning
  4 Bilder’s Neuropsychology 3.0: Evidence-Based Science and Practice
  5 Computerized Neuropsychological Assessment Devices
  6 Ecological Validity and Assessment of Everyday Functioning
  7 Construct-Driven Versus Function-Led Approaches
  8 Affective Neuroscience and Clinical Neuropsychology
  9 Virtual Environments for Enhanced Neuropsychological Assessments
  10 Plan for This Book

2 Ecological Validity
  1 Introduction
  2 The Everyday/Laboratory Research Conflict
  3 Early Attempts at a Neuropsychology-Specific Definition of Ecological Validity
  4 Construct-Driven and Function-Led Approaches to Neuropsychological Assessment
    4.1 Function-Led Tests that Are Representative of Real-World Functions
    4.2 Real-World Assessments Using the Multiple Errands Tasks: Potential and Limitations

Part IV Conclusions

8 Future Prospects for a Computational Neuropsychology
  1 Formal Definitions of Neuropsychological Concepts and Tasks in Cognitive Ontologies
    1.1 Covariance Among Neuropsychology Measures of Differing Domains
    1.2 Lack of Back-Compatibility in Traditional Print Publishing
    1.3 Neuropsychology’s Need for Cognitive Ontologies
  2 Web 2.0 and Collaborative Neuropsychological Knowledgebases
  3 Construct-Driven and Function-Led Redux
    3.1 Virtual Environments for Assessing Polymorphisms
    3.2 Neuropsychological Assessment Using the Internet and Metaverse Platforms

References

Index

About the Author
Part I
Introduction
Chapter 1
Introduction
Decades ago Paul Meehl (1987) called for clinical psychologists to embrace the
technological advances prevalent in our society. Meehl’s endorsement of technol-
ogy for clinical psychology reflects the developments that were occurring during
the 1980s for psychological testing (Bartram and Bayliss 1984; French and
Beaumont 1987; Space 1981). In the 1980s, neuropsychologists also discussed the
possibilities of computer-automated neuropsychological assessments and compared
them to traditional approaches that involved paper-and-pencil testing (Adams 1986;
Adams and Brown 1986; Adams and Heaton 1987; Long and Wagner 1986). Unfortunately, progress beyond this period was limited because too great an emphasis was placed upon interpretive algorithms, which led to questions about whether then-current programs could generate accurate clinical predictions (Anthony et al. 1980; Heaton et al. 1981). While it is unclear whether the computerized platforms
during this period were adequate, it is clear that the use of computerized inter-
pretation of clinical results from fixed batteries stalled progress in development of
technologically advanced neuropsychological assessments (Russell 2011).
A decade after Meehl, Sternberg (1997) described the ways in which clinical psy-
chologists have fallen short of meeting Meehl’s challenge. This failure is apparent in
the discrepancy between progress in cognitive assessment measures like the Wechsler
scales and progress in other areas of technology. Sternberg used the examples of now-obsolete black-and-white televisions, vinyl records, rotary-dial telephones, and the first commercial computer made in the USA (i.e., the UNIVAC I) to illustrate the lack
of technological progress in the standardized-testing industry. According to
Sternberg, currently used standardized tests differ little from tests that have been used
throughout this century. For example, while the first edition of the Wechsler Adult
Intelligence Scale appeared some years before UNIVAC, the Wechsler scales (and
similar tests) have hardly changed at all (aside from primarily cosmetic changes)
compared to computers. Although one may argue that innovation in the computer
industry is different from innovation in the standardized-testing industry, there are still
appropriate comparisons. For example, whereas millions of dollars spent on technology in the computer industry typically yield increased processing speed and power, millions of dollars spent on innovation in the testing industry tend to yield little more than a move from multiple-choice items to fill-in-the-blank items. Sternberg also points out that cognitive testing needs progress in ideas, not just new measures for delivering old technologies. While clinical neuropsychology emphasizes its role as a science, its technology is not progressing at the same pace as the other clinical neurosciences.
At the same time Sternberg was describing the discrepancy between progress in
cognitive assessment measures and progress in other areas of technology, Dodrill
(1997) was contending that neuropsychologists had made much less progress than
would be expected in both absolute terms and in comparison with the progress
made in other clinical neurosciences. Dodrill points out that clinical neuropsy-
chologists are using many of the same tests that they were using 30 years ago (in
fact close to 50 years ago given the date of this publication). If neuroradiologists
were this slow in technological development, then they would be limited to
pneumo-encephalograms and radioisotope brain scans—procedures that are con-
sidered primeval by current neuroradiological standards. According to Dodrill, the
advances in neuropsychological assessment (e.g., Wechsler scales) have resulted in
new tests that are by no means conceptually or substantively better than the old
ones. The full scope of issues raised by Dodrill becomes more pronounced when he
compares progress in clinical neuropsychology to that of other neurosciences. For
example, clinical neuropsychologists have historically been called upon to identify
focal brain lesions. When one compares clinical neuropsychology’s progress with clinical neurology, it is apparent that while the difference may not have been that great prior to the appearance of computerized tomographic (CT) scanning in the 1970s, the advances since then (e.g., magnetic resonance imaging) have given clinical neurologists a dramatic edge.
In psychiatry, for example, the shift is found in the understanding of communication among nerve cells in the brain—a shift from a predominant emphasis upon electrical impulses to an enhanced model of chemical transmission (Carlsson 2001). For neurology (and
a number of related branches of neuroscience), a shift is found in new ways to
visualize the details of brain function (Raichle 2009; Sakoglu et al. 2011). Finally,
we are seeing shifts in computer science in the areas of social computing (Wang
2007), information systems (Merali and McKelvey 2006), neuroinformatics (Jagaroo
2009; Koslow 2000; Fornito and Bullmore 2014), and even the video game industry
(de Freitas and Liarokapis 2011; Zackariasson and Wilson 2010).
Recently, Bilder (2011) has argued that clinical neuropsychology is ready to embrace
technological advances and experience a transformation of its concepts and methods.
For Bilder, the theoretical formulations of neuropsychology are represented in three
waves. In Neuropsychology 1.0 (1950–1979), clinical neuropsychologists focused
on lesion localization and relied on interpretation without extensive normative data.
In Neuropsychology 2.0 (1980–present), clinical neuropsychologists were impacted
by technological advances in neuroimaging and as a result focused on characterizing
cognitive strengths and weaknesses rather than differential diagnosis. For
Neuropsychology 3.0 (a future possible Neuropsychology), Bilder emphasizes the
need to leverage advances in neuroimaging that Dodrill discussed. Further, he calls
on clinical neuropsychologists to incorporate findings from the human genome
project, advances in psychometric theory, and information technologies. Bilder
argues that a paradigm shift toward evidence-based science and praxes is possible if
neuropsychologists understand the need for innovations in neuropsychological
knowledgebases and the design of Web-based assessment methods.
Computer-automated assessments also allow for the rapid implementation of adaptive algorithms. The enhanced timing precision of
the computer-automated assessment enables implementation of subtle task manip-
ulations and trial-by-trial analysis methods found in cognitive neuroscience. Bilder
argues that these offer greater sensitivity and specificity to individual differences in
neural system function. Relatedly, there is increased interest in Internet-based assessment and the possibility of acquiring hundreds of thousands of participants in months. The longitudinal behavioral data garnered from Internet-based assessment offer potential for the development of repositories that can be stored with electronic medical records, genome sequences, and each patient’s history.
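To make the timing point concrete, consider how a computer-administered trial can log response latency against a high-resolution clock, a level of precision that paper-and-pencil administration cannot approach. The following is a minimal sketch in Python (not taken from the book; present_stimulus and get_response are hypothetical placeholders for platform-specific display and input routines):

    import time

    def timed_trial(present_stimulus, get_response):
        """Run one trial and return (response, latency in milliseconds).

        present_stimulus: callable that draws the stimulus on screen.
        get_response: blocking callable returning the participant's response.
        """
        present_stimulus()
        onset = time.perf_counter()  # high-resolution monotonic clock
        response = get_response()
        latency_ms = (time.perf_counter() - onset) * 1000.0
        return response, latency_ms

Trial-level records of this kind are what make possible the trial-by-trial analysis methods mentioned above.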
While Bilder’s arguments are very similar to the ones made in this book, they do
not include a discussion of the need for ecological validity in neuropsychological
assessments. An unfortunate limitation of most computer-automated neuropsy-
chological measures is that they simply automate construct-driven paper-and-pencil
assessments. The changing role for neuropsychologists has also resulted in
increased emphasis upon the ecological validity of neuropsychological instruments
(Franzen and Wilhelm 1996). An unfortunate limitation for neuropsychologists
interested in assessing everyday functioning has been the lack of definitional
specificity of the term “ecological validity” (Franzen and Wilhelm 1996). Early
attempts to define ecological validity for neuropsychological assessment empha-
sized the functional and predictive relation between a patient’s performance on a set
of neuropsychological tests and the patient’s behavior in everyday life. Hence, an
ecologically valid neuropsychological measure has characteristics similar to a
naturally occurring behavior and can predict everyday function (Sbordone 1996).
Franzen and Wilhelm (1996) refined the definition of ecological validity for neu-
ropsychological assessment via an emphasis upon verisimilitude and veridicality.
By verisimilitude, they meant that the demands of a test and the testing conditions
must resemble demands found in the everyday world of the patient. A test with
verisimilitude resembles a task the patient performs in everyday life and links task
demands to the prediction of real-world behavior (Spooner and Pachana 2006). By
veridicality, they meant that performance on a test should predict some aspect of the
patient’s functioning on a day-to-day basis.
A further issue for ecological validity is the need for assessments that take seriously
the impact of affective arousal upon neurocognitive performance. While current
approaches to neuropsychological assessment aid our understanding of cognitive
conflict, everyday activities commonly come in the form of emotional distractors.
Social and affective neuroscience studies have found that affective stimuli are
particularly potent distracters that can reallocate processing resources and impair
cognitive (e.g., attention) performance (Dolcos and McCarthy 2006; Pessoa 2008).
Affective responses to emotional distractors may be understood as multimodal
events in response to a stimulus that has particular significance for the participant,
often signifying a potential threat or reward. Enhanced understanding of the effect of
threatening stimuli upon executive functions has important implications for affec-
tive disorders (e.g., specific phobias, depression, and post-traumatic stress disorder)
that are characterized by increased susceptibility to affective distraction (Ellis and
Ashbrook 1988; Wang et al. 2008). Although cognitive-based understandings of
brain–behavior relationships have grown in recent decades, the neuropsychological
understandings of emotion remain poorly defined (Suchy 2011). Likewise, neu-
ropsychological assessments often fail to assess the extent to which affective
arousal may impair cognitive performance.
One issue is that revisions of long-used test batteries can complicate established clinical interpretations (Loring and Bauer 2010). A further issue is that while the historical
purpose of clinical neuropsychology was differential diagnosis of brain pathology,
technological advances in other clinical neurosciences (e.g., the development of
neuroimaging) have changed the neuropsychologist’s role to that of making eco-
logically valid predictions about the impact of a given patient’s neurocognitive
abilities and disabilities on everyday functioning. These reasons alone should
prompt neuropsychologists to take seriously the need for technological progress for
a progressive neuropsychology.
Throughout this book, there is an emphasis upon the importance of (1) en-
hancing ecological validity via a move from construct-driven assessments to tests
that are representative of real-world functions—it is argued that this will proffer
results that are generalizable for prediction of the functional performance across a
range of situations; (2) the potential of computerized neuropsychological assessment devices (CNADs) to enhance standardization of administration, accuracy of stimulus presentation timing and response latencies, ease of administration and data collection, and reliable and randomized presentation of stimuli for repeat administrations; and (3) novel technologies that allow for precise presentation and control of dynamic perceptual stimuli—providing ecologically valid assessments that combine the veridical control and rigor of laboratory measures with a verisimilitude that reflects real-life situations.
Following a discussion of ecological validity, Part II reviews “The Evolution of
Neuropsychological Assessment” and focuses upon the three waves found in the-
oretical formulations of neuropsychological assessment. The organization of this
section is as follows. In Chap. 3, “Neuropsychological Assessment 1.0,” a brief
overview will be given of the historical development of clinical neuropsychology’s
normal science and the current state that is leading to a shift in approaches. In
Chap. 4, “Neuropsychological Assessment 2.0,” current applications of
computer-based neuropsychological assessments are described. In Chap. 5,
“Neuropsychological Assessment 3.0,” a discussion is proffered of the utility of
simulation technology for ecologically valid neuropsychological assessments that
make use of current technological advances.
In Part III, “Next Generation Neuropsychological Applications,” there will be a
discussion of novel technologies and approaches that allow the clinician to reach patients in new ways. In Chap. 6, “Teleneuropsychology: Coming out of the
office,” there will be a discussion of the ways in which electronic communications
may be used to deliver health-related services from a distance, and its particular
usefulness in bringing specialty services to underserved populations and/or remote
areas. Chapter 7 discusses the “Gamification of Neurocognitive Approaches to Rehabilitation.”
In Part IV, “Conclusions,” the book will conclude (Chap. 8) with a presentation
of “Future Prospects for a Computational Neuropsychology.” Herein, there will be
a discussion of the importance of using technology to develop repositories for
linking neuropsychological assessment results with data from neuroimaging, psy-
chophysiology, and genetics.
Chapter 2
Ecological Validity
1 Introduction
Over the past twenty years, neuropsychology has experienced a shift in assessment
from lesion localization to assessment of everyday functioning (Hart and Hayden
1986; Heaton and Pendleton 1981; Long 1996; Manchester et al. 2004). Clinical
neuropsychologists are increasingly being asked to make prescriptive statements about
everyday functioning (Chaytor and Schmitter-Edgecombe 2003; Gioia and Isquith
2004; Olson et al. 2013; Rabin et al. 2007). This new role for neuropsychologists has
resulted in increased emphasis upon the ecological validity of neuropsychological
instruments (Chaytor et al. 2006). As a result, neuropsychologists have been experi-
encing a need to move beyond the limited generalizability of results found in many
earlier developed neuropsychology batteries to measures that more closely approxi-
mate real-world function. A difficult issue facing neuropsychologists interested in
assessment of real-world functioning is the question of what constitutes an ecologically
valid assessment. In the psychology literature, the term “ecological validity” was introduced by Egon Brunswik to describe the relation between proximal perceptual cues and the distal environmental properties they signal. As will be discussed later in this chapter, there is also potential for improving our understanding of ecological validity via the inclusion of the interplay of “cold” cognitive processing of relatively abstract, context-free information and “hot” cognitive processing involved when emotionally laden information is present.
Much of the early interest in ecological validity concerned predicting how patients would function when returning to everyday life in a community. While there was a great deal of con-
troversy over whether cognitive assessment should emphasize standardized,
laboratory-based methods, or more observational and naturalistic assessment
practices (e.g., Banaji and Crowder 1989; Conway 1991; Neisser 1978), the debate
has since subsided (deWall et al. 1994).
An issue that came up during the discussion of cognitive assessment of everyday
functioning was the specificity of many of the skills needed for activities of daily
living. Given the great variability among the skills needed for various daily
activities, there may not be enough similarity available to allow for an adequate
study of the skills (Tupper and Cicerone 1990, 1991). Williams (1988) suggested
that neuropsychologists interested in ecological validity need to define clusters of
skills needed for a given task relative to whether the skill is used in many tasks
across environments (i.e., generic) or used in new tasks in a limited number of
environments (i.e., specific). According to Williams, this would allow for the use of
traditional testing procedures for assessing skills that are used in many tasks across
environments. However, functionally based test measures would need to be
developed for skills used in new tasks in a limited number of environments. The
work of Williams and others prompted a need for a more refined definition of
ecological validity for the theory and praxes of clinical neuropsychology.
In the Wisconsin Card Sorting Test (WCST), the participant is asked to match response cards to a set of target cards. Although participants are not told how to match the cards, they are
informed whether a particular match is correct or incorrect. It is important to note
that the WCST (like many paper-and-pencil tests in use today) was not originally
developed as a measure of executive functioning. Instead, the WCST was preceded
by a number of sorting measures that were developed from observations of the
effects of brain damage (e.g., Weigl 1927). Nevertheless, in a single study by
Brenda Milner (1963), patients with dorsolateral prefrontal lesions were found to
have greater difficulty on the WCST than patients with orbitofrontal or nonfrontal
lesions. However, the majority of neuroimaging studies have found activation
across frontal and nonfrontal brain regions and clinical studies have revealed that
the WCST does not discriminate between frontal and nonfrontal lesions (Nyhus and
Barcelo 2009; Stuss et al. 1983). Further, while data from the WCST do appear to
provide some information relevant to the constructs of “set shifting” and “working
memory,” the data do not necessarily offer information that would allow a neu-
ropsychologist to predict what situations in everyday life require the abilities that
the WCST measures.
Burgess et al. (2006) suggest that future development of neuropsychological
assessments should result in tests that are “representative” of real-world “func-
tions” and proffer results that are “generalizable” for prediction of the functional
performance across a range of situations. According to Burgess et al. (2006) a
“function-led approach” to creating neuropsychological assessments will include
neuropsychological models that proceed from directly observable everyday
behaviors backward to examine the ways in which a sequence of actions leads to a
given behavior in normal functioning; and the ways in which that behavior might
become disrupted. As such, they call for a new generation of neuropsychological tests
that are “function led” rather than purely “construct driven.” These neuropsycho-
logical assessments should meet the usual standards of reliability, but discussions of
validity should include both sensitivity to brain dysfunction and generalizability to
real-world function.
A number of function-led tests have been developed that assess cognitive func-
tioning in real-world settings. For example, Shallice and Burgess (1991) developed
the multiple errands test (MET) as a function-led assessment of multitasking in a
hospital or community setting. The participant performs a number of relatively simple
but open-ended tasks (e.g., buying particular items, writing down specific infor-
mation, traveling to a specific location) without breaking a series of arbitrary rules.
The examiner observes the participant’s performance and writes down the number
and type of errors (e.g., rule breaks, omissions). The MET has been shown to have
increased sensitivity (over construct-driven neuropsychological tests) to elicit and
detect failures in attentional focus and task implementation. It has also been shown
to be better at predicting behavioral difficulties in everyday life (Alderman et al.
2003). However, there are a number of unfortunate limitations for the traditional
MET that are apparent in the obvious drawbacks to experiments conducted in
real-life settings. Function-led neuropsychological assessments can be time-consuming and costly, may require transportation and consent from local businesses, and are difficult to replicate or standardize across settings (Logie et al. 2011;
Rand et al. 2009). Further, there are times when function-led assessments in
real-world settings are not feasible for participants with significant behavioral,
psychiatric, or mobility difficulties (Knight and Alderman 2002).
In summary, early discussions of verisimilitude in neuropsychology emphasized
that the technologies current to the time could not replicate the environment in
which the behavior of interest would ultimately take place. Today, most neu-
ropsychological assessments continue to represent outdated technologies and static
stimuli that are yet to be validated with respect to real-world functioning. While
much of the early discussion of ecological validity reflected an emphasis upon
veridicality and verisimilitude, Burgess and colleagues (2006) have updated the
discussion to include differentiating of construct-driven assessments from
function-led neuropsychological assessments.
Over the past 30 years, there has been growing concern in the field of neuropsy-
chology about the ecological validity of neuropsychological tests. While this con-
cern often takes the prosaic form of elaborating the cognitive tasks with the surface
features of real-life situations, little is done to adjust the assessments to measure
real-world adaptive decision making. Goldberg and Podell (2000) contend that
neuropsychological assessments need more than simple cosmetic changes to rep-
resent real-life situations. Instead, there is a more fundamental issue that the majority
of neuropsychological assessments focus upon various aspects of veridical decision
making and neglect the reality that real-life veridical decision making is merely a
tool subordinate to adaptive decision making (Goldberg and Podell 1999). Given
that many existing neuropsychological tests assess narrow veridical rather than real-world adaptive (what Goldberg (2012) also calls “agent-centered”) decision making, the neuropsychologist may not be collecting the data needed to document the full picture of the patient’s cognitive strengths and weaknesses.
The distinction made by Goldberg and colleagues (1999, 2000, 2005, 2009, 2012)
includes a dichotomy between “veridical” and “agent-centered” cognition. By
“veridical,” Goldberg means that cognition is directed at solving problems characterized by agent-independent choices that are fundamentally “true” or “false.” These choices range from simple (e.g., 2 + 2 = ?) to complex (what day of the week will September 11, 3001, fall on?). This agent-independent decision making is contrasted with the sorts of agent-centered and “adaptive” decisions that occur in real life. Such agent-centered decisions range from simple (e.g., choosing from a restaurant menu) to
decisions that will impact the agent’s life (e.g., career decisions). For these
agent-centered decisions, the dichotomous “true–false” metric does not apply because
asserting that salad is an intrinsically correct choice and soup is an intrinsically false
choice is self-refuting. Of course, this also holds for life decisions like the assertion
that a doctoral degree in “clinical neuropsychology” is an intrinsically correct choice
and one in “engineering” is an intrinsically false choice (Goldberg et al. 2012).
While this distinction is often underemphasized in clinical neuropsychology, it is
central to understanding the nature of the patient’s decision making. This is espe-
cially true when making decisions about a patient’s competency for making car-
dinal decisions—such decisions are agent-centered, while veridical cognition serves
a supportive role. Unfortunately, the vast majority of cognitive paradigms used in
neuropsychological assessment are notoriously devoid of appropriate tools to study
“agent-centered” cognition. Much of this is due to the traditional focus on
assessment of veridical aspects of cognition in highly contrived, artificial, labora-
tory situations. This explains why many purported cognitive measures have noto-
riously poor ecological validity and why patients with prefrontal lesions have
real-life problems even though they do well on neuropsychological tests purported
to assess prefrontal functions (Sbordone 2010).
It is interesting to note that although cognitive neuroscience researchers have
begun investigating various innovative paradigms that depart to various degrees from conventional laboratory constraints, the prevailing assumption has been that cognition can be studied relatively easily in the laboratory whereas affect and motivation require assessment of real-world activities.
motivation require assessment of real-world activities. In recent years, cognitive
neuropsychology has witnessed a resurgence of interest in (1) going beyond the
artificial situation of the laboratory to assessments that reflect the everyday world
(Neisser 1982); and (2) bringing together studies of cognition, emotion, and
conation (Masmoudi et al. 2012). In fact, research on emotions is increasingly
found in the literature: affective neuroscience (Adolphs et al. 2002; Ledoux 1996),
neuroscience of psychopathology (Kohler et al. 2010), and clinical neuropsychol-
ogy (Stuss and Levine 2002).
While the issue of laboratory versus real-world assessment was debated in the 1980s (see above), the latter issue of bringing together studies of cognition,
emotion, and conation is something that is increasingly being discussed in terms of
affective neuroscience (Panksepp 1998). Although the terms “cognition,” “motivation,” and “emotion” serve as important drivers for theory development and praxes in behavioral neuroscience research, the term “cognition” has been increasingly
overused and misused since the cognitive revolution. According to Cromwell and
Panksepp (2011), this has resulted in deficient development of a usable shared
definition for the term “cognition.” They argue that this deficiency raises concerns
about a possible misdirection of research within behavioral neuroscience. For
Cromwell and Panksepp, the emphasis upon top-down (cortical → subcortical)
perspectives tends to dominate the discussion in cognitive-guided research without
concurrent noncognitive modes of bottom-up developmental thinking. They believe
that this could hinder progress in understanding neurological and psychiatric dis-
orders. As an alternative, they emphasize inclusion of bottom-up (subcortical →
cortical) affective and motivational “state-control” perspectives. The affective
neuroscience approach represents a more “embodied” organic view that accepts that
cognitions are integrally linked to both our neurology as well as the environments
in which we operate (Smith and Gasser 2005; Panksepp 2009, 2010).
The affective neuroscience critique that top-down perspectives tend to dominate
the discussion in cognitive-guided research is readily applicable to the contempo-
rary approach to neuropsychological assessment. Although cognitive-based
understandings of brain–behavior relationships have grown in recent decades, the
neuropsychological understandings of emotion remain poorly defined (Suchy
2011). While current approaches to neuropsychological assessment aid our
understanding of cognitive conflict, everyday activities commonly come in the form
of emotional distractors. Social and affective neuroscience studies have found that
affective stimuli are particularly potent distracters that can reallocate processing
resources and impair cognitive (e.g., attention) performance (Dolcos and McCarthy
2006; Pessoa 2008). Affective responses to emotional distractors may be under-
stood as multimodal events in response to a stimulus that has particular significance
for the participant, often signifying a potential threat or reward. Enhanced understanding of
the effect of threatening stimuli upon executive functions has important implica-
tions for affective disorders (e.g., specific phobias, depression, and post-traumatic stress disorder) that are characterized by increased susceptibility to affective distraction (Ellis and Ashbrook 1988; Wang et al. 2008).
Increased arousal may impact the processing of salient information and enhance the
contrast between stimuli with different levels of salience (Critchley 2005).
The distinction of cognitive processes in this dual pathway approach has simi-
larities to a neuropsychological subdivision of cognitive control (Zelazo et al.
2003). Zelazo et al. (2003) differentiate between “cold” cognitive control (the
executive dysfunction pathway) and “hot” affective aspects of cognitive control (the
motivational dysfunction pathway). In a similar fashion, Nigg (2000, 2001) dis-
tinguishes between behavioral inhibition (i.e., response inhibition) and motivational
inhibition (i.e., personality and motivation). In Sonuga-Barke’s (2003) neuropsy-
chological research, these different aspects of inhibition have been shown to be
related to different brain networks. The distinction between “cold” cognitive rea-
soning and “hot” affective processing has been studied in decision neuroscience.
While “cold” cognitive processing tends to be relatively logic-based and free from
much affective arousal, “hot” affective processing occurs in the face of reward and
punishment, self-regulation, and decision making involving personal interpretation
(Ardila 2008; Brock et al. 2009; Chan et al. 2008; Grafman and Litvan 1999;
Happaney et al. 2004; Seguin et al. 2007). A number of studies have found that
impairments in either the “cold” or “hot” cognitive functions may be related to
deficits in everyday functioning (e.g., independence at home, ability to work, school attendance, and social relations; Goel et al. 1997; Grafman et al. 1996; Green 1996; Green et al. 2000; see Chan et al. 2008 for review).
The idea of “hot” decision making is consistent with the somatic marker
hypothesis (Bechara 2004; Bechara and Damasio 2005; Bechara et al. 1997, 1998;
Damasio 1996). Damasio (1994, 1996) suggested a somatic marker hypothesis
approach to decision making, in which the experience of an emotion (e.g., “gut
feeling” and “hunch”) results in a “somatic marker” that guides decision making.
According to Damasio (1994), the somatic marker is hypothesized to play a role in
“hot” decision making in that it assists the “cold” decision making process by
biasing the available response selections in a complex decision making task.
According to the somatic marker hypothesis, when persons are faced with decisions,
they experience somatic sensations (i.e., somatic markers) that occur in advance of
real consequences of possible different alternatives (Bechara et al. 1997). These
somatic markers act as affective catalysts for decision making, in which distinct
alternatives are evaluated via somatic sensations that guide adaptive decision making
(Damasio 1996). The ventromedial prefrontal cortex (VMPFC) and its limbic system
connections are considered key structures in the somatic marker hypothesis and the
decision making process (Bechara et al. 1997, 1998). However, the neuropsycho-
logical assessment of the VMPFC remains somewhat enigmatic as patients tend to
have both an appearance of normality on most neuropsychological tests and also
problems in their everyday lives (Zald and Andreotti 2010).
The somatic marker hypothesis was originally proposed to account for a sub-
group of patients with VMPFC (i.e., orbitofrontal) lesions who appeared to have
intact cognitive processing, but had lost the capacity to make appropriate life
decisions. According to Damasio (1994), they had lost the ability to weigh the
positive and negative features of decision-based outcomes. The Iowa gambling task
(IGT; Bechara et al. 1994; Bechara 2007) was developed to assess these patients
and is increasingly being accepted as a neuropsychological measure of affect-based
decision making (Bowman et al. 2005). The IGT is a computerized assessment of
reward-related decision making that measures temporal foresight and risky decision
making (Bechara et al. 1994; Bechara 2007). During IGT assessment, the patient is
instructed to choose cards from four decks (A–D). Selection of each card results in
on-screen feedback regarding either a “gain” or “loss” of currency. In the four
decks, there are two advantageous (C and D) decks that result in money gained
($250 every 10 cards) and low monetary loss during the trial. The other two decks
(A and B) are disadvantageous and involve greater wins (around $100 each card)
than C and D (around $50) but also incur greater losses, meaning that one loses
$250 every 10 cards in Decks A and B. The primary dependent variables derived from the IGT are the overall net score ([C + D] − [A + B]), the block score (the same [C + D] − [A + B] computed for each block of 20 cards), the frequency of deck choices, and a classification of spared versus impaired performance according to a net-score cutoff of −10 (Bechara et al. 2000), especially in brain-damaged subjects.
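As an illustration of how these dependent variables can be computed, the following minimal Python sketch (not from the book; the function name and return format are illustrative assumptions) derives the net score, block scores, and deck frequencies from an ordered list of deck selections:

    def igt_scores(selections, block_size=20, cutoff=-10):
        """Score an IGT session from an ordered list of deck labels 'A'-'D'.

        Net score = (choices of C + D) - (choices of A + B); the same
        quantity is computed for each block of cards, and performance is
        classified against the -10 cutoff described in the text.
        """
        def net(cards):
            advantageous = sum(c in ('C', 'D') for c in cards)
            disadvantageous = sum(c in ('A', 'B') for c in cards)
            return advantageous - disadvantageous

        block_scores = [net(selections[i:i + block_size])
                        for i in range(0, len(selections), block_size)]
        deck_frequencies = {d: selections.count(d) for d in 'ABCD'}
        total_net = net(selections)
        return {'net': total_net, 'blocks': block_scores,
                'frequencies': deck_frequencies,
                'impaired': total_net < cutoff}

For example, a participant who drew 60 cards from the disadvantageous decks and 40 from the advantageous decks over 100 trials would receive a net score of 40 − 60 = −20 and would be classified as impaired under the −10 cutoff.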
Neuroimaging studies of persons performing the IGT have revealed activation in
the orbitofrontal cortex (Ernst et al. 2002; Grant et al. 1999; Windmann et al. 2006),
which appears to be significant for signaling the anticipated rewards/punishments of
an action and for adaptive learning (Schoenbaum et al. 2011). Evidence for the
somatic marker hypothesis’s role in hot decision making over IGT trials can be
found in the demonstration of an anticipatory electrodermal response in healthy
controls to card selection (Bechara et al. 1996, 1997). For example, prior to
selecting a card from a risky deck, a healthy control will show a physiological
reaction indicating that the participant is experiencing bodily the anticipated risk.
Further, studies have shown that damage to vmPFC (part of the orbitofrontal cortex)
and the amygdala prevents the use of somatic (affective) signals for advantageous
decision making (Bechara et al. 1996, 1998; Bechara et al. 2000). It is noteworthy
that there are different roles played by the ventromedial prefrontal cortex (vmPFC)
and amygdala in decision making. While vmPFC patients were able to generate
electrodermal responses when they received a reward or a punishment, amygdala
patients failed to do so (Bechara et al. 1999). These findings have been supported in other studies that have found positive correlations between the development of anticipatory
skin conductance responses and better performance on a similar gambling task
(Crone et al. 2004; Carter and Pasqualini 2004).
It is important to note that alternative explanations of Bechara’s findings have
been posited. Tomb et al. (2002) suggested that the anticipatory responses are
related to the belief that the risky choice will probably produce a large reward—
higher immediate short-term benefits of the risky decks ($100 versus $50).
According to Tomb et al. (2002), the anticipatory SCR effect is unrelated to any
long-term somatic marker mechanism. Nevertheless, Tomb et al.’s (2002) account
does not readily explain deficient performance in ventromedial PFC patients: these patients fail to develop an anticipatory response to the decks with immediate short-term benefits, yet they continue to prefer these decks throughout the task (Clark and
Manes 2004).
While the IGT may have potential for assessment of “hot” affective processing
(Baddeley 2011), there have been failures to replicate the initial studies (Hinson
et al. 2002, 2003). Whereas data from Bechara et al.’s (1998) early studies sug-
gested normal performance of patients with dorsolateral prefrontal lesions, a
number of later studies indicate significant effects of lesions that include either
dorsolateral or dorsomedial prefrontal cortex regions (Manes et al. 2002; Clark et al. 2003). Further, researchers have argued that the IGT is deficient for understanding the affective impact of emotional stimuli upon cognitive processing because (1) the observed effects on the IGT may simply reflect the cognitive (not affective) demands of such a complex decision task (Hinson et al. 2002, 2003); and (2) the
IGT is more of a learning task (Baddeley 2011), whereas a true assessment of
affective impact upon cognitive processing requires a measure of the capacity to
evaluate existing valences (i.e., positive, negative, and neutral). In a similar manner,
Fellows and Farah (2005) have suggested that an elemental deficit in reversal
learning (instead of deficit in decision making) may better explain the VMPFC
lesion patients’ selections of disadvantageous and risky cards on the IGT. Evidence
for this is indicated by improved performance when the initial bias favoring the
disadvantageous decks is removed by reordering the cards. Hence, while insensi-
tivity to risk is often used to explain poor performance on the IGT, the learning and
explicit reversal components of the IGT may better explain what the IGT is
actually tapping. Further, like other cognitive measures, the IGT was created to
assess the construct of decision making in a laboratory setting, but it remains to be
seen whether a relation between performance on the IGT and real-world decision
making exists (Buelow and Suhr 2009).
7 Conclusions
Most neuropsychological assessments focus upon veridical decision making and neglect the reality that real-life veridical decision making is merely a tool subordinate to adaptive decision making. A recent focusing of the discussion by Burgess
and colleagues (2006) emphasizes the need for “function-led approaches” to
neuropsychological models and assessments. While these approaches have been
gaining in popularity, there are a number of unfortunate limitations that are apparent
in the obvious drawbacks to experiments conducted in real-life settings.
Function-led neuropsychological assessments can be time-consuming and costly, may require transportation and consent from local businesses, and are difficult to
replicate or standardize across settings. Further, there are times when function-led
assessments in real-world settings are not feasible for participants with significant
behavioral, psychiatric, or mobility difficulties. There is also a growing interest in
the interplay of “cold” cognitive processing (linked to dorsal and lateral regions of
the prefrontal cortex) of relatively abstract, context-free information, and “hot”
cognitive processing (linked to the functioning of the orbitofrontal cortex) involved
when emotionally laden information is present. Unfortunately, very little has been
done to incorporate affective components into neuropsychological assessment.
In the chapters that follow, there will be a review of these issues for each of the three
waves found in theoretical formulations of neuropsychological assessment.
Throughout, there will be emphasis upon the ways in which both paper-and-pencil
and computer-automated neuropsychological assessments reflect “construct-driven”
and “function-led” approaches to neuropsychological assessment. A preference is
emphasized for “function-led” approaches to models and assessments that proceed
backward from a directly observable everyday behavior to measure the ways in
which a set of actions lead to a given behavior in normal and disrupted processing.
Further, there is discussion of the potential for improving our understanding of
ecological validity via the inclusion of the interplay of “cold” cognitive processing
of relatively abstract, context-free information, and “hot” cognitive processing
involved when emotionally laden information is present.
Part II
Evolution of Neuropsychological
Assessment
Chapter 3
Neuropsychological Assessment 1.0
2 Neuropsychology’s Prehistory
Neuropsychology’s prehistory includes Franz Joseph Gall’s phrenology, which held that the topography of the skull reflects patterns of brain development corresponding to areas responsible for specific psychological func-
tions. While some may argue that phrenology’s description of anatomical local-
ization of function within the brain qualifies it as the first manifestation of a
neuropsychological theory (see for example Miller 1996), the essential points of
Gall’s doctrine (with the exception of his emphasis upon the cortex) have been
shown to fail as a neuropsychological theory and its hypotheses both about brain
development and its reflection in scalp topography were ultimately to be dismissed
(see Heeschen 1994 for a discussion).
According to Cubelli, Wernicke’s (1874) work was more theoretically relevant to the
development of neuropsychology in that Wernicke proposed a neuropsychological
model that included psychological (i.e., levels of processing underlying oral repe-
tition), neurological (i.e., cortical and subcortical pathways associated with each
processing stage), and clinical (i.e., symptoms following circumscribed brain
lesions) descriptions of persons performing a simple task of oral repetition. The
result of Wernicke’s multilevel formulation is a template for the development of
future neuropsychological models.
Gelb and Goldstein (1920) developed a sorting task in the examination of a patient, Th., who suffered from color amnesia. The test was based on the Holmgren test for
color blindness. In the first condition of this test, the patient was asked to select one
string from a collection of colored strings and then select those strings that were
similar to the first one. In the second condition of the test, the examiner presented
three strings. While the left and middle string matched in color, the right and middle
string matched in brightness. The examiner instructed the patient to indicate which
string matched the middle one. In a third condition of the test, the examiner pre-
sented two rows of six strings. One row varied from light to dark red. In a second
row, the strings had the same clarity, but varied in color. The examiner instructed
the patient to select the strings that matched each other. In a fourth condition of the
test, the examiner instructed the patient to formulate the reason for her or his
responses. Goldstein observed that patients performed the task very differently than
healthy participants. His patients tended to take a “concrete” attitude and look at
individual objects. Contrariwise, his healthy participants maintained the ability of
“abstract attitude” in which features may be abstracted and concepts may be chosen
to structure and organize perceptions. It is important to note that there were no
quantitative scoring procedures during this time. Goldstein believed that the specific
attitude could not be expressed in a single test score. Instead, the examiner had to
observe how the patient performed the tasks.
Over the years, Goldstein refined his assessment of the loss of abstract attitude and
the ability to formulate concepts following brain injury to assess deficits in abstract
behavior (Goldstein and Scheerer 1941). A variant developed by Weigl, the Gelb–Goldstein–Weigl–Scheerer Sorting Test, consisted of a set of common objects used in daily life activities. The patient was instructed to sort these into
different groups. For example, the patient was to sort the objects according to color,
material, or usage. Subsequently, the patient’s ability to shift set was assessed by asking her or him to sort the items according to a new criterion (Weigl 1942). In the Weigl–Goldstein–Scheerer Color Form Sorting Test, the
patient was assessed on her or his ability to sort geometrical objects (e.g., squares,
triangles, and circles) that consisted of the colors red, green, blue, and yellow
according to the form or color. Following his emigration to the USA, Goldstein
continued to use this task in his clinical work (Bolles and Goldstein 1938).
In summary, Goldstein’s work on abstract reasoning and categorical thinking has
had a noteworthy impact upon neuropsychology (Goldstein 1990). Goldstein’s
“abstract attitude” refers to something that is lost in a patient following brain injury,
as suggested by the tendency to focus on physical (i.e., concrete attitudes) and not
conceptual (abstract) properties of their environments (Goldstein and Scheerer
1941). Goldstein’s formulation of abstract versus concrete attitudes laid the foun-
dation for the ways in which abstractions play an important role in automatic and
controlled processing. He was one of the first neurologists to explore higher
executive functions in everyday life, such as the abilities to establish and maintain
particular cognitive sets, shift cognitive sets, and abstract from the properties in a
test situation the needed rules for decision making (Goldstein 1936).
A.R. Luria produced a body of research that facilitated the subsequent differentiation and integration of basic and
clinical research in neuropsychology (Christensen 1975). Luria obtained an Ed.D
(doctor pedagogicheskikh nauk) in 1936 and an MD in 1943 at the Moscow
Medical Institute. Luria was influenced by Vygotsky’s cultural–historical approach
to cognitive functioning, in which perceptual, attentional, and memory processing
are converted into socially structured cognitive functions through learning. As a
result, Luria, like Goldstein and Vygotsky, investigated the symbolic operations
underlying the highest mental functions (Akhutina 2003).
The work of A.R. Luria is a good example of a neuropsychological evaluation
derived from a neurological approach. In Luria’s clinical evaluations, he employed
a clinical–theoretical approach that primarily involved a qualitative approach (Luria
1966/1980). According to Luria, performance on a neuropsychological assessment
reveals patterns or “functional systems” of interacting neurological areas that are
responsible for a given behavior. Each neurological area of the brain participates in
numerous such functional systems. Luria aimed to show that the cognitive sequelae
of a brain injury reflect an interruption to the execution of any functional system
that includes the injured area(s). A great strength of Luria’s flexible methodology
is that it developed from his clinical experiences with thousands of patient evalu-
ations (Luria and Majovski 1977; Majovski and Jacques 1980). As is often the case, the greatest strength of a clinical neuropsychological approach can also be its weakness. The most significant flaw in the original Luria battery is a lack
of standard administration and scoring that has precluded an assessment of its
validity. However, there have been attempts to overcome these deficiencies by
developing an objective form, combining Luria’s procedures with the advantages of
a standard test battery (i.e., Luria-Nebraska Neuropsychological Battery, also
known as LNNB; Golden et al. 1980).
Donald Hebb provided an early integration of the existing literature in support of the regional
localization doctrine. He also proffered descriptions of neural network assemblies
and their functional relationships that incorporated elements of both localizationistic
and mass-action doctrine into the analysis of the behavioral consequences of brain
lesions (Meier 1992; further development of this synthesis can be attributed to Luria
1966). Hebb’s behavior–cognition integration helped to counterbalance strict
behaviorism and stimulated the rise of physiological psychology and psychobiol-
ogy. Hebb’s formulations gave “mental events” a neural basis and opened the door
to a future possible cognitive neuropsychology (Hebb 1981).
Eventually, Hebb teamed up with Wilder Penfield at McGill University for their
pioneering research into the effects of removal of prefrontal or anterior/mesial
temporal tissue for intractable focal seizure disorders (Hebb and Penfield 1940).
During this time, a number of influential proto-neuropsychologists visited Hebb and
Penfield. For example, Henry Hécaen’s time there had a profound influence on his
future development and he named his laboratory “Groupe de Neuropsychologie et
de Neurolinguistique” in the 1960s (Boller 1998). While at McGill, Hebb took on
Brenda Milner as a PhD candidate in psychophysiology. Milner was also able to
work with Penfield to study the behavior of epileptic patients treated with focal
ablation of brain tissue. Milner is considered a pioneer in clinical neuropsychology
and is noted for convincing her colleagues of the importance of collecting data on
the cognitive capacities of patients undergoing unilateral cerebral excisions. In her
early studies, she contrasted the effects of left and right temporal lobectomy, which
led to the establishment of research into hemispheric specialization (Berlucchi
et al. 1997; Jones-Gotman et al. 1997).
Like Goldstein (see above), Milner used a sorting task to investigate the effect of
different brain lesions on frontal lobe functioning. Milner used the Wisconsin Card
Sorting Test (WCST). The WCST was developed by Grant and Berg (1948) as an
extension of the Weigl-Goldstein card-sorting task. Of note, the WCST had the
advantage of tapping into both quantitative and qualitative aspects of abstract
reasoning, concept formation, and response strategies to changing contextual
contingencies. The original WCST consisted of a set of 60 response cards. Each card displayed one to four duplicates of a pattern (e.g., stars, crosses, triangles, or circles), all printed in a single color (e.g., red, yellow, green, or blue). Participants were
instructed to place each card under one of four key cards and to infer the sorting
principle following feedback (correct, incorrect) from the examiner. The examiner
scored each response in terms of errors, latency, degree of perseveration, and
capacity to shift (Eling et al. 2008). Milner introduced the WCST as a neuropsy-
chological assessment, and it has become a standard clinical tool for the diagnosis
of executive dysfunction (Milner 1963).
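Because the WCST’s core logic (uninstructed sorting, trial-by-trial feedback, and a covert rule shift once a category is achieved) translates naturally into software, computerized administrations are straightforward to construct. The following is a minimal sketch in Python; it is a simplification for illustration rather than the published test specification, and the card representation, callback, and ten-correct shift criterion are assumptions:

    import random

    DIMENSIONS = ('color', 'form', 'number')

    def administer_wcst(cards, choose_key_card, run_length=10):
        """Simplified WCST loop: give correct/incorrect feedback on each sort
        and covertly switch the sorting rule after `run_length` consecutive
        correct matches (one 'category' achieved).

        cards: iterable of dicts, e.g. {'color': 'red', 'form': 'star', 'number': 3}
        choose_key_card: callable taking a card and returning the key card
            (same dict format) under which the participant sorts it.
        """
        rule = random.choice(DIMENSIONS)
        streak = errors = categories = 0
        for card in cards:
            key = choose_key_card(card)
            if key[rule] == card[rule]:  # matches the hidden rule
                streak += 1
                if streak == run_length:  # category achieved: shift the rule
                    categories += 1
                    rule = random.choice([d for d in DIMENSIONS if d != rule])
                    streak = 0
            else:
                errors += 1  # feedback to participant: incorrect
                streak = 0
        return {'errors': errors, 'categories': categories}

A full implementation would additionally distinguish perseverative from nonperseverative errors, the scoring distinction central to Milner’s findings discussed below.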
Milner’s (1963) study of eighteen patients with epileptogenic foci in the
dorsolateral prefrontal cortex (dPFC) found that they committed more perseverative
errors than patients with orbitofrontal cortex (OFC), temporal, or parietal damage.
No significant differences were found across clinical groups for nonperseverative
errors. According to Milner, the fewer number of achieved categories in patients
with dPFC was due to their perseverative tendencies instead of their tendency to be
distracted (i.e., to nonperseverative errors). Although these seminal findings and interpretations have had a huge impact on expected patterns of neuropsychological performance for patients with prefrontal lesions, many neuropsychologists now question the anatomical specificity of predictions derived from performance on the paper-and-pencil WCST. To a large extent, this is a result of theoretically ill-posed approaches to localizing the brain region ultimately responsible for correct WCST
performance (Nyhus and Barceló 2009). Further, in its original paper-and-pencil
format, the traditional WCST has been argued to be ill-suited to proffer accurate
descriptions of the type and severity of cognitive deficits or for localizing lesions
responsible for those deficits (Mountain and Snow 1993). This points to the need
for caution when building cognitive models from these older experimental designs.
Specifically, they lacked precision in spatial and temporal sampling of brain acti-
vations and may have limited the formulation of integrative formal models of
prefrontal executive functions.
3.4 Summary
A major shift in testing occurred when Wechsler applied testing procedures (i.e.,
group and individual) developed for normal functioning persons to the construction
of a clinical test battery. Following World War I, Wechsler assembled the 1939
Wechsler-Bellevue battery, which included both verbal and performance scales.
4 Development of the Quantitative Test Battery
While standardized intelligence tests such as the Wechsler and Binet scales had
growing clinical utilization, a number of investigators were interested in developing
quantitative clinical psychometric procedures beyond these standardized intelligence
tests. One reason was the relative insensitivity of intelligence tests to the behavioral
consequences of many brain lesions (e.g., prefrontal executive functions). The
earliest adjunctive set of measures was the Halstead Neuropsychological Battery. It is interesting to note that Halstead was not primarily interested in clinical application (Goldstein 1985; Reed 1985). Instead, his work was concerned with distinguishing between intellectual components reflecting learned versus innate capacities. Halstead's distinction between these components is similar to Hebb's
(1949) “Intelligence A” and “Intelligence B” and to Cattell’s (1963) “crystallized”
and “fluid” abilities. During this time, there was an increasing desire to establish an
empirical base for a more scientific approach to neuropsychological assessment.
Halstead’s student, Ralph Reitan, introduced some modifications to the battery to
develop a composite (i.e., Halstead Impairment Index) of outcomes defined by group
differences on each of 10 subtests (Reitan 1964). Through a process of clinical and
experimental evaluation, seven of the ten survived and were developed into the
primary components of the HRB and Halstead Impairment Index. Reitan went on to
supplement Halstead’s seven tests with numerous other clinical procedures to
enhance the clinical usefulness of evaluations. The HRB offered a battery of
objective measures that could be used for diagnosis and categorization of brain
damage. The Halstead–Reitan Battery was developed specifically to detect “organic”
dysfunction and differentiate between patients with and without brain damage (e.g.,
to distinguish "organic" from "functional" disorders). Over the years, tests have been designed in concert with evolving information regarding the mediation of behavior by specific structures or circuits, providing greater insight into the integrity or disintegration of neurologic function. Extensive experience with these instruments provides a basis for interpreting the tests in neurologic terms.
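The Impairment Index itself is a simple composite: the proportion of component measures whose scores fall in the impaired range, with values at or above the conventional 0.5 cutoff read as suggesting impairment. A minimal sketch in Python, assuming the per-test cutoff judgments have already been made (the example counts are hypothetical):

def impairment_index(impaired_flags):
    """impaired_flags: one boolean per component test, True when the
    score fell in the impaired range according to that test's cutoff."""
    flags = list(impaired_flags)
    return sum(flags) / len(flags)

# Hypothetical profile: 4 of the 7 component measures impaired.
print(round(impairment_index([True, True, True, True,
                              False, False, False]), 2))  # 0.57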
In addition to being one of the first battery approaches, the Halstead–Reitan
Battery is commonly referred to today as a “fixed battery” approach because test
selection is “fixed” irrespective of the patient’s presenting problem. Hence, the
Halstead–Reitan Battery represents a comprehensive battery of tests that are
administered to each patient in a standardized manner. Advocates of the Halstead–
Reitan Battery argue that fixed batteries facilitate the following: (1) comparison of
test scores across patient groups and settings (Hom 2003); (2) inclusion of tech-
nicians in test administration (Russell 1995, 1998); and (3) development of datasets
for research studies (Sweeney et al. 2007). While the HRB has become widely used
in neuropsychological assessment (Hom 2003; Ross et al. 2013, 2014), there has
been some debate related to the amount of administration time, cost, and potential
for excessive testing sessions that may be difficult for patients to tolerate (Lezak
1995; Russell 1998).
6 Contemporary Development of the Flexible Test Battery
An early precursor to the flexible battery approach can be found in the work of
Arthur Benton (Benton 1964, 1969). Benton has had a great influence on the
development of clinical neuropsychology and his work bridges behavioral neu-
rology, cognitive psychology, neurolinguistics and aphasiology, developmental
psychology, clinical psychology, and experimental/physiological psychology
(Meier 1992). While a member of the neurology department at Iowa, Benton
developed procedures for assessing visuospatial judgment, visuoconstructional
praxis, facial recognition, auditory perception, serial digit learning, cerebral dom-
inance, developmental learning disorders, and language behavior (Costa and Spreen
1985). Benton’s flexible battery approach is a hypothesis-driven approach to
standardized measurement of higher brain functions in patients with known or
suspected brain disease. Benton’s methods have now evolved into the Iowa-Benton
method, in which quantitative measurements are considered key for assessing
domains of cognition and behavior, in a time-efficient manner. According to Tranel
(2009), the Iowa-Benton flexible battery approach maintains a close link to neu-
roanatomy and the findings from neuropsychological assessments are both
informed by and inform findings from neuroimaging and physiological recordings.
Development of the flexible battery has resulted in what is now called the "process"
approach, which resulted from work by neuropsychologists in Australia (Walsh
1987), Denmark (Christensen 1979), and the USA (Kaplan 1988). The process
approach extends early formulations of the flexible battery approach via increased
emphasis upon standardization. For example, the qualitative information gleaned
from analyses of behavior is quantified and subjected to psychometric analyses. As
a result, the process approach allows for greater assessment of clinical limits in an
operationally defined, repeatable, and quantifiable manner. The emphasis upon
process analyses results from the belief that the resolution to problems presented via
standardized assessment measures may be achieved by various processes, and each
of these may be related to different brain structures. Hence, the ways in which a
given patient responds are viewed as being as important as the patient's response itself.
While there has been a great deal of debate about whether the fixed or flexible
battery approach is the best, most would agree that there are strengths found in
including both qualitative and quantitative information (Rourke and Brown 1986).
Over the years, various procedures have been developed to maximize the use of the
battery of tests. Psychometrics have been applied to test results to maximize the
value of the neuropsychological assessment battery approach by combining infor-
mation, minimizing variability among groups, and grouping patients by behavioral
characteristics (Bezeau and Graves 2001; Maroof 2012; Zakzanis 1998, 2001;
Woods et al. 2003). According to Stuss (2002), factor analyses have allowed for the differentiation of assessment variables into functionally representative groupings and provided methods for examining shared and independent variance to maximize brain–behavior understanding. Further, discriminant function analyses
have been utilized for classification of patients into their proper groups for vali-
dation of the battery approach (Stuss and Trites 1977).
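To illustrate the discriminant function approach, the sketch below fits a linear discriminant classifier to synthetic battery scores for two hypothetical groups; in real validation work the subtest scores, group labels, and a proper cross-validation scheme would come from clinical data rather than a random number generator.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic subtest z-scores: rows are examinees, columns are subtests.
controls = rng.normal(0.0, 1.0, size=(50, 5))
patients = rng.normal(-1.0, 1.0, size=(50, 5))  # hypothetical deficit
X = np.vstack([controls, patients])
y = np.array([0] * 50 + [1] * 50)               # 0 = control, 1 = patient

lda = LinearDiscriminantAnalysis().fit(X, y)
print("in-sample classification accuracy:", lda.score(X, y))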
7 Conclusions
Over the course of the last several decades, clinical neuropsychology has gained
recognition as a discipline with relevance to a number of diverse practice areas
(e.g., neurology, neurosurgery, psychiatry, and family medicine) as well as
neuroscience-specific research areas (e.g., behavior, learning, and individual dif-
ferences). As a result, neuropsychologists tend to apply a working understanding of
psychology, physiology, and neurology to assess, diagnose, and treat patients with
neurological, medical, neurodevelopmental, psychiatric, and cognitive disorders.
A typical neuropsychological assessment examines brain–behavior relations as they
pertain to neurocognitive, affective, and behavioral expressions of central nervous
system dysfunction. Neuropsychologists use specialized assessment procedures to
measure deficits in cognitive functioning, personality, and sensory–motor functions,
and connect the results to specific brain areas that have been affected. Current
approaches to neuropsychological evaluation typically range from an ordinal scale
(i.e., impaired/nonimpaired) for basic sensory and perceptual functioning to an
interval scale (i.e., percentile relative to a normative group) for more complex
functions. The question of normal versus abnormal functioning of the central
nervous system includes not only the assessment of the consequences of trauma and
diseases to the central nervous system, but also the impact of psychiatric conditions
in which central nervous system involvement is assumed but not well defined.
As mentioned above, much of what is now considered part of neuropsycho-
logical assessment originated from attempts of late nineteenth-century and early
twentieth-century physicians to improve evaluation of the cognitive capacities of
persons with brain disease. As such, during a period focusing on localization,
neuropsychologists received referrals from neurosurgeons to psychometrically
localize brain damage. As a result, developed measures were based upon a local-
ization paradigm that focused upon double dissociation—two neocortical areas are
functionally dissociated by two behavioral measures, and each measure is affected
by a lesion in one neocortical area and not the other (Teuber 1955; Pribram 1971).
Given the importance of neuropsychological assessment for lesion localization, it became increasingly important for neuropsychological assessment to have enhanced psychometric rigor. In addition to the reliability and validity issues mentioned
above, this also includes issues of sensitivity and specificity. By sensitivity, neu-
ropsychologists are referring to a test’s ability to detect even the slightest expression
of abnormalities in neurological (primarily central nervous system) function.
Sensitivity is understood as a reflection of the neuropsychological test’s ability to
identify persons with a disorder. This is often referred to as true positive rate. By
specificity, neuropsychologists are referring to the ability of a neuropsychological
test to differentiate patients with a certain abnormality from those with other
abnormalities or with no abnormality. This is often referred to as true negative rate.
A score on any test can be a true positive, false positive, true negative, or false negative. A true positive reflects a test with sufficient sensitivity to detect the dysfunction that is present. A false positive indicates sensitivity to dysfunction but a lack of specificity to the particular dysfunction in question. A true negative reflects high specificity, allowing unaffected cases to be correctly distinguished from affected ones. A false negative indicates a lack of sensitivity, irrespective of the test's specificity.
evaluation, it is important to understand the rates of each of the four categories of
results. The ability to identify brain dysfunction varies greatly among neuropsy-
chological tests and is determined by the fidelity with which the neuropsychological
test distinguishes normal from abnormal function and by the specific type of deficit
that the patient exhibits. The WAIS, for example, has no memory subtests and is
necessarily insensitive to memory-related deficits, whereas it has demonstrated
sensitivity to disorders affecting visuospatial, calculation, and attentional abilities.
In general, tests that are timed, requiring the patient to complete the test in a
specified period, have greater sensitivity to diffuse or multifocal cerebral changes
than untimed tests.
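These four outcome categories reduce to the two rates named above: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). A minimal sketch with hypothetical counts:

def sensitivity_specificity(tp, fp, tn, fn):
    """True positive rate and true negative rate from the four outcomes."""
    sensitivity = tp / (tp + fn)  # proportion of disordered cases detected
    specificity = tn / (tn + fp)  # proportion of unaffected cases cleared
    return sensitivity, specificity

# Hypothetical validation sample: 40 patients (34 detected, 6 missed)
# and 60 controls (51 correctly cleared, 9 falsely flagged).
print(sensitivity_specificity(tp=34, fp=9, tn=51, fn=6))  # (0.85, 0.85)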
It is important to note, however, that with the advent of neuroimaging, the need for
neuropsychologists to localize brain damage has been greatly reduced.
Unfortunately, many neuropsychologists continue to rely on “localization” as the
chief basis for validating neuropsychological tests. As Ronald Ruff has contended,
although neuroimaging caused the role of neuropsychology to shift from localization
to documentation of neuropsychological deficits for prediction of real-world func-
tioning, clinical neuropsychologists many times fail to develop ecologically oriented
assessments and continue to use localizationist-developed test batteries (Ruff 2003).
Although today’s neuropsychological assessment procedures are widely used,
neuropsychologists have been slow to adjust to the impact of technology on their
profession. Two essential limitations have resulted from this resistance to technological
adaptation: First, current neuropsychological assessment procedures represent a
technology that has barely changed since the first scales were developed in the early
1900s (i.e., Binet and Simon’s first scale in 1905 and Wechsler’s first test in 1939).
In order for neuropsychologists to fully embrace the development of new batteries
that take real-world functioning (i.e., ecological validity) seriously, there is a need
for them to move beyond cosmetic changes to standardized tests to computerized
measures. However, neuropsychologists have historically resisted embracing tech-
nological advances in computation. While neuropsychology emphasizes its role as a
science, its technology is not progressing in pace with other science-based tech-
nologies. Second, while the historical purpose of clinical neuropsychology was
differential diagnosis of brain pathology, technological advances in other clinical
neurosciences (e.g., the development of neuroimaging) have changed the
neuropsychologist’s role to that of making ecologically valid predictions about the
impact of a given patient’s neurocognitive abilities and disabilities on everyday
functioning.
Chapter 4
Neuropsychological Assessment 2.0:
Computer-Automated Assessments
Early work with automated testing did not make full use of computers (Space
1981). For example, Gedye (1967, 1968) and Miller (1969) automated the pictorial
paired-associate learning task using a teaching machine (the ts-512). In 1969,
Elwood used a paper-tape-controlled and solenoid-operated console that used a
microphone, rear-projection screen, push-button panel, tape recorder, and type-
writer to administer the Wechsler Adult Intelligence Scale (WAIS). In the early
1970s, studies using the automated system reported that the automated WAIS was
as reliable as the standard version, and the two versions correlated very highly
(Elwood 1972; Elwood and Griffin 1972). Microcomputers became popular in the
1970s and 1980s with the advent of increasingly powerful microprocessors. This
introduction soon led to the development of computer-automated versions of
classical paper-and-pencil and electromechanical neuropsychological assessments
(Kane and Kay 1992). As a result, computers were increasingly employed during
the 1980s and 1990s for neuropsychological test administration, scoring, and
interpretation (Adams and Heaton 1987; Kane and Kay 1992, 1997). One of the
earliest platforms for computerized assessment, scoring, and basic interpretation
was called the neuropsychological key (Russell et al. 1970). According to Russell
(2011), Carolyn Shelley programmed a decision-tree algorithm that placed the
scoring and the preliminary interpretation of test results into an automated program.
Following the initial computer automation, the 1980s saw continued
development of the neuropsychology key and establishment of the lateralization
index (Russell 1984; Russell and Starkey 2001). Also during this period, Reitan
developed the Neuropsychological Deficit Scale program to lateralize brain damage
(Reitan 1991). In light of these advances, there were discussions about the importance of automated rules for statistical prediction that incorporate psychometric
data, patient history, and results from clinical or neurological examinations (Garb
and Schramke 1996).
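At bottom, the key logic is a decision tree over test scores. The toy function below conveys the flavor of such a program; its branch points and cutoffs are invented for illustration and are not the rules Shelley actually programmed.

def key_interpretation(impairment_index, right_minus_left_tapping):
    """Toy decision tree in the spirit of the neuropsychological key.
    All thresholds here are illustrative inventions."""
    if impairment_index < 0.5:
        return "no indication of brain damage"
    if right_minus_left_tapping > 5:   # left hand relatively slowed
        return "impairment; pattern suggests right-hemisphere involvement"
    if right_minus_left_tapping < -5:  # right hand relatively slowed
        return "impairment; pattern suggests left-hemisphere involvement"
    return "impairment; diffuse or bilateral pattern"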
Initial enthusiasm suggested that test administration, scoring, and interpretation could be fully integrated via computer technology (Kane and Kay 1992;
Russell 1995). Although computer-based interpretation was increasingly used
during this period (Russell 1995, 2000), concerns were raised regarding whether
sophisticated algorithms can generate accurate clinical predictions (Adams 1986;
Adams and Brown 1986; Adams and Heaton 1985, 1987; Adams et al. 1984;
Anthony et al. 1980; Heaton and Adams 1987; Heaton et al. 1981; Long and
Wagner 1986). In response to these publications on the inadequacy of computer
interpretation, Russell (2011) has argued that these critiques are primarily review
articles that were based on only two studies (the other publications were reviews
citing these two studies). One of these studies was completed in Heaton’s laboratory
(Anthony et al. 1980; Heaton et al. 1981) and the other was performed by Adams
(Adams, et al. 1984). Russell also points out that these reviews overlooked studies
that questioned their results. Results from a study that reanalyzed the Anthony et al.
(1980) study found agreement between the original neuropsychological key study
and the crossvalidation when both sensitivity and selectivity were considered
(Goldstein and Shelly 1982). Russell (2011) also points to two studies by Wedding
(1983a, b) that found the neuropsychology key to be almost as accurate as an
experienced neuropsychologist (see also Garb 1989; Russell 1998). While it is
unclear whether the computerized platforms during this period were adequate, it is
clear that the use of computerized interpretation of clinical results from fixed bat-
teries has dwindled over the years.
Although there was some decrease in enthusiasm for fixed batteries during this
period, interest in the development of computerized administration and scoring has
continued. A quick literature review reveals that throughout the 1980s there was
burgeoning interest in computerization (or at least quasi-computerization) of vari-
ous assessment measures. For example, some assessments have been developed for
computerized symptom validity testing (Allen et al. 2003). Examples of comput-
erized neuropsychological measures used to distinguish between persons putting
forth their best effort and those who are not include: Computerized Assessment of
Response Bias (Green and Iverson 2001); Victoria Symptom Validity Test (Slick
et al. 1997); and the Word Memory Test (Green 2003). Further, during this period
neuropsychologists transferred a number of paper-and-pencil measures to the
personal computer platform (see Bartram and Bayliss 1984). A few examples of
computerized versions of traditional paper-and-pencil neuropsychological tests
include: the Raven’s Progressive Matrices (Calvert and Waterfall 1982; French and
Beaumont 1990; Knights et al. 1973; Rock and Nolen 1982; Waterfall 1979); the
Peabody Picture Vocabulary Test (Klinge and Rodziewicz 1976; Knights et al.
1973; Space 1975); the Category Test (Beaumont 1975; Choca and Morris 1992);
the Hidden and Embedded Figures tests (Brinton and Rouleau 1969); and the
Wisconsin Card Sorting Test (Fortuny and Heaton 1996; Heaton 1999). Further,
research by Gilberstadt et al. (1976) looked at automated versions of the
Shipley-Hartford Vocabulary Test, Raven’s Matrices, and Digit Span and Digit
Symbol subtests of the WAIS. Initial attempts at assessing the equivalence of these
measures to traditional tests were made (Eckerman et al. 1985). Early studies tended
to reveal significant test–retest reliabilities and no significant differences were found
between manual and computer administrations (Bartram and Bayliss 1984; Mead
and Drasgow 1993).
For much of the 1980s to early 2000s, the focus of many test developers was
upon slight revisions of the paper-and-pencil versions with computerized scoring.
Scoring programs for various paper-and-pencil tests have been developed that
allowed for automatic calculation of normative scores, generation of profiles, and
reporting of interpretive statements: California Verbal Learning Test-Second
Edition (Delis et al. 2000); Delis–Kaplan executive function system (Delis et al.
2001); Neuropsychological Assessment Battery (Stern and White 2003); Ruff
Neurobehavioral Inventory (Ruff and Hibbard 2003); Wechsler Adult Intelligence
Scale-Fourth Edition (Wechsler 2008); Wechsler Memory Scale-Fourth Edition
(Wechsler 2009); and the Wisconsin Card Sorting Test (Heaton et al. 1993).
For example, while automated versions of the original WAIS were developed in
1969 (Elwood and Griffin 1972) and again in 1980 (Vincent 1980), these
automations provided only rudimentary stimulus presentation and limited data
recording. The technology used for administration of the Wechsler scales changed
little between the 1980s and the early 2000s. Given the major advances in tech-
nology, it is interesting that for the past few decades, the most widely used neu-
ropsychological tests (e.g., Wechsler scales in various manifestations: WAIS-R,
WAIS III) progressed so little in terms of technological advances (Hartlage and
Telzrow 1980; Guilmette et al. 1990; Lees-Haley et al. 1996). According to a 2005
study surveying assessment practices and test usage patterns among 747 North
American doctorate-level clinical neuropsychologists, the Wechsler Scales were
the most frequently used tests in their neuropsychological assessments (Rabin et al.
2005). While the administration of the Wechsler scales changed very little during
this time, software was developed that allowed for automatic calculation of nor-
mative scores, generation of profiles, and reporting of interpretive statements.
A recent development for traditional paper-and-pencil assessments is the establishment of the Q-Interactive platform, which enables examiners to assemble assessments at both the battery (e.g., Wechsler Memory Scale—4th Edition, Delis-Kaplan Executive Function System) and subtest levels, administer them on a portable iPad device, and score them in real time (Daniel 2012, 2013). This automation of traditional paper-and-pencil tests uses two tablets, one for the examiner and the other for the examinee. While the examinee's tablet does little more than replace the traditionally printed stimulus booklet, the examiner's tablet offers a number of advances over simply replacing the examiner's manual: it scores item response times and accuracy, shows the examinee's touch responses, implements administration rules (e.g., start and discontinue), and records examiner notes. While the two-tablet design is an advance, the platform currently offers little else over other computer-automated neuropsychological assessments.
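Administration rules of the kind the examiner's tablet enforces can be as simple as a start point plus a discontinue criterion. The generic sketch below shows one common pattern (stop after a run of consecutive failed items); it is a generic illustration, not Q-Interactive's actual rule set.

def administer(items, score_item, start=0, discontinue_after=3):
    """Generic administration loop: begin at a start point and stop after
    a run of consecutive zero-point responses (a common convention)."""
    total = consecutive_failures = 0
    for item in items[start:]:
        points = score_item(item)  # e.g., 0, 1, or 2 points per item
        total += points
        consecutive_failures = consecutive_failures + 1 if points == 0 else 0
        if consecutive_failures >= discontinue_after:
            break                  # discontinue rule met
    return total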
Table 3 (continued)

CogState (Lim et al. 2013). Hardware (input): PC/laptop/tablet, Web-based (keyboard). Admin: 15–20 min. Domains: attention; memory; executive function; language; social cognition. Strengths: high test–retest reliability and stability in all groups; adequate to detect AD-related cognitive impairment. Weaknesses: difficult to distinguish early MCI from healthy controls without several rounds of administration.

MicroCog (Elwood 2001). Hardware (input): PC/laptop (set of keyboard keys). Admin: 30–45 min. Domains: attention; memory; reaction time; spatial; reasoning. Strengths: cognitive scores significantly correlated with the Full IQ component of the WAIS-III. Weaknesses: significant anxiety/frustration in cognitively impaired subjects.

Mindstreams (Doninger et al. 2009). Hardware (input): PC/laptop, Web-based (keyboard). Admin: 45–60 min. Domains: attention; memory; executive function; verbal fluency; visuospatial; motor; information processing. Strengths: effective in detection of MCI (MCI patients performed poorly compared to HC participants in all domains, with significant differences in memory (p = 0.003; d = 0.96), executive function (p = 0.046; d = 0.64), and overall battery performance (p = 0.041; d = 0.63)). Weaknesses: length of time required for completion.

Note ANAM = Automated Neuropsychological Assessment Metrics. CAMCI = Computer Assessment of Mild Cognitive Impairment. CANS-MCI = Computer-administered Neuropsychological Screen for Mild Cognitive Impairment. CANTAB = Cambridge Neuropsychological Test Automated Battery. CNS-VS = Central Nervous System Vital Signs. CDR = Cognitive Drug Research
schizophrenia (Gur et al. 2001; Irani et al. 2012; Nieuwenstein et al. 2001); and
substance abuse (Di Sclafani et al. 2002; Lopez et al. 2001). While there is
increasing interest in using computerized neuropsychological assessments, there is
a large need for the validation of these computerized tests in specific patient pop-
ulations before routine use.
Computerized neuropsychological testing has long been applied to attention deficit
hyperactivity disorder (ADHD). One of the most commonly used computer-based
assessments in ADHD evaluations is the continuous performance test (Huang-Pollock
et al. 2012; Nigg 2005; Sonuga-Barke et al. 2008; Willcutt et al. 2005). Another
commonly used measure for children is the Cambridge Neuropsychological Test
Automated Battery (CANTAB) because it is known to be sensitive to cognitive
dysfunction across multiple domains of ADHD and to the amelioration of cognitive
dysfunction through pharmacotherapy (Chamberlain et al. 2011). An advantage of the
CANTAB is that it separates mnemonic and strategic components of working
memory. There are several studies using the CANTAB to assess medication effects in
ADHD (Bedard et al. 2002, 2004; McLean et al. 2004; Mehta et al. 2004). Further, the
CANTAB has been used to investigate neuropsychological functioning in children
with ADHD (Fried et al. 2012; Rhodes et al. 2005; Gau et al. 2009). A further use of
computerized neurocognitive testing has been with evaluating young athletes for
sports-related concussion. Although current empirical evidence for the development
and utility of computerized neuropsychological testing for preadolescent students is
limited, it is an emerging area in research (De Marco and Broshek 2014). The
Multimodal Assessment of Cognition and Symptoms for Children is an emerging
computerized neuropsychological assessment that was designed to assess cognitive
abilities in children between the ages of 5 and 12 (Vaughan et al. 2014). The battery
consists of six cognitive tests that produce the following composites: Response Speed,
Learning & Memory, and Accuracy/Speed Efficiency. The battery also includes an
assessment of performance validity. Results from a recent study found that there were
no differences between individual and group administration formats among a sample of children
aged 5 to 18. These findings suggest that computerized baseline assessment can be
effectively administered across groups.
As can be seen from the review in this chapter thus far, there are many comput-
erized neuropsychological assessment batteries that are used to gather information
on aspects of cognitive functioning. While they all tend to have some overlap, there
is an unfortunate lack of uniformity among the measures used to capture these
cognitive constructs. A perhaps greater limitation to adoption for many neuropsy-
chologists is that these computerized assessments are generally expensive. Further,
each computerized assessment must be fully vetted and most are a long way off
from having normative databases on homogenous nondiverse populations and most
do not cover the lifespan. There is great need for computerized assessment tools that
can address these issues and be used as a form of “common currency” across
diverse study designs and populations (Gershon et al. 2013).
healthy controls (Gur et al. 2001a, b, 2007). Studies assessing the CNB have
demonstrated good test–retest reliability and sensitivity to diagnosis (Gur et al.
2007). The CNB has also been associated with positive reliability and construct
validity when compared to traditional paper-and-pencil batteries in healthy samples
(Gur et al. 2001a) and in schizophrenia patients (Gur et al. 2001b). The availability
of the CNB in the public domain and Web-based administration has yielded
large-scale normative and disease-specific data on thousands of individuals (Moore
et al. 2015).
distinction for improving our understanding about the relation between psycho-
logical well-being and physical health (Cohen and Pressman 2006). Ultimately, the
NIH Toolbox offers a “common currency” for researchers and clinicians to select
the optimal outcome measures for patient populations.
ADHD symptoms. For example, commission errors (i.e., behavioral response that
occurs when no response is required) are assumed to reflect impulsivity in everyday
life. Omission errors (i.e., failure to respond to a target) are thought to reflect
inattention behaviors. In a large epidemiological study, Epstein and colleagues
(2006) examined the relationships between ADHD symptoms and specific CPT
parameters. Contrary to predictions, they found that omission and commission
errors had a nonspecific relationship to ADHD symptomatology. Omissions were
related to hyperactivity symptoms, not inattention symptoms. Although commis-
sions were related to impulsivity symptoms, they were also related to hyperactivity
and inattention symptoms. Few studies have compared CPT tests with other more
subjective behavioral questionnaires. In a study by Barkley and Murphy (2011),
the Deficits in Executive Functioning Scale (DEFS) was used to assess executive
functioning deficits in daily life. Findings suggested that the CPT scores were
largely unrelated to the executive functioning scale ratings. In fact, of all the CPT
scores, the CPT reaction time was the most closely related, accounting for only 7 % of the
variance. Likewise, studies have used the CANTAB to explore the pattern of
associations between self-assessed (e.g., Subjective Scale to Investigate Cognition
in Schizophrenia) and objective neuropsychological performance (CANTAB) in a
sample of outpatients with schizophrenia. Findings suggest that although outpa-
tients with schizophrenia express some cognitive difficulties, the cognitive nature of
these subjective complaints does not strictly correspond with objective perfor-
mances. These results also suggest that theoretical constructs of cognitive functions
do not always have ecological validity. The authors recommend that both objective
performance and subjective cognitive complaints be taken into account in assess-
ment of patient well-being (Prouteau et al. 2004).
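Returning to the CPT parameters described at the start of this section, scoring reduces to counting mismatches between stimulus type and response over the trial log. A minimal sketch (the trial-record fields are assumptions of this illustration):

def score_cpt(trials):
    """trials: dicts with 'is_target' (bool), 'responded' (bool), and
    'rt' (reaction time in ms; None when no response was made)."""
    omissions = sum(t["is_target"] and not t["responded"] for t in trials)
    commissions = sum(t["responded"] and not t["is_target"] for t in trials)
    hit_rts = [t["rt"] for t in trials if t["is_target"] and t["responded"]]
    mean_hit_rt = sum(hit_rts) / len(hit_rts) if hit_rts else None
    return {"omissions": omissions,      # conventionally read as inattention
            "commissions": commissions,  # conventionally read as impulsivity
            "mean_hit_rt": mean_hit_rt}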
Another approach to computer-automated assessment is to develop assessments
that have topographical similarity (i.e., theoretical relation) between the stimuli
used on the computer screen and the skills required for successful praxes in the
natural environment of the patient. For example, the task demands of learning and
recalling pictures of objects have "theoretical" similarity to the sorts of tasks that a
patient might be required to perform in their everyday lives. One example of this
approach is The Psychologix Computer-Simulated Everyday Memory Battery,
which incorporates video recordings and two-dimensional stimuli that are repre-
sentative of real-world objects and settings. Crook and colleagues (Crook et al.
1979, 1980) have developed this computerized battery over the past 35 years
(previously known as the Memory Assessment Clinics Battery). The battery
includes the following tests: (1) Name–Face Association Test, in which patients
watch video recordings of persons introducing themselves (and name the city that
they are from) and are then tested on both immediate and delayed recall; (2) Incidental
Memory Test, which assesses the patient’s recall of the name of the city that was
provided during Name–Face Association test; (3) First–Last Names Test, which
measures associate learning and recall of four to six paired first and last names over
three to six trials; (4) Narrative Recall, in which patients watch a 6-min television newscast and then are assessed on their ability to answer 25 multiple-choice questions related to the news broadcast; and (5) Selective Reminding.

4 What About Ecologically Valid Computer-Based Assessments?

Many traditional measures began as tools for cognitive assessment in normal populations and then later found their way into the clinical realm to aid in assessing constructs that are important to carrying out real-world activities. For example, if a patient's performance on the
Stroop revealed difficulty inhibiting an automatic, overlearned response, a neu-
ropsychologist may be compelled to report caution relative to an aging patient’s
driving—safe driving of an automobile includes the ability to withhold an over-
learned behavior to press the brakes if a traffic light turns red when the driver is
halfway through the intersection. An unfortunate limitation of this approach to
predicting everyday functioning is that it forces the neuropsychologist to rely on
measures designed for other purposes. Goldstein (1996) questioned this approach
because it is difficult to ascertain the extent to which performance on measures of
basic constructs translates to functional capacities within the varying environments
found in the real world. A decade later, Burgess et al. (2006) agree and argue that a
further issue is that we need assessments that further our understanding about the
ways in which the brain enables persons to interact with their environment and
organize everyday activities. Instead of using the terms “verisimilitude” and
“veridicality” when discussing “ecological validity,” they use the term “represen-
tativeness” to discuss the extent to which a neuropsychological assessment corre-
sponds in form and context to a real-world (encountered outside the laboratory)
situation. They use the term “generalizability” to discuss the degree to which poor
performance on a neuropsychological assessment will be predictive of poor per-
formance on tasks outside the laboratory.
Another example can be found in one of the most widely used measures of
executive function, the Wisconsin Card Sort Test (WCST). The most extensive
normative data are derived from an administration of the WCST that utilizes paper
cards. The stimulus cards are administered by an experimenter on one side of a desk
as he/she faces a participant on the other side of the desk. Participants are presented
with a number of stimulus cards and instructed to match these stimulus cards to
target cards. Although participants are not told how to match the cards, they are
informed whether a particular match is correct or incorrect. It is important to note
that the WCST (like many paper-and-pencil tests in use today) was not originally
developed as a measure of executive functioning. Instead, the WCST was preceded
by a number of sorting measures that were developed from observations of the
effects of brain damage. Nevertheless, in a single study by Brenda Milner (1963),
patients with dorsolateral prefrontal lesions were found to have greater difficulty on
the WCST than patients with orbitofrontal or nonfrontal lesions. However, the
majority of neuroimaging studies have found activation across frontal and non-
frontal brain regions, and clinical studies have revealed that the WCST does not
discriminate between frontal and nonfrontal lesions (Nyhus and Barcelo 2009;
Stuss et al. 1983). Further, while data from the WCST do appear to provide some
information relevant to the constructs of “set shifting” and “working memory,” the
data do not necessarily offer information that would allow a neuropsychologist to
predict what situations in everyday life require the abilities that the WCST mea-
sures. Further, it has been shown that patients with frontal lobe pathology do not
always differ from control subjects on the WCST (Stuss et al. 1983).
Learning and explicit reversal components (rather than insensitivity to risk in decision making) may better explain the VMPFC lesion patients' selections of
disadvantageous and risky cards on the IGT. Evidence for this is indicated by
improved performance when the initial bias favoring the disadvantageous decks is
removed by reordering the cards. Hence, while insensitivity to risk is often used to
explain poor performance on the IGT, the learning and explicit reversal components
of the IGT may better explain what the IGT is actually tapping into. Further, the
IGT falls short of an ecologically valid assessment because it does not mimic
real-world activities. Like other cognitive measures, the IGT was created to assess
the construct of decision making in a laboratory setting, but it remains to be seen
whether a relation between performance on the IGT and real-world decision making
exists (Buelow and Suhr 2009).
verbal feedback (e.g., "That's it" or "That's not what I want"). Following the WCST format, the participant had 128 turns in which to complete two cycles of matching 10 times to color, 10 times to object, and 10 times to number (in that order) to successfully complete the task. Results
from comparison of healthy control performance on VRLFAM and the WCST
indicated that all performance scales (with the exception of WCST perseverative
errors) were directly related (Elkind 2001). An unfortunate limitation of modeling
VE-based neuropsychological assessments off of the WCST is that the virtual
analogues, like the original WCST, may not be able to differentiate between
patients with frontal lobe pathology and control subjects (Stuss et al. 1983). Further,
while data from the VE-based assessments, like the WCST, do appear to provide
information relevant to the constructs of “set shifting” and “working memory,” the
VE assessments seem to do little to extend ecological validity.
Table 1 (continued). Note BASC = Behavioral Assessment System for Children; CPT = Continuous Performance Test; WISC-IV = Wechsler Intelligence Scale for Children—Fourth Edition; CPRS-R:L = Conners' Parent Rating Scales—Revised: Long Version; CBCL = Child Behavior Checklist; BRIEF = Behavior Rating Inventory of Executive Functioning; TMT = Trail Making Test; BNT = Boston Naming Test; NEPSY = NEuroPSYchological Assessment; TOVA = Test of Variables of Attention

These findings have been replicated in other studies attempting to validate the VR Classroom for use with ADHD (Adams et al. 2009; Bioulac et al. 2012; Pollak et al. 2009, 2010; see Table 1).
Perhaps the best validated of these virtual classrooms is the AULA virtual reality
test (Diaz-Orueta et al. 2013; Iriarte et al. 2012). The AULA has a normative
sample comprising 1272 participants (48.2 % female) with an age range from 6 to
16 years (M = 10.25, SD = 2.83). The AULA is significantly correlated with the
traditional CPT and can distinguish between children with ADHD with and without
pharmacological interventions. In comparison of the AULA Virtual Reality CPT
with standard computerized tests, the AULA VR CPT was found to be more
sensitive to reaction time and rate of omission errors than the TOVA and was also
rated as more enjoyable than the TOVA computerized battery (see Table 1). In
another recent study, Diaz-Orueta et al. (2013) analyzed the convergent validity
between the AULA and the Conners’ Continuous Performance Test (CPT).
The AULA and CPT were administered concurrently to 57 children who had a
diagnosis of ADHD. Convergent validity was indicated via the observed significant
correlations between both tests in all the analyzed variables (omissions, commis-
sions, reaction time, and variability of reaction time).
The classroom paradigm has been extended to a virtual apartment that superim-
poses construct-driven stimuli (e.g., Stroop and CPT) onto a large television set in
the living room. In a preliminary study, Henry and colleagues (2012) with 71
healthy adult participants found that the VR-Apartment Stroop is capable of elic-
iting the Stroop effect with bimodal stimuli. Initial validation data also suggested
that measures of the VR-Stroop significantly correlate with measures of the
Elevator counting with distracters, the Continuous Performance Task (CPT-II), and
the Stop-it task. Results from regression indicated that commission errors and
variability of reaction times at the VR-Apartment Stroop were significantly pre-
dicted by scores of the Elevator task and the CPT-II. These preliminary results
suggest that the VR-Apartment Stroop is an interesting measure of cognitive and
motor inhibition for adults.
Shallice and Burgess (1991) developed the Multiple Errands Test (MET) as a
function-led assessment of multitasking. The MET requires the patient to perform a
number of relatively simple but open-ended tasks in a shopping context.
Participants are required to achieve a number of simple tasks without breaking a
series of arbitrary rules. The MET has been shown to have increased sensitivity
(over traditional neuropsychological measures) to elicit and detect failures in
executive function (e.g., distractibility and task implementation deficits). It has also
been shown to be better at predicting behavioral difficulties in everyday life
(Alderman et al. 2003). Further, the MET has been found to have strong inter-rater
reliability (Dawson et al. 2009; Knight et al. 2002), and performance indices from
the MET were able to significantly predict severity of everyday life executive
problems in persons with TBI (Cuberos-Urbano et al. 2013).
Potential limitations for the MET are apparent in the obvious drawbacks to
experiments conducted in real-life settings (e.g., Bailey et al. 2010). Logie et al.
(2011) point out a number of limitations of the MET: (1) it is time-consuming; (2) participants require transportation; (3) consent must be obtained from local businesses; (4) experimental control is lacking; and (5) tasks are difficult to adapt for other
clinical or research settings. McGeorge et al. (2001) modeled a Virtual Errands Test
(VET) off of the original MET. However, the VET tasks were designed to be more
vocationally oriented, containing work-related errands as opposed to the shopping errands used in the MET. In a study involving five adult patients with brain injury and five unimpaired matched controls, participants completed both the real-life MET
and the VET. Results revealed that performance was similar for real-world and VE
tasks. In a larger study comparing 35 patients with prefrontal neurosurgical lesions
to 35 controls matched for age and estimated IQ (Morris et al. 2002), the VE
scenario was found to successfully differentiate between participants with brain
injuries and controls. A limitation of these early VEs is that the graphics were
unrealistic, and performance assessment involved video recording test sessions with
subsequent manual scoring.
In the past decade, a number of virtual environments with enhanced graphics (and
usability) have been developed to model the function-led approach found in the
MET. In addition to the virtual environments for assessment of nonclinical popu-
lations (Logie et al. 2011), a number of virtual errand protocols have been developed
to evaluate executive functions of clinical populations (see Table 2 for review of
virtual errand protocols over the past 10 years). For example, virtual shopping
scenarios (see Parsons et al. 2013 for review) offer an advanced computer interface
that allows the clinician to immerse the patient within a computer-generated simu-
lation that reflects activities of daily living. They involve a number of errands, like those performed in a real environment, that must be completed while following certain rules that require problem solving. Since they allow for precise presentation and control of dynamic perceptual
stimuli, they have the potential to provide ecologically valid assessments that
combine the control of laboratory measures within simulations that reflect real-life
situations.
A number of other function-led VEs are being modeled to reflect the multi-
tasking demands found in the MET. The Multitasking in the City Test (MCT) is
modeled after the MET and involves an errand-running task that takes place in a
virtual city (Jovanovski et al. 2012a, b). The MCT can be distinguished from
existing VR and real-life METs. For instance, MCT tasks are performed with fewer
explicit rule constraints. This contrasts with the MET, in which participants must
abide by certain rules (not traveling beyond a certain spatial boundary and not
entering a shop other than to buy something). This difference was intentional in the
MCT because the researchers aimed to investigate behaviors that are clearly not
goal-directed. The MCT is made up of a virtual city that includes a post office, drug
store, stationery store, coffee shop, grocery store, optometrist's office, doctor's
office, restaurant/pub, bank, dry cleaners, pet store, and the participant’s home.
Although all buildings in the MCT VE can be entered freely, interaction within
them is possible only for those buildings that must be entered as part of the task
requirements. The MCT was used to compare a sample of post-stroke and traumatic
brain injury (TBI) patients to an earlier sample of normal controls. Jovanovski et al.
(2012b) found that although the patient sample developed adequate plans for
executing the tasks, their performance of the tasks revealed a greater number of
errors. The MCT was significantly correlated with a rating scale completed by
significant others.
Virtual reality assessments modeled off of the MET have also been created and
validated in samples with stroke or injury-related brain deficits. These protocols are
often placed in living or work settings (see Table 2 for a review of MET-based VEs
from the past 10 years): virtual office tasks (Jansari et al. 2013; Lamberts et al.
2009; Montgomery et al. 2011); virtual apartment/home tasks (Saidel-Goley et al.
2012; Sweeney et al. 2010); virtual park (Buxbaum et al. 2012); virtual library task
(Renison et al. 2012); virtual anticipating consequences task (Cook et al. 2013);
virtual street crossing (Avis et al. 2013; Clancy et al. 2006; Davis et al. 2013;
Nagamatsu et al. 2011); and virtual kitchen (Cao et al. 2010) (Table 3).
An example of a recently developed virtual environment for function-led
assessment is the virtual library task. Renison et al. (2012) aimed to investigate
whether performance on a virtual library task was similar to performance of the
same task in a real-world library. Findings revealed that scores on the virtual library
task and the real-world library task were highly positively correlated, suggesting
that performance on the virtual library task is similar to performance on the
real-world library task. This finding is important because the virtual reality envi-
ronment allows for automated logging of participant behaviors and it has greater
clinical utility than assessment in real-world settings. Comparisons of persons with
traumatic brain injury and normal controls supported the construct validity of the
virtual library task as a measure of executive functioning. In fact, the virtual library
task was found to be superior to traditional (e.g., WCST) tasks in differentiating
between participants with TBI and healthy controls. For example, the WCST failed
to significantly differentiate between the two groups. This is consistent with studies showing that patients with frontal lobe pathology do not always differ from control subjects on the WCST (Stuss et al. 1983).
Table 2 Function-led virtual reality errand tests for assessment of executive functioning

Carelli et al. (2008), Virtual Supermarket. Traditional tests: MMSE. Design: within subjects (n = 20); usability data for RCT. Outcome: temporal and accuracy outcome measures were able to monitor differences in abilities; results revealed a need for more practice to ensure task learning; large variance in performance time.

Cipresso et al. (2013), Virtual Multiple Errands Test (VMET). Traditional tests: MMSE; DS; Short Story Recall; TMT A, B; FAB; Corsi's Span; Corsi's Block; phonemic fluency test; semantic fluency test; disyllabic word test; ToL; Token Test; Street Completion; STAI; BDI. Design: OCD group (n = 15); controls (n = 15). Outcome: while performing routine tasks in the VMET, patients with OCD had more difficulties working with breaks in volition than normal controls.

Josman et al. (2014), Virtual Action Planning Supermarket (VAP-S). Traditional tests: BADS; OTDL-R. Design: stroke (n = 24); control (n = 24). Outcome: stroke participants showed a modest relationship between the BADS and number of purchases in the VAP-S; the VAP-S demonstrated decent predictive validity of IADL performance.

Josman et al. (2009), Virtual Action Planning Supermarket (VAP-S). Traditional tests: BADS. Design: clinical (n = 30); controls (n = 30). Outcome: the VAP-S was sensitive to differences in executive functioning between schizophrenia patients and controls.

Law et al. (2012), Edinburgh Virtual Errands Test (EVET). Traditional tests: N/A. Design: healthy controls (n = 42). Outcome: participants were able to navigate the environment and perform tasks; factor demand and factor plan were shown to have little effect on ability to complete tasks.

Logie et al. (2011), Edinburgh Virtual Errands Test (EVET). Traditional tests: Word Recall Task; Working Memory Verbal Span; Working Memory Spatial Span; Traveling Salesman Task; Breakfast Task. Design: within subjects (n = 165). Outcome: EVET scores were predicted by measures of retrospective memory, visuospatial planning, and spatial working memory; a limitation of the EVET is its lack of generalizability to other scenarios.

Pedroli et al. (2013), Virtual Multiple Errands Test (VMET). Traditional tests: MMSE; AVLT; DS; Corsi's Span; Supra-Span; Short Story; ToL; Verbal Fluency; Benton's JLO; WAIS-R; Naming Test; TMT; STAI; BDI. Design: within subjects; PD (n = 3); control (n = 21). Outcome: the VMET showed good reliability; however, the System Usability Scale suggests that the VMET may need minor improvements to increase usability for patients.

Rand et al. (2009), Virtual Multiple Errands Test (VMET). Traditional tests: Zoo Map Test; IADL; MET. Design: post-stroke (n = 9); healthy adults (n = 20). Outcome: the VMET was able to distinguish between healthy and post-stroke participants and between older and younger cohorts.

Raspelli et al. (2012), Virtual Multiple Errands Test (VMET). Traditional tests: MMSE; Star Cancellation Test; Token Test; Street's Completion Test; Test of Attentional Performance; Stroop Test; Iowa gambling task; DEX; ADL; IADL; State Trait Anxiety Index; BDI. Design: within subjects; post-stroke (n = 9); healthy adults (n = 10). Outcome: stroke patients showed the highest number of errors and slower reaction time on the VR-MET; older adults showed a higher number of errors and slower reaction time on the VR-MET compared to younger adults.

Sauzeon et al. (2014), Virtual Human Object Memory for Everyday Scenes (virtual HOMES). Traditional tests: MMSE; MRT; Spatial Span; Stroop task; CVLT; SSQ; NTQ. Design: within subjects; AD (n = 16); young adults (n = 23); older adults (n = 23). Outcome: Alzheimer's patients demonstrated deficits in executive functioning; older individuals showed signs of decline in free recall.

Werner et al. (2009), Virtual Action Planning Supermarket (VAP-S). Traditional tests: BADS. Design: within subjects; MCI (n = 30); control (n = 40). Outcome: the VAP-S was able to discriminate between MCI and the control group.

Note RCT = Randomized Clinical Trial; PD = Parkinson's disease; AD = Alzheimer's disease; MMSE = Mini Mental State Examination; DS = Digit Span; TMT = Trail Making Test; FAB = Frontal Assessment Battery; ToL = Tower of London; BADS = Behavioural Assessment of the Dysexecutive Syndrome; AVLT = Auditory-Verbal Learning Test; JLO = Judgment of Line Orientation; MET = Multiple Errands Test; MRT; CVLT = California Verbal Learning Test; SSQ = Simulator Sickness Questionnaire; NTQ = New Technology Questionnaire; IADL = Instrumental Activities of Daily Living; ADL = Activities of Daily Living; DEX = Dysexecutive Questionnaire; OTDL-R = Revised Observed Tasks of Daily Living; BDI = Beck Depression Inventory
Table 4 Function-led assessment of attention and executive functioning using driving simulators

Allen et al. (2009). Traditional tests: N/A. Design: 3 between subjects (placebo versus moderate alcohol versus high alcohol; n = 40). Outcome: fMRI results indicated blood oxygen level decreased in the hippocampus, anterior cingulate, and dorsolateral prefrontal areas during the VR driving task; since these brain areas affect attention and decision making, this may have implications for driving while under the influence of alcohol.

Barkley et al. (2007). Traditional tests: Adult ADHD Rating Scale; Safe Driving Behavior Survey. Design: 3 × 3 (atomoxetine dose titration) within subjects; ADHD (n = 18). Outcome: although self-report results revealed perceived improvement in ADHD symptoms while performing simulated driving, behavioral observations did not reveal a significant improvement.

Barkley et al. (2007). Traditional tests: N/A. Design: 2 (placebo versus 0.04 BAC alcohol) × 2 (ADHD (n = 50) versus control (n = 40)) between subjects. Outcome: although alcohol produced greater inattention in persons with ADHD than control subjects, overall driving performance declined similarly for both groups.

Barkley et al. (2005). Traditional tests: CPT. Design: 2 × 3 (dose titration) within subjects; ADHD (n = 52). Outcome: a high dose of methylphenidate reduced impulsiveness errors on the CPT and led to improved driving performance (less variability in steering and lowered driving speed).

Cox et al. (2008). Traditional tests: N/A. Design: 3 between subjects (OROS MPH versus se-AMPH ER versus placebo); adolescents with ADHD (n = 19). Outcome: results showed OROS MPH and se-AMPH ER were linked with significantly poorer performance on the VR driving simulator in comparison with the placebo.

Knouse et al. (2005). Traditional tests: Driver History Survey; DBS; estimates of driving ability; estimated percentile score; examiner rating of safe driving behavior. Design: 6 within subjects; ADHD (n = 44) versus control (n = 44). Outcome: adults with ADHD overestimated driving performance and performed worse on the virtual reality driving task.

Mckeever et al. (2013). Traditional tests: N/A. Design: 2 within subjects (baseline loop versus task-engaged loop; n = 28). Outcome: texting appeared to have a detrimental effect on lane management, as participants paid less attention to the road while texting.

Milleville-Pennel et al. (2010). Traditional tests: Alertness, Go/No-Go, divided attention, Visual Scanning (TAP); DS (WAIS-III); D2 Test; Stroop Color–Word Test; Zoo Map Test; TMT. Design: 10 within subjects; TBI (n = 5) versus control (n = 6). Outcome: differences in planning and attention were found between participants with TBI and controls; persons with TBI showed limited visual exploration compared to controls.

Schultheis et al. (2007). Traditional tests: Digit Symbol (WAIS-III); PASAT; TMT; VR-DR User Feedback Questionnaire. Design: 5 within subjects; acquired brain injury (n = 33) versus controls (n = 21). Outcome: persons with acquired brain injury had lower ratings of the VR-DR system than healthy controls.

Note ADHD = Attention deficit/hyperactivity disorder, CPT = Continuous Performance Test, DBS = Driving Behavior Survey, ACT = Auditory Consonant Trigrams, PASAT = Paced Auditory Serial Addition Test, UFOV = Useful Field of View, WAIS-III = Wechsler Adult Intelligence Scale—Third Edition, TMT = Trail Making Test, TAP = Test for Attentional Performance, WMS = Wechsler Memory Scale
In Parsons and Rizzo's virtual city, participants took part in a learning task in which they were exposed to
language- and graphic-based information without any context across three free
learning trials. Next, they were immersed into the virtual environment and they
followed a virtual human guide to five different zones of a virtual city. In each zone,
participants searched the area for two target items (i.e., items from the learning
phase). Following immersion in each of the five zones, participants performed short
and long delay free and cued recall tasks. Parsons and Rizzo compared results from
the virtual city to traditional paper-and-pencil tasks and found the virtual city was
significantly related to paper-and-pencil measures (both individual and composite)
of both visually and auditorily mediated learning and memory (convergent
validity).
Other researchers have extended object memory to include contextual infor-
mation such as the location, character, and moment associated with each object
(Burgess et al. 2001, 2002; Rauchs et al. 2008; Plancher et al. 2010, 2012, 2013).
For example, Arvind-Pala et al. (2014) assessed everyday-like memory using a
virtual environment-based Human Object Memory for Everyday Scenes (HOMES)
test. In this virtual paradigm, which used a virtual apartment to adapt the California Verbal Learning Test, older adults and brain-injured patients displayed comparably poor recall and a recognition benefit.
One application of interest for affective neuroscience is the use of virtual envi-
ronments for studies of fear conditioning (Alvarez et al. 2007; Baas et al. 2008).
Virtual environments offer an ecologically valid platform for examinations of
context-dependent fear reactions in simulations of real-life activities (Glotzbach et al. 2012; Mühlberger et al. 2007). Neuroimaging studies utilizing virtual environments
have been used to delineate brain circuits involved in sustained anxiety to unpre-
dictable stressors in humans. In a study of contextual fear conditioning, Alvarez
et al. (2008) used a virtual office and fMRI to investigate whether the same brain
mechanisms that underlie contextual fear conditioning in animals are also found in
humans. Results suggested that contextual fear conditioning in humans was con-
sistent with preclinical findings in rodents. Specifically, findings support hot
affective processing in that the medial aspect of the amygdala had afferent and
efferent connections that included input from the orbitofrontal cortex. In another
study using a virtual office, Glotzbach-Schoon et al. (2013) assessed the modulation
of contextual fear conditioning and extinction by 5HTTLPR (serotonin
transporter-linked polymorphic region) and NPSR1 (neuropeptide S receptor 1)
polymorphisms. Results revealed that both the 5HTTLPR and the NPSR1 poly-
morphisms were related to hot affective (implicit) processing via a fear-potentiated
startle. There was no effect of the 5HTTLPR polymorphism on cold cognitive
(explicit) ratings of anxiety. Given the ability of virtual environments to place
participants in experimentally controlled yet contextually relevant situations, there
appears to be promise in applying this platform to future translational studies into
contextual fear conditioning.
Recently, virtual environments have been applied to the assessment of both “cold”
and “hot” processes using combat-related scenarios (Armstrong et al. 2013; Parsons
et al. 2013). The addition of virtual environments allows affective neuroscience
researchers to move beyond the ethical concerns related to placing participants into
real-world situations with hazardous contexts. The goal of these platforms is to
assess the impact of hot affective arousal upon cold cognitive processes (see
Table 5). For example, Parsons et al. (2013) have developed a Virtual Reality
Stroop Task (VRST) in which the participant is immersed in a simulated High
Mobility Multipurpose Wheeled Vehicle (HMMWV) and passes through zones
with alternating low threat (driving down a deserted desert road) and high threat (gunfire, explosions, and shouting, among other stressors) while dual-task stimuli (e.g., Stroop stimuli) are presented on the windshield. They found that the
high-threat zones created a greater level of psychophysiological arousal (heart rate,
skin conductance, respiration) than did low-threat zones. Findings from these
studies also provided data regarding the potential of military-relevant virtual
environments for measurement of supervisory attentional processing (Parsons et al.
2013). Analyses of the effect of threat level on the color–word and interference scores revealed main effects of threat level and condition. Findings from the
virtual environment paradigm support the perspective that (1) high information load
tasks used for cold cognitive processing may be relatively automatic in controlled
circumstances—for example, in low-threat zones with little activity, and (2) the
total available processing capacities may be decreased by other hot affective factors
such as arousal (e.g., threat zones with a great deal of activity). In a replication
study, Armstrong et al. (2013) established the preliminary convergent and dis-
criminant validity of the VRST with an active-duty military sample.
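The threat-by-condition analysis described above reduces to a two-factor repeated-measures design. The following is a minimal sketch of such an analysis, assuming hypothetical long-format data; the variable names, simulated values, and effect sizes are illustrative assumptions, not the actual data or analysis code of Parsons et al. (2013).

```python
# Sketch of a 2 (threat) x 3 (Stroop condition) repeated-measures ANOVA,
# mirroring the threat-level and condition effects described above.
# All values are simulated for illustration only.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(20):
    for threat in ("low", "high"):
        for condition in ("word", "color", "interference"):
            base = 600 + (80 if condition == "interference" else 0)
            rt = rng.normal(base + (40 if threat == "high" else 0), 50)
            rows.append({"subject": subject, "threat": threat,
                         "condition": condition, "rt": rt})
df = pd.DataFrame(rows)

# One observation per subject per cell; both factors are within-subject
res = AnovaRM(df, depvar="rt", subject="subject",
              within=["threat", "condition"]).fit()
print(res.anova_table)  # F tests for threat, condition, and their interaction
```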
In addition to virtual environment-based neuropsychological assessments using
driving simulators, a number of other military-relevant virtual environments have
emerged for neurocognitive assessment of cold and hot processes. For example,
Parsons et al. (2012, 2014) immersed participants in a Middle Eastern city and exposed them to a cold cognitive processing task (e.g., paced auditory serial addition test) as they followed a fire team on foot through safe and ambush (e.g., hot affective—bombs, gunfire, screams, and other visual and auditory forms of threat) zones. In one measure of the battery, a route-learning task, each zone is preceded by a zone marker, which serves as a landmark to assist in remembering the route. The route-learning task is followed immediately by a navigation task in which participants are asked to return to the starting point of their tour through the city. Courtney et al. (2013) found that the
inclusion of hot affective stimuli (e.g., high-threat zones) resulted in a greater level
of psychophysiological arousal (heart rate, skin conductance, respiration) and
poorer performance on cold cognitive processes than did low-threat zones.
Results from active-duty military (Parsons et al. 2012) and civilian (Parsons and Courtney 2014) populations offer preliminary support for the construct validity of the VR-PASAT as a measure of attentional processing. Further, results suggest that the VR-PASAT may provide some unique information related to hot affective processing not tapped by traditional cold attentional processing tasks.
Table 5 Virtual environments to elicit affective responses in threatening contexts

Study: Armstrong et al. (2013); Virtual HMMWV VRST
Traditional tests: BDI, DKEFS Color–Word Interference, PASAT, ANAM (Stroop, Reaction Time, Code Substitution, Matching to Sample, Math Processing, Tower Test)
Sample: N = 49 active-duty soldiers
Outcome: • Convergent validity of the VRST with the DKEFS Color–Word Interference Task and the ANAM Stroop Task • Discriminant validity of the VRST was established

Study: Parsons and Courtney (2014); VR-PASAT
Traditional tests: PASAT; DKEFS Color–Word Interference Test; ANAM (Code Substitution, Procedural Reaction Time, Mathematical Processing, Matching to Sample, Stroop)
Sample: N = 50 college-aged students
Outcome: • Convergent validity of VR-PASAT was found via correlations with measures of attentional processing and executive functioning • Divergent validity of VR-PASAT was found via lack of correlations with measures of learning, memory, and visuospatial processing • Likability of VR-PASAT was consistently rated higher than the traditional PASAT

Study: Parsons et al. (2013); Virtual HMMWV VRST
Traditional tests: ANAM Stroop Task, DKEFS Color–Word Interference
Sample: N = 50 college-aged students
Outcome: • HMMWV VRST showed the same Stroop pattern as the traditional DKEFS and ANAM Stroop measures • VRST offers the advantage of being able to assess exogenous and endogenous processing • HMMWV VRST high-threat zones produced more psychophysiological arousal than low-threat zones
6 Conclusions
While the cognitive screening studies are helpful, many neuropsychologists are
interested in the results of comparing larger neuropsychological batteries via
video teleconferencing to in-person administrations (see Table 2). Jacobsen and
colleagues (2003) evaluated the reliability of administering a broader neuropsychological assessment remotely with healthy volunteers. The battery was made up
of twelve measures that covered eight cognitive domains: attention, information
processing, visuomotor speed, nonverbal memory, visual perception, auditory
attention, verbal memory, and verbal ability. Results revealed that for most of the
measures, the in-person and remote scores were highly correlated (reliability
coefficients ranging from 0.37 to 0.86; median value of 0.74).
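Mode-equivalence coefficients of this kind are per-measure correlations between the two administration formats, summarized by the median. The following is a minimal sketch with simulated scores; the measure names, sample size, and score distributions are placeholders rather than Jacobsen and colleagues' data.

```python
# Sketch: per-measure correlation between in-person and remote scores,
# summarized by the median, as in the reliability figures reported above.
# All scores are simulated; measure names are placeholders.
import numpy as np

rng = np.random.default_rng(1)
measures = [f"measure_{i + 1}" for i in range(12)]  # a twelve-test battery
coefficients = {}
for name in measures:
    in_person = rng.normal(50, 10, size=32)           # hypothetical scores
    remote = in_person + rng.normal(0, 6, size=32)    # remote retest + noise
    coefficients[name] = np.corrcoef(in_person, remote)[0, 1]  # Pearson r

print({name: round(r, 2) for name, r in coefficients.items()})
print("median r:", round(float(np.median(list(coefficients.values()))), 2))
```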
In addition to some important reviews of the teleneuropsychology literature,
Cullum, Grosch, and colleagues have completed a set of studies comparing video
teleconference-based diagnostic interviewing with conventional in-person assess-
ment (Cullum et al. 2006, 2012, 2014; Grosch et al. 2011, 2015). In an early study,
Cullum et al. (2006) made this comparison with a sample of 33 participants, with
14 older persons having mild cognitive impairment and 19 participants with
mild-to-moderate Alzheimer’s disease. The neuropsychology battery included the
MMSE, Hopkins Verbal Learning Test-Revised, Boston Naming Test (short form),
Digit Span, letter fluency, and category fluency. Robust correlations were found
between video teleconference and in-person testing: MMSE (r = 0.89), Boston Naming (r = 0.88), Digit Span (r = 0.81), category fluency (r = 0.58), and letter fluency (r = 0.83), with excellent agreement overall. It is important to note that while there was a significant correlation between the two conditions for the HVLT-R, the verbal percentage retention score exhibited considerable variability in each test session.
This suggests that this score may not be as reliable as the other memory indices.
These results offer further support for the validity of video teleconferencing for
conducting neuropsychological evaluations of older adults with cognitive impair-
ment. Cullum and colleagues (2014) performed a follow-up study with a larger
sample size. They examined the reliability of video teleconference-based neu-
ropsychological assessment using the following: MMSE, Hopkins Verbal Learning
Test-Revised, Boston Naming Test (short form), letter and category fluency, Digit Span forward and backward, and clock drawing. The sample consisted of two
hundred and two (cognitive impairment N = 83; healthy controls N = 119) adult
participants. Highly similar results were found across video teleconferencing and
in-person conditions regardless of whether participants had cognitive impairment.
These findings suggest that video teleconferencing-based neuropsychological testing
is a valid and reliable alternative to traditional in-person assessment. In a more recent
study, this team aimed to validate remote video teleconferencing for geropsychiatry
applications. Findings suggest that brief telecognitive screening is feasible in an
outpatient geropsychiatry clinic and produces similar results for attention and
visuospatial ability in older patients. Patients may benefit from remote assessment and diagnosis because teleneuropsychology allows the neuropsychologist to spare patients (and their caregivers) living in rural areas the burden of traveling to distant health institutions.
Although the above studies provide support for the feasibility, validity, and acceptability of neuropsychological screening delivered via telehealth, there have been only a handful of studies discussing the feasibility of neuropsychological assessment for a comprehensive dementia care program that serves patients with limited accessibility (Barton et al. 2011; Harrell et al. 2014; Martin-Khan et al. 2012; Vestal et al. 2006). For example, Barton et al. (2011) employed a platform to administer multidisciplinary, state-of-the-art assessment of neurocognitive impairment by video teleconferencing. The participants were patients at a rural veterans' community clinic who were referred by their local provider for evaluation of memory complaints. The neuropsychological evaluation was integrated into the
for assisting in a broad range of clinical research areas and patient point-of-care
services (Boulos et al. 2011; Doherty and Oh 2012; Doherty et al. 2011; Fortney
et al. 2011). Of particular interest for neuropsychologists is the potential of
smartphones for research in cognitive science (Dufau et al. 2011). A handful of
smartphone applications have emerged for cognitive assessment (Brouillette et al.
2013; Gentry et al. 2008; Kwan and Lai 2013; Lee et al. 2012; Svoboda et al. 2012;
Thompson et al. 2012).
In one of these smartphone applications, Brouillette et al. (2013) developed a new application that utilizes touchscreen technology to assess attention and processing speed. Initial validation was completed with a nondemented elderly sample. Findings revealed that their color shape test was a reliable and valid tool for the assessment of processing speed and attention in the elderly. These findings
support the potential of smartphone-based assessment batteries for attentional
processing in geriatric cohorts.
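Scoring for a touchscreen speed-and-attention measure of this kind typically combines accuracy with response latencies on correct trials. The sketch below is a generic illustration under that assumption; the trial fields and summary indices are hypothetical and do not reproduce the published scoring algorithm of Brouillette et al. (2013).

```python
# Sketch: summarizing a touchscreen attention/processing-speed task from a
# logged trial stream. Fields and scoring rules are illustrative assumptions.
from statistics import median

trials = [  # hypothetical trial log (latency in ms, correctness flag)
    {"rt_ms": 612, "correct": True},
    {"rt_ms": 548, "correct": True},
    {"rt_ms": 1490, "correct": False},
    {"rt_ms": 701, "correct": True},
]

correct_rts = [t["rt_ms"] for t in trials if t["correct"]]
summary = {
    "accuracy": sum(t["correct"] for t in trials) / len(trials),
    "median_rt_ms": median(correct_rts),  # a simple processing-speed index
}
print(summary)
```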
Smartphones move beyond the Neuropsychological Assessment 1.0 (paper and
pencil) found in video teleconferencing. Given their use of advanced technologies
(i.e., smartphones instead of paper-and-pencil), smartphone-based cognitive assess-
ment represents Neuropsychological Assessment 2.0. That said, smartphone-based cognitive assessment does not extend ecological validity, because it relies on construct-driven and veridical assessments that lack assessment of affective processing.
1 Introduction
Ecological validity concerns the degree of relevance or similarity that a test or training system has relative to the real world and its value for predicting or improving daily functioning.
A further common method applied in the rehabilitation sciences employs
behavioral observation and ratings of human performance in the real world or via
physical mock-ups of functional environments. Mock-up environments for activities of daily living (e.g., kitchens and bathrooms) and workspaces (e.g., offices and factory settings) are typically built, within which persons with motor and/or neurocognitive impairments are observed while their performance is evaluated.
Aside from the economic costs to physically build these environments and to
provide human resources to conduct such evaluations, this approach is limited in
the systematic control of real-world stimulus challenges and in its capacity to
provide detailed performance data capture.
adults after stroke (Westerberg et al. 2007) and traumatic brain injury (Serino et al.
2007).
While computerized training of attention has been of questionable import,
summative findings from systematic reviews have revealed moderate support for
direct attention training for patients who have experienced a moderate to severe TBI
(Snell et al. 2009; Comper et al. 2005; Gordon et al. 2006). The Cogmed program
has been widely used in various populations. Cogmed is a rehabilitation program
that uses videogame-based programs for children and adults that are practiced
30 min a day, 5 days/week for 5 weeks. Cogmed consists of coaching, education, and peer support with the goal of training and improving working memory capacity. Brehmer and colleagues (2012) used Cogmed QM to investigate the
benefits of computerized working memory exercises in 55 younger adults and 45
cognitively intact older adults. Results indicated that participants (both younger and
older adults) showed significant improvements on the following neuropsychologi-
cal measures: verbal and nonverbal working memory (Spatial Span and Digit Span),
sustained attention and working memory (PASAT), and self-report of cognitive
functioning. Improvements were not seen in the areas of memory, nonverbal rea-
soning, or response inhibition (Brehmer et al. 2012). In a study with a clinical population of 18 adults with moderate to severe acquired brain injuries, Johansson and Tornmalm (2012) had participants use the Cogmed QM training software for 8 weeks for a total of 12 h. Findings revealed improvements on trained working memory tasks and self-reports of functioning. An unfortunate limitation of the study is that it had no control group. In another study that included a control group,
Westerberg et al. (2007) had participants use Cogmed QM for 40 min daily, 5 days
a week for 5 weeks. The treatment group performed significantly better than the
control group on Spatial and Digit Span, PASAT, and Ruff 2&7, but no differences
emerged between groups on the Stroop, Raven's Progressive Matrices, or the word
list test (see Table 1).
Table 1

Authors (program): Westerberg et al. (2007), Cogmed QM
Groups: 9 stroke patients, 9 controls
Time to onset: 12–36 months
Design: Randomized pilot design
Intervention duration: 40 min, 5 per week for 5 weeks
Outcome measures: Span board, Digit Span, Stroop, Raven's PM, word list repetitions, PASAT, RUFF, CFQ
Main findings: Significant decrease in cognitive problems as measured by the CFQ. Systematic WM training can significantly improve WM and attention more than one year post-stroke

Authors (program): Li et al. (2013), Parrot software
Groups: 12 ABI patients
Time to onset: 21.27 months
Design: Quasi-experimental pre–post test design
Intervention duration: 60 min, 10 per session, 1 session per week for 8 sessions
Outcome measures: Cognistat Attention, memory
Main findings: Significant improvement in memory and attention

Authors (program): Lundqvist et al. (2010), Cogmed QM
Groups: 10 stroke and ABI patients, 11 controls
Time to onset: 37 months
Design: Controlled crossover experimental design
Intervention duration: 45–60 min, 5 per week for 5 weeks
Outcome measures: PASAT; CWIT; Block Span; LST; Picture Span test
Main findings: Significant improvement in trained WM tasks, WM, and rated occupational performance

Authors (program): Johansson and Tornmalm (2012), Cogmed QM
Groups: 18 TBI and tumor patients
Time to onset: 7 years
Design: Prospective cohort design
Intervention duration: 30–45 min, 3 per week for 7–8 weeks
Outcome measures: Built-in parameter; CFQ
Main findings: Significant improvement on trained WM tasks

Authors (program): Björkdahl et al. (2013), Cogmed QM
Groups: 20 stroke and TBI patients, 18 controls
Time to onset: 27 weeks
Design: Randomized controlled design
Intervention duration: 30–45 min, 5 days per week for 5 weeks
Outcome measures: Digits Backward; RBMT II; WM questionnaire
Main findings: Significant improvement on Digit Span and the WM questionnaire

Authors (program): De Luca et al. (2014), cognitive PC-training
Groups: 15 stroke and TBI patients, 20 controls
Time to onset: Not specified
Design: Control design
Intervention duration: 24 sessions, 3 per week for 8 weeks
Outcome measures: RML; Attentive Matrices; MMSE; CVF; LVF; RAVLT Recall; Constructional Apraxia; ADLs; Barthel Index; HADS
Main findings: Significant cognitive improvement on nearly all the neuropsychological tests

Authors (program): Akerlund (2013), Cogmed QM
Groups: 8 patients, 10 controls
Time to onset: Not specified
Design: Randomized controlled design
Intervention duration: Baseline and follow-ups at 6 and 18 weeks
Outcome measures: Digit Span, Arithmetic, Letter-Number Sequences, Spatial Span, BNIS; DEX and HADS
Main findings: Significant improvement in WM, BNIS, and Digit Span

Authors (program): Hellgren et al. (2015), Cogmed QM
Groups: 48 ABI patients
Time to onset: 29 months
Design: Randomized controlled design (crossover)
Intervention duration: Patients tested at baseline and 20 weeks
Outcome measures: PASAT; forward and backward block repetition; LST
Main findings: Significantly improved WM, neuropsychological test scores, and self-estimated health scores

Note ADAS = Alzheimer's Disease Assessment Scale; BCPM = Behavioral Checklist of Prospective Memory Task; BNIS = Barrow Neurological Institute Screen; CAMPROMT = Cambridge Prospective Memory Test; CBTT = Corsi Block-Tapping Test; CFQ = Cognitive Failures Questionnaire; CIQ = Community Integration Questionnaire; CST = Corsi Supraspan Test; CTT = Color Trail Test; CVF = Category Verbal Fluency; CWIT = Color Word Interference Test; FAB = Frontal Assessment Battery; HADS = Hospital Anxiety and Depression Scale; LST = Listening Span Task; LVF = Letter Verbal Fluency; RAVLT = Rey Auditory Verbal Learning Test; Lawton IADL = Instrumental Activities of Daily Living Scale; min = minutes; MMSE = Mini-Mental State Examination; NCSE = Neurobehavioral Cognitive Status Examination; PASAT = Paced Auditory Serial Addition Test; Raven's PM = Raven's Progressive Matrices; RBMT = Rivermead Behavioral Memory Test; SADI = Self-Awareness of Deficit Interview; TMT = Trail Making Test; TONI-3 = Test of Nonverbal Intelligence—3rd Edition; WFT = Word Fluency Test; WM = Working Memory
Table 2 (continued)

Authors: … (2009)
Groups: patients with remote history of stroke (age = 53–70)
Design and analyses: descriptive statistics (mean, SD), pretest and post-test
Intervention: Experimental: sessions of VR intervention; Control: None
Duration: 60 min each, over 3-week period
Outcome measures: performance and clinician ratings of instrumental activities of daily living
Main findings: errors on MET-HV decreased from pre- to post-test and mean clinician ratings improved

Authors: Gamito et al. (2011)
Groups: TBI case study (age = 20)
Design and analyses: Case study; nonparametric pairwise comparisons at pretest, intermediate, and post-test
Intervention: Experimental: 10 sessions of VR consisting of finding one's way to and from a supermarket and finding figures embedded in scenery; Control: None
Duration: 10 sessions
Outcome measures: PASAT test performance
Main findings: Subject's PASAT scores improved from pretest to post-test

Authors: Fong et al. (2010)
Groups: 24 stroke, TBI, or other neurological injury patients (age = 18–63)
Design and analyses: Mann–Whitney U-test (pre-test vs. post-test)
Intervention: Experimental: 1 instruction session + 6 training sessions; Control: VR training in basic
Duration: 6 training sessions: 60 min each, 2/week for 3 weeks
Outcome measures: Cognistat test performance and in vivo task completion (withdraw money
Main findings: Improvement in reaction time and accuracy of cash withdrawals; no difference in money

Authors: Optale et al. (2010)
Groups: 15 older adults with memory deficits (mean age = 78.5) and 16 older adults in a randomized controlled trial (mean …)
Design and analyses: 2 (group) × time mixed model ANOVA
Intervention: Experimental: 36 initial sessions and 24 booster sessions; VR pathfinding exercise; Control: music therapy
Duration: 36 initial sessions for 30 min each over 3 months and 24 booster sessions for 30 min each
Outcome measures: Test performance on MMSE, Digit Span, VSR test, phonemic verbal fluency, dual test performance, cognitive estimation test, clock drawing test, Geriatric …, backward VST
Main findings: There were improvements on all outcome measure tests, but no gains on clinician ratings

Authors: Man et al. (2012)
Groups: 20 patients with questionable dementia (mean age = 80.3) and 14 randomized controls (mean age = 80.28)
Design and analyses: Two-way repeated measures ANCOVA (groups × time)
Intervention: Experimental: 10 sessions; Control: 10 sessions of paper-and-pencil memory exercises
Duration: 10 sessions for 30 min each over 4 weeks
Outcome measures: Test performance on Clinical Dementia Rating Scale, MMSE, Fuld object memory evaluation, and clinician ratings of instrumental ADLs
Main findings: The experimental (VR) group improved more on neuropsychological tests of memory and on self-report ratings of memory strategy implementation

Authors: Gerber et al. (2012)
Groups: 19 patients with remote history of TBI (mean age = 50.4)
Design and analyses: Within-subjects; descriptive statistics, correlations of standardized tests with improvement in time
Intervention: Experimental: Single session of 4 VR tasks with 3 repetitions each. Tasks were to remove tools from a workbench, compose 3-letter words, make a sandwich, and hammer nails; Control: None
Duration: 1 session with 4 tasks repeated 3 times
Outcome measures: VR task performance; Purdue Pegboard test performance; clinician ratings of Neurobehavioral Symptom Inventory and Boredom Propensity Scale
Main findings: All subjects improved on task performance from pre-test to post-test

Authors: Dvorkin et al. (in press)
Groups: TBI case study (age = 20)
Design and analyses: Case study; analysis of descriptive statistics
Intervention: Experimental: 10 sessions of VR target acquisition exercise; Control: 10 sessions of standard OT and speech therapy
Duration: 10 sessions, 40 min each
Outcome measures: Task performance (duration of on-task behavior in each session)
Main findings: Time on task improved earlier during VR sessions than on standard therapy

Authors: Larson et al. (2011)
Groups: 18 patients with severe TBI (age = 23–56)
Design and analyses: Within-subjects; descriptive statistics (mean, SD); repeated measures ANOVA of target acquisition time across trials
Intervention: Experimental: 2 sessions of VR target acquisition exercise; Control: None
Duration: 2 sessions for 40 min each over 2 days
Outcome measures: Task performance (target acquisition time) and clinician rating on modified Agitated Behavior Scale
Main findings: There was improvement on target acquisition time task performance and improvement in clinician ratings (lower modified Agitated Behavior Scale scores)

Authors: Lloyd et al. (2009)
Groups: 20 patients with brain injury or stroke (mean age = 42.5)
Design and analyses: Within-subjects; paired sample t tests of errors by condition (errorless learning versus trial and error); ANOVA (time 1 versus time 2 versus time 3)
Intervention: Experimental: One session of errorless learning route-finding exercise (demonstration trial and learning trials); Control: One session of trial-and-error learning route-finding exercise (demonstration trial and learning trials)
Duration: 1 session
Outcome measures: Task performance as measured by number of errors
Main findings: Route recall was more accurate during the experimental errorless learning condition than during the control trial-and-error learning condition

Authors: Gagliardo et al. (2013)
Groups: 10 patients with stroke (mean age = 54.8)
Design and analyses: Within-subjects; descriptive statistics (mean, SD) and t tests (pre- and post-test)
Intervention: Experimental: 40 sessions running Neuro@Home exercises (attention, categorization, and working memory); Control: None
Duration: 40 sessions for 60 min each over 8 weeks
Outcome measures: Task performance as measured by number of correct responses
Main findings: There was improvement from pre- to post-test observed on all three exercises

Authors: Devos et al. (2009)
Groups: 42 patients with stroke and 41 randomized controls (mean age = 54)
Design and analyses: Nonparametric statistical comparison (baseline, post-test, follow-up)
Intervention: Experimental: 15 sessions in VR simulator running 12 driving scenarios; Control: 15 sessions playing commercially available games involving cognitive skills required by driving
Duration: 15 sessions for 60 min each
Outcome measures: Test performance on Test Ride for Investigating Practical fitness to drive (TRIP), Belgian version
Main findings: Both groups improved on all aspects of on-road performance; VR group showed greater improvements in overall on-road performance

Authors: Sorita et al. (2012)
Groups: 13 TBI patients (mean age = 31.1) and 14 randomized controls (mean age = 31.5)
Design and analyses: Between-subjects; descriptive statistics (mean, SD); ANOVA and Mann–Whitney U-test for number of errors by task (real versus VR)
Intervention: Experimental: 4 sessions of Barrash route-learning task in virtual environment; Control: 4 sessions of Barrash route-learning task in urban environment
Duration: 4 sessions for 15 min each over 2 days
Outcome measures: Test performance on sketch map test, map recognition test, and scene arrangement test
Main findings: The real environment group scored better than the VR group only on scene arrangement; the VR group performed as well as the real environment group on the sketch map test and map recognition test

Authors: Yip and Man (2013)
Groups: 19 acquired brain injury patients (mean age = 37.8) and 18 randomized controls (mean age = 38.5)
Design and analyses: Between-subjects; descriptive statistics (mean, SD); paired and independent t tests
Intervention: Experimental: 12 sessions of VR; Control: Reading and table game activities
Duration: 12 sessions for 30–45 min each, twice per week for 5–6 weeks
Outcome measures: Test performance on Cambridge Prospective Memory Test–Chinese Version, Hong Kong List Learning Test, Frontal Assessment Battery, Word Fluency Test–Chinese Version, Color Trails Test, and in vivo task completion (remembering events)
Main findings: The VR group improved for both tests and on in vivo task completion

Authors: Man, Poon and Lam (2013)
Groups: 20 TBI patients (age = 18–55) and 20 randomized controls (age = 18–55)
Design and analyses: Between-subjects; descriptive statistics (mean, SD); repeated measures ANOVA
Intervention: Experimental: 12 sessions of VR; Control: 12 sessions of psycho-educational vocational training system
Duration: 12 sessions for 20–25 min each
Outcome measures: Test performance on WCST and Tower of London Test; in vivo task completion
Main findings: The VR group improved more on WCST; there were no differences between groups on employment conditions
Virtual environments are also increasingly being used for the management of behavioral problems and for social skills training in developmental disorder and psychiatric populations (Kandalaft et al. 2013; Park et al. 2011; Parsons et al. 2009; Rus-Calafell et al. 2014). Such programs may use virtual humans and incorporate
programming that allows the treating professional to track desired and undesired
responses/interactions. Coaching can also be integrated to capitalize on real-time
feedback. Again, the appeal of creating a more ecologically valid and modifiable
therapy session speaks to the exciting potential of VR technologies with even the
most challenging of populations.
Within the psychological specialties more focused upon the detection, diagnosis,
and treatment of neurocognitive deficits, virtual reality has increasingly been seen
as a potential aid to assist in detection and treatment of neurocognitive impairments
(Koenig et al. 2009; Marusan et al. 2006; Probosz et al. 2009). Specifically, virtual
environment-based assessments and rehabilitation programs have been explored in
depth following brain injury. For example, virtual environments have been devel-
oped that assess way-finding skills and then provide training of spatial orientation
skills via programming that allows increased complexity as participants progress
through the program (Koenig et al. 2009). Virtual reality research has been tran-
sitioning more recently to a focus upon clinical treatment of cognitive disorders
(Imam and Jarus 2014; Larson et al. 2014; Spreij 2014). This includes innovative
rehabilitation of stroke patients to reduce paresis and dyspraxias that impact
expressive speech via multisensorial brain stimulation (Probosz et al. 2009).
Various researchers have promoted the use of mental rotation paradigms—felt to be
an essential skill in multiple cognitive functions—in the neurorehabilitation setting
to enhance memory, reasoning, and problem solving in everyday life (Marusan
et al. 2006; Podzebenko et al. 2005). Some neuropsychological interventions have
been translated into virtual formats, such as the VR version of the Multiple Errands
Test (MET; Burgess et al. 2006). This virtual environment has been used to provide training of executive functions: strategic planning, cognitive flexibility, and inhibition. The MET has been validated with stroke and TBI patients (Jacoby et al. 2013).
These attempts at clinical treatment represent the next stage of transitioning VR
from research laboratories to mainstream interventions.
Millán 2010). To date, a primary focus of BCIs has revolved around motor-based activity and communication. Case examples and a few small studies have highlighted how BCI can be applied to neurorehabilitation populations such as stroke, amyotrophic lateral sclerosis, locked-in syndrome, and spinal cord injury (SCI) (Enzinger et al. 2008;
Ikegami et al. 2011; Kaufmann et al. 2013; Kiper et al. 2011; Schreuder et al. 2013;
Salisbury et al. 2015). Still, much of the technology is not ready for mainstream
implementation, and the various challenges inherent with such interventions have
been well detailed (see Danziger 2014; Hill et al. 2014; Millán et al. 2010;
Mak and Wolpaw 2009; Shih et al. 2012).
The potential of BCI beyond motor and communication augmentation has
received less attention, but is increasingly viewed as a fruitful area of application for
assessment (Allanson and Fairclough 2004; Nacke et al. 2011; Wu et al. 2010; Wu
et al. 2014) and training (Berka et al. 2007; Parsons and Courtney 2011; Parsons
and Reinebold 2012). This avenue of BCI research is consistent with increasing evidence that psychological, social, and nonmotor-based factors are key aspects of perceived quality of life following injury (Tate 2002). Studies have
begun exploring the use of BCI in nonmedical populations to recognize emotions
(Inventado et al. 2011; Jatupaiboon et al. 2013; McMahan et al. 2015a; Pham and
Tran 2012), assess specific psychological symptoms (Dutta et al. 2013), mediate
artistic expression (Fraga et al. 2013), and evaluate cognitive workload (Anderson
et al. 2011; Allison et al. 2010; McMahan et al. 2015b, c; Treder and Blankertz
2010).
Interest in BCI among individuals with SCI, particularly for augmentation of
motor functioning, has been detailed in the literature (Collinger et al. 2013; Rupp
2014; Tate 2011). In a recent review of the use of BCI in persons with SCI, Rupp
(2014) concluded that while BCIs seem to be a promising assistive technology for
individuals with high SCI, systematic investigations are needed to obtain a realistic
understanding of the feasibility of using BCIs in a clinical setting. Rupp identified
three potentially limiting factors related to feasibility that should be considered:
(1) availability of technology for signal acquisition and processing; (2) individual differences in user characteristics; and (3) infrastructure and healthcare-related
constraints.
settings. Still, there may be a need for more realistic visual paradigms. Studies are
needed to assess the impact of various levels of stimulus fidelity, immersion, and
presence upon rehabilitation efficacy. Only with such studies will we know
whether further advances in graphic feedback naturalism are important for the
progression of virtual interventions within the realm of psychological care and
assessment.
The studies discussed in this chapter involved collaboration among clinicians and experts in technology and psychology who could provide any needed assistance in a timely manner. Information technology support staff are also typically available as part of the hospital system. Further, during most studies, there are times when technology does not work properly, resulting in delayed intervention or the need to reschedule. Such complications speak to the
challenges of implementing interventions dependent upon technology within
inpatient and outpatient rehabilitation settings. Any delays in these fast-paced
settings, requiring the coordination of various disciplines, can be quite disruptive to
the milieu. The need to train staff and have support services available is paramount
when considering using advanced technology as a core component of a rehabili-
tation program.
Finally, the financial feasibility of virtual environments will largely be deter-
mined by future outcome research. Unless there is support for clinical gain in the
form of improved outcomes, decreased complications, or secondary decline in
medical costs (e.g., decreased length of stay or less use of future medical services),
cost concerns may prohibit adoption of such technologies. Mainstream imple-
mentation in rehabilitation would be a financial challenge considering the trend of
declining reimbursement for clinical services and emphasis on bundled services
with recent healthcare changes. The initial cost must be coupled with the previously
mentioned planning for technology maintenance, staff training, and statistician
support by individuals trained to analyze the data formats associated with this
technology. Additionally, virtual environments require a private space that limits
distractions that are all too frequent in rehabilitation settings. Private rooms or
dedicated areas for such interventions would be ideal, yet allocation of such space is
often a challenge.
8.3 Summary
paradigms could be systematically altered to meet changing patient care needs and
goals based on the course of recovery. Furthermore, integration of patient-specific
training prompts and cues may improve patient self-monitoring, guide problem
solving, and promote less reliance on rehabilitation staff and caregivers
(Christiansen et al. 1998; Imam and Jarus 2014; Larson et al. 2014; Spreij 2014).
The core challenges to bringing technology into a multidisciplinary rehabilitation
milieu include the initial costs of collaboration with research laboratories devel-
oping (and validating) the technologies, staff training, and confirming that these
interventions have superior outcomes to traditional care.
Part IV
Conclusions
Chapter 8
Future Prospects for a Computational
Neuropsychology
Throughout this book, there is an emphasis upon the importance of (1) enhancing
ecological validity via a move from construct-driven assessments to tests that are
representative of real-world functions—it is argued that this will proffer results that
are generalizable for prediction of the functional performance across a range of
situations; (2) the potential of computerized neuropsychological assessment devices (CNADs) to enhance standardization of administration, accuracy of timing of stimulus presentation and response latencies, ease of administration and data collection, and reliable and randomized presentation of stimuli for repeat administrations; and (3) novel technologies that allow for precise presentation and control of dynamic perceptual stimuli—providing ecologically valid assessments that combine the veridical control and rigor of laboratory measures with a verisimilitude that reflects real-life situations.
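Point (2) above, accurate timing of stimulus presentation and response latencies, is one place where implementation detail matters. The following is a minimal sketch of latency capture with a monotonic high-resolution clock; the trial callables are stand-ins for whatever presentation layer a given CNAD actually uses.

```python
# Sketch: response-latency capture with a monotonic, high-resolution clock,
# illustrating the timing precision attributed to CNADs in point (2) above.
import time

def present_trial(render_stimulus, wait_for_response):
    """Run one trial and return (response, latency in milliseconds)."""
    render_stimulus()                 # assumed callable: draws the stimulus
    t0 = time.perf_counter()          # monotonic; immune to wall-clock changes
    response = wait_for_response()    # assumed callable: blocks until input
    latency_ms = (time.perf_counter() - t0) * 1000.0
    return response, latency_ms

# Example with stand-in callables:
resp, ms = present_trial(lambda: None, lambda: "left")
print(resp, round(ms, 3))
```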
Neuropsychologists are increasingly interested in the potential for advanced
psychometrics, online computer-automated assessment methods, and neuroinfor-
matics for large-sample implementation and the development of collaborative
neuropsychological knowledgebases (Bilder 2011; Jagaroo 2009). In this chapter,
the focus will be upon using technology to develop repositories for linking neu-
ropsychological assessment results with data from neuroimaging, psychophysiol-
ogy, and genetics. It is argued that clinical neuropsychology is ready to embrace
technological advances and experience a transformation of its concepts and meth-
ods. To develop this aspect of Neuropsychology 3.0, clinical neuropsychologists
should incorporate findings from the human genome project, advances in psycho-
metric theory, and information technologies. Enhanced evidence-based science and praxis are possible if neuropsychologists do the following: (1) develop formal
definitions of neuropsychological concepts and tasks in cognitive ontologies;
An important growth area for neuropsychology is the capacity for sharing knowl-
edge gained from neuropsychological assessments with related disciplines.
Obstacles to this shared knowledge approach include the covariance among measures and the lack of operational definitions for key concepts and their interrela-
tions. The covariance that exists among neuropsychological measures designed to
assess overlapping cognitive domains limits categorical specification into
well-delineated domains. As such, neuropsychological assessment batteries may be
composed of multiple tests that measure essentially the same performance attri-
butes. According to Dodrill (1997), poor test specificity may be revealed in the
median correlations for common neuropsychological tests. For example, Dodrill
asserts that while the median correlation within domain groupings of tests was
0.52, the median correlation between groupings was 0.44. From this, Dodrill
extrapolates that the tests are not unambiguously domain specific because the
median correlations should be notably higher for the within groupings and lower for
the between groupings. Consequently, the principal assessment measures used by
practitioners may not be quantifying domains to a level of specificity that accounts
for the covariance among the measures (Dickinson and Gold 2008; Parsons et al.
2005). Future studies should look at multivariate approaches found in neuroinfor-
matics that will allow for elucidation of the covariance information latent in
neuropsychological assessment data.
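Dodrill's specificity check can be expressed computationally: correlate all test pairs in a battery, then compare the median of within-domain pairs against between-domain pairs. The following is a minimal sketch under assumed test names, domain assignments, and simulated scores.

```python
# Sketch: median within-domain vs. between-domain test correlations, the
# specificity comparison attributed to Dodrill (1997) above.
# Test names, domain map, and scores are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
tests = ["tmt_a", "tmt_b", "stroop", "cvlt", "logical_memory"]
domain = {"tmt_a": "attention", "tmt_b": "attention", "stroop": "attention",
          "cvlt": "memory", "logical_memory": "memory"}

g = rng.normal(size=(200, 1))                      # shared variance across tests
scores = g + rng.normal(size=(200, len(tests)))    # plus test-specific variance
r = np.corrcoef(scores, rowvar=False)              # 5 x 5 correlation matrix

within, between = [], []
for i in range(len(tests)):
    for j in range(i + 1, len(tests)):
        same = domain[tests[i]] == domain[tests[j]]
        (within if same else between).append(r[i, j])

print("median within-domain r:", round(float(np.median(within)), 2))
print("median between-domain r:", round(float(np.median(between)), 2))
```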
An additional concern is that revisions of traditional print-published batteries like the Wechsler Adult Intelligence Scale (WAIS) and the Wechsler Memory Scale (WMS) have fallen short in terms of backward compatibility, which may invalidate clinical
interpretations. Loring and Bauer (2010) contend that the significant alterations in
the structure and content of the WAIS-IV/WMS-IV may decrease the potential for
emotion perception (social cognition; Mathersul et al. 2009; Silverstein et al. 2007;
Williams et al. 2009). In addition to cognitive assessments, the WebNeuro battery also
includes affective assessments of emotion recognition and identification: immediate
explicit identification followed by implicit recognition (within a priming protocol).
The WebNeuro protocol was developed as a Web-based version of the IntegNeuro
computerized battery. The IntegNeuro battery was developed by a consortium of
scientists interested in establishing a standardized international database called the Brain Resource International Database (BRID; Gordon 2003; Gordon et al. 2005; Paul et al. 2005).
IntegNeuro and WebNeuro are part of the BRID project’s aim to move beyond out-
dated approaches to aggregating neuropsychological knowledgebases and develop
standardized testing approaches that facilitate the integration of normally independent
sources of data (genetic, neuroimaging, psychophysiological, neuropsychological,
and clinical). The WebNeuro platform allows for data to be acquired internationally
with a centralized database infrastructure for storage and manipulation of these data.
While the normative findings for a Web-based neuropsychological assessment
are promising (Mathersul et al. 2009; Silverstein et al. 2007; Williams et al. 2009),
the greatest potential appears to be WebNeuro's relation to the Brain Resource International Database. The data from WebNeuro are linked to insights and correlations in a standardized and integrative international database (see www.BrainResource.com; www.BRAINnet.com). The WebNeuro data are incorporated as clinical markers alongside other marker types (genetic, neuroimaging, psychophysiological, neuropsychological, and clinical) and can be updated as new marker discoveries emerge from personalized medicine.
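The integration described here can be pictured as a subject record keyed across marker types. The following is a minimal sketch of such a record structure; the field names are illustrative assumptions and do not reflect the actual BRID or WebNeuro schema.

```python
# Sketch: one subject record linking neuropsychological scores to the other
# marker types named above. Field names are illustrative, not the BRID schema.
from dataclasses import dataclass, field

@dataclass
class SubjectRecord:
    subject_id: str
    neuropsych: dict = field(default_factory=dict)   # test name -> score
    genetic: dict = field(default_factory=dict)      # polymorphism -> genotype
    psychophys: dict = field(default_factory=dict)   # channel -> summary value
    imaging: dict = field(default_factory=dict)      # region -> measure
    clinical: dict = field(default_factory=dict)     # diagnoses, ratings

record = SubjectRecord(
    subject_id="S0001",
    neuropsych={"maze_errors": 4},
    genetic={"5HTTLPR": "s/s"},
    psychophys={"startle_magnitude": 1.8},
)
print(record.subject_id, record.genetic)
```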
Earlier in this book, we discussed the limitations of construct-driven tasks and the
need for function-led neuropsychological assessments. While the original goal of
many neuropsychological assessments was lesion localization, there is increasing
need for assessments of everyday functioning. This new role for neuropsychologists
has resulted in increased emphasis upon the ecological validity of neuropsycho-
logical instruments (Chaytor et al. 2006). As a result, neuropsychologists have been
compelled to move beyond the limited generalizability of results found in
construct-driven measures to tests that more closely approximate real-world func-
tion. The function-led approach allows the neuropsychologist to move from
construct-driven assessments to tests that are representative of real-world functions
and proffer results that are generalizable for prediction of the functional perfor-
mance across a range of situations.
A number of investigators have argued that performance on traditional neu-
ropsychological construct-driven tests (e.g., Wisconsin Card Sorting Test, Stroop)
has little correspondence to activities of daily living (Bottari et al. 2009; Manchester
et al. 2004; Sbordone 2008). According to Chan et al. (2008), most of these
While little neuroinformatic work has included virtual reality immersion, neu-
roimaging studies utilizing virtual environments have been used to delineate brain
circuits involved in sustained anxiety to unpredictable stressors in humans. In a
study of contextual fear conditioning, Alvarez et al. (2008) used a Virtual Office
and fMRI to investigate whether the same brain mechanisms that underlie con-
textual fear conditioning in animals are also found in humans. Results suggested
that contextual fear conditioning in humans was consistent with preclinical findings
in rodents. Specifically, findings support hot affective processing in that the medial
aspect of the amygdala had afferent and efferent connections that included input
from the orbitofrontal cortex. In another study using a Virtual Office, Glotzbach-
Schoon et al. (2013) assessed the modulation of contextual fear conditioning and
extinction by 5HTTLPR (serotonin-transporter-linked polymorphic region) and
NPSR1 (neuropeptide S receptor 1) polymorphisms. Results revealed that both the
5HTTLPR and the NPSR1 polymorphisms were related to hot affective (implicit)
processing via a fear-potentiated startle. There was no effect of the 5HTTLPR
polymorphism on cold cognitive (explicit) ratings of anxiety. Given the ability of
virtual environments to place participants in experimentally controlled yet con-
textually relevant situations, there appears to be promise in applying this platform to
future translational studies into contextual fear conditioning. The ability to differ-
entiate cold cognitive from hot affective processing could be used to develop on-
tologies for future research.
multiple medical and health educational projects (Boulos et al. 2007). Although these
programs focus primarily on the dissemination of medical information and the
training of clinicians, a handful of private islands in Second Life (e.g., Brigadoon for
Asperger’s syndrome; Live2give for cerebral palsy) have been created for thera-
peutic purposes. In a recent article by Gorini et al. (2008), the authors describe such
sites and the development and implementation of a form of tailored immersive
e-therapy in which current technologies (e.g., virtual worlds; bio and activity sensors;
and personal digital assistants) facilitate the interaction between real and 3-D virtual
worlds and may increase treatment efficacy. In a recent article in Science, Bainbridge
(2007) discussed the robust potential of virtual worlds for research in the social and
behavioral sciences. For social and behavioral science researchers, virtual worlds
reflect developing cultures, each with an emerging ethos and supervenient social
institutions (for a discussion of supervenience see Hare 1984). In addition to the
general social phenomena emerging from virtual world communities, virtual worlds
provide novel opportunities for studying them. According to Bainbridge (2007),
virtual worlds proffer environments that facilitate the creation of online laboratories
that can recruit potentially thousands of research subjects in an automated and
economically feasible fashion. Virtual worlds like Second Life offer scripting and
graphics tools that allow even a novice computer user the means necessary for
building a virtual laboratory. Perhaps even more important is the fact that social
interactions in online virtual worlds (e.g., Second Life) appear to reflect social norms
and interactions found in the physical world (Yee et al. 2007). Finally, there is the
potential of virtual worlds to improve access to medical rehabilitation. Klinger and
Weiss (2009) describe the evolution of virtual worlds along two dimensions:
(1) the number of users and (2) the distance between the users. According to Klinger
and Weiss, single user and locally used virtual worlds have developed into three
additional venues: (1) multiple users located in the same setting, (2) single users
remotely located, and (3) multiple users remotely located. According to Klinger and
Weiss, single user, locally operated virtual worlds will continue to be important for
rehabilitation within a clinical or educational setting. However, the literature, to date,
has been limited to descriptions of system development and reports of small pilot
studies (Brennan et al. 2009). It is anticipated that this trend is changing and that future years will see evidence of the effectiveness of such virtual worlds for therapy.
4 Computational Neuropsychology
front of the participant) and woman walking in the kitchen (center). The conditions
were designed to assess reaction times (simple and complex), selective attention
(matching the auditory and visual stimuli), and external interference control (en-
vironmental distractors). Computational neuropsychologists could compare
construct-driven virtual environment performance with and without distractors to
computer-automated neuropsychological assessment devices.
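The comparison proposed here reduces to a paired contrast of each participant's performance with and without distractors. The following is a minimal sketch with simulated reaction times; the sample size and effect magnitude are arbitrary assumptions.

```python
# Sketch: paired comparison of performance with vs. without environmental
# distractors, the contrast proposed above. Reaction times are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rt_no_distractor = rng.normal(650, 60, size=30)                 # ms, hypothetical
rt_distractor = rt_no_distractor + rng.normal(45, 30, size=30)  # assumed slowing

res = stats.ttest_rel(rt_distractor, rt_no_distractor)
cost = float(np.mean(rt_distractor - rt_no_distractor))
print(f"mean distractor cost: {cost:.1f} ms, "
      f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```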
Recently, virtual environments have been applied to the assessment of both “Cold”
and “Hot” processes using combat-related scenarios (Armstrong et al. 2013;
Parsons et al. 2013). The addition of virtual environments allows neuropsycholo-
gists to move beyond the ethical concerns related to placing participants into
real-world situations with hazardous contexts. The goal of these platforms is to
assess the impact of hot affective arousal upon cold cognitive processes. For
example, Parsons et al. (2013) have developed a Virtual Reality Stroop Task
(VRST) in which the participant is immersed in a simulated High Mobility
Multipurpose Wheeled Vehicle (HMMWV) and passes through zones with alter-
nating low threat (driving down a deserted desert road) and high threat (gunfire,
explosions, and shouting, among other stressors), while construct-driven stimuli (e.g., Stroop stimuli) are presented on the windshield. They found that the
high-threat zones created a greater level of psychophysiological arousal (heart rate,
skin conductance, respiration) than did low-threat zones. Findings from these
studies also provided data regarding the potential of military-relevant virtual environments for the measurement of supervisory attentional processing (Parsons et al. 2013). Analyses of the effect of threat level on the color–word and interference scores revealed main effects of threat level and condition. Findings from the
virtual environment paradigm support the perspective that (1) high information load
tasks used for cold cognitive processing may be relatively automatic in controlled
circumstances—for example, in low-threat zones with little activity; and (2) the
total available processing capacities may be decreased by other hot affective factors
such as arousal (e.g., threat zones with a great deal of activity). In a replication
study, Armstrong et al. (2013) established the preliminary convergent and dis-
criminant validity of the VRST with an active-duty military sample.
In addition to virtual environment-based neuropsychological assessments using
driving simulators, a number of other military-relevant virtual environments have
emerged for neurocognitive assessment of Cold and Hot processes. For example,
Parsons et al. (2012, 2014) immersed participants in a Middle Eastern city and exposed them to a cold cognitive processing task (e.g., paced auditory serial
addition test) as they followed a fire team on foot through safe and ambush (e.g.,
hot affective—bombs, gunfire, screams, and other visual and auditory forms of
References
Allison, B. Z., Brunner, C., Kaiser, V., Müller-Putz, G. R., Neuper, C., & Pfurtscheller, G. (2010).
Toward a hybrid brain-computer interface based on imagined movement and visual attention.
Journal of Neural Engineering, 7(2).
Alphonso, A. L., Monson, B. T., Zeher, M. J., Armiger, R. S., Weeks, S. R., Burck, J. M., & Tsao,
J. W. (2012). Use of a virtual integrated environment in prosthetic limb development and
phantom limb pain. Studies in Health Technology and Informatics, 181, 305–309.
Alvarez, R. P., Johnson, L., & Grillon, C. (2007). Contextual-specificity of short-delay extinction
in humans: Renewal of fear-potentiated startle in a virtual environment. Learning and Memory,
14(4), 247–253.
Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). New York: Macmillan.
Anderson, E. W., Potter, K. C., Matzen, L. E., Shepherd, J. F., Preston, G. A., & Silva, C. T.
(2011). A user study of visualization effectiveness using EEG and cognitive load. Computer
Graphics Forum, 30, 791–800.
Anthony, W. Z., Heaton, R. K., & Lehman, R. A. W. (1980). An attempt to cross-validate two
actuarial systems for neuropsychological test interpretation. Journal of Consulting and Clinical
Psychology, 48, 317–326.
Ardila, A. (2008). On the evolutionary origins of executive functions. Brain and Cognition, 68,
92–99.
Armstrong, C., Reger, G., Edwards, J., Rizzo, A., Courtney, C., & Parsons, T. (2013). Validity of
the Virtual Reality Stroop Task (VRST) in active duty military. Journal of Clinical and
Experimental Neuropsychology, 35, 113–123.
Asimakopulos, J., Boychuck, Z., Sondergaard, D., Poulin, V., Ménard, I., & Korner-Bitensky, N.
(2012). Assessing executive function in relation to fitness to drive: A review of tools and their
ability to predict safe driving. Australian Occupational Therapy Journal, 59, 402–427.
Avis, K., Gamble, K., & Schwebel, D. (2014). Does excessive daytime sleepiness affect children’s
pedestrian safety? SLEEP, 37, 283–287.
Azad, N., Amos, S., Milne, K., & Power, B. (2012). Telemedicine in a rural memory disorder
clinic-remote management of patients with dementia. Canadian Geriatrics Journal, 15,
96–100.
Baas, J. M. P., van Ooijen, L., Goudriaan, A., & Kenemans, J. L. (2008). Failure to condition to a
cue is associated with sustained contextual fear. Acta Psychologica, 127(3), 581–592.
Backs, R. W., & Seljos, K. A. (1994). Metabolic and cardiorespiratory measures of mental effort:
The effects of level of difficulty in a working memory task. International Journal of
Psychophysiology, 16, 57–68.
Baddeley, A. (1981). The cognitive psychology of everyday life. British Journal of Psychology, 72
(2), 257–269.
Baddeley, A. (2011). How does emotion influence working memory? Attention, representation,
and human performance: Integration of cognition, emotion, and motivation, 3–18.
Baddeley, A. (2012). How does emotion influence working memory? In S. Masmoudi, D. Y. Dai,
& A. Naceur (Eds.), Attention, representation, and human performance: Integration of
cognition, emotion, and motivation (pp. 3–18). Psychology Press.
Bakdash, J. Z., Linkenauger, S. A., & Proffitt, D. (2008). Comparing decision-making and control for learning a virtual environment: Backseat drivers learn where they are going. Proceedings of
the Human Factors and Ergonomics Society Annual Meeting, 52, 2117–2121.
Banaji, M. R., & Crowder, R. G. (1989). The bankruptcy of everyday memory. American
Psychologist, 44(9), 1185.
Bangirana, P., Giordani, B., John, C. C., Page, C., Opoka, R. O., & Boivin, M. J. (2009).
Immediate neuropsychological and behavioral benefits of computerized cognitive rehabilitation
in Ugandan pediatric cerebral malaria survivors. Journal of Developmental and Behavioral
Pediatrics: JDBP, 30(4), 310–318.
Bangirana, P., Allebeck, P., Boivin, M. J., John, C. C., Page, C., Ehnvall, A., & Musisi, S. (2011).
Cognition, behaviour and academic skills after cognitive rehabilitation in Ugandan children
surviving severe malaria: A randomised trial. BMC Neurology, 11(1), 96.
Barkley, R. A., Anderson, D. L., & Kruesi, M. (2007). A pilot study of the effects of atomoxetine
on driving performance in adults with ADHD. Journal of Attention Disorders, 10, 306–316.
Barkley, R. A., & Murphy, K. R. (2011). The nature of executive function (EF) deficits in daily life
activities in adults with ADHD and their relationship to performance on EF tests. Journal of
Psychopathology and Behavioral Assessment, 33(2), 137–158.
Barkley, R., Murphy, K., O’Connell, T., Anderson, D., & Connor, D. (2006). Effects of two doses of alcohol on simulator driving performance in adults with attention-deficit/hyperactivity disorder. Neuropsychology, 20, 77–87.
Barkley, R. A., Murphy, K. R., O’Connell, T., & Connor, D. F. (2005). Effects of two doses of
methylphenidate on simulator driving performance in adults with attention deficit hyperactivity
disorder. Journal of Safety Research, 36, 121–131.
Barr, W. B. (2008). Historical development of the neuropsychological test battery. In J. E. Morgan
& J. H. Ricker (Eds.), Textbook of clinical neuropsychology (pp. 3–17). New York: Taylor &
Francis.
Barra, J., Laou, L., Poline, J. B., Lebihan, D., & Berthoz, A. (2012). Does an oblique/slanted
perspective during virtual navigation engage both egocentric and allocentric brain strategies?
PLoS ONE, 7, e49537.
Barrett, A. M., Buxbaum, L. J., Coslett, H. B., Edwards, E., Heilman, K. M., Hillis, A. E., &
Robertson, I. H. (2006). Cognitive rehabilitation interventions for neglect and related disorders:
moving from bench to bedside in stroke patients. Journal of Cognitive Neuroscience, 18(7),
1223–1236.
Barton, C., Morris, R., Rothlind, J., & Yaffe, K. (2011). Video-telemedicine in a memory disorders
clinic: evaluation and management of rural elders with cognitive impairment. Telemedicine and
e-Health, 17(10), 789–793.
Bartram, D., & Bayliss, R. (1984). Automated testing: Past, present and future. Journal of
Occupational Psychology, 57(3), 221–237.
Bauer, R. M., Iverson, G. L., Cernich, A. N., Binder, L. M., Ruff, R. M., & Naugle, R. I. (2012).
Computerized neuropsychological assessment devices: Joint position paper of the American
Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. The
Clinical Neuropsychologist, 26(2), 177–196.
Baxendale, S., & Thompson, P. (2010). Beyond localization: the role of traditional neuropsy-
chological tests in an age of imaging. Epilepsia, 51(11), 2225–2230.
Bayless, J. D., Varney, N. R., & Roberts, R. J. (1989). Tinker Toy Test performance and
vocational outcome in patients with closed-head injuries. Journal of Clinical and Experimental
Neuropsychology, 11, 913–917.
Beaudoin, M., & Desrichard, O. (2011). Are memory self-efficacy and memory performance
related? A meta-analysis. Psychological Bulletin, 137, 211–241.
Beaumont, J. G. (1975). The validity of the Category Test administered by on-line computer.
Journal of Clinical Psychology, 31, 458–462.
Bechara, A., Damasio, A. R., Damasio, H., & Anderson, S. W. (1994). Insensitivity to future
consequences following damage to human prefrontal cortex. Cognition, 50, 7–15.
Bechara, A., Tranel, D., Damasio, H., & Damasio, A. R. (1996). Failure to respond autonomically
to anticipated future outcomes following damage to prefrontal cortex. Cerebral Cortex, 6, 215–
225.
Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding advantageously before
knowing the advantageous strategy. Science, 275, 1293–1295.
Bechara, A., Damasio, H., Tranel, D., & Anderson, S. W. (1998). Dissociation of working
memory from decision making within human prefrontal cortex. Journal of Neuroscience, 18,
428–437.
Bechara, A., Damasio, H., & Damasio, A. R. (2000). Emotion, decision making, and the
orbitofrontal cortex. Cerebral Cortex, 10, 295–307.
Bechara, A. (2004). The role of emotion in decision-making: evidence from neurological patients
with orbitofrontal damage. Brain and Cognition, 55(1), 30–40.
Binet, A., & Simon, T. (1905). Méthodes nouvelles pour le diagnostic du niveau intellectuel des
anormaux (New methods to diagnose the intellectual level of abnormals). L’Année
Psychologique, 11, 245–366.
Binet, A., & Simon, T. (1908). Le développement de l’intelligence chez les enfants (The
development of intelligence in children). L’Année Psychologique, 14, 1–94.
Bioulac, S., Lallemand, S., Rizzo, A., Philip, P., Fabrigoule, C., & Bouvard, M. P. (2012). Impact
of time on task on ADHD patients’ performances in a virtual classroom. European Journal of
Paediatric Neurology, 16, 514–521.
Boake, C. (2003). Stages in the history of neuropsychological rehabilitation. Neuropsychological
rehabilitation: Theory and practice, 11–21.
Bohil, C. J., Alicea, B., & Biocca, F. A. (2011). Virtual reality in neuroscience research and
therapy. Nature Reviews Neuroscience, 12, 752–762.
Boivin, M. J., Busman, R. A., Parikh, S. M., Bangirana, P., Page, C. F., Opoka, R. O., & Giordani,
B. (2010). A pilot study of the neuropsychological benefits of computerized cognitive
rehabilitation in Ugandan children with HIV. Neuropsychology, 24(5), 667–673.
Boller, F. (1998). History of the International Neuropsychological Symposium: a reflection of the
evolution of a discipline. Neuropsychologia, 37(1), 17–26.
Bolles, M., & Goldstein, K. (1938). A study of the impairment of “abstract behavior” in
schizophrenic patients. Psychiatric Quarterly, 12, 42–65.
Boller, F., & Forbes, M. M. (1998). History of dementia and dementia in history: an overview.
Journal of the Neurological Sciences, 158(2), 125–133.
Bonelli, R. M., & Cummings, J. L. (2007). Frontal-subcortical circuitry and behavior. Dialogues in
Clinical Neuroscience, 9(2), 141.
Bottari, C., Dassa, C., Rainville, C., & Dutil, E. (2009). The criterion-related validity of the IADL
Profile with measures of executive functions, indices of trauma severity and sociodemographic
characteristics. Brain Injury, 23, 322–335.
Boulos, M. N., Wheeler, S., Tavares, C., & Jones, R. (2011). How smartphones are changing the
face of mobile and participatory healthcare: an overview, with example from eCAALYX.
BioMedical Engineering OnLine, 10, 24.
Boutcher, Y. N., & Boutcher, S. H. (2006). Cardiovascular response to Stroop: Effect of verbal
response and task difficulty. Biological Psychology, 73, 235–241.
Bowen, A., & Lincoln, N. B. (2007). Cognitive rehabilitation for spatial neglect following stroke.
Cochrane Database of Systematic Reviews, 2, CD003586.
Bowman, C. H., Evans, C. E. Y., & Turnbull, O. H. (2005). Artificial time constraints on the Iowa
Gambling Task: The effects on behavioral performance and subjective experience. Brain and
Cognition, 57(1), 21–25.
Brehmer, Y., Westerberg, H., & Bäckman, L. (2012). Working-memory training in younger and
older adults: training gains, transfer, and maintenance. Frontiers in Human Neuroscience, 6, 63.
Brinton, G., & Rouleau, R. A. (1969). Automating the Hidden and Embedded Figure Tests.
Perceptual and Motor Skills, 29, 401–402.
Broca, P. (1865). Sur le siège de la faculté du langage articulé. Bulletins de la Société
d’Anthropologie de Paris, 6, 337–339.
Brock, L. L., Rimm-Kaufman, S. E., Nathanson, L., & Grimm, K. J. (2009). The contributions of
“hot” and “cool” executive function to children’s academic achievement, learning-related
behaviors, and engagement in kindergarten. Early Childhood Research Quarterly, 24,
337–349.
Broglio, S. P., Ferrara, M. S., Macciocchi, S. N., Baumgartner, T. A., & Elliott, R. (2007).
Test-retest reliability of computerized concussion assessment programs. Journal of Athletic
Training, 42(4), 509.
Brookings, J. B., Wilson, G. F., & Swain, C. R. (1996). Psychophysiological responses to changes
in workload during simulated air traffic control. Biological Psychology, 42, 361–377.
Brouillette, R. M., Foil, H., Fontenot, S., Correro, A., Allen, R., Martin, C. K., & Keller, J. N.
(2013). Feasibility, reliability, and validity of a smartphone based application for the
assessment of cognitive function in the elderly. PLoS ONE, 8(6), e65925.
Bruce, D. (1985). On the origin of the term “Neuropsychology”. Neuropsychologia, 23, 813–814.
Brunswik, E. (1955). Symposium on the probability approach in psychology: Representative
design and probabilistic theory in a functional psychology. Psychological Review, 62, 193–
217.
Buelow, M. T., & Suhr, J. A. (2009). Construct validity of the Iowa Gambling Task.
Neuropsychology Review, 19(1), 102–114.
Burda, P. C., Starkey, T. W., & Dominguez, F. (1991). Computer administered treatment of
psychiatric inpatients. Computers in Human Behavior, 7(1), 1–5.
Burgess, N., Maguire, E. A., Spiers, H. J., & O’Keefe, J. (2001). A temporoparietal and prefrontal
network for retrieving the spatial context of lifelike events. Neuroimage, 14(2), 439–453.
Burgess, N., Maguire, E. A., & O’Keefe, J. (2002). The human hippocampus and spatial and
episodic memory. Neuron, 35(4), 625–641.
Burgess, P. W. (2000). Real-world multitasking from a cognitive neuroscience perspective. In
Control of cognitive processes: Attention and performance XVIII (pp. 465–472).
Burgess, P. W., Alderman, N., Forbes, C., Costello, A., Coates, L., Dawson, D. R., et al. (2006).
The case for the development and use of “ecologically valid” measures of executive function in
experimental and clinical neuropsychology. Journal of the International Neuropsychological
Society, 12(02), 194–209.
Burgess, P. W., Veitch, E., Costello, A., & Shallice, T. (2000). The cognitive and neuroanatomical
correlates of multitasking. Neuropsychologia, 38, 848–863.
Buxbaum, L., Dawson, A., & Linsley, D. (2012). Reliability and validity of the virtual reality
lateralized attention test in assessing hemispatial neglect in right-hemisphere stroke.
Neuropsychology, 26, 430–441.
Cain, A. E., Depp, C. A., & Jeste, D. V. (2009). Ecological momentary assessment in aging
research: A critical review. Journal of Psychiatric Research, 43, 987–996.
Calabria, M., Manenti, R., Rosini, S., Zanetti, O., Miniussi, C., & Cotelli, M. (2011). Objective
and subjective memory impairment in elderly adults: a revised version of the everyday memory
questionnaire. Aging Clinical and Experimental Research, 23(1), 67–73.
Calhoun, V. D., & Pearlson, G. D. (2012). A selective review of simulated driving studies:
Combining naturalistic and hybrid paradigms, analysis approaches, and future directions.
NeuroImage, 59, 25–35.
Calvert, E. J., & Waterfall, R. C. (1982). A comparison of conventional and automated
administration of Raven’s Standard Progressive Matrices. International Journal of
Man-Machine Studies, 17, 305–310.
Camara, W. J., Nathan, J. S., & Puente, A. E. (2000). Psychological test usage: Implications in
professional psychology. Professional Psychology: Research and Practice, 31, 141–154.
Campbell, Z., Zakzanis, K. K., Jovanovski, D., Joordens, S., Mraz, R., & Graham, S. J. (2009).
Utilizing virtual reality to improve the ecological validity of clinical neuropsychology: An
fMRI case study elucidating the neural basis of planning by comparing the Tower of London
with a three-dimensional navigation task. Applied Neuropsychology, 16, 295–306.
Canini, M., Battista, P., Della Rosa, P. A., Catricalà, E., Salvatore, C., Gilardi, M. C., &
Castiglioni, I. (2014). Computerized neuropsychological assessment in aging: Testing efficacy
and clinical ecology of different interfaces. Computational and Mathematical Methods in
Medicine.
Canty, A. L., Fleming, J., Patterson, F., Green, H. J., Man, D., & Shum, D. H. (2014). Evaluation
of a virtual reality prospective memory task for use with individuals with severe traumatic brain
injury. Neuropsychological Rehabilitation, 24, 238–265.
Cao, X., Douguet, A. S., Fuchs, P., & Klinger, E. (2010). Designing an ecological virtual task in
the context of executive functions: a preliminary study. Proceedings of the 8th International
Conference on Disability, Virtual Reality and Associated Technologies (vol. 31, pp. 71–78).
Capuana, L. J., Dywan, J., Tays, W. J., Elmers, J. L., Witherspoon, R., & Segalowitz, S. J. (2014).
Factors influencing the role of cardiac autonomic regulation in the service of cognitive control.
Biological Psychology, 102, 88–97.
Carelli, L., Morganti, F., Weiss, P. L., Kizony, R., & Riva, G. (2008). A virtual reality paradigm
for the assessment and rehabilitation of executive function deficits post stroke: feasibility study.
Virtual Rehabilitation (pp. 99–104). IEEE.
Carlsson, A. (2001). A paradigm shift in brain research. Science, 294, 1021–1024.
Carroll, D., Turner, J. R., & Hellawell, J. C. (1986). Heart rate and oxygen consumption during
active psychological challenge: The effects of level of difficulty. Psychophysiology, 23,
174–181.
Carter, S., & Pasqualini, M. C. S. (2004). Stronger autonomic response accompanies better
learning: a test of Damasio’s somatic marker hypothesis. Cognition and Emotion, 18, 901–911.
Cernich, A. N., Brennan, D. M., Barker, L. M., & Bleiberg, J. (2007). Sources of error in
computerized neuropsychological assessment. Archives of Clinical Neuropsychology, 22,
39–48.
Chan, R. C. K., Shum, D., Toulopoulou, T., & Chen, E. Y. H. (2008). Assessment of executive
functions: Review of instruments and identification of critical issues. Archives of Clinical
Neuropsychology, 23(2), 201–216.
Chaytor, N., & Schmitter-Edgecombe, M. (2003). The ecological validity of neuropsychological
tests: A review of the literature on everyday cognitive skills. Neuropsychology Review, 13,
181–197.
Chaytor, N., Schmitter-Edgecombe, M., & Burr, R. (2006). Improving the ecological validity of
executive functioning assessment. Archives of Clinical Neuropsychology, 21(3), 217–227.
Chelune, G. J., & Moehle, K. A. (1986). Neuropsychological assessment and everyday
functioning. In D. Wedding, A. M. Horton, & J. Webster (Eds.), The neuropsychology
handbook (pp. 489–525). New York: Springer.
Choca, J. P., & Morris, J. (1992). Administering the Category Test by computer: Equivalence of
results. Clinical Neuropsychologist, 6, 9–15.
Chrastil, E. R., & Warren, W. H. (2012). Active and passive contributions to spatial learning.
Psychonomic Bulletin and Review, 19, 1–23.
Christensen, A. L. (1975). Luria’s neuropsychological investigation (2nd ed.). Copenhagen,
Denmark: Munksgaard.
Christensen, A. L. (1996). Alexandr Romanovich Luria (1902–1977): Contributions to
neuropsychological rehabilitation. Neuropsychological Rehabilitation, 6(4), 279–304.
Cicerone, K. D. (2008). Principles in evaluating cognitive rehabilitation research. In D. T. Stuss,
G. Winocur, & I. H. Robertson (Eds.), Cognitive neurorehabilitation (pp. 106–118). New
York: Cambridge University Press.
Ciemins, E. L., Holloway, B., Jay Coon, P., McClosky-Armstrong, T., & Min, S. J. (2009).
Telemedicine and the Mini-Mental State Examination: assessment from a distance.
Telemedicine and e-Health, 15(5), 476–478.
Cincotti, F., Mattia, D., Aloise, F., Bufalari, S., Schalk, G., Oriolo, G., & Babiloni, F. (2008).
Non-invasive brain-computer interface system: Towards its application as assistive technology.
Brain Research Bulletin, 75(6), 796–803.
Cipresso, P., La Paglia, F., La Cascia, C., Riva, G., Albani, G., & La Barbera, D. (2013). Break in
volition: A virtual reality study in patients with obsessive-compulsive disorder. Experimental
Brain Research, 229, 443–449.
Clancy, T. A., Rucklidge, J. J., & Owen, D. (2010). Road-crossing safety in virtual reality: A
comparison of adolescents with and without ADHD. Journal of Clinical Child and Adolescent
Psychology, 35, 203–215.
Clark, L., Manes, F., Antoun, N., Sahakian, B. J., & Robbins, T. W. (2003). The contributions of
lesion laterality and lesion volume to decision-making impairment following frontal lobe
damage. Neuropsychologia, 41, 1474–1483.
Clark, L., & Manes, F. (2004). Social and emotional decision-making following frontal lobe
injury. Neurocase, 10(5), 398–403.
Clifford, G. D., & Clifton, D. (2012). Wireless technology in disease management and medicine.
The Annual Review of Medicine, 63, 479–492.
CogState (2011). CogState research manual (Version 6.0). New Haven: CogState Ltd.
Cohen, S., & Pressman, S. D. (2006). Positive affect and health. Current Directions in
Psychological Science, 15(3), 122–125.
Coley, N., Andrieu, S., Jaros, M., Weiner, M., Cedarbaum, J., & Vellas, B. (2011). Suitability of
the clinical dementia rating-sum of boxes as a single primary endpoint for Alzheimer’s disease
trials. Alzheimer’s & Dementia, 7, 602–610.
Collie, A., Maruff, P., Falleti, M., Silbert, B., & Darby, D. G. (2002). Determining the extent of
cognitive change following coronary artery bypass grafting: A review of available statistical
procedures. Annals of Thoracic Surgery, 73, 2005–2011.
Collie, A., & Maruff, P. (2003). Computerised neuropsychological testing. British Journal of
Sports Medicine, 37(1), 2–2.
Collie, A., Maruff, P., Makdissi, M., McCrory, P., McStephen, M., & Darby, D. (2003). CogSport:
reliability and correlation with conventional cognitive tests used in postconcussion medical
evaluations. Clinical Journal of Sport Medicine, 13(1), 28–32.
Collinger, J. L., Boninger, M. L., Bruns, T. M., Curley, K., Wang, W., & Weber, D. J. (2013).
Functional priorities, assistive technology, and brain-computer interfaces after spinal cord
injury. Journal of Rehabilitation Research and Development, 50(2), 145–159.
Comper, P., Bisschop, S. M., Carnide, N., & Tricco, A. (2005). A systematic review of treatments
for mild traumatic brain injury. Brain Injury, 19(11), 863–880.
Conkey, R. (1938). Psychological changes associated with head injuries. Archives of Psychology,
33, 5–22.
Conway, M. A. (1991). In defense of everyday memory. American Psychologist, 46, 19–26.
Cook, L., Hanten, G., Orsten, K., Chapman, S., Li, X., Wilde, E., et al. (2013). Effects of moderate
to severe Traumatic Brain Injury on anticipating consequences of actions in adults: a
preliminary study. Journal of the International Neuropsychological Society, 19, 508–517.
Costa, L. (1983). Clinical neuropsychology: A discipline in evolution. Journal of Clinical
Neuropsychology, 5, 1–11.
Costa, L., & Spreen, O. (Eds.). (1985). Studies in neuropsychology: Selected papers of Arthur
Benton. New York: Oxford University Press.
Courtney, C., Dawson, M., Rizzo, A., Arizmendi, B., & Parsons, T. D. (2013). Predicting
navigation performance with psychophysiological responses to threat in a virtual environment.
Lecture Notes in Computer Science, 8021, 129–138.
Cox, D., Moore, M., Burket, R., Merkel, R., Mikami, A., & Kovatchev, B. (2008). Rebound
effects with long-acting amphetamine or methylphenidate stimulant medication preparations
among adolescent male drivers with Attention-Deficit/Hyperactivity Disorder. Journal of Child
and Adolescent Psychopharmacology, 18, 1–10.
Critchley, H. D. (2005). Neural mechanisms of autonomic, affective, and cognitive integration.
The Journal of Comparative Neurology, 493, 154–166.
Cromwell, H. C., & Panksepp, J. (2011). Rethinking the cognitive revolution from a neural
perspective: how overuse/misuse of the term ‘cognition’ and the neglect of affective controls in
behavioral neuroscience could be delaying progress in understanding the BrainMind.
Neuroscience and Biobehavioral Reviews, 35, 2026–2035.
Crone, E. A., Somsen, R. J. M., van Beek, B., & van der Molen, M. W. (2004). Heart rate and skin
conductance analysis of antecedents and consequences of decision making. Psychophysiology,
41, 531–540.
Crook, T. H., Ferris, S., & McCarthy, M. (1979). The Misplaced-Objects Task: A brief test for
memory dysfunction in the aged. Journal of the American Geriatrics Society, 27, 284–287.
Crook, T. H., Ferris, S. H., McCarthy, M., & Rae, D. (1980). Utility of digit recall tasks for
assessing memory in the aged. Journal of Consulting and Clinical Psychology, 48, 228–233.
Crook, T. H., Kay, G. G., & Larrabee, G. J. (2009). Computer-based cognitive testing. In I. Grant
& K. M. Adams (Eds.), Neuropsychological assessment of neuropsychiatric and neuromedical
disorders (3rd ed., pp. 84–100). New York: Oxford University Press.
Cubelli, R. (2005). The history of neuropsychology according to Norman Geschwind: Continuity
and discontinuity in the development of science. Cortex, 41(2), 271–274.
Cuberos-Urbano, G., Caracuel, A., Vilar-López, R., Valls-Serrano, C., Bateman, A., &
Verdejo-García, A. (2013). Ecological validity of the Multiple Errands Test using predictive
models of dysexecutive problems in everyday life. Journal of Clinical and Experimental
Neuropsychology, 35, 329–336.
Cullum, C. M., Weiner, M. F., Gehrmann, H. R., & Hynan, L. S. (2006). Feasibility of
telecognitive assessment in dementia. Assessment, 13(4), 385–390.
Cullum, C. M., & Grosch, M. G. (2012). Teleneuropsychology. In K. Myers & C. Turvey (Eds.),
Telemental health: Clinical, technical and administrative foundations for evidence-based
practice (pp. 275–294). Amsterdam: Elsevier.
Cullum, C. M., Hynan, L. S., Grosch, M., Parikh, M., & Weiner, M. F. (2014). Teleneuropsychology:
Evidence for video teleconference-based neuropsychological assessment. Journal of the
International Neuropsychological Society, 20(10), 1028–1033.
Cummings, J., Gould, H., & Zhong, K. (2012). Advances in designs for Alzheimer’s disease
clinical trials. American Journal of Neurodegenerative Disease, 1(3), 205–216.
Cyr, A., Stinchcombe, A., Gagnon, S., Marshall, S., Hing, M. M., & Finestone, H. (2009). Driving
difficulties of brain-injured drivers in reaction to high-crash-risk simulated road events: A
question of impaired divided attention? Journal of Clinical and Experimental
Neuropsychology, 31, 472–482.
Dale, O., & Hagen, K. B. (2006). Despite technical problems personal digital assistants outperform
pen and paper when collecting patient diary data. Journal of Clinical Epidemiology, 60(1),
8–17.
Damasio, A. R. (1994). Descartes’ error: Emotion, rationality and the human brain. New York:
GP Putnam.
Damasio, A. R. (1996). The somatic marker hypothesis and the possible functions of the prefrontal
cortex. Philosophical Transactions of the Royal Society of London—Series B: Biological
Sciences, 351, 1413–1420.
Damasio, A. (2008). Descartes’ error: Emotion, reason and the human brain. Random House.
Daniel, M. H. (2012). Equivalence of Q-interactive™ administered cognitive tasks: CVLT-II and
selected D-KEFS subtests, Q-Interactive Tech. Rep. No. 3. Bloomington, MN: NCS Pearson,
Inc.
Daniel, M. H. (2013). Equivalence of Q-interactive™ and paper administration of WMS®-IV
cognitive tasks, Q-Interactive Tech. Rep. No. 6. Bloomington, MN: NCS Pearson, Inc.
Davis, A., Avis, K., & Schwebel, D. (2013). The effects of acute sleep restriction on adolescents’
pedestrian safety in a virtual environment. Journal of Adolescent Health, 53, 785–790.
Davidson, R. J., & Sutton, S. K. (1995). Affective neuroscience: The emergence of a discipline.
Current Opinion in Neurobiology, 5, 217–224.
Dawson, D. R., Anderson, N. D., Burgess, P., Cooper, E., Krpan, K. M., & Stuss, D. T. (2009).
Further development of the Multiple Errands Test: Standardized scoring, reliability, and
ecological validity for the Baycrest version. Archives of Physical Medicine and Rehabilitation,
90, S41–S51.
Danziger, Z. (2014). A reductionist approach to the analysis of learning in brain-computer
interfaces. Biological Cybernetics, 108(2), 183–201.
Delaney, J. P. A., & Brodie, D. A. (2000). Effects of short-term psychological stress on the time
and frequency domains of heart rate variability. Perceptual and Motor Skills, 91, 515–524.
Delis, D. C., Kaplan, E., & Kramer, J. H. (2001). Delis-Kaplan executive function system
(D-KEFS). Psychological Corporation.
Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (2000). CVLT-II. New York: The
Psychological Corporation.
De Luca, R., Calabrò, R. S., Gervasi, G., DeSalvo, S., Bonanno, L., Corallo, F., et al. (2014). Is
computer-assisted training effective in improving rehabilitative outcomes after brain injury? A
case-control hospital-based study. Disability and Health Journal, 7, 356–360.
de Freitas, S., & Liarokapis, F. (2011). Serious Games: A new paradigm for education? In Serious
games and edutainment applications (pp. 9–23). Springer London.
De Jager, C. A., Milwain, E., & Budge, M. (2002). Early detection of isolated memory deficits in
the elderly: the need for more sensitive neuropsychological tests. Psychological Medicine, 32,
483–491.
De Marco, A. P., & Broshek, D. K. (2014). Computerized cognitive testing in the management of
youth sports-related concussion. Journal of Child Neurology, 0883073814559645.
de Oliveira, M. O., & Brucki, S. M. D. (2014). Computerized neurocognitive test (CNT) in mild
cognitive impairment and Alzheimer’s disease. Dementia and Neuropsychologia, 8(2), 112–
116.
de Wall, C., Wilson, B. A., & Baddeley, A. D. (1994). The Extended Rivermead Behavioural
Memory Test: A measure of everyday memory performance in normal adults. Memory, 2, 149–166.
Diaz-Orueta, U., Garcia-Lopez, C., Crespo-Eguilaz, N., Sanchez-Carpintero, R., Climent, G., &
Narbona, J. (2014). AULA virtual reality test as an attention measure: Convergent validity with
Conners’ continuous performance test. Child Neuropsychology, 20, 328–342.
Diller, L. (1976). A model for cognitive retraining in rehabilitation. Clinical Psychology, 29, 13–
15.
Dimeff, L. A., Paves, A. P., Skutch, J. M., & Woodcock, E. A. (2011). Shifting paradigms in
clinical psychology: How innovative technologies are shaping treatment delivery. In D.
H. Barlow (Ed.), The Oxford handbook of clinical psychology (pp. 618–648). New York:
Oxford University Press.
Di Sclafani, V., Tolou-Shams, M., Price, L. J., & Fein, G. (2002). Neuropsychological
performance of individuals dependent on crack-cocaine, or crack-cocaine and alcohol, at
6 weeks and 6 months of abstinence. Drug and Alcohol Dependence, 66, 161–171.
Dodrill, C. B. (1997). Myths of neuropsychology. The Clinical Neuropsychologist, 11, 1–17.
Dodrill, C. B. (1999). Myths of neuropsychology: Further considerations. The Clinical
Neuropsychologist, 13(4), 562–572.
Doherty, S. T., & Oh, P. (2012). A multi-sensor monitoring system of human physiology and daily
activities. Telemedicine Journal and e-Health, 18, 185–192.
Dolcos, F., & McCarthy, G. (2006). Brain systems mediating cognitive interference by emotional
distraction. The Journal of Neuroscience, 26(7), 2072–2079.
Dufau, S., Duñabeitia, J. A., Moret-Tatay, C., McGonigal, A., Peeters, D., et al. (2011). Smart
phone, smart science: how the use of smartphones can revolutionize research in cognitive
science. PLoS ONE, 6, e24974.
Dunn, E. J., Searight, H. R., Grisso, T., Margolis, R. B., & Gibbons, J. L. (1990). The relation of
the Halstead-Reitan neuropsychological battery to functional daily living skills in geriatric
patients. Archives of Clinical Neuropsychology, 5, 103–117.
Dutta, A., Kumar, R., Malhotra, S., Chugh, S., Banerjee, A., & Dutta, A. (2013). A low-cost
point-of-care testing system for psychomotor symptoms of depression affecting standing
balance: A preliminary study in India. Depression Research and Treatment.
Eack, S. M., Greenwald, D. P., Hogarty, S. S., Cooley, S. J., DiBarry, A. L., Montrose, D. M., &
Keshavan, M. S. (2009). Cognitive enhancement therapy for early-course schizophrenia:
Effects of a two-year randomized controlled trial. Psychiatric Services, 60(11), 1468–1476.
Eckerman, D. A., Carroll, J. B., Forre, C. D., Guillen, M., & Lansman, M. (1985). An approach to
brief field testing for neurotoxicity. Neurobehavioral Toxicology and Teratology, 7, 387–393.
Égerházi, A., Berecz, R., Bartók, E., & Degrell, I. (2007). Automated neuropsychological test
battery (CANTAB) in mild cognitive impairment and in Alzheimer’s disease. Progress in
Neuro-Psychopharmacology and Biological Psychiatry, 31(3), 746–751.
Elbin, R. J., Schatz, P., & Covassin, T. (2011). One-year test-retest reliability of the online version
of ImPACT in high school athletes. The American Journal of Sports Medicine, 39(11), 2319–
2324.
Eling, P., Derckx, K., & Maes, R. (2008). On the historical and conceptual background of the
Wisconsin Card Sorting Test. Brain and Cognition, 67, 247–253.
Elkind, J. S., Rubin, E., Rosenthal, S., Skoff, B., & Prather, P. (2001). A simulated reality scenario
compared with the computerized Wisconsin Card Sorting Test: An analysis of preliminary
results. CyberPsychology and Behavior, 4, 489–496.
Ellis, H. C., & Ashbrook, P. W. (1988). Resource allocation model of the effects of depressed
mood states on memory. In K. Fiedler & J. Forgas (Eds.), Affect, cognition, and social
behavior: New evidence and integrative attempts (pp. 25–43). Toronto: Hogrefe.
Elwood, D. L. (1969). Automation of psychological testing. American Psychologist, 24, 287–289.
Elwood, D. L. (1972). Validity of an automated measure of intelligence in borderline retarded
subjects. American Journal of Mental Deficiency, 77(1), 90–94.
Elwood, D. L., & Griffin, R. (1972). Individual intelligence testing without the examiner. Journal
of Consulting and Clinical Psychology, 38, 9–14.
Elwood, R. W. (2001). MicroCog: Assessment of cognitive functioning. Neuropsychology Review,
11(2), 89–100.
Embretson, S. E., & Reise, S. P. (2013). Item response theory. Psychology Press.
Erlanger, D. M., Feldman, D. J., Kutner, K. (1999). Concussion resolution index. New York:
HeadMinder, Inc.
Enzinger, C., Ropele, S., Fazekas, F., Loitfelder, M., Gorani, F., Seifert, T., & Müller-Putz, G.
(2008). Brain motor system function in a patient with complete spinal cord injury following
extensive brain-computer interface training. Experimental Brain Research, 190(2), 215–223.
Erlanger, D. M., Feldman, D., Kutner, K., Kaushik, T., Kroger, H., Festa, J., et al. (2003).
Development and validation of a web-based neuropsychological test protocol for sports-related
return-to-play decision making. Archives of Clinical Neuropsychology, 18, 293–316.
Ernst, M., Bolla, K., Mouratidis, M., Contoreggi, C., Matochik, J. A., Kurian, V., et al. (2002).
Decision-making in a risk-taking task: A PET study. Neuropsychopharmacology, 26, 682–691.
Falleti, M. G., Maruff, P., Collie, A., & Darby, D. G. (2006). Practice effects associated with the
repeated assessment of cognitive function using the CogState battery at 10-minute, one week
and one month test-retest intervals. Journal of Clinical and Experimental Neuropsychology, 28
(7), 1095–1112.
Fasotti, L., Kovacs, F., Eling, P., & Brouwer, W. (2000). Time pressure management as a
compensatory strategy training after closed head injury. Neuropsychological Rehabilitation, 10
(1), 47–65.
Fazeli, P. L., Ross, L. A., Vance, D. E., & Ball, K. (2013). The relationship between computer
experience and computerized cognitive test performance among older adults. The Journals of
Gerontology Series B: Psychological Sciences and Social Sciences, 68, 337–346.
Feldstein, S. N., Keller, F. R., Portman, R. E., Durham, R. L., Klebe, K. J., & Davis, H. P. (1999).
A comparison of computerized and standard versions of the Wisconsin Card Sorting Test. The
Clinical Neuropsychologist, 13, 303–313.
Fellows, L. K., & Farah, M. J. (2005). Different underlying impairments in decision making
following ventromedial and dorsolateral frontal lobe damage in humans. Cerebral Cortex, 15
(1), 58–63.
Ferrara, M. S., McCrea, M., Peterson, C. L., & Guskiewicz, K. M. (2001). A survey of practice
patterns in concussion assessment and management. Journal of Athletic Training, 36(2), 145.
Fish, J., Evans, J. J., Nimmo, M., Martin, E., Kersel, D., et al. (2007). Rehabilitation of executive
dysfunction following brain injury: “content-free cueing” improves everyday prospective
memory performance. Neuropsychologia, 45(6), 1318–1330.
Fish, J., Manly, T., Emslie, H., Evans, J. J., & Wilson, B. A. (2008). Compensatory strategies for
acquired disorders of memory and planning: Differential effects of a paging system for patients
with brain injury of traumatic versus cerebrovascular aetiology. Journal of Neurology,
Neurosurgery and Psychiatry, 79(8), 930–935.
Flavia, M., Stampatori, C., Zanotti, D., Parrinello, G., & Capra, R. (2010). Efficacy and
specificity of intensive cognitive rehabilitation of attention and executive functions in multiple
sclerosis. Journal of the Neurological Sciences, 288(1–2), 101–105.
Fonseca, R. P., Zimmermann, N., Cotrena, C., Cardoso, C., Kristensen, C. H., & Grassi-Oliveira,
R. (2012). Neuropsychological assessment of executive functions in traumatic brain injury: Hot
and cold components. Psychology and Neuroscience, 5, 183–190.
Fornito, A., & Bullmore, E. T. (2014). Connectomics: A new paradigm for understanding brain
disease. European Neuropsychopharmacology.
Fortier, M. A., DiLillo, D., Messman-Moore, T., Peugh, J., DeNardi, K. A., & Gaffey, K.
J. (2009). Severity of child sexual abuse and revictimization: The mediating role of coping and
trauma symptoms. Psychology of Women Quarterly, 33(3), 308–320.
Fortin, S., Godbout, L., & Braun, C. M. (2003). Cognitive structure of executive deficits in
frontally lesioned head trauma patients performing activities of daily living. Cortex, 39, 273–
291.
Fortney, J. C., Burgess, J. F., Jr., Bosworth, H. B., Booth, B. M., & Kaboli, P. J. (2011).
A reconceptualization of access for 21st century healthcare. Journal of General Internal
Medicine, 26(Suppl 2), 639–647.
Fortuny, L. I. A., & Heaton, R. K. (1996). Standard versus computerized administration of the
Wisconsin Card Sorting Test. The Clinical Neuropsychologist, 10(4), 419–424.
Fraga, T., Pichiliani, M., & Louro, D. (2013). Experimental art with brain controlled interface.
Lecture Notes in Computer Science, 8009, 642–651.
Franzen, M. D., & Wilhelm, K. L. (1996). Conceptual foundations of ecological validity in
neuropsychological assessment. In R. J. Sbordone & C. J. Long (Eds.), Ecological validity of
neuropsychological testing (pp. 91–112). Boca Raton: St Lucie Press.
Franzen, M. D. (2000). Reliability and validity in neuropsychological assessment. New York:
Springer.
Fray, P. J., Robbins, T. W., & Sahakian, B. J. (1996). Neuropsychological applications of
CANTAB. International Journal of Geriatric Psychiatry, 11, 329–336.
Fredrickson, J., Maruff, P. P., Woodward, M. M., Moore, L. L., Fredrickson, A. A., Sach, J. J., &
Darby, D. D. (2010). Evaluation of the usability of a brief computerized cognitive screening
test in older people for epidemiological studies. Neuroepidemiology, 34, 65–75.
French, C. C., & Beaumont, J. G. (1987). The reaction of psychiatric patients to computerized
assessment. British Journal of Clinical Psychology, 26, 267–278.
French, C. C., & Beaumont, J. G. (1990). A clinical study of the automated assessment of
intelligence by the Mill Hill vocabulary test and the standard progressive matrices test. Journal
of Clinical Psychology, 46, 129–140.
Fried, R., Hirshfeld-Becker, D., Petty, C., Batchelder, H., & Biederman, J. (2012). How
informative is the CANTAB to assess executive functioning in children with ADHD? A
controlled study. Journal of Attention Disorders, 1087054712457038.
Garb, H. N., & Schramke, C. J. (1996). Judgment research and neuropsychological assessment: A
narrative review and meta-analysis. Psychological Bulletin, 120, 140–153.
Gardner, A., Kay-Lambkin, F., Stanwell, P., Donnelly, J., Williams, W. H., Hiles, A., & Jones, D.
K. (2012). A systematic review of diffusion tensor imaging findings in sports-related
concussion. Journal of Neurotrauma, 29(16), 2521–2538.
Gedye, J. L. (1967). A teaching machine programme for use as a test of learning ability. In D.
Unwin & J. Leedham (Eds.), Aspects of educational technology. London: Methuen.
Gedye, J. L. (1968). The development of a general purpose psychological testing system. Bulletin
of The British Psychological Society, 21, 101–102.
Gedye, J. L., & Miller, E. (1969). The automation of psychological assessment. International
Journal of Man-Machine Studies, 1, 237–262.
Gentry, T., Wallace, J., Kvarfordt, C., & Lynch, K. B. (2008). Personal digital assistants as
cognitive aids for individuals with severe traumatic brain injury: A community-based trial.
Brain Injury, 22(1), 19–24.
Gershon, R. C., Cella, D., Fox, N. A., Havlik, R. J., Hendrie, H. C., & Wagster, M. V. (2010).
Assessment of neurological and behavioral function: The NIH Toolbox. Lancet Neurology, 9
(2), 138–139.
Gershon, R. C., Wagster, M. V., Hendrie, H. C., Fox, N. A., Cook, K. F., & Nowinski, C.
J. (2013). NIH toolbox for assessment of neurological and behavioral function. Neurology, 80
(11 Supplement 3), S2–S6.
Gibbons, R. D., Weiss, D. J., Kupfer, D. J., Frank, E., Fagiolini, A., & Grochocinski, V. J. (2008).
Using computerized adaptive testing to reduce the burden of mental health assessment.
Psychiatric Services, 59(4), 361–368.
Gilberstadt, H., Lushene, R., & Buegel, B. (1976). Automated assessment of intelligence:
The TAPAC test battery and computerized report writing. Perceptual and Motor Skills, 43(2),
627–635.
Gilboa, Y., Rosenblum, S., Fattal-Valevski, A., Toledano-Alhadef, H., Rizzo, A., & Josman, N.
(2011). Using a virtual classroom environment to describe the attention deficits profile of
children with Neurofibromatosis Type 1. Research in Developmental Disabilities, 32, 2608–
2613.
Giles, G. M., & Shore, M. (1989). The effectiveness of an electronic memory aid for a
memory-impaired adult of normal intelligence. American Journal of Occupational Therapy, 43
(6), 409–411.
Gioia, G. A., & Isquith, P. K. (2004). Ecological assessment of executive function in traumatic
brain injury. Developmental Neuropsychology, 25(1-2), 135–158.
Glotzbach, E., Ewald, H., Andreatta, M., Pauli, P., & Mühlberger, A. (2012). Contextual fear
conditioning predicts subsequent avoidance behaviour in a virtual reality environment.
Cognition and Emotion, 26(7), 1256–1272.
Glotzbach-Schoon, E., Andreatta, M., Reif, A., Ewald, H., Tröger, C., Baumann, C., & Pauli,
P. (2013). Contextual fear conditioning in virtual reality is affected by 5HTTLPR and NPSR1
polymorphisms: effects on fear-potentiated startle. Frontiers in Behavioral Neuroscience, 7.
Godbout, L., & Doyon, J. (1995). Mental representation of knowledge following frontal-lobe or
postrolandic lesions. Neuropsychologia, 33, 1671–1696.
Goel, V., Grafman, J., Tajik, J., Gana, S., & Danto, D. (1997). A study of the performance of
patients with frontal lobe lesions in a financial planning task. Brain, 120, 1805–1822.
Goldberg, E., Podell, K., Harner, R., & Riggio, S. (1994). Cognitive bias, functional cortical
geometry, and the frontal lobes: laterality, sex, and handedness. Journal of Cognitive
Neuroscience, 6, 276–296.
Goldberg, E., Podell, K., & Lovell, M. (1994). Lateralization of frontal lobe functions and
cognitive novelty. The Journal of Neuropsychiatry & Clinical Neurosciences, 6, 371–378.
Goldberg, E., et al. (1997). Early diagnosis of frontal lobe dementias. Jerusalem: Eighth Congress
of International Psychogeriatric Association.
Goldberg, E., & Podell, K. (1999). Adaptive versus veridical decision making and the frontal
lobes. Consciousness and Cognition, 8, 364–377.
Goldberg, E., & Podell, K. (2000). Adaptive decision making, ecological validity, and the frontal
lobes. Journal of Clinical and Experimental Neuropsychology, 22(1), 56–68.
Goldberg, E. (2009). The new executive brain: frontal lobes in a complex world. New York:
Oxford University Press.
Goldberg, E., Funk, B. A., & Podell, K. (2012). How the brain deals with novelty and ambiguity:
implications for neuroaesthetics. Rendiconti Lincei, 23, 227–238.
Golden, C. J., Hammeke, T. A., & Purisch, A. D. (1980). Manual for the Luria-Nebraska
Neuropsychological Battery. Los Angeles: Western Psychological Services.
Goldenberg, G. (2003). Goldstein and Gelb’s Case Schn.: A classic case in neuropsychology?
Classic Cases in Neuropsychology, 2, 281.
Goldstein, G. (1990). Contributions of Kurt Goldstein to neuropsychology. The Clinical
Neuropsychologist, 4(1), 3–17.
Goldstein, G. (1996). Functional considerations in neuropsychology. In R. J. Sbordone &
C. J. Long (Eds.), Ecological validity of neuropsychological testing (pp. 75–89). Delray Beach,
Florida: GR Press/St. Lucie Press.
Goldstein, G., & Shelly, C. H. (1982). A further attempt to cross-validate the Russell, Neuringer,
and Goldstein neuropsychological keys. Journal of Consulting and Clinical Psychology, 50,
721–726.
Goldstein, K. (1939). The organism: a holistic approach to biology derived from pathological
data in man (pp. 15–16). New York: American Book Company.
Goldstein, K., & Gelb, A. (1918). Psychologische Analysen hirnpathologischer Fälle auf Grund
von Untersuchungen Hirnverletzter. Zeitschrift für die Gesamte Neurologie und Psychiatrie, 41,
1–142.
Goldstein, K. (1923). Über die Abhängigkeit der Bewegungen von optischen Vorgängen.
Bewegungsstörungen bei Seelenblinden. Monatschrift für Psychiatrie und Neurologie,
Festschrift Liepmann.
Goldstein, K. (1931/1971). Über Zeigen und Greifen. In A. Gurwitsch, E. M. Goldstein Haudek, &
W. E. Haudek (Eds.), Selected Papers/Ausgewählte Schriften. The Hague: Martinus Nijhoff.
Goldstein, K. (1936). The significance of the frontal lobes for mental performances. Journal of
Neurology and Psychopathology, 17(65), 27.
Goldstein, K., & Scheerer, M. (1941). Abstract and concrete behavior: An experimental study with
special tests. Psychological Monographs, 53(2), 1–151.
Gontkovsky, S. T., McDonald, N. B., Clark, P. G., & Ruwe, W. D. (2002). Current directions in
computer-assisted cognitive rehabilitation. NeuroRehabilitation, 17(3), 195–200.
Gordon, E. (2003). Integrative neuroscience. Neuropsychopharmacology, 28(Suppl. 1), 2–8.
Gordon, E., Cooper, C., Rennie, D., & Williams, L. M. (2005). Integrative neuroscience: The role
of a standardized database. Clinical EEG and Neuroscience, 36, 64–75.
Gordon, W. A., Zafonte, R., Cicerone, K., Cantor, J., Brown, M., Lombard, L., & Chandna, T.
(2006). Traumatic brain injury rehabilitation. Brain Injury Rehabilitation, 85, 343–382.
Grafman, J., & Litvan, I. (1999). Importance of deficits in executive functions. The Lancet, 354,
1921–1923.
Grafman, J., Schwab, K., Warden, D., Pridgen, A., Brown, H. R., & Salazar, A. M. (1996). Frontal
lobe injuries, violence, and aggression: A report of the Vietnam Head Injury Study. Neurology,
46, 1231–1238.
Grant, D. A., & Berg, E. (1948). A behavioral analysis of degree of reinforcement and ease of
shifting to new responses in a Weigl-type card-sorting problem. Journal of Experimental
Psychology, 38, 404–411.
Grant, S., Bonson, K. R., Contoreggi, C., & London, E. D. (1999). Activation of the ventromedial
prefrontal cortex correlates with gambling task performance: A FDG-PET study. Society for
Neuroscience Abstracts, 25, 1551.
Gras, D., Gyselinck, V., Perrussel, M., Orriols, E., & Piolino, P. (2013). The role of working
memory components and visuospatial abilities in route learning within a virtual environment.
Journal of Cognitive Psychology, 25(1), 38–50.
Gray, J. M., Robertson, I., Pentland, B., & Anderson, S. (1992). Microcomputer-based attentional
retraining after brain damage: A randomised group controlled trial. Neuropsychological
Rehabilitation, 2(2), 97–115.
Green, M. F. (1996). What are the functional consequences of neurocognitive deficits in
schizophrenia? American Journal of Psychiatry, 153(3), 321–330.
Green, M. F., Kern, R. S., Braff, D. L., & Mintz, J. (2000). Neurocognitive deficits and functional
outcome in schizophrenia: Are we measuring the “right stuff”? Schizophrenia Bulletin, 26(1),
119–136.
Hartlage, L. C., & Telzrow, C. F. (1980). The practice of clinical neuropsychology in the US.
Archives of Clinical Neuropsychology, 2, 200–202.
Heaton, R. (1999). Wisconsin Card Sorting Test: Computer Version 3 for Windows (Research ed.).
Lutz, Florida: Psychological Assessment Resources.
Heaton, R. K., & Adams, K. M. (1987). Potential versus current reality of automation in
neuropsychology: Reply to Kleinmuntz. Journal of Consulting and Clinical Psychology, 55,
268–269.
Heaton, R. K., Chelune, G. J., Talley, J. L., Kay, G. G., & Curtiss, G. (1993). Wisconsin Card
Sorting Test manual: Revised and expanded. Odessa, FL: Psychological Assessment
Resources.
Heaton, R. K., Grant, I., Anthony, W. Z., & Lehman, R. A. W. (1981). A comparison of clinical
and automated interpretation of the Halstead-Reitan Battery. Journal of Clinical
Neuropsychology, 3, 121–141.
Heaton, R. K., Grant, I., & Matthews, C. G. (1991). Comprehensive norms for an expanded
Halstead-Reitan Battery: Demographic corrections, research findings, and clinical
applications. Odessa: Psychological Assessment Resources.
Heaton, R. K., & PAR. (2003). Wisconsin Card Sorting Test: Computer Version 4, Research
Edition (WCST: CV4). Odessa, FL: Psychological Assessment Resources.
Heaton, R. K., & Pendleton, M. G. (1981). Use of Neuropsychological tests to predict adult
patients’ everyday functioning. Journal of Consulting and Clinical Psychology, 49, 807.
Hebb, D. O., & Penfield, W. (1940). Human behavior after extensive bilateral removal from the
frontal lobes. Archives of Neurology & Psychiatry, 44(2), 421–438.
Hebb, D. O. (1949). The organization of behavior. New York: Wiley.
Hebb, D. O. (1963). Introduction to Dover edition. In K. S. Lashley, Brain mechanisms and
intelligence.
Hebb, D. O. (1981). Consider mind as a biological problem. Neuroscience, 6(12), 2419–2422.
Heeschen, C. (1994). Franz Joseph Gall. Reader in the History of Aphasia: From Franz Gall to
Norman Geschwind, 4, 1.
Henry, M., Joyal, C. C., & Nolin, P. (2012). Development and initial assessment of a new
paradigm for assessing cognitive and motor inhibition: The bimodal virtual-reality
Stroop. Journal of Neuroscience Methods, 210, 125–131.
Hildebrand, R., Chow, H., Williams, C., Nelson, M., & Wass, P. (2004). Feasibility of
neuropsychological testing of older adults via videoconference: implications for assessing the
capacity for independent living. Journal of Telemedicine and Telecare, 10(3), 130–134.
Hill, N. J., Häuser, A., & Schalk, G. (2014). A general method for assessing brain-computer
interface performance and its limitations. Journal of Neural Engineering, 11(2).
Hinson, J. M., Jameson, T. L., & Whitney, P. (2002). Somatic markers, working memory, and
decision making. Cognitive, Affective, & Behavioral Neuroscience, 2(4), 341–353.
Hinson, J. M., Jameson, T. L., & Whitney, P. (2003). Impulsive decision making and working
memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(2), 298.
Hoffman, H. G., Chambers, G. T., Meyer III, W. J., Arceneaux, L. L., Russell, W. J., Seibel, E. J.,
& Patterson, D. R. (2011). Virtual reality as an adjunctive non-pharmacologic analgesic for
acute burn pain during medical procedures. Annals of Behavioral Medicine, 41(2), 183–191.
Hom, J. (2003). Forensic neuropsychology: Are we there yet? Archives of Clinical
Neuropsychology, 18, 827–845.
Hoskins, L. L., Binder, L. M., Chaytor, N. S., Williamson, D. J., & Drane, D. L. (2010).
Comparison of oral and computerized versions of the Word Memory Test. Archives of Clinical
Neuropsychology, 25, 591–600.
Huang-Pollock, C. L., Karalunas, S. L., Tam, H., & Moore, A. N. (2012). Evaluating vigilance
deficits in ADHD: a meta-analysis of CPT performance. Journal of Abnormal Psychology, 121
(2), 360.
Hume, D. (1978). A treatise of human nature (L. A. Selby-Bigge, Ed.). New York: Oxford
University Press, 413–418.
Hunt, H. F. (1943). A practical clinical test for organic brain damage. Journal of Applied
Psychology, 27, 375–386.
Imam, B., & Jarus, T. (2014). Virtual reality rehabilitation from social cognitive and motor
learning theoretical perspectives in stroke population. Rehabilitation Research and Practice.
Inoue, M., Jinbo, D., Nakamura, Y., Taniguchi, M., & Urakami, K. (2009). Development and
evaluation of a computerized test battery for Alzheimer’s disease screening in
community-based settings. American Journal of Alzheimer’s Disease and Other Dementias,
24, 129–135.
Internet Live Stats. (2015). Internet Users. Retrieved from Internet Live Stats: http://www.
internetlivestats.com/internet-users-by-country/.
Inventado, P., Legaspi, R., Suarez, M., & Numao, M. (2011). Predicting student emotions
resulting from appraisal of ITS feedback. Research and Practice in Technology Enhanced
Learning, 6(2), 107–133.
Irani, F., Brensinger, C. M., Richard, J., Calkins, M. E., Moberg, P. J., Bilker, W., & Gur, R. C.
(2012). Computerized neurocognitive test performance in schizophrenia: a lifespan analysis.
The American Journal of Geriatric Psychiatry, 20(1), 41–52.
Iriarte, Y., Diaz-Orueta, U., Cueto, E., Irazustabarrena, P., Banterla, F., & Climent, G. (2012).
AULA - advanced virtual reality tool for the assessment of attention: normative study in Spain.
Journal of Attention Disorders, 20, 1–27.
Iverson, G. L., Lovell, M. R., & Collins, M. W. (2003). Interpreting change on ImPACT following
sport concussion. The Clinical Neuropsychologist, 17(4), 460–467.
Iverson, G. L., Brooks, B. L., Ashton, V. L., Johnson, L. G., & Gualtieri, C. T. (2009). Does
familiarity with computers affect computerized neuropsychological test performance? Journal
of Clinical and Experimental Neuropsychology, 31(5), 594–604.
Ivins, B. J., Kane, R., & Schwab, K. A. (2009). Performance on the Automated
Neuropsychological Assessment Metrics in a nonclinical sample of soldiers screened for mild
TBI after returning from Iraq and Afghanistan: a descriptive analysis. The Journal of Head
Trauma Rehabilitation, 24(1), 24–31.
Jacobsen, S. E., Sprenger, T., Andersson, S., & Krogstad, J. (2003). Neuropsychological
assessment and telemedicine: a preliminary study examining the reliability of
neuropsychology services performed via telecommunication. Journal of the International
Neuropsychological Society, 9(3), 472–478.
Jacoby, M., Averbuch, S., Sacher, Y., Katz, N., Weiss, P. L., & Kizony, R. (2013). Effectiveness
of executive functions training within a virtual supermarket for adults with traumatic brain
injury: a pilot study. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 21
(2), 182–190.
Jacyna, S. (2004). Bastian’s four centres. Cortex, 40, 7–8.
Jacyna, S. (2004). Lichtheim’s “house”. Cortex, 40, 413–414.
Jagaroo, V. (2009). Neuroinformatics for neuropsychology. US: Springer.
Jansari, A. S., Froggatt, D., Edginton, T., & Dawkins, L. (2013). Investigating the impact of
nicotine on executive functions using a novel virtual reality assessment. Addiction, 108, 977–
984.
Jansari, A. S., Devlin, A., Agnew, R., Akesson, K., Murphy, L., & Leadbetter, T. (2014).
Ecological assessment of executive functions: A new virtual reality paradigm. Brain
Impairment, 15(02), 71–87.
Jatupaiboon, N., Pan-ngum, S., & Israsena, P. (2013). Real-time EEG-based happiness detection
system. The Scientific World Journal, 618649.
Johansson, B. B., & Tornmalm, M. (2012). Working memory training for patients with acquired
brain injury: effects in daily life. Scandinavian Journal of Occupational Therapy, 19, 176–183.
Johnson, D. R., Vincent, A. S., Johnson, A. E., Gilliland, K., & Schlegel, R. E. (2008). Reliability
and construct validity of the Automated Neuropsychological Assessment Metrics (ANAM)
mood scale. Archives of Clinical Neuropsychology, 23(1), 73–85.
Johnson, S. C., Schmitz, T. W., Kawahara-Baccus, T. N., Rowley, H. A., Alexander, A. L., Lee, J.,
& Davidson, R. J. (2005). The cerebral response during subjective choice with and without
self-reference. Journal of Cognitive Neuroscience, 17, 1897–1906.
Johnstone, B., & Stonnington, H. H. (Eds.). (2009). Rehabilitation of neuropsychological
disorders: A practical guide for rehabilitation professionals. Philadelphia: Psychology Press.
Jokel, R., Rochon, E., & Anderson, N. D. (2010). Errorless learning of computer-generated words
in a patient with semantic dementia. Neuropsychological Rehabilitation, 20, 16–41.
Jones-Gotman, M., Zatorre, R. J., Olivier, A., Andermann, F., Cendes, F., Staunton, H., & Wieser,
H. G. (1997). Learning and retention of words and designs following excision from medial or
lateral temporal-lobe structures. Neuropsychologia, 35(7), 963–973.
Jones, M., & Johnston, D. (2011). Understanding phenomena in the real world: The case for real
time data collection in health services research. Journal of Health Services Research & Policy,
16, 172–176.
Josman, N., Kizony, R., Hof, E., Goldenberg, K., Weiss, P., & Klinger, E. (2014). Using the
Virtual Action Planning-Supermarket for evaluating executive functions in people with stroke.
Journal of Stroke and Cerebrovascular Diseases, 23, 879–887.
Josman, N., Schenirderman, A. E., Klinger, E., & Shevil, E. (2009). Using virtual reality to
evaluate executive functions among persons with schizophrenia: a validity study.
Schizophrenia Research, 115, 270–277.
Jovanovski, D., Zakzanis, K., Campbell, Z., Erb, S., & Nussbaum, D. (2012). Development of a
novel, ecologically oriented virtual reality measure of executive function: The Multitasking in
the City Test. Applied Neuropsychology: Adult, 19, 171–182.
Jovanovski, D., Zakzanis, K., Ruttan, L., Campbell, Z., Erb, S., & Nussbaum, D. (2012).
Ecologically valid assessment of executive dysfunction using a novel virtual reality task in
patients with acquired brain injury. Applied Neuropsychology, 19, 207–220.
Kandalaft, M., Didehbani, N., Krawczyk, D., Allen, T., & Chapman, S. (2013). Virtual reality
social cognition training for young adults with high-functioning autism. Journal of Autism &
Developmental Disorders, 43(1), 34–44.
Kane, R. L., & Kay, G. G. (1992). Computerized assessment in neuropsychology: A review of
tests and test batteries. Neuropsychology Review, 3(1), 1–117.
Kane, R. L., & Kay, G. G. (1997). Computer applications in neuropsychological assessment. In
Contemporary approaches to neuropsychological assessment (pp. 359–392). Springer US.
Kane, R. L., & Reeves, D. L. (1997). Computerized test batteries. The Neuropsychology
Handbook, 1, 423–467.
Kane, R. L., Roebuck-Spencer, T., Short, P., Kabat, M., & Wilken, J. (2007). Identifying and
monitoring cognitive deficits in clinical populations using Automated Neuropsychological
Assessment Metrics (ANAM) tests. Archives of Clinical Neuropsychology, 22, 115–126.
Kaufmann, T., Holz, E., & Kübler, A. (2013). Comparison of tactile, auditory, and visual
modality for brain-computer interface use: A case study with a patient in the locked-in state.
Frontiers in Neuroscience, 7, 129.
Kay, G. G., & Starbuck, V. N. (1997). Computerized neuropsychological assessment. Clinical
neuropsychology: theoretical foundations for practitioners (pp. 143–161). New Jersey:
Lawrence Erlbaum Associates.
Keefe, F. J., Huling, D. A., Coggins, M. J., Keefe, D. F., Rosenthal, M. Z., Herr, N. R., &
Hoffman, H. G. (2012). Virtual reality for persistent pain: a new direction for behavioral pain
management. PAIN, 153(11), 2163–2166.
Kennedy, D. O., & Scholey, A. B. (2000). Glucose administration, heart rate and cognitive
performance: Effects of increasing mental effort. Psychopharmacology, 149, 63–71.
Kerr, A., & Zelazo, P. D. (2004). Development of “hot” executive function: the children’s
gambling task. Brain and Cognition, 55, 148–157.
Kesler, S. R., Lacayo, N. J., & Jo, B. (2011). A pilot study of an online cognitive rehabilitation
program for executive function skills in children with cancer-related brain injury. Brain Injury,
25(1), 101–112.
Kingstone, A., Smilek, D., Birmingham, E., Cameron, D., & Bischof, W. F. (2005). Cognitive
ethology: Giving real life to attention research. In Measuring the mind: Speed, control and age
(pp. 341–357).
Kingstone, A., Smilek, D., Ristic, J., Friesen, C. K., & Eastwood, J. D. (2003). Attention,
researchers! It is time to take a look at the real world. Current Directions in Psychological
Science, 12, 176–180.
Kinsella, G. J., Ong, B., & Tucker, J. (2009). Traumatic brain injury and prospective memory in a
virtual shopping trip task: Does it matter who generates the prospective memory target? Brain
Impairment, 10, 45–51.
Kiper, P., Piron, L., Turolla, A., Stozek, J., & Tonin, P. (2011). The effectiveness of reinforced
feedback in virtual environment in the first 12 months after stroke. Neurologia i
Neurochirurgia Polska, 45(5), 436–444.
Kirkwood, K. T., Peck, D. F., & Bennie, L. (2000). The consistency of neuropsychological
assessments performed via telecommunication and face to face. Journal of Telemedicine and
Telecare, 6(3), 147–151.
Kirsch, N. L., Levine, S. P., Fallon-Krueger, M., & Jaros, L. A. (1987). The microcomputer as an
“orthotic” device for patients with cognitive deficits. The Journal of Head Trauma
Rehabilitation, 2, 77–86.
Klingberg, T., Forssberg, H., & Westerberg, H. (2002). Training of working memory in children
with ADHD. Journal of Clinical and Experimental Neuropsychology, 24, 781–791.
Klingberg, T., Fernell, E., Olesen, P. J., Johnson, M., Gustafsson, P., Dahlstrom, K., Westerberg,
H. (2005). Computerized training of working memory in children with ADHD—A
randomized, controlled trial. Journal of the American Academy of Child and Adolescent
Psychiatry, 44, 177–186.
Klinge, V., & Rodziewicz, T. (1976). Automated and manual intelligence testing of the Peabody
Picture Vocabulary Test on a psychiatric adolescent population. International Journal of
Man-Machine Studies, 8, 243–246.
Knight, C., & Alderman, N. (2002). Development of a simplified version of the Multiple Errands
Test for use in hospital settings. Neuropsychological Rehabilitation, 12(3), 231–255.
Knight, R. G., & Titov, N. (2009). Use of virtual reality tasks to assess prospective memory:
Applicability and evidence. Brain Impairment, 10, 3–13.
Knights, R. M., Richardson, D. H., & McNarry, L. R. (1973). Automated vs. clinical administration
of the Peabody Picture Vocabulary Test and the Coloured Progressive Matrices. American
Journal of Mental Deficiency, 78, 223–225.
Knouse, L. E., Bagwell, C. L., Barkley, R. A., & Murphy, K. R. (2005). Accuracy of
self-evaluation in adults with ADHD: Evidence from a driving study. Journal of Attention
Disorders, 8, 221–234.
Kobayashi, N., Yoshino, A., Takahashi, Y., & Nomura, S. (2007). Autonomic arousal in cognitive
conflict resolution. Autonomic Neuroscience: Basic and Clinical, 132, 70–75.
Koenig, S. T., Crucian, G. P., Dalrymple-Alford, J. C., & Dünser, A. (2009). Virtual reality
rehabilitation of spatial abilities after brain damage. Studies in Health Technology and
Informatics, 144, 105–107.
Kohler, C. G., Walker, J. B., Martin, E. A., Healey, K. M., & Moberg, P. J. (2010). Facial emotion
perception in schizophrenia: A meta-analytic review. Schizophrenia Bulletin, 36, 1009–1019.
Koslow, S. H. (2000). Should the neuroscience community make a paradigm shift to sharing
primary data? Nature Neuroscience, 3(9), 863–865.
Kramer, J. H., Mungas, D., Possin, K. L., Rankin, K. P., Boxer, A. L., Rosen, H. J., & Widmeyer,
M. (2014). NIH EXAMINER: conceptualization and development of an executive function
battery. Journal of the International Neuropsychological Society, 20(1), 11–19.
Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). Chicago: University of
Chicago Press. (First published 1962.)
Kurlychek, R. T. (1983). Use of a digital alarm chronograph as a memory aid in early dementia.
Clinical Gerontologist, 1, 93–94.
Kwan, R. Y. C., & Lai, C. K. Y. (2013). Can smartphones enhance telephone-based cognitive
assessment (TBCA)? International Journal of Environmental Research and Public Health,
10(12), 7110–7125.
Lalonde, G., Henry, M., Drouin-Germain, A., Nolin, P., & Beauchamp, M. H. (2013). Assessment
of executive function in adolescence: a comparison of traditional and virtual reality tools.
Journal of Neuroscience Methods, 219, 76–82.
Lamberts, K., Evans, J., & Spikman, J. (2009). A real-life, ecologically valid test of executive
functioning: The executive secretarial task. Journal of Clinical and Experimental
Neuropsychology, 56–65.
Larose, S., Gagnon, S., Ferland, C., & Pépin, M. (1989). Psychology of computers: XIV.
Cognitive rehabilitation through computer games. Perceptual and Motor Skills, 69(3 Pt 1),
851–858.
Larrabee, G. J. (2008). Flexible vs. fixed batteries in forensic neuropsychological assessment:
Reply to Bigler and Hom. Archives of Clinical Neuropsychology, 23(7), 763–776.
Larrabee, G. J., & Crook, T. H. (1996). Computers and memory. In I. Grant & K. M. Adams
(Eds.), Neuropsychological assessment of neuropsychiatric disorders (2nd ed., pp. 102–117).
New York: Oxford University Press.
Larson, E. B., Feigon, M., Gagliardo, P., & Dvorkin, A. Y. (2014). Virtual reality and cognitive
rehabilitation: A review of current outcome research. NeuroRehabilitation, 34(4), 759–772.
Lashley, K. S. (1929). Brain mechanisms and intelligence. Chicago: University of Chicago Press.
Lashley, K. S. (1950). In search of the engram. In J. F. Danielli & R. Brown (Eds.), Physiological
mechanisms in animal behaviour (pp. 454–482). New York: Academic Press.
Law, A. S., Trawley, S. L., Brown, L. A., Stephens, A. N., & Logie, R. H. (2013). The impact of
working memory load on task execution and online plan adjustment during multitasking in a
virtual environment. The Quarterly Journal of Experimental Psychology, 66, 1241–1258.
LeDoux, J. (1996). The emotional brain: The mysterious underpinnings of emotional life. New
York: Touchstone.
Lee, H., Baniqued, P. L., Cosman, J., Mullen, S., McAuley, E., Severson, J., & Kramer, A. F.
(2012). Examining cognitive function across the lifespan using a mobile application.
Computers in Human Behavior, 28(5), 1934–1946.
Lee, H. C., Cameron, D., & Lee, A. G. (2003). Assessing the driving performance of older adult
drivers: On-road versus simulated driving. Accident Analysis and Prevention, 35, 797–803.
Lee, H. C., Lee, A. H., Cameron, D., & Li-Tsang, C. (2003). Using a driving simulator to identify
older drivers at inflated risk of motor vehicle crashes. Journal of Safety Research, 34, 453–459.
Lees-Haley, P. R., Smith, H. H., Williams, C. W., & Dunn, J. T. (1996). Forensic
neuropsychological test usage: an empirical survey. Archives of Clinical Neuropsychology,
11, 45–51.
Lengenfelder, J., Schultheis, M. T., Al-Shihabi, T., Mourant, R., & DeLuca, J. (2002). Divided
attention and driving: a pilot study using virtual reality technology. Journal of Head Trauma
Rehabilitation, 17, 26–37.
Levine, B., Dawson, D., Boutet, I., Schwartz, M. L., & Stuss, D. T. (2000). Assessment of
strategic self-regulation in traumatic brain injury: Its relationship to injury severity and
psychosocial outcome. Neuropsychology, 14, 491–500.
Levinson, D., Reeves, D., Watson, J., & Harrison, M. (2005). Automated neuropsychological
assessment metrics (ANAM) measures of cognitive effects of Alzheimer’s disease. Archives of
Clinical Neuropsychology, 20(3), 403–408.
Lezak, M. D. (1983). Neuropsychological assessment (pp. 91–97). New York: Oxford University Press.
Lezak, M. (1986). Psychological implications of traumatic brain damage for the patient’s family.
Rehabilitation Psychology, 31(4), 241–250.
Lezak, M. D. (1995). Neuropsychological Assessment (3rd ed.). New York: Oxford University
Press.
Li, K., Robertson, J., Ramos, J., & Gella, S. (2013). Computer-based cognitive retraining for
adults with chronic acquired brain injury: A pilot study. Occupational Therapy in Health Care,
27, 333–344.
Lieberman, M. D. (2010). Social cognitive neuroscience. In S. T. Fiske, D. T. Gilbert, & G.
Lindzey (Eds.), Handbook of Social Psychology (5th ed., pp. 143–193). New York:
McGraw-Hill.
Lieberman, M. D., & Eisenberger, N. I. (2005). Conflict and habit: a social cognitive neuroscience
approach to the self. In A. Tesser, J. V. Wood, D. A. Stapel (Eds.), On building, defending and
regulating the self: A psychological perspective (pp. 77–102), New York: Psychology Press.
Liu, L., Miyazaki, M., & Watson, B. (1999). Norms and validity of the DriVR: A virtual reality
driving assessment for persons with head injuries. Cyberpsychology & Behavior, 2, 53–67.
Logie, R. H., Trawley, S., & Law, A. (2011). Multitasking: multiple, domain-specific cognitive
functions in a virtual environment. Memory & Cognition, 39, 1561–1574.
Loh, P. K., Ramesh, P., Maher, S., Saligari, J., Flicker, L., & Goldswain, P. (2004). Can patients
with dementia be assessed at a distance? The use of Telehealth and standardized assessments.
Internal Medicine Journal, 34(5), 239–242.
Loh, P. K., Donaldson, M., Flicker, L., Maher, S., & Goldswain, P. (2007). Development of a
telemedicine protocol for the diagnosis of Alzheimer’s disease. Journal of Telemedicine and
Telecare, 13(2), 90–94.
Long, C. J. (1996). Neuropsychological tests: A look at our past and the impact that ecological
issues may have on our future. In R. J. Sbordone & C. J. Long (Eds.), Ecological Validity of
Neuropsychological Testing (pp. 1–14). Delray Beach: GR Press/St. Lucie Press.
Long, C. J., & Wagner, M. (1986). Computer applications in neuropsychology. In D. Wedding, A.
M. Horton, & J. Webster (Eds.), The neuropsychology handbook (pp. 548–569). New York:
Springer.
Lopez, S. J., Ryan, J. J., Sumerall, S. W., Lichtenberg, J. W., Glasnapp, D., Krieshok, T. S., & Van
Fleet, J. N. (2001). Demographic variables and MicroCog performance: Relationships for VA
patients under treatment for substance abuse. Psychological Reports, 88, 183–188.
Loring, D. W., & Bauer, R. M. (2010). Testing the limits: Cautions and concerns regarding the
new Wechsler IQ and Memory scales. Neurology, 74(8), 685–690.
Loring, D. W., & Bowden, S. C. (2014). The STROBE statement and neuropsychology: lighting
the way toward evidence-based practice. The Clinical Neuropsychologist, 28(4), 556–574.
Louey, A. G., Cromer, J. A., Schembri, A. J., Darby, D. G., Maruff, P., Makdissi, M., & McCrory,
P. (2014). Detecting cognitive impairment after concussion: sensitivity of change from baseline
and normative data methods using the CogSport/Axon cognitive test battery. Archives of
Clinical Neuropsychology, 29(5), 432–441.
Lovell, M. R., Collins, M. W., Maroon, J. C., Cantu, R., Hawn, M. A., Burke, C. J., & Fu, F.
(2002). Inaccuracy of symptom reporting following concussion in athletes. Medicine & Science
in Sports & Exercise, 34, S298.
Lovell, M., Collins, M., & Bradley, J. (2004). Return to play following sports-related concussion.
Clinics in Sports Medicine, 23(3), 421–441.
Lovell, M. R. (2013). Immediate post-concussion assessment testing (ImPACT) test: clinical
interpretive manual, online ImPACT 2007–2012. Pittsburgh: ImPACT Applications Inc.
Lubin, B., Larsen, R. M., & Matarazzo, J. D. (1984). Patterns of psychological test usage in the
United States: 1935–1982. American Psychologist, 39(4), 451.
Lundqvist, A., Grundström, K., Samuelsson, K., & Rönnberg, J. (2010). Computerized training of
working memory in a group of patients suffering from acquired brain injury. Brain Injury, 24,
1173–1183.
Luria, A. R. (1966/1980). Higher Cortical Functions in Man (2nd ed., rev. and expanded). New
York: Basic Books.
Luria, A. R., & Majovski, L. V. (1977). Basic approaches used in American and Soviet clinical
neuropsychology. American Psychologist, 32, 959–968.
Mathersul, D., Palmer, D. M., Gur, R. C., Gur, R. E., Cooper, N., Gordon, E., & Williams, L. M.
(2009). Explicit identification and implicit recognition of facial emotions: II. Core domains and
relationships with general cognition. Journal of Clinical and Experimental Neuropsychology,
31(3), 278–291.
Mathewson, K. J., Jetha, M. K., Drmic, I. E., Bryson, S. E., Goldberg, J. O., Hall, G. B., et al.
(2010). Autonomic predictors of Stroop performance in young and middle-aged adults.
International Journal of Psychophysiology, 76, 123–129.
McCann, R. A., Armstrong, C. M., Skopp, N. A., Edwards-Stewart, A., Smolenski, D. J., June,
J. D., & Reger, G. M. (2014). Virtual reality exposure therapy for the treatment of anxiety
disorders: An evaluation of research quality. Journal of Anxiety Disorders, 28(6), 625–631.
McDonald, S. (2013). Impairments in social cognition following severe traumatic brain injury.
Journal of the International Neuropsychological Society, 19, 231–246.
McEachern, W., Kirk, A., Morgan, D. G., Crossley, M., & Henry, C. (2008). Reliability of the
MMSE administered in-person and by telehealth. The Canadian Journal of Neurological
Sciences, 35(5), 643–646.
McGeorge, P., Phillips, L. H., Crawford, J. R., Garden, S. E., Sala, S. D., & Milne, A. B. (2001).
Using virtual environments in the assessment of executive dysfunction. Presence, 10, 375–383.
McGurk, S., Twamley, E., Sitzer, D., McHugo, G., & Mueser, K. (2007). A meta-analysis of
cognitive remediation in schizophrenia. American Journal of Psychiatry, 164(12), 1791–1802.
McKeever, J., Schultheis, M., Padmanaban, V., & Blasco, A. (2013). Driver performance while
texting: even a little is too much. Traffic Injury Prevention, 14, 132–137.
McLean, A., Dowson, J., Toone, B., Young, S., Bazanis, E., Robbins, T. W., & Sahakian,
B. J. (2004). Characteristic neurocognitive profile associated with adult attention-deficit/
hyperactivity disorder. Psychological Medicine, 34, 681–692.
McMahan, T., Parberry, I., & Parsons, T. D. (2015a). Evaluating Player Task Engagement and
Arousal using Electroencephalography. In Proceedings of the 6th International Conference on
Applied Human Factors and Ergonomics, Las Vegas, USA (July 26–30).
McMahan, T., Parberry, I., & Parsons, T.D. (2015b). Evaluating Electroencephalography
Engagement Indices During Video Game Play. Proceedings of the Foundations of Digital
Games Conference, June 22–25, 2015.
McMahan, T., Parberry, I., & Parsons, T. D. (2015c). Modality Specific Assessment of Video Game
Player’s Experience Using the Emotiv. Entertainment Computing, 7, 1–6.
Meacham, J. A. (1982). A note on remembering to execute planned actions. Journal of Applied
Developmental Psychology, 3, 121–133.
Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive
ability tests: A meta-analysis. Psychological Bulletin, 114, 449–458.
Medalia, A., Lim, R., & Erlanger, D. (2005). Psychometric properties of the web-based
work-readiness cognitive screen used as a neuropsychological assessment tool for schizophre-
nia. Computer Methods and Programs in Biomedicine, 80(2), 93–102.
Meehan, W. P., 3rd, d’Hemecourt, P., Collins, C. L., Taylor, A. M., & Comstock, R. D. (2012).
Computerized neurocognitive testing for the management of sport-related concussions.
Pediatrics, 129(1), 38–44.
Meehl, P. E. (1987). Foreword. In J. N. Butcher (Ed.), Computerized Psychological Assessment
(pp. xv–xvi). New York: Basic Books.
Mehler, B., Reimer, B., Coughlin, J. F., & Dusek, J. A. (2009). Impact of incremental increases in
cognitive workload on physiological arousal and performance in young adult drivers. Journal
of the Transportation Research Board, 2138, 6–12.
Mehta, M. A., Goodyer, I. M., & Sahakian, B. J. (2004). Methylphenidate improves working
memory and set-shifting in AD/HD: Relationships to baseline memory capacity. Journal of
Child Psychology and Psychiatry, 45, 293–305.
Meier, M. J. (1992). Modern clinical neuropsychology in historical perspective. American
Psychologist, 47(4), 550–558.
Menon, A. S., Kondapavalru, P., Krishna, P., Chrismer, J. B., Raskin, A., Hebel, J. R., & Ruskin,
P. E. (2001). Evaluation of a portable low cost videophone system in the assessment of
depressive symptoms and cognitive function in elderly medically ill veterans. The Journal of
Nervous and Mental Disease, 189(6), 399–401.
Merali, Y., & McKelvey, B. (2006). Using complexity science to effect a paradigm shift in
Information Systems for the 21st century. Journal of Information Technology, 21, 211–215.
Meyer, V. (1961). Psychological effects of brain damage. In H. J. Eysenck (Ed.), Handbook of
abnormal psychology (pp. 529–565). New York: Basic Books.
Meyer, R. E. (1996). Neuropsychopharmacology: Are we ready for a paradigm shift?
Neuropsychopharmacology, 14, 169–179.
Millan, J. D., Rupp, R., Muller-Putz, G. R., Murray-Smith, R., Giugliemma, C., Tangermann, M.,
& Mattia, D. (2010). Combining brain-computer interfaces and assistive technologies:
State-of-the-art and challenges. Frontiers in Neuroscience, 4, 10.
Miller, E. (1996). Phrenology, neuropsychology and rehabilitation. Neuropsychological
Rehabilitation, 6(4), 245–256.
Miller, G. A. (2003). The cognitive revolution: A historical perspective. Trends in Cognitive
Sciences, 7, 141–144.
Miller, J. B., Schoenberg, M. R., & Bilder, R. M. (2014). Consolidated Standards of Reporting
Trials (CONSORT): Considerations for Neuropsychological Research. The Clinical
Neuropsychologist, 28(4), 575–599.
Milleville-Pennel, I., Pothier, J., Hoc, J. M., & Mathe, J. F. (2010). Consequences of cognitive
impairments following traumatic brain injury: pilot study on visual exploration while driving.
Brain Injury, 24, 678–691.
Milner, B. (1963). Effects of different brain lesions on card sorting: The role of the frontal lobes.
Archives of Neurology, 9(1), 90–100.
Mitrushina, M. N., Boone, K. L., & D’Elia, L. (1999). Handbook of normative data for
neuropsychological assessment. New York: Oxford University Press.
Moffat, S. D. (2009). Aging and spatial navigation: what do we know and where do we go?
Neuropsychology Review, 19(4), 478–489.
Montgomery, C., Ashmore, K., & Jansari, A. (2011). The effects of a modest dose of alcohol on
executive functioning and prospective memory. Human Psychopharmacology: Clinical and
Experimental, 26, 208–215.
Moore, T. M., Reise, S. P., Gur, R. E., Hakonarson, H., & Gur, R. C. (2015). Psychometric
properties of the Penn Computerized Neurocognitive Battery. Neuropsychology, 29(2), 235–246.
Morris, R. G., Kotitsa, M., Bramham, J., Brooks, B. M., & Rose, F. D. (2002). Virtual reality
investigation of strategy formation, rule breaking and prospective memory in patients with
focal prefrontal neurosurgical lesions. In Proceedings of the 4th International Conference on
Disability, Virtual Reality & Associated Technologies.
Motraghi, T. E., Seim, R. W., Meyer, E. C., & Morissette, S. B. (2012). Virtual reality exposure
therapy for the treatment of PTSD: A systematic review. American Psychological Association
(APA).
Motraghi, T. E., Seim, R. W., Meyer, E. C., & Morissette, S. B. (2014). Virtual reality exposure
therapy for the treatment of posttraumatic stress disorder: A methodological review using
CONSORT guidelines. Journal of Clinical Psychology, 70(3), 197–208.
Mountain, M. A., & Snow, W. G. (1993). Wisconsin Card Sorting Test as a measure of frontal
pathology: A review. The Clinical Neuropsychologist, 7, 108–118.
Mueller, C., Luehrs, M., Baecke, S., Adolf, D., Luetzkendorf, R., Luchtmann, M., & Bernarding,
J. (2012). Building virtual reality fMRI paradigms: a framework for presenting immersive
virtual environments. Journal of Neuroscience Methods, 209(2), 290–298.
Mühlberger, A., Bülthoff, H., Wiedemann, G., & Pauli, P. (2007). Virtual reality for the
psychophysiological assessment of phobic fear: Responses during virtual tunnel driving.
Psychological Assessment, 19(3), 340–346.
Murray, C. D. (2009). A review of the use of virtual reality in the treatment of phantom limb pain.
Journal of Cybertherapy and Rehabilitation, 2(2), 105–113.
Nacke, L. E., Stellmach, S., & Lindley, C. A. (2011). Electroencephalographic assessment of
player experience: A pilot study in affective ludology. Simulation & Gaming, 42(5), 632–655.
Nadeau, S. E. (2002). A paradigm shift in neurorehabilitation. The Lancet Neurology, 1, 126–130.
Nagamatsu, L., Voss, M., Neider, M., Gaspar, J., Handy, T., Kramer, A., & Liu-Ambrose, T.
(2011). Increased cognitive load leads to impaired mobility decisions in seniors at risk for falls.
Psychology and Aging, 26, 253–259.
Nakao, T., Takezawa, T., Miyatani, M., & Ohira, H. (2009). Medial prefrontal cortex and
cognitive regulation. Psychologia, 52, 93–109.
Nakao, T., Osumi, T., Ohira, H., Kasuya, Y., Shinoda, J., Yamada, J., & Northoff, G. (2010).
Medial prefrontal cortex—dorsal anterior cingulate cortex connectivity during behavior
selection without an objective correct answer. Neuroscience Letters, 482, 220–224.
Nakao, T., Ohira, H., & Northoff, G. (2012). Distinction between externally vs. internally guided
decision-making: operational differences, meta-analytical comparisons and their theoretical
implications. Frontiers in Neuroscience, 6.
Naugle, R. I., & Chelune, G. J. (1990). Integrating neuropsychological and “real-life” data:
A neuropsychological model for assessing everyday functioning. In D. Tupper & K. Cicerone
(Eds.), The neuropsychology of everyday life: Assessment and basic competences (pp. 57–73).
Boston: Kluwer Academic Publishers.
Neigh, G. N., Gillespie, C. F., & Nemeroff, C. B. (2009). The neurobiological toll of child abuse
and neglect. Trauma, Violence, & Abuse, 10(4), 389–410.
Neisser, U. (1982). Memory: What are the important questions? Memory Observed: Remembering
in Natural Contexts, 3–19.
Neisser, U. (Ed.). (1987). Concepts and conceptual development: Ecological and intellectual
factors in categorization. Cambridge: Cambridge University Press.
Newcombe, F. (2002). An overview of neuropsychological rehabilitation: A forgotten past and a
challenging future. Cognitive rehabilitation: A clinical neuropsychological approach, 23–51.
Nieuwenstein, M. R., Aleman, A., & de Haan, E. H. F. (2001). Relationship between symptom
dimensions and neurocognitive functioning in schizophrenia: A meta-analysis of WCST and
CPT studies. Journal of Psychiatric Research, 35, 119–125.
Nigg, J. T. (2000). On inhibition/disinhibition in developmental psychopathology: Views from
cognitive and personality psychology and a working inhibition taxonomy. Psychological
Bulletin, 126, 220–246.
Nigg, J. T. (2001). Is ADHD a disinhibitory disorder? Psychological Bulletin, 127, 571–598.
Nigg, J. T. (2005). Neuropsychologic theory and findings in attention deficit/hyperactivity
disorder: The state of the field and salient challenges for the coming decade. Biological
Psychiatry, 57, 1424–1435.
Nolin, P., Stipanicic, A., Henry, M., Joyal, C. C., & Allain, P. (2012). Virtual reality as a screening
tool for sports concussion in adolescents. Brain Injury, 26, 1564–1573.
Notebaert, A. J., & Guskiewicz, K. M. (2005). Current trends in athletic training practice for
concussion assessment and management. Journal of Athletic Training, 40(4), 320.
Nussbaum, P. D., Goreczny, A., & Haddad, L. (1995). Cognitive correlates of functional capacity
in elderly depressed versus patients with probable Alzheimer’s disease. Neuropsychological
Rehabilitation, 5, 333–340.
Nyhus, E., & Barceló, F. (2009). The Wisconsin Card Sorting Test and the cognitive assessment of
prefrontal executive functions: A critical update. Brain and Cognition, 71(3), 437–451.
Olson, K., Jacobson, K. K., & Van Oot, P. (2013). Ecological validity of pediatric
neuropsychological measures: current state and future directions. Applied Neuropsychology:
Child, 2, 17–23.
Ord, J. S., Greve, K. W., Bianchini, K. J., & Aguerrevere, L. E. (2009). Executive dysfunction in
traumatic brain injury: The effects of injury severity and effort on the Wisconsin Card Sorting
Test. Journal of Clinical and Experimental Neuropsychology, 32(2), 132–140.
Ossher, L., Flegal, K. E., & Lustig, C. (2013). Everyday memory errors in older adults. Aging,
Neuropsychology, and Cognition, 20(2), 220–242.
Ozonoff, S. (1995). Reliability and validity of the Wisconsin Card Sorting Test in studies of
autism. Neuropsychology, 9, 491–500.
Panksepp, J. (1998a). Affective Neuroscience. Oxford: Oxford University Press.
Panksepp, J. (1998b). Affective neuroscience: The foundations of human and animal emotions.
Oxford University Press.
Panksepp, J. (2009). Primary process affects and brain oxytocin. Biological Psychiatry, 65, 725–
727.
Panksepp, J. (2010). Affective neuroscience of the emotional BrainMind: evolutionary perspec-
tives and implications for understanding depression. Dialogues in Clinical Neuroscience, 12,
389–399.
Parikh, M., Grosch, M. C., Graham, L. L., Hynan, L. S., Weiner, M., Shore, J. H., & Cullum, C.
M. (2013). Consumer acceptability of brief videoconference-based neuropsychological
assessment in older individuals with and without cognitive impairment. The Clinical
Neuropsychologist, 27(5), 808–817.
Park, K., Ku, J., Choi, S., Jang, H., Park, J., Kim, S. I., & Kim, J. (2011). A virtual reality
application in role-plays of social skills training for schizophrenia: A randomized, controlled
trial. Psychiatry Research, 189(2), 166–172.
Parsons, T. D. (2011). Neuropsychological assessment using virtual environments: Enhanced
assessment technology for improved ecological validity. In Advanced Computational
Intelligence Paradigms in Healthcare 6. Virtual Reality in Psychotherapy, Rehabilitation,
and Assessment (pp. 271–289). Berlin: Springer.
Parsons, T. D., Bowerly, T., Buckwalter, J. G., & Rizzo, A. A. (2007). A controlled clinical
comparison of attention performance in children with ADHD in a virtual reality classroom
compared to standard neuropsychological methods. Child Neuropsychology, 13, 363–381.
Parsons, T. D., & Courtney, C. G. (2011). Neurocognitive and psychophysiological interfaces for
adaptive virtual environments. Human Centered Design of E-Health Technologies, 208–233.
Parsons, T. D., & Courtney, C. (2014). An initial validation of the Virtual Reality Paced Auditory
Serial Addition Test in a college sample. Journal of Neuroscience Methods, 222, 15–23.
Parsons, T. D., Courtney, C. G., Arizmendi, B. J., & Dawson, M. E. (2011). Virtual Reality Stroop
Task for neurocognitive assessment. In MMVR (pp. 433–439).
Parsons, T. D., Courtney, C. G., & Dawson, M. E. (2013). Virtual reality Stroop task for
assessment of supervisory attentional processing. Journal of Clinical and Experimental
Neuropsychology, 35(8), 812–826.
Parsons, T. D., Courtney, C., Rizzo, A. A., Edwards, J., & Reger, G. (2012). Virtual reality paced
serial assessment tests for neuropsychological assessment of a military cohort. Studies in
Health Technology and Informatics, 173, 331–337.
Parsons, T. D., Cosand, L., Courtney, C., Iyer, A., & Rizzo, A. A. (2009a). Neurocognitive
workload assessment using the Virtual Reality Cognitive Performance Assessment Test.
Lecture Notes in Artificial Intelligence, 5639, 243–252.
Parsons, T. D., Iyer, A., Cosand, L., Courtney, C., & Rizzo, A. (2009b). Neurocognitive and
psychophysiological analysis of human performance within virtual reality environments.
Medicine Meets Virtual Reality, 17, 247–252.
Parsons, T. D., McPherson, S., & Interrante, V. (2013). Enhancing neurocognitive assessment
using immersive virtual reality. In 2013 1st Workshop on Virtual and Augmented Assistive
Technology (VAAT) (pp. 27–34). IEEE.
Parsons, T. D., & Reinebold, J. (2012). Adaptive virtual environments for neuropsychological
assessment in serious games. IEEE Transactions on Consumer Electronics, 58, 197–204.
Parsons, T. D., & Rizzo, A. A. (2008). Affective outcomes of virtual reality exposure therapy for
anxiety and specific phobias: A meta-analysis. Journal of Behavior Therapy and Experimental
Psychiatry, 39, 250–261.
Parsons, T. D., & Rizzo, A. A. (2008). Initial validation of a virtual environment for assessment of
memory functioning: virtual reality cognitive performance assessment test. Cyberpsychology
and Behavior, 11, 17–25.
Parsons, T. D., Rizzo, A. A., Rogers, S., & York, P. (2009). Virtual reality in paediatric
rehabilitation: A review. Developmental Neurorehabilitation, 12(4), 224–238.
Parsons, T. D., Silva, T. M., Pair, J., & Rizzo, A. A. (2008). A virtual environment for assessment
of neurocognitive functioning: Virtual reality cognitive performance assessment test. Studies in
Health Technology and Informatics, 132, 351–356.
Parsons, T. D., & Trost, Z. (2014). Virtual Reality Graded Exposure Therapy as Treatment for
Pain-Related Fear and Disability in Chronic Pain. In M. Ma (Ed.), Virtual and augmented
reality in healthcare (pp. 523–546). Germany: Springer.
Paul, R. H., Lawrence, J., Williams, L. M., Richard, C. C., Cooper, N., & Gordon, E. (2005).
Preliminary validity of “IntegNeuro™”: A new computerized battery of neurocognitive tests.
International Journal of Neuroscience, 115(11), 1549–1567.
Paulus, M. P., & Frank, L. R. (2006). Anterior cingulate activity modulates nonlinear decision
weight function of uncertain prospects. Neuroimage, 30, 668–677.
Pedroli, E., Cipresso, P., Serino, S., Pallavicini, F., Albani, G., & Riva, G. (2013). Virtual multiple
errands test: Reliability, usability and possible applications. Annual Review of Cybertherapy
and Telemedicine, 191, 48–52.
Penfield, W., & Evans, J. P. (1935). The frontal lobe in man: a clinical study of maximum
removals. Brain: A Journal of Neurology.
Pennington, B. F., Bennetto, L., McAleer, O. K., & Roberts, R. J. (1996). Executive functions and
working memory: Theoretical and measurement issues. In G. R. Lyon & N. A. Krasnegor
(Eds.), Attention, memory and executive function. Baltimore: Brookes.
Perry, W. (2009). Beyond the numbers. Expanding the boundaries of neuropsychology. Archives
of Clinical Neuropsychology, 24, 21–29.
Pessoa, L. (2008). On the relationship between emotion and cognition. Nature Reviews
Neuroscience, 9(2), 148–158.
Pessoa, L. (2009). How do emotion and motivation direct executive control? Trends in Cognitive
Sciences, 13(4), 160–166.
Pham, T., & Tran, D. (2012). Emotion recognition using the Emotiv EPOC device. Neural
Information Processing, 7667, 394–399.
Pineles, S. L., Mostoufi, S. M., Ready, C. B., Street, A. E., Griffin, M. G., & Resick, P. A. (2011).
Trauma reactivity, avoidant coping, and PTSD symptoms: A moderating relationship? Journal
of Abnormal Psychology, 120(1), 240–246.
Plancher, G., Tirard, A., Gyselinck, V., Nicolas, S., & Piolino, P. (2012). Using virtual reality to
characterize episodic memory profiles in amnestic mild cognitive impairment and Alzheimer’s
disease: influence of active/passive encoding. Neuropsychologia, 50, 592–602.
Plancher, G., Barra, J., Orriols, E., & Piolino, P. (2013). The influence of action on episodic
memory: A virtual reality study. Quarterly Journal of Experimental Psychology, 66, 895–909.
Pollak, Y., Shomaly, H. B., Weiss, P. L., Rizzo, A. A., & Gross-Tsur, V. (2010). Methylphenidate
effect in children with ADHD can be measured by an ecologically valid continuous
performance test embedded in virtual reality. CNS Spectrums, 15, 125–130.
Pollak, Y., Weiss, P. L., Rizzo, A. A., Weizer, M., Shriki, L., Shalev, R. S., & Gross-Tsur, V.
(2009). The utility of a continuous performance test embedded in virtual reality in measuring
ADHD-Related deficits. Journal of Developmental & Behavioral Pediatrics, 20, 2–6.
Podell, K., Lovell, M., Zimmerman, M., & Goldberg, E. (1995). The cognitive bias task and
lateralized frontal lobe functions in males. The Journal of Neuropsychiatry & Clinical
Neurosciences, 7, 491–501.
Podzebenko, K., Egan, G. F., & Watson, J. D. G. (2005). Real and imaginary rotary motion
processing: functional parcellation of the human parietal lobe revealed by fMRI. Journal of
Cognitive Neuroscience, 17(1), 24–36.
Popper, K. R. (1962). Conjectures and refutations: The growth of scientific knowledge. New York: Basic Books.
Raymond, P. D., Hinton-Bayre, A. D., Radel, M., Ray, M. J., & Marsh, N. A. (2006). Assessment
of statistical change criteria used to define significant change in neuropsychological test
performance following cardiac surgery. European Journal of Cardio-Thoracic Surgery, 29,
82–88.
Raz, S., Bar-Haim, Y., Sadeh, A., & Dan, O. (2014). Reliability and validity of the online
continuous performance test among young adults. Assessment, 21(1), 108–118.
Reeves, D. L., Winter, K. P., Bleiberg, J., & Kane, R. L. (2007). ANAM® Genogram: Historical
perspectives, description, and current endeavors. Archives of Clinical Neuropsychology, 22,
15–37.
Reitan, R. M. (1955). Investigation of the validity of Halstead’s measures of biological
intelligence. Archives of Neurology and Psychiatry, 73, 28–35.
Reitan, R. M. (1964). Psychological deficits resulting from cerebral lesions in man. In J. M. Warren
& A. K. Akert (Eds.), The frontal granular cortex and behavior (pp. 295–312). New York:
McGraw-Hill.
Reitan, R. M. (1991). The neuropsychological deficit scale for adults computer program, in the
Manual from Traumatic Brain Injury Vol. II: Recovery and Rehabilitation. Tucson:
Neuropsychology Press.
Renison, B., Ponsford, J., Testa, R., Richardson, B., & Brownfield, K. (2012). The ecological and
construct validity of a newly developed measure of executive function: The Virtual Library
Task. Journal of the International Neuropsychological Society, 18, 440–450.
Resch, J. E., McCrea, M. A., & Cullum, C. M. (2013). Computerized neurocognitive testing in the
management of sport-related concussion: An update. Neuropsychology Review, 23(4), 335–
349.
Riccio, C. A., & French, C. L. (2004). The status of empirical support for treatments of attention
deficits. The Clinical Neuropsychologist, 18, 528–558.
Riva, G. (2009). Virtual reality: an experiential tool for clinical psychology. British Journal of
Guidance & Counselling, 37(3), 337–345.
Robertson, I. H., Manly, T., Andrade, J., Baddeley, B. T., & Yiend, J. (1997). “Oops!”:
performance correlates of everyday attentional failures in traumatic brain injured and normal
subjects. Neuropsychologia, 35, 747–758.
Robertson, I. H., Ward, T., Ridgeway, V., & Nimmo-Smith, I. (1994). The Test of Everyday
Attention. Bury St. Edmunds: Thames Valley Test Company.
Rock, D. L., & Nolen, P. A. (1982). Comparison of the standard and computerized versions of the
Raven Coloured Progressive Matrices Test. Perceptual and Motor Skills, 54, 40–42.
Roebuck-Spencer, T. M., Vincent, A. S., Twillie, D. A., Logan, B. W., Lopez, M., Friedl, K. E., &
Gilliland, K. (2012). Cognitive change associated with self-reported mild traumatic brain injury
sustained during the OEF/OIF conflicts. The Clinical Neuropsychologist, 26(3), 473–489.
Rogante, M., Grigioni, M., Cordella, D., & Giacomozzi, C. (2010). Ten years of telerehabilitation:
A literature overview of technologies and clinical applications. NeuroRehabilitation, 27,
287–304.
Rogoff, B., & Lave, J. (Eds.). (1984). Everyday cognition: Its development in social context.
Cambridge: Harvard University Press.
Roiser, J. P., & Sahakian, B. J. (2013). Hot and cold cognition in depression. CNS Spectrums,
18(3), 139–149.
Ross, S. A., Allen, D. N., & Goldstein, G. (2013). Factor Structure of the Halstead-Reitan
Neuropsychological Battery: A Review and Integration. Applied Neuropsychology: Adult,
20(2), 120–135.
Ross, S. A., Allen, D. N., & Goldstein, G. (2014). Factor Structure of the Halstead-Reitan
Neuropsychological Battery for Children: A Brief Report Supplement. Applied
Neuropsychology: Child, 3(1), 1–9.
Rosser, J. C., Jr., Gabriel, N., Herman, B., & Murayama, M. (2001). Telementoring and
teleproctoring. World Journal of Surgery, 25, 1438–1448.
Sbordone, R. J., & Long, C. J. (1996). Ecological validity of neuropsychological testing (pp. 91–
112). Boca Raton: St Lucie Press.
Sbordone, R. J. (1996). Ecological validity: Some critical issues for the neuropsychologist. In R.
J. Sbordone & C. J. Long (Eds.), Ecological validity of neuropsychological testing (pp. 15–41).
Delray Beach: GR Press/St. Lucie Press.
Sbordone, R. J. (2008). Ecological validity of neuropsychological testing: critical issues. The
Neuropsychology Handbook, 367–394.
Sbordone, R. J. (2010). Neuropsychological tests are poor at assessing the frontal lobes, executive
functions, and neurobehavioral symptoms of traumatically brain-injured patients.
Psychological Injury and Law, 3, 25–35.
Schacter, D. L. (1983). Amnesia observed: Remembering and forgetting in a natural environment.
Journal of Abnormal Psychology, 92, 236–242.
Scharre, D. W., Chang, S. I., Nagaraja, H. N., Yager-Schweller, J., & Murden, R. A. (2014).
Community cognitive screening using the self-administered gerocognitive examination
(SAGE). The Journal of Neuropsychiatry and Clinical Neurosciences, 26(4), 369–375.
Schatz, P., & Browndyke, J. (2002). Applications of computer-based neuropsychological
assessment. Journal of Head Trauma Rehabilitation, 17, 395–410.
Schatz, P., Pardini, J. E., Lovell, M. R., Collins, M. W., & Podell, K. (2006). Sensitivity and
specificity of the ImPACT Test Battery for concussion in athletes. Archives of Clinical
Neuropsychology, 21(1), 91–99.
Schatz, P., & Sandel, N. (2013). Sensitivity and specificity of the online version of ImPACT in
high school and collegiate athletes. The American Journal of Sports Medicine, 41(2), 321–326.
Schreuder, M., Riccio, A., Risetti, M., Dähne, S., Ramsay, A., Williamson, J., & Tangermann, M.
(2013). User-centered design in brain-computer interfaces—A case study. Artificial
Intelligence in Medicine, 59(2), 71–80.
Schultheis, M. T., & Mourant, R. (2001). Virtual reality and driving: The road to better assessment
for cognitively impaired populations. Presence: Teleoperators and Virtual Environments, 10,
431–439.
Schultheis, M. T., Himelstein, J., & Rizzo, A. A. (2002). Virtual reality and neuropsychology:
upgrading the current tools. The Journal of Head Trauma Rehabilitation, 17, 378–394.
Schultheis, M. T., Hillary, F., & Chute, D. L. (2003). The neurocognitive driving test: Applying
technology to the assessment of driving ability following brain injury. Rehabilitation
Psychology, 48, 275–280.
Schultheis, M. T., Rebimbas, J., Mourant, R., & Millis, S. R. (2007). Examining the usability of a
virtual reality driving simulator. Assistive Technologies, 19, 1–8.
Schuster, R. M., Mermelstein, R. J., & Hedeker, D. (2015). Acceptability and feasibility of a visual
working memory task in an ecological momentary assessment paradigm.
Seeley, W. W., Menon, V., Schatzberg, A. F., Keller, J., Glover, G., Kenna, H., & Greicius, M. D.
(2007). Dissociable intrinsic connectivity networks for salience processing and executive
control. Journal of Neuroscience, 27, 2349–2356.
Seguin, J. R., Arseneault, L., & Tremblay, R. E. (2007). The contribution of “cool” and “hot”
components of decision-making in adolescence: implications for developmental psychopathol-
ogy. Cognitive Development, 22, 530–543.
Serino, A., Ciaramelli, E., Santantonio, A. D., Malagù, S., Servadei, F., & Làdavas, E. (2007).
A pilot study for rehabilitation of central executive deficits after traumatic brain injury. Brain
Injury, 21(1), 11–19.
Settle, J. R., Robinson, S. A., Kane, R., Maloni, H. W., & Wallin, M. T. (2015). Remote cognitive
assessments for patients with multiple sclerosis: A feasibility study. Multiple Sclerosis Journal,
1352458514559296.
Shallice, T., & Burgess, P. W. (1991). Deficits in strategy application following frontal lobe
damage in man. Brain, 114, 727–741.
Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological momentary assessment. Annual
Review of Clinical Psychology, 4, 1–32.
Shih, J. J., Krusienski, D. J., & Wolpaw, J. R. (2012). Brain-computer interfaces in medicine.
Mayo Clinic Proceedings, 87(3), 268–279.
Shipley, W. C. (1940). A self-administered scale for measuring intellectual impairment and
deterioration. Journal of Psychology, 9, 371–377.
Silverstein, S. M., Berten, S., Olson, P., Paul, R., Williams, L. M., Cooper, N., & Gordon, E.
(2007). Development and validation of a World-Wide-Web-based neurocognitive assessment
battery: WebNeuro. Behavior Research Methods, 39(4), 940–949.
Sinnott, J. D. (Ed.). (1989). Everyday problem solving: Theory and applications. New York:
Praeger.
Slate, S. E., Meyer, T. L., Burns, W. J., & Montgomery, D. D. (1998). Computerized cognitive
training for severely emotionally disturbed children with ADHD. Behavior Modification, 22(3),
415–437.
Slick, D. J., Hopp, G., Strauss, E., & Thompson, G. B. (1997). Victoria Symptom Validity Test.
Odessa: Psychological Assessment Resources.
Sloan, R. P., Korten, J. B., & Myers, M. M. (1991). Components of heart rate reactivity during
mental arithmetic with and without speaking. Physiology & Behavior, 50, 1039–1045.
Smith, A. (2014). Older Adults and Technology Use. The Pew Research Center’s Internet &
American Life Project.
Smith, L., & Gasser, M. (2005). The development of embodied cognition: six lessons from babies.
Artificial Life, 11, 13–29.
Snell, D. L., Surgenor, L. J., Hay-Smith, J. C., & Siegert, R. J. (2009). A systematic review of
psychological treatments for mild traumatic brain injury: An update on the evidence. Journal of
Clinical and Experimental Neuropsychology, 31(1), 20–38.
Soares, F. C., & de Oliveira, T. C. G. (2015). CANTAB object recognition and language tests to
detect aging cognitive decline: an exploratory comparative study. Clinical Interventions in
Aging, 10, 37–48.
Sohlberg, M. M., & Mateer, C. A. (1989). Introduction to cognitive rehabilitation: Theory and
practice. New York, NY: Guilford Press.
Sohlberg, M. M., & Mateer, C. A. (2001). Cognitive rehabilitation: An integrative approach. New
York: Guilford Press.
Sohlberg, M. M., & Mateer, C. A. (2010). APT-III: Attention process training: A direct attention
training program for persons with acquired brain injury. Youngville: Lash & Associates.
Sonuga-Barke, E. J. S. (2003). The dual pathway model of AD/HD: An elaboration of
neurodevelopmental characteristics. Neuroscience & Biobehavioral Reviews, 27, 593–604.
Sonuga-Barke, E. J. S., Sergeant, J. A., Nigg, J., & Willcutt, E. (2008). Executive dysfunction and
delay aversion in attention deficit hyperactivity disorder: Nosologic and diagnostic implica-
tions. Child and Adolescent Psychiatric Clinics of North America, 17, 367–384.
Space, L. G. (1975). A console for the interactive on-line administration of psychological tests.
Behavior Research Methods and Instrumentation, 7, 191–193.
Space, L. G. (1981). The computer as psychometrician. Behavior Research Methods and
Instrumentation, 13, 595–606.
Spooner, D. M., & Pachana, N. A. (2006). Ecological validity in neuropsychological assessment:
A case for greater consideration in research with neurologically intact populations. Archives of
Clinical Neuropsychology, 21, 327–337.
Spreen, O., & Strauss, E. (1998). A compendium of neuropsychological tests: Administration,
norms and commentary (2nd ed.). New York: Oxford University Press.
Spreij, L. A., Visser-Meily, J. M., van Heugten, C. M., & Nijboer, T. C. (2014). Novel insights
into the rehabilitation of memory post acquired brain injury: A systematic review. Frontiers in
Human Neuroscience, 8.
Stern, H., Jeaco, S., & Millar, T. (1999). Computers in neurorehabilitation: What role do they
play? Part 1. British Journal of Occupational Therapy, 62(12), 549–553.
Stern, R. A., & White, T. (2003). NAB, Neuropsychological Assessment Battery: Attention
Module Stimulus Book. Form 2. Psychological Assessment Resources.
Sternberg, R. J. (1997). Intelligence and lifelong learning: What’s new and how can we use it?
American Psychologist, 52, 1134–1139.
Sternberg, R. J., & Wagner, R. K. (Eds.). (1986). Practical intelligence: Nature and origins of
competence in the everyday world. New York: Cambridge University Press.
Stierwalt, J. A., & Murray, L. L. (2002). Attention impairment following traumatic brain injury.
Seminars in Speech and Language, 23(2), 129–138.
Stone, A. A., Broderick, J. E., Schwartz, J. E., Shiffman, S., Litcher-Kelly, L., & Calvanese,
P. (2003). Intensive momentary reporting of pain with an electronic diary: reactivity,
compliance, and patient satisfaction. Pain, 104(1–2), 343–351.
Stuss, D. T., Alexander, M. P., Shallice, T., Picton, T. W., Binns, M. A., Macdonald, R., et al.
(2005). Multiple frontal systems controlling response speed. Neuropsychologia, 43, 396–417.
Stuss, D. T., Benson, D. F., Kaplan, E. F., Weir, W. S., Naeser, M. A., Lieberman, I., et al. (1983).
The involvement of orbitofrontal cerebrum in cognitive tasks. Neuropsychologia, 21, 235–248.
Stuss, D. T., & Levine, B. (2002). Adult clinical neuropsychology: Lessons from studies of the
frontal lobes. Annual Review of Psychology, 53(1), 401–433.
Suchy, Y. (2011). Clinical neuropsychology of emotion. Guilford Press.
Sweeney, J. A., Kmiec, J. A., & Kupfer, D. J. (2000). Neuropsychologic impairments in bipolar
and unipolar mood disorders on the CANTAB neurocognitive battery. Biological Psychiatry,
48(7), 674–684.
Sweeney, J. E., Slade, H. P., Ivins, R. G., Nemeth, D. G., Ranks, D. M., & Sica, R. B. (2007).
Scientific investigation of brain-behavior relationships using the Halstead-Reitan Battery.
Applied Neuropsychology, 14(2), 65–72.
Sweeney, S., Kersel, D., Morris, R., Manly, T., & Evans, J. (2010). The sensitivity of a virtual
reality task to planning and prospective memory impairments: Group differences and the
efficacy of periodic alerts on performance. Neuropsychological Rehabilitation, 20(2), 239–263.
Svoboda, E., Richards, B., Leach, L., & Mertens, V. (2012). PDA and smartphone use by
individuals with moderate-to-severe memory impairment: Application of a theory-driven
training programme. Neuropsychological Rehabilitation, 22(3), 408–427.
Taillade, M., Sauzéon, H., Dejos, M., Arvind Pala, P., Larrue, F., Wallet, G., & N’Kaoua, B.
(2013). Executive and memory correlates of age-related differences in wayfinding perfor-
mances using a virtual reality application. Aging, Neuropsychology, and Cognition, 20(3),
298–319.
Tate, D. G., Kalpakjian, C. Z., & Forchheimer, M. B. (2002). Quality of life issues in individuals
with spinal cord injury. Archives of Physical Medicine and Rehabilitation, 83(12), S18–S25.
Temple, V., Drummond, C., Valiquette, S., & Jozsvai, E. (2010). A comparison of intellectual
assessments over video conferencing and in-person for individuals with ID: preliminary data.
Journal of Intellectual Disability Research, 54(6), 573–577.
Teuber, H. L. (1955). Physiological psychology. Annual Review of Psychology, 6, 267–296.
Teuber, H. L. (1966). Kurt Goldstein’s role in the development of neuropsychology.
Neuropsychologia, 4(4), 299–310.
Thayer, J. F., & Lane, R. D. (2000). A model of neurovisceral integration in emotion regulation
and dysregulation. Journal of Affective Disorders, 61, 201–216.
Thayer, J. F., & Lane, R. D. (2009). Claude Bernard and the heart-brain connection: Further
elaboration of a model of neurovisceral integration. Neuroscience and Biobehavioral Reviews,
33(2), 81–88.
Thompson, O., Barrett, S., Patterson, C., & Craig, D. (2012). Examining the neurocognitive
validity of commercially available, smartphone-based puzzle games. Psychology, 3(7), 525.
Thornton, K. E., & Carmody, D. P. (2008). Efficacy of traumatic brain injury rehabilitation:
Interventions of QEEG guided biofeedback, computers, strategies, and medications. Applied
Psychophysiology and Biofeedback, 33(2), 101–124.
Toet, A., & van Schaik, M. G. (2013). Visual attention for a desktop virtual environment with
ambient scent. Frontiers in Psychology, 4, 1–11.
Tomb, I., Hauser, M., Deldin, P., & Caramazza, A. (2004). Do somatic markers mediate decisions
on the gambling task? Nature Neuroscience, 5, 1103–1104.
Tornatore, J. B., Hill, E., Laboff, J., & McGann, M. E. (2005). Self-administered screening for mild
cognitive impairment: Initial validation of a computerized test battery. The Journal of
Neuropsychiatry & Clinical Neurosciences, 17, 98–105.
Tranel, D. (2009). The Iowa-Benton school of neuropsychological assessment. In I. Grant (Ed.),
Neuropsychological assessment of neuropsychiatric and neuromedical disorders (pp. 66–83).
Oxford University Press.
Treder, M. S., & Blankertz, B. (2010). (C)overt attention and visual speller design in an
ERP-based brain-computer interface. Behavioral & Brain Functions, 6.
Tropp, M., Lundqvist, A., Persson, C., Samuelsson, K., & Levander, S. (2015). Self-ratings of
Everyday Memory Problems in Patients with Acquired Brain Injury: A Tool for Rehabilitation.
International Journal of Physical Medicine & Rehabilitation, 3(258), 2.
Trost, Z., & Parsons, T. D. (2014). Beyond Distraction: Virtual Reality Graded Exposure Therapy
as Treatment for Pain-Related Fear and Disability in Chronic Pain. Journal of Applied
Biobehavioral Research, 19, 106–126.
Troyer, A. K., Rowe, G., Murphy, K. J., Levine, B., Leach, L., & Hasher, L. (2014). Development
and evaluation of a self-administered on-line test of memory and attention for middle-aged and
older adults. Frontiers in Aging Neuroscience, 6.
Tupper, D. E., & Cicerone, K. D. (1990). The neuropsychology of everyday life: Assessment and
basic competencies. US: Springer.
Tupper, D. E., & Cicerone, K. D. (Eds.). (1991). The neuropsychology of everyday life: Issues in
development and rehabilitation. US: Springer.
Turner, T. H., Horner, M. D., VanKirk, K. K., Myrick, H., & Tuerk, P. W. (2012). A pilot trial of
neuropsychological evaluations conducted via telemedicine in the Veterans Health
Administration. Telemedicine and e-Health, 18(9), 662–667.
Turner-Stokes, L., Disler, P. B., Nair, A., & Wade, D. T. (2005). Multi-disciplinary rehabilitation
for acquired brain injury in adults of working age. Cochrane Database of Systematic Reviews,
3, CD004170.
Unsworth, N., Heitz, R. P., & Engle, R. W. (2005). Working memory capacity in hot and cold
cognition. Cognitive Limitations in Aging and Psychopathology, 19–43.
Vaughan, C. G., Gerst, E. H., Sady, M. D., Newman, J. B., & Gioia, G. A. (2014). The relation
between testing environment and baseline performance in child and adolescent concussion
assessment. The American Journal of Sports Medicine, 0363546514531732.
Vestal, L., Smith-Olinde, L., Hicks, G., Hutton, T., & Hart, J., Jr. (2006). Efficacy of language
assessment in Alzheimer’s disease: comparing in-person examination and telemedicine.
Clinical Interventions in Aging, 1(4), 467.
Vincent, K. R. (1980). Semi-automated full battery. Journal of Clinical Psychology, 36, 437–446.
Vincent, A. S., Bleiberg, J., Yan, S., Ivins, B., Reeves, D. L., Schwab, K., & Warden, D. (2008).
Reference data from the Automated Neuropsychological Assessment Metrics for use in
traumatic brain injury in an active duty military sample. Military Medicine, 173(9), 836–852.
Vincent, A. S., Roebuck-Spencer, T., Gilliland, K., & Schlegel, R. (2012). Automated
neuropsychological assessment metrics (v4) traumatic brain injury battery: military normative
data. Military Medicine, 177(3), 256–269.
Volz, K. G., Schubotz, R. I., & Von Cramon, D. Y. (2006). Decision-making and the frontal lobes.
Current Opinion in Neurology, 19, 401–406.
Wald, J., & Liu, L. (2001). Psychometric properties of the driVR: A virtual reality driving
assessment. Studies in Health Technology and Informatics, 81, 564–566.
Wald, J. L., Liu, L., & Reil, S. (2000). Concurrent validity of a virtual reality driving assessment in
persons with brain injury. Cyberpsychology & Behavior, 3, 643–652.
Waldstein, S. R., Bachen, E. A., & Manuck, S. B. (1997). Active coping and cardiovascular
reactivity: a multiplicity of influences. Psychosomatic Medicine, 59, 620–625.
Wang, F. Y. (2007). Toward a paradigm shift in social computing: The ACP approach. IEEE
Intelligent Systems, 22, 65–67.
Wang, L., LaBar, K. S., Smoski, M., Rosenthal, M. Z., Dolcos, F., Lynch, T. R., & McCarthy, G.
(2008). Prefrontal mechanisms for executive control over emotional distraction are altered in
major depression. Psychiatry Research: Neuroimaging, 163(2), 143–155.
Waterfall, R. C. (1979). Automating standard intelligence tests. Journal of Audiovisual Media in
Medicine, 2, 21–24.
Waters, A. J., & Li, Y. (2008). Evaluating the utility of administering a reaction time task in an
ecological momentary assessment study. Psychopharmacology, 197, 25–35.
Waters, A. J., Szeto, E. H., Wetter, D. W., Cinciripini, P. M., Robinson, J. D., & Li, Y. (2014).
Cognition and craving during smoking cessation: An ecological momentary assessments study.
Nicotine & Tobacco Research, 16(Suppl. 2), S111–S118.
Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20, 158–177.
Wechsler, D. (1945). A standardized memory scale for clinical use. Journal of Psychology, 19, 87–
95.
Wechsler, D. (1949). Manual for the Wechsler Intelligence Scale for Children. New York:
Psychological Corporation.
Wechsler, D. (1955). Manual for the Wechsler Adult Intelligence Scale. New York: Psychological
Corporation.
Wechsler, D. (1958). The measurement and appraisal of adult intelligence (4th ed.). Baltimore:
Williams & Wilkins.
Wechsler, D. (1967). Manual for the Wechsler Preschool and Primary Scale of Intelligence. New
York: Psychological Corporation.
Wechsler, D. (1974). Wechsler Memory Scale manual. San Antonio: Psychological Corporation.
Wechsler, D. (1981). Manual for the Wechsler Adult Intelligence Scale-Revised. New York:
Psychological Corporation.
Wechsler, D. (1987). Wechsler Memory Scale-Revised manual. San Antonio: Psychological
Corporation.
Wechsler, D. (1989). Manual for the Wechsler Preschool and Primary Scale of Intelligence Revised
(WPPSI-R). San Antonio: Psychological Corporation.
Wechsler, D. (1991). Manual for the Wechsler Intelligence Scale for Children (3rd ed.). New
York: Psychological Corporation.
Wechsler, D. (1992). Wechsler Individual Achievement Test. San Antonio: Psychological
Corporation.
Wechsler, D. (1997). Wechsler Adult Intelligence Scale—3rd Edition (WAIS-3). San Antonio, TX:
Harcourt Assessment.
Wechsler, D. (1997). WAIS-III Administration and Scoring manual. San Antonio: Psychological
Corporation.
Wechsler, D. (1997). Wechsler Memory Scale: Administration and Scoring manual (3rd ed.). San
Antonio: Psychological Corporation.
Wechsler, D. (2001). Wechsler Test of Adult Reading. San Antonio: Psychological Corporation.
Wechsler, D. (2003). Wechsler Intelligence Scale for Children—4th Edition (WISC-IV). San
Antonio: Harcourt Assessment.
Wechsler, D. (2008). Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). San Antonio:
NCS Pearson.
Wechsler, D. (2009). Wechsler Memory Scale-Fourth Edition (WMS-IV). New York: The Psychological
Corporation.
Wedding, D. (1983a). Clinical and statistical prediction. Clinical Neuropsychology, 5, 49–55.
Wedding, D. (1983b). Comparison of statistical and actuarial models for predicting lateralization
of brain damage. Clinical Neuropsychology, 4, 15–20.
Weigl, E. (1927). Zur Psychologie sogenannter Abstraktionsprozesse. Zeitschrift für Psychologie,
103, 2–45. (Translated by M. Rioch (1948) and reprinted as: On the psychology of so-called
processes of abstraction. Journal of Abnormal and Social Psychology, 36, 3–33).
Weiss, D. (2004). Computerized adaptive testing for effective and efficient measurement in
counseling and education. Measurement and Evaluation in Counseling and Development,
37(2), 70–84.
Weintraub, S., Dikmen, S. S., Heaton, R. K., Tulsky, D. S., Zelazo, P. D., Bauer, P. J., & Gershon,
R. C. (2013). Cognition assessment using the NIH Toolbox. Neurology, 80(11 Suppl 3), S54–
S64.
Wells, F. L., & Ruesch, J. (1942). Mental Examiner’s Handbook. New York: Psychological
Corporation.
Welsh, M. C., & Pennington, B. F. (1988). Assessing frontal lobe functioning in children: Views
from developmental psychology. Developmental Neuropsychology, 4, 199–230.
Welsh, M. C., Pennington, B. F., & Grossier, D. B. (1991). A normative-developmental study of
executive function: A window on prefrontal function in children. Developmental
Neuropsychology, 7, 131–149.
Wender, R., Hoffman, H. G., Hunner, H. H., Seibel, E. J., Patterson, D. R., & Sharar, S. R. (2009).
Interactivity influences the magnitude of virtual reality analgesia. Journal of Cyber Therapy
and Rehabilitation, 2(1), 27.
Werner, P., Rabinowitz, S., Klinger, E., Korczyn, A., & Josman, N. (2009). Use of the virtual
action planning supermarket for the diagnosis of mild cognitive impairment. Dementia and
Geriatric Cognitive Disorders, 301–309.
Wernicke, C. (1874). Der Aphasische Symptomkomplex. Breslau: Cohn & Weigart.
Westerberg, H., Jacobaeus, H., Hirvikoski, T., Clevberger, P., Ostensson, M. L., Bartfai, A., &
Klingberg, T. (2007). Computerized working memory training after stroke-a pilot study. Brain
Injury, 21(1), 21–29.
Widmann, C. N., Beinhoff, U., & Riepe, M. W. (2012). Everyday memory deficits in very mild
Alzheimer’s disease. Neurobiology of Aging, 33(2), 297–303.
Wilkins, R. H. (1964). Neurosurgical classic-XVII: The Edwin Smith surgical papyrus. Journal of
Neurosurgery, 21(3), 240–244.
Willcutt, E. G., Doyle, A., Nigg, J. T., Faraone, S. V., & Pennington, B. F. (2005). Validity of the
executive function theory of attention-deficit/hyperactivity disorder: A meta-analytic review.
Biological Psychiatry, 57, 1336–1346.
Williams, J. M. (1988). Everyday cognition and the ecological validity of intellectual and
neuropsychological tests. In J. M. Williams & C. J. Long (Eds.), Cognitive approaches to
neuropsychology (pp. 123–141). New York: Plenum.
Williams, W. H., Evans, J. J., & Wilson, B. A. (2003). Neurorehabilitation for two cases of
post-traumatic stress disorder following traumatic brain injury. Cognitive Neuropsychiatry, 8,
1–18.
Williams, J. E., & McCord, D. M. (2006). Equivalence of standard and computerized versions of
the Raven progressive matrices test. Computers in Human Behavior, 22, 791–800.
Williams, L. M., Mathersul, D., Palmer, D. M., Gur, R. C., Gur, R. E., & Gordon, E. (2009).
Explicit identification and implicit recognition of facial emotions: I. Age effects in males and
females across 10 decades. Journal of Clinical and Experimental Neuropsychology, 31(3),
257–277.
Wilson, B. A. (1993). Ecological validity of neuropsychological assessment: Do neuropsycho-
logical indexes predict performance in everyday activities? Applied & Preventive Psychology,
2, 209–215.
Wilson, B. A. (2011). Cutting edge developments in neuropsychological rehabilitation and
possible future directions. Brain Impairment, 12, 33–42.
Wilson, B. A. (2013). Neuropsychological rehabilitation: State of the science. South African
Journal of Psychology, 43(3), 267–277.
Wilson, B. A., Alderman, N., Burgess, P. W., Emslie, H., & Evans, J. J. (1996). Behavioral
Assessment of the Dysexecutive Syndrome. Bury St. Edmunds: Thames Valley Test Company.
Wilson, B. A., Clare, L., Cockburn, J. M., Baddeley, A. D., Tate, R., & Watson, P. (1999). The
Rivermead Behavioral Memory Test—Extended Version. Bury St. Edmunds: Thames Valley
Test Company.
Wilson, B., Cockburn, J., & Baddeley, A. (1991). The Rivermead Behavioural Memory Test manual. Bury St. Edmunds: Thames Valley Test Company.
Wilson, B. A., Cockburn, J., Baddeley, A., & Hiorns, R. (1989). The development and validation
of a test battery for detecting and monitoring everyday memory problems. Journal of Clinical
and Experimental Neuropsychology, 11, 855–870.
Wilson, B. A., Emslie, H. C., Quirk, K., & Evans, J. J. (2001). Reducing everyday memory and planning problems by means of a paging system: a randomised control crossover study. The Journal of Neurology, Neurosurgery, and Psychiatry, 70, 477–482.
Wilson, B. A., Evans, J. J., Brentnall, S., Bremner, S., Keohane, C., & Williams, H. (2000). The Oliver Zangwill Centre for Neuropsychological Rehabilitation: a partnership between health care and rehabilitation research. In A.-L. Christensen & B. P. Uzzell (Eds.), International Handbook of Neuropsychological Rehabilitation (pp. 231–246). New York: Kluwer Academic/Plenum.
Wilson, B. A., Shiel, A., Foley, J., Emslie, H., Groot, Y., Hawkins, K., et al. (2004). Cambridge
Test of Prospective Memory. Bury St. Edmunds: Thames Valley Test Company.
Windmann, S., Kirsch, P., Mier, D., Stark, R., Walter, B., Güntürkün, O., et al. (2006). On framing effects in decision making: linking lateral versus medial orbitofrontal cortex activation to choice outcome processing. Journal of Cognitive Neuroscience, 18(7), 1198–1211.
Witsken, D. E., D’Amato, R. C., & Hartlage, L. C. (2008). Understanding the past, present, and
future of clinical neuropsychology. In R. C. D’Amato & L. C. Hartlage (Eds.), Essentials of
neuropsychological assessment, second edition: Treatment planning for rehabilitation (pp. 3–
29). New York: Springer Publishing Company.
Witt, J. A., Alpherts, W., & Helmstaedter, C. (2013). Computerized neuropsychological testing in
epilepsy: Overview of available tools. Seizure, 22(6), 416–423.
Wolbers, T., & Hegarty, M. (2010). What determines our navigational abilities? Trends in
Cognitive Sciences, 14(3), 138–146.
Woo, E. (2008). Computerized neuropsychological assessments. CNS Spectrums, 13(10 Suppl 16),
14–17.
Woods, S. P., Weinborn, M., & Lovejoy, D. W. (2003). Are classification accuracy statistics
underused in neuropsychological research? Journal of Clinical and Experimental
Neuropsychology, 25(3), 431–439.
Wright, C. E., O’Donnell, K., Brydon, L., Wardle, J., & Steptoe, A. (2007). Family history of
cardiovascular disease is associated with cardiovascular responses to stress in healthy young
men and women. International Journal of Psychophysiology, 63, 275–282.
Wu, D., Courtney, C., Lance, B., Narayanan, S. S., Dawson, M., Oie, K., & Parsons, T. D. (2010). Optimal arousal identification and classification for affective computing: Virtual reality Stroop task. IEEE Transactions on Affective Computing, 1, 109–118.
Wu, D., Lance, B., & Parsons, T. D. (2013). Collaborative filtering for brain-computer interaction using transfer learning and active class selection. PLOS ONE, 8(2), e56624.
Yellowlees, P. M., Shore, J., & Roberts, L. (2010). Practice guidelines for videoconferencing-based telemental health—October 2009. Telemedicine and e-Health, 16(10), 1074–1089.
Yerkes, R. M. (1917). Behaviorism and genetic psychology. Journal of Philosophy, Psychology, and Scientific Methods, 14, 154–160.
Yoakum, C. S., & Yerkes, R. M. (1920). Army mental tests. New York: Holt.
Zackariasson, P., & Wilson, T. L. (2010). Paradigm shifts in the video game industry.
Competitiveness Review, 20, 139–151.
Zakzanis, K. K. (1998). Brain is related to behavior (p < 0.05). Journal of Clinical and
Experimental Neuropsychology, 20(3), 419–427.
Zakzanis, K. K. (2001). Statistics to tell the truth, the whole truth, and nothing but the truth: formulae, illustrative numerical examples, and heuristic interpretation of effect size analyses for neuropsychological researchers. Archives of Clinical Neuropsychology, 16(7), 653–667.
Zakzanis, K. K., & Azarbehi, R. (2014). Introducing BRAINscreen: Web-based real-time examination and interpretation of cognitive function. Applied Neuropsychology: Adult, 21(2), 77–86.
Zald, D. H., & Andreotti, C. (2010). Neuropsychological assessment of the orbital and
ventromedial prefrontal cortex. Neuropsychologia, 48(12), 3377–3391.
Zangwill, O. L. (1947). Psychological aspects of rehabilitation in cases of brain injury. British
Journal of Psychology, 37(2), 60–69.
Zelazo, P. D., Müller, U., Frye, D., Marcovitch, S., Argitis, G., Boseovski, J., & Carlson, S. M. (2003). The development of executive function in early childhood. Monographs of the Society for Research in Child Development, 68(3), i–151.
Zygouris, S., & Tsolaki, M. (2015). Computerized cognitive testing for older adults: A review. American Journal of Alzheimer’s Disease and Other Dementias, 30(1), 13–28.
Index
A
Abstract attitude, 34–36
Active navigation, 88, 89
Activities of daily living, 61, 75, 76, 106, 113, 114, 121, 123, 124, 140
Actor-centered, 12, 20, 21
Adaptive testing, 9, 48, 107, 139
Adaptive virtual environments, 9
Affective assessments, 91, 108, 109, 140
Affective neuroscience, 8, 22, 68, 91, 92
Affective systems, 23
Agent-centered decision making, 12
American academy of clinical neuropsychology, 49
ANAM-Sports medicine battery, 55
Anterior cingulate, 84
Army alpha, 40, 41
Army beta, 40, 41
Army Group Examinations, 41
Assessment, 3–10, 12–14, 16–18, 20
Attention, 8, 16, 22, 56–59, 70, 74, 102, 103, 106–109, 116–120, 122, 126, 130, 140, 145
Attentional training system, 118
Attention process training, 118
AULA Virtual Reality based attention test, 74
Automated Neuropsychological Assessment Metrics (ANAM), 54, 56–58
Automatic processing, 36
Autonomic, 23

B
Baddeley, Alan, 21, 26, 68
Banaji, Mahzarin, 13, 14
Barkley, Russell, 62, 83
Bauer, Russell, 9, 10, 49, 65, 110, 136
Bechara, Antoine, 24–26, 68
Behavioral assessment of the dysexecutive syndrome, 75
Behaviorism, 5, 38
Benton, Arthur, 34, 39, 41, 43
Bigler, Erin, 9, 43
Bilder, Robert, 5–7, 9, 31, 47, 107, 110, 135–137, 139
Binet, Alfred, 9, 40
Boston process approach, 44
Bracy cognitive rehabilitation program, 118
Brain–computer interfaces, 129
Brain Resource International Database (BRID), 109, 140
BRAINScreen, 107, 108
Brigadoon for Asperger’s syndrome, 143
Broca, Paul, 33, 114
Burgess, Paul, 8, 12, 17–19, 66–68, 75, 76, 88, 95, 129, 137

C
California verbal learning test, 52, 87, 88, 109
Cambridge Neuropsychological Test Automated Battery (CANTAB), 57, 58
Cambridge Test of Prospective Memory (CAMPROMPT), 16
Captain’s log computerized cognitive training system, 117
Capuana, Lesley, 23
Cardiovascular, 57
Case Schn, 35
Cattell, Raymond, 42
Chan, Raymond, 19, 24, 75, 140
Chaytor, Naomi, 16, 140
Cicerone, Keith, 14, 15, 115, 116
ClinicaVR (Digital media works), 70
CNS vital signs, 56, 57
Cogmed QM, 119–121
Cognitive Atlas, 139
Cognitive bias test, 21
Cognitive neuroscience, 7, 20
Cognitive ontologies, 31, 135–137, 141, 144