Paula J. Durlach
U. S. Army Research Institute
ABSTRACT
Recent research on change detection suggests that people often fail to notice
changes in visual displays when they occur at the same time as various forms of vi-
sual transients, including eye blinks, screen flashes, and scene relocation. Distrac-
tions that draw the observer’s attention away from the location of the change es-
pecially lead to detection failure. As process monitoring and control systems rely
on humans interacting with complex visual displays, there is a possibility that im-
portant changes in visually presented information will be missed if the changes
occur coincident with a visual transient or distraction. The purpose of this article
is to review research on so-called “change blindness” and discuss its implications
for the design of visual interfaces for complex monitoring and control systems.
The major implication is that systems should provide users with dedicated
change-detection tools, instead of leaving change detection to the vagaries of hu-
man memorial and attentional processes. Possible training solutions for reducing
vulnerability to change-detection failure are also discussed.
CONTENTS
1. INTRODUCTION
2. CONDITIONS THAT PRODUCE CHANGE BLINDNESS
3. FACTORS THAT AFFECT THE SPEED OF CHANGE DETECTION
3.1. Distractions
3.2. Discriminability
3.3. Categorization
3.4. Biased Serial Search
3.5. Amount of Information and Level of Analysis
3.6. Exogenous Attentional Capture
3.7. Within-task Learning
3.8. Meaningfulness
3.9. Expertise
4. POTENTIAL METHODS OF FACILITATING CHANGE DETECTION IN
VISUAL DISPLAYS
4.1. Design of the Work Environment
4.2. Training
5. CONCLUSIONS
1. INTRODUCTION
This review discusses recent findings on visual change detection and fail-
ures of visual change detection known as Change Blindness (CB). These find-
ings have relevance to the design and use of systems intended for the monitor-
ing and control of multiple processes and entities. CB refers to the failure to
detect what should be obvious visual changes. It may occur because of a mo-
mentary change in focus of attention or because some other visual transient
(e.g., a blink or screen flash) occurs at the same time as the change. For exam-
ple, while zooming in or out on a military tactical map display, an op-
erator may fail to notice a change in position of an enemy icon that occurred
during the zoom operation, when the map was being redrawn.
Complex monitoring and control systems (e.g., nuclear power plant con-
trol rooms, airplane cockpits) involve visual search over multiple displays,
display manipulation tasks, situation assessment, decision making, and com-
munications. The research literature on CB indicates that it is exactly under
circumstances like these that changes in visual displays are most likely to go
undetected. This would obviously undermine efficient use of the sophisti-
cated information system at the operator’s disposal. As future systems be-
come more and more sophisticated, awareness of what is known about CB
should be taken into account both in the design of those systems and in opera-
tor training. For example, the U. S. Army is undergoing major transformation
over the next decade (Objective Force Task Force, 2002). Senior leadership
envisions a future maneuver force (the Objective Force) that is medium
weight, quickly deployable, extremely reliant on information networking ca-
pabilities, and capable of full-spectrum operations with flexible, adaptable
soldiers and leaders. Personnel will be replaced by networked information
processing tools and unmanned robotic entities (Future Combat System of
Systems [FCS]). The humans that will use the FCS and the characteristics of
their information-processing systems must be regarded as significant design
elements. There are implications for decisions concerning task allocation, in-
terface design, and training. Such implications will be illustrated in this article
through reference to practical examples from current military and prototype
FCS systems.
The phenomenon of CB has received much attention for what it can re-
veal about the nature of the human information processing system and con-
sciousness itself (e.g., Noë, Pessoa, & Thompson, 2000; Rensink, 2000;
Simons, 2000). This article is not concerned with these lofty issues but,
rather, with how to use knowledge of the phenomenon to avoid it. In gen-
eral, people are unaware of this phenomenon and overestimate their ability
to detect changes (Levin, Momen, Drivdahl, & Simons, 2000; Scholl,
Simons, & Levin, in press). Although it has been claimed that CB is not af-
fected by practice (Rensink, 2000), the review will suggest that, if using a
limited set of icons and symbols, training may be one way to reduce or
avoid it. To experience the CB phenomenon for yourself, try the demon-
strations at http://www.usd.edu/psyc301/ChangeBlindness.htm and
http://www.maccs.mq.edu.au/~alice/fleeting/pictureshi.html.
This review is not concerned with the effects of stress, fatigue, or work
overload. It is concerned with failures to detect changes in visual displays that
could occur at the beginning of a work session, in a calm situation, in which all
the observer needs to do is his or her normal work. Interruptions and stress
would be expected to exacerbate such effects even further (Driskell & Salas,
1996; Hancock & Desmond, 2001; McFarlane, 2002; McFarlane & Latorella,
2002).
rectly at her, you would almost surely notice this. The change in color would
automatically draw your attention. This example is intended to illustrate that
even a brief shift of attention away from a scene can lead to a failure to detect
a change. But in this example, the observer was not necessarily looking for a
change. Suppose the observer is dedicated to the task of finding changes? It
turns out that even when the observer is instructed to locate changes, they are
often missed.
Perhaps the reader will be familiar with the common child’s puzzle game
in which two cartoons are displayed in different locations (e.g., Figures 1 and
2). The task is to find the differences between the two pictures. Success takes a
concerted search strategy in which each feature of one cartoon must be com-
pared, in a serial fashion, with the analogous feature in the other cartoon. We
cannot merely look at one cartoon thoroughly and then scan the second one
for differences. Processing of the first picture is inadequate to retain all the in-
dividual features of that picture while systematically comparing them to the
second picture. This task requires both attention (which features to focus on
and in how much detail) and memory (remembering what was seen in the first
picture in enough detail to judge if the analogous feature in the second picture
is the same or different).
Instead of the cartoons being presented on different pages, suppose they
were displayed in the same location on a computer screen, with the two im-
ages rapidly alternated. The discrepancies between the two images would
then be much easier to detect. Attention would be automatically drawn to
those aspects by which the images differed. The image alternation produces
“visual transients” in the locations of image differences, which automatically
direct the observer’s focus to the sites of discrepancies.
Now suppose we interpose a blank screen between each alternation of the
image, so that the observer sees image 1, blank screen, image 2, blank screen,
image 1, blank screen, image 2, and so on. If the images are displayed this
way, the differences between them once again would be difficult to detect (for
a naïve observer, who had not seen them alternate without the blank). We are
back to virtually the same situation as when the pictures were presented in dif-
ferent locations. Instead of the visual transients being localized, change would
occur at every point in the image (change from the blank screen to the image).
Thus, attention would not be drawn to any specific elements of the scene.
What has just been described is one paradigm that has been used to study CB:
the flicker paradigm (e.g., Rensink, O’Regan, & Clark, 2000). The blank
screen plays the role of a real-world shift of view.
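To make the timing structure concrete, a minimal sketch of such a flicker loop is shown below. The sketch uses the pygame library purely for illustration; the display durations and the gray blank field are assumptions of the sketch, not parameters taken from any of the studies reviewed here.

```python
import itertools
import pygame

# Minimal sketch of the flicker paradigm's timing loop (illustrative values:
# 240 ms per image, 80 ms per blank; these are assumptions, not the exact
# parameters of the studies cited in this review).
def run_flicker(image_a_path, image_b_path, on_ms=240, off_ms=80):
    pygame.init()
    screen = pygame.display.set_mode((800, 600))
    image_a = pygame.image.load(image_a_path)
    image_b = pygame.image.load(image_b_path)
    blank = pygame.Surface(screen.get_size())
    blank.fill((128, 128, 128))  # uniform gray field shown between images

    # A, blank, B, blank, ... : because every pixel changes at each
    # blank-to-image transition, no local transient marks the difference.
    schedule = itertools.cycle(
        [(image_a, on_ms), (blank, off_ms), (image_b, on_ms), (blank, off_ms)]
    )
    while True:
        frame, duration = next(schedule)
        screen.blit(frame, (0, 0))
        pygame.display.flip()
        pygame.time.wait(duration)
        # Stop cycling when the observer presses any key to report the change.
        if any(e.type == pygame.KEYDOWN for e in pygame.event.get()):
            break
    pygame.quit()
```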
Much of the research on CB has been conducted using this flicker para-
digm with static displays, in which the observer is instructed to find a single
change in an otherwise constant scene. Changes investigated include pres-
ence or absence (an object is added or taken away), color (change of an ob-
Figure 1. Illustration of child’s puzzle. Find the six differences between Figure 1 and
Figure 2.
Figure 2. Illustration of child’s puzzle. Find the six differences between Figure 1 and
Figure 2.
ing eye movement, it has been shown that change detection is undermined if
the change occurs during saccadic movement (Grimes, 1996) or a blink
(O’Regan, Deubel, Clark, & Rensink, 2000).
Examination of CB has not been restricted to static displays. Levin and
Simons (1997) showed observers a short video of a conversation between
two actors. At each change of camera angle, some aspect of the scene was
also changed. When observers were subsequently given a questionnaire
about changes noticed, only 1 out of the 10 participants noticed any of the
changes. Participants were asked to watch the video again. Before the sec-
ond viewing, observers were told to look out for a change at every shift of
camera angle. Nevertheless, on average, only two of the nine possible
changes were detected.
In another study, Levin and Simons (1997) examined what would happen
if the central actor in the video was changed. In this video, a student was first
seen sitting at a desk and then getting up and walking out the door. The scene
then shifted to the student walking down the hall to answer the phone; except
it was a different person. A separate study verified that the two people were
relatively easy to tell apart; however, only 33% of the 40 participants viewing
the video with the changed actor noticed the change. (See
http://www.wjh.harvard.edu/~viscog/lab/demos.html for a demonstra-
tion of these videos.)
One might wonder if these failures to notice changes are symptomatic of
passive viewing, as opposed to active participation. This idea can be dispelled
as a result of live studies that were conducted by Simons and Levin (1998). In
one study a person was stopped on the sidewalk and asked for directions. In
the middle of the ensuing conversation, two experimental confederates inter-
rupted by carrying a door between the participants. During this interruption
the original person who had asked for directions was changed. On average,
only 50% of the people noticed this, once the door was carried off and the
conversation resumed (a video of this procedure can be seen at http://
www.wjh.harvard.edu/~viscog/lab/demos.html).
It should be obvious from the preceding discussion that the occurrence of CB,
while using any complex system with visual displays, is a serious possibility. A
few studies have already documented this.
To study the effectiveness of head-up displays (HUDs), Haines (1991)
tested commercial pilots in a flight simulator. During the course of the test, pi-
lots were placed (four times) in a situation in which they had to decide
whether it was safe to land or not. On two of these occasions it was not safe,
due to an obstruction on the runway. The first time the obstruction occurred,
2 of the 4 pilots using an HUD failed to detect the obstruction and continued
to land the plane, until stopped by the experimenter. Ververs and Wickens
(1998) also looked at change detection in pilots. They found an attentional
trade off between detection of commanded flight changes and changes in traf-
fic conditions.
DiVita et al. (2004) examined performance of Naval Combat Information
Center (CIC) console operators in the traffic monitoring section of the sys-
tem. The system delivers a bird’s-eye view of a two-dimensional com-
puter-displayed map, depicting the operator’s location (ownship), air traffic, sea
vessels, ground installations (such as airports), and tactical information, such
as commercial air corridors and range circles from ownship. A sample image
of the screen is illustrated in Figure 3. Participants in the study were familiar
with the CIC and briefed ahead of time as to the purpose of the study (to ex-
amine change detection). They were shown the types of changes they would
be tested on prior to the beginning of the study. Their ongoing task was to
monitor map activity and evaluate the appropriateness of the amity coding
(e.g., friendly, enemy, commercial) of displayed aircraft or vessels (contacts).
Amity was represented by icon color. Information about each contact (e.g.,
speed, bearing, range from ownship) could be obtained to aid this evaluation
(via a pop-up window resulting from a mouse-click on the contact).
A second computer screen (not normally part of the system) displayed
alerts, notifications, and questions. On critical change trials, the map screen
went blank, to simulate a diversion of the operator’s attention. At the same
time, an alert appeared on the second computer. The operator was notified
that a critical change had just occurred and the nature of the change (e.g., “an
aircraft has significantly changed its course”). The operator was instructed to
return to the map display (which reappeared) and click on the contact that
Figure 3. Illustration of the screen used in DiVita and Nugent’s (2000) experiment. En-
tities (e.g., aircraft, ships, and airports) were depicted as symbols on a map. Symbol col-
ors represented amity (e.g., friendly, hostile, neutral, commercial, unknown). Symbol
shapes represented domain of operation (e.g., surface, subsurface, air). Ownship was
represented by cross-hairs and surrounded by ownship’s weapons range capabilities
(circle with pie shape missing). Commercial air corridors were shown as the shaded
wide lines. (Figure taken from unpublished manuscript, “Verification of the Change
Blindness Phenomenon in an Applied Setting,” by DiVita, Obermayer, Nugent, & Lin-
ville, 2002.)
had changed. Immediate feedback was given as to the accuracy of the selec-
tion and, if incorrect, selection continued until the correct contact was chosen.
The results were that first choices were incorrect 28.8% of the time. Statistical
modeling indicated that if the operator had not chosen correctly by the third
choice, subsequent selection was no better than chance. The authors sug-
gested that the results probably underestimate the degree of CB that could oc-
cur under normal usage of this system. The participants knew they would be
tested for change detection and the types of changes they would be tested on.
Moreover, the screens were relatively uncluttered (only 8 contacts present at
a time), compared with when the system is actually in use (50–100).
Durlach and Chen (2003) examined change detection in the context of the
army’s Force XXI Battle Command Brigade and Below system (FBCB2).
FBCB2 is a fielded digital system that is used across echelons from vehicle
3.1. Distractions
Figure 4. Example of the screen used by Durlach and Chen (2003). Participants clicked
on the response bar whenever they saw an icon change. Clicking on an FBCB2 task
menu button opened windows that occluded the situation awareness map. Even though
the experiment used only one target icon at a time (e.g., enemy icon), use of task win-
dows immediately prior to a change reduced detection of icon changes to around 50%.
3.2. Discriminability
change is pointed out, it is glaringly obvious. The fact that the original and the
changed images can be easily discriminated is what makes the phenomenon
interesting. For example, in their video example of the student who gets up to
answer the phone, Levin and Simons (1997) pointed out that the two students
in the video were easy to tell apart. If they were twins, their similarity in itself
could account for the CB, but it wouldn’t be very interesting. It seems likely
that the easier it is to discriminate the original and the changed image, the
faster the change will be detected. If Levin and Simons had used a male and a
female student instead of two male students, it seems unlikely that many peo-
ple would have failed to detect this change. Changing an element of a scene
that changes the meaning or interpretation of the scene (described next) leads
to faster change detection, but what about making the change physically more
or less noticeable (without changing the meaning)? A recent study by
Zelinsky (2004) demonstrated that visual similarity between the original
and changed objects, as computed by low-level visual filters, can be an impor-
tant factor determining the probability of change detection. In another study,
Zelinsky (2001) showed that detecting a changed object in an array of objects was
more rapid when the new object was oriented perpendicular to the replaced
object than when it was oriented at the same angle as the replaced object.
The effect of discriminability is highly relevant to the design of interface
screens, particularly choice of icons and symbols. The iconology that is cur-
rently employed in army systems (and likely destined for use in the FCS) is
determined by doctrinal rules and has not been designed with
discriminability in mind (e.g., see Mil-Std 2525B, 1999). Designers of systems
not subject to such constraints would greatly benefit from using easily dis-
criminated stimuli.
3.3. Categorization
players, people fail to notice a (black) gorilla walking through the game
(Simons & Chabris, 1999). IB (unlike CB) is easily subverted by instructions;
however, any suspicion that something unusual might happen greatly im-
proves detection of a change in nontargeted features.
Figure 5. Illustration of the global and local changes used by Austen and Enns (2000).
changing its orientation. The participant’s task was to say which item had
been switched. In the late-onset condition, 1 of the 12 items appeared later
than the other 11 in the array. In the uniqueness condition, one of the line draw-
ings was presented in a unique color. Location of the late-onset item or the
uniquely colored item was random with respect to the position of the changed
item.
Scholl (2000) found that when the late-onset or uniquely colored item did
happen to occur in the same location as the target change, detection of the
change was faster, compared with the control condition. Thus, drawing atten-
tion to the location of the change did facilitate detection. However, the con-
verse was not found. That is, change detection was not slowed, compared to
the control condition, when the late-onset or uniquely colored item appeared
in a different location from the changed item.
There is some dispute as to whether there is such a thing as pure exogenous
attentional capture (Chastain & Cheal, 2001). Some researchers have pro-
posed that attentional capture is always influenced by the attentional set of the
observer and the task requirements (e.g., Folk & Remington, 1999; Folk,
Remington, & Johnston, 1992). In general, attention can be captured by any
unique cue when observers are searching for a unique item among all similar
items, but if they are searching for a particular feature value, unique items may
be ignored (Simons, 2000). Recall that Austen and Enns (2000) found that
global changes were more quickly detected than local changes. This could be
because we automatically attend to changes in the forest rather than the trees.
However, Austen and Enns also found that this natural tendency could be
modulated.
Austen and Enns (2000) showed that speed of change detection can be in-
fluenced by the probability of change type. Using the same E and S stimuli de-
scribed earlier, they manipulated the likelihood that the change to be de-
tected was global or local (25% of the time for one and 75% of the time for the
other). When there was only one large letter in the display, detection times
were determined by the probability of change type. When global changes oc-
curred 75% of the time, they were detected more quickly than local changes
but when local changes occurred 75% of the time, they were detected more
quickly than global changes. These findings suggest that observers can adopt
a strategy based on experience with the particular task. They learn to bias
search in favor of changes that occur the most frequently.
When there were three or five large letters in the display, speed of change
detection was generally faster for global than local changes. More impor-
tantly, however, change detection was modulated by change-type probability.
Complete failure of detection within the time allowed occurred significantly
more often when the probability of a local change was .25 compared to when
it was .75. People failed to detect local changes about 40% of the time when
global changes were the more probable (compared to about 20% of the time
when local changes were more probable). This suggests that small local
changes in a display are particularly vulnerable to CB when these types of
changes are relatively rare.
Austen and Enns showed that change detection could be affected by prob-
ability of change type. Generalizing their findings from local and global char-
acters, it seems likely that other types of changes could exhibit the same pat-
tern. For example, if changes in position are more frequent than changes in
color, then detection of color changes may be impaired. The degree to which
prior experience affects detection likely depends on the natural salience of the
stimulus (its exogenous capture ability), the relative discriminability of the dif-
ferent change types, the magnitude of the difference in probability of occur-
rence, and the meaningfulness of the stimuli (Pringle, Kramer, Atchley, &
Irwin, 2001). Further research on the effects of prior experience needs to be
conducted. That research must be able to distinguish whether effects of prior
experience result from shifts in sensitivity to specific change types or response
bias (Parasuraman, Masalonis, & Hancock, 2000).
3.8. Meaningfulness
Recall that Richard et al. (2002) found that detection of changes in driving
scenes was significantly faster for driving-relevant than driving-irrelevant
changes. It has been demonstrated repeatedly by Rensink and colleagues
(2000) that change detection is more rapid for objects of central interest than
for objects of marginal interest (O’Regan, Rensink, & Clark, 1999; O’Regan et
al., 2000; Rensink et al., 2000). In these studies, central and marginal interest
items were determined on the basis of ratings given by independent judges. It
seems reasonable to assume that items of central interest contribute more to
the interpretation or meaningfulness of a scene than items of marginal inter-
est. As such items tend to get more direct eye fixations (O’Regan et al., 2000),
it is possible that faster change detection for central versus marginal interest
items is a consequence of fixation location. If it is more probable that people
are looking directly at central interest items at any given time, it is more likely
that they will be looking at them at the time of the change. This should facili-
tate change detection. By monitoring eye movements during a CB experi-
ment, O’Regan et al. (2000) did find evidence for this interpretation. When
observers’ eyes were directly fixated on the location of change, there was no
difference in the probability of change detection for central versus marginal
interest items. Moreover, these were the conditions under which a change
was most likely detected (i.e., when directly fixated). However, fixation was
not a guarantee of detection. Even when people were looking directly at the
location of change, they still failed to detect it 40% of the time. Thus direct fix-
ation aids detection but does not ensure it.
The results of this experiment suggested that location of gaze fixation does
contribute to the differences in times to detect changes in central versus mar-
ginal interest items but the results also suggested that this is not the entire ex-
planation. In some cases, changes were detected without direct fixation on the
change location. For these cases, even when distance between fixation point
and change point was taken into account, central interest items were detected
sooner than marginal interest items. A possible interpretation is that memory
for the preceding scene might account for these faster detections of central
item changes. For remembered items, change detection could occur by com-
parison of a currently viewed feature and a remembered feature, whereas for
nonremembered items, direct viewing before and after the flicker would be
required. From what is known about memory, the more meaningful the mate-
rial, the better it should be encoded and remembered (Jacoby, Evans, &
Bartz, 1978). Thus, more meaningful changes could be detected more rapidly.
In addition, even if memory is not involved, in a serial search, it is likely that
more meaningful items are checked first (Yarbus, 1967).
3.9. Expertise
This allows them to encode and remember these positions more accurately
than novice players (Chase & Simon, 1973; de Groot, Gobet, & Jongman,
1996) and to more reliably detect changes in the positions of pieces (Kämpf &
Strobel, 1998). It would be fair to say the expert sees meaning where the nov-
ice does not. An implication is that experts should be superior in detecting
changes in domain-relevant material, compared to novices. To test this impli-
cation, Werner and Thies (2000) conducted a flicker experiment in which the
scenes were taken from American football. One group of study participants
were experts in American football, whereas a second group were unfamiliar
with the sport. Confirming the prediction, the experts were faster and had
fewer detection failures than the novices. When the change to be detected was
meaningful to scene interpretation, the difference between experts and nov-
ices was greater than when the change was incidental to scene interpretation.
Thus, cultivating expertise may be one way of ameliorating CB. For a particu-
lar system, thorough training within the domain the system controls (e.g., tac-
tical operations in the case of FCS) may be one means of reducing vulnerabil-
ity to CB. An additional implication is that a CB test may be a convenient and
appropriate way to assess trainees for mastery. Personnel who have devel-
oped a certain level of expertise with a system should have faster change de-
tection times and fewer detection failures, compared with novice operators.
tend to capture attention naturally. For example, all new information could
blink. Once acknowledged by the user, the blinking would be turned off. This
may have additional advantages for situations with multiple operators. An
item marked as new could remain marked as new until any operator took an
action with it. Then the change in designation would serve as a signal to the
other operators that someone was dealing with it (e.g., looking at a new sensor
image or targeting a newly detected enemy). That could prevent the redun-
dant activity of multiple operators.
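A minimal sketch of how such shared “new until acknowledged” marking could be represented is given below; the class and attribute names are hypothetical stand-ins for whatever a fielded system would actually use.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class TrackedItem:
    """A displayed item that blinks (is_new) until some operator acts on it."""
    item_id: str
    is_new: bool = True
    acknowledged_by: Optional[str] = None

class SharedAlertBoard:
    """Hypothetical team-wide store of 'new item' markings."""

    def __init__(self) -> None:
        self.items: Dict[str, TrackedItem] = {}

    def add_item(self, item_id: str) -> None:
        # Newly received contacts or reports start out marked as new (blinking).
        self.items[item_id] = TrackedItem(item_id)

    def acknowledge(self, item_id: str, operator: str) -> None:
        # The first operator to act on an item turns the blinking off for the
        # whole team, signaling that someone is already dealing with it.
        item = self.items[item_id]
        if item.is_new:
            item.is_new = False
            item.acknowledged_by = operator

    def still_blinking(self) -> List[str]:
        # Items every operator's display should continue to highlight.
        return [i.item_id for i in self.items.values() if i.is_new]
```

On this sketch, each display refresh would query still_blinking() to decide which icons keep their new-item highlight, while acknowledge() records which operator took ownership of the item.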
Although this sounds like a potential solution, as with all types of alerts, it
would need to be fine-tuned to the user’s needs. There has been considerable
research conducted on the best way to design alerts and warnings (e.g.,
Mumaw, Roth, Vicente, & Burns, 2000; Wogalter, Conzola, & Smith-Jackson,
2002), which could be applied to great advantage. As with other alerts, there
is a danger that having to acknowledge change alerts would only increase op-
erator workload, or that operators would habituate to the change signal, no
longer noticing it once it became a recurring feature of the display. Perhaps
each operator could tailor the change alerts to the specific information they
anticipate will be particularly important (e.g., when a new opposition icon ap-
pears on the map). It should be noted that, no matter what the nature of the
alert, it can only be effective if perceived by the operator. For example, if
alerts are logged as messages in a text window but the operator does not in-
spect that window, the alerts are pointless. Clearly, the possibility of allowing
for intelligent change-alerting holds promise for minimizing change detec-
tion failure, but at the same time it must be integrated into ongoing tasks so as
not to increase workload or add further distractions. Further research should
be devoted to determining the best means to implement intelligent alerting in
tactical, as well as other settings.
Instead of signaling changes through explicit alerts, it may be better to have a
change-detection tool, accessed at the user’s discretion. For example, when ac-
cessed, such a tool might display all the map changes in the last N min, with N
specified by the user. The mode of change highlighting could take various
forms; for example, blinking of all changes, replaying changes in fast-forward
like a video of past events, or rapid alternation between the current map and the
map as it looked N min before. Whatever the method, it should use salient
means to highlight the changes. The advantage of this method is that it would
not interrupt or further distract the operator whenever a change occurred, as an
alert might (for a study on the effects of different types of interruptions, see
McFarlane, 2002). In addition, it has the possibility of revealing patterns in
changes (are all enemy moving south?), which might be difficult to observe oth-
erwise. Proper training would be required to gain maximum benefit from such a
tool, because people tend to overestimate their ability to detect changes (Levin,
Momen, Drivdahl, & Simons, 2000) and therefore may not take advantage of
such a tool unless explicitly trained to do so.
Smallman and St. John (2003) evaluated a change detection tool, CHEX
(Change History Explicit), for assisting change awareness in the
context of a naval system intended for the monitoring of airspace activity.
Changes were logged in a (continuously viewable) table that could be flexi-
bly sorted (by change significance, change type, change recency, or aircraft
ID). Highlighting a table entry highlighted the relevant aircraft icon on the
situation awareness map. Detecting important aircraft changes was both
more accurate and more timely when CHEX was available than when it
was not. Moreover, CHEX assisted in recovering situation awareness after
an interruption.
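The common core of the discretionary review tool described above and a CHEX-style table is a time-stamped log of display changes that can be queried by recency and re-sorted on different attributes. The following is a hypothetical sketch of that data structure, not a description of how CHEX itself is implemented.

```python
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRecord:
    """One logged change to a displayed entity (field names are illustrative)."""
    entity_id: str      # e.g., a contact or icon identifier
    change_type: str    # e.g., "appeared", "course", "amity"
    significance: int   # higher values = more operationally important
    timestamp: float = field(default_factory=time.time)

class ChangeLog:
    def __init__(self) -> None:
        self._records: List[ChangeRecord] = []

    def record(self, entity_id: str, change_type: str, significance: int) -> None:
        self._records.append(ChangeRecord(entity_id, change_type, significance))

    def since(self, minutes: float) -> List[ChangeRecord]:
        # All changes in the last N minutes, for on-demand review or replay.
        cutoff = time.time() - minutes * 60.0
        return [r for r in self._records if r.timestamp >= cutoff]

    def sorted_by(self, attribute: str = "timestamp") -> List[ChangeRecord]:
        # CHEX-style flexible sorting (by recency, type, significance, or ID).
        return sorted(self._records, key=lambda r: getattr(r, attribute))

# Example: review everything that changed in the last 5 min, most significant first.
# recent = sorted(log.since(5), key=lambda r: r.significance, reverse=True)
```

Whether the records then drive a sortable table, a blinking overlay, or a fast-forward replay is a presentation decision layered on top of the same underlying log.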
With regard to the design of the work environment, discrimination and
categorization processes should both be taken into account. It has been as-
serted that the more similar items are, the less likely a change from one to the
other will be noticed. This suggests that in representing real-world objects,
their most important distinguishing aspects be translated into the most easily
discriminated features of the symbols used to represent them. Moreover,
these features should be global rather than local. To a certain extent current
military symbology does follow this prescription. Color is used to distinguish
friendly from enemy. The colors used are highly discriminable and uniform
over the icon (global). More subtle aspects, such as wheeled versus tracked ve-
hicles, are represented by local and more difficult to discriminate features in-
side the icon (Durlach & Chen, 2003). But suppose these more subtle distinc-
tions become the focus of the operators’ task, for example, when the task is to
designate the priority of enemy targets? In that case, the operator should be
given the opportunity to redisplay the symbols in such a way as to make more
salient the distinguishing features they will need to detect. What is being sug-
gested is a tool for flexible categorization that allows the operator to visualize
more easily those features that are key to the performance of his or her goal.
The ability to toggle back and forth between the default symbol set and a
user-customized set would allow for both common understanding and flexi-
ble categorization.
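One way to picture such a tool is as two interchangeable rendering rules applied to the same underlying contact data: the default, doctrine-based coding and a user-defined coding that moves the currently task-relevant attribute into the most discriminable channel (here, color). The attribute names and color assignments in the sketch below are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Contact:
    contact_id: str
    amity: str      # e.g., "friendly", "hostile", "neutral"
    platform: str   # e.g., "wheeled", "tracked", "air"
    priority: int   # task-specific target priority (0 = not prioritized)

def default_style(c: Contact) -> Dict[str, str]:
    # Default coding: color carries amity, as in the doctrinal symbol set.
    amity_colors = {"friendly": "blue", "hostile": "red", "neutral": "green"}
    return {"color": amity_colors.get(c.amity, "yellow"), "badge": c.platform}

def priority_style(c: Contact) -> Dict[str, str]:
    # Task-tailored coding: color carries target priority instead, so the
    # distinction the operator currently needs is the globally salient one.
    priority_colors = {0: "gray", 1: "orange", 2: "red", 3: "magenta"}
    return {"color": priority_colors.get(c.priority, "gray"), "badge": c.amity}

def render(contacts: List[Contact], priority_view: bool) -> List[Dict[str, str]]:
    # Toggling the view swaps the rule, not the data, so the operator can
    # return to the shared default coding at any time.
    style = priority_style if priority_view else default_style
    return [{"id": c.contact_id, **style(c)} for c in contacts]
```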
Screen clutter is another anticipated feature of workstations that will con-
tribute to CB. As discussed earlier, the more items in a display, the longer it
will take, on average, for a change to be detected. One solution may be to pro-
vide the user with flexible filters that filter out unnecessary information from
the display (for research on this, see e.g., St. John, Feher, & Morrison, 2002;
Ververs & Wickens, 1998; Yeh & Wickens, 2001). One disadvantage of this
option, however, is that what at one time might have been irrelevant may
suddenly become very relevant while still being filtered out. It is possible that to solve
this problem, a kind of reverse fading could be used, such that, by default, the
filtered-out information is gradually faded back into the display or that critical
changes restore filtered information automatically.
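A sketch of this “reverse fading” idea is given below: a filtered-out item’s visibility climbs back toward full opacity after a delay, and a critical change restores it immediately. The time constants and the notion of what counts as “critical” are placeholders rather than validated design values.

```python
def filtered_item_opacity(seconds_since_filtered: float,
                          critical_change: bool,
                          hidden_period_s: float = 120.0,
                          fade_back_duration_s: float = 60.0) -> float:
    """Opacity for a filtered-out display element (0.0 invisible, 1.0 fully visible).

    The element stays hidden for hidden_period_s, then fades back in linearly
    over fade_back_duration_s; a critical change restores it at once.
    """
    if critical_change:
        return 1.0
    if seconds_since_filtered <= hidden_period_s:
        return 0.0
    progress = (seconds_since_filtered - hidden_period_s) / fade_back_duration_s
    return min(1.0, progress)
```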
All of the proposed design solutions have advantages and disadvantages.
Their inclusion in monitoring and control systems would require research to
determine which would provide net benefit in terms of change detection, situ-
ation awareness, and user workload (Yeh & Wickens, 2001). Moreover, any
design innovation would have implications for training.
4.2. Training
Above and beyond training operators how to use specific interface tools,
there are other means by which training (in general) can decrease the proba-
bility of CB. As was mentioned in the review earlier, CB was less probable for
central, meaningful information versus marginal or extraneous information.
Moreover, experts were less prone to CB with meaningful changes. It is likely
that for an operator of any complex control system, whether incoming infor-
mation is meaningful or not will depend on the context and their interpreta-
tion of the situation. As the ability to formulate an interpretation of incoming
information will depend on the expertise and experience of the operator, it is
important that the operator be thoroughly trained in the domain in which
they are operating. Training must be content-led (what are the system func-
tions and goals), not technology-led (how to make the system conduct tasks).
With respect to military systems, expertise in tactical arts must be cultivated
in operators. Training of this expertise must be seen as the primary goal.
Training with the information management system, of course, will also be im-
portant but training on the system should be seen as a means to an end, not
the end in and of itself. Too often, when a new system is introduced, it is as-
sumed that trainees are already experts in the processes the system is in-
tended to monitor and control. Training to use a word-processing system does
not make one a good writer.
Although the literature on CB asserts that training on change detection
does not lessen CB (e.g., Rensink, 2000), it turns out that few studies have
actually investigated the issue of training, per se. The assertion is based on
the failure to see improvement in change detection performance within an
experimental session, but these experiments were not designed to promote
training. For example, they did not provide the user feedback or repeated
trials with the same stimuli. In experiments that used re-arrangements of
stimulus elements, as opposed to trial-unique natural scenes, improvements
in change detection have been found (Austen & Enns, 2000; Williams &
Simons, 2000). The stimuli used in these studies are more analogous to the
information displays that will be used by digital process monitoring and
control systems than are natural scenes. That is because systems displays
will always contain the same basic elements (e.g., icons, meters) but with
unique patterns and settings.
Austen and Enns (2000) found that the probability with which different
types of changes occurred influenced detection speed. This influence can
only have come from within-session learning. These findings raise the possi-
bility that change detection exercises using different configurations of stimuli
on an operator interface could lessen vulnerability to CB with that interface.
Training exercises could be designed so as to incorporate the findings re-
viewed earlier, to maximize the benefits of that training. For example, in dedi-
cated change detection training, the changes should not be arbitrary (random
changes that don’t make sense). Changes should have meaningful signifi-
cance and discussion of that significance should be part of the training. The
most important types of changes anticipated to occur in a particular setting or
mission should be rehearsed and repeated (e.g., choice points in a contin-
gency plan). These repetitions should repeat semantic content, not specific
screen layouts. That is, the same conceptual situations should be repeated,
with different physical representations. It would also be important to include
training with events thought to be unlikely but having a high level of signifi-
cance. The basic research that would establish whether such training would
be effective or not, remains to be conducted.
Another means of training that might lessen the probability of CB would
involve thorough training with the symbols and icons used in the operator in-
terface. Recall that change detection is more successful, the easier it is to dis-
criminate the old and new items. It turns out that people can be taught to dis-
criminate between highly similar stimuli (Fahle & Poggio, 2002). Just as an
untrained taster can learn to discriminate one type of wine from another, an
operator can learn to more easily discriminate one symbol from another. The-
oretically, the better able the operator is to distinguish the different symbols,
the more likely they are to detect when one of those symbols changes. This
strategy can also be generalized to patterns. For example, operators could
learn to discriminate different patterns of symbols as displayed on maps and
categorize different patterns according to their significance.
In one experiment, participants actually were pre-trained with the stimuli
subsequently used in the change detection task (Williams & Simons, 2000).
These were “fribbles,” artificial beings of different “species.” Some partici-
pants were trained to tell different fribble species apart, whereas others were
trained to identify individual fribbles by name (e.g., learning dogs and cats vs.
Spot, Rex, Fluffy, Pursia). It turns out that this independent training did influ-
ence change detection performance, but in a way that affected the propensity
to say a change had occurred, as opposed to overall accuracy. People given
species training were more likely to report no change (whether there was one
or not) and people given naming training were more likely to report change
(whether there was one or not). Thus, the means by which operators are famil-
iarized with the symbolic elements used in displays may affect the outcome
with respect to change detection. A training method that promotes accuracy
as opposed to response bias needs to be determined.
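Separating sensitivity from response bias in data like these is conventionally done with signal detection measures: d′ indexes how well change and no-change trials are discriminated, whereas the criterion c indexes the overall willingness to report a change (cf. Parasuraman, Masalonis, & Hancock, 2000). The computation below uses the standard equal-variance formulas; the hit and false-alarm rates are invented solely to illustrate the pattern just described.

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate: float, false_alarm_rate: float):
    """Equal-variance signal detection measures.

    d' = z(H) - z(FA)          : sensitivity to the change itself
    c  = -0.5 * (z(H) + z(FA)) : response bias (negative = liberal,
                                 i.e., inclined to report "change")
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Invented rates: identical sensitivity (d' of about 1.1) but opposite biases,
# analogous to species-trained (conservative) vs. name-trained (liberal) groups.
conservative = dprime_and_criterion(hit_rate=0.60, false_alarm_rate=0.20)
liberal = dprime_and_criterion(hit_rate=0.80, false_alarm_rate=0.40)
```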
Finally, it is possible that certain kinds of training, not involving the spe-
cific system to be operated, could be successful in boosting change detection
abilities generally. Green and Bavelier (2003) recently reported that training
on an action video game (Medal of Honor) for 1 hr per day for 10 days led to
improvements in several measures of visual attention (capacity, spatial distri-
bution, and temporal resolution). To the extent that these processes are funda-
mental to visual change detection, video game practice could also improve
that ability.
5. CONCLUSIONS
Summary of design recommendations, with their advantages and disadvantages:

Provide a dedicated change detection tool that logs changes and allows the user to view them in a way most relevant to their needs.
  Advantage: Allows user to recover changes at any time; flexible coding should facilitate recovery of situation awareness.
  Disadvantage: May require extensive software development and analysis of user information requirements.

Provide change alerts, and allow user to customize alert priorities.
  Advantage: Notifies user that a change has occurred.
  Disadvantage: Responding to alerts may add to user’s workload; learning how to set priorities may take training and experience.

Code information for recency.
  Advantage: Gives user some guide as to which information is newest.
  Disadvantage: May add to visual complexity; requires finding an unused but salient coding modality.

Avoid covering one window with another.
  Advantage: Lessens possibility that a change will be covered when it occurs.
  Disadvantage: May require additional screen real estate.

Use symbols that are easily discriminated.
  Advantage: Allows users to more easily see changes.
  Disadvantage: Depending on complexity of information, there might not be sufficient easily discriminated coding.

Avoid screen clutter. Allow for flexible filtering of information.
  Advantage: Reduces complexity and number of elements in visual display.
  Disadvantage: Important changes in filtered elements unobservable.

Minimize distractions and interruptions.
  Advantage: Lessens possibility that attention will be diverted at the time of a change.
  Disadvantage: Distractions and interruptions may be inherent to the task.
NOTES
Author’s Present Address. Paula Durlach, U.S. Army Research Institute, Simulator
Systems Research Unit, ATTN: DAPE-ARI-IF [Durlach], 12350 Research Parkway,
Orlando, FL 32826. E-mail: [email protected].
HCI Editorial Record. First manuscript received March 28, 2003. Revision re-
ceived August 5, 2003. Accepted by Richard Pew. Final manuscript received Septem-
ber 26, 2003. –Editor
REFERENCES
Austen, E., & Enns, J. T. (2000). Change detection: Paying attention to detail. Psyche, 6.
Retrieved on March 28, 2002 from http://psyche.cs.monash.edu.au/v6/psyche-
6-11-austen.html
Blackmore, S. J., Brelstaff, G., Nelson, K., & Troscianko, T. (1995). Is the richness of
our visual world an illusion? Transsaccadic memory for complex scenes. Perception,
24, 1075–1081.
Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55–81.
Chastain, G., & Cheal, M. (2001). Attentional capture with various distractor and tar-
get types. Perception and Psychophysics, 63, 979–990.
Chastain, G., Cheal, M., & Kuskova, V. (2002). Inappropriate capture by diversionary
dynamic elements. Visual Cognition, 9, 355–381.
de Groot, A. D., Gobet, F., & Jongman, R. W. (1996). Perception and memory in chess:
Studies in the heuristics of the professional eye. Assen, Netherlands: Van Gorcum & Co
B.V.
DiVita, J., Obermayer, R., Nugent, W., & Linville, J. M. (2004). Verification of the
change blindness phenomenon while managing critical events on a combat infor-
mation display. Human Factors, 46, 205–218.
Driskell, J. E., & Salas, E. (Eds.). (1996). Stress and human performance. Mahwah, NJ:
Lawrence Erlbaum Associates, Inc.
Durlach, P. J., & Chen, J. Y. C. (2003). Visual change detection in digital military dis-
plays. Proceedings of the Interservice/Industry Training, Simulation, and Education Confer-
ence 2003. Orlando, FL: IITSEC.
Fahle, M., & Poggio, T. (2002). Perceptual learning. Cambridge, MA: MIT Press.
Folk, C. L., & Remington, R. W. (1999). Can new objects override attentional control
settings? Perception and Psychophysics, 61, 727–739.
Folk, C. L., Remington, R. W., & Johnston, J. C. (1992). Involuntary covert orienting is
contingent on attentional control settings. Journal of Experimental Psychology: Human
Perception and Performance, 18, 1030–1044.
Foushee, H. C., & Helmreich, R. L. (1988). Group interaction and flight crew perfor-
mance. In E. L. Weiner & D. C. Nagel (Eds.), Human factors in aviation (pp.
189–227). San Diego, CA: Academic.
Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective atten-
tion. Nature, 423, 534–537.
Grimes, J. (1996). On the failure to detect changes in scenes across saccades. In K. A.
Akins (Ed.), Perception. Vancouver studies in cognitive science: Vol. 5 (pp. 89–110). New
York: Oxford University Press.
Haines, R. F. (1991). A breakdown in simultaneous information processing. In G.
Obrecht & L. Stark (Eds.), Presbyopia research: From molecular biology to visual adapta-
tion (pp. 171–175). New York: Plenum.
Hancock, P. A., & Desmond, P. A. (2001). Stress, workload, and fatigue. Mahwah, NJ:
Lawrence Erlbaum Associates, Inc.
Jacoby, L. L., Evans, J. D., & Bartz, W. H. (1978). A functional approach to levels of
processing. Journal of Experimental Psychology: Human Learning & Memory, 4,
331–346.
Kämpf, U., & Strobel, R. (1998). Automatic position evaluation in “controlled” change
detection: Data-driven vs. concept-guided encoding and retrieval strategy compo-
nents in chess players with varying degrees of expertise. Zeitschrift fuer Psychologie,
206, 23–46.
Levin, D. T. (2000). Race as a visual feature: Using visual search and perceptual dis-
crimination tasks to understand face categories and the cross-race recognition defi-
cit. Journal of Experimental Psychology: General, 129, 559–574.
Levin, D. T., Momen, N., Drivdahl, S. B., & Simons, D. J. (2000). Change blindness:
The metacognitive error of overestimating change-detection ability. Visual Cogni-
tion, 7, 397–412.
Levin, D. T., & Simons, D. J. (1997). Failure to detect changes to attended objects in
motion pictures. Psychonomic Bulletin & Review, 4, 501–506.
McFarlane, D. C. (2002). Comparison of four primary methods for coordinating the
interruption of people in human–computer interaction. Human–Computer Interac-
tion, 17, 63–139.
McFarlane, D. C., & Latorella, K. A. (2002). The scope and importance of human in-
terruption in human–computer interaction design. Human–Computer Interaction, 17,
1–61.
Mil-Std 2525B (1999). Department of Defense Interface Standard: Common
Warfighting Symbology. Defense Information Systems Agency. Retrieved on
March 28, 2002 from http://www-symbology.itsi.disa.mil/symbol/mil-std.htm
Mumaw, R., Roth, E. M., Vicente, K. J., & Burns, C. M. (2000). There is more to moni-
toring a nuclear power plant than meets the eye. Human Factors, 42, 36–55.
Noë, A., Pessoa, L., & Thompson, E. (2000). Beyond the grand illusion: What change
blindness really teaches us about vision. Visual Cognition, 7, 93–106.
Objective Force Task Force (2002). The objective force in 2015 white paper. Retrieved
on December 15, 2002 from http://www.objectiveforce.army.mil/pages/
OF%20in%202015%20White%20Paper%20(final).pdf
O’Regan, J. K., Deubel, H., Clark, J. J., & Rensink, R. A. (2000). Picture changes dur-
ing blinks: Looking without seeing and seeing without looking. Visual Cognition, 7,
191–211.
O’Regan, J. K., Rensink, R. A., & Clark, J. J. (1999). Change-blindness as a result of
“mudsplashes.” Nature, 398, 34.
Parasuraman, R., Masalonis, A. R., & Hancock, P. A. (2000). Fuzzy signal detection
theory: Basic postulates and formulas for analyzing human and machine perfor-
mance. Human Factors, 42, 636–659.
Pringle, H. L., Kramer, A. F., Atchley, P., & Irwin, D. E. (2001). The role of attentional
breadth in perceptual change detection. Psychonomic Bulletin & Review, 8, 89–95.
Rensink, R. A. (2000). Visual search for change: A probe into the nature of attentional
processing. Visual Cognition, 7, 345–376.
Rensink, R. A., O’Regan, J. K., & Clark, J. J. (2000). On the failure to detect changes in
scenes across brief interruptions. Visual Cognition, 7, 127–145.
Richard, C. M., Wright, R. D., Ee, C., Prime, S. L., Shimizu, Y., & Vavrik, J. (2002). Ef-
fect of a concurrent auditory task on visual search performance in a driving-related
image-flicker task. Human Factors, 44, 108–119.
Scholl, B. J. (2000). Attenuated change blindness for exogenously attended items in
a flicker paradigm. Visual Cognition, 7, 377–396.
Scholl, B. J., Simons, D. J., & Levin, D. T. (in press). “Change blindness” blindness: An
implicit measure of a metacognitive error. In D. T. Levin (Ed.), Visual metacognition:
Thinking about seeing. Westport, CT: Greenwood.
Simons, D. J. (2000). Attentional capture and inattentional blindness. Trends in Cogni-
tive Sciences, 4, 147–155.
Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional
blindness for dynamic events. Perception, 28, 1059–1074.
Simons, D. J., & Levin, D. T. (1998). Failure to detect changes to people during a
real-world interaction. Psychonomic Bulletin & Review, 5, 644–649.
Smallman, H. S., & St. John, M. (2003). CHEX (Change history explicit): New HCI
concepts for change awareness. 47th Annual Meeting of the Human Factors and Ergo-
nomics Society. Denver, CO: Human Factors and Ergonomics Society.
Sohn, Y. (2000). The role of expertise, working memory capacity, and long-term mem-
ory retrieval structure in situation awareness. Dissertation Abstracts International: Sec-
tion B: The Sciences & Engineering, 60, 5806.
St. John, M., Feher, B. A., & Morrison, J. G. (2002). Evaluating alternative symbologies
for decluttering geographical displays. Space and Naval Warfare System Center Tech
Report 1890: (August). San Diego, CA: Space and Naval Warfare Systems Center.
Ververs, P. M., & Wickens, C. D. (1998). Head-up displays: Effects of clutter, display
intensity, and display location on pilot performance. International Journal of Aviation
Psychology, 8, 377–403.
Werner, S., & Thies, B. (2000). Is change blindness attenuated by domain-specific ex-
pertise? An expert-novice comparison of change detection in football images. Vi-
sual Cognition, 7, 163–173.
Wickens, C. D., & Carswell, C. M. (1995). The proximity compatibility principle: Its
psychological foundation and relevance to display design. Human Factors, 37,
473–494.
Williams, P., & Simons, D. J. (2000). Detecting changes in novel, complex, three-di-
mensional objects. Visual Cognition, 7, 297–322.
Wogalter, M. S., Conzola, V. C., & Smith-Jackson, T. L. (2002). Research-based guide-
lines for warning design and evaluation. Applied Ergonomics, 33, 219–230.
Yarbus, A. (1967). Eye movements and vision. New York: Plenum.
Yeh, M., & Wickens, C. D. (2001). Attentional filtering in the design of electronic map
displays: A comparison of color coding, intensity coding, and decluttering tech-
niques. Human Factors, 43, 543–562.
Zelinsky, G. J. (2001). Eye movements during change detection: Implications for
search constraints, memory limitations, and scanning strategies. Perception and
Psychophysics, 63, 209–225.
Zelinsky, G. J. (2004). Detecting changes between real-world objects using
spatio-chromatic filters. Psychonomic Bulletin & Review, 10, 533–555.