Battle of Cognition
Praeger Security International Advisory Board
Board Cochairs
Loch K. Johnson, Regents Professor of Public and International Affairs, School of Public and
International Affairs, University of Georgia (USA)
Paul Wilkinson, Professor of International Relations and Chairman of the Advisory Board,
Centre for the Study of Terrorism and Political Violence, University of St. Andrews (UK)
Members
Anthony H. Cordesman, Arleigh A. Burke Chair in Strategy, Center for Strategic and
International Studies (USA)
Thérèse Delpech, Director of Strategic Affairs, Atomic Energy Commission, and Senior
Research Fellow, CERI (Fondation Nationale des Sciences Politiques), Paris (France)
Sir Michael Howard, former Chichele Professor of the History of War and Regius Professor of
Modern History, Oxford University, and Robert A. Lovett Professor of Military and Naval
History, Yale University (UK)
Lt. Gen. Claudia J. Kennedy, USA (Ret.), former Deputy Chief of Staff for Intelligence,
Department of the Army (USA)
Paul M. Kennedy, J. Richardson Dilworth Professor of History and Director, International
Security Studies, Yale University (USA)
Robert J. O’Neill, former Chichele Professor of the History of War, All Souls College, Oxford
University (Australia)
Shibley Telhami, Anwar Sadat Chair for Peace and Development, Department of Govern-
ment and Politics, University of Maryland (USA)
Fareed Zakaria, Editor, Newsweek International (USA)
Battle of Cognition
Edited by
Alexander Kott
Library of Congress Cataloging-in-Publication Data
Battle of cognition : the future information-rich warfare and the mind of the commander /
edited by Alexander Kott.
p. cm.
Includes bibliographical references and index.
ISBN 978–0–313–34995–9 (alk. paper)
1. Command and control systems. 2. Situational awareness. 3. Command of
troops. I. Kott, Alexander.
UB212.B37 2008
355.3'3041—dc22 2007037551
British Library Cataloguing in Publication Data is available.
Copyright © 2008 by Greenwood Publishing Group
All rights reserved. No portion of this book may be
reproduced, by any process or technique, without the
express written consent of the publisher.
Library of Congress Catalog Card Number: 2007037551
ISBN-13: 978–0–313–34995–9
First published in 2008
Praeger Security International, 88 Post Road West, Westport, CT 06881
An imprint of Greenwood Publishing Group, Inc.
www.praeger.com
Printed in the United States of America
Contents
Introduction
Alexander Kott
1 Variables and Constants: How the Battle Command of Tomorrow Will Differ (or Not) from Today's
Richard Hart Sinnreich
The Timeless Conditions of Battle
The Changing Context of Command
Key Command Tasks
Recurring Command Dilemmas
New Command Challenges
Enhancing Future Battle Command
Introduction
Alexander Kott
The impact of the information revolution on our society has been sudden,
profound, and indisputable. The last couple of decades have seen a dramatic
rise of new, powerful economic sectors dedicated to machines and processes
for generation, transformation, distribution, and utilization of informational
products. Computers and software, wired and wireless communication net-
works, autonomous machines, the proliferation of highly capable sensors—all
these elements have transformed both daily lives and worldwide economies
to an extent that would have been difficult to fathom merely a generation ago.
Warfare, inevitably, is among the human endeavors that have experienced
the massive impact of the information revolution. Historically, warfare has
been particularly dependent on, and influenced by, technology. From iron
and bronze weapons to horse breeding and riding to sails and gunpowder to
motor power and so on—the history of warfare is largely the story of some
people creatively adapting (and some failing to adapt) their military cultures,
institutions, and tactics to new waves of technology.1 Not surprisingly, since
the beginning of the information revolution, military thinkers in the United
States and elsewhere have been both analyzing and implementing the changes
enabled and necessitated by the rapidly advancing information technologies.2
While some of these adjustments have rapidly entered military practice,
others remain elusive even after long anticipation.
Examples of military transformations engendered by the information revo-
lution include some that are relatively inexpensive and benign.3 Others are
ambitious, enormously expensive, and therefore often controversial. One
effort in the latter category is the Future Combat System of the U.S. Army, a
colossal program intended to build a highly networked system of new battle
but the agency that was responsible for their development (and much else
besides)—The Defense Advanced Research Projects Agency.”8
Whatever the accolades, it was DARPA that by the year 2000 became
increasingly concerned about the challenges of battle command in an
information-rich, network-enabled military force. Urged by the energetic
and visionary Lieutenant Colonel Gary Sauer, DARPA and the U.S. Army
formed a joint development program, initially called the Future Combat Sys-
tem Command and Control (FCS C2).9 Gary Sauer joined DARPA, became
its program manager, and convinced two other talented mili-
tary technologists—Maureen Molz, a senior engineering manager with the
U.S. Army, and Lieutenant Colonel Robert Rasch—to join him. Together,
they led the program through most of its life. Around 2003, the program was
renamed Multicell and Dismounted Command and Control (MDC2)10 and
continued into 2007.
The products of the program included an unconventional, innovative
approach and technology for battle command. While not representing the
position of either the U.S. Army or DARPA, this book is based partly on the
ideas, experiments, and lessons of that program.11
DRIVING FORCES
Major innovations do not occur without both a push and a pull. A push is a
set of factors that make a change possible. Often, the push is technological—a
new invention or an advance in technology or a combination of new tech-
nologies that makes possible a capability that was previously unachievable.
In the world of warfare, such a push implies that the potential opposing
force may also avail itself of such technologies and capabilities, and therefore
some counteraction must be considered. A pull is a set of factors that make a
change desirable and even necessary. Commonly, such factors result from an
evolution in the environments and opponents that the military is likely to face
in the near future.
Today’s shifts in battle command paradigms are enabled by several push
factors such as smart precision weapons, unmanned platforms and sensors,
ubiquitous networking, and intelligent decision aids. There are also powerful
pull factors: the continuous trend toward the dispersion of forces, the need
for lighter forces that can defeat a heavier opponent without entering into
his direct fire range, and the dramatic increase in the volume of battlespace
information. Later in this book, we discuss these topics in detail, but let us
preview them briefly here.
For example, the recent emergence and the rapid progress of unmanned
platforms are nothing short of revolutionary. In our generation, we are
observing the entry into the battlespace of an entirely new class of warriors:
unmanned automated sensors, stationary and mobile, ground based and air-
borne; unmanned fire platforms, for both direct and indirect fires, capable
of operating in the air and on the ground, large and small. It is difficult to
compare this development with anything else that has ever occurred in the
history of human warfare.
These artificial warriors possess unique strengths and weaknesses. They
can obtain and process far more information than a human being and yet
are generally much less intelligent in accomplishing even seemingly simple
tasks. They possess inhuman endurance, precision, strength, and “courage,”
thereby offering the commander a yet unexplored range of new tactics. On
the other hand, the unmanned, robotic platforms impose on their human
commanders a great burden: monitoring and controlling the assets that for
the foreseeable future will remain remarkably unintelligent as compared to
human warriors.
At the same time, affordable networking and computerization have brought
an unprecedented ability to exchange large volumes of information at great
speeds between both human and artificial warriors, both horizontally between
peers and vertically between echelons. The implications of this development
for the battle command are also drastic: the information flow rates as well
as distances and node-to-node connectivity have grown by many orders of
magnitude as compared to any time in the history of warfare. Many tradi-
tional limitations that used to constrain and shape the nature of the battle
command, such as the hierarchical flow of information and control, are now
open to rethinking.
In addition to greatly improving the flow of information, the technology
also helps make better use of the information. The ubiquitous presence of
computers in the battlespace at all levels of command became the norm in the
last 10–15 years and opened the door to the emergence and acceptance of
various computerized aids: visualization of the situation, exchange and inte-
gration of information, course-of-action planning, logistics, and maneuver
execution control. These aids are simultaneously multiplying the need for
information flows and enabling the decision maker to deal effectively with the
proliferation of information. They also bring new challenges by both reduc-
ing and increasing, in different ways, the fog of war.
Unlike the push factors, the pull has more to do with the changing nature
of the operational environment and opposing forces, although these are also
driven to a large extent by technological and economic forces. For example,
as recently as the 1980s, the U.S. military operated in a bipolar world, with
a clearly defined primary potential opponent. The geographic places and the
modes of likely confrontations were well understood. But then the collapse of
the Soviet Union—partly due to the information revolution—shattered the
clarity of threats faced by the U.S. military. Instead came a bewildering array
of often unpredictable conflicts scattered worldwide.
Without the well-defined expectations of where the next war may occur,
the cold war approach of prepositioning U.S. forces at strategically located
bases becomes impractical. Besides, without a massive and highly visible enemy
like the Soviet Union, the U.S. public is less willing to pay for a large number
of military personnel. This creates the need for ways to shuffle the limited
number of U.S. forces around the globe, from one conflagration to another,
rapidly and efficiently. The force has to become lighter and easier to deploy
to far-flung places. Its design, platforms, and weapons, and its command and
control have to adapt to the new realities.
In addition to helping undermine the Soviet Union and release a plethora
of other evildoers, the information revolution has produced other unexpected
ramifications. Advances in communications enable millions of Americans back
home to watch the wars—and their often gory outcomes—literally as they
unfold. The brutal emotional impact of real-time video beamed across the
world from the battlefront has no precedent in human history. Besides, the
new precision weapons, also engendered in part by advances in information
technologies, lead the public to expect surgical strikes without unnecessary
civilian deaths. With such images and expectations, the public’s tolerance for
casualties among our own troops as well as among enemy civilians has dimin-
ished dramatically. Today, the so-called “CNN effect” imposes new pressures
on a military commander: images of civilian casualties caused by a single stray
bomb can produce enormous, strategically significant outrage around the
world and in his12 own country. Somehow, the commander has to accomplish
his mission under the constraint of a public demand for low-casualty warfare.
In combination, these diverse driving forces have made a trend toward a new
battle command both feasible and inevitable.
greater agility and simultaneity of actions, as well as the demand for more
precise, surgical operations. However, we stress, the most important constant
of battle command is the commander himself, and a technological advance in
this field can succeed only by matching the new technology to the intricate
strengths and weaknesses of the human mind.
Yet, it is a very tall order to match technology and the human mind. The
complexities are immense, and the only effective techniques to deal with
them are experimental—essentially trial-and-error methods. In chapter 2, we
describe our approach to solving these challenges: the series of experiments
that comprised the core of the MDC2 program, in which we explored various
arrangements of human command cells and computer-based tools. This is
the place where the setting of the military scenarios and the physical arrange-
ments of our experiments are introduced. We describe the history of the pro-
gram, the typical battles portrayed in the experiments, and the tools we built
to help commanders fight the battles.
Having briefly introduced the battle command tools constructed and ex-
plored in the course of our experiments, in chapter 3, we offer the technically
minded reader a detour to explore the nuts and bolts of the tools. The chap-
ter starts with the overarching architecture, continues into a mapping of the
command functions and corresponding tools and then shows how they work
together in an illustrative scenario, and concludes by explaining the underly-
ing technology of the tools used in the experimental battle command support
environment.
With the profusion of functions, tools, and ramifications of battle com-
mand, one aspect—situation awareness—stands out as uniquely pervasive
and influential. That is why the next several chapters focus almost exclusively
on this all-important underpinning of battle command. Chapter 4 introduces
the fundamentals of situation awareness, beginning with the definitions of
situation awareness at several distinct levels and its exceptional significance
to battle command. To provide value to commanders, the design of a battle
command tool must pay careful attention to its ability to deliver situation
awareness. The chapter discusses specific recommendations on how to meet
such design objectives: the approach to requirements analysis, design, and
evaluation of systems that support situation awareness. It also points out the
serious limitations of our current understanding of situation awareness, espe-
cially as it applies to command teams.
Continuing the discussion of the theoretical foundations of situation aware-
ness, in chapter 5 we describe our experimental approach to measuring and
analyzing the processes by which warfighters develop situation awareness,
the role of situation awareness in effective decision making, and its ultimate
impact on the battle outcome. Our experimental findings highlighted situa-
tion awareness as the linchpin of the command process, as a key factor that
determined the efficacy of all its elements—from sensor and asset control to
decision quality and battle outcome. We explain how we gradually developed
the methods of collecting the relevant data and measuring both the so-called
1 Variables and Constants: How the Battle Command of Tomorrow Will Differ (or Not) from Today's
Richard Hart Sinnreich
In May 1916, off Denmark’s Jutland Peninsula, a naval battle took place for
which Great Britain’s Royal Navy had been preparing for more than a decade
and which it had sought in vain to bring about since the beginning of
World War I. For a day and a night, British admiral Sir John Jellicoe’s Grand
Fleet sparred in a roar of guns and hiss of torpedoes with German admiral
Reinhard Scheer’s High Seas Fleet.
In risking battle against his numerically superior adversary, Scheer’s inten-
tion was to lure a detachment of British warships out of port and destroy it
in detail, whittling away the Royal Navy’s tonnage advantage. Threatening
British shipping navigating the straits between Denmark and Norway, he
hoped, would entice Jellicoe’s fast but lightly armored battlecruisers into an
ambush by the more powerful battleships of the High Seas Fleet.
Instead, warned by intelligence of the intended German sortie, Jellicoe
took his entire fleet to sea even before Scheer weighed anchor. Misled by a
misunderstood radio intercept, however, Jellicoe, like Scheer, expected to face
only his enemy’s battle cruisers.
Accordingly, on the afternoon of May 31, distant from the rest of the British
fleet by more than 50 miles, Jellicoe’s battlecruisers, commanded by Vice Ad-
miral David Beatty, found themselves engaging their German counterparts on
a course that, unchanged, would have taken them directly under the guns of
Scheer’s battleships. Only a last-minute warning by one of Beatty’s light cruis-
ers alerted him in time to reverse course.
What the goddess of fortune gave with one hand, however, she took back
with the other. Thanks to signaling problems and its commander’s reluctance
to act without orders, Beatty’s most powerful squadron lagged behind, depriv-
ing him of its firepower for crucial minutes. That and poor gunnery cost the
British two battle cruisers. Fortunately, Beatty’s detached Third Battlecruiser
Squadron arrived just in time to even the odds and allow Jellicoe to deploy his
battleships into fighting formation before the arrival of Scheer’s main body.
What should have followed was the decisive clash of battle fleets for which
both navies had been built. Instead, astonished to find himself confronting
not just battle cruisers, but rather the entire Grand Fleet, Scheer turned his
ships on their heels and fled, a maneuver that the British at first failed to
detect, then failed to exploit.
Thirty minutes later, however, Scheer unaccountably reversed course once
again, in the process exposing his ships in column to the fire of Jellicoe’s battle
line. Awakening to his error, he then turned back a second time, covered by
his battle cruisers and torpedo attacks. Again Jellicoe failed to pursue, and
with night falling, the two fleets separated.
Both fleets now altered course, Jellicoe hoping to intercept the Germans
at daybreak, Scheer seeking only to evade further action and return to port. In
the darkness their courses converged, the Germans actually passing through
the rear of the British fleet. The British warships that detected them, however,
neither engaged them nor informed Jellicoe, who thus remained ignorant of
their proximity. At dawn the fleets were miles apart, and the Royal Navy had
lost its golden opportunity to destroy its German rival once and for all.1
Tactically, honors were nearly even. The British lost three modern battle
cruisers and three older cruisers, the Germans one battle cruiser and four
light cruisers, both in addition to smaller vessels. Psychologically, however,
Jutland was an immense disappointment to Britain. Unchallenged at sea for
more than a century, the vaunted Royal Navy had failed in a head-to-head
encounter to destroy the smaller fleet of what amounted to an upstart naval
power.
In a penetrating examination of the evolution of the Royal Navy between
Trafalgar and Jutland, British historian Andrew Gordon traced the factors
that led to that embarrassing result. The most important was a pervasive
change in the Royal Navy’s approach to battle command, reflecting above all
the impact on the Navy’s institutional culture and leadership of revolutionary
technological change during a century without major naval conflict.2
In the process, the initiative and audacity that had won Britain command of
the sea surrendered to a centralized and mechanical battle-command system
that proved slow to recognize opportunity and unable to exploit it. At Jutland,
as so often in the history of war, numerical superiority alone proved
unable to compensate for that deficiency.
A recent U.S. Army paper defines battle command succinctly as “the art
and science of applying leadership and decision making to achieve mission
success.”3 The elements of this definition deserve attention.
To begin with, the definition asserts that battle command is both an art
and a science. The former is a creative activity not susceptible to objective
confirmation or prediction, the latter a process of systematic discovery that
ultimately must satisfy both requirements. By the definition, battle command
somehow must reconcile these incompatible qualities.
Second, the definition implies a predetermined military objective. Battle
command seeks mission success. But just what that entails must be specified
elsewhere. So described, battle command differs markedly from strategic and
even operational direction, in which whether and why to accept or decline
battle is a preliminary and often difficult decision.
Finally, battle command is asserted to comprise two separate, albeit re-
lated functions—leadership and decision making. Both are ultimately soli-
tary activities. They differ in that respect from control, which significantly
appears nowhere in the definition. Control, the application of regulation
and correction, can be and typically is a corporate process, and, as with
defining mission success, apparently is something distinguishable from com-
mand, a view tacitly reflected in the common pairing of the two in military
terminology.
Together, these elements of the definition describe an idiosyncratic but
nonetheless reproducible activity. Reduced to its essentials, the definition
portrays battle command as a creative process constrained to a predefined
objective and conforming, in some measure at least, to confirmable principles
the application of which can produce predictable results. Indeed, that is the
way battle command is taught in most professional military schools.
History tells a rather different story. Examining the achievements of suc-
cessful battle captains, one can’t avoid concluding that much more is going on
than just the application, however artful, of reliable principles and practices.
Successful combat commanders display an almost uncanny ability to sense the
battlespace, anticipate their enemies’ behavior, and create and exploit oppor-
tunities where none previously were visible.
The late Air Force colonel John Boyd tried to capture that special talent
in his now-famous “OODA Loop”—Observe, Orient, Decide, Act.4 Like
the fighter pilots from whom Boyd derived his theory, successful battle com-
manders routinely execute that cycle more rapidly and effectively than their
adversaries, gaining a progressively greater advantage with each successive
engagement.
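To make Boyd's cycle concrete, the sketch below renders the OODA loop in the skeletal form it often takes in software discussions. It is purely illustrative: the Python function names, the track dictionary, and the simple fusion rule are our own assumptions, not Boyd's formulation or any fielded command system.

    # A minimal, purely illustrative rendering of Boyd's OODA loop.
    # All names (sensors, units, track fields) are hypothetical.

    def observe(sensors):
        """Collect raw reports from every available sensor."""
        return [s.read() for s in sensors]

    def orient(reports, picture):
        """Fuse new reports into the running picture of the battlespace."""
        for r in reports:
            picture[r["track_id"]] = r  # newest report wins per track
        return picture

    def decide(picture, intent):
        """Choose the action that best serves the commander's intent."""
        threats = [t for t in picture.values() if t.get("hostile")]
        return {"action": "engage" if threats else "continue",
                "targets": threats}

    def act(decision, units):
        """Translate the decision into orders to subordinate units."""
        for u in units:
            u.execute(decision)

    def ooda_cycle(sensors, units, intent, picture):
        """One turn of the loop; out-cycling the adversary means
        completing this turn faster than he completes his own."""
        picture = orient(observe(sensors), picture)
        act(decide(picture, intent), units)
        return picture

Nothing in the skeleton is doctrinal; it simply makes visible why tempo, the time to complete one turn of the loop, is the quantity Boyd's theory prizes.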
As Boyd himself recognized, however, the problem confronting the battle
commander differs in several crucial respects from that facing the fighter pilot.
The difference affects all four elements of the OODA loop, but especially the
last. For, whereas only his own reflexes and tolerances, the capabilities of his
aircraft, and the laws of physics constrain the fighter pilot’s ability to act, the
battle commander must act through others. The translation from decision to
action thus is much less straightforward, more vulnerable to miscommunica-
tion or misperception, and above all, more sensitive to human error and plain
bad luck.
THE TIMELESS CONDITIONS OF BATTLE
Danger
Danger affects battle command on several levels. At the most basic, it
requires those at the sharp edge to suppress every instinct of self-preservation
for purposes that rarely will be as visible to them as to their leaders. Through-
out history, a central purpose of military socialization and discipline has simply
been to inculcate resistance to fear.6
Even among trained and disciplined soldiers, however, that resistance has
limits. As every experienced commander knows, the well of courage isn’t
bottomless. Today, when democratic societies, at least, no longer will tolerate
the harsh discipline that stiffened Frederick’s lines at Leuthen or Wellington’s
squares at Waterloo, fighting men and women must be convinced in other
ways to expose themselves voluntarily to death or serious injury.7 As some
commanders relearned painfully during the Vietnam War, nothing can more
easily shatter that conviction than suspicion that their leaders don’t know
what they’re doing. Battle command thus directly affects the willingness of
soldiers to fight.
Danger also affects human perception. Modern sensors have by no means
diminished soldiers’ propensity under threat to misperceive, exaggerate, and
fantasize. Clausewitz’s notorious “fog” of war is much less often the product
of an outright lack of information than of misreading the information avail-
able. Like desert heat, danger tends to distort the vision and generate false
images. In the confusion of battle, Clausewitz commented, “it is the excep-
tional man who keeps his powers of quick decision intact.”8 More information
alone therefore is no guarantee of effective battle command. Instead, what
matters more is the judgment through which that information is filtered and
translated into knowledge.
Finally, danger affects the commander directly, less often in terms of physi-
cal risk than through the dilemmas it poses. Choices that may seem obvious
in hindsight rarely present themselves so clearly at the moment of decision.
In battle, every choice is fraught with peril. At Jutland, as Winston Churchill
justly acknowledged, Jellicoe was “the only man on either side who could lose
the war in an afternoon.”9 For Jellicoe, therefore, the perceived cost of defeat
more than counterbalanced the will to win. No battle-command system can
relieve the commander of the moral burden such dilemmas entail.
Uncertainty
In 1927, German physicist Werner Heisenberg proposed to his colleagues
that perfect knowledge on a quantum scale was unattainable. Skeptics objected
that this merely reflected the limits of measurement. But Heisenberg was able
to demonstrate that imperfect knowledge is built into the very fabric of the
subatomic universe.
A century earlier, Clausewitz reached a similar conclusion about war. “War,”
he wrote, “is the realm of uncertainty; three-quarters of the factors on which
action in war is based are wrapped in a fog of greater or lesser uncertainty.”10
As in physics, that uncertainty is in great measure insensitive to the means by
which information is acquired and transmitted. At Jutland, the same intel-
ligence system both alerted Jellicoe and misinformed him.
Modern technology certainly has vastly improved armies’ abilities to acquire
and share information. Netted communications, global positioning, enhanced
sensors, and overhead platforms all have significantly increased the informa-
tion available to commanders.
And yet, as recent conflicts reveal only too clearly, uncertainty persists.
Enemy and friendly units appear where they have no business being. Tar-
gets turn out to be not what they seemed. And commanders with abundant
communications still manage to misread the battlespace, the enemy, and each
other.11 Such difficulties have plagued armies and navies since the dawn of
organized warfare. While technology may be able to diminish them, there is
no convincing evidence to date that it ever will banish them entirely.
Chance
Finally, chance or, more broadly, what Clausewitz called “friction,” domi-
nates every battlefield. “Action in war,” he wrote, “is like movement in a resis-
tant element. Just as the simplest and most natural of movements, walking,
cannot easily be performed in water, so in war it is difficult for normal efforts
to achieve even moderate results.”
THE CHANGING CONTEXT OF COMMAND
Battlefield Enlargement
The first is the progressive enlargement of the battlefield and the increas-
ing number and dispersal of fighting formations. Until very recently in his-
torical terms, army commanders could and did exercise tactical command
directly, basing their decisions on what they could see with their own eyes,
transmitting orders verbally or at worst by messenger, and exerting leadership
by personal example.
To an Alexander, Julius Caesar, or Gustavus Adolphus, the modern injunc-
tion to “lead from the front” would have been superfluous. Throughout
much of military history, effective command could be exercised nowhere else.
The annals of warfare are replete with examples of battles won through the
commander’s direct personal supervision or lost through his incapacitation,
capture, or flight.
In the mid-nineteenth century, that began to change. At Waterloo in 1815,
Wellington and Napoleon still could exercise direct tactical command, observ-
ing virtually the entire battlefield and moving their units like chess pieces.
Nothing more vividly reveals the dependence of both armies on that personal
involvement than the errors committed by Bonaparte’s subordinates during
his brief infirmity in midbattle and Wellington’s dramatic personal interven-
tion to mass his musketry against the Old Guard at the battle’s climax.
Fifty years later in the Wilderness, neither Ulysses S. Grant nor Robert E.
Lee could even begin to exert similar personal direction. Not just the diffi-
culty of the terrain but also the sheer scale of the battlefield and the dispersal
of units made continuous observation and influence virtually impossible. Both
commanders could and did intervene at a few crucial moments. In large mea-
sure, however, having brought their forces to battle, they were compelled to
leave its tactical direction in the hands of their subordinates.14
Organizational Complexity
During the next half-century, as weapons diversified and their lethal reach
expanded, the enlargement of the battlefield was paralleled by a similar
increase in organizational complexity. The late nineteenth century saw the
multiplication of command echelons, emergence of battle staffs, accelerat-
ing functional specialization, and the introduction of new technologies from
motorization to electronic communications.
Reflecting on these developments, General Alfred von Schlieffen, chief of
the German general staff from 1891 to 1906, predicted that future command-
ers endowed with modern communications no longer would lead from the
front, but instead would direct operations by telephone and telegraph from
distant headquarters where, “seated in a comfortable chair, in front of a large
desk, the Modern Alexander will have the entire battlefield under his eyes, on
a map.”15
His vision proved far too sanguine. When World War I erupted in 1914,
senior commanders found themselves little more able to exert direct influence
on the battle than their predecessors of half a century earlier. Instead, wedded
to centralized direction on battlefields the size and complexity of which far
outstripped commanders’ abilities to sense, assess, and communicate, armies
and their commanders collided like rudderless ships.
In the end, the much criticized linearity characteristic of so many World
War I battles was not simply a product of dim-witted leadership, but instead
reflected as much or more the sheer difficulty of reconciling central tacti-
cal direction with decentralized execution by commanders and subordinates
equally unprepared by doctrine or training for its demands. Only toward the
end of the war did the Germans, acknowledging the problem, at last begin to
develop tactics relying on more decentralized command arrangements.16
Meanwhile, the twentieth century also saw a steep increase in the number,
types, and effects of weapons, and with it, problems in harmonizing their
employment. Until the 1904 Russo-Japanese War, for example, artillery usu-
ally was positioned forward and fired over open sights. Even then, coordination
with supported infantry was anything but perfect, as the failure of Confeder-
ate artillery to suppress federal defenses on Gettysburg’s final day revealed.
The withdrawal of artillery into defilade at the beginning of the twentieth
century merely compounded the problem. Throughout World War I, tele-
phonic communications routinely proved inadequate to coordinate fire and
maneuver.
Multiplication of Domains
The final key development affecting battle command, barely glimpsed in
World War I but emerging full-blown in World War II, was the transmuta-
tion of a two-dimensional battlefield into a multidimensional battlespace, in
which maneuver, fires, aircraft, and electronics all found themselves com-
peting for command attention. As the complexity of the command problem
increased, the very decentralization essential to cope with battle’s increased
scale and fluidity found itself in competition with the need to synchro-
nize domains, preclude their mutual interference, and achieve economy of
force.
Throughout World War II, and in both major theaters, tactical integration
of land, sea, and especially air capabilities prompted repeated disputes, some
of which persist today. For example, the Fire Support Coordination Line,
until recently the focus of bitter doctrinal debate between U.S. air and ground
forces, originated as the Bomb Line, a safety measure established in mid-1944
in response to a much-publicized episode of air–ground fratricide.
Conflicts since then, including our own recent engagements in Afghani-
stan and Iraq, reveal that this dilemma has by no means been resolved. On
the contrary, the proliferation of long-range missiles, fixed and rotary wing
aircraft, and unmanned aerial vehicles only has aggravated it. Combined with
an enlarged irregular threat and battlefield transparency that increasingly
subjects even minor tactical miscues to immediate public scrutiny, the result
has been to impose unprecedented pressures on tactical commanders at every
level.
KEY COMMAND TASKS
Diagnosing
The first and in some ways most important requirement of effective
battle command is accurately reading the battlefield. Terrain appreciation,
knowledge of friendly unit locations and conditions, and intelligence about
the enemy all contribute. But all are essentially static, whereas battle itself is
dynamic. War, Clausewitz reminds us, “is not the action of a living force upon
a lifeless mass . . . but always the collision of two living forces.”22
Accordingly, successful battle command presumes the ability to infer
dynamic patterns from fragmentary and inevitably incomplete information,
sense them as they shift, and project them forward in time. How far forward
and in what detail will vary with the situation and level of command, but not
the basic requirement.
Planning
Given accurate diagnosis, battle planning largely is a matter of problem
solving. For the tactical commander unlike his superiors, the mission typically
is prescribed, along with the constraints within which it must be pursued and
the assets expected to be available to accomplish it. While that makes tacti-
cal planning simpler than strategic or operational planning in one respect, in
another it is more complicated, for battles tend to be much more volatile than
the strategic and operational conditions that prompt them.
Prussian general Helmuth von Moltke’s much quoted warning that “no
plan extends with any degree of assurance beyond the first encounter with
the enemy’s main force” acknowledged this volatility.23 For the tactical com-
mander, therefore, planning is less a matter of devising a template with which
to guide the entire conduct of the battle than of arranging resources to begin
it advantageously and retain that advantage as it evolves. In the end, the litmus
test of battle planning is not how perfectly it anticipates events, but instead
how well it promotes rapid and effective adjustment to them as they occur.
Obviously, the more complete and reliable the information on which to
base planning, the better. But because battle is a two-sided contest in which
time plays no favorites, deferring engagement in the hope of acquiring better
information easily can become self-defeating. One of the central challenges
of battle command is deciding when such delay is more likely to increase than
to diminish uncertainty and its associated risks.
Above all, like diagnosis, planning, to be useful, must be continuous. At higher
levels of command, additional personnel are available for that purpose. Smaller
formations enjoy no such luxury and must plan with the same personnel who
execute. Enabling such units to do so more rapidly and effectively is among the
more important potential contributions of networked automation.
Deciding
If battle planning is largely problem solving, decisions are the mechanisms
through which solutions are translated into intentions. In battle against a
competent enemy, however, solutions rarely are self-evident and even less
often final. Moltke exaggerated only slightly in remarking to his staff that
given three possible courses of action, the enemy almost invariably could be
counted on to choose the fourth.24
A great deal has been written about the decision-making process, but in
the end much about it remains obscure, which is one reason efforts to date
to automate it have made relatively modest progress. In the military, early
attempts to apply artificial intelligence to tactical decision making have not
fared well. TacFire, for example, the U.S. Army’s first true automated artillery
fire direction system, became so notorious for misdirecting fires and dimin-
ishing artillery’s responsiveness that gunners eventually began turning the
system off after using it to help generate their initial fire plans. If automation
performed so poorly in a relatively quantifiable matter such as fire distribu-
tion, how well would it be likely to satisfy the significantly more complex
decision-making requirements of battle command?
Instead, automation is much more likely to be successful in assisting decision
making than in replicating it. First, in addition to facilitating more rapid and
effective diagnosis, automation may help the commander judge more accu-
rately the time-space implications of choosing a particular course of action.
Since the enemy has a vote, that estimate always will be imperfect. But given
accurate information on the terrain and friendly capabilities, automation can
help reduce the variables with which tactical decision making must deal.
Second, automation can help trigger decision making by alerting the com-
mander in a timely way to the occurrence of events likely to require a modi-
fication of his intentions. As with projecting courses of action, such triggers
are likely to be imperfect and a prudent commander will avoid becoming
overreliant on them. But used judiciously, they can enhance the commander’s
sensitivity to changing circumstances.
Finally, automation may allow some decisions to be prespecified. It thus
may allow a more prompt reaction to certain events. For example, detection
of an air-defense threat to a critical airborne sensor might automatically trig-
ger counterfire or the movement of the sensor out of the threatened airspace.
Or the detection of an enemy force on an open flank might automatically
generate a warning to the nearest friendly unit. As TacFire’s example revealed,
such a capability must be used with caution. But the potential is there.
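As a thought experiment only, the fragment below sketches how such prespecified decisions might be encoded as explicit event-action rules. The event names, the rule table, and the responses are hypothetical, drawn from the examples in the preceding paragraph rather than from TacFire or any actual system.

    # Hypothetical event-action rules for prespecified decisions.
    # As the TacFire experience warns, each rule should remain
    # inspectable and revocable by the commander.

    from dataclasses import dataclass, field

    @dataclass
    class Event:
        kind: str          # e.g., "air_defense_threat", "enemy_on_open_flank"
        location: tuple    # (x, y) grid coordinates
        details: dict = field(default_factory=dict)

    def reposition_sensor(event):
        return f"Move airborne sensor out of threatened airspace at {event.location}"

    def warn_nearest_unit(event):
        return f"Warn nearest friendly unit of enemy force at {event.location}"

    # The rule table pairs a battlespace event with an automatic response.
    RULES = {
        "air_defense_threat": reposition_sensor,
        "enemy_on_open_flank": warn_nearest_unit,
    }

    def on_event(event):
        """Fire the prespecified response for this event, if one exists."""
        handler = RULES.get(event.kind)
        return handler(event) if handler else None

    # Example: a detected air-defense threat triggers sensor repositioning.
    order = on_event(Event("air_defense_threat", (34, 87)))

Keeping the rule table small and explicit is one way to honor the chapter's caution: every automatic response stays visible to, and removable by, the humans it serves.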
Delegating
Many years ago, a respected senior U.S. Army officer became widely
known for the admonition that “Only those things are done well that the
boss checks.” Whatever the case for that viewpoint in preparing for battle,
it invites failure once the fight begins. To command effectively in modern
battle is to delegate, and delegation without discretion is meaningless. In
Vietnam, the attempt of progressively senior commanders to dictate the
actions of the same engaged unit from command helicopters orbiting over-
head became a notorious example of that failure.
Synchronizing
Army Field Manual 3-0, Operations, defines synchronizing as the process of
“arranging activities in time, space, and purpose to mass maximum relative
combat power at a decisive place and time.”27 The need to synchronize goes
back a very long way, ultimately to the first moment when hand and projec-
tile weapons appeared together on the battlefield. The Roman javelin, for
example, temporarily could deprive its targets of the use of their shields, but
unless the unit, after casting its javelins, followed up quickly with hand-to-hand
engagement, that momentary advantage easily could be lost.
Since then, synchronizing has become more complicated with every
improvement in weapons technology and, more recently, the expansion of
battle’s domains. In combining arms and services, the most valuable benefits
accrue from complementary rather than just additive effects. As with combin-
ing javelin and sword, however, complementarity is sensitive to timing. Fires
delivered too early may forfeit their utility in allowing unhindered maneuver.
Sensors shifted too late may deprive a moving formation of the early warn-
ing that its deployment assumed. The retention of a mobile reserve may be
pointless if the routes by which it might have to be committed weren’t earlier
reconnoitered and cleared.
In this area more than any other, dispersal and its accompanying devolution
of tactical responsibility downward have radically increased the burden on
commanders. Platoons and companies routinely must be able to synchronize
activities and effects previously managed by their parent formations. While
that burden occasionally may be lessened by the direct intervention of the
higher commander, such intervention desirably should be infrequent.
By allowing the virtual rehearsal of activities before the fight begins and
the adjustment of their relative timing as it proceeds, automation can ease
the synchronization burden. In the best case, by enhancing common situa-
tion awareness among the combining activities, networked automation may
increase subordinates’ ability to self-synchronize, diminishing the need—and
also the temptation—for the commander to intervene directly.
Communicating
About the importance to battle command of reliable communications, lit-
tle need be said. Whether to receive and disseminate information, transmit
orders and intentions, or synchronize activities, the commander must com-
municate. At Jutland, erratic communications contributed materially to the
failure to force the German fleet to battle.
But even perfect connectivity is no guarantee of effective communication.
At Balaclava in the Crimea on October 25, 1854, a scribbled order to the Brit-
ish light cavalry brigade to “prevent the enemy carrying away the guns” pro-
duced one of history’s most celebrated blunders. To the sender on the heights,
the order was perfectly clear. To its recipient in the valley below, it was utterly
opaque.28 Military history is replete with such episodes.
Automation is no panacea, but it can help make such disconnects less likely.
Simply the ability to transmit graphics quickly and clearly can diminish if not
prevent altogether the sort of perceptual divergence that sacrificed men so
uselessly albeit gallantly at Balaclava.
At the same time, in communicating as in delegating, automation misused
can turn on its owners. As bandwidth increases, so too does the potential to
communicate too much information. During the invasion of Iraq in March
2003, a senior U.S. intelligence officer complained wryly that his headquarters
was awash in information it was utterly unable to process. The more compre-
hensively automation is networked downward, the greater the risk of similar
information overload on smaller units even less capable of coping with it.
Motivating
One of Bill Mauldin’s wonderful World War II cartoons has Willie and Joe
crouching behind a bush while a general standing near their outpost blithely
surveys the battlefield. “Sir,” grumbles Willie, “do ya hafta draw fire while yer
inspirin’ us?”29
Front line humor aside, not least of the command challenges associated
with battlefield enlargement is how much harder it has become for com-
manders to make their personal presence felt. At Waterloo, Wellington
could stand in his stirrups and wave his hat, and the gesture would be seen by half
the men under his command. During the recent fighting in Fallujah, a batta-
lion commander would be lucky to connect directly with more than a few of
his men.
Some may argue that in modern war, such direct personal influence is over-
rated. An incident during Operation Iraqi Freedom suggests otherwise. As
the 101st Airborne attacked through An Najaf, several senior officers includ-
ing the corps and division commanders assembled near a road intersection to
confer. Ignoring nearby incoming mortar rounds, they continued their dis-
cussion until interrupted by small arms fire, upon which they immediately
moved toward the source of the firing. No one was hurt, but word of their
leaders’ coolness and audacity spread quickly among the troops, boosting
morale throughout the corps.30
Moreover, motivating involves more than just inspiring. On a dispersed
battlefield, the ability of any subordinate unit to gauge the impact of its
behavior on the fight overall is intrinsically limited. When minutes count, one
unit’s failure to act promptly by reason of wariness or weariness may mean
the difference between victory and defeat. Injecting a sense of urgency when
necessary is a vital command obligation, and sometimes only the command-
er’s personal presence will suffice. Commanders from George Washington to
George Patton were renowned for appearing, unlooked for, at critical times
and places.
While the commander can’t be everywhere, in short, he must be able at need
to go wherever his personal presence can make a difference without losing his
grip on the battle overall. One of the most important prospective benefits of
networked automation is to unleash the commander from his headquarters
without depriving him of its information resources, thus helping to diminish
the tension between the commander’s need to maintain his perspective on the
battle and his ability to exert personal influence where necessary.
RECURRING COMMAND DILEMMAS
Prioritizing Requirements
Like other enterprises, war is subject to economic imperatives, requiring
commanders to allocate finite resources among competing requirements.32
Given battle’s uncertainties, however, doing so is especially difficult for the
tactical commander. In effect, it requires him despite incomplete and transi-
tory information to prejudge which resource commitments will prove most
important in accomplishing the mission.
Between friction and the enemy, that forecast rarely will be perfect. Priori-
tization thus requires balancing economy of force with elasticity. Ignoring the
first risks wasting assets. Ignoring the second risks inability to recover from
surprise. The smaller the unit, the harder it is to reconcile these competing
requirements.
Modern mobility systems together with more flexible tactical organizations
have greatly improved commanders’ ability to shift assets around the battle-
field. Even so, reallocation can’t always be counted on to correct an error in
prioritization.33 In battle much more than in other activities, retasking tends
to be difficult and dangerous. Every major readjustment risks confusion and
delay, especially inasmuch as it is likely to be needed most urgently precisely
when least convenient to execute.
One solution always has been to withhold some assets for later commit-
ment without disturbing those already committed. While appropriate for
larger formations, however, retaining such a reserve is less feasible the smaller
the unit. And while it is less essential when adjacent units are physically close
enough to furnish each other assistance at need, the more dispersed the force,
clearly, the less reliable and responsive such mutual support.
Networked automation can help diminish the prioritization dilemma in
two ways. First, by involving subordinate units directly and concurrently in
examining the implications of prioritization alternatives, it may surface gaps
in those arrangements that otherwise would be missed. At worst, by helping
forecast contingencies that might require retasking, it can enable the affected
units to consider in advance how they would mutually adjust.
Second, by helping to track the shifting spatial relationships among subor-
dinate units, networked automation can alert the commander to a develop-
ing situation that may obviate that contingency planning, allowing him in
a timely way either to alter his intentions or seek additional support from
higher echelons. The latter may be especially important, since, like internal
retasking, obtaining additional support from higher may take time.
Judging Timing
“War,” Clausewitz declared, “is nothing but a duel on a larger scale.”34 It
was a peculiarly apt analogy, evoking the flickering weave of blades as duelists
feint, attack, parry, and riposte, each seeking to preempt the adversary’s cor-
responding action. For the fencer, though, timing merely is a matter of alert-
ness and reflexes. Applying the analogy to battle, one should try to visualize
the same encounter taking place on the uneven bottom of a cloudy pond, in
which, moreover, each combatant is wired to a block of concrete just small
enough to be dragged with difficulty.
Deciding when to engage, change direction, commit a reserve, reallocate
assets, call for additional support, or attempt to disengage are among the bat-
tle commander’s toughest questions. “Ask me for anything but time,” Napo-
leon enjoined a subordinate, and with reason. In battle, minutes may mean
the difference between success and disaster. At Jutland, for example, only a
few minutes’ delay in reversing course might well have cost Beatty the rest
of his battle cruisers or Scheer his entire fleet.
Describing his brigade’s attack on Objective SAINTS south of Baghdad on
April 3, 2003, the commander of Third Infantry Division’s Second Brigade
Combat Team recalled, “At 3 in the morning, there was only one battalion
ready. [I] made the decision to go without the entire brigade consolidated.
The intelligence we had received said the Hammurabi [Division] was reposi-
tioning south to take SAINTS and the airport ahead of us so we didn’t have
the freedom to wait. It was a classic commander’s dilemma.”35
Absent perfect intelligence and equally perfect foresight, nothing can elim-
inate such dilemmas, but there are ways to mitigate them. Perhaps the most
important is simply to pay close attention to the locations and conditions of
key assets. The more this can be managed without distracting subordinates
with repetitive reporting requirements, the better. Blue force tracking already
has contributed materially to this process, and more comprehensive network-
ing can further enhance it.
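By way of illustration only, the sketch below shows the kind of computation a blue force tracking aid might perform: holding the last-reported positions of key assets and flagging those that have gone silent or drifted from plan. The class name, position format, and thresholds are our assumptions, not features of any fielded tracker.

    # Hypothetical blue-force tracking aid: watch the locations and
    # conditions of key assets without levying repetitive reports.

    import math
    import time

    class AssetTracker:
        def __init__(self, stale_after_s=300.0):
            self.tracks = {}  # asset_id -> (x_km, y_km, report_time)
            self.stale_after_s = stale_after_s

        def update(self, asset_id, x_km, y_km):
            """Record a position report (sent automatically, not on demand)."""
            self.tracks[asset_id] = (x_km, y_km, time.time())

        def stale_assets(self):
            """Assets whose last report exceeds the staleness threshold."""
            now = time.time()
            return [a for a, (_, _, t) in self.tracks.items()
                    if now - t > self.stale_after_s]

        def off_plan(self, planned, tolerance_km=2.0):
            """Assets farther than tolerance from their planned location."""
            alerts = []
            for asset_id, (px, py) in planned.items():
                if asset_id in self.tracks:
                    x, y, _ = self.tracks[asset_id]
                    if math.hypot(x - px, y - py) > tolerance_km:
                        alerts.append(asset_id)
            return alerts

A command cell that polls stale_assets() and off_plan() each cycle sees only the exceptions, which is precisely the sense in which such tracking relieves, rather than adds to, the reporting burden on subordinates.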
Tracking enemy movements and strengths is much harder, and the smaller
and less readily identifiable the enemy forces of interest, the greater the chal-
lenge. Discussing his efforts to template Iraqi forces defending the Karbala
Gap, one brigade intelligence officer recalled “huge disconnects between the
CFLCC C2, the corps G2, and the division G2 on the enemy picture. One
level had two battalions in the gap, while another level had one battalion in
the west and two battalions east of the Euphrates . . . One echelon assessed a
maneuver defense from Karbala with one battalion in the gap, while another
had the enemy defending from its garrison and controlling bridges, and a
third echelon had the enemy defending bridges from the eastern side.”36
Intelligence was even less able to detect and track the Iraqi irregulars who
proved the most persistent threat to coalition forces.
Networked automation won’t solve that problem, but it certainly can help
narrow the intelligence disconnects among successive echelons and alert com-
manders in real time to developing patterns of enemy activity that otherwise
would go undetected.
NEW COMMAND CHALLENGES
Time Compression
As a general proposition, tactical engagements today tend to begin more
precipitately, transpire more rapidly, and terminate more abruptly than they
have for centuries. To gauge the impact of this foreshortening on battle com-
mand, consider how tactical headquarters have attempted until very recently
to monitor battlefield events.
Reports and queries arrived in a crowded tent or command vehicle over
multiple and invariably congested radio nets, often so thickly as to be audibly
indistinguishable. Tired and dirty soldiers recorded those reports on what-
ever paper was at hand, and then, provided the reports hadn’t been misplaced,
transferred them by marking pen or grease pencil to an acetate map overlay
that became harder to read with every erasure and remarking.
Orders were received and transmitted in the same way, occasionally accom-
panied by hastily drawn graphics that made their own contribution to the
diminishing readability of the recipient’s map. Combine that with dated infor-
mation that failed to be deleted and new information that somehow failed to
be recorded, and it isn’t hard to see how quickly any correspondence between
the displayed and actual tactical situation could evaporate, even assuming the
original information was accurate and complete.
Such a process, needless to say, is incongruous in the military of a nation
whose children play routinely with devices that store, display, manipulate, and
communicate information far more rapidly and reliably. More important, it
increasingly has become unable to keep pace with the information flow with
which today’s commanders must cope. That alone is enough to justify current
efforts to apply networked automation to battle command.
Simultaneity
Not only do events on today’s battlefield happen more quickly, but they also
involve more diverse activities in more places at the same time. In part, that
simply reflects increased dispersal. But in recent U.S. doctrine, it also reflects
a deliberate intention to confront the enemy simultaneously with more prob-
lems than his own command apparatus can handle. Its aim is to produce
confusion and dislocation that deprives the enemy of the ability to respond
effectively and thus accelerates his mental and psychological collapse.
The presumption, of course, is that such multiple simultaneous activities
won’t prove more disruptive to the perpetrator than to the intended vic-
tim, and also—and perhaps more important—that such efforts won’t result in
piecemealing assets to the point where they no longer contribute effectively
to a coherent overall purpose.
Assuring both is a challenge even at the operational level. For the bat-
tle commander, it is much greater. To the usual obligation to synchronize
maneuver and integrate combined arms, it adds the requirement to orches-
trate concurrent but independent activities aimed at separate and spatially
disconnected objectives.
Simply keeping track of those activities to ensure they don’t mutually inter-
fere will test the commander’s information systems, never mind managing any
unforeseen adjustments. In addition, simultaneity only multiplies the tactical
and logistical dilemmas associated with any single operation. Effective dele-
gation thus becomes even more critical, and with it continuous review and
refinement of the commander’s estimates and intentions.
There certainly is historical precedent for the success of such simultane-
ous operations when directed by genius, but even more for their failure in its
absence. And since, as Clausewitz pointed out, genius is a scarce commodity
even in the finest military, a doctrinal commitment to simultaneity implies
some more uniformly accessible command resource. Apart from fostering
subordinate initiative, that resource resides in improved information systems
if it resides anywhere, and that too argues for enhancing battle command with
networked automation, and with soldiers and leaders schooled to employ it
effectively.
Lethality
It would be a mistake to suggest, as some have, that the lethality of ground
warfare overall has increased. At Cold Harbor in 1864, attacking federal
forces lost nearly 7,000 men in less than an hour, and at the Somme in 1916,
58,000 British troops were killed or wounded on the first day, roughly 3,000
per kilometer of front and more than 10 percent of the committed force. No
recent conflict has seen anything like those numbers.44
What is true, however, is that the reach and precision of both surface and
air weapons and their associated target acquisition means have increased
Tactical Agility
The greater the physical dispersal of tactical units, the less confidently
they can count on timely reorganization or reinforcement to accommodate
Transparency
On February 13, 1991, a U.S. airstrike destroyed the Al Firdos bunker in
downtown Baghdad, killing scores of Iraqi civilians who had taken refuge
beneath its reinforced concrete. Intelligence had confirmed the bunker’s use
as a military command post, whereas no information suggested its occupation
by noncombatants. Nevertheless, media reaction to the attack, ably exploited
by the Iraqis, resulted in a precipitate decision to suspend any further attacks
on war-supporting targets in and around Baghdad.46
More recently, as this was written, debate swirled in the United States and
international press in reaction to the use of white phosphorus against Iraqi
insurgents.47 White phosphorus has been a standard artillery and mortar
munition since World War II, used both to destroy inflammable materiel and
attack personnel protected from high explosive fragmentation. Until now, its
use for either purpose has never been challenged.
Not the least of the ironies of modern war is that the same technologies
promising to enhance the commander’s access to and use of information are
equally available—in some cases more available—to media with interest in
but no accountability for the conduct of military operations. Increasingly, the
video camera hangs over the battle commander like an electronic Sword of
Damocles, able almost instantaneously to dispute his appreciations, judge his
decisions, and criticize their consequences.
Commanders always have had to contend with history’s judgment, but
never before have so many been subjected to such immediate and pervasive
public scrutiny. It would be asking too much of human nature to imagine their
behavior remaining unaffected by it. Its danger, of course, is the inculcation of
overcaution and an aversion to the risk taking without which, as Field Marshal
Wavell rightly argued, no success in battle is possible.
There is little the battle-command system can or should do to diminish
transparency or to immunize commanders against its effects. What it can
do, given the right tools, is help avoid the internal delays and confusion that
too often make the commander the last to know about an incident or deci-
sion likely to prompt unwelcome media attention. Almost invariably, the best
defense against false, distorted, or incomplete reportage is rapid and accurate
dissemination of the truth. And if that truth is awkward, it is even more essen-
tial that the commander be aware of it and be perceived to be aware of it.
Beyond that is moral courage and the acceptance of responsibility, for which
no battle-command system can substitute.
further, envisioning that “The same system that controls wartime operations
will regulate activities in garrison and in training.”50
These are heroic ambitions. They also are very likely unrealizable, perhaps
fortunately, for their premise is that all command requirements can be reduced
to the same ingredients. But if history is any guide, command, especially in
battle, is far too idiosyncratic a process to tolerate so procrustean a solution.
Instead, as one historian concluded after carefully examining several success-
ful and less successful command systems, “Command being so intimately
bound up with other factors that shape war, the pronunciation of one or more
‘master principles’ that should govern its structure and the way it operates is
impossible.”51
In reality, no single system of command, however robust, is likely to satisfy
every military requirement. Indeed, such a system, even were it technically
feasible to devise, would tend almost inevitably to reproduce the very sort of
command rigidity that contributed so heavily to the Royal Navy's embarrass-
ment at Jutland.
Instead, a more reasonable expectation is that emerging networking tech-
nology will allow information to be shared in a timely way by different orga-
nizations without imposing undue restrictions on the way it is manipulated,
displayed, and employed. As will be seen in the chapters that follow, a central
objective of the MDC2 program has been to examine how commanders at
different levels choose what information to attend to and how.
Still more is that true of command direction, which must not only be framed
by the commander’s intentions, but also adapted to the intended recipient and
the conditions in which it is received. Both can be expected to vary constantly,
and a command system unable to accommodate those changes will impede
the commander more than assist him.
of battle not only for lack of adequate technology, but also because it largely
ignored the human limitations of the army to which it was applied. Some of
today’s command and control concepts risk making the same mistake.
Networked automation can’t be made responsible for the professional com-
petence of the commanders who employ it, and we wouldn’t want it to be. But
it can be designed in a way that no flat map can to alert commanders to the
creeks and gullies that may hamper their wooden squares, remind them how
long it has been since those squares were rested or resupplied, warn them of
dangers those squares may be unable to sense, and in the crunch, help bring
to bear in a timely way the resources to insure that their block-men don’t die
unnecessarily “because still used to just being men, not block-parts.”
In the effort to insure that battle command remains sensitive to war’s ines-
capably human character, in short, the system designer shares responsibility
with the commander. Only if, in addition to assisting him to apply capabili-
ties, networked automation also assists him in protecting and preserving the
soldiers on whom those capabilities ultimately depend, will such a system truly
deserve to be called a battle-command system.
CHAPTER 2
A Journey into the Mind of
Command: How DARPA and
the Army Experimented with
Command in Future Warfare
Alexander Kott, Douglas J. Peters,
and Stephen Riese
A BATTLE OF 2018
His small robotic spy planes, tens of kilometers away, faithfully scanned the
battlespace. The composite image—flat, swampy land dotted with hamlets,
lakes, and untidy small forests—slowly scrolled on the computer screen. Cap-
tain Johnson1 glanced at the calendar and weather predictor tucked into a
corner of the display. On this August 17, 2018, it was going to be hot and
muggy all day. “Lousy visibility,” he thought, “the UAVs are going to miss a
lot of enemy.”
A few months ago, in early 2018, a faction within the Azerbaijan military
suddenly offered its support to a long-lingering dissident movement. Tradi-
tionally, the dissidents’ influence rarely extended outside of the southeastern
portion of the country, mostly south of the Kura River in the Kura Depres-
sion Region. Now, however, things were unfolding differently. By April 2018,
the Azeri Islamic Brotherhood (AIB), a coalition of antigovernment factions,
subverted the bulk of an Azeri Motorized Rifle Brigade, the well-trained and
formerly reliable Kura Brigade, which mutinied to realign with the AIB.
In a surprise action, a battalion from the Kura Brigade (10th MRB) seized
control of a historically significant district in the capital of Baku. A desperate
week-long defense by loyalist government forces against the attacks of the
10th MRB managed to secure the center of government within the capital
city. Still, the AIB succeeded in halting the session of the national assembly
when members fled to their home territories. The president, along with his
prime minister and council of ministers, remained in Baku and continued to
direct the government and remaining loyalist military forces in the city and
along the Apsheron Peninsula.
Red commander with early warning of Blue force movement, and to serve as
additional dismounted infantry to confront the Blue force. Further compli-
cating the battlespace picture, the armed members of the Nagorno-Karabakh
Internal Liberation Organization (NKILO), a militia that elected to remain
neutral in this conflict, operated throughout the area. The neutral NKILO
members dressed as civilians but often carried weapons like members of the
insurgent AIB forces. From this point on, we will refer to these enemy units
as the Red force.
The raw numbers of personnel and weapon systems in Captain Johnson’s
area of responsibility—Red versus Blue—were certainly stacked against him.
A common rule of thumb used to say that the attacking force must be about
three times larger than the defending force. Johnson’s CAU, however, had
about one-third the number of platforms and many fewer troops than his Red
counterpart. Yet his orders were to attack! By turn-of-the-century measures,
Johnson’s force was an order of magnitude smaller than it should have been.
It was hard to count on a significant difference in training and motivation:
the Red force was known to be brave, motivated, and knowledgeable in their
own tactics. He knew, however, that in terms of technology his force was a
generation ahead of the enemy and that this fact could be worth more than a
10-fold numerical advantage.
The CAU’s fighting platforms (see Figure 2.2) were light and fast. Most
of them were unmanned, robotic vehicles that did not have to carry heavy
armor to protect any human riders and instead carried more weapons and
fuel. Less encumbered by weight and dependence on supply trains, their
maneuver could be far-reaching and more agile than their opponent’s.
Besides, if necessary for longer distances, most of them could be carried by
helicopters.2
Johnson’s unit was also rich in aerial and ground sensors. His robotic recon-
naissance assets—aerial and ground—ranged far ahead of the CAU’s main
forces. With their diverse sensors and the semiautomatic ability to detect sus-
picious objects—potential Red vehicles or infantry—they provided the Blue
force with crucial information about the locations and intent of the Red force,
long before coming in contact with the enemy’s weapons. The captain usually
knew much more about his enemy than the enemy knew about him.
Granted, it was of limited value to know much more about the enemy with-
out being able to impact him. Fortunately, the CAU included plenty of capa-
ble shooters. His long-range artillery and precision missiles—most of them
carried by unmanned vehicles—allowed Johnson to attack the Red force at a
distance once his sensors found the targets.
Still, all these assets would be worthless without the network that tied them
all together, a network that allowed Johnson to receive voluminous informa-
tion about the battlespace and send detailed commands to his forces. United
by the network, the CAU’s assets could fight in a widely distributed, dis-
persed fashion without losing synchronization and mutual support. Beyond
the CAU’s own assets, it could rely, if necessary, on those of a sister unit: the
network enabled them to support each other with both information and fires
even when separated by tens of kilometers.
Figure 2.2. Organization and equipment of CAU. The Appendix describes the
equipment.
Finally, and perhaps most importantly, the CAU’s small command cell—
Johnson and his three battle managers (see Figure 2.3) riding in their Com-
mand and Control Vehicle (C2V)—wielded a powerful weapon for battle
command, the Commander Support Environment (CSE). A collection of com-
puter tools, the CSE fused the massive amount of information arriving from
the CAU’s manned and unmanned platforms; assisted with the recognition of
enemy targets; advised on the courses of action available for maneuver, fires,
and intelligence collection; translated the battle manager’s terse commands
into detailed instructions to robotic warriors; and even, if necessary, autono-
mously planned and executed the fires and intelligence collection tasks.
Johnson looked at his battle managers. On his right was Sergeant Rahim,
the intelligence manager, about 15 years older than Johnson. Trying to opti-
mize aerial sensor availability for battle damage assessment tasks, he was
tasking the CSE to calculate plans to potentially use Class I sensors only;
Class I and II sensors only; and Class I, II, and III sensors, with Class III
sensor platforms only used after the maneuver force had reached Phase Line
GOLD. Seated in front of Johnson, Specialist Chu, the maneuver manager,
worked the CSE to finalize several alternative routes for relatively clumsy
robotic ground vehicles. The fourth member of the CAU command cell, the
effects manager, Sergeant Manzetti, sat in the front-right corner of the C2V
and busied herself with entering into the CSE the no-fire rules for the politically
touchy villages controlled by hopefully neutral NKILO militias.
Figure 2.3. The Blue command cell—commander and three battle managers—rides
in a C2V.
“It’s weird,” the captain thought, “when Rahim joined the military, I was
barely out of kindergarten, and all this stuff—robotic guns, unmanned sensors,
smart computers everywhere—was considered mostly science fiction. Now it
is just normal, totally normal.”
NETWORK-ENABLED WARFARE
In fact, the efforts to make all this totally normal stuff for Captain Johnson
started well before he went to kindergarten. The style of warfare practiced by
his CAU is called network centric or network enabled (we prefer to use the latter
term in this book), and, like most other revolutions in warfare, it sprang from
a confluence of several technological and political developments.
A good place to start unraveling this chain of developments is the personal
computer revolution of the 1980s. Suddenly, anyone could afford to buy a signi-
ficant amount of computing power. Digital information became ubiquitous—
it was easy to generate such information, to capture it, to reproduce it, and to
distribute it. One unexpected outcome of this development was its impact on the
seemingly invincible Evil Empire, the Soviet Union. Long reliant on keeping
information away from its citizens, the Communist power was faced with a
choice: technological obsolescence or relaxation of its information control.
The Soviet Union wisely elected the latter, promptly collapsed (for a number
of reasons, not just the information revolution), and released in the wake of its
collapse a tsunami of religious and ethnic wars around the world. These
conflicts changed many of the equations for the U.S. military, forcing it to
look for such things as rapid deployment, small wars, counterinsurgency, and
highly distributed operations.
In another branch within this network of developments, personal computers
made networking both highly feasible and highly desirable. In the early 1990s,
people started to notice a mysterious slogan in the marketing literature of Sun
Microsystems, a then-popular maker of high-end computer workstations.
“The network is the computer,” went the slogan. Sun’s leaders argued that
platform-centric computing was a thing of the past, and the future was with
marvels, such as the Internet, that arise from network-centric computing.
The Internet went on to have a glorious life of its own, including changing
the ways that the U.S. military communicates. Meanwhile, the term net-
work centric and its broader underlying ideas appealed to a visionary duo—
an Air Force officer, John Garstka, and a Navy aviator, Admiral Arthur
K. Cebrowski—who proceeded to apply it to things military.3 If all mili-
tary assets—warfighters, tanks, ships, airplanes—were to be connected by
powerful information networks, they could cooperate, make synchronized
decisions, and fight in a more agile, effective fashion. They could be tailored
to a specific mission. They could be geographically distributed. Their deploy-
ment and logistics could be faster and more flexible. They could use differ-
ent ways to organize themselves, perhaps even self-organize. They could also
provide a more-efficient environment for employment of a slightly older
development—precision weapons.
The ideas fit perfectly. Finally, here was a coherent, elegant vision of how
the indisputable information revolution could revolutionize military affairs.
Network-centric warfare became a popular concept within the U.S. Depart-
ment of Defense. The Office of Force Transformation became the official
home of the concept, and Admiral Cebrowski, John Garstka, and Dr. David
Alberts issued a steady stream of influential publications.4
Naturally, every service within the U.S. military developed its own perspec-
tive on the network-centric idea. By the late 1990s, the U.S. Army was eye-
ing a number of challenges. The typical ponderous deployment of the army’s
heavy forces was seen as a liability in the post-Soviet age: faster deployment
by fixed-wing airlift seemed necessary, but the army’s equipment was too
heavy. The expense of large numbers of men and women in army uniforms
was becoming difficult to justify. Its main fighting platforms—the Abrams
tank and the Bradley personnel carrier—were starting to approach their
obsolescence horizon and called for replacements. Emerging technologies—
computers, sensors, laser-guided weapons, robots, unmanned aerial vehicles
(UAVs)—all seemed interesting but difficult to accommodate in the army’s
current conceptual structure.
Enter network-centric warfare. In one fell swoop, it offered a holistic solu-
tion, a unifying framework for all of the above-mentioned concerns. The army
named this synthesis of ideas the Future Combat System (FCS).5 Computer
networks would permeate the FCS, delivering the information from farseeing
advanced sensors (such as those carried on UAVs) to shooter platforms (many
of them robotic) that fire precision weapons at a faraway enemy beyond the
horizon. By detecting and engaging hostile forces at a distance, the FCS force
could avoid the enemy’s direct fires and allow the army platforms to carry
more modest armor. This would reduce the weight of the platforms and make
them suitable for rapid air delivery to multiple trouble spots around the world.
These FCS platforms, elaborately rich in information but prudently frugal in
armor, would be procured to replace the aging Abrams and Bradleys. All this
was a perfect fit.
Of course, there were critics of the idea. Some argued that certain key tech-
nologies, such as robotic vehicles, remained underdeveloped and not ready
for prime time, and other technologies, such as the wireless mobile networks,
remained too vulnerable to enemy attacks.6 The need for rapid deployment
by air may have been overestimated; there was no pressing need to dispense
with heavy armor,7 and besides, FCS would take almost as long to deploy as
a conventional force.8 Others argued that the FCS system was too expen-
sive,9 that a light-armored force was too vulnerable for direct contact with
the enemy,10 and that emphasis on network-centric warfare would lead our
military to neglect the need for more boots on the ground.11
Not so, responded the advocates of the program. Most critical technolo-
gies were already mature, and others were in well-managed development.
The architecture and characteristics of the system were carefully optimized to
balance its deployability, survivability, and lethality in a broad range of future
conflicts. The overall costs would be much lower than any practical alterna-
tive approaches, including an attempt to modernize the current conventional
platforms. With many tasks automated, and many platforms standardized,
the need for support personnel and its associated costs would be significantly
reduced.12 The new network-enabled force would provide more boots on the
ground, deployed significantly faster into difficult hot spots, and would defeat
either conventional or unconventional enemies with fewer risks and costs.
We do not intend to diminish either the weightiness of all these consider-
ations, or the contenders’ sincerity and competence. You, the reader, may find
in this book grounds for support for both sides of the argument. Our findings
and observations both confirm the potential of a network-enabled force and
highlight risks of such systems as currently conceived. Still, these are not the
arguments we wish to pursue on the pages of this book.
Rather, we argue that the issues of armor and information do not need to
be coupled. Most of our findings indicate that these are orthogonal issues:
the cognitive challenges of information-rich, network-enabled warfare do not
depend on the thickness of the armor. Network-enabled warfare will deliver
its value (or will fail to deliver, if the challenges of information-rich battle
command are not solved properly) even with heavily armored platforms.
Conversely, heavy armor neither obviates the need for networked informa-
tion, nor precludes it.
Regardless of the decisions on the right thickness of the armor, or the right
number of boots on the ground, the already-present elements of network-
enabled warfare call for serious attention to how warfighters can deal with the
explosion of information. The strengths (or weaknesses, as the case might be)
of the collective human-machine cognition will be at least as important as the
right combination of the platforms’ characteristics.
While at the time of this writing the FCS program continues as a strong,
innovative, ambitious, and expensive effort, network-enabled warfare does
not wait. It enters the military by guerilla marketing methods, far outpacing
conventional military procurement. Enterprising warfighters buy laptops and
wireless devices; rig databases, blogs, and chat sites; and establish their own
procedures and techniques that are clearly reminiscent of network-enabled
ideas. UAVs and even ground robots find growing acceptance among the
warfighters, regardless of the inevitable immaturities of the technologies.
With or without the muscle of military acquisition, network-enabled warfare
is entering real-world military operations.
And this brings up a major concern that began to emerge even in the late 1990s:
with drastic proliferation of information flows impacting the warfighter, and
with so many new devices requiring the warfighter’s attention, what will hap-
pen with the human cognitive mechanisms? To put it differently, network-
enabled warfare will unleash a flood of information on the warfighter. Will
the flood overwhelm the cognitive abilities of the warfighter? Particularly
important, will the warfighter be able to manage the battle command?
EXPERIMENTAL TESTBED
To experiment with the ways in which the future Captain Johnson and his
battle managers might execute their network-enabled battles, we constructed
the MDC2 experimental laboratory. There we built mock-up C2 vehi-
cles not unlike the one Captain Johnson might ride into a battle and populated
them with teams of live officers and staff members. Each such team, called a
command cell, commanded a force of artificial warriors and platforms, such as
the CAU we described earlier, simulated by the U.S. Army’s premier simula-
tion system called OneSAF Testbed (OTB).18 The opposing force, the Kura
Brigade and their insurgent allies, were also simulated but were commanded
by a live and very capable Red command cell. The Red and Blue command
cells did not know the locations and plans of the opponent’s forces, except as
they were able to determine during the battle. Both were allowed to conduct
the battle as they desired, in a so-called free-play fashion, without following a
prescribed script, although within prescribed rules of engagement.
The battles unfolded with realistic speed, in other words, in real time. The
information about the battle events received by a command cell via computer
monitors and radio channels was also fairly realistic, allowing the command
cell to interact with the environment as if in the midst of a real battle. All of
the important functions of Johnson’ CAU were represented: the ability to
maneuver the forces, direct lethal and suppressive fires on the enemy, direct
and collect ISR effects, and conduct a small measure of logistics. To put all
this more formally, the MDC2 experimental program was conducted in a
simulation-supported, interactive, real-time, free-play, human-in-the-loop
laboratory environment.
Although all eyes were on the live command cells, the experiments could
not be performed unless the commanders had somebody to command. Thus,
OTB was the critical basis of the experimental environment. OTB simulated
Blue and Red forces at the entity level, meaning that each warfighter or
tank or other entity in the battlespace was simulated individually. An entity
received a command from the command cell and then computer programs
(called behaviors) took over and controlled the detailed actions of the
entity—for example, the way in which the soldier ran or fired his weapon.
This simulation approach is called entity-level semiautomated or computer-
generated forces. The capabilities and characteristics of such entities were
strictly managed: we maintained a set of Red and Blue equipment manu-
als that provided the detailed characteristics of Red and Blue platforms,
weapons, and sensors.
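To make the idea of entity-level simulation concrete, the bookkeeping can be
pictured roughly as follows. This Python sketch is ours, for exposition only;
the equipment-manual entries, names, and numbers are invented and are not
the actual OTB data.

    from dataclasses import dataclass

    # Hypothetical equipment-manual table: per-platform characteristics.
    EQUIPMENT_MANUAL = {
        "red_tank": {"speed_kph": 45, "sensor_range_m": 2000, "weapon_range_m": 2500},
        "blue_ugv": {"speed_kph": 60, "sensor_range_m": 4000, "weapon_range_m": 5000},
    }

    @dataclass
    class Entity:
        entity_id: str   # each warfighter or vehicle is simulated individually
        kind: str        # key into the equipment-manual table
        x: float
        y: float

        def characteristics(self) -> dict:
            return EQUIPMENT_MANUAL[self.kind]

    force = [Entity("RED-01", "red_tank", 10.0, 20.0),
             Entity("BLUE-07", "blue_ugv", 0.0, 0.0)]
    for e in force:
        print(e.entity_id, e.characteristics()["sensor_range_m"])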
It helped that the OTB software had an open architecture with source code
that allowed for modifications to meet specific requirements. For example, we
developed and added several dismounted infantry behaviors to the OTB soft-
ware in order to support the MDC2 experiments. These unit-level (squad and
fireteam) behaviors added tactically realistic functionality that reduced opera-
tor workload. For example, the React to Contact behavior provided intel-
ligent rule-based behaviors when a dismounted infantry unit came in contact
with enemy forces or indirect fire. The behavior resulted in one of several
potential outcomes, including advancing on the threat, withdrawing from the
threat, or pausing to survey the threat.
In a typical experiment, the OTB-simulated Blue force was commanded
by three Blue command cells. Two of the cells commanded a CAU each—a
force of roughly company strength. The two CAUs were parts of a Com-
bined Arms Team (CAT), a unit of about battalion strength that was com-
manded by the third cell. In addition to these three cells, a notional brigade
commander provided input and course correction to help ensure experi-
mental objectives were being met. This brigade commander formed the
necessary link between the friendly forces and the experimental control cell
(described later).
The Red command cell included a commander and several staff mem-
bers. The commander was separated from his staff and could communicate
through radio calls. Because each radio call carried the possibility of being
detected by a friendly force sensor, communication was used sparingly. The
Red staff members interacted with the OTB simulation and had access to all
information gathered by their units. The enemy commander, however, only
had access to an infrequently updated display of the battlespace. The intent
of this display was to represent an approximation of the operational picture
available to an enemy commander in 2018.
Neutral forces, also simulated by OTB, acted independently of both the
friendly and enemy forces and added much complexity to the battlespace.
They included buses with predefined routes, trucks, and civilians in populated
areas.
Unlike the Blue and Red command cells that could see only the informa-
tion that their forces would acquire in the battle, the experimental control
cell had displays that showed all true locations and actions of Blue and Red
forces—the full ground truth. The control cell also listened to radio con-
versations between the Red commander and his staff or between the Blue
command-cell operators.
The members of the control cell did not interact directly with the simu-
lation unless there was a system problem but were responsible for ensur-
ing that the experimental objectives were being met and that the systems
were performing as expected. Furthermore, to maintain the integrity of the
experiments, the control cell did not interface directly with the Blue com-
mand cells. Instead, required communications to Blue commanders were
accomplished through the notional brigade commander using conventional
military protocol.
Observers and analysts were located throughout the laboratory and tracked
the action directly from the experimental control cell and the enemy com-
mander’s cell. These analyst observers were privy to all discussions within the
experimental control cell and between the enemy commander and his staff.
The analyst observers responsible for recording the Blue command opera-
tions were physically removed from the Blue cell operators they were observ-
ing but had access to everything the Blue commanders saw and heard. To
guard against any experiment-disrupting influences, these analysts were not
allowed to interact with the Blue staff.
their own radio channel (e.g., the intelligence manager of CAU-1 would talk
to the intelligence manager of CAU-2).
useful information about the enemy. The robotic shooters, such as unmanned
mobile cannons, fired rapidly and accurately at their designated targets. The
human warriors stayed further from harm’s way and dedicated themselves to
less mechanical, more creative tasks. However, robotic platforms also required
extensive amounts of information—accurate, highly detailed orders—from
Johnson’s battle managers. Here was another potential curse: the command
cell had to feed their efficient but relatively brainless robotic warriors with
an exorbitant amount of command information. Without a powerful tool,
the command cell would not be able to generate such a complex, voluminous
output.
Fortunately, our Blue command cells had a suite of helpful tools—the CSE,
a key product of the MDC2 program (Figures 2.6 and 2.7). It was the CSE
that processed the flood of incoming information and reduced it to a man-
ageable set and presented it to the cell members in an easily understandable
manner. And it was also the CSE that translated high-level guidance and the
commands of Captain Johnson and his battle managers into highly detailed,
precise instructions to the robotic warriors.
In Figure 2.3, you see two displays in front of each command-cell member.
These screens, interfaces to the CSE, could be reconfigured and personalized
according to the cell member’s tasks and personal preferences. Typically, the
primary content of the screens was the visualization of the common operating
picture, an automatically updated and integrated (fused) picture of friendly,
neutral, and enemy forces. The information used to populate the common
operating picture came from the unit’s organic and higher-echelon sensors.
Because all displays were networked and drew the underlying data from a
shared database, an update made at one display immediately appeared on
all the others.
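The mechanics can be pictured as a publish-subscribe pattern over a single
shared track store. The sketch below is a simplified assumption of how such
propagation could work; it is not the MDC2 implementation.

    from typing import Callable, Dict, List

    class TrackStore:
        """A single shared database of battlespace tracks."""
        def __init__(self) -> None:
            self._tracks: Dict[str, dict] = {}
            self._subscribers: List[Callable[[str, dict], None]] = []

        def subscribe(self, callback: Callable[[str, dict], None]) -> None:
            # Each display registers once; afterward it refreshes automatically.
            self._subscribers.append(callback)

        def update(self, track_id: str, report: dict) -> None:
            self._tracks[track_id] = report
            for notify in self._subscribers:  # one update reaches every display
                notify(track_id, report)

    store = TrackStore()
    store.subscribe(lambda tid, r: print("intel display:", tid, r))
    store.subscribe(lambda tid, r: print("maneuver display:", tid, r))
    store.update("RED-BMP-07", {"x": 41.2, "y": 48.9, "affiliation": "enemy"})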
In addition to integrating and displaying the available information about the
conditions of the battle, the expert system and intelligent agents within the
CSE reasoned over the incoming intelligence reports, correlating and fusing
their detailed information into assessments. This included such considerations as the enemy
status (e.g., fuel, ammo status, and health), alerts, planning versus execution
comparisons, current tasking, and more. In particular, once Captain John-
son or Sergeant Rahim entered configurable alerts into the CSE, the system
would notify them when, for example, an enemy force within an area of inter-
est exceeded a certain size. Other types of critical events, derived from the
Commander’s Critical Information Requirements, were handled similarly. To
help keep track of the Blue forces, the CSE’s monitoring tools provided feed-
back about the status of individual assets and of the echelon as a whole.
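The configurable alerts described above might reduce to a rule like the
following sketch; the rule format, field names, and threshold are our
assumptions, not the CSE's actual syntax.

    from dataclasses import dataclass

    @dataclass
    class AreaOfInterest:
        x_min: float
        x_max: float
        y_min: float
        y_max: float

        def contains(self, x: float, y: float) -> bool:
            return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    def enemy_strength_alert(tracks, aoi, threshold):
        """Fire an alert when enemy entities inside the AOI exceed the threshold."""
        count = sum(1 for t in tracks
                    if t["affiliation"] == "enemy" and aoi.contains(t["x"], t["y"]))
        if count > threshold:
            return "ALERT: %d enemy entities in AOI (threshold %d)" % (count, threshold)
        return None

    tracks = [{"affiliation": "enemy", "x": 5.0, "y": 5.0},
              {"affiliation": "enemy", "x": 6.0, "y": 4.0},
              {"affiliation": "neutral", "x": 5.5, "y": 5.5}]
    print(enemy_strength_alert(tracks, AreaOfInterest(0.0, 10.0, 0.0, 10.0), 1))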
The CSE’s expert system also allowed Captain Johnson and his staff to task
and control their organic assets or groups of assets and to perform maneuver,
sensing, or shooting functions. A cell member communicated his intent to the
CSE using a set of warfighter-configurable rules. When the configured set of
conditions was met, CSE executed the predefined actions or recommended
actions to the designated cell member. When the cell member approved one
Figure 2.6a. Some of the tools of the CSE.
A TYPICAL EXPERIMENT
Over the course of several years, we performed a total of eight experiments.
Each experiment took multiple months to prepare and weeks to execute. An
experiment involved multiple battles (we called them runs), each taking sev-
eral hours to complete. Although most runs were based on a common terrain,
force structure, and general situation, the specifics of Blue and Red mis-
sions and dispositions were unique in each scenario.
Each scenario was designed to address multiple experimental objectives and
forced the commanders into different tactical dilemmas. At the beginning of
each scenario, the commanders and their staff members would go through
detailed collaborative and individual planning. Several scenarios included
fragmentary orders part way into the runs to force dynamic replanning to
meet the new objectives.
In general, the Red force was significantly larger and more heavily armored,
although their vehicles were slower, their weapons and sensors had shorter
ranges, and their C2 capabilities were less sophisticated than the Blue’s. De-
feating such an enemy with a smaller, more lightly armored, but more agile
and better informed force was the common dilemma of the Blue forces.
In designing the experiments, we made an early decision to look at this
experimental program as one of discovery instead of hypothesis testing. The
most significant implication of this decision was that instead of focusing the
analysis on determining if a particular hypothesis was true or false, we instead
explored significant factors and their relations—for example, the information
requirements of the cell members, or which CSE features were most effective,
and why. The results of our experimental analysis influenced enhancements
to the experimental tools and often led to generalized findings that would be
pertinent to a range of future battle-command approaches and tools.
To focus our data collection and guide the direction of subsequent analysis,
we developed a core set of essential elements of analysis—the key questions
the experiments were to answer. An analytic decomposition of these elements
enabled us to carefully construct each experiment to ensure that the required
Mission
The commanders and their staff planned, rehearsed, and executed one or
two tactical missions each day. The scenario was derived from a collection of
unclassified Caspian Sea scenarios set in the country of Azerbaijan circa 2018
and was designed to force analytically significant dilemmas and command
decision making. The precise mission for each run varied, but the friendly
force was consistently on the offensive and had a terrain-oriented mission (e.g.,
secure an area or clear a path). The enemy mission varied more substan-
tively: in some runs the Red force was to exfiltrate across the border to Iran,
in some they defended a region, and in others their priority was to destroy the
Blue forces. Neither side was aware of the specific mission of its opponent.
Enemy
The Red forces, operating independently and unconstrained tactically, were
a mixture of Azeri army regulars, special-purpose teams, and insurgents. The
militants of the AIB made up the insurgent forces that attempted to over-
throw the pro-Western government. The AIB subverted control of the Kura
Terrain
The area where the battles took place—the so-called terrain box—was
typically located in the Kura River Depression in present-day Azerbaijan
(see Figures 2.9 and 2.10). The terrain box varied in size from experiment to
experiment based on the size of the force involved. The largest terrain box was
approximately 100 kilometers from north to south and 100 kilometers from
east to west, with the Kura River running west to east through the center of
the region.
The Kura River Depression region of Azerbaijan is remarkably flat with
elevation variations of –5 to +15 meters from sea level. The region is mostly
covered by sandy or hard-packed soil and is primarily an agricultural area for
grains and native crops. Large, thickly wooded areas are dispersed throughout
the area, and the region includes a large swamp in the south-central region of
the depression.
In order to increase the complexity of the experimental battles, the real-
world terrain was modified. Enhancements included over a hundred built-up
areas, dozens of mosques, 11 cemeteries, 36 national monuments, and 4 dis-
placed persons camps. To stay within the constraints of the simulation system,
the size of most of the built-up areas—very modest hamlets—was intention-
ally limited to 6 buildings per area. See Figure 2.10 for a graphical representa-
tion of the experiment terrain. Different colors indicate terrain features such
as mountainous areas, marshy areas, farmland, lakes, rivers, and
impassable swampland. The legend describes the cultural features, and the
numbered areas are the towns and small built-up areas.
Troops
The organization of Blue troops used in the experiments varied according
to the context of the experiment. In early experiments, a single CAU was rep-
Figure 2.9. The terrain box used in the experiment was set in a Caspian Sea region.
Figure 2.10. To increase the complexity of the environment, the terrain box included a
variety of additional, fictitious features.
Figure 2.11. Overall organization of the Blue force. Gray shading shows the forces
represented with live personnel. Squads were computer simulated. Other forces were
not represented in the experiments. The Appendix describes the equipment.
Time
Mission planning was allotted two hours; mission execution was up to four
hours. In most cases, the Blue force was ordered to accomplish its objectives
by a specified time limit.
The Blue force plans to prevent the Red exfiltration. To this end, one CAT
(not explicitly modeled in the experiment) will secure the western part of the
international boundary while CAT-2 (the force modeled in our experiment)
will take key objectives (MEAD and SHERMAN) in the central and eastern
part of the area, thereby enveloping the Red force. CAT-2’s sensor assets will
be the first to cross the river, to probe the Red positions, and to set the nec-
essary conditions for the subordinate units to begin maneuver. CAU-2 will
then initiate the main effort toward the objective SHERMAN (in southeast)
while CAU-1 will execute the supporting effort and take objective MEAD
(center).
Figure 2.13a. An example of an experimental battle: Blue initial plan. See Appendix
for explanation of abbreviations.
The dominant role of situation awareness (i.e., the ability to obtain the
necessary information about the situation in which a military force operates)
should not come as a surprise. In fact, it has long been argued that the
very nature of command organizations has been historically driven by the
need for situation awareness.
In the world of warfare, an influential historian argued that “the history
of command can be understood in terms of a race between the demand for
information and the ability of command systems to meet it.”20 It also should
not be surprising that the solution to the problem, in all ages, had much to do
with technological innovations in information processing.
Consider the Napoleonic revolution in battle command. In the large-scale
operations of the Napoleonic age, the enormously enlarged and geographi-
cally dispersed armies engendered massively increased flows of information.
The emperor could no longer be in person with every corps; he needed detailed
reports. The task of transforming these formidable inflows of information into
adequate situation awareness was too difficult even for a genius of Napoleon’s
caliber. To solve the problem, he introduced a system of remarkable innova-
tions, technological in nature even if the technology was based on humans
and paper.21
He devised sophisticated databases—a system of formalized reports, specialized
summaries, and custom-designed cabinets for efficient storage and retrieval
of such information. He institutionalized the use of a relatively recent tech-
nological development—accurately triangulated and mass-produced maps—
as a medium for time-space modeling and analysis of strategic movements.22
To process the incoming reports and outgoing orders with greater speed and
accuracy, he devised a system of specialized human information processors—
staff officers—responsible for formally decomposed and allocated sets of
functional tasks. This suite of technological innovations—based on paper
databases and human information processors—was at the core of the Napoleonic
battle-command revolution.
In the world of industrial management, it was also long recognized that the
structure and processes of an effective organization are driven by the need
to transform large volumes of information into usable forms—situation
awareness and decisions. The ability of a decision-making organization to
produce successful performance is largely a function of avoiding information-
processing overload,23 not unlike what we saw in the Napoleonic invention of
a new battle command. Thus, in the 1990s, globalization and computerization
drove massive changes in industrial and commercial management—reduction
in layers of management, just-in-time operations, and networked structure of
enterprises.
In short, the chain of influences works as follows. New conditions of war-
fare (such as Napoleon’s large, distributed corps) both engender and demand
more information. More information challenges the ability of the old com-
mand system to transform it into actionable situation awareness. To resolve the
challenge, a capable military develops or adopts new information-processing
technologies (such as Napoleon’s paper databases and specialized human
processors), with suitable organizations and procedures—a new battle-command
system.
Captain Johnson’s command cell was a product of a similar chain. Given
the unusual degree of Blue force dispersion and physical separation
from the enemy, and with the confusing flood of information produced by
multiple sensors and networks, can Captain Johnson and his battle managers
maintain an effective level of situation awareness? A key hypothesis of the
MDC2 program was that they could, if provided with an appropriate suite of
tools, such as the CSE.
CHAPTER 3
New Tools of Command:
A Detailed Look at the
Technology That Helps
Manage the Fog of War
Richard J. Bormann Jr.
Skip this chapter if computer terminology bores you. The subsequent chapters
are quite understandable without the heavy technical content of this one. On
the other hand, for a technically minded reader, this is a great place to learn
about the nuts and bolts of the network-enabled battle-command tools actu-
ally built and tested in the MDC2 program.
Let’s begin by introducing two important abbreviations. Battle Command
Support Environment (BCSE) is the overall system we built to perform the
experiments with battle command within the MDC2 program. It includes
many diverse components with a range of capabilities. A large part of these
capabilities consists of the functions that actually support a command cell like Cap-
tain Johnson and his battle managers. That set of functions is called the Com-
mander Support Environment (CSE). In this chapter we will describe the
entire BCSE. In other chapters, we focus almost exclusively on the CSE.
The BCSE is an execution-centric command and control (C2) decision
support system for cross-functional, collaborative mission planning and exe-
cution. It provides a common operating picture (COP) for enhanced, real-
time situation awareness (SA). It supports multiple echelons, from battalion
down to the individual mounted and dismounted warfighter level, including
both manned and unmanned platforms. Using the BCSE, command cells
control manned and robotic assets in a network-enabled, cross-functional
environment in response to rapidly changing battlespace conditions and digi-
tally share the changes across organizations in real time.
In developing BCSE, we pursued a number of objectives centered around a
network-enabled approach to battle teamwork. All assets within the command
cell are able to collaborate and share their view of the world. The information
is shared by every human and every robotic system within the commander’s
control, so that they can each operate under the same assumptions. Even
though, with inevitable periodic losses of communications, perfect instanta-
neous and continuous sharing of information may not always be possible, the
system does its best to keep the information stores as up to date as possible.
In its current form, the BCSE is based on a three-tier C2 architecture with
integrated Battlefield Functional Areas (BFA). The BFAs handled directly
within the BCSE include Maneuver, Intelligence, Effects, and Logistics. The
three-tier architecture provides decision support (1) at the warfighter graphi-
cal user interface, (2) among multiple networked assets, and (3) at the indi-
vidual asset level. The decision support capability is distributed across the
command cell while providing redundancy of the information model. The
latter is important because it ensures that a loss of any one asset does not
result in the loss of existing information and keeps the information store from
becoming a central point of failure.
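The redundancy idea can be sketched as full replication of the information
model on every node, so that no single asset is a point of failure. The
fragment below is a deliberately naive illustration of the principle; real
synchronization over a lossy tactical network is far harder, and none of
these names come from the BCSE itself.

    class Node:
        def __init__(self, name: str) -> None:
            self.name = name
            self.replica: dict = {}           # full local copy of shared state

    class Network:
        def __init__(self, nodes: list) -> None:
            self.nodes = nodes

        def publish(self, key: str, value) -> None:
            for node in self.nodes:           # a write reaches every replica
                node.replica[key] = value

        def lose(self, node: "Node") -> None:
            self.nodes.remove(node)           # a destroyed asset drops out

    net = Network([Node("C2V"), Node("UAV-1"), Node("NLOS-gun-2")])
    net.publish("phase_line", "GOLD")
    net.lose(net.nodes[0])                    # the C2V is lost...
    print(all(n.replica["phase_line"] == "GOLD" for n in net.nodes))  # ...data survives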
The integrated BFAs allow every member of the command cell, regard-
less of his functional specialization (e.g., military intelligence or logistics),
to pitch in and share the workload, with the same tools and functions
available to every member. If the intelligence manager is overwhelmed while
the effects manager has spare cognitive cycles, the effects manager can pitch
in to help with intelligence tasks without switching computer screens or
moving to other workstations.
The architecture is also tailorable—it allows the users to tailor the system
interface to their specific preferences and warfighting needs.
In the following sections, we begin by describing the architecture of BCSE
in some detail. We then continue by highlighting some of the tools and fea-
tures that are available to the warfighter and finish by describing the decision
support system framework that is at the heart of the BCSE.
• Commander and staff agents provide the commander and his staff with intelligent
information that helps them understand the battlespace conditions and occurrences
as well as giving them the ability to control the assets under their command.
• Collective agents reside at nodes within the network and handle the coordination of
multiple assets on behalf of the commander and his staff. Examples of cross-
network agents include the Attack Guidance Matrix (AGM) and agents that maximize
intelligence gathering through the coordination of multiple sensors based on the
commander’s intent.
• Asset agents enable each asset to understand how to carry out the commander’s
intent and directives within the scope of the overall mission.
types immediately upon detection of the target, they can instruct the AGM to
do so. Conversely, the command cell can limit control, setting the AGM to
provide recommendations only. In effect, a cell member has the ability to pro-
gram and control the agents during the operation by modifying the rules of
engagement, altering threat assessment and targetability criteria, and setting
weapon-to-target preferences through the use of their user interface. This
user interface is known as the CSE and will be explained in more detail in a
moment. In the chapters that follow, we will use the term CSE to refer to all
the functions that support the command-cell members, in order to exclude
other parts of the BCSE that support command-unrelated functions, such as
simulation.
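The AGM's recommend-versus-execute control described above can be pictured
as a target-type-to-weapon preference table with a mode switch. The table
contents and mode names in this sketch are invented for exposition.

    # Hypothetical attack guidance: target type -> ordered weapon preferences.
    AGM = {
        "armor":     ["precision_missile", "nlos_cannon"],
        "infantry":  ["nlos_cannon", "mortar"],
        "artillery": ["precision_missile"],
    }

    def on_target_detected(target_type: str, mode: str = "recommend"):
        """Apply the AGM to a new detection. In 'execute' mode the preferred
        weapon is tasked automatically; in 'recommend' mode the cell decides."""
        preferences = AGM.get(target_type)
        if not preferences:
            return target_type + ": no guidance, refer to command cell"
        weapon = preferences[0]               # first listed preference
        if mode == "execute":
            return "tasking " + weapon + " against " + target_type
        return "recommend " + weapon + " against " + target_type + " (awaiting approval)"

    print(on_target_detected("armor"))                  # recommendation only
    print(on_target_detected("armor", mode="execute"))  # immediate engagement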
Now let us consider each of the three tiers of our architecture (Figure 3.2).
Tier 1 of the BCSE provides decision support within the CSE.
The CSE is the primary interface between the command-cell members and
the rest of the system—it delivers the battle information to the cell operators,
and receives the battle commands from the operators. The CSE assimilates a
flood of digital battlespace information coming from intelligence reports and
sensory information of assets reporting from the battlespace into a graphical
picture so that the commander and his staff can quickly and easily understand
what is happening. This graphical picture is known as the common operating
picture (COP) (Figure 2.7 of Chapter 2). The CSE also provides information
filters that a user can customize to control the type and quantity of infor-
mation being received based on his information needs. The CSE includes a
suite of tools that operators use to control and coordinate assets, tools like
Task Synchronization Matrix, Threat Manager, AGM, Alert Tracker, and
Figure 3.3. The CSE utilizes a knowledge base to integrate the static, temporal, and
spatial information into a coherent set of knowledge for the commander and staff to
operate on across echelons.
were well outside traditional doctrine. In one such experiment, for exam-
ple, the cell members divided their roles so that one maintained responsibility
for maneuver, intelligence, and fires for the close fight while another handled
all these functions for the deep fight.
The CSE provides the following:
• Visualization of Information: the real-time display and focal point for situation
monitoring and understanding in forms such as 2D and 3D maps, graphics, icons,
tables, and reports.
• Collection management and logistical displays: the commander’s portal to the
current status of each of his assets in the battlespace.
• Command Center: the operators’ main control center for task creation and
modification, mission planning, mission coordination, and collaboration.
• Task Decomposition: the ability to break down a complex task into a set of individual
tasks (a sketch follows Figure 3.4 below).
• Terrain Analysis: the ability to understand the terrain, how it can be used, and how
to maneuver across it.
• CCIR Management and Display: the ability to specify and receive alerts, cues, and
notifications in response to topics of critical interest to the commander and his
staff.
Figure 3.4. The BCSE helps the command cell in both planning and execution.
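The Task Decomposition capability in the list above can be pictured as
recursive expansion of a task against a rule table until only atomic subtasks
remain; the rules below are invented for illustration.

    # Hypothetical decomposition rules: task -> ordered subtasks.
    DECOMPOSITION_RULES = {
        "clear_route": ["reconnoiter_route", "suppress_threats", "proof_route",
                        "report_route_clear"],
        "reconnoiter_route": ["launch_uav", "scan_segment", "recover_uav"],
    }

    def decompose(task: str) -> list:
        """Recursively expand a task until only atomic subtasks remain."""
        subtasks = DECOMPOSITION_RULES.get(task)
        if subtasks is None:
            return [task]                     # atomic: no further expansion
        result = []
        for sub in subtasks:
            result.extend(decompose(sub))
        return result

    print(decompose("clear_route"))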
Tier 2 contains agents that provide the commander and staff with specific
decision support needs across the assets in their control. These agents are
known as collective agents because they treat the assets within their
control as a dedicated network focused on combined goals.
The collective agents function as the commander’s assistant by directing,
coordinating, and synchronizing the assets to achieve mission goals. They
also provide recommendations and assimilate disparate information into the
COP. Each of these agents can be hosted on any asset equipped with the
appropriate hardware anywhere within the command cell’s control and can
move from one asset to another if required. The movement from one
asset to another may occur for a number of reasons including the destruction
or critical failure of its host asset.
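Agent mobility of this kind might be sketched as follows; the rehosting
policy and all names are illustrative assumptions only.

    class CollectiveAgent:
        def __init__(self, role: str) -> None:
            self.role = role
            self.state: dict = {}             # e.g., current attack guidance

    def rehost_on_failure(agent: CollectiveAgent, hosts: list, failed: str) -> str:
        """Move the agent off a failed host to any surviving one."""
        survivors = [h for h in hosts if h != failed]
        if not survivors:
            raise RuntimeError("no surviving host for agent " + agent.role)
        return survivors[0]                   # simplest policy: first survivor

    hosts = ["C2V", "ARV-1", "MULE-3"]
    agm_agent = CollectiveAgent(role="attack-guidance")
    print(rehost_on_failure(agm_agent, hosts, failed="C2V"))   # -> ARV-1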
Each of the collective agents fulfills a specific need of the cell members, and
therefore, there are many different types, which include those that manage
schedules; provide guidance such as attack, BDA, and reconnaissance; report
and manage threat information; fuse and synchronize data into information
that the command-cell members can easily and quickly comprehend; provide
critical alerts, such as potential fratricide or violations in rules of engage-
ment; provide memory management, which is important in view of the large
volumes of battle-relevant information in the network-enabled environment;
and collect data for postbattle analysis.
Tier 3 is a collection of resident agents—one at every asset controlled by
the command cell. The idea is to provide a networked environment and
communication mechanism such that each asset keeps the entire community
aware of its current state while in turn being kept informed of the environ-
ment and its surroundings. As a result, each asset’s knowledge base is kept
as synchronized as possible to the full state of the battlespace. This way, it
can reason on how to maneuver, control its sensors, react to threats, and take
initiative to accomplish the goals of the command cell. An important point
to mention is that while each asset may know how to maneuver on its own
(humans as well as sophisticated robotic platforms have built-in reasoners for
knowing how to move and avoid obstacles), the resident agents provide an
understanding of how to maneuver in respect to the current command cell’s
goals. They understand the mission, not just the specific task.
The asset should be able to act for the good of the mission and not just
for the good of itself; the asset must also understand the world around it.
Besides merely breaking down the asset’s task into the necessary atomic
actions, it must understand its purpose and goal as well as every other asset’s
purpose and goal, and even the commander’s intent. This also enables the
asset to carry out its mission when communications with the commander
are lost and to react to threats in a way that is intelligent and meaningful,
according to mission parameters. In order to react appropriately, it must
also understand its role in the mission, what it can and cannot sacrifice for
the sake of the mission, and how to avoid causing harm to other participants
in the mission.
The resident agents are aware of the environment that may lie outside the
field of view of the asset's own sensors and therefore can help
provide a better recommendation for where to maneuver. From that recom-
mendation, the human or the robot can use a brain or an onboard navigation
system to work out the details. This is important because resident agents are not
merely artifacts of a simulation. In fact, they provide as much to a live envi-
ronment as they do to the simulated environment.
Here we will attempt to clarify this with an example. Assuming a UAV is
assigned to reconnoiter a potential target, it must understand the level of risk
it can endure to reach the target and accomplish the task. The detection of
an air-defense system might normally cause a reaction to flee, but if the mis-
sion dictates that mission success outweighs the risk of losing the platform,
then the system will decide to continue the mission despite the risk—fleeing
is not an option. In other words, the UAV knows what it needs to accom-
plish, what risk it can take, and when to call off the task. The UAV must
also ensure that when it does react, it doesn’t react in a way that may bring
enemy attention to others—like fleeing back to base.
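Stripped to its core, the reaction logic just described is a comparison of
mission priority against platform risk. A toy sketch, with invented scales
and reaction names:

    def react_to_air_defense(mission_priority: float, platform_risk: float) -> str:
        """Both arguments on a 0..1 scale; returns the UAV's reaction."""
        if mission_priority >= platform_risk:
            return "continue"                 # task success outweighs the loss
        return "orbit_low_and_wait"           # break contact without fleeing
                                              # toward friendly positions

    print(react_to_air_defense(mission_priority=0.9, platform_risk=0.6))  # continue
    print(react_to_air_defense(mission_priority=0.3, platform_risk=0.8))  # wait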
The Tier 3 resident agents are currently provided in two forms, which are
referred to as the Platform Support Environment (PSE) and the Soldier/
vehicle Support Environment (SSE).
The PSE is used in unmanned (robotic) assets to provide system guid-
ance and control. Using a knowledge base, with its set of rules, it translates
mission-related tasks into directives understood by the robotic system's con-
trol software. For example, the commander requests a particular area to be
searched for enemy. The collective agents assign asset X to perform the task
due to its availability and proximity to the area. A route and set of sensor
controls to accomplish the task are determined by the collective agents and
passed to the resident agent on asset X. Assuming asset X only understands
directives in a specific message format of segments and speeds, the agent
formats the appropriate messages and sends them to the robotic system’s
control software. As the asset moves out on its task, an enemy indirect fire
system is detected by another asset on the network. The resident agent on
asset X realizes that the enemy asset’s attack range intersects X’s route. As a
result, the resident agent generates a new route around the danger area and
notifies all the other agents on the network (including the command-cell
members) of its course change and expected time of task completion. Again
the agent formats the appropriate messages and sends them to the robotic
system’s control software. The robot reroutes and completes its task without
the cell member’s intervention. Without the resident agent, the asset would
never have known about the danger.
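The rerouting behavior just described can be sketched as follows. The geometry is deliberately simplified (we test waypoint distances rather than true segment intersection), and the detour and notification helpers are hypothetical stand-ins for the agent's actual services.

import math

def min_distance_to_route(route, threat_pos):
    """Smallest distance from the threat to any waypoint on the route."""
    return min(math.dist(wp, threat_pos) for wp in route)

def replan_if_threatened(route, threat_pos, threat_range, plan_detour, notify_all):
    """If the threat's attack range intersects the route, generate a new route
    around the danger area and notify all agents on the network."""
    if min_distance_to_route(route, threat_pos) <= threat_range:
        new_route = plan_detour(route, threat_pos, threat_range)
        notify_all({"event": "course change", "new_route": new_route})
        return new_route
    return route

route = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
replan_if_threatened(
    route, (5.0, 1.0), 3.0,
    plan_detour=lambda r, p, rng: [(0.0, 0.0), (5.0, -4.0), (10.0, 0.0)],
    notify_all=print)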
The SSE assists the soldier with recommendations on reconnaissance and fire support, alerts and cues, and situation awareness through the development and display of the COP (Figure 3.5).
Figure 3.5a. The SSE provides C2 and decision support to the dismounted warfighter.
Generally, an agent has a way to display its output to the humans. The human warfighter has his own brain to make decisions and to break down a task into subtasks. The warfighter can only make decisions on what is actually known to him (unless he is guessing). The knowledge base holds the overall known state of the battlespace: it knows the global situation and what every asset is currently doing and is expected to do. Decision tools, using the knowledge base, can keep track of information that is important to the warfighter, recognize when situations important to the warfighter are occurring, and make recommendations to the warfighter—all based on detailed information that may be difficult or tedious to keep track of when one's focus is on survival. So, there needs to be a good way to interface the output of the decision tools to the warfighter so that he can use his brain to accomplish the task at hand. The soldier support environment uses a visualization display to show the COP and alert the commander to critical events. The agents interface to the display.
Now, having discussed the BCSE system, the humans that use it to command
the assets, and the assets themselves, let us not forget about the all-important
real world within which all this operates. A system like BCSE, along with
its commanders and assets, would normally exist in a real battlespace popu-
lated by real terrain features, enemies, neutrals, other friendly entities, and so
forth. However, in the MDC2 program, we did not have the luxury of experi-
menting with BCSE in a real battle or even a field exercise. Instead, the real
world was simulated by an advanced battle simulation system called OneSAF
Testbed (OTB).2 It was OTB that simulated such physical events as move-
ments of assets, their fires, and effects of fires. The overall simulation suite
also included sensor effects servers (these simulated, for example, whether a
particular UAV would be visible to a radar system under the given conditions) and imaging servers that generated simulated imagery—for example, how an enemy tank would look to a UAV's video camera. Because the world (including the assets) was simulated, each PSE resided not on a real asset but was instead connected to its simulated asset in OTB.
Figure 3.6 depicts the architecture used in one of the MDC2 experiments.
The circular part of the diagram on the right shows how multiple cells com-
municate—they are linked together on a shared C2 Internet. Recall that a cell
consists of a commander and his staff. In this example configuration, there
are 10 cells that include one higher headquarters (HHQ), one battalion (Bn),
two companies (Co), two platoons (PL), and four squads (SQ). Each cell also
owns the items represented in the callout on the left. Following the three-
tier architecture, the cell contains C2 agents that handle platform support at
the robotic level (PSE), soldier support for the dismounted warfighter (SSE),
vehicle support (VSE) for mounted warfighters, commander support for the
commander and his staff (CSE), and collective agents providing support across
the assets (CA). The architecture distributes the C2 elements at three levels or tiers: the commander and staff (CSE), across the networked components (CA), and at the individual assets (PSE, VSE, SSE).
Figure 3.6. The BCSE architecture and the decision support components used in Experiment 7.
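As a schematic, the tier assignment can be written down as a simple mapping; the enum and dictionary below are merely our shorthand for the structure just described, not BCSE code.

from enum import Enum

class Tier(Enum):
    COMMAND_CELL = 1  # Tier 1: commander and staff (CSE)
    NETWORK = 2       # Tier 2: collective agents across the network (CA)
    ASSET = 3         # Tier 3: resident agents on individual assets

AGENT_TIERS = {
    "CSE": Tier.COMMAND_CELL,
    "CA": Tier.NETWORK,
    "PSE": Tier.ASSET,  # unmanned (robotic) platforms
    "VSE": Tier.ASSET,  # mounted warfighters
    "SSE": Tier.ASSET,  # dismounted warfighters
}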
The reader, we are afraid, may still be uncertain of how all this works
together. We will explain this later via an illustrative example. But before we
do so, let us take a closer look at the functions and tools within the most
important part of the system—the CSE.
Visualization
The CSE shows all operators a complete and up-to-date view of what is known about the battlespace, during both planning and execution of the mission. Individual operators can access, view, configure, and tune their views, workspaces, and processes in ways that support their thinking. Icons and markers allow the operators to quickly see the shape and status of the battlespace.
• With just a single glance at the map, the operator can see new detections. A detec-
tion is a report stating that a sensor has detected a suspicious object (e.g., a possible
enemy tank), with details on when, where, by what sensor, and any available imagery
of the object.
• With another glance, the operator can see enemy targets, the target status, includ-
ing identification (e.g., is it a tank or an infantry team), engagement status (whether
and who fired at the target), and battle damage assessment (is it destroyed or dam-
aged, and to what extent).
• The operator can also see the locations of every asset, its tasks, routes, and sensor
coverage.
• Each platform on the map also has a tooltip that shows its fuel consumption, speed,
location, heading, and other pertinent information.
Each icon on the map is decorated with special adornments. These decorators indicate additional information about the object. Decorators for an enemy asset, for example, include symbology to show engagement status, BDA status, image availability, the sensor type that last detected the asset, its direction of movement, and more.
Briefing
Operators use the Briefing Tool to view and share mission-planning data
and intent. It is a “whiteboard” shared by multiple operators during planning
and execution in order to exchange information. This capability is particularly
critical for allowing the commander to share his vision or view of the current
or future battle with other commanders, staff, and subordinates. The opera-
tors share overlays related to mission plans, change OPORD and situation
templates, and save the briefing layers. Individual function-specific plans may
be merged into a single plan, changed during execution, and shared at any
time with selected personnel. This feature also allows the operators to enter
the mission statement, commander’s intent, and task and purpose statements.
To avoid confusion among the multiple operators sharing the graphic space, each operator can add personalized graphics and icons and color-code his pointer and graphics.
There are two additional tools to aid in communication and coordination
between commanders, and between a commander and his staff. The ViewSync
tool allows an operator to synchronize his view of the battlespace with another
operator’s view. The heads-up display tool allows an operator to project his
screen on a shared monitor visible to all members of a cell.
Situation Awareness
The Threat Manager shown in Figure 3.7 provides the operator with all
identified threats in a tabular display. This information is determined by the
intelligence information that has been correlated and fused by the command-
cell members and the fusion agents. It is their perception of the threat and
does not represent ground truth. The Threat Manager includes the following
information:
• Threat Name—user-specified name and military type of threat, such as SA-13 or Draega.
• Unit Status—indicates the level of knowledge about the threat (e.g., suspected, identified, targetable).
• Threat Level—qualitative level of threat, ranging from low to high.
• Damage status—indicates the current perceived health of the threat, which includes
information such as destroyed, mobility-kill (e.g., a tank cannot move but can func-
tion otherwise), firepower-kill (e.g., a tank cannot fire but otherwise functions),
unknown damage, and more.
Figure 3.7. The Threat Manager identifies enemy threats and provides access to the
AGM, BDAGM, and Intel Viewer.
• Threat type—indicates type of threat such as air defense, indirect fire, direct fire,
and so forth.
• Friendly assets within range—lists the friendly assets that are being threatened.
When an operator selects a threat by clicking on it, the map shows lines extending
from the threat to the friendly assets being threatened.
• BDA status—presents information about the last known reconnaissance against the
threat since its last attack and whether the reconnaissance is scheduled, in progress,
or complete. When the operator clicks on the BDA Status, the BDA Guidance
Matrix recommends an asset to perform further reconnaissance on the threat.
• Show Image—indicates the time when the most recent image of the threat was taken
and a marker if the image has not been viewed by an operator. By clicking on this
field, the Intel Viewer is displayed allowing the user to view and classify the image.
• Engagement Status—indicates the last known engagement (attack) information on
the target and its status indicating whether the engagement is scheduled, in prog-
ress, or complete. When the operator clicks the Engagement Status field, the AGM
recommends an asset to perform another attack based on the command staff’s AGM
settings.
The Resource Availability tool is a textual, tabular display that gives the
operator the following information on all friendly assets: name, damage status,
fuel remaining, sensor being used, percent of task completed, speed, heading,
altitude, and location. Double clicking on the name of an asset centers the
map on the asset and highlights it.
The Collection Management tool is a textual, tabular display that gives task
information for friendly assets: asset name, task, target (for a fire or reconnais-
sance task), start time, end time, purpose, percent complete, and task status. It
is a quick way to check a platform, see all tasks assigned to that platform, and
the status of each task. Double clicking on the name of an asset centers the
map on the asset and highlights it.
Tasking
To issue a task to an asset, the operator clicks the right mouse button on
an asset shown on the map, or on the Execution Synchronization Matrix (this
will be described a little later in this section), or on the mission workspace.
The click brings up a context-sensitive menu that shows the possible tasks that can be issued based on the current situation. Having selected a task, the operator is presented with a tasking window where he can specify the
information and parameters that are specific to the execution of the task. This
includes specifying intent, waypoints and schemes of maneuver, task duration,
dependencies for start or completion of the task, use and operation of sen-
sors and weapons, and terms of task completion. All platform tasks give the
operator an option to be notified upon task completion. During planning,
the operator often animates the tasks and sees how assets move on the map.
During execution, operators add, delete, or modify tasks as the battle situa-
tion changes.
Fortunately, many of the tasks are high-level tasks for which the operator needs to input very limited information—only the intent. The system then
automatically generates the rest of the tasking information. For example, to
reconnoiter an area, the operator selects only the platform, the area to recon,
the flight area, the sensor, and altitude. The system then determines the best
route for the best coverage. A task that requires a ground maneuver invokes
the terrain analysis components to automatically generate the best route to
meet the operator’s intent (fastest, shortest, and most concealed) and to avoid
terrain obstacles.
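One way to picture the route generation is as a weighted choice among candidate routes; the attribute names, weights, and sample routes below are invented for illustration.

def best_route(candidates, intent):
    """Pick the candidate route that best matches the operator's intent."""
    weights = {
        "fastest": {"time": 1.0, "length": 0.0, "exposure": 0.0},
        "shortest": {"time": 0.0, "length": 1.0, "exposure": 0.0},
        "most_concealed": {"time": 0.0, "length": 0.0, "exposure": 1.0},
    }[intent]
    return min(candidates, key=lambda r: sum(weights[k] * r[k] for k in weights))

routes = [
    {"name": "ridge road", "time": 10, "length": 8, "exposure": 5},
    {"name": "wooded draw", "time": 14, "length": 6, "exposure": 2},
]
print(best_route(routes, "most_concealed")["name"])  # wooded draw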
Individual platforms usually receive tasks related to reconnaissance, maneu-
ver, or fires. The platform task menu is context sensitive, which means that only those tasks suitable for the platform are presented in the menu. For example, the currently available movement tasks for a reconnaissance, surveillance, and target acquisition (RSTA) unmanned ground vehicle (UGV) are Move, Halt-Resume, Overwatch, Route Reconnaissance, Area Reconnaissance, Auto Reconnaissance, Locations Reconnaissance, Targets Reconnaissance, Follow, and Pursue.
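Such context-sensitive filtering might be sketched as follows; the task lists and the situation test are illustrative assumptions rather than the actual menu logic.

PLATFORM_TASKS = {
    "RSTA-UGV": ["Move", "Halt-Resume", "Overwatch", "Route Reconnaissance",
                 "Targets Reconnaissance", "Follow", "Pursue"],
    "NLOS": ["Move", "Halt-Resume", "Fire Mission"],
}

def task_menu(platform_type, situation):
    """Return only the tasks suitable for this platform and situation."""
    tasks = list(PLATFORM_TASKS.get(platform_type, []))
    if not situation.get("targetable_enemy"):
        # Hide fire tasks when nothing is currently targetable.
        tasks = [t for t in tasks if t != "Fire Mission"]
    return tasks

print(task_menu("NLOS", {"targetable_enemy": False}))  # no Fire Mission offered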
When multiple assets are involved, an operator can create groups of assets
that are to work together for tasks such as maneuver or reconnaissance. The
group may also be tasked as a formation, moving the vehicles in a formation
pattern established by the operator. With this technique, the operator spec-
ifies the route, a stand-off distance, and a pattern, such as column, wedge,
herringbone, line, echelon left, or echelon right. The operator can change the
formation later during the execution.
Clicking on an asset able to fire weapons brings a menu with appropriate
choices. The Quick Fire tool brings up the appropriate fire options for the
selected asset, while the Prohibit Fire tool allows the operator to mark this
asset as a “Do Not Fire” asset. If a fire task is assigned to this asset, the system
issues a warning, which the operator may choose to override.
The sensor control allows the operator to manually turn sensors on and off.
However, some opportunistic reconnaissance tasks override these settings. Sensor direction can also be changed using the 360-degree reference. This works well for stationary platforms that use sensors such as GSR (ground surveillance radar).
The Execution Synchronization Matrix is a customizable user interface
module that represents the COA tasks in a Gantt chart format. The matrix
graphically shows each task’s start time, end time, and duration, the status of
each task (planned, completed, in progress, off schedule), and the interdepen-
dencies between tasks. Each task can be further examined by double-clicking on its graphic representation to display the task details. For example, a Targets Reconnaissance task may take 25 minutes to complete. Examining the task further, the operator can see that it is made up of three Target Reconnaissance subtasks, the first of which is to reconnoiter a Garm.
Automation of Fires
The AGM is a tool that automatically monitors the enemy targets that
become known to the BCSE and, following the operator-specified rules, gen-
erates and issues commands (or recommendations) to fire at the targets. The
AGM integrates fires and effects with intelligence, maneuver, and logistics. It does this as follows: it tracks movement and location information about potential enemy assets, reasons about the enemy assets' capabilities to hurt the friendly forces, reasons about the currently available friendly assets and ammunition, determines the friendly assets that are being threatened by the enemy asset, determines whether the enemy asset is a valid target using rules supplied by military intelligence experts, and pairs (allocates) the friendly weapon systems and munitions to the targets. The AGM is aware of the friendly forces in the area, No Fire Zones, and No Fire Lines when computing the firing solution.
The operator uses the AGM tool in the CSE to enter the criteria that will later be used to determine whether an enemy asset is targetable. He does this by setting criteria such as how confident we are that the target is the type we believe it is (identification confidence) and how precisely we know its location (CEP). The operator can also specify the priority order in which munition types should be selected to attack a given target, the number of munitions to use against a target, and much more. Figure 3.8 shows the AGM tool. Underlying the
AGM tool in the CSE is a collective agent called the AGM agent. This agent
takes the inputs made in the CSE and modifies the rule parameters in its
knowledge base to provide recommendations and perform actions based on
the operator’s specification. There is typically one AGM per command cell.
The AGM agent is capable of coordinating its guidance with AGM agents of
other cells. The extent of coordination between AGMs belonging to different
cells can also be controlled through the CSE. Additionally, there are load-
balancing rules that can be set to control the types and amount of munitions
that can be used by the AGM. An important aspect of the AGM is that the
operator can activate a different AGM set of criteria at any time before and
during a mission. The operator can also create new AGM criteria sets, change
criteria sets (both active and inactive), and share criteria sets with other opera-
tors in and out of the cell.
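The following sketch suggests how such criteria might be evaluated and how a weapon-target pairing might fall out of them; the thresholds, field names, and munition labels are stand-ins for the operator's actual settings.

from dataclasses import dataclass

@dataclass
class AGMCriteria:
    min_id_confidence: float  # required identification confidence
    max_cep_m: float          # required location accuracy (CEP, meters)
    munition_priority: list   # preferred munition order

def is_targetable(target, c):
    return (target["id_confidence"] >= c.min_id_confidence
            and target["cep_m"] <= c.max_cep_m
            and not target["in_no_fire_zone"])

def pair_weapon(shooters, c):
    """Allocate the first available shooter holding the highest-priority munition."""
    for munition in c.munition_priority:
        for s in shooters:
            if s["available"] and munition in s["munitions"]:
                return s["name"], munition
    return None

criteria = AGMCriteria(min_id_confidence=0.8, max_cep_m=50.0,
                       munition_priority=["precision attack missile", "MRAAS"])
target = {"id_confidence": 0.9, "cep_m": 20.0, "in_no_fire_zone": False}
shooters = [{"name": "NLOS-1", "available": True,
             "munitions": ["precision attack missile"]}]
if is_targetable(target, criteria):
    print(pair_weapon(shooters, criteria))  # ('NLOS-1', 'precision attack missile')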
Figure 3.8. The CSE provides the command-cell member with an interface to the AGM for controlling the rules for automated and recommended fires.
The fire tools also show the current and allocated munitions count; the operator selects a target or location and clicks the button to fire.
Any warfighter can make requests for fire during mission execution. For
example, a Long Range Surveillance (LRS) soldier can issue a call for fire
on a target. Or a CAT commander may call in joint fires from an F-117A.
The Request for Fire tool displays the request and allows it to be accepted or
denied by the organization that controls the fire assets.
Intelligence Management
The CSE offers a suite of tools to help the operator organize and act on
incoming detections and intelligence. The Picture Viewer allows the operator
to customize a presentation of images provided by various sensors by speci-
fying the sensor-carrying assets he wishes to monitor. As new pictures (IR,
DVO, SAR) taken by that asset come in, they are added to the presentation.
By selecting the history for the asset, the operator can see all pictures taken
by that asset.
With the Intel Viewer, the operator can examine images of battlespace
objects that have been detected, as well as the history of intelligence infor-
mation, such as the object’s type, affiliation (e.g., enemy, neutral, unknown),
damage state (e.g., destroyed, firepower-kill, unknown), how sure we know
what it is (e.g., suspected, identified, targetable), and its classification (e.g.,
air defense, heavy tracked, wheeled) (Figure 3.9). The term battlespace object
is used here to refer to a physical object of interest in the battlespace. This
includes enemy, neutral, and unknown assets as well as people, bunkers,
buildings, bridges, and more. The intelligence information that is displayed
comes from several sources, including automated sensor fusion, information
correlation, and updates based on previous operator interaction with the
Intel Viewer. Previous operator interaction refers to the operator’s ability to
use the Intel Viewer to update the intelligence information listed above. The
Intel Viewer adheres to the principle of integrated Battlefield Functional
Areas since the operator can request recommendations from the system for
tasks of reconnaissance or fires and then issue the command to carry out the
tasks. This integrates intelligence, operations, and fires into one tool.
The Unit Viewer integrates information from several sources into one small
pop-up window. The tool works for both friendly and enemy platforms and
pulls all available information into one screen. The window can be docked on
the screen, and whenever an asset is selected, its information is presented. For
a friendly asset, it shows the asset status, such as speed, location, altitude, head-
ing, available munitions, fuel level, its current task status, the current use of its
sensors, and a list of enemy assets that are currently threatening it. From the
Unit Viewer, there is an option to display, on the map, the friendly platform’s
route and its current task’s sensor coverage. For a nonfriendly asset, the Unit
Viewer shows a list of all the friendly assets threatened by that asset and the
nonfriendly’s speed, direction, status (e.g., suspected, identified, targetable),
and movement tracks derived from previous detections. The Unit Viewer fur-
ther provides a link to the Intel Viewer to view available images of that asset.
The Detection Catalog is a tabular and textual tool that lets the operator
see all detections made by the system. It is organized by detection and shows
who detected it, when, and by which sensor.
The intelligence estimate is a real-time enemy situation template that helps
the operator template the enemy within an area of responsibility. As infor-
mation is gathered within the area of responsibility, the system updates the
estimate with information on what was actually identified, destroyed, immo-
bilized, and so forth, in that area.
Automation of BDA
Similar to the AGM, the Battle Damage Assessment Guidance Matrix (BDAGM) monitors the friendly fires at the enemy targets and automatically recommends or assigns assets to assess the results. The operator can modify the currently active BDA plan, create a new one, activate a saved one, and share his BDA plan information with a peer, superior, or subordinate.
Figure 3.9. The Intel Viewer and Picture Viewer offer methods to view images and provide information on the objects viewed.
When setting up a BDAGM, the operator selects from a list of assets in his
command that have sensors that can generate imagery or are humans who
can visually assess damage. Note that no sensors modeled in our experiments
can automatically assess damage. Therefore, we rely on images viewed by a
human using the Intel Viewer to determine the level of damage. That is, the
sensors send back images and humans determine the level of damage. For
each asset under the group’s command, the operator may disable automatic
tasking of the asset, allow the asset to be automatically tasked by the system,
and/or dedicate the asset to the system for use as a BDA collection asset. The operator can further require the asset to avoid performing BDA on certain target types or limit its collection to specific geographic areas. When an enemy is fired at, the
collective agents in charge of BDA will monitor the attack and recommend or
automatically assign (based on the operator’s plan) the best suited asset to per-
form the reconnaissance (BDA) of the target based on the active BDA plan. In
addition to the BDAGM, the collective agents will monitor movement, radar,
fires, and communications coming from enemy units marked as damaged to
determine if there are any signs of life (such as movement or communications)
and then report that information back to the CSE where it is displayed in the
BDA Report tab of the Intel Viewer.
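One plausible reading of the asset selection, expressed in code: here "best suited" is reduced to the closest available imaging asset permitted by the plan, whereas the real collective agents presumably weigh sensors, current tasks, and terrain as well.

def select_bda_asset(assets, target_type, plan):
    """Pick the closest available imaging asset permitted by the BDA plan."""
    candidates = [
        a for a in assets
        if a["can_image"]
        and a["available"]
        and a["name"] not in plan["disabled"]
        and target_type not in plan["avoid_types"]
    ]
    return min(candidates, key=lambda a: a["distance_km"], default=None)

plan = {"disabled": {"A-323"}, "avoid_types": {"dismounted infantry"}}
assets = [
    {"name": "A-322", "can_image": True, "available": True, "distance_km": 4.0},
    {"name": "A-323", "can_image": True, "available": True, "distance_km": 1.0},
]
print(select_bda_asset(assets, "air defense", plan)["name"])  # A-322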
CCIR
The system can alert an operator about a number of situations based on
the development of Priority Intelligence Requirements and Friendly Force
Information Requirements. Alert selections are operator specific and can be
saved for the operator across multiple missions. The alert functions help a
commander determine when CCIR criteria are met.
The system also includes a planning audit tool that walks through a list
of mission planning tasks that are either automatically validated by a collec-
tive agent as complete or posed as a yes-or-no question to the operator. The
planning audit serves as an operation preparation checklist for use prior to execution of a plan.
Communications
Even with the wealth of information available through the BCSE displays, verbal communication remains important. When the BCSE is used in a simulation environment like the MDC2 program, the ASTi simulated radio and communications system can be used. In the real world, the BCSE would integrate military-grade radios such as SINCGARS. The BCSE integrates the radio channel and volume controls through a built-in interface on the CSE that keeps the operator's interface to the radio the same regardless of whether it is used in a simulated or live environment.
The operators also use the CSE collaboration mode in order to share the cur-
rent plan with higher headquarters, peers, or subordinates. The operator may
also choose to drop out of collaboration and work independently on his part of
the mission plan and then rejoin the collaboration session at a later time.
In order to allow the command cells the most flexibility in sending and
receiving data, the bandwidth management function lets each cell set trans-
mit and receive rates for the following: heartbeats (a blue asset's state, such as location and health); sensor measurements (for example, moving-target measurements include azimuth, azimuth variance, elevation, elevation variance, range rate, and range rate variance); and spot reports—fused information about battlespace objects, which includes an ID, type, a list of all possible objects considered with each one's estimated probability, location with its probability of error, speed, and the sensor information that was used in determining the spot report. The operators can modify the customized bandwidth settings at any time during execution.
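A cell's settings might be represented as follows; the categories echo the ones just listed, while the rates and the throttling test are invented for illustration.

bandwidth_settings = {
    "heartbeats": {"tx_hz": 1.0, "rx_hz": 1.0},           # blue asset state
    "sensor_measurements": {"tx_hz": 0.2, "rx_hz": 0.2},  # azimuth, range rate, ...
    "spot_reports": {"tx_hz": 0.5, "rx_hz": 0.5},         # fused object information
}

def may_transmit(category, seconds_since_last):
    """Throttle a message category to the cell's configured transmit rate."""
    return seconds_since_last >= 1.0 / bandwidth_settings[category]["tx_hz"]

print(may_transmit("heartbeats", 0.5))  # False: too soon to send another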
A unique and important feature of the CSE is command succession. If a
command asset is destroyed, like the Command and Control Vehicle (C2V),
the networked components sense its loss. An alert is triggered and the remain-
ing assets are notified that the system suspects that the particular command
vehicle has been destroyed. A commander in another cell can investigate, and
if the loss is confirmed, he may reassign assets to one or more cells, assign a
new commander to the cell, or a mixture of the two.
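The loss detection behind command succession could rest on a heartbeat timeout of the following kind; the period and threshold are our assumptions.

HEARTBEAT_PERIOD_S = 5.0
MISSED_BEATS_FOR_ALERT = 3

def suspected_lost(last_heartbeat_s, now_s):
    """Flag a command asset whose heartbeats have stopped arriving."""
    return now_s - last_heartbeat_s > HEARTBEAT_PERIOD_S * MISSED_BEATS_FOR_ALERT

# When this returns True, the remaining assets are alerted; a commander in
# another cell can then confirm the loss and reassign assets or command.
print(suspected_lost(last_heartbeat_s=100.0, now_s=120.0))  # True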
The CSE supports both individual chat sessions and group chat sessions.
The operator is shown the active members and may select a chat partner.
Operators may also set up a named group (one or more) for chat. Individuals
are invited to the chat and may elect to join.
Logistics
The Combat Power tool tells the operator about the health, fuel, and
ammunition status of assets during the battle. The operator may tailor the
data to his specific interests. For example, the operator may request to show
only certain assets and change such settings as the threshold when the low
level of fuel is to be reported. During execution, it shows a trend analysis for
fuel, munitions, and health.
The Munitions-on-Hand is a tabular, textual tool that lists all the avail-
able ammunition and the allocated and spent ammunition counts for each
asset. During execution, the counts are continuously updated by the system as
rounds are fired, detonated, resupplied, or allocated by a plan.
AN ILLUSTRATIVE SCENARIO
To demonstrate how all the moving parts work together, and how they imple-
ment a network-enabled approach to battle command, let us walk through the
following scenario with the help of Captain Johnson, the commander we met
in chapter 2.
In this scenario, Captain Johnson’s force is responsible for providing recon-
naissance and for clearing of the enemy in an important area of the battlefield.
Following Johnson’s instructions, the maneuver manager, Specialist Chu, uses
GCMs—polygonal areas that he sketches on the map—to mark a specific area
of interest. Chu names the area DOG. The assets under Johnson’s control
include five unmanned (robotic) assets.
The RSTA detects a moving object—let’s call it ObjX—with its GSR. The
PSE on the RSTA sends a stream of sensor measurement information to the
Collective Intelligence Module where it is fused with other available informa-
tion. The result is a spot report indicating that ObjX is a target of unknown
type moving east at 40 km/h. Since its type is unknown, the system marks it with a low confidence level. The detection is broadcast to all the components in all three tiers on the network. The visualization components (the CSE, SSE, and VSE) place an icon indicating a detection of an unknown type on the map at the last detected location.
A-321’s PSE, its onboard expert system, gets the information, sees that the
confidence level of the target information is low, and recognizes that the target
is within the boundaries of area DOG. It immediately moves into reconnais-
sance action while alerting Johnson and his staff that it is beginning a new task
against the unknown moving object ObjX. A-321’s PSE uses the knowledge of
its own capabilities, the terrain information, and location prediction algorithms
to determine a good location to snap a picture of the target. It formulates a task
for itself called “Target Reconnaissance” that includes movement tasks and a
picture-taking task. The PSE broadcasts this information across the three tiers
so that other assets understand its intent. Specialist Chu monitors the route
closely on his CSE to ensure that the right decisions are being made.
The RSTA vehicle, still tracking the object, receives A-321's information, realizes that A-321 needs a more rapid feed of information in order to accurately track the moving object, and increases its transmission rate of the object's location information. Instead of broadcasting the information throughout the
three tiers, the message is directed from the RSTA vehicle to A-321 since only
A-321 needs such detailed information. This helps reduce network traffic by
transmitting high-rate updates between communicating vehicles (using net-
work relay points if appropriate) only when necessary.
With the increased rate of incoming information about the location of the
moving target, A-321 uses its PSE to analyze the situation, terrain, and other
environmental and logistical information to track down the target and then
snaps an image. The analysis of the image, however, has to be performed
elsewhere. The Collective Intelligence Module receives the image, fuses it
with the previous information about ObjX, and generates a new spot report,
indicating an image is available. The spot report is broadcast to all three tiers.
The GUI layer updates the visualization of ObjX with an image marker, adds
the image to the picture viewer, and updates the status in all tables.
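For concreteness, the fused spot report for ObjX might carry fields like these; the exact schema is our assumption based on the description above.

spot_report = {
    "object_id": "ObjX",
    "type": "unknown",          # later updated to "SA-13" by the operator
    "confidence": "low",        # unknown type implies low confidence
    "location": {"x_km": 12.4, "y_km": 3.1, "cep_m": 120.0},
    "heading": "east",
    "speed_kmh": 40,
    "detecting_sensor": "GSR",
    "image_available": True,    # set once A-321's picture is fused in
}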
Within the command cell, Sergeant Rahim, the intelligence manager, is
alerted to the incoming image and uses his Intel Viewer on the CSE to dis-
play the image. He recognizes that ObjX is clearly an SA-13 air-defense sys-
tem that does not appear to have experienced any damage. With a few mouse
clicks, he designates the target as an enemy SA-13 with no damage. As soon
as the identification is entered into the system, an update message is trans-
mitted to other decision support components in the network. Each system
that is displaying the unknown ObjX immediately gets an update with the
appropriate symbol of an SA-13. This event, in turn, prompts several reason-
ing processes across the system.
Using its onboard intelligence, the A-321 understands a fire mission is
planned in its area and that it is in danger of friendly fire. It immediately
uses its self-protection rules in the PSE and analyzes the terrain for a good
place to seek cover. Fortunately, A-321 is small and went undetected by the enemy, but it still takes every precaution to survive now that its task has been completed.
The CIM’s threat manager agent classifies the SA-13 as a high threat based
on the criteria set up earlier within the AMF by the effects manager, Ser-
geant Manzetti. The updated situation awareness is broadcast to all tiers and
platforms. Each user can see the friendly assets that the SA-13 is currently
threatening.
Captain Johnson knows what is about to happen and watches the situa-
tion extremely closely. Here is the moment when the automated tasking of
vehicles and the human decision making must come together. In an instant he
gets an alert that A-321 is in danger of friendly fire and is now seeking cover.
Watching his CSE, Johnson confirms that A-321 is well on its way to take
cover behind a nearby hill. In this case, no other friendly or neutral assets are
within the vicinity of the enemy SA-13.
Concurrent with the threat identification, based on preset criteria, the AGM calculates several attack recommendations, prioritizes them, and sends them to the command cell. The first choice on the list is a recommendation to fire a precision attack munition from the NLOS vehicle. The second choice is to fire the MRAAS from the LOS vehicle. Manzetti positively acknowledges the NLOS recommendation, and a fire request is sent to the NLOS vehicle. The NLOS vehicle accepts the request and carries out the attack—fires the missile. This update is disseminated to everyone in the network.
Some minutes later, the missile detonates. The fact that the missile has det-
onated is estimated by the BDAGM agent based on the munition’s distance,
trajectory, and speed. The detonation event triggers additional automatic
behaviors. UAV A-322 dedicated to BDA is automatically tasked by the
BDAGM agent to take a picture of the SA-13. A-322’s PSE uses the same
terrain analysis tools that A-321 did. However, in this case, the PSE applies
concealment criteria to the route generation because its self-protection rules
indicate that the SA-13 may not have been destroyed and could shoot it down.
This results in a concealed route that stays out of the line of sight of the SA-13,
and other known enemy assets, for as long as possible before popping up and
taking a picture.
Soon, A-322 arrives at the target and takes the picture. The information
is communicated across the tiers to the other components. Sergeant Rahim
receives the image, views it, and then updates the damage state of the SA-13 to "Damaged." The new state of the enemy target is broadcast across the tiers. Johnson leans back and announces to his team, "Great job, guys!"
To summarize: working in partnership with the BCSE system, the opera-
tor sets criteria for automatic behaviors, responds to visual cues, and updates
identifications. The system shares this information across the tiers and initi-
ates appropriate automatic tasking of assets. The system keeps the commander
well informed and enables him to focus on the high-level management of the
battle rather than the control of the assets.
Figure 3.10. The use of automated code generation minimizes development time and
maximizes code reuse.
We have been able to eliminate much of the extensive and error-prone analysis
and potential problems concerning rule interaction that often reveal themselves
in complex applications. The system can be adapted as the development team
learns about the domain and the application through experimentation.
Within the VDSF, the reasoning engine can be provided either by one framework-specific engine or by one of two third-party products that we can currently leverage: HaleyRules7 and Clips/R2.8
Using the VDSF, a new system is built by adding a new set of application-
specific rules and augmenting some of the system components rather than
by creating a new architecture from scratch. By using a common architec-
ture, we are able to exploit commonalities across applications. This approach
has proven to be robust and scalable and has resulted in significant cost savings.
The key challenge from a software development perspective is to create the
appropriate data models, symbolic framework, and an efficient way of detect-
ing changes in battlespace state when the number of state change events can
be very large. To mitigate the challenge, the core VDSF architecture provides
means for the following:
• Collecting the data needed to represent the environment (in our case, the battle-
space).
• Reasoning about the battlespace data.
• Detecting and responding to relevant changes in that data.
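In spirit, this amounts to forward chaining over the battlespace state. The toy loop below is our schematic of that idea, not HaleyRules or Clips/R2.

def run_rules(knowledge_base, rules):
    """Fire the action of every rule whose condition matches the world state."""
    return [action(knowledge_base) for condition, action in rules
            if condition(knowledge_base)]

kb = {"enemy_attack_range_km": 4.0, "route_distance_to_enemy_km": 3.0}
rules = [
    (lambda kb: kb["route_distance_to_enemy_km"] < kb["enemy_attack_range_km"],
     lambda kb: "generate new route around danger area"),
]
print(run_rules(kb, rules))  # ['generate new route around danger area']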
Figure 3.11. The architecture of the VDSF underlies most of the agents within the
BCSE.
• The DDI/DX layer handles the communication with each external device as well as the translation of the information from the device to the DSS. The DDI/DX provides a separation of the DSS system from a device—a failure of one device will not affect the DSS. It also ensures that the DSS proper does not have to provide a capability to communicate or provide protocol interaction with any device.
• A component within DDI/DX, the DDI is a device driver built to conform and sup-
port the communication protocol of a specific supported external device or system
(e.g., a GPS or Terrain Reasoner).
• Another component of DDI/DX is the DX. It converts data received from the DDI
into a normalized format for use by the Decision Support System Processor (DSSP).
Likewise, the DX converts data received from the DSSP to device-specific formats
and passes it through to the device-specific DDI layer.
• The Device Independent Interface (DII) presents a single messaging interface to
the DSSP, in effect hiding all device-specific data.
• The Transaction Processor manages all messages between the DX and the DSS
Reasoner (described next).
• The DSS Reasoner contains the knowledge base containing the world state and the
reasoning engine known as the Rules Engine.
• The Rules Engine applies the rules against the knowledge base (called running the rules), which leads to new inferences. Ultimately, the Rule Trigger Method, described below, receives a notification of a change in world state and takes the appropriate action.
• Rules Helper Methods call functions registered with the rules engine during rule
evaluation. For example, if an enemy asset moves, a rule may call the Terrain Rea-
soner to help determine if the enemy can see a given position. In this example, the
Terrain Reasoner helps the rule determine if a threat exists.
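The layering can be caricatured in a few classes; the GPS driver, its message format, and the class names are hypothetical stand-ins for real DDI/DX pairs.

class GpsDDI:
    """Device driver conforming to one specific device's protocol."""
    def read_raw(self):
        return b"$GPGGA,123519,4807.038,N,01131.000,E"

class GpsDX:
    """Converts device-specific data into the normalized DSSP format."""
    def normalize(self, raw):
        fields = raw.decode().split(",")
        return {"msg": "position", "lat": fields[2], "lon": fields[4]}

class DII:
    """Single messaging interface to the DSSP, hiding device-specific data."""
    def __init__(self, device_pairs):
        self.device_pairs = device_pairs  # list of (DDI, DX) pairs
    def poll(self):
        return [dx.normalize(ddi.read_raw()) for ddi, dx in self.device_pairs]

print(DII([(GpsDDI(), GpsDX())]).poll())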
Table 4.1. Examples of Level-1 SA in Command and Control.
Table 4.2. Examples of Level-2 SA in Command and Control.
As Figure 4.3 shows, there may be a significant gap between ideal SA (per-
fect knowledge) and that which is currently “known” by the system from all of
its available sensors and other inputs. By system knowledge, we mean not only
the information residing in an individual technical system (such as a radar or
command and control software), but that in the sum total of the technical
systems, people, processes, and operations that together form the basis for
command and control.
There may also be a gap between this level of system information that the
warfighter might possibly obtain and that which can be derived from the system
interfaces (available information). This gap may exist because some system infor-
mation may not be passed to the warfighter through the system interfaces—due
to limited network bandwidth, for example, or a failure of a subordinate to pass
on a needed report—or because the warfighter must take additional actions to
derive the information from the system (paging through menus and windows
to find information that may be obscured). An important goal for the develop-
ment of command and control systems is not only to raise the level of system
information, but also to minimize the gap been system information and avail-
able interface information through effective system design.
Finally, a gap can occur between the amount of information available at
the system interface and the SA that is finally formed in the mind of the indi-
vidual. There are a number of cognitive limitations that often act to limit SA,
as well as a number of external factors that can act to make situation awareness
difficult to attain.
Individual Limitations
People have a limited amount of attention they can direct toward gathering needed information and a limited amount of working memory that can be used to combine and process perceived information to form the higher levels of SA. Unless they are experienced and dealing with learned classes of situations (which helps them develop mental models and schema that allow them to circumvent these limits), these constraints restrict the SA that warfighters can form.
Table 4.3
Examples of Level-3 SA in Command and Control
Projected enemy COAs
Expected COA
Most dangerous COA
Projected effect of weather on enemy COA
Projected effect of weather on enemy equipment
Projected enemy unit size/actions
Projected enemy decision points
Projected effect of COAs on enemy vulnerabilities
Projected impact of friendly COAs on enemy COAs
Predicted reaction of population to friendly COAs
Projected civilian behavior
Projected effect of fires on enemy/civilians
Projected effect of weather on friendly COAs
Projected effect of weather on equipment
Projected effect of weather on terrain
Projected effect of weather on personnel
Projected effect of weather on infrastructures
Projected impact of weather on visibility
Projected impact of weather on trafficability
Projected impact of weather on ability to get air support
Projected timing of weather inversions
Projected impact of terrain on trafficability
Projected impact of terrain on visibility
Projected impact of terrain/weather on systems operations
Projected impact of terrain/weather on comm capabilities
Projected impact of terrain/weather on ability to get intel
Projected impact of terrain/weather on ability to get air support
Projected safety of deployment for assets
Projected effect of infrastructures on friendly COAs
Projected availability of friendly forces
Projected ease of implementation of COA
Projected availability of resources
Projected ability to minimize troop risk
Projected impact on enemy
Projected effect of COA on enemy plans/mission
Projected effect of COA on enemy workload
Projected effect of COA on enemy capabilities/ability to fight
Projected time required to carry out COA
Projected ability of plan to disrupt/counter enemy intentions
Projected risk associated with friendly COA
Projected time on route
Projected safety on route
Projected safety of shipments
Projected reliability of transportation mode
Projected time required to get item to site
Projected ability to get to location on time
Projected ability to sustain the assets
Projected ability of enemy to counterattack asset
Projected ability of assets to collect needed information
Projected availability of assigned assets
Projected ability to support units with COA
Projected usage of each item over time
Projected location of unit over time
Projected safety of units and logistics team
Projected time and ability to get items to units
Projected ability to achieve new supply plan
Perceptual Constraints
In today’s practice, much of command and control occurs in a relatively
stationary command post or tactical operations center (TOC). In the future, however, the military plans a much more mobile command and control: an on-the-move concept that distributes C2 activities and places them in conditions intertwined with activities in the battlespace.
Under many battlespace conditions, the warfighter must traverse widely
disparate terrain and deal with highly varied environmental conditions. Obstacles, noise, poor weather, limited visibility, and smoke may reduce the warfighter's ability to perceive the information he needs. Due to enemy actions, even
directly viewing a critical area may be impossible. Gathering the needed
information across a widely dispersed operation is a challenging activity that
takes considerable effort, particularly when the enemy may actively work to
conceal critical information or provide misinformation. These factors work
to directly limit Level-1 SA, and thus the higher levels of SA (comprehension
and projection), due to incomplete or inaccurate perceptions of environmen-
tal cues.
Stressors
Several types of stress factors omnipresent in C2 operations may nega-
tively affect SA. These include (a) physical stressors—noise, vibration, heat
and cold, lighting, atmospheric conditions, boredom, fatigue—and (b) social and psychological stressors.
The technologies carried by the warfighter can themselves limit SA. For instance, the use of night vision devices has been associated with decrements in other senses (e.g., hearing) that could reduce SA (Dyer et al. 1999). More serious effects may be produced by other devices (e.g., helmet-mounted displays) that interfere with the warfighter's vision, hearing, or attention (National
Research Council 1997). High levels of automation and decision aids are also
proposed and developed for C2 systems. These efforts should be conducted
with great caution. Warfighter SA can be negatively affected by the automa-
tion of tasks, which puts them “out-of-the-loop” (Endsley and Kiris 1995).
All of these issues lead to the need for a process that systematically identi-
fies warfighter SA needs and develops C2 systems that specifically promote
high levels of SA. Over the past two decades, a significant amount of research
has been focused on this topic, developing an initial understanding of the
basic mechanisms that are important for SA and of the design of systems
that support those mechanisms. Based on this research, the SA-Oriented
Design process has been established (Endsley, Bolte, and Jones 2003) to guide
the development of systems that support SA (Figure 4.5). This structured approach incorporates SA considerations into the design process, including SA requirements analysis, SA-oriented design principles, and SA design evaluation.
SA REQUIREMENTS ANALYSIS
To determine the aspects of the situation that are important for a particular
warfighter’s SA, one can use a form of cognitive task analysis called a Goal-
Directed Task Analysis (GDTA), illustrated in Figure 4.6. In a GDTA, the
analysis identifies major goals of each warfighter position, along with the
major subgoals necessary for meeting each of these goals. The analyst then
determines the major decisions that need to be made in order to meet each
subgoal. Then, the analyst delineates the SA needed for making these deci-
sions and carrying out each subgoal. These SA requirements focus not only
on what data the warfighter needs, but also on how that information is inte-
grated or combined to address each decision, providing a detailed analysis of
the warfighter’s SA requirements at all three levels of SA. Such an analysis is
usually carried out using a combination of cognitive engineering procedures.
Expert elicitations, observation of warfighter performance of tasks, verbal
protocols, analysis of written materials and documentation, and formal ques-
tionnaires have formed the basis for the analyses. The analysis is conducted
with a number of warfighters, who are interviewed, observed, and recorded
individually. The results are pooled and then validated overall by a larger
number of warfighters.
An example of the output of this process (Figure 4.7) shows the goal struc-
ture for a brigade logistics coordinator and the decisions and resulting SA
requirements analysis for the subgoal “project future supply needs of units.”
This analysis systematically defines the SA requirements (at all three levels of
SA) for effectively making the decisions required by the warfighter’s goals. The
analysis does not indicate a prioritization among the goals (which can vary over
time) or that each subgoal within a goal will always be active. Rather, in prac-
tice, a warfighter juggles between subsets of goals, based on current priorities.
The analysis also strives to make as few assumptions about the technology
as possible. How the information is acquired is not addressed, as this can
vary considerably from person to person, from system to system, and from
time to time. Depending on a specific case, the information could be acquired
through system displays or verbal communications with other warfighters,
or it could be generated by the warfighter himself. Many of the higher-level
SA requirements are generated in the minds of warfighters today, but that
may change in future as intelligent agents and other forms of automation are
introduced. By focusing on ideal SA, the GDTA forms the basis for system
design; it provides a delineation of the information that the system should try
to provide while imposing the least workload on the warfighter.
Figure 4.7b. Analysis for the subgoal “project future supply needs of
units.”
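Because the product of a GDTA is hierarchical, it is convenient to picture it as nested data. The entries below loosely paraphrase the logistics subgoal of Figure 4.7; the specific requirement strings are placeholders, not the figure's full content.

gdta = {
    "goal": "Provide logistics support to the brigade",
    "subgoals": [
        {
            "subgoal": "Project future supply needs of units",
            "decisions": [
                {
                    "decision": "Will supplies on hand sustain the planned COA?",
                    "sa_requirements": {
                        "level_1_perception": ["current stock levels",
                                               "unit locations"],
                        "level_2_comprehension": ["supply usage relative to plan"],
                        "level_3_projection": ["projected usage of each item "
                                               "over time"],
                    },
                }
            ],
        }
    ],
}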
SA DESIGN EVALUATION
Many concepts and technologies are claimed to enhance SA in command
and control and military operations in general. Prototyping and simulation of new technologies, new displays, and new automation concepts are extremely important for evaluating the actual effects of proposed concepts within the
context of the task domain and using domain knowledgeable subjects. If SA is
to be a design objective, then it is critical that it be specifically evaluated during
the design process. Without this step, it will be impossible to tell if a proposed
concept actually helps SA, does not affect it, or inadvertently compromises it in
some way. A primary benefit of examining system design from the perspective
of warfighter SA is that the impact of design decisions on SA can be objectively
assessed as a measure of quality of the integrated system design when used
within the actual challenges of the operational environment.
SA measurement has been approached in a number of ways (Endsley
and Garland 2000). A review of the advantages and disadvantages of these
methods can be found in Endsley (1996) and Endsley, Bolte, and Jones (2003). In
general, direct measurement of SA can be very advantageous in providing
more sensitivity and diagnostic value in the test and evaluation process.
Research on brigade staff positions (Bolstad, Riley, Jones, and Endsley 2002) found that the ways in which the same data are combined to form the higher levels of SA can vary significantly, based on the goals that are pertinent to a member's position. For example, they found all positions require knowledge of terrain information (see Table 4.4 for terrain SA
requirements); however, the required level of detail and the way in which the
information is used varies considerably between staff positions. The majority
of differences in SA requirements appear in how the various positions need
to comprehend and make projections (Levels 2 and 3 SA) based on the same
Level-1 data. For example, the intelligence and operations officers are pri-
marily concerned with how the terrain affects friendly as well as enemy troop
movements, assets, and capabilities. The logistics officer and engineer are
more concerned with how terrain affects vehicle movements and the place-
ment of obstacles and assets. By understanding not only what data each staff
position needs, but also how that information will be used by each position,
system displays can be designed that provide only the detail level needed for a
particular position without presenting unnecessary information.
The same research also shows how the shared SA requirements within the
brigade combat team can be identified via the GDTA. Table 4.5 shows some
of the shared information requirements for the intelligence and logistics offi-
cers. The analysis of shared SA items indicates that the two positions do not
share many specific details. Instead, they share general information regarding
troops, infrastructures, and courses of action. While they each have many dif-
ferent uses for this information, they also make a number of different future
projections (Level-3 SA). Interestingly, these types of projections are rarely
conveyed in display design but instead must be communicated verbally by
team members for successful coordination in most systems. Unfortunately,
teams are often poor at sharing high-level SA requirements. Instead, they
communicate only low-level data (Level-1 SA) with the (often false) expecta-
tion that it will be interpreted the same way by other team members (Endsley
and Robertson 2000).
Knowledge of these shared SA requirements can be used to develop sys-
tems to increase shared SA between team members, which will be increas-
ingly important as future operations are likely to be more distributed.
Table 4.4. SA Requirements Associated with Terrain Information Differ Depending on the Staff Position: S2 (Intelligence), S3 (Operations), S4 (Logistics), and Engineer, at SA Levels 1, 2, and 3 (Bolstad, Riley, Jones, and Endsley 2002).
Table 4.5. Shared SA Requirements for Intelligence and Logistics Officers, after Bolstad, Riley, Jones, and Endsley (2002).
Training together not only builds team skills, but also supports the social processes
that impact team performance, such as the development of trust and under-
standing between the members. Network-enabled warfare calls for the abil-
ity to “leverage the intellect, experience, and tactical intuition of leaders at
multiple levels in order to identify enemy centers of gravity and conceptualize
solutions, thus creating a collective genius through accelerated collaborative
planning” (U.S. Army 2001). The expressed intent is to bring together rapidly
forming teams with the skills, background, and experience to offer multiple
perspectives on a problem for the purpose of collaborative planning.
These ad hoc teams would likely be selected based on the specific needs of
the situation under consideration, would pull members from multiple military
specialties and echelons, and often would incorporate joint forces or multi-
nation team members. Such teams would not have the benefit of combined
training and background, nor would the time that is necessary to establish
relationships built on mutual trust and understanding likely be afforded to
these teams. Thus, the presence of ad hoc teams adds an additional level of
complexity to the development of C2 systems.
Experience indicates that ad hoc teams, a frequently occurring phenomenon, face a number of significant challenges in developing a shared understanding of the situation upon which to base their actions.
DATA COLLECTION
Effective experimental design and setup are critical to the successful eval-
uation of the experimental metrics. However, the quality and depth of the
resulting experimental findings are ultimately linked to the quality and depth
of the collected data. We were fortunate to have extensive data collection
capabilities in our experimental program.
Figure 5.1 shows an overview of the data collection approach. In the top left
portion of this figure, we depict the sources of data, particularly automated
loggers. These loggers collect virtually every piece of information flowing
through the network for each of the software tools. The data contained in these
log files is comprehensive, as most of the files doubled as debugging tools
for the software developers. Using these data, we were able to explore new
avenues of analysis as we developed emerging insights and extended our analy-
sis into areas that could not have been predicted prior to the experiment. The
downside of using this information is the sheer magnitude of the data col-
lected, combined with the lack of standardization among the data files for the
different tools. Therefore, to use data from the various tools in the analysis,
we developed parsers to convert each file into relational database tables. Ulti-
mately, these data sources enable us to compare ground truth, sensor detec-
tions (including those by human eyes), fused information, and perceived truth.
These automated logs are pivotal to the analysis tools described later in this
and subsequent chapters.
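Each parser was of the routine kind suggested below; the log format and table schema shown are invented examples, since each tool's log format differed.

import csv
import sqlite3

def load_detections(log_path, db):
    """Convert one tool's log lines into rows of a relational table."""
    db.execute("CREATE TABLE IF NOT EXISTS detections "
               "(time_s REAL, sensor TEXT, object_id TEXT)")
    with open(log_path, newline="") as f:
        for time_s, sensor, object_id in csv.reader(f):  # e.g., "12.5,GSR,ObjX"
            db.execute("INSERT INTO detections VALUES (?, ?, ?)",
                       (float(time_s), sensor, object_id))
    db.commit()

db = sqlite3.connect(":memory:")
# load_detections("rsta_detections.log", db)  # hypothetical log file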
The top right of Figure 5.1 shows the data that we collected from the
command-cell operators. This includes video and audio recordings of each
operator during the run as well as recordings of the after-action reviews and
planning sessions. In addition to these records of the operator interactions, we collected information on how the operators perceived the tools and the battle progress. Our approach to this evolved over time. In early
experiments, we administered surveys to collect feedback from the operators.
Unfortunately, the quality of responses varied dramatically between individu-
als, and there seemed to be a decrease in the quality of responses as each
experiment wore on. In addition, the surveys were necessarily generic and
could not be tailored to specific events for a given run.
In later experiments, we replaced the host of surveys with a single demo-
graphic survey conducted at the start of the experiment. The majority of
operator-related information in these later experiments was gathered in
focus groups. At the end of each trial run, we conducted small-group inter-
views to elicit the operators’ perceptions of key events in the battle. Depending
on the events of interest, we arranged the focus groups by cell (e.g., company
staff, battalion staff) or by staff position (e.g., intelligence managers, effects
managers, etc.). By having the participants elaborate on critical situations in
the recent trial run, we obtained immediate recollections that could be corre-
lated with actual battle events. Based on the input from the group interview, we
identified an individual decision maker to interview in more detail. In this one-
on-one interview, we discussed a single key event in depth: what the operator
knew at the time, what decisions were made, what information the decisions
were based on, how a less-experienced person might have reacted in a similar
situation, and what additional information might have affected the decision. All
interviews were recorded, and analysts published notes from the interviews
that we used during the subsequent analysis phase.
The bottom portion of Figure 5.1 shows a particularly important element
of the data collection process—the analytic observers. In this complex free-
play experiment, the half-life of understanding the context of key events is
very short. We mitigated this by providing analytic observers with tailorable
workstations, each comprising tools that enable the human observer to under-
stand and record as much of the battle context as possible in real time. During
the experiments, up to 20 analytic observers were stationed at these tailored
workstations. About half of the observers focused on a command cell’s under-
standing of the battle situation and the collaboration within the cell. Their
panels displayed a view of the active tools used by the commander and battle
managers of a given cell. The other half of the analysts focused on collaboration
and coordination between command cells. Their panels replicated the active
tools of all operators who specialized in a given function. For example, one
set of analyst displays considered collaboration between commanders, and
another focused on effects managers. In addition to their tailored displays,
each panel also contained a video view of the operators and a display of the
actual ground truth status of all Red and Blue forces. Together, the displays
allowed the observers to maintain awareness of ground truth, perceived truth,
the commander’s situation awareness, how the cells collaborated, and how the
commanders and staffs made decisions.
Additionally, we created a database application that enabled each analyst to
enter observations in real time. We designed the application to facilitate rapid
data entry and thereby help focus the data collection. An example collection
form is shown in Figure 5.2.
Based on the early experiments, we realized that it was not reasonable to
expect a single observer to effectively collect on all aspects of the battle. There-
fore, we made a conscious effort to identify data elements that could be col-
lected postexperiment from the automated data loggers and to not duplicate
the collection of that information via human observers. Further, we staffed
each functional area (e.g., intelligence, effects) and each unit (e.g., CAT, CAU)
with two observers. The first observer was responsible for selected counts—
recording each time certain events occurred (e.g., how often the intelligence
manager collaborated with the effects manager). The second observer was
responsible for recording free-form qualitative observations that captured the
context of key events.
SITUATION AWARENESS—TECHNICAL
The rich data sets collected during the experiments gave us significant flex-
ibility to explore emerging concepts, develop associated metrics, and relate
analytic results to combat outcomes. Our quantitative data included both the
information available to the commanders (perceived truth) and ground truth
states of all battlespace entities. Using this information, we devised the Situ-
ation Awareness—Technical (SAt) scoring method to evaluate the quality and
scope of information collected by the units over time.
Our primary measure of SAt reflects the quantity and accuracy of relevant
information available to a command-cell member over time. In its basic form,
the SAt score is a ratio of the information available to the information required.
This ratio is different for each commander at each echelon because information
needs vary with the size and contents of the areas of responsibility, the lethality
and range of weapon systems, and the mission at hand. While the complexities
of battle command are many, we simplify the scope to include three fundamen-
tal components for each enemy entity: knowing where the enemy is (location),
what the enemy is (acquisition level), and how healthy the enemy is (state).
In the SAt score, we did not consider information about friendly forces or
terrain because in our experiments the operators consistently had very good
information in those areas. The SAt model also did not include neutral enti-
ties. Although neutrals added complexity and additional information-gathering
requirements to the scenarios, the command cells typically did not dedicate
sensors to finding civilians in the battlespace. That said, the impact of
civilians on situation awareness can be significant, and a more elaborate SAt
of enemy entities or a decreased score when a neutral entity is incorrectly
identified as an enemy.
The evaluation of the SAt score was possible in our experiments because
every spot report (report about a detection of an entity) included a unique iden-
tifier that allowed us to relate unambiguously a detected entity to the actual
entity. This information was not available to the commander or his staff but
was available for analysis.
Of the three components of situation awareness considered in our model, the
awareness of where the enemy is located is perhaps the most tangible. The loca-
tion component of the SAt score is a measure of the accuracy of the perceived
truth location of a given entity as compared to the ground truth location. For
example, an inaccurate sensor reading or a target that moves after detection
reduces the accuracy of the location component. The acquisition component is
scored at one of four levels:
• Detect—a sensor perceived an object of possible military interest but did not rec-
ognize it otherwise.
• Classify as tracked, wheeled, or biped—the sensors (and processing systems) clas-
sified the object according to its mobility class (e.g., tracked vs. wheeled vehicle).
• Classify as enemy or neutral—the entity was classified as enemy based on radio
signal processing. Because neutral entities did not emit radio, and all information
about the Blue force was known, the classification as friendly was not included in
this score.
• Recognize/identify—the entity’s specific type or model was determined (e.g., T-72
vs. M1). This provides the commander with enough information to fully understand
the threat of the detected entity.
The acquisition scores represent how correctly the command cell (or the
fused spot reports) acquired and classified an enemy entity. For example, if
one of the cell members correctly identifies a tracked enemy vehicle based
on a sensor picture, the score increases from “classify track” to “recognize/
identify.” However, if the command cell incorrectly identifies the same target,
the score remains “classify track” because it is the most correct representation
of the entity available to the commander and staff.
Simply knowing the location and identification of an enemy entity is insuf-
ficient. For example, if the commander engages a target beyond the line of sight
of his entities, he needs to know whether he effectively disabled that target
before proceeding through that area. We model this need to know the state
of the enemy (e.g., whether an entity is alive, dead, or damaged) as the third
critical input to the SAt score. The state component of SAt is a measure of
the accuracy of the perceived state knowledge compared to the actual state of
the entity. For example, incorrectly marking an entity that has actually been
killed as still alive may lead to expending additional scarce resources to reen-
gage. Likewise, incorrectly marking a healthy entity as dead may have lethal
consequences when the friendly force moves within range of the entity.
To determine the state score, we first evaluate how much of each enemy
entity’s mission is dedicated to moving, firing, and communicating (spotting
and reporting). This evaluation is roughly based on the capabilities of the
entity and how the enemy commander typically uses the platform. The cor-
rectness of an individual state assessment is then calculated by summing the
correctly identified components of combat function. For example, a battle tank
may have 35 percent of its function as moving, 55 percent of its function dedi-
cated to firing, and 10 percent of its function as reporting or communicating.
If the entity is perceived to be “total kill,” but the actual state is “firepower
kill” (and therefore also communications kill), the assessed state is correct
for the fire function and the communication function but incorrect for the
movement function. Therefore, the state score of the entity is 55 percent + 10
percent = 65 percent.
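The state-score calculation can be made concrete with a short Python sketch that reproduces the worked example above. The encoding of kill states is an illustrative assumption; only the function shares and the example outcome come from the text.

# Combat-function shares for the battle tank in the example above.
FUNCTION_SHARE = {"move": 0.35, "fire": 0.55, "communicate": 0.10}

# Which functions each state disables (an assumed encoding); per the text,
# a firepower kill also implies a communications kill.
DISABLED = {
    "alive": set(),
    "firepower_kill": {"fire", "communicate"},
    "mobility_kill": {"move"},
    "total_kill": {"move", "fire", "communicate"},
}

def state_score(perceived, actual, shares=FUNCTION_SHARE):
    # Sum the shares of the functions whose status was assessed correctly.
    return sum(share for fn, share in shares.items()
               if (fn in DISABLED[perceived]) == (fn in DISABLED[actual]))

# Perceived "total kill" vs. actual "firepower kill": fire and communicate
# agree, movement does not, so the score is 0.55 + 0.10 = 0.65.
print(state_score("total_kill", "firepower_kill"))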
The three component scores (location, acquisition, and state) are evalu-
ated for each entity in the opposing force and then combined to form an
overall score for a particular side’s knowledge of its opponent. The formula
used to support this evaluation is shown in Figure 5.3. It produces a score
between 0.0 and 1.0, with 0.0 indicating a complete lack of useful informa-
tion, and 1.0 indicating the possession of all required information. A score of
1.0 would imply that at a particular point in time the commander has access to
full knowledge about the location, type, and state of all enemy entities within
his area of interest. We introduced coefficients into the formula to enhance
its utility:
• The weights, W, allow the analyst to emphasize the three components of the com-
bined score to different degrees. Setting a weight to zero eliminates the contribu-
tion of that measure from the score. We have applied the following selection of
weights: location (Loc) was weighted at 0.45, acquisition (Acq) was weighted at
0.45, and state (Sta) was weighted at 0.10. This initial selection of weights reflects
a concern that the difficulty experienced by the operators in assessing the impact
of the application of effects could dominate the portrayed values of SAt. Sensitiv-
ity analyses conducted on the data available from the experimental series indicates
that the general trends of the curves are not sensitive to moderate changes in the
weighting values.
• The criticality coefficient, c, enables the analyst to account for certain entities that
might be of more value than others, regardless of location. For example, an air
defense platform may be more critical to find, identify, and eliminate than a supply
truck.
• The decay factors, d, were used in early experiments to account for the loss in value
of information over time. In the simulation, information was made available to the
operators through internal reports after each sensing event. The age of the infor-
mation is measured as the elapsed time since the last report of a particular target.
The information is of most value immediately after a report and begins to lose value
from that point forward. In later experiments, this decay component was replaced
with a more accurate representation of the value of information based on constantly
updated location accuracy information (discussed above). When an entity moves
beyond actionable position information, its track is lost, and both the location and
acquisition components of the score go to zero.
The SAt formula involves summation over a set of entities. But what is
included in that set of entities? The simplest possibility is to include all enemy
entities deployed in the battlespace. However, it is often more meaningful to
include only particular types of targets or targets in a specific geographic area.
Our SAt model allows for such specifications. For example, we used this flexi-
bility to explore SAt scores when applied to those entities that the commander
defined as most dangerous targets (MDT) or high-payoff targets (HPT). The
typical analytic package produced for each experimental run included the SAt
of all enemy entities, the SAt of the MDTs as defined by the commander, the
SAt of the HPTs, and the enemy’s SAt of friendly forces.
In analyzing each trial run, recomputation of the SAt score is triggered by
a number of activities such as the receipt of a spot report, entity movement,
fire missions, or an entity state change. Because such activities occur very
frequently, the resulting graph is a nearly continuous curve that describes the
evolution of the score over time.
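Because the combining formula itself appears only in Figure 5.3, the following Python sketch is merely an approximation consistent with the description above: a criticality-weighted combination of the three component scores, using the reported weights and normalized to the 0.0-to-1.0 range.

from dataclasses import dataclass

W_LOC, W_ACQ, W_STA = 0.45, 0.45, 0.10  # weights reported in the text

@dataclass
class EntityAssessment:
    loc: float                # location accuracy score in [0, 1]
    acq: float                # acquisition level score in [0, 1]
    sta: float                # state score in [0, 1]
    criticality: float = 1.0  # coefficient c: relative value of the entity

def sat_score(entities):
    # Criticality-weighted mean of the weighted component scores.
    if not entities:
        return 0.0
    num = sum(e.criticality * (W_LOC * e.loc + W_ACQ * e.acq + W_STA * e.sta)
              for e in entities)
    den = sum(e.criticality for e in entities)  # weights sum to 1.0
    return num / den

# Restricting the entity set (e.g., to the commander's high-payoff targets)
# amounts to passing only those assessments to sat_score().
print(sat_score([EntityAssessment(0.9, 1.0, 0.65, criticality=2.0),
                 EntityAssessment(0.0, 0.0, 0.0)]))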
SENSOR COVERAGE
A command cell obtains its information from sensor reports (including
human warfighters’ reports). We found sensor coverage to be the key opera-
tional factor affecting the SAt score. Understandably, knowing the status and
capability of available sensors is crucial to the commander.
With the CSE, sensor detections immediately populated the COP to give
the commanders a sense of the battlespace. However, this immediate dis-
play of information also often had the unintended consequence of leading
the commander to mistakenly conclude that an area absent of detections was
devoid of enemy entities. To reduce the risk of being surprised by a signifi-
cant enemy force, the command cell had to understand how effectively an
area had been covered with sensors. However, the absence of detections can
also contribute positively to situation awareness. For example, suppose the
commander directs certain sensors to observe an area, and the sensors do not
detect anything. Knowing that the area is void of detections is very useful,
assuming of course, that the lack of detections is due to the absence of enemy
entities and not due to inadequate sensor coverage.
To explore how effectively the commanders and staffs used their sensors to
cover key areas, we developed a tool that examined the quality of sensor cover-
age across the battlespace. This tool enabled us to consider the commander’s
level of confidence that an area void of detections on his visual display was, in
fact, void of enemy entities.
To compute the sensor coverage quality score, we first define one or more
regions of the battlespace as critical areas of interest for the unit. Each of
these areas is then given an importance score, and a regular grid is superim-
posed over these areas, as depicted in Figure 5.4. We then evaluate each grid
cell (as described below) and compute an aggregate score based on individual
cell scores and the importance of each cell.
Our model for computing the sensor coverage quality accounted for a
number of factors; an illustrative computation follows the list:
Sensor mix—Different sensor types have different capabilities and are often much
more effective in combination than they are alone. For example, some sensors can
only detect moving targets, while others can only detect stationary targets. Sepa-
rately, either one provides some amount of information about the area, but the
combination is more effective than the sum of the parts.
The Hunt for Clues 129
Time value of information—In addition to the effectiveness of the sensors that have
covered an area, it is important to account for how much time has passed since the
area was covered.
Time of coverage—The longer a sensor covers an area, the more effective it is at
detecting entities within an area. There are several reasons for this: a stationary
target may begin moving, creating possible detection opportunities, or an entity
that was out of sensor range may move into range.
Number of times covered—Spot-mode sensors do not cover wide areas within a single
time increment but look at a localized region. Therefore, for spot-mode sensors, an
important parameter to consider is the number of passes the sensor makes over a given
area. In this case, the effective coverage increases the more times an area is covered.
Distance from sensor—Sensors tend to provide more accurate and reliable detections
at closer ranges. They are nearly ineffective at their extreme detection range.
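A minimal Python sketch of this grid-based score follows. The decay constant, pass-count saturation, and distance falloff are illustrative assumptions rather than the calibrated parameters of the experimental tool.

def cell_score(effectiveness, age_s, passes, dist, max_range,
               half_life_s=600.0):
    # Quality of coverage for one grid cell, in [0, 1].
    decay = 0.5 ** (age_s / half_life_s)        # time value of information
    dwell = 1.0 - 0.5 ** passes                 # repeated passes help
    falloff = max(0.0, 1.0 - dist / max_range)  # weak at extreme range
    return effectiveness * decay * dwell * falloff

def coverage_quality(cells):
    # Importance-weighted aggregate over (importance, cell_score) pairs.
    total = sum(imp for imp, _ in cells)
    return sum(imp * s for imp, s in cells) / total if total else 0.0

# Example: a critical cell swept twice five minutes ago at half sensor range,
# plus a low-importance cell that is barely covered.
s = cell_score(effectiveness=0.8, age_s=300, passes=2, dist=2.0, max_range=4.0)
print(coverage_quality([(3.0, s), (1.0, 0.1)]))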
In order to explore how the rate of change in SAt correlates with the effec-
tiveness of sensor coverage, we plotted the quantitative measure of sensor
coverage against the SAt curves. This comparison helped reveal the primary
reasons why SAt sometimes grew slowly or stayed relatively constant.
Figure 5.5 shows an example of such an analysis. During this run, the SAt
growth followed a fairly typical trend of rapid initial growth due to the initial
intelligence feed from higher headquarters and due to sensors coming online,
followed by a relatively flat period before the Blue unit began ground opera-
tions and then a rapid growth as the ground forces moved into enemy terri-
tory and found enemy targets at close range with sensors or human vision.
Two sensor coverage curves are shown in the lower portion of Figure 5.5: the
darker line represents coverage of all areas beyond the initial line of departure,
and the lighter line represents the areas most critical for mission success. The
commander and analysts jointly identified these critical regions. Together,
these charts indicate that after an initial surge of intelligence information,
there were few new detections because no new area was being covered by the
sensors. As the ground forces began maneuvering, the sensor coverage quality
increased, and new information became available to the commander.
In addition to providing the sensor coverage measurements and graphs,
the tool’s graphical interface shows analysts the positions of Red and Blue
assets over time, identifies which sensors make detections, indicates differ-
ences between perceived location and actual location, and displays Red and
Blue attrition over time (see Figure 5.6).
SITUATION AWARENESS—COGNITIVE
Although SAt is a relevant measure of the information available to a commander,
situation awareness ultimately occurs in the mind of the commander: “Technology
can enhance human capabilities, but at the end of the day . . . we can have ‘per-
fect’ knowledge with very ‘imperfect’ understanding” (Brownlee and Schoo-
maker 2004). It is the commander who perceives, categorizes, and synthesizes
the available information into a complete picture that bridges the three levels
of situation awareness discussed in the previous chapter. In general, Situation
Awareness—Cognitive (SAc) has a complex relation to SAt. Unfortunately, mea-
suring how effectively the commander understands the battle situation is not
as simple as developing database queries and scoring algorithms. Throughout
this experimental program, we searched for ways to understand what was in the
commander’s mind and how well he understood the tactical situation.
At the conclusion of battles in the early experiments, we asked each com-
mander to assess the level of situation awareness that he achieved during
that battle on a scale from 1 to 10. This retrospective, subjective assessment
is inherently biased and is strongly influenced by the surveyed individual’s
assessment of the recent battle outcome. Because these surveys were con-
ducted after the run was complete, and the commander knew how effective
his unit had been, there was an artificially strong correlation between the
commander’s self-assessed SA and unit success in the battle. This postex-
periment self-assessment often contradicted what the commander said dur-
ing a run. An example of this postexperiment bias is shown in Figure 5.7. In
this run, the commander’s verbalizations indicated a severe lack of situation
awareness regarding the critical northern avenue of advance, yet he rated his
overall situation awareness very high because the unit eventually achieved a
clear victory.
During these early experiments, we were fortunate to have a commander
who spoke freely about his current thoughts and perceptions of the battlespace.
At times, the commander addressed his thoughts directly to specific cell
members, while at other times, the actual target of the discourse was unclear.
These verbalizations contained critical information about the commander’s
current understanding of the battlespace—understanding that analysts parsed
and catalogued. In early experiments, we also administered brief in-run surveys,
expecting that this would give us insight into the instantaneous state of awareness at
given points in the battle. Unfortunately, because of the rapid pace of the
simulated battle, command-cell operators were reluctant to take their eyes
off the screen even for the limited amount of time (less than one minute)
required to complete a survey, and the quality of survey responses reflected
the fact that the participants viewed these surveys as a distraction from
their primary duty of fighting the battle. In later experiments, we aban-
doned the standardized surveys altogether. Instead, we relied on three
other techniques.
First, we changed postrun surveys into postrun interviews to allow the ex-
change to be tailored to emerging trends or specific events from a completed
battle. This technique also enabled us to maintain a more consistent quality
of information. These interviews were based on the questioning framework
of the Critical Decision Method (Klein, Calderwood, and MacGregor 1989).
Figure 5.7. An example in which the Blue commander exhibited poor cognitive
situation awareness, SAc, in spite of high SAt available to him.
This process identifies one or more critical decisions in a run and explores the
commander’s thinking at the time of the decision. By limiting the focus and
not asking the commander subjective questions, this technique minimizes the
problems encountered with postrun surveys.
Second, we made extensive use of dedicated observers to study and analyze
decisions and verbalizations obtained from squad leaders, platoon leaders, and
commanders in which they discussed their perception of the battle and the envi-
ronment. The experimental facilities gave the observers access to all operators’
screens and communications. Using these information feeds and customized
collection tools, the observers developed an analytically rich data set.
Finally, we employed a more formal structure for the periodic “commander’s
read”—the verbal report on the commander’s assessment of his situation and
the enemy situation. Unlike our earlier attempts to encourage the commander
to speak nearly continuously, we now requested that the commander give a
verbal situation “read” at key points in the battle. We also provided the com-
mander with an outline that included his assessment of friendly and enemy
troops, and an indication of whether or not the mission could be completed on
schedule.
Both during and immediately after each experimental run, analysts recorded
the commander’s reads and qualitatively assessed their correctness as com-
pared with ground truth. This provided a subjective measure of cognitive sit-
uation awareness (SAc) as it was expressed in the commander’s reports and in
dialogues with other operators (Figure 5.8). Additionally, the observers were
aware of the actions and intent of the Red commander and were able to take
this information into consideration when making their assessments.
Each commander’s SAc was assessed by observers on a Green, Amber, and
Red scale for awareness of the Red forces and Red plans; his own forces; and
his own plan status.
The foundation for much of the analytic effort in the latter experiments
was the process trace (Woods 1993) that focused around key events (e.g., a
decisive battle, a missed decision, or an effective and timely decision). After
each run, analysts identified one or more key events based on their relative
battle impact and the commander’s cognitive effort, compiled all available
information for those events, and pieced together a detailed storyboard that
informed other aspects of the analysis.
In order to relate information availability to situation awareness, these indi-
vidual analytic results were plotted over time along with the relevant SAt curves.
This comparative examination led to the development of a new metric reflect-
ing the cognitive environment in the cell, the battle tempo.
BATTLE TEMPO
One of the primary inhibitors to developing situation awareness is the
tempo of operations in a battle. At times of peak activity, a commander is
often absorbed in the details of the moment and fails to comprehend the
bigger picture. We saw situations in which the commander made nearly con-
tinuous verbal observations about details he was seeing on his screen but
never synthesized that information into a coherent picture. The common
approach was to watch the screen for changes and then react to those
changes. All commanders in our experiments exhibited this behavior to
some extent, and the tendency became more pronounced as the tempo of
operations increased.
To better analyze this trend, we introduced a measure of battle tempo—the
frequency of battle-relevant events that influence a command cell. This met-
ric gives an indication of external cognitive factors that are likely to impact the
commander’s ability to process information and act in a timely manner. The
following events are available in either the log files or the observer database
and are used to quantify the battle tempo score; an illustrative computation
follows the definitions below:
Tempo = W1 · FD/C1 + W2 · SD/C2 + W3 · CI/C3 + W4 · GT/C4 + W5 · FT/C5 + W6 · BA/C6
Where:
Wi = A weighting factor.
Ci = A normalizing factor.
FD = First detections of enemy entities per unit time.
SD = Subsequent detections of enemy entities per unit time.
CI = Collaborations initiated by the commander per unit time.
GT = General taskings initiated from the cell per unit time.
FT = Fire taskings initiated from the cell per unit time.
BA = Blue entities lost per unit time.
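A minimal Python rendering of this metric is sketched below; the weights and normalizing factors shown are placeholders, since the values used in the experiments are not reproduced here.

# Placeholder weights W and normalizing factors C for the six event types.
W = {"FD": 1.0, "SD": 0.5, "CI": 1.0, "GT": 1.0, "FT": 1.0, "BA": 2.0}
C = {"FD": 5.0, "SD": 10.0, "CI": 4.0, "GT": 3.0, "FT": 3.0, "BA": 1.0}

def battle_tempo(rates):
    # rates maps each event type to its count per unit time.
    return sum(W[k] * rates.get(k, 0.0) / C[k] for k in W)

# Recomputing over a sliding time window yields the tempo curve for a run.
window = {"FD": 3, "SD": 12, "CI": 2, "GT": 1, "FT": 4, "BA": 0}
print(battle_tempo(window))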
In a manner similar to the SAt curves, calculating an instantaneous battle tempo
score repeatedly during a run produces a curve that reflects changes in cognitive
load over time. Example curves are shown for three echelons in Figure 5.9.
COLLABORATIVE EVENTS
By plotting multiple metrics—SAt, SAc, sensor coverage, and battle tempo
as a function of time—in one chart, we are able to visually explore relationships
between the metrics and underlying phenomena. We call such plots stacked
charts. An example of a useful stacked chart is shown in Figure 5.10. This chart
focuses on the CAU-1 SAt and battle tempo and was useful in the analysis of
that unit during Run 8 of Experiment 6. This stacked chart highlights three key
events and four critical decisions (denoted with large stars in the top section of
the graphic). In Event 1, four of the primary information gathering units, the
Unmanned Aerial Vehicles (UAVs), were lost to enemy fire early in the run.
Although the commander recognized the loss of these units, he did not system-
atically consider the effect of this loss on his Reconnaissance and Surveillance
(R&S) plan. The second key event was the asynchronous maneuver of the two
CAU units. The lack of effective coordination between the units allowed the
enemy to strike against the attacking forces. This led to the third key event—
the destruction of CAU-1 and its failure to achieve the mission objectives. Even
after this impact on force strength, the higher-echelon commander (CAT CDR)
pressed on using the current plan instead of systematically considering the capa-
bility of the remaining force. A similar product could be generated with respect
to any command cell included in our area of analytic focus.
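A stacked chart of this kind is straightforward to produce with standard plotting libraries. The following matplotlib sketch uses placeholder curves; in practice each series would come from the logged metrics described earlier.

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 180, 361)                # minutes into the run
sat_blue = np.clip(0.20 + 0.004 * t, 0, 1)  # placeholder SAt curves
sat_red = np.clip(0.10 + 0.002 * t, 0, 1)
tempo = 2.0 + np.sin(t / 15.0) ** 2         # placeholder battle tempo

fig, (ax_sat, ax_tempo) = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
ax_sat.plot(t, sat_blue, label="Blue SAt of Red")
ax_sat.plot(t, sat_red, label="Red SAt of Blue")
ax_sat.set_ylabel("SAt")
ax_sat.legend()
ax_tempo.plot(t, tempo, color="gray")
ax_tempo.set_ylabel("Battle tempo")
ax_tempo.set_xlabel("Time (minutes)")
fig.suptitle("Stacked chart: shared time axis for visual correlation")
plt.show()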
Stacked charts are particularly helpful when used in conjunction with pro-
cess traces. A process trace produces a detailed chronicle of how an incident of
interest came about. With the process-tracing methodology, we can map out
how an incident unfolded, including available cues; what cues were noted by
operators; and the operators’ interpretation of those cues in both the immedi-
ate and larger contexts. Process tracing helps to link collaboration to changes
in situation awareness and to connect situation awareness to decision making
with a focus on the operators and their use of the battle-command system.
A stacked chart typically shows four elements:
• Collaboration events across and within the command cells in the analytic focus
• Observer assessments of the commanders' cognitive situation awareness (SAc)
• Select SAt curves that may include single or multiple curves for both Red and Blue
commanders
• Battle tempo
Figure 5.11 is a detail of the top chart in the stacked chart. This view details
the observer database entries for collaborations that occurred across, and
internal to, each of the command cells included in the analytic focus. The
vertical axis contains the list of operators by cell. A blue diamond in the chart
indicates a participant in a collaboration, while a pink square indicates the
initiator of the collaboration.
The second chart in the stack (Figure 5.8) reflects assessments of the com-
manders’ cognitive situation awareness (SAc) as expressed in the commander’s
reads and in his collaborations with other operators. These subjective assess-
ments were made by observers based on comparisons between individual com-
manders’ expressions and the ground truth situation available to the observer.
In the chart, the assessments of the Blue commander’s awareness of the Red
forces and Red plans are indicated by a square; awareness of his own forces,
by a triangle; and awareness of his own plan, by a diamond.
The third chart is the SAt curve (Figure 5.12) and may depict the SAt curve
for one or more command cells as indicated in the legend at the bottom of the
stacked chart. The example in the figure is for CAU-1’s SAt of the enemy and
the enemy’s SAt of CAU-1 in Run 4 of Experiment 6.
The fourth chart (Figure 5.13) contains the battle tempo curve—an assess-
ment of the relative rate of activities within the commander’s cell.
Analysis using these stacked charts led to insights regarding the amount and
nature of information available to the commander; the relationships between
that information and the commander’s situation awareness; how the cells col-
laborated; and the linkage between situation awareness, decision making, and
battle outcomes. Overall, this led to a number of interesting conclusions, some
of them encouraging and some troubling. Perhaps for the first time, quantita-
tive characteristics of battle command have been experimentally captured, with
special attention to situation awareness. Such quantitative analysis begins to shed light on
the relation between the science (particularly the use of technology) and the art
(human cognitive processes) of command, and on the cognitive dynamics that
both enable and hinder the commander in making sense of the battlespace.
CHAPTER 6
Making Sense of the Battlefield:
Even with Powerful Tools, the
Task Remains Difficult
Stephen Riese, Douglas J. Peters,
and Stephen Kirin
Although battles rarely ended in an unambiguous win or loss for either side,
the determination of which side held the ultimate
tactical advantage was usually rather clear.
To help illustrate this characterization, we show a set of Situation Awareness—
Technical (SAt) curves for Experiments 4a and 4b in Figures 6.1 and 6.2,
grouped by assessed battle outcome. Results from later experiments tend to
be similar but are more complicated due to the increased complexity that mul-
tiple echelons introduced. “Advantage Blue” or “Advantage Red”—whether it
was the Blue force or the Red force that gained advantage in the battle—was a
group assessment based on professional judgment of the observers, white cell,
analytic team, and participating operators and was not influenced by the cap-
tured data or subsequent analysis. Some remarkable relationships between the
information availability (as measured by SAt) and the tactical results emerge.
In all of the charts, the initial spike in SAt reflects the intelligence feed pro-
vided to the Blue command cell. The size of that spike is relatively consistent
across all runs as the amount of information initially provided was intention-
ally controlled. Although Experiments 4a and 4b were manned by different
Blue operator teams, those command cells achieved a comparable peak of
SAt across the runs (approximately 60%–63%), probably a reflection of the
inevitable close fight that occurs in every run. On average, the Experiment 4b
Blue command cell achieved a lower average Blue SAt score than the Experi-
ment 4a cell (40% vs. 47%). This may reflect the second team’s focused col-
lection management plan—a plan that painted a clearer picture of key areas
of interest at the expense of not understanding the more remote areas of the
battlespace. By comparison, the first team tended to cover more of the bat-
tlespace with sensors, without specifically focusing on their planned avenue of
advance. The Experiment 4b team also tended to negotiate more restrictive
terrain, a tactic that tended to mitigate the contribution of certain sensors.
Given that the amount of information ultimately available to the Blue com-
mand cell was similar across the 12 runs presented, the understanding of the
relationship between SAt and battle outcome emerges from the relative difference
between Blue and Red scores. In fact, it is the difference between Red and Blue
available information, and not the level of Blue SAt achieved, that is the stronger
predictor of battle outcome. When the Blue command cell achieved the tactical
advantage, their SAt clearly dominated that of Red over the course of the run.
In the cases in which Red gained the tactical advantage, the Red SAt usually mat-
ched or exceeded that of Blue for a significant portion of the battle (significant
either in terms of length of time or the criticality of the point in the operation).
Typically, each graph displays periods of time within the fight when there is
rapid growth of SAt, gradual and continuous growth of SAt, or no growth of
SAt. Rapid growth is usually a reflection of an intense close fight where many
new detections are made, as in the last 30 minutes of Run 6 in Experiment 4b.
There is usually an associated rise in Red SAt during these close fight encoun-
ters. Gradual and continuous growth, as in Experiment 4a, Run 8, typically
reflects deliberate movement and an Intelligence Preparation of the Battle-
field (IPB) process that enabled the appropriate placement of Named Areas
of Interest (NAI).
The difference in sensing capability between Blue and Red was intention-
ally great and resulted in different tactics and procedures during the battles.
Because of Red’s heavy reliance on humans for detection, Blue was able to
limit Red’s ability to see by limiting close encounters. As a result, Red SAt
routinely increased most significantly during the close fight.
This observation is vital to understanding the nature of the future, informa-
tion-enabled force. While the value of information has always been appreci-
ated, it is less widely recognized that it is the information differential, and not
the absolute level of information acquired, that is the stronger determinant
of battle outcome. The impact on future force design, tactics, and procedure
development is significant: in the fight for information, acquiring a certain
level of information is less important than achieving a substantial information
advantage over the enemy.
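As an illustration of this differential view, consider a small Python sketch that computes the Blue-minus-Red information advantage from aligned SAt samples. The summary statistic is an assumption for illustration; the chapter's own assessment of outcomes was qualitative.

def sat_differential(blue_sat, red_sat):
    # Pointwise Blue-minus-Red information advantage over aligned samples.
    return [b - r for b, r in zip(blue_sat, red_sat)]

def mean_advantage(diff):
    # One simple way to summarize the advantage over a run.
    return sum(diff) / len(diff) if diff else 0.0

blue = [0.20, 0.35, 0.50, 0.60]
red = [0.10, 0.30, 0.45, 0.30]
print(mean_advantage(sat_differential(blue, red)))  # positive favors Blue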
SA IS HARD TO MAINTAIN
In spite of the great help provided by sensors and the Commander Sup-
port Environment (CSE), commanders and staffs found it very challeng-
ing to gain and maintain adequate SA. From the large number of possible
causes for this challenge, some of which have not yet been fully explored,
we examine two seemingly unrelated reasons in this section: the operators’
tendency to prefer acquiring new targets over conducting Battle Damage
Assessment (BDA), and the CSE’s limitations in presenting information to
human operators.
Human Tendencies
Human biases play a significant role in battle command. For example,
belief persistence and confirmation bias (Endsley 2000; Nickerson 1998)
were often seen to appreciably shape the course of our experimental runs.
Another tendency—to prefer acquiring new targets over assessing the state
of previously acquired targets—also emerged as one of the more consistent
biases.
Knowing the state of enemy assets is a key component of SAt and a key con-
tributor to battle outcome. Although the importance of conducting adequate
BDA was known to all operators, it emerged and remained a key challenge
through all phases of the experimental campaign. From a tactical perspective,
BDA (or the lack thereof) dramatically influenced the conduct of operations
as operators attended to previously engaged targets, reducing their speed of
maneuver and expending redundant munitions to mitigate the risk of operat-
ing in a less certain environment.
In operator surveys, the lack of BDA was reported as one of the most sig-
nificant detriments to achieving SA. Although there were certainly different
approaches, intents, and capabilities across the various Blue command cells, a
number of similarities surfaced:
Command cells did not fully develop a set of tactics, techniques, and procedures to address
the requirement for BDA. Tactics, techniques, and procedures emerged and were
modified over time as the operators conducted subsequent missions. Some cells were
still experimenting with methods during their final battle. At times, the recorded
dialogues revealed a lack of awareness as to who was controlling BDA assets and
who was responsible for making assessments.
Command cells did not establish a priority for BDA. Prior to each run, the commander
established priorities, including the designation of the most-dangerous targets and
high-payoff targets. Quite remarkably, staffs often did not consider these priorities
in conducting BDA during the mission.
Command cells struggled to satisfy the competing demands of acquiring and characterizing
new targets and assessing the status of known targets. Although most sensors are opti-
mized for one task or the other, humans are needed to pursue both tasks. Some
cells developed Intelligence, Surveillance, and Reconnaissance (ISR) protocols for
the use of available sensors, but these usually did not address the use of sensors for
BDA. Because of this, BDA missions were usually ad hoc and often deviated from
the ISR plan.
Command cells relied heavily on sensors with lower-quality images in making their assess-
ments. For example, in Experiment 4b, although images provided by robotic ground
scouts and by UAV were a less-frequent source of imagery to support BDA, the
imagery they did provide was high quality, informative, and enabled correct BDA
(Figure 6.3).
Command cells failed to exploit the automated BDA capabilities provided by the CSE. This
may reflect a lack of training and understanding, or it may reflect operator reluc-
tance to forfeit control of assets that were also needed to identify new targets. It
should also be noted that the available automated tools did not offer the capability
to prioritize targets for BDA based on Commander’s Critical Information Require-
ments (CCIR).
Command cells often lost visibility over how many times a particular target had been attacked,
how many assessments had been cataloged, and how many images were available and had
been viewed—information that is available in the CSE user interface, but not easily
accessed and interpreted in the heat of the close fight.
Command cells repeatedly relied on the option of attacking targets multiple times in the
absence of effective BDA. Some groups developed engagement heuristics to mitigate
the lack of BDA (e.g., “Fire two precision munitions at a most dangerous target on
our axis of advance.”)
Less than 30 percent of the attempted assessments were correct; that is,
the assessment matched the actual state of the enemy asset at that point in
time. Recall that in our experiments we assumed that although a sensor can
detect and classify an object as a potential enemy asset, the final interpretation
of the images obtained by the sensor was left to a cell member. The images
were simulated with a realistic degree of uncertainty and other defects, and
making a definitive assessment was difficult at best. This resulted in the rela-
tively low level of assessment accuracy. For example, Figure 6.5 illustrates the
content provided by images; the quality of each image reflects the simulation
system's adjudication of the engagement and the quality of the sensor providing
the image.
We describe the difficulties of BDA here at length to help the reader appre-
ciate one of the more significant challenges faced by the human operators.
Given this challenge, it is somewhat understandable that the command cells
often favored using sensors to characterize unengaged targets despite the
importance of BDA. However, this tendency and the resulting diminished
amount of quality BDA had a direct impact on the level of SA, as measured
both by SAt and SAc.
Figure 6.5. Quality of images available for BDA in Experiment 4b: (a) 27 per-
cent of images showed no discernable target; (b) 34 percent of images showed
the presence of a target with no discernable damage; (c) 29 percent of images
showed smoke rising from the target but without sufficient detail to determine
the extent of damage; and (d) 10 percent of images were of sufficient detail to
accurately conduct effective BDA.
The second challenge stems from the CSE's limitations in presenting the
battlespace picture to the human operator. For example, the CSE's COP displays detailed
icons showing where enemies were detected with no visual indication of how
old the detections were and no visual indication of the level of confidence
in the information presented. Without such critical clues, human operators
tend to believe what they see on the screen and ascribe near certainty to the
information. A key principle for designing systems to support SA is to directly
present the level of certainty associated with information on the display
(Endsley et al. 2003; Miller and Shattuck 2004).
The CSE’s COP presents such a believable representation of the bat-
tlespace that it is often treated by operators as ground truth. This includes
instances in which particular areas have not been searched by sensors, and
the human operators discount the potential presence of enemy assets in those
locations. When such sensor gaps align with the expectations of the men-
tal frame developed by Blue operators, those operators have a tendency to
take what is on the map at face value—believing in this false world. In other
words, if a particular area on the display is free of Red force icons, then the
corresponding area in the battlespace must be unoccupied and therefore safe
to move through.
This was certainly true halfway through Run 8 of Experiment 6, when one
Blue commander stated that if the Red unit had placed a counterattack force
in the vicinity of the objective, then his unit would have already bypassed
the counterattack. In fact, there was a large counterattack force just beyond
the objective in an area not yet covered by sensors (see Figure 6.6). The
top view shows the information available to the commander through his
system. The bottom view shows the ground truth of the same area of the
battlespace. At this point in the battle, the Blue unit had begun its move-
ment toward the objective (Town 23 circled in top picture). In this example,
the commander’s understanding was based on the information display show-
ing no enemy icons in the vicinity of the objective without a corresponding
view of the sensor coverage in that area. This is the same event described
in Figure 5.10.
The CSE battle-command system interface is platform focused—it dis-
plays individual platforms and detections of enemy, as opposed to aggrega-
tions and higher-level interpretation. Commanders and their staffs tended
to focus on using the system interface to task individual sensor assets to
acquire information on individual platforms. In doing so, they often lost
focus on the commander’s critical information requirements (e.g., the loca-
tion of the Red counterattack force). Such shortcomings in multitasking
have been found to be one of the most frequent problems leading to low
SA (Jones and Endsley 1996). This tendency led to gaps in sensor coverage
and made it difficult to predict Red disposition from the available limited
intelligence.
The tendency to overtrust the COP was more commonly observed when
the time to complete the mission was running low. Additionally, the level
of trust in the COP often correlates with other available intelligence. For
example, an intelligence input from higher headquarters that suggested an
enemy presence in a certain area often resulted in the Blue unit conducting
an exhaustive search of that area and blaming the lack of Red detections on
sensor or system problems. Similarly, an intelligence report suggesting that
the enemy is not defending certain terrain led the Blue unit to move quickly
through that area with little or no sensor coverage.
The human tendency to favor acquiring and characterizing new targets
over conducting BDA and the limitations imposed by the CSE COP display
are but two challenges to gaining and maintaining SA. This difficulty drives
the demand for more automated and semiautomated tools (e.g., BDA and
BDA management), as well as the need for human training on the use of those
tools. We also see a similar demand for COP improvements to help convey
the age of, and the confidence in, the information presented on the display.
In the run excerpted below, the Red commander had positioned no significant forces
near the search area, with the majority of his forces arrayed to the west and
further north.
Blue Commander: Well, initially in the assembly area, obviously the first thing
we wanted to do is to try to gain situation awareness through sensors. So we
deployed our sensors—our Class 1s and our Class 2s and a Class 3 forward into
sector to try to gain intelligence. And from the get-go they weren’t picking up a
lot. They made it to the first two towns—like 97, 98—somewhere around that
area, and weren’t picking up anything, which surprised us. It kind of caught
us off guard a little bit because you just expect to pick up something. So now
you’re thinking, “Okay, is there something wrong with the sensor? Let’s keep
going back.” And we kept going back and looking and looking and not seeing
anything.
Interviewer: What was your expectation for what you’d be able to see?
Blue Commander: At least people. Usually when we get in the towns we see a lot
of human indicators come up. There wasn’t even that. At a minimum, you think
you’d see a lot of human indicators come up with people in the towns . . . at the
least . . . Or we would see something up in the hills—some sort of sensor or
observer up there.
Interviewer: So what did you do about that?
Blue Commander: We just continued to go around and around that town in just the
hope that we’d pick something up.
Interviewer: Did you have some sense of how long you would search a town until you
felt like it was clean?
Blue Commander: I guess not; no, I felt like, “I’m just going to keep doing this until
I find something.”
Figure 6.7. Area in which the Blue commander expected, but did not find, Red forces.
Commanders also found it difficult to envision the future state of the battle
(Level-3 SA). One possible explanation is
that the command cell was inclined to rely heavily on the COP and often
preferred to watch events unfold rather than projecting two or three steps
ahead. Thus, in some respects, the CSE COP may inadvertently encourage a
more reactive cognitive posture.
Endsley suggests that the “single most frequent causal factor associated
with SA errors involved situations where all the needed information was pres-
ent, but not attended to by the operators. This was most often associated with
distraction due to other tasks” (Jones and Endsley 1996). Indeed, in a num-
ber of our experiments, we noticed that the CSE display drove a commander
toward very rapid and apparently unproductive shifting of his attention focus.
Constantly scanning the display for any new information, the commander
would rapidly move the cursor from one enemy icon to another, hunting for
additional details. This led him, apparently, to focus very narrowly on the
most recent information, often in a very small area of the battlespace, and
to allocate little attention to the broader appreciation of the battle. Further-
more, he would often induce other cell members to shift their attention focus
to the subject of his immediate, narrow interest. Rather than directing the
attention of the specific operator who needed to be cognizant of the event, he
would often make a general announcement that forced the other operators to
interrupt their activities. In chapter 8, we will return to this observation and
examine it in more detail.
We frequently see that attention is drawn to areas where the most current
activity occurs, regardless of the importance of the information represented
by the activity. Thus, commanders can easily lose overall situation awareness
when they become too focused on specific areas of the battlespace, specific
events, or specific distracting conversations. By focusing primarily on new
information that populates the display, and therefore narrowing their atten-
tion, command cells can lose overall situation awareness.
These observations suggest the need for both training and interface
improvements. The system could provide fewer, yet more helpful, alerts
that are a combination of specific and fused data. One author distinguishes
between acute incidents in which the situation presents itself all at once and
the alert must be immediate, and going sour incidents in which there is a
slow degradation of the monitored process (Woods, Johannesen, Cook,
and Sarter 1994). For example, it is easy to imagine an immediate alert,
“out of ammunition,” but this type of alert could also be given in regular
intervals—25 percent, 50 percent, and 75 percent ammunition exhausted—
that allow the command cell to better assess the status and make decisions
before the situation becomes immediate. Furthermore, a fused alert might
only warn the operator when the amount of ammunition is being expended
at a rate faster than the percent of the mission accomplished. An alert fused
to the CCIR might indicate when a dangerous enemy asset is detected in
the planned axis of advance. By reducing false alarms and providing such
salient and distinct cues, the system would allow the operators to devote
more attention to understanding trends and other more important decision-
making tasks.
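To make the fused-alert idea concrete, here is a small Python sketch. The thresholds and the expenditure-versus-progress rule are illustrative assumptions, not a specification of the CSE.

def ammo_alerts(ammo_used, mission_done, already_fired,
                thresholds=(0.25, 0.50, 0.75)):
    # ammo_used and mission_done are fractions in [0, 1]; already_fired is a
    # set tracking which thresholds have been announced so far.
    alerts = []
    for th in thresholds:
        if ammo_used >= th and th not in already_fired:
            already_fired.add(th)
            alerts.append("%d percent ammunition exhausted" % int(th * 100))
    # Fused alert: warn only when expenditure outpaces mission progress.
    if ammo_used > mission_done:
        alerts.append("ammunition use (%.0f%%) exceeds mission progress (%.0f%%)"
                      % (ammo_used * 100, mission_done * 100))
    return alerts

fired = set()
print(ammo_alerts(0.55, 0.40, fired))  # two threshold alerts plus fused alert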
Commanders and staffs often did not know where their sensors had already
looked and employed sensors in areas that were already well covered.
To further complicate the problem, the Red force avoided establishing any
recognizable operational patterns and specifically avoided assuming similar
force dispositions from one experimental run to another. For example, the
Red commander put decoys where he thought the Blue unit expected actual
forces and often interspersed decoys with actual forces to portray a larger
force. Red also did not attempt to defend the entire battlespace but instead
massed combat power in one area and took risk in another area. Further, the
Red force organized itself into irregular groups composed of platforms of
different types that precluded the Blue command cell from making consistent
conclusions about the enemy force disposition.
Recognition of these conditions motivated the development of a sensor cov-
erage analysis tool (illustrated in Figure 6.9) that could portray, over time, the
amount of terrain being searched by the Blue sensors. Among its functions,
the tool measures the quality of sensor coverage by comparing the amount of
terrain searched to the Blue unit’s area of interest. A sample resultant curve
is depicted in Figure 6.10. This afforded the analysts a visual comparison of
the SAt curve to the amount of terrain covered by the available sensors. Addi-
tionally, because operators were extremely sensitive to the appearance of new
icons on their displays, and because new detections are the primary cause of the distinct
jumps observed in the SAt curves, the display shows the number of enemy
assets detected in each 10-minute period. In all battles examined with this
approach, there was a clear positive relationship between the amount of ter-
rain searched and the corresponding level of SAt.
The sharpest increases in coverage tended to occur when Blue ground
elements or more capable UAVs moved forward, covering new terrain with
high-quality direct vision optics (DVO) sensors or human vision. However,
the contribution of ground elements came with a cost: since Blue ground
platforms’ sensing capability was comparable to that of the Red platforms, the
ground advance usually contributed to an increase in Red SAt. And because
the Red force was usually dispersed throughout the battlespace, there were few
instances of significant coverage growth without some number of detections
(Figure 6.10).
The commander often based maneuver decisions in large part on whether
or not an area had been searched by a sensor. Seldom was he able to quantify
or assess the quality or currency of that coverage. In addition, if an area had
been searched and no enemy entities detected, there was no mechanism to
remind the commander that the area was potentially vacant. An investigation
of the factors that contribute to the commander’s decision making indicates
that the knowledge of what is not there often may be as important as the
knowledge of what is there.
The experimental findings also bear on three assumptions often made about
an information-rich force:
Assumption 1: If we can show operators more Red and Blue information, then they
will have a more accurate interpretation of both the Red and Blue situations.
Assumption 2: If we can show operators more Red and Blue information, then they
will make better predictions about how the Red and Blue actions will play out.
Assumption 3: If we can show operators more Red and Blue information, then they
will have a shared Level-2 and Level-3 SA across the command cells.
As Figure 6.13 shows, in Run 2 the Blue and Red units had comparable SAt
scores up until about 50 minutes into the run. However,
we know from the Blue commander’s actions just described that his explana-
tion of the data available around Towns 96 and 97—his Level-2 SA—was not
strong. At 70 minutes into the run, the Blue unit lost two unmanned recon-
naissance vehicles to Red indirect fire. Level-1 SAt at this time is higher for
CAU-2 than it is for Red regarding CAU-2. At 80 minutes into the run, the
Blue unit commander finalizes his assessment of the Red minefield by draw-
ing it on the map, and from that point on, the entire Blue team holds onto
the assessment that a minefield is located east of Town 96 and subsequently
makes movement decisions according to that interpretation. The SAt curve
indicates that the Blue unit’s Level-1 SAt was much better than the Red unit’s
for the last 90 minutes of the run—a clear example of how accurate data pro-
vided by the system does not necessarily lead to an accurate interpretation of
the situation.
Furthermore, despite the fact that all Blue operators saw the same elements
of data via the COP, a shared interpretation (Level-2 SA) was lacking. Both
unit commanders (CAU-1 and CAU-2) saw the same three ATD reports.
The CAU-1 commander correctly interpreted them as Red infantry in Town
97. He stated this assessment during a formal commander’s report and again
during informal information exchanges with the CAU-2 commander. The
CAU-2 commander noted the difference of opinion but maintained his suspi-
cion of a minefield. Working from their individual frames, the two CAU com-
manders maintained different explanations (Level-2 SA) for the same data.
Interestingly, the CAU-1 commander later accepted the assessment of a
minefield in that location after the two unmanned reconnaissance vehicles
had been killed and the minefield was drawn on the map. Thus, when they
finally did gain a shared understanding, it was a wrong one.
Figure 6.13. SAt over time for Blue and Red in Run 2.
As a result of the misinterpretation, they allocated resources to clear the
phantom minefield. The Blue unit had enjoyed good forward momentum to
this point, but movement stopped for the mine-clearing work. The tactical
impact of the flawed assessment was also felt by CAU-1, as they postponed
initiating their movement until the phantom minefield was cleared. The non-
existent minefield contributed to movement decisions for the remainder of
the run (e.g., whether to go around the minefield or to navigate the path
through the minefield).
Had the CSE prompted the commander to consider other explanations for
the data, it might have helped both CAUs and the entire Blue force’s mission.
Additionally, the discrepant interpretations were articulated but never examined; an improved CSE could provide better support for collaborative comprehension of data (e.g., tools to compare and contrast the probabilities associated with different interpretations).
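To make the idea concrete, the following is a minimal sketch of how such a tool might weigh two competing interpretations of the same reports. The hypotheses, priors, and likelihoods below are purely illustrative; they are not values from the experiments, and the CSE contained no such tool.

```python
# Hypothetical sketch of a CSE-style aid for weighing competing interpretations
# of the same sensor reports. All priors and likelihoods are illustrative.

def posterior(priors, likelihoods):
    """Bayes rule over a set of mutually exclusive hypotheses."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two frames for the three ATD reports near Town 97.
priors = {"red_infantry": 0.5, "minefield": 0.5}

# How likely is each hypothesis to generate three ATD reports of this kind?
likelihoods = {"red_infantry": 0.7, "minefield": 0.2}

for hypothesis, p in posterior(priors, likelihoods).items():
    print(f"P({hypothesis} | 3 ATD reports) = {p:.2f}")
```

Even so simple a display would have made explicit that the two commanders were assigning very different weights to the same evidence.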
In a separate situation, during the mission planning for Run 3, the com-
bined arms team (CAT—higher echelon for the two CAUs) set forth criteria
to be met prior to initiating the attack. They included certain preparatory fires
to disable Red ADA, sensor sweeps and surveillance at river crossing sites,
and a sweep of the sector 20 km forward of the Line of Departure (LD). The
CAT commander’s intent was to be as certain as possible that the subordinate
CAUs would be safe to move forward. As it turns out, the CAT commander
did not feel comfortable allowing the forces to cross the LD until more than
an hour past the start of the mission. During that hour, both subordinate
units were confused as to why they had to wait so long. In one instance, at
25 minutes into the battle, the CAU-2 commander asked for permission to
cross the LD with unmanned mortar and reconnaissance vehicles, in order to
both range the enemy infantry and increase the viewing area of the UAVs
that were tethered to the reconnaissance vehicles. The higher commander
denied this request, and as a result, the subordinate commander decided to
use longer-range and more-powerful-than-necessary weapons on the exposed
infantry. This in turn confused the higher-echelon command cell—why would
they use such weapons, usually reserved for more dangerous targets, on dis-
mounted infantry? The CAT commander further realized that by using these
weapons, CAU-2 may have telegraphed his location to the Red force. As a
result, there was sudden pressure to cross the LD more quickly, despite the
CAT commander’s low level of confidence that the situation was right.
Two SA issues are brought to light from this example. First, there was not
a common frame among the various Blue elements to describe how the plan
should unfold—lack of shared SA. The CAT commander developed a certain
vision during planning for how the battlespace should look before crossing
the LD and verbally communicated the specific criteria to the CAUs prior to
the run. But his mental vision that operationalized those criteria and essen-
tially described what his comfort level had to be before acting was not com-
municated. Both subordinate units were ready to move out long before the
CAT commander’s unknown vision had been achieved.
Second, the two echelons (CAT and CAU-2) comprehended the evolving
battle differently as a result of different cognitive frames. The CAT command-
er’s goals were to target enemy ADA and to have sensor coverage of specific
areas. The CAU-2 commander’s goals were to target all known enemy enti-
ties and to continually extend his sensor coverage forward. The CAU-2 com-
mander’s interpretation of the data in the context of his mission objectives led
him to believe that his best action was to move certain assets across the LD,
in contrast with the CAT commander’s comprehension of the same data and
his perceived best action.
This difference in Level-2 SA resulted in the CAT’s refusal to let CAU-2
cross the LD, the subsequent use of more potent weapons to take care of
enemy infantry, and confusion at both echelons concerning the other’s
actions. The difference led to Level-3 SA (projection) problems as well. The
CAT commander did not anticipate that a delayed movement would lead to
a suboptimal engagement decision on the part of one of his subordinates.
And while the CAU-2 commander may have anticipated that his fires could
provide his location to the Red unit, he did not anticipate that higher head-
quarters would wait so long to approve his crossing the LD.
These few examples, and there are many more, help illustrate that having
access to the same information does not necessarily ensure common understand-
ing. In the next chapter, we further discuss how collaboration suffered when
operators believed that sharing the same display content reduced the require-
ment for human communication. Situation awareness is built upon information; the cognitive load that this information imposes on operators, and the information upon which their decisions were based, were critical areas of focus for Experiment 7. Here we examine the relationship between the cognitive loads at
the two combined arms units (CAU-1 and CAU-2—approximately company-
level units), the CAT (roughly comparable to a battalion), and their higher
headquarters (brigade).
Figure 6.14 depicts the SAt curves obtained from two selected runs (4 and
7). The curves indicate that the Blue commander won the critical fight for
information and was able to obtain a higher level of SAt than his adversary.
While this leads one to expect that the Blue team would win the overall battle,
we saw situations where this did not occur. In fact, the ability of one side to
gain an early information advantage and maintain that advantage through-
out the battle is critical. While we have already discussed the relationship
between SA and battle outcome, we highlight this result here because it is
essential that our future commanders be trained to properly use their future
force assets to gain an early information advantage. Additionally, it is critical that these commanders understand how they can use future force assets to
perform counterreconnaissance against a future enemy in order to limit the
information obtained by the enemy early in the battle.
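The chapter does not reproduce the formula behind the SAt curves, so the following is only a plausible stand-in, not the program’s actual metric: score a side at each time step by the fraction of opposing entities correctly represented, within a position tolerance, in its picture. All names, positions, and tolerances are invented.

```python
# Illustrative stand-in for an SAt-style score (not the MDC2 formula): the
# fraction of actual enemy entities matched by a perceived track that lies
# within a position tolerance. All data below are invented.

def sat_score(ground_truth, perceived, tolerance_m=100.0):
    """Fraction of actual enemy entities matched by an accurate track."""
    if not ground_truth:
        return 1.0
    matched = 0
    for ent_id, (x, y) in ground_truth.items():
        track = perceived.get(ent_id)
        if track is not None:
            px, py = track
            if ((px - x) ** 2 + (py - y) ** 2) ** 0.5 <= tolerance_m:
                matched += 1
    return matched / len(ground_truth)

truth = {"T1": (1000.0, 2000.0), "T2": (1500.0, 2200.0), "T3": (900.0, 1800.0)}
blue_picture = {"T1": (1010.0, 1995.0), "T3": (2500.0, 1800.0)}  # T3 badly off
print(f"Blue Level-1 SAt at this step: {sat_score(truth, blue_picture):.2f}")
```

Computed over every time step of a run, such a score yields exactly the kind of Blue-versus-Red curves shown in Figures 6.13 and 6.14.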
The CSE provides a significantly greater quantity of information to lower
echelons at a faster pace than occurs with today’s force. To handle the increased information volume, the commander requires an advanced skill set
and experience level in order to effectively visualize the battlespace, identify
and understand decision points, and create effective CCIRs to support his
decisions. A Blue commander described the challenge of being capable of
quickly sorting through all of the available information and determining what
information is most relevant to his mission at that time:
I don’t look at all the imagery, I look at what I see as affecting my plan because there is
a lot of superfluous imagery that floats around and you know I don’t have to be aware
of every shot nor would I ever want to be aware of every shot. I can’t process that
much. I want to focus on 2–3 bits of information at any given time and that helps me
maintain a confidence level vis-à-vis my mission success.
Experience and training on how to handle this new cognitive load are
essential for a commander to excel in the future environment of network-
enabled warfare. The critical combat enabler is not simply the presence of
the information, but rather the ability of the commander to understand and
process the relevant information and act upon that information quickly. This
ability to understand, process, and act quickly must be built into the experi-
ential base of future junior leaders. Often this means overcoming the natural
human tendency to want more and more information.
Future company-level commanders and staffs need to effectively integrate
advanced ISR assets into tactical operations, classify and identify future enemy
targets through different types of imagery, prioritize and engage these targets
with the correct munitions, and perform effective BDA. More important, and
likely more difficult, is the requirement for these future leaders to be proficient
in processing vast amounts of information, determining what is relevant and
what is not relevant, and making key decisions based on partial information.
The solution to helping future tactical commanders is likely to be a complex one that includes training, assignment policies (to better manage experience), and improved battle-command tools.
Experimental Environment
• Having multiple, free-play runs of the same mission, with different operator teams
across experimental phases, allowed analysts to make observations and draw conclu-
sions that would not be possible from single-run experiments or from more-varied
runs.
• The dynamic, stressful environment in which decisions could not be exhaustively
examined a priori allowed a thorough exploration of the more intuitive decision
modes.
• Having humans both in the loop and able to freely exercise battle command allowed
situation awareness to influence battle outcome.
• The representation of multiple potential future capabilities in a structured environ-
ment enabled examination of a relatively large number of concerns with a relatively
small number of operators.
• The experimental design also allowed analysts to thoroughly explore the cognitive demand of sensor management and its relationship with gaining and maintaining SA.
Analysis
• The SAt metric to measure information availability, combined with metrics for
mission success, permitted analysts to objectively determine who held the tactical
advantage (Blue or Red) and to compare those outcomes to information availability.
• Availability of dedicated analysts and data collectors during the experiments allowed
us to capture the context of command decisions.
• Tailorable analyst observation stations, automated logging of all simulation activi-
ties, recordings of all communications, and complete transcriptions of a number of
runs enabled the detailed analysis that yielded substantive, quantitative insights.
The findings outlined in this chapter derive from relatively early experi-
mentation on network-enabled battle command. Continued and more sophis-
ticated experiments and analysis are required to fully explore the impact of
network-enabled warfare on the future commander’s cognition. Still, even
these tentative findings are significant in their implications for the design
and training of our future force. The importance of situation awareness in
network-enabled warfare drives the demands on the human operator who,
despite remarkable advances in automation, faces an increased cognitive burden
in the future. This in turn calls for further improvements in battle-command
support systems and in corresponding training.
In addition to training and tools, another time-honored approach to dealing with cognitive challenges is to bring more minds to the task. Two heads are better than one, says the common wisdom, and collaboration helps solve
hard problems. It should be particularly true, one could argue, with the mod-
ern tools designed to make collaboration more efficient. Our experiments,
however, offer a far more nuanced story of collaboration’s role in battle
command.
CHAPTER 7
Enabling Collaboration:
Realizing the Collaborative
Potential of Network-Enabled
Command
Gary L. Klein, Leonard Adelman,
and Alexander Kott
This chapter’s framework draws on research in several fields, including social psychology and cognitive psychology. The following are some of the
considerations addressed in the framework:
• The different ways that technology can affect a collaborative task process
• The different granularity of information that is required at different levels in an
organizational hierarchy, which requires collaborators to transform and reinterpret
the information that they share
• The different informational requirements of each level of situation awareness
• The different approaches used for collaboration and the different behaviors they
require
• The interaction of the concept of operations, the task environment, and technology, and its effect on performance efficiency
• The means to measure the effect of using technology on collaborative perfor-
mance
Figure 7.1 illustrates the interaction between tasks and technology. The
downward arrows represent the constraints of one element on another; the
nature of the tasks should define the behaviors and transmissions required in
a specific context, and together they should define the nature of the technol-
ogy. The upward arrows indicate that one element serves as a resource for another;
the nature of the available technology will influence the conduct of the col-
laborative behaviors and transmissions, and together they will influence the
conduct of the collaborative task. Taken together, these influences suggest the
mutual entanglement between task, technology, and concept of operations
that always exists in a system. So for example, various characteristics of the
MDC2 collaborative task require certain types of collaborative behaviors and
task transmissions among and between collaborators in their vehicles for cost-
effective performance. On one hand, the need for these behaviors and transmissions represents requirements for the Command Support Environment (CSE) technology; if these requirements are not met, the CSE’s effectiveness as a collabora-
tive tool. On the other hand, the prototype and future CSE technology can
provide resources that when combined with an appropriate concept of opera-
tions, permit behaviors and transmissions that were not previously available
and thus provide the potential for dramatically improving collaborative task
performance.
TASK TRANSMISSIONS
All collaborative tasks require the transmission of information among par-
ticipants. This requires the development of external representations of mental
constructs, which in an individual task might otherwise remain intuitive and
imprecise. A number of typical generic classes of transmissions can be defined.
First, limits on comprehension of a complex system can be managed through hierarchical decomposition, which provides “a series of descriptions of the system, which differ with respect to their level
of abstraction. This makes it possible to control the level of complexity in the
sense that only a limited number of units have to be considered at each level of
the hierarchy.” For example, even a proficient Combined Arms Team (CAT)
commander will at some point be cognitively incapable, by himself, of understanding what is happening in a large battlespace at the level of each individual
entity. Yet through a hierarchical decomposition, the CAT commander’s under-
standing of the battlespace can be simplified to regarding the overall status of
his CAUs and the likelihood of their accomplishing their missions, rather than
regarding the status of whether specific, individual entities of the CAU have
succeeded in crossing the river. Similarly, applying the same hierarchical prin-
ciple to managing the organizational structure, a CAT commander needs to
manage only three staff members and two CAU commanders rather than doz-
ens of individual people and entities.
Second, limits on control of any system result from the impossibility of
developing models of complex situations that provide sufficient prediction
information at every level of abstraction for a given time frame (Flake 1998).
For example, we can perfectly predict in September that it will be cold in the
winter and warm in the summer; however, due to the complex interactions
and nonlinearity in atmospheric phenomena, we can never have enough infor-
mation, in September, to predict the day (or perhaps even the month) of the
last frost for the coming year. This is an informational limit not a cognitive
limit: the principles of chaos theory assert that this constraint is inescapable,
regardless of measurement accuracy, computer power, or software sophisti-
cation. Therefore, although CAU commanders may be able to estimate the
likelihood of mission completion for the CAU, they cannot predict the pre-
cise future state of every vehicle in the unit in that same time frame. However,
to meet the commander’s mission objectives, individual vehicles and other assets at the more detailed level of abstraction must be controlled in real time, compensating for the unpredictable conditions on the
ground. The commander’s mission-completion estimate and the control of
the assets must happen at different levels of abstraction and in different time
frames. Both levels of abstraction are critical to mission success.
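The informational limit invoked here can be illustrated with the logistic map, a standard toy model from chaos theory (it is not part of the MDC2 work): two initial measurements differing by one part in a million diverge completely within a few dozen iterations, so no achievable measurement precision buys long-range prediction.

```python
# Standard chaos-theory illustration (not from the MDC2 program): in the
# logistic map, two initial states differing by one part in a million diverge
# completely within a few dozen iterations, so detailed long-range prediction
# fails regardless of measurement precision.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001  # two nearly identical initial "measurements"
for step in range(1, 41):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
```

The statistical properties of the trajectory remain predictable (the analogue of “winter will be cold”), even as its pointwise future does not.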
Because of the difference in conception and information structure at each
level, hierarchical organization requires that information, transmitted up from
one level to another, needs to be not just aggregated but transformed and
reinterpreted. Commanders do not need merely the sum of the casualties taken;
they need to know how the distribution of these casualties is going to affect the
campaign to secure their mission objective. This systematic transformation of
information should be an essential element of designing organization-system
integration (Katz and Kahn 1978). The significance of these hierarchical
organization considerations is that in a complex battlespace situation, collabo-
ration also needs to be designed hierarchically. When so designed, the scope
of function at one level will informationally encompass more than one func-
tion at the level below, but at a higher level of abstraction. With information transformed in this way, task transmissions can enable a level of situation awareness for the group of participants, which may
not be possible for any single member. Therefore, a second major consideration
for task transmissions is enabling such awareness. As was discussed in chapter 4,
Endsley (1995) has identified three levels of situation awareness (SA).
There are different collaboration implications for these three levels of situ-
ation awareness. Level-1 is the basic awareness of available information. It
answers the question, “What information do we have about the enemy?” or
“Where are the friendly forces?” It is information that can be placed in a
database (or pooled) for use by others because it has a global frame of refer-
ence that is not tied to the information recipient’s situation. In Figure 7.2,
a hypothetical example is the development by the intelligence manager of a
common BDA picture that can be drawn upon by both the maneuver man-
ager and effects manager to support developing their own situation-specific
Level-2 and Level-3 SA. In fact, Level-2 and Level-3 SA require that Level-1
information be interpreted to meet the information recipient’s needs. More-
over, when information is shared across organizational levels, up the chain
of command, the lower echelon’s Level-2/3 SA must be transformed and
reinterpreted into the higher echelon’s Level-1 SA. For example, the CAU-1
commander’s assessment of CAU-1’s combat effectiveness, and the status of
their plan execution, is Level-1 SA for the CAT commander’s assessment of
the CAT’s mission status.
Therefore, providing the same common operational picture at the same
level of abstraction, across levels of an organizational hierarchy, can be prob-
lematic because the same information is not equally useful at each level—the
form of the information and its level of abstraction should be dictated by the
information needs of the recipient. Sharing the same entity-level information
through the CSE can enable the operators at the same level of abstraction
(e.g., intelligence manager) to efficiently backstop and compensate for each
other regarding entity-level actions (Level-1 SA) like identifying targets and
directing fires. However, employing that same view across different levels
of abstraction (CAT or CAU commander) could result in a lack of mission-
oriented (Level-2/3 SA) command and control.
These information-sharing and load-sharing aspects of collaboration can
interact with each other to impact performance. For example, the effective-
ness of information sharing can interact with the hierarchical decomposition:
as was seen in one MDC2 case where none of the operators developed a more
mission-oriented view of the situation, and the larger picture (e.g., likelihood
of mission success) may not have been evaluated even though all of the needed
information existed within the group.
In addition, the information requirements for the levels of situation aware-
ness interact with characteristics of the task process. The nature of this
interaction can be understood after a description of those characteristics is
presented in the following sections.
Table 7.1
Behaviors Required for Mutual Adjustment (M), Planned (P), and Standardized (S) Coordination

Collaborative behavior        M   P   S
Connection                    □   □   □
Notification                  □   □   □
Identification                □   □   □
Transmission                  □   □   □
Common ground preservation    □   □   □
Confirmation                  □   □   □
Synchronization               □   □   □
Election                      □   □   □
TYPE OF INTERDEPENDENCE
Thompson (1967) identified three general types of interdependence among
unit personnel and organizational units: pooled, sequential, and reciprocal.
In pooled interdependence, each team member or unit provides a discrete
contribution to the whole by collating (or pooling) their obtained information
and knowledge. In the MDC2 task, individual intelligence managers contrib-
uting to the shared CSE database is an example of pooled interdependence.
Although the final product depends on the activities of each intelligence man-
ager, the individual analysts’ work is not necessarily dependent on each other’s
activities. However, their organization as a group is critical to ensure that each
intelligence manager’s surveillance of part of the battlefield contributes to a
complete picture of the whole battlespace.
In sequential interdependence, the product of one unit (or person) is
dependent upon the output of another. In MDC2, the intelligence managers
and effects managers exhibit a sequential interdependence: the intelligence
manager identifies targets, the effects manager directs fires upon the targets,
the intelligence manager does BDA, and the sequence repeats.
Finally, in reciprocal interdependence, units pose critical contingencies for
each other that have to be resolved before taking action. Operations and logis-
tics often have a reciprocal interdependence. Whether or not different opera-
tions can be undertaken depends on the availability of certain resources, and,
in turn, the availability of those resources depends on previous and planned
operations. Therefore, operations and logistics pose critical contingencies for
each other that have to be addressed reciprocally during planning.
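Thompson’s three types can be sketched in code; the functions and data below are illustrative only, with the sequential case mirroring the intelligence-effects-BDA loop just described.

```python
# Sketch of Thompson's (1967) three interdependence types as they appear in
# MDC2. All function names and data are illustrative.

# Pooled: intelligence managers contribute independently to one shared picture.
shared_picture = {}

def pool_report(manager, detections):
    shared_picture.setdefault(manager, []).extend(detections)

pool_report("IM-1", ["T1"])
pool_report("IM-2", ["T2", "T3"])
print(shared_picture)  # each contribution is discrete; the whole needs both

# Sequential: intel output feeds effects, whose output feeds BDA, and repeat.
def identify_targets():
    return ["T1", "T2"]

def direct_fires(targets):
    return {t: "engaged" for t in targets}

def assess_bda(engagements):
    return {t: "inconclusive" for t in engagements}

print(assess_bda(direct_fires(identify_targets())))

# Reciprocal: operations and logistics constrain each other, so both
# contingencies must be resolved together before acting.
def operation_feasible(demand, stock):
    return stock >= demand

print(operation_feasible(demand=40, stock=55))
```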
TASK ENVIRONMENT
An organization or task process exists within a context, its task environ-
ment. Thompson (1967) identifies two dimensions that are critical to the way
an organization is structured.
The first is the stability of the environment—how quickly the elements in
the environment change. Our discussion of situation awareness suggests that
this dynamism can be considered at the three different levels: not only how
quickly the battlespace entities change (Level-1), but also how sensitive the
situation (Level-2) is to those changes, and how sensitive the projected future
(Level-3) is to changes in the situation.
The second dimension is heterogeneity—how many different kinds of enti-
ties (and by analogy situations and futures) does the organization need to deal
with? Thompson proposes that in order to reduce uncertainty to manageable
levels, organizations should divide their environment into subdivisions that
are as stable and homogenous as possible, and they should create separate
organization units (e.g., different CAUs) to deal with each subdivision.
Collaborative technology can permit teams to respond faster and,
thereby, deal with more dynamic situations than previously possible (e.g.,
more mobile targets). It can also facilitate collaboration among more units
addressing different subdivisions of the environment—this enhances orga-
nizational management and expands the scope of battlespace understanding
and control.
CONCEPT OF OPERATIONS
Ultimately, the performance achieved for a given cost of coordination,
communication, and time depends upon how well the collaborative task, the
concept of operations (CONOPS), and the technology are fitted together.
Clearly, the best system (represented by the upper-right box in Figure 7.3)
is one where the CONOPS is appropriate for the collaborative task and the
CONOPS takes full advantage of technology to achieve the lowest cost coor-
dination possible given the constraints imposed by the task’s various dimen-
sions as described in the CEF. However, in developing new technology for a
task, we often find ourselves near the lower-right box. This is the case when
new technology simply replicates the function of old technology using the
existing CONOPS, without taking advantage of the new technology’s poten-
tial to change the CONOPS and reduce costs (or improve performance).
When this happens, we often see people develop work-arounds that move
toward a more effective CONOPS, even if it is outside the technology’s origi-
nal design (the middle box). Even though the work-around results in a worse
fit for the original design of the technology, the improvement in fit with the
task yields improved performance.
Often, collaborative technology is designed to permit the kind of coordi-
nation possible in face-to-face groups. Video and audio teleconferencing are
examples of this kind of collaborative technology. People in different geo-
graphic regions can now use this technology to coordinate in the same way
as face-to-face groups, typically via mutual adjustment and planned coordi-
nation. Web-based (or inspired) collaborative technology offers window and
file-sharing capabilities, instant messaging (or chat), and even bulletin boards
for electronic drawing.
However, as Figure 7.4 illustrates, the biggest gains in performing collaborative tasks more cost-effectively come from using technology to move to a new task concept of operations with less expensive coordination. As discussed earlier,
standardization requires fewer collaborative behaviors and task transmissions
than planned and mutual adjustment coordination. If technology makes task
processes less intensive and thereby permits them to deal with more dynamic
environments and heterogeneous units than previously possible, then the cost
effectiveness of the technology can jump qualitatively rather than incremen-
tally by providing the same or better levels of performance for substantially
less communication. This is the long-term goal of the MDC2 program of
evaluating collaborative environments like CSE.
Finally, underlying all collaborative performance outcomes is the quality of training. The effectiveness of distributing the cognitive load and sharing information depends on more than knowing how to operate a tool’s buttons and menus. Establishing proper usage requires
defining an effective concept of operations that uses the tools to accomplish
the task efficiently. Therefore, a technology can appear to be ineffective, even
if it inherently supports required collaborative behaviors and task transmis-
sions, because team members do not know how best to use the tool within the
task context.
Given that Experiment 6 was the first exploration of CSE with multiple C2
vehicles, and that CONOPS development was exploratory, it was not unex-
pected that we would find ourselves in the central and lower-right regions of
Figure 7.3. These are the regions of the figure illustrating improvements in task performance, but where additional performance can be achieved by taking
advantage of the collaborative technology’s potential to better fit the technol-
ogy and CONOPS to the task. Therefore, the following observations can
provide a basis for developing concepts of operation that are more effective,
identifying requirements for training, and identifying requirements for the
CSE and similar innovative C2 systems. Developments in each of these areas
should improve effectiveness.
Figure 7.5. Often observed flat task organization. See Appendix for explanation of abbreviations.
In this flat organization, subordinate units tasked the CAT’s effects manager directly, and it is not clear that the CAT commander was aware of this tasking of his own effects manager coming from other (subordinate) units.
The process trace of the loss of MCS1–2 during one experiment (Run 8)
is shown in Table 7.2. This example illustrates how the CAU-1 commander
becomes heavily engaged at the entity level. The ultimate result is the loss
of both of his MCSs and consequently his ability to complete his mission—
although, revealingly, he never seems to make (or at least did not articulate)
that mission-command assessment.
At the beginning of this trace, the CAU-1 commander is indeed involved
in directing fires on specific targets. In fact, he is engaged by CAU-2’s com-
mander in mutual-adjustment coordination to fire on one of CAU-2’s targets.
For whatever reason, the commanders (and the rest of the command cell) did
not use the CSE grid-coordinate or latitude-longitude coordinate systems.
Without an absolute coordinate system in use, difficulty in establishing com-
mon ground leads to confusion over just where this target is. Because of this,
at least 60 seconds are spent on this coordination, which is arguably at
too low a level of abstraction for either commander and should have been
performed by their maneuver managers or effects managers.
In the meantime, an ATD pops up in front of MCS1–2 and kills it at
13:50:21. Because of a number of interface considerations, the CAU-1 com-
mander does not become aware of this situation until over three minutes later,
when his attention is drawn to it by the CAT commander! Even then, he
does not fully understand that this is a firepower-kill and mobility-kill until
13:59. He does not appear to assess the importance of this loss with respect to
accomplishing his mission; even when his second MCS is killed, he continues
to insist that he can take his objective, clearly showing a lack of Level-2 and
Level-3 SA.
We see further evidence of the CAU-1 commander’s lack of mission-
oriented assessment in his report to the CAT commander at 13:56. In that
report, with inadequate surveillance resources, the CAU-1 commander
erroneously concludes there are no enemy forces counterattacking. More-
over, having lost most of his surveillance assets, he does not direct his intel-
ligence manager to develop and execute plans to compensate to provide the
CAU-1 commander with better situation awareness of the enemy’s status.
The intelligence manager could have better deployed the remaining class-2
UAVs, used the CSE’s “range fans” to better visualize the UAVs’ capabili-
ties and limits, or collaborated with other intelligence managers for more
complete UAV surveillance coverage. Without direction from the CAU-1
commander or taking initiative on his own, the intelligence manager does
not provide the needed mission-oriented situation awareness, and CAU-1
is destroyed.
However, in Experiment 7, the commanders intentionally tried to maintain
a more hierarchical organization. They took better advantage of the CSE
capabilities to maintain a mission perspective and consequently were more
successful in maintaining their strategic standoff and their survival.
Table 7.2
Process Trace of the Event When CAU-1 Lost Its MCSs. See Appendix for Explanation of Abbreviations

13:46:00  CAU1 CDR to CAU1 BSM: CAU1 CDR slowing MCS down to let infantry go ahead. [Analyst note: Appropriate level of abstraction.]
13:47:00  CAU1 CDR to CAU1 EM: Fire at two target sets. [Analyst note: Too detailed a level for CAU1 CDR; should be CAU1 BSM?]
13:47:00  CAU2 CDR to CAU1 CDR: Can you fire at infantry near my target P20? [Analyst note: Discussion ensues about exactly which target that is. Without clear markers, there again is confusion about location. CAU2 CDR highlights the target (which does show up on the CAU1 CDR display—but CAU1 CDR does not appear to see it, because he asks for clarification). CAU2 CDR says it is “south of 27”—but 27 is far north of the target.]
13:48:00  CAU1 EM to CAU2 CDR: We’re shooting those. [Analyst note: Not clear they have identified the same target.]
13:49:46  Event: ATD pops up in front of northern MCS1-2 on CAT right screen.
13:49:47  Event: NLOS CAU2 fires PAM at ATD. [Analyst note: Unknown.]
13:50:00  CAU1 CDR: Where the hell is 17?
13:50:21  Event: MCS fire/mobility kill—indicated in simulation kill database.
13:56:00  CAU1 CDR to CAT CDR: “Mobility kill” to one MCS. Not seeing additional resistance. [Analyst note: CAU1 CDR told CAT CDR that it is now a “mobility” kill? See 13:59—he may have “misreported” the status. He is “red on eyes” (loss of one class-2 and two class-3 UAVs)—but not “seeing” any counterattacking enemy forces advancing! Representation of surveillance quality is missing—“if I don’t see it then it is obviously not there.” Does have class 1’s—maybe basing on their limited view, without recognizing their range (without range fan displayed).]
13:59:00  CAU1 CDR: Appears that it is a mobility kill and firepower kill. [Analyst note: Tries to move it and can’t.]
14:02:00  CAU1 CDR: Out of comms with MCS1-1. [Analyst note: As indicated by simulation system—red triangle.]
14:02:33  CAU1 CDR to CAT CDR: Reports out of comms with “Lead” MCS. [Analyst note: CAT CDR at first thinks he is talking about MCS1-2.]
Table 7.3
The Timelines Observed in Two Episodes of Command Succession
Run 5 was the first time the command cells had experienced the loss of a
C2V and exercised the command succession procedures, and the tardy time-
line reflects the inexperience and the attending confusion. When the same
command cells experienced the loss of a C2V again in Run 7, they improved
their timing considerably. Moreover, the average times shown indicate that only a very short period was required to accomplish these tasks with the CSE, as compared to a conventional, non-network-enabled environment.
As opposed to a conventional environment, the CSE interface clearly improved
Level-1 SA by providing visual stimulus such as “loss of communications”
symbols, as well as indicators for artillery and external fire impacts detected
by sensors. These assisted the commanders and staff in determining when a
C2V may have taken fire and reduced the time required to recognize the loss
of a C2V.
Furthermore, once the C2V was marked destroyed, that information was
distributed out to the force in real time, and the CSE interface provided a
standardized command succession tool that allowed the commander to quickly
view and reassign all assets that were previously under the control of the
destroyed C2V. The commander or staff responsible for reassigning the assets
could also use the CSE interface to view the status of all units as well as the
mission and tasks of each unit. They could see which units needed additional
assets and which units might not be able to handle the additional workload.
With this information, the commander or staff could make effective deci-
sions regarding the reassignment of assets. In addition, the network allowed
any commander or staff member to control any asset in the battlespace, if so
assigned. There was no need for them to relocate or physically be in the same
area as the assets that were under their control.
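A minimal sketch of such a succession aid follows. The lightest-workload-first rule and all names are assumptions for illustration; the chapter does not document the CSE’s actual reassignment logic.

```python
# Hypothetical sketch of a command-succession aid like the one described:
# when a C2V is marked destroyed, its assets are offered to surviving cells,
# lightest current workload first. Names and the rule itself are invented.

def reassign_assets(lost_cell, cells):
    """Move every asset of lost_cell to surviving cells by workload."""
    orphans = cells.pop(lost_cell)
    for asset in orphans:
        # Pick the surviving cell that currently controls the fewest assets.
        target = min(cells, key=lambda c: len(cells[c]))
        cells[target].append(asset)
    return cells

cells = {
    "CAU-1": ["MCS1-1", "UAV-2", "NLOS-1"],
    "CAU-2": ["MCS2-1"],
    "CAT":   ["UAV-5", "UAV-6"],
}
print(reassign_assets("CAU-1", cells))
```

In practice, as the text notes, the human reassigning assets also weighed each unit’s mission and remaining capacity, not just raw asset counts.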
It is instructive to consider the dynamic changes in the volume of collabo-
rations associated with the command succession events. Table 7.4 shows the
average collaborations index across all the experimental runs. Interestingly, the
volume of collaborations for the two runs where a C2V was lost (i.e., Runs 5 and
7) did not show a significant increase or decrease versus the average for all runs.
Table 7.4
Impact of a Command-Cell Destruction on Average Volume of Collaboration Activity
In fact, the collaborations for each run fall within the 95 percent confidence interval for the overall average of all runs. Given that in these two
runs, one-third of the human command resources had been lost, one might
have expected the number of collaborations to decrease by perhaps one-third
or more since there were fewer people to engage in collaborations. Instead,
the collaboration statistic before and after the loss of the C2V in Runs 5
and 7 shows only a minor decrease in collaborations that is not statistically
significant. Apparently, we see a compensating increase in collaborations as
the CSE enabled the remaining command cells to mutually adjust and to
work together more intensively to address the loss of the C2V, coordinate on
reassignment of assets, and revise the mission accordingly.
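The confidence-interval check described above can be sketched as follows. The per-run counts are invented for illustration, and a normal approximation stands in for whatever test the analysts actually used.

```python
# Sketch of the statistical check described in the text: do the collaboration
# counts for the runs with a lost C2V (Runs 5 and 7) fall inside the 95%
# confidence interval around the all-run mean? Counts below are invented.

import statistics

counts = {1: 210, 2: 185, 3: 225, 4: 200, 5: 196, 6: 220, 7: 198, 8: 190}

values = list(counts.values())
mean = statistics.mean(values)
sem = statistics.stdev(values) / len(values) ** 0.5
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem  # normal approximation

print(f"mean={mean:.1f}, 95% CI=({lo:.1f}, {hi:.1f})")
for run in (5, 7):
    inside = lo <= counts[run] <= hi
    print(f"Run {run}: {counts[run]} collaborations, inside CI: {inside}")
```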
This result should be at least partially attributed to the shared situation
awareness provided through the CSE interface that enables collaboration.
Additionally, the CSE provides several collaboration tools that facilitate col-
laboration through means other than radio transmissions. Overall, the CSE
demonstrated an effective capability to allow the remaining command cells to
quickly identify the loss of a C2V and to mitigate it collaboratively, yet with-
out an excessive, counterproductive increase in the volume of the required
collaborations.
Providing improved collaboration is only one element in a complex set of
elements that can help command decision making.
CHAPTER 8
The Time to Decide: How
Awareness and Collaboration
Affect the Command Decision
Making
Douglas J. Peters, LeRoy A. Jackson,
Jennifer K. Phillips, and Karol G. Ross
Ultimately, it is the command decision, and the resulting action, that affects
the battle outcome. All the processes we have discussed to this point—
collection of information, collaboration, and formation of situation aware-
ness—contribute to the success of the battle only inasmuch as they enable
effective battle decisions. Figure 8.1 depicts but a small part of the complex
relations between actions, decisions, collaboration, situation awareness, and
automation, as we observed them in the MDC2 program. Command deci-
sions—both the command cell’s decisions and the automated decisions—lead
to battle actions.
These, in turn, alter the battlefield situation, bring additional information,
often increase or decrease uncertainty, and engender or impede collabora-
tion. Changes in the availability of information lead to a modified common
operating picture, automated decisions produced by the system, and further
actions. These changes also lead to changes in the awareness of the battle
situation in the minds of the human decision makers. Collaboration impacts
the human situation awareness both positively and negatively (as we have seen
in the previous chapters), which in turn affects the quality and timeliness of
decisions and actions.
Still, the complexity of these relations in itself does not indicate that deci-
sion making in such an environment is difficult, or at least does not inform
us what makes it difficult. Yet, as the previous chapters have told us, the
command-cell members often find it very challenging to arrive at even a
remotely satisfactory decision. Why, then, is decision making so difficult in
this environment?
After all, we provide the cell members with a powerful information gathering, integration, and presentation system. We give them convenient tools to collaborate and to act. Yet the difficulty persists. One major source of it is uncertainty; as van Creveld put it, the history of command can be understood in terms “of a race between the demand for information and the ability of command systems to meet it. The quintessential problem facing any command system is dealing with uncertainty” (van Creveld 1985).
Another major source of challenges involves the limits on the rationality
of human decision makers (Simon 1991). Such limitations are diverse: constraints on the amount and complexity of the information that a human can process or acquire in a given time period, and multiple known biases in deci-
sion making. In particular, time pressure is a well-recognized source of errors
in human decision making—as the number of decision tasks per unit time
grows, the average quality of decisions deteriorates (Louvet, Casey, and Levis
1988). In network-enabled warfare, when a small command cell is subjected
to a flood of information much of which requires some decisions, the time
pressure can be a major threat to the quality of decision making (Kott 2007).
Galbraith, for example, argued that the ability of a decision-making organi-
zation to produce successful performance is largely a function of avoiding
information-processing overload (Galbraith 1974).
Human decision-making biases are surprisingly powerful and resistant to
mitigation. Many experiments demonstrate that real human decision mak-
ing exhibits consistent and pervasive deviations (often termed paradoxes) from
the expected utility theory, which for decades was accepted as a normative
model of rational decision making. For example, humans tend to prefer those
outcomes that have greater certainty, even if their expected utility is lower
than those of alternative outcomes. For this reason, it is widely believed that
bounded rationality is a more accurate characterization of human decision
making than is the rationality described by expected utility theory (Tversky
and Kahneman 1974; Kahneman and Tversky 1979). The anchoring and
adjustment biases, for example, can be very influential when decision mak-
ers, particularly highly experienced ones, follow the decisions made in similar
situations in the past (naturalistic decision making [Klein 1999]).
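A classic worked example makes the certainty effect concrete. The numbers are Kahneman and Tversky’s, not data from these experiments: most people choose the sure payoff even though the gamble has the higher expected value.

```python
# Classic certainty-effect arithmetic (Kahneman and Tversky 1979), not data
# from the experiments: the sure option is usually chosen even though the
# gamble has the higher expected value.

def expected_value(prospects):
    """prospects: list of (probability, payoff) pairs."""
    return sum(p * v for p, v in prospects)

sure_thing = [(1.00, 3000)]
gamble = [(0.80, 4000), (0.20, 0)]

print(f"EV(sure thing) = {expected_value(sure_thing):.0f}")  # 3000
print(f"EV(gamble)     = {expected_value(gamble):.0f}")      # 3200
# Expected-utility reasoning favors the gamble; observed choices mostly don't.
```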
Although such biases can be valuable as cognitive shortcuts, especially under
time pressure, they also are dangerous sources of potential vulnerabilities. For
example, deception techniques are often based on the tendency of human
decision makers to look for familiar patterns, to interpret the available infor-
mation in light of their past experiences. Deceivers also benefit from confir-
mation bias, the tendency to discount evidence that contradicts an accepted
hypothesis (Bell and Whaley 1991).
With a system like CSE, one might expect that biases are at least partially
alleviated by computational aids. Decision-support agents like the Attack
Guidance Matrix that we discussed earlier can greatly improve the speed and
accuracy of decision making, especially when the information volume is large
and time pressure is high. But they also add complexity to the system, lead-
ing to new and often more drastic types of errors, especially when interacting
with humans (Perrow 1999).
Additional challenges of decision making stem from other factors, such as social forces within an organization, which go beyond the purely information-processing view of command.
Step 3 examines why decisions were or were not made and what information and experiential components contributed the most. This stage uses the event timeline and explores it in detail. Anomalies or gaps in the story are investigated during this phase.
Step 4 focuses on the what-if queries. The purpose of this step is to consider what
conditions may have made a critical difference in how the situations unfolded and
in the decisions that were made. It also asks the question of what a less-experienced
person may have done in the same situation to further draw out the subtle factors
that enable the interviewee to make effective decisions.
Because both the process traces and the interviews proved to be effective
in Experiment 5, Experiments 6 and 7 built on these analytic tools and intro-
duced two additional tools.
The first additional tool—a detailed timeline of a run—became necessary
due to the increased complexity and duration of runs. Although we had exten-
sive and detailed records of what happened during each run (including video
and audio recordings), the task of producing a unified, concise description of
what happened during a run was difficult after the experiment was complete.
Therefore, after each experimental run, a group of analysts who had closely
observed the various echelons and cells (friendly and enemy) wrote a short
but complete synopsis of the run. In the synopsis they were able to capture
concisely the flow of the battle and detail the most significant events of the
battle from both the Blue and Red perspectives.
The second tool we introduced in the later experiments was focus groups.
Organized for each command cell, a focus group session was relatively short
(less than one hour) and was facilitated by a member of the core analysis team
who observed that cell during planning and execution. The facilitator began
the focus group session with candidate decisions of interest identified by the
analysis team during or immediately after the run. A recorder took notes.
After the focus group session, the facilitator or recorder briefed the entire
analytic observer team on key findings.
At the focus group sessions, we tried to understand the battle in general, and
the key events specifically, from the perspective of the operators. Facilitators
used the following questions to guide the focus group and to ensure that all
members participated in the session.
• Ask the operators to summarize the battle from their perspective. Brief back the key
elements of the battle summary. Use the operator’s words to the maximum extent
possible. Introduce the decisions of interest, placing them in the context of the
battle summary.
• Ask the operators to describe the events that led to a specific decision. Listen for
decision points, collaborations, shifts in situation awareness, gaps in the story, gaps
in the timeline, conceptual leaps, anomalies or violations of expectations, errors,
ambiguous cues, individual differences, and who played the key roles. Ask clarifying
questions and then brief back the incident timeline.
• Ask those operators who played key roles questions about situation assessment and
cues. Listen for critical decisions, cues and their implications, ambiguous cues, and strategies.
The combination of focus group and CTA interviews along with the other
quantitative data logs gave us an ability to reconstruct the battle, to examine
how decisions were made, and to identify issues that may affect battle com-
mand in the future force. The following sections describe some of the result-
ing conclusions.
(Among the most common decision types, move and strike decisions accounted for about 25 percent each; see Figure 8.2.)
Still, the commanders in our experiments tended to delegate the entity-
based information-gathering responsibility to the intelligence manager. This
helped devolve a substantial cognitive load from the commander and also
served to unify control of the sensor assets. On the other hand, this dele-
gation deprived the cell of the critical big picture of the enemy since the
intelligence manager was focused on finding and characterizing individual
battlespace entities instead of developing an aggregated understanding of
the enemy.
In Experiment 6, one of the commanders recognized this deficiency and
saw that his intelligence manager was overloaded with tasks, while the effects
manager was being underutilized (since many of the engagement tasks were
automated or assisted by the CSE). The commander made the effects manager
responsible for coordinating with the intelligence manager to obtain images
for BDA and to conduct BDA assessments. The advantage of placing this
responsibility with the effects manager was obvious—not only did it alleviate
the cognitive load placed on the intelligence manager, but it also enabled a
rapid reengagement of assets that were not destroyed by the original engage-
ment. In general, the flexibility of CSE facilitated opportunities for creative
and unconventional allocation (and dynamic reallocation during the battle) of
responsibilities between members of the command cell.
BDA proved to be particularly critical and demanding throughout the experi-
mental program, and commanders struggled with obtaining quality assessments
from their available images. More often than not, BDA images (produced with a realistic imagery simulator) did not provide enough information to
make definitive conclusions about the results of an engagement. Thus, about
90 percent of BDA images from Experiment 4a were inconclusive (Figure 6.5
of Chapter 6). This ultimately led to frequent reengagements of targets in
order to ensure they were destroyed. In Experiment 4a, 44 percent of targets
were reengaged, and in Experiment 4b, 54 percent were reengaged.
The need to understand the state of enemy entities through effective BDA
was clearly demonstrated in Experiment 4a, Run 6, where a single enemy
armored personnel carrier destroyed enough of the Blue force to render
the unit combat ineffective. This particular enemy entity had been engaged
early in the battle and suffered a mobility-kill. However, the intelligence
manager classified the asset as dead based on a BDA picture. This mistake
was not found until it was too late. The Blue force was unable to continue
its mission.
Undoubtedly, tomorrow’s commanders will greatly benefit from the rich
information available to them. At the same time, they will be heavily taxed
with the need to process the vast information delivered through networked
sensors—both initial intelligence and BDA. Commanders should expect to
spend more time, perhaps over half of their time, on “seeing” the enemy.
Part of the solution is to equip them with appropriate information-processing aids.
ADDICTION TO INFORMATION
Information can be addictive. We often observed situations when com-
manders delayed important decisions in order to pursue an actual or perceived
possibility of acquiring additional information. The cost of the additional
information is time, and lost time is a heavy price to pay, especially for the
future force that relies on agility.
As with today’s commanders, uncertainty is present in all decisions, and
decisions are often influenced by aversion to risk in the presence of uncer-
tainty. Unlike today’s commanders, however, our commanders had the tools
readily available to them to further develop their information picture. They
could reduce their uncertainty by maneuvering sensor platforms into position
to better cover a critical area. This availability of easy access to additional
information was a double-edged sword because it often slowed the Blue force
significantly. Commanders commonly sacrificed the speed advantage of their
lightly armored force in order to satisfy their perceived need for information.
These delays enabled the enemy to react to an assault and move to positions
of advantage.
An example of this occurred in Experiment 4a, Run 8, where the com-
mander incorrectly assessed that the enemy had a significant force along the
planned axis of advance. Even after covering this area several times with sen-
sors and not finding many enemy assets, the commander ordered “. . . need to slow down a bit in the north . . . don’t want you wandering in there.” At
this time in the battle, the average velocity of moving Blue platforms dropped
from 20 km/h to 5 km/h. The commander exposed his force to enemy artil-
lery for the sake of obtaining even more detailed coverage of the area.
On the other hand, commanders also frequently made the opposite mistake
when they rushed into an enemy ambush without adequate reconnaissance.
An example of this occurred in Run 8 of Experiment 6 where several critical
sensor assets were lost early in the run, and the CAU-1 commander quickly
outran the coverage of his remaining sensors. In cases like this, the commander
was lulled by the lack of enemy detections on his CSE screen and advanced
without adequate information—perhaps perceiving the lack of detections as
sufficient information to begin actions on the objective. This event is discussed
in detail in the following section.
Today’s commanders are often taught that the effectiveness of a decision is
directly related to the timeliness of the decision. However, while timeliness
will remain critical, tomorrow’s commanders will need to pay more attention
to the complex trade-offs between additional information and decision timeli-
ness. Effective synchronization of information gathering with force maneu-
ver is a formidable challenge in information-rich (and therefore potentially
information addictive) warfare. Both specialized training and new tools are needed to manage that trade-off.
Figure 8.4. SAt curve for Experiment 6, Run 8. See Appendix for explanation of abbreviations.
At 52 minutes into the run, the CAU-1 commander reported that he was not “see[ing] any counterattacking forces moving towards us [i.e., CAU-1]. I think the majority of the enemy force is in [CAU-2’s] sector.”
This would be a reasonable conclusion if he were using his sensors to
develop the picture of the enemy, but in fact CAU-1 had focused his sensors
on his flank and did not have any sensor coverage in the area where he was
moving his troops. Soon thereafter, CAU-1 stumbled into a major Red coun-
terattack force and was combat ineffective within minutes.
So, the obvious question is, why did the CAU-1 commander not make
more effective use of his sensors? Certainly, one important factor was a tac-
tical blunder early in the run that led to the destruction of several key sen-
sor assets, leaving him with fewer sensors to conduct his mission. With this
reduced set of sensors, the commander had to protect his flank, scout forward
to the objective, and conduct necessary BDA. At 44 minutes into the fight,
the commander tasked his staff member to reposition the sensors to scout the
objective but was distracted by the collaboration with a staff member who
declared that he had found several enemy assets far to the west.
Because of this collaboration, the commander neglected his intended mis-
sion of covering the area ahead of his force and began focusing attention far
to the western flank of the advancing force. Yet, less than 10 minutes later,
and with no new information about the objective, the commander was secure
enough in his assessment that he began his offensive and was met with a major
enemy counterattack force that decimated his unit.
There were several reasons for this poor decision to begin operations with-
out conducting proper reconnaissance. The collaborative assessment of the
situation with CAU-2 commander and with CAT commander led the CAU-1
commander to expect few enemy forces in his zone. Later, the commander’s
collaboration with a staff member confirmed his erroneous understanding
that the enemy force was far from his zone.
Though this was a rather extreme example of a collaboration negatively
affecting decision making, there were many other examples throughout the
experiments that showed collaborations either distracting the commander
from making critical decisions or lulling him into accepting an incorrect
understanding of the battlespace. In fact, of seven collaboration process traces
chosen for detailed analysis in Experiment 6, only three cases of collabora-
tion yielded improved cognitive situation awareness for the operators. In the
remaining four cases, collaboration dangerously distracted the decision maker
from his primary focus or reinforced an incorrect understanding of the cur-
rent Red or Blue disposition.
Consider that commanders in our experiments were equipped with a sub-
stantial collection of collaboration tools—instant messaging, multiple radio
frequencies, shared displays, graphics overlays, and a shared whiteboard.
Although the commanders took full advantage of these tools and found them
clearly beneficial, there was also a significant cost to collaboration. To min-
imize such costs, future command cells will need effective protocols—and
208 Battle of Cognition
AUTOMATION OF DECISIONS
Commanders and staffs used automated decisions extensively and could use
them even more. However, the nature of these automated decisions requires
an explanation. In effect, the CSE allowed the commander to formulate his
decisions before a battle and enter them into the system. Then, during the
operations, a set of predefined conditions would trigger the decisions. Thus,
the decisions were actually made by the commander and staff. It was only the
invocation and execution of these decisions that was often performed auto-
matically when the proper conditions were met.
One type of such automatically triggered decision was the automated fires.
The conditions for invoking a fire mission included staff-defined criteria for
confidence level, type of target, the uncertainty of its location, and target-
acquisition quality. Recall that in chapter 3 we discussed the Attack Guidance
Matrix (AGM), an intelligent agent within the CSE that identified enemy tar-
gets and calculated the most suitable ways to attack them with Blue fire assets.
It could also execute fires; for example, it could issue a command to an auto-
mated unmanned mortar to fire at a particular target, automatically or semi-
automatically, as instructed by the human staff member. Typically, a commander
or an effects manager would specify the semiautomatic option: the AGM rec-
ommended the fire to them and would execute it only when a command-cell
member approved the recommendation. Occasionally, in extreme situations,
they would allow fully automated fires, without a human in the decision loop.
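A minimal sketch of this rule-based fire logic follows. The thresholds, target fields, and criteria table are invented for illustration; the chapter does not reproduce the AGM’s actual criteria.

```python
# Hypothetical sketch of the Attack Guidance Matrix logic described in the
# text: staff-defined criteria decide when a fire mission is warranted, and
# the mode decides whether a human must approve it. Thresholds are invented.

from dataclasses import dataclass

@dataclass
class Target:
    kind: str
    confidence: float        # classification confidence, 0..1
    location_error_m: float  # uncertainty of the target's position

CRITERIA = {
    "tank":     {"min_confidence": 0.8, "max_location_error_m": 50.0},
    "infantry": {"min_confidence": 0.6, "max_location_error_m": 100.0},
}

def agm_decision(target: Target, mode: str = "semiautomatic") -> str:
    rule = CRITERIA.get(target.kind)
    if (rule is None
            or target.confidence < rule["min_confidence"]
            or target.location_error_m > rule["max_location_error_m"]):
        return "no fire"
    # Semiautomatic: recommend and wait for a command-cell member's approval.
    return "execute fire" if mode == "automatic" else "recommend fire (await approval)"

t = Target("tank", confidence=0.9, location_error_m=30.0)
print(agm_decision(t))                    # recommend fire (await approval)
print(agm_decision(t, mode="automatic"))  # execute fire
```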
Another similar type of automated decision making was an intelligent
agent for automated BDA management. This agent used the commander-
established rules to determine which sensor asset was the most appropriate
to conduct BDA and would automatically task that asset to perform the BDA
assignment. For example, it would automatically command a UAV to collect
information about the status of a recently attacked target. Such decisions were
made based on the specified criteria regarding the available sensor platforms,
areas of responsibility, and enemy assets to be avoided.
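A minimal sketch of such rule-based sensor tasking might look as follows. The eligibility checks are assumptions chosen to mirror the criteria named above (platform availability, areas of responsibility, enemy assets to avoid), not the agent's real rules.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SensorAsset:
    name: str
    available: bool
    in_aor: bool              # within its area of responsibility
    threat_exposure: float    # 0 (safe) .. 1 (likely loss to Red air defense)
    range_to_target_km: float

def pick_bda_sensor(sensors: List[SensorAsset],
                    max_exposure: float = 0.3) -> Optional[SensorAsset]:
    """Apply the commander-established rules, then prefer the closest
    qualifying platform for the BDA task."""
    eligible = [s for s in sensors
                if s.available and s.in_aor
                and s.threat_exposure <= max_exposure]
    return min(eligible, key=lambda s: s.range_to_target_km, default=None)

uavs = [SensorAsset("UAV-1", True, True, 0.1, 12.0),
        SensorAsset("UAV-2", True, True, 0.5, 4.0)]  # closer, but too exposed
chosen = pick_bda_sensor(uavs)  # selects UAV-1 and would task it for BDA
```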
In each experiment, we found that command-cell members used the auto-
mated fires feature effectively and frequently. Commanders and effects man-
agers spent ample time prior to the beginning of battle defining the conditions
for automated fires. During the runs, these settings were rarely changed and
almost every run had instances of automated engagements of enemy assets.
However, there were also many manual engagements that could have been
automated but weren’t. Instead, a cell member would manually identify a Red
target, select a Blue fire asset and suitable munitions, and then issue a com-
mand to fire—overall, a much more laborious and slower operation than a
semiautomated fire. One reason for preferring such manual fires was that
it often took too long to accumulate enough intelligence on an enemy target to satisfy the predefined engagement criteria.

This experience suggests several requirements for automated decision-making tools. First, such a tool must be simple enough for the operators to understand when it will act and when it won’t. In particular, there must be a very clear and easily understandable distinction between computer control and human control.
For example, in the case of the automated fires, it was very clear whether the human or the computer was to make the final decision, and once a munition was launched, there was no opportunity for—or confusion about—the control. However, in the case of the BDA management, there was continuous uncertainty about who was in control of a given platform—a human or a computer—and the information manager had no means of querying the system to resolve his questions about control.
Second, it should be easy for the operator to enter rules that govern an
automated decision-making tool. For example, it may initially seem obvious to the developers of an automated tool that it should call for fires on detected enemy tanks as soon as possible. However, when low on ammunition, a commander might want to fire only at those tanks that can affect his axis of advance. Likewise, he may not want to engage tanks automatically near populated areas or when a civilian vehicle has been spotted nearby. The more rules and tweaks there are, the harder it becomes to understand the decisions the tool makes, and the sooner an operator will come to distrust the tool when it does not perform as he expects.
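The sketch below illustrates the point: each tweak is easy to state in isolation, but their composition quickly becomes hard for an operator to predict. The predicates and thresholds are illustrative assumptions, not rules drawn from the experiments.

```python
def should_engage(target_kind: str, own_ammo_fraction: float,
                  distance_to_population_m: float, civilian_nearby: bool,
                  affects_axis_of_advance: bool) -> bool:
    """Baseline rule plus the kinds of tweaks a commander might add."""
    if target_kind != "tank":
        return False
    # Tweak 1: when low on ammunition, engage only tanks that can
    # affect the axis of advance.
    if own_ammo_fraction < 0.25 and not affects_axis_of_advance:
        return False
    # Tweak 2: never engage automatically near populated areas.
    if distance_to_population_m < 500:
        return False
    # Tweak 3: hold fire if a civilian vehicle was spotted nearby.
    if civilian_nearby:
        return False
    return True

# Three innocuous rules already yield many interacting cases the
# operator must keep in mind when guessing what the tool will do.
should_engage("tank", 0.2, 2000.0, False, True)  # -> True
```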
Naturally, other nontechnological factors also affect the extent to which
automated decisions will be available to a future force. Perhaps our com-
manders accepted the automated fires so easily because the experiments were
merely a simulation: the consequence of a wrong automated decision was
the destruction of computer bytes and not of real people. In today’s practice,
a human is personally accountable for every fire decision, and great care is
taken to avoid accidents. With any automation of decisions related to either
lethal fires or to any other battle actions come many challenging questions
about responsibility and accountability.
Concluding Thoughts

Commanders, for example, sometimes concluded prematurely that an area had been adequately reviewed by their sensors and was devoid of enemy assets. A tool
could help operators maintain awareness of which areas and threats have (or have not) been seen by various sensors, of the capabilities of those sensors, and of the time elapsed since the sensors last visited an area. It could also proactively highlight to
the operator the areas that were inadequately explored or explored too long
ago. On a related note, tools that predict possible actions of the enemy, such
as the RAID system developed by DARPA (Ownby and Kott 2006), can alert
a commander to potential threats he may not have considered.
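One simple form such a coverage tool could take is a map of last-observed times per named area. The sketch below, with invented area names and thresholds, flags areas that were never seen or were seen too long ago.

```python
import time
from typing import Dict, List, Optional

class CoverageTracker:
    """Track when each named area was last observed by any sensor."""

    def __init__(self, areas: List[str]) -> None:
        self.last_seen: Dict[str, Optional[float]] = {a: None for a in areas}

    def record_observation(self, area: str) -> None:
        self.last_seen[area] = time.time()

    def stale_areas(self, max_age_s: float) -> List[str]:
        """Areas never visited, or not revisited within max_age_s."""
        now = time.time()
        return [a for a, t in self.last_seen.items()
                if t is None or now - t > max_age_s]

tracker = CoverageTracker(areas=["NAI-1", "NAI-2", "NAI-3"])
tracker.record_observation("NAI-1")
# Proactively highlight NAI-2 and NAI-3 to the operator.
alerts = tracker.stale_areas(max_age_s=900.0)
```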
The necessity and difficulty of developing decision aids point to a still more challenging and overarching question—the nature of the relationship between the tools of command and the minds of commanders. There is a well-
respected genre of literature dedicated to the history and the relationships of
technology and warfare (Boot 2006). A common theme of such works is the
assertion that technology is important but generally subordinate to other non-
technological factors, such as tactics and training. In other words, technology
is a collection of physical things, tools, artifacts, and as such it is entirely dis-
tinct and different from tactics, techniques, procedures, education, training,
and other things that exist in the human mind.
This view is misleading. More insightful definitions of technology stress
that technology is not a collection of tools, but rather a know-how of tech-
niques and processes. To explain how this applies to military technology, let us turn to a historical example. Consider the tercio, a successful sixteenth-century invention of the Spanish military (Oman 1937). A
formation of about 1,500 to 3,000 soldiers, it was composed of several mobile
groups of musketeers and a square of pikemen. Combining firepower, the
stability of heavy infantry, and the discipline of its well-trained professional
soldiers, the tercio remained highly effective for over a century.
Even though the primary technical implements of tercio were the pike and
the musket, it would be misleading to identify them as the technology of
tercio. The know-how definition of technology is much more useful. The
tercio was a technology system, and its effectiveness as a technology was a
product of the collective know-how of its soldiers and commanders: how to
make and use pikes and muskets, how to form and operate the solid square
and the mobile teams of musketeers, how to maintain discipline and control
fear in the face of danger, and how to position and move the tercio. It was the
systemic know-how embodied in an integration of the weapons’ hardware
and the so-called software of human minds that constituted the technology
of tercio. It was not merely the pike and the musket.
Similarly, the technology of battle command is not its technical compo-
nents—a network or a computer or battle-planning software. Instead, it is
the collective know-how of battle command embodied in an integrated
whole: tools with their hardware and software, and human minds with their
techniques, procedures, and training. The oft-repeated arguments that dif-
ferentiate military technology from tactics and training are misleading. The latter are a part of the former; together they form an inseparable whole.
Acquiring adequate situation awareness proved persistently difficult for commanders and staff, sometimes with catastrophic results to the Blue force.
In view of the extremely strong role played by situation awareness in the
success of a battle, this difficulty deserves a great deal of attention.
Our experimental data suggest that situation awareness—as measured quan-
titatively using the instrumentation and techniques described in chapter 5—is
the most influential factor in determining the success of a mission. More
precisely, the critical factor is the difference between the situation awareness
of the Blue command and the situation awareness of the Red command. With
a greater positive difference, the Blue force has a greater chance of winning. Even
the temporal dynamics of situation awareness are very influential. When the
Blue force fails to develop a positive advantage in situation awareness—usually
due to an unsuccessful counterreconnaissance battle—its inadequate situation
awareness enters a self-reinforcing cycle that is rarely reversed.
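To make the measure concrete, here is a minimal sketch of the kind of differential one could compute from instrumented ground truth. The scoring is an illustrative assumption, not the chapter 5 instrumentation.

```python
from typing import Set

def sa_score(perceived: Set[str], ground_truth: Set[str]) -> float:
    """Fraction of the true opposing picture a side actually holds."""
    if not ground_truth:
        return 1.0
    return len(perceived & ground_truth) / len(ground_truth)

def sa_advantage(blue_sees: Set[str], red_truth: Set[str],
                 red_sees: Set[str], blue_truth: Set[str]) -> float:
    """Positive values favor Blue; per our findings, this difference
    matters more than either side's absolute score."""
    return sa_score(blue_sees, red_truth) - sa_score(red_sees, blue_truth)

# Blue knows 2 of 3 Red assets; Red knows 1 of 4 Blue assets.
adv = sa_advantage(blue_sees={"t1", "t2"}, red_truth={"t1", "t2", "t3"},
                   red_sees={"b1"}, blue_truth={"b1", "b2", "b3", "b4"})
# adv is about 0.42 > 0: a modest Blue picture still yields the advantage.
```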
What, then, are the challenges of acquiring situation awareness? While the
full answer must await further research, some culprits are fairly apparent. Part
of the blame can be placed on the CSE tools, especially on the means of pre-
senting the information to the operators. Further work on such tools must
focus on more meaningful and insightful presentations than merely the dis-
play of icons on the map. More important, however, seem to be the operators’
psychological biases. Learning, for example, can be a double-edged sword.
Having noticed a pattern of behavior displayed by the Red force in an earlier
war game, the commander tends to “recognize” it in the current situation.
With a creative, intelligent Red commander, however, the recognition is not
always helpful. Instead, it can lead to an erroneous assessment of the Red
situation or, worse yet, into a deception trap. Once a hypothesis is formed,
the commander is reluctant to abandon it and tends to ignore or rationalize
contradicting evidence. Also of note is the common and apparently uninten-
tional tendency to look for new enemy targets at the expense of assessing the
damage done to an already engaged target.
Of a somewhat similar nature is what appears to be the near-obsessive behav-
ior of a commander who eagerly watches for a change to appear on his screen—
often a new enemy platform detected by Blue sensors—and then vigorously
explores the new information in every detail and immediately proceeds to issue
related commands. Instead of concentrating on the broader meaning of the
unfolding battle, such a commander is absorbed in a potentially insignificant
detail. Experiment observers refer to such behavior as missing the forest for
the trees. In a related form of this behavior, the commander is paralyzed by an
endless cycle of hunting for new information—as an enemy asset is detected,
he calls for additional reconnaissance of the area while delaying any action, and
so on. While only a few commanders exhibit such behaviors consistently, most
succumb to them on occasion.
Other challenges have more obvious and rational causes. For example, an
important requirement for a force reliant on standoff engagements is to syn-
chronize maneuver and information collection: the maneuver assets should
not move into an area until it has been properly explored by sensing assets. And, as noted earlier, collaboration, for all its benefits, carries a significant cost. To minimize such costs, future command cells will need specialized
training, discipline, and protocols for collaborating. These should guide the
proper frequency, conditions, and modes of collaboration.
Still, with all the organization, training, procedures, and tools, battle-
command technology cannot and will not produce perfect situation awareness.
This will remain an inescapable fact of warfare as long as it involves an intel-
ligent enemy who works hard to disguise the situation from his opponents.
Not only is perfect situation awareness impossible, it is also unnecessary.
Recall that we find the key determinant of success is not the absolute level
of situation awareness but the difference between the Red and Blue situation
awareness levels. A modest measure of situation awareness suffices when the
enemy is left with an even smaller measure.
Critics of network-enabled warfare sometimes lampoon the concept by
arguing that it relies on an impossibility—perfect intelligence (Kagan 2003).
The argument is fallacious. Network-enabled warfare neither requires nor
relies on perfect intelligence. In modern warfare, the fog of war is bound
to grow thicker, and a key contribution of network-enabled approaches
should be to enable operations under conditions of greater, not lower, lev-
els of uncertainty in battlespace intelligence. The proliferation of technology
and the gradual reduction in the technology gap between the United States
and its adversaries, the urbanization of combat, the growing sophistication
of irregular warfare practiced by the adversary, information warfare, and the
rigorous adherence to the laws of war by U.S. forces—all these contribute to
the thickening of the fog.
More disturbing than the relatively low level of achievable situation aware-
ness is the poor ability of commanders to self-assess their situation awareness.
In our experiments, we find limited correlation between the actual situation
awareness and the commander’s perception of his situation awareness. In
some cases, the commander gloomily worries about unknown dangers while
in fact possessing a nearly perfect picture of the enemy situation. In other
cases, with a grossly misunderstood situation, the commander marches con-
fidently into an ambush. Self-awareness seems even harder than the aware-
ness of the enemy. Can some tools help in this matter? It appears doubtful.
Is it possible that some yet unknown type of training will help? There is a
particularly troubling possibility: what if the very nature of network-enabled
command—with its massive flows of information, vivid displays, and chal-
lenged cognition—leads the commander to reduced self-awareness?
Such doubts aside, developments in battle-command technologies can
help the commander cope with the cognitive challenge, even if they cannot
eliminate it. To explain this point, let us resort to another historical analogy—
armored warfare. From the fifteenth-century battlewagons of Jan Zizka (see
Oman 1960) to the present-day development of the Future Combat System,
the struggle to provide warfighters with greater protection and lethality
always demands greater weight and propulsion power, which in turn strains
mobility and logistics. The progress of technology helps us reach increasingly favorable balances among these competing demands, even though the underlying tension never disappears.

Appendix
The terms here are defined in the way they are used in this book, which may
differ from the usage accepted elsewhere.
Some of the terms and abbreviations describe the systems used by the
hypothetical Red and Blue forces in our experiments. In the experimental
war games, the equipment of the Blue force was partly inspired by—but not
identical to—the U.S. Army FCS family of systems. More information on
FCS-related systems can be found at the FCS Web site (http://www.army.mil/fcs/). Also see the 2005 FCS Briefing Book at http://www.boeing.com/defense-space/ic/fcs/bia/041029_2005flipbook.html.
The equipment of the experimental Red force was usually modeled as upgrades of existing non-U.S. systems. Below, in describing such systems, we often refer to a comparable modern system. The reader can find more information regarding modern weapon systems at Web sites such as Wikipedia (www.wikipedia.org), GlobalSecurity (www.globalsecurity.org), and FAS (www.fas.org).
recon: reconnaissance
Red: refers to an enemy force
RedSEM: Red Sensor Effects Module
resupply: replenishing stocks in order to maintain required levels of supply
retasking: assigning a new or modified task to an asset
RFA: Restricted Fire Area
Ricebag ARK-1M: Red counterbattery artillery-locating radar on a tracked, medium-
armored platform; a hypothetical upgrade of the modern Russian ARK-1M Rys
ROE: Rules of Engagement
RPG: Rocket-Propelled Grenade
RPG-22: Red infantry’s and insurgent’s man-portable rocket-propelled grenade
launcher; a hypothetical upgrade of the modern Russian RPG-22
RSTA: Reconnaissance, Surveillance, and Target Acquisition
SA: Situation Awareness
SA-13: Red mobile, short-range, low-altitude air-defense surface-to-air missile system,
tracked, medium armored; a hypothetical upgrade of the modern Russian 9K35
Strela-10
SA-15 Decoy: Red static decoy that emulates the SA-15 system visually and by elec-
tronic emissions
SA-15: Red mobile, low-to-medium-altitude air-defense surface-to-air missile system,
tracked, medium armored; a hypothetical upgrade of the modern Russian 9K330
Tor
SA-18: Red man-portable surface-to-air missile system; a hypothetical upgrade of the
modern Russian 9K38 Igla
SAc: Situation Awareness: Cognitive
SAF: Semi-Automated Forces
SAGAT: Situation Awareness Global Assessment Technique
SAM: Surface-to-Air Missile
SAR: Synthetic Aperture Radar
SASO: Stability and Support Operation
SAt: Situation Awareness: Technical
SCT-TM: Scout Team
SEM: Sensor Effects Module
sensor: a device that responds to a stimulus, such as heat or light, and generates a
signal
SINCGARS: Single Channel Ground and Airborne Radio System
SK: System Knowledge
SME: Subject Matter Expert
SOF: Special Operations Forces
SP: Self-Propelled
SPF: Special Purpose Forces
Acknowledgments
Not only is this book largely inspired by two research programs, but it also bor-
rows heavily from the programs’ reports and archives. This places the authors
in heavy debt to a very large number of people who conceived the programs,
built experimental systems, conducted experiments, analyzed the data, and
generated many of the ideas we attempted to present in this work. The authors gratefully acknowledge these people as important leaders, contributors, and participants in the many activities that form the basis for this book.
Unfortunately for the authors, a policy of the U.S. Department of Defense
restricts our ability to mention by name—for obvious reasons—the depart-
ment’s military and civilian personnel who contributed to this program. This
in no way diminishes our gratitude and appreciation of their enormous efforts.
The best we can do in such cases is to mention the organizations for which
these contributors and supporters work.
A number of senior leaders of the U.S. Army deserve deep thanks for
encouraging, motivating, and sponsoring this research. We are able to men-
tion only a few of them, particularly retired generals Eric Shinseki and Kevin
Byrnes. The goals and vision of our work greatly benefited from the expe-
rience and wisdom of James Barbarello, Allan Tarbell, John Gilmore, Paul
Casselburg, General (retired) David Maddox, and Colonel (retired) Greg
Fontenot. Critical operational concepts were provided by Joe Braddock, Lou
Marquet, and James Tegnalia; retired generals Paul Gorman, Paul Funk, and
Huba Wass de Czege; and retired colonels Ted Cranford, Brooks Lyles, Jack
Gumbert, and Dave Redding. Multiple military leaders, managers, and ana-
lysts at the Army TRADOC provided continued sponsorship, guidance, and
liaison with the army development community. Faculty, cadets, and interns of
the USMA provided useful studies. Contributions were also provided by the
Naval Postgraduate School and the Army Research Lab.
Experimental battle command systems and the simulation testbed required
a broad range of technologies. Mark Curry, Diane Oconnor, Tom Ince, Rob
Lawrence, Craig Klementowski, and Digant Modha of the Viecore Federal
Systems Division led the development of the Battle Command Support Envi-
ronment. John Sausman of Lockheed Martin provided Dismounted Infantry
Behaviors software. The U.S. Army Communications-Electronics Research,
Development and Engineering Center’s Information and Intelligence Warfare
Directorate helped us with the Synthetic Aperture Radar Model, while the
Night Vision & Electronic Sensors Directorate supplied the Mine/Counter-
mine Server. Ralph Forkenbrock, Jim Page, Ray Miller, and Mike Dayton of
Science Applications International Corp. greatly enhanced the OTB simula-
tion system and the Driver-Gunner Simulation model. The Army Topographic
Engineering Center provided the crucial Terrain Server. John Huebner and
John Roberts of Atlantic Consulting Services Inc. developed the very useful
C2 Tasking Library. Mark Berry and Jim Adametz of Computer Sciences Corp.
led the development of the Sensor Effects Model.
Integration of such complex systems—and the management of the required
multifaceted engineering efforts—were ably handled by the Army Research,
Development and Engineering Command; the Army Communications-
Electronics Command; and the Army Program Executive Office for Simula-
tion, Training and Instrumentation.
A great fraction of the efforts in this research was dedicated to experiment
design, experiment execution, data collection, and analysis. We are grateful to
Darrin Meek, LeeAnn Bongiorno, and retired colonels Steve Williams and
Todd Sherrill of Applied Research Associates Inc. for their contributions to
the Sensor Coverage Tool and data collection systems; to Don Timian and
Rick Hyde of Northrop Grumman for Experiment 1 and 2 design and collec-
tion plans; to the researchers of Army Research Institute for human factors
performance analysis; to James Hillman and Andrea Kagle of Johns Hopkins
University-Applied Physics Lab for the information exchange requirements
analysis. Other important contributions to the experimental design and analy-
sis have been made by Beth Meinert and Colonel (retired) Robert Chadwick of
the MITRE Corp. and by personnel of the Army Training and Doctrine Com-
mand Analysis Center. The extensive laboratory infrastructure that housed
and supported the experiments was the work of Jim Seward and Manish Bhatt
of David H. Pollack Consulting. Execution of the experiments, particularly
the portrayal of the Red force and the after-action reviews for the Blue force,
were made possible by the talents of retired colonels Darrell Combs and Al
Rose, and their colleagues from Military Professional Resources Inc.
The primary funding for this research has been provided by DARPA and by
the U.S. Army. Happily, we are allowed to mention and to thank the DARPA
leaders and managers who made this work possible: Frank Fernandez, Tony
Tether, Dick Wishner, Ted Bially, David Whelan, and Allan Adler. DARPA
also granted us the permission to use the materials on which this book is
partly based; it has been approved for public release, distribution unlimited.
The work reflected in chapter 4 was supported through Army Research Labo-
ratory’s Advanced Decision Architectures Collaborative Technology Alliance.
Of course, the views, opinions, and findings presented here are those of the
authors and should not be construed as those of any agency or organization
of the U.S. government.
Finally, special thanks to Susan Parks, Scott Fuhrer, James Scrocca, Terry
Stephenson, and Michael Ownby who supported this effort in numerous
ways.
Notes
INTRODUCTION
1. Coevolution of technology and warfare is a topic of many excellent studies.
A recent example is Max Boot, War Made New (New York: Gotham Books, 2006).
2. A highly influential work is David S. Alberts, John J. Garstka, and Frederick
P. Stein, Network Centric Warfare: Developing and Leveraging Information Superiority
(Washington, DC: CCRP, 2000).
3. Adoption of unmanned aerial vehicles by all services of the U.S. military has
been rapid and rather noncontroversial. A readable introductory history is offered in
Laurence R. Newcome, Unmanned Aviation: A Brief History of Unmanned Aerial Vehicles (Reston, VA: AIAA [American Institute of Aeronautics and Astronautics], 2004).
4. U.S. Army, FCS Web site, http://www.army.mil/fcs/.
5. Discussed in A. Bacevich, The Pentomic Era: The U.S. Army Between Korea and
Vietnam (Darby, PA: DIANE Publishing Co., 1995).
6. Congressional Budget Office, “The Army’s Future Combat Systems Program
and Alternatives,” August 2006, p. XXII.
7. DARPA Web site, http://www.darpa.mil/.
8. Max Boot, War Made New, p. 463.
9. J. Gumbert, T. Cranford, T. Lyles, and D. Redding, “DARPA’s Future Combat
System Command and Control,” Military Review (May–June 2003): 79–84.
10. For the sake of brevity, we will refer to the combination of these two programs
as MDC2.
11. J. Barbarello, M. Molz, and G. Sauer, “Multicell and Dismount Command and
Control—Tomorrow’s Battle Command Environment Today,” Army AL&T (July–
August 2005): 66–71.
12. For the sake of simplicity and brevity, we use he when referring to a com-
mander or a staff member. This is not to imply the gender of the person.
CHAPTER 1
The material in this chapter draws extensively on a report from Carrick Communica-
tions Inc., which has kindly given permission for its use.
1. Histories and analyses of Jutland and its subsequent controversies are legion. A
very readable online summary of the battle is found at http://www.worldwar1.co.uk/jutland.html.
2. Andrew Gordon, The Rules of the Game: Jutland and British Naval Command
(London: John Murray Publisher Ltd., 2000).
3. U.S. Army, “Battle Command,” in 2003 U.S. Army Transformation Roadmap,
http://www.army.mil/2003TransformationRoadmap.
4. Robert Coram, Boyd: The Fighter Pilot Who Changed the Art of War (New York:
Little, Brown and Co., 2002), pp. 327–44.
5. Carl von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton,
NJ: Princeton University Press, 1976), pp. 101–2.
6. As one highly regarded military theorist stated, “The purpose of discipline
is to make men fight in spite of themselves.” Charles Ardant du Picq, Battle Studies,
trans. Col. John N. Greely and Maj. Robert C. Cotton, 1921, http://www.gutenberg.org/dirs/etext05/8btst10.txt.
7. For more discussion of this challenge, see, for example, Victor Davis Hanson,
“Discipline,” in Reader’s Companion to Military History, http://college.hmco.com/history/readerscomp/mil/html/mh_015100_discipline.htm.
8. Clausewitz, On War, p. 113.
9. Gordon, The Rules of the Game, p. 21.
10. Clausewitz, On War, p. 101.
11. For a perceptive analysis of two revealing cases of such commander–subordinate
disconnection, see Col. Adolf Carlson, A Chapter Not Yet Written: Information Manage-
ment and the Challenge of Battle Command (Washington, DC: Institute for National
Strategic Studies, 1995), http://www.ndu.edu/inss/siws/ch5.html.
12. Clausewitz, On War, p. 120.
13. Atul Gawande, Complications: A Surgeon’s Notes on an Imperfect Science (New York:
Henry Holt, 2002).
14. Gordon C. Rhea, The Battle of the Wilderness May 5–6, 1864 (Baton Rouge:
Louisiana State University Press, 1994).
15. Quoted in Martin van Creveld, Command in War (Cambridge, MA: Harvard University
Press, 1985), p. 153. Contrast this with Ulysses S. Grant’s view that “the distant rear
of an army engaged in battle is not the best place from which to judge correctly what
is going on in front,” Ulysses S. Grant, Personal Memoirs (New York: Penguin Books,
1999), p. 185.
16. Timothy Lupfer, The Dynamics of Doctrine: The Changes in German Tactical
Doctrine during the First World War (Fort Leavenworth, KS: U.S. Army Command and
General Staff College, 1981). See also van Creveld, Command in War, pp. 183–84.
17. At Spotsylvania in 1864, for example, it prompted a bitter dispute between
Union generals George Meade and Philip Sheridan that finally had to be resolved by
Grant himself. See, for example, Bruce Catton, A Stillness at Appomattox (New York:
Doubleday & Company, 1953), pp. 99–100.
18. Also Russia, but Stalin’s prewar purge of his officer corps largely stifled practi-
cal implementation, as early Soviet defeats demonstrated only too starkly.
19. Ulysses S. Grant, attributed (http://en.wikiquote.org/wiki/Ulysses_S._Grant).
20. Field Marshal The Viscount Slim, Defeat into Victory (Philadelphia: David
McKay Company, 1961), p. 460.
21. TRADOC Pamphlet 525–3–0, The Army in Joint Operations: The Army Future
Force Capstone Concept (Fort Monroe, VA: U.S. Army Training and Doctrine Command,
April 7, 2005).
22. Clausewitz, On War, p. 77.
23. As an NCO in Iraq recently stated, “You know that mission we had all planned
out? That all just went to s—t.” Margaret Friedenauer, “Soldiers Employ Daring
Tactic,” Fairbanks Daily News–Miner, December 21, 2005.
24. van Creveld, Command in War, p. 8.
25. van Creveld, Command in War, pp. 255–56.
26. “The Leadership Legacy of John Whyte,” ARMY, December 2005, p. 64.
27. Army Field Manual 3.0, Operations (Washington, DC: Department of the Army,
June 2001), pp. 4–17. Debate persists about whether the term should be replaced by
orchestrating to diminish what some see as an unhealthy fixation on scheduling.
28. Mark Adkin, The Charge: Why the Light Brigade Was Lost (South Yorkshire, UK:
Leo Cooper, 1996), pp. 125–37.
29. Bill Mauldin, Up Front (New York: W.W. Norton & Co., 2000), p. 225.
30. Col. (Ret.) Gregory Fontenot, E. J. Degen, and David Tohn, On Point: The United States Army in Operation Iraqi Freedom (Fort Leavenworth, KS: Combat Studies Institute Press, 2004), p. 220.
31. Clausewitz, On War, p. 119.
32. One of Clausewitz’s modern successors went so far as to argue economy of
force to be the foundation of all other principles of war. See J.F.C. Fuller, The General-
ship of Ulysses S. Grant (Cambridge, MA: Da Capo Press, 1991), p. 18.
33. In one of his less-quoted comments, Moltke warned that an error in initial
deployment might well prove irremediable. He was speaking of operations, but the
problem is no less acute for the tactical commander.
34. Clausewitz, On War, p. 75.
35. Col. David Perkins, “Command Briefing,” May 18, 2003, quoted in Fontenot,
On Point, p. 295.
36. Maj. John Altman, quoted in Fontenot, On Point, p. 284.
37. Charles B. Macdonald and Sidney T. Matthews, Three Battles: Arnaville, Altuzzo,
and Schmidt (Washington, DC: Office of the Chief of Military History, Department of
the Army, 1952), pp. 268–71.
38. For the account prompting the comment, see Correlli Barnett, The Desert
Generals (Bloomington: Indiana University Press, 1982).
39. An excellent treatment is Donald W. Engels, Alexander the Great and the Logis-
tics of the Macedonian Army (Berkeley: University of California Press, 1980).
40. Quoted in Martin van Creveld, Supplying War: Logistics from Wallenstein to
Patton (Cambridge: Cambridge University Press, 1977), p. 232.
41. Fontenot, On Point, pp. 408–9.
42. Chief of Staff, Army Warfighter Conference, Washington, DC, July 1984. The
writer was present.
43. Office of the Inspector General, “No Gun Ri Review” (Washington, DC:
Department of the Army, January 2001).
44. Proportionately, “human wave” attacks during the 1980–1988 Iran–Iraq War
may have come close. See, for example, Efraim Karsh, The Iran-Iraq War 1980–1988
(Oxford, UK: Osprey Publishing Ltd., 2002), pp. 35–36.
CHAPTER 2
1. Not a real name. When referring to the future, all names, characters, organiza-
tions, places, and incidents featured in this publication are either the product of the
authors’ imaginations or are used fictitiously.
2. David L. Grange, Huba Wass de Czege, Richard D. Liebert, John E. Richards, Charles A. Jarnot, Allen L. Huber, and Emery E. Nelson, Air-Mech-Strike: Asymmetric Maneuver Warfare for the 21st Century, ed. Michael L. Sparks (Paducah, KY: Turner Publishing Company, 2002).
3. B. Berkowitz, The New Face of War (New York: The Free Press, 2003),
pp. 111–15.
4. One example is David S. Alberts, John J. Garstka, and Frederick P. Stein, Net-
work Centric Warfare: Developing and Leveraging Information Superiority (Washington,
DC: CCRP, 2000).
5. U.S. Army, FCS Web site, http://www.army.mil/fcs/.
6. Congressional Budget Office, “The Army’s Future Combat Systems Program
and Alternatives,” August 2006, pp. 35–39.
7. P.A. Wilson, J. Gordon, and D. E. Johnson, “An Alternative Future Force:
Building a Better Army,” Parameters (Winter 2003–2004): 19–39.
8. Congressional Budget Office, “The Army’s Future Combat Systems Program
and Alternatives,” pp. 31–32.
9. Congressional Budget Office, “The Army’s Future Combat Systems Program
and Alternatives,” pp. 40–43.
10. Congressional Budget Office, “The Army’s Future Combat Systems Program
and Alternatives,” pp. 44–45.
11. F. Kagan, “War and Aftermath,” Policy Review (August–September 2003), http://www.hoover.org/publications/policyreview/3448101.html.
12. U.S. Army, Army assessment of Congressional Budget Office study “The Army’s
Future Combat Systems Program and Alternatives,” http://www.army.mil/fcs/.
13. J. Gumbert, T. Cranford, T. Lyles, and D. Redding, “DARPA’s Future Combat
System Command and Control,” Military Review (May–June 2003): 79–84.
14. U.S. Army, TRADOC Pamphlet 525–3-90, O & O, The United States Army
Future Force Operational and Organizational Plan for the Unit of Action (Fort Knox, KY:
Unit of Action Maneuver Battle Lab, December 15, 2004).
CHAPTER 3
1. Gheorghe Tecuci, Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies (San Diego, CA: Academic Press, 1998), p. 1.
2. OneSAF.org, http://www.onesaf.org/onesafotb.html (accessed October 12, 2006).
3. CJMTK, “What is the CJMTK?” 2006, http://www.cjmtk.com//Faq/FaqMain.aspx#Q1 (accessed September 20, 2006).
4. Michael Powers, “Battlespace Terrain Reasoning and Awareness (BTRA),”
2003, http://www.tec.army.mil/fact_sheet/BTRA.pdf (accessed October 12, 2006).
5. Rich Bormann, “A Decision Support Framework for Command and Control in
a Network Centric Warfare Environment,” technical report (Eatontown, NJ: Viecore,
2006).
6. Haley Systems Inc., “Rete Algorithm,” 2006, http://www.haley.com/281475782021120/brmsoverview/retereport.html (accessed September 20, 2006).
7. Haley Systems Inc., “HaleyRules: Business Rules Engine,” 2006, http://www.haley.com/1548387250868224/products/HaleyRules.html (accessed October 12, 2006).
8. Production Systems Technologies Inc., “Clips/R2,” 2003, http://www.pst.com/clpbro.htm (accessed October 12, 2006).
CHAPTER 4
Bolstad, C. A., and M. R. Endsley. 1999. Shared Mental Models and Shared Displays:
An Empirical Evaluation of Team Performance. Proceedings of the 43rd Annual
Meeting of the Human Factors and Ergonomics Society, Houston, TX, Human
Factors and Ergonomics Society, September 27–October 1, pp. 213–17.
Bolstad, C. A., and M. R. Endsley. 2000. The Effect of Task Load and Shared Displays
on Team Situation Awareness. Proceedings of the 14th Triennial Congress of
the International Ergonomics Association and the 44th Annual Meeting of the
Human Factors and Ergonomics Society, Santa Monica, CA, Human Factors
and Ergonomics Society, July 30–August 4, pp. 189–92.
Bolstad, C. A., and M. R. Endsley. 2003. Measuring Shared and Team Situation Aware-
ness in the Army’s Future Objective Force. Proceedings of the Human Factors
and Ergonomics Society 47th Annual Meeting, Denver, CO, Human Factors
and Ergonomics Society, October 13–17, pp. 369–73.
Bolstad, C. A., J. M. Riley, D. G. Jones, and M. R. Endsley. 2002. Using Goal Directed
Task Analysis with Army Brigade Officer Teams. Proceedings of the 46th Annual
Meeting of the Human Factors and Ergonomics Society, Baltimore, MD, Human
Factors and Ergonomics Society, September 30–October 4, pp. 472–76.
Collier, S. G., and K. Folleso. 1995. SACRI: A Measure of Situation Awareness for
Nuclear Power Plant Control Rooms. In Experimental Analysis and Measure-
ment of Situation Awareness, ed. D. J. Garland and M. R. Endsley, pp. 115–22.
Daytona Beach, FL: Embry-Riddle University Press.
Dyer, J. L., R. J. Pleban, J. H. Camp, G. H. Martin, D. Law, S. M. Osborn, et al. 1999.
What Soldiers Say about Night Operations. In Volume 1: Main Report (No.
ARI Research Report 1741). Alexandria, VA: Army Research Institute for the
Behavioral and Social Sciences.
Endsley, M. R. 1988. Design and Evaluation for Situation Awareness Enhancement.
Proceedings of the Human Factors Society 32nd Annual Meeting, Anaheim,
CA, Human Factors Society, October 24–28, pp. 97–101.
Endsley, M. R. 1990. Predictive Utility of an Objective Measure of Situation Aware-
ness. Proceedings of the Human Factors Society 34th Annual Meeting,
Orlando, FL, Human Factors Society, October 8–12, pp. 41–45.
Endsley, M. R. 1995a. Direct Measurement of Situation Awareness in Simulations of Dynamic Systems: Validity and Use of SAGAT. In Experimental Analysis and Measurement of Situation Awareness, ed. D. J. Garland and M. R. Endsley, pp. 107–13. Daytona Beach, FL: Embry-Riddle University Press.
Endsley, M. R. 1995b. Measurement of Situation Awareness in Dynamic Systems.
Human Factors 37(1): 65–84.
Endsley, M. R. 1995c. Toward a Theory of Situation Awareness in Dynamic Systems.
Human Factors 37(1): 32–64.
Endsley, M. R. 1996. Situation Awareness Measurement in Test and Evaluation. In
Handbook of Human Factors Testing and Evaluation, ed. T. G. O’Brien and S. G.
Charlton, pp. 159–80. Mahwah, NJ: Lawrence Erlbaum.
Endsley, M. R. 2000. Direct Measurement of Situation Awareness: Validity and Use of SAGAT. In Situation Awareness Analysis and Measurement, ed. M. R. Endsley and D. J. Garland, pp. 147–74. Mahwah, NJ: Lawrence Erlbaum.
Endsley, M. R., and C. A. Bolstad. 1994. Individual Differences in Pilot Situation
Awareness. International Journal of Aviation Psychology 4(3): 241–64.
Endsley, M. R., B. Bolte, and D. G. Jones. 2003. Designing for Situation Awareness: An
Approach to Human-Centered Design. London: Taylor and Francis.
Endsley, M. R., and D. J. Garland, eds. 2000. Situation Awareness Analysis and Measure-
ment. Mahwah, NJ: Lawrence Erlbaum.
Endsley, M. R., and W. M. Jones. 1997. Situation Awareness, Information Domi-
nance, and Information Warfare (No. AL/CF-TR-1997-0156). Wright-
Patterson AFB, OH: United States Air Force Armstrong Laboratory.
Endsley, M. R., and W. M. Jones. 2001. A Model of Inter- and Intrateam Situation
Awareness: Implications for Design, Training and Measurement. In New Trends
in Cooperative Activities: Understanding System Dynamics in Complex Environ-
ments, ed. M. McNeese, E. Salas, and M. Endsley, pp. 46–67. Santa Monica,
CA: Human Factors and Ergonomics Society.
Endsley, M. R., and E. O. Kiris. 1995. The Out-of-the-Loop Performance Problem
and Level of Control in Automation. Human Factors 37(2): 381–94.
Endsley, M. R., and M. M. Robertson. 2000. Training for Situation Awareness in Individuals and Teams. In Situation Awareness Analysis and Measurement, ed. M. R. Endsley and D. J. Garland. Mahwah, NJ: Lawrence Erlbaum.
Endsley, M. R., S. J. Selcon, T. D. Hardiman, and D. G. Croft. 1998. A Compara-
tive Evaluation of SAGAT and SART for Evaluations of Situation Awareness.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting,
Chicago, Human Factors and Ergonomics Society, October 5–9, pp. 82–86.
Gugerty, L. J. 1997. Situation Awareness during Driving: Explicit and Implicit Knowl-
edge in Dynamic Spatial Memory. Journal of Experimental Psychology: Applied 3:
42–66.
Hockey, G.R.J. 1986. Changes in Operator Efficiency as a Function of Environmental Stress, Fatigue and Circadian Rhythms. In Handbook of Perception and Human Performance, vol. 2, ed. K. Boff, L. Kaufman, and J. Thomas, pp. 44-1–44-49. New York: John Wiley.
National Research Council. 1997. Tactical Display for Soldiers. Washington, DC:
National Research Council.
Sharit, J., and G. Salvendy. 1982. Occupational Stress: Review and Reappraisal. Human
Factors 24(2): 129–62.
Strater, L. D., D. Jones, and M. R. Endsley. 2003. Improving SA: Training Challenges
for Infantry Platoon Leaders. Proceedings of the 47th Annual Meeting of the
Human Factors and Ergonomics Society, Denver, CO, Human Factors and
Ergonomics Society, October 13–17, pp. 2045–49.
U.S. Army. 2001. Concepts for the Objective Force. Washington, DC: U.S. Army.
CHAPTER 5
Brownlee, Les, and Peter J. Schoomaker. 2004. “Serving a Nation at War: A Campaign
Quality Army with Joint and Expeditionary Capabilities.” Parameters 34, no. 2 (Summer): 18.
Klein, G. A., R. Calderwood, and D. MacGregor. 1989. Critical Decision Method for
Eliciting Knowledge. IEEE Transactions on Systems, Man, and Cybernetics 19(3):
462–72.
Woods, David D. 1993. Process Tracing Methods for the Study of Cognition Outside
of the Experimental Psychology Laboratory. In Decision Making in Action:
Models and Methods, ed. G. Klein, J. Orasanu, R. Calderwood, and C. Zsambok,
pp. 228–51. Norwood, NJ: Ablex Publishing Corporation.
CHAPTER 6
Cheikes, B. A., M. J. Brown, P. E. Lehner, and L. Alderman. 2004. Confirmation Bias
in Complex Analysis. Technical Report No. MTR 04B0000017. Bedford, MA:
MITRE.
CHAPTER 7
Brehmer, B. 1991. Organization for Decision Making in Complex Systems. In Distrib-
uted Decision Making: Cognitive Models for Cooperative Work, ed. J. Rasmussen,
B. Brehmer, and J. Leplat. New York: Wiley and Sons.
Clark, H. H. 1996. Using Language. New York: Cambridge University Press.
Endsley, M. R. 1995. Toward a Theory of Situation Awareness in Dynamic Systems.
Human Factors 37(1): 32–64.
Field Manual 6–0. 2003. Battle Command: Command and Control of Army Forces. Wash-
ington, DC: Headquarters, Department of the Army.
Flake, G. W. 1998. The Computational Beauty of Nature. Cambridge, MA: MIT Press.
Garstka, J., and D. Alberts. 2004. Network Centric Operations Conceptual Framework
Version 2. Vienna, VA: Evidence Based Research.
Katz, D., and R. L. Kahn. 1978. The Social Psychology of Organizations. New York: Wiley.
Klein, G. A. 1999. Sources of Power. Cambridge, MA: MIT Press.
Rasmussen, J., A. Pejtersen, and L. Goodstein. 1994. Cognitive Systems Engineering.
New York: John Wiley and Sons.
Simon, H. A. 1996. The Sciences of the Artificial. Cambridge, MA: MIT Press.
Thompson, J. D. 1967. Organizations in Action. New York: McGraw-Hill.
CHAPTER 8
Bell, J. B., and B. Whaley. 1991. Cheating and Deception. Edison, NJ: Transaction
Publishers.
Evidence Based Research, Inc. 2003. Network Centric Operations Conceptual Framework Version 1.0. http://www.iwar.org.uk/rma/resource/new/new-conceptual-framework.pdf.
Galbraith, J. 1974. Organization Design: An Information Processing View. Interfaces
4 (May): 28–36.
Janis, I. L. 1982. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Boston:
Houghton Mifflin Company.
Kahneman, D., and A. Tversky. 1979. Prospect Theory: An Analysis of Decision under
Risk. Econometrica 47(2): 263–92.
Klein, G. 1999. Sources of Power: How People Make Decisions. Cambridge, MA: MIT
Press.
Klein, G. A., R. Calderwood, and D. MacGregor. 1989. Critical Decision Method for
Eliciting Knowledge. IEEE Transactions on Systems, Man, and Cybernetics 19(3):
462–72.
Kott, A. 2007. A Model of Self-Reinforcing Defeat in Command Structures Due to Decision Overload. In Information Warfare and Organizational Decision-Making, ed. A. Kott, pp. 135–41. Norwood, MA: Artech House.
Louvet, A-C., J. T. Casey, and A. H. Levis. 1988. “Experimental Investigation of the
Bounded Rationality Constraint.” In Science of Command and Control: Coping
with Uncertainty, ed. S. E. Johnson and A. H. Levis, pp. 73–82. Washington,
DC: AFCEA.
Perrow, C. 1999. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ:
Princeton University Press.
Shattuck, L. G., and N. L. Miller. 2004. A Process Tracing Approach to the Investiga-
tion of Situated Cognition. Proceedings of the Human Factors and Ergonom-
ics Society’s 48th Annual Meeting, New Orleans, pp. 658–62.
Simon, H. 1991. Models of My Life. New York: Basic Books.
Tversky, A., and D. Kahneman. 1974. Judgment under Uncertainty: Heuristics and
Biases. Science 185: 1124–31.
van Creveld, M. 1985. Command in War. Cambridge, MA: Harvard University
Press.
Woods, David D. 1993. Process Tracing Methods for the Study of Cognition Out-
side of the Experimental Psychology Laboratory. In Decision Making in Action:
Models and Methods, ed. G. Klein, J. Orasanu, R. Calderwood, and C. Zsambok,
pp. 228–51. Norwood, NJ: Ablex Publishing Corporation.
CONCLUDING THOUGHTS
Bailey, Tracy A. 2005. “Air Assault Expeditionary Force Tests Technologies.” Army
News Service, December 1.
Boot, M. 2006. War Made New. New York: Gotham Books.
Kagan, F. 2003. War and Aftermath. Policy Review (August–September). http://www.hoover.org/publications/policyreview/3448101.html.
Kott, A., ed. 2007. Information Warfare and Organizational Decision-Making. Norwood, MA: Artech House.
Kott, A., and W. McEneaney, eds. 2006. Adversarial Reasoning: Computational
Approaches to Reading the Opponent’s Mind. New York: Chapman and Hall,
CRC Press.
Oman, C. 1937. History of the Art of War in the Sixteenth Century. New York: E.P. Dutton.
Oman, C. 1960. The Art of War in the Middle Ages. Ithaca, NY: Cornell University
Press, pp. 152–59.
Wilson, P. A., J. Gordon, and D. E. Johnson. 2004. An Alternative Future Force:
Building a Better Army. Parameters (Winter): 19–39.
Index
Future Combat Systems Command and Control (FCS C2), 44–45, 46
Future Force leaders, 18
Future Force Warrior (FFW), 91
Gantt charts, 80
Garm, reconnoitering of, 80
Garstka, John, 42
Gawande, Atul, 15
Geodetic coordinate system, 87
Geographic Intelligence Overlays, 77
Georeferenced satellite imagery, 87
Germany, Battle of Jutland, 10–11
Gettysburg, Confederate artillery at, 16
Globalization, enterprise structure and, 63
Goal-Directed Task Analysis (GDTA), 107, 108, 113
“Going sour” incidents, 153
Gordon, Andrew, 11
Grand Fleet, United Kingdom, 10–11
Grant, Ulysses S., 16, 18
Graphical Control Measures (GCMs), 76, 87
Ground surveillance radar (GSR), 88
Group interviews, 121
Groupthink, 197
HaleyRules, 92
Hammurabi [Division], 25
Heisenberg, Werner, 14
High Mobility Artillery Rocket Systems (HIMARS), 58
High-payoff targets (HPT), 127–28
High Seas Fleet, Germany, 10–11
Huertgen Forest, 26
Human perceptions, 13
Identification, collaboration and, 177
Incident identification, 202
Information: abstraction, 171–74; addiction to, 205–6; availability of, 204; cost of, 203–5; deficiencies in, 18; degree of urgency, 157; distribution of, 167–68; drivers of, 171; gaps in, 105, 147–48, 213; hierarchical levels, 171–74; indication of certainty, 157; networked, 34; presentation of, 218; processing of, 61–63; shared, 157–62, 175–76 (See also Collaboration); sharing of, 219; task-specific, 171; transformation of, 172; visualization of, 70. See also Common operating picture (COP)
Information advantage rules, 140–43
Information managers, 49, 73. See also Intelligence managers
Information overload: cognitive filtering, 152–53; CSE interface clutter, 153; groupthink and, 197; during the invasion of Iraq, 22–23; negative impact of, 210–11; robotic sensors and, 50–53; situation awareness and, 104–5
Insurgents, identification of, 28
Intelligence, 115–16, 117. See also Information
Intelligence, Surveillance, and Reconnaissance (ISR) protocols: development of, 144; information transformation and, 173–74; integration of, 163–64; visualization of sensor coverage, 190
Intelligence gaps, misinterpretations and, 149–57
Intelligence management: coordination and, 176; overload, 204; Picture Viewer function, 82–83
Intelligence managers, 49. See also Information managers
Intelligence Preparation of the Battlefield (IPB) process, 141–42
Intelligent Agents: components of, 67; CSE Tier 2, 71; CSE Tier 3, 71–72; definitions, 65; functional areas of, 66–67
Intel Viewer tool, 83, 84
Intensive processes, 179
Interdependence: collaboration and, 180; types of, 180
Interviews: group, 121; methodology, 133–34; scoring, 201–2
Iraq: insurgence, 33; invasion of, 22–23, 26–27; noncombatants in, 28
Javelins, history of, 21
Jellicoe, John (Sir), 10–11, 14
About the Contributors
STEPHEN KIRIN, a retired colonel of the U.S. Army, has been a member of
the MITRE Corp. since July 2000. Since January 2006, he has served as the lead of the Operations Research-Systems Analysis Division in the Joint Improvised Explosive Device Defeat Organization (JIEDDO). Kirin’s culminating active duty assignment was as
the deputy director for the TRADOC Analysis Center. During his four years at
TRAC, he was the lead analyst for a number of key experiments and studies to
underpin Army Transformation. In his 27 years of service, Kirin served at every
level from platoon to corps and has been a student of the issues associated with
battle command. Since joining MITRE and prior to his support to JIEDDO, he
has continued to investigate and analyze operational issues with a focus on battle
command. Kirin received a bachelor of science in engineering from the United
States Military Academy and a master of science in operations research and applied mathematics from Rensselaer Polytechnic Institute. He was a U.S. Army RAND Fellow for two years and is a graduate of the U.S. Naval War College.
GARY L. KLEIN focuses his work on modeling how people acquire and use
information. As the senior principal scientist in Cognitive Science & Artifi-
cial Intelligence in the C2C, he is responsible for developing and promoting
both of those technical areas with respect to supporting the development of
enhanced decision support. He also is developing the application of cognitive
systems engineering throughout MITRE. His current work is applying cogni-
tive systems engineering to the army’s 1st Information Operations Command
(Land). The objective is to identify transformational technology opportunities
related to 1st IO Command, which have new-start potential for the Defense Advanced Research Projects Agency. He and Leonard Adelman developed the Collabo-
ration Evaluation Framework originally to assess collaborative tools in intelli-
gence analysis in terms of their impact on collaboration per se. In an extension
of that effort, he led a team to help the intelligence community’s Disruptive
Technology Office (formerly ARDA) assess intelligence analysis tools with
regard to their ergonomic, cognitive, and collaborative suitability. In other
work, to improve understanding of how policy changes lead to changes in deci-
sion making and subsequently organizational behavior, Klein developed the
Adaptive Decision Modeling Shell (ADMS) for creating cognitively realistic
agent-based social simulation models. For MITRE’s Center for Advanced
Aviation Systems Development, he recently led a C2C technical team in using
ADMS to develop a social simulation model of airline-scheduling decision
making. Dr. Klein led the MDC2 program’s research in collaboration within
and between command cells.
Over the course of her career, Phillips has worked on numerous programs in which she has conducted research studies to examine and model
naturalistic human cognition. In a program of research sponsored by the army,
she investigated the process by which individuals make sense of situations
as they unfold and developed a model of sense making. Phillips has applied
her research to the development of several training interventions focused on
improving complex cognitive skills such as decision making, sense making, situation awareness, and problem detection. She has extensive experience
conducting Cognitive Task Analysis to elicit expert knowledge and generate
design and training requirements. She has conducted studies to develop
decision-making training scenarios at all echelons of military command, using a range of media, including Web- and computer-based simulations. She
has also studied the role of instructors as facilitators of the learning process
and has developed instructor guides and train-the-trainer workshops to ensure
a focus on the cognitive elements of decision making. In addition, Phillips has
developed assessment measures and conducted evaluation studies to deter-
mine the effectiveness of training interventions for improving cognitive skills.
Phillips received a BA in psychology from Kenyon College in 1995.
RICHARD HART SINNREICH retired from the U.S. Army in June 1990.
A 1965 West Point graduate, he earned a master’s degree in foreign affairs
from Ohio State University and is a graduate of the U.S. Army’s Command
and General Staff College and the National War College. His military service
included field commands from battery through division artillery; combat ser-
vice in Vietnam; teaching at West Point and Fort Leavenworth; tours on the
Army, Joint, National Security Council, and SHAPE staffs; and appointment
as the first Army Fellow at the Center for Strategic and International Studies.
As first deputy director and second director of the army’s School of Advanced
Military Studies, he helped write the 1986 edition of the army’s capstone AirLand Battle doctrine and has published widely in military and foreign affairs.
Since retiring from military service, he has consulted for a number of defense
agencies, including the army’s Training and Doctrine Command, Joint Forces
Command, the Institute for Defense Analyses, and the Defense Advanced
Research Projects Agency. His defense column in the Lawton (OK) Constitu-
tion has been reprinted by the Washington Post, ARMY Magazine, and other
journals. His most recent book, with historian Williamson Murray and oth-
ers, is The Past as Prologue: The Importance of History to the Military Profession,
Cambridge University Press, May 2006. He led a team of military experts that
advised the MDC2 program and guided the program’s experiments.