SSRN 2055122
Forthcoming in: Allan Collins (ed.) Contemporary Security Studies (Oxford University Press 2012) –
long/unabridged version
Contents:
1 Introduction
2 Computer security 101
2.1 The inherent insecurity of computers and networks
2.2 Computer vulnerabilities and threat agents
2.3 Hacking tools
3 Three interlocking cyber-security discourses
3.1 Viruses, worms and other bugs (technical discourse)
3.2 Cyber-crooks and digital spies (crime-espionage discourse)
3.3 Cyber(ed) conflicts and vital system security (military-civil defence discourse)
4 Information Assurance, critical (information) infrastructure protection and international norms
4.1 Information Assurance
4.2 Critical (information) infrastructure protection
4.2.1 Public Private Partnerships and Information Sharing
4.2.2 Resilience
4.3 Cyber-offense, cyber-defence, cyber-deterrence
4.4 Arms control and other norms
5 The level of cyber-risk
6 Conclusion
7 Questions
8 Further Reading
9 Important Websites
10 Notes on Contributor
11 References
12 Key Terms
Reader’s guide
This chapter analyses why cyber-security is considered one of the key national security issues of our
times. The first section provides the necessary technical background information. The second unravels
three different but interrelated discourses about cyber-security: the first discourse has a technical
focus and is about viruses and worms; the second looks at the interrelationship between the
phenomena of cyber-crime and cyber-espionage; the third is a military and civil-defence-driven
discourse about the double-edged sword of fighting wars in the information domain and the need for
critical infrastructure protection. Based on this, the third section turns to selected protection concepts
from each of the three discourses. The final section puts the threat into perspective: despite heightened
media attention and a general feeling of impending cyber-doom in some government circles, the level of
cyber-risk is generally overstated. This has important repercussions for decision-makers and students,
which are addressed in the concluding section.
1 Introduction
Information has long been considered a significant aspect of power, diplomacy, and armed
conflict. Since the 1990s, however, information’s role in international relations and
security has diversified and its importance for political matters has increased, mostly due to the
proliferation of information and communication technology (ICT) into all aspects of life in
(post)industrialized societies. The ability to master the generation, management, use, and
manipulation of information has become a desired power resource, as control over knowledge,
beliefs, and ideas is increasingly regarded as a complement to control over tangible resources such as
military forces, raw materials, and economic productive capability (Rothkopf 1998).
Within the larger set of information age issues, a particular type of danger-discourse has been the major
driver shaping the discussion about opportunities and pitfalls of society’s increasing reliance on ICT. In
fact, matters of cyber-(in)-security – though not always under this name – have been an issue in security
politics for at least three decades. What is the reason, then, that the topic of cyber-security is only
included in the third (2012) edition of this textbook?
On the one hand, this reflects the topic’s rising prominence in the broader security policy debate. For
many years, the debate was shaped almost exclusively by (US) security experts and military
strategists. This has changed only recently: highly publicised events have given the impression that
cyber-attacks are becoming more frequent, more organised, and more costly in the damage that they
inflict, and that they pose an ever-rising danger. Whether this is fact or merely a matter of perception, many state officials now
consider the issue one of the top security threats of our time. On the other hand, this late “coming of
age” also reveals a lot about the special place the topic occupies in the academic world.
Cyber-security has so far been discussed only in relatively small and closed circles. While disciplines such as
media, communication, cultural studies, and sociology discovered the broader phenomenon of
the information age as a topic long ago, security studies have been very slow to catch on. The topic does not fit easily into
well-established categories, either conceptually or theoretically, and it sits between various intersecting
security discourses and disciplines. Because of persuasive threat clustering over the years – whereby less
typical “security” issues are convincingly linked to well-established ones – it has clearly become more than
just a technical issue. But even though an existential threat is frequently invoked, actual cyber-security
practices are almost exclusively about mitigating risks to information networks by technical (and
occasionally organizational) means. This is at odds with most of the analytical tools in the field of security
studies that link security to urgency and extreme/extraordinary measures. Beyond rhetorical buzzwords
such as cyber-weapon, cyber-terror and cyber-war, however, it is a stretch to call any of the practices
in the realm of cyber-security “exceptional”. This might also explain why there is no coherent body of
literature on the subject in IR/security studies yet (see Think Point 25.1).
Think Point 25.1: The Lack of Cyber-Security Research in Security Studies
The majority of books and articles on cyber-security tend to be highly specific and policy-oriented
(see e.g. Arquilla and Ronfeldt 1997; Henry and Peartree 1998). The US is the main and often
exclusive arena and target of this literature, even though some American strategists have
focused on China and Russia (but only in order to reflect on the danger to the US) (Mulvenon and
Yang 1998; Thomas 1996, 2004). Furthermore, this body of literature does not communicate well
with more general international relations theory and research (Eriksson and Giacomello 2007).
Literature produced outside of specific US military journals and think tanks remains fragmented.
Some scholars have focused on the construction of information-age security threats by using
frameworks informed by constructivism, particularly securitization theory (Eriksson 2001;
Bendrath 2003; Dunn Cavelty 2008). From this, valuable insights can be gained with regard to
threat perceptions and policy reactions, but more research is warranted particularly with regard
to comparative studies of threat constructions in countries other than the US. Post-structuralism
has influenced another body of literature, which examines so-called ‘Postmodern War’ (Hables
Gray 1997), seen as a discourse on technical-military interaction that also emphasizes the
centrality of information.
The more important cyber-security becomes in the policy domain, the more puzzling this lack of
theoretical research in security studies is. One explanation might be the atypical nature of
cyber-security, as mentioned in the text. Many scholars are tempted to exclude technical threats
from security (studies) on the grounds of little discursive resemblance to political-military threats
(see Buzan and Hansen 2009: 19). Such exclusion is based on the notion that “security proper” is
linked to the grammar of urgency and extreme/extraordinary measures.
There are different ways the cyber-security story can be told. The most straightforward one is to follow
the reasoning of the policy debate, in which the dependence of modern societies on inherently insecure
information infrastructures is seen as a very dangerous vulnerability that has national and international
security implications. Even though such an approach captures most of the important elements that
undergraduates should know about, it ultimately fails to expose those aspects of the cyber-security
discourse that make it so special and challenging for both academia and policy-makers. Only a more
discursive, critical view helps to reveal the fabric of current issues and fears that the cyber-domain makes
possible.
In this chapter, the cyber-(in)-security logic is unpacked in four sections. The first section provides the
necessary technical background information: why the information infrastructure is inherently insecure,
how computer vulnerabilities are conceptualized, who can exploit them and in what ways. The next
section presents three different, but always interrelated and interlocking cyber-security discourses. They
all have specific referent objects (that which is seen in need of protection) and are driven by different
actor groups. The third section looks at selected countermeasures that are discussed in the three
discourses. The fourth turns to the important question of how serious a threat cyber-attacks actually are,
arguing that for a variety of reasons the level of cyber-threat is generally overstated in the security
community. This comes with certain dangers and has important repercussions for decision-makers and
students of cyber-security.
2 Computer security 101
“Cyber-” is a prefix derived from the word cybernetics that has acquired the general meaning of
“through the use of a computer”. It is also used synonymously with “related to cyberspace”. Cyberspace
(a portmanteau word combining “cybernetics” with “space”) connotes the fusion of all communication
networks, databases, and sources of information into a vast, tangled, and diverse blanket of electronic
interchange. Thus, a “network ecosystem” is created, a place that is not part of the normal, physical
world. It is virtual and immaterial, a “bioelectronic environment that is literally universal: It exists
everywhere there are telephone wires, coaxial cables, fiber-optic lines or electromagnetic waves” (Dyson
et al. 1996). Cyberspace is not only virtual – it is also grounded in physical reality, a “real geography”
made up of servers, cables, computers, satellites, etc. (Suteanu 2005: 130). In popular usage, we tend to
use the terms cyberspace and Internet almost interchangeably, even though the Internet is just one part
of cyberspace – albeit certainly the most important one nowadays.
Cyber-security is both about the insecurity created by and through this new place/space and about the
practices or processes to make it (more) secure. It refers to a set of activities and measures, both
technical and non-technical, intended to protect the bioelectronic environment and the data it contains
and transports from all possible threats. Understanding the broader cyber-security discourse requires
some basic knowledge about the technology that lies at the heart of this debate. The next few sub-sections give readers a brief introduction to the basics of computer or information security.
Today’s version of the Internet is a dynamic evolution of the Advanced Research Projects Agency
Network (ARPANET), which was funded by the Advanced Research Projects Agency (ARPA, now DARPA) of
the United States Department of Defense (DoD) from 1962 onwards, mainly for optimized information
exchange between the universities and research laboratories involved in DoD research. From the very
beginning, the network designers emphasized robustness and survivability over security. At the time,
there was no apparent need for a specific focus on security, because information systems were being
hosted on large proprietary machines that were connected to very few other computers (Leiner et al.,
1997).
As ARPANET evolved dynamically into today’s Internet, this design choice turned into a legacy problem.
vulnerable today is the confluence of three factors: the same basic packet-switching technology (not
built with security in mind), the shift to smaller and far more open systems (not built with security in
mind) and the rise of extensive networking at the same time (Libicki 2000). In addition to this, the
commercialization of the Internet in the 1990s led to a further security deficit. There are significant
market-driven obstacles to IT-security: There is no direct return on investment, time-to-market impedes
extensive security measures, and security mechanisms often have a negative impact on usability so that
security is often sacrificed for functionality (Anderson and Moore 2006). Also, an ongoing dynamic
globalization of information services in connection with technological innovation has led to an increase
of connectivity and complexity, leading to ill-understood behaviour of systems, as well as barely
understood vulnerabilities (Strogatz 2001). As is known from information science, the more complex an IT
system is, the more bugs (problems in computer programs that produce unintended behaviour) it
contains – and the more complex it is, the harder its security is to control or manage.
The terminology in information security is often seemingly congruent with the terminology in national
security discourses: it is about threats, agents, vulnerabilities, etc. However, the terms have very specific
meanings, so that seemingly clear analogies must be used with care. In information security, a threat is
defined as a possible danger that might exploit a vulnerability in the system. A vulnerability, on the other
hand, is a weakness in the system that can be exploited by a threat (International Organization for
Standardization 2011).
One (of several possible) ways to categorize threats is to differentiate between “failures”, “accidents”,
and “attacks”. Failures are potentially damaging events caused by deficiencies in the system or in an
external element on which the system depends. Failures may be due to software design errors, hardware
degradation, human errors, or corrupted data. Accidents include the entire range of randomly occurring
and potentially damaging events such as natural disasters. Usually, accidents are externally generated
events (i.e., from outside the system), whereas failures are internally generated events. Attacks (both
passive and active) are potentially damaging events orchestrated by a human adversary (Ellison et al.
1999). They are the main focus of the cyber-security discourse.
Human attackers are usually called “threat agents”. The most common label bestowed upon them is
hacker. This term is used in two main ways, one positive and one pejorative (Erickson 2003). For
members of the computing community, it describes a member of a distinct social group (or sub-culture),
a particularly skilled programmer or technical expert who knows a programming interface well enough to
write novel software. A particular ethic is ascribed to this subculture: a belief in sharing, openness, and
free access to computers and information; in the decentralization of government; and in the improvement of the
quality of life (Levy 1984). In popular usage and in the media, however, the term hacker generally
describes computer intruders or criminals.
In the cyber-security debate, hacking is considered a modus operandi that can be used not only by
technologically skilled individuals for minor misdemeanours, but also by organized actor groups with truly
bad intent, such as terrorists or foreign states. Some hackers may have the skills to attack those parts of
the information infrastructure considered “critical” for the functioning of society. Though most hackers
would be expected to lack the motivation to cause violence or severe economic or social harm because
of their ethics (Denning 2001), government officials fear that individuals who have the capability to cause
serious damage, but little motivation, could be swayed by sufficiently large sums of money provided by a
group of malicious actors.
There are various tools and modes of attack. The term used for the totality of these tools is malware (a
combination of the words “malicious” and “software”). Well-known examples are viruses and worms,
computer programs that replicate functional copies of themselves with varying effects ranging from
mere annoyance and inconvenience to compromise of the confidentiality or integrity of information; or
Trojan horses, destructive programs that masquerade as benign applications, but set up a back door so
that the hacker can later return and enter the system.
Often, system intrusion is the main goal of more advanced attacks. If the intruder gains full system
control, or “root” access, he has unrestricted access to the inner workings of the system (Anonymous
2003). Due to the characteristics of digitally stored information, an intruder can delay, disrupt, corrupt,
exploit, destroy, steal, and modify information (Waltz 1998). Depending on the value of the information
or the importance of the application for which this information is required, such actions will have
different impacts with varying degrees of gravity.
Key Points:
• Cyberspace has both virtual and physical elements. We tend to use the terms cyberspace and
Internet interchangeably, even though cyberspace encompasses far more than just the Internet.
• Cyber-security is both about the insecurity created through cyberspace and about the technical
and non-technical practices of making it (more) secure.
• The Internet started as ARPANET in the 1960s and was never built with security in mind. This
legacy, combined with the rapid growth of the network, its commercialization, and its increasing
complexity, has made computer networks inherently insecure.
• Information security uses a vocabulary very similar to that of national security, but the terms have
specific meanings. Computer problems are caused by failures, accidents, or attacks. The latter
are the main focus of the cyber-security discourse. Attackers are generally called hackers.
• The umbrella term for all hacker tools is malware. The main goal of more advanced attacks is full
system control, which allows the intruder to delay, disrupt, corrupt, exploit, destroy, steal, or
modify information. Depending on the context, this can have different consequences.
3 Three interlocking cyber-security discourses
The cyber-security discourse originated in the US in the 1970s, built momentum in the late 1980s and
spread to other countries in the late 1990s. The US government shaped both the threat perception and
the envisaged countermeasures, with little variation in other countries (cf. Brunner and Suter 2008).
On the one hand, the debate was decisively influenced by the larger post-Cold War strategic context, in
which the notion of asymmetric vulnerabilities, epitomised by the multiplication of malicious actors
(both state and non-state) and their increasing capabilities to do harm, started to play a key role. On the
other hand, discussions about cyber-security always were and still are influenced by the ongoing
information revolution, which the US is shaping both technologically and intellectually, by discussing its
implications for international relations and security and acting on these assumptions.
The cyber-security discourse was never static, because the technical aspects of the information
infrastructure are constantly evolving. Most importantly, changes in the technical sub-structure changed
the referent object. In the 1970s and 1980s, cyber-security was about those parts of the private sector
that were becoming digitalized, and about government networks and the classified information
residing in them. The growth and spread of computer networks into more and more aspects of life
changed this limited referent object in crucial ways. In the mid-1990s, it became clear that key sectors of
modern society, including those vital to national security and to the essential functioning of (post-
)industrialised economies, had come to rely on a spectrum of highly interdependent national and
international software-based control systems for their smooth, reliable, and continuous operation. The
referent object that emerged was the totality of critical (information) infrastructures that provide the
way of life that characterizes our societies.
When telling the cyber-security story, we can distinguish between three different, but often closely
interrelated and mutually reinforcing discourses, each with specific threat imaginaries, security practices,
referent objects, and key actors. The first is a technical discourse concerned with malware (viruses, worms, etc.)
and system intrusions. The second is concerned with the phenomena cyber-crime and cyber-espionage.
The third is a discourse initially driven by the US military, focusing on matters of cyber-war but
increasingly also on critical infrastructure protection (see Figure 25.1).
3.1 Viruses, worms and other bugs (technical discourse)
The technical discourse is focused on computer and network disruptions caused by different types of
malware. One of the first papers on viruses and their risks was Fred Cohen’s “Computer Viruses – Theory
and Experiments”, first presented in 1984 and published in 1987 (Cohen 1987). His work demonstrated
the universality of risk and the limitations of protection in computer networks, and it solidified the basic
paradigm that there can be no absolute security – no zero risk – in information systems. In 1988, the
ARPANET had its first major network incident: the “Morris Worm”. The worm used so many system
resources that the attacked computers could no longer function and large parts of the early Internet
went down.
Its devastating technical effect prompted the Defense Advanced Research Projects Agency (DARPA) to
set up a centre to coordinate communication among computer experts during IT emergencies and to
help prevent future incidents: a Computer Emergency Response Team (CERT) (Scherlis et al. 1990). This
centre, now called the CERT Coordination Center, still plays a considerable role in computer security
today and served as a role model for many similar centres all over the world. Around the same time, the
anti-virus industry emerged and with it techniques and programs for virus recognition, destruction and
prevention. This industry lives in an interesting symbiosis with those that exploit information technology
for financial gain or influence in ever more creative ways (Sampson 2007).
The worm also had a substantial psychological impact by making people aware of just how insecure and
unreliable the Internet was (Parikka 2005). While it had been acceptable in the 1960s that pioneering
computer professionals were hacking and investigating computer systems, the situation had changed by
the 1980s. Society had become dependent on computing in general for business practices and other
basic functions. Tampering with computers suddenly meant potentially endangering people’s careers
and property; and some even said their lives (Spafford 1989).
Ever since, malware, as “visible” proof of the pervasive insecurity of the information infrastructure, has
remained in the limelight of the cyber-security discourse; it also provides the back-story for the
other two discourses. Table 25.1 lists some of the most prominent examples.
Table 25.1: Prominent Malware
Most obviously, the history of malware mirrors technological development: the types of malware,
the types of targets, and the attack vectors have all changed with the technology and the existing technical
countermeasures (and continue to do so). This development proceeds in step with the development of the
cyber-crime market, which is driven by the huge sums of money available to criminal enterprises at low
risk of prosecution. While there was a tongue-in-cheek quality to many early viruses, malware has long
lost its innocence. Prank-like viruses have not disappeared, but computer security professionals are
increasingly concerned with the rising level of professionalization, coupled with the obvious criminal (or
even strategic) intent behind attacks. Advanced malware is targeted: a hacker picks a victim, scopes out
the defences, and then designs malware to get around them (Symantec 2010). The most prominent
example of this kind of malware is Stuxnet (see Case Study 25.2). However, some IT security companies
have recently warned against overemphasizing so-called advanced persistent threat attacks just because
we hear more about them (Verizon 2010: 16). Only about 3% of all incidents are considered so
sophisticated that they are impossible to stop. The vast majority of attackers go after low-hanging fruit:
small and medium-sized enterprises with weak defences (Maillart and Sornette 2010). These types of
incidents tend to remain under the radar of the media and even of law enforcement.
Key Points
• The technical discourse focuses on disruptions of computers and networks caused by malware
and system intrusions. Early research, such as Fred Cohen’s work on computer viruses, established
that there can be no absolute security in information systems.
• The Morris Worm of 1988 was the first major network incident. It led to the creation of the first
Computer Emergency Response Team (CERT), and the anti-virus industry emerged around the
same time.
• The history of malware mirrors technological development. Attacks have become increasingly
professional and targeted, but the vast majority remain unsophisticated and go after poorly
defended targets.
3.2 Cyber-crooks and digital spies (crime-espionage discourse)
The cyber-crime discourse and the technical discourse are very closely related. The development of IT
law (and, more specifically, Internet or cyber law) in different countries plays a crucial role in the second
discourse, because it allows the definition and prosecution of misdemeanours (Scott 2007). Not
surprisingly, the development of legal tools to prosecute unauthorized entry into computer systems (like
the Computer Fraud and Abuse Act of 1986 in the United States) coincided with the first serious network
incidents described above (cf. Mungo and Clough 1993).
Cyber-crime has come to refer to any crime that involves computers and networks, such as the release of
malware or spam, fraud, and many other offences. To this day, notions of computer-related economic
crime dominate the discussion about computer misuse. However, a distinct national-security
dimension was established when computer intrusions (a criminal act) were clustered together with the
more traditional and well-established espionage discourse. Prominent hacking incidents – such as the
intrusions into high-level computers perpetrated by the Milwaukee-based ‘414s’ – led to a feeling in
policy circles that there was a need for action (Ross 1991): If teenagers were able to penetrate computer
networks that easily, it was assumed that better organized entities such as states would be even better
equipped to do so. Other events, like the Cuckoo’s Egg incident (Stoll 1989), the ‘Rome Lab incident’,
Solar Sunrise or Moonlight Maze (United States General Accounting Office 1996) made apparent that the
threat was not just one of criminals or juveniles, but that classified or sensitive information could be
acquired relatively easily by foreign nationals through hackers (see Table 25.2).
Table 25.2: Cyber-crime and cyber-espionage
There are three trends worth mentioning. First, tech-savvy individuals (often juveniles) with the goal of
mischief or personal enrichment shaped the early history of cyber-crime. Today, professionals dominate
the field. The Internet is a near-ideal playground for semi-organised and organised crime, enabling
activities such as theft (looting online banks, stealing intellectual property or identities), fraud, forgery, extortion, and money
laundering. Actors in the “cyber-crime black market” are highly organized regarding strategic and
operational vision, logistics and deployment. Like many real companies, they operate across the globe
(Panda Security 2010).
Second, the cyber-espionage story has also changed. There has been an increase in allegations that China
is responsible for high-level penetrations of government and business computer systems in Europe,
North America, and Asia. Because Chinese authorities have stated repeatedly that they consider
cyberspace a strategic domain, and because they hope that mastering it will redress the existing military
imbalance between China and the US more quickly, many officials readily accuse the Chinese
government of deliberate and targeted attacks or intelligence gathering operations. However, these
allegations almost exclusively rely on anecdotal and circumstantial evidence (Deibert and Rohozinski
2009).
The so-called attribution problem – the difficulty of clearly determining those responsible for a
cyber-attack and of identifying their motivating factors – is the big game-changer in the
cyber-domain. Due to the architecture of cyberspace, online identities can be hidden very effectively. Blame
assigned on the basis of “cui bono” logic (“to whose benefit?”) is not sufficient proof for
political action. Attacks and exploits that seemingly benefit states might well be the work of third-party
actors operating under a variety of motivations. At the same time, the challenge of clearly identifying
perpetrators also gives state actors convenient “plausible deniability and the ability to officially distance
themselves from attacks” (Deibert and Rohozinski 2009: 12).
The third trend is the increased attention that hacktivism – the combination of hacking and activism –
has gained in recent years. WikiLeaks, for example, has added yet another twist to the cyber-espionage
discourse. Acting under the hacker maxim “all information should be free”, this type of activism
deliberately challenges the self-proclaimed power of states to keep secret any information that they
think could endanger or damage national security. It emerges as a cyber-security issue in government
discourse because much of the data has been stolen in digital form, but also because it is made
available to the whole world through multiple mirrors (Internet sites). Somewhat related are the
multifaceted activities of hacker collectives such as Anonymous or LulzSec. Behaving in a deliberately
hedonistic, uninhibited, and some might even say childish manner, they creatively play with anonymity in
a time obsessed with control and surveillance, humiliating high-visibility targets through DDoS attacks,
break-ins, and the release of sensitive information.
Key Points
• The notion of computer crime and the development of cyber law coincided with the first network attacks. Though this discourse has been driven mainly by economic considerations to this day, political cyber-espionage, as a specific type of criminal computer activity, began worrying officials around the same time.
• Over the years, cyber-criminals have become well-organised professionals, operating in a
consolidated cyber-crime black market.
• China is often blamed for high-level cyber-espionage, both political and economic. However, there is only anecdotal and circumstantial evidence for this.
• As there is no way to clearly identify perpetrators that want to stay hidden in cyberspace
(attribution problem), anyone could be behind actions that seemingly benefit certain states –
and states on the other hand can plausibly deny being involved.
• Politically motivated or activist break-ins by hacker collectives that go after high-level targets, with the aim of stealing and publishing sensitive information or simply of ridiculing them by attacking their websites, have recently added to the feeling of insecurity in government circles.
3.3 Cyber(ed) conflicts and vital system security (military-civil defence
discourse)
The Second Persian Gulf War of 1991 marked a watershed in US military thinking about cyber-war. Military strategists saw the conflict as the first of a new generation of information-age conflicts, in which physical force alone was not sufficient but had to be complemented by the ability to win the information war and to secure ‘information dominance’. As a result, American military thinkers began to publish scores of books on the topic and developed doctrines that emphasized the ability to degrade or even paralyse an opponent’s communications systems (cf. Campen 1992; Arquilla and Ronfeldt 1993).
In the mid-1990s, the advantages of the use and dissemination of ICT that had fuelled the revolution in
military affairs were no longer seen only as a great opportunity providing the country with an
“information edge” (Nye and Owens 1996), but were also perceived as constituting a disproportionate
vulnerability vis-à-vis a plethora of malicious actors (Rattray 2001; Hables Gray 1997). Global information
networks seemed to make it much easier to attack the US asymmetrically, as such an attack no longer
required big, specialized weapons systems or an army: borders, already porous in many ways in the real
world, were nonexistent in cyberspace. There was widespread fear that those likely to fail against the
American military might would instead plan to bring the US to its knees by striking vital points
fundamental to national security and the essential functioning of industrialized societies at home
(Berkowitz 1997). Apart from break-ins into computer networks that contained sensitive information
(see previous section), exercises such as ‘The Day After’ or ‘Eligible Receiver’ – designed to assess the
plausibility of information warfare scenarios and to help define key issues to be addressed in this area –
demonstrated that US critical infrastructure presented a set of attractive strategic targets for opponents
possessing information warfare capabilities, be it terrorist groups or states (Anderson and Hearn 1996).
At the same time, the development of military doctrine involving the information domain continued. For
a while, information warfare – the new type of warfare in the information age – remained essentially
limited to military measures in times of crisis or war. This began to change around the mid-1990s, when such activities began to be understood as actions targeting the entire information infrastructure of an adversary – political, economic, and military – throughout the continuum of operations from peace to war
(Dunn Cavelty 2010b). NATO’s 1999 intervention against Yugoslavia marked the first sustained use of the
full spectrum of information warfare components in combat. Much of this involved the use of
propaganda and disinformation via the media (an important aspect of information warfare), but there
were also website defacements, a number of DDoS-attacks, and (unsubstantiated) rumours that
Slobodan Milosevic’s bank accounts had been hacked by the US armed forces (Dunn 2002: 151).
Table 25.3: Instances of cyber(ed)-conflict
The increasing use of the Internet during the conflict gave it the distinction of being the ‘first war fought
in cyberspace’ or the ‘first war on the Internet’. Thereafter, the term cyber-war came to be widely used
to refer to basically any phenomenon involving a deliberate disruptive or destructive use of computers.
For example, the 2001 cyber-confrontations between Chinese and US hackers, joined by hackers of many other nationalities, have been labelled the ‘first Cyber World War’. The trigger was a US reconnaissance and surveillance plane that was forced to land on Chinese territory after a collision with a Chinese fighter jet.
In 2007, DDoS-attacks on Estonian websites were readily attributed to the Russian government, and
various government officials claimed that this was the first known case of one state targeting another
using cyber-warfare (see Case Study 25.1). Similar claims were made in the confrontation between
Russia and Georgia of 2008 (Dunn Cavelty 2010b). In other cases, China is said to be the culprit (see
previous section and Table 25.3).
Case Study 25.1: The 2007 cyber-attacks on Estonia
When the Estonian authorities removed a bronze statue of a World War II-era Soviet soldier from
a park, a three-plus-week cyberspace-“battle” ensued, in which a wave of so-called Distributed
Denial of Service attacks (DDoS) swamped various websites – among them the websites of the
Estonian parliament, banks, ministries, newspapers and broadcasters – disabling them by
overwhelming the bandwidth of the servers running the sites.
Even though it was not, and probably never will be, possible to provide sufficient evidence of who was behind the attacks, various officials readily and publicly blamed the Russian government. Also,
despite the fact that the attacks bore no truly serious consequences for Estonia other than
(minor) economic losses, some officials even openly toyed with the idea of a counter-attack in
the spirit of Article 5 of the North Atlantic Treaty, which states that “an armed attack” against
one or more NATO countries “shall be considered an attack against them all” (Dunn Cavelty
2011a). The Estonian case is one of the cases most often referred to in government circles to
prove that there is a need for action.
The discovery of Stuxnet in 2010 changed the overall tone and intensity of the debate (see Case Study
25.2). Due to the attribution problem, it is impossible to know for certain who is behind this piece of
code. Nevertheless, many security experts are willing to believe that one or several state actors are
behind the computer worm and that it was programmed and released to sabotage the Iranian nuclear program (Farwell and Rohozinski 2011). For them, the ‘digital first strike’ has occurred and marks
the beginning of the unchecked use of cyber-weapons in military-like aggressions (Gross 2011).
Case Study 25.2: Stuxnet
Stuxnet is a computer worm that was discovered in June 2010 and has been called “[O]ne of the
great technical blockbusters in malware history” (Gross 2011). It is a complex programme. It is
likely that writing it took a substantial amount of time, advanced-level programming skills and
insider knowledge of industrial processes. Therefore, Stuxnet is probably the most expensive
malware ever found. In addition, it behaves differently from malware released for criminal
intent: it does not steal information and it does not herd infected computers into so-called
botnets to launch further attacks from. Rather, it looks for a very specific target: Stuxnet was
written to attack Siemens’ Supervisory Control And Data Acquisition (SCADA) systems that are
used to control and monitor industrial processes. In August 2010, the security company
Symantec noted that 60% of the infected computers worldwide were in Iran. It was also reported
that Stuxnet damaged centrifuges in the Iranian nuclear program. This evidence led several experts
to the conclusion that one or several nation states – most often named are the US and/or Israel –
were behind the attack. Though this seems plausible, there is no proof for this assumption.
On another note, Stuxnet provided a platform for an ever-growing host of cyber-war-experts to
speculate about the future of cyber-aggression. Internationally, Stuxnet has had two main
effects: first, governments all over the world are currently releasing or updating cyber-security
strategies and are setting up new organisational units for cyber-defence (and -offense). Second,
Stuxnet can be considered a “wake-up call”: ever since its discovery, increasingly serious attempts have been made to reach some type of agreement between states on the non-aggressive use of cyberspace (Dunn Cavelty 2011c).
However, other reports suggest that this is highly unlikely (cf. Sommer and Brown 2011), mainly due to the uncertain results a cyber-war would bring, the lack of motivation on the part of the possible combatants, and their shared inability to defend against counterattacks. Future conflicts between nations will almost certainly have a cyberspace component, but it will be just one part of the battle. It is therefore more sensible to speak of cyber(ed) conflicts, conflicts “in which success or failure for major participants is critically dependent on computerized key activities along the path of events” (Demchak 2010). Labelling occurrences cyber-war too carelessly bears the inherent danger of creating an atmosphere of insecurity and tension and of fuelling a cyber-security dilemma: many countries are currently said to have functional cyber-commands or to be in the process of building one. Because cyber-capabilities cannot be assessed through normal intelligence-gathering activities, uncertainty and mistrust are on the rise.
Key Points
• The Gulf War of 1991 is considered to be the first of a new generation of conflicts in which
mastering the information domain becomes a deciding factor. Afterwards, the information
warfare doctrine was developed in the US military.
• Increasing dependence of the military, but also of society in general, on information
infrastructures made clear that information warfare was a double-edged sword. Cyberspace
seemed the perfect place to launch an asymmetrical attack against civilian or military critical
infrastructures.
• The US military tested its information warfare doctrine for the first time during NATO operation
“Allied Force” in 1999. It was the first armed conflict in which all sides, including actors not
directly involved, had an active online presence, and in which the Internet was actively used for
the exchange and publication of conflict-relevant information. Thereafter, the term “cyber-war”
came to be used for almost any type of conflict with a cyber-component.
• The recent discovery of a computer worm that sabotages industrial processes and appears to
have been programmed by order of a state actor has alarmed the international community.
Some experts believe that this marks the beginning of unrestrained cyber-war among states.
• Others think that this is highly unlikely and warn against an excessive use of the term cyber-war.
Future conflicts between states will also be fought in cyberspace, but not exclusively. One useful
term for them is cyber(ed) conflicts.
4 Information Assurance, critical (information) infrastructure
protection and international norms
The three different discourses described above have produced specific types of concepts and actual
countermeasures in accordance with their focus and main referent objects (see Figure 25.2). This sub-
chapter briefly discusses the most important of them.
Despite fancy concepts such as cyber-deterrence, the common issue in all discourses is information
assurance, which is about the basic security of information and information systems. It is common
practice that the entities that own a computer network are also responsible for protecting it
(governments protect government networks, militaries only military ones, and companies protect their
own, etc.). However, some assets in the private sector are considered so crucial to the functioning of society that governments take additional measures to ensure an adequate level of protection.
These efforts are usually subsumed under the label of critical (information) infrastructure protection (the
focus of the next sub-section).
4.1 Information Assurance
Information assurance is a standard practice to manage risks related to the use, processing, storage, and
transmission of information or data and the systems and processes used for those purposes (May et al.
2004). The classic definition treats risk as a function of the likelihood of a given threat source exercising a particular potential vulnerability, and of the resulting impact of that adverse event (Haimes 1998).
Outcomes of the risk assessment process are used to provide guidance on the areas of highest risk, and
to devise policies and plans to ensure that systems are appropriately protected.
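This classic definition can be written compactly. The following is an illustrative sketch, not a formula taken from the works cited:

```latex
% Risk as a function of likelihood and impact (illustrative sketch)
\text{Risk} \;=\; f(L, I) \;\approx\; L \times I,
\qquad L = P(\text{threat source exercises vulnerability}),
\qquad I = \text{impact of the adverse event}
```

In practice, risk assessments often rate both likelihood and impact on qualitative scales (e.g. low/medium/high) rather than assigning point probabilities, and then prioritize the combinations with the highest ratings.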
The classical IA model has three goals: confidentiality, integrity, and availability of information
(Stoneburner 2001). “Confidentiality” refers to the protection of the information from disclosure to
unauthorized parties, while “integrity” refers to the protection of information from being changed by
unauthorized parties. “Availability” means that the information should be available to authorized parties
when requested. More elaborate frameworks add further goals, such as authentication and non-repudiation (Committee on National Security Systems 2010). These information security goals can be pursued by many different means, such as firewalls and other network access controls, host-system “hardening” through patch management, intrusion detection, etc. This holds for all networks, regardless of whether they are business, privately owned, or government networks and computers.
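As a minimal technical illustration of one of these goals – integrity – a cryptographic hash of a piece of information can be recorded and later re-checked to detect unauthorized changes. This sketch uses Python’s standard hashlib module; the file contents and names are invented for the example:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# Record a baseline digest for a piece of information...
original = b"quarterly report, final version"
baseline = sha256_digest(original)

# ...and later verify that it has not been changed by unauthorized parties.
# Any modification to the bytes produces a completely different digest.
received = b"quarterly report, final version"
print("integrity intact:", sha256_digest(received) == baseline)
```

A hash alone only detects change; real integrity protection pairs it with access controls or digital signatures so that an attacker cannot simply recompute the digest after tampering.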
4.2 Critical (information) infrastructure protection
In the 1990s, critical infrastructures became the main referent object in the cyber-security debate.
Whereas critical infrastructure protection (CIP) encompasses more than just cyber-security, cyber-
aspects have always been the main driver (see Key Ideas 25.1).
Key Ideas 25.1: The Presidential Commission on Critical Infrastructure Protection (PCCIP)
Following the Oklahoma City Bombing, President Bill Clinton set up the Presidential Commission
on Critical Infrastructure Protection (PCCIP) to look into the security of vital systems such as gas,
oil, transportation, water, telecommunications, etc. The PCCIP presented its report in the fall of
1997 (Presidential Commission on Critical Infrastructure Protection 1997). It concluded that the
security, economy, way of life, and perhaps even the survival of the industrialized world were
dependent on the interrelated trio of electrical energy, communications, and computers.
Further, it stressed that advanced societies rely heavily upon critical infrastructures, which are
susceptible to classical physical disruptions and new virtual threats. While the study assessed a
list of critical infrastructures or “sectors” – for example the financial sector, energy supply,
transportation, and the emergency services – the main focus was on cyber-risks. There were two reasons for this: first, these risks were the least known because they were essentially new; and second, many of the other infrastructures were seen to depend on data and communication networks. The PCCIP linked the cyber-security discourse firmly to the topic of critical
infrastructures. Thereafter, CIP became a key topic in many other countries.
The key challenge for CIP efforts arises from the privatization and deregulation of large parts of the public sector since the 1980s and the globalization processes of the 1990s, which have put many critical infrastructures into the hands of private (transnational) enterprises. This creates a situation in which market forces alone are not sufficient to provide the aspired-to level of security in designated critical infrastructure sectors (the most frequently listed examples are banking and finance, government services, telecommunication and information and communication technologies, emergency and rescue services, energy and electricity, health services, transportation, logistics and distribution, and water supply), while state actors are also incapable of providing the necessary level of security on their own (unless they heavily regulate, which they are usually reluctant to do).
4.2.1 Public Private Partnerships and Information Sharing
Public-Private Partnerships (PPP), a form of cooperation between the state and the private sector, are
widely seen as a panacea for this problem in the policy community – and cooperation programs that
follow the PPP idea are part of all existing initiatives in the field of CIP today, though with varying success
(Dunn Cavelty and Suter 2009). A large number of them are geared towards facilitating the exchange of information on security, disruptions, and best practices among companies and between companies and government. Mutual win-win situations are to be created by exchanging information that the
other party does not have: the government offers classified information acquired by its intelligence
services about potentially hostile groups and nation-states, in exchange for technological knowledge
from the private sector that the public sector does not have (President's Commission on Critical
Infrastructure Protection 1997: 20). The problem owner is no longer only the military or the state;
responsibility for (national) security is distributed (Dunn Cavelty 2010c).
4.2.2 Resilience
The rationale of risk management not only guides Information Assurance processes, but also the decisions about which elements of the critical infrastructures to protect, and in what ways. The important thing to know is that managing risk is essentially about accepting that one is (and remains) insecure: the level of risk can never be reduced to zero, so choices must be made about which risks to reduce and in what way. This also means that minor and probably also major cyber-incidents are bound to happen, because they simply cannot be avoided even with perfect risk management. This is one of the main reasons why the
concept of resilience has gained so much weight in the debate recently (Perelman 2007). Resilience is
commonly defined as the ability of a system to recover from a shock, either by returning to its original state or by settling into a new, adjusted state. Resilience accepts that disruptions are inevitable and can be considered
a “Plan B” in case something goes wrong (Pommerening 2007). The concept therefore promises an additional safety net against large-scale, major, and unexpected events (Dunn Cavelty 2011b).
In the military discourse, the terms cyber-offence, cyber-defence, and cyber-deterrence are often used.
Under closer scrutiny, cyber-offence and -defence are not much more than fancy words for information
assurance practices. Cyber-deterrence on the other hand deserves some attention. Cyberspace clearly
poses considerable limitations for deterrence (Libicki 2009). Deterrence works if one party is able to credibly convey to another that it is both capable and willing to use a set of available (often military) instruments against the other if it steps over the line. This requires an opponent that is clearly identifiable as an attacker and that has to fear retaliation – which is not the case in cyber-security because of the attribution problem. However, this has not stopped US government officials from threatening a kinetic response in the case of a cyber-attack on their critical infrastructures (Gorman and Barnes 2011). In
addition, the military discourse naturally falls back on well-known concepts such as deterrence, which
means that the concept of cyber-deterrence, including its limits, will remain a much discussed issue in
the future.
In theory, effective cyber deterrence would require a wide-ranging scheme of offensive and defensive
cyber capabilities supported by a robust international legal framework as well as the ability to attribute
an attack to an attacker without any doubt. The design of defensive cyber-capabilities and the design of
better legal tools are relatively uncontested. Many international organizations and bodies
have taken steps to raise awareness, establish international partnerships, and agree on common rules
and practices. One key issue is the harmonization of law to facilitate the prosecution of perpetrators of
cyber-crime (Brunner and Suter 2008: 465-526).
While there is wide agreement on what steps are necessary to tackle international cyber-crime, states are unwilling to completely forgo the offensive and aggressive use of cyberspace. Due to this, and
increasingly so since the discovery of Stuxnet, efforts are underway to control the military use of
computer exploitation through arms control or multilateral behavioural norms, agreements that might
pertain to the development, distribution, and deployment of cyber-weapons, or to their use. However,
traditional capability-based arms control will clearly not be of much use, mainly due to the impossibility of verifying limitations on the technical capabilities of actors (Libicki 2009: 199–201), especially non-state ones. The avenues available for arms control in this arena are primarily information exchange and norm-building, whereas structural approaches and attempts to prohibit the means of cyber-war altogether, or to restrict their availability, are largely impossible due to the ubiquity and dual-use nature of information technology.
Key Points
• There are a variety of approaches and concepts to secure information and critical information
infrastructures. The key concept is a risk management practice known as information assurance,
which aims to protect the confidentiality, integrity, and availability of information and the
systems and processes used for the storage, processing, and transmission of information.
• Critical (information) infrastructure protection (CIIP) became a key concept in the 1990s. Because a very large part of critical infrastructures is no longer in the hands of government, CIIP practices mainly build on public-private partnerships. At their core lies information sharing between the private and the public sector.
• Because the information infrastructure is pervasively insecure, risk management strategies are complemented by the concept of resilience. Resilience is about having systems rebound from
shocks in an optimal way. The concept accepts that absolute security cannot be obtained and
that minor or even major disturbances are bound to happen.
• The military concepts of cyber-defence and cyber-offence are militarised words for information
assurance principles. Cyber-deterrence, on the other hand, is a concept that moves deterrence
into the new domain of cyberspace.
• If cyber-deterrence were to work, functioning offensive and defensive cyber-capabilities, plus
the fear of retaliation, both militarily and legally, would be needed. This would also include the
ability to clearly attribute attacks.
• Internationally, efforts are underway to further harmonize cyber-law. In addition, because
future use of cyberspace for strategic military purposes remains one of the biggest fears in the
debate, there are attempts to curtail the military use of computer exploitation through arms
control or multilateral behavioural norms.
5 The level of cyber-risk
Having described the three interrelated cyber-security discourses and some of the current countermeasures, this section gives a more concrete estimate of how big the cyber-(in)security problem actually is.
Different political, economic and military conflicts clearly have had cyber(ed)-components for a number
of years now. Furthermore, criminal and espionage activities with the help of computers happen every
day. Cyber-incidents cause minor and occasionally major inconveniences. These may take the form of lost intellectual property or other proprietary data, maintenance and repair costs, lost revenue, and
increased security costs. Beyond the direct impact, badly handled cyber-attacks have also damaged
corporate (and government) reputations and have, theoretically at least, the potential to reduce public
confidence in the security of Internet transactions and e-commerce if they become more frequent.
However, in the entire history of computer networks, there have been only very few examples of attacks or other types of incidents that had the potential to rattle an entire nation or cause a global shock. There are even fewer examples of cyber-attacks that resulted in actual physical violence against persons or property (Stuxnet being the most prominent). The vast majority of cyber-incidents have caused
inconveniences or minor losses rather than serious or long-term disruptions. They are risks that can be
dealt with by individual entities using standard information security measures and their overall costs
remain low in comparison to other risk categories like financial risks.
This fact tends to be disregarded in policy circles, because the level of cyber-fears is high and the military
discourse has a strong mobilising power. This has important political effects. A large part of the discourse revolves around “cyber-doom” (worst-case) scenarios in the form of major, systemic, catastrophic incidents involving critical infrastructures caused by attacks. Since the potentially devastating effects of cyber-attacks are so frightening, the temptation is high not only to think about worst-case scenarios but also to give them a lot of (often too much) weight despite their very low probability.
There are additional reasons why the threat is overrated. First, as combating cyber-threats has become a
highly politicised issue, official statements about the level of threat must also be seen in the context of
different bureaucratic entities that compete against each other for resources and influence. This is
usually done by stating an urgent need for action (which they should take) and describing the overall
threat as big and rising (Dunn Cavelty 2008). Second, psychological research has shown that risk perception is highly dependent on intuition and emotions, including the perceptions of experts (Gregory and Mendelsohn 1993). Cyber-risks, especially in their more extreme form, fit the risk profile of so-called “dread risks”, which appear catastrophic, fatal, unknown, and basically uncontrollable. There is a propensity to be disproportionately afraid of these risks despite their low probability, which translates into pressure for regulatory action of all sorts and a willingness to bear high costs of uncertain benefit.
The danger of overly dramatising the threat manifests itself in reactions that call for military retaliation
(as happened in the Estonian case and in other instances) or other exceptional measures. Though the last
section has shown that there are many different types of countermeasures in place, and that most of
them are in fact not exceptional, this kind of threat rhetoric invokes enemy images even if there is no
identifiable enemy, favours national solutions instead of international ones, and centres too strongly on
national-security measures instead of economic and business solutions. Only computer attacks whose effects are sufficiently destructive or disruptive need the attention of the traditional national security apparatus; attacks that disrupt nonessential services, or that are mainly a costly nuisance, do not.
Key Points
• The majority of cyber-incidents so far have caused minor inconveniences and their cost remains
low in comparison to other risk categories. Only very few attacks had the potential for grave
consequences and even fewer actually had any impact on property. None have ever caused loss
of life.
• Despite this, the feeling persists in policy circles that a large-scale cyber-attack is just around the corner. The potential for catastrophic cyber-attacks against critical infrastructures, though very unlikely, remains the main concern and the main reason for seeing cyber-security as a national security issue.
• The level of cyber-risk is overstated. The reasons are to be found in bureaucratic turf battles over scarce resources and in the fact that cyber-risks are so-called “dread risks”, of which human beings are disproportionately afraid. Overstating the risk brings the danger of choosing the wrong answers.
6 Conclusion
Despite the increasing attention cyber-security is getting in security politics and despite the possibility of
a major, systemic, catastrophic incident involving critical infrastructures, computer network
vulnerabilities are mainly a business and espionage problem. Depending on their (potential) severity,
however, disruptive incidents will continue to fuel the military discourse, and with it fears of strategic
cyber-war. Certainly, thinking about (and planning for) worst-case scenarios is a legitimate task of the national security apparatus. However, such scenarios should not receive too much attention at the expense of more plausible and probable problems.
In seeking a prudent policy, the difficulty for decision-makers is to navigate the rocky shoals between
hysterical doomsday scenarios and uninformed complacency. Threat representation must remain well informed and well balanced so as not to allow over-reactions with excessively high costs and uncertain benefits. For example, an “arms race” in cyberspace, based on the fear of other states’ cyber-capabilities, would most likely have hugely detrimental effects on the way humankind uses the Internet. Also, solving the attribution problem would come at a very high cost for privacy. Even though we must expect
disturbances in the cyber-domain in the future, we must not expect outright disasters. Some of the
cyber-disturbances may well turn into crises, but a crisis can also be seen as a turning point rather than
an end state, where the aversion of disaster or catastrophe is always possible. If societies become more
fault tolerant psychologically and more resilient overall, the likelihood for catastrophe in general and
catastrophic system failure in particular can be substantially reduced.
Cyber-security issues are also challenging for students and academics more generally. Experts of all sorts disagree widely about how likely future cyber-doom scenarios are – and all of their claims are based on (educated) guesses. While there is at least proof of, and daily experience with, cyber-crime, cyber-espionage, and other lesser forms of cyber-incident, cyber-incidents of bigger proportions (cyber-terror or cyber-war) exist solely in the form of stories or narratives. The way we imagine them influences our judgment of their likelihood; and there is an infinite number of ways in which we could imagine them.
Therefore, there is no way to study the “actual” level of cyber-risk in any sound way, because it only
exists in and through the representations of various actors in the political domain. As a consequence, the
focus of research necessarily shifts to contexts and conditions that determine the process by which key
actors subjectively arrive at a shared understanding of how to conceptualize and ultimately respond to a
security threat.
7 Questions
1. What other explanations are there for the dearth of theoretically informed cyber-security
research in security studies?
2. Who benefits in what ways from calling malware cyber-weapons?
3. What are the pros and cons of abolishing anonymity (and therefore partially solving the
attribution problem) on the Internet in the name of security?
4. What side effects does the indiscriminate use of the terms cyber-terror and cyber-war have?
5. Are hacktivism activities a legitimate way to express political or economic grievances?
6. What are the limits of traditional arms control mechanisms applied to cyber-weapons?
7. Why does the intelligence community not have more information on the cyber-capabilities of
other states?
8. What are the similarities and differences between information security and national security?
9. Which aspects of cyber-security should be considered part of national security, and which
should not? Why?
10. What might be the next referent object in the cyber-security discourse?
8 Further Reading
Brown, K.A. (2006), Critical Path: A Brief History of Critical Infrastructure Protection in the United States,
Arlington, VA: George Mason University Press. Provides a comprehensive overview of the evolution of
critical infrastructure protection in the United States.
Deibert, R. and Rohozinski, R. (2010) ‘Risking Security: Policies and Paradoxes of Cyberspace Security’,
International Political Sociology 4/1: 15–32. An intelligent account of the threat discourse that
differentiates between risks to cyberspace and risks through cyberspace.
Dunn Cavelty, M. (2008), Cyber-Security and Threat Politics: US Efforts to Secure the Information Age,
London: Routledge. Examines how, under what conditions, by whom, for what reasons, and with what
impact cyber-threats have been moved on to the political agenda in the US.
Libicki, M. (2009), Cyberdeterrence and Cyberwar, Santa Monica: RAND. Explores the specific laws of
cyberspace and uses the results to address the pros and cons of counterattack, the value of deterrence
and vigilance, and other defensive actions in the face of deliberate cyber-attack.
National Research Council (2009), Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use
of Cyberattack Capabilities, Washington, DC: The National Academies Press. Focuses on the use of cyber-
attack as an instrument of U.S. policy and explores important characteristics of cyber-attack.
Sommer, P. and Brown, I. (2011), Reducing Systemic Cyber Security Risk, OECD Research Report,
http://www.oecd.org/dataoecd/3/42/46894657.pdf. A down-to-earth report that concludes that it is
extremely unlikely that cyber-attacks could create problems like those caused by a global pandemic or
the recent financial crisis, let alone an actual war.
9 Important Websites
George Mason University (GMU), Critical Infrastructure Protection (CIP) Program Website: The GMU CIP
program is a valuable source of information for both US and international CIP-related issues and
developments. http://cipp.gmu.edu
Schneier on Security: Bruce Schneier is a refreshingly candid and lucid computer security critic and
commentator. In his blog, he covers computer security issues of all sorts. http://www.schneier.com
The Information Warfare Site: an online resource that aims to stimulate debate on a variety of issues
involving information security, information operations, computer network operations, homeland security
and more. http://www.iwar.org.uk
Infowar Site: A site dedicated to tracking open-source stories relating to the full spectrum of information
warfare, information security, and critical infrastructure protection. http://www.infowar.com
10 Notes on Contributor
Myriam Dunn Cavelty is Lecturer in Security Studies and Head of the Risk and Resilience Research Group
at ETH Zurich, Switzerland. She is the author of Cyber-Security and Threat Politics: US Efforts to Secure
the Information Age (2008) and many other publications on cyber-in-security issues.
11 References
Anderson, R. and Moore, T. (2006), ‘The Economics of Information Security’, Science 314/ 5799: 610–13.
Anderson, R.H. and Hearn, A.C. (1996), An Exploration of Cyberspace Security R&D Investment Strategies for DARPA:
"The Day After ... in Cyberspace II" (Santa Monica: RAND).
Anonymous (2003) Maximum Security: A Hacker’s Guide to Protecting your Internet Site and Network (Indiana:
Sams Publishing).
Arquilla, J. and Ronfeldt, D.F. (1997) (eds.), In Athena's Camp: Preparing for Conflict in the Information Age (Santa
Monica: RAND).
—— (1993), ‘Cyberwar is Coming!’, Comparative Strategy, 12/2: 141–65.
Bendrath, R. (2003), ‘The American Cyber-Angst and the Real World – Any Link?’, in R. Latham (ed.), Bombs and
Bandwidth: The Emerging Relationship between IT and Security (New York: The New Press), 49–73.
Berkowitz, B.D. (1997), 'Warfare in the Information Age', in Arquilla and Ronfeldt (1997: 175–90).
Brown, K.A. (2006), Critical Path: A Brief History of Critical Infrastructure Protection in the United States (Arlington,
VA: George Mason University Press).
Brunner, E. and Suter, M. (2008), The International CIIP Handbook 2008/2009 - An Inventory of Protection Policies in
25 Countries and 6 International Organizations (Zurich: Center for Security Studies).
Buzan, B. and Hansen, L. (2009), The Evolution of International Security Studies (Cambridge: Cambridge University
Press).
Campen, A.D. (1992) (ed.), The First Information War: The Story of Communications, Computers and Intelligence
Systems in the Persian Gulf War (Fairfax: AFCEA International Press).
Cohen, F. (1987), ‘Computer Viruses – Theory and Experiments’, Computers and Security, 6/1: 22–35.
Committee on National Security Systems (2010), National Information Assurance (IA) Glossary, CNSS Instruction No.
4009 (Washington, D.C.).
Deibert, R. and Rohozinski, R. (2009), ‘Tracking GhostNet: Investigating a Cyber Espionage Network’, Information
Warfare Monitor (Toronto: The Munk School of Global Affairs).
—— (2010) ‘Risking Security: Policies and Paradoxes of Cyberspace Security’, International Political Sociology 4/1:
15–32.
Demchak, C. (2010), ‘Cybered Conflict as a New Frontier’, New Atlanticist, available online at
<http://www.acus.org/new_atlanticist/cybered-conflict-new-frontier> (accessed 31 December 2011).
Denning, D. (2001), ‘Activism, Hacktivism, and Cyberterrorism: The Internet as a Tool for Influencing Foreign Policy’,
in J. Arquilla and D.F. Ronfeldt (eds.), Networks and Netwars: The Future of Terror, Crime, and Militancy (Santa
Monica: RAND), 239–88.
Dunn Cavelty, M. (2008), Cyber-Security and Threat Politics: US Efforts to Secure the Information Age (London:
Routledge).
—— (2010a), ‘Cyber-security’, in P. Burgess (ed.), The Routledge Companion to New Security Studies (London:
Routledge), 154–62.
—— (2010b), ‘Cyberwar’, in G. Kassimeris and J. Buckley (eds.), The Ashgate Research Companion to Modern
Warfare (Aldershot: Ashgate), 123–44.
—— (2010c), ‘Cyberthreats’, in M. Dunn Cavelty and V. Mauer (eds.), The Routledge Handbook of Security Studies
(London: Routledge), 180–89.
—— (2011a), ‘Cyber-Allies: Strengths and Weaknesses of NATO’s Cyberdefense Posture’, IP Global Edition, 12/3:
11–15.
—— (2011b), ‘Systemic cyber/in/security – From risk to uncertainty management in the digital realm’, Swiss Re Risk
Dialogue Magazine, 15 September.
—— (2011c), ‘The Dark Side of the Net: Past, Present and Future of the Cyberthreat Story’, AIIA Policy Commentary,
No. 10, April, 51–62.
Dunn Cavelty, M. and Suter, M. (2009), ‘Public-Private Partnerships are no Silver Bullet: An Expanded Governance
Model for Critical Infrastructure Protection’, International Journal of Critical Infrastructure Protection, 2/4: 179–
87.
Dunn, M. (2002), ‘Information Age Conflicts: A Study of the Information Revolution and a Changing International
Operating Environment’, Zurich Contributions to Security Policy and Conflict Analysis Nr. 64 (Zurich: Center for
Security Studies).
Dyson, E., Gilder, G., Keyworth, G. and Toffler, A. (1996), 'Cyberspace and the American Dream: A Magna Carta for
the Knowledge Age', The Information Society 12/3: 295–308.
Ellison, R. J., Fisher, D. A., Linger, R. C., Lipson, H. F., Longstaff T. and Mead, N. R. (1999), ‘Survivable Network
Systems: An Emerging Discipline’, Technical Report, CMU/SEI-97-TR-013, ESC-TR-97-013 (revised edition)
(Pittsburgh: Carnegie Mellon University).
Erickson, J. (2003), Hacking: The Art of Exploitation (San Francisco: No Starch Press).
Eriksson, J. (2001), ‘Cyberplagues, IT, and Security: Threat Politics in the Information Age’, Journal of Contingencies
and Crisis Management, 9/4: 211–22.
Eriksson, J. and Giacomello, G. (2007) (eds.), International Relations and Security in the Digital Age (London: Routledge).
Farwell, J.P. and Rohozinski, R. (2011), ‘Stuxnet and the Future of Cyber War’, Survival: Global Politics and Strategy,
53/1: 23–40.
Gorman, S. and Barnes, J.E. (2011), ‘Cyber Combat: Act of War Pentagon Sets Stage for U.S. to Respond to
Computer Sabotage With Military Force’, The Wall Street Journal, 31 May.
Gross, M.J. (2011), ‘Stuxnet Worm: A Declaration of Cyber-War’, Vanity Fair, April.
Hables Gray, C. (1997), Postmodern War – The New Politics of Conflict (London: Routledge).
Haimes, Y.Y. (1998), Risk Modeling, Assessment, and Management (New York: Wiley).
Henry, C.R. and Peartree, E.C. (1998) (eds.) Information Revolution and International Security (Washington: Center
for Strategic and International Studies).
International Organization for Standardization (2011), ‘Information technology – Security techniques – Information
security risk management’, ISO/IEC 27005:2011, 2nd edn (Geneva: ISO).
Leiner, B.M. et al. (1997), ‘A Brief History of the Internet’, Website of the Internet Society, available online at
<http://www.internetsociety.org/internet/internet-51/history-internet/brief-history-internet> (accessed
31 December 2011).
Levy, S. (1984), Hackers: Heroes of the Computer Revolution (New York: Anchor Press).
Libicki, M.C. (2000), The Future of Information Security (Washington, D.C.: Institute for National Strategic Studies).
—— (2009), Cyberdeterrence and Cyberwar (Santa Monica: RAND).
Maillart, T. and Sornette, D. (2010), ‘Heavy-Tailed Distribution of Cyber-Risks’, The European Physical Journal B,
75/3: 357–64.
May, Chris et al. (2004), ‘Advanced Information Assurance Handbook’, CERT®/CC Training and Education Center,
CMU/SEI-2004-HB-001 (Pittsburgh: Carnegie Mellon University).
Mulvenon, J.C. and Yang, R.H. (1998) (eds.), The People’s Liberation Army in the Information Age (Santa Monica:
RAND).
Mungo, P. and Clough, B. (1993), Approaching Zero: The Extraordinary Underworld of Hackers, Phreakers, Virus
Writers, and Keyboard Criminals (New York: Random House).
National Research Council (2009), Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of
Cyberattack Capabilities (Washington, DC: The National Academies Press).
Nye, J.S. Jr. and Owens, W.A. (1996), 'America's Information Edge', Foreign Affairs, March/April: 20–36.
Panda Security (2010), Panda Security Report: The Cyber-crime Black Market: Uncovered (Bilbao).
Parikka, J. (2005), ‘Digital Monsters, Binary Aliens – Computer Viruses, Capitalism and the Flow of Information’,
Fibreculture Journal, Issue 4 – Contagion and the Diseases of Information, available online at
<http://vxheavens.com/lib/mjp00.html> (accessed 31 December 2011).
Perelman, L.J. (2007), ‘Shifting Security Paradigms: Toward Resilience’, in J.A. McCarthy (ed.), ‘Critical Thinking:
Moving from Infrastructure Protection to Infrastructure Resilience’, CIP Program Discussion Paper Series
(Washington: George Mason University), 23–48.
Pommerening, C. (2007), ‘Resilience in Organizations and Systems: Background and Trajectories of an Emerging
Paradigm’, in J.A. McCarthy (ed.), ‘Critical Thinking: Moving from Infrastructure Protection to Infrastructure
Resilience’, CIP Program Discussion Paper Series (Washington: George Mason University), 9–21.
President's Commission on Critical Infrastructure Protection (1997), Critical Foundations: Protecting America's
Infrastructures (Washington: US Government Printing Office).
Rattray, G. (2001), Strategic Warfare in Cyberspace (Cambridge, MA: MIT Press).
Gregory, R. and Mendelsohn, R. (1993), ‘Perceived Risk, Dread, and Benefits’, Risk Analysis, 13/3: 259–64.
Ross, A. (1991), 'Hacking Away at the Counterculture', in C. Penley and A. Ross (eds.), Technoculture (Minneapolis:
University of Minnesota Press), 107–34.
Rothkopf, D.J. (1998), ‘Cyberpolitik: The Changing Nature of Power in the Information Age’, Journal of International
Affairs, 51/2: 325–60.
Sampson, T. (2007), ‘The Accidental Topology of Digital Culture: How the Network Becomes Viral’, Transformations:
Online Journal of Region, Culture and Society, Issue 14, March, available online at
<http://www.transformationsjournal.org/journal/issue_14/editorial.shtml> (accessed 31 December 2011).
Scherlis, W.L., Squires, S.L. and Pethia, R.D. (1990), 'Computer Emergency Response,' in P. Denning (ed.), Computers
Under Attack: Intruders, Worms, and Viruses (Reading: Addison-Wesley), 495–504.
Scott, M.D. (2007), Internet and Technology Law Desk Reference (New York: Aspen Publishers, Inc.).
Sommer, P. and Brown, I. (2011), ‘Reducing Systemic Cyber Security Risk’, Report of the International Futures
Project, IFP/WKP/FGS(2011)3 (Paris: OECD).
Spafford, E.H. (1989), ‘The Internet Worm: Crisis and Aftermath’, Communications of the ACM, 32/6: 678–87.
Stoll, C. (1989), The Cuckoo's Egg: Tracking a Spy through the Maze of Computer Espionage (New York: Doubleday).
Stoneburner, G. (2001), ‘Computer Security. Underlying Technical Models for Information Technology Security.
Recommendations of the National Institute of Standards and Technology’, NIST Special Publication 800-33
(Washington: U.S. Government Printing Office).
Strogatz, S.H. (2001), 'Exploring Complex Networks', Nature, 410/ 6825: 268–76.
Suteanu, C. (2005), ‘Complexity, Science and the Public: The Geography of a New Interpretation’, Theory, Culture &
Society, 22/5: 113–40.
Symantec (2010), Internet Security Threat Report, Vol. 16 (Mountain View).
Thomas, T.L. (1996), ‘Russian Views on Information-based Warfare’, Airpower Journal, X (Special Edition): 26–35.
—— (2004), Dragon Bytes: Chinese Information-War Theory and Practice (Ft Leavenworth: Foreign Military Studies
Office).
United States General Accounting Office (1996), ‘Information Security: Computer Attacks at Department of Defense
Pose Increasing Risk’, GAO/AIMD-96-84 (Washington, DC: General Accounting Office).
Verizon (2010), 2010 Data Breach Investigations Report: A Study Conducted by the Verizon RISK Team in
cooperation with the United States Secret Service (New York).
Waltz, E. (1998), Information Warfare: Principles and Operations (Boston: Artech House).
12 Key Terms
Advanced persistent threats: a category of cyber-attack connoting a high degree of sophistication
and stealthiness over a prolonged period of time. The attack objectives typically extend beyond
immediate financial gain.
Attack vectors: A path or means by which unauthorized access to a computer or network can be gained.
Attribution problem: refers to the difficulty of clearly determining who is initially responsible for a
cyber-attack, because online identities can be hidden.
Bugs: Term used to describe an error, flaw, mistake, failure, or fault in a computer program that
produces an incorrect or unexpected result, or causes the program to behave in unintended ways.
Botnets (or bots): collections of compromised computers connected to the Internet. They run hidden
and can be exploited for further use by the person controlling them remotely.
Critical (information) infrastructures: Critical infrastructure includes all systems and assets whose
incapacity or destruction would have a debilitating impact on the national security and the economic
and social well-being of a nation. Critical information infrastructures are components such as
telecommunications, computers/software, the Internet, satellites, fiber optics, etc.
Critical infrastructure protection: Measures to secure all systems and assets whose incapacity or
destruction would have a debilitating impact on the national security and the economic and social
well-being of a nation.
Cyber-deterrence: influencing an actor, either by denying the potential gains of the actor or by
threatening punishment through the use of retaliation, in order to prevent the actor from utilizing
cyberspace as a means to degrade, disrupt, manipulate, deny, or destroy any portion of the critical
national infrastructure.
Cyber-espionage: the unauthorized probing to test a target computer’s configuration or evaluate its
system defenses, or the unauthorized viewing and copying of data files.
Cyber-war: The use of computers to disrupt the activities of an enemy country, especially through
deliberate attacks on communication systems. The term is also used loosely for cyber-incidents of a political nature.
Cyber-terror: Unlawful attacks against computers, networks, and the information stored therein, to
intimidate or coerce a government or its people in furtherance of political or social objectives. Such an
attack should result in violence against persons or property, or at least cause enough harm to generate
the requisite fear level to be considered ‘cyber-terrorism’. The term is also used loosely for cyber-
incidents of a political nature.
Cyber-weapon: Malware used for cyber-war activities.
Cyberspace: The electronic medium of computer networks, in which online communication takes place.
DDoS-attack: Attempt to make a computer or network resource unavailable to its intended users, mostly
by saturating the target machine with external communications requests so that it cannot respond to
legitimate traffic, or responds so slowly as to be rendered effectively unavailable.
Hacker: Term used in two main ways, one positive and one negative. Can mean a person particularly
skilled with computers (positive) or be used as a general term for a computer criminal (negative).
Hacktivism: The combination of hacking and activism, including operations that use hacking techniques
against a target’s internet site with the intention of disrupting normal operations.
Information age: Signifies the current era that is characterised by information, especially the ability of
individuals to transfer information freely, and to have instant access to knowledge previously
unavailable.
Information and communication technology: An umbrella term that includes any communication device
or application, encompassing: radio, television, cellular phones, computer and network hardware and
software, satellite systems etc., as well as the various services and applications associated with them,
such as videoconferencing and distance learning.
Information revolution: The dynamic evolution and propagation of information and communication
technologies into all aspects of life.
Internet: a worldwide system of computer networks in which users at any one computer can, if they
have permission, get information from any other computer.
Malware: a collective term for all types of malicious code and software.
Resilience: the ability of a system to recover from a shock, either by returning to its original state or
by settling into a new, adjusted state.