Security Ethics: Principled Decision-Making in Hard Cases: Andrew A. Adams
Introduction to ethics
Religious approaches ground the authority for making and enforcing laws in the dictates of a God or Gods, expressed through holy writings, direct revelations, ordination as a member of a priestly class or by birth (as a member of a privileged caste or as a monarch with the divine right to rule).
Western philosophical approaches provide various justifications for the exis-
tence of laws, and states to support them, grounded in observation of human
nature:
• The necessity for avoiding the chaos of ‘the war of all against all’ (Hobbes,
1660);
• The benefits of cooperation over unlimited competition for access to limited
resources (Mill, 1869);
• Striving to create justice by providing rules for action and penalties for
transgressions of those rules (Rawls, 1971).
Security is often the justification offered for infringing on liberty, and the legitimacy of security actions, whether pre-emptive, proactive or reactive, is generally backed eventually by the state, whether those actions are carried out directly by agents of the state (police, courts, other law enforcement) or authorized by the state (private security guards, CCTV deployment, trespass laws). In considering ethical questions in security, the source (philosophically and in reality) of the legitimacy of security activity needs to be clearly understood; otherwise security seems capricious and arbitrary, undermining its goal of willing compliance by the vast majority.
Much of security involves drawing lines between permitted and forbidden actions, often on the basis of a categorization of both the actions and the actor. Physical security, for example, often uses rules about who (actor) may enter (action) certain physical spaces (a minimal sketch of such a rule table appears below). The operational concerns of security policy involve minimizing risk while still allowing system liveness (a system so completely 'secure' that it is unusable is as useless as a rail network that is completely 'safe' by virtue of a lack of train movement (Rushby, 2001)). The holders of power in the system, who include those setting policies and those implementing them, too often consider only the benefits inside the system when setting and implementing policies. Good security engineering should prevent them from over-securing assets to the point that their utility to the system is diminished (a less severe case than the extreme of the dead system described above), but the resulting policy may still have significant problems when viewed from outside the system, in terms of questions of fairness and of transferred costs.
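To make the actor/action line-drawing concrete, the following is a minimal sketch, in Python, of a rule-based access check. The roles, spaces and permitted actions in the table are hypothetical illustrations invented for this sketch, not drawn from any real policy.

```python
# A minimal sketch of actor/action line-drawing: a policy table mapping
# (role, space) pairs to the set of actions permitted there.
# All roles, spaces and rules below are hypothetical illustrations.
POLICY = {
    ("staff", "office"): {"enter", "work"},
    ("staff", "server_room"): set(),               # denied by default
    ("admin", "server_room"): {"enter", "maintain"},
    ("visitor", "office"): {"enter"},
}

def is_permitted(role: str, space: str, action: str) -> bool:
    """Return True only if the policy grants this actor category the action."""
    return action in POLICY.get((role, space), set())

assert is_permitted("admin", "server_room", "enter")
assert not is_permitted("staff", "server_room", "enter")   # a line drawn here
```

Even a table this small shows where the ethical questions enter: every row in it is a decision about categories of people, not about individuals.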
Consider an abstract example: in a population of 1,000 people, say there are 10 people with a property A, while the other 990 people do not have property A, and that this property is easily identifiable, entirely congenital and irreversible. Say that among those who have property A there is a 10% incidence of danger: one of the ten, whose identity cannot be determined in advance, will cause damage of value X if allowed access to the system. A system-internal cost–benefit analysis of allowing the ten people with property A access to the system will depend on the value lost by barring the nine non-damaging people with property A, and on whether that loss is greater or less than X. From outside the system, however, those who have property A but will not cause damage are being deprived of access to the system collectively, merely because they share property A with a member of the set who is known to exist but whose identity is unknown. They have lost all potential benefit from access to the system. To take this a step further, if all systems which might provide the appropriate benefits to people with property A follow the same logic, then those nine innocent people with property A are denied any possibility of that benefit simply on the grounds of sharing a congenital property with a bad actor.
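A short numeric sketch of the system-internal calculation may help. The group size and the one-in-ten risk come from the example above; the damage value X and the per-person benefit B are assumptions chosen purely for illustration.

```python
# System-internal cost-benefit view of barring everyone with property A.
# n_with_A and p_damage come from the example in the text; X and B are
# assumed values chosen purely for illustration.
n_with_A = 10
p_damage = 0.10        # one person in ten, identity unknown, causes damage
X = 10_000.0           # assumed value of the damage caused
B = 800.0              # assumed benefit each admitted person contributes

expected_damage = n_with_A * p_damage * X         # equals X: one bad actor
forgone_benefit = n_with_A * (1 - p_damage) * B   # the nine innocent people

# The internal rule bars the whole group whenever the avoided damage
# exceeds the forgone benefit; nothing in this calculation registers the
# fairness cost borne by the nine innocents, which is the external critique.
bar_everyone_with_A = expected_damage > forgone_benefit
print(bar_everyone_with_A)   # True under these assumed numbers
```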
Only by considering broader societal concerns alongside internal cost–benefit
analyses can appropriate security policies be deemed ethical. While such ethical
considerations are rife in many areas of security, in this chapter we will use two examples:
• Poachers and gamekeepers: When someone has been previously found guilty
of violating security rules, should they be barred from acting as part of
the security apparatus; or is it foolish to ignore the knowledge they have
of the mindset and methods of their (hopefully former) peers?
• Knowledge, disclosure and security by obscurity: If someone discovers a
security flaw in a system, whom should they tell about this flaw, when and
in what detail?
There is little evidence that skilled poachers were favoured recruits to become gamekeepers (in fact, while the policing aspect of gamekeeping might well benefit from knowledge of poaching techniques, the larger part of the job, managing game sources, draws on a totally separate skill base (Munsche, 1981)). Despite this, the phrase 'poacher turned gamekeeper' and its variants are often used as shorthand for the recruitment or co-option of bad actors into the security apparatus. There are arguments both for and against this practice, on both the ethical and the practical sides of security systems design.
From a practical point of view, the question is whether the risks of employing someone with a history of law-breaking (and therefore, it is assumed, a higher risk of breaking the law in future) are outweighed by the benefits of access to their experience. In different areas of security there are different practices, and often different justifications. It is also important to consider that, from a security point of view, all members of staff of an organization are part of the security of the organization, not just those explicitly employed as security personnel; indeed, many staff in low-status positions (often the only kinds of jobs for which former convicts are considered) end up with significant opportunities for wrongdoing beyond those of other classes of staff. Cleaning staff, for example, are often employed to work outside the normal operating hours of an organization, may well be working alone and have broad physical access to premises.
From that point of view, perhaps, it seems reasonable that no firm should hire former convicts in any position. However, significant criminology research shows that, far from being congenitally criminal, many end up as criminals because of a lack of other opportunities (apparent to them, at least) and/or because they are involved in a social circle within which criminal activity is the norm rather than the exception (Clinard et al., 2010). Excluding former convicts from all or most legitimate employment, from that point of view, simply creates a criminal class with neither opportunity nor incentive to reform.
Law enforcement
In some places, recruits to law enforcement positions may not have a serious criminal record. In California in the United States, for example, 'Peace Officer' recruits cannot have a felony conviction (Commission on Peace Officer Standards and Training (California), n.d.). This often extends to prison and parole officers as well as to police and other law enforcement agencies. The rule may or may not extend to people who work in supporting roles, anything from a cleaner to a scientific evidence processor. Other countries have less strict rules: in the United Kingdom, for example, the Metropolitan Police Service (the largest force in the country, covering most of London) states that '[a] conviction or caution does not necessarily bar you from joining the MPS. A candidate's age at the time of offence, the number of years that have elapsed (normally five years must have elapsed for recordable offences) and the nature of the offence is taken into account before a decision is made' (Metropolitan Police, n.d.). Freedom of information requests to police forces in England and Wales in 2011 revealed that over 900 serving officers had criminal records, mostly for driving offences but including some more serious and even violent offences.
The arguments given for banning or severely restricting those with prior
convictions from becoming law enforcement officers include the issue of per-
sonal character and authority, the nature of policing (which in general requires
an acceptance of its legitimacy by the populace) and the opportunities for
wrongdoing afforded to law enforcement officials. A ban on prior offenders taking up support roles shares some of these arguments: while support personnel do not have powers such as the authority to arrest, they may have the opportunity to interfere with investigations or evidence, or to smuggle illicit materials in to prisoners. While all of these things can be done by whoever undertakes a necessary role such as cleaning an FBI office, the prior evidence of poor character on the part of those convicted of serious offences is offered as a reasonable basis for restraint of opportunity.
Private security
The pay and conditions for many low-level private security operatives are
quite poor (Professional Security Magazine, 2011). The average night watchman
patrolling a warehouse containing ordinary consumer goods will be working
long antisocial hours on low pay. Former criminals are one of the few groups
who are willing to take jobs like this (Abrahams, 2013). Managers setting
security policies will be balancing the need to keep down the costs of provid-
ing security guards with the risk of employing those with previous criminal
connections.
There is also a tendency on the part of some managers to believe that employ-
ing some of the local criminal fraternity as security guards can be useful in
deterring crime against their premises in a number of ways: criminals may avoid
the premises patrolled by their friends to avoid coming into conflict with them;
convicted criminals may have a reputation for a violent response (a willingness
to go beyond what the law allows in action against intruders) which may scare
away criminal intrusion attempts; those who have committed crimes under-
stand the methods and mindset of criminals and are therefore better placed to
act as defenders against them.
These beliefs on the part of managers represent quite insecure and unethical attitudes in many ways. The belief that criminals employed as guards will scare away other criminals is unlikely to be true in most cases, and in particular a reliance on the willingness of a security guard to break the law in one area (the limits on action against intruders) violates the shared commitment to the rule of law in general. For most low-level security positions, good character on the part of the guards should be seen as a fundamental part of maintaining trust in the security apparatus throughout the organization. Sharing the mindset of criminals is not a particular asset in such low-level positions and may undermine training in suitable approaches.
In certain specialist roles, however, it can be argued that there may be benefits to accessing the experience and mindset of bad actors in order to design better security systems. The ethical and practical justifications for this depend on a number of factors. The first is whether there are legitimate ways for those with a law-abiding mindset nevertheless to gain the relevant skills and experience to help design better security systems; this issue is covered below with particular reference to computer security. A common way to organize
the development of a security system is to engage in game theoretic approaches
and set one’s security team to working as attackers and defenders, figuring
out how to bypass the security systems put in place. Ideally, this is done in
the abstract but that is not always good enough. As stressed by many secu-
rity experts including unblemished characters such as Schneier and convicted
criminals such as Mitnick, ordinary people in an organization are a crucial part
of their security system (Schneier, 2003b; Mitnick and Simon, 2002). Hence,
in trying to figure out the vulnerability of a system to attack by, for exam-
ple, social engineering, it will be necessary to test the system in the field by
mounting social engineering attacks on it. The development of social engineer-
ing attack skills has limited legitimate usage but significant illegitimate usages
and hence the talent pool of good social engineers may well be mostly limited
to those who have misused those skills in the past. On the other hand, giv-
ing those people legitimate (and probably quite well paid) opportunities to use
their skills may well benefit everyone in the loop. Given the previously demon-
strated character issues of those convicted of crimes it would be foolhardy to
place them in charge of the development of security systems, or give them carte
blanche to attack a system. On the other hand, using their talents as part of a
group of socially responsible security experts to design better security, and lim-
iting their access to only those elements of the system that their expertise is
useful for, may well be both ethically acceptable and practically useful.
Computer security
In the early days of remotely accessible computers, access to these systems was
very limited (Levy, 2002). Those with an interest in how they worked could
often not gain legitimate access to them. In a spirit of challenge and some
mischief, they sought out machines which were visible and attempted to gain
access to them. Sometimes they caused damage to the systems in the ways
they tried to gain access or in what they did once they gained access. Often,
they only looked at the contents of the system and withdrew, perhaps leaving behind a newly created backdoor into the system for future use. The operators of these systems, when they realized that they had been accessed without authority, were understandably unhappy about this, and in addition to attempting to secure their systems against damage and unauthorized access, many sought out the penetrators and attempted to prosecute them for their intrusions.
Should honest people attempt to break security systems, physical ones such
as locks and informational ones such as password protection systems? If hon-
est people do succeed, what should they do with the information they have
gained? Should they only inform the producer of the vulnerable systems and
hope that they will reduce or remove the vulnerability? Is it possible to inform
honest users of the system without also allowing that knowledge to fall into the
hands of bad actors who will misuse it? What level of detail should be dissem-
inated, when and to whom? This is a major ethical dilemma for anyone who
works in security and who may spot a vulnerability in a system, but particu-
larly for security researchers who, in seeking to develop better systems, often
identify the weaknesses of current ones.
What information about a system helps a bad actor gain illegitimate access? From one point of view, all information regarding the operations of a system, whether formally a part of the security element or not, might be regarded as potentially of help to the bad actor, and thus dissemination of that information should be kept to a minimum.
Kerckhoffs's principles
Many elements of a security system (and, in particular, access control systems)
depend on a mechanism with a large possible number of keys of which only
one or a small number will provide access. The debate about information on
physical lock mechanisms is mirrored in debates about information on encryp-
tion systems. A system that relies for its robustness largely on the assumption
that the mechanism is unknown to an attacker, as opposed to relying on the
particular key to a particular message being unknown to an attacker, is a weak
system. Kerckhoffs (1883) codified a set of principles regarding the evaluation
of cryptographic systems which included this observation as well as a num-
ber of others, including the need for easy methods of changing the key. At the
time, encryption relied primarily on shared secret keys, a form of encryption
in which the same ‘key’ is used for both encrypting and decrypting a mes-
sage. While such systems are often fast and secure they require a secure method
for sharing the key and each party must rely on the other to keep it secret.
As far back as 1874, the idea that some kinds of mathematical operations might allow for two separate keys to be generated, each of which is the inverse of the other (so key A can be used to encrypt a message which can then only be decrypted by key B, while key B can be used to encrypt a message that can then only be decrypted by key A), was proposed by Jevons (1874). However, suitable
mathematical operations were not discovered until almost a century later.
Kerckhoffs's principles include the concept that a strong communications security system minimizes the information that must necessarily be kept secret to the actual clear-text message and the key needed to decrypt it. A good lock is hard to open without the matching key (that is, hard to pick) whether one knows about its mode of operation or not. A message encrypted with a good encryption system is hard to read without the matching decryption key whether one knows about its mathematical encryption process or not.
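The two-key idea that Jevons anticipated can be illustrated with textbook RSA using deliberately tiny numbers. This is a sketch for illustration only: real systems use keys hundreds of digits long together with padding schemes, and the parameters below offer no security.

```python
# Textbook RSA with toy parameters: two keys, each the inverse of the other.
# These tiny numbers are illustrative only and provide no real security.
p, q = 61, 53
n = p * q                   # public modulus: 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent (key A)
d = pow(e, -1, phi)         # private exponent (key B): 2753 (Python 3.8+)

m = 65                                  # a message encoded as a number < n
assert pow(pow(m, e, n), d, n) == m     # encrypt with key A, decrypt with B
assert pow(pow(m, d, n), e, n) == m     # encrypt with key B, decrypt with A

# Kerckhoffs's point: n, e and the entire mechanism can be public; only d
# (and the factors p and q, which would reveal it) must remain secret.
```

The second assertion is the basis of digital signatures: a message transformed with the private key can be checked by anyone holding the public key, proving possession of the secret.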
Cryptography suppression
As discussed above, the basic idea of one-way functions and their possible use for cryptographic purposes was extant in the 1870s but, so far as is known, it was not until the 1970s that feasible processes were developed for asymmetric encryption, in which one key encrypts a message and a matching one decrypts it (and vice versa). As Levy (2002) discusses at length, not only was public key encryption developed, and then kept secret, by UK government communications surveillance operatives, but computer cryptography researchers in the United Kingdom and the United States were also pressured (by various means) to avoid working on cryptography, or to submit such work only to the relevant agencies (the NSA (National Security Agency) in the United States and GCHQ (Government Communications Headquarters) in the United Kingdom). By the end of the 1970s, however, cryptographers
were working openly outside government agencies. The US government in particular made various attempts to allow some commercial use of cryptography while preventing its spread from becoming too wide, and especially from interfering with the ability of the NSA and GCHQ to read communications. Notably, encryption algorithms and their implementation as software required an armaments export licence for many years. Such licences, which had to be granted for each individual item, were not a suitable mechanism for commercial software such as the Lotus Notes suite of office communications tools, which relied on commercial retail sales. These export licences were removed in 1996, although some restrictions still remain: a national security review is required for commercial products to be sold overseas, and US organizations and citizens releasing open source software with strong encryption keys (of 64 bits or longer) must notify the Department of Commerce's Bureau of Industry and Security.
Strong cryptographic protocols underpin users' confidence in ecommerce (Thanh, 2000), and yet for decades the NSA and GCHQ strove to prevent independent discovery or dissemination of such systems. Was this justified in terms of the national security benefits?
Although the Snowden materials have turned this debate into a major political issue, it has been building for many years. In 2000 in the United Kingdom, a submission by the National Crime and Intelligence Service (NCIS) to the Home Office suggested that communications service providers should be required to keep communications metadata (data about who contacted whom and when, but not the exact contents of communications) on all forms of electronic communications (Gaspar, 2000). This proposal generated significant opposition and was publicly rejected by ministers at the time (McCarthy, 2000). However, those ministers and the government of which they were a part later not only endorsed the NCIS requests for the United Kingdom but pushed through a European Directive with significantly broader access justifications. In addition to the provisions of the USA-PATRIOT Act, which provided
US government agencies with sweeping new powers of surveillance and lim-
ited oversight, the much less well-known Foreign Intelligence Surveillance Act of
1978 (FISA) had already provided for close to blanket authorization for US gov-
ernment surveillance of non-US citizens’/residents’ communications (Wilson,
2013). It became clear by 2005 that the administration of George W. Bush had
been violating even the restrictions in FISA and other US laws against target-
ing US citizens and residents with blanket surveillance (Risen and Lichtblau,
2005).
It is claimed by senior members of the intelligence services that revela-
tions about their surveillance capabilities and practices undermine their role
in protecting national security. For example, the head of the United Kingdom’s
‘MI5: The Security Service’ (https://www.mi5.gov.uk/) said in October 2013 that
‘[i]t causes enormous damage to make public the reach and limits of GCHQ
techniques. Such information hands the advantage to the terrorists’ (Hopkins,
2013). Huhne (2013) claimed that even the body of elected representatives who
are supposed to oversee the work of these security agencies had been kept in
the dark about the extent of the capabilities and their actual use. In the United
States, Senator Wyden (D-OR) has been pushing for better disclosure both to
the US legislature and the US public about the capabilities and practices of the
NSA (Epps, 2013). It is clear that this debate about the authority, the capability,
the practices and the public knowledge of security services mass surveillance
programmes will continue for a significant period of time.
Conclusions
The appearance, in privately held databases, of information that could only have come from police intelligence files undermines the claims that information gathered by intelligence and law enforcement is strictly controlled and never passed to non-state actors. The role, limits, control and visibility of (particularly blanket) state surveillance of the populace are
some of the key emerging questions of political theory in the early 21st cen-
tury. It also links to the meta-ethical question of security ethics: how much can we trust those who make decisions about security policy, and those who implement it? Should we require all those enforcing or making the
law to be as above reproach as Caesar’s wife, or can individuals be reformed and
trusted, having once broken society’s rules? How far can we judge current char-
acter based on only the negative aspects of prior actions? What are the limits of
our trust in those who demand transparency from everyone else, but demand
opacity in their own operations?
It is clear that information is now easier to disseminate than ever before,
although states and powerful private organizations are still seeking to close
down certain types of information transmission. Ethical security researchers
struggle with the questions of balance between revealing too much and help-
ing bad actors and revealing too little and preventing honest people from
protecting themselves.
Security is always connected with issues of power. When properly applied, security balances the benefits to all against the costs to some. Without ethical guidance, security can too often end up costing too much while delivering too little.
Recommended readings
Steven Levy has written extensively about the people who have developed the technology that has shaped the 21st century. His Crypto:
Secrecy and Privacy in the New Code War provides a good explanation of the
basics of modern cryptography and its importance in areas such as e-commerce
and e-banking, alongside the story of the people who developed usable public
key cryptography approaches and their battles with the NSA over publica-
tion of their mathematics and dissemination of implementations in computer
programs.
Bruce H. Kobayashi provides a solid and accessible analysis of the differences
between individual and social evaluations of risks, threats and security actions
and their consequences in the area of computer security in Private Versus Social
Incentives in Cybersecurity: Law and Economics. His approach can equally well be
applied to the world of physical security questions. Beatrice von Silva-Tarouca
Larsen tackled in depth the ethical questions surrounding the ongoing growth
in deployment of CCTV cameras in Setting the Watch: Privacy and the Ethics of
CCTV Surveillance, concluding that there is far too much deployment of CCTV
systems with both dubious ethical justifications and a clear lack of valid cost–
benefit analysis.
Bruce Schneier, called a 'security guru' by The Economist, tackled in Liars and Outliers the broad question of the necessary role played by trust in maintaining a civilization, in particular the issue of taking the risk of trusting a
stranger. This is a brilliant exposition of not only the nature and necessity
of trust and how to deal with its betrayals, but also the utility of those who
break society’s rules. A must-read for anyone interested in how to design secu-
rity policies which maintain maximum individual liberty and dignity while
offering minimal opportunities for bad actors to exploit the trust of their
peers.
References
Abrahams, J. (2013). What Can’t You Do with a Criminal Record? Prospect Maga-
zine. May 22nd, http://www.prospectmagazine.co.uk/magazine/criminal-record-chris
-huhne-vicky-pryce.
BBC (2013). Car Key Immobiliser Hack Revelations Blocked by UK Court. July 29th, http://
www.bbc.co.uk/news/technology-23487928.
Blaze, M. (2003). Rights Amplification in Master-Keyed Mechanical Locks. IEEE Security
and Privacy, 1(2), 24–32.
Blumstein, A. and Nakamura, K. (2009). Redemption in the Presence of Widespread
Criminal Background Checks. Criminology, 47(2), 327–359.
Clark, R. (1994). Computer Related Crime in Ireland. European Journal of Crime, Criminal
Law and Criminal Justice, 2(2), 252–277.
Clinard, M.B., Quinney, R. and Wildeman, J. (2010). Criminal Behavior Systems: A Typology.
Cincinnati, OH: Anderson.
Commission on Peace Officer Standards and Training (California) (n.d.). General Questions.
https://www.post.ca.gov/general-questions.aspx.
Epps, G. (2013). Ron Wyden: The Lonely Hero of the Battle Against the Surveillance State.
The Atlantic Online. http://www.theatlantic.com/politics/archive/2013/10/ron-wyden
-the-lonely-hero-of-the-battle-against-the-surveillance-state/280782/.
Freedman, B. (1978). A Meta-Ethics for Professional Morality. Ethics, 89(1), 1–19.
Freeman, R.B. (2003). Can We Close the Revolving Door?: Recidivism vs. Employment of
Ex-Offenders in the US. http://www.urban.org/publications/410857.html.
Gaspar, R. (2000). NCIS Submission on Communications Data Retention Law. http://
cryptome.org/ncis-carnivore.htm.
Gray, P. (2003). Mitnick Calls for Hackers’ War Stories. ZDNet.com. http://www.zdnet.
com/mitnick-calls-for-hackers-war-stories-3039118685/.
Hobbs, A.C. and Dodd, G. (1853). Rudimentary Treatise on the Construction of Locks.
London: J. Weale.
Hobbes, T. (1660). Leviathan: Or the Matter, Forme and Power of a Commonwealth Eccle-
siasticall and Civil. New Haven, CT: Yale University Press (Modern Edition Published
1960).
Hopkins, N. (2013). MI5 Chief: GCHQ Surveillance Plays Vital Role in Fight Against
Terrorism. Guardian, October 9. http://www.theguardian.com/uk-news/2013/oct/08/
gchq-surveillance-new-mi5-chief.
Huhne, C. (2013). Prism and Tempora: The Cabinet was Told Nothing of the Surveillance
State’s Excesses. Guardian, October 6. http://www.theguardian.com/commentisfree/
2013/oct/06/prism-tempora-cabinet-surveillance-state.
Jevons, W.S. (1874). Principles of Science. Macmillan & Co. https://www.archive.org/
stream/principlesofscie00jevorich#page/n165/mode/2up.
Kant, I. (1998). Critique of Pure Reason. Edited by P. Guyer and A.W. Wood. Cambridge:
Cambridge University Press.
Kerckhoffs, A. (1883). La Cryptographie Militaire. Journal des Sciences Militaires, IX, 5–38 (January) and 161–191 (February). http://www.petitcolas.net/fabien/kerckhoffs/.
Kobayashi, B.H. (2006). Private versus Social Incentives in Cybersecurity: Law and
Economics. Grady & Parisi, 1, 13–28.
The Law Reform Commission (Ireland) (1992). Report on the Law Relating to Dishonesty.
http://www.lawreform.ie/_fileupload/Reports/rDishonesty.htm.
Levy, S. (2002). Crypto: Secrecy and Privacy in the New Code War. London: Penguin.
Lockwood, S., Nally, J.M., Ho, T. and Knutson, K. (2012). The Effect of Correctional Education on Postrelease Employment and Recidivism: A 5-Year Follow-Up Study in the State of Indiana. Crime & Delinquency, 58(3), 380–396.
McCarthy, K. (2000). Govt Ministers Distance Themselves from Email Spy Plan. The Reg-
ister. December 5, http://www.theregister.co.uk/2000/12/05/govt_ministers_distance
_themselves.
Metropolitan Police (UK) (n.d.). Metropolitan Police Careers – Frequently Asked Questions.
http://www.metpolicecareers.co.uk/faq.html.
Mill, J.S. (1869). Utilitarianism. Edited by R. Crisp. Oxford: Oxford University Press (Modern Edition Published 1998). See http://ukcatalogue.oup.com/product/9780198751632.
Mill, J.S. (1869). On Liberty. http://www.bartleby.com/130.
Mitnick, K. and Simon, W.L. (2002). The Art of Deception. Indianapolis, IN: Wiley.
Munsche, P.B. (1981). The Gamekeeper and English Rural Society, 1660–1830. Journal of
British Studies, 20(2), 82–105.
Professional Security Magazine (2011). Pay Survey: Still A Minimum Wage Sector. Jan-
uary 26. http://www.professionalsecurity.co.uk/news/news-archive/pay-survey-still-a
-minimum-wage-sector.
Rasch, M. (2003). The Sad Tale of a Security Whistleblower. Security Focus. August 18.
http://www.securityfocus.com/columnists/179.
Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.
Risen, J. and Lichtblau, E. (2005). Bush Lets U.S. Spy on Callers Without Courts.
The New York Times. December 16. http://www.nytimes.com/2005/12/16/politics/
16program.html.
Rushby, J. (2001). Security Requirements Specifications: How and What. Symposium on
Requirements Engineering for Information Security (SREIS).
Schneier, B. (2000). Secrets and Lies: Digital Security in a Networked World. Wiley.
Schneier, B. (2003a). Locks and Full Disclosure. IEEE Security and Privacy, 1(2), 88.
Schneier, B. (2003b). Beyond Fear: Thinking Sensibly About Security in an Uncertain World.
Springer.
Schneier, B. (2012). Liars and Outliers: Enabling the Trust that Society Needs to Thrive. Wiley.
Singer, P. (1982). Practical Ethics. Cambridge: Cambridge University Press.
Thanh, D.V. (2000). Security Issues in Mobile Ecommerce, in Bauknecht, K., Madria, S.K. and Pernul, G. (eds.) Proceedings of the First International Conference on Electronic Commerce and Web Technologies. Springer-Verlag. pp. 467–476.
Uggen, C. (2000). Work as a Turning Point in the Life Course of Criminals: A Duration
Model of Age, Employment, and Recidivism. American Sociological Review, 529–546.
Wiener, N. (1954). The Human Use of Human Beings: Cybernetics and Society. Boston, MA:
Da Capo.
Wilson, C. (2013). A Guide to FISA §1881a: The Law Behind It All. Privacy International
Blog, June 13, https://www.privacyinternational.org/blog/a-guide-to-fisa-ss1881a-the
-law-behind-it-all.